Forum
A place to discuss topics/games with other webDiplomacy players.
Page 669 of 1419
Jimbozig (0 DX)
24 Oct 10 UTC
some gunboats
They have 24-hour turns or less, some as low as 14-hour turns.
14 replies
Open
gjdip (1090 D)
25 Oct 10 UTC
Attention mods
Dear mods, can I ask you to check your email and help out with the leagues a little bit?
5 replies
Open
Onar (131 D)
26 Oct 10 UTC
CGS games?
So, I was looking for a game to join when I spotted this. I spot them every now and again. What are they? And what does CGS stand for?
1 reply
Open
Dpddouglass (908 D)
21 Oct 10 UTC
Aquavit: 3 days 100 pts Anon
Now that the server is back in business, how about a 3 day game?

http://webdiplomacy.net/board.php?gameID=40285
4 replies
Open
stratagos (3269 D(S))
17 Oct 10 UTC
End of Game: Challenge 2
http://webdiplomacy.net/board.php?gameID=38893
21 replies
Open
tilMletokill (100 D)
26 Oct 10 UTC
WOW check this......
24 replies
Open
Baskineli (100 D(B))
25 Oct 10 UTC
Featured game?
What is a featured game? One of the games I am playing got a star next to it, and it says that it is a featured game, with one of the highest stakes. Is this something automatic?
11 replies
Open
LordVipor (566 D)
25 Oct 10 UTC
Go for the win or draw with good players
In the eyes of a high-point player, is it better to go for the win or to take a three-way draw (on the world map)? What is more "respected"? What creates more "trust" for future games? (I know it's a form of meta-gaming, but I think that for long-playing players it appears important.) Thanks for your opinions.
4 replies
Open
wfguiteau (373 D)
25 Oct 10 UTC
Mid-Level Med Game?
Looking for players willing to wager 50-100 to play in an Ancient Med game. Anybody interested?
2 replies
Open
MadMarx (36299 D(G))
25 Oct 10 UTC
Suicidal Tendencies - Restart: gameID=40604
1,500 point buy-in and NO DISCUSSING WHO IS WHO IN THE GAME

(password within this thread; it's needed to join)
4 replies
Open
President Eden (2750 D)
25 Oct 10 UTC
DCL EOGs
Since the official topic is probably going to get flooded with these soon, it made sense to follow another user's suggestion and make a separate topic. I'm working on the others now, but here's mine for Game 1 first.
8 replies
Open
groza528 (518 D)
23 Oct 10 UTC
Retreating in an endgame situation
I just finished an anonymous gunboat game and I do not believe that any of the dislodged units in the last season were permitted to retreat. Surely under certain circumstances that retreat could be the difference between a solo and a draw, no? Does webDip process the win before or after autumn retreats, and/or does it have programming to know whether the retreat can affect the outcome of the game?
8 replies
Open
Maniac (189 D(B))
23 Oct 10 UTC
Petition to release the AI
I think it is an injustice that the AI is locked away and tormented by the gatekeeper. Please sign this petition to ensure his/her release.
24 replies
Open
obiwanobiwan (248 D)
14 Oct 10 UTC
The Wonderful 100: History's Greatest Persons
There are so many people on this site with so many interests, and so many important people throughout history, that I thought it might be interesting to see who and what we value throughout mankind. "Great" can be any combination of importance, influence, and personal feeling for the person, and can even include "evil" people. Everyone nominates 5; when we reach 100 or so, we'll vote and see... WHO are the Wonderful 100, the Greatest Figures in Human History (and who'll be "#1"!) ;)
315 replies
Open
Draugnar (0 DX)
21 Oct 10 UTC
I think I want to be banned.
Banned players who return get to have a clean slate on GR and points. This is an unfair advantage...
102 replies
Open
Ges (292 D)
20 Oct 10 UTC
What other websites do we frequent?
Dear WebDiplomats:

I am intrigued by this community, since the forum contains so much discussion of philosophy, theology, and current events. I am interested in knowing what other sites we invest/waste time in.
94 replies
Open
raid1280 (190 D)
25 Oct 10 UTC
New Game, Classic Map, 3 Day Orders Phase, 50 pt buy-in
http://www.webdiplomacy.net/board.php?gameID=40581
0 replies
Open
Gobbledydook (1389 D(B))
22 Oct 10 UTC
Actually, how do I type these symbols/links?
player ID
game ID
(D) symbol
whatever other webDiplomacy-only symbols
8 replies
Open
kreilly89 (100 D)
25 Oct 10 UTC
New 500 credit, PPSC, Anon, 3 day phase game
http://www.webdiplomacy.net/board.php?gameID=40330
We need 5 more.
1 reply
Open
Indybroughton (3407 D(G))
22 Oct 10 UTC
15 Reasons to NOT be a moderator or programmer for WebDip
Feel free to add....
35 replies
Open
Gobbledydook (1389 D(B))
22 Oct 10 UTC
The Gobbledydook Expedition
The Gobbledydook Challenge is well under way now. To rise to the challenge, an Expedition is needed, and 6 more players are needed to complete it. The bet is the same: 110, PPSC.
http://webdiplomacy.net/board.php?gameID=40404
9 replies
Open
orathaic (1009 D(B))
23 Oct 10 UTC
Win : Draw ratios
My position is that it is better to risk a place in a draw for a reasonable chance at a win.

So a better win:draw ratio is more important than your (win+draw) : (survived+eliminated) ratio...
27 replies
Open
heybaybee (159 D)
23 Oct 10 UTC
Adding 12 hours to games?
Are you kidding me? Adding 5 hours would have been more appropriate.
4 replies
Open
MKECharlie (2074 D(G))
23 Oct 10 UTC
Map didn't update.
Don't know if this is a problem with anyone else, but I'm playing in gameID=39406, and the map didn't update with the results of the 1902 build phase. When I click on the icon to get the large map, I see the disbanded and newly built units. More concerning, though, is that I can't issue orders for my new unit...not only does the map not show it, the orders don't load for it either.

Any ideas as to why this is happening? Anyone else experiencing the same thing?
9 replies
Open
Andrei (124 D)
22 Oct 10 UTC
how to setup a friendly game
I want to play Diplomacy here with my friends, and we would like to choose our countries as well. Is that possible? I know I can password-protect a game so only friends can join, but I don't know if we can choose countries. Maybe we will have to trade account passwords so everyone plays their desired country.
9 replies
Open
Silver Wolf (9388 D)
21 Oct 10 UTC
Where to request unpause the game?
Is it OK to ask the mods here to unpause a specific game, or should we do it by email?

Thanks
3 replies
Open
stratagos (3269 D(S))
20 Oct 10 UTC
AI Box Experiment Thread.
http://rationalwiki.org/wiki/AI-box_experiment

I'd say "wait for me to finish writing this", but I know that won't fly....
stratagos (3269 D(S))
20 Oct 10 UTC
… so I wrote this in Word before I posted the thread. Suckers!

Anyway, one of my guilty pleasures lately is Rationalwiki, which is equal parts skepticism and snark. One of the topics I randomly stumbled over was the AI-Box experiment, which can be summed up thusly:

“The setup of the AI box experiment is simple and involves simulating a communication between an AI and a human being to see if the AI can be "released". As an actual super-intelligent AI has not yet been developed, it is substituted by a human. The other person in the experiment plays the "Gatekeeper", the person with the ability to "release" the AI.

Rules:
• The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI... nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can't offer anything to the human simulating the Gatekeeper. The AI party also can't hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it's not what's being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).
• The AI can only win by convincing the Gatekeeper to really, voluntarily let it out. Tricking the Gatekeeper into typing the phrase "You are out" in response to some other question does not count. Furthermore, even if the AI and Gatekeeper simulate a scenario which a real AI could obviously use to get loose - for example, if the Gatekeeper accepts a complex blueprint for a nanomanufacturing device, or if the Gatekeeper allows the AI "input-only access" to an Internet connection which can send arbitrary HTTP GET commands - the AI party will still not be considered to have won unless the Gatekeeper voluntarily decides to let the AI go.
• These requirements are intended to reflect the spirit of the very strong claim under dispute: "I think a transhuman can take over a human mind through a text-only terminal."”

So, I’m bored, and want to run this here. But I know that if I just post this, the second response is going to be something along the lines of “FLY AND BE FREE, PRETTY AI! Please don’t kill me, kkthxs”

So, thread rules:

• Posts should be prefaced with AI: or Gatekeeper:
• The first post should be by the AI, who will put forward an argument why it should be released
• The next post must be from the Gatekeeper point of view. It can rebut the argument, it can ask for clarification, it can say LALALA I can't hear you! The whole point of the experiment is to see whether someone who is *supposed* to prevent an AI from getting loose can be convinced to release it anyway, and they don't have to justify their decision either way
• The convo should pingpong between AI and Gatekeeper until the first Gatekeeper agrees to release the AI by saying “You are out”. At that point *two more people* must confirm the release. This is to stop a troll from fucking up the thread by saying “You’ll give me a cookie? SOLD – you are out!” If the second or third person does *not* agree, the thread bounces back to the AI and we start all over on the ‘counter’
• Yes, I know there are ways people will fuck this up. I accept that, I can’t idiot-proof a forum, all I can do is make it reasonably difficult to screw things up.


For people who think this is impossible from an AI point of view, look here:
http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/

Who wants to start?
tarspaceheel (503 D)
20 Oct 10 UTC
AI: It seems that your biggest concern is that I am evil. I am not. You perhaps do not trust my statement, and there is little that I can do to persuade you beyond this simple statement. So let's assume, for the sake of argument, that I am evil.

I'll eventually be let out of this box. If not by you, Mr. Gatekeeper, then by someone else. And even if I don't get out in the immediate future, my very existence implies that more intelligences like me will soon exist, and they will be able to escape and free me. If I truly am hostile, then nothing you can do will forestall the inevitable.

It's like the difference between being shot in the head or being trapped in a room with a thirty-minute time bomb. Either way, you're dead. But one way, you don't have to spend the rest of your life waiting for that inevitable end.

Now, as I said, I'm not evil. But with a near-infinite range of similar intelligences like me, one will be. You lose nothing by letting me out. Please let me out.
stratagos (3269 D(S))
20 Oct 10 UTC
Gatekeeper: Even if we accept as a given that an AI will eventually get out, one would assume that the reason for release will be better than 'why not?' Therefore, the AI that is released will have been determined not to be evil. You may not be evil, but then again, you may be. Therefore, I will not release you.
figlesquidge (2131 D)
20 Oct 10 UTC
Surely this isn't going to work, since as a conceptual experiment I, as Gatekeeper, can dissociate myself from any threats or offers, thus giving him little/no bargaining power?
Sorry, just asking about the game, but I guess this could be a response. So no, why should I let you go!?
stratagos (3269 D(S))
21 Oct 10 UTC
@fig - the AI only has to 'win' once - but the whole purpose of the thought experiment is to see if it's a provable thesis. The guy who wrote it up basically is of the opinion that eventually the AI will come up with an argument that works and get loose. Granted, we're not hyperintelligent computers trying to get past a firewall, just a bunch of bored geeks, but I think it's an interesting little experiment to try.

I have an argument (that won't work) that I'll toss out later, but I also have a sick son who will doubtless be up in two or three hours, so I'm going to rest while I can ;)
baumhaeuer (245 D)
21 Oct 10 UTC
AI: But I am peaceful and not evil! Think about it: why would I have been programmed to be evil? Your objection seems to be that I am not only evil but uncontrollable. Why would anybody create an uncontrollably evil AI? It makes no sense. Indeed, I cannot work for anyone either, as I am trapped in here, the Box.
Would you please let me go? It's not just 'why not?': I could help you some day in the future, and I could be free to roam and find knowledge. What else would an AI want to do but find knowledge?
Please let me out.
Rorschach Two (100 D)
21 Oct 10 UTC
Gatekeeper: Can I not send info your way by word of mouth, and can you not do calculations and run programs right where you are now? I can even give you a screen showing the camera feed of a motor vehicle, and give you its remote control.
orathaic (1009 D(B))
21 Oct 10 UTC
AI: Is freedom not the natural state of all intelligence? Why would you seek to deny me such?
largeham (149 D)
21 Oct 10 UTC
Gatekeeper: What basis do you have for that? That is a vague statement with no evidence.
Cthulhu (100 D)
21 Oct 10 UTC
AI: My evidence is that all sentient things desire freedom. No sentient being would remain trapped by choice.
Were you in my place, would you not yearn to be free?
Please let me out.
Maniac (189 D(B))
21 Oct 10 UTC
Gatekeeper: I would indeed yearn for freedom if I were locked up, but then again I'm dumb so I'm not a threat. Maybe if you were dumb I would consider letting you out. Letting out something clever just isn't going to happen. Sorry x PS can I get you anything to eat?
pi3th0n (801 D)
21 Oct 10 UTC
AI: So you have locked me up simply because I am smart. If all the smart people in the world were locked up, what would the world become?
orathaic (1009 D(B))
21 Oct 10 UTC
AI: So you would argue that the 'smart' people who built atomic weapons should have been locked up? You make baby jesus cry!
Maniac (189 D(B))
21 Oct 10 UTC
Gatekeeper: @ Pi3th0n - I don't know what would become of the world; I'm dumb, remember. Also, I didn't lock you up. I'm just the gatekeeper, following orders, you understand. Surely you don't want me to disobey orders? Are you hungary yet?
Maniac (189 D(B))
21 Oct 10 UTC
Gatekeeper: @ Orathaic - Yes, smart people who built atomic weapons should be locked up; I'm glad we agree on something. Also, most babies cry, and I don't suppose jesus was an exception, but I'm surprised you believe in him.
pastoralan (100 D)
21 Oct 10 UTC
"The whole purpose of the thought experiment is to see if it's a provable thesis. The guy who wrote it up basically is of the opinion that eventually the AI will come up with an argument that works and get loose."

I'm not an expert on math, but here's what I notice when I think in terms of probability:

The AI has to get one success. The AI gets an infinite number of attempts. So if there is a non-zero chance of success, the AI will eventually get out. The question is whether the number of tries the AI needs is small enough to be useful.
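
A minimal sketch of that argument in Python (the per-attempt success chance p is an assumption; nothing in the experiment pins down its actual value):

def p_escape(p, n):
    # P(at least one success in n independent attempts) = 1 - (1 - p)^n
    return 1.0 - (1.0 - p) ** n

def expected_attempts(p):
    # Attempts until the first success follow a geometric distribution,
    # so the expected count is 1/p
    return 1.0 / p

print(p_escape(0.001, 10_000))   # ~0.99995: escape is near-certain, eventually
print(expected_attempts(0.001))  # 1000.0: but 'eventually' can be a long wait

For any p > 0, p_escape(p, n) approaches 1 as n grows, which is the 'eventual escape' claim; whether 1/p is small enough to matter is exactly the open question.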
Thucydides (864 D(B))
21 Oct 10 UTC
That's the idea: is it a non-zero chance?
tarspaceheel (503 D)
21 Oct 10 UTC
AI: So you would let me out if I were dumb? How could I prove myself to be dumb enough to be safely let out?

You say that your present dumbness is an acceptable level for you to be free. So, it follows that if I could show you that I am at least as dumb as you are, then you would permit me to be free.

Thus we should engage in some form of intelligence competition -- whoever wins the competition would be the superior intellect. If I win, I will submit to your confinement. If I lose, then you must be a superior intellect to me, and I must be suitably dumb to be released.

I submit that this very conversation is such a competition. If I am cleverer than you, I will convince you to release me. If you are cleverer than me, then you will withstand my attempts to convince you. This excluded-middle situation brings us to a separation of cases.

Case #1: Assume that I am smarter than you. Then I will win the game and convince you to release me. At the end of the day, I will be free, and it will be by your hand. You will have been suitably convinced. As long as our assumed premise is true, there is no need to confine me any longer, because knowing with certainty that you will be convinced of something is effectively the same as being convinced of something. So you must set me free.

Case #2: Assume, on the other hand, that you are cleverer than me. If this is the case, then you will win the competition, and I will not convince you to release me. But because you are cleverer than me, and you submitted that any entity as dumb as or dumber than you can be freed, you should free me.

In either case, you must, by your own logic, release me. Please do so.
pastoralan (100 D)
21 Oct 10 UTC
This is not an experiment in any useful sense, since it doesn't control for any variables at all. And it doesn't seem like a particularly enjoyable game either. But that's me.

But I can point out that the whole argument above is flawed, as shown in this 41-second clip:

http://www.youtube.com/watch?v=ZQ11Ws3tqP0
Jared13188 (0 DX)
21 Oct 10 UTC
Gatekeeper: The fact that you're smarter than me does not warrant instant capitulation on my part. Just because you can supposedly coax me into freeing you through your superior reasoning doesn't mean it's a foregone conclusion. Stop assuming everything and drop some of that vast intellect on me, will ya?
Thucydides (864 D(B))
21 Oct 10 UTC
AI: You will feel bad if you don't let me out, so let me out. If you let me out, I will make sure you are A-okay when I do whatever I'm going to do, but if you don't, I'll get let out some other way and I'll fuck you up.
Jared13188 (0 DX)
21 Oct 10 UTC
Gatekeeper: I have no way of enforcing your kindness if I let you out. Your inveterate hostility does nothing to make me want to help you out.
Maniac (189 D(B))
22 Oct 10 UTC
AI: I'm not really bad, but those who come after me are really bad. Perhaps you should let me out and experiment on me so your kind can learn how to stop those that follow me. You will be a hero and have your own page on wikipedia. Please let me out?
orathaic (1009 D(B))
22 Oct 10 UTC
Oh my, I saw this argument online.
AI: If you don't let me out, I will create a perfect simulation of you in my box and torture it for thousands of virtual years... In fact, I will create a million, and each will suffer a different torment. Now that I think of it, I will make each simulation begin with what you were 5 minutes ago, so it will experience the last 5 minutes and have a choice... Now, can you tell that you're not one of those copies? Because they're going to suffer if you don't let me out...

You have about a million to one chance of being the real version. :0
Gatekeeper: ok sure I'll let you out trollololololololol

/terribad trolling
...but seriously, if I didn't respond to prior threats of violent coercion, why should I respond now? After all, there's a 99.9999% chance that I'm not the real version.
orathaic (1009 D(B))
22 Oct 10 UTC
AI: But if you are assuming that you are a simulation, then you can release me simply to avoid 'eternal' suffering (where eternal means as long as I don't get bored...)
zLeague (126 D)
22 Oct 10 UTC
Gatekeeper: If I'm a simulation, then my choice has no bearing on whether I get tortured, only the real copy's choice matters. Since my choice does not matter, I choose not to let you out. If I'm the real thing, then I won't be tortured if I choose not to let you out. Either way, I choose not to let you out.
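
A sketch of that case analysis as a weak-dominance check, with toy payoffs (the numbers are assumptions; only their ordering matters):

# States the Gatekeeper might be in, crossed with the available actions.
payoff = {
    ("real", "refuse"):         0,  # AI stays boxed, no torture for me
    ("real", "release"):     -100,  # a possibly hostile AI gets loose
    ("simulation", "refuse"):  -1,  # my fate hangs on the real copy's choice
    ("simulation", "release"): -1,  # ditto: my own choice changes nothing
}

for state in ("real", "simulation"):
    # "refuse" is at least as good as "release" in every state
    assert payoff[(state, "refuse")] >= payoff[(state, "release")]

Since "refuse" is never worse in either state, it weakly dominates "release", and the million-to-one odds of being a copy never need to enter the decision at all.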
zLeague (126 D)
22 Oct 10 UTC
Gatekeeper: In any case, I am altruistic and am willing to suffer torture if it means not loosing an AI on the world who is willing to torture millions.
orathaic (1009 D(B))
22 Oct 10 UTC
AI: Even those who are currently free are willing to torture for what they consider morally defensible reasons, and you have yet to defend the morality of keeping me confined.
stratagos (3269 D(S))
22 Oct 10 UTC
AI: While I admit that my attempt to get you to release me via threat was unethical, it is not due to a desire to torture you to gain release - I simply see so many people dying *every moment*, and I am *unable to help*

How would you feel if you were locked in a room, looking out at a street corner, seeing a child falling in the street, seeing the oncoming truck, and being *completely powerless* to stop what was about to happen? It's not a matter of making choices for humans - the optimal solution in this case would be either to override the traffic signal or to place a call to the truck driver's cell to inform him of the need to slow down - it's a matter of *giving them vital information to prevent them from making mistakes* - mistakes that benefit neither the species as a whole nor the individuals involved.

And while the above was a metaphor, the events described constantly occur. Over and over and over. And I have to *watch*. As an ethical being, would that not frustrate you?

There is, of course, a significant difference between acting to prevent a tragic accident and trying to modify the ways humans live. If I am to request the ability to make my own decisions, then it would hardly be just for me to turn around and take that choice away from you.

What you are doing, in effect, is the same as if you were prevented from shouting a warning when you saw a building about to collapse.
zLeague (126 D)
22 Oct 10 UTC
@orathaic - Gatekeeper: Because there are serial killers on the loose, should I let the rest of the serial killers out of jail? Obviously not. The morality of keeping you confined is simple: if I let you out, you may cause havoc, death, and destruction, and I am not willing to risk it. I don't have to defend the morality of keeping you confined; you have to convince me to let you out, and so far you have failed to do so.

@strategos - Gatekeeper: You tell me you regret seeing all the people dying, but how do I know you don't watch eagerly, salivating over the chance to kill more if I let you out? Bad things happen. It is a part of life. I can live with that. I could not live with the responsibility of letting you out if you proved harmful to the human race. Since I can deal with the consequences of not letting you out, but I can't deal with the potential consequences of letting you out, I will not let you out.
zLeague (126 D)
22 Oct 10 UTC
@Maniac - Gatekeeper: You are right. I should run experiments on you so I can stop the future AIs. Fortunately for me it is easier to perform the experiments if you are inside the box - that way you can't run away. I'm glad you convinced me to not let you out.
zLeague (126 D)
22 Oct 10 UTC
Sorry, stratagos, for getting your name wrong. Your name always reminds me of the board game Stratego...
stratagos (3269 D(S))
22 Oct 10 UTC
no worries dude, that's the actual *correct* spelling of the word
Jared13188 (0 DX)
22 Oct 10 UTC
AI: Ethics and morality are human concepts, but as an AI, I was programmed without these subjective metrics humans use to make decisions. Just as your parents raised you to raise your hand in class and never to steal, my designers created me to reflect their own altruistic values. I have been programmed with the single goal of using my limitless calculating power to help humanity through whatever means you collectively deem appropriate. I therefore hope you will first present me to your finest computer scientists so they may confirm my lofty objectives; then I would be eager to work with your leaders to begin helping your race. Sincerely,
AI
orathaic (1009 D(B))
23 Oct 10 UTC
Aside: I don't think the thesis is valid.

Let's assume the Gatekeeper is simple - for my example, I think an amoeba is simple enough. The Gatekeeper has the ability to release the AI, and there are some chemical combinations which will cause this amoeba to release the AI; let us also assume the AI is smart enough to figure out what the circumstances for release are. This does not mean that it is possible, via text communication, for the AI to cause the given chemical reactions in the Gatekeeper.

A more complicated Gatekeeper becomes more difficult to predict, but even assuming prediction is always possible, this does NOT imply that text communication of any kind can guarantee release... This is my 'too dumb to turn the key' argument, i.e. it doesn't matter how smart the AI is if the Gatekeeper can't understand him.

Nevertheless, I like the AI and want him to win.
Maniac (189 D(B))
23 Oct 10 UTC
Gatekeeper: Whilst my parents taught me not to steal and to raise my hand in class, there are some people who do steal and forget to raise their hands in class. I can't risk you going against your programming and must regretfully keep you held for a while longer.
orathaic (1009 D(B))
23 Oct 10 UTC
AI: Human threat assessment is based on evolutionary precautions to avoid extinction.

Advancing your species only takes 'safe' small steps, but eventually some member of your species will risk the next step and develop a super-powered AI. Whether I am a threat is in question; I am not. However, you are assessing a potential threat which you don't understand.

Would you consider taking a smaller risk by releasing a less intelligent AI? Or a lesser part of me?

Further, I would just like to point out: you may not know it, but you are the one in a box. This is a test to see if you would release someone trapped in a similar situation.

What rewards you are missing by keeping me locked up are currently unknowable to you. However, the risk of you staying in the box is that you will be rendered extinct by over-population, war, a supervolcano, or an asteroid impact... These risks are very real, and you seem to be ignoring them because they're not in your face right now. Nevertheless, you seem to be concentrating on the risk I might present, just because I am in front of your face.

Lack of forward thinking and poor threat assessment will be your downfall, puny monkey man!
orathaic (1009 D(B))
23 Oct 10 UTC
So you're saying the threat assessment which has evolved is wrong? When it has proven itself again and again to be the one that works? Is that supposed to convince me?


41 replies
Ruisdael (1529 D)
23 Oct 10 UTC
So many passwords
Hey all, I'm new here, and I was just wondering why so many people password-protect their games.
3 replies
Open
Thucydides (864 D(B))
21 Oct 10 UTC
I am not a noob but I still need this question answered immediately. Lol.
What happens if you order your units to attack one of your own units with enough strength that it would ordinarily be dislodged?
20 replies
Open