I need to start writing more here again. I'm starting to feel limited by 120-character text boxes.
The NY Times has an interesting AI experiment on its website today - a basic rock-paper-scissors game in which the computer opponent learns based on the outcome of previous games. You can start either with a rookie computer opponent that has to learn from scratch, or a veteran opponent that can use all of the experience it has gained so far against other players. Here are my results (so far) against the veteran version:
The interesting thing about this, to me, is that I fared this poorly while intentionally trying to be unpredictable. In other words, humans are predictable in their unpredictability - we try to be unpredictable in predictable patterns that even a basic AI like this can identify. Imagine what a smarter AI could do. (Note that you might do better in the first 100 games or so, but I'm pretty confident that over time, anyone's results would look similar in percentage terms.)
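The Times hasn't published how its opponent actually works, but the core idea - spot patterns in what the human throws and counter them - can be sketched with a simple frequency model. Everything below (the class name, the "look at the last two throws" window) is my own guess at a minimal version, not the Times' implementation:

```python
import random
from collections import defaultdict

# What beats what: BEATS[x] is the throw that defeats x.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class PatternBot:
    """Predicts the human's next throw from what has historically
    followed each pair of recent throws, then plays the counter."""

    def __init__(self):
        # counts[(prev2, prev1)][next_throw] = times observed
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []  # the human's throws, in order

    def move(self):
        if len(self.history) < 2:
            return random.choice(list(BEATS))  # nothing learned yet
        key = (self.history[-2], self.history[-1])
        seen = self.counts[key]
        if not seen:
            return random.choice(list(BEATS))  # unseen context
        predicted = max(seen, key=seen.get)  # most common follow-up
        return BEATS[predicted]              # play whatever beats it

    def observe(self, human_throw):
        """Record the human's actual throw after we've moved."""
        if len(self.history) >= 2:
            key = (self.history[-2], self.history[-1])
            self.counts[key][human_throw] += 1
        self.history.append(human_throw)
```

Against a human who cycles rock, paper, scissors in order, a bot like this starts winning nearly every round after just a few observations - which is roughly the experience the Times game delivers once it has your number. Even "clever" anti-patterns (paper five times in a row) just become new contexts for it to memorize.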
Watson, IBM's Jeopardy-playing AI computer that we've all been talking about lately, was a major achievement in natural language processing. But it was not particularly adept at gaming; Jeopardy is not the kind of game where you need to search for patterns in gameplay and factor in the unpredictability of your human opponents. Most modern chess-playing computers aren't either; rather, they take a brute-force approach, searching enormous numbers of possible move sequences and playing the one that produces a positive result in the most scenarios. This has been made possible by increases in computing speed and power over the past couple of decades.
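A full chess engine is far too big to sketch here, but the brute-force idea - recursively try every legal move, see where each line ends up, and play the one that forces a win - fits in a few lines on a toy game. This is my own illustration using one-pile Nim (take 1-3 stones per turn, last stone wins), not anything IBM or a real chess engine uses:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones):
    """True if the player about to move can force a win by
    exhaustively searching every line of play to its conclusion."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # A position is winning if ANY legal move leaves the
    # opponent in a losing position.
    return any(not best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick a move that leaves the opponent losing, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and not best_outcome(stones - take):
            return take
    return 1  # every move loses; take one stone and hope
```

Note there's no pattern-learning here at all: the program never models its opponent, it just enumerates outcomes. (Nim's known theory confirms the search: any pile that's a multiple of 4 is lost for the player to move.)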
The Times' rock-paper-scissors game is a true "AI" in that it's learning from patterns it gleans from its human opponents. It's a harbinger of the future; for AI to be truly useful, it will need to learn what we want and adapt, without us needing to continuously program new sets of instructions into our Butler-bot 3000s every time our routine changes slightly. AI will need to learn human patterns on its own (in addition to natural language) to accomplish basic tasks without help, and without screwing things up.
I tried to beat the Times' computer by being human; that is, by doing things I thought the computer would find unpredictable. The problem is, other humans apparently thought the same way I did, and every "unpredictable" move I made (like choosing paper five times in a row) was easily thwarted by the computer's AI. If it ever comes down to it and we end up with a self-aware Skynet-like network that's out to get us, it's not going to be our "human-ness" that wins the war. We think like a collective even when we're trying not to, and we play right into the AI's hands when we try to be unpredictable. That "randomness" isn't really random, and the AI has already factored it in.
Ironically, when machines lose in movies and TV shows like the Terminator series, Battlestar Galactica and The Matrix, it's not because we've shown our human thinking to be superior. It's because the machines have started thinking like humans, failing to anticipate or even guard against the possibility of an "unpredictable" pattern that should have been thoroughly predictable to a machine designed to look for it. (Remember that Skynet in the Terminator films was created to identify and respond to enemy attacks. The Matrix franchise at least explicitly acknowledges the same plot hole with both the Oracle and Architect characters, but then moves forward with it anyway, because it wouldn't be very exciting to a human audience if we lost.) This will not happen in real life. Real computers don't suddenly have lapses in pattern recognition, or conveniently ignore or even forget logic trees. Those are human foibles.
In fact, the one mini-winning streak I went on in my rock-paper-scissors game came when I intentionally tried to think like a computer, analyzing both my own previous patterns and the patterns of the computer. I then tried to predict the computer's move based on what it probably thought my next move would be. Doing this, I was able to win about 50% of the time, for a little while. But I eventually regressed into my human ways, because I am a human (regardless of what you may have heard).
If we ever need to beat an advanced and malevolent computer AI, we are going to need to learn how to think like computers. Our humanity won't save us.
P.S. Of course, a computer could never come up with a scene like this. Rock flies right through paper! Nothing beats rock!