What does the history of AI tell us about its future?

But what computers have traditionally been bad at is strategy: the ability to think about the shape of a game many, many moves into the future. That was where humans still had the edge.
At least, that’s what Kasparov thought, until a move Deep Blue made in the second game unnerved him. It seemed so sophisticated that Kasparov began to worry: maybe the machine was far better than he had realized! Convinced he had no way to win, he resigned the second game.
But he shouldn’t have. Deep Blue, it turned out, wasn’t actually that good. Kasparov had simply failed to spot a move that would have let the game end in a draw. He was rattled: worried that the machine was far more powerful than it really was, he had begun to see humanlike reasoning where none existed.
Knocked out of his rhythm, Kasparov played worse and worse. He psyched himself out over and over again. Early in the winner-takes-all sixth game, he made a move so lousy that chess observers cried out in shock. “I wasn’t in the mood to play at all,” he later said at a press conference.
IBM capitalized on its success. In the media frenzy that followed Deep Blue’s win, the company’s market capitalization surged by $11.4 billion in a single week. Even more significant, though, was the sense that IBM’s triumph was a thaw in the long winter of AI. If chess could be conquered, what was next? The public’s imagination reeled.
“That,” Campbell tells me, “got people’s attention.”
In truth, it wasn’t surprising that a computer beat Kasparov. Most people who had been paying attention to AI, and to chess, expected it to happen eventually.
Chess may seem like the pinnacle of human thought, but it’s not. In fact, it’s a mental task quite amenable to brute-force computation: the rules are clear, there’s no hidden information, and a computer doesn’t even need to keep track of what happened in previous moves. It just evaluates the position of the pieces right now.
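Deep Blue itself relied on custom chess chips and a heavily engineered search, but the underlying brute-force principle fits in a few dozen lines. Below is a minimal, illustrative sketch in Python: an exhaustive minimax search over tic-tac-toe, standing in for chess purely to keep the game tree small. Because the rules are fixed, nothing is hidden, and history doesn’t matter, the program can simply try every continuation of the current position.

```python
# Toy illustration of brute-force game search: exhaustive minimax over
# tic-tac-toe. The same principle, scaled up with pruning, handcrafted
# evaluation functions, and custom hardware, is what let Deep Blue play chess.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for `player`; 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:                      # board full: a draw
        return 0, None
    best_score, best_move = None, None
    for m in moves:
        board[m] = player              # try the move...
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None                # ...then undo it
        if best_score is None or (
            (player == 'X' and score > best_score)
            or (player == 'O' and score < best_score)
        ):
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == "__main__":
    score, move = minimax([None] * 9, 'X')
    print(score, move)   # score 0: perfect play from an empty board is a draw
```

Chess needs all the extra machinery only because its game tree is astronomically larger; the logic is the same in kind.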
Everyone knew that once computers got fast enough, they would outmatch humans. It was just a question of when. By the mid-’90s, “in a sense the writing was already on the wall,” says Demis Hassabis, head of Alphabet’s AI firm DeepMind.
Deep Blue’s victory was the moment that showed just how limited hand-coded systems could be. IBM had spent years and millions of dollars developing its chess computer. But it couldn’t do anything else.
“It didn’t lead to the breakthroughs that allowed the [Deep Blue] AI to have a huge impact on the world,” Campbell says. They hadn’t really discovered any principles of intelligence, because the real world doesn’t resemble chess. “There are very few problems out there where, as in chess, you have all the information you could possibly need to make the right decision,” Campbell adds. “Most of the time there are unknowns. There’s randomness.”
But even as Deep Blue was mopping the floor with Kasparov, a handful of scrappy upstarts were tinkering with a radically more promising form of AI: the neural network.
The idea behind neural networks was not, as with expert systems, to patiently write rules for every decision an AI would make. Instead, training and feedback strengthen internal connections, in rough (it’s theorized) mimicry of how the human brain learns.
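A toy version of that mechanism fits in a short script. The network below (a hypothetical example, not code from any system in this story) learns the XOR function: each round of feedback nudges its connection weights, strengthening the ones that reduce the error.

```python
# Minimal neural network trained by gradient descent: no hand-written rules,
# just weighted connections adjusted by an error signal.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden connections
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output connections
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0                                         # learning rate
for step in range(5000):
    # Forward pass: signals flow through the weighted connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error signal says how each connection contributed.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Update: connections that reduce the error are strengthened.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # close to [0, 1, 1, 0] after training
```

Scale that idea up by many orders of magnitude, in data, layers, and connections, and you have the recipe behind modern deep learning.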
The idea has been around since the 1950s. But training a large neural network required lightning-fast computers, tons of memory, and loads of data. None of this was available at the time. Even in the 90s, neural networks were considered a waste of time.
“At the time, most people in AI thought neural networks were just bullshit,” says Geoffrey Hinton, a professor emeritus of computer science at the University of Toronto and a pioneer of the field. “I was called a ‘true believer,’” he adds. It was not a compliment.
But by the 2000s, the computer industry was evolving to make neural networks viable. Video gamers’ appetite for ever-better graphics had created a huge industry in ultrafast graphics processing units, which turned out to be perfectly suited to the math of neural networks. Meanwhile, the internet was exploding, producing a flood of images and text that could be used to train the systems.
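The fit between GPUs and neural networks comes down to a single operation: nearly all the arithmetic in a network layer is one big matrix multiplication, exactly the massively parallel math that graphics chips were designed to churn through for pixels and polygons. A schematic illustration, with arbitrary sizes:

```python
import numpy as np

# One neural-network layer, reduced to its essential arithmetic:
# a matrix multiplication followed by a cheap elementwise nonlinearity.
batch = np.random.randn(64, 1024)       # 64 inputs, 1,024 features each
weights = np.random.randn(1024, 4096)   # connections to 4,096 neurons

activations = np.maximum(batch @ weights, 0.0)  # matmul + ReLU

# That single `@` is roughly 270 million multiply-adds. GPUs, built to apply
# the same transform to millions of pixels in parallel, do exactly this kind
# of work orders of magnitude faster than a CPU core.
print(activations.shape)  # (64, 4096)
```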
By the early 2010s, these technical leaps had let Hinton and his band of true believers take neural networks to new heights. They could now build networks with many layers of neurons, which is what the “deep” in “deep learning” means. In 2012 his team handily won the annual ImageNet competition, in which AIs compete to recognize elements in images. It stunned the world of computer science: self-learning machines were finally viable.
In the decade since the deep-learning revolution began, neural networks and their pattern-recognizing abilities have colonized every corner of everyday life. They help Gmail autocomplete your sentences, help banks detect fraud, let photo apps automatically recognize faces, and, in the case of OpenAI’s GPT-3 and DeepMind’s Gopher, write long, human-sounding essays and summarize texts. They are even changing how science is done: in 2020, DeepMind debuted AlphaFold 2, an AI that can predict how proteins will fold, a superhuman skill that could help researchers develop new drugs and treatments.
Meanwhile, Deep Blue vanished, leaving no useful inventions in its wake. Playing chess, it turned out, wasn’t a computer skill that was needed in everyday life. “In the end, Deep Blue showed the shortcomings of trying to create everything by hand,” says DeepMind founder Hassabis.
IBM tried to remedy the situation with Watson, another specialized system, this one designed to tackle a more practical problem: getting a machine to answer questions. It used statistical analysis of massive amounts of text to achieve language comprehension that was, for its time, cutting-edge. It was more than a simple if-then system. But Watson faced unlucky timing: only a few years later it was eclipsed by the deep-learning revolution, which brought in a generation of language models far more nuanced than Watson’s statistical techniques.
Deep learning has surpassed old-school AI precisely because “pattern recognition is incredibly powerful,” says Daphne Koller, a former Stanford professor who founded and leads Insitro, which uses neural networks and other forms of machine learning to research new drugs. The flexibility of neural networks—the wide range of ways to recognize patterns—is why another AI winter hasn’t arrived. “Machine learning has really made a difference,” she says, in a way that “previous waves of exuberance” in AI never did.
The inverted fates of Deep Blue and neural networks show just how bad we were, for so long, at judging what’s hard, and what’s valuable, in AI.
For decades, people assumed that mastering chess would matter because, well, chess is hard for humans to play at a high level. But chess turned out to be fairly easy for computers to master, precisely because it’s so logical.
What has been far harder for computers to learn is the casual, unconscious mental work that humans do, like conducting a lively conversation, piloting a car through traffic, or reading a friend’s emotional state. We do these things so effortlessly that we rarely realize how tricky they are, and how much fuzzy, grayscale judgment they require. Deep learning’s great utility has come from being able to capture small bits of this subtle, unheralded human intelligence.
Still, there has been no final victory in artificial intelligence. Deep learning may be riding high right now, but it is also under fire.
“For a very long time, there was this techno-chauvinist enthusiasm that, okay, AI is going to solve every problem!” says Meredith Broussard, a programmer turned journalism professor at New York University and author of Artificial Unintelligence. But as she and other critics have pointed out, deep-learning systems are often trained on biased data, and they absorb those biases. The computer scientists Joy Buolamwini and Timnit Gebru discovered that three commercially available visual AI systems were terrible at analyzing the faces of darker-skinned women. Amazon trained an AI to vet résumés, only to find it downgraded women.
Though computer scientists and many AI engineers are now aware of these bias problems, they are not always sure how to deal with them. On top of that, neural networks are also “massive black boxes,” says Daniela Rus, a veteran of AI who currently runs MIT’s Computer Science and Artificial Intelligence Laboratory. Once a neural network is trained, its mechanics are not easily understood even by its creator. It is not clear how it comes to its conclusions, or how it will fail.
It may not be a problem, Rus figures, to rely on a black box for a task that isn’t “safety critical.” But what about a higher-stakes job, like autonomous driving? “It’s actually quite remarkable that we could put so much trust and faith in them,” she says.
This is where Deep Blue had an advantage. The old-school style of handcrafted rules may have been brittle, but it was comprehensible. The machine was complex, but it wasn’t a mystery.
Ironically, that old style of programming might stage something of a comeback as engineers and computer scientists grapple with the limits of pattern matching.
Language generators like OpenAI’s GPT-3 or DeepMind’s Gopher can take a few sentences you’ve written and keep going, producing page after page of plausible-sounding prose. But despite some impressive mimicry, Gopher “still doesn’t quite get what it’s talking about,” Hassabis says. “Not in the literal sense.”
Similarly, visual AI can make terrible mistakes when it encounters an edge case. Self-driving cars have slammed into fire trucks parked on highways, because in all the millions of hours of video they had been trained on, they had never encountered that situation. Neural networks have, in their own way, a version of the brittleness problem.