Champion bridge player Sharon Osberg once wrote, “Playing bridge is like running a business. It’s about hunting, chasing, nuance, deception, reward, danger, cooperation and, on a good day, victory.”
While it’s little surprise chess fell to number-crunching supercomputers long ago, you’d expect humans to hold a more durable advantage in bridge, a game of incomplete information, cooperation, and sly communication. Over millions of years, our brains evolved to read subtle facial cues and body language. We’ve assembled sprawling societies dependent on the competition and cooperation of millions. Surely such skills are beyond the reach of machines?
For now, yes. But perhaps not forever. In recent years, the most advanced AI has begun encroaching on some of our most proudly held territory: the ability to navigate an uncertain world where information is limited, the game is infinitely nuanced, and no one succeeds alone.
Last week, French startup NukkAI took another step in that direction when its bridge-playing AI, NooK, outplayed eight bridge world champions in a competition held in Paris.
The game was simplified, and NooK didn’t exactly go head-to-head with the human players—more on that below—but the algorithm’s performance was otherwise spectacular. Notably, NooK is a kind of hybrid algorithm, combining symbolic (or rule-based) AI with today’s dominant deep learning approach. Also, in contrast to its purely deep learning peers, NooK is more transparent and can explain its actions.
“What we’ve seen represents a fundamentally important advance in the state of artificial intelligence systems,” Stephen Muggleton, a machine learning professor at Imperial College London, told The Guardian. In other words, not too bad for a cold, calculating computer.
Black Box, White Box
To play bridge, perhaps the most challenging card or board game yet tackled by AI, the NukkAI team combined deep reinforcement learning with symbolic AI, the rule-based approach famously used by IBM’s Deep Blue to defeat Garry Kasparov at chess in the 90s.
Deep reinforcement learning algorithms are made up of a network of interconnected artificial neurons. To learn a game, an algorithm plays itself billions of times, evaluates its performance after each round, and incrementally improves by tuning and retuning its neural connections until it finally masters play.
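The self-play loop described above can be sketched in miniature. This is not NooK’s code, and real systems tune a neural network rather than a lookup table; the toy “game” below (one action wins, the other loses) exists only to show the play–evaluate–adjust cycle:

```python
import random

def self_play_train(rounds=5000, lr=0.1, seed=0):
    """Toy self-play loop: play, evaluate, incrementally improve.
    In deep RL the table below is replaced by a neural network and
    the update by gradient descent over billions of games."""
    random.seed(seed)
    value = {0: 0.0, 1: 0.0}  # estimated value of each action
    for _ in range(rounds):
        # mostly pick the best-known action, sometimes explore at random
        if random.random() < 0.2:
            action = random.choice([0, 1])
        else:
            action = max(value, key=value.get)
        reward = 1.0 if action == 1 else -1.0  # evaluate the round
        # nudge the estimate toward the observed reward
        value[action] += lr * (reward - value[action])
    return value

values = self_play_train()
best = max(values, key=values.get)
```

After enough rounds the agent’s value estimates separate, and it reliably prefers the winning action, which is the whole trick of reinforcement learning writ very small.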
Symbolic AI, on the other hand, is rules-based. Software engineers hard code the rules the AI needs to know to succeed. These might be, for example, that a bishop can move diagonally any number of squares on a chess board, or that if an opponent pursues a particular strategy, then employing some counterstrategy increases the chances of winning. This approach is fine for the finite, but as the space of all possible moves rises in complex games, it becomes untenable.
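The bishop example above is easy to hand-code, which is exactly the point of symbolic AI. A minimal sketch (squares are hypothetical `(file, rank)` pairs from 0 to 7):

```python
def bishop_move_legal(src, dst):
    """Hard-coded symbolic rule: a bishop may move any number of
    squares along a diagonal. src and dst are (file, rank) tuples."""
    df = abs(src[0] - dst[0])
    dr = abs(src[1] - dst[1])
    # a diagonal move changes file and rank by the same nonzero amount
    return df == dr and df > 0

# c1 -> h6 is a legal diagonal; c1 -> c4 is not
```

Rules like this are transparent and exact, but as the article notes, enumerating them by hand becomes untenable once the space of possible moves explodes.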
That’s why the 2016 defeat of Go world champion Lee Sedol by DeepMind’s AlphaGo was a big deal. At the time, experts hadn’t expected AI to beat top Go players for a decade. AlphaGo showed the surprising power of deep learning compared to “good old-fashioned AI.”
But deep learning has its drawbacks. One of them is that it’s a “black box.” How the billions of connections in a neural network combine to achieve any given task is mysterious, even to the network’s creators.
AlphaGo’s Move 37 against Lee Sedol was a choice no human would make—it calculated the odds a professional would have chosen that move at 1 in 10,000—but it made the move anyway, and won. Still, the algorithm couldn’t explain what in its training informed its confidence. This opacity is a problem when the stakes are higher than a board game. To trust self-driving cars or medical algorithms making life-and-death decisions and diagnoses, we need to understand their rationale.
One potential solution, championed by researchers like NukkAI, would mash deep learning and symbolic AI together, exploiting each one’s strengths in what’s called a “neurosymbolic” approach.
NooK, for example, learns the rules of the game first, then improves its skills by playing. The combination refines the algorithm’s probabilistic “brain,” Muggleton told The Telegraph, taking it beyond statistics. NooK, he said, uses “background knowledge much in the way that we augment our own learning with information from books and previous experience.” As a result, the algorithm can explain decisions: It’s a “white box” AI.
This is why bridge—a game of communication and strategy that’s resisted conquest by AI—is a great test for the approach. “In bridge, you can’t play if you don’t explain,” NukkAI cofounder Véronique Ventos told The Guardian.
There are bridge-playing algorithms out there, but they don’t hold a candle to the best humans. After NukkAI’s Paris competition a little over a week ago, that may have changed.
Fun and Games
The NukkAI Challenge pitted NooK against eight bridge world champions.
Each champion played ten sets of ten games, while NooK played 80 sets of ten games, or 800 straight deals. Instead of playing each other, human and AI played the same hands against the same opponents, a pair of bridge bots (not built by NukkAI) called Wbridge5.
A game of bridge begins with players bidding on how many tricks, or rounds of play, they think they can win. The highest bid is called the contract, and whoever sets the contract is the declarer. The declarer’s partner, known as the dummy, lays their hand on the table face up and sits out the rest of the play. The declarer then plays both hands against their opponents, trying to win enough tricks to make their bid.
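The trick-taking at the heart of the game can be sketched in a few lines. This is a deliberately simplified model (no trump suit, no follow-suit enforcement), meant only to show how a trick is decided:

```python
# Ranks in ascending order; T is ten
RANKS = "23456789TJQKA"

def trick_winner(cards):
    """cards: four (rank, suit) tuples in play order.
    The highest card of the suit led wins the trick.
    Returns the index (0-3) of the winning player."""
    led_suit = cards[0][1]
    best = 0
    for i, (rank, suit) in enumerate(cards):
        # only cards matching the led suit can win this trick
        if suit == led_suit and RANKS.index(rank) > RANKS.index(cards[best][0]):
            best = i
    return best
```

So an ace discarded in the wrong suit loses to a modest card in the suit that was led, which is part of what makes declarer play a genuine planning problem.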
The NukkAI Challenge removed bidding to simplify play, and both the humans and NooK assumed the role of declarer in each game, with the bridge bot pair as opponents (or defenders). The difference between NooK’s score and each human player’s score was averaged over each set. NooK beat its rivals in 67, or 83 percent, of the 80 sets played.
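The scoring scheme above can be illustrated with made-up numbers. The real sets held ten deals each and used actual bridge scores; these figures are purely hypothetical, and the assumption that a positive average difference counts as a set win follows the article’s description:

```python
def sets_won_by_ai(ai_scores, human_scores):
    """ai_scores and human_scores are lists of sets, each set a list
    of per-deal scores on identical hands. The AI wins a set when its
    average score difference over that set is positive."""
    wins = 0
    for ai_set, human_set in zip(ai_scores, human_scores):
        diffs = [a - h for a, h in zip(ai_set, human_set)]
        if sum(diffs) / len(diffs) > 0:
            wins += 1
    return wins

# two made-up three-deal sets: the AI outscores the human in the first only
ai = [[420, 450, 400], [380, 400, 390]]
hu = [[400, 420, 410], [420, 430, 400]]
```

Applied to NukkAI’s 80 sets, this tally is how the reported 67 set wins were counted.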
“It’s pretty desperate for the humans,” French champion Thomas Bessis said. “There are just times that we don’t understand why the AI is playing better than us—but it is. It’s very frustrating.”
NooK’s victory is an impressive feat, but there are caveats. Skipping the bidding process and playing only the declarer role removed challenging and nuanced parts of the game in which partners must communicate with each other and deceive their opponents. It’s also hard for a human to stay focused for 100 straight hands, but not so for a computer. Finally, NukkAI cofounder Jean-Baptiste Fantun said he was confident the machine would prevail over thousands of deals, but he was less sanguine about its prospects over just 800. In other words, the more it plays, the better its odds of winning, so playing a lot of hands consecutively may have helped the AI edge out the humans in this case.
“So even in bridge, there are other things to be solved,” Fantun said. “We still have a roadmap in front of us.” That is, it’s too much to say bridge has fallen to AI, like chess or Go. But AI outscoring top human players in part of the game is a key milestone on Fantun’s map. And while ever-bigger AI algorithms, like OpenAI’s GPT-3, continue to impress, NukkAI’s performance in bridge may add weight to the argument for a hybrid approach.
Next, they’ll have to show NooK can play and win—no disclaimers needed.
Image Credit: T A T I A N A / Unsplash
* This article was originally published at Singularity Hub