If you had to sum up what has made humans such a successful species, it’s teamwork. There’s growing evidence that getting AIs to work together could dramatically improve their capabilities too.
Despite the impressive performance of large language models, companies are still scrabbling for ways to put them to good use. Big tech companies are building AI smarts into a wide range of products, but none has yet found the killer application that will spur widespread adoption.
One promising use case garnering attention is the creation of AI agents to carry out tasks autonomously. The main problem is that LLMs remain error-prone, which makes it hard to trust them with complex, multi-step tasks.
But as with humans, it seems two heads are better than one. A growing body of research into “multi-agent systems” shows that getting chatbots to team up can help solve many of the technology’s weaknesses and allow them to tackle tasks out of reach for individual AIs.
The field got a significant boost last October when Microsoft researchers launched a new software library called AutoGen designed to simplify the process of building LLM teams. The package provides all the necessary tools to spin up multiple instances of LLM-powered agents and allow them to communicate with each other by way of natural language.
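The conversational pattern AutoGen automates can be sketched in plain Python. This is a toy simulation, not AutoGen's actual API: each "agent" here is an ordinary function standing in for an LLM call, and the agent names and canned replies are invented for illustration.

```python
# Toy sketch of the two-agent conversation loop that libraries like
# AutoGen automate. Each "agent" wraps a respond() callable standing
# in for an LLM; replies are canned for illustration.

class Agent:
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond  # callable: incoming message -> reply

def chat(a, b, opening, max_turns=6):
    """Alternate messages between two agents until one says TERMINATE."""
    transcript = [(a.name, opening)]
    speaker, listener, msg = b, a, opening
    for _ in range(max_turns):
        reply = speaker.respond(msg)
        transcript.append((speaker.name, reply))
        if "TERMINATE" in reply:
            break
        speaker, listener, msg = listener, speaker, reply
    return transcript

# Stand-in "LLMs": an assistant answers, a user proxy checks the answer.
assistant = Agent(
    "assistant",
    lambda m: "The answer is 4." if "2 + 2" in m else "Please clarify.",
)
user = Agent(
    "user_proxy",
    lambda m: "Thanks. TERMINATE" if "4" in m else "That seems wrong, retry.",
)

log = chat(user, assistant, "What is 2 + 2?")
for name, text in log:
    print(f"{name}: {text}")
```

In a real AutoGen setup the `respond` functions would be model calls, but the control flow, passing natural-language messages back and forth until a stop condition, is the same idea.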
Since then, researchers have carried out a host of promising demonstrations.
In a recent article, Wired highlighted several papers presented at a workshop at the International Conference on Learning Representations (ICLR) last month. The research showed that getting agents to collaborate could boost performance on math tasks—something LLMs tend to struggle with—and improve their reasoning and factual accuracy.
In another instance, noted by The Economist, three LLM-powered agents were set the task of defusing bombs in a series of virtual rooms. The AI team performed better than individual agents, and one of the agents even assumed a leadership role, ordering the other two around in a way that improved team efficiency.
Chi Wang, the Microsoft researcher leading the AutoGen project, told The Economist that the approach takes advantage of the fact most jobs can be split up into smaller tasks. Teams of LLMs can tackle these in parallel rather than churning through them sequentially, as an individual AI would have to do.
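The division of labor Wang describes can be sketched with ordinary Python concurrency. Here each "agent" is a placeholder function rather than an LLM call, and the documents being processed are invented for illustration; the point is simply that independent subtasks can run side by side instead of one after another.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for an LLM-powered agent handling one subtask of a larger
# job (summarizing one document). The documents are invented examples.
def summarize(doc):
    return f"summary of {doc}"

docs = ["doc A", "doc B", "doc C"]

# A single agent would churn through the list sequentially; a team of
# agents can tackle the independent subtasks in parallel.
with ThreadPoolExecutor(max_workers=len(docs)) as pool:
    results = list(pool.map(summarize, docs))

print(results)
```

With real model calls, which are dominated by network and inference latency rather than local compute, this kind of fan-out is where a team of agents can beat one agent working alone.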
So far, setting up multi-agent teams has been a complicated process only really accessible to AI researchers. But earlier this month, the Microsoft team released a new “low-code” interface for building AI teams called AutoGen Studio, which is accessible to non-experts.
The platform allows users to choose from a selection of preset AI agents with different characteristics. Alternatively, they can create their own by selecting which LLM powers the agent, giving it “skills” such as the ability to fetch information from other applications, and even writing short prompts that tell the agent how to behave.
So far, users of the platform have put AI teams to work on tasks like travel planning, market research, data extraction, and video generation, say the researchers.
The approach does have its limitations though. LLMs are expensive to run, so leaving several of them to natter away to each other for long stretches can quickly become unsustainable. And it's unclear whether groups of AIs will be more robust to mistakes or whether a single agent's error could instead cascade through the entire team.
Lots of work needs to be done on more prosaic challenges too, such as the best way to structure AI teams and how to distribute responsibilities between their members. There’s also the question of how to integrate these AI teams with existing human teams. Still, pooling AI resources is a promising idea that’s quickly picking up steam.
Image Credit: Mohamed Nohassi / Unsplash
* This article was originally published at Singularity Hub