Silicon Valley Is Reviving the Dream of General-Purpose Humanoid Robots

Robots are nothing new. They build our cars, vacuum our floors, prepare our e-commerce orders, and even help carry out surgeries. But now the sci-fi vision of a general-purpose humanoid robot seems to be edging closer.

While disembodied artificial intelligence has seen rapid improvements in performance in recent years, most robots are still relatively dumb. For the most part, they are used for highly specialized purposes, the environments they operate in are carefully controlled, and they are not particularly autonomous.

That’s because operating in the messy uncertainty of the real world remains difficult for current AI approaches. As impressive as the recent feats of large language models have been, they are dealing with a fairly limited palette of data types that are fed to them in predictable ways.

The real world is messy and multi-faceted. A general-purpose robot needs to integrate input from multiple data sources, understand how those inputs vary at different times of the day or in different kinds of weather, predict the behavior of everything from humans to pets to vehicles, and then sync this all up with the challenging tasks of locomotion and object manipulation.

That kind of flexibility has so far eluded AI. That’s why, despite billions of dollars of investment, companies like Waymo and Cruise are still struggling to roll out autonomous vehicles even in the more restricted domain of driving.

If company announcements are anything to go by, though, many in Silicon Valley think that’s about to change. The last few months have seen a flurry of announcements from companies touting autonomous humanoid robots that could soon take on a broad gamut of tasks that currently only humans can perform.

The most recent was Sanctuary’s announcement of its new Phoenix robot last week. The company has already shown that, when tele-operated by a human, its robots can carry out more than 100 tasks in a retail environment, such as packing merchandise, cleaning, and labeling products. The new robot is bipedal, stands five feet seven inches tall, has hands nearly as dexterous as a human’s, and is designed to eventually operate completely autonomously.

The company plans to get there in increments, according to IEEE Spectrum. Its first step is to record the motion of humans carrying out all kinds of activities and use that data to build better tele-operated robots. It will then gradually automate the most common sub-tasks while a human operator handles the most complex ones. Over time, the company hopes to automate more and more tasks until the operator is essentially just supervising and directing, with the ultimate goal of removing the operator completely.
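To make that staged approach concrete, here is a minimal sketch of a shared-autonomy loop of the kind described above. It is purely illustrative and not Sanctuary’s actual software; the names (`learned_policies`, `teleoperate`, `execute`, `log_demonstration`) and the confidence threshold are hypothetical stand-ins.

```python
# Illustrative shared-autonomy loop: sub-tasks with a trusted learned policy
# run autonomously; everything else falls back to the human tele-operator.
# All object names and methods here are hypothetical.

CONFIDENCE_THRESHOLD = 0.9  # only automate sub-tasks the policy is confident about

def run_task(sub_tasks, learned_policies, teleoperate, execute):
    for sub_task in sub_tasks:
        policy = learned_policies.get(sub_task.name)
        if policy is not None and policy.confidence(sub_task) >= CONFIDENCE_THRESHOLD:
            actions = policy.plan(sub_task)   # autonomous path
        else:
            actions = teleoperate(sub_task)   # human operator handles it
        execute(actions)
        # Demonstrations are logged so future policies can be trained on them,
        # gradually shrinking the share of work the operator has to do.
        sub_task.log_demonstration(actions)
```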

It seems that human workers training their robot replacements is a popular approach. A video released by Tesla last week showed off a bunch of new features for the latest version of its Optimus robot, including improved object manipulation, environment navigation, and fine motor control. But it also included footage of engineers wearing motion capture equipment to teach the robot how to complete various tasks.

Tesla’s robot still seemed fairly slow and wobbly compared to the slick demos we’ve become used to seeing from Boston Dynamics, the original humanoid robot company. But as impressive as those demos have become, Boston Dynamics has struggled to find commercial applications for its technology. Perhaps companies with a firmer sense of what industry and consumers actually need will have more luck making humanoid robots a reality.

In that vein, news of a secret robot project at Amazon also broke recently. The company has deployed robots in its warehouses for many years, but its first attempt at a domestic robot, called Astro, was something of a flop. Now, according to Insider, the tech giant is planning to use large language models (LLMs) to boost the capabilities of its next-generation helper bot.

Code-named Burnham, the device will supposedly take advantage of the emergent capabilities seen in the largest language models to improve things like conversational fluency, social awareness, and problem-solving ability.

Astro is still pretty much just a screen on wheels, so it won’t be fetching your morning coffee. But some of the potential applications Insider references include alerting the owner if the robot spots a stove left on unattended, helping find lost car keys, or monitoring whether the kids have friends over after school.
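As a rough illustration of how an LLM might sit in such a loop (this is a sketch of the general pattern, not a description of Amazon’s system; `call_llm` and the observation format are hypothetical), the robot could pass what its sensors report to a language model and act only on a structured response:

```python
# Purely illustrative: turning a home robot's observations into alerts via an LLM.
# The call_llm function and observation strings are hypothetical stand-ins.
import json

PROMPT = (
    "You are a home assistant robot. Given these observations, reply with "
    'JSON: {"alert": true/false, "message": "..."}.\n\nObservations:\n'
)

def check_for_alerts(observations, call_llm):
    response = call_llm(PROMPT + "\n".join(observations))
    try:
        decision = json.loads(response)
    except json.JSONDecodeError:
        return None  # ignore malformed model output rather than acting on it
    if decision.get("alert"):
        return decision.get("message")  # e.g. "The stove appears to be on and unattended."
    return None
```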

Amazon might not be the only one looking at how LLMs can push robotics forward. It was recently announced that ChatGPT creator OpenAI led a multi-million-dollar investment round in Norwegian company 1X, which is preparing to unveil a bipedal robot called NEO. While details were scant, it’s not hard to imagine that the AI leader is keen to find ways to interface its technology with the real world.

Perhaps the most intriguing of all the general-purpose robot companies, though, is Figure, which emerged from stealth in March. With a team made up of Boston Dynamics, Tesla, Cruise, and Apple veterans, and at least $100 million in funding, the company has ambitions of replacing human labor in everything from logistics to manufacturing and retail. So far, though, the company hasn’t released much detail about its humanoid Figure 01 robot, and the only images available have been computer renders rather than photographs of actual hardware.

This does seem par for the course. Heavily produced promotional videos and shiny computer-generated images are not a good marker of progress, so until these companies start sharing concrete demos in real-world contexts, it’s probably wise to reserve judgment. Nonetheless, there is a new sense of optimism that robots may soon be walking among us.

Image Credit: Sanctuary AI



* This article was originally published at Singularity Hub
