This Week’s Awesome Tech Stories From Around the Web (Through December 16)

ARTIFICIAL INTELLIGENCE

Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem
Will Douglas Heaven | MIT Technology Review
“In a paper published in Nature [on Thursday], the researchers say it is the first time a large language model has been used to discover a solution to a long-standing scientific puzzle—producing verifiable and valuable new information that did not previously exist. ‘It’s not in the training data—it wasn’t even known,’ says coauthor Pushmeet Kohli, vice president of research at Google DeepMind.”

COMPUTING

Supercomputer That Simulates Entire Human Brain Will Switch on in 2024
James Woodford | New Scientist
“Unlike an ordinary computer, its hardware chips are designed to implement spiking neural networks, which model the way synapses process information in the brain. Such neuromorphic computers, as they are known, have been built before, but DeepSouth will be the largest yet, capable of 228 trillion synaptic operations per second, which is on par with the estimated number of synaptic operations in a human brain.”
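For context, a spiking neuron communicates through discrete, event-driven pulses rather than the continuous activations of conventional neural networks. The snippet below is a purely illustrative leaky integrate-and-fire neuron in Python (a generic textbook model, not DeepSouth's hardware or any code from the project): the membrane potential accumulates input, leaks over time, and fires a spike when it crosses a threshold.

    # Illustrative leaky integrate-and-fire neuron; constants are arbitrary.
    def simulate_lif(input_currents, threshold=1.0, leak=0.9):
        potential = 0.0  # membrane potential
        spikes = []
        for current in input_currents:
            potential = leak * potential + current  # integrate input, with leak
            if potential >= threshold:              # threshold crossed: emit spike
                spikes.append(1)
                potential = 0.0                     # reset after spiking
            else:
                spikes.append(0)
        return spikes

    # A steady weak input yields sparse, event-driven output spikes.
    print(simulate_lif([0.3] * 10))

Neuromorphic chips implement this kind of event-driven behavior directly in hardware, which is part of why a machine like DeepSouth can sustain such large synaptic-operation counts.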

BIOTECH

In a World First, a Patient’s Antibody Cells Were Just Genetically Engineered
Emily Mullin | Wired
“[B cells] make a lot of antibodies—thousands of them every second. What if these antibody factories could be harnessed to make other things the body needs? That’s the idea behind a trial launched by Seattle-based biotech company Immusoft. The company announced today that its scientists have genetically programmed a patient’s B cells and put them back in his body in an effort to treat disease. It’s the first time engineered B cells have been tested in a person.”

TECH

Everybody’s Talking About Mistral, an Upstart French Challenger to OpenAI
Benj Edwards | Ars Technica
“Mistral, based in Paris and founded by Arthur Mensch, Guillaume Lample, and Timothée Lacroix, has seen a rapid rise in the AI space recently. It has been quickly raising venture capital to become a sort of French anti-OpenAI, championing smaller models with eye-catching performance. Most notably, some (but not all) of Mistral’s models run locally with open weights that can be downloaded and used with fewer restrictions than closed AI models from OpenAI, Anthropic, or Google.”

ARTIFICIAL INTELLIGENCE

OpenAI Demos a Control Method for Superintelligent AI
Eliza Strickland | IEEE Spectrum
“Earlier this year, OpenAI launched its superalignment program, an ambitious attempt to find technical means to control a superintelligent AI system, or ‘align’ it with human goals. …The biggest challenge for this project: ‘This is a future problem about future models that we don’t even know how to design, and certainly don’t have access to,’ says Collin Burns, a member of OpenAI’s superalignment team. ‘This makes it very tricky to study—but I think we also have no choice.’ The first preprint paper to come out from the superalignment team showcases one way the researchers tried to get around that constraint.”

ROBOTICS

Agility Is Using Large Language Models to Communicate With Its Humanoid Robots
Brian Heater | TechCrunch
“It’s become increasingly clear that these sorts of technologies are primed to revolutionize the way robots communicate, learn, look, and are programmed. Accordingly, a number of top universities, research labs, and companies are exploring the best methods for leveraging these artificial intelligence platforms. Well-funded Oregon-based startup Agility has been playing around with the tech for a while now using its bipedal robot, Digit.”

ROBOTICS

This New System Can Teach a Robot a Simple Household Task Within 20 Minutes
Rhiannon Williams | MIT Technology Review
“A new system that teaches robots a domestic task in around 20 minutes could help the field of robotics overcome one of its biggest challenges: a lack of training data. The open-source system, called Dobb-E, was trained using data collected from real homes. It can help to teach a robot how to open an air fryer, close a door, or straighten a cushion, among other tasks.”

ETHICS

Cheating Fears Over Chatbots Were Overblown, New Research Suggests
Natasha Singer | The New York Times
“Last December, as high school and college students began trying out a new AI chatbot called ChatGPT to manufacture writing assignments, fears of mass cheating spread across the United States. …But the alarm may have been overblown—at least in high schools. According to new research from Stanford University, the popularization of AI chatbots has not boosted overall cheating rates in schools.”

DIGITAL MEDIA

News Publishers See Google’s AI Search Tool as a Traffic-Destroying Nightmare
Keach Hagey, Miles Kruppa, and Alexandra Bruell | The Wall Street Journal
“Shortly after the launch of ChatGPT, the Atlantic drew up a list of the greatest threats to the 166-year-old publication from generative artificial intelligence. At the top: Google’s embrace of the technology. About 40% of the magazine’s web traffic comes from Google searches, which turn up links that users click on. …What was once a hypothetical threat is now a very real one. Since May, Google has been testing an AI product dubbed ‘Search Generative Experience’ on a group of roughly 10 million users, and has been vocal about its intention to bring it into the heart of its core search engine.”

Image Credit: Greg Rakozy / Unsplash 

* This article was originally published at Singularity Hub
