This Week’s Awesome Tech Stories From Around the Web (Through March 30)


The Best Qubits for Quantum Computing Might Just Be Atoms
Philip Ball | Quanta
“In the search for the most scalable hardware to use for quantum computers, qubits made of individual atoms are having a breakout moment. …’We believe we can pack tens or even hundreds of thousands in a centimeter-scale device,’ [Mark Saffman, a physicist at the University of Wisconsin] said.”


AI Chatbots Are Improving at an Even Faster Rate Than Computer Chips
Chris Stokel-Walker | New Scientist
“Besiroglu and his colleagues analyzed the performance of 231 LLMs developed between 2012 and 2023 and found that, on average, the computing power required for subsequent versions of an LLM to hit a given benchmark halved every eight months. That is far faster than Moore’s law, a computing rule of thumb coined in 1965 that suggests the number of transistors on a chip, a measure of performance, doubles every 18 to 24 months.”
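The rate comparison in the quote is easy to make concrete. This is an illustrative sketch (not from the article): it assumes clean exponential trends and compares the gains implied by an 8-month halving of required compute against an 18-to-24-month transistor doubling over the same horizon.

```python
# Compare exponential improvement rates over a fixed horizon,
# assuming each trend is a clean exponential.

def fold_change(doubling_time_months: float, horizon_months: float) -> float:
    """Factor of improvement after horizon_months given a doubling time."""
    return 2 ** (horizon_months / doubling_time_months)

horizon = 48  # four years

# Compute needed to hit a benchmark halves every 8 months,
# which is equivalent to efficiency doubling every 8 months.
llm_efficiency = fold_change(8, horizon)
moore_fast = fold_change(18, horizon)  # optimistic Moore's law
moore_slow = fold_change(24, horizon)  # conservative Moore's law

print(f"LLM compute-efficiency gain over 4 years: {llm_efficiency:.0f}x")
print(f"Moore's law (18-month doubling):          {moore_fast:.1f}x")
print(f"Moore's law (24-month doubling):          {moore_slow:.1f}x")
```

Over four years, an 8-month doubling compounds to a 64x gain, versus roughly 4x to 6x for Moore's law, which is the gap the researchers are pointing at.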


How AI Could Explode the Economy
Dylan Matthews | Vox
“Imagine everything humans have achieved since the days when we lived in caves: wheels, writing, bronze and iron smelting, pyramids and the Great Wall, ocean-traversing ships, mechanical reaping, railroads, telegraphy, electricity, photography, film, recorded music, laundry machines, television, the internet, cellphones. Now imagine accomplishing 10 times all that—in just a quarter century. This is a very, very, very strange world we’re contemplating. It’s strange enough that it’s fair to wonder whether it’s even possible.”


What’s Next for Generative Video
Will Douglas Heaven | MIT Technology Review
“The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long. Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. …As we continue to get to grips with what’s ahead—good and bad—here are four things to think about.”


Salt-Sized Sensors Mimic the Brain
Gwendolyn Rak | IEEE Spectrum
“To gain a better understanding of the brain, why not draw inspiration from it? At least, that’s what researchers at Brown University did, by building a wireless communications system that mimics the brain using an array of tiny silicon sensors, each the size of a grain of sand. The researchers hope that the technology could one day be used in implantable brain-machine interfaces to read brain activity.”


Understanding Humanoid Robots
Brian Heater | TechCrunch
“A lot of smart people have faith in the form factor and plenty of others remain skeptical. One thing I’m confident saying, however, is that whether or not future factories will be populated with humanoid robots on a meaningful scale, all of this work will amount to something. Even the most skeptical roboticists I’ve spoken to on the subject have pointed to the NASA model, where the race to land humans on the moon led to the invention of products we use on Earth to this day.”


Blazing Bits Transmitted 4.5 Million Times Faster Than Broadband
Michael Franco | New Atlas
“An international research team has sent an astounding amount of data at a nearly incomprehensible speed. It’s the fastest data transmission ever using a single optical fiber and shows just how speedy the process can get using current materials.”


How We’ll Reach a 1 Trillion Transistor GPU
Mark Liu and HS Philip Wong | IEEE Spectrum
“We forecast that within a decade a multichiplet GPU will have more than 1 trillion transistors. We’ll need to link all these chiplets together in a 3D stack, but fortunately, industry has been able to rapidly scale down the pitch of vertical interconnects, increasing the density of connections. And there is plenty of room for more. We see no reason why the interconnect density can’t grow by an order of magnitude, and even beyond.”


Astronomers Watch in Real Time as Epic Supernova Potentially Births a Black Hole
Isaac Schultz | Gizmodo
“‘Calculations of the circumstellar material emitted in the explosion, as well as this material’s density and mass before and after the supernova, create a discrepancy, which makes it very likely that the missing mass ended up in a black hole that was formed in the aftermath of the explosion—something that’s usually very hard to determine,’ said study co-author Ido Irani, a researcher at the Weizmann Institute.”


Large Language Models’ Emergent Abilities Are a Mirage
Stephen Ornes | Wired
“[In some tasks measured by the BIG-bench project, LLM] performance remained near zero for a while, then performance jumped. Other studies found similar leaps in ability. The authors described this as ‘breakthrough’ behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. …[But] a new paper by a trio of researchers at Stanford University posits that the sudden appearance of these abilities is just a consequence of the way researchers measure the LLM’s performance. The abilities, they argue, are neither unpredictable nor sudden.”
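The Stanford argument can be illustrated with a toy model. The sketch below is not from the paper: it assumes a hypothetical model whose per-token accuracy improves smoothly with scale, then scores it with an all-or-nothing exact-match metric on a multi-token answer. Because every token must be correct, the strict metric sits near zero and then appears to jump, even though the underlying ability improves gradually.

```python
# Toy illustration: smooth underlying improvement can look like a sudden
# "emergent" ability when measured with an all-or-nothing metric.

def per_token_accuracy(scale: float) -> float:
    # Hypothetical smooth improvement with model scale (arbitrary units).
    return min(0.99, 0.5 + 0.05 * scale)

def exact_match(scale: float, answer_len: int = 10) -> float:
    # All answer_len tokens must be right, so accuracy compounds.
    return per_token_accuracy(scale) ** answer_len

for scale in range(10):
    print(f"scale={scale}: per-token={per_token_accuracy(scale):.2f}  "
          f"exact-match={exact_match(scale):.3f}")
```

Per-token accuracy climbs linearly from 0.50 to 0.95, but exact-match rises from about 0.001 to about 0.6 over the same range, concentrated at the high end, which is how a continuous gain can register as a “breakthrough” on the benchmark.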

Image Credit: Aedrian / Unsplash

* This article was originally published at Singularity Hub
