This Week’s Awesome Tech Stories From Around the Web (Through April 15)


A New Approach to Computation Reimagines Artificial Intelligence
Anil Ananthaswamy | Quanta
“By imbuing enormous vectors with semantic meaning, we can get machines to reason more abstractly—and efficiently—than before. …This is the starting point for a radically different approach to computation known as hyperdimensional computing. The key is that each piece of information, such as the notion of a car, or its make, model or color, or all of it together, is represented as a single entity: a hyperdimensional vector.”
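The core trick the quote describes — representing a car's make, model, and color, or all of them together, as single high-dimensional vectors — can be sketched with the standard bind-and-bundle operations of hyperdimensional computing. This is a minimal illustration, not code from the article; the role names, fillers, and dimensionality below are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Role and filler vectors (names are purely illustrative)
MAKE, MODEL, COLOR = hv(), hv(), hv()
honda, civic, blue, red = hv(), hv(), hv(), hv()

# Bind each attribute to its role (elementwise multiply), then
# bundle the bound pairs into one record vector (sum + sign).
car = np.sign(MAKE * honda + MODEL * civic + COLOR * blue)

# Query: unbind the COLOR role. For bipolar vectors, multiplication
# is its own inverse, so the result is a noisy copy of `blue`.
probe = car * COLOR
candidates = {"blue": blue, "red": red, "civic": civic}
best = max(candidates, key=lambda k: cosine(probe, candidates[k]))
print(best)
```

Because the bound pairs are nearly orthogonal to one another, the unbound probe stays recognizably close to `blue` even though all three attributes share a single vector — which is the sense in which one hypervector can hold "all of it together."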


Bacteria Can Be Engineered to Fight Cancer in Mice. Human Trials Are Coming.
Jessica Hamzelou | MIT Technology Review
“There are trillions of microbes living in and on our bodies—and we might be able to modify them to help us treat diseases. Scientists have altered the genomes of some of these bacteria that live on skin, essentially engineering microbes that can prevent or treat cancer. It appears to work in mice, and human trials are in the cards.”


Relativity Space Is Moving on From the Terran 1 Rocket to Something Much Bigger
Eric Berger | Ars Technica
“Foremost among these changes is the plan to move directly into development of the Terran R rocket. In response to customer demand, Ellis said, this rocket is getting even bigger than before. A fully expendable version will now be able to lift a staggering 33.5 metric tons. This sets up Relativity to compete directly with the largest players in the global launch industry. ‘It’s a big, bold bet,’ Relativity Space Chief Executive Tim Ellis said in an interview. ‘But it’s actually a really obvious decision.’”


No, Fusion Energy Won’t Be ‘Limitless’
Gregory Barber | Wired
“…as the physics progresses, some are now beginning to explore the likely practical and economic limits on fusion. The early conclusion is that fusion energy ain’t going to be cheap—certainly not the cheapest source of electricity over the coming decades as more solar and wind come online. But fusion may still find its place, because the grid needs energy in different forms and at different times.”


That Famous Black Hole Just Got Bigger and Darker
Dennis Overbye | The New York Times
“‘We used machine learning to fill in the gaps,’ Dr. Medeiros said in an interview. Her team trained the neural network to recognize the black hole by feeding the AI simulations of all kinds of black holes consistent with Einstein’s equations. In the improved version, Dr. Medeiros said, the doughnut of doom—the visible radiation from matter falling into the hole—is thinner than in the original. And the empty spot in the doughnut’s center appears blacker and bigger, bolstering the idea that there really is a black hole there.”


OpenAI’s CEO Confirms the Company Isn’t Training GPT-5 and ‘Won’t for Some Time’
James Vincent | The Verge
“However, just because OpenAI is not working on GPT-5 doesn’t mean it’s not expanding the capabilities of GPT-4—or, as Altman was keen to stress, considering the safety implications of such work. ‘We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter,’ he said. Altman’s comments are interesting—though not necessarily because of what they reveal about OpenAI’s future plans. Instead, they highlight a significant challenge in the debate about AI safety: the difficulty of measuring and tracking progress.”


Ethereum’s Shanghai Update Opens a Rift in Crypto
Joel Khalili | Wired
“At 19:27 Eastern time on April 12, the Ethereum blockchain, home to the world’s second-most-popular cryptocurrency, ether, will finally sever its links to crypto mining. …By demonstrating that a large-scale blockchain can shift from one system to another, Shanghai will reignite a debate over whether the practice of mining that still supports bitcoin, the most widely traded cryptocurrency, is viable and sustainable.”


The Hacking of ChatGPT Is Just Getting Started
Matt Burgess | Wired
“The attacks are essentially a form of hacking—albeit unconventionally—using carefully crafted and refined sentences, rather than code, to exploit system weaknesses. While the attack types are largely being used to get around content filters, security researchers warn that the rush to roll out generative AI systems opens up the possibility of data being stolen and cybercriminals causing havoc across the web.”

Image Credit: Ambrose Chua / Unsplash

* This article was originally published at Singularity Hub