Industry’s Influence on AI Is Shaping the Technology’s Future—for Better and for Worse

The enormous potential of AI to reshape the future has seen massive investment from industry in recent years. But the growing influence of private companies in the basic research that is powering this emerging technology could have serious implications for how it develops, say researchers.

The question of whether machines could replicate the kind of intelligence seen in animals and humans is almost as old as the field of computer science itself. Industry’s engagement with this line of research has fluctuated over the decades, producing a series of “AI winters” as investment flowed in and then drained back out when the technology failed to live up to expectations.

The advent of deep learning at the turn of the previous decade, however, has resulted in one of the most sustained runs of interest and investment from private companies. This is now beginning to yield some truly game-changing AI products, but a new analysis in Science shows that it’s also leading to industry taking an increasingly dominant position in AI research.

This is a double-edged sword, say the authors. Industry brings with it money, computing resources, and vast amounts of data that have turbo-charged progress, but it is also refocusing the entire field on areas that are of interest to private companies rather than those with the greatest potential or benefit to humanity.

“Industry’s commercial motives push them to focus on topics that are profit-oriented. Often such incentives yield outcomes in line with the public interest, but not always,” the authors write. “Although these industry investments will benefit consumers, the accompanying research dominance should be a worry for policy-makers around the world because it means that public interest alternatives for important AI tools may become increasingly scarce.”

The authors show that industry’s footprint in AI research has increased dramatically in recent years. In 2000, only 22 percent of presentations at leading AI conferences featured one or more co-authors from private companies, but by 2020 that had hit 38 percent. But the impact is most clearly felt at the cutting edge of the field.

Progress in deep learning has to a large extent been driven by the development of ever larger models. In 2010, industry accounted for only 11 percent of the biggest AI models, but by 2021 that had hit 96 percent. This has coincided with growing dominance on key benchmarks in areas like image recognition and language modeling, where industry involvement in the leading model has grown from 62 percent in 2017 to 91 percent in 2020.

A key driver of this shift is the much larger investments the private sector is able to make compared to public bodies. Excluding defense spending, the US government allocated $1.5 billion to AI in 2021, compared with the $340 billion spent by industry worldwide that year.

That extra funding translates to far better resources—both in terms of computing power and data access—and the ability to attract the best talent. The size of AI models is strongly correlated with the amount of data and computing resources available, and in 2021 industry models were 29 times larger than academic ones on average.

And while in 2004 only 21 percent of computer science PhDs who had specialized in AI went into industry, by 2020 that had jumped to almost 70 percent. The rate at which AI experts have been hired away from universities by private companies has also increased eight-fold since 2006.

The authors point to OpenAI as a marker of the increasing difficulty of doing cutting-edge AI research without the financial resources of the private sector. In 2019, the organization transformed from a non-profit to a “capped for-profit organization” in order to “rapidly increase our investments in compute and talent,” the company said at the time.

This extra investment has had its perks, the authors note. It’s helped to bring AI technology out of the lab and into everyday products that can improve people’s lives. It’s also led to the development of a host of valuable tools used by industry and academia alike, such as software packages like TensorFlow and PyTorch and increasingly powerful computer chips tailored to AI workloads.

But it’s also pushing AI research to focus on areas with potential commercial benefits for its sponsors and, just as importantly, on data-hungry and computationally expensive AI approaches that dovetail nicely with the kinds of things big technology companies are already good at. As industry increasingly sets the direction of AI research, this could lead to the neglect of competing approaches to AI, as well as of socially beneficial applications with no clear profit motive.

“Given how broadly AI tools could be applied across society, such a situation would hand a small number of technology firms an enormous amount of power over the direction of society,” the authors note.

There are models for how the gap between the private and public sector could be closed, say the authors. The US has proposed the creation of a National AI Research Resource made up of a public research cloud and public datasets. China recently approved a “national computing power network system.” And Canada’s Advanced Research Computing platform has been running for almost a decade.

But without intervention from policymakers, the authors say that academics will likely be unable to properly interpret and critique industry models or offer public interest alternatives. Ensuring they have the capabilities to continue to shape the frontier of AI research should be a key priority for governments around the world.

Image Credit: DeepMind / Unsplash 

* This article was originally published at Singularity Hub