As a grad student working on artificial intelligence, Mohamed Abdalla could probably walk into a number of well-paid industry jobs. Instead, he wants to draw attention to how Big Tech’s big bucks may be warping the perspective of his field.
Abdalla, who is finishing his PhD at the University of Toronto, has coauthored a paper highlighting the number of top AI researchers—including those who study the ethical challenges raised by the technology—who receive funding from tech companies. That can be a particular problem, he says, when corporate AI systems raise ethical issues, such as algorithmic bias, military use, or questions about the fairness and accuracy of face recognition programs.
Abdalla found that more than half of tenure-track AI faculty at four prominent universities who disclose their funding sources have received some sort of backing from Big Tech. Abdalla says he doesn’t believe any of those faculty are acting unethically, but he thinks their funding could bias their work—even unconsciously. He suggests universities introduce rules to raise awareness of potential conflicts of interest.
Industry funding for academic research is nothing new, of course. The flow of capital, ideas, and people between companies and universities is part of a vibrant innovation ecosystem. But large tech companies now wield unprecedented power, and the importance of cutting-edge AI algorithms to their businesses has led them to tap academia for talent.
Students with AI expertise can command large salaries at tech firms, but companies also back important research and young researchers with grants and fellowships. Many top AI professors have been lured away to tech companies or work part-time at those companies. Besides money, large companies can offer computational resources and data sets that most universities cannot match.
A paper published in July by researchers from the University of Rochester and China’s Cheung Kong Graduate School of Business found that Google, DeepMind, Amazon, and Microsoft hired 52 tenure-track professors between 2004 and 2018. It concluded that this “brain drain” has coincided with a drop in the number of students starting AI companies.
The growing reach and power of Big Tech prompted Abdalla to question how it influences his field in more subtle ways.
Together with his brother, also a graduate student, Abdalla looked at how many AI researchers at Stanford, MIT, UC Berkeley, and the University of Toronto have received funding from Big Tech over their careers.
The Abdallas examined the CVs of 135 computer science faculty who work on AI at the four schools, looking for indications that the researcher had received funding from one or more tech companies. For 52 of those, they couldn’t make a determination. Of the remaining 83 faculty, they found that 48, or 58 percent, had received funding such as a grant or a fellowship from one of 14 large technology companies: Alphabet, Amazon, Facebook, Microsoft, Apple, Nvidia, Intel, IBM, Huawei, Samsung, Uber, Alibaba, Element AI, or OpenAI. Among a smaller group of 33 faculty who work on AI ethics, the same share, 58 percent, had been funded by Big Tech. When any financial tie was counted, including dual appointments, internships, and sabbaticals, 32 of the 33, or 97 percent, had ties to tech companies. “There are very few people that don’t have some sort of connection to Big Tech,” Abdalla says.
Abdalla says industry funding is not necessarily compromising, but he worries that it might exert a subtle influence, perhaps discouraging researchers from pursuing certain projects or nudging them to endorse solutions proposed by tech companies. Provocatively, the Abdallas’ paper draws parallels between Big Tech funding for AI research and the way tobacco companies paid for research into the health effects of smoking in the 1950s.