Written by Marietje Schaake, International Policy Director at Stanford University Cyber Policy Center
Talking about AI risk is risky business these days. The camps are deeply entrenched, holding opposing views on how to assess risk, which risks are most urgent to address, and whom to trust when they speak about it. But the tribal nature of the debate should not be allowed to cloud intellectual clarity.
About a year ago, an open letter went around the world, turning heads and raising eyebrows with a stark warning. The call, now signed by almost 34,000 people, urged a pause or moratorium on the development of AI.[1] That lull would then give regulators time to step in and ensure audits for AI safety and independent oversight.
Two months later, major AI executives added their voices to the alarmist brigade, warning: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’[2]
A number of people pointed to the contradiction between such stark words and the fact that signatories such as Elon Musk and Sam Altman could have single-handedly paused their own companies’ race to deploy AI, but did not.
Others worried that the strongly worded letters would distract from the problems AI creates for real people in the here and now. Researchers including Emily Bender and Timnit Gebru pointed to the responsibility of the people who create AI, instead of treating the technology as a force of nature. They criticized ‘longtermists’ for dreaming of utopian AI and fearing an apocalypse, and urged a focus instead on the transparency and accountability of AI systems.[3] Significant research into AI applications indeed shows how systems continue to discriminate and help churn out disinformation or synthetically created child sexual exploitation material. One shared conclusion: experts across many disciplines are deeply concerned about the risks of AI.
Beyond the confrontational question of who is right about AI risk, having experts defend increasingly entrenched positions may well cloud scientific and intellectual judgment. How likely are any of those who took strong positions to openly admit to a new insight and change perspective? It is essential that researchers keep an open mind to outcomes they had not foreseen, especially as AI’s future trajectory is anything but certain.
One important step towards surfacing facts is independent research on AI risks, conducted by a consortium of researchers, in the hope of growing trust in the outcomes. Such researchers would also need sufficient data and compute, resources that private companies currently hoard to the detriment of the public interest. Similarly, researchers of all kinds, not only computer scientists but also social and behavioral scientists, must have better opportunities to investigate proprietary AI models. Scholars such as Sonia Katyal argue that trade secrecy laws have been used not just as market-competitive instruments but as tools for broad seclusion.[4] Such practices contribute neither to solid research nor to good governance. A precondition for good public policy is a public understanding of AI’s impact on societies, including not only the risks but the broader effects to grasp and anticipate.
[1] https://futureoflife.org/open-letter/pause-giant-ai-experiments/
[2] https://www.safe.ai/work/statement-on-ai-risk
[3] https://www.dair-institute.org/blog/letter-statement-March2023/
[4] https://www.law.georgetown.edu/georgetown-law-journal/wp-content/upload…
Science, technology and innovation can be catalysts for achieving the sustainable development goals.
In the context of the UN Commission on Science and Technology for Development, the CSTD Dialogue brings together leaders and experts to address this question and contribute to rigorous thinking on the opportunities and challenges of STI in several crucial areas including gender equality, food security and poverty reduction.
The conversation continues at the annual session of the Commission on Science and Technology for Development and as an online exchange by thought leaders.