The interdisciplinary imperative of Artificial Intelligence, Ethics and Governance

30 April 2019

Written by Joi Ito, Director of the MIT Media Lab

This past semester I co-taught a course at Harvard Law School with Jonathan Zittrain about the ethics and governance of artificial intelligence. 

We gathered a diverse group of students studying law, policy, government, engineering, and education to discuss challenges in AI ethics and consider the costs and benefits of potential solutions to those challenges.

In ten class sessions, we investigated current research in three AI topic areas: fairness, interpretability, and adversarial examples. I came away with many new insights about AI and its governance, as well as about how we approach challenges in this space.

In Part I of the class, we attempted to define the topic area and to frame and understand some of the problems. 

We left with concerns about the reductionist, poorly defined, and oversimplified notions of fairness and explainability that run through much of the literature. 
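To make that concern concrete: the same set of predictions can satisfy one widely used fairness criterion while violating another. Below is a minimal Python sketch with made-up numbers (none of it from the course materials), comparing demographic parity (equal selection rates across groups) with equal opportunity (equal true positive rates).

```python
# Hypothetical illustration: one fairness metric holds while another fails.
# All numbers are made up for the example.
import numpy as np

y = np.array([1, 1, 0, 0, 1, 1, 1, 0])  # true outcomes
p = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # model predictions
g = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # group membership

for grp in (0, 1):
    pos_rate = p[g == grp].mean()             # demographic parity term
    tpr = p[(g == grp) & (y == 1)].mean()     # equal opportunity term
    print(f"group {grp}: positive rate {pos_rate:.2f}, TPR {tpr:.2f}")

# Both groups are selected at the same rate (0.50), yet qualified members
# of group 1 are approved less often (TPR 0.67 vs 1.00). Optimizing for one
# definition of fairness says nothing about the other.
```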

We also left feeling challenged by the question of how the technical community will confront new risks such as adversarial attacks and related techniques.
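To give a sense of what such an attack looks like, here is a minimal sketch in the spirit of the fast gradient sign method (FGSM), applied to a toy linear classifier; everything in it is an illustrative assumption rather than material from the course.

```python
# A minimal sketch of an adversarial perturbation in the spirit of FGSM.
# The toy linear classifier and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0, 1, 100)   # weights of a toy linear classifier
x = rng.normal(0, 1, 100)   # an input; sign(w @ x) is the predicted class

# For a linear score w @ x, the gradient with respect to the input is w,
# so a worst-case bounded perturbation moves each feature by at most eps
# against the current prediction.
eps = 0.25
x_adv = x - eps * np.sign(w) * np.sign(w @ x)

print(f"original score:    {w @ x: .2f}")
print(f"adversarial score: {w @ x_adv: .2f}")
# Each feature changes by at most eps, yet the score shifts by
# eps * sum(|w|), which in high dimensions is typically enough to flip
# the model's decision.
```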

In Part II, we continued our journey into a sense of despair about AI ethics and governance as we read papers explaining that many of the problems created by autonomous systems, such as bias and discrimination, cannot be solved through technical or legal mechanisms. We learned that much of the discrimination and inequality that we blame on algorithmic decisions already exists in our society. Algorithms just amplify existing bias by relying on historical data that reflects past societal values to inform current and future decisions.
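To make that amplification mechanism concrete, here is a minimal, hypothetical sketch (the hiring scenario, numbers, and model are my illustrative assumptions, not course data): two groups with identical skill distributions, historical labels that held one group to a higher bar, and a model that learns the penalty and applies it going forward.

```python
# Hypothetical sketch of bias amplification: a model trained on biased
# historical decisions reproduces the bias. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)     # identical skill distribution by design

# Historical decisions: group B was held to a higher bar, the past bias
# that gets baked into the training labels.
hired = (skill > 0.8 * group).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for grp in (0, 1):
    print(f"group {grp}: predicted hire rate = {pred[group == grp].mean():.2f}")
# Equally skilled groups, unequal predictions: the model has learned the
# historical penalty on group B and will apply it to future candidates.
```

Nothing in the model is technically broken here; the disparity comes entirely from the labels, which is one illustration of why purely technical fixes fall short.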

In the final stage of the course, we considered whether we should deploy autonomous systems at all in certain high-stakes contexts, such as criminal sentencing. Some scholars contend that algorithms can help uncover human bias and prevent it from being perpetuated. Others argue against using these systems, given their tendency to exacerbate existing bias and our limited technical and legal ability to correct for those biases. 

We had our last class two weeks ago, and by the end I felt that we had gone deeper on many of these topics than I ever had before. Even then, I felt that we had only just begun to see how difficult the problems are. I think most of us ended the class feeling a bit overwhelmed by the scale of the work ahead of us to minimize the harm these algorithms can do to society as they are deployed. That said, the class demonstrated the value of incorporating interdisciplinary perspectives when thinking through problems and solutions in the AI space.

Throughout the course, I observed students learning from one another, rethinking their own assumptions, and collaborating on projects outside of class. We may not have figured out how to eliminate algorithmic bias or come up with a satisfactory definition of what makes an autonomous system interpretable, but we did find ourselves having conversations and coming to new points of view that I don’t think would have happened otherwise.

It is clear that integrating the humanities and social sciences into the conversation about law, economics, and technology is required for us to navigate our way out of the mess that we've created and to chart a way forward into an uncertain future with our increasingly algorithmic societal systems.

