Picture from the second panel debate

What about trustworthiness and sustainability in AI?

On the 3rd of June, NordSTAR invited experts from different fields, with different perspectives on AI, to discuss trustworthiness and sustainability in AI.

Trustworthiness and sustainability are two fundamental pillars underlying modern society. Simultaneously, with the development of information technologies, AI has become ubiquitous in almost all our routines.

We increasingly interact not only with each other but also with AI-empowered machines. It is therefore increasingly necessary to properly assess how, and how much, humans trust AI tools and devices, and how these tools contribute to a sustainable society.

Because of this, AI can no longer be seen as an isolated technical discipline under the umbrella of the natural sciences and purely technological knowledge.

This was the background for the first official NordSTAR workshop, where questions such as “How successful have scientific approaches in AI research been in developing trustworthy and sustainable AI?” and “What interdisciplinary challenges do we need to face and solve?” were discussed.

Summary

The workshop was opened by the research director at OsloMet, Yngve Foss, followed by the NordSTAR directors Pedro Lind and Anis Yazidi. 

Speakers

The first presentation of the day was held by Alexander Buhmann and Christian Fieseler from the Norwegian Business School (BI). In their talk, titled Deep Learning meets Deep Democracy, they shared their research on deliberative governance and responsible innovation in artificial intelligence. 

They have developed a framework of responsibilities for AI innovation, and a deliberative governance approach for enacting these responsibilities.

More information on this can be found in their article with the same title: Deep Learning meets Deep Democracy (cambridge.org)

Henrik Skaug Sætra from Østfold University College then gave his talk, titled AI in context and the sustainable development goals.

The presentation was based on Sætra's book of the same title. In his talk, and in more depth in the book (routledge.com), he shows how AI can potentially affect all the sustainable development goals, both positively and negatively.

The last speaker of the day was Audun Jøsang from the University of Oslo. In his talk, titled Assessing trust in IT systems, he presented some key elements and a framework for reasoning from his book “Subjective Logic: A Formalism for Reasoning Under Uncertainty”. 

Round tables 

The workshop ended with two round tables. For the first round table, NordSTAR had invited experts from different fields and contexts to give their perspectives on trust and sustainability in AI.

The panelists were asked if they think artificial intelligence today is sustainable and trustworthy, and what AI experts should consider to make the tools they develop more trustworthy and sustainable. 

In the first discussion, all panelists agreed that there is still a lot of work that needs to be done to make artificial intelligence more trustworthy and sustainable. 

Transparency was mentioned as a key factor for AI experts to consider when developing tools. With regard to sustainability, they must be aware of the carbon footprint: for example, training a single AI model can emit as much carbon as five cars over their lifetimes.

In the second round table, the panelists discussed how the development of trustworthy and sustainable AI will change basic AI research. The round table consisted of experts in artificial intelligence and collaborators working closely with AI.

In this discussion, the panelists brought forward important input: interdisciplinary research will become more important, and more collaboration across disciplines is needed.

More people should have knowledge of artificial intelligence, and the students in the field need to understand AI in the context of trustworthiness and sustainability. 

Usefulness should be looked at in the context of trustworthiness: how is the process done today, and how can AI support it? Both humans and machines will be flawed and biased, but by working together, accuracy can be increased.

Closing

The workshop was closed by Vahid Hassani, Vice-Dean at OsloMet, who emphasized the importance of inviting groups of people with different backgrounds and perspectives to discuss big topics like sustainability and trustworthiness in AI. 

Image description

The image shows pictures from the second round table. From left: Pedro Lind, Michael Riegler, Helge Røsjø, Audun Jøsang, Henrik Sætra, Henrik Wiig, Elena Parmiggiani, Ira Haraldsen.

Published: 10/06/2022 | Maria Normann