“Trustworthy AI requires interdisciplinary collaboration”

July 03, 2021

Mr Krause, you're one of Europe’s leading machine learning and artificial intelligence (AI) researchers. Are there tasks that you used to do yourself a decade ago but that you now delegate to intelligent computer programs?
Behind the scenes there are actually several very useful AI and machine learning technologies that make my day-to-day work easier. Searching through academic literature is greatly supported by recommendation engines, and speech recognition and language translation can, to a valuable extent, be automated today. That wasn't yet possible ten years ago.

Can artificial intelligence understand problems that humans have not yet understood?
It's hard to define what ‘understanding’ exactly means. Machines are capable of efficiently extracting complex statistical patterns from large data sets and utilising them computationally. That doesn't mean in any way that they ‘understand’ them. Nevertheless, existing machine learning algorithms are still very useful for specialised tasks. It’s still a uniquely human ability, however, to generalise knowledge across domains and to quickly grasp and solve very different types of complex problems. We are very far away from achieving this in artificial intelligence.

What's your take on AI research at ETH Zurich?
We’re carrying out excellent AI research here at ETH, both in the Computer Science department and in many other disciplines. This is especially true in data science subfields like machine learning, computer vision and natural language processing, but also in application domains such as healthcare and robotics. Many of the most exciting questions pop up at the interface between different disciplines, so I see opportunities for working together systematically. That's why we established the ETH AI Center as an interdisciplinary effort and joined ELLIS, the European Laboratory for Learning & Intelligent Systems. Such networking is key. Going forward, we will only be able to influence AI and shape it according to European values if we take on a technological leadership role.

What do ‘European values’ mean in connection with AI?
That we're reflecting on how technological development impacts our economy and open society. For example, protecting personal privacy is an important value in Europe. This raises new questions about how to develop AI technology. Reliability, fairness and transparency play a key role here too, and they're connected to highly relevant questions about societal acceptance, inclusion and trust in AI.

What are the current challenges when working towards trustworthy AI?
AI and machine learning should be as reliable and manageable as conventional software systems, and they should enable complex applications that we can rely on. In my view, a great challenge lies in the fact that one can only assess the trustworthiness of AI in the context of specific applications. Particular issues arising in medicine, for instance, can't be directly translated to issues involving the legal sector or the insurance industry. So we need to know the specific requirements of an application to be able to develop trustworthy and reliable AI systems.

What makes a machine learning algorithm reliable?
Reliability is a central issue when it comes to acceptance of new AI technologies. Again, the concrete requirements for reliability depend very much on each application. When a recommendation engine suggests a movie that someone doesn't like, the consequences aren't as far-reaching as when an AI system for medical decision support or an autonomous vehicle makes a mistake. Those kinds of applications require methods with much higher levels of reliability and safety.

And when mistakes creep in anyway?

The source of this news is from ETH Zurich
