
Supporting the human use of artificial intelligence

Artificial intelligence is no longer a vision of the future but is already in our midst: whether in parking aids or search engines, we use the technology quite naturally in many areas of daily life. It is also increasingly used in medicine and the life sciences, where it promises new, seemingly unlimited opportunities but also poses risks. Experts from the Integrata Foundation in Tübingen work on ethical issues and on the human use of IT to improve the lives of as many people as possible.

The physicist Michael Mörike is chairman of the Integrata Foundation for the human use of Information Technology (IT) and focuses, amongst other things, on the issue of how ethics can best be incorporated into AI. © Integrata-Stiftung

Artificial intelligence (AI) has been the focus of research for several decades. However, the breakthrough in everyday applications has only come in recent years with the development of powerful computers and the corresponding artificial neural networks. AI is now one of the big topics of the digital revolution and is becoming increasingly influential in more and more areas of life in many countries, including Germany. The number of future AI applications seems virtually unlimited.

AI is a bringer of hope on the one hand, especially in the medical field, and a source of potential risks on the other. "It is high time that we face up to these risks," says Michael Mörike, chairman of the Integrata Foundation for the Human Use of IT in Tübingen. "With AI, we are dealing with a phenomenon that humanity has never known before. People managed to find ethical solutions for major inventions of the past, such as the knife and the steam engine, but now it is not just a case of applying the technology properly, but of incorporating ethics, in this case morality, into the new processes."

Improving the quality of life of as many people as possible 

Mörike has been involved in the Integrata Foundation for over ten years. It was founded in 1999 by the economist Prof. Dr. Wolfgang Heilmann and has always been committed to using information technology not just for automation, but above all for improving the quality of life of as many people as possible.

The basic idea is first and foremost the "social" application of IT; technical applications are secondary. IT is seen as a tool for making the world more humane. Among other things, the foundation contributes to this idea by awarding the annual Wolfgang Heilmann and eCare Prizes for exemplary work. The foundation has also set itself the task of promoting public discussion on the topic. To this end, it runs, for example, the HumanIThesia portal and the Special Interest Group AI, which provide opportunities for information exchange among company representatives in the AI sector in Baden-Württemberg.

Applying deep learning in a responsible way

The work done by Mörike and his colleagues is based on a vision of humans as part of a comprehensive evolutionary process rather than as the "crown of creation". "In this context, humans can improve their quality of life by using IT, but must do so in a responsible way," explains Mörike. "And that is particularly the case when it comes to the latest development, i.e. artificial intelligence or, to be more precise, machines that are capable of learning, and in particular of deep learning. This is not just our point of view but that of many other experts, as shown by the latest EU working document on ethics guidelines, which invited contributions up to January 2019.* The plan is to complete the document by March 2019. Our major concern is to incorporate the right ethics. If we do nothing, the whole project of using AI for the benefit of humans could fail. Take Facebook as an example of how things happen that nobody wants, simply because nothing is being done to prevent them. Facebook acts as a kind of accelerator that stirs up hatred. This is unethical, and we have to do everything we can to prevent such unethical developments from happening."

Developing ethical AI with society

Mörike emphasises that integrating ethics into AI so that machines work for the benefit of humans is not easy: "This is a complex and grey area, because ethics and morality cannot be calculated using traditional mathematics. Although learning machines can be taught correct behaviour as they are introduced to different situations, they have only learned ethical behaviour in theory. The problem is that morality is highly situational and requires different decisions on a case-by-case basis. In addition, it is often culture-dependent: things may be viewed differently in China than in Germany. It also depends on our internal values and how they relate to society."


So, first of all, the big question is: what do we really want to integrate into ethical AI? The Integrata Foundation would like to work with the public to clarify this. "That's why we are fuelling the debate through our platforms, and by giving lectures and holding conferences, with the idea of discussing the issue with society as a whole if possible," says Mörike. "Participation so far is still relatively low, but that is a phenomenon we are already familiar with." In order to integrate ethics into AI, you first need a model, and such a model does not yet exist. Mörike comments: "I know of no one who is working solely on this issue. So society as a whole has to come up with something together. In my opinion, the European way, which facilitates public participation, is the right one."

Medical applications: pros and cons

When it comes to decisions made by particularly complex artificial neural networks, which are incomprehensible to humans, the Tübingen physicist is convinced that it is best to let machines make recommendations but to leave the final decision to humans. Mörike believes that this applies particularly to medical applications and research: "Rather than rejecting AI outright, we should explore it. If a particular aspect turns out to be useful, then we should take advantage of it," he says. He strongly believes that AI should be used in the field of medicine: "It saves doctors a lot of time and opens up promising new possibilities for treatment, whether innovative methods in drug discovery, robots performing operations, or pattern recognition. This is where humans cannot beat machines, and where we'll find that machines make life easier for us. However, we should be careful with therapy recommendations: they will certainly come, but do patients really want this? AI cannot replace comfort." Mörike also takes a critical view of interventions in the human brain: "I'd draw the line here, depending on the application."

However, generally speaking, a good start has been made with regard to the human application of AI. Mörike comments: "So far, I have not come across any technology that such a large number of people are worried about - not even genetic engineering. It all just has to become much more concrete.”

Sources:

* European Commission, High-Level Expert Group on Artificial Intelligence: "Draft Ethics Guidelines for Trustworthy AI", working document for stakeholders' consultation, Brussels, 18 December 2018 (https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_draft_ethics_guidelines_18_december.pdf)
