Brain-computer interfaces are among the latest developments in neurotechnology. They record brain activity, which is then decoded with artificial-intelligence techniques and converted into control signals for robots or computers. While this brings hope to severely paralysed people, it also entails risks, given the interest of companies like Google and Facebook in this type of data. Both aspects are being investigated in the BrainLinks-BrainTools cluster of excellence at the University of Freiburg.
The idea behind brain-computer interfaces (BCI) is to be able to control machines solely through thought. The technology enables humans to undertake certain actions, such as communicating with the environment or driving a robot, without using peripheral nerves and muscles. Electrical activity in the brain’s nerve cells is measured and recorded using electroencephalography (EEG), a method in which electrodes are placed on the scalp. A computer then analyses the recorded brain waves using computer-based models – most recently advanced machine-learning methods, i.e. artificial intelligence (AI) – and translates them into application-specific control signals.
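The measure–decode–control pipeline described above can be sketched in a few lines. The following is a deliberately simplified, hypothetical example – a single-channel, threshold-based decoder rather than the trained deep-learning models the cluster actually uses; the sampling rate, frequency band and threshold are invented purely for illustration.

```python
import numpy as np

# Toy version of the EEG -> control-signal pipeline described above.
# Real BCIs use multi-channel recordings and trained machine-learning
# decoders; all numbers here are invented for demonstration.

FS = 250  # sampling rate in Hz (a typical EEG amplifier rate)

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def decode_command(eeg_window, threshold=1000.0):
    """Map one second of EEG to a robot command via mu-band (8-12 Hz) power.

    Motor imagery suppresses mu-band power over motor cortex, so low
    power is read here as 'move' and high power as 'rest'.
    """
    power = band_power(eeg_window, FS, 8.0, 12.0)
    return "move" if power < threshold else "rest"

# Synthetic one-second window: a strong 10 Hz (mu) oscillation -> "rest"
t = np.arange(FS) / FS
resting = 50.0 * np.sin(2 * np.pi * 10 * t)
print(decode_command(resting))  # prints "rest"
```

In a real system the thresholding step would be replaced by a classifier trained on many labelled recordings from the individual user.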
Neurologist Dr. Philipp Kellmeyer from the Neuromedical AI Lab (head: PD Dr. Tonio Ball) of the BrainLinks-BrainTools cluster of excellence and the Freiburg Institute for Advanced Studies (FRIAS) at the University of Freiburg focuses on the clinical application of BCIs. “The cluster offers us an exceptionally innovation-friendly environment, which is unique in Germany,” says Kellmeyer. “By teaming up with scientists from various disciplines at FRIAS we can develop and investigate everything at one location – from electrode measurement to animal studies and translation into clinical application, and sometimes even small neurotech start-ups. In 2017, for example, along with other scientists in the cluster we were at the forefront of developing the first BCI robot controller based on artificial intelligence, i.e. deep learning.”
The method has been in use for a year now and enables humans to drive robots solely through brain activity. It works so well that the researchers intend to extend it to other fields of application, explains the neurologist. Group leader Ball calls this approach “neuromedical artificial intelligence”. The researchers also plan to use the method to increase our general understanding of brain function.
Ball and Kellmeyer have teamed up with neuroscientists, clinicians and engineers in the BrainLinks-BrainTools cluster of excellence to develop a BCI-controlled communication system. The researchers are also testing new electrodes for their ability to measure nerve signals directly on the surface of the brain. In traditional EEG, which uses electrodes placed on the scalp, the brain signal is attenuated by the skull bone and is therefore sometimes difficult to interpret. In Freiburg, electrodes have been specifically developed for the new approach; they are implanted beneath the skull bone and placed directly on the cerebral membrane, where they can record the brain’s bioelectrical activity without attenuation.
Analysis of the measured data is another focus of the study. “Artificial intelligence methods are extremely helpful in decoding the extensive data from recorded brain activity,” says the scientist. “This then creates an algorithm that executes the desired control process. And it would be a huge leap forward in terms of innovation if this enabled patients to communicate.” The new system, which paralysed people can use to generate letters, is based purely on intuitive changes in brain activity. The algorithm learns from every application and is able to adapt individually to the user.
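The adaptive behaviour described here – a decoder that learns from every application and adjusts to the individual user – can be illustrated with a toy online learner. This is not the Freiburg algorithm (which is based on deep learning); the perceptron-style update rule, feature dimension and learning rate are assumptions made purely for demonstration.

```python
import numpy as np

# Toy illustration of per-user adaptation: a decoder whose weights are
# nudged after every trial. All parameters are invented; the actual
# system described in the article uses deep learning, not a perceptron.

class AdaptiveDecoder:
    def __init__(self, n_features, learning_rate=0.1):
        self.weights = np.zeros(n_features)
        self.lr = learning_rate

    def predict(self, features):
        """Return 1 ('select letter') or 0 ('skip') for one EEG feature vector."""
        return int(features @ self.weights > 0)

    def update(self, features, target):
        """Perceptron-style correction after feedback on one trial."""
        error = target - self.predict(features)
        self.weights += self.lr * error * features

rng = np.random.default_rng(0)
decoder = AdaptiveDecoder(n_features=4)
true_w = np.array([1.0, -1.0, 0.5, 0.0])  # hidden "user-specific" pattern
for _ in range(500):  # the decoder adapts trial by trial
    x = rng.normal(size=4)
    decoder.update(x, int(x @ true_w > 0))
```

After a few hundred simulated trials the learned weights approximate the user-specific pattern, which is the essence of the individual adaptation mentioned in the text.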
Asked when he thought patients would be able to use the new system, Kellmeyer answered that the process also depends on the dynamics of innovation. He said: “If we are able to show that the principle works, a market could then develop in which companies, perhaps even large ones, might be interested. Initially, however, the use of the system by severely paralysed patients will probably be a niche market involving smaller companies.” The scientists plan to launch a pilot study in 2019 to test this completely new medical procedure.
Apart from these patient-oriented aspects, Kellmeyer also looks at ethical issues around the application of advanced AI methods to brain data. “Basically, ethics are involved in all areas of medicine where large quantities of images and data are generated. Of course, we would like to use AI as widely as possible. However, its application raises many questions,” Kellmeyer explains. “This starts with practical issues such as restructuring and integrating clinical data. After all, you need data from a very large number of patients in order to be able to identify patterns. Then there is the issue of data protection. And finally, of course, we must also deal fundamentally with normative aspects, which result from the interaction of humans with intelligent systems.”
Kellmeyer has teamed up with colleagues from the fields of computer science, philosophy and law at the FRIAS in a research focus called “Responsible Artificial Intelligence – Normative Aspects of the Interaction of Humans and Intelligent Systems”, which has been set up to address the very fundamental challenges of how to deal with such brain data. As early as 2017, the Freiburg neurologist was working with other scientists to develop and publish a list of ethical priorities for the use of AI in neurotechnology.*
"We are dealing with two main areas," reports Kellmeyer. "First, the transparency and accountability problem of algorithms. You cannot predict how a system will behave in the future because it is constantly learning, renewing and changing. Second, there could be fundamental difficulties with interpretability because we have not yet been able to process such large amounts of data. For example, an MRI scan has more than 5,000 shades of grey, which have so far been downsized to under one hundred because 5,000 shades would not be visually resolvable for the human perceptive apparatus. However, an algorithm would be able to use this data and perhaps come up with completely different results. We would have to trust it blindly, and that's still a difficult issue at the moment. "
This year, Kellmeyer and his colleagues are planning to develop an alternative to “disruptive AI”, Kellmeyer’s name for what is practised in Silicon Valley, and to the “dystopian AI” that is occasionally used in China to monitor citizens: “We are aiming for a European third way – responsible AI – in other words, only using AI systems when one understands them.”
For Kellmeyer, the fact that these issues are largely beyond political control is a far-reaching problem. “This is why we need to raise awareness and encourage public discourse to enable us to benefit from such a powerful technology. We have witnessed something similar with the ecology movement; it took decades before there was a certain level of public environmental awareness. AI is also about individual responsibility as far as personal data is concerned. This is all the more important because Facebook, Google and other firms have a huge interest in data that can be derived from users’ brain activity. We call this consumer-oriented neurotechnology.”
* Yuste, R. et al. (2017): Four ethical priorities for neurotechnologies and AI. Nature News 551 (7679) 159: https://www.nature.com/news/four-ethical-priorities-for-neurotechnologies-and-ai-1.22960; DOI: 10.1038/551159a