
“Ultimately, the focus is always on people”

An interview with Professor Bernhard Humm, Ph.D., Steinbeis Entrepreneur at the Steinbeis Transfer Center for Applied Artificial Intelligence at Darmstadt University of Applied Sciences

The futuristic image some people associate with AI is that of an intelligent, humanoid robot. Yet we already use AI applications in many areas of everyday life: cell phones with facial recognition, digital voice assistants, smart home devices – with more developments on the way. Will it become dangerous if a creation outperforms its creator? Steinbeis expert Professor Bernhard Humm believes it will not, since machines can only ever behave or learn within the frame of reference of the tasks set for them by humans. In an interview for TRANSFER magazine, Professor Humm highlights the societal responsibility that comes with AI.

Hello Professor Humm. People talk about artificial intelligence as if it’s a matter of course, but what exactly does the term mean?

You’re right, one of the reasons AI is such a controversial topic at the moment is that the term is fuzzy. Few people know exactly what AI means, so anyone can make up their own definition of it. This is why I’m happy to provide one: AI is the field of developing computer systems or apps that exhibit aspects of human intelligence. In other words, they’re not intelligent the way you or I are – they merely simulate aspects of human intelligence. One such aspect is communication, which apps like Alexa or Siri reproduce. Thinking and reasoning – qualities that support decision-making – are further aspects of human intelligence. These are simulated by so-called expert systems, for example systems that recommend courses of action to physicians. Or take the ability to carry out an action, an example of which would be a self-driving car. And then there are cameras with face recognition, which are also AI applications. So as you can see, we already have many AI applications in everyday use, but they’re often taken for granted and not necessarily perceived as AI.

You deal with concrete AI applications in your work. What sort of issues do your customers approach you with?

I can think of many interesting examples, especially from my research projects at the university together with my team. Let’s start with medicine. We have developed applications for doctors treating cancer patients. These draw on patient information held in electronic health records to provide physicians with evidence-based recommendations regarding diagnostic procedures and therapies.

Another example comes from the field of psychotherapy, where an application supports patients with borderline personality disorder and their therapists – for example, when there is a risk that the patient will drop out of therapy.

Then there is the area of industrial manufacturing, where it is all about smart or connected factories. The idea here is to pinpoint production errors or machine faults early on and then give maintenance engineers an indication of where an error is occurring, what might be causing it, and how to solve the problem.
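To make the idea concrete, here is a minimal Python sketch of early fault detection on machine sensor data. The readings, window size, and threshold are purely illustrative assumptions – the interview does not describe the actual method used in the project.

```python
# A minimal sketch of early fault detection on machine sensor data.
# All values, the window size, and the threshold are illustrative
# assumptions - the interview does not describe the actual method.

from statistics import mean, stdev

def find_anomalies(readings, window=20, threshold=3.0):
    """Flag readings that deviate strongly from the recent rolling window."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # A reading more than `threshold` standard deviations away from
        # the recent mean is reported, so maintenance engineers can
        # investigate where the fault occurs and what might cause it.
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append((i, readings[i]))
    return anomalies

# Example: a vibration sensor that spikes at index 25.
vibration = [1.0 + 0.01 * (i % 5) for i in range(30)]
vibration[25] = 4.2
print(find_anomalies(vibration))  # -> [(25, 4.2)]
```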

We also worked on a project in the field of tourism. The challenge was to recommend hotels to end customers – families, for example – that match their wishes, interests, and preferences, without customers having to enter the exact wording used in the hotel descriptions. So if a customer searches for a family-friendly hotel but the hotel description says child-friendly, the hotel will still be recommended, because the two terms relate to the same concept.
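As a rough illustration of that kind of matching, here is a minimal sketch in Python. The synonym map, concept names, and function are hypothetical – the project’s actual semantic model was certainly much richer.

```python
# A minimal sketch of concept-based matching, assuming a hand-made
# synonym map. The real project likely used a richer semantic model;
# this only illustrates the idea described in the interview.

CONCEPTS = {
    "family-friendly": "suitable_for_children",
    "child-friendly": "suitable_for_children",
    "kid-friendly": "suitable_for_children",
}

def matches(query_term: str, hotel_description: str) -> bool:
    """True if the query term and any description term map to the same concept."""
    query_concept = CONCEPTS.get(query_term.lower())
    return query_concept is not None and any(
        CONCEPTS.get(term) == query_concept
        for term in hotel_description.lower().split()
    )

# "family-friendly" and "child-friendly" resolve to the same concept,
# so the hotel is recommended despite the different wording.
print(matches("family-friendly", "cozy child-friendly hotel near the lake"))  # True
```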

The last example is from the world of creative arts. It was a project for the Städel Museum in Frankfurt, one of the most prominent art museums in Germany. It resulted in a digital version of its collection with interesting cross-references between the various artworks.

All the examples I’ve just named have one thing in common: when you’re applying AI technology, the focus is always on people using the applications. This is sometimes called user-centered AI.

Fears often come up when people hear the term AI. What do you see as the actual risks of AI?

I think we can start with the dystopian fears – there are quite a few of them. You’re probably familiar with this one: machines evolve all by themselves to the point where they surpass human intelligence and eventually take over the entire world. The whole idea has something fascinating about it, so Hollywood and the media have greeted it with open arms. But let’s be honest: even if – and that’s “if” with a capital I – it really were possible for a genuine intelligence to emerge that could evolve on its own, it’s not going to happen in the foreseeable future. And I’d go further and say there’s currently zero evidence that this could ever be the case, because there are fundamental differences between humans and machines. Humans are unified entities comprising body, mind, and soul, so our intelligence is not just something that exists in our brains; it’s a product of this unified entity. Add to that the way we interact with our environment, the way we learn and develop from within. Human intelligence is much more complex than AI. It’s important to remember that AI systems always operate within the framework of a purpose defined by humans, and there’s no evidence that they could break out of that frame of reference by themselves.

But let’s think about the real risks of AI. One is the risk that AI systems make wrong decisions – incorrect medical diagnoses, say, or actions leading to accidents in self-driving cars. Then there are risks caused by humans not understanding decisions made by machines and intervening incorrectly. In the 1980s, there was an accident with an Airbus aircraft equipped with new autopilot systems. The pilot wanted to perform an impressive maneuver at an airshow, but the autopilot was programmed to compensate for it: the pilot steered against the system, the system steered against the pilot, and the airplane crashed. The human-machine interaction wasn’t right.

The next risk relates to the quality of the data AI systems use to carry out actions. If the data is faulty, machines make wrong decisions.

And there’s another important aspect, which I call blind belief in technology. Let’s take an example from medicine. There are now some very good AI systems capable of supporting diagnosis. But what happens if a physician deliberately decides, for good reasons, not to follow certain recommendations made by the machine? They might come under pressure to justify their decision. A generation of physicians later, the ability to consciously make one’s own decisions could perhaps be lost.

What trends do you think will shape the future of AI applications? Do we need specific regulation for future developments?

This is an important question, and I think it has less to do with technical developments and much more to do with the responsibility held by society – or rather the global community. A good example is autonomous weapons, which can even decide to launch an attack automatically. The question is: Do we want this as a society?

I see three directions on different continents: the American way, which is primarily business-oriented; the Chinese way, which is largely about social control; and the direction targeted by the EU, which is about ethical responsibility, or human-centered AI. One aspect of this is data protection. But this puts us in a situation of conflict – data protection versus data-driven technology. On the one hand, the more data you have access to, the better such technologies work. On the other hand, data protection impedes such technologies. But what you have to remember is that AI isn’t an end in itself, so you have to operate within that conflict. This is not only about regulatory or legal measures; it’s also about the general ethical framework, which we have to define as a society.

Contact

Prof. Dr. Bernhard Humm (interviewee)
Steinbeis Entrepreneur
Steinbeis Transfer Center Applied Artificial Intelligence (Darmstadt)
