TRANSFER magazine speaks to AI expert Carsten Ullrich, professor at Steinbeis University
Sometimes it feels like the only opinions you hear about artificial intelligence (AI) are the extreme ones – moderate voices are rare. Some see AI as a 21st-century panacea, a solution that will surpass natural human intelligence in the coming decades. Others see it as Pandora’s box, something we lost control of years ago without even noticing. In an interview with TRANSFER magazine, Professor Dr.-Ing. Carsten Ullrich shows how to weigh up the opportunities and risks. An AI expert at CENTOGENE, a company specializing in diagnosis and therapy development for rare diseases, Ullrich also lectures at Steinbeis University. He explains how AI can be extremely helpful in many areas, but he also points out the clear limitations that still exist.
Hello Professor Ullrich. You’ve been working on artificial intelligence for more than 15 years now. Which developments would you describe as the milestones in this area?
There have been many milestones. You could even go back to Leibniz and the first realization that machines could be used not only to calculate numbers, but also to formalize logic and thought processes. In the 1940s and 50s, when the first computers came along and started performing calculations semi-electronically, smart thinkers realized that maybe they could be used for a lot more. Those were the very early days of AI, but it didn’t take long before some incredibly impressive results were produced. There were AI programs capable of proving mathematical theorems, and the first translation programs – all in the 1960s, when there was no such thing as a computer as we know it now.
Since then, more and more ways have been found to get computers to calculate things and behave intelligently. The processes of machine learning have also made another major leap forward in recent years. This was all thanks to the huge volumes of data made available by the Internet, but also the ability to perform monumental computations in parallel. This has made it possible to run algorithms on huge amounts of data in order to detect patterns.
There have also been numerous milestones in the last 15 years. Look at the board game Go: Computers now play it better than any human, which was basically inconceivable six years ago. Or the protein structure prediction system AlphaFold, recently unveiled by a sister company of Google. For a long time, protein folding was seen as one of the great unsolved challenges. Being able to predict the folding of many proteins not only constitutes a milestone in AI technology, it also spills over into the corresponding field of life science. The problem with all milestones is that the moment you reach them, people no longer see them as AI – they become the standard. So you’re permanently raising the bar.
You’re a professor of artificial intelligence at Steinbeis University. What are your main areas of focus?
I’ve been dealing with AI in one way or another throughout my career, and it’s always revolved around the same question: How can we use AI to make life easier for humans? My first studies looked at adaptive learning systems for schoolchildren and students based on AI processes; my work in Shanghai involved AI in distance learning. Later, at the German Research Center for Artificial Intelligence, I developed assistance systems for smart factories to support operators in different working environments. This is the topic I’ve continued working on at Steinbeis. The key question for me is: Where can AI be used to help people do their jobs, automate monotonous tasks, or give people more time to focus on what’s important to them? So it’s about more than programming algorithms. When you introduce a technical system to an organization, the other question that arises is how best to do that. Basically, when you develop an AI system you should get staff involved to understand their apprehensions and expectations. Only then will you be in a position to apply AI successfully. After all, you don’t want to introduce AI just for the sake of it; you want to make things better or change business processes.
What opportunities does AI offer to companies, but also to society, and what risks does it entail?
My main concern is always how to use a new form of technology and what I want to achieve with it. For me, AI should make things easier for people or give them more leeway to take action. This is where it offers tremendous potential. It’s comparable to the impact of previous industrial revolutions, and it’s this potential that needs to be leveraged.
Of course, every technology involves certain risks. The concern you hear over and over again is that technology will leave humans with less to do and less and less room to intervene themselves. But at the end of the day, what matters is how you design the overall system. I can create an AI solution that calculates the optimal bed occupancy schedule for a hospital. This immediately raises a question: What’s “optimal?” Should I define it as meaning there’s a nurse available for each patient as quickly as possible, someone with enough time to care for them properly? Or should I define optimal as the minimum number of people performing the minimum number of hours of care? There’s always a target for an AI system, and it doesn’t come from the system itself, but from the people who put it to use. AI then computes the best solution for the given target, but in itself it has no ethical judgment – that comes from the outside. Of course, there is a risk that AI achieves targets that are not in the general public interest. But that’s a risk you have with any technology, even everyday automation. AI solves a specific problem in a quasi-superhuman way – faster and more accurately. But it solves exactly the problem given to it by people. That’s something you must always be aware of.
A number of people feel skeptical about the widespread use of AI applications. How justified do you think they are to feel that way? What can be done about this skepticism?
The way I see it, general understanding regarding AI needs to be improved in society. We need to ensure people understand that it’s not AI that dictates what the ideal solution is, it’s human beings who define the best solution. AI will never decide on its own to cut back tens of thousands of jobs – that decision always comes from the outside. An AI solution is developed for a certain target and it’s used for that target. And that should also match our human values.
If you decide to introduce an AI system to a company, for it to be accepted it’s important to include the people who will ultimately use the tool from the moment you start the development process – to listen to their fears and reservations, but also their expectations. Sometimes people have unrealistically high hopes, so you have to discuss and clarify these. But you also have to address their specific fears and show how the system will prevent those things from happening. If there’s a particular apprehension, you design the overall system to counter it. AI systems can also be created to open the door to new opportunities and give employees more leeway to take action. It’s not easy, and it takes a lot of discussion with all the groups of stakeholders who will use the AI system. But I believe it’s the only way to translate AI – or technology in general – into use.
In addition to your work at Steinbeis University, you’re also Senior Director of Artificial Intelligence at CENTOGENE, where you apply AI methods to diagnostics and the development of rare disease therapies. What advantages does AI offer in that area, and what hurdles still need to be overcome?
It currently takes more than eight years on average to correctly diagnose a rare disease – eight years of a patient going on an odyssey from doctor to doctor, because no one knows how to interpret the symptoms. If a doctor is given a way to send our company a blood sample and gets a diagnosis back in two weeks, that’s tremendously helpful. We analyze the sample and tell the physician that a mutation was found indicating the presence of a rare disease, or even that a rare disease can be ruled out. The doctor then decides what to do next, and we also provide them with information on how we arrived at the diagnosis – which variant or mutation was identified, or any relevant research articles. This allows doctors to understand our diagnosis. We use AI to identify quick ways to improve the diagnostic process. This starts with little things: For example, there’s still a tendency for medical practitioners to send us paper records, so we scan them and recognize the words behind the lettering. The patient records are often handwritten, so we use an AI solution to digitize them. We then have another application that tries to pick out specific content within the scanned text, such as the name of the patient or their symptoms, and suggests entries to simplify the manual data entry our staff would otherwise perform.
Where it gets really exciting with AI is when it’s used on the data we’ve been allowed to collect over the years at CENTOGENE. Many of our customers have agreed to us analyzing data beyond the actual diagnosis, because this is the only way to move things forward when it comes to exploring new diagnostic options and drugs. We’ve succeeded in collecting data on approximately 600,000 patients with rare diseases since 2007. We use a variety of AI methods at CENTOGENE, all of them aimed at improving diagnoses or enabling new therapies in the future. It’s a dream job, actually. You get to turn the thing you love – AI – into an opportunity to help people.
And it’s a wonderful feeling for us when the innovations we use, across all the different disciplines, are given awards. Just a couple of weeks ago, we received the Health-i Award from Handelsblatt newspaper and the health insurance fund Techniker Krankenkasse. We were awarded the prize for an AI-based platform used to examine the metabolome. We succeeded in using the platform to shorten biomarker searches from months to just days – so it’s a lovely example of the disruptive capability of AI.
Data protection is the top priority for us, but at the same time it’s one of the obstacles in our area. Depending on the consent patients give us, there is data we may use and data we may not. This comes first, before any other process. The more data you have, the more robust the results you get, so having the option to use data is an extremely important issue for us. I was chatting with a corporate partner the other day, and she said, “Data privacy protection is for healthy people.” She was exaggerating, of course, but I think her comment gets to the heart of the matter. If your child is suffering from a rare disease and nobody knows how to diagnose it, or there’s no treatment for it, you’re grateful for any donated information that will allow a drug to be developed. That’s why I’d appeal to everyone to think again about releasing their data so it can be used for medical development. Surely the key point is that this results in new treatment and diagnostic options. Ultimately, the obstacles we currently face in using data are drastically slowing down a number of processes. But when human lives are at stake, I think you have to consider what the top priority is. For many rare diseases, we only have one or two patients, which of course is very little. This problem challenges us every day.
On top of that, medicine is extremely tightly regulated, which is also quite challenging. Of course there’s always a reason for it, but sometimes you have to question why there’s a need for the sheer number of formalities it takes to gain approvals for a medical product. And when I think about the ideas being mooted by the EU to regulate AI systems, I envisage clear competitive disadvantages compared to China and the US. The question this raises for me is what we’re trying to achieve. Of course it’s important to consider social factors, but how do you assess the other goals in this area, and what priorities do you set?
A final question then – and of course we can’t wait to ask this, because scientists innocuously refer to the issue as the “singularity”: Do you think artificial intelligence will supersede human intelligence?
I see no danger of this happening. Let me attempt to explain why. What’s possible with AI today? If I ring-fence a problem well enough, I can create an AI solution that’s better than any human. I mentioned the example of the board game Go a moment ago. The problem is that as a human being, I always use human intelligence as my benchmark when assessing AI systems. I project this view of human intelligence onto an artificial intelligence. Going back to Go, which AI currently plays better than human beings: if we expand the board from 19 to 20 squares each way, people can cope with that, of course; they can adapt their skills to changing circumstances. But AI can’t; it founders. It has to completely retrain itself, because it only knows how to solve the one, highly specific problem. It’s the same with every AI solution. AI solutions can only solve problems within the framework defined by humans, and they can’t extend that framework by themselves. As human beings, we define new targets for ourselves and keep trying to develop in order to reach them. AI is an algorithm – it was programmed, and it operates within its framework. It would never occur to an AI system to expand its own algorithm, and so far, intelligence research has found no method that fundamentally solves this. Once you’ve understood that, you can see how much of what’s propagated about AI, even in the media, is just pie in the sky. Despite its superhuman abilities, AI is extremely limited compared to natural intelligence.