An interview with Professor Dr. Michael Munz, Steinbeis Entrepreneur at the Steinbeis Transfer Center for AI Systems and Software Solutions
Although “artificial intelligence” and “machine learning” have become part of our everyday vocabulary, both terms are commonly used in several different ways. TRANSFER wanted to pin down exactly how they differ from each other. We asked this question in an interview with Professor Dr. Michael Munz, Steinbeis Entrepreneur at the Steinbeis Transfer Center for AI Systems and Software Solutions. As well as explaining the difference between the two terms, the Steinbeis expert also looks back at the history of these technologies and discusses what the future might hold for them.
Professor Munz, what do you see as the main difference between AI and machine learning?
Artificial intelligence, or AI, and machine learning, or ML, are both fields in computer science. AI essentially encompasses all algorithms and software systems capable of autonomously solving problems that require some form of “intelligence”. The easiest way to explain the difference is to think of ML as a subset of artificial intelligence: ML comprises methods that automatically learn patterns from data so that these patterns can be applied as successfully as possible to unknown data and scenarios. This is known as training. Examples include object detection and classification with camera data, or weather forecasting. In other words, these methods learn the solution, rather than having it explicitly programmed by the developers.
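To make the idea of training concrete, here is a minimal sketch in Python using scikit-learn: a classifier learns patterns from labeled examples and is then evaluated on data it has never seen. The dataset and model are illustrative assumptions, not anything specific to the interview.

```python
# Minimal sketch of the ML workflow described above: learn patterns from
# labeled data, then apply the learned model to unseen data.
# The dataset and classifier are illustrative choices, not prescriptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out data the model never sees during training ("unknown data").
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)    # training: the solution is learned, not programmed

y_pred = model.predict(X_test)  # application to previously unseen samples
print(f"Accuracy on unseen data: {accuracy_score(y_test, y_pred):.2f}")
```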
AI encompasses several other technologies besides ML, for example expert systems, which are based on explicitly modeled knowledge and can draw plausible deductions or even prove propositions. Unlike ML, however, these methods do not involve data-based training.
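For contrast, here is a deliberately tiny sketch of the expert-system idea: knowledge is modeled explicitly as rules, and conclusions are derived by forward chaining, with no training data involved. The rules are invented purely for illustration.

```python
# Tiny illustration of an expert system: knowledge is modeled explicitly as
# rules, and conclusions are derived by forward chaining; no training data
# is involved. The rules themselves are invented for illustration.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]

def infer(facts: set[str]) -> set[str]:
    """Repeatedly apply rules until no new facts can be deduced."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"has_feathers", "can_fly"}))
# -> {'has_feathers', 'can_fly', 'is_bird', 'nests_in_trees'}
```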
Recently, however, people have started using the terms AI and ML more or less synonymously, so it’s always worth checking exactly what is meant. “AI” is often also used to refer to the field of generative AI. This encompasses a whole family of methods that all have their roots in ML. Unlike classifiers or prediction algorithms, generative methods can generate new data by themselves. They include the ubiquitous large language models (LLMs) such as OpenAI’s GPT, as well as image generators such as Stable Diffusion.
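As a rough illustration of the generative idea, the following sketch produces new text with a small, freely available language model via the Hugging Face transformers library; the model choice and prompt are arbitrary assumptions.

```python
# Minimal sketch of generative AI: unlike a classifier, the model produces
# new data (here: text). Uses the Hugging Face transformers library with a
# small, freely available model; model and prompt are purely illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Machine learning is", max_new_tokens=20)
print(result[0]["generated_text"])
```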
Which developments do you think had the biggest influence on the fields of AI and machine learning?
There have been several key stages in the development of ML. The field’s entire development has been characterized by a series of highs and lows. Each new discovery would lead to a sharp rise in global research interest in the technology, accompanied by high expectations. But sometimes these expectations were so high that they eventually proved impossible to fulfill due to technological barriers.
The huge recent waves of activity in the field of ML were largely triggered by the widespread availability of graphics processing units, or GPUs. For the first time, these processors made it possible to train very large models on very large datasets, something that had previously been unthinkable because of the computing time required. This heralded the advent of “deep learning”, in which models with billions of parameters are trained. The next and arguably most defining stage was undoubtedly the rapid progress in generative AI, especially language models such as GPT, Llama and Gemini, to name but a few. Alongside the advances in hardware, these enabling technologies rest on new scientific discoveries, above all algorithms such as the transformer. At a stroke, these revolutionary algorithms enabled a new quality of complex data processing.
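The core operation behind transformers is scaled dot-product attention: Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. The following compact NumPy sketch shows the mechanism in isolation; shapes and values are generic placeholders, and real models stack many such layers.

```python
# Compact sketch of scaled dot-product attention, the core operation behind
# transformer models: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity of queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                               # weighted mixture of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```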
What sort of issues do your customers come to you with?
My customer inquiries generally involve very specific machine learning questions, often the development of new models for particular problems. For instance, I had a request for a system that could correctly re-identify identical objects that had been photographed under different conditions and distinguish them from similar objects. This involved developing, training, and evaluating a completely new model. Projects like this are of course particularly exciting because they cover the entire processing chain, from data acquisition and data preparation through training to evaluation and integration into the customer’s system.
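The interview does not detail how that project was solved, but a common approach to re-identification is to map each image to an embedding vector and compare vectors by cosine similarity. The sketch below illustrates this with a placeholder embedding function; a real system would use a trained neural network.

```python
# Illustrative sketch of embedding-based re-identification: each object image
# is mapped to a feature vector, and cosine similarity decides whether two
# photos show the same object. The embedding function here is a placeholder;
# in practice it would be a trained neural network. This is one common
# approach, not necessarily the one used in the project described above.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder embedding; a real system would use a trained model."""
    v = image.astype(float).ravel()[:128]
    return v / (np.linalg.norm(v) + 1e-9)

def same_object(img_a, img_b, threshold=0.8) -> bool:
    # Cosine similarity of the (unit-length) embeddings.
    return float(embed(img_a) @ embed(img_b)) >= threshold

rng = np.random.default_rng(1)
a = rng.integers(0, 255, size=(32, 32))
b = a + rng.integers(0, 10, size=(32, 32))  # "same object, photographed differently"
print(same_object(a, b))
```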
We also receive inquiries concerning technical feasibility assessments and regulatory matters.
Which trends do you expect to shape the future of AI and machine learning, and how can businesses prepare for the coming changes?
AI technologies, especially ML applications, have become an integral part of our everyday lives. Their capabilities are developing at an incredibly fast rate. And multimodality – where several data types like text, images, audio and so on are combined in a model – is opening up previously unimaginable possibilities. But I currently see two specific major challenges. The first is how to incorporate these technologies into the everyday world of work so that the systems really help to make processes easier and faster. While it would be great to have AI assistants to perform monotonous, repetitive tasks, it is vital to ensure that the quality of the results is not compromised. And that’s not as easy as it sounds. So this is an issue that businesses must pay special attention to. You shouldn’t adopt AI for the sake of it – it should be used to provide specific process support, and a clear quality assurance strategy must be in place.
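To make the notion of multimodality mentioned above slightly more concrete: a CLIP-style model embeds images and text into a shared space, so the two data types can be compared directly. The sketch below uses the Hugging Face transformers library; the model and the candidate captions are illustrative assumptions.

```python
# Rough sketch of multimodality: a CLIP-style model maps images and text into
# a shared embedding space, so the two data types can be compared directly.
# Model name and candidate captions are illustrative assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), color="gray")  # stand-in for a real photo
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
print(outputs.logits_per_image.softmax(dim=1))  # image-to-caption match probabilities
```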
I think the second major challenge relates to the systems’ transparency. Almost all modern ML systems are extremely powerful, especially the large language models. But models this large and complex come with one major drawback: because of their millions of parameters and interacting algorithms, the models themselves are no longer transparent to humans, which is why they’re known as black boxes. This also means that their outputs are neither explainable nor transparent for users. To have confidence in a model’s outputs, we need methods from the field of explainable artificial intelligence, or XAI. These methods can make the decisions or other outputs of such systems explainable to humans, but a lot more research is still needed in this field.
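One simple and widely used XAI technique is permutation importance: shuffle one input feature and measure how much the model’s performance drops, which reveals how strongly the model relies on that feature. The sketch below is illustrative; the interview does not endorse any particular method, and many others exist (e.g. SHAP, LIME).

```python
# Simple illustration of an XAI technique: permutation importance measures
# how much a model's performance drops when one input feature is shuffled,
# making the model's reliance on each feature visible to humans.
# Model and dataset are illustrative; many other XAI methods exist.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the three features the model relies on most.
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```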
Businesses thinking about using AI systems need to understand that a certain degree of transparency and explainability is essential wherever the outputs are of any consequence. Without this explainability, the use of AI will either be extremely risky or will deliver no efficiency gains, because its outputs will have to be painstakingly checked.
Contact
Prof. Dr. Michael Munz (interviewee)
Steinbeis Entrepreneur
Steinbeis Transfer Center for AI Systems and Software Solutions (Ulm)