
Visions of Artificial Intelligence: Science Fiction or Just Around the Corner?

Professor Dr. Wolfgang Ertel outlines potential scenarios of a future AI singularity

A central theme of Origin, the novel by American author Dan Brown, is a form of artificial intelligence that kills its human inventor after weighing all the facts – from a logical standpoint, this seemed to the AI system like the right decision. Realistically speaking, what is still science fiction in technological terms today may actually become possible within decades. Problems that were previously unsolvable with conventional methods based on mathematics, computer simulation, and software engineering can now be solved through machine learning. AI is now driving the economy, not only improving quality of life, comfort, and convenience, but also promising to change more and more areas of our lives. In addition to improving human health, learning diagnostic systems help protect the environment and our climate. As our author Professor Dr. Wolfgang Ertel explains, one prerequisite for this is a smart approach to AI based on a long-term outlook, especially if we want to avoid drifting into disaster. An entrepreneur at the Steinbeis Transfer Center for Artificial Intelligence and Data Safety, Ertel also lectures at Ravensburg-Weingarten University of Applied Sciences.

Three potential developments (top row of the figure): humans remain the crown of creation (left), humans degenerate (center), and humankind diverges (right).

Bottom row: singularity (left), limited singularity (center), and the superintelligent convergence of humankind and machines (right).

Machine learning has achieved significant breakthroughs over the last ten years. Thanks to deep learning, neural networks are now better than humans at recognizing arbitrary objects in photos. In the coming years, deep learning applied to technologies such as radiological imaging will deliver significantly better medical diagnoses. Deep learning is already powering advances in self-driving vehicles, which may one day become a global phenomenon. Service robotics is also becoming an important area of application and is reaching excellent standards, with robots now capable of reliably detecting and grasping objects. In creative areas, so-called generative adversarial networks (GANs) can create artworks or portraits of artificial people, or integrate real people into generated videos.
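
As a brief technical aside: a GAN pits two neural networks against each other – a generator G, which produces images from random noise z, and a discriminator D, which tries to tell real images x apart from generated ones. In the standard formulation (the notation below follows the original 2014 GAN paper, not this article), both are trained on the minimax objective

    \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]

so that D gets better at spotting fakes while G gets better at producing images D can no longer distinguish from real ones – the mechanism behind the artificial portraits mentioned above.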

The next level – complex language models

Adaptive AI systems can now solve a whole host of specialized tasks better than humans. But these systems are still restricted to solving one task at a time [1]. In 2020, the San Francisco software development specialist OpenAI unveiled its Generative Pre-trained Transformer 3 (GPT-3), taking the quality of the technology into an entirely new dimension [2]. GPT-3 is a highly complex language model that learns from texts found in books, databases, and Wikipedia, absorbing no less than four billion pages of text. You can hold a conversation with GPT-3 as if you were talking to an academic, receiving grammatically and semantically correct answers to questions from almost any field of knowledge.
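
For readers who want a feel for how such a conversation is set up in practice, here is a minimal sketch using the OpenAI Python library. Everything specific in it is an assumption for illustration: the API key is a placeholder, the engine name reflects the early GPT-3 API, and the SDK interface has changed since, so treat it as a sketch rather than a reference.

    # Minimal sketch: asking a GPT-3-style model a question via the early
    # OpenAI Python SDK. Engine name and interface are assumptions that
    # reflect the API as it was around the time of GPT-3's release.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    response = openai.Completion.create(
        engine="davinci",            # assumed GPT-3 engine name
        prompt="Q: Why is the sky blue?\nA:",
        max_tokens=100,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())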

This makes the next step toward “general intelligence” seem entirely feasible. All that is left to do now is to get such systems to talk, perform any chosen action, and acquire new skills. And that is exactly what OpenAI is now attempting with Codex [3], a system based on GPT-3 that, in addition to the language model, has also learned a model of the Python programming language by training on GitHub, the open-source code-hosting platform. Codex makes it possible to automatically generate nontrivial computer programs from plain-text task descriptions.
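
To make the idea concrete, here is a hypothetical example (the prompt and the completion are illustrative, not taken from OpenAI's material): the function signature and docstring serve as the plain-text task description, and the body is the kind of code a Codex-style model completes from it.

    # Hypothetical text-to-code example: the signature and docstring are the
    # "task description"; the body below is the sort of completion a
    # Codex-style model generates from them.
    def moving_average(values, window):
        """Return the average of each run of `window` consecutive values."""
        return [
            sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)
        ]

    print(moving_average([1, 2, 3, 4, 5], 3))  # -> [2.0, 3.0, 4.0]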

Extrapolating these successes into the future raises the question of whether – and, if so, when – AI systems will become superior to us humans in all areas. The point at which AI reaches the same level of intelligence as humans is called the singularity. It is assumed that this will happen within the next twenty to fifty years. Shortly after the singularity, AI systems will be far superior to us humans, because they will keep evolving much faster than we do. So what would this mean for us?

The race between human beings and AI: different scenarios

When it comes to the evolution of human and machine intelligence over time, developments could conceivably go in a number of directions. In the figure top left, human intelligence increases gradually and linearly, whereas AI initially increases exponentially before leveling off asymptotically just below the human level. This is based on the assumption that AI developed by humans can never become smarter than its creators. What speaks against this hypothesis is that human intelligence could also decline, especially if life becomes ever more convenient as AI does more and more of our thinking for us – car navigation systems being one example. This scenario is shown in the middle diagram. It is also possible, however, that a division emerges in society, as shown in the diagram on the right, with an “intellectual elite” in contrast to “wallowers” – a situation certain to offer considerable potential for conflict.

Things really become interesting in the graphs in the bottom row, where the red lines cross the blue lines at the point of singularity. On the left, the exponential rise of AI continues unabated. This cannot be completely ruled out, because AI that is smarter than us would also steer research and develop completely new technologies and algorithms. Such a superintelligent form of AI is called artificial general intelligence (AGI). There could also be a limited singularity, however, with the rise in intelligence stagnating, as shown in the middle graph.
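
One simple way to make these curve shapes concrete is with illustrative functional forms; the formulas below are assumptions chosen for illustration and do not appear in the original figures. Human intelligence h(t) grows slowly and linearly, and the scenarios differ in whether the AI curve m(t) ever crosses it:

    h(t) = h_0 + \alpha t  \quad \text{(human: slow linear growth)}

    m(t) = h(t)\left(1 - e^{-\lambda t}\right)  \quad \text{(bounded AI: approaches } h(t) \text{ but never crosses it)}

    m(t) = m_0\, e^{\beta t}  \quad \text{(AGI: unabated exponential growth; the singularity } t^\ast \text{ is where } m(t^\ast) = h(t^\ast)\text{)}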

In both of these scenarios, human beings are outstripped by AI, which takes the lead and can define its own plans for Homo sapiens. Precisely because AI will then be smarter than us, we will likely have no way to predict how it deals with us humans – heaven or hell, both are conceivable. What is interesting and new is that reality and a concept of the future previously dismissed as science fiction are increasingly converging.

Finally, the graph on the right shows another fascinating vision of the future. It more or less represents the vision held by Ray Kurzweil, AI futurist and a director of engineering at Google. Assuming we humans find ways to connect our brains to a digital AGI and thus develop into superintelligent beings ourselves, we would be in a position to keep determining our own destiny. And thanks to modern genetic engineering, we might no longer age and could thus live much longer.

AGI: a curse or a blessing for the human race?

How long it will take to reach the singularity, and what will change as a result, is impossible to say. But we can think through the possibilities. Four scenarios paint a brief picture of the future and its consequences for Homo sapiens – with or without AGI:

  1. Humans do not develop AGI
    Life goes on as before and essentially revolves around the important question (not pursued further here) of how we humans interact with one another and with nature.
  2. Humans develop AGI
    Since, by its very definition, AGI is smarter than humans, it will decide the future of humanity. The following possibilities are conceivable: AGI decides to serve humankind, which could theoretically lead to paradisiacal conditions – whether this would actually make us humans happy is difficult to say. Moreover, it is unlikely that AGI would really want to be our servant. It is more likely to want to make use of human beings for its own purposes – just as we breed farm animals, for example. AGI could, however, also decide to destroy humanity or let it die out [4].
  3. Humans develop AGI and die out
    When the dinosaurs became extinct 66 million years ago, they left behind a biological niche for us humans. Had they survived, we would not exist today. Presumably, there will be no humans left in 66 million years either. It is quite probable that humanity will die out as a result of climate change, environmental pollution, a pandemic, or a world war. It would be a shame if we departed from this planet without leaving a legacy – and AGI would lend itself to being that legacy. We should therefore make sure, before our downfall, that we have developed it far enough to evolve autonomously. Unlike biological beings, AGI can easily be separated from its body and transmitted by radio waves, opening up exciting possibilities for its propagation throughout the universe.
  4. Humans die out before the inception of AGI
    In contrast to the third scenario, we depart from the universe without leaving a legacy behind. Unlike the dinosaurs, which were succeeded by Homo sapiens, humankind will then have failed to create a species to succeed itself. Tough luck.

You might be wondering whether there is really any need to think about these things – indeed, whether there is any point. On the one hand, researching the future of humankind is tremendously interesting from an academic standpoint. On the other, we may also want to shape the future, for example by preventing the development of AGI if we think it could pose a threat to us humans. And we should think these concepts through early, before the singularity happens: once AI merely matched the intelligence of primates, it would already be too late to do anything about it, because really clever AI would replicate itself many times over on the internet and evolve rapidly.

Even if AGI has not yet entered the scene – or never does – the AI systems already in wide-scale use today pose highly tangible dangers. They were developed to lighten the load on us humans, and they do. The world of work will change dramatically as AI and automation take on more and more work in almost all professions. We must therefore think about life in a future in which fewer people practice a profession [5], not least within the limitations of this planet, which our never-ending consumption has long since exceeded [6].