
When AI Analyzes The Risks It Faces Itself

Research team investigates impact of artificial intelligence on data and information security

Over the last two years, the specialist press in Germany has published no fewer than 1,500 articles on the risks that artificial intelligence poses to companies. According to the authors, using AI could exacerbate the strategic and systemic risks faced by companies, undermine data security standards and customer protection, and even render governmental organizations irrelevant as legislation lags behind technological developments. So far, so good. To be honest, though, so much has been written that even an expert could not read and analyze all of the articles. So what can you do? Well, you could let AI analyze the articles about the risks it creates: with AI, it takes only seconds of computer processing to analyze the language content of texts. AI makes it easier to process natural-language text, and it is already used by major international law firms and a growing number of public administration bodies to produce summaries and reviews of large documents. Steinbeis Transfer Institute zeb/business.school and process mining specialist Celonis have been using the technology to model the amplifying and inhibiting effects of causal loops as part of an AI study. Based on their modeling, the experts have also generated short-, medium-, and long-term forecasts.

Using AI to conduct a linguistic analysis of the 1,500 articles on AI risks and security threats resulted in so-called topic clouds, which show the relevance of, and the relationships between, the dominant words featured in the articles. The project team compared a general topic cloud covering all articles with a domain-specific topic cloud that mapped the 105 articles on regulatory issues.

Each topic comprises a mixture of words, and words that occur predominantly within a topic tend to be found side by side in articles. The relative positions on the topic cloud indicate how closely topics are discussed in relation to one another within an article: the closer two topics lie on the map, the more frequently they are discussed together and placed in a similar context.
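
To make the approach concrete, here is a minimal sketch of how such a topic model can be built, using LDA from scikit-learn in Python. The sample texts, the number of topics, and all parameters are illustrative assumptions; the study does not disclose its exact tooling.

# A minimal sketch of topic modeling of this kind, using LDA from
# scikit-learn. The sample texts and parameters are illustrative
# assumptions, not the study's actual setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

articles = [
    "AI regulation demands transparency about how customer data is used",
    "Automation may cut jobs while algorithms analyze client information",
    "Banks invest in AI systems to improve interaction with customers",
    # ... the study analyzed roughly 1,500 German-language articles
]

# Turn raw text into word counts (stop-word handling kept minimal here)
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(articles)

# Each topic is a mixture of words; each article is a mixture of topics
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows: articles, columns: topics

# Words that dominate a topic tend to occur side by side in articles
words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-4:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")

# Topics whose weights rise and fall together across articles would sit
# close to each other on the topic cloud
proximity = np.corrcoef(doc_topics.T)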

The analysis showed that the general debate about AI mainly revolves around access to client information and supplementary data, as well as transparency requirements affecting the use of AI at companies. There was no specific discussion of the role of AI in society. Instead, attention focused on the possibilities AI offers for future interaction between customers and businesses, and on the development of technical systems. Risks did not play a predominant role in the public debate; the only risk to receive some degree of attention was the threat of jobs being lost to automation.

A comparison between the general debate and the discussion within the articles relating to supervisory responsibilities and regulatory affairs provided the project team with a number of interesting insights. In this area, the topics were more closely related. In particular, the discussion of AI within the context of regulatory frameworks took on different nuances and highlighted impacts on the workplace. One important issue was risk, considered from the standpoint of the requirements companies must meet. There were strong correlations between access to information, transparency, and how exactly data is analyzed on the one hand, and customers and the use of customer data on the other.

The search for the causal loop

Drawing on the topics that arose from the linguistic text evaluations, the experts from Steinbeis and Celonis categorized 32 topics as influencing factors; the remaining 12 topics were categorized as areas of risk. The influencing factors range from societal aspects (such as citizens’ opinions of AI or general access to AI technology) and organizational aspects (such as efficiency gains through advanced automation and the replacement of human workers by AI) to technical aspects (such as IT investments and enterprise data management). The areas of risk included threats to reputation, legal threats, operational risks, data security risks, and systemic risks.

For the project team, this was just the starting point. It then asked 50 experts whether they saw positive, negative, or neutral relationships not only among the 32 influencing factors themselves, but also between each factor and the 12 areas of risk. This resulted in a heat map showing inhibiting and amplifying influences.
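
As a rough illustration, the sketch below shows how such a survey could be aggregated into a heat map. The factor and risk labels and the -1/0/+1 scoring scheme are assumptions made for the example; only the counts (50 experts, 32 factors, 12 areas of risk) come from the study, and the ratings here are random placeholders.

import numpy as np
import matplotlib.pyplot as plt

factors = ["opinion of AI", "automation gains", "IT investment"]  # 32 in the study
risks = ["reputation", "legal", "data security"]                  # 12 in the study

# Hypothetical ratings: each of 50 experts scores every factor-risk pair
# as inhibiting (-1), neutral (0), or amplifying (+1); the factor-factor
# relationships would be aggregated in the same way
rng = np.random.default_rng(0)
ratings = rng.choice([-1, 0, 1], size=(50, len(factors), len(risks)))

# Averaging over the experts gives one heat-map cell per pair
heat = ratings.mean(axis=0)

fig, ax = plt.subplots()
im = ax.imshow(heat, cmap="coolwarm", vmin=-1, vmax=1)
ax.set_xticks(range(len(risks)), risks)
ax.set_yticks(range(len(factors)), factors)
fig.colorbar(im, label="inhibiting (-1) ... amplifying (+1)")
plt.show()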

Neural forecasting – setting the loops in motion

The assessments offered by the experts evaluated one relationship at a time, without considering interdependencies. Networks of interrelationships on this scale (in this case comprising more than 1,400 individual links) cannot possibly be evaluated by human beings. The research team therefore transferred the expert evaluations to a self-learning artificial neural network that simulates all influences in the form of limiting conditions. This allowed forecasts to be made by calculating future developments: if a large number of inhibiting influences act on a factor, that factor does not progress and loses importance; amplifying influences, by contrast, cause factors to evolve and become increasingly relevant.
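
Under strong simplifying assumptions, the following sketch illustrates the forward-simulation part of this idea: the expert ratings become a signed weight matrix, and each factor’s relevance is updated iteratively so that amplifying influences strengthen it and inhibiting influences weaken it, with a sigmoid acting as the limiting condition. The random weights, the update rule, and the horizons are assumptions; the sketch does not reproduce the self-learning aspect of the study’s network.

import numpy as np

n = 44  # 32 influencing factors + 12 areas of risk
rng = np.random.default_rng(1)

# W[i, j] is the signed strength with which factor j influences factor i
# (negative = inhibiting, positive = amplifying); random here, but in the
# study it would be derived from the expert heat map
W = rng.uniform(-1, 1, size=(n, n))

relevance = np.full(n, 0.5)  # start every factor at medium relevance

def step(x, W):
    # Net influence on each factor; the sigmoid keeps relevance in (0, 1),
    # acting as the "limiting condition" on growth
    return 1.0 / (1.0 + np.exp(-(W @ x)))

# Iterate to obtain short-, medium-, and long-term forecasts
for horizon in (1, 5, 20):
    x = relevance.copy()
    for _ in range(horizon):
        x = step(x, W)
    print(f"after {horizon} steps, top factors:", np.argsort(x)[-3:][::-1])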

The factors affecting the systemic and strategic risks faced by financial service providers can then be summarized by assessing whether, as they develop, they have a positive or negative impact on the areas of risk, in other words whether they make those risks more or less relevant. The most important factors fueling strategic risk at the moment are a lack of suitable data for conducting assessments and a lack of clarity regarding regulatory conditions. Laws and regulations use wording that cannot be directly translated into the workings of algorithms, and such “language barriers” result in uncertainty. One short-term factor that mitigates risk is using AI to support human beings in making decisions rather than replace them.

Because algorithms are becoming more and more sophisticated, experts agree that AI will, on average, help reduce systemic risk. If a risk does materialize, however, the shock will be more intense, because algorithms are not trained to deal with extreme events for which they have little or no data.

AI does not replace people – it replaces occupations and qualifications

Innovations created by AI cannot push human beings aside. Typical human skills will play an even more crucial role in the future, but they will need to adjust to new challenges. It will be necessary to redefine roles within all processes of management and work, and the principles of lifelong learning will need to be integrated into everyday work. Twenty years ago, there was no such thing as a data scientist or software developer. Today, there are not enough such professionals on the labor market. The result of this development will be the emergence of completely new jobs, ranging from user experience designers (who optimize human-machine interactions) to virtual assistants (who no longer provide on-site support but help remotely via online tools).

So what does the team’s simulated forecast tell us? In the future, merely understanding the methods used by AI and maintaining transparency will not be enough to learn from available data. AI methods can currently still offer a competitive advantage, but in the future they will probably become a commonplace asset. What really matters is that we do not just promote technology and infrastructure in the short term, but also support business concepts and their implementation. Whether sharing AI and making it accessible in the market proves successful will be determined by the degree of clarity regarding the use of information and the comprehensibility of AI output.

If we look at current developments in AI, it is clear that improving data and information security is not the same thing as mitigating strategic risk. Aside from specialist knowledge and skills, it will be necessary to develop and test new forms of human-machine interaction. Technology based on AI will not increase risk but reduce it. However, this will only happen if organizations and the way technology is used become more user-centric, in other words human-centric.