WeKI-Go – Ensuring AI Conforms with Values

Experts at the Ferdinand Steinbeis Institute support SMEs on the path to forms of artificial intelligence that people can trust

Artificial intelligence (AI) should at all times be equitable, transparent, and sustainable, and it should never harm people. At the same time, it should deliver value to business enterprises. Scientists at the Ferdinand Steinbeis Institute (FSTI) are currently working on how best to manage this delicate balancing act as part of the WeKI-Go project, commissioned by the Baden-Württemberg Foundation.

There is currently a lot of hype surrounding AI, much of it fueled by generative systems such as chatbots and image generators. AI holds enormous economic potential for companies in all areas of industry. Using AI also harbors risks, however: phenomena such as hallucinations and fakes can produce spurious output and confront companies with a number of challenges.

Working on behalf of the Baden-Württemberg Foundation, an FSTI research team from Stuttgart has therefore been considering how companies can apply AI in ways that conform with their business values. In specific terms, the research project, which goes by the acronym WeKI-Go (short in German for “Recommendations for Value-Compliant AI Deployment Based on Case-Specific Governance Concepts”), involves developing a governance framework model that offers companies with little AI experience a set of recommended actions for meeting those challenges.

AI works by recognizing patterns in data and then using those patterns to estimate the probability of certain outcomes in the future. Although this principle is not inherently linked to norms of human behavior, it can be applied to numerous areas and contexts using different machine learning methods. In contrast to previous technical solutions, AI is often also associated with an immediate decision or action, taken by a computer autonomously, without the involvement of a human decision-maker.
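To make this principle tangible, the following is a minimal sketch of a model learning patterns from a handful of historical cases and then estimating the probability of an outcome for a new case. The scenario, features, and figures are hypothetical and are not taken from the WeKI-Go project.

```python
# Minimal sketch (hypothetical data, not from WeKI-Go): a model learns
# patterns from past cases and estimates outcome probabilities.
from sklearn.linear_model import LogisticRegression

# Invented historical data: [orders_per_month, payment_delays] per customer,
# and whether that customer later defaulted (1) or not (0).
X = [[12, 0], [3, 4], [8, 1], [1, 6], [15, 0], [2, 5]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)  # step 1: recognize patterns in the data

# Step 2: use those patterns to estimate the probability of default
# for a new, unseen customer.
print(model.predict_proba([[5, 3]])[0][1])
```

The key point is that the output is a probability derived from past patterns, not a certainty, and in many deployments it feeds directly into an autonomous decision.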

Value conformity – looking beyond the surface

As part of its AI strategy, Germany wants to differentiate itself from other international stakeholders through AI that conforms to certain values. One of its goals is to “find a European response to data-based business models and new ways of data-based value creation” that corresponds to the country’s economic and social structures and its values.

But what does “AI that conforms to values” actually mean? Value-conformant AI is defined by the following criteria:

  • Data protection: AI solutions conform to legal requirements regarding data protection.
  • Fairness: AI solutions do not discriminate, and they deal with all circumstances according to the same principles (a simple version of such a check is sketched further below).
  • Sustainability: AI should be guided by the principles of social, environmental, and economic sustainability. Much of the current discussion revolves around the energy consumed by pattern recognition.
  • Non-harmfulness: AI solutions should not harm people’s health (in medical applications, for example) and, as far as possible, they should not cause financial damage.
  • Transparency: It should be possible to understand decisions made by AI. Presently, this is not always the case, especially with so-called deep learning.
  • Responsibility: Organizations should allocate clear responsibilities regarding the development, operation, and application of AI; this should be clearly set out, for example in emergency planning and approval procedures.

That all seems clear-cut, but ensuring these criteria are met is no mean feat, and for companies developing or deploying AI solutions for the first time, this raises many questions.
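To give a feel for what meeting even one of these criteria can involve, the following is a minimal sketch of a fairness check based on demographic parity. The decisions, group labels, and threshold are invented for illustration and are not part of the WeKI-Go framework.

```python
# Minimal sketch (hypothetical data, not from WeKI-Go): checking decisions
# for one simple notion of fairness, demographic parity.

decisions = [1, 1, 1, 0, 1, 0, 0, 0]              # 1 = application approved
group = ["A", "A", "A", "A", "B", "B", "B", "B"]  # protected attribute

def approval_rate(grp):
    # Share of positive decisions within one group.
    outcomes = [d for d, g in zip(decisions, group) if g == grp]
    return sum(outcomes) / len(outcomes)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"Approval-rate gap between groups: {gap:.2f}")

# A large gap would flag the AI solution for review under the fairness
# criterion; the 0.2 threshold here is an arbitrary example.
if gap > 0.2:
    print("Potential fairness issue: review required")
```

Even this toy check raises governance questions, such as which attributes count as protected and what gap is acceptable, which is exactly the kind of guidance companies new to AI are missing.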

An AI governance framework

Drawing on a literature review, more than 20 expert interviews, and discussions with DIN and VDE committees working on the standardization or certification of AI solutions, scientists at the FSTI worked with the University of Stuttgart’s Chair of Business Administration and Information Systems 1 to develop an AI governance framework model. This comprises more than 100 individual measures to ensure AI is designed and deployed in a value-conformant way.

For the final part of the project, the team is currently working on three areas: expressing the measures in concrete terms; adapting them on a case-by-case basis, according to a set of rules and a knowledge graph, to both the situation faced by a company and the goals of AI deployment; and providing concrete templates. For example, the recommendations made regarding data protection officers depend on the size of the company, and potential recommendations regarding stakeholder management depend on the context in which AI is applied and the stage of the AI life cycle. Checklists are being developed for companies to use for approval and risk management purposes. On completion of the project in October 2023, the results will be made available to others in the form of an online questionnaire.
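The case-by-case adaptation could work along the lines of the following sketch, in which measures from a catalog are selected according to simple rules about a company’s situation. The measures, conditions, and company profile shown are invented for illustration and do not reproduce the actual WeKI-Go rule set or knowledge graph.

```python
# Minimal sketch (hypothetical rules, not the WeKI-Go rule set): selecting
# governance measures from a catalog based on a company's situation.

measures = [
    {"id": "M01", "text": "Appoint a data protection officer",
     "applies_if": lambda ctx: ctx["employees"] >= 20},
    {"id": "M02", "text": "Document the provenance of training data",
     "applies_if": lambda ctx: ctx["develops_ai"]},
    {"id": "M03", "text": "Define an emergency shutdown procedure",
     "applies_if": lambda ctx: ctx["lifecycle_stage"] == "operation"},
]

# Hypothetical company profile, e.g. collected via an online questionnaire.
context = {"employees": 35, "develops_ai": False, "lifecycle_stage": "operation"}

# Recommend only the measures whose conditions match this company.
for m in measures:
    if m["applies_if"](context):
        print(m["id"], "-", m["text"])
```

In the project itself, this role is played by a set of rules and a knowledge graph rather than hard-coded conditions, so the mapping from company situation to recommended measures can be maintained and explained.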

AI at SMEs

In parallel with completing the WeKI-Go project, the FSTI will continue to devote energy to the issue of AI at SMEs. In April 2023, a joint research project was launched with Dresden University of Technology to look at the application of AI assistance systems in production. The aim of the project is to accelerate the introduction of trusted AI systems at SMEs and thus help address the shortage of skilled workers.

Contact

Maximilian Werling (author)
Research Assistant
Ferdinand Steinbeis Institute, Stuttgart
https://ferdinand-steinbeis-institut.de

Dr. Jens F. Lachenmaier (author)
Senior Researcher
Ferdinand Steinbeis Institute, Stuttgart
https://ferdinand-steinbeis-institut.de
