First edition of the German Standardization Roadmap Artificial Intelligence
Groundbreaking results and action recommendations
After about a year of intensive work, the first edition of the Standardization Roadmap AI was presented and handed over to the Federal Government at its Digital Summit on 30 November 2020. It provides a comprehensive overview of the status quo, requirements and challenges, as well as the need for standardization, across seven key topics related to artificial intelligence.
The approximately 300 experts who contributed to the first edition of the Roadmap formulated five central, cross-sectoral recommendations for action. Implementing these recommendations will help strengthen German industry and science in international competition in the field of artificial intelligence and create innovation-friendly conditions for this technology of the future. In addition, they aim to foster trust and confidence in AI.
The complete set of recommendations and action items can be found in the first edition of the Standardization Roadmap AI. If you would like to participate in the implementation of the recommendations for action, you are welcome to register directly with DIN.ONE.
The Standardization Roadmap AI will be regularly updated and refined to take changing requirements into account. A kick-off event was scheduled for 20 January 2021 to commence the work. Experts from industry, civil society, science and the public sector are cordially invited to contribute to the second edition of the Roadmap.
Seven key topics of the Standardization Roadmap AI
What is artificial intelligence? How can AI applications actually be assessed - and on the basis of which ethical, legal and technical criteria? In brief: those who want to discuss AI must first clarify the basics.
Basics of AI:
- Terminologies
- Classifications (e.g. of AI methods, capabilities, applications ...)
- Data (data analyses, data formats, data quality ...)
Whether AI systems act ethically or lead to discrimination, injustice and other risks through unintended bias is one of the major public debates on this topic. These dangers must be minimized, especially where critical AI applications can affect life and limb or cause major financial losses. At the same time, it is important not to slow down the further development of the technology. Standards and specifications can be used to describe minimum ethical requirements for AI applications and thus create trust and acceptance.
Artificial intelligence only develops its full potential if it is of high quality. It must be reliable, robust and efficient, and needs functional safety to inspire confidence. Quality criteria and test methods are necessary to ensure this. Standards and specifications describe requirements for these properties and thus form the basis for the certification and conformity assessment of AI systems.
Without comprehensive security/safety and risk minimization no car will drive, no plane will fly, no operation will be performed and no house will be built. Innovations only become economically viable when safety and security in use are ensured. This also applies to AI. Probably the greatest challenge for the industrial use of AI systems is to prevent manipulation and thus establish trust in (IT) security and in the AI system itself. Standards and specifications describe clear requirements for meeting this challenge.
Germany is a leader in Industrie 4.0. AI can expand this position and thus further strengthen Germany's economic performance. In particular, it can make procedures and processes in the manufacturing industry more dynamic and flexible and thus increase value creation. However, these opportunities must be seized - standards and specifications can help here, for example by defining interfaces for interoperability and ensuring the quality of the data selected for AI systems' learning processes.
AI holds massive innovation potential for mobility and logistics - it is the basis for making new mobility solutions such as autonomous driving a reality. But how do you make sure that AI is safe on the roads and does not pose a danger to other road users?
Standards and specifications promote safe AI-controlled mobility:
- On a technical level, standards and specifications help to ensure the safety of autonomous vehicles at commissioning, for example by describing clear requirements for test methods.
- AI systems for mobility and logistics must be explainable and verifiable. This is the only way to understand how they make decisions in road traffic. Standards and specifications can help here.
- AI-controlled cars, trucks or trams - they all have to interact with each other in traffic. For this to work, they need systems that can work together. Uniform standardized data models form the basis for their interoperability.
AI brings new possibilities for medicine in prevention, diagnostics and therapy - from early detection via apps to the treatment of cancer. In order to take advantage of such opportunities, secure framework conditions are necessary. Challenges remain to be mastered, especially regarding ethics, legal frameworks, economics and technical aspects, but also acceptance and empathy. What regulations are needed so that technology always serves people and not the other way around?
The success of AI in medicine depends mainly on the following points:
- How can the availability and quality of health data for AI development be ensured - and at the same time how can these data be protected?
- Legal framework: Who is liable for misdiagnoses or damage? How can self-learning AI be brought into line with the highly regulated approval procedure?
- Ethical questions: To what extent should machines be involved in medical decisions - or even make these decisions themselves?
Key recommendations
Many different actors come together in value chains. In order for the various AI systems of these actors to be able to work together automatically, a data reference model is needed to exchange data securely, reliably, flexibly and compatibly. Standards for data reference models from different areas create the basis for a comprehensive data exchange and thus ensure the interoperability of AI systems worldwide.
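As an illustration, a data reference model can be thought of as a common schema that all participating systems serialize to and from. The following sketch uses Python dataclasses and JSON; the record type and field names are hypothetical, chosen purely for illustration, and do not come from any actual standard.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical shared data reference model: producer and consumer both
# agree on this schema, so records can be exchanged between otherwise
# independent AI systems.
@dataclass
class SensorRecord:
    sensor_id: str
    timestamp: str   # ISO 8601, as agreed in the reference model
    value: float
    unit: str        # agreed unit, e.g. "celsius"

def serialize(record: SensorRecord) -> str:
    """Producer side: emit the record in the agreed JSON format."""
    return json.dumps(asdict(record))

def deserialize(payload: str) -> SensorRecord:
    """Consumer side: reconstruct the record from the agreed format."""
    return SensorRecord(**json.loads(payload))

record = SensorRecord("plant-7/temp-1", "2020-11-30T12:00:00Z", 21.5, "celsius")
assert deserialize(serialize(record)) == record  # lossless round trip
```

The point of standardizing such a model is that the round trip above works across organizational boundaries: any system that implements the agreed schema can consume any other system's records.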
AI systems are essentially IT systems - for the latter there are already many standards and specifications from a wide range of application areas. To enable a uniform approach to the IT security of AI applications, an overarching "umbrella standard" that bundles existing standards and test procedures for IT systems and supplements them with AI aspects would be expedient. This basic security standard can then be supplemented by subordinate standards on other topics.
When self-learning AI systems decide about people, their possessions or access to scarce resources, unintended behaviour can endanger individual fundamental rights or democratic values. So that AI systems in ethically uncritical fields of application can still be developed freely, an initial criticality check should be defined through standards and specifications - this would quickly clarify, with legal certainty, whether an AI system can trigger such conflicts at all.
So far, there is a lack of reliable quality criteria and test procedures for AI systems - this endangers the economic growth and competitiveness of this future technology. A national implementation programme, "Trusted AI", is needed to lay the foundation for reproducible and standardized test procedures. These would test AI system characteristics such as reliability, robustness, performance and functional safety, and support statements about trustworthiness. With such an initiative, Germany has the opportunity to develop a certification programme that would be the first of its kind in the world and internationally recognized.
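In its simplest form, a reproducible robustness test perturbs an input and checks that the model's decision stays stable. The sketch below is a minimal, hypothetical illustration of that idea - the stand-in classifier, the perturbation size and the fixed seed are assumptions for demonstration, not part of any actual test standard.

```python
import random

def classify(x: float) -> int:
    """Stand-in model: a hypothetical threshold classifier."""
    return 1 if x >= 0.5 else 0

def robustness_check(model, x: float, epsilon: float,
                     trials: int = 100, seed: int = 0) -> bool:
    """Check that the decision does not flip under small input
    perturbations of magnitude up to epsilon. Fixing the seed makes
    the procedure repeatable, a prerequisite for standardized testing."""
    rng = random.Random(seed)
    baseline = model(x)
    return all(model(x + rng.uniform(-epsilon, epsilon)) == baseline
               for _ in range(trials))

# An input far from the decision boundary is robust; one on the
# boundary is not, because tiny perturbations flip the decision.
assert robustness_check(classify, 0.9, epsilon=0.05)
assert not robustness_check(classify, 0.5, epsilon=0.05)
```

A standardized procedure would pin down exactly these free choices - perturbation model, number of trials, pass criteria - so that different test laboratories obtain comparable results.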
AI research and the industrial development and application of AI systems are highly dynamic, and there are already many applications across the different fields of AI. Standardization needs for AI applications ready for industrial use can be derived from application-typical and industry-relevant use cases. To shape standards and specifications, it is important to integrate input from research, industry, society and regulation. At the centre of this approach, the developed standards should be tested and refined on the basis of use cases. In this way, application-specific requirements can be identified at an early stage and marketable AI standards realized.