In recent years, AI regulation has become a hotly debated topic among policymakers and businesses worldwide, including at the 2023 G7 Summit. The regulatory leader in this space is the EU, which has already developed concrete AI standards, guidelines, and legislation.
In particular, the EU’s AI Act is expected to be finalized in the coming months, even as Japan maintains its “soft law” approach to AI regulation. Japanese companies will need to pay close attention to the standards put forward by the AI Act and consider how to respond moving forward.
This article describes what to pay attention to in 2023 regarding AI regulation. It will be useful if your company has exposure to the European market or citizens, or if you’re interested in the practical impact that AI regulation may have on your business.
General Timeline of Standards Development
As Dr. Yutaka Oiwa stated at the AI Quality Management Symposium in 2022, the discussion surrounding AI standards and regulation has matured through several stages. In this article, we’ve consolidated these efforts into four levels.
Level 1: Social Principles (2018–)
This first stage gradually materialized as AI technology became more widespread. The main focus here is defining various principles that AI systems should fulfill, such as fairness, safety, and transparency.
Although these principles don’t force AI developers to do anything, they later became the basis of technical guidelines. Here are some examples of these documents.
Level 2: National Frameworks and Social Guidelines (2020–)
At this stage, we can see the initial versions of frameworks created by government institutions.
Although the content of these frameworks was not yet finalized, they do describe a general, high-level overview of how to govern an end-to-end AI development pipeline, from data collection to deployment. The following are examples of such documents.
Level 3: Technical Guidelines and Standardization (2021–)
Technical AI standards have attracted a lot of attention over the last few years. Several industry-backed guidelines have been published, in addition to standards based on real-world applications that have been developed by international organizations such as ISO/IEC and IEEE.
- QA4AI (Consortium of Quality Assurance for Artificial-Intelligence-based products and services)
- Machine Learning Quality Management Guidelines (AIST)
- ISO/IEC JTC 1/SC 42 AI Standards
- IEEE AI Standards
Level 4: Harmonization and Growth of Standards/Regulations
The phases so far have quickly evolved from one to the next, but the next phase should be longer-lasting.
In general, the global trend has shifted from the perspective of end users (making AI systems fair, understandable, controllable) to the organizational and engineering processes to make those expectations a reality.
- NIST AI Risk Management Framework: v1.0 has been published
- EU AI Act: To be finalized in the coming months
- ISO, IEEE, etc.: Multiple standards have been published and will be harmonized through the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC).
Important opportunities are coming up for every company to take part in the debate about these trends and to develop and demonstrate robust AI systems. These opportunities will not only help companies streamline their development processes, but also earn the trust of users, investors, and regulators.
To understand more about global AI regulation, we need to first dive deeper into AI regulation in Europe. The EU’s regulatory leadership will provide important precedent for global trends in the next few years.
EU AI Act
The EU is taking a risk-based approach to regulating AI systems. Under this approach, the AI Act assesses the potential dangers of AI applications and prioritizes high-risk applications for regulation.
The AI Act relies on government-industry collaboration, regulatory sandboxes, and international standards that establish technical requirements with legally binding force.
One of the areas most heavily impacted by this new legislation is the “high risk” domain of AI medical technology. We believe this domain is the best-equipped to deal with new AI-focused requirements because companies already need to meet strict certification requirements for market approval.
If you’re interested in the latest about the EU AIA, we recommend subscribing to the Artificial Intelligence Act newsletter.
A crucial point to note is that the AI Act itself does not include technical specifications. Therefore, technical standard documents created by organizations like ISO play a critical role in filling the gap between the general requirements set by the AI Act and concrete implementation, allowing system developers to demonstrate conformity with the Act.
These standards define clear benchmarks and metrics that AI systems must meet while simultaneously providing an overview of tools and processes for appropriately constructing AI systems. Companies that follow these standards will be presumed to be in compliance with the AI Act.
Several of these technical standards are already published, and some are still in development. Some argue that publishing standards quickly enough to keep up with the pace of AI technological development is unreasonable. Later, this article also discusses ISO standards, but first we’ll describe the state of AI regulation in Japan.
AI Regulation in Japan
So far we’ve been describing trends in the EU, but now let’s turn our attention to Japan.
Like the EU, Japan also has legislation surrounding AI, but its content is more cautious compared with the EU’s. This legislation regulates specific uses of AI in individual industries, in contrast with the horizontal approaches to AI regulation we’ve seen so far.
For example, the amended Road Traffic Act and Road Transport Vehicle Act only mention one specific use of AI: “Level 4” autonomous driving. In addition, in the “Guidelines on the Assessment of AI Reliability in Plant Safety”, the description of quality evaluation methods for AI is limited to the specific domain of industrial plants.
The thinking behind Japan’s approach is described in the “Report from the Expert Group on How AI Principles Should Be Implemented” from the Ministry of Economy, Trade and Industry (METI).
In addition, specific AI-based technologies themselves should not be subject to mandatory regulation. Even when mandatory regulation is necessary, its scope must be carefully defined regarding AI uses and fields of application so that regulation does not extend into unintended areas. This is because the potential benefits and damage to society differ depending on the specific use of the technology (e.g. field of use, purpose of use, scale of use, scene of use, whether the impacted target is unspecified, whether advance notice is possible, whether opt-outs are possible, etc.).
In addition, according to the Cabinet Office, a goal of Japan’s AI strategy is to become a leader in international research, training, and social networks.
There is in fact movement in this direction. Several notable international standards are being drawn up by Japanese academics and industry specialists, for example: ISO/IEC TR 24030 (AI use cases), ISO/IEC DIS 5338 (AI system life cycle), ISO/IEC 5259-2 (Data quality measures), and IEEE P7001 (Transparency of Autonomous Systems).
Given that, it should come as no surprise that Japan also has a national committee for the ISO/IEC JTC 1/SC 42 subcommittee, which works on AI-related standards. Its activities create a common foundation for projects and research hosted by the National Institute of Advanced Industrial Science and Technology (AIST) and the Japanese Standards Association (JSA) under METI. Tokyo has hosted several important conferences, such as SC 42 meetings and, recently, the international GPAI 2022 Conference.
The Japanese AI industry currently finds itself in a situation where it isn’t subject to compulsory regulatory pressure. On the other hand, it’s also establishing communities for discussing these kinds of topics. The Japan Deep Learning Association (JDLA) and the Japanese Society for Artificial Intelligence (JSAI) are examples of such communities, and they host various events and offer useful resources.
Moreover, the Machine Learning Quality Management Guidelines, a document that comprehensively describes the entire AI pipeline from data sourcing to risk management, was also authored in Japan. It was written by AIST and comes with an experimental testbed tool called Qunomon. In addition, Japan’s first AI product quality assurance guidelines have been created based on real case studies. These were created by industry experts affiliated with QA4AI.
Survey of Japanese Companies
To investigate how widespread these standards and guidelines are in the Japanese AI industry, we conducted a survey about AI standards and guidelines, targeting AI developers at leading companies in Japan.
In this survey, nearly 60% of respondents said they were unaware of international standardization trends, particularly in ISO/IEC and IEEE. Even for Japan’s “Machine Learning Quality Management Guidelines” and “AI Product Quality Assurance Guidelines,” only 30% responded that “we’re already following these guidelines in our AI development process”.
On the other hand, if we include the response “we’re considering incorporating these guidelines in 2023”, this reaches 40% of companies, indicating that awareness of AI quality and reliability is gradually increasing.
This survey shows that not all companies in Japan are knowledgeable about AI standards yet, though more companies expect to invest in this area in the future.
ISO/IEC AI Standards
ISO/IEC JTC 1/SC 42 is the group within the ISO and IEC organizations working on developing 40+ technical AI standards. Specifically, ISO/IEC JTC 1 is a joint committee focused on information technology standards such as JPEG and C++, and SC 42 is its subcommittee that focuses on AI technology.
ISO/IEC 42001 (AI Management System) is a core standard of the SC 42 group, and it parallels the well-known cybersecurity standard ISO/IEC 27001 (Information Security Management System), which virtually all major tech companies conform to.
Japan is also advancing efforts to have industry proposals and guidelines accepted as part of these official ISO standards. For example, the aforementioned “Machine Learning Quality Management Guidelines” were originally inspired by the IEC 61508 standard, and its authors are now trying to refine it for inclusion in the ISO CD TR 5469 “Functional safety and AI systems” standard.
The independent, technical ISO standards are among the best areas for companies to invest their engineering resources, because these standards are likely to remain relevant and highly valued well into the future. Here are a few examples of standards that are already published:
1. ISO/IEC TR 24027 (Bias in AI systems)
This is one of the documents that’s already well known in the field of AI standards. It includes an overview of bias in AI-aided decision making, as well as a list of concrete fairness metrics with mathematical formulas for evaluating models.
2. ISO/IEC TS 4213 (Assessment of machine learning classification performance)
Looking at accuracy alone can be misleading in many cases, so this document presents a wide array of recommended metrics beyond accuracy, for binary, multi-class, and multi-label classification, that should be included in performance reports.
3. ISO/IEC TR 24029-1 (Assessment of the robustness of neural networks)
This document explains various statistical, formal, and empirical methods for assessing the robustness of neural network models (for example, robustness against blurry input images).
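To make the flavor of these documents concrete, here is a minimal Python sketch of two kinds of metrics they describe: classification metrics beyond accuracy (in the spirit of ISO/IEC TS 4213) and a simple demographic-parity gap (one of the many fairness metrics surveyed in ISO/IEC TR 24027). The function names and simplified formulas are ours, not the standards’ official definitions:

```python
# Illustrative only: simplified versions of metrics in the spirit of
# ISO/IEC TS 4213 and ISO/IEC TR 24027 -- not the standards' official text.

def precision_recall_f1(y_true, y_pred, positive=1):
    """Binary precision, recall, and F1: the kind of metrics TS 4213
    recommends reporting alongside plain accuracy."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def demographic_parity_gap(y_pred, groups):
    """One common fairness metric (TR 24027 surveys many more): the gap
    in positive-prediction rates between demographic groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# A heavily imbalanced example: accuracy looks fine, recall does not.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [1] * 3 + [0] * 7
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
prec, rec, f1 = precision_recall_f1(y_true, y_pred)
print(f"accuracy={accuracy:.2f} precision={prec:.2f} recall={rec:.2f}")
# accuracy is 0.93, yet recall on the positive class is only 0.30

groups = ["a"] * 50 + ["b"] * 50
print(f"demographic parity gap: {demographic_parity_gap(y_pred, groups):.2f}")
```

In practice, the standards define these metrics with far more nuance (confidence intervals, multi-class averaging, additional fairness criteria), so treat this as orientation rather than a conformity check.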
To keep up to date with the latest on AI standardization, we recommend subscribing to the AI Standards Hub newsletter.
Within the ISO terminology, the documents above are classified as “Technical Reports” or “Technical Specifications”. They have several important benefits.
- These standards are a key component in demonstrating that an AI system conforms with the EU AI Act.
- These standards have been created by experts from international organizations and hold significant weight globally. They’re helpful for clearly communicating the state of your AI model both within your team (including reports to non-technical audiences) and outside your team (for marketing purposes).
- They focus on the technical aspects of AI implementation, referring to the metrics that should actually be measured, which makes them straightforward for engineers to act on.
- Since these standards are comprehensive overviews of AI validation techniques, you may discover some points you haven’t paid attention to before, which are usually a great opportunity to improve your model.
In addition to ISO standards, IEEE’s AI standards are also useful. AI Watch, for example, rates standards such as P7003 and P7001 as “valuable content on operational requirements related to AI bias, human oversight, record keeping, and risk management,” filling in the gaps left by other documents.
We hope to cover these IEEE standards in more detail in future articles.
Many of our readers may have already developed in-house AI evaluation platforms, or use AI testing automation software like Citadel Lens, to evaluate model quality against internally-agreed metrics and KPIs.
The reality is that the days of one data scientist developing and testing a model in a Jupyter Notebook are coming to an end. Companies with sufficient engineering investment in technical and compliance issues will likely be able to smoothly transition to a post-standardization world and earn the “AI Trust” label.
To keep engineering and compliance teams from being overwhelmed by the regulatory burden of AI systems, it’s critical to develop AI quality assurance and risk management processes early and to adopt automated testing tools that streamline the AI validation process.
Citadel AI’s products can automatically test your AI systems and generate reports based on these standards, with minimal effort from your engineering and compliance teams.
If you’re developing machine learning models and would like a demo of Citadel’s products, please contact us here.