31 July 2023

Standardisation, trust and democratic principles: The global race to regulate artificial intelligence

Report by

Dr José-Miguel Bello y Villarino
Research Fellow, University of Sydney Law School

David Hua
Research Assistant, ARC Centre of Excellence for Automated Decision-Making and Society

Barry Wang
Former Research Officer, ARC Centre of Excellence for Automated Decision-Making and Society

Melanie Trezise
PhD Candidate in Law, University of Sydney

Executive summary

  • The race is on to lead the future of Artificial Intelligence (AI). The United States, the European Union and China are attempting to shape not only the development of the technology, but also the governance and regulation of its use.
  • The race to regulate AI is a key aspect of strategic competition as major powers seek to export value systems through the technical standards they promote. This can be seen in the alignment of governance initiatives led by the European Union and the United States and their disparity with China’s initiatives.
  • Across multiple US administrations, the United States has remained committed to the promotion of “AI with American values” through voluntary standardisation measures and has led the charge in the creation of foundational global standards and an exportable domestic AI Risk Management Framework (AI RMF).
  • Australia is still considering its place in this complex global competition. In June 2023, the Australian Government’s Department of Industry, Science and Resources released a Discussion Paper on Safe and Responsible AI to consider whether further regulatory and governance responses are required to ensure appropriate safeguards are in place.
  • This report considers the main regulatory frameworks worldwide, with examples from the United States, the European Union, Japan, China and Canada, as well as various international initiatives. It then evaluates the possible pathways for Australia’s engagement in AI regulation.

Introduction: AI, standardisation, trust and power

It is difficult to overestimate the international implications of the accelerated development and deployment of Artificial Intelligence (AI) systems across the globe. As noted by Antony Blinken, the US Secretary of State, “[t]he world’s leading powers are racing to develop and deploy new technologies like artificial intelligence (…) that could shape everything about our lives — from where we get energy, to how we do our jobs, to how wars are fought.”1 For the United States, maintaining its scientific and technological edge in this race is critical, and it is willing to invest significantly, economically and politically, in support of this objective.2

However, there is also a quieter — and perhaps more complex — race than the technological one: a race to influence and control the regulation of AI. The United States, the European Union and China are involved in a global game where incentives, standards and hard regulation are intertwined with geopolitical, technological and value-driven interests. Given the global dimension of this race, an array of policy actions and alliances is being deployed at the national and international levels by key global actors in a remarkable exercise of regulatory power.

The objective of this report is to deliver an overview of the current regulatory race, including how the US-declared intention of leadership in this race is playing out, the alliances that are being established and the outcomes we can expect.

Despite its secondary position, Australia has a role to play in this race, and it is a complex one. Blindly following international alignments — such as routinely adhering to US or British policies — may not optimally serve Australia’s interests. Australia must consider its political, economic and strategic interests with an adequate balance between its traditional allies and key trading partners, in a manner that ensures that developers, users and “subjects”3 of AI in Australia enjoy all the opportunities offered by the technology. At the same time, Australia’s stakeholders must be reassured by an adequate regulatory framework that respects Australia’s domestic values. Beginning with the United States, this report provides an overview of the international landscape of regulatory initiatives from key global AI players and proposes some targeted interventions that could help Australia achieve those complex objectives.

The report generally assumes that there is currently no willingness in the Australian Government to introduce new broad-scope regulations for AI, but that there is an interest within the Labor government in exploring available regulatory options.4 However, it acknowledges that, beyond ChatGPT, many decision makers still see AI risks as fairly theoretical, even if they are aware that more tangible risks, such as autonomous vehicles on public roads, are not properly addressed by ongoing initiatives.

The report considers that targeted regulatory interventions are a feasible option for mitigating the risks of AI systems in Australia. It pays particular attention to standardisation and regulatory processes at the international level and in the United States, the European Union, Japan, China and Canada, as possible alternative paths to shape a regulatory framework for AI systems suitable for Australia.

Technical standards5 seem unexciting and often impenetrable. Nevertheless, they are becoming the cornerstone of domestic regulatory approaches to AI and the foundations of future international rules for the use of this technology, as they facilitate interoperability with other jurisdictions. Standards are often expressed in specialist language and negotiated among technical experts in processes that are rather obscure to those not directly involved in their development. This report brings to the attention of Australian audiences a domain that too often escapes policy and legal analysts.

The starting point of the analysis is the idea of ‘trustworthy AI.’ There is a clear interest in Australia in ensuring that we can trust AI systems, even if stakeholders have different views about what that means. The report approaches this demand from a comparative perspective and considers how that idea is being embedded in standards and other types of regulation.

Section 1 of this report presents the situation in the United States regarding the regulation of AI and the use of standards in this context. Section 2 explores the international relevance of standards. Section 3 provides an overview of the different actors and their global alignments, starting with the European Union. In section 4, we introduce the most notable transnational initiatives in the AI domain. The fifth and final section of the report offers several policy choices for consideration by Australian authorities and broader society, in the hope of improving awareness of what is at stake.

1. In AI we (may) trust

The concept of trust is at the core of recent approaches to the soft regulation of AI, especially in the United States. This section explores what it means and why it matters. It opens with a description of the official position of the United States, as reflected in its legislation and government documents, as well as the values that underpin the conception of trust in its vision of AI. It then explores standards as a vehicle to channel those values, looking at how the international aspects of that dynamic are currently developing.

Trustworthy AI in the official US view

The origins of US AI regulatory policy can be traced back to 2016, when the Obama administration released the first official AI-focused documents, including Preparing for the Future of Artificial Intelligence and the National Artificial Intelligence Research and Development Strategic Plan.6 These documents focused AI regulatory policy on facilitating innovation through minimal market intervention and private-sector self-regulation, alongside promoting ethics and diversity in the AI research and development (R&D) pipeline.7 The Trump administration continued this emphasis on restricting new regulatory interventions by establishing the American AI Initiative (under Executive Order (EO) 13859 of 2019, titled Maintaining American Leadership in Artificial Intelligence), which articulated an approach defined by public engagement, limited regulatory overreach to avoid hampering innovation, and the promotion of trustworthy AI.8

One of the strongest messages from the United States about the global regulatory framework for AI appeared in the text of the 2019 Trump administration EO 13859. It declared that “continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.”9 This should be done while “promoting the trust of the American people in the development and deployment of AI-related technologies” [emphasis added],10 anticipating one of the strategic pillars identified in the National Artificial Intelligence Initiative Act (NAIIA) of 202011 — “advancing trustworthy AI” — the main piece of legislation about AI in the United States.

The genesis of NAIIA and its key components

The NAIIA is a piece of bipartisan legislation that was enacted under the National Defense Authorization Act for Fiscal Year 2021 (H.R. 6395, Division E), a federal law allocating the annual budget for national defence in the United States (National Artificial Intelligence Initiative Act of 2020).12 It is a response by the US Congress to four recent developments: 1) the federal government currently lacks an understanding of the capabilities and transformative social and economic ramifications of AI; 2) there is a marked absence of standards and benchmarks for evaluating the technical performance of AI systems throughout their lifecycle; 3) the United States is suffering from the lack of a diverse and talented AI educational pipeline needed to sustain its AI policy ambitions; and 4) the federal government has a central role to play in advancing AI R&D efforts to benefit American society, particularly around facilitating funding, computing infrastructure, and collaboration among diverse stakeholders.13 The NAIIA was likely further motivated by the urgency of the National Security Commission on Artificial Intelligence (NSCAI) findings in various interim reports and quarterly recommendation memos throughout 2019 and 2020, which emphatically highlighted the strategic, economic, geopolitical, and national security implications of the United States lagging behind its adversaries and allies in the global race to achieve AI leadership.14

AI regulatory policy has both domestic and foreign policy dimensions. In the US Government’s view, the potential gains of AI cannot be fully realised without the trust and confidence of those who will be using it and those who will be subject to decisions made by it. Official US documents recognise that this requires assuring US citizens about the technical soundness of AI and that it will operate in ways that protect reasonable expectations of their civil liberties, privacy, safety, security, and other US values.15 Guidelines and principles established via technical standards will be crucial to ensuring that the design, development, acquisition, and deployment of AI occurs in a trustworthy, responsible way that benefits Americans and will facilitate a justified broad acceptance and adoption of AI.16

However, the external dimension of AI regulatory policy may be even more relevant. At this stage, the United States lacks the willingness (or the institutional capacity, given the disparate interests within its legislature) to proceed with substantive domestic legislation on AI for adoption in the short term.17 Nevertheless, US domestic legislation would be the strongest tool to trigger legal alignments abroad in the AI domain. If the United States federally required AI developers to meet certain legal standards, these standards would likely be highly influential in other jurisdictions.

Without federal AI-specific regulation,18 the main ‘regulatory’ contribution of the United States is to promote “AI with American values”19 through voluntary standardisation.20 Accordingly, we can observe US authorities, notably the Department of Commerce’s National Institute of Standards and Technology (NIST), being proactively involved in the development of exportable technical standards and the promotion of a technological world order centred around these standards.21

This vision, expressed in various government documents and formally codified in the NAIIA, provides a framework to coordinate sustained US federal action in collaboration with diverse stakeholders (e.g., academia, private sector, civil society) to materialise these ambitions.22

Promoting domestic adoption of standards

While the 2019 US Executive Order remains seminal, a different executive order of December 2020 declared that it is a US policy to “promote the innovation and use of AI, where appropriate (…) in a manner that fosters public trust, builds confidence in AI, protects our Nation’s values, and remains consistent with all applicable laws.”23

That order also establishes that the US Government “shall continue to use voluntary consensus standards developed with industry participation, where available, when such use would not be inconsistent with applicable law or otherwise impracticable.”24

If, for example, this mandate were systematically built into government procurement rules, the US Government could turn voluntary standards into de facto compulsory ones for anyone competing for public contracts. If voluntary standards are forced upon procurement partners, and government procurement is essential for the survival of certain private companies, economies of scale would favour AI systems, for private or public purposes, that incorporate those “voluntary” standards.

The question is then, what are these “American values?” In US documents, “trustworthy AI technology” seeks to uphold democratic values and be human-centric by focusing on ethics, preserving privacy and fairness, and mitigating bias.25 As we will explore below, this closely aligns with the views of the Organisation for Economic Co-operation and Development (OECD), as noted in several US documents implementing this policy.26

In this context, the US Government mandated one of its agencies, the Department of Commerce’s NIST, to develop guidance from a broad socio-technical perspective that “connects these practices to societal values” in order to create new norms around how AI is built and deployed, operationalising these values.27 Note that this idea of “norms” does not refer to legislation or administrative rules, but, on paper, refers merely to voluntary standards.

Genesis and functions of NIST

NIST was established as a physical sciences laboratory by Congress in 1901 to address the deficient measurement infrastructure of the United States, especially when compared to its contemporary economic rivals (e.g., the United Kingdom, Germany), which was a significant hindrance to its industrial competitiveness.28 Today, NIST is a non-regulatory government agency under the Department of Commerce. Its mission is to improve the economic security and the quality of life of the United States by promoting technological innovation.29 It achieves this through advancing measurement science and voluntary consensus standards in the domestic and global marketplace which are foundational to the design, development, optimisation, quality assurance, and regulation (particularly by providing legal metrology) of technologies.30 Although standards-setting is largely industry-driven in the United States, NIST plays an important convening role in coordinating efforts among diverse stakeholders and harmonising regulatory practices.31 The work of NIST spans a wide array of domains (e.g., AI, climate science, telecommunications, quantum science, manufacturing) with many of its measurement solutions and standards being used in critical technologies such as atomic clocks, computer chips, and electronic health records.32

Following international practices, including the 2021 EU proposal for a comprehensive regulation of AI discussed below, the US approach to achieving global leadership in AI regulation has recently shifted to a risk-management mentality. In other words, there has been a move from “ethics” frameworks and other soft approaches to a process, initiated in 2021, of setting standards that AI systems must meet to control their risks in line with US values.33

The NAIIA specifically mandated NIST to develop a risk framework that establishes “definitions and characterizations for aspects of trustworthiness, including explainability, transparency, safety, privacy, security, robustness, fairness, bias, ethics, validation, verification, interpretability” that “align with international standards, as appropriate” and that “incorporate voluntary consensus standards and industry best practices.”34 These aspects, except for ‘ethics,’ reflect the usual characteristics discussed in the literature for adequate development and deployment of AI systems. In other words, through this list, the NAIIA adopted the conventional assumptions of what can be considered trustworthy AI, aligning itself with, for example, OECD practice.

Figure 1. Highlights in the genesis of US AI-related public policy
Source: USSC

Yet, there is a significant jump from ‘trustworthiness,’ as an idea, to operationalising the abovementioned concepts. The NAIIA tasks NIST with identifying and providing “standards, guidelines, best practices, methodologies, procedures and processes for (A) developing trustworthy artificial intelligence systems; [and] (B) assessing the trustworthiness of artificial intelligence systems.”35

Trustworthiness develops into something substantive and measurable. It abandons the realm of ethics and becomes a series of technical concepts and indicators that, if met, would be considered to generate trustworthy AI. From an international perspective, the most important aspect of this transition is that these technical concepts are exportable to other countries and international organisations in ways that a definition of “trust” cannot be. The next section presents how these value-driven technical standards are exported to the world. Yet, it is necessary first to understand that technical standards, despite their descriptor, are not value-neutral.

The values built into technical standards

Traditionally, technical standards have “not draw[n] attention to themselves; they disappear into the background allowing us to think, work, and communicate efficiently.”36 Although they are often assumed to be neutral and technical, standards embed valuations of technologies and appreciations of their risks or benefits.37 Influencing these standards means shaping what a technology can do and how. Standardisation can undermine established principles of societies and law: standards can shape how societies operate, alter the relationship between citizens and the state or, more simply, serve as a vehicle for the exercise of power by those whose values, preferences and priorities are embedded in them.

Therefore, and perhaps more than ever, standards are being seen as a tool to exercise power. The European Commission, in its new strategy for standardisation, stresses the role of standards in providing domestic industries with a competitive edge, but further notes that they can also be used to support EU values.38 Given that standards “play a central role in virtually all economic and social activity,”39 inserting values into them can act as a silent mechanism to shape society, as recognised by the US National Security Commission on Artificial Intelligence in its Final Report.40 If we extend this reasoning into AI as a technology that will increasingly inform our lives — and even permeate all aspects of it, as assumed in Japan’s Society 5.041 — one can easily see why we should care about how AI standards are created.

The integration of AI systems in goods and services will have social, legal and economic effects across the private and public sectors, beyond national borders, perhaps only comparable to the expansion of electricity. The private trade of products and services that incorporate AI systems will inevitably increase over time. Governments are already employing (and will increasingly employ) AI systems to deliver services and inform policy. Whether acquired through public procurement or developed in-house, AI systems will become more necessary.

Generally, this will require some degree of confidence that such systems are reliable and fit for purpose. To assess their reliability, it is necessary to use some kind of metrics or descriptors that could be shared by developers, sellers and buyers of those AI systems.
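
As an illustration of what such shared metrics could look like, the sketch below computes two simple descriptors: accuracy as a crude proxy for reliability, and a demographic parity gap as a crude proxy for bias. This is our own minimal example with hypothetical data; real standards define such measures far more precisely.

```python
# Minimal sketch (ours, not from any standard) of shared, measurable
# descriptors that a developer, seller and buyer could all compute
# from the same test data and compare against an agreed threshold.

def accuracy(predictions, labels):
    """Share of correct predictions: a simple 'reliability' descriptor."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def demographic_parity_gap(predictions, groups):
    """A simple 'bias' descriptor: the gap between groups in the rate
    of favourable outcomes (here, prediction == 1)."""
    rates = {}
    for p, g in zip(predictions, groups):
        pos, tot = rates.get(g, (0, 0))
        rates[g] = (pos + (p == 1), tot + 1)
    shares = [pos / tot for pos, tot in rates.values()]
    return max(shares) - min(shares)

# Hypothetical test data: both parties arrive at the same numbers.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"accuracy: {accuracy(preds, labels):.2f}")                   # 0.75
print(f"parity gap: {demographic_parity_gap(preds, groups):.2f}")   # 0.50
```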

A common technical language is therefore required to explain and measure what ‘bias,’ ‘transparency’ or ‘reliability’ mean in the official documentation. This is normally left to what is called ‘foundational’ or ‘basic’ standards. At the international level, they are commonly defined by a standards development organisation (SDO), often as a first step to making sense of other standards. Their name (“foundational”) denotes their role as the groundwork for more technical standards developed in a series of standard specifications. However, the process is not necessarily chronological, and the foundational standards for AI have been developed at the same time as more technical ones.

The creation of these foundational standards for AI means that “a practitioner can talk the same language as a regulator and both can talk the same language as a technical expert.”42 That is, the importance of foundational standards goes beyond the standards world and official documents.

Defining and shaping the meaning of terms such as ‘trustworthiness,’ ‘explainability’ or ‘reliability,’ as they relate to AI, can even shape the definition of AI itself. Due to the “anchoring” effect,43,44 it will be more challenging to develop future definitions that are completely separate from that anchor. As more legal, normative or policy instruments use concepts (consciously or unconsciously) based on those standards, network externalities enhance their common acceptance and increase the cost (and limit the value) of departing from the ideas anchored in those standards.

From a regulatory and policy perspective, we can expect that the control (or shaping according to the relevant preferences) of these foundational standards will produce a much broader return over time. Any jurisdiction or company that manages to reflect its vision of “trust” in AI in a global foundational standard will be able to influence not only the technical standards developed on that basis but many subsequent years of scholarship, regulation and policy.

It is therefore not surprising that a recurring theme throughout US AI policy is the need to collaborate with international allies to advance its strategic interests and promote convergence towards a shared vision of trustworthy AI centred around liberal values.

Exporting the US view of trustworthy AI through standardisation

International cooperation is identified as one of the strategic pillars of the NAIIA.45 The NAIIA assumes that cooperation will help to achieve trustworthy AI that represents US values, safeguards fundamental rights and liberties, minimises the risks of AI-related harms, and promotes innovation.

Consequently, there has been a conscious effort by the United States to achieve alignment with existing and ongoing AI developments from allies in ways that will reinforce shared respect for democratic values and human rights as well as promote research and development. Examples include the OECD Recommendation on AI (2019), the US-EU Trade and Technology Council (in place since 2021), and the Global Partnership on AI (GPAI, launched in 2020) as discussed below in Section 4.

The US pursuit of multilateral cooperation, however, often frames the beneficiaries of AI as the United States and other liberal democracies. This concept of cooperation excludes other important international actors, like China and Russia, and either implicitly or explicitly becomes a policy of “othering” where liberal standards and ideals of privacy and human rights are a counterpoint to the views of AI coming from “illiberal” actors, which are assumed to be about efficiency and control.

Focusing on the work of NIST, the United States is openly promoting the idea that the risk generated by AI systems needs to be managed and that this can be done through (its own or international) voluntary technical standards, which are aligned with the principles of liberal societies.

From the technical side, NIST has identified nine key areas to be addressed in AI standards: concepts and terminology; data and knowledge; human interactions; metrics; networking; performance testing and reporting methodology; safety; risk management; and trustworthiness (National Institute of Standards and Technology, 2019). This list is strongly aligned with the International Organization for Standardization (ISO) work underway since 2018, led by the American National Standards Institute (ANSI), a US organisation, as discussed below.

The most advanced line of work at NIST is the development of a voluntary risk-mitigation framework that goes beyond mere ethical considerations, providing instead practical and translatable operational instructions. The development of this framework is intertwined with the advance of technical standards and it is part of a formal plan for federal engagement in the development of technical standards and related tools, as mandated by the 2019 Executive Order.46 In January 2023, after more than a year of work, NIST presented its first formal version of the AI Risk Management Framework (AI RMF) for (voluntary) use, with the aim of improving the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.47

The NIST AI Risk Management Framework (AI RMF) and its connection to international standards

From the perspective of the US federal government, achieving its objective of influencing global policy through standards must be balanced against the domestic objective of generating standards that meet the needs of stakeholders in the United States (namely government, individuals and companies). As noted, achieving this through a voluntary system is particularly challenging as it requires the buy-in of domestic and foreign actors for those standards.

With this objective in mind, NIST adopted an open method to develop the AI Risk Management Framework (AI RMF). In contrast to the usual lack of transparency in standard creation, the AI RMF was developed through a consensus-driven and collaborative process that included several online workshops open to any person interested, as well as other opportunities to provide input.48 The process also involved non-US actors and representatives of different jurisdictions — including Australia, Japan and the European Union — presenting their views in their official capacities in the workshops’ panels. Many significant written contributions to the drafts were also generated outside the United States. The development process of the AI RMF started in mid-2021 and the wording for trustworthy AI has remained consistent throughout the process.49 AI will be trustworthy if it is “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.”50

Yet, later versions of the framework, including the first formal version (v.1.0), have stressed the importance of responsible use and governance of AI systems.51 It sees responsible use as a “counterpart to AI system trustworthiness,” with previous versions noting that “AI systems are not inherently bad or risky, and it is often the contextual environment that determines whether or not negative impact will occur.”52 In this view, the risk of AI systems has two dimensions: the system itself (normally the scope of technical standards) and the way it is used (something not always covered by standardisation, although possible as in ISO 38507 and 24368, which ANSI helped develop).

Risks and benefits of AI, as the AI RMF notes, “can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.”53 NIST, therefore, considers it necessary to provide a framework and standards for trustworthiness and responsibility.

The AI RMF, since its first drafts, has offered an idea of how its characteristics compare to the OECD and EU documents, as well as to US Executive Order 13960 of 2020. Table 1 summarises this mapping.

Table 1: Mapping of AI Risk Management Framework taxonomy to AI policy documents (based on NIST 2022, 10)

| AI RMF (2023) | OECD AI Recommendation (2019) | EU AI Act (Proposed 2021) | EO 13960 (2020) |
|---|---|---|---|
| Valid and reliable | Robustness | Technical robustness | Purposeful and performance-driven; Accurate, reliable, and effective; Regularly monitored |
| Safe | Safety | Safety | Safe |
| Fair; bias is managed | Human-centred values and fairness | Non-discrimination; Diversity and fairness; Data governance | Lawful and respectful of the nation’s values |
| Secure and resilient | Security | Security and resilience | Secure and resilient |
| Transparent and accountable | Transparency and responsible disclosure; Accountability | Transparency; Accountability; Human agency and oversight | Transparent; Accountable; Lawful and respectful of the nation’s values; Responsible and traceable; Regularly monitored |
| Explainable and interpretable | Explainability | Understandable by subject matter experts, users, and others, as appropriate |  |
| Privacy-enhanced | Human values; Respect for human rights | Privacy; Data governance | Lawful and respectful of the nation’s values |

Strategy explains why NIST is so concerned about how its AI RMF maps against an OECD Recommendation or a mere proposal for EU legislation, which will not be approved until 2024. A voluntary framework will only be adopted at the international level if it meets the requirements of more binding rules. Having a (US-driven) taxonomy of risk that facilitates international consensus on terminology for the discourse around AI risk54 is an excellent lever to incorporate US values and priorities into the development of foreign regulatory instruments or, at least, a way to map interoperability between the rules on both sides of the Atlantic, reassuring users of the AI RMF.

As the document notes, the AI RMF strives to be “law- and regulation-agnostic. The Framework should support organizations’ abilities to operate under applicable domestic and international legal or regulatory regimes.”55 At the same time, the AI RMF process facilitates a symbiotic relationship with these foreign instruments. NIST has approached the AI RMF drafting in a way that openly draws inspiration from and refers to existing policy directives around trustworthy AI, encouraging confidence in international partners. In other words, it was drafted to be used without friction outside the United States.
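
To make the idea of such frictionless, cross-regime use concrete, the sketch below renders part of the Table 1 crosswalk as a machine-readable structure. The dictionary and helper function are our own hypothetical illustration, not part of the AI RMF; an organisation might use something similar to check which trustworthiness characteristics its documented controls already cover under a given regime.

```python
# Abridged, machine-readable rendering of the Table 1 crosswalk.
# The groupings follow the mapping summarised in Table 1; the key
# names and the helper function are hypothetical, for illustration.

AI_RMF_CROSSWALK = {
    "valid and reliable": {
        "oecd_2019": ["robustness"],
        "eu_ai_act_2021": ["technical robustness"],
    },
    "safe": {
        "oecd_2019": ["safety"],
        "eu_ai_act_2021": ["safety"],
    },
    "fair, bias managed": {
        "oecd_2019": ["human-centred values and fairness"],
        "eu_ai_act_2021": ["non-discrimination", "diversity and fairness",
                           "data governance"],
    },
    "privacy-enhanced": {
        "oecd_2019": ["human values", "respect for human rights"],
        "eu_ai_act_2021": ["privacy", "data governance"],
    },
}

def covered_characteristics(documented_controls, regime):
    """AI RMF characteristics whose counterpart terms under `regime`
    all appear among an organisation's documented controls."""
    return [
        name for name, mapping in AI_RMF_CROSSWALK.items()
        if all(term in documented_controls for term in mapping.get(regime, []))
    ]

# Example: controls documented against OECD terminology.
controls = {"robustness", "safety", "privacy"}
print(covered_characteristics(controls, "oecd_2019"))
# -> ['valid and reliable', 'safe']
```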

Influencing international standards

The global governance of AI through standards plays out in the meetings of international organisations tasked with the development of AI-related standards. Although there is “no standard for standards,”56 the most common path to creating them involves technical experts, tasked by their employers — generally, significant stakeholders in the domain covered by the standard being developed — to agree on them within an official setting broadly accepted by the relevant community. This entity could be domestic or international.

At the international level, there are several entities and organisations that list standard production as part of their mandates. Global standard-setting organisations (or SSOs, acting as international versions of domestic SDOs) include at least the Geneva ‘Big Three’: the International Telecommunication Union (ITU), the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).

Although they are normally grouped together, the dynamics in the ITU and the other global SSOs are notably different. ISO and IEC are: (i) private; (ii) focal regulatory institutions, i.e., “uncontested in their respective areas”;57 and (iii) unrepresentative of the global community. Although the voting rules are democratic in appearance — for example, ISO grants one vote per state SDO — the reality is that almost all decisions are made with the support of SDOs from a handful of countries. The bulk of the work within ISO and IEC is also driven by representatives of companies active within those few national SDOs. The European Union, the United States or China, for example, can then exercise disproportionate influence on these organisations through their corporations, especially if they are focusing on a concrete committee or developing a group of standards.

Table 2: Main traits of international SDOs

|  | International Telecommunication Union (ITU) | International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) |
|---|---|---|
| Membership | Members are states | Members are almost exclusively domestic non-governmental national SDOs |
| Legal status | Public international organisation | Non-governmental international organisations (private) |
| Voting procedure | Vote is nominally democratic | Process-driven work; national entities need to be active in a group to vote; adoption is mainly by consensus |

ITU, on the other hand, is the only global standardisation forum mainly driven by governments, not private entities. This means that voices from the Global South can be heard. While “companies from emerging economies do not have the resources to shape standards like Western and Asian companies” within ISO or IEC (even if some countries like South Africa sometimes play a leadership role),58 they do have a role in the ITU, even if it is to decide whose initiative will be supported from among those put forward by the usual players (namely the European Union, United States or China).

In the AI context, NIST has identified engagement levels for global SSOs, ranging from monitoring (following the standard), through to participating (providing input), influencing (exerting influence over key players developing standards), and leading (convening or administering standards).59

Traditionally, the US Government has remained relatively detached from these standardisation processes, but the geopolitical shift vis-à-vis China has also changed its approach to standardising AI. The US Government’s interventionist approach to AI standardisation has already blocked the development of global standards that would go against certain US values. However, their new approach goes well beyond that. NIST has even been tasked with producing a study on state-directed interference and China’s standardisation policy and how this might be prejudicial to US efforts.60

At the international level, the clearest example of this was the proud and public recognition by the US State Department of its actions to coordinate a block at the ITU early in 2021 of proposed standards “that would allow the use, for example, of facial recognition technology in ways that could threaten human rights.”61

The United States is also actively involved in the development of standards, not only in blocking them, especially regarding foundational AI standards. This work is mainly carried out within a joint technical committee (JTC1) of ISO and IEC, concretely in an AI-specific subcommittee (SC42).

The first working group of this subcommittee (formally called WG1, “foundational standards”)62 has been operating since 2017 on aspects “that necessitate a common vocabulary, as well as agreed taxonomies and definitions.”63 This group agreed and, in July 2022, published the first global foundational standard for AI, ISO/IEC 22989:2022(en).

Among the four levels of engagement described by NIST (monitoring, participating, influencing and leading), the foundational standard ISO/IEC 22989:2022(en) has received the full attention of US organisations. The ISO/IEC JTC1/SC42 secretariat has been held by a representative of ANSI, the US-based private SDO that represents the United States in ISO. Although ANSI is nominally private, its efforts are clearly supported by the US Government, making explicit the intention of the United States to lead on that front. ANSI has, in turn, actively participated in the standards developed by NIST, both publicly (e.g., in the open workshops, with ANSI-driven presentations) and, perhaps more importantly, through internal coordination. This has created a channel for a US public agency to actively shape ISO/IEC JTC1/SC42 in coordination with Japan, EU members and, to some extent, Australia.

Perhaps it is then surprising that the ISO/IEC 22989:2022(en) appears to be fairly neutral from a geopolitical point of view, by avoiding contentious issues. For example, it only refers to human rights once, and only as an optional consideration: “Legal, human rights, social responsibility and environmental frameworks can help in refining and describing the values.”64 Trustworthiness is simply the “ability to meet stakeholder expectations in a verifiable way,”65 stressing that it is the expectation of the stakeholder which really matters and not a broader consideration. Yet, when one goes a bit deeper into the text, one can notice the US influence in the drafting. For example, all the attributes of trustworthiness that appear in sec. 3.5.16 are the same as those listed above in the AI RMF.66

Overall, in recent years the United States has been heavily involved in shaping the regulation of AI in a way consistent with its needs and preferences but is maintaining a soft-touch approach. As we will discuss below, the evolution from non-committal ethics frameworks into more interventionist standard-driven actions (although still voluntary) may have been driven by a need to keep pace with more assertive action by the European Union and China.

2. AI standards at the international level: Economic, cultural and geopolitical dimensions

Traditionally, standards have served four different purposes: (i) to establish the basis for compatibility, interoperability or interfacing between modules, components, products and systems; (ii) to promote similarity (or reduce variety between technical solutions), which enables economies of scale; (iii) to facilitate the transmission of information about a technology; and (iv) to determine minimum quality or safety requirements.67

These four purposes follow the assumption that standards are neutral and that their value is mainly economic. However, in the current geopolitical environment and in the context of AI, foundational AI standards can be a normative tool with strong cultural implications. For instance, what global standards can say about what trustworthy AI is (and what it is not) is not just a matter of economic advantage, but a lever to exercise power and dismiss AI systems from certain jurisdictions with different standards.68

For example, from an international perspective, an AI system designed with the purpose of performing a task in violation of international human rights would undermine trust in the technology and the system.69 Standards that would accept the development of that technology are therefore expected to be opposed by the United States and other developed and democratic countries. The United States blocking the standards for facial recognition systems at the ITU discussed above (1.2.2) is an example of the value of standards as a tool to promote a certain vision of AI, with cultural and geopolitical implications.70

Conversely, if the language of human rights were to be openly incorporated in foundational AI standards, as seems to be the intention of the European Union, it will restrict the kind of AI systems that could be offered. Therefore, standardisation is not just a matter of economics but also a tool for foreign policy implementation.

In this section we analyse standards development as a way to gain power in the race to regulate AI, differentiating three types of power at the international level: economic, cultural and geopolitical.

Economic power

Companies participate directly in the development of standards because they are mainly economic tools and a potential avenue for profit.71 The traditional view sees technical standards as a way to prevent unnecessary costs and eliminate transborder protectionism. Once adopted, their acceptance by stakeholders (manufacturers, distributors, professional buyers and users) is often based on a proven track record for efficiency and reliability,72 offering new users the economic advantages attached to them.73

Technical standards influence the competitiveness of the market, setting a level of minimal compliance that everyone is aware of and can choose to meet. Losers in standards wars risk being ejected from the market.74 For this reason, countries want to support their companies in having their technical specifications and methods recognised as acceptable at the international level.

Overall, for states favouring a particular standard for economic reasons — for example, because a majority of companies in their jurisdiction already use it — there are strong incentives to support any initiative favouring that standard and its diffusion. However, for countries without that economic incentive — such as many developing countries — economic reasons may not be particularly relevant unless there are broader benefits for the users of AI in that country derived from using a technology developed in conformity with that concrete standard.

In the latter case, these state members of the ITU — or their national SDOs in the ISO and IEC — can be encouraged by other entities (including corporations, civil society, and other states) to support a given standard for reasons unrelated to the merits of the given proposal. This kind of vote-seeking could be a mechanism to create a standard with international legal recognition and with the apparent support of many national organisations, even if some of these countries or representatives of national SDOs may not fully understand its contents.

Cultural power

Sometimes, however, the economic value of AI standards may be limited and other factors may be at play. How AI systems are built — and which standards are followed — can also be attributed to cultural, social and even environmental differences. In that context, accepting an AI system that is developed to a certain set of standards originating from another country means incorporating part of that culture. In something like AI systems, which can be deployed at an unprecedented scale, the capacity to exercise that cultural power is amplified — such as with chatbots, content recommendation algorithms or metaverse settings.

There may also be cultural reasons to determine what is acceptable to expect from an AI system. For example, in terms of risks derived from the operation of AI systems, an AI system that uses race or religion as a variable for training purposes — even to promote positive discrimination — would be illegal in France, where the law has traditionally criminalised the collection of personal data based on race, ethnicity, or political, religious or philosophical opinion (a similar prohibition applies to ethnic identification in contemporary Rwandan society). In other countries, this restriction may seem absurd and there would be no need to build standards around such a consideration.

In a more dystopian example, a system of remote recognition of people by their heartbeat based on sonar technology from smart speakers — something much more invasive and hidden than facial recognition, but already possible75 — can be considered completely unacceptable in one country, as it may not have obvious applications that benefit society, but may be acceptable in another. These differences may relate to cultural or social factors and not only legal regulations of the right to privacy. How “privacy” is understood and how standards are meant to consider it, is mainly a socio-cultural factor.

It may be that there are no privacy concerns about being remotely recognised by a machine, in the same way that individual personal tax rates are seen as public information in some countries, whereas revealing any kind of personal tax information is a crime in other jurisdictions. It could also be that there are positive uses of an AI system in one country but no relevant public benefit in another. For example, in extreme climates this sonar system could be used to detect and identify people in cases of snowstorms, to support their rescue, but in other jurisdictions, this same system may have no obvious application other than intrusive data collection.

Finally, in this context, language cannot be ignored. Testing and measurement are ancillary parts of standards. AI foundational models for speech recognition or natural language processing are overwhelmingly developed with English as the default language setting. Systems trained on English internalise certain norms and other cultural inferences that attach to the language.76 If standards are set with English in mind, the existing advantage could become irreversible and systems could be influenced by the cultures that employ English. Similarly, standards developed with Chinese in mind could carry similar cultural biases or be a tool to exercise cultural power.

This influence could also operate at a domestic level. For example, if standards for the acceptability of language models in China are developed from the perspective of Mandarin (around 800 million speakers), systems using Wu or Yue (fewer than 200 million speakers combined) may not be able to meet the standards (e.g., if the standards favour training on official documents). Systems will then reinforce the cultural power of Mandarin speakers, limiting the weight of those “minority” languages.

Geopolitical power

At several points in section one, we highlighted how the United States stresses the capacity of standards to project its values. As Frankel and Højbjerg noted in a classic article on the ‘political standardizer,’ every technical question can carry a political dimension, and, therefore, standardisation can always become a political process.77 This is especially the case if we look at AI systems as services (for example, deciding who gets credit or insurance) with the potential to remove humans from service delivery.

Article 26(5) of the EU Services Directive establishes that standardisation is a tool to improve the compatibility and quality of services. Services delivered “with the support of” or “by” an AI system could easily achieve this result. This can significantly modify social patterns in other countries beyond where the standards originated.

Traditionally, it has been assumed in policy-making that the quality of services should not be regulated by reference to standards “since services are too diverse to be standardised.”78 For example, the quality of legal advice that could be considered acceptable cannot be easily measured, so it could not be converted into a measurable indicator, which is what a standard often does.

Yet, this argument will be superseded if a machine is capable of reducing variance and can be tested with real cases to see if it is valid and reliable, delivering the correct legal advice above a desirable performance threshold — such as being correct in 99 per cent or more of the cases used in testing the system.
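
As an illustration of how such a requirement could be operationalised, the sketch below (our own illustration, not drawn from any standard) turns a 99 per cent threshold into an acceptance test. Using a Wilson lower confidence bound rather than the raw pass rate is our own conservative choice: a system passes only when the volume of test evidence genuinely supports the claimed rate.

```python
# Hypothetical acceptance test for a "correct in 99% or more of test
# cases" requirement, using the standard Wilson score lower bound.
from math import sqrt

def wilson_lower_bound(successes, trials, z=1.96):
    """Lower end of the ~95% Wilson score interval for a success rate."""
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = p + z**2 / (2 * trials)
    margin = z * sqrt((p * (1 - p) + z**2 / (4 * trials)) / trials)
    return (centre - margin) / denom

def passes_threshold(successes, trials, threshold=0.99):
    return wilson_lower_bound(successes, trials) >= threshold

# 992 correct out of 1,000 cases is a 99.2% raw rate, but the lower
# bound (~98.4%) does not yet support the 99% claim; at 9,950 out of
# 10,000 the bound (~99.3%) clears the threshold.
print(passes_threshold(992, 1000))    # False
print(passes_threshold(9950, 10000))  # True
```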

Moreover, from a political point of view, standards can be a way to help shape language about a legal problem. The creation of foundational standards is a race to establish “common terms” for effective (and more efficient) communication between the intentions of regulators and the outputs of AI developers, but can also close the door to many potential uses of AI that do not adequately fit into preformed definitions.

This process could be exacerbated if foundational standards are indirectly given some degree of legal recognition, allowing countries to potentially weaponise standards. For example, if the foundational standard noted above (ISO/IEC 22989:2022(en)) is embedded in a product covered by the WTO Technical Barriers to Trade (TBT) Agreement,79 its use would receive favourable regulatory treatment. According to paragraph F, “[w]here international standards exist or their completion is imminent, the [domestic] standardizing body shall use them, or the relevant parts of them, as a basis for the standards it develops, except where such international standards or relevant parts would be ineffective or inappropriate.”80 In this case, countries could be forced to accept AI systems that comply with international standards even if they run afoul of domestic ones. This is the effect in relation to just one international economic law agreement; if we also examine bilateral and regional agreements, we could find dozens of examples following similar principles.

AI standards, therefore, become a tool to exercise geopolitical power, blocking (or making less attractive) the international commercialisation of AI systems developed in conformance with some domestic standards but which do not align with international ones. This also creates clear incentives for countries developing a set of domestic AI standards to influence other states to adopt compatible ones. In the context of China-US competition for areas of influence, particularly in Southeast Asia and the Pacific, this geopolitical power game could favour the acquisition of AI technologies by public authorities from one particular country, with significant socio-political implications.

3. Standardisation of AI and international alignments

In this section, we provide an overview of the main regulatory frameworks for trustworthy AI across the world. The paths chosen by the biggest economies and AI global players are notably diverse. The relevance of this overview is to provide an adequate framework for the comparison of different options that Australia could envisage, as explored in section 5.

The view from the European Union

For the European Union, standards are not a matter of voluntary adoption but act as a hard-wired regulatory tool. Since the “New (regulatory) Approach” started in the late 1980s, the relationship between standards and law has been both symbiotic and hierarchical. EU regulations and directives have often accepted that products (or, incidentally, services) developed according to standards determined by the European standardisation organisations were presumed to be compliant with the principles established in its legal frameworks.81 The European Union believed and hoped that the standardisation process would be more flexible and faster than the EU legislative process.82,83 Delegating small regulatory details to standards seemed more practical and cheaper than putting the burden on legislators. Standardisation was chosen as the core tool in the so-called “New Approach” for efficiency reasons. Although this New Approach is currently under review84 as part of the new EU Strategy on Standardisation,85 this logic remains.

Interestingly, however, AI is not identified in the European Union’s review programme as one of the priorities for standardisation.86 Nevertheless, standards for “data” in more general terms are indeed there,87 and AI standards still carry particular importance for the European Union.

If its internal schedule is met, the European Union will have in place a broad-scope regulation for AI systems by 2024.88 AI systems developed in accordance with EU-accepted standards will be assumed to be compliant with the relevant requirements of the regulation. AI systems developed outside the European Union, but intended to be marketed in Europe, are therefore likely to pay attention to those standards.

In its international dimension, and in a rare example of unapologetic assertiveness for the European Union, the European Commission has noted that “the EU’s objective is to shape international standards [not restricted to AI] in line with its values and interests, but it is in strong competition to do so.”89 The new strategy envisages an institutionalised coordination — a relatively unprecedented High-Level Forum — to achieve effective representation of European interests in international standardisation settings.90 The Forum will include representatives of member states, European standardisation organisations and national standardisation bodies, industry, civil society and academia.

This is an attempt to respond to a “recent shift” in the geopolitical landscape where “other actors follow a much more assertive approach to international standardisation than the EU and have gained influence in international standardisation committees.”91 This, contends the European Commission, “has led to a situation whereby in sensitive areas, like (…) facial recognition or the digital twin, other world regions are taking the lead in international technical committees promoting their technological solutions, which are often incompatible with the EU’s values, policies and regulatory framework.”92

The European Union, however, does not have the intention of developing those standards independently. The European Commission has committed itself to actively addressing this issue “in close coordination with like-minded partners, linking to the work in the G7 and the EU-US Technology and Trade Council (TTC).”93 The latter is analysed below, but it is interesting to note the series of alliances that the European Union is trying to forge, envisaging “discussions on standards in the planned Digital Partnerships with Japan, the Republic of Korea and Singapore” as “good examples of EU standardisation cooperation with international partners.”94

The EU-US coordination: othering China?

The newly created EU-US Trade and Technology Council is a coordination framework between the United States and the European Union for international trade and economic policy. However, its field of action extends beyond the bilateral relationship, as it intends “to feed into coordination in multilateral bodies, including in the World Trade Organization (WTO), and wider efforts with like-minded partners, with the aim of promoting democratic and sustainable models of digital and economic governance.”95

Notably, its Inaugural Joint Statement includes a specific ‘Statement on AI.’96 This Statement, besides including some not-very-subtle condemnatory references to China (the two parties “have significant concerns that authoritarian governments are piloting social scoring systems with an aim to implement social control at scale”), agrees that “policy and regulatory measures should be based on, and proportionate to the risks posed by the different uses of AI.” Concretely, the United States and the European Union stated their intention to “develop a mutual understanding on the principles underlining trustworthy and responsible AI” and to discuss “measurement and evaluation tools and activities to assess the technical requirements for trustworthy AI,” that is, the foundational work for standardisation.97

technology-globe.jpg
Source: Getty

Furthermore, the first working group created by the Trade and Technology Council is on Technology Standards. This working group is “tasked to develop approaches for coordination and cooperation in critical and emerging technology standards including AI” to “support the development of technical standards in line with our core values.”98 Such an explicit reference to values in one of the constitutional documents of an entity seemingly created to facilitate international trade and limit regulatory divergence is fairly unusual.

Furthermore, while recognising “the importance of international standardisation activities underpinned by core WTO principles”, the working group will “identify opportunities for collaborative proactive action and to defend our common interests in international standards activities for critical and emerging technologies.”99 Finally, the European Union and the United States committed themselves to developing “both formal and informal cooperation mechanisms to share information regarding technical proposals in specified technology areas and seek opportunities to coordinate on international standards activities.”100

Japan: A confident and self-reliant actor open to cooperation

In contrast to the changes in attitude in the European Union and the United States, official documents in Japan do not suggest a shift in its approach to standardisation in connection with AI. It appears as if Japan, in its transition to the new governance model for Society 5.0101 (which seems to be the core concern, rather than AI specifically), does not regard diverging standards as a potential obstacle to achieving its broader socio-political objectives. It even seems quite confident that its existing legal institutions can tackle some of the problems specific to AI, regardless of standards. Japan has instead focused on a principles-based approach, which stresses that all stakeholders (including users) must employ AI systems responsibly. Conventional explanations attribute this to a broader image of Japan as a society accustomed to living with cutting-edge technology, such as robots: on this view, the Japanese population is more accepting of technological development and, in turn, better placed to understand its risks and limitations.

This is not to say that Japan does not see the value of technical standards. In fact, the Ministry of Economy, Trade and Industry (METI) encourages Japanese companies to make strategic use of standardisation.102 Furthermore, 2018 amendments to the Japan Industrial Standardization Act (Act No. 185 of 1949) broadened its scope to encompass data, services and management methods.103 However, with respect to regulating AI, high-ranking advisory bodies in Japan openly favour a “soft law” approach built on goals-based governance, rather than a rules-based approach. The view is that concerted goals-based governance (for example, requiring that technology be human-centred) can be more responsive to rapid technological change.104

The Social Principles of Human Centric AI, issued by METI in 2019, cite a need to “redesign society in every way” to prepare for the new technological and social epoch. Overall, however, Japan’s policy approach to AI integration into society is highly positive and optimistic, while also taking a tone of inevitability. The Social Principles, which carry limited (if any) legal consequences if violated, can be summarised as follows:

  1. The Human-Centric Principle: This principle states that the “utilization of AI must not infringe upon the fundamental human rights guaranteed by the Constitution and international standards.”
  2. The Principle of Education/Literacy: This principle acknowledges the risk that AI may create social disparities and pledges that “an educational environment that fosters education and literacy… must be provided equally to all people,” with a focus on vulnerable individuals and an educational system that ensures the acquisition of “the basics of AI, mathematics, and data science.” It anticipates that this education goes beyond the school environment with a view to also engaging private enterprises and (adult) citizens.
  3. The Principle of Privacy Protection: This principle acknowledges that “when utilizing AI, that more careful discretion may be required than the mere handling of personal data.” It uses the language of risk mitigation, including that personal information “must be protected appropriately according to its degree of importance and sensitivity.”
  4. The Principle of Ensuring Security: This principle aims to balance the risks and benefits of AI use: “The active use of AI automates many social systems and greatly improves safety. On the other hand […] it is not always possible for AI to respond appropriately to rare events or deliberate attacks.”
  5. The Principle of Fair Competition: This principle makes clear that in Japan, “[a] fair competitive environment must be maintained in order to create new businesses and services, to maintain sustainable economic growth, and to present solutions to social challenges.” It expresses willingness to combat the dominant position of industry players: “Even if resources related to AI are concentrated into specific companies, we must not have a society where unfair [a term not explicitly defined] data collection and unfair competition take place using their dominant position.”
  6. The Principle of Fairness, Accountability, and Transparency: This principle strives to ensure fairness, transparency and accountability in decision-making. Although it does not define these terms, the explanation of this principle includes reference to non-discrimination, provision of case-by-case explanations, opportunities for open dialogue on automated decision making and “a mechanism… to ensure trust in AI.”
  7. The Principle of Innovation: This principle includes that innovation should be continuous, global, and safe.

In this context, Japan has developed a concept of trust in AI that is occasionally interwoven with trust in the corporations responsible for using the systems: “it is important for companies to have shared understanding that a broader sense of quality or trust in AI systems needs to be reinforced by other elements such as corporate governance and dialogue among stakeholders.”105 This is consistent with a more general view of trust put forward by Japan in 2019 at the World Economic Forum (WEF): the Data Free Flow with Trust (DFFT). This initiative resulted in an associated white paper in 2021, proposing a Trust Governance Framework.106 The white paper, authored by Hitachi and METI together with the WEF (perhaps a way for Japan to establish its international credentials without having to involve any other states), offered a fairly generic definition of trust: “the expectation, based on certain values held by a trustee, that other entities (including people, organizations and systems) will behave positively and/or not behave in certain negative ways.”107

This is consistent with a comparatively benevolent view in Japan of its corporations. For example, according to an expert group appointed by the Cabinet Office,108 Japanese corporate governance may be well suited to operationalising terms such as “reliability” or “trust” for an AI system, as it is often “characterized by the careful balancing of valuing the interests of various stakeholders such as employees, business partners, and the local community.” These characteristics may allow the different interests within the supply chain of an AI system to be incorporated and used to control and assess the system, making direct or indirect regulation unnecessary.

Similarly, the same expert group appears to dismiss AI standards as a governance tool, favouring instead a “goal-based governance framework.”109 It even proposes a shift “from rule-based regulations that specify detailed duties of conduct to goal-based regulations that specify value to be attained ultimately, in order to overcome the problem of laws not being able to accommodate the speed and complexity of society.” The logic is to “[o]blige or incentivize information disclosure (transparency rules) so that discipline based on market and social norms will work effectively,”110 that is, to provide enough information that all stakeholders can understand the risks and limitations. In contrast to the European Union’s perspective, self-regulation appears to be preferred, trusting companies to use AI in accordance with broader social considerations.

This positive view of the opportunities offered by AI, and the belief that setting some general goals would be sufficient, may explain the comparatively limited interest in AI-related standards in government documents. In July 2020, Japan’s METI issued Governance Innovation: Redesigning Law and Architecture for Society 5.0, followed by updates in 2021 and 2022.111 This document proposes a governance architecture model that is agile, goals-focused and innovation-friendly, applying an “innovation” approach to governance itself (“Governance for Innovation, Governance of Innovation, and Governance by Innovation”), in line with the technological innovation it aims to support.112 The approach is overt in its view that traditional forms of governance are ill-suited to a rapidly changing cyber-centric society and may stifle innovation through rigid adherence to prescriptive rules.

This does not necessarily equate to a dismissal of regulation, but it conveys a lack of urgency, at least while AI remains in a rapidly changing state of development. Japan is committed to “discuss[ing] ideal approaches to AI governance in Japan, including regulation, standardization, guidelines, and audits, conducive to the competitiveness of Japanese industry and increased social acceptance, for the purpose of operationalizing the AI Principles, taking domestic and international AI trends into account,” as noted in its AI Strategy of 2019 and its Integrated Innovation Strategy of 2020.

In terms of international cooperation, Japan seems willing to treat the standardisation of AI as business as usual. It notes in various documents its participation in ISO/IEC JTC1/SC 42 and the creation of a Japanese Technical Committee for SC 42 under the Information Technology Standards Commission of Japan in the Information Processing Society of Japan, which is taking the lead in negotiations. This Technical Committee is “building positive relationships” with the European Union’s standardisation bodies and notes the existence of an AI joint committee “established between the Directorate-General for Communications Networks, Content and Technology of the European Commission and the Ministry of Economy, Trade and Industry.”113

China: Follow the plan by permitting experimentation

Generating new insights into AI policy and standards in China faces various challenges, namely restrictions on information flows and legal non-disclosure requirements imposed by Chinese authorities, in addition to the inherent difficulties of Chinese-language translation. Most scholarly analysis and commentary in this area of research centres on translated national-level documents released by various Chinese Government agencies.114

Yet, assessing the practical consequences or presumed intention of the Chinese state generally involves interpreting these documents and the surrounding discourse — often complemented by a combination of established theories, case studies and data collection. In our view, AI has emerged as a highly coordinated and distinct vertical policy of central importance to the Chinese state. AI is no longer viewed as one of many emerging technologies capable of realising broad strategic objectives but is now framed as the centrepiece for technological and societal development.

Within this context, standardisation, at least since the release in July 2017 of the country’s first main strategy for AI,115 has been a key pillar of its initiatives, both domestically and internationally. In the six years since, Chinese standardisation efforts have made significant progress, most notably with the release of the Guidelines for the Construction of the National New Generation Artificial Intelligence Standard System 2020, which outlines the high-level design of the AI standards architecture, and the release of in-depth sector-specific standards including the Guidelines for the Construction of a National Smart Manufacturing Standards System 2021 and the National IoV Industry Standard System Construction Guide 2021. These initiatives may partially explain the sudden US interest in developing its own NIST-driven standardisation initiatives and its leadership in the international standardisation process.

From an international perspective, the 2018 AI Standardization White Paper published by the China Electronics Standardization Institute (CESI), a state-run think tank within the Ministry of Industry and Information Technology, recommended that “China should strengthen international cooperation and promote the formulation of a set of universal regulatory principles and standards to ensure the safety of artificial intelligence technology.”116 However, at least at the global level and in relation to foundational standards, there are no significant records of China successfully negotiating its policy objectives into the standards of ISO/IEC JTC1/SC 42 or the ITU.

As in Japan, in 2020, China established a Technical Subcommittee (SAC/TC28/SC42) as the counterpart of ISO/IEC JTC1/SC42, with responsibility for the formulation and revision of national standards in AI fields such as AI foundations, technology, risk management, trustworthiness, governance, products, and applications.117 Overall, as one author recently noted, “[i]ncreased Chinese engagement in international standards bodies is certainly a reality; sudden Chinese supremacy in these bodies is not.”118

However, China has strong incentives to influence international standards, both economic (responding to the needs of its increasingly global companies to develop exportable systems) and political (shaping global efforts to set standards for the ethical and social issues related to AI algorithm deployment). These incentives sit within the broader aim to “skew toward the interests of government-driven technical organizations [while] attenuating the voices of independent civil society actors that inform the debate in North America and Europe.”119 Although that vision may be simplistic, it seems reasonable to assume that these are China’s international objectives in this domain.

Responding to these incentives, the Chinese Government contributes to the costs of participation of its companies’ representatives in the international standardisation process and, in other areas not directly related to AI, such as 5G, this has been successful.120,121 China’s increasing presence and activity in global SSOs over the past decade has received significant attention. One notable example was the contested chair position in the AI-specific ISO/IEC SC 42, which went to Wael Diab, a senior director at China’s Huawei.122 Perhaps this influence could partially explain the vague definitions of terms such as “trustworthiness” in the new fundamental standard for AI,123 or the limited references to human rights in its text. Yet these victories are the exception rather than the rule. Even within SC 42, the secretariat for a given standard is generally more influential in directing the subcommittee’s work in that area, leaving the chair mainly as a moderator. In the case of foundational standards, the secretariat role was held by the US-based ANSI.

Outside the SSOs, according to some sources, China’s standardisation strategy “has been incorporated into the Belt and Road Initiative [BRI] so that, as countries are weaved into this network, they adopt China’s standards.”124 The State Council has produced two development plans for standards specifically related to the BRI,125 the Action Plan to Connect “One Belt, One Road” Through Standardisation (2015-2017) and the Action Plan on Belt and Road Standard Connectivity (2018-2020), which include actionable guidance for firms. Since then, various bilateral agreements tied to the BRI have been used to promote regional standardisation cooperation in Northeast Asia, the Asia-Pacific, Pan-American, European and African regions. As of 2020, China had signed 85 cooperation agreements on technical standardisation with 52 countries.126 However, from a cursory analysis of the texts, it is apparent that these agreements are more likely to relate to export pathways for Chinese products than to any legal obligation to incorporate Chinese standards.

Domestically, though, the scope of standardisation work has been impressive. In 2018, there were already 23 active AI-related national standardisation processes;127 in 2020, the Guidelines for the Construction of a National New Generation Artificial Intelligence Standards System128 were published; and in 2021, CESI updated its White Paper129 explaining the advances and challenges of the process. According to the latter, the domestic standardisation organisations covered in that white paper “have issued, approved, researched, and planned a total of 215 AI standards,” with the vast majority already issued by July 2021.130 This means that stakeholders in China have already discussed, and reached agreement on, many of the issues currently confronting AI governance actors across the world.

From a more general regulatory perspective, Table 3 provides an overview of AI-related Chinese regulatory instruments identified in this research.

Table 3: Regulatory instruments for AI in China

Released | Title | Department/Ministry | Notes
2017 | New Generation of Artificial Intelligence Development Plan (AIDP) | State Council | Seminal document outlining strategic objectives of AI policy
2017 | Three-Year Action Plan to Promote the Development of the New Generation Artificial Intelligence Industry | Ministry of Industry and Information Technology | Outlines implementation of the AIDP for industry
2018 | Guidelines for the Construction of the National Intelligent Manufacturing Standard System | Ministry of Industry and Information Technology | Outlines high-level standards architecture (updated in 2021)
2018 | White Paper on AI Standardization | Standardization Administration of China | Discussion on AI standardisation (updated in 2021)
2020 | New Generation AI Innovation and Development Pilot Zones | Ministry of Science and Technology | Implementation of a regulatory sandbox approach
2020 | Main Points of National Standardization Work in 2020 | Standardization Administration of China | Discussion on standard reforms
2020 | Guidelines for the Construction of the National New Generation Artificial Intelligence Standard System | Ministry of Science and Technology, Ministry of Industry and Information Technology, National Development and Reform Commission, Cyberspace Administration, Standardization Administration of China | Outlines high-level AI standards architecture
2021 | Ethical Norms for New Generation Artificial Intelligence | Ministry of Science and Technology | Establishes ethical norms for AI development
2021 | Artificial Intelligence Standardization White Paper | Standardization Administration of China | Discussion on standardisation of AI
2021 | National Standardization Development Outline | State Council | Seminal document outlining strategic objectives of standard reforms in China
2021 | Guidelines for the Construction of a National Smart Manufacturing Standards System | Ministry of Industry and Information Technology, Standardization Administration of China | Outlines high-level standards architecture
2021 | National IoV Industry Standard System Construction Guide | Ministry of Industry and Information Technology, Standardization Administration of China | Outlines high-level standards architecture
2022 | Internet Information Service Algorithmic Recommendation Management Provisions | Cyberspace Administration | Regulation (first of its kind)
2022 | Provisions on the Administration of Deep Synthesis Internet Information Services (Draft for solicitation of comments) | Cyberspace Administration | Regulation (first of its kind)
2023 | Draft Measures for the Management of Generative Artificial Intelligence Services | Cyberspace Administration | Draft regulation (first of its kind, open for comment)

At the national level, most of the attention has focused on the Internet Information Service Algorithmic Recommendation Management Provisions131 and the Provisions on the Administration of Deep Synthesis Internet Information Services.132 Both instruments create specific rules and objectives for the systems they cover, which deal respectively with recommendations on the internet and so-called ‘deep fakes.’ Among other things, the provisions stress the importance of protecting the disadvantaged (including the elderly) and preventing social disruption, while “positive energy (正能量)” should be promoted. Although some have seen these provisions as tools for social stability, they are perhaps better understood as modest regulatory approaches to certain AI systems, albeit ones that rely on very vague and indeterminate concepts. From that perspective, they carry more weight as signals to economic operators in the field than as strict regulations. Furthermore, their approach is precisely the opposite of standardisation, as several terms and concepts used in those instruments escape measurement or uniformity.

The latest regulatory intervention of the Cyberspace Administration of China was the proposal of draft Measures for the Management of Generative Artificial Intelligence Services (April 2023). This proposal is a world first in the fast-moving field of generative AI, no doubt triggered in part by the highly successful releases of ChatGPT and GPT-4.

Yet, beyond these concrete instruments, one of the most important characteristics of the Chinese approach is a willingness to trial experimental regulatory approaches. In its Guidelines for National New Generation Artificial Intelligence Innovation and Development Pilot Zone Construction Work (2019), the Ministry of Science and Technology created a legal framework for companies and provinces to specialise in, and regulate independently of each other, different applications or aspects of AI. The guidelines envisage an environment for local stakeholders and some named companies to: (i) test institutional mechanisms, policies, and regulations; (ii) promote the in-depth integration of AI with economic and social development; and (iii) explore new approaches to governance in the intelligent era.

The methodology employed is precisely the opposite of the EU approach (which promotes a single set of rules for the whole of the European Union) and assumes instead that there is no clear singular path forward for AI policy; rather, it is only through experiments with technology and governance mechanisms that the objectives of the AIDP can be achieved. Each identified zone is tasked with specialisations based on the strengths of the relevant locality, which can locally apply a mix of industrial policy, incentives and delegated regulations. For example, in the city of Hefei, the focus is on intelligent speech and robotics, whereas Deqing County is tasked with working on autonomous driving, smart agriculture, and county-level intelligent governance. In other words, China is allowing contained regulatory and policy experimentation for AI in a real-world setting, but constrained to a limited geographical and social space.

Canada: The first to regulate

Canada, especially compared to Japan and the United States, has followed a path mainly driven by hard regulation. This path runs through two key instruments, which are relatively self-contained and not particularly dependent on standards.

First, the Treasury Board133 Directive on Automated Decision-Making134 is a mandatory policy instrument which applies to almost all federal government institutions. It has the objective of ensuring “that Automated Decision Systems are deployed in a manner that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law” (Article 4.1). It requires conducting impact assessments before the deployment of these systems (6.1) and requires transparency (6.2) and quality assurance (6.3) in their operation.

The reception in the literature has been mixed135 but, overall, it is viewed as a good attempt to address the risks generated by these systems without resorting to standards for its implementation. It includes instead a list of definitions in its Appendix A.

Second, on 16 June 2022, the Canadian Government introduced Bill C-27, the Digital Charter Implementation Act,136 which includes the Artificial Intelligence and Data Act (AIDA) in its Part 3. AIDA is reminiscent of the EU proposal for an AI Act: the bill extends Canadian regulation of AI to the use of AI systems in the private sector, establishes tiers of risk and focuses regulatory efforts on higher-risk activities. Standards, however, are not used to facilitate implementation; the focus is instead on the outputs of the system and the risk of harm.

On the international stage, Canada was the co-founder, with France, of the International Panel on Artificial Intelligence (IPAI), now the Global Partnership on AI (GPAI), analysed in the next section. Initially announced in 2018 at the G7 Multistakeholder Conference on Artificial Intelligence, it was made public in May 2019 at the end of the informal meeting of G7 digital ministers.137 Although not particularly active in recent times, the GPAI remains the most representative alliance for the global governance of AI and it positions Canada as a thought leader in this space.

4. Other international initiatives and alignments

In this section, we provide an overview of the main international alignments and initiatives in the AI domain. As presented below, liberal democracies are prominently trying to lead in this space, with a significant investment of resources to attract as many countries as possible to regulatory approaches that align with “Western” values.

In the current state of affairs, the practical relevance of these initiatives is relatively limited, but their existence, and the rapid development of some outputs, clearly illustrate the commitment of the European Union and the United States to delivering the basis of global regulation for AI modelled on their shared views and values. The possibility of the creation of a global treaty for AI within the Council of Europe makes that initiative particularly notable.

Global Partnership on AI (GPAI)

As mentioned above, the GPAI is the result of Franco-Canadian cooperation, born at the margins of the G7. The partnership, however, is now closely attached to the OECD, which holds its secretariat.

Its membership is heavily dominated by Global North countries, although it has managed to attract a few Global South states (Brazil, India, Mexico), as well as some like-minded countries in Asia (Singapore and the Republic of Korea).138 It is “built around a shared commitment to the OECD Recommendation on Artificial Intelligence,” presented below, and has the objective of fostering international cooperation among a variety of stakeholders, namely science, industry, civil society, governments, international organisations and academia.139

It frames itself as a forum for the exchange of “scientific, technical, and socio-economic information relevant to understanding AI impacts” and, more tellingly, an initiative that encourages the “responsible development and options for adaptation and mitigation of potential challenges.”140 Its work is distributed among four working groups, with the first two being ‘responsible AI’ and ‘data governance,’ which illustrate the importance of coordination of governance tools between members based on the principles developed within the OECD.

AI initiatives within the Organisation for Economic Co-operation and Development (OECD)

The OECD is an international organisation with a membership composed of 38 liberal democracies. Within this framework, states have developed international norms closely aligned with liberal democratic values. Its sui generis working system, in which member states vote on a council recommendation, allows it to tackle difficult tasks on the international stage.

In the AI context, its main output is the creation of the OECD Principles on Artificial Intelligence141 adopted in May 2019. The principles are attached to the OECD Council Recommendation on Artificial Intelligence, which promotes AI that is innovative, trustworthy and respectful of human rights and democratic values, in line with the principles that guide the members of the organisation. These principles, as noted, for example, in the US section, have been moderately influential at the domestic level in several countries.

The OECD, which can also develop standards — and its website states that it has developed more than 400 — monitors the application of the recommendation and the principles through a system of soft peer review.142 It also produces many documents on the use of AI, especially in the public sector and its effects on the labour market.143

Council of Europe

The Council of Europe is an international organisation created shortly after the Second World War with the idea of being akin to a continental United Nations. Its flagship instrument is the European Convention on Human Rights.144

Although its membership is slightly more diverse than the OECD’s (Turkey is a member, and Russia was until early 2022), the organisation positions itself as a strong supporter of liberal values.

The Council of Europe is the first international organisation to develop a legally binding instrument with respect to AI, as opposed to a set of recommendations or guidelines. The “transversal legal instrument to regulate the design, development and use of artificial intelligence systems” is a treaty under development at the Council of Europe, which could be open to signature by non-members.

In a meeting of the Committee of Ministers (at Deputies level) on 30 June 2022, the Committee on Artificial Intelligence was instructed to “proceed speedily with the elaboration of a legally binding instrument of a transversal nature (‘convention’/ ‘framework convention’) on artificial intelligence based on the Council of Europe’s standards on human rights, democracy and the rule of law.”145

United Nations Educational, Scientific and Cultural Organization (UNESCO)

Finally, the only global instrument on AI is the Recommendation on the Ethics of Artificial Intelligence, approved by consensus at UNESCO’s General Conference (including China) on 24 November 2021, after a two-year process of elaboration.146

The recommendation does not define AI and focuses instead on what AI systems can do and how they should do it. It assumes that AI systems have the capacity to process data and information in a way that resembles intelligent behaviour. This may include aspects of apparent reasoning, learning, perception, prediction, planning or control. On this basis, the recommendation aims to provide a common global ground to make AI systems work for the good of humanity and prevent harm.

The organisation presents the instrument as the first global standard-setting instrument on the ethics of AI, but its legal form (as a recommendation) significantly diminishes its importance. However, it shows that some degree of global alignment is possible and, theoretically, it could serve as the basis for truly global foundational standards.

5. Finding a place for Australia in the global power game of regulating AI: Status quo and options

In the context of domestic AI regulation in Australia, the expectations of commentators have never been particularly high. As Lindsay and Hogan put it, given the “overwhelmingly utilitarian orientation of Australian regulatory culture…, Australia [is] likely to pragmatically favour technological development over ethical design.”147

In June 2021, their prediction materialised in Australia’s first Artificial Intelligence Action Plan.148 The action plan is essentially a vague AI industrial policy combined with a restatement of Australia’s intention to progress the implementation of its AI Ethics Principles. The government also states its intention to make Australia a global leader in responsible and inclusive AI,149 even though the tools at its disposal to achieve that objective are very limited, given that Australia is a minor consumer of AI systems and an even smaller developer of them.

Perhaps more interestingly, the plan establishes a link between Australia’s trade with neighbouring countries and the rapidly increasing digital export opportunities in the region. In this context, it states that “[i]nternational cooperation plays an important role in shaping technology standards, norms and ethics in line with our values. Building on partnerships and collaborations will help grow international trust in Australian AI products… [and] ensure our interests are supported through our involvement in international standard setting.”150

It is important to note that Australia has been actively engaged in the work of ISO/IEC JTC1/SC42 since 2018,151 despite the minor economic interests at stake, initiating, for example, the current project on AI sustainability at ISO. More relevantly, it has also followed through with the idea of partnerships. Australia and Singapore, for example, signed a Digital Economy Agreement in August 2020, accompanied by a Memorandum of Understanding (MoU) on Cooperation on Artificial Intelligence, which includes a mutual commitment to support the “development and adoption of ethical governance frameworks for the trusted, safe and responsible development and use of AI technologies.”152 There have also been some minor para-regulatory initiatives with some degree of public-sector recognition, such as the Responsible AI Network established by the National AI Centre (March 2023).

However, at the current juncture, public authorities in Australia must decide on a more coherent path to follow. The final part of the report presents five options and their main trade-offs. These options are not all mutually exclusive and could be combined in several ways.

Option 1: Do nothing

Given the limited importance of the AI sector in Australia in terms of companies developing AI systems that could be exported abroad,153 Australia may decide that, in the current state of uncertainty, it is preferable to avoid any kind of regulation, whether soft (through standards) or hard (through binding norms). From the perspective of AI developers and companies commercialising AI systems, the Australian AI Ethics Principles may provide sufficient guidance to fill the vacuum. Their operationalisation would then be left to initiatives from civil society and private parties providing implementation guidance, without the intervention of public authorities. In cases where AI systems do cause harm, such matters could always reach the courts, which, in a common law system, could progressively generate norms.

This approach would avoid the risk that early regulatory action becomes an obstacle to innovation or the technology’s development, but it may in turn delay or undermine improvements to human wellbeing, as useful AI systems may not be deployed while their regulatory treatment remains unclear.154

Legal certainty also promotes legal compatibility among markets, favouring exports and imports of AI systems. During a period of regulatory uncertainty, the legislative and executive branches would be relinquishing their power and trusting courts to decide cases ad hoc. Finally, clear norms, even if not optimal, can nudge professional users and developers of AI systems towards safer AI, or AI that better meets the societal needs of Australia, while uplifting organisations’ governance processes.

From a geopolitical perspective, the “do nothing” option would avoid the need to choose a side in the ‘EU-US versus China’ dichotomy, which would have some advantages (such as unrestricted access to AI systems developed in both areas) but could also be seen as a partial abandonment of Australia’s liberal-democratic alignment.

Option 2: Do nothing but openly promote the use of AI RMF (or similar frameworks) while participating in Western-driven international initiatives

Compared to the previous option, promoting voluntary standards developed abroad would define Australia’s position in the global race to regulate AI, aligning the country with the United States and like-minded partners. Maintaining or increasing its participation in the GPAI, the OECD and, potentially, the Council of Europe treaty would carry significant political value.

From a standardisation perspective, Australia could leverage its English-language advantage and promote the use of the AI RMF (or similar frameworks) without creating domestic standards or giving legal recognition to international standards. Given the overlaps noted above between the AI RMF and ISO/IEC standards, it would also favour some degree of coherence with international developments.

Countries such as Canada and Singapore have already opted to participate in international standards, partially to understand common challenges and learn how to navigate their own local requirements.

This option would still allow public authorities to acquire or co-design AI systems developed anywhere in the world (including systems developed in China), and private entities would also be freer to deploy in Australia systems developed abroad. Domestic companies could still develop systems according to foreign standards (for example, the AI RMF or the future EU standards), knowing that there is no risk that Australia would limit the use of their systems domestically. A degree of legal certainty could be created through statements of intent by Australian authorities explicitly committing the country to promoting such standardisation frameworks and guaranteeing that they would not become compulsory at any level or be favoured in public tenders.

On the negative side, the level of protection for society provided through this option is still very limited, and AI systems could be developed or deployed on the basis of values that do not reflect those of Australian society. As Australia does not have a national bill of rights to provide clear backstop protections, this risk could have significant repercussions.

Option 3: Favour the adoption of AI that aligns with domestically developed standards or global technical standards

A more intense degree of intervention would be to favour the adoption of domestic standards, whether developed from scratch by the private entity Standards Australia or simply adopted by reference to international standards, such as those developed in ISO/IEC JTC1/SC 42.

A possible way to do this would be to incorporate public procurement clauses that favour systems developed according to these standards and to monitor their fulfilment. Courts and regulators could also look favourably on companies that employ AI systems developed and deployed in accordance with the chosen standards, for example, by limiting their liability for harm caused by these systems.

On the one hand, this option would significantly enhance the current level of protection for society and, if international standards are adopted, could significantly facilitate the import and export of AI systems. On the other hand, it may significantly limit the choices available to public authorities and companies in acquiring AI systems, or goods with AI systems embedded in them, as they would not be able to acquire systems developed to foreign standards. This may reduce the range of available systems or make the acquisition of certain systems impossible.

Example: Automated mining

If Australia favours international standards that exclude, for example, an AI system controlling mining robots because of a lack of properly recorded testing in real conditions, some Chinese companies that have developed the technology in Africa could be excluded from the Australian market. This could force companies from countries not adhering to Australia’s AI standards to incur extra costs to retest their systems, to refrain from offering their products in the Australian market or simply to delay the adoption of the technology until a standard-compliant option is developed. This would have a clear economic cost for the relevant stakeholders, but it may also be a policy matter, for example in terms of the safety of miners who would need to keep working in dangerous conditions instead of having that dangerous work done by machines.

At the same time, a robot controlled by AI technology that has not been sufficiently tested according to certain international standards, but which nevertheless conforms with other foreign standards, could present a risk that is unacceptable to Australian society and would therefore be legitimately barred from the Australian market.

Option 4: Adopt a decentralised and experimental approach

The Australian Government could leave the regulation of AI systems to the state level. Generally, this could favour regulatory experimentation at the expense of interoperability (e.g., autonomous cars). The most compelling argument for this approach is that, at the current level of knowledge, it is not clear what the optimal level of regulation for AI is or how to implement it technically. A reasonable degree of regulatory diversity between states could serve as a field for regulatory experimentation.

For example, Western Australia could focus its interest on autonomous machines (cars, mining robots) and work towards comprehensive sectorial regulation of these AI systems. New South Wales may continue its process of regulating the use of AI by public authorities. Victoria may instead develop an approach focused on human rights, based on its existing legislation and institutions, which could be adapted to consider AI systems (perhaps with a Victorian ombudsman for human rights and AI systems).

If this approach were adopted, the federal government could look at the experiences in the United States and China. In the United States, there is no central coordination and states are free to legislate according to their powers. California’s proposal to regulate the use of AI for hiring decisions, for example, has received significant media attention, and many other state legislatures are exploring (or have passed) legislation relating to AI systems.155

In China, regional experimentation is directly encouraged and coordinated by the central authorities, with the hope of avoiding overlaps that may entail regulatory disruptions to the market. Such coordination could potentially avoid one of the greatest risks of regulatory diversity: disruption of the country’s internal market.156

Option 5: Regulate AI with binding rules

Generally, there are three possible approaches to the creation of AI-specific binding rules. Their shared characteristic is a focus on safeguards for individual AI systems. However, early binding regulation is always risky: it could stifle innovation or, more likely in this case, limit the range of AI systems that can be used or bought in Australia.

The Canadian example

Canada has shown that it is possible for a middle-sized economy to establish a binding regulatory framework focused on protecting individuals and society from the potential risks generated by AI systems. At least in the context of public services, Canada’s Treasury Board Directive is not that different in logic from the New South Wales AI Assurance Framework.157 It is essentially a system of ex-ante assessment of the risks generated by an AI system before it is used by a government entity. A compulsory assessment of any significant AI system employed by government could be mandated at the federal level to reassure citizens about both the process and the system itself.

In line with current initiatives in Canada (similar also to Brazil’s Marco Legal da Inteligência Artificial 2020), new rules could also be developed for the private sector, focusing on the outputs of the systems and the risk of negative outcomes materialising. This option, however, would require pre-existing standards that could be applied to assess those risks. To work adequately as regulation, comprehensive legislation would also require stronger public involvement in standard development.

This option offers the advantage of a path already explored by a middle power in a way that has not jeopardised its international standing. A narrow version of the legislation, one that mandates the assessment but still allows authorities to act against its conclusions, would not automatically limit the choice among AI systems. Laws could also permit other factors to be considered in the final decision, such as risk mitigation strategies, costs and potential societal benefits.

This type of legislation could also be a way to foster the development of a local AI industry, as domestic developers could better target the requirements of those expert assessments.

Alignment with the EU proposal

The main advantage of this approach, when compared to other hard-regulation options, would be early alignment with the rules that will operate in one of the biggest markets in the world for AI systems. Australia could even directly influence the EU standards called for in the proposal, which are currently under development in the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC).

Furthermore, if EU outreach efforts are successful and EU norms generate some degree of regulatory alignment beyond its borders, this approach will also offer a privileged position both for the Australian developers of AI systems and for buyers of those systems, as it will increase the choice of AI systems compliant with that legislation.

However, if the EU AI Act is not successful, alignment with EU rules would severely limit the choice among AI systems for acquisitions in Australia, while moving the country away from legal alignment with some of its other key allies (such as the United Kingdom, United States and Canada). It could also hamper the potential growth of a domestic AI industry that could export globally. Nonetheless, this risk may be overplayed: Japan and China have both expressed their intention to find ways to facilitate their companies’ compliance with EU rules, and the United States and the European Union are already undertaking a process of coordination on that front.

Developing its own hard regulatory framework but with a sectoral approach

Finally, Australia may consider developing its own regulatory framework at the federal level for AI systems deployed for particular tasks or domains. Concretely, as a global powerhouse in mining and higher education, Australia could work intensively on the development of standards for the use of AI in those two sectors. This could be done in partnership with the relevant actors, with the expectation that legislation could follow to give those standards legal recognition. This process would not solve all the risks that AI systems pose to society, but it could be a first step in regulating AI. Australia could then learn by doing and establish itself as a knowledgeable regulatory authority in those domains. It could shape those standards in a way that influences the development of similar ones at the global level. Given the global importance of these sectors, Australian standards for education and mining could also be adopted in other countries.

Which path to choose?

In this report, we have remained agnostic about these choices. All of them involve trade-offs between different priorities and a notable degree of uncertainty. More importantly, however, the regulation of AI is not an isolated choice, but one that must consider international policy developments and other domestic priorities.

For new rules to work well, public authorities will need to continuously engage with them and monitor how they are fulfilling the desired objectives. These objectives may not be purely about creating an efficient regulatory framework but may involve political and geostrategic judgments. For example, a government may decide that it is better to prioritise certain rules based on their societal implications and align Australia with the United States rather than follow the Canadian example. Conversely, it may want to develop sectoral rules and become a leader in, for example, AI in mining and education, treating regulation as a tool to promote local developers and achieve trust in AI developed and used in Australia in those fields, regardless of potential international frictions, such as closing the market to US or Chinese systems.

What we consider essential in any case is a proper reflection on the options and a sound consideration of the advantages and disadvantages of the different approaches. We hope this report contributes to that reflection.

References

‘AI Risk Management Framework: Initial Draft’ (National Institute of Standards and Technology 2022) <https://www.nist.gov/document/ai-risk-management-framework-initial-draft>

‘AI Risk Management Framework: Second Draft’ (National Institute of Standards and Technology 2022) <https://www.nist.gov/document/ai-risk-management-framework-2nd-draft>

Bartram R, ‘The New Frontier for Artificial Intelligence’ (ISO, 18 October 2018) <https://www.iso.org/cms/render/live/en/sites/isoorg/contents/news/2018/10/Ref2336.html> accessed 27 July 2022

Bello y Villarino J-M and Vijeyarasa R, ‘International Human Rights, Artificial Intelligence and the Challenge for the Pondering State: Time to Regulate?’ [2022] Nordic Journal of Human Rights

Bertuzzi L, ‘The Risk of Fragmentation for International Standards’ (www.euractiv.com, 6 April 2022) <https://www.euractiv.com/section/digital/news/the-risk-of-fragmentation-for-international-standards/> accessed 27 July 2022

Biddle CB, ‘No Standard for Standards: Understanding the ICT Standards-Development Ecosystem’ in Jorge L Contreras (ed), The Cambridge Handbook of Technical Standardization Law: Competition, Antitrust, and Patents (Cambridge University Press 2017) <https://www.cambridge.org/core/books/cambridge-handbook-of-technical-standardization-law/no-standard-for-standards-understanding-the-ict-standardsdevelopment-ecosystem/4A5E1B2411DC3821F4D7BAAE8D7DAD39> accessed 12 July 2022

Blind K and Kahin B, ‘Standards and the Global Economy’ in Jorge L Contreras (ed), The Cambridge Handbook of Technical Standardization Law: Competition, Antitrust, and Patents (Cambridge University Press 2017)

Blinken AJ, ‘Antony Blinken - Speech on Foreign Policy and the American People (Text-Audio-Video)’ (3 March 2021) <https://www.americanrhetoric.com/speeches/antonyblinkenfirstmajorforeignpolicy.htm> accessed 14 October 2022

Büthe T and Mattli W, The New Global Rulers: The Privatization of Regulation in the World Economy (Princeton University Press 2011) <http://www.degruyter.com/view/books/9781400838790/9781400838790/9781400838790.xml> accessed 28 July 2022

Cabinet Office (Japan), ‘Society 5.0’ <https://www8.cao.go.jp/cstp/english/society5_0/index.html> accessed 4 August 2022

China Electronics Standardization Institute, ‘Artificial Intelligence Standardization White Paper (2021 Edition)’ <https://cset.georgetown.edu/publication/artificial-intelligence-standardization-white-paper-2021-edition/> accessed 5 August 2022

Cihon P, ‘Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development’ [2019] Future of Humanity Institute. University of Oxford

Commonwealth of Australia, ‘Australia’s Artificial Intelligence Action Plan’

Council of Europe, ‘Possible Elements of a Legal Framework on Artificial Intelligence’ <https://rm.coe.int/possible-elements-of-a-legal-framework-on-artificial-intelligence/1680a5ae6b>

Demortain D, ‘Expertise, Regulatory Science and the Evaluation of Technology and Risk: Introduction to the Special Issue’ (2017) 55 Minerva 139

Department of Foreign Affairs and Trade, ‘Digital Economy Agreement’ <https://www.dfat.gov.au/trade/services-and-digital-trade/Pages/australia-and-singapore-digitaleconomy-agreement>

Department of Industry, Science and Resources (Australian Government), ‘Safe and Responsible AI in Australia’ <https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/Safe-and-responsible-AI-in-Australia.pdf>

Ding J, ‘Deciphering China’s AI Dream’ [2018] Future of Humanity Institute Technical Report

Ding J, Triolo P and Sacks S, ‘Chinese Interests Take a Big Seat at the AI Governance Table’ (New America, 20 June 2018) <http://newamerica.org/cybersecurity-initiative/digichina/blog/chinese-interests-take-big-seat-ai-governance-table/> accessed 5 August 2022

Parker L, ‘National Artificial Intelligence Initiative: Artificial Intelligence and Emerging Technology Inaugural Stakeholder Meeting’ (United States Patent and Trademark Office, 2022) <https://www.uspto.gov/sites/default/files/documents/National-Artificial-Intelligence-Initiative-Overview.pdf>

European Commission, ‘An EU Strategy on Standardisation - Setting Global Standards in Support of a Resilient, Green and Digital EU Single Market’ <https://ec.europa.eu/docsroom/documents/48598> accessed 27 July 2022

——, ‘Proposal for a Regulation Amending Regulation (EU) No 1025/2012 as Regards the Decisions of European Standardisation Organisations Concerning European Standards and European Standardisation Deliverables’ <https://ec.europa.eu/docsroom/documents/48599> accessed 3 August 2022

——, ‘The 2022 Annual EU Work Programme for European Standardisation’ <https://ec.europa.eu/docsroom/documents/48601> accessed 3 August 2022

‘Executive Order 13859: Maintaining American Leadership in Artificial Intelligence’ <https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence> accessed 2 August 2022

‘Executive Order 13960: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government’ <https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government> accessed 2 August 2022

Expert Group on How AI Principles Should Be Implemented (Japan), ‘AI Governance in Japan Ver. 1.1’ (2021)

Frankel C and Højbjerg E, ‘The Political Standardizer’ (2012) 51 Business & Society 602

‘Global Partnership on Artificial Intelligence - GPAI’ (2022) <https://gpai.ai/> accessed 5 August 2022

Hine E and Floridi L, ‘Artificial Intelligence with American Values and Chinese Characteristics: A Comparative Analysis of American and Chinese Governmental AI Policies’ (11 January 2022) <https://papers.ssrn.com/abstract=4006332> accessed 4 August 2022

‘ISO/IEC 22989:2022(En)’ <https://www.iso.org/obp/ui/#iso:std:iso-iec:22989:ed-1:v1:en> accessed 16 August 2022

‘ISO/IEC JTC 1/SC 42 - Artificial Intelligence’ (ISO, 2022) <https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/committee/67/94/6794475.html> accessed 4 August 2022

Kahneman D, Thinking, Fast and Slow (Penguin UK 2011)

Karanicolas M, ‘To Err Is Human, to Audit Divine: A Critical Assessment of Canada’s AI Directive’ (20 April 2019) <https://papers.ssrn.com/abstract=3582143> accessed 5 August 2022

Kratsios M, ‘AI That Reflects American Values’ Bloomberg.com (7 January 2020) <https://www.bloomberg.com/opinion/articles/2020-01-07/ai-that-reflects-american-values> accessed 30 May 2023

Kuziemski M and Misuraca G, ‘AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings’ (2020) 44 Telecommunications Policy 101976

Lindsay D and Hogan J, ‘An Australian Perspective on AI, Ethics and Its Regulatory Challenges’ (2019) 12(2) Journal of Law & Economic Regulation 12

Lozada P, Rühlig T and Toner H, ‘Chinese Involvement in International Technical Standards’ (DigiChina, 6 December 2021) <https://digichina.stanford.edu/work/chinese-involvement-in-international-technical-standards-a-digichina-forum/> accessed 1 June 2023

Ministry of Economy, Trade and Industry, ‘Governance Innovation: Redesigning Law and Architecture for Society 5.0 (v 1.1)’ (2020)

——, ‘Market Formation Potential Index Ver. 1.0’ <https://www.meti.go.jp/english/press/2021/0421_002.html>

National Institute of Standards and Technology, ‘U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools’ <https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf>

——, ‘About NIST’ (NIST, 2023) <https://www.nist.gov/about-nist> accessed 14 January 2023

——, ‘AI Risk Management Framework: AI RMF (1.0)’ (National Institute of Standards and Technology 2023) <https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf> accessed 18 April 2023

National Security Commission on Artificial Intelligence, ‘Previous Reports’ (NSCAI, 2023) <https://www.nscai.gov/previous-reports/> accessed 14 January 2023

‘National Security Commission on Artificial Intelligence Final Report’ (National Security Commission on Artificial Intelligence 2021) <https://www.nscai.gov/2021-final-report/>

Nativi S and De Nigris S, ‘AI Standardisation Landscape: State of Play and Link to the EC Proposal for an AI Regulatory Framework’ (European Commission Joint Research Centre 2021) EUR 30772 EN <https://data.europa.eu/doi/10.2760/376602> accessed 27 July 2022

NIST, ‘AI Risk Management Framework: Second Draft’ <https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf>

‘NSW AI Assurance Framework’ (2022) <https://www.digital.nsw.gov.au/policy/artificial-intelligence/nsw-ai-assurance-framework> accessed 31 August 2022

OECD, ‘OECD Principles on Artificial Intelligence’ <https://www.oecd.org/going-digital/ai/principles/> accessed 18 August 2021

——, ‘State of Implementation of the OECD AI Principles: Insights from National AI Policies’ <https://www.oecd-ilibrary.org/science-and-technology/state-of-implementation-of-the-oecd-ai-principles_1cd40c44-en> accessed 12 July 2021

——, ‘The Global Partnership on Artificial Intelligence’ (OECD AI Policy Observatory Portal, October 2021) <https://oecd.ai/en/dashboards/policy-initiatives/http:%2F%2Faipo.oecd.org%2F2021-data-policyInitiatives-24565> accessed 5 August 2022

——, ‘Artificial Intelligence’ (2022) <https://www.oecd.org/digital/artificial-intelligence/> accessed 5 August 2022

‘Plan for Federal AI Standards Engagement’ (National Institute of Standards and Technology 2019) <https://www.nist.gov/document/report-plan-federal-engagement-developing-technical-standards-and-related-tools>

Roberts H and others, ‘Achieving a “Good AI Society”: Comparing the Aims and Progress of the EU and the US’ (2021) 27 Science and Engineering Ethics 68

Rühlig TN and ten Brink T, ‘The Externalization of China’s Technical Standardization Approach’ (2021) 52 Development and Change 1196

Scassa T, ‘Administrative Law and the Governance of Automated Decision Making: A Critical Look at Canada’s Directive on Automated Decision Making’ (2021) 54 UBC Law Review

Schwartz R and others, ‘Towards a Standard for Identifying and Managing Bias in Artificial Intelligence’ (National Institute of Standards and Technology 2022) <https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf> accessed 21 October 2022

Shapiro C and Varian HR, ‘The Art of Standards Wars’ (1999) 41 California Management Review 8

Standardization Administration of China (SAC), ‘Guidelines for the Construction of a National New Generation Artificial Intelligence Standards System (Translation)’ <https://cset.georgetown.edu/publication/guidelines-for-the-construction-of-a-national-new-generation-artificial-intelligence-standards-system/> accessed 5 August 2022

State Council of China, ‘China’s New Generation of Artificial Intelligence Development Plan (Non-Official Translation)’ <https://flia.org/notice-state-council-issuing-new-generation-artificial-intelligence-development-plan/> accessed 7 October 2021

Takako H, ‘JIS法改正の狙いと企業への影響を読み解く (Interpreting the Aims of the JIS Act Revision and Its Impact on Companies)’ <https://www2.deloitte.com/jp/ja/pages/strategy/articles/cbs/isos-01.html>

Thayer BA and Lianchao H, ‘We Cannot Let China Set the Standards for 21st Century Technologies’ (The Hill, 16 April 2021) <https://thehill.com/opinion/technology/548048-we-cannot-let-china-set-the-standards-for-21st-century-technologies/> accessed 5 August 2022

Toner H, ‘Will China Set Global Tech Standards?’ (Center for Security and Emerging Technology, 22 March 2022) <https://cset.georgetown.edu/article/will-china-set-global-tech-standards/> accessed 5 August 2022

UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ <https://unesdoc.unesco.org/ark:/48223/pf0000380455> accessed 7 March 2022

US Government, ‘Advancing Trustworthy AI’ (National Artificial Intelligence Initiative, 2022) <https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/> accessed 31 October 2022

US-EU Trade and Technology Council, ‘Inaugural Joint Statement’ <https://www.whitehouse.gov/briefing-room/statements-releases/2021/09/29/u-s-eu-trade-and-technology-council-inaugural-joint-statement/> accessed 2 August 2022

van Leeuwen B, ‘Standardisation in the Internal Market for Services: An Effective Alternative to Harmonisation?’ (2018) XXXII Revue internationale de droit économique 319

Wang A and others, ‘Using Smart Speakers to Contactlessly Monitor Heart Rhythms’ (2021) 4 Communications Biology 1

White House Office of Science and Technology Policy, ‘Blueprint for an AI Bill of Rights’ <https://www.whitehouse.gov/ostp/ai-bill-of-rights/> accessed 24 October 2022

World Economic Forum, Ministry of Economy, Trade and Industry and Hitachi, ‘Rebuilding Trust and Governance: Towards Data Free Flow with Trust (DFFT): White Paper’ (2021) <https://www3.weforum.org/docs/WEF_rebuilding_trust_and_Governance_2021.pdf> accessed 27 September 2022

Council (EU), Council Resolution of 7 May 1985 on a new approach to technical harmonization and standards [1985] OJ C136/1

European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts COM(2021) 206 final

Marco Legal da Inteligência Artificial (Legal Framework for Artificial Intelligence) 2020 [PL 21/2020] (Brazil)

Parliament of Canada, Digital Charter Implementation Act, 2022 [Government Bill (House of Commons) C-27 (44-1)]

Provisions on the Administration of Deep Synthesis Internet Information Services (Draft for Solicitation of Comments) (English translation) 2022

European Convention on Human Rights 1950

Exec. Order No. 13859, 84 FR 3967 (2019)

Guidelines for National New Generation Artificial Intelligence Innovation and Development Pilot Zone Construction Work (English version) 2019

Internet Information Service Algorithmic Recommendation Management Provisions (English Translation) 2022

Marrakesh Agreement Establishing the World Trade Organization 1994

National Artificial Intelligence Initiative Act of 2020, Pub L No 116-283, div E (2021)

Treasury Board of Canada, Directive on Automated Decision-Making 2019

Endnotes