Understanding 42001: Guide for Australian Business

Summary

This document is a guide for Australian businesses seeking to understand AS ISO/IEC 42001, the standard for AI management systems. It details the benefits of AI standards and how they can create a framework for governance and the responsible use of AI. The guide explores the purpose of 42001 and why AI standards are needed.

Full Transcript


GUIDE FOR AUSTRALIAN BUSINESS

Understanding 42001
AS ISO/IEC 42001:2023, Information Technology – Artificial Intelligence – Management System

Contents

Introduction
What is a management system standard (MSS)?
Benefits of an ISO MSS
Why do we need an MSS for AI?
Background and development of 42001
What is 42001?
Who is the standard for?
What is the purpose of 42001?
Why do we need 42001?
How can 42001 help Australian organisations?
Key benefits of 42001 for Australian organisations
Certification through 42001: a mark of trustworthiness
Why do we need AI standards in Australia?
Conclusion
Training

Introduction

If the 20th century was the golden age of steel, the 21st century will be the age of artificial intelligence (AI). Across the world, AI is transforming communities, cultures, and economies. From healthcare to telecommunications, banking to government services, algorithms are disrupting the way we live, work, play and do business.

Australian organisations are harnessing the power of AI in innovative ways. AI tools are now being developed to predict early signs of Parkinson’s disease, to forecast when solar storms will strike, and even to power brain-computer interface technology.[1]

Generative AI is set to revolutionise the global economy, with a study by PricewaterhouseCoopers (PwC) estimating that global GDP may increase by up to 14% (the equivalent of US$15.7 trillion) by 2030 due to the accelerating development and take-up of AI.[2] Global AI adoption is growing steadily, up four points from 2021 according to the IBM Global AI Adoption Index 2022.[3] Today, around half of organisations worldwide report substantial benefits from using AI to deliver efficiency and productivity gains, boost creativity and innovation, and increase profitability.[4] Algorithms will shape the futures – and bottom lines – of the next generation.
Leashing AI: the need for checks and balances

For the full potential of AI to be realised, however, there need to be safeguards around its use. From gender bias to privacy breaches to racial profiling, the brave new world of AI comes with burdens as well as benefits. Consumers are increasingly concerned about the potential risks of AI systems due to their perceived opacity, complexity, potential for bias, loss of accountability, and unpredictability. An Edelman Trust Barometer report confirms that in 2021, trust in AI decreased in 25 out of 27 countries.[5]

This is a critical challenge. As Kimberly Lucy, Director, GRC Standards, Microsoft, says in Creating Trust in AI Through Standards: A Management System Approach, “Artificial intelligence has the power to affect virtually every aspect of human life, from work to healthcare to leisure. At the same time, the damage that can be created by such powerful systems is immense if left unchecked.[6] How can AI be developed and used in a way that is responsible and that leads to trust and assurance for consumers and other stakeholders? And what is the role of standards in creating this trust and assurance?”

“Trust is the most powerful force underlying the success of any organisation – yet it can be shattered in an instant.
This helps explain concerted actions by governments and industry to create a credible framework for trustworthy AI.” — Michel Girard, senior fellow at the Centre for International Governance Innovation (CIGI)[7]

[1] Unlocking the Benefits of AI (pwc.com.au)
[2] Economic impacts of artificial intelligence (europa.eu)
[3] IBM Global AI Adoption Index 2022
[4] IBM Global AI Adoption Index 2022
[5] 2021-edelman-trust-barometer.pdf
[6] 01_05_Kimberly_ISO_IEC_AI_Workshop-Trust-in-AI-through-MSS_Kim-Lucy.pdf (jtc1info.org)
[7] A Two-Track Approach for Trustworthy AI by Centre for International Governance Innovation – Issuu

AI standards: a critical watchdog

These growing concerns have prompted a move worldwide to regulate AI. The new EU AI Act, for example, aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI.[8] The Act establishes obligations and safeguards for AI based on its potential risks and level of impact.
As AI advances across industries and jurisdictions, AI standards are being regarded as the next step in the regulation of AI.[9] Leading AI expert Geoffrey Hinton is one of several prominent voices warning of the dangers of scaling up AI without adequate guardrails created through greater global collaboration, regulation, and standardisation.[10]

Worldwide, governments are responding to the need for robust standards in AI, from the US National Institute of Standards and Technology (NIST)’s AI Risk Management Framework (RMF)[11] to the UK Government’s National AI Strategy, which notes that the integration of standards in the government’s model for AI governance and regulation is critical for harnessing the power of AI while leashing its risks.[12]

The Australian Government released an interim response to the Safe and Responsible AI in Australia consultation in early 2024, which included working with industry on a voluntary AI Safety Standard, the potential for a watermarking or labelling scheme for AI-generated materials, and the establishment of a national expert advisory group to support these developments. Australia consumes more AI products than it produces, so the responses of other countries such as the EU, UK, US and Canada are being closely monitored to best align Australia with international efforts.[13] In addition, the NSW AI Assurance Framework was put into effect in 2022 to aid government departments in designing, building and using AI-enabled products and solutions appropriately and in coordination with the NSW Government AI Ethics Principles.[14]

As AI systems continue to evolve, standards have a critical role to play in creating safeguards to manage risks, mitigate harms, and build that vital element – trust.
Wael William Diab, chair of ISO/IEC JTC 1/SC 42, says, “The importance of a trustworthy AI system to ensure widescale adoption cannot be understated.”[15]

AS ISO/IEC 42001:2023 – a new way forward

In helping to drive the responsible use of AI, AS ISO/IEC 42001:2023 (abbreviated to “42001” in this report) is a new global AI management system standard that addresses the need for consistency and ethical implementation of AI across borders.

[8] Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI | News | European Parliament (europa.eu)
[9] What will the role of standards be in AI governance? | Ada Lovelace Institute
[10] The Race to Regulate Artificial Intelligence: A Global Challenge | by Jean Loup P. G. Le Roux | Medium
[11] AI Risk Management Framework | NIST
[12] Unlocking the benefits of artificial intelligence with standards (iec.ch)
[13] https://www.minister.industry.gov.au/ministers/husic/media-releases/action-help-ensure-ai-safe-and-responsible
[14] https://www.digital.nsw.gov.au/policy/artificial-intelligence/nsw-artificial-intelligence-assurance-framework
[15] Achieving trustworthy AI with standards | IEC e-tech

About this report

The purpose of this report is to provide information on 42001, its benefits, and how it will impact Australian organisations and the broader community. It covers four key areas:

1. What are management system standards (MSS) and why do we need them?
2. Background and information about 42001.
3. The benefits of the standard.
4. Why we need AI standards in Australia.

What is a management system standard (MSS)?
Management system standards (MSS) are standards that can provide support to organisations of all types and sizes in implementing an integrated system for dealing with areas such as health and safety, environmental issues, governance, risk management and training.[16]

“Management system standards help organisations to put an integrated system in place, including, for example, senior management support, training, governance processes and risk management – all essential to getting AI governance and accountability right,” says Tim McGarr, Sector Lead (Digital), BSI.[17]

There are more than 80 ISO MSS[18] covering a wide range of areas critical to organisational culture and processes. These include ISO 9001:2015, Quality management systems — Requirements; ISO/IEC 27001:2022, Information security, cybersecurity and privacy protection — Information security management systems — Requirements; and ISO 14001:2015, Environmental management systems — Requirements with guidance for use.[19]

Benefits of an ISO MSS

An ISO management system standard can help organisations by:[20]

- Specifying clear, repeatable steps to achieve specific company objectives and goals.
- Assisting with risk assessment and system impact assessment.
- Helping to establish a healthy organisational culture, from leadership to employee engagement.
- Helping to deliver better services and products, and increasing general customer value.[21]

[16] ISO – Management system standards
[17] How a standard in development (ISO/IEC 42001) can meet collective AI Governance goals – AI Standards Hub
[18] ISO – Management system standards
[19] ISO – Management system standards
[20] Managing AI: What Businesses Should Know about the proposed ISO Standard (cms-lawnow.com)
[21] ISO – Management system standards

Why do we need an MSS for AI?

“AI presents unique risks that need to be managed carefully and systematically due to its self-learning nature.
Unlike traditional software and IT systems, AI is an ‘inference machine’ trained for decision-making, with or without a human.[22] The automated, complex, self-learning and scalable nature of AI systems poses particular risks beyond that of regular software,” says Geoff Clarke, Regional Standards Manager, Microsoft.

“A key challenge lies in the fact that continuous machine learning AI systems could produce different results over time from the same input,” Clarke says. “This makes such systems very different to those where you test at launch and can be confident the results will always stay within your acceptable parameters. The huge opportunity with AI is that it can adapt and use data instead of human-coded logic. This is really a leap in capability for any organisation, but also puts much more emphasis on the quality and protection of the underlying data. AI also could potentially make decisions, and this has serious governance ramifications because you need to give the right amount of decision-making power to the AI system. You must make sure that whoever is in charge of the AI system remains responsible for the system – and has the requisite authority and control to do so.”

Key challenges of AI

Professor Mark Levene, Principal Research Scientist, National Physical Laboratory, says key challenges with AI exist around:[23]

- the relative transparency of automated decision systems
- the degree of autonomy of AI systems
- the fact that machine-learning systems are trained on data, unlike traditional procedural programming

“AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behaviour. AI risks – and benefits – can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.” — AI RMF[24]

[22] The new global risks management standard benchmark for AI? | Gilbert + Tobin Lawyers: Law Firm in Sydney, Melbourne & Perth (gtlaw.com.au)
[23] How ISO/IEC 42001 guides organisations toward Trustworthy AI – AI Standards Hub
[24] The new global risks management standard benchmark for AI? | Gilbert + Tobin Lawyers: Law Firm in Sydney, Melbourne & Perth (gtlaw.com.au)

Managing risk

Stela Solar, director at the CSIRO National Artificial Intelligence Centre, says AI standards have a critical role to play in this space. “AI systems that are developed without appropriate checks and balances can have unintended consequences that can significantly damage company reputation and customer loyalty.”[25]

At the 2023 TechLeaders Conference,[26] Solar said that 42001 helps identify “which organisations are more mature with AI governance. By default, organisations who embrace those standards will be demonstrating that they’re more mature in their AI practice and governance. Those who are not may be seen as higher risk.”

The importance of guardrails for AI

It is critical that the right guardrails are in place when it comes to implementing AI. PwC Australia cites a number of key risks that organisations face without appropriate safeguards:[27]

- Non-compliance with data protection and privacy regulation
- Over-reliance on AI for automation and decision-making; human oversight and review are critical safeguards
- Potential reputational damage from misuse of AI in products and services
- Discrimination or bias arising from lack of oversight and review of training data
- Inaccurate insights or faulty decisions arising from the quality of training data, model design, training approach or improper usage of the model

Background and development of 42001

At the inaugural Overview of the AI Standards Program and Novel Ecosystem Approach ISO/IEC Workshop Series in May 2022, Wael William Diab, Chair of ISO/IEC JTC 1/SC 42, Artificial Intelligence, said the AI ecosystem was “ripe for standardisation.
Digital transformation of industries has fundamentally changed the landscape for IT standardisation.”[28] These key changes include:[29]

- The rise in importance of non-technical requirements, such as ethical considerations and designing trustworthy AI systems.
- The importance of the data ecosystem alongside hardware, software and operational technologies.
- The increasing importance of certification, third-party audits and increasing end-user confidence.

With the push for artificial intelligence regulation gaining momentum globally, the need for a robust AI governance framework led to the development of 42001.[30] In developing this standard, SC 42 set out to leverage the MSS approach, says Mr Diab. “MSSs have been successful in other areas, such as ISO 9001, which specifies requirements for a quality management system, and the idea [was] to apply a similar approach for AI.”[31]

[25] Many Australian businesses stuck in building responsible AI programs: CSIRO – AI – Digital Nation (digitalnationaus.com.au)
[26] ISO set to release an AI management system standard this year – AI – Digital Nation (digitalnationaus.com.au)
[27] Unlocking the Benefits of AI (pwc.com.au)
[28] 03_09_Wael_Overview-of-ISO_IEC-AI-for-ISO-IEC-AI-Workshop-0522-rev-28_3.pdf (jtc1info.org)
[29] 03_09_Wael_Overview-of-ISO_IEC-AI-for-ISO-IEC-AI-Workshop-0522-rev-28_3.pdf (jtc1info.org)
[30] ISO/IEC 42001 | BSI (bsigroup.com)
[31] Transforming industry and society through beneficial AI | RAPS

What is 42001?

42001 is an MSS developed specifically for AI.[32] It specifies requirements and provides guidance on establishing, implementing, maintaining and continually improving an AI management system.[33] Tim McGarr, Sector Lead (Digital), BSI, says that “there is a widespread and growing recognition that management systems can have a positive long-term impact for organisations.
Given that AI is so wide-reaching, it is anticipated that 42001 will become as integral to organisational success as established management system standards such as ISO 9001 in quality management, ISO 14001 in environmental management and ISO/IEC 27001 in cyber security.”[34]

Who is the standard for?

42001 is a broad-ranging standard by design. It is applicable to any organisation, regardless of size, type and nature, that provides or uses products or services that use AI systems, helping them develop or use AI systems responsibly.[35]

42001 is not a replacement for an organisation’s existing frameworks and guidelines; it is a complementary document. In Australia, 42001 will complement existing frameworks and guides, including Australia’s AI Ethics Principles.[36]

Fast facts

According to the Creating Trust in AI Through Standards: A Management System Approach report:[37]

- 42001 is based on a common “high-level structure” with required management clauses. This is a feature of all MSS.
- It promotes an iterative process of continual self-evaluation and accountability.
- Organisations can be certified by a third party to the applicable MSS.

What is the purpose of 42001?

42001 helps guide organisations on how to best manage their AI systems. Trustworthy, responsible AI practices are becoming critically important to the market at large. Over 85% of IT professionals agree that consumers are more likely to choose a company that adopts transparency in how its AI models are designed and used.[38]

“42001 provides a framework with in-built flags that acts as a system of checks and balances for an organisation when it comes to implementing AI responsibly,” says Dr Ian Oppermann, Co-founder, ServiceGen, and Industry Professor, UTS. “It is not a fool-proof system,” Dr Oppermann warns, “but signals that as an organisation, you are trying to embed correct principles in the way you manage your business.
In a way, 42001 acts like a recipe to follow.”

[32] How a standard in development (ISO/IEC 42001) can meet collective AI Governance goals – AI Standards Hub
[33] Information Technology — Artificial intelligence — Management system – AI Standards Hub
[34] How a standard in development (ISO/IEC 42001) can meet collective AI Governance goals – AI Standards Hub
[35] Information Technology — Artificial intelligence — Management system – AI Standards Hub
[36] Australia’s AI Ethics Principles | Australia’s Artificial Intelligence Ethics Framework | Department of Industry Science and Resources
[37] 01_05_Kimberly_ISO_IEC_AI_Workshop-Trust-in-AI-through-MSS_Kim-Lucy.pdf (jtc1info.org)
[38] IBM Global AI Adoption Index 2022

Why do we need 42001?

There is a knowledge gap across Australian organisations when it comes to managing the potential risks and complexities of AI. Data gathered for the latest Australian Responsible AI Index report found that although the vast majority (82%) of companies surveyed believed they were taking a best-practice approach to responsibly using AI in their businesses, less than a quarter (24%) had any measures in place to ensure that was what they were actually doing.[39]

These figures are mirrored globally. The IBM Global AI Adoption Index[40] shows that while most companies understand the vital importance of ensuring consumer trust in the way an organisation uses AI, relatively few have actually codified these principles into official rules and policies.

In addressing demand from organisations for a structured approach to governing and managing AI technologies, 42001 helps provide a framework to address the challenges associated with AI implementation while helping to promote transparency and responsibility in the use of AI.

How can 42001 help Australian organisations?

A management system standard like 42001 helps address the “potential risks in an unregulated AI system”, says Microsoft’s Geoff Clarke.
“42001 requires the organisation to conduct an impact assessment report to understand the necessary boundaries, and to have system controls, monitoring and ‘brakes’ in place to ensure the decisions you are making via that AI system are within the acceptable parameters. It’s all about the strategic and responsible use of AI… how do you use this wonderful new technology, which brings opportunities as well as risks and new responsibilities to an organisation? AI is so powerful that every organisation has to look into its potential use, or they’re really not doing the right thing by their stakeholders.”

Asking the right questions

The standard prompts Australian organisations to “ask the right questions”, says Lyria Bennett Moses, Professor in the Faculty of Law and Justice at UNSW Sydney, and Director of the UNSW Allens Hub for Technology, Law and Innovation. “The standard is quite clear at setting out all the different things to consider, such as the policies that need to exist and the resources that might be required. It helps organisations do all of that work in a way so they are less likely to miss out an important step.”

As a useful framework for organisations, “the standard will help organisations to implement controls around the appropriate use of artificial intelligence. It’s a vehicle to help organisations develop good policies in line with their own objectives and values.”

[39] Artificial intelligence regulation: Government needs to step in to prevent ‘irresponsible’ guardrails (afr.com)
[40] IBM Global AI Adoption Index 2022

A safeguard against bias

Bennett Moses says 42001 also “helps, indirectly, to ensure data quality and guard against bias, because it prompts the questions.” Our human biases can infiltrate AI, which trains on data. Organisations have an ethical obligation to root out algorithmic bias and help ensure equity when it comes to everything from screening CVs to hiring or security checks.
For organisations, it is not just a matter of social responsibility but a commercial imperative.[41] Bias in AI erodes consumer trust and raises the risk of severe reputational damage – see the furore over Amazon’s AI recruiting tool, which was found to be discriminating against female job applicants.[42] Bennett Moses says that “there can be a risk of unfair bias in machine learning processes, so while further standards work is taking place in that space, this standard flags the issue.”

Fostering public trust and transparency in AI

“Australians have a trust problem with AI”, says Dr Ian Oppermann. Standards like 42001 can play an important role in fostering greater public confidence in the responsible use of AI.

“The dangers of not having any kind of regulation or any kind of frameworks form part of a broader conversation that we’re having globally. Certainly, in Australia, people are losing trust in AI and the way that data is being handled. What we can do is demonstrate trustworthiness through systems and processes, and then eventually trust is built over a period of time,” says Dr Oppermann. “Until we develop those overarching systems or processes, we don’t have a way to tell if we can trust something or not – we don’t have something to measure against. The AI management standard gives you a way to think about how to do that.”

[41] Why AI bias can hurt your business | WIRED UK
[42] Amazon scraps secret AI recruiting tool that showed bias against women | Reuters

Key benefits of 42001 for Australian organisations

Risk management
42001 can assist organisations in trouble-spotting and mitigating risks in AI technologies.[43] By incorporating robust risk management practices into their AI governance framework, organisations can help safeguard against biases, security breaches, and other potential harms.[44]

Ethical AI implementation
42001 helps organisations embed an ethical approach to their AI management system.

Improved decision-making and accountability
42001 helps promote transparency and explicability in AI systems. This, in turn, enables better decision-making processes and accountability for AI outcomes.

A scalable, integrable management system
The flexibility provided by the standard’s harmonised structure allows easy integration of any existing privacy or cybersecurity management systems.[45]

A globally recognised standard
42001 is a globally recognised standard that provides guidelines for the governance and management of AI technologies.[46]

Systematic approach
42001 offers a systematic approach to addressing the key challenges with AI technologies, including ethical use, data privacy, bias, accountability and transparency.

Boosting innovation and creativity
The AI management standard acts like a kind of safety net – and this helps to boost confidence, innovation, and experimentation. “Australians are great at inventing little things, very bad at scaling internationally,” says Dr Ian Oppermann. “The ability to experiment safely and try new things, whether it’s in the public or private sector, is quite a powerful thing. Having management frameworks like these means giving people safety mechanisms that makes experimentation easier – there’s a real potential for innovation. This means that a whole lot of backyard industries could, just through serendipity, create really powerful innovations and scale internationally.”

[43] ISO 42001 Consultants for the AI Management System Standard (assentriskmanagement.co.uk)
[44] ISO 42001 Consultants for the AI Management System Standard (assentriskmanagement.co.uk)
[45] ISO – Management system standards
[46] ISO 42001 Consultants for the AI Management System Standard (assentriskmanagement.co.uk)

Benefits of 42001

- Provides certification, sending a signal to the market that an organisation takes its responsible AI use seriously.[47]
- Helps improve the quality, security, traceability, transparency and reliability of AI technologies.
- Helps meet customer, staff and other stakeholder expectations around the ethical and responsible use of AI.
- Helps improve efficiency and risk management.

Certification through 42001: a mark of trustworthiness

42001 is the first ISO standard defining a certifiable management system for AI.[48] As a certifiable AI standard, it helps provide organisations with a mark of trustworthiness in AI use. This can help promote commercial scalability – vital for encouraging growth and expansion across all critical infrastructure sectors, including health, education, and transport.

As Tim McGarr, Sector Lead (Digital), BSI, says in Unlocking the benefits of artificial intelligence with standards:[49]

“Alongside good standards and regulation, there is also recognition of the need to utilise conformity assessment to build trust in AI, building on the established global testing, certification and accreditation infrastructure. AS ISO/IEC 42001 is one such standard being written in such a way that it can be certified against, and many more such standards should come.
Ultimately, to build trust in AI there is a need for a discoverable and navigable framework of regulation, standards and conformity assessment that both cuts across sectors and deals with sector-specific challenges.”

Why certification matters

Governments around the world are looking at making AI trustworthy through standards. “As expected, the EU, the UK and China have pledged to incorporate international digital governance standards and certification programs as a compliance mechanism in upcoming regulations,” says Michel Girard in his report A Two-Track Approach for Trustworthy AI.[50] “As anticipated, compliance to mandatory requirements will rely on digital governance standards, certification programs and accreditation schemes.”

A robust and globally recognised certification process through 42001 can help avoid overregulation in the AI space while boosting innovation and public confidence in AI use by organisations.[51]

[47] ISO/IEC 42001 | BSI (bsigroup.com)
[48] The Race to Regulate Artificial Intelligence: A Global Challenge | by Jean Loup P. G. Le Roux | Medium
[49] Unlocking the benefits of artificial intelligence with standards | IEC
[50] PB_no.174.pdf (cigionline.org)
[51] AG1_3_WP_Executive_Summary_Certification_AIsystems.pdf (plattform-lernende-systeme.de)

“A trustmark” – Geoff Clarke

Microsoft’s Geoff Clarke tells Standards Australia that “this is the major standard that organisations will certify their AI systems against to show they are doing AI in a responsible manner. We expect most major companies and hopefully government departments – as well as thousands of smaller organisations – will get certified for conformance to this standard.
Once you have certification against 42001, it’s a kind of trustmark that an organisation’s stakeholders can look at and be confident that they are taking at least a baseline, responsible approach: they have the right management processes in place to ensure they have done a proper impact assessment and a proper risk analysis, and they are improving as they go and getting the feedback loop going. It’s not about AI as a technology in itself, but how you implement it, and making sure you are doing so with the right processes.”

Clarke says he envisages Australian organisations embracing certification for two key purposes: firstly, to make sure that the organisation itself is implementing AI responsibly and following all the right processes, checks and balances; and secondly, to ensure that other organisations they deal with are also using AI responsibly.

Certification provides great value to Australian consumers because “you can’t just go to an organisation and peel back the covers and know that they are using AI responsibly. But what you can do is get a third-party auditor to go through the organisation to check and then provide the certification to show that, in terms of the question of ‘is this organisation using AI responsibly’, certification of conformance to 42001, while not answering the whole question, provides a good base-level answer.”

“Sending a message to customers and the government” – Lyria Bennett Moses

“With certification, you’re sending a message through the supply chain, both to customers and to government,” says Bennett Moses.

Bennett Moses says that if the Australian government allows certification to 42001 or other international standards to be used to demonstrate responsible practices for AI, rather than creating niche Australian requirements, it will help organisations avoid duplicative compliance.
“What we don’t want is organisations having to do different things for different jurisdictions within Australia, with each coming up with its own version of what ‘responsible AI’ looks like. Certification also increases the ability of businesses to rely on each other’s systems, directly or indirectly, so I think it helps lift performance in that way. It is particularly helpful in larger organisations, with all the systems, processes and policies that need to be co-ordinated.”

What are the potential economic benefits of certification?

Clarke says “this affects a lot of organisations but is particularly important for organisations who want to be exporting, because this is going to be a globally relevant standard. So, if you want to sell to not just 25 million people but billions of people, you are going to need an international tick of approval. This is a really good one. I think anyone who is going to use AI at scale is going to find it helpful – so agriculture, mining, aged care, health, all the key sectors in Australia would benefit from 42001 because it is so broad.”

Potential benefits of certification with 42001

- It acts as a kind of trustmark, demonstrating that an organisation takes its responsibilities for the ethical use of AI seriously.
- It can provide a competitive advantage in a crowded marketplace.
- It raises public confidence and trust that there are baseline guidelines in place and that organisations are being audited by an independent body.

Accreditation pilot for AI management systems

Canada has implemented a first-of-its-kind pilot to define and test requirements for a conformity assessment program for 42001.[52] “Certification to national and international standards for AI management systems will allow organisations to prove their dedication to responsible use of AI, raising the confidence of customers and partners in their operations,” says Chantal Guay, SCC CEO.
The first stage of the pilot will involve one conformity assessment body and one AI developer/user, assessing against the requirements of ISO/IEC 42001, the AI management system (AIMS) standard, as well as the Algorithmic Impact Assessment (AIA) developed by the Treasury Board of Canada Secretariat.

“Standardisation is one of the main pillars of the Pan-Canadian Artificial Intelligence Strategy,” says Guay. “Standards and conformity assessment provide the assurance and trust that products and services meet all necessary requirements, which will help drive innovation and the adoption of novel AI technologies in support of this national strategy. This project will help us define the requirements for developing, implementing, maintaining and continually improving AI management systems.”

52 SCC Launches Accreditation Pilot for AI Management Systems (responsible.ai)

Why do we need AI standards in Australia?

AI standards are critical to building trust through the responsible use of AI. The scalability and international scope of AI make it difficult to develop governance frameworks with the appropriate scope to guide its use.
AI standards are an increasingly powerful tool for addressing these governance challenges, providing common guidelines, principles and technical specifications for the development, deployment and governance of AI systems.53 The EU’s AI Act, for example, aims to boost AI trustworthiness, accountability, risk management and transparency through the adoption of technical and process standards.54 The UK Government’s National AI Strategy also notes the critical importance of integrating standards into the government’s model for AI governance and regulation, referencing 42001 as a key tool in achieving this goal.55

Benefits of AI standards

AI standards can provide clear baselines that help increase reliability, transparency and trust in AI systems while mitigating risks and concerns around bias, fairness, privacy and accountability. These standards can also play a vital coordinating role in the increasingly complex AI governance landscape by enabling a collaborative culture between key stakeholders.56

As outlined in Standards Australia’s Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard report,57 consistent international standards in ICT have increased interoperability and security across technology platforms, decreased barriers to trade, ensured quality and built greater public and user trust in digital products and services. Standards, including those of the ISO and IEC, have enabled agreement across borders and within large commercial environments on issues as diverse as information security (ISO/IEC 27001), cloud computing (ISO/IEC 27017) and quality management (ISO 9001).
Australian companies and public sector agencies already use international standards adopted in Australia to improve a range of administrative and assurance processes.58 Standards can help address some of the key risks and concerns around AI identified in public submissions to the Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard,59 including data privacy, data quality and bias, inclusion and fairness, and safety and security.

53 Mapping AI Standards Across AI Governance, Risk and Compliance (holisticai.com)
54 Mapping AI Standards Across AI Governance, Risk and Compliance (holisticai.com)
55 Unlocking the benefits of artificial intelligence with standards | IEC
56 Mapping AI Standards Across AI Governance, Risk and Compliance (holisticai.com)
57 An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard (Standards Australia)
58 The NSW Cyber Security Policy outlines the mandatory requirements to which all NSW Government departments and Public Service agencies must adhere to ensure cyber security risks to their information and systems are appropriately managed, referencing standards such as ISO/IEC 27001.
59 An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard (Standards Australia)

Bridging the gap

“Standards connect ‘hard ethics’, such as legislative and regulatory policy from government and industry, with ‘soft ethics’, which foster community expectations and democratic resilience and which arise from civil society, families, and individuals. Only a whole-of-society approach can meet the challenges to governance in the digital age. Standards are the crucial connective tissue.”
—Written submission, Jeff Bleich Centre60

Dr Ian Oppermann says AI standards are analogous to electrical safety standards when it comes to public health and safety.
“When electricity first came around, it was a modern marvel that could do all these amazing things... but until we set the standards, people were being electrocuted, buildings were burning down, there were a whole lot of unintended consequences from people wanting to use this new thing called electricity for good purposes. We are in the same situation with AI.

“There are regulations around discrimination, regulations around privacy, cyber security. But unless you understand how what you’re doing with AI data lines up against that principles-based legislation, you really are just guessing. Standards help bridge the gap. That’s why we need them.”

Assessing bias in AI systems

Bias can creep into AI algorithms because AI systems learn from training data, which can include biased human decisions or reflect historical or social inequities.61

ISO/IEC TR 24027:2022, Information technology — Artificial intelligence (AI) — Bias in AI systems and AI-aided decision making, provides guidance to help ensure AI technologies and systems meet critical objectives for functionality, interoperability and trustworthiness.62 It also specifies measurement techniques and methods for assessing bias, with the intention of addressing bias-related vulnerabilities.

Aurelie Jacquet, Chair of the IT-043 committee, says the issue of bias is a critical one for Australian businesses. “[The standard] is invaluable because it provides guidance on how to assess bias and fairness through metrics, and on how to address and manage unwanted bias. It can effectively help organisations operationalise our AI fairness principle.”63

60 An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard (Standards Australia)
61 What Do We Do About the Biases in AI? (hbr.org)
62 Artificial Intelligence standard to help industries mitigate bias in AI systems – Standards Australia
63 Artificial Intelligence standard to help industries mitigate bias in AI systems – Standards Australia

The AI Standards Roadmap and the importance of standards in AI64

• Standards affect 80% of global trade and are important in relation to emerging technologies like AI.
• Standards provide an adaptive and responsive approach to managing AI. According to NIST, “AI standards that articulate requirements, specifications, guidelines, or characteristics can help to ensure that AI technologies and systems meet critical objectives for functionality, interoperability, and trustworthiness—and that they perform accurately, reliably, and safely”.65
• The growth of AI regulation globally, particularly in the EU, UK and US, underscores the need for standardised AI governance to ensure trustworthiness, accountability, risk management and transparency.66
• Differences in regulatory language and approaches create a lack of global alignment and consensus on crucial aspects of AI, such as taxonomy, governance mechanisms, assessment methodologies and measurement, making standardisation crucial.67
• Adopting standards enables organisations to benchmark, audit and assess AI systems, ensuring conformity and performance evaluation, benefiting developers, consumers and the data subjects impacted by AI technologies.68
• AI systems deal with highly sensitive data, including personal and financial information. Organisations must ensure that concerns about privacy and security are addressed and that data is protected from unauthorised access, theft and misuse – standards play a critical role in this space.
• Algorithmic bias can entrench unfairness. Standards help ensure data quality, thus guarding against bias.
• Standards can play a strong role in promoting inclusive design and use of AI consistent with laws and good practice.

64 An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard (Standards Australia)
65 NIST (2019). US Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools. Washington: NIST (US Department of Commerce), p. 8.
66 Mapping AI Standards Across AI Governance, Risk and Compliance (holisticai.com)
67 Mapping AI Standards Across AI Governance, Risk and Compliance (holisticai.com)
68 Mapping AI Standards Across AI Governance, Risk and Compliance (holisticai.com)

Conclusion

42001 is a powerful new tool for managing the risks and harnessing the potential of AI. However, we must not forget that human oversight is critical: ultimately, AI systems cannot be afforded too much autonomy.

“From a governance perspective, people must still be accountable for AI. You can’t just say ‘this is a great idea, let’s run a system’ and stand back. You have to be responsible for it,” says Geoff Clarke. “This is the essence of responsible AI.”

Lyria Bennett Moses concurs: “People having more trust in AI systems is not necessarily the end goal. People may sometimes rely too heavily on flawed systems. The whole discussion about AI and AI standards comes back to the importance of people, of human intelligence and human intervention.”

As Lorraine Finlay, Australia’s Human Rights Commissioner, says, “humanity needs to be placed at the heart of our engagement with AI. At all stages of a product’s lifespan – from the initial concept through to its use by the consumer – we need to be asking not just what the technology is capable of doing, but why we want it to do those things. Technology has to be developed and used in responsible and ethical ways so that our fundamental rights and freedoms are protected.”69

Australia needs to be a world leader in responsible and ethical AI.
The truth is that AI itself is neither friend nor foe. The more important question is whether the people using AI tools are using them in positive or negative ways.

69 Australia needs to be a world leader in ethical AI | Australian Human Rights Commission

Training

Standards Australia has partnered with the Australian National University to offer training for AS ISO/IEC 42001:2023. For course details, scan the QR code or visit: https://cce.anu.edu.au/ai-management-system-standard/