ISO/IEC 42001:2023 Information Technology - Artificial Intelligence Management System
Summary
This document specifies the requirements and provides guidance for establishing, implementing, maintaining, and continually improving an AI management system within an organization. It is intended for organizations providing or using AI products and services.
Full Transcript
Foreword

Introduction

1 Scope

This document specifies the requirements and provides guidance for establishing, implementing, maintaining and continually improving an AI (artificial intelligence) management system within the context of an organization. This document is intended for use by an organization providing or using products or services that utilize AI systems. It is intended to help the organization develop, provide or use AI systems responsibly in pursuing its objectives and meet applicable requirements, obligations related to interested parties and expectations from them. This document is applicable to any organization, regardless of size, type and nature, that provides or uses products or services that utilize AI systems.

2 Normative references

The following documents are referred to in the text in such a way that some or all of their content constitutes requirements of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.

3 Terms and definitions

For the purposes of this document, the terms and definitions given in ISO/IEC 22989 and the following apply. ISO and IEC maintain terminology databases for use in standardization at the following addresses:
- ISO Online browsing platform: available at https://www.iso.org/obp
- IEC Electropedia: available at https://www.electropedia.org/

3.1 organization
person or group of people that has its own functions with responsibilities, authorities and relationships to achieve its objectives (3.6)
Note 1 to entry: The concept of organization includes, but is not limited to, sole-trader, company, corporation, firm, enterprise, authority, partnership, charity or institution, or part or combination thereof, whether incorporated or not, public or private.
Note 2 to entry: If the organization is part of a larger entity, the term “organization” refers only to the part of the larger entity that is within the scope of the AI management system (3.4).

3.2 interested party
person or organization (3.1) that can affect, be affected by, or perceive itself to be affected by a decision or activity
Note 1 to entry: An overview of interested parties in AI is provided in ISO/IEC 22989:2022, 5.19.

3.3 top management
person or group of people who directs and controls an organization (3.1) at the highest level
Note 1 to entry: Top management has the power to delegate authority and provide resources within the organization.
Note 2 to entry: If the scope of the management system (3.4) covers only part of an organization, then top management refers to those who direct and control that part of the organization.

3.4 management system
set of interrelated or interacting elements of an organization (3.1) to establish policies (3.5) and objectives (3.6), as well as processes (3.8) to achieve those objectives
Note 1 to entry: A management system can address a single discipline or several disciplines.
Note 2 to entry: The management system elements include the organization’s structure, roles and responsibilities, planning and operation.

3.5 policy
intentions and direction of an organization (3.1) as formally expressed by its top management (3.3)

3.6 objective
result to be achieved
Note 1 to entry: An objective can be strategic, tactical, or operational.
Note 2 to entry: Objectives can relate to different disciplines (such as finance, health and safety, and environment). They can be, for example, organization-wide or specific to a project, product or process (3.8).
Note 3 to entry: An objective can be expressed in other ways, e.g. as an intended result, as a purpose, as an operational criterion, as an AI objective or by the use of other words with similar meaning (e.g. aim, goal, or target).
Note 4 to entry: In the context of AI management systems (3.4), AI objectives are set by the organization (3.1), consistent with the AI policy (3.5), to achieve specific results.

3.7 risk
effect of uncertainty
Note 1 to entry: An effect is a deviation from the expected — positive or negative.
Note 2 to entry: Uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence, or likelihood.
Note 3 to entry: Risk is often characterized by reference to potential events (as defined in ISO Guide 73) and consequences (as defined in ISO Guide 73), or a combination of these.
Note 4 to entry: Risk is often expressed in terms of a combination of the consequences of an event (including changes in circumstances) and the associated likelihood (as defined in ISO Guide 73) of occurrence.

3.8 process
set of interrelated or interacting activities that uses or transforms inputs to deliver a result
Note 1 to entry: Whether the result of a process is called an output, a product or a service depends on the context of the reference.

3.9 competence
ability to apply knowledge and skills to achieve intended results

3.10 documented information
information required to be controlled and maintained by an organization (3.1) and the medium on which it is contained
Note 1 to entry: Documented information can be in any format and media and from any source.
Note 2 to entry: Documented information can refer to:
- the management system (3.4), including related processes (3.8);
- information created in order for the organization to operate (documentation);
- evidence of results achieved (records).

3.11 performance
measurable result
Note 1 to entry: Performance can relate either to quantitative or qualitative findings.
Note 2 to entry: Performance can relate to managing activities, processes (3.8), products, services, systems or organizations (3.1).
Note 3 to entry: In the context of this document, performance refers both to results achieved by using AI systems and to results related to the AI management system (3.4). The correct interpretation of the term is clear from the context of its use.

3.12 continual improvement
recurring activity to enhance performance (3.11)

3.13 effectiveness
extent to which planned activities are realized and planned results are achieved

3.14 requirement
need or expectation that is stated, generally implied or obligatory
Note 1 to entry: “Generally implied” means that it is custom or common practice for the organization (3.1) and interested parties (3.2) that the need or expectation under consideration is implied.
Note 2 to entry: A specified requirement is one that is stated, e.g. in documented information (3.10).
3.15 conformity
fulfilment of a requirement (3.14)

3.16 nonconformity
non-fulfilment of a requirement (3.14)

3.17 corrective action
action to eliminate the cause(s) of a nonconformity (3.16) and to prevent recurrence

3.18 audit
systematic and independent process (3.8) for obtaining evidence and evaluating it objectively to determine the extent to which the audit criteria are fulfilled
Note 1 to entry: An audit can be an internal audit (first party) or an external audit (second party or third party), and it can be a combined audit (combining two or more disciplines).
Note 2 to entry: An internal audit is conducted by the organization (3.1) itself, or by an external party on its behalf.
Note 3 to entry: “Audit evidence” and “audit criteria” are defined in ISO 19011.

3.19 measurement
process (3.8) to determine a value

3.20 monitoring
determining the status of a system, a process (3.8) or an activity
Note 1 to entry: To determine the status, there can be a need to check, supervise or critically observe.

3.21 control
measure that maintains and/or modifies risk (3.7)
Note 1 to entry: Controls include, but are not limited to, any process, policy, device, practice or other conditions and/or actions which maintain and/or modify risk.
Note 2 to entry: Controls may not always exert the intended or assumed modifying effect.
[SOURCE:ISO 31000:2018, 3.8, modified — Added as application domain.]

3.22 governing body
person or group of people who are accountable for the performance and conformance of the organization
Note 1 to entry: Not all organizations, particularly small organizations, will have a governing body separate from top management.
Note 2 to entry: A governing body can include, but is not limited to, board of directors, committees of the board, supervisory board, trustees or overseers.
[SOURCE:ISO/IEC 38500:2015, 2.9, modified — Added Notes to entry.]

3.23 information security
preservation of confidentiality, integrity and availability of information
Note 1 to entry: Other properties such as authenticity, accountability, non-repudiation and reliability can also be involved.
[SOURCE:ISO/IEC 27000:2018, 3.28]

3.24 AI system impact assessment
formal, documented process by which the impacts on individuals, groups of individuals, or both, and societies are identified, evaluated and addressed by an organization developing, providing or using products or services utilizing artificial intelligence

3.25 data quality
characteristic of data that the data meet the organization’s data requirements for a specific context
[SOURCE:ISO/IEC 5259-1:—, 3.4]

3.26 statement of applicability
documentation of all necessary controls (3.21) and justification for inclusion or exclusion of controls
Note 1 to entry: Organizations may not require all controls listed in Annex A or may even exceed the list in Annex A with additional controls established by the organization itself.
Note 2 to entry: All identified risks shall be documented by the organization according to the requirements of this document. All identified risks and the risk management measures (controls) established to address them shall be reflected in the statement of applicability.
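To make the statement of applicability (3.26) concrete, the following is a purely illustrative excerpt of such a record. The control themes, inclusion decisions and justifications are hypothetical and are not quoted from Annex A; a real statement of applicability would reference the organization's actual controls and risk assessment.

| Control theme | Included? | Justification |
| --- | --- | --- |
| AI policy | Yes | Directs responsible development and use of AI systems across the organization |
| Data quality for AI systems | Yes | Training data is acquired from third parties and must be checked before use |
| In-house model development controls | No | All models are procured from suppliers; none are developed internally |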
3.1 Terms related to AI

3.1.1 AI agent
automated (3.1.7) entity that senses and responds to its environment and takes actions to achieve its goals

3.1.2 AI component
functional element that constructs an AI system (3.1.4)

3.1.3 artificial intelligence, AI
research and development of mechanisms and applications of AI systems (3.1.4)
Note 1 to entry: Research and development can take place across any number of fields such as computer science, data science, humanities, mathematics and natural sciences.

3.1.4 artificial intelligence system, AI system
engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives
Note 1 to entry: The engineered system can use various techniques and approaches related to artificial intelligence (3.1.3) to develop a model (3.1.23) to represent data, knowledge (3.1.21), processes, etc. which can be used to conduct tasks (3.1.35).
Note 2 to entry: AI systems are designed to operate with varying levels of automation (3.1.7).

3.1.5 autonomy, autonomous
characteristic of a system that is capable of modifying its intended domain of use or goal without external intervention, control or oversight

3.1.6 application specific integrated circuit, ASIC
integrated circuit customized for a particular use
[SOURCE:ISO/IEC/IEEE 24765:2017, 3.193, modified — Acronym has been moved to separate line.]

3.1.7 automatic, automation, automated
pertaining to a process or system that, under specified conditions, functions without human intervention
[SOURCE:ISO/IEC 2382:2015, 2121282, modified — In the definition, “a process or equipment” has been replaced by “a process or system” and preferred terms of “automated and automation” are added.]

3.1.8 cognitive computing
category of AI systems (3.1.4) that enables people and machines to interact more naturally
Note 1 to entry: Cognitive computing tasks are associated with machine learning (3.3.5), speech processing, natural language processing (3.6.9), computer vision (3.7.1) and human-machine interfaces.

3.1.9 continuous learning, continual learning, lifelong learning
incremental training of an AI system (3.1.4) that takes place on an ongoing basis during the operation phase of the AI system life cycle

3.1.10 connectionism, connectionist paradigm, connectionist model, connectionist approach
form of cognitive modelling that uses a network of interconnected units that generally are simple computational units

3.1.11 data mining
computational process that extracts patterns by analysing quantitative data from different perspectives and dimensions, categorizing them, and summarizing potential relationships and impacts
[SOURCE:ISO 16439:2014, 3.13, modified — “categorizing it” replaced with “categorizing them” because data is plural.]

3.1.12 declarative knowledge
knowledge represented by facts, rules and theorems
Note 1 to entry: Usually, declarative knowledge cannot be processed without first being translated into procedural knowledge (3.1.28).
[SOURCE:ISO/IEC 2382-28:1995, 28.02.22, modified — Comma after “rules” removed in the definition.]

3.1.13 expert system
AI system (3.1.4) that accumulates, combines and encapsulates knowledge (3.1.21) provided by a human expert or experts in a specific domain to infer solutions to problems

3.1.14 general AI, artificial general intelligence, AGI
type of AI system (3.1.4) that addresses a broad range of tasks (3.1.35) with a satisfactory level of performance
Note 1 to entry: Compared to narrow AI (3.1.24).
Note 2 to entry: AGI is often used in a stronger sense, meaning systems that not only can perform a wide variety of tasks, but all tasks that a human can perform.

3.1.15 genetic algorithm, GA
algorithm which simulates natural selection by creating and evolving a population of individuals (solutions) for optimization problems

3.1.16 heteronomy, heteronomous
characteristic of a system operating under the constraint of external intervention, control or oversight

3.1.17 inference
reasoning by which conclusions are derived from known premises
Note 1 to entry: In AI, a premise is either a fact, a rule, a model, a feature or raw data.
Note 2 to entry: The term “inference” refers both to the process and its result.
[SOURCE:ISO/IEC 2382:2015, 2123830, modified — Model, feature and raw data have been added; Notes 3 and 4 to entry have been removed.]

3.1.18 internet of things, IoT
infrastructure of interconnected entities, people, systems and information resources together with services that process and react to information from the physical world and virtual world
[SOURCE:ISO/IEC 20924:2021, 3.2.4, modified — “…services which processes and reacts to…” has been replaced with “…services that process and react to…” and acronym has been moved to separate line.]

3.1.19 IoT device
entity of an IoT system (3.1.20) that interacts and communicates with the physical world through sensing or actuating
Note 1 to entry: An IoT device can be a sensor or an actuator.
[SOURCE:ISO/IEC 20924:2021, 3.2.6]

3.1.20 IoT system
system providing functionalities of IoT (3.1.18)
Note 1 to entry: An IoT system can include, but not be limited to, IoT devices, IoT gateways, sensors and actuators.
[SOURCE:ISO/IEC 20924:2021, 3.2.9]

3.1.21 knowledge
abstracted information about objects, events, concepts or rules, their relationships and properties, organized for goal-oriented systematic use
Note 1 to entry: Knowledge in the AI domain does not imply a cognitive capability, contrary to usage of the term in some other domains. In particular, knowledge does not imply the cognitive act of understanding.
Note 2 to entry: Information can exist in numeric or symbolic form.
Note 3 to entry: Information is data that has been contextualized, so that it is interpretable. Data is created through abstraction or measurement from the world.

3.1.22 life cycle
evolution of a system, product, service, project or other human-made entity, from conception through retirement
[SOURCE:ISO/IEC/IEEE 15288:2015, 4.1.23]

3.1.23 model
physical, mathematical or otherwise logical representation of a system, entity, phenomenon, process or data
[SOURCE:ISO/IEC 18023-1:2006, 3.1.11, modified — Comma after “mathematical” removed in the definition; “or data” added at the end.]

3.1.24 narrow AI
type of AI system (3.1.4) that is focused on defined tasks (3.1.35) to address a specific problem
Note 1 to entry: Compared to general AI (3.1.14).

3.1.25 performance
measurable result
Note 1 to entry: Performance can relate either to quantitative or qualitative findings.
Note 2 to entry: Performance can relate to managing activities, processes, products (including services), systems or organizations.

3.1.26 planning
computational processes that compose a workflow out of a set of actions, aiming at reaching a specified goal
Note 1 to entry: The meaning of “planning” as used in AI life cycle or AI management standards can also be actions taken by human beings.
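As an illustration of a genetic algorithm (3.1.15), here is a minimal sketch, not part of the standard: a population of candidate solutions is created and then evolved by selection, crossover and mutation toward the maximum of a toy fitness function. The fitness function and all numeric settings are arbitrary choices for this example.

```python
import random

def fitness(x):
    # Toy fitness function whose optimum is x = 3.
    return -(x - 3) ** 2

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    # Initial population of random candidate solutions ("individuals").
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover: each child blends two randomly chosen parents.
        children = [(random.choice(parents) + random.choice(parents)) / 2
                    for _ in range(pop_size - len(parents))]
        # Mutation: small random perturbation of some children.
        children = [c + random.gauss(0, 1) if random.random() < mutation_rate else c
                    for c in children]
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # prints a value close to 3
```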
3.1.27 prediction
primary output of an AI system (3.1.4) when provided with input data (3.2.9) or information
Note 1 to entry: Predictions can be followed by additional outputs, such as recommendations, decisions and actions.
Note 2 to entry: Prediction does not necessarily refer to predicting something in the future.
Note 3 to entry: Predictions can refer to various kinds of data analysis or production applied to new data or historical data (including translating text, creating synthetic images or diagnosing a previous power failure).

3.1.28 procedural knowledge
knowledge which explicitly indicates the steps to be taken in order to solve a problem or to reach a goal
[SOURCE:ISO/IEC 2382-28:1995, 28.02.23]

3.1.29 robot
automation system with actuators that performs intended tasks (3.1.35) in the physical world, by means of sensing its environment and a software control system
Note 1 to entry: A robot includes the control system and interface of a control system.
Note 2 to entry: The classification of a robot as industrial robot or service robot is done according to its intended application.
Note 3 to entry: In order to properly perform its tasks (3.1.35), a robot makes use of different kinds of sensors to confirm its current state and perceive the elements composing the environment in which it operates.

3.1.30 robotics
science and practice of designing, manufacturing and applying robots
[SOURCE:ISO 8373:2012, 2.16]

3.1.31 semantic computing
field of computing that aims to identify the meanings of computational content and user intentions and to express them in a machine-processable form

3.1.32 soft computing
field of computing that is tolerant of and exploits imprecision, uncertainty and partial truth to make problem-solving more tractable and robust
Note 1 to entry: Soft computing encompasses various techniques such as fuzzy logic, machine learning and probabilistic reasoning.

3.1.33 symbolic AI
AI (3.1.3) based on techniques and models (3.1.23) that manipulate symbols and structures according to explicitly defined rules to obtain inferences
Note 1 to entry: Compared to subsymbolic AI (3.1.34), symbolic AI produces declarative outputs, whereas subsymbolic AI is based on statistical approaches and produces outputs with a given probability of error.

3.1.34 subsymbolic AI
AI (3.1.3) based on techniques and models (3.1.23) that use an implicit encoding of information that can be derived from experience or raw data
Note 1 to entry: Compared to symbolic AI (3.1.33). Whereas symbolic AI produces declarative outputs, subsymbolic AI is based on statistical approaches and produces outputs with a given probability of error.

3.1.35 task
action required to achieve a specific goal
Note 1 to entry: Actions can be physical or cognitive, for instance computing or creating predictions (3.1.27), translations, synthetic data or artefacts, or navigating through a physical space.
Note 2 to entry: Examples of tasks include classification, regression, ranking, clustering and dimensionality reduction.

3.2 Terms related to data

3.2.1 data annotation
process of attaching a set of descriptive information to data without any change to that data
Note 1 to entry: The descriptive information can take the form of metadata, labels and anchors.
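To make the contrast between symbolic AI (3.1.33) and subsymbolic AI (3.1.34) tangible, here is a minimal symbolic sketch, illustrative only: explicitly defined if-then rules are applied to known facts until no new inference (3.1.17) can be derived (forward chaining). The facts and rules are hypothetical examples.

```python
# Facts and rules are explicit symbols; outputs are declarative.
facts = {"penguin(tux)"}
rules = [
    ("penguin(tux)", "bird(tux)"),       # if tux is a penguin, tux is a bird
    ("bird(tux)", "has_feathers(tux)"),  # if tux is a bird, tux has feathers
]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)  # new inference derived from a rule
            changed = True

print(sorted(facts))  # ['bird(tux)', 'has_feathers(tux)', 'penguin(tux)']
```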
3.2.2 data quality checking
process in which data is examined for completeness, bias and other factors which affect its usefulness for an AI system (3.1.4)

3.2.3 data augmentation
process of creating synthetic samples by modifying or utilizing the existing data

3.2.4 data sampling
process to select a subset of data samples intended to present patterns and trends similar to that of the larger dataset (3.2.5) being analysed
Note 1 to entry: Ideally, the subset of data samples will be representative of the larger dataset (3.2.5).

3.2.5 dataset
collection of data with a shared format
EXAMPLE 1: Micro-blogging posts from June 2020 associated with hashtags #rugby and #football.
EXAMPLE 2: Macro photographs of flowers in 256x256 pixels.
Note 1 to entry: Datasets can be used for validating or testing an AI model (3.1.23). In a machine learning (3.3.5) context, datasets can also be used to train a machine learning algorithm (3.3.6).

3.2.6 exploratory data analysis, EDA
initial examination of data to determine its salient characteristics and assess its quality
Note 1 to entry: EDA can include identification of missing values, outliers and representativeness for the task at hand; see data quality checking (3.2.2).

3.2.7 ground truth
value of the target variable for a particular item of labelled input data
Note 1 to entry: The term ground truth does not imply that the labelled input data consistently corresponds to the real-world value of the target variables.

3.2.8 imputation
procedure where missing data are replaced by estimated or modelled data
[SOURCE:ISO 20252:2019, 3.45]

3.2.9 input data
data for which an AI system (3.1.4) calculates a predicted output or inference

3.2.10 label
target variable assigned to a sample

3.2.11 personally identifiable information, PII, personal data
any information that (a) can be used to establish a link between the information and the natural person to whom such information relates, or (b) is or can be directly or indirectly linked to a natural person
Note 1 to entry: The “natural person” in the definition is the PII principal. To determine whether a PII principal is identifiable, account should be taken of all the means which can reasonably be used by the privacy stakeholder holding the data, or by any other party, to establish the link between the set of PII and the natural person.
Note 2 to entry: This definition is included to define the term PII as used in this document. A public cloud PII processor is typically not in a position to know explicitly whether information it processes falls into any specified category unless this is made transparent by the cloud service customer.
[SOURCE:ISO/IEC 29100:2011/Amd1:2018, 2.9]

3.2.12 production data
data acquired during the operation phase of an AI system (3.1.4), for which a deployed AI system (3.1.4) calculates a predicted output or inference (3.1.17)

3.2.13 sample
atomic data element processed in quantities by a machine learning algorithm (3.3.6) or an AI system (3.1.4)

3.2.14 test data, evaluation data
data used to assess the performance of a final model (3.1.23)
Note 1 to entry: Test data is disjoint from training data (3.3.16) and validation data (3.2.15).

3.2.15 validation data, development data
data used to compare the performance of different candidate models (3.1.23)
Note 1 to entry: Validation data is disjoint from test data (3.2.14) and generally also from training data (3.3.16). However, in cases where there is insufficient data for a three-way training, validation and test set split, the data is divided into only two sets: a test set and a training or validation set. Cross-validation or bootstrapping are common methods for then generating separate training and validation sets from the training or validation set.
Note 2 to entry: Validation data can be used to tune hyperparameters or to validate some algorithmic choices, up to the effect of including a given rule in an expert system.

3.3 Terms related to machine learning

3.3.1 Bayesian network
probabilistic model (3.1.23) that uses Bayesian inference (3.1.17) for probability computations using a directed acyclic graph

3.3.2 decision tree
model (3.1.23) for which inference (3.1.17) is encoded as paths from the root to a leaf node in a tree structure

3.3.3 human-machine teaming
integration of human interaction with machine intelligence capabilities

3.3.4 hyperparameter
characteristic of a machine learning algorithm (3.3.6) that affects its learning process
Note 1 to entry: Hyperparameters are selected prior to training and can be used in processes to help estimate model parameters.
Note 2 to entry: Examples of hyperparameters include the number of network layers, width of each layer, type of activation function, optimization method and learning rate for neural networks; the choice of kernel function in a support vector machine; number of leaves or depth of a tree; the K for K-means clustering; the maximum number of iterations of the expectation maximization algorithm; and the number of Gaussians in a Gaussian mixture.

3.3.5 machine learning, ML
process of optimizing model parameters (3.3.8) through computational techniques, such that the model's (3.1.23) behaviour reflects the data or experience

3.3.6 machine learning algorithm
algorithm to determine parameters (3.3.8) of a machine learning model (3.3.7) from data according to given criteria
EXAMPLE: Consider solving a univariate linear function y = θ0 + θ1x, where y is an output or result, x is an input, θ0 is an intercept (the value of y where x = 0) and θ1 is a weight. In machine learning (3.3.5), the process of determining the intercept and weights for a linear function is known as linear regression. (A runnable sketch of this example follows 3.3.8.)

3.3.7 machine learning model
mathematical construct that generates an inference (3.1.17) or prediction (3.1.27) based on input data or information
EXAMPLE: If a univariate linear function (y = θ0 + θ1x) has been trained using linear regression, the resulting model can be y = 3 + 7x.
Note 1 to entry: A machine learning model results from training based on a machine learning algorithm (3.3.6).

3.3.8 parameter, model parameter
internal variable of a model (3.1.23) that affects how it computes its outputs
Note 1 to entry: Examples of parameters include the weights in a neural network and the transition probabilities in a Markov model.
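The worked example in 3.3.6 and 3.3.7 maps directly onto code. The sketch below, illustrative and not from the standard, generates synthetic data from y = 3 + 7x, performs a three-way split into training, validation and test data (3.3.16, 3.2.15, 3.2.14), applies linear regression as the machine learning algorithm (3.3.6), and evaluates the resulting machine learning model (3.3.7).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: inputs x and labels y generated from y = 3 + 7x plus noise.
x = rng.uniform(0, 10, 100)
y = 3 + 7 * x + rng.normal(0, 0.5, 100)

# Three-way split into training, validation and test data.
idx = rng.permutation(100)
train, val, test = idx[:60], idx[60:80], idx[80:]

# Machine learning algorithm (3.3.6): ordinary least squares determines the
# parameters theta0 (intercept) and theta1 (weight) from the training data.
A = np.stack([np.ones_like(x[train]), x[train]], axis=1)
theta0, theta1 = np.linalg.lstsq(A, y[train], rcond=None)[0]

# Machine learning model (3.3.7): the trained function y = theta0 + theta1 * x.
def model(x_new):
    return theta0 + theta1 * x_new

def mse(split):
    return float(np.mean((model(x[split]) - y[split]) ** 2))

# Validation data compares candidate models; test data assesses the final one.
print(f"theta0={theta0:.2f}, theta1={theta1:.2f}")  # close to 3 and 7
print(f"validation MSE={mse(val):.3f}, test MSE={mse(test):.3f}")
```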
3.3.9 reinforcement learning, RL
learning of an optimal sequence of actions to maximize a reward through interaction with an environment

3.3.10 retraining
updating a trained model (3.3.14) by training (3.3.15) with different training data (3.3.16)

3.3.11 semi-supervised machine learning
machine learning (3.3.5) that makes use of both labelled and unlabelled data during training (3.3.15)

3.3.12 supervised machine learning
machine learning (3.3.5) that makes use only of labelled data during training (3.3.15)

3.3.13 support vector machine, SVM
machine learning algorithm (3.3.6) that finds decision boundaries with maximal margins
Note 1 to entry: Support vectors are sets of data points that define the positioning of the decision boundaries (hyper-planes).

3.3.14 trained model
result of model training (3.3.15)

3.3.15 training, model training
process to determine or to improve the parameters of a machine learning model (3.3.7), based on a machine learning algorithm (3.3.6), by using training data (3.3.16)

3.3.16 training data
data used to train a machine learning model (3.3.7)

3.3.17 unsupervised machine learning
machine learning (3.3.5) that makes use only of unlabelled data during training (3.3.15)

3.4 Terms related to neural networks

3.4.1 activation function
function applied to the weighted combination of all inputs to a neuron (3.4.9)
Note 1 to entry: Activation functions allow neural networks to learn complicated features in the data. They are typically non-linear.

3.4.2 convolutional neural network, CNN, deep convolutional neural network, DCNN
feed forward neural network (3.4.6) using convolution (3.4.3) in at least one of its layers

3.4.3 convolution
mathematical operation involving a sliding dot product or cross-correlation of the input data

3.4.4 deep learning, deep neural network learning
approach to creating rich hierarchical representations through the training (3.3.15) of neural networks (3.4.8) with many hidden layers
Note 1 to entry: Deep learning is a subset of ML (3.3.5).

3.4.5 exploding gradient
phenomenon of backpropagation training (3.3.15) in a neural network where large error gradients accumulate and result in very large updates to the weights, making the model (3.1.23) unstable

3.4.6 feed forward neural network, FFNN
neural network (3.4.8) where information is fed from the input layer to the output layer in one direction only

3.4.7 long short-term memory, LSTM
type of recurrent neural network (3.4.10) that processes sequential data with a satisfactory performance for both long and short span dependencies

3.4.8 neural network, NN, neural net, artificial neural network
network of one or more layers of neurons (3.4.9) connected by weighted links with adjustable weights, which takes input data and produces an output
Note 1 to entry: Neural networks are a prominent example of the connectionist approach (3.1.10).
Note 2 to entry: Although the design of neural networks was initially inspired by the functioning of biological neurons, most work on neural networks does not follow that inspiration anymore.

3.4.9 neuron
primitive processing element which takes one or more input values and produces an output value by combining the input values and applying an activation function (3.4.1) on the result
Note 1 to entry: Examples of nonlinear activation functions are a threshold function, a sigmoid function and a polynomial function.
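A short sketch, assuming NumPy, of two of the definitions above: a single neuron (3.4.9) applying a sigmoid activation function (3.4.1) to a weighted combination of its inputs, and convolution (3.4.3) computed as a sliding dot product over one-dimensional input data. The weights and values are arbitrary illustrations, not from the standard.

```python
import numpy as np

def sigmoid(z):
    # A common non-linear activation function.
    return 1.0 / (1.0 + np.exp(-z))

# Neuron (3.4.9): weighted combination of inputs plus a bias,
# passed through the activation function (3.4.1).
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.4, 0.3, -0.2])  # adjustable weights on the links
bias = 0.1
print(f"neuron output: {sigmoid(weights @ inputs + bias):.3f}")

# Convolution (3.4.3) as a sliding dot product over 1-D input data,
# the operation a convolutional layer (3.4.2) applies.
signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.25, 0.5, 0.25])
conv = np.array([signal[i : i + len(kernel)] @ kernel
                 for i in range(len(signal) - len(kernel) + 1)])
print(f"convolved: {conv}")  # [2. 3. 4.]
```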
3.4.10 recurrent neural network, RNN
neural network (3.4.8) in which outputs from both the previous layer and the previous processing step are fed into the current layer

3.5 Terms related to trustworthiness

3.5.1 accountable
answerable for actions, decisions and performance
[SOURCE:ISO/IEC 38500:2015, 2.2]

3.5.2 accountability
state of being accountable (3.5.1)
Note 1 to entry: Accountability relates to an allocated responsibility. The responsibility can be based on regulation or agreement or through assignment as part of delegation.
Note 2 to entry: Accountability involves a person or entity being accountable for something to another person or entity, through particular means and according to particular criteria.
[SOURCE:ISO/IEC 38500:2015, 2.3, modified — Note 2 to entry is added.]

3.5.3 availability
property of being accessible and usable on demand by an authorized entity
[SOURCE:ISO/IEC 27000:2018, 3.7]

3.5.4 bias
systematic difference in treatment of certain objects, people or groups in comparison to others
Note 1 to entry: Treatment is any kind of action, including perception, observation, representation, prediction (3.1.27) or decision.
[SOURCE:ISO/IEC TR 24027:2021, 3.3.2, modified — Oxford comma removed in the definition and note to entry.]

3.5.5 control
purposeful action on or in a process to meet specified objectives
[SOURCE:IEC 61800-7-1:2015, 3.2.6]

3.5.6 controllability, controllable
property of an AI system (3.1.4) that allows a human or another external agent to intervene in the system’s functioning

3.5.7 explainability
property of an AI system (3.1.4) to express important factors influencing the AI system (3.1.4) results in a way that humans can understand
Note 1 to entry: It is intended to answer the question “Why?” without actually attempting to argue that the course of action that was taken was necessarily optimal.

3.5.8 predictability
property of an AI system (3.1.4) that enables reliable assumptions by stakeholders (3.5.13) about the output
[SOURCE:ISO/IEC TR 27550:2019, 3.12, modified — “by individuals, owners, and operators about the PII and its processing by a system” has been replaced with “by stakeholders about the output”.]

3.5.9 reliability
property of consistent intended behaviour and results
[SOURCE:ISO/IEC 27000:2018, 2.55]

3.5.10 resilience
ability of a system to recover operational condition quickly following an incident

3.5.11 risk
effect of uncertainty on objectives
Note 1 to entry: An effect is a deviation from the expected. It can be positive, negative or both and can address, create or result in opportunities and threats.
Note 2 to entry: Objectives can have different aspects and categories and can be applied at different levels.
Note 3 to entry: Risk is usually expressed in terms of risk sources, potential events, their consequences and their likelihood.
[SOURCE:ISO 31000:2018, 3.1, modified — Comma after “both” in Note 1 to entry and comma after “categories” in Note 2 to entry removed.]

3.5.12 robustness
ability of a system to maintain its level of performance under any circumstances

3.5.13 stakeholder
any individual, group, or organization that can affect, be affected by or perceive itself to be affected by a decision or activity
[SOURCE:ISO/IEC 38500:2015, 2.24, modified — Comma after “be affected by” in the definition removed.]
3.5.14 transparency
property of an organization that appropriate activities and decisions are communicated to relevant stakeholders (3.5.13) in a comprehensive, accessible and understandable manner
Note 1 to entry: Inappropriate communication of activities and decisions can violate security, privacy or confidentiality requirements.

3.5.15 transparency
property of a system that appropriate information about the system is made available to relevant stakeholders (3.5.13)
Note 1 to entry: Appropriate information for system transparency can include aspects such as features, performance, limitations, components, procedures, measures, design goals, design choices and assumptions, data sources and labelling protocols.
Note 2 to entry: Inappropriate disclosure of some aspects of a system can violate security, privacy or confidentiality requirements.

3.5.16 trustworthiness
ability to meet stakeholder (3.5.13) expectations in a verifiable way
Note 1 to entry: Depending on the context or sector, and also on the specific product or service, data and technology used, different characteristics apply and need verification to ensure stakeholders’ (3.5.13) expectations are met.
Note 2 to entry: Characteristics of trustworthiness include, for instance, reliability, availability, resilience, security, privacy, safety, accountability, transparency, integrity, authenticity, quality and usability.
Note 3 to entry: Trustworthiness is an attribute that can be applied to services, products, technology, data and information as well as, in the context of governance, to organizations.
[SOURCE:ISO/IEC TR 24028:2020, 3.42, modified — “Stakeholders’ expectations” replaced by “stakeholder expectations”; comma between quality and usability replaced by “and”.]

3.5.17 verification
confirmation, through the provision of objective evidence, that specified requirements have been fulfilled
Note 1 to entry: Verification only provides assurance that a product conforms to its specification.
[SOURCE:ISO/IEC 27042:2015, 3.21]

3.5.18 validation
confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled
[SOURCE:ISO/IEC 27043:2015, 3.16]

3.6 Terms related to natural language processing

3.6.1 automatic summarization
task (3.1.35) of shortening a portion of natural language (3.6.7) content or text while retaining important semantic information

3.6.2 dialogue management
task (3.1.35) of choosing the appropriate next move in a dialogue based on user input, the dialogue history and other contextual knowledge (3.1.21), to meet a desired goal

3.6.3 emotion recognition
task (3.1.35) of computationally identifying and categorizing emotions expressed in a piece of text, speech, video or image or combination thereof
Note 1 to entry: Examples of emotions include happiness, sadness, anger and delight.

3.6.4 information retrieval, IR
task (3.1.35) of retrieving relevant documents or parts of documents from a dataset (3.2.5), typically based on keyword or natural language (3.6.7) queries

3.6.5 machine translation, MT
task (3.1.35) of automated translation of text or speech from one natural language (3.6.7) to another using a computer system
[SOURCE:ISO 17100:2015, 2.2.2]

3.6.6 named entity recognition, NER
task (3.1.35) of recognizing and labelling the denotational names of entities and their categories for sequences of words in a stream of text or speech
Note 1 to entry: Entity refers to a concrete or abstract thing of interest, including associations among things.
Note 2 to entry: “Named entity” refers to an entity with a denotational name where a specific or unique meaning exists.
Note 3 to entry: Denotational names include the specific names of persons, locations, organizations and other proper names based on the domain or application.

3.6.7 natural language
language that is or was in active use in a community of people and whose rules are deduced from usage
Note 1 to entry: Natural language is any human language, which can be expressed in text, speech, sign language, etc.
Note 2 to entry: Natural language is any human language, such as English, Spanish, Arabic, Chinese or Japanese, to be distinguished from programming and formal languages, such as Java, Fortran, C++ or First-Order Logic.
[SOURCE:ISO/IEC 15944-8:2012, 3.82, modified — “and the rules of which are mainly deduced from the usage” replaced by “and whose rules are deduced from usage”; comma after “Chinese” removed in Note 2 to entry.]

3.6.8 natural language generation, NLG
task (3.1.35) of converting data carrying semantics into natural language (3.6.7)

3.6.9 natural language processing, NLP
information processing based upon natural language understanding (3.6.11) or natural language generation (3.6.8)

3.6.10 natural language processing, NLP
discipline concerned with the way systems acquire, process and interpret natural language (3.6.7)

3.6.11 natural language understanding, NLU, natural language comprehension
extraction of information, by a functional unit, from text or speech communicated to it in a natural language (3.6.7), and the production of a description for both the given text or speech and what it represents
[SOURCE:ISO/IEC 2382:2015, 2123786, modified — Note to entry has been removed, hyphen in natural-language has been removed, NLU has been added.]

3.6.12 optical character recognition, OCR
conversion of images of typed, printed or handwritten text into machine-encoded text

3.6.13 part-of-speech tagging
task (3.1.35) of assigning a category (e.g. verb, noun, adjective) to a word based on its grammatical properties

3.6.14 question answering
task (3.1.35) of determining the most appropriate answer to a question provided in natural language (3.6.7)
Note 1 to entry: A question can be open-ended or be intended to have a specific answer.

3.6.15 relationship extraction, relation extraction
task (3.1.35) of identifying relationships among entities mentioned in a text

3.6.16 sentiment analysis
task (3.1.35) of computationally identifying and categorizing opinions expressed in a piece of text, speech or image, to determine a range of feeling such as from positive to negative
Note 1 to entry: Examples of sentiments include approval, disapproval, positive toward, negative toward, agreement and disagreement.

3.6.17 speech recognition, speech-to-text, STT
conversion, by a functional unit, of a speech signal to a representation of the content of the speech
[SOURCE:ISO/IEC 2382:2015, 2120735, modified — Note to entry has been removed.]

3.6.18 speech synthesis, text-to-speech, TTS
generation of artificial speech
[SOURCE:ISO/IEC 2382:2015, 2120745]

3.7 Terms related to computer vision

3.7.1 computer vision
capability of a functional unit to acquire, process and interpret data representing images or video
Note 1 to entry: Computer vision involves the use of sensors to create a digital image of a visual scene. This can include images that capture wavelengths beyond those of visible light, such as infrared imaging.
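As a toy illustration of computer vision (3.7.1) and image recognition (3.7.4, defined below), the following sketch treats tiny 3x3 NumPy arrays as grayscale images and classifies a new image by its nearest class centroid. The "images" and classes are hypothetical, and real systems use far richer data and models.

```python
import numpy as np

# Hypothetical training images: vertical bars vs horizontal bars.
vertical = [np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
            np.array([[0, .9, 0], [0, 1, 0], [0, .8, 0]], float)]
horizontal = [np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float),
              np.array([[0, 0, 0], [.9, 1, .8], [0, 0, 0]], float)]

# One mean image ("centroid") per class.
centroids = {"vertical": np.mean(vertical, axis=0),
             "horizontal": np.mean(horizontal, axis=0)}

def classify(image):
    # Interpret the image by choosing the closest class centroid.
    return min(centroids, key=lambda c: np.linalg.norm(image - centroids[c]))

test = np.array([[0, 1, 0], [0, .9, 0], [0, 1, 0]], float)
print(classify(test))  # vertical
```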
3.7.2 face recognition
automatic pattern recognition comparing stored images of human faces with the image of an actual face, indicating any matching, if it exists, and any data, if they exist, identifying the person to whom the face belongs
[SOURCE:ISO 5127:2017, 3.1.12.09]

3.7.3 image
graphical content intended to be presented visually
Note 1 to entry: This includes graphics that are encoded in any electronic format, including, but not limited to, formats that are comprised of individual pixels (e.g. those produced by paint programs or by photographic means) and formats that are comprised of formulas (e.g. those produced as scalable vector drawings).
[SOURCE:ISO/IEC 20071-11:2019, 3.2.1]

3.7.4 image recognition, image classification
process that classifies object(s), pattern(s) or concept(s) in an image (3.7.3)

4 Context of the organization
4.1 Understanding the organization and its context
4.2 Understanding the needs and expectations of interested parties
4.3 Determining the scope of the AI management system
4.4 AI management system

5 Leadership
5.1 Leadership and commitment
5.2 AI policy
5.3 Roles, responsibilities and authorities

6 Planning
6.1 Actions to address risks and opportunities
6.2 AI objectives and planning to achieve them
6.3 Planning of changes

7 Support
7.1 Resources
7.2 Competence
7.3 Awareness
7.4 Communication
7.5 Documented information

8 Operation
8.1 Operational planning and control
8.2 AI risk assessment
8.3 AI risk treatment
8.4 AI system impact assessment

9 Performance evaluation
9.1 Monitoring, measurement, analysis and evaluation
9.2 Internal audit
9.3 Management review

10 Improvement
10.1 Continual improvement
10.2 Nonconformity and corrective action

Annex A Reference control objectives and controls
A.1 General

Annex B Implementation guidance for AI controls
B.1 General
B.2 Policies related to AI
B.3 Internal organization
B.4 Resources for AI systems
B.5 Assessing impacts of AI systems
B.6 AI system life cycle
B.7 Data for AI systems
B.8 Information for interested parties
B.9 Use of AI systems
B.10 Third-party and customer relationships

Annex C Potential AI-related organizational objectives and risk sources
C.1 General
C.2 Objectives
C.3 Risk sources

Annex D Use of the AI management system across domains or sectors
D.1 General
D.2 Integration of AI management system with other management system standards