Understanding AI terms and definitions according to the standard ISO 22989

19 Feb 2025 07:00 AM - By itSMF Staff

Reading time: ~ 7 min.


These days, we can find products and services everywhere claiming to improve productivity and boost performance thanks to AI. It is not difficult to find many different references to artificial intelligence using terms and definitions that are not always «standardized».

In today's post, we're going to take a look at the ISO/IEC 22989:2022 standard on «Information technology — Artificial intelligence — Artificial intelligence concepts and terminology», which provides AI terminology and describes a number of concepts in the artificial intelligence field.

It is important to note that this document can be used to develop other standards, and that it can serve as a useful common point of reference for managing communication between different stakeholders.

ISO/IEC 22989:2022 is indeed suitable for many different types of organizations (businesses, government agencies, nonprofit organizations).

Focus on the ISO/IEC 22989:2022 standard and its clauses

Please bear in mind that the ISO/IEC 22989:2022 standard has a direct relation to ISO/IEC 42001:2023 on «Information technology — Artificial intelligence — Management system», a standard we already wrote about some months ago on our blog.


Let's now have a look at the ISO/IEC 22989 standard, which is divided into these clauses:

✅ 1. Scope

✅ 2. Normative references

✅ 3. Terms and definitions

✅ 4. Abbreviated terms

✅ 5. AI Concepts

✅ 6. AI systems life cycle

✅ 7. AI system functional overview

✅ 8. AI Ecosystem

✅ 9. Fields of AI

✅ 10. Applications of AI systems

✅ Annex A: Mapping of the AI systems life cycle with the OECD's definition of an AI system life cycle.


We're now going to take a closer look at clause 3 of the ISO 22989:2022 standard.

The ISO 22989:2022 standard clause 3: terms related to AI

In clause 3, the ISO 22989:2022 standard provides a well-structured and useful set of AI terms and definitions. Let's first have a look at the terms related to artificial intelligence.

📘 3.1 Terms related to AI


📕 3.1.1 AI agent: automated (3.1.7) entity that senses and responds to its environment and takes actions to achieve its goals.

📕 3.1.2 AI component: functional element that constructs an AI system (3.1.4).

📕 3.1.3 artificial intelligence, AI: <discipline> research and development of mechanisms and applications of AI systems (3.1.4).
Note 1 to entry: Research and development can take place across any number of fields such as computer science, data science, humanities, mathematics and natural sciences.

📕 3.1.4 artificial intelligence system, AI system: engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.
Note 1 to entry: The engineered system can use various techniques and approaches related to artificial intelligence (3.1.3) to develop a model (3.1.23) to represent data, knowledge (3.1.21), processes, etc. which can be used to conduct tasks (3.1.35).
Note 2 to entry: AI systems are designed to operate with varying levels of automation (3.1.7).

📕 3.1.5 autonomy, autonomous: characteristic of a system that is capable of modifying its intended domain of use or goal without external intervention, control or oversight.

📕 3.1.6 application specific integrated circuit, ASIC: integrated circuit customized for a particular use.
[SOURCE:ISO/IEC/IEEE 24765:2017, 3.193, modified — Acronym has been moved to separate line.]

📕 3.1.7 automatic, automation, automated: pertaining to a process or system that, under specified conditions, functions without human intervention.
[SOURCE:ISO/IEC 2382:2015, 2121282, modified — In the definition, “a process or equipment” has been replaced by “a process or system” and preferred terms of “automated and automation” are added.]

📕 3.1.8 cognitive computing: category of AI systems (3.1.4) that enables people and machines to interact more naturally.
Note 1 to entry: Cognitive computing tasks are associated with machine learning (3.3.5), speech processing, natural language processing (3.6.9), computer vision (3.7.1) and human-machine interfaces.

📕 3.1.9 continuous learning, continual learning, lifelong learning: incremental training of an AI system (3.1.4) that takes place on an ongoing basis during the operation phase of the AI system life cycle.

📕 3.1.10 connectionism, connectionist paradigm, connectionist model, connectionist approach: form of cognitive modelling that uses a network of interconnected units that generally are simple computational units.

📕 3.1.11 data mining: computational process that extracts patterns by analysing quantitative data from different perspectives and dimensions, categorizing them, and summarizing potential relationships and impacts.
[SOURCE:ISO 16439:2014, 3.13, modified — replace “categorizing it” with “categorizing them” because data is plural.]

📕 3.1.12 declarative knowledge: knowledge represented by facts, rules and theorems
Note 1 to entry: Usually, declarative knowledge cannot be processed without first being translated into procedural knowledge (3.1.28).
[SOURCE:ISO/IEC 2382-28:1995, 28.02.22, modified — Remove comma after “rules” in the definition.]

📕 3.1.13 expert system: AI system (3.1.4) that accumulates, combines and encapsulates knowledge (3.1.21) provided by a human expert or experts in a specific domain to infer solutions to problems.

📕 3.1.14 general AI, AGI, artificial general intelligence: type of AI system (3.1.4) that addresses a broad range of tasks (3.1.35) with a satisfactory level of performance.
Note 1 to entry: Compared to narrow AI (3.1.24).
Note 2 to entry: AGI is often used in a stronger sense, meaning systems that not only can perform a wide variety of tasks, but all tasks that a human can perform.

📕 3.1.15 genetic algorithm, GA: algorithm which simulates natural selection by creating and evolving a population of individuals (solutions) for optimization problems.
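To make the genetic algorithm definition more concrete, here is a minimal Python sketch of our own (it is not part of the standard, and population size, mutation rate and other settings are illustrative assumptions) that evolves a population of bit strings toward the classic «one-max» goal:

```python
import random

def genetic_algorithm(fitness, pop_size=20, genes=8, generations=50):
    """Toy GA maximizing `fitness` over fixed-length bit strings."""
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # selection: rank by fitness
        survivors = pop[: pop_size // 2]      # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)  # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:         # occasional mutation
                i = random.randrange(genes)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "one-max" problem: the fittest individual is the all-ones string
best = genetic_algorithm(fitness=sum)
```

After a few dozen generations the best individual is (almost always) a string made mostly of ones, illustrating how selection, crossover and mutation simulate natural selection.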

📕 3.1.16 heteronomy, heteronomous: characteristic of a system operating under the constraint of external intervention, control or oversight.

📕 3.1.17 inference: reasoning by which conclusions are derived from known premises.
Note 1 to entry: In AI, a premise is either a fact, a rule, a model, a feature or raw data.
Note 2 to entry: The term "inference" refers both to the process and its result.
[SOURCE:ISO/IEC 2382:2015, 2123830, modified – Model, feature and raw data have been added. Remove “Note 4 to entry: 28.03.01 (2382)”. Remove “Note 3 to entry: inference: term and definition standardized by ISO/IEC 2382-28:1995”.]

📕 3.1.18 internet of things, IoT: infrastructure of interconnected entities, people, systems and information resources together with services that process and react to information from the physical world and virtual world.
[SOURCE:ISO/IEC 20924:2021, 3.2.4, modified – “…services which processes and reacts to…” has been replaced with “…services that process and react to…” and acronym has been moved to separate line.]

📕 3.1.19 IoT device: entity of an IoT system (3.1.20) that interacts and communicates with the physical world through sensing or actuating.
Note 1 to entry: An IoT device can be a sensor or an actuator.
[SOURCE:ISO/IEC 20924:2021, 3.2.6]

📕 3.1.20 IoT system: system providing functionalities of IoT (3.1.18).
Note 1 to entry: An IoT system can include, but not be limited to, IoT devices, IoT gateways, sensors and actuators.
[SOURCE:ISO/IEC 20924:2021, 3.2.9]

📕 3.1.21 knowledge: <artificial intelligence> abstracted information about objects, events, concepts or rules, their relationships and properties, organized for goal-oriented systematic use.
Note 1 to entry: Knowledge in the AI domain does not imply a cognitive capability, contrary to usage of the term in some other domains. In particular, knowledge does not imply the cognitive act of understanding.
Note 2 to entry: Information can exist in numeric or symbolic form.
Note 3 to entry: Information is data that has been contextualized, so that it is interpretable. Data is created through abstraction or measurement from the world.

📕 3.1.22 life cycle: evolution of a system, product, service, project or other human-made entity, from conception through retirement.
[SOURCE:ISO/IEC/IEEE 15288:2015, 4.1.23]

📕 3.1.23 model: physical, mathematical or otherwise logical representation of a system, entity, phenomenon, process or data.
[SOURCE:ISO/IEC 18023-1:2006, 3.1.11, modified – Remove comma after “mathematical” in the definition. "or data" is added at the end.]

📕 3.1.24 narrow AI: type of AI system (3.1.4) that is focused on defined tasks (3.1.35) to address a specific problem.
Note 1 to entry: Compared to general AI (3.1.14).

📕 3.1.25 performance: measurable result.
Note 1 to entry: Performance can relate either to quantitative or qualitative findings.
Note 2 to entry: Performance can relate to managing activities, processes, products (including services), systems or organizations.

📕 3.1.26 planning: <artificial intelligence> computational processes that compose a workflow out of a set of actions, aiming at reaching a specified goal.
Note 1 to entry: In AI life cycle or AI management standards, “planning” can also refer to actions taken by human beings.

📕 3.1.27 prediction: primary output of an AI system (3.1.4) when provided with input data (3.2.9) or information.
Note 1 to entry: Predictions can be followed by additional outputs, such as recommendations, decisions and actions.
Note 2 to entry: Prediction does not necessarily refer to predicting something in the future.
Note 3 to entry: Predictions can refer to various kinds of data analysis or production applied to new data or historical data (including translating text, creating synthetic images or diagnosing a previous power failure).

📕 3.1.28 procedural knowledge: knowledge which explicitly indicates the steps to be taken in order to solve a problem or to reach a goal.
[SOURCE:ISO/IEC 2382-28:1995, 28.02.23]

📕 3.1.29 robot: automation system with actuators that performs intended tasks (3.1.35) in the physical world, by means of sensing its environment and a software control system.
Note 1 to entry: A robot includes the control system and interface of a control system.
Note 2 to entry: The classification of a robot as industrial robot or service robot is done according to its intended application.
Note 3 to entry: In order to properly perform its tasks (3.1.35), a robot makes use of different kinds of sensors to confirm its current state and perceive the elements composing the environment in which it operates.

📕 3.1.30 robotics: science and practice of designing, manufacturing and applying robots.
[SOURCE:ISO 8373:2012, 2.16]

📕 3.1.31 semantic computing: field of computing that aims to identify the meanings of computational content and user intentions and to express them in a machine-processable form.

📕 3.1.32 soft computing: field of computing that is tolerant of and exploits imprecision, uncertainty and partial truth to make problem-solving more tractable and robust
Note 1 to entry: Soft computing encompasses various techniques such as fuzzy logic, machine learning and probabilistic reasoning.

📕 3.1.33 symbolic AI: AI (3.1.3) based on techniques and models (3.1.23) that manipulate symbols and structures according to explicitly defined rules to obtain inferences
Note 1 to entry: Compared to subsymbolic AI (3.1.34), symbolic AI produces declarative outputs, whereas subsymbolic AI is based on statistical approaches and produces outputs with a given probability of error.

📕 3.1.34 subsymbolic AI: AI (3.1.3) based on techniques and models (3.1.23) that use an implicit encoding of information, that can be derived from experience or raw data.
Note 1 to entry: Compared to symbolic AI (3.1.33). Whereas symbolic AI produces declarative outputs, subsymbolic AI is based on statistical approaches and produces outputs with a given probability of error.

📕 3.1.35 task: <artificial intelligence> action required to achieve a specific goal.
Note 1 to entry: Actions can be physical or cognitive. For instance, computing or creation of predictions (3.1.27), translations, synthetic data or artefacts or navigating through a physical space.
Note 2 to entry: Examples of tasks include classification, regression, ranking, clustering and dimensionality reduction.

The ISO 22989:2022 standard clause 3: terms related to data

Let's now have a look at the terms related to data.

📘 3.2 Terms related to data


📕 3.2.1 data annotation: process of attaching a set of descriptive information to data without any change to that data.
Note 1 to entry: The descriptive information can take the form of metadata, labels and anchors.

📕 3.2.2 data quality checking: process in which data is examined for completeness, bias and other factors which affect its usefulness for an AI system (3.1.4).

📕 3.2.3 data augmentation: process of creating synthetic samples by modifying or utilizing the existing data.
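A simple way to picture data augmentation is adding small random perturbations to existing numeric samples. The following Python sketch is our own illustration (the noise level and number of copies are arbitrary assumptions):

```python
import random

def augment(samples, noise=0.1, copies=2, seed=0):
    """Create synthetic samples by adding small random noise to existing ones."""
    rng = random.Random(seed)
    synthetic = [[x + rng.uniform(-noise, noise) for x in s]
                 for s in samples for _ in range(copies)]
    return samples + synthetic

data = [[1.0, 2.0], [3.0, 4.0]]
augmented = augment(data)  # 2 originals + 4 synthetic samples
```

Each synthetic sample stays close to an original one, so the augmented dataset preserves the overall structure of the existing data.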

📕 3.2.4 data sampling: process to select a subset of data samples intended to present patterns and trends similar to that of the larger dataset (3.2.5) being analysed.
Note 1 to entry: Ideally, the subset of data samples will be representative of the larger dataset (3.2.5).
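Data sampling can be illustrated in a couple of lines of Python; this sketch of ours draws a simple random sample and checks that it mirrors the larger dataset's average (the dataset here is just a stand-in):

```python
import random

population = list(range(1000))           # stand-in for a larger dataset
sample = random.sample(population, 100)  # simple random sampling without replacement

# A representative sample's mean should approximate the population mean
pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
```

More elaborate schemes (stratified or systematic sampling, for example) exist precisely to make the subset more representative than pure chance would.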

📕 3.2.5 dataset: collection of data with a shared format.
EXAMPLE  1:
Micro-blogging posts from June 2020 associated with hashtags #rugby and #football.
EXAMPLE  2:
Macro photographs of flowers in 256x256 pixels.
Note 1 to entry: Datasets can be used for validating or testing an AI model (3.1.23). In a machine learning (3.3.5) context, datasets can also be used to train a machine learning algorithm (3.3.6).

📕 3.2.6 exploratory data analysis, EDA: initial examination of data to determine its salient characteristics and assess its quality.
Note 1 to entry: EDA can include identification of missing values, outliers, representativeness for the task at hand – see data quality checking (3.2.2).

📕 3.2.7 ground truth: value of the target variable for a particular item of labelled input data.
Note 1 to entry: The term ground truth does not imply that the labelled input data consistently corresponds to the real-world value of the target variables.

📕 3.2.8 imputation: procedure where missing data are replaced by estimated or modelled data
[SOURCE:ISO 20252:2019, 3.45]
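Mean imputation is one of the simplest forms of the procedure above; here is a small Python sketch of our own (using `None` to mark missing entries, which is an assumption of this illustration):

```python
from statistics import mean

def impute_mean(values):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in values]

readings = [4.0, None, 6.0, 5.0, None]
completed = impute_mean(readings)  # → [4.0, 5.0, 6.0, 5.0, 5.0]
```

Real pipelines often use more sophisticated estimates (median, regression, model-based imputation), but the idea is the same: replace missing data with estimated data.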

📕 3.2.9 input data: data for which an AI system (3.1.4) calculates a predicted output or inference.

📕 3.2.10 label: target variable assigned to a sample.

📕 3.2.11 personally identifiable information, PII, personal data: any information that (a) can be used to establish a link between the information and the natural person to whom such information relates, or (b) is or can be directly or indirectly linked to a natural person.
Note 1 to entry: The “natural person” in the definition is the PII principal. To determine whether a PII principal is identifiable, account should be taken of all the means which can reasonably be used by the privacy stakeholder holding the data, or by any other party, to establish the link between the set of PII and the natural person.
Note 2 to entry: This definition is included to define the term PII as used in this document. A public cloud PII processor is typically not in a position to know explicitly whether information it processes falls into any specified category unless this is made transparent by the cloud service customer.
[SOURCE:ISO/IEC 29100:2011/Amd1:2018, 2.9]

📕 3.2.12 production data: data acquired during the operation phase of an AI system (3.1.4), for which a deployed AI system (3.1.4) calculates a predicted output or inference (3.1.17).

📕 3.2.13 sample: atomic data element processed in quantities by a machine learning algorithm (3.3.6) or an AI system (3.1.4).

📕 3.2.14 test data, evaluation data: data used to assess the performance of a final model (3.1.23)
Note 1 to entry: Test data is disjoint from training data (3.3.16) and validation data (3.2.15).

📕 3.2.15 validation data, development data: data used to compare the performance of different candidate models (3.1.23).
Note 1 to entry: Validation data is disjoint from test data (3.2.14) and generally also from training data (3.3.16). However, in cases where there is insufficient data for a three-way training, validation and test set split, the data is divided into only two sets – a test set and a training or validation set. Cross-validation or bootstrapping are common methods for then generating separate training and validation sets from the training or validation set.
Note 2 to entry: Validation data can be used to tune hyperparameters or to validate some algorithmic choices, up to the effect of including a given rule in an expert system.
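The three-way split of training, validation and test data described in the last few entries can be sketched in Python as follows (the 60/20/20 proportions are an illustrative assumption, not prescribed by the standard):

```python
import random

def three_way_split(data, train=0.6, val=0.2, seed=42):
    """Shuffle and split data into disjoint training, validation and test sets."""
    items = list(data)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],                    # training data (3.3.16)
            items[n_train:n_train + n_val],     # validation data (3.2.15)
            items[n_train + n_val:])            # test data (3.2.14)

train_set, val_set, test_set = three_way_split(range(100))
```

The key property, per the definitions above, is that the three sets are disjoint: the test set never influences training or model selection.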

The ISO 22989:2022 standard clause 3: terms related to machine learning

Let's now have a look at the terms related to machine learning.

📘 3.3 Terms related to machine learning


📕 3.3.1 Bayesian network: probabilistic model (3.1.23) that uses Bayesian inference (3.1.17) for probability computations using a directed acyclic graph.

📕 3.3.2 decision tree: model (3.1.23) for which inference (3.1.17) is encoded as paths from the root to a leaf node in a tree structure.
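Inference in a decision tree really is just a path from the root to a leaf. This hand-built Python sketch of ours (the tree, features and labels are invented for illustration) shows the idea:

```python
def classify(features, tree):
    """Follow a path from the root of a nested-dict tree down to a leaf label."""
    while isinstance(tree, dict):
        # Each internal node tests one feature against a threshold
        tree = tree["lo"] if features[tree["f"]] <= tree["t"] else tree["hi"]
    return tree  # leaves are class labels

# Hypothetical toy tree: internal nodes are tests, leaves are labels
tree = {"f": "temp", "t": 25.0,
        "lo": {"f": "humidity", "t": 70.0, "lo": "comfortable", "hi": "muggy"},
        "hi": "hot"}

label = classify({"temp": 22.0, "humidity": 80.0}, tree)  # → "muggy"
```

In practice the tree structure itself is learned from training data rather than written by hand.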

📕 3.3.3 human-machine teaming: integration of human interaction with machine intelligence capabilities.

📕 3.3.4 hyperparameter: characteristic of a machine learning algorithm (3.3.6) that affects its learning process.
Note 1 to entry: Hyperparameters are selected prior to training and can be used in processes to help estimate model parameters.
Note 2 to entry: Examples of hyperparameters include the number of network layers, width of each layer, type of activation function, optimization method, learning rate for neural networks; the choice of kernel function in a support vector machine; number of leaves or depth of a tree; the K for K-means clustering; the maximum number of iterations of the expectation maximization algorithm; the number of Gaussians in a Gaussian mixture.

📕 3.3.5 machine learning, ML: process of optimizing model parameters (3.3.8) through computational techniques, such that the model's (3.1.23) behaviour reflects the data or experience

📕 3.3.6 machine learning algorithm: algorithm to determine parameters (3.3.8) of a machine learning model (3.3.7) from data according to given criteria
EXAMPLE:
Consider solving a univariate linear function y = θ0 + θ1x where y is an output or result, x is an input, θ0 is an intercept (the value of y where x=0) and θ1 is a weight. In machine learning (3.3.5), the process of determining the intercept and weights for a linear function is known as linear regression.
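The linear regression example in the entry above can be written out as a short Python sketch (our own illustration, using the closed-form least-squares solution rather than any particular library):

```python
def linear_regression(xs, ys):
    """Ordinary least squares for y = theta0 + theta1 * x (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    theta1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
              / sum((x - mean_x) ** 2 for x in xs))  # slope (weight)
    theta0 = mean_y - theta1 * mean_x                # intercept
    return theta0, theta1

# Data generated from y = 3 + 7x, matching the trained model in entry 3.3.7
xs = [0, 1, 2, 3, 4]
ys = [3 + 7 * x for x in xs]
theta0, theta1 = linear_regression(xs, ys)  # → (3.0, 7.0)
```

Here the algorithm (the least-squares procedure) determines the parameters, and the resulting function y = 3 + 7x is the trained model, which matches the distinction the standard draws between 3.3.6 and 3.3.7.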

📕 3.3.7 machine learning model: mathematical construct that generates an inference (3.1.17) or prediction (3.1.27) based on input data or information
EXAMPLE:
If a univariate linear function (y = θ0 + θ1x) has been trained using linear regression, the resulting model can be y = 3 + 7x.
Note 1 to entry: A machine learning model results from training based on a machine learning algorithm (3.3.6).

📕 3.3.8 parameter, model parameter: internal variable of a model (3.1.23) that affects how it computes its outputs
Note 1 to entry: Examples of parameters include the weights in a neural network and the transition probabilities in a Markov model.

📕 3.3.9 reinforcement learning, RL: learning of an optimal sequence of actions to maximize a reward through interaction with an environment.

📕 3.3.10 retraining: updating a trained model (3.3.14) by training (3.3.15) with different training data (3.3.16).

📕 3.3.11 semi-supervised machine learning: machine learning (3.3.5) that makes use of both labelled and unlabelled data during training (3.3.15).

📕 3.3.12 supervised machine learning: machine learning (3.3.5) that makes only use of labelled data during training (3.3.15).

📕 3.3.13 support vector machine, SVM: machine learning algorithm (3.3.6) that finds decision boundaries with maximal margins
Note 1 to entry: Support vectors are sets of data points that define the positioning of the decision boundaries (hyper-planes).

📕 3.3.14 trained model: result of model training (3.3.15).

📕 3.3.15 training, model training: process to determine or to improve the parameters of a machine learning model (3.3.7), based on a machine learning algorithm (3.3.6), by using training data (3.3.16)

📕 3.3.16 training data: data used to train a machine learning model (3.3.7)

📕 3.3.17 unsupervised machine learning: machine learning (3.3.5) that makes only use of unlabelled data during training (3.3.15).

The ISO 22989:2022 standard clause 3: terms related to neural networks

Let's now have a look at the terms related to neural networks.

📘 3.4 Terms related to neural networks


📕 3.4.1 activation function: function applied to the weighted combination of all inputs to a neuron (3.4.9).
Note 1 to entry: Activation functions allow neural networks to learn complicated features in the data. They are typically non-linear.
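Combining this entry with the neuron definition below (3.4.9), a single neuron can be sketched in a few lines of Python (our own illustration; the sigmoid is just one common choice of non-linear activation, and the weights here are arbitrary):

```python
import math

def sigmoid(z):
    """Non-linear activation squashing any real input into (0, 1)."""
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias):
    """Weighted combination of the inputs followed by an activation function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

out = neuron([1.0, 2.0], weights=[0.5, -0.25], bias=0.0)  # sigmoid(0.0) = 0.5
```

A neural network stacks many such units into layers, with training adjusting the weights.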

📕 3.4.2 convolutional neural network, CNN, deep convolutional neural network, DCNN: feed forward neural network (3.4.6) using convolution (3.4.3) in at least one of its layers.

📕 3.4.3 convolution: mathematical operation involving a sliding dot product or cross-correlation of the input data.
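The «sliding dot product» can be shown directly in Python; this one-dimensional sketch of ours uses the cross-correlation form (no kernel flip) that CNN layers typically apply:

```python
def convolve1d(signal, kernel):
    """Sliding dot product of `kernel` over `signal` (valid positions only)."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A [-1, 1] kernel acts as a discrete difference (edge) detector
edges = convolve1d([0, 0, 1, 1, 0], [-1, 1])  # → [0, 1, 0, -1]
```

In a CNN the same small kernel is slid over the whole input, which is what lets the network detect a local pattern wherever it occurs.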

📕 3.4.4 deep learning, deep neural network learning: <artificial intelligence> approach to creating rich hierarchical representations through the training (3.3.15) of neural networks (3.4.8) with many hidden layers.
Note 1 to entry: Deep learning is a subset of ML (3.3.5).

📕 3.4.5 exploding gradient: phenomenon of backpropagation training (3.3.15) in a neural network where large error gradients accumulate and result in very large updates to the weights, making the model (3.1.23) unstable.

📕 3.4.6 feed forward neural network, FFNN: neural network (3.4.8) where information is fed from the input layer to the output layer in one direction only.

📕 3.4.7 long short-term memory, LSTM: type of recurrent neural network (3.4.10) that processes sequential data with a satisfactory performance for both long and short span dependencies.

📕 3.4.8 neural network, NN, neural net, artificial neural network:
<artificial intelligence> network of one or more layers of neurons (3.4.9) connected by weighted links with adjustable weights, which takes input data and produces an output.
Note 1 to entry: Neural networks are a prominent example of the connectionist approach (3.1.10).
Note 2 to entry: Although the design of neural networks was initially inspired by the functioning of biological neurons, most works on neural networks do not follow that inspiration anymore.

📕 3.4.9 neuron: <artificial intelligence> primitive processing element which takes one or more input values and produces an output value by combining the input values and applying an activation function (3.4.1) on the result.
Note 1 to entry: Examples of nonlinear activation functions are a threshold function, a sigmoid function and a polynomial function.

📕 3.4.10 recurrent neural network, RNN: neural network (3.4.8) in which outputs from both the previous layer and the previous processing step are fed into the current layer.

The ISO 22989:2022 standard clause 3: terms related to trustworthiness

Let's now have a look at the terms related to trustworthiness.

📘 3.5 Terms related to trustworthiness


📕 3.5.1 accountable: answerable for actions, decisions and performance
[SOURCE:ISO/IEC 38500:2015, 2.2]

📕 3.5.2 accountability: state of being accountable (3.5.1).
Note 1 to entry: Accountability relates to an allocated responsibility. The responsibility can be based on regulation or agreement or through assignment as part of delegation.
Note 2 to entry: Accountability involves a person or entity being accountable for something to another person or entity, through particular means and according to particular criteria.
[SOURCE:ISO/IEC 38500:2015, 2.3, modified — Note 2 to entry is added.]

📕 3.5.3 availability: property of being accessible and usable on demand by an authorized entity.
[SOURCE:ISO/IEC 27000:2018, 3.7]

📕 3.5.4 bias: systematic difference in treatment of certain objects, people or groups in comparison to others.
Note 1 to entry: Treatment is any kind of action, including perception, observation, representation, prediction (3.1.27) or decision.
[SOURCE:ISO/IEC TR 24027:2021, 3.3.2, modified – remove oxford comma in definition and note to entry]

📕 3.5.5 control: purposeful action on or in a process to meet specified objectives.
[SOURCE:IEC 61800-7-1:2015, 3.2.6]

📕 3.5.6 controllability, controllable: property of an AI system (3.1.4) that allows a human or another external agent to intervene in the system’s functioning.

📕 3.5.7 explainability: property of an AI system (3.1.4) to express important factors influencing the AI system (3.1.4) results in a way that humans can understand.
Note 1 to entry: It is intended to answer the question “Why?” without actually attempting to argue that the course of action that was taken was necessarily optimal.

📕 3.5.8 predictability: property of an AI system (3.1.4) that enables reliable assumptions by stakeholders (3.5.13) about the output.
[SOURCE:ISO/IEC TR 27550:2019, 3.12, modified — “by individuals, owners, and operators about the PII and its processing by a system” has been replaced with “by stakeholders about the outputs”.]

📕 3.5.9 reliability: property of consistent intended behaviour and results.
[SOURCE:ISO/IEC 27000:2018, 2.55]

📕 3.5.10 resilience: ability of a system to recover operational condition quickly following an incident.

📕 3.5.11 risk: effect of uncertainty on objectives.
Note 1 to entry: An effect is a deviation from the expected. It can be positive, negative or both and can address, create or result in opportunities and threats.
Note 2 to entry: Objectives can have different aspects and categories and can be applied at different levels.
Note 3 to entry: Risk is usually expressed in terms of risk sources, potential events, their consequences and their likelihood.
[SOURCE:ISO 31000:2018, 3.1, modified — Remove comma after “both” in Note 1 to entry. Remove comma after “categories” in Note 2 to entry.]

📕 3.5.12 robustness: ability of a system to maintain its level of performance under any circumstances.

📕 3.5.13 stakeholder: any individual, group, or organization that can affect, be affected by or perceive itself to be affected by a decision or activity.
[SOURCE:ISO/IEC 38500:2015, 2.24, modified — Remove comma after “be affected by” in the definition.]

📕 3.5.14 transparency: <organization> property of an organization that appropriate activities and decisions are communicated to relevant stakeholders (3.5.13) in a comprehensive, accessible and understandable manner.
Note 1 to entry: Inappropriate communication of activities and decisions can violate security, privacy or confidentiality requirements.

📕 3.5.15 transparency: <system> property of a system that appropriate information about the system is made available to relevant stakeholders (3.5.13).
Note 1 to entry: Appropriate information for system transparency can include aspects such as features, performance, limitations, components, procedures, measures, design goals, design choices and assumptions, data sources and labelling protocols.
Note 2 to entry: Inappropriate disclosure of some aspects of a system can violate security, privacy or confidentiality requirements.

📕 3.5.16 trustworthiness: ability to meet stakeholder (3.5.13) expectations in a verifiable way.
Note 1 to entry: Depending on the context or sector, and also on the specific product or service, data and technology used, different characteristics apply and need verification to ensure stakeholders’ (3.5.13) expectations are met.
Note 2 to entry: Characteristics of trustworthiness include, for instance, reliability, availability, resilience, security, privacy, safety, accountability, transparency, integrity, authenticity, quality and usability.
Note 3 to entry: Trustworthiness is an attribute that can be applied to services, products, technology, data and information as well as, in the context of governance, to organizations.
[SOURCE:ISO/IEC TR 24028:2020, 3.42, modified — Stakeholders’ expectations replaced by stakeholder expectations; comma between quality and usability replaced by “and”.]

📕 3.5.17 verification: confirmation, through the provision of objective evidence, that specified requirements have been fulfilled.
Note 1 to entry: Verification only provides assurance that a product conforms to its specification.
[SOURCE:ISO/IEC 27042:2015, 3.21]

📕 3.5.18 validation: confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.
[SOURCE:ISO/IEC 27043:2015, 3.16]

The ISO 22989:2022 standard clause 3: terms related to natural language processing

Let's now have a look at the terms related to natural language processing.

📘 3.6 Terms related to natural language processing


📕 3.6.1 automatic summarization: task (3.1.35) of shortening a portion of natural language (3.6.7) content or text while retaining important semantic information.

📕 3.6.2 dialogue management: task (3.1.35) of choosing the appropriate next move in a dialogue based on user input, the dialogue history and other contextual knowledge (3.1.21), to meet a desired goal.

📕 3.6.3 emotion recognition: task (3.1.35) of computationally identifying and categorizing emotions expressed in a piece of text, speech, video or image or combination thereof.
Note 1 to entry: Examples of emotions include happiness, sadness, anger and delight.

📕 3.6.4 information retrieval, IR: task (3.1.35) of retrieving relevant documents or parts of documents from a dataset (3.2.5), typically based on keyword or natural language (3.6.7) queries.

📕 3.6.5 machine translation, MT: task (3.1.35) of automated translation of text or speech from one natural language (3.6.7) to another using a computer system.
[SOURCE:ISO 17100:2015, 2.2.2]

📕 3.6.6 named entity recognition, NER: task (3.1.35) of recognizing and labelling the denotational names of entities and their categories for sequences of words in a stream of text or speech
Note 1 to entry: Entity refers to concrete or abstract thing of interest, including associations among things.
Note 2 to entry: “Named entity” refers to an entity with a denotational name where a specific or unique meaning exists.
Note 3 to entry: Denotational names include the specific names of persons, locations, organizations and other proper names based on the domain or application.

📕 3.6.7 natural language: language that is or was in active use in a community of people and whose rules are deduced from usage
Note 1 to entry: Natural language is any human language, which can be expressed in text, speech, sign language, etc.
Note 2 to entry: Natural language is any human language, such as English, Spanish, Arabic, Chinese or Japanese, to be distinguished from programming and formal languages, such as Java, Fortran, C++ or First-Order Logic.
[SOURCE:ISO/IEC 15944-8:2012, 3.82, modified — “and the rules of which are mainly deduced from the usage” replaced by “and its rules are deduced from usage”. Removed comma after “Chinese” in Note 2 to entry.]

📕 3.6.8 natural language generation, NLG: task (3.1.35) of converting data carrying semantics into natural language (3.6.7).

📕 3.6.9 natural language processing, NLP: <system> information processing based upon natural language understanding (3.6.11) or natural language generation (3.6.8).

📕 3.6.10 natural language processing, NLP: <discipline> discipline concerned with the way systems acquire, process and interpret natural language (3.6.7).

📕 3.6.11 natural language understanding, NLU, natural language comprehension: extraction of information, by a functional unit, from text or speech communicated to it in a natural language (3.6.7), and the production of a description for both the given text or speech and what it represents.
[SOURCE:ISO/IEC 2382:2015, 2123786, modified – Note to entry has been removed, hyphen in natural-language has been removed, NLU has been added.]

📕 3.6.12 optical character recognition, OCR: conversion of images of typed, printed or handwritten text into machine-encoded text.

📕 3.6.13 part-of-speech tagging: task (3.1.35) of assigning a category (e.g. verb, noun, adjective) to a word based on its grammatical properties.
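A minimal sketch of this task (not from the standard) is a lexicon-based tagger: look each word up in a small, illustrative lexicon and fall back to a default category for unknown words.

```python
# Toy lexicon-based part-of-speech tagger; unknown words default to NOUN.
LEXICON = {
    "the": "DET", "a": "DET",
    "runs": "VERB", "is": "VERB",
    "fast": "ADJ", "quick": "ADJ",
}

def pos_tag(sentence: str) -> list[tuple[str, str]]:
    return [(w, LEXICON.get(w.lower(), "NOUN")) for w in sentence.split()]

print(pos_tag("The quick fox runs"))
# [('The', 'DET'), ('quick', 'ADJ'), ('fox', 'NOUN'), ('runs', 'VERB')]
```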

📕 3.6.14 question answering: task (3.1.35) of determining the most appropriate answer to a question provided in natural language (3.6.7)
Note 1 to entry: A question can be open-ended or be intended to have a specific answer.
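The simplest illustration of the task (again, not from the standard) is FAQ-style matching: normalize the question and look it up in a small set of known question/answer pairs, all of which are invented for this example.

```python
# Toy question answering: match a normalized question against a small FAQ.
FAQ = {
    "what is iso 22989": "A standard defining AI concepts and terminology.",
    "what is nlp": "Natural language processing.",
}

def answer(question: str) -> str:
    key = question.lower().strip(" ?")   # drop punctuation and case
    return FAQ.get(key, "Sorry, I don't know.")

print(answer("What is ISO 22989?"))
```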

📕 3.6.15 relationship extraction, relation extraction: task (3.1.35) of identifying relationships among entities mentioned in a text.
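One classic (and here deliberately simplified, illustrative) approach is pattern-based extraction: a regular expression matches "X works for Y" style statements and emits (subject, relation, object) triples.

```python
# Toy pattern-based relation extraction with a regular expression.
import re

PATTERN = re.compile(r"(\w+) works for (\w+)")

def extract_relations(text: str) -> list[tuple[str, str, str]]:
    return [(a, "works_for", b) for a, b in PATTERN.findall(text)]

print(extract_relations("Alice works for itSMF and Bob works for ACME"))
# [('Alice', 'works_for', 'itSMF'), ('Bob', 'works_for', 'ACME')]
```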

📕 3.6.16 sentiment analysis: task (3.1.35) of computationally identifying and categorizing opinions expressed in a piece of text, speech or image, to determine a range of feeling such as from positive to negative.
Note 1 to entry: Examples of sentiments include approval, disapproval, positive toward, negative toward, agreement and disagreement.
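A minimal sketch of the task (not from the standard) is lexicon-based scoring: count positive and negative words to place a text on a positive-to-negative range. The word lists are illustrative.

```python
# Toy lexicon-based sentiment analysis: positive > 0, negative < 0.
POSITIVE = {"good", "great", "excellent", "useful"}
NEGATIVE = {"bad", "poor", "useless", "confusing"}

def sentiment_score(text: str) -> int:
    """Score a text by counting sentiment-bearing words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("a great and useful standard"))  # 2
print(sentiment_score("a confusing document"))         # -1
```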

📕 3.6.17 speech recognition, speech-to-text, STT: conversion, by a functional unit, of a speech signal to a representation of the content of the speech.
[SOURCE:ISO/IEC 2382:2015, 2120735, modified — Note to entry has been removed.]

📕 3.6.18 speech synthesis, text-to-speech, TTS: generation of artificial speech.
[SOURCE:ISO/IEC 2382: 2015, 2120745]

The ISO 22989:2022 standard section 3: terms related to computer vision

In section 3, the ISO 22989:2022 standard provides a well-structured and useful set of AI terms and definitions. Let's now have a look at the terms related to computer vision.

📘 3.7 Terms related to computer vision


📕 3.7.1 computer vision: capability of a functional unit to acquire, process and interpret data representing images or video.
Note 1 to entry: Computer vision involves the use of sensors to create a digital image of a visual scene. This can include images that capture wavelengths beyond those of visible light, such as infrared imaging.

📕 3.7.2 face recognition: automatic pattern recognition comparing stored images of human faces with the image of an actual face, indicating any matching, if it exists, and any data, if they exist, identifying the person to whom the face belongs.
[SOURCE:ISO 5127:2017, 3.1.12.09]

📕 3.7.3 image: <digital> graphical content intended to be presented visually.
Note 1 to entry: This includes graphics that are encoded in any electronic format, including, but not limited to, formats that are comprised of individual pixels (e.g. those produced by paint programs or by photographic means) and formats that are comprised of formulas (e.g. those produced as scalable vector drawings).
[SOURCE:ISO/IEC 20071-11:2019, 3.2.1]

📕 3.7.4 image recognition: image classification process that classifies object(s), pattern(s) or concept(s) in an image (3.7.3).
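As a toy illustration (not from the standard), classification of an image can be sketched as a nearest-reference comparison: tiny grayscale "images" (lists of pixel intensities) are assigned the class whose reference image is closest. All data and class names are invented for the example.

```python
# Toy image classification: assign the class of the nearest reference image.
def distance(a: list[float], b: list[float]) -> float:
    """Squared Euclidean distance between two pixel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(image: list[float], references: dict[str, list[float]]) -> str:
    return min(references, key=lambda label: distance(image, references[label]))

references = {
    "dark_scene":   [0.1, 0.2, 0.1, 0.15],
    "bright_scene": [0.9, 0.85, 0.95, 0.9],
}
print(classify([0.8, 0.9, 0.85, 0.95], references))  # bright_scene
```

Real image recognition uses learned features rather than raw pixel distances, but the input/output contract is the same: an image in, a class label out.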

EU AI Act and the main useful ISO standards

The EU Artificial Intelligence Act brings new challenges for the compliance of AI services and products. Let's take a final look at our infographic to better understand the relationship between the European Regulation and the main ISO standards:
By Andrea Leonardi (VP @ Minerva Group Service, MP @ Alpemi Consulting & itSMF Switzerland board member).
If you want to stay up to date with the most recent news on this topic, don't forget to follow us on our social media channels or subscribe to our newsletter: you will receive exclusive content for our community!

FOLLOW US ON OUR YOUTUBE CHANNEL

Need to know more about it?

Click on one of the options below to enter the itSMF environment and stay updated in the way that works best for you.

Subscribe to itSMF Newsletter
CONTACT US TO SEND YOUR MESSAGE
DISCOVER OUR EVENT CALENDAR
Get the benefits of Membership Program

Our sponsors

A special thanks to our Advanced Sponsors:

itSMF Staff