Key definitions
Artificial intelligence (AI) system: a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
Artificial intelligence (AI) system and model
AI model: the raw, mathematical essence that is often the ‘engine’ of AI applications.
AI system: the ensemble of several components, including one or more AI models, designed to be particularly useful to humans in some way.
Developer: Organisations or individuals who design, build, train, adapt or combine AI models and applications.
Deployer: Organisations or individuals who supply or use an AI system to provide a product or service. Deployment can be for internal purposes, or external, affecting others such as customers or individuals.
End user: Organisations or individuals who consume an AI-based product or service, interact with it, or are impacted by it after it is deployed.
AI activity: Includes not only the sale of AI products or services, but also the provision of products and services that rely on AI in their development. For example, where there is AI-powered decision-making, or where generative AI is used in design.
Narrow AI system: A type of AI system or model that focuses on defined tasks to address a specific problem. Unlike GPAI models, these systems must be re-designed to tackle a broader range of problems. They are models or systems developed to perform a particular task, such as speech recognition.
General purpose AI (GPAI): An AI model that can be used for many purposes. GPAI models can be used and adapted directly, or as components of other systems. Some GPAI models, such as GPT-n, DALL-E and Sora, are now widely used to generate ‘human-like’ text, images and videos from simple user prompts.
High-risk AI: A set of proposed principles to designate an AI system as ‘high-risk’ based on how it is used, considering the:
- (a) risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, and Australia’s international human rights law obligations
- (b) risk of adverse impacts to an individual’s physical or mental health or safety
- (c) risk of adverse legal effects, defamation or similarly significant effects on an individual
- (d) risk of adverse impacts to groups of individuals or collective rights of cultural groups
- (e) risk of adverse impacts to the broader Australian economy, society, environment and rule of law
- (f) severity and extent of the adverse impacts outlined in principles (a) to (e) above.
The proposals paper is consulting on the best way for the government to define ‘high-risk’ AI models and systems. By way of comparison and for informational purposes only, other countries have identified the following use cases as high-risk. You may find this table useful in considering whether your own activities are high-risk.
High-risk use cases identified in other countries
| Domain areas | General description |
|---|---|
| Biometrics | AI systems used to identify or categorise individuals, assess behaviour or mental state, or monitor and influence emotions. |
| Critical infrastructure | AI systems intended for use as safety components in the management and operation of critical digital infrastructure, including road traffic and the supply of water, gas, heating and electricity. |
| Education/Training | AI systems used to determine admission to education programs, evaluate learning outcomes or monitor student behaviour. |
| Employment | AI systems used in employment matters, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination. |
| Access to essential public services and products | AI systems used to determine access to, and the type of, services for individuals, including healthcare, social security benefits and emergency services. |
| Access to essential private services | AI systems used to make decisions that affect access to essential private services, including credit and insurance, in a manner that poses significant risk. |
| Products and services affecting individual and public health and safety | AI used as a safety component of a product, that is itself a safety product, or that otherwise impacts individual and public health and safety. This includes AI-enabled medical devices, food products and other goods and services. |
| Law enforcement | AI systems used in aspects of law enforcement, including profiling individuals, assessing offender recidivism risk, polygraph-style technologies or evaluating evidence. |
| Administration of justice and democratic processes | AI systems used to make a determination about an individual in a court or administrative tribunal, or to evaluate facts, evidence and submissions in proceedings. May include any system that can influence the voting behaviour of individuals or the outcome of an election or democratic process. |