An AI system is considered limited risk when it:
- interacts directly with users
- poses minimal risks to individuals’ rights and safety
Examples include chatbots and virtual assistants (for example, AI-powered customer service bots), AI-generated content tools (such as text generators and AI art creators), and deepfake or synthetic media generators.
As a provider of a limited-risk AI system, you must ensure that:
- users are clearly informed that they are interacting with an AI system
- any AI-generated or manipulated content, including audio, images, video, or text, is labeled in a machine-readable format so it can be detected as artificially generated (a minimal labeling sketch follows this list)
- the labeling of such content is robust, reliable, and interoperable, considering technical feasibility and relevant industry standards
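One way to make generated images detectable in a machine-readable way is to embed a provenance note in the file's metadata. The sketch below uses Pillow to write a text field into a PNG; the field name `ai_generated_disclosure` and the wording are illustrative assumptions, not a mandated format, and production systems would typically follow an interoperable provenance standard instead.

```python
# Minimal sketch: embed an "AI-generated" marker in PNG metadata with Pillow.
# The field name and wording are illustrative assumptions, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_generated_png(src_path: str, dst_path: str) -> None:
    """Copy a generated image and attach a machine-readable AI-generation tag."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text(
        "ai_generated_disclosure",
        "This image was generated by an AI system.",
    )
    image.save(dst_path, pnginfo=metadata)


def read_disclosure(path: str) -> str | None:
    """Return the disclosure tag if present, so the label can be verified."""
    return Image.open(path).text.get("ai_generated_disclosure")
```

Calling `label_generated_png("out.png", "out_labeled.png")` leaves the pixels unchanged while making the label readable by any tool that parses PNG text chunks, which is the kind of detectability the labeling obligation is aiming at.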
As a deployer of a limited-risk AI system, you must:
- inform users if the AI system is used to analyze emotions or categorize individuals based on biometric data, unless the processing is legally authorized for the detection or investigation of crimes
- disclose when the system creates deepfakes or alters publicly shared images, videos, or text, unless the content is clearly artistic, creative, or satirical in nature
- ensure that all required information is provided to users at the first point of interaction or exposure, and that it meets accessibility standards (see the sketch after this list)
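To illustrate the "first point of interaction" requirement, the following sketch wraps a chatbot backend so that the first reply of each session carries a clear AI disclosure. The class name, wording, and backend hook are hypothetical; a real deployment would also need to present the notice in an accessible form (for example, readable by screen readers).

```python
# Minimal sketch: surface an AI-interaction disclosure on the first reply of a session.
# The class name, disclosure wording, and backend hook are illustrative assumptions.
class DisclosingChatbot:
    DISCLOSURE = "You are chatting with an AI system, not a human agent."

    def __init__(self, generate_reply):
        # generate_reply: callable mapping a user message to a model reply
        self._generate_reply = generate_reply
        self._disclosed_sessions: set[str] = set()

    def reply(self, session_id: str, user_message: str) -> str:
        answer = self._generate_reply(user_message)
        if session_id not in self._disclosed_sessions:
            # First interaction in this session: prepend the disclosure.
            self._disclosed_sessions.add(session_id)
            return f"{self.DISCLOSURE}\n\n{answer}"
        return answer


# Example usage with a stand-in model:
bot = DisclosingChatbot(lambda msg: f"Echo: {msg}")
print(bot.reply("session-1", "Hello"))   # first reply includes the disclosure
print(bot.reply("session-1", "Thanks"))  # later replies do not repeat it
```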