Classification of AI systems under the AI Act
The obligation to classify AI systems is a key requirement introduced by the AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024). Classification is based on the intended purpose for which an AI system is used, not on its functionality. The category into which a system falls determines whether it may be placed on the market or put into service in the European Union, the scope of record-keeping, documentation and reporting obligations, as well as the scope of liability for non-compliance.
The first prohibitions imposed by the AI Act, on so-called prohibited AI systems, are already in effect. On August 2, 2025, further AI Act provisions imposing new obligations take effect (including penalties for non-compliance with, at a minimum, the ban on prohibited AI systems, and the rules on general-purpose AI models).
From August 2, 2026, further provisions of the AI Act will apply (mandating transparency of AI systems, regulatory sandboxes, and a database of high-risk AI systems). The last, and arguably most important, provisions take effect on August 2, 2027 (classification of AI systems as high-risk AI systems and the related obligations). On that date, the AI Act will be in full effect.
What is an AI system?
In interpreting the definition of an AI system, it is crucial to distinguish AI systems from simple, traditional software. In short, under the AI Act's definition, an AI system is a machine-based system (i.e., a system developed with and running on machines, including computer or electronic systems) that:
- is designed to operate autonomously (it can manifest various levels of autonomy), and
- once implemented, can exhibit adaptability, and
- infers how to generate outputs from the inputs it receives (such as predictions, content, recommendations or decisions that may influence physical or virtual environments), and
- carries out that inference for explicit or implicit objectives.
The first functional characteristic of AI systems regulated by the AI Act is autonomous operation (possible at different levels). This means that the AI system should operate, to some extent, independently of human involvement or intervention.
The second functional characteristic is the adaptability that an AI system may exhibit once deployed (i.e., the ability to self-learn, allowing the system to change over time). The key word here is "may": an AI system need not manifest this feature to be considered an AI system under the AI Act (so a system whose ability to learn after deployment has been disabled can still be an AI system).
The third key functional characteristic is inference capability. This concept goes beyond basic data processing and refers to the process of producing outputs (such as predictions, content, recommendations or decisions) that can influence physical or virtual environments. The term also covers the ability of AI systems to derive models or algorithms from information or inputs. Techniques that enable inference during the development of an AI system include machine learning approaches that learn from data how to achieve certain objectives, as well as logic- and knowledge-based approaches that infer from encoded knowledge or a symbolic representation of the task to be solved.
The fourth functional characteristic is inference for explicit or implicit objectives. This means that AI systems can operate according to explicit or implicit objectives (which may differ from the intended purpose of the AI system in a particular context).
Importantly, AI systems can be used as stand-alone solutions or as components of a product, regardless of whether the system is physically integrated into the product (embedded) or serves the product's functionality without being integrated into it (non-embedded).
What are the basic categories of AI systems?
The AI Act distinguishes four basic categories of AI systems.
- Prohibited AI systems
- High-risk AI systems
- Limited-risk AI systems
- Minimal-risk AI systems

| Important!
The AI Act does not apply to AI systems or AI models (including their outputs) developed and put into service exclusively for scientific research and development purposes. Nor does it apply to research and development activity on such an AI system before it is put into service or placed on the market. |
Prohibited AI Systems
The AI Act imposes a complete ban on the placing on the market, putting into service, or use of so-called prohibited AI systems within the European Union; their catalogue is set out in the already famous Article 5 of the AI Act.
Systems falling into this category include AI systems:
- using subliminal techniques beyond a person's consciousness,
- using purposefully manipulative or deceptive techniques,
- exploiting a person's vulnerabilities due to age, disability, or a specific social or economic situation,
- evaluating or classifying people on the basis of social behaviour or personality traits (i.e. social scoring),
- assessing the risk that a person will commit a criminal offence (based solely on profiling or on assessing personality traits and characteristics),
- creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage,
- emotion recognition in the workplace or educational institutions,
- biometric categorization (categorizing individuals based on biometric data) to infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation,
- real-time remote biometric identification (identifying persons based on biometric data) in publicly accessible spaces for law enforcement purposes.
The exceptions to the above prohibitions are as follows.
- In the case of emotion recognition systems in the workplace or educational institutions, the prohibition does not extend to systems serving medical purposes (e.g. systems intended for therapeutic use) or safety purposes (concerning only the protection of life and health, not other interests such as protecting property from theft or fraud),
- In the case of biometric categorization, the prohibition does not extend to the labelling and filtering of lawfully acquired datasets, such as images, based on biometric data, or to the categorization of biometric data in the area of law enforcement (e.g. sorting images by hair colour or eye colour),
- In the case of real-time biometric identification, the ban does not extend to the use of AI systems by law enforcement in connection with certain serious crimes:
- targeted search for specific victims of abduction, human trafficking or sexual exploitation, as well as search for missing persons,
- the prevention of a specific, substantial and imminent threat to the life or safety of individuals or an actual and present or actual and foreseeable threat of a terrorist attack,
- locating or identifying a person suspected of committing a crime, for the purposes of a criminal investigation or prosecution or the execution of a criminal penalty, for offences punishable by a custodial sentence or detention order with a maximum duration of at least four years (depending on the EU Member State), e.g. terrorism, human trafficking, sexual exploitation of children and child pornography, drug trafficking, arms trafficking, murder or grievous bodily harm, kidnapping and hostage-taking, crimes within the jurisdiction of the International Criminal Court, rape, environmental crime, robbery, sabotage, or participation in a criminal organization.
| Exception!
An emotion recognition system is an AI system that identifies or infers the emotions or intentions of individuals on the basis of their biometric data (e.g. facial images or dactyloscopic data, i.e. fingerprints). Notably, the concept covers emotions such as joy, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. However, it does not include physical states such as pain or fatigue (so systems used to detect the fatigue of professional pilots or drivers in order to prevent accidents are not prohibited AI systems). Nor does it cover the detection of readily apparent expressions, gestures or movements (e.g. a grimace, a smile, a hand movement, a whisper), unless the detection is used to identify persons or infer their emotions. This means that AI systems recognizing emotions based only on written text will not be prohibited. By contrast, systems that infer emotions from keystrokes (i.e. the way someone types), facial expressions, posture or movements will fall into this category, as the analysis is based on biometric data. |
| Exception!
The exception allowing emotion detection systems for medical purposes (in workplaces or educational institutions) does not cover the use of emotion recognition systems to detect general aspects of wellbeing. General monitoring of stress levels in the workplace does not fall within the health or safety exception. For example, AI systems designed to detect job burnout or depression in workplaces or educational institutions remain prohibited. |
High-risk AI systems
High-risk AI systems are the systems to which most of the AI Act's provisions are devoted. Putting these systems into service is subject to the broadest catalogue of documentation and reporting obligations.
This category includes AI systems that are components of a product covered by EU harmonization legislation, where the system serves a safety function for that product (or where the AI system is itself such a product), such as:
- a medical device (Regulation of April 5, 2017, No. 2017/745),
- an in vitro diagnostic medical device (Regulation of April 5, 2017, No. 2017/746),
- a machine (Directive of May 17, 2006, No. 2006/42/EC),
- a lift or a safety component for lifts (Directive of February 26, 2014, No. 2014/33/EU),
- a recreational watercraft or jet ski (Directive of November 20, 2013, No. 2013/53/EU),
- a toy (Directive of June 18, 2009, No. 2009/48/EC),
- pressure equipment (Directive of May 15, 2014, No. 2014/68/EU),
- equipment or protective system intended for use in a potentially explosive atmosphere (Directive of February 26, 2014, No. 2014/34/EU),
- radio equipment (Directive of April 16, 2014, No. 2014/53/EU),
- personal protective equipment (Regulation of March 9, 2016, No. 2016/425),
- an appliance burning gaseous fuels (Regulation of March 9, 2016, No. 2016/426), or
- cableway installations (Regulation of March 9, 2016, No. 2016/424).
| Important!
The list of EU harmonization legislation to which the AI Act will apply is set out in Annex I to the AI Act. |
In addition, such a product must be subject to a third-party conformity assessment procedure in connection with its placing on the market or putting into service. Both of the above conditions (coverage by EU harmonization legislation and third-party conformity assessment) must be met cumulatively.
Also, the following will be high-risk AI systems (set out in detail in Annex III to the AI Act):
1. Biometric systems, to the extent that their use is permitted under relevant EU or national laws and includes:
- remote biometric identification systems,
- AI systems intended for biometric categorization according to sensitive or protected attributes or characteristics, based on the inference of those attributes or characteristics,
- AI systems designed for emotion recognition.
| Exception!
A system used merely to confirm that a particular individual is the person he or she claims to be (based on biometric data obtained by legitimate means) will not be a high-risk biometric system. |
2. Critical infrastructure systems, i.e. systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
3. Systems for education and vocational training, used for:
- making decisions on access, admission or assignment to an educational/vocational training institution,
- assessing learning outcomes (including when outcomes are assessed only in terms of guiding the learning process),
- assessing the appropriate level of education that an individual can receive or will have access to,
- monitoring and detection of unauthorized student behaviours during tests (at all levels).
4. Systems for employment, management of workers and access to self-employment, used for:
- recruiting or selecting individuals (in particular, placing targeted job advertisements, analysing/filtering resumes and evaluating candidates),
- making decisions affecting the terms of employment relationships, promotion or termination, allocating tasks (based on individual behaviour, character or personality traits), and monitoring or evaluating performance and behaviour.
| Note!
This is a special category that every company using, or considering using, AI systems in the work of HR (human resources) teams should pay attention to. Even the simple use of widely available AI tools, such as ChatGPT or Copilot, to analyse employee data can result in classification as a high-risk AI system. |
5. Systems for access to and enjoyment of essential private services and essential public services and benefits, used for:
- assessing individuals' eligibility for basic public benefits and services (including health care, granting/restricting/cancelling/requesting reimbursement of such benefits or services; used by and on behalf of public authorities),
- assessing the creditworthiness of individuals or establishing their credit scoring,
- risk assessment and pricing in relation to individuals in the case of life and health insurance,
- evaluating and classifying emergency calls from individuals, prioritizing the dispatch of emergency first response services (including police, firefighters and medical aid), and triaging patients in emergency healthcare.
6. Law enforcement systems (authorized by EU or national law, used by law enforcement authorities or on their behalf by EU institutions, bodies and organizational units), for:
- assessing whether an individual will become a victim of a crime,
- polygraphs and similar tools used to support law enforcement authorities,
- assessing the reliability of evidence in the course of prosecuting crimes or conducting pre-trial investigations,
- profiling of natural persons in the course of detecting and prosecuting crimes or conducting pre-trial investigations,
- assessing the risk that an individual will commit (or re-commit) a crime, or assessing personality traits, character or previous criminal behaviour (including of groups).
7. Systems for migration, asylum and border control management (authorized by EU or national law, used by or on behalf of competent public authorities, or by EU institutions, bodies, offices and agencies), for:
- operating polygraphs or similar tools,
- risk assessment (including security, irregular migration, health risks posed by an individual who intends to enter or has entered the EU),
- processing of applications for asylum, visas, residence permits, and related complaints with regard to the eligibility of individuals applying for a particular status, including when assessing the credibility of evidence,
- management of migration, asylum and border control for the purpose of detecting, recognizing or identifying individuals.
8. Systems for the administration of justice and democratic processes (used by or on behalf of judicial authorities), for:
- assisting in researching and interpreting facts and the law and in applying the law to a specific set of facts, or used in a similar manner in alternative dispute resolution,
- influencing the outcome of elections or referendums, or the voting behaviour of individuals in elections or referendums.
Under the AI Act, however, high-risk systems will not include:
- systems for detecting financial fraud (which might otherwise fall within creditworthiness-assessment and credit-scoring systems),
- systems for verifying travel documents (which may originally fit into systems for managing migration, asylum, border control, etc.),
- systems for, among other things, organizing, optimizing or structuring political campaigns from an administrative or logistical point of view (i.e. systems that do not directly affect individuals participating in campaigns or referendums, because those individuals are not exposed to the systems' outputs; such systems might otherwise fall within systems for the administration of justice and democratic processes).
| Important!
The qualification of AI systems as high-risk AI systems should not extend to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice on a case-by-case basis, such as the anonymization or pseudonymization of court decisions, documents or data, communication between staff members, or the performance of administrative tasks. |
| Important!
An AI system that does not pose a significant risk of harm to the health, safety, or fundamental rights of individuals, including by not significantly affecting the outcome of the decision-making process, is not considered a high-risk system (even if it concerns areas identified among high-risk systems). |
The key is the risk of harm and the impact on decision-making. Thus, for classification as a high-risk AI system, the decisive factors are:
- significant risk of harm to the legally protected interests of individuals (health, safety or fundamental rights), and
- materiality of impact on the outcome of the decision-making process.
Particularly important here is the indication that if an AI system does not affect the substance, and thus the outcome, of a decision-making process, whether carried out by humans or by automated means, it will not be classified as a high-risk AI system.
An AI system that does not significantly affect the outcome of the decision-making process may include systems that:
- are designed to perform a narrow procedural task (e.g. a system that converts unstructured data into structured data, a system that categorizes incoming documents, or a system used to detect duplicates among a large number of applications),
- are intended to improve the result of a previously completed human activity (e.g. systems that linguistically polish previously prepared documents, for example by introducing a professional tone or an academic style, or by adapting the text to a specific brand message),
- are designed to detect patterns in decision-making, or deviations from patterns of decisions previously made by a human, so that their use follows a human evaluation (e.g. a system that, given a particular assessment pattern used by a teacher, can be used ex post to check whether the teacher has deviated from that pattern, and thus flag potential inconsistencies or irregularities),
- are intended only to perform preparatory tasks (e.g. file management systems offering functions such as indexing, searching, text and speech processing, or linking data to other data sources, or systems used to translate preliminary documents).
| Exception!
When a system referred to in Annex III profiles individuals, it is always considered a high-risk system. |
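To make the triage described above more tangible, here is a minimal, illustrative sketch in Python of the Annex III derogation logic. It is not legal advice or an official method; the boolean inputs are assumptions standing in for a case-by-case legal assessment, and the Annex I product-safety route is deliberately left out.

```python
from dataclasses import dataclass

@dataclass
class SystemFacts:
    """Simplified facts about an AI system used in an Annex III area."""
    annex_iii_area: bool                   # falls within an area listed in Annex III
    profiles_individuals: bool             # performs profiling of natural persons
    significant_risk_of_harm: bool         # to health, safety or fundamental rights
    materially_influences_decisions: bool  # affects the outcome of decision-making

def is_high_risk(facts: SystemFacts) -> bool:
    """Rough triage mirroring the derogation logic described above."""
    if not facts.annex_iii_area:
        return False  # the Annex I product-safety route is not modelled here
    if facts.profiles_individuals:
        return True   # profiling within Annex III areas is always high-risk
    # The derogation: no significant risk of harm and no material influence
    # on the outcome of decision-making means the system is not high-risk.
    return facts.significant_risk_of_harm or facts.materially_influences_decisions

# Example: a document-deduplication tool used in recruitment (an Annex III area)
# that performs a narrow procedural task would not be high-risk:
print(is_high_risk(SystemFacts(True, False, False, False)))  # False
```

The sketch encodes only the structure of the test; each boolean hides an assessment that, in practice, requires legal and technical judgment.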
Limited-risk AI systems
Limited-risk AI systems are systems that are not capable of causing serious harm in the way high-risk AI systems are, but may pose a risk of misrepresentation, manipulation or deception. The key with such systems is to ensure that they operate transparently.
Transparency here means that AI systems intended to interact directly with individuals should be designed and developed in such a way that the individuals concerned are informed that they are interacting with an AI system (unless this is obvious from the point of view of a reasonably well-informed, observant and circumspect individual, taking into account the circumstances and context of use).
The key, then, is adequately informing the user about the interaction with the AI system, at the latest at the time of the first interaction or first use of the system. A minimal sketch of such a first-interaction notice follows.
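As an illustration of the first-interaction notice, the sketch below wraps an arbitrary reply-generating function so that the disclosure is shown before the first answer. The class name, the wording of the notice, and the generate_reply callable are all hypothetical; the AI Act mandates the information, not any particular wording or mechanism.

```python
AI_DISCLOSURE = "Please note: you are interacting with an AI system."  # illustrative wording

class DisclosingChatbot:
    """Wraps a reply generator so the AI-interaction notice precedes the first reply."""

    def __init__(self, generate_reply):
        self._generate_reply = generate_reply  # any callable: str -> str
        self._disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self._generate_reply(user_message)
        if not self._disclosed:
            self._disclosed = True  # disclose at the latest on first interaction
            return AI_DISCLOSURE + "\n\n" + answer
        return answer

bot = DisclosingChatbot(lambda msg: f"Echo: {msg}")
print(bot.reply("Hello"))  # prefixed with the disclosure
print(bot.reply("Again"))  # no repeated disclosure
```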
The transparency obligations apply to the following types of limited-risk AI systems (required measures in parentheses).
- General-purpose AI systems that generate synthetic text, audio, images or video, e.g. ChatGPT, Perplexity (outputs must be labelled in a machine-readable format and detectable as artificially generated or manipulated; see the labeling sketch after this list),
- Emotion recognition or biometric categorization systems, where they do not pose a risk to fundamental rights, such as those used in the entertainment industry (they should inform the individuals to whom they will be applied of the fact that they will be used and data processing should follow the applicable regulations in this regard),
- Deepfake generators, i.e. systems that generate or manipulate audio, video or image content that could be mistaken for authentic (disclosure that the content has been artificially generated or manipulated; in the case of a work or programme of a clearly artistic, creative, satirical, fictional or analogous nature, the transparency obligations are limited to disclosing the existence of such generated or manipulated content in an appropriate manner that does not impede the display or enjoyment of the work),
- Systems that generate or manipulate text published to inform the public on matters of public interest (disclosure that the content has been artificially generated or manipulated).
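By way of illustration of machine-readable labeling, the sketch below embeds a marker in a PNG file's text metadata using Pillow. The key names (ai_generated, generator) are illustrative assumptions, not a scheme mandated by the AI Act; in practice, providers would follow emerging standards such as C2PA-style content credentials or watermarking.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed an 'AI-generated' marker in a PNG's text metadata (illustrative only)."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")        # hypothetical key/value
    metadata.add_text("generator", "example-model")  # hypothetical model name
    image.save(dst_path, pnginfo=metadata)

# Reading the marker back:
# Image.open("out.png").text  ->  {'ai_generated': 'true', 'generator': 'example-model'}
```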
The above transparency principle does not apply in the following cases.
- In the case of general-purpose AI systems, this obligation does not apply to the extent that the systems perform a standard editing-support function or do not materially alter the input data provided by the deployer or its semantics, or to the extent that their use is permitted by law for the purpose of detecting, preventing, investigating or prosecuting crimes.
- In the case of systems for emotion recognition and biometric categorization, this obligation shall not apply in cases where their use is permitted by law for the detection, prevention, investigation or prosecution of crimes, subject to appropriate safeguards for the rights and freedoms of third parties and in accordance with EU law.
- For deepfake generating systems, this obligation does not apply where the use is permitted by law for the detection, prevention, investigation or prosecution of crimes.
- In the case of systems that generate content of public interest, this obligation does not apply where the use is permitted by law for the detection, prevention, investigation or prosecution of crimes, or where the content generated by the system has undergone human verification or editorial control and a natural or legal person holds editorial responsibility for its publication.
So, the basic situation in which the above transparency principle does not apply is for AI systems whose use is permitted by law for the purpose of detecting, preventing, investigating or prosecuting crimes.
Minimal-risk AI systems
Minimal-risk AI systems under the AI Act are systems that do not pose a significant risk to fundamental rights, public health or safety. These systems are largely unregulated and are not subject to special obligations, apart from the recommended transparency ensuring that users are aware they are interacting with an AI system.
Examples of minimal-risk AI systems include:
- spam filters: simple systems that filter unwanted emails,
- simple recommendation systems: systems that suggest products or content based on browsing history,
- AI-enabled computer games: games that use AI to generate levels or enemies, but do not affect fundamental rights,
- tools for automating simple tasks: AI systems that help automate routine tasks such as adjusting the brightness of photos.
What are general-purpose AI models and general-purpose AI systems?
The last category to keep in mind when classifying AI systems is the so-called general-purpose AI model and the general-purpose AI systems that can be built on such models.
As defined in the AI Act, a general-purpose AI model is an AI model, including one trained on large amounts of data using self-supervision at scale, that displays significant generality and can competently perform a wide range of distinct tasks, regardless of how the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.
General-purpose AI models can include models with at least a billion parameters. Large generative AI models are a typical example of a general-purpose AI model, given that they enable flexible generation of content, such as text, audio, images or video, and can easily perform a wide range of different tasks. These can include, among others: GPT or DALL-E (OpenAI), Claude (Anthropic), Grok (xAI), Gemini (Google), LLaMA (Meta), or the Polish PLLuM.
General-purpose AI models are typically trained on large amounts of data using a variety of methods, such as self-supervised learning, unsupervised learning or reinforcement learning.
General-purpose AI models can be marketed in a variety of ways, including through libraries, application programming interfaces (APIs), by direct download or in a physical version.
These models can be further modified or adapted as a basis for creating new models.
While AI models are essential components of AI systems, they do not constitute AI systems in themselves.

For an AI model to become an AI system, additional elements must be added to it, such as a user interface. AI models are usually integrated with and are part of AI systems. If a general-purpose AI model is integrated with or is part of an AI system, the system should be considered a general-purpose AI system if, as a result of the integration of the model, the system can serve different purposes. A general-purpose AI system can be used directly or be integrated with other AI systems. General-purpose AI systems can be used as stand-alone high-risk AI systems or be part of other high-risk AI systems.
| Important!
If there is an integration of a general-purpose AI model with an AI system that is already made available on the market or put into use, then the general-purpose AI model should be considered to have been placed on the market and the AI Act regulations should begin to apply. However, AI Act regulations will not apply to general-purpose AI models used in purely internal processes that are not necessary to provide a product or service to third parties, and the rights of individuals are not affected in any way. |
General-purpose AI models may also fall into an additional classification: general-purpose AI models with systemic risk. A general-purpose AI model poses systemic risk if it has high-impact capabilities, as assessed with appropriate technical tools and methodologies, or if it has a significant impact on the internal market due to its reach. Systemic risk increases with the model's capabilities and reach, can arise throughout the model's life cycle, and depends on factors such as conditions of misuse, the model's reliability, fairness and security, its level of autonomy, its access to tools, the use of novel or combined methods, release and distribution strategies, and the potential for removing guardrails.
A general-purpose AI model is classified as a general-purpose AI model with systemic risk if:
- it has high-impact capabilities, assessed on the basis of appropriate technical tools and methodologies, including indicators and benchmarks (this is presumed where the cumulative amount of computation used for its training, measured in floating-point operations, is greater than 10^25), or
- based on a decision of the European Commission, taken ex officio or following a qualified alert from the scientific panel, the model is found to have high-impact capabilities (as above), taking into account the criteria set out in Annex XIII to the AI Act.
General-purpose AI models may pose systemic risks, which include, but are not limited to:
- any actual or reasonably foreseeable negative consequences of major accidents, disruptions of critical sectors, and serious consequences for public health and safety (e.g. triggering events that could cause a chain reaction with significant negative effects on an entire city, all activity in an area, or an entire community),
- any actual or reasonably foreseeable negative effects on democratic processes, public safety and economic security (e.g., facilitating disinformation or invading privacy),
- distributing illegal, false or discriminatory content,
- development, design, acquisition or use of weapons,
- offensive cyber security capabilities,
- risks associated with self-copying by models or self-replication or resulting from the training of other models by a model.
However, as with AI systems, this category does not include AI models used, prior to their placing on the market, solely for research, development and prototyping activities.
| Important!
One way to approximate model capability is the total amount of computation used to train a general-purpose AI model, measured in floating-point operations (FLOPs). The total training compute includes the computation used across all activities and methods intended to increase the model's capabilities before deployment, such as pre-training, synthetic data generation and fine-tuning. For now, the threshold for a model to be presumed a general-purpose AI model with systemic risk is 10^25 floating-point operations. This threshold should be adjusted over time to reflect technological and industry changes, such as algorithmic improvements or greater hardware efficiency, and should be supplemented with benchmarks and indicators of model capability. |
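To see how the 10^25 FLOPs presumption can be checked in practice, the sketch below uses the common 6·N·D rule of thumb from the scaling-law literature (roughly 6 FLOPs per parameter per training token). This heuristic and the example numbers are assumptions for illustration; the AI Act does not prescribe any particular estimation method.

```python
def training_flops_estimate(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_parameters * n_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # the AI Act's presumption threshold in FLOPs

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens
flops = training_flops_estimate(70e9, 15e12)  # ~6.3e24 FLOPs
status = "above" if flops > SYSTEMIC_RISK_THRESHOLD else "below"
print(f"{flops:.2e} FLOPs is {status} the 1e25 presumption threshold")  # below
```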
Where to look for guidance?
The rules for categorizing AI systems under the AI Act are long and complicated, and contain many variants for classifying a system into the different categories.
Of course, the primary act used to categorize AI systems should be the AI Act itself (in particular, the preamble contains a wealth of interesting guidance). Guidance can also be found in the European Commission's recently issued Guidelines on prohibited AI practices as defined by the AI Act.
The guidelines run to more than 135 pages of instructions, with detailed examples of systems that may fall primarily within the so-called prohibited AI systems, as well as situations in which they do not constitute such systems and thus fall into the other categories indicated in the AI Act.
Transition Periods
In general, the AI Act will be fully applicable on August 2, 2027. However, as an exception, the general provisions (Chapter I) and the provisions on prohibited AI systems (Chapter II) already apply from February 2, 2025.
The next batch of provisions enters into application on August 2, 2025: the provisions on notified bodies (Chapter III, Section 4), on general-purpose AI models (Chapter V), on governance (Chapter VII), on penalties (Chapter XII), and Article 78 of the AI Act (confidentiality of information and data obtained by competent authorities in the performance of their tasks), with the exception of Article 101 of the AI Act, which concerns fines for providers of general-purpose AI models (providers of general-purpose AI models placed on the market before August 2, 2025, must take the necessary steps to comply with the obligations under the AI Act by August 2, 2027).
Then, on August 2, 2026, the AI Act will apply almost in its entirety, covering most high-risk AI systems (i.e., those identified in Annex III to the AI Act), the transparency obligations for providers and deployers of AI systems (Chapter IV), measures in support of innovation (Chapter VI), the EU database for high-risk AI systems (Chapter VIII), post-market monitoring, information sharing and market surveillance (Chapter IX), codes of conduct and guidelines (Chapter X), and the delegation of powers to the European Commission to adopt delegated acts and the committee procedure (Chapter XI).
The last provisions enter into application on August 2, 2027, and concern the classification of AI systems as high-risk to the extent that they are components of products covered by EU harmonization legislation, together with the obligations attached to such systems. Requirements will also apply to operators of high-risk AI systems placed on the market or put into service before August 2, 2026, but only if those systems undergo significant design changes after that date, and to providers of general-purpose AI models placed on the market before August 2, 2025.
Finally, August 2, 2030, will be the deadline for ensuring compliance with the AI Act for high-risk AI systems used by public authorities and large-scale IT systems listed in Annex X to the Regulation that were put into service before August 2, 2027.
| Important!
The AI Act applies to high-risk AI systems that were placed on the market or put into service before August 2, 2026, only if they undergo significant changes (e.g. retraining) after that date. |
Summary
Classifying AI systems can cause many difficulties. The AI Act, national laws on artificial intelligence, and guidelines from both EU and national administrative bodies are extensive and intricate.
Various mechanisms will be introduced over time to facilitate the categorization process. First of all, a special database of high-risk AI systems will be created, recording both systems considered high-risk and AI systems not considered high-risk (the database will be publicly available). In addition, codes of conduct are to be created to promote the voluntary application, to non-high-risk AI systems, of some or all of the requirements applicable to high-risk AI systems.
In addition, the European Commission is to issue guidance on the practical implementation of the AI Act, and thus on the classification of AI systems (like the previously mentioned guidelines on prohibited AI practices). The Commission's Guidelines on the definition of an AI system are already available, and further guidelines on the identification and classification of AI systems can probably be expected in the near future.
* We stipulate that the above study does not constitute a binding evaluation or classification of any AI system for the purposes of the AI Act. Any analysis should be carried out on a case-by-case basis. Crido is not responsible for decisions and classifications of AI systems made on the basis of the above study.