
A Review of the European Union Artificial Intelligence Act

July 2023, ERDEMİR & ÖZMEN LEGAL PARTNERSHIP


The development and increasing prevalence of artificial intelligence systems in various fields, while leading to many positive developments in human life, also bring a number of problems. To prevent potential violations that may directly affect human rights, equality and the use of personal data, the European Commission has taken a step towards a regulation addressing violations that may arise from the use of artificial intelligence systems.

On 21 April 2021, the European Commission presented the "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)", a legislative proposal regulating the use of artificial intelligence. In December 2022, the Council of the European Union, composed of member states, announced its position on the legislative proposal.

On 14 June 2023, the European Parliament held a final vote to determine its official position and the draft act was adopted by a large majority. Following this stage, on 8 December 2023, the European Commission, the European Parliament and the member states agreed on the first "Artificial Intelligence Act" in the world.

In this context, the European Union aims to take the global lead in the regulation of artificial intelligence and to complete the necessary steps towards an overarching legal framework for the future, one containing clear provisions on the use of artificial intelligence. This historic step positions the European Union and its Member States as an example for other regulators around the world.

The European Union Artificial Intelligence Act, voted on by the European Parliament and adopted by a large majority, provides a detailed definition of the artificial intelligence system. The Parliament aimed to craft a definition that can also be applied to future artificial intelligence systems, and it determined which systems may be considered artificial intelligence. In addition, the Regulation envisages the establishment by the Commission of the European Artificial Intelligence Office, which will cooperate with the European Artificial Intelligence Board and whose task is to monitor and supervise the use of artificial intelligence models.

The Parliament's primary objective is to ensure that artificial intelligence systems used in the European Union are reliable, transparent and auditable, and to guarantee the protection of fundamental human rights and freedoms when these systems are used. The Regulation defines "artificial intelligence systems" in detail and stipulates the responsibilities of persons involved in different areas of artificial intelligence systems. A series of severe sanctions was introduced by this Regulation for artificial intelligence systems that carry high risks, particularly for the providers, users and importers of these systems.

When the Artificial Intelligence Act is reviewed, it can be seen that the European Parliament has adopted a classification based on four risk categories: low risk, limited risk, high risk and unacceptable risk. Depending on these risk levels, specific rules and responsibilities have been set for all stakeholders involved in the development, use, distribution, import or production of artificial intelligence models. Accordingly:

  • The use of artificial intelligence systems against fundamental human rights is explicitly prohibited and is considered an unacceptable risk. This category includes artificial intelligence systems that manipulate and score individual behaviour, biometric recognition applications used in public places to categorise individuals according to their gender, race, or sexual or political orientation, and emotion recognition systems used to identify individuals' emotions, thoughts or mental states. The use of real-time remote biometric identification technology (e.g. facial recognition via CCTV) in public places by law enforcement agencies is prohibited except in certain circumstances. The circumstances in which biometric identification may be used in public places are listed in the text of the Act as follows:

-      For the investigation of certain serious offences among the 16 listed in the Act, such as rape, murder and crimes falling within the jurisdiction of the International Criminal Court;

-      In the targeted search for victims of abduction, trafficking in human beings and sexual exploitation, and in the search for missing persons;

-      In the prevention of a threat to the life or physical safety of persons, or in response to an actual or foreseeable threat of a terrorist attack.

  • Systems classified as high risk may only be used under human supervision and must additionally undergo a conformity assessment. Under Article 6, Annex III lists eight use cases that are considered high-risk; this list includes systems used by law enforcement agencies, systems used in the administration of justice and democratic processes, systems used to assess candidates by analysing and filtering job applications, and systems used for vocational training, all of which are subject to a conformity assessment.
  • Chatbots used on websites for support purposes will be governed by more flexible rules and fall into a specific transparency category. When such artificial intelligence systems are used, the other party must be informed that they are interacting with an artificial intelligence system. Biometric facial recognition systems used in the public domain and chatbots on banks' websites are likewise subject to transparency obligations and require user notification.
  • For artificial intelligence systems that pose little risk in terms of human rights, including applications such as video games and spam filters, no specific obligations are stipulated.

The Artificial Intelligence Act will enter into force on the twentieth day following its adoption by the European Parliament and the Council and its publication in the Official Journal. It will be fully applicable 24 months after entry into force, under the following phased approach:

-      6 months after entry into force, the prohibitions on unacceptable-risk systems will apply;

-      12 months after entry into force, the obligations on general-purpose artificial intelligence and governance will apply;

-      24 months after entry into force, all the rules of the Artificial Intelligence Act, including the obligations for high-risk systems defined in Article 6 and listed in Annex III, will become applicable.

However, artificial intelligence developers may bring forward the implementation date by voluntarily declaring that their systems will comply with all or part of the Act's obligations before the relevant provisions take effect.

Non-compliance with the rules and obligations set out in the Artificial Intelligence Act will have serious consequences for all persons involved in the development, use, distribution, import or production of artificial intelligence models in the sector. The Act stipulates fines of up to 35 million Euros or 7% of annual global turnover, depending on the degree of the violation, with lower thresholds applying to less serious breaches. These sanctions, which scale with the severity of the violation, make clear that all persons involved in artificial intelligence systems must comply with the Act and act within its rules.

As a result, artificial intelligence systems are developing rapidly around the world and are used in all areas of life. As their use grows, artificial intelligence systems may cause many problems in terms of human rights and safety, and the adopted Act introduces regulations to prevent these problems. The Artificial Intelligence Act, the first of its kind in the world, regulates increasingly widespread artificial intelligence systems according to risk classes, takes an approach geared towards protecting human rights, and introduces sanctions. It is clear that the restrictions introduced by the Act will have a significant impact on the activities of technology companies whose business is based on algorithms and artificial intelligence. The experience gained from implementing the rules after the Act enters into force is expected to be reflected in the regulations of states focused on the development of artificial intelligence, such as the USA, South Korea and the UK, as well as of economically oriented international organisations such as the G7 and the OECD.

