AI and the Law

Artificial Intelligence has come a long way in recent years: growing in capability, sophistication, and prevalence, it has become part of our everyday lives, influencing our habits.

Driven by private initiative, algorithms now shape our behavior on social networks, our preferences in movies and games, our ability to apply for a mortgage or loan, our chances of succeeding in a job interview, and so on.

Fascinated and at the same time wary of this new technology, the public and institutions now have to balance the countless benefits of AI with the risks it may pose to our rights.

The need has recently arisen, shared by the European Union and the United States, to regulate the use of Artificial Intelligence on several fronts, lest private actors end up with more power and responsibility in their hands than they can really control.

This article takes an in-depth look at the features of the two most recent regulatory frameworks – the AI Bill of Rights and the AI Act, both of which are currently non-binding – which are already expected to become the benchmark for Artificial Intelligence regulation worldwide in the coming years.

AI Bill of Rights

The AI Bill of Rights does not yet constitute a legislative proposal, nor does it mention penalties or sanctions for automated systems that do not comply with these rules: rather, it is intended as a set of non-binding recommendations to companies and government organizations that intend to adopt, or are already making use of, Artificial Intelligence.

The document outlines five principles by which to regulate the design and implementation of Artificial Intelligence: the text refers specifically to the U.S. public sector, but the guidelines it contains can also be applied in other contexts where the technology is used.

1 - Safe and effective systems

Citizens should be protected from the use of potentially unsafe or ineffective systems that can do harm to the individual and/or the community.

For this reason, automated systems, even before they are implemented, should undergo independent testing to assess their effectiveness and safety.

2 - Protections against algorithmic discrimination

Artificial Intelligences should be designed and used fairly, with proactive measures taken to prevent the risk of algorithmic discrimination.

"Algorithmic discrimination" is defined as unequal treatment by an automated system on the basis of ethnic, social or cultural criteria.

3 - Protection of personal data

Automated systems should by design contain options regarding the protection of personal data and privacy of users and should collect only the data strictly necessary for their operation.

The point also refers to the need to obtain explicit consent from the user, not unlike what is done in the European Union with the GDPR.

4 - Notices and explanations

One should always know if and when one is interfacing with an automated system and what the impacts, if any, of the interaction are.

Transparency in the use of Artificial Intelligence includes the need to make explicit not only the presence of the technology in question, but also how it works, in clear language that is accessible to as many people as possible.

5 - Human alternatives, consideration, and fallback

Citizens should always be able to choose a human alternative to an automated system, where such a choice is feasible and appropriate.

The presence of oversight and monitoring figures is particularly recommended in areas considered "most sensitive": criminal justice, human resources, education, and health.

AI Act (European Union)

The AI Bill Of Rights represents an important signal from the United States, the world superpower and cradle of Big Tech, but from the perspective of AI regulation, the European Union has already taken several steps forward.

The AI Act, presented in the spring of 2021 and currently under discussion between the European Parliament and member states, is a piece of legislation that aims to apply the principle of transparency and respect for human rights to the design and use of Artificial Intelligence.

The goal of the AI Act is to regulate the entire sector of the production and deployment of automated systems on the European territory, in accordance with existing legislation in the member states and the General Data Protection Regulation (GDPR).

The AI Act has several points of contact with the more recent AI Bill Of Rights, from which it differs in that it has a more regulatory slant: in fact, it includes a prior registration requirement for this type of technology and a ban on the use of types of Artificial Intelligence deemed "unacceptable risk."

What are high-risk AIs?

The text of the AI Act divides AIs into four classes of risk, calculated proportionally based on potential threats to people's health, safety, or fundamental rights.

Unacceptable risk

Artificial Intelligences that employ practices such as profiling for coercion or social scoring purposes, or that use subliminal techniques – that is, techniques that distort people's behavior in ways that can cause physical or psychological harm – fall into this category.

Unacceptable risk Artificial Intelligence systems are to be considered prohibited, as they contravene in their operation the values of the European Union and fundamental human rights such as the presumption of innocence.

High risk

Artificial Intelligence systems that have the potential to significantly affect the course of democracy or individual or collective health fall into this category.

Examples of high-risk Artificial Intelligences are:

  • Systems used in education or vocational training for the evaluation of tests or access to institutions;
  • Systems used to make decisions in employment relationships and in creditworthiness assessment;
  • Systems intended for use in the administration of justice and crime prevention, detection, investigation and prosecution;
  • Systems intended for use in the management of migration, asylum and border control.

The AI Act pays special attention to high-risk applications of Artificial Intelligence. These will be allowed to enter the market, but only if they meet a set of mandatory horizontal requirements that ensure their reliability and have passed several conformity assessment procedures.

Limited risk

This category includes systems such as chatbots or deepfakes, which may originate a risk of manipulation when the nature of the conversational agent is not made clear to the user.

For AI systems considered limited risk, the act imposes transparency obligations on providers: the public must be aware at all times that they are interacting with a machine.

Minimal risk

The vast majority of expert, automated and Artificial Intelligence systems currently in use in Europe fall into this category.

For AI systems considered minimal risk, the regulation leaves providers free to adhere to codes of conduct and reliability standards on a voluntary basis.
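To recap the four tiers, here is a minimal, purely illustrative Python sketch. The tier names and obligations summarize the descriptions above; the `RiskTier` enum, the `EXAMPLE_TIERS` mapping, and the `obligations` helper are hypothetical constructs for illustration, not part of the AI Act or any official tooling (the act itself enumerates covered use cases in its annexes):

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the AI Act proposal."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # allowed only after conformity assessment
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # voluntary codes of conduct

# Hypothetical mapping of example use cases to tiers, following the
# categories described in the text above.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "exam evaluation": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "border control": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of what the proposal requires for each tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "mandatory requirements + conformity assessment",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.MINIMAL: "voluntary codes of conduct",
    }[tier]
```

The key design point the tiers encode is proportionality: obligations scale with the potential threat to health, safety, or fundamental rights, rather than applying uniformly to every automated system.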

AI Liability Directive: toward regulation

At the end of September 2022, just days before the AI Bill Of Rights was published overseas, the European Commission released the AI Liability Directive, a proposal on legal liability for damage caused by Artificial Intelligence.

In other words, this document is a first step toward giving individuals or entities that suffer damages related to the use of this type of technology access to legal remedies.

In the AI Liability Directive, the European Commission also distributes legal liability among several actors: first and foremost the companies that make Artificial Intelligence available, but also other actors along the entire supply chain, not least users themselves.

Conclusion: is it right to limit innovation?

It is never right to limit innovation, and moreover, blocking the progress of a technology is never the purpose of well-written norms and laws.

Norms live in the culture and history in which they are written, follow its sensibilities, and simply direct technologies toward the most felt needs of the moment, limiting the dangers of creating harm to society.

In fact, it is not forbidden – to take one example – to research new therapies and medicines through genetic technologies; on the other hand, it is forbidden to clone a human being.

Artificial Intelligence will be no exception: as evidenced by the proposals put on the table in recent years by the European Union and the United States, in the near future this technology will be subject to rules that will lead manufacturers to take the necessary responsibility for the products and services they put on the market.

The intent driving these measures is to preserve individual and collective freedoms. Innovating without putting people's freedoms at risk will only be possible through third-party reality checks, informed by the contemporary context, and through contributions from different areas of expertise – from pure science to law, via data science and the humanities.

We at Neosperience are explorers of innovation. What has guided us in the development of our Artificial Intelligence algorithms, from the analysis of user behavior to the simplification of business processes, is the desire to bring people and organizations into a more human and empathetic digital environment.