Since the release of ChatGPT on November 30, 2022, there has been a steady increase in Google searches about artificial intelligence (AI). The world has taken notice of how powerful AI has become, its benefits, and how scammers are exploiting the technology.
With growing interest in this emerging technology, it is crucial to examine the AI law that applies to the creation and use of AI platforms. In this article, we explore the realm of AI law and shed light on what lies ahead as this transformative technology continues to evolve.
What AI Law Exists?
A few years ago, governments began taking steps to regulate AI. In the European Union, the AI Act was first proposed in April 2021 and was recently approved by a large majority vote. The EU AI Act is the first AI law enacted by any government, and the only one that exists at the time of writing.
Before the law was proposed, the European Commission identified six problems that may arise from the development of AI systems. The purpose of the AI Act is to address these six concerns:
- Risk to the safety and security of EU citizens,
- Increased risk involving violations of fundamental rights,
- Authorities' lack of power or procedures to monitor AI development,
- Legal complexity and uncertainty may deter businesses from developing AI,
- Lack of trust in AI would reduce EU’s competitiveness in the field, and
- A fragmented AI market.
To ensure the AI Act cannot be circumvented, it applies to providers located outside the EU that place AI systems on the EU market or put AI systems into service within the EU, as well as to providers and users located outside the EU whose AI output is used in the EU.
Current US AI Laws
The United States has yet to propose a draft of a federal AI law that would apply to all states.
However, the White House Office of Science and Technology Policy (“OSTP”) published the Blueprint for an AI Bill of Rights as a framework for creating AI-related legislation.
This Blueprint provides a defined test to determine the type of systems to which the framework applies. It applies to automated systems that have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.
For those AI systems, the Blueprint identifies five principles that the OSTP believes should guide their development and use. These principles would shape future AI law, promoting the following goals:
- Ensure the safety and effectiveness of AI systems.
- Promote non-discriminatory AI systems.
- Safeguard data privacy in AI applications.
- Encourage transparency with appropriate notification mechanisms.
- Establish human alternatives and fallback options to using AI systems.
Promoting Safe and Effective Systems
The Blueprint reflects concerns that AI systems can be dangerous to the public. To mitigate those dangers, the OSTP recommends that AI systems undergo pre-deployment testing to identify risks. It also proposes that systems be continually monitored and independently evaluated to confirm they are safe, and that systems be designed to proactively protect the public from foreseeable harm.
Protection Against Discrimination
The issue of discrimination by AI algorithms is also a concern. The aim is to foster the development of fair and unbiased AI systems that avoid discrimination based on factors such as race, gender, age, disability, or other legally protected classes.
In an effort to prevent future discrimination by AI, the OSTP suggests conducting proactive equity assessments. To ensure the assessments are conducted in a fair and transparent manner, the proposal emphasizes the importance of independent evaluations and making the assessment results publicly available.
Data Privacy Protection
Data privacy has been a developing policy area over the past few decades, as companies have collected and sold significant amounts of consumer data.
To safeguard privacy, the OSTP proposes that developers of AI systems should be mandated to obtain user consent before collecting personal data. Moreover, the information collected should be strictly limited to what is necessary for each specific situation. The consent process should be concise and easily comprehensible for users.
Addressing the concern of surveillance by AI systems, the OSTP suggests enhancing oversight measures. As an initial step, they recommend conducting a pre-deployment assessment to identify potential harms and protect civil liberties, including privacy rights. Additionally, the OSTP emphasizes that AI surveillance should not be employed in domains such as education, work, housing, and other contexts where there is a risk of limited rights, opportunities, and access.
Adequate Notice and Explanation of AI System
Users of AI systems should know when automated systems are being used and how those systems affect them. The OSTP recommends requiring developers of AI to provide documentation that clearly describes the system's function, the role of automation, and notice of its use. Developers and deployers of AI systems should provide clear explanations of how their systems work and why they make certain decisions.
The public should also be informed of significant changes and have access to explanations of how outcomes are determined, ensuring those explanations are technically valid, meaningful, and tailored to the level of risk. If an automated system affects an outcome involving an individual, that person should be able to understand why it happened, in terms that are understandable. Public reporting, including assessments of notice and explanations, should be prioritized whenever feasible.
Human Alternative to AI
The OSTP also suggests that the public should have the option to opt out of automated systems where appropriate. Opt-outs should be based on reasonable expectations, ensuring broad accessibility and protecting people from especially harmful impacts.
If an automated system fails, makes a mistake, or if an individual wants to challenge a decision that impacts that person, the public should have the right to seek human assistance and get a fair solution. This assistance should be easy to access, fair for everyone, effective, and not too burdensome.
When AI is used in sensitive areas like criminal justice, employment, education, and health, there should be extra safeguards put in place. These safeguards include ensuring there is proper oversight to monitor its usage and incorporating human judgment for significant decisions. Whenever feasible, it is important to share information with the public about these human processes and their effectiveness.
State AI Laws
Despite the rapid development of AI, there is currently no state-level legislation or regulation that directly governs the development of automated systems. However, some states have enacted laws that touch on AI-related matters, even though they were not specifically written as AI laws.
For instance, California enacted Business & Professions Code § 17941, which aims to prevent misleading practices and enhance transparency. Section 17941(a) prohibits the use of bots to deceive individuals in California by pretending to be human in order to incentivize sales or purchases, or to influence votes. Subsection (b) requires websites and digital platforms with over 10 million monthly visitors in the United States to disclose their use of bots for communication.
It’s worth noting that, currently, no state has implemented an AI law as robust as the EU AI Act, which provides a more extensive framework specifically created to govern AI technology.
Future of AI Law
As AI technology progresses, the Blueprint highlights crucial matters that both state and federal legislatures are likely to consider when proposing or revising any AI law. While safety and the protection of fundamental rights align with the concerns addressed by the EU AI Act, the Blueprint does not directly cover all the issues raised in that legislation.
It is essential to understand that the Blueprint is only a guide. It is not legally binding and does not represent official U.S. government policy. Rather, it serves as a valuable resource for lawmakers to draw upon when crafting AI-related legislation.
The development of AI law is an ongoing process, but it may not progress as swiftly as AI technology itself.