New EU AI Act 2024: Impact On Arab Banks
By Dr Soha Maad
Introduction
The European Union (EU) recently approved the Artificial Intelligence Act (EU AI Act) to regulate the use of artificial intelligence (AI), make it more responsible, and curb its misuse, particularly in disinformation and misinformation. The new EU AI Act classifies AI-based systems by risk level and will be enforced through a range of measures. It will affect technology firms and institutions both inside and outside the EU; banks, too, will be concerned by the new Act and will have to comply with its mandates.
This article overviews the new EU AI Act: the main legal issues it addresses, its impact inside and outside the EU, and the measures to enforce it. The article concludes with a roadmap for Arab banks and institutions to prepare for the new Act and its various implications.
The EU AI Act Risk-Based Framework
As part of its digital strategy, the EU is regulating artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.
The European Commission proposes a regulatory framework for AI. AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels determine the level of regulation.
The EU Parliament's priority is to make sure that AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.
The EU Parliament is also establishing a technology-neutral, uniform definition of AI that could be applied to future AI systems.
The new EU AI Act establishes obligations for providers and users depending on the level of risk posed by the artificial intelligence. While many AI systems pose minimal risk, each system's risk level needs to be assessed.
The EU Artificial Intelligence Act (EU AI Act) aims to govern the use of artificial intelligence within the EU. The new Act adopts a risk-based approach to regulating AI products and services: rather than focusing solely on the technology, it emphasizes the regulation of the use of AI. The objectives of the new EU AI Act include safeguarding democracy, upholding the rule of law, protecting fundamental rights (such as freedom of speech), and promoting investment and innovation.
The severity of the EU AI Act's regulations depends on the risk level associated with specific AI applications.
Under the new EU AI Act, AI systems are classified into four risk categories:
- Unacceptable risk AI systems
- High risk AI systems
- Limited risk AI systems
- Low and minimal risk AI systems
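Purely as an illustration of how this tiered framework might map onto a system inventory, the sketch below hard-codes a few example use cases against the four tiers. The category names, the mappings, and the annotations are assumptions for this sketch, not an official taxonomy or legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # assessed before and after market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping of example use cases to tiers (assumed, not authoritative)
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]

print(classify("credit_scoring").value)  # high
```

In practice the tier is determined by the Act's annexes and legal analysis, not a lookup table; the point of the sketch is only that every system must land in exactly one tier, and the tier drives the obligations.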
Examples Of Unacceptable-Risk AI Systems
Unacceptable-risk AI systems are those considered a threat to people, and they are banned under the new EU AI Act. These systems include:
- Systems for cognitive behavioural manipulation of people or specific vulnerable groups, for example voice-activated toys that encourage dangerous behaviour in children.
- Social scoring systems: These systems classify people based on behaviour, socio-economic status or personal characteristics.
- Real-time and remote biometric identification systems, such as facial recognition (with narrow, tightly regulated exceptions for law enforcement).
Examples Of High-Risk AI Systems
Examples of high-risk AI systems as defined by the EU AI Act include:
- Medical Devices and Diagnosis: AI systems used for medical diagnosis, treatment planning, or patient monitoring fall into the high-risk category. These systems directly impact patients’ health and well-being.
- Transportation and Autonomous Vehicles: Self-driving cars and other autonomous vehicles rely heavily on AI. Their operation involves safety-critical decisions, making them high-risk applications.
- Critical Infrastructure and Energy Systems: AI systems controlling power grids, water supply networks, and other critical infrastructure are high-risk. Failures in these systems can have severe consequences.
- Biometric Identification and Surveillance: Facial recognition systems, especially when used for law enforcement or surveillance, pose significant risks to privacy and civil liberties.
- Educational AI: AI systems used in educational settings, such as automated grading or student performance prediction, can impact students’ futures and must be carefully regulated.
- Recruitment and Employment Decisions: AI-driven hiring tools, if biased or discriminatory, can perpetuate inequalities. Ensuring fairness and transparency is crucial.
- Financial Services and Credit Scoring: AI algorithms used for credit scoring, loan approvals, or investment decisions fall under high-risk categories due to their impact on individuals’ financial lives.
- Criminal Justice and Predictive Policing: AI systems used by law enforcement agencies for predictive policing or risk assessment can affect people’s rights and freedoms.
The new EU AI Act aims to strike a balance between innovation and protection, ensuring that high-risk AI systems are developed and deployed responsibly. All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.
Examples Of Low-Risk AI Systems
Examples of low-risk AI systems as defined by the EU AI Act include:
- Content Recommendation Algorithms: These algorithms suggest articles, videos, or products based on user preferences. While they impact user experiences, they pose minimal risks.
- Spam Filters: Email spam filters use AI to identify and filter out unwanted messages. Their impact is relatively low.
- Language Translation Tools: AI-powered language translation services help users understand content in different languages. Errors may occur, but the risks are generally low.
- Virtual Assistants (Chatbots): Chatbots that assist with answering queries, providing information, and engaging in conversations fall into the low-risk category.
- Basic Image Recognition: Simple image recognition tools identify objects, animals, or scenes in images. These tools do not have significant consequences and fall into the low-risk category.
- Spell Checkers and Grammar Correction: AI-driven tools that correct spelling and grammar errors in text fall into the low-risk category.
- Basic Personalization Algorithms: Websites and apps use AI to personalize content based on user behavior (e.g., showing relevant ads). These systems have minimal impact and fall into the low-risk category.
Low-risk AI systems still require transparency and ethical considerations, but their potential harm is relatively limited.
Enforcement Of The New EU AI Act
The European Union (EU) has taken a significant step by introducing the Artificial Intelligence Act (AI Act), the world's first comprehensive law specifically addressing AI. The EU AI Act will be enforced through various measures:
- Binding Rules on Transparency and Ethics: The AI Act imposes legally binding rules on tech companies, ensuring transparency and ethical practices. Companies must notify users when they are interacting with chatbots, biometric categorization, or emotion-recognition systems. They must also label deepfakes and AI-generated content so users can distinguish between real and AI-generated media, and they must conduct impact assessments. Organizations offering essential services (such as insurance and banking) must assess how their AI systems affect fundamental rights.
- Obligations for Foundation Models: The new EU AI Act introduces specific requirements for foundation models, the powerful general-purpose AI models that can be adapted to many different tasks, such as the models underlying ChatGPT.
- EU as the AI Police: The EU aims to become the world’s go-to tech regulator by enforcing binding rules on AI. The AI Act introduces governance mechanisms to regulate an influential sector globally.
The new EU AI Act is expected to take effect two years after final approval by European lawmakers in early 2024. Violations could result in fines of up to 35 million euros or 7% of a company's global annual revenue, whichever is higher.
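A quick way to see how this two-part penalty ceiling scales with company size is to compute it for a couple of hypothetical revenue figures (the revenues below are invented for illustration; only the 35 million euro and 7% figures come from the Act's headline cap):

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Headline cap: the higher of EUR 35 million or 7% of global annual revenue."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# Hypothetical companies
print(max_fine_eur(100_000_000))    # 35000000.0  -> flat cap applies
print(max_fine_eur(1_000_000_000))  # 70000000.0  -> 7% exceeds the flat cap
```

The crossover sits at 500 million euros of revenue: below that, the flat 35 million euro figure binds; above it, the 7% share does.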
Global Impact Beyond The EU
The new EU AI Act brings essential rules and enforcement mechanisms to the AI domain, promoting responsible practices and protecting fundamental rights. While the Act directly applies in EU countries, its influence extends beyond Europe: as a world-leading set of rules, it may serve as a reference point for other regions and countries drafting their own AI regulations. The Act represents a significant step toward responsible AI governance, emphasizing risk assessment, transparency, and the protection of fundamental rights, and its impact will likely resonate worldwide as other jurisdictions consider their own AI frameworks.
Impact Of The EU AI Act On Banks
The new EU Artificial Intelligence Act (EU AI Act) will significantly impact various economic sectors, including banking and finance. The Act affects banks and financial institutions in several ways:
- Financial-sector firms are especially likely to use classes of software regulated by the new EU AI Act, such as biometric identification tools and tools for the credit assessment of individuals.
- The AI Act imposes obligations on banking and financial software providers, importers, distributors, and users.
- Banks and financial institutions should clearly identify which of their artificial intelligence systems are subject to the new EU AI Act's regulations.
- Banks and financial institutions must develop procedures, systems, and controls to ensure compliance with the new EU AI Act, which applies within the EU and beyond.
- Banks and financial institutions need to prepare for and adapt to the evolving landscape of AI regulation.
- The new EU AI Act represents a significant step toward responsible AI governance in banks and financial institutions, emphasizing risk assessment, transparency, and the protection of fundamental rights.
Road Ahead For Arab Banks
The EU's regulation on artificial intelligence (EU AI Act) aims to establish a harmonized framework that balances the benefits and risks of AI systems. Arab banks and financial institutions with exposure to the EU market should comply with the new Act. Key steps for Arab banks and financial institutions to undertake in order to comply with the new EU AI Act include:
Step #1. Conducting a risk-based analysis: The AI Act categorizes AI systems into four risk levels: minimal or no risk, limited risk, high risk, and unacceptable risk. Unacceptable-risk AI systems are strictly prohibited, with obligations tapering down as the risk level decreases.
Step #2. Understanding applicability: Financial institutions both within and outside the EU should understand the potential effects of the new EU AI Act. It is directly applicable across EU member states and applies to AI systems placed on the EU market or whose output is used within the EU, regardless of where the provider is established.
Step #3. Navigating the regulatory landscape: Arab banks should start assessing their AI systems in order to navigate the evolving global regulatory landscape effectively. This involves identifying AI-related activities, assessing risk profiles, and adopting suitable frameworks.
Step #4. Avoiding fines and penalties: Financial institutions could face fines of up to €35 million, or 7% of global annual revenue, for non-compliance with the new AI Act. Arab banks operating in global markets should take steps now to avoid penalties for non-compliance.
Step #5. Staying informed: The AI Act's implementing rules and technical standards are still evolving, so staying informed and proactive will be crucial for compliance.
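The first two steps above, inventorying AI systems and understanding which fall in scope, can be sketched as a minimal triage routine over a bank's AI inventory. The system names, tier labels, and recommended actions below are hypothetical placeholders, a sketch of the idea rather than a compliance tool:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    in_eu_scope: bool  # placed on the EU market, or output used within the EU
    risk_tier: str     # "unacceptable" | "high" | "limited" | "minimal"

def triage(systems: list[AISystem]) -> dict[str, str]:
    """Map each system to an illustrative next action based on scope and tier."""
    actions = {}
    for s in systems:
        if not s.in_eu_scope:
            actions[s.name] = "monitor (outside EU scope)"
        elif s.risk_tier == "unacceptable":
            actions[s.name] = "discontinue (prohibited)"
        elif s.risk_tier == "high":
            actions[s.name] = "conformity assessment + ongoing controls"
        else:
            actions[s.name] = "transparency obligations"
    return actions

# Hypothetical inventory for a bank with some EU-facing services
inventory = [
    AISystem("credit-scoring-model", True, "high"),
    AISystem("internal-spam-filter", False, "minimal"),
]
print(triage(inventory))
```

The design point the sketch illustrates is that scope comes before tier: a system outside the Act's territorial reach only needs monitoring, while an in-scope high-risk system (such as credit scoring) carries the heaviest ongoing obligations.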