Artificial Intelligence (AI) Act: What do we know about the obligations of banks so far?
AI systems are revolutionizing societies and the economy; therefore, they will soon become subject to a comprehensive regulation covering the entire European Union.
Even though the European law regulating AI, i.e. the Artificial Intelligence Act (AI Act), is in the final stage of interinstitutional negotiations and the final text is not yet available, it is already possible to describe with a high degree of certainty the crucial assumptions and the key obligations that will apply to both AI system providers and their users. Although a final compromise and the entry into force of the AI Act are still pending, political announcements already indicate that the main assumptions regarding roles and responsibilities, as well as the classification of AI systems relevant to the banking sector, will not differ from those presented in previous legislative proposals.
This is of particular importance for the banking sector, which is dynamically introducing and increasingly using AI systems to improve its competitiveness.[1] It is worth noting how significant the potential of AI systems is in areas such as creditworthiness and risk assessment, optimization of processes and operations, customer behavior analysis, and the ability to serve clients in an automated manner.
Due to the critical role of the banking sector and the risks that the use of AI systems in banking poses to fundamental rights, the sector will be subject to increased attention from regulators worldwide.
Furthermore, and even more importantly, a thorough understanding of the nature of the systems banks wish to use, their design, and their possible applications will be crucial, because many potential uses in banking may fall anywhere between systems whose risk is predefined as unacceptable and systems whose risk is high but acceptable under certain conditions. For example, automated credit risk assessment may be permitted or prohibited depending on the details of the system’s design and implementation.[2]
Therefore, banks will need combined legal and technical expertise that also takes the ethical dimension of artificial intelligence applications into account, so that the possible impact of AI systems on consumers can be realistically assessed. This will be essential to meet the legal challenges created by the AI Act.
The AI Act will impose a differentiated set of obligations on both the providers and the users of AI systems, depending on the level of risk a particular system presents. The risk level is predefined by law through categorizations based on the specific applications of AI systems.
The AI Act divides AI systems into four categories according to the risk their use involves: unacceptable risk, high risk, limited risk, and low risk. The use of systems posing unacceptable risk is prohibited. The development and use of high-risk AI systems are regulated, and this category will cover many systems used in the banking sector, as they qualify as systems used to provide socially essential services, such as access to credit. It is also possible that a large part of banking systems not directly related to consumer service will be considered high-risk because they form part of critical infrastructure important for the security and financial stability of the state.
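For illustration only, here is a minimal sketch of how a bank might catalog its AI systems against these four risk tiers. The tier names follow the AI Act's categories, but the example systems, the inventory mapping, and the helper function are hypothetical assumptions rather than classifications taken from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers distinguished by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # use is prohibited outright
    HIGH = "high"                  # permitted only if the Act's requirements are met
    LIMITED = "limited"            # lighter transparency obligations
    LOW = "low"                    # minimal obligations

# Hypothetical inventory: a bank's provisional mapping of AI use cases
# to risk tiers (illustrative assumptions, not legal classifications).
AI_SYSTEM_INVENTORY = {
    "creditworthiness_scoring": RiskTier.HIGH,     # access to essential services
    "fraud_detection": RiskTier.HIGH,              # possibly part of critical infrastructure
    "customer_service_chatbot": RiskTier.LIMITED,  # must disclose that it is an AI
    "internal_document_search": RiskTier.LOW,
}

def is_deployment_permitted(system_name: str) -> bool:
    """Only systems below the unacceptable-risk tier may be deployed at all."""
    return AI_SYSTEM_INVENTORY[system_name] is not RiskTier.UNACCEPTABLE

for name, tier in AI_SYSTEM_INVENTORY.items():
    print(f"{name}: {tier.value} risk, deployable={is_deployment_permitted(name)}")
```

Keeping such an inventory up to date would be a natural first step toward the risk and compliance assessments discussed below.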
The use of high-risk AI systems will be acceptable provided that several requirements imposed by the AI Act are met. The key obligations the AI Act will impose on the banking sector are:
- Risk management: Banks will have to assess and manage the risks associated with the use of AI systems. This includes identifying, assessing, and minimizing potential harm or errors resulting from the use of AI.
- Transparency and explainability: Banks will have to ensure that AI systems are transparent and that decisions made by these systems can be explained to customers and regulators.
- Data protection and privacy: AI systems used in banking will have to comply with strict data protection regulations, in line with the General Data Protection Regulation (GDPR) and additional requirements set out in the AI Act.
- Supervision and reporting: Banks will be required to monitor and report on the use of AI systems, including any incidents or issues related to these systems (a minimal sketch of such an incident log follows this list).
- Risk and compliance assessment: Banks will need to regularly carry out risk and compliance assessments of their AI systems to ensure they continue to meet regulatory requirements.
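Purely as an illustration of the supervision and reporting point above, the following sketch shows a minimal incident log a bank's compliance function might keep for its deployed AI systems. The record fields, the severity scale, and the assumption that only "serious" incidents trigger external reporting are all hypothetical; the AI Act's final text will define the actual reporting triggers and timelines:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    """One monitoring entry for a deployed AI system (illustrative fields)."""
    system_name: str
    description: str
    severity: str  # assumed internal scale: "minor" | "major" | "serious"
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class IncidentLog:
    """A minimal append-only log supporting internal review and regulator reports."""

    def __init__(self) -> None:
        self._incidents: list[AIIncident] = []

    def record(self, incident: AIIncident) -> None:
        self._incidents.append(incident)

    def reportable(self) -> list[AIIncident]:
        # Assumption for illustration: only "serious" incidents are reported externally.
        return [i for i in self._incidents if i.severity == "serious"]

log = IncidentLog()
log.record(AIIncident("creditworthiness_scoring",
                      "Rejection rate deviated sharply from the validated baseline",
                      "serious"))
print(len(log.reportable()))  # -> 1
```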
Meeting these requirements will involve designing and implementing verification processes for AI system vendors that follow the criteria set out in the AI Act. At the same time, it will be essential to assess not only the technical risk posed by an AI system but also its impact from the perspective of the risks posed to fundamental rights, which will require a combination of specialized legal and technical expertise.[3]
Even though the AI Act will probably apply in full only two years after its adoption at the EU level, the process of ensuring compliance will be demanding and time-consuming, so it is worth starting to prepare now.
[1] Omar H. Fares, Irfan Butt and Seung Hwan Mark Lee, ‘Utilization of Artificial Intelligence in the Banking Sector: A Systematic Literature Review’ (2022) Journal of Financial Services Marketing.
[2] European Parliament, ‘Texts Adopted – Artificial Intelligence Act – Wednesday, 14 June 2023’, https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html, accessed 23 November 2023.
[3] Alessandro Mantelero, ‘Human Rights Impact Assessment and AI’ in Beyond Data (Information Technology and Law Series, vol 36, TMC Asser Press 2022), https://doi.org/10.1007/978-94-6265-531-7_2, accessed 23 November 2023.