Recent developments around the globe

Written by Sam Williamson (February 2024)

Introduction

The development of artificial intelligence (AI) systems over the last few years has brought great benefits and significant risks to all sectors, including financial services, and both the opportunities and the potential dangers look set to increase in the near future. To walk the line between harnessing the opportunities of AI and protecting the public from the dangers posed by its unchecked advancement, many governments globally have begun looking at ways to regulate the development and use of AI systems.

Unsurprisingly, the European Union (EU) has led the way in developing a broad-based legislative approach to regulating AI systems through its Artificial Intelligence Act (the AI Act), while the UK and USA have pursued a regulator-led approach that favours harnessing the potential opportunities of AI while imposing a more principles-based framework for its use.

This article will summarise recent AI regulatory developments in several jurisdictions and assess how these may impact the use of AI systems in the financial services industry.

European Artificial Intelligence Act

On 2 February 2024, the EU’s lawmaking bodies approved the final text of the world-leading AI Act. Once formally adopted, the regime aims to apply a risk-based approach to “ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact”. [1]

At a high level, the AI Act will prohibit certain applications of AI that are deemed to pose an unacceptable risk and regulate other AI activities that fall into less serious categories through less onerous requirements.[2] Subject to changes in the final lawmaking process, the AI Act sorts AI systems into four categories of risk: unacceptable risk (prohibited outright), high risk (permitted subject to conformity assessments and ongoing compliance obligations), limited risk (permitted subject to transparency obligations), and minimal risk (largely unregulated).[3]

The AI Act will apply to entities operating both inside and outside the EU, as long as the relevant AI system operates within the EU market or its use affects people located in the EU. While the regime primarily places obligations on ‘providers’ of AI systems (e.g., the developers), it will also impact ‘deployers’ of AI systems (e.g., a website that uses a tool purchased from a third-party developer).

Non-compliance with the AI Act can lead to substantial fines ranging from €7.5 million or 1.5% of global annual turnover (whichever is higher) to €35 million or 7% of global annual turnover (whichever is higher), depending on the infringement and the size of the company.

Impact on financial institutions

As the AI Act is a broad-based, horizontal piece of legislation intended to regulate the developing use of AI across all industries, AI used in the financial industry is not specifically delineated as an area for regulation within the AI Act. However, several areas that are specifically designated as high-risk or limited-risk within the AI Act will have an impact on financial institutions.

Principally, high-risk AI systems specifically include systems used for the “creditworthiness evaluation of natural persons, and risk assessment and pricing in relation to life and health insurance.” These areas are identified as high-risk because they intersect with the protection of fundamental individual rights, specifically access to capital and insurance protection. Where a financial institution operating in the EU wishes to employ an AI system to assess an individual’s creditworthiness, it will have to ensure that the system provider has carried out the appropriate conformity assessment procedure and has complied with all other obligations under the AI Act before putting the system into use. The same applies to any insurance agency wishing to implement an AI actuarial system to assess an individual’s premiums for life insurance.

These protections aim to ensure that an AI system does not unfairly or arbitrarily discriminate against individuals, cutting off their access to fundamental services. It should also be noted that advances in the use of AI can often benefit customers as well as institutions: for example, using AI to analyse non-traditional datasets when determining creditworthiness can extend credit to individuals who would otherwise not have access to it. This is why the AI Act seeks to regulate the use of high-risk AI systems while still providing a space for their legitimate use.

Other high-risk AI systems utilised by financial institutions may also be caught by the AI Act, including biometric identification systems used to protect access to accounts (e.g., fingerprint, facial identification, and voice PIN software). Where these systems are provided by a third party, the financial institution may have an obligation to ensure that the provider satisfies the requirements and accreditations required under the AI Act before deploying the AI system in the EU.

Finally, financial institutions may already use (or may adopt in the future) limited-risk AI systems that will be subject to transparency obligations under the AI Act, such as chatbots providing customer services through websites or AI call centres. Financial institutions operating in the EU will need to ensure that these systems comply with the AI Act’s requirements regarding ‘opt-out’ options for consumers who do not wish to continue engaging with AI processes.

Regarding enforcement and oversight, while the AI Act establishes a European Artificial Intelligence Office to facilitate the implementation and harmonisation of the AI Act with existing legislation, it places ultimate oversight for compliance with the incumbent prudential supervisory authorities: for European financial institutions, the European Central Bank.

The AI Act still requires the formal approval of the European Parliament, which is expected to vote on it in April 2024.

UK and USA approach

In contrast to the EU’s legislative approach, the UK and the USA have, for the time being, taken a regulator-led, principles-based approach to the regulation of AI.

In March 2023, the UK Government issued a principles-based, pro-innovation white paper [4] on AI which acknowledged the risks posed by developments in AI systems but strongly championed AI’s potential for progress and development. The UK framework is based on five key principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Regarding the framework’s application to financial institutions, the white paper stated an expectation that relevant regulators would collaboratively produce joint guidance clarifying the regulatory requirements relevant to AI in financial services, as well as how to satisfy those requirements.

The white paper was supported by the ‘Artificial Intelligence and Machine Learning’ discussion paper published by the Bank of England (BoE) (including the Prudential Regulation Authority) and the Financial Conduct Authority (FCA) in October 2022. The discussion paper aimed to “further their understanding and to deepen dialogue on how [AI] may affect their respective objectives for the prudential and conduct supervision of financial firms.” The BoE and FCA published the results of industry feedback on the discussion paper in October 2023 [5]. Among other points, feedback from financial services industry participants recommended that regulators respond to evolving technologies and AI capabilities “by designing and maintaining ‘live’ regulatory guidance ie periodically updated guidance and examples of best practice”.

Interestingly, in February 2024 the UK Government appeared to soften its position slightly: while maintaining its pro-innovation stance, it conceded that “the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured.” [6] It has also signalled that it may pursue targeted regulation of organisations developing highly capable general-purpose AI systems to ensure that they are accountable for making these technologies sufficiently safe.

In the USA, the White House issued an ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’ [7] in October 2023. With a focus on protecting consumers from the risks posed by AI, the Executive Order states that “the Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI. Such protections are especially important in critical fields like […] financial services”.

Regarding the regulation of financial institutions specifically, the Executive Order instructs the US Treasury to issue a public report by March 2024 on best practices for financial institutions in managing AI-specific cybersecurity risks. It also encourages independent regulatory agencies to use their authority to make rules or issue guidance clarifying financial institutions’ responsibility to conduct due diligence on, and monitor, any third-party AI services they use, both to protect consumers from fraud, discrimination, and threats to privacy, and to address other risks that may arise from the use of AI, including risks to financial stability.

Recent developments in Australia

While Australia currently has no legislation specifically regulating AI, on 17 January 2024 the Australian Government announced, following consultation, that it will consider mandatory safeguards for those who develop or deploy AI systems in legitimate, high-risk settings.[8]

In addition, the Australian Government announced that it will take immediate action through:

  • working with industry to develop a voluntary AI Safety Standard, implementing risk-based guardrails for industry;
  • working with industry to develop options for voluntary labelling and watermarking of AI-generated materials; and
  • establishing an expert advisory body to support the development of options for further AI guardrails.

Separately, in January 2024 the Chair of the Australian Securities and Investments Commission (ASIC), Joe Longo, spoke specifically about the impact of AI on the regulation of the financial industry in Australia.[9] In ASIC’s view, the risk of firms engaging in misleading or deceptive business practices, whether deliberate or accidental, is exacerbated by the availability of vast consumer datasets and the use of tools such as AI and machine learning, which allow for quick iteration and micro-targeting.

ASIC considers that existing regulatory obligations around good governance and the provision of financial services are slow to change with new technology, which means that all participants in the financial sector have a duty to balance innovation with the responsible, safe, and ethical use of emerging technologies. Bridging the governance gap means ASIC strengthening its regulatory framework “where it’s good”, and “shoring it up” where it needs further development, until further specific regulation or legislation is enacted.

Impact on New Zealand

New Zealand businesses and lawmakers will be grappling with the same AI issues and opportunities as our trading partners in this fast-changing area. New Zealand businesses (including financial institutions) that trade in the EU will soon be required to comply with the AI Act for their operations undertaken on European soil. Equally, New Zealand-based businesses that use third-party AI systems supplied by EU-based providers will be purchasing compliant systems by default. Domestically, there is currently no specific AI statute or regulation for the private sector; however, businesses’ use of AI will continue to be subject to the protections afforded under the general law of New Zealand, including the Privacy Act, the Human Rights Act, the Fair Trading Act, the Harmful Digital Communications Act, the Crimes Act, and the Financial Markets Conduct Act.

The Office of the Privacy Commissioner has been especially active: in September 2023 it produced guidance on how the information privacy principles contained in the Privacy Act apply to the collection, protection, and sharing of personal information in the training and use of AI systems by New Zealand businesses.[10] Relatedly, in December 2023 the EU determined that New Zealand is one of only a small number of countries with an ‘adequate’ level of protection for personal data transferred from the EU. An ‘adequacy’ decision means that while New Zealand’s privacy legislation differs from the EU’s, its outcomes are similar and can be trusted. The Privacy Commissioner said that “Adequacy means that New Zealand is a good place for the world to do business; we have strong privacy protections in our legislation”.[11]

Additionally, the new Government has indicated that it is developing an AI framework to support responsible and trustworthy AI innovation in government, although details of this policy are currently limited.[12]

Alongside legislative developments, there are other initiatives operating within New Zealand that aim to guide the development and use of responsible AI. For example, the AI Forum is an NGO formed in 2017 to promote the economic opportunities raised by AI, support great applications of AI and emerging New Zealand AI firms, and work to ensure that society can adapt to the rapid and far-reaching changes that AI technology will bring.[13] More recently, the AI Forum launched an AI Working Group which aims to provide thought leadership on the responsible governance of AI in New Zealand and develop a curated set of frameworks, tools and approaches that meet the needs of New Zealand organisations.[14]

As the regulation of AI evolves throughout the rest of the world, it will be interesting to see how the public and private sectors work together to harness the opportunities and mitigate the risks associated with AI, and whether New Zealand lawmakers will follow the EU’s legislative approach, the US/UK regulator-led approach, or Australia’s guardrails approach.


1. https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

2. https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

3. https://artificialintelligenceact.eu/wp-content/uploads/2024/01/AI-Act-Overview_24-01-2024.pdf

4. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

5. https://www.bankofengland.co.uk/prudential-regulation/publication/2023/october/artificial-intelligence-and-machine-learning

6. https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#executive-summary

7. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

8. https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/safe-and-responsible-ai-in-australia-governments-interim-response.pdf

9. https://asic.gov.au/about-asic/news-centre/speeches/we-re-not-there-yet-current-regulation-around-ai-may-not-be-sufficient/

10. https://www.privacy.org.nz/assets/New-order/Resources-/Publications/Guidance-resources/AI-Guidance-Resources-/AI-and-the-Information-Privacy-Principles.pdf

11. https://privacy.org.nz/publications/statements-media-releases/new-zealand-is-adequate-and-we-couldnt-be-happier-about-itnew-news-page/

12. https://www.nzherald.co.nz/business/survey-finds-most-kiwis-worried-about-malicious-ai-technology-minister-judith-collins-responds/DJWKCXXSF5CCHPPNE47BTCQDJU/

13. https://aiforum.org.nz/about/

14. https://aiforum.org.nz/our-work/working-groups/ai-governance-working-group/
