
Executive Order on the Use of AI Includes Impact to Consumer Finance and Lending

By Kathryn Rock, Bassel Haidar

In October 2022, the White House released a framework of five core principles as a “Blueprint for an AI Bill of Rights” to guide the design, use, and deployment of AI and automated decision systems. The blueprint stated that the framework should apply to housing, credit, employment, and financial services. On October 30, 2023, the White House issued an Executive Order (EO) establishing requirements for the development and use of AI and automated decision systems. While the immediate impact falls on federal agencies, the EO has long-term implications for private sector adoption and use of AI.


Highlights

The EO aims to ensure that automated decision systems using AI or machine learning embed data security, equity, and fairness considerations into the development and deployment process. The EO focuses on:

1. Standards for AI Safety and Security — The EO requires the National Institute of Standards and Technology (NIST) to establish standards for adversarial testing (“red-team testing”) of foundation models to identify flaws and vulnerabilities that pose a serious risk to national security, national economic security, or national public health and safety. This includes “dual-use” foundation models, meaning AI systems that, in addition to posing the risks above, have applications in both civilian and military or other sensitive sectors. The EO relies on the Defense Production Act to impose reporting requirements on developers of dual-use foundation models regarding planned activities related to their development and production.

2. Advancing Equity and Civil Rights — In addition to enforcing existing authorities to protect the rights and safety of Americans, the EO provides guidance to landlords, federal benefits programs, and federal contractors to prevent AI from being used to exacerbate discrimination. The White House will coordinate with federal agencies and the Department of Justice to address algorithmic discrimination in housing, healthcare, lending, and employment. Among other measures, the EO:

  • Directs the Secretary of Labor to publish guidance for federal contractors regarding nondiscrimination in hiring
  • Encourages the director of the Federal Housing Finance Agency and the director of the Consumer Financial Protection Bureau (CFPB) to require regulated entities to evaluate underwriting and automated valuation models for bias
  • Encourages the Secretary of Housing and Urban Development and the director of the CFPB to issue guidance on how the Fair Housing Act and the Equal Credit Opportunity Act apply to tenant screening systems and to the advertising of housing, credit, and other real estate-related transactions through digital platforms

3. Opportunities for Collaboration and Partnership — The EO emphasizes the need for collaboration and competition in the AI space to promote innovation.


What Does This Mean For Your Organization

Companies that currently use or intend to use AI- or machine learning-based automated decision systems in regulated industries, or that contract with the federal government, should pay close attention to the range of standards developed under the new EO. Robust model risk management and documented controls for monitoring AI systems are critical to meeting heightened expectations of transparency, protection of consumer data, and equitable use of AI in decisions surrounding lending, healthcare services, and employment.

"Organizations that currently use or intend to use automated decision systems in regulated industries or who contract with the federal government should pay close attention to the variety of standards developed under the new Executive Order. It will be important for them to have robust model risk management controls monitoring AI systems to meet higher expectations of transparency, consumer protection, and equality."

— Kathryn Rock, Partner, Financial Services

What Should Your Organization Be Doing

Firms should take an inventory of their current AI capabilities and identify where automated decision systems are deployed throughout the organization. Guidehouse can work with your team to evaluate current AI systems and processes, identify gaps or weaknesses, and develop a roadmap for enhancing AI capabilities. A key consideration for managing AI risk is developing an AI governance framework to ensure the responsible use and deployment of AI systems organization-wide.


How Guidehouse Can Help

As firms navigate the implications of the new EO on AI, it is essential to have a partner that can provide expert guidance and support. Guidehouse has a deep understanding of regulated industries and extensive experience in responsible AI.

Guidehouse can comprehensively review, assess, and validate automated decision systems. We can also help your organization maximize the predictive power from automated decision systems while implementing industry standards for data privacy and protection, model monitoring and risk mitigation, and the equitable and responsible use of AI.

This article is authored by Kathryn Rock and Parth Kapoor with contributions by Bassel Haidar.


Kathryn Rock, Partner


Bassel Haidar, Director


Let Us Guide You

Guidehouse is a global consultancy providing advisory, digital, and managed services to the commercial and public sectors. Purpose-built to serve the national security, financial services, healthcare, energy, and infrastructure industries, the firm collaborates with leaders to outwit complexity and achieve transformational changes that meaningfully shape the future.
