Federal Agencies Poised to Crack Down on AI-Driven Bias
Financial institutions increasingly use artificial intelligence (AI) and machine learning when making mortgage lending decisions. Federal regulators are concerned about the potential for bias in these processes and have proposed new rules to reduce this risk.
On June 1, 2023, the Federal Reserve, Federal Deposit Insurance Corp. (FDIC), and several other agencies invited public comment on proposed rules that would impose quality control standards on automated valuation models (AVMs). The proposed rule is meant to address inherent biases in AI tools that could result in violations of fair lending laws.
Machine learning systems are trained on historical data using human-designed algorithms. If inaccurate, incomplete, or discriminatory data is used for training, these systems can make predictions that perpetuate that bias. The algorithm may also reflect its developer's conscious or unconscious prejudices. Priming, selective perception, stereotyping, and other cognitive biases may influence the system's predictions.
How AVMs Work
AVMs use statistical models to estimate the value of real estate by analyzing the property’s sales history and tax assessor value as well as recent data involving comparable properties. They have seen widespread adoption thanks to advances in AI and the availability of larger property datasets. Financial institutions, mortgage lenders, real estate brokers, and Wall Street institutions use AVM services for property valuations. Consumers can use them on free real estate sites such as Trulia and Zillow.
AVMs are more efficient than human appraisers, analyzing large datasets in seconds while avoiding simple human errors. They reduce subjectivity and lower the risk of deliberately inaccurate valuations and fraud. However, they can't visit a property and assess its condition visually; they can only make decisions based on the data available to them.
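At their simplest, AVMs work by scaling recent comparable sales to the subject property. The sketch below is purely illustrative (the properties, prices, and square footage are all hypothetical, and real AVMs weigh many more signals, including sales history, tax assessments, and market trends):

```python
# Toy "comparable sales" valuation illustrating the general idea behind an AVM.
# All property data here is hypothetical.
from statistics import median

def estimate_value(subject_sqft, comps):
    """Estimate value as the median price-per-square-foot of recent
    comparable sales, scaled to the subject property's size."""
    price_per_sqft = [price / sqft for price, sqft in comps]
    return subject_sqft * median(price_per_sqft)

# Hypothetical recent sales: (sale_price, square_feet)
comps = [(300_000, 1_500), (410_000, 2_000), (355_000, 1_750)]
print(round(estimate_value(1_600, comps)))  # prints 324571
```

Note that the estimate is only as good as the comparable sales fed into it: if those past sales reflect discriminatory appraisals, the model's output inherits that bias.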
A 2021 Freddie Mac study of census data found that 12.5% of appraisals in majority-Black neighborhoods resulted in valuations below the contract price. In other words, buyers were willing to pay more for the property than the appraised value. This occurred in just 7.4% of appraisals in majority-white neighborhoods. Below-contract appraisals can result in downward price negotiations (harming the seller) or higher down payment requirements (harming the buyer). If an AVM uses this data, its predictions will only reinforce that historical bias.
Regulating the Use of AVMs
The Dodd-Frank Act of 2010 added Section 1125 to the Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA), which includes rules governing the performance of real estate appraisals. Section 1125 requires that the FDIC, National Credit Union Administration (NCUA), and others promulgate quality control standards for AVMs. These standards must:
Ensure a high level of confidence in the estimates produced by automated valuation models.
Protect against the manipulation of data.
Seek to avoid conflicts of interest.
Require random sample testing and reviews.
Account for any other such factor that the agencies determine to be appropriate.
The proposed rule would add a requirement that AVMs “comply with applicable nondiscrimination laws.” The proposed rule would apply to the use of AVMs to make credit decisions based on “the value of a consumer’s principal dwelling collateralizing a mortgage.” This includes any decision to “originate, modify, terminate, or make other changes to a mortgage” or to approve or change the credit limits of a line of credit.
Additional Regulatory Scrutiny
The agencies that regulate mortgage lending are not alone in their concerns. Four other agencies announced in April 2023 that they would be cracking down on automated tools that cause bias and discrimination in business practices. They include the Department of Justice, the Federal Trade Commission (FTC), the Equal Employment Opportunity Commission (EEOC), and the Consumer Financial Protection Bureau (CFPB).
In their joint statement, the agencies acknowledged that AI has “the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” They referred to it as “digital redlining” and pledged to uphold “core principles of fairness, equality, and justice.”
The CFPB has begun hiring technologists and data scientists to help address these issues. The FTC recently launched its new Office of Technology, which will employ experts in these fields. Together, the four agencies are taking a "whole of government" approach to combating bias and discrimination.
How Organizations Should Respond
Organizations that adopt AI-enabled technologies should be aware of increased scrutiny by the federal government. They should assess whether their use of AI follows best practices and complies with any applicable regulations. Because AI is being added to a broad range of applications, organizations should also ensure that they're aware of all AI tools in use.
Determining whether an AI system is biased can be difficult given that the inner workings of these technologies are generally not accessible to the user. Developers should work with the organizations that use their systems to ensure that the algorithms are not based on flawed assumptions. They should also use large, high-quality datasets when training the systems.
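Even without access to a model's inner workings, organizations can audit its outputs. One simple check, echoing the disparity the Freddie Mac study measured, is to compare how often valuations fall below the contract price across neighborhood groups. The sketch below uses entirely hypothetical valuations and contract prices:

```python
# Minimal output-audit sketch with hypothetical data: compare below-contract
# valuation rates across two groups of appraisals.
def below_contract_rate(valuations, contract_prices):
    """Fraction of properties the model values below the contract price."""
    below = sum(1 for v, c in zip(valuations, contract_prices) if v < c)
    return below / len(valuations)

# Hypothetical model outputs, each against a $300,000 contract price.
group_a = below_contract_rate([290_000, 310_000, 280_000, 305_000],
                              [300_000] * 4)
group_b = below_contract_rate([295_000, 305_000, 310_000, 315_000],
                              [300_000] * 4)
print(group_a, group_b)  # prints 0.5 0.25 -- a gap worth investigating
```

A persistent gap between groups does not by itself prove unlawful discrimination, but it flags where further review of the training data and model assumptions is warranted.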
Most importantly, organizations should stay abreast of any changes to federal and state regulations related to the use of AI. The Biden Administration has made it clear that it will use existing consumer rights and civil rights laws to crack down on AI-driven bias.
Learn More About New Developments in Law
Stay up to date on the most current legal developments in California and the rest of the nation with Purdue Global Law School.
Purdue Global Law School offers an online Juris Doctor if you wish to become an attorney licensed in California. If you wish to advance your legal education but do not intend to become a practicing attorney, you may consider an online Executive Juris Doctor.