
When Algorithms Judge Your Credit: Understanding AI Bias in Lending Decisions



Korin Munsterman*

Professor and Director of the Legal Education Technology program at UNT Dallas College of Law


ISSUE 17

SPRING 2025

AI

I.              Introduction

Imagine a small business owner with a strong record of paying her bills and managing her finances being denied a loan, not because of her creditworthiness, but because an AI system flagged her irregular income patterns as “high risk.” Or say a recent immigrant with an advanced degree and a stable job receives an interest rate 3% higher than his colleagues simply because he lacks a conventional American credit history. Or think about a twenty-year-old African American woman who is denied a car loan because she has virtually no credit history, owns an Android cell phone, has a Yahoo email account, and shops online late at night, since she works during the day and goes to school in the evening. These aren’t hypothetical scenarios—they represent real consequences of AI bias in lending. As artificial intelligence increasingly determines who gets loans and at what cost, seemingly neutral algorithms can perpetuate and amplify existing patterns of discrimination, affecting millions of Americans’ access to credit.¹


II.            The Evolution of Credit Assessment

Before diving into AI bias, it’s important to understand how lending decisions have evolved. Before the advent of algorithms, decisions regarding hiring, advertising, criminal sentencing, and lending were primarily made by individuals and organizations. These decisions were typically guided by various federal, state, and local laws that aimed to ensure fairness, transparency, and equity in the decision-making process.² In contrast, today, many of these decisions are either fully automated or significantly impacted by machines, which offer remarkable efficiencies due to their scale and statistical precision.³ The emergence of AI in credit scoring stems from the need for more sophisticated approaches that can analyze vast datasets, identify intricate patterns, and make more accurate predictions.


Traditional vs. AI-Powered Credit Assessment

Anyone who has applied for a credit card, mortgage, or car loan is familiar with the process and often believes that only traditional sources determine whether credit will be granted and at what interest rate. Those traditional sources come from the three main credit reporting agencies: Equifax, Experian, and TransUnion. In addition to bureau data, traditional credit assessment relies on several other key factors:


  • Income verification

  • Employment history

  • Outstanding debts

  • Payment history

  • Credit utilization


Modern AI Credit Assessment: Leveraging Your “Digital Footprint”

AI is revolutionizing how lenders decide who gets credit by looking beyond traditional credit scores. While FICO scores have long served as the standard, they exclude many individuals without sufficient credit history. Today’s AI systems can analyze thousands of data points from your “digital footprint”: information you leave online simply by browsing websites or making purchases. That information can predict repayment behavior. For example:


Device type: Studies found that iPhone users default at nearly half the rate of Android users. This likely connects to income differences (iPhones tend to be much more expensive), as iOS device ownership strongly correlates with being in the top income quartile.


Email provider choice: Your email provider says a lot about you financially. Research shows that people using premium email services, such as Outlook, defaulted at just 0.51% (well below average), while users of older free services like Yahoo and Hotmail had much higher default rates (1.96%). This pattern suggests different economic profiles among email service users.


Shopping timing patterns: Night owls beware: people shopping between midnight and 6 AM defaulted at nearly twice the rate (1.97%) of those shopping during business hours. This correlation might reflect different lifestyle patterns or self-control habits.¹


Text formatting habits: Consistently typing in all lowercase correlated with a default rate more than twice that of people who used proper capitalization. Even more striking, customers who made typing errors in their email addresses had a default rate of 5.09% compared to the average of 0.94%.¹¹


Shopping approach: People arriving at shopping sites via price comparison websites were almost half as likely to default as those clicking through advertising links, suggesting that comparison shoppers are more financially cautious overall.¹²


These correlations demonstrate how seemingly trivial digital behaviors can form a surprisingly accurate picture of financial responsibility.
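
To make the mechanics concrete, here is a minimal sketch of how a lender might fold such signals into a logistic default-risk model. The feature names, weights, and baseline are invented for illustration (they only echo the directions of the correlations reported above) and do not represent any actual lender’s formula.

import math

def default_probability(applicant: dict) -> float:
    """Toy logistic model: a higher score means higher estimated default risk."""
    weights = {
        "uses_android": 0.40,           # vs. iOS, per the device-type correlation
        "legacy_email": 0.35,           # older free email services
        "night_shopper": 0.30,          # orders placed between midnight and 6 AM
        "all_lowercase": 0.45,          # no capitalization in messages
        "email_typo": 0.90,             # typo in the applicant's own email address
        "via_price_comparison": -0.50,  # arrived from a price-comparison site
    }
    intercept = -4.5                    # baseline reflecting a low average default rate
    z = intercept + sum(w * float(applicant.get(name, 0)) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))   # logistic link maps the score to a probability

applicant = {"uses_android": 1, "legacy_email": 1, "night_shopper": 1}
print(f"Estimated default probability: {default_probability(applicant):.2%}")

A real system would learn such weights from data rather than hard-coding them, which is precisely how the biases discussed below can creep in.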


If all of this analysis of your online behavior sounds a little like “Big Brother,” leveraging digital footprints nonetheless offers advantages for lenders and borrowers alike. An estimated two billion adults worldwide are considered “unbanked” and lack traditional credit histories.¹³ Because the traditional sources used by credit bureaus are limited, banks cannot lend to these people. By using digital footprints—which have proven just as reliable as traditional methods in predicting repayment behavior—to determine creditworthiness, these consumers can gain access to credit.¹⁴


Additionally, since digital footprints capture different aspects of creditworthiness than traditional scores, combining both approaches creates significantly more accurate predictions than either method alone. This means fewer good customers being denied credit and fewer risky loans being approved. Research indicates “that digital footprints have the potential to boost financial inclusion to parts of the currently two billion working-age adults worldwide that lack access to services in the formal financial sector.”¹⁵


When lenders use your online behavior to make credit decisions, they enter a complex ethical territory. Every website visit, device choice, and even typing habit becomes part of an invisible financial profile. A significant concern is how digital signals can function as “proxies,” which are indirect indicators that correlate with characteristics that lenders are not supposed to consider by law. For example, certain email providers or shopping patterns might unintentionally correlate with age, socioeconomic status, or even protected characteristics like race or gender, potentially creating discriminatory outcomes without explicitly using those factors.


And how does it affect our behavior? Imagine second-guessing every website you visit or feeling pressure to purchase an iPhone that you may not be able to afford, instead of an Android device, simply to appear more creditworthy. You might feel you have to “game the system” instead of navigating the web as you would normally, without knowing how your credit might be affected. And the convenience of instant credit decisions comes with a troubling tradeoff—the transformation of casual online activities into financial data points that follow consumers across the digital landscape. Finding a balance between innovation and privacy protection remains one of the most significant challenges in this evolving landscape.

 

III.          Understanding AI Bias

AI bias manifests in various forms, each with its own challenges and implications.


Understanding these different types of bias is important for both consumers and those working in the credit industry.


Historical Bias: The Legacy of Discrimination

Historical bias occurs when AI systems learn from data that reflects past discriminatory practices. A recent study found that AI lending models trained on historical data consistently replicated patterns of racial discrimination in mortgage lending.¹⁶ This happens because of:


  1. Training Data Issues: Historical lending data reflects decades of systematic discrimination;

  2. Pattern Recognition: AI systems identify and replicate historical correlations;

  3. Reinforcement: Biased decisions create new data that reinforces existing patterns.¹⁷


For example, beginning in the 1930s, banks and lenders engaged in a practice called “redlining,” which was not outlawed until the Fair Housing Act of 1968. Redlining refers to the discriminatory practice in which banks and lenders would literally draw red lines on maps around certain neighborhoods—predominantly those with Black, immigrant, or minority populations—and systematically deny mortgages to residents in these areas regardless of their actual creditworthiness.¹⁸


In 2022, Wells Fargo faced accusations of discriminatory lending practices driven by an algorithm intended to assess the creditworthiness of loan applicants.¹⁹ An investigation found that the algorithm gave higher risk scores to Black and Latino applicants compared to white applicants with similar financial backgrounds.²⁰ As a result, Black and Latino individuals were denied loans at a significantly higher rate, even though their qualifications were on par with those of white applicants.²¹


Representation Bias: The Missing Stories in AI Training Data

Representation bias is a significant issue in the use of artificial intelligence for credit decisions, especially in areas like mortgage lending and consumer credit. This type of bias arises when the data used to train AI does not adequately represent the varied demographics of the population it aims to assist, which can lead to certain communities being overlooked by AI systems. According to the Consumer Financial Protection Bureau (CFPB), “one in ten adults in the U.S., or about 26 million people, are ‘credit invisible.’”²² Being credit invisible means that someone has no credit history at all with any of the major credit bureaus (Equifax, TransUnion, and Experian).²³ Twenty-six million people, then, lack a credit history with any of the national credit reporting firms. Approximately 10 million people had thin files with inadequate credit history or stale files lacking any recent credit history.²⁴ There are 45 million consumers overall who might be denied credit since they lack credit records with a scoring capability.²⁵ Credit invisible or unscorable consumers typically lack access to quality credit and may face a variety of problems, from obtaining credit to leasing an apartment.²⁶


Who are these credit-invisible people?

Low-income consumers are more likely to have unscorable credit records and are disproportionately credit invisible.²⁷ Among residents of low-income areas, roughly 30% are credit invisible, and another 15% have records that are unscorable.²⁸ In upper-income communities, these rates are much lower: just 4% of the population is credit invisible, and another 5% is unscorable.²⁹

 

In addition, along with having unscored credit reports, Black and Hispanic individuals are far more likely to be credit invisible than White or Asian consumers. Compared to 9% of White customers, over 15% of Black and Hispanic consumers are credit invisible.³⁰ Comparatively, 7% of White consumers, 13% of Black consumers, and 12% of Hispanic consumers have unscorable records.³¹ Studies have shown that minority borrowers face higher barriers to obtaining credit. Even when controlling for income and debt ratios, high-earning Black applicants with less debt were found to be rejected more often than high-earning White applicants with more debt.³²

 

Not surprisingly, younger consumers are likely to be credit invisible simply because they are just starting out in life and have not built up any or sufficient credit.³³

 

Furthermore, according to geographical analysis, 22.3% of adults in Mississippi are either credit invisible or unscorable.³⁴ In fact, of “the Top 25 Micropolitan Statistical Areas, 21 are located in the South,” and all of these areas have very high poverty rates; more than half of “those areas have poverty rates more than double the national rate.”³⁵


Impact of Underrepresentation

Credit scoring isn’t just used for loans; it affects people seeking a wide range of other financial goods and services. Employers use credit scores and other scoring systems to judge job applicants; insurers use credit scores to decide who gets car, life, and home insurance and to set premiums accordingly;³⁶ and landlords use them to check out potential tenants.³⁷ Indeed, most states allow utility companies to consider credit scores when determining service.³⁸ Credit scores are even being used to predict which people are more likely to take their medications as prescribed.³⁹


Algorithmic Bias: The Proxy Problem

Algorithmic decisions can be made based on proxies: variables that are not themselves explicitly protected characteristics (e.g., race, gender, or religion) but that correlate strongly with those characteristics. For example, a zip code might serve as a proxy for race because of historical housing segregation patterns. While the algorithm isn’t directly considering race—which would likely violate anti-discrimination laws—it achieves a similar effect by using zip code as a decision factor.
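
A brief simulation (with invented numbers) shows why excluding race from the inputs does not prevent this. Here, group B is assumed, because of segregation, to be concentrated in one hypothetical zip code; an approval rule that looks only at zip code then reproduces a racial disparity it never explicitly considers.

import random

random.seed(0)
applicants = []
for _ in range(10_000):
    race = random.choice(["A", "B"])
    # Assumed legacy of housing segregation: group B mostly lives in zip "78xx"
    zip_code = "78xx" if random.random() < (0.8 if race == "B" else 0.2) else "75xx"
    applicants.append({"race": race, "zip": zip_code})

def approve(app):
    # "Race-blind" rule: the decision looks only at zip code
    return app["zip"] == "75xx"

for group in ("A", "B"):
    members = [a for a in applicants if a["race"] == group]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"Group {group} approval rate: {rate:.0%}")
# Prints roughly 80% for group A and roughly 20% for group B, despite never using race.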


Social media usage as a credit factor raises even more complex issues. Algorithms may analyze a person's online connections, posting patterns, or even the content of their communications to make inferences about creditworthiness. When people with limited financial resources use technology that increases their exposure to monitoring, and they don’t limit access to their online content (whether intentionally or not), they may face additional forms of commercial data gathering.⁴⁰ This creates a troubling dynamic where those with fewer resources are more exposed to algorithmic scrutiny of their social lives.


Indeed, data-driven analysis might provide businesses with novel justifications for denying specific groups access to certain opportunities. For instance, one analytics study found that individuals who complete online job applications using deliberately installed browsers (such as Firefox or Chrome) rather than pre-installed ones tend to perform better and have lower job turnover rates.⁴¹ Should employers use this correlation as a proxy to avoid hiring candidates who use particular browsers, they might inadvertently reject qualified applicants based on factors irrelevant to the actual job requirements. And, of course, the denial of employment opportunities could lead to poor credit and financial conditions.


Purchase history represents another problematic data source. Credit algorithms might analyze where a person shops, what they purchase, and even the timing of their purchases and use those patterns as a proxy for socio-economic status. For example, in the CompuCredit case, the Federal Trade Commission alleged that a credit card marketing company promoted consumers’ ability to obtain cash advances while deceptively omitting that the company would reduce consumers’ credit limits based on a behavioral scoring model if they used their cards for cash advances or specific transactions. These transactions included visits to marriage counselors, bars and nightclubs, pawn shops, automobile tire retreading and repair shops, and massage parlors.⁴²


The “black box” nature of these algorithms compounds these problems. When credit decisions move from straightforward formulas to complex machine learning models, decisions become increasingly opaque, making it nearly impossible for consumers—or even regulators—to understand why a particular decision was made. For example, algorithms might assign low scores to jobs like seasonal agricultural work or low-wage service positions. While this correlation might not be intentionally discriminatory, it could unfairly affect loan outcomes for racial minorities who disproportionately hold these jobs. To verify this, we would need access to the core components of credit-scoring systems—source code, programmer documentation, and algorithms—which remain unavailable to us. Credit bureaus may effectively be concealing discrimination within these opaque scoring systems that remain protected from examination.⁴³ This immunity stems from both technical complexity and legal protections for trade secrets, creating a situation where discriminatory effects can persist undetected and unchallenged.⁴⁴


The consequences of these proxy variables extend beyond individual credit decisions. When algorithms systematically disadvantage certain communities by using seemingly neutral factors, they create what scholars call “networked discrimination.”⁴⁵ This occurs when an individual is judged not only by their own characteristics but by those of people with whom they are associated, whether through neighborhood, social connections, or shopping patterns. This form of discrimination is particularly harmful because it can be difficult to identify, challenge, or remedy under existing legal frameworks that were designed for more direct forms of discrimination.


Generalization Bias: When AI Makes Assumptions

In the context of credit scoring and financial decision-making, generalization bias manifests when prediction models (like those used to determine creditworthiness) perform with different levels of precision across different demographic groups. For example, research indicates that credit scores are statistically noisier indicators of default risk for minority and low-income applicants compared to other populations.⁴⁶ In simple terms, “statistical noise” refers to random or unpredictable variation in data that makes it harder to identify true patterns or relationships. When a measurement or prediction contains more noise, it’s less reliable or accurate. This means lenders face greater uncertainty when assessing the default risk of historically underserved populations, even when looking at the same credit score numbers. Two applicants with identical credit scores—one from a minority group and one not—may actually have different true default risks, but the scoring system is less able to accurately capture this for the minority applicant.


Why? As previously discussed, minority consumers are roughly twice as likely as non-minority consumers to have very few accounts. When there’s less data to analyze, predictions naturally become less reliable.


Also, the credit data available for minority applicants may not fully capture their financial behaviors and responsibilities. Many minority consumers may use financial services that don’t report to traditional credit bureaus, such as payday lenders or community-based lending circles.⁴⁷


This bias doesn't necessarily stem from intentional discrimination or flawed modeling techniques, but often results from underlying differences in the quality, quantity, and representativeness of data available for different groups. When certain populations have “thinner” credit files (less historical data), more limited types of credit accounts, or different patterns of financial behavior that aren’t well-captured by traditional metrics, the models make less reliable predictions for these groups.
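
A short, synthetic simulation illustrates this reliability gap. Both groups below are given exactly the same true default risk; the only difference is how many accounts (tradelines) each file contains, yet the thin-file estimates swing several times as widely.

import random
import statistics

random.seed(1)
TRUE_DEFAULT_RATE = 0.05  # identical underlying risk for every simulated consumer

def estimated_risk(n_accounts: int) -> float:
    """Estimate a consumer's risk from a payment history of n_accounts tradelines."""
    missed = sum(random.random() < TRUE_DEFAULT_RATE for _ in range(n_accounts))
    return missed / n_accounts

thick_files = [estimated_risk(40) for _ in range(2_000)]  # long credit histories
thin_files = [estimated_risk(4) for _ in range(2_000)]    # "thin" files

print(f"Thick files: mean {statistics.mean(thick_files):.3f}, spread {statistics.stdev(thick_files):.3f}")
print(f"Thin files:  mean {statistics.mean(thin_files):.3f}, spread {statistics.stdev(thin_files):.3f}")
# Both means sit near 0.05, but the thin-file estimates are far noisier,
# so any individual thin-file applicant is assessed much less reliably.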


This generalization bias creates a troubling feedback loop in credit markets. When scoring systems cannot accurately assess the creditworthiness of minority or low-income applicants, lenders face greater uncertainty about these borrowers’ default risk. This uncertainty typically translates into more conservative lending decisions, further limiting these groups’ access to credit. Without access to mainstream credit, these consumers cannot build the robust credit histories needed for accurate assessment, perpetuating the cycle of exclusion. The problem is structural rather than individual—even if a consumer from an underserved group is equally creditworthy as another borrower, the information asymmetry means they’re less likely to be recognized as such.⁴⁸


IV.          Real-World Impact: The Apple Card Controversy

In November 2019, tech entrepreneur David Heinemeier Hansson ignited a firestorm on Twitter when he revealed that Apple’s newly launched credit card had offered him a credit limit 20 times higher than his wife’s, despite her having a better credit score.⁴⁹ Adding fuel to the controversy, Apple co-founder Steve Wozniak chimed in, sharing that he had experienced a similar disparity with his wife.⁵⁰ These allegations prompted an immediate investigation by New York’s Department of Financial Services (DFS), which stated that “any algorithm that intentionally or not results in discriminatory treatment of women or any other protected class violates New York law.”⁵¹


The resulting investigation by the DFS ultimately “did not produce evidence of deliberate or disparate impact discrimination but showed deficiencies in customer service and transparency.”⁵² Nevertheless, the controversy highlighted a significant concern about algorithmic bias in financial services. Goldman Sachs, the issuing bank for the Apple Card, defended itself by claiming that its algorithm was “gender-blind” and didn’t use gender as an input. However, as reported in Wired, this defense is “doubly misleading” because “it is entirely possible for algorithms to discriminate on gender, even when they are programmed to be ‘blind’ to that variable.”⁵³


Furthermore, the controversy revealed a paradoxical challenge in preventing algorithmic discrimination. As Paul Resnick, a professor at the University of Michigan’s School of Information, noted, the fact that the Equal Credit Opportunity Act prohibits financial businesses from using information such as gender or race “may actually make this problem worse by deterring those businesses from collecting this important information in the first place.”⁵⁴ This creates what AI ethics researchers call the “fairness paradox”: we can’t directly measure bias against protected categories if we don’t collect data about those categories, yet collecting such data raises concerns about potential misuse. This tension makes auditing for bias extraordinarily difficult and complicates regulatory oversight in an increasingly algorithmic financial landscape.


The Apple Card incident underscores how modern algorithms can perpetuate historical discrimination. The New York DFS report acknowledges that “[e]ven when credit scoring is done in compliance with the law, it can reflect and perpetuate societal inequality” because “these same variables often reflect the nation’s long history of racial and gender discrimination.”⁵⁵ Consequently, the algorithms can inadvertently perpetuate biases through proxy variables, i.e., data points that correlate with protected characteristics like gender or race.


Accordingly, the Apple Card controversy has illuminated the need for more sophisticated regulation of financial technology. Karen Mills, a Senior Fellow at Harvard Business School, argues that “this is a new frontier for the financial services sector—and the industry's regulators are also operating without a roadmap.”⁵⁶ What makes this regulatory challenge particularly vexing is that as algorithms become more complex and opaque, traditional regulatory approaches centered on disclosure requirements and compliance checklists become less effective. The future of financial regulation likely requires regulatory bodies to develop significant technical expertise in AI and machine learning, enabling them to conduct meaningful algorithm audits that can detect potential discrimination without stifling innovation. This represents a fundamental shift in how we approach financial regulation, moving from rules-based compliance to outcomes-based oversight. Mills proposes a three-part approach to “smart regulation”: disclosure rules regarding algorithm transparency, increased technical expertise at regulatory agencies, and better data collection to identify lending gaps.⁵⁷ Without such measures, we risk allowing technological innovation to undermine decades of progress in fair lending.


V.            Constitutional and Statutory Protections in Credit Decisions

Equal access to credit is fundamental to economic opportunity in America, protected by both constitutional principles and statutory frameworks designed to prevent discrimination. The Equal Credit Opportunity Act (ECOA) of 1974⁵⁸ stands as one of the most significant legal protections, prohibiting lenders from discriminating against credit applicants on the basis of race, color, religion, national origin, sex, marital status, age, or because they receive public assistance.⁵⁹ This protection extends to all aspects of a credit transaction, including application procedures, evaluation standards, and terms of credit.⁶⁰ The Fair Housing Act (FHA) provides additional protection specifically in the housing market, prohibiting discrimination in financing real estate transactions based on similar protected characteristics.⁶¹


Despite these protections, companies have employed Big Data models to identify and exclude internet users with poor credit scores from loan advertisements.⁶² Similarly, Big Data could eliminate the need for housing providers to explicitly state discriminatory preferences in advertisements.⁶³ Instead, they could develop algorithms to predict personal identifying information of potential buyers or renters and selectively advertise properties only to preferred demographic profiles.⁶⁴


In the housing market, providers may increasingly utilize publicly available personal information to create these targeted profiles. Just as Big Data can prevent qualified candidates from seeing beneficial loan opportunities, housing suppliers could potentially use these techniques to discriminate while evading fair housing regulations.⁶⁵


The legal framework addressing credit discrimination encompasses two distinct concepts: disparate treatment and disparate impact. Disparate treatment occurs when lenders explicitly treat applicants differently based on a protected characteristic.⁶⁶ For example, let’s say a mortgage lender explicitly tells Black applicants they need a credit score of 700 to qualify for a loan, while telling white applicants they only need a score of 650. This is disparate treatment because the lender is directly treating applicants differently based on their race. The discrimination is intentional and applies different standards to different racial groups.


Disparate impact, meanwhile, occurs when facially neutral practices nevertheless result in disproportionate adverse effects on protected groups.⁶⁷ In this situation, a lender has a policy requiring all applicants to have at least five years of credit history to qualify for a mortgage. This policy appears neutral since it applies to everyone equally. However, it may have a disparate impact on recent immigrants who haven’t had the opportunity to build lengthy credit histories in the U.S., even if they are financially responsible. In this case, the lender isn’t intentionally discriminating against immigrants, but their seemingly neutral policy disproportionately excludes people based on national origin.


The key difference is that disparate treatment involves intentional discrimination based on a protected characteristic, while disparate impact occurs when a seemingly neutral policy or practice disproportionately affects a protected group, regardless of intent.
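
One common screening heuristic for disparate impact is to compare selection (approval) rates across groups, as in the sketch below. The applicant counts are hypothetical, and the “four-fifths” benchmark comes from employment-discrimination guidance; it is a rough flag for further review, not the legal test courts apply under ECOA or the FHA.

def selection_rate_ratio(approved_protected: int, total_protected: int,
                         approved_comparison: int, total_comparison: int) -> float:
    """Ratio of the protected group's approval rate to the comparison group's rate."""
    protected_rate = approved_protected / total_protected
    comparison_rate = approved_comparison / total_comparison
    return protected_rate / comparison_rate

# Hypothetical outcomes of a facially neutral five-years-of-credit-history rule
ratio = selection_rate_ratio(approved_protected=450, total_protected=1000,    # 45% approved
                             approved_comparison=720, total_comparison=1000)  # 72% approved
print(f"Selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths benchmark: the neutral policy deserves a closer look.")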


As the Supreme Court affirmed in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, the FHA permits disparate impact claims, with the Court noting that such claims play “a role in uncovering discriminatory intent” by allowing plaintiffs to “counteract unconscious prejudices and disguised animus.”⁶⁸


The rise of artificial intelligence and machine learning in credit decisions presents both opportunities and new challenges for these legal frameworks. While automated systems theoretically remove human bias from decisions, they may still perpetuate discrimination if trained on historically biased data or if they employ variables that correlate with protected characteristics. As noted in President Biden’s 2023 Executive Order on AI, “irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation.”⁶⁹ The Consumer Financial Protection Bureau (CFPB) has specifically addressed this concern, stating in a 2022 circular that “ECOA and Regulation B require creditors to provide statements of specific reasons to applicants against whom adverse action is taken” regardless of the technology used.⁷⁰ Furthermore, in a 2023 circular, the CFPB clarified that creditors may not rely on standardized checklists of reasons for adverse action notices if those reasons don’t “specifically and accurately indicate the principal reason(s) for the adverse action.”⁷¹


Consequently, as lending increasingly relies on complex algorithms, regulatory authorities have emphasized the need for explainability and transparency. AI explainability is the ability to understand and explain how an AI system reaches its decisions.⁷² Transparency is about making the AI’s processes visible and understandable to humans.⁷³ Imagine you apply for a loan and receive this response: “Your loan was denied.” This shows a lack of explainability. With explainability, you would get a response closer to, “Your loan was denied because your debt-to-income ratio is 45%, which exceeds our threshold of 40%. Additionally, you’ve had two late payments in the past year.” Transparency works hand-in-hand with explainability. More than simply showing what was considered, transparency involves showing how the system arrived at the decision, for example, “the model considers 15 factors, including income, credit score, payment history, employment duration, and debt levels.” Also, the lender might offer to show you the complete criteria and how different factors are weighted in the decision process before you apply for the loan.
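
To see the difference in code, here is a sketch of a fully transparent, rule-based scorecard. The factors, thresholds, and wording are hypothetical; the point is that because every rule is visible, the system can state the specific principal reasons behind a denial, which is exactly what opaque models struggle to do.

def evaluate_application(app: dict) -> tuple[bool, list[str]]:
    """Apply visible underwriting rules and collect specific reasons for any denial."""
    reasons = []
    if app["debt_to_income"] > 0.40:
        reasons.append(f"Debt-to-income ratio of {app['debt_to_income']:.0%} exceeds the 40% threshold")
    if app["late_payments_12mo"] >= 2:
        reasons.append(f"{app['late_payments_12mo']} late payments in the past 12 months")
    if app["credit_history_years"] < 2:
        reasons.append("Credit history shorter than 2 years")
    return (len(reasons) == 0), reasons

approved, reasons = evaluate_application(
    {"debt_to_income": 0.45, "late_payments_12mo": 2, "credit_history_years": 6}
)
print("Approved" if approved else "Denied for the following principal reasons:")
for reason in reasons:
    print(" -", reason)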


However, most AI systems are not transparent, which is why they are called “black boxes.” Transparency requirements are about opening up the “black box” of the AI system before, during, and after decisions are made. It’s not just explaining the outcome (explainability) but making visible how the entire system operates.


Complex lending algorithms that predict creditworthiness better often can’t explain their decisions clearly. This creates a conflict:


On one side: Modern AI can make more accurate predictions about who will repay loans.


On the other side: Consumer protection laws require lenders to explain why they approved or denied your application.


This creates a dilemma: Do we use the more accurate “black box” systems that can’t explain themselves well, or stick with simpler methods that might be less accurate but can provide clear explanations as required by law?


It’s like having a brilliant doctor who makes excellent diagnoses but can’t tell you how they reached their conclusions, when the law specifically requires doctors to explain their reasoning to patients.


But the regulatory landscape is evolving… away from regulation.


VI.          Recent Changes to the AI Regulatory Framework

Since President Trump was sworn into office on January 20, 2025, significant administrative changes have occurred.


Executive Order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (issued October 30, 2023) was rescinded on January 20, 2025, as part of a broader policy shift by the new Trump administration.⁷⁴


A new Executive Order titled “Removing Barriers to American Leadership in Artificial Intelligence” aims to reorient federal AI policy toward promoting innovation and competitiveness.⁷⁵ It directs federal agencies to reevaluate existing AI regulations, calls for the development of a new AI Action Plan, and prioritizes policies that strengthen economic growth and national security interests.⁷⁶ This change represents a significant pivot in the federal government's approach to AI governance, emphasizing market-driven development and reduced regulatory oversight.⁷⁷


Additionally, operations at the Consumer Financial Protection Bureau (CFPB) were shut down in February 2025.⁷⁸


These recent developments may affect the regulatory framework for AI in financial services. Legal professionals should consult current sources when addressing matters related to AI governance in lending practices, as the regulatory landscape continues to evolve.


VII.        Practical Guidance for Consumers: Understanding and Managing Your Credit Report and Score

As we have seen, credit reports play a critical role in determining access to essential financial opportunities for a vast number of people in the United States and in influencing numerous aspects of consumers’ lives beyond mere lending decisions. According to the CFPB, “[c]redit reports serve as economic gatekeepers for millions of Americans seeking to buy a home, start a business, or get a car loan.”⁷⁹ Given this pivotal role, understanding the mechanics of credit reporting and your legal rights is essential for navigating today’s credit-dependent marketplace effectively.


Your credit report contains detailed information about your financial history, including residential information, payment patterns, and legal matters like judgments or bankruptcies. An important distinction exists between credit reports and credit scores, which are related but distinct concepts. A credit report is a statement that contains information about your credit activity and current credit situation.⁸⁰ Your credit scores are calculated based on the information in your credit report.⁸¹


The Fair Credit Reporting Act (FCRA) establishes the legal framework for consumer rights in credit reporting.⁸² This federal legislation mandates that credit reporting agencies implement reasonable procedures for information collection and maintenance while setting accuracy standards for information furnishers.⁸³ Under this framework, consumers possess several fundamental rights: access to their file information, the ability to dispute inaccurate or incomplete information, and protection from outdated negative information after specified time periods.⁸⁴


Regular monitoring of your credit reports constitutes a fundamental consumer protection strategy. The three major credit reporting companies—Equifax, Experian, and TransUnion—must provide you with a free annual credit report upon request.⁸⁵ Consumers can access credit reports through AnnualCreditReport.com. Furthermore, you qualify for additional free reports in specific circumstances, such as adverse credit decisions, credit limit reductions, or identity theft victimization.⁸⁶


When reviewing your credit report, check carefully for errors, which are surprisingly common. If you find inaccuracies, federal law gives you the right to dispute them. The CFPB advises that “if you identify an error on your credit report, you should start by disputing that information with the credit reporting company.”⁸⁷ You can file disputes online, by mail, or by phone, but for significant issues, many experts recommend sending your dispute via certified mail with return receipt requested to ensure documentation of your communication.⁸⁸ A sample letter can be found here: https://consumer.ftc.gov/articles/sample-letter-disputing-errors-credit-reports-business-supplied-information.


The dispute resolution process follows specific procedural requirements. Your dispute correspondence should contain your complete contact information, clearly identify each contested item with supporting rationale, and explicitly request correction or removal.⁸⁹ Including supporting documentation and a marked copy of your credit report highlighting disputed items enhances your claim’s effectiveness. Credit reporting agencies must investigate disputes within a 30-day timeframe and must transmit relevant information to the data furnisher.⁹¹ Upon investigation completion, the agency must provide written results and a free credit report copy if the dispute prompts any changes.⁹²


The dispute resolution system has demonstrated significant practical limitations. In January 2022, the CFPB reported that “Equifax, Experian, and TransUnion failed to appropriately respond to almost all of the 700,000 complaints filed against them from January 2020 through September 2021.”⁹³ When disputes remain unresolved, consumers have several recourse options: submitting new disputes with additional information, adding explanatory statements to their reports, filing CFPB complaints, or consulting consumer law attorneys for potential legal action.⁹⁴


Beyond remedial measures for inaccuracies, proactive credit management requires consistent positive financial behaviors. The CFPB recommends several essential practices: consistent on-time payments, maintaining low credit utilization ratios (preferably below 30%), establishing extended credit histories, limiting credit applications to necessary purposes, and regular credit report monitoring. For consumers establishing or rebuilding credit, specialized financial products offer structured pathways, including secured credit cards, credit builder loans, and retail credit accounts. These instruments, when used responsibly, facilitate gradual credit profile development over time.⁹⁶
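
As a quick illustration of the utilization guideline, the snippet below divides total balances by total credit limits across cards (the balances and limits are made up); keeping that ratio under roughly 30% is the practice described above.

cards = [
    {"name": "Card A", "balance": 450.00, "limit": 2000.00},
    {"name": "Card B", "balance": 1200.00, "limit": 5000.00},
]

total_balance = sum(card["balance"] for card in cards)
total_limit = sum(card["limit"] for card in cards)
utilization = total_balance / total_limit

print(f"Overall credit utilization: {utilization:.0%}")  # 1,650 / 7,000 is about 24%
if utilization <= 0.30:
    print("Within the commonly suggested 30% guideline")
else:
    print("Above the commonly suggested 30% guideline")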


VIII.     Beyond the Credit Score: Consumer Strategies in an AI-Driven Credit World

Monitoring traditional credit reports is vitally important to protect your financial interests. However, in today’s digital financial landscape, understanding how your credit is assessed has become increasingly complex. Traditional credit reporting systems remain important gatekeepers for major financial decisions, but they now operate alongside sophisticated artificial intelligence systems that analyze aspects of your digital life you might never have considered relevant to your creditworthiness.


Beyond traditional credit monitoring, consumers should recognize that their everyday digital behaviors—from email correspondence habits to the timing of online shopping—may now factor into credit decisions. While traditional credit reports offer transparency and dispute mechanisms, these newer assessment dimensions often operate in proprietary “black box” systems with limited oversight or recourse options. This opacity creates challenges for consumers seeking to understand or contest adverse credit decisions based partly on digital footprint analysis. As AI-powered credit assessment becomes more prevalent, consumers face a challenging balance between leveraging the benefits of digital convenience and protecting their privacy. Consider creating strategic boundaries between your financial activities and personal digital life when possible. Using privacy tools like virtual private networks (VPNs)⁹⁷ or browser settings that limit tracking may help reduce unwanted data collection, though this could potentially limit the positive aspects of your digital footprint that might otherwise support credit access. The most practical approach may be maintaining awareness of these systems while focusing primarily on the fundamentals of financial responsibility that remain relevant across all assessment models.


Ultimately, while the mechanics of credit assessment are evolving rapidly, the principles of financial responsibility remain consistent. By understanding both traditional credit reporting protections and the emerging reality of AI-powered assessment, consumers can more effectively navigate this increasingly complex landscape. Regular monitoring, prompt dispute resolution, responsible credit usage, and awareness of digital footprint implications provide a comprehensive approach to maintaining positive credit standing in today’s financial ecosystem.


IX.          Conclusion

The challenge of AI bias in lending represents a critical intersection of technology, finance, and civil rights. While AI promises more efficient and accurate lending decisions, its potential to perpetuate and amplify existing biases cannot be ignored. Success in addressing these challenges requires a coordinated effort from regulators, industry participants, and consumers.


The future of fair lending depends on continued trustworthy technological innovation, strong regulatory oversight (which is unlikely in the next four years), an industry commitment to fairness (also unlikely given the race toward being the first at market with new technologies), consumer awareness and advocacy, ongoing research and development, and community engagement.


Suggested Citation: Korin Munsterman, When Algorithms Judge Your Credit: Understanding AI Bias in Lending Decisions, ACCESSIBLE LAW, Spring 2025, at 1.



Sources:

* Korin Munsterman is a Professor and the Director of the Legal Education Technology program at UNT Dallas College of Law. Prior to joining UNT Dallas, she worked as a professor of practice, law librarian, and director of technology at several law schools, and consulted with law schools, firms, and government agencies on the integration of technology into legal practice and education.  She is currently writing AI and the Practice of Law in a Nutshell for West Academic.


[1] Nicol Turner Lee et al., Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms, Brookings Inst. (May 22, 2019) [hereinafter Lee], https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

[2] Id.

[3] Id.

[4] Wilhelmina Afua Addy et al., AI in Credit Scoring: A Comprehensive Review of Models and Predictive Analytics, Glob. J. Eng'g & Tech. Advances, March 2025, at 118, 126.

[5] Mikella Hurley & Julius Adebayo, Credit Scoring in the Era of Big Data, 18 Yale J.L. & Tech. 148, 154 (2016).

[6] Id. at 162.

[7] Tobias Berg et al., On the Rise of FinTechs: Credit Scoring using Digital Footprints, 33 Rev. Fin. Stud. 2845, 2845–46 (2019) [hereinafter Berg].

[8] Id. at 2859.

[9] Id.

[10] Id.

[11] Id.

[13] Anastasiya Shitikova, How to Boost Predictive Power Using Digital Footprints, RiskSeal (Aug. 6, 2024), https://riskseal.io/blog/how-to-improve-credit-scoring-using-digital-footprints.

[14] Berg, supra note 6, at 2847.

[15] Id.

[16] Lee, supra note 1.

[17] Lee, supra note 1.

[19] Bashir Iliyasu Bashir, Case Study: Algorithmic Bias in Loan Denials, Medium (Jan. 23, 2024) https://medium.com/@vanderbash/case-study-algorithmic-bias-in-loan-denials-6aa4c6736b8e.

[20] Id.

[21] Id.

[22] CFPB, Who are the credit invisibles? How to help people with limited credit histories 2 (2016) [hereinafter Credit Invisibles], https://files.consumerfinance.gov/f/documents/201612_cfpb_credit_invisible_policy_report.pdf.

[23] Id.

[24] Id.

[25] Id.

[26] Id.

[28] Id. at 3.

[29] Id.

[30] Id. at 4.

[31] Id.

[32] Emmanuel Martinez & Lauren Kirchner, The Secret Bias Hidden in Mortgage-Approval Algorithms, The Ctr. for Pub. Integrity (Aug. 25, 2021), https://publicintegrity.org/inequality-poverty-opportunity/bias-mortgage-approval-algorithms.

[33] Credit Invisibles, supra note 21, at 5.

[34] Credit Invisibles, supra note 21, at 6.

[35] Credit Invisibles, supra note 21, at 6, 15–17.

[36] Credit-Based Insurance Scores Aren’t the Same as a Credit Score. Understand How Credit and Other Factors Determine Your Premiums, Nat’l Ass’n Ins. Comm’rs: Consumer Insight (July 22, 2020), https://content.naic.org/article/consumer-insight-credit-based-insurance-scores-arent-same-credit-score-understand-how-credit-and-other-factors.

[37] Lisa Rice & Deidre Swesnik, Discriminatory Effects of Credit Scoring on Communities of Color, 46 Suffolk U. L. Rev. 935, 938 (2012).

[38] Getting Utility Services: Why Your Credit Matters, F.T.C. Consumer Advice (Oct. 2024), https://consumer.ftc.gov/articles/getting-utility-services-why-your-credit-matters (“Getting utility services—gas, electricity, water—has a lot to do with your credit history. The better your credit history, the easier it can be for you to get services. And your on-time (or late) payment history with utility companies can be an important factor for your credit in the future.”).

[39] Tara Parker-Pope, Keeping Score on How You Take Your Medicine, N.Y. Times: Well (June 20, 2011, 5:23 PM), https://archive.nytimes.com/well.blogs.nytimes.com/2011/06/20/keeping-score-on-how-you-take-your-medicine/  (“ FICO, a company whose credit score is widely used to assess the credit worthiness of millions of consumers … developed a new FICO Medication Adherence Score that it says can predict which patients are at highest risk for skipping or incorrectly using prescription medications . . . . The FICO medication score is based on publicly available data, like home ownership and job status, and does not rely on a patient’s medical history or financial information to predict whether he or she will take medication as directed. So, like a credit rating, it can be compiled without a person’s knowledge or permission.”).

[40] Mary Madden et al., Privacy, Poverty, and Big Data: A Matrix of Vulnerabilities for Poor Americans. 95 Wash. U. L. Rev. 53, 56–57 (2017) [hereinafter Madden].

[41] FTC, Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues 10–11 (2016), https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf.

[42] Complaint at 35, FTC v. CompuCredit Corp., No. 1:08-cv-1976-BBM-RGV, 2008 U.S. Dist. LEXIS 123512 (N.D. Ga. filed June 10, 2008).

[43] Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1, 14 (2014).

[44] Brenda Reddix-Smalls, Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market, 12 U.C. Davis Bus. L.J. 87, 116–20 (2011).

[45] Madden, supra note 39, at 82.

[46] Laura Blattner & Scott Nelson, How Costly is Noise? Data and Disparities in Consumer Credit 2 (Stan. Graduate Sch. of Bus., Working Paper No. 3978), https://arxiv.org/pdf/2105.07554.

[47] Id. at 18.

[48] Id.

[49] Apple’s ‘sexist’ credit card investigated by US regulator, BBC (Nov. 11, 2019), https://www.bbc.com/news/business-50365609.

[51] Id.

[52] N.Y. Dep’t of Fin. Servs., Report on Apple Card Investigation 2 (2021) [hereinafter Apple Card Investigation].

[53] Will Knight, The Apple Card Didn't 'See' Gender—and That's the Problem, Wired (Nov. 19, 2019, 9:15 AM), https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/.

[55] Apple Card Investigation, supra note 51, at 15.

[56] Karen G. Mills, Gender Bias Complaints against Apple Card Signal a Dark Side to Fintech, Harv. Bus. Sch.: Working Knowledge (Nov. 19, 2019), https://www.library.hbs.edu/working-knowledge/gender-bias-complaints-against-apple-card-signal-a-dark-side-to-fintech.  

[57] Id.

[58] Equal Credit Opportunity Act (ECOA), 15 U.S.C. § 1691.

[59] Kate Crawford & Jason Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, 55 B.C. L. Rev. 93, 100 (2014) [hereinafter Crawford].

[60] See 15 U.S.C. § 1691.

[61] Fair Housing Act of 1968, 42 U.S.C. § 3604(c).

[62] Crawford, supra note 58, at 100–01.

[63] Id. at 100.

[64] Id. at 101.

[66] Fed. Deposit Ins. Corp., Consumer Compliance Examination Manual pt. IV, at 1.2–.3 (2024).

[67] Id. at 1.3.

[68] Tex. Dep’t of Hous. & Cmty. Affs. v. Inclusive Cmtys. Project, 576 U.S. 519, 521 (2015).

[69] Exec. Order No. 14110 on Safe, Secure, and Trustworthy Artificial Intelligence, 88 Fed. Reg. 75191 (Oct. 30, 2023), revoked by Exec. Order No. 14148, 90 Fed. Reg. 8237 (Jan. 20, 2025).

[70] CFPB, Circular on adverse action notification requirements in connection with credit decisions based on complex algorithms (May 26, 2022).

[71] Id.

[72] What is explainable AI?, IBM (Mar. 29, 2023), https://www.techtarget.com/whatis/definition/explainable-AI-XAI.

[73] Id.

[74] Exec. Order No. 14148, 90 Fed. Reg. 8237 (Jan. 20, 2025).

[75] Exec. Order No. 14179, 90 Fed. Reg. 8741 (Jan. 23, 2025).

[78] Joe Hernandez, The Trump administration has stopped work at the CFPB. Here’s what the agency does, NPR (Feb. 10, 2025, 4:35 PM), https://www.npr.org/2025/02/10/nx-s1-5292123/the-trump-administration-has-stopped-work-at-the-cfpb-heres-what-the-agency-does.

[79] Seth Frotman, Holding Credit Reporting Companies Accountable for Junk Data, CFPB (Jan. 16, 2025), https://www.consumerfinance.gov/about-us/blog/holding-credit-reporting-companies-accountable-for-junk-data/.  

[80] What is a credit report?, CFPB: ask CFPB, https://www.consumerfinance.gov/ask-cfpb/what-is-a-credit-report-en-309/ (Jan. 29, 2024).

[81] Frotman, supra note 78.

[82] 15 U.S.C. § 1681.

[83] Amy Loftsgordon, Disputing Incomplete and Inaccurate Information in Your Credit Reports, Nolo (Jan. 7, 2025), https://www.nolo.com/legal-encyclopedia/disputing-incomplete-and-inaccurate-information-in-your-credit-report.html.

[84] Frotman, supra note 78.

[85] 15 U.S.C. § 1681j(a)(1)(A).

[86] Loftsgordon, supra note 82.

[87] What are some ways to start or rebuild a good credit history?, CFPB: Ask CFPB, https://www.consumerfinance.gov/ask-cfpb/what-are-some-ways-to-start-or-rebuild-a-good-credit-history-en-2155/ (Sep. 13, 2024).  

[88] St. Mary’s Univ. Sch. of L. Ctr. for Legal & Soc. Just., How to Dispute Errors in a Credit Report, Texas Law Help [hereinafter Ctr. for Legal & Soc. Just.], https://texaslawhelp.org/article/how-to-dispute-errors-in-a-credit-report (Feb. 27, 2023).  

[89] How do I dispute an error on my credit report?, CFPB: ask CFPB, https://www.consumerfinance.gov/ask-cfpb/how-do-i-dispute-an-error-on-my-credit-report-en-314/ (Dec. 12, 2024).

[90] Id.

[91] Id.

[92] Ctr. for Legal & Soc. Just., supra note 87.

[93] Loftsgordon, supra note 82.

[94] Loftsgordon, supra note 82.

[95] How do I get and keep a good credit score?, CFPB: Ask CFPB, https://www.consumerfinance.gov/ask-cfpb/how-do-i-get-and-keep-a-good-credit-score-en-318/ (Dec. 12, 2024).

[97] While detailed explanations of technology precautions are beyond the scope of this article, a VPN, or Virtual Private Network, is a service that creates a secure, encrypted connection between your device and the internet. Think of it as a private tunnel for your internet traffic. Pull The Shades Down On Your Browsing With A VPN, Nat’l Cybersecurity All. (July 3, 2023), https://www.staysafeonline.org/articles/pull-the-shades-down-on-your-browsing-with-a-vpn.
