Fair Lending in the Age of Machines

    by Britt Faircloth, CRCM, Senior Regulatory Consultant at Wolters Kluwer

    Published January 04, 2019

    As published in ABA Banking Journal in January 2019.

    "COME WITH ME IF YOU WANT TO LIVE” is a widely known catch phrase from the popular Terminator film series, which dramatizes the basic tension between humans and machines, especially high-tech machines that learn and grow smarter on their own. With the advent of Artificial Intelligence (AI) and Machine Learning (ML) seeping into many aspects of the lending ecosystem, one wonders how the traditional methods of analyzing compliance with established principles of fair lending will evolve and survive.

    For those who bear the responsibility for fair lending compliance, words like “Artificial Intelligence,” “Machine Learning,” and “Alternative Data” may be a part of the general lexicon in discussions and strategy sessions these days. However, many financial institutions (FIs) may not fully understand the degree to which these algorithms, models, or platforms impact their underwriting, pricing, or advertising practices.

    Digital Redlining

    Consider, for example, the degree to which fair lending compliance officers fully understand their FI’s social media footprint, down to the level of activity by individual loan originators or third-party brokers. One very large social media platform is the subject of a recent complaint filed by the U.S. Department of Housing and Urban Development (HUD). HUD alleges that Facebook violates the Fair Housing Act (FHA) by enabling advertisers to control which users receive housing-related ads based upon the recipient’s race, color, religion, sex, familial status, national origin, disability, and/or ZIP code.

    The complaint alleges that advertisers are invited to express unlawful preferences by offering discriminatory options, allowing them to effectively limit housing options for these protected classes under the guise of “targeted advertising.” This includes giving landlords and developers advertising tools that make it possible to exclude people based on a number of prohibited characteristics, including gender, familial status, and location within minority tracts. In particular, the ability to draw lines around minority tracts, and to exclude those living in those tracts from receiving ads, brings forth a new risk to consider—digital redlining.

    “The Fair Housing Act prohibits housing discrimination including those who might limit or deny housing options with a click of a mouse,” said Anna María Farías, HUD’s assistant secretary for Fair Housing and Equal Opportunity. “When Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it’s the same as slamming the door in someone’s face.”

    While Facebook has denied these allegations and stated that it is changing its practices, this case should be considered a cautionary tale—one that encourages FIs to fully understand the new digital ecosystem. Do you really know what everyone at your FI is doing on all of the various social media platforms?

    As Artificial Intelligence and Machine Learning-based platforms, models, and algorithms quickly become the new norm, fair lending compliance teams need to understand all aspects of their digital footprint and how the underlying automated technology works, because that technology is not necessarily colorblind. Additionally, it is critical to analyze expansive amounts of data and work with partners, both internal and external, to assess and mitigate the associated risks. All of this will have to be done quickly—speed to market is critical in today’s environment. Compliance must be able to maintain robust controls without hindering the business.

    Here are some key considerations for managing the rise of the machines.

    Novel Approach = Novel Risk

    The first step in managing the risks associated with Artificial Intelligence, Machine Learning, and Alternative Data is to understand, at least at a very basic level, what they are. Artificial Intelligence and Machine Learning are terms that are often used interchangeably, which can cause some confusion. Amazon, arguably one of the more prolific users of the technology, defines Artificial Intelligence as “the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition.”

    AI is already a part of everyday life, though it may not always be recognizable as such. Virtual assistants, such as Siri, Cortana, and Alexa, use voice recognition functionality powered by AI. Within your institution, you may be using AI-based functionality for activities such as fraud monitoring, or AI-based chatbots that learn about users’ financial profiles and offer predictions and advice around finances.

    If we think of AI as an umbrella, one item underneath it would be Machine Learning. Machine Learning is a set of tools that allows a computer to “think” using mathematical algorithms based on accumulated data. It is flexible—it can change and grow based upon the data presented. That flexibility is fundamentally what drives the increased risk, and our suspicion as fair lending advocates: a computer can and will “learn” from a data set in ways that may inject a prohibited bias that is not readily transparent.

    Alternative Data is also a part of that increased risk. With Machine Learning, we are generally giving computers access to very large data sets, which include Alternative Data, and letting them find and use patterns to make or assist in making underwriting, pricing, or marketing decisions. Alternative Data can be described as information used to evaluate creditworthiness that is not typically part of a consumer’s credit profile. While this can include factors that would seem very reasonable, such as rental history or utility payments, it can also include data points that may carry more risk.
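
    To make that proxy risk concrete, the following is a minimal sketch in Python (pandas) of one way an institution might screen Alternative Data fields for correlation with a protected-class indicator before they ever reach a model. The file and column names here (applications.csv, minority_tract, and the candidate fields) are purely hypothetical illustrations, not references to any actual system.

```python
# Minimal sketch: screen hypothetical Alternative Data fields for
# correlation with a protected-class indicator. High correlations flag
# potential proxies for prohibited characteristics and merit review.
import pandas as pd

def flag_potential_proxies(df: pd.DataFrame, protected_col: str,
                           candidate_cols: list[str],
                           threshold: float = 0.3) -> pd.Series:
    """Absolute correlation of each candidate field with the protected
    indicator; values above the threshold deserve a closer look."""
    corr = df[candidate_cols].corrwith(df[protected_col]).abs()
    return corr[corr > threshold].sort_values(ascending=False)

# Hypothetical extract: one row per application, with a 1/0 indicator
# for location in a majority-minority census tract.
apps = pd.read_csv("applications.csv")
print(flag_potential_proxies(
    apps, protected_col="minority_tract",
    candidate_cols=["rental_history_score", "device_type_ios",
                    "social_media_score", "browser_type_code"]))
```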

    Nonbank financial firms such as marketplace lenders generally report greater use of less-traditional data sources and newer modeling approaches, including ones based upon Machine Learning. This is confirmed in the recent U.S. Department of the Treasury report titled “A Financial System That Creates Economic Opportunities: Nonbank Financials, Fintech, and Innovation,” which includes a discussion on new credit models and data sources that “have the potential to meaningfully expand access to credit.”

    In this report, the Treasury presents a table of types of credit data points that lists some surprising potential non-traditional data sources, part of the explosion in the types of data that may be available, including social media, browsing history, behavioral data, shopping patterns, and data about consumers’ friends and associates. That list is by no means exhaustive. These days, it is not unusual to hear about institutions exploring correlations between creditworthiness and factors such as the college attended, retirement balances, whether an applicant uses an Apple or an Android-based phone, whether and how a consumer uses emojis, or even the type of web browser being used.

    Treasury does note that these more novel approaches raise certain risk considerations:

    “New models and data may also unintentionally run the risk of producing results that arguably risk violating fair-lending laws if they result in a ‘disparate impact’ on a protected class or because the FTC or the CFPB might find the use of such models and data to be a violation of UDAP or UDAAP, respectively.”

    Therefore, FIs should be mindful that responsible innovation includes adequately identifying and managing all associated risks. Alternative Data sources, AI and ML all present novel challenges that require equally innovative risk management solutions.

    Vendor Management

    The risks raised by new technology do not change the institution’s responsibility to manage those risks, whether the technology is developed in-house or applied when partnering with a third party. The regulatory agencies have all made it clear via regulatory guidance and subsequent enforcement actions that FIs retain ownership of risk when outsourcing to a third-party vendor.

    As with any third-party relationship, robust due diligence on the front end can assist in managing the risk throughout the relationship with a fintech or non-traditional FI. Consider the type of contract provisions you want included, such as access to vendor systems, processes, methodologies, algorithms, and/or data for monitoring and analytics.

    Also consider the degree of customer interface and access to information such as complaints and error resolution, as well as who would bear the cost of remediation if required by your regulator. Many vendors will try to deflect transparency with claims that the requested access is “proprietary” in nature, so institutions will need to understand their risk appetite as they enter into these relationships. Monitoring will, of course, play a key role in managing risk and preventing the digital ecosystem from becoming a black box.

    Monitoring Fair Lending Risks

    Whether monitoring your in-house activity or your third-party partner, robust monitoring is critical to understanding and managing risk. Institutions need to be creative and open to new approaches for gathering and analyzing data to self-identify potential issues and avoid any inadvertent prohibited bias. At its core, a comprehensive fair lending monitoring program should include the type of monitoring that has long served FIs for more traditional underwriting and pricing models—and which involves comprehensive data analytics.

    Consider keeping a careful watch on the following:

    Approval and denial rates by prohibited basis category. Are you seeing significant disparities in the approval and denial rates between control group and prohibited basis group applicants? Are the disparities for products or business units using these technologies different from those using more traditional underwriting models? Some differences can be expected—remember that the Treasury report indicated a belief that Alternative Data could help expand access to credit for those with thin or no credit files.

    If that is the case, we would hope to see an increase in the approval rates for groups that were previously unbanked or underserved. If, however, that disparity shows higher rates of denial for those groups when using Alternative Data, that may indicate a need for additional research or changes to the algorithms, as appropriate.
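
    As a rough illustration of this kind of monitoring, here is a minimal Python (pandas) sketch that computes denial rates by group and the ratio of each group’s rate to the control group’s. The file and column names (decisions.csv, pb_group, action) are hypothetical.

```python
# Minimal sketch of denial-rate disparity monitoring on a hypothetical
# extract of decisioned applications.
import pandas as pd

apps = pd.read_csv("decisions.csv")  # one row per decisioned application

# Denial rate for each prohibited basis group and the control group.
denial_rates = (apps.assign(denied=apps["action"].eq("denied"))
                    .groupby("pb_group")["denied"].mean())

# Ratio of each group's denial rate to the control group's; ratios well
# above 1.0 warrant further analysis such as regression or file review.
disparity_ratios = denial_rates / denial_rates["control"]
print(disparity_ratios.sort_values(ascending=False))
```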

    Average price by prohibited basis category. While some variance in pricing is normal, it is important to monitor frequently for pricing disparities and to understand them. This becomes particularly important when using Alternative Data factors in pricing loans. Are all, or a majority of, your prohibited basis group borrowers receiving the highest rates? That may be cause for concern, particularly if those disparities cannot be explained with regression and/or other further analysis, such as comparative file review.
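
    A similar sketch for pricing, again with hypothetical names (originations.csv, pb_group, apr), compares average rates across groups and expresses the gap in basis points, assuming APR is stored as a percentage:

```python
# Minimal sketch of pricing-disparity monitoring on a hypothetical
# extract of originated loans.
import pandas as pd

loans = pd.read_csv("originations.csv")  # one row per originated loan

avg_rate = loans.groupby("pb_group")["apr"].agg(["mean", "count"])
# Gap versus the control group, in basis points (APR held as a percent).
avg_rate["bps_vs_control"] = (avg_rate["mean"]
                              - avg_rate.loc["control", "mean"]) * 100
print(avg_rate)  # large gaps call for regression or file review
```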

    Application volume. While most institutions already monitor application volume for HMDA reportable loans, this may need to be expanded to other types of lending. For institutions that are using Alternative Data via a third party in the marketing space, monitoring application volume should be considered standard for all product types.

    Are there groups that are represented in the demographics that are not represented in your population of applications? If there has been a recent marketing campaign, for example, are all of the applications from that campaign from non-minority areas or from other control group categories? If so, that could be indicative of a potential issue with the criteria.
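
    One simple way to frame that comparison, sketched below with hypothetical file and column names, is to measure the share of applications coming from majority-minority tracts against the share of such tracts in the assessment area:

```python
# Minimal sketch: compare where applications come from against the
# demographics of the assessment area (hypothetical tract-level data).
import pandas as pd

apps = pd.read_csv("applications.csv")       # one row per application
tracts = pd.read_csv("assessment_area.csv")  # one row per census tract

app_share = (apps["tract_minority_pct"] >= 50).mean()
area_share = (tracts["minority_pct"] >= 50).mean()
print(f"Applications from majority-minority tracts: {app_share:.1%}")
print(f"Majority-minority tracts in assessment area: {area_share:.1%}")
# A large shortfall after a marketing campaign may point back to the
# targeting criteria and merits investigation.
```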

    As with all monitoring activities, make sure that results are communicated as appropriate and are used to improve the process. All applicable committees and management should understand the results and be prepared to take action as needed to control risk.

    Advanced analytics. The basic monitoring above is a start and should be considered mandatory for any robust fair lending program. Recognize that advanced analytics may be required as well. Where there are disparities in denial rates or pricing, statistical regression may be necessary. Assuming you understand what data points are included in the Alternative Data utilized, regression makes it possible to see the influence each factor is having on the decision made or the pricing granted. Once the factors influencing the results are understood, an institution can determine whether it is comfortable with the factors being utilized or would prefer to remove them.
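
    As one illustration of what such a regression might look like, the sketch below fits a logistic model of the denial decision using statsmodels; every factor and column name (credit_score, dti, ltv, rental_history_score, pb_group) is hypothetical, and a real model would use the institution’s own decision factors:

```python
# Minimal sketch of a denial-outcome regression with statsmodels, to
# estimate the influence each factor has on the decision.
import pandas as pd
import statsmodels.formula.api as smf

apps = pd.read_csv("decisions.csv")
apps["denied"] = apps["action"].eq("denied").astype(int)

# Control for legitimate underwriting factors, including any Alternative
# Data fields, alongside the prohibited basis group indicator.
model = smf.logit(
    "denied ~ credit_score + dti + ltv + rental_history_score"
    " + C(pb_group)", data=apps).fit()
print(model.summary())
# A significant coefficient on the group indicator after these controls
# signals residual disparity that the listed factors do not explain.
```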

    Complaints. As with implementing any new product or service, complaints can point to areas of increased risk. Are you seeing an increase or a concerning trend in complaints? If so, this may suggest an underlying issue—particularly if there are complaints alleging discrimination. Not every complaint that alleges discrimination will be indicative of a true problem, but sometimes where there is smoke, there is fire.

    Remain Flexible

    The age of machines is here. For many, this is an exciting time to be in fair lending! It gives us a chance to play an important role at the leading edge of methodology and practices that help keep lending fair, and we have the potential to expand access to credit to previously unbanked or underserved populations. So perhaps that means the machines aren’t all that bad—that is, if they don’t become too self-aware!

    ABOUT THE AUTHOR

    Britt Faircloth, CRCM, is a senior regulatory consultant on the Advisory Services team at Wolters Kluwer (www.wolterskluwerfs.com/consulting). In this role, she brings over 18 years of relevant banking and regulatory compliance experience to assist institutions of all sizes in performing fair lending data analytics, CRA self-assessments, redlining and REMA analysis, and other HMDA-related analytics. Britt can be reached at britt.faircloth@wolterskluwer.com.



