Thursday, April 23, 2020

A Survey of Fair and Responsible Machine Learning and Artificial Intelligence: Implications for Consumer Financial Services

by Stephen C. Rea, PhD, Research Assistant Professor at Colorado School of Mines and former IMTFI Research Assistant


Image: Capital One's Eno chatbot

Stephen Rea recently published a white paper surveying literature in computer science, law, and the social sciences on developments in machine learning and artificial intelligence, with a special focus on their implications for consumer financial services in the United States. The project grew out of a collaboration between IMTFI and Capital One's Responsible AI Program. We present here an excerpt from the introduction to the white paper, followed by some updates on recent developments in this space.

Excerpt
Machine learning (ML) algorithms and the artificial intelligence (AI) systems they enable are powerful technologies that have inspired considerable excitement, especially within large business and governmental organizations. In an era when increasingly concentrated computing power enables the creation, collection, and storage of “big data,” ML algorithms can identify non-intuitive correlations in massive datasets, and so can theoretically be more efficient and effective than humans at using those correlations to make accurate predictions. What is more, AI systems powered by ML algorithms promise a means of removing human prejudices from decision-making processes; because an AI system renders its decisions based solely on the data available to it, it can, in principle, avoid the conscious and unconscious biases that often influence human decision-makers.

Contrary to this rosy picture, though, decades of evidence demonstrate that these technologies are not as objective and unbiased as many might wish. Biases can be encoded in the datasets on which ML algorithms are trained, arising from poor sampling strategies, incomplete or erroneous information, and the social inequalities that exist in the actual world. And since ML algorithms and AI systems cannot build themselves, the humans who construct them may, however unintentionally, introduce their own biases when setting a model’s goals, selecting features, deciding which attributes are relevant, and developing classifiers. Additionally, the inherent complexity of ML algorithms, which defies explanation even for the most expert practitioners, can make it difficult, if not impossible, to identify the root causes of unfair decisions. That same opacity is an obstacle for individuals who believe they have been evaluated unfairly, want to challenge a decision, or try to determine who should—or even could—be held accountable for mistakes.
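
To make the feature-selection point concrete, here is a minimal sketch (mine, not the white paper's; the data, features, and correlations are entirely hypothetical) of how a "group-blind" decision rule can still reproduce group-level disparities when an innocuous-looking feature, such as a residential zone, correlates with a protected attribute:

    # Illustrative sketch of "bias in, bias out": even when a protected
    # attribute is excluded from a model, a correlated proxy feature can
    # reproduce it. All data and correlations here are hypothetical.
    import random

    random.seed(0)

    # Hypothetical applicants: group membership is never shown to the
    # decision rule, but residential zone correlates strongly with group.
    applicants = []
    for _ in range(10_000):
        group = random.choice(["A", "B"])
        in_typical_zone = random.random() < 0.9
        zone = group if in_typical_zone else ("B" if group == "A" else "A")
        applicants.append({"group": group, "zone": zone})

    # A "group-blind" rule that penalizes zone B, mirroring a historical
    # pattern of disinvestment encoded in the data.
    def approve(applicant):
        return applicant["zone"] == "A"

    # Audit: approval rates by the protected attribute the rule never saw.
    for g in ("A", "B"):
        members = [a for a in applicants if a["group"] == g]
        rate = sum(approve(a) for a in members) / len(members)
        print(f"group {g}: approval rate {rate:.0%}")
    # Prints roughly 90% for group A and 10% for group B: the proxy
    # smuggles the excluded attribute back into the decision.

The point is not that any real lender uses such a rule, but that excluding a protected attribute from a dataset does not, by itself, exclude its influence.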

Compared to other fields, the financial services industry has taken a relatively conservative approach to ML/AI integration. Consumer-facing applications like robo-advisors for portfolio management, AI-powered banking assistants, algorithmic trading programs, and proactive marketing tools, as well as ML-driven sentiment analysis of social media feeds and news stories in search of trendlines, have garnered a lot of media attention. However, the visibility of such initiatives in press releases and news items exaggerates their role in financial services today: they represent less than one-tenth of the funding received in the financial technology, or “fintech,” vendor space. Thus far, financial institutions have primarily invested in ML and AI to automate routine, back-office tasks, improve fraud detection and cybersecurity, and ease regulatory compliance.

The current state of ML and AI in consumer financial services, then, is one of enormous opportunity for innovation, but also of reasons for caution. To paraphrase the feminist geographer Doreen Massey, some individuals and groups are more on the “receiving end” of these technologies than others. In other words, ML and AI’s advantages and disadvantages are not equally distributed. Nor are the vulnerabilities entailed by digital surveillance techniques for data creation and collection, the harms that can follow from an erroneous data entry and the burden of correcting it, or the ability to affect how an algorithm interprets one’s individual attributes and characteristics. In many ways, ML/AI research’s most important contribution has been to provide quantifiable, documented evidence of the extent to which structural inequalities—that is, conditions by which one or more groups of people are afforded unequal status and/or opportunities in comparison to other groups—persist. If an organization’s reason for integrating ML- and AI-powered systems is to make its decision-making procedures both more accurate and fairer, then it is imperative to understand and account for persistent inequalities in the social contexts where those systems are embedded. Furthermore, assessing exactly how an algorithmic and/or automated decision-making system could impact specific populations, the risk that it could violate legal standards prohibiting discrimination, and the extent to which it could perpetuate structural inequalities is of the utmost importance when deciding whether to make the integration in the first place.

You can read the rest of the white paper on SSRN.

Updates
Work in ML and AI is fast-moving, and in the time since the paper was published, there have been a number of developments that will affect how these technologies are integrated into consumer financial services and beyond. Two in particular merit attention here:

1) Congressional action: On February 12, 2020, the U.S. House Committee on Financial Services' Task Force on Artificial Intelligence heard testimony from experts on AI, ML, and race and inclusion in a panel titled “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services.” The Committee acknowledged the usefulness of standards for the fairness and accuracy of AI applications in financial services, while also noting that existing laws such as the Equal Credit Opportunity Act, the Fair Housing Act, and the Fair Credit Reporting Act are in many respects inadequate for regulating AI's impact. The panel of experts recommended drafting a definition of "fairness" that could be used to evaluate ML, developing audit and assessment methods for locating biases in data and models, and requiring ML/AI developers to implement and report on continuous monitoring plans that can detect new biases as they emerge. The panelists also voiced concern that the Department of Housing and Urban Development's plans to revise the Fair Housing Act's disparate impact standards might exacerbate the discriminatory effects of AI in home lending.
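
One concrete example of such an audit method is the "four-fifths rule" used in disparate impact analysis, which flags a decision system when any group's favorable-outcome rate falls below 80% of the highest group's rate. The sketch below is a minimal, hypothetical illustration of that check (the hearing did not endorse any particular metric or implementation, and the data and threshold here are assumptions for demonstration):

    # Illustrative sketch: a minimal disparate-impact audit of approval
    # decisions using the "four-fifths rule." All data are hypothetical.
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group, approved) pairs."""
        counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
        for group, approved in decisions:
            counts[group][0] += int(approved)
            counts[group][1] += 1
        return {g: a / t for g, (a, t) in counts.items()}

    def disparate_impact_ratios(rates):
        """Each group's approval rate relative to the highest-rate group."""
        best = max(rates.values())
        return {g: r / best for g, r in rates.items()}

    # Hypothetical audit log of (protected-group label, approval decision).
    log = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 55 + [("B", False)] * 45)
    rates = approval_rates(log)
    for group, ratio in disparate_impact_ratios(rates).items():
        flag = "OK" if ratio >= 0.8 else "potential disparate impact"
        print(f"group {group}: approval {rates[group]:.0%}, "
              f"ratio {ratio:.2f} -> {flag}")

Run on this toy log, the audit reports group B's ratio as 0.69 and flags it; a continuous monitoring plan of the kind the panel recommended would rerun checks like this as new decisions accumulate.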

2) Sandvig v. Barr decision: In March 2020, the U.S. District Court for the District of Columbia delivered its ruling in Sandvig v. Barr, which challenged a provision of the Computer Fraud and Abuse Act (CFAA) that made it a crime for researchers and journalists to use "dummy" accounts to audit algorithms for possible discrimination. The American Civil Liberties Union had brought the lawsuit in 2016 on behalf of a group of academics and journalists led by Christian Sandvig of the University of Michigan's School of Information. The plaintiffs argued that the CFAA violated their First Amendment rights, noting that comparable research activities were not illegal in offline contexts. The Court ruled in favor of the plaintiffs, opening the door for more independent review of ML/AI applications and scoring an important victory for researchers' ability to hold algorithms, and the institutions that use them, accountable.

Additional Resources
AI Now Institute: https://ainowinstitute.org
Data & Society's AI on the Ground initiative: https://datasociety.net/research/ai-on-the-ground/

Thursday, April 16, 2020

The Use of Money to Maintain Connection (and Toilet Paper)

Pandemic Insights
Bill Maurer in Anthropology News, American Anthropological Association (AAA)



When I received an email containing one of the first Covid-19 payment memes—“How do you wish to pay?”, with an image of Visa, MasterCard, and a roll of toilet paper—I thought, OK, there might be some interesting payment and money things I should pay attention to during this pandemic. Students and colleagues sent me lots more toilet-paper-themed memes and jokes: scenes of barter, with the transactors trying to determine an exchange rate; toilet paper on a golden platter; stores accepting toilet paper rolls for one cup of coffee or one scoop of ice cream (“must be new; one roll per visit”); a satirical (I think!) account of a churchgoer placing a roll of toilet paper on the collection plate. One Sunday afternoon my family ordered delivery. Every order over $40 came with a free roll of toilet paper.

In between all the new kinds of work involved in transitioning a large university to fully online operations, I filed away the toilet paper stuff in a special email folder. Toilet paper economies.

But when the University of California, Irvine Graduate Division announced that it would henceforth accept scanned forms and digital signatures (it has long been a preserve and holdout of paper) and, more than that, suspend all fee payments associated with those forms—which heretofore had to be made by paper check—I thought, game on! This pandemic is changing payment.

Some establishments—my local ice creamery, for instance—have suspended all in-store payments and require customers to pay online for curbside pickup. Some places ask you to Venmo. CNN aired a segment on cash as a possible vector for viruses, and a company offering contactless tap-and-pay card readers has launched a marketing blitz. But cash is still there, too. In fact, cash demand is way, way up. It is up at the restaurants offering take-out, whose proprietors prefer the one-way transmission of cash to the back-and-forth passing of a card or card terminal between customer and vendor. It is up in the cash stashes people are starting to build. A reporter from the Financial Times called me asking why people are stockpiling cash (not just toilet paper) when the cash distribution system is not likely to be disrupted the way it has been when hurricanes and floods knock out servers or the electricity grid.

Continue reading the original post here:
https://www.anthropology-news.org/index.php/2020/04/09/the-use-of-money-to-maintain-connection-and-toilet-paper/