DOJ & FTC targeting discrimination & bias in companies’ automated systems
Can automated payment systems be guilty of discriminating against certain groups of people? The federal government says yes, they absolutely can – and the feds plan to fine and possibly prosecute companies that rely on automated systems to do business.
The Federal Trade Commission (FTC) is planning a multi-agency enforcement push against “discrimination and bias in automated systems” in conjunction with the Consumer Financial Protection Bureau, the Department of Justice’s (DOJ) Civil Rights Division and the Equal Employment Opportunity Commission.
Companies that are using AI-based systems, or are planning to do so, could face a civil rights case if the feds can show bias based on customers’ race, gender, sexual orientation or religion. The Biden administration’s chief focus here is racial discrimination against African American-owned companies and customers, an emphasis similar to the EPA’s formation of an office devoted entirely to environmental justice.
Historical data could lead to claims of discrimination
According to the FTC’s news release announcing the multi-pronged enforcement action, “many automated systems rely on vast amounts of data to find patterns or correlations, and then apply those patterns to new data to perform tasks or make recommendations and predictions. While these tools can be useful, they also have the potential to produce outcomes that result in unlawful discrimination.”
In the case of a payment system, for example, red-flagging customers who pay late, followed by multiple emails reminding those clients of their debts, could lead to trouble for companies. Also: Businesses that are seen as “too aggressive” in their credit & collection practices may spark whistleblower complaints. And it goes without saying that lenders should be on alert.
The agencies plan to cast a very wide net. They’ll be taking a close look at data and datasets used in AI systems based on the premise that data can be inherently discriminatory: “Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.”
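To make the idea of a skewed outcome concrete, here is a minimal, purely illustrative sketch of the kind of check a business might run on its own automated system: it compares red-flag rates across two hypothetical customer groups and computes a disparate impact ratio against the EEOC-style four-fifths (80%) guideline. The data, group names and threshold usage below are assumptions for illustration only and are not drawn from the agencies’ joint statement.

```python
# Illustrative audit sketch: compare automated "red flag" rates across groups
# and compute a disparate impact ratio (EEOC-style four-fifths guideline).
# All records below are synthetic and hypothetical.

from collections import defaultdict

# Hypothetical outcomes from an automated payment-review system:
# (group, flagged) pairs, where flagged=True means the customer was red-flagged.
records = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

# Tally totals and flags per group.
totals = defaultdict(int)
flags = defaultdict(int)
for group, flagged in records:
    totals[group] += 1
    if flagged:
        flags[group] += 1

# Flag rate per group, then the rate of the favorable outcome (not being flagged).
flag_rates = {g: flags[g] / totals[g] for g in totals}
favorable_rates = {g: 1 - r for g, r in flag_rates.items()}

# Disparate impact ratio: lowest favorable-outcome rate divided by the highest.
ratio = min(favorable_rates.values()) / max(favorable_rates.values())

print("Flag rates by group:", flag_rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths guideline threshold
    print("Potential adverse impact -- warrants closer review.")
```

A check like this only surfaces a disparity in outcomes; it doesn’t explain why the disparity exists or whether it is legally defensible, which is exactly why the agencies are also looking at the underlying data and at how well systems are understood.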
In addition to data and datasets, the agencies will focus on how well businesses understand the automated systems they rely on. The FTC says many AI systems are little more than “‘black boxes’ whose internal workings aren’t clear to most people … this lack of transparency often makes it all the more difficult for developers, businesses, and individuals to know whether an automated system is fair.”