DOJ & FTC targeting discrimination & bias in companies’ automated systems
Can automated payment systems be guilty of discriminating against certain groups of people? The federal government says yes, they absolutely can – and the feds plan to fine and possibly prosecute companies that rely on automated systems to do business.
The Federal Trade Commission (FTC) is planning a multi-agency enforcement push against “discrimination and bias in automated systems” in conjunction with the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s (DOJ) Civil Rights Division and the Equal Employment Opportunity Commission (EEOC).
Companies that use AI-based systems, or plan to, could face a civil rights case if the feds can show bias based on customers’ race, gender, sexual orientation or religion. The Biden administration’s chief focus here is racial discrimination against African American-owned companies and customers, much as the EPA formed an office devoted entirely to environmental justice.
Historical data could lead to claims of discrimination
According to the FTC’s news release announcing the multi-pronged enforcement action, “many automated systems rely on vast amounts of data to find patterns or correlations, and then apply those patterns to new data to perform tasks or make recommendations and predictions. While these tools can be useful, they also have the potential to produce outcomes that result in unlawful discrimination.”
In the case of a payment system, for example, red-flagging customers who pay late, followed by multiple emails reminding those clients of their debts, could land companies in trouble. Businesses seen as “too aggressive” in their credit and collection practices may also spark whistleblower complaints. And it goes without saying that lenders should be on alert.
The agencies plan to cast a very wide net. They’ll be taking a close look at data and datasets used in AI systems based on the premise that data can be inherently discriminatory: “Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.”
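The kind of skew the agencies describe can often be surfaced with simple statistics. As an illustration only, here is a minimal Python sketch (all group names and data are invented) of the “four-fifths rule” heuristic that regulators have long used to screen selection outcomes for possible disparate impact:

```python
# Hypothetical audit sketch: screen an automated decision log for
# disparate impact using the four-fifths (80%) rule heuristic.
# All data below is fabricated for illustration; a real audit would
# use the system's actual decision records.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the approval rate per group.

    decisions: list of (group, approved) pairs, approved is a bool.
    """
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose approval rate is below `threshold`
    (80% by default) of the best-performing group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Example: a payment system's historical approve/deny log (made up).
log = [("A", True)] * 80 + [("A", False)] * 20 \
    + [("B", True)] * 50 + [("B", False)] * 50

print(disparate_impact_flags(log))
# Group B's 50% approval rate is under 80% of group A's 80% rate,
# so group B is flagged for review.
```

A flag here does not prove unlawful discrimination; it is a screening signal that the outcome warrants a closer look, which is exactly the kind of review the agencies say companies should already be doing.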
In addition to data and datasets, the agencies will focus on how much users understand their automated systems. The FTC says many AI systems are little more than “‘black boxes’ whose internal workings aren’t clear to most people … this lack of transparency often makes it all the more difficult for developers, businesses, and individuals to know whether an automated system is fair.”