US senators introduced a bill on Wednesday that would allow the Federal Trade Commission to check whether corporations are using algorithms that are biased, discriminatory, or insecure.
The bill, known as the Algorithmic Accountability Act of 2019, was backed by Senator Ron Wyden (D-OR), Senator Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY).
The Algorithmic Accountability Act focuses on regulating “automated decision systems”. Congress considers these systems to be “computational processes” used to help companies make decisions that directly impact consumers. The systems are built on techniques from machine learning, statistics, or AI.
Tackling bias and discrimination is a hot subject in AI. Researchers have already shown that commercial facial recognition systems struggle more when trying to identify women and people of color. Experts in the AI community have urged the government to take a stronger approach to regulating tech companies, and it looks like the US government is slowly starting to take notice.
Wyden explained that these systems could potentially have a great impact on people’s lives if they were involved in determining things like loans, jobs, or recidivism.
“Computers are increasingly involved in the most important decisions affecting Americans’ lives – whether or not someone can buy a home, get a job or even go to jail,” he said. “But instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color.”
Under the act, corporations will have to submit impact assessments to FTC officials. These reports have to address the accuracy, fairness, bias, discrimination, privacy and security issues of any automated decision systems being deployed.
Companies will have to show how the system was built and designed, what kind of data was used to train it, and what it’s being used for. They also have to evaluate how beneficial it really is to customers, by considering the sensitivity of the information being used by the systems to make decisions and how that data is stored and used. Any potential risks that a consumer might face if these systems are inaccurate, unfair or biased have to be discussed as well.
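The bill does not prescribe how fairness or bias should be measured, but to make the idea of such an assessment concrete, here is a minimal, purely illustrative sketch of one common check an auditor might run: demographic parity, which compares a model's positive-outcome rate (say, loan approvals) across demographic groups. All data and function names here are hypothetical.

```python
# Illustrative sketch only: the Algorithmic Accountability Act does not
# mandate any particular fairness metric. This computes demographic parity,
# one common way to surface disparate outcomes across groups.

def selection_rates(decisions, groups):
    """Positive-decision rate per group (e.g. loan approval rate)."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups.

    0.0 means every group is selected at the same rate; larger values
    indicate a disparity an impact assessment would need to explain.
    """
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = approved, 0 = denied
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(selection_rates(decisions, groups))       # {'a': 0.75, 'b': 0.25}
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A real assessment would look at many such metrics (equalized odds, calibration, and so on) over far larger samples, but the shape of the analysis is the same: slice the system's decisions by protected attribute and quantify the disparity.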
“Algorithms shouldn’t have an exemption from our anti-discrimination laws,” Clarke said in a statement.
“Our bill recognizes that algorithms have authors, and without diligent oversight, they can reflect the biases of those behind the keyboard. By requiring large companies to not turn a blind eye towards unintended impacts of their automated systems, the Algorithmic Accountability Act ensures 21st Century technologies are tools of empowerment, rather than marginalization, while also bolstering the security and privacy of all consumers.”
The bill explicitly targets giant tech conglomerates like Facebook or Google. All companies that rake in more than $50m in gross annual income, have more than a million consumers, and collect personal information on their users will have to submit detailed risk assessments to the FTC. These reports can be released to the public if a corporation chooses to be more transparent, but there is no obligation to do so.
If the report is found to be unfair or deceptive, the FTC has the right to conduct its own investigation to uncover any information that has been withheld by companies. Under the Federal Trade Commission Act, the FTC can issue subpoenas to collect documents or evidence from people related to the investigation. ®