
MIT’s new AI can spot its own bias and fix it

AI still has a lot to learn

ANYBODY WHO THINKS that artificial intelligence will spell an end to human prejudices and subconscious bias is in for a nasty shock. If they ever notice, that is.

The problem with training artificial intelligence is that it has to learn from somewhere, and humans make for lousy teachers. Even the actual qualified teachers.

Take a criminal justice bot, for example. While the idea that human judges hand out harsher sentences when hungry has thankfully been discredited, the bot still has to get its data from somewhere, and that’s obviously the past. A ProPublica study found that an AI designed to predict whether prisoners would reoffend was wrongly flagging black prisoners as future reoffenders at roughly twice the rate of white ones. Why? Because the arrest records showed exactly that pattern, ignoring wider societal factors such as racial profiling. Yet when the AI’s actual assessments were tested, prisoners given the same score were re-arrested at the same rate regardless of skin tone. In other words, the AI was learning correlation, not causation.
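
A toy simulation makes that failure mode concrete. The code below is purely illustrative (the numbers are invented, and this is not ProPublica’s methodology): two groups reoffend at exactly the same rate, but one is policed twice as heavily, so a risk score built from arrest records inherits the enforcement gap rather than the underlying behaviour.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)        # two equally sized groups
reoffend = rng.random(n) < 0.30      # identical true rate for both

# Arrests are the only labels the model ever sees, and group 1 is
# policed twice as heavily, so the same behaviour is recorded twice
# as often.
arrest_prob = np.where(group == 1, 0.60, 0.30)
arrested = reoffend & (rng.random(n) < arrest_prob)

# A naive risk score built from observed arrest rates reproduces the
# enforcement disparity, not the (equal) underlying behaviour.
for g in (0, 1):
    true_rate = reoffend[group == g].mean()
    score = arrested[group == g].mean()
    print(f"group {g}: true reoffence rate {true_rate:.2f}, "
          f"arrest-based risk score {score:.2f}")
```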

It’s the same wherever you look, and not just in terms of race: CV-sorting AIs often disregard women and minorities, AIs judging beauty contests don’t like black skin, and even chatbots can end up citing Hitler as an influence given the right (or, more accurately, wrong) training set.

The Massachusetts Institute of Technology (MIT) is working on a solution: an AI that can “de-bias” its training data by resampling it for added balance. It works by learning not only how to do the task itself (e.g. facial recognition) but also how the data supporting that task is structured.
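
To make the idea concrete, here’s a minimal sketch of resampling by learned latent structure. It is not MIT’s implementation: the team’s paper trains a variational autoencoder that learns the latent features jointly with the classifier and adjusts sampling probabilities during training. This toy version assumes the latent codes are already available and uses a simple histogram density estimate; the function name and parameters are illustrative.

```python
import numpy as np

def debias_sampling_weights(latents, bins=10, alpha=0.01):
    """Hypothetical helper: weight each training example inversely to
    how common its latent features are, so under-represented examples
    are drawn more often when the training set is resampled."""
    n, d = latents.shape
    # Approximate the joint latent density as a product of
    # per-dimension histogram densities.
    log_density = np.zeros(n)
    for j in range(d):
        hist, edges = np.histogram(latents[:, j], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, j], edges[1:-1]), 0, bins - 1)
        log_density += np.log(hist[idx] + 1e-12)
    density = np.exp(log_density - log_density.max())
    # Rare latent combinations get a larger sampling probability;
    # alpha smooths the reweighting so nothing dominates entirely.
    weights = 1.0 / (density + alpha)
    return weights / weights.sum()

# Toy usage: an imbalanced 2-D latent space (90% cluster A, 10% cluster B).
rng = np.random.default_rng(0)
z = np.vstack([rng.normal(0, 1, (900, 2)), rng.normal(5, 1, (100, 2))])
p = debias_sampling_weights(z)
resampled = rng.choice(len(z), size=len(z), p=p)
# The minority cluster now makes up far more than 10% of the draws.
print((resampled >= 900).mean())
```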

The team tested the AI on data gathered in one of its own previous studies and found that “categorical bias” was reduced by over 60% while maintaining overall performance.

“Facial classification in particular is a technology that’s often seen as ‘solved,’ even as it’s become clear that the datasets being used often aren’t properly vetted,” said Alexander Amini, the paper’s co-author. “Rectifying these issues is especially important as we start to see these kinds of algorithms being used in security, law enforcement and other domains.”

Of course, this isn’t the first time humans have come up with ways to keep their biases in check for the benefit of impressionable young AIs, but it’s probably worth having more than one team working on the problem. If there’s one thing we’ve learned from all this, after all, it’s that humans remain as fallible as ever. µ
