AI Is Biased. Here’s How Scientists Are Trying to Fix It


Computers have learned to see the world more clearly in recent years, thanks to some impressive leaps in artificial intelligence. But you might be surprised—and upset—to know what these AI algorithms really think of you. As a recent experiment demonstrated, the best AI vision system might see a picture of your face and spit out a racial slur, a gender stereotype, or a term that impugns your good character.

Now the scientists who helped teach machines to see have removed some of the human prejudice lurking in the data they used during the lessons. The changes can help AI to see things more fairly, they say. But the effort shows that removing bias from AI systems remains difficult, partly because they still rely on humans to train them. “When you dig deeper, there are a lot of things that need to be considered,” says Olga Russakovsky, an assistant professor at Princeton involved in the effort.

The project is part of a broader effort to cure automated systems of hidden biases and prejudices. It is a crucial problem because AI is being deployed so rapidly, and in ways that can have serious impacts. Bias has been identified in facial recognition systems, hiring programs, and the algorithms behind web searches. Vision systems are being adopted in critical areas such as policing, where bias can make surveillance systems more likely to misidentify minorities as criminals.

A project called ImageNet played a key role in unlocking the potential of AI by giving developers a vast library for training computers to recognize visual concepts, everything from flowers to snowboarders. Scientists from Stanford, Princeton, and the University of North Carolina paid workers on Amazon's Mechanical Turk small sums to label more than 14 million images, gradually amassing a huge dataset that they released for free.

In 2012, when this dataset was fed to a large neural network, the result was an image-recognition system capable of identifying objects with surprising accuracy. The algorithm learned from many examples to spot the patterns that reveal high-level concepts, such as the pixels that make up the texture and shape of puppies. A contest launched to test algorithms trained on ImageNet showed that the best deep-learning algorithms classify images about as well as a person. The success of systems built on ImageNet helped trigger a wave of excitement and investment in AI and, along with progress in other areas, ushered in new technologies such as advanced smartphone cameras and automated vehicles.
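To make the mechanics concrete, here is a minimal sketch, in PyTorch, of how a labeled image library like ImageNet is typically used to train a classifier. It is an illustration, not the researchers' actual code; the directory layout, model choice, and hyperparameters are all assumptions.

```python
# Minimal sketch (not the researchers' actual pipeline) of training an
# image classifier on an ImageNet-style labeled dataset with PyTorch.
# The "train/" path, ResNet-50 choice, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# ImageFolder expects one subdirectory per label, e.g. train/puppy/001.jpg
train_set = datasets.ImageFolder("train/", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(num_classes=len(train_set.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:       # one pass over the labeled examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                 # the network adjusts its weights to
    optimizer.step()                # better match the human-given labels
```

Whatever patterns the human annotators baked into those labels, the loop above will faithfully learn, which is exactly how dataset bias becomes model bias.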

But in the years since, other researchers have found problems lurking in the ImageNet data. An algorithm trained on the data might, for example, assume that programmers are white men because the pool of images labeled “programmer” was skewed that way. A recent viral web project, ImageNet Roulette (accompanied by the essay “Excavating AI”), also highlighted prejudices in the labels attached to ImageNet, from presumptuous occupational labels such as “radiologist” and “puppeteer” to racial slurs like “negro” and “gook.” Through the project’s website (now taken offline), people could submit a photo and see the terms lurking in an AI model trained on the dataset. Such labels exist because the person annotating an image might have added a derogatory or loaded term alongside a label like “teacher” or “woman.”

The ImageNet team analyzed their dataset to uncover these and other sources of bias, then took steps to address them. They used crowdsourcing to identify and remove derogatory words. They also identified terms that project meaning onto an image, “philanthropist” for example, and recommended excluding such terms from AI training.
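In practice, this kind of cleanup amounts to filtering the label vocabulary before training. The sketch below illustrates the idea only; the flagged word sets are hypothetical stand-ins for the team's crowdsourced judgments, not their actual lists.

```python
# Illustrative sketch of the cleanup idea: drop labels flagged as offensive
# and labels whose meaning cannot be verified from a photo. Both sets here
# are hypothetical placeholders for the crowdsourced annotations.
offensive = {"slur_a", "slur_b"}                 # flagged as derogatory
non_imageable = {"philanthropist", "hypocrite"}  # meaning not visible in an image

def clean_labels(labels):
    """Keep only labels that are neither offensive nor non-imageable."""
    blocked = offensive | non_imageable
    return [label for label in labels if label not in blocked]

print(clean_labels(["teacher", "philanthropist", "snowboarder"]))
# -> ['teacher', 'snowboarder']
```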

The team also assessed the demographic and geographic diversity of the ImageNet photos and developed a tool to surface more diverse images. Ordinarily, for instance, the term “programmer” might produce lots of photos of white men in front of computers. With the new tool, which the group plans to release in the coming months, a subset of images showing greater diversity in gender, race, and age can be generated and used to train an AI algorithm.
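A rough sketch of the rebalancing idea follows. The tool itself is unreleased, so the attribute names and data records here are assumptions; the point is simply that sampling an equal number of images from each demographic group keeps any one group from dominating the training subset.

```python
# Hypothetical sketch of demographic rebalancing: given images annotated
# with an attribute (e.g. gender), draw the same number from each group.
# Field names and records are illustrative, not the team's actual schema.
import random
from collections import defaultdict

def balanced_subset(images, attribute, per_group, seed=0):
    """Return a subset with `per_group` images from each attribute value."""
    groups = defaultdict(list)
    for img in images:
        groups[img[attribute]].append(img)
    rng = random.Random(seed)       # fixed seed for reproducible sampling
    subset = []
    for members in groups.values():
        rng.shuffle(members)
        subset.extend(members[:per_group])
    return subset

images = [
    {"file": "a.jpg", "label": "programmer", "gender": "male"},
    {"file": "b.jpg", "label": "programmer", "gender": "female"},
    {"file": "c.jpg", "label": "programmer", "gender": "male"},
    {"file": "d.jpg", "label": "programmer", "gender": "female"},
]
print(balanced_subset(images, "gender", per_group=1))  # one image per group
```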


