Elon Musk has dismissed Twitter’s ‘Ethical AI’ team

As more and more problems with AI emerge, including racial, gender, and age bias, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such issues.

Twitter’s META unit has gone further than most in detailing problems with the company’s AI systems and in allowing outside researchers to probe its algorithms for new issues.

Last year, after Twitter users noticed that an image-cropping algorithm appeared to favor white faces when choosing how to crop a photo, Twitter made the unusual decision to let its META unit publish details of the bias it discovered. The group also launched one of the first “bias bounty” contests, allowing outside researchers to test its algorithms for other problems. Last October, Chowdhury’s team also published details of unintentional political bias on Twitter, showing that right-leaning sources were, in fact, promoted more than left-leaning ones.

Many outside researchers see the layoffs as a blow, not only to Twitter but to efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online misinformation, wrote on Twitter.


“The META team is one of the only good case studies of a tech company running an AI ethics team that interacts with the public and academia with considerable credibility,” said Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.

Alkhatib said Chowdhury is highly regarded in the AI ethics community, and that her team did genuinely valuable work holding Big Tech accountable. “There are not many corporate ethics groups worth taking seriously,” he said. “This was one of the ones whose work I taught in classes.”

Mark Riedl, an AI research professor at Georgia Tech, said the algorithms used by Twitter and other social media giants have a huge impact on people’s lives and need to be studied. “Whether META has had any impact inside Twitter is hard to tell from the outside, but the promise is there,” he said.

Riedl added that allowing outsiders to probe Twitter’s algorithms is an important step toward greater transparency and understanding of issues surrounding AI. “They have become a watchdog that can help the rest of us understand how AI is affecting us,” he said. “The researchers at META have strong credentials and long histories of studying AI for the benefit of society.”

As for Musk’s idea of open-sourcing the Twitter algorithm, the reality is much more complicated. There are numerous algorithms that affect how information is surfaced, and it is hard to understand them without the real-time data about tweets, views, and likes that is fed into them.

The idea that there is one algorithm with a clear political bias oversimplifies a system that may harbor more insidious biases and problems. Exploring these is exactly the kind of work Twitter’s META team was doing. “Not many groups rigorously study their own biases and algorithmic errors,” said Alkhatib. “META did it.” And now, it doesn’t.


