Researchers from Cornell University discovered that artificial intelligence systems designed to identify offensive "hate speech" flag comments purportedly made by minorities "at substantially higher rates" than remarks made by whites. Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post "hate speech." In a study published in May, researchers at Cornell discovered that these systems "flag" tweets that likely come from black social media users more often, according to Campus Reform. The study's authors found that, according to the AI systems' definition of abusive speech, "tweets written in African-American English are abusive at substantially higher rates." The study also revealed that "black-aligned tweets" are "sexist at almost twice the rate of white-aligned tweets." https://pluralist.com/ai-censorship-cornell-study/45566/ I suppose there are a few possibilities here. It could be that the designers of this AI are racist/sexist. It could be that the AI is incorrect. Or it could be that the AI is correct, and minorities engage in more "hate speech." How will this study be handled? Will it just be ignored?
Just a guess, but the system is probably tripping over the "N" word being used in social media. If Blacks say it, it's fine, but an AI might not know that.
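To illustrate the guess above: a purely keyword-based filter has no notion of who is speaking, so in-group (reclaimed) usage and hostile out-group usage score identically. This is a minimal hypothetical sketch, not the actual system from the study; `slur_x` is a placeholder token standing in for any blocklisted word.

```python
# Hypothetical keyword-based flagger (not the system studied at Cornell).
# "slur_x" is a placeholder token, not a real blocklist entry.
BLOCKLIST = {"slur_x"}

def flag(tweet: str) -> bool:
    """Flag a tweet if any token matches the blocklist.

    Note there is no speaker context: the function only sees the text,
    so it cannot distinguish friendly in-group usage from an attack.
    """
    tokens = tweet.lower().split()
    return any(t.strip(".,!?") in BLOCKLIST for t in tokens)

print(flag("what up slur_x"))      # True  (friendly usage, still flagged)
print(flag("go home, slur_x!"))    # True  (hostile usage, flagged)
print(flag("have a nice day"))     # False
```

If usage of a blocklisted word is more common in one dialect, this kind of filter will flag that dialect at a higher rate no matter who wrote the tweet or why.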
Perhaps that's a part of it, but it wouldn't explain the increased sexism, would it? If it's OK for blacks to say it, how is the AI supposed to know that? How does it determine what is OK and what isn't?
AI hasn't been programmed yet to know it's OK when minorities do it. Of course, even an AI that identifies hate speech will be called racist if it concludes minorities engage in it.
Obviously Twitter and Facebook will have to require that everyone's race and sex be part of registration and your profile. That way they know which comments to ignore so they can get to the serious business of tracking down and banning white guys from social media.
This reminds me of the study that showed half of online misogyny came from women. https://www.bbc.com/news/technology-36380247
I'm confident the AI programmers will be able to reprogram their AI to show less hate speech coming from minorities, and more coming from whites. What good is AI if it doesn't give you the results you want it to give you?
So if you check the box that says you are black, you can be sexist and racist? Isn't it racist to have different speech codes based on race?
You mean like this flub by Google? Lots more here in case someone thinks this didn't happen: https://www.google.com/search?q=goo...HQHhAO0QuIIBegQIARAs&biw=1600&bih=763&dpr=1.2
Zero people are surprised by this except the ones who scream "white privilege," who will call this racist.
Well you see, since minorities don't control the power structure, they can't be racist, so these AI results are meaningless. It's the same with the recently revealed mass shooting statistics. It's only a mass shooting if a crazy white guy does it randomly or targets specific types of people. When a minority shoots 4 or more people, they are less injured, dead, and terrorized because a crazy white guy wasn't behind the trigger.