Always Providing You With Ongoing Information


Racism Spreading to Educational Listservs for Writing; Many Students Want Out


The Writing Program Administration Listserv, or WPA-L, is an invaluable resource for disciplinary news, opportunities, advice and research. It’s also a kind of community for the many writing scholars who work in temporary academic positions off the tenure track.

So an anonymous post referencing the Ku Klux Klan has jarred Listserv subscribers — some of whom now want the list to be formally moderated, or moved altogether. Many have left the list. Other subscribers say this most recent post is only an overt example of the everyday racism that happens on the Listserv.

Others still oppose moderation of the list and insist on the online community’s ability to continue to informally moderate itself — disposing of hate speech when and where it happens.

The anonymous troll’s KKK-inspired post was sparked by online discussions about this month’s Conference on College Composition and Communication convention in Pittsburgh — specifically an address by Asao Inoue, professor of interdisciplinary arts and sciences and director of University Writing and the Writing Center at the University of Washington at Tacoma.

Inoue in his speech talked about the “market of white language preferences in schools” and “freedom from white language supremacy.”

He argued that by using a “single standard to grade your students’ languaging, you engage in racism. You actively promote white language supremacy, which is the handmaiden to white bias in the world.” That kind of bias, he also argued, is the very kind that “kills black men on the streets by the hands of the police through profiling and good ol’ fashion [sic] prejudice.”

Civil discussions, including praise and criticism of the talk, followed on the Listserv. But then someone identifying himself or herself as the “Grand Scholar Wizard” — a clear reference to “Grand Wizard,” or KKK leader — weighed in.

Inoue said Wednesday that he’s asking teachers and others, when it comes to judging language, “not to give up a personal standard but to be compassionate to others — that is a harder thing to do.”

Facebook Is Banning White Nationalism and White Separatism


Facebook, the world’s biggest social media network, banned white nationalism and white separatism on its platform Tuesday. Facebook will also begin directing users who try to post content associated with those ideologies to a nonprofit that helps people leave hate groups, Motherboard has learned.

The new policy, which will be officially implemented next week, highlights the reach of Facebook’s content rules, which govern the speech of more than 2 billion users worldwide. Facebook will still have to enforce the policy effectively if it is to meaningfully reduce hate speech on its platform.

Social Media Giants Face New Criticism Amid Shooting In New Zealand


Social media giants, including YouTube, Facebook and Twitter, are facing new criticism after they struggled to block livestreamed footage of a gunman shooting worshippers at a mosque in New Zealand.

Users uploaded and shared clips of the disturbing 17-minute livestream faster than social media companies could remove them.

The companies were already under scrutiny over the rise in online extremist content, but Friday’s troubling incident also underscored big tech’s difficulties in rooting out violent content as crises unfold in real time.

In a live point-of-view video uploaded to Facebook on Friday, the killer shot dozens of mosque-goers, at one point returning to his car to stock up on weapons.

Critics hammered tech companies, accusing them of failing to get ahead of the violent video spreading.

“Tech companies have a responsibility to do the morally right thing,” Sen. Cory Booker (D-N.J.), a 2020 presidential contender, told reporters on Friday. “I don’t care about your profits.

“This is a case where you’re giving a platform to hate,” Booker continued. “That’s unacceptable and should have never happened, and it should have been taken down a lot more swiftly. The mechanisms should be in place to allow these companies to do that.”

“The rapid and wide-scale dissemination of this hateful content — live-streamed on Facebook, uploaded on YouTube and amplified on Reddit — shows how easily the largest platforms can still be misused,” Sen. Mark Warner (D-Va.) said.

Facebook said it took down the video as soon as it was flagged by the New Zealand Police. But that response suggested artificial intelligence (AI) tools and human moderators had failed to catch the livestream, which went on for 17 minutes.

By the time Facebook suspended the account behind the video, an hour and a half after it was posted, the footage had already proliferated across the Internet with thousands of uploads on Twitter, Instagram, YouTube, Reddit and other platforms.

More than 10 hours after the attack, the video could still be found through searches on YouTube, Twitter and Facebook, even as those companies said they were working to prevent the footage from spreading.

YouTube by Friday evening had removed thousands of videos related to the incident. Facebook and Twitter did not share numbers, but both said they were working overtime to remove the content as it appeared.

“Shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube,” a YouTube spokesperson said. “As with any major tragedy, we will work cooperatively with the authorities.”

Facebook in May 2017 announced that it was hiring 3,000 more content moderators to deal with the issue of graphic video content, a move that Mary Anne Franks, a law professor at the University of Miami, said amounted to “kicking the can down the road.”


Hours before the shooting, the suspect apparently posted a manifesto on Twitter and announced his intention to livestream the mass shooting on 8chan, a fringe chatroom that he frequented.

New Zealand police confirmed the suspected gunman had penned the white nationalist, anti-immigrant screed, which runs more than 70 pages.

Twitter deleted the account in question hours after the shooting took place, and it has been working to remove reuploads of the video from its service.

Facebook, Twitter, YouTube and other leading social media platforms have been grappling with how to handle extremist and white nationalist content for years, particularly as anti-immigrant sentiment has spiked in the U.S. and Europe. The companies have struggled to draw the line between freedom of speech and incendiary propaganda that has the potential to radicalize users.

In the U.S., because of Section 230 of the Communications Decency Act, the platforms are not held legally liable for what users post. Tech advocates credit that law with empowering the internet, but some lawmakers have questioned whether it should be changed.

Alabama newspaper editor who urged Klan to ‘ride again’ replaced by African-American woman


TikTok Has a White Supremacy Problem


Motherboard has discovered that users on TikTok, the mega-popular lip-synching app favored by children and teens, are sharing calls for violence against people of color and Jews, as well as creating and sharing neo-Nazi propaganda.

Some account names read, verbatim, “kill all n*****,” “all jews must die,” and “killn******.” (The words are uncensored on the app, which is a sort of melding of Vine and Instagram that allows users to create short videos synced to music.)

Motherboard found the content on the Chinese-made app, which is used by hundreds of millions of people, including many teenagers and children in the United States, within minutes of starting a basic search.


Caption: One of the videos Motherboard found on TikTok promoting white supremacy. Image: Motherboard.

One video contained a succession of users making Nazi salutes. Another video included the message, “I have a solution; a final solution,” referring to the Holocaust.

One TikTok video Motherboard found, which encourages viewers to read Siege, a book popular with neo-Nazis, included the hashtag #FreeDylannRoof. Roof was given nine consecutive life sentences for the 2015 massacre of nine African Americans at the historically Black Emanuel African Methodist Episcopal Church in Charleston, South Carolina.

Racist Rants Still Rattling @ Columbia University


A Columbia University student shouted that “white people are the best thing that ever happened to the world” on Sunday evening during a racist tirade in front of students of color, who caught the rant on video.

“We invented science and industry, and you want to tell us to stop because ‘oh, my God, we’re so bad,’” the student said, skipping around the small crowd of students. “We saved billions of people from starvation. We built modern civilization. White people are the best thing that ever happened to the world. We are so amazing. I love myself and I love my people. [Fuck] yeah, white people! [Fuck] yeah, white men! We’re white men, we did everything.”

https://www.insidehighered.com/news/2018/12/11/columbia-student-goes-racist-tirade-fellow-students

MIT Study Reveals That Artificial Intelligence Is Racist & Sexist

An MIT study has revealed that the way artificial intelligence systems collect data often makes them racist and sexist.

Researchers looked at a host of systems and found that many of them exhibited egregious bias.

The team then developed a system to help researchers make sure their models are less biased.

‘Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,’ said lead author Irene Chen, a PhD student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson.

‘But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.’

In one example, the team looked at an income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income. 

They found that if they had increased the dataset by a factor of 10, those mistakes would happen 40 percent less often.

In another dataset, the researchers found that a system’s ability to predict intensive care unit (ICU) mortality was less accurate for Asian patients. 

However, the researchers warned that existing approaches for reducing this discrimination would make predictions for non-Asian patients less accurate.

Chen says that one of the biggest misconceptions is that more data is always better. Instead, researchers should gather more data from under-represented groups.

‘We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,’ says Sontag.
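The kind of diagnostic the “toolbox” performs can be illustrated with a simple per-group error audit. The sketch below is hypothetical: the data, group labels, and function name are invented for illustration and are not the MIT team’s actual code. It shows how comparing misclassification rates across demographic groups surfaces the sort of disparity described above (e.g., an income model erring far more often on one gender).

```python
# Hypothetical sketch: audit a classifier's error rate per demographic group.
# All data and names here are invented for illustration.

def group_error_rates(y_true, y_pred, groups):
    """Return the misclassification rate for each group label."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        errors = sum(1 for i in idx if y_true[i] != y_pred[i])
        rates[g] = errors / len(idx)
    return rates

# Toy data: label 1 = predicted high income, 0 = predicted low income.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 0]   # one error, and it falls on group "f"
groups = ["f", "f", "f", "f", "m", "m", "m", "m"]

print(group_error_rates(y_true, y_pred, groups))
# A large gap between groups is the kind of red flag that would prompt
# collecting more data from the under-represented group.
```

On this toy input the error rate is 0.25 for group “f” and 0.0 for group “m” — exactly the asymmetry the researchers describe, where one group absorbs most of the model’s mistakes.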

The team will present the paper in December at the annual conference on Neural Information Processing Systems (NIPS) in Montreal. 
