Jewish Censorship: YouTube Removed 9 Million Videos! – Congress Drafting Bill To Create Federal “Social Media Task Force” To Police Online Speech


[Jews at work, trying hard to shut down white free speech and the rise of the white right. Folks, PLEASE DO ALL YOU CAN TO WORK AGAINST THESE THINGS … SPREAD THE TRUTH, speak to other whites. Don't be put off by this Jewish nonsense. We must continue forward even if we have to go UNDERGROUND … BUT DON'T STOP!

Notice the unbelievable number of videos removed by YouTube.
By “harmful content” they really mean: SHUT UP WHITE TRUTH TELLERS! Jan]

House lawmakers are planning to unveil legislation to probe social media and online extremism

By Tony Romm and Drew Harwell
September 18

Congressional lawmakers are drafting a bill to create a “national commission” at the Department of Homeland Security to study the ways that social media can be weaponized — and the effectiveness of tech giants’ efforts to protect users from harmful content online.

The draft House bill obtained by The Washington Post is slated to be introduced and considered next week. If the bill passes, the commission would be empowered — with the authority to hold hearings and issue subpoenas — to study the way social media companies police the Web and to recommend potential legislation. The bill also would create a federal social media task force to coordinate the government’s response to security issues.

The effort reflects a growing push by members of Congress to combat hate speech, disinformation and other harmful content online, including a hearing held Wednesday at which Senate lawmakers questioned Facebook, Google and Twitter executives about whether their platforms have become conduits for real-world violence.

All three tech giants told lawmakers at the Wednesday hearing that they have made progress in combating dangerous posts, photos and videos — improvements they attributed largely to advancements in their artificial-intelligence tools. But some Democrats and Republicans in Congress still contend the companies haven’t acted aggressively enough.

“I would suggest even more needs to be done, and it needs to be better, and you have the resources and technological capability to do more and better,” Democratic Sen. Richard Blumenthal (Conn.) said at the hearing.

Lawmakers have grown increasingly concerned about the use of social media sites as conduits for violence and extremism, pointing to recent attacks including the mass shooting in Christchurch, New Zealand. Users uploaded videos of the deadly incidents at two mosques earlier this year; the footage evaded tech giants’ censors and proved difficult to scrub.

But the most vile content has appeared on sites such as Gab, a haven for the alt-right, and 8chan, an anonymous message board. The latter site was taken down in the aftermath of a shooting in El Paso this year that left 22 people dead. The suspect there is believed to have posted a manifesto to 8chan before carrying out his attack.

Lawmakers led by Rep. Bennie Thompson (D-Miss.), chairman of the House Homeland Security Committee, grilled the owner of 8chan at a private session this year. Thompson later said he had plans for a bill that would create the social media commission.

“One thing’s for sure — the challenge of preventing online terrorism content is one of the greatest post-9/11 homeland security challenges,” he said in a statement Wednesday.

In the Senate, the tech giants faced similar concerns from lawmakers. “In today’s Internet-connected society, misinformation, fake news, deep fakes and viral online conspiracy theories have become the norm,” said Republican Sen. Roger Wicker (Miss.), the chairman of the Senate Commerce Committee, to open the Wednesday hearing.

In response, Facebook, Google and Twitter said during their testimony that they had seen success in deploying automated tools to police for hate, violence and terrorist propaganda.

YouTube said nearly 90 percent of the 9 million videos it removed in the second quarter of the year had been flagged by automated tools. Those tools played a major role in removing videos, comments and channels flagged for hate speech, which the company said had spiked in recent months.

Facebook said this week it would begin using police training videos to help its automated tools better detect first-person shooting videos like the one recorded in Christchurch. The company said its detection system, which was designed to automatically flag and remove videos showing violence, sex or other objectionable content, now finds a rule violation on its live-streaming system in an average of 12 seconds. Facebook also announced updates this week to its efforts to stop and remove hate speech, including unveiling a roughly 40-person independent board that will oversee content decisions and shape company policy.

Twitter said its abusive-content-monitoring systems now flag half of the content that is ultimately removed, compared with 20 percent a year ago. The site also said it was piloting a program in which it would alert outside websites when it appears they are hosting videos or other files promoting terrorist content.

But Twitter found itself embroiled in another controversy in Congress on Wednesday after President Trump retweeted a widely followed conservative comedian, who portrayed a video of Rep. Ilhan Omar (D-Minn.) dancing as a sign she was celebrating on the anniversary of the 9/11 attacks.

Omar responded that the video came from an event for the Congressional Black Caucus, then pointed out that the retweet from Trump — who has repeatedly attacked the Democratic lawmaker — had resulted in her receiving death threats.

“The President of the United States is continuing to spread lies that put my life at risk,” Omar tweeted. “What is Twitter doing to combat this misinformation?”

The comedian, Terrence K. Williams, later deleted his tweet, Twitter confirmed Wednesday, without providing additional comment.

Meanwhile, major civil rights groups wrote to the chief executives of Facebook, Google, Twitter and YouTube on Tuesday saying they had a “moral responsibility” to better combat how social media could be exploited “to inflict fear and spread hate.”

The groups, including the Leadership Conference on Civil and Human Rights and the NAACP Legal Defense and Educational Fund, urged the companies to conduct regular civil rights audits and implement corporate accountability measures that would help address hate on their sites.

“Each massacre makes clearer that, while each of your companies has taken some steps to address white nationalism and white supremacy online, those steps are not enough,” the letter stated.

Source: https://www.washingtonpost.com/technology/2019/09/18/facebook-google-twitter-face-fresh-heat-congress-harmful-online-content/


