Big Tech VS Free Speech 💪 The end of Section 230 may be the key!


May 14, 2019 · 16 mins

Please Donate to https://fundly.com/stopthebias

Together we can bring attention to social media censorship and hold these monopolies accountable for the exemption they have hidden behind.

It’s no longer a question of whether the Giant Social Media Companies – Google, Twitter, Facebook, Instagram, etc. – have become too powerful. They’ve matured to the point that they can actually affect what people see, read, and listen to, and even what they think. To make matters worse, they’ve decided to use these powers to change voting patterns and to Censor speech that opposes their political beliefs.

It’s time to stop them before all is lost. Harmeet Dhillon (an attorney suing Google and a Republican Party official) has been on Tucker Carlson’s show frequently of late, and she warns:

"Trump won't win in 2020 and we will never win another election if we don't stop this!"

One of the most likely ways for Congress to stop them would be to revise Section 230 of the Communications Decency Act (CDA), which provides a special exemption from liability for content posted on their platforms. This exemption was initially extended to them because they claimed their platforms would be a place for people of all points of view to post their ideas. Given their current Censorship actions, we all know that is no longer the case.

Consequently, the Social Media Platforms should face the possibility of being held responsible for all content posted on their sites, since they selectively publish just as the New York Times or the Washington Post does. In fairness, then, the Social Media Platforms should bear the same risk of liability for their content as other publishers.

This move would, of course, destroy their business model, so they would likely change the Censorship tactics they use against Conservatives in order to avoid any changes to Section 230 of the CDA.

The threat of antitrust litigation is another avenue that may get their attention. The government should apply the same techniques against these Social Media Giants that it used to bring Microsoft to heel.

Our goal is to see our leaders pursue these remedies before it’s too late!

Reprint from:
https://www.fastcompany.com/90273352/maybe-its-time-to-take-away-the-outdated-loophole-that-big-tech-exploits

The 1996 law that made the web is in the crosshairs

Internet companies have long been shielded from legal responsibility for toxic user content by the Section 230 statute. Now that they’re huge, rich, and behaving badly, that gift could be taken away.

In the face of toxic content’s intractability and the futility of the tech giants’ attempts to deal with it, it’s become a mainstream belief in Washington, D.C., and a growing realization in Silicon Valley, that it’s no longer a question of whether to regulate companies like Google, Twitter, and Facebook, but how to hold them accountable for the content on their platforms. One of the most likely ways for Congress to do that would be to revise Section 230.

UNDERSTANDING SECTION 230

Section 230 remains a misunderstood part of the law. As Senator Ron Wyden, who co-wrote the statute in 1996, explained it to me, it provides both a “shield” and a “sword” to internet companies. The “shield” protects tech companies from liability for harmful content posted on their platforms by users. To wit:

(c)(1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Specifically, it relieves web platform operators of liability when their users post content that violates state law by defaming another person or group, or painting someone or something in a false light, or publicly disclosing private facts. Section 230 does not protect tech companies from federal criminal liability or from intellectual property claims.

“Because content is posted on their platforms so rapidly there’s just no way they can possibly police everything,” Senator Wyden told me.

The “sword” refers to Section 230’s “Good Samaritan” clause, which gives tech companies legal cover for the choices they make when moderating user content. Before § 230, tech companies were hesitant to moderate content for fear of being branded “publishers” and thus made liable for toxic user content on their sites. Per the clause:

(c)(2)(A) No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.

“I wanted to make sure that internet companies could moderate their websites without getting clobbered by lawsuits,” Wyden said on the Senate floor back in March. “I think everybody can agree that’s a better scenario than the alternative, which means websites hiding their heads in the sand out of fear of being weighed down with liability.”

Many lawmakers, including Wyden, feel that the tech giants have been slow to detect and remove harmful user content, and that they’ve used the legal cover provided by § 230 to avoid taking active responsibility for user content on their platforms.

And by 2016 the harmful content wasn’t just hurting individuals or businesses, but whole societies. Social sites like YouTube became unwitting recruiting platforms for violent terrorist groups. Russian hackers weaponized Facebook to spread disinformation, which caused division and rancor among voters, and eroded confidence in the outcome of the 2016 U.S. presidential election.

As Wyden pointed out on the floor of the Senate in March, the tech giants have even profited from the toxic content.

“Section 230 means they [tech companies] are not required to fact-check or scrub every video, post, or tweet,” Wyden said. “But there have been far too many alarming examples of algorithms driving vile, hateful, or conspiratorial content to the top of the sites millions of people click onto every day – companies seeming to aid in the spread of this content as a direct function of their business models.”

And the harm may get a lot worse. Future bad actors may use machine learning, natural language processing, and computer vision technology to create convincing video or audio footage depicting a person doing or saying something provocative that they never did or said. Such “deepfake” content, skillfully created and deployed with the right subject matter at the right time, could cause serious harm to individuals, or even calamitous damage to whole nations. Imagine a deepfaked president taking to Twitter to declare war on North Korea.

In 2018, it’s a growing belief in Washington that tech companies might become more focused on keeping such harmful user content off of their platforms if the legal protections provided in § 230 were taken away.

SHIELDING GIANTS

There’s a real question over whether Wyden’s “shield” still fits. Section 230 says web companies won’t be treated as publishers, but they look a lot more like publishers in 2018 than they did in 1996.

In 1996 websites and services often looked like digital versions of real-world things. Craigslist was essentially a digital version of the classifieds. Prodigy offered an internet on-ramp and some bulletin boards. GeoCities let “homesteaders” build pages that were organized (by content type) in “neighborhoods” or “cities.”

Over time the dominant business models changed. Many internet businesses and publishers came to rely on interactive advertising for income, a business model that relied on browser tracking and the collection of users’ personal data to target ads.

To increase engagement, internet companies began “personalizing” their sites so that each user would have a different and unique experience, tailor-made to their interests. Websites became highly curated experiences served up by algorithms. And the algorithms were fed by the personal data and browsing histories of users.

Facebook came along in 2004 and soon took user data collection to the next level. The company provided a free social network, but harvested users’ personal data to target ads to them on Facebook and elsewhere on the web. And the data was very good. Not only could Facebook capture all kinds of data about a user’s tastes, but it could capture the user’s friends’ tastes too. This was catnip to advertisers because the social data proved to be a powerful indicator of what sorts of ads the user might click on.

Facebook also leveraged its copious user data, including data on users’ clicks, likes, and shares, to inform the complex algorithms that curate the content in users’ news feeds. It began showing users the posts, news, and other content that each user, based on their personal tastes, was most likely to respond to. This put more attention-grabbing stuff in front of users’ eyeballs, which pumped up engagement and created more opportunities to show ads.
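For illustration only, here is a minimal Python sketch of the kind of engagement-based ranking described above. Every name, field, and weight in it is invented for this example; it is not Facebook’s actual system, which is far more complex and proprietary.

# A purely illustrative sketch of engagement-based feed ranking.
# All names, fields, and weights are invented for this example and do not
# represent Facebook's actual (far more complex) system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str

def engagement_score(post: Post, topic_affinity: dict) -> float:
    # Predict how likely this user is to click, like, or share the post,
    # here reduced to a per-topic affinity learned from past behavior.
    return topic_affinity.get(post.topic, 0.0)

def rank_feed(posts: list, topic_affinity: dict) -> list:
    # Highest predicted engagement first: attention-grabbing content rises
    # to the top, which keeps users scrolling and creates more ad slots.
    return sorted(posts, key=lambda p: engagement_score(p, topic_affinity), reverse=True)

# Example: a user whose clicks, likes, and shares skew heavily political.
affinity = {"politics": 0.9, "sports": 0.2, "cooking": 0.1}
feed = rank_feed([Post("a", "cooking"), Post("b", "politics"), Post("c", "sports")], affinity)
print([p.post_id for p in feed])  # -> ['b', 'c', 'a']

Even this toy version makes the editorial point: the order of the feed is a choice made by the operator’s scoring function, not a neutral chronology.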

That kind of curation sounds a lot like the work of a publisher. “Our goal is to build the perfect personalized newspaper for every person in the world,” Facebook CEO Mark Zuckerberg said in 2014.

But Facebook has always been quick to insist that it’s not a publisher, just a neutral technology platform. There’s a very good reason for that: publishers are liable for the content they publish.

Follow @PeterBoykin on Social Media

Twitter: Suspended

Facebook: https://www.facebook.com/Gays4Trump

Instagram: https://www.instagram.com/peterboykin/

Youtube: https://www.youtube.com/c/PeterBoykin

Reddit: https://www.reddit.com/user/peterboykin

Telegram: https://t.me/PeterBoykin

PolitiChatter: https://politichatter.com/PeterBoykin

Patreon: https://www.patreon.com/peterboykin

PayPal: https://www.paypal.me/magafirstnews

Cash App: https://cash.me/app/CJBHWPS

Cash ID: $peterboykin1

Become a supporter of this podcast: https://www.spreaker.com/podcast/go-right-with-peter-boykin-gorightnews-com--3096608/support.