Two points:
- There has been no government censorship. Information about the Great Barrington Declaration is out there and easy to find.
- The Declaration is mostly absurd. And that's why it has not received a lot of media attention.
Answer: probably not.
Access to private media platforms, e.g., social media sites, is not a legally protected liberty interest, and nobody's ability to actually SAY these things was harmed; it's not at all clear that merely persuading someone else, who owns a private platform, to remove content is enough to constitute a First Amendment violation at all. State action for constitutional purposes generally requires a good deal more than someone acting upon a vague feeling that something bad might come from one's disagreement with the government, and it doesn't sound like they've got much on that score. More likely, the social media companies were persuaded that they ought to act responsibly in trying at least to flag this stuff. Will they say they were coerced? Probably not. And then what's left?
Bear in mind, too, that for purposes of damages liability any state actor is going to have access to qualified immunity: the doctrine that says he's not liable unless his conduct violated clearly-established legal standards. The notion of what constitutes "clearly established" has been narrowing, and narrowing, and narrowing (ironically, mostly to advance the political views of those who now are angry about this), and so unless someone can point to a body of case law where liability has been established upon similar facts, it's likely that no liability can result here, either. Municipalities are potentially liable, but state and federal government themselves are generally not liable for damages for constitutional violations.
Preventing people from being deceived by propaganda and misinformation is not censorship.
Funny how the Biden administration is to blame for events that occurred during Trump's Presidency.
Has the US government violated the First Amendment?
Given the complete lack of any evidence of actual censorship (as opposed to woolly waffle claiming censorship), the answer is obviously no.
Reading between the lines of this drivel:
One warm weekend in October of 2020, three impeccably credentialed epidemiologists - Jayanta Bhattacharya, Sunetra Gupta, and Martin Kulldorff, of Stanford, Oxford, and Harvard Universities respectively - gathered with a few journalists, writers, and economists at an estate in the Berkshires where the American Institute for Economic Research had brought together critics of lockdowns and other COVID-related government restrictions.
… a more pertinent question might be: was the Great Barrington Declaration anything more than a piece of right-wing political theatre?
I will note that:
- They are promoting the "impeccable credentials" of the trio, rather than the validity of their scientific evidence.
- This was brought together by a "libertarian think tank" rather than a scientific organisation, and, other than the trio, no scientific expertise among the gathering is mentioned.
Their complaint seems a laundry list of half-baked, half-witted far-right grievances and grandstanding (including even a whole section devoted to "The Hunter Biden Laptop Story").
@Giltil: why are you wasting our time with this arrant nonsense?
Maybe back in the early 1990s, when the Internet was still largely publicly funded, an argument about government restrictions might have been a valid claim. Today most of us pay private providers for access and we choose which platforms we sign on to use. The Internet for the most part is no longer a "public space".
Where I think the First Amendment still applies is in what is specifically NOT allowed. Free Speech does not include the right to shout "FIRE!" in a crowded movie theater. We do not have the right to spread false and harmful information unrestricted in a public space.
The open issues in my mind are the degree to which private platforms are providing "public space", and how much self-policing of this space is required.
It's not that so much. The thing is: nobody is stopping ANYBODY from putting pseudoscience on the Internet. But social media companies have almost absolute discretion as to what they will and won't allow (and what they will preface with warnings) on their OWN sites.
But anybody who wants to spread this stuff can do it. It's easy to buy hosting and build a website.
I wouldn't characterize this as "fire in a crowded theater" stuff. That really, truly is a notion which has to do with the conduct/speech distinction and with particular circumstances where there is an immediate physical peril caused by the alarm. For the most part, we really DO have the right to spread false and harmful information in a public space; people do it all the time. But social media sites are not public spaces.
I'm not so sure about that. That is to say, it is easy to get lost in the weeds in trying to determine whether specific instances are the equivalent of yelling "Fire" in a theatre. The Barrington declaration likely does not qualify.
However, that is quite immaterial to the claim @Giltil's article is trying to make. It is clearly NOT unconstitutional for anyone, including agents of the state, to point out that there actually is no fire, and that you should ignore the people who are falsely claiming that there is one.
I have a feeling that this notion will be tested in court at some point. I wouldn't be surprised if someone sues one of the larger social media sites claiming they were harmed by misinformation found on those sites. If the media site is held liable in court, then that would create a precedent for banning misinformation they deem to be potentially harmful to their audience.
One would also have to wonder if someone could sue the press based on misinformation. We also have freedom of the press, so could a journalist be sued for spreading misinformation that led to bodily harm?
I know I am preaching to the choir on this one, but no right in the US Constitution is limitless, freedom of speech included.
There's an interesting case described here. Google (YouTube) is being sued not merely for hosting but for recommending content (by algorithm: you watched this, so maybe you'd like to watch this). In this case it's videos supporting ISIS, so I can't imagine there will be much sympathy for Google.
Section 230 provides a very strong defence for merely hosting content created by others - and if it went away we'd see much stronger moderation (Google works to remove these sorts of videos already). But recommendations aren't so clearly covered, although Google claims otherwise.
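As an aside, the "you watched this, so maybe you'd like this" mechanism the case turns on can be illustrated with a toy co-viewing heuristic. This is only a sketch of the general idea, with invented video names and watch histories; it is not a claim about how YouTube's actual recommendation system works.

```python
# Illustrative only: a toy "viewers who watched X also watched Y" recommender,
# the generic idea behind "you watched this, so maybe you'd like to watch this".
# Video names and histories are invented; this is not YouTube's algorithm.
from collections import Counter

watch_histories = [
    ["cat_video", "cooking_show", "news_clip"],
    ["cat_video", "cooking_show"],
    ["news_clip", "documentary"],
]

def recommend(just_watched, histories, n=2):
    # Count what else was watched by people who watched the same video.
    co_views = Counter()
    for history in histories:
        if just_watched in history:
            co_views.update(v for v in history if v != just_watched)
    return [video for video, _ in co_views.most_common(n)]

print(recommend("cat_video", watch_histories))  # ['cooking_show', 'news_clip']
```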
Yeah, that's the likeliest issue to be front-and-center for social media companies. I have to admit that this is not a topic I have stayed current on; the Internet has made for a lot of difficulty with applying existing doctrines. But it used to be the case that merely publishing bad advice, without more, aimed at a general audience, was not sufficient to create liability. If you blew yourself up following bomb-making instructions from The Anarchist Cookbook, that was pretty much your own problem alone. If you ate a strange diet because you read a diet fad book, and harmed yourself, likewise. My recollection is that part of the rationale for this has traditionally been the First Amendment: the notion that if the civil justice system unduly burdens some types of speech, this judicial exercise of power is itself a form of state action which infringes the freedom of speech.
But the Internet isn't quite the publishing industry in the same way that book and magazine publishers are, and I'm not sure what the case law has been like. I do know that there is something in the way of a federal statute that is supposed to provide at least a partial shield to social media companies.
I think that freedom of speech is, at least, about as close to a "limitless" right as we get, but the issue often is what is really "speech" as such and what is conduct. It can run both ways, so that flag-burning, which involves no literal "speech" at all, is "speech" because, though it is conduct, it is primarily expressive; but statements uttered for the purposes of defrauding someone, or inciting a riot, can be the subject of a criminal prosecution. Where the speech is the expression of a point of view, without some incitement to or commission of a crime being a part thereof, it is almost always protected.
So, for example, racist views are generally completely protected: one can assert the superiority of this race or that race, in print, in public, openly all day long, and it's not something which can be made a crime. But inciting violence against people can be; it's not so much that there are limits on the expression of racist views as that there are limits on the conduct-linked parts of those expressions. "All immigrants should be deported immediately" is an odious opinion, but fully protected. "Let's go right now, round up these immigrants and force them onto an outbound ship" is not just an opinion, but an exhortation to criminal conduct, and so (at least if uttered in circumstances where the opportunity is presented for violence) not protected.
Some countries treat this quite differently; so the UK, for example, has laws against inciting racial hatred. In the US you really can't do that. You can have laws against inciting racial violence, but not hatred. What we call "hate crimes" (not a very useful term) typically involve things that are already crimes but where "hate" of one group or another is a motivating consideration.
The more I think about this, the harder time I have thinking that anything is happening other than things working how you would expect them to, or even how they should work.
- Social media and search engines exist to point users to material on the internet that they might find enjoyable or useful. One measure of utility is reasonably whether the information is more likely to be truthful or untruthful and helpful or harmful, so it is reasonable if their algorithms attempt to prioritise on this basis. This is not censorship as (i) nothing has been erased, and (ii) by using these services, the user is implicitly asking the service to prioritise some material (and thus de-prioritise other material). (A toy sketch of prioritising-rather-than-erasing appears after this list.)
- It is the legitimate function of government communication to attempt to persuade the news media to accept their favored narrative, be it on health, foreign policy, taxation, etc., etc., and to reject contrary narratives. To this end, government officials will talk to journalists and news organisations, and attempt to persuade them to their viewpoint.
- It is the function of journalists and news media to take this information (both government and contrary voices), analyse which views they consider most persuasive, and present the information in a way that reflects that analysis.
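To make the prioritise-versus-erase distinction in the first point concrete, here is a minimal, purely illustrative sketch. The field names (`relevance`, `flagged_misinfo`) and the 0.2 down-weight are invented for the example and are not any platform's actual scoring rules; the point is simply that every item is still returned, flagged material just ranks lower.

```python
# Illustrative only: a toy feed-ranking function showing how "de-prioritising"
# differs from deleting. Nothing is removed; items flagged as likely
# misinformation simply score lower. Field names and weights are invented.

def rank_feed(items):
    def score(item):
        s = item["relevance"]            # how well the item matches the user's interests
        if item.get("flagged_misinfo"):  # down-weight, don't delete
            s *= 0.2
        return s
    # Every item is still present in the output, just reordered.
    return sorted(items, key=score, reverse=True)

feed = [
    {"title": "Vaccine trial results explained", "relevance": 0.8},
    {"title": "Miracle cure they don't want you to know", "relevance": 0.9,
     "flagged_misinfo": True},
]

for item in rank_feed(feed):
    print(item["title"])  # the flagged item still appears, only lower down
```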
What seems to be the "problem" (at least for conservatives) is not that the system is de-prioritising conservative speech because it is conservative speech, but that it is de-prioritising it because it is viewed as untruthful and/or harmful.
If conservatives want social media and search engines that prioritise content not according to whether the content is truthful or helpful-versus-harmful, but according to how drunk-the-koolaid pwn-the-libs batshit deranged it is, then they are welcome to create their own, with their own custom algorithms to reflect these priorities, as to a certain extent they have already begun to do.
Pretty much. I would add that the word "censorship" is ambiguous and basically without clear legal meaning, so that arguments about what does or does not constitute censorship are irrelevant to the legal issues.
This is exactly what totalitarian states always argue.
When it is the state that decides what is misinformation and, accordingly, what can be censored, then you are no longer in a free society. Don't forget the proverbial saying: "power corrupts; absolute power corrupts absolutely."
And this is exactly what unhinged conspiracy theorists always argue.
But it wasn't just "the state"; it was also medical experts, the news media and "big tech" deciding that this was misinformation.
But, as I have already pointed out, this is not meaningfully "censorship", and, as @Puck_Mendelssohn has pointed out, "censorship" lacks any well-defined legal meaning, so this is a vacuous claim …
… having no bearing on whether a society is "free" or not.
Given that, even if we accept Younes, Schmitt & Landry's batshit claims, the ~~Biden~~ Trump Administration were not acting with "absolute power", but only in coordination with a host of media and internet companies, this portentous quote would appear to be utterly irrelevant. A more apt quote might be:
Told by an idiot, full of sound and fury, Signifying nothing.
Exactly. Free speech is just the right for you to say what you want to. It doesn't mean that anyone else is forced to listen to you, agree with you, or publish your words.
Nothing has been censored.
The Barrington declaration is trivial to find. It is available in full at https://gbdeclaration.org/ to anyone who wants to read it.[1] If Facebook etc don't want to host it, they don't have to.
Refusing to publish, advertise or provide a platform is not censorship. Persuading or pressuring third parties not to publish, advertise or provide a platform is not censorship. Preventing authors and their supporters from publishing, advertising or providing a platform is censorship - but is not happening.
[1] Located via Google, who clearly aren't removing it from their search results. It's also accessible from Wikipedia.
The state is perfectly entitled to correct misinformation. I have a hard time believing that anyone, even yourself, will disagree with this.
Yes, in the sense of fighting bad speech with good speech. But certainly not by arrogating to itself the power to define what counts as misinformation and to censor, or have censored, the speech that, in its view, carries misinformation.