11/06/2024 / By News Editors
In the brave new world of the University of Washington’s Center for an Informed Public (CIP), it seems that “informed” is synonymous with “watched.” Born to combat the wildfires of online “misinformation,” CIP and its partners – including the now-defunct Election Integrity Partnership (EIP) and the short-lived Virality Project – expected to be celebrated as defenders of truth. Instead, they became poster children for what happens when watchdogs get a little too cozy with power, running an experiment that teetered between public good and Orwellian oversight.
(Article by Christina Maas republished from ReclaimTheNet.org)
The Election Integrity Partnership, a coalition that included CIP as a key player, kicked off its operations with a noble-sounding mission: to shield our fragile electoral systems from the scourge of fake news. For the discerning reader, the term “integrity” in their name may raise eyebrows; it’s reminiscent of government programs cloaked in the language of virtue, their real work a little murkier. Partnering with government entities and social media giants like Facebook and then-Twitter, EIP set out to identify and “mitigate” misleading content related to elections. In other words, they assumed the job of selectively filtering out the lies, or as critics would say, the truths that didn’t toe the right political line.
For a while, EIP was in its element, functioning as a digital triage unit, purging the internet of what it deemed harmful content. But what started as “informational integrity” quickly turned into federal hall monitoring, policing citizens’ Facebook posts and Twitter threads with all the subtlety of a sledgehammer. Conservatives, in particular, saw this as more of a censorship scheme than a public service. Their view? EIP wasn’t there to inform – it was there to enforce.
Predictably, the backlash came hard and fast. Between accusations of censorship, lawsuits, and subpoenas, the EIP got hit with more legal troubles than a tech startup in a copyright infringement scandal. And when all was said and done, EIP disbanded, its ambitions buckling under the weight of public scrutiny and political pressure. The New York Times, ever the mournful observer of lost social crusades, called it a tragedy for public discourse. They framed the dissolution as a loss for those who believe in “responsible” information regulation, i.e., those who think someone should be appointed arbiter of truth, as long as it’s the “right” someone.
The lawsuit-laden disbandment sent a message: Americans are more than a little skeptical about government agencies and their academic friends lurking behind the scenes, flagging speech like a hall monitor on a power trip. The public isn’t too keen on playing along with institutional gatekeepers telling them which “facts” are allowed to stand.
With EIP gone, CIP has had to pivot. It’s retreated from the frontlines of digital speech enforcement, now favoring a softer approach – “educating” the public on misinformation rather than erasing it outright. Translation: CIP now hosts workshops and seminars where it teaches researchers and civilians alike about the nature of disinformation, sidestepping its prior role as a social media referee. This rebranding effort is essentially CIP’s way of saying, “We’re not here to censor, promise.”
Yet, the academic world’s “shift to education” sounds suspiciously like the fox retreating from the henhouse after getting caught. CIP’s pivot reflects the current climate, one in which watchdogs like it have to tread carefully or risk losing all influence. Now, they’re not shutting people up; they’re merely explaining why certain ideas are wrong, a move that feels less aggressive but still keeps CIP’s finger on the scale of public opinion.
CIP’s saga shines a harsh light on the deepening tensions between free speech advocates and so-called “disinformation” experts. On one side, you have entities like the New York Times wringing their hands, lamenting the “tragedy” of these anti-misinformation efforts falling apart. The Times warns of a future in which misinformation spreads unchecked, as though without EIP, social media will devolve into an apocalyptic pit of lies. On the other side, you have critics of censorship, those who see CIP’s previous activities as a government-endorsed grab at control, cloaked in the language of public safety.
Now, we find ourselves in a new chapter, with CIP toeing the line carefully, offering lessons in “awareness” rather than flagging posts. This so-called “nuanced understanding” might sound respectable, but it still hinges on a central belief: certain ideas are dangerous enough to warrant intervention, even if the means have shifted from banning to benign “educating.” In short, CIP may be keeping a lower profile, but its ambitions haven’t changed – they’ve merely gone underground.
So what do you get when you hand the keys to social discourse over to government-aligned bodies like the EIP? For starters, the inevitable slide toward an overzealous surveillance state. Free speech advocates have been beating this drum for a while, and they aren’t wrong: schemes like EIP carry the perfect storm of potential for overreach and abuse. It’s the classic “trust us” move from government and corporate giants who assure the public that they’re only flagging content for “our own good.” But when a government body is allowed to sift through online conversations, the notion of “our good” quickly morphs into “their control.”
The result? People start censoring themselves, fearing that one wrong post might put them on a watchlist or see them “fact-checked” into silence. These watchdog groups claim to target misinformation, but they often mistake dissenting views for danger and critique for conspiracy. The very act of monitoring speech creates a chilling effect, where the public might think twice before posting on sensitive subjects. After all, who wants to risk getting flagged by an algorithm that combines moral zeal with all the precision of a hammer trying to nail jelly to a wall?
And then there’s the lack of transparency – a time-honored tradition in institutions that insist they know best. When EIP was in full swing, it wasn’t as if users got an email detailing who decided their post was a threat to democracy or what precise reasoning went into labeling it “misinformation.” Instead, decisions were made in rooms far from public view, with opaque policies and an ever-shifting definition of what “misinformation” even means. Political or corporate interests could easily influence this moderation, and, surprise, surprise – with little oversight, the system quickly looks more biased than benevolent.
The arbitrary and often political nature of these decisions only stokes public distrust, especially when it’s the very voices challenging authority that find themselves most frequently muzzled. It’s the internet equivalent of a teacher who can’t explain why certain kids always get detention – people quickly learn not to ask questions and go along with the rules, but that doesn’t mean they believe in the fairness of the process.
In democratic societies, free speech is a cornerstone. The ability to voice different viewpoints, even those that shake the system, is essential for a healthy public sphere. When bodies like EIP take it upon themselves to deem what’s acceptable for public consumption, we’re left with a sanitized marketplace of ideas – one in which only the ideas that align with sanctioned narratives get a seat at the table. If only certain perspectives survive the cut, we end up with voters fed a curated set of “truths,” unable to challenge, investigate, or even consider alternatives.
And it’s not just a hypothetical fear. History has repeatedly shown that the silencing of controversial or dissenting voices only deepens public division. Ironically, the very thing these “integrity” initiatives aim to prevent – public polarization – often worsens when people feel their speech is being filtered. With an overpowered referee deciding which facts to keep on the field, the game of democracy itself suffers.
The question becomes, once government-linked entities start moderating our conversations, where does it end? Today, it’s about “election integrity.” Tomorrow, it could be “economic stability” or “public health.” Every crisis invites a new round of justifications for more speech control. After all, if misinformation on elections is a threat to democracy, couldn’t misinformation on any number of other issues pose a similar threat? Accepting censorship in any form opens a Pandora’s box of future government interference, each intervention creating new precedents that make the next round of censorship feel more routine.
The free speech argument here is simple: even if an opinion is wrong, unpopular, or offensive, it deserves protection. The minute we concede that it’s acceptable to police ideas – especially by bodies connected to government interests – we make it all the easier for future, more dangerous limitations to slip into place.
Then there’s the effectiveness issue. Does suppressing “misinformation” really work, or does it just make it more insidious? Efforts like EIP may well reduce the volume of “dangerous” content on mainstream platforms, but that content doesn’t just vanish. Ideas banned in one place tend to bubble up elsewhere – often in online echo chambers where censorship only serves to validate radical viewpoints, feeding a cycle of resentment and extremism.
The disinformation crusade might actually be doing more harm than good, driving misinformation underground where it becomes even harder to address. The government’s digital eraser may scrub certain ideas from view, but it often intensifies belief among those already suspicious of authority. For them, censorship itself becomes “proof” that something is being hidden, amplifying distrust and cementing conspiratorial thinking. In trying to stamp out the “lies,” EIP and its ilk may have simply fueled the fire.
In the end, the dissolution of the Election Integrity Partnership is perhaps less a blow to public discourse than a win for the democratic spirit. As the Center for an Informed Public pivots from censoring to educating, we’re reminded that the battle against misinformation doesn’t require speech suppression. It requires a trust in the public’s ability to sift truth from nonsense – a trust that, in a healthy democracy, should never be in short supply.
Read more at: ReclaimTheNet.org