Shortly after the attack on the U.S. Capitol on January 6, 2021, some social media companies, starting with Twitter, threw President Donald Trump off their platforms. Following Twitter, Facebook banned Trump from its platforms, including Instagram, and other social media companies, including YouTube, followed suit. Within a few days, Trump had run out of digital media to spread his word.
Of course, he wasn’t muted – as President he could call a press conference at any time, confident that the media would attend. But like many politicians who had learned the power of unfiltered access to the Internet, Trump had gotten used to speaking whenever he wanted. Now, through the actions of a few big corporations, he could no longer get his word out the way he wanted.
This raised questions in many circles at the time, and those questions have not gone away since. What does it mean to silence a sitting president? Is it legal or ethical to block his access to these platforms?
To answer these questions, the Georgetown University Law Center held a panel discussion with four of its leading legal experts to examine the issue of “deplatforming” a sitting president.
The first question – whether it is legal for companies to remove a sitting president from a publicly accessible social media platform such as Twitter or Facebook – was raised at the start of the panel by moderator Hillary Brill, acting director of Georgetown Law’s Institute for Technology Law and Policy. Brill noted the outcry from many at the time that the bans were somehow a restriction on freedom of speech, and pointed out that the First Amendment to the U.S. Constitution only protects speech from government restrictions. Twitter, Facebook, and the other companies are private corporations, so there is no First Amendment problem with their measures preventing the President from posting on their services.
But that didn’t mean there weren’t any concerns. Professor Erin Carroll, who teaches communication, technology and the press, said she was concerned about the power of big tech and the lack of transparency. “If you remove disinformation, does truth take its place?” she asked.
Unfortunately, it doesn’t. Carroll pointed out that when Trump and his sympathizers got booted from major social media, they switched to other platforms: Telegram and Signal, messaging services that law enforcement agencies have little access to, and Gab, which makes scant effort to moderate the content of messages. Another social media site, Parler, initially benefited as a sort of home for Twitter refugees, but its partners balked at the lack of moderation, and Amazon, which hosted the service, refused to keep carrying it. That effectively killed Parler.
Speech doesn’t go away
“Speech doesn’t go away,” said Carroll. “It just finds other places.”
Much of the problem with removing someone like Trump from a platform is anchored in Section 230 of the Communications Decency Act, according to Professor David Vladeck, who holds the A.B. Chettle Chair in Civil Procedure at Georgetown Law and formerly headed the FTC’s Bureau of Consumer Protection. He said Section 230 enables many of the problems: “It offers very broad immunity for the publication of harmful or defamatory information.” He doubts that Section 230, which protects Internet providers from liability for material that others post on their websites, will be repealed, but he believes it is likely to be changed. He noted that former President Trump’s demand to scrap the section was based on a misunderstanding of what it does: in fact, repealing it would have led to more control over what he posted, not less.
That raised the question of how online content should be controlled. Professor Anupam Chander, who teaches communications and technology law, suggested that changing Section 230 to encourage more content moderation may not be a good thing. “It could lead to a ‘Disneyfied’ universe,” he said – one in which no negative information appears at all.
Instead, Carroll said, the industry needs to be more transparent about its decision-making. New rules, such as a revision of Section 230, need to be written by people who understand them and who understand how online services like Twitter and Facebook actually work.
“How do we have policies promoting fact versus propaganda?” Carroll asked. She suggested there needs to be some accountability for who makes decisions like deplatforming a president.
So far, however, there doesn’t seem to be an obvious answer to the question of when – or whether – a president (or anyone else) should be removed from such a platform. What did seem clear is that the first step should be to update existing legislation so that it at least reflects how these services actually work, and to ensure that there is transparency.