Dyn, an internet performance management company, published a blog post this week highlighting a successful collaboration between transit providers, IXPs, and the anti-spam community to kick an abusive company off the internet.
Bitcanal is the company behind a years-long border gateway protocol (BGP) hijacking campaign. The company took over large blocks of IP addresses that didn’t belong to it and allegedly sold some to spammers, abusing the fundamental routing system that makes the internet work and deliver content to your eyeballs. Some of the hijacked addresses belonged to non-existent businesses, but, as security journalist Brian Krebs points out, other addresses managed by legitimate organizations, including the Texas State Attorney General’s office and the US Department of Defense, were roped into the scheme. Krebs reports researchers have tied Bitcanal to the suspected theft of millions of IPv4 addresses.
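The basic shape of an origin hijack is simple: BGP largely takes announcements on trust, so an attacker can announce a prefix (often a more-specific one, which wins longest-prefix-match routing) with its own AS as the origin. A minimal sketch of the check a monitor performs, comparing announced origins against a table of registered holders, looks something like this (the prefixes, AS numbers, and function names below are invented for illustration, not drawn from any real monitoring tool):

```python
import ipaddress

# Hypothetical registry of who legitimately originates which prefix,
# loosely analogous to simplified RPKI route-origin data.
# The prefix and ASN here are documentation/example values, not real ones.
EXPECTED_ORIGINS = {
    "198.51.100.0/22": 64500,  # legitimate holder's AS (invented)
}

def check_announcement(prefix: str, origin_asn: int) -> str:
    """Flag a BGP announcement whose origin AS doesn't match the
    registered holder of a covering prefix -- the pattern behind
    the origin hijacks attributed to Bitcanal."""
    announced = ipaddress.ip_network(prefix)
    for registered, asn in EXPECTED_ORIGINS.items():
        covering = ipaddress.ip_network(registered)
        # A more-specific announcement (e.g. a /24 inside a /22) is
        # covered by the registered block and wins longest-prefix match.
        if announced.subnet_of(covering):
            if origin_asn != asn:
                return f"possible hijack: {prefix} announced by AS{origin_asn}, expected AS{asn}"
            return "ok"
    return "unknown prefix"
```

A legitimate announcement (`check_announcement("198.51.100.0/22", 64500)`) passes, while a more-specific `/24` announced by a different AS is flagged, because routers would prefer that more-specific route and the hijacker would silently capture the traffic. Real-world detection (the kind Dyn and researchers like Guilmette do) is far messier, but the mismatch between announced origin and registered holder is the core signal.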
Researcher Ronald Guilmette kickstarted the recent effort to take down Bitcanal, working with transit providers including GTT and Cogent, and with internet exchange points (IXPs), to play whack-a-mole as Bitcanal tried to reconnect through other companies. This week, internet service provider Hurricane Electric and transit provider IPTelecom disconnected Bitcanal, leaving it “cut off from the global internet,” Dyn’s Director of Internet Analysis Doug Madory wrote.
What struck me about Madory’s post was the definitive declaration tucked in at the end: “IXPs are not just a neutral transport bus anymore. They facilitate a unique service that malicious actors can leverage. Like it or not, this makes IXPs responsible too.”
IXPs operate the infrastructure where ISPs and content delivery networks (CDNs) exchange traffic. If IXPs are seen as complicit in enabling malicious activity, and responsible for helping to police it when it becomes unwieldy, then perhaps that ethos could also extend to other internet services enabling bad actors: social networks.
There is a groundswell of criticism directed at Twitter, Facebook, YouTube, and others for the roles they play in allowing malicious content, bullying, disinformation, and other bad content to proliferate on their platforms. Elected officials in both the US and UK have called on Facebook and Twitter executives to explain how it’s possible for bot armies and fraudulent (and often dangerous) information to proliferate on their platforms to influence public opinion. The criticism came to a head when a Cambridge Analytica whistleblower described how the firm collected 50 million people’s Facebook data to build psychographic profiles of users in order to sell advertising. The UK fined the social network £500,000 for failing to protect users’ data, a sum that is virtually nothing to Facebook, a company that made $11.97 billion in mostly advertising revenue last quarter.
There are, of course, differences between IXPs and social networks: the former are fundamental building blocks of the internet, the latter services we opt in to use. One could argue people know what they’re getting into (potential harassment, disinformation, and the monetization of their data) when signing up for these platforms. However, social networks have become almost as fundamental to the way people use the internet as the pipes that carry it.
Facebook used its Internet.org initiative to provide connectivity to developing countries as an on-ramp to Facebook itself. Its “Free Basics” program includes Facebook and Messenger among the free apps available to users where data is cost-prohibitive. (The Outline reported in May that Facebook is pulling its Free Basics plan from some countries following criticism.) About 45% of US adults rely on Facebook for their news. And even as it promises to combat disinformation, it still allows popular and harmful conspiracy-theorist content to thrive. At an event with reporters this week, Facebook said it’s committed to fighting false news in the same breath as it defended keeping one of the biggest purveyors of that content on its platform.
Despite years of promises, Twitter, too, has failed to properly handle abuse, even as it promotes its value to humanity and serves as the US President’s favorite platform. Its executives continually promise to purge suspicious accounts, most recently in a tweet this week from CEO Jack Dorsey and a Washington Post report that the company suspended 70 million accounts in the past couple of months, but the company often deems death threats and other harassment insufficient evidence of a violation of its standards. People engage with the world’s biggest events (political, sporting, televised) on the same platform propagandists used to spread disinformation to millions of people during the 2016 US presidential election.
Facebook and Twitter could perma-ban accounts that continuously post harmful harassment and hate speech. Twitter has kicked off individual high-profile users for inciting hate speech and mobs of trolls, but that number is tiny. Social networks are loath to remove speech or humans from their platforms; for both companies, the right to speech and to post virtually whatever you want is ingrained deeply in their ethos, and booting publishers or people off their platforms would force them too far into an editorial role. (Facebook’s CEO Mark Zuckerberg recently told Congress it is a “technology company,” not a media company, an argument that helps it avoid being regulated like broadcasters.) Banning accounts can have a financial impact, too: Twitter stock fell 9% on Monday following news it had suspended millions of accounts.
There are complexities around free speech (it’s certainly not as cut and dried as shutting down IP hijackers), but its harmful effects are often extremely obvious. Take the death threats mentioned earlier, which Twitter ignored until a verified user put them on blast, or the popular conspiracy theory propagated online that ended with a man bringing an assault rifle to a pizza parlor; he was sentenced to four years in prison.
Content distribution services have stepped up before to make editorial decisions about the information flowing through their services. In 2017, Cloudflare dropped neo-Nazi website The Daily Stormer from its network, and GoDaddy and Google both dropped the site’s domain registration. The move sparked a debate over power on the internet and whether CDNs and other service providers should decide what can be viewed online.
As billions of people become increasingly reliant on these companies for their daily communication and knowledge consumption, it’s time for them to more definitively address their role as responsible stewards of news and information, and the impact bad actors can have on the world beyond the web.
[Social networks] are not just a neutral transport bus anymore. They facilitate a unique service that malicious actors can leverage. Like it or not, this makes [social networks] responsible too.