Rise of the Bot Bills: How Best to Regulate the Internet’s Bots

Elliot Trotter
7 min read · Nov 2, 2020

“Danger, Will Robinson!” warned the robot in the 1960s science fiction television show Lost in Space. An artificial intelligence, the robot helped the Robinson family navigate the unfamiliar depths of space, providing advanced calculation, physical defense, and perspective. While our present is light-years away from the Robinson family’s exploits and their robot, contemporary bots (an abbreviation of robots) are rapidly attaining a similar role in society, helping people achieve their goals. Bots today are voice-based (e.g., Amazon’s Alexa, Apple’s Siri) or text-based communication interfaces, providing specific functions like helping a customer find banking information over the phone or troubleshooting their computer via a chatbox.

Bots have become commonplace on applications like Slack as organizations integrate their functionality into valued communication dashboards. Through Slack, users can use bots to order a pizza or coordinate with members of their team through a series of interactive dialogue prompts. Major enterprise organizations like Microsoft have invested in providing an easy-to-script virtual assistant bot. X.ai uses a bot called Amy to act as an artificial email virtual assistant. Amy responds to emails on behalf of a user and automatically schedules meetings. Amy’s responses are supposedly so human-like that people have even asked her out on dates.

Another major player in the bot game is Google’s Duplex, which allows users to make appointments over the phone using an AI bot. This bot even uses sophisticated quirks like pauses and “um”s to mimic authentic human speech. The film Her takes this vision of human-bot interaction to an extreme when the protagonist falls in love with an artificial-intelligence voice bot, Samantha. In Lost in Space, too, the robot quickly becomes part of the family due to its usefulness and human-like responses.

Of course, contemporary bots aren’t merely used to aid humanity. Social bots have pervaded social media platforms like Twitter, Facebook, and Reddit, mimicking authentic human speech and influencing discussions. Bots inflate follower counts and engage in artificial consensus-building by responding to social posts in a positive or negative light. One study suggested that less than 60% of all internet traffic is generated by humans. Another found that, during a 2019 Democratic debate, upwards of 46% of tweets pushing anti-vaccination misinformation came from bots. Many who have tried their luck in the digital dating space have found themselves in a flirtatious conversation, only to realize that there isn’t anyone on the other end, just a bot designed to direct them to a website or ask to be sent a gift card. Last year, Match.com was sued for allegedly allowing bot accounts to lure lonely users into subscriptions.

Political campaigns and actors are also using bots to spread messaging and drum up support (or hate) for candidates, promoting hashtags and misinformation. This all came to a head in 2016 with the controversial election of Donald Trump amid reports of Russian bots influencing the American election. So what have legislators in the US done about bots? To date, only one state has passed any such legislation.

The California BOT Bill

With the abuses of the 2016 election in mind (one study concluded that pro-Trump bots outnumbered pro-Clinton bots five to one), and as bots continue to become more sophisticated, lawmakers in California introduced the Bolstering Online Transparency (BOT) Act as California Senate Bill 1001. The first-of-its-kind bill, which came into effect in July 2019, is designed to combat the types of social bots that risk influencing elections by requiring the disclosure of bots operating on social media platforms and by prohibiting bots from misleading individuals about their artificial identity.

Though the White House, under the current administration, is encouraging states to avoid AI regulation, Senator Feinstein of California has introduced a bill in Congress, the Bot Disclosure and Accountability Act of 2018, which includes many of the same measures as California’s bill. Though the bill hasn’t made much progress in Congress, it goes further in that it prohibits political campaigns and candidates from using bots. The bill also means to limit the use of bots in political advertising by unions, corporations, and political action committees (PACs).

What’s in a Label?

The intended benefit of these bills is to quell the malicious influence of consensus-building social bots by giving individuals the choice of how they interact with presented information. Similar to how the packaging of food at a supermarket may disclose a product’s country of origin, nutritional facts, and organic certification, the California BOT Act enforces the labeling of information that doesn’t originate from people.

But does the inclusion of trans fats or GMOs on a label suggest a particular stance by governments? Another comparison is warning labels on cigarettes. What is the role of government in influencing the buying practices of consumers with labeling that clearly frames cigarettes as harmful?

When it comes to public health, there is, perhaps, a more direct argument relating to the labeling of food products. Consumers ought to be aware of what they’re ingesting and whether there is evidence that a particular product causes harm. From there, the consumer still has the freedom to make a choice.

Granted, just as labeling a product as GMO carries connotations, so does labeling a social account as a bot. A bot may be seen as less authoritative, just as a product may be viewed as less natural. Of course, the user and the consumer in both cases still have the ability to decide what value to ascribe to those labels, especially when those labels provide a shield against deliberate misinformation. Most consumers wouldn’t tolerate a dangerous chemical cleaning product failing to disclose that it’s harmful to ingest. While labeling a bot account may imply a certain value, it stops short of saying an account is misleading users or that it’s dangerous and, as such, appears to be an appropriate first step in combating the potential dangers of bots.

Freedom of Speech Issues

A primary issue with these bills comes down to the First Amendment and what is deemed speech. University of Washington legal scholars Madeline Lamo and Ryan Calo suggest in their analysis Regulating Bot Speech: “Just because a statement is ultimately ‘made’ by a robot does not mean that it is not the product of human creation.”

What Lamo and Calo allege is that because a social bot’s content originates from human creation (whether that is an algorithm or merely the amplification of an original message), it could be protected under the First Amendment. The FTC and state laws regulate misleading information, but what do they say about disclosure with regard to speech? In most states, when purchasing a home, a seller has to disclose information about potential hazards like lead paint, neighbor disputes, or pest infestation. If you happen to like a home infested with rats, nothing stops you from moving ahead with that purchase, just as nothing would stop someone from reading a tweet from a bot that has disclosed its identity. Of course, selling a home isn’t exactly speech.

We’re familiar with disclosure in political advertisements: political ads on television must disclose certain information about an ad’s creation. Is promoting an idea with a bot network any different from advertising on television? And what about the political speech that the Senate bill intends to ban? After all, what counts as political advertising? What happens when a government uses a bot to disseminate information about voting locations and remind people to vote? Might those interactions be considered political speech and therefore banned under the Senate bill?

This does seem like a slippery-slope argument. Especially in light of overwhelming support for campaign finance reform, the idea of regulating bots aligns with the concept of regulating political spending. Bots require an investment of resources to create and maintain, just as political advertising does. Should only the rich and powerful be able to influence elections with ads? Certainly not, so the same standard should apply to bots as political advertising tools. Considering that, at least under the California law, bots may still share any information they please so long as they disclose their identity, speech does not appear to be threatened.

Conclusion

Disclosure seems like a logical first step in the fight against misleading user accounts: free speech is protected while the general public is informed and allowed to draw its own conclusions about the value of that speech.

Further regulation addressing what bots may be used for could be argued for as a measure to protect democracy, in the same way that other regulations protect public health. While imperfect, both disclosure and the regulation of communication tools have significant precedent and are arguably appropriate when it comes to bots.

Ultimately, it comes down to the power we as a people believe that any one individual or organization should have. If organizations are allowed to freely use purchasing power to mislead and manufacture content with swarms of bots, then we run the risk of undermining authentic choice in American society at the click of a mouse.


Elliot Trotter

Content Designer, UX Writer | Microsoft | Master of Communication in Digital Media