Can the Law Take on Bad Bots and Win?

We may find out sometime next year if legislation can contain the scourge of harmful bots in any meaningful way.

As of July 1, 2019, bots appearing on massive social networks (those that reach 10 million people or more) will be subject to a new California law that forces entities using bots to influence people's behavior to disclose that they are, in fact, bots.

However, while everyone agrees that the population of bad bots, also known as hacker bots, is exploding, it’s crucial to remember that not all bots are the same, and not all are malicious.

Business-oriented bots help people bank online, access customer support, or buy products by walking them through specific processes and answering simple questions. On the other hand, bots that appear human to the casual observer are popping up on social media sites with the intention of creating havoc. These, for our purposes here, I will call bad bots, because their provenance and goals are murky and often nefarious.

Some bad bots aim to influence elections or disrupt civic processes generally. In addition, many are created by (or for) criminals to trick unsuspecting web surfers out of their data, their money, or both.

The threat has not gone unnoticed, but it's unclear how best to combat these bots. Social media networks say they are trying. For example, Twitter, which as of the first quarter of 2018 claimed more than 330 million active users, started culling fake accounts and in July promised to axe tens of millions of them from its user ranks.

Others think legislation like that enacted in California is needed.

This seems like a pretty good idea; generally speaking, disclosure is good, right? But how effective can the law be given that enforcement is a problem? California is huge (if it were a sovereign nation, it would constitute the world's fifth-largest economy), but the boundary-less nature of the internet makes enforcement and prosecution tricky at best and impossible at worst.

The California bill, as written, will make it illegal for anyone in the state to:

communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.

So it applies within the state but holds no sway beyond its borders.

“The bots that California is trying to address are what we can call ‘social bots.’ These bots influence people and are designed to motivate them to make purchases, divulge personal information, and even sway their opinions during elections,” says Stephen Gates, edge security evangelist for Oracle Dyn. “However, there is a more insidious movement of bots under direct hacker control. These bots are primarily used by hackers for monetary gain and are most often consumer IoT devices that have been conscripted into the hacker’s botnets.”

Given that, it makes sense to deploy technology to counter the “other” bot threat as well.

“There are services which fight these hacker bots by challenging them with hidden tests that are difficult for bad bots to pass but allow legitimate traffic—say from search engines, legitimate visitors, or even social networks—to access web applications,” Gates says. “They are ‘hidden’ because these tech challenges happen in the background and let actual humans use the web as always.”
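Gates doesn't describe the internals of these services, but one common technique in this space is an invisible JavaScript challenge: the page quietly performs a small computation that a scraper which never executes JavaScript cannot. Here is a minimal browser-side sketch in TypeScript; the /bot-challenge and /bot-verify endpoints are hypothetical names standing in for a real service's API:

```typescript
// A minimal sketch of a "hidden" challenge, assuming hypothetical
// /bot-challenge and /bot-verify endpoints. Real services are far more
// sophisticated; this only illustrates the core idea: a scraper that
// never runs the page's JavaScript can never produce the token.

async function solveHiddenChallenge(): Promise<void> {
  // 1. Fetch a one-time nonce from the (hypothetical) challenge endpoint.
  const { nonce } = await (await fetch("/bot-challenge")).json();

  // 2. Do a tiny proof-of-work: find a suffix whose SHA-256 hash starts
  //    with "0000". Trivial for a real browser, but it requires actually
  //    executing JavaScript, which most scraping bots do not.
  const encoder = new TextEncoder();
  let suffix = 0;
  for (;;) {
    const digest = await crypto.subtle.digest(
      "SHA-256",
      encoder.encode(nonce + suffix)
    );
    const hex = Array.from(new Uint8Array(digest))
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
    if (hex.startsWith("0000")) break;
    suffix++;
  }

  // 3. Report the answer; the server would then set a short-lived cookie
  //    that subsequent requests must carry to be treated as legitimate.
  await fetch("/bot-verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ nonce, suffix }),
  });
}

// Runs silently in the background; a human visitor never notices it.
solveHiddenChallenge();
```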

Such services detect behaviors that differentiate bad bot actions from human activities. Bad bots do not run browsers, for example; they alight on a web page for a fraction of a second to scan for vulnerabilities or steal valuable content, Gates says.

“People use browsers and don’t often jump from page to page within seconds,” he adds.
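As a rough illustration of that dwell-time signal, here is a minimal server-side sketch of a heuristic that flags clients requesting pages faster than a human plausibly could. This is not any vendor's actual logic; the thresholds and the in-memory store are assumptions for illustration:

```typescript
// A rough sketch of the dwell-time heuristic: flag clients whose
// page-to-page gaps are consistently shorter than a human's. The
// thresholds and the in-memory Map are illustrative assumptions.

interface ClientHistory {
  lastRequest: number; // timestamp of the previous request (ms)
  fastHits: number;    // consecutive sub-threshold gaps seen so far
}

const MIN_HUMAN_GAP_MS = 2_000; // assumed: humans dwell at least ~2 s
const FAST_HIT_LIMIT = 5;       // assumed: 5 rapid hits in a row is bot-like

const history = new Map<string, ClientHistory>();

// Call once per request, keyed by client IP or session ID.
function looksLikeBot(clientId: string, now: number = Date.now()): boolean {
  const prev = history.get(clientId);
  if (!prev) {
    history.set(clientId, { lastRequest: now, fastHits: 0 });
    return false;
  }
  const gap = now - prev.lastRequest;
  const fastHits = gap < MIN_HUMAN_GAP_MS ? prev.fastHits + 1 : 0;
  history.set(clientId, { lastRequest: now, fastHits });
  return fastHits >= FAST_HIT_LIMIT;
}
```

A real service would combine timing with many other signals (headers, IP reputation, fingerprinting) rather than rely on any single heuristic.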

Companies may balk at using technology that reduces their website visitor or page view counts, but if that traffic is malicious, it's clearly best to contain it. For one thing, hacker bot traffic can hinder performance and frustrate real people trying to use the site.

“Nearly 60% of traffic on the internet today is not from human sources. It’s coming from hacker bots with increasing levels of malicious intent. The failure to block bad bot traffic always results in undesirable outcomes,” Gates says.

Such services also watch online interactions to detect mouse movements, scrolling, or keyboard activity that indicates the presence of an actual human being. Bots may be advanced, but they still can't type, apparently.
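A page-side check along those lines could be as simple as the following sketch, which reports a "human" signal on the first mouse movement, scroll, or keystroke; the /interaction-signal endpoint is a hypothetical name:

```typescript
// A minimal page-side sketch of the interaction check described above:
// the first mouse movement, scroll, or keystroke reports a "human"
// signal. The /interaction-signal endpoint name is hypothetical.

const HUMAN_EVENTS = ["mousemove", "scroll", "keydown"] as const;

let reported = false;

function reportHuman(kind: string): void {
  if (reported) return;
  reported = true;
  // sendBeacon posts asynchronously without blocking the page.
  navigator.sendBeacon("/interaction-signal", JSON.stringify({ kind }));
}

for (const kind of HUMAN_EVENTS) {
  // Passive listeners, so the check never slows down real users.
  window.addEventListener(kind, () => reportHuman(kind), { passive: true });
}
```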

While the beauty of the internet is that it enables a free flow of information, that openness is also a vulnerability. Bad bot detection and mitigation allows legitimate traffic to keep moving and helps preserve that openness.

Fighting the bot scourge is going to take a major, multi-pronged effort. The challenge isn't going to get any easier, given that the number of devices connected to the internet is soaring. One research firm expects that number to surpass 20 billion by 2020, and of course these devices can also be infected, which will accelerate the botnet problem. That is why social media networks, tech providers, businesses, and end users all have to work to protect their assets, detect bad players, and mitigate risk. Laws will not be enough.

Whois: Barbara Darrow

Barbara Darrow is a senior director at Oracle Corp. She has covered business technology for more than two decades—a career spanning from Lotus 1-2-3 to Amazon Web Services. She has written for InfoWorld, CRN, eWeek, ComputerWorld, and GigaOM. Before joining Oracle, she was senior writer for Fortune Magazine covering enterprise technology.