By Raj Koneru, Founder and CEO, Kore
It’s easy for enterprises to get excited about how bots are evolving. Advanced chatbots break down barriers between apps, as well as between content creators and consumers. They allow you to easily automate a range of tasks. It’s powerful functionality. However, as with all new technology, it also carries with it certain security sensitivities that have yet to be fully explored.
For industries such as financial services, retail, banking, hospitality, travel, and healthcare, information security is paramount. Vital information such as credit card numbers, personal medical histories, bank accounts, and social security numbers plays a critical role in the digital transactions that occur in each of these spaces. Much of this information must be encrypted, and in many instances it must be monitored for data loss and malicious intrusions. Data protection covers both data at rest and data in transit.
While bots offer consumers and enterprises alike an immense new set of opportunities, they also present new cybersecurity challenges. Questions such as where information is stored, how it is protected, and which channels have access to it must be posed and answered before consumers and businesses embrace individual bot solutions.
Whether you’re a business, a developer, or a bot user, here are three essential tips to consider to stay ahead when it comes to bot security.
One of the key advantages of chatbot technology is the ability to seamlessly cross channels to gather the information needed to execute a task. Yet with all of this crisscrossing of channels and exchange of information, the probability of private information being shared or business-critical information being leaked increases dramatically. Every stakeholder in the chatbot ecosystem needs to take heed and consider each of the channels that are being accessed.
While a bot can be designed to secure information via a private channel like Kore, data that’s shared in a public channel, such as Facebook or Slack, is subject to the security sensitivities of that channel. It becomes essential, then, that every user is trained to verify that their bot communications occur in proper channels to protect their data. Private channels can be secured; public channels offer no such guarantee.
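One lightweight way to enforce this verification is to gate outbound bot traffic behind an allowlist of vetted, TLS-protected endpoints. The sketch below is a minimal illustration only; the host names are hypothetical placeholders, and a real deployment would load the allowlist from managed configuration.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of channel hosts the enterprise has vetted.
TRUSTED_HOSTS = {"bots.kore.com", "internal.example.com"}

def is_safe_endpoint(url: str) -> bool:
    """Allow only HTTPS endpoints on vetted hosts.

    Rejects plaintext HTTP (unencrypted in transit) and any host
    outside the allowlist (an unvetted public channel).
    """
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

print(is_safe_endpoint("https://bots.kore.com/webhook"))  # True
print(is_safe_endpoint("http://bots.kore.com/webhook"))   # False: not encrypted
print(is_safe_endpoint("https://evil.example.net/hook"))  # False: unvetted host
```

A check like this belongs in the bot platform itself, so that individual users don’t have to judge channel safety on their own.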
Without the right cybersecurity protocols in place, bots become the equivalent of the man-in-the-middle cyberattacks that have become increasingly prevalent. And unlike a man-in-the-middle attack, which typically compromises just one channel, a “bot-in-the-middle” cyberattack could expose countless channels to data leakage and malicious activity. These incidents, and their associated financial repercussions, could be exponentially worse than the ones we see today.
Public bots, such as those built for Facebook Messenger or Slack, are constrained in their usefulness as a result of their security limitations. These consumer bots lack the unique capabilities that address the requirements of a business. Users simply aren’t comfortable giving public bots access to their bank accounts, credit card accounts, and personal information. And enterprises interested in protecting their customers’ information and accounts will oppose giving these public bots access to their systems and data.
While bots offer users, developers, and companies greater speed, flexibility, and convenience, this doesn’t come without strings attached. Without proper safeguards, a bot that stores user information or has unmitigated access to it (even if that information is stored on one of the channels) makes a malicious hack or data leak a real possibility.
Enterprises understand this issue and employ encryption across data channels and silos to protect data at rest and in transit. Email and SMS messages are encrypted, as are bank messaging services and other classified services that exchange sensitive information.
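As a minimal illustration of what in-transit protection means in practice, the snippet below (Python standard library only) builds a TLS client context that refuses unencrypted or unverified connections before any message data is exchanged. This is a generic sketch of the principle, not the configuration of any particular bot platform.

```python
import ssl

# The default context verifies server certificates against the system
# trust store and requires hostname matching, so plaintext or spoofed
# endpoints are rejected during the handshake.
context = ssl.create_default_context()

# Additionally enforce a modern minimum protocol version.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate verification and hostname checking are on by default.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

Any socket wrapped with this context will fail fast against a channel that cannot prove its identity, which is exactly the behavior an enterprise wants from a bot’s transport layer.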
When it comes to what bots can access, developers and enterprises that are interested in protecting data and communications will grant bots access to only encrypted channels. Users who are attuned to the dynamics of bot security will only use bots that leverage encryption. It’s a different story when it comes to public bots; businesses don’t have control over these platforms, what they do and don’t encrypt, and what channels and information they access. This has the makings of a huge cybersecurity storm that can wreak havoc on those who fail to heed the consequences.
Managing the messaging ecosystem of a bot is crucial. Consider what a public bot can do:
Now factor in that a bot can send messages — using any number of messaging platforms from email to instant messaging — to thousands of addresses in a matter of seconds, and enterprises, users, and even developers have real cause for concern. If an enterprise doesn’t have control over the messaging service, it cannot be sure that the data passing through that channel is secure. Some websites and apps build in enterprise controls, but others do not, including popular messaging services like Slack and Facebook Messenger.
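One concrete enterprise control implied here is throttling a bot’s outbound message rate, so that a compromised or misbehaving bot can’t blast thousands of addresses in seconds. Below is a minimal token-bucket sketch under assumed rate numbers; the class name and limits are illustrative, not part of any particular platform.

```python
import time

class TokenBucket:
    """Caps how many messages a bot may send per unit of time."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens (messages) replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the send."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, up to the burst ceiling.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # ~5 messages/second sustained
sent = sum(bucket.allow() for _ in range(100))
print(sent)  # roughly the burst capacity, since all 100 attempts arrive at once
```

A limiter like this sits between the bot logic and the messaging channel, turning “thousands of messages per second” into a bounded, auditable flow.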
It makes a lot of sense that security-related issues are viewed by IT professionals as the number one obstacle to the adoption and use of intelligent systems. Some of their foremost concerns include:
While there is plenty of justified euphoria surrounding bots and their potential, organizations need to pause and assess a bot’s security capabilities and management controls before jumping into the “bot pool.” Many haven’t yet thought about the fact that data, in the new world of bots, resides in places it never resided before. As the technology becomes more popular, expect developers to restrict bot access to secure channels only — those that are encrypted and provide management controls. Enterprises will only use and connect to bots that do so, recognizing that this is a business-critical requirement — for clients, workers, and the business itself.