Five ways to secure modern, AI-based customer support tools

ANALYSIS: Customer support has undergone a tectonic shift over the years. Long gone are the days of endless queues in call centers and impersonal knowledge bases. Artificial intelligence (AI), machine learning (ML), and augmented reality (AR) have shaped a new ecosystem of efficient and user-centric support.

A recent study found that 70% of organizations already use AI to personalize their marketing, and that mature digital engagement methods have driven a 123% revenue increase. With all these benefits, it’s crucial to ensure that security and privacy aren’t an afterthought.

Let’s look at ways to mitigate the cybersecurity risks of five technologies that promise to revolutionize how businesses interact with customers today:

  • Jump on the AR bandwagon with caution: Since AR tools require access to device cameras and sensors, they can expose sensitive information about the user’s environment if compromised. Vulnerabilities in such applications are often exploited to access smartphone data or inject social engineering content into the AR experience. Ensure that all data transmitted through these applications is encrypted with protocols like TLS, which renders customer information meaningless to anyone who intercepts it. Multi-factor authentication takes security a step further by blocking unauthorized access to AR systems and devices (a minimal transport-and-authentication sketch follows this list). Follow data privacy regulations such as CCPA and GDPR, obtain explicit consent for data collection, and educate customers on the signs of phishing in AR environments and the privacy measures that help avoid identity theft.
  • Keep the chatbot ecosystem tamper-proof: Chatbots collect and process vast amounts of customer information that can be misused unless handled securely. Criminals may inject malicious code or create fake chatbots to deceive customers and steal information, and the likely fallout includes breaches, misinformation campaigns, and serious reputational damage. Avoiding that requires prioritizing the security of user data and maintaining control over the chatbot software and its underlying platforms. Implement end-to-end encryption for communications between chatbots and users, conduct regular security audits of chatbot code and infrastructure to identify and fix vulnerabilities, and use strong authentication so that only authorized users can interact with the chatbot (the sketch after this list applies here as well).
  • Secure the AI models: Unlike traditional methods that rely on simple keyword matching, AI-driven data retrieval services leverage natural language processing to understand the context and intent behind a client’s query. There’s a flip side, though. Attackers can exploit the large language models (LLMs) at the core of these systems to infer sensitive data used to train them, and integration with huge external datasets introduces third-party API risks. A good mitigation strategy largely boils down to data minimization: strictly limit the amount of sensitive data AI systems can access. To thwart model inversion attacks, train AI models on robust datasets and use techniques like differential privacy (a noise-addition sketch appears after this list). Also, conduct thorough assessments of API providers to vet their security practices, data management policies, and compliance with relevant regulations.
  • Use predictive analytics wisely: Predictive analytics uses historical data and ML algorithms to identify patterns and forecast customer behavior. While these insights are invaluable for the business, the deep involvement of personal data in the workflow raises privacy concerns; combining data from multiple sources, for instance, increases the risk of exposing personal information through correlation. To address this risk, anonymize or pseudonymize data before feeding it into predictive analytics models (see the pseudonymization sketch after this list). Off-the-shelf predictive analytics tools might not align with the unique security challenges of every organization; in that case, custom software development becomes the go-to strategy, allowing adaptation to a company-specific threat model, seamless integration with existing systems, and ML model adjustments for maximum security.
  • Mind the delicacy of sentiment analysis: These tools have raised ethical concerns over potential privacy infringement and bias based on gender and race. Promising as they are, they require a great deal of testing before they become a reliable source of insights. Ensure that customers are aware of, and have agreed to, the analysis of their communications. Also, implement data validation mechanisms to detect and filter out potentially malicious or biased inputs that could poison the sentiment analysis model (a validation sketch follows this list).
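
For the first two items, here is a minimal sketch (Python, standard library only) of what “encrypt in transit” and “authenticate before answering” can look like in practice: the client refuses anything weaker than TLS 1.2 when sending support events, and the backend compares tokens in constant time. The host name, endpoint path, and token handling are illustrative assumptions, not a drop-in implementation.

```python
import ssl
import hmac
import http.client
import json

API_HOST = "support.example.com"  # hypothetical support backend

def send_event_over_tls(payload: dict, bearer_token: str) -> int:
    """Send a support event, refusing anything weaker than TLS 1.2."""
    ctx = ssl.create_default_context()            # verifies the server certificate
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocol versions
    conn = http.client.HTTPSConnection(API_HOST, context=ctx)
    conn.request(
        "POST",
        "/v1/ar-events",                          # illustrative endpoint
        body=json.dumps(payload),
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
    )
    status = conn.getresponse().status
    conn.close()
    return status

def is_authorized(presented_token: str, expected_token: str) -> bool:
    """Constant-time token comparison on the server side (one factor of several)."""
    return hmac.compare_digest(presented_token, expected_token)
```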
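For the AI-model item, the differential-privacy idea can be illustrated with the classic Laplace mechanism: noise calibrated to a query’s sensitivity and a privacy budget epsilon is added to an aggregate before it leaves the analytics boundary. The sketch below is purely illustrative; production systems should rely on a vetted DP library rather than hand-rolled noise.

```python
import math
import random

def laplace_noise(sensitivity: float, epsilon: float) -> float:
    """Sample from Laplace(0, b) with b = sensitivity / epsilon (inverse-CDF method)."""
    b = sensitivity / epsilon
    u = random.uniform(-0.5, 0.5)
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records: list, epsilon: float = 1.0) -> float:
    """Release a record count with epsilon-DP; one record changes the count by at most 1."""
    return len(records) + laplace_noise(sensitivity=1.0, epsilon=epsilon)
```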
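For predictive analytics, pseudonymization can be as simple as replacing direct identifiers with keyed digests and dropping free-text fields before records enter the model pipeline. The field names and key handling below are assumptions for illustration; the key would normally live in a secrets manager.

```python
import hmac
import hashlib

PSEUDONYM_KEY = b"load-from-a-secrets-manager-not-source-code"  # placeholder

def pseudonymize(value: str) -> str:
    """Keyed, deterministic pseudonym so records can still be joined across sources."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_for_analytics(record: dict) -> dict:
    """Keep behavioral features, pseudonymize the identifier, drop everything else."""
    return {
        "customer_id": pseudonymize(record["email"]),
        "tickets_opened": record.get("tickets_opened", 0),
        "avg_resolution_hours": record.get("avg_resolution_hours"),
        # deliberately omitted: name, email, phone, free-text notes
    }
```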
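And for sentiment analysis, a basic pre-training validation pass can reject malformed labels, injected markup, and high-volume near-duplicates that look like poisoning attempts. The thresholds and patterns here are illustrative, not exhaustive.

```python
import re
from collections import Counter

VALID_LABELS = {"positive", "neutral", "negative"}
SUSPICIOUS = re.compile(r"<script|javascript:|\{\{.*\}\}", re.IGNORECASE)

def validate_samples(samples: list, max_duplicates: int = 50) -> list:
    """Filter a list of {'text': ..., 'label': ...} samples before training."""
    counts = Counter(s.get("text", "").strip().lower() for s in samples)
    cleaned = []
    for s in samples:
        text, label = s.get("text", ""), s.get("label", "")
        if label not in VALID_LABELS:
            continue  # malformed or tampered label
        if SUSPICIOUS.search(text):
            continue  # injected markup or script fragments
        if counts[text.strip().lower()] > max_duplicates:
            continue  # likely flooding / poisoning attempt
        cleaned.append(s)
    return cleaned
```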

Amid today’s rapid tech growth, it’s important to strike a balance between customer support advancements and user privacy. According to Cisco’s 2024 Data Privacy Benchmark Study, 94% of organizations say their customers won’t buy from them unless their data is properly protected.

While we all love innovation, it’s responsible innovation that pays off over the long haul. To get there, corporate security teams need to implement robust identity verification procedures, limit access to client data by job role (a brief sketch follows), and anonymize or pseudonymize customer data whenever possible.
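
As a rough illustration of limiting access by job role, the sketch below maps roles to the fields they may see and strips everything else before a customer record reaches an internal tool. The role names and fields are hypothetical assumptions.

```python
# Hypothetical role-to-field mapping; real deployments would source this from an IAM policy.
ROLE_FIELDS = {
    "support_agent": {"customer_id", "open_tickets", "last_contact"},
    "billing": {"customer_id", "plan", "invoices"},
    "analyst": {"customer_id", "tickets_opened", "avg_resolution_hours"},
}

def redact_for_role(record: dict, role: str) -> dict:
    """Return only the fields the given role is allowed to see; unknown roles see nothing."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}
```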

David Balaban, owner, Privacy-PC

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
