AI's Potential While Guarding Against Emerging Cyber Threats: A Call to Action for CISOs
- Eva Frankenberger
- May 12, 2024
- 14 min read

As we continue to navigate the vast and rapidly evolving landscape of artificial intelligence (AI), the dual themes of innovation and security remain at the forefront of the corporate agenda. The potential of AI to revolutionize industries is immense, yet the cybersecurity challenges it brings cannot be understated. My recent participation at a Global CxO Institute conference, focusing on AI development, utilization, management, and governance, highlighted these points vividly.
The Rise of LLMjacking: A New Cyber Threat
During the conference, I delved into a range of current cybersecurity threats as part of the CISO stream. Among the discussed threats were data poisoning, prompt injections, remote code executions, and both cross-site and server-side request forgery. However, a particularly alarming development is the emergence of "LLMjacking." This novel attack exploits large language models (LLMs) to generate substantial unauthorized consumption costs—potentially over $46,000 per day according to Sysdig. LLMjacking not only signifies a shift in the nature of cyber attacks but also underscores the financial implications of AI vulnerabilities.
The Escalating Threat of AI-Powered Ransomware
The sophistication of AI-driven attacks raises a critical question: Could attackers not only use the compromised AI for financial gain but also alter its algorithm to deploy ransomware? If ransomware can be embedded within AI outputs, such as generated text, images, or files, the implications are dire. This method could facilitate the rapid spread of ransomware to a vast number of end-users almost instantaneously, significantly magnifying the attack's impact.
Threat of embedding ransomware or other malicious code could also apply to generative Large Language Models (LLMs) used in healthcare, automotive industry, critical infrastructure and other fields, where the output includes images, files, or text. The process of embedding malware in these diverse output formats increases the avenues through which cyber threats can manifest, making it a significant concern for security professionals. Here’s how this could occur across different output types:
Text Outputs: Malicious scripts or links could be embedded within text outputs generated by an LLM. For instance, a resume or a job description generated by an LLM could include hyperlinks that, when clicked, lead to malicious websites or directly initiate downloads of malware.
Image Files: Malware can be embedded in images through techniques such as steganography, where malicious code is hidden within an image file’s data. When these images are opened or processed by certain applications, the embedded code could be executed.
Generated Files: LLMs that generate or modify files (like PDFs, documents, or spreadsheets) could potentially embed malware within these files. When a user opens the file, the malware could be triggered, initiating a ransomware attack or other malicious activities.
To safeguard against such threats, organizations should consider the following security measures:
Content Filtering and Antivirus Scanning: Implement advanced content filtering and antivirus scanning on all files and outputs generated by AI systems before they are made available to users.
Secure AI Development Practices: Adopt secure coding practices and thorough testing environments for AI development to minimize vulnerabilities that could be exploited to inject malicious code.
User Education and Awareness: Educate users about the risks of opening files, clicking links, or executing scripts from untrusted sources, even if they appear to come from a legitimate internal system.
Use of Trusted Models: Work with trusted, secure sources for AI models and maintain them with regular updates and security patches to protect against known vulnerabilities.
In addition, attackers could potentially manipulate the algorithm in such a way that ransomware or other malicious code is spread through the output data to customers, both internal and external. This type of attack would represent a serious escalation in the capabilities of cyber threats involving LLMs.
It is of essence to recognize that the outputs from generative LLMs could be used as vectors for malware distribution, including ransomware, organizations can better tailor their cybersecurity strategies to address these emerging and sophisticated threats.
These are examples of vectors:
Model Manipulation: First, the attacker would need the ability to alter the LLM’s algorithm or insert malicious code within it. This could be done by exploiting a vulnerability in the model's management framework or gaining unauthorized access to the underlying systems.
Malicious Payloads: The manipulated model could then be programmed to generate outputs that include ransomware triggers or other types of malware. For example, the model might output executable scripts disguised within seemingly normal text, hyperlinks that lead to malicious sites, or even direct file attachments that contain malware.
Propagation via Outputs: When these compromised outputs are delivered to users—whether they're employees, customers, or other stakeholders—interacting with these outputs could trigger the malicious actions. This could involve running a script, opening a malicious file, or visiting a compromised website, leading to the malware executing on the user's system.
Ransomware Deployment: If the malicious output includes ransomware, the infection could lock or encrypt the recipient's data, subsequently demanding a ransom to unlock it. The distribution through an LLM could potentially reach a wide array of users quickly, amplifying the attack's impact.
To mitigate such risks, organizations need to implement advanced security measures:
Robust Monitoring and Logging: Track all activity related to AI models and analyze outputs for signs of tampering or unexpected behavior.
Secure Model Management: Protect access to AI models with strong authentication and authorization controls, and ensure that software updates and model changes are securely managed.
Output Sanitization: Implement processes to check and sanitize outputs from AI systems before they are sent to end users, potentially using additional layers of security software to detect and block malicious content.
Incident Response: Prepare for potential breaches by having an incident response plan that can quickly isolate affected systems, identify the breach extent, and remediate impacts. I will depict and discuss in detail why an incident responses and recovery with AI in your ecosystem is significantly different than in the traditional technology environment.
Moreover, organization should consider the scenario where compromised AI outputs are automatically integrated into other systems. This level of automation in AI systems, designed to streamline processes, could instead serve as a conduit for widespread dissemination of malware. The integration of malicious AI outputs could lead to a domino effect, rapidly compromising multiple systems across an organization. Ransomware could indeed be automatically embedded in other systems through an automated process where outputs from a compromised Large Language Model (LLM) are integrated into another system that ingests these outputs. This type of attack vector amplifies the risk and potential damage of ransomware because it leverages the interconnectedness and automated workflows common in modern IT environments.
This scenario could unfold as follows:
Automatic Integration: In many organizations, LLMs are used to generate content that is automatically integrated into other systems for further processing or distribution. For example, an LLM might generate reports, emails, or code snippets that are directly fed into databases, email servers, or even directly into software development pipelines.
Propagation of Malicious Outputs: If the LLM is compromised to include ransomware in its outputs, these malicious outputs could be automatically transferred to other systems without manual intervention. For instance, a malicious script embedded in an automatically generated email could be sent through a company’s email system, or a compromised code snippet could be pushed into production environments.
Execution of Malware: Upon being integrated into these other systems, the embedded ransomware could be executed. This execution might happen in several ways, depending on the nature of the output and the system:
a. File Execution: Direct execution of a file (like a PDF or executable) that contains ransomware.
b. Script Execution: Execution of a script within an environment that processes scripts automatically, such as a server or an application runtime.
c. Database Injection: SQL injection or similar attacks that deliver malicious payloads into database systems, which could then spread the ransomware or other malware.
Widespread Impact: Because these processes are automated and the integration is seamless, the ransomware can spread rapidly across systems, potentially affecting multiple parts of the organization simultaneously. This can lead to widespread data encryption, system lockdowns, and significant disruption.
To mitigate these risks, it's crucial for organizations to implement robust security measures, including:
Validation and Sanitization: Automatically validate and sanitize all outputs from LLMs before they are integrated into other systems. This includes checking for any code, links, or attachments that could be malicious.
Segmentation and Access Controls: Use network segmentation and strict access controls to limit how data from one part of the business can interact with other parts. This helps contain any potential spread of malware.
Monitoring and Anomaly Detection: Employ advanced monitoring and anomaly detection tools to quickly identify and respond to unusual activities that could indicate the presence of malware within integrated systems.
AI Incident Response and Recovery: A Specialized Approach
Addressing the unique challenges of AI in cybersecurity necessitates a specialized approach to incident response and recovery. AI systems, characterized by their complexity and continuous learning capabilities, require incident responses that go beyond traditional IT security measures. These responses must incorporate expertise in data science, legal, and ethical considerations, particularly because AI systems often operate under stringent regulatory frameworks.
The continuous learning and adaptation of AI systems mean that simply restoring a system to its pre-incident state might not suffice. A comprehensive strategy including retraining of the AI with clean data is often necessary to ensure the integrity and functionality of the system post-recovery.
AI incident response and recovery plans do share foundational elements with traditional incident response and recovery strategies, but they also contain unique considerations due to the complexities and specific characteristics of AI systems. Here’s how AI incident response and recovery plans differ and why these differences are important:
Complexity of AI Systems: AI systems, especially those using machine learning, can be black boxes with complex data inputs and model behaviors. Understanding how an AI system has been compromised or is malfunctioning requires specialized knowledge that goes beyond traditional IT expertise. This includes understanding data science, model training, and the specific architecture of the AI being used.
Data Poisoning and Model Drift: AI-specific threats like data poisoning (introducing malicious data to skew AI decisions) or model drift (where the model's accuracy degrades over time due to changes in underlying data patterns) are not typical concerns in non-AI systems. Responding to these issues involves not only identifying and mitigating the immediate impact but also recalibrating or retraining models, which can be both resource-intensive and time-consuming.
Explainability and Transparency: When an incident involves an AI system, part of the recovery process may involve dissecting decisions made by the AI to understand the root cause. This requires tools and processes for explainability, allowing responders to trace back AI decisions to specific data inputs and model behaviors. This level of transparency is crucial for rectifying issues and restoring trust but is not typically required in traditional systems.
Regulatory and Ethical Implications: AI systems often operate in environments with stringent regulatory and ethical requirements, especially if they make decisions impacting human lives (e.g., healthcare, financial services). Incident response in these contexts must consider not only technical and operational recovery but also compliance with legal standards and ethical norms, potentially requiring specialized legal and ethical expertise as part of the response team.
Continuous Learning and Adaptation: Unlike traditional systems, many AI systems are designed to continuously learn and adapt from new data. This characteristic can complicate incident response and recovery. For instance, if an AI system learns from malicious inputs during an attack, simply restoring the system to a pre-attack state might not be sufficient; the system might also need retraining from clean, verified datasets.
Automation and Scale: AI systems often operate at a scale and speed that surpass human capabilities. This can amplify the impact of any incident and requires rapid, sometimes automated responses that are different from more manual recovery processes in traditional IT environments.
Given these unique challenges, AI incident response and recovery plans often require a multidisciplinary approach involving cybersecurity experts, data scientists, legal advisors, and ethical oversight teams. Planning ahead, having a clear understanding of the AI's operational parameters, and maintaining rigorous monitoring and testing regimes are all crucial to effective incident management in AI systems.
AI's Role in Enterprise Solutions: Ai management, governance, the AI-First Companies
All the aspects mentioned above suggest that a more structured approach must be adopted, with AI management and AI governance being implemented from the outset. AI's integration into enterprise solutions is most notable in "AI-first" companies, where AI is not just a part of the business strategy but the central component. Industries like finance, healthcare, and customer service have increasingly adopted AI to drive innovation, automate processes, and enhance decision-making. These sectors face heightened risks due to the critical nature of their data and the consequences of potential AI malfunctions or security breaches. Consequently, AI management and governance play crucial roles in mitigating the risks associated with AI cyber attacks (such as OWASP Top 10 for LLM) . Effective AI governance involves establishing policies, procedures, and oversight mechanisms to ensure the secure, ethical, and responsible use of AI technologies.
Here are key aspects of how AI governance can help prevent AI-related cyber attacks:
Security by Design: AI governance mandates integrating security at every stage of the AI system development lifecycle. This includes
Secure coding practices,
Thorough testing for vulnerabilities, and the use of
Privacy-preserving technologies such as encryption and anonymization.
Implementing security by design helps in minimizing the risk of vulnerabilities that attackers could exploit.
Access Controls: Proper AI governance ensures
Strict access control policies are in place,
Limiting who can interact with the AI systems
Under what circumstances
Considering the risk of excessive agency of LLM models. Accurately tailored Access control also in the AI world can prevent unauthorized access or manipulation of AI systems, reducing the risk of insider threats and external breaches.
Regular Audits and Compliance Checks: AI governance frameworks often require regular audits to ensure compliance with internal security policies and external regulatory requirements. These audits help identify and rectify security gaps in AI implementations, thereby reducing the likelihood of successful cyber attacks.
Data Integrity Measures:
Governance policies enforce stringent data integrity measures to prevent data poisoning and other forms of data manipulation.
This includes mechanisms to ensure the accuracy and reliability of data fed into and
Generated by AI systems, which is crucial for preventing attacks aimed at corrupting data to mislead AI decision-making processes.
5. Incident Response and Recovery Plans, as stated in the sections above
6. Transparency and Explainability: Promoting transparency in AI operations helps stakeholders understand how AI systems make decisions. This is crucial for identifying any malicious manipulations or biases introduced into the system, which could potentially be exploited by attackers.
7. Ethical Considerations and Risk Assessments: AI governance also encompasses ethical considerations and comprehensive risk assessments. These practices help identify potential misuse of AI technologies and the corresponding risks, allowing organizations to implement targeted security measures proactively.
By addressing these aspects, AI governance not only enhances the overall security posture of an organization but also builds trust among users and stakeholders. It ensures that AI systems operate within established boundaries and that their outputs are both reliable and secure against cyber threats.
AI specific regulations, liability and other legal implications
If an organization is not capable to evidence the aspects discussed above, there are significant liability and legal implications for companies leveraging AI technologies, particularly for those positioning themselves as AI-first. These implications stem from various areas such as data privacy, compliance with regulations, ethical considerations, and potential malfunctions or misuse of AI systems.
An overview of the key legal and liability issues:
Data Privacy and Protection: Ensure compliance with data protection laws like GDPR and CCPA, which involves proper data handling, obtaining necessary consents, and maintaining transparency in data usage. A right to be forgotten will be a substantial challenge.
Intellectual Property Rights: Address potential issues related to the creation and use of content generated by AI, and navigate copyright and patent considerations, especially when using third-party data or methods.
Accountability and Transparency: Meet regulatory demands for transparency and accountability in AI decision-making, particularly in critical sectors like healthcare and finance.
Ethical Considerations: Manage risks related to bias, fairness, and discrimination to avoid legal consequences and reputational damage.
Product Liability: Clarify liability in cases where AI malfunctions, which may involve multiple parties including developers, users, and third-party vendors.
Contractual Obligations and Consumer Protection: Craft clear contracts that outline AI capabilities and limitations, ensuring compliance with consumer protection laws to prevent breaches and legal disputes.
AI-Specific Regulations: Stay updated and comply with emerging AI-specific laws and regulations that dictate the development, deployment, and usage of AI technologies.
Some examples of AI-specific regulations:
Ethical AI Frameworks: EU’s Ethics Guidelines for Trustworthy AI - Sets seven key requirements for ethical AI across the EU.
Transparency and Explainability Requirements: Right to Explanation under GDPR: Allows EU citizens to request explanations of decisions made by automated systems.
Safety and Certification Standards: ISO 26262 - International standard for functional safety of electrical and electronic systems in road vehicles, applicable to AI in autonomous vehicles.
Data Usage Regulations: California Consumer Privacy Act (CCPA) - Enhances privacy rights for California residents, impacting how businesses handle data collected by AI systems.
Sector-Specific AI Regulation: FDA guidelines for AI in medical devices - Ensures that AI used in healthcare is safe and effective.
International Cooperation and Standards: OECD AI Principles - Provides a framework for responsible AI design and use, endorsed by over 40 countries.
Oversight and Governance Bodies: UK’s Centre for Data Ethics and Innovation (CDEI) - Advises on ethical and effective governance of AI technologies.
Impact Assessments: EU Artificial Intelligence Act: Mandates thorough testing and assessment for high-risk AI systems before deployment.
AI knowledge and awareness, resilience: Get CISOs teams AI ready
To ensure that Chief Information Security Officers (CISOs) and their teams are adequately prepared to protect AI systems and manage potential cybersecurity incidents, a multifaceted approach is necessary. This involves upskilling and reskilling, leveraging the right tools and technologies, and implementing robust governance frameworks.
Up-skilling and Re-skilling
Specialized Training: CISOs and their teams need targeted training in AI technologies and the specific cybersecurity challenges they present. This includes understanding AI architectures, algorithms, data processing, and potential attack vectors such as data poisoning, model tampering, and adversarial attacks.
Understanding AI and Machine Learning: Basic and advanced courses on AI and machine learning principles, including types of models (like neural networks, decision trees, etc.), training processes, and typical use cases.
AI Architectures and Data Pipelines: Training on different AI architectures and the data pipelines that feed into AI systems, focusing on how data is collected, processed, and utilized in training AI models.
Security Vulnerabilities of AI Systems: Detailed exploration of specific vulnerabilities associated with AI systems, such as adversarial attacks, data poisoning, model inversion attacks, and evasion techniques.
Forensics in AI: Techniques for AI forensics, including how to trace back inputs to outputs, understand model decisions, and investigate incidents involving AI systems
Integrating AI with Cybersecurity Practices: Training on how to integrate AI tools into existing cybersecurity practices effectively. This includes using AI-driven security software for threat detection and response.
AI Risk Management: How to assess and manage risks specific to AI technologies, including risk assessment frameworks tailored for AI and strategies for mitigating potential impacts.
Ethical AI Use: Understanding the ethical implications of AI, including bias, fairness, accountability, and transparency. Training should also cover ethical guidelines for the development and deployment of AI systems.
Compliance and Regulatory Training: Detailed knowledge of existing regulations affecting AI, such as GDPR (General Data Protection Regulation) in Europe, or other local and international laws that govern data privacy and AI deployment.
Continuous Learning: through workshops, seminars, certifications, and courses on the latest AI and cybersecurity trends is essential.
Cross-disciplinary Expertise: Encouraging a blend of skills in teams can be highly beneficial. Combining expertise in cybersecurity, data science, and AI development can lead to more robust defensive strategies. Teams should understand not only how AI systems work but also how they can be exploited.
Leveraging Tools and Technologies
AI Security Tools: Invest in tools specifically designed to secure AI systems. These might include solutions for real-time monitoring of AI systems, automated testing tools to detect vulnerabilities in AI models, and tools for anomaly detection that can identify unusual patterns suggesting a breach or attack.
Simulation and Testing: Regularly test AI systems using red team exercises and penetration testing tailored for AI scenarios. These tests can help identify vulnerabilities in an AI system's design and implementation before they can be exploited by attackers.
Advanced Analytics and AI in Cybersecurity: Employ AI-powered cybersecurity solutions that can predict, detect, and respond to threats more quickly than traditional methods. These tools can analyze large volumes of data for signs of security breaches and automate responses to incidents.
Conclusion
Preparing CISOs, their teams, and other relevant organizational units such the legal, compliance, data privacy and other teams for the AI-driven future is crucial for the security and success of enterprises. As companies increasingly adopt AI technologies, it's essential to enhance the AI readiness of these teams through specialized training, adherence to AI-specific regulations, and the implementation of robust governance frameworks. This preparation will not only equip them to tackle the unique cybersecurity challenges presented by AI but also enable them to leverage AI responsibly and effectively. By investing in continuous education, adapting to regulatory requirements, and fostering an ethical AI culture, organizations can ensure that their AI initiatives are both innovative and secure, safeguarding their operations and enhancing their competitive edge in the digital age.
Commentaires