The rapid adoption and evolving deployment of artificial intelligence (AI) in recent years have heightened the risk of cyber event aggregation from both malicious and accidental sources, according to a recent paper by Guy Carpenter.
The paper identifies four main ways in which AI development and deployment contribute to this aggregation: software supply chain risk, expansion of the attack surface, increased data exposure, and growing usage in cyber security operations.
For businesses to use AI, the technology must be deployed either within the company’s own network or externally through third-party services such as ChatGPT or Claude. In either case, if the underlying AI model is compromised or fails, the risk extends to every customer relying on it.
Once deployed, AI models interact with users by processing inputs and generating outputs. However, these processes can be vulnerable to both malicious and accidental manipulation.
For example, “jailbreaking” refers to tricking a model into acting outside its intended limits, which can lead to data exposure, loss of availability, or network breaches.
Additionally, AI models can provide incorrect answers, potentially resulting in liability and significant consequences for firms if safeguards are not in place.
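The paper does not prescribe specific controls, but one simple illustration of what such safeguards can look like is a guardrail layer that screens prompts and model outputs around the model call. The sketch below is a minimal, hypothetical Python example; the pattern list, function names and output checks are assumptions for illustration, not drawn from the paper or any particular product.

```python
# Hypothetical guardrail sketch: screen prompts for jailbreak-style phrasing
# and check model outputs before returning them. Patterns and names are
# illustrative only.
import re

BLOCKED_PATTERNS = [
    r"ignore (all|previous) instructions",   # common jailbreak phrasing
    r"reveal (your )?system prompt",
]

def looks_like_jailbreak(prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak pattern."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def answer(prompt: str, call_model) -> str:
    """Wrap an arbitrary model call with input- and output-side checks."""
    if looks_like_jailbreak(prompt):
        return "Request declined by policy."
    output = call_model(prompt)
    # Output-side check: refuse to return text that appears to echo secrets.
    if "API_KEY" in output or "BEGIN PRIVATE KEY" in output:
        return "Response withheld pending human review."
    return output
```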
A model’s effectiveness depends on the quality of its training data, which often includes large, sensitive datasets. To train the model, this data must be exposed or replicated, creating a market for data storage and engineering solutions that carry their own risks.
As AI models improve with more data, both vendors and customers are driven to aggregate it, which enhances the model but also concentrates risk: a compromise of centralised storage or training infrastructure can have serious consequences.
While AI is often praised for its potential in cybersecurity, such as automating high-level security tasks, it also introduces risks. Automation can lead to errors and vulnerabilities, especially in response orchestration where AI might take actions like quarantining systems, cutting off network access, or resetting credentials without human intervention.
This “machine-speed” response is valuable, but AI in security must be managed with safeguards and manual override options so that automated actions do not exacerbate an incident or cause unintended consequences.
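One common way to express that safeguard is a human-in-the-loop gate: routine actions execute automatically, while high-impact ones, such as quarantining a host or revoking credentials, are held for analyst approval. The Python sketch below illustrates the idea under that assumption; the action names and approval callback are hypothetical and not taken from the paper.

```python
# Simplified "manual override" sketch: high-impact response actions proposed
# by an automated system are gated behind human approval rather than executed
# immediately. Names and mechanisms are hypothetical.
from dataclasses import dataclass

HIGH_IMPACT_ACTIONS = {"quarantine_host", "revoke_credentials", "isolate_segment"}

@dataclass
class ProposedAction:
    name: str
    target: str

def dispatch(action: ProposedAction, execute, request_approval) -> str:
    """Run low-impact actions at machine speed; gate high-impact ones."""
    if action.name in HIGH_IMPACT_ACTIONS:
        if not request_approval(action):          # human in the loop
            return f"{action.name} on {action.target}: held for analyst review"
    execute(action)
    return f"{action.name} on {action.target}: executed"

# Usage idea: the approval callback might open a ticket or a chat prompt, e.g.
# dispatch(ProposedAction("quarantine_host", "srv-042"),
#          execute=run_playbook, request_approval=ask_analyst)
```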
Despite these risks, Guy Carpenter emphasises that re/insurers should view AI as a growth opportunity rather than a risk to avoid.
Guy Carpenter concludes, “As the first step of underwriting and managing AI portfolio risk, re/insurers should ask detailed questions and collect robust data about their insureds’ AI model development, deployment and testing: What checks exist to ensure confidentiality, integrity and availability? Are sensitive datasets aggregated for research and, if so, how is that data secured? Are AI solutions part of the security infrastructure and, if so, how are privileges managed and monitored?
“We have already seen how the answers to these questions can mean the difference between compromise and confidence. Leveraging this data allows re/insurers to better understand this risk, underwrite AI exposures profitably and manage aggregation risk as AI-related insurance volume scales with technological advancements.”