The demand for cyber insurance has grown continuously over the last five years; however, much of the risk modeling available to support it has fallen short or, in some cases, even been misleading. The claim that cyber risk is understood is frequently made, but the analysis is seldom backed by the predictive data needed to quantify it effectively and proactively, or to assess how varying aspects of confidentiality, integrity, and availability of data and services impact the business.
(Re)insurance industry leaders who are told their companies have a good handle on their explicit and silent cyber accumulations have every reason to be skeptical, and should consider approaches built on prospective modeling and better data.
While silent cyber risk has declined with efforts at Lloyd's and elsewhere to make coverage inclusions and exclusions better defined and clearer, problematic wordings (e.g., around attribution) remain plentiful.
Cyber-related risk is unique in that it can be both long- and short-tail and cuts across many classes within Property and Casualty. Perhaps most uniquely, it involves sentient agents engaged in their own game to extract value from would-be insureds. Much of the underwriting data gathered today remains focused on sources with tenuous correlation to either the likelihood or the severity of breaches. Additionally, many historical indicators may not retain their predictive value as attacker and defender dynamics and capabilities change. Current dependence on external scan data to evaluate security is not sufficient for modeling true cyber risk: modern network architectures, and the overwhelming importance of characteristics such as network segmentation, the precautionary practice of granting 'least privilege', and the ability to avoid silent failures through appropriately configured and monitored sensors, have changed the risk in ways external observation alone cannot capture.
The only truly viable long-term solution is prospective simulation modeling, which can combine internal, external, behavioral, and internet infrastructure data to explore a high-dimensional scenario space.
This has to be the main approach for understanding sensitivity and keeping pace with ever-changing vulnerabilities, exploits, and tactics on both the attacker and defender sides. It will ultimately replace the long-practiced method of historical loss modeling built on retrospective techniques derived from actuarial practice in non-adversarial settings.
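To make the idea concrete, the sketch below shows what a prospective, scenario-space simulation might look like at its simplest: each trial mixes an (internal) control posture with an (external) threat level and produces a simulated loss, from which a severity distribution can be read. All distributions, parameters, and names here are illustrative assumptions for exposition, not calibrated values or any particular vendor's model.

```python
# Minimal sketch of prospective simulation over a cyber scenario space.
# All distributions and parameters are illustrative assumptions.
import random
import statistics

def simulate_scenario(rng, controls_effectiveness, attacker_capability):
    """Return a simulated loss for one scenario (arbitrary currency units)."""
    # A breach becomes more likely as attacker capability exceeds control effectiveness.
    breach = rng.random() < max(0.01, attacker_capability - controls_effectiveness)
    if not breach:
        return 0.0
    # Severity is shaped by internal posture: better segmentation and least
    # privilege truncate lateral movement and therefore the blast radius.
    blast_radius = rng.betavariate(2, 2 + 6 * controls_effectiveness)
    base_loss = rng.lognormvariate(13, 1.2)  # heavy-tailed impact
    return base_loss * blast_radius

def run(n_trials=100_000, seed=7):
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        # Each trial samples one point in the scenario space: a mix of
        # control posture (internal telemetry) and threat level (external intel).
        controls = rng.uniform(0.2, 0.9)
        attacker = rng.uniform(0.3, 1.0)
        losses.append(simulate_scenario(rng, controls, attacker))
    losses.sort()
    return {
        "mean_loss": statistics.fmean(losses),
        "p99_loss": losses[int(0.99 * n_trials)],
        "breach_rate": sum(l > 0 for l in losses) / n_trials,
    }

if __name__ == "__main__":
    print(run())
```

In a real model the scenario space would carry many more dimensions (sector, asset criticality, supplier dependencies, threat actor intent), but the structure is the same: sample futures, not only pasts.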
Traditional actuarial techniques have focused on the analysis of historical loss experience based on consistent parameters, relationships, and classifications. In cyber, many of these are poorly understood, lack sufficient-quality data, and are undergoing continuous change. Even the classification of breaches undermines current approaches, particularly because many organizations that have suffered integrity and availability incidents have not historically disclosed them. The overall dynamism in cyber security, further exacerbated by variability in the effectiveness of attacker and defender operations teams using technology to enact or evade controls, renders many of these historical techniques limited at best and dangerous at worst. It is also becoming more difficult to detect and identify breaches in a timely fashion. Simple external scans, often coupled with overexuberant promises of stochastic cyber models that fail to deal with the adversarial nature of the peril and its practical effects on ergodicity, offer a false sense of security to executive decision-makers, underwriters, risk managers, and even regulators.
A more measured approach to cyber modeling needs to include simulation-based techniques that explore a broader range of futures, spanning confidentiality, integrity, and availability issues, rather than simply extrapolating from past events. Moreover, this approach can help organizations and insurers understand dependencies and begin to quantify broader business impacts across Information and Operational Technology services. Given that cyber security threats remain significant but relatively unpredictable, and that methods of attack vary, an effective risk model will need several key features:
- A focus on severity estimation, through improved understanding of impact, before moving to frequency drivers.
- The inclusion of a multiplicity of drivers, threats, and threat actors in a simulated environment that natively supports multi-agent interactions and incentives. This is akin to modeling biological evolution with both hunter and prey forcing change on each other (a toy illustration follows this list).
- The ability for models to incorporate new datasets quickly and without considerable modeler effort, setting the stage for teams to maintain the data liveness required for accurate risk assessment in high-dimensional environments.
- The consideration of how exogenous factors (e.g., the global security and geopolitical environment) may relate to the overall analysis of risk for individual entities, groups, and their associated supply chains in an increasingly interdependent world.
- The addition of cyber telematics that support continual, live risk modeling incorporating confidential internal data on the major drivers of internal security controls and observability, grounded in measured data rather than self-attestation.
- An open framework that allows practitioners to easily set up and maintain their own view of risk, and to communicate that view to counterparties and regulators.
- The capability to evaluate cyber risk on affirmative and non-affirmative bases, including its relationship and impact across multiple associated classes of traditional cover.
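The toy co-adaptation loop below illustrates the multi-agent point made above: an attacker and a defender each adjust their capability in response to the other's successes, so the loss-generating process never settles into a stable historical pattern. The adaptation rules and rates are illustrative assumptions, not an empirical model or any specific framework's behavior.

```python
# Toy sketch of multi-agent co-adaptation between an attacker and a defender,
# in the spirit of the hunter-and-prey analogy above. Rules and rates are
# illustrative assumptions only.
import random

def coevolve(rounds=50, seed=3):
    rng = random.Random(seed)
    attacker_skill, defender_control = 0.5, 0.5
    history = []
    for _ in range(rounds):
        # An attack succeeds with probability driven by the capability gap.
        p_success = min(0.95, max(0.05, 0.5 + (attacker_skill - defender_control)))
        success = rng.random() < p_success
        if success:
            # Defender learns from the incident; attacker eases off a proven play.
            defender_control = min(1.0, defender_control + rng.uniform(0.02, 0.08))
            attacker_skill = max(0.0, attacker_skill - rng.uniform(0.0, 0.02))
        else:
            # Attacker invests in new tooling; defender drifts without pressure.
            attacker_skill = min(1.0, attacker_skill + rng.uniform(0.02, 0.08))
            defender_control = max(0.0, defender_control - rng.uniform(0.0, 0.02))
        history.append((success, attacker_skill, defender_control))
    return history

if __name__ == "__main__":
    outcomes = coevolve()
    breach_rate = sum(s for s, _, _ in outcomes) / len(outcomes)
    print(f"breach rate over run: {breach_rate:.2f}")
```

Even this crude feedback loop produces non-stationary behavior, which is precisely why models fitted only to past loss experience lose validity over time.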
The industry has only just started to understand the magnitude of the change necessary to effectively model and manage cyber risk. Astute leaders must become aware that not all of those offering direction on cyber risk modeling today are knowledgeable enough about adversarial modeling or cyber-peril specifics to do so. Analyzing historical losses based on past confidentiality events and threat-environment relationships is not an adequate or acceptable approach for event set generation, damage functions, or financial impact estimation. New modeling approaches will help improve outcomes, even for highly periodic data. However, true lasting solutions at higher levels of market penetration will ultimately require proactive and continuous modeling techniques that make use of real-time observation data in order to build a sustainable, current basis for managing cyber and broader technology and operational risk transfer needs.