IT RISK MANAGEMENT MASTER CLASS

How to Build a Quantitative Technology Risk Management System

Ash Hunt

How do organizations make sense of multifarious technology risk data? How do they know if they’ve captured all the necessary attributes to properly assess a technology risk? How do they know what constitutes “enough data” to run an effective technology risk assessment?

Determining an organization’s technology loss profile and navigating emerging risk are challenging. There is a plethora of considerations across a complex array of requirements — methodology, frameworks, analytical models, data points, investment planning and more.

Frustratingly, qualitative techniques impede organizations seeking to reach this level of granularity in their risk analysis. Instead, compliance-based approaches creep in, promoting tick-box reviews and vague problem statements that describe high-level vulnerabilities and control deficiencies rather than actual risks. Random scoring is subsequently applied, and the organization produces an output that doesn’t provide the level of insight needed to make well-informed investment decisions.

While no approach can capture every potential eventuality, beginning with detailed loss scenarios provides far greater value for effective risk management than qualitative techniques can. Understanding the problem from both ends is critical: every organization has strategic objectives and invariably incurs loss in trying to achieve them. Technology’s role is to help determine which scenarios in its domain are most likely to occur and would cause the greatest damage in preventing an objective from being realized.

Articulating risk effectively requires specificity; you can’t measure what you can’t define. When structuring a loss scenario, organizations need to consider, at a minimum:

  • Threat: an internal or external agent of harm, either adversarial, accidental or environmental in nature
  • Threat Event: an action (or lack thereof), initiated by a threat against an asset, which is capable of causing harm
  • Asset: a valuable part of your technology estate that can be exploited to cause loss
  • Vulnerability: a weakness in the asset(s), which could be exploited by one or more threats
  • Loss: the effect of harm through breaching confidentiality, integrity and/or availability

Ignoring any of those elements leaves an unstructured concern at best, preventing the analyst from determining the parameters needed to measure the problem.
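To make that structure concrete, below is a minimal sketch of a loss scenario captured as a simple record, assuming illustrative field names and values; a real risk register would extend these attributes considerably.

```python
# A minimal sketch of a loss scenario record; field names and values are
# illustrative, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class LossScenario:
    threat: str         # internal or external agent of harm
    threat_event: str   # action (or inaction) initiated against an asset
    asset: str          # valuable part of the technology estate
    vulnerability: str  # weakness that could be exploited
    loss: str           # effect of harm (confidentiality, integrity, availability)

scenario = LossScenario(
    threat="External criminal group",
    threat_event="Ransomware deployment via phishing",
    asset="Customer order-processing platform",
    vulnerability="Unpatched remote-access gateway",
    loss="Availability loss plus response and recovery costs",
)
```

Requiring every field to be populated is one simple way to stop unstructured concerns from entering the register in the first place.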

With a clearly defined scenario, data needs to be collected for the given parameters. This is where skeptics of quantitative approaches claim there is not “enough” data to measure the scenario. Of course, no such arbitrary threshold exists, mathematically or otherwise. Hordes of data samples exist across the profession and within the organization. Moreover, existing data constraints don’t prevent analysts from leveraging qualitative mental models to conduct risk assessments today, so there’s no reason why they would impede the adoption of more robust risk measurement techniques. It’s worth noting that uncertainty modeling exists because of a lack of data, not in spite of it. Practitioners should focus on leveraging judgements and calibrated estimates to establish relationships, and on conditioning those estimates with empirical data.
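As one illustration of combining calibrated estimates with a distribution, a 90% confidence interval elicited from a calibrated estimator can be fitted to a lognormal curve and sampled; the interval below is hypothetical.

```python
# A sketch of turning a calibrated 90% confidence interval (5th-95th
# percentile) into a lognormal distribution that can be sampled; the
# dollar figures are hypothetical.
import math
import random

def sample_from_ci(low: float, high: float, samples: int = 10_000) -> list[float]:
    z90 = 1.645  # z-score bounding the central 90% of a normal distribution
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * z90)
    return [random.lognormvariate(mu, sigma) for _ in range(samples)]

# Calibrated estimate: 90% confident a single incident costs between $50k and $2M.
losses = sorted(sample_from_ci(50_000, 2_000_000))
print(f"Median simulated loss: ${losses[len(losses) // 2]:,.0f}")
```

As empirical data accumulates, the fitted parameters can be revisited and the estimates conditioned accordingly.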

This exercise will serve to integrate technology with wide-ranging business functions. Analysts will need to capture multiple data points to support estimates, including salary averages, revenue generated per minute for a particular system, client retention and prospective business. Technology risk teams often fail to capture finance telemetry, but all risk is business risk. Recognizing the range of losses technology incidents can incur (all of which manifest in financial cost to the business) also helps position technology as a revenue and opportunity creator rather than a cost center. With each investment made to remediate a risk, the reduction in loss exposure can be tied back to a business opportunity that generates increased revenue and growth.
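As a rough sketch of how those data points roll up into a single loss figure, the function below sums lost revenue, response effort and client churn; every figure is a placeholder an analyst would source from finance, HR and sales.

```python
# A sketch of aggregating loss components from business data points; all
# figures are hypothetical placeholders.
def incident_loss(downtime_minutes: float,
                  revenue_per_minute: float,
                  response_hours: float,
                  avg_hourly_salary: float,
                  lost_clients: int,
                  annual_value_per_client: float) -> float:
    lost_revenue = downtime_minutes * revenue_per_minute   # interrupted sales
    response_cost = response_hours * avg_hourly_salary     # staff time to recover
    churn_cost = lost_clients * annual_value_per_client    # retention impact
    return lost_revenue + response_cost + churn_cost

# Hypothetical figures: four hours of downtime on a system earning $500/minute,
# 120 hours of response effort at $85/hour, and three lost clients worth $40k each.
print(f"${incident_loss(240, 500, 120, 85, 3, 40_000):,.0f}")  # $250,200
```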

A risk-based investment for technology typically involves controls — technology operations or security processes and safeguards against risk, which either reduce the likelihood of event occurrence or the extent of loss incurred. There are several considerations surrounding controls that can provide quick wins in building an effective technology risk management program:

  • Create control mappings to threat events to swiftly determine coverage against given scenarios (see the sketch following this list). Mappings should also be created for all industry standards and regulatory obligations to demonstrate inherent compliance.
  • Measure control design and operating effectiveness (continuously for the latter, if possible). Understanding and quantifying a control’s operating effectiveness also identifies the extent of deficiency or vulnerability the system has to relevant threat events. The delta in operating effectiveness is a useful measurement to inform probability estimates for related loss scenarios.
  • Determine the cost of controls — whether it’s the license price of a tool or OPEX across a control suite, CISOs and security leaders can measure the expenditure required to protect against specific threats and demonstrate how much loss exposure is consequently reduced over time.
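Building on the mapping and effectiveness points above, a minimal sketch of a control-to-threat-event mapping with measured operating effectiveness might look like the following; the control names, threat events and scores are hypothetical.

```python
# A sketch of mapping controls to threat events and surfacing coverage gaps;
# control names, threat events and effectiveness scores are hypothetical.
control_map = {
    "MFA on remote access": {"credential stuffing", "phishing-led account takeover"},
    "Offline backups": {"ransomware deployment"},
    "EDR on endpoints": {"ransomware deployment", "malware execution"},
}

# Measured operating effectiveness (0.0-1.0), e.g. from continuous control testing.
effectiveness = {
    "MFA on remote access": 0.92,
    "Offline backups": 0.70,
    "EDR on endpoints": 0.85,
}

def coverage(threat_event: str) -> dict[str, float]:
    """Return the controls covering a threat event, with their effectiveness."""
    return {name: effectiveness[name]
            for name, events in control_map.items() if threat_event in events}

print(coverage("ransomware deployment"))      # {'Offline backups': 0.7, 'EDR on endpoints': 0.85}
print(coverage("insider data exfiltration"))  # {} -> an uncovered threat event
```

The gap between a control’s target and measured effectiveness can then feed the probability estimates for the related loss scenarios.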

Unfortunately, traditional but widely established paradigms (e.g. ISO) promote assessing scenarios in the absence of controls when pursuing the nebulous notion of inherent risk. While any scenario can be modelled, this is a low-value use case, given that organizations across industries have always relied on controls to function effectively. Controls should be the nexus of technology risk analysis.

Organizations should realign their focus from ill-defined and poorly structured concepts — such as inherent and residual risk — to measuring their existing loss exposure for specific scenarios: What would the fallout look like if the incident occurred today in our control environment? Leveraging a Monte Carlo engine, organizations should then run sensitivity analyses testing hypothetical control improvements: What would the fallout from the same incident look like if better controls were established? Ideally, you would expect to see a lower distribution of loss exposure. Dividing the delta in loss exposure between these two scenarios by the cost of the modeled control improvements gives your risk spend efficiency: the amount of risk reduction gained per dollar invested.
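As a sketch of that comparison, the simulation below contrasts today’s loss exposure with a hypothetical control improvement and derives risk spend efficiency; the event rates, loss intervals and control cost are illustrative assumptions rather than the author’s model.

```python
# A minimal Monte Carlo sketch: annual loss exposure under current vs. improved
# controls, and the resulting risk spend efficiency. All parameters are assumed.
import math
import random

def sample_poisson(lam: float) -> int:
    """Sample an event count from a Poisson distribution (Knuth's method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_annual_loss(event_rate: float, ci_low: float, ci_high: float,
                         trials: int = 50_000) -> list[float]:
    """Annual loss: Poisson event frequency x lognormal per-event magnitude."""
    z90 = 1.645
    mu = (math.log(ci_low) + math.log(ci_high)) / 2
    sigma = (math.log(ci_high) - math.log(ci_low)) / (2 * z90)
    return [sum(random.lognormvariate(mu, sigma) for _ in range(sample_poisson(event_rate)))
            for _ in range(trials)]

current = simulate_annual_loss(event_rate=0.8, ci_low=100_000, ci_high=3_000_000)
improved = simulate_annual_loss(event_rate=0.3, ci_low=50_000, ci_high=1_500_000)

control_cost = 250_000  # assumed annual cost of the modeled control improvements
delta = sum(current) / len(current) - sum(improved) / len(improved)
print(f"Risk spend efficiency: ${delta / control_cost:.2f} of loss reduction per $1 invested")
```

The same harness supports sensitivity analysis: vary one control parameter at a time and observe how the loss distribution and the efficiency figure respond.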

Conducting analytical processes across critical loss scenarios adds clarity to decision-making, furnishing security leaders with an arsenal of data to execute informed investment decisions based on accurate prioritization. In doing so, technology risk management is transformed into decision science and a demonstrable value-add for the business.

In the “How Information Security Professionals Can Better Communicate Risks to the Board” video, learn how to drive meaningful engagement with the board and build alignment between technical initiatives and business outcomes.

Ash Hunt

Ash Hunt is a global CISO, international keynote speaker and frequent board advisor with a decade of experience in complex, multinational environments. He has worked extensively across UK government departments, FTSE/FORBES organizations and Critical National Infrastructure (CNI), in addition to authoring the UK’s first quantitative framework and actuarial model for information risk. He has also served as a media commentator for Sky News and ITV on cyber security issues.