Operational Cyber-Risk Metrics the Experts Like Most

Apr 2, 2019

To prioritize risk-remediation efforts properly, you must have a system for quantifying cyber risk. Risk-scoring metrics can vary depending on the type of business, the threat landscape in which it operates, and its appetite for risk. Do you have a favorite risk metric?

We decided to find out what risk metrics the security-operations experts from different industries rely on most to manage and prioritize their day-to-day operations. Here’s what a few of the experts had to say:

Lowman Hatfield, system engineer ISSE, information assurance manager, program manager, BIT Systems: One of my favorite risk metrics that helps me prioritize my day is the number of known vulnerabilities that still exist after a patch-cycle event. Even with a robust patch-management program, nodes may not have been patched for a number of reasons:

  • A user did not log out at the end of the day, so a required reboot did not take place.
  • An exception was submitted to exempt the node from patching, for any of several reasons.
  • The node is not managed by the master CM process (Puppet or Spacewalk), so it did not receive the latest patch and has to be hunted down and patched manually.

I use a dashboard that shows the number of systems that did not receive the latest patches and therefore need my focus that day. I like this metric because, for me, patch management is one of the first steps in implementing a strong security-vulnerability program, and this metric is just one layer in our defense-in-depth approach to a successful information-assurance program.
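
As a rough illustration only, here is a minimal sketch of how such a daily patch-dashboard count could be produced; the node fields, patch-level identifier, and report shape are hypothetical assumptions, not a description of BIT Systems' actual tooling.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical node inventory record; field names are illustrative only.
@dataclass
class Node:
    hostname: str
    patch_level: str           # e.g. "2019-03"
    managed_by_cm: bool        # under Puppet/Spacewalk-style management?
    rebooted_since_patch: bool

LATEST_PATCH_LEVEL = "2019-03"  # assumed identifier for the current patch cycle

def unpatched_after_cycle(nodes):
    """Return nodes that still lack the latest patch level after the cycle."""
    return [n for n in nodes
            if n.patch_level != LATEST_PATCH_LEVEL or not n.rebooted_since_patch]

def daily_focus_report(nodes):
    """Summarize what a daily dashboard along these lines might surface."""
    missing = unpatched_after_cycle(nodes)
    unmanaged = [n for n in missing if not n.managed_by_cm]
    return {
        "date": date.today().isoformat(),
        "known_vulnerable_nodes": len(missing),
        "needing_manual_patch": [n.hostname for n in unmanaged],
    }

if __name__ == "__main__":
    inventory = [
        Node("web01", "2019-03", True, True),
        Node("db02", "2019-02", True, False),    # missed reboot
        Node("lab07", "2019-01", False, False),  # outside CM, patch manually
    ]
    print(daily_focus_report(inventory))
```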

Guido DiMatteo, product security engineer/information security specialist, Xerox: I find the mean-time-to-detect and mean-time-to-respond metric extremely useful for identifying the difference between product-security response times and escalation-support mitigation delivery. Customer perception is key when determining satisfaction with product-security support, and there is often a discrepancy between the time it takes a product-security department to respond to an inquiry with a theoretical answer and the time it takes tier-3 customer support to deliver an actual mitigation in the form of software or patches. These differences can run to hundreds of days, depending on the level of third-party support required, so setting proper expectations with the customer is vital for maintaining confidence in the support processes.

This metric is a great way to protect the reputation of product-security support teams, as it illustrates initial response efficiency while simultaneously revealing gaps in technical-delivery expectations. Customers often look for instant gratification based on unrealistic goals, and this metric helps convey historical trends while revealing opportunities to improve support processes from inception to closure.
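
As a rough illustration of the gap DiMatteo describes, here is a minimal sketch that computes mean time to detect, mean time to respond (the theoretical answer), and mean time to deliver a mitigation from incident timestamps; the records and field names are hypothetical assumptions, not Xerox data.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; timestamp fields are illustrative assumptions.
incidents = [
    {"reported":  datetime(2019, 1, 4),
     "detected":  datetime(2019, 1, 6),
     "responded": datetime(2019, 1, 9),    # theoretical answer to the inquiry
     "mitigated": datetime(2019, 5, 20)},  # patch actually delivered
    {"reported":  datetime(2019, 2, 1),
     "detected":  datetime(2019, 2, 1),
     "responded": datetime(2019, 2, 4),
     "mitigated": datetime(2019, 3, 15)},
]

def mean_days(records, start_key, end_key):
    """Mean elapsed days between two timestamps across incidents."""
    return mean((r[end_key] - r[start_key]).days for r in records)

mttd = mean_days(incidents, "reported", "detected")
mttr = mean_days(incidents, "reported", "responded")
mttm = mean_days(incidents, "reported", "mitigated")

print(f"Mean time to detect:   {mttd:.1f} days")
print(f"Mean time to respond:  {mttr:.1f} days")
print(f"Mean time to mitigate: {mttm:.1f} days")
print(f"Response-to-mitigation gap: {mttm - mttr:.1f} days")
```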

Richard Siedzik, director of information security and planning/ISO, Bryant University: From our event monitoring and asset classification we’re able to prioritize events by “value-of-asset-at-risk.” The higher the level of value and potential impact to the institution, the more actionable and immediate the response. With limited SecOps resources, this metric helps to ensure we’re operationalizing on the right (most critical) events.
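
As a rough illustration, here is a minimal sketch of a "value-of-asset-at-risk" ordering, assuming a hypothetical asset-classification table and event feed; the asset names, values, and severity scores are illustrative, not Bryant University's actual data.

```python
# Hypothetical asset classification, e.g. from an asset-management system.
asset_value = {
    "student-records-db": 10,   # highest institutional impact
    "research-fileshare": 7,
    "guest-wifi-portal": 2,
}

# Hypothetical monitored events with a generic severity score.
events = [
    {"id": 101, "asset": "guest-wifi-portal", "severity": 8},
    {"id": 102, "asset": "student-records-db", "severity": 5},
    {"id": 103, "asset": "research-fileshare", "severity": 6},
]

def value_at_risk(event):
    """Weight event severity by the classified value of the asset at risk."""
    return event["severity"] * asset_value.get(event["asset"], 1)

# Events with the highest value-at-risk get worked first.
for e in sorted(events, key=value_at_risk, reverse=True):
    print(e["id"], e["asset"], value_at_risk(e))
```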

Dan Han, chief information security officer, Virginia Commonwealth University: From an intrusion detection and incident response perspective, one of my go-to metrics is the mean dwell time for various types of threats. This metric is made up of detection and response times, and can be used not only to determine the effectiveness of detective controls and response processes, but also to give me an idea of the amount of resiliency needed in a complex system to prevent critical damage to the confidentiality, integrity, and availability of the system. From here, a risk-based conversation can then take place to design security controls within the acceptable parameters in terms of risk and cost.
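
As a rough illustration, here is a minimal sketch of a dwell-time calculation along the lines Han describes, treating dwell time as detection time plus response time per threat type; the intrusion records and threat labels are hypothetical assumptions.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical intrusion records; timestamps and threat labels are illustrative.
intrusions = [
    {"threat": "phishing",  "compromised": datetime(2019, 1, 2),
     "detected": datetime(2019, 1, 5),  "contained": datetime(2019, 1, 6)},
    {"threat": "phishing",  "compromised": datetime(2019, 2, 10),
     "detected": datetime(2019, 2, 12), "contained": datetime(2019, 2, 14)},
    {"threat": "web-shell", "compromised": datetime(2019, 1, 20),
     "detected": datetime(2019, 2, 25), "contained": datetime(2019, 3, 1)},
]

dwell_by_threat = defaultdict(list)
for i in intrusions:
    detection = (i["detected"] - i["compromised"]).days   # time to detect
    response  = (i["contained"] - i["detected"]).days     # time to respond
    dwell_by_threat[i["threat"]].append(detection + response)

for threat, dwells in dwell_by_threat.items():
    print(f"{threat}: mean dwell time {mean(dwells):.1f} days")
```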

Mark Thompson, information assurance security officer, Quantum Research International: Picking my favorite risk metric is a tough question to answer right now. We are going through a Risk Management Framework (RMF) assessment on our network, and the cyber-risk metrics are so entangled with our progress metrics as to be indistinguishable. We move forward daily, eliminating or mitigating identified vulnerabilities and implementing security controls. We will either find ourselves laser-focused on a particular vulnerability or set of security controls, or pulled back to ensure our users are still able to complete the mission. To say I have a favorite metric would be to say I have a favorite child. They all come into play almost daily to ensure we are still moving toward a more secure network and mission success.

Ryan Edwards, command senior chief, information security professional, US Navy: A daily compliance check of systems and networks is one of the best defenses against failures and breaches. There will always be an adversary with a zero-day lying in wait, but ensuring your network and systems meet baseline requirements every day is paramount to stress-free global operations.
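
As a rough illustration, here is a minimal sketch of a daily baseline-compliance check, assuming a hypothetical set of required settings and per-system state; the setting and system names are illustrative only.

```python
# Hypothetical hardening baseline; setting names are illustrative only.
BASELINE = {"firewall_enabled": True,
            "av_signatures_current": True,
            "admin_mfa": True}

# Hypothetical daily state pulled from each system.
systems = {
    "hq-file-01":  {"firewall_enabled": True, "av_signatures_current": True,
                    "admin_mfa": True},
    "ship-nav-02": {"firewall_enabled": True, "av_signatures_current": False,
                    "admin_mfa": True},
}

def compliance(state):
    """Fraction of baseline settings this system currently satisfies."""
    met = sum(1 for k, v in BASELINE.items() if state.get(k) == v)
    return met / len(BASELINE)

for name, state in systems.items():
    pct = compliance(state)
    flag = "" if pct == 1.0 else "  <-- drift, investigate today"
    print(f"{name}: {pct:.0%} compliant{flag}")
```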

Pull Quotes

  • “One of my favorite risk metrics is the number of known vulnerabilities that still exist after a patch-cycle event.”
  • “One of my go-to metrics is the mean dwell time for various types of threats, based on detection and response times.”

Key Points

  • With limited SecOps resources, a “value-of-asset-at-risk” metric helps ensure teams are operationalizing on the right (most critical) events.
  • The mean-time-to-detect and mean-time-to-respond metric is extremely useful for identifying the difference between product-security response times and escalation-support mitigation delivery.