Systematically Collecting The Right Sort of Data About Cyber Security Incidents


By Joe St. Sauver

Introduction

Given that Farsight Security, Inc. (FSI) is a data-driven cyber security company, it shouldn’t be very surprising that many Farsight staff members have a passionate interest in how data – pretty much any sort of data – gets collected. As data people we know that systematically and consistently collecting data maximizes the value of that data, but surprisingly often, people measure even the simplest of phenomena in unexpectedly inconsistent ways.

Consider the humble apple. Given that this is autumn, many backyard orchardists are harvesting their trees. Orchardists, like fishermen, hunters, and others who harvest Mother Nature’s bounty, are often inclined to brag a bit about their success.

One orchardist might measure the yield of his apple trees in bushels (weighing 40 pounds each), while another with only a few trees might count his modest harvest apple-by-apple. We can attempt to equilibrate those measurements, but that’s an imperfect process (there’s natural variability from one apple to the next, and some varieties of apples yield significantly larger and heavier fruit than others).
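To make that imperfect conversion concrete, here is a minimal sketch in Python, using invented numbers: turning an apple-by-apple count into 40-pound bushels requires assuming an average weight per apple, and that assumption drives the answer.

```python
# Hypothetical illustration: converting an apple-by-apple count into
# 40-pound bushels requires assuming an average weight per apple,
# and that assumption dominates the result.

BUSHEL_LBS = 40.0

def apples_to_bushels(apple_count, avg_apple_lbs):
    """Estimate bushels from a raw apple count and an assumed average apple weight."""
    return apple_count * avg_apple_lbs / BUSHEL_LBS

harvest = 600  # one hobbyist's hand-counted harvest (made-up number)

# Small vs. large varieties give noticeably different "yields"
# from the very same count of apples.
print(apples_to_bushels(harvest, avg_apple_lbs=0.33))  # ~5.0 bushels
print(apples_to_bushels(harvest, avg_apple_lbs=0.50))  # 7.5 bushels
```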

It’s a lot easier if everyone can agree to measure a given phenomenon the same way: “our hobbyist orchard society agrees that it will measure backyard apple orchard output in 40-pound bushels.”

It can also matter when we measure: the end of August? The end of September? The end of October? The longer we wait, the bigger the apples might become, but the longer we wait, the greater the chance that some apples might be damaged or eaten by deer, birds, insects, or disease, or simply fall to the ground and be ruined.

Speaking of imperfect apples, when we measure, what are we measuring? Only flawless apples picked right from the tree and perfect for eating fresh? Or are we counting all usable apples, including partially-flawed apples that might need to be processed into apple sauce or cider to be acceptable to consumers?

Measuring even simple things can be surprisingly tricky, and if we don’t use common units, and agreed upon processes, we might literally find ourselves unable to compare “apples to apples.”

Cyber Incidents

Measuring cyber incidents (such as PII spills, malware infections, or cyber intrusions) is potentially far harder than measuring fruit tree output.

Earlier this month, the Department of Homeland Security (DHS) National Protection and Programs Directorate (NPPD)’s Cyber Incident Data and Analysis Working Group (CIDAWG) released a new 53-page report entitled Enhancing Resilience Through Cyber Incident Data Sharing and Analysis: Establishing Community-Relevant Data Categories in Support of a Cyber Incident Data Repository.

While many government reports are notorious for tackling obscure topics, attracting scant readership, and having little if any lasting global impact, that will likely NOT be the case for this report.

The CIDAWG report is important. It points to a path forward that will likely help the developing cyber security industry fill a longstanding and significant gap, and does an excellent job of proposing a practically usable framework for collecting information about cyber incidents, both big and small. If this framework ends up broadly used, we’ll be better positioned to track and understand the cyber security incidents we’re increasingly experiencing.

This report defines what matters about cyber security incidents, and what doesn’t. That makes this a truly critical report. It also implicitly declares what WON’T be measured, and thus what CAN’T easily be analyzed. That’s another critically important point.

A consistent framework, if clearly and carefully defined, and broadly accepted and used, lays a foundation for…

  • Data to be systematically collected and recorded, thereby making it possible for incident response communities both at home and abroad to share and compare data. This means that your data can be cleanly combined with, or contrasted against, my data, and we won’t run into things such as non-comparable data categories* or differences of opinion about what counts as a new bit of malware**.

  • Longitudinal trends to be monitored over time, with confidence that changes in reported statistics reflect substantive phenomena, not just differences in definitions or changes to data collection methodologies.***

Frankly, the adoption of a consistent cyber incident measurement framework is a watershed event, and given the increasing prevalence of cyber security incidents, one that’s long overdue.

Systematically and Consistently Measuring Phenomena of Interest: A Well-Accepted Idea

Many may find it a bit shocking that we don’t already have a framework of this sort for cyber security incident data collection, since we have consistent data collection frameworks for so many other areas of national and international concern.

The Report

With all of that by way of preface, what does the CIDAWG report actually recommend? The report originally sought to identify information that would be needed to create a database that could be used for cyber insurance-underwriting-related purposes, but that’s just one of many possible purposes to which this data could potentially be put.

Most of the body of the report (report pages 3 through 28) is devoted to describing and explaining the 16 types of data the group would like to see collected when cyber security incidents occur. Because you really should read the entire report itself, I won’t rehash those data types in detail except to note the 16 major areas called out by the report:

  1. Type of Incident
  2. Severity of Incident
  3. Use of Information Security Standards and Best Practices
  4. Timeline
  5. Apparent Goals
  6. Contributing Causes
  7. Security Control Decay
  8. Assets Compromised/Affected
  9. Type of Impact(s)
  10. Incident Detection Techniques
  11. Incident Response Playbook
  12. Internal Skill Sufficiency
  13. Mitigation/Prevention Measures
  14. Costs
  15. Vendor Incident Support
  16. Related Events

Please see the body of the report for details (truly, it’s well worth a read), or jump to the summary table in Appendix A for an excellent compact summary.

In our opinion, the 16 areas recommended make sense. They seem to do an excellent job of capturing the right general information associated with cyber incidents, although ultimately the usability of the data collection will depend strongly on the final “checkboxes” offered as possible responses to categorical questions, among other things.
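To illustrate how the 16 recommended areas might translate into a concrete record format, here is a minimal sketch in Python. The field names, types, and enumerated “checkbox” values below are our own illustrative assumptions, not definitions taken from the CIDAWG report.

```python
# Hypothetical sketch of a standardized cyber incident record organized around
# the report's 16 data categories. All field names and enumerated values here
# are illustrative assumptions, not specifications from the report.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class IncidentType(Enum):   # category 1: Type of Incident (illustrative values)
    PII_SPILL = "pii_spill"
    MALWARE_INFECTION = "malware_infection"
    INTRUSION = "intrusion"
    OTHER = "other"


class Severity(Enum):       # category 2: Severity of Incident (illustrative scale)
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class IncidentRecord:
    incident_type: IncidentType                # 1. Type of Incident
    severity: Severity                         # 2. Severity of Incident
    standards_in_use: List[str]                # 3. Security Standards and Best Practices
    timeline: dict                             # 4. Timeline (e.g., detected/contained dates)
    apparent_goals: List[str]                  # 5. Apparent Goals
    contributing_causes: List[str]             # 6. Contributing Causes
    control_decay: Optional[str]               # 7. Security Control Decay
    assets_affected: List[str]                 # 8. Assets Compromised/Affected
    impact_types: List[str]                    # 9. Type of Impact(s)
    detection_techniques: List[str]            # 10. Incident Detection Techniques
    playbook_used: Optional[str]               # 11. Incident Response Playbook
    internal_skills_sufficient: Optional[bool] # 12. Internal Skill Sufficiency
    mitigations: List[str]                     # 13. Mitigation/Prevention Measures
    estimated_cost_usd: Optional[float]        # 14. Costs
    vendor_support: List[str]                  # 15. Vendor Incident Support
    related_events: List[str] = field(default_factory=list)  # 16. Related Events
```

The point of such a sketch is simply that once every reporting organization fills in the same fields, records from different organizations can be pooled and compared; the actual usability of the data will still hinge on how the categorical values are ultimately defined.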

The notional cyber incident use cases in Appendix B are also realistic and credible, and a fine test of whether the right information is being collected about incidents. That portfolio of scenarios should be augmented with further scenarios in a subsequent report.

Exclusions

Two potential areas of data collection were considered and excluded from the report’s framework: overall organizational cyber security maturity (à la cyber security maturity models) and attack attribution.

It was disappointing to see that the authors of the CIDAWG report considered but rejected a simple summary measure of overall cyber security maturity. As the report concedes, however, enough indicators are available in what will be collected and reported that a rough assessment of organizational maturity can likely be derived or imputed. That is at least a partial consolation.

Incident attribution is also excluded from the incident-related areas where data collection is recommended. Unquestionably, attribution is often technically hard, but hard questions are often very interesting.

Moreover, if you think of cyber security incident data collection as being analogous to the classic investigative journalism process, “who” is an integral and inextricable part of the classic “5 W’s.”

We also suspect that many victims will be strongly motivated to identify, or attempt to identify, their proximate attacker if/when they are able to do so.

Conclusion

This report is well worth reading. We urge you to do so.

We further hope that those who do read it consider adopting the framework it outlines for cyber security incident reporting and management.

Notes

* Disjoint or non-comparable categories arise when continuous data is binned inconsistently. For example, one survey might ask if the respondent is under 18, 18 to 24, 25 to 33, 34 to 39, or 40 or over. Another survey might ask if respondents are 21 or under, 22 to 30, 31 to 50, or 51 or over. Those categories simply don’t align.
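A small sketch in Python makes the problem concrete (the bin edges below mirror the two hypothetical surveys above): once two surveys have binned respondents differently, there is no category-by-category way to combine their counts after the fact.

```python
# Hypothetical sketch: two surveys bin the same underlying quantity (age)
# with different, overlapping category edges, so their counts cannot be
# merged category-by-category after the fact. Counts are invented.

survey_a = {"<18": 40, "18-24": 120, "25-33": 210, "34-39": 95, "40+": 310}
survey_b = {"<=21": 90, "22-30": 180, "31-50": 260, "51+": 150}

# "18-24" in survey A overlaps both "<=21" and "22-30" in survey B,
# so no bin in one survey maps cleanly onto a bin in the other.
# The only real fix is agreeing on common bins before collecting the data,
# which is exactly what a shared incident data framework provides.
for a_bin in survey_a:
    print(f"Survey A bin {a_bin!r} has no exact counterpart in survey B")
```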

** A simple example of this phenomenon: is “adware” malware, or not? If a malware dropper undergoes minor modifications to make it harder for major antivirus software to detect (but is otherwise unchanged), is that a “new” strain of malware, or just a variant of an existing strain?

*** To see how changing definitions can matter, consider FCC measurements of broadband deployment. At one time, 4 Mbps down/1 Mbps up was fast enough to count as “broadband” Internet. Recently the FCC raised that benchmark to 25 Mbps down/3 Mbps up. If you were to look at a plot over time of how many Americans have “broadband” access to the Internet, the date of that definitional change will appear to be a moment when many Americans suddenly “lost” broadband access, even though the only thing that changed was the FCC’s definition. If you’d like an example of how changes to data collection methodologies can have a profound impact on statistical results, review “The Tragedy of Canada’s Census” (Feb 26, 2015).
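Here is a minimal sketch in Python, with invented speeds, of how such a definitional change shows up in a time series: the share of “broadband” connections appears to drop at the changeover date even though the underlying connections never changed.

```python
# Hypothetical sketch: the same fixed set of household download speeds,
# measured against two different "broadband" thresholds. The apparent drop
# is purely definitional; nothing about the underlying connections changed.

speeds_mbps = [3, 6, 10, 15, 20, 30, 50, 75, 100, 150]  # invented sample

def broadband_share(speeds, threshold_mbps):
    """Fraction of connections at or above the given download threshold."""
    return sum(1 for s in speeds if s >= threshold_mbps) / len(speeds)

print(broadband_share(speeds_mbps, 4))   # old definition: 0.9
print(broadband_share(speeds_mbps, 25))  # new definition: 0.5
```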

Joe St. Sauver, Ph.D. is a Scientist with Farsight Security, Inc.

