Security Metrics: Replacing Fear, Uncertainty and Doubt
by Andrew Jaquith
Addison-Wesley, 2007.
ISBN 0-321-34998-9. Amazon $32.99 (USD) Bookpool $31.50 (USD)
Reviewed by Richard Austin 07/17/07
We continue our quest for numeracy in this month's review.
At just over 300 pages, this is a much less imposing tome than Herrmann's "Complete Guide" (reviewed in E78) and will be an easier read for the busy security manager or professional.
The book opens with a charming description of the "Hamster Wheel of Pain," which depicts security programs locked in a never-ending cycle: identify vulnerabilities, apply fixes, enjoy a brief respite, reassess to find new vulnerabilities, panic, apply more fixes, and repeat. This process generates a lot of numbers and can document a lot of activity, but it really doesn't answer the troubling question "So how ARE we doing, really?"
Jaquith takes this question a bit further when he alleges that the time for risk management as a guiding principle of information security has passed. He bases this rather astonishing observation on the fact that "nobody has a handle on the asset valuation part of the equation" (p.5) which renders any value-based assessment of risk highly questionable. He proposes to replace what has passed as "risk management" with key indicators that measure the health of the security operations directly.
The next chapter delves into how one defines realistic security metrics and also, unfortunately, reveals the lackluster technical editing that plagues the entire book. He dismisses qualitative metrics as unreliable because of difficulties in defining and measuring them in a consistent fashion and prefers metrics that refer to things that can be quantitatively measured or counted. The first editing faux pas occurs on p. 32, where a web page on annualized loss expectancy (ALE) calculations is reproduced with many errors, including the startling statement that it is worthwhile to implement the example control even though its cost is "less than the expected losses due to the threat." Fortunately, the included web citation allows one to retrieve the correct version of the page.
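To make the arithmetic behind the reviewer's complaint concrete, here is a minimal sketch of the standard ALE calculation and the cost-benefit rule it supports. The figures are invented for illustration; they are not taken from the book or the cited web page.

```python
# Illustrative ALE (annualized loss expectancy) arithmetic -- a sketch
# with made-up numbers, not material from the book.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: expected yearly loss from a given threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

# A threat costing $25,000 per incident, expected twice a year:
expected_loss = ale(25_000, 2)   # $50,000 per year

# The standard decision rule: a control is worth implementing when its
# annual cost is LESS than the expected loss it averts.
control_cost = 30_000
worthwhile = control_cost < expected_loss

print(expected_loss, worthwhile)  # 50000 True
```

The point of the rule is exactly the one muddled by the erroneous reproduction: cost below expected loss argues *for* the control, not against it.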
Chapter 3 begins the presentation of actual metrics with those relevant to measuring technical security measures. He begins by walking the reader through a realistic case study that illustrates his approach to metrics in general as providing answers to relevant questions. The basic question for the case study is "Are my Internet facing applications secure?" Jaquith treats this question as an overall hypothesis (that Internet-facing applications are secure) that can be falsified by measurement. He breaks the overall hypothesis into sub-hypotheses that must hold for the overall hypothesis to be true. Each sub-hypothesis must be falsifiable by measurement or it is rejected from consideration. From each sub-hypothesis, diagnostic questions are developed to guide the selection or development of metrics.
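The hypothesis-decomposition approach described above can be sketched as a simple data structure; the sub-hypotheses, questions, and metric names below are paraphrased examples I have invented for illustration, not quotations from the book.

```python
# A sketch of the falsifiable-hypothesis decomposition: an overall
# hypothesis, sub-hypotheses that must each be falsifiable by a metric,
# and diagnostic questions guiding metric selection. Example content
# is invented, not quoted from Jaquith.

case_study = {
    "hypothesis": "Internet-facing applications are secure",
    "sub_hypotheses": [
        {
            "claim": "Perimeter defenses block known attack traffic",
            "diagnostic_questions": ["What fraction of inbound attacks are blocked?"],
            "metrics": ["blocked_attack_ratio"],
        },
        {
            "claim": "Applications are free of known vulnerabilities",
            "diagnostic_questions": ["How many critical findings are open per application?"],
            "metrics": ["open_critical_findings_per_app"],
        },
    ],
}

def overall_falsified(sub_results: list) -> bool:
    """The overall hypothesis fails if any sub-hypothesis is falsified."""
    return any(sub_results)

print(overall_falsified([False, True]))   # True -- one failure sinks the hypothesis
```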
Suggested metrics dealing with each question are defined and clearly explained. However, this excellent foundation does get a bit marred by some of the examples. On page 50, for example, stopping 70,000 inbound viruses versus 500 outbound viruses is used to conclude that the internal network is "cleaner than the outside environment by a factor of 140 to 1" by taking the ratio of 70,000 to 500. Since it is not at all surprising that an internal network of perhaps a few thousand hosts would produce fewer viruses than an Internet composed of millions of hosts, the differing scales of the two measurements make this a glaring example of comparing mice and elephants when concluding that one is "cleaner" than the other.
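A quick calculation shows why normalizing per host reverses the conclusion. The host populations below are assumptions chosen only to illustrate the scale problem; the virus counts are the book's.

```python
# A sketch of why raw virus counts mislead: normalize per host before
# comparing "cleanliness". Host populations are invented for illustration.

inbound_viruses, outbound_viruses = 70_000, 500
internet_hosts, internal_hosts = 100_000_000, 2_000  # assumed populations

raw_ratio = inbound_viruses / outbound_viruses        # 140.0 -- the book's figure
per_host_outside = inbound_viruses / internet_hosts   # viruses per Internet host
per_host_inside = outbound_viruses / internal_hosts   # viruses per internal host

# Normalized, the internal network can actually look *dirtier* per host,
# illustrating the mice-and-elephants problem described above.
print(raw_ratio, per_host_inside > per_host_outside)  # 140.0 True
```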
Some of the metrics also exhibit a naiveté that is surprising - for example, on page 70, we are told that "host uptime for critical hosts helps characterize the overall availability of these resources." Host uptime is an easily gathered metric, but we're typically more interested in whether the host is providing the relevant services, which can be much harder to measure.
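The distinction matters in practice: a host can report weeks of uptime while the service it exists to provide is down. A minimal sketch of a service-level check, probing a TCP port instead of trusting uptime (the host and port in the usage comment are placeholders):

```python
# A sketch of measuring service availability rather than host uptime:
# probe the service's TCP port and record success or failure.
import socket

def service_available(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to the service succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder host): service_available("www.example.com", 80)
```

Sampling a check like this on a schedule yields an availability percentage for the *service*, which is the number management actually cares about.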
Chapter 4 provides a welcome look at how one can measure the effectiveness of a security program and is likely the most useful chapter in the book. Using broad categories taken from COBIT, realistic metrics are defined for each of the control objectives. For example, to deal with the control objective of "Assess and manage IT risks," he proposes metrics that count the number of critical assets and functions that reside on systems compliant with the organization's security policies and standards, that have estimates of the costs of compromise, documented risk assessments and so on. While not as exotic as many metrics that have been proposed in this area, these have the virtues of being easily defined and realistically measured.
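One reason these metrics are "easily defined and realistically measured" is that they reduce to simple counts over an asset inventory. A sketch, with invented asset records, of the kind of coverage percentage involved:

```python
# A sketch of a COBIT-style program metric: the share of critical assets
# on policy-compliant systems, and the share with risk assessments.
# The asset records below are invented for illustration.

critical_assets = [
    {"name": "payroll-db", "compliant": True,  "risk_assessed": True},
    {"name": "web-portal", "compliant": True,  "risk_assessed": False},
    {"name": "hr-files",   "compliant": False, "risk_assessed": True},
]

def coverage_pct(assets: list, key: str) -> float:
    """Percentage of assets for which the named attribute is true."""
    return 100.0 * sum(a[key] for a in assets) / len(assets)

print(round(coverage_pct(critical_assets, "compliant"), 1))      # 66.7
print(round(coverage_pct(critical_assets, "risk_assessed"), 1))  # 66.7
```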
Chapter 5 presents a very brief introduction to the subject of data analysis. The usual statistical measures are trotted out, and some of the weaknesses of averages, of assuming a Gaussian distribution (the normality assumption), etc., are very lightly covered. I would have liked to see more coverage of the limitations of means and standard deviations for real-life distributions that seldom exhibit Gaussian perfection. While he does introduce the median and quartiles, the treatment is sketchy and would have benefited from more background and details.
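A small worked example shows why the mean misleads on the long-tailed data security programs typically produce. The time-to-patch sample below is invented for illustration.

```python
# A sketch of why the mean misleads on skewed security data such as
# time-to-patch (in days); the sample values are invented.
import statistics

times_to_patch = [1, 2, 2, 3, 3, 4, 5, 7, 10, 180]  # one long-tail outlier

mean = statistics.mean(times_to_patch)      # 21.7 -- dominated by the outlier
median = statistics.median(times_to_patch)  # 3.5  -- the typical experience
q1, q2, q3 = statistics.quantiles(times_to_patch, n=4)

print(mean, median)  # 21.7 3.5
```

One slow patch inflates the mean by a factor of six over the median; the median and quartiles tell the truer story, which is exactly the kind of background the chapter skimps on.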
Chapter 6 provides a quick romp through the important subject of visualization of information. Good advice on avoiding the common gaffes of 3-D charts, superfluous labels, and other "chart junk" is given and illustrated. Unfortunately, the slapdash editing weakens the presentation: Jaquith discusses some illustrations as if they were printed in color rather than grayscale, and some of the illustrations were so mangled in reproduction that they commit the very sins he counsels us to avoid.
Chapter 7 discusses automating metric calculations and provides good advice on requirements and process. He presents a tabular metrics life cycle which would be a useful tool for organizing the metrics process in general.
The final chapter discusses the design of security scorecards, which embody the entire purpose of the book: we must be able to present the results of a security program in a concise and meaningful form to the organization's management. Niven's "Balanced Scorecard" is used as a model; the presentation would have been strengthened by a case study with an example balanced scorecard.
In summary, this is a good introductory book on the metrics process in information security and is a recommended read for the professional new to the area or a manager seeking guidance on how a security metrics program should be designed and built. While it lacks the structured detail of Herrmann's "Complete Guide," it is nonetheless a gentler introduction that will likely introduce many more readers to this important area.
Richard Austin recently retired from his position as the SAN security architect for a Fortune 25 company and now earns his bread and cheese as an itinerant university instructor and private consultant. He can be reached at firstname.lastname@example.org and welcomes your thoughts and comments.