SecureArtisan

My Road to Digital Forensics Excellence

Archive for August, 2011

Criteria for an Effective Report

Posted by Paul Bobby on August 24, 2011

I work for a major defense contractor and have written many reports as the work product of ten years as a digital forensics analysis practitioner. Have you looked at some of your own early reports? You may find bad use of language, incorrect conclusions, overreaching statements, inconsistent technical approaches and ambiguous data. While there is room in digital forensics analysis for 100% conclusive statements, the majority of statements you make are not conclusive, and learning what is and is not conclusive comes with experience.

I have supported security incidents, legal discovery and corporate investigations with digital forensics analysis, but more recently my focus has been solely on corporate investigations. Let me explain the difference. Security incidents are events comprising network or computer intrusions, malware analysis, forensic deep-dives, root cause analysis, incident triage and damage assessment. Each sub-component of a security incident requires a unique approach to digital forensic analysis. For example, triage typically requires assessing a large range of computing devices for evidence of compromise by analyzing registry or file system indicators, whereas a forensic deep-dive analyzes a specific device, already known to be compromised, in almost exhaustive detail: for example, to find evidence of exfiltration or to develop a complete timeline of the compromise. The work product of these analyses is formalized in a written report – the flavor, configuration, look-and-feel, whatever you want to call it, is very different from the type of report I would write, say, in support of a legal discovery or corporate investigation.
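
To make the triage example concrete, here is a minimal sketch of checking offline registry hives for known-bad autorun entries. It uses the third-party python-registry package; the hive paths and indicator names are hypothetical, and a real triage would of course cover far more indicator types than one Run key.

    from Registry import Registry  # third-party "python-registry" package

    # Hypothetical known-bad autorun value names gathered from prior incidents
    SUSPICIOUS_NAMES = {"evil_updater", "svch0st"}

    def suspicious_run_entries(hive_path):
        """Return (name, command) pairs from the Run key that match the indicator list."""
        reg = Registry.Registry(hive_path)
        try:
            run = reg.open("Software\\Microsoft\\Windows\\CurrentVersion\\Run")
        except Registry.RegistryKeyNotFoundException:
            return []
        return [(v.name(), v.value()) for v in run.values()
                if v.name().lower() in SUSPICIOUS_NAMES]

    # Hypothetical hives collected from a range of machines during triage
    for hive in ("host1_NTUSER.DAT", "host2_NTUSER.DAT"):
        for name, command in suspicious_run_entries(hive):
            print(f"{hive}: suspicious Run entry {name!r} -> {command!r}")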

Corporate investigations are conducted by corporate officers (human resources, industrial security, etc.) into an allegation of policy violation by an employee. A digital forensics analyst is engaged to support this investigation specifically to retrieve electronic data that may substantiate the allegation (and yes, we do look for exculpatory evidence also). The work product of this analysis is the final report: the narrative that discusses the findings. The format of this report is different from one I’d write about a security incident. The consumer of the report is typically non-technical; the authors (the digital forensics analysts) may have differing technical and rhetorical skills; and the technical data itself has changed over time.

Non-technical customers – when I talk about internet history and cache, one customer may understand the concept completely while another may not, so you write your report to the lowest common denominator. For example, a common misunderstanding about technical data is that none of it records the ‘duration’ of an activity: an employee visiting www[.]ebay.com is not important, but an employee spending four hours a day there is, and yet internet history doesn’t provide this data.
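
Since internet history records only the timestamp of each visit, the closest you can get to ‘duration’ is a rough lower bound, by clustering visit timestamps into sessions separated by an assumed idle gap. A minimal sketch, with made-up timestamps and an assumed 30-minute cutoff:

    from datetime import datetime, timedelta

    # Hypothetical visit timestamps for one site, parsed from a history export
    visits = sorted([
        datetime(2011, 8, 22, 9, 0), datetime(2011, 8, 22, 9, 12),
        datetime(2011, 8, 22, 9, 25), datetime(2011, 8, 22, 14, 3),
    ])

    GAP = timedelta(minutes=30)  # assumed idle cutoff between sessions

    sessions, start, last = [], visits[0], visits[0]
    for t in visits[1:]:
        if t - last > GAP:          # gap too large: close the current session
            sessions.append((start, last))
            start = t
        last = t
    sessions.append((start, last))

    total = sum((end - begin for begin, end in sessions), timedelta())
    print(f"{len(sessions)} sessions, at least {total} of activity")

Even this is only an estimate – a single visit with no follow-on activity contributes zero duration – and that caveat is exactly the kind of thing worth spelling out in the report.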

Technical data changing over time – storage of email in PSTs is a common issue. Employees store a lot of email, so when providing 800 MB of email to a customer, how do you present it effectively, analyze it, and give the customer an easy way to interact with that data?
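
One approach is to hand the customer a summary index alongside the raw mail. As a sketch: if the PST is first converted to mbox format (for example with the open-source readpst tool from libpst), Python’s standard library can build a simple per-sender summary; the file name here is hypothetical.

    import mailbox
    from collections import Counter

    # Hypothetical mbox produced by converting the employee's PST (e.g. with readpst)
    box = mailbox.mbox("employee_mail.mbox")

    senders = Counter(msg.get("From", "(unknown)") for msg in box)
    print(f"{len(box)} messages total")
    for sender, count in senders.most_common(10):
        print(f"{count:5d}  {sender}")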

Because of these factors, it is important that a consistent approach to report writing be adopted by a digital forensics analysis group. This consistent approach should include standard formatting, approved language and a common look and feel for various report elements. But before you can address these consistency items you should develop goals to be met by an effective report. Here are some suggestions:

Accurately reflect the technical investigation process

While it is important that the analyst understand the allegation and take appropriate steps to discover technical data that may become evidence, documenting those steps in the final report is even more critical. That way the customer can understand where you found data, why you went ‘there’ looking for it, and can compare these approaches with past investigations. This provides a teaching opportunity for our customers, who become more aware of our capabilities and limitations, and it also ensures that the forensic analyst follows consistent technical practices when analyzing data.

Understandable to decision makers

As I said earlier, few 100% conclusive statements can be made in a report; the rest carry some degree of uncertainty. And that’s okay. The point of being understandable to decision makers is to make clear the reason for that uncertainty: explain why a particular set of electronic evidence does or does not substantiate an allegation.

Withstand a barrage of employee objections

Your analysis is complete, the report is written and handed off, and you move on to the next investigation. In the meantime your customer is interviewing the employee. The employee raises all sorts of objections to the technical data provided in the report. The customer, being non-technical, does not know how to rebut them. Over the years I’ve heard many excuses for various pieces of technical evidence. For example, “Oh, I take my laptop home over the weekend, and it was my teenage son who used it to visit inappropriate websites.” Many of these excuses can be anticipated and specifically addressed within the final report. To continue the example, I could highlight specific inappropriate websites that were visited not only on the weekend but also during work hours, when badge records indicated that the employee was in the facility. This is a simple example, but it shows how tying together two different pieces of electronic data can address an anticipated employee objection.
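
As a sketch of that kind of correlation: given badge-in/badge-out intervals and web-history hits (both hypothetical extracts here), a few lines of Python are enough to flag which visits occurred while the employee was demonstrably in the facility.

    from datetime import datetime

    # Hypothetical extracts: badge-in/badge-out intervals and web-history hits
    badge_intervals = [(datetime(2011, 8, 22, 8, 2), datetime(2011, 8, 22, 17, 9))]
    visits = [
        (datetime(2011, 8, 20, 21, 15), "inappropriate-site.example"),  # weekend
        (datetime(2011, 8, 22, 10, 31), "inappropriate-site.example"),  # workday
    ]

    for ts, url in visits:
        on_site = any(t_in <= ts <= t_out for t_in, t_out in badge_intervals)
        note = "employee badged in" if on_site else "employee not on site"
        print(f"{ts}  {url}  ({note})")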

Structured and easily referenced

This goes to the look and feel – if our customers receive reports from our analysts and they all ‘look’ the same, the customer learns to bypass the structure of the report and instead focus on, and more easily consume, its content. Have you ever seen a complicated slide deck or spreadsheet and found yourself spending most of the time trying to figure out where the data is? The same goes for technical reports in digital forensics. The technical content is hard enough to understand; don’t let your report structure get in the way of it.

Offer opinions and recommendations

This may be controversial to some of you, but in the world of corporate investigations it is most welcome. The dialogue between a customer and a forensic analyst isn’t just through a written report; there are many phone calls in which various technical concepts can be discussed: for example, the significance of why a piece of data substantiates an allegation. Once the phone call is over, those conclusions and explanations will be forgotten. Writing them down as part of the final report will help the customer remember that conversation.

When you write a report, ask yourself if that report meets your established criteria for effectiveness. Peer review is key here, because after all, if another forensic analyst can make neither head nor tail of your report, a non-technical customer has no chance.

Posted in State of Affairs | 1 Comment »

EnCase v7 First Month

Posted by Paul Bobby on August 2, 2011

We have multi-day Evidence Processing times, date format issues, HD encryption issues, reporting issues and a bunch of other smaller but still irritating gotchas to deal with. Just check the forum if you don’t believe me. Are they all end-user errors? Hah, not likely.

I have not yet worked with an operational v7 public release – Guidance is having difficulties licensing the forensic version to those of us with EE-only dongles. *sigh*. But I do believe that the underlying file system parsing capability is still intact. I tested EXT4, for example, and found it to parse properly. So EnCase, used as a file system browsing tool, appears to behave as v6 currently does: it presents an accurate representation of the file system for manual review. What concerns me, however, is that this core functionality has now been wrapped in a large number of new interface features, requiring a major relearning of the product and, more importantly, considerable new testing on the part of the buyer before they can trust that v6 and v7 generate the same results.

I strongly recommend that no one use v7 for a current production case load without first submitting it to a rigorous internal testing plan. I only hope that we do not find something that is ‘not a bug’ but in fact a correct interpretation of filesystem/artifact data, rendering all previous v6 case work invalid because v6 did ‘it wrong all along’.
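
One practical way to structure part of that testing is to process the same image in both versions, export the file listings, and diff them mechanically. A minimal sketch, assuming each version can export a CSV of file path, logical size and MD5 (the file and column names here are assumptions, not EnCase defaults):

    import csv

    def load_listing(path):
        """Map file path -> (logical size, MD5 hash) from a CSV export."""
        with open(path, newline="") as f:
            return {row["FullPath"]: (row["LogicalSize"], row["MD5"])
                    for row in csv.DictReader(f)}

    v6 = load_listing("v6_export.csv")  # hypothetical exports of the same image
    v7 = load_listing("v7_export.csv")

    # Any path missing from one side, or hashed differently, needs investigation
    for p in sorted(v6.keys() | v7.keys()):
        if v6.get(p) != v7.get(p):
            print(f"MISMATCH {p}: v6={v6.get(p)} v7={v7.get(p)}")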

I have become aware that v6 owners who wish to buy ‘modules’ for their v6 product (for example, VFS) can no longer do so and must buy v7 instead. This is bad form, Guidance, considering the current state of v7.

Posted in EnCase | 1 Comment »