SecureArtisan

My Road to Digital Forensics Excellence

My CEIC2012 Experience

Posted by Paul Bobby on May 23, 2012

Let me begin with the obligatory “I haven’t written in a while eh?”

Now that that is over with, and with some encouragement to continue posting, here we go.

I’m attending CEIC 2012 in Las Vegas, and with only two sessions left it’s time to post my thoughts on the conference. There were two keynotes to attend, the first from the CEO of Guidance Software. His presentation focused on where we came from in the forensic world, through the 2000s, and a brief look into the future. It was also used to introduce the Guidance Software App Store, arriving in the Fall. This can only be a good thing. Forensics, hardware, techniques and everything else electronic is evolving at such a fast rate that no one company can keep up with it all. Coders have been writing EnScripts for a while now, myself included, and no doubt we will continue to provide free EnScripts to the community. But allow a developer to be compensated for his/her work and you create a way for coders to spend serious time developing significant scripts and plugins for EnCase.

Take a look at the Volume Shadow Copy problem: I’ve been waiting ages for Guidance to incorporate native support into EnCase. Others have developed tools, so the solution is well understood, but while it likely appears on a future feature list, it is not a priority. An enterprising developer could take up the challenge and probably find many a shop willing to pay real dollars for it.

The second keynote came from General Richard Myers, retired chairman of the Joint Chiefs of Staff. While he left the service in 2005, his points of view were definitely still valid and offered clarity on the incident problem we are dealing with today. In his summary of the top five threats facing us today, cyber incidents came in at number two. He also addressed the PRC, classifying them as highly aggressive when it comes to using cyberspace to acquire intellectual property. He stressed, however, that these incidents would not directly lead to military conflict; rather, the persistence of the threat undermines whatever headway is being made via diplomacy, and an unintended conflict may occur because of this tension in another space, such as the South China Sea.

Okay, session time. Number 1: Manual Web Page Reconstruction. This was a 90-minute lab session, the purpose of which was to teach an approach to reassembling a web page from the artifacts present on the computer. Unfortunately we didn’t start the lab work until 70 minutes into the presentation; this alone was enough to rate the session as disappointing. However, one of the things I always gain from sessions is questions that require some research to answer. Here’s the problem I thought about during this session: if you see files called “1.jpg”, “1[1].jpg” and “1[2].jpg” in Temporary Internet Files, do you know what that means? That’s not the real question though. What I need to figure out is whether the web browsing mechanism (we’ll take Internet Explorer for example) is smart enough to know which of these JPGs to pull from the cache.

Let me state it this way: using IE, a user visits website1, website2 and website3. They all have a file called “1.jpg” that is loaded during the page render. The browser stores these JPGs in the cache, but the JPGs are completely different; only their filenames are the same. When the user visits one of the sites again, say website2, and the image is loaded from the cache, is the correct one displayed? No idea, will have to test.
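My guess, still untested: the cache index (index.dat in IE’s case) is keyed on the full URL rather than the leaf filename, so the bracketed suffixes exist only to keep the on-disk names unique, and retrieval should still pull the right image. Here is a toy Python model of that disambiguation; everything in it is illustrative, none of it is IE’s actual code:

import os

class CacheIndex:
    """Toy model of a browser cache: the index is keyed on the full URL,
    so identical leaf filenames from different hosts never collide."""

    def __init__(self):
        self.url_to_file = {}   # full URL -> on-disk cache filename
        self.name_counts = {}   # leaf name -> collision counter

    def store(self, url, content):
        leaf = url.rsplit("/", 1)[-1]            # e.g. "1.jpg"
        n = self.name_counts.get(leaf, 0)
        self.name_counts[leaf] = n + 1
        base, ext = os.path.splitext(leaf)
        # First copy keeps its name; later copies get [1], [2], ...
        disk_name = leaf if n == 0 else f"{base}[{n}]{ext}"
        self.url_to_file[url] = disk_name
        return disk_name

    def fetch(self, url):
        # Retrieval never looks at the leaf name, only the full URL.
        return self.url_to_file.get(url)

cache = CacheIndex()
for site in ("website1", "website2", "website3"):
    print(cache.store(f"http://{site}/1.jpg", b"..."))
# -> 1.jpg, 1[1].jpg, 1[2].jpg
print(cache.fetch("http://website2/1.jpg"))      # -> 1[1].jpg

If IE’s real behavior matches this model, the correct image comes back for website2; that is exactly what I need to verify.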

Session 2: Hunting for Unfriendly Easter Eggs. Two presenters from Deloitte walked us through “Cloppert’s Kill Chain,” modifying it slightly into a Kill Chain Life Cycle (by the way, this kill chain is to be credited to Amin and Hutchins as well). The life cycle modifies the chain slightly by making the initial exploit/C2/exfil phase an external phase, that is to say the first penetration into the network, followed by a cycle of internal phases that may repeat as often as needed while the attacker modifies the chain with new exploits, recon, C2 and so on. The second part of the presentation built on the existing “indicators of compromise” proposed by Mandiant, with case studies of real incidents that led to 300+ IOCs for each stage in the chain. Good stuff.

More to come

Posted in State of Affairs | 1 Comment »

Criteria for an Effective Report

Posted by Paul Bobby on August 24, 2011

I work for a major defense contractor and have written many reports as the work product of being a digital forensics practitioner for the last ten years. Have you looked at some of your own early reports? You may find bad use of language, incorrect conclusions, overreaching statements, inconsistent technical approaches and ambiguous data. While there is room in digital forensics analysis for 100% conclusive statements, the majority of statements you make are not, and learning what is and is not conclusive comes with experience.

I have supported security incidents, legal discovery and corporate investigations with digital forensics analysis, but more recently my focus has been solely on corporate investigations. Let me explain the difference. Security incidents are events that comprise network or computer intrusions, malware analysis, forensic deep-dives, root cause analysis, incident triage and damage assessment. Each sub-component of a security incident requires a unique approach to digital forensic analysis. For example, a triage typically requires assessing a large range of computing devices for evidence of compromise by analyzing registry or file system indicators, whereas a forensic deep-dive analyzes a specific device, already known to be compromised, in almost exhaustive detail: for example, to find evidence of exfiltration or to develop a complete timeline of the compromise. The work product of these analyses is formalized in a written report, and the flavor, configuration, look-and-feel, whatever you want to call it, is very different from the type of report I would write, say, in support of a legal discovery or corporate investigation.

Corporate investigations are conducted by corporate officers (human resources, industrial security, etc.) into an allegation of policy violation by an employee. A digital forensics analyst is engaged to support this investigation specifically to retrieve electronic data that may substantiate the allegation (and yes, we do look for exculpatory evidence also). The work product of this analysis is the final report: the narrative that discusses these findings. The format of this report is different from one I’d write about a security incident. The consumer of this report is typically non-technical; the authors, the digital forensics analysts, may have differing technical and rhetorical skills; and the technical data itself has changed over time.

Non-technical customers: when I talk about internet history and cache, one customer may understand the concept completely, another may not, so you write your report to the lowest common denominator. For example, a common misunderstanding about technical data is why none of it contains any information about the ‘duration’ of an activity: an employee visiting www[.]ebay.com is not important, but an employee spending 4 hours a day there is, and yet internet history doesn’t provide this data.
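The best you can do is infer duration. One possible approach, sketched below, is to cluster the discrete visit timestamps into sessions separated by an idle gap. The data layout and the 30-minute threshold are my own assumptions, and the result is an estimate, not a measurement:

from datetime import datetime, timedelta

# Hypothetical extracted history: visit timestamps for the domain of
# interest, already sorted. The 30-minute idle gap used as a session
# boundary is an assumption you would have to document and defend.
visits = [
    datetime(2011, 8, 1, 9, 2),
    datetime(2011, 8, 1, 9, 15),
    datetime(2011, 8, 1, 9, 40),
    datetime(2011, 8, 1, 14, 5),
    datetime(2011, 8, 1, 14, 20),
]
GAP = timedelta(minutes=30)

sessions = []
start = prev = visits[0]
for t in visits[1:]:
    if t - prev > GAP:
        sessions.append((start, prev))   # close the current session
        start = t
    prev = t
sessions.append((start, prev))

total = sum((end - start for start, end in sessions), timedelta())
print(f"{len(sessions)} sessions, roughly {total} of active browsing")

Explaining in the report that this is an inference, and why, is exactly the kind of lowest-common-denominator writing I mean.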

Technical data changing over time: storage of email in PSTs is a common issue. Employees store lots of email, so when providing 800MB of email to a customer, how do you present that effectively, analyze it, and provide an easy way for the customer to interact with that data?
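One possible way to make 800MB of email consumable: export the message metadata with your forensic tool and hand the customer a summary up front, with the full set available behind it. A minimal sketch, assuming a hypothetical messages.csv export with sender, date and subject columns:

import csv
from collections import Counter
from datetime import datetime

# Assumes a hypothetical "messages.csv" exported from the PST by your
# forensic tool, with columns: sender, date (ISO 8601), subject.
senders, months = Counter(), Counter()
with open("messages.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        senders[row["sender"]] += 1
        months[datetime.fromisoformat(row["date"]).strftime("%Y-%m")] += 1

print("Top senders:")
for sender, n in senders.most_common(10):
    print(f"  {n:5d}  {sender}")

print("Messages per month:")
for month in sorted(months):
    print(f"  {month}  {months[month]}")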

Because of these factors, it is important that a consistent approach to report writing be adopted by a digital forensics analysis group. This consistent approach should include standard formatting, approved language and a common look and feel for various report elements. But before you can address these consistency items you should develop goals to be met by an effective report. Here are some suggestions:

Accurately reflect the technical investigation process

While it is important that the analyst understand the allegation and take appropriate steps to discover technical data that may become evidence, documenting these steps in the final report is more critical. That way the customer can understand where you found data and why you went ‘there’ looking for it, and can compare these approaches with past investigations. This provides a teaching opportunity for our customers, who become more aware of our capabilities and limitations, and it also ensures that the forensic analyst follows consistent technical practices when analyzing data.

Understandable to decision makers

As I said earlier, there are few 100% conclusive statements that can be made in a report; the rest carry some degree of uncertainty. And that’s okay. The point of being understandable to decision makers is to make clear the reason for that uncertainty: to clarify why a particular set of electronic evidence may or may not substantiate an allegation.

Withstand a barrage of employee objections

Your analysis is complete, the report is written and handed off, and you move on to the next investigation. In the meantime your customer is interviewing the employee, who raises all sorts of objections to the technical data provided in the report. The customer, being non-technical, does not know how to rebut them. Over the years I’ve heard many excuses for various pieces of technical evidence, for example: “Oh, I take my laptop home over the weekend, and that was my teenage son who used it to visit inappropriate websites.” Many of these excuses can be anticipated and specifically commented on within the final report. To continue the example, I could highlight specific inappropriate websites that were visited not only on the weekend but also during work hours, when badge records indicated that the employee was in the facility. This is a simple example, but tying together two different pieces of electronic data addresses an anticipated employee objection before it is raised.
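That badge-records cross-check is easy to mechanize. A minimal sketch in Python, assuming hypothetical history.csv and badge.csv exports (both layouts are mine, not any tool’s):

import csv
from datetime import datetime

# Hypothetical exports:
#   history.csv: timestamp (ISO 8601), url  - the web visits of interest
#   badge.csv:   in_time, out_time (ISO 8601) - badge-in/out intervals
with open("badge.csv", newline="") as fh:
    intervals = [(datetime.fromisoformat(r["in_time"]),
                  datetime.fromisoformat(r["out_time"]))
                 for r in csv.DictReader(fh)]

with open("history.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        ts = datetime.fromisoformat(row["timestamp"])
        if any(start <= ts <= end for start, end in intervals):
            # This visit happened while badge records place the employee
            # in the facility, undercutting the "someone used my laptop
            # at home" objection.
            print(ts, row["url"])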

Structured and easily referenced

This goes to look and feel: if our customers receive reports from our analysts and they all ‘look’ the same, the customer learns to bypass the structure of the report and focus on, and more easily consume, its content. Have you ever opened a complicated slide deck or spreadsheet and found yourself spending most of your time trying to figure out where the data is? The same goes for technical reports in digital forensics. The technical content is hard enough to understand; don’t let your report structure get in the way of it.

Offer opinions and recommendations

This may be controversial to some of you, but in the world of corporate investigations it is most welcome. The dialogue between a customer and forensic analyst isn’t just through a written report; there are many phone calls in which various technical concepts can be discussed, for example the significance of a piece of data and why it substantiates an allegation. Once the phone call is over, those conclusions and explanations will be forgotten. Writing them down as part of the final report will help the customer remember that conversation.

When you write a report, ask yourself if that report meets your established criteria for effectiveness. Peer review is key here, because after all, if another forensic analyst can make neither head nor tail of your report, a non-technical customer has no chance.

Posted in State of Affairs | 1 Comment »

EnCase v7 First Month

Posted by Paul Bobby on August 2, 2011

We have multi-day Evidence Processing times, date format issues, HD encryption issues, reporting issues and a bunch of other smaller but still irritating gotchas to deal with. Just check the forum if you don’t believe me. Are they all end-user errors? Hah, not likely.

I have not yet worked with an operational v7 public release; Guidance is having difficulties licensing the forensic version to those of us with EE-only dongles. *sigh*. But I do believe that the underlying file system parsing capability is still intact. I tested EXT4, for example, and found it to parse properly. So EnCase, used as a file system browsing tool, appears to behave as v6 currently does, that is, to present an accurate representation of the file system for manual review. What concerns me, however, is that this core functionality has now been wrapped in a large number of new interface features, requiring a major relearn of the product and, more importantly, considerable new testing on the part of the buyer before they can trust that v6 and v7 generate the same results.

I strongly recommend that no one use v7 for their current production case load without submitting it to a rigorous internal testing plan. I only hope that we do not find something that is ‘not a bug’ but in fact a correct interpretation of filesystem/artifact data, rendering all previous v6 case work invalid because v6 did ‘it wrong all along’.

I have also become aware that v6 owners who wish to buy ‘modules’ for their v6 product (for example, VFS) can no longer do so and must buy v7 instead. This is bad form, Guidance, considering the current state of v7.

Posted in EnCase | 1 Comment »

Manual review of data structures

Posted by Paul Bobby on June 8, 2011

#dfirsummit has been generous this year in providing a free live stream of the two days of presentations. This quick post was prompted by listening to Lee “Gervais” Whitfield and his discussion of where to look to disprove the ‘BIOS clock changed’ conspiracy when it comes to disputing the evidence on your hard drive.

He indicated several locations that exhibit temporal anomalies should the clock in fact get changed. For example, Thumbs.db (the thumbnail database in folders with images) stores thumbnail data sequentially, so changes in the timestamps of those thumbnails may indicate a time change.

He was asked for the top three places to look for evidence of clock changes, and as number one he mentioned event logs. But I don’t think he gave the real reason why they deserve that spot. He mentioned one event for XP and a couple of events for Vista/7 that record the clock being changed in the event log. This is good of course, but I believe the real value of event logs is the same as with Thumbs.db: data is written to the event log sequentially; it is not ordered chronologically.

I’m talking Windows OS and NTFS here.

Again: the event log file is created on the NTFS file system, and then individual events are appended to it. If the clock changes, new events are appended with the new timestamps. This is where reliance on tools hurts us: everything from EnCase and FTK to Perl scripts, event log explorers, even log2timeline, may auto-sort events chronologically for presentation, or at the least our first analysis step is to sort the output chronologically, and the anomaly vanishes into the sorted sequence. If we manually inspect the contents of the event log file with a hex editor (i.e. a raw view) and do some decoding ourselves, we can see the jump in time clearly.
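To make that concrete, here is a minimal sketch of the manual check, assuming the records have already been decoded into (record number, timestamp) pairs kept in write order, with whatever parser you prefer; the sample data is invented to show a rollback:

from datetime import datetime

# Records in WRITE order (record number), not sorted time order.
records = [
    (101, datetime(2011, 6, 1, 9, 0)),
    (102, datetime(2011, 6, 1, 9, 5)),
    (103, datetime(2011, 5, 28, 13, 0)),   # written later, stamped earlier
    (104, datetime(2011, 5, 28, 13, 2)),
]

prev_num, prev_ts = records[0]
for num, ts in records[1:]:
    if ts < prev_ts:
        print(f"Record {num} jumps back {prev_ts - ts} from record {prev_num}")
    prev_num, prev_ts = num, ts

The moment you sort by timestamp instead of record number, that backward jump disappears, which is exactly my point.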

Of course what is ‘clear’ is subjective, but this is a good example of where manual review of data structures may save the day rather than relying on our tools. Manual review of data sources may only be appropriate for certain scenarios, and I’m not recommending it as a daily approach, but it is something to be mindful of when trying to prove a point.

Posted in Forensics, General Research | Leave a Comment »