SecureArtisan

My Road to Digital Forensics Excellence

Encase Benchmarking

Posted by Paul Bobby on November 4, 2008

I’m about to post this to the official Encase forums.

The other day I was thinking about benchmarks; I was contemplating using Xeon processors for my new forensic examiner box versus an Extreme Edition CPU. And I got to thinking – wouldn’t it be great to have some sort of forensic benchmark, specific to Encase (since I’ll be doing my work with Encase), so that I can tell which hardware configuration is the most appropriate?

A quick search of the Encase forums showed almost no traffic concerning benchmarks save for a recent thread started by dk_mitch. He and I exchanged email, and the idea was born. I even received word from Joshua Joseph at Guidance expressing his interest; their tech support stands by to help (which is good, because I’m sure they have better hardware resources than I do).

Goals

  1. The goal is to benchmark Encase, not to test whether Encase is capable of performing a forensic task.
  2. The goal is to run repeatable benchmarks against your hardware configuration so that you can fine-tune it for performance using a static Encase configuration.
  3. The goal is to run repeatable benchmarks against Encase and the Operating System so that you can fine-tune them for performance using a static hardware configuration.
  4. The goal is to run repeatable benchmarks against multiple versions of Encase so that you can compare the performance of features when new versions are released. The OS and hardware configurations should remain static.

Baseline System Configuration

Part of Goal #2 is that the hardware will change as you upgrade and replace components. Are there some rudimentary criteria for the hardware configuration that “go without saying”? Bare-metal hardware only, no virtualization?

Part of Goal #3 is that the Operating System and Encase itself will change as they are fine-tuned for optimal performance on a static hardware configuration. Are there some common criteria for the OS and Encase? Windows XP SP2 or higher? No virtualization?

Part of the system configuration will be the identification of possible performance-tuning steps. For example:

  • RAID or not?
  • ReadyBoost if Vista?
  • Pagefile on a USB thumb drive if XP?
  • Parsecache on a separate drive?
  • SATA, SAS, or SSD drives? SAN (iSCSI or Fibre Channel)?
  • Multi-core? Multi-CPU?
  • 32-bit or 64-bit OS?
  • How much RAM?

Collecting the Results

If an index of a large eText takes 30 minutes on your configuration, it does us little good if that data cannot be shared along with the hardware and software configuration. Should a custom tool be written to gather this data, or should we make use of something like “msinfo32.exe”?
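
As a rough sketch of the second option – and this is only an assumption about how results might be packaged – the snippet below calls msinfo32.exe to dump the system inventory and bundles it with one timing as JSON. The field names, the benchmark name, and the Encase version string are placeholders, not a settled format, and it obviously only runs on Windows.

    import json
    import subprocess
    import tempfile
    from pathlib import Path

    def collect_system_report(out_dir: Path) -> Path:
        """Dump the hardware/software inventory with msinfo32 (Windows only)."""
        report = out_dir / "msinfo32_report.txt"
        # "msinfo32 /report <file>" writes a plain-text system summary.
        subprocess.run(["msinfo32.exe", "/report", str(report)], check=True)
        return report

    def save_result(out_dir: Path, benchmark: str, seconds: float,
                    encase_version: str) -> None:
        """Bundle one timing with the system report so results can be shared."""
        result = {
            "benchmark": benchmark,
            "elapsed_seconds": seconds,
            "encase_version": encase_version,
            "system_report": collect_system_report(out_dir).name,
        }
        (out_dir / f"{benchmark}.json").write_text(json.dumps(result, indent=2))

    if __name__ == "__main__":
        out = Path(tempfile.mkdtemp(prefix="encase_bench_"))
        save_result(out, "index_large_etext", 1800.0, "6.12")  # placeholder values
        print(f"Results written to {out}")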

The Execution

How do we automate a series of benchmarks? Should we even consider automation? It certainly rules out the end user modifying the tests. The benchmarks have to be protected somehow to ensure consistency of testing; otherwise the data has to be thrown out.

Can we EnScript and EnPack them? This has the advantage that the benchmarking process can be extended by wrapping newer features around the core tests being conducted.
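
One possible safeguard, sketched below: ship the kit with a published manifest of SHA-256 hashes and refuse to run if any EnPack or evidence file differs from it. The manifest filename and its JSON layout are assumptions for illustration only.

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Hash a file in 1 MiB chunks so large evidence files don't exhaust RAM."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_benchmark_kit(manifest_path: Path) -> bool:
        """Refuse to run if any file in the kit differs from the published manifest."""
        # Manifest format assumed: {"benchmarks.EnPack": "<sha256 hex>", ...}
        manifest = json.loads(manifest_path.read_text())
        ok = True
        for name, expected in manifest.items():
            if sha256_of(manifest_path.parent / name) != expected:
                print(f"MODIFIED: {name}")
                ok = False
        return ok

    if __name__ == "__main__":
        if not verify_benchmark_kit(Path("benchmark_manifest.json")):
            raise SystemExit("Kit has been altered; results would have to be thrown out.")
        print("Kit verified; safe to run benchmarks.")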

Evidence Sets

Whatever is used for evidence sets has to be free of copyright issues and other restrictions. The evidence can be customized around the benchmarks being conducted, but at the least it should be standardized and presented in EWF format. For the benchmarks involving indexing and searching, large text collections come to mind. One approach is to use an eText of the Bible – they come in different languages, Project Gutenberg publishes them free-to-use, they can be re-saved into UTF-8 and UTF-16 for Unicode searches, and, because the Bible is organized into chapter and verse, comparisons between the language benchmarks are easy.
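
A minimal sketch of that re-encoding step, assuming a Gutenberg plain-text file already on disk (the filename is made up):

    from pathlib import Path

    def resave_etext(source: Path, out_dir: Path) -> None:
        """Re-save a Project Gutenberg eText for ASCII and Unicode search tests."""
        # Gutenberg plain-text files are commonly UTF-8; older ones are Latin-1,
        # hence errors="replace" as a blunt fallback for this sketch.
        text = source.read_text(encoding="utf-8", errors="replace")
        out_dir.mkdir(parents=True, exist_ok=True)
        (out_dir / f"{source.stem}_utf8.txt").write_text(text, encoding="utf-8")
        # Python's "utf-16" codec writes a BOM plus little-endian text, which is
        # what a Unicode keyword search would typically encounter on Windows.
        (out_dir / f"{source.stem}_utf16.txt").write_text(text, encoding="utf-16")

    if __name__ == "__main__":
        resave_etext(Path("bible_kjv.txt"), Path("evidence_texts"))  # hypothetical file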

A second example of evidence-set customization is to include four copies of the same eText in the same series of EWF files. Execute four searches, one for each copy – i.e. four threads, one per core – and benchmark the bottleneck as Encase jumps all around the evidence set conducting its four searches.
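
A sketch of building such a set, assuming a raw image is an acceptable intermediate; it could then be converted to EWF with a tool such as libewf’s ewfacquire:

    from pathlib import Path

    def build_fourfold_image(etext: Path, image: Path, copies: int = 4) -> None:
        """Concatenate N copies of the same eText into one raw image so that
        simultaneous searches force Encase to seek across the whole set."""
        data = etext.read_bytes()
        with image.open("wb") as out:
            for _ in range(copies):
                out.write(data)

    if __name__ == "__main__":
        build_fourfold_image(Path("bible_kjv_utf8.txt"), Path("fourfold.dd"))
        # The raw image could then be converted to EWF, e.g. with libewf:
        #   ewfacquire fourfold.dd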

The Benchmarks

Along with customized evidence sets, the majority of the work will be to establish the core benchmarks. Do we consider standalone Encase only, or do we include some Encase Enterprise functionality?

Some of the more common machine-intensive tasks that come to mind (a rough timing harness is sketched after the list):

  • Indexing and Keyword Searching
    • Non-grep
    • Grep
    • Unicode
    • Language options
    • Simultaneous searches against one file
    • Simultaneous searches against two or more separate parts of the evidence set
  • Hash Analysis
    • NTFS file system build
    • Ext3 file system build (Ubuntu)
    • Other OS installations: Solaris, Mac (HFS)
    • Does the file system being analyzed really have an effect on the hash analysis at all?
    • Different hashing algorithms (MD5 and SHA)

There’s a lot of work to be done here – hopefully a discussion will unfold.


One Response to “Encase Benchmarking”

  1. Derek K said

    I am looking for similar information on this. Did anything come of your benchmarking study? I am in the process of upgrading my infrastructure to SSD hard drives and a 10GbE network. I am wondering if you found any information regarding modifying settings within EnCase to improve performance, or if it is simply down to the hardware you are running on.

    I realize that this is an old post but hopefully it is still monitored…
