SecureArtisan

My Road to Digital Forensics Excellence

Archive for July, 2010

Enscript – PST TLN output

Posted by Paul Bobby on July 30, 2010

Another new enscript (see the My Files section). It produces a five-field TLN rendering of the timeline data in a PST file, which can then be used as input to the Log2timeline tool.

I had considered writing an input module for Log2timeline, but processing PST files is hard. I then considered pre-processing the data into MSG format, but that’s hard too. So I made use of EnCase’s ability to mount a PST and process the ‘record’ data at runtime. Each email message has four timestamps: when the email was created, sent, received and last modified.
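To make the output concrete, here is a minimal Python sketch (an illustration only, not the enscript itself) of how one email’s four timestamps become four five-field TLN lines. The Time|Source|Host|User|Description layout with an epoch time value reflects the common TLN convention; the “PST” source label, host, user and sample values are assumptions.

    # Illustration only (Python, not EnScript): one email's four timestamps
    # rendered as five-field TLN lines (Time|Source|Host|User|Description).
    # The "PST" source label, host, user and sample values are assumptions.
    from datetime import datetime, timezone

    def tln_line(ts, source, host, user, description):
        epoch = int(ts.replace(tzinfo=timezone.utc).timestamp())  # TLN time field is a Unix epoch
        return "|".join([str(epoch), source, host, user, description])

    email = {
        "subject":  "Quarterly report",
        "created":  datetime(2010, 7, 30, 13, 0, 0),
        "sent":     datetime(2010, 7, 30, 13, 5, 0),
        "received": datetime(2010, 7, 30, 13, 6, 0),
        "modified": datetime(2010, 7, 30, 14, 0, 0),
    }

    # Each email contributes one line per timestamp: created, sent, received, modified.
    for action in ("created", "sent", "received", "modified"):
        print(tln_line(email[action], "PST", "HOSTNAME", "jdoe",
                       "Email %s - %s" % (action, email["subject"])))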

I learnt a couple of techniques when writing this enscript:

1. v.GetRecords(recs)

After mounting a ‘compound file’, the GetRecords() method forces EnCase to generate record entries at runtime (the very same data that populates the Records tab). That leads to the second technique:

2. forall (DataPropertyClass p in rec.DataPropertyRoot())

Each record entry is a series of key-value pairs. The ‘type’ of each key is binary, ascii, date, etc. (see the enscript help), and the value can then be processed accordingly.
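Conceptually (shown here in Python rather than EnScript), each record behaves like a list of typed key-value pairs, and only the date-typed properties matter for the timeline; the property names and type tags below are hypothetical.

    # Conceptual model only (Python, not EnScript) of a record as typed key-value pairs.
    # Property names and type tags are hypothetical.
    record = [
        ("Subject",       "ascii",  "Quarterly report"),
        ("Creation Time", "date",   "07/30/10 01:00:00PM"),
        ("Sent Time",     "date",   "07/30/10 01:05:00PM"),
        ("Attachment",    "binary", b"\x25\x50\x44\x46"),
    ]

    timestamps = {}
    for name, prop_type, value in record:
        if prop_type == "date":
            # Only date-typed values feed the TLN output; other types are skipped here.
            timestamps[name] = value

    print(timestamps)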

Unfortunately the dates for each email are stored as text strings, and I had to convert these text-based timestamps into actual DateClass() objects that enscript can manipulate. No built-in methods exist, but I found two conversion methods written by “ohopli” on the Guidance forums.
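The actual EnScript conversion comes from ohopli’s forum methods; the Python snippet below only illustrates the idea of turning such a text timestamp into a real date object. The format string is an assumption, since the exact text layout EnCase emits may differ.

    # Illustration only (Python, not EnScript): converting a text timestamp to a date object.
    # The "%m/%d/%y %I:%M:%S%p" format is an assumption; EnCase's actual text layout may differ.
    from datetime import datetime

    def parse_record_date(text):
        return datetime.strptime(text.strip(), "%m/%d/%y %I:%M:%S%p")

    print(parse_record_date("07/30/10 01:05:00PM"))  # -> 2010-07-30 13:05:00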

As always, the enscript is provided as source code and is not enpacked. Feel free to play, experiment and fix bugs 🙂 I have tested the enscript against three 1 GB PST files and not broken anything.

Posted in State of Affairs

Enscript – Bookmarking files

Posted by Paul Bobby on July 23, 2010

I’ve uploaded a new enscript to my blog (see the My Files section). This script allows you to bookmark any number of files based on an input text file containing one filename per line.

Lance Mueller has a similar enscript here, but it didn’t work for me and was enpacked. So I wrote my own, which is not always a bad thing to do anyway 🙂
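As a rough sketch of the matching logic (in Python rather than EnScript), the script reads one filename per line from the input file and flags any entry whose name matches; “filenames.txt” and the simulated entry names are hypothetical.

    # Rough sketch (Python, not EnScript) of matching entries against the input list.
    # "filenames.txt" and the simulated entry names are hypothetical.
    def load_targets(path):
        with open(path) as f:
            # One filename per line; skip blank lines, compare case-insensitively.
            return {line.strip().lower() for line in f if line.strip()}

    def files_to_bookmark(entry_names, targets):
        # The enscript walks the case entries and adds bookmarks; here we just return matches.
        return [name for name in entry_names if name.lower() in targets]

    targets = load_targets("filenames.txt")
    print(files_to_bookmark(["budget.xls", "notes.txt", "report.doc"], targets))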

Posted in EnCase

Upcoming Last Accessed research

Posted by Paul Bobby on July 1, 2010

Sometimes you have to get back to basics when unexpected behavior is observed.

Prompted by a question on the ForensicFocus forum, I started experimenting with how the Last Accessed time is updated (in Windows Explorer it is called Date Accessed).

The following scenario needs an answer:

1. Source hard drive, formatted NTFS
2. Destination device, a 2 GB USB thumb drive formatted FAT32
3. Select multiple .doc, .xls, .ppt, .pdf and .pst files from the source hard drive, and Right Click->Copy
4. Paste these documents to the USB thumb drive

I found that the Last Accessed timestamp of only some of the files on the source hard drive had changed.

Why?
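One way to instrument the test (a Python sketch with hypothetical paths) is to snapshot each source file’s Last Accessed value before and after the copy and report which ones changed. Keep in mind the TechNet note quoted below: file-based queries return the in-memory value, so the on-disk MFT state still has to be verified with a forensic tool.

    # Sketch of instrumenting the copy test (hypothetical source paths).
    # Snapshots st_atime before and after the copy; note that stat() returns the
    # OS-reported (in-memory) value, so the on-disk MFT value still needs a forensic check.
    import os
    from datetime import datetime

    SOURCE_FILES = [r"C:\TestData\doc1.doc", r"C:\TestData\sheet1.xls"]  # hypothetical

    def snapshot(paths):
        return {p: os.stat(p).st_atime for p in paths}

    before = snapshot(SOURCE_FILES)
    input("Copy the files to the thumb drive now, then press Enter...")
    after = snapshot(SOURCE_FILES)

    for path in SOURCE_FILES:
        label = "CHANGED  " if after[path] != before[path] else "UNCHANGED"
        print(label, datetime.fromtimestamp(after[path]).isoformat(), path)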

Update 7/2/10:

Some theories I’ve been testing:
1. File size: initially I thought it was simply a matter of file size. I couldn’t get the Last Accessed timestamp to update on various 64 KB test files, but it was updating on test files larger than 192 KB. Unfortunately, not all files over 192 KB were updating.

2. Hardware policy: Optimized for fast removal versus Optimized for performance. Fast removal does not require buffers to be flushed before the thumb drive is removed (i.e. the Safely Remove Hardware feature). Buffers are of course still flushed under a fast-removal policy, but that happens incident to the copying process and is not delayed. The theory here is that with fast removal, the updating of Last Accessed would also happen immediately. Unfortunately I could not get this behavior to repeat consistently.

3. ObjectID: Some MFT records of files with an updated Last Accessed had ObjectID attributes. The theory was that the GUID information had to be read, which caused the timestamp to update. Unfortunately, some files whose timestamps did update had no such attribute in their MFT record, so that wasn’t the culprit.

Throughout this testing I made use of Procmon and Sync (Sysinternals tools).

I believe that what is happening relates to the caching function of the OS, and to efficiency algorithms that avoid every file system READ becoming a WRITE. It’s more than just the one-hour lazy write function. The ‘sync’ tool does in fact work; however, ‘flushing buffers’ simply commits filesystem operations, not necessarily metadata (i.e. MFT) operations.

A couple of things to focus on next:
1. Forcing the commit of data from the cache (no idea how to do this)
2. Tricking the cache into thinking it needs to commit data (see the text from Microsoft below)
3. Identifying changes in $LogFile.

The following text is all that I could find on the Microsoft TechNet site concerning Last Access Time.

Last Access Time
Each file and folder on an NTFS volume contains an attribute called Last Access Time. This attribute shows when the file or folder was last accessed, such as when a user performs a folder listing, adds files to a folder, reads a file, or makes changes to a file. The most up-to-date Last Access Time is always stored in memory and is eventually written to disk within two places:
• The file’s attribute, which is part of its MFT record.
• A directory entry for the file. The directory entry is stored in the folder that contains the file. Files with multiple hard links have multiple directory entries.

The Last Access Time on disk is not always current because NTFS looks for a one-hour interval before forcing the Last Access Time updates to disk. NTFS also delays writing the Last Access Time to disk when users or programs perform read-only operations on a file or folder, such as listing the folder’s contents or reading (but not changing) a file in the folder. If the Last Access Time is kept current on disk for read operations, all read operations become write operations, which impacts NTFS performance.
Note
• File-based queries of Last Access Time are accurate even if all on-disk values are not current. NTFS returns the correct value on queries because the accurate value is stored in memory.

NTFS eventually writes the in-memory Last Access Time to disk as follows.
Within the file’s attribute
NTFS typically updates a file’s attribute on disk if the current Last Access Time in memory differs by more than an hour from the Last Access Time stored on disk, or when all in-memory references to that file are gone, whichever is more recent. For example, if a file’s current Last Access Time is 1:00 P.M., and you read the file at 1:30 P.M., NTFS does not update the Last Access Time. If you read the file again at 2:00 P.M., NTFS updates the Last Access Time in the file’s attribute to reflect 2:00 P.M. because the file’s attribute shows 1:00 P.M. and the in-memory Last Access Time shows 2:00 P.M.
Within a directory entry for a file
NTFS updates the directory entry for a file during the following events:
• When NTFS updates the file’s Last Access Time and detects that the Last Access Time for the file differs by more than an hour from the Last Access Time stored in the file’s directory entry. This update typically occurs after a program closes the handle used to access a file within the directory. If the program holds the handle open for an extended time, a lag occurs before the change appears in the directory entry.
• When NTFS updates other file attributes such as Last Modify Time, and a Last Access Time update is pending. In this case, NTFS updates the Last Access Time along with the other updates without additional performance impact.

Note
• NTFS does not update a file’s directory entry when all in-memory references to that file are gone.

If you have an NTFS volume with a high number of folders or files, and a program is running that briefly accesses each of these in turn, the I/O bandwidth used to generate the Last Access Time updates can be a significant percentage of the overall I/O bandwidth.
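Restating the one-hour rule from the quoted passage as a minimal sketch (Python). Per the 1:00 P.M. / 2:00 P.M. example above, a difference of a full hour triggers the on-disk update, so the comparison below uses "at least an hour":

    # Minimal model of the one-hour rule described in the quoted text: the on-disk
    # value is rewritten once the in-memory Last Access Time is at least an hour ahead
    # (matching the 1:00 P.M. -> 2:00 P.M. example above).
    from datetime import datetime, timedelta

    def needs_on_disk_update(in_memory, on_disk):
        return (in_memory - on_disk) >= timedelta(hours=1)

    on_disk = datetime(2010, 7, 1, 13, 0)                               # 1:00 P.M. in the MFT record
    print(needs_on_disk_update(datetime(2010, 7, 1, 13, 30), on_disk))  # False: read at 1:30 P.M.
    print(needs_on_disk_update(datetime(2010, 7, 1, 14, 0), on_disk))   # True:  read at 2:00 P.M.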

Posted in Forensics, General Research