Sometimes even I have to admit that working at NetWitness is a unique experience. Because of what we do, the company has a very open culture: our Internet connections always have various deployments of our products on them, and our engineering staff is encouraged to use them for monitoring. Today I posted a couple of pictures to a friend on Facebook. Within minutes, I received the following from a colleague: "Hey, check out the new Facebook parser!" along with the attached:
In the first post in our forensics and reversing series, we examined why HTTP gzip content encoding is a larger and more serious problem than most people realize. We'll use the end of that post as the starting point for analysis here, because it also illustrates something far more important: the very heart of forensics, and something I'd propose is its very definition. I teach a network forensics and reversing class with Mike Sconzo about once a month, and this is a point I raise at least a dozen times a day in class:
World class forensics engineers are the ones who quickly and intelligently reduce millions of sessions to about a dozen worthy of deeper analysis.
What constitutes quickly? It depends on the tool being used to perform the analysis, but I'd generalize by saying no more than a couple of minutes, and roughly the same number of clicks. We'll see this in a moment.
What constitutes intelligently? We can answer this question with a host-based forensics analogy. Suppose you were given the hard disk from a compromised machine and needed to find the malware. There could be millions of files on the computer, so where do you start? Most of the time, especially for standard compromises, the following steps will work (this is an over-generalization, but one that works nonetheless):
- Show only PE files (exe, dll, etc.). At this point you've probably gone from nearly a million files to about 100,000.
- Show only PE files outside the Program Files directory. Here you may go from about a hundred thousand files to tens of thousands.
- Depending on the assumed time of compromise, show only those PE files modified or created in a specific range of days. At this point you should go from tens of thousands to fewer than 100.
- Since malware tends to be small, show only those PE files under 500 KB. At this point you should be looking at only a handful of files, and most of the time, the malware you're looking for will be one of them.
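The four steps above can be sketched as a short script. This is a minimal illustration, not a forensic tool: the `PROGRAM_FILES` path and the two-byte "MZ" magic check are simplifying assumptions, and a real PE check would parse headers rather than trust the first two bytes.

```python
import os

PROGRAM_FILES = r"C:\Program Files"  # assumed system layout; adjust per image

def is_pe(path):
    """Cheap PE check: real PE files start with the 'MZ' magic bytes."""
    try:
        with open(path, "rb") as f:
            return f.read(2) == b"MZ"
    except OSError:
        return False

def triage(root, start_ts, end_ts, max_size=500 * 1024):
    """Layer the four 'uninteresting' traits from the steps above."""
    suspects = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if path.startswith(PROGRAM_FILES):           # trait 2: outside Program Files
                continue
            if not is_pe(path):                          # trait 1: PE files only
                continue
            st = os.stat(path)
            if not (start_ts <= st.st_mtime <= end_ts):  # trait 3: modified in the window
                continue
            if st.st_size >= max_size:                   # trait 4: smaller than 500 KB
                continue
            suspects.append(path)
    return suspects
```

Note that each filter on its own is nearly useless; the reduction comes entirely from stacking them.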
In the above steps, you found the malware NOT by looking for known traits of malware, but by examining general file characteristics: characteristics external to the file, not signatures or other characteristics internal to it. Each of those traits by itself is completely uninteresting, but layered together, those "uninteresting" traits become very interesting.
As you’ll see next, the same applies to network traffic. We can intelligently go from millions of sessions to only a few by wisely layering traits of network sessions with little attention paid to what is inside those sessions.
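The same trait-layering can be sketched in a few lines. This is purely illustrative: the session records and field names below are invented for the example and are not the Investigator schema.

```python
# Illustrative session records; field names are assumptions for this sketch.
sessions = [
    {"service": "http", "direction": "outbound", "bytes": 840,   "dst_country": "BY"},
    {"service": "http", "direction": "outbound", "bytes": 52000, "dst_country": "US"},
    {"service": "dns",  "direction": "outbound", "bytes": 120,   "dst_country": "US"},
    {"service": "http", "direction": "inbound",  "bytes": 300,   "dst_country": "RU"},
]

def layer(records, **traits):
    """Keep only records matching every trait; each added filter is one 'click'."""
    return [r for r in records if all(r.get(k) == v for k, v in traits.items())]

# Layer externally observable traits, never touching session payloads:
# outbound HTTP, then unusually small sessions.
small = [s for s in layer(sessions, service="http", direction="outbound")
         if s["bytes"] < 1024]
```

As with the disk triage, no single filter is interesting; the combination is what isolates the one session worth reconstructing.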
Read the full and detailed post here:
It seems that our holiday from Rustock-generated spam is over.
We monitor a number of botnets at NetWitness and check them occasionally for new information. Since Rustock is in the news, we've paid close attention to it recently. Sometime this morning, Rustock began spamming again, pushing Viagra from shady .ru sites.
Looking at the traffic in Investigator, I see a quick overview of subject lines:
And reconstructed, we see a very in-depth message of “CLICK HERE!”
Which of course takes us to Canadian Pharmacy!
Welcome back, Rustock. We can't say we've missed you. There is no telling whether this activity will continue, but it appears to be business as usual for the Rustock operators.
Brian Krebs posted an article on his blog this morning that documents a recent spam attack on U.S. government employees that occurred around Christmas time.
which has in-depth technical coverage at:
Using the very simple ruse of "Merry Christmas from the White House", the message employed the common "ecard" social-engineering hook to push a ZeuS Trojan variant to the unlucky recipient.
From a configuration standpoint, this ZeuS bot used the following command and control points, all of which are down as of this writing:
It was poised to collect credentials from most major banks, but also included sites such as eBay, MySpace, and Microsoft, as well as the online-payment processors PayPal and e-gold.
While these facts alone show similarities to infrastructure aspects of the "Kneber" compromise that we documented back in February 2010, a very specific tie-in makes us believe that this attack was driven by operators who were also part of the initial Kneber compromise.
One domain in the original Kneber data, "updatekernel.com", was tied specifically to a phishing email that used a spoofed address to push ZeuS to targeted government employees, which Brian details here:
An interesting side note to this aspect of the Kneber data is that the ZeuS bot involved in this phish downloaded a second-stage executable called "stat.exe". That malware turned out to be a Perl script converted to a stand-alone executable with the perl2exe tool.
This malware searched the local hard drive of the victim PC for xls, doc, and pdf files, and uploaded them via FTP to:
which, at the time, resided on a server in Belarus.
The current spam run also downloaded a second-stage executable, called "pack.exe", which was also:
- A perl2exe executable
- Searched the victim PC for all xls, doc, and pdf files
- Uploaded stolen information to a server in Belarus, which resolved to “uploadpack.org”
So in this case we have two executables and three domain names, with three converging elements (pack, Belarus, and perl2exe).
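One way to make that convergence concrete is to score the overlap between the two samples' trait sets. To be clear, this is not HBGary's fingerprint tool, just a sketch of the idea using trait labels we invented from the observations above.

```python
# Hypothetical trait sets for the two second-stage samples; the underlying
# observations (perl2exe, doc/xls/pdf theft, FTP upload, Belarus drop server)
# come from the analysis, but the labels themselves are invented for the example.
stat_exe = {"perl2exe", "steals-doc-xls-pdf", "ftp-upload", "drop:belarus", "name:stat"}
pack_exe = {"perl2exe", "steals-doc-xls-pdf", "ftp-upload", "drop:belarus", "name:pack"}

def jaccard(a, b):
    """Simple set-overlap score: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b)

score = jaccard(stat_exe, pack_exe)  # 4 shared traits out of 6 total
```

A high overlap across independently chosen traits is exactly the kind of layered, "uninteresting-on-their-own" evidence that the forensics post above argues for.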
When compared, these two files, separated by almost a year, are nearly identical in size:
Furthermore, when analyzed with HBGary's "fingerprint" tool, which looks for code similarities and "toolmarks", a 95.8% match is indicated, with the only differing factor being the CPUID of the machine on which the malware was compiled:
Because this is such a small and little-known aspect of the Kneber compromise, it leads us to believe that this is indeed the same operator, who is again after documents pertaining to U.S. government activities.
This evidence shows the continuing convergence of cyber-crime and cyber-espionage activities, and how they occasionally mirror or play off one another.
The question again, which we posed in our initial Kneber document, is:
Who is the end consumer of this information?
Alex Cox, Principal Research Analyst