Breach Counts: We Don’t Know What We Don’t Know (Foghorn Leghorn Edition)

I asked a question last week on Twitter that provoked some interesting discussion and even a slap on the hand.  I thought my question was relatively simple and sensible:

Is it reasonable to wonder whether the breaches we know about – where the adversary was caught, for lack of a better term – represent only a sample, skewed toward the less well conceived and/or constructed attacks?

Seemed reasonable.  I asked the question because I use the various breach reports for statistics, and they of course report on breaches that are discovered. Think back to the hide and seek of your childhood.  In my experience, the worst hiders were very likely the first caught.  I even mentioned the old Monty Python “How to Hide” sketch.  So it seemed sensible to ask if the reports were skewed to the worst hiders of the attack population.  Or to quote that great security analyst and philosopher Foghorn Leghorn: “that breach is about as sharp as a bowling ball”.

I try very hard to stay away from fear, uncertainty and doubt (FUD), but my question pushed the FUD detector of Pete Lindstrom (@SpireSec), a security analyst and founder of Spire Security, past his tolerance point.  Pete’s contention was that raising the question without supporting evidence was a form of FUD, because I was raising a level of uncertainty and perhaps fear.  Point taken, but that does not stop my intellectual curiosity because I still believe there is a bit of a Gordian knot at play here.  I raised the question because I really study the reports and use the presented statistics to support my points about Triumfant, so I am not spreading FUD.  Foghorn would likely say that I am “more mixed up than a feather in a whirlwind”.  But the more I look at the statistics, the more I see unanswered questions that lie beyond the available evidence.

Which takes me back to the point of my original question: it is impossible to gauge the problem we collectively face in IT security because we do not know what we do not know.  And what we do not know is the proportion between detected and undetected breaches.  I raised a similar question in a blog post about malware detection rates two years ago and noted that an undetected attack is still an attack, even if we can’t count it.

The breach counts in the collective reports actually rely on two things: detection and disclosure.  The Verizon Business report is based on the Verizon caseload and cooperation from law enforcement agencies from several countries.  How many breaches are detected that do not show up on the Verizon report or the others? How many breaches are not reported to the authorities?  There are regulatory mandates that require an organization to disclose breaches that involve the loss of certain types of data, but what happens when those regulatory lines are not crossed?  The Verizon Report is actually called the Data Breach Investigations Report.

I go back to what we don’t know.  How many breaches go undiscovered?  How many breaches are discovered and not disclosed?  Are the detected and disclosed breaches representative of the broader population or are they representative of the less well written and less well executed breaches? Are the breaches in the report 99% of the breaches? 50%? The tip of the proverbial iceberg?
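The sampling-bias worry behind these questions can be made concrete with a toy simulation. Everything below is invented purely for illustration – the stealth scores and the detection model are assumptions, not numbers drawn from any of the breach reports. The point is simply that if stealthier attacks are less likely to be caught, then the detected population systematically understates the sophistication of the whole:

```python
import random

random.seed(42)

# Hypothetical model: each breach has a "stealth" score in [0, 1],
# and the chance a breach is detected falls as stealth rises.
N = 100_000
breaches = [random.random() for _ in range(N)]           # stealth scores
detected = [s for s in breaches if random.random() > s]  # sloppier = more likely caught

avg_all = sum(breaches) / len(breaches)
avg_detected = sum(detected) / len(detected)

print(f"detected {len(detected)} of {N} breaches")
print(f"avg stealth, all breaches:      {avg_all:.2f}")
print(f"avg stealth, detected breaches: {avg_detected:.2f}")
```

Under this (assumed) model, only about half the breaches are ever detected, and the detected sample skews noticeably toward the sloppy end – which is exactly the "worst hiders caught first" effect.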

These questions have ramifications, particularly when we put them in the context of what evidence we do have.  For example, if it turns out that the discovered breaches are not exactly, as Foghorn would note, the sharpest knives in the drawer, what does it say about the ability of organizations to detect breaches when the average time from infiltration to detection is 173.5 days as reported by the Trustwave report?

I agree with Pete – we need evidence.  Unfortunately, a reasonable conclusion that can be drawn from the collective evidence of these studies is that most organizations are not equipped to detect breaches.  Which of course adds to the conundrum: the evidence points to the fact that we will struggle to gather the proper evidence.

I don’t think the collective industry will answer these questions, because they are the uncomfortable detritus of years of placing so much emphasis on prevention. The “2011: Year of the Breach” declarations have been an uncomfortable public realization for the industry and for organizations.  Even if we were better at detecting breaches, organizations will not self-disclose unless required to do so for a variety of valid reasons.

So, FUD accusations aside, I stand by my question.  Of course, Foghorn would likely say that I “Got a mouth like a cannon. Always shooting it off”.

2011 – The Year We Recognized We Were Getting Breached

I just read the Symantec 2011 Internet Security Threat Report from cover to cover, which is a great report with a lot of great information.  But I have the same problem with this report as I do with the ones from Verizon Business, IBM X-Force, Trustwave, and Mandiant (also all great reports with great information) and several of the writers and general industry pundits.  In their report, Symantec calls 2011 “The Year of the Breach” which is consistent with the other reports and other discussions in the broader market.

I am sorry, but I just hate that term.  Hate it.  The fact that the industry, in many cases begrudgingly, has had to publicly acknowledge that shields are being evaded and organizations are getting breached does not make 2011 a milestone for breaches.  Companies were getting breached in 2010 and prior, and will be breached in 2012 and beyond.  Breaches are not a 2011 thing, or some annual phase we entered, watched peak, and ultimately ebb away.

I will agree that 2011 is the year that the IT Security Industry came to terms with the fact that vendors that sold preventative software could no longer conveniently ignore that organizations were being breached.  Many of the statistics that have been a consistent theme of reports like the Verizon Business 2012 Data Breach Investigation Report seem to have suddenly found resonance.  Statistics such as the 173.5 days on average from breach to detection reported in the Trustwave 2012 Global Security Report became impossible to ignore.

Therefore, calling 2011 “The Year of the Breach” seems disingenuous to me.  In fairness, calling 2011

“The Year We Stated the Obvious” or

“The Year We Woke up and Smelled the Coffee” or

“The Year We Got Our Heads Out of Our Collective… (filters engaging) the Sand” or

“The Year Vendors Realized They Could No Longer Sell Just Shields”

is clearly not as catchy.

For the record, this is not a criticism of the reports or the people that produce them.  These reports are hugely informative and I respect the efforts of those who produce them.  As I noted previously, the relentless presentation of the statistics in those reports was at least partially responsible for changing the predominant messaging in the market.  The hype could no longer shout down the reality presented by the numbers.  Notice I said messaging, because I think most pragmatic, right-thinking folks in IT security already knew about the breach situation.

Don’t get me wrong; I am happy that the market has decided to recognize that organizations are being breached.  I work for the company that I think offers the best and most innovative solution for detecting breaches at the point of infiltration.  And with one child about to leave for college, I am all about contributions to the Ivers Foundation.

Which leads me to another comment about these reports.  The reports – rightfully so – talk about detected breaches.  The reports indicate that a high percentage (>90%) of breaches are discovered by someone outside of the organization, indicating that organizations are not equipped to detect breaches.  One could make the case that the breaches that get detected do not represent the best and brightest because they were detected.  Without dissolving into hype or FUD, what percentage of breaches do we really detect? All? Half? 10%?  It is a question worth asking, and as organizations begin to put breach detection capability in place, the resulting statistics will be interesting.

By the way – anyone want to place bets that 2012 will be “The Year of the Targeted Attack”?

Detection is the Horse, Investigation is the Cart – Use in That Order

I received some interesting responses to last week’s post (Incident Detection, Then Incident Response), so let me try to answer them all collectively.

No, my post was not a knock against incident response (IR) or forensics tools.  I believe we are getting things out of order.  It is about detection first.  Better analysis? Good. Better response? Good. But it all starts with breach detection.  In fact, if we had better breach detection, organizations would actually get more value out of their IR/forensics tools.

The inability of organizations to detect breaches is easily explained.  The picture below is my attempt to illustrate what I call the Breach Detection Gap.  This gap exists between the numerous layers of prevention solutions and IR/forensics tools, leaving organizations unable to detect breaches at the point of infiltration.

The IT security market has been fixated – technically and emotionally – on prevention. Hence the numerous “usual suspects” on the left side of the graphic.  I think my position is clear (crystal) that a prevention-centric strategy is doomed to failure.  Tradecraft relentlessly and rapidly evolves to evade any gains in prevention, and targeted attacks and the Advanced Persistent Threat are engineered to evade the specific defenses meant to defend their target.

IR and Forensic tools provide deep insight and valuable analysis to the breach investigation process, but are only brought to bear after the breach is detected.  Unfortunately, this is where most organizations spend the meager budget slice that is set aside for post infiltration.

The Breach Detection Gap is the critical exposure between prevention tools and IR/forensics tools that leave organizations without the means necessary to detect breaches in real-time.  Obviously, without detection there can be no timely response.  Which is my point of last week’s post: re-packaging IR tools as the solution for breach detection problems is not the answer.  The answer must start with faster and more accurate detection.

Someone also asked why I don’t name names.  I try to write this blog to stimulate thought, and while I unashamedly say where Triumfant solves specific issues, I try very hard to keep this from being an ongoing advertisement.  I also have never believed that there is any value in speaking directly in a negative manner about any other vendor.  There are some good IR/forensics tools in the market that are very hot right now, and when products get hot, the market begins to act strangely around them.  My post was not a knock on those products, but on the efforts I see in the market to position those tools with professional services as the solution to the Breach Detection Gap.  Make no mistake, the organizations around these hot products and even the vendors behind these products see this as a chance to sell professional services projects to hunt down breaches.  I will leave it to you to figure out who those vendors are, but I think in most cases the answer will be easily discerned if organizations resist the hype.

What I did not say in last week’s post is that Triumfant is positioned to detect breaches in real time.  There are ample posts that address that directly as well as a new whitepaper on our site, so I won’t go into details here.   I will say that while heuristics, behavioral, and IPS/HIPS are also being directed to the problem, I think that Triumfant’s use of change detection and the analysis of change in the context of the host machine population is uniquely suited for the role of breach detection.  You get rapid detection (real-time), and within minutes we provide detailed information to help formulate an informed response, and we custom-build a remediation to stop the attack and repair the machine.  That is rapid detection and response.

And while Triumfant provides a wealth of IR/forensics data, we fully endorse the use of IR/forensics tools to provide the full range of post-breach investigative work.

But it all starts with detection.

Incident Detection, Then Incident Response

There seems to be an interesting and, I believe unfortunate, trend emerging in IT security:  Incident Response (IR) and Forensics tools are being wrapped in professional services and being sold as the solution to the breach detection problem. While I am happy that there is growing understanding that there is a breach detection problem, the reaction to that recognition is disappointing and misses the mark.

I think the point is obvious and is right there in the name “Incident Response”.  Response is not detection.  It is a step after detection – 1. Detect the problem. 2. Analyze the problem. 3. Fix the problem.  You could group #2 and #3 as respond, but they still follow detect.

You see, I thought detection was the issue.  While coming up with faster and more efficient ways to respond is laudable, I did not think what we needed was a better response to breaches that go undetected for an average of 173.5 days (Trustwave Report).  Just to make sure I was not missing something, I reviewed all of the excellent breach investigations and reports (Verizon Business, Trustwave, IBM X-Force, and Mandiant).  While some note the time from detection to containment, it is certainly not the focus.  The consistent focus I take from my reading is that organizations are getting breached and are not prepared to detect those breaches.

Unfortunately, there are several organizations making hay with selling professional services engagements under the umbrella of incident response.  The IT security market has a long history of seeing success and extrapolating that success into a rush to copy that success.  This is one of those cases.  Then marketing kicks in and the opportunity for the market to take constructive steps forward is squelched by the vendors rushing toward the next pot of gold, and organizations being swept into the hype.  Then these same reports will come out next year and there will be collective head scratching as to why the numbers have not improved.

The winner is the adversary, who is quite fine with 173.5 days of undetected access to organizational networks.

A simple analogy is firefighting.  Firefighters diligently and continuously train to better respond to a fire when called.  There are constant technological breakthroughs in equipment that also help them respond to a fire when called.  All of that training and equipment is put into use when they are called (the fire is detected).  Firefighters are not responsible for detection, they are all about the response. And while I am not a firefighter, my guess is that firefighters would tell you that the sooner the fire is detected, the better their response.  I would also guess that rapid detection is a key component to reducing loss.  Having a better, more expensive fire investigator will not reduce loss.

The first step to solving the breach detection problem is deploying tools that rapidly detect breaches at the point of infiltration.  Studies prove that prevention tools cannot provide that detection, and IR/Forensic tools are not built for detection.  Detection must be addressed first.  Then you can deploy all of these marvelous response offerings.

Another explanation is that organizations have twisted themselves into a really unfortunate Gordian knot. Maybe they are just beginning to understand the problem, but have reconciled that they will take action if and when they are breached.  This is not a good strategy, because statistics say it is likely they already have been breached, but simply don’t know it yet because they lack the tools to detect breaches.  There is no more “if”, and the “when” has likely already happened.  That is not FUD, that is what the statistics say.  Once a breach is detected – the statistics say that 92% of those breaches will be detected by a third party and not the breached organization – then they will spend enormous amounts of money to have someone come in and do lots of expensive analysis and make recommendations that they will likely ignore.  The organization of course must deal with the financial, regulatory, and reputational effects of the 173.5 days the adversary had access to their confidential data and intellectual property.

To paraphrase a quote from Churchill I have used before, people frequently stumble over the truth; unfortunately, they often pick themselves up and carry on as if nothing happened.  I fear this is one of those collective moments when organizations have stumbled onto the truth and will not be the better for it.

In 10 Days, the Mac Safe Haven Becomes a Botnet Spewing, APT Vulnerable OS

In rapid succession, the IT security world, not to mention the perceived cocoon of safety for Mac users, was rocked by two announcements.  On April 4, Russian antivirus company Dr. Web announced that they had discovered a Mac Botnet, called Flashback, and that the bot had infected 600,000 machines.  About ten days later, Kaspersky announced the discovery of a backdoor trojan called Backdoor.OSX.SabPub.  This attack leverages an exploit that uses malformed Word documents to deliver malware that opens a backdoor that can be used for advanced, persistent attacks.  Holy APT Batman!  Perceived safety to botnet to advanced persistent threat in 10 days!

Oh the shame.  The Mac went from safe haven to botnet spewing, APT exploitable platform tied to three-year old vulnerabilities before our very eyes.  As I tweeted, the heads of the Mac fanboys and the APT crew were simultaneously exploding.  Mac users were sent to various sites to download software to check their machines for Flashback like common Windows XP users.  I could not help but wonder if some enterprising bad guys had set up malware delivery disguised as Flashback checkers – wouldn’t that have been ironic.

I am really just having some fun here.  I take no joy in the Mac becoming a target, although it is good for business.  I am also not on some war against “smug” Mac owners because I have made the jump myself.

For me, the folklore/mythology of the Mac world as a safe haven from malicious attack reminds me of a scene from the classic movie and personal favorite, Butch Cassidy and the Sundance Kid.  In this scene, Butch and Sundance have fled to Bolivia and have taken a legitimate job guarding the payroll for a mining company.  At the beginning of the scene they are riding with the old, hardened mine boss (played perfectly by the great character actor Strother Martin) and begin to argue where the inevitable ambush will occur.  The mine boss responds disdainfully: “Morons. I’ve got morons on my team. Nobody is going to rob us going down the mountain. We have got no money going down the mountain. When we have got the money, on the way back, then you can sweat.”

Mac users, I hope you have enjoyed the ride down the mountain.  The recent Mac malware news just means that the downward portion is over, and now there is a critical mass of Macs plugged into the networks and systems where the money lies.  It is time for Mac users to sweat.

We could engage in what I am sure will be an animated conversation about the superiority of the Mac OS and the inherent vulnerabilities of Windows, but I contend this was all about opportunity.  Sure, Windows machines were likely the path of least resistance, but malware writers have proven to be a resilient and industrious bunch and repeatedly rise to find a way around every barrier put in their path.  So now that the opportunity has arrived – what the adversary wants is on or accessible via the Mac – the Mac OS barriers will also be breached.

I should point out that Mac users are not finished with their journey into the seedy underbelly of IT security.  Not surprisingly, the sales of Mac AV software have gone way up.  Wait until the Mac people connect the dots that the same crew that discovered the malware also sells them AV software.  Of course, that AV software will at least partially return their cocoon of safety, until they find out that motivated adversaries will drive around their new shiny AV software like a traffic cone on the interstate.

I hope they enjoyed the ride down the mountain.

Digitally Signed Malware Proves Again That Attacks Get Through Your Shields

So what, Triumfant guy, exactly gets through my shields?  You tell me I will be breached and you give me statistics, but I have AV, whitelisting, deep packet inspection, and every other acronym and buzzword in place. Oh yeah, and I have “the cloud” (pause for timpani emphasis) providing me prevalence information and other cloud-based stuff.

Well, digitally signed malware gets past your protections.  Not according to me, but according to several sources – Symantec, Kaspersky, AlienVault and BitDefender – cited in a March 15, 2012 PC World article “Digitally Signed Malware Is Increasingly Prevalent, Researchers Say”.

It is the blackhat version of “these are not the droids you are looking for”, using the certificates to get the malicious code waved through.  Some of the first evidence of this technique was found in 2010 in the analysis of Stuxnet.  The PC World article provides evidence that the technique is showing up with increasing frequency.  The article tells in good detail how it works and what protections it can evade, including whitelisting.

This technique is illustrative of the ongoing battle between good and evil in IT security.  Operating system advances in Windows 7 and other OS versions were thought to advance the security of systems, and the adversary then takes the very techniques used to make the systems more secure and subverts them to find new ways to deliver malicious code and evade protections.  I have no interest in impugning the efficacy of prevention software, and I have never said to turn off protection software.  What I have said consistently is that attacks will get through your shields.  This is yet another example of how, and it demonstrates that the adversary will always find a way through.  No FUD here – I would point out that every vendor cited in this story is a protection software vendor.

This story also illustrates that there are no silver bullets in protection.  Prospects often cite the use of whitelisting tools as the reason they do not need something like Triumfant, but here is a clear example of how such tools are being evaded.  If you need more, there is a video from Shmoocon that shows multiple techniques for evading several whitelisting tools.  Yet another silver bullet falls short. I am not singling out whitelisting – it is just the current “It” tool of IT security.

Lastly, it is illustrative of how the foundations of trust have become less…well…trustworthy.  I have seen the validation process of a certificate authority up close, and let’s just say I am not shocked to know that malware writers can obtain certificates with false identities. With the RSA breach and other certificate authorities being hacked, the foundation of trust was already showing cracks.  Now we see examples of how trust can be subverted using this technique.

So if this technique essentially waves malware through your shields, how are you going to detect the infiltration?  That is where Triumfant fills the gap, detecting the zero day attacks and targeted attacks, including the advanced persistent threat, that infiltrate your endpoint machines and servers.

I once had a product manager from another company disdainfully tell me “when you find something that gets past my shields, you call me”.  I am looking for his number as soon as I finish this post.

Targeted Attacks Make Remote Adversaries Malicious Insiders

“Wow, your tool would be great against malicious insiders!”

This is a common conclusion made by those introduced to the Triumfant solution.  That is because instead of looking for applications or malicious executables, we detect malicious activity through change, whether a threat actor working programmatically creates the change or a malicious insider directly makes the change.
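The general technique at work – diffing a machine against its own baseline, then judging each change by its prevalence across the host population – can be sketched in a few lines. To be clear, this is a hypothetical illustration of that idea, not Triumfant’s actual implementation; all names, attributes, and thresholds below are invented:

```python
from collections import Counter

def snapshot_diff(baseline: dict, current: dict) -> dict:
    """Return attributes that were added or modified since the baseline snapshot."""
    return {k: v for k, v in current.items() if baseline.get(k) != v}

def rare_changes(host_changes: dict, population: list, max_prevalence: float = 0.05) -> dict:
    """Flag changes seen on few or no peer machines: low prevalence suggests anomaly."""
    counts = Counter()
    for peer in population:
        counts.update(peer.items())
    flagged = {}
    for k, v in host_changes.items():
        prevalence = counts[(k, v)] / max(len(population), 1)
        if prevalence <= max_prevalence:
            flagged[k] = v
    return flagged

# Example: a new autorun entry appears on one host but on none of its peers.
baseline = {"autorun:updater": "c:/updater.exe"}
current  = {"autorun:updater": "c:/updater.exe",
            "autorun:svch0st": "c:/temp/svch0st.exe"}
peers = [{"autorun:updater": "c:/updater.exe"} for _ in range(99)]

changes = snapshot_diff(baseline, current)
print(rare_changes(changes, peers))  # the odd autorun entry stands out
```

Note the design point this sketch tries to capture: neither step needs a signature for the malicious file – the legitimate updater entry is ignored because every peer has it, while the oddball change is flagged purely because it is rare, whether a program made it or a human did.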

The term “malicious insider” has been gnawing at me since I delivered a short presentation for the Intelligence and National Security Alliance Innovators Showcase last week.  My new slides had several screen shots from the Poison Ivy Remote Administration Tool (RAT) that we use in demos of the Triumfant product.  It was interesting to see the reaction to those screen shots as people grasped in a very graphical way what it meant to “own” a machine.  I realized that perhaps while people have intellectually grasped what a RAT can do, they might not have fully appreciated the term “own” until they actually saw one in action. (More on RAT tools in the previous post)

Today’s attacks are not smash and grab operations – they methodically evade network and endpoint protections to establish a long-term and comprehensive presence on the machine.  These are carefully crafted incursions onto target networks that rely on persistence and stealth.

In short, they turn the outsider into an insider.  This of course is not news to those in infosec, but the people we serve are still wrapping their heads around these sophisticated targeted attacks.

Once a RAT is in place, the hacker has the same access as if they were looking over the shoulder of the machine’s user.  The user literally guides them through the applications and systems on the network, providing them user IDs and passwords along the way.  This allows the hacker to spread their influence to other places in the network until they are able to access their targets.   Time is on their side, as every statistic says that they will have at least a month and on average six months to identify and exfiltrate the intellectual property or sensitive data they seek.

Attacks rarely start at the machine that holds the targeted information.  Hackers now patiently gain access to the network where they can, and then stealthily move about until they find what they need.  And new Advanced Persistent Threats like Duqu illustrate that hackers are now using sophisticated attacks to gather all manner of information to then plan their payoff attack.  As I said in the previous post, these attacks put the adversary in your boardroom, laboratories, production lines, and CFO’s office.

If six months and virtually unlimited access does not qualify the hacker as an insider, I do not know what does. Recruiting physical insiders is a long and costly process and smacks of too much Mission Impossible.  And even well placed insiders may have trouble moving outside of their areas of responsibility.  Why go through all of that risk and effort when an outsider can easily become an insider.  If the operation is discovered, the outsider simply moves to the next target.

There is another aspect to being an insider: once you are inside, all of the security measures designed to keep you an outsider are now irrelevant.  All of the carefully crafted shields an organization has in place are pointing outward and are not equipped or designed to catch the work of an insider.  Once these shields are evaded they are no threat to the insider.  Statistics from the 2011 Verizon Business Data Breach Investigations Report say that less than 6% of data breaches are discovered by the organization’s IT shop.  That sounds like a pretty wide gap that requires some new thinking to me.

The answer to the original question is yes, Triumfant rocks against malicious insiders.  All types.
