Context. This word has emerged as one of the key differentiators of our product and of our ability to detect and ultimately remediate malicious attacks. That realization comes after two-plus weeks of meetings with some of the most seasoned and smartest security people I have had the pleasure of interacting with since coming into the IT security market. These were all second and third meetings at a variety of organizations, so we were going much deeper into the inner workings of Resolution Manager to help them really understand how we can do what they were seeing in the demonstrations. One theme quickly emerged that was key to understanding what we do and how we do it, and that frankly left these very impressive security people intrigued with our product: context.
There are other endpoint protection tools on the market that analyze logs or other information from endpoint machines in an attempt to identify malicious attacks on those machines. But these attempts all lack what Resolution Manager uniquely brings to the table: context. When you use log files or diffs, you are seeing an event in time for just that machine and therefore have no way to put the event into any context. The result is predictable – so many false positives that the solution is not viable. The same fate is shared by behavioral analysis and heuristics that, while very sophisticated, only see the story in the context of one machine and never in the context of the endpoint population.
Let me try to put the subject of context into some, well, context. A change is detected on an endpoint machine and is captured in a log entry. Is this a valid change or the result of an attack? Is this the only machine in that group, or even the entire population, that is seeing the change, or is it happening to other machines? If it is happening to other machines, is it happening consistently and in an orderly way? What other changes are related to this change? What was the collateral damage to the machine associated with the change? Were ports opened, files corrupted or deleted, or security configuration settings altered? If it is a new application, does that application already exist in the endpoint population? If it does, is the install following the patterns established by other installs? Is the same set of files being installed, and do they all hash to consistent values?
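Resolution Manager's actual analytics are far more sophisticated (and proprietary), but a toy sketch can show what population-level context buys you over single-machine analysis. In this hypothetical example, a change is scored by how widely and how consistently it appears across the fleet; all names and thresholds here are mine, invented for illustration:

```python
from collections import Counter

def assess_change(change, population_changes):
    """Hypothetical sketch: score a detected change by how widely and
    consistently it appears across the endpoint population.
    `change` is a (path, md5) tuple; `population_changes` maps a
    machine id to the set of (path, md5) tuples seen on that machine."""
    path, _ = change
    machines_with_path = [m for m, chs in population_changes.items()
                          if any(p == path for p, _ in chs)]
    hashes = Counter(h for chs in population_changes.values()
                     for p, h in chs if p == path)
    prevalence = len(machines_with_path) / max(len(population_changes), 1)
    consistent = len(hashes) == 1  # does the file hash the same everywhere?
    if prevalence > 0.5 and consistent:
        return "likely a sanctioned rollout"
    if prevalence < 0.05 or not consistent:
        return "anomalous - needs deeper analysis"
    return "inconclusive"
```

A change seen identically on most machines looks like a rollout; the same change seen on one machine out of hundreds, or hashing differently machine to machine, is exactly the kind of outlier that pure log analysis cannot distinguish.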
The attacks of today are sophisticated and complex and cannot be properly analyzed by looking at disconnected log entries. It is context that allows for a proper analysis, and context that uniquely enables Resolution Manager to see every change associated with an attack so it can synthesize a remediation that is complete and restores the machine to its pre-attack state. So how does Resolution Manager get the context that others lack? The answer is the Adaptive Reference Model that lies at the heart of the solution.
When Resolution Manager is installed, the agent on each endpoint machine scans over 200,000 attributes, which include all of the registry keys, an MD5 hash of every file, configuration information, performance data, and just about every other piece of elemental data about the machine. These attributes are sent to the Resolution Manager server, where they are correlated with the attributes from the other machines in the endpoint population in order to build the rules used by the analytics that make up the “secret sauce” of our solution. The resulting rule set is an adaptive, multi-dimensional view of the endpoints that is truly one of a kind.
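To make the file-hashing slice of that attribute scan concrete, here is a minimal sketch in Python using only the standard library. The function names are mine, not the agent's; this shows only the general technique of streaming every file under a directory tree through MD5:

```python
import hashlib
import os

def md5_of_file(path, chunk_size=1 << 16):
    """Stream a file through MD5 so large files never load fully into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_tree(root):
    """Hypothetical sketch of one slice of the attribute scan: map every
    file under `root` to its MD5 digest."""
    attrs = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                attrs[full] = md5_of_file(full)
            except OSError:
                pass  # unreadable files are simply skipped in this sketch
    return attrs
```

A real agent would of course cover registry keys, configuration, and performance data as well, and would report the results to the server rather than return a dictionary.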
It is context.
When I say multi-dimensional, I am speaking to the highly sophisticated grouping and correlation process that occurs automatically within the model. The analytic engine will group machines by any number of dimensions and apply multiple correlation algorithms to find patterns that will eventually help with threat assessment and false-positive elimination. For example, it will build a set of rules for every application it finds, and a separate rule set for each version of that application. The rules capture the analysis of such information as the files associated with that specific release, the hash values of those files, and other elements of what is “normal” for a machine running that version of that application.
You don’t have to tell the model what is normal – it learns it. The rule sets for each version of each application form a normative whitelist for the endpoint population. So when a new application is installed, the model knows immediately whether it has seen that application before. If it has not, it creates an alert; if it has, it makes sure the install follows the rules learned from how that application behaves on other machines.
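As a hedged illustration of that learning step, the sketch below builds a per-application, per-version rule set from observed installs and then checks a new install against it. Everything here is a simplification I made up to show the idea; the real model learns far richer rules than file hashes:

```python
def learn_rules(installs):
    """Hypothetical sketch of a normative whitelist: for each
    (app, version) pair, record the files and per-file hashes observed
    across machines. `installs` is a list of dicts like
    {"app": ..., "version": ..., "files": {path: md5}}."""
    rules = {}
    for inst in installs:
        key = (inst["app"], inst["version"])
        rule = rules.setdefault(key, {})
        for path, digest in inst["files"].items():
            rule.setdefault(path, set()).add(digest)
    return rules

def check_install(rules, inst):
    """Flag an install that deviates from the learned rule set."""
    key = (inst["app"], inst["version"])
    if key not in rules:
        return "never seen - alert"
    rule = rules[key]
    for path, digest in inst["files"].items():
        if path not in rule or digest not in rule[path]:
            return "deviates from learned pattern - alert"
    return "matches learned pattern"
```

The point is that nobody hand-writes the whitelist: each new observation of a clean install refines the rules, which is what lets the model call out an install that drops an extra file or a file with the wrong hash.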
And the model evolves over time. Changes to the endpoint population are assimilated into the model and rules are updated and reformed accordingly. The model grows and adapts with the environment.
With the model in place, you now have context. So when the agent identifies a change on a given machine and sends that change to the server, the change is analyzed in the context of the Adaptive Reference Model. It analytically asks all of the questions I used as examples earlier, and thousands more, leveraging the learned context of the model. If the analysis determines the change may be part of an attack, the server builds a request back to the endpoint for additional information, which we call a probe. The purpose of the probe is to get even more context: it performs over twenty different correlation techniques to ensure that all of the changes to the machine associated with the attack have been identified. It uses context to cluster these changes so any attack can be comprehensively addressed.
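The twenty-plus correlation techniques are proprietary, but one of the simplest ideas in that family, grouping changes that occur close together in time so an attack's full footprint is treated as a unit, can be sketched in a few lines. This is my own illustrative stand-in, not the product's algorithm:

```python
def cluster_changes(changes, window=30.0):
    """Hypothetical sketch of one correlation technique: group changes on
    a machine that occur within `window` seconds of the previous change,
    so an attack's full footprint can be remediated as one unit.
    `changes` is a list of (timestamp, description) tuples."""
    clusters = []
    for ts, desc in sorted(changes):
        if clusters and ts - clusters[-1][-1][0] <= window:
            clusters[-1].append((ts, desc))  # extend the current burst
        else:
            clusters.append([(ts, desc)])    # start a new cluster
    return clusters
```

A real probe would correlate on many more dimensions than time (file relationships, registry keys, process lineage, and so on), but even this toy version shows why clustering matters: remediating only the one logged change while leaving the rest of its cluster behind would not restore the machine to its pre-attack state.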
Context. No amount of heuristics applied to the analysis of log traffic can replace the context of our Adaptive Reference Model. The same goes for behavioral analysis and heuristics at the endpoint level. It is context that makes the most seasoned security people say they have never seen a product like Resolution Manager, and leaves them at a minimum intrigued and, from the reactions I have seen, impressed.
I plan to supplement this post with additional information about the model – such as how to set explicit policies – but I am way over my unofficial post word limit, so I will stop here. I hope this gave you a feel for the Adaptive Reference Model and why it is differentiating, but the best way to understand it is through a demonstration, which we would be happy to provide.