August 16, 2010
Remediation is becoming a hot topic and already the FUD is flying. Of course, we are excited about our remediation story and I am often asked why our approach to remediation is different from others on the market. Let me see if I can help by borrowing a statistic.
I was at a meeting at Symantec headquarters on Friday where Francis deSouza, senior vice president of the Enterprise Security Group at Symantec, was first on the agenda. In his presentation, deSouza noted that Symantec research indicated that attacks are morphing so quickly that any given variation of an attack is used against 1.6 machines before a new variant appears.
Most companies (maybe everyone but Triumfant) approach remediation with previously written scripts that are matched to detected attacks. By definition, such scripts can only be written for known attacks. While there are some generic approaches that may apply to previously unknown attacks, for any moderately complex unknown attack there will likely be no remediation script.
Now let us put deSouza’s statistic to work in the discussion about remediation. If we put the script-based approach in the context of deSouza’s statistic, we can conclude that any remediation script is good for 1.6 machines. That makes sense: if the attack is morphing, then it follows that the remediation needs would also change. A new variant requires a new script.
I am already reluctant to believe that any pre-written script can be completely effective against attacks of even moderate complexity, because attacks may cause varying primary and secondary damage based on the unique combination of factors on any given machine, such as OS version, installed applications, and differences in configuration. Add the restriction to previously known attacks and Mr. deSouza’s statistic, and the logical conclusion is that scripted remediations will fall short. Even if a script does apply, it is reasonable to doubt that it can remediate the machine without leaving one or more artifacts that keep the machine vulnerable. This doubt normally translates into organizations re-imaging the machine as standard practice.
There are other differences, such as the need for context. For example, a process may be part of an attack. A generic script may mark that process for deletion even though it is shared by other benign applications. The script would have to either shoot the process on sight, potentially corrupting other applications, or contain the logic required to know which other applications share the process and then determine whether those applications are installed on the machine. Accounting for every “except for” would certainly be a challenge.
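To make the “except for” problem concrete, here is a minimal hypothetical sketch (all process and application names are invented, not taken from any real product) of the lookup logic a generic script would need before it could safely delete a flagged process:

```python
# Hypothetical sketch of context-aware deletion logic.
# All names below are illustrative assumptions.

# Processes flagged by some (hypothetical) detection pass.
flagged_processes = ["updater.exe", "svchlpr.exe"]

# The "except for" list a generic script would need: which benign
# applications also rely on each flagged process.
shared_by = {
    "updater.exe": ["AcmeOffice", "AcmeBackup"],  # legitimately shared
    "svchlpr.exe": [],                            # shared by nothing known
}

# What this particular machine actually has installed.
installed_apps = {"AcmeOffice"}


def safe_to_delete(process: str) -> bool:
    """Delete only if no installed benign application shares the process."""
    dependents = shared_by.get(process, [])
    return not any(app in installed_apps for app in dependents)


for proc in flagged_processes:
    if safe_to_delete(proc):
        print(f"remove {proc}")
    else:
        print(f"skip {proc}: shared by an installed application")
```

The hard part, of course, is not this five-line check but keeping the `shared_by` map complete and current for every application in the world, which is exactly why a one-size-fits-all script struggles.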
Triumfant constructs a remediation that is specific to the identified incident on that machine and requires no prior knowledge of the attack. We correlate all of the changes to the machine to build a remediation so complete that you should not have to re-image the machine. The remediation is surgical, contextual, and specific. As a bonus, our remediations can leverage our patent-pending donor technology to restore deleted or corrupted files.
There is more, but I feel the point has been made and anything else would be showing off. The difference between common remediation solutions and Triumfant’s approach is profound. Now I need to figure out how you attack 0.6 of a machine.