Internet Security Suites fail to block exploits and do little to protect users against them, according to a recently released "test" [here] by Secunia, a Danish vulnerability notification firm. I put "test" in quotes because it is very common for vulnerability companies to use close-to-unethical tactics, overselling problems with the AV industry in order to promote their own services [another example here].
Now it's Secunia's turn. In their "test" they assume that anti-virus products perform poorly at detecting vulnerability exploits because of their limited focus on traditional AV signatures. So along comes Secunia's Chief Technology Officer (CTO), Thomas Kristensen, with the bright idea of testing 12 different Internet Security Suites from McAfee, Norton, Kaspersky, Panda and others against a testbed of exploit files. So far so good; it's an interesting idea for comparing technologies, and I believe such a test should be performed.
However, when testing exploits, one very important consideration is that these products don't rely on traditional signature detection alone. Yet Secunia's "test methodology" consists only of manually scanning 144 inactive exploit files. This is very much like saying you're going to test a car's ABS brakes by throwing the car off a 200-meter cliff. Absurd, sensationalist and misleading at best.
Just to clarify: if you test only one part of a product against exploits, which happens to be the part of the product that IS NOT designed to deal with exploits, and leave out of the test the part that DOES deal with exploits and vulnerabilities, there's a very good chance the results will be misleading. Mr. Kristensen, as a Chief Technology Officer, should know this and should be very well aware of the consequences of a faulty methodology. So the question remains: why did he ignore it and go for the sensationalist, yellow-journalism approach?
But the absurdity doesn't stop with Secunia's flawed testing methodology. Mr. Kristensen concludes that "… major security vendors do not focus on vulnerabilities. Instead, they have a much more traditional approach, which leaves their customers exposed to new malware exploiting vulnerabilities." Well, duh. If you only test traditional signatures and neglect the other technologies included in the product which ARE designed to block exploits, what do you expect? Oh, wait, I just saw on their website that Secunia actually sells a vulnerability scanner! Hmmm, I wonder if that has something to do with the flawed conclusions of this test… Internet Security Suites stopped relying on signature detection alone many years ago. Panda's products (and others') integrate behavioral analysis, context-based heuristics, security policies, vulnerability detection, etc. Yet none of these technologies were tested by Secunia.
Let's take just one of the many protection technologies included in Panda Internet Security 2009 which DOES deal with preventing vulnerability exploitation, and see how it behaves against these exploits when tested correctly. I'm talking about the Kernel Rules Engine (KRE), a security policy technology incorporated into all Panda products in 2004 which effectively prevents zero-day exploits against PDF, DOC, XLS, PPT and many other vulnerable applications. While Secunia's test grants Panda a lowly 1.59% detection rate of the important threats, had they tested correctly they would have found that with the Kernel Rules Engine alone, Panda's product is able to generically and proactively block 56% of the important threats. And that is with KRE alone. Panda's products also include other technologies, such as TruPrevent's Behavioral Analysis, URL Filters and the Vulnerability Detection module, which would prevent other exploits if Secunia cared to run their tests with a minimum level of professionalism.
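To make this concrete, here is a minimal sketch of how a policy-rule engine of this kind blocks exploits generically. The rule format, process names and action labels below are all invented for illustration; the real Kernel Rules Engine is a kernel-level technology and far richer than this toy:

```python
# Toy sketch of a KRE-style security policy check. Rule format, process
# names and action labels are invented for illustration only; this is
# not Panda's actual Kernel Rules Engine implementation.

# Deny rules: a document handler has no legitimate reason to perform
# these actions, so an exploit payload hijacking it is blocked
# generically, with no per-exploit signature needed.
DENY_RULES = {
    ("winword.exe", "spawn_process"),
    ("excel.exe", "spawn_process"),
    ("powerpnt.exe", "spawn_process"),
    ("acrord32.exe", "write_executable"),
}

def allowed(process: str, action: str) -> bool:
    """Return False when the (process, action) pair hits a deny rule."""
    return (process.lower(), action) not in DENY_RULES

# A malformed .doc hijacks Word and tries to drop and run a downloader:
assert not allowed("WINWORD.EXE", "spawn_process")  # blocked by policy
# Ordinary document rendering triggers no rule:
assert allowed("WINWORD.EXE", "open_file")          # permitted
```

Because one generic rule covers an entire family of exploits, known and unknown alike, a handful of policies can block dozens of the PoC files listed below without a single signature.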
Note to Secunia:
The following exploits (at least), which in your study are marked as "not detected by Panda", are actually detected generically with the correct testing methodology. Hint: have you tried actually "running" the exploits?
SA14896 CVE-2005-0944 PoC.mdb
SA20748#1 CVE-2006-3086 PoC.xls
SA21061 CVE-2006-3655 POC1.ppt
SA21061 CVE-2006-3656 POC2.ppt
SA21061 CVE-2006-3660 POC3.ppt
SA22127#1 CVE-2006-4694 PoC.ppt
SA23540 CVE-2007-0015 PoC.qtl
SA23676#2 CVE-2007-0028 Exploit1.xls
SA23676#2 CVE-2007-0028 exploit2.xls
SA23676#2 CVE-2007-0028 PoC.xls
SA23676#3 CVE-2007-0029 PoC.xls
SA23676#4 CVE-2007-0030 PoC.xls
SA23676#5 CVE-2007-0031 PoC.xls
SA24152 CVE-2006-1311 PoC.rtf
SA24359#1 CVE-2007-0711 PoC.3gp
SA24359#3 CVE-2007-0713 PoC.mov
SA24359#4 CVE-2007-0714 PoC.mov
SA24359#8 CVE-2007-0718 PoC.qtif
SA24359#9 CVE-NOMATCH PoC.jp2
SA24659 CVE-2007-0038 GameOver.ani
SA24664 CVE-2007-1735 PoC.wpd
SA24725 CVE-2007-1867 GameOver.ani
SA24784 CVE-2007-1942 Exploit.bmp
SA24784 CVE-2007-1942 PoC.bmp
SA24884 CVE-2007-2062 GameOver.cue
SA24973 CVE-2007-2194 GameOver.xpm
SA25023 CVE-2007-2244 PoC.bmp
SA25034 CVE-2007-2366 GameOver.png
SA25044 CVE-2007-2365 GameOver.png
SA25052 CVE-2007-2363 GameOver.iff
SA25089 CVE-2007-2498 PoC.mp4
SA25150#1 CVE-2007-0215 PoC1.xls
SA25150#1 CVE-2007-0215 PoC2.xls
SA25150#3 CVE-2007-1214 PoC.xls
SA25178 CVE-2007-1747 PoC.xls
SA25278 CVE-2007-2809 GameOver.torrent
SA25426 CVE-2007-2966 PoC.lzh
SA25619#1 CVE-2007-0934 PoC.vsd
SA25619#2 CVE-2007-0936 GameOver.vsd
SA25619#2 CVE-2007-0936 PoC.vsd
SA25826 CVE-2007-3375 PoC.lzh
SA25952 CVE-2007-6007 PoC1.psp
SA25952 CVE-2007-6007 PoC2.psp
SA25952 CVE-2007-6007 PoC3.psp
SA25988 CVE-2007-1754 PoC.pub
SA25995#1 CVE-2007-1756 PoC.xls
SA25995#2 CVE-2007-3029 PoC1.xls
SA25995#2 CVE-2007-3029 PoC2.xls
SA25995#3 CVE-2007-3030 PoC.xlw
SA26034#4 CVE-2007-2394 PoC.mov
SA26145 CVE-2007-3890 PoC1.xlw
SA26145 CVE-2007-3890 PoC2.xlw
SA26433 CVE-2007-3037 PoC.wmz
SA26619 CVE-2007-4343 Exploit.pal
SA26619 CVE-2007-4343 GameOver.pal
SA27000 CVE-2007-5279 PoC.bh
SA27151 CVE-2007-3899 GameOver.doc
SA27151 CVE-2007-3899 PoC.doc
SA27270 CVE-2007-5709 GameOver.m3u
SA27304#1 CVE-2007-5909 GameOver1.rtf
SA27304#1 CVE-2007-5909 GameOver2.rtf
SA27304#1 CVE-2007-5909 PoC1.rtf
SA27304#2 CVE-2007-6008 PoC1.eml
SA27304#2 CVE-2007-6008 PoC2.eml
SA27361#4 CVE-2007-2263 PoC.swf
SA27849 CVE-2007-6593 GameOver1.123
SA27849 CVE-2007-6593 GameOver2.123
SA27849 CVE-2007-6593 GameOver3.123
SA28034 CVE-2007-0064 PoC1.asf
SA28034 CVE-2007-0064 PoC2.asf
SA28034 CVE-2007-0064 PoC3.asf
SA28034 CVE-2007-0064 PoC4.asf
SA28083#2 CVE-2007-0071 PoC.swf
SA28092#1 CVE-2007-4706 PoC.mov
SA28209#10 CVE-2007-5399 PoCbcc.eml
SA28209#10 CVE-2007-5399 _PoC_cc.eml
SA28209#10 CVE-2007-5399 PoC_date.eml
SA28209#10 CVE-2007-5399 PoC_from.eml
SA28209#10 CVE-2007-5399 PoC_imp.eml
SA28209#10 CVE-2007-5399 PoC_prio.eml
SA28209#10 CVE-2007-5399 PoC_to.eml
SA28209#10 CVE-2007-5399 PoC_xmsmail.eml
SA28209#11 CVE-2007-5399 PoC.eml
SA28209#12 CVE-2007-5399 PoC.eml
SA28209#13 CVE-2007-5399 PoC.eml
SA28326 CVE-2008-0064 GameOver1.hdr
SA28326 CVE-2008-0064 GameOver2.hdr
SA28506#1 CVE-2008-0081 Exploit.xls
SA28506#1 CVE-2008-0081 PoC.xls
SA28506#2 CVE-2008-0111 PoC1.xls
SA28506#2 CVE-2008-0111 PoC2.xls
SA28506#2 CVE-2008-0111 PoC3.xls
SA28506#4 CVE-2008-0114 PoC.xls
SA28506#7 CVE-2008-0117 Exploit.xls
SA28506#7 CVE-2008-0117 GameOver.xls
SA28506#7 CVE-2008-0117 PoC.xls
SA28563 CVE-2008-0392 Exploit_CommandName.dsr
SA28563 CVE-2008-0392 GameOver_CommandName.dsr
SA28765 CVE-2008-0619 PoC.m3u
SA28765 CVE-2008-0619 PoC.pls
SA28802#1 CVE-2007-5659 GameOver.pdf
SA28802#1 CVE-2007-5659 PoC.pdf
SA28904#2 CVE-2008-0105 PoC1.wps
SA28904#2 CVE-2008-0105 PoC2.wps
SA28904#3 CVE-2007-0108 GameOver.wps
SA29293#1 CVE-2008-1581 PoC.pct
SA29321#2a CVE-2008-0118 PoC.ppt
SA29321#2b CVE-2008-0118 GameOver.ppt
SA29321#2b CVE-2008-0118 PoC.ppt
SA29620 CVE-2008-0069 GameOver.sld
SA29650#5 CVE-2008-1017 crgn_PoC.mov
SA29704#1 CVE-2008-1083 PoC.emf
SA29704#2 CVE-2008-1087 PoC.emf
SA29838 CVE-2008-1765 Exploit.bmp
SA29838 CVE-2008-1765 GameOver.bmp
SA29934 CVE-2008-1942 PoC_ExtGState.pdf
SA29934 CVE-2008-1942 PoC_Height.pdf
SA29934 CVE-2008-1942 PoC_MediaBox.pdf
SA29934 CVE-2008-1942 PoC_Width.pdf
SA29941 CVE-2008-1104 Exploit.pdf
SA29941 CVE-2008-1104 PoC.pdf
SA29972 CVE-2008-2021 PoC.ZOO
SA30143#1 CVE-2008-1091 PoC.rtf
SA30953 CVE-2008-1435 PoC.search-ms
SA30975 CVE-2008-2244 PoC1.doc
SA30975 CVE-2008-2244 PoC2.doc
SA31336#2 CVE-2008-3018 PoC.pict
SA31336#4 CVE-2008-3020 PoC.bmp
SA31336#5 CVE-2008-3460 PoC1.wpg
SA31336#5 CVE-2008-3460 PoC2.wpg
SA31336#5 CVE-2008-3460 PoC3.wpg
SA31385 CVE-2008-2245 PoC.emf
SA31441 CVE-2008-4434 PoC.torrent
SA31454#X CVE-NOMATCH PoC.xls
SA31454#2 CVE-2008-3005 Exploit.xls
SA31454#2 CVE-2008-3005 PoC.xls
SA31675#3 CVE-2008-3013 PoC.gif
SA31675#4 CVE-2008-3014 PoC.wmf
SA31675#X CVE-NOMATCH PoC.emf
SA31675#X CVE-NOMATCH PoC.wmf
SA31675#5 CVE-2008-3015 PoC.ppt
SA31821#6 CVE-2008-3626 PoC1.mp4
SA31821#6 CVE-2008-3626 PoC2.mp4
17 comments
The Panda products should detect exploits that are inactive.
I understand the Kernel Rules Engine is meant to protect installed apps from active exploits by using policy rules,
but it would be better to prevent inactive exploits from activating by having detections for them, so they get caught immediately when they enter the system.
In some cases this might even be right, but using AV signatures for detecting exploits is not necessarily the most effective strategy. There are better and more efficient techniques. Btw, this reminds me of a similar test published by SANS titled “Effectiveness of AV in Detecting Metasploit Payloads”:
http://www.sans.org/reading_room/whitepapers/casestudies/2134.php
I'm just an end user of a Panda product. I used Norton in the past, but it was slow and heavy, and some bad software was not recognised, so for these reasons I'm now happily using Panda.
As an end user I can't say product A is better than product B from a technical point of view: I don't have test files with which to check product A against product B. But I would like to give you my point of view: this seems to me a commercial test, because I can't believe Norton reached 20% while all the others scored so low (less than 5%).
I think, as Pedro Bustamante wrote above, that if Secunia tried sending these files via e-mail to a victim computer with the respective antivirus program installed and running, or tried to run / open each file, the results would be different.
As a final personal point, automatic updates are important and useful and are normally enabled. So, for me, checking an old exploit on an unpatched operating system / program is less meaningful than checking a new exploit / piece of malware on a patched system / program.
I mean, knowing that Norton finds X dangerous files on an unpatched computer is not so important to me. What matters is how quickly Norton finds X new dangerous files on an updated computer while the fix is still missing. That test would be more important to me, and I think its results would look different from Norton at 20% and the others below 5%.
[ I'm sorry for English errors, if present! 🙂 ]
jon wrote: “The Panda products should detect exploits that are inactive”
That doesn’t work. It is simply WAY too easy to disguise the functionality of code, especially if it is hand-written machine language. (Yes, I come from the days of looking up opcodes in the manual.) Whitelisting and behavior analysis are highly effective, and you really do have to wait for something to go active before you know what it will do.
“had they tested correctly they would have found that with the Kernel Rules Engine alone, Panda’s product is able to generically and proactively block 56% of the important threats.”
Bragging about a 56% detection rate is still a huge SECURITY FAIL.
Antivirus technology is dead, dead, dead… the ever-increasing number of botnets and the thriving underground malware economy are perfect evidence of this. Wake up, Panda.
It's just an example, as proof that there are other technologies better suited to stopping exploits than AV signatures. Security suites have many different layers of technology, which the majority of people, yourself included it seems from your comment, flat out ignore.
You don't seem to understand the real problem behind security technologies and instead reduce it all to a simplistic "AV is dead" view, without any real or significant data to back it up. I recommend kurt wismer's blog, which has plenty of hard evidence and reasoning against this "AV is dead" thinking and which I will not cover in this blog as it's a waste of time:
http://anti-virus-rants.blogspot.com/
@fail: You overlooked twice that Pedro was only speaking about one single feature of Panda’s security suite? Wow…
@Pedro: But I would be interested in knowing the detection rate of the full product according to this “test”. As you were able to get the detection rate for just using KRE you should be able to produce the more interesting result, too, shouldn’t you?
Btw, reading this post was very amusing! Thanks, Pedro 🙂
Moritz, good to read you around here again.
As you mention, I think it would be very interesting to perform these kinds of tests with real-life exploit-using malware (which, btw, is not the same thing as PoC-only exploits!). I haven't tested every single file in Secunia's test, simply because of the effort of running and documenting each one. The results for KRE are based on the type of exploit, not on executing each one. The way KRE works, any PPT/PDF/DOC/XLS/etc. exploit would have been blocked regardless of which specific exploit it is.
It seems that Secunia, in their response post (http://secunia.com/blog/30/), argue that the original intention of the test was to show that "patching must be done regardless of whether you have AV or not" (which, btw, is not what I got out of reading Secunia's report, but whatever). They also mention that their next test will take into consideration running the exploits and recording the complete results.
This brings up another interesting point which is worth investigating further. Is AV supposed to protect against ALL unpatched applications and OS components?
I was always here, I just didn’t have anything to say.
I think an important aspect for users is this part of Secunia’s response post: “If a user receives e.g. an Office document, saves it, scans it, and it isn’t detected as malicious, the user would (and should be able to) trust the document as it may be e.g sent to someone else or moved to a system without the same kind of protection.”
I was thinking the same when I first read this blog entry. Consider an exploit for a problem patched a long time ago. A company unknowingly includes a file containing the exploit in a distribution of their product. The scanner doesn’t have a signature for it, and testing the product doesn’t reveal it either, because the affected component is patched, so no malicious code was executed. Of course, the product is then installed by a user who disabled Windows updates “because it’s evil”. BAMM!
So the question is, does it make sense to create a signature for every more or less polymorphically generated exploit, when the exploit has no chance on a patched system and things like KRE would detect it on an unpatched one?
And how would it be possible to protect against all implementations of every exploit?! There surely are cases where you would need to solve the halting problem to do that, although you might then be able to say, it’s “probably malicious”.
So I would wish for AVs “to be able to detect malicious content while it is passive”, but I don’t think that’s feasible in general (to say nothing of zero-day exploits). But an up-to-date set of signatures, fed from the results of KRE et al. via the cloud, should already be able to handle a large part of this problem.
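A toy sketch of the loop I mean, with all the names and the in-memory “cloud” invented for illustration, not any vendor’s real API:

```python
# Toy sketch of a cloud-fed signature loop; the function names and the
# set standing in for the vendor's cloud service are invented.
import hashlib

cloud_blocklist = set()  # stands in for the vendor's cloud service

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def report_behavioral_block(data: bytes) -> None:
    """Client A: a KRE-style engine just blocked this file while it was
    active, so publish its hash for every other client."""
    cloud_blocklist.add(file_hash(data))

def scan_passive_file(data: bytes) -> bool:
    """Client B: check a still-inactive file on disk against hashes
    already reported by other clients."""
    return file_hash(data) in cloud_blocklist

sample = b"...crafted PoC content..."
report_behavioral_block(sample)   # seen misbehaving on one machine
assert scan_passive_file(sample)  # now detected while passive elsewhere
```

Of course, an exact hash only matches identical samples, so the polymorphic-exploit problem remains.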
Sounds like some sour grapes in the AV community. Someone performs testing they don’t like, and they get a low score. So it’s only natural to scream foul at the test. Admitting your AV Suite has shortcomings would be bad for business, after all.
I’ve enjoyed reading not just Panda’s reaction, but the reaction of all of the other AV blogs as well. But the fact is that this test is a TINY piece of the consumer AV puzzle. If you have a strong product, there should be no cause for concern. The level of concern tells me that the strengths of these suites tested are in question.
Hey Pedro,
Is the exploit patched by KB958644 proactively detected on Vista?
I couldn’t install the update because it gave me a blue screen crash that said something about “not equal”.
I just want to make sure I’m protected.
Please answer my emails.
Thanks
Moritz, you touched on the two main issues: it’s not efficient to maintain signatures for every possible exploit, as there are other techniques better suited to the problem than signatures (not to mention patching).
But more importantly, you touched on an issue that was completely missed in the test and in all the discussion around it: patching is important, but what happens with exploits for zero-day vulnerabilities?
Mikey, that’s one way of looking at it. Another way of looking at it is that those of us with stronger products get misrepresented by these types of tests more than those with even less technology against this type of threat (if tested correctly, of course).
Julevine, send me an email with the details and I’ll look at the problem. I just have one email from you but it’s about the quarantine.
Hi Pedro,
Panda Antivirus works well on my PC, but it becomes a resource hog when Panda updates its virus signatures. The file named PAVSRV51.exe used 99% of my CPU. I hope the Panda team can optimize that process to use fewer resources.
My CPU is an Intel Core Duo with 1 GB of RAM.
So? It’s best if an AV can detect inactive exploits. If the exploit is active, damage may have already been done.
I’m sure that if Panda had won, Panda wouldn’t be complaining.
The test is valid. I don’t see how it’s invalid …
Read this thread:
http://www.wilderssecurity.com/showthread.php?t=228862
To condense the contents of the thread: the posters agreed that detection of an inactive exploit was optimal; even Panda, the supposed leader, was not able to stop 70%, much less 100%, of the exploits with real-time and on-demand scanning.
Now, Symantec already has a head start of about 30% when it comes to detecting exploits; combine that with their real-time protection and you have a winner.
Also, those exploits could be, or are, considered and classified as backdoors. That shows that Panda has poor backdoor detection, along with every other security suite tested…
I’m sorry, but getting a poor score on a test does not constitute failure.
Tech0, I really don’t think you understand the issues behind all the things that are wrong with Secunia’s testing methodology. I recommend that you search around for comments by actual experts on Secunia’s faulty methodology and not rely on some anonymous post whose author also doesn’t understand the issues behind complex security testing.
And just for the record, even if Panda had detected more inactive exploits via signatures than the rest of the vendors, I can assure you I would still be complaining about the lack of a valid testing methodology.