Recently AV-Test.org published its "Response Time Tests", which measure (in hours) how fast AV companies protect against new malware that makes it into the In-The-Wild list. The study takes into consideration the WildLists from July, August and September 2007. Detection rates were measured using the recommended settings for each product's e-mail and web protection (as the internet is the infiltration vector for most malware). The results are very interesting and vary widely across the industry (I've taken out some lesser-known scanners and gateway products and concentrated on the desktop products):
Scanner        TOTAL     July   August   September
==================================================
Ikarus          3.16     2.35     5.04      2.71
Panda           6.04     0.78    12.68      6.44
Sophos         21.65    17.24    24.44     23.68
AVG            27.83    32.30    20.01     28.79
BitDefender    45.92    79.32    15.85     36.00
AntiVir        65.08     2.06    17.31    147.07
Trend Micro    82.52   120.42   111.59     33.00
Kaspersky      95.96   165.43    41.84     70.22
F-Secure      100.45   167.35    56.73     70.58
Nod32         126.20   162.22    73.87    127.54
Symantec      156.98   211.20   209.48     79.56
F-Prot        215.33   317.57   153.31    166.78
eTrust-VET    239.98   268.80   249.87    209.72
Avast!        306.18   526.62   182.44    195.44
McAfee        343.52   432.61   274.47    310.30
Microsoft     393.25   636.78   183.63    315.06
Norman        438.92   609.76   271.61    396.34
ClamAV        599.55   700.72   630.53    495.60
Dr Web        724.87   870.02   458.58    763.82

Average response times in hours, including proactive detections. Copyright © 2007 AV-Test GmbH. Last update: 2007-12-19 (hp/am). (b) denotes beta signature updates.
The interesting data is the "TOTAL" column, which indicates the number of hours it takes each scanner to effectively protect customers against the new malware samples that make it into the WildList. In Panda's case it took only 1.84 hours to protect customers using our beta signatures and 6.04 hours to protect regular customers. The average response time across all scanners tested was 265 hours.
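For anyone curious how the averaging works out, here is a minimal sketch that recomputes the mean of the TOTAL column above (the values are transcribed from the table, the variable names are mine, and this is plain arithmetic, not AV-Test's methodology):

```python
# Minimal sketch: average the per-scanner TOTAL response times (hours)
# transcribed from the table above.
response_hours = {
    "Ikarus": 3.16, "Panda": 6.04, "Sophos": 21.65, "AVG": 27.83,
    "BitDefender": 45.92, "AntiVir": 65.08, "Trend Micro": 82.52,
    "Kaspersky": 95.96, "F-Secure": 100.45, "Nod32": 126.20,
    "Symantec": 156.98, "F-Prot": 215.33, "eTrust-VET": 239.98,
    "Avast!": 306.18, "McAfee": 343.52, "Microsoft": 393.25,
    "Norman": 438.92, "ClamAV": 599.55, "Dr Web": 724.87,
}

average = sum(response_hours.values()) / len(response_hours)
print(f"Average over this desktop-only subset: {average:.1f} hours")
# Prints roughly 210 hours for this subset; the 265-hour figure quoted
# above is for the full study, which also includes the gateway and
# lesser-known scanners omitted from this table.
```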
Proactive Protection
Of course the best results are always achieved when successfully preventing rather than reacting to a threat. This is why Panda's results in this type of test are very good. Our generic signatures and heuristic engines are capable of proactively protecting against most threats without having to wait for a signature update (94% detection rate using the beta signatures). Looking at the results from a "proactive protection" perspective, they are as follows. These percentages represent the number of samples, out of a total of 93, detected proactively at the time each sample initially appeared (a quick sketch after the table maps the percentages back to sample counts):
Scanner       TOTAL   July   August   September
===============================================
Panda           91%    97%     78%       95%
AntiVir         87%    94%     74%       89%
Ikarus          87%    88%     78%       92%
Sophos          86%    94%     74%       87%
BitDefender     81%    75%     78%       87%
AVG             71%    59%     65%       84%
Kaspersky       69%    59%     61%       82%
Nod32           69%    56%     74%       76%
Trend Micro     68%    56%     57%       84%
F-Secure        67%    53%     61%       82%
Symantec        66%    53%     52%       84%
McAfee          55%    47%     61%       58%
Avast!          53%    31%     65%       63%
eTrust-VET      52%    44%     43%       63%
Dr Web          51%    41%     65%       50%
F-Prot          51%    28%     57%       66%
Microsoft       48%    25%     65%       58%
Norman          46%    44%     61%       39%
ClamAV          42%    28%     39%       55%
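As a quick arithmetic aid, here is a minimal sketch mapping the rounded percentages above back to approximate sample counts (the helper function is hypothetical; AV-Test reports the percentages, not the raw counts):

```python
# Minimal sketch: map rounded proactive-detection percentages back to
# approximate counts out of the 93 samples mentioned above.
TOTAL_SAMPLES = 93

def detected_count(percent: int, total: int = TOTAL_SAMPLES) -> int:
    """Approximate number of samples behind a rounded percentage."""
    return round(percent / 100 * total)

for scanner, pct in [("Panda", 91), ("Ikarus", 87), ("ClamAV", 42)]:
    print(f"{scanner}: {pct}% is roughly {detected_count(pct)} of {TOTAL_SAMPLES} samples")
# Panda: 91% is roughly 85 of 93 samples, and so on.
```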
14 comments
But it seems that Panda's desktop line hasn't integrated the above heuristics like the 'Command line' engine, which has high-level heuristics and is much better than the one currently used.
Actually for the email component of the desktop product we do have heuristics turned on high by default. Most of the heuristic configurations in this test are set to the product’s default for email and web protection.
I notice Sophos has published a similar entry on their blog site, however for some reason they left your product out of the table, leaving themselves at the top … strange that 😉
http://www.sophos.com/security/blog/2008/01/974.html
This is another opinion, and an independent one, but today the first reason to use Panda is the two-way communication between User <-> Panda, and the nice job done by PandaLabs and Panda Research. Imagine a product that could detect 100% of threats, actively and proactively: if it can only be used by engineers and advanced administrators and is hard for desktop users, it is a bad product. For us this is the first 9 points in Panda's favor; the rest follows without more words.
Lol, Sophos used a rootkit technique on the results, sorry for my irony.
I have a small suggestion for Panda AV: in the Update Settings, add an option for beta updates if the user wants the beta signatures.
Thanks for the complete results, and happy new year.
Cheers
#4 don’t forget the 24×7 technical support !! That’s important (at least 4 me)
#5 LOL, my http protection has detected sophos’s web as rootkit.sophos.GEN !!!
Congratulations to all – it is nice that a respected company such as AV-Test.org gives us high marks in their test.
I have been on the AV-Test.org homepage and tried to find the "Response Time Tests", but I couldn't find them. Can you send me a link directly to the test/article?
Best regards
Leif
Leif, this is from a study by AV-Test which is not normally published publicly and which we use to monitor and improve our reaction time and proactive detections against malware in the "WildList". As such, the complete report is not published and is only used internally by the different AV vendor labs.
Well, I have seen the report from the AV-Test study, and Panda's signature detection was not so good, because Panda finished in fifteenth place (what happened to Collective Intelligence?), and in proactive detection five other AV companies have the same result. (So, why TruPrevent?)
Regarding the first question, our signatures are optimized for what we see in the wild, not necessarily for the malware collections that are used as testbeds, as was the case in the latest test of 1 million samples. For a more representative test of real life, see the detection rates against In-The-Wild samples, not collections.
Regarding the proactive detection, this latest test counts as 'proactive detection' both heuristic analysis (static analysis of files on a hard drive) and behavioral analysis (dynamic or runtime proactive detection, aka TruPrevent). If the test considered these two separately you would see the difference. The problem is that not many vendors incorporate runtime/dynamic behavioral analysis, and this is why it is not reflected in public tests.
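To make the distinction concrete, here is a deliberately toy sketch of the two layers; every string, rule and threshold below is invented for the example and does not describe how TruPrevent or any real engine actually works:

```python
# Toy illustration of the two layers the test lumps together as
# "proactive detection". Indicators invented for the example only.

SUSPICIOUS_STRINGS = [b"CreateRemoteThread", b"URLDownloadToFile"]

def static_heuristic_scan(file_bytes: bytes) -> bool:
    """Static heuristics: judge the file on disk, before it runs."""
    return any(s in file_bytes for s in SUSPICIOUS_STRINGS)

def behavioral_monitor(observed_actions: set[str]) -> bool:
    """Runtime behavioral analysis: judge the process by what it
    actually does once executed (the TruPrevent-style layer)."""
    risky = {"writes_to_system32", "injects_into_process",
             "disables_security_service"}
    return len(risky & observed_actions) >= 2

# A packed sample with no telltale strings can slip past the static
# layer and still be blocked at runtime by its behavior:
print(static_heuristic_scan(b"\x90\x90 opaque packed payload"))  # False
print(behavioral_monitor({"writes_to_system32",
                          "injects_into_process"}))              # True
```

The point of the contrast: the static layer can only judge bytes on disk, while the behavioral layer judges the running process, so a sample that evades one can still be caught by the other.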
Maybe your company doesn't know it yet, but a few days ago two new "in the wild" threats came out (Adobe PDF, Linux kernel), and who do you think was the first AV company to detect them? That's right, not yours: six other AV companies detected it first, and the other threat, the one for Linux, no company has detected yet. So is the Collective Intelligence concept really working? 2-14-08
Actually Tech, this is not true regarding the PDF threats. What you are referring to is detection based on AV signatures, which is a really inefficient way of detecting this type of threat. Panda detects all PDF and Office exploits proactively, without any need for AV signature updates. More info here:
http://research.pandasecurity.com/archive/How-to-prevent-zero-day-exploits.aspx
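As a rough illustration of the idea of signature-less detection for this kind of threat (the indicators and function below are my own invention for the example, not the actual technique described in the linked post):

```python
# Toy sketch of generic, signature-less flagging of a suspicious PDF:
# embedded JavaScript plus a long heap-spray-like run, instead of a
# hash or byte pattern of one specific sample.
import re

def looks_like_pdf_exploit(data: bytes) -> bool:
    has_js = b"/JavaScript" in data or b"/JS" in data
    has_spray = re.search(rb"%u9090|\x90{64,}", data) is not None
    return has_js and has_spray

benign = b"%PDF-1.4 hello world"
sprayed = b"%PDF-1.4 /JS (" + b"%u9090" * 200 + b")"
print(looks_like_pdf_exploit(benign))   # False
print(looks_like_pdf_exploit(sprayed))  # True
```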
So it is not true. Go to http://www.hispasec.com; the article was published on 2/11/08 for the Adobe PDF and on 2/13/08 for the Linux kernel.
Thanks,
I don't have any more comments.
Yes Tech, the "signature detection" published in the Hispasec article does in fact show what you are saying ("no Panda signature detection of the PDF threat when it first appeared"). But my point is that we detected it proactively, without needing a signature update, from the very first moment it appeared. I talked with the folks at Hispasec a few days ago about this very same discussion. We should be looking beyond "signatures" when we talk about anti-malware techniques. Signatures are only one of many technologies normally included in today's anti-malware products. Does it matter whether we detect the PDF proactively from day 0 with a signature or with behavioral blocking? Why do you only consider a "signature detection" as valid, when there are other detection methods just as effective, if not more so?