I’ve covered the impact that automated detection systems have on false positives in the past. Hispasec, the makers of VirusTotal, also talked about this issue in their blog post aptly named Antivirus Rumorology. More recently Kaspersky conducted an experiment during a press conference and showed a bunch of journalists how these false positives roll over from one vendor’s engine to the next. Of course, being journalists, they only took home the message “AV vendors copy each other, and mostly us”, as the articles covering the event show. Even though the objective of the experiment was put under scrutiny, the fact remains that this is an industry-wide problem and no single vendor is immune to its effects, not even Kaspersky, as we will see.
As some of the regular readers of this blog will probably remember, in March 2010 we published a “PandaCloudTestFile.exe” binary file to test the connectivity of Panda products with their cloud-scanning component, Collective Intelligence. “PandaCloudTestFile.exe” is a completely harmless file that does nothing more than make the Panda product query the cloud. Our cloud-scanning servers have been manually configured to detect this file as malicious, with the sole objective of showing end users that the cloud-scanning component of their product is working correctly.
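For readers curious about the mechanics, the flow is roughly this: the product hashes the file locally and asks the cloud for a verdict, and for the test file the cloud simply answers “detected”. Below is a minimal, hypothetical Python sketch of such a lookup; the endpoint URL, parameter names and verdict values are invented for illustration and are not Panda’s actual Collective Intelligence protocol.

```python
import hashlib
import json
import urllib.request

# Hypothetical cloud endpoint, invented for this sketch; not the real
# Collective Intelligence URL or protocol.
CLOUD_URL = "https://cloud.example.com/v1/verdict"


def sha256_of(path: str) -> str:
    """Hash the file locally; only the hash is sent to the cloud."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def cloud_verdict(path: str) -> str:
    """Ask the cloud for a verdict on the file's hash.

    A test file like PandaCloudTestFile.exe would come back as "detected"
    only because the cloud servers are configured that way, which proves
    the client-to-cloud connection works.
    """
    payload = json.dumps({"sha256": sha256_of(path)}).encode()
    req = urllib.request.Request(
        CLOUD_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp).get("verdict", "unknown")
```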
Initially this file was only detected by Panda as Trj/CI.A (a Collective Intelligence detection) and by Symantec’s Insight, which flags it simply because it is not a very common file (whether reputation alone is grounds enough to call a file “suspicious” is a debate for a future post).
Panda 10.0.2.2 2010.03.10 Trj/CI.A
Symantec 20091.2.0.41 2010.03.11 Suspicious.Insight
A few days later came the first problematic detection, this time from Kaspersky, which detected “PandaCloudTestFile.exe” with a signature, specifically naming it a Bredolab backdoor. I call this detection problematic because it is clearly neither a suspicious-file heuristic nor a reputation verdict. It is also clearly an incorrect detection, as the file is not related in any way to Bredolab. We will soon see why this Kaspersky signature matters.
Kaspersky 7.0.0.125 2010.03.20 Backdoor.Win32.Bredolab.djl
In the next few days some other AV scanners started detecting it as well, in many cases with the exact same Bredolab name.
McAfee+Artemis 5930 2010.03.24 Artemis!E01A57998BC1
Fortinet 4.0.14.0 2010.03.26 W32/Bredolab.DJL!tr.bdr
TheHacker 6.5.2.0.245 2010.03.26 Backdoor/Bredolab.dmb
Antiy-AVL 2.0.3.7 2010.03.31 Backdoor/Win32.Bredolab.gen
Jiangmin 13.0.900 2010.03.31 Backdoor/Bredolab.bmr
VBA32 3.12.12.4 2010.03.31 Backdoor.Win32.Bredolab.dmb
In the month that followed (April 2010) a bunch of new engines started detecting it, mostly under the Bredolab name we are now familiar with, although some new names started appearing as well (BackDoor.Generic, Monder, Trojan.Generic, etc.).
a-squared 4.5.0.50 2010.04.05 Trojan.Win32.Bredolab!IK
AhnLab-V3 2010.04.30.00 2010.04.30 Backdoor/Win32.Bredolab
AVG 9.0.0.787 2010.04.30 BackDoor.Generic12.BHAD
Ikarus T3.1.1.80.0 2010.04.05 Trojan.Win32.Bredolab
CAT-QuickHeal 10.00 2010.04.12 Backdoor.Bredolab.djl
TrendMicro 9.120.0.1004 2010.04.03 TROJ_MONDER.AET
Sunbelt 6203 2010.04.21 Trojan.Win32.Generic!BT
VBA32 3.12.12.4 2010.04.02 Backdoor.Win32.Bredolab.dmb
VirusBuster 5.0.27.0 2010.04.17 Backdoor.Bredolab.BLU
And to top it all off, during the current month of May 2010 the following engines started detecting “PandaCloudTestFile.exe” as well. Here we can even see a “suspicious” detection, probably the only one of the lot that makes any sense.
Authentium 5.2.0.5 2010.05.15 W32/Backdoor2.GXIM
F-Prot 4.5.1.85 2010.05.15 W32/Backdoor2.GXIM
McAfee 5.400.0.1158 2010.05.05 Bredolab!j
McAfee-GW-Edition 2010.1 2010.05.05 Bredolab!j
Norman 6.04.12 2010.05.13 W32/Suspicious_Gen3.CUGF
PCTools 7.0.3.5 2010.05.14 Backdoor.Bredolab
TrendMicro-HouseCall 9.120.0.1004 2010.05.05 TROJ_MONDER.AET
ViRobot 2010.5.4.2303 2010.05.05 Backdoor.Win32.Bredolab.40960.K
It is worth noting that consumer products include other technologies, such as whitelisting and digital certificate checks, which could prevent the file from being flagged on the consumer endpoint. Still, the fact that a signature exists for such a file is a good indicator that it will probably be detected on the endpoint.
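To illustrate why a bad signature does not always translate into a visible false positive, here is a minimal, hypothetical sketch of the kind of decision order an endpoint product might apply; the helper functions and the exact precedence are assumptions for illustration, not any vendor’s actual logic.

```python
# Hypothetical sketch of an endpoint decision order; the helpers are
# stand-ins, not any vendor's real implementation.

TRUSTED_HASHES = set()  # placeholder whitelist of known-good SHA-256 hashes


def is_whitelisted(sha256: str) -> bool:
    return sha256 in TRUSTED_HASHES


def has_trusted_certificate(path: str) -> bool:
    # Real products verify code-signing certificate chains here;
    # stubbed out for the sketch.
    return False


def matches_signature(path: str) -> bool:
    # Stand-in for the signature engine that (incorrectly) flags the file.
    return True


def endpoint_verdict(path: str, sha256: str) -> str:
    """Local exoneration checks run before the signature verdict, so a
    whitelisted or properly signed file may never surface the bad signature."""
    if is_whitelisted(sha256):
        return "clean"
    if has_trusted_certificate(path):
        return "clean"
    if matches_signature(path):
        return "malware"
    return "unknown"
```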
So why am I writing about all this? First of all, to reiterate the point I have made in the past: automated systems have to be maintained, monitored, tuned and improved so that they perform more in-depth analysis, instead of relying so much on “rumorology”.
Secondly, to show that this is an industry-wide problem that results from having to deal with tens of thousands of new malware variants per day, and that no vendor is immune to it. What matters at the end of the day is that the automated systems are supervised and constantly improved to avoid false positives.
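As a toy illustration of what that supervision could mean in practice, the sketch below lets third-party detections queue a sample for analysis but never publish a signature without independent, in-house evidence; the field names and thresholds are invented for the example, not any lab’s real policy.

```python
from dataclasses import dataclass


@dataclass
class SampleEvidence:
    other_engine_hits: int    # how many third-party engines already flag it
    sandbox_malicious: bool   # did our own dynamic analysis see bad behaviour?
    static_score: float       # 0.0 to 1.0 from our own static classifier
    on_known_good_list: bool  # hash seen in trusted software corpora


def should_analyze(e: SampleEvidence) -> bool:
    """Other vendors' verdicts are a fine reason to *look* at a sample..."""
    return e.other_engine_hits >= 3


def should_publish_signature(e: SampleEvidence) -> bool:
    """...but never, on their own, a reason to *detect* it."""
    if e.on_known_good_list:
        return False  # never contradict our own known-good data
    # Require independent evidence from our own analysis pipeline.
    return e.sandbox_malicious or e.static_score >= 0.9
```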
I can certainly understand why vendors point out that their signatures get “rolled over” to other AV engines, but those same vendors should also take care not to become the source of these false-positive rumors in the first place.
UPDATE June 3rd, 2010: Reading Larry’s post over at securitywatch, it seems Kaspersky reacted quickly and removed its signature for the PandaCloudTestFile.exe file. Thanks Larry & Kaspersky!
5 comments
Pedro, good blog. I noticed the same a couple of years ago (when working for ESET) and the Skynet detections were a daily plague. Kaspersky added UPACK itself as a Skynet worm. It took 1-2 days before every vendor detected UPACK (the packer, completely harmless) as a Skynet worm, and we got “spammed” with a harmless packer and shouted at because we didn’t detect it. LOL.
Mike
Hey Mike, good to see you in this little corner of the net 🙂
Good example too, exact same problem!
Pedro, good article, but would you also consider that sample sharing contributes to this? There is a “trust” factor: vendors assume that shared samples have gone through checks and verification BEFORE being shared, so when other vendors obtain these samples they simply add them blindly. This would explain the trickle effect.
Although I don’t think this was the case with the FP described in the post, I do think you are right: the trust factor in certain sharing situations may also cause these types of problems. I can think of at least one case where this happened to us with a batch of files which were all supposedly Confickers (and one turned out not to be). It’s probable that it has happened at other labs as well.
I’m so shocked this would happen so easily. It seems other AV vendors are just relying on each other’s detections,
not even analyzing the files by hand.
They’re just copycats; this says a lot about the vendors.
If it was this easy to create a false positive, there must be millions of false positives out there, and these vendors must have thousands of false positives in their AV databases.
Today I rescanned it on VirusTotal and none of them fixed their false positives.
18 vendors detect this as malware, look at the report:
http://www.virustotal.com/analisis/21b50f091552cbd73dd46ab1e03f33b888a41104feb488171a621eb88920702d-1276890030
My opinion: don’t trust just one vendor.
Keep safe online.