Why it matters: Project Zero is Google's formidable security research team, famous for three things: finding some of the worst vulnerabilities, finding new vulnerabilities constantly, and giving companies just 90 days to ship a fix before the bug is publicly disclosed. Admired and resented in roughly equal measure by the security community, the team recently broke its silence to defend its policies against criticism and explain what it is really trying to achieve.
All major technology companies, from Microsoft to Apple to Intel, have received a bug report from Project Zero containing the following statement: "This bug is subject to a 90-day disclosure deadline. After 90 days elapse or a patch has been made broadly available (whichever is earlier), the bug report will become visible to the public." From that moment, a company can choose to fix the bug with Project Zero's help, on its own, or not at all, in which case the report is published once the deadline passes.
Each bug report contains just about everything Project Zero has collected about the vulnerability, from how it was first discovered to proof-of-concept code that exploits it to demonstrate the problem.
As of July 30, Project Zero had published bug reports for 1,585 fixed vulnerabilities and 66 unfixed ones. Of the 1,585, 1,411 were disclosed within the 90-day deadline, and a further 174 within the 14-day grace period Project Zero grants when it is confident a company is close to shipping a fix. Only two went beyond that: Spectre & Meltdown, and task_t, both of which, when exploited, give a program access to the operating system's deepest secrets.
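The timeline mechanics described above can be sketched as a small function. This is only an illustration of the policy's logic, not Project Zero's actual tooling; the function name and parameters are invented for the example:

```python
from datetime import date, timedelta
from typing import Optional

DISCLOSURE_WINDOW = timedelta(days=90)  # standard 90-day deadline
GRACE_PERIOD = timedelta(days=14)       # optional extension when a fix is imminent

def disclosure_date(reported: date,
                    patch_released: Optional[date] = None,
                    grace_granted: bool = False) -> date:
    """Date a report goes public: patch date or deadline, whichever is earlier."""
    deadline = reported + DISCLOSURE_WINDOW
    if grace_granted:
        deadline += GRACE_PERIOD
    if patch_released is not None and patch_released < deadline:
        return patch_released  # patch shipped early: report opens with the patch
    return deadline            # otherwise it opens when the deadline passes

# A bug reported on 2019-01-01 with no patch goes public on 2019-04-01.
print(disclosure_date(date(2019, 1, 1)))
```

Note that an early patch accelerates disclosure rather than delaying it; the "whichever is earlier" clause is what removes any incentive to sit on a fix.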
Project Zero acknowledges that publishing bug reports before a fix exists is somewhat dangerous, but that is the point: it pressures companies into fixing bugs quickly, which, the team argues, they would not do if they could expect bug reports to stay hidden.
"If you assume that only the vendor and the reporter are aware of a vulnerability, then the problem can plausibly be fixed without urgency. However, we have growing evidence that attackers find (or acquire) many of the same vulnerabilities that defensive security researchers report. We can't know for certain when an attacker has discovered a security bug we report, but we do know that it happens routinely enough to factor into our disclosure policy.
"Essentially, disclosure deadlines are a way for security researchers to set expectations and provide a clear incentive for vendors and open-source projects to improve their vulnerability remediation efforts. We try to calibrate our disclosure timeframes to be ambitious, fair, and realistically achievable."
Project Zero has evidence to back this up. One study analyzed more than 4,300 vulnerabilities and found that 15% to 20% of them are independently discovered at least twice within a year. For Android, for instance, 14% of vulnerabilities are rediscovered within 60 days and 20% within 90; for Chrome, 13% are rediscovered within 60 days. This suggests that even if a security researcher is ahead of the curve, there is a reasonable chance an attacker will find the same bug soon after.
But isn't it dangerous to post a bug report before patching?
"The answer is counterintuitive at first: disclosing a small number of unpatched vulnerabilities doesn't meaningfully increase or decrease attacker capability. Our 'deadline exceeded' disclosures have a neutral, short-term effect on attacker capability.
"We certainly know there are groups and individuals who wait for public exploits to harm users (exploit kit authors, for example), but we also know that the cost of turning a typical Project Zero vulnerability report into a practical real-world attack is far from trivial."
Project Zero does not publish step-by-step attack guides; it publishes what it describes as only one part of an exploit chain. In theory, attackers need significant resources and skill to turn such a vulnerability into a reliable exploit, and Project Zero believes that attackers capable of doing so could likely have found the bug on their own even without the disclosure. An attacker may also simply be too slow to capitalize: according to a 2017 study, the average time from vulnerability to "fully functional exploit" is 22 days.
That is one criticism, and a big one, but most companies do manage to patch within 90 days. The second criticism many researchers raise is Project Zero's policy of publishing full bug reports after patches are released, mainly because patches tend to be imperfect and the same vulnerability can resurface elsewhere. Project Zero believes this practice benefits defenders, who can use it to better understand the vulnerability, and matters little to attackers, who are able to reverse-engineer exploits from the patch anyway.
"Attackers have a clear incentive to spend time analyzing security patches in order to learn about vulnerabilities (both through source code review and binary reverse engineering), and they will quickly establish the full details even if vendors and researchers try to withhold technical data.
"Because the usefulness of vulnerability information is so different for defenders versus attackers, we don't expect defenders to have to perform the same depth of analysis as attackers.
"The information we release can generally be used by defenders to immediately improve their defenses, to test the accuracy of bug fixes, and to make informed decisions about patch adoption or short-term mitigations."
Sometimes, as in war, risks must be taken to achieve overall success. And make no mistake: the battle between security researchers and attackers is real, with serious real-life consequences. So far, Project Zero has operated without significant fallout from its aggressive policies, and the team shows no sign of changing course unless something goes drastically wrong. Hopefully it never does.