Two stories about how two responsible security disclosures failed
This article is a departure from the heavily technical posts I have published so far. It is more of a philosophical and moral meandering about what one feels is the right thing to do in the spirit of a professional commitment, and about the obstacles that threaten one's financial or professional reputation.
Recently, I discovered two previously unknown vulnerabilities in the products of two reputable vendors, both focused on highly specialised areas of the telecom business. Their lack of proper engagement is the reason I will not detail their names, their products, or the intricacies of the vulnerabilities.
In both cases I followed the guidelines of responsible security disclosure - a procedure in which one informs the vendor of the specifics of the vulnerabilities in their product and refrains from publishing the discovery until the vendor agrees it is safe to do so. Typically, one would publish the research once the vendor has developed and released a patch.
That is the theory. In reality, sadly, I share a good deal of the frustration common to many security researchers today. My case was not much different from the frustrating experiences of other researchers who tried to communicate weaknesses to vendors and get them patched. In a number of cases, researchers manage to agree on an acceptable way to publish their findings while protecting the reputation of the vulnerable vendor. In a good number of others, however, they hit a wall of silence, vague communication, or even threats of legal action if they attempt to publish anything.
The focus of my deliberations was how to spare myself and my company from potential legal action by vendors who failed to engage, while still fulfilling the professional duty of improving the cyber security around us by publishing the details of the vulnerabilities. It turns out that this can be tricky.
Why is the picture not black and white?
The main reason is contractual restrictions, mostly Non-Disclosure Agreements: between the researcher and their own company, between that company and the customer, between the customer and the vendor... You get the idea - a complex chain of legal obligations that, once violated, may trigger an avalanche of adverse events.
In my specific case, the vendors' products are available only to a restricted number of clients, primarily in the telecom industry. I was engaged by a telecom provider to pentest those devices, and as such my name, my findings and all the details were known to the telecom operator, my own company and the vendor. Moreover, the products I was testing are not available for public purchase or download, which means one cannot experiment with them outside of the contractual relationships. On top of that, I had an NDA with both my company and the telecom, my company had an NDA with the telecom, and the telecom had an NDA with the vendor...
When my attempts to communicate with the vendors failed, after I had provided them with the detailed analysis, the attack steps and an exploit/proof of concept, I entertained the thought of publishing the analysis (but not the exploit). Before going down that road, I consulted lawyers and did a bit of research on security disclosure to avoid an adverse course of action. Long story short, it turned out that the NDAs prevented me from publishing the research unless the vendor gave me explicit permission to do so.
Many researchers have decided at this stage to simply go public in order to put pressure on the vendors and speed up the patching. Many have succeeded with no adverse consequences. Some, however, have ended up facing legal cases initiated by vendors, or have run into problems with their own companies or clients.
Although I understand the desire of companies and vendors to protect their reputation and business compliance by restricting researchers from violating disclosure policies, I also understand that we, as security professionals, are tasked with making the cyber reality as secure as possible. This is why we tend to make the conscious decision to put pressure on vendors to provoke remediation, even (or especially) when the vendor does not engage. The problem is that moral and business imperatives clash in such a scenario.
You face two choices: risk, and perhaps jeopardise, your career, your current job and your financial stability in exchange for a mitigation, or back off and leave the situation as bad as it is.
Neither option is desirable; it is a kind of stalemate. After long and careful consideration I decided to back off in these two cases, "betraying" the profession (by not going public) but protecting my current professional status. I comfort myself with the fact that I did provide a detailed analysis of the attack and a proof-of-concept exploit to both vendors, so they can replicate and patch the improper handling of the application traffic. Hopefully…
Still, I cannot shake the feeling that these two vendors will almost certainly keep the same attitude towards security vulnerabilities - they were not pressured to publish a CVE, and they did not face a public exploit against their products. Such a position may instill a long-term attitude that they can continue to silently patch (or not patch at all) without notifying customers of weaknesses that may compromise any of their current or future clients.