We see in the mass media every day that software is vulnerable and that this is bad. But few know what happens behind the scenes until the news gets out.
There are two ways to disclose a vulnerability: the most common one is “full disclosure”, but there is also “limited disclosure”.
A full disclosure means that the details of a security vulnerability are released to the general public, including how to detect and exploit it. [Wikipedia]
A limited disclosure is an alternative approach in which the full details of the vulnerability are provided to a restricted community of developers and vendors, while the public is only informed of a potential security issue. Advocates of this approach also claim the term “responsible disclosure” [Wikipedia].
The hope and theory behind full disclosure is that once a vulnerability is released to the public, the affected company will issue a fix immediately in order to limit the damage to its image, among other things. The intention is obviously good, because it ideally results in better security for the users. But, as so often in real life, the theory doesn’t always work in practice. On one hand, the pressure to release the fix improves security, because the window during which the vulnerability is open to attack is reduced. On the other hand, the time to react is often so short that companies intentionally reduce the scope of the fix to a minimum so that they can release it earlier. Those who develop software know that few things work well when software is written in a hurry and under a lot of pressure. As you can expect, these rushed fixes introduce even more bugs into the software.
In the case of computer vulnerabilities, disclosure is often achieved via mailing lists such as the Full Disclosure mailing list, by publishing in online magazines, on Facebook or Twitter, or by other means.
There are many companies out there whose business model is buying and selling vulnerabilities. Even though this sounds like a decent job, you will think differently if you are the one being offered the vulnerability for sale. That is, your software is vulnerable, and your users will potentially become victims of the vulnerability once it becomes public. If it ever becomes public. Very often, but not always, these companies arrive with an offer: pay, or we disclose that your software is vulnerable. Even if selling to the highest bidder makes economic sense, I have never heard of or seen any proof that these companies sell vulnerabilities to cybercriminals. According to various sources (see Sources at the end of the article), the trend is indeed to sell to the highest bidder, but that bidder tends to be a government rather than, as we might expect, a cybercriminal.
In order to avoid such blackmail, big companies that can afford it regularly organize hacking sessions for their own software. They pay between $100 and $10,000 per vulnerability for a zero-day exploit. This means that many people, especially IT students, can make a decent income out of this.
These companies have effectively created a market for finding vulnerabilities. However, as in any free market, there are many vendors with various offers and even more buyers.
This is how companies that buy vulnerabilities for much more money appeared. Companies like ZeroDayInitiative, iDefense, SecuriTeam, VUPEN, FinFisher, HackingTeam and others pay up to $250,000 for zero-day exploits in various products.
Handling vulnerabilities can be a very risky job, both legally and professionally. There have been news reports about people who were sued or threatened. Things become much more complicated when a government gets involved. According to the same sources, governments are very interested in this business because they can use a zero-day exploit to get into the computers of various suspects. Some say that governments, directly or through military organizations they control, are even trying to use such vulnerabilities to create cyber-weapons (like Stuxnet).
As security professionals, it is our everyday work to detect, report, patch and sometimes fix such problems. All is simple if the problem is in your own software, but most of the time it is not.
Should we find such vulnerabilities in other software, it is our duty as software security professionals to report them. But we should do it properly, considering all ethical and legal responsibilities.
It is imperative that vulnerabilities are reported first only to the vendor of the affected software, and that the vendor is given enough time to properly fix the problems. Do not forget that merely reporting the problem is usually not enough: the vendor might need additional details, code, and scenarios in which the bug is reproducible. In some cases, it may turn out that fixing the bug requires a major change to the software’s architecture. Be prepared to negotiate compromises. It sounds complicated, but it becomes easier if you put the end user first and negotiate for his or her security and usability.