New weaponized exploit code for old systems
21/Apr 2011
Why? Because it works.
Being something that resembles a pathetic excuse for an administrator, I’ve noticed a bit of downtime in the onslaught of root Linux kernel exploits leaked over the past six months. The (re)release of CVE-2010-3301 that Ben Hawkes (re)published resulted in an angry publication from Ac1dB1tCh3z of their fully weaponized, and thoroughly badass, implementation of CVE-2010-3081. ABftw.c (the code is full of lulz and a great read) was published to the Full-Disclosure mailing list we have all grown to s/love/hate/, and within hours of the post companies everywhere were dealing with indiscreetly rooted servers and mass website defacements. The exploit covered a broad range of Linux kernels. The big winner is that most enterprises running GNU/Linux servers deploy Red Hat/CentOS distributions. The vulnerability was mistakenly overlooked and backported into the kernels of these major distributions, allowing attackers to maintain a working root exploit for years. Yes. Whoops. I was not surprised that a backported bug existed across a grip of GNU/Linux distributions, though the publicly stated lifespan was impressive. The chaos development model strikes again.
This is basically the most frightening thing that can happen to anyone who is responsible for, partially responsible for, mildly responsible for, has written policy for, handles customer service around, deals with, talks about, thinks about, looks at, or breathes on enterprise systems security in any form or fashion. It’s also worth noting that the avalanche of root-level exploit code for known and unknown vulnerabilities in the modern Linux kernel stayed in motion for the rest of the year. Open the floodgates. Get ready. Get owned.
The next major hack was the compromise of a ProFTPD mirror server, which then served backdoored versions of the software package for about a week. The backdoored version included code that would ping a server in Saudi Arabia to announce that it had been installed. The attacker could then open a TCP connection to the FTP server, send the string “HELP ACIDBITCHEZ”, and be dropped into a root shell on the server. While no connection between the authors of this backdoor and the previously mentioned Linux kernel exploit was ever claimed, this hack was yet again terrifying, as it compromised one of the three most widely deployed FTP server packages in the world.
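If you pulled ProFTPD down during that window, the trigger described above is at least cheap to probe for. Here is a minimal sketch in Python, assuming a clean server answers HELP with a normal numeric FTP reply while the backdoored build reportedly does not; the hostname and timeout below are placeholders, not gospel:

```python
import socket

def check_proftpd_backdoor(host, port=21, timeout=5):
    """Probe an FTP server with the reported backdoor trigger.

    A clean ProFTPD build should answer any HELP command with a normal
    numeric FTP reply (214, 502, ...). The backdoored build described
    above reportedly drops into a shell instead, so an empty or
    non-numeric response after the trigger is suspicious.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.recv(1024).decode(errors="replace")   # e.g. "220 ProFTPD ..."
        s.sendall(b"HELP ACIDBITCHEZ\r\n")               # reported trigger string
        try:
            reply = s.recv(1024).decode(errors="replace")
        except socket.timeout:
            reply = ""
    # A well-behaved server replies with a three-digit code; anything else
    # warrants pulling the box offline and inspecting it by hand.
    suspicious = not reply[:3].isdigit()
    return banner.strip(), reply.strip(), suspicious

if __name__ == "__main__":
    banner, reply, suspicious = check_proftpd_backdoor("ftp.example.com")
    print(banner)
    print(reply or "<no reply>")
    print("SUSPICIOUS" if suspicious else "looks normal")
```

A negative result from something this crude proves nothing, of course; if you installed from that mirror during the window, rebuild from a verified source regardless.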
Servers for the Apache project had also been compromised. Again, a frightening thing, as the Apache httpd daemon kindly serves the website you are currently viewing, along with more than 60% of the websites on the internet. Yes. The whole thing. Luckily the source code was not tampered with and backdoored, unlike the UnrealIRCd project, which was not so lucky: one of its software repository mirrors was discovered to have been hosting a backdoored version of the software for more than six months.
While it is no mystery that exploits like the previously mentioned ABftw.c are actively being used “in the wild”, they are rather tightly guarded secrets, used against a smaller scope of targets (actually targeted systems versus drive-by hacking) than anything made publicly available. The days of an endless supply of remote root exploits for most of the software running the internet are over. Using grep to locate easily exploitable programming errors in the most common open source software products went out with the late 1990s. Killing bugs through disclosure (publicity usually equals a patch) is now frowned upon by those who use them, where it once earned props for finding and/or coding a working exploit.
Unfortunately for the defenders, motivated attackers who sit on kernel root exploits for literally years without disclosure are more than smart enough not to leak them, leave them lying around post-exploitation, or execute them on systems believed to be documenting the attack through tcpdump, Snort, or some sort of honeypot that you squeezed management into letting you plug into your network for fun. Honeypots rarely do anything more than invite additional attacks rather than mitigate them, not to mention the time and resources wasted analyzing what will, 99.9999% of the time, turn out to be known exploits and attack vectors.
To add insult to injury, a lot of exploit code is written by reverse engineering the patches vendors release. Diffing the code before and after the patch leads straight to the exact location of the flaw, as well as the memory locations involved: basically a blueprint for an exploit writer. It isn’t an exploit yet, but you know how to implement it. Disclosed vulnerabilities only stop being a threat once a safeguard is actually in place, which people are rather slow to get around to, which is why risk metrics remain high. Patches to fix bugs don’t work unless the patch is actually applied.
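To make the idea concrete, here is a rough sketch, assuming you have the pre- and post-patch source trees sitting on disk; the kernel paths at the bottom are hypothetical, and real exploit writers typically do this at the binary level with far better tooling than difflib:

```python
import difflib
from pathlib import Path

def diff_patched_tree(old_root, new_root):
    """Show what a vendor patch actually changed, file by file.

    Comparing a pre-patch and post-patch source tree narrows the search
    for the underlying flaw to a handful of hunks -- the "blueprint"
    described above.
    """
    old_root, new_root = Path(old_root), Path(new_root)
    for old_file in old_root.rglob("*.c"):
        new_file = new_root / old_file.relative_to(old_root)
        if not new_file.exists():
            continue
        old_lines = old_file.read_text(errors="replace").splitlines(keepends=True)
        new_lines = new_file.read_text(errors="replace").splitlines(keepends=True)
        diff = difflib.unified_diff(old_lines, new_lines,
                                    fromfile=str(old_file),
                                    tofile=str(new_file), n=3)
        hunk = "".join(diff)
        if hunk:
            print(hunk)

if __name__ == "__main__":
    # Hypothetical paths to an unpatched and a patched kernel source tree.
    diff_patched_tree("linux-2.6.18-194.el5", "linux-2.6.18-194.17.1.el5")
```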
If you consistently review the usual channels that publish exploit code, you know that the majority of what is published falls somewhere between “outdated” and “just plain old” software. I’m obviously biased, so the flags that inspired this rant were two recent posts for old FreeBSD systems: a kingcope hack for FreeBSD 5.4-RELEASE, shortly followed by an exploit for FreeBSD 6.4-RELEASE. To put it into perspective, 5.4 hit its EOL in 2006. The 6.4 branch didn’t time out until last November, but that is still months ago, for a release that originally shipped back in November 2008. Both examples are local exploits, but we all know getting local access is usually the easy part: enterprises don’t like to fix what isn’t broken, and customers enjoy running osCommerce, Joomla, and WordPress installations from five years ago that are riddled with file inclusion vulnerabilities. Getting some flavor of unauthorized access on an unmanaged server is trivial, and it can lead to privilege escalation as long as you’re willing to research every piece of installed software. Statistics are on the attacker’s side; there is bound to be an exploitable flaw.
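If nothing else, knowing which of your boxes have already timed out is a cheap first step. Here is a minimal sketch of that kind of audit, assuming a hand-maintained EOL table and a trivial (hostname, release) inventory; the entries and dates below are illustrative, so verify them against the vendors’ own published support schedules:

```python
from datetime import date

# Illustrative EOL table -- double-check these dates against the
# vendor's published support schedule before relying on them.
EOL_DATES = {
    "FreeBSD 5.4-RELEASE": date(2006, 10, 31),
    "FreeBSD 6.4-RELEASE": date(2010, 11, 30),
    "Windows XP SP3":      date(2014, 4, 8),
}

def audit_inventory(inventory, today=None):
    """Flag hosts whose OS release is past its end-of-life date.

    `inventory` is a list of (hostname, release) pairs, e.g. pulled from
    a CMDB export or a flat file; unknown releases get flagged for
    manual review rather than silently passed.
    """
    today = today or date.today()
    for host, release in inventory:
        eol = EOL_DATES.get(release)
        if eol is None:
            print(f"{host}: {release} -- no EOL data, review manually")
        elif eol < today:
            print(f"{host}: {release} -- EOL since {eol}, schedule decommission")
        else:
            print(f"{host}: {release} -- supported until {eol}")

if __name__ == "__main__":
    audit_inventory([
        ("web01", "FreeBSD 6.4-RELEASE"),
        ("legacy-db", "FreeBSD 5.4-RELEASE"),
        ("kiosk07", "Windows XP SP3"),
    ])
```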
New exploit code is constantly written for old known vulnerabilities and outdated systems, software, and protocols because these targets make up the bulk of the devices connected to the internet: from outdated Windows XP machines, which still rule the Windows market, to old Unix servers installed as the backbone of Fortune 500 enterprises. Not only does an administrator have to fight the battle to end-of-life the device, they also have to come up with a modern solution to replace it. Yet another hard sell, so the core of the company sits there, being consistently abused, because it has been kept up so long that it has outlived the tenure of the current information technology employees. That says a lot for the solid mainframes IBM used to produce, some of which have been in production since 1991, and speaks volumes about how hard it can be to implement change.
Without an actual test network that can be stress tested (and even that never really compares to real-world abuse), patching is in its own right an inherent risk to stability. Unfortunately, a process that should be standard practice from day one is often neglected, outdated, or flat-out unimplemented. Needless to say, we are swimming in a sea of old, broken software and services.
Money, fear, uncertainty, doubt, lack of understanding of best practices, and lack of the ability to spur change. The main problem across the enterprise is that security does not directly make money (not to be confused with preventing the loss of money, yes, risk metrics), so it’s a hard sell to get management to allocate resources for information security projects. Resources to clean up a public-facing mess that hurts the company are much easier to obtain than anything that would yield a preventive measure, otherwise known as change for the better. Looking out for the best interests of an employer is not an easy task. Investments that do not directly yield profits usually get shot down until after an incident. Too little, too late, and the final outcome is a pure clean-up effort, or diligent due process to preserve evidence for legal prosecution and/or defense.
Here we stand. In the gutter, cleaning up the same mess, dealing with the same problems, running into the same wall, hitting our heads against the desk, and spending late nights and long hours fighting to find yet another quick fix for something that should instead be torn down.
Moral of the story? Stop running outdated software and implement a plan to decommission old systems. Revise policy. Don’t give up on standing up to say, “Hey, this is stupid, we should stop doing this.” Fight until you reach a point where you can spend your time and available resources addressing new threats instead of consistently fighting old vulnerabilities that should no longer exist in your organization. Bottom line: it is never going to go away. Information technology is a constant evolution, and you cannot afford to let any single piece of the system date or degrade to the point where it is unmanageable. Otherwise you will not get anywhere, your job will become increasingly difficult, and you will eventually get fired even though you filed all your reports and provided all the relevant information.
But who am I? I don’t have authority. I don’t have the power to change the broad scope of problems.
Do what it takes or accept being the scapegoat.