Who do we trust and why do we trust them?

Certificate authorities, and because they're currently our best option, despite being a poor one.

Everyone who uses the internet relies on certificate authorities to authenticate the somewhat important sites they visit. This is done with a public key cryptography model, combined with a more-or-less trusted third party acting as an escrow to delegate, maintain, and revoke cryptographic certificates. The commonly used browsers ship with a list of approved certificate authorities, which is not standardized, but chosen by the software vendor.

For simplicity, I’ll provide the following example.

When you visit your bank's website over https, the server provides your web browser with a cryptographic certificate. Your web browser does some magic mathematical wizardry (similar to magnets) to confirm the certificate has been signed by a trusted certificate authority. If it checks out, your browser accepts that it's talking to who it should be talking to, and starts negotiating which cipher to use for communication.

In the event the mathematical wizardry nobody understands (similar to magnets) produces an incorrect signature, your browser should kindly present you with a pop-up window asking you to review, then accept or deny the ssl certificate. Unfortunately nobody does this, as people generally don't understand what it is, why it's there, or how it can help them. Those who do understand are lazy and almost always accept it anyway.
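If you want to see what the wizardry is actually chewing on, the openssl command line tool will show you. A minimal sketch (all file names and the CN are throwaway assumptions): generate a self-signed certificate, inspect the fields a browser checks, and watch verification fail because no trusted CA signed it.

```shell
# Generate a throwaway self-signed certificate and key (stand-ins for real ones)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -days 1 -subj "/CN=demo.test" 2>/dev/null

# Inspect the fields a browser checks: who it claims to be, who signed it, validity window
openssl x509 -in /tmp/demo.crt -noout -subject -issuer -dates

# Verification fails: the issuer is not in any trusted CA list
openssl verify /tmp/demo.crt
```

The failed verify at the end is exactly the condition that triggers the pop-up window nobody reads.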

The authentication provided by ssl certificates and CA vendors is in place so you know you're providing your banking credentials to your bank's website, and not to an impostor who will sell them, or use them to transfer your balance to someone else's account, then on to the local western union. By ignoring a warning from something you typically trust, you are inviting the possibility of a classic man-in-the-middle attack, and the unfortunate ordeal of getting your bank to reverse the fraudulent transaction(s).

There have been several recent instances of certificate authorities erroneously selling certificates, or issuing them without their own knowledge, for fortune 500 companies to customers who are not affiliated with said fortune 500 company. The plus side is that the public key infrastructure model allows certificates to be revoked, so we are not stuck with a random entity who owns a short string of ascii characters that cryptographically says they are google.com. The down side is we are all vulnerable until the breach is discovered, as we (the whole internet) rely on these vendors for authentication.

There are reports that the past few discovered breaches at major CA vendors were possibly funded by the Iranian government. There is no doubt governments desire to monitor their citizens and spy on the world abroad. A valid ssl certificate for a popular domain commands a high price on the black market due to the profit it can generate through exploitation. Particular governments can and will provide funds to obtain illicit certificates to collect intelligence. I'm sure it's just another drop in the bucket of a country's defense budget.

There is a simple explanation of why we are all royally screwed.

The inherent complication is that the vendors our software makers have chosen to include as valid certificate authorities are literally just vendors. There is no magic behind the curtain. Despite being corporate entities built on selling a security model and its associated products and services, they are fundamentally just as flawed, and have to mitigate the same broad range of attack vectors and vulnerabilities every corporation faces on a daily basis. If we can't keep our own infrastructure secure, how can we feasibly expect someone else to do it for us?

The solution? Currently, none. The problem with any trust implementation is that something has to be trusted, and anything that has to be trusted is susceptible to exploitation, causing the entire model to crumble. You have two choices: rely on a CA for authentication, or rely on yourself. If you use the current model, you are faced with the previously stated issues. On the other hand, we could make the certificate authority an "in house" verification process. You could provide the url hosting the other half of the certificate in the same manner as domain name servers, though you would still be plagued by the same problems. Plus, now you're putting all your eggs in one basket, which helps a compromise go undiscovered. Bottom line: you can't win.
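For the curious, rolling the "in house" option by hand is only a few openssl invocations. A sketch under assumed names (a throwaway root and one internal server certificate): you become the trust anchor, and verification succeeds only for clients told to trust your root.

```shell
# Create your own root CA (key + self-signed cert) -- you are now the trust anchor
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/myca.key \
    -out /tmp/myca.crt -days 1 -subj "/CN=in-house-root" 2>/dev/null

# Create a key and signing request for an internal server (assumed name)
openssl req -newkey rsa:2048 -nodes -keyout /tmp/server.key \
    -out /tmp/server.csr -subj "/CN=server.internal" 2>/dev/null

# Sign the request with your CA, producing the server certificate
openssl x509 -req -in /tmp/server.csr -CA /tmp/myca.crt -CAkey /tmp/myca.key \
    -CAcreateserial -out /tmp/server.crt -days 1 2>/dev/null

# Verification succeeds -- but only against your own root
openssl verify -CAfile /tmp/myca.crt /tmp/server.crt
```

Of course, now /tmp/myca.key is the single egg basket: whoever steals it can mint certificates your clients will happily accept.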

The chain is only as strong as its weakest link. The weakest link we currently rely on is just as weak as the rest of us.

Posted: September 15th, 2011

The “Unbreakable” Oracle, breaking everything.

A brand that lingers from the late 1970s to the present, it joins the ranks of IBM and Apple. The (Un)breakable Oracle now saddles an unfair share of the community. With their aggressive campaign and acquisition of Sun Microsystems, they control software which sends your tweets and makes your facebook and google+ posts. They own the rights to their legacy proprietary db products, mysql, innodb, java, openoffice, solaris, virtualbox, and now ksplice(1).

What is the big deal?

Oracle owns java. Java is a programming language designed so applications can be executed independent of platform. This is accomplished by the java runtime environment acting as a proxy, handling interactions with the operating system instead of the program itself. Lightweight virtualization. It's not known for being fast, but it enables a developer to write a single program that functions on every operating system for which the runtime environment is available. Java became extremely popular after its official release in 1995, being mostly used for web based client/server thing-a-ma-jigs and embedded nick-nacks. Chances are the cell phone in your pocket is a java based device, unless it's a fruit phone, or was made in 2003. This includes devices sporting the fast growing, open source, android operating system, which is developed by a small company called google.

Oracle also owns mysql, which was developed by MySQL AB, which was acquired by Sun Microsystems, which was acquired by Oracle, which now owns two of the three most widely used database systems in the world. A structured query language database is the storage platform for the dynamic content your blogging or ecommerce content management system uses, almost always mysql for open source applications. If it weren't for the MicroSoft implementation, we may have seen a super fun antitrust lawsuit preventing Oracle from purchasing Sun Microsystems.

Oracle owns openoffice. Openoffice is an open source office application suite, providing a stable, robust, and free implementation of word processing, spreadsheet, and other common applications needed for general office productivity. It also happens to be the most robust replacement for the classic MicroSoft product we have all grown to love to hate.

Oracle recently sniped a company/product that has grown to be critical for those of us in the GNU/Linux server management world: Ksplice. This product allows updates to the linux kernel without rebooting. It became popular amongst hosting providers, enabling them to maintain a secure environment with the latest critical bug fixes and security patches, while racking up ridiculous uptime and bypassing the barrage of customer service contacts generated by the 65 seconds it takes to reboot a server.

Is this just an advertisement for Oracle products? What could possibly go wrong?

No, it is not, and everything.

Oracle now has proprietary financial influence, brand copyrights, and control over the trajectory of technologies used to develop and drive a wide array of products you use, knowingly or not, on a daily basis.

The open source fork of the solaris operating system which resulted in the development of zfs was disbanded by its lead developers within weeks of the Sun purchase, due to Oracle's inability to provide clear support for the project and direction for future development.

The turn Ksplice is taking was made quite clear in Oracle's acquisition announcement.

“Oracle does not plan to support the use of Ksplice technology with Red Hat Enterprise Linux or SUSE Enterprise Linux. The Oracle Linux Premier Support subscription applies to Unbreakable Enterprise Kernel.”

Great. We now have to buy an Oracle branded linux kernel to get support. See where this is going?

The extensive market saturation is frightening. The influence this single corporation has over the most commonly used and freely available applications for building IT infrastructure is nothing short of reminiscent of the grip Redmond maintains over modern computing. This is a full on assault, attempting to leverage technology to control the majority of a minority market. A model where a single company more or less calls the shots, and squashes the products it doesn't view as financially viable.

The beautiful thing about "unix like" software is the wide variety, the originality, the mostly free availability, and the endless possibilities of implementation. It's not difficult to find a solution for your requirements, and if you can't find a robust one, there is a good chance someone else has already started developing something that will fit your needs. You can comb through roughly thirty years of methodology and design practices to find anything from the refined, well tuned, and robust, to the bloated, broken, and compounded horrific decisions. Though it's all there, and some of the clunkers are still the best despite their almost ancient appearance.

A single entity, with self interest and the sole purpose of sustainability and financial growth, controlling a wealth of very important technologies will cause a fundamental drift towards stagnation. All the projects will shift focus towards integration, instead of striving to achieve their original purpose: accomplishing their function to the best of their ability.

Don’t get me wrong. This is in no way, shape, or form the destruction of the free software or open source movement, nor a goofy conspiracy theory of “the man” trying to “keep us down”. Though it very well may be the downfall of some of the largest open source projects, which we have deployed across our enterprises and depended on for years. The risk of a project vanishing at any moment grows exponentially with uniform commercial branding, and teeters on a board of directors’ idea of the company’s overall best interest.

Stability gone. Paradise lost.

1). I know this rant is over a month out of date. It was a draft I hadn’t found time to finish.

Posted: September 6th, 2011

Migrating FreeBSD to a new hard drive.

I arrived at work to find a post-it note on my desk which read “Your disk was spinning a lot on Sunday”. The office was quiet, so a colleague could hear the disk spinning out of control. I leaned forward to hear the sound of a failing disk. High rpm spin up, slow down, and repeat. The usual behavior before the chug of death starts. Smartctl had reported roughly 30,000 seek errors in the past 20 minutes.

The dump and restore utilities are part of FreeBSD and allow you to back up and restore entire file systems. These utilities work with pipes, allowing you to easily manipulate the data and send it elsewhere, e.g. gzip it and store it remotely over ssh. They also work on live file systems. These features allow for a painless migration to a new hard drive, without the hassle of reinstalling the operating system. While this could lead to instability on a drive failing with a high number of read errors or bad sectors, in this instance it was a low risk transfer.
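On a real FreeBSD box the remote-backup pipeline is a one-liner; since dump itself needs a FreeBSD file system (and a remote box), the real commands are shown as comments over assumed names, with the same stream mechanics demonstrated locally using a scratch file standing in for the dump output:

```shell
# The real thing on FreeBSD (remote host name assumed):
#   dump -0aL -f - /usr | gzip -c | ssh backuphost 'cat > usr.dump.gz'
#   ssh backuphost 'cat usr.dump.gz' | gunzip -c | restore -rf -
# The same pipe mechanics, demonstrated locally with a scratch file:
echo "pretend this is a dump stream" > /tmp/stream.src
gzip -c < /tmp/stream.src > /tmp/stream.gz          # compress the stream on the fly
gunzip -c < /tmp/stream.gz | cmp - /tmp/stream.src  # round-trips byte for byte
```

Nothing touches an intermediate file on the source machine; the dump is compressed and shipped as it is produced.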

Linux users may find the drive naming structure odd, though it’s a much better design. Linux sata drives show up in the order they are found, as /dev/sd[a-z], and may have a trailing digit to specify a partition on the drive. FreeBSD uses the sata controller the drive is physically plugged into to define where it lives in the system, followed by the slice number, then the partition. This scheme keeps drives named as they should be, regardless of whether other disks are added or removed from the system, or which order they are detected in on boot.

You may use fdisk and disklabel to cut the slice and set up partitions on your new drive, though sysinstall or the standalone sade are better suited for ease of use. In this example my current installation was on /dev/ad4, and I was moving to a new sata drive attached as /dev/ad1.

My current partition scheme and their mount points looked like this:

/dev/ad4s1a  /
/dev/ad4s1e  /tmp
/dev/ad4s1f  /usr
/dev/ad4s1d  /var

The commands to migrate are simple.

newfs /dev/ad1s1a
mount /dev/ad1s1a /mnt
cd /mnt
dump 0afL - / | restore rf -

This moves the root partition. Take note of the L argument for dumping from a live file system. Follow the same process for usr, tmp, and var, piping each to its new location. Edit fstab on the new drive to correctly reflect where everything is, or will be. Shut down the machine, remove your failing disk, boot into your migrated installation, and enjoy not having to reinstall or configure anything.
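For reference, the fstab on the new drive ends up a straight ad4-to-ad1 rename, assuming you cut ad1 with the same partition letters (the ufs options shown are the usual defaults; swap omitted):

```
# /mnt/etc/fstab on the new drive -- same layout, ad4 renamed to ad1
# Device       Mountpoint  FStype  Options  Dump  Pass#
/dev/ad1s1a    /           ufs     rw       1     1
/dev/ad1s1d    /var        ufs     rw       2     2
/dev/ad1s1e    /tmp        ufs     rw       2     2
/dev/ad1s1f    /usr        ufs     rw       2     2
```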

Finally, stare at the failing drive now sitting on your desk, and curse the manufacturer. Nobody produces quality drives anymore.

Posted: September 5th, 2011