Ever since Charles Babbage first proposed his ‘Difference Engine’ calculating machine to the Royal Astronomical Society in 1822, engineers and scientists have sought to create faster and more powerful computing machines.
Our desire to process data more quickly, and generally to extract more ‘juice’ from our electronic devices, has kept us in a constant cycle of computing enhancements and augmentations.
As the race for home PC power reached what many people regard as its zenith during the Nineties, microprocessor manufacturers including Intel and AMD pushed their research into ever-smaller transistor geometries on the silicon wafer in order to give us more power.
Building CPU chips (or central processing units) that would run at ever-faster clock speeds meant that engineers had to find a way of packing more power-hungry transistors onto each sliver of silicon. Eventually the rising clock speeds ran into heat and power limits, and home and business machines turned instead to parallel processing techniques to perform multiple computations concurrently, an approach that had in fact been employed within mainframes and supercomputers for many years.
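To make the idea concrete, here is a minimal sketch of parallel processing in Python, using the standard concurrent.futures module. The workload (summing squares) and the chunk sizes are purely illustrative; the point is simply that the same job can be split across cores and computed concurrently.

```python
# Minimal sketch of parallel processing: the same CPU-bound task
# is split across worker processes so multiple cores compute at once.
# The task itself (summing squares) is purely illustrative.
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(n):
    """A deliberately CPU-bound task for one worker to chew on."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    workloads = [2_000_000] * 4  # four chunks of work, one per core

    # Sequential: one core does everything in turn.
    sequential = [sum_of_squares(n) for n in workloads]

    # Parallel: the pool farms each chunk out to a separate process.
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel = list(pool.map(sum_of_squares, workloads))

    assert sequential == parallel  # same answers, computed concurrently
```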
So while your home laptop probably has more computing power than the supercomputers of the Sixties struggled to pump out, modern supercomputers have been developed with astonishing processing speeds. These machines can work through algorithmic logic at speeds almost impossible for us to comprehend. But computer scientists and neuroscientists still argue that however fast the hardware and software get, they remain a cavernous gap away from matching the way synaptic signals are transmitted around the human brain.
If you don’t remember floating-point numbers from school, put simply these are numbers where the decimal point can ‘float’: by moving the point around via an exponent, a single set of digits can represent a huge range of values, and the rest we can leave to the scientists. What we should know is that floating-point operations are crucial to supercomputers, because the rate at which a machine can carry them out gives us the floating-point operations per second (FLOPS) measure.
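For the curious, here is a rough sketch in Python of both ideas. The standard math.frexp call exposes the significand-and-exponent decomposition of a float, and a naive timing loop gives a crude, interpreter-bound FLOPS estimate; this is nothing like a real benchmark such as LINPACK, and the loop size and constants are arbitrary.

```python
# Sketch: a float is a significand plus an exponent; moving the
# exponent "moves the point". math.frexp exposes the decomposition.
import math
import time

m, e = math.frexp(6.25)        # 6.25 == m * 2**e
print(m, e)                    # 0.78125 3  ->  0.78125 * 2**3 == 6.25

# Crude FLOPS estimate: time a known number of floating-point
# multiply-adds. Real benchmarks (e.g. LINPACK) are far more careful.
n = 1_000_000
x = 1.000001
start = time.perf_counter()
acc = 0.0
for _ in range(n):
    acc = acc * x + 1.0        # one multiply + one add = 2 FLOPs
elapsed = time.perf_counter() - start
print(f"~{2 * n / elapsed:,.0f} FLOPS (interpreter overhead included)")
```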
So where has the development of supercomputers brought us in terms of day-to-day technology usage? While many super machines are used for research projects, others are being employed by so-called cloud computing providers to build ‘virtual’ computing machines shared by multiple tenants across a network. It may sound complicated, but it’s not really.
If we plunge deeper into the supercomputing supernova we start to see where the real beasts reside; i.e. dedicated lumps of raw computing power that sit immobile on their indomitable data-driven haunches, carrying out mankind’s most complex computational desires. One such beast that most of us will have heard about is the supercomputing facility at CERN, on the French-Swiss border, which carries out the number crunching for the Large Hadron Collider. In fact, CERN’s life force is monitored by something in excess of 2,500 clustered servers running the Linux operating system.
Because the Large Hadron Collider generates such massive volumes of data, the scientists need custom-built supercomputing power if they are to successfully analyze the subatomic particles that get smashed around inside the Collider’s seemingly endless miles of internal structure.
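CERN’s actual grid software is far more elaborate, but the underlying pattern is a familiar map-and-reduce: split the data into chunks, analyze each chunk on a separate machine, then merge the partial results. Below is a single-machine stand-in in Python, where worker processes play the role of cluster nodes; the ‘event’ data and the energy threshold are invented purely for illustration.

```python
# Single-machine stand-in for cluster-style analysis: a huge stream
# of collision "events" is split into chunks, each chunk is filtered
# in a separate process, and the partial results are merged.
# The event data and energy threshold are invented for illustration.
import random
from concurrent.futures import ProcessPoolExecutor

def count_high_energy(events, threshold=0.999):
    """Map step: count events in this chunk above the threshold."""
    return sum(1 for energy in events if energy > threshold)

if __name__ == "__main__":
    rng = random.Random(42)
    events = [rng.random() for _ in range(2_000_000)]  # fake detector data

    # Split the dataset into chunks, one per worker node (here: process).
    chunks = [events[i::8] for i in range(8)]

    with ProcessPoolExecutor() as pool:
        partial_counts = pool.map(count_high_energy, chunks)

    print("interesting events:", sum(partial_counts))  # reduce step
```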
But this is still science, so how do supercomputers work in our world, where we can see and feel their real effects and power? Computing giant IBM set out to show us the real power of its Deep Blue supercomputer in May 1997, when it pitted the machine against the reigning World Chess Champion, Garry Kasparov. Deep Blue won the six-game rematch 3.5 to 2.5, though critics noted that its victory owed more to brute-force search, evaluating some 200 million chess positions per second, than to anything resembling human thought.
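Deep Blue’s engine ran on specialized chess hardware, but it belonged to a well-understood family of algorithms: brute-force game-tree search with pruning. The sketch below shows the principle, minimax with alpha-beta pruning, over a toy game tree; the tree structure and scoring function are invented for illustration and have nothing to do with chess itself.

```python
# Tiny sketch of the game-tree search family Deep Blue scaled up:
# minimax with alpha-beta pruning over an abstract game tree.
# Deep Blue's real engine used specialized chess hardware; this
# only shows the principle.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Return the best achievable score from `node`, pruning branches
    that cannot affect the final decision."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:      # opponent would never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Toy game: a node is an integer, its "moves" lead to 2n and 2n+1,
# and a position's score is just a hash-like function of the number.
score = alphabeta(1, depth=10, alpha=float("-inf"), beta=float("inf"),
                  maximizing=True,
                  children=lambda n: [2 * n, 2 * n + 1],
                  evaluate=lambda n: (n * 2654435761) % 1000)
print("best guaranteed score:", score)
```

The pruning is what makes deep search tractable: whole subtrees are skipped as soon as it is clear the opposing player would never steer the game into them.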
This reality, of course, did not stop IBM using the win to great effect for publicity purposes, and indeed the company has continued to label the research arm of its supercomputer business “Deep Computing” to this day. “Combining these [Deep Computing] capabilities with advances in algorithms, analytic methods, modelling and simulation, visualization, data management and software infrastructures is enabling valuable scientific, engineering and business opportunities,” says IBM.
And the data those capabilities must handle is not a simple spreadsheet with ten numerical values in it that could easily be broken down and expressed in its simplest binary state. This is complex, metadata-rich data with embedded extras, and when we start to exchange it on mobile devices on the go, the complexity multiplies tenfold. The era of the green screen is gone; the era of the supercomputer has only just begun.