BY PETER WANYONYI
The more things change, it seems, the more they resemble the past. This is true in politics, in life, and nowhere more so than in technology.
Step back 30 years and you had large centralised computer systems – minicomputers and mainframes – to which were attached a number of “dumb” terminals equipped with data-entry peripherals like keyboards. Users had no processing power at their “workstations”, which were just display terminals. The actual processing happened at the central computer, with the results displayed for the user to view and interact with.
Somewhere in the decade after that, the selfish computing gene took over. Everyone wanted a personal device that kept their data local. The advent of laptops meant that more people could now work off-site, and the model of personalised computing took off. Busy executives carried their laptops everywhere, mid-ranking managers wanted to keep working on their presentations, reports and spreadsheets on the go, and the centralised model of computing gave way to distributed, disconnected computing, in which all user data was stored on the user’s own machine and users were free to wander around with their computers.
But this wasn’t good enough – for one thing, it presented a horrendous security challenge. A stolen laptop or computer meant the loss of critical information, and even backups wouldn’t recover the very latest changes to important documents. As it happened, the internet turned up at just the right time to open up new possibilities in personal computing. In the beginning, internet speeds were very low, as most internet access was via old copper telephone infrastructure. But the mobile phone revolution was also underway, and it drove the installation of new fibre and other data-transport networks that took internet speeds into the stratosphere. By about a decade ago, serious thought was being given to a new paradigm in computing: remote laptops and personal computers connecting to central servers in much the same way as the dumb terminals of old, only this time the connection would be over the “cloud” that is the internet. Cloud computing was born.
The initial impetus in cloud computing was to virtualise server hardware and software.
Thus, instead of installing expensive servers on-site and then making all the security and administration arrangements needed to get that gear working, organisations were offered a simple bargain: large IT providers would build giant data centres where all the hardware would be located. Any company wishing to set up its applications needed only a good internet connection to the data centre, where it paid for a given amount of storage space, processing power and system memory. Servers could be created on the fly using pre-canned software code, and they were no longer physical: they became virtual servers, sitting in the cloud that was the data centre’s infrastructure. Software no longer had to be purchased; it could simply be leased as a service. This model of computing killed the traditional server market – the small servers that were the mainstay of small and medium-sized companies have all but died out.
Cloud computing has been a huge success, and it is now extending to the desktop – the very experience of users accessing information systems and data. In the past few years, as servers were virtualised and migrated to virtual machines sitting in the vast digital catacombs of data centres, the desktop remained fairly standard: the organisation bought a computer and installed an operating system on it for its users to access organisational applications such as email and business software. Even where those applications were accessed through a “software as a service” subscription, the computer used to access them was conventional: the hardware itself, plus an operating system installed locally and capable of saving data locally.
Not anymore. The coming of browser-based computing is changing that as well. The leader in this respect is Google, whose Chromebooks are lightweight computers without a traditional operating system. When a Chromebook powers up, it displays Google’s Chrome browser. The user can then use Google’s free “Google Docs” service to create documents – or any of the many similar free online office-suite offerings from players such as Microsoft. Documents can be saved to the free online storage provided with an email account from Microsoft, Google or any of dozens of other providers: all that’s required is the Chromebook and a good internet connection. The Chromebook itself provides minimal processing capacity through its browser, with more complex tasks handed off to Google’s servers.
As internet access becomes ever more ubiquitous, even in the developing world, this model of computing – so similar to the old dumb terminals and their backend processing on a central mainframe – will grow and spread to those parts of the world where traditional computing is too expensive for most people. That includes Kenya, particularly for the light applications we are now beginning to use in schools and the like. The future of personal computing is here in the form of browser-based cloud computing, and it looks a lot like the past!
The author is an information systems professional based in New Zealand.