I've Looked at Clouds from Both Sides Now
The new big thing from Microsoft is Azure, their view of "Cloud Computing". It is the new feature in Visual Studio 2010. It competes with other cloud services from Google and Amazon. This is all leading edge stuff. Right?
In 1967 a research group at the IBM Cambridge Scientific Center (near MIT) created an operating system for the IBM 360 Model 67 mainframe computer called CP-67. It created "virtual machines" so that each user connected to the mainframe by a terminal controlled what appeared to be his own mainframe computer, into which he could load his own operating system. In some sense this was the first Personal Computer, and since it was not yet possible to build a PC in hardware, the mainframe simulated one in software. Most users loaded a small, simple-to-use operating system called CMS that had many of the same features (simple commands, letters to identify each disk, a crash-proof file system) that would eventually appear in PC-DOS.
A company called CSS acquired one of these Model 67 mainframes and optimized the operating system for timesharing use. It then sold services (compute time, disk storage, memory) to other companies that could not afford their own computers or IT staff. The company name was changed to National CSS or NCSS and their modified version of the IBM CP-67 operating system was called VP/CSS.
How exactly do you optimize an operating system for a particular type of timesharing? A virtual machine can run any operating system that you could load into a real machine. However, if you select a specific guest operating system that everyone is going to use, then you can "checkpoint" a copy of that system. When someone needs a new virtual machine, you don't really boot a system from scratch. You simply give them shared access to the pages in memory that contain the checkpointed system and to the shared base portion of the system disk, and they start up immediately. Any pages containing data they create are managed separately. The incremental cost of each additional virtual machine, in memory and disk space, is minimal.
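The mechanism described above is essentially copy-on-write cloning. Here is a toy sketch of the idea; the class and method names are invented for illustration and do not correspond to any real hypervisor API:

```python
class CheckpointedImage:
    """A booted guest frozen as a set of read-only memory pages."""
    def __init__(self, pages):
        self.pages = dict(pages)  # page number -> contents, shared by all clones

class VirtualMachine:
    """A clone of the checkpoint: shares base pages, owns only its changes."""
    def __init__(self, image):
        self.image = image
        self.private = {}  # only the pages this VM has written

    def read(self, page):
        # Prefer this VM's private copy; fall back to the shared base.
        return self.private.get(page, self.image.pages.get(page))

    def write(self, page, data):
        # Copy-on-write: the shared base image is never modified.
        self.private[page] = data

base = CheckpointedImage({0: "kernel", 1: "login prompt"})
vm_a = VirtualMachine(base)   # "boots" instantly: nothing is copied
vm_b = VirtualMachine(base)
vm_a.write(1, "user A's session")
print(vm_a.read(1))  # user A's session  (A's private copy)
print(vm_b.read(1))  # login prompt     (still the shared page)
```

Because each clone records only its deltas, the cost of the Nth virtual machine is just the memory its user actually dirties, which is the economy NCSS was selling.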
In 2008 Microsoft released Windows Server 2008 with a virtual machine capability called Hyper-V. It could run virtual copies of any server operating system. However, if you choose a particular operating system, checkpoint a booted-up generic copy, and arrange it so that every remote subscriber gets shared access to this checkpointed image plus the ability to store new data in additional memory pages, then you can create a version of Hyper-V and call it "Azure".
There are several ways to create a virtual machine. The host computer can provide each guest virtual machine with what appears to be a particular type of disk, network adapter card, and CD drive. Often these virtual devices have nothing to do with the actual hardware disks and adapters on the real server computer. Alternately, a "hypervisor" can take a pool of physical devices and assign them to specific virtual machines.
However, the IBM mainframe researchers found that the most efficient approach was to have the virtual machines communicate with the real supervisor code each time they wanted to talk to other machines or to read or write data on disk. Then the guest VM and the real supervising OS can exchange high-level information efficiently instead of wasting time simulating a piece of hardware. This is also part of the Azure infrastructure. In fact, Azure does not create any type of simulated hard drive, but instead provides each virtual machine with simple database services to store any type of data.
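The contrast between the two approaches can be sketched in a few lines. This is purely illustrative (the classes and the "hypercall" name are hypothetical, not any real interface): with an emulated device the host must decode every fake register write to reconstruct the guest's intent, while with the high-level approach the guest simply states what it wants.

```python
class EmulatedDisk:
    """Device emulation: the guest pokes fake hardware registers and the
    host must reverse-engineer the request from the register state."""
    def __init__(self, blocks):
        self.blocks = blocks
        self.regs = {"sector": 0, "command": None}
        self.result = None

    def write_register(self, name, value):
        self.regs[name] = value
        if name == "command" and value == "READ":
            # Only now does the host learn what the guest actually wanted.
            self.result = self.blocks[self.regs["sector"]]

class ParavirtualHost:
    """High-level approach: the guest states its intent in one call, and
    the backing store can be anything -- e.g. a database service rather
    than a simulated drive."""
    def __init__(self, store):
        self.store = store

    def hypercall_read(self, key):
        return self.store[key]

# Emulated path: two register writes to express one read.
disk = EmulatedDisk({5: "block five"})
disk.write_register("sector", 5)
disk.write_register("command", "READ")
print(disk.result)  # block five

# Paravirtual path: one direct, meaningful request.
host = ParavirtualHost({"config": "settings"})
print(host.hypercall_read("config"))  # settings
```

The second path is cheaper precisely because no hardware is being impersonated; that is the design choice the text attributes to both CP-67's descendants and Azure's storage fabric.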
Of course, after forty years one learns what customers really want and what they can do without. Azure is more sophisticated because it can offer a specific set of services targeted directly to the modern customer. That is what Microsoft is marketing.
However, under the covers the Azure service exists only in Microsoft data centers where servers running a special version of Hyper-V generate a particular type of diskless virtual machine that uses a high level communications fabric to talk to the supervisor, to other virtual machines, and to the database storage subsystem.
So it is different and updated, but basically it is still the same core idea that NCSS invented forty years ago. Build a high performance system for creating, managing, and linking virtual machines and then sell compute and storage services to off-site timesharing users. Some things have changed. The Intel x86 architecture has replaced the IBM mainframe hardware. The internet and Web browsers replace the old character terminals. Database storage replaces ordinary filesystems.
The big difference is a newly targeted market. The NCSS customers used their virtual machines for many of the Office applications that today are better handled by laptop computers. What customers today don't already have are servers, especially Web application servers. Small companies can afford server hardware, but they cannot easily arrange off-site backup, redundant network connections, disk backup and database failover, and all the other complex professional functions. It makes sense to purchase this type of service from a commercial vendor, and in any type of Web commerce application the revenue justifies the cost.
Obviously Microsoft would prefer that you use their development products and languages, so they make a big push for .NET programming in Azure. However, the virtual machine can run any type of program written in any language that Windows supports, provided that the runtime has been adapted to use the database service instead of hardware disk I/O for normal housekeeping.
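The adaptation described above amounts to swapping the storage layer underneath the runtime. A minimal sketch, with invented names (this is not Azure's actual API): application code keeps calling the same save/load routines, and only the backend changes from a local disk to a key-value storage service.

```python
class StorageBackend:
    """What a language runtime calls when a program 'writes a file'."""
    def put(self, name, data):
        raise NotImplementedError
    def get(self, name):
        raise NotImplementedError

class CloudTableBackend(StorageBackend):
    """Stands in for a hosted database/blob service; a dict plays the
    role of the remote service in this sketch."""
    def __init__(self):
        self.table = {}
    def put(self, name, data):
        self.table[name] = data
    def get(self, name):
        return self.table[name]

def save_report(backend, text):
    # Application code is unchanged; only the backend it is handed differs.
    backend.put("report.txt", text)

backend = CloudTableBackend()
save_report(backend, "quarterly numbers")
print(backend.get("report.txt"))  # quarterly numbers
```

A runtime ported this way can host programs in any language, which is the point the paragraph makes: .NET is the preferred path, not the only one.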
"Cloud computing" suggests something fuzzy and indistinct. Certainly you don't need to know how the thing works in order to use it. If, however, you like to keep things sharp and precise, then Azure is not really a new innovation or a leading edge technology. The basic ideas were all worked out forty years ago, and today Microsoft can build on that experience. Provided, of course, that there is anyone in Microsoft who is old enough to remember how this sort of thing is done.