Tag Archives: data-center

Musings on why Cloud Computing will prevail…


Today’s post is a guest post by Suhas Kelkar. Suhas leads the Innovation & Incubation Lab at BMC Software India. Prior to BMC he was the Vice President of Product Management at Digite, an enterprise software company in the field of Project Portfolio Management. See his LinkedIn profile for details.

In the recent Hype Cycle for Cloud Computing 2009 special report by Gartner, technologies at the ‘Peak of Inflated Expectations’ include Cloud Computing! (For a description of the five phases of the Hype Cycle, look here.) This means that Cloud Computing is on the verge of entering the “Trough of Disillusionment” phase. Many technologies have been unable to come out of this dreaded trough, where they fail to meet expectations and quickly become unfashionable. Articles such as “Could the cloud lead to an even bigger 9/11” clearly indicate that Gartner’s analysis is right and that cloud computing has indeed reached the peak of hype!

This article has my musings on why cloud computing will eventually come out of this phase and reshape the way we run businesses.

Hype Cycle for Cloud Computing 2009

I had an opportunity to attend the VMworld 2009 conference. During the course of this conference, VMware announced its latest initiative, vCloud. vCloud essentially uses VMware’s virtualization technology to create an ecosystem of cloud service providers. With this initiative VMware joins the already crowded space of public cloud providers such as Amazon, Rackspace Cloud and Savvis. Almost every exhibitor at the VMworld conference was trying to get on the Cloud Computing bandwagon, and this was not even a Cloud Computing focused conference! The more you look into Cloud Computing, the more it feels like the next big thing after the internet gold rush of the ’90s.

All this hype for Cloud Computing feels like déjà vu. Turn the dial back a few years and the area of Software as a Service (SaaS) went through a very similar transition. After SaaS reached the trough of disillusionment, skeptics were raising doubts; many argued that they would never consider putting their competitive data (CRM) in a software system outside of their corporate networks. Salesforce had to fight an uphill battle as it tried to establish its SaaS products. However, the value proposition of SaaS, in terms of zero install and pay-as-you-go, was too attractive to ignore. Today SaaS is the architecture of choice for many enterprise software products, and last time I checked, Salesforce was sitting pretty at a massive market cap of 7.13 billion dollars!

Let’s look at the benefits of Cloud Computing:

  • Lower Costs – OPEX not CAPEX: Cloud Computing avoids capital expenditure (CapEx) on hardware, software and services by renting them from a third-party provider (such as Amazon). Consumption is usually billed on a utility (resource-based, like electricity) or subscription (time-based, like a monthly cable subscription) basis with little or no upfront cost. You pay as you go and pay for what you need. This seemingly straightforward benefit has a deep impact on business models and strategy.
  • Self-service and Agility: Provisioning a server used to take days if not weeks. With Amazon you can procure a server on their public cloud in minutes (a short provisioning sketch follows this list). Users can generally terminate the contract at any time (improving ROI and eliminating financial risk), and the services are often covered by service level agreements (SLAs) with financial penalties.
  • Focus on your business: Cloud computing abstracts away the underlying resources (server, network and storage) and their management so that you can focus on your core business. A win-win for providers and consumers.
  • Multi-tenancy: Cloud infrastructure and services are multi-tenant by default, with multiple customers sharing resources and the costs associated with them. Providers run centralized infrastructure in low-cost locations, and consumers benefit from the providers’ expertise in utilization and efficiency of infrastructure. Providers gain efficiency through economies of scale and are able to provide the same service at lower cost to happy consumers.

  • Elastic Scalability: Hosting your applications on cloud infrastructure enables dynamic (“on-demand”) provisioning of resources in near real time, without having to waste server resources engineered for peak loads. This enables small businesses to start offering their services on the web with low entry barriers and then scale as and when their load demands grow.
  • Consider, for example, that you want to start a small web-based business selling toys. Your business plan calls for exponential growth, with the number of customers ramping from a few hundred in the first year to thousands in 2-3 years to a million-plus in 5-7 years. Of course, this plan does not even include wild fluctuations during peak holiday seasons. Until today, planning for this type of scenario involved a lot of upfront cost that created huge barriers to entry for startups. Now, with cloud computing and public cloud infrastructure, such small companies can dream of doing exactly what they want to do, with practically unlimited elasticity!
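To make the self-service and pay-as-you-go points above concrete, here is a minimal, hypothetical sketch of provisioning and then terminating a server on Amazon EC2 using the boto3 Python library. (boto3 is a present-day API rather than one available when this was written, and the AMI ID and instance type below are placeholders.)

```python
# A minimal sketch of self-service provisioning on a public cloud using boto3.
# The AMI ID, instance type and region are placeholders -- substitute your own.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask for a single small instance; the API call returns within seconds,
# and the server is usually running within a couple of minutes.
response = ec2.run_instances(
    ImageId="ami-12345678",      # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# Pay-as-you-go: the cost stops as soon as the instance is terminated.
ec2.terminate_instances(InstanceIds=[instance_id])
```

Compare this with raising a purchase order, waiting for hardware delivery, and racking and cabling a physical server: the capital expense becomes a small, recurring operating expense.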

Similar to the SaaS success story, it will be the underlying benefits of the “cloud” that eventually win over the skeptics. Of course, an important factor will also be for an ecosystem to evolve in a timely fashion. One of the reasons SaaS was successful was that an entire ecosystem made itself available that lent itself well to the SaaS model, including web standards (SOAP, WSDL, UDDI) and architectures such as AJAX.

Similar to the platform wars of the eighties (followed by the browser wars of the nineties), Cloud Computing is currently going through a war, with each player trying to establish itself as the destination. Some efforts have started to promote interoperability and openness of the cloud; the Open Cloud Initiative is one such example. However, it remains to be seen how the industry as a whole matures and adopts such efforts…

Cloud computing is here to stay and will eventually succeed as a concept. It has the power to establish new business models and change existing processes. More will have to be written about what it means for the enterprises of tomorrow to manage their businesses in the cloud. Do provide feedback via your comments if you would like to hear more about it…

See also: Suhas’ previous PuneTech article: The Changing Landscape of Data Centers.


Changing Landscape of Data Centers

Today’s post is a guest post by Suhas Kelkar, the Head of the Innovation & Incubation Lab at BMC Software India. Prior to BMC he was the Vice President of Product Management at Digite, an enterprise software company in the field of Project Portfolio Management. See his LinkedIn profile for details.

I had an opportunity to speak at the very first BMC India Technical Event, held in Bengaluru on June 11th, 2009. At this event I talked about the changing landscape of data centers. This article is an excerpt of the talk, intended to facilitate understanding of the presentation. The entire presentation is available here.

There are many factors causing the landscape of data centers to change. There are some disruptive technologies at play, namely Virtualization and Cloud Computing. Virtualization has been around for a while, but only recently has it risen to the level of making a significant impact on data centers. Virtualization has come a long way since VMware first introduced VMware Workstation in the ’90s. That product was initially designed to ease software development and testing by partitioning a workstation into multiple virtual machines.

The virtual machine software market has seen a substantial amount of evolution, from the Xen® hypervisor, the open-source industry standard for virtualization, to vSphere, billed as the first cloud operating system, which transforms IT infrastructure into a private cloud: a collection of internal clouds federated on-demand to external clouds. Hardware vendors are not far behind either; Intel, AMD and others are pumping a lot of R&D dollars into optimizing their chipsets and hardware for the hypervisor layer.

According to IDC, more than 75% of companies with more than 500 employees are deploying virtual servers. As per a survey by Goldman Sachs, 34 per cent of servers among Fortune 1000 companies will be virtualized within the next 12 months, more than double the current level of 15 per cent.

Cloud computing has similarly existed as a concept for many years. However, various factors are finally coming together to make it ripe for maximum impact. Bandwidth has been increasing significantly across the world, enabling faster access to applications in the cloud. Thanks to the success of SaaS companies, organizations are increasingly comfortable having sensitive data outside their direct physical control.

There is an increasing need to support a remote workforce; applications that used to reside on individual machines now need to be centralized.

The economy is pushing costs down. Last but not least, there is an increasing awareness about going green.

All these factors are causing the data center landscape to change. Now let’s look at some of the ways that the data centers are changing.

Data centers today are becoming much more agile. They are quick, light, easy to move and nimble. One of the reasons for this is that in today’s data center, virtual machines can be added quickly as compared to procuring and provisioning a physical server.

Self service provisioning allows end-users to quickly and securely reserve resources and automates the configuration and provisioning of those physical and virtual servers without administrator intervention. Creating a self-service application and pooling resources to share across teams not only optimizes utilization and reduces needless hardware spending but it also improves time to market and increases IT productivity by eliminating mundane and time consuming tasks.

Public clouds have set new benchmarks. For example, Amazon EC2’s SLA for availability is 99.95%, which raised the bar significantly over traditional data center availability SLAs. More recently another vendor, 3Tera, came out with five nines: 99.999% availability. To compare the two, 99.999% availability translates into about 5.3 minutes of downtime each year, and the difference in cost between five nines and four nines (99.99 percent, or 52.6 minutes of downtime per year) can be substantial.
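The downtime numbers quoted above follow directly from simple arithmetic over the minutes in a year; a quick sketch:

```python
# Downtime allowed per year for a given availability SLA ("number of nines").
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (99.95, 99.99, 99.999):
    downtime = MINUTES_PER_YEAR * (1 - availability / 100.0)
    print(f"{availability}% availability -> ~{downtime:.1f} minutes of downtime per year")

# 99.95%  -> ~262.8 minutes/year (Amazon EC2's SLA)
# 99.99%  -> ~52.6 minutes/year  (four nines)
# 99.999% -> ~5.3 minutes/year   (3Tera's five nines)
```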

Data centers are also becoming more scalable. With virtualization, a data center may have 100 physical servers servicing 1,000 virtual servers for your IT. Once again thanks to virtualization, data centers are no longer constrained by physical space or power/cooling requirements to the same extent as before.

The scalability requirements for data centers are also changing. Applications are becoming more computation- and storage-hungry. As an example of the computation-hungry nature of applications, delivering a sub-half-second response to an ordinary Google search query involves 700 to 1,000 servers! Google has more than 200,000 servers, and I’d guess the real number is far beyond that and growing every day.

Another example is Facebook, where more than 200 million photos are uploaded every week. Or Amazon, whose post-holiday-season data center utilization used to be below 10%! Google Search, Facebook and Amazon are not one-off examples of applications. More and more applications will be built with similar architectures, and hence the data centers that host and support those applications will need to evolve.

Data centers are becoming more fungible. What that means is that resources used within the data centers are becoming easily replaceable. Earlier, when you procured a server, chances were high that it would be there for a number of years. Now virtual servers will get created, removed, reserved and parked in your data center!

Data centers are becoming more utility-centric and service-oriented. As an example, look at Cisco‘s definition of Data Center 3.0, where infrastructure is described as a set of services. Data center users are increasingly going to demand pay-as-you-go, pay-for-what-you-use pricing. Due to various factors, users are going to cut back on large upfront capital expenses and instead prefer smaller, recurring operating expenses.

Most organizations have either seasonal peaks or daily peaks (or both). Even where the peak-to-average differential is less dramatic, the cost impact on the bottom line is still significant, and as the discrepancy between peak usage and standard usage grows, the cost difference between the cloud and other options becomes overwhelming. In addition, the ability to pay for what you use makes it easy to engage in proofs of concept and other R&D that would otherwise require dedicated hardware.

Technology is changing, business needs are changing, and with changing times organizations’ social responsibilities are changing too. More and more companies are thinking about the impact they have on the environment. Data centers become a major source of environmental impact, especially as they grow in size.

A major contributor to excessive power consumption in the data center is over-provisioning. Organizations have created dedicated, siloed environments for individual application loads, resulting in extremely low utilization rates. The result is that data centers are spending a lot of money powering and cooling many machines that individually aren’t doing much useful work.

Cost is not the only problem. Energy consumption has become a severe constraint on growth. In London, for example, there is now a moratorium on building new data centers because the city does not have the electrical capacity to support them!

Powering one server contributes on average 6 tons of carbon emissions (depending upon the location of the server and how power is generated in that region). It is not too far-fetched to claim that every data center has some servers that are kept running only because no one knows what business services depend on them, while in reality no one seems to be using them. Even with the servers that are being used, there is an opportunity to increase their utilization and consolidate them.

Now that we have seen some of the ways that data centers are changing, I am going to shift gears and talk about the evolution of data centers, using the analogy of the evolution of the web. The web evolved from Web 1.0, where everyone could access, to Web 2.0, where people started contributing, to Web 3.0, where the mantra is that everyone can innovate.
Image showing Web-3.0 and DC-3.0
Applying this analogy to data centers, we can see how they have evolved from their early days of existence to where we are today:
Evolution of a DC

  • In the beginning, data centers were nothing but generic machines housed together. From there they evolved to blade servers, which removed some duplicate components and optimized the footprint. Now, in DC 3.0, they are becoming even more virtual and cloud-based.
  • From mostly physical servers we have moved to a mix of physical and virtual servers, and now to a stage where even the underlying resources are treated as virtual.
  • Provisioning time has gone down significantly.
  • User participation has changed.
  • Management tools that used to be nice-to-have are playing a much more important role and are becoming mandatory. A good example, once again, is Cisco UCS, where the BladeLogic management tool will come pre-installed!
  • The role of the data center admin itself has changed from mostly menial work into a much more sophisticated one!

Slideshow for “Changing Landscape of Data Centers” (if you cannot see the slideshow above, click here).


Stop Virtual Machine Sprawl with Colama

This is a product-pitch for Colama, a product built by Pune-based startup Coriolis. For the basics of server virtualization, see our earlier guest posts: Introduction to Server Virtualization, and Why do we need server virtualization. Virtualization is fast emerging as a game-changing technology in the enterprise computing space, and Colama is trying to address an interesting new pain-point that is making its presence felt.

Virtualization is fast becoming an accepted standard for IT infrastructure. While it comes as a boon to the development and QA communities, IT practitioners are dealing with the pressing issue of virtual machine sprawl, which surfaces due to ad hoc and hence uncontrolled adoption of virtualization. After describing the problem and its effects, this article outlines a solution called Colama, as offered by Coriolis Technologies.

 

Virtual machines have been slipping in under the covers everywhere in the IT industry. Software developers like virtual machines because they can easily mimic a target environment. QA engineers prefer virtual machines because they can simultaneously test the software on different configurations. Support engineers can ensure reproducibility of an issue by pointing to an entire machine rather than detailing the individual steps and/or configuration requirements on a physical host. In many cases, adoption of virtual machines has been driven primarily by users’ choice rather than any coherent corporate strategy. This ad hoc and uncontrolled use of virtual machines across the organization has resulted in a problem called virtual machine sprawl, which has become critical for today’s IT administrators.

Virtual machine sprawl is an unpleasant side effect of server virtualization and its near exponential growth over the years. Here are the symptoms:

  • At any given point, the virtual machines running in the organization are unaccounted for. Information like who created them and when, who used them, what configurations they have, what licensed software they use, whether security patches have been applied, whether the data is backed up, etc. is not maintained or tracked anywhere.
  • Most commonly, people freely copy each other’s virtual machines, and no usage tracking or access control is in place.
  • Because of cheap storage, too many identical or similar copies of the same machines are floating around the organization. But a reduction in storage cost does not reduce the operational cost of storage management, search, backup, etc. Data duplication and redundancy is a problem even if storage is plentiful.
  • Because there is no mechanism to keep track of why an image was created, it is hard to figure out when it should be destroyed. People tend to forget what each image was created for, and keep them around just in case they are needed. This increases the storage requirements.
  • Licensing implications: a virtual machine copied from one with licensed (limited) software needs to be tracked for its life span in order to control the use of the licensed software.

    There are many players in the industry who address this problem. Most virtual lab management products are tied to one specific virtualization technology. For example, VMware Lab Manager works only with VMware’s virtualization technology. In a heterogeneous virtualization environment filled with Xen, VMware, VirtualBox and Microsoft virtual machines, such an approach falls short.

    Colama is Coriolis Technologies’ solution to this problem. Colama manages the life cycle of virtual machines across an organization. While tracking and managing virtual machines, Colama is agnostic to the underlying virtualization technology.

     

    Here are some of the features of Colama:

  • Basic SCM for virtual machines: you can check in, check out, clone, tag and comment on virtual machines to track their revisions.
  • Image inspection: Colama provides automatic inspection, extraction and display of image-related data, like OS version, installed software, etc., and also facilitates “search” on the extracted data. For example, you can search for the virtual machines that have Windows 2003 Server with Service Pack 4 and Oracle 10g installed (a generic sketch of this kind of metadata search follows the feature list).
  • Web based interface: navigate through the virtual machine repository of your organization using just a web browser.
  • Ownership and access control: create a copy of a machine for yourself and decide who can use “your” machine.
  • De-duplication: copying/cloning virtual machines happens without any additional storage requirement.
  • Physical machine provisioning (lab management): spawn a virtual machine of your choice on any physical host that is available and ‘ready’.
  • Management reports for auditing and compliance: user activity reports, virtual machine history, health information (up/down time) of virtual machines, license reports of the virtual machines, etc.
  • Virtualization agnostic: works for virtual machines from all known vendors.
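The image-inspection search described in the feature list can be pictured as a query over metadata extracted from each image. The sketch below is a generic illustration with a made-up inventory and field names; it is not Colama's actual data model or API.

```python
# A generic sketch of searching virtual machine image metadata.
# The records and field names are invented for illustration only.
inventory = [
    {"name": "qa-win2003-01", "os": "Windows 2003 Server SP4",
     "software": ["Oracle 10g", "IIS 6.0"], "owner": "qa-team"},
    {"name": "dev-linux-07", "os": "RHEL 5",
     "software": ["MySQL 5.0"], "owner": "dev-team"},
]

def find_images(inventory, os_substring, required_software):
    """Return images whose OS matches and which have all required software."""
    return [
        vm for vm in inventory
        if os_substring in vm["os"]
        and all(pkg in vm["software"] for pkg in required_software)
    ]

for vm in find_images(inventory, "Windows 2003", ["Oracle 10g"]):
    print(vm["name"], "-", vm["owner"])   # qa-win2003-01 - qa-team
```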
    Please note: This product-pitch, by Barnali Ganesh, co-founder of Coriolis, is featured on PuneTech because the editors found the technology interesting (or more accurately, the pain-point which it is addressing). PuneTech is a purely non-commercial website and does not take any consideration (monetary or otherwise) for any of the content on the site. If you would like your product to be featured on the front page, send us an article and we’ll publish it if we feel it’s interesting to our readers. You can always add a page to the PuneTech wiki by yourself, as long as you follow the “No PR” guidelines.

    Why do we need server virtualization

    Virtualization is fast emerging as a game-changing technology in the enterprise computing space. What was once viewed as a technology useful for testing and development is going mainstream and is affecting the entire data-center ecosystem. This article on the important use-cases of server virtualization, by Milind Borate, is the second in PuneTech’s series of articles on virtualization. The first article gave an overview of server virtualization. Future articles will deal with the issue of management of virtual machines, and other types of virtualization.

    Introduction

    Is server virtualization a new concept? It isn’t, because traditional operating systems do just that. An operating system provides a virtual view of a machine to the processes running on it. Resources are virtualized.

    • Each process gets a virtual address space (see the sketch after this list).
    • A process’ access privileges control what files it can access. That is storage virtualization.
    • The scheduler virtualizes the CPU so that multiple processes can run without conflicting with each other.
    • Network is virtualized by providing multiple streams (for example, TCP) over the same physical link.
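To make the first point concrete, here is a small, Unix-only Python sketch showing that each process gets its own copy of the address space: after a fork, the parent and child refer to the same variable, but a change made in the child is invisible to the parent.

```python
# A small Unix-only demonstration that each process has its own address space:
# after fork(), the child's pages are separate (typically copy-on-write) from
# the parent's, so a write in the child does not affect the parent.
import os

counter = [0]

pid = os.fork()
if pid == 0:
    counter[0] = 100                      # modify the child's copy only
    print("child sees:", counter[0])      # child sees: 100
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print("parent sees:", counter[0])     # parent sees: 0
```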

    Storage and network are weakly virtualized in traditional operating systems because some global state is shared by all processes. For example, the same IP address is shared by all processes. In the case of storage, the same namespace is used by all processes. Over time, some files/directories become de-facto standards. For example, all processes look at the same /etc/passwd file.

    Today, the term “server virtualization” means running multiple OSs on one physical machine. Isn’t that just adding one more level of virtualization? An additional level generally means added cost, lower performance and higher maintenance. Why then is everybody so excited about it? What does server virtualization provide in addition to traditional OS offerings? An oversimplified version of the question is: if I can run two processes on one OS, why should I run two OSs with one process each? This document enumerates the drivers for running multiple operating systems on one physical machine, presenting a use case, evaluating the virtualization-based solution, suggesting alternatives where appropriate and discussing future trends.

    Application Support

    Use case: I have two different applications. One runs on Windows and the other runs on Linux. The applications are not resource intensive and a dedicated server is under-utilized.

    Analysis: This is a weak argument in an enterprise environment because enterprises want to standardize on one OS and one OS version. Even if you find Windows and Linux machines in the same department, the administrators are two different people. I wonder if they would be willing to share a machine. On the other hand, you might find applications that require conflicting versions of, say, some library, especially on Linux.

    Alternative solution: Wine allows you to run Windows applications on Linux. Cygwin allows you to run Linux applications on Windows. Unfortunately, it’s not the same as running the application directly on the required OS. I won’t bet that a third party application would run out of the box under these virtual environments.

    Future: Some day, developers will get fed up with writing applications for a particular OS and then porting them to others. Java provides us with a host/OS-independent virtual environment. Java wants programmers to write code that is not targeted at a particular OS. It succeeded in some areas, but there is still a lot of software written for a particular OS. Why did everybody not move to Java? I guess because Java does not let me do everything that I can do using OS APIs. In a way, that is Java’s failure in providing a generic virtual environment. In the future, we will see more and more software developed over OS-independent APIs. Databases would be the next target for establishing generic APIs.

    Conflicting Applications

    Use case: I have two different applications. If I install both on the same machine, both fail to work. In fact, they might actually work, but the combination is not supported by my vendor.

    Analysis: In the current state of affairs, an OS is not just hardware virtualization. The gamut of libraries, configuration files, daemons is all tied up with an OS. Even though an application does not depend on the exact kernel version, it very much depends on the library versions. It’s also possible that the applications make conflicting changes to some configuration file.

    Alternative solution: OpenVZ modifies Linux to provide multiple “containers” inside the same OS. The machine runs a single kernel but provides multiple isolated environments. Each isolated environment can run an application that would be oblivious to the other containers.

    Future: I think operating systems need to support containers by default. The process-level isolation provided at the memory and CPU level needs to be extended to storage and network as well. On the other hand, I also hope that application writers desist from depending on shared configuration and shared libraries, and pay some attention to backward compatibility.

    Fault Isolation

    Use case: In case an application or the operating system running the application faults, I want my other applications to run unaffected.

    Analysis: A faulty application can bring down an entire server, especially if the application runs in a privileged mode and if it can be attacked over a network. A kernel driver bug or operating system bug also brings down a server. Operating systems are getting more stable, though, and servers going down due to operating system bugs are rare nowadays.

    Alternative solution: Containers can help here too. Containers provide better isolation amongst applications running on the same OS. But bugs in kernel-mode components cannot be addressed by containers.

    Future: In the near future, we are likely to see micro-kernel-like architectures built around virtual machine monitors. Lightweight operating systems could be developed to work only with virtual machine monitors. Such a solution will provide fault isolation without incurring the overhead of a full operating system running inside a virtual machine.

    Live Application Migration

    Use case: I want to build a datacenter with utility/on-demand/SLA-based computing in mind. To achieve that, I want to be able to live-migrate an application to a different machine. I can run the application in a virtual machine and live-migrate the virtual machine.

    Analysis: The requirement is to migrate an application. But, migrating a process is not supported by existing operating systems. Also, the application might do some global configuration changes that need to be available on the migration target.

    Alternative solution: OpenVZ modifies Linux to provide multiple “containers” inside the same OS. OpenVZ also supports live migration of a container.

    Future: As discussed earlier, operating systems need to support containers by default.

    Hardware Support

    Use case: My operating system does not support the cutting edge hardware I bought today.

    Analysis: Here again, I’m not bothered about the operating system, but my applications run only on this OS. Also, enterprises like to use the same OS version throughout the organization. If an enterprise sticks to an old OS version, it does not work with the new hardware to be bought; if the enterprise is willing to move to a newer OS, that does not work with the existing old hardware.

    But the real issue here is the lack of standardization across hardware or driver development models. I fail to understand why every wireless LAN card needs a different driver. Can hardware vendors not standardize the IO ports and commands so that one generic driver works for all cards? On the other hand, every OS, and even every OS version, has a different driver development model. That means every piece of hardware requires a different driver for each OS version.

    Alternative solution: I cannot think of a good alternative solution. One specific issue, the unavailability of wireless LAN card drivers for Linux, is addressed by NdisWrapper. NdisWrapper allows us to access a wireless card on Linux by loading a Windows driver.

    Future: We either need hardware-level standardization or the ability to run the same driver on all versions of all operating systems. It would be good to have wrappers, like NdisWrapper, for all types of drivers and all operating systems. A hardware driver should write to a generic API provided by the wrapper framework, and the generic API should be implemented by the operating system vendors.

    Software Development Environment

    Use case: I want to manage hardware infrastructure for software development. Every developer and QA engineer needs dedicated machines. I can quickly provision a virtual machine when the need arises.

    Analysis: Software under development fails more often than a released product. Software developers and QA engineers want an isolated environment for their tests so that bugs are correctly attributed to the right application. Also, software development environments require frequent reprovisioning, as the product under development needs to be tested under different operating systems.

    Alternative solution: Containers would work for most software development. I think, the exception is kernel level development.

    Future: Virtual machines found an instant market in software QA labs. Virtual machines will continue to flourish in this market.

    Application Configuration Replica

    Use case: I want to ship some data to another machine. Instead of setting up an identical application environment on the other machine to access the data, I want to ship the entire machine image itself. Shipping a physical machine image does not work because of hardware-level differences. Hence, I want to ship a virtual machine image.

    Analysis: This is another hard reality of life. Data formats are not compatible across multiple versions of a software product. Portable data formats are used by human-readable documents. File-system data formats are also stable to a large extent, and you can mount a FAT file-system or an ISO 9660 file-system on virtually any version of any operating system. The same level of compatibility is not established for other structured data, and I don’t see that happening in the near future. Even if this hurdle is crossed, you need to bother about correctly shipping all the application configuration, which itself could be different for the same software running on different OSs.

    Alternative solution: OpenVZ container could be a light-weight alternative to a complete virtual machine.

    Future: The future seems inclined towards “computing in a cloud”. Network bandwidth is increasing, and so is the trend towards outsourced hosting. Mail and web services have been outsourced for a long time. Oracle-on-demand allows us to outsource database hosting. Google (Writely) wants us to outsource document hosting. Amazon allows us to outsource both storage and computation. In the future, we will be completely oblivious to the location of our data and applications; the only process running on your laptop would be an improved web browser. In that world, only the system software engineers who build these data centers would be worried about hardware and operating system compatibilities. But even they will not be overly bothered, because data-center consolidation will reduce the diversity in hardware and OS.

    Thin Client Desktops

    Use case: I want to replace desktop PCs with thin clients. A central server will run a VM for each thin client. The thin client will act as a display server.

    Analysis: Thin clients could bring down the maintenance costs substantially. Thin client hardware is more resilient than a desktop PC. Also, it’s easier to maintain the software installed on a central server than managing several PCs. But, it’s not required to run a full virtual machine for each thin client. It’s sufficient to allow users to run the required applications from a central server and make the central storage available.

    Alternative solution: Unix operating systems are designed to be server operating systems, and thin X terminals are still prevalent in the Unix desktop market. Microsoft Windows, the most prevalent desktop OS, is designed as a desktop OS, but Microsoft has also added substantial support for server-based computing. Microsoft’s Terminal Services allows multiple users to connect to a Windows server and launch applications from a thin client. Several commercial thin clients can work with Microsoft Terminal Services or similar services provided by other vendors.

    Future: Before the world moves to computing in a global cloud, an intermediate step would be enterprise-wide desktop application servers. Thin clients would become prevalent due to reduced maintenance costs. I hope to see Microsoft come up with better licensing for server-based computing. On Unix, floating application licenses are the norm: with a floating application license, a server (or a cluster of servers) can run only a fixed number of application instances, as per the license, and it does not matter which user or thin client launches the application. Such floating licensing from Microsoft would help.

    Conclusion

    Server virtualization is a “heavy” solution for the problems it addresses today. These problems could be addressed by operating systems in a more efficient manner with the following modifications:

    • Support for containers.
    • Support for live migration of containers.
    • Decoupling of hardware virtualization and other OS functionalities.

    If existing operating systems muster enough courage to deliver these modifications, server virtualization will have a tough time. It is, however, unrealistic to expect complete overhauls of existing operating systems: it is possible to implement containers as a part of the OS, but decoupling hardware virtualization from the OS is a hard job. Instead, we are likely to see new lightweight operating systems designed to run only in server virtualization environments. Such a lightweight operating system will have the following characteristics:

    • It will do away with functionality already implemented in virtual machine monitor.
    • It will not worry about hardware virtualization.
    • It might be a single user operating system.
    • It might expect all processes to be co-operative.
    • It will have a minimal kernel mode component. It will be mostly composed of user mode libraries providing OS APIs.

    Existing virtual machine monitors would also take up more responsibility in order to support lightweight operating systems:

    • Hardware support: The hardware supported by a VMM will be of primary importance. The OS only needs to support the virtual hardware made visible by VMM.
    • Complex resource allocation and tracking: I should get a finer control over resources allocated to virtual machines and be able to track resource usage. This involves CPU, memory, storage and network.

    I hope to see a lightweight OS implementation targeted at server virtualization in the near future. It would be a good step towards modularizing operating systems.

    Acknowledgements

    Thanks to Dr. Basant Rajan and V. Ganesh for their valuable comments.

    About the Author – Milind Borate

    Milind Borate is the CTO and VP of Engineering at Pune-based continuous data protection startup Druvaa. He has over 13 years of experience in enterprise product development and delivery. He worked at Veritas Software as Technical Director for SAN-FS and served on the board of the Veritas patent filter committee. Milind has filed over 15 patent applications (4 allotted) and co-authored “Undocumented Windows NT” in 1998. He holds a BE (CS) degree from the University of Pune and an MTech (CS) degree from IIT Bombay.

    This article was written when Milind was at Coriolis, a startup he co-founded before Druvaa.


    Introduction to Server Virtualization

    Virtualization is fast emerging as a game-changing technology in the enterprise computing space. What was once viewed as a technology useful for testing and development is going mainstream and is affecting the entire data-center ecosystem. Over the course of the next few weeks, PuneTech is going to run a series of articles on virtualization from experts in the industry. This article, the first in the series, gives an introduction to server virtualization, and has been written by Anurag Agarwal and Anand Mitra, founders of KQ Infotech.

    What is virtualization

    Virtualization is essentially some kind of abstraction of computing resources, and there are various kinds of abstractions. Files provide an abstraction of disk blocks as a linear space. Storage virtualization products, like a logical volume manager, virtualize multiple storage devices into a single storage pool, and vice versa.

    Processes are also a form of virtualization. A process provides an illusion to the programmer that she has the entire address space at her disposal and has exclusive control of hardware resources. Multiplexing of these resources between all the processes on the system is done by the OS, transparent to the process. This concept has been universally adopted.

    All multi-programming operating systems are characterized by executing instructions in at least two privilege levels, i.e. unprivileged for user programs and privileged for the operating system. User programs use “system calls” to request the operating system to perform privileged operations on their behalf. The interface, which consists of the unprivileged instruction set and the set of system calls, defines an “extended machine” that is easier to program than the bare machine and makes user programs more portable.

    Having the kernel wrap completely around the hardware and not expose it to the upper layers has its advantages. But in this model, only one operating system can be run at a given time, and one cannot perform any activity that would disrupt the running system (for example, an upgrade, migration, or system debugging).

    A virtual machine provides an abstraction of a complete physical machine. This is also known as server virtualization. The basic idea is to run more than one operating system on a single server at the same time.

    The History of Server Virtualization

    In 1964, IBM developed a virtual machine monitor (CP) to run its various OSes on its mainframes. Hardware was too expensive to leave underutilized, and IBM addressed many of the performance challenges inherent in virtualization by designing hardware amenable to it. However, with the advent of cheap computing resources and the proliferation of commodity hardware, virtualization was no longer popular and came to be viewed as an artifact of an era when computing resources were scarce. This was reflected in the design of x86 architectures, which did not provide enough support to implement virtualization efficiently.

    With the cost of hardware going down and the complexity of software increasing, a large number of administrators started putting one application per server. This provides isolation, so that one application does not interfere with another. However, over time it resulted in a problem called server sprawl: too many underutilized servers in data centers. Most Windows servers have average utilization between 5% and 15%, and this utilization rate will go down further as dual-core and quad-core processors become common. In addition to the cost of the hardware, there are also power and cooling requirements for all these servers. The earlier problem of underutilization of hardware resources has started surfacing again.

    Ironically, the very reason that resulted in the demise of virtualization in the mainstream was also the cause of its resurrection. The features that made the OSes attractive also made them more fragile. The renewed interest in virtualization resulted in VMware providing a server virtualization solution for x86 machines in 1999. Server consolidation has increased server utilization to the 60% to 80% level, resulting in a 5x to 15x reduction in the number of servers.

    Virtual machines have introduced a whole new paradigm of looking at operating systems. Traditionally they were coupled with physical machines, and they needed to know all the peculiarities of hardware. Once hardware becomes obsolete, your operating system becomes obsolete too. But virtual machines have changed that notion. They have decoupled the operating systems from hardware by introducing a virtualization layer called virtual machine monitor (VMM).

    Types of Virtualization architectures

    There are many VMM architectures.

    Full emulation: This is the oldest virtualization technique in use. An emulator is a software layer that tracks the memory and CPU state of the machine being emulated and interprets each instruction, applying the effect it would have on the virtual state of the machine it has constructed. In a regular server, machine instructions are directly executed by the CPU and the memory is directly manipulated. In full emulation, the instructions are handed over to the emulator, which converts them into a (possibly different) set of instructions to be executed on the actual underlying physical machine. Full emulation is routinely used during the development of software for new hardware that might not be available yet. Virtualization can be considered a special case of emulation where the machine being emulated and the host are similar, which allows unprivileged instructions to be executed natively. QEMU falls in this category.
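The core of any full emulator is a fetch-decode-execute loop that maintains the emulated CPU and memory state in software. The toy sketch below uses an invented three-instruction machine purely for illustration; a real emulator such as QEMU does the same thing for a real instruction set, plus device emulation.

```python
# A toy fetch-decode-execute loop illustrating full emulation.
# The "instruction set" is invented for illustration; it is not a real ISA.
def emulate(program):
    regs = {"r0": 0, "r1": 0}   # emulated CPU registers
    memory = [0] * 16           # emulated physical memory
    pc = 0                      # emulated program counter

    while pc < len(program):
        op, *args = program[pc]
        if op == "load_imm":        # load_imm reg, value
            regs[args[0]] = args[1]
        elif op == "add":           # add dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "store":         # store reg, addr
            memory[args[1]] = regs[args[0]]
        else:
            raise ValueError(f"unknown instruction {op}")
        pc += 1

    return regs, memory

regs, memory = emulate([
    ("load_imm", "r0", 2),
    ("load_imm", "r1", 3),
    ("add", "r0", "r1"),
    ("store", "r0", 0),
])
print(regs["r0"], memory[0])   # 5 5
```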

    Hosted: In this approach, a traditional operating system (Windows or Linux) runs directly on the hardware. This is called the host OS. The VMM is installed as a service in the host OS. This application creates and manages multiple virtual machines as processes; each virtual machine process has a full operating system inside it, called a guest OS. This approach greatly simplifies the design of the VMM, as it can directly use the services provided by the host operating system. VMware Server, VMware Workstation, VirtualBox and KVM fall in this category.

    Hypervisor based: Hosted VMM solutions have a high overhead, as the VMM does not directly control the hardware. In the hypervisor approach the VMM is directly installed on the hardware. The VMM provides virtual hardware abstractions to create and manage multiple virtual machines. Performance overhead in this approach is very small.

    Another way to classify virtual machines is on the basis of how privileged instructions are handled. Modern processors have a privileged mode of execution that the OS kernel executes in, and a non-privileged mode that the user programs execute in. This can cause a problem for virtual machines because although the host OS (or the hypervisor) runs in privileged mode the entire guest OS runs in non-privileged mode. Most of today’s OSs are specifically designed to run in privileged mode, and hence their binaries end up having some instructions that must be run in privileged mode. (For example, there are 17 such instructions in the Intel IA-32 architecture.) This causes a problem for the virtual machine, and there are two major approaches to handling this problem.

    Para virtualization: In this approach, the binary of the OS needs to be rewritten statically to replace the use of the privileged instructions by appropriate calls into the hypervisor. In other words, the operating system needs to be ported to the virtual hardware abstraction provided by VMM. This requires changes in the operating system code. This approach has least performance penalty. This is the approach taken by Xen.

    Full virtualization: In this approach, no change is made in the operating system code. There are two ways of supporting this.

    • Using run time emulation of the privileged instructions. The VMM monitors program execution during runtime, and takes over control of execution whenever a privileged instruction arises in the guest OS. This approach is called binary translation. VMWare uses this technology.
    • Hardware-assisted virtualization: Both Intel and AMD have come up with virtualization extensions to their hardware. Intel calls this VT technology and AMD calls it SVM technology. These extensions provide an extra privilege level for the VMM to run in, along with a number of additional features, like nested page tables and an IOMMU, to make virtualization more efficient. (A quick way to check for these extensions on Linux is sketched below.)
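A minimal check, reading the CPU flags that the Linux kernel exposes ('vmx' indicates Intel VT-x, 'svm' indicates AMD-V/SVM):

```python
# Check whether the CPU advertises hardware virtualization extensions (Linux only).
def hw_virtualization_support():
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V (SVM)"
    return None

print(hw_virtualization_support() or "no hardware virtualization extensions found")
```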

    Virtualization Vendors

    VMware: VMware has a suite of products in this area. There are two hosted products, called VMware Workstation and VMware Server. Their hypervisor product is called VMware ESX; a version of ESX that comes burned into the firmware is called VMware ESXi. They have Virtual Center as a management product to manage the complete virtual machine infrastructure in the data center. All these products are based on dynamic binary translation technology, and they support various flavors of Windows and Linux.

    Xen: Xen is an open source project based on para-virtualization and hypervisor technologies. Linux is modified to support para-virtualization, and Xen now supports Windows with hardware-assisted virtualization. There are a number of products based on Xen: Citrix, which bought XenSource, has a couple of Xen-based products; Sun has xVM; Oracle has Oracle VM. Red Hat and SUSE have been shipping Xen as part of their Linux distributions for some time.

    Hyper-V: This is Microsoft’s entry in this space. It is similar to the Xen architecture. It also requires hardware assistance. It comes bundled with Windows server 2008, and supports running Windows and Linux guest operating systems in the virtual machines.

    Advantages of Virtualization

    Virtualization has also provided some new capabilities. Server provisioning becomes very easy: it is just creating and managing a virtual machine. This has transformed the way testing and development are done. There is another interesting feature called VMotion, or live migration, where a running virtual machine can be moved from one physical machine to another. Execution of the virtual machine is briefly suspended, the entire image of the virtual machine is moved to a different machine, and execution is then restarted, with the guest OS continuing from exactly the point where it was suspended. This eliminates the need for downtime, even for things like hardware maintenance. It also enables dynamic resource management, or utility computing.

    Adoption of server virtualization has been phenomenal. There are already hundreds of thousands of servers running virtual machines. Initial adoption of virtual machines was restricted to test and development, but the technology has now matured enough to become quite popular in production too.

    About the Authors

    Anurag Agarwal

    Anurag Agarwal has more than 11 years of industry experience in both India and the US. Prior to founding KQInfotech, he was a technical director at Symantec India. Anurag has designed and developed various products at Symantec (earlier Veritas). During 2006-2007, Anurag conceived the idea of Software Fault Tolerance for Xen at Symantec, and was awarded the company's highest technical award, Outstanding Innovator, in 2006 for this invention. Anurag built and led a team of ten people in India to take it from the idea stage to a product.

    During the same time, Anurag started working with the College of Engineering, Pune, where he and his friends offered a full-semester course on the Linux kernel. Anurag was also involved in mentoring a large number of students from various engineering colleges. This involvement in teaching and mentoring students resulted in the formation of KQInfotech with a training and mentoring focus. Prior to this, Anurag architected a scalable transaction system for the cluster file system at Symantec in the USA. This architecture improved the scalability of the cluster file system from three nodes to sixteen nodes and beyond, and he was awarded a Star Award for this work. He has filed half a dozen patents at Symantec. Anurag has extensive knowledge of Solaris, the Linux kernel, file systems, storage technologies and virtualization. He has an ME from the Indian Institute of Science, Bangalore, and a BE from MBM Engineering College, Jodhpur.

    Anand Mitra
    After completing his post-graduation (IIT Bombay) in 2001, Anand worked with Symantec India (formerly Veritas). Prior to founding KQInfotech, he was a Principal Software Engineer at Symantec, chartered with the task of scoping and designing support for Windows on Xen-based Fault Tolerance. He worked for 6.5 years on the clustered filesystem products VxFS & CFS. He architected the online upgrade for the Veritas File System and designed the write fastpath, which improved the performance of the file system. He also designed the integration of the POWER6 (PowerPC) storage keys CPU feature into the Veritas storage stack. He co-maintained technical relations with IBM for special proprietary kernel interfaces within AIX, and designed a filesystem pre-allocation API for the IBM DB2 database.

    Chitale Dairy Consolidates Two Datacenters into One with VMware Infrastructure

    (news sent in by PuneTech reader Chirag Dalal)

    Yahoo! Finance reports that Pune’s Chitale Dairy has used VMware’s virtualization infrastructure to consolidate its two data centers into one and save costs:

    Chitale Dairy, which produces about 400,000 liters of milk per day as well as cream, butter and yogurt, faced operational challenges with 10 physical servers spread across two datacenters in a town 500 kilometers from the nearest city. In its remote location, the company found it expensive and challenging to source and retain qualified IT support staff while also grappling with server sprawl.

    By consolidating its two physical operations into one virtual datacenter using VMware Infrastructure, Chitale Dairy reduced server hardware acquisition costs by 50 percent, software acquisition costs by 75 percent, and power consumption in half. VMware also reduced server deployment times from three weeks to three hours and the time to restore a corrupted server from six or seven hours to 10 minutes.

    See full article.
