Why do we need server virtualization?

Virtualization is fast emerging as a game-changing technology in the enterprise computing space. What was once viewed as a technology useful only for testing and development is going mainstream and is affecting the entire data-center ecosystem. This article, by Milind Borate, covers the important use cases of server virtualization and is the second in PuneTech’s series of articles on virtualization. The first article gave an overview of server virtualization. Future articles will deal with the management of virtual machines and other types of virtualization.

Introduction

Is server virtualization a new concept? It isn’t: traditional operating systems do just that. An operating system provides a virtual view of a machine to the processes running on it. Resources are virtualized:

  • Each process gets its own virtual address space (see the sketch after this list).
  • A process’s access privileges control which files it can access; that is storage virtualization.
  • The scheduler virtualizes the CPU so that multiple processes can run without conflicting with each other.
  • The network is virtualized by multiplexing multiple streams (for example, TCP connections) over the same physical link.
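
To make the first bullet concrete, here is a minimal C sketch (my illustration, not from the original article): after a fork(), parent and child print the same virtual address for a global variable, yet a write in the child is invisible to the parent, because each process has its own address space.

    /* Demo: the same virtual address holds different contents in two
     * processes, because each process gets its own address space. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int shared = 42; /* lands at the same virtual address in both processes */

    int main(void)
    {
        if (fork() == 0) {      /* child */
            shared = 99;        /* modifies only the child's copy */
            printf("child:  &shared=%p value=%d\n", (void *)&shared, shared);
            return 0;
        }
        wait(NULL);             /* parent */
        printf("parent: &shared=%p value=%d\n", (void *)&shared, shared);
        return 0;
    }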

Storage and network are only weakly virtualized in traditional operating systems because some global state is shared by all processes. For example, the same IP address is shared by all processes. In the case of storage, the same namespace is used by all processes, and over time some files/directories become de-facto standards. For example, all processes look at the same /etc/passwd file.

Today, the term “server virtualization” means running multiple OSs on one physical machine. Isn’t that just adding one more level of virtualization? An additional level generally means added cost, lower performance and higher maintenance. Why then is everybody so excited about it? What does server virtualization provide in addition to traditional OS offerings? An oversimplified version of the question is: if I can run two processes on one OS, why should I run two OSs with one process each? This document enumerates the drivers for running multiple operating systems on one physical machine, presenting a use case for each, evaluating the virtualization-based solution, suggesting alternatives where appropriate and discussing future trends.

Application Support

Use case: I have two different applications. One runs on Windows and the other runs on Linux. The applications are not resource intensive, and a dedicated server for each would be under-utilized.

Analysis: This is a weak argument in an enterprise environment because enterprises want to standardize on one OS and one OS version. Even if you find Windows and Linux machines in the same department, the administrators are two different people. I wonder if they would be willing to share a machine. On the other hand, you might find applications that require conflicting versions of, say, some library, especially on Linux.

Alternative solution: Wine allows you to run Windows applications on Linux. Cygwin allows you to run Linux applications on Windows. Unfortunately, it’s not the same as running the application directly on the required OS. I won’t bet that a third-party application would run out of the box under these virtual environments.

Future: Some day, developers will get fed up with writing applications for a particular OS and then porting them to others. Java provides us with a host/OS-independent virtual environment, and wants programmers to write code that is not targeted at a particular OS. It succeeded in some areas, but there is still a lot of software written for a particular OS. Why did everybody not move to Java? I guess because Java does not let me do everything that I can do using OS APIs; in a way, that’s Java’s failure to provide a fully generic virtual environment. In future, we will see more and more software developed over OS-independent APIs. Databases would be the next target for establishing generic APIs.

Conflicting Applications

Use case: I have two different applications. If I install both on the same machine, both fail to work. In fact, they might actually work, but the combination is not supported by my vendor.

Analysis: In the current state of affairs, an OS is not just hardware virtualization. The whole gamut of libraries, configuration files and daemons is tied up with an OS. Even if an application does not depend on the exact kernel version, it very much depends on library versions. It’s also possible that the applications make conflicting changes to some configuration file.

Alternative solution: OpenVZ modifies Linux to provide multiple “containers” inside the same OS. The machine runs a single kernel but provides multiple isolated environments. Each isolated environment can run an application that would be oblivious to the other containers.
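
As a hedged illustration of the container idea, here is a minimal sketch of mine using the namespace support that later entered the mainline Linux kernel, rather than OpenVZ itself: the child below gets its own hostname, invisible to the rest of the system.

    /* Sketch: a child in its own UTS namespace, the isolation idea
     * containers build on. Requires root (CAP_SYS_ADMIN). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char stack[64 * 1024];

    static int child(void *arg)
    {
        sethostname("container", strlen("container"));
        char name[64];
        gethostname(name, sizeof name);
        printf("inside:  hostname=%s\n", name);  /* prints "container" */
        return 0;
    }

    int main(void)
    {
        /* CLONE_NEWUTS gives the child a private hostname domain. */
        pid_t pid = clone(child, stack + sizeof stack,
                          CLONE_NEWUTS | SIGCHLD, NULL);
        waitpid(pid, NULL, 0);

        char name[64];
        gethostname(name, sizeof name);
        printf("outside: hostname=%s\n", name);  /* host name unchanged */
        return 0;
    }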

Future: I think operating systems need to support containers by default. The process-level isolation provided for memory and CPU needs to be extended to storage and network as well. On the other hand, I also hope that application writers desist from depending on shared configuration and shared libraries, and pay some attention to backward compatibility.

Fault Isolation

Use case: If an application, or the operating system running it, faults, I want my other applications to run unaffected.

Analysis: A faulty application can bring down an entire server, especially if the application runs in a privileged mode or can be attacked over a network. A kernel driver bug or operating system bug also brings down a server, although operating systems are getting more stable and a server going down due to an operating system bug is rare nowadays.
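
By contrast, an unprivileged user-mode fault is already well contained by the process boundary, as this small hypothetical sketch shows; it is kernel-mode faults that escape it.

    /* Sketch: a parent survives, and detects, a crashing child.
     * Process boundaries give this isolation; kernel-mode bugs
     * do not respect it, which is where virtual machines help. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        if (fork() == 0) {
            volatile int *p = NULL;
            *p = 1;              /* child faults deliberately */
            return 0;
        }

        int status;
        wait(&status);
        if (WIFSIGNALED(status))
            printf("child died with signal %d; parent unaffected\n",
                   WTERMSIG(status));
        return 0;
    }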

Alternative solution: Containers can help here too. Containers provide better isolation amongst applications running on the same OS. But bugs in kernel-mode components cannot be addressed by containers.

Future: In the near future, we are likely to see micro-kernel-like architectures built around virtual machine monitors. Lightweight operating systems could be developed to work only with virtual machine monitors. Such a solution will provide fault isolation without incurring the overheads of a full operating system running inside a virtual machine.

Live Application Migration

Use case: I want to build a datacenter with utility/on-demand/SLA-based computing in mind. To achieve that, I want to be able to live-migrate an application to a different machine. I can run the application in a virtual machine and live-migrate the virtual machine.

Analysis: The requirement is really to migrate an application, but migrating a process is not supported by existing operating systems. Also, the application might make some global configuration changes that need to be available on the migration target.

Alternative solution: As noted earlier, OpenVZ modifies Linux to provide multiple “containers” inside the same OS, and it also supports live migration of a container.

Future: As discussed earlier, operating systems need to support containers by default.

Hardware Support

Use case: My operating system does not support the cutting edge hardware I bought today.

Analysis: Here again, I’m not bothered about the operating system itself, but my applications run only on this OS. Also, enterprises like to use the same OS version throughout the organization. If an enterprise sticks to an old OS version, that version does not work with the new hardware it buys. If the enterprise is willing to move to a newer OS, that OS may not work with the existing old hardware.

But the real issue here is the lack of standardization across hardware and driver development models. I fail to understand why every wireless LAN card needs a different driver. Can all hardware vendors not standardize the IO ports and commands so that one generic driver works for all cards? On the other hand, every OS, and even every OS version, has a different driver development model. That means every piece of hardware requires a different driver for each OS version.

Alternative solution: I cannot think of a good alternative solution. One specific issue, the unavailability of wireless LAN card drivers for Linux, is addressed by NdisWrapper, which allows us to access a wireless card on Linux by loading a Windows driver.

Future: We either need hardware-level standardization or the ability to run the same driver on all versions of all operating systems. It would be good to have wrappers, like NdisWrapper, for all types of drivers and all operating systems. A hardware driver would be written to a generic API provided by the wrapper framework, and the generic API would be implemented by the operating system vendors.

Software Development Environment

Use case: I want to manage hardware infrastructure for software development. Every developer and QA engineer needs dedicated machines. I can quickly provision a virtual machine when the need arises.

Analysis: Software under development fails more often than a released product. Software developers and QA engineers want an isolated environment for their tests so that they can correctly attribute bugs to the right application. Also, software development environments require frequent re-provisioning, as the product under development needs to be tested under different operating systems.

Alternative solution: Containers would work for most software development. I think the exception is kernel-level development.

Future: Virtual machines found an instant market in software QA labs and will continue to flourish there.

Application Configuration Replica

Use case: I want to ship some data to another machine. Instead of setting up an identical application environment on the other machine to access the data, I want to ship the entire machine image itself. Shipping a physical machine image does not work because of hardware-level differences; hence, I want to ship a virtual machine image.

Analysis: This is another hard reality of life: data formats are not compatible across multiple versions of a software product. Portable data formats are used by human-readable documents. File-system data formats are also stable to a large extent, and you can mount a FAT file-system or an ISO 9660 file-system on virtually any version of any operating system. The same level of compatibility has not been established for other structured data, and I don’t see that happening in the near future. Even if this hurdle is crossed, you need to worry about correctly shipping all the application configuration, which itself could be different for the same software running on different OSs.

Alternative solution: An OpenVZ container could be a lightweight alternative to a complete virtual machine.

Future: The future seems inclined towards “computing in a cloud”. Network bandwidth is increasing, and so is the trend towards outsourced hosting. Mail and web services have been outsourced for a long time. Oracle On Demand allows us to outsource database hosting. Google (Writely) wants us to outsource document hosting. Amazon allows us to outsource both storage and computation. In future, we will be completely oblivious to the location of our data and applications; the only process running on your laptop would be an improved web browser. In that world, only the system software engineers who build these datacenters would be worried about hardware and operating system compatibilities, and even they will not be overly bothered, because data-center consolidation will reduce the diversity in hardware and OSs.

Thin Client Desktops

Use case: I want to replace desktop PCs with thin clients. A central server will run a VM for each thin client. The thin client will act as a display server.

Analysis: Thin clients could bring down maintenance costs substantially. Thin-client hardware is more resilient than a desktop PC. Also, it’s easier to maintain the software installed on a central server than to manage several PCs. But it’s not necessary to run a full virtual machine for each thin client; it’s sufficient to allow users to run the required applications from a central server and make central storage available.

Alternative solution: Unix operating systems are designed as server operating systems, and thin X terminals are still prevalent in the Unix desktop market. Microsoft Windows, the most prevalent desktop OS, is designed as a desktop OS, but Microsoft has also added substantial support for server-based computing: Microsoft’s Terminal Services allows multiple users to connect to a Windows server and launch applications from a thin client. Several commercial thin clients can work with Microsoft Terminal Services or similar services provided by other vendors.

Future: Before the world moves to computing in a global cloud, an intermediate step would be enterprise-wide desktop application servers. Thin clients would become prevalent due to reduced maintenance costs. I hope to see Microsoft come up with better licensing for server-based computing. On Unix, floating application licenses are the norm: with a floating application license, a server (or a cluster of servers) can run only a fixed number of application instances as per the license, and it does not matter which user or thin client launches the application. Similar floating licensing from Microsoft will help.

Conclusion

Server virtualization is a “heavy” solution for the problems it addresses today. These problems could be addressed by operating systems in a more efficient manner with the following modifications:

  • Support for containers.
  • Support for live migration of containers.
  • Decoupling of hardware virtualization and other OS functionalities.

If existing operating systems muster enough courage to deliver these modifications, server virtualization will have a tough time. But it’s unrealistic to expect complete overhauls of existing operating systems: it’s possible to implement containers as a part of an OS, but decoupling hardware virtualization from the OS is a hard job. Instead, we are likely to see new lightweight operating systems designed to run only in server virtualization environments. Such a lightweight operating system will have the following characteristics:

  • It will do away with functionality already implemented in the virtual machine monitor.
  • It will not worry about hardware virtualization.
  • It might be a single user operating system.
  • It might expect all processes to be co-operative.
  • It will have a minimal kernel-mode component and will be mostly composed of user-mode libraries providing OS APIs.

Existing virtual machine monitors would also take up more responsibility in order to support lightweight operating systems:

  • Hardware support: The hardware supported by a VMM will be of primary importance; the OS only needs to support the virtual hardware made visible by the VMM.
  • Complex resource allocation and tracking: I should get finer control over the resources allocated to virtual machines and be able to track resource usage. This involves CPU, memory, storage and network (see the sketch after this list).
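
As a hypothetical illustration of such fine-grained control and tracking, here is a small C sketch of mine using the Linux cgroup v2 filesystem, a facility that post-dates this article; it assumes a v2 hierarchy mounted at /sys/fs/cgroup and root privileges.

    /* Sketch: cap a group of processes at 256 MiB of memory and
     * move ourselves into it; memory.current then tracks usage. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void write_file(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(1); }
        fputs(val, f);
        fclose(f);
    }

    int main(void)
    {
        /* Create the group and set its memory limit. */
        mkdir("/sys/fs/cgroup/demo", 0755);
        write_file("/sys/fs/cgroup/demo/memory.max", "268435456");

        /* All further allocations by this process and its children
         * are accounted against the 256 MiB limit. */
        char pid[32];
        snprintf(pid, sizeof pid, "%d", (int)getpid());
        write_file("/sys/fs/cgroup/demo/cgroup.procs", pid);
        return 0;
    }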

I hope to see a lightweight OS implementation targeted at server virtualization in the near future. It would be a good step towards modularizing operating systems.

Acknowledgements

Thanks to Dr. Basant Rajan and V. Ganesh for their valuable comments.

About the Author – Milind Borate

Milind Borate is the CTO and VP of Engineering at Druvaa, a Pune-based continuous data protection startup. He has over 13 years’ experience in enterprise product development and delivery. He worked at Veritas Software as Technical Director for SAN-FS and served on the Veritas patent filter committee. Milind has filed over 15 patent applications (4 granted) and co-authored “Undocumented Windows NT” in 1998. He holds a BE (CS) degree from the University of Pune and an MTech (CS) degree from IIT Bombay.

This article was written when Milind was at Coriolis, a startup he co-founded before Druvaa.


10 thoughts on “Why do we need server virtualization?”

  1. Good article,

    but IMHO the current virtualization technology for server virtualization or hardware abstraction isn’t ready for production servers.

    99% of VMware use cases are for application migration, DR and testing.

    The reason is the poor IO stack and CPU/memory overhead.

    Linux KVM is a much better approach.

    Jaspreet

  2. Until a year back, most VM deployments were in test and dev. But VMware claims that a good number of their deployments have started moving production applications onto VMware for server consolidation purposes.

  3. Being in a startup, I have realized that most of these industry reports and news items are just PR, intended to influence people like you and me and to create a positive vibe around it.

    Jaspreet

  4. While at Symantec, we did a number of benchmarks with Xen, and the performance impact even for a TPC-C load was not very high. There is no good reason why VMware would not be able to come pretty close to Xen. I don’t have a very good reason to believe that these technologies are not production ready.

  5. Anurag,

    two thoughts –

    1. Xen is not KVM.
    Xen would most likely be dumped from Linux and RHEL distros in coming times.

    2. VMware – the response time and peak performance numbers are not production ready.

    There is always an 80-20 rule in enterprises. Virtualization has so far hit only the not-so-important 20% of production/dev servers.

    Tell me how many applications in Symantec’s data center or production environment run on VMware/Xen?

  6. Hi,

    I don’t know about the Symantec data center, but I was at a client last month where they have consolidated most of their development work onto VMware servers.

    Xen going to be dumped from Linux and RHEL was a story that KVM backers wanted people to believe. Xen is part of the Linux kernel tree. Sun is shipping a solution based on Xen (xVM), Oracle is shipping Oracle VM based on Xen, and Citrix has a solution based on Xen. Microsoft has launched Hyper-V with Windows Server.

    Virtualization is getting a lot of traction, and it has a very good value proposition. There is going to be a class of applications that are very I/O-intensive and CPU-intensive which will not get there. But for a large number of data center solutions that are based on Microsoft technologies, it is getting significant traction. Most of the interesting play in virtualization is happening in the x86 space and, IMO, in the Microsoft space.

    Regards,
    Anurag.

  7. Rethinking current virtualisation practices, they are indeed not that resource efficient (redundant OS installation within each VM).

    At the other end we have the JVM/CLR at the application level, which do not provide enough isolation from the underlying OS.

    Does anyone know of an intermediate solution that is configurable as a full VM, but does not need the full redundancy of one, just like a JVM?

  8. Mimoun, I am by no means an expert in this area, so take my comment with a pinch of salt, but I think that Solaris containers are probably the intermediate solution that you are looking for. However, I don’t really know why this concept has not caught on for other platforms. Maybe someone more knowledgeable might be able to answer that one.

  9. It looks to me that VMs are not yet production ready for the following reasons:
    1. Resource constraints while connecting to external sub-systems like storage/network; this is induced mainly by hardware.
    2. Hardware failures are going to cost multiple production environments at the same time.
    3. VM redundancy is not foolproof.

    So, to the best of my knowledge, the first target is to hit the test/dev environment for consolidation and then go for production. Anyway, production environments are few in number.
