
Stop Virtual Machine Sprawl with Colama

This is a product pitch for Colama, a product built by Pune-based startup Coriolis. For the basics of server virtualization, see our earlier guest posts: Introduction to Server Virtualization, and Why do we need server virtualization. Virtualization is fast emerging as a game-changing technology in the enterprise computing space, and Colama is trying to address an interesting new pain-point that is making its presence felt.

Virtualization is fast becoming an accepted standard for IT infrastructure. While it comes as a boon to the development and QA communities, IT practitioners are dealing with the pressing issue of virtual machine sprawl, which surfaces due to ad hoc and hence uncontrolled adoption of virtualization. After describing the problem and its effects, this article outlines a solution called Colama, offered by Coriolis Technologies.

 

Virtual machines have been slipping in under the covers everywhere in the IT industry. Software developers like virtual machines because they can easily mimic a target environment. QA engineers prefer virtual machines because they can simultaneously test the software on different configurations. Support engineers can ensure reproducibility of an issue by pointing to an entire machine rather than detailing the individual steps and/or configuration requirements on a physical host. In many cases, adoption of virtual machines has been primarily driven by users’ choice rather than any coherent corporate strategy. The ad-hoc and uncontrolled use of virtual machines across the organization has resulted in a problem called virtual machine sprawl, which has become critical for today’s IT administrators.

Virtual machine sprawl is an unpleasant side effect of server virtualization and its near exponential growth over the years. Here are the symptoms:

  • At any given point, the virtual machines running in the organization are unaccounted for. Information such as who created them and when, who used them, what configurations they have, what licensed software they use, whether security patches have been applied, and whether the data is backed up is not maintained or tracked anywhere.
  • Most commonly, people freely copy each other’s virtual machines, and no usage tracking or access control is in place.
  • Because storage is cheap, too many identical or similar copies of the same machines float around the organization. But a reduction in storage cost does not reduce the operational cost of storage management, search, backup, etc. Data duplication and redundancy are a problem even when storage is plentiful.
  • Because there is no mechanism to keep track of why an image was created, it is hard to figure out when it should be destroyed. People tend to forget what each image was created for, and keep them around just in case they are needed. This increases the storage requirements.
  • Licensing implications: a virtual machine copied from one containing licensed (seat-limited) software needs to be tracked over its life span in order to control the use of that licensed software.

There are many players in the industry who address this problem, but most virtual lab management products are tied to one specific virtualization technology. For example, VMWare Lab Manager works only with VMWare's virtualization technology. In a heterogeneous virtualization environment filled with Xen, VMWare, VirtualBox and Microsoft virtual machines, such an approach falls short.

Colama is Coriolis Technologies' solution to this problem. Colama manages the life cycle of virtual machines across an organization. While tracking and managing virtual machines, Colama remains agnostic to the underlying virtualization technology.

     

    Here are some of the features of Colama:

  • Basic SCM for virtual machines: check in, check out, clone, tag, and comment on virtual machines to track their revisions.
  • Image inspection: Colama provides automatic inspection, extraction and display of image-related data, such as OS version and installed software, and also facilitates search over the extracted data. For example, you can search for the virtual machines that have Windows Server 2003 with a particular service pack and Oracle 10g installed! (A sketch of this kind of inspection appears after the feature list.)
  • Web based interface: Navigate through the virtual machine repository of your organization using just a web browser.
  • Ownership and access control: create a copy of a machine for yourself and decide who can use “your” machine.
  • De-duplication: Copying/Cloning virtual machines happens without any additional storage requirement.
  • Physical machine provisioning (lab management): spawn a virtual machine of your choice on a physical host that is available and ‘ready’.
  • Management reports for auditing and compliance: user activity reports, virtual machine history, health information (up/down time) of virtual machines, license reports of the virtual machines, etc.
  • Virtualization agnostic: works for virtual machines from all known vendors. 
  • Please note: This product-pitch, by Barnali Ganesh, co-founder of Coriolis, is featured on PuneTech because the editors found the technology interesting (or more accurately, the pain-point which it is addressing). PuneTech is a purely non-commercial website and does not accept any consideration (monetary or otherwise) for any of the content on the site. If you would like your product to be featured on the front page, send us an article and we’ll publish it if we feel it’s interesting to our readers. You can always add a page to the PuneTech wiki by yourself, as long as you follow the “No PR” guidelines.
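The image-inspection feature mentioned in the list above can be approximated with off-the-shelf tooling. The sketch below uses the libguestfs Python bindings to pull the OS name and version out of a disk image without booting it; the image path is hypothetical, and this is only an illustration of the idea, not Colama's actual implementation.

```python
# Minimal sketch of offline image inspection using the libguestfs Python bindings.
# "webserver.qcow2" is a hypothetical image path used purely for illustration.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("webserver.qcow2", readonly=1)   # never modify the image
g.launch()

for root in g.inspect_os():                        # find installed operating systems
    name = g.inspect_get_product_name(root)        # e.g. "Windows Server 2003"
    major = g.inspect_get_major_version(root)
    minor = g.inspect_get_minor_version(root)
    print(f"{name} ({major}.{minor}) on {root}")

g.shutdown()
g.close()
```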

    Why do we need server virtualization

Virtualization is fast emerging as a game-changing technology in the enterprise computing space. What was once viewed as a technology useful for testing and development is going mainstream and is affecting the entire data-center ecosystem. This article on the important use-cases of server virtualization, by Milind Borate, is the second in PuneTech’s series of articles on virtualization. The first article gave an overview of server virtualization. Future articles will deal with the issue of management of virtual machines, and other types of virtualization.

    Introduction

    Is server virtualization a new concept? It isn’t, because traditional operating systems do just that. An operating system provides a virtual view of a machine to the processes running on it. Resources are virtualized.

    • Each process gets a virtual address space.
    • A process’ access privileges control what files it can access. That is storage virtualization.
    • The scheduler virtualizes the CPU so that multiple processes can run without conflicting with each other.
    • Network is virtualized by providing multiple streams (for example, TCP) over the same physical link.

Storage and network are weakly virtualized in traditional operating systems because some global state is shared by all processes. For example, the same IP address is shared by all processes. In the case of storage, the same namespace is used by all processes. Over time, some files/directories become de-facto standards. For example, all processes look at the same /etc/passwd file.
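A tiny illustration of the per-process virtual address space mentioned in the list above (POSIX-only, since it uses fork): the child's write to a variable is invisible to the parent, because each process sees its own copy of memory.

```python
import os

counter = 100                          # lives in this process's virtual address space

pid = os.fork()
if pid == 0:
    counter += 1                       # the child mutates only its own copy
    print("child sees:", counter)      # 101
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print("parent sees:", counter)     # still 100: address spaces are isolated
```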

Today, the term “server virtualization” means running multiple OSs on one physical machine. Isn’t that just adding one more level of virtualization? An additional level generally means added cost, lower performance, and higher maintenance. Why then is everybody so excited about it? What does server virtualization provide in addition to traditional OS offerings? An oversimplified version of the question is: if I can run two processes on one OS, why should I run two OSs with one process each? This document enumerates the drivers for running multiple operating systems on one physical machine, presenting a use case for each, evaluating the virtualization-based solution, suggesting alternatives where appropriate, and discussing future trends.

    Application Support

    Use case: I have two different applications. One runs on Windows and the other runs on Linux. The applications are not resource intensive and a dedicated server is under-utilized.

    Analysis: This is a weak argument in an enterprise environment because enterprises want to standardize on one OS and one OS version. Even if you find Windows and Linux machines in the same department, the administrators are two different people. I wonder if they would be willing to share a machine. On the other hand, you might find applications that require conflicting versions of, say, some library, especially on Linux.

    Alternative solution: Wine allows you to run Windows applications on Linux. Cygwin allows you to run Linux applications on Windows. Unfortunately, it’s not the same as running the application directly on the required OS. I won’t bet that a third party application would run out of the box under these virtual environments.

Future: Some day, developers will get fed up with writing applications for a particular OS and then porting them to others. Java provides a host/OS-independent virtual environment; it wants programmers to write code that is not targeted at a particular OS. It succeeded in some areas, but there is still a lot of software written for a particular OS. Why did everybody not move to Java? I guess because Java does not let me do everything that I can do using OS APIs; in a way, that’s Java’s failure at providing a generic virtual environment. In the future, we will see more and more software developed over OS-independent APIs. Databases would be the next target for establishing generic APIs.

    Conflicting Applications

Use case: I have two different applications. If I install both on the same machine, both fail to work. In fact, they might actually work, but it’s not a configuration supported by my vendor.

    Analysis: In the current state of affairs, an OS is not just hardware virtualization. The gamut of libraries, configuration files, daemons is all tied up with an OS. Even though an application does not depend on the exact kernel version, it very much depends on the library versions. It’s also possible that the applications make conflicting changes to some configuration file.

    Alternative solution: OpenVZ modifies Linux to provide multiple “containers” inside the same OS. The machine runs a single kernel but provides multiple isolated environments. Each isolated environment can run an application that would be oblivious to the other containers.
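One way to see what this kind of container isolation means in practice is the Linux kernel's namespace mechanism. The sketch below is not OpenVZ's own implementation, and it requires root; it isolates just the hostname: the child process enters a private UTS namespace and renames itself, while the parent is unaffected.

```python
# Sketch of namespace-based isolation (the general mechanism behind Linux containers).
# Not OpenVZ's implementation; requires root (CAP_SYS_ADMIN) to run.
import ctypes, os, socket

CLONE_NEWUTS = 0x04000000                       # new UTS (hostname) namespace
libc = ctypes.CDLL("libc.so.6", use_errno=True)

pid = os.fork()
if pid == 0:
    # Child: detach into a private UTS namespace and rename itself.
    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (need root?)")
    socket.sethostname("container-demo")
    print("child hostname :", socket.gethostname())   # container-demo
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print("parent hostname:", socket.gethostname())   # unchanged
```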

Future: I think operating systems need to support containers by default. The process-level isolation provided for memory and CPU needs to be extended to storage and network as well. On the other hand, I also hope that application writers desist from depending on shared configuration, and that shared libraries pay some attention to backward compatibility.

    Fault Isolation

    Use case: In case an application or the operating system running the application faults, I want my other applications to run unaffected.

Analysis: A faulty application can bring down an entire server, especially if the application runs in a privileged mode and can be attacked over a network. A kernel driver bug or operating system bug also brings down a server, although operating systems are getting more stable and servers going down due to operating system bugs are rare nowadays.

Alternative solution: Containers can help here too. Containers provide better isolation amongst applications running on the same OS. But bugs in kernel-mode components cannot be addressed by containers.

Future: In the near future, we are likely to see micro-kernel-like architectures around virtual machine monitors. Lightweight operating systems could be developed to work only with virtual machine monitors. Such a solution will provide fault isolation without incurring the overheads of a full operating system running inside a virtual machine.

    Live Application Migration

    Use case: I want to build a datacenter with utility/on-demand/SLA-based computing in mind. To achieve that, I want to be able to live-migrate an application to a different machine. I can run the application in a virtual machine and live-migrate the virtual machine.

    Analysis: The requirement is to migrate an application. But, migrating a process is not supported by existing operating systems. Also, the application might do some global configuration changes that need to be available on the migration target.

    Alternative solution: OpenVZ modifies Linux to provide multiple “containers” inside the same OS. OpenVZ also supports live migration of a container.

    Future: As discussed earlier, operating systems need to support containers by default.

    Hardware Support

    Use case: My operating system does not support the cutting edge hardware I bought today.

Analysis: Here again, I’m not bothered about the operating system. But my applications run only on this OS. Also, enterprises like to use the same OS version throughout the organization. If an enterprise sticks to an old OS version, it may not work with newly bought hardware; if the enterprise is willing to move to a newer OS, that OS may not work with the existing old hardware.

But the real issue here is the lack of standardization across hardware or driver development models. I fail to understand why every wireless LAN card needs a different driver. Can all hardware vendors not standardize the I/O ports and commands so that one generic driver works for all cards? On the other hand, every OS, and even every OS version, has a different driver development model. That means every piece of hardware requires a different driver for each OS version.

Alternative solution: I cannot think of a good alternative solution. One specific issue, the unavailability of wireless LAN card drivers for Linux, is addressed by NdisWrapper, which allows us to access a wireless card on Linux by loading a Windows driver.

Future: We either need hardware-level standardization or the ability to run the same driver on all versions of all operating systems. It would be good to have wrappers, like NdisWrapper, for all types of drivers and all operating systems. A hardware driver should write to a generic API provided by the wrapper framework. The generic API should be implemented by the operating system vendors.

    Software Development Environment

    Use case: I want to manage hardware infrastructure for software development. Every developer and QA engineer needs dedicated machines. I can quickly provision a virtual machine when the need arises.

Analysis: Software under development fails more often than a released product. Software developers and QA engineers want an isolated environment so that they can correctly attribute bugs to the right application. Also, software development environments require frequent re-provisioning, as the product under development needs to be tested under different operating systems.

    Alternative solution: Containers would work for most software development. I think, the exception is kernel level development.

    Future: Virtual machines found an instant market in software QA labs. Virtual machines will continue to flourish in this market.

    Application Configuration Replica

Use case: I want to ship some data to another machine. Instead of setting up an identical application environment on the other machine to access the data, I want to ship the entire machine image itself. Shipping a physical machine image does not work because of hardware-level differences. Hence, I want to ship a virtual machine image.

Analysis: This is another hard reality of life. Data formats are not compatible across multiple versions of a software product. Portable data formats are used by human-readable documents. File-system data formats are also stable to a large extent: you can mount a FAT file-system or an ISO 9660 file-system on virtually any version of any operating system. The same level of compatibility is not established for other structured data, and I don’t see that happening in the near future. Even if this hurdle is crossed, you need to worry about correctly shipping all the application configuration, which itself could be different for the same software running on different OSs.

Alternative solution: An OpenVZ container could be a lightweight alternative to a complete virtual machine.

Future: The future seems inclined towards “computing in a cloud”. Network bandwidth is increasing, and so is the trend towards outsourced hosting. Mail and web services have been outsourced for a long time. Oracle On Demand allows us to outsource database hosting. Google (Writely) wants us to outsource document hosting. Amazon allows us to outsource both storage and computation. In the future, we will be completely oblivious to the location of our data and applications. The only process running on your laptop would be an improved web browser. In that world, only the system software engineers who build these datacenters would be worried about hardware and operating system compatibilities. But they too will not be overly bothered, because data-center consolidation will reduce the diversity in hardware and OS.

    Thin Client Desktops

    Use case: I want to replace desktop PCs with thin clients. A central server will run a VM for each thin client. The thin client will act as a display server.

Analysis: Thin clients could bring down maintenance costs substantially. Thin client hardware is more resilient than a desktop PC, and it’s easier to maintain the software installed on a central server than to manage several PCs. But it’s not necessary to run a full virtual machine for each thin client; it’s sufficient to allow users to run the required applications from a central server and make the central storage available.

Alternative solution: Unix operating systems are designed to be server operating systems, and thin X terminals are still prevalent in the Unix desktop market. Microsoft Windows, the most prevalent desktop OS, is designed as a desktop OS, but Microsoft has also added substantial support for server-based computing. Microsoft’s Terminal Services allows multiple users to connect to a Windows server and launch applications from a thin client. Several commercial thin clients can work with Microsoft Terminal Services or similar services provided by other vendors.

Future: Before the world moves to computing in a global cloud, an intermediate step would be enterprise-wide desktop application servers. Thin clients would become prevalent due to reduced maintenance costs. I hope to see Microsoft come up with better licensing for server-based computing. On Unix, floating application licenses are the norm: with a floating license, a server (or a cluster of servers) can run only a fixed number of application instances, as per the license, and it does not matter which user or thin client launches the application. Such floating licensing from Microsoft would help.

    Conclusion

Server virtualization is a “heavy” solution for the problems it addresses today. These problems could be addressed by operating systems in a more efficient manner with the following modifications:

    • Support for containers.
    • Support for live migration of containers.
    • Decoupling of hardware virtualization and other OS functionalities.

If existing operating systems muster enough courage to deliver these modifications, server virtualization will have a tough time. But it’s unrealistic to expect complete overhauls of existing operating systems: it’s possible to implement containers as part of an OS, but decoupling hardware virtualization from the OS is a hard job. Instead, we are likely to see new lightweight operating systems designed to run only in server virtualization environments. Such a lightweight operating system will have the following characteristics:

    • It will do away with functionality already implemented in virtual machine monitor.
    • It will not worry about hardware virtualization.
    • It might be a single user operating system.
    • It might expect all processes to be co-operative.
    • It will have a minimal kernel mode component. It will be mostly composed of user mode libraries providing OS APIs.

Existing virtual machine monitors would also take up more responsibility in order to support lightweight operating systems:

    • Hardware support: The hardware supported by a VMM will be of primary importance. The OS only needs to support the virtual hardware made visible by VMM.
    • Complex resource allocation and tracking: I should get a finer control over resources allocated to virtual machines and be able to track resource usage. This involves CPU, memory, storage and network.

I hope to see a lightweight OS implementation targeted at server virtualization in the near future. It would be a good step towards modularizing operating systems.

    Acknowledgements

    Thanks to Dr. Basant Rajan and V. Ganesh for their valuable comments.

    About the Author – Milind Borate

Milind Borate is the CTO and VP of Engineering at Druvaa, a Pune-based continuous data protection startup. He has over 13 years of experience in enterprise product development and delivery. He worked at Veritas Software as Technical Director for SAN-FS and served on the board of the Veritas patent filter committee. Milind has filed over 15 patent applications (4 granted) and co-authored “Undocumented Windows NT” in 1998. He holds a BE (CS) degree from the University of Pune and an MTech (CS) degree from IIT Bombay.

    This article was written when Milind was at Coriolis, a startup he co-founded before Druvaa.


    Introduction to Server Virtualization

    Virtualization is fast emerging as a game-changing technology in the enterprise computing space. What was once viewed as a technology useful for testing and development is going mainstream and is affecting the entire data-center ecosystem. Over the course of the next few weeks, PuneTech is going to run a series of articles on virtualization from experts in the industry. This article, the first in the series, gives an introduction to server virtualization, and has been written by Anurag Agarwal and Anand Mitra, founders of KQ Infotech.

    What is virtualization

Virtualization is essentially some kind of abstraction of computing resources. There are various kinds of abstraction. Files provide an abstraction of disk blocks as a linear space. Storage virtualization products, like logical volume managers, virtualize multiple storage devices into a single storage device, and vice versa.

    Processes are also a form of virtualization. A process provides an illusion to the programmer that she has the entire address space at her disposal and has exclusive control of hardware resources. Multiplexing of these resources between all the processes on the system is done by the OS, transparent to the process. This concept has been universally adopted.

    All multi-programming operating systems are characterized by executing instructions in at least two privilege levels i.e. unprivileged for user programs, and privileged for the operating system. The user programs use “system calls” to request the operating system to perform privileged operations on its behalf. The interface which consists of the unprivileged instruction set and the set of system calls define an “extended machine” which is easier to program than the bare machine and makes user programs more portable.

Having the kernel wrap completely around the hardware, without exposing it to the upper layers, has its advantages. But in this model, only one operating system can run at a given time, and one cannot perform any activity that would disrupt the running system (for example, upgrade, migration, or system debugging).

A virtual machine provides an abstraction of a complete physical machine. This is also known as server virtualization. The basic idea is to run more than one operating system on a single server at the same time.

    The History of Server Virtualization

In 1964, IBM developed a virtual machine monitor (CP) to run its various OSes on its mainframes. Hardware was too expensive to leave underutilized, and IBM addressed many of the performance challenges inherent in virtualization by designing hardware amenable to it. However, with the advent of cheap computing resources and the proliferation of commodity hardware, virtualization fell out of favor and came to be viewed as an artifact of an era when computing resources were scarce. This was reflected in the design of the x86 architecture, which did not provide enough support to implement virtualization efficiently.

With the cost of hardware going down and the complexity of software increasing, a large number of administrators started putting one application per server. This provides isolation, so that one application does not interfere with another. Over time, however, it resulted in a problem called server sprawl: too many underutilized servers in data centers. Most Windows servers have an average utilization between 5% and 15%, and this utilization rate will drop further as dual-core and quad-core processors become common. In addition to the cost of the hardware, there are also power and cooling requirements for all these servers. The old problem of underutilized hardware has started surfacing again.

Ironically, the very reason that led to the demise of virtualization in the mainstream was also the cause of its resurrection. The features that made OSes attractive also made them more fragile. This renewed interest in virtualization resulted in VMWare providing a server virtualization solution for x86 machines in 1999. Server consolidation has increased server utilization to the 60% to 80% level, resulting in a 5 to 15 times reduction in the number of servers.
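The consolidation ratio quoted above follows from simple arithmetic: if servers idle at 5-15% utilization and a consolidated host is run at 60-80%, dividing the target utilization by the current one gives roughly how many old servers fit on one new one.

```python
def consolidation_ratio(current_util: float, target_util: float) -> float:
    """How many lightly-used servers can be packed onto one well-utilized host."""
    return target_util / current_util

print(consolidation_ratio(0.15, 0.60))   # 4.0  -> pessimistic end of the range
print(consolidation_ratio(0.05, 0.80))   # 16.0 -> optimistic end of the range
```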

    Virtual machines have introduced a whole new paradigm of looking at operating systems. Traditionally they were coupled with physical machines, and they needed to know all the peculiarities of hardware. Once hardware becomes obsolete, your operating system becomes obsolete too. But virtual machines have changed that notion. They have decoupled the operating systems from hardware by introducing a virtualization layer called virtual machine monitor (VMM).

    Types of Virtualization architectures

    There are many VMM architectures.

Full emulation: This is the oldest virtualization technique in use. An emulator is a software layer that tracks the memory and CPU state of the machine being emulated and interprets each instruction, applying the effect it would have on the virtual state of the machine it has constructed. In a regular server, machine instructions are directly executed by the CPU and the memory is directly manipulated. In full emulation, the instructions are handed over to the emulator, which converts them into a (possibly different) set of instructions to be executed on the actual underlying physical machine. Full emulation is routinely used during the development of software for new hardware that might not be available yet. Virtualization can be considered a special case of emulation where the machine being emulated and the host are similar, which allows unprivileged instructions to be executed natively. Qemu falls in this category.
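The fetch-decode-execute loop at the heart of an emulator can be sketched in a few lines. The toy machine below (three registers and four made-up instructions, nothing like a real ISA) shows how an emulator keeps the guest's state in ordinary data structures and applies each instruction's effect to that virtual state.

```python
def emulate(program):
    """Interpret a toy instruction list, tracking virtual CPU state in a dict."""
    regs = {"r0": 0, "r1": 0, "r2": 0}
    pc = 0                                   # virtual program counter
    while pc < len(program):
        op, *args = program[pc]              # fetch + decode
        if op == "mov":                      # mov reg, immediate
            regs[args[0]] = args[1]
        elif op == "add":                    # add dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        elif op == "jnz":                    # jump to target if reg != 0
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "halt":
            break
        pc += 1
    return regs

prog = [
    ("mov", "r0", 3),        # loop counter
    ("mov", "r1", 0),        # accumulator
    ("mov", "r2", -1),       # decrement constant
    ("add", "r1", "r0"),     # r1 += r0
    ("add", "r0", "r2"),     # r0 -= 1
    ("jnz", "r0", 3),        # repeat while r0 != 0
    ("halt",),
]
print(emulate(prog))         # {'r0': 0, 'r1': 6, 'r2': -1}   (3 + 2 + 1)
```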

Hosted: In this approach, a traditional operating system (Windows or Linux) runs directly on the hardware. This is called the host OS. The VMM is installed as a service in the host OS; this application creates and manages multiple virtual machines as processes. Each virtual machine process has a full operating system inside it, called a guest OS. This approach greatly simplifies the design of the VMM, as it can directly use the services provided by the host operating system. VMWare Server, VMWare Workstation, VirtualBox, and KVM fall in this category.

    Hypervisor based: Hosted VMM solutions have a high overhead, as the VMM does not directly control the hardware. In the hypervisor approach the VMM is directly installed on the hardware. The VMM provides virtual hardware abstractions to create and manage multiple virtual machines. Performance overhead in this approach is very small.

Another way to classify virtual machines is on the basis of how privileged instructions are handled. Modern processors have a privileged mode of execution that the OS kernel executes in, and a non-privileged mode that user programs execute in. This causes a problem for virtual machines: although the host OS (or the hypervisor) runs in privileged mode, the entire guest OS runs in non-privileged mode. Most of today’s OSs are specifically designed to run in privileged mode, and hence their binaries end up having some instructions that must be run in privileged mode. (For example, there are 17 such instructions in the Intel IA-32 architecture.) There are two major approaches to handling this problem.

Para-virtualization: In this approach, the OS is modified statically, ahead of time, to replace the use of privileged instructions with appropriate calls into the hypervisor. In other words, the operating system is ported to the virtual hardware abstraction provided by the VMM. This requires changes to the operating system code, but it has the least performance penalty. This is the approach taken by Xen.

    Full virtualization: In this approach, no change is made in the operating system code. There are two ways of supporting this.

    • Using run time emulation of the privileged instructions. The VMM monitors program execution during runtime, and takes over control of execution whenever a privileged instruction arises in the guest OS. This approach is called binary translation. VMWare uses this technology.
    • Hardware-assisted virtualization: Both Intel and AMD have come up with virtualization extensions to their hardware. Intel calls this VT technology and AMD calls it SVM technology. These extensions provide an extra privilege level for the VMM to run in, and they also provide a number of additional features, like nested page tables and an IOMMU, to make virtualization more efficient. (A quick way to check for these extensions on Linux is sketched below.)
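On Linux, the presence of these extensions shows up as CPU flags, so a quick check is possible without any virtualization software installed:

```python
# Check /proc/cpuinfo for the hardware virtualization flags mentioned above:
# "vmx" indicates Intel VT, "svm" indicates AMD's SVM.
def hw_virt_support(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT"
                if "svm" in flags:
                    return "AMD SVM"
    return None

print(hw_virt_support() or "no hardware virtualization extensions found")
```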

    Virtualization Vendors

VMWare: VMWare has a suite of products in this area. There are two hosted products, called VMWare Workstation and VMWare Server. Their hypervisor product is called VMWare ESX, and there is a version of ESX that comes burned into the BIOS, called VMWare ESXi. They have Virtual Center as a management product to manage the complete virtual machine infrastructure in the data center. All these products are based on dynamic binary translation technology, and they support various flavors of Windows and Linux.

Xen: Xen is an open source project based on para-virtualization and hypervisor technologies. Linux is modified to support para-virtualization; Xen now also supports Windows using hardware-assisted virtualization. There are a number of products based on Xen: Citrix, which bought XenSource, has a couple of Xen-based products, Sun has xVM, and Oracle has Oracle VM. Red Hat and SUSE have been shipping Xen as part of their Linux distributions for some time.

Hyper-V: This is Microsoft’s entry in this space. It is similar to the Xen architecture and also requires hardware assistance. It comes bundled with Windows Server 2008, and supports running Windows and Linux guest operating systems in the virtual machines.

    Advantages of Virtualization

Virtualization has also provided some new capabilities. Server provisioning becomes very easy: it is just a matter of creating and managing a virtual machine. This has transformed the way testing and development are done. There is another interesting feature called VMotion, or live migration, where a running virtual machine can be moved from one physical machine to another. Execution of the virtual machine is briefly suspended, the entire image of the virtual machine is moved to a different machine, and execution is then restarted; the guest OS continues from exactly the point where it was suspended. This eliminates the need for downtime, even for things like hardware maintenance, and it also enables dynamic resource management, or utility computing.
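VMotion itself is driven through VMWare's Virtual Center, but the same live-migration idea is exposed programmatically by open-source stacks. A minimal sketch using the libvirt Python bindings, assuming two KVM/Xen hosts reachable over SSH and a guest named "demo-vm" (all hypothetical names):

```python
import libvirt  # libvirt Python bindings; assumes hosts managed by libvirt

# Hypothetical host and guest names, purely for illustration.
src = libvirt.open("qemu+ssh://host-a.example.com/system")
dst = libvirt.open("qemu+ssh://host-b.example.com/system")

dom = src.lookupByName("demo-vm")
# Live migration: the guest keeps running while its memory is copied across,
# then execution resumes on the destination from the same point.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
print("demo-vm is now running on", dst.getHostname())

src.close()
dst.close()
```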

Adoption of server virtualization has been phenomenal. There are already hundreds of thousands of servers running virtual machines. Initial adoption of virtual machines was restricted to test and development, but the technology has now matured enough to become quite popular in production too.

    About the Authors

    Anurag Agarwal

Anurag Agarwal has more than 11 years of industry experience, both in India and the US. Prior to founding KQInfotech, he was a technical director at Symantec India. Anurag has designed and developed various products at Symantec (earlier Veritas). During 2006-2007, Anurag conceived the idea of software fault tolerance for Xen at Symantec, and was awarded the company's highest technical award, Outstanding Innovator, in 2006 for this invention. Anurag built and led a team of ten people in India to take it from the idea stage to a product.

During the same time, Anurag started working with the College of Engineering, Pune, where he and his friends offered a full-semester course on the Linux kernel. Anurag was also involved in mentoring a large number of students from various engineering colleges. This involvement in teaching and mentoring students resulted in the formation of KQInfotech, with its training and mentoring focus. Prior to this, Anurag architected a scalable transaction system for the cluster file system at Symantec in the USA. This architecture improved the scalability of the cluster file system from three nodes to sixteen nodes and beyond, and he was awarded a Star Award for this work. He has filed half a dozen patents at Symantec. Anurag has extensive knowledge of Solaris, the Linux kernel, file systems, storage technologies and virtualization. He has an ME from the Indian Institute of Science, Bangalore, and a BE from MBM Engineering College, Jodhpur.

    Anand Mitra
After completing his post-graduation (IIT Bombay) in 2001, Anand worked with Symantec India (formerly Veritas). Prior to founding KQInfotech, he was a Principal Software Engineer at Symantec, chartered with the task of scoping and designing Windows support for Xen-based fault tolerance. He worked for 6.5 years on the clustered filesystem products VxFS and CFS. He architected the online upgrade for the Veritas File System and designed the write fastpath, which improved the performance of the file system. He also designed the integration of the POWER6 (PowerPC) storage keys CPU feature into the Veritas storage stack. He co-maintained technical relations with IBM for special proprietary kernel interfaces within AIX, and designed a filesystem pre-allocation API for the IBM DB2 database.

    Chitale Dairy Consolidates Two Datacenters into One with VMware Infrastructure

    (news sent in by PuneTech reader Chirag Dalal)

    Yahoo! Finance reports that Pune’s Chitale Dairy has used VMWare’s virtualization infrastructure to consolidate their two data centers into one and save costs:

    Chitale Dairy, which produces about 400,000 liters of milk per day as well as cream, butter and yogurt, faced operational challenges with 10 physical servers spread across two datacenters in a town 500 kilometers from the nearest city. In its remote location, the company found it expensive and challenging to source and retain qualified IT support staff while also grappling with server sprawl.

    By consolidating its two physical operations into one virtual datacenter using VMware Infrastructure, Chitale Dairy reduced server hardware acquisition costs by 50 percent, software acquisition costs by 75 percent, and power consumption in half. VMware also reduced server deployment times from three weeks to three hours and the time to restore a corrupted server from six or seven hours to 10 minutes.

    See full article.
