Tag Archives: virtualization

Tech Events this Week: @NexusVP, @GNUnify, Mobile Apps, @SpotZot, Cocoa, OpenStack

Here is a list of technology events happening in Pune over the next few days. To be informed of these events in advance, you should subscribe to get the PuneTech calendar event announcements by email. Click here to subscribe.

Panel Discussion: Intelligence at the Edge with @DruvaInc @Helpshift @Uniken_Inc and @nexusvp

  • Date: Thu, 13 Feb 6:00pm – 9:00pm
  • Location: Sumant Moolgaokar Auditorium, Ground Floor, Wing A, ICC Trade Center, SB Road

Nexus Venture Partners is one of India’s top technology venture capital firms, and has invested in 4 of Pune’s top technology startups: Druva, Helpshift, Pubmatic and Uniken.

On 13th February, Nexus is hosting a panel discussion on the topic “Intelligence at the edge”. With the CTOs of Druva and Helpshift, and the CEO of Uniken as panelists, it promises to be a great event that every techie in Pune who’s interested in the future of technology must attend.

Detailed Program:
  • 6:00 – 6:30pm: Introductions
  • 6:30 – 7:30pm: Panel Discussion
    • Moderator: Jishnu Bhattacharjee, MD, Nexus Venture Partners
    • Panelists:
      • Milind Borate, Co-founder & CTO, Druva
      • Baishampayan Ghose, Co-founder & CTO, Helpshift
      • Sanjay Deshpande, Chief Innovation Officer & CEO, Uniken
  • 7:30 – 8:00pm: Q & A
  • 8:00 – 9:00pm: Networking over dinner

Fees and Registration

This event is free and open for anybody to attend. Please register here: https://docs.google.com/forms/d/16L17gU5zTGPFtfIxQWymz5tJWqcfzG3IsWvdeIag5DA/viewform

Please double-check the date/time/venue of the event at the above link. We try to ensure that PuneTech calendar listings are accurate, but occasional errors creep in.

About the PuneTech Calendar

Get event announcements by email. Click here to subscribe (free) to the PuneTech Calendar of events.

(Paid) Android QuickStart

  • Date: 14-15 Feb
  • Location: Pune – Contact Organizer for details

This hands-on Android application programming workshop from Cralina is designed for professionals who want to get started with Android mobile application development. It is also useful for professionals who have worked on another mobile platform and want to develop a good understanding of Android application development concepts.

Target Audience:

  • Developers, technical leads/managers who need to develop a good basic understanding of Android
  • Engineers/Managers who wish to understand Android architecture, application development on Android and also test Android based products
  • Anyone who wishes to develop a good understanding of the workings of Android

Fees and Registration

This is a paid event. Check the event website: http://cralina.com/upcoming-programs#AS for details.

Please double-check the date/time/venue of the event at the above link. We try to ensure that PuneTech calendar listings are accurate, but occasional errors creep in.

About the PuneTech Calendar

Get event announcements by email. Click here to subscribe (free) to the PuneTech Calendar of events.

GNUnify 2014 – 2-day open-source technologies conference

  • Date: 14-15 Feb
  • Location: SICSR, (Symbiosis Institute of Computer Studies and Research, near Om Market, Model Colony)

GNUnify, one of India’s biggest open source conferences, will happen again in Pune this Friday and Saturday (14-15 Feb) at SICSR.

The conference will have talks and workshops on Android Hacking/Development, Programming with GCC/GDB/GMake, HTML5/CSS3/JavaScript/JQuery, Python, Google Admob challenge, OpenStack, Drupal and more.

Lots of people from all over India, and also some from abroad, usually come for this conference. This is your chance to connect with people passionate about technologies, and enthusiastic students.

This conference is free and open for anyone to attend. Click here to register.

From Idea to APP (Google Admob Challenge)

  • Date: 14-15 Feb
  • Location: 7th floor, SICSR, (Symbiosis Institute of Computer Studies and Research, near Om Market, Model Colony)

This is a 2-day workshop happening at SICSR alongside GNUnify, the 2-day free and open source conference. Participants get an insight into how to transform an idea into an app. Student participants can present the same app in the Student’s AdMob Challenge. Find more details at http://www.google.co.in/ads/admob/challenge.html

Entry is free. While only students can compete in the Student’s AdMob Challenge, working professionals can also attend the workshop, as the campus has kept it open to all.

Event website: http://www.meetup.com/Pune-GDG/events/164627612/ and http://gnunify.in

Please double-check the date/time/venue of the event at the above link. We try to ensure that PuneTech calendar listings are accurate, but occasional errors creep in.

About the PuneTech Calendar

Get event announcements by email. Click here to subscribe (free) to the PuneTech Calendar of events.

(Nominal Fees) Breakfast with @TiEPune: Lessons on E-commerce & Internet advertising for entrepreneurs

  • Date: Sat, 15 Feb 8:30am – 10:30am
  • Location: Shekhar Natu Hall, 5th Floor, A-Wing, MCCIA, ICC Towers, SB Road

TiE Pune Breakfast session – with Pehr Luedtke, CEO of SpotZot

As with any business move or expansion, considering an online presence can raise a sometimes dizzying list of questions for an entrepreneur. Exactly what must be put in place to make it happen? How does an online presence change the market for the business? What are competitors doing? How will people shop? What kind of security is required? How will customers pay online?

Small businesses that have little or no e-commerce capabilities on their websites should know about taking the next step in converting their marketing sites into selling locations that extend their customer bases, images and sales in entirely new ways. Those entrepreneurs not yet online need to discover how the Internet is likely to transform their businesses and introduce them to markets far beyond those which are currently in reach.

The most amazing aspect of e-commerce is its ability to impact sales and marketing efforts immediately. By going online, suddenly a neighborhood bakery or a home based consulting service expands its reach to a national, or even international, base of potential customers. Web-based sales know no international boundaries.

About the Speaker – Pehr Luedtke

Pehr has a decade of experience in ecommerce, internet advertising, and retail. Currently, Pehr is the CEO of Spotzot, a leading location-based mobile advertising platform for retailers and brands. Spotzot employs proprietary technology to create compelling offers for consumers near retailers that they care about. Spotzot is venture funded with headquarters in San Francisco and a large development center in Pune. Prior to Spotzot, Pehr was on the executive team at Bazaarvoice, the leading ratings and reviews software company. He joined Bazaarvoice through its acquisition of PowerReviews, where he was CEO. Prior to PowerReviews, Pehr held leadership positions at eBay, Levi Strauss, and Oliver Wyman. He has an undergraduate degree from Princeton and an MBA from Stanford. Pehr lives in the Bay Area with his wife, two children, and a new puppy.

About TiE Pune

Pune chapter of TiE – A non-profit global network of entrepreneurs and professionals, established to foster entrepreneurship and nurture entrepreneurs.

Fees and Registration

Fees: Rs. 150/- for members and Rs. 300/- for non-members of TiE.

Register here: http://em.explara.com/event/tiepune15feb

Please double-check the date/time/venue of the event at the above link. We try to ensure that PuneTech calendar listings are accurate, but occasional errors creep in.

About the PuneTech Calendar

Get event announcements by email. Click here to subscribe (free) to the PuneTech Calendar of events.

Pune Cocoa Meetup: Database Wars: YapDatabase Vs Core Data

  • Date: Sat, 15 Feb 3:00pm – 5:00pm
  • Location: Synerzip (3rd Floor, Revolution Mall, next to City Pride, Kothrud, Pune)

Chaitanya (@chaitanyapandit) from http://www.includetech.co will be speaking about his new-found love for YapDatabase.

YapDatabase is a “key/value store and MUCH MORE” built atop sqlite for iOS & Mac. It has a ton of features; the full list can be found on its GitHub page.

There will be a follow up Panel Discussion on YapDatabase Vs Core Data.

So, roll up your sleeves, brush up your thoughts on both these DBs and see you there.

About Pune Cocoa Meetup Group

See http://www.meetup.com/Pune-Cocoa/ for more details about the Pune Cocoa meetup group.

Fees and Registration

This event is free and open for anybody to attend. Please register here: http://www.meetup.com/PuneCocoa/events/162447612/

Please double-check the date/time/venue of the event at the above link. We try to ensure that PuneTech calendar listings are accurate, but occasional errors creep in.

About the PuneTech Calendar

Get event announcements by email. Click here to subscribe (free) to the PuneTech Calendar of events.

WordPress User Group Pune Meetup

  • Date: Sat, 15 Feb 5:00pm – 7:00pm
  • Location: Cafe Coffee Day, Ground Floor, Mariplex Mall

If you’re interested in meeting other WordPress developers in Pune…

Fees and Registration

This event is free and open for anybody to attend. Please register here: http://www.meetup.com/WordPress/Pune-IN/1103872/

Please double-check the date/time/venue of the event at the above link. We try to ensure that PuneTech calendar listings are accurate, but occasional errors creep in.

About the PuneTech Calendar

Get event announcements by email. Click here to subscribe (free) to the PuneTech Calendar of events, or follow @punetech on twitter

Software Developers Meetup: Intro to developing a mobile app using PhoneGap

  • Date: Sun, 16 Feb 11:00am – 1:00pm
  • Location: 5th Floor, A-Wing, MCCIA, ICC Towers, SB Road

In this tutorial, we shall create a fully functional mobile application with PhoneGap. We shall cover:

  • How to use different local data storage strategies.
  • How to use several PhoneGap APIs such as Geolocation, Contacts, and Camera.
  • How to handle specific mobile problems such as touch events, scrolling, styling, page transitions, etc.
  • How to build an application using a single page architecture and HTML templates.
  • How to build (compile and package) an application for 6 platforms using PhoneGap Build.

To attend this tutorial, all you need is a code editor (Notepad++ is fine too!), a modern browser, and a connection to the Internet (will be provided at the venue). A working knowledge of HTML and JavaScript is assumed, but you don’t need to be a JavaScript guru.

Fees and Registration

This event is free and open for anybody to attend. Please register here: http://www.meetup.com/Software-Developers-In-Pune/events/165018212/

Please double-check the date/time/venue of the event at the above link. We try to ensure that PuneTech calendar listings are accurate, but occasional errors creep in.

About the PuneTech Calendar

Get event announcements by email. Click here to subscribe (free) to the PuneTech Calendar of events, or follow @punetech on twitter

OpenStack Pune Meetup Group: Network Virtualization – Beyond the Basics

  • Date: Tue, 18 Feb 5:30pm – 8:30pm
  • Location: JW Marriott, SB Road

Agenda

  • 5:30-6pm: Registration
  • 6pm-7:30pm: Future of Network Virtualization: Going beyond the basic Use Cases, Speaker : Martin Casado
  • 7:30pm: Networking and Dinner

About OpenStack Pune Meetup Group

This group is for OpenStack and virtualization enthusiasts – developers, IT admins and open source contributors – who want to learn and share knowledge about this upcoming platform. Topics of interest include open source, cloud computing, SaaS, OpenStack (including Quantum networking), virtualization, and building private clouds with OpenStack.

Fees and Registration

This event is free and open for anybody to attend. Please register here: http://www.meetup.com/OpenStack-Pune/events/162779722/

Please double-check the date/time/venue of the event at the above link. We try to ensure that PuneTech calendar listings are accurate, but occasional errors creep in.

About the PuneTech Calendar

Get event announcements by email. Click here to subscribe (free) to the PuneTech Calendar of events.

Call for speakers – IndicThreads conference on Cloud Computing

IndicThreads, the Pune-based organization best known for its annual Java conference, is now diversifying and holding conferences on other areas of technology. The next conference is on Cloud Computing, and will be held in Pune on 20th and 21st August.

The call for papers is out and the deadline for submissions is 31st May – actually, it’s a call for speakers – because you’re only required to submit an abstract now, and a slideshow just before the conference. Submitting entries to conferences like this is, we believe, a good way for Pune’s tech professionals to get visibility for their work, and a good way to get into a paid conference without having to pay the fees. Hence, if you’ve done any work in cloud computing, or virtualization, or Software-as-a-Service, or if you know enough about one of the related fields to be able to give an overview talk, you should submit an abstract.

For IndicThreads’ previous conference (on software quality assurance), we had given a list of reasons why you should strongly consider being a speaker at that conference. You should re-read that post, because most of the reasons continue to apply.

Suggested list of topics

Topics include but are not restricted to the following, stated in no particular order –

  1. Cloud /Grid architecture
  2. Cloud-based Services and Education
  3. Infrastructure as a Service (IaaS)
  4. Software as a Service (SaaS)
  5. Platform as a Service (PaaS)
  6. Virtualization
  7. High-Performance Computing
  8. Cloud-Delivered Testing
  9. Cloud Lock-in vs. Cloud Interoperability
  10. Multi Cloud Frameworks & APIs
  11. Monitoring Cloud Applications
  12. Data Security & Encryption On The Cloud
  13. Elastic Computing
  14. Cloud Databases
  15. Private vs. Public Clouds
  16. Cloud Scalability
  17. Cloud Analytics

Submission

Submit your entry here.


Musings on why Cloud Computing will prevail…


Today’s post is a guest post by Suhas Kelkar. Suhas leads the Innovation & Incubation Lab at BMC Software India. Prior to BMC he was the Vice President of Product Management at Digite, an enterprise software company in the field of Project Portfolio Management. See his linked-in profile for details.

In the recent Hype Cycle for Cloud Computing 2009 special report by Gartner, technologies at the ‘Peak of Inflated Expectations’ include Cloud Computing! (For a description of the five phases of the Hype Cycle, look here.) This means that Cloud Computing is on the verge of entering the “Trough of Disillusionment” phase. Many technologies have been unable to come out of this dreaded trough, where they fail to meet expectations and quickly become unfashionable. Articles such as “Could the cloud lead to an even bigger 9/11” clearly indicate that Gartner’s analysis is right and that cloud computing has indeed reached the peak of hype!

This article contains my musings on why cloud computing will eventually come out of this phase and reshape the way we run business.

Hype Cycle for Cloud Computing 2009

I had an opportunity to attend the VMworld 2009 conference. During the course of this conference, VMware announced its latest initiative, vCloud. vCloud essentially uses VMware’s virtualization technology to create an ecosystem of cloud service providers. With this initiative VMware joins the already crowded space of public cloud providers such as Amazon, Rackspace Cloud and Savvis. Out of all the exhibitors at the VMworld conference, almost everyone was trying to get on the Cloud Computing bandwagon. And this was not even a Cloud Computing focused conference! The more you look into Cloud Computing, the more it feels like it is indeed the next big thing after the internet gold rush of the 90s.

All this hype around Cloud Computing feels like déjà vu. Turn the dial back a few years and the area of Software as a Service (SaaS) went through a very similar transition. After SaaS reached the trough of disillusionment, skeptics were raising doubts. Many argued that they would never consider putting their competitive data (CRM) in a software system outside of their corporate networks. Salesforce had to fight an uphill battle as it tried to establish its SaaS products. However, the value proposition of SaaS, in terms of zero install and pay-as-you-go, was too attractive to ignore. Today SaaS is the architecture of choice for many enterprise software products, and the last time I checked, Salesforce was sitting pretty at a massive market cap of 7.13 billion dollars!

Let’s look at the benefits of Cloud Computing:

  • Lower costs – OPEX not CAPEX: Cloud computing avoids capital expenditure (CapEx) on hardware, software and services by renting them from a third-party provider (such as Amazon). Consumption is usually billed on a utility basis (resource-based, like electricity) or a subscription basis (time-based, like a monthly cable subscription), with little or no upfront cost. You pay as you go and pay for what you need. This seemingly straightforward benefit has a deep impact on business models and strategy.
  • Self-service and agility: Provisioning a server used to take days if not weeks. With Amazon you can procure a server on their public cloud in minutes! Users can generally terminate the contract at any time (improving ROI and eliminating financial risk), and the services are often covered by service level agreements (SLAs) with financial penalties.
  • Focus on your business: Cloud computing abstracts away the underlying resources (server, network and storage) and their management so that you can focus on your core business. A win-win for providers and consumers.
  • Multi-tenancy: Cloud infrastructure and services are multi-tenant by default, with multiple customers sharing resources and the costs associated with them. Providers run centralized infrastructure at low-cost locations, and consumers benefit from the providers’ expertise in utilization and efficiency. Providers gain efficiency through economies of scale and are able to offer the same service at lower cost to happy consumers.
  • Elastic scalability: Hosting your applications on cloud infrastructure enables dynamic (“on-demand”) provisioning of resources in near real time, without having to waste server resources engineered for peak loads. This lets small businesses start offering their services on the web with low entry barriers and then scale as and when their load demands grow (a small provisioning sketch follows this list).
  • Consider, for example, that you want to start a small web-based business selling toys. Your business plan calls for exponential growth, with the number of customers ramping from a few hundred in the first year, to thousands in 2-3 years, to a million plus in 5-7 years. Of course, this plan does not even include wild fluctuations during peak holiday seasons. Until today, planning for this type of scenario involved a lot of upfront costs that created huge barriers to entry for startups. Now, with cloud computing and public cloud infrastructure, such small companies can dream of doing exactly what they want to do, with practically unlimited elasticity!
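
To make the self-service and pay-as-you-go points above concrete, here is a minimal, hypothetical sketch of provisioning and then terminating an on-demand server programmatically. It uses boto3, the AWS SDK for Python, which is not mentioned in the article; the AMI ID, instance type and region are placeholders.

```python
# Hypothetical sketch (not from the article): self-service, on-demand provisioning
# with boto3, the AWS SDK for Python. The AMI ID, instance type and region below
# are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# "Procure a server in minutes" rather than days or weeks.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
server = instances[0]
server.wait_until_running()
server.reload()
print("Running:", server.id, server.public_ip_address)

# Pay for what you use: terminate the instance when the peak load subsides.
server.terminate()
```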

As with the SaaS success story, it is the underlying benefits of the “cloud” that will eventually win over the skeptics. Of course, an important factor would also be for an ecosystem to evolve in a timely fashion. One of the reasons SaaS was successful was that an entire ecosystem that lent itself well to the SaaS model, including web standards (SOAP, WSDL, UDDI) and architectures such as AJAX, became available.

Similar to the platform wars of the eighties (followed by the browser wars of the nineties), Cloud Computing is currently going through a war, with each player trying to establish itself as the destination. Some efforts have started to promote interoperability and openness of the cloud; the Open Cloud Initiative is one such example. However, it remains to be seen how the industry as a whole matures and adopts such efforts…

Cloud computing is here to stay and will eventually succeed as a concept. It has the power to establish new business models and change existing processes. More will have to be written about what it means for the enterprises of tomorrow to manage their businesses in the cloud. Do provide feedback via your comments if you would like to hear more about it…

See also: Suhas’ previous PuneTech article: The Changing Landscape of Data Centers.


Changing Landscape of Data Centers

Today’s post is a guest post by Suhas Kelkar, the Head of Innovation & Incubation Lab at BMC Software India. Prior to BMC he was the Vice President of Product Management at Digite, an enterprise software company in the field of Project Portfolio Management. See his linked-in profile for details.

I had an opportunity to speak at the very first BMC India Technical Event, held in Bengaluru on June 11th, 2009. At this event I talked about the changing landscape of data centers. This article is an excerpt of the talk, intended to facilitate understanding of the presentation. The entire presentation is available here.

There are many factors causing the landscape of data centers to change. There are some disruptive technologies at play, namely virtualization and cloud computing. Virtualization has been around for a while, but only recently has it risen to the level of making a significant impact on data centers. Virtualization has come a long way since VMware first introduced VMware Workstation in the 90s. The product was initially designed to ease software development and testing by partitioning a workstation into multiple virtual machines.

The virtual machine software market has seen a substantial amount of evolution, from the Xen® hypervisor, the powerful open source industry standard for virtualization, to vSphere, billed as the first cloud operating system, which transforms IT infrastructures into a private cloud: a collection of internal clouds federated on demand to external clouds. Hardware vendors are not far behind either. Intel, AMD and other hardware vendors are pumping a lot of R&D dollars into making their chipsets and hardware optimized for the hypervisor layer.

According to IDC, more than 75% of companies with more than 500 employees are deploying virtual servers. As per a survey by Goldman Sachs, 34 per cent of servers among Fortune 1000 companies will be virtualized within the next 12 months, double the current level of 15 per cent.

Cloud computing has similarly existed as a concept for many years. However, various factors are finally coming together that make it ripe to have a major impact. Bandwidth has been increasing significantly across the world, enabling faster access to applications in the cloud. And thanks to the success of SaaS companies, organizations’ comfort level with having sensitive data outside their direct physical control is increasing.

There is an increasing need to support a remote workforce. Applications that used to reside on individual machines now need to be centralized.

The economy is pushing costs down. Last but not least, there is increasing awareness about going green.

All these factors are causing the data center landscape to change. Now let’s look at some of the ways that the data centers are changing.

Data centers today are becoming much more agile. They are quick, light, easy to move and nimble. One of the reasons for this is that in today’s data center, virtual machines can be added quickly as compared to procuring and provisioning a physical server.

Self service provisioning allows end-users to quickly and securely reserve resources and automates the configuration and provisioning of those physical and virtual servers without administrator intervention. Creating a self-service application and pooling resources to share across teams not only optimizes utilization and reduces needless hardware spending but it also improves time to market and increases IT productivity by eliminating mundane and time consuming tasks.

Public clouds have set new benchmarks. For example, the Amazon EC2 SLA for availability is 99.95%, which raised the bar significantly over traditional data center availability SLAs. More recently, another vendor, 3Tera, came out with five nines, i.e. 99.999% availability. To compare the two: 99.999% availability translates into 5.3 minutes of downtime each year, whereas four 9’s (99.99 percent) means 52.6 minutes of downtime per year, and the difference in cost between the two can be substantial.
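
The downtime arithmetic behind those SLA figures is easy to verify; here is a small Python check of the numbers quoted above.

```python
# Convert an availability percentage into expected downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def annual_downtime_minutes(availability_percent):
    return MINUTES_PER_YEAR * (1 - availability_percent / 100.0)

print(round(annual_downtime_minutes(99.999), 1))  # 5.3   -> "five nines" (3Tera)
print(round(annual_downtime_minutes(99.99), 1))   # 52.6  -> "four nines"
print(round(annual_downtime_minutes(99.95), 1))   # 262.8 -> Amazon EC2 SLA figure
```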

Data centers are also becoming more scalable. With virtualization, a data center may have 100 physical servers servicing 1000 virtual servers for your IT. Once again, thanks to virtualization, data centers are no longer constrained by physical space or power/cooling requirements.

The scalability requirements for data centers are also changing. Applications are becoming more computation- and storage-hungry. As an example of how computation-intensive applications have become, enabling a sub-half-second response to an ordinary Google search query involves 700 to 1,000 servers! Google has more than 200,000 servers, and I’d guess the real number is far beyond that and growing every day.

Another example is Facebook, where more than 200 million photos are uploaded every week. Or Amazon, where, post holiday season, data center utilization used to drop below 10%! Google Search, Facebook and Amazon are not one-off examples. More and more applications will be built with similar architectures, and hence the data centers that host and support those applications will need to evolve.

Data centers are becoming more fungible. What that means is that resources used within the data center are becoming easily replaceable. Earlier, when you procured a server, chances were high that it would be there for a number of years. Now virtual servers get created, removed, reserved and parked in your data center all the time!

Data centers are becoming more utility-centric and service-oriented. As an example, look at Cisco‘s definition of Data Center 3.0, where it talks of infrastructure services. Data center users are increasingly going to demand pay-as-you-go, pay-for-what-you-use pricing. Due to various factors, users are going to cut back on large upfront capital expenses and instead prefer smaller, recurring operating expenses.

Most organizations have either seasonal peaks or daily peaks (or both) with a less dramatic cost differential; but the cost differential is still quite dramatic and quite impactful to the bottom line. In addition, the ability to pay for what you use makes it easy to engage in “proofs of concept” and other R&D that requires dedicated hardware. As the discrepancy between peak usage and standard usage grows, the cost difference between the cloud and other options becomes overwhelming.

Technology is changing, business needs are changing, and with changing times, organizations’ social responsibilities are changing too. More and more companies are thinking about the impact they have on the environment. Data centers become a major source of environmental impact, especially as they grow in size.

A major contributor to excessive power consumption in the data center is over provisioning. Organizations have created dedicated, silo-ed environments for individual application loads, resulting in extremely low utilization rates. The result is that data centers are spending a lot of money powering and cooling many machines that individually aren’t doing much useful work.

Cost is not the only problem. Energy consumption has become a severe constraint on growth. In London, for example, there is now a moratorium on building new data centers because the city does not have the electrical capacity to support them!

Powering one server contributes, on average, 6 tons of carbon emissions (depending upon the location of the server and how power is generated in that region). It is not too far-fetched to claim that every data center has some servers that are always kept running because no one knows what business services depend on them, while in reality no one seems to be using them. Even with the servers that are being used, there is an opportunity to increase their utilization and consolidate them.

Now that we have seen some of the ways that data centers are changing, I am going to shift gears and talk about the evolution of data centers, using the analogy of the evolution of the web. The web evolved from Web 1.0, where everyone could access, to Web 2.0, where people started contributing, to Web 3.0, where the mantra is that everyone can innovate.

Applying this analogy to data centers, we can see how they have evolved from their early days of existence to where we are today:

  • In the beginning, data centers were nothing but generic machines stored together. From there they evolved to blade servers, which removed some duplicate components and optimized the packaging. Now, in DC 3.0, they are becoming even more virtual and cloud-based.
  • So from mostly physical servers, we have moved to a mix of physical and virtual servers, to a point where even the underlying resources are treated as virtual.
  • Provisioning time has gone down significantly.
  • User participation has changed.
  • Management tools that used to be nice-to-haves are playing a much more important role and are becoming mandatory. A good example once again is UCS, where the BladeLogic management tool will be pre-installed!
  • The role of a data center admin itself has changed from mostly menial work into a much more sophisticated one!

Slideshow for “Changing Landscape of Data Centers”

If you cannot see the slideshow above, click here.


Rise of the Virtual Machines – Some thoughts on the impact of virtual machines

Schematic diagram of a virtual machine setup (Type I virtual machine monitor): the physical hardware is at the bottom, the virtual machine monitor (VMM) sits in the middle, and multiple virtual machines sit on top of the VMM. Image via Wikipedia

(This post by Dilip Ranade, which takes a look at how virtual machines are going to change the way we do computing, and at how we will start using virtual machines in new and interesting ways as they mature, first appeared on his blog. It is republished here with permission.)

Synopsis: some thoughts on the impact of virtual machines

Virtual machines were invented at IBM in the early seventies, but it appears that it was only VMware, started much later in 1998, that figured out how to make money purely out of virtualization. However, with Xen and Microsoft Virtual Server also entering the fray, things are getting interesting. The green pastures of virtual machines, often misnamed virtualization (which is actually a broader term), now appear poised to support a large herd of bulls.

Although it is hard to predict all the ways in which a new technology will change the world (think of telephones and sex hotlines, for example), here are some thoughts on how VM’s can have an unforeseen impact, arranged roughly in order of increasing ambitiousness:

  • VM’s can break the HW/SW Red Queen Effect
  • VM’s can break vendor “lock-in”
  • Processors can become commoditized
  • Operating systems can become commoditized
  • Rise of virtual appliances
  • Rise of virtual machine swarms

VM’s can Break the HW/SW Red Queen Effect.

Software vendors and hardware vendors are in a mutually beneficial race, leading to an exponential spiral: customers are forced to buy ever more powerful computers to run ever more resource-hogging versions of software. But with a Virtual Machine this collusion can be broken. First of all, customers will balk at buying bloated software, as happened with Microsoft Vista. Secondly, marginally bloated software can be tolerated without having to replace the virtual servers with more powerful machines. For example, a VM can be virtually upgraded to larger memory or more CPUs without making new purchases. Thus, the existence of virtualized servers brings genuine economic pressure for software developers to be more frugal with CPU and memory consumption in their products. This works in conjunction with the next point.
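
As an illustration of the point above about upgrading a VM without buying hardware, here is a hedged sketch using the libvirt Python bindings; libvirt is my example, not something the post mentions, and the connection URI, guest name and sizes are placeholders.

```python
# Hypothetical sketch: "upgrading" a virtual machine to more memory and CPUs
# via the libvirt Python bindings, instead of buying a bigger physical box.
# The connection URI, guest name and sizes below are placeholders.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("example-guest")

# Persistently raise the guest's memory allocation to 4 GiB (value is in KiB)
# and its vCPU count to 4. Both must stay within the domain's configured maximums.
dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

conn.close()
```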

VM’s can Break Vendor “lock-in”

When a software product is on a virtual machine, it is easy and non-disruptive to try out a competing product on another virtual machine, even if it requires a different type of hardware. However, this effect is not as powerful as it could potentially be, because today’s virtualization is too focused on the x86 architecture.

Processors can Become Commoditized

The time is ripe for the evolution of a standard virtual processor, just like TCP/IP is for network protocols. Consider the advantages: considerably reduced development and testing costs (write once, run anywhere); potentially longer software product life (delinked from hardware obsolescence); a clean-room environment for “dusty decks” (very old software can continue to run in a virtual environment). I am thinking of a more abstract kind of virtual processor that is also extensible or mutable in ways that hardware processors cannot be. It may not need to make hard choices between various hardware tradeoffs. The Java virtual machine is an example.

Operating Systems can Become Commoditized

As the virtual processor evolves towards higher levels of abstraction, so should the virtual devices that it connects to. This should reduce the complexity of the virtual operating system; then it should not need a team of thousands of programmers to maintain it. For example, a virtual OS does not need bootstrapping code: it can boot off a virtual network boot service. Similarly, there is no need for every virtual operating system to implement its own file system and to interact only with (virtual) hard disks. All it needs is a simple file system client to discover and connect to the correct virtual Network Attached Storage (NAS) devices.

Rise of Virtual Appliances

General-purpose operating systems can be replaced with lean-and-mean, tailor-made variants designed for specific applications: for example, an OS built specifically for a web server, or a different one for a database.

Rise of Virtual Machine Swarms

The trend towards multi-core, multi-thread programming can be fitted better to a virtual machine designed to work in swarms. The Transputer of the late 1980s comes to mind: multiprocessor meshes could be built from multiple Transputers just by physically connecting the built-in serial links between pairs of Transputers. The standard virtual processor’s simple network interconnect could support easy interfacing within a swarm. I think PVM and grid computing concepts can be considered precursors of VM swarms.

About the Author – Dilip Ranade

For more information, see his linked-in profile.

Comments are closed on this post. If you have any comments, please leave them at the original article.


Seminar on Xen Virtual Machine – 20th Oct

What: Seminar on Xen Virtual Machine architecture and server virtualization
When: Monday, October 20, 2008 from 5:00pm – 7:00pm
Where: Auditorium, Building “C”, Pune IT Park, Bhau Patil Road, Aundh
Fees and Registration: This event is free for all. Register here

KQInfotech presents a seminar on the Xen virtual machine. We will present various server virtualization technologies in general, and then the Xen virtual machine architecture in more detail. That will be followed by a comparison of various virtual machine architectures.

This event is free for all.

You will need to register at http://mentor.kqinfotech.com to attend this seminar. Select the Xen Virtual Machine course from that page; it will take you to a login page. Please choose a login id for yourself, log in, and fill in the details.

Or you could just RSVP to alka at kqinfotech dot com

As usual, check the PuneTech Calendar for all the tech events happening in Pune (which, incidentally, is a very happening place). Also, don’t forget to tell your friends about PuneTech, the … ahem … techies’ hub for bonding (as reported by the Pune Mirror yesterday).

Stop Virtual Machine Sprawl with Colama

This is a product-pitch for Colama, a product built by Pune-based startup Coriolis. For the basics of server virtualization, see our earlier guest posts: Introduction to Server Virtualization, and Why do we need server virtualization. Virtualization is fast emerging as a game-changing technology in the enterprise computing space, and Colama is trying to address an interesting new pain-point that is making its presence felt.

Virtualization is fast becoming an accepted standard for IT infrastructure. While it comes as a boon to the development and QA communities, IT practitioners are dealing with the pressing issue of virtual machine sprawl, which surfaces due to ad-hoc and hence uncontrolled adoption of virtualization. This article describes the problem and its effects, and outlines a solution called Colama, offered by Coriolis Technologies.

Virtual machines have been slipping in under the covers everywhere in the IT industry. Software developers like virtual machines because they can easily mimic a target environment. QA engineers prefer virtual machines because they can simultaneously test the software on different configurations. Support engineers can ensure reproducibility of an issue by pointing to an entire machine rather than detailing the individual steps and/or configuration requirements on a physical host. In many cases, adoption of virtual machines has been driven primarily by users’ choice rather than any coherent corporate strategy. The ad-hoc and uncontrolled use of virtual machines across the organization has resulted in a problem called virtual machine sprawl, which has become critical for today’s IT administrators.

Virtual machine sprawl is an unpleasant side effect of server virtualization and its near exponential growth over the years. Here are the symptoms:

  • At any given point, the virtual machines running in the organization are unaccounted for. Information like who created them and when, who uses them, what configurations they have, what licensed software they run, whether security patches have been applied, and whether the data is backed up is not maintained or tracked anywhere.
  • Most commonly, people freely copy each other’s virtual machines, and no usage tracking or access control is in place.
  • Because of cheap storage, too many identical or similar copies of the same machines are floating across the organization. But a reduction in storage cost does not reduce the operational cost of storage management, search, backup, etc. Data duplication and redundancy is a problem even if storage is plentiful.
  • Because there is no mechanism to keep track of why an image was created, it is hard to figure out when it should be destroyed. People tend to forget what each image was created for, and keep them around just in case they are needed. This increases the storage requirements.
  • Licensing implications: a virtual machine copied from one with licensed (limited) software needs to be tracked for its life span in order to control the use of the licensed software.

There are many players in the industry who address this problem. Most virtual lab management products are tied to one specific virtualization technology. For example, VMware Lab Manager works only with VMware’s virtualization technology. In a heterogeneous virtualization environment filled with Xen, VMware, VirtualBox and Microsoft virtual machines, such an approach falls short.

Colama is Coriolis Technologies’ solution to this problem. Colama manages the life cycle of virtual machines across an organization. While tracking and managing virtual machines, Colama is agnostic to the underlying virtualization technology.

Here are some of the features of Colama:

  • Basic SCM for virtual machines: check in, check out, clone, tag and comment on virtual machines to track their revisions.
  • Image inspection: Colama provides automatic inspection, extraction and display of image-related data, like OS version, software installed, etc., and also facilitates “search” on the extracted data. For example, you can search for the virtual machines that have Windows 2003 Server with Service Pack 4 and Oracle 10g installed!
  • Web based interface: navigate through the virtual machine repository of your organization using just a web browser.
  • Ownership and access control: create a copy of a machine for yourself and decide who can use “your” machine.
  • De-duplication: copying/cloning virtual machines happens without any additional storage requirement.
  • Physical machine provisioning (lab management): spawn a virtual machine of your choice on a physical host that is available and ‘ready’.
  • Management reports (auditing and compliance): user activity reports, virtual machine history, health information (up/down time) of virtual machines, license reports of the virtual machines, etc.
  • Virtualization agnostic: works for virtual machines from all known vendors.

Please note: This product-pitch, by Barnali Ganesh, co-founder of Coriolis, is featured on PuneTech because the editors found the technology interesting (or more accurately, the pain-point which it is addressing). PuneTech is a purely non-commercial website and does not take any considerations (monetary or otherwise) for any of the content on the site. If you would like your product to be featured on the front page, send us an article and we’ll publish it if we feel it’s interesting to our readers. You can always add a page to the PuneTech wiki by yourself, as long as you follow the “No PR” guidelines.

Why do we need server virtualization

Virtualization is fast emerging as a game-changing technology in the enterprise computing space. What was once viewed as a technology useful for testing and development is going mainstream and is affecting the entire data-center ecosystem. This article on the important use-cases of server virtualization, by Milind Borate, is the second in PuneTech’s series of articles on virtualization. The first article gave an overview of server virtualization. Future articles will deal with the issue of management of virtual machines, and other types of virtualization.

Introduction

Is server virtualization a new concept? It isn’t, because traditional operating systems do just that. An operating system provides a virtual view of a machine to the processes running on it. Resources are virtualized:

  • Each process gets a virtual address space (illustrated in the sketch below).
  • A process’s access privileges control what files it can access. That is storage virtualization.
  • The scheduler virtualizes the CPU so that multiple processes can run without conflicting with each other.
  • The network is virtualized by providing multiple streams (for example, TCP) over the same physical link.

Storage and network are weakly virtualized in traditional operating systems because some global state is shared by all processes. For example, the same IP address is shared by all processes. In the case of storage, the same namespace is used by all processes. Over time, some files/directories become de-facto standards. For example, all processes look at the same /etc/passwd file.
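
As a tiny illustration of the first point in the list above (each process gets its own virtual address space), the following POSIX-only Python snippet shows that a child process’s writes to memory are invisible to its parent.

```python
# Minimal illustration of per-process virtual address spaces (POSIX only):
# after fork(), the child works on its own (copy-on-write) copy of memory,
# so its writes never leak back into the parent.
import os

value = ["set by parent"]

pid = os.fork()
if pid == 0:                      # child process
    value[0] = "set by child"     # modifies only the child's copy
    os._exit(0)
else:                             # parent process
    os.waitpid(pid, 0)
    print(value[0])               # prints "set by parent"
```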

Today, the term “server virtualization” means running multiple OSs on one physical machine. Isn’t that just adding one more level of virtualization? An additional level generally means added costs, lower performance, and higher maintenance. Why then is everybody so excited about it? What is it that server virtualization provides in addition to traditional OS offerings? An oversimplified version of the question is: if I can run two processes on one OS, why should I run two OSs with one process each? This document enumerates the drivers for running multiple operating systems on one physical machine, presenting a use case, evaluating the virtualization-based solution, suggesting alternatives where appropriate and discussing future trends.

Application Support

Use case: I have two different applications. One runs on Windows and the other runs on Linux. The applications are not resource intensive and a dedicated server is under-utilized.

Analysis: This is a weak argument in an enterprise environment because enterprises want to standardize on one OS and one OS version. Even if you find Windows and Linux machines in the same department, the administrators are two different people. I wonder if they would be willing to share a machine. On the other hand, you might find applications that require conflicting versions of, say, some library, especially on Linux.

Alternative solution: Wine allows you to run Windows applications on Linux. Cygwin allows you to run Linux applications on Windows. Unfortunately, it’s not the same as running the application directly on the required OS. I won’t bet that a third-party application would run out of the box under these virtual environments.

Future: Some day, developers will get fed up of writing applications for a particular OS and then porting them to others. Java provides us with a host/OS-independent virtual environment. Java wants programmers to write code that is not targeted at a particular OS. It succeeded in some areas, but there is still a lot of software written for a particular OS. Why did everybody not move to Java? I guess because Java does not let me do everything that I can do using OS APIs. In a way, that’s Java’s failure in providing a generic virtual environment. In the future, we will see more and more software developed over OS-independent APIs. Databases would be the next target for establishing generic APIs.

Conflicting Applications

Use case: I have two different applications. If I install both on the same machine, both fail to work. In fact, they might actually work, but it’s not supported by my vendor.

Analysis: In the current state of affairs, an OS is not just hardware virtualization. The whole gamut of libraries, configuration files and daemons is tied up with the OS. Even though an application does not depend on the exact kernel version, it very much depends on the library versions. It’s also possible that the applications make conflicting changes to some configuration file.

Alternative solution: OpenVZ modifies Linux to provide multiple “containers” inside the same OS. The machine runs a single kernel but provides multiple isolated environments. Each isolated environment can run an application that is oblivious to the other containers.

Future: I think operating systems need to support containers by default. The process-level isolation provided at the memory and CPU level needs to be extended to storage and network as well. On the other hand, I also hope that application writers desist from depending on shared configuration and shared libraries, and pay some attention to backward compatibility.

Fault Isolation

Use case: In case an application or the operating system running the application faults, I want my other applications to run unaffected.

Analysis: A faulty application can bring down the entire server, especially if the application runs in privileged mode and can be attacked over a network. A kernel driver bug or operating system bug can also bring down a server. Operating systems are getting more stable, and servers going down due to an operating system bug are rare nowadays.

Alternative solution: Containers can help here too. Containers provide better isolation amongst applications running on the same OS. But bugs in kernel-mode components cannot be addressed by containers.

Future: In the near future, we are likely to see micro-kernel-like architectures around virtual machine monitors. Lightweight operating systems could be developed to work only with virtual machine monitors. Such a solution will provide fault isolation without incurring the overheads of a full operating system running inside a virtual machine.

Live Application Migration

Use case: I want to build a datacenter with utility/on-demand/SLA-based computing in mind. To achieve that, I want to be able to live-migrate an application to a different machine. I can run the application in a virtual machine and live-migrate the virtual machine.

Analysis: The requirement is to migrate an application. But migrating a process is not supported by existing operating systems. Also, the application might make some global configuration changes that need to be available on the migration target.

Alternative solution: OpenVZ modifies Linux to provide multiple “containers” inside the same OS. OpenVZ also supports live migration of a container.

Future: As discussed earlier, operating systems need to support containers by default.

Hardware Support

Use case: My operating system does not support the cutting-edge hardware I bought today.

Analysis: Here again, I’m not bothered about the operating system. But my applications run only on this OS. Also, enterprises like to use the same OS version throughout the organization. If an enterprise sticks to an old OS version, it does not work with the new hardware being bought. If an enterprise is willing to move to a newer OS, that OS does not work with the existing old hardware.

But the real issue here is the lack of standardization across hardware and driver development models. I fail to understand why every wireless LAN card needs a different driver. Can all hardware vendors not standardize the IO ports and commands so that one generic driver works for all cards? On the other hand, every OS, and even every OS version, has a different driver development model. That means every piece of hardware requires a different driver for each OS version.

Alternative solution: I cannot think of a good alternative solution. One specific issue, the unavailability of wireless LAN card drivers for Linux, is addressed by NdisWrapper. NdisWrapper allows us to access a wireless card on Linux by loading a Windows driver.

Future: We either need hardware-level standardization or the ability to run the same driver on all versions of all operating systems. It would be good to have wrappers, like NdisWrapper, for all types of drivers and all operating systems. A hardware driver should write to a generic API provided by the wrapper framework. The generic API should be implemented by the operating system vendors.

Software Development Environment

Use case: I want to manage hardware infrastructure for software development. Every developer and QA engineer needs dedicated machines. I can quickly provision a virtual machine when the need arises.

Analysis: Software under development fails more often than a released product. Software developers and QA engineers want an isolated environment for their tests, to correctly attribute bugs to the right application. Also, software development environments require frequent reprovisioning, as the product under development needs to be tested under different operating systems.

Alternative solution: Containers would work for most software development. I think the exception is kernel-level development.

Future: Virtual machines found an instant market in software QA labs. Virtual machines will continue to flourish in this market.

Application Configuration Replica

Use case: I want to ship some data to another machine. Instead of setting up an identical application environment on the other machine to access the data, I want to ship the entire machine image itself. Shipping a physical machine image does not work because of hardware-level differences. Hence, I want to ship a virtual machine image.

Analysis: This is another hard reality of life. Data formats are not compatible across multiple versions of a software product. Portable data formats are used by human-readable documents. File-system data formats are also stable to a large extent, and you can mount a FAT or ISO 9660 file-system on virtually any version of any operating system. The same level of compatibility is not established for other structured data, and I don’t see that happening in the near future. Even if this hurdle is crossed, you need to bother about correctly shipping all the application configuration, which itself could be different for the same software running on different OSs.

Alternative solution: An OpenVZ container could be a lightweight alternative to a complete virtual machine.

Future: The future seems inclined towards “computing in a cloud”. Network bandwidth is increasing, and so is the trend towards outsourced hosting. Mail and web services have been outsourced for a long time. Oracle On Demand allows us to outsource database hosting. Google (Writely) wants us to outsource document hosting. Amazon allows us to outsource both storage and computation. In the future, we will be completely oblivious to the location of our data and applications. The only process running on your laptop would be an improved web browser. In that world, only the system software engineers who build these datacenters would be worried about hardware and operating system compatibilities. But they too will not be overly bothered, because data-center consolidation will reduce the diversity in hardware and OS.

Thin Client Desktops

Use case: I want to replace desktop PCs with thin clients. A central server will run a VM for each thin client. The thin client will act as a display server.

Analysis: Thin clients could bring down maintenance costs substantially. Thin client hardware is more resilient than a desktop PC. Also, it’s easier to maintain the software installed on a central server than to manage several PCs. But it’s not required to run a full virtual machine for each thin client. It’s sufficient to allow users to run the required applications from a central server and make the central storage available.

Alternative solution: Unix operating systems are designed to be server operating systems. Thin X terminals are still prevalent in the Unix desktop market. Microsoft Windows, the most prevalent desktop OS, is designed as a desktop OS. But Microsoft has also added substantial support for server-based computing. Microsoft’s Terminal Services allow multiple users to connect to a Windows server and launch applications from a thin client. Several commercial thin clients can work with Microsoft Terminal Services or similar services provided by other vendors.

Future: Before the world moves to computing in a global cloud, an intermediate step would be enterprise-wide desktop application servers. Thin clients would become prevalent due to reduced maintenance costs. I hope to see Microsoft come up with better licensing for server-based computing. On Unix, floating application licenses are the norm. With a floating application license, a server (or a cluster of servers) can run only a fixed number of application instances as per the license; it does not matter which user or thin client launches the application. Such floating licensing from Microsoft will help.

    Conclusion

    Server virtualization is a “heavy” solution for the problems it addresses today.These problems could be adddressed by operating systems in a more efficient manner with following modifications:

    • Support for containers.
    • Support for live migration of containers.
    • Decoupling of hardware virtualization and other OS functionalities.

    If existing operating systems muster enough courage to deliver these modifications, server virtualization will have tough time. It’s unrealistic to expect complete overhauls of existing operating systems. It’s possible to implement containers as a part of OS but decoupling hardware virtualizatoin from OS is a hard job. Instead, we are likely to see new light weight operating systems designed to run only in server virtualization environment. The light weight operating system will have following characteristics:

    • It will do away with functionality already implemented in virtual machine monitor.
    • It will not worry about hardware virtualization.
    • It might be a single user operating system.
    • It might expect all processes to be co-operative.
    • It will have a minimal kernel mode component. It will be mostly composed of user mode libraries providing OS APIs.

    Existing virtual machine monitors would also take up more responsibility in order to support lightweight operating systems:

    • Hardware support: The hardware supported by a VMM will be of primary importance. The OS only needs to support the virtual hardware made visible by the VMM.
    • Complex resource allocation and tracking: I should get finer control over the resources allocated to virtual machines and be able to track resource usage. This involves CPU, memory, storage and network (a small query sketch follows this list).
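    As an illustration of the kind of per-VM resource tracking this asks for, here is a hedged Python sketch using the libvirt management bindings. Libvirt is not mentioned in this article and is only assumed to be available; the connection URI is an example. The sketch reports the vCPU count, memory and cumulative CPU time of each virtual machine on a host.

        import libvirt  # Python bindings for the libvirt management API (an assumption)

        # Read-only connection to a local hypervisor; the URI is an example.
        conn = libvirt.openReadOnly("qemu:///system")

        for dom in conn.listAllDomains():
            # info() returns [state, maxMem (KiB), memory (KiB), nrVirtCpu, cpuTime (ns)]
            state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
            print(f"{dom.name():20s} vcpus={vcpus} "
                  f"mem={mem_kib // 1024} MiB cpu_time={cpu_time_ns / 1e9:.1f} s")

        conn.close()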

    I hope to see a lightweight OS implementation targeted at server virtualization in the near future. It would be a good step towards modularizing operating systems.

    Acknowledgements

    Thanks to Dr. Basant Rajan and V. Ganesh for their valuable comments.

    About the Author – Milind Borate

    Milind Borate is the CTO and VP of Engineering at Druvaa, a Pune-based continuous data protection startup. He has over 13 years of experience in enterprise product development and delivery. He worked at Veritas Software as Technical Director for SAN-FS and served on the Veritas patent filter committee. Milind has filed over 15 patent applications (4 allotted) and co-authored “Undocumented Windows NT” in 1998. He holds a BE (CS) degree from the University of Pune and an MTech (CS) degree from IIT Bombay.

    This article was written when Milind was at Coriolis, a startup he co-founded before Druvaa.


    Introduction to Server Virtualization

    Virtualization is fast emerging as a game-changing technology in the enterprise computing space. What was once viewed as a technology useful for testing and development is going mainstream and is affecting the entire data-center ecosystem. Over the course of the next few weeks, PuneTech is going to run a series of articles on virtualization from experts in the industry. This article, the first in the series, gives an introduction to server virtualization, and has been written by Anurag Agarwal and Anand Mitra, founders of KQ Infotech.

    What is virtualization

    Virtualization is essentially some kind of abstraction of computing resources. There are various kinds of abstractions. Files provide an abstraction of disk blocks as a linear space. Storage virtualization products, like the logical volume manager, virtualize multiple storage devices into a single storage pool, and vice versa.

    Processes are also a form of virtualization. A process provides an illusion to the programmer that she has the entire address space at her disposal and has exclusive control of hardware resources. Multiplexing of these resources between all the processes on the system is done by the OS, transparent to the process. This concept has been universally adopted.

    All multi-programming operating systems are characterized by executing instructions in at least two privilege levels, i.e. unprivileged for user programs and privileged for the operating system. User programs use “system calls” to request the operating system to perform privileged operations on their behalf. The interface consisting of the unprivileged instruction set and the set of system calls defines an “extended machine” which is easier to program than the bare machine and makes user programs more portable.
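    To make the user/kernel boundary tangible, here is a tiny Python sketch, assuming Linux on x86-64 (the syscall number is architecture-specific and is an assumption elsewhere). The unprivileged program merely requests the I/O; the kernel switches to privileged mode, performs it, and returns.

        import ctypes

        libc = ctypes.CDLL(None, use_errno=True)  # access libc's syscall() wrapper

        SYS_write = 1  # write(fd, buf, count) on Linux/x86-64; differs on other platforms
        msg = b"hello from the extended machine\n"

        # Trap into the kernel: the privileged work (driving the device) happens there.
        libc.syscall(SYS_write, 1, msg, len(msg))  # fd 1 = stdout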

    Having the kernel wrap completely around the hardware, without exposing it to the upper layers, has its advantages. But in this model, only one operating system can run at a given time, and one cannot perform any activity that would disrupt the running system (for example, an upgrade, a migration, or system debugging).

    A virtual machine provides an abstraction of a complete physical machine. This is also known as server virtualization. The basic idea is to run more than one operating system on a single server at the same time.

    The History of Server Virtualization

    In 1964, IBM developed a virtual machine monitor (CP) to run its various OSes on its mainframes. Hardware was too expensive to leave underutilized, and IBM addressed many of the performance challenges inherent in virtualization by designing hardware amenable to it. However, with the advent of cheap computing resources and the proliferation of commodity hardware, virtualization fell out of favour and came to be viewed as an artifact of an era when computing resources were scarce. This was reflected in the design of the x86 architecture, which did not provide enough support to implement virtualization efficiently.

    With the cost of hardware going down and the complexity of software increasing, a large number of administrators started putting one application per server. This gives them isolation, so that one application does not interfere with another. However, over time it resulted in a problem called server sprawl: too many underutilized servers in data centers. Most Windows servers have an average utilization between 5% and 15%, and this utilization rate will go down further as dual-core and quad-core processors become common. In addition to the cost of the hardware, there are also power and cooling requirements for all these servers. The old problem of poor utilization of hardware resources has started surfacing again.

    Ironically, the very reason for the demise of virtualization in the mainstream was also the cause of its resurrection. The features which made the OSes attractive also made them more fragile. This renewed interest in virtualization resulted in VMWare providing a server virtualization solution for x86 machines in 1999. Server consolidation has increased server utilization to the 60% to 80% level, resulting in a 5x to 15x reduction in the number of servers.

    Virtual machines have introduced a whole new paradigm of looking at operating systems. Traditionally they were coupled with physical machines, and they needed to know all the peculiarities of hardware. Once hardware becomes obsolete, your operating system becomes obsolete too. But virtual machines have changed that notion. They have decoupled the operating systems from hardware by introducing a virtualization layer called virtual machine monitor (VMM).

    Types of Virtualization architectures

    There are many VMM architectures.

    Full emulation: This is the oldest virtualization technique in use. An emulator is a software layer which tracks the memory and CPU state of the machine being emulated and interprets each instruction, applying the effect it would have on the virtual state of the machine it has constructed. In a regular server, machine instructions are directly executed by the CPU and the memory is directly manipulated. In full emulation, the instructions are handed over to the emulator, which converts them into a (possibly different) set of instructions to be executed on the actual underlying physical machine. Full emulation is routinely used during the development of software for new hardware which might not be available yet. Virtualization can be considered a special case of emulation where the machine being emulated and the host are similar; this allows unprivileged instructions to execute natively. QEMU falls in this category.
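    The interpretation loop at the heart of an emulator can be illustrated with a toy Python sketch. The three-instruction “ISA” below is invented purely for this example; a real emulator such as QEMU tracks far more state and uses techniques like dynamic translation for speed.

        def emulate(program):
            """Interpret a toy 'guest' program instead of running it on the real CPU."""
            regs = {"r0": 0, "r1": 0}   # the emulated CPU state lives in plain software
            pc = 0                      # emulated program counter
            while pc < len(program):
                op, *args = program[pc]
                if op == "load":        # load an immediate value into a register
                    regs[args[0]] = args[1]
                elif op == "add":       # add the second register into the first
                    regs[args[0]] += regs[args[1]]
                elif op == "halt":
                    break
                pc += 1
            return regs

        # Tiny guest program: r0 = 2; r1 = 3; r0 = r0 + r1
        print(emulate([("load", "r0", 2), ("load", "r1", 3), ("add", "r0", "r1"), ("halt",)]))
        # -> {'r0': 5, 'r1': 3}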

    Hosted: In this approach, a traditional operating system (Windows or Linux) runs directly on the hardware; this is called the host OS. The VMM is installed as a service in the host OS. This application creates and manages multiple virtual machines as processes, and each virtual machine process has a full operating system inside it, called a guest OS. This approach greatly simplifies the design of the VMM, as it can directly use the services provided by the host operating system. VMWare Server, VMWare Workstation, VirtualBox, and KVM fall in this category.
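    One way to see the hosted model in action: each guest is just an ordinary process on the host OS. The sketch below launches a guest with QEMU/KVM from Python; the disk image path is hypothetical, and the exact flags may vary with your QEMU version.

        import subprocess

        # In the hosted model each guest OS runs inside a normal host process.
        guest = subprocess.Popen([
            "qemu-system-x86_64",
            "-enable-kvm",                   # use KVM acceleration if the host supports it
            "-m", "1024",                    # 1 GiB of guest RAM
            "-smp", "2",                     # 2 virtual CPUs
            "-hda", "/var/vms/guest1.img",   # hypothetical guest disk image
        ])
        print("guest OS is running as host PID", guest.pid)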

    Hypervisor based: Hosted VMM solutions have a high overhead, as the VMM does not directly control the hardware. In the hypervisor approach the VMM is directly installed on the hardware. The VMM provides virtual hardware abstractions to create and manage multiple virtual machines. Performance overhead in this approach is very small.

    Another way to classify virtual machines is on the basis of how privileged instructions are handled. Modern processors have a privileged mode of execution that the OS kernel executes in, and a non-privileged mode that user programs execute in. This causes a problem for virtual machines because, although the host OS (or the hypervisor) runs in privileged mode, the entire guest OS runs in non-privileged mode. Most of today’s OSs are designed to run in privileged mode, and hence their binaries end up containing some instructions that must be run in privileged mode. (For example, there are 17 such instructions in the Intel IA-32 architecture.) This causes a problem for the virtual machine, and there are two major approaches to handling it.

    Para virtualization: In this approach, the operating system is statically modified to replace the use of privileged instructions with appropriate calls into the hypervisor. In other words, the operating system is ported to the virtual hardware abstraction provided by the VMM. This requires changes to the operating system code, and it has the least performance penalty. This is the approach taken by Xen.

    Full virtualization: In this approach, no change is made in the operating system code. There are two ways of supporting this.

    • Using run-time emulation of the privileged instructions. The VMM monitors program execution at run time and takes over control whenever a privileged instruction arises in the guest OS. This approach is called binary translation. VMWare uses this technology.
    • Hardware-assisted virtualization: Both Intel and AMD have come up with virtualization extensions to their hardware. Intel calls this VT technology and AMD calls it SVM technology. These extensions provide an extra privilege level for the VMM to run in, along with a number of additional features such as nested page tables and the IOMMU, to make virtualization more efficient. (A small, Linux-specific detection sketch follows this list.)
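    On a Linux host you can check whether these extensions are present by looking at the CPU flags: “vmx” is reported for Intel VT and “svm” for AMD’s SVM. A minimal Python sketch:

        def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
            """Return which hardware virtualization extension the CPU reports, if any."""
            flags = set()
            with open(cpuinfo_path) as f:
                for line in f:
                    if line.startswith("flags"):
                        flags.update(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT"
            if "svm" in flags:
                return "AMD SVM"
            return None

        print(hw_virt_support() or "no hardware virtualization extensions reported")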

    Virtualization Vendors

    VMWare: VMWare has a suite of products in this area. There are two hosted products, VMWare Workstation and VMWare Server. Their hypervisor product is called VMWare ESX; one version of ESX, called VMWare ESXi, comes burned into the BIOS. They have Virtual Center as a management product to manage the complete virtual machine infrastructure in the data center. All these products are based on dynamic binary translation technology, and they support various flavors of Windows and Linux.

    Xen: Xen is an open source project based on para-virtualization and hypervisor technologies. Linux is modified to support para-virtualization, and Xen now also supports Windows using hardware-assisted virtualization. There are a number of products based on Xen: Citrix, which bought XenSource, has a couple of Xen-based products; Sun has xVM; and Oracle has Oracle VM. Red Hat and SUSE have been shipping Xen as part of their Linux distributions for some time.

    Hyper-V: This is Microsoft’s entry in this space. It is similar to the Xen architecture and also requires hardware assistance. It comes bundled with Windows Server 2008, and supports running Windows and Linux guest operating systems in the virtual machines.

    Advantages of Virtualization

    Virtualization has also provided some new capabilities. Server provisioning becomes very easy: it is just a matter of creating and managing a virtual machine, and this has transformed the way testing and development are done. Another interesting feature is VMotion, or live migration, where a running virtual machine can be moved from one physical machine to another. Execution of the virtual machine is briefly suspended, the image of the virtual machine is moved to the other physical machine, and execution is then resumed; the guest OS continues from exactly the point where it was suspended. This eliminates the need for downtime, even for things like hardware maintenance, and it also enables dynamic resource management, or utility computing.
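    As a rough illustration of what live migration looks like from a management API, here is a hedged Python sketch using the libvirt bindings with KVM hosts. Libvirt, the host URIs and the VM name are assumptions made for the example and are not from this article; VMWare’s VMotion has its own tooling.

        import libvirt  # libvirt Python bindings (an assumption; not part of VMWare's stack)

        src = libvirt.open("qemu+ssh://src-host/system")   # hypothetical source host
        dst = libvirt.open("qemu+ssh://dst-host/system")   # hypothetical destination host

        dom = src.lookupByName("guest1")                   # hypothetical running VM

        # VIR_MIGRATE_LIVE copies memory while the guest keeps running and
        # pauses it only briefly for the final switch-over.
        new_dom = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
        print("guest now runs on", new_dom.connect().getHostname())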

    Adoption of server virtualization has been phenomenal. There are already hundreds of thousands of servers running virtual machines. Initial adoption of virtual machines was restricted to test and development, but the technology has now matured enough to become quite popular in production too.

    About the Authors

    Anurag Agarwal

    Anurag Agarwal has more than 11 years of industry experience, both in India and the US. Prior to founding KQInfotech, he was a Technical Director at Symantec India, where he designed and developed various products at Symantec (earlier Veritas). During 2006-2007, Anurag conceived the idea of Software Fault Tolerance for Xen at Symantec. He received the company’s highest technical award, Outstanding Innovator, in 2006 for this invention, and built and led a team of ten people in India to take it from the idea stage to a product.

    During the same time, Anurag started working with the College of Engineering, Pune, where he and his friends offered a full-semester course on the Linux kernel. Anurag was also involved in mentoring a large number of students from various engineering colleges, and this involvement in teaching and mentoring resulted in the formation of KQInfotech with its training and mentoring focus. Prior to this, Anurag architected a scalable transaction system for the cluster file system at Symantec in the USA. This architecture improved the scalability of the cluster file system from three nodes to sixteen nodes and beyond, and he received a Star Award for this work. He has filed half a dozen patents at Symantec. Anurag has extensive knowledge of Solaris, the Linux kernel, file systems, storage technologies and virtualization. He has an ME from the Indian Institute of Science, Bangalore, and a BE from MBM Engineering College, Jodhpur.

    Anand Mitra
    After completing his post-graduation (IIT Bombay) in 2001, Anand worked with Symantec India (formerly Veritas). Prior to founding KQInfotech, he was a Principal Software Engineer at Symantec, chartered with scoping and designing Windows support for Xen-based Fault Tolerance. He worked for 6.5 years on the clustered filesystem products VxFS and CFS. He architected the online upgrade for the Veritas File System and designed the write fastpath, which improved the performance of the file system. He also designed the integration of the Power6 (PowerPC) storage keys CPU feature into the Veritas storage stack. He co-maintained technical relations with IBM for special proprietary kernel interfaces within AIX, and designed a filesystem pre-allocation API for the IBM DB2 database.

    Chitale Dairy Consolidates Two Datacenters into One with VMware Infrastructure

    (news sent in by PuneTech reader Chirag Dalal)

    Yahoo! Finance reports that Pune’s Chitale Dairy has used VMWare’s virtualization infrastructure to consolidate its two data centers into one and save costs:

    Chitale Dairy, which produces about 400,000 liters of milk per day as well as cream, butter and yogurt, faced operational challenges with 10 physical servers spread across two datacenters in a town 500 kilometers from the nearest city. In its remote location, the company found it expensive and challenging to source and retain qualified IT support staff while also grappling with server sprawl.

    By consolidating its two physical operations into one virtual datacenter using VMware Infrastructure, Chitale Dairy reduced server hardware acquisition costs by 50 percent, cut software acquisition costs by 75 percent, and halved power consumption. VMware also reduced server deployment times from three weeks to three hours, and the time to restore a corrupted server from six or seven hours to 10 minutes.

    See full article.
