What: “Financing Your Startup” Startup Saturday Pune event with Indian Angels Network and Ganesh Natarajan When: Saturday, Sept 11, 3pm-6pm Where: MDC Hall V. 1st floor, Auditorium Building, YASHADA, Baner Road. Registration & Fees: This event is free for all. Register here
Financing your venture – with Ganesh Natarajan / IAN
Financing your venture is one of the most challenging tasks for a start-up. It's relatively easy to get customers, employees, and technology, but finance is tricky. 10 years ago, you could have gone to a VC. Today that is not an option. So how do you finance your start-up?
Thankfully there are lots of other options. Funds are available from friends and family, angel investors, government bodies like MSME, SIBRI and NMITLI, incubators, and angel investor networks.
To throw light on this subject, we are getting veterans who have been there and done that for Startup Saturday Pune 10. Mr. Ganesh Natarajan, Chairman of NASSCOM and Global CEO of Zensar, will give the keynote address. He wears various hats; here he will represent the Pune chapter of the Indian Angel Network.
Tentative Agenda
3:00 – 3:15 Introduction
3:15 – 3:45 Keynote address by Mr. Ganesh Natarajan, Chairman of NASSCOM and Global CEO of Zensar Technologies.
3:45 – 4:00 Funding schemes from the Government of India, by Kaushik Gala, NCL Venture Center
4:00 – 4:15 Crowdfunding as an option, by Satish Kataria, Grow VC
4:15 – 4:30 Lightning pitches from three promising startups
4:45 – 5:00 Closing Remarks
5:00 – 6:00 Networking and Snacks (On the House)
About IAN – Indian Angel Network
The Indian Angel Network (IAN) is India's first angel investment network and looks to invest up to US$ 1 mn, though their sweet spot is between US$ 200K and US$ 400K. Apart from funding, the Network also seeks to provide mentoring and strategic thought leadership, and to leverage the Network's network for the investee companies. The Network has met with early successes and has already invested in 22 companies across multiple sectors.
The Indian Angel Network (IAN) currently has over 125 members drawn from across the country and some from overseas, including leading lights from diverse sectors. Members include people such as Jerry Rao, Saurabh Srivastava, Pramod Bhasin, Raman Roy, Rajiv Luthra, Pradeep Gupta, Sunil Munjal, Arvind Singhal, and institutions such as IBM, SIDBI, Spice Televentures, Intel, etc.
About Startup Saturday
Startup Saturday, Pune, is a forum aimed at deepening the skills of the startup community in Pune, so that more successful startups come out of the city, through the creation of a vibrant innovation ecosystem. As in other cities, SS Pune will be held on the second Saturday of every month.
An SS session is about rich discussions on topics of interest to startups in the city. A typical session has only about 25% of the time devoted to a talk/presentation and the rest of the time dedicated to freewheeling discussion, as that is where, in our experience, the audience makes the best use of the available expert.
(This information is mostly taken from the website of Ozran Academy. We at PuneTech don't really know anybody at Ozran personally, but the courses look interesting enough, it appears that they could be useful to many freshers, and best of all they're free. We're publishing this information in the hope that students find it useful.)
Ozran is a small Dutch company that has a development center in Pune. They are providing five free courses targeted towards freshers in IT, Arts or Maths, with the intention of developing skills that industry is interested in and identifying talented individuals. Each course consists of 6 evening classes (2-1/2 hours each) and one exam on a Saturday afternoon. The whole thing is free, and a certificate is given to each participant who attends all classes, and passes the exam. Talented participants who demonstrate the ability to quickly learn and apply the concepts taught in the Ozran Academy courses may be offered a paid traineeship or job.
The five courses being offered this year are:
For HTML coders – Advanced HTML/CSS techniques and concepts, includes a primer in HTML5
For Programmers – Adobe ColdFusion for rapid construction of dynamic web applications
For Artists and Graphic Designers – Web Design European style
For Number Crunchers and Marketing Geniuses – Optimizing website conversion with marketing and analytics
For Artistic Programmers and Programming Designers – Replacing Flash with the jQuery JavaScript library
There are different cut-off dates for applications, and for the start of the actual course, and unfortunately, we believe one of the courses is already over. But check the Ozran Academy Page for full details of the courses.
What: Pune Java Meetup When: Saturday, Sept 11, 5:30pm Where: ThoughtWorks Technologies, Tower C, Panchshil Tech Park, Yerwada Registration and Fees: This event is free for all to attend. Register here. Group Page: Pune Java Meetup Group
Details
The Pune Java Meetup group hopes to meet on the second Saturday of every month. This group is a free/open group. Anybody interested in Java can join the group. Anybody can propose a meeting.
This month's meetup, on Saturday, will feature Kiran Narasareddy talking about his experiences with modeling frameworks – specifically, their (bad) experiences with EMF (the Eclipse Modeling Framework) and their good experience with JAXB. The talk will also cover the various plugins available for JAXB, and what you can achieve using them.
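For readers new to JAXB, here is a minimal sketch (not from the talk; the class and field names are invented for illustration) of the basic bind-to-XML-and-back workflow that the various plugins build on:

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringReader;
import java.io.StringWriter;

// Hypothetical model class, just to show the annotation-driven binding.
@XmlRootElement
class Meetup {
    public String topic;
    public String speaker;
}

public class JaxbDemo {
    public static void main(String[] args) throws Exception {
        Meetup m = new Meetup();
        m.topic = "Modeling Frameworks";
        m.speaker = "Kiran";

        JAXBContext ctx = JAXBContext.newInstance(Meetup.class);

        // Object -> XML
        StringWriter xml = new StringWriter();
        Marshaller marshaller = ctx.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(m, xml);
        System.out.println(xml);

        // XML -> Object
        Unmarshaller unmarshaller = ctx.createUnmarshaller();
        Meetup back = (Meetup) unmarshaller.unmarshal(new StringReader(xml.toString()));
        System.out.println(back.topic + " by " + back.speaker);
    }
}
```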
After that, Atul will talk about using the Struts unit-testing framework. It is a very effective way to decouple the Action Layer from the Business Layer – no need to wait for UI development to test your code. Very appealing – and addictive.
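The announcement doesn't say which testing framework Atul will demonstrate, so treat the following only as a rough sketch of the decoupling idea: a plain JUnit test exercising an action-like class against a stubbed business layer, with no UI involved (all class names here are hypothetical):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical business-layer interface and action class, for illustration only.
interface OrderService {
    double totalFor(String customerId);
}

class OrderSummaryAction {
    private final OrderService service;
    OrderSummaryAction(OrderService service) { this.service = service; }

    // Returns a result code the way a Struts action would, but takes no UI input.
    String execute(String customerId, StringBuilder out) {
        out.append("Total: ").append(service.totalFor(customerId));
        return "success";
    }
}

public class OrderSummaryActionTest {
    @Test
    public void actionCanBeTestedWithoutAnyUI() {
        // Stubbed business layer; no database, no JSPs, no container.
        OrderService stub = new OrderService() {
            public double totalFor(String customerId) { return 42.0; }
        };
        OrderSummaryAction action = new OrderSummaryAction(stub);

        StringBuilder out = new StringBuilder();
        assertEquals("success", action.execute("cust-1", out));
        assertEquals("Total: 42.0", out.toString());
    }
}
```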
(Pune based serial entrepreneur, Ajit Shelat, passed away yesterday. This article and photo are by flickr user drona and are taken from this page. They’re reproduced here under the terms of the Creative Commons (BY-NC) license under which that page is published.)
My friend Ajit Shelat passed away today. He was driving on the Mumbai-Pune Highway, and had an accident at about 5:30pm on September 1, 2010.
He was a fellow alumnus and contemporary of IIT-Mumbai.
Trained entirely in India, he was perhaps the first Indian engineer who designed and developed a very complex LAN security chipset at Nevis Networks, entirely based out of Pune, India.
He was a co-founder of RIMO technologies, Switch-on Networks(with Moti Jiandani), and Nevis Networks. Switch-On Networks was sold to PMC-Sierra for $300M+.
He supported a wide variety of environmental causes, and was an avid hiker and naturalist. A prolific entrepreneur himself, he generously gave his time and money to his favorite causes: the environment, education and entrepreneurs.
Said Yatin Mundkur, a venture capitalist at Artiman Ventures, who used to work for Ajit at Godrej Industries, in the mid-eighties: “I am who I am today, because of Ajit. And a lot of us who reported to him at Godrej would gladly say that.”
I will fondly remember the many hikes I took with him, and particularly the many discussions I had with him during the early X Window System days.
He is survived by his wife Radha Shelat, daughter Arundhati, and his mother and sister.
Pune’s IndicThreads, which organizes a number of tech conferences in Pune, put out a call for speakers for its next two conferences – their flagship Java conference, whose 5th edition will be held in December 2010, and a new conference on mobile technologies, whose first edition will be in November 2010. The call for speakers for both conferences is still open (until 31st August) and represents a good opportunity for techies in Pune to get visibility for their work, and a chance for networking with like-minded people without having to pay the hefty conference fees.
The annual indicthreads.com java technology conference is Pune’s best and possibly one of India’s finest conferences on matters related to Java technologies. I looked forward to attending the same and was not disappointed a bit.
He has written a fairly detailed post, including overviews of the sessions he attended, which is worth reading.
Here is a PuneTech article about the IndicThreads Java conference 2 years ago.
Earlier this month, IndicThreads had the first edition of their new conference on upcoming technologies, this one being focused on cloud computing. You can see PuneTech’s coverage (also see this article), the report by Janakiram, a senior technical architect at Microsoft, and this one by Arun Gupta, a technical evangelist at Sun (aka Oracle). That should give you an idea of the kinds of talks that go into IndicThreads’ conferences.
Here are some other reasons I had given earlier as to why you should apply for a speaker spot. The reasons are still valid today, so I’ll simply cut-n-paste here:
If you’re accepted as a speaker, you get a free pass to the conference.
Become famous: being a speaker at a national conference is good for visibility, and all engineers should strive for visibility. It’s very important. Almost as important as being a good programmer. (Maybe more?)
Help out a good Pune initiative. More submissions will improve the quality of the conference, and having a high quality conference in Pune improves the overall stature of Pune as an emerging IT powerhouse.
And finally, I also said this:
I'm willing to bet that many people reading this will think – but I am not an expert. Not true. If you've spent a couple of years working on some specific aspect of testing, chances are that you've acquired expertise that you can present and add value to the understanding of others. You don't have to have done groundbreaking research. Have you adopted a new tool that came out recently? Talk about it, because others will not have experience with its use. Have you used an old tool in a new way? Definitely submit a proposal. The others in this field would love to hear of this new wine in an old bottle.
(Disclaimer: In the past, a couple of times, PuneTech has received a complimentary pass from IndicThreads (sort of a “press pass”) for attending their conferences. There are no strings attached to this – and we try to be objective in our coverage of the conference. As per PuneTech policy, we don’t promote the actual conference on the PuneTech blog, since it’s a paid event, but we do promote the call for speakers, since that’s free, and we do reporting of the event itself whenever possible, since a significant fraction of it ends up highlighting technology work being done in Pune.)
(This is a live-blog of a talk given by Kalpak Shah, at the Indic Threads Conference on Cloud Computing, held in Pune on 20/21 Aug 2010. Since it’s being typed in a hurry, it is not necessarily as coherent and complete as we would like it to be, and also links might be missing.)
Kalpak Shah is the founder and CEO of Clogeny, a company that does consulting & services in cloud computing. His talk is about the various choices available in cloud computing today, and how to go about picking the one that’s right for you.
These are the slides that were used by Kalpak for this talk. Click here if you can’t see the slideshow above.
Kalpak’s definition of a cloud:
If you cannot add a new machine yourself (i.e. without making a phone call or email), then it’s just hosting, not cloud computing
If you cannot pay as you go (i.e. pay per use) it is not cloud computing
If you don’t have APIs which allow integration with the rest of your infrastructure/environment, then it is not a cloud
Kalpak separates out cloud infrastructure into three parts, and gives suggestions on how to choose each:
Infrastructure as a service
Basically allows you to move your local server stuff into the cloud. Examples: Amazon EC2, Terremark vCloud, GoGrid Cloud, Rackspace Cloud
You should check:
Support and Helpdesk. Is it 24×7? Email? Phone?
Hardware and Performance. Not all of them are the same. Amazon EC2 not as good as Terremark.
OS support. Which OS and distributions are supported. Is imaging of server allowed? Is distribution and re-selling of images allowed? Not everybody allows you to save the current state of the server, and restart it later on a different instance.
Software availability and partner network. Example, Symantec has put up their anti-virus software for Windows on EC2. How many such partners are available with the provider you’re interested in? (EC2 is far ahead of everybody else in this area.)
APIs and Ecosystem. What APIs are available and in what languages. Some providers don’t do a good job of providing backward compatibility. Other might not be available in language of your choice. EC2 and Rackspace are the best in this area.
Licensing is a big pain. Open source software is not a problem, but if you want to put licensed applications on the cloud, that is a problem. e.g. IBM Websphere clustering is not available on EC2. Or Windows licenses cannot be migrated from local data center to the cloud.
Other services – How much database storage are you allocated? What backup software/services are available? What monitoring tools? Auto-scaling, load-balancing, messaging.
Kalpak has put up a nice comparison of Amazon AWS, Rackspace, GoGrid and Terremark on the above parameters. You can look at it when the PPT is put up on the IndicThreads conference website in a few days.
Platform as a Service
This gives you a full platform, not just the hardware. You get the development environment, and a server to upload the applications to. Scalability and availability are managed by the vendor. But there is much less flexibility than with infrastructure-as-a-service: you are stuck with the programming language that the PaaS supports, and its tools.
For example, Google AppEngine. Which is available only for Python and Java. Or Heroku for Ruby + Rails.
PaaS is targeted towards developers.
Software as a Service
This gives you consumer-facing software that sits in the cloud. You can start using the software directly, or you can extend it a bit. A business layer is provided, so you can customize the processes to suit your business. Good if what is provided fits what you already want. Not good if your needs are rather different from what they have envisioned.
Examples: Sales Force, Google Apps, Box.net, Zoho
Storage as a Service
Instead of storing data on your local disks, store it in the cloud. Lots of consumer adoption, and now enterprise usage is also growing. No management overhead, backups, or disaster recovery to worry about. And pay either flat fees per month, or by the gigabyte.
Examples: Mozy from EMC. Amazon S3. Rackspace CloudFiles. Carbonite. DropBox.
Comparing PaaS and SaaS
Some choices automatically made for you based on development language and available skill sets. Python + Java? Use Google AppEngine. Ruby on Rails? Use Heroku. Microsoft shop? Use Azure.
Other ways to compare are the standard ones: size of vendor and ecosystem maturity. Tools, monitoring, connectors, etc. e.g. AppEngine has an Eclipse plugin, so if your developers are used to Eclipse (and they should be!) then this is very good. Another question to ask is this – will the vendor allow integration with your private cloud? Can you sync your online hosted database with your local database? If yes, that's great. If not, that can be very painful and complicated for you.
Interesting Private Cloud Platforms
These are some interesting private cloud platforms
Eucalyptus: open source IaaS cloud computing platform.
VMWare Cloud: Partnered with Terremark. Expensive but worth it.
Appistry: Allows installing of the platform on Amazon EC2, or in your private data center. Allows application deployment and management, and various services across the stack (IaaS, PaaS, SaaS). Integration with SQL Azure, SharePoint, Dynamics CRM. Visual Studio development and testing. Supports multiple development languages.
Database in the cloud
You can either do regular relational databases (easy to use, everybody knows them, scaling and performance needs to be managed by you). Or do NoSQL – non-relational databases like SimpleDB (Amazon), Hadoop (Yahoo), BigTable (Google). They’re supported and managed by cloud vendor in some cases. Inherent flexibility and scale. But querying is more difficult and less flexible.
Business Considerations
Licensing is a pain, and can make the cloud unattractive if you’re not careful. So figure this one out before you start. SLAs are around 99.9% for most vendors, but lots of fine print. Still evolving and might not meet your standards, especially if you’re an enterprise. Also, if SLA is not being met, vendor will not tell you. You have to complain and only then they might fix it. Overall, this is a grey area.
Pricing is a problem – it keeps changing (e.g. in case of Amazon). So you can have problems estimating it. Or the pricing is at a level that you might not understand. e.g. pricing of 10 cents per million I/O requests. Do you know how many I/Os your app makes? Maybe not.
Compliance might be a problem – your government might not allow your app to be hosted in a different country. Or, for the banking industry, there might be security certification required (for the vendor) before the cloud can be used.
Consider all of these before deciding whether to go to a cloud or not.
Summary
IaaS gives you the infrastructure in the cloud. PaaS adds the application framework. SaaS adds a business layer on the top.
Each of these are available as public clouds (that would be somewhere out there on the world wide web), or private clouds that are installed in your data-center. Private is more expensive, more difficult to deploy, but your data is in your premises, you have better (local) connectivity, and have more flexibility. You could also have a hybrid cloud, where some stuff is in-house and some stuff in the public cloud. And if your cloud infrastructure is good enough, you can easily move computation or data back and forth.
About the Speaker – Kalpak Shah
Kalpak Shah is Founder & CEO of Clogeny Technologies Pvt. Ltd. and guides the overall strategic direction of the company. Clogeny is focused on providing services and consultancy in the cloud computing and storage domains. He is passionate about the ground-breaking economics and technology afforded by the cloud computing platforms. He has been working on various cloud platforms including IaaS, PaaS and SaaS vendors.
(This is a live-blog of the Indic Threads Conference on Cloud Computing, that’s being held in Pune. Since it’s being typed in a hurry, it is not necessarily as coherent and complete as we would like it to be, and also links might be missing. Also, this has notes only on selected talks. The less interesting ones have not been captured; nor have the ones I missed because I had to leave early on day 1.)
This is the first instance of what will become IndicThreads’ annual conference on Upcoming Technology – and the theme this time is Cloud Computing. It will be a two day conference and you can look at the schedule here, and the profiles of the speakers here.
Choosing your Cloud Computing Service
The first talk was by Kalpak Shah, the founder and CEO of Clogeny, a company that does consulting & services in cloud computing. He gave a talk about the various choices available in cloud computing today, and how to go about picking the one that's right for you. He separated out Infrastructure as a Service (IaaS), which gives you the hardware and basic OS in the cloud (e.g. Amazon EC2); then Platform as a Service (PaaS), which gives you an application framework on top of the cloud infrastructure (e.g. Google AppEngine); and finally Software as a Service (SaaS), which also gives you business logic on top of the framework (e.g. SalesForce). He gave the important considerations you need to take into account before choosing the right provider, and the gotchas that will bite you. Finally he talked about the business issues that you need to worry about before you choose to be on the cloud or not. Overall, this was an excellent talk. Nice broad overview, lots of interesting, practical and useful information.
(Image caption: Everybody is jumping on the cloud bandwagon. Do you know how to find your way around in the maze? Image via Wikipedia)
The next talk is by Arun Gupta about Java EE in the cloud. Specifically Java EE 6, which is an extreme makeover from previous versions. It makes it significantly easier to deploy applications in the cloud. It is well integrated with Eclipse, NetBeans and IntelliJ, so overall it is much easier on the developer than the previous versions.
Challenges in moving a desktop software to the cloud
Prabodh Navare of SAS is talking about their experiences with trying to move some of their software products to the cloud. While the idea of a cloud is appealing, there are challenges in moving an existing product to the cloud.
Here are the challenges in moving to a cloud based business model:
Customers are not going to switch unless the cost saving is exceptional. Minor savings are not good enough.
Deployment has to be exceptionally fast
High performance is an expectation. Customers somehow expect that the cloud has unlimited resources. So, if they're paying for a cloud app, they expect that they can get whatever performance they demand. Hence, auto-scaling is a minimum requirement.
Linear scaling is an expectation. But this is much easier said than done. Parallelization of tasks is a big pain. Must do lots of in-memory execution. Lots of caching. All of this is difficult.
Latency must be low. Google and Facebook respond in a fraction of a second, so users expect you to as well.
If you're using Linux (i.e. the LAMP stack), then, for achieving some of these things, you'll need to use Memcache, Hadoop, etc.
You must code for failure. Failures are common in the cloud (at those scales), and your system needs to be designed to seamlessly recover from them.
Is customer lock-in good or bad? General consensus in cloud computing market is that data lock-in is bad. Hence you need to design for data portability.
Pricing: Deciding the price of your cloud based offering is really difficult.
Cost of the service per customer is difficult to judge (shared memory used, support cost, CPU consumed, bandwidth consumed)
In Kalpak's talk, he pointed this out as one of the inhibitors of cloud computing for businesses.
Customers expect pay-as-you-go. This needs a full-fledged effort to build an appropriate accounting and billing system, and it needs to be grafted into your application
To support pay-as-you-go effectively, you need to design different flavors of the service (platinum, gold, silver). It is possible that this might not be easy to do with your product.
Multi-cloud programming with jCloud
This talk is by Vikas Hazrati, co-founder and “software craftsman” at Inphina.
Lots of people are interested in using the cloud, but one of the things holding them back is cloud vendor lock-in. If one cloud doesn't work out, they would like to be able to shift to another. This is difficult.
To fix this problem, a bunch of multi-cloud libraries have been created which abstract out the clouds. Basically they export an API that you can program to, and they have implementations of their API on a bunch of major cloud providers. Examples of such multi-cloud libraries/frameworks are: Fog, Delta, LibCloud, Dasein, jCloud.
These are the things that are different from cloud to cloud:
Key-value store (i.e. the database)
File sizes
Resumability (can you stop and restart an application)
CDN (content delivery network)
Replication (some clouds have it and some don’t)
SLA
Consistency Model (nobody gives transaction semantics; everybody gives slightly different eventual consistency semantics)
Authorization
API complexity
APIs like jCloud try to shield you from all of these differences.
jCloud allows a common API that will work on Amazon, Rackspace, VMWare and a bunch of other cloud vendors. It's open source, performant, has Clojure support, and most importantly, it allows unit testability across clouds. The testability is good because you can test without having to deploy on the cloud.
The abstractions provided by jCloud:
BlobStore (abstracts out key-value storage for: atmos, azure, rackspace, s3)
Compute (abstracts out vcloud, ec2, gogrid, ibmdev, rackspace, rimu)
Provisioning – adding/removing machines, turning them on and off
jCloud does not give 100% portability. It gives “pragmatic” portability. The abstraction works most of the time, but once in a while you can access the underlying provider’s API and do things which are not possible to do using jCloud.
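To make the "common API" point concrete, here is a small sketch using the jclouds library (written as "jCloud" above). The API has evolved since this talk, so this uses the later ContextBuilder style, and the provider id, container name and credentials are placeholders rather than anything from the session:

```java
import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.blobstore.domain.Blob;

public class MultiCloudUpload {
    public static void main(String[] args) throws Exception {
        // Switching clouds is (mostly) a matter of changing the provider id,
        // e.g. "aws-s3" vs. a Rackspace or Azure blob-store provider.
        String provider = "aws-s3";            // placeholder
        String identity = "YOUR_ACCESS_KEY";   // placeholder
        String credential = "YOUR_SECRET_KEY"; // placeholder

        BlobStoreContext context = ContextBuilder.newBuilder(provider)
                .credentials(identity, credential)
                .buildView(BlobStoreContext.class);
        try {
            BlobStore store = context.getBlobStore();
            store.createContainerInLocation(null, "demo-container");

            // Same code path regardless of which cloud is behind the abstraction.
            Blob blob = store.blobBuilder("hello.txt")
                             .payload("hello from a portable API")
                             .build();
            store.putBlob("demo-container", blob);
        } finally {
            context.close();
        }
    }
}
```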
A Lap Around Windows Azure
(Image caption: Windows Azure is Microsoft's entry in the PaaS arena. Image via Wikipedia)
This talk is by Janakiram, a Technical Architect (Cloud) at Microsoft.
Microsoft is the only company that plays in all three layers of the cloud:
IaaS – Microsoft System Center (Windows HyperV). Reliance is setting up a public cloud based on this in India.
PaaS – Windows Azure Platform (AppFabric, SQLAzure)
SaaS – Microsoft Online Services (MSOffice Web Application, MSExchange Online, MSOffice Communications Online, SharePoint Online)
The focus of this talk is the PaaS layer – Azure. Data is stored in SQLAzure, the application is hosted in Windows Azure and AppFabric allows you to connect/synchronize your local applications and data with the cloud. These together form the web operating system known as Azure.
The cloud completely hides the hardware, the scalability, and other details of the implementation from the developer. The only things the cloud exposes are: 1) Compute, 2) Storage, and 3) Management.
The compute service has two flavors. The "Web Role" is essentially the UI – it shows webpages and interacts with the user – and is based on IIS7. The "Worker Role" does not have a UI; it is expected to run "background" processes, often long-running, that operate on the storage directly. You can have Java Tomcat, Perl, Python, or whatever you want to run inside a worker role. They demonstrated WordPress working on Azure – by porting MySQL, PHP, and WordPress to the platform. Bottom line: you can put anything you want in a worker role.
Azure storage exposes a Blob store (very much like S3, or any other cloud storage engine). This allows you to dump your data, serialized, to disk. This can be combined with a CDN service to improve availability and performance. In addition, you can use tables for fast, read-mostly access. And it gives you persistent queues. And finally, you get "Azure Drive", a way to share raw storage across your apps. All of this is available via a REST interface (which means that any app, anywhere on the web, can access the data – not just .NET apps).
Building an Azure application is no different from designing, developing, debugging, and testing an ASP.NET application. There is a local, simulated cloud interface that allows you to try everything out locally before deploying it to the cloud.
Simone Brunozzi, a Technology Evangelist at Amazon Web Services, is talking about Amazon’s EC2.
(Image caption: AWS has the broadest spectrum of services on offer in the cloud computing space, and the best partner/developer/tools ecosystem. Image via Wikipedia)
Overview of Amazon Web Services: Compute(EC2, Elastic MapReduce, AutoScaling), Messaging(SQS, simple notification service), Storage (S3, EBS, import/export), Content Delivery (CloudFront), Monitoring (CloudWatch), Support, Database (SimpleDB, RDBMS), Networking (Virtual Private Cloud, Elastic Load Balancing), Payments & Billing (FPS – flexible payments service), e-Commerce (fws – fulfillment web service, Amazon DevPay), Web Traffic (Alexa Web Information, Alexa Top sites), Workflow (Amazon Mechanical Turk)!! See this link for more
AWS exists in US-West (2 locations), US-East (4 locations), Europe (2 locations), Asia-Pacific (2 locations). It’s architected for redundancy, so you get availability and failover for free.
EC2 essentially gives you virtual servers in the cloud, that can be booted from a disk image. You can choose your instance type from small to extra-large (i.e. how much memory, and CPU speed), and install an image on that machine. You can choose from a lot of pre-configured images (Linux, Solaris, Windows). These are basic OS installs, or more customized versions created by Amazon or the community. You can further customize this as you want, because you obviously, get root/administrator access on this computer. Then you can attach a “disk” to this “computer” – basically get an EBS, which is 1GB to 1TB in size. An EBS device is persistent, and is automatically replicated. If you want even better durability, then snapshot the EBS and store it to S3.
Scaling with EC2: Put an ELB (Elastic Load Balancer) in front of your EC2 instances, and it will automatically load balance across those (and give you a single URL to expose to your users). In addition, ELB does health-checks on the worker instances and removes the ones who are not performing up to the mark. If you use the CloudWatch monitoring service, you can do things like: “if average CPU usage across all my instances is above 80%, then add a new instance, and remove it once average CPU usage drops below 20%.” After this point, adding and removing instances will be fully automated.
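This is not from the talk itself, but as a rough sketch of how such a rule can be wired up with the AWS SDK for Java (v1-style classes): one call creates an "add one instance" scaling policy, and a CloudWatch alarm triggers it when average CPU stays high. The group name, thresholds and credentials below are placeholders:

```java
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.autoscaling.AmazonAutoScalingClient;
import com.amazonaws.services.autoscaling.model.PutScalingPolicyRequest;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClient;
import com.amazonaws.services.cloudwatch.model.ComparisonOperator;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.PutMetricAlarmRequest;

public class ScaleUpRule {
    public static void main(String[] args) {
        BasicAWSCredentials creds = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"); // placeholders
        AmazonAutoScalingClient autoScaling = new AmazonAutoScalingClient(creds);
        AmazonCloudWatchClient cloudWatch = new AmazonCloudWatchClient(creds);

        // "Add one instance" policy attached to an existing Auto Scaling group.
        String policyArn = autoScaling.putScalingPolicy(new PutScalingPolicyRequest()
                .withAutoScalingGroupName("my-web-group")        // placeholder group name
                .withPolicyName("scale-up-by-one")
                .withAdjustmentType("ChangeInCapacity")
                .withScalingAdjustment(1)).getPolicyARN();

        // Fire the policy when average CPU across the group stays above 80% for 10 minutes.
        cloudWatch.putMetricAlarm(new PutMetricAlarmRequest()
                .withAlarmName("high-cpu")
                .withNamespace("AWS/EC2")
                .withMetricName("CPUUtilization")
                .withStatistic("Average")
                .withDimensions(new Dimension()
                        .withName("AutoScalingGroupName").withValue("my-web-group"))
                .withPeriod(300)
                .withEvaluationPeriods(2)
                .withThreshold(80.0)
                .withComparisonOperator(ComparisonOperator.GreaterThanThreshold)
                .withAlarmActions(policyArn));
    }
}
```

A matching "scale down below 20%" rule is just a second policy with a negative adjustment and a second alarm with the opposite comparison.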
It’s important to mention that AWS is very enterprise ready: it has certifications and security (needed for banking apps), SLAs, worldwide ecosystem of service integrators, and other partners. Another enterprise feature: Virtual Private Clouds. Basically, carve out a small area in the Amazon Public cloud which is only accessible through a VPN, and this VPN ends in the enterprise. Hence, nobody else can access that part of the cloud. (Note: this is different from a Private Cloud, which is physically located in the enterprise. Here, it is still physically located somewhere in Amazon’s data-centers, but a VPN is used to restrict access to one enterprise.)
Multi-tenancy in the Cloud
(Image caption: Difference between single-tenant and multi-tenant apps. Image by andreasvongunten.com via Flickr)
Vikas Hazrati (referenced earlier in this post) is talking about multi-tenancy: how do you give two different customers (or groups of customers) the impression that they are the only ones using a particular instance of a SaaS, when you're actually using only one installation of the software?
Multi-tenancy is basically when you have a single infrastructure setup (software stack, database, hardware), but you want multiple groups to use it, and each should see a completely isolated/independent view. Basically, for security, one customer group does not want their data to be visible to anybody else. But we don't want to give each group their own instance of the infrastructure, because that would be too expensive.
Variations on multi-tenancy. Easiest is to not do multi-tenancy – have separate hardware & software. Next step is to have multiple virtual machines on shared hardware. So hardware shared, software is not. If you’re sharing the middleware, you can do the following: 1. Multiple instances of the app on the same OS with independent memory, 2. Multiple instances of the app with shared memory, and 3. True multi-tenancy.
What level do you need to do multi-tenancy at? It could be at any layer: the database of course needs to be separable for different tenants. You can also do it at the business logic layer – so different tenants want different configurations of the business logic. And finally, you could also do this at the presentation logic – different tenants want different look’n’feel and branding.
Multi-tenancy in the database. You need to add a tenant-id to the database schema (and the rest of the schema is the same). A big customer concern here is that bugs in queries can result in data leakage (i.e. a single poorly written query will result in your competitor seeing your sales leads data). This can be a huge problem. A typical SaaS vendor does this: put smaller customers in the same database with a tenant-id, but for larger customers, offer them the option of having their data in a separate database.
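A minimal sketch of what "tenant-id in the schema, and in every query" looks like in plain JDBC (the table and column names are made up for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LeadDao {
    private final Connection conn;

    public LeadDao(Connection conn) { this.conn = conn; }

    // Every query takes the tenant id and includes it in the WHERE clause,
    // so one tenant can never see another tenant's rows.
    public void printLeads(long tenantId) throws SQLException {
        String sql = "SELECT lead_name, status FROM sales_leads WHERE tenant_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, tenantId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("lead_name") + " : " + rs.getString("status"));
                }
            }
        }
    }
}
```

The "poorly written query" risk is exactly a query that forgets that WHERE clause, which is why the validation-layer approach described next is attractive.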
Multi-tenancy in the cloud. This is really where the cloud shines. Multi-tenancy gives very low costs; especially compared to the non-multi-tenant version (also known as the on-premise version). For example, the cost of multi-tenant JIRA is $10 per month, while the on-premise version is $150 per month (for the same numbers of users).
Multi-tenancy example: SalesForce does a very fine-grained approach. Each user gets his own portion of the database based on primary-key. And there is a validation layer between the app and the database which ensures that all queries have a tenant-id. Fairly fine-grained, and fairly secure. But it is quite complex – lots of design, lots of thinking, lots of testing.
One big problem with multi-tenancy is that of the runaway customers. If a few customers are really using a large share of the resources, then the other customers will suffer. Limiting their resource usage, or moving them elsewhere are both difficult to do.
In general, some providers believe that having each app developer implement multi-tenancy in the app is inefficient. The solution to this is to virtualize the database/storage/other physical resources. In other words, for example, the database exports multiple virtual databases, one per tenant, and the underlying multi-tenant-database handles all the issues of multi-tenancy. Both Amazon’s RDS and Windows SQLAzure provide this service.
Google released the namespaces API for Google AppEngine just a few days back, and that takes a different approach. The multi-tenancy is handled at the highest level of the app, but there’s a very easy way of specifying the tenant-id and everything else is handled by the platform. However, note that multi-tenancy is currently supported only for 3 of their services, and will break if you use one of the others.
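As a hedged illustration (not from the talk) of how the App Engine namespaces API is used from Java: you pick the tenant once per request, and the datastore calls that follow are automatically scoped to that namespace. The servlet, request parameter and entity kind below are invented for the sketch:

```java
import com.google.appengine.api.NamespaceManager;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class GreetingServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Derive the tenant from the request (hostname, login, etc.); simplified here.
        String tenant = req.getParameter("tenant");
        NamespaceManager.set(tenant);

        // Everything below is transparently stored in that tenant's namespace.
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        Entity greeting = new Entity("Greeting");
        greeting.setProperty("text", req.getParameter("text"));
        datastore.put(greeting);

        resp.getWriter().println("saved for tenant " + NamespaceManager.get());
    }
}
```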
Issues in multi-tenancy:
Security: all clients are worried about this
Impact of other clients: customers hogging resources is still not a solved problem
Some customers are willing to pay for a separate instance: and it’s a pain for us to implement and manage
Multi-tenancy forces users to upgrade when the app is upgraded. And many customers don’t want to be upgraded forcefully. To handle this issue, many SaaS providers make new features available only via configuration options, not as a forced upgrade.
Configurations/customizations can only be done up to some level
There is no user acceptance testing. We test, and users have to take it when we make it live.
When should you not use multi-tenancy?
Obviously, when security is a concern. e.g. Google not doing this for government data
High customization and tight integration will make all the advantages of multi-tenancy disappear
SaaS-ifying a traditional application
Chirag Jog, CTO at Clogeny Technologies and a PICT alumnus, is talking about the choices and issues faced in converting a traditional application to a SaaS, based on a real-life scenario they faced. The case study is of a customer in Pune, who was using his own infrastructure to host a standard (non-cloud) web app, and occasional spikes in user requests would cause his app to go down. Dedicated hosting was too expensive for him – hence the need to move it to the cloud.
Different choices were: SaaS on top of shared infrastructure (like slicehost), or SaaS on top of PaaS (like AppEngine), or SaaS on top of IaaS (Amazon EC2, Rackspace). PaaS seems great, but real-life problems make you re-think: Your existing app is written in a language (or version of a language) that is not supported on the PaaS. Or has optimizations for a local deployment. Specific libraries might be missing. Thus, there’s lots of code change, and lots of testing, and stability will be a problem.
Hence, they decided to go with SaaS on IaaS: basically, moving the existing local app, with the same software stack, onto a server in IaaS. The app itself was largely compute-intensive, so they decided to use the existing app as a 'server' and built a new client that talks to the server and serves up the results over the web. For this, the server(s) and client went on Amazon EC2 instances, Simple Queue Service (SQS) was used to communicate between the 'client' and the 'server', and automatic scaling was used to scale the app (multiple compute servers). This not only helped the scalability & load balancing, but they were also able to use it to easily create multiple classes of users (queue cheapo users, and prioritize priority customers) – improved business logic!
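We don't know the customer's actual code, but here is a minimal sketch of the queue-based decoupling described above, using the AWS SDK for Java's SQS client; the queue name and message format are placeholders:

```java
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class WorkQueueDemo {
    public static void main(String[] args) {
        AmazonSQSClient sqs = new AmazonSQSClient(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")); // placeholders
        String queueUrl = sqs.createQueue(new CreateQueueRequest("compute-jobs")).getQueueUrl();

        // Web-facing 'client' tier: enqueue a compute job instead of calling the app directly.
        sqs.sendMessage(new SendMessageRequest(queueUrl, "job-id=123;priority=high"));

        // Compute 'server' tier (running on auto-scaled EC2 instances): poll, process, delete.
        for (Message m : sqs.receiveMessage(new ReceiveMessageRequest(queueUrl)).getMessages()) {
            System.out.println("processing " + m.getBody());
            sqs.deleteMessage(new DeleteMessageRequest(queueUrl, m.getReceiptHandle()));
        }
    }
}
```

Using separate queues (or a priority field plus separate worker pools) is one simple way to get the "cheapo users vs. priority customers" split mentioned above.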
Cloud Security – Threats and Mitigations
(Image caption: If there's a malicious hacker in your cloud, you could be in trouble. And it is your responsibility, not the cloud vendor's. Image by Mikey G Ottawa via Flickr)
Vineet Mago and Naresh Khalasi (from the company that @rni and @dnene are associated with) are talking about the privacy and security issues they faced in putting their app on the cloud, and how to deal with them.
The usual claim about the cloud is that the vendor takes care of all the nitty-gritty, and the developer need not worry about it. The presenters disagree – especially where privacy and security are concerned. You have to worry about it. It is your responsibility. And if you're not careful, you'll get into trouble, because the vendors are not. Protecting against malicious hackers is still your responsibility; the cloud vendor does nothing about it.
The Cloud Security Alliance publishes best practices for security in the cloud. Recommended.
You need to worry about the following forms of security:
Physical security: who gets into the building? cameras?
System security: anti-virus, active directory, disabling USB
Application security: AAA, API security, release management (no release goes out without proper audits and checks by different departments)
And there are three aspects you need to think about:
Confidentiality: can your data get into the wrong hands? What if cloud provider employee gets his hands on the data?
Integrity: can the data be corrupted? Accidentally? Maliciously?
Availability: Can someone make your data unavailable for a period of time? DDoS?
Remember, if you’re using the cloud, the expectation is that you can do this with a very small team. This is a problem because the effort to take into account the security aspects doesn’t really reduce. It increases. Note: it is expected that a team of 2 people can build a cloud app (R&D). However, if a networked app needs to be deployed securely, we’d expect the team size to be 30.
State of the art in cloud security:
IaaS: provider gives basic firewall protection. You get nothing else.
PaaS: Securing the infrastructure (servers, network, OS, and storage) is the provider’s responsibility. Application security is your responsibility
SaaS: Network, system and app security is provider’s job. SLAs, security, liability expectation mentioned in agreements. Best. But least flexibility for developers.
Problems with cloud security:
Unknown risk profile: All of them, IaaS, PaaS, and SaaS, are unknowns as far as security is concerned. This industry is just 4 years old. There are areas that are dark. What to do?
Read all contracts/agreements carefully and ask questions.
Ask provider for disclosure of applicable logs and data.
Get partial/full disclosure of infrastructure details (e.g. patch levels, firewalls, etc.)
Abuse and nefarious use of cloud computing: Applies to IaaS and PaaS. If hackers are using Amazon EC2 instances to run malware, then there are two problems. First, the malware could exploit security loop-holes in the virtualization software and might be able to access your virtual machine, which happens to be on the same physical machine as the hacker's virtual machine. The other problem is that the provider's machines/IP-addresses enter public blacklists, and that will cause problems. What to do?
Look for providers that have strict initial registration requirements.
Check levels of credit card fraud monitoring and co-ordination used by the provider
Is the provider capable of running a comprehensive introspection of customer network traffic?
Monitor public blacklists for one’s own network IP blocks
Insecure Interfaces and APIs: 30% of your focus in designing an API should go into building a secure API. e.g. Twitter API does not use https. So anybody at this conference today could sniff the wi-fi here, sniff the network traffic, get the authentication token, and run a man-in-the-middle attack. Insecure API. What to do?
Analyze the security model of cloud provider’s interfaces
Build limits into your apps to prevent over-use (see the rate-limiting sketch below)
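A minimal sketch of the "build limits into your apps" idea: a per-API-key token bucket. The numbers are arbitrary, and a real deployment would need shared state across servers rather than an in-memory map:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Allows each API key a burst of requests plus a steady refill rate.
public class TokenBucketLimiter {
    private static final double CAPACITY = 10.0;        // burst size (illustrative)
    private static final double REFILL_PER_SEC = 5.0;   // steady-state rate (illustrative)

    private static class Bucket {
        double tokens = CAPACITY;
        long lastRefillNanos = System.nanoTime();
    }

    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    public synchronized boolean allowRequest(String apiKey) {
        Bucket b = buckets.computeIfAbsent(apiKey, k -> new Bucket());

        // Refill continuously based on elapsed time, capped at the bucket capacity.
        long now = System.nanoTime();
        double elapsedSec = (now - b.lastRefillNanos) / 1_000_000_000.0;
        b.tokens = Math.min(CAPACITY, b.tokens + elapsedSec * REFILL_PER_SEC);
        b.lastRefillNanos = now;

        if (b.tokens >= 1.0) {
            b.tokens -= 1.0;
            return true;   // caller may proceed
        }
        return false;      // caller should be told to slow down (e.g. HTTP 429)
    }
}
```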
Malicious Insiders: “In 3 years of working at company, you’ll have the root passwords of all the servers in the company!” Is your security policy based on the hope that all Amazon AWS employees are honest? What to do?
Know what are the security breach notification processes of your provider, and determine your contingency plans based on that information
Read the fine print in the contracts/agreements before deciding on cloud vendors
Shared Technology Issues: There are various ways in which a malicious program in a virtual machine can access underlying resources from the hypervisor and access data from other virtual machines by exploiting security vulnerabilities. What to do?
Implement security best practices for your virtual servers
Monitor environment for unauthorized changes/activity
Enforce vendor’s SLAs for patching and vulnerability remediation
Example: Amazon allows you to run penetration testing, but you need to request permission to do that
Summary
Overall, a good conference. Some great talks. Not too many boring talks. Quality of attendees was quite good. Met a bunch of interesting people that I hadn’t seen in POCC/PuneTech events. You should be able to find slides of all the talks on the conference website.
We reported an incorrect date for the Building Billion Dollar Software Companies talk, by Anand Deshpande and Shirish Deodhar. The talk is on Friday, 20th August, at 10am. (It is not on Wednesday, 18th as we earlier reported).
What: Building Billion Dollar Software Companies from Pune, with Anand Deshpande and Shirish Deodhar, presented by Software Exporters Association of Pune (SEAP) When: Friday, Aug 20, 10am-12noon Where: Dewang Mehta Auditorium, Persistent Systems, S.B. Road Registration and Fees: This event is free for all. Register by sending a mail to deshpande@synygy.com.
“How we got here, and how we plan to get there” by Anand Deshpande
Anand Deshpande, founder of Persistent Systems, which recently had a very successful IPO, will talk about how he “got here”, and will share his vision on how he plans to take his company to $1B, and “get there”.
“From Entrepreneurs to Leaders” by Shirish Deodhar
Shirish Deodhar is the author of a book on what founders of Indian software companies need to focus on to build $1B product companies in India. (See PuneTech excerpt of that book). He will share some of his insights during this talk.
(This is a live-blog of Dr. Vipin Chaudhary talk on Trends in High Performance Computing, organized by the IEEE Pune sub-section. Since this is being typed while the talk is going on, it might not be as well organized, or as coherent as other PuneTech articles. Also, links will usually be missing.)
Myths about High Performance Computing:
Commonly associated with scientific computing
Only used for large problems
Expensive
Applicable to niche areas
Understood by only a few people
Lots of servers and storage
Difficult to use
Not scalable and reliable
This is not the reality. HPC is:
Backbone for national development
Will enable economic growth. Everything from toilets to potato chips is designed using HPC
Lots of supercomputing is throughput computing – i.e. used to solve lots of small problems
“Mainstream” businesses like Walmart, and entertainment companies like DreamWorks Studios, use HPC.
(and a bunch of other reasons that I did not catch)
China is really catching up in the area of HPC. And Vipin correlates China’s GDP with the development of supercomputers in China. Point: technology is a driver for economic growth. We need to also invest in this.
Problems solved using HPC:
Movie making (like avatar)
Real time data analysis
weather forecasting
oil spill impact analysis
forest fire tracking and monitoring
biological contamination prediction
Drug discovery
reduce experimental costs through simulations
Terrain modeling for wind-farms
e.g. optimized site selection, maintenance scheduling
and other alternate energy sources
Geophysical imaging
oil industry
earthquake analysis
Designing airplanes (Virtual wind tunnel)
Trends in HPC.
The Manycore trend.
Putting many CPUs inside a single chip. Multi-core is when you have a few cores, manycore is when you have many, many cores. This has challenges. Programming manycore processors is very cumbersome. Debugging is much harder. e.g. if you need to get good performance out of these chips then you need to do parallel, assembly programming. Parallel programming is hard. Assembly programming is hard. Both together will kill you.
This will be one of the biggest challenges in computer science in the near future. A typical laptop might have 8 to 10 processes running concurrently. So there is automatic parallelism, as long as the number of cores is less than 10. But as chips get 30, 40 cores or more, individual processes will need to be parallel. This will be very challenging.
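Not from the talk, but to make the "individual processes will need to be parallel" point concrete, here is a small Java sketch that splits one computation across however many cores the machine reports:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ManycoreSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();   // a handful today, dozens tomorrow
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        long n = 1_000_000_000L;
        long chunk = n / cores;
        List<Future<Long>> parts = new ArrayList<>();

        // Split one big loop into per-core chunks; each chunk runs on its own thread/core.
        for (int i = 0; i < cores; i++) {
            final long start = i * chunk + 1;
            final long end = (i == cores - 1) ? n : (i + 1) * chunk;
            parts.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (long x = start; x <= end; x++) sum += x;
                    return sum;
                }
            }));
        }

        long total = 0;
        for (Future<Long> f : parts) total += f.get();   // combine partial results
        pool.shutdown();
        System.out.println("sum 1.." + n + " = " + total);
    }
}
```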
Oceans of Data but the Pipes are Skinny
Data is growing fast. In sciences, humanities, commerce, medicine, entertainment. The amount of information being created in the world is huge. Emails, photos, audio, documents etc. Genomic data (bio-informatics) data is also huge.
Note: data is growing way, way faster than Moore’s law!
Storing things is not a problem – we have lots of disk space. Fetching and finding stuff is a pain.
Challenges in data-intensive systems:
Amount of data to be accessed by the application is huge
This requires huge amounts of disk, and very fat interconnects
And fast processors to process that data
Conventional supercomputing was CPU bound. Now, we are in the age of data-intensive supercomputing. Difference: old supercomputing had storage elsewhere (away from the processor farm). Now the disks have to be much closer.
Conventional supercomputing was batch processed. Now, we want everything in real-time. Need interactive access. To be able to run analytic and ad hoc queries. This is a new, and difficult challenge.
While Vipin was a faculty member at SUNY Buffalo, they started the Data-Intensive Discovery Initiative (Di2). Now, CRL is participating. Large, ever-changing data sets. Collecting and maintaining the data is of course a major problem, but the primary focus of Di2 is to search in this data. e.g. security (find patterns in huge logs of user actions). This requires a new, different architecture from traditional supercomputing, and the resulting Di2 system significantly outperforms the traditional system.
This also has applications in marketing analysis, financial services, web analytics, genetics, aerospace, and healthcare.
High Performance Cloud Services at CRL
Cloud computing makes sense. It is here to stay. But energy consumption of clouds is a problem.
Hence, CRL is focusing on a green cloud. What does that mean?
Data center optimization:
Power consumption optimization on hardware
Optimization of the power system itself
Optimized cooling subsystem
CFD modeling of the power consumption
Power dashboards
Workflow optimization (reduce computing resource consumption via efficiencies):
Cloud offerings
Virtualizations
Workload based power management
Temperature aware distribution
Compute cycle optimization
Green applications being run in CRL
Terrain modeling
Wind farm design and simulation
Geophysical imaging
Virtual wind tunnel
Summary of talk
Manycore processors are here to stay
Programmability has to improve
Must match application requirements to processor architecture (one size does not fit all)
Computation has to move to where the data is, and not vice versa