IndicThreads, which organizes a number of tech conferences in Pune, has put out a call for speakers for its next two events – its flagship Java conference, whose 5th edition will be held in December 2010, and a new conference on mobile technologies, whose first edition will be in November 2010. The call for speakers for both conferences is open until 31st August, and is a good opportunity for techies in Pune to get visibility for their work, and to network with like-minded people without having to pay the hefty conference fees.
One attendee wrote: “The annual indicthreads.com java technology conference is Pune’s best and possibly one of India’s finest conferences on matters related to Java technologies. I looked forward to attending the same and was not disappointed a bit.”
He has written a fairly detailed post, including overviews of the sessions he attended, which is worth reading.
Here is a PuneTech article about the IndicThreads Java conference 2 years ago.
Earlier this month, IndicThreads held the first edition of its new conference on upcoming technologies, this edition focused on cloud computing. You can see PuneTech’s coverage (also see this article), the report by Janakiram, a senior technical architect at Microsoft, and this one by Arun Gupta, a technical evangelist at Sun (now Oracle). That should give you an idea of the kinds of talks that go into IndicThreads’ conferences.
Here are some other reasons I gave earlier for why you should apply for a speaker spot. The reasons are still valid today, so I’ll simply cut-and-paste them here:
If you’re accepted as a speaker, you get a free pass to the conference.
Become famous: being a speaker at a national conference is good for visibility, and all engineers should strive for visibility. It’s very important. Almost as important as being a good programmer. (Maybe more?)
Help out a good Pune initiative. More submissions will improve the quality of the conference, and having a high quality conference in Pune improves the overall stature of Pune as an emerging IT powerhouse.
And finally, I also said this:
I’m willing to bet that many people reading this will think – but I am not an expert. Not true. If you’ve spent a couple of years working on some specific aspect of testing, chances are that you’ve acquired expertise that you can present and add value to the understanding of others. You don’t have to have done groundbreaking research. Have you adopted a new tool that came out recently? Talk about it, because others will not have experience with its use. Have you used an old tool in a new way? Definitely submit a proposal. The others in this field would love to hear of this new wine in an old bottle.
(Disclaimer: In the past, a couple of times, PuneTech has received a complimentary pass from IndicThreads (sort of a “press pass”) for attending their conferences. There are no strings attached to this – and we try to be objective in our coverage of the conference. As per PuneTech policy, we don’t promote the actual conference on the PuneTech blog, since it’s a paid event, but we do promote the call for speakers, since that’s free, and we report on the event itself whenever possible, since a significant fraction of it ends up highlighting technology work being done in Pune.)
(This is a live-blog of a talk given by Kalpak Shah, at the Indic Threads Conference on Cloud Computing, held in Pune on 20/21 Aug 2010. Since it’s being typed in a hurry, it is not necessarily as coherent and complete as we would like it to be, and also links might be missing.)
Kalpak Shah is the founder and CEO of Clogeny, a company that does consulting & services in cloud computing. His talk is about the various choices available in cloud computing today, and how to go about picking the one that’s right for you.
These are the slides that were used by Kalpak for this talk. Click here if you can’t see the slideshow above.
Kalpak’s definition of a cloud:
If you cannot add a new machine yourself (i.e. without making a phone call or email), then it’s just hosting, not cloud computing
If you cannot pay as you go (i.e. pay per use) it is not cloud computing
If you don’t have APIs which allow integration with the rest of your infrastructure/environment, then it is not a cloud
Kalpak separates out cloud infrastructure into three parts, and gives suggestions on how to choose each:
Infrastructure as a service
Basically allows you to move your local server stuff into the cloud. Examples: Amazon EC2, Terremark vCloud, GoGrid Cloud, Rackspace Cloud
You should check:
Support and Helpdesk. Is it 24×7? Email? Phone?
Hardware and Performance. Not all providers are the same; for example, Amazon EC2 is not as good as Terremark.
OS support. Which OS and distributions are supported. Is imaging of server allowed? Is distribution and re-selling of images allowed? Not everybody allows you to save the current state of the server, and restart it later on a different instance.
Software availability and partner network. Example, Symantec has put up their anti-virus software for Windows on EC2. How many such partners are available with the provider you’re interested in? (EC2 is far ahead of everybody else in this area.)
APIs and Ecosystem. What APIs are available, and in what languages. Some providers don’t do a good job of providing backward compatibility. Others might not be available in the language of your choice. EC2 and Rackspace are the best in this area.
Licensing is a big pain. Open source software is not a problem, but if you want to put licensed applications on the cloud, that is a problem. e.g. IBM Websphere clustering is not available on EC2. Or Windows licenses cannot be migrated from local data center to the cloud.
Other services – How much database storage are you allocated? What backup software/services are available? What monitoring tools? Auto-scaling, load-balancing, messaging.
Kalpak has put up a nice comparison of Amazon AWS, Rackspace, GoGrid and Terremark on the above parameters. You can look at it when the PPT is put up on the IndicThreads conference website in a few days.
Platform as a Service
This gives you a full platform, not just the hardware. You get the development environment, and a server to upload the applications to. Scalability and availability are managed by the vendor. But there is much less flexibility than infrastructure-as-a-service. You are stuck with the programming language that the PaaS supports, and the tools.
For example, Google AppEngine, which supports only Python and Java. Or Heroku for Ruby on Rails.
PaaS is targeted towards developers.
Software as a Service
This gives you a consumer-facing software that sits in the cloud. You can start using the software directly, or you can extend it a bit. A business layer is provided, so you can customize the processes to suit your business. Good if what is provided fits what you already want. Not good if your needs are rather different from what they have envisioned.
Examples: Sales Force, Google Apps, Box.net, Zoho
Storage as a Service
Instead of storing data on your local disks, store it in the cloud. Lots of consumer adoption, and now enterprise usage is also growing. No management overhead, backups, or disaster recovery to worry about. And pay either flat fees per month, or by the gigabyte.
Examples: Mozy from EMC. Amazon S3. Rackspace CloudFiles. Carbonite. DropBox.
Comparing PaaS and SaaS
Some choices are made for you automatically, based on development language and available skill sets. Python or Java? Use Google AppEngine. Ruby on Rails? Use Heroku. A Microsoft shop? Use Azure.
Other ways to compare are the standard ones: size of vendor and ecosystem maturity. Tools, monitoring, connectors, etc. e.g. AppEngine has an Eclipse plugin, so if your developers are used to Eclipse (and they should be!) then this is very good. Another question to ask is this – will the vendor allow integration with your private cloud? Can you sync your online hosted database with your local database? If yes, that’s great. If not, that can be very painful and complicated for you.
Interesting Private Cloud Platforms
Some of the interesting options:
Eucalyptus: open source IaaS cloud computing platform.
VMWare Cloud: Partnered with Terremark. Expensive but worth it.
Appistry: Allows installation of the platform on Amazon EC2, or in your private data center. Allows application deployment and management, and various services across the stack: IaaS, PaaS, SaaS. Integration with SQL Azure, SharePoint, Dynamics CRM. Visual Studio development and testing. Supports multiple development languages.
Database in the cloud
You can either use regular relational databases (easy to use, everybody knows them, but scaling and performance need to be managed by you), or go NoSQL – non-relational databases like SimpleDB (Amazon), Hadoop (Yahoo), or BigTable (Google). These are supported and managed by the cloud vendor in some cases, and have inherent flexibility and scale. But querying is more difficult and less flexible.
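To make that querying tradeoff concrete, here is a purely illustrative sketch – deliberately no vendor API, just an in-memory map standing in for a key-value store: lookup by key is trivial, but a query that SQL answers with one WHERE clause becomes hand-written application code (or a full scan).

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only – not the SimpleDB/BigTable API. Shows why ad-hoc
// querying is harder against a key-value store than against SQL.
public class KeyValueQuery {
    static class Order {
        final String id; final String customer; final double amount;
        Order(String id, String customer, double amount) {
            this.id = id; this.customer = customer; this.amount = amount;
        }
    }

    public static void main(String[] args) {
        Map<String, Order> store = new HashMap<String, Order>();
        store.put("o1", new Order("o1", "asha", 700.0));
        store.put("o2", new Order("o2", "ravi", 120.0));

        // Lookup by key: the sweet spot of a key-value store.
        System.out.println(store.get("o1").customer);

        // SQL equivalent: SELECT id FROM orders WHERE amount > 500.
        // Against a key-value store this is a hand-written scan (or a
        // secondary index you build and maintain yourself).
        for (Order o : store.values()) {
            if (o.amount > 500) System.out.println(o.id);
        }
    }
}
```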
Business Considerations
Licensing is a pain, and can make the cloud unattractive if you’re not careful, so figure this one out before you start. SLAs are around 99.9% for most vendors, but with lots of fine print. They are still evolving and might not meet your standards, especially if you’re an enterprise. Also, if the SLA is not being met, the vendor will not tell you; you have to complain, and only then might they fix it. Overall, this is a grey area.
Pricing is a problem – it keeps changing (e.g. in the case of Amazon), so you can have trouble estimating it. Or the pricing is at a granularity you might not understand, e.g. 10 cents per million I/O requests. Do you know how many I/Os your app makes? Maybe not.
Compliance might be a problem – your government might not allow your app to be in a different country. Or, for the banking industry, there might be security certification required (for the vendor) before the cloud can be used.
Consider all of these before deciding whether to go to a cloud or not.
Summary
IaaS gives you the infrastructure in the cloud. PaaS adds the application framework. SaaS adds a business layer on the top.
Each of these is available as a public cloud (somewhere out there on the world wide web), or as a private cloud installed in your data-center. A private cloud is more expensive and more difficult to deploy, but your data stays on your premises, you have better (local) connectivity, and more flexibility. You could also have a hybrid cloud, where some stuff is in-house and some stuff is in the public cloud. And if your cloud infrastructure is good enough, you can easily move computation or data back and forth.
About the Speaker – Kalpak Shah
Kalpak Shah is Founder & CEO of Clogeny Technologies Pvt. Ltd. and guides the overall strategic direction of the company. Clogeny is focused on providing services and consultancy in the cloud computing and storage domains. He is passionate about the ground-breaking economics and technology afforded by the cloud computing platforms. He has been working on various cloud platforms including IaaS, PaaS and SaaS vendors.
(This is a live-blog of the Indic Threads Conference on Cloud Computing, that’s being held in Pune. Since it’s being typed in a hurry, it is not necessarily as coherent and complete as we would like it to be, and also links might be missing. Also, this has notes only on selected talks. The less interesting ones have not been captured; nor have the ones I missed because I had to leave early on day 1.)
This is the first instance of what will become IndicThreads’ annual conference on Upcoming Technology – and the theme this time is Cloud Computing. It will be a two day conference and you can look at the schedule here, and the profiles of the speakers here.
Choosing your Cloud Computing Service
The first talk was by Kalpak Shah, the founder and CEO of Clogeny, a company that does consulting & services in cloud computing. He gave a talk about the various choices available in cloud computing today, and how to go about picking the one that’s right for you. He separated out Infrastructure as a Service (IaaS), which gives you the hardware and basic OS in the cloud (e.g. Amazon EC2), then Platform as a Service (PaaS), which gives you an application framework on top of the cloud infrastructure (e.g. Google AppEngine), and finally Software as a Service (SaaS), which also gives you business logic on top of the framework (e.g. SalesForce). He gave the important considerations you need to take into account before choosing the right provider, and the gotchas that will bite you. Finally he talked about the business issues that you need to worry about before you choose to be on the cloud or not. Overall, this was an excellent talk. Nice broad overview, lots of interesting, practical and useful information.
The next talk is by Arun Gupta, about Java EE in the cloud – specifically Java EE 6, which is an extreme makeover from previous versions. It makes it significantly easier to deploy applications in the cloud, and it is well integrated with Eclipse, NetBeans and IntelliJ, so overall it is much easier on the developer than the previous versions.
Challenges in moving a desktop software to the cloud
Prabodh Navare of SAS is talking about their experiences with trying to move some of their software products to the cloud. While the idea of a cloud is appealing, there are challenges in moving an existing product to the cloud.
Here are the challenges in moving to a cloud based business model:
Customers are not going to switch unless the cost saving is exceptional. Minor savings are not good enough.
Deployment has to be exceptionally fast
High performance is an expectation. Customers somehow expect that the cloud has unlimited resources. So, if they’re paying for a cloud app, they expect that they can get whatever performance they demand. Hence, auto-scaling is a minimum requirement.
Linear scaling is an expectation. But this is much easier said than done. Parallelization of tasks is a big pain. Must do lots of in-memory execution. Lots of caching. All of this is difficult.
Latency must be low. Google and Facebook respond in a fraction of a second, so users expect you to as well.
If you’re using Linux (i.e. the LAMP stack), then, to achieve some of these things, you’ll need to use tools like Memcache and Hadoop.
You must code for failure. Failures are common in the cloud (at those scales), and your system needs to be designed to seamlessly recover from them.
Is customer lock-in good or bad? General consensus in cloud computing market is that data lock-in is bad. Hence you need to design for data portability.
Pricing: Deciding the price of your cloud based offering is really difficult.
Cost of the service per customer is difficult to judge (shared memory used, support cost, CPU consumed, bandwidth consumed)
In Kalpak’s talk, he pointed this out as one of the inhibitors of cloud computing for businesses.
Customers expect pay-as-you-go. This needs a full-fledged effort to build an appropriate accounting and billing system, and it needs to be grafted into your application
To support pay-as-you-go effectively, you need to design different flavors of the service (platinum, gold, silver). It is possible that this might not be easy to do with your product.
Multi-cloud programming with jCloud
This talk is by Vikas Hazrati, co-founder and “software craftsman” at Inphina.
Lots of people are interested in using the cloud, but one of the things holding them back is cloud vendor lock-in. If one cloud doesn’t work out, they would like to be able to shift to another. This is difficult.
To fix this problem, a bunch of multi-cloud libraries have been created which abstract out the clouds. Basically they export an API that you can program to, and they have implementations of their API on a bunch of major cloud providers. Examples of such multi-cloud libraries/frameworks are: Fog, Delta, LibCloud, Dasein, jCloud.
These are the things that are different from cloud to cloud:
Key-value store (i.e. the database)
File sizes
Resumability (can you stop and restart an application)
CDN (content delivery network)
Replication (some clouds have it and some don’t)
SLA
Consistency Model (nobody gives transaction semantics; everybody gives slightly different eventual consistency semantics)
Authorization
API complexity
APIs like jCloud try to shield you from all the differences in these.
jCloud provides a common API that works on Amazon, Rackspace, VMWare and a bunch of other cloud vendors. It’s open source, performant, has Clojure support, and most importantly, it allows unit testability across clouds. The testability is good because you can test without having to deploy on the cloud.
The abstractions provided by jCloud:
BlobStore (abstracts out key-value storage for: atmos, azure, rackspace, s3)
Compute (abstracts out vcloud, ec2, gogrid, ibmdev, rackspace, rimu)
Provisioning – adding/removing machines, turning them on and off
jCloud does not give 100% portability. It gives “pragmatic” portability. The abstraction works most of the time, but once in a while you can access the underlying provider’s API and do things which are not possible to do using jCloud.
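To give a flavor of what this looks like, here is a minimal sketch against jCloud’s BlobStore abstraction (the library is published as jclouds; class and method names are from memory and have varied across versions, so treat the details as approximate). Only the provider id – “aws-s3” here – names a specific cloud; swapping in “cloudfiles-us” or “azureblob” leaves the rest of the code unchanged.

```java
import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.blobstore.domain.Blob;

// Sketch: store one blob through the provider-neutral BlobStore API.
public class PortableUpload {
    public static void main(String[] args) {
        BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
                .credentials("identity", "credential") // your API keys
                .buildView(BlobStoreContext.class);
        try {
            BlobStore store = context.getBlobStore();
            store.createContainerInLocation(null, "my-container");
            Blob blob = store.blobBuilder("hello.txt")
                             .payload("hello, cloud")
                             .build();
            store.putBlob("my-container", blob);
        } finally {
            context.close();
        }
    }
}
```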
A Lap Around Windows Azure
Windows Azure is Microsoft’s entry in the PaaS arena.
This talk is by Janakiram, a Technical Architect (Cloud) at Microsoft.
Microsoft is the only company that plays in all three layers of the cloud:
IaaS – Microsoft System Center (Windows HyperV). Reliance is setting up a public cloud based on this in India.
PaaS – Windows Azure Platform (AppFabric, SQLAzure)
SaaS – Microsoft Online Services (MSOffice Web Application, MSExchange Online, MSOffice Communications Online, SharePoint Online)
The focus of this talk is the PaaS layer – Azure. Data is stored in SQLAzure, the application is hosted in Windows Azure and AppFabric allows you to connect/synchronize your local applications and data with the cloud. These together form the web operating system known as Azure.
The cloud completely hides the hardware, the scalability, and other details of the implementation from the developer. The only things the cloud exposes are: 1) Compute, 2) Storage, and 3) Management.
The compute service has two flavors. The “Web Role” is essentially the UI – it serves webpages and interacts with the user – and is based on IIS7. The “Worker Role” has no UI; worker roles are expected to be “background” processes, often long-running, that operate on the storage directly. You can have Java Tomcat, Perl, Python, or whatever you want running inside a worker role. They demonstrated WordPress working on Azure – by porting MySQL, PHP, and WordPress to the platform. Bottom line: you can put anything you want in a worker role.
Azure storage exposes a Blob (very much like S3, or any other cloud storage engine). This allows you to dump your data, serialized, to disk. It can be combined with a CDN service to improve availability and performance. In addition, you can use tables for fast, read-mostly access. It also gives you persistent queues. And finally, you get “Azure Drive”, a way to share raw storage across your apps. All of this is available via a REST interface (which means that any app, anywhere on the web, can access the data – not just .NET apps).
Building an Azure application is no different from designing, developing, debugging, and testing an ASP.NET application. There is a local, simulated cloud interface that allows you to try everything out locally before deploying it to the cloud.
Simone Brunozzi, a Technology Evangelist at Amazon Web Services, is talking about Amazon’s EC2.
AWS has the broadest spectrum of services on offer in the cloud computing space, and the best partner/developer/tools ecosystem.
Overview of Amazon Web Services: Compute (EC2, Elastic MapReduce, Auto Scaling), Messaging (SQS, Simple Notification Service), Storage (S3, EBS, Import/Export), Content Delivery (CloudFront), Monitoring (CloudWatch), Support, Database (SimpleDB, RDS), Networking (Virtual Private Cloud, Elastic Load Balancing), Payments & Billing (FPS – Flexible Payments Service), e-Commerce (FWS – Fulfillment Web Service, Amazon DevPay), Web Traffic (Alexa Web Information, Alexa Top Sites), Workflow (Amazon Mechanical Turk)!! See this link for more.
AWS exists in US-West (2 locations), US-East (4 locations), Europe (2 locations), Asia-Pacific (2 locations). It’s architected for redundancy, so you get availability and failover for free.
EC2 essentially gives you virtual servers in the cloud that can be booted from a disk image. You choose an instance type from small to extra-large (i.e. how much memory and CPU), and install an image on that machine. You can choose from a lot of pre-configured images (Linux, Solaris, Windows) – basic OS installs, or more customized versions created by Amazon or the community. You can further customize this as you want because, obviously, you get root/administrator access on this computer. Then you can attach a “disk” to this “computer” – basically an EBS volume, which is 1GB to 1TB in size. An EBS volume is persistent, and is automatically replicated. If you want even better durability, snapshot the EBS volume and store it to S3.
Scaling with EC2: Put an ELB (Elastic Load Balancer) in front of your EC2 instances, and it will automatically load balance across those (and give you a single URL to expose to your users). In addition, ELB does health-checks on the worker instances and removes the ones that are not performing up to the mark. If you use the CloudWatch monitoring service, you can do things like: “if average CPU usage across all my instances is above 80%, add a new instance, and remove it once average CPU usage drops below 20%.” At that point, adding and removing instances is fully automated.
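As a rough sketch of the alarm half of that rule, using the AWS SDK for Java (v1-style names from memory; the credentials, thresholds and policy ARN are placeholders): the alarm fires on sustained high average CPU and triggers a previously created Auto Scaling scale-up policy. A mirror-image alarm at 20% would trigger the scale-down policy.

```java
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClient;
import com.amazonaws.services.cloudwatch.model.PutMetricAlarmRequest;

// Sketch only: assumes an Auto Scaling group and a scale-up policy
// already exist; scaleUpPolicyArn is a placeholder for that policy's ARN.
public class ScaleUpAlarm {
    public static void main(String[] args) {
        AmazonCloudWatchClient cloudWatch = new AmazonCloudWatchClient(
                new BasicAWSCredentials("accessKey", "secretKey"));
        String scaleUpPolicyArn = "arn:aws:autoscaling:...:policyName/scale-up";

        PutMetricAlarmRequest alarm = new PutMetricAlarmRequest()
                .withAlarmName("high-average-cpu")
                .withNamespace("AWS/EC2")
                .withMetricName("CPUUtilization")
                .withStatistic("Average")
                .withComparisonOperator("GreaterThanThreshold")
                .withThreshold(80.0)
                .withPeriod(300)           // five-minute samples
                .withEvaluationPeriods(2)  // sustained for ten minutes
                .withAlarmActions(scaleUpPolicyArn);
        cloudWatch.putMetricAlarm(alarm);
    }
}
```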
It’s important to mention that AWS is very enterprise ready: it has certifications and security (needed for banking apps), SLAs, worldwide ecosystem of service integrators, and other partners. Another enterprise feature: Virtual Private Clouds. Basically, carve out a small area in the Amazon Public cloud which is only accessible through a VPN, and this VPN ends in the enterprise. Hence, nobody else can access that part of the cloud. (Note: this is different from a Private Cloud, which is physically located in the enterprise. Here, it is still physically located somewhere in Amazon’s data-centers, but a VPN is used to restrict access to one enterprise.)
Multi-tenancy in the Cloud
Difference between single-tenant and multi-tenant apps. Image by andreasvongunten.com via Flickr
Vikas Hazrati (referenced earlier in this post) is talking about multi-tenancy: how do you give two different customers (or groups of customers) the impression that they are the only ones using a particular instance of a SaaS, when you’re actually running only one installation of the software?
Multi-tenancy is basically when you have a single infrastructure setup (software stack, database, hardware), but you want multiple groups to use it, and each should see a completely isolated/independent view. Basically, for security, one customer group does not want their data to be visible to anybody else. But we don’t want to give each group their own instance of the infrastructure, because that would be too expensive.
Variations on multi-tenancy: the easiest is to not do multi-tenancy at all – have separate hardware & software. The next step is to have multiple virtual machines on shared hardware, so the hardware is shared but the software is not. If you’re sharing the middleware, you can do the following: 1. multiple instances of the app on the same OS with independent memory, 2. multiple instances of the app with shared memory, and 3. true multi-tenancy.
What level do you need to do multi-tenancy at? It could be at any layer: the database of course needs to be separable for different tenants. You can also do it at the business logic layer – so different tenants want different configurations of the business logic. And finally, you could also do this at the presentation logic – different tenants want different look’n’feel and branding.
Multi-tenancy in the database. Need to add a tenant-id to the database schema (and the rest of the schema is the same). A big customer concern in this is that bugs in queries can result in data-leakage (i.e. a single poorly written query will result in your competitor seeing your sales leads data). This can be a huge problem. A typical SaaS vendor does this: put smaller customers in the same database with tenant-id, but for larger customers, offer them the option of having their data in a separate database.
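A minimal JDBC sketch of the discipline this requires (the schema, table and helper names are hypothetical): every query must carry the tenant-id predicate, because one forgotten WHERE clause is exactly the data leak described above.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical schema: sales_leads(tenant_id, id, name, amount).
// The only thing separating tenants is this one predicate, so real SaaS
// codebases tend to add it centrally (in a data-access layer or a query
// validator) rather than trusting every developer to remember it.
public class TenantScopedQuery {
    public static void printLeads(Connection conn, long tenantId) throws Exception {
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT id, name, amount FROM sales_leads WHERE tenant_id = ?");
        stmt.setLong(1, tenantId); // never omit, never string-concatenate
        ResultSet rs = stmt.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString("name"));
        }
        rs.close();
        stmt.close();
    }
}
```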
Multi-tenancy in the cloud. This is really where the cloud shines. Multi-tenancy gives very low costs, especially compared to the non-multi-tenant version (also known as the on-premise version). For example, the cost of multi-tenant JIRA is $10 per month, while the on-premise version is $150 per month (for the same number of users).
Multi-tenancy example: SalesForce does a very fine-grained approach. Each user gets his own portion of the database based on primary-key. And there is a validation layer between the app and the database which ensures that all queries have a tenant-id. Fairly fine-grained, and fairly secure. But it is quite complex – lots of design, lots of thinking, lots of testing.
One big problem with multi-tenancy is that of runaway customers. If a few customers use a really large share of the resources, the other customers will suffer. Limiting their resource usage, or moving them elsewhere, are both difficult to do.
In general, some providers believe that having each app developer implement multi-tenancy in the app is inefficient. The solution to this is to virtualize the database/storage/other physical resources. In other words, for example, the database exports multiple virtual databases, one per tenant, and the underlying multi-tenant-database handles all the issues of multi-tenancy. Both Amazon’s RDS and Windows SQLAzure provide this service.
Google released the namespaces API for Google AppEngine just a few days back, and that takes a different approach. The multi-tenancy is handled at the highest level of the app, but there’s a very easy way of specifying the tenant-id and everything else is handled by the platform. However, note that multi-tenancy is currently supported only for 3 of their services, and will break if you use one of the others.
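A small sketch of the idea (based on the NamespaceManager API; treat the surrounding request-handling details as assumptions): set the namespace once, early in request handling, and the supported services scope all reads and writes to that tenant automatically.

```java
import com.google.appengine.api.NamespaceManager;

// Sketch: on AppEngine, the namespace acts as the tenant-id. Setting it
// early in the request scopes the datastore (and the other supported
// services) to that tenant for the rest of the request. How tenantId is
// derived (login, subdomain, etc.) is up to the app.
public class TenantScoping {
    public static void applyTenant(String tenantId) {
        if (NamespaceManager.get() == null) {
            NamespaceManager.set(tenantId);
        }
    }
}
```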
Issues in multi-tenancy:
Security: all clients are worried about this
Impact of other clients: customers hogging resources is still not a solved problem
Some customers are willing to pay for a separate instance: and it’s a pain for us to implement and manage
Multi-tenancy forces users to upgrade when the app is upgraded. And many customers don’t want to be upgraded forcefully. To handle this issue, many SaaS providers make new features available only via configuration options, not as a forced upgrade.
Configurations/customizations can only be done up to some level
There is no user acceptance testing. We test, and users have to take it when we make it live.
When should you not use multi-tenancy?
Obviously, when security is a concern. e.g. Google not doing this for government data
High customization and tight integration will make all the advantages of multi-tenancy disappear
SaaS-ifying a traditional application
Chirag Jog, CTO at Clogeny Technologies and an ex-student of PICT, is talking about the choices and issues faced in converting a traditional application to a SaaS, based on a real-life scenario they faced. The case study is of a customer in Pune who was using his own infrastructure to host a standard (non-cloud) web app, and occasional spikes in user requests would cause the app to go down. Dedicated hosting was too expensive for him – hence the need to move to the cloud.
Different choices were: SaaS on top of shared infrastructure (like slicehost), or SaaS on top of PaaS (like AppEngine), or SaaS on top of IaaS (Amazon EC2, Rackspace). PaaS seems great, but real-life problems make you re-think: Your existing app is written in a language (or version of a language) that is not supported on the PaaS. Or has optimizations for a local deployment. Specific libraries might be missing. Thus, there’s lots of code change, and lots of testing, and stability will be a problem.
Hence, they decided to go with SaaS on IaaS – basically moving the existing local app, with the same software stack, onto a server in IaaS. The app itself was largely compute-intensive, so they decided to use the existing app as a ‘server’ and built a new client that talks to the server and serves up the results over the web. The server(s) and client went on Amazon EC2 instances, the Simple Queue Service (SQS) was used to communicate between the ‘client’ and the ‘server’, and automatic scaling was used to scale the app across multiple compute servers. This not only helped with scalability & load balancing, but also made it easy to create multiple classes of users (queue the cheapo users, and prioritize the priority customers) – improved business logic!
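Here is a rough sketch of the queueing half of that design, using the AWS SDK for Java (v1-style API; the queue names, message format and customer-class flag are all hypothetical): two queues give two classes of service, and the compute servers simply drain the priority queue first.

```java
import java.util.List;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.Message;

// Sketch: the web tier enqueues jobs according to the customer's plan;
// compute servers (EC2 instances behind auto-scaling) poll the priority
// queue first, and fall back to the standard queue only when it is empty.
public class JobQueues {
    public static void main(String[] args) {
        AmazonSQSClient sqs = new AmazonSQSClient(
                new BasicAWSCredentials("accessKey", "secretKey"));
        String priorityQ = sqs.createQueue("priority-jobs").getQueueUrl();
        String standardQ = sqs.createQueue("standard-jobs").getQueueUrl();

        // Web tier ('client'): route the job by customer class.
        boolean isPriorityCustomer = true; // hypothetical flag
        sqs.sendMessage(isPriorityCustomer ? priorityQ : standardQ,
                        "{\"job\":\"render-report\",\"user\":42}");

        // Compute tier ('server'): drain priority before standard.
        String q = priorityQ;
        List<Message> msgs = sqs.receiveMessage(q).getMessages();
        if (msgs.isEmpty()) {
            q = standardQ;
            msgs = sqs.receiveMessage(q).getMessages();
        }
        for (Message m : msgs) {
            // ... process m.getBody() here, then acknowledge:
            sqs.deleteMessage(q, m.getReceiptHandle());
        }
    }
}
```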
Cloud Security – Threats and Mitigations
Vineet Mago and Naresh Khalasi (from the company that @rni and @dnene are associated with) are talking about the privacy and security issues they faced in putting their app on the cloud, and how to deal with them.
The good thing about the cloud is supposed to be that the cloud vendor takes care of all the nitty-gritty, so the developer need not worry about it. The presenters disagree – especially where privacy and security are concerned. You have to worry about it; it is your responsibility, and if you’re not careful you’ll get into trouble, because the vendors are not careful for you. Protecting against malicious hackers is still your responsibility; the cloud vendor does nothing about it.
The Cloud Security Alliance publishes best practices for security in the cloud. Recommended.
You need to worry about the following forms of security:
Physical security: who gets into the building? cameras?
System security: anti-virus, active directory, disabling USB
Application security: AAA (authentication, authorization, accounting), API security, release management (no release goes out without proper audits and checks by different departments)
And there are three aspects you need to think about:
Confidentiality: can your data get into the wrong hands? What if a cloud provider employee gets his hands on the data?
Integrity: can the data be corrupted? Accidentally? Maliciously?
Availability: Can someone make your data unavailable for a period of time? DDoS?
Remember, if you’re using the cloud, the expectation is that you can do this with a very small team. This is a problem, because the effort needed to address the security aspects doesn’t really reduce – it increases. Note: the expectation is that a team of 2 people can build a cloud app (R&D). However, if a networked app needs to be deployed securely, we’d expect the team size to be 30.
State of the art in cloud security:
IaaS: provider gives basic firewall protection. You get nothing else.
PaaS: Securing the infrastructure (servers, network, OS, and storage) is the provider’s responsibility. Application security is your responsibility
SaaS: Network, system and app security is provider’s job. SLAs, security, liability expectation mentioned in agreements. Best. But least flexibility for developers.
Problems with cloud security:
Unknown risk profile: All of them, IaaS, PaaS, and SaaS, are unknowns as far as security is concerned. This industry is just 4 years old. There are areas that are dark. What to do?
Read all contracts/agreements carefully and ask questions.
Ask provider for disclosure of applicable logs and data.
Get partial/full disclosure of infrastructure details (e.g. patch levels, firewalls, etc.)
Abuse and nefarious use of cloud computing: applies to IaaS and PaaS. If hackers are using Amazon EC2 instances to run malware, there are two problems. First, the malware could exploit security loopholes in the virtualization software and might be able to access your virtual machine, which happens to be on the same physical machine as the hacker’s virtual machine. Second, the provider’s machines/IP-addresses enter public blacklists, and that will cause problems. What to do?
Look for providers that have strict initial registration requirements.
Check levels of credit card fraud monitoring and co-ordination used by the provider
Is the provider capable of running a comprehensive introspection of customer network traffic?
Monitor public blacklists for one’s own network IP blocks
Insecure Interfaces and APIs: 30% of your focus in designing an API should go into building a secure API. e.g. Twitter API does not use https. So anybody at this conference today could sniff the wi-fi here, sniff the network traffic, get the authentication token, and run a man-in-the-middle attack. Insecure API. What to do?
Analyze the security model of cloud provider’s interfaces
Build limits into your apps to prevent their over-use
Malicious Insiders: “In 3 years of working at a company, you’ll have the root passwords of all the servers in the company!” Is your security policy based on the hope that all Amazon AWS employees are honest? What to do?
Know what are the security breach notification processes of your provider, and determine your contingency plans based on that information
Read the fine print in the contracts/agreements before deciding on cloud vendors
Shared Technology Issues: There are various ways in which a malicious program in a virtual machine can access underlying resources from the hypervisor and access data from other virtual machines by exploiting security vulnerabilities. What to do?
Implement security best practices for your virtual servers
Monitor environment for unauthorized changes/activity
Enforce vendor’s SLAs for patching and vulnerability remediation
Example: Amazon allows you to run penetration testing, but you need to request permission to do that
Summary
Overall, a good conference. Some great talks. Not too many boring talks. Quality of attendees was quite good. Met a bunch of interesting people that I hadn’t seen in POCC/PuneTech events. You should be able to find slides of all the talks on the conference website.
We reported an incorrect date for the Building Billion Dollar Software Companies talk, by Anand Deshpande and Shirish Deodhar. The talk is on Friday, 20th August, at 10am. (It is not on Wednesday, 18th as we earlier reported).
What: Building Billion Dollar Software Companies from Pune, with Anand Deshpande and Shirish Deodhar, presented by the Software Exporters Association of Pune (SEAP)
When: Friday, Aug 20, 10am-12noon
Where: Dewang Mehta Auditorium, Persistent Systems, S.B. Road
Registration and Fees: This event is free for all. Register by sending a mail to deshpande@synygy.com.
“How we got here, and how we plan to get there” by Anand Deshpande
Anand Deshpande, founder of Persistent Systems, which recently had a very successful IPO, will talk about how he “got here”, and will share his vision on how he plans to take his company to $1B, and “get there”.
“From Entrepreneurs to Leaders” by Shirish Deodhar
Shirish Deodhar is the author of a book on what founders of Indian software companies need to focus on to build $1B product companies in India. (See PuneTech excerpt of that book). He will share some of his insights during this talk.
(This is a live-blog of Dr. Vipin Chaudhary’s talk on Trends in High Performance Computing, organized by the IEEE Pune sub-section. Since this is being typed while the talk is going on, it might not be as well organized, or as coherent, as other PuneTech articles. Also, links will usually be missing.)
Myths about High Performance Computing:
Commonly associated with scientific computing
Only used for large problems
Expensive
Applicable to niche areas
Understood by only a few people
Lots of servers and storage
Difficult to use
Not scalable and reliable
This is not the reality. HPC is:
Backbone for national development
Will enable economic growth. Everything from toilets to potato chips is designed using HPC
Lots of supercomputing is throughput computing – i.e. used to solve lots of small problems
“Mainstream” businesses like Walmart, and entertainment companies like Dreamworks Studios, use HPC.
(and a bunch of other reasons that I did not catch)
China is really catching up in the area of HPC, and Vipin correlates China’s GDP growth with the development of supercomputers there. The point: technology is a driver of economic growth, and we need to invest in it too.
Problems solved using HPC:
Movie making (like avatar)
Real time data analysis
weather forecasting
oil spill impact analysis
forest fire tracking and monitoring
biological contamination prediction
Drug discovery
reduce experimental costs through simulations
Terrain modeling for wind-farms
e.g. optimized site selection, maintenance scheduling
and other alternate energy sources
Geophysical imaging
oil industry
earthquake analysis
Designing airplanes (Virtual wind tunnel)
Trends in HPC.
The Manycore trend.
Putting many CPUs inside a single chip. Multi-core is when you have a few cores; manycore is when you have many, many cores. This has challenges: programming manycore processors is very cumbersome, and debugging is much harder. For example, to get good performance out of these chips, you need to do parallel, assembly programming. Parallel programming is hard. Assembly programming is hard. Both together will kill you.
This will be one of the biggest challenges in computer science in the near future. A typical laptop might have 8 to 10 processes running concurrently, so there is automatic parallelism as long as the number of cores is less than 10. But as chips get 30, 40 cores or more, individual processes will need to be parallel. This will be very challenging.
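As a minimal illustration of what “making an individual process parallel” involves – plain Java, no vendor API – here is work split across a thread pool sized to the core count. Even this toy version surfaces the obligations that sequential code never had: partitioning the work, collecting partial results, and combining them.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Toy example: sum a large array by splitting it across all cores.
public class ParallelSum {
    public static void main(String[] args) throws Exception {
        final long[] data = new long[50000000];
        for (int i = 0; i < data.length; i++) data[i] = i % 7;

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> parts = new ArrayList<Future<Long>>();
        int chunk = data.length / cores;

        for (int c = 0; c < cores; c++) {
            final int from = c * chunk;
            final int to = (c == cores - 1) ? data.length : from + chunk;
            parts.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (int i = from; i < to; i++) sum += data[i];
                    return sum;
                }
            }));
        }

        long total = 0;
        for (Future<Long> part : parts) total += part.get(); // combine
        pool.shutdown();
        System.out.println("sum = " + total);
    }
}
```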
Oceans of Data but the Pipes are Skinny
Data is growing fast. In sciences, humanities, commerce, medicine, entertainment. The amount of information being created in the world is huge. Emails, photos, audio, documents etc. Genomic data (bio-informatics) data is also huge.
Note: data is growing way, way faster than Moore’s law!
Storing things is not a problem – we have lots of disk space. Fetching and finding stuff is a pain.
Challenges in data-intensive systems:
Amount of data to be accessed by the application is huge
This requires huge amounts of disk, and very fat interconnects
And fast processors to process that data
Conventional supercomputing was CPU bound. Now, we are in the age of data-intensive supercomputing. Difference: old supercomputing had storage elsewhere (away from the processor farm). Now the disks have to be much closer.
Conventional supercomputing was batch processed. Now, we want everything in real-time. Need interactive access. To be able to run analytic and ad hoc queries. This is a new, and difficult challenge.
While Vipin was a faculty member at SUNY Buffalo, they started a data-intensive discovery initiative (Di2); now, CRL is participating. It deals with large, ever-changing data sets. Collecting and maintaining the data is of course a major problem, but the primary focus of Di2 is searching in this data, e.g. for security (finding patterns in huge logs of user actions). This requires a new architecture, different from traditional supercomputing, and the resulting Di2 system significantly outperforms the traditional one.
This also has applications in marketing analysis, financial services, web analytics, genetics, aerospace, and healthcare.
High Performance Cloud Services at CRL
Cloud computing makes sense. It is here to stay. But energy consumption of clouds is a problem.
Hence, CRL is focusing on a green cloud. What does that mean?
Data center optimization:
Power consumption optimization on hardware
Optimization of the power system itself
Optimized cooling subsystem
CFD modeling of the power consumption
Power dashboards
Workflow optimization (reduce computing resource consumption via efficiencies):
Cloud offerings
Virtualizations
Workload based power management
Temperature aware distribution
Compute cycle optimization
Green applications being run in CRL
Terrain modeling
Wind farm design and simulation
Geophysical imaging
Virtual wind tunnel
Summary of talk
Manycore processors are here to stay
Programmability has to improve
Must match application requirements to processor architecture (one size does not fit all)
Computation has to move to where the data is, and not vice versa
What: IEEE Pune presents a session on High Performance Computing, by Dr. Vipin Chaudhary, CEO of Computation Research Laboratories (CRL), the makers of the Eka supercomputer
When: Saturday, 14 August, 5pm-7pm
Where: Institution of Engineers, Shivajinagar, JM Road, Opposite Modern Cafe
Registration and Fees: This event is free for all. Register by sending mail to IEEE125.Pune.Symposium@gmail.com.
Details: Contact Amey Asodekar 020-6606-8494
Computation Research Laboratories (CRL) is the Pune-based company from the Tatas which built the Eka supercomputer. Eka was the 4th fastest when it launched a few years back, but has now dropped to 33rd; nevertheless, it remains one of the fastest private (i.e. not funded by any government) supercomputers in the world.
Earlier this year, Dr. Vipin Chaudhary took over as the CEO of CRL. I assume this marks a change in direction for CRL. Earlier, the focus was on building Eka, which required lots of cutting-edge research in hardware, software, and facilities, among other things. During that phase it was developed and run by academics (CRL was started with the help of Dr. Narendra Karmarkar, and most of the senior executives in CRL were ex-IIT-Bombay professors). Now, however, it is likely that they’re looking for a return on the investment, and would like to start marketing high performance computing services using Eka. They have a team working on high performance infrastructure and applications using the Eka hardware, and being a purely private company, they are in a unique position to offer their hardware, software and services to companies who might be interested in supercomputing applications – think airplane design and modeling (e.g. somebody like Boeing), or car design (e.g. for in-house use at Tata Motors). Dr. Chaudhary, who relocated from the US for this role, was earlier involved with two startups (Corio, acquired by IBM in 2005, and Cradle Technologies, 2000-2003), in addition to being an associate professor at SUNY Buffalo. The credentials he brings – specifically his strong technical and business background in this area – are impressive.
CRL is working on some of the most complex technologies in Pune. For that reason alone, any techie in Pune should be interested in this talk.
Do you have some cool new technology that you would like to showcase? In that case, now is your chance to show it for free at the India International Trade Fair 2010 that’s happening in Delhi starting on 14th November, 2010.
Basically, Maharashtra has been allocated 11000 sq. ft. at this trade fair to show the coolest stuff from Maharashtra, and out of that 3000 sq. ft. has been allocated to Pune. The Science and Technology Park (STP) has been given the responsibility of using this space to highlight the achievements of Pune. They have decided to try to find a few innovative companies/technologies and showcase them (for free).
Specifically:
It should be a company or product that actually exists (not just an idea or a concept)
It should be something that is interesting or innovative. Something that shows that Maharashtra is on the cutting edge
Specific domains of interest include CleanTech, GreenTech, Environment, e-Governance, and m-Governance; but entries need not necessarily be limited to these domains
The Trade Fair starts on 14th November, and will be at Pragati Maidan, Delhi
If you are a company who fits this description, or if you know some other company who does, please get in touch with Rohit Srivastwa (rohit.srivastwa @ scitechpark.org.in), Advisor, Science & Technology Park, Pune. If you are a company/product from Mumbai or elsewhere in Maharashtra, don’t give up hope. You can still apply, and if found interesting enough, they’ll try to accommodate you.
Entrepreneurs, investors, government agencies, and domestic-company & MNC executives in India need to think beyond “hi-tech” ventures and creation of IP, and should focus instead on adapting existing technologies for Indian needs, points out Kaushik Gala in a new essay published on his website. Kaushik is a Business Development Manager at the Pune-based startup incubator Venture Center, so he spends a lot of time talking to and thinking about all the players of our technology and startup ecosystem mentioned in the first sentence of this paragraph.
The whole article is definitely worth reading, and we give here a few excerpts from the article to whet your appetite:
So, will hi-tech entrepreneurs & startups drive economic growth & wealth creation in India? Consider this assertion by economist John Kay:
Advancing technology is the principal determinant of economic growth for the twenty or so rich countries of the world. However most of the world is well inside that technological frontier. For these countries, prospects of economic growth depend little on technology and principally on advances in their economic, political and social infrastructure.
Over the two centuries of rapid economic growth in rich states, the pattern has been for one or two countries to join the group of advanced states every decade or two. In the last fifty years or so these new members of the rich list include Italy, Finland and Ireland within Europe and the first Asian economies (Japan, Hong Kong, Singapore) to operate at this technological frontier.
Later, he points out that there are three kinds of tech startups in India: 1) Technology innovators (who are creating new IP at the cutting edge of science & technology), 2) Technology imitators (who are reverse engineering technology from elsewhere and implementing a copy here), and 3) Technology adapters (who take a foreign technology, and then adapt it to Indian conditions. This usually involves significant changes, and there’s usually a key piece of (non-technology) innovation required to make it successful locally).
He gives this example of technology adaption:
My favorite example is Sarvajal. They sell clean drinking water – but with many twists:
They’ve developed a (patent pending!) device called Soochak which combines existing water purification technology with cloud computing.
Their innovative ‘distributed’ business model uses pre-payment, franchising, branding, etc. to make it profitable to sell relatively affordable water to remote rural areas.
Success for Sarvajal is as much – or more – dependent on understanding the psychology of rural customers and village entrepreneurs (franchisees) as it is on the technology.
Kaushik ends by saying that while all three avatars of technology enterprises are required for wealth creation in India, being an adopter/adapter in India offers far more opportunities to excel.
(In this guest post, Markus Hegi, partially-Pune-based CEO of partially-Pune-based company Colayer, laments the death of Google Wave, and points out that the concept behind the Wave is right. Google should have re-launched a new, improved Wave, he feels, because the world does need a paradigm shift in business communications. This article is a shortened & modified version of a post published on ex.colayer.com)
3 days ago, Google announced that it would stop the development of Wave and would stop supporting it by the end of the year. Even though the buzz about Wave and the (visible) progress of Wave had been low for the last few months, the shutdown is surprising: I would have expected a re-launch, a change of the architecture, integration with Gmail – anything but a complete halt. The concept behind Wave is right and ahead of its time – and Google could have been a leading player in this space!
When I looked at Wave for the first time, right after the announcement one year ago, it struck me how similar the concepts were to what we had been working on for years with Colayer. I started Colayer in ’99, having suffered the mess of email communication myself. As a travelling business consultant, I was convinced that this could not be the way we will communicate in the future! This is fundamentally wrong! – I mean: the basic idea of SENDING information on the web is wrong! (You GO TO and ARE ON Facebook, Twitter, Yahoo – you don’t ‘download’ them.) Google Wave addresses exactly these same issues.
We were excited to see what approach Google would take to implement the new paradigm of online communication – but we also quickly realized that the product, at this stage, would not be usable, for 3 main reasons:
The Technical Architecture was too heavy and complex
The Operability – The way to operate the tool was limiting
The Notification – the way the users would be notified about updates in their many waves.
If you used this product in a real-world scenario with heavy communication, it would not work! But Wave was at its very start. We thought Google would quickly realize the problems and implement solutions – and with its market power, Google would be able to initiate the paradigm shift in online communication.
But after the Wave launch, it seemed that innovation stopped. Yes, there was development, and improvements & many extensions were released. But the above 3 problems were not addressed. They couldn’t be solved through improvements or extensions; they needed fundamental shifts in the product design – which never happened. And as many users seemed to lose patience too, Google pulled the plug after only one year, citing poor user adoption.
What went wrong? Gartner has a valid point: “startup innovation” simply has no place in a large enterprise software company. Well, this is not exactly what Gartner writes, but this is essentially the meaning: either you are in the business of ground-breaking, paradigm-shifting innovation (startups), or you are serving a large base of enterprise customers – both together is almost impossible, because there is no ground-breaking innovation without messing with your customers. After Wave was launched, even though it was still tagged as ‘beta’, the team could not just say to its 100,000 users: “you know, we just realized that the architecture has a fundamental problem – let’s start all over again!” – which we, in a small company, did several times …
Maybe another problem of Wave was that Google chose the wrong market: Wave was intended for the broad consumer market, as well as for enterprises – but the paradigm shift happens elsewhere first. If you observe today’s kids and young nerds, you can imagine how the next generation of businesses will use online communication: email for them is ‘lame’, just used for communication with outsiders, older people and the ‘conservative’ business world. Why would you need email anyway in a world of Facebook & Foursquare?
After 10 years, we are still at the beginning of the massive paradigm shift in online communication. I am eager to see who will join the journey next!
About Google Wave
Wave is a web application for real-time communication and collaboration.
Announced in May 2009, Wave attracted a lot of attention for a couple of months. Google stopped the project after just a little more than a year, citing poor user adoption.
About the author – Markus Hegi
Markus Hegi founded Metalayer (now renamed to Colayer) 10 years ago. The Colayer platform is a software technology to create collaborative web sites.
Colayer is a Swiss-Indian company with headquarters in Zurich, Switzerland and a development center in Pune, India. Markus has been ‘commuting’ between Zurich and Pune for 10 years and spends almost half his time here in Pune. See his LinkedIn profile, or follow him on Twitter.