Category Archives: Event Report

Internet of Things: Challenges and Opportunities – Jürgen Mössinger

(This is a liveblog of SEAP’s event where Jürgen Mössinger of Bosch talked about “Internet of Things: Challenges and Opportunities”)

About the Speaker

Jürgen Mössinger is the Head of a Business Unit at Robert Bosch Engineering and
Business Solutions, India. He has an extensive background in embedded software, IT and product development, and has been with Bosch for the last 19 years. Jürgen has held several positions in platform and customer product development for control units and was the spokesperson of the AUTOSAR Consortium in 2008. Currently he heads a business unit at Robert Bosch Engineering and Business Solutions, responsible for products, services and solutions in the areas of Consumer Goods, Industrial Technology, Energy and Building Technology, and Automotive Electronics. Besides these classical areas, Jürgen is working on Smart Home, Smart City and Connected Industry (Industry 4.0).

About Bosch

The Bosch Group is a 40+ billion euro company with 280,000 employees and 225 manufacturing sites. Automotive technology is its biggest sector, but the group is also active in industrial technology, energy and building technology, and consumer goods – all over the world.

About the Internet of Things

The Internet of Things (IoT) is the term given to small computing devices embedded in anything and everything around us. These devices collect data about their environment and are connected to the internet – allowing data collection and analysis at a scale never seen before, and, of course, fine-grained control of the environment.

Things, in the IoT, can refer to a wide variety of devices such as heart monitoring implants, biochip transponders on farm animals, electric clams in coastal waters, automobiles with built-in sensors, or field operation devices that assist fire-fighters in search and rescue. Current market examples include smart thermostat systems and washer/dryers that use Wi-Fi for remote monitoring.

Opportunities and Challenges in IoT

Here is a random collection of interesting points made during the presentation:

  • By 2020, 50 billion devices will be connected to the internet. This forms the basis of the IoT
  • IoT will be everywhere. Huge potential: Smart Cities, Smart Homes, Smart Industry, Smart Wearables, Logistics (e.g. transport fleets, tracking)
  • More than two thirds of consumers expect to buy IoT devices for their homes by 2019, and nearly half expect to buy wearable technology
  • The wearables market was expected to reach $1.5 billion in 2014
  • By 2020 there will be over 100 million light bulbs and lamps worldwide that will be connected to the internet wirelessly
  • Just a 1% improvement in an industrial setting via the use of IoT can result in billions of dollars of operational cost savings:
    • $30B fuel cost saving in aviation industry
    • $66B fuel cost saving in gas powered fleets
    • $63B productivity improvement in healthcare
    • $90B reduction in capital expenditure in oil and gas exploration and development
    • $27B productivity improvement in rail industry

Examples of IoT usage in Smart Homes:

  • Appliance Information available on the cloud/smartphones
  • Appliances operate automatically / efficiently
    • Through the use of scheduling or historical patterns or sensors
  • Control House from Anywhere:
    • Customer is aware and fine-tunes the settings from anywhere
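The "scheduling or historical patterns or sensors" idea above can be sketched as a simple rule. This is a minimal, hypothetical illustration in Python (the function name, schedule and threshold are my assumptions, not any actual Bosch product logic):

```python
from datetime import time

# Hypothetical smart-appliance rule: run the water heater during the
# scheduled morning window, or whenever the tank gets too cold.
def should_heat(now, tank_temp_c,
                schedule=(time(6, 0), time(8, 0)), min_temp_c=40.0):
    in_schedule = schedule[0] <= now <= schedule[1]
    too_cold = tank_temp_c < min_temp_c
    return in_schedule or too_cold

print(should_heat(time(7, 0), 55.0))   # True: inside the morning schedule
print(should_heat(time(14, 0), 35.0))  # True: tank below minimum temperature
print(should_heat(time(14, 0), 55.0))  # False: off-schedule and warm enough
```

A real system would combine such rules with historical usage patterns and cloud-side control, as the bullets describe.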

What does the IoT need?

  • Sensors: heat, temperature, light and various other things
  • Long battery life; can’t go around the house changing/charging batteries all the time
  • GPS
  • Local Network, Global Network
  • Software to tie it all together

IoT means Big Data:

  • 4.5 billion pieces of content are shared on Facebook every day
  • Youtube users upload 100 hours of new video every minute
  • By 2019 we’ll have 9.2 billion mobile phones
  • iTunes receives about 100 billion app downloads in one quarter
  • Huge in 3 different dimensions:
    • Volume: Raw amount of data generated
    • Velocity: speed with which the data is generated
    • Variety: the various different sources and types of data (sensor data, text, images, videos). Some of the data is structured and a lot of it is unstructured.
  • We will need next generation algorithms and tools to make sense of all this data so that we can generate usable insights
    • Software and Algorithms
    • Data Modelers
    • Data Visualizers
    • Data Architects
    • Business Analysts
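To make the three Vs slightly more concrete, here is a toy Python sketch (all data invented for illustration) that processes a mixed stream of structured sensor readings and unstructured text, one item at a time:

```python
# Variety: structured sensor readings mixed with unstructured text notes.
readings = [
    {"type": "sensor", "device": "thermostat-1", "value": 21.5},
    {"type": "text", "body": "window left open in hallway"},
    {"type": "sensor", "device": "thermostat-1", "value": 19.0},
]

def summarize(stream):
    temps, notes = [], []
    for item in stream:                  # velocity: handle items as they arrive
        if item["type"] == "sensor":     # structured: aggregate numerically
            temps.append(item["value"])
        else:                            # unstructured: keep for later analysis
            notes.append(item["body"])
    avg = sum(temps) / len(temps) if temps else None
    return {"avg_temp": avg, "notes": notes}

print(summarize(readings))
# {'avg_temp': 20.25, 'notes': ['window left open in hallway']}
```

At IoT scale the same separation has to happen across billions of items per day, which is why the next-generation tools and roles listed above are needed.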

Actual examples of IoT usage by Bosch:

  • Fleet Management: 10% reduction of fuel cost per trip in underperforming routes – this was in Bangalore
  • 25% reduction of testing time in Manufacturing
  • 33% reduction in calibration cost of hybrid ECUs in automotive calibration
  • 15% reduction in inventory holding costs in a supply chain

Smart Cities:

  • This is a difficult area
  • Lots of countries/cities claiming that they want to become smart
    • Narendra Modi has also announced Smart Cities initiative in India
      based on the PPP model (Public Private Partnership)
  • The main problem is that these smart city initiatives do not have a business model. The investment has to be made by someone (the city, or the company in the PPP) while the benefits are reaped by others (the citizens). There is no direct return on investment for the investor
  • No city has really solved this problem.

Smart Industry:

  • We have already had 3 industrial revolutions:
    • 1st industrial revolution: the original mechanical industrial revolution
    • 2nd industrial revolution: the assembly line
    • 3rd industrial revolution: electronics
  • Now, with IoT we are ready for the 4th.
    • Smart production: communication between each part and the machine:
      • Dynamic optimization of scheduling of processes and machines
      • Customized processing for each individual product
    • Horizontal Integration
      • Communication between:
        • Parts Suppliers
        • Transportation
        • Within the Factory
      • When all of these are talking to each other and we have data, new optimizations become possible
        • Example: +10% increase in productivity, -30% reduction in stock
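As a toy illustration of what "dynamic optimization of scheduling of processes and machines" can mean at its simplest, here is a hedged Python sketch of greedy load balancing: each incoming job goes to the machine with the least work queued. The jobs, durations and greedy rule are assumptions for illustration, not Bosch's actual method:

```python
import heapq

# Greedy "dynamic scheduling" sketch: send each incoming job to the
# machine that currently has the least total work queued.
def assign_jobs(job_durations, n_machines):
    machines = [(0, m) for m in range(n_machines)]  # (total_load, machine_id)
    heapq.heapify(machines)
    assignment = {m: [] for m in range(n_machines)}
    for job, duration in enumerate(job_durations):
        load, m = heapq.heappop(machines)   # least-loaded machine
        assignment[m].append(job)
        heapq.heappush(machines, (load + duration, m))
    return assignment

print(assign_jobs([5, 3, 2, 4], 2))  # {0: [0, 3], 1: [1, 2]}
```

A real factory scheduler would also react continuously to sensor data from each part and machine, as the bullets describe, but the core idea of re-optimizing assignments as conditions change is the same.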

RailsGirls Pune – The Gold and the Beautiful

(This event report of RailsGirls Pune – a one-day conference for women working on Ruby-on-Rails, first appeared on the Josh Software Blog, and is reproduced here with permission for the benefit of PuneTech readers.)

I was skeptical! Just from my experience of organizing RubyConf India and some local meetups, I was sure the turnout would be low and the event would be ‘casually boring’. If only I knew how crazily wrong I was!

A few days before the event, seeing about 250 registrations was itself a revelation. Then the turnout of over 100 girls was exhilarating, to say the least. I mean, how often does one get a chance to address a crowd at a techie event with ‘Hello Ladies!’? I am indeed lucky.

Josh Software was one of the sponsors, along with ThoughtWorks and CodeIgnition. It was great to see a LOT of mentors turn up for the event – which in my opinion was one of the success factors of Rails Girls Pune – EVERYONE got attention and there was never a case where someone was sitting idle or looking around for help.

The demographics of the crowd were varied – quite a few girls had come down to Pune for this event, some even travelling most of the night to get here! There were a lot of girls with a prior technical background, but none in Ruby. And there were 5 people who were entirely new to programming, and they too managed to code, gleefully if I may add! It’s not often you get to hear an artist talk about controllers and methods. 🙂


One good cultural change that stood out was the kids running around the event with their dads baby-sitting them while the moms coded!

Proceedings were kicked off by Gunjan from ThoughtWorks. She spoke about Women in Technology and Leadership, and why such events are so important for everyone.


After a brief introduction from other sponsors, all participants were divided among various tables and each table was assigned a mentor, though I saw at least 2 or 3 mentors at each table. Everyone was eager to help and answer questions.

As the mentors were aware of the RailsGirls teaching process, it started off with getting everyone familiar with the setup before moving on to other things like the BentoBox and explaining basic concepts of Rails. Before long, everyone was busy working hard!

There were lots of discussions happening amidst kids running around and regular trips to get more coffee, tea or snacks. Before long, it was time for lunch and a breather.

ThoughtWorks organized an awesome lunch, and after lunch it required more than just Ruby to shake everyone up. What better than a Chicken Dance for 5 minutes, which started slow with a little bit of shyness among the audience but reached a crescendo with everyone joining in!



There were three Lightning Talks to get the crowd back into the coding groove. I spoke about why we love Ruby with the talk “My Grandmother Can Read My Code”. Praveena gave a talk on “Perks of Being a Programmer” and Nishita gave a talk about “How the Internet Works!”.

The afternoon was one busy session and everywhere you went, you heard only about Controllers, Scaffold, Bootstrap etc. – music to my ears!

Towards the end of the session, we had some participants showcase their work after uploading it to Heroku, and some of it was really good.

The mentors did a wonderful job and got a lot of appreciation from all. Well, at the end of the day, a lot of people made new friends. (Do not miss the “I <3 Matz” on the bottom right corner of the photo!)

Anup and Prathamesh wound up proceedings by introducing everyone to the Pune Ruby User Group, and after the prolonged networking that ensued, everyone called it a day and probably slept the night dreaming of instance variables!

A special thanks to Shilpa Nair and Hephzibah, who did a wonderful job organizing RailsGirls. Hope to see more of these events soon. Do like the Rails Girls Facebook Page and show your appreciation!

@RubyConfIndia 2013 Pune Event Report by @JonathanWallace

(This event report of the recently concluded RubyConf 2013, which was held in Pune a couple of weekends back, by @JonathanWallace first appeared at the @BigNerdRanch blog. It is reproduced here with permission, for the benefit of PuneTech readers.)

In my professional career, I’ve never felt prouder than when I was accepted as a speaker at RubyConf India. I’ve spoken at numerous user groups, helped organize events, and even performed in front of huge crowds, but this was the first time I had been given the opportunity to speak at a conference.

My goal was to put together a quality presentation on debugging that would help the attendees in at least one small way. If each person, from advanced to beginner, were to walk away with at least one new insight or piece of information, then I would be happy.

I found myself achieving that much and more. I met so many friendly people at this conference, had a lot of good conversations and made a number of #rubyfriends—more than at any other conference I’ve attended. And while the accolades and interest in my talk were wonderful, discussing my work, good code and great co-workers at Big Nerd Ranch was the best part of all.
The Talks

There were many other excellent talks at the conference and I enjoyed all of the ones I attended, but I found myself most inspired by three talks in particular:

  1. Siddhant Chothet’s talk on accessibility and Ruby illustrated how easily the Ruby community could improve accessibility for users and developers. This talk wowed us as Siddhant demonstrated the challenges and impressive capabilities of blind developers. I would be remiss if I didn’t note that though Siddhant did have slides, he did not read from them, as he is blind himself. Not only was this his first talk at a conference, Siddhant gave the whole presentation from memory! If you want to support his work, check out the brails project.
  2. Sau Sheong Chang created beautiful music for us using pure Ruby, turning tweets into music. He shared just enough of the basics of music theory and the physics of music to walk us through his newly released muse gem. I love music and have played the piano for many years, and I look forward to creating music with one of my favorite tools, Ruby. Step one? Add a hubot script that makes use of muse in some fashion.

  3. Our own Andy Lindeman gave the closing keynote. In this talk, he revealed how much we all benefit from open-source software, thanks to the many developers who have given freely of their time and effort. I highly recommend that everyone in the Ruby community see the talk. While Andy’s talk focused only on the code written in Ruby libraries, I find myself flabbergasted at how much benefit we derive from open source, free technologies when considering the full stack of operating system, database server, web server, web browsers and client-side technologies!

Next year

But a summary of a few talks alone doesn’t do this conference justice. It’s definitely not to be missed, and I’m already planning a talk for next year. I hope to see you there.

(For another event report, see this post by student Vikas Yaligar.)

Event Report: Transforming and Scaling Education – D.B. Phatak

(This is a live-blog of the talk D.B. Phatak gave at the Grand Finale event of the Turing100 Lecture Series, titled “Rethinking Education – Transforming and Scaling the Learning Model”. Note: this is a live blog, so please excuse the fact that it is unstructured, incomplete, and might contain errors. Also note: this talk was live-cast to 30+ colleges and other institutions all over India.)

Anand Deshpande’s introduction of D.B. Phatak

  • Prof. Phatak is my Guru. I have not been his student, formally, but I have known him since the early 90s, and I always go to him for advice before anything important.
  • He did his engineering from Indore and PhD from IIT Bombay.
  • He got the Padmashree last year
  • He is a great speaker, and anytime he is giving a talk, you should attend it.

Transforming and Scaling Education – by D.B. Phatak

  • This talk will touch upon these topics: 1) Learning, 2) Education, 3) Scaling, 4) Open Sourcing of Knowledge and 5) Technology Crystal Gazing


  • We are all familiar with learning in groups. Classroom learning. Fixed time slots. Typical: 1 teacher, 50 students, 1 hour. Teacher has (hopefully) pre-prepared the lecture. The students are supposed to listen with attention, throughout the hour, but this never happens.
  • So does learning happen in a classroom? Partially. Maximal learning happens when you try to apply knowledge that you’ve acquired.
  • All the advocates of e-learning and e-everything claim that if there is access to good quality knowledge, that is enough for anyone to learn. This is false. If just access to knowledge was good enough for learning, then librarians would be the smartest people on earth.
  • Learning needs applying knowledge, failing to apply that knowledge, correcting the failures. Without these steps, learning cannot happen.
  • Can an individual learn entirely on his/her own? Eklavya. Yes, there are cases of this. But don’t forget that there is only one Eklavya, and 7 billion non-Eklavya humans who also need to learn.
  • Why do we learn? Primarily for survival. Then for the betterment of one’s life. And two other reasons, which not everybody follows: learning for the sake of learning, and learning to advance human knowledge (research).
  • Unfortunately, we seem to have separated “research” and “education”. But research shouldn’t be just the domain of PhDs writing papers. The most important qualities needed in research should really be part of everyone’s mindset – Meticulousness. Curiosity. Precise Articulation. Diligence. Discipline. Rigor.
  • The most important learning happens from the age of 0 to 5 (-9months to 5 if you consider Abhimanyu), before the child goes to school. Social behavior. Basic Articulation. A second language. Ethics. Humility.


  • We think of education as a formal system of knowledge being imparted through training and/or research. But education is happening all the time. Every interaction with someone else is an opportunity for self-education.
  • Our existing system is broken. Too much emphasis on rote learning. Children cannot apply what they learn. Industry says that less than 25% of our engineers are employable (and apparently the number in China is even lower).
  • We as a society have concluded that getting a degree with good marks implies that your career will be successful. And also, that the manner in which the degree and marks are gained is irrelevant – so optimizations (classes, cheating, leaked papers) are widespread.
  • The teaching is syllabus driven, and the learning by students is examination driven. The teacher must stick to the syllabus because the exam papers will be checked by a different teacher based on a paper set by a third teacher.
  • Is autonomy the answer?
  • The problem is not that our existing system is broken. The problem is that our system refuses to break! It is so well-entrenched. So any solution cannot emerge from complete disruption. The change has to be incremental and needs to work with the system.


  • A claimed advantage of India is the demographic dividend: 300 million people under the age of 19. Educating them well can lead to huge gains for us. But we spend a very small fraction of our GDP on education (compared to other developing countries).
  • Gross enrollment ratio – the ratio of students who actually enroll for higher education to those actually eligible for higher education – is 60-80% in developed countries. In India it was 8% about 6 years ago. It has been brought to 13-14% now. We are hoping to bring it up to 30% by 2020. Double! To achieve that, we need to double all our educational institutions in 7 years. This is a tall order.
  • Another problem: last year, our engineering colleges’ capacity was 1.45 million, whereas enrollment was 1.25 million. So, while capacity is growing, enrollment is not keeping pace. Parents and students have begun to believe that getting an engineering degree might not be worth it in all cases.
  • This is the situation with engineering education. It is much worse as you go lower.
  • Think of the problems we face, and the scale of the problems. And we need to solve them at that scale. If we double all our higher educational infrastructure in 7 years, and we convince students/parents to join the new schools, we’ll get the enrollment ratio to just 30%. And we need to get to 80%.
  • Teachers need to be convinced that their main job is not to teach. The main job is to enable students to learn. The student should be able to transcend the knowledge of the teacher if/where needed. Also, each student should be able to learn in the best possible manner for that student. The manner will be different for different students.
  • Our current education system allows a fixed amount of time for learning, but given that different human beings learn at different rates, it results in variable amount of learning. How does our education system deal with this difference? We grade the students. And denigrate the students who get lower marks. Not just society, friends and family start looking down on the student, but the student himself loses confidence and motivation.
  • What is needed is fixed amounts of learning in variable time (as long as the time is not too long). Is it possible to do this? Maybe – the technology, for the first time in human history, might allow this. Conventional education does not admit this possibility.

Open Sourcing of Knowledge

  • One of the important reasons for creation of the copyright and patent laws was to ensure that after a fixed amount of time, the knowledge contained there is available for all of humanity. But industry is manipulating the system to increase the amount of time.
  • The open source movement and Creative Commons are ways to get around the problems now being caused by copyright and patent laws.
  • There is lots of knowledge available on the net for free downloads, but because they are not appropriately licensed, it is not possible to distribute this knowledge in a system like Aakash. It is quite likely that the original author would have happily consented to the knowledge being used in this way, but often it is not possible to contact the person, or other problems get in the way. So good knowledge gets lost because of lack of awareness of open sourcing of knowledge.
  • However, if there are companies who are spending money on innovation, and would like to benefit monetarily from those innovations, it is only fair to expect that they use copyrights and licenses to enforce their rights. But as far as knowledge dissemination is concerned, open sourcing the knowledge is what will benefit the most people. There needs to be a balance between these two forces.
  • To do anything sustainably – including bringing changes into education – there needs to be revenues and financial management. But, for some reason, India has conferred a moral high ground to the education sector, and there is a belief that education sector should not be making money. That is not a sustainable thought.
  • Premji Foundation has an initiative in rural Karnataka where they are using computers to enhance education. They’re not teaching computers to the students – they are using computers to improve the teaching of Kannada, Maths, etc. The program is funded by the foundation, the government, and the students. (There was a proposal to make this free for the students by taking more money from the government, but they found it works better if the students pay.) The foundation has used controlled studies to show that the technology results in significant improvements in education.
  • IIT-Bombay runs a course to train teachers. It reaches 10000 teachers in 250 institutions across India. They’re trained by faculty from IIT Bombay. 4 of these centers are in Pune. This initiative is extremely well received. It is a costly model because it costs Rs. 6400 per teacher for a 2-week program – but by introducing a fee for teachers (because the teachers and colleges do benefit from this program) they’re hoping to reduce the cost to run this program.
  • MOOCs (Massive Open Online Courses) like Coursera and MIT OCW are a new entrant with a lot of promise. IIT-Bombay has just concluded an MoU with edX and should be the first Indian university to offer a MOOC, in about 6 months. Some courses can easily scale up to 1 lakh students. This would ensure that quality education reaches the masses.
  • Sam Pitroda makes the point that students who earn credits via MOOCs should be permitted to transfer those credits/marks to their own educational institution, i.e. a COEP student taking an IIT-Bombay MOOC should be able to get COEP credits for passing that course.
  • Currently MOOCs are free, but there needs to be a revenue model for MOOCs. IIT-Bombay believes that knowledge should be free – so all the course material should be available using an open source license, but actual interaction can be paid.
  • But, one problem of MOOC is that often students don’t complete the course, or don’t take it seriously. One big advantage of actual physical classrooms is that in spite of all the distractions, you still end up paying attention to a significant fraction of the lecture.
  • These problems with MOOCs will be solved, and MOOCs will play a very large role in scalable education in India. Via internet. On the cloud.

Technology Crystal Gazing

  • MOOCs will be big – and will become the predominant technology platform for education. (IIT-Bombay picked edX instead of Coursera and others because edX is open source.)
  • Everything will be on the cloud
  • Bandwidth requirements will increase significantly
  • Every educational institution should plan for 1 Gbps bandwidth.

Concluding remarks

  • Government must invest much more money in education. Government should not be a benevolent dictator. Education institutions, good or bad, need to get autonomy. Why do we have bad institutions who are simply degree factories? Because industry and society tremendously value degrees and marks. As soon as industry discovers that it can quickly and accurately evaluate students/job-seekers on the basis of their actual capabilities (as opposed to their marks and degrees), universities’ arrogance will disappear, and education will become much better.
  • The same technology which allows us to teach lakhs of students simultaneously and scalably, will also allow companies to assess and evaluate lakhs of students quickly and accurately.
  • Education does not end when you graduate from an educational institution. Education continues forever. Students and professionals need to understand this, and companies need to start focusing on this aspect.
  • Parents need to re-think their priorities. Forcing your child to prepare for the JEE for 2 years causes them to lose two years of their life that they could be using for actual education. And they’re learning to cheat – attending coaching classes and skipping college, yet getting “full attendance” at college anyway – and this is being encouraged by parents.
  • It is well established that the best education of a child happens in his/her own mother tongue. Yet, most parents opt for English education. This is acceptable for parents who converse with the children in English on a regular basis. But this is a tiny fraction.
  • Students: enjoy education. Enjoy solving problems. Enjoy life. Dream big. But work hard.
  • There are 300 million Indians younger than 19, younger than the people in this room – and they’re waiting for us to do something for them. Independent of whatever else you are doing in your profession, you must think of making some contribution to making life more meaningful in terms of better learning and better education for those 300 million.

Event Report: What markets to do a startup in India – by @dkhare (Lightspeed Ventures)

(This is essentially a live-blog of Dev Khare’s talk on the Fund Raising Environment in India at the Pune Open Coffee Club. It is a list of bullet points and notes jotted down during the talk, so please excuse the lack of organization and coherence (the fault is mine, not Dev’s). The article also covers just a subset of what was talked about, but is being published because something is better than nothing.)

If you’re pitching to a VC, better pitch to someone who is already interested in the area that you are doing your startup in. If you have to convince them of the potential of an area, it is an uphill battle, and they will not be particularly useful to you. Also, areas that VCs are interested in tend to be areas where there is a lot of potential money to be made.

With that in mind, here are the areas Lightspeed is interested in – and by definition, these are areas that Dev thinks have a lot of potential. So start-ups should go after these areas. Interesting markets and investment themes (the names in parentheses represent companies Lightspeed has invested in):

  • Internet has about 120-150 million users in India. Here, the important theme is networks and marketplaces (Indian Energy Exchange, Fashionara, Limeroad)
  • Mobile has 900 million users, and here direct to consumer business models are of interest (askLaila, and Pune-based Dhingana)
  • Education, with 350 million k12 eligible students, in which education technology platforms are interesting (tutorVista, acquired by Pearson)
  • Financial services, which has 250 million un-banked population, where speciality lending and payments/loyalty are interesting areas (ItzCash)
  • and finally Consumer, with a 150 million middle class in India, and the interesting areas are emerging consumer brands and consumer services (OneAssist).

Mobile is a big and very interesting area. In mobile, there are about 30 million smartphones in India and it could go to 100 million by end of next year. That is a large enough population to build some great companies. The investment horizon of Lightspeed is 5 to 7 years, and if you think that far ahead, there could be 500 million smartphone users in India. Companies serving them have a huge potential. One problem is that it is very difficult to make money in the mobile space, except if you are the service provider. But the money service providers are making from VAS is going down, and that is a business that is dying. So they have to start working with app developers. Vodafone has already started sharing 70% revenues with developers, Idea will do that soon, and others will follow. But remember, they are taking 30% just to allow you to be on their platform – they will not do your marketing. So you still need to do the marketing yourself.

In mobile, various companies have tried to do Indian games (Ramayana games), and that has not worked out. General/betting games like bingo/rummy/poker are doing OK. But this is probably not a big space.

In the mobile enterprise space, there are various problems, including the fact that people bring their own devices, and you have to integrate with various backends. So some horizontal solutions (like email) have gotten traction, but nobody really made much money, because of the competition. So this area is limited – it is more suitable for ISVs and service providers rather than product companies. And one thing you don’t want to do is build a mobile app development framework/platform. 50+ such companies have been funded but none of them are going to make it.

Education is a big and interesting market, where there are lots of pain points, but the biggest challenge is that you cannot sell technology without having to go through schools and colleges and their administrators and trustees and that is a very long and painful road. So it is a big market, but a challenging one.

Question: Why don’t Indian VCs take risk?

Answer: The problem is that there are lots of challenges in the market in India. There are lots of companies who are funded at the seed level who are not going to make it to Series A. And a typical Indian company takes maybe 15 years to give returns, and a VC company cannot really afford to have a number of such investments, since they have a 5-7 year outlook.

Question: There are 7-8 accelerators/incubators in India. What value do they add?

Answer: One of the problems in India is the lack of mentorship in the startup space. There are not too many people who have succeeded and are accessible. (Lots of companies have created a brand (e.g. Zomato, OlaCabs etc.) but are still 10 years away from really making it big.) So, all the good accelerators/incubators in India are really run by one person who succeeded and is now trying to give back to the community. And the most important value that the incubator adds is mentorship from that one person. So, if you engage with an incubator, you need to ensure that you do get mentorship from that person. Another thing they do is some branding/marketing in the form of demo days, and just the stamp of approval of being a graduate of that incubator.

Scaling challenges in India: market friction, series of small markets, lack of trust, scarcity of mentors, scarcity of strategic talent, and constricted capital. This ensures that it will take you much longer to grow your company than you think (and than what would be needed in the US).

So how do you grow? You need to figure out your strategy: are you going after India only, or are you going to start from the Indian market and then go to the US market, or some other adjacent market. Another problem is that there are many markets that are really small in India, and there is no point in going after that market (at least for a VC backed company).

Lack of trust is a big problem in India. In general, everybody from banks to entrepreneurs tries to protect themselves from the fraud that will be committed by 1% of the people, and hence makes life hard for the remaining 99%. Not many people here are willing to lose money on that 1%, even though those losses would be more than offset by the money made from everyone else. Everything has to be pre-paid, which is another thing that slows down growth. (One solution to the trust problem is to build a brand and run radio/TV advertisements. See below for more on this.)

Enterprise selling or SMB selling in India is tough. Market sizes are small. Most enterprises in India do not value packaged products, and are unwilling to pay. Even when they’ve agreed to pay, getting them to actually pay is difficult. The sales cycles are very long, and the markets are all very fragmented. Small-business lead-generation and subscription (prepaid) companies are doing OK – companies that look like consumer companies (like Naukri or Zomato) but really make their money from small businesses.

It is possible for companies in India to go after enterprises in the US. If you’re going to do this, go after a market in the US that is mature, self-service oriented, and SaaS-based. An example is Zendesk. You need to build a very easy self-service product with a very easy on-ramp; it should be easy to pay for and easy to use. Druva started off that way. There are 10 other good companies in India doing this. (Note: self-service for the Indian market does not work. It has to be feet-on-the-street sales, and that will only scale after 10-15 years, and even then it will be a mid-sized company.)

Question: Which city should I be in to do a startup?

Answer: If you’re doing a tech company, you should be in Bangalore or Delhi. Pune is OK – investors have started showing an interest, and Bombay is close enough. Places like Chennai and Hyderabad also show promise. But if you’re anywhere outside those (e.g. Lucknow), you need to move.

Question: What are your thoughts on education as a space?

Answer: If you’re in the education space, you need to be in the curriculum, not something supplemental. Also, it is very difficult to sell to parents directly – you have to sell via the schools. Remember, selling directly to the consumer in this space is very difficult unless you have a brand and strong marketing. Another point: if you’re selling to schools, your product does not really have to be that good – you don’t have to be good at student outcomes, you have to be good at selling to schools. So invest in a sales team, and not so much in the product. Distance education / e-learning is an interesting area. Online assessments are also interesting – like conducting recruitment tests for Infosys, or exams for schools.

Question: How come there isn’t much investment in India for clean energy?

Answer: One of the issues is that the VC community wants to invest in areas that have high margins and low capital expenditure. They like to invest a few million dollars to run an experiment, validate the market and then scale the business. By contrast, energy is an area with lots of capital investment and long gestation periods. It takes 5 to 7 years for the science to work, then the real product starts, and then there are other challenges. So this is a problem everywhere in the world. In the US there was a clean energy bubble a few years back, but that has burst, and those VCs have fled. And in India, the situation is worse – whom do you sell the energy to, in India? The government, which is a problem.

Question: How do you get advertisers, especially from the US?

Answer: First, you need to have scale before advertisers are interested. If you have less than 10 million monthly uniques, don’t even bother – it’s a waste of time. Just focus on scaling your traffic. Once you have reached those numbers, there are various options open to you, including outsourcing the sales of your ad inventory to third parties.

No consumer company in India can really succeed without a physical infrastructure of some sort. And that is painful, difficult and slow. But once you’re able to build it, you can really leverage it well, and it becomes a barrier to entry. Another issue in India is that a consumer company really needs to build a brand (because of the trust issue mentioned earlier). Hence, it is a good idea for a consumer company to start spending on radio/TV ads early. But remember, do not try to scale before you have a microcosm of your business working in a profitable, sustainable way on a small scale – i.e. get something working well in Pune, and then scale to 20 cities.

Question: Should startups go after existing markets, or create new markets?

Answer: In India, the only real way to grow a company is to grow your category. You cannot simply survive by stealing a market away from others – you have to grow the market. And for this, you need to evangelize your market – use the ecosystem to do this. This takes time, so you need to raise a lot of capital and then go big.

Question: What are some areas in which entrepreneurs should not do startups?

Answer: These are some areas in which entrepreneurs keep trying to start companies, in spite of the fact that there is no real hope for anybody there: development tools (frameworks for mobile app development, libraries/frameworks for easy software development), e-commerce and daily deals (this area is overfunded), and unified communications.

Question: Where to get seed investment from?

Answer: A list of active seed investment groups in India: Blume Ventures, Harvard Angels, India Internet Fund, India Quotient, Indian Angel Network, Jungle Ventures, Kae Capital, Morpheus, Mumbai Angels, Qualcomm Ventures, Seed Fund, Venture Nursery, Yournest.

Event Report: VLSI Design Conference Pune 2013

(This is an event report of the VLSI Design Conference that was held in Pune in Jan 2013, by Shakthi Kannan. It originally appeared on his blog, and is reproduced here with permission for the benefit of PuneTech readers.)

The 26th International Conference on VLSI Design and the 12th International Conference on Embedded Systems were held at the Hyatt Regency, Pune, India, from January 5-10, 2013. The first two days were tutorial sessions, while the main conference began on Monday, January 7, 2013.

26th VLSID 2013

Day 1: Tutorial

On the first day, I attended the tutorial on “Concept to Product – Design, Verification & Test: A Tutorial” by Prof. Kewal Saluja, and Prof. Virendra Singh. Prof. Saluja started the tutorial with an introduction and history of VLSI. An overview of the VLSI realization process was given with an emphasis on synthesis. The theme of the conference was “green” technology, and hence the concepts of low power design were introduced. The challenges of multi-core and high performance design including cache coherence were elaborated. Prof. Singh explained the verification methodologies with an example of implementing a DVD player. Simulation and formal verification techniques were compared, with an overview on model checking. Prof. Saluja explained the basics of VLSI testing, differences between verification and testing, and the various testing techniques used. The challenges in VLSI testing were also discussed.

Day 2: Tutorial

On the second day, I attended the tutorial on “Formal Techniques for Hardware/Software Co-Verification” by Prof. Daniel Kroening and Prof. Mandayam Srinivas. Prof. Kroening began the tutorial with the motivation for formal methods. Examples of SAT solvers, bounded model checking for hardware, and bounded program analysis for C programs were explained. Satisfiability modulo theories for bit-vectors, arrays and functions were illustrated with numerous examples. In the afternoon, Prof. Srinivas demoed formal verification for both Verilog and C. He shared the results of verification done for both a DSP and a microprocessor. The CProver tool (which includes CBMC) had been released under a custom license; after discussion with Fedora Legal and Prof. Kroening, it has been updated to a BSD license for inclusion in Fedora. The presentation slides used in the tutorial are available.

Day 3: Main conference

The first day of the main conference began with the keynote by Mr. Abhi Talwalkar, CEO of LSI, on “Intelligent Silicon in the Data-centric Era”. He addressed the challenges in bridging the data deluge gap, latency issues in data centers, and energy-efficient buildings. The second keynote of the day was given by Dr. Ruchir Puri, IBM Fellow, on “Opportunities and Challenges for High Performance Microprocessor Designs and Design Automation”. Dr. Puri spoke about the various IBM multi-core processors, and the challenges facing multi-core designs – software parallelism, socket bandwidth, power, and technology complexity. He also said that more EDA innovation needs to come at the system level.


After the keynote, I attended the “C1. Embedded Architecture” track sessions. Liang Tang presented his paper on “Processor for Reconfigurable Baseband Modulation Mapping”. Dr. Swarnalatha Radhakrishnan then presented her paper on “A Study on Instruction-set Selection Using Multi-application Based Application Specific Instruction-Set Processors”. She explained ASIPs (Application Specific Instruction-set Processors), and shared test results on choosing specific instruction sets based on the application domain. The final paper of the session was presented by Prof. Niraj K. Jha on “Localized Heating for Building Energy Efficiency”. He and his team at Princeton have used ultrasonic sensors to implement localized heating. A similar approach is planned for lighting as well.

Post-lunch, I attended the sessions for the track “B2. Test Cost Reduction and Safety”. The honourable chief minister of Maharashtra, Shri. Prithviraj Chavan, arrived in the afternoon to formally inaugurate the conference. He is an engineer who graduated from the University of California, Berkeley, and said that he was committed to putting Pune on the semiconductor map. The afternoon keynote was given by Mr. Kishore Manghnani from Marvell, on “Semiconductors in Smart Energy Products”. He primarily discussed LEDs and their applications. This was followed by a panel discussion on “Low power design”, where there was an emphasis on creating system-level, software-architecture techniques to increase leverage in low-power design. For the last track of the day, I attended the sessions on “C3. Design and Synthesis of Reversible Logic”, where it was noted that the Keccak sponge function family has been chosen to become the SHA-3 standard.

Day 4: Main conference

The second day of the main conference began with a recorded keynote by Dr. Paramesh Gopi, AppliedMicro, on “Cloud computing needs at less power and low cost” followed by a talk by Mr. Amal Bommireddy, AppliedMicro, on “Challenges of First pass Silicon”. Mr. Bommireddy discussed the factors affecting first pass success – RTL verification, IP verification, physical design, routing strategies, package design, and validation board design. The second keynote of the day was by Dr. Louis Scheffer from the Howard Hughes Medical Institute, on “Deciphering the brain, cousin to the chip”. It was a brilliant talk on applying chip debugging techniques to inspect and analyse how the brain works.

After the keynote, I visited the exhibition hall where companies had their products displayed in their respective stalls. AppliedMicro had a demo of their X-Gene ARM64 platform running Ubuntu. They did mention to me that Fedora runs on their platform. Marvell demonstrated their embedded and control solutions running on Fedora. ARM had kits on display for students. Post-lunch, there was an excellent keynote by Dr. Vivek Singh, Intel Fellow, titled “Duniyaa Maange Moore!”. He started with what people need – access, connectivity, education, and healthcare – and went on to discuss what is next in line for Intel’s manufacturing process. The 14nm technology is scheduled to be operational by the end of 2013, while 10nm is planned for 2015. They have also started work on 7nm manufacturing processes. This was followed by a panel discussion on “Expectations of Manufacturing Sector from Semiconductor and Embedded System Companies”, where the need to bridge the knowledge gap between mechanical and VLSI/embedded engineers was emphasized.

Day 5: Main conference

The final day of the main conference began with the keynote by Dr. Vijaykrishnan Narayanan on “Embedded Vision Systems”, where he showed current research in intelligent cameras, augmented reality, and interactive systems. I attended the sessions for the tracks “C7. Advances in Functional Verification” and “C8. Logic Synthesis and Design”. Post-lunch, Dr. Ken Chang gave his keynote on “Advancing High Performance System-on-Package via Heterogeneous 3-D Integration”. He said that Intel’s 22nm Ivy Bridge, which uses FinFETs, took nearly 15 years to productize, but FinFETs look promising for the future. CoCoS (Chip on Chip on Substrate) and CoWoS (Chip on Wafer on Substrate) technologies were illustrated. Many hardware design houses use 15 FPGAs on a board for testing. The Xilinx Virtex-7 HT FPGA has analog, memory, and an ARM microprocessor integrated on a single chip, giving a throughput of 2.8 Terabits/second. He also mentioned that Known Good Die (KGD) methodologies are still emerging in the market. For the last track of the conference, I attended the sessions on “C9. Advances in Circuit Simulation, Analysis and Design”.

Shakthi Kannan

Thanks to Red Hat for sponsoring me to attend the conference.

About the Author – Shakthi Kannan

Shakthi Kannan is a Senior Software Engineer with Red Hat in Pune, and is also a very active member of the open source community. For more details about him, see his LinkedIn profile, or his blog.

#VLSI-Conf-Pune 2013 Event Report: Intelligent Silicon in the Data-Centric Era

(This is a live-blog of the keynote address given by Abhi Talwalkar at the 26th International Conference on VLSI Design being held in Pune. Abhi is the President and CEO of LSI Corporation. LSI has had a large development center in Pune for the last 4 years.)

(Note: since this is a live-blog, it is only a partial and unorganized report, and might contain errors and omissions.)

The innovation happening in the world since the first transistor was developed has been unparalleled in history. It has led to various changes: a flat world where anyone can innovate from anywhere, lots and lots of collaboration, and, for the first time, information and data as the most important currency.

As a result, we are now seeing a deluge of data. The reasons are:

  • Everybody is on social networks and creating/sharing data
  • Everyone has personal devices (8.5 billion devices sold per year, 40% of them are smart devices), and again people are living a lot of their lives through these devices
  • Other devices are generating data automatically, and will continue to do so

The technology challenges resulting from this data deluge are in the areas of devices, the data centers and the network. These are the challenges in these areas:

  • Bring your own device. Previously, companies insisted that employees use company-approved devices (e.g. BlackBerry only, and no iPhones). But more and more employees want to use their own devices, and company IT departments are forced to deal with them. The variety of devices that needs to be supported is a problem. And the devices need to be always on and always connected – and so do the enterprise backend apps that support them. Enterprise IT apps need to support mobile devices seamlessly, and in general there is a consumerization of enterprise IT, driving a newfound focus on improved end-user experiences.
  • Green Impact of Devices: All these devices generate e-waste, emissions and use up energy
  • Network bottlenecks: the wireless spectrum which these devices use is getting congested. The backhaul network connections are also facing a capacity crunch. And security in all these areas is an area of increasing concern.
  • Green Impact of DataCenters: Data centers have increased energy consumption by 3x. Telecom in India consumes 3 billion litres of diesel. This is second only to railways, and is a major contributor to the carbon emissions.

Since most of the above seem like software challenges, what does silicon (hardware/VLSI/embedded systems) have to do with them? The answer is that silicon allows you to do more with less, and is a key catalyst for innovation. There is much more power in CPUs today than we need – and we need to figure out how to use it. There needs to be more intelligent hardware which knows how to protect the data, where to move it, and so on.

What are the specific problems that can/should be solved in silicon?

  • Hardware Accelerators: A full suite of silicon based accelerators can be deployed in the network and the data center.
  • Improve latency and capacity: utilization levels continue to remain low in data-centers, and can be improved significantly
  • Intelligent caching: For example, appropriate use of flash memory between magnetic storage and memory can get much better performance without a significant increase in infrastructure.
  • Use sensors and gather data to make the silicon more intelligent and take better decisions. For example, many companies would leave lights on all night but now more and more are deploying sensors which will turn off the lights when not required. This concept can be extended to many other areas.

Event Report: 7th IndicThreads Software Development Conference

(This is a live blog of the talks being given at the 7th IndicThreads Software Development Conference happening in Pune today and tomorrow. The slides of the individual talks will become available on the conference website tomorrow.)

Keynote Address: Suhas Kelkar: Technology Is Just The Start

The keynote address was by Suhas Kelkar, CTO (APAC) for BMC Software, who is also in charge of driving BMC’s Incubator Team globally, as well as the overall technical and business strategy.

The main point Suhas made was that technology is a great way to innovate, but technology is not enough. There are many other things beyond technology to look at to build great software products.

Any technology innovator must understand the technology adoption curve that Geoffrey Moore explained in Crossing the Chasm. First you innovate and create a new technology, and the first people to use it are the early adopters. To keep the early adopters, you need to keep adding technology features. But the set of early adopters is small: you cannot get mainstream success based on the early adopters alone, and most new technology innovations fail at this stage. To get the mainstream to adopt the technology, you need to win them over with non-technology things like better user experience, reliability, and low cost. This is necessary, otherwise your technology innovation will be wasted. So you must learn to focus on these things (at the right stage in the life-cycle of the product).

Technology innovation is easy. Feature/function innovation is easy. User experience innovation is hard. Getting reliability without increasing cost is very hard. The best technologists need to work on these problems.

Karanbir Gujral: Grammar of Graphics: A New Approach to Visualization

The world has too much data, but not enough information. Not enough people are working on converting raw data into easily consumable, actionable information. Hence, the area of data visualization is very important.

“Grammar of Graphics” is a new approach to visualization. It is not simply a library of different chart types. Instead, it is a language that can be used to describe chart types, or even to easily declare/create new chart types. Think of it as the SQL of graphics.

The language describes features of charts, not specific names/types. For example, instead of saying “bar chart”, you would describe it as a chart with basic 2D co-ordinates, categorical x numeric, displayed with intervals.
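To make this concrete, here is a toy Python sketch (hypothetical names, not the actual grammar or syntax shown in the talk) of declaring a chart by its properties rather than by a chart-type name:

```python
# Toy sketch: a chart is a declarative spec of properties, and the
# familiar chart-type names just fall out of particular combinations.
def describe(spec):
    """Turn a declarative chart spec into a human-readable description."""
    return "{element} chart in {coords} coordinates of {y} by {x}".format(
        element=spec["element"], coords=spec["coordinates"],
        x=spec["x"], y=spec["y"])

# "Bar chart" is never named anywhere -- it is just these properties:
bar = {"element": "interval", "coordinates": "rectangular",
       "x": "category", "y": "sales"}
# Changing one property gives a different chart type with no new code:
scatter = dict(bar, element="point")

print(describe(bar))      # interval chart in rectangular coordinates of sales by category
print(describe(scatter))  # point chart in rectangular coordinates of sales by category
```

The point of the sketch is only the composability: because each property is independent, new chart types are one-line edits to the spec rather than new library code.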

Where is this available?

  • Book: Grammar of Graphics by Leland Wilkinson
  • Open source software packages:
    • Protovis and D3 for JavaScript users
    • ggplot2 for R users
    • Bokeh for Python users
  • Commercial software:
    • Tableau

More specifically, the grammar allows specification of a chart using six different properties:

  • Element type:
    • Point, line, area, interval (bar), polygon, schema, text
    • Any element type can be used with any underlying data
  • Co-ordinates:
    • Any number of dimensions (1D, 2D, etc.)
    • Clustering and stacking
    • Polar
    • Transpositions
    • Map projections
  • Axes:
    • Simple axes
    • Nested axes
    • Facet axes
  • Layouts:
    • Standard sequential/stacking layout
    • Graph layouts (network, tree-like)
    • Treemaps
    • Custom layouts
  • Aesthetics (map data to graphic attributes):
    • Color (exterior/interior, gradient)
    • Size (width, height, both)
    • Symbol
  • Facets:
    • Splitting data into multiple charts
    • Chart-in-chart
    • Panels

Ideally, you would like each of these six properties to be orthogonal to the others. You should be able to mix and match any element type with any co-ordinate system, with any aesthetics/style. Thus, specifying the values of these six properties gives a specific type of chart.

Karanbir gave an example of how a simple bar chart is specified using a language like this. The specification was in JSON. I could not capture it in this live blog, but check out the slides for the actual specification. It is quite interesting.

Having to write out a largish specification in JSON just to get a bar chart doesn’t seem all that useful, but the real power of this approach becomes apparent when you need to make changes or combine the different properties in new/unique ways. The power of orthogonality and composition becomes readily apparent, and you are able to make really big and cool changes with simple modifications to the specification. Karanbir’s demo of different visualizations for the same data using simple changes to the spec was very interesting. Again, see the slides for some examples.

Mohan Kumar Muddana: JVM the Polyglot Platform

The Java Virtual Machine (JVM) was initially built for Java, but it now hosts many different languages. It was originally designed to handle only classes, but today it supports many constructs from languages newer than Java. Since Java 6 (JSR 223), the JVM has supported scripting languages (Groovy, Jython, JRuby). Starting with Java 7 (JSR 292), the JVM added the ‘invokedynamic’ instruction, allowing support for dynamically typed languages – making it easier to implement all the features of dynamic languages on the JVM, and not just those of languages statically compiled to bytecode.

Why would you want to run other languages on the JVM? Because you can get the speed and stability of the JVM and combine it with the productivity, or ease-of-use, or the different paradigm of the other languages.

Some interesting JVM based languages to check out:

  • Scala: Pure object oriented language with Traits, Case Classes and Pattern Matching, and Type Erasure.
  • Clojure: Pure functional language
  • Groovy

The slides of the talk contain lots of details and examples of why these languages are cool.

Sushanta Pradhan: Typesafe Stack Software Development on the JVM

The world is moving towards apps that need to be scalable, distributed, parallel and concurrent, dynamic, agile, and swift.

Concurrent access to data-structures is a big pain when trying to implement parallel programs.

Scala is a functional, concise programming language that runs on the JVM and inter-operates with Java (i.e. you can very easily access the whole Java standard library, and any other Java code you might have, from Scala code). It has immutable and mutable variables; it has tuples, multiple assignments in a single statement, sensible defaults, and operator overloading.

Scala has the following important features that help with implementation of parallel code:

  • Parallelism:
    • Parallel Data-Structures
    • Distributed Data-Structures
  • Concurrency:
    • Actor Model
    • Software Transactional Memory
    • Futures

Akka is concurrent, scalable and fault-tolerant middleware based on the actor model. It is event-driven and highly performant. It has a supervision model which establishes relationships between actors, so that a supervisor can detect and correct errors in its subordinate actors. This is a model of reliability based on the belief that you cannot completely avoid failures, so you should simply let your actor fail, and allow the supervisor to take corrective action.
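The "let it crash" supervision idea can be sketched in a few lines of Python (a hypothetical toy model, not Akka's actual API): an actor is just a mailbox drained by a behavior function, and a supervisor restarts it with fresh state when it fails.

```python
import queue

class Actor:
    """Toy actor: a mailbox processed one message at a time, so the
    actor's own state is never touched concurrently."""
    def __init__(self, behavior):
        self.behavior = behavior
        self.mailbox = queue.Queue()
        self.state = {}

    def tell(self, msg):
        self.mailbox.put(msg)

    def process_one(self):
        msg = self.mailbox.get()
        self.behavior(self.state, msg)

def supervised(actor, msg, max_restarts=3):
    """'Let it crash': on failure, restart the actor with fresh state
    and retry, instead of handling every error inside the actor."""
    for _ in range(max_restarts):
        actor.tell(msg)
        try:
            actor.process_one()
            return True
        except Exception:
            actor.state = {}   # restart: fresh state
    return False

def counter(state, msg):
    if msg == "boom":
        raise RuntimeError("simulated actor failure")
    state["count"] = state.get("count", 0) + msg

a = Actor(counter)
assert supervised(a, 1) and supervised(a, 2)
print(a.state["count"])                  # 3
assert supervised(a, "boom") is False    # gave up after max_restarts
print(a.state)                           # {} -- restarted with fresh state
```

Real Akka supervisors are richer (resume/restart/stop/escalate strategies, actor hierarchies), but the division of labour is the same: actors do the work, supervisors decide what failure means.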

Akka is written using Scala, but has Java APIs, so you can use Akka without really having to use or know Scala.

The Play framework gives Ruby-on-Rails-style agility in the Java world by adopting the convention-over-configuration techniques from Rails. It has seamless integration with Akka, and hot deployment, which makes agile code updates easier. Play has an easy, out-of-the-box setup for unit and functional testing, asynchronous HTTP request handling, WebSocket support, and support for caching (e.g. memcached).

Play has progressive stream processing. This is based on the iteratee model, which improves on the iterator model by letting you easily work with streaming data where the collection size is not known beforehand. Similarly, there are enumerators and enumeratees.
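The iteratee idea can be sketched in Python (a hypothetical toy, not Play's actual API): the consumer is a step function that is fed one chunk at a time and signals whether it wants more input, so the producer never needs to know the stream's total size.

```python
# Toy iteratee: step(state, chunk) returns ("cont", state) to ask for
# more input, or ("done", result) to stop early.
def take_first_n(n):
    def step(state, chunk):
        state = state + [chunk]
        return ("done", state) if len(state) == n else ("cont", state)
    return step

def run(iteratee, stream):
    """Drive an iteratee over a stream, stopping as soon as it is done."""
    state = []
    for chunk in stream:
        tag, state = iteratee(state, chunk)
        if tag == "done":
            return state      # early exit; the rest of the stream is untouched
    return state              # the stream ended first

def numbers():                # an unbounded producer
    i = 0
    while True:
        yield i
        i += 1

print(run(take_first_n(3), numbers()))   # [0, 1, 2]
```

Note that the consumer terminates even though the producer is infinite, which is exactly what an iterator-style "give me the whole collection" interface cannot express.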

Play is written using Scala, and has integration with Akka, but it can be used without Akka, and it has Java APIs, so you can use Play without really having to use or know either Scala or Akka.

Scala, Akka and Play together are called the Typesafe Stack – a modern, JVM-based software stack for building scalable applications.

Mushtaq Ahmed/Shripad Agashe: Using Play! 2.0 to Build a Website

Important features of Play!

  • Powerful templating: Play! has a heavy focus on type-safety across all the layers. This means that even the templating engine has type-safety built in; templates are statically compiled and checked for type safety.
  • Routes: Statically compiled reverse routes. Ease of refactoring routes due to compiler support. All references to routes are via reverse routes, no hard-coded routes.
  • Non-blocking: End-to-end non-blocking architecture. Thus a small number of threads can handle large numbers of concurrent requests. There are no callbacks.
  • Great Developer Workflow: Hot reloading, error display (compile time and runtime errors shown in browser), and in-memory database during development.

The rest of this talk contained a code walk-through of Play! features and an implementation of an insurance website using Play! 2.0.
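The non-blocking point above is worth a sketch. Using Python's asyncio rather than Play itself (so the code here only illustrates the principle, not Play's API), a single thread can serve many concurrent requests because it runs other requests while one is waiting on I/O:

```python
import asyncio
import time

# Each "request" spends 0.1s waiting on a (simulated) slow backend call.
async def handle(request_id):
    await asyncio.sleep(0.1)      # non-blocking wait: the thread is freed
    return "response %d" % request_id

async def serve_all():
    start = time.monotonic()
    # 50 requests handled concurrently on one thread, not 50 threads.
    responses = await asyncio.gather(*(handle(i) for i in range(50)))
    elapsed = time.monotonic() - start
    return responses, elapsed

responses, elapsed = asyncio.run(serve_all())
print(len(responses), elapsed)    # 50 responses in roughly 0.1s, not 5s
```

In a blocking model the same workload would pin 50 threads (or take 50 × 0.1s sequentially); here the waits overlap, which is why a small thread pool suffices.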

Arun Tomar: Cloud Automation using Chef

Chef is an open source framework that allows the creation of automated scripts that help with hardware/software infrastructure management and deployment. Previously, app upgrades/deployments were manual processes that took hours if not days. Chef allows creation of scripts (written in Ruby) which fully automate these processes.

Chef is cross-platform, so it supports pretty much everything: Windows, Linux, Sun, Mac, BSD, AIX etc.

A Chef deployment consists of a Chef server, which holds all the configuration information for all your infrastructure (aka recipes), a Chef workstation (i.e. the machine from which you will run the Chef recipes), and the actual servers/machines that will be controlled/configured using Chef (aka Chef nodes). Chef client software needs to be installed on each node. The Chef workstation then takes the Chef script (or recipe) from the Chef server, breaks it up into pieces to be executed on each individual server/machine and sends those instructions to the Chef client on the nodes using ssh.

Each Chef recipe is used to configure some part of a server: e.g. Apache, MySQL, Hadoop, etc. These recipes describe a series of resources that should be in a particular state – packages that should be installed, services that should be running, or files that should be written. Chef makes sure each resource is properly configured, and gives you a safe, flexible, easily-repeatable mechanism for making sure your servers are always running exactly the way you want them to.

Core principles of Chef:

  • Idempotence: every operation should be idempotent. This ensures that when a script fails halfway through, you can just re-run the whole script and everything will work properly: the script completes cleanly, without doing anything twice.
  • Order Matters: the steps in a recipe will get executed in the order they’re written
  • You are the Boss: Most infrastructure solutions force you to adapt to their “one-size-fits-all” approaches, even though every infrastructure challenge is different. This is crazy. Instead, Chef allows you to be in full charge of everything and you can configure everything exactly the way you want it – and you have the full power of the Ruby programming language to specify this to any level of detail required.
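The idempotence principle is easy to sketch. Chef recipes themselves are Ruby, so the Python below is only an illustration of the idea: a resource describes a desired state, and converging only acts when the current state differs, which makes re-running always safe.

```python
# Toy convergence model (illustrative only; real Chef resources are Ruby):
installed = set()     # stands in for the real state of the system
actions = []          # log of work actually performed

def package(name):
    """Converge 'package <name> should be installed' idempotently."""
    if name not in installed:      # act only if not already in desired state
        installed.add(name)
        actions.append("install " + name)

def run_recipe():
    package("apache2")
    package("mysql")

run_recipe()
run_recipe()          # re-run after a hypothetical mid-way failure
print(actions)        # ['install apache2', 'install mysql'] -- each done once
```

Because each resource checks state before acting, running the recipe twice (or resuming after a crash) performs no duplicate work, which is exactly the "just re-run it" guarantee described above.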

A big advantage of Chef is that Chef recipes are essentially a self-documenting record of your actual infrastructure. In addition, when you add new servers, Chef automatically discovers information about your system. So, for example, when you need to do something on production, you simply ask Chef, which knows what is really happening, and you don’t need to rely on static documentation that might be out of date.

Another advantage of Chef is the extensive library of existing recipes for standard things that people would like to install/control using Chef. For example, consider this:

include_recipe "apache2"

This line is enough to install apache2 on any machine, irrespective of which OS the machine is running. This is possible because all the hard work of writing the actual commands and configurations for apache2, for all the various supported platforms, has already been done for you by someone else!

In addition, Chef contains templates (e.g. a template for apache2) which contain the various configuration parameters that you can easily configure. So if you don’t want the standard apache2 installation, you can “include” a template, override a few parameters, and you’re done.

Here are some examples of what chef can do:

  • It can use rpm, apt-get, dpkg, yum, python-pip, python-easy_install, Ruby gems, or whatever your package installer happens to be, to install packages. It is smart enough to know whether a package is already installed (and which version), and will not do unnecessary work.
  • It can update configuration files, start and stop services, and install plugins for software packages that can handle plugins.
  • It can automatically clone git repositories, make/build/install, handle dependencies, execute scripts.

Shubham Srivastava: NoSQL – Now and the Path Ahead

Why NoSQL?

  • Scalability and Performance
    • Horizontal scalability is better than vertical scalability
    • Hardware is getting cheaper and processing power is increasing
    • Less operational complexity compared to RDBMS solutions
    • You get automatic sharding by default
  • Cost
    • Scale without Hefty Cost
    • Commodity Hardware, and free/open source software (think versions, upgrades, maintenance)
  • Flexibility in Data Modeling.
    • Key-value stores (very powerful…)
    • Hierarchical or Graph Data
    • Storing values like maps, or maps of maps
    • Document databases – with arbitrarily complex objects
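The "automatic sharding" point can be sketched with hash-based sharding, a mechanism many NoSQL stores use under the hood (a toy Python illustration, not any particular product's implementation): the key alone determines the node, so any client can route reads and writes without a central directory.

```python
import hashlib

# Toy hash-based sharding across a fixed set of nodes.
NODES = ["node0", "node1", "node2"]

def shard_for(key):
    """Deterministically map a key to a node via its hash."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Each node's storage, stood in for by a dict.
store = {node: {} for node in NODES}

def put(key, value):
    store[shard_for(key)][key] = value

def get(key):
    return store[shard_for(key)].get(key)

put("user:42", {"name": "Asha"})
print(get("user:42"))        # routed to the same node both times
```

Simple modulo hashing like this reshuffles almost every key when the node count changes, which is why production systems typically use consistent hashing instead; the routing idea, however, is the same.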

The next part of the talk was a detailed look at the advantages and disadvantages of a specific type of NoSQL database for answering specific queries.

Turing100 Event Report: Work of Butler Lampson – Systems

(This is a live-blog of Neeran Karnik‘s talk on Butler Lampson, as part of the Turing100 Lecture Series happening at Persistent. Since it is being typed during the talk, please forgive the typos and bad structuring of the article. This article is also very incomplete.)

Butler Lampson has contributions in a wide area of computer science fields. Here is the Turing Award Citation:

For contributions to the development of distributed, personal computing environments and the technology for their implementation: workstations, networks, operating systems, programming systems, displays, security and document publishing.

The number of different areas of computer science touched there is breathtaking.

Systems Built

Here is just a sampling of some of the work of Lampson that resulted in entire areas of computer hardware and software:

  • The first personal computer:
    • The first personal computer in the world, the Xerox Alto, was conceived in a 1972 memo written by Lampson.
    • Important contributions of the Alto:
      • First “personal” computer
      • First computer that used a desktop metaphor
      • First computer to use a mouse-driven graphical interface
    • Lampson later worked on the follow-up workstation designs Dorado and Wildflower (research projects), which later resulted in a successful commercial product (the Xerox Star).
  • The Bravo Editor
    • Lampson designed the first WYSIWYG editor in the world in 1974. It shipped with the Xerox Alto. This work can ultimately be seen to have led to the development of Microsoft Word.
  • The Xerox 9700 and Dover Laser Printers
    • The first laser printer was designed in 1969 at Xerox PARC, and Lampson worked on its electronic design.
  • The SDS 940 Time-sharing system
    • The first general-purpose time-sharing system.

And those were just the systems he built.

What about more fundamental contributions to computer science? Here is a list:

  • The two-phase commit protocol.
    • This is the fundamental building block of all transactional processing in databases that are spread out across machines and/or geographies.
  • The CAL time-sharing system
  • Programming Languages
    • MESA and SPL: for systems programming. Modern threads developed from here
    • Euclid: first programming language to use verification
  • Security:
    • Access matrix model, unifying capabilities and ACLs
    • Theory of principals speaking for other principals
    • Microsoft Palladium
    • Scrubbing disk storage
    • Research on how economic factors affect security
  • Networking
    • Co-inventor of Ethernet!

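The two-phase commit protocol from the list above can be sketched as a toy in-memory illustration (not a production implementation — real 2PC must also log decisions durably and survive crashes):

```python
# Toy sketch of two-phase commit: a coordinator asks every participant
# to vote (phase 1), and only if all vote "yes" does it tell them to
# commit (phase 2); otherwise everyone aborts.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "init"

    def prepare(self):          # phase 1: vote yes/no
        self.state = "prepared" if self.can_commit else "abort-voted"
        return self.can_commit

    def finish(self, commit):   # phase 2: act on the coordinator's decision
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]   # phase 1: collect votes
    decision = all(votes)                         # unanimous yes required
    for p in participants:                        # phase 2: broadcast outcome
        p.finish(decision)
    return decision

nodes = [Participant("db-a"), Participant("db-b", can_commit=False)]
print(two_phase_commit(nodes))  # False: a single "no" vote aborts everyone
```

The all-or-nothing decision is what makes a transaction spread across machines behave like a single atomic action.
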
How is Systems Research different?

Butler Lampson was one of the few great computer scientists who spent a lot of time in the laboratory with actual hardware, getting his hands dirty. This is not the kind of work normally associated with Turing Award winners, but it is the kind of work that has given us the actual hardware we use in computing today.

Some thoughts on why systems programming is different, and more difficult:

Systems research involves designing and building large or complex computing systems (or both): computers (from tablets to supercomputers), networks, storage and other hardware, operating systems, programming languages, and other infrastructure software.

Systems design is different from other parts of computer science (e.g. algorithm design) because its external interface is to the real world (and hence is imprecise, subject to change, and generally underspecified), because there are lots of moving parts (i.e. more internal structure and more internal interfaces), because module-level design choices have wider implications for the end product, and because the measure of success is less clear than in other fields. There is no such thing as an optimal answer, so avoiding terrible designs is more important than finding the optimal one.

Hints on Computer System Design

This is a paper written by Lampson giving hints on how to build good systems. He calls them hints because in systems work there are no infallible rules, just hints that guide your thinking. Werner Vogels, CTO of Amazon, who oversees some of the most complex and scalable computing infrastructure in the world, is a fan of this work. He finds these hints very useful, and says they are even more important today because they’ve withstood the test of time.

These hints talk about functionality (what you’re doing), speed (are you doing it quickly enough?), and fault-tolerance (will you keep doing it?). In systems, interface design is the most important part. Do this well, and the other stuff will follow.

  • Hints about Functionality/Interface:

    • Do one thing at a time, and do it well
      • KISS – Keep It Simple, Stupid
      • Don’t generalize – because generalizations are wrong
      • Don’t overload interfaces with too much work
      • “Perfection is reached not when there is no longer anything to add, but when there is no longer anything to take away” – Antoine de Saint-Exupéry
    • An interface should capture the minimum essentials
      • Don’t promise more than you can deliver
    • Cost of an interface should be predictable
    • Better to have basic and fast functionality, rather than slow and complex
      • Clients who need speed are happy with the basic+fast interface
      • Clients who need complexity can build it on top of your basics
    • Don’t hide the power
        • If a lower-layer abstraction is more powerful, a higher-layer abstraction should not hide that power; it should expose it. The abstraction should conceal only undesirable properties.
    • Keep the basic interfaces stable; parts of the system that will be used by many different modules should not change
    • Treat the first system that you’re building as a prototype that you will throw away
    • Divide and conquer
      • Reduce the problem to smaller pieces that you can solve more easily
      • Bite off as much as you can handle right now, and leave the rest for the next iteration.
  • Hints on Speed

    • Split resources in a fixed way, rather than sharing them
      • Do not spend too much effort in trying to share resources
      • Overhead of multiplexing usually not worth it
      • Fixed allocation makes things faster, and more predictable
    • Cache Answers to Expensive Computations
    • Dynamic Translation
      • Use different representations for use of some structure vs. implementation of the same, and then dynamically translate between them.
      • “All problems in computer science can be solved by adding a level of indirection”
    • Use hints
      • i.e. things that don’t have to be 100% accurate, but can improve performance if they are correct
      • For example, the routing tables in internet packet routing can be incorrect or out-of-date. In most cases they work and give great performance; rarely, they don’t work, and then you can recover.
    • When in doubt, use brute force
      • Hardware is cheap
      • Brute force allows for cheaper faster implementation
      • e.g. chess-playing computers that use brute force have defeated grandmasters, while complex algorithms that try to “understand” chess have not done as well.
    • Compute in the background whenever possible
      • e.g. if a webpage update results in an email being sent, the email sending should happen in the background.
    • Use batch processing when possible
    • Safety First
      • Correctness is more important than speed
      • Being clever and optimal is over-rated
      • A general purpose system cannot really optimize, because it doesn’t really know the use cases
      • Hardware is cheap
  • Hints for Fault Tolerance
    • End-to-End Error Recovery
      • “Error detection and recovery at the application level is necessary. Error detection and recovery at lower layers is not necessary, and should only be done for performance reasons.”
      • Saltzer et al., in the classic 1984 paper “End-to-End Arguments in System Design”
      • Example: when transferring a file from one computer to another, there are lots and lots of steps (the file moves from storage to memory to network), and lots of sub-tasks (transferring chunks of the file). Only an end-to-end check, such as comparing checksums after the transfer, tells you the whole thing worked.
    • Log updates to record the truth about an object’s state
      • Current state of the object in memory is a hint
      • The true state of the object is what can be recovered from the storage and logs
    • Actions should be atomic or restartable
      • Either an action is all-or-nothing, or, if there is a failure in the middle, the action should be able to restart from where it left off

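The “atomic or restartable” hint is often realized for file updates by writing to a temporary file and renaming it into place; `os.replace` is atomic on POSIX filesystems. A minimal Python sketch:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write `data` to `path` all-or-nothing: readers never see a
    half-written file, because the final rename is atomic."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)   # atomic: old contents or new, never partial
    except Exception:
        os.unlink(tmp)          # a failure leaves the original untouched
        raise

atomic_write("state.txt", "object state v2")
```

If the process crashes mid-write, only the temporary file is lost and the original stays intact, which is exactly the all-or-nothing behaviour the hint asks for.
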
Lampson himself suggests that reading the whole hints paper at once might be tiresome, so it is better to read it in small doses at bedtime. And he points out that he himself has sometimes ignored most of these hints, and has always regretted it.

The Net Neutrality Debate – A Supply/Demand Perspective – V. Sridhar, Sasken

(This is a liveblog of a lecture on Network Neutrality by V. Sridhar, a Fellow at Sasken. This talk was delivered as a part of the Turing100@Persistent Lecture Series in Pune. Since it is being typed as the event is happening, it is not really well structured, but should rather be viewed as a collection of bullet points of interesting things said during the talk. For more information about Dr. Sridhar, see his website)

The Problem of Net Neutrality

The principle of “Net Neutrality” states that all traffic on the internet should be treated equally. Thus, network service providers (i.e. the telecom companies) should not be allowed to discriminate (i.e. limit or block connections or speeds) based on the type of traffic. For example, under net neutrality, a telecom should not be allowed to block BitTorrent downloads, limit bandwidth for Skype or video streaming, or provide higher speeds and better quality-of-service guarantees only for traffic generated by iPhones or US-based companies.

Telecom companies are trying to introduce systems by which different levels of service are provided for different types of traffic, because, they argue that network neutrality is not economically viable.

The Demand for Network Services

  • Mobile broadband and 3G traffic is increasing exponentially
    • Even in India! In the last 7 months there has been 78% growth in 3G traffic, and 47% growth in 2G. India loves mobile broadband
    • Users are getting hooked to 3G. An average 3G user consumes 4 times more data than a 2G user. 3G is an acceptable alternative to wired broadband
    • Mobile data is growing fastest in smaller towns and villages (category B & C circles)
  • Video, voice, and streaming data are taking up huge chunks of bandwidth

NetHeads vs BellHeads

There are two major approaches to the network: the traditional telephone providers, who come from a circuit-switched telephone background (the BellHeads), and the people who come from the packet-switched internet protocol background (the NetHeads). The BellHeads believe the network is smart and endpoints are dumb; they believe in closed, proprietary networks; they expect payment for each service, often with per-minute charges; and they want to control the evolution of the network and everything about it. They want strong regulations. The NetHeads’ philosophy is that the network is dumb and endpoints are smart, so users should make all the decisions; they believe in an open community; they expect cheap or free services with no per-minute charges; and they want the network to evolve organically without regulations.

To a large extent, the NetHeads are for net neutrality and the BellHeads are in favor of abolishing net neutrality in favor of carefully controlled tiered traffic.

The Supply Side

Land-line penetration is decreasing. On the other hand, mobile penetration continues to increase and shows no signs of saturation. Fixed-line is losing its relevance, especially in emerging countries like India, which means that an increasing chunk of internet bandwidth is going to be consumed by mobile devices.

LTE (Long Term Evolution) is the fastest-growing mobile network technology ever. 300+ operators all over the world are investing in LTE. It will come to India soon.

Mobile technologies are improving, and individual devices will soon be capable of handling 1Gbps data connections. This means that the capacity of the core network will have to go up to provide the speeds that devices are capable of consuming. The NetHeads are making good progress at providing high capacity in the core networks.

The problem is that mobile spectrum is a scarce resource, and will soon become the bottleneck. The other problem is that chunks of the spectrum have to be exclusively allocated to individual operators, and each operator then has to operate just within its chunk.

The Problem of the Commons

When people have shared, unlimited access to a common resource, each consumes the resource without recognizing that this imposes costs on everyone else. When the total amount everybody would like to consume exceeds what is available, everybody suffers. This problem will affect the mobile spectrum: the spectrum gets congested, and bandwidth suffers.

How to solve the congestion problem?

  • Congestion pricing. For example, cheaper access after 9pm is an instance of congestion pricing – an attempt to convince some of the users to consume resources when they’re less congested.
  • During periods of congestion, bandwidth is scarce and hence should have high prices. On the other hand, when the network is not congested, then the additional cost of supporting an additional user’s downloads is minimal, hence the user should be given free or very cheap access.

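Congestion pricing as described above amounts to a price schedule that rises with network load. A minimal sketch (the thresholds and prices here are invented purely for illustration):

```python
# Sketch of congestion pricing: the price per GB rises with network
# utilization, nudging users toward off-peak consumption.
# All thresholds and prices are made-up illustrative values.

def price_per_gb(utilization):
    """Return a price (in arbitrary units) for the current utilization,
    a fraction between 0.0 (idle) and 1.0 (fully congested)."""
    if utilization < 0.3:
        return 0.0    # off-peak: marginal cost of one more user is ~zero
    if utilization < 0.8:
        return 1.0    # normal load: standard price
    return 5.0        # congested: high price to shift demand off-peak

print(price_per_gb(0.1))   # 0.0  (e.g. cheap access after 9pm)
print(price_per_gb(0.95))  # 5.0  (peak-hour congestion)
```

The step function captures the argument in the bullets above: when the network is idle an extra download costs the operator almost nothing, so access can be free or very cheap, while at peak load scarce bandwidth should carry a high price.
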
The Net Neutrality Debate

Proponents of net neutrality believe that the maximum good of the maximum number of people will result if network service providers do not discriminate amongst their customers.

No discrimination means:

  • No blocking of content based on its source, ownership or destination
  • No slowing down or speeding up of content based on source, ownership or destination

Examples of discrimination:

  • In 2005, Madison River Communications (an ISP) blocked all Vonage VoIP phone traffic
  • In 2007, Comcast in the US, restricted some P2P applications (like BitTorrent)
  • In 2009, AT&T put restrictions on what iPhone apps can run on its network
    • Disallowed SlingPlayer (IP-based video broadcast) over its 3G network
    • Skype was not allowed to run over AT&T’s 3G network

The case for net neutrality:

  • Innovation: Operators/ISPs can kill innovative and disruptive apps if they’re allowed to discriminate
  • Competition: Operators/ISPs can kill competition by selectively disallowing certain applications. For example, if AT&T slows down Google Search, but speeds up Bing Search, this can cause Google Search to die.
  • Consumers: Operators/ISPs will have a strong grip on the consumers and other players will not get easy access to them. This will hurt the consumers in the long run.

The case against net neutrality:

  • Capacity is finite. Especially in the case of mobile broadband (because the spectrum is limited)
  • If there is no prioritization, a few apps will consume too much bandwidth and hurt everybody; it also reduces the service providers’ motivation to increase bandwidth
  • Prioritization, and higher pricing for specific apps can be used to pay for new innovations in future network capacity increases

Broadband is a two-sided market:

  • Apps and Broadband is a two-sided market.
    • Both, applications and bandwidth are needed by consumers
    • Without applications, users will not consume the bandwidth, because they have nothing interesting to do
    • Without bandwidth, users will not use applications, because they’ll be too slow
    • Hence both have to be promoted simultaneously
  • How should a two-sided market be handled?
    • Usually, one side should be subsidized so it can grow and help the other grow
    • i.e. somebody needs to break this cycle and grow one side of the market, so that the other can then grow
    • For example, Google (an app/content provider) is buying fiber and providing 1Gbps connection in Kansas for $70 per month. Thus Google is subsidizing the bandwidth increase, and hopes that the users and apps will increase in proportion.
  • Regulatory and Policy implications
    • Two ways to handle this:
      • Ex Ante: come up with regulations and policies before problems occur
        • Because lawsuits are expensive
        • US is trying to do this – they have exempted mobile providers from net neutrality principles
        • Netherlands has passed net neutrality regulations – first country in the world. Mobile operators are not allowed to disallow or discriminate against services like Skype
        • Rest of Europe: public consultations going on
      • Ex Post: Let the problems occur and then figure out how to deal with them
  • Net Neutrality and India
    • No mention of net neutrality in the NTP (National Telecom Policy 2012)
    • Fair Usage Policy (FUP)
      • Is against net neutrality (maybe)
      • It discriminates against users, but does not discriminate against applications
      • But it is indirect discrimination against applications – because users who use BitTorrent and other bandwidth heavy applications will be more affected by FUP
      • Affects innovation – because users are discouraged from using innovative, bandwidth heavy applications