Monthly Archives: April 2009

Optimization: A case study

(PuneTech is honored to have Dr. Narayan Venkatasubramanyan, an Optimization Guru and one of the original pioneers in applying Optimization to Supply Chain Management, as our contributor. I had the privilege of working closely with Narayan at i2 Technologies in Dallas for nearly 10 years.

PuneTech has published some introductory articles on Supply Chain Management (SCM) and the optimization & decision support challenges involved in various real world SCM problems. Who better to write about this area in further depth than Narayan!

For Dr. Narayan Venkatasubramanyan’s detailed bio, please click here.

This is the first in a series of articles that we will publish once a week for a month. For the full series of articles, click here.)

the following entry was prompted by a request for an article on the topic of “optimization” for publication in punetech.com, a website co-founded by amit paranjape, a friend and former colleague. for reasons that may have something to do with the fact that i’ve made a living for a couple of decades as a practitioner of that dark art known as optimization, he felt that i was best qualified to write about the subject for an audience that was technically savvy but not necessarily aware of the application of optimization. it took me a while to overcome my initial reluctance: is there really an audience for this? after all, even my daughter feigns disgust every time i bring up the topic of what i do. after some thought, i accepted the challenge as long as i could take a slightly unusual approach to a “technical” topic: i decided to personalize it by rooting it in a personal-professional experience. i could then branch off into a variety of different aspects of that experience, some technical, some not so much. read on …

background

the year was 1985. i was fresh out of school, entering the “real” world for the first time. with a bachelor’s in engineering from IIT-Bombay and a graduate degree in business from IIM-Ahmedabad, and little else, i was primed for success. or disaster. and i was too naive to tell the difference.

for those too young to remember those days, 1985 was early in rajiv gandhi‘s term as prime minister of india. he had come in with an obama-esque message of change. and change meant modernization (he was the first indian politician with a computer terminal situated quite prominently in his office). for a brief while, we believed that india had turned the corner, that the public sector companies in india would reclaim the “commanding heights” of the economy and exercise their power to make india a better place.

CMC was a public sector company that had inherited much of the computer maintenance business in india after IBM was tossed out in 1977. quickly, they broadened well beyond computer maintenance into all things related to computers. that year, they recruited heavily in IIM-A. i was one of an unusually large number of graduates who saw CMC as a good bet.

not too long into my tenure at CMC, i was invited to meet with a mid-level manager in the electronics & telecommunications department of the oil and natural gas commission of india (ONGC). the challenge he posed to us was simple: save money by optimizing the utilization of helicopters in the bombay high oilfield.

the problem

the bombay high offshore oilfield, the setting of our story

the bombay high oilfield is about 100 miles off the coast of bombay (see map). back then, it was a collection of about 50 oil platforms, divided roughly into two groups, bombay high north and bombay high south.

(on a completely unrelated tangent: while writing this piece, i wandered off into searching for pictures of bombay high. i stumbled upon the work of captain nandu chitnis, ex-navy now ONGC, biker, amateur photographer … who i suspect is a pune native. click here for a few of his pictures that capture the outlandish beauty of an offshore oil field.)

movement of personnel between platforms in each of these groups was managed by a radio operator who was centrally located.

all but three of these platforms were unmanned. this meant that the people who worked on these platforms had to be flown out from the manned platforms every morning and brought back to their base platforms at the end of the day.

at dawn every morning, two helicopters flew out from the airbase in juhu, in northwestern bombay. meanwhile, the radio operator in each field would get a set of requirements of the form “move m men from platform x to platform y”. these requirements could be qualified by time windows (e.g., need to reach y by 9am, or not available for pick-up until 8:30am) or priority (e.g., as soon as possible). each chopper would arrive at one of the central platforms and get its instructions for the morning sortie from the radio operator. after doing its rounds for the morning, it would return to the main platform. at lunchtime, it would fly lunchboxes to the crews working at unmanned platforms. for the final sortie of the day, the radio operator would send instructions that would ensure that all the crews were returned safely to their home platforms before the chopper was released to return to bombay for the night.

the challenge for us was to build a computer system that would optimize the use of the helicopters. the requirements were ad hoc, i.e., there was no daily pattern to the movement of men within the field, so the problem was different every day. it was believed that the routes charted by the radio operator were inefficient. given the amount of fuel used in these operations, an improvement of 5% over what they did was sufficient to result in a payback period of 4-6 months for our project.

this was my first exposure to the real world of optimization. a colleague of mine — another IIM-A graduate — and i threw ourselves at this problem. later, we were joined by yet another guy, an immensely bright guy who could make the lowly IBM PC-XT — remember, this was the state-of-the-art at that time — do unimaginable things. i couldn’t have asked for a team better suited to this job.

the solution

we collected all the static data that we thought we would need. we got the latitude and longitude of the on-shore base and of each platform (degrees, minutes, and seconds) and computed the distance between every pair of points on our map (i think we even briefly flirted with the idea of correcting for the curvature of the earth but decided against it, perhaps one of the few wise moves we made). we got the capacity (number of seats) and cruising speed of each of the helicopters.
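(a minimal sketch, in python, of the kind of pre-computation this involved. our code was actually in turbo pascal, and the coordinates below are illustrative, not the real ones; the planar approximation mirrors our decision not to correct for the curvature of the earth.)

```python
import math

def dms_to_degrees(deg, minutes, seconds):
    """convert a latitude/longitude from degrees-minutes-seconds to decimal degrees."""
    return deg + minutes / 60.0 + seconds / 3600.0

def distance_nm(lat1, lon1, lat2, lon2):
    """planar approximation of the distance (nautical miles) between two points
    in decimal degrees. one minute of latitude is one nautical mile; a degree of
    longitude shrinks by cos(latitude). good enough over a ~100-mile oilfield."""
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    dlat_nm = (lat2 - lat1) * 60.0
    dlon_nm = (lon2 - lon1) * 60.0 * math.cos(mean_lat)
    return math.hypot(dlat_nm, dlon_nm)

# hypothetical coordinates: the juhu airbase and the two groups of platforms
platforms = {
    "juhu": (dms_to_degrees(19, 5, 53), dms_to_degrees(72, 50, 7)),
    "BHN":  (dms_to_degrees(19, 40, 0), dms_to_degrees(71, 20, 0)),
    "BHS":  (dms_to_degrees(19, 20, 0), dms_to_degrees(71, 15, 0)),
}

# the static distance matrix: distance between every pair of points on the map
dist = {(a, b): distance_nm(*pa, *pb)
        for a, pa in platforms.items() for b, pb in platforms.items()}
```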

we collected a lot of sample data of actual requirements and the routes that were flown.

we debated the mathematical formulation of the problem at length. we quickly realized that this was far harder than the classical “traveling salesman problem”. in that problem, you are given a set of points on a map and asked to find the shortest tour that starts at any city and touches every other city exactly once before returning to the starting point. in our problem, the “salesman” would pick and/or drop off passengers at each stop. the number he could pick up was constrained, so this meant that he could be forced to visit a city more than once. the TSP is known to be a “hard” problem, i.e., the time it takes to solve it grows very rapidly as you increase the number of cities in the problem. nevertheless, we forged ahead. i’m not sure if we actually completed the formulation of an integer programming problem but, even before we did, we came to the conclusion that this was too hard of a problem to be solved as an integer program on a first-generation desktop computer.

instead, we designed and implemented a search algorithm that would apply some rules to quickly generate good routes and then proceed to search for better routes. we no longer had a guarantee of optimality but we figured we were smart enough to direct our search well and make it quick. we tested our algorithm against the test cases we’d selected and discovered that we were beating the radio operators quite handily.
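(a hedged sketch of that two-phase idea: construct a feasible route naively, then hill-climb by swapping stops, keeping a swap only if the route stays feasible and gets shorter. the request format and rules are simplifications invented for illustration; our actual construction rules were considerably more elaborate.)

```python
import itertools

SEATS = 12  # seat capacity of the chopper

def route_length(route, dist):
    """total flying distance of a sequence of platform visits."""
    return sum(dist[a, b] for a, b in zip(route, route[1:]))

def feasible(route, requests):
    """flying the route in order, check that every request ("move m men from
    src to dst") is picked up before it is dropped off and that the head-count
    never exceeds the seats. simplification: all m men board at the first
    visit to src."""
    on_board, picked, delivered = 0, set(), set()
    for stop in route:
        for rid, (src, dst, men) in requests.items():   # drop-offs first
            if rid in picked and rid not in delivered and stop == dst:
                on_board -= men
                delivered.add(rid)
        for rid, (src, dst, men) in requests.items():   # then pick-ups
            if rid not in picked and stop == src:
                on_board += men
                picked.add(rid)
        if on_board > SEATS:
            return False
    return delivered == set(requests)   # everyone got where they needed to go

def initial_route(requests, base):
    """naive construction rule: serve requests one at a time (src, then dst)."""
    route = [base]
    for src, dst, _ in requests.values():
        route += [src, dst]
    return route + [base]

def improve(route, requests, dist):
    """local search: swap pairs of intermediate stops while that helps."""
    best, improved = route[:], True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(best) - 1), 2):
            cand = best[:]
            cand[i], cand[j] = cand[j], cand[i]
            if feasible(cand, requests) and \
               route_length(cand, dist) < route_length(best, dist):
                best, improved = cand, True
    return best
```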

then came the moment we’d been waiting for: we finally met the radio operators.

they looked at the routes our program was generating. and then came the first complaint. “your routes are not accounting for refueling!”, they said. no one had told us that the sorties were long enough that you could run out of fuel halfway, so we had not been monitoring that at all!

ONGC’s HAL Dhruv helicopters on sorties off the Mumbai coast. Image by Premshree Pillai via Flickr

so we went back to the drawing board. we now added a new dimension to the search algorithm: it had to keep track of fuel and, if it was running low on fuel during the sortie, direct the chopper to one of the few fuel bases. this meant that some of the routes that we had generated in the first attempt were no longer feasible. we weren’t beating the radio operators quite as easily as before.

we went back to the users. they took another look at our routes. and then came their next complaint: “you’ve got more than 7 people on board after refueling!”, they said. “but it’s a 12-seater!”, we argued. it turns out they had a point: these choppers had a large fuel tank, so once they topped up the tank — as they always do when they stop to refuel — they were too heavy to take a full complement of passengers. this meant that the capacity of the chopper was two-dimensional: seats and weight. on a full tank, weight was the binding constraint. as the fuel burned off, the weight constraint eased; beyond a certain point, the number of seats became the binding constraint.
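(the two-dimensional capacity, in code. every number below is hypothetical, picked only to reproduce the 7-on-a-full-tank behaviour; the real figures live in the aircraft’s flight manual.)

```python
SEATS = 12
PAYLOAD_BUDGET_KG = 1500.0  # hypothetical combined budget for fuel + passengers
PASSENGER_KG = 85.0         # hypothetical average passenger weight
FULL_TANK_KG = 900.0        # hypothetical full-tank fuel weight

def max_passengers(fuel_kg):
    """capacity is min(seats, weight): on a full tank the weight budget binds;
    as fuel burns off, the seat count becomes the binding constraint."""
    by_weight = int((PAYLOAD_BUDGET_KG - fuel_kg) // PASSENGER_KG)
    return min(SEATS, by_weight)

max_passengers(FULL_TANK_KG)        # -> 7: just after topping up, weight binds
max_passengers(FULL_TANK_KG / 2)    # -> 12: later in the sortie, seats bind
```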

we trooped back to the drawing board. “we can do this!”, we said to ourselves. and we did. remember, we were young and smart. and too stupid to see where all this was going.

in our next iteration, the computer-generated routes were coming closer and closer to the user-generated ones. mind you, we were still beating them on average but our payback period was slowly growing.

we went back to the users with our latest and greatest solution. they looked at it. and they asked: “which way is the wind blowing?” by then, we knew not to ask “why do you care?” it turns out that helicopters always land and take off into the wind. for instance, if the chopper was flying from x to y and the wind was blowing from y to x, the setting was perfect. the chopper would take off from x in the direction of y and make a bee-line for y. on the other hand, if the wind was also blowing from x to y, it would take off in a direction away from y, do a 180-degree turn, fly toward and past y, do yet another 180-degree turn, and land. given that, it made sense to keep the chopper generally flying a long string of short hops into the wind. when it could go no further because the fuel was running low, or needed to go no further in that direction because there were no passengers on board headed that way, then and only then did it make sense to turn around and make a long hop back.

“bloody asymmetric distance matrix!”, we mumbled to ourselves. by then, we were beaten and bloodied but unbowed. we were determined to optimize these chopper routes, come hell or high water!
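(a sketch of what that asymmetry looks like in code: the wind component along the track changes the groundspeed, and a downwind hop pays a manoeuvring penalty for the two 180-degree turns. the speed and penalty figures are hypothetical.)

```python
import math

CRUISE_KT = 110.0  # hypothetical cruise speed, knots

def leg_minutes(dist_nm, heading_deg, wind_from_deg, wind_kt):
    """time for one hop. flying into the wind is slower but lands straight in;
    flying downwind is faster but pays for two 180-degree turns. the result:
    time(x, y) != time(y, x), i.e. an asymmetric matrix."""
    headwind = wind_kt * math.cos(math.radians(wind_from_deg - heading_deg))
    groundspeed = max(CRUISE_KT - headwind, 20.0)   # crude floor, stay sane
    turn_penalty = 2.0 if headwind < 0 else 0.0     # hypothetical 2 minutes
    return 60.0 * dist_nm / groundspeed + turn_penalty

leg_minutes(10, heading_deg=0, wind_from_deg=0, wind_kt=20)    # into the wind
leg_minutes(10, heading_deg=180, wind_from_deg=0, wind_kt=20)  # downwind, penalized
```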

so back we went to our desks. we modified the search algorithm yet another time. by now, the code had grown so long that our program broke the limits of the editor in turbo pascal. but we soldiered on. finally, we had all of our users’ requirements coded into the algorithm.

or so we thought. we weren’t in the least bit surprised when, after looking at our latest output, they asked “was this in summer?”. we had now grown accustomed to this. they explained to us that the maximum payload of a chopper is a function of ambient temperature. on the hottest days of summer, choppers have to fly light. on a full tank, a 12-seater may now only accommodate 6 passengers. we were ready to give up. but not yet. back we went to our drawing board. and we went to the field one last time.
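(extending the earlier capacity sketch with a hypothetical linear derating of payload with temperature. the numbers are invented to match the 7-in-mild-weather, 6-at-the-height-of-summer anecdote; real derating comes from the aircraft’s performance charts.)

```python
def payload_budget_kg(ambient_c):
    """hotter air is thinner, so the usable payload budget shrinks with
    temperature. hypothetical: lose 3 kg per degree above 15C."""
    return PAYLOAD_BUDGET_KG - max(0.0, ambient_c - 15.0) * 3.0

def max_passengers_at(fuel_kg, ambient_c):
    by_weight = int((payload_budget_kg(ambient_c) - fuel_kg) // PASSENGER_KG)
    return min(SEATS, by_weight)

max_passengers_at(FULL_TANK_KG, 15.0)  # -> 7 on a mild day
max_passengers_at(FULL_TANK_KG, 42.0)  # -> 6 on the hottest day of summer
```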

in some cases, we found that the radio operators were doing better than the computer. in some cases, we beat them. i can’t say no creative accounting was involved but we did manage to eke out a few percentage points of improvement over the manually generated routes.

epilogue

you’d think we’d won this battle of attrition. we’d shown that we could accommodate all of their requirements. we’d proved that we could do better than the radio operators. we’d taken our machine to the radio operator’s cabin on the platform and installed it there.

we didn’t realize that the final chapter hadn’t been written. a few weeks after we’d declared success, i got a call from ONGC. apparently, the system wasn’t working. no details were provided.

i flew out to the platform. i sat with the radio operator as he grudgingly input the requirements into the computer. he read off the output from the screen and proceeded with his job. after the morning sortie was done, i retired to the lounge, glad that my work was done.

a little before lunchtime, i got a call from the radio operator. “the system isn’t working!”, he said. i went back to his cabin. and discovered that he was right. it was not that our code had crashed: the system wouldn’t boot. when you turned on the machine, all you got was a lone blinking cursor in the top left corner of the screen. apparently, there was some kind of catastrophic hardware failure. in a moment of uncommon inspiration, i decided to open the box. i fiddled around with the cards and connectors, closed the box, and fired it up again. and it worked!

it turned out that the radio operator’s cabin was sitting right atop the industrial-strength laundry room of the platform. every time they turned on the laundry, everything in the radio room would vibrate. there was a pretty good chance that our PC would regress to a comatose state every time they did the laundry. i then realized that this was a hopeless situation. can i really blame a user for rejecting a system that was prone to frequent and total failures?

other articles in this series

this blog entry is intended to set the stage for a series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the links below leads to a piece that dwells on one particular aspect.

optimization: a case study (this article)
architecture of a decision-support system
optimization and organizational readiness for change
optimization: a technical overview

About the author – Dr. Narayan Venkatasubramanyan

Dr. Narayan Venkatasubramanyan has spent over two decades applying a rare combination of quantitative skills, business knowledge, and the ability to think from first principles to real world business problems. He currently consults in several areas including supply chain and health care management. As a Fellow at i2 Technologies, he tackled supply chain problems in areas as diverse as computer assembly, semiconductor manufacturing, consumer goods, steel, and automotive. Prior to that, he worked with several airlines on their aircraft and crew scheduling problems. He topped off his days at IIT-Bombay and IIM-Ahmedabad with a Ph.D. in Operations Research from the University of Wisconsin-Madison.

He is presently based in Dallas, USA and travels extensively all over the world during the course of his consulting assignments. You can also find Narayan on LinkedIn at: http://www.linkedin.com/in/narayan3rdeye


Pune Tivoli Users Group Meeting – 18 April


What: This is the first meeting of the Pune Tivoli Users Group – with introductory presentations on various Tivoli products
When: 9:30am to 1:30pm, Saturday 18th April
Where: Meeting Room M4 (Video Conference Room), 7th Floor, Tower (B), Tech Park One (TPO), (Panchshill), Off Airport Road, Near Don Bosco School, Yerwada
Registration and Fees: This meeting is free for all to attend. Register here.

Details:

Come along to the first ever Pune Tivoli User Group meeting and meet like-minded people. Your presence will help to make this group a success!

Agenda:

9:30am – Introduction for Tivoli User Group members. What do YOU want to get out of YOUR group?
Topic #1 – TSM FastBack : Introduction & Architecture
Speaker: Chanchal Ghevade, IBM
Topic #2 – Introduction to Tivoli Identity and Access Management
Speaker: Deepak Kaul, IBM
Topic #3 – Introduction to Tivoli Storage Manager
Speaker: Rahul Sharma, IBM
Topic #4 – Introduction to IBM Tivoli Monitoring
Speaker: Himanshu Karmarkar, IBM
Topic #5 – IBM Support Assistant (ISA) tool for Tivoli products

Feedback & close
Lunch and networking.

As usual, see the PuneTech calendar for other tech events in Pune this week.


JBoss/Hibernate Guru Emmanuel Bernard in Pune – 21/22 April (Pune JBoss Users Group)


Emmanuel Bernard, a JBoss guru and founder or co-founder of the Hibernate Annotations family of projects, is visiting Pune next week. The just-formed Pune JBoss Users Group is planning on arranging an event for Pune’s JBoss/Hibernate developers to interact with Emmanuel. The details have not yet been decided and will be put up on the PuneJBUG mailing list. Or, simply subscribe for PuneTech updates.

Stay tuned for details of the event that the PuneJBUG is planning. If you want to meet Emmanuel separately, you can try to get in touch with Jaikiran, the creator/moderator of PuneJBUG, or you can directly message Emmanuel via twitter.

About Pune JBoss Users Group – PuneJBUG

This is a community for JBoss developers in Pune (or any other part of India). The group will soon be starting regular events related to JBoss community projects. Feel free to suggest an event that you would like to organize or participate with other JBoss community users. Mailing List: http://groups.google.com/group/jbug-pune

About Emmanuel Bernard

Emmanuel is a lead developer at JBoss, a division of Red Hat. After graduating from Supelec (a French “Grande Ecole”), Emmanuel spent a few years in the retail industry, where he started to be involved in the ORM space. He joined the Hibernate team 4 years ago. Emmanuel is the lead developer of Hibernate Annotations and Hibernate EntityManager, two key projects on top of Hibernate core implementing the Java Persistence(tm) specification, as well as of Hibernate Search and Validator. Emmanuel is a member of the EJB 3.0 expert group and the spec lead of JSR 303: Bean Validation. He is a regular speaker at various conferences and JUGs, including JavaOne, JBoss World and JavaPolis, and the co-author of Hibernate Search in Action from Manning.


Pune’s Tech Mahindra wins Satyam bid

According to news reports today, Pune based Tech Mahindra has won the Satyam bid. Here is the coverage in The Economic Times: “Tech Mahindra wins bid for Satyam Computers”.


The other rivals in the race were L&T InfoTech and the American billionaire Wilbur Ross. This news is already being covered in great detail in all the national business media and I doubt if I can add anything new.

My thoughts here are from a Pune angle. Pune has been amongst the leading IT cities in India for a while now. Infosys and Wipro have plans underway to expand their Pune centers into their single biggest facilities. Yet, a ‘Pune-based company’ has never been in the big league!

It’s worth noting how Infosys started in Pune in the early 1980s and then moved on to Bangalore. In some sense this void can be filled today! Tech Mahindra has had its roots in Pune for many years. Here are a couple of links that provide more information about the company profile:

Tech Mahindra Wikipedia Link

Tech Mahindra Official Website (About Us Link)

(This article is cross-posted from Amit Paranjape’s Blog.)


Beyond keyword search: Adding Findability to your information

(PuneTech reader Titash Neogi has been in the information architecture domain for many years, and is passionate about making information more accessible to people. In recent times, he has been studying the problem of the difficulties of finding information within large enterprises, and his thoughts on the approaches for solving these problems. In this article, he gives us an overview of the concept of “findability” of information in an enterprise.)

One of the greatest human needs that has evolved in the 21st century is the need to know as much as possible about something before making a decision. While human beings have forever been driven towards learning and knowing more, in the last decade technology has added a new dimension to this.

There was a time when we could take the word of our neighbour, colleague, friend or the man at the grocery shop and reach a decision. Since information was only available in finite ways and in finite volumes, there was not much competitive edge or wide-reaching impact.

The Internet has changed all of that. Today, the fear that there’s knowledge out there that could be relevant to our decisions, and that we are not using it and getting a lesser deal, haunts us all.

And this need in turn has fuelled the growth of information, made it more complex to deal with and more voluminous in size. Pick up any topic and you would find thousands of pages on the internet related to that topic. There are facts, figures, opinions, comments, user reviews. Information, unlike wealth, has grown directly in proportion to its usage.

This information fire hose impacts both individuals and enterprises. While individuals crave to know as much as possible before committing to something, enterprises find their customers more demanding or their competition more informed.

Staying on top of this complex, voluminous information tidal wave has become crucial for survival. As a response to this, search companies have sprung up, with Google in the lead. For about a decade now, search engines of all sorts are battling it out with terabytes of content on the Internet.

However, the Internet (and other networks within and outside the enterprise) is moving from being an information store to a knowledge network. As the volume and complexity of knowledge grow, search as we know it today is becoming inadequate. Search companies are losing ground fast.

Search as a tool is fine for information stores, but poor for knowledge bases. Knowledge bases need to have findability. Search is used when you know exactly what you are looking for, and are trying to figure out where it is. Findability is when you have only a vague idea of what you want to achieve, and you rely on the knowledge base to guide you into the direction of finding more and more information that is useful and relevant to you.

Understanding Findability

Findability could mean different things to different people, or even to the same person at different times. Peter Morville, credited with coining the term findability, defines it as:

The quality of being locatable or navigable. At the item level, we can evaluate to what degree a particular object is easy to discover or locate. At the system level, we can analyze how well a physical or digital environment supports navigation and retrieval.

While a lot of people confuse findability with search, the two are really not the same thing. Search tries to solve the problem of locating information that you already know exists somewhere in a corpus.

Findability encompasses search, but also deals with the problem of how to make the searcher aware of other relevant information, that they didn’t know existed in the corpus. Findability exposes the knowledge within a corpus.

For example, when you are looking for a home loan rate from IDBI bank, it’s a search problem. You want to locate the document/URL that talks about the interest rates of IDBI. However, if you were someone very new to the entire banking/housing scenario and you didn’t know the names of any banks in India, or what loans they offered, you are dealing with a findability problem. While solving this problem will definitely involve search, you can immediately see that it also involves a lot of information engineering, semantic modelling and usability engineering.

So in a sense, Findability is the big daddy of search. The example above might sound very impractical, but you can easily abstract it and see that it applies to a lot of scenarios in different ways. The future is about solving findability problems.

While Findability concerns all of us in our everyday life, it poses some interesting challenges for modern enterprises. This article will try to focus on findability issues in the enterprise and some pointers at how to solve them.

Why is findability important for organizations?

Internally, as organizations grow bigger in size and complexity and run short on budgets and time, no one can afford to waste energy and resources duplicating knowledge that already exists in the organization. Smart companies will leverage the knowledge within their workforce and beat their competition. Enterprise 2.0 is about knowledge competition – do we know what we know, and can we use that in new ways?

Externally, as product complexity increases and newer offerings come up, it is going to be a crucial challenge for a company to communicate with its customers and let them know the range of product offerings that they have. An effective findability solution allows customers to automatically explore new products and solutions that might have come up, and might be more relevant for them.

The four-headed monster

To outline a few typical findability problems in an enterprise:

Brute Force findability: This is the most elementary form of findability problem, and can be simply classified as search. It’s like performing a grep over the content base with a particular pattern.

“I remember that the document contained the word log = 2 in its text”

Today’s search engines are very good at solving these problems. In fact this form of search has evolved simply because of the inability of search engines to understand natural language, leading users to rely on grep techniques to find documents/results in the quickest possible manner.
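For illustration, here is all that this level of findability amounts to: a literal pattern match over a corpus, with no understanding of meaning. (A minimal Python sketch; the directory layout and file extension are assumptions.)

```python
import pathlib

def brute_force_find(root, pattern):
    """Return every document under root whose raw text contains the literal
    pattern: essentially grep, the 'brute force' level of findability."""
    return [path for path in pathlib.Path(root).rglob("*.txt")
            if pattern in path.read_text(errors="ignore")]

brute_force_find("content_base", "log = 2")  # hypothetical corpus location
```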

However, as organizations grow and we move from data to knowledge, this form of search will increasingly become impossible to scale or use.

Knowledge findability: This is the next level of findability and comprises the most common “how do I” or “what is” kind of questions.

“How do I know if I really need to move my web application from struts 1 to struts 2?”

“What kind of data backup product should I buy if I am a SOHO with a limited budget?”

Index based search engines are trying various complex algorithms to solve these findability problems.

The last few years have seen a lot of semantic search solutions trying to tackle this problem, by performing semantic analysis of content and indexing them based on meaning rather than sheer word statistics.

People/expertise findability: A lot of times, we find people asking for in-house experts:

“Anyone who has worked on technology X + platform Y in the organisation”

Typically today, this is handled by word of mouth or the grapevine, which not only becomes impossible to scale in a cross-geographic organisation, but is also inefficient and limited in scope.

A semantic analysis engine, plugged into an HRMS DB or an organisation’s intranet, can very effectively solve this problem. An index-based search in a similar scenario is likely to pull up a lot of noise and irrelevant results, rather than solving the problem.

Social findability: Findability that relates to knowledge implicit in the community.

“What do all Linux newbies read when they join the organisation?”

“What’s the best starting point for understanding deployment of Product X – the product guide or the support technote?”

No semantic or index based search can ever completely fill this gap. A good approach to solving this problem would be to marry a social-tagging system such as del.icio.us or digg with a semantic analysis engine. The findability solution would need to work as a facilitator that allows people to share their personal experiences and knowledge around a product and build a knowledge community.
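As a sketch of one half of that marriage, even a plain count over community bookmarks can answer a question like the Linux-newbie one above. (The data and function below are hypothetical; a real solution would layer semantic analysis and knowledge-community facilitation on top.)

```python
from collections import Counter

# hypothetical social-tagging data: user -> documents they bookmarked
bookmarks = {
    "asha":   {"linux-primer", "bash-cheatsheet", "product-x-guide"},
    "vikram": {"linux-primer", "kernel-notes"},
    "dinesh": {"linux-primer", "bash-cheatsheet"},
}

def what_do_they_read(users):
    """Rank documents by how often a community of users bookmarked them."""
    counts = Counter(doc for u in users for doc in bookmarks.get(u, ()))
    return counts.most_common()

what_do_they_read(["asha", "vikram", "dinesh"])
# -> [('linux-primer', 3), ('bash-cheatsheet', 2), ('kernel-notes', 1), ...]
```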

What becomes obvious from this discussion is that findability is not a single technology or solution that can be purchased over the counter, deployed, and then expected to perform wonders after a couple of hours of crawling or indexing. That is just plain vanilla search. Search can provide results to queries but not necessarily answers to questions. A good search engine can make your content searchable, but it does nothing to solve your findability problem.

The mistake that most organizations make today is to deploy a million dollar search engine and then expect it to solve a problem it was never designed to solve – a findability problem.

Solving the Findability problem

A good findability platform needs to bring together expertise and lessons learnt from the fields of Semantic search, Usability, Information Architecture, Graphic Design and Text Engineering. And above all, people need to understand that a findability solution can never be a “one size fits all” solution. It can never be an appliance that you can deploy over your networks and forget about.

Think of a findability solution as an ERP solution. It needs to have various modules that can understand and talk to different information stores in the organisation. The first step in solving an organization’s findability problem is to analyze its findability needs and then deploy or develop all or some specific modules of that solution. Also important is the right combination of content strategy, user query analysis, and search and interface design.

Who’s working on Findability?

There are a lot of people trying to take their stab at findability. Solutions and products range from sophisticated semantic search applications to information architecture consulting firms. However, in the limited scope of this article, I would like to touch upon few companies/individuals who stand out in their attempts to solve the findability problem.

First, Peter Morville at Semantic Studios is doing groundbreaking work in this area. The stuff he puts up on his site (www.findability.org) is pretty exciting and educating. There are also a few start-ups in this space, but I would like to mention connectbeam (www.connectbeam.com), a Bay Area based start-up that caught my notice. They are trying to solve the social findability problem within the enterprise and I found their approach very unique.

Finally

I am old school and I like to conclude my articles with something to think about. In a global, recession ridden economy, findability affects your bottom-line one way or another. To quote Peter Morville,

“You can’t use what you can’t find” (and neither can your customers)

About the author – Titash Neogi

Titash Neogi has been working with Symantec Corp (formerly VERITAS India) for the past six years in various roles across the customer support, knowledge management and content management divisions. At present he is the architect and lead developer for Symantec’s new semantic-search-based help system initiative.


POSTPONED: Getting Started with OpenSocial, PuneGTUG Seminar is now on 18 April

Rohit Ghatol, who was supposed to give the “Getting started with OpenSocial” presentation tomorrow is down with viral fever, and hence this event is postponed to next Saturday, 18th April. Other details remain the same. See the updated post for other details.

Sorry for the inconvenience caused.
Vishwesh Jigrale, Pune GTUG Manager

Getting started with OpenSocial: Pune GTUG meet 18 April

Update: This event was earlier scheduled for 11 April. Rohit Ghatol, the presenter, is down with viral fever; hence this event is postponed to 18th April.
What: Pune Google Technology Users Group (Pune GTUG) presents a seminar “Getting started with OpenSocial”
When: Saturday, 18th April. 3pm onwards
Where: Synerzip. Dnyanvatsal Commercial Complex, Survey No. 23, Plot No. 189, Near Mirch Masala Restaurant , Opp Vandevi Temple, Karve Nagar (Map).
Registration and Fees: The event is free for all, no registration required.
Details
Agenda for this meet is as follows
1. General overview of OpenSocial (but participants are expected to read about OpenSocial beforehand)
2. Getting started with a simple Gadget
3. Getting started with a simple OpenSocial Application
4. Overview of RESTful APIs for server-side OpenSocial applications

For more information about PuneGTUG, see the PuneTech wiki profile of PuneGTUG. For other tech events happening in Pune, see the PuneTech calendar.


PMI Pune Seminar: Introduction to the Telecom Software Domain

What: Monthly meeting of Project Management Institute’s Deccan Pune Chapter, featuring an introduction to the Telecom Software Domain, and an introduction to Six Sigma
When: Saturday, 11 April, 10am to 12:30pm
Where: Pune Shramik Patrakar Sangh, Cummins Auditorium, 193 Navi Peth, Ganjwe Chowk, Near Alka Talkies, Garware bridge
Registration and Fees: This talk is free for all to attend. No registration required

Session 1: Telecom Domain Overview by Utkarsh Kikani


The session is based on understanding and knowledge developed from the vantage point of a software team member on the IT team of a typical Telco enterprise. It is meant to be an overview of and introduction to Telecom as a domain or vertical. Telecom by its very nature can become quite a technological or technical subject. However, the current session is approached more from a business angle. There will be a bit of technical talk, but that will be kept to a minimum and used only when absolutely essential. A person who is barely familiar with the world of Telecom will, at the end of an hour, have developed a very high level understanding of the domain, with some insights into its evolution, current status and future trends.

Session 1: About the Speaker Utkarsh Kikani

Utkarsh Kikani, an MCA from Surat, started his career as a C++ programmer around 16 years ago. After an initial three-year stint at a small start-up in Gujarat, he moved to the then Mahindra British Telecom, now TechMahindra, as an Analyst Programmer. Utkarsh had a 13-year-long association with TechM, with assignments with different Telcos – Singapore Telecom, British Telecom, U S West (now Qwest), Cingular Wireless, Rockwell First Point Contact and AT&T – in different roles: team lead, technical architect, business analyst, onsite coordinator, offshore project manager and program manager. He recently left TechM for a planned professional break, and is currently teaching a course on Telecom Business Management to business management students as visiting faculty.

Session 2: Introduction to Six Sigma and Impact of Variation with Hemant Urdhwareshe


The Six Sigma approach is implemented by many reputed world class companies. These companies have benefited immensely in terms of improving their bottom-line, customer satisfaction and waste reduction. The purpose of this presentation is to create awareness about the approach and its opportunities. The presentation also includes a simulation of typical manufacturing processes to appreciate the impact of variation in various processes on productivity and waste. The simulation also helps one understand the strong linkage between Lean and Six Sigma. The presentation will also attempt to illustrate the impact of variation in the context of project management.

Session 2: About the Speaker Hemant P. Urdhwareshe

Hemant Urdhwareshe is a mechanical engineer from VNIT, Nagpur with a post graduate diploma in business management from IMDR, Pune. Hemant, director of the Institute of Quality & Reliability, comes with rich experience of 28+ years at Cummins India Limited (CIL), where he worked as Sr. General Manager, Product Engineering. Hemant is a Master Black Belt (MBB) and has trained more than 300 Black/Green Belts. He has conducted Six Sigma Black Belt/Green Belt programs for several companies, along with implementation support and project reviews as MBB. Hemant has also conducted a Design for Six Sigma Green Belt program for Satyam Computers. He has conducted many other workshops in Reliability Basics, Design FMEA, Quality Function Deployment, and Reliability Growth, covering more than 500 participants. Hemant was one of the eminent panelists for the Lean Six Sigma Excellence Awards organized by SCMHRD and Sakal. He has published a series of articles on Six Sigma and related topics in renowned journals and magazines. A recipient of several awards, Hemant is a member of several institutes such as the American Society for Quality (ASQ).


Internet Traffic Tracking and Measurement

comScore search ratings, Dec. 2005-2006. Image by dannysullivan via Flickr

(As the web upgrades to web-2.0, it becomes a difficult challenge to figure out the value of companies that are serving this market. Since most web-2.0 companies are in an early stage of their evolution, they can’t be measured on the basis of the revenues they are earning. Instead, one needs to guess at the future earnings based on measuring the thing that they’ve currently managed to capture – i.e. the number and demographics of visitors, and the amount of attention they are paying to the site. Pune-based entrepreneur Vibhushan Waghmare, who has co-founded a marketing analytics startup, MQuotient, gives us an overview of this space, points out some problems, and wonders if there is an opportunity for some entrepreneur to step in and provide solutions.)

Introduction

A good product or service will always attract appreciation and success, but what will make it stand out from the crowd and fetch a premium is knowledge of exactly how much better it is than the rest. Qualitative strategy decisions are important to set the direction, but real numbers, and insights from these numbers, are required to actually know how fast or slow one is moving in that direction and how far one is from the target.

This applies to online internet businesses as well. As against the established brick-and-mortar businesses which are driven primarily by monetary profitability, evolving online businesses have been searching for the parameters to judge and measure the success or failure of the business.

In the days before the Dot Com bust (of the early 2000s), we saw how internet companies’ valuations shot through the roof based on parameters like the eyeballs they generated – and, hypothetically, could monetize. We had ExciteAtHome paying $780 million for BlueMountain.com, an online greeting cards company with 11 million monthly visitors and negligible revenues (which was sold to American Greetings two years later for just $35 million!). Back then, page-hits on the servers were what each site measured and investors bought into.

Today we are in the web2.0 world, and the parameters for measuring the success of an online business have also evolved to a 2.0 version. Before Lehman Brothers folded up shop, we had valuations of these socionets soaring to astronomical levels – all based on the unique users they could generate. Page-hits have given way to page-views per unique user, and now we talk about more evolved and derived parameters like unique users visiting the site and time spent by each unique user on the website. With the ghosts of the Dot Com bust not yet laid to rest, investors and entrepreneurs are much more cautious and are becoming scientific in tracking and measuring internet traffic. Still, every now and then we get news of socionets with their latest 2.0 apps being chased by good money because of their platform of involved users, although all that the users do there is poke each other and take up the challenge of some random quiz. We all know the problems giants like Google face when it comes to monetizing a socionet like Orkut.com. Economists have predicted 8 of the last 5 economic meltdowns, and I don’t want to sound like one. I just want to point out the issues faced by online businesses today.

User Panel based traffic estimation

I was reminded of these measurement arguments last weekend when I attended an interesting talk organized by the Pune OpenCoffee Club. We had the owner of a reputed online gaming portal talking about the kind of traffic his games attract. He used comScore extensively to compare himself in the online gaming world and stated that getting into the top 5 of the comScore list of online gaming sites worldwide is the target he has set for himself. (I don’t know whether the list was of page views or of unique users these gaming sites are generating.) Definitely a great target to chase!


While comScore does provide an elaborate analysis of website traffic and is considered a standard worldwide, before we set our business targets based on it, we need to understand the methodology used for this tracking. comScore has a panel of around 2 million internet users worldwide (16,000 in India) and these users install monitoring software from comScore on their computers. This monitoring software is used to determine which websites are being visited by these users, and how much time they are spending on each site. comScore then uses extensive statistical methods to extrapolate these numbers to the behaviour of all users (not just comScore’s user panel). (More details on the methodology are here.) They have elaborate analyses like time spent by each user, IP tracking, repeat users, incoming and outgoing traffic and many more such details.
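To make the idea of extrapolation concrete, here is a deliberately naive sketch: if the panel were a uniform random sample of the online population, site-wide traffic would just be panel traffic scaled by the inverse of the sampling fraction. comScore’s actual methods are far more sophisticated (demographic weighting, bias correction and so on), and the online-population figure below is a made-up assumption.

```python
PANEL_SIZE = 16_000              # comScore's reported panel size in India
ONLINE_POPULATION = 45_000_000   # hypothetical count of India's internet users

def estimate_unique_visitors(panel_visitors):
    """Scale panel observations by the inverse of the sampling fraction."""
    sampling_fraction = PANEL_SIZE / ONLINE_POPULATION
    return panel_visitors / sampling_fraction

estimate_unique_visitors(120)  # 120 panelists seen on a site -> ~337,500 visitors
```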

But what needs to be noticed is the fact that comScore excludes traffic from cyber cafes and from users under age 15. For India, I am sure that is a sizable mass of internet users. And when it comes to activities like online gaming, I am afraid the absolute numbers shown by comScore might be drastically different from reality. The cyber cafe still remains an important point of access for Indians, and excluding this traffic can result in misleading conclusions. Internet use is being taught in schools and, at least in cities, school kids are using the internet extensively for both study and entertainment. In such a situation, excluding users under age 15 might not always provide the best traffic numbers, especially for an activity like online gaming.


The other good bet in terms of tracking online traffic is Alexa.com. Alexa, again, is panel-based research built on information gathered through a browser toolbar that its panel of users download and install in their browsers. However, in the more than a decade that I have spent on the internet, I have not seen a single browser with the Alexa toolbar. Apart from the high-end users of the internet, I wonder if an average internet user would actually go to www.alexa.com and download and install their toolbar.

There are some other tracking and measurement services available, but mostly it has been Alexa and comScore who are quoted for such purposes.

One can argue that both comScore and Alexa work based on a random sample and hence the same reporting error would appear across traffic measurements for all sites. Given this, Alexa and comScore can be reliably used to compare two internet destinations or to detect deviations from a normal trend. However, for absolute numbers, I guess a lot more needs to be done.

For developed countries where most of the traffic originates from homes, schools or offices and very little from cyber cafes, these numbers might work, but for India with its huge cyber cafe traffic, I guess a more extensive tracking system is required. Cyber cafes continue to be an important point of access, often the only access point in tier II and III cities. I have seen young school kids flocking to these cyber cafes, which serve more as gaming parlours; parents creating matrimony profiles of their children with the help of an assistant (generally the owner) at the cyber cafe; and young college students playing pranks on their friends through Orkut and also getting their first exposure to mature content on the internet. comScore is missing this traffic by excluding cyber cafes.

Although this traffic might not be very huge in terms of absolute numbers, the general observation is that these new users of the internet (who learn how to use the internet in cyber cafes) are more likely to click on ads, as their online behaviour has not matured enough to differentiate an online advertisement from a genuine article. I once saw a school kid trying to fill up a life insurance form because the advertisement offered some lucky draw prize on filling the form (of course, he never completed the form, for lack of a PAN number :-)). This audience would be of the least interest to all the online advertisers and brands since they hardly convert into any transaction; however, these would be the guys most likely to click on all those online advertisements, and hence they form an important part of the online advertisement industry.

Is there an entrepreneurship opportunity here?

I am sure that all hosting servers do have the exact numbers about the traffic coming to them; the key, however, is in profiling this traffic and consolidating and analysing this information into useful insights. Quite often, websites that try to track and measure their traffic resort to putting javascript on their pages for this purpose. This adds to page-weight and slows down the site, a significant problem in a country like India where high-speed broadband is still a luxury. These efforts give reasonable tracking and measurement of traffic from the server side alone. However, to prove the worthiness of the traffic generated by a website, the system needs to track the demographic details of this traffic. It should provide information about the age, education profile, income level and other details in which advertisers and investors would be interested. Of course, proxy variables need to be used for this tracking, along with all the principles of market research and due care for the privacy of the user. The system should also be encompassing enough to take care of the diversity in internet usage that we see in India, in developed western countries, and in non-English-speaking countries.
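As a hint of what server-side profiling could look like, here is a minimal sketch that parses a standard access log and splits hits by origin. The cyber-cafe IP prefixes are a hypothetical stand-in for the much harder profiling problem (demographics, privacy safeguards and all) described above.

```python
import re
from collections import Counter

# common log format: ip ident user [timestamp] "request" status bytes
LOG_LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)"')

def profile_access_log(path, cafe_ip_prefixes=("203.94.",)):  # hypothetical prefixes
    """Count hits by origin: traffic from known cyber-cafe IP ranges vs the rest."""
    counts = Counter()
    with open(path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match:
                ip = match.group("ip")
                counts["cafe" if ip.startswith(cafe_ip_prefixes) else "other"] += 1
    return counts
```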

Creating such a tracking and measurement system for India would need investment, and given the current level of online advertisement spends in the country, it needs to be analyzed whether this investment is justified.
Do you guys see an entrepreneurship opportunity in this?

About the author – Vibhushan Waghmare

Vibhushan is a co-founder of MQuotient, a Pune-based startup that uses cutting-edge quantitative analytics and mathematical modeling to build software products for marketing analytics, and in general deliver solutions for enterprise marketing challenges. Before co-founding MQuotient, Vibhushan was managing the Search product at Yahoo! India. He is an MBA from IIM Ahmedabad and an Electrical Engineer from REC, Nagpur. He has also held positions with Amdocs & Cognizant Technology Solutions. Check out his blog, his linked-in page, or his twitter page for more about him.


PuneTech’s April Fools Prank – Pune handling recession better than Bangalore

Following in the footsteps of such respected media houses as the BBC and the Guardian, PuneTech yesterday played an April Fools’ Day prank on its readers.


Click here if you cannot see the video above. It’s different from yesterday’s news clip.

We claimed that Pune is expecting growth in revenues in spite of the recession, whereas the other cities, including Bangalore, would see a decline. If you missed all the excitement yesterday, check out yesterday’s report. Unfortunately, the entire report was a hoax. There is no company by the name of INHR Associates, the links to the “extended abstract” and the “full (paid) report” are both non-existent, and the news clip by a “certain TV channel (that will remain unnamed)” was actually created for PuneTech by some of our over-enthusiastic friends.

We have shown above a different, more over-the-top version of the same “news video”. It has some out-takes and includes credits for the cast and crew who made the film. As with the original video, the “reactions” of the average techie-in-the-street are the most hilarious – definitely worth checking out.

The hints you should have seen

As with any good April Fools’ Prank, we tried to liberally sprinkle it with giveaways – hints that people should have caught on to:

  • The original source report did not exist. Only two brave souls complained to us that the links were broken. Everybody else seems to have taken our report at face value
  • INHR Associates, the company which is supposed to have done the survey, had a website called http://inhumanresources.com – very few people picked up on that
  • Needless to say, the video is completely ridiculous. The fact that many people actually believed it to be real, is a very sad commentary on the state of the real news being put out by our TV channels. We have come to expect trash like this from our TV.
  • Check out the ticker at the bottom of the news video. It has ridiculous items like “Mallika Sherawat enters politics” and a bunch of other such things.

The believers


@asutosh has the distinction of being the first person to fall for the joke and retweet it

This was just the first in a long line of believers. We had a few accomplices (@aparanjape, @d7y, @amitsomani, and @meetumeetu) who re-tweeted it, after which I believe about 15 to 20 believers retweeted.

@logic loved the “marathi manoos” reaction in the video and wrote:

http://is.gd/pVGh 3:10 I knew recession wont affect pune coz marati dictionary doesnt have it.. ROFLMAO #EKSI. no this tweet is not #april1

Oh, the irony! Sorry, @logic, it was #april1.

@beastoftraal and @milliblog ridiculed PuneTech thusly:

Pune handling recession better than Bangalore? They couldn’t afford to buy a report for Rs. 7.5K, but! http://tr.im/i4jD

Another person who did not like the Pune chest-thumping was @kiranspillai who chastised us:

Junta in Pune maha excited about thier recession being not so bad as bangalore. Chest deep or neck deep – You still have sh*t sticking on u

I think the best exchange happened on Facebook. PuneTech’s chief evangelist, Amit Paranjape, who was in on the joke, posted this:

Amit Paranjape: Why Pune is handling recession better than Bangalore. http://tinyurl.com/czl3w8

This resulted in the following conversation thread over there:

Rohit Joshi at 3:06pm April 1
It’s a bit like comparing France with the US. The French don’t have boom-bust cycles like anglo-saxon economies because the French don’t innovate and take risks as much. France is still a lovely place.

Navin Kabra at 3:09pm April 1
@Rohit, I doubt that the software/IT/ITES economics of Bangalore and Pune are very different from each other in terms of innovation, risk taking, and boom-bust cycles. I’m sure the explanation for this phenomenon lies elsewhere.

Abhijit Athavale at 4:22pm April 1
Maybe, the Puneites have not realized how serious this thing is going to be. Seriously, the reason might be that Pune has a ton of local industry that the IT/ITES companies are catering to. Bangalore has none.

Amit Paranjape at 5:30pm April 1
I agree Pune does have other local industry. Also, many other non-IT ‘tech’ companies.

After seeing all these reactions, I almost wished that the news item was true.

Of course, there were also some believers in the comments on the original post.

The making of the video

A few months back when we first got the idea, I casually asked meetu of wogma whether her film-industry friends would be willing to help us out by making a short film for PuneTech as an April Fools’ prank. She and her friends went nuts with the idea and produced this clip. At that time, I had absolutely no idea of the huge amount of effort that goes into making even a short film like that. But meetu and her friends really took to the idea, and worked nights and weekends for almost 20 days to make this clip. I would probably not even have suggested the idea if I had known this beforehand, but anyway, they seem to have enjoyed the process, and we at PuneTech are absolutely thrilled with the final product. We would like to thank them all for their efforts, and for the superb result.

The director (Nitin Gaikwad, nitindgaikwad[at]gmail[dot]com, +91 98193 74727), the editor (Shreyas Beltangdy, shreyasbeltangdy[at]gmail[dot]com, +91 98922 12953) and the main news anchor (Raj Kumar Yadav, raj.deniro[at]gmail[dot]com, +91 99677 82869) are actual professionals in their field, and friends of meetu, who did this for us, free. The rest of the cast and crew are friends, relatives and neighbors. Here are the full credits:

Cast
News reader: Raj Kumar Yadav (raj.deniro[at]gmail[dot]com, +91 99677 82869)
Statistical analyst: Subramanyam Pisupati
Pune expert: Navin Kabra
Field Reporter: Shweta Karwa
IT employee #1: Mudit Singhal
IT employee #2: Nitin Gaikwad
IT employee #3: Subur Khan
IT employee #4: Amit Rajput

A billion thanks to: Pushpa and Badri Baldawa

Special thanks to: Mudit Singhal, Ravi Iyer, Agasti, Amit Rajput
Lighting: Agasti, Chandu dada
Production Assistants: Siddhu, Shiva, Sriram, Hemant, Saraswati
Make-up: Suman Baldawa
Camera and sound equipment: Rudra Communications
Editing Studio: Rudra Vision
Concept: Navin Kabra
Editor: Shreyas Beltangdy (shreyasbeltangdy[at]gmail[dot]com, +91 98922 12953)
Dialogue: meetu, Shreyas Beltangdy, Navin Kabra

Director of Photography: Shreyas Beltangdy (shreyasbeltangdy[at]gmail[dot]com, +91 98922 12953)
Director: Nitin Gaikwad (nitindgaikwad [at] gmail [dot] com, +91 98193 74737)

Final thoughts

Gautam Morey, a PuneTech reader, said yesterday: “You are spending too much time setting up an April Fool’s Joke!”

And our answer was: “All work and no play makes Jack a dull boy :-). We spend so much time being serious that spending some time on frivolous things is OK once in a while.”

At the very least, I think we should be able to say that Pune handled April Fools’ day much better than Bangalore!
