Ever since Toyota introduced the Prius in the 1990s, hybrid vehicles have been an exciting development area in the auto industry. With ever-increasing fuel prices and environmental concerns, hybrid technology will play an increasingly important role in the automobile of the 21st century.
Hybrid vehicles use a combination of power from an internal combustion engine and an electric motor. The electric motor is powered by a battery, which is typically charged during braking and deceleration. The battery can also be charged directly from an external electric source (such vehicles are known as ‘plug-in hybrids’). Hybrid vehicles typically deliver double-digit percentage improvements in fuel economy and emissions.
Last week, Pune-based KPIT Cummins and Bharat Forge announced a joint venture for the design and development of a plug-in hybrid solution. PuneTech interviewed KPIT Cummins SVP Anup Sable to get more details.
You announced a joint venture with Bharat Forge last week in the exciting new area of hybrid vehicles. Can you explain the new solution that you are planning to develop?
The hybrid solution developed by KPIT Cummins is a plug-in parallel hybrid solution that consists of the following key components:
Electric motor
Electronic motor controller
Battery pack
Mechanical assembly & coupling
Proprietary software for control algorithms of the motor & batteries
Intelligent battery management system that enhances battery performance and battery life
Overview:
Plug-in: The batteries used in the solution can be charged from a standard external electricity source such as a domestic power outlet.
Parallel hybrid: The motor and engine work simultaneously at all times. The vehicle will at no point run as an EV (electric vehicle) alone, and hence will continue to operate as a conventional fuel vehicle even if the batteries are fully discharged. The solution is battery-agnostic; in other words, it can be adapted to work with various types of batteries, such as lead-acid or lithium-ion. The tests were performed using a lead-acid battery based solution. The solution works without any interaction or interference with the existing Engine Management System (EMS) of the vehicle, and is therefore also adaptable to vehicles without an EMS or electronic engine controls. It requires low maintenance and uses a reliable three-phase AC induction motor.
Eleven global patents have been filed in areas such as the integrated system, motor design, motor mounting, the control system, and battery management.
Is this a full-fledged hybrid vehicle you are building, or a sub-system that will be sold to Auto OEMs?
Our solution can be retrofitted into any car – new or used; small, mid-size, or large. We will not be building a complete vehicle.
Can you talk about the advantages of plug-in hybrids over other conventional hybrids?
Benefits:
Doesn’t interfere with manufacturers’ in-vehicle systems
It is battery chemistry agnostic
Is compact yet delivers high peak power
Fuel-efficiency improvement of over 40%, as observed during tests at ARAI; beyond that, the solution provides 60% to 80% improvement in city driving conditions and above 50% in highway driving.
Solution is capable of reducing GHG emissions by over 30%.
Since fuel consumption goes down on account of the hybrid solution, the government would save through reduced fuel subsidies and foreign-exchange outflow.
Retrofitting can be completed in 4 to 6 hours.
Solution does not require additional infrastructure investments from the government.
What markets do you see for this technology?
After-market, vehicle owners & OEMs
What unique capabilities do KPIT and Bharat Forge bring to the table?
The technology for this intelligent plug-in, parallel, full-hybrid solution has been designed and engineered by KPIT Cummins, while the solution for automobiles would be manufactured through a joint venture (JV) between Bharat Forge Limited and KPIT Cummins Infosystems Limited. As part of the joint venture, KPIT Cummins will license the technology to the JV, while Bharat Forge will bring its manufacturing, assembly & integration expertise. The solution will be marketed to OEMs and to fleet & individual vehicle owners through a network of certified and authorized dealerships.
A general question – how do you see the future of embedded electronics space evolving in the next few years?
The embedded electronics space will see its application grow in Indian vehicles at a rate faster than it did in Europe, the US, and Japan. The key difference is that in the Western world the application of electronics in the vehicle evolved with advances in technology (and regulations), whereas in India it will be driven by market demand at the appropriate price.
About Anup Sable – SVP, Automotive and Allied Embedded and Tools, KPIT-Cummins
Anup heads the Automotive line of business which is a leading product engineering partner to the automotive industry.
He is responsible for managing relationships with customers and helping them to globalize & standardize efficiently. He has been instrumental in creating a robust delivery ecosystem which supports clients in bringing complex technology products and systems faster to markets.
Passionate about technology in cars, Anup began his career as a research engineer at the Automotive Research Association in India (ARAI). He joined KPIT Cummins as a software engineer in 1994. With over 15 years of experience in the field of automotive electronics, Anup has played a key role in setting up the Automotive Electronics practice at KPIT Cummins.
Anup earned his engineering degree from Government College of Engineering, Pune.
(This article by Amit was first published on his blog and is being reproduced here with permission.)
Pune is well-known in India and internationally for being a hub of education and research. It has a wide range of academic & research institutions spanning various domains in science, technology, medicine, agriculture, arts, humanities, law, finance, etc. This blog article is an attempt to list out these various institutions. If you find any missing, please add a comment.
Science & Technology
University of Pune: http://www.unipune.ac.in
One of the top universities in the country, established in 1948.
College Of Engineering Pune (COEP) http://www.coep.org.in
One of the top engineering colleges in the country, and the second-oldest engineering college in India (established 1853).
National Chemical Laboratory http://www.ncl-india.org/
The top research organization in India that is focused on Chemistry and Chemical Engineering.
Automotive Research Association of India (ARAI) http://www.araiindia.com/
A collaborative effort between the Government and Industry, focused on testing and validation of various automotive related technologies.
Bhaskaracharya Pratishthana (Bhaskaracharya Institute of Mathematics) http://www.bprim.org/
Educational and Research Institute in Mathematics
Inter University Center For Astronomy & Astro-Physics (IUCAA) http://www.iucaa.ernet.in/Home.html
Focused on academic research in astronomy and astrophysics.
National Center For Radio Astro-Physics (NCRA – TIFR) http://ncra.tifr.res.in/
A division of TIFR, focused on research in astrophysics. Also involved with the GMRT (Giant Metrewave Radio Telescope) project near Narayangaon.
Central Institute Of Road Transport (CIRT) http://www.cirtindia.com
Research on roads and transportation.
Tata Research Development & Design Center (TRDDC) http://www.tcs-trddc.com
Part of the Tata Group – focused on research in various engineering and computer science related areas.
Computational Research Laboratories (CRL) http://www.crlindia.com/
CRL is a wholly owned subsidiary of Tata Sons Ltd. The company is in the business of R&D and of High Performance Computing (HPC) services and solutions.
Centre for Development of Advanced Computing (C-DAC) http://www.cdac.in/
Focused on advanced computing related research. Renowned for developing India’s first supercomputer, PARAM.
Vasantdada Sugar Institute http://www.vsisugar.com
Sugarcane and sugar research and process development
Agriculture College http://mpkv.mah.nic.in/PUNECOL.HTM
One of the first Agricultural Research Institutions, established over 100 years ago. Research in various types of crops, cultivation, seeds, etc.
National Institute of Virology (NIV) http://www.niv.co.in/
Research in virology. WHO Collaborating Center for arbovirus reference and hemorrhagic fever reference and research. National Monitoring Center for Influenza, Japanese Encephalitis, Rotavirus, Measles, and Hepatitis.
National AIDS Research Institute (NARI) http://www.nari-icmr.res.in/
Various aspects of research on HIV and AIDS through infrastructure development, capacity building & research programs.
Call it by whatever name: Green Energy / Sustainable Solutions / Cleantech / Alternative Energy / etc. The quest for environment-friendly, cheap, and renewable energy is probably the most important technology problem of the 21st century.
Various options have been under development for a few decades, but all of them put together still constitute a small percentage (in most countries, single digit) of total energy consumption. These options include wind, bio-fuels, solar photovoltaics, solar-thermal, geothermal, tidal power, etc. The only renewable energy form that has been used effectively (in non-trivial amounts) is hydel power.
In this brief blog post, I am attempting to capture a list of interesting companies and R&D organizations in Pune that are involved in these fields. I would appreciate any inputs (and details) on companies/organizations that you are aware of and that are not listed here. Please add them as comments, and I will consolidate them into this blog post.
Thermax has been an important Indian (as well as global) player in Thermal Engineering for many decades. Their focus includes Solar Thermal, Geo-Thermal, Waste-Heat Recycling and related areas.
Suzlon Energy is amongst the top wind power companies in the world. Headquartered and founded here in Pune, it has a global presence in Europe, North America, Australia and in many other countries.
National Chemical Laboratory (NCL) is the premier research institution in India (and one of the top ones globally), involved in R&D in chemistry and chemical engineering. Their work includes research on bio-fuels, associated enzymes, etc.
Over the past 2 years, PuneTech has covered a breadth of technology related topics, with a concentration on Information Technology & Software. The strategic goal is to cover multiple technology segments and discuss innovative & exciting developments in these areas; specifically in Pune’s context.
It was with this objective that the concept of a ‘SIG’ (Special Interest Group) was first mooted last year. A SIG covers a given vertical or horizontal domain area in depth. We decided that the best way to expand PuneTech would be to create a number of such SIGs, each focused on some particular vertical, and each run by someone who is passionate about that vertical. PuneTech would provide support: a launching pad, publicity and visibility, and guidance about what works and what doesn’t, based on our own experiences. Over time, we expect SIGs to have their own websites and their own offline events.
PuneChips was the first SIG, launched in June 2009 by Abhijit Athavale (SIG Leader) in cooperation with PuneTech. It focuses on semiconductor design and applications. This SIG has arranged many successful meetings and events, and has now launched its own website: www.punechips.com. The website features information about PuneChips events, as well as blogs about the semiconductor and embedded-systems industry. Volunteers like Arati Halbe have helped with PuneChips (but more volunteers are needed). Also, the Venture Center and Kaushik Gala have been helpful in graciously providing their premises for PuneChips events. For more details see the PuneChips about page.
PuneTech hopes to incubate more SIGs like PuneChips in future, and spin them off as separate entities. PuneTech will continue to be actively involved in supporting and publicizing the events and activities of these SIGs. If you’d like to start one, please get in touch with us.
PuneChips Activities
Over the past 8 months, PuneChips has organized a number of interesting meetings, featuring senior thought leaders from the semiconductor industry. It also has an active Google Groups mailing list and a ‘Pune Chips’ LinkedIn group. Nearly 200 professionals and students from VLSI, embedded systems, and other related areas are members of these groups. You can also follow PuneChips on Twitter.
The first kick-off meeting of PuneChips in June 2009 featured Abhijit Abhyankar, Head of Rambus India. His talk on Emerging opportunities in the semiconductor industry presented a nice overview of the semiconductor sector and its progression over the past few decades. He also discussed emerging opportunities and trends in this field.
The second event featured Shrinath Keskar, ex-MD of Ikanos India. His presentation, IC Design Challenges in the Telecom Sector, discussed the various challenges in IC design, specifically with respect to the telecom sector.
The October 2009 session featured a talk by Cliff Cummings, President of Sunburst Design and SystemVerilog Industry Guru. He talked about SystemVerilog & Designer Productivity, discussing specific tools and tricks for improving designer productivity.
The January session featured Madhu Atre, President of Applied Materials India. His talk, A Bright Solar Future, discussed new developments in the area of solar power (specifically photovoltaics) and global macro trends in alternative energy. He also touched upon the implications of these developments for India, including costs and government incentives.
In 2010, PuneChips plans to arrange similar meetings, featuring talks by thought leaders from the industry. The SIG also looks forward to more active interactions on the mailing list and LinkedIn group. If you are interested in learning more about PuneChips activities and/or have a speaker you would like to recommend, please contact Abhijit Athavale.
You should form a SIG too – Get in touch with us
It would be great if Pune had many more such SIGs. A number of such groups and organizations are already active (some, like the Pune Linux Users Group, have existed long before PuneTech was started, and most, like the Pune Open Coffee Club for entrepreneurs and startups, were created independently). But there is scope for many more. The existing ones largely tend to be focused on particular technologies (like the Google Technologies User Group, or the Pune User Group for Microsoft technologies). Only a few are aligned with industry verticals, like PuneChips, or the null group focused on security. I think there should be more. So, if you’re passionate about some industry vertical, are willing to spend at least a few hours a week organizing a Pune-based SIG around that vertical, and are willing to do that for at least a couple of years, please get in touch with us, and let us make it happen.
In fact, it does not even have to be a vertical. It can be a horizontal area that cuts across groups. As long as it is something that benefits Pune’s techies, we are game. Indeed, we’re soon expecting to make an announcement related to PuneTech and Marathi. Subscribe to PuneTech so you don’t miss it.
(For Dr. Narayan Venkatasubramanyan’s detailed bio, please click here. For the full series of articles, click here.)
this is a follow-up to optimization: a case study. frequent references in this article to details in that article would make this one difficult to read for someone who hasn’t at least skimmed through that.
the problem of choice
the wikipedia article on optimization provides a great overview of the field. it does a thorough job by providing a brief history of the field of mathematical optimization, breaking down the field into its various sub-fields, and even making a passing reference to commercially available packages that help in the rapid development of optimization-based solutions. the rich set of links in this page lead to detailed discussions of each of the topics touched on in the overview.
i’m tempted to stop here and say that my job is done but there is one slight problem: there is a complete absence of any reference to helicopter scheduling in an offshore oil-field. not a trace!
this brings me to the biggest problem facing a young practitioner in the field: what to do when faced with a practical problem?
of course, the first instinct is to run with the technique one is most familiar with. being among the few in our mba program that had chosen the elective titled “selected topics in operations research” (a title that i’m now convinced was designed to bore and/or scare off prospective students who weren’t self-selected card-carrying nerds), we came to the problem of helicopter scheduling armed with a wealth of text-book knowledge.
the lines represent the constraints. the blue region is the set of all “permissible values”. the objective function is used to choose one (“the most optimal”) out of the blue points. image via wikipedia
having recently studied linear and integer programming, we first tried to write down a mathematical formulation of the problem. we knew we could describe each sortie in terms of variables (known as decision variables). we then had to write down constraints that ensured the following:
any set of values of those decision variables that satisfied all the constraints would correspond to a sortie
any sortie could be described by a permissible set of values of those decision variables
this approach is one of the cornerstones of mathematical programming: given a practical situation to optimize, first write down a set of equations whose solutions have a one-to-one correspondence to the set of possible decisions. typically, these equations have many solutions.
the other cornerstone is what is called an objective function, i.e., a mathematical function in those same variables that were used to describe the set of all feasible solutions. the solver is directed to pick the “best” solution, i.e., one that maximizes (or minimizes) the objective function.
the set of constraints and the objective function together constitute a mathematical programming problem. the solution that maximizes (or minimizes) the objective function is called an optimal solution.
linear programming – an example
googling for “linear programming examples” leads to millions of hits, so let me borrow an example at random from here: “A farmer has 10 acres to plant in wheat and rye. He has to plant at least 7 acres. However, he has only $1200 to spend and each acre of wheat costs $200 to plant and each acre of rye costs $100 to plant. Moreover, the farmer has to get the planting done in 12 hours and it takes an hour to plant an acre of wheat and 2 hours to plant an acre of rye. If the profit is $500 per acre of wheat and $300 per acre of rye how many acres of each should be planted to maximize profits?”
the decisions the farmer needs to make are: how many acres of wheat to plant? how many acres of rye to plant? let us call these x and y respectively.
so what values can x and y take?
since we know that he has only 10 acres, it is clear that x+y must be no more than 10.
the problem says that he has to plant at least 7 acres. we have two choices: we can be good students and write down the constraint “x+y >= 7” or we can be good practitioners and demand to know more about the origins of this constraint (i’m sure every OR professional of long standing has scars to show from the times when they failed to ask that question.)
the budget constraint implies that 200x + 100y <= 1200. again, should we not be asking why this farmer cannot borrow money if doing so will increase his returns?
finally, the time constraint translates into x + 2y <= 12. can he not employ farm-hands to increase his options?
the non-negativity constraints (x, y >= 0) are often forgotten. in the absence of these constraints, the farmer could plant a negative amount of rye because doing so would seem to get him more land, more money, and more time. clearly, this is practically impossible.
as you will see if you were to scroll down that page, these inequalities define a triangular region in the x,y plane. all points on that triangle and its interior represent feasible solutions: i.e., if you were to pick a point, say (5,2), it means that the farmer plants 5 acres of wheat and 2 acres of rye. it is easy to confirm that this represents no more than 10 acres, no less than 7 acres, no more than $1200 and no more than 12 hours. but is this the best solution? or is there a better point within that triangle?
this is where the objective function helps. the objective is to maximize the profit earned, i.e., maximize 500x + 300y. from among all the points (x,y) in that triangle, which one has the highest value for 500x + 300y?
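to make this concrete, here is the farmer’s problem fed to a solver. this is a minimal sketch using scipy’s linprog (one freely available option; the modeling languages and solvers discussed below do the same at industrial scale). linprog minimizes, so the profit coefficients are negated.

```python
# the farmer's problem as a linear program (scipy assumed available)
from scipy.optimize import linprog

c = [-500, -300]          # maximize 500x + 300y, so minimize the negation
A_ub = [[1, 1],           # acreage:            x +   y <= 10
        [-1, -1],         # at least 7 acres: -(x +   y) <= -7
        [200, 100],       # budget:          200x + 100y <= 1200
        [1, 2]]           # time:               x +   2y <= 12
b_ub = [10, -7, 1200, 12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # -> [4. 4.] 3200.0: plant 4 acres of each
```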
this is the essence of linear programming. LPs are a subset of problems that are called mathematical programs.
real life isn’t always lp
in practice, not all mathematical programs are equally hard. as we saw above, if all the constraints and the objective function are linear in the decision variables and if the decision variables can take on any real value, we have a linear program. this is the easiest class of mathematical programs. linear programming models can be used to describe, sometimes approximately, a large number of commercially interesting problems like supply chain planning. modeling languages and packages like OPL, GAMS, AMPL, etc. can be used to model such problems without having to know much programming. solvers like CPLEX can solve problems with millions of decision variables and constraints and produce an optimal solution in reasonable time. lately, a number of open source solvers (e.g., GLPK) have been growing in capability and competing with the commercial packages.
integer programming problems constrain the solution to specific discrete values. while the blue lines represent the “feasible region”, the solution is only allowed to take on values represented by the red dots. this makes the problem significantly more difficult. image via wikipedia
in many interesting commercial problems, the decision variables are required to take on discrete values. for example, a sortie that carries 1/3 of a passenger from point a to point b and transports the other 2/3 on a second flight from point a to point b would not work in practice. a helicopter that makes 0.3 of a landing at point c and 0.7 of a landing at point d is equally impractical. these variables have to be restricted to integer values. such problems are called integer programming problems. (there is a special class of problems in which the decision variables are required to be 0 or 1; such problems are called 0-1 programming problems.) integer programming problems are surprisingly hard to solve. such problems occur routinely in scheduling problems as well as in any problem that involves discrete decisions. commercial packages like CPLEX include a variety of sophisticated techniques to find good (although not always optimal) solutions to such problems. what makes these problems hard is the reality that the solution time for such problems grows exponentially with the size of the problem.
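to make the jump from LP to IP concrete, here is a minimal branch-and-bound sketch, the core idea inside solvers like CPLEX (which add cutting planes, heuristics, and far smarter branching on top). it assumes scipy and integral variable bounds: it repeatedly solves LP relaxations and branches on a fractional variable.

```python
# minimal branch-and-bound for: maximize c @ x, A_ub @ x <= b_ub, x integer
# (an illustrative sketch, not production code)
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    best_val, best_x = -math.inf, None
    stack = [list(bounds)]
    while stack:
        bnds = stack.pop()
        res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub, bounds=bnds)
        if not res.success or -res.fun <= best_val:
            continue                       # infeasible, or pruned by bound
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                       # all integral: a new incumbent
            best_val, best_x = -res.fun, res.x
            continue
        i = frac[0]                        # branch: x_i <= floor or x_i >= ceil
        lo, hi = bnds[i]
        down, up = list(bnds), list(bnds)
        down[i] = (lo, math.floor(res.x[i]))
        up[i] = (math.ceil(res.x[i]), hi)
        stack.extend([down, up])
    return best_val, best_x

# reusing the farmer's data from above (whose optimum happens to be integral,
# so the very first relaxation settles it):
# branch_and_bound([500, 300], A_ub, b_ub, [(0, None), (0, None)])
```

note how the stack can double at every branching step: that is the exponential growth described above, made visible.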
another class of interesting commercial problems involves non-linear constraints and/or objective functions. such problems occur routinely in situations such as refinery planning, where the dynamics of the process cannot be described (even approximately) with linear functions. some non-linear problems are relatively easy because they are guaranteed to have a unique minimum (or maximum). such well-behaved problems are easy to solve because one can always move along an improving path and find the optimal solution. when the functions involved are non-convex, you could have local minima (or maxima) that are worse than the global minimum (or maximum). such problems are relatively hard because short-sighted algorithms could find a local minimum and get stuck in it.
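a tiny demonstration of getting stuck (the function below is made up purely for illustration): a gradient-based search settles into whichever minimum sits at the bottom of the basin it starts in.

```python
# f has two minima: a global one near x = -1.30 and a local one near x = 1.13
from scipy.optimize import minimize

f = lambda x: x[0]**4 - 3 * x[0]**2 + x[0]

for x0 in (-2.0, 2.0):
    res = minimize(f, [x0])               # default quasi-Newton descent
    print(f"start {x0:+.1f} -> x = {res.x[0]:+.3f}, f = {res.fun:.3f}")
# start -2.0 -> x = -1.301, f = -3.514   (the global minimum)
# start +2.0 -> x = +1.131, f = -1.070   (stuck in the local minimum)
```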
fortunately for us, the helicopter scheduling problem had no non-linear effects (at least none that we accounted for in our model). unfortunately for us, the discrete constraints were themselves extremely hard to deal with. as we wrote down the formulation on paper, it became quickly apparent that the sheer size and complexity of the problem was beyond the capabilities of the IBM PC-XT that we had at our disposal. after kicking this idea around for a bit, we abandoned this approach.
resorting to heuristics
we decided to resort to a heuristic approach, i.e., an approach that used a set of rules to find good solutions to the problem. the approach we took involved the enumeration of all possible paths on a search tree and then an evaluation of those paths to find the most efficient one. for example, if the sortie was required to start at point A and drop off m1 men at point B and m2 men at point C, the helicopter could
leave point A with the m1 men and proceed to point B, or
leave point A with the m2 men and proceed to point C, or
leave point A with the m1 men and some of the m2 men and proceed to point B, or
leave point A with the m1 men and some of the m2 men and proceed to point C, or
. . .
if we were to select the first possibility, it would drop off the m1 men and then consider all the options available to it (return to A for the m2 men? fly to point D to refuel?)
we would then traverse this tree enumerating all the paths and evaluating them for their total cost. finally, we would pick the “best” path and publish it to the radio operator.
at first, this may seem ridiculous. the explosion of possibilities meant that this tree was daunting.
there were several ways around this problem. firstly, we never really explicitly enumerated all possible paths. we built out the possibilities as we went, keeping the best solution found so far and replacing it whenever we found a better one. although the number of possible paths that a helicopter could fly in the course of a sortie was huge, there were simple rules that directed the search in promising directions so that the algorithm could quickly find a “good” sortie. once a complete sortie had been found, the algorithm could use it to prune searches down branches that held no promise of a better solution. the trick was to tune the search direction and prune the tree without eliminating any feasible possibilities. of course, aggressive pruning would speed up the search but could end up eliminating good solutions. similarly, good rules to direct the search could help find good solutions quickly but could defer searches in non-obvious directions. since we were limited in time, the search tree was never completely searched; if the rules were poor, good solutions could be pushed so late in the search that they were never found, at least not in time to be implemented.
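in skeleton form, such a search looks something like this (all the callback names are hypothetical; the real rules for ordering moves and bounding the remaining cost were the heart of the system):

```python
# depth-first enumeration with pruning: keep the best complete sortie found
# so far, and abandon any partial path that cannot possibly beat it.
def search(state, path, cost, best, expand, is_complete, bound):
    if cost + bound(state) >= best[0]:
        return                               # prune: cannot beat incumbent
    if is_complete(state):
        best[0], best[1] = cost, list(path)  # a new incumbent sortie
        return
    for nxt, move, move_cost in expand(state):   # promising moves first
        path.append(move)
        search(nxt, path, cost + move_cost, best, expand, is_complete, bound)
        path.pop()

# driver: best = [float("inf"), None]; search(start, [], 0.0, best, ...)
```

the trade-off described above lives in bound() and expand(): a bound() that never over-estimates the remaining cost makes pruning safe, while an aggressive one speeds up the search at the risk of discarding the true optimum.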
one of the nice benefits of this approach was that it allowed the radio operator to lock down the first few steps in the sortie and leave the computer to continue to search for a good solution for the remainder of the sortie. this allowed the optimizer to continue to run even after the sortie had begun. this bought the algorithm precious time. allowing the radio operator the ability to override also had the added benefit of putting the user in control in case what the system recommended was infeasible or undesirable.
notice that this approach is quite far from mathematical programming. there is no guarantee of an optimal solution (unless one can guarantee that pruning was never too aggressive and that we exhaustively searched the tree, neither of which could be guaranteed in practical cases). nevertheless, this turned out to be quite an effective strategy because it found a good solution quickly and then tried to improve on the solution within the time it was allowed.
traditional operations research vs. artificial intelligence
this may be a good juncture for an aside: the field of optimization has traditionally been the domain of operations researchers (i.e., applied mathematicians and industrial engineers). even though the field of artificial intelligence in computer science has been the source of many techniques that effectively solve many of the same problems as operations research techniques do, OR-traditionalists have always tended to look askance at their lowly competitors due to the perceived lack of rigour in the AI techniques. this attitude is apparent in the wikipedia article too: after listing all the approaches that are born from mathematical optimization, it introduces “non-traditional” methods with a somewhat off-handed “Here are a few other popular methods:” i find this both amusing and a little disappointing. there have been a few honest attempts at bringing these two fields together but a lot more can be done (i believe). it would be interesting to see how someone steeped in the AI tradition would have approached this problem. perhaps many of the techniques for directing the search and pruning the tree are specific instances of general approaches studied in that discipline.
if there is a moral to this angle of our off-shore adventures, it is this: when approaching an optimization problem, it is tempting to shoot for the stars by going down a rigorous path. often, reality intrudes. even when making technical choices, we need to account for the context in which the software will be used, how much time there is to solve the problem, what are the computing resources available, and how it will fit into the normal routine of work.
other articles in this series
this article is the fourth in a series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the links leads to a piece that dwells on one particular aspect.
Dr. Narayan Venkatasubramanyan has spent over two decades applying a rare combination of quantitative skills, business knowledge, and the ability to think from first principles to real-world business problems. He currently consults in several areas including supply chain and health care management. As a Fellow at i2 Technologies, he tackled supply chain problems in areas as diverse as computer assembly, semiconductor manufacturing, consumer goods, steel, and automotive. Prior to that, he worked with several airlines on their aircraft and crew scheduling problems. He topped off his days at IIT-Bombay and IIM-Ahmedabad with a Ph.D. in Operations Research from the University of Wisconsin-Madison.
He is presently based in Dallas, USA, and travels extensively all over the world in the course of his consulting assignments. You can also find Narayan on LinkedIn at: http://www.linkedin.com/in/narayan3rdeye
organizational dynamics
most discussions of optimization tend to focus on the technical details of problem formulation, algorithm design, the use of commercially available software, implementation details, etc. a fundamental point gets lost in that approach to this topic. in this piece, we will focus on that point: organizational readiness for change.
the introduction of optimization into the decision-making process almost always requires a change in that process. processes exist in the context of an organization. as such, when introducing change of this nature, organizations need to be treated much the same way a doctor would treat an organ recipient. careful steps need to be taken to make sure that the organization is receptive to change. before the change is introduced, the affected elements in the organization need to be made aware of the need for it. also, the organization’s “immune system” needs to be neutralized while the change is introduced. the natural tendency of any organization to attack change and frustrate the change agent needs to be foreseen and planned for.
the structure of the client’s project organization is critical. in my experience, every successful implementation of optimization has required support at 3 levels within the client organization:
a project needs “air cover” from the executive level.
at the project level, it needs a champion who will serve as the subject-matter expert, evangelist, manager, and cheerleader.
at the implementation level, it needs a group of people who are intimately familiar with the inner workings of the existing IT infrastructure.
let me elaborate on that with specific emphasis on the first two:
an executive sponsor is vital to ensuring that the team is given the time and resources it needs to succeed even as changes in circumstances cause high-level priorities to change. during the gestation period of a project — a typical project tends to take several months — the project team needs the assurance that their budget will be safe, the priorities that guide their work will remain largely unchanged, and the team as a whole will remain free of distractions.
a project champion is the one person in the client organization whose professional success is completely aligned with the success of the project. he/she stands to get a huge bonus and/or a promotion upon the success of the project. such a person keeps the team focused on the deliverable, keeps the executive sponsor armed with all the information he/she needs to continue to make the case for the project, and keeps all affected parties informed of impending changes, in short, an internal change agent. in order to achieve this, the champion has to be from the business end of the organization, not from the IT department.
unfortunately, most projects tend to focus on the third of these elements. strength in the implementation team alone will not save a project that lacks a sponsor or a champion.
ONGC’s HAL Dhruv Helicopters on sorties off the Mumbai coast. Image by Premshree Pillai via Flickr
it could be argued that executive sponsorship for this project came from the highest possible level. i heard once that our project had been blessed by the managing directors of the two companies. unfortunately, their involvement didn’t extend beyond that. neither managing director helped shape the project organization for success.
who was our champion? there was one vitally important point that i mentioned in passing in the original narrative: the intended users of the system were radio operators. they reported to an on-shore manager in the electronics & telecommunication department. in reality, their work was intimately connected to the production department, i.e., the department that managed the operations in the field. as such, they were effectively reporting to the field production supervisor. the radio operators worked very much like the engineers in the field: they worked all day every day for 14 days at a time and then went home for the next 2 weeks. each position was manned by two radio operators — more about them later — who alternately occupied the radio room. as far as their helicopter-related role was concerned, they were expected to do the best they could to keep operations running as smoothly as possible. their manager, the person who initiated the project, had no direct control over the activities of the radio operators. meanwhile, the field production supervisor was in charge of maintaining the efficient flow of oil out of the field. the cost of helicopter operations was probably a minuscule fraction of the picture they viewed. because no one bore responsibility for the efficiency of helicopter usage, no one in the client organization really cared about the success of our project. unfortunately, we were neither tasked nor equipped to deal with this problem (although that may seem odd considering that there were two fresh MBAs on the team).
in hindsight, it seems like this project was ill-structured right from the beginning. the project team soldiered on in the face of these odds, oblivious to the fact that we’d been dealt a losing hand. should the final outcome have ever been a surprise?
other articles in this series
this article is the third in a series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the links below leads to a piece that dwells on one particular aspect.
(PuneTech is honored to have Dr. Narayan Venkatasubramanyan, an Optimization Guru and one of the original pioneers in applying Optimization to Supply Chain Management, as our contributor. I had the privilege of working closely with Narayan at i2 Technologies in Dallas for nearly 10 years.
For Dr. Narayan Venkatasubramanyan’s detailed bio, please click here.
This is the second in a series of articles that we will publish once a week for a month. The first one was an ‘overview’ case study of optimization. Click here for the full series.)
it is useful to think of a decision-support system as consisting of 4 distinct layers:
data layer
visibility layer
predictive/simulation layer
optimization layer
the job of the data layer is to capture all the data that is relevant and material to the decision at hand and to ensure that this data is correct, up-to-date, and easily accessible. in our case, this would include master/static data such as the map of the field, the operating characteristics of the helicopter, etc., as well as dynamic data such as the requirements for the sortie, ambient conditions (wind, temperature), etc. this may seem rather obvious at first sight, but a quick reading of the case study shows that we had to revisit the data layer several times over the course of the development of the solution.
as the name implies, the visibility layer provides visibility into the data in a form that allows a human user to exercise his/her judgment. very often, a decision-support system requires no more than just this layer built on a robust data layer. for example, we could have offered a rather weak form of decision support by automating the capture of dynamic data and presenting to the radio operator all the data (both static and dynamic), suitably filtered to incorporate only parts of the field that are relevant to that sortie. he/she would be left to chart the route of the helicopter on a piece of paper, possibly checking off requirements on the screen as they are satisfied. even though this may seem trivial, it is important to note that most decision-support systems in everyday use are rather lightweight pieces of software that present relevant data to a human user in a filtered, organized form. the human decision-maker takes it from there.
the predictive/simulation layer offers an additional layer of help to the human decision-maker. it has the intelligence to assess the decisions made (tentatively) by the user but offers no active support. for instance, a helicopter scheduling system that offers this level of support would present the radio operator with a screen on which the map of the field and the sortie’s requirements are depicted graphically. through a series of mouse-clicks, the user can decide whom to pick up, where to fly to, whether to refuel, etc. the system supports the user by automatically keeping track of the weight of the payload (passenger+fuel) and warning the user of violations, using the wind direction to compute the rate of fuel burn, warning the user of low-fuel conditions, monitoring whether crews arrive at their workplace on time, etc. in short, the user makes decisions, the system checks constraints and warns of violations, and provides a measure of goodness of the solution. few people acknowledge that much of corporate decision-making is at this level of sophistication. the widespread use of microsoft excel is clear evidence of this.
the optimization layer is the last of the layers. it wrests control from the user and actively recommends decisions. it is obvious that the effectiveness of optimization layer is vitally dependent on the data layer. what is often overlooked is that the acceptance of the optimization layer by the human decision-maker often hinges on their ability to tweak the recommendations in the predictive layer, even if only to reassure themselves that the solution is correct. often, the post-optimization adjustments are indispensable because the human decision-maker knows things that the system does not.
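to make the layering concrete, here is a schematic sketch in code (all class and field names are illustrative, not from any real system); each layer builds on the one below, and only the topmost one recommends decisions.

```python
from dataclasses import dataclass, field

@dataclass
class DataLayer:
    platforms: dict = field(default_factory=dict)     # static: map, distances
    helicopters: dict = field(default_factory=dict)   # static: seats, speed
    requirements: list = field(default_factory=list)  # dynamic: today's moves

class VisibilityLayer:
    """present filtered, organized data; the human decides everything."""
    def __init__(self, data: DataLayer):
        self.data = data
    def relevant(self, field_name):
        return [r for r in self.data.requirements if r["field"] == field_name]

class PredictiveLayer(VisibilityLayer):
    """assess human-proposed decisions: check constraints, warn of violations."""
    def violations(self, route, seat_limit=12):
        return [leg for leg in route if leg["passengers"] > seat_limit]

class OptimizationLayer(PredictiveLayer):
    """actively recommend decisions; only as good as the data layer below."""
    def recommend(self, field_name):
        raise NotImplementedError  # the search/solver plugs in here
```

the inheritance chain is just shorthand for the fact that each layer presupposes the ones beneath it; as discussed further below, building only the bottom and top layers skips the middle two at one’s peril.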
the art (and science) of modeling
the term “decision-support system” may seem a little archaic but i will use it here because my experience with applying optimization has been in the realm of systems that recommend decisions, not ones that execute them. there is always human intervention that takes the form of approval and overrides. generally speaking, this is a necessary step. the system is never all-knowing. as a result, its view of reality is limited, possibly flawed. these limitations and flaws are reflected in its recommendations.
this invites the question: if there are known limitations and flaws in the model, why not fix them?
this is an important question. the answer to this is not nearly as obvious as it may appear.
before we actually construct a model of reality, we must consciously draw a box around that portion of reality that we intend to include in the model. if the box is drawn too broadly, the model will be too complex to be tractable. if the box is drawn too tightly, vital elements of the problem are excluded from the model. it is rare to find a decision problem in which we can strike a perfect compromise, i.e., draw a box that includes all aspects of the problem without the problem becoming computationally intractable.
unfortunately, it is hard to teach the subtleties of modeling in a classroom. in an academic setting, it is hard to wrestle with the messy job of making seemingly arbitrary choices about what to leave in and what to exclude. therefore, most students of optimization enter the real world with the impression that the process of modeling is quick and easy. on the contrary, it is at this level that most battles are won or lost.
note: the term modeling is going to be unavoidably overloaded in this context. when i speak of models, students of operations research may immediately think in terms of mathematical equations. those models are still a little way down the road. at this point, i’m simply talking about the set of abstract interrelationships that characterize the behaviour of the system. some of these relationships may be too complex to be captured in a mathematical model. as a result, the mathematical model is yet another level removed from reality.
consider our stumbling-and-bumbling approach to modeling the helicopter scheduling problem. we realized that the problem we faced wasn’t quite a text-book case. our initial approach was clearly very narrow. once we drew that box, our idealized world was significantly simpler than the real world. our world was flat. our helicopter never ran out of fuel. the amount of fuel it had was never so much that it compromised its seating capacity. it didn’t care which way the wind was blowing. it didn’t care how hot it was. in short, our model was far removed from reality. we had to incorporate each of these effects, one by one, because their exclusion made the gap between reality and model so large that the decisions recommended by the model were grossly unrealistic.
it could be argued that we were just a bunch of kids who knew nothing about helicopters, so trial-and-error was the only approach to determining the shape of the box we had to draw.
not true! here’s how we could have done it differently:
if you were to examine what we did in the light of the four-layer architecture described above, you’d notice that we really only built two of the four: the data layer and the optimization layer. this is a tremendously risky approach, an approach that has often led to failure in many other contexts. it must be acknowledged that optimization experts are rarely experts in the domain that they are modeling. nevertheless, by bypassing the visibility and predictive layers, we had sealed off our model from the eyes of people who could have told us about the flaws in it.
each iteration of the solution saw us expanding the data layer on which the software was built. in addition to expanding that data layer, we had to enhance the optimization layer to incorporate the rules implicit in the new pieces of data. here are the steps we took (a toy sketch of the resulting search state follows this list):
we added the fuel capacity and consumption rate of each helicopter to the data layer. and modified the search algorithm to “remember” the fuel level and find its way to a fuel stop before the chopper plunged into the arabian sea.
we added the payload limit to the data layer. and further modified the search algorithm to “remember” not to pick up too many passengers too soon after refueling or risk plunging into the sea with 12 people on board.
we captured the wind direction in the data layer and modified the computation of the distance matrix used in the optimization layer.
we captured the ambient temperature as well as the relationship between temperature and maximum payload in the data layer. and we further trimmed the options available to the search algorithm.
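in code terms, each of these iterations widened the state that the search had to carry at every node of the tree. a toy sketch (field names and numbers are illustrative, not from the original system; the wind, iteration 3, changed the distance matrix rather than the state, so it does not appear here):

```python
from dataclasses import dataclass

@dataclass
class ChopperState:
    position: str
    fuel_kg: float    # iteration 1: divert to a fuel base before running dry
    passengers: int   # iteration 2: seats vs. weight right after topping up

    def can_reach(self, distance_km, burn_kg_per_km=2.0):
        # iteration 1 rule: never start a leg the remaining fuel can't finish
        return self.fuel_kg >= distance_km * burn_kg_per_km

    def payload_limit_kg(self, temperature_c, base_kg=1000.0):
        # iteration 4 rule: hotter air, less lift. the linear relationship is
        # assumed here; a real system would use the aircraft's performance tables.
        return base_kg - 10.0 * max(0.0, temperature_c - 25.0)
```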
we could have continued down this path ad infinitum. at each step, our users would have “discovered” yet another constraint for us to include. back in those days, ongc used to charter several different helicopter agencies. i remember one of the radio operators telling me that some companies were sticklers for the rules while others would push things to the limit. as such, a route was feasible or not depending on whether the canadian company showed up or the italian one did! should we have incorporated that too in our model? how is one to know?
this question isn’t merely rhetorical. the incorporation of a predictive/simulation layer puts the human decision-maker in the driver’s seat. if we had had a simulation layer, we would have quickly learned the factors that were relevant and material to the decision-making process. if the system didn’t tell the radio operator which way the wind was blowing, he/she would have immediately complained because it played such a major role in their choice. if the system didn’t tell him/her whether it was the canadian or the italian company and he didn’t ask, we would know it didn’t matter. in the absence of that layer, we merrily rushed into what is technically the most challenging aspect of the solution.
implementing an optimization algorithm is no mean task. it is hugely time-consuming, but that is really the least of the problems. optimization algorithms tend to be brittle in the following sense: a slight change in the model can require a complete rewrite of the algorithm. it is but human that once one builds a complex algorithm, one tends to want the model to remain unchanged. one becomes married to that view of the world. even in the face of mounting evidence that the model is wrong, one tends to hang on. in hindsight, i would say we made a serious mistake by not architecting the system to validate the correctness of the box we had drawn before we rushed ahead to building an optimization algorithm. in other words, if we had built the solution systematically, layer by layer, many of the surprises that caused us to swing wildly between jubilation and depression would have been avoided.
other articles in this series
this article is the second in a series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the links below leads to a piece that dwells on one particular aspect.
articles in this series:
optimization: a case study
architecture of a decision-support system (this article)
optimization and organizational readiness for change
optimization: a technical overview
(This is the first in a series of articles that we will publish once a week for a month. For the full series of articles, click here.)
the following entry was prompted by a request for an article on the topic of “optimization” for publication in punetech.com, a website co-founded by amit paranjape, a friend and former colleague. for reasons that may have something to do with the fact that i’ve made a living for a couple of decades as a practitioner of that dark art known as optimization, he felt that i was best qualified to write about the subject for an audience that was technically savvy but not necessarily aware of the applications of optimization. it took me a while to overcome my initial reluctance: is there really an audience for this? after all, even my daughter feigns disgust every time i bring up the topic of what i do. after some thought, i accepted the challenge, as long as i could take a slightly unusual approach to a “technical” topic: i decided to personalize it by rooting it in a personal-professional experience. i could then branch off into a variety of different aspects of that experience, some technical, some not so much. read on …
background
the year was 1985. i was fresh out of school, entering the “real” world for the first time. with a bachelors in engineering from IIT-Bombay and a graduate degree in business from IIM-Ahmedabad, and little else, i was primed for success. or disaster. and i was too naive to tell the difference.
for those too young to remember those days, 1985 was early in rajiv gandhi‘s term as prime minister of india. he had come in with an obama-esque message of change. and change meant modernization (he was the first indian politician with a computer terminal situated quite prominently in his office). for a brief while, we believed that india had turned the corner, that the public sector companies in india would reclaim the “commanding heights” of the economy and exercise their power to make india a better place.
CMC was a public sector company that had inherited much of the computer maintenance business in india after IBM was tossed out in 1977. quickly, they broadened well beyond computer maintenance into all things related to computers. that year, they recruited heavily in IIM-A. i was one of an unusually large number of graduates who saw CMC as a good bet.
not too long into my tenure at CMC, i was invited to meet with a mid-level manager in the electronics & telecommunications department of the oil and natural gas commission of india (ONGC). the challenge he posed us was simple: save money by optimizing the utilization of helicopters in the bombay high oilfield.
the problem
the bombay high oilfield is about 100 miles off the coast of bombay (see map). back then, it was a collection of about 50 oil platforms, divided roughly into two groups, bombay high north and bombay high south.
(on a completely unrelated tangent: while writing this piece, i wandered off into searching for pictures of bombay high. i stumbled upon the work of captain nandu chitnis, ex-navy now ONGC, biker, amateur photographer … who i suspect is a pune native. click here for a few of his pictures that capture the outlandish beauty of an offshore oil field.)
movement of personnel between platforms in each of these groups was managed by a radio operator who was centrally located.
all but three of these platforms were unmanned. this meant that the people who worked on these platforms had to be flown out from the manned platforms every morning and brought back to their base platforms at the end of the day.
at dawn every morning, two helicopters flew out from the airbase in juhu, in northwestern bombay. meanwhile, the radio operator in each field would get a set of requirements of the form “move m men from platform x to platform y”. these requirements could be qualified by time windows (e.g., need to reach y by 9am, or not available for pick-up until 8:30am) or priority (e.g., as soon as possible). each chopper would arrive at one of the central platforms and get its instructions for the morning sortie from the radio operator. after doing its rounds for the morning, it would return to the main platform. at lunchtime, it would fly lunchboxes to the crews working at unmanned platforms. for the final sortie of the day, the radio operator would send instructions that ensured that all the crews were returned safely to their home platforms before the chopper was released to return to bombay for the night.
the challenge for us was to build a computer system that would optimize the use of the helicopter. the requirements were ad hoc, i.e., there was no daily pattern to the movement of men within the field, so the problem was different every day. it was believed that the routes charted by the radio operator were inefficient. given the amount of fuel used in these operations, an improvement of 5% over what they did was sufficient to result in a payback period of 4-6 months for our project.
this was my first exposure to the real world of optimization. a colleague of mine — another IIM-A graduate — and i threw ourselves at this problem. later, we were joined by yet another guy, an immensely bright one who could make the lowly IBM PC-XT — remember, this was the state of the art at that time — do unimaginable things. i couldn’t have asked to be a member of a team better suited to this job.
the solution
we collected all the static data that we thought we would need. we got the latitude and longitude of the on-shore base and of each platform (degrees, minutes, and seconds) and computed the distance between every pair of points on our map (i think we even briefly flirted with the idea of correcting for the curvature of the earth but decided against it, perhaps one of the few wise moves we made). we got the capacity (number of seats) and cruising speed of each of the helicopters.
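for illustration, here is roughly what the two choices look like in code: a planar approximation (the simplification described above) and the haversine formula (the curvature correction that was considered and dropped). coordinates are assumed to be already converted from degrees/minutes/seconds to decimal degrees.

```python
import math

def planar_km(lat1, lon1, lat2, lon2):
    # flat-earth approximation: adequate over a field ~100 miles across
    km_per_deg = 111.32
    dx = (lon2 - lon1) * km_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * km_per_deg
    return math.hypot(dx, dy)

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance: the correction for the curvature of the earth
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * r * math.asin(math.sqrt(a))
```

over legs of a few tens of kilometres the two agree to within a few tens of metres, which is why skipping the correction was indeed one of the few wise moves.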
we collected a lot of sample data of actual requirements and the routes that were flown.
we debated the mathematical formulation of the problem at length. we quickly realized that this was far harder than the classical “traveling salesman problem”. in that problem, you are given a set of points on a map and asked to find the shortest tour that starts at any city and touches every other city exactly once before returning to the starting point. in our problem, the “salesman” would pick up and/or drop off passengers at each stop. the number he could pick up was constrained, which meant that he could be forced to visit a city more than once. the TSP is known to be a “hard” problem, i.e., the time it takes to solve it grows very rapidly as you increase the number of cities in the problem. nevertheless, we forged ahead. i’m not sure if we actually completed the formulation of an integer programming problem but, even before we could, we came to the conclusion that this was too hard a problem to be solved as an integer program on a first-generation desktop computer.
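(for the curious, the core of such a formulation, reconstructed here in textbook style rather than from memory, looks roughly like this: let \(x_{ij}\) count how many times the chopper flies the leg from platform \(i\) to \(j\), \(c_{ij}\) the leg distance, \(\ell_{ij}\) the number of men on board over that leg, \(Q\) the seat capacity, and \(q_i\) the net number of men to be picked up at \(i\):)

\[
\begin{aligned}
\min \quad & \sum_{i,j} c_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{j} x_{ji} = \sum_{j} x_{ij} \;\; \text{for all } i && \text{(the chopper leaves every platform it enters)} \\
& 0 \le \ell_{ij} \le Q\, x_{ij} && \text{(men ride only on flown legs, at most } Q \text{ at a time)} \\
& \sum_{j} \ell_{ij} - \sum_{j} \ell_{ji} = q_i && \text{(net pick-up at platform } i \text{)} \\
& x_{ij} \in \{0, 1, 2, \ldots\}
\end{aligned}
\]

(even this is a simplification: a single flow \(\ell\) would happily deliver “any” men to a destination, so a faithful model needs one such flow per origin-destination request, and it balloons from there.)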
instead, we designed and implemented a search algorithm that would apply some rules to quickly generate good routes and then proceed to search for better routes. we no longer had a guarantee of optimality but we figured we were smart enough to direct our search well and make it quick. we tested our algorithm against the test cases we’d selected and discovered that we were beating the radio operators quite handily.
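(the flavor of that algorithm, in a deliberately simplified python sketch: build a sortie greedily by always flying to the nearest platform where the chopper can usefully drop off or pick up men. everything here is illustrative, not the original code, and it omits time windows, priorities, and all the complications the rest of this story is about; the real system then searched for improvements on top of routes like these.)

```python
def greedy_sortie(start, requests, dist, seats):
    """build one sortie greedily.
    requests: (men, origin, destination) tuples.
    dist: dict-of-dicts distance matrix, dist[a][b] = flying distance.
    returns the ordered list of platforms visited."""
    here, route = start, [start]
    onboard = []                           # [men, destination] groups aboard
    pending = [list(r) for r in requests]  # mutable [men, origin, destination]
    while True:
        # drop off everyone whose destination is this platform
        onboard = [c for c in onboard if c[1] != here]
        # pick up waiting men, as many as free seats allow
        free = seats - sum(m for m, _ in onboard)
        for r in pending:
            if r[1] == here and free > 0:
                take = min(r[0], free)
                onboard.append([take, r[2]])
                r[0] -= take
                free -= take
        pending = [r for r in pending if r[0] > 0]
        if not pending and not onboard:
            return route
        # fly to the nearest platform where there is work to do
        drops = {d for _, d in onboard}
        picks = {r[1] for r in pending} if free > 0 else set()
        nxt = min(drops | picks, key=lambda p: dist[here][p])
        route.append(nxt)
        here = nxt

# a toy run: 4 men from N to P, then 3 men from P to Q (flat distances)
dist = {a: {b: 10 for b in "NPQS" if b != a} for a in "NPQS"}
print(greedy_sortie("N", [(4, "N", "P"), (3, "P", "Q")], dist, seats=12))
# -> ['N', 'P', 'Q']
```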
then came the moment we’d been waiting for: we finally met the radio operators.
they looked at the routes our program was generating. and then came the first complaint. “your routes are not accounting for refueling!”, they said. no one had told us that the sorties were long enough that you could run out of fuel halfway, so we had not been monitoring that at all!
[image: ONGC’s HAL Dhruv helicopters on sorties off the Mumbai coast. Image by Premshree Pillai via Flickr]
so we went back to the drawing board. we now added a new dimension to the search algorithm: it had to keep track of fuel and, if it was running low on fuel during the sortie, direct the chopper to one of the few fuel bases. this meant that some of the routes that we had generated in the first attempt were no longer feasible. we weren’t beating the radio operators quite as easily as before.
we went back to the users. they took another look at our routes. and then came their next complaint: “you’ve got more than 7 people on board after refueling!”, they said. “but it’s a 12-seater!”, we argued. it turned out they had a point: these choppers had a large fuel tank, so once they topped up the tank (as they always did when they stopped to refuel) they were too heavy to take a full complement of passengers. this meant that the capacity of the chopper was two-dimensional: seats and weight. on a full tank, weight was the binding constraint. as the fuel burned off, the weight constraint eased; beyond a certain point, the number of seats became the binding constraint.
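(in toy form, the two-dimensional capacity we had to model looks like this; every figure below is invented for the example, not ONGC data.)

```python
SEATS = 12             # a 12-seater
MAX_PAYLOAD_KG = 1400  # hypothetical maximum payload
PASSENGER_KG = 80      # hypothetical planning weight per passenger

def max_passengers(fuel_kg):
    """seats bind when the tank is light; weight binds when it is full."""
    by_weight = int((MAX_PAYLOAD_KG - fuel_kg) // PASSENGER_KG)
    return max(0, min(SEATS, by_weight))

print(max_passengers(fuel_kg=900))  # just after topping up: weight-limited (6)
print(max_passengers(fuel_kg=300))  # fuel burned off: seat-limited (12)
```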
we trooped back to the drawing board. “we can do this!”, we said to ourselves. and we did. remember, we were young and smart. and too stupid to see where all this was going.
in our next iteration, the computer-generated routes were coming closer and closer to the user-generated ones. mind you, we were still beating them on average, but our payback period was slowly growing.
we went back to the users with our latest and greatest solution. they looked at it. and they asked: “which way is the wind blowing?” by then, we knew not to ask “why do you care?” it turns out that helicopters always land and take off into the wind. for instance, if the chopper was flying from x to y and the wind was blowing from y to x, the setting was perfect. the chopper would take off from x in the direction of y and make a bee-line for y. on the other hand, if the wind was also blowing from x to y, it would take off in a direction away from y, do a 180-degree turn, fly toward and past y, do yet another 180-degree turn, and land. given that, it made sense to keep the chopper generally flying a long string of short hops into the wind. when it could go no further because the fuel was running low, or needed to go no further in that direction because there were no passengers on board headed that way, then and only then did it make sense to turn around and make a long hop back.
“bloody asymmetric distance matrix!”, we mumbled to ourselves. by then, we were beaten and bloodied but unbowed. we were determined to optimize these chopper routes, come hell or high water!
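(for the record, the asymmetry is easy to state in code and painful to live with in a search algorithm; a toy illustration with invented figures:)

```python
def hop_minutes(dist_nm, cruise_kt, wind_kt, into_wind, turn_penalty_min=2.0):
    """flying time for one hop. a hop flown downwind still has to land
    into the wind, so the overshoot-and-turn manoeuvre is modeled here
    as a fixed time penalty (all figures invented for illustration)."""
    ground_speed = cruise_kt - wind_kt if into_wind else cruise_kt + wind_kt
    penalty = 0.0 if into_wind else turn_penalty_min
    return 60.0 * dist_nm / ground_speed + penalty

# a short 5 nm hop is cheaper flown into the wind than downwind,
# which is why a string of short hops into the wind makes sense
print(round(hop_minutes(5, cruise_kt=120, wind_kt=20, into_wind=True), 2))   # 3.0
print(round(hop_minutes(5, cruise_kt=120, wind_kt=20, into_wind=False), 2))  # 4.14
```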
so back we went to our desks. we modified the search algorithm yet another time. by now, the code had grown so long that our program broke the limits of the editor in turbo pascal. but we soldiered on. finally, we had all of our users’ requirements coded into the algorithm.
or so we thought. we weren’t in the least bit surprised when, after looking at our latest output, they asked “was this in summer?”. we had now grown accustomed to this. they explained to us that the maximum payload of a chopper is a function of ambient temperature. on the hottest days of summer, choppers have to fly light. on a full tank, a 12-seater may now only accommodate 6 passengers. we were ready to give up. but not yet. back we went to our drawing board. and we went to the field one last time.
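(the summer effect folds into the same toy capacity model from earlier: maximum payload shrinks with ambient temperature. the derating figures below are invented; the real numbers would come from the aircraft’s performance charts.)

```python
def max_payload_kg(temp_c, base_kg=1400, derate_kg_per_c=15.0, ref_c=15):
    """hypothetical linear payload derating above a reference temperature."""
    return base_kg - max(0, temp_c - ref_c) * derate_kg_per_c

def max_passengers(fuel_kg, temp_c, seats=12, pax_kg=80):
    by_weight = int((max_payload_kg(temp_c) - fuel_kg) // pax_kg)
    return max(0, min(seats, by_weight))

print(max_passengers(fuel_kg=500, temp_c=15))  # a cool day: 11 passengers
print(max_passengers(fuel_kg=500, temp_c=40))  # peak summer: only 6
```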
in some cases, we found that the radio operators were doing better than the computer. in some cases, we beat them. i can’t say no creative accounting was involved, but we did manage to eke out a few percentage points of improvement over the manually generated routes.
epilogue
you’d think we’d won this battle of attrition. we’d shown that we could accommodate all of their requirements. we’d proved that we could do better than the radio operators. we’d taken our machine to the radio operator’s cabin on the platform and installed it there.
we didn’t realize that the final chapter hadn’t been written. a few weeks after we’d declared success, i got a call from ONGC. apparently, the system wasn’t working. no details were provided.
i flew out to the platform. i sat with the radio operator as he grudgingly input the requirements into the computer. he read off the output from the screen and proceeded with his job. after the morning sortie was done, i retired to the lounge, glad that my work was done.
a little before lunchtime, i got a call from the radio operator. “the system isn’t working!”, he said. i went back to his cabin and discovered that he was right. it wasn’t that our code had crashed; the system wouldn’t even boot. when you turned on the machine, all you got was a lone blinking cursor in the top left corner of the screen. apparently, there was some kind of catastrophic hardware failure. in a moment of uncommon inspiration, i decided to open the box. i fiddled around with the cards and connectors, closed the box, and fired it up again. and it worked!
it turned out that the radio operator’s cabin was sitting right atop the industrial-strength laundry room of the platform. every time they turned on the laundry, everything in the radio room would vibrate. there was a pretty good chance that our PC would regress to a comatose state every time they did the laundry. i then realized that this was a hopeless situation. can i really blame a user for rejecting a system that was prone to frequent and total failures?
other articles in this series
this blog entry is intended to set the stage for a series of short explorations related to the application of optimization. i’d like to share what i’ve learned over a career spent largely in the business of applying optimization to real-world problems. interestingly, there is a lot more to practical optimization than models and algorithms. each of the links below leads to a piece that dwells on one particular aspect.
Dr. Narayan Venkatasubramanyan has spent over two decades applying a rare combination of quantitative skills, business knowledge, and the ability to think from first principles to real-world business problems. He currently consults in several areas including supply chain and health care management. As a Fellow at i2 Technologies, he tackled supply chain problems in areas as diverse as computer assembly, semiconductor manufacturing, consumer goods, steel, and automotive. Prior to that, he worked with several airlines on their aircraft and crew scheduling problems. He topped off his days at IIT-Bombay and IIM-Ahmedabad with a Ph.D. in Operations Research from the University of Wisconsin-Madison.
He is presently based in Dallas, USA and travels extensively all over the world during the course of his consulting assignments. You can also find Narayan on Linkedin at: http://www.linkedin.com/in/narayan3rdeye
The other rivals in the race were L&T InfoTech and the American billionaire Wilbur Ross. This news is already being covered in great detail in all the national business media and I doubt if I can add anything new.
My thoughts here are from a Pune angle. Pune has been amongst the leading IT cities in India for a while now. Infosys and Wipro have plans underway to expand their Pune centers into their single biggest facilities. Yet, a ‘Pune-based company’ has never been in the big league!
It’s worth noting that Infosys started in Pune in the early 1980s and then moved on to Bangalore. In some sense, this void can be filled today! Tech Mahindra has had its roots in Pune for many years. Here are a couple of links that provide more information about the company profile:
Once in a while a technology feature/product really appeals to you, and you have that ‘Aha! Experience’. “Why didn’t I think of this before? This makes perfect sense!” I felt the same way when I first signed up on Linkedin in 2004. Linkedin was a fairly small, unknown networking portal back then. It surely has come a long way!
My first reaction was: here I have a way to maintain a ‘dynamic address book’. Other address books, maintained in email programs, are ‘static’. They have to be manually updated, and get out of sync when your contacts change jobs/schools/location/etc. I gradually started building my network in 2004. Over the past 4 years, I have been able to grow my network significantly through Linkedin. Various new features that greatly enhance networking capabilities have been added over this period. Features focused on tracking former colleagues and classmates were very helpful in finding old friends.
Today, I have a contact base of over 500 contacts. I extensively use Linkedin to keep track of my network (changes/updates/etc.) as well as for finding new connections. The recently introduced ‘connections update’ feature is extremely useful: you can get daily alerts about changes in your network (profile changes, network changes, etc.).
I think of a Linkedin profile as a summarized background with specific emphasis on specialties and areas of interest. The Linkedin framework, if leveraged effectively, can help you build an excellent network. [Note: To me, ‘recruitment aid/job hunting’ is a tactical objective for using Linkedin; ‘networking’ is a more strategic objective in my view.]
In this article, I will discuss the dos and don’ts of effective Linkedin usage, based on my experience. These points are specifically written for a new Linkedin user, or someone who doesn’t use it extensively but wants to learn more. I have benefited greatly from following these simple guidelines in building my network and staying in touch with it.
Dos & Don’ts
0. Maintain brief, accurate and current profile information about yourself. Clearly list your specialties and provide a brief summary.
1. Update the correct geographic location. For users in India, there is a known issue/bug with Indian zip codes and how Linkedin interprets them. E.g., entering Pune-area zip codes will end up displaying ‘Satara Area’ as your location. Similar issues exist for a few other major cities in India. Until this is fixed, I would not put a real zip code in that field (just put ‘111111’). That way, the location will show up as ‘India’.
2. Update a non-work email address on your profile.
3. I personally am not a big fan of recommendations. If you really have a good recommendation from someone senior, put it up; otherwise, don’t add recommendations just for the sake of it. Do restrict the number of recommendations.
4. Ensure that your education experience is correctly updated. This is very important.
5. As far as possible, avoid chronological gaps in your profile.
6. Do ensure that you have the correct names and tenures for your companies and educational institutes. Typos here will prevent you from effectively using the class-mates and colleagues features.
7. Avoid writing a long-winded description of your experience. Remember, a Linkedin profile is a ‘profile’ and not a detailed resume.
8. Use the ‘My Company’ and ‘My Website’ links to add information about your company websites, your blog, websites that you are involved with, etc.
9. Remember, Linkedin is a professional networking tool, and not a social networking tool. Hence keep the profile/language/messages professional.
10. After you have finished all updates, do view your profile once to check for any obvious typos or formatting errors.
11. Send invites to all key professional contacts from your address book. Use some discretion here: sending invites to people who may not recognize you (people with whom you have had limited interactions) could result in rejections.
12. First send invites to those contacts who are already signed up on Linkedin. The probability of them connecting with you is higher than for someone who is not already on Linkedin.
13. Leverage the class-mates and colleagues feature to find and link with former co-workers and school & college friends. This feature lets you send invites without having the email address. However, if you send invites to too many folks who don’t recognize you, and who end up rejecting your invite, you might end up in some Linkedin trouble! Linkedin can temporarily put a hold on sending any further invites if you accumulate too many rejections.
14. While leveraging your contacts list to connect with someone in your 2nd- or 3rd-degree network, make sure you write a brief and clear note (to your contact, as well as to the end recipient).
15. Avoid sending ‘spam invites’. In general, maintain some guidelines regarding whom you are trying to connect with. There are no hard and fast rules, but I usually try to restrict this to people I have met in the context of some business meeting/seminar/conference/networking event. I hate invitations that come from people I haven’t even met or exchanged emails with!
16. Do monitor the ‘Just Joined Linkedin’ tab. Oftentimes one of your friends/colleagues will show up on this list. You can directly send an invite to such a person if you already have their email id, and add them to your contacts list.
17. Avoid participating in too many irrelevant groups. A few relevant groups can help you in two ways: 1) I like to look at a Linkedin group as an ‘electronic T-shirt’, a way to proclaim to the world that you belong to a specific alumni or professional association; 2) it is a way to exchange discussions and establish contacts with other members of the group. I am a big fan of #1 and don’t think Linkedin is doing great with #2. The probability of connecting with someone you invite just because you are in the same group is pretty low. Also, the quality of discussions in Linkedin groups today leaves a lot to be desired compared to Google or Yahoo groups.
18. I personally prefer not to hide my contacts list from my connections. This is subjective, and I know some people who keep the list private. However, I think this creates a negative (anti-networking) perception in the minds of their contacts.
19. If you are really adding a confidential contact (e.g., a competitor) to your connections list, then keep your network private. However, the best solution would be not to add such people to your network in the first place!
20. If you actively use multiple email addresses, do enter them into your Linkedin profile. This way, if you receive an invitation on any of these email ids, it will automatically show up on your Linkedin homepage. This will also prevent Linkedin from accidentally creating two different accounts for you. If this happens, send a note to Linkedin support; I believe they can merge the two accounts.
21. Linkedin recently started providing a way to update a one-line status about your activity. Use it where it makes sense. I know some people who abuse it by changing the tag line literally every day. This can be counterproductive. Remember, this is not like a tag line on your instant messaging client.
22. Try to maintain some communication with your contacts (in some cases, even once a year is sufficient). I know many people who completely forget about some of their contacts for years after adding them.
23. However, DO NOT SPAM your network. If you want to forward or send messages that you think are relevant, do so very carefully and make sure you send them only to the subset of your network that will find them relevant!
24. Recently, Linkedin started supporting ‘Linkedin apps’. I have used the WordPress plugin thus far, and it is a very good way to selectively display article summaries from your blog on your profile. Applications like TripIt also look interesting.
25. Create a personalized name (like a login name) for your Linkedin profile. This way, your Linkedin profile will display in the www.linkedin.com/in/YOUR_USER_NAME format (e.g. www.linkedin.com/in/aparanjape) instead of some complicated alphanumeric id.
26. Make your profile ‘public’ for search engines. This way, it will show up on web searches.
27. Once in a while, it’s a good idea to back up your contacts. This can be done by exporting your contacts list into Outlook or into an Excel file.
28. Do use the new analytics feature ‘People who looked at this profile also viewed…’. It can sometimes come up with terrific suggestions!
29. ‘Who has viewed my profile’ can also provide some interesting insights from time to time.
30. ‘People you may know’ is another powerful new feature that can suggest some good potential connections.