Event Report: The Work and Impact of Bob Kahn and Vint Cerf

(This is a liveblog of the Turing100@Persistent Lecture on Bob Kahn and Vint Cerf by R. Venkateswaran, CTO of Persistent Systems. Since it is being typed as the event is happening, it is not really well structured, but should rather be viewed as a collection of bullet points of interesting things said during the talk.)

Vint Cerf and Bob Kahn

Vint Cerf: Widely known as the father of the internet. He is President of the ACM, Chief Internet Evangelist at Google, Chairman of ICANN, and holds many other influential positions. In addition to the Turing Award, he has also received the Presidential Medal of Freedom in 2005 and was elected to the Internet Hall of Fame in 2012.

Bob Kahn: Worked at AT&T Bell Labs and MIT; then, while working at BBN, he got involved with DARPA and Vint Cerf, and together they worked on packet switching networks and invented TCP and IP.

The birth of the internet: TCP and IP (the 70s and 80s).

  • The Internet:

    • The first 20 years:
      • Trusted network
      • Defense, Research and Academic network
      • Non-commercial
      • Popular apps: email, ftp, telnet
    • Next 20 years:
      • Commercial use
      • Multiple levels of ownership – increased distrust and security concerns
      • Wide range of apps: email, WWW, etc
  • What did Vint Cerf and Bob Kahn do?

    • The problem:
      • There were many packet switched networks at that time
      • But very small, limited and self-contained
      • The different networks did not talk to each other
      • Vint Cerf and Bob Kahn worked on interconnecting these networks
    • The approach

      • Wanted a very simple, and reliable interface
      • Non-proprietary solution. Standardized, non-patented, “open”
      • Each network talked its own protocol, so they wanted a protocol neutral mechanism of connecting the networks.
      • Each network had its own addressing scheme, so they had to invent a universal addressing scheme.
      • Packets (information slices) forwarded from one host to another via the “internetwork”
      • Packets sent along different routes, no guarantees of in-order delivery. Actually no guarantee of delivery
      • Packets have sequence numbers, so end point needs to reassemble them in order
    • The protocol

      • A “process header” identifies which process on the end host the packets should be delivered to. This is today called the “port”
      • Retransmissions to ensure reliable delivery, and duplicate detection
      • Flow control – to limit number of un-acknowledged packets, prevent bandwidth hogging
      • A conceptual “connection” created between the end processes (TCP), but the actual network (IP) does not know or understand this
      • Mechanism to set up and tear down the “connection” – the three-way handshake
      • These are the main contributions of their seminal paper (a small simulation of these reliability mechanisms appears after this list)
    • The Layered Network Architecture
      • Paper in 1974 defining a 4 layered network model based on TCP/IP.
      • This later became the basis of the 7 layer network architecture
    • The Internet Protocol
    • Packet-switched datagram network
    • Is the glue between the physical network and the logical higher layers
    • Key ideas:
      • Network is very simple
      • Just route the packets
      • Robust and scalable
      • Network does not guarantee anything other than best effort
        • No SLA, no guarantee of delivery, no guarantee of packet ordering
      • Dumb network, smart end-host
      • Very different from the existing, major networks of that time (the “circuit-switched” telephone networks of that time)
      • No state maintained at any node of the network
    • Advantages
      • Can accommodate many different types of protocols and technologies
      • Very scalable
    • The Transport Layer
    • UDP
      • The simplest higher-level protocol
      • Unreliable, datagram-based protocol
      • Detects errors, but no error correction
      • No reliability guarantees
      • Great for applications like audio/video (which are not too affected by packet losses) or DNS (short transactions)
    • TCP
      • Reliable service on top of the unreliable underlying network
      • Connection oriented, ordered-stream based, with congestion and flow control, bi-directional
      • State only maintained at the end hosts, not at the intermediate hosts
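
The mechanisms listed above – sequence numbers, acknowledgements, retransmission, duplicate detection and in-order reassembly – can be illustrated with a small simulation. This is only a conceptual sketch of the ideas, not real TCP: the packet format, the loss model and the perfect acknowledgements are simplifying assumptions made for this example.

```python
import random

def lossy_network(packets, loss_rate=0.3):
    """Simulates the IP layer: best effort only. Packets may be
    dropped, duplicated, or delivered out of order."""
    delivered = []
    for pkt in packets:
        if random.random() < loss_rate:
            continue                      # dropped: no guarantee of delivery
        delivered.append(pkt)
        if random.random() < 0.1:
            delivered.append(pkt)         # occasional duplicate
    random.shuffle(delivered)             # no guarantee of ordering
    return delivered

def reliable_transfer(message, chunk_size=4):
    """TCP-like sender/receiver pair: sequence numbers, retransmission
    and reassembly give an ordered, reliable stream over the lossy network."""
    chunks = [message[i:i + chunk_size] for i in range(0, len(message), chunk_size)]
    unacked = dict(enumerate(chunks))     # flow control would cap this window
    received = {}                         # receiver's reassembly buffer

    while unacked:                        # retransmit until everything is acknowledged
        for seq, data in lossy_network(list(unacked.items())):
            received[seq] = data          # duplicate detection: same seq just overwrites
        for seq in list(unacked):
            if seq in received:           # acknowledgement (assumed never lost here)
                del unacked[seq]

    # the receiver reassembles the stream in order using the sequence numbers
    return "".join(received[seq] for seq in sorted(received))

print(reliable_transfer("packets may arrive out of order, or not at all"))
```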

Internet 2.0 – Commercialization

  • The birth of the world wide web: late 80s early 90s
    • Tim Berners-Lee came up with the idea of the World Wide Web
    • 1993: Mosaic, the first graphical web browser
    • First Commercial ISP (Internet Service Provider) – Dial up internet
    • Bandwidth doubling every 6 months
    • Push for multi-media apps
  • Push for higher bandwidth and rich apps
    • Net apps (like VoIP, streaming video) demand higher bandwidth
    • Higher bandwidth enables other new applications
    • Apps: email, email with attachments, streaming video, intranets, e-commerce, ERP, Voice over Internet, Interactive Video Conferencing
  • Dumb Network no longer works
    • Single, dumb network cannot handle all these different applications
    • Next Generation Networks evolved
    • Single, packet-switched network for data, voice and video
    • But with different levels of QoS guarantees for different services
  • Clash of Network Philosophies: BellHeads vs NetHeads (mid-90s)
    • Two major approaches: the BellHeads (circuit switched Telephone background), and the NetHeads (from the IP background)
    • BellHeads philosophy: network is smart, endpoints are dumb; closed, proprietary communities; expect payment for service; per-minute charges; Control the evolution of the network; want strong regulations
    • NetHeads philosophy: network is dumb, endpoints are smart; open community; expect cheap or free services; no per-minute charges; want network to evolve organically without regulations.
    • These two worlds were merging, and there were lots of clashes
    • BellHead network example: Asynchronous Transfer Mode (ATM) network
      • Fixed sized packets over a connection oriented network
      • Circuit setup from source to destination; all packets use same route
      • Low per-packet processing at each intermediate node
      • Much higher speeds than TCP/IP (10Gbps)
      • A major challenge for the NetHeads
    • Problems for NetHeads
      • To support 10Gbps and above, each packet needs to be processed in less than 30ns, which is very difficult to do because of all the processing needed (reduce TTL, lookup destination address, manipulate headers, etc)
      • As sizes of networks increased, sizes of lookup tables increased
      • Almost ready to concede defeat
    • IP Switching: Breakthrough for NetHeads
      • Use IP routing on top of ATM hardware
      • Switch to ATM circuit switching (and bypass the routing layer) if a long-running connection is detected.
      • Late 90s, all IP networking companies started implementing variations on this concept
    • MPLS: Multi-Protocol Label Switching
      • Standard developed by IP networking companies
      • Insert a layer between the link layer and IP (considered layer 2.5)
      • Separates packet forwarding from packet routing
      • Edges of the network do the full IP routing
      • Internal nodes only forward packets, and don’t do full routing
      • Separate forwarding information from routing information, and put forwarding info in an extra header (MPLS label – layer 2.5)
      • MPLS Protocol (mid-97)
        • First node (edge; ingress LSR) determines path, inserts MPLS label header
        • Internal nodes only look at the MPLS label and forward appropriately, without doing any routing and without looking at the IP packet
        • Last node (edge; egress LSR) removes the MPLS label
        • Label switching at intermediate nodes can be implemented in hardware; significant reduction in total latency
      • MPLS is now the basis of most internet networking (a toy sketch of label switching follows this list)
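
A rough sketch of the label-swapping idea just described: the ingress router makes the one full routing decision and attaches a label, after which interior nodes only consult a small label table instead of the full routing table. The prefixes, labels and node names below are invented for illustration; real MPLS label distribution (LDP/RSVP) is not modeled.

```python
# Hypothetical sketch of label switching; labels, nodes and prefixes are invented.

# The ingress LSR makes the only full routing decision: prefix -> (first label, first hop).
INGRESS_ROUTES = {
    "10.1.": (17, "core-1"),
    "10.2.": (29, "core-1"),
}

# Interior LSRs keep only small label tables: in-label -> (out-label, next hop).
# An out-label of None means "pop the label here" (the egress LSR).
LABEL_TABLES = {
    "core-1": {17: (42, "core-2"), 29: (57, "core-2")},
    "core-2": {42: (None, "egress-A"), 57: (None, "egress-B")},
}

def forward(dest_ip):
    # Edge (ingress LSR): full IP routing lookup, then push an MPLS label.
    for prefix, (label, node) in INGRESS_ROUTES.items():
        if dest_ip.startswith(prefix):
            print(f"ingress: matched {prefix}*, pushed label {label}, send to {node}")
            break
    else:
        print("ingress: no route")
        return

    # Interior nodes: constant-time label swap, no IP routing table lookup at all.
    while label is not None:
        out_label, next_hop = LABEL_TABLES[node][label]
        print(f"{node}: swap label {label} -> {out_label}, forward to {next_hop}")
        label, node = out_label, next_hop

    print(f"{node}: label popped, deliver IP packet for {dest_ip}")

forward("10.1.5.9")
```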

Internet 3.0: The Future

End of the network centric viewpoint. (Note: These are futuristic predictions, not facts. But, for students, there should be lots of good project topics here.)

  • Problems with today’s internet
    • Support for mobility is pretty bad with TCP/IP.
    • Security: viruses, spam, bots, DDoS attacks, hacks
      • Internet was designed for co-operative use; not ideal for today’s climate
    • Multi-homing not well supported by TCP/IP
      • Change in IP address results in service disruption
      • What if you change your ISP, your machine, etc?
      • Cannot be done seamlessly
    • Network is very machine/ip centric (“Where”)
      • What is needed are people-centric networks (“Who”) and content-centric networks (“What”)
      • IP address ties together identity and location; this is neither necessary, nor desirable
  • Three areas of future research:
    • Delay Tolerant Network (DTN) Architecture
      • Whenever end-to-end delay is more than a few hundred milliseconds, various things start breaking in today’s networks
      • DTNs are characterized by:
        • Things that are not always connected to the network, for example sensor networks, gadgets, and remote locations. Another example: remote villages in Africa have a bus visiting them periodically, and that gives them internet access for a limited time every day.
        • Extremely Long Delays
        • Asymmetric Data Rates
        • High Error Rates
      • Needs a store-and-forward network
    • Content-centric Networks
      • Instead of everything being based on IP-address, how about giving unique identifiers to chunks of content, and define a networking protocol based on this
      • Strategy: let the network figure out where the content is and how to deliver it
      • Security: the content carries the authorization info, and unauthorized access is prevented
    • Software Defined Networks
      • Virtualizing the Network
      • Search the net for: “OpenFlow”
      • The hardware router only does packet forwarding, but end applications can update the routing tables of the router using the OpenFlow protocol. The app has an OpenFlow controller that sends updates to the OpenFlow agent on the hardware router (a toy sketch of this split appears after this list).
      • In the hardware/OS world, virtualization (VMware, Xen, VirtualBox) is slowly taking over; OpenFlow is a similar idea for network hardware
      • Oracle and VMware have made major acquisitions in this space recently
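
A toy illustration of the split described above: a “dumb” switch that only matches packets against a flow table, and a separate controller that decides policy and installs the rules. The rule format here is a simplification invented for this sketch, not the actual OpenFlow wire protocol.

```python
# Hypothetical SDN sketch: dumb switch + separate controller that installs flow rules.

class Switch:
    """Hardware-like element: only matches packets against its flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []          # (match_fn, action) rules, installed by the controller

    def install_rule(self, match_fn, action):
        self.flow_table.append((match_fn, action))

    def handle_packet(self, packet):
        for match_fn, action in self.flow_table:
            if match_fn(packet):
                return f"{self.name}: {action} (dst={packet['dst']})"
        return f"{self.name}: no rule, send packet to controller"

class Controller:
    """The 'application' side: decides policy and programs the switch."""
    def __init__(self, switch):
        self.switch = switch

    def apply_policy(self):
        # Policy decisions live here, not in the switch.
        self.switch.install_rule(lambda p: p["dst"].startswith("10.0."), "forward out port 1")
        self.switch.install_rule(lambda p: p["dst"] == "192.168.1.1",   "drop")

sw = Switch("edge-switch")
Controller(sw).apply_policy()
print(sw.handle_packet({"dst": "10.0.3.7"}))      # matches the forwarding rule
print(sw.handle_packet({"dst": "192.168.1.1"}))   # matches the drop rule
print(sw.handle_packet({"dst": "172.16.0.5"}))    # no rule: punted to the controller
```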

Event Report: Sham Navathe on E.F. Codd

(This is a liveblog of Sham Navathe’s lecture on E.F. Codd as part of the Turing 100 @ Persistent lecture series.)

Sham Navathe does not really need an introduction – he is famous for his book “Fundamentals of Database Systems”, written with Ramez Elmasri, which is prescribed for undergraduate database courses all over the world. His full background can be looked up on his Wikipedia page, but it is worth mentioning that Navathe is a Punekar, having been a board topper from Pune in his HSC. He regularly visits Pune since he has family here.

Today’s lecture is about E.F. Codd, a British mathematician, logician, and analyst, who was given the Turing award in 1981 for his invention of the relational database model. He is one of the 3 people to have received a Turing award for work in databases (the other two being Charlie Bachman in 1973 for introducing the concept of data structures and the network database model, and Jim Gray in 1998 for his work on database transaction processing research and technical leadership in system implementation.)

Ted Codd studied at Oxford, initially studying Chemistry, before doing a stint with the Royal Air Force and then getting a degree in Maths. He later emigrated to the US, worked at IBM, did a PhD at the University of Michigan, and finally went back to IBM. At that time, he led the development of the world’s first “multiprogramming system” – sort of an operating system.

Codd quit IBM in 1984 because he was not happy with the “incomplete implementation of the relational model.” He believed that SQL is a “convenient” and “informal” representation of the relational model. He published rules that any system must follow before it could be called a relational database management system, and complained that most commercial systems were not really relational in that sense – some were simply a thin pseudo-relational layer on top of older technology.

Invention of the Relational Model

In 1963-64, IBM developed the IMS database management system based on the hierarchical model. In 1964-65 Honeywell developed IDS, based on the network model. In 1968, Dave Childs of Michigan first proposed a set-oriented database management system. In 1969 Codd published “The derivability, redundancy, and consistency of relations stored in large databases” (IBM research report, RJ-599, 1969). This was the work that led to the seminal paper, “A Relational Model of Data for Large Shared Data Banks” (CACM, 13:6, June 1970). Another classic paper is “Extending the Relational Model to Capture More Meaning” (ACM TODS, 4:4, Dec 1979), which describes the RM/T model. He is also the inventor of the term OLAP (Online Analytical Processing).

After Codd’s proposal of the relational model, IBM was initially reluctant to commercialize the idea. Instead, Michael Stonebraker of UC-Berkeley, along with his PhD students, created INGRES, the first fully relational system. INGRES ultimately became the Postgres database, which is one of the leading open source databases in the world today. In the meantime, Relational Software Inc. brought another relational database product to the market; this ultimately became Oracle. After this, IBM invested heavily in System R, which developed the relational DBMS ideas fully. Codd was involved in the development of System R – and most of the fundamental ideas and algorithms underlying modern RDBMSs today are heavily influenced by System R.

Interesting RDBMS developments after Codd’s paper:

  • 1975: PhD students in Berkeley develop an RDBMS
  • 1976: System R first relational prototype system from IBM
  • 1979: First proposal for a cost based optimizer for database queries
  • 1981: Transactions (by Jim Gray)
  • 1981: SQL/DS, the first commercial RDBMS

The main motivation for the relational model: in DBMSs before RDBMSs, there was a heavy dependence of the program (and programmer) on the way the data is modeled, stored and navigated:

  • Ordering dependence
  • Indexing dependence
  • Access path dependence

All of this was hardcoded in the program, leading to a loss of programmer productivity due to manual optimization. Codd wanted to simplify database programming by removing these dependencies.

Codd’s fundamental insight was that freeing up the application programmer from knowing about the layout of the data on disk would lead to huge improvements in productivity. For example, in the network or hierarchical models, a data model in which a Student has a link to the Department that he is enrolled in, is very different from a model in which each Department links to all the students that are enrolled there. Depending upon which model is used, all application programs would be different, and switching from one model to another would be difficult later on. Instead, Codd proposed the relational model which would store this as the Student relation, the Department relation, and finally the Enrolment relation that connects Student and Department.

The Relational Data Model

A relation is simply an unordered set of tuples. On these relations, the following operations are permitted:

  • Permutation
  • Projection
  • Join
  • Composition

Of course, other operators from set theory can be applied to relations, but then the result will not be a relation. However, the operations given above take relations and the results are also relations. Thus, the relational operators can again be applied to the results of these operations.
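
A minimal sketch of this closure property, using two of the operators (projection and join) and the Student/Department/Enrolment example from earlier. Relations are represented as Python lists of dicts; the schemas and data are made up for illustration.

```python
# Relations as lists of dicts; schemas and data are invented for illustration.
students    = [{"sid": 1, "name": "Asha"}, {"sid": 2, "name": "Ravi"}]
departments = [{"did": 10, "dept": "CS"}, {"did": 20, "dept": "Maths"}]
enrolment   = [{"sid": 1, "did": 10}, {"sid": 2, "did": 20}]

def project(relation, attrs):
    """Projection: keep only the named attributes (duplicates collapse)."""
    seen = {tuple(row[a] for a in attrs) for row in relation}
    return [dict(zip(attrs, values)) for values in seen]

def join(r1, r2):
    """Natural join: combine rows that agree on all shared attributes."""
    common = set(r1[0]) & set(r2[0])
    return [{**a, **b} for a in r1 for b in r2
            if all(a[c] == b[c] for c in common)]

# Which department is each student enrolled in? The result is itself a relation,
# so further relational operators could be applied to it.
result = project(join(join(students, enrolment), departments), ["name", "dept"])
print(result)   # e.g. [{'name': 'Asha', 'dept': 'CS'}, {'name': 'Ravi', 'dept': 'Maths'}]
```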

He defined 3 types of relations:

  • Expressible: is any relation that can be expressed in the data model, using the data access language
  • Named: is any relation that has been given a specific name (i.e. is listed in the schema)
  • Stored: is a relation that is physically stored on disk

He also talked about 3 important properties of relations:

  • Derivability: A relation is derivable if it can be expressed in terms of the data access language (i.e. can be expressed as a sequence of relational operations)
  • Redundancy: A set of relations is called strongly redundant if one of the relations can be derived from the other relations, i.e. it is possible to write a relational operation on some of the relations of the set whose result is the same as one of the other relations. A set of relations is weakly redundant if there is a relation in that set which has a projection that is derivable from the other relations. Good database design entails that strongly redundant sets of relations should not be used, because of problems with inconsistency. However, weakly redundant relations are OK, and are used for performance purposes (materialized views).
  • Consistency / Inconsistency: Codd allowed the definition of constraints governing the data in a set of relations, and a database is said to be consistent if all the data in the database satisfies those constraints, and is said to be inconsistent if not.

In the years that followed, a bunch of IBM research reports on normalization of databases followed.

Turing Award Lecture

His talk is titled: “Relational Databases: A Practical Foundation for Productivity”. His thoughts at that time:

  • Put users in direct touch with databases
  • Increase productivity of DP professionals in developing applications
  • Concerned that the term “relational” was being misused

He points out that in the relational data model, data can be addressed symbolically, as “relation name, primary key value, attribute name”. This is much better than embedding links or positional addressing (X(i, j)).
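
A small, made-up illustration of the difference: positional addressing depends on how the data happens to be laid out, while symbolic addressing names the relation, the key and the attribute.

```python
# Positional addressing: meaning depends entirely on row/column layout (data invented).
matrix = [["Asha", "CS"], ["Ravi", "Maths"]]
dept_of_ravi = matrix[1][1]          # silently wrong if rows or columns get reordered

# Symbolic addressing: (relation name, primary key value, attribute name).
student = {1: {"name": "Asha", "dept": "CS"},
           2: {"name": "Ravi", "dept": "Maths"}}
dept_of_ravi = student[2]["dept"]    # independent of any physical ordering
print(dept_of_ravi)                  # -> Maths
```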

The relational data model encompasses structure, manipulation and integrity of data. Hence, it is a complete model, because all 3 aspects are important for data management.

Characteristics of relational systems:

  • MUST have a data sub-language that allows users to query the data using SELECT, PROJECT and JOIN operators
  • MUST NOT have user visible navigation links between relations
  • MUST NOT convey any information in the way tuples are ordered

He was worried that relational systems might not be able to give performance as good as that of non-relational systems. He talked about:

  • performance oriented data structures
  • efficient algorithms for converting user requests into optimal code

In future work, he mentioned the following:

  1. Domains and primary keys
  2. Updating join-type views
  3. Outer-joins
  4. Catalogs
  5. Design aids at logical and physical level
  6. Location and replication transparency in distributed databases
  7. Capture meaning of data
  8. Improved treatment of missing, null and inapplicable values
  9. Heterogeneous data

This was a remarkably prescient list. In the 30 years since this talk, most of this has actually happened either in commercial databases or in research labs. We have pretty much achieved #1 to #6, while #7 to #9 have seen a lot of research work but not wide commercial acceptance yet.

Concluding Thoughts

  • Relational model is a firm foundation for data management. Nothing else compares.
  • On this foundation we were able to tackle difficult problems in the areas of design, derivability, redundancy, consistency, replication as well as language issues. All of these would have been very difficult otherwise
  • Proponents of NoSQL databases as well as map-reduce/hadoop type of systems need to keep in mind that large data management cannot really be done in an ad hoc manner.
  • Codd’s RM/T model was an attempt to capture metadata management, but fell short of what was needed.

Audience Questions

Q: Why did object-oriented databases not catch on?

A: There was a lack of understanding amongst the wider community as to the best way of using object-oriented ideas for data management. OODBMS companies were not able to really educate the wider community, and hence failed. Another problem is that object-oriented DBMS systems made the data model complex, but there were no corresponding developments in the query language and optimization tools.

Q: When the relational model was developed, did they constrain themselves due to the hardware limitations of those days?

A: Codd did mention that when deciding on a set of operations for the relational model, one consideration was ‘Which of these operations can be implemented on today’s hardware’. On the other hand, there were lots of companies in the 80s which tried to implement specialized hardware for relational/DBMS. However, none of those companies really succeeded.

In the future, it is very unlikely that anyone will develop a new data model merely because of improvements in hardware and processing power. However, new operators and new ways of parallelizing them will certainly be developed.

Q: There are areas of data management like incomplete, inexact data; common sense understanding of data; deduction and inferencing capabilities. These are not really handled by today’s DBMS systems. How will these be handled in the future?

A: There have been many interesting and elegant systems proposed for these areas, but none have seen commercial success. So we’ll have to wait a while for any of these to happen.


What NVidia is up to – NVidia Tech Week Open House in Pune

(This report of a demo/event organized by NVidia in February 2012 was written by Abhijit Athavale, and was originally published on PuneChips.com, a PuneTech sister organization that focuses on semiconductor, EDA, embedded design and VLSI technology in Pune. It is reproduced here for the benefit of PuneTech readers.)

I was invited to visit the Nvidia Tech Week this past weekend (February 25-26, 2012) at their facilities in Pune. This is a great concept – getting employees to invite friends and relatives to actually see what their company is all about is very good social outreach and a fantastic marketing initiative. If more tech companies in the area do similar events once or twice a year, it will help lift the shroud of technical opaqueness around them. I think hosting similar events in area colleges will also help students realize that even VLSI/Embedded Systems Design is cool.

I was given a personal tour by Sandeep Sathe, a Sr. Development manager at Nvidia and also met with Jaya Panvalkar, Sr. Director and head of Pune facilities. There was enough to see and do at this event and unfortunately I was a bit short on time. It would have taken a good two hours for a complete walk-through, so I decided to spend more time on the GPU/HPC section though the Tegra based mobile device section was also quite impressive. It’s been a while since I actually installed a new graphics card in a desktop (actually, it’s been a while since I used a desktop), but graphics cards have come a long way! Nvidia is using standard PCI Express form factor cards for the GPU modules with on-board fans and DVI connectors.

The following are key takeaways from the demo stations I visited

GeForce Surround 2-D

Here, Nvidia basically stretches the game graphics from a single monitor to three monitors. Great for gamers as it gives a fantastic feel for peripheral vision. The game actually doesn’t have to support this. The graphics card takes care of it. The setup here is that while the gamer sits in front of the main monitor, he also sees parts of the game in his peripheral vision in two other monitors that are placed at an angle to the main monitor. I played a car rally game and the way roadside trees, objects moved from the main monitor to the peripheral vision monitors was quite fascinating.

GeForce 3-D Vision Surround

This is similar to the above, but with 3D. You can completely immerse yourself in the game. This sort of gaming setup is now forcing monitor manufacturers to develop monitors with ultra small bezel widths. I suppose at some point in the next few years, we will be able to seamlessly merge graphics from different monitors into one continuous collage without gaps.

Powerwall Premium Mosaic

Powerwall is an eight-monitor setup driven by the Quadro professional graphics engine. Two Quadro modules fit into one Quadroplex industrial PC to drive four monitors. Projectors can also be used in place of monitors to create a seamless view. The display was absolutely clear and highly detailed. The Powerwall is application transparent. Additional coolness factor – persistence data is saved so you don’t lose the image during video refresh and buffer swaps. This is most certainly a tool intended for professionals who need high quality visuals and computing in their regular work. Examples are automotive, oil and gas, and stock trading.

PhysX Engine

PhysX is a graphics engine that infuses real-time physics into games or applications. It is intended to make objects in games or simulations move as they would in real life. To me this was very disruptive, and the highlight of the show. You can read more about PhysX here. It is very clear how PhysX would change gaming. The game demo I watched had several outstanding effects: dried leaves moving away from the character as he walks through a corridor, glass breaking into millions of shards as it would in real life. Also running was a PhysX simulation demo that would allow researchers to actually calculate how objects would move in case of a flood. What was stunning was that the objects moved differently every time, as they would in real life. PhysX runs on Quadro and Tesla GPUs. It is interesting to note that Ra.One’s special effects were done using PhysX.

3D photos and movies

The next couple of demos demonstrated 3D TV and photo technology using Sony TVs and a set of desktops/laptops. Notably, the Sony 3D glasses were much more comfortable compared to others. Nvidia is working with manufacturers to create more comfortable glasses. There was also a Toshiba laptop that uses a tracking eye camera to display a 3D image to the viewer regardless of seating position, without glasses. It was interesting. However, the whole 3D landscape needs a lot of work from the industry before it can become mainstream.

Optimus

What was explained to me was that Optimus allows laptops to shut off GPUs when they are not needed. They can be woken up when high performance work is required. This is automatic and seamless, similar to how power delivery works on a Toyota Prius. This sort of technology is not new to computing – a laptop typically puts a lot of components to sleep/hibernate when not being used, but the GPU is not included.

Quadro Visualizations

This allows 2D/3D visualizations for automotive, architectural and similarly complex systems for up to one thousand users at a time. You can easily change colors, textures, views so everyone can comment and give constructive feedback. I was not sure if the design can be changed on the fly as well. Nvidia is working with ISVs like Maya and Autodesk on this.

Tesla

Tesla GPUs use chips that are meant for high performance computing and not rendering, which is different from what Nvidia typically does. The Tesla modules do not have any video ports! They are used in a heterogeneous GPU/CPU architecture that saves power. In fact, the SAGA-220 supercomputer, dubbed India’s fastest, at ISRO’s Vikram Sarabhai Space Center facility uses 400 Tesla C2070 GPUs along with 400 Intel Xeon processors. In addition to supercomputing, Tesla is very useful in 3D robotic surgery, 3D ultrasound, molecular dynamics, oil and gas, weather forecasting and many more applications.

Tegra Mobile Processor

The next few demos showcased the Tegra mobile applications processor based on ARM Cortex A9 cores. The HD quality graphics and imaging were impressive. It is clear that smartphones and tablets of today are far more powerful than desktops of yesteryear, and can support highly impressive video and audio in a very handy form factor.

In all, I had a great time. As I mentioned earlier, Nvidia along with other tech companies in Pune should hold more of these kinds of events to give technology exposure to the larger population in general. I think it is important for people to know that the stuff that makes Facebook run is the real key and that is where the coolness is.

Event Report: “Innovate or Die” – Suhas Kelkar, CTO-APAC, BMC Software

(This is a live blog of Suhas Kelkar’s talk at the SEAP Breakfast Meet. Suhas talked about his experience of building an incubator at BMC Software.)

Background

  • Suhas joined BMC Pune and was given the job of creating an innovation incubator within the company.
  • This was the second attempt at creating an incubator in BMC. A previous attempt had failed spectacularly. The previous one had been started with great fanfare, with a 100-person team, and over time it went down to 80, then 60, then nothing. With this history, Suhas started his incubator with zero employees and minimal fanfare.

On Innovation

  • Suhas defines innovation as “Ideas to Cash.” This is important. The focus on cash, i.e. revenues, was an important difference between this incubator and the previous incubator, and also other research labs in companies around the world. Invention for the sake of invention, research for the sake of research, is something that they definitely did not want to do. They wanted to ensure that everything they do has a direct or indirect revenue upside for BMC.
  • There actually exists a document called “The Oslo Manual” which is a set of guidelines for how to do innovation. It is a free PDF that anyone can download, and Suhas recommends that to anyone interested in innovation.
  • The Oslo Manual points out that innovation can happen in 4 different areas: Product, Process, Marketing and Organization. Suhas adds a 5th category of innovation: “User Experience”

The BMC Incubator

  • Why does a product company need an incubator? Product teams get bogged down by tactical improvements for existing customers, and the larger vision (beyond 12-24 months) does not get attention. Startups innovate all the time, and BMC does buy innovative companies, but then integrating them into the company is a huge overhead, and fraught with risks. It would be much more efficient to do innovation in-house if it could be made to work
  • The incubator is a separate team who can focus on these issues. It is a small team (about 25 people) compared to the 200 people in just one of the product groups that BMC has. These 25 people try various different innovative ideas, 9 out of 10 of which are bound to fail. But even that failure adds value, because it means there are 9 things that the 200-person product team does not have to try out – hence they’re shielded from dead ends and unproductive explorations.
  • The mandate for the new incubator (partially based on lessons learnt from the failure of the previous incubator) was:
    • Don’t alienate the product teams – you’ll never succeed without their help and blessings
    • Understand the base products thoroughly. Superficial understanding of the issues, toy applications, will not earn the respect of the product team
    • Frequent communication with the business teams
    • File many patents
  • The Process

    • The incubator takes inputs from:
      • The office of the CTO, which strategizes and puts together a vision. Before the incubator team existed, the office of the CTO would hand over the long-term vision and strategy to the product team, which was ill-equipped to handle it. Now, the incubator fills this gap
      • Product Business Units
      • Customers and Partners
      • Academia (what is the latest in research)
    • The idea backlog is looked at by these three teams:
      • The governance team which meets once in 6 months
      • The alignment team which meets once a quarter
      • The execution team which meets once a month
    • The outputs of the incubator are:
      • White Papers
      • Prototypes
      • Delivery
      • Patents
      • Innovation Culture
  • Challenges for an Incubator
    • How to measure innovation? Number of patents is not a good enough metric.
    • Motivation: the motivation for the incubator and the people on the team must come from within. Creating the motivation, and staying motivated, in the face of 9 failures out of every 10 ideas tried, is difficult.
    • Difficult skill set: the team needs people who are smart, intelligent architects, but also hands on developers, with ability to switch context frequently, understanding the overall BMC vision, ability to sell/market ideas internally, and most importantly they need to be technology as well as business savvy. Finding people like this is a tall order.
  • The incubator only does small projects. There are two kinds of projects: “research” and “prototype”.
    • Research projects are just 1 person for 1 week, where that person is supposed to study something for a week and come back with a report.
    • Prototype projects are just 2 or 3 people working for a maximum of 2 months to build a prototype – not necessarily a shipping product. The prototype should prove or disprove some specific hypothesis, and there is a tricky balance to be made in deciding which parts of the prototype will be “real” and which parts will be simply mocked up.

Future Directions

  • From technology incubation, they want to move to co-innovation, where they work in conjunction with product teams, and customers to innovate.
  • After that they would even like to do business incubation – where the product team is not interested in looking at an adjacent business, in which case the incubator would like to have the ability to go after that market themselves.
  • The Indian IT industry, from humble beginnings, is moving up the value chain.
    • First we were doing cost arbitrage (1990s)
    • Now we have process maturity (2000s)
    • The next step would be to get product ownership, and product management here (2010s)
    • Finally, in the 2020s we’ll be able to do innovation, incubation, entrepreneurship
    • The bottomline is that Indian IT industry should be focusing on taking on more and more Product Management responsibilities

Questions from Audience

  • Q: The incubator needs people who understand the current products thoroughly. Which means that you need to steal the stars from each product team, because you cannot really hire from outside. And obviously the product team is not willing to give up their stars. How do you solve this problem?
    • A: In general, trying to get stars from the product teams is not possible. You won’t get them, and you’ll sour the relationship with the product teams. Instead, what works is to hire the smartest outside people you can and then make them learn the product. These people are then teamed up with the right people in the product team during the ‘learning’ process. The learning process is still a bit ad hoc and we haven’t yet formalized it, but at the very least it involves doing some work hands-on.
  • Q: What do you answer when a product team asks what is the value you are adding?
    • A: We constantly worry about the value we are adding, and we pro-actively stay in touch with the product teams and keep reminding them of the value we add. If it ever happens that a product team asks what value you are bringing, you are already too late.
  • Q: How are you engaging in academia, and what else would you like to do?
    • A: Currently, we get interns from academia. This allows us to look at projects that would not get “approved” as regular projects, because “it’s just an intern project.”
  • Q: Customers are in the US. Product Managers are in the US. And you cannot innovate unless you understand customers and have close ties with the Product Managers. How do you do that sitting in India?
    • A: The head of the incubator must be the ultimate product manager, and more. First, s/he must have almost as much understanding of the market and the customers as the product manager of the actual product. In addition, s/he must have a vision beyond just what customers want, so that they are able to generate innovative ideas. Successful engagement with and understanding of Product Management is key to the success of an incubator.
  • Q: How do you ensure that the output of an incubator prototype is actually accepted by a product team, and how does the process work?
    • A: All prototype projects require buy-in from the product team and other stakeholders, agreeing tentatively that if the prototype is successful, the product team will actually put that project onto the release schedule. Once the prototype is completed, it is incorporated into the release schedule, and the 2-3 people who worked on the prototype transition into the product team temporarily.

SEAP Book Club Event Report: MindSet presented by Gireendra Kasmalkar

(This is a live-blog of the SEAP Book Club meeting that happened on 4th Feb at Sungard, Aundh. Gireendra Kasmalkar, MD & CEO of SQS India, talked about a book called “Mindset – The Psychology of Success.” The contents of this post are not directly related to technology; however, it is published on PuneTech since this was a SEAP meeting, and most of the people attending were senior members of Pune’s IT industry. Hence, we felt that it would be of interest to PuneTech readers to get an idea of what senior members of SEAP are talking about. Please note: this is a partial and incomplete account of what Gireendra talked about, and possibly has my biases. Also, since it is a live-blog, it will ramble a little and might contain errors.)

There are two different mindsets for humans: Fixed Mindset and Growth Mindset. People with a fixed mindset use events as opportunities for assessment and validation of what they’re already doing. Those with a growth mindset use events as an opportunity to learn. Thus, the potential of a person with a fixed mindset is known, whereas the potential of a person with growth mindset is not only unknown, but also unknowable.

The key difference between the fixed mindset and growth mindset is how they think about natural talent vs effort. In general, as a society, we tend to value natural talent, and effortless accomplishment. But what’s so heroic about having a gift? Effort ignites ability and turns it into accomplishment. Note: just because someone is talented and can accomplish things effortlessly, it does not mean that we should think less of them. But we shouldn’t give them more credit just because they did it effortlessly.

A person with a fixed mindset thinks that if you need to put in effort then you’re not talented. And they are terrified of putting in effort, because what if you fail even after putting in effort? Thus, failure is a setback, and they tend to blame it on someone else. On the other hand, success is about being gifted and is validation of being smart. They have a sense of entitlement. They get a thrill from doing things that are easy for them, and their self-esteem comes from being better than others.

By contrast, a person with a growth mindset thinks of effort as the main driver of success. They are terrified by the idea of not capitalizing on opportunities. Failure does hurt them, but it does not define them. It is taken as an opportunity to learn and improve. So success is about putting in effort and stretching yourself, thrills come from doing hard things, and self-esteem comes from being better than yesterday.

So, in the long term, growth mindset brings more success, and also helps you stay at the top.

Benjamin Bloom studied 120 outstanding achievers over 40 years. After 40 years of research, they concluded that it is not possible to predict the future achievement of a person from current abilities. Basically, what their research showed is that if one person can learn something, then any other person can learn the same thing given appropriate prior and current conditions of learning (except for the 2% of extremely gifted or extremely impaired people).

Not performing up to standards should be seen as an indicator for further learning.

Psychological research shows that people who are told they were brilliant become more conservative (because they want to conserve their “brilliant” image) whereas people who are praised for their effort put in more effort the next time.

Bottomline: negotiators, managers, leaders are made, not born. Any ability, including artistic ability, can be learnt. And it does not really take very long to learn.

Failure is the key to learning, and achievement, and ultimate success. J.K. Rowling, author of the Harry Potter books, gave a great commencement speech at Harvard talking about The Fringe Benefits of Failure, and the Importance of Imagination. The basic claim is that success in school/college, resulting in a well paying job, is actually a deterrent to success – because you will no longer be willing to leave your comfort zone and take risks. Nitin Deshpande of Allscripts talks about an incident from the early part of his career: a person who was considering offering a partnership to Nitin asked Nitin whether he had ever failed at anything in life, and when Nitin said that he hadn’t really failed at anything, he was told that he was not qualified to be a partner.

Final thoughts:

  • If you think: “This is hard. This is fun,” then you have a growth mindset, and you’ll do well
  • Categorize people as learners and non-learners (instead of successes and failures.)
  • A fixed mindset will limit what you can achieve with your ability, whereas a growth mindset will help you realize your full potential.
  • You can and should train yourself to get into a growth mindset

Event Report: IndicThreads Java Conference 2011

(This article about the IndicThreads Java Conference 2011 was written by Abhay Bakshi for DZone. It has been re-published here with permission for the benefit of PuneTech readers.)

Attending a conference as renowned and recognized as the Java conference by IndicThreads adds to your muscle – period. By the way, I have picked up the same thread — same tone and similar spirit — from March 2011. IndicThreads held the Q11 conference then, which I had a chance to attend and then write a short report on for DZone. If you have attended IndicThreads conferences before, your feedback is also welcome — through your blogs or through places like this report hosting page.

Now, you may ask – How Was the Environment This Time?

First and foremost, I would like to say this — you could feel the thought process from Harshad Oak (Owner – IndicThreads – Conference Organizer) all throughout the conference. When I attended the conference sessions, I could see that one presentation simply led to another one. And somehow I could also relate this fact to the earlier Q11 conference; and could see the passion that Harshad has when he arranges these events.

Just as a side note – Harshad is the first Java champion in India and he continues to serve the IT community. He is ably supported by his wife Sangeeta Oak in these endeavors. This young couple gives a lot of attention to detail for the events!

The Conference Agenda in short

The conference agenda included the following topics (Friday/Saturday — Dec 02/03):

  • The Java Report (Harshad Oak)
  • Scalability Considerations (Yogesh Deshpande)
  • PaaSing a Java EE 6 Application (Kshitiz Saxena)
  • Solr as your Search and Suggest Engine (Karan Nangru)
  • Testing Concurrent Java Programs (Sameer Arora)
  • Scala Collections: Expressivity and Brevity upgrade from Java (Dhananjay Nene)
  • REST Style Web Services – Google Protocol Buffers (Prasad Nirantar)
  • Java EE 7 Platform: Developing for the Cloud (Kshitiz Saxena – yes again! He has awesome topic coverage.)
  • Building Massively Scalable Applications with Akka (Vikas Hazrati)
  • Simplifying builds with Gradle (Saager Mhatre)
  • Using Scala for Building DSLs (Abhijit Sharma)

The presentation slides are hosted at http://j11.indicthreads.com/slides

My Thoughts on the Agenda

On the first day of the conference, I noticed that there were 7 sessions to attend on Friday and 4 more sessions on Saturday. Frankly, I thought there was some kind of mismatch in arranging these sessions. But my opinion changed as the conference went on from Friday into Saturday. The next day was intentionally kept lighter. As an attendee, I now think that your mind probably absorbs and retains more information during the initial parts of a conference. I believe that IndicThreads is getting better overall, conference after conference.

What I Wanted to Get from Each Session

I planned on getting 3 things from the sessions (that was my ROI!) — first, how the knowledge gained would apply to the business domain at my workplace; second, my personal interactions with the speaker(s) from a networking perspective; and third, how I could help Harshad and his team and provide helpful feedback. Even with events like NFJS and TSSS in the USA, I always received and offered my best to organizers Jay Zimmerman, Floyd Marinescu et al.

I should also mention that I still remember Rick Ross’ keynote speech at TSSS and how inspirational it was to many of us there. The point is that industry leaders like Harshad, Rick, Floyd (and of course some more) are doing everything to lead developers all across the world to be better IT professionals. Sometimes they pay from their own pockets to see results.

The Actual Sessions

I am not going to cover all the details from all the talks — that’s not possible. The slides are available for the entire content.

The Java Report

In the keynote speech, Harshad mentioned that things moved very rapidly after Sun was purchased by Oracle. He later encouraged participants to have a look at topics such as Java EE 6 Web Profile, JavaFX 2.0 (all Java), Java EE 7 and a few more. Harshad raised a point – do you as a Java expert look as “sexy” today as you did when Java started? The answer is “less sexy”. He also said that Java ME had not been offering many new things for quite a while.

Scalability Considerations

Yogesh covered Vertical Scaling and Horizontal Scaling, and the principles behind both techniques. He backed up his presentation with a helpful case study.

PaaSing a Java EE 6 Application

Kshitiz has worked at Sun/Oracle for the last 10 years. He explained PaaS in simpler terms. It was very important to keep things simple. The speech was well received by the audience. Just as I was putting this article together, I saw that Javalobby had published a fresh article on PaaS 2.0 — it looks quite relevant to our discussion.

Solr as Your Search and Suggest Engine

It was very good to learn from Karan about Embedded Solr Server versus Commons Http Solr Server, and the various “search” and “suggestion” cases. Karan is quite passionate about Solr.

Testing Concurrent Java Programs

I don’t develop as much concurrent Java code at work as I do some other pieces; but learning from Sameer clicked a few ideas in my mind for a business case that we have at work. We (AEGIS) do some case executions in our workflow, and ideas from concurrency can be applied to what we do. By the way, for the intense session that we had with Sameer, fortunately, there was a coffee break after the session. Hats off to Sameer for how much he knows about this topic.

Scala Collections – Expressivity and Brevity upgrade from Java

Although Dhananjay knew a lot, he was addressing a very specific topic, “Collections”. To me, the topic could have been broader (or been split into two sessions). Scala is a powerful language and the initial learning curve looks long for a beginner. I should mention that Dhananjay preferred IntelliJ for Scala-based development — rightfully so.

REST Style Web Services – Google Protocol Buffers

Prasad (speaker) has a background from Akron, Ohio (M.S.). He compared content negotiation techniques (JSON, XML, and Portable Binary Content) with focus on Google Protocol Buffers. His comparison of Google Protocol Buffers with Apache Avro was very apt.

Java EE 7 Platform: Developing for the Cloud

Kshitiz explained the terms IaaS, PaaS and SaaS. There are vendors other than Sun that offer PaaS support — but standards are lacking. He explained Java EE 7’s focus on PaaS – elasticity, which has progressed from single-node implementation, to multi-node multi-instance clustering, to SLA-driven elasticity. Refer to the slides for more details.

Building Massively Scalable Applications with Akka

Vikas writes for InfoQ. He said that if you wanted to learn Akka, then you needed to keep in mind that Akka was designed to make developers’ lives easier by addressing concurrency, scalability and fault-tolerance in applications. The founder of Akka is Jonas Boner, and I found Jonas’ article on Akka hosted by Javalobby at this page. As per Vikas, Akka is good for event-based systems, whereas Hadoop is good for batch-based systems.

Simplifying Builds (Build Scripts) with Gradle

An excellent slide presentation and visual illustrations by Saager. He corrected the name of the topic to “Simplifying build scripts..”. He compared Gradle with Ant and Maven, and mentioned that Gradle describes builds with only as much text as is absolutely necessary.

Using Scala for Building DSLs

This was the only session where there were no questions from the audience! From Abhijit’s (the speaker’s) angle, it was a bit of an uncomfortable feeling; but I later mentioned to him that the presentation was so straightforward (note – not an easy compilation) and neatly arranged that the questions were answered even before they were asked. I recommend – just download the presentation, and you will get to see what I mean. Good to learn about Scala in this domain.

Every session was a little over an hour. And all speakers covered their sessions very well.

Past Reviews of IndicThreads Conference on Java

Some of the celebrity authors and speakers like Arun Gupta and Vikas Hazrati have reviewed their prior Java IndicThreads conference experiences by writing articles on their respective blogs (you may access the reviews: Arun, Vikas). It is rewarding to learn from such experts in the field.

Lastly, about the Food and Quizzes and Prizes!

I believe Sangeeta made awesome choices for food at lunch and the breaks! She also put up short quizzes and announced prizes in different categories. IndicThreads has maintained the “Green” theme, and I won a prize in that category.

My Top Three Take-away Points

My top three take-away points from J11 are – rejuvenating yourself by looking at technical topics through speakers’/attendees’ eyes and adding to your knowledge, networking with experts so that you can offer your best and receive the best from them, and just knowing where the Java industry stands today.

Conclusion

There was an “Unconference” session, where everybody who participated voiced a need for the Java groups in the city to come together. I get a feeling that awareness in the industry about such conferences is increasing, and demand for such speakers and the quality offered by these conferences is going to increase in a few short years.

Harshad encourages local speakers to come out and respond to the RFPs (and participate). Those who only want to attend can also win a FREE pass to the conference! All in all, it was worth attending the Java conference by IndicThreads.

Advantage Pune Panel Discussion: Opportunities for Pune to become an Innovation Hub

These are a few quick ‘n dirty notes captured during a Panel Discussion that was held as a part of the “Global Conclave: Advantage Pune” event held in Pune yesterday, organized by Zinnov and Software Exporters Association of Pune (SEAP). The panel discussion was on the topic “Opportunities and Challenges for Pune to become an Innovation Hub”. The panelists were:

  • Bhavani Shankar from Zinnov
  • Akila Krishnakumar, head of Sungard India
  • Ashish Deshpande from Google (based in Pune)
  • Kiran Gadi, head of Motorola Mobility India
  • Omkar Nimbalkar, head of Tivoli Group, IBM India
  • Tarun Sharma, head of BMC India

Overall, a few themes that most people touched upon were these:

  • Pune isn’t just about software. It has automotive, manufacturing, sciences (for example, NCL), and other things going for it. So it is more rounded than other cities
  • Pune has great climate
  • Pune has lots of educational institutions
  • Pune is still not as crowded as Bangalore, so growth is still possible in Pune.

Overall, these are the advantages that Pune has for driving innovation.

Here are some additional interesting points made by the panelists:

  • [Akila] Sungard is probably one of the earliest Software Product MNCs to set up in Pune (back in 1993). Pune has 20% of Sungard’s global R&D strength. BFSI is the biggest market for the software sector, and hence a lot of innovation in Pune’s software industry has to happen (will happen) in this space
  • [Kiran] Our Pune center had lower attrition than other cities. This was a huge advantage.
  • [Tarun] 23% of BMC is in Pune – the largest BMC site in the world. This gives huge advantages – having many different teams in one location. This is easier to achieve than in other cities.
  • [Omkar] Pune has an advantage over Bangalore that it still has space to grow. In Bangalore, it is very difficult to find space.
  • [Tarun] Pune definitely has a better perception of quality of life compared to Bangalore. It’s still a small city compared to Bangalore – you can get anywhere in 30 minutes. And the culture and art is great.
  • [Akila] Pune and Germany have had a great relationship, because of the auto industry. Pune has the largest concentration of German companies in India. This is a great opportunity for Pune’s software industry – it needs to leverage this and grow the software market in Europe.
  • [Kiran] The great thing about the Pune Community is that all the different groups (Software Exporters Association of Pune (SEAP), PuneTech, TiE, Pune Open Coffee Club, Head Start, CSI Pune) all talk to each other and co-operate.
  • [Akila] Pune’s demographics are interesting – lower than average age, and higher than average per capita income. It is easier to find early adopters in Pune, and easier to do viral (i.e., cheap) marketing in Pune. For example, it is not a surprise that it is the gaming capital of the country.

IPMA Event Report: Market Research Using Social Media

(This is a live blog of the presentation on Market Research using Social Media, by Pinkesh Shah, for the Indian Product Managers Association (IPMA) Pune event. Since it is a live blog, it might have errors, and won’t be as well organized as an article ought to be. Please keep that in mind while reading.)

Background – Why is Market Research Important

Product Management is really about Value management. There are five parts to it:

  • Understanding Value: Understand what the customer wants / cares about
  • Creating Value: Build the Product
  • Capturing Value: Making sure that your product is appropriately priced. It is not necessary that you charge for the product immediately, or at all. You might make money somewhere else.
  • Communicating Value: Position your value proposition appropriately
  • Delivering Value: Making sure your product / value reaches the right person. Having the correct Channels.

Pinkesh Shah talking at IPMA Pune

For your next product or product feature, you will have lots of ideas. But knowing what will really be the right thing to focus on is difficult. For a successful product or feature, the following pipeline is important:

  • Market Analysis: Choosing what to build
  • Strategic Analysis: Building the product profitably
  • Building the Product: In India we are very good at this step
  • Go to Market: Marketing it Right
  • Sales Enablement: Selling Effectively

The rest of this talk will focus mostly on Market Analysis.

What does a PM do? It’s more than just requirement analysis:

  • Champions the customer’s context within the organization
  • Defines the roadmap for a product, and delivers products that customers will actually buy
  • Master orchestrator of the productization process

Market Research – An Art and a Science

Ways to do market research:

  • Surveys: very few people do surveys, and yet they are easy to do. The only difficult thing is to come up with good survey questions. But otherwise this is one of the best and most scalable techniques for market research.
  • Talking to your sales guys
  • Reading research reports from people like Gartner
  • Ethnography: watching your customers in their natural setting. In Big Bazaar there are always people standing in a corner of the store and observing customers. They spend 8 hours watching the patterns.
  • User research: Bring users in and make them go through use cases
  • Win Loss Interviews
  • Product Advisory Council: Announce a product as if it is already done. Put out a Google ad for this product that does not exist, targeted at the geography and demographics you’re interested in, and then check who clicks on it and how many people do. This gives you a good idea of whether the concept is really working. It is a very easy and cheap way of figuring out whether your product is going to work, and you can do it sitting at home in India for a product targeting anywhere in the world.

Why is social media a great tool for market research?

  • Getting real users in the real world takes a lot of effort. It is easier to reach users online: LinkedIn, Facebook, Blogger, Quora, Twitter, etc.
  • Viral propagation. Truly borderless. And impossible to achieve without social media, even if you have lots of money.
  • Asynchronous. You and the users don’t have to be in the same place at the same time, which makes it much easier.
  • Figure out who the influencers are

LinkedIn

A great resource: most people in professional settings are on LinkedIn. Hence, for product management, especially for enterprise products, it is very valuable.

It is very easy to create surveys/polls on LinkedIn, ask questions about your potential future product or features, and get responses from people all over the world, along with demographic information from LinkedIn.

You can not only get quantitative results, but also qualitative results and opinions.

In addition, you get to go back and give updates to all those who participated about what happened, what features were included, etc.

Audience Question: What about competition finding out about your product ideas / features?

Answer: This is a problem with all market research. But in most cases, the idea is not the most important part of the product, so it’s OK. If your idea really is the secret sauce, then don’t include it in your market research, but in most cases it is not.

Uservoice

If you are a product manager, you must use Uservoice.

Similarly, there is CustomerVoice, an Indian startup similar to Uservoice, but focused on India.

Facebook likes are not a good substitute for Uservoice. You need really granular feedback, which a “like” does not give.

Landing Pages

A landing page can be created within 5 minutes of having an idea. Just put up the idea and ask people to register for the beta. At this point you don’t actually have a beta, but you can decide whether to build one based on the amount of interest you generate.
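
As a rough illustration of how little is needed, here is a minimal sketch of such a “fake door” landing page that only collects beta signups. It assumes a Python/Flask app writing to a local CSV file; the product name, routes, and file name are all hypothetical.

```python
# Minimal sketch of a signup-only "fake door" landing page (assumed stack:
# Flask + a local CSV file). FooProduct, the routes and the file name are
# all hypothetical placeholders.
import csv
from datetime import datetime

from flask import Flask, request, render_template_string

app = Flask(__name__)

PAGE = """
<h1>FooProduct: launching soon</h1>
<p>Tell us where to reach you when the beta opens.</p>
<form method="post" action="/signup">
  <input name="email" type="email" placeholder="you@example.com" required>
  <button type="submit">Request beta access</button>
</form>
"""

@app.route("/")
def landing():
    # Present the idea as if the product were real.
    return render_template_string(PAGE)

@app.route("/signup", methods=["POST"])
def signup():
    # Record each signup with a timestamp; the row count is the
    # interest signal for deciding whether to build the beta at all.
    with open("beta_signups.csv", "a", newline="") as f:
        csv.writer(f).writerow([datetime.utcnow().isoformat(), request.form["email"]])
    return "Thanks! We will let you know when the beta opens."

if __name__ == "__main__":
    app.run(debug=True)
```

The number of signups you collect, and how quickly they arrive, is the interest signal on which you decide whether the beta is worth building.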

Online Ads for Validation

Think of a product. Assume that the product already exists, and create an ad for it. Put the ad out, targeted at a few important cities and sectors (e.g. Bangalore, Delhi, Pune, Chennai). See how many people click on the ad, and from which city and sector. That will give you an idea of how much interest there is in your product, and which geographies and sectors your product should target.

Do not start building a product unless you have done this.

Google Ads are good for validating a concept, but not very good for learning about the people who clicked on the ad. LinkedIn ads cost more, but provide much more detail about the click-throughs.

Analytics

Make sure you have Google Analytics installed on all your websites. It is free and gives you lots of data on who’s coming and what they’re doing.

In addition there are paid services (often fairly inexpensive) that do even better.

For example, there is an Indian startup called Wingify that allows you to do A/B testing on your website. If you don’t know what A/B testing is, find out now.
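
For readers who have not run into the term before, here is a minimal sketch of what an A/B test boils down to once a tool like Wingify has split your visitors between two page variants and counted conversions for each: compare the two conversion rates and check whether the difference is statistically meaningful. The visitor and conversion numbers below are purely hypothetical.

```python
# Minimal sketch of judging an A/B test with a two-proportion z-test.
# The visitor and conversion counts below are made up for illustration.
from math import sqrt

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """How many standard errors apart are the two conversion rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: current page, Variant B: new headline (hypothetical numbers)
z = z_score(conv_a=120, n_a=5000, conv_b=160, n_b=5000)
print(f"z = {z:.2f}")  # |z| > 1.96 roughly corresponds to 95% confidence
```

In practice the A/B testing tool does this analysis for you; the sketch is only meant to show what “variant B converts better” actually means.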

Other interesting websites/products

  • Ask Your Target Market: http://aytm.com – ask questions to specific target groups (mostly US)
  • Sprout Social: http://sproutsocial.com – get social media conversations about various keywords
  • CDC Pivotal CRM – get Twitter and other social media conversations for each customer

Parting Thought

Samuel Colt, who invented the revolver, said that his invention was one of the most important things ever. Because, he said, “God made men. I’ve enabled them to be equal.” The person without strength, money, knowledge, can still win if s/he has a revolver.

Social Media is the revolver for product management. Anyone can do it now.

Don’t let this weekend end without sending out a survey.

SEAP Book Club Meeting Report: Crucial Conversations

(Warning: this is a live-blog of the presentation, written while the event was going on. So it might have errors, might not be as well organized as an article ought to be, and I might have misrepresented the speaker. Please keep that in mind while reading.)

The Software Exporters Association of Pune (SEAP) has a book club which meets on the first Saturday of every month where one of the members presents a summary of a pre-selected book. Since the members and the presenter are usually senior managers in Pune’s IT companies, the books chosen are usually management books.

The book Crucial Conversations was presented by Nitin Deshpande, President of Allscripts India (a medical software products company with 1000+ people in Pune).

  • This book, when you’re reading it, seems like common sense, but it is not. There are lots of anecdotes that will teach you interesting things.
  • After you read this book (or any book), you cannot put all of its concepts into practice at once. Instead, pick just one area where you want to improve, and focus on it. Don’t do too much at once.
  • A crucial conversation is a conversation you have with someone which transforms a relationship, and creates a new level of bonding. This is not a conversation for someone trying to be popular – a politician. This is not a conversation where you agree with everything the other says.
  • A crucial conversation is one between two people with differing opinions, where the stakes are high and emotions run strong. You have to tackle tough issues, and the result can have a huge impact on your quality of life.
  • Anecdote: people with life-threatening diseases were split into two groups, and the authors taught crucial-conversation techniques to one of the groups over 6 sessions. At the end of one year, only 9% of this group had succumbed to the disease, while 30% of the other group had died.

  • Right motive:

    • Start a high-risk conversation with the right motive, and stay focused on that motive no matter what happens.
    • Work on ME first:
      • Often you are not clear on what you really want.
      • It is easy to fall into incorrect motivations
      • Wanting to win (no, that’s not really what you want)
      • Seeking revenge (no, that’s not what you want either)
      • Often, these desires are sub-conscious
    • Refuse to accept the sucker’s choice – the idea that there are only two ugly options
      • i.e. “I can be honest OR I have to lie”
      • Search for more possibilities
      • Identify what you really want, and what you don’t want. If you do this properly, and combine these two, you can think of many other possibilities
  • Safety: Have a conversation when both feel safe
    • If you feel safe, you can say anything. If you don’t feel safe, you start to go blind
    • Safety is an important requirement for a crucial conversation
      • Check the conditions/context around the conversation, not just the content
      • Learn to view silence and violence as signs that the other person is not feeling safe
        • Silence: Avoiding, withdrawing. Or even sarcasm and sugar-coating.
        • Violence: Controlling, labelling (e.g. “Fascist”), verbal attacking
      • Figure out what your style is: do you fall into silence or violence when you’re under stress?
      • Example: at a performance appraisal, an employee feels unsafe. So that’s not the right place for a crucial conversation. Give feedback earlier, as soon as possible
    • How to Make it Safe
      • Lack of safety comes from risk of loss of mutual purpose, or risk of loss of mutual respect
      • If either is at risk, then to fix it, do one of these:
        • Apologize when necessary
        • Contrast to fix misunderstanding
          • This is not the same as an apology
          • Here you explain what you did not mean, and contrast that with what you meant. This acts as first aid to restore safety
      • Make sure that there is a mutual purpose
        • Commit to seek a mutual purpose
        • Recognize the purpose behind the strategy
        • Invent a mutual purpose if no mutual purpose can be discovered
        • Brainstorm on strategies (what you’re going to do) once mutual purpose is established
    • Master your Emotions
      • Emotions don’t just happen – you create them
      • Something external happens. You react to that. Then you get an emotion. In other words: when something external happens, your brain tells you a story related to that event, and then you get an emotion.
      • To fix the emotion, fix the story.
      • Figure out what story you told yourself:
        • Notice your behavior: silence or violence
        • Don’t confuse the story with facts
        • Watch for three clever stories:
          • Victim: It’s not my fault
          • Villain: It’s all your fault
          • Helpless: There’s nothing I can do
        • e.g. a wife finds a credit card receipt from a nearby motel on her husband’s card. The story she tells herself is that he went there with someone else, and then she blows up at him.
      • Complete the story
        • Turn yourself from victim to actor (“Could I be contributing to it?”)
        • Turn others from villains to humans (“Why would a normal person do this?”)
        • Turn yourself from helpless into able (“What can I do now?”)
  • Listen
    • Listen sincerely to others’ facts+stories
      • Be curious, even if the other person is furious. Be patient
    • When someone is not talking, use AMPP
      • Ask to get things rolling (What’s going on?)
      • Mirror to confirm feelings (You say you’re OK, but you seem angry)
      • Paraphrase what you’ve understood
      • Prime the pump – start guessing when all else fails
    • Remember your ABC: agree when you agree, build if incomplete, compare when you differ
  • Action
    • Dialog is not decision making
    • Figure out how decisions will be taken: Command, Consult, Vote, Consensus
    • Common mistakes:
      • In case of “command”: don’t pass orders like candy. Explain why.
      • Don’t just pretend to consult. Really do it. Announce what you’re doing. Report the final decision
      • Know when a vote is needed.
    • Action: figure out who, what, when and follow-up
    • Document your work

Audience Discussion

  • Question: What if one person feels that the conversation is crucial, but the other does not? Example: I feel a conversation is crucial, but boss does not. Should we treat all conversations as crucial?
    • Audience Reactions: 1. You can’t treat every conversation as crucial, otherwise you’ll get tired. 2. A boss just has to get used to the fact that every conversation with a subordinate is crucial. 3. If the other person’s emotions are not running high (i.e. s/he does not see it as crucial), that’s actually a good thing, since things will not blow up.
  • Question: This seems like too much to learn and digest. How would you pick what are the first things to take away from here. Related: When I read books like this, I remember only 10%. How do you pick up more?
    • Audience Reactions: 1. When you read something like this, keep track of what you already know, what you’re already good at, and what are the areas where you need to improve, and pick only those to work on. 2. Don’t just read the book. Sign up to present it to someone – that way you’ll learn much more.

(The SEAP Book Club meets on the first Saturday of every month at Sungard, Aundh. If you’re interested in joining, contact Saheli Daswani saheli.daswani@softexpune.org)

IPMA Pune Event Report: Experiences in Product Management by Amit Paranjape

Product management means many different things to many different people, and is in fact quite different depending upon whether the product is new or mature, whether the company is small, medium or large, whether it is an enterprise product or a consumer product, and a host of other things. Many of the issues that product managers need to keep in mind, and the skills they need to develop, were covered in Vivek Tuljapurkar’s IPMA Pune talk, reported by PuneTech earlier.

Here are Amit’s Experiences in Product Management:

  • Early days of a company
    • Product Management is not a well-defined role, group, or even a person in a small company. The focus is only on sales and development, and the product roadmap is decided in an ad hoc fashion.
    • As number of customers, and breadth of solution increases, the ad hoc processes start to break down.
    • Product Management must be created as a layer between developers and customers, and everybody views this as bureaucracy and added overhead. It can only be done if there is strong backing from someone for the PM role. For example, the development team might get hassled by all the ad hoc requests that come from the sales organization, and will insist that a PM group be created and that all requests be channeled through PM. This is internal change management, and it takes time to settle down.
  • Roles of PM in early days
    • Create a process for written specs, well defined test cases and support for QA
    • Be a friend of the development/delivery organization and the sales organization
    • In general, build relationships with all the stakeholders
    • Take over program management of all custom development projects
    • Recruiting product managers – biggest challenge.
  • As the company gets bigger
    • As the company gets bigger, the challenges change
    • Need to start worrying about requirements of individual products vs. the product suite, and solutions
    • Worry about difference between product and solution and module
    • Most of the time, you don’t really know what you’re doing – you’re just trying to do a good job in the face of uncertainties and ambiguity.
  • Products vs. Solutions – the perennial debate in Enterprise product companies
    • A solution is something that solves a business problem of a customer. This is what sales sells to the customer. Solutions can be based on customer industries (e.g. consumer goods, automotive, finance), or it could be based on business processes (e.g. Procurement, Demand Management)
    • A product is a specific piece of technology that engineering can build and which solves some particular problem. A combination of products that work together seamlessly is a solution
    • The reason for separating out products and solutions is to ensure that a small set of products can be used to build many different solutions for various customers
  • Overall Learnings
    • A PM must be paranoid. You need to worry about everybody and everything, because whenever anything goes wrong because of anybody, it ultimately comes back to you. So keep track of what the various development teams are doing and what the potential problems are. You also need to keep track of the sales teams, what they’re promising customers, and how they’re positioning the product.
    • You need to work by influence. The people who can make your life miserable (sales, dev, etc.) don’t report to you, but still you need to make sure that they listen to you.
    • All PMs need to be entrepreneurial in their thinking – jugaad (resourceful improvisation) is needed at all times, because as far as a PM is concerned things are always broken or breaking, with problems to be solved and fires to be put out.
    • Blaming others is not the answer. Ultimately the buck stops with PM, so PM needs to solve the problem, irrespective of who or what caused the problem.
    • Relationship management is the key. If you maintain good relationships with various stakeholders, your life will be easy.
    • You are constantly in “sell” mode. You need to convince sales people to do some things, and consultants to do some things, and development to do some things.
    • In a fast growing company, where there isn’t lots of structure, be ready to temporarily take on the roles of development manager, or customer project manager as and when required
    • Make sure you do competitive research
    • Make sure you keep track of customer satisfaction levels
  • Recruiting
    • Recruiting product management people is a challenge
    • Skills required for PM in small companies are different from those required for larger companies. Small companies are ad hoc, with tactical goals, with a narrow focus, and a consultant/developer mindset. Large company PMs are process driven, worry more about long-term strategic goals, have a broader focus, and think more like sales people than developers.
    • Do not make the mistake of hiring “experienced” PMs from large companies for doing a small company PM job. This usually does not work well.