Category Archives: Featured

The best articles

Event Report: The Work and Impact of Bob Kahn and Vint Cerf

(This is a liveblog of the Turing100@Persistent Lecture on Bob Kahn and Vint Cerf by R. Venkateswaran, CTO of Persistent Systems. Since it is being typed as the event is happening, it is not really well structured, but should rather be viewed as a collection of bullet points of interesting things said during the talk.)

Vint Cerf and Bob Kahn

Vint Cerf: Widely known as the father of the internet. He is President of the ACM, Chief Internet Evangelist at Google, and Chairman of ICANN, among many other influential positions. In addition to the Turing Award, he received the Presidential Medal of Freedom in 2005 and was elected to the Internet Hall of Fame in 2012.

Bob Kahn: Worked at AT&T Bell Labs and MIT; then, while working with BBN, he got involved with DARPA and Vint Cerf, and together they worked on packet-switched networks and invented TCP and IP.

The birth of the internet: TCP and IP (1970s and 80s).

  • The Internet:

    • The first 20 years:
      • Trusted network
      • Defense, Research and Academic network
      • Non-commercial
      • Popular apps: email, ftp, telnet
    • Next 20 years:
      • Commercial use
      • Multiple levels of ownership – increased distrust and security concerns
      • Wide range of apps: email, WWW, etc
  • What did Vint Cerf and Bob Kahn do?

    • The problem:
      • There were many packet switched networks at that time
      • But very small, limited, and self-contained
      • The different networks did not talk to each other
      • Vint Cerf and Bob Kahn worked on interconnecting these networks
    • The approach

      • Wanted a very simple and reliable interface
      • Non-proprietary solution: standardized, non-patented, “open”
      • Each network spoke its own protocol, so they wanted a protocol-neutral mechanism for connecting the networks
      • Each network had its own addressing scheme, so they had to invent a universal addressing scheme
      • Packets (information slices) forwarded from one host to another via the “internetwork”
      • Packets sent along different routes, with no guarantee of in-order delivery – in fact, no guarantee of delivery at all
      • Packets have sequence numbers, so the end point needs to reassemble them in order
    • The protocol

      • A “process header” identifies which process on the end host should receive the packets. This is today called the “port”
      • Retransmissions to ensure reliable delivery, and duplicate detection
      • Flow control – to limit the number of unacknowledged packets and prevent bandwidth hogging
      • A conceptual “connection” created between the end processes (TCP), but the actual network (IP) does not know or understand this
      • Mechanism to set up and tear down the “connection” – the three-way handshake
      • These are the main contributions of their seminal paper
    • The Layered Network Architecture
      • A 1974 paper defined a 4-layer network model based on TCP/IP.
      • This later became the basis of the 7-layer OSI network architecture
    • The Internet Protocol
      • Packet-switched datagram network
      • The glue between the physical network and the logical higher layers
      • Key ideas:
        • Network is very simple
        • Just route the packets
        • Robust and scalable
        • Network does not guarantee anything other than best effort
          • No SLA, no guarantee of delivery, no guarantee of packet ordering
        • Dumb network, smart end-host
        • Very different from the existing major networks of that time (the “circuit-switched” telephone networks)
        • No state maintained at any node of the network
      • Advantages
        • Can accommodate many different types of protocols and technologies
        • Very scalable
    • The Transport Layer
      • UDP
        • The simplest higher-level protocol
        • Unreliable, datagram-based protocol
        • Detects errors, but does no error correction
        • No reliability guarantees
        • Great for applications like audio/video (which are not too affected by packet losses) or DNS (short transactions)
      • TCP
        • Reliable service on top of the unreliable underlying network
        • Connection-oriented, ordered-stream based, with congestion and flow control, bi-directional
        • State only maintained at the end hosts, not at the intermediate nodes (see the socket sketch after this list)
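
To make the TCP/UDP contrast concrete, here is a minimal sketch using Python's standard socket API (the host name and port numbers are made up for illustration). TCP's connect() triggers the three-way handshake described above, after which the kernel provides retransmission, ordering and flow control; UDP just fires independent datagrams.

    import socket

    # TCP: connection-oriented, reliable, ordered byte stream.
    # connect() performs the three-way handshake (SYN, SYN-ACK, ACK);
    # the kernel handles retransmission, ordering and flow control.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))          # hypothetical endpoint
    tcp.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = tcp.recv(4096)                    # bytes arrive in order
    tcp.close()                               # tears down the connection

    # UDP: connectionless, unreliable datagrams. No handshake, no
    # retransmission, no ordering: each sendto() is an independent
    # datagram that may be lost, duplicated or reordered.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"ping", ("example.com", 9999))  # fire and forget
    udp.close()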

Internet 2.0 – Commercialization

  • The birth of the world wide web: late 80s early 90s
    • Tim Berners-Lee came up with the idea of the world-wide web
    • 1993: Mosaic, the first graphical web browser
    • First Commercial ISP (Internet Service Provider) – Dial up internet
    • Bandwidth doubling every 6 months
    • Push for multi-media apps
  • Push for higher bandwidth and rich apps
    • Net apps (like VoIP, streaming video) demand higher bandwidth
    • Higher bandwidth enables other new applications
    • Apps: email, email with attachments, streaming video, intranets, e-commerce, ERP, Voice over Internet, Interactive Video Conferencing
  • Dumb Network no longer works
    • Single, dumb network cannot handle all these different applications
    • Next Generation Networks evolved
    • Single, packet-switched network for data, voice and video
    • But with different levels of QoS guarantees for different services
  • Clash of Network Philosophies: BellHeads vs NetHeads (mid-90s)
    • Two major approaches: the BellHeads (circuit switched Telephone background), and the NetHeads (from the IP background)
    • BellHeads philosophy: network is smart, endpoints are dumb; closed, proprietary communities; expect payment for service; per-minute charges; control the evolution of the network; want strong regulations
    • NetHeads philosophy: network is dumb, endpoints are smart; open community; expect cheap or free services; no per-minute charges; want network to evolve organically without regulations.
    • These two worlds were merging, and there were lots of clashes
    • BellHead network example: Asynchronous Transfer Mode (ATM) network
      • Fixed sized packets over a connection oriented network
      • Circuit setup from source to destination; all packets use same route
      • Low per-packet processing at each intermediate node
      • Much higher speeds than TCP/IP (10Gbps)
      • A major challenge for the NetHeads
    • Problems for NetHeads
      • To support 10Gbps and above, each packet needs to be processed in less than about 30ns (at 10 Gbps, a minimum-size 40-byte packet arrives roughly every 32ns), which is very difficult to do because of all the processing needed (decrement the TTL, look up the destination address, manipulate headers, etc)
      • As sizes of networks increased, sizes of lookup tables increased
      • Almost ready to concede defeat
    • IP Switching: Breakthrough for NetHeads
      • Use IP routing on top of ATM hardware
      • Switch to ATM circuit switching (and bypass the routing layer) if a long-running connection is detected.
      • Late 90s, all IP networking companies started implementing variations on this concept
    • MPLS: Multi-Protocol Label Switching
      • Standard developed by IP networking companies
      • Insert a layer between the link layer and IP (hence considered layer 2.5)
      • Separates packet forwarding from packet routing
      • Edges of the network do the full IP routing
      • Internal nodes only forward packets, and don’t do full routing
      • Separate forwarding information from routing information, and put forwarding info in an extra header (MPLS label – layer 2.5)
      • MPLS Protocol (mid-97)
        • First node (edge; ingress LSR) determines path, inserts MPLS label header
        • Internal nodes only look at the MPLS label and forward appropriately, without doing any routing and without looking at the IP packet
        • Last node (edge; egress LSR) removes the MPLS label
        • Label switching at intermediate nodes can be implemented in hardware; significant reduction in total latency
      • MPLS is now the basis of most internet core networking (a toy label-switching sketch follows this list)
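
To make the forwarding/routing split concrete, here is a toy Python simulation (illustrative only, not a real MPLS implementation; node names and label values are made up). The ingress edge router does the expensive routing decision once and pushes a label; core nodes then forward with a single exact-match lookup on the label.

    # Toy MPLS-style label switching (illustrative only).
    # Per-node label tables: incoming label -> (next hop, outgoing label);
    # an outgoing label of None means "pop the label" (egress edge).
    LABEL_TABLES = {
        "core1": {17: ("core2", 42)},     # swap label 17 -> 42
        "core2": {42: ("egress", None)},  # pop the label at the egress
    }

    def ingress(dst_ip):
        """Ingress LSR: one full IP routing decision, then push a label."""
        # (A real ingress would do a longest-prefix-match lookup here to
        # pick the label-switched path; we hardcode label 17 for the demo.)
        return {"label": 17, "dst": dst_ip}

    def core_forward(node, packet):
        """Core LSR: exact-match on the label only; no IP routing at all."""
        next_hop, out_label = LABEL_TABLES[node][packet["label"]]
        packet["label"] = out_label       # label swap (or pop)
        return next_hop, packet

    pkt = ingress("10.1.2.3")
    hop, pkt = core_forward("core1", pkt)  # -> core2, label swapped to 42
    hop, pkt = core_forward("core2", pkt)  # -> egress, label popped
    print(hop, pkt)                        # egress {'label': None, 'dst': '10.1.2.3'}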

Internet 3.0: The Future

End of the network centric viewpoint. (Note: These are futuristic predictions, not facts. But, for students, there should be lots of good project topics here.)

  • Problems with today’s internet
    • Support for mobility is pretty bad with TCP/IP.
    • Security: viruses, spam, bots, DDoS attacks, hacks
      • Internet was designed for co-operative use; not ideal for today’s climate
    • Multi-homing not well supported by TCP/IP
      • Change in IP address results in service disruption
      • What if you change your ISP, your machine, etc?
      • Cannot be done seamlessly
    • Network is very machine/IP-centric (“Where”)
      • What is needed are people-centric networks (“Who”) and content-centric networks (“What”)
      • IP address ties together identity and location; this is neither necessary, nor desirable
  • Three areas of future research:
    • Delay Tolerant Network (DTN) Architecture
      • Whenever end-to-end delay is more than a few hundred milliseconds, various things start breaking in today’s networks
      • DTN’s characterized by:
        • Things that are not always connected to the network – for example, sensor networks, gadgets, remote locations. Another example: remote villages in Africa have a bus visiting them periodically, which gives them internet access for a limited time every day.
        • Extremely Long Delays
        • Asymmetric Data Rates
        • High Error Rates
      • Needs a store-and-forward network
    • Content-centric Networks
      • Instead of everything being based on IP addresses, how about giving unique identifiers to chunks of content, and defining a networking protocol based on this?
      • Strategy: let the network figure out where the content is and how to deliver it
      • Security: the content carries the authorization info, and unauthorized access is prevented
    • Software Defined Networks
      • Virtualizing the Network
      • Search the net for: “OpenFlow”
      • The hardware router only does packet forwarding, but end applications can update the routing tables of the router using the OpenFlow protocol: the app has an OpenFlow controller that sends updates to the OpenFlow agent on the hardware router (a toy sketch of this split follows this list)
      • In the hardware/OS world, virtualization technologies (VMware, Xen, VirtualBox) are slowly taking over; OpenFlow is a similar idea for network hardware
      • Oracle and VMware have made major acquisitions in this space recently
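
Here is a minimal sketch of that controller/switch split in plain Python (no real OpenFlow library is used; the class and method names are made up for illustration). The “hardware” switch only matches packets against its flow table; on a miss it asks the controller, which installs a rule so that later packets are handled entirely in the fast path.

    # Toy SDN split: a dumb flow-table switch plus a controller that
    # programs it. Illustrative only; a real deployment would speak the
    # OpenFlow protocol between the controller and the switch agent.

    class FlowSwitch:
        """The 'hardware': forwards by flow-table lookup only."""
        def __init__(self, controller):
            self.controller = controller
            self.flow_table = {}               # destination -> output port

        def install_rule(self, dst, port):     # called by the controller
            self.flow_table[dst] = port

        def forward(self, packet):
            port = self.flow_table.get(packet["dst"])
            if port is None:                   # table miss: punt to controller
                return self.controller.packet_in(self, packet)
            return port                        # fast path: pure table lookup

    class Controller:
        """The application brain: decides routes, pushes rules down."""
        def __init__(self, policy):
            self.policy = policy               # destination -> port decisions

        def packet_in(self, switch, packet):
            port = self.policy[packet["dst"]]
            switch.install_rule(packet["dst"], port)   # program the switch
            return port

    ctrl = Controller(policy={"10.0.0.5": 3})
    sw = FlowSwitch(ctrl)
    print(sw.forward({"dst": "10.0.0.5"}))  # miss -> controller installs rule -> 3
    print(sw.forward({"dst": "10.0.0.5"}))  # hit -> answered from the flow table -> 3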

Turing100 Lecture: Vint Cerf + Bob Kahn – “Fathers of the Internet” – 8 Sept

Vinton Cerf and Robert Kahn invented TCP and IP, the two protocols at the heart of the internet, and are hence considered the “Fathers of the Internet”. For this and other fundamental contributions, they were awarded the Turing award in 2004.

On 8th September, R. Venkateswaran, CTO of Persistent Systems, will give a talk on the life and work of Vint Cerf and Bob Kahn as a part of the Turing 100 Lecture Series organized by Persistent in Pune on the first Saturday of every month (although this month it was shifted to the second Saturday).

In addition, this Saturday’s event will also feature a talk on “Net Neutrality: The Supply and Demand Side Perspective” by Dr. V Sridhar, a research fellow with Sasken.

About the Turing Awards

The Turing Award, named after Alan Turing and given every year, is the highest honor a computer scientist can earn. The contributions of each Turing Award winner are, arguably, among the most important topics in computer science.

About Turing 100 @ Persistent Lecture Series

This year, the Turing 100 @ Persistent Lecture Series will celebrate the 100th anniversary of Alan Turing’s birth with a monthly lecture series. Each lecture will be presented by an eminent personality from the computer science / technology community in India, and will cover the work done by one Turing Award winner.

The lecture series will feature talks on Ted Codd (Relational Databases), Vint Cerf and Robert Kahn (Internet), Ken Thompson and Dennis Ritchie (Unix), Jim Gray, Barbara Liskov, and others. Full schedule is here

This is a lecture series that anyone in the field of computer science must attend. These lectures will cover the fundamentals of computer science, and all of them are very relevant today.

Fees and Registration

This is a free event. Anyone can attend.

The event will be at Dewang Mehta Auditorium, Persistent Systems, SB Road, from 2pm to 5pm on Saturday 8th September. This event is free and open for anybody to attend. Register here

Event Report: Sham Navathe on E.F. Codd

(This is a liveblog of Sham Navathe’s lecture on E.F. Codd as part of the Turing 100 @ Persistent lecture series.)

Sham Navathe does not really need an introduction – he is famous for his book “Fundamentals of Database Systems”, co-authored with Ramez Elmasri, which is prescribed for undergraduate database courses all over the world. His full background can be looked up on his Wikipedia page, but it is worth mentioning that Navathe is a Punekar, having been a board topper from Pune in his HSC. He regularly visits Pune since he has family here.

Today’s lecture is about E.F. Codd, a British mathematician, logician, and analyst, who was given the Turing Award in 1981 for his invention of the relational database model. He is one of 3 people to have received a Turing Award for work in databases (the other two being Charlie Bachman in 1973, for introducing the concept of data structures and the network database model, and Jim Gray in 1998, for his work on database transaction processing research and technical leadership in system implementation).

Ted Codd studied at Oxford, initially studying Chemistry, before doing a stint with the Royal Air Force and then getting a degree in Maths. He later emigrated to the US, worked at IBM, did a PhD at the University of Michigan, and finally went back to IBM. At that time, he led the development of the world’s first “multiprogramming system” – a sort of operating system.

Codd quit IBM in 1984 because he was not happy with the “incomplete implementation of the relational model.” He believed that SQL was a “convenient” and “informal” representation of the relational model. He published rules that any system must follow before it could be called a relational database management system, and complained that most commercial systems were not really relational in that sense – some were simply a thin pseudo-relational layer on top of older technology.

Invention of the Relational Model

In 1963-64, IBM developed the IMS database management system based on the hierarchical model. In 1964-65 Honeywell developed IDS, based on the network model. In 1968, Dave Childs of Michigan first proposed a set-oriented database management system. In 1969 Codd published “Derivability, Redundancy and Consistency of Relations Stored in Large Data Banks” (IBM research report RJ599, 1969). This was the work that led to the seminal paper, “A Relational Model of Data for Large Shared Data Banks” (CACM, 13:6, June 1970). Another classic paper is “Extending the Database Relational Model to Capture More Meaning” (ACM TODS, 4:4, Dec 1979), which describes the RM/T model. He also coined the term OLAP (Online Analytical Processing).

After Codd’s proposal of the relational model, IBM was initially reluctant to commercialize the idea. Instead, Michael Stonebraker of UC Berkeley, along with his PhD students, created INGRES, the first fully relational system. INGRES ultimately evolved into the Postgres database, which is one of the leading open source databases in the world today. In the meantime, Relational Software Inc. brought another relational database product to market; this ultimately became Oracle. After this, IBM invested heavily in System R, which developed the relational DBMS ideas fully. Codd was involved in the development of System R, and most of the fundamental ideas and algorithms underlying modern RDBMSs are heavily influenced by System R.

Interesting RDBMS developments after Codd’s paper:

  • 1975: PhD students at Berkeley develop an RDBMS (INGRES)
  • 1976: System R, the first relational prototype system from IBM
  • 1979: First proposal for a cost-based optimizer for database queries
  • 1981: Transactions (by Jim Gray)
  • 1981: SQL/DS, the first commercial RDBMS

Two main motivations drove the relational model. In DBMSs before the relational model, there was a heavy dependence of the program (and the programmer) on the way the data was modeled, stored and navigated:

    • Ordering dependence
    • Indexing dependence
    • Access path dependence

All of this was hardcoded in the program, resulting in a loss of programmer productivity due to manual optimization. Codd wanted to simplify database programming by removing these dependencies.

Codd’s fundamental insight was that freeing up the application programmer from knowing about the layout of the data on disk would lead to huge improvements in productivity. For example, in the network or hierarchical models, a data model in which a Student has a link to the Department that he is enrolled in, is very different from a model in which each Department links to all the students that are enrolled there. Depending upon which model is used, all application programs would be different, and switching from one model to another would be difficult later on. Instead, Codd proposed the relational model which would store this as the Student relation, the Department relation, and finally the Enrolment relation that connects Student and Department.

The Relational Data Model

A relation is simply an unordered set of tuples. On these relations, the following operations are permitted:

  • Permutation
  • Projection
  • Join
  • Composition

Of course, other operators from set theory can be applied to relations, but then the result will not necessarily be a relation. However, the operations given above take relations and produce relations. Thus, the relational operators can be applied again to the results of these operations.
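
As an illustration, here is a toy sketch in Python (not Codd’s notation; the relations, tuples and attribute names are made up, echoing the Student/Department example above). A relation is a header plus a set of tuples, and projection and natural join each return another relation, so the operators compose.

    # Toy relational operators: a relation is (header, set-of-tuples).
    Student    = (("sid", "name"), {(1, "Asha"), (2, "Ravi")})
    Department = (("dept", "head"), {("CS", "Rao"), ("EE", "Iyer")})
    Enrolment  = (("sid", "dept"), {(1, "CS"), (2, "EE"), (2, "CS")})

    def project(rel, attrs):
        """Projection: keep only the named attributes."""
        header, rows = rel
        idx = [header.index(a) for a in attrs]
        return tuple(attrs), {tuple(row[i] for i in idx) for row in rows}

    def join(r, s):
        """Natural join on the attributes the two headers share."""
        (h1, rows1), (h2, rows2) = r, s
        common = [a for a in h1 if a in h2]
        header = h1 + tuple(a for a in h2 if a not in h1)
        out = set()
        for t1 in rows1:
            for t2 in rows2:
                if all(t1[h1.index(a)] == t2[h2.index(a)] for a in common):
                    out.add(t1 + tuple(v for a, v in zip(h2, t2)
                                       if a not in h1))
        return header, out

    # Which department is each student enrolled in? Join, then project;
    # the result is itself a relation, so further operators still apply.
    print(project(join(Student, Enrolment), ("name", "dept")))
    # (('name', 'dept'), {('Asha', 'CS'), ('Ravi', 'EE'), ('Ravi', 'CS')})

    # Department's head for each student: compose another join.
    print(project(join(join(Student, Enrolment), Department), ("name", "head")))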

He defined 3 types of relations:

  • Expressible: is any relation that can be expressed in the data model, using the data access language
  • Named: is any relation that has been given a specific name (i.e. is listed in the schema)
  • Stored: is a relation that is physically stored on disk

He also talked about 3 important properties of relations:

  • Derivability: A relation is derivable if it can be expressed in terms of the data access language (i.e. can be expressed as a sequence of relational operations)
  • Redundancy: A set of relations is called strongly redundant if one of the relations can be derived from the others, i.e. it is possible to write a relational operation on some of the relations of the set whose result is the same as one of the other relations. A set of relations is weakly redundant if it contains a relation with a projection that is derivable from the other relations. Good database design entails that strongly redundant sets of relations should not be used, because of problems with inconsistency. Weakly redundant relations are OK, however, and are used for performance purposes (e.g. materialized views); this is illustrated in the continuation of the sketch after this list.
  • Consistency / Inconsistency: Codd allowed the definition of constraints governing the data in a set of relations, and a database is said to be consistent if all the data in the database satisfies those constraints, and is said to be inconsistent if not.
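
Continuing the toy sketch above, strong redundancy is easy to demonstrate: a stored relation whose contents can be recomputed from the others by a relational expression is redundant, and keeping it risks inconsistency unless the system maintains it automatically (the modern materialized view).

    # Continuing the toy relational sketch: StudentDept stores data that
    # is derivable from Student and Enrolment by a join and a projection,
    # making {Student, Enrolment, StudentDept} strongly redundant.
    StudentDept = (("name", "dept"),
                   {("Asha", "CS"), ("Ravi", "EE"), ("Ravi", "CS")})

    derived = project(join(Student, Enrolment), ("name", "dept"))
    assert derived == StudentDept   # derivable, hence strongly redundant
    # Updating Enrolment without updating StudentDept would make the
    # database inconsistent -- unless the DBMS maintains StudentDept
    # automatically, which is exactly what a materialized view does.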

In the years that followed, a bunch of IBM research reports on normalization of databases followed.

Turing Award Lecture

His talk is titled: “Relational Databases: A Practical Foundation for Productivity”. His thoughts at that time:

  • Put users in direct touch with databases
  • Increase productivity of DP professionals in developing applications
  • Concerned that the term “relational” was being misused

He points out that in the relational data model, data can be addressed symbolically, as (relation name, primary key value, attribute name). This is much better than embedding links, or positional addressing (X(i, j)).
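
A tiny illustration of the difference (the data is made up): positional addressing bakes the physical layout into the program, while symbolic addressing names the relation, the key, and the attribute.

    # Positional addressing: meaning depends on the row/column layout,
    # so the program breaks if the data is physically reorganised.
    X = [["Asha", "CS"], ["Ravi", "EE"]]
    dept = X[1][1]                    # "EE" -- but only while row 1 is Ravi

    # Symbolic addressing: (relation name, primary key value, attribute name)
    Student = {1: {"name": "Asha", "dept": "CS"},
               2: {"name": "Ravi", "dept": "EE"}}
    dept = Student[2]["dept"]         # "EE" -- independent of physical layout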

The relational data model encompasses structure, manipulation and integrity of data. Hence, it is a complete model, because all 3 aspects are important for data management.

Characteristics of relational systems:

  • MUST have a data sub-language that allows users to query the data using SELECT, PROJECT and JOIN operators
  • MUST NOT have user visible navigation links between relations
  • MUST NOT convey any information in the way tuples are ordered

He was worried that relational systems might not be able to match the performance of non-relational systems. He talked about:

  • performance oriented data structures
  • efficient algorithms for converting user requests into optimal code

In future work, he mentioned the following:

  1. Domains and primary keys
  2. Updating join-type views
  3. Outer-joins
  4. Catalogs
  5. Design aids at logical and physical level
  6. Location and replication transparency in distributed databases
  7. Capture meaning of data
  8. Improved treatment of missing, null and inapplicable values
  9. Heterogeneous data

This was a remarkably prescient list. In the 30 years since this talk, most of this has actually happened either in commercial databases or in research labs. We have pretty much achieved #1 to #6, while #7 to #9 have seen a lot of research work but not wide commercial acceptance yet.

Concluding Thoughts

  • Relational model is a firm foundation for data management. Nothing else compares.
  • On this foundation we were able to tackle difficult problems in the areas of design, derivability, redundancy, consistency, replication as well as language issues. All of these would have been very difficult otherwise
  • Proponents of NoSQL databases as well as map-reduce/hadoop type of systems need to keep in mind that large data management cannot really be done in an ad hoc manner.
  • Codd’s RM/T model was an attempt to capture metadata management, but fell short of what was needed.

Audience Questions

Q: Why did object-oriented databases not catch on?

A: There was a lack of understanding amongst the wider community as to the best way of using object-oriented ideas for data management. OODBMS companies were not really able to educate the wider community, and hence failed. Another problem was that object-oriented DBMSs made the data model more complex, but there were no corresponding developments in the query language and optimization tools.

Q: When the relational model was developed, did they constrain themselves due to the hardware limitations of those days?

A: Codd did mention that when deciding on a set of operations for the relational model, one consideration was “which of these operations can be implemented on today’s hardware”. On the other hand, there were lots of companies in the 80s which tried to build specialized hardware for relational DBMSs. However, none of those companies really succeeded.

In the future, it is very unlikely that anyone will develop a new data model with improvements in hardware and processing power. However, new operators and new ways of parallelizing them will certainly be developed.

Q: There are areas of data management like incomplete, inexact data; common-sense understanding of data; deduction and inferencing capabilities. These are not really handled by today’s DBMS systems. How will this be handled in the future?

A: There have been many interesting and elegant systems proposed for these areas, but none have seen commercial success. So we’ll have to wait a while for any of these to happen.


SLP-Pune: Startup Leadership Program for Entrepreneurs

The Startup Leadership Program, a global entrepreneur discussion/training group, now has a Pune Chapter. Apply before 7th Aug here – http://www.startupleadership.com

The Startup Leadership Program (SLP) is a selective training program for entrepreneurs who are, or want to be, startup CEOs, and who want to be connected to a global network. SLP Fellows have founded over 300 breakthrough startups, including Duron Energy, Gharpay, ixigo, Innoz, Momelan, Runkeeper, SideTour, Shareaholic, Solar Junction, Ubersense, Savored, and Voicetap, and have won many awards.

The program brings together about 25 high-impact entrepreneurs in every city to coach and create the next generation of CEOs. The unique curriculum is co-designed by serial entrepreneurs, academicians, researchers, VCs, and analysts. The complete list of mentors can be seen here: http://www.startupleadership.com/main_nav/mentors-2/.

The Startup Leadership Program is a NOT FOR PROFIT entity, registered in the US and India.

What will entrepreneurs get?

  1. Avoid entrepreneurial mistakes – as you learn from your peer group (comprising entrepreneurs from different backgrounds), mentors (serial entrepreneurs, VCs, bankers, etc), and experts.
  2. Get solutions to your growth challenges – as you get feedback from a super peer group and mentors.
  3. Connect to VCs/investors and raise funding. Make real-life pitches. Understand what VCs look for.
  4. Get an understanding of term sheets, legal aspects and exiting ventures.
  5. Be part of the high-profile and high-impact SLP Global alumni network that will help you scale up.
  6. Last but not least, make friends with entrepreneurs – as everyone knows, it’s lonely at the top.

As a testimonial: from the Mumbai Chapter class of 2012-2013, 3 people raised capital (to the tune of USD 5 million in total!). One of them is part of the SLP Pune organizing team.

Commitments and pedagogy

  1. The program runs from mid-Sept to March and requires about 60 hours of time commitment on the part of the entrepreneur. Usually there is a session once every three weeks, on a Saturday.
  2. The sessions comprise brainstorming, role plays, VC pitches, HBR case studies, etc.
  3. There are usually 2-3 mentors per session. You can expect people from the angel community and VC firms (last year our chapter had mentors from Indian Angels, Mumbai Angels, Canaan Partners, IvyCap Ventures, Blume Ventures, Sequoia, Kanakia Ventures, Anand Rathi Group, Kae Capital, Nexus Capital, etc) and serial entrepreneurs!
  4. SLP is volunteer-run and not for profit. The entrepreneur will need to pay a fee of Rs. 6000, which covers the costs of food, certification and logistics.

It is observed that the SLP class usually hangs out for a beer or coffee and develops strong bonds.

Venue

For Pune, the sessions will be held at Venture Center, NCL Innovation Park.

Contact

For any details/doubts on SLP Pune, please contact Dhiraj Khot (dhirajk@gmail.com) +91-9850 682 789

Lecture on Turing Award Winner Ted Codd (Databases) by Sham Navathe – 4 Aug

Ted Codd was awarded the Turing Award in 1981 for “his fundamental and continuing contributions to the theory and practice of database management systems.” A simpler way to put it would be that Codd was given the award for inventing relational databases (RDBMS).

On 4th August, Prof. Sham Navathe of Georgia Tech, who is visiting Pune, will talk about Ted Codd’s work. This talk is a part of the Turing Awards lecture series that happens at Persistent’s Dewang Mehta Auditorium at 2pm on the first Saturday of every month this year.

About the Turing Awards

The Turing Award, named after Alan Turing and given every year, is the highest honor a computer scientist can earn. The contributions of each Turing Award winner are, arguably, among the most important topics in computer science.

About Turing 100 @ Persistent Lecture Series

This year, the Turing 100 @ Persistent lecture series will celebrate the 100th anniversary of Alan Turing’s birth with a monthly lecture series. Each lecture will be presented by an eminent personality from the computer science / technology community in India, and will cover the work done by one Turing Award winner.

The lecture series will feature talks on Ted Codd (Relational Databases), Vint Cerf and Robert Kahn (Internet), Ken Thompson and Dennis Ritchie (Unix), Jim Gray, Barbara Liskov, and others. Full schedule is here

This is a lecture series that anyone in the field of computer science must attend. These lectures will cover the fundamentals of computer science, and all of them are very relevant today.

Fees and Registration

This is a free event. Anyone can attend.

The event will be at Dewang Mehta Auditorium, Persistent Systems, SB Road, from 2pm to 5pm on Saturday 4th August. This event is free and open for anybody to attend. Register here

Activities of SEAP – the Software Exporters Association of Pune

SEAP (the Software Exporters Association of Pune), the organization consisting of the top software companies of Pune, has been very active over the last year. On 27th July SEAP had its AGM, at which Gaurav Mehra, president of SEAP, gave a report of its activities. This is a quick capture of that report – and should give an idea of the various SEAP activities in Pune.

These were the major activities of SEAP over the last year:

  • Advocacy. Represent Pune’s software companies at:
    • RAC Customs,
    • STPI, Hinjewadi
    • ESI Inspector
    • PF Office
  • Ideas Exchange and Education
    • SEAP Book Club meets on the first Saturday of every month – 10am at Sungard Aundh. 13 books have been presented so far, and this will continue
    • Breakfast series – 3rd Wednesday of every month – at Sumant Moolgaokar Auditorium, ICC, SB Road. Covers topics of interest to middle management and higher. Topics covered in the 4 sessions so far: Innovation, Security, etc.
    • Leadership Forum. 15 member companies trained on Crucial Conversations. Atyaasaa did a session on managing human resources in turbulent times.
    • SEAP Education.
  • Collaboration and Connection
    • Working with and expanding the eco-system
    • Working closely with NASSCOM to bring their events to Pune in a much more aggressive manner
    • PuneConnect (done with PuneTech, Pune Open Coffee Club, TiE Pune, and ET Now) put small Pune startups in touch with the established companies.
    • SEAP-Zinnov event.
    • SEAP’s Other collaborations with the ecosystem
      • TiE – exchange invitations and merge calendars
      • IPMA Pune hosted along with the SEAP Book Club
      • PuneTech – exchange of calendars and invitations
      • CSI Pune – exchange of invitations
  • Networking
    • SEAP Golf: Golf tournament and clinic. 50 players. (Thanks to Dell Computers.)
  • Value Added Services
    • Research, communication and partner networks
    • Creation of SEAP Associate Members – a set of companies who are “recommended” providers of products and services of interest to SEAP member companies.
    • Research and Publications: Compensation and Benefits study for Pune by Hexagon
    • Pune Advantage Study by Zinnov
  • Communication
    • Brand new SEAP website – with the help of Aadi Ventures
      • Member areas
      • Features for Colleges
      • Creation of company directories by area
    • Facebook page
    • LinkedIn Group
    • YouTube Page
  • Corporate Social Responsibility
    • Hosted Bhimthadi Jatra in SEAP member companies
    • Supported Students FUEL
  • SEAP Advisory Council:
    • Creation of a SEAP Advisory Council consisting of past SEAP presidents – Anand Deshpande, Nitin Deshpande, Abhijit Atre, and Chetan Shah – who will advise SEAP on a formal biannual basis.
  • SEAP Ambassador in Silicon Valley Area:
    • Parag Mehta of QLogic, past president of SEAP, has been formally named the “ambassador” of SEAP in Silicon Valley. He will be the evangelist for Pune and SEAP there.

PuneTech Event: Storage Technology Trends talk by Ken Boyd, IBM: 28 July

Ken Boyd, a Distinguished Engineer at IBM, who has been building high end storage products at IBM for over 25 years, is visiting Pune and will talk about his thoughts on the trends in storage technology.

On Saturday July 28, 5pm, at MCCIA, SB Road, Ken will present some of the technology trends that are shaping the design of future storage systems in IBM. Ken will also discuss the opportunities these technology trends are creating for increasing the value of future storage systems. This talk is free and open to all those who’re interested in attending.

Ken is currently the Chief NAS (Network Attached Storage) Architect at IBM, and leads IBM’s NAS division. He is a Distinguished Engineer, has been awarded the Master Inventor award, and holds over 40 patents.

Ken recently completed a two-year IBM international assignment in Israel, where he served as XIV Chief Architect. After IBM acquired XIV, an Israeli start-up company, Ken led the XIV team in defining the future architecture and system design of IBM-XIV. Ken also led the technical integration of XIV into IBM.

Ken started his IBM career after graduating from the University of Illinois, Urbana-Champaign with a B.S. degree in Computer Engineering. After beginning as an IBM logic designer, Ken held a variety of engineering and management positions in Poughkeepsie, NY before transferring to Tucson, AZ in 1987. Advancing in IBM’s storage development team in Tucson, Ken led several organizations, including hardware development, microcode development, technical support marketing, and product management. Ken made significant contributions to IBM high-end storage products, including the IBM 3990 Storage Controller, the IBM Enterprise Storage Controller (now known as the DS8000 family), and the XIV Storage System. He was promoted to IBM Director in 1993 and was named an IBM Distinguished Engineer in 2003. In July 2005 Ken received an IBM Outstanding Innovation Award for significant contributions to developing and protecting IBM Intellectual Property. Ken, named an IBM Master Inventor, holds over 40 patents and has achieved an IBM 12th Plateau Invention Achievement Award. Ken earned an M.B.A. degree from the University of Arizona and is a Senior Member of the IEEE.

This is a free event, and anybody interested in technology is free to attend.

Registration and Fees

This event is free and open for anybody to attend. Please register here

Call for Speakers: ClubHack Security Conference 2012

ClubHack is one of India’s foremost conferences on Security and is now in its 6th year. As usual, it will be on the first weekend of December (1st to 3rd) in Pune.

However, rather than focusing on just plain security and awareness of security, ClubHack is now changing its focus. Here is the motivation:

ClubHack, when it started in 2007, dreamt that people in India would wake up and start taking information security seriously. We even decided our motto: “Making Security a Common Sense”. After 5 long years, today we witness a lot of action around the country in this field; media as well as working professionals are actually taking security seriously.

The awakening has reached such an extent that today we see 5-6 similar events in India along the same lines. Hence we have now decided to entrust the task of the rest of the awakening to them and start a new journey.

From ClubHack2012 onwards, we will concentrate our energies on empowering innovation & leadership development. Having loved our domain so much, we’d continue to do this in the domain of information security only. And that coins our new motto: “Empowering Innovation & Leadership in Information Security”.

With this in mind, this year’s ClubHack is looking for speakers who can emphasize entrepreneurship in this space. So, here is a partial list of suggested topics:

  • Entrepreneurship in infosec product development
  • Research work in infosec
  • Innovation in attack vectors
  • Attacks on Cloud
  • Mobile computing
  • Malware & Botnets
  • Privacy with social networks
  • Telecom Security (3G/4G, SS7, GSM/CDMA, VoIP) and Phone Phreaking
  • Hardware, Embedded Systems and other Electronic Devices Hacking
  • War of handhelds & BYOD
  • Cyber warfare & your role
  • Open Source Intelligence (OSINT)
  • Signal Intelligence (SIGINT) – COMINT, ELINT, etc
  • Critical Infrastructure Protection
  • Security aspects in SCADA and industrial environments and “obscure” networks
  • & other general infosec domains like web, network, tools & exploits, etc.

If you would like to deliver a workshop at ClubHack2012, please write to cfp@clubhack.com to discuss the details.

Why become a speaker? In addition to helping the community, becoming well known, and meeting interesting people in this area, you also get:

  • Travel reimbursement or arrangement of economy return tickets for speakers
  • Accommodation for 2
  • Complimentary passes for the event & party for 2
  • Gift hampers & freebies

See the CFP link for more details of how to submit a proposal.

Event: Turing’s Theory of Computing – Turing 100 @ Persistent – July 7

The Turing Award, named after Alan Turing and given every year, is the highest honor a computer scientist can earn, and the contributions of each Turing Award winner are, arguably, among the most important topics in computer science. This year, the Turing 100 @ Persistent lecture series will celebrate the 100th anniversary of Alan Turing’s birth with a monthly lecture series. Each lecture will be presented by an eminent personality from the computer science / technology community in India, and will cover the work done by one Turing Award winner.

The lecture series will feature talks on Ted Codd (Relational Databases), Vint Cerf and Robert Kahn (Internet), Ken Thompson and Dennis Ritchie (Unix), Jim Gray, Barbara Liskov, and others. Full schedule is here

This is a lecture series that anyone in the field of computer science must attend. These lectures will cover the fundamentals of computer science, and all of them are very relevant today.

This lecture series kicks off this Saturday with a talk on the work of Turing himself – Turing’s Theory of Computing, by Vivek Kulkarni, Principal Architect at Persistent Systems. The full schedule of the event is:

  • Welcome Address: Dr. Anand Deshpande, CEO Persistent Systems
  • Keynote: Dr. Mathai Joseph, Advisor TCS
  • Media Presentation: Life of Alan Turing
  • Turing’s Theory of Computation: Vivek Kulkarni, Principal Architect Persistent Systems

The event will be at Dewang Mehta Auditorium, Persistent Systems, SB Road, from 2pm to 5pm on Saturday 7th July. This event is free and open for anybody to attend. Register here

What NVidia is up to – NVidia Tech Week Open House in Pune

(This report of a demo/event organized by NVidia in February 2012 was written by Abhijit Athavale, and was originally published on PuneChips.com, a PuneTech sister organization that focuses on semiconductor, EDA, embedded design and VLSI technology in Pune. It is reproduced here for the benefit of PuneTech readers.)

I was invited to visit the Nvidia Tech Week this past weekend (February 25-26, 2012) at their facilities in Pune. This is a great concept – getting employees to invite friends and relatives to actually see what their company is all about is very good social outreach and a fantastic marketing initiative. If more tech companies in the area do similar events once or twice a year, it will help lift the shroud of technical opaqueness around them. I think hosting similar events in area colleges will also help students realize that even VLSI/Embedded Systems Design is cool.

I was given a personal tour by Sandeep Sathe, a Sr. Development Manager at Nvidia, and also met with Jaya Panvalkar, Sr. Director and head of Pune facilities. There was enough to see and do at this event, and unfortunately I was a bit short on time. It would have taken a good two hours for a complete walk-through, so I decided to spend more time on the GPU/HPC section, though the Tegra-based mobile device section was also quite impressive. It’s been a while since I actually installed a new graphics card in a desktop (actually, it’s been a while since I used a desktop), but graphics cards have come a long way! Nvidia is using standard PCI Express form factor cards for the GPU modules, with on-board fans and DVI connectors.

The following are key takeaways from the demo stations I visited

GeForce Surround 2-D

Here, Nvidia basically stretches the game graphics from a single monitor to three monitors. Great for gamers, as it gives a fantastic feel for peripheral vision. The game doesn’t actually have to support this; the graphics card takes care of it. The setup here is that while the gamer sits in front of the main monitor, he also sees parts of the game in his peripheral vision on two other monitors placed at an angle to the main monitor. I played a car rally game, and the way roadside trees and objects moved from the main monitor to the peripheral-vision monitors was quite fascinating.

GeForce 3-D Vision Surround

This is similar to the above, but with 3D. You can completely immerse yourself in the game. This sort of gaming setup is now forcing monitor manufacturers to develop monitors with ultra small bezel widths. I suppose at some point in the next few years, we will be able to seamlessly merge graphics from different monitors into one continuous collage without gaps.

Powerwall Premium Mosaic

Powerwall is an eight-monitor setup driven by the Quadro professional graphics engine. Two Quadro modules fit into one Quadroplex industrial PC to drive four monitors. Projectors can also be used in place of monitors to create a seamless view. The display was absolutely clear and highly detailed. The Powerwall is application-transparent. Additional coolness factor – persistence data is saved so you don’t lose the image during video refreshes and buffer swaps. This is most certainly a tool intended for professionals who need high-quality visuals and computing in their regular work. Examples are automotive, oil and gas, and stock trading.

PhysX Engine

PhysX is a graphics engine that infuses real-time physics into games or applications. It is intended to make objects in games or simulations move as they would in real life. To me this was very disruptive, and the highlight of the show. You can read more about PhysX here. It is very clear how PhysX could change gaming. The game demo I watched had several outstanding effects: dried leaves moving away from the character as he walks through a corridor, glass breaking into millions of shards as it would in real life. Also running was a PhysX simulation demo that would allow researchers to calculate how objects would move in case of a flood. What was stunning was that the objects moved differently every time, as they would in real life. PhysX runs on Quadro and Tesla GPUs. It is interesting to note that Ra.One’s special effects were done using PhysX.

3D photos and movies

The next couple of demos demonstrated 3D TV and photo technology using Sony TVs and a set of desktops/laptops. Notably, the Sony 3D glasses were much more comfortable compared to others. Nvidia is working with manufacturers to create more comfortable glasses. There was also a Toshiba laptop that uses an eye-tracking camera to display a 3D image to the viewer, regardless of seating position and without glasses. It was interesting. However, the whole 3D landscape needs a lot of work from the industry before it can become mainstream.

Optimus

What was explained to me was that Optimus allows laptops to shut off GPUs when they are not needed; they can be woken up when high-performance work is required. This is automatic and seamless, similar to how power delivery works on a Toyota Prius. This sort of technology is not new to computing – a laptop typically puts a lot of components to sleep/hibernation when they are not being used – but the GPU had not been included until now.

Quadro Visualizations

This allows 2D/3D visualizations for automotive, architectural and similarly complex systems, for up to one thousand users at a time. You can easily change colors, textures and views, so everyone can comment and give constructive feedback. I was not sure if the design can be changed on the fly as well. Nvidia is working with ISVs such as Autodesk (makers of Maya) on this.

Tesla

Tesla GPUs use chips meant for high-performance computing rather than rendering, which is different from what Nvidia typically does. The Tesla modules do not have any video ports! Tesla has a heterogeneous GPU/CPU architecture that saves power. In fact, the SAGA-220 supercomputer, dubbed India’s fastest, at ISRO’s Vikram Sarabhai Space Centre facility, uses Tesla C2070 GPUs along with 400 Intel Xeon processors. In addition to supercomputing, Tesla is very useful in 3D robotic surgery, 3D ultrasound, molecular dynamics, oil and gas, weather forecasting and many more applications.

Tegra Mobile Processor

The next few demos showcased the Tegra mobile applications processor, based on ARM Cortex-A9 cores. The HD-quality graphics and imaging were impressive. It is clear that the smartphones and tablets of today are far more powerful than the desktops of yesteryear, and can support highly impressive video and audio in a very handy form factor.

In all, I had a great time. As I mentioned earlier, Nvidia, along with other tech companies in Pune, should hold more of these kinds of events to give technology exposure to the larger population. I think it is important for people to know that the stuff that makes Facebook run is the real key, and that is where the coolness is.