Tag Archives: funding

Enrich your website (with content & money) – An interview with Hover.in

Hover.in is a Pune startup that provides a service for web publishers (i.e. website/blog owners) to automatically insert extra content into their webpages, in the form of a bubble that appears when the mouse hovers over underlined words. The bubble can be informational (like a map appearing wherever the name of a place appears, or background information about a company appearing wherever the name of a company appears), or it can be a contextual, in-text advertisement from hover.in’s network of partners; most importantly, it is fully under the publisher’s control. While services like this have been around in other forms, hover.in believes that its ability to handle any language, and its focus on the Indian market, set it apart from the competition. See the PuneTech profile of hover.in to get a better idea of what hover.in provides.

Hover.in was one of the startups chosen to be showcased at proto.in’s Jan ’08 edition. Earlier this week, they announced that they have received seed funding from Media2Win, and will soon be seen in action on some large Indian portals.

PuneTech interviewed Arun Prabhudesai, CEO of Hover.in (he also runs the popular Indian business blog trak.in), to get a deeper look at hover.in. To be true to the “tech” part of PuneTech, we also asked some technical questions, which were answered by Bhasker V. Kode (Bosky), CTO of Hover.in.

Q: Congratulations on getting funded – especially under these economic conditions. How do you plan on using this funding – what will be the focus areas?

The seed funding was finalized a few months back, before the whole “recession” thing started constantly ringing in our ears.

Actually, from hover.in’s perspective, we see this funding as more of a strategic investment, where Media2Win – being a leading digital media agency – will help us go to market. We have benefitted immensely from the experience Me2W brings to the table.

The funding is being mostly used to ramp up our technical resources and infrastructure.

Q: Your main “customers” are website publishers. Are you targeting any specific geography, like India (as the .in domain name would suggest)?

The hover.in in-text platform is global and open to web publishers and bloggers from all geographies. However, we are actively targeting the Indian market first. India currently does not have any in-text platform, and that puts us in a great position to capture this market. In fact, hover.in is the world’s first in-text platform that is also language agnostic, which opens up a large chunk of regional language websites.

Q: I keep hearing that “there isn’t enough money to be made from online advertisements alone in India, except for a few specific verticals.” And you seem to be going squarely after this market. What is your take on this issue?

You know, people have started saying this because of the number of ad networks that have come up in the last couple of years – more than 15 that I can count on my fingers!

But if you look at the larger picture, online advertising is the only segment that is growing year on year. Traditional advertising is the hardest hit…

For us the advantage is, we DO NOT compete with traditional ad networks, as they are 99% display advertising. We are in-text, and this market has not even been tapped. From the publisher’s perspective, it is an additional channel for content and monetization.

For advertisers, this is the most targeted way of displaying their advertisements. Also, as we follow a CPA/CPC kind of model, advertisers get full ROI on their investment.

Co-founders of Hover.in - Bhasker V. Kode, CTO (left) and Arun Prabhudesai, CEO

Q: If I remember right, you are using Erlang for your development – a very non-standard choice. Can you share the reasons behind the choice? Isn’t it difficult to get Erlang developers? In retrospect are you happy with this decision?

(by Bosky)

Erlang has been used to build fault-tolerant and distributed applications for quite some time in areas like telecom, especially for allowing highly granular choices in networking. Of late, projects like ejabberd, mnesia, yaws and tsung have shown how products spanning several hundred nodes can be implemented with the Erlang stack – and in particular, web technologies.

It most definitely is a paradigm shift, courtesy of its functional programming concepts, and we are glad we took that decision because of Erlang’s inherent focus on distributed systems. And although the Erlang developer community in India is non-existent, with the right attitude towards learning, nowadays that doesn’t matter. Moreover, it only took a couple of months for our developers to get used to the semantics, following which – as with any stack – it’s about what you do with that expertise.

Erlang gives you that power, but at the same time there are areas where it might not seem the right fit, and you might look to Perl or Ruby for tasks that suit them. For example, we use Python wherever it seems required. The good part is that the Erlang open-source community has quite a closely-knit presence online, which helps quite a lot. We ourselves are now looking at contributing and opening up internal projects.

Q: One of the important challenges for hover.in will be scalability. How are you gearing up to handle that?

(By Bosky)

Right from day one, Erlang-based systems like ours are designed and built for horizontal scaling, which allows plug-n-play addition of nodes to our growing cluster. Regardless of the development stack you work on, some things need to be built in early, and that’s something we spent a whole lot of time fine-tuning during our first year.

Especially for us – where practically every page hit, for every one of our users, reflects a page visit to us, where we need to compute and render hovers in a matter of milliseconds. To this end, before starting on the application logic, we first built our own distributed priority-queuing system, our own distributed crawler and various indexing mechanisms, and a time-slicing based CPU allocation for various tasks. This made adding jobs and executing them a more controlled operation, regardless of what the actual job is, and has handled burst mode quite well.

Moreover, we can also add workers on-the-fly to almost all major tasks, much like an Amazon EC2 instance, where each worker itself is supervised for crash recovery, thanks to Erlang’s Open Telecom Platform libraries and guidelines. Caching is something else we have worked on, and continue to work on, consistently. No matter how many papers, algorithms or implementations are out there, every system needs to fine-tune its own unique set of optimizations vs. compromises, reflecting its infrastructure, traffic, memory and space needs, etc.

Having granular control over this is a real challenge, as well as a pleasure, with our stack (Linux, Yaws, Mnesia, Erlang). We’ve also been quick to adopt cloud-computing initiatives like Amazon S3, and more recently CloudFront, for our static content delivery needs.

We’re also working on a parallel map-reduce implementation, exploring options with XMPP, and building better logging for our developers to find and fix glitches or bottlenecks – eventually translating into a faster and better experience for our users.

Q: You moved to Pune specifically to start hover.in. What made you choose Pune?

Yes, I did move to Pune to start hover.in; however, it would not be fair to say that is the only reason I moved here. I lived most of my formative years here in Pune, before going to the USA. And as you know, once a Puneite, always a Puneite!

Actually, we had to choose between two cities – Chennai (our co-founder, Bhasker VK, is from Chennai) and Pune. A few important aspects tilted the balance in favour of the latter: better weather, and Pune’s proximity to Mumbai, where the majority of big publishers, investors and advertisers have their offices. To add to it all, Pune has a great startup and tech community.

Q: In the journey so far, have you made any significant mistakes that you’d like to share with others, so they can learn from your experience?

Absolutely… Mistakes are an important part of the learning process, especially for first-generation entrepreneurs like Bosky and me. I think “attention to detail” is one of the most important things an entrepreneur should watch for. You need to have a clear, in-depth blueprint in your mind of the direction your startup is going to take; otherwise it’s very easy to fall off the cliff!

The second is optimizing, especially during these tough times – be it resources, infrastructure or even your time. Optimize everything. Startups can’t afford any leaks.

The third thing, and the one I don’t see very often: partner with other startups, and see if there are any synergies between you. In most cases it is a win-win situation for both.

Q: Are you partnering with other startups? At this stage, would it be possible for you to share info about any of these partnerships?

Yes, we are… one example would be Alabot (another Pune startup -ed.). We have integrated their NLP application (a travel bot) inside our hoverlet, so for any travel-related publisher it becomes a boon. A win-win situation for both of us.

Another example: before we got our own office, two other startups were kind enough to accommodate us for a few weeks. Partnerships like these, in any way possible, go a long way!

Q: What would your advice for struggling Pune entrepreneurs be?

Entrepreneurship is a roller coaster ride … It ain’t easy, but the thrills along the way make it all more than worth it!

Just jump into the rough waters and by the time you reach the other side, you will be glad you did it….


How (and why) to bootstrap your own startup

I am liveblogging the Pune OpenCoffee Club Saturday meetup, where we are discussing how to bootstrap your startup. We have invited three speakers who have experience with both bootstrapping a startup and VC funding: Anand Soman, Tarun Malaviya, and Shridhar Shukla. The following is a quick-n-dirty capture of some of the discussion that happened.

This is not intended to be a well thought out, well structured article – hopefully that will happen after a few days, when hopefully someone blogs about this event.

Update: The hope has come true. Here are two other blog posts on that event: A post by Santosh (or maybe Anjali) of Bookeazy, and one by Rishi from Thinking Space Technologies (ActiveCiti and EventAZoo)

Getting funded vs. bootstrapping your own startup:

Anand: Take funding when you want to do something that you cannot do without funds. This can be VC funds, or any other source of funds. But make sure that the goals of the financer are the same as your goals, otherwise you’ll get into trouble.

Tarun: Most of you will not get funded. Officially there are 45 million businesses in India. Unofficially, 85% of 350 million people are in the unorganized sector – so they are businesspeople.

I don’t want to be a mom-and-pop show in a corner. I want to be a big business. But remember Reliance is a bootstrapped company. Finance is a good thing at a certain time, not always. I’ve seen too many people fixated on getting funded. It is better to be fixated on running your business well – keeping customers happy.

Shridhar: You get funding only if you’ve proved that you have a viable business. Which means that you have to bootstrap until you reach that point. So you need to figure out how to do this in any case.

What are the disadvantages of VC funding?

Tarun: You have to build a bridge across a river. You need a million dollars. You get funds for only 50k. What do you do? Fold the business? That’s what a financial investor will suggest. An entrepreneur will do anything to keep the business running. Buy boats to get across. Run a boating service. Change tactics to make some progress.

VCs insist on big returns, which can leave you with no choice.

Q: If someone wants to build a lifestyle business, then you don’t need VC funding. But if you want to build a product, there are many people across the world with the same idea. If they are funded and you are not, then you are at a major disadvantage.

Lifestyle business = this is a lifestyle choice for the founders. They are doing this just because they enjoy doing this, and they are making a little money. They are not interested in giving a huge exit to their investors.

Don’t worry about lifestyle or not. Focus on building value for customers. As long as you can do that, and create significant value, you will do fine, and you will attract investors. If the value created is not significant, you’ll find out soon enough, and you’ll change your strategy.

There is a major trade-off involved here. You must believe in yourself. But not to such an extent that you are blind to realities and are not listening to anybody at all. So you need to balance this – believing in yourself vs. listening to feedback. That is difficult.

So why take VC funding?

Tarun: Family businesses get ruined by all the informal/unprofessional structure. VC funding is a great way of getting a professional corporate structure that is necessary for success.

VCs have wisdom, if you select wisely. They open doors to contacts. They are advisors. Some of them can be given a stake without being given money. So that is important.

Every VC that I’ve talked to has helped me in some way. So even if you don’t take funding, take advice from them. Go through the process.

Anand: VCs are not the only source. Many other sources.
One good source is raising money from your first customer. It’s a win-win situation: he invests because he can direct you to build the product he wants. This is good because he understands your product and understands the business requirements. And he is happier giving you a contract because he actually has control over you (since he funded you).

Bootstrapping your product through services.

Shridhar: Do services. Charge high. Don’t worry about whether it is glamorous work – do boring work, because it doesn’t take up too much of your time (so you have time to work on your product). Don’t have qualms about doing this. Even when you do your product, keep the structure in place to keep the money flowing from services. Money from any source is good. Don’t give up on that.

Another possibility for bootstrapping is to moonlight. Work somewhere, have a day job, and work on your startup at night. Don’t create pressure on your own savings or your friends’ savings. Many businesses got started that way. Many people worry about being fast, and first to market. That is not so important. But be ethical – don’t do your current employer in.

Anand: 14 of the top 15 companies in the world were started by part-timers.

Tarun:
Q: Why do you want to do product?
A: Exponential returns. Build once, and sell many times. Product encapsulates a service. You can bootstrap this. Sell your product as a service to the first few customers. Or maybe, start by selling your expertise.

Shridhar:
Don’t tell your customer that you are going to build a product in the space that you are doing a project in. Be smart. Don’t sign a contract that gives away your IP.


Technology overview – Druvaa Continuous Data Protection

Druvaa, a Pune-based product startup that makes data protection (i.e. backup and replication) software targeted at the enterprise market, has been all over the Indian startup scene recently. It was one of the few Pune startups to be funded in recent times (Rs. 1 crore by Indian Angel Network). It was one of the three startups that won the TiE-Canaan Entrepreneurial Challenge in July this year. And it was one of the three startups chosen to present at the showcase of emerging product companies at the NASSCOM product conclave 2008.

And this attention is not confined to national boundaries. It is one of only two (as far as I know) Pune-based companies to be featured in TechCrunch (actually TechCrunchIT), one of the most influential tech blogs in the world (the other Pune company featured in TechCrunch is Pubmatic).

Why all this attention for Druvaa? Other than the fact that it has a very strong team that is executing quite well, I think two things stand out:

  • It is one of the few Indian product startups targeting the enterprise market. This is a very difficult market to break into, both because of the risk-averse nature of the customers and because of the very long sales cycles.
  • Unlike many other startups (especially consumer oriented web-2.0 startups), Druvaa’s products require some seriously difficult technology.

Rediff has a nice interview with the three co-founders of Druvaa – Ramani Kothandaraman, Milind Borate and Jaspreet Singh – which you should read to get an idea of their background, why they started Druvaa, and their journey so far. Druvaa also has a very interesting and active blog where they talk technology; it is worth reading on a regular basis.

The rest of this article talks about their technology.

Druvaa has two main products. Druvaa inSync allows enterprise desktop and laptop PCs to be backed up to a central server with over 90% savings in bandwidth and disk storage utilization. Druvaa Replicator allows replication of data from a production server to a secondary server near-synchronously and non-disruptively.

We now dig deeper into each of these products to give you a feel for the complex technology that goes into them. If you are not really interested in the technology, skip to the end of the article and come back tomorrow, when we’ll be back to talking about Google keyword searches and web-2.0 and other such things.

Druvaa Replicator

Overall schematic set-up for Druvaa Replicator

This is Druvaa’s first product, and it is a good example of how something that seems simple to you and me can become insanely complicated when the customer is an enterprise. The problem seems rather simple: imagine an enterprise server that needs to be on, serving customer requests, all the time. If this server crashes for some reason, a standby server needs to be able to take over immediately. This is the easy part. The problem is that the standby server needs to have a copy of all the latest data, so that no data is lost (or at least very little). To do this, the replication software continuously copies all the latest updates of the data from the disks on the primary server to the disks on the standby server.

This is much harder than it seems. A naive implementation would ensure that every write of data done on the primary is also done on the standby storage at the same time (synchronously). This is unacceptable because each write would then take too long, slowing down the primary server too much.

If you are not doing synchronous updates, you need to start worrying about write order fidelity.

Write-order fidelity and file-system consistency

If a database writes a number of pages to the disk on your primary server, and you have software replicating all these writes to a disk on a stand-by server, it is very important that the writes be done on the stand-by in the same order in which they were done on the primary. This section explains why this is important, and also why doing it is difficult. If you know this stuff already (database and file-system folks), or if you just don’t care about the technical details, skip to the next section.

Imagine a bank database. Account balances are stored as records in the database, which are ultimately stored on the disk. Imagine that I transfer Rs. 50,000 from Basant’s account to Navin’s account. Suppose Basant’s account had Rs. 3,00,000 before the transaction and Navin’s account had Rs. 1,00,000. So, during this transaction, the database software will end up doing two different writes to the disk:

  • write #1: Update Basant’s bank balance to 2,50,000
  • write #2: Update Navin’s bank balance to 1,50,000

Let us assume that Basant’s and Navin’s bank balances are stored at different locations on the disk (i.e. on different pages). This means the above will be two different writes. If there is a power failure after write #1 but before write #2, the bank will have reduced Basant’s balance without increasing Navin’s. This is unacceptable: when the database server restarts after power is restored, Rs. 50,000 will have been lost.

After write #1, the database (and the file-system) is said to be in an inconsistent state. After write #2, consistency is restored.

It is always possible that at the time of a power failure, a database might be inconsistent. This cannot be prevented, but it can be cured. For this, databases typically do something called write-ahead-logging. In this, the database first writes a “log entry” indicating what updates it is going to do as part of the current transaction. And only after the log entry is written does it do the actual updates. Now the sequence of updates is this:

  • write #0: Write this log entry “Update Basant’s balance to Rs. 2,50,000; update Navin’s balance to Rs. 1,50,000” to the logging section of the disk
  • write #1: Update Basant’s bank balance to 2,50,000
  • write #2: Update Navin’s bank balance to 1,50,000

Now if the power failure occurs between writes #0 and #1, or between #1 and #2, the database has enough information to fix things later. When it restarts, before it becomes active, it first reads the logging section of the disk and checks whether all the updates claimed in the logs have actually happened. In this case, after reading the log entry, it needs to check whether Basant’s balance is actually 2,50,000 and Navin’s balance is actually 1,50,000. If they are not, the database is inconsistent, but it has enough information to restore consistency: the recovery procedure consists of simply going ahead and making those updates. After these updates, the database can continue with regular operations.

(Note: This is a huge simplification of what really happens, and has some inaccuracies – the intention here is to give you a feel for what is going on, not a course lecture on database theory. Database people, please don’t write to me about the errors in the above – I already know; I have a Ph.D. in this area.)
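The write-ahead-logging scheme above can be sketched as a toy Python model (an illustration only, with made-up names and a dict standing in for the disk; a real database is far more involved):

```python
# Toy model of write-ahead logging: the "disk" is a dict, and a power
# failure is simulated by stopping after a given number of writes.

def transfer(disk, log, frm, to, amount, fail_after=None):
    """Do the transfer, optionally 'crashing' after `fail_after` writes."""
    writes = [
        ("log", {frm: disk[frm] - amount, to: disk[to] + amount}),  # write #0
        (frm, disk[frm] - amount),                                  # write #1
        (to, disk[to] + amount),                                    # write #2
    ]
    for i, (key, value) in enumerate(writes):
        if fail_after is not None and i >= fail_after:
            return  # power failure: the remaining writes never happen
        if key == "log":
            log.append(value)
        else:
            disk[key] = value

def recover(disk, log):
    """On restart, re-apply every logged update, whether or not it happened."""
    for entry in log:
        for account, balance in entry.items():
            disk[account] = balance

disk = {"basant": 300_000, "navin": 100_000}
log = []
transfer(disk, log, "basant", "navin", 50_000, fail_after=2)  # crash between #1 and #2
recover(disk, log)
assert disk == {"basant": 250_000, "navin": 150_000}  # consistency restored
```

Because the log entry is written first, recovery can always finish the transaction; if the crash happens before write #0, neither balance has changed, so the database is still consistent.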

Note that in the above scheme the order in which writes happen is very important. Specifically, write #0 must happen before #1 and #2. If for some reason write #1 happens before write #0 we can lose money again. Just imagine a power failure after write #1 but before write #0. On the other hand, it doesn’t really matter whether write #1 happens before write #2 or the other way around. The mathematically inclined will notice that this is a partial order.

Now if there is replication software that is replicating all the writes from the primary to the secondary, it needs to ensure that the writes happen in the same order. Otherwise the database on the stand-by server will be inconsistent, and can result in problems if suddenly the stand-by needs to take over as the main database. (Strictly speaking, we just need to ensure that the partial order is respected. So we can do the writes in this order: #0, #2, #1 and things will be fine. But #2, #0, #1 could lead to an inconsistent database.)
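To make the partial-order idea concrete, here is a small, hypothetical check in Python (not Druvaa's actual algorithm): given which writes each write depends on, it verifies that a replay order at the stand-by respects every dependency.

```python
# Write #1 and write #2 both depend on the log write #0, but are
# unordered with respect to each other (a partial order).
DEPENDS_ON = {0: set(), 1: {0}, 2: {0}}

def respects_partial_order(replay_order, depends_on=DEPENDS_ON):
    """True if every write appears after all the writes it depends on."""
    position = {w: i for i, w in enumerate(replay_order)}
    return all(position[dep] < position[w]
               for w, deps in depends_on.items()
               for dep in deps)

assert respects_partial_order([0, 1, 2])      # the original order: fine
assert respects_partial_order([0, 2, 1])      # #1 and #2 swapped: also fine
assert not respects_partial_order([2, 0, 1])  # data written before the log: inconsistent
```

The replication software's job is, in effect, to reconstruct a `DEPENDS_ON` relation from the stream of writes it observes, and then only replay orders that pass this kind of check.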

Replication software that ensures this is said to maintain write order fidelity. A large enterprise that runs mission critical databases (and other similar software) will not accept any replication software that does not maintain write order fidelity.

Why is write-order fidelity difficult?

I can hear you muttering, “OK, fine! Do the writes in the same order. Got it. What’s the big deal?” It turns out that maintaining write-order fidelity is easier said than done. Imagine that your database server has multiple CPUs. The different writes are being done by different CPUs, and the different CPUs have different clocks, so the timestamps used by them are not necessarily in sync. Multiple CPUs are now the default in server-class machines. Further imagine that the “logging section” of the database is actually stored on a different disk (for reasons beyond the scope of this article, this is the recommended practice). So the situation is that different CPUs are writing to different disks, and the poor replication software has to figure out what order this was done in. It gets even worse when you realize that the disks are not simple disks, but complex disk arrays with a whole lot of intelligence of their own (and hence might not write in the order you specified), that there is a volume manager layer on the disk (which can be doing striping and RAID and other fancy tricks), and a file-system layer on top of the volume manager layer that is buffering the writes – and you begin to get an idea of why this is not easy.

Naive solutions to this problem, like using locks to serialize the writes, result in unacceptable degradation of performance.

Druvaa Replicator has patent-pending technology in this area, where they are able to automatically figure out the partial order of the writes made at the primary, without significantly increasing the overheads. In this article, I’ve just focused on one aspect of Druvaa Replicator, just to give an idea of why this is so difficult to build. To get a more complete picture of the technology in it, see this white paper.

Druvaa inSync

Druvaa inSync is a solution that allows desktops/laptops in an enterprise to be backed up to a central server. (The central server is also in the enterprise; imagine the central server being in the head office, and the desktops/laptops spread out over a number of satellite offices across the country.) The key features of inSync are:

  • The amount of data being sent from the laptop to the backup server is greatly reduced (often by over 90%) compared to standard backup solutions. This results in much faster backups and lower consumption of expensive WAN bandwidth.
  • It stores all copies of the data, and hence allows timeline-based recovery. You can recover any version of any document as it existed at any point of time in the past. Imagine you plugged in your friend’s USB drive at 2:30pm, and that resulted in a virus that totally screwed up your system. Simply use inSync to restore your system to the state that existed at 2:29pm, and you are done. This is possible because Druvaa backs up your data continuously and automatically. This is far better than having to restore from last night’s backup and losing all of today’s data.
  • It intelligently senses the kind of network connection that exists between the laptop and the backup server, and will correspondingly throttle its own usage of the network (possibly based on customer policies) to ensure that it does not interfere with the customer’s YouTube video browsing habits.

Data de-duplication

Overview of Druvaa inSync. 1. Fingerprints computed on laptop sent to backup server. 2. Backup server responds with information about which parts are non-duplicate. 3. Non-duplicate parts compressed, encrypted and sent.

Let’s dig a little deeper into the claim of 90% reduction in data transfer. The basic technology behind this is called data de-duplication. Imagine an enterprise with 10 employees, all of whose laptops have been backed up to a single central server. At this point, data de-duplication software can realize that a lot of data has been duplicated across the different backups, i.e. the 10 different backups contain a lot of files in common: most of the files in the C:\WINDOWS directory, or all those large PowerPoint documents that got mail-forwarded around the office. In such cases, the de-duplication software can save disk space by keeping just one copy of the file and deleting all the other copies. In place of the deleted copies, it can store a shortcut indicating that if a user tries to restore this file, it should be fetched from the other backup and then restored.

Data de-duplication doesn’t have to be at the level of whole files. Imagine a long and complex document you created and sent to your boss. Your boss simply changed the first three lines and saved it as a document with a different name. These files have different names and different contents, but most of the data (other than the first few lines) is the same. De-duplication software can detect such copies too, and is smart enough to store only one copy of the document in the first backup, and just the differences in the second backup.

The way to detect duplicates is through a mechanism called document fingerprinting. Each document is broken up into smaller chunks. (How to determine what constitutes one chunk is an advanced topic beyond the scope of this article.) Then a short “fingerprint” is created for each chunk. A fingerprint is a short string (e.g. 16 bytes) that is uniquely determined by the contents of the entire chunk. The computation of a fingerprint is done in such a way that if even a single byte of the chunk is changed, the fingerprint changes. (It’s something like a checksum, but a little more complicated, to ensure that two different chunks cannot accidentally have the same fingerprint.)

All the fingerprints of all the chunks are then stored in a database. Now, every time a new document is encountered, it is broken up into chunks, fingerprints are computed, and those fingerprints are looked up in the fingerprint database. If a fingerprint is found in the database, then we know that this particular chunk already exists somewhere in one of the backups, and the database will tell us its location. The chunk in the new file can then be replaced by a shortcut to the old chunk. Rinse. Repeat. And we get 90% savings in disk space. The interested reader is encouraged to google Rabin fingerprinting, shingling, and rsync for hours of fascinating algorithms in this area. Before you know it, you’ll be trying to figure out how to use these techniques to find out who is plagiarising your blog content on the internet.
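As a rough sketch of the idea (not inSync's actual implementation – this uses fixed-size chunks and a truncated SHA-256 hash, where real systems use content-defined chunk boundaries such as Rabin fingerprinting and their own fingerprint functions):

```python
import hashlib

CHUNK_SIZE = 4096   # toy fixed-size chunks; real systems use variable-size chunks
store = {}          # fingerprint -> chunk data (the de-duplicated chunk store)

def fingerprint(chunk: bytes) -> bytes:
    """A 16-byte fingerprint: any change to the chunk changes it."""
    return hashlib.sha256(chunk).digest()[:16]

def backup(document: bytes):
    """Store a document as a list of fingerprints; only new chunks cost space."""
    recipe, new_bytes = [], 0
    for i in range(0, len(document), CHUNK_SIZE):
        chunk = document[i:i + CHUNK_SIZE]
        fp = fingerprint(chunk)
        if fp not in store:        # duplicate chunks are stored only once
            store[fp] = chunk
            new_bytes += len(chunk)
        recipe.append(fp)
    return recipe, new_bytes

def restore(recipe) -> bytes:
    """Rebuild a document from its list of fingerprints."""
    return b"".join(store[fp] for fp in recipe)

original = b"x" * 20_000
recipe1, cost1 = backup(original)
edited = b"EDITED!!" + original[8:]   # change only the first few bytes
recipe2, cost2 = backup(edited)
assert restore(recipe1) == original and restore(recipe2) == edited
assert cost2 == CHUNK_SIZE            # only the first chunk needs re-storing
```

Backing up the lightly-edited copy costs only one chunk of new storage; every unchanged chunk is found in the fingerprint store and replaced by a reference.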

Back to Druvaa inSync. inSync does the fingerprinting on the laptop itself, before the data is sent to the central server. So it is able to detect duplicate content before it gets sent over the slow and expensive net connection, saving time and bandwidth. This is in contrast to most other systems, which do de-duplication as a post-processing step at the server. At a Fortune 500 customer site, inSync was able to reduce the backup time from 30 minutes to 4 minutes, and the disk space required on the server went down from 7TB to 680GB. (source.)

Again, this was just one example, used to give an idea of the complexities involved in building inSync. For more information on its other distinguishing features, check out the inSync product overview page.

Have questions about the technology, or about Druvaa in general? Ask them in the comments section below (or email me). I’m sure Milind/Jaspreet will be happy to answer them.

Also, this long, tech-heavy article was an experiment. Did you like it? Was it too long? Too technical? Do you want more articles like this, or less? Please let me know.


Tech Community Building: Startup Lunch, OpenCoffee Club and Bloggers Meet

Startup Lunch, Pune

[A StartupLunch] is roughly the same as the SpeedDating concept. The startup founders are seated on one side, and the candidates get to say hello and have a quick conversation: the founder talks about his background, why he started the company, and what sort of person he is looking for, while asking the candidate about his/her reasons for wanting to join a startup and his/her passions. Ten minutes later, the same process continues with the next founder. Within an hour, you would have met or spoken to most of the startups, and by the end of the day you would know whom to get in touch with for your first/next job.

Something like this would be really useful, especially in Pune, because most job seekers are not aware of the startups that exist in the area, or where to find them. The Startup Lunch is an attempt at fixing this.

More details

Register here if you are looking for a job. And register here if you are a startup. Jaspreet Singh (jaspreet.singh _at_ druvaa.com) from Druvaa has taken the lead in organizing this in Pune.

VC Circle Growth Capital Forum

VC Circle is holding a day-long event targeted at the venture capital, capital investments and entrepreneurial community – basically, companies seeking funds to grow, and people with money who are willing to give it out.
Snapshot of the Event
Venue: Le Meridien, Pune
Date: April 4, 2008
Time: 10.30 am-5.30 pm
Registration fee per attendee: Rs 3,000, inclusive of all taxes.

The confirmed list of speakers:
George Thomas, Partner, India Value Fund
Nikhil Khattau, MD, Mayfield Fund
Srini Vudayagiri, Managing Director, Lightspeed Venture Partners
Cyrus Driver, MD, Helix Investments
Gaurav Mathur, MD, India Equity Partners
Deep Kalra, CEO, MakeMyTrip
Niren Shah, MD, Norwest Venture Partners
Kartik Parija, MD, Zephyr Peacock
Shantanu Surpure, Managing Advocate, Sandhill India Advisors
Subba Rao, Chairman, Robo Silicon
Abizer Diwanji, Executive Director, KPMG
Shiraz Bhugwadia, Director, o3 Capital Partners
Kuntal Shah, Co-Founder, Axis Holdings

More details

Pune OpenCoffee Club

The Pune OpenCoffee Club is an attempt to establish recognized, open and regular meeting places where entrepreneurs can meet investors, advisors, and anyone else in a totally informal setting, and to nurture the startup ecosystem through community participation.

Pune OpenCoffee Club was started by Santosh Dawara, co-founder of Bookeazy.

If you are interested, please sign up.

Bloggers Meet

The Pune Bloggers Meet, organized by Vineet in conjunction with IndiBlogger and Microsoft last weekend, was attended by over 50 people.

There was quite a diverse crowd – there are lots of interesting and enthusiastic people in Pune.