Ultimately, we build systems so that they will be used at some point, and a system will be used only if it is usable. If a system is not usable, how will anyone use it?
It is very common to come across doors that are supposed to be pushed open (and there is a sticker near the handle which says PUSH), yet most people pull them. It is just as common to see doors that are supposed to be pulled, which people end up pushing. It’s not really a problem with the people; it’s a problem with the design of the door. It’s a usability issue. There are also doors that people push exactly when they are expected to push. It’s all in the design, isn’t it?
When Gmail was launched, it was an instant hit. Even though it didn’t do as many things as other email services did at that point of time (like support for all browsers, drafts, etc.), it was still a revolution in itself. And the reason was simple – usability. What was the need for the GNOME project? Wasn’t it the usability of GNU/Linux for non-programmers?
A system is not an island. It is connected with various other entities by various means. And it shares one relationship or the other with those entities. A conceptual model effectively captures these various entities and what kind of relationship our system has with these external entities.
The model of the Encyclopædia Britannica is that a group of experts together create an encyclopedia of important topics, which can then be read by people. The model of Wikipedia is that people (and that includes everyone in the world) can help write the encyclopedia that everyone can read. The model of Google Knol is that experts write articles on specific subjects which everyone can read, and readers can suggest improvements that the original authors can incorporate.
The conceptual model must be made usable. The entire system will eventually be built on top of the conceptual model.
2. Interface usability
So, we have settled that a system maintains some relationships with some external entities. This relationship is exposed through interfaces and we need to think through the usability of those interfaces.
Gmail is an excellent example of interface usability. As I mentioned earlier, there were several email services when Gmail was introduced but its interface usability was far superior. This is an example of how the same conceptual model can be presented to the user with completely different interfaces.
Conceptual usability is a must to help the user understand the system: Wikipedia is an encyclopedia which can be read and modified by anyone. And interface usability is a must to help the user *do* something with the system: on each page of Wikipedia, everywhere your eye goes, you’ll find a link to edit the page or a section thereof. It almost invites you to modify the page.
Methodology and Evaluation
There are methodologies for designing usable systems. Usable systems are still created by people whose thought process naturally evaluates usability at every step; however, these methods can make the system designer better grounded in the real world, and more efficient too.
Much like a system cannot be declared functional till it is tested for the same, the usability of the system has to be tested with equal vigor. After all, if the system is not usable, who is going to use it (even if it is functional)? And there is only one way to test the usability of a system – by letting target users use it, without providing any guidance. Closely observing those users can often be an eye-opener.
And the system designer/developer cannot verify the system’s usability. In the course of developing it, the developer has learnt too much about the system and knows exactly how it works. The user will not have as much knowledge about the system and may not attempt to do things in the same ways. If you don’t believe me, go and re-read the doors example.
And what else?
Ultimately, it is possible to educate people about how the system works. But people’s willingness to be educated will depend on why they need to use the system and how often. So keep education as the last resort.
And the first thing is still the last thing. We need to create usable systems because nobody uses unusable systems. Very few systems are usable by accident; most usable systems are developed with usability as a focus.
What kind of decisions does this help you with? How to cut costs. Better understanding of customers (which ones are creditworthy? which ones are at most risk of switching to a competitor’s product?). Better planning of the flow of goods or information in the enterprise.
This is not easy, because the amount of data is exploding. There’s too much data; humans can’t make sense of it all.
To manage this kind of information you need a big storage platform, a systematic way of storing all the information, and the ability to analyze the data (with the aforementioned complex queries). Collect together data from the different sources in the enterprise: pull from the various production servers and load it into an offline, big, fat database. This is called a data warehouse.
The data needs to be cleaned up quite a lot before it is usable: inconsistencies between data from different sources, duplicates, mismatches. If you are combining all the data into one big database, it needs to be consistent and free of duplicates. Then you start analyzing the data – either with a human running various reports and queries (OLAP), or with the computer automatically finding interesting patterns (data mining).
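The cleanup step can be sketched in a few lines. This is only a toy illustration (the record fields, sources, and names are invented for the example; real warehouse ETL tools are far more elaborate): normalize each record so that equivalent entries compare equal, then keep one copy per key.

```python
# Toy warehouse-style cleanup: merge customer records from two
# hypothetical sources, normalize fields, and drop duplicates.
# All field names and sources are invented for the example.

def normalize(record):
    """Canonicalize a record so equivalent entries compare equal."""
    return {
        "email": record["email"].strip().lower(),
        "name": record["name"].strip().title(),
    }

def merge_sources(*sources):
    """Combine records from several sources, keeping one copy per email."""
    seen = {}
    for source in sources:
        for record in source:
            clean = normalize(record)
            seen.setdefault(clean["email"], clean)  # first copy wins
    return list(seen.values())

crm = [{"email": "Asha@Example.com ", "name": "asha rao"}]
billing = [{"email": "asha@example.com", "name": "Asha Rao"},
           {"email": "ravi@example.com", "name": "Ravi Kumar"}]

# The two Asha records collapse into one; Ravi survives as-is.
merged = merge_sources(crm, billing)
```

Real pipelines face the same idea at scale: pick a canonical form, pick a matching key, and decide which source wins when records conflict.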
Business Intelligence is an application that sits on top of the Data Warehouse.
Lots of difficult problems to be solved.
Many different data sources: flat files, CSVs, legacy systems, transactional databases. You need to pick up updates from all these sources on a regular basis. How do you do this incrementally and efficiently? How often – daily, weekly, monthly? Parallelized loading for speed. How do you do this without slowing down the production system? You might have to do it during a small window at night, so you have to ensure that the loading finishes in the given time window.
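One common answer to the “incrementally and efficiently” question is watermark-based extraction: each run pulls only the rows modified since the previous run’s high-water mark. A minimal sketch, with invented row shapes and column names:

```python
# Sketch of incremental extraction using a "last modified" watermark.
# Row shapes and column names are invented; a real pipeline would read
# from the production database and write to a warehouse staging area.

def extract_incremental(rows, last_watermark):
    """Return rows modified after the previous run, plus the new watermark."""
    fresh = [r for r in rows if r["modified"] > last_watermark]
    new_watermark = max((r["modified"] for r in fresh), default=last_watermark)
    return fresh, new_watermark

source_rows = [
    {"id": 1, "modified": 100},
    {"id": 2, "modified": 205},
    {"id": 3, "modified": 310},
]

# An earlier run already loaded everything up to watermark 205, so
# tonight's run only has to move the one row that changed since then.
delta, watermark = extract_incremental(source_rows, last_watermark=205)
```

The saved watermark is what keeps the nightly window small: the cost of a run is proportional to what changed, not to the size of the table.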
This is the first lecture of a 6-lecture series. The next lecture will be Business Applications of BI, which will give an idea of which industries benefit from BI, with specific examples: e.g. banking for assessing credit risk, fraud, etc. Then Data Management for BI: various issues in handling large volumes of data – data quality, transformation and loading. These are huge issues, and need to be handled very carefully to ensure that performance remains acceptable in spite of the huge volumes. The next lecture is technology trends in BI: where is this technology going in the future? Then one lecture on the role of statistical techniques in BI – you’ll need a bit of a statistical background to appreciate this lecture. And the final session is on careers in BI. For the detailed schedule and other info about this series, see the Pune Tech events calendar, which is the most comprehensive source of tech events info for Pune.
SAS R&D India works on Business Applications of BI (5 specific verticals like banking), on Data management, on some of the solutions. A little of the analytics – forecasting. Not working on core analytics – that is only at HQ.
We are trying to get the slides used in this talk from the speaker. Hopefully in a few days. Please check back by Monday.
And this is not confined to national boundaries. It is one of only two (as far as I know) Pune-based companies to be featured in TechCrunch (actually TechCrunchIT), one of the most influential tech blogs in the world (the other Pune company featured in TechCrunch is Pubmatic).
Why all this attention for Druvaa? Other than the fact that it has a very strong team that is executing quite well, I think two things stand out:
It is one of the few Indian product startups that are targeting the enterprise market. This is a very difficult market to break into, both because of the risk-averse nature of the customers and because of the very long sales cycles.
Unlike many other startups (especially consumer oriented web-2.0 startups), Druvaa’s products require some seriously difficult technology.
The rest of this article talks about their technology.
Druvaa has two main products. Druvaa inSync allows enterprise desktop and laptop PCs to be backed up to a central server with over 90% savings in bandwidth and disk storage utilization. Druvaa Replicator allows replication of data from a production server to a secondary server near-synchronously and non-disruptively.
We now dig deeper into each of these products to give you a feel for the complex technology that goes into them. If you are not really interested in the technology, skip to the end of the article and come back tomorrow when we’ll be back to talking about google keyword searches and web-2.0 and other such things.
This is Druvaa’s first product, and is a good example of how something that seems simple to you and me can become insanely complicated when the customer is an enterprise. The problem seems rather simple: imagine an enterprise server that needs to be on, serving customer requests, all the time. If this server crashes for some reason, there needs to be a standby server that can immediately take over. This is the easy part. The problem is that the standby server needs to have a copy of all the latest data, so that no data is lost (or at least very little data is lost). To do this, the replication software continuously copies all the latest updates of the data from the disks on the primary server side to the disks on the standby server side.
This is much harder than it seems. A simple implementation would simply ensure that every write of data that is done on the primary is also done on the standby storage at the same time (synchronously). This is unacceptable because each write would take unacceptably long and this would slow down the primary server too much.
If you are not doing synchronous updates, you need to start worrying about write order fidelity.
Write-order fidelity and file-system consistency
If a database writes a number of pages to the disk on your primary server, and if you have software that is replicating all these writes to a disk on a stand-by server, it is very important that the writes should be done on the stand-by in the same order in which they were done at the primary servers. This section explains why this is important, and also why doing this is difficult. If you know about this stuff already (database and file-system guys) or if you just don’t care about the technical details, skip to the next section.
Imagine a bank database. Account balances are stored as records in the database, which are ultimately stored on the disk. Imagine that I transfer Rs. 50,000 from Basant’s account to Navin’s account. Suppose Basant’s account had Rs. 3,00,000 before the transaction and Navin’s account had Rs. 1,00,000. So, during this transaction, the database software will end up doing two different writes to the disk:
write #1: Update Basant’s bank balance to 2,50,000
write #2: Update Navin’s bank balance to 1,50,000
Let us assume that Basant and Navin’s bank balances are stored on different locations on the disk (i.e. on different pages). This means that the above will be two different writes. If there is a power failure, after write #1, but before write #2, then the bank will have reduced Basant’s balance without increasing Navin’s balance. This is unacceptable. When the database server restarts when power is restored, it will have lost Rs. 50,000.
After write #1, the database (and the file-system) is said to be in an inconsistent state. After write #2, consistency is restored.
It is always possible that at the time of a power failure, a database might be inconsistent. This cannot be prevented, but it can be cured. For this, databases typically do something called write-ahead-logging. In this, the database first writes a “log entry” indicating what updates it is going to do as part of the current transaction. And only after the log entry is written does it do the actual updates. Now the sequence of updates is this:
write #0: Write this log entry “Update Basant’s balance to Rs. 2,50,000; update Navin’s balance to Rs. 1,50,000” to the logging section of the disk
write #1: Update Basant’s bank balance to 2,50,000
write #2: Update Navin’s bank balance to 1,50,000
Now if the power failure occurs between writes #0 and #1 or between #1 and #2, then the database has enough information to fix things later. When it restarts, before the database becomes active, it first reads the logging section of the disk and checks whether all the updates that were claimed in the logs have actually happened. In this case, after reading the log entry, it needs to check whether Basant’s balance is actually 2,50,000 and Navin’s balance is actually 1,50,000. If they are not, the database is inconsistent, but it has enough information to restore consistency. The recovery procedure consists of simply going ahead and making those updates. After these updates, the database can continue with regular operations.
(Note: This is a huge simplification of what really happens, and has some inaccuracies – the intention here is to give you a feel for what is going on, not a course lecture on database theory. Database people, please don’t write to me about the errors in the above – I already know; I have a Ph.D. in this area.)
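With that caveat in mind, the write-ahead-logging and recovery idea can be sketched in toy Python, with plain dicts and lists standing in for stable storage. This illustrates the concept only – it is not how any real database implements it:

```python
# Toy write-ahead-logging sketch for the bank-transfer example.
# Plain dicts/lists stand in for stable storage; this illustrates the
# concept only, not how a real database implements it.

disk = {"basant": 300_000, "navin": 100_000}
log = []

def transfer(crash_after_log=False):
    # write #0: record the intended updates before touching the data.
    log.append({"basant": 250_000, "navin": 150_000})
    if crash_after_log:
        return  # simulate a power failure before writes #1 and #2
    disk["basant"] = 250_000  # write #1
    disk["navin"] = 150_000   # write #2

def recover():
    """On restart, re-apply any logged updates that didn't reach the data."""
    for entry in log:
        for account, balance in entry.items():
            if disk[account] != balance:
                disk[account] = balance

transfer(crash_after_log=True)  # crash between write #0 and write #1
recover()                        # recovery restores consistency
```

Because the log entry was written first, the crash leaves enough information on “disk” for recovery to finish the transfer, and no money is lost.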
Note that in the above scheme the order in which writes happen is very important. Specifically, write #0 must happen before #1 and #2. If for some reason write #1 happens before write #0 we can lose money again. Just imagine a power failure after write #1 but before write #0. On the other hand, it doesn’t really matter whether write #1 happens before write #2 or the other way around. The mathematically inclined will notice that this is a partial order.
Now if there is replication software that is replicating all the writes from the primary to the secondary, it needs to ensure that the writes happen in the same order. Otherwise the database on the stand-by server will be inconsistent, and can result in problems if suddenly the stand-by needs to take over as the main database. (Strictly speaking, we just need to ensure that the partial order is respected. So we can do the writes in this order: #0, #2, #1 and things will be fine. But #2, #0, #1 could lead to an inconsistent database.)
Replication software that ensures this is said to maintain write order fidelity. A large enterprise that runs mission critical databases (and other similar software) will not accept any replication software that does not maintain write order fidelity.
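A toy simulation makes the partial-order point concrete: a crash after replaying a prefix of a legal order (#0 before #1 and #2) leaves the stand-by recoverable, while a crash after a prefix of the bad order #2, #0, #1 does not. The write values are taken from the bank example above; the consistency check is deliberately simplistic:

```python
# Tiny simulation of why replay order matters on the stand-by server.
# Writes are (location, value) pairs; #0 is the log entry, #1 and #2
# are the data-page updates from the bank example.

writes = {
    0: ("log",    "basant=250000;navin=150000"),
    1: ("basant", 250_000),
    2: ("navin",  150_000),
}

def replay(order):
    """Apply the given writes, in order, to a fresh stand-by image."""
    standby = {"log": "", "basant": 300_000, "navin": 100_000}
    for w in order:
        location, value = writes[w]
        standby[location] = value
    return standby

def recoverable(standby):
    """Crash-consistent: either no data page was touched yet, or the
    log already records the pending updates so recovery can finish."""
    data_touched = (standby["basant"] != 300_000
                    or standby["navin"] != 100_000)
    return (not data_touched) or standby["log"] != ""

# A crash after replaying #0, #2 (legal order #0, #2, #1) is fine...
ok = recoverable(replay([0, 2]))
# ...but a crash after replaying just #2 (bad order #2, #0, #1) is not.
bad = recoverable(replay([2]))
```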
Why is write-order fidelity difficult?
I can hear you muttering, “Ok, fine! Do the writes in the same order. Got it. What’s the big deal?” Turns out that maintaining write-order fidelity is easier said than done. Imagine that your database server has multiple CPUs. The different writes are being done by different CPUs, and the different CPUs have different clocks, so the timestamps used by them are not necessarily in sync. Multiple CPUs are now the default in server-class machines. Further imagine that the “logging section” of the database is actually stored on a different disk. For reasons beyond the scope of this article, this is the recommended practice. So the situation is that different CPUs are writing to different disks, and the poor replication software has to figure out what order this was done in. It gets even worse when you realize that the disks are not simple disks, but complex disk arrays that have a whole lot of intelligence of their own (and hence might not write in the order you specified), that there is a volume manager layer on the disk (which can be doing striping and RAID and other fancy tricks), and a file-system layer on top of the volume manager layer that is buffering the writes – and you begin to get an idea of why this is not easy.
Naive solutions to this problem, like using locks to serialize the writes, result in unacceptable degradation of performance.
Druvaa Replicator has patent-pending technology in this area, where they are able to automatically figure out the partial order of the writes made at the primary, without significantly increasing the overheads. In this article, I’ve just focused on one aspect of Druvaa Replicator, just to give an idea of why this is so difficult to build. To get a more complete picture of the technology in it, see this white paper.
Druvaa inSync is a solution that allows desktops/laptops in an enterprise to be backed up to a central server. (The central server is also in the enterprise; imagine the central server being in the head office, and the desktops/laptops spread out over a number of satellite offices across the country.) The key features of inSync are:
The amount of data being sent from the laptop to the backup server is greatly reduced (often by over 90%) compared to standard backup solutions. This results in much faster backups and lower consumption of expensive WAN bandwidth.
It stores all copies of the data, and hence allows timeline-based recovery. You can recover any version of any document as it existed at any point of time in the past. Imagine you plugged in your friend’s USB drive at 2:30pm, and that resulted in a virus that totally screwed up your system. Simply use inSync to restore your system to the state that existed at 2:29pm and you are done. This is possible because Druvaa backs up your data continuously and automatically. This is far better than having to restore from last night’s backup and losing all data from this morning.
It intelligently senses the kind of network connection that exists between the laptop and the backup server, and will correspondingly throttle its own usage of the network (possibly based on customer policies) to ensure that it does not interfere with the customer’s YouTube video browsing habits.
Let’s dig a little deeper into the claim of 90% reduction in data transfer. The basic technology behind this is called data de-duplication. Imagine an enterprise with 10 employees, all of whose laptops have been backed up to a single central server. At this point, data de-duplication software can realize that a lot of data has been duplicated across the different backups – i.e., the 10 different backups contain a lot of files in common. Most of the files in the C:\WINDOWS directory, for example. Or all those large PowerPoint documents that got mail-forwarded around the office. In such cases, the de-duplication software can save disk space by keeping just one copy of the file and deleting all the other copies. In place of each deleted copy, it can store a shortcut indicating that if this user tries to restore this file, it should be fetched from the other backup and then restored.
Data de-duplication doesn’t have to be at the level of whole files. Imagine a long and complex document you created and sent to your boss. Your boss simply changed the first three lines and saved it as a document with a different name. These files have different names and different contents, but most of the data (other than the first few lines) is the same. De-duplication software can detect such copies of the data too, and is smart enough to store only one copy of this document in the first backup, and just the differences in the second backup.
The way to detect duplicates is through a mechanism called document fingerprinting. Each document is broken up into smaller chunks. (How to determine what constitutes one chunk is an advanced topic beyond the scope of this article.) Now, a short “fingerprint” is created for each chunk. A fingerprint is a short string (e.g. 16 bytes) that is uniquely determined by the contents of the entire chunk. The computation of a fingerprint is done in such a way that if even a single byte of the chunk is changed, the fingerprint changes. (It’s something like a checksum, but a little more complicated, to ensure that two different chunks cannot accidentally have the same checksum.)
All the fingerprints of all the chunks are then stored in a database. Every time a new document is encountered, it is broken up into chunks, fingerprints are computed, and these fingerprints are looked up in the database of fingerprints. If a fingerprint is found in the database, then we know that this particular chunk already exists somewhere in one of the backups, and the database will tell us the location of the chunk. This chunk in the new file can then be replaced by a shortcut to the old chunk. Rinse. Repeat. And we get 90% savings in disk space. The interested reader is encouraged to google Rabin fingerprinting, shingling, and rsync for hours of fascinating algorithms in this area. Before you know it, you’ll be trying to figure out how to use these techniques to find out who is plagiarising your blog content on the internet.
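Here is a heavily simplified sketch of fingerprint-based de-duplication. It uses fixed-size chunks and a SHA-256 prefix as the fingerprint purely for illustration – real systems use variable-size, content-defined chunks (that’s what the Rabin fingerprinting pointer above is about), and this is not Druvaa’s actual algorithm:

```python
import hashlib

# Toy fingerprint-based de-duplication. Fixed-size chunks and a
# SHA-256 prefix as the "fingerprint" are simplifications for the
# example; real systems use variable-size, content-defined chunks.

CHUNK_SIZE = 8   # absurdly small on purpose; real chunks are kilobytes
store = {}       # fingerprint -> chunk bytes (the de-duplicated store)

def backup(data):
    """Store a document as a list of fingerprints, saving only new chunks."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()[:16]
        if fp not in store:      # a duplicate chunk is stored only once
            store[fp] = chunk
        recipe.append(fp)
    return recipe

def restore(recipe):
    """Reassemble a document from its fingerprint list."""
    return b"".join(store[fp] for fp in recipe)

doc1 = b"hello world, hello backup system"
doc2 = b"hello world, hello backup victim"   # mostly the same content

recipe1, recipe2 = backup(doc1), backup(doc2)
# doc2 re-uses three of doc1's four chunks; only one new chunk is stored.
```

Note the weakness of fixed-size chunking: inserting a single byte at the front of a document shifts every chunk boundary, which is exactly why production systems cut chunks based on content rather than position.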
Back to Druvaa inSync. inSync does fingerprinting at the laptop itself, before the data is sent to the central server. So it is able to detect duplicate content before it gets sent over the slow and expensive net connection and consumes time and bandwidth. This is in contrast to most other systems, which do de-duplication as a post-processing step at the server. At a Fortune 500 customer site, inSync was able to reduce the backup time from 30 minutes to 4 minutes, and the disk space required on the server went down from 7TB to 680GB. (source.)
Again, this was just one example used to give an idea of the complexities involved in building inSync. For more information on other distinguishing features, check out the inSync product overview page.
Have questions about the technology, or about Druvaa in general? Ask them in the comments section below (or email me). I’m sure Milind/Jaspreet will be happy to answer them.
Also, this long, tech-heavy article was an experiment. Did you like it? Was it too long? Too technical? Do you want more articles like this, or less? Please let me know.
It’s the middle of the night, and your prepaid phone runs out of credits, and you need to make a call urgently. Don’t you wish that you could re-charge your prepaid mobile over the internet? Pune-based startup ApnaBill allows you to do just that. Fire up a browser, select your operator (they have partnerships with all major service providers), pay from your bank account or by credit card, and receive an SMS/e-mail with the recharge PIN. Done. They have extended this model to satellite TV (TataSky, Dish), with more such coming out of the pipeline.
PuneTech interviewed co-founder and lead developer Mayank Jain where he talks about various things, from technical challenges (does your hosting provider have an upper limit on number of emails you can send out per day?), to unexpected problems that will slow down your startup (PAN card!), and advice for other budding entrepreneurs (start the paperwork for registration/bank accounts as soon as possible).
On to the interview.
Overview of ApnaBill:
Simply put, ApnaBill.com is an online service for facilitating prepaid and postpaid utility bill payments.
Available now are prepaid utility bill payments, like prepaid mobile recharges and prepaid vouchers for Tata Sky, World Phone, Dish TV, etc.
Organizationally, ApnaBill.com is an offshoot of Four Fractions. It aims at being the single point of contact between service providers and customers, thereby minimizing transactional costs. The benefit of this is passed directly on to our customers, as we do NOT charge our customers any transaction costs. It’s an ApnaBill.com policy and applies to our entire product line.
Apart from regular Utility Bill Payments, we are also exploring some seemingly blue ocean verticals which have not been targeted by the online bill payment sector – yet.
We have managed to structure our business model such that, despite absorbing the transaction costs, we’ll be able to make profits. Margins would definitely be low, but the sheer volume of transactions (which we would attract because of the no-transaction-charge policy) would keep our figures positive.
Moreover, profit generated from transactions is just one revenue source. Once we have good traction, our advertising revenue sources would also become viable.
We are definitely looking at a long term brand building.
Technical Challenges – Overview
Contrary to popular belief, technology is generally the simplest ingredient in a startup – especially because the startup can generally exercise full control over how it is used and deployed. And with increasingly cheap computing resources, this space is becoming even smoother.
However, the following problems were real challenges which we faced and solved.
Being a web 2.0 startup, we faced some major cross browser issues.
Minimizing client side internet connectivity and page display speeds
Thankfully, ApnaBill.com runs Ruby on Rails under the hood – and all the solutions we designed fit right into the grooves.
Technical Challenges – Details
Ruby on Rails is one of the best frameworks a web developer can ask for. Solutions to all the above problems come bundled with it.
To overcome mail caps on shared hosts, we devised our own modules to schedule mails when they would otherwise cross the cap. However, we later discovered that there’s a great Ruby gem – ar_mailer – that does just that. We are planning to make the shift.
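For the curious, the queue-and-defer idea behind such a module looks roughly like this. This is a language-agnostic sketch in Python (the team’s actual code was Ruby, and ar_mailer works differently in detail); the cap value and the send hook are placeholders:

```python
from collections import deque

# Sketch of the mail-cap workaround: queue outgoing mails and send at
# most `cap` per day, deferring the overflow to the next day's batch.
# The cap value and the send hook are placeholders for the example.

class MailScheduler:
    def __init__(self, cap):
        self.cap = cap
        self.queue = deque()

    def enqueue(self, message):
        self.queue.append(message)

    def run_daily_batch(self, send=lambda message: None):
        """Send up to `cap` queued messages; leave the rest queued."""
        sent = 0
        while self.queue and sent < self.cap:
            send(self.queue.popleft())
            sent += 1
        return sent

sched = MailScheduler(cap=2)   # pretend the host allows 2 mails/day
for message in ["welcome", "receipt", "newsletter"]:
    sched.enqueue(message)

first_day = sched.run_daily_batch()    # sends 2, defers 1
second_day = sched.run_daily_batch()   # sends the remaining 1
```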
If you look at our homepage, no JS loads while the page itself is loading. Once the page has loaded, we initiate a delayed JS load which renders our news feed at the end.
Database versioning (migrations) is a built-in feature of Rails. We can effectively revert ApnaBill.com to any previous version (in terms of functionality) with standard Rails framework procedures.
Integrating the various vendors and services was easily the biggest challenge we overcame during the (almost) 9-month development cycle of ApnaBill.com.
Getting the organization up and running was another big challenge. The paperwork takes a lot of valuable time – which if visioned properly, can be minimized to a manageable amount.
Payment Gateways are a big mess for startups. They are costly, demand huge chunks of money for security deposits and have very high transaction costs. Those who are cheap – lack even the basic courtesy and quality of service. Sooner or later, the backbone of your business becomes the single most painful factor in your business process – specially when you have no control over its functioning.
Thankfully, there are a few payment gateways which are above all of this. We hope to make an announcement soon.
The process of founding ApnaBill:
When and how did you get the idea of founding ApnaBill? How long before you finally decided to take the plunge and start in earnest? What is your team like now?
In June 2007, one of the founding members of Four Fractions saw a friend of his cribbing about how he could not recharge his prepaid mobile phone from the comfort of his home. He had to walk about 1 km to the nearest local shop to get his phone connection recharged.
This idea caught the founder’s attention and he, along with others, formed Four Fractions on 20th December ’07 to launch ApnaBill.com as one of their flagship products.
ApnaBill.com was opened for public transactions on 15th June ’08. The release was a birthday present to ApnaBill.com co-founder’s mom.
Our team is now 5 people strong, spread across New Delhi and Pune. As of now, we are self funded and are actively looking for seed funding.
What takes most of the time:
As I mentioned earlier, getting various services integrated took most of the time. If we had to just push out our own product (minus all collaborations), it would have taken us less than 3 months.
There was this funny thing that set us back by almost 1 month…
We applied for a PAN card for Four Fractions. First, our application somehow got lost in the process. Then someone in the government department managed to put down our address as 108 when it was supposed to be 10 B (8 and B look very similar).
None of us ever envisioned this – but it happened. We lost a precious month sorting this issue out. And since all activities were dependent on official papers, other things like bank accounts, payment gateway integrations etc. also got pushed back. But I am glad we sorted this out in the end. Our families supported us through it all the way.
Processes like creating bank accounts and getting PAN cards are still very slow and manual in nature. If we can somehow improve them, the ecosystem can prove very helpful for budding startups.
About the co-founders:
There are 3 co-founders of ApnaBill.com:
Sameer Jain: Sameer is the brain behind our revenue generation streams and marketing policies. He is a postgraduate from Delhi University in International Marketing.
Sandeep Kumar: Sandeep comes from a billing (technical) background. He brings with him vast knowledge of billing processes and solid database know-how.
Myself (Mayank Jain): I come from a desktop application development background. I switched to Ruby on Rails almost 18 months ago, and since then I have been a devoted Ruby evangelist and Rails developer.
Luckily, we have a team which is just right, with two polarizing ends – Sandeep and Sameer. One is constantly driving the organization to minimize costs while the other is driven towards maximizing revenue from all possible sources. I act as the glue between the two. Together, we are constantly driving the organization forward.
About selection for proto.in:
Proto.in was the platform we had been preparing for for almost 2 months. We decided our launch dates such that we would launch and be LIVE just in time for Proto.in.
Being recognized for your efforts is a big satisfaction.
Proto.in was also a huge learning experience. Interacting directly with our potential users gave us insight into how they perceive ApnaBill.com and what they want out of it. We also came across some interesting revenue generation ideas while interacting with the startup veterans at Proto.
There are a lot of people who are currently doing a job somewhere, but who harbor a desire to start something on their own. Since you have already gone that route, what suggestions would you have for them?
Some tips I would like to share with my peer budding entrepreneurs…
Focus, focus and focus!
If you are an internet startup, book your domain before anything and get the right hosting partner.
Start the paperwork for firm/bank accounts registration as soon as possible.
Write down your financial/investment plan on paper before you start. Some plan is way better than a no plan!
Adopt proper development process for the tech team. With a process in place, development activities can be tracked rationally.
Get someone to manage your finances – outsourcing is a very attractive option.
The most important factor for a startup, above everything else, is to keep fighting through adverse scenarios. Almost everything will spring up in your face as a problem. But a team which can work together to find a solution makes it to the end.
Just remember, more than the destination, it is the journey that would count.
Registration and Fees: This event is free for everyone. Register here.
DevCon is a Developer Conference from the developers, by the developers, and for the developers. Developers may be professionals or students, who represent the next generation of developers. The agenda has been determined through voting. For information about the expected presenters, look here. DevCon 2007 attracted 1200 people.
Featured Products/Topics: Windows Embedded, Windows Mobile, Microsoft Office SharePoint Server, .NET 3.5, Visual Studio 2008, Silverlight, Expression Studio, Microsoft SQL Server 2008, Security
Recommended Audiences: IT Professionals, Microsoft Partners, Solution Architects, Software Developers, Students, Technical Decision Makers, Developers, Architects
Jhumkee: This field started around World War II, with aircraft accidents. Instead of saying that pilots are idiots, the engineers decided to change the design so that mistakes don’t happen. Instead of engineers designing a system by themselves, involve the users in the process. Don’t just think about what they want. Instead, ask them. Or watch them using the product.
Military, aerospace, and other fields really embraced this field. In India, this is a fairly new field. Especially in IT.
But it is common sense.
Shashank: In the era of electronics and IT, it is very easy to put in new features. This is a problem. In general, in most product companies, engineers first create a product, and then go around looking for users who are interested in that product.
But adding features normally reduces usability. So, especially for small startups, there is a choice to make – add features or add usability?
Harrshada: How did you start your startup? Did you find a need and try to fill it, or did you have a cool technology/algorithm that you wanted to implement? Usability says that you should always have a target audience in mind, and work towards solving their problems. Your technology is not the important part. Constantly be in touch with users and keep observing them.
It’s rather trivial to say that we should keep users in mind when designing the product. But how do you actually go about this?
Jhumkee: You must get a real user, and then there are a number of techniques that are used to get information out of the user. First of all: you are not a user. Many designers of systems are under the impression that they are a user, because they actually use their own product. In fact, Steve Yegge argues passionately that you should only build products that you yourself want. But the problem is that as you design the system, you become an expert. You know everything about the system. You are not a regular user. Hence, you must spend time with real users.
Shashank: There is a science behind this. There are a bunch of techniques for doing this. Some of them are obvious, and some hidden means by which you can get usability information out of users. You need to think through this process. But it doesn’t have to be anything very fancy. Interview your users. Ask open ended questions about what they were trying to achieve, what they felt, what made them happy, and what frustrated them. Use this to determine some broad areas of concern, and then start digging deeper.
Jhumkee: There is no silver bullet here. Some of this comes from experience, and a lot of it varies from case to case. But there are some broad guidelines. It must be an iterative process. Make changes. Test with real users. Repeat.
There are a lot of guidelines on individual things (e.g. font sizes, navigation architecture, accessibility factors). But you can’t simply apply them without a deeper understanding, because usability is a holistic thing. Even if the parts are all OK, the whole system might still not be very usable.
RouteGuru: Usability is a huge issue for us. How do we present information about an entire route in SMS form, and do it in a way that the route gets built up in the user’s head? Another big hassle is the 80-20 problem. The last mile is significantly more complicated than the rest of the directions. Also, some users are only interested in the last mile, since they already know how to get to the general vicinity. Others want all the directions. We are still grappling with this issue.
Somebody I don’t know: For usability, keep only one action per page. One page should be for one purpose only (except the home page). If there is a form, there should be only one button. Use a tool from Google that serves two layouts of the same page to different users, and then study their behavior. Use this information to decide what works and what doesn’t.
Shashank: This last technique is a very quantitative mechanism. Analytics, heat-maps, etc. give you a lot of data. You don’t always know how to use this data. The world is moving towards qualitative analysis.
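The two-layouts technique described above can be sketched in a few lines. This is a hypothetical illustration, not the actual tool mentioned in the discussion: each user is deterministically assigned to one of two layouts, and conversions are tallied per variant. The function names and sample data are my own.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Hash the user id so the same user always sees the same layout."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2
    return "layout_a" if bucket == 0 else "layout_b"

def conversion_rates(events):
    """events: list of (user_id, converted) pairs from page visits."""
    stats = {"layout_a": [0, 0], "layout_b": [0, 0]}  # [conversions, visits]
    for user_id, converted in events:
        variant = assign_variant(user_id)
        stats[variant][1] += 1
        if converted:
            stats[variant][0] += 1
    return {v: (conv / n if n else 0.0) for v, (conv, n) in stats.items()}

# Illustrative traffic: which layout converts better?
events = [("u1", True), ("u2", False), ("u3", True), ("u4", False)]
print(conversion_rates(events))
```

Hashing the user id (rather than picking randomly per visit) keeps each user's experience consistent across visits, which is what makes the per-variant comparison meaningful.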
Manas: Users don’t always know what they want. So how do you handle this?
Jhumkee: What you do is task-based analysis. Find out what the users want to do, and then figure out how long it takes them to do it, and whether they get frustrated doing it, and whether they are successful or not. This will give you good insights. So the real work is in figuring out what these tasks should be.
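The task-based analysis described above can be made concrete with a small sketch. This is my own illustration, not something from the discussion: given timed attempts at a single task, compute the success rate and the average time-to-complete among successful attempts.

```python
def task_metrics(attempts):
    """attempts: list of (seconds_taken, succeeded) tuples for one task."""
    if not attempts:
        return {"success_rate": 0.0, "avg_time_success": None}
    success_times = [t for t, ok in attempts if ok]
    return {
        # Fraction of users who completed the task at all.
        "success_rate": len(success_times) / len(attempts),
        # Average time among those who succeeded (None if nobody did).
        "avg_time_success": (sum(success_times) / len(success_times)
                             if success_times else None),
    }

# Three observed attempts: two successes (45s, 60s), one failure.
print(task_metrics([(45.0, True), (120.0, False), (60.0, True)]))
# success_rate = 2/3, avg_time_success = 52.5 seconds
```

In practice the hard part, as noted above, is choosing which tasks to measure; the arithmetic itself is trivial.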
Unfortunately, I had to leave the meeting at this stage to get back to my kids. Hopefully I’ll be able to fill in the gaps with notes taken by someone else.
The Indian Express reports that Pune is leading the nation in the use of IT for governance. Recently, it set up kiosks where citizens can pay property taxes and get copies of birth and death certificates. This initiative has received an award from the state government, and Vansh Infotech, the company implementing the kiosks, has promised to expand this concept to other cities.
Excerpts from the article:
This is the second time that the PMC initiative of information technology-enabled services (ITES) has won accolades. In December last year, the civic body earned the international honour of the World Leadership Forum for its Auto-DCR building plan approval software. The Auto-DCR was later in demand from various municipal corporations across the country.
Going a step further, he said that they had introduced additional features at kiosks for payment of electricity bills to MSEDCL and bills of select mobile telephone services. They are also planning to include payment of Direct to Home (DTH) television subscriptions and allow checking of railway and air ticket availability.
There are around 70 such kiosks being installed in various places of the city, with a monthly average collection of Rs 5 crore for the PMC, Dudhedia said, adding that the number of kiosks being installed would be 90 by November-end. A publicity campaign would be undertaken for further promotion of the services available at the kiosks.
Meanwhile, the PMC is also planning to provide an innovative facility for citizens to contact the police during emergency situations like terrorist actions or accidents. The civic administration plans to install emergency buttons at the kiosks which can be used to directly get in touch with the nearest police station.
Registration and Fees: This event is free for everyone, but you must register by sending an e-mail to manasgarg at NOSPAM gmail dot com
The general agenda is to have a free-wheeling discussion on various aspects of UI development, including (but of course not limited to) tools/methodologies for quick prototyping, usability aspects, etc. Jhumkee Iyengar, Shashank Deshpande, and Harrshada Deshpande (with a combined experience of 40+ years in design and usability) have graciously agreed to be present to guide the discussion.
Jhumkee Iyengar has been doing design and usability since 1988, in IT, manufacturing and other industries, most recently in Persistent Systems, where she created and grew the usability group. She also launched usability in e-Governance and is responsible for improvements in PMC’s websites. She is also a presenter for the Nielsen Norman Group, and conducts usability workshops all over the world.
Shashank Deshpande has been in the field of IT usability for 15+ years (yes, he has been doing usability since before it became a known/popular field in India). He was the head of usability at Symantec India (formerly Veritas) for 9 years. Just this week, he returned from conducting a 4-day workshop on usability at Yahoo! India. For more information about Shashank, see his LinkedIn profile.
Harrshada Deshpande (not related to Shashank!) has also agreed to be present to guide the discussion. Harrshada has 9 years of experience managing user experience design in the IT industry – most recently in SAS R&D. She also organized the hugely successful IdeaCamp Pune.
What: Pune Google Technology Users Group (Pune GTUG) presents a seminar on GWT (the Google Web Toolkit).
When: Saturday, 23rd August. 1:30pm to 5pm
Where: Synerzip. Dnyanvatsal Commercial Complex, Survey No. 23, Plot No. 189, Near Mirch Masala Restaurant, Opp Vandevi Temple, Karve Nagar (Map). Registration and Fees: The event is free for all, but you must register here.
The “GWT in Depth” seminar will brush up on GWT basics and then jump to practical use of GWT. Attendees are expected to know the basic concepts of GWT. The seminar will include the following:
There is no single comprehensive source of information for all the events in Pune that are of interest to the Technology community. The PuneTech events page only carries information about events coming up soon. IT Vidya has an Events page but that is for events all over the country, and is also not comprehensive enough. Also, both of these are more like blogs than an events calendar, and are missing many features that a calendar should have.
On the suggestion of Freeman Murray, we have started using upcoming.org as an events calendar for tech events in Pune. The Pune Tech Events Group on Upcoming will track all the tech events in Pune. This is a free, non-commercial, community driven initiative. Anybody can join the group. Anybody can add events. Anybody can subscribe to get updates.