Category Archives: Technology

Conference report: The 4th IndicThreads conference on Java Technologies

(The IndicThreads conference on Java Technologies was held in Pune last weekend. This conference report by Dhananjay Nene was published on his must-read blog and is re-published here with permission. The slides used during the presentations can be downloaded from the conference website here and are also linked to in context in Dhananjay’s report below. In general, PuneTech is interested in publishing reports of tech events and conferences that happen in Pune, as long as they go into sufficient technical depth, and especially if links to slides are available. So please do get in touch with us if you have such a report to share.)

The annual IndicThreads.com Java technology conference is Pune's best conference on matters related to Java technologies. I looked forward to attending it and was not disappointed in the least. The latest edition was held just a few days ago, on Dec 11th and 12th, and this post reviews my experiences there.

As with any conference, something or the other usually isn't quite working in the morning, and I soon discovered that the wireless network was being swamped by usage. There were some important downloads that needed to be completed, so my early morning was spent getting those done, which meant I missed most of Harshad Oak's opening session on Java Today.

The next one I attended was Groovy & Grails as a modern scripting language for Web applications by Rohit Nayak. However, I soon discovered that it (at least initially) was a small demo of how to build applications using Grails. Since that was something I was familiar with, I moved to the alternative track in progress.

The one I switched to even as it was in progress was Java EE 6: Paving the path for the future by Arun Gupta. Arun had come down from Santa Clara to talk about the new Java EE 6 spec and its implementation by Glassfish. He covered a number of new or changed features in Java EE 6 in sufficient detail for anyone excited by them to explore further: web fragments, the web profile, EJB 3.1 Lite, increased usage of annotations (so that web.xml is now optional), and a number of points on the specific JSRs now part of Java EE 6. The things that excited me about Glassfish were OSGi modularisation, programmatic control of specific containers (e.g. Servlet, JRuby/Rails etc.), embeddability, and lightweight monitoring. The one that excited me the most, however, was the support for hot deployment of web apps in development mode: the IDE automatically notifies the running web app, which in turn reloads the modified classes (even as the sessions continue to be valid). The web app restart cycle, on top of the compile cycle, was always one of my biggest gripes with Java (second only to its verbosity), and that seems to be going away.
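For a flavor of how the annotations make web.xml optional: under the Servlet 3.0 spec that is part of Java EE 6, a servlet can be declared entirely in code. Here is a minimal sketch (written in Scala against the standard javax.servlet API; the class name and URL pattern are illustrative, not from Arun's talk):

    import javax.servlet.annotation.WebServlet
    import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

    // Servlet 3.0 (Java EE 6): the @WebServlet annotation replaces the
    // <servlet> and <servlet-mapping> entries that previously lived in web.xml.
    @WebServlet(urlPatterns = Array("/hello"))
    class HelloServlet extends HttpServlet {
      override protected def doGet(req: HttpServletRequest, resp: HttpServletResponse): Unit = {
        resp.setContentType("text/plain")
        resp.getWriter.println("deployed with no web.xml")
      }
    }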

I subsequently attended Getting started with Scala by Mushtaq Ahmed from Thoughtworks. Mushtaq is a business analyst and not a professional programmer, but has been keenly following the developments in Scala for a couple of years (and, as I later learnt, Clojure as well). Unlike a typical language-capability survey, he talked only about using the language for specific use cases, a decision which I thought made the presentation extremely useful and interesting. The topics he picked were (a) functional programming, (b) DSL building, and (c) OOP, only if time permitted. He started with an example of programming/modeling the Mars Rover movements using functions and higher-order functions. Looking back, I think he spent less time than he could have on the transition from the requirements to the code constructs, and on what he was specifically setting out to do with higher-order functions. The demonstrated code was nevertheless interesting and showed some of the power of Scala when used to write primarily function-oriented code. The next example he picked was a Parking Lot attendant problem, where he started with Java code that was a typical implementation of the strategy pattern. He then took it through 7-8 increasingly functional implementations in Scala. This one was much easier to understand, and again demonstrated Scala's power for functional programming quite well. On to DSLs: Mushtaq wrote a simple implementation of a "mywhile", a classical "while" loop, as an example of using Scala for writing internal DSLs. Finally he demonstrated the awesome power of the built-in support for parser combinators for writing an external DSL, and also showed how a particular Google Summer of Code problem could be solved using Scala (again by writing an external DSL). A very useful and thoroughly enjoyable talk. (Here is a link to the code used in this presentation. -PuneTech)
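To give a flavor of the internal-DSL idea: a user-defined "mywhile" is possible because Scala supports by-name parameters and curried parameter lists, which let a library function look like built-in control syntax at the call site. A minimal sketch along those classic lines (not Mushtaq's actual code, which is linked above):

    object MyWhileDemo extends App {
      // Both parameters are by-name (the "=>" in the type): they are passed
      // unevaluated and re-evaluated on every use, so the condition is
      // re-checked and the body re-run on each iteration.
      def mywhile(cond: => Boolean)(body: => Unit): Unit =
        if (cond) { body; mywhile(cond)(body) }

      var i = 0
      mywhile(i < 3) {
        println(i)
        i += 1
      }
    }

Because the recursive call is in tail position, the Scala compiler turns it into a loop, so this behaves just like the built-in while.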

The brave speaker for the post-lunch session was Rajeev Palanki, who dealt with both IBM's overall direction on Java and, briefly, the My developerWorks site. In his opinion, Java was now (post JDK 1.4) on the plateau of productivity after all the early hype, and IBM was focused on scaling up, scaling down (making Java easier to use at the lower end), open innovation (allowing for more community-driven innovation) and real-time Java. He emphasised IBM's support for making Java more predictable for real-time apps, and stated that Java was now usable for mission-critical applications, referring to the fact that Java is now used on a US Navy destroyer. He referred to IBM's focus on investing in Java tooling that works across different JRE implementations, such as GCMV, MAT, and the Java Diagnostic Collector. Finally he talked about the IBM My developerWorks site, at one stage referring to it as the Facebook for Geeks.

The next session was Overview of Scala Based Lift Web Framework by Vikas Hazrati, Director, Technology at Xebia. Another thoroughly enjoyable session. Vikas dealt with many aspects of the Lift web framework, including the mapper, snippets, and the usage of actors for Comet support. I was especially intrigued by snippets, which act as a bridge between the UI and the business logic: they have a separate abstraction of their own in the framework, and the constructs and functionality in that layer are treated quite differently from other frameworks.
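For readers who haven't seen Lift: a snippet is, at its simplest, an ordinary class whose methods transform the template markup that invokes them. A minimal sketch in the style of the Lift documentation of that era (the class and markup names are illustrative, not from Vikas's talk):

    import scala.xml.NodeSeq

    // Invoked from a template as <lift:HelloWorld.howdy />; Lift locates the
    // class by convention and splices the returned NodeSeq into the page,
    // keeping business logic out of the markup itself.
    class HelloWorld {
      def howdy: NodeSeq = <span>Welcome to Lift, it is {new java.util.Date().toString}</span>
    }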

I subsequently attended Concurrency: Best Practices by Pramod Nagaraja, who works on the IBM JRE and owns the java.nio packages (I think I heard him say owns). He talked about various aspects of and best practices for concurrency, and one of the better parts of the talk was how seemingly safe code can end up being unsafe. He finished his session well in time for me to run over and attend the latter half of the next presentation.
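A standard illustration of how seemingly safe code goes wrong (my example, not Pramod's) is a check-then-act sequence on a shared map: each operation looks harmless, but the gap between the check and the act is a race. A sketch, with one broken and one safe variant:

    import java.util.concurrent.ConcurrentHashMap
    import scala.collection.mutable

    class Cache {
      private val store = mutable.Map.empty[String, String]

      // Broken under concurrency: two threads can both pass the contains()
      // check and both compute and insert, or interleave between check and act.
      def getOrCompute(key: String): String = {
        if (!store.contains(key)) store.put(key, expensive(key))
        store(key)
      }

      private val safe = new ConcurrentHashMap[String, String]()

      // Safer: putIfAbsent is a single atomic operation, so exactly one
      // thread's value wins and every other thread sees that winner.
      def getOrComputeSafe(key: String): String = {
        val existing = safe.get(key)
        if (existing != null) existing
        else {
          val computed = expensive(key)
          val prev = safe.putIfAbsent(key, computed)
          if (prev != null) prev else computed
        }
      }

      private def expensive(k: String): String = k.reverse // stand-in computation
    }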

Arun Gupta conducted the session Dynamic Languages & Web Frameworks in GlassFish, which covered the support for various non-Java environments in Glassfish, including Grails/Groovy, Rails/JRuby, Django/Python et al. The impression I got was that Glassfish is taking support for non-Java applications extremely seriously and is dedicating substantial effort to making Glassfish the preferred platform for such applications as well. Arun's blog, Miles to go …, is most informative on a variety of Glassfish topics, both Java and non-Java related.

The last talk I attended during the day was Experiences of Fully Distributed Scrum between San Francisco and Gurgaon by Narinder Kumar, again from Xebia. Since a few in the audience were still not aware of agile methodologies (gasp!), Narinder gave a high-level overview before proceeding to the specific challenges his team had faced in implementing scrum with one team based in Gurgaon, India and another in San Francisco, US. To be explicit, he wasn't describing the typical scrum-of-scrums approach, but rather a mechanism wherein the entire set of distributed teams is treated as a single team with a single backlog and common ownership. This required some adjustments; for example, when there were no overlapping working hours, only one person from one location and everyone from the other would take part in a scrum meeting. A few other such process adjustments were also described. The presentation ended with some strong metrics showing how productivity was maintained even as the activities moved from a single location to a distributed model. Both during the presentation and afterwards, Narinder described some impressive associations with senior Scrum visionaries, and serious interest in their modified approach from some important companies. One limitation I could see in the model was that it is probably better geared to situations where developers are in only one of the two locations (offshoring); I suspect it would be harder to make it work with developers located across all locations (though that could just be my view).

The second day started with a panel discussion on the topic Turning the Corner between Arun Gupta, Rohit Nayak and Dhananjay Nene (that's yours truly), moderated by Harshad Oak. It was essentially a discussion about how we saw Java and many non-Java technologies evolving over the next few years. Suffice it to say that one of the strong points of agreement was the arrival of Java the polyglot platform, as distinct from Java the language.

The next session was Developing, deploying and monitoring Java applications using Google App Engine by Narinder Kumar. A very useful session describing the characteristics, opportunities and challenges of using Google App Engine as the deployment platform for Java-based applications. One of the takeaways was that, subject to specific constraints, it is possible to use GAE as the deployment platform without creating substantial lock-in, since many of the Java APIs are supported by GAE. However, there are a few gotchas along the way in terms of specific constraints, e.g. the restrictions on joins.
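For context, here is a sketch of what working within those constraints looks like against App Engine's low-level datastore API (the portable Java persistence layers discussed in the session sit on top of this; the entity kind and properties below are made up):

    import com.google.appengine.api.datastore.{DatastoreServiceFactory, Entity, Query}

    object DatastoreSketch {
      def storeAndQuery(): Unit = {
        val datastore = DatastoreServiceFactory.getDatastoreService

        // Entities are schemaless property bags grouped into "kinds".
        val greeting = new Entity("Greeting")
        greeting.setProperty("author", "punetech")
        greeting.setProperty("content", "hello from GAE")
        datastore.put(greeting)

        // Queries run against a single kind; there are no SQL-style joins,
        // which is one of the gotchas mentioned above.
        val it = datastore.prepare(new Query("Greeting")).asIterable().iterator()
        while (it.hasNext) println(it.next().getProperty("content"))
      }
    }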

I must confess to having been a little disappointed with Automating the JEE deployment process by Vikas Hazrati. He went into great depth on the considerations a typical J2EE deployment-monitoring tool should take care of, and clearly demonstrated having spent a lot of time thinking through the issues. However, the complexities he addressed got into realms that only a professional J2EE deployment tool writer would get into, which made the talk a little less interesting for me. Besides, there was another interesting talk going on simultaneously which I was keen to attend.

The other talk, which I switched to halfway through, was Create Appealing Cross-device Applications for Mobile Devices with Java ME and LWUIT by Biswajit Sarkar (who has also written a book on the topic). While keeping things simple, Biswajit explained the capabilities of Java ME. He also described LWUIT, which allows the creation of largely similar UIs across different mobile platforms. He explained that while default Java ME uses native rendering, leading to a differing look and feel across handsets (just like Java AWT), LWUIT allows for a Java Swing-like approach where the rendering is performed by the LWUIT library (did he say around 300kb??), thus allowing a more uniform look and feel. He also showed sample programs and how they worked using LWUIT.

Allahbaksh Asadullah then conducted the session on Implementing Search Functionality With Lucene & Solr, where he talked about the characteristics and usage of Lucene and Solr. It was explicitly addressed to absolute beginners on the topic (an audience I could readily identify with) and walked us through the various characteristics of search, the different abstractions, how these abstractions are modeled through the API, and how some of them can be overridden to implement custom logic.
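As a taste of those abstractions, here is a minimal index-and-search sketch against the Lucene 3.x API of that era; class names and signatures changed in later Lucene versions, so treat this as illustrative only:

    import org.apache.lucene.analysis.standard.StandardAnalyzer
    import org.apache.lucene.document.{Document, Field}
    import org.apache.lucene.index.IndexWriter
    import org.apache.lucene.queryParser.QueryParser
    import org.apache.lucene.search.IndexSearcher
    import org.apache.lucene.store.RAMDirectory
    import org.apache.lucene.util.Version

    object LuceneSketch extends App {
      val dir = new RAMDirectory()
      val analyzer = new StandardAnalyzer(Version.LUCENE_30)

      // Index a single document with one analyzed, stored field.
      val writer = new IndexWriter(dir, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED)
      val doc = new Document()
      doc.add(new Field("body", "Lucene is a search library from Apache",
                        Field.Store.YES, Field.Index.ANALYZED))
      writer.addDocument(doc)
      writer.close()

      // Parse a free-text query and run it against the index.
      val searcher = new IndexSearcher(dir, true)
      val query = new QueryParser(Version.LUCENE_30, "body", analyzer).parse("search")
      val hits = searcher.search(query, 10)
      println("matching documents: " + hits.totalHits)
      searcher.close()
    }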

How Android is different from other systems – An exploration of the design decisions in Android by Navin Kabra was a session I skipped. However I had attended a similar session by him earlier so hopefully I did not miss much.

Navin did, however, contribute occasionally to the next session, Java For Mobile Devices – Building a client application for the Android platform by Rohit Nayak. Rohit demonstrated an application he is working on, along with much of the code behind it, using Eclipse and the Android plugin. A useful insight into how an Android application is constructed.

As the event drew to a close, the prizes were announced, including those for the IndicThreads Go Green initiative. A thoroughly enjoyable event, leaving me even more determined to attend next year's session and make it three in a row.

(Comments on this post are closed. Please comment at the site of the original article.)

PuneChips Editor’s Blog – SystemVerilog and Designer Productivity

The most recent PuneChips event was easily the most successful one in the short history of the group. Over 50 engineers attended the "SystemVerilog" talk by Clifford Cummings, President of Sunburst Design and SystemVerilog industry guru. A big thank you is in order to the folks who made this possible: first off, Parag Mehta of Qlogic for connecting us with Cliff; secondly, in addition to Parag, Pravin Desale and Deepak Lala of LSI, and Jagdish Doma of Virage Logic for driving the attendance. Last, but not least, we must also thank Cliff for taking us through a complex topic in a very engaging manner. Cliff held the audience's rapt attention through an hour of highly technical discussion, and the Q&A session was also very engaging. Of course, Cliff being the industry celebrity that he is, he was mobbed by engineers asking questions after his speech.

SystemVerilog is clearly targeted at improving designer productivity. Lagging productivity in the face of increasing design complexity is one of the biggest challenges faced by chip designers today, and it is not at all surprising that the EDA tool industry is focused on rectifying this. The chart below (source: SEMATECH) shows a rather grim picture: while design complexity has been growing at 58% CAGR, productivity has been increasing at only 21% CAGR. It is obvious that tools that fill this gap will be in great demand.

Failing Designer Productivity (Source: SEMATECH)

The reasons for increasing design complexity are manifold: shrinking geometries allow designers to add more and more elements to the chip, making the entire process challenging. The number of IP cores per chip has grown from ~30 in 2003 to over 250 in 2006, and possibly many more today (source: EETimes). In addition, a big bull's eye has been painted on power consumption numbers, and most chips must now be designed using low-power techniques. Increasing complexity also makes chip verification harder; 50% of all ASIC designs today require respins due to functional/logic errors (source: Collett International Research).

Rather than a single solution, it is likely that a multitude of innovative solutions addressing individual problems will emerge. For example, better modeling techniques that give an accurate QoR estimate at the architecture stage itself can reduce design complexity downstream. Languages such as SystemVerilog literally reduce the lines of code that a designer or verification engineer must write, thus boosting productivity. The time may also be right for ESL design, which has been around for a while, as conventional techniques fail to keep up.

All in all, we live in very interesting times. Faster and smaller is not always for the better. The industry must innovate and rise up to the economic and design challenges if it is to survive and prosper.


Pune’s KQInfoTech is porting Sun’s ZFS File-System to Linux

Pune-based KQInfoTech is working on porting Sun's ZFS file-system to the Linux platform. ZFS is arguably one of the best file-systems available today, and Linux is one of the most widely used server operating systems among new startups, so having ZFS available on Linux would be great. And with many years of experience at Veritas building VxFS, another of the best file-systems in the world, the founders of KQInfoTech have the technical background to do a good job of this. Check out the full announcement on their blog:

We have ZFS building as a module and the following primitive operations are possible.

  • Creating a pool over a file (devices not supported yet)
  • Zpool list, remove
  • Creating filesystems and mounting them

But we are still not at a stage, where we can create files and read and write to them

See the full article for more details and some interesting issues related to the license compatibility between ZFS and Linux.

About KQInfoTech

Pune-based KQInfoTech is an organization started by Anurag Agarwal and Anand Mitra, both of whom chucked high-paying jobs in the industry because they felt a desperate need to work on the quality of students being churned out by our colleges. For the last 2 years or so, they have been trying various experiments in education at the engineering college level. All their experiments are based on one basic premise: students' ability to pay should not be a deterrent; in other words, the offerings should be free for the students, and KQInfoTech focuses on finding alternative ways to pay for the costs of running the course. See all PuneTech articles related to KQInfoTech for more details.


ASIC Verification: Trends and Challenges

(This is a guest post for PuneTech by Arati Halbe, who has close to 9 years experience in ASIC front end design and verification. Post silicon validation and FPGA prototyping is her recent area of interest and expertise. Arati has worked with Wipro Technologies and Conexant Systems. Arati did her B.E. from University of Pune and M.Tech from CEDT, Indian Institute of Science, Bangalore. See Arati’s linked-in profile for more details.)

As the complexity of integrated circuits (specifically ASICs and SoCs) increases, and as device geometries keep shrinking, the task of testing a chip gets more and more challenging. Engineers need to come up with better and different methodologies to ensure that what goes to the factory for manufacturing is actually what they intended to deliver. Verification occurs at various stages in the ASIC development cycle, and how much is enough at each stage is a question that needs to be addressed on a case-by-case basis. A sound knowledge of the various techniques, and an awareness of the capabilities and limitations of each, goes a long way in making decisions about when, where and what.


Keeping this in mind, PuneChips had verification expert Jagdish Doma talk about "ASIC verification: Trends and Challenges" on 20th August 2009. Though attendance was impacted by the H1N1 scare, we had a small but diverse audience. Jagdish discussed in detail the strengths and limitations of the various techniques, viz. ESL, formal verification, dynamic simulation, FPGA prototyping and emulation.

ESL or Electronic System Level testing is the newest trend. Supporters of ESL claim that it is a highly powerful system level modeling tool. It enables fast software bring-up if combined with an emulation/FPGA prototyping platform. ESL has been used successfully to validate systems for mobile applications where only one peripheral/application is active on the processor bus. ESL does not seem suitable for systems where multiple processes and interfaces are active simultaneously, like for example in a networking system.

Formal verification, a static verification technique which is mainly assertion-based, is useful for checking control paths; it cannot be used to verify datapaths. Dynamic simulation is a very effective way of verifying the functionality of every block in the ASIC, including the datapath. Gate-level simulations, performed after the back-annotated placement and routing data is available, are used to identify timing-related issues or omissions/errors in stating multi-cycle paths.

The need to find hardware bugs as early as possible in the ASIC lifecycle drives the emulation and/or FPGA prototyping effort. Both techniques enable the testing of scenarios which are generally not possible to test in dynamic functional verification, well before the actual silicon comes back from the fab. Emulation or prototyping also accelerates software ramp-up: the software team can have a development platform ready well before the actual chip is available. Emulation involves running test cases on hardware-accelerated platforms like Palladium from Cadence and Veloce from Mentor. For FPGA prototyping, single or multiple FPGAs are used to build a PCB system targeted at testing the ASIC/SoC; the ASIC code is then fully or partially programmed onto the FPGA(s) and the functionality can be tested.

Scenarios with much longer simulation times than normal functional simulation allows can be run on emulation platforms, and all the internal signals are available for viewing and debug, just as in functional simulation. The FPGA prototype platform does enable longer test times, but the debugging available is limited. The hardware accelerators are costly, and investing in them makes sense if a company has a lot of ASIC programs running simultaneously; for companies which have similar chips planned back to back, investing in a home-grown FPGA-based emulation/prototyping platform makes sense. Another advantage of FPGA prototyping is that the RTL goes through a complete synthesis and place-and-route cycle, so testing is done on a circuit that is as close to the real ASIC as possible.

Ensuring that a bug-free product reaches the customer is a complex activity and poses multiple challenges. Coverage, legacy code and repeatability are issues that need to be tackled. Ensuring that coverage is at an acceptable level is important. Code coverage is run to find out whether all the paths through the code are exercised by the test suite; simulators from Cadence (IUS), Synopsys (VCS) and Mentor (ModelSim) have their own code coverage analyzers. Functional coverage determines whether each feature listed in the specification for an ASIC/SoC has been verified. It is essential that the functional specification document has an individually numbered paragraph for each feature so that traceability is easier. Functional coverage is an activity that needs planning, reviews and careful test case design. Methodologies like eRM (e reuse methodology, Specman-based) and OVM (open verification methodology, SystemVerilog-based) do assist in checking functional coverage, but the inputs provided need careful specification and review.

Reviews, not just of coverage but at every stage in the ASIC cycle, are extremely important. A common pitfall in ASIC design is that the hardware team assumes certain behavior will be handled in software, while the software team expects those same things to be taken care of in hardware. It is therefore very important to involve members from the design, verification, architecture, and software & firmware teams in verification reviews.

It takes a good amount of effort to build a verification environment, and it is very common for a team to reuse what has worked before when schedules are demanding. A legacy environment saves a lot of time, but it also handicaps the team. Talking of saving time, efficiency goes a long way in shrinking schedules: an initial investment of time and effort in automating repetitive tasks saves a lot of time later, and the use of reusable methodologies will definitely save time and effort.

Finally, while choosing the verification flow for a given ASIC, the team needs to look at the available resources and time, understand the end-user requirements, and decide which technique to employ at which stage.


Musings on why Cloud Computing will prevail…


Today’s post is a guest post by Suhas Kelkar. Suhas leads the Innovation & Incubation Lab at BMC Software India. Prior to BMC he was the Vice President of Product Management at Digite, an enterprise software company in the field of Project Portfolio Management. See his linked-in profile for details.

In the recent Hype Cycle for Cloud Computing 2009 special report by Gartner, the technologies at the 'Peak of Inflated Expectations' include Cloud Computing! (For a description of the five phases of the Hype Cycle, look here.) This means that Cloud Computing is on the verge of entering the "Trough of Disillusionment" phase. Many technologies have been unable to come out of this dreaded trough, where they fail to meet expectations and quickly become unfashionable. Articles such as "Could the cloud lead to an even bigger 9/11" clearly indicate that Gartner's analysis is right and that cloud computing has indeed reached the peak of hype!

This article has my musings on why cloud computing will eventually come out of this phase and reshape the way we run business.

Hype Cycle for Cloud Computing 2009

I had an opportunity to attend the VMworld 2009 conference. During the conference, VMware announced its latest initiative, vCloud, which essentially uses VMware's virtualization technology to create an ecosystem of cloud service providers. With this initiative VMware joins the already crowded space of public cloud providers such as Amazon, Rackspace Cloud and Savvis. Almost every exhibitor at VMworld was trying to get on the Cloud Computing bandwagon, and this was not even a Cloud Computing focused conference! The more you look into Cloud Computing, the more it feels like the next big thing after the internet gold rush of the 90s.

All this hype around Cloud Computing feels like déjà vu. Turn the dial back a few years, and Software as a Service (SaaS) went through a very similar transition. After SaaS reached the trough of disillusionment, skeptics were raising doubts; many argued that they would never consider putting their competitive data (CRM) in a software system outside their corporate networks, and Salesforce had to fight an uphill battle as it tried to establish its SaaS products. However, the value proposition of SaaS, in terms of zero install and pay-as-you-go, was too attractive to ignore. Today SaaS is the architecture of choice for many enterprise software products, and last time I checked, Salesforce was sitting pretty at a massive market cap of 7.13 billion dollars!

Let's look at the benefits of Cloud Computing:

  • Lower Costs – OPEX not CAPEX: Cloud Computing avoids capital expenditure (CapEx) on hardware, software and services by renting them from a third-party provider (such as Amazon). Consumption is usually billed on a utility basis (resource-based, like electricity) or a subscription basis (time-based, like a monthly cable subscription), with little or no upfront cost. You pay as you go, and pay for what you need. This seemingly straightforward benefit has a deep impact on business models and strategy.
  • Self service and Agility: Provisioning a server used to take days if not weeks. With Amazon you can procure a server on their public cloud in minutes! Users can generally terminate the contract at any time (improving ROI and eliminating financial risks), and the services are often covered by service level agreements (SLAs) with financial penalties.
  • Focus on your business: Cloud computing abstracts away underlying resources (server, network and storage) and management of it so that you can focus on your core business. Win-win for Providers and Consumers.
  • Multi-tenancy: Cloud infrastructure and services are multi-tenant by default, with multiple customers sharing resources and the costs associated with them. Providers run centralized infrastructure in low-cost locations, and consumers benefit from the providers' expertise in utilization and efficiency. Providers gain efficiency through economies of scale, and can therefore offer the same service at lower cost to happy consumers.

  • Elastic Scalability: Hosting your applications on cloud infrastructure enables dynamic ("on-demand") provisioning of resources in near real time, without having to waste server resources engineered for peak loads. This lets small businesses start offering their services on the web with low entry barriers, and then scale as and when their load demands grow.
  • Consider, for example, that you want to start a small web-based business selling toys. Your business plan calls for exponential growth, with the number of customers ramping from a few hundred in the first year to thousands in 2-3 years to a million plus in 5-7 years, and this plan does not even include the wild fluctuations of peak holiday seasons. Until now, planning for this type of scenario involved a lot of upfront cost, which created a huge barrier to entry for startups. With cloud computing and public cloud infrastructure, such small companies can do exactly what they set out to do, with unlimited elasticity!

As with the SaaS success story, it is the underlying benefits of the "cloud" that will eventually win over the skeptics. Of course, an important factor will be for an ecosystem to evolve in a timely fashion. One of the reasons SaaS was successful was that an entire ecosystem emerged that lent itself well to the SaaS model, including web standards (SOAP, WSDL, UDDI) and architectures such as AJAX.

Similar to the platform wars of the eighties (followed by the browser wars of the nineties), Cloud Computing is currently going through a war, with each player trying to establish itself as the destination. Some efforts have started to promote interoperability and openness of the cloud; the Open Cloud Initiative is one such example. However, it remains to be seen how the industry as a whole matures and adopts such efforts…

Cloud computing is here to stay and will eventually succeed as a concept. It has the power to establish new business models and change existing processes. More will have to be written about what it means for the enterprises of tomorrow to manage their businesses in the cloud. Do provide feedback via your comments if you would like to hear more about it…

See also: Suhas’ previous PuneTech article: The Changing Landscape of Data Centers.


Tech Trends for 2015, by Anand Deshpande, Shridhar Shukla, Monish Darda

On Monday, I participated in a Panel Discussion “Technology Trends” organized by CSI Pune at MIT college. The panelists were Anand Deshpande, CEO of Persistent Systems, Shridhar Shukla, MD of GS Lab, Monish Darda, GM of BladeLogic India (which is now a part of BMC Software), and me.

Anand asked each of us to prepare a list of 5 technology trends that we felt would be important in the year 2015, so that we could compare and contrast our lists. I published my own list of 5 things for students to focus on last week. Basically, I cheated by listing just a couple of technology trends, and filled out the list with one technology non-trend and a couple of non-technology non-trends.

Here are my quick-n-dirty notes of the other panelists' tech trends, and other points that came up during the discussion.

Here is Shridhar’s list:

  • Shridhar's trend #1: Immersive environments for consumers – from games to education. Partial virtual reality. We will have more audio, video, multi-media, and more interactivity, while the use of keyboards and menu-driven interfaces will reduce. Tip for students based on trend #1: don't look down on GUIs. On a related note, sadly, none of the students had heard of TED; Shridhar asked them all to go and google it and to check out the "The Sixth Sense" TED video.
  • Shridhar’s trend #2: totally integrated communication and information dissemination.
  • Shridhar’s trend #3: Cloud computing, elastic computing. Computing on demand.
  • Shridhar’s trend #4: Analytics. Analytics for business, for government, for corporates. Analyzing data, trends. Mining databases.
  • Shridhar’s trend #5: Sophisticated design and test environments. As clouds gain prominence, large server farms with hundreds of thousands of servers will become common. As analytics become necessary, really complicated, distributed processes will run to do the complex computations. All of this will require very sophisticated environments, management tools and testing infrastructure. Hardcore computer science students are the ones who will be required to design, build and maintain this.

Monish’s list:

  • Monish's trend #1: Infrastructure will be commoditized, and the interface to the end user will assume increasing importance.
  • Monish's trend #2: Coming up with ideas for things people use will be most important; actually developing the software will be trivial. Already, things like AWS make a very sophisticated server farm available to anybody, and lots of open source software makes really complex software easy to put together. Hence, building the software is no longer the challenge. Thinking of what to build will be the more difficult task.
  • Monish's trend #3: Ideas combining multiple fields will rule, and the use of technology in other areas (e.g. music) will increase. So far, the software industry has been driven by the needs of the software industry first, and then of other "enterprise" industries (like banking and finance). But software will cross over into more and more mainstream uses. Be ready for the convergence and meeting of domains.
  • Monish’s trend #4: Sophisticated management of centralized, huge infrastructure setups.

Anand’s list:

  • Anand's trend #1: Sensors. Ubiquitous tiny computing devices that don't even look like computers. All networked.
  • Anand's trend #2: The next billion users. Mobile. New devices. New interfaces. Non-English interfaces. In fact, non-text interfaces.
  • Anand’s trend #3: Analytics. Sophisticated processing of large amounts of data, and making sense out of the mess.
  • Anand’s trend #4: User interface design. New interfaces, non-text, non-keyboard interfaces. For the next billion users.
  • Anand’s trend #5: Multi-disciplinary products. Many different sciences intersecting with technology to produce interesting new products.

These lists of 5 trends had been prepared independently, without any collaboration. So it is interesting to note the commonalities. Usability. Sophisticated data analysis. Sophisticated management of huge infrastructure setups. The next billion users. And combining different disciplines. Thinking about these commonalities and then wondering about how to position ourselves to take advantage of these trends will form the topic of another post, another day.

Until then, here are some random observations. (Note: one of the speakers before the panel discussion was Deepak Shikarpur, and some of these observations are by him)

  • “In the world of Google, memory has no value” – Deepak
  • “Our students are in the 21st century. Teachers are from 20th century. And governance is 19th century” -Deepak
  • “Earning crores of rupees is your birthright, and you can have it.” – Deepak
  • Sad: Monish asked how many students had read Isaac Asimov, and there were just a couple.
  • Monish encouraged students to go and read about string theory.

Web Scalability and Performance – Real Life Lessons (Pune TechWeekend #3)

Last Saturday, we had TechWeekend #3 in Pune, on the theme of website scalability and performance. Mukul Kumar, co-founder and VP of Engineering at Pubmatic, talked about the hard lessons in scalability they learnt on the way to building a web service that serves billions of ad impressions per month.

Here are the slides used by Mukul. If you cannot see the slides, click here.
Web Scalability & Performance

The talk was live-tweeted by @punetechlive and @d7ylive. Here are a few highlights from the talk:

  • Keep it simple: If you cannot explain your application to your sales staff, you probably won’t be able to scale it!
  • Use JMeter to monitor performance, to do a good job of scaling your site
  • Performance testing idea: Take 15-20 Amazon EC2 servers, run JMeter with 200 threads on each for 10 hours. Bang on your website! (A few days later, @d7y pointed out that using openSTA instead of JMeter can give you up to 500 threads per server, even on old machines.)
  • Scaling your application: have a loosely coupled, shared nothing, stateless, distributed architecture
  • MySQL scalability tip: Be careful before using new features, or new versions. Or don't use them at all!
  • Website scalability: think global. Some servers in California, some servers in London, etc. Similarly, think global when designing your app. Having servers across the world will drive architecture decisions. When half your data-center is 3000 miles from the other half, interesting, non-trivial problems start cropping up. Also, think carefully about horizontal scaling (lots of cheap servers) vs vertical scaling (few big fat servers)
  • memcache tip: pre-populate memcache with the most common objects (see the sketch after this list)
  • Scalability tip: Get a hardware load balancer (if you can afford one). Amazon AWS has some load-balancers, but they don’t perform so well
  • Remember the YouTube algo for scaling:
    while(1){
    identify_and_fix_bottlenecks();
    eat_drink();
    sleep();
    notice_new_bottleneck();
    }

    there’s no alternative to this.
  • Scalability tip: You can’t be sure of your performance unless you test with real load, real env, real hardware, real software!
  • Scalability tip: keep the various replicated copies of data loosely consistent; this speeds up your updates. But figure out which parts of your database must be consistent at all times, and which ones can have "eventual consistency"
  • Hard lessons: keep spare servers at all times, and keep servers independent – a failure in one shouldn't affect the others
  • Hard lessons: Keep all commands in a script. You will have to run them at 2am. Then 3am. Then 7am.
  • Hard lessons: Have a well defined process for fault identification, communication and resolution (because figuring these things out at 2am, with a site that is down, is terrible.)
  • Hard lessons: Monitor your web service from 12 cities around the world!
  • Hard lesson: Be paranoid – at any time, servers can go down, DDOS attacks can happen, NICs can become slow or fail!
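To make the memcache pre-population tip concrete, here is a minimal warm-up sketch using the spymemcached Java client (the key names and database loader are hypothetical, not from Mukul's talk):

    import java.net.InetSocketAddress
    import net.spy.memcached.MemcachedClient

    object WarmMemcache {
      def main(args: Array[String]): Unit = {
        val client = new MemcachedClient(new InetSocketAddress("localhost", 11211))

        // Hypothetical "most common objects": load them once at startup so the
        // first real users hit a warm cache instead of stampeding the database.
        val hotKeys = Seq("homepage:html", "adslots:config", "geo:top-cities")
        for (key <- hotKeys) {
          val value = loadFromDatabase(key) // hypothetical loader
          client.set(key, 3600, value)      // expiry in seconds
        }
        client.shutdown()
      }

      private def loadFromDatabase(key: String): String =
        "..." // stand-in for the real database fetch
    }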

Note: a few readers of the live-tweets asked questions from Nashik and Bombay, and got them answered by Mukul. +1 for twitter. You should also start following @punetechlive.


Resisting Scala – Why good managers resist great new technologies

Dhananjay Nene, a software architect, passionate programmer, and internet enthusiast, is one of the strong, in-depth technical voices that graces tech events in Pune and the online/offline tech community. In spite of an MBA from IIM-A, he has remained a techie, but he uses his dual background to good advantage: amongst managers he becomes a techie, explaining complexities they don't understand or appreciate, and amongst a group of techies he starts channeling managers, to hammer some business sense into them.

In the latest post on his blog (which you must subscribe to), he is pretending to be a manager who is resisting exhortations from his techies that the group should switch all development to Scala, the hot new language that is touted as a long-term replacement for Java. Note: these are not necessarily Dhananjay’s personal views – he is just trying to explain to techies all the issues that a good manager might worry about with respect to adoption of new technologies.

This is a must-read for all techies, and hence, is reproduced here with permission.

Why should I switch to Scala?

This post is a role-play and does not accurately reflect my individual opinion about Scala. I am convinced of the capabilities and features of Scala, and of the fact that it deserves the mantle of a long-term replacement for Java. However, language adoption goes beyond technical capabilities, and this post is a speculation on what a typical manager might be dealing with when deciding whether to switch to Scala.

So I have been reading a lot about Scala lately, including opinions about how it will be a long-term replacement for Java. I've also read some interesting writeups about Scala adoption, such as On Scala's Future and A Tipping Point for Scala. While I used to code a lot, my responsibilities today require me to deal with many other issues, those faced by our customers and our development teams, and to engage with my peers and superiors on the other difficulties bedeviling our organisation. This gives me little time to try out Scala. I know I should be doing that, but sincerely, I do not have the time. So I rely on the feedback of my team, the trade journals and other influential architects within and outside my organisation.

I have heard about many developers switching from Java to Python / Ruby. However, I have heard of relatively few large Java shops which have made the shift; most of the switch stories I've heard involve smaller teams. I can feel the excitement Scala has generated amongst the development teams: the brevity, the introduction of the functional programming model, the exciting concurrency work, et al. I have no doubt that, given so much excitement, it must really be a good language.

To introduce my organisation: it is one of those shops which service many projects concurrently. Given the tremendous business and growth, I must confess we do not always have the luxury of hiring the most top-notch talent. We have a lot of projects we use Java for, and that's a language our customers are comfortable with. I've had some of the senior people check out Scala to get some feedback on the language. At this stage I must say I am inclined to evaluate the shift, but not convinced enough to make it. I am sure that, if convinced, I could drive the change to Scala incrementally. However, my fear stems from the fact that if things don't turn out well, despite all the great advice I've received, it's going to be my rear end on the line. So here are some of my concerns about evaluating the shift to Scala, and there are many of them, so some of you might be able to help me through this thought process.

  • Functional Programming: I'm sure in many ways it rocks. But my guys tell me they are not sure how to use it in the typical bread-and-butter applications which read from a database, do some processing and write back to the database. Does functional programming help me in this context? Will my team scale into being able to write functions with no side effects, assuming that's a desirable goal? What if they tie themselves up in knots and my release to the customer is put at risk? I can't afford that. Is functional programming even desirable in such contexts? So I am not sure if, in these contexts, I should just ditch functional programming and work with the normal imperative programming capabilities of Scala. I am so confused, and afraid.
  • Different Syntax: While Scala runs on the JRE, its syntax is very different from Java's. From what I could gather, it is much easier for a Java programmer to read (make sense of) simple Python code than to read Scala code. Is it true? So even if I do get compatibility in terms of the runtime environment, would I be picking up a language so syntactically different that it would involve a substantial relearning curve? I remember when we had to learn Java and Javascript: for better or for worse, those were relatively minor modifications of the C/C++ syntax, compared to what I sense as the syntactic shift between Java and Scala. Am I wrong? If so, could you help point me to resources which show that Scala code is not much different from Java? (A small sample of Scala's syntax appears just after this list.)
  • Sample code: Guys, I need your help. I need to see some good sample code, code which reflects how a typical application is architected, designed and programmed in Scala. And I don't need it for complex multi-threaded actor-based processing; I just need to see simple J2EE server-based departmental applications, maybe a simple recruitment tracking or library maintenance application. If I find a good one, I'll just take it and give it to my team and say: there, that's how we're largely going to build it, and even if we make a few changes along the way, we at least have a reasonable template to build from.
  • Dumbed down environment : I remember my great adventures with C and vi and make. But my team today is very different. They want great IDEs. They must have syntax highlighting, autocompletion and nice refactoring capabilities. If I ask them to move, some of them might be excited about the change and be willing to overcome these short term hurdles. But there are some of them who will not be keen to do so and may be disinclined to support such a shift. And at the end of the day my ability to conduct this shift is a function of my ability to carry a large proportion of them along with me. Even when I considered a shift from svn to git, the IDE support was a big issue even though quite obviously git capabilities were really exciting. I couldn’t push along that change, and in this case we are talking of changing the language.
  • Is this a good time to shift to Scala? I remember the early adopters of Java from 1996 through 2001. While they gained a lot of experience, the JRE and J2EE really matured only post JRE 1.3. Scala seems to be coming out with so many enhancements so fast, I am not sure it has stabilised; I am told there is a 2.8 coming out in a few months. So if I train my team and Scala continues to change rapidly, will I have to keep retraining my team regularly? And what about the customers I take to production: will the frequent upgrades mean I end up supporting multiple customers on multiple versions of Scala? Maybe Scala is stable, but it would be helpful for someone important enough to make a clear statement that no major shifts are anticipated anytime soon, and that version shifts are likely to be no faster than the JRE version upgrades (which were fast enough).
  • Support from peers and superiors: I remember the day I decided to shift to Java. What made the move easy for me was the sheer fact that Java was a big paradigm leap away from the then-dominant C++. Not only was it cross-platform with binary compatibility thrown in for good measure, Sun ensured that it made all the right noises to appeal to the enterprise architects and the business managers. I see the senior developers in my team clamouring for the shift to Scala, but my peer managers and my superiors don't display even a fraction of the enthusiasm they displayed during the Java shift. The implication is that the risk cover I get if I order the shift is far less than what I had when I made the move to Java. Which means that if things don't quite work out well, I'm really going to be screwed.
  • Business friendliness: I understand all the nice talk about the technical excellence of Scala. But I really need to translate all these great language features into a projected ROI that I can use to convince others. So I would like to see actual case studies of applications that were moved to Scala and the impact on time and cost, so that I can compute my ROI. What scares me is that the learning curve may slow the initial applications enough to push my breakeven point for shifting to Scala well beyond a 12-month, perhaps even a 24-month, period. Things might not be that difficult, but in the absence of known studies, I am likely to lean towards projecting a worst-case scenario rather than an optimistic one.
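(For readers who haven't seen Scala, here is a small illustration of the brevity and syntax the developers in this role-play keep citing; a generic sketch, not code from any project discussed above. -PuneTech)

    // One case class line replaces the fields, constructor, getters, equals,
    // hashCode and toString of the equivalent Java class.
    case class Employee(name: String, dept: String, salary: Double)

    object BrevityDemo extends App {
      val employees = List(
        Employee("Asha", "Engineering", 100.0),
        Employee("Ravi", "Sales", 90.0))

      // Higher-order collection operations replace explicit loops.
      val raised = employees.filter(_.dept == "Engineering")
                            .map(e => e.copy(salary = e.salary * 1.1))
      raised.foreach(println)
    }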

So folks, I am asking for your help. And while a lot of you may think that people like us who balk at the thought of limited IDE support are wimps, please remember that 80% of us don’t fit into the top 20%. And if you would like Scala to be popular, you need us as much as we need you. And if you are not too sure, please remember Lisp and Smalltalk are great languages as well.

About the author – Dhananjay Nene

Dhananjay is a software engineer with around 17 years of experience in the field. He is passionate about software engineering, programming, design and architecture. He did his post-graduation from the Indian Institute of Management, Ahmedabad, has held senior management positions, and has managed teams in excess of 120 people. His tech blog and twitter stream are a must-read for anybody interested in programming languages or development methodologies. Those interested in the person behind the tech can check out his general blog and personal twitter stream. For more details, check out Dhananjay's PuneTech wiki profile.


Call for Speakers – IndicThreads Conference on Java Technologies, Pune Dec 2009

The IndicThreads Java conference is a technology conference that happens in Pune every year. The conference has in-depth, vendor-neutral technical sessions on a wide range of topics in the Java space. If you have done some interesting work in or related to Java, you should consider submitting a proposal.

PuneTech has detailed coverage of last year’s IndicThreads Java conference. For even more details, you can see the list of speakers and the slides used in their presentations at the conference website. That should give you an idea of what this conference is about.

Here is the call for speakers reproduced from the conference website:

Call for Speakers

IndicThreads.com invites submissions for the 4th IndicThreads.com Conference On Java Technology to be held on 11th and 12th December 2009 in Pune, India. The conference is the premier independent conference on Java technology in India and is the place to be, to learn the latest in the Java world while meeting with like-minded individuals from across the industry.

IndicThreads welcomes submissions from subject experts across fields, geographic locations and areas of development. Topics of interest include new and groundbreaking technologies and emerging trends, successful practices and real world learnings.

Topics appropriate for submission to this conference include but are not restricted to the below, stated in no particular order –

1. Java Language Specs & Standards
2. Optimization, Scaling and Performance Tuning
3. Cloud Computing
4. Rich Internet Applications, Ajax and Web 2.0
5. Scripting languages for Java like JRuby, Groovy, Rhino, JavaFX.
6. Open Source Frameworks
7. Enterprise Architecture
8. Spring
9. Virtualization
10. Social Networking
11. Security
12. Agile Techniques, Extreme Programing, Test Driven Development
13. New and emerging technologies
14. Case Studies and Real World Experiences

Submission

  • Please note that marketing-oriented submissions aimed at promoting specific organizations or products will not be accepted.
  • All sessions will be between 50-90 minutes. One or both of your proposals might be accepted.
  • The audience consists mostly of senior developers and project leads. Before submission consider how your submission can provide best value to this target segment.
  • Submissions will be accepted only on the website and not through emails. Please complete the entire form including the two session proposals.
  • The decision of the conference team as regards sessions, durations, timings, speaker benefits and all related aspects will be final and binding.

Speaker Benefits

  • Complimentary Full Conference Pass
  • We will arrange for your hotel stay and cover the room tariff. Please note that hotel incidentals will not be covered.
  • We will reimburse up to Rs 5000 of the air fare, or the actual fare, whichever is less.
  • Speaking at an IndicThreads event gets you recognition as a subject expert.

Write to [ conf AT rightrix DOT com ] in case of any other queries.

Important Dates

  • Submission Deadline – 31st August 2009
  • Conference Dates – 11 and 12 December, 2009

PUG’s Microsoft Technologies Developer Conference – 8th/9th Aug

PUG DevCon 2009

What: Pune User Group (PUG)’s DevCon conference on Microsoft technologies
When: 8, 9 August, 9:30am to 5:30pm
Where: College of Engineering Pune (COEP)
Registration and Fees: This event is free for everyone. Register here.

Details:
PUG DevCon is an event for developers to share, collaborate and meet up with like-minded technology enthusiasts. Along with interesting interactive sessions, DevCon is a platform for learning and sharing new technology. You get to meet industry gurus and people from Microsoft, and explore mobile device development, Windows application development, integrated web solutions, Microsoft Office programming, language enhancements and IDE productivity features. Issues such as easier development of applications across client types and migrating applications to .NET are all discussed here, and new, optimized solutions are provided.

DevCon is a Developer Conference from the developers, by the developers and for the developers. Developers may be professionals or students who will represent next generation developers. The agenda consists of two tracks that will cover .NET, Azure, Silverlight, and a bunch of other technologies. For information about the expected presenters look here.

Featured Products/Topics: .NET 4.0 Internals, Azure Services Platform, Silverlight 3 with Blend 3 SketchFlow and Deep Zoom, ASP.NET MVC, MS Office for Developers, extending and embedding PowerShell.

Recommended Audiences: IT Professionals, Microsoft Partners, Solution Architects, Software Developers, Students, Technical Decision Makers, Developers, Architects

For more information about the organizers, see the PuneTech profile of Pune User Group.
