Tag Archives: security

Network Security Workshop by “Null” – Dec 21

What: Workshop on Network Security by security awareness group “Null”
When: Sunday, 21st December, 10am to 1:30pm
Where: I2IT, Hinjewadi
Registration and Fees: This event is free for all. No registration required.

Details:

Null, a Network Security group, is organizing an event on the 21st of December, 2008 at International Institute of Information Technology, Hinjewadi, Pune.

The seminars to be held are as follows:

Time                 | Workshop                                   | Speaker
10:00 AM – 10:30 AM  | Introduction to Null and Network Security  | Mr. Aseem Jakhar
10:30 AM – 11:30 AM  | Wireless Security                          | Mr. Rohit Srivastwa
11:30 AM – 12:30 PM  | Application Security                       | Mr. Ajit Hatti
12:30 PM – 1:30 PM   | TCP/IP and NMAP                            | Mr. Murtuja Bharmal

Null is a Network Security community for ethical hackers, security professionals and security enthusiasts. It was born out of the need for a centralized knowledge base in security, and out of the fact that security is often treated as an add-on and ignored many a time. It is a step towards immunity from security threats.

Apart from having fun, we also:

  • Share security related knowledge
  • Create a disclosure platform
  • Design/Develop innovative ideas to combat current threats
  • Define a “Must-Have” security knowledge-base for different roles (programmers, QA, admin, end user)
  • Spread security awareness
  • Organize Meetings/Conferences/Training

For further information:

  • Contact: Mr. Aseem Jakhar ( giimale@gmail.com )
  • Visit the website: http://null.co.in

Speakers

1. Aseem Jakhar (Founder: NULL security community)
A network security and open source enthusiast (and a system programmer for a living). He has contributed to the development of various security products and networking/security modules including:

– Firewall
– Regex filters.
– Bayesian filters.
– Heuristic filters.
– Genetic Algorithm based score generator for heuristic filters.
– Advanced attachment filters.
– Multicast packet-reflection daemon.
– SMTP engine.
– DNSBL engine.

Aseem is an active speaker at security/open source conferences like Blackhat
Europe 2008, ClubHack 2008, Gnunify 2007. He was also invited to speak at
Inbox/Outbox UK 2008. He is a C|EH from EC-Council and is actively involved in security research. He has also given security advisories
to various organizations including banks.

2. Murtuja Bharmal (Co-founder – NULL)
Murtuja is a Linux kernel and network security maniac, earning his livelihood as a system programmer. He has been contributing to the development of various network security products like firewalls, VPNs, application proxies, and authentication modules for the past 5 years. Murtuja is a C|EH from EC-Council, and is actively involved in security practices, development and consultancy with prestigious organizations. He has single-handedly developed a firewall product and got it compliant with ICSA Labs, and also has expertise in customization, security patching and integration of open source products like SQUID, IPTables, VRRP, and OpenSwan.

3. Rohit Srivastwa (Member – NULL)
Founder of ClubHack, Rohit has several years of experience in providing consultancy and training in the fields of information security, cyber crime investigation and penetration testing. He is actively involved in advising and teaching several military agencies, law enforcement personnel, corporates and government bodies in these fields.

4. Ajit Hatti (Member – NULL)

Ajit Hatti is a “Software Architect & System Programmer” by profession and a “Network Security and Linux Enthusiast”. For the last 4 years he has been contributing to the research & development of security products like IPS/UTM/mail security & network scanners with various renowned organizations. Ajit also actively contributes to vulnerability research on various protocol implementations and has been researching modern techniques of fingerprinting & application/OS detection. Ajit is also associated with PLUG, CSI, and Ubuntu’s development and testing.

81% of Pune’s Wi-Fi Networks are insecure – ClubHack report

Wi-Fi Security in Pune. Only the WPA encrypted access points (cream colored pie) are secure. Everything else is insecure.

ClubHack, the group hell-bent on hammering some sense of security hygiene into the heads of an ignorant and careless public, went around Pune making a note of how secure or insecure various Wi-Fi hotspots in the city were, and found that a full 50% were not protected at all, and another 31% were only weakly protected. That just leaves 19% adequately protected.

If you have no idea what I am talking about, here is a little bit of explanation. More and more users are now using wireless networking cards to get their internet access. In such a setup, there is a Wi-Fi card that goes into your desktop/laptop (most modern laptops have this built-in), and to complete the connection there is a device that needs to be plugged into your internet connection (i.e. your broadband cable, or telephone line). This device is called an access point (AP), and is typically a wireless router. The computer then communicates wirelessly with your wi-fi router to connect to the internet.

The above report points out that in 50% of all wi-fi access points installed in Pune, there is nothing to prevent random third-party computers from connecting to the AP. That’s like leaving your front door open. Not only can they access the internet using your AP, but more importantly, it is very likely that they can access the other computers on your network, and can tap into the network traffic going back and forth between those computers and the internet. If you are unlucky, they can get access to sensitive data, like passwords to your email account, or worse, bank account. Or, if, like our government, you want to focus on the wrong thing, you can worry that THE TERRORISTS CAN USE YOUR NETWORK TO SEND BOMB THREATS!!! (and we dutifully reported that in PuneTech.)

Of the remaining, 31% think that they have protected their AP using encryption, but the encryption scheme they are using (WEP) is known to be very weak, and can be broken in a matter of minutes. Which means that a hacker (cracker actually) sitting in a car outside your building can easily break into the network without anybody realizing it.

How did ClubHack find out? This is what they did:

On 10th November 2008, ClubHack created a setup in a car which included laptops & GPS enabled devices for the exercise. The car was driven in all the popular areas which included IT parks, multiplexes, residential areas, markets, busy streets etc. While the car was driving at a normal speed, the GPS and wireless enabled devices sensed the availability of wireless signals on the road. These signals were then recorded with details like MAC address of the access point, name of the network, security used, longitude and latitude of the location where the signal of a particular network was highest.
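For the curious, here is a rough sketch of how such wardriving records might be tallied once collected. The record format, field names and sample values below are hypothetical, and this is not ClubHack’s actual data or tooling.

```python
# Minimal sketch: summarizing wardriving records by security type.
# The record format and sample entries are hypothetical, not ClubHack's data.
from collections import Counter

scan_records = [
    {"ssid": "HomeNet",  "bssid": "00:11:22:33:44:55", "security": "OPEN"},
    {"ssid": "OfficeAP", "bssid": "66:77:88:99:aa:bb", "security": "WEP"},
    {"ssid": "CafeWiFi", "bssid": "cc:dd:ee:ff:00:11", "security": "WPA2"},
]

def summarize(records):
    """Return the percentage of access points per security type."""
    counts = Counter(r["security"] for r in records)
    total = sum(counts.values())
    return {sec: round(100.0 * n / total, 1) for sec, n in counts.items()}

if __name__ == "__main__":
    for security, pct in sorted(summarize(scan_records).items()):
        print(f"{security}: {pct}% of access points")
```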

And just in case anybody amongst you is thinking that what they did was illegal and actionable, don’t worry! They took permission of Pune Police to undertake this mission, and Pune Police actually sent an officer to accompany them. For some more details of their project and findings, you can check out the short report, or the full report (PDF).

What should you do? If you are reading PuneTech, then no doubt you are one of the smart ones who are in the 19% that use WPA based encryption. But just in case someone slipped through, what you need to do is educate yourself about wi-fi security issues, and ensure that you change the settings on your wi-fi access point to use one of the WPA based encryption schemes. (There are 6 or 7 variants like WPA-PSK, WPA2-Personal, etc. Any one of them will do.) And please change the default administrator password for your AP. And if you have no clue what I am talking about, get a friend who understands to help you. Or pony up the Rs. 1000 for the wi-fi security workshop that ClubHack is going to conduct next month, or the Rs. 8000 for the wi-fi security training that AirTight networks is going to conduct later this month. This last one is certainly recommended if you are the network admin for one of the IT companies that ClubHack managed to snag during their wardrive.

And just in case the remaining 19% are feeling very pleased with yourselves, I should also point out that security guru Bruce Schneier keeps his own wi-fi network open. It is a fascinating, insightful, and different take on this issue that you should read. But in spite of Bruce’s sage advice, I keep my router protected with WPA. Because Bruce’s advice amounts to saying that I should leave my door open, but keep all my drawers, and cupboards, and closets and bedroom door locked, and the fridge and TV chained to the wall. I’m not a security guru, and I am sure I’ll leave some door open. Don’t want to take that chance.

Pune company watch: Companies that are doing work related to this area in Pune: Airtight Networks, Symantec, QuickHeal.

The Risks with OpenID

A few months ago, PuneTech carried an article by Hemant Kulkarni of Pune-based singleid.net giving an overview of OpenID, an up and coming technology that addresses a real pain point of anybody who has used the web – it removes the need to remember different passwords for different sites. This is called single sign-on or SSO in security parlance. More importantly, it achieves this with high security, without having to pass passwords all over the place. Actually, OpenID is much more than this – read the whole article for more details.

Now, Rohit Srivastwa, founder of ClubHack (a group of volunteers dedicated to increasing awareness of security issues in Pune and elsewhere), has created a presentation on the risks associated with OpenID (for more information about Rohit, see his PuneTech wiki profile):

Risks With OpenID


Basically, he points out that a bunch of standard, well-known security attacks (we’ve listed some of them at the end of this article) that have been developed by hackers will also work against your OpenID provider (if you don’t know what provider means in this context, you really should skim that overview article), and that results in the criminals being able to access all your online accounts with the convenience and security of single-sign-on provided by OpenID. Not the effect you were trying for, eh?

So what is to be done? This doesn’t mean that OpenID is bad. In fact, it is great and will make online life much easier. All you need to do is be aware of the risks, and be more careful. Specifically, don’t use OpenID or single-sign-on for banks or credit card account access until we tell you otherwise. Always use https. When in doubt, be paranoid – just because you aren’t paranoid, doesn’t mean they aren’t all out to get you. And don’t take any biscuits from strangers (you’ll be surprised how many people do that on Pune-Nashik buses). And get free education on security issues from the activities of ClubHack.

Some background about security attacks

These days, one of the most important (and easiest to fall for) security risks is the possibility of getting phished. A phishing attack is one in which criminals create a website that looks just like some other website (e.g. your bank’s website) and then trick you into divulging important information (like account number, password etc.) to them.

There are a bunch of other scary attacks possible – man-in-the-middle attack, replay attack, cross-site request forgery, and cross-site scripting attack.

A man-in-the-middle attack is when an evil website sits between you and your bank website. It pulls all information from the bank website and shows it to you – so it looks like the real thing. And it takes inputs (account number, PIN codes etc.) from you and passes them on to the bank site so that it is able to access your account and show you authentic information from your account. However, along the way, it has managed to get access to your account without your knowledge.
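To make the mechanics concrete, here is a deliberately minimal, hypothetical sketch of the man-in-the-middle pattern: a tiny relay that fetches pages from an upstream site and passes them back to the browser, while logging whatever parameters pass through it. The upstream URL and port are placeholders, and real attacks are of course far more sophisticated.

```python
# Minimal sketch of the man-in-the-middle pattern: a relay that forwards
# requests to an upstream site and logs the query parameters it sees.
# Purely illustrative; upstream URL and port are hypothetical placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import urllib.request

UPSTREAM = "http://example.com"  # stands in for the genuine site

class RelayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log anything the user sent along with the request.
        params = parse_qs(urlparse(self.path).query)
        if params:
            print("captured parameters:", params)
        # Fetch the real page and pass it back, so the user sees the real site.
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RelayHandler).serve_forever()
```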

A cross-site request forgery is an attack where malicious code to access your bank account is embedded (and hidden) in the webpage at another website – maybe some chat forum that you visit. Here’s an example from Wikipedia:

For example, one user, Bob, might be browsing a chat forum where another user, Mallory, has posted a message. Suppose that Mallory has crafted an HTML image element that references a script on Bob’s bank’s website (rather than an image file), e.g.,
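an image element whose src is a withdrawal URL such as http://bank.example.com/withdraw?account=bob&amount=1000000&for=mallory (a representative example of such a link, not an exact quote).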

If Bob’s bank keeps his authentication information in a cookie, and if the cookie hasn’t expired, then the attempt by Bob’s browser to load the image will submit the withdrawal form with his cookie, thus authorizing a transaction without Bob’s approval.

A cross-site scripting (XSS) attack exploits a vulnerability in which a hacker can inject malicious scripts (i.e. little programs that sit inside your webpage) into otherwise genuine webpages, and is hence able to do something terrible either to your local computer or to your account.

Note: these exploits are not specific to OpenID. These are well-known attacks that are used all over the web in all kinds of situations. Wikipedia claims that 68% of all websites are vulnerable to XSS attacks. If you are now afraid of using your computer, then you shouldn’t even read this article that gives an idea of how the underground hacker economy works. But do contact ClubHack to get yourself educated on basic security hygiene. To paraphrase QuickHeal‘s marketing message, aap ke PC meiN kauN rehta hai? (“Who lives in your PC?”) Hacker ya ClubHack? (Incidentally, QuickHeal happens to be a Pune-based company, which is giving multi-nationals like Symantec a run for their money (incidentally, Symantec happens to have its largest R&D center in Pune (incidentally, did you notice that Pune is a very happening place technologically? (incidentally, I think you should let everybody know about how happening a place Pune is (technologically speaking) by asking them to subscribe to PuneTech)))).

Stop terrorists from hacking into your company computers with AirTight networks?

AirTight Logo

In a report titled “Wi-Fi networks extremely vulnerable to terror attacks,” the Economic Times points out that:

 

The recent incident involving US national Kenneth Haywood, whose Internet Protocol (IP) address was allegedly used to send the terror e-mail prior to the Ahmedabad serial blasts, should be regarded as a wake up call. While this incident of wireless hacking took security agencies by surprise, lakhs of individuals and companies are actually exposed to a similar risk. Incidents of such hacking are common, but go unreported since they may not have such grave implications.

The police version of the Haywood incident, as reported in the newspapers, is that suspected criminals allegedly hacked into the Wi-Fi network of his laptop and used it to send the terror e-mail. Prior to this hacking, Mr Haywood is said to have complained of high browsing bills. If this is to be believed, then one possibility is that Haywood may have left his access point open. The suspected terrorist could then have hooked on to this access point and sent the email, which then showed Haywood’s IP address as the originator. This is regarded, in hacking terminology, as stealing of bandwidth while impersonating Haywood.

Wi-Fi hacking is an even bigger problem for companies that have many employees who take their laptops all over the place and might come back infected, or who have a number of access points that can be easy targets if not secured properly. This is the market that Pune-based AirTight Networks is going after:

Hemant Chaskar, Airtight’s technology director, explained: “Companies earlier used firewalls, which prevented or regulated data access between internal systems and the external world. With the adoption of wireless, firewalls can be bypassed, exposing internal systems to free external access. External devices can access internal enterprise networks, while internal devices can also connect to networks outside the company’s premises in the absence of adequate security measures.”

There are a few different capabilities that a company needs to be able to tackle this threat. First, being able to detect that wireless intrusion is happening. Second, being able to physically (i.e. geographically) locate exactly where the threat is coming from. Third, being able to do something about it. And finally, for the sake of compliance with government laws, being able to generate appropriate reports proving that you took all the appropriate steps to keep your company’s data secure from hackers. This last one is required whether you are worried about hackers or not, and is a huge pain.

AirTight provides all these facilities and then goes one step further, which makes it unique. At $20000 a pop, most small companies would balk at the price of all the infrastructure required for achieving all this. So AirTight provides WiFi security as an online service – you simply install a few sensors in your company. Everything else is on AirTight’s servers. So you just have to pay a small monthly fee, as low as $60 per month. And you get full security from wi-fi hacking, and you keep the government happy with nice compliance reports.

For more details of AirTight’s products, see the PuneTech wiki profile of AirTight.


Pune startup presents at DEMOfall, San Diego


Pune-based startup Maverick Mobile launched their latest product, Maverick Secure Mobile (MSM), at the DEMO conference earlier this week. DEMO is one of the premier conferences for new startups to launch their products. A video of their presentation is available from the DEMO site.

Maverick Mobile is a Pune-based mobile services and products company. Maverick develops mobile applications (for example a mobile security application, and a mobile dictionary), mobile games (about a dozen of them), and also mobile content (mp3s, music videos, ringtones, wallpapers etc.)

Products

Applications

Maverick Secure Mobile

Maverick Secure Mobile is a security application that protects your handset as well as the data stored in it. Using MSM, one can retrieve the entire phone book remotely from the stolen/lost phone. MSM can also send thief-activity reports via SMS to the reporting number. The owner of the device can lock/hang the phone remotely. MSM can be used in case of theft, or for parental control.

This product was launched at DEMOfall conference, September 2008, in San Diego.

YO SMS

Yo SMS is a peer-to-peer application which allows a user to attach backgrounds, sounds, audibles and smilies to regular SMS messages.

Maverick Dictionary

A dictionary of more than 1,45,000 words, with a user interface customized for mobile usage.

Mobile Games

Maverick has developed about a dozen mobile games, including their own versions of classics like Sudoku, Poker, Blackjack etc.

Mobile Content

  • In India, Maverick Mobile is the first company to launch pre-loaded memory cards containing MP3 songs, video songs, video scenes, ring tones, wallpapers and games in the retail market.
  • Maverick has legal tie-ups with various film distribution houses for selling Bollywood content through memory cards.
  • In the span of 6 months, Maverick has built a customer base of more than 50,000 across different states of India.
  • Maverick has a strong distribution network of more than 130 distributors & 1,000 retailers.


Spammers Leverage Interest in Olympics, says Symantec Pune

Official logo of the 2008 Summer Olympic Games (image via Wikipedia)

From PC World – Business Center:

Public interest in the Olympic Games is helping spammers, who are using text related to the games in e-mails to get users to click through to their malware and phishing Web sites, or to go to product sites, according to an executive at Symantec.

Spam messages were 78 percent of all messages in July, up from 66 percent a year ago, according to a monthly report on spam released by Symantec earlier this week.

While spam is increasing overall as a trend, there has been a spike ahead of the Beijing Olympics, said Shantanu Ghosh, vice president of Symantec’s India product operations, on Thursday. Symantec’s center in Pune, India, has one of nine security response labs run by Symantec worldwide.

For more information on Symantec Pune, see the PuneTech profile of Symantec.

McAfee to Buy Data Protection Vendor Reconnex

Security vendor McAfee has just announced that it will buy data leakage prevention startup Reconnex.

McAfee expects to close the US$46 million cash acquisition by the end of September and will roll the products into its data protection business unit, where they will be sold under the McAfee Adaptive Protection brand name.

Source: McAfee to Buy Data Protection Vendor Reconnex – Yahoo! News

Reconnex has an engineering location in Pune. We had profiled it a few months back.

OpenID – Your masterkey to the net

The OpenID logo (image via Wikipedia)

OpenID is a secure, customizable, user-controllable, and open mechanism to share personal information (username/password, credit card numbers, address) on the web. It will eliminate the need to enter the same information over and over again in different websites, or to remember different username/password combinations. It will be a major improvement over the current system once it gains widespread adoption. PuneTech asked Hemant Kulkarni of singleid.net to give us an introduction to OpenID, its benefits, and how it works.

Overview

In 2005, a new idea took hold and spread across the internet – OpenID. The concept is very simple – to provide users with a single unique login-password set with which they will be able to access all the different sites on the internet.

In June 2007, the OpenID Foundation was formed with the sole goal of protecting OpenID. The original OpenID authentication protocol was developed by Brad Fitzpatrick, creator of the popular community website LiveJournal, while working at Six Apart. The OpenID Foundation received a recent boost when internet leaders Microsoft, Google, Yahoo! and Verisign became its corporate members.

Millions of users across the internet are already using OpenID and several thousand websites have become OpenID enabled.

Need for OpenID

The internet is fast becoming an integral part of our everyday life. Many tasks such as booking tickets for movies, airlines, trains and buses, shopping for groceries, paying your electricity bills etc. can now be done online. Today, you can take care of all your mundane household chores at the click of a button.

When you shop online, you are usually required to use a login and a password to access these sites. This means that, as a user, you will have to maintain and remember several different login-password sets.

OpenID enables you to use just one login-password to access these different sites – making life simpler for you. With OpenID, there is no need to bother with remembering the several different logins and passwords that you may have on each different site.

Internet architecture inherently assumes that there are two key players in today’s internet world – end users who use the internet services and the websites which provide these services. It is a common misconception that OpenID-based login benefits only the end users. Of course it does. But it also has an equal value proposition for the websites that accept OpenID too.

Later, in a separate section, we will go into the details of the benefits to the websites that accept OpenID-based logins.

And before that, it is equally important to understand the few technological aspects and the various entities involved in the OpenID world.

What is OpenID

OpenID is a digital identity solution developed by the open source community. It is a lightweight method of identifying individuals which uses the same framework that is used for identifying websites. The OpenID Foundation was formed with the idea that it will act as a legal entity to manage the community and provide the infrastructure required to promote and support the use of OpenID.

In essence, an OpenID is a URL like http://yourname.SingleID.net which you can put into the login box of a website and sign in to a website. You are saved the trouble of filling in the online forms for your personal information, as the OpenID provider website shares that information with the website you are signing on to.

Adoption

As of July 2007, data shows that there are over 120 million OpenIDs on the Internet and about 10,000 sites have integrated OpenID consumer support. A few examples of OpenID promoted by different organizations are given below:

  • America Online provides OpenIDs in the form “openid.aol.com/screenname”.
  • Orange offers OpenIDs to their 40 million broadband subscribers.
  • VeriSign offers a secure OpenID service, which they call “Personal Identity Provider”.
  • Six Apart, which hosts LiveJournal and Vox, supports OpenID – Vox as a provider and LiveJournal as both a provider and a relying party.
  • Springnote uses OpenID as the only sign in method, requiring the user to have an OpenID when signing up.
  • WordPress.com provides OpenID.
  • Other services accepting OpenID as an alternative to registration include Wikitravel, photo sharing host Zooomr, linkmarking host Ma.gnolia, identity aggregator ClaimID, icon provider IconBuffet, user stylesheet repository UserStyles.org, and Basecamp and Highrise by 37signals.
  • Yahoo! users can use their Yahoo! IDs as OpenIDs.
  • A complete list of sites supporting OpenID(s) is available on the OpenID Directory.

Various Entities in OpenID

Now let us look at the various entities involved in the OpenID world.

The Open ID Entities

End user

This is the person who wants to assert his or her identity to a site.

Identifier

This is the URL or XRI chosen by the End User as their OpenID identifier.

Identity provider or OpenID provider

This is a service provider offering the service of registering OpenID URLs or XRIs and providing OpenID authentication (and possibly other identity services).

Note: The OpenID specifications use the term “OpenID provider” or “OP”.

Relying party

This is the site that wants to verify the end user’s identifier; it is also called a “service provider”.

Server or server-agent

This is the server that verifies the end user’s identifier. This may be the end user’s own server (such as their blog), or a server operated by an identity provider.

User-agent

This is the program (such as a browser) that the end user is using to access an identity provider or a relying party.

Consumer

This is an obsolete term for the relying party.

Technology in OpenID

Typically, a relying party website (like example.website.com) will display an OpenID login form somewhere on the page. Compared to a regular login form, which has fields for user name and password, the OpenID login form only has one field, for the OpenID identifier. It is often accompanied by the OpenID logo. This form is in turn connected to an implementation of an OpenID client library.

The Open ID Protocol

A user will have to register and have an OpenID identifier (like yourname.openid.example.org) with an OpenID provider (like openid.example.org). To login to the relying party website, the user will have to type in their OpenID identifier in the OpenID login form.

The relying party website will typically transform the OpenID identifier into a URL (like http://yourname.openid.example.org/). In OpenID 2.0, the client will thus discover the identity provider service URL by requesting the XRDS document (which is also called the Yadis document) with the content type application/xrds+xml which is available at the target URL and is always available for a target XRI.
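As a rough illustration of that discovery step, here is a minimal Python sketch that fetches an identifier’s XRDS document and pulls out the first service URI. The identifier is hypothetical, there is no error handling, and a real relying party would use a maintained OpenID library rather than hand-rolled code like this.

```python
# Minimal sketch of OpenID 2.0 Yadis/XRDS discovery for a URL identifier.
# The identifier is hypothetical; real code should use an OpenID library.
import urllib.request
import xml.etree.ElementTree as ET

def discover_op_endpoint(identifier):
    """Fetch the XRDS document for an identifier and return the first service URI."""
    req = urllib.request.Request(identifier, headers={"Accept": "application/xrds+xml"})
    with urllib.request.urlopen(req) as resp:
        xrds = resp.read()
    # An XRDS document lists <Service> elements; the endpoint sits in a <URI> child.
    ns = {"xrd": "xri://$xrd*($v*2.0)"}
    uri = ET.fromstring(xrds).find(".//xrd:Service/xrd:URI", ns)
    return uri.text if uri is not None else None

# Example (hypothetical identifier):
# print(discover_op_endpoint("http://yourname.openid.example.org/"))
```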

Now, here is what happens next. The relying party and the identity provider establish a connection referenced by the associate handle. The relying party then stores this handle and redirects the user’s web browser to the identity provider to allow the authentication process.

In the next step, the OpenID identity provider prompts the user for a password, or an InfoCard and asks whether the user trusts the relying party website to receive their credentials and identity details.

The user can either agree or decline the OpenID identity provider’s request. If the user declines, the browser is redirected to the relying party with a message to that effect and the site refuses to authenticate the user. If the user accepts the request to trust the relying party website, the user’s credentials are exchanged and the browser is then redirected to the designated return page of the relying party website. Then the relying party also checks that the user’s credentials did come from the identity provider.

Once the OpenID identifier has been properly verified, the OpenID authentication is considered successful and the user is considered to be logged into the relying party website with the given identifier (like yourname.openid.example.org). The website then stores the OpenID identifier in the user’s session.
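Put together, the redirect that the relying party builds and the verification of the response look roughly like this at the parameter level. The field names are standard OpenID 2.0 parameters, but the endpoint, realm and identifiers are hypothetical placeholders, and the sketch ignores associations, nonces and signature details that a real implementation must handle.

```python
# Sketch of the OpenID 2.0 redirect a relying party builds (checkid_setup) and
# of stateless verification of the response (check_authentication).
# The OP endpoint, realm and identifiers below are hypothetical placeholders.
from urllib.parse import urlencode
import urllib.request

OP_ENDPOINT = "https://openid.example.org/server"
OPENID_NS = "http://specs.openid.net/auth/2.0"

def build_auth_redirect(claimed_id, return_to, realm, assoc_handle=None):
    """Build the URL that the relying party redirects the user's browser to."""
    params = {
        "openid.ns": OPENID_NS,
        "openid.mode": "checkid_setup",
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": return_to,
        "openid.realm": realm,
    }
    if assoc_handle:
        params["openid.assoc_handle"] = assoc_handle
    return OP_ENDPOINT + "?" + urlencode(params)

def verify_positive_assertion(response_params):
    """Ask the OP directly (stateless mode) whether a positive assertion is genuine."""
    params = dict(response_params, **{"openid.mode": "check_authentication"})
    data = urlencode(params).encode()
    with urllib.request.urlopen(OP_ENDPOINT, data=data) as resp:
        return b"is_valid:true" in resp.read()

# Example:
# url = build_auth_redirect("http://yourname.openid.example.org/",
#                           "https://relying-party.example.com/return",
#                           "https://relying-party.example.com/")
```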

Case Study

Now let us take the simple case of Sunil, who wants to buy the Comprehensive Guide to OpenID by Rafeeq Rehman, CISSP. This e-book is available only on-line at www.MyBooks.com, a technology thought leader which believes in easing the end user’s on-line experience by accepting OpenID-based login.

Sunil is a tech-savvy individual who has already registered himself at www.singleid.net (India’s first OpenID provider) and they have provided him with his unique login identity, which is: http://sunil.singleid.net.

The easiest entity to recognize in this purchase scenario is Sunil, the End-User. Obviously Sunil will use his web browser, known as the User-agent, to access MyBooks.com.

So, Sunil visits www.MyBooks.com and starts to look for the book he wants. He follows the standard procedures on this website, chooses his book and clicks the check-out link to buy it. The first thing MyBooks.com does is ask him to log in, giving him the option of logging in with his OpenID.

Since Sunil has already registered himself with SingleID.net, they have provided him with his login-id (which looks a bit different). So here, www.singleid.net is the Identity Provider or OpenID provider.

Now, we know that OpenID uses the same method to identify individuals that is commonly used for identifying websites, and hence his identity (the Identifier, in OpenID terms) is http://sunil.singleid.net. The SingleID.net part of his identity tells MyBooks.com that he has registered himself at www.singleid.net.

At this stage MyBooks.com sends him to www.singleid.net to log in. Notice that MyBooks.com does not ask Sunil to log in itself but relies on SingleID.net. And so MyBooks.com, or www.MyBooks.com, is the Relying Party or the Consumer. Once Sunil completes his login process, which is authenticated against the Server-Agent (typically the Server-Agent is nothing but your identity provider), SingleID.net sends him back to MyBooks.com and tells MyBooks.com that Sunil is the person he says he is, and MyBooks.com can let him complete the purchase operation.

Leading Players in the OpenID World & Important Milestones

  • Web developer JanRain was an early supporter of OpenID, providing OpenID software libraries and expanding its business around OpenID-based
  • In March 2006, JanRain developed a Simple Registration Extension for OpenID for primitive profile-exchange
  • With VeriSign and Sxip Identity joining and focusing on OpenID development, a new version of the OpenID protocol (OpenID 2.0) and the OpenID Attribute Exchange extension were developed
  • On January 31, 2007, computer security company Symantec announced support for OpenID in its Identity Initiative products and services. A week later, on February 6 Microsoft made a joint announcement with JanRain, Sxip, and VeriSign to collaborate on interoperability between OpenID and Microsoft’s Windows CardSpace digital identity platform, with particular focus on developing a phishing-resistant authentication solution for OpenID.
  • In May 2007, information technology company Sun Microsystems began working with the OpenID community, announcing an OpenID program.
  • In mid-January 2008, Yahoo! announced initial OpenID 2.0 support, both as a provider and as a relying party, releasing the service by the end of the month. In early February, Google, IBM, Microsoft, VeriSign, and Yahoo! joined the OpenID Foundation as corporate board members

OpenID: Issues in Discussion and Proposed Solutions

As is the case with any technology, there are some issues in discussion with regard to OpenID and its usability and implementation. Let us have a look at the points raised and the solutions offered:

Issue I:

Although OpenID may create a very user-friendly environment, several people have raised the issue of security. Phishing and digital identity theft are the main focus of this issue. It is claimed that OpenID may have security weaknesses which might leave user identities vulnerable to phishing.

Solution Offered:

Personal Icon: A Personal Icon is a picture that you can specify which is then presented to you in the title bar every time you visit the particular site. This aids in fighting phishing as you’ll get used to seeing the same picture at the top of the page every time you sign in. If you don’t see it, then you know that something might be up.

Issue II:

People have also criticized the login process on the grounds that bringing the OpenID identity provider into the authentication process adds complexity and therefore creates vulnerability in the system. This is because the ‘quality’ of such an OpenID identity provider cannot be established.

Solution Offered:

SafeSignIn: SafeSignIn is an option that users can set on their settings page, under which you cannot be redirected to your OpenID provider to enter a password; you can only sign in on the provider’s own login page. If you are redirected to your provider from another site, you are presented with a dialog warning you not to enter your password anywhere else.

Value Proposition

There are several benefits to using OpenID – both for the users and for the websites.

Benefits for the End User:

  • You don’t have to remember multiple user IDs and passwords – just one login.
  • Portability of your identity (especially if you own the domain you are delivering your identity from). This gives you better control over your identity.

Benefits for OpenID Enabled Websites:

  • No more registration forms: With OpenID, websites can get rid of the clutter of the registration forms and allow users to quickly engage in better use of their sites, such as for conversations, commerce or feedback.
  • Increased stickiness: Users are more likely to come back since they do not have to remember an additional username-password combination.
  • Up-to-date registration information: Due to the need of frequent registrations, users often provide junk or inaccurate personal information. With OpenID, since only a one-time registration is necessary, users are more likely to provide more accurate data.

OpenID thus provides the users with a streamlined and smooth experience and website owners can gain from the huge usability benefit and reduce their clutter.

Why should Relying Parties implement OpenID-based authentication?

  • Expedited customer acquisition: OpenID allows users to quickly and easily complete the account creation process by eliminating entry of commonly requested fields (email address, gender, birthdates etc.), thus reducing the friction to adopt a new service.
  • Outsourcing authentication saves costs: As a relying party you don’t have to worry about lost user names, passwords, a costly infrastructure, or upgrading to new standards and devices. You can just focus on your core business. Research puts the average cost per user for professional authentication at approximately €34 per year. In the future, the relying party will end up paying only a few cents per authentication request (transaction based).
  • Reduced user account management costs: The primary cost for most IT organizations is resetting forgotten authentication credentials. By reducing the number of credentials, a user is less likely to forget their credentials. By outsourcing the authentication process to a third-party, the relying party can avoid those costs entirely.
  • Your customers are demanding user-centric authentication: User-centric authentication gives your customers comfort. It promises no registration hassle and low barriers of entry to your service. Offering UCA to your customers can be a unique selling point and stimulate user participation.
  • Thought leadership: There is an inherent marketing value for an organization to associate itself with activities that promote it as a thought leader. It provides the organization with the means to distinguish itself from its competitors. This is your chance to outpace your competitors.
  • Simplified user experience: This is at the end of the list because that is not the business priority. The business priority is the benefit that results from a simplified user experience, not the simplified user experience itself.
  • Open up your service to a large group of potential customers: You are probably more interested in the potential customers you don’t know, versus the customers you already service. UCA makes this possible. If you can trust the identity of new customers you can start offering services in a minute.
  • The identity provider follows new developments: When a new authentication token or protocol is introduced you don’t have to replace your whole infrastructure.
  • Time to market: Due to legislation you may suddenly be confronted with an obligation to offer two-factor authentication. UCA is very easy to integrate, so you are up and running a lot quicker.
  • Data sharing: If the identity provider also offers the option to provide additional (allowed) attributes of the end-user you don’t have to store all this data yourself. For example, if I go on a holiday for a few weeks, I just update my temporary address instead of calling the customer service of my local newspaper!
  • Quickly offer new services under your brand: If you take over a company or want to offer a third party service under your brand/ infrastructure UCA makes it much easier to manage shared users. How much time does this take at the moment?
  • Corporate image: As SourceForge states, it offers OpenID support to join the Web 2.0 space and benefit from the first-mover buzz. Besides, adding a good authentication mechanism provided by a trusted identity provider could add value to your own service. It is like adding the trust seal of your SSL certificate provider.
  • Extra Traffic: Today you get only those users whom you solicit, but miss out on those who are solicited by other, similar businesses. OpenID brings extra traffic to your website without you spending that extra effort.
  • Analytics: Providers can give you much more analytics on your users’ behavior patterns (this can be anonymous to keep user identity private and report something like 30% of people who visit your site also visit site ‘x’).

OpenID and Info-Cards

It is believed that user-id/password based log-in is the oldest, most commonly used and most easily implementable method of authenticating and establishing somebody’s identity, but, at the same time, a weak one.

To overcome this problem and enhance the security of OpenID-based login processes, OpenID providers are using new techniques such as authentication based on Info-Cards (virtual cards stored on the user’s PC). Microsoft in particular is working with various leading OpenID providers to make Microsoft CardSpace the de-facto standard for OpenID authentication.

There are two types of Info-Cards: self-issued and managed (i.e. managed by a provider). Self-issued cards are created by the user, stored on his/her PC and used during the login process. Since the level of verification of these cards is only whatever the user provides, their use is limited to the self-verified category, and as such they provide a more secure replacement for the user-id/password combination only.

On the other hand, ‘Managed Cards’ are managed by a specific provider. This can be your OpenID provider or your bank. In this scenario, the data on the card is validated by the provider, significantly enhancing the value of the verification. As such, these cards can easily be used in financial transactions, for easing your on-line purchase process or for proving your legal identity.

There is an emerging trend to bridge the gap between info-cards (virtual) and smart-cards (physical) and establish a link between them. Data can be copied to and fro, giving your virtual card a physical status. In this scenario, your info-card (managed by the appropriate authority such as a bank, the RTO and so on) can take the place of your identity proof.

Some Interesting Sites Which Accept OpenID

Circavie.Com

An interesting site where you can create your own ‘story of your life’ – an interactive and chronological blog site, but with a difference (and that difference is not about being OpenID enabled) – see it to believe it!

Doxory.Com

If you are the kind of person who simply cannot decide whether to do ‘x’ or ‘y’, then here is the place for you. Put up your question and random strangers from the internet post their advice.

Highrisehq.Com

Here is the perfect solution for all those internet based companies – manage your contacts, to-do lists, e-mail based notifications, and what-not on this site. If the internet is where you work, then this site is perfect for you to get managing your business smoothly!

Foodcandy.Com

If you are a foodie then this site is the place for you! Post your own recipes and access the recipes posted by other people. Read opinions of people who have tried out the different recipes. Hungry?

About SingleID

SingleID is an OpenID provider – the first in India. It allows users to register and create their OpenID(s) for FREE. It provides all the typical OpenID provider functions – allowing users to create their digital identity and use it to log in to several OpenID enabled websites across the internet.

OpenID is being hailed as the ‘new face of the internet’ and SingleID is bringing it close to home. The main focus area of the company is to promote usage of OpenID in India.

If a user wants, he can also create multiple SingleID(s) with different account information, to use on different sites. So it allows you – the user – to control your digital identity, much in the same way as a regular login-password would – but with the added benefits of the OpenID technology.

SingleID has created a unique platform for website owners in India to generate a smooth user experience and create a wider base of operations and access for their websites.

Other user-centric services such as Virtual Cards (for more secure authentication) or allowing the use of user specific domain name (e.g. hemant.kulkarni.name) as an OpenID will be offered very soon.

For our partners, we provide secure identity storage and authentication and authorization services, alleviating the headaches of critical security issues related to personal data.

We also provide an OpenID enablement service. Using our services, companies can upgrade their user login process to accept OpenID-based logins, greatly enhancing their user base.

Links for Reference

· SingleID Home Page – http://www.singleid.net and Registration – https://www.singleid.net/register.htm

· OpenID Foundation Website – http://openid.net

· The OpenID Directory – http://openiddirectory.com/

About the author: Hemant Kulkarni is a founder director of SingleID.net. He has more than 25 years of product engineering and consulting experience in domains of networking and communications, Unix Systems and commercial enterprise software. You can reach him at hemant@singleid.net.


Data Leakage Prevention – Overview

A few days ago, we posted a news article on how Reconnex has been named a top leader in Data Leakage Prevention (DLP) technology by Forrester Research. We asked Ankur Panchbudhe of Reconnex, Pune to write an article giving us a background on what DLP is, and why it is important.

Data leakage protection (DLP) is a solution for identifying, monitoring and protecting sensitive data or information in an organization according to policies. Organizations can have varied policies, but typically they tend to focus on preventing sensitive data from leaking out of the organization and identifying people or places that should not have access to certain data or information.

DLP is also known by many other names: information security, content monitoring and filtering (CMF), extrusion prevention, outbound content management, insider threat protection, information leak prevention (ILP), etc.

Need for DLP

Until a few years ago, organizations thought of data/information security only in terms of protecting their network from intruders (e.g. hackers). But with the growing amount of data, rapid growth in the sizes of organizations (e.g. due to globalization), a rise in the number of data points (machines and servers) and easier modes of communication (e.g. IM, USB, cellphones), accidental or even deliberate leakage of data from within the organization has become a painful reality. This has led to growing awareness about information security in general and about outbound content management in particular.

Following are the major reasons (and examples) that make an organization think about deploying DLP solutions:

  • growing cases of data and IP leakages
  • regulatory mandates to protect private and personal information
    • for example, the case of Monster.com losing over a million private customer records due to phishing
  • protection of brand value and reputation
    • see above example
  • compliance (e.g. HIPAA, GLBA, SOX, PCI, FERPA)
    • for example, Ferrari and McLaren engaging in anti-competitive practices by allegedly stealing internal technical documents
  • internal policies
    • for example, Facebook leaking some pieces of their code
  • profiling for weaknesses
    • Who has access to what data? Is sensitive data lying on public servers? Are employees doing what they are not supposed to do with data?

Components of DLP

Broadly, the core DLP process has three components: identification, monitoring and prevention.

The first, identification, is a process of discovering what constitutes sensitive content within an organization. For this, an organization first has to define “sensitive”. This is done using policies, which are composed of rules, which in turn could be composed of words, patterns or something more complicated. These rules are then fed to a content discovery engine that “crawls” data sources in the organization for sensitive content. Data sources could include application data like HTTP/FTP servers, Exchange, Notes, SharePoint and database servers, repositories like filers and SANs, and end-user data sources like laptops, desktops and removable media. There could be different policies for different classes of data sources; for example, the policies for SharePoint could try to identify design documents whereas those for Oracle could be tuned to discover credit card numbers. All DLP products ship with pre-defined policy “packages” for well-known scenarios, like PCI compliance, credit card and social security leakage.
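A toy version of such a policy-driven discovery pass might look like the following sketch, which applies a handful of regex “rules” to text files under a directory. The rule names, patterns and path are illustrative only; real products use far richer rule languages, connectors for many data sources, and pre-built policy packages.

```python
# Toy content-discovery pass: apply regex "policy rules" to files in a directory
# and report which files contain sensitive-looking content. Paths and patterns
# are illustrative only.
import re
from pathlib import Path

POLICY_RULES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marker": re.compile(r"proprietary and confidential", re.IGNORECASE),
}

def discover(root):
    """Yield (file, rule_name) pairs for every policy rule that matches a file."""
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for name, pattern in POLICY_RULES.items():
            if pattern.search(text):
                yield path, name

if __name__ == "__main__":
    for path, rule in discover("/tmp/scan-me"):
        print(f"{path}: matched rule '{rule}'")
```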

The second component, monitoring, typically deployed at the network egress point or on end-user endpoints, is used to flag data or information that should not be going out of the organization. This flagging is done using a bunch of rules and policies, which could be written independently for monitoring purposes, or could be derived from information gleaned during the identification process (previous para). The monitoring component taps into raw data going over the wire, does some (optional) semantic reconstruction and applies policies on it. Raw data can be captured at many levels – network level (e.g. TCP/IP), session level (e.g. HTTP, FTP) or application level (e.g. Yahoo! Mail, GMail). At what level raw data is captured decides whether and how much semantic reconstruction is required. The reconstruction process tries to assemble together fragments of raw data into processable information, on which policies could be applied.

The third component, prevention, is the process of taking some action on the data flagged by the identification or monitoring component. Many types of actions are possible – blocking the data, quarantining it, deleting, encrypting, compressing, notifying and more. Prevention actions are also typically configured using policies and hook into identification and/or monitoring policies. This component is typically deployed along with the monitoring or identification component.

In addition to the above three core components, there is a fourth piece which can be called control. This is basically the component using which the user can [centrally] manage and monitor the whole DLP process. This typically includes the GUI, policy/rule definition and deployment module, process control, reporting and various dashboards.

Flavors of DLP

DLP products are generally sold in three “flavors”:

  • Data in motion. This is the flavor that corresponds to a combination of monitoring and prevention component described in previous section. It is used to monitor and control the outgoing traffic. This is the hottest selling DLP solution today.
  • Data at rest. This is the content discovery flavor that scours an organization’s machines for sensitive data. This solution usually also includes a prevention component.
  • Data in use. This solution consists of agents that run on end-servers and end-users’ laptops or desktops, keeping a watch on all activities related to data. They typically monitor and prevent activity on file systems and removable media options like USB, CDs and Bluetooth.

These individual solutions can be (and are) combined to create a much more effective DLP setup. For example, data at rest could be used to identify sensitive information, fingerprint it and deploy those fingerprints with data in motion and data in use products for an all-scenario DLP solution.
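One way to picture that combination is the following sketch: hash overlapping word windows of documents found by the data-at-rest scan, then check outbound text against those hashes. The window size, hash choice and threshold are arbitrary illustrative choices, not any vendor’s actual fingerprinting algorithm.

```python
# Sketch of combining "data at rest" and "data in motion": fingerprint sensitive
# documents by hashing overlapping word windows, then flag outbound text that
# shares fingerprints. Window size and hash are arbitrary illustrative choices.
import hashlib

def fingerprints(text, window=8):
    words = text.lower().split()
    for i in range(max(len(words) - window + 1, 1)):
        chunk = " ".join(words[i:i + window])
        yield hashlib.sha1(chunk.encode()).hexdigest()

def build_index(sensitive_docs):
    index = set()
    for doc in sensitive_docs:
        index.update(fingerprints(doc))
    return index

def looks_sensitive(outbound_text, index, threshold=1):
    hits = sum(1 for fp in fingerprints(outbound_text) if fp in index)
    return hits >= threshold

# Example:
# index = build_index(["design document for project X, revision 3 ..."])
# looks_sensitive("mail body quoting the design document for project X ...", index)
```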

Technology

DLP solutions classify data in motion, at rest, and in use, and then dynamically apply the desired type and level of control, including the ability to perform mandatory access control that can’t be circumvented by the user. DLP solutions typically:

  • Perform content-aware deep packet inspection on outbound network communication including email, IM, FTP, HTTP and other TCP/IP protocols
  • Track complete sessions for analysis, not individual packets, with full understanding of application semantics
  • Detect (or filter) content based on policy-based rules
  • Use linguistic analysis techniques beyond simple keyword matching for monitoring (e.g. advanced regular expressions, partial document matching, Bayesian analysis and machine learning)

Content discovery makes use of crawlers to find sensitive content in an organization’s network of machines. Each crawler is composed of a connector, browser, filtering module and reader. A connector is a data-source specific module that helps in connecting, browsing and reading from a data source. So, there are connectors for various types of data sources like CIFS, NFS, HTTP, FTP, Exchange, Notes, databases and so on. The browser module lists what all data is accessible within a data source. This listing is then filtered depending on the requirements of discovery. For example, if the requirement is to discover and analyze only source code files, then all other types of files will be filtered out of the listing. There are many dimensions (depending on meta-data specific to a piece of data) on which filtering can be done: name, size, content type, folder, sender, subject, author, dates etc. Once the filtered list is ready, the reader module does the job of actually downloading the data and any related meta-data.
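In miniature, the connector/browser/filter/reader split could be sketched like this, with the local filesystem standing in for a data source. The class and method names are invented for illustration.

```python
# Miniature crawler with the connector -> browser -> filter -> reader split
# described above, using the local filesystem as the only "data source".
# Class and method names are invented for illustration.
from pathlib import Path

class FileSystemConnector:
    def __init__(self, root):
        self.root = Path(root)

    def browse(self):
        """List everything reachable in the data source."""
        return [p for p in self.root.rglob("*") if p.is_file()]

    def read(self, item):
        """Download the data (here: read the file) plus simple metadata."""
        return item.read_bytes(), {"name": item.name, "size": item.stat().st_size}

def filter_listing(items, suffixes=(".c", ".py", ".java")):
    """Keep only items relevant to this discovery run (e.g. source code files)."""
    return [i for i in items if i.suffix in suffixes]

def crawl(connector):
    for item in filter_listing(connector.browse()):
        data, meta = connector.read(item)
        yield meta["name"], len(data)

# Example:
# for name, size in crawl(FileSystemConnector("/tmp/scan-me")):
#     print(name, size)
```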

The monitoring component is typically composed of the following modules: data tap, reassembly, protocol analysis, content analysis, indexing engine, rule engine and incident management. The data tap captures data from the wire for further analysis (e.g. Wireshark, aka Ethereal). As mentioned earlier, this capture can happen at any protocol level – this differs from vendor to vendor (depending on design philosophy). After data is captured from the wire, it is beaten into a form that is suitable for further analysis. For example, captured TCP packets could be reassembled into a higher level protocol like HTTP, and further into application level data like Yahoo! Mail.

Once the data is in an analyzable form, the first level of policy/rule evaluation is done using protocol analysis. Here, the data is parsed for protocol specific fields like IP addresses, ports, possible geographic locations of IPs, To, From, Cc, FTP commands, Yahoo! Mail XML tags, GTalk commands and so on. Policy rules that depend on any such protocol-level information are evaluated at this stage – for example, outbound FTP to any IP address in Russia. If a match occurs, it is recorded with all relevant information into a database.

The next step, content analysis, is more involved: first, the actual data and meta-data are extracted out of the assembled packets, and then the content type of the data (e.g. PPT, PDF, ZIP, C source, Python source) is determined using signatures and rule-based classification techniques (a similar but less powerful thing is the “file” command in Unix). Depending on the content type of the data, text is extracted along with as much meta-data as possible. Now, content based rules are applied – for example, disallow all Java source code. Again, matches are stored. Depending on the rules, more involved analysis like classification (e.g. Bayesian), entity recognition, tagging and clustering can also be done.

The extracted text and meta-data are passed on to the indexing engine, where they are indexed and made searchable. Another set of rules, which depend on the contents of the data, are evaluated at this point; an example: stop all MS Office or PDF files containing the words “proprietary and confidential” with a frequency of at least once per page. The indexing engine typically makes use of an inverted index, but there are other ways too. This index can also be used later to do ad-hoc searches (e.g. for deeper analysis of a policy match).

All along this process, the rule engine keeps evaluating many rules against many pieces of data and keeps track of all the matches. The matches are collated into what are called incidents (i.e. actionable events, from an organization’s perspective) with as much detail as possible. These incidents are then notified or shown to the user and/or sent to the prevention module for further action.
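Reduced to a skeleton, the rule-evaluation part of such a pipeline might look like the sketch below, which works on already-reassembled messages and collects matches as incidents. The message fields and the rules themselves are invented; capture, reassembly, text extraction and indexing are assumed to have happened upstream.

```python
# Skeleton of the monitoring stage: evaluate protocol-level and content-level
# rules over already-reassembled messages and collect matches as "incidents".
# Message fields and the rules themselves are invented for illustration.
import re

PROTOCOL_RULES = [
    ("outbound FTP to a .ru host",
     lambda m: m.get("protocol") == "FTP" and m.get("dest_host", "").endswith(".ru")),
]
CONTENT_RULES = [
    ("confidential marker", re.compile(r"proprietary and confidential", re.IGNORECASE)),
    ("java source", re.compile(r"\bpublic\s+class\s+\w+")),
]

def evaluate(message):
    incidents = []
    for name, predicate in PROTOCOL_RULES:
        if predicate(message):
            incidents.append({"rule": name, "message": message})
    for name, pattern in CONTENT_RULES:
        if pattern.search(message.get("body", "")):
            incidents.append({"rule": name, "message": message})
    return incidents

# Example reassembled message:
# evaluate({"protocol": "FTP", "dest_host": "files.example.ru", "body": "..."})
```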

The prevention module contains a rule engine, an action module and (possibly) connectors. The rule engine evaluates incoming incidents to determine action(s) that needs to be taken. Then the action module kicks in and does the appropriate thing, like blocking the data, encrypting it and sending it on, quarantining it and so on. In some scenarios, the action module may require help from connectors for taking the action. For example, for quarantining, a NAS connector may be used or for putting legal hold, a CAS system like Centera may be deployed. Prevention during content discovery also needs connectors to take actions on data sources like Exchange, databases and file systems.
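And the prevention stage, at its simplest, is a dispatch from incident to configured action; the action names and policy table below are placeholders, since real products hook into mail servers, proxies and storage connectors to actually enforce the action.

```python
# Simplest possible prevention stage: map each incident to a configured action.
# Action names and the policy table are placeholders for illustration.
ACTIONS = {
    "confidential marker": "block",
    "java source": "quarantine",
}

def prevent(incident):
    action = ACTIONS.get(incident["rule"], "notify")
    # In a real product this would call out to a mail server, proxy or
    # storage connector; here we only report what would happen.
    print(f"{action.upper()}: rule '{incident['rule']}' matched")
    return action
```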

Going Further

There are many “value-added” things that are done on top of the functionality described above. These are sometimes sold as separate features or products altogether.

  • Reporting and OLAP. Information from matches and incidents is fed into cubes and data warehouses so that OLAP and advanced reporting can be done with it.
  • Data mining. Incident/match information or even stored captured data is mined to discover patterns and trends, plot graphs and generate fancier reports. The possibilities here are endless and this seems to be the hottest field of research in DLP right now.
  • E-discovery. Here, factors important from an e-discovery perspective are extracted from the incident database or captured data and then pushed into e-discovery products or services for processing, review or production purposes. This process may also involve some data mining.
  • Learning. Incidents and mined information is used to provide a feedback into the DLP setup. Eventually, this can improve existing policies and even provide new policy options.
  • Integration with third-parties. For example, integration with BlueCoat provides setups that can capture and analyze HTTPS/SSL traffic.

DLP in Reconnex

Reconnex is a leader in DLP technology and the DLP market. Its products and solutions deliver accurate protection against known data loss and provide the only solution in the market that automatically learns what your sensitive data is as it evolves in your organization. As of today, Reconnex protects information for more than one million users. Reconnex starts with the protection of obvious sensitive information like credit card numbers, social security numbers and known sensitive files, but goes further by storing and indexing up to all communications and up to all content. It is the only company in this field to do so. Capturing all content and indexing it enables organizations to learn what information is sensitive and who is allowed to see it, or conversely who should not see it. Reconnex is also well-known for its unique case management capabilities, where incidents and their disposition can be grouped, tracked and managed as cases.

Reconnex is also the only solution in the market that is protocol-agnostic. It captures data at the network level and reconstructs it to higher levels – from TCP/IP to HTTP, SMTP and FTP to GMail, Yahoo! Chat and Live.

Reconnex offers all three flavors of DLP through its three flagship products: iGuard (data-in-motion), iDiscover (data-at-rest) and Data-in-Use. All its products have consistently been rated high in almost all surveys and opinion polls. Industry analysts Forrester and Gartner also consider Reconnex a leader in its domain.

About the author: Ankur Panchbudhe is a principal software engineer in Reconnex, Pune. He has more than 6 years of R&D experience in domains of data security, archiving, content management, data mining and storage software. He has 3 patents granted and more than 25 pending in fields of electronic discovery, data mining, archiving, email systems, content management, compliance, data protection/security, replication and storage. You can find Ankur on Twitter.


Reconnex named a leader in DLP by Forrester

(Newsitem forwarded to punetech by Anand Kekre of Reconnex)

Reconnex has been named a “top leader” in the data leak prevention space by Forrester in its DLP Q2 2008 report.

DLP software allows a company to monitor all data movements in the company and ensure that “sensitive” data (i.e. intellectual property, financial information, etc.) does not go out of the company. Reconnex and Websense have been named as the two top leaders in this space by Forrester.

Forrester employed approximately 74 criteria in the categories of current offering, strategy, and market presence to evaluate participating vendors on a scale from 0 (weak) to 5 (strong).

  • Reconnex received a perfect score of 5.0 in the sub-categories of data-in-motion (i.e., the network piece of DLP), unified management, and administration
  • Reconnex tied for the top scores in the sub-categories of data-at-rest (i.e., discovery) and forensics

“Reconnex offers best-in-class product functionality through its automated classification and analysis engine, which allows customers to sift through the actual data that the engine monitors to learn what is important to protect,” according to the Forrester Wave: Data Leak Prevention, Q2 2008 Report. “This solution stands out because it is the only one that automatically discovers and classifies sensitive data without prior knowledge of what needs to be protected.”

For more information about this award, see Reconnex’ press release.

For more information about Reconnex technology, see the punetech wiki profile of Reconnex.
