LiveBlog #tw5: Intro to Functional Programming & Why it’s important

This is a live-blog of TechWeekend 5 on Functional Programming. Please keep checking regularly; this will be updated every 15 minutes until 1pm.

Why Functional Programming Matters by Dhananjay Nene

Dhananjay Nene started off with an introductory talk on FP – what it is, and why it is important.

FP is a style of programming in which functions have no side-effects, i.e., the result of a function depends purely on its inputs. No mutable state is maintained.

Effects/Implications of “no side effects”

  • Side-effects are necessary: FP doesn’t mean completely side-effect free. If you have no side-effects, you can’t do IO. So, FP really means “largely side-effect free”. Specifically, there are very few parts of the code that have side-effects, and you know exactly which those are.
  • Testability: Unit Testing becomes much easier. There are no “bizarre interactions” between different parts of the code. “Integration” testing becomes much easier, because there are no hidden effects.
  • Immutability: There are no “variables”. Once a value has been bound to a ‘name’, that binding is final. You can’t change the value of that ‘name’, since doing so would be mutable ‘state’ and would need a ‘side-effect’. (See the sketch after this list.)
  • Lazy Evaluation: Since a function always produces the same result, the compiler is free to decide when to execute the function. Thus, it might decide to not execute a function until that value is really needed. This gives rise to lazy evaluation.
  • Concurrency control is not so much of a problem. Concurrency control and locks are really needed because you’re afraid that your data might be modified by someone else while you’re accessing it. This issue disappears if your data is immutable.
  • Easier parallelization: The biggest problem with parallelizing programs is handling all the concurrency control issues correctly. This becomes a much smaller problem with FP.
  • Good for multi-core: As the world moves to multi-core architectures, more and more parallelism will be needed. And humans are terrible at writing parallel programs. FP can help, because FP programs are intrinsically, automatically parallelizable.
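
To make the purity, immutability and laziness points concrete, here is a small Clojure sketch (my own illustration, not from the talk):

    ;; A pure function: its result depends only on its inputs.
    (defn add-tax [price rate]
      (* price (+ 1 rate)))

    ;; "Changing" an immutable map returns a new map; the original is untouched.
    (def order {:item "book" :price 100})
    (assoc order :price 120)   ; => {:item "book", :price 120}
    order                      ; => {:item "book", :price 100}

    ;; Lazy evaluation: the infinite sequence is only computed as far as needed.
    (take 3 (map #(add-tax % 0.1) (range)))   ; => (0.0 1.1 2.2)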

Another important feature of functional programming languages is the existence of higher order functions. Basically in FP, functions can be treated just like data structures. They can be passed in as parameters to other functions, and they can be returned as the results of functions. This makes much more powerful abstractions possible. (If you know dependency injection, then higher-order functions are dependency injection on steroids.)
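
A small Clojure sketch of what that looks like (my own example, not from the talk):

    ;; A higher-order function: takes a function f, returns a new function
    ;; that applies f twice.
    (defn twice [f]
      (fn [x] (f (f x))))

    ((twice inc) 5)          ; => 7

    ;; Returning configured functions instead of constructing objects.
    (defn discount [pct]
      (fn [price] (* price (- 1 pct))))

    ((discount 0.10) 200)    ; => 180.0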

FP gives brevity. Programs written in FP will typically be much shorter than comparable imperative programs. This is probably because of higher-order functions and closures. Compare the size of the quicksort code in Haskell vs. Java at this page.
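
To give a flavour of that brevity, here is a naive quicksort in Clojure (my own sketch, not the Haskell or Java code being referenced, and a teaching version rather than an efficient sort):

    (defn quicksort [[pivot & xs]]
      (when pivot
        (concat (quicksort (filter #(< % pivot) xs))
                [pivot]
                (quicksort (filter #(>= % pivot) xs)))))

    (quicksort [3 1 4 1 5 9 2 6])   ; => (1 1 2 3 4 5 6 9)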

You need to think differently when you start doing functional programming.

Think different:

  • Use recursion or comprehensions instead of loops (see the sketch after this list)
  • Use pattern matching instead of if conditions
  • Use pattern matching instead of state machines
  • Information transformation instead of sequence of tasks
  • Software Transactional Memory FTW!
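
A rough Clojure sketch of the first point (my illustration, not from the talk):

    ;; A comprehension instead of an explicit loop: sum of squares of odd numbers.
    (reduce + (for [x (range 10) :when (odd? x)] (* x x)))   ; => 165

    ;; Recursion with destructuring instead of an index-based loop
    ;; (a teaching version; idiomatic code would use reduce or recur).
    (defn sum [[x & xs]]
      (if x (+ x (sum xs)) 0))

    (sum [1 2 3 4])   ; => 10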

Advantages of FP:

  • After initial ramp-up issues, development will be faster in FP
  • Code is shorter (easier to read, understand)
  • Clearer expression of intention of developer
  • Big ball of mud is harder to achieve with pure functions. You will not really see comments like “I don’t know why this piece of code works, but it works. Please don’t change it.”
  • Once you get used to FP, it is much more enjoyable.
  • Faster, better, cheaper and more enjoyable. What’s not to like?

The cost of doing FP:

  • Re-training the developers’ brains (this is a fixed cost): they have to learn to think differently, and you can’t get that just from books; you must actually do some FP programming.
  • You may suffer from a lack of third-party libraries, but if you pick a language like Clojure, which sits on the JVM, then you can easily access Java libraries for the things that don’t exist natively in your language.

Should a company do its next project in a functional programming language? Dhananjay’s recommendation: start with small projects, and check whether you have the organizational capacity for FP. Then move on to larger and larger projects. If you’re sure that you have good programmers, and there happens to be a 6-month project for which you’re OK if it actually becomes a 12-month project, then definitely do it in FP. BG’s correction (based on his own experience): the 6-month project will only become an 8-month project.

Some things to know about Erlang by Bhasker Kode

Bhasker is the CEO of http://hover.in. They use Erlang in production for their web service.

Erlang was created in 1986 by developers at Ericsson for their telecom stack. This was later open-sourced and is now a widely used language.

Erlang is made up of many “processes”. These are programming language constructs – not real operating system processes. But otherwise, they are similar to OS processes. Each process executes independently of other processes. Processes do not share any data. Only message passing is allowed between processes. There are a number of schedulers which schedule processes to run. Normally, you will have as many schedulers as you have cores on your machine. Erlang processes are very lightweight.

Garbage collection is very easy: as soon as a process dies, all of its private data can be garbage collected, because it is not shared with anyone else.

Another interesting thing about Erlang is that the pattern matching (which is used in all functional programming languages) can actually match binary strings also. This makes it much easier to deal with binary data packets.

Erlang has built-in language support for handling process failures: supervisor processes, deciding which process takes over the failed one’s job, and so on.

Erlang allows you to think beyond for loops. Create processes that sit around waiting for instructions from you; the primary programming paradigm then becomes sending a bunch of tasks to a bunch of processes in parallel and waiting for the results to come back.

Some erlang applications for developers:

  • Webservers built in erlang: Yaws, mochiweb, nitrogen, misultin
  • Databases built in erlang: amazon simpledb, riak, couch, dynomite, hibari, scalaris
  • Testing frameworks: distil, eunit, quickcheck, tsung

Who is using Erlang? Amazon (SimpleDB), Facebook (Facebook Chat), Microsoft, GitHub, Nokia (Disco crawler), EA (the games company), RabbitMQ (a messaging application), ejabberd (the chat server, which has not crashed in 10 years). Indian companies using Erlang: Geodesic, http://hover.in.

How Clojure handles the Expression Problem by Baishampayan Ghose

If you’ve gone deep into any programming language, you will find a reference to lisp somewhere. So, every programmer must be interested in lisp. To quote Eric Raymond:

LISP is worth learning for the profound enlightenment experience you will have when you finally get it. That experience will make you a better programmer for the rest of your days, even if you never actually use LISP itself a lot.

BG had conducted a 2-day Clojure tutorial in Pune a few months back, and he will happily do that again if there is enough interest. This talk is not about the basics of Clojure; it is about a specific problem and how it is solved in Clojure, in the hope that this gives some interesting insights into the language.

Clojure is a dialect of lisp. And the first thing that anybody notices about lisp is all the parentheses. Don’t be afraid of the parentheses. After a few days of coding in lisp, you will stop noticing them.

Clojure has:

  • first-class regular expressions. A # followed by a string is a regular expression (see the sketch after this list).
  • arbitrary-precision integers and decimals. So don’t worry about the data type of your numbers. (It internally uses appropriately sized data types.)
  • code as data and data as code. Clojure (and lisp) is homoiconic, so lisp code is just lists, and can hence be manipulated by your program to create new program constructs. This is the most ‘difficult’ and most powerful part of all lisp-based languages. Google for “macros in lisp” to learn more. Most people don’t “get” this for a long time, and when they do “get” lisp macros, they suddenly become very productive in lisp.
  • has a nice way to attach metadata to functions. For example, type hints attached to functions can help improve performance
  • possibility of speed. With proper type-hints, Clojure can be as fast as Java
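
A quick sketch pulling a few of those together (my own example, not from the talk):

    (re-seq #"\d+" "abc123def45")        ; first-class regex literal => ("123" "45")
    (* 123456789012345678 1000000000N)   ; the N suffix gives an arbitrary-precision BigInt
    (defn ^String shout [^String s]      ; type hints are plain metadata; they let the
      (.toUpperCase s))                  ; compiler avoid reflection on the method call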

(Sorry: had to leave the talk early because of some other commitments. Will try to update this article later (in a day or two) based on inputs from other people.)

8 thoughts on “LiveBlog #tw5: Intro to Functional Programming & Why it’s important”

  1. Quoting
    “Lazy Evaluation: Since a function always produces the same result, the compiler is free to decide when to execute the function. Thus, it might decide to not execute a function until that value is really needed. This gives rise to lazy evaluation.”

    I have been pondering this, and I feel that to really deliver on this promise, FP has to be used with dependency injection, because that is the only way we can guarantee the values on which the function will act, regardless of when it is called.

    I am getting a feeling that lack of dependency injection may be a major FP anti-pattern…

    Any thoughts Dhananjay, BG ?

  2. Small correction. I meant to suggest to the audience that HOFs could be imagined to be like policy injections (eg. tax computations). Not that HOFs inject dependencies the way conventionally done in JEE such as a JDBC Connection Provider.

  3. @Parag

    There is no lack of the benefits of dependency injection. It’s just that dependency injection is not important enough to be flagged as a special pattern in FP. A more detailed enunciation of this is likely to be substantially beyond the scope of this post or its comment stream (eg. dependency injection works on object constructors – where are the objects in FP? 🙂 )

    As a trivial example, your main function could create a database connection, pass it along with a set of query parameters to a query function (not side-effect free), and pass the results of the query function to the subsequent processing functions, which will be side-effect free until the final persistence back into the database.
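
    In Clojure that shape might look roughly like this (a sketch with fake stand-ins for the database calls; all names are illustrative):

    ;; Stand-ins for the side-effecting edges (illustrative only).
    (defn run-query [conn params]               ; would hit the database in real code
      [{:id 1 :total 10} {:id 2 :total 32}])
    (defn save-report! [conn report]            ; would persist back to the database
      (println "saving" report))

    ;; The pure core: depends only on its input.
    (defn summarize [rows]
      {:count (count rows) :sum (reduce + (map :total rows))})

    (defn -main []
      (let [conn   :fake-connection
            report (summarize (run-query conn {:status :active}))]
        (save-report! conn report)))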

  4. @Dhananjay, @Parag,

    (Maybe I’m misunderstanding what dependency injection really is, but…) The impression I have of functional programming is that “good” functional programming (with real macros, and proper use of higher-order functions) can be thought of as all dependency injection, all the time. It is so widespread that it doesn’t need a special name. Or, to put it differently, with lazy evaluation and closures and macros, FP is so much beyond DI that DI feels like a poor attempt to introduce FP concepts in Java.

    As you get good, and as you start understanding your problem domain and seeing common (DRY) patterns, more and more of your functions will be sort-of-generic manipulations of inputs, where both the manipulated data structures and the manipulating functions are passed in.

    Does that sound correct, or am I completely off?

  5. Parag: As Dhananjay pointed out, languages like Java need IoC, DI, etc. because they don’t have HOFs. You don’t need these things in FP. Let me give a small example –

    Say you want to filter items out of a collection based on whether the items are even or not. You can then have a class called EvenFilter with the method run which would take a collection and return a new collection of items which are even.

    EvenFilter ef = new EvenFilter();
    ef.run(Arrays.asList(1, 2, 3, 4, 5, 6)); // result -> [2, 4, 6]

    But now, you want to filter out the odd ones, and maybe later the prime ones. So you have more classes –

    OddFilter of = new OddFilter();
    of.run(Arrays.asList(1, 2, 3, 4, 5, 6)); // result -> [1, 3, 5]

    PrimeFilter pf = new PrimeFilter();
    pf.run(Arrays.asList(10, 11, 12, 13)); // result -> [11, 13]

    I am sure you can see a pattern. Now you might want to make this whole filtering operation generic and let a GenericFilter class take an instance of some other class which conforms to a specific interface, say IPredicate. Now, you can keep your GenericFilter class (which encapsulates the filtering behaviour) and let the class take an implementation of IPredicate, which specifies a method called check with a signature like this –

    Boolean check(Integer item);

    So, you can happily go and create various implementations of IPredicate like these –

    class CheckEven implements IPredicate…
    class CheckOdd implements IPredicate…
    class CheckPrime implements IPredicate…

    You now instantiate GenericFilter like this –

    GenericFilter ef = new GenericFilter(new CheckEven());
    GenericFilter of = new GenericFilter(new CheckOdd());
    GenericFilter pf = new GenericFilter(new CheckPrime());

    The instances of GenericFilter hold on to the IPredicate instances internally and they invoke the IPredicate.check() when the GenericFilter.run() method is called.

    The world is happy, your code is now modular enough.

    So what did we discover here? We discovered Dependency Injection, which is a special case of Inversion of Control.

    The question is why do you need all these things? Well, you need them because you don’t have higher order functions.

    Let me show you how I would solve it in Clojure –

    (filter even? [1 2 3 4 5 6]) ; returns (2 4 6)
    (filter odd? [1 2 3 4 5 6]) ; returns (1 3 5)
    (filter prime? [10 11 12 13]) ; returns (11 13), assuming a prime? predicate is defined

    I hope I have been able to clear your doubts. The bottom-line is that all these “Design Patterns” in Java, etc. were invented just so that they could mimic the way FP languages work given their rather primitive facilities.

    PS – My Java is rusty, so please excuse me in case of any mistakes.

  6. @Navin

    The examples BG provided are a great illustration of the syntactic aspects of injection in FP.

    DI usually stands for Dependency Injection. Another term used (whose acronym also happens to be DI) is Dependency Inversion. The latter usually boils down to reducing your design to a scenario where you program to an interface and not to an implementation.

    In this example the dependency inversion happens at the point where the GenericFilter method invokes check(). DI could be thought to happen when GenericFilter gets constructed (usually DI happens either through constructors or setter methods, both of which are absent in FP unless you imagine partial function invocation; see the small sketch at the end of this comment).

    There is another aspect of DI which is more semantic than syntactic. DI primarily replaces the locator pattern: non-DI code usually performs a lookup to get some well-known resource. A good example is resources referenced in JNDI, eg. ConnectionProvider, MessageQueueProvider etc. Traditionally (but not always) DI gets applied in contexts where such lookups would otherwise have been done from some central registry, but the resources instead get instantiated upfront and passed in via constructors or setters. Conventionally, not every argument that gets passed in is a dependency in this context; usually only something that would get published in a registry would be. Having said that, the semantic meaning of dependency in DI has slowly been relaxing since the days the term was coined by Martin Fowler. ref: http://martinfowler.com/articles/injection.html

    Quite often the arguments to a function / method may not typically refer to what is conventionally understood to be dependencies in the DI pattern. Hence my emphasis on using a separate word – policy injection (not that it was necessarily a befitting word, it simply was the best one that came to my mind).

    If the above is confusing, it is still safe to assume that both injection and inversion are alive and kicking in FP in their corresponding idiomatic ways, and you will not find these capabilities wanting 🙂
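
    For what the “partial function invocation” flavour of injection might look like, a tiny Clojure sketch (illustrative names):

    (defn filter-with [pred coll]        ; the "generic filter"
      (filter pred coll))

    ;; Configure it up front via partial application, instead of injecting an IPredicate:
    (def keep-even (partial filter-with even?))

    (keep-even [1 2 3 4 5 6])            ; => (2 4 6)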
