Mar 12, 2017

Idiomatic F# Design

HERE are the basic points of idiomatic (according to me) F# design. The idioms I present below are not the ones generally used in the F# community; they are fairly controversial. But they are similar to idioms used in the wider ML language family, and I genuinely believe they make for easier-to-read code.

Prefer modules, records, and functions

Try to avoid classes, because they are somewhat higher-ceremony than plain F# records. For example: records can be destructured effortlessly and updated immutably by replacing the values of named fields with new values.
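
For instance, here is a quick sketch of both destructuring and immutable update (the point type is hypothetical, purely for illustration):

type point = { x : float; y : float }

let origin = { x = 0.0; y = 0.0 }

(* Destructuring: bind just the field we care about. *)
let { x = x0 } = origin

(* Immutable update: a new record sharing every field we didn't mention. *)
let shifted = { origin with y = 10.0 }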

Avoid member methods and properties

They don't mesh well with the functional style because they can't be passed around directly as first-class values; module-level values and functions can.
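
A small, self-contained sketch of the difference (the Celsius types here are hypothetical): a module function drops straight into a pipeline, while a property has to be wrapped in a lambda at every use site.

type Celsius_class (degrees : float) =
  member this.Degrees = degrees

module Celsius =
  type t = { degrees : float }
  let degrees { degrees = d } = d

let as_records = [ { Celsius.degrees = 20.0 }; { Celsius.degrees = 25.0 } ]
let as_objects = [ Celsius_class 20.0; Celsius_class 25.0 ]

(* The module function is passed around directly... *)
let from_records = as_records |> List.map Celsius.degrees

(* ...but the property needs a wrapping lambda. *)
let from_objects = as_objects |> List.map (fun (c : Celsius_class) -> c.Degrees)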

Put everything inside modules

Ideally, dedicate a module to a (primary) type and name the type just t to avoid repetition; refer to the type (and other module members) prefixed by the module name for maximum clarity.

In fact, put everything (types, exceptions, and values) inside modules; they are an excellent organisational tool and clarify your thinking about what things go together.

Design top-down using interface (.fsi) files

F# is fantastic for top-down design because you can write the signatures of your modules, data, and operations separately as syntactically valid code in .fsi (F# interface) files and then fill in the blanks later with the actual implementation (.fs) files. Even if you don't consider yourself a top-down thinker, give it a try and you may be surprised by F#'s expressive power here.

Interface files are like C/C++ header files, and let you hide any module members you don't want to expose. Crucially, they also let you hide data definitions and give your caller abstract types to work with, enabling information hiding and decoupling. Quick example:

(* person.fsi *)
namespace MyProject

module Person =
  type t

  val make : string -> t
  val id : t -> int64
  val name : t -> string
  val with_name : string -> t -> t

(* person.fs *)
namespace MyProject

module Person =
  type t = { id : int64; name : string }

  let make name = { id = Id_service.get (); name = name }
  let id { id = i } = i
  let name { name = n } = n
  let with_name new_name t = { t with name = new_name }

In the above example, we hide the Person.t type's implementation details so that callers can't create values of the type themselves, bypassing our API. Nor can they operate on values of the type in any way except through the functions we provide as part of the Person module.

Also, we take advantage of the above-mentioned F# destructuring pattern matching and immutable record update--benefits we don't get from OOP style--to quickly access and change exactly the parts of the data we're interested in.
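
For example, here's a sketch of what caller code looks like; it can only go through the functions we expose:

open MyProject

let alice = Person.make "Alice"
let bob = alice |> Person.with_name "Bob"

printfn "%s has id %d" (Person.name bob) (Person.id bob)

(* This would not compile: the record definition is hidden by person.fsi,
   so callers can't construct or pick apart Person.t values directly. *)
(* let impostor = { Person.id = 42L; Person.name = "Mallory" } *)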

Use classic ML-style type parameter application

The recommended style of type parameter application in .NET is with angle brackets after the type constructor. But this is noisy, and just gets noisier as your types get more complex. Use ML-style, which reverses the order of application to postfix, but gets rid of (most of) the noise. Let's compare:

type a1 = Async<option<Person.t>>
type a2 = Person.t option Async

type b1 = Choice<string, Person.t>
type b2 = (string, Person.t) Choice

type c1<'a> = Map<string, 'a>
type 'a c2 = (string, 'a) Map

ML-style type application gives you the biggest wins for sequences of single-parameter applications, but it still spaces out the code in general, which helps with reading.

Use type parameters to control allowed operations

When your type has operations that are only valid when the type is in a certain state, you can express those states as the cases of a discriminated union (a sum type) so that operations work in different ways for different cases of the DU. But the problem with exposing the cases of a DU is that you lose decoupling--you can't easily control future refactoring of the type because user code will depend on the existing DU cases.

Another problem is that certain operations may not make sense for every state of the type. Let's try an example:

namespace MyProject

module Bulb =
  type t = On | Off

  exception Already_on
  exception Already_off

  let turn_on = function
    | Off -> On
    | On -> raise Already_on

  let turn_off = function
    | On -> Off
    | Off -> raise Already_off

Both of the critical operations on the Bulb.t type raise exceptions when called in the wrong state. Sure, we could have returned a None : Bulb.t option instead when something went wrong; but that just pushes the check onto the caller somewhere else. There is a better way: phantom types. Here's the above example converted:

(* bulb.fsi *)
namespace MyProject

module Bulb =
  type on
  type off
  type 'a t

  val on : on t
  val off : off t
  val turn_on : off t -> on t
  val turn_off : on t -> off t

(* bulb.fs *)
namespace MyProject

module Bulb =
  type on = interface end
  type off = interface end

  (* Phantom type--the type parameter on the left isn't used on the right-hand side. *)
  type 'a t = On | Off

  let on = On
  let off = Off

  (*
  Compile warnings are OK here because we're controlling allowed inputs.
  *)
  let turn_on Off = On
  let turn_off On = Off

Note that the compiler warns us here about incomplete pattern matches on the Bulb.t type, but we ignore them in this case because we're explicitly controlling allowable function inputs at the type level. We can turn off this warning with a #nowarn "25" compiler directive in the bulb.fs file, but in general turning off warnings is not a good idea.
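
To see the payoff, here's a quick sketch of what callers can and can't write (the compiler error is paraphrased):

let lit = Bulb.turn_on Bulb.off     (* fine: off t -> on t *)
let dark = Bulb.turn_off lit        (* fine: on t -> off t *)

(* This doesn't compile: turn_on wants an 'off t' but Bulb.on is an 'on t',
   so the mistake is reported as a type mismatch at compile time instead of
   raising Already_on at runtime, as the first version did. *)
(* let boom = Bulb.turn_on Bulb.on *)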

Prefer records of functions to interfaces

Here is a good write-up about this, but let me reiterate: interface methods can't easily be passed around, unlike functions stored in record fields (which can be closures, remember). An example:

(* api.fs *)
namespace MyProject

module Api =
  exception Invalid_id

  type 'a t =
    { get_exn : int64 -> 'a Async
      add : 'a -> unit Async
      remove_exn : int64 -> unit Async
      list : unit -> 'a seq Async }

(* person.fs *)
namespace MyProject

module Person =
  type t = { id : int64; name : string }

  let make id name = { id = id; name = name }
  let id { id = i } = i
  let name { name = n } = n

  (* val api : t Api.t *)
  let api =
    { get_exn = fun id -> ...
      add = fun person -> ...
      remove_exn = fun id -> ...
      list = fun () -> ... }

Above we define a generic data-store API and a domain type that implements it. Note that, as a convention, we add an _exn suffix to code which might raise an exception. This helps the reader prepare to reason about exception handling.
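
And because the API is just a record value, it can be passed around like any other value. Here's a sketch of a caller that works against any 'a Api.t (the Report module is hypothetical):

(* report.fs *)
namespace MyProject

module Report =
  (* Works for any domain type, given its Api record and a way to describe items. *)
  let print_all (api : 'a Api.t) (describe : 'a -> string) =
    async {
      let! items = api.list ()
      for item in items do
        printfn "%s" (describe item) }

(* Usage: Report.print_all Person.api Person.name |> Async.RunSynchronously *)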

Wrap OOP-style APIs in modular F#

When dealing with traditional OOP-style APIs, e.g. Windows Forms, try to wrap them up in modular F# and expose only the F# abstraction. E.g., suppose you're writing a simple GUI calculator app in WinForms. You need to (1) write out the logic and make it testable; and (2) render the GUI in a form. Solution--put these things in a module:

(* calculator.fsi *)
namespace CalculatorApp

module Calculator =
  type number = Zero | One | Two | ... | Nine
  type op = Plus | Minus | ... | Sqrt

  type t

  val init : t

  (*
  We will implement these operations as mutating to allow a more fluent,
  pipeline-based API like:

  let result =
    calc
      |> clear
      |> press_number Five
      |> press_op Plus
      |> press_number Six
      |> calculate

  At this point, `calc` is in a state where it holds a calculation
  result. We can `clear` it to do more calculations.
  *)

  val clear : t -> t
  val press_number : number -> t -> t
  val press_op : op -> t -> t
  val press_decimal : t -> t
  val calculate : t -> double

  (*
  Draws the app and hooks up event handlers to the above operations.
  *)
  val render : t -> System.Windows.Forms.Control

Now in the implementation, follow the types to implement the business logic and the GUI. Note how the logic is testable because the main operations are exposed, but the messy OOP GUI rendering is not--the caller just gets back a Control to embed in their main window. This is composable and readable.
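
For example, a test of the logic needs nothing from WinForms at all. A sketch (the file and module names are mine, and I use a bare assert rather than any particular test framework):

(* calculator_test.fs *)
namespace CalculatorApp.Tests

module Calculator_test =
  open CalculatorApp

  let ``5 + 6 = 11`` () =
    let result =
      Calculator.init
        |> Calculator.clear
        |> Calculator.press_number Calculator.Five
        |> Calculator.press_op Calculator.Plus
        |> Calculator.press_number Calculator.Six
        |> Calculator.calculate
    assert (result = 11.0)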

General philosophy

To conclude, we aim for these design points:

  • Information hiding with interface files
  • Simpler, less noisy syntax is better
  • Modular is better--ideally everything is in a module
  • Logic encoded in the types is better--you get compile-time guarantees and have to do less runtime error handling.

Feb 27, 2017

Can F# be liberated from the .NET architecture?

Recently I've been thinking about the delicate situation that the F# language designers find themselves in whenever they want to introduce new features (especially new semantics) into the language, or whenever the C# team want to introduce features that may affect the F# language or the Common Language Runtime.

The problem has clearly existed for a while; F# hasn't gotten any significant new features since its last major release. The latest example to pop up is probably the uncertainty over adding typeclasses, especially now that it looks like C# is thinking about adding them too.

So how can F# be freed from its ties to C# and the .NET runtime? Well, one way would be to split the F# compiler into two parts: a frontend that compiles F# code to its own intermediate language (let's call it F# Intermediate Language, or FSIL), and a backend that compiles FSIL to some other target, like MSIL (or JavaScript as Fable does, JVM bytecode, native code, ...).

Compiling F# to FSIL

The main advantage of this is that the F# language can evolve independently of C# and the CLR. The compiler frontend need only worry about doing a great job of implementing F# syntax and semantics in terms of a single target representation that it fully controls. The language could rapidly experiment with new syntax and semantics, or indeed with traditional ML-family semantics, just by defining an FSIL general enough to target.

Some examples of this would be implementing typeclasses, or ML-style parameterised modules (functors), or Scala-style implicit evidences, or an easy-to-use call-by-name function parameter declaration syntax for easier control-flow DSLs, or higher-kinded types, or ... any of a myriad of things. All you have to do is teach the frontend to understand your new syntax and compile it to FSIL.

The nice thing about FSIL is that it would preserve nice concepts like units of measure, typeclasses, ML modules and functors, etc., because it would be specifically built to understand them. F# modules would be compiled down to FSIL files on a one-to-one basis, and F#-specific libraries could be distributed as FSIL packages. Users could link against them to get the nice higher-level functionality that MSIL doesn't offer.

Compiling FSIL to MSIL (and others)

This step would be concerned with the best way to encode FSIL in terms of the target language or runtime. E.g., currently F# modules are encoded as static MSIL classes. This is obviously a great fit, and similar encodings may be found for the higher-level F# features.

But no doubt some FSIL functionality would be erased--e.g. functors would be defunctorised and compiled away to nothing. Any concrete instantiations of functors would survive as normal modules, encoded as static classes as usual. Likewise, typeclasses would be compiled down to passing instances in explicitly in the MSIL encoding.
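
As a rough sketch of what 'passing instances in explicitly' could look like, here is the classic dictionary-passing encoding written out by hand in today's F# (the names are mine; this is an illustration, not the output of any existing compiler):

(* A 'Show' typeclass becomes a record of operations... *)
type 'a show = { show : 'a -> string }

(* ...each instance becomes an ordinary value of that record type... *)
let show_int : int show = { show = fun i -> string i }
let show_bool : bool show = { show = sprintf "%b" }

(* ...and a 'Show a =>' constraint becomes an explicit extra argument. *)
let print_twice (inst : 'a show) (x : 'a) =
  printfn "%s %s" (inst.show x) (inst.show x)

(* print_twice show_int 42     prints "42 42"    *)
(* print_twice show_bool true  prints "true true" *)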

With this organisation, other backends could spring up. Fable could become a backend from FSIL to ES5/6/.... Someone could write a backend for FSIL to JVM, or the Erlang runtime (BEAM), or the OCaml runtime. There are quite a few possibilities opened up by decoupling the two ends from each other.

Optimisation

With FSIL we would have the opportunity to run separate optimisation passes on the higher-level, typed, F#-focused FSIL, and on the lower-level CLR (or other target) output. At each stage we could take advantage of the information available at that point in the process, without worrying about the previous or following stages.

F#ixing up

There are many advantages to a decoupled frontend/backend design for the F# toolchain. Possibly the biggest one is that it places F#'s future as a powerful language squarely in the hands of the F# language designers, and lifts from them the burden of having to worry about compatibility with the rest of .NET.

Jan 21, 2017

BuckleScript: a significant new OCaml to JavaScript compiler

RECENTLY I've been paying attention to the BuckleScript compiler from OCaml to JavaScript (ES5, to be exact). I'm seeing some significant advances that it brings to the state of the art, and thought I would share.

A little background: I use Scala.js at work, and it is rock-solid. Once you're set up with a project, you are pretty much good to go. Bindings to significant JS libraries are high-quality; I use some of the biggest names in JS frontend dev libraries today, and all in a nice Scala-style object-functional layer.

That aside, I still periodically look around the ecosystem and try to evaluate the latest developments. I've been hearing about BuckleScript for a while, but only recently decided to try it out for a side project, after trying and failing to understand how `this` works in JavaScript.

So, without further ceremony, let me present my findings.

BuckleScript's compiler is insanely fast, because Bob Zhang (the project lead) has taken to heart the OCaml obsession with performance and routinely tries to optimise away millisecond-level delays. Once you get a taste of that speed (entire project compiled faster than you can blink), you'll find it difficult to go back to something slower. It's like getting to use git after you've used svn all your life.

It compiles to idiomatic, readable ES5, with nice indentation, (almost) no name mangling, and a one-to-one mapping from OCaml modules to JavaScript modules (whichever kind you prefer: CommonJS/require, AMD, Google).

It targets and integrates with the existing npm ecosystem; it doesn't try to introduce yet another package manager. It makes writing bindings (types) for existing JS libraries reasonably easy, and the documentation (the manual especially) is fantastic at guiding you through that.

OCaml is a bit of an odd duck syntax-wise, even among the functional programming languages. There are nuances to get used to. But once you get used to them, it is a pleasure to program in. And if you just can't get used to them, you can always try out Facebook's Reason, which is an alternative, JavaScript-lookalike syntax for OCaml.

This focus on integration and ease of reuse of the JavaScript ecosystem means it's feasible to leverage the npm package collection in your pure OCaml project. You can deploy a backend server which performs core functions as a statically compiled, native binary (i.e. not nodejs); deploy ES5 nodejs services which take advantage of specialised npm packages for MSSQL querying, or SOAP clients, or what have you; and you can deploy ES5 in your frontend webapp scripts, all written in pure OCaml.

So, why OCaml specifically? After all, there are plenty of nice languages out there.

As it turns out, that OCaml obsession with speed and type-safety serves it well here. It's a pragmatic, simple, and matter-of-fact language, and its runtime model maps very well to the JavaScript runtime model, while also preserving important compile-time safety guarantees.

I should emphasise that it's pragmatic: you're not forced to deal with the added mental load of monads and other type system rabbit holes--but they're all available if you need them! Personally, I feel that laziness, purity, and monads have driven away more people than they've attracted. I think that OCaml gets the balance right. Others obviously feel differently. But in concrete terms, BuckleScript is a significant contribution that shouldn't be missed.

If you've developed in a compiled language for any length of time and like type-safety guarantees, after trying BuckleScript you'll be asking yourself how much time you've wasted over the years waiting for your compiler to finish so you can continue your edit-compile cycle. Maybe it's best not to think too much about that.

Nov 4, 2016

Nana

NANA is what I called him, but to the world he is the late Lutful Quadir Chowdhury: a well-loved husband, father, and grandfather, held in great respect by his large extended family, and a highly-regarded elder statesman of the South Asian banking community. For a time he settled down and raised his family in then-West Pakistan, but when the time came he chose to migrate to the newly-formed Bangladesh.

My Nana was a foundational rock of my childhood in Dhaka. To me his spacious home compound, with its large bungalow, lush green back lawn, and giant driveway with two garages at opposite ends, was my personal domain to explore and play in. I spent many happy hours there running, jumping, cycling, and playing cricket, soccer, badminton, carrom-board, Monopoly ... and every other childhood activity. I remember monsoon seasons when we would bring the storm shutters down and the family would gather for tea and watch the rain pour down outside through the wall-to-wall balcony windows. Hours and hours of poring through his rooms and shelves filled to bursting with books. To this day I have dreams that are set in that house.

Nana was a collector of knick-knacks that any hipster today would give his right arm for. Hand-powered ice-cream makers, spotless Swiss food processor sets, a video cassette recorder that he would bring out to keep me entertained, a slide rule he gave me when I showed signs of interest in math ... but books got the pride of place in his home. They were proudly displayed on his shelves, kept in storage in room after room, kept near his side of the bed for easy access. I would spend hours entranced by those books. Every subject you can think of, from Greek mythology to Somerset Maugham, O. Henry (The Gift of the Magi) to Khushwant Singh. I, a small child, absorbed as much of them as he would lend me, like a sponge. They awakened my interest in chess, algebra, calculus, mythology, history, essays, poetry, ... the world.

He would take me for fun-filled trips to the video cassette rental store, treat us out to fast food, order in food whenever we visited (we accidentally got cuttlefish once instead of his order of cutlets), show me how he recorded his expenses in his ledger, and all the while carry out his responsibilities as a busy man even after retirement.

He would receive many visitors, whether friends, family, or others; to some he would give the help or advice they sought; with others he would enjoy a few rounds of chess; and from others he would procure various services: palm tree tappers, snake charmers, gardeners, hairdressers....

He would make the rounds of his home every evening, one of his well-loved flashlights in hand, checking that all windows were securely fastened, all doors were locked and bolted shut, and that all of his household were accounted for. He would discourage anyone from leaving the house after nightfall, and be up again in the early morning, making sure everyone was awake. I've never been a morning person, and he always had to spend quite a lot of time convincing me to wake up.

I think Nana, coming from a zamindari (landowning) family that lost much of its wealth to the turbulent times, had an idea in his mind of what his life should be as a patriarch of his family, and he made that a reality through sheer hard work and persistence. He created an evergreen world, a perfect world for a childhood.

In the years after that, I was away from Bangladesh most of the time. I didn't get the chance or take the opportunity to see Nana as much as I should have. For years I kept telling myself I would meet him again and we would have another game of chess. Earlier this year I finally saw him and we had our game. I told him I would come back and we would have a rematch, and he agreed. I told him my younger brother would visit him soon, and he was glad to hear it. It gave him something to look forward to. I'm so grateful I was able to keep at least one promise.

After his passing, my overwhelming feeling is gratitude that I knew him.

Oct 16, 2016

The Birkana hexadecimal number symbols

AMONG number systems, the hexadecimal system of counting (or 'radix') has a special place in the hearts of programmers, being closely related to binary, the fundamental number system used by all modern computers. Unlike decimal, which counts ten numbers (0 through 9) before having to add a 'place' (an order of magnitude), hexadecimal allows us to count sixteen numbers (0 through 9, then A through F) before having to add a place.

While the hexadecimal notation of using the first six letters of the alphabet is practical in a rough-and-ready sort of way, no one ever accused it of being elegant. There does, however, exist a fairly elegant (and I think clever) notation for writing hexadecimal--it's called Birkana. The only problem with Birkana is that most of the internet seems to have forgotten about it.

Well, almost all. I obviously came across Birkana somewhere on the internet at some point, years ago. And there may well be many posts written about it in the back reaches of the Google search engine. But to this day, the only result I've managed to find in my searches has been a mailing list post in the discussions of The International Slide Rule Group--probably not a very prominent corner of the online world.

That's too bad, because Birkana is pretty cool and it does deserve a proper introduction. So, what is it exactly?

Birkana is a runic symbol set for writing hexadecimal numbers, designed in such a way that the shape of each digit rune is built by combining a basic shape for the number zero with accents corresponding to the numbers 1, 2, 4, and 8. In the right combinations, they can express any number from 0 to F. Here are the shapes for the 'building block' numbers:

[SVG figures: the rune shapes for 0x0, 0x1, 0x2, 0x4, and 0x8]

As you can see, every combination of the zero line and the accents will give us a hexadecimal number (between 0 and F). I find it easiest to sum up starting at the highest possible number, then adding on the next-highest number, and so on. Some examples:

[SVG figures: the rune shapes for 0x3 (= 0x2 + 0x1), 0x6 (= 0x4 + 0x2), 0x9 (= 0x8 + 0x1), 0xD (= 0x8 + 0x4 + 0x1), and 0xF (= 0x8 + 0x4 + 0x2 + 0x1)]
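
Put differently, the accents are just the binary digits of the value, so reading off a digit's accents is a bit test. A tiny F# sketch of the decomposition (the function is mine, for illustration only):

(* Which accents (8, 4, 2, 1) make up a given hex digit? *)
let accents digit =
  [ 8; 4; 2; 1 ] |> List.filter (fun accent -> digit &&& accent <> 0)

(* accents 0x6 = [4; 2]        accents 0xD = [8; 4; 1] *)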

Notice also the exact positions of the accents--they intersect with the zero line at either the top, middle, or bottom, and they end up parallel to exactly a quarter of the way up or down the zero line. The design may have been inspired by ancient runes, but it is very carefully thought-out.

Unfortunately, at the moment it's not practical to type in Birkana in any digital format. Theoretically, the Unicode Consortium could decide to add the Birkana symbols to the Unicode specification and some enterprising font designer could come up with a set for general use. Until then, you're unlikely to come across Birkana symbols on the internet. So for now, enjoy them here, and feel free to copy the SVG shapes off this page's source.

Feb 8, 2016

The Essence of Phantom Types in Scala

The phantom of the type opera

HEIKO Seeberger over at the Codecentric blog published an interesting post about using Scala's typelevel programming to encode phantom types in a strict way so that you could tightly control the types that are allowed to be phantasmal.

For example, if you have a hacker: Hacker[Decaffeinated], and you call hacker.hackOn, you want a compile-time error saying essentially that a decaffeinated hacker can't hack on.

Heiko's techniques make some tradeoffs:

  • Having to encode the methods' phantom type requirements as type bound sandwiches or implicit evidence of types. This is required if we want to keep using object-oriented method call syntax.
  • Having to bound the Hacker's phantom type parameter to an explicit hierarchy of allowed phantom types, i.e. either State or its subtypes Caffeinated, Decaffeinated. I believe this is unnecessary, as the Hacker constructor is private and the companion object provides smart constructors that allow you to get only a Hacker[State.Caffeinated] or a Hacker[State.Decaffeinated].

If we trade away the object-oriented syntax, and give up the unnecessary phantom type hierarchy, we get:

class Hacker[S] private () /* 1 */

object Hacker {
  trait Decaffeinated /* 4 */
  trait Caffeinated /* 5 */

  val caffeinated: Hacker[Caffeinated] = new Hacker /* 7 */
  val decaffeinated: Hacker[Decaffeinated] =
    new Hacker /* 8 */

  def hackOn( /* 10 */
    hacker: Hacker[Caffeinated]): Hacker[Decaffeinated] = {
    println("Hacking, hacking, hacking!")
    decaffeinated /* 12 */
  }

  def drinkCoffee( /* 15 */
    hacker: Hacker[Decaffeinated]): Hacker[Caffeinated] = {
    println("Slurp...")
    caffeinated /* 18 */
  }
}

In this version, a few things are going on. By line number:

  • Line 1: We make the phantom type parameter unbound and invariant, because we made the constructor private. Client code can't create any Hackers; it can only use the ones we provide. Also, we move all the logic out of the class body and into the companion object.
  • Lines 4, 5: We don't need a phantom type hierarchy, or even to seal the traits, because again client code can only use the Hackers that we provide, and the phantom type parameter is, as mentioned, invariant. Also, I don't put the states inside their own companion object because they're merely incidental to the main logic; they don't have any logic dedicated to them.
  • Lines 7, 8: We provide the smart constructors here. Note that the constructors don't need to be methods; they can just be values, because these values are immutable, so operations on them can keep reusing the same values.
  • Lines 10, 15: We make the hackOn and drinkCoffee functions both take and return a Hacker with the correct phantom type, to explicitly show the transitions Hacker[Caffeinated] => Hacker[Decaffeinated] and Hacker[Decaffeinated] => Hacker[Caffeinated].
  • Lines 12, 18: We use our own smart constructors internally to separate interface from implementation as much as possible.

With the above code, we can get the highly desirable 'type mismatch' error that immediately tells us what's wrong:

scala> Hacker drinkCoffee Hacker.caffeinated
<console>:8: error: type mismatch;
 found   : Hacker[Hacker.Caffeinated]
 required: Hacker[Hacker.Decaffeinated]
              Hacker drinkCoffee Hacker.caffeinated
                                        ^

scala> Hacker hackOn (Hacker hackOn (Hacker drinkCoffee Hacker.decaffeinated))
<console>:8: error: type mismatch;
 found   : Hacker[Hacker.Decaffeinated]
 required: Hacker[Hacker.Caffeinated]
              Hacker hackOn (Hacker hackOn (Hacker drinkCoffee Hacker.decaffeinated))
                                    ^

Admittedly, we've given up object-oriented syntax to get here. But I personally think the tradeoff is worth it.

Dec 30, 2015

How does the State monad work?

HANDLING state in a monadic way is one of the techniques Haskellers come to learn about. But how does it work, roughly?

The following is a simplified, intuitional explanation of the state monad. It's basically a trick of function currying. Suppose you have some program state, and some functions that do something to your program state:

type MyState = Int

increaseState :: MyState -> Int -> MyState
increaseState myState n = myState + n

decreaseState :: MyState -> Int -> MyState
decreaseState myState n = myState - n

That's workable, but it's not composable: it's inconvenient to keep wrapping function calls like:

let myState = 0
in increaseState (decreaseState myState 1) 2

Because the earlier states are wrapped inside the first argument, and you have to also provide a second argument, you don't get any syntactic wins here.

But if you express the functions a little differently:

increaseState :: Int -> MyState -> MyState
increaseState n myState = myState + n

decreaseState :: Int -> MyState -> MyState
decreaseState n myState = myState - n

You can take advantage of the right-associativity of the function arrow and think of the function types as Int -> (MyState -> MyState). Now, this return type is a transformation from an initial state to a final state. If we alias the type: type StateWrapper = MyState -> MyState, we can write our function types as Int -> StateWrapper.

So, the new function types are:

increaseState, decreaseState :: Int -> StateWrapper

If we say increaseState 2, we get a result of type StateWrapper. If we say decreaseState 1, we get a result also of type StateWrapper. If we compose these two values (which are actually functions, remember), we get a final value of type StateWrapper, that is MyState -> MyState. In fact, no matter how many stateful operations we do, if we compose them all in sequence, we get a final function of MyState -> MyState.

At the end of the sequencing (the composition), all we have to do is feed in the initial state, and we get back the final state after carrying out all the 'stateful' operations.

So, you ask, why the need for a state monad at all? Why not just do function composition in the first place? The answer is that the state wrapper type is slightly more complicated than what we've seen above. In reality, it's closer to type StateWrapper s a = s -> (a, s), that is, given an initial state s, it returns a result value a and a final state s. A series of functions of this type can't be composed together using normal function composition; and hence we use monadic binding to do the job.

So monadic binding is a more powerful form of function composition and hence action sequencing that can handle unruly types (but which still follow a certain pattern, hence the famous (>>=) :: m a -> (a -> m b) -> m b).
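
To make the shape of that binding concrete, here is the idea sketched in F# (the language I use elsewhere on this blog); it mirrors the Haskell types above but is only an illustration, not Haskell's actual State implementation:

(* A 'stateful value': a function from an initial state to a result and a final state. *)
type StateWrapper<'s, 'a> = 's -> 'a * 's

(* Monadic bind: run the first step, hand its result to the next step,
   and thread the state through behind the scenes. *)
let bind (m : StateWrapper<'s, 'a>) (f : 'a -> StateWrapper<'s, 'b>) : StateWrapper<'s, 'b> =
  fun initial ->
    let result, next = m initial
    f result next

let increaseState n : StateWrapper<int, unit> = fun s -> ((), s + n)
let decreaseState n : StateWrapper<int, unit> = fun s -> ((), s - n)

(* Nothing runs until we feed in the initial state at the very end. *)
let program = bind (increaseState 2) (fun () -> decreaseState 1)
let ((), finalState) = program 0   (* finalState = 1 *)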

To get a more accurate idea of how the state monad works, see Graham Hutton's excellent tutorial.