Tuesday, January 28, 2014

Ember.js is driving me crazy

For the past few months I've been working on a project with a fairly complex interactive web interface. This required me to venture into the wild and unpredictable jungle of JavaScript development. I was totally unprepared for what I would find. Soon after starting the project it became clear that jQuery alone would not be sufficient, and that I needed a higher-level JavaScript framework. After doing a little research I settled on Ember.js.

The Zombie Code Apocalypse

Ember was definitely a big improvement over straight jQuery, and allowed me to get some fairly complex UI behavior working very quickly. But recently I've run into some problems. The other day I had a UI widget defined like this:

App.FooController = Ember.ObjectController.extend({
    // ...
});
App.FooView = Ember.View.extend({
    // ...
});

It was used somewhere on the page, but at some point I decided that the widget was no longer needed, so I commented out the widget's markup. I wasn't sure whether we would ultimately keep the widget or not, so I opted to keep the above JavaScript code for the controller and view around for a while so it would be easily available if I later decided to re-enable that UI element.

Everything seemed to work fine until a few days later when I noticed that another one of my controls, Bar, was not being populated with data. After spending hours trying to figure out the problem, I finally happened to comment out the unused code for the Foo widget and the problem went away. WTF?!? Why should this have anything to do with the functioning of a completely unrelated widget? This makes absolutely no sense to me, and it completely violates the natural assumption that the controller and view for two completely unrelated controls would have no impact on each other. I would have liked to know the underlying cause, but I didn't want to waste time with it, so I just removed the code and moved on.

Spontaneously Changing Values

Maybe a week later I ran into another problem. Some data was changing when I didn't expect it to. I looked everywhere I could think of that might affect the data, but couldn't find anything. Again, I spent the better part of a day trying to track down the source of this problem. After a while I was getting desperate, so I started putting print statements all over the place. I discovered that the data was changing in one particular function. I examined it carefully but couldn't find any hint of this data being impacted. Eventually I isolated the problem to the following snippet:

console.log(this.get('foo'));
this.set('bar', ...);
console.log(this.get('foo'));

The first log line showed foo with a value of 25. The second log line showed foo with a value of 0. This is utter madness! I set one field, and a completely different one gets changed! In what world does this make any shred of sense? This time, even when I actually figured out where the problem was happening I still couldn't figure out how to solve it. At least the first time I could just comment out the offending (seemingly innocuous) lines. Here I had narrowed down the exact line that was causing the problem, but still couldn't figure out how to fix it. Finally I got on the #emberjs IRC channel and learned that Ember's set function has special behavior for values in the content field, which foo was a part of. I was able to fix this problem by initializing the bar field to null. WAT?!?

I was in shock. This seemed like one of the most absurd behaviors I've encountered in all my years of programming. Back in the C days you could see some crazy things, but at least you knew that array updates and pointer arithmetic could be dangerous and possibly overwrite other parts of memory. Here there's no hint. No dynamic index that might overflow. Just what we thought was a straightforward getter and setter for a static field in a data type.

Blaming Systems, Not People

Before you start jumping all over me for all the things I did wrong, hear me out. I'm not blaming the Ember developers or trying to disparage Ember. Ember.js is an amazing library and my application wouldn't exist without it or something like it. I'm just a feeble-minded Haskell programmer and not well-versed in the ways of JavaScript. I'm sure I was doing things that contributed to the problem. But that's not the point. I've been around long enough to realize that there are probably good justifications for why the above behaviors exist. The Ember developers are clearly way better JavaScript programmers than I will ever be. There's got to be a better explanation.

Peter Senge, in his book The Fifth Discipline, talks about the beer distribution game. It's a game that has been played thousands of times with diverse groups of people in management classes all over the world. The vast majority of people who play it perform very poorly. Peter points out that we're too quick to attribute a bad outcome to individual people when it should instead be attributed to the structure of the system in which those people were operating. This situation is no different.

Like the beer distribution game, JavaScript is a complex system. The above anecdotes demonstrate how localized, well-intentioned decisions by different players resulted in a bad outcome. The root of the problem is the system we were operating in: an impure programming language with weak dynamic typing. In a different system, say the one we get with Haskell, I can conclusively say that I never would have had these problems. Haskell's purity and strong static type system provide a level of safety that is simply unavailable in JavaScript (or any other mainstream programming language, for that matter).

The Godlike Refactoring

In fact, this same project gave us another anecdote supporting this claim. The project's back end is several thousand lines of Haskell code. I wrote all of the back end code, and since we have a pretty aggressive roadmap with ambitious deadlines, the code isn't exactly all that pretty. There are a couple of places with some pretty hairy logic. A few weeks ago we needed to do a major refactoring of the back end to support a new feature. I was too busy with other important features, so another member of the team worked on the refactoring. He had not touched a single line of the back end code before that point, but thanks to Haskell's purity and strong static type system he was able to pull off the entire refactoring single-handedly in just a couple of hours. And once he got it compiling, the application worked the first time. We are both convinced that this feat would have been impossible without strong static types.

Conclusion

I think there are a couple of interesting points worth thinking about here. First of all, the API chosen by Ember only hid the complexity; it didn't reduce it. What seemed to be a simple get() method was actually a more complex system with special cases, so the system was more complex than the API suggested. It's useful to think about the true complexity of a problem compared to the complexity of the exposed API.

The second point is that having the ability to make categorical statements about API behavior is very important. We use this kind of reasoning all the time, and the more of it we can do, the fewer assumptions we have to question when something isn't behaving as we expect. In this case, I made the seemingly reasonable categorical assumption that unused class definitions would have no effect on my program. But for some reason that I still don't understand, it was violated. I also made the categorical assumption that Ember's get() and set() methods worked like they would work in a map. But that assumption didn't hold up either. I encounter assumptions that don't hold up all the time. Every programmer does. But rarely are they so deeply and universally held as these.

So what can we learn from this? In The Fifth Discipline, Senge goes on to talk about the importance of thinking with a systems perspective; about how we need to stop blaming people and focus more on the systems involved. I think it's telling that in my 5 or 6 years of Haskell programming I've never seen a bug as crazy as these two, which I encountered after working only a few months on a significant JavaScript project. Haskell, with its purity and strong static type system, allows me to make significantly more confident categorical statements about what my code can and cannot do. That allows me to more easily build better abstractions that actually reduce complexity for the end user instead of just hiding it away in a less frequented area.

Thursday, December 20, 2012

Haskell Web Framework Matrix

A comparison of the big three Haskell web frameworks on the most informative two axes I can think of.

[Figure: a two-axis matrix placing Snap, Happstack, and Yesod along one axis running from DSLs to combinators, and another running from "some things dynamic" to "everything type-safe".]

Note that this is not intended to be a definitive statement of what is and isn't possible in each of these frameworks. As I've written elsewhere, most of the features of each of the frameworks are interchangeable and can be mixed and matched. The idea of this matrix is to reflect the general attitude each of the frameworks seems to be taking, because sometimes generalizations are useful.

Thursday, November 8, 2012

Using Cabal With Large Projects

In the last post we talked about basic cabal usage. That all works fine as long as you're working on a single project and all your dependencies are in hackage. When Cabal is aware of everything that you want to build, it's actually pretty good at dependency resolution. But if you have several packages that depend on each other and you're working on development versions of these packages that have not yet been released to hackage, then life becomes more difficult. In this post I'll describe my workflow for handling the development of multiple local packages. I make no claim that this is the best way to do it. But it works pretty well for me, and hopefully others will find this information helpful.

Consider a situation where package B depends on package A and both of them depend on bytestring. Package A has wide version bounds for its bytestring dependency while package B has narrower bounds. Because you're working on improving both packages, you can't just do "cabal install" in package B's directory, because the correct version of package A isn't on Hackage. But if you install package A first, Cabal might choose a version of bytestring that won't work with package B. It's a frustrating situation because you end up worrying about dependency issues that Cabal should be handling for you.
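
As a sketch of the kind of conflict I mean (the package names and version bounds here are hypothetical), the two .cabal files might declare something like this:

-- A.cabal: wide bytestring bounds
build-depends: base >= 4 && < 5, bytestring >= 0.9 && < 1.0

-- B.cabal: depends on the unreleased A plus a narrower bytestring range
build-depends: base >= 4 && < 5, A >= 0.2 && < 0.3, bytestring >= 0.9.2 && < 0.10

Installing A on its own lets Cabal pick any bytestring below 1.0; if it happens to pick 0.10 or later, B's build is then stuck.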

The best solution I've found to the above problem is cabal-meta. It lets you specify a sources.txt file in your project root directory with paths to other projects that you want included in the package's build environment. For example, I maintain the snap package, which depends on several other packages that are part of the Snap Framework. Here's what my sources.txt file looks like for the snap package:

./
../xmlhtml
../heist
../snap-core
../snap-server

My development versions of the other four packages reside in the parent directory on my local machine. When I build the snap package with cabal-meta install, cabal-meta tells Cabal to look in these directories in addition to whatever is in hackage. If you do this initially for the top-level package, it will correctly take into consideration all your local packages when resolving dependencies. Once you have all the dependencies installed, you can go back to using Cabal and ghci to build and test your packages. In my experience this takes most of the pain out of building large-scale Haskell applications.

Another tool that is frequently recommended for handling this large-scale package development problem is cabal-dev. cabal-dev allows you to sandbox builds so that differing build configurations of libraries can coexist without causing problems like they do with plain Cabal. It also has a mechanism for handling this local package problem above. I personally tend to avoid cabal-dev because in my experience it hasn't played nicely with ghci. It tries to solve the problem by giving you the cabal-dev ghci command to execute ghci using the sandboxed environment, but I found that it made my ghci workflow difficult, so I prefer using cabal-meta which doesn't have these problems.

I should note that cabal-dev does solve another problem that cabal-meta does not. There may be cases where two different packages are completely unable to coexist in the same Cabal "sandbox" because their sets of dependencies are not compatible. In that case, you'll need cabal-dev's sandboxes instead of the single user-level package repository used by Cabal. I am usually only working on one major project at a time, so this problem has never been an issue for me. My understanding is that people are currently working on adding this kind of local sandboxing to Cabal/cabal-install. Hopefully that will fix my complaints about ghci integration and make cabal-dev unnecessary.

There are definitely things that need to be done to improve the Cabal tool chain. But in my experience working on several large Haskell projects, both open source and proprietary, I have found that the current state of Cabal combined with cabal-meta (and maybe cabal-dev) does a reasonable job of handling large project development within a very fast-moving ecosystem.

Friday, November 2, 2012

A Practical Cabal Primer

I've been doing full-time Haskell development for almost three years now, and while I recognize that Cabal has been painful to use at times, the current reality is that Cabal does what I need it to do and for the most part stays out of my way. In this post, I'll describe the Cabal best practices I've settled on for my Haskell development.

First, some terminology. GHC is the de facto Haskell compiler, Hackage is the package database, Cabal is a library providing package infrastructure, and cabal-install is a command line program (confusingly called "cabal") for building and installing packages and for downloading them from and uploading them to Hackage. This isn't a tutorial for installing Haskell, so I'll assume that you at least have GHC and cabal-install's "cabal" binary. If you're using a very recent release of GHC, then you're asking for problems. At the time of this writing GHC 7.6 is a few months old, so don't use it unless you know what you're doing. Stick to 7.4 until maintainers have updated their packages. But do make sure you have the most recent versions of Cabal and cabal-install, because they have improved significantly.

cabal-install can install things globally or per-user. You usually have to have root privileges to install globally. Installing per-user puts packages in your home directory. Executable binaries go in $HOME/.cabal/bin. Libraries go in $HOME/.ghc. Other than the packages that come with GHC, I install everything per-user. This means that when I upgrade cabal-install with "cabal install cabal-install", the new binary won't take effect unless $HOME/.cabal/bin is at the front of my path.

Now I need to get the bad news over with up front. Over time your local Cabal package database will grow until it starts to cause problems. Whenever I'm having trouble building packages, I'll tinker with things a little to see if I can isolate the problem, but if that doesn't work, then I clean out my package repository and start fresh. On Linux this can be done very simply with rm -fr ~/.ghc. Yes, this feels icky. Yes, it's suboptimal. But it's simple and straightforward, so either deal with it, or quit complaining and help us fix it.

I've seen people also say that you should delete the ~/.cabal directory as well. Most of the time that is bad advice. If you delete .cabal, you'll probably lose your most recent version of cabal-install, and that will make life more difficult. Deleting .ghc completely clears out your user package repository, and in my experience is almost always sufficient. If you really need to delete .cabal, then I would highly recommend copying the "cabal" binary somewhere safe and restoring it after you're done.

Sometimes you don't need to go quite so far as to delete everything in ~/.ghc. For more granular control over things, use the "ghc-pkg" program. "ghc-pkg list" shows you a list of all the installed packages. "ghc-pkg unregister foo-2.3" removes a package from the list. You can also use unregister without the trailing version number to remove every installed version of that package. If there are other packages that depend on the package you're removing, you'll get an error. If you really want to remove it, use the --force flag.

If you force unregister a package, then "ghc-pkg list" will show you all the broken packages. If I know that there's a particular hierarchy of packages that I need to remove, then I'll force remove the top one, and then use ghc-pkg to tell me all the others that I need to remove. This is an annoying process, so I only do it when I think it will be quicker than deleting everything and rebuilding it all.

So when do you need to use ghc-pkg? Typically I only use it when something breaks that I think should build properly. However, I've also found that having multiple versions of a package installed at the same time can sometimes cause problems. This can show up when the package I'm working on uses one version of a library, but when I'm experimenting in ghci a different version gets loaded. When this happens you may get perplexing error messages for code that is actually correct. In this situation, I've been able to fix the problem by using ghc-pkg to remove all but one version of the library in question.

If you've used all these tips and you still cannot install a package even after blowing away ~/.ghc, then there is probably a dependency issue in the package you're using. Haskell development is moving at a very rapid pace, so the upstream package maintainers may not be aware of the problem or may not have had time to fix it. You can help by alerting them to the problem, or better yet, including a patch to fix it.

Often the fix may be a simple dependency bump. These are pretty simple to do yourself. Use "cabal unpack foo-package-0.0.1" to download the package source and unzip it into the current directory. Then edit the .cabal file, change the bounds, and build the local package with "cabal install". Sometimes I will also bump the version of the package itself and then use that as the lower bound in the local package that I'm working on. That way I know it will be using my fixed version of foo-package. Don't be afraid to get your hands dirty. You're literally one command away from hacking on upstream source.
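
As a sketch (the package names and bounds here are made up), the edit usually amounts to widening one line in the unpacked package's .cabal file:

-- foo-package.cabal, before:
build-depends: base >= 4 && < 5, text >= 0.11 && < 0.12

-- after, allowing the newer text release:
build-depends: base >= 4 && < 5, text >= 0.11 && < 0.13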

For the impatient, here's a summary of my tips for basic cabal use:

  1. Install the most recent version of cabal-install
  2. Don't install things with --global
  3. Make sure $HOME/.cabal/bin is at the front of your path
  4. Don't be afraid to use rm -fr ~/.ghc
  5. Use ghc-pkg for fine-grained package control
  6. User "cabal unpack" to download upstream code so you can fix things yourself

Using these techniques, I've found that Cabal actually works extremely well for small-scale Haskell development--development where you're only working on a single package at a time and everything else is on Hackage. Large-scale development where you're developing more than one local package requires another set of tools. But fortunately we already have some that work reasonably well. I'll discuss those in my next post.

Thursday, November 1, 2012

Why Cabal Has Problems

Haskell's package system, henceforth just "Cabal" for simplicity, has gotten some harsh press in the tech world recently. I want to emphasize a few points that I think are important to keep in mind in the discussion.

First, this is a hard problem. There's a reason the term "DLL hell" existed long before Cabal. I can't think of any package management system I've used that didn't generate quite a bit of frustration at some point.

Second, the Haskell ecosystem is moving very quickly. There's the ongoing iteratees/conduits/pipes debate about how to do IO in an efficient and scalable way. Lenses have recently seen major advances in the state of the art. There is tons of web framework activity. I could go on and on. So while Hackage may not be the largest database of reusable code, the larger ones like CPAN that have been around for a long time are probably not moving as fast (in terms of advances in core libraries).

Third, I think Haskell has a unique ability to facilitate code reuse even for relatively small amounts of code. The web framework scene demonstrates this fairly well. As I've said before, even though there are three main competing frameworks, libraries in each of the frameworks can be mixed and matched easily. For example, web-routes-happstack provides convenience code for gluing together the web-routes package with happstack. It is 82 lines of code. web-routes-wai does the same thing for wai with 81 lines of code. The same thing could be done for Snap with a similar amount of code.

The languages with larger package repositories like Ruby and Python might also have small glue packages like this, but they don't have Haskell's powerful strong type system. This means that when a Cabal build fails because of dependency issues, you're catching an interaction much earlier than you would have caught it in the other languages. This is what I'm getting at when I say "unique ability to facilitate code reuse".

When you add Haskell's use of cross-module compiler optimizations to all these previous points, I think it makes a compelling case that the Haskell community is at or near the frontier of what has been done before even though we may be a ways away in terms of raw number of packages and developers. Thus, it should not be surprising that there are problems. When you're at the edge of the explored space, there's going to be some stumbling around in the dark and you might go down some dead end paths. But that's not a sign that there's something wrong with the community.

Note: The first published version of this article made some incorrect claims based on incorrect information about the number of Haskell packages compared to the number of packages in other languages. I've removed the incorrect numbers and adjusted my point.

Tuesday, April 17, 2012

LTMT Part 2: Monads

In part 1 of this tutorial we talked about types and kinds. Knowledge of kinds will help to orient yourself in today's discussion of monads.

What is a monad? When you type "monad" into Hayoo the first result takes you to the documentation for the type class Monad. If you don't already have a basic familiarity with type classes, you can think of a type class as roughly equivalent to a Java interface. A type class defines a set of functions involving a certain data type. When a data type defines all the functions required by the type class, we say that it is an instance of that type class. When a type Foo is an instance of the Monad type class, you'll commonly hear people say "Foo is a monad". Here is a version of the Monad type class.

class Monad m where
    return :: a -> m a
    (=<<) :: (a -> m b) -> m a -> m b

(Note: If you're the untrusting type and looked up the real definition to verify that mine is accurate, you'll find that my version is slightly different. Don't worry about that right now. I did it intentionally, and there is a method to my madness.)

This basically says that in order for a data type to be an instance of the Monad type class, it has to define the two functions return and (=<<) (pronounced "bind") that have the above type signatures. What do these type signatures tell us? Let's look at return first. We see that it returns a value of type m a. This tells us that m has the kind signature m :: * -> *. So whenever we hear someone say "Foo is a monad" we immediately know that Foo :: * -> *.

In part 1, you probably got tired of me emphasizing that a type is a context. When we look at return and bind, this starts to make more sense. The type m a is just the type a in the context m. The type signature return :: a -> m a tells us that the return function takes a plain value a and puts that value into the context m. So when we say something is a monad, we immediately know that we have a function called return that lets us put arbitrary other values into that context.

Now, what about bind? It looks much more complicated and scary, but it's really pretty simple. To see this, let's get rid of all the m's in the type signature. Here's the before and after.

before :: (a -> m b) -> m a -> m b
after  :: (a -> b) -> a -> b

The type signature for after might look familiar. It's exactly the same as the type signature for the ($) function! If you're not familiar with it, Haskell's $ function is just syntax sugar for function application. (f $ a) is exactly the same as (f a). It applies the function f to its argument a. It is useful because it has very low precedence and is right associative, so it is a nice bit of syntax sugar that allows us to eliminate parentheses in certain situations. When you realize that (=<<) is roughly analogous to the concept of function application (modulo the addition of a context m), it suddenly makes a lot more sense.
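
For example, these two definitions compute exactly the same thing; the second just uses $ to drop the parentheses:

negateSum1 :: Int
negateSum1 = negate (sum [1, 2, 3])

negateSum2 :: Int
negateSum2 = negate $ sum [1, 2, 3]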

So now what happens when we look at bind's type signature with the m's back in? (f =<< k) applies the function f to the value k. However, the crucial point is that k is a value wrapped in the context m, but f's parameter is an unwrapped value a. From this we see that the bind function's main purpose is to pull a value out of the context m and apply the function f, which does some computation and returns its result back in the context m again.
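
To make this concrete, here's a small sketch of an instance. I'm using primed names (Maybe', Monad', =<<<) purely to avoid clashing with the Prelude; the real instances live in the standard libraries, but the shape is the same.

data Maybe' a = Nothing' | Just' a
    deriving Show

class Monad' m where
    return' :: a -> m a
    (=<<<)  :: (a -> m b) -> m a -> m b

instance Monad' Maybe' where
    return' x         = Just' x    -- put a plain value into the context
    _ =<<< Nothing'   = Nothing'   -- no value in the context, nothing to apply
    f =<<< (Just' x)  = f x        -- unwrap the value, apply f; f rewraps the result

So (f =<<< Just' 3) runs f on 3, while (f =<<< Nothing') short-circuits to Nothing'.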

The monad type class does not provide any mechanism for unconditionally pulling a value out of the context. The only way to get access to the unwrapped value is with the bind function, but bind does this in a controlled way and requires the function to wrap things up again before the result is returned. This behavior, enabled by Haskell's strong static type system, provides complete control over side effects and mutability.

Some monads do provide a way to get a value out of the context, but the choice of whether to do so is completely up to the author of said monad. It is not something inherent in the concept of a monad.

Monads wouldn't be very fun to use if all you had was return, bind, and derived functions. To make them more usable, Haskell has a special syntax called "do notation". The basic idea behind do notation is that there's a bind between every line, and you can do a <- func to unwrap the return value of func and make it available to later lines with the identifier 'a'.
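
As a rough sketch of how this relates back to bind (do notation actually desugars to the Prelude's flipped bind, (>>=)), the following two functions are equivalent:

addPair :: Maybe Int -> Maybe Int -> Maybe Int
addPair mx my = do
    x <- mx            -- unwrap mx, binding its value to x
    y <- my            -- unwrap my, binding its value to y
    return (x + y)     -- wrap the result back up in the context

addPair' :: Maybe Int -> Maybe Int -> Maybe Int
addPair' mx my =
    mx >>= \x ->
    my >>= \y ->
    return (x + y)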

You can find a more detailed treatment of do notation elsewhere. I hear that Learn You a Haskell and Real World Haskell are good.

In summary, a monad is a certain type of context that provides two things: a way to put things into the context, and function application within the context. There is no way to get things out. To get things out, you have to use bind to take yourself into the context. Once you have these two operations, there are lots of other more complicated operations built on top of these basic primitives. Much of this is provided in Control.Monad. You probably won't learn all this stuff in a day. Just dive in and use these concepts in real code. Eventually you'll find that the patterns are sinking in and becoming clearer.

Monday, April 16, 2012

The Less Travelled Monad Tutorial: Understanding Kinds

This is part 1 of a monad tutorial (but as we will see, it's more than your average monad tutorial). If you already have a strong grasp of types, kinds, monads, and monad transformers, and definitions like newtype RST r s m a = RST { runRST :: r -> s -> m (a, s) } don't make your eyes glaze over, then reading this won't change your life. If you don't, then maybe it will.

More seriously, when I was learning Haskell I got the impression that some topics were "more advanced" and should wait until later. Now, a few years in, I feel that understanding some of these topics earlier would have significantly sped up the learning process for me. If there are other people out there whose brains work somewhat like mine, then maybe they will be able to benefit from this tutorial. I can't say that everything I say here will be new, but I haven't seen these concepts organized in this way before.

This tutorial is not for absolute beginners. It assumes a basic knowledge of Haskell including the basics of data types, type signatures, and type classes. If you've been programming Haskell for a little bit, but are getting stuck on monads or monad transformers, then you might find some help here.

data Distance = Dist Double
data Mass = Mass Double

This code defines data types called Distance and Mass. The name on the left side of the equals sign is called a type constructor (or sometimes shortened to just type). The Haskell compiler automatically creates functions from the names just to the right of the equals sign. These functions are called data constructors because they construct the types Distance and Mass. Since they are functions, they are also first-class values, which means they have types as seen in the following ghci session.

$ ghci ltmt.hs
GHCi, version 7.0.3: http://www.haskell.org/ghc/  :? for help
ghci> :t Dist
Dist :: Double -> Distance
ghci> :t Mass
Mass :: Double -> Mass
ghci> :t Distance
<interactive>:1:1: Not in scope: data constructor `Distance'

We see here that Dist and Mass are functions that return the types Distance and Mass respectively. Frequently you'll encounter code where the type and the constructor have the same name (as we have here with Mass). Distance, however, illustrates that these are really two separate entities. Distance is the type and Dist is the constructor. Types don't have types, so the ":t" command fails for Distance.

Now, we need to pause for a moment and think about the meaning of these things. What is the Distance type? Well, when we look at the constructor, we can see that a value of type Distance contains a single Double. The constructor function doesn't actually do anything to the Double value in the process of constructing the Distance value. All it does is create a new context for thinking about a Double, specifically the context of a Double that we intend to represent a distance quantity. (Well, that's not completely true, but for the purposes of this tutorial we'll ignore those details.) Let me repeat that. A type is just a context. This probably seems so obvious that you're starting to wonder about me. But I'm saying it because I think that keeping it in mind will help when talking about monads later.
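
For example, the only thing Dist does below is move an ordinary Double into the Distance context:

raw :: Double
raw = 42.195

marathon :: Distance
marathon = Dist raw    -- the same number, now viewed as a distance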

Now let's look at another data declaration.

data Pair a = MkPair a a

This one is more interesting. The type constructor Pair takes an argument. The argument is some other type a and that is used in some part of the data constructor. When we look to the right side, we see that the data constructor is called MkPair and it constructs a value of type "Pair a" from two values of type "a".

MkPair :: a -> a -> Pair a

The same thing we said for the above types Distance and Mass applies here. The type constructor Pair represents a context. It's a context representing a pair of values. The type Pair Int represents a pair of Ints. The type Pair String represents a pair of strings. And on and on for whatever concrete type we use in the place of a.
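
A couple of quick examples using the Pair type above:

intPair :: Pair Int
intPair = MkPair 1 2

stringPair :: Pair String
stringPair = MkPair "hello" "world"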

Again, this is all very straightforward. But there is a significant distinction between the two type constructors Pair and Distance. Pair requires a type parameter, while Distance does not. This brings us to the topic of kinds. (Most people postpone this topic until later, but it's not hard to understand and I think it helps to clarify things later.) You know those analogy questions they use on standardized tests? Here's a completed one for you:

values : types    ::   types : kinds

Just as we categorize values by type, we categorize type constructors by kind. GHCi lets us look up a type constructor's kind with the ":k" command.

ghci> :k Distance
Distance :: *
ghci> :k Mass
Mass :: *
ghci> :k Dist
<interactive>:1:1: Not in scope: type constructor or class `Dist'
ghci> :k Pair
Pair :: * -> *
ghci> :k Pair Mass
Pair Mass :: *

In English we would say "Distance has kind *", and "Pair has kind * -> *". Kind signatures look similar to type signatures because they essentially are type signatures, just one level up. When we use Mass as Pair's first type argument, the result has kind *. The Haskell report defines kind signatures with the following two rules.

  • The symbol * represents the kind of all nullary type constructors (constructors that don't take any parameters).
  • If k1 and k2 are kinds, then k1->k2 is the kind of types that take one parameter that is a type of kind k1 and return a type of kind k2.

As an exercise, see if you can work out the kind signatures for the following type constructors. You can check your work with GHCi.

data Tuple a b = Tuple a b
data HardA a = HardA (a Int)
data HardB a b c = HardB (a b) (c a Int)

Before reading further, I suggest attempting to figure out the kind signatures for HardA and HardB because they involve a key pattern that will come up later.


Welcome back. The first example is just a simple extension of what we've already seen. The type constructor Tuple has two arguments, so its kind signature is Tuple :: * -> * -> *. Also, if you try :t you'll see that the data constructor's type signature is Tuple :: a -> b -> Tuple a b.

In the case of the last two types, it may be a little less obvious. But they build on each other in fairly small, manageable steps. For HardA, in the part to the left of the equals sign we see that there is one type parameter 'a'. From this, we can deduce that HardA's kind signature is something of the form ? -> *, but we don't know exactly what to put at the question mark. On the right side, all the individual arguments to the data constructor must have kind *. If (a Int) :: *, then the type 'a' must be a type constructor that takes one parameter. That is, it has kind * -> *, which is what we must substitute for the question mark. Therefore, we get the final kind signature HardA :: (* -> *) -> *.

HardB is a very contrived and much more complex case that exercises all the above simple principles. From HardB a b c we see that HardB has three type parameters, so its kind signature has the form HardB :: a -> b -> c -> *. On the right side the (a b) tells us that b :: * and a :: * -> *. The second part (c a Int) means that c is a type constructor with two parameters where its first parameter is a, which has the kind signature we described above. So this gives us c :: (* -> *) -> * -> *. Now, substituting all these in, we get HardB :: (* -> *) -> * -> ((* -> *) -> * -> *) -> *.
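
If you check these in GHCi, you should see something like the following (the exact formatting may vary between GHC versions):

ghci> :k Tuple
Tuple :: * -> * -> *
ghci> :k HardA
HardA :: (* -> *) -> *
ghci> :k HardB
HardB :: (* -> *) -> * -> ((* -> *) -> * -> *) -> *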

The point of all this is to show that when you see juxtaposition of type constructors (something of the form (a b) in a type signature), it is telling you that the context a is a non-nullary type constructor and b is its first parameter.

Continue on to Part 2 of the Less Travelled Monad Tutorial