Sunday, July 13, 2014

Haskell Best Practices for Avoiding "Cabal Hell"

I posted this as a reddit comment and it was really well received, so I thought I'd post it here so it would be more linkable.  A lot of people complain about "cabal hell" and ask what they can do to solve it.  There are definitely things about the cabal/hackage ecosystem that can be improved, but on the whole it serves me quite well.  I think a significant amount of the difficulty is a result of how fast things move in the Haskell community and how much more reusable Haskell is than other languages.

With that preface, here are my best practices that seem to make Cabal work pretty well for me in my development.

1. I keep the number of packages installed as --global to the absolute minimum.  This means that I don't use the Haskell Platform or any OS Haskell packages; I install GHC directly.  Some might think this casts too much of a negative light on the Haskell Platform.  But everyone will agree that having multiple versions of a package installed at the same time is a significant cause of build problems--and installing specific versions of packages is exactly what the Haskell Platform does for you.  If you use Haskell heavily enough, you will invariably encounter a situation where you want to use a different version of a package than the one the Haskell Platform gives you.
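
A quick way to audit this: ghc-pkg (which ships with GHC) can show you exactly what's installed globally, and ideally that list contains little beyond GHC's own boot packages.

ghc-pkg list --global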

2. Make sure ~/.cabal/bin is at the front of your path.  Hopefully you already knew this, but I see this problem a lot, so it's worth mentioning for completeness.
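
On a typical Linux or OS X setup that means something like this in your shell startup file:

export PATH=$HOME/.cabal/bin:$PATH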

3. Install happy and alex manually.  These two packages provide binary executables that you need to have in ~/.cabal/bin.  They don't get installed automatically because they are build tools, not package dependencies.
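
This is a one-time step right after installing GHC and cabal-install:

cabal install alex happy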

4. Make sure you have the most recent version of cabal-install.  There is a lot of work going on to improve these tools.  The latest version is significantly better than it used to be, so you should definitely be using it.
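
Upgrading is a one-liner, but remember the PATH point above: the new binary only takes effect if ~/.cabal/bin shadows wherever the old one lives.

cabal install cabal-install
cabal --version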

5. Become friends with "rm -fr ~/.ghc".  This command cleans out your --user repository, which is where you should install packages if you're not using a sandbox.  It sounds bad, but right now this is simply a fact of life.  The Haskell ecosystem is moving so fast that packages you install today will be out of date in a few months if not weeks or days.  We don't have purely functional nix-style package management yet, so removing the old ones is the pragmatic approach.  Note that sandboxes accomplish effectively the same thing for you.  Creating a new sandbox is the same as "rm -fr ~/.ghc" and then installing to --user, but has the benefit of not deleting everything else you had in --user.

6. If you're not working on a single project with one harmonious dependency tree, then use sandboxes for separate projects or one-off package compiles.
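
The basic sandbox workflow looks like this (requires cabal-install 1.18 or later):

cd my-project
cabal sandbox init                  # creates a project-local package database
cabal install --only-dependencies   # installs dependencies into the sandbox
cabal build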

7. Learn to use --allow-newer.  Again, things move fast in Haskell land.  If a package gives you dependency errors, then try --allow-newer and see if the package will just work with newer versions of dependencies.
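
For example, when a build fails with dependency errors:

cabal install --allow-newer

Recent versions of cabal-install also let you scope it, e.g. --allow-newer=text to relax bounds on a single dependency, though check your version's documentation before relying on that form.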

8. Don't be afraid to dive into other people's packages.  "cabal unpack" makes it trivial to download the code for any package.  From there it's often trivial to make manual changes to version bounds or even small code changes.  If you make local changes to a package, then you can either install it to --user so other packages use it, or you can do "cabal sandbox add-source /path/to/project" to ensure that your other projects use the locally modified version.  If you've made code changes, then help out the community by sending a pull request to the package maintainer.  Edit: bergmark mentions that unpack is now "cabal get" and "cabal get -s" lets you clone the project's source repository.
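
For reference, the commands involved (foo stands for whatever package you're digging into):

cabal get foo                     # download and unpack foo's source
cabal get -s foo                  # clone foo's source repository instead
cabal sandbox add-source ../foo   # make your sandbox build against the local copy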

9. If you can't make any progress from the build messages cabal gives you, then try building with -v3.  I have encountered situations where cabal's normal dependency errors are not helpful.  Using -v3 gives me a much better picture of what's going on, and from there I can usually figure out the root of the problem pretty quickly.

Friday, May 16, 2014

Implicit Blacklisting for Cabal

I've been thinking about all the Haskell PVP discussion that's been going on lately. It should be no secret by now that I am a PVP proponent. I'm not here to debate the PVP in this post, so for this discussion let's assume that the PVP is a good thing and should be adopted by all packages published on Hackage. More specifically, let's assume this to mean that every package should specify upper bounds on all dependencies, and that most of the time these bounds will be of the form "< a.b".

Recently there has been discussion about problems encountered when packages that have not been using upper bounds change and start using them. The recent issue with the HTTP package is a good example of this. Roughly speaking the problem is that if foo-1.2 does not provide upper bounds on its dependency bar, the constraint solver is perpetually "poisoned" because foo-1.2 will always be a candidate even long after bar has become incompatible with foo-1.2. If later foo-3.9 specifies a bound of bar < 0.5, then when bar-0.5 comes out the solver will try to build with foo-1.2 even though it is hopelessly old. This will result in build errors since bar has long since changed its API.

This is a difficult problem. There are several immediately obvious approaches to solving the problem.

  1. Remove the offending old versions (the ones missing upper bounds) from Hackage.
  2. Leave them on Hackage, but mark them as deprecated/blacklisted so they will not be chosen by the solver.
  3. Go back and retroactively add upper bounds to the offending versions.
  4. Start a new empty Hackage server that requires packages to specify upper bounds on all dependencies.
  5. Start a new Hackage mirror that infers upper bounds based on package upload dates.

All of these approaches have problems. The first three are problematic because they mess with build reproducibility. The fourth approach fragments the community and in the very best case would take a lot of time and effort before gaining adoption. The fifth approach has problems because correct upper bounds cannot always be inferred from upload dates.

I would like to propose a solution I call implicit blacklisting. The basic idea is that for each set of versions with the prefix a.b.c Cabal will only consider a single one: the last one. This effectively means that all the lower versions with the prefix a.b.c will be implicitly blacklisted. This approach should also allow maintainers to modify this behavior by specifying more granular version bounds.

In our previous example, suppose there were a number of 0.4 versions of the bar package, with 0.4.3.3 being the last one. In this case, if foo specified a bound of bar < 0.5, the solver would only consider 0.4.3.3; versions 0.4.3.2 and 0.4.3.1 would not be considered. This would allow us to completely hide a lack of version bounds by making a new patch release that only bumps the d number. If that release had problems, we could address them with more patch releases.

Now imagine that for some crazy reason foo worked with 0.4.3.2, but 0.4.3.3 broke it somehow. If bar is following the PVP, that should not happen, but there are some well-known cases where the PVP can miss things, and there is always the possibility of human error. In that situation, foo should specify a bound of bar < 0.4.3.3, and the solver should respect that bound by considering only 0.4.3.2. But 0.4.3.1 would still be ignored as before.
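
In .cabal terms, the two scenarios look like this (a sketch, with bar standing in for the real dependency):

-- The normal case: trust the latest 0.4.x patch release.
build-depends: bar >= 0.4 && < 0.5

-- The unusual case above: explicitly blacklist 0.4.3.3.
build-depends: bar >= 0.4 && < 0.4.3.3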

Implicit blacklisting has the advantage that we don't need any special support for explicitly marking versions as deprecated/blacklisted. Another advantage is that it does not cause any problems for people who have locked their code down to specific versions. If foo specified an exact version of bar == 0.4.3.0, then that will continue to be chosen. Implicit blacklisting also allows us to leave everything on Hackage untouched and fix issues incrementally as they arise, with the minimum amount of work. In the above issue with HTTP-4000.0.7, we could trivially address it by downloading that version, adding version bounds, and uploading it as HTTP-4000.0.7.1.

All in all, I think this implicit blacklisting idea has a number of desirable properties and very few downsides. It fixes the problem using nothing but our existing infrastructure: version numbers. It doesn’t require us to add new concepts like blacklisted/deprecated flags, out-of-band “revision” markers to denote packages modified after the fact, etc. But since this is a complicated problem I may very well have missed something, so I'd like to hear what the community thinks about this idea.

Tuesday, January 28, 2014

Ember.js is driving me crazy

For the past few months I've been working on a project with a fairly complex interactive web interface. This required me to venture into the wild and unpredictable jungle of Javascript development. I was totally unprepared for what I would find. Soon after starting the project it became clear that just using jQuery would not be sufficient for my project. I needed a higher-level Javascript framework. After doing a little research I settled on Ember.js.

The Zombie Code Apocalypse

Ember was definitely a big improvement over straight jQuery, and allowed me to get some fairly complex UI behavior working very quickly. But recently I've run into some problems. The other day I had a UI widget defined like this:

App.FooController = Ember.ObjectController.extend({
    // ...
});
App.FooView = Ember.View.extend({
    // ...
});

It was used somewhere on the page, but at some point I decided that the widget was no longer needed, so I commented out the widget's markup. I wasn't sure whether we would ultimately keep the widget or not, so I opted to keep the above Javascript code for the controller and view around for a while so it would be easily available if I later decided to re-enable that UI element.

Everything seemed to work fine until a few days later when I noticed that another one of my controls, Bar, was not being populated with data. After spending hours trying to figure out the problem, I finally happened to comment out the unused code for the Foo widget and the problem went away. WTF?!? Why should this have anything to do with the functioning of a completely unrelated widget? This makes absolutely no sense to me, and it completely violates the natural assumption that the controller and view for two completely unrelated controls would have no impact on each other. I would have liked to know the underlying cause, but I didn't want to waste time with it, so I just removed the code and moved on.

Spontaneously Changing Values

Maybe a week later I ran into another problem. Some data was changing when I didn't expect it to. I looked everywhere I could think of that might affect the data, but couldn't find anything. Again, I spent the better part of a day trying to track down the source of this problem. After a while I was getting desperate, so I started putting print statements all over the place. I discovered that the data was changing in one particular function. I examined it carefully but couldn't find any hint of this data being impacted. Eventually I isolated the problem to the following snippet:

console.log(this.get('foo'));
this.set('bar', ...);
console.log(this.get('foo'));

The first log line showed foo with a value of 25. The second log line showed foo with a value of 0. This is utter madness! I set one field, and a completely different one gets changed! In what world does this make any shred of sense? This time, even when I figured out where the problem was happening I still couldn't see how to solve it. At least the first time I could just comment out the offending innocuous lines; here I had narrowed down the exact line that was causing the problem, but still couldn't fix it. Finally I got on the #emberjs IRC channel and learned that Ember's set function has special behavior for values in the content field, which foo was a part of. I was able to fix the problem by initializing the bar field to null. WAT?!?
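
For the record, the fix was nothing more than this (a sketch; the controller name is made up and the real one contains more, but the key line is the initializer):

App.BazController = Ember.ObjectController.extend({
    // Declaring bar on the controller itself means this.set('bar', ...)
    // is no longer proxied through to the content object, so it can't
    // clobber fields like foo.
    bar: null
});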

I was in shock. This seemed like one of the most absurd behaviors I've encountered in all my years of programming. Back in the C days you could see some crazy things, but at least you knew that array updates and pointer arithmetic could be dangerous and possibly overwrite other parts of memory. Here there's no hint. No dynamic index that might overflow. Just what we thought was a straightforward getter and setter for a static field in a data type.

Blaming Systems, Not People

Before you start jumping all over me for all the things I did wrong, hear me out. I'm not blaming the Ember developers or trying to disparage Ember. Ember.js is an amazing library and my application wouldn't exist without it or something like it. I'm just a feeble-minded Haskell programmer and not well-versed in the ways of Javascript. I'm sure I was doing things that contributed to the problem. But that's not the point. I've been around long enough to realize that there are probably good justifications for why the above behaviors exist. The Ember developers are clearly way better Javascript programmers than I will ever be. There's got to be a better explanation.

Peter Senge, in his book The Fifth Discipline, talks about the beer distribution game. It's a game that has been played thousands of times with diverse groups of people in management classes all over the world. The vast majority of people who play it perform very poorly. Peter points out that we're too quick to attribute a bad outcome to individual people when it should instead be attributed to the structure of the system in which those people were operating. This situation is no different.

Like the beer distribution game, Javascript is a complex system. The above anecdotes demonstrate how localized well-intentioned decisions by different players resulted in a bad outcome. The root of the problem is the system we were operating in: an impure programming language with weak dynamic typing. In a different system, say the one we get with Haskell, I can conclusively say that I never would have had these problems. Haskell's purity and strong static type system provide a level of safety that is simply unavailable in Javascript (or any other mainstream programming language for that matter).

The Godlike Refactoring

In fact, this same project gave us another anecdote supporting this claim. The project's back end is several thousand lines of Haskell code. I wrote all of the back end code, and since we have a pretty aggressive roadmap with ambitious deadlines, the code isn't exactly all that pretty. There are a couple places with some pretty hairy logic. A few weeks ago we needed to do a major refactoring of the back end to support a new feature. I was too busy with other important features, so another member of the team worked on the refactoring. He had not touched a single line of the back end code before that point, but thanks to Haskell's purity and strong static type system he was able to pull off the entire refactoring single-handedly in just a couple of hours. And once he got it compiling, the application worked the first time. We are both convinced that this feat would have been impossible without strong static types.

Conclusion

I think there are a couple of interesting points worth thinking about here. First of all, the API chosen by Ember only hid the complexity; it didn't reduce it. What seemed to be simple get() and set() methods were actually a more complex system with some special cases. The system was more complex than the API indicated. It's useful to think about the true complexity of a problem compared to the complexity of the exposed API.

The second point is that having the ability to make categorical statements about API behavior is very important. We use this kind of reasoning all the time, and the more of it we can do, the fewer assumptions we have to question when something isn't behaving as we expect. In this case, I made the seemingly reasonable categorical assumption that unused class definitions would have no effect on my program. But for some reason that I still don't understand, it was violated. I also made the categorical assumption that Ember's get() and set() methods worked like they would work in a map. But that assumption didn't hold up either. I encounter assumptions that don't hold up all the time. Every programmer does. But rarely are they so deeply and universally held as these.

So what can we learn from this? In The Fifth Discipline, Senge goes on to talk about the importance of thinking with a systems perspective; about how we need to stop blaming people and focus more on the systems involved. I think it's telling that in my 5 or 6 years of Haskell programming I've never seen a bug as crazy as these two, which I encountered after working only a few months on a significant Javascript project. Haskell, with its purity and strong static type system, allows me to make significantly more confident categorical statements about what my code can and cannot do. That allows me to more easily build better abstractions that actually reduce complexity for the end user instead of just hiding it away in a less frequented area.

Thursday, December 20, 2012

Haskell Web Framework Matrix

A comparison of the big three Haskell web frameworks on the most informative two axes I can think of.

[Figure: a two-axis matrix plotting Snap, Happstack, and Yesod. One axis runs from DSLs to combinators; the other from "some things dynamic" to "everything type-safe".]

Note that this is not intended to be a definitive statement of what is and isn't possible in each of these frameworks. As I've written elsewhere, most of the features of each of the frameworks are interchangeable and can be mixed and matched. The idea of this matrix is to reflect the general attitude each of the frameworks seems to be taking, because sometimes generalizations are useful.

Thursday, November 8, 2012

Using Cabal With Large Projects

In the last post we talked about basic cabal usage. That all works fine as long as you're working on a single project and all your dependencies are on Hackage. When Cabal is aware of everything that you want to build, it's actually pretty good at dependency resolution. But if you have several packages that depend on each other and you're working on development versions of these packages that have not yet been released to Hackage, then life becomes more difficult. In this post I'll describe my workflow for handling the development of multiple local packages. I make no claim that this is the best way to do it. But it works pretty well for me, and hopefully others will find this information helpful.

Consider a situation where package B depends on package A and both of them depend on bytestring. Package A has wide version bounds for its bytestring dependency while package B has narrower bounds. Because you're working on improving both packages you can't just do "cabal install" in package B's directory, because the correct version of package A isn't on Hackage. But if you install package A first, Cabal might choose a version of bytestring that won't work with package B. It's a frustrating situation because you end up worrying about dependency issues that Cabal should be handling for you.

The best solution I've found to the above problem is cabal-meta. It lets you specify a sources.txt file in your project root directory with paths to other projects that you want included in the package's build environment. For example, I maintain the snap package, which depends on several other packages that are part of the Snap Framework. Here's what my sources.txt file looks like for the snap package:

./
../xmlhtml
../heist
../snap-core
../snap-server

My development versions of the other four packages reside in the parent directory on my local machine. When I build the snap package with cabal-meta install, cabal-meta tells Cabal to look in these directories in addition to whatever is on Hackage. If you do this initially for the top-level package, it will correctly take into consideration all your local packages when resolving dependencies. Once you have all the dependencies installed, you can go back to using Cabal and ghci to build and test your packages. In my experience this takes most of the pain out of building large-scale Haskell applications.
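
Bootstrapping the tool itself is just another Hackage install, after which the whole tree builds with one command:

cabal install cabal-meta
cd snap
cabal-meta install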

Another tool that is frequently recommended for handling this large-scale package development problem is cabal-dev. cabal-dev allows you to sandbox builds so that differing build configurations of libraries can coexist without causing problems like they do with plain Cabal. It also has a mechanism for handling this local package problem above. I personally tend to avoid cabal-dev because in my experience it hasn't played nicely with ghci. It tries to solve the problem by giving you the cabal-dev ghci command to execute ghci using the sandboxed environment, but I found that it made my ghci workflow difficult, so I prefer using cabal-meta which doesn't have these problems.

I should note that cabal-dev does solve another problem that cabal-meta does not. There may be cases where two different packages are completely unable to coexist in the same Cabal "sandbox" because their sets of dependencies are not compatible. In that case, you'll need cabal-dev's sandboxes instead of the single user-level package repository used by Cabal. I am usually only working on one major project at a time, so this problem has never been an issue for me. My understanding is that people are currently working on adding this kind of local sandboxing to Cabal/cabal-install. Hopefully this will fix my complaints about ghci integration and make cabal-dev unnecessary.

There are definitely things that need to be done to improve the cabal tool chain. But in my experience working on several different large Haskell projects both open and proprietary I have found that the current state of Cabal combined with cabal-meta (and maybe cabal-dev) does a reasonable job at handling large project development within a very fast moving ecosystem.

Friday, November 2, 2012

A Practical Cabal Primer

I've been doing full-time Haskell development for almost three years now, and while I recognize that Cabal has been painful to use at times, the current reality is that Cabal does what I need it to do and for the most part stays out of my way. In this post, I'll describe the Cabal best practices I've settled on for my Haskell development.

First, some terminology.  GHC is the de facto standard Haskell compiler, Hackage is the package database, Cabal is a library providing package infrastructure, and cabal-install is a command line program (confusingly called "cabal") for building and installing packages and for downloading them from and uploading them to Hackage.  This isn't a tutorial for installing Haskell, so I'll assume that you at least have GHC and cabal-install's "cabal" binary.  If you're using a very recent release of GHC, you're asking for problems.  At the time of this writing GHC 7.6 is a few months old, so don't use it unless you know what you're doing.  Stick to 7.4 until maintainers have updated their packages.  But do make sure you have the most recent versions of Cabal and cabal-install, because they have improved significantly.

cabal-install can install things as global or user.  You usually have to have root privileges to install globally.  Installing as user puts packages in your home directory: executable binaries go in $HOME/.cabal/bin and libraries go in $HOME/.ghc.  Other than the packages that come with GHC, I install everything as user.  This means that when I upgrade cabal-install with "cabal install cabal-install", the new binary won't take effect unless $HOME/.cabal/bin is at the front of my path.
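
A quick sanity check after upgrading, to confirm the right binary is being picked up:

cabal install cabal-install
which cabal        # should print something like /home/you/.cabal/bin/cabal
cabal --version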

Now I need to get the bad news over with up front. Over time your local Cabal package database will grow until it starts to cause problems. Whenever I'm having trouble building packages, I'll tinker with things a little to see if I can isolate the problem, but if that doesn't work, then I clean out my package repository and start fresh. On linux this can be done very simply with rm -fr ~/.ghc. Yes, this feels icky. Yes, it's suboptimal. But it's simple and straightforward, so either deal with it, or quit complaining and help us fix it.

I've seen people also say that you should delete the ~/.cabal directory as well. Most of the time that is bad advice. If you delete .cabal, you'll probably lose your most recent version of cabal-install, and that will make life more difficult. Deleting .ghc completely clears out your user package repository, and in my experience is almost always sufficient. If you really need to delete .cabal, then I would highly recommend copying the "cabal" binary somewhere safe and restoring it after you're done.

Sometimes you don't need to go quite so far as to delete everything in ~/.ghc. For more granular control over things, use the "ghc-pkg" program. "ghc-pkg list" shows you a list of all the installed packages. "ghc-pkg unregister foo-2.3" removes a package from the list. You can also use unregister without the trailing version number to remove every installed version of that package. If there are other packages that depend on the package you're removing, you'll get an error. If you really want to remove it, use the --force flag.
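
Here are those commands in one place (foo is a placeholder):

ghc-pkg list                      # show all installed packages
ghc-pkg unregister foo-2.3        # remove one specific version
ghc-pkg unregister foo            # remove every installed version of foo
ghc-pkg unregister --force foo    # remove it even if other packages depend on it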

If you force unregister a package, then "ghc-pkg list" will show you all the broken packages. If I know that there's a particular hierarchy of packages that I need to remove, then I'll force remove the top one, and then use ghc-pkg to tell me all the others that I need to remove. This is an annoying process, so I only do it when I think it will be quicker than deleting everything and rebuilding it all.

So when do you need to use ghc-pkg? Typically I only use it when something breaks that I think should build properly. However, I've also found that having multiple versions of a package installed at the same time can sometimes cause problems. This can show up when the package I'm working on uses one version of a library, but when I'm experimenting in ghci a different version gets loaded. When this happens you may get perplexing error messages for code that is actually correct. In this situation, I've been able to fix the problem by using ghc-pkg to remove all but one version of the library in question.

If you've used all these tips and you still cannot install a package even after blowing away ~/.ghc, then there is probably a dependency issue in the package you're using.  Haskell development is moving at a very rapid pace, so the upstream package maintainers may not be aware of the problem or may not have had time to fix it.  You can help by alerting them to the problem, or better yet, including a patch to fix it.

Often the fix may be a simple dependency bump.  These are pretty simple to do yourself.  Use "cabal unpack foo-package-0.0.1" to download the package source and unzip it into the current directory.  Then edit the .cabal file, change the bounds, and build the local package with "cabal install".  Sometimes I will also bump the version of the package itself and then use that as the lower bound in the local package that I'm working on.  That way I know it will be using my fixed version of foo-package.  Don't be afraid to get your hands dirty.  You're literally one command away from hacking on upstream source.
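
The whole dependency-bump workflow looks something like this (foo-package is a placeholder; the bound you need to change will vary):

cabal unpack foo-package-0.0.1
cd foo-package-0.0.1
# edit foo-package.cabal and relax the offending version bound
cabal install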

For the impatient, here's a summary of my tips for basic cabal use:

  1. Install the most recent version of cabal-install
  2. Don't install things with --global
  3. Make sure $HOME/.cabal/bin is at the front of your path
  4. Don't be afraid to use rm -fr ~/.ghc
  5. Use ghc-pkg for fine-grained package control
  6. Use "cabal unpack" to download upstream code so you can fix things yourself

Using these techniques, I've found that Cabal actually works extremely well for small-scale Haskell development--development where you're only working on a single package at a time and everything else is on Hackage.  Large-scale development, where you're developing more than one local package, requires another set of tools.  But fortunately we already have some that work reasonably well.  I'll discuss those in my next post.

Thursday, November 1, 2012

Why Cabal Has Problems

Haskell's package system, henceforth just "Cabal" for simplicity, has gotten some harsh press in the tech world recently. I want to emphasize a few points that I think are important to keep in mind in the discussion.

First, this is a hard problem. There's a reason the term "DLL hell" existed long before Cabal. I can't think of any package management system I've used that didn't generate quite a bit of frustration at some point.

Second, the Haskell ecosystem is moving very quickly.  There's the ongoing iteratees/conduits/pipes debate about how to do IO in an efficient and scalable way.  Lenses have recently seen major advances in the state of the art.  There is tons of web framework activity.  I could go on and on.  So while Hackage may not be the largest database of reusable code, the larger ones like CPAN have been around much longer and are probably not moving as fast (in terms of advances in core libraries).

Third, I think Haskell has a unique ability to facilitate code reuse even for relatively small amounts of code. The web framework scene demonstrates this fairly well. As I've said before, even though there are three main competing frameworks, libraries in each of the frameworks can be mixed and matched easily. For example, web-routes-happstack provides convenience code for gluing together the web-routes package with happstack. It is 82 lines of code. web-routes-wai does the same thing for wai with 81 lines of code. The same thing could be done for Snap with a similar amount of code.

The languages with larger package repositories like Ruby and Python might also have small glue packages like this, but they don't have Haskell's strong static type system.  This means that when a Cabal build fails because of dependency issues, you're catching an interaction much earlier than you would have caught it in those languages.  This is what I'm getting at when I say "unique ability to facilitate code reuse".

When you add Haskell's use of cross-module compiler optimizations to all these previous points, I think it makes a compelling case that the Haskell community is at or near the frontier of what has been done before even though we may be a ways away in terms of raw number of packages and developers. Thus, it should not be surprising that there are problems. When you're at the edge of the explored space, there's going to be some stumbling around in the dark and you might go down some dead end paths. But that's not a sign that there's something wrong with the community.

Note: The first published version of this article made some incorrect claims based on incorrect information about the number of Haskell packages compared to the number of packages in other languages. I've removed the incorrect numbers and adjusted my point.