Thursday, November 8, 2012

Using Cabal With Large Projects

In the last post we talked about basic cabal usage. That all works fine as long as you're working on a single project and all your dependencies are on Hackage. When Cabal is aware of everything that you want to build, it's actually pretty good at dependency resolution. But if you have several packages that depend on each other and you're working on development versions that have not yet been released to Hackage, then life becomes more difficult. In this post I'll describe my workflow for handling the development of multiple local packages. I make no claim that this is the best way to do it. But it works pretty well for me, and hopefully others will find this information helpful.

Consider a situation where package B depends on package A and both of them depend on bytestring. Package A has wide version bounds for its bytestring dependency while package B has narrower bounds. Because you're working on improving both packages, you can't just do "cabal install" in package B's directory: the correct version of package A isn't on Hackage. But if you install package A first, Cabal might choose a version of bytestring that won't work with package B. It's a frustrating situation, because you end up worrying about dependency issues that Cabal should be handling for you.
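
To make the scenario concrete, here's a sketch of what the two build-depends stanzas might look like (the package names and version numbers are hypothetical):

    -- A.cabal: wide bounds
    build-depends: bytestring >= 0.9 && < 0.11

    -- B.cabal: narrower bounds, plus the unreleased A
    build-depends: bytestring >= 0.10 && < 0.11,
                   A >= 0.2 && < 0.3

If you install A by itself, Cabal is free to build it against bytestring 0.9.x. When you then try to build B, its tighter lower bound rules out that build of A, and you're stuck untangling things by hand.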

The best solution I've found to the above problem is cabal-meta. It lets you specify a sources.txt file in your project root directory with paths to other projects that you want included in the package's build environment. For example, I maintain the snap package, which depends on several other packages that are part of the Snap Framework. Here's what my sources.txt file looks like for the snap package:

./
../xmlhtml
../heist
../snap-core
../snap-server

My development versions of the other four packages reside in the parent directory on my local machine. When I build the snap package with cabal-meta install, cabal-meta tells Cabal to look in these directories in addition to whatever is on Hackage. If you do this initially for the top-level package, it will correctly take all your local packages into consideration when resolving dependencies. Once you have all the dependencies installed, you can go back to using Cabal and ghci to build and test your packages. In my experience this takes most of the pain out of building large-scale Haskell applications.
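
In practice the day-to-day workflow looks something like this (the ~/src layout is just for illustration; put your checkouts wherever you like):

    cd ~/src/snap        # top-level package, contains sources.txt
    cabal-meta install   # resolves against the local trees listed above
    cd ../heist          # now hack on a dependency...
    cabal build          # ...using plain cabal for incremental rebuilds
    ghci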

Another tool that is frequently recommended for this large-scale package development problem is cabal-dev. cabal-dev allows you to sandbox builds so that differing build configurations of libraries can coexist without causing the problems they do with plain Cabal. It also has a mechanism for handling the local package problem described above. I personally tend to avoid cabal-dev because in my experience it hasn't played nicely with ghci. It tries to solve the problem by giving you the cabal-dev ghci command to execute ghci in the sandboxed environment, but I found that it made my ghci workflow difficult, so I prefer cabal-meta, which doesn't have these problems.
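
For comparison, the cabal-dev way of wiring in a local dependency looks roughly like this (a sketch; ../heist stands in for whatever local tree you're developing against):

    cabal-dev add-source ../heist   # register a local source tree with the sandbox
    cabal-dev install               # builds into ./cabal-dev instead of ~/.ghc
    cabal-dev ghci                  # ghci against the sandboxed package database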

I should note that cabal-dev does solve one problem that cabal-meta does not. There are cases where two packages are completely unable to coexist in the same Cabal "sandbox" because their dependency sets are incompatible. In that case, you'll need cabal-dev's sandboxes instead of the single user-level package repository used by Cabal. I am usually only working on one major project at a time, so this has never been an issue for me. My understanding is that people are currently working on adding this kind of local sandboxing to Cabal/cabal-install. Hopefully that will fix my complaints about ghci integration and make cabal-dev unnecessary.

There are definitely things that need to be done to improve the Cabal tool chain. But in my experience working on several different large Haskell projects, both open source and proprietary, I have found that the current state of Cabal combined with cabal-meta (and maybe cabal-dev) does a reasonable job of handling large project development within a very fast-moving ecosystem.

Friday, November 2, 2012

A Practical Cabal Primer

I've been doing full-time Haskell development for almost three years now, and while I recognize that Cabal has been painful to use at times, the current reality is that Cabal does what I need it to do and for the most part stays out of my way. In this post, I'll describe the Cabal best practices I've settled on for my Haskell development.

First, some terminology. GHC is the de facto Haskell compiler, Hackage is the package database, Cabal is a library providing package infrastructure, and cabal-install is a command-line program (confusingly called "cabal") for building and installing packages, and for downloading them from and uploading them to Hackage. This isn't a tutorial for installing Haskell, so I'll assume that you at least have GHC and cabal-install's "cabal" binary. One caveat: if you're running a very recent release of GHC, you're asking for problems, because package maintainers won't have caught up with it yet. At the time of this writing GHC 7.6 is a few months old, so don't use it unless you know what you're doing. Stick to 7.4 until maintainers have updated their packages. But do make sure you have the most recent versions of Cabal and cabal-install, because they have improved significantly.
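
To see what you're starting from and get up to date, something like this will do (these are all standard commands):

    ghc --version      # stick with the 7.4 series for now
    cabal --version    # prints both the cabal-install and Cabal library versions
    cabal update       # refresh the Hackage package index
    cabal install cabal-install   # upgrade to the latest release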

cabal-install can install things globally or per-user. You usually need root privileges to install globally. Installing as user puts packages in your home directory: executable binaries go in $HOME/.cabal/bin, and libraries go in $HOME/.ghc. Other than the packages that come with GHC, I install everything as user. This means that when I upgrade cabal-install with "cabal install cabal-install", the new binary won't take effect unless $HOME/.cabal/bin is at the front of my PATH.
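
On a typical Linux or Mac setup that means a line like this in your shell's startup file (adjust for your shell of choice):

    # put user-installed binaries ahead of system-wide ones
    export PATH=$HOME/.cabal/bin:$PATH

    # verify which cabal you're actually running
    which cabal    # should print something under your home directory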

Now I need to get the bad news over with up front. Over time your local Cabal package database will grow until it starts to cause problems. Whenever I'm having trouble building packages, I'll tinker with things a little to see if I can isolate the problem, but if that doesn't work, then I clean out my package repository and start fresh. On Linux this can be done very simply with rm -fr ~/.ghc. Yes, this feels icky. Yes, it's suboptimal. But it's simple and straightforward, so either deal with it, or quit complaining and help us fix it.

I've also seen people say that you should delete the ~/.cabal directory as well. Most of the time that is bad advice. If you delete .cabal, you'll probably lose your most recent version of cabal-install, and that will make life more difficult. Deleting .ghc completely clears out your user package repository, and in my experience that is almost always sufficient. If you really need to delete .cabal, then I would highly recommend copying the "cabal" binary somewhere safe and restoring it after you're done.
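
If you do go that far, a minimal safety net looks like this (the /tmp location is arbitrary):

    cp ~/.cabal/bin/cabal /tmp/cabal   # stash the binary first
    rm -fr ~/.cabal ~/.ghc
    mkdir -p ~/.cabal/bin
    cp /tmp/cabal ~/.cabal/bin/
    cabal update                       # rebuild the package index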

Sometimes you don't need to go quite so far as to delete everything in ~/.ghc. For more granular control over things, use the "ghc-pkg" program. "ghc-pkg list" shows you a list of all the installed packages. "ghc-pkg unregister foo-2.3" removes a package from the list. You can also use unregister without the trailing version number to remove every installed version of that package. If there are other packages that depend on the package you're removing, you'll get an error. If you really want to remove it, use the --force flag.
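
A typical session looks something like this (the foo package and its version are illustrative):

    ghc-pkg list                      # everything installed, global and user
    ghc-pkg unregister foo-2.3        # remove one specific version
    ghc-pkg unregister foo            # remove every installed version of foo
    ghc-pkg unregister --force foo    # remove it even if other packages depend on it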

If you force-unregister a package, then "ghc-pkg list" will show you all the broken packages. If I know that there's a particular hierarchy of packages that I need to remove, I'll force remove the top one, and then use ghc-pkg to tell me all the others that need to go. This is an annoying process, so I only do it when I think it will be quicker than deleting everything and rebuilding it all.

So when do you need to use ghc-pkg? Typically I only use it when something breaks that I think should build properly. However, I've also found that having multiple versions of a package installed at the same time can sometimes cause problems. This can show up when the package I'm working on uses one version of a library, but when I'm experimenting in ghci a different version gets loaded. When this happens you may get perplexing error messages for code that is actually correct. In this situation, I've been able to fix the problem by using ghc-pkg to remove all but one version of the library in question.
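
Here's a sketch of that fix, using text as the example library (the version numbers are made up):

    ghc-pkg list text                 # shows, say, text-0.11.1.5 and text-0.11.2.3
    ghc-pkg unregister text-0.11.1.5  # keep only the version your package builds against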

If you've used all these tips and you still cannot install a package even after blowing away ~/.ghc, then there is probably a dependency issue in the package you're using. Haskell development is moving at a very rapid pace, so the upstream package maintainers may not be aware of the problem or may not have had time to fix it. You can help by alerting them to the problem, or better yet, including a patch to fix it.

Often the fix may be a simple dependency bump. These are easy to do yourself. Use "cabal unpack foo-package-0.0.1" to download the package source and unzip it into the current directory. Then edit the .cabal file, change the bounds, and build the local package with "cabal install". Sometimes I will also bump the version of the package itself and then use that as the lower bound in the local package that I'm working on. That way I know it will be using my fixed version of foo-package. Don't be afraid to get your hands dirty. You're literally one command away from hacking on upstream source.
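
The whole process is only a handful of commands (foo-package and the bounds being edited are hypothetical):

    cabal unpack foo-package-0.0.1   # fetches and unpacks into ./foo-package-0.0.1
    cd foo-package-0.0.1
    # edit foo-package.cabal, e.g. relax "bytestring < 0.10" to "bytestring < 0.11"
    cabal install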

For the impatient, here's a summary of my tips for basic cabal use:

  1. Install the most recent version of cabal-install
  2. Don't install things with --global
  3. Make sure $HOME/.cabal/bin is at the front of your path
  4. Don't be afraid to use rm -fr ~/.ghc
  5. Use ghc-pkg for fine-grained package control
  6. Use "cabal unpack" to download upstream code so you can fix things yourself

Using these techniques, I've found that Cabal actually works extremely well for small-scale Haskell development: development where you're only working on a single package at a time and everything else is on Hackage. Large-scale development, where you're developing more than one local package, requires another set of tools. But fortunately we already have some that work reasonably well. I'll discuss those in my next post.

Thursday, November 1, 2012

Why Cabal Has Problems

Haskell's package system, henceforth just "Cabal" for simplicity, has gotten some harsh press in the tech world recently. I want to emphasize a few points that I think are important to keep in mind in the discussion.

First, this is a hard problem. There's a reason the term "DLL hell" existed long before Cabal. I can't think of any package management system I've used that didn't generate quite a bit of frustration at some point.

Second, the Haskell ecosystem is moving very quickly. There's the ongoing iteratees/conduits/pipes debate over how to do IO in an efficient and scalable way. Lenses have recently seen major advances in the state of the art. There is tons of web framework activity. I could go on and on. So while Hackage may not be the largest database of reusable code, the larger ones like CPAN that have been around for a long time are probably not moving as fast (in terms of advances in core libraries).

Third, I think Haskell has a unique ability to facilitate code reuse, even for relatively small amounts of code. The web framework scene demonstrates this fairly well. As I've said before, even though there are three main competing frameworks, libraries from each of them can be mixed and matched easily. For example, web-routes-happstack provides convenience code for gluing the web-routes package to happstack. It is 82 lines of code. web-routes-wai does the same thing for wai in 81 lines of code. The same thing could be done for Snap with a similar amount of code.

The languages with larger package repositories like Ruby and Python might also have small glue packages like this, but they don't have Haskell's powerful strong type system. This means that when a Cabal build fails because of dependency issues, you're catching a bad interaction much earlier than you would have caught it in those languages. This is what I'm getting at when I say "unique ability to facilitate code reuse".

When you add Haskell's cross-module compiler optimizations to all of the previous points, I think it makes a compelling case that the Haskell community is at or near the frontier of what has been done in package management, even though we may be a long way behind in raw numbers of packages and developers. Thus, it should not be surprising that there are problems. When you're at the edge of the explored space, there's going to be some stumbling around in the dark, and you might go down some dead-end paths. But that's not a sign that there's something wrong with the community.

Note: The first published version of this article made some incorrect claims based on incorrect information about the number of Haskell packages compared to the number of packages in other languages. I've removed the incorrect numbers and adjusted my point.