Posts

Showing posts from March, 2018

Fake: Generating Realistic Test Data in Haskell

On a number of occasions over the years I've found myself wanting to generate realistic-looking values for Haskell data structures. Perhaps I'm writing a UI and want to fill it in with example data during development so I can see how the UI behaves with large lists. In this situation you don't want to generate a bunch of completely random Unicode characters. You want things that look plausible so you can see how it will likely look to the user, with realistic word wrapping, etc. Later, when you build the backend, you actually want to populate the database with this data. Passing around DB dumps to other members of the team so they can test is a pain, so you want this stuff to be auto-generated. This saves time for your QA people because if you didn't have it, they'd have to create it by hand. Even later you get to performance testing and find yourself wanting to generate several orders of magnitude more data so you can load-test the database,...
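As a minimal sketch of the idea (not the Fake library's actual API), the point is to build generators that draw from plausible values rather than random Unicode. The User record, field names, and value lists below are hypothetical stand-ins:

import Test.QuickCheck (Gen, elements, generate, vectorOf)

-- Hypothetical record standing in for whatever your UI displays.
data User = User { userName :: String, userEmail :: String }
  deriving Show

-- Realistic-looking generator: plausible names and domains instead of
-- random Unicode, so word wrapping, truncation, etc. look like production.
realisticUser :: Gen User
realisticUser = do
  firstName <- elements ["Alice", "Bob", "Carol", "Dave"]
  lastName  <- elements ["Smith", "Jones", "Nguyen", "Garcia"]
  domain    <- elements ["example.com", "test.org"]
  pure $ User (firstName ++ " " ++ lastName)
              (firstName ++ "." ++ lastName ++ "@" ++ domain)

-- Print a handful of generated users, e.g. to seed a dev UI or database.
main :: IO ()
main = mapM_ print =<< generate (vectorOf 5 realisticUser)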

Efficiently Improving Test Coverage with Algebraic Data Types

Think of a time you've written tests for (de)serialization code of some kind, say for a data structure called Foo. If you were using the lowest level of sophistication, you probably defined a few values by hand, serialized them, deserialized the result, and verified that you ended up with the same value you started with. In Haskell nomenclature we'd say that you manually verified that parse . render == id. If you were a little more sophisticated, you might have used the QuickCheck library (or any of the numerous similar packages it inspired in other languages) to verify the parse . render == id property for a bunch of randomly generated values. The first level of sophistication is often referred to as unit testing. The second frequently goes by the term property testing, or sometimes fuzz testing. Both unit testing and property testing have some drawbacks. With unit testing you have to write fairly tedious boilerplate of listing by hand ...
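As a concrete illustration of the second level of sophistication, here is a QuickCheck round-trip property for a made-up Foo with a trivial show/read serialization; the render and parse functions are stand-ins for whatever serialization you actually test, but the shape of the property is the same:

import Test.QuickCheck
import Text.Read (readMaybe)

-- A made-up Foo; any serializable structure works the same way.
data Foo = Foo { fooId :: Int, fooActive :: Bool }
  deriving (Eq, Show, Read)

-- Stand-in serialization pair; substitute your real render/parse.
render :: Foo -> String
render = show

parse :: String -> Maybe Foo
parse = readMaybe

instance Arbitrary Foo where
  arbitrary = Foo <$> arbitrary <*> arbitrary

-- The parse . render == id property, modulo the Maybe from parsing.
prop_roundTrip :: Foo -> Bool
prop_roundTrip x = parse (render x) == Just x

main :: IO ()
main = quickCheck prop_roundTrip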

Armor Your Data Structures Against Backwards-Incompatible Serializations

As almost everyone with significant experience managing production software systems should know, backwards compatibility is incredibly important for any data that is persisted by an application. If you make a change to a data structure that is not backwards compatible with the existing serialized formats, your app will break as soon as it encounters the existing format. Even if you have 100% test coverage, your tests still might not catch this problem. It’s not a problem with your app at any single point in time, but a problem with how your app evolves over time. One might think that wire formats which are only used for communication between components and never persisted would not be susceptible to this problem. But these too can cause issues if a message is generated and a new version of the app is deployed before the message is consumed. The longer the message sits in a queue, a Redis cache, etc., the higher the chance of this occurring. More subtly, if you deploy a...
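A hedged sketch of the underlying idea (not the armor library's actual API): pin the serialized form of a value to a golden file on disk, and make the test suite prove that today's code still parses yesterday's bytes. The aeson-based helper below is an assumption standing in for whatever serialization format you use.

import Data.Aeson (FromJSON, ToJSON, decode, encode)
import qualified Data.ByteString.Lazy as LBS
import System.Directory (doesFileExist)

-- Check a value against a golden file.  On the first run we record the
-- current serialization; on later runs the stored bytes must still
-- deserialize to the expected value, so a breaking change fails the test.
goldenRoundTrip :: (Eq a, ToJSON a, FromJSON a) => FilePath -> a -> IO Bool
goldenRoundTrip path val = do
  exists <- doesFileExist path
  if exists
    then do
      oldBytes <- LBS.readFile path
      pure (decode oldBytes == Just val)
    else do
      LBS.writeFile path (encode val)
      pure True

Committing the golden files to version control is what turns this into a check on how the app evolves over time rather than on any single revision.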