Data vs Codata

Infinite lists

OCaml and Haskell are both statically typed functional programming languages, and their type systems are very similar. They both have function types, algebraic data types, and parameterized types (lists of things, maybe/option types, tuple types). They both support variants of the Hindley-Milner type inference algorithm.

But in Haskell, one often-heralded feature is that data types can be "infinite". Haskellers love to showcase this clever one-liner, which produces the Fibonacci numbers:

fibs :: [Int]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

This creates an "infinite" list called fibs. Given any non-negative integer index i, Haskell gives back the i-th Fibonacci number: fibs !! i. If you want the first 12 Fibonacci numbers, you can take the first 12 elements with take 12 fibs. You can even go to the interactive interpreter and print all of them with print fibs. (Since the list is "infinite", it prints elements until you press CTRL+C to terminate it.)
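To make those claims concrete, here is a runnable sketch in real Haskell, sampling the list in the two ways described above (firstTwelve and seventh are names of my own invention):

```haskell
fibs :: [Int]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

-- The first 12 Fibonacci numbers, demanded lazily:
firstTwelve :: [Int]
firstTwelve = take 12 fibs
-- [1,1,2,3,5,8,13,21,34,55,89,144]

-- Indexing into the "infinite" list:
seventh :: Int
seventh = fibs !! 6  -- 13
```

Only the portion of the list we actually demand is ever computed; the rest stays an unevaluated suspension.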

OCaml, on the other hand, acts like any boring old programming language: lists must always be finite. If you want an "infinite" list, you have to jump through hoops, such as wrapping values in a thunk and forcing the programmer to demand them bit by bit.

So there are actually two kinds of data types: those which must be finite and those which can be infinite. In computer science research, these often go by the shorter names:

Data and Codata - the two worlds

For the sake of technical accuracy, let us use List as our poster child for data and Stream as the same for codata. In addition, I will assume these container types always contain Int, for simplicity. So basically, I will use the following definitions:

data List = Nil 
          | Cons Int List

codata Stream = Stream Int Stream

Note the "co-" in front. I'm using a fictitious syntax here for illustration, but it would be perfectly sensible to implement this.

Because List is defined as a data type, we restrict all Lists to be finite in length. (It is this way in OCaml). So a list consists of zero or more cons cells, ending in nil.

Stream, on the other hand, is defined as a codata type. This means we may recurse indefinitely (as we would in Haskell), defining values in terms of themselves.

The reason we are using Streams instead of Haskell's list type is to guarantee our codata is always infinitely long. (Notice Stream has no Nil constructor). In this way, we pit the two ends of the spectrum against each other: Lists are always finite. Streams are always infinite.
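Real Haskell has no codata keyword, but because evaluation is lazy, an ordinary data declaration already admits infinite values. A minimal sketch (ones and headS are my own illustrative names):

```haskell
data Stream = Stream Int Stream

-- A legitimate, infinitely deep value: a stream of 1s.
ones :: Stream
ones = Stream 1 ones

headS :: Stream -> Int
headS (Stream x _) = x
```

Laziness is what lets us observe only as much of the value as we need, so defining ones in terms of itself is perfectly fine.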

The Cost of Infinity

So given that you can create both data types which are finite and data types which are infinite, the question naturally arises: why wouldn't you always choose infinite?

As it turns out, lunch is never free, and there are trade-offs to allowing infinitely long lists in your language.

The difference becomes very obvious when you restrict yourself to total functions. A total function is one which never gets stuck in a loop, never throws exceptions, has no side effects, and cannot fail other than through some catastrophic event (the runtime running out of memory, hardware failure).

Using Lists and Streams, let's look at some code.

length : List -> Int
length Nil = 0
length (Cons x xs) = 1 + length xs

head : Stream -> Int
head (Stream x xs) = x

Both length and head should be familiar to everyone. The former takes a List and tells you how long it is. The latter takes a Stream and tells you what the first element is.

But, with a funny juxtaposition, we witness a subtle tragedy. Imagine that we define each function for the other data type.

 head2 : List -> Int
 head2 (Cons x xs) = x

 length2 : Stream -> Int
 length2 (Stream x xs) = 1 + length2 xs

Now, both of these new definitions work in both OCaml and Haskell. However, they are not ideal to program with. Everyone knows it is an error to take the head of an empty list. The reason is that we have not defined what head2 Nil should be equal to. For this reason, we call head2 a partial function - it is defined for some inputs, but not all inputs.
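A standard way to recover totality for a function like head2 is to widen the return type. In real Haskell, for example (safeHead is a conventional name, not part of the Prelude):

```haskell
-- Total version: the empty case gets an honest answer instead of crashing.
safeHead :: [Int] -> Maybe Int
safeHead []      = Nothing
safeHead (x : _) = Just x
```

The caller is then forced by the type to handle the empty case explicitly.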

The other function, length2, is even more problematic. While length2 passes the typechecker, it never terminates on any input!

To summarize, some functions work great for data, but fail for codata. And some functions work great for codata, but fail for data.

Being infinite isn't always going to work in your favor. It also matters how you wish to use data.

Types are defined by how you put things together and how to take them apart

In computer science (and oddly enough, in actual programming as well), things always come in pairs. Every read has its write. Every setup has its cleanup. Every open paren has a close.

In this spirit, every type in functional programming comes equipped with an introduction rule and an eliminator. The introduction rule makes objects of that type. The eliminator allows you to use objects of that type.

Of course, as usual, "introduction rule" and "elimination rule" are the fancy-ass type theory names. In functional programming languages, we call introduction rules "constructors" and the elimination rules... also "constructors". I'll explain.

In both OCaml and Haskell, constructors are the one thing that can live on either side of the equals sign in a definition. When a constructor appears on the right hand side, it acts as an introduction rule. When it appears on the left hand side, it acts as an elimination rule. (You might also think of it as a "destructor" here).

example1 : List 
example1 = Cons 0 Nil
-- two lists are created here. 
--     1. Nil creates the empty list
--     2. Then Cons creates another list, containing only the number 0.

example2 : Stream -> Int
example2 (Stream x xs) = x
-- one stream is used here to give you an integer
-- the Stream constructor on the left hand side works to break the stream up into its head and tail.

There's a hidden symmetry here. Look at the types.

example1 : List
example2 : Stream -> Int

Now, suppose we turn example1 into a function. Not just any function, though: we want the resulting type to be isomorphic. In other words, we want it to be basically the same type, but a function instead of just a List. To do this, we make the seemingly pointless function

 example1 : Unit -> List
 example1 unit = Cons 0 Nil

(We're using Unit to stand for the 1-element type and unit for the element).

This has exactly the properties we want. So now compare the two types:

  example1 : Unit -> List
  example2 : Stream -> Int

The symmetry is more obvious. In the first example, we make (introduce) a List. In the second example, we break up (eliminate) a Stream. To state this generally, introductions are functions which return the type, and eliminations are functions which take the type as a parameter.

(As a cute note, example1 would also constitute an eliminator for Unit and example2 would constitute an introduction for Int).

Of course, in actual programming languages, we don't want to treat example1 as a function. It's silly. It returns the same thing every time. So we just drop the Unit -> part and we call it a List. However, for the sake of seeing this symmetry, I will be writing it out this long way.

Breaking down Codata. (And Making up Regular Data).

A fundamental rule that computer science inherited from category theory was that if two things are opposites, you make up a name for one and call the other "co-whatever".

Data and codata have such an opposing relationship. One is finite, the other infinite. One lives mostly in OCaml, the other, mostly in Haskell. In the above examples, I was clever and gave an introduction rule for List and an elimination rule for Stream.

It turns out these two cases were the easy ones. When you're building data with introduction rules, you really can't go wrong. You just apply the constructors and make sure it typechecks. Similarly, when you're breaking apart codata with elimination rules, you just need to pattern match correctly.

So now we look at the two missing cases: the elimination rules for data and the introduction rules for codata. You're probably very familiar with one. You may never have heard the other one before.

The elimination rule for data corresponds to recursion. More accurately, it corresponds to something called well-founded recursion (recursion that is guaranteed to always terminate). However, we won't dwell on the conditions for well-foundedness. For the time being, we'll just use the term "recursion" and then use our intuition to check that we can't get stuck in loops.

But in case you forgot all about recursion, we'll do a crash course. Functions can be defined in terms of themselves. When defining a function, you can use the function itself inside the definition. Again, we'll ignore this so-called well-foundedness condition, but at the very least, you need to make sure that every constructor is handled. Here's an example:

sum : List -> Int
sum Nil = 0
sum (Cons x xs) = x + sum xs

Note how there are two clauses in this definition: one for Nil and one for Cons. Just as we agreed.

As we stated, recursion is used to break down data. Whereas creating data is trivial, breaking it down is tough. You need to remember to cover all your cases. You need to guarantee termination.

Now corecursion is just the opposite. Breaking down codata is dead simple. Literally, the only operation codata supports is breaking it down via pattern matching. However, since codata has the potential to be infinite (and in our example Stream, must be!), we need to be more careful in how we create it. Corecursion is another way to define a function. But instead of breaking down the argument of the function, we build up the result.

Here is a simple example.

countFrom : Int -> Stream
countFrom n = Stream n (countFrom (n + 1))

A new functional programmer might say that this looks just like recursion. But it's not. Recursion must always get closer and closer to a base case. But this function seems to spiral out of control, counting up and up forever.

No. What we see here is corecursion. The codata constructor Stream prevents the function from getting stuck in a loop: the recursive call is only evaluated as it is needed.
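In real Haskell, where laziness makes every data type behave like codata, countFrom runs fine, and we can safely observe any finite prefix. Here takeS is a helper of my own for sampling a prefix:

```haskell
data Stream = Stream Int Stream

countFrom :: Int -> Stream
countFrom n = Stream n (countFrom (n + 1))

-- Sample a finite prefix; only this much of the stream is ever evaluated.
takeS :: Int -> Stream -> [Int]
takeS k _ | k <= 0    = []
takeS k (Stream x xs) = x : takeS (k - 1) xs
```

Demanding five elements forces exactly five unfoldings of countFrom and no more.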

Let's use a different notation based on copattern syntax. You can think of it like a set of fields for a codata type. In other words, it gives names to the different parts of an object of the type.

codata Stream = 
    Head : Int,
    Tail : Stream

With copatterns, we end up with a definition that looks like this instead.

countFrom : Int -> Stream
Head (countFrom n) = n
Tail (countFrom n) = countFrom (n + 1)

Look at the symmetries we have revealed. With the recursive sum function, we had two cases, and thus, two patterns to match on. With our corecursive countFrom function, we have two cases again, and two copatterns.
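Haskell has no copatterns, but record selectors give a reasonable approximation: each observation becomes a field, and each copattern clause becomes one field of the record being built (this redefines Stream with selector names of my choosing):

```haskell
data Stream = Stream { hd :: Int, tl :: Stream }

countFrom :: Int -> Stream
countFrom n = Stream { hd = n, tl = countFrom (n + 1) }
```

Observing the stream is then just applying the selectors: hd (countFrom 3) gives 3, and tl peels off one layer of the corecursion.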

For fun, let's consider these two functions, both based on the famous map function. They both "do" the same thing, adding 1 to each element.

incList : List -> List
incList Nil = Nil
incList (Cons x xs) = Cons (x + 1) (incList xs)

incStream : Stream -> Stream
Head (incStream xs) = Head xs + 1
Tail (incStream xs) = incStream (Tail xs)

Each feels as if it were the other turned inside out. The x + 1 in the first argument of the Cons on the third line corresponds to the first clause of incStream. In the second clause of both definitions, Cons appears on both the left and right hand sides of the equation, and so does Tail.

Seeing this, you might notice that map, in its full generality, can be seen as either a recursive definition or as a corecursive definition in a language like Haskell.
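This is easy to see with Haskell's built-in map: the same definition serves both readings, and laziness decides how much work happens (inc, finite, and prefix are illustrative names):

```haskell
inc :: [Int] -> [Int]
inc = map (+ 1)

-- Recursive reading: consume a finite list to the end.
finite :: [Int]
finite = inc [1, 2, 3]        -- [2,3,4]

-- Corecursive reading: produce from an infinite list, on demand.
prefix :: [Int]
prefix = take 3 (inc [1 ..])  -- [2,3,4]
```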

But why have I never heard of codata before?

For most languages, it makes no sense to distinguish between data and codata. Haskell and OCaml both allow partial functions, and as soon as you have those, the distinction between data and codata becomes blurred. In Haskell, for instance, everything is essentially codata, but you can establish conventions that certain data types will not be used to create infinite data. For instance, when writing a compiler in Haskell, you would never consider constructing an infinitely deep abstract syntax tree. Similarly, many functions assume your lists are finite.

In a total language, however, the distinction becomes very useful. Examples of total languages are Idris, Agda, Coq, and a handful of other languages. At the moment, they are mostly used for research.

In a total language, all functions must work for all inputs. But what if you have an algorithm that doesn't? Well, you can stick it in a Partial:

codata Partial a = Later (Partial a)
                 | Now a

If this were data, it would be pointless - a bunch of Laters in a row, always ending in a Now with a value. But when it's codata, you get another possibility. It could be the case that your object is an endless stream of Later (Later (Later (...))), ad infinitum. In other words, it could be the case that it doesn't actually contain any value of type a at all!

If you think about it, this is very similar to Haskell's Maybe types or OCaml's option types. But the difference is that with those, you can recognize immediately when a value is empty and when it's not. You can easily write a function:

isEmpty : Maybe a -> Bool
isEmpty Nothing = True
isEmpty (Just x) = False

On the other hand, you can't write a function:

isEmptyBAD : Partial a -> Bool
isEmptyBAD (Later p) = isEmptyBAD p
isEmptyBAD (Now x) = False

For one thing, this function can never return True! Worse, it is not total, since in the case where it should return True - an endless chain of Laters - it instead gets stuck in an infinite loop.
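What you can write is a bounded check. The "fuel" pattern sketched below is a common technique for this (runFor and never are hypothetical names): unwrap at most a fixed number of Laters, and give up honestly if no Now appears:

```haskell
data Partial a = Later (Partial a) | Now a

-- A computation that never produces a value.
never :: Partial a
never = Later never

-- Unwrap at most `fuel` layers of Later before giving up.
runFor :: Int -> Partial a -> Maybe a
runFor _    (Now x)   = Just x
runFor fuel (Later p)
  | fuel <= 0 = Nothing
  | otherwise = runFor (fuel - 1) p
```

This is total: it either finds a value within the fuel limit or returns Nothing, and it can never loop forever.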

Two very important classes of programs are operating systems and servers. But a good operating system never actually needs to shut down. (Any good Linux box can be kept up nearly indefinitely.) Would total languages fail us here? Not if your language supported codata. Types for interacting with the outside world, such as Haskell's infamous IO type, would also be codata. (Or at least, they would be, if IO weren't baked into the compiler.) Corecursion is exactly what we would need to write the combinator forever : IO a -> IO b, which takes an IO action and repeats it forever. But note that while the runtime is forced to repeatedly execute the same action, each evaluation comes to an end before the next action is executed.