# Recursion schemes

# Folding a list

Say we have a data type, and we would like to fold over this data structure:
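The data type itself is elided in the text; judging by the constructors used in `foldr` below, it is presumably the standard recursive list:

```
-- reconstructed from the Nil/Cons constructors used below
data List a = Nil | Cons a (List a)
  deriving (Eq, Show)
```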

```
foldr :: forall a z. z -> (a -> z -> z) -> List a -> z
foldr nil cons = go where
  go Nil = nil
  go (Cons h t) = cons h (go t)
```

Let’s look at the type of `foldr` and make some isomorphic transformations (see Counting Type Inhabitants for some extra techniques that are useful when dealing with isomorphisms):

```
∀ a z. z -> (a -> z -> z) -> List a -> z
~ ∀ a z. (1 -> z) -> (a * z -> z) -> List a -> z
~ ∀ a z. ((1 + a * z) -> z) -> List a -> z
-- define ListF a z = 1 + a * z
~ ∀ a z. (ListF a z -> z) -> List a -> z
```

We can define `ListF` accordingly.
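Reading the summands of `1 + a * z` off directly (a sketch; the constructor names `NilF`/`ConsF` match the code that follows):

```
{-# LANGUAGE DeriveFunctor #-}

-- 1 + a * z: NilF is the 1, ConsF is the a * z.
-- The derived Functor instance maps over z, which is what
-- `fmap go` relies on later.
data ListF a z = NilF | ConsF a z
  deriving (Eq, Show, Functor)
```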

Armed with this new data type, and the new type signature for `foldr`, we rewrite it as:

```
foldr :: forall a z. (ListF a z -> z) -> List a -> z
foldr alg = go where
  go Nil = alg NilF
  go (Cons h t) = alg (ConsF h (go t))
```

#### Trying a different ADT

Let’s start with a different ADT to see if we can use the same technique as we did for lists:

```
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

foldExpr :: forall z. (Int -> z) -> (z -> z -> z) -> (z -> z -> z) -> Expr -> z
foldExpr lit add mul = go where
  go (Lit i) = lit i
  go (Add l r) = add (go l) (go r)
  go (Mul l r) = mul (go l) (go r)
```

```
∀ z. (Int -> z) -> (z -> z -> z) -> (z -> z -> z) -> Expr -> z
~ ∀ z. (Int -> z) -> (z * z -> z) -> (z * z -> z) -> Expr -> z
~ ∀ z. ((Int + z * z + z * z) -> z) -> Expr -> z
-- define ExprF z = Int + z * z + z * z
~ ∀ z. (ExprF z -> z) -> Expr -> z
```

We can define `ExprF` accordingly.
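Reading the summands of `Int + z * z + z * z` off directly (a sketch; the constructor names match the code that follows):

```
{-# LANGUAGE DeriveFunctor #-}

-- Int + z * z + z * z
data ExprF z = LitF Int | AddF z z | MulF z z
  deriving (Eq, Show, Functor)
```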

#### Back to lists

Let’s continue our `List` transformations; perhaps we can notice something else:

```
∀ a z. (ListF a z -> z) -> List a -> z
~ ∀ a. List a -> (∀ z. (ListF a z -> z) -> z)
~ ∀ a. List a -> (μ z. ListF a z)
-- define List' a = μ z. ListF a z
~ List :~> List'
```

We see that `foldr`’s type tells us it is isomorphic to a natural transformation from `List` to a structure of nested `ListF`s. Note that our specific instance (`foldr`) of that type is actually a natural isomorphism, or at least one part of it. What is the other part?

Its signature,

```
List' :~> List
~ forall a. (μ z. ListF a z) -> List a
~ forall a. (forall z. (ListF a z -> z) -> z) -> List a
```

is not particularly insightful. But its implementation,

```
transform :: forall a. (forall z. (ListF a z -> z) -> z) -> List a
transform go = go $ \case
  NilF -> Nil
  ConsF a z -> Cons a z
```

is rather interesting.

What happens is that given a `forall z. (ListF a z -> z) -> z`, we first substitute `z = List a`, obtaining `(ListF a (List a) -> List a) -> List a`, and then pass it a function that *embeds* a pattern functor `ListF a (List a)` into `List a`. Let’s call this function `embed`.
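A sketch of `embed`, with the `List`/`ListF` definitions repeated so the snippet stands alone:

```
data List a = Nil | Cons a (List a) deriving (Eq, Show)
data ListF a z = NilF | ConsF a z

-- embed one layer of the pattern functor back into the list
embed :: ListF a (List a) -> List a
embed NilF = Nil
embed (ConsF a z) = Cons a z
```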

And now we can rewrite `transform` using `embed`.
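A sketch (again with the definitions repeated so the snippet stands alone; `RankNTypes` is needed for the nested `forall`):

```
{-# LANGUAGE RankNTypes #-}

data List a = Nil | Cons a (List a) deriving (Eq, Show)
data ListF a z = NilF | ConsF a z

embed :: ListF a (List a) -> List a
embed NilF = Nil
embed (ConsF a z) = Cons a z

-- instantiate the Church-encoded list at z = List a and hand it embed
transform :: forall a. (forall z. (ListF a z -> z) -> z) -> List a
transform go = go embed
```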

Ahh, this is quite satisfying.

Let’s now go back to `foldr`:

```
foldr :: forall a z. (ListF a z -> z) -> List a -> z
foldr alg = go where
  go Nil = alg NilF
  go (Cons h t) = alg (ConsF h (go t))
```

A reader with a minor OCD might notice that while `embed` is perfectly symmetric, the second case of `go` here is not. Ideally we would like to have `ConsF h t :: ListF a (List a)` and then somehow transform it into `ConsF h (go t) :: ListF a z`. What does that remind you of? An `fmap`!

Now we can rewrite `foldr` as:

```
foldr :: forall a z. (ListF a z -> z) -> List a -> z
foldr alg = go where
  go Nil = alg NilF
  go (Cons h t) = alg $ fmap go $ ConsF h t
```

And factoring out `project :: List a -> ListF a (List a)`:

```
project :: List a -> ListF a (List a)
project Nil = NilF
project (Cons a z) = ConsF a z

foldr :: forall a z. (ListF a z -> z) -> List a -> z
foldr alg = go where
  go = alg . fmap go . project
```
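For instance, a hypothetical `sumList` written against this `foldr` (definitions repeated so the snippet stands alone):

```
{-# LANGUAGE DeriveFunctor #-}

import Prelude hiding (foldr)

data List a = Nil | Cons a (List a)
data ListF a z = NilF | ConsF a z deriving Functor

project :: List a -> ListF a (List a)
project Nil = NilF
project (Cons a z) = ConsF a z

foldr :: (ListF a z -> z) -> List a -> z
foldr alg = go where
  go = alg . fmap go . project

-- summing a list with a ListF-algebra
sumList :: List Int -> Int
sumList = foldr alg where
  alg NilF = 0
  alg (ConsF h z) = h + z
```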

Note the symmetry of `project` and `embed`. In particular, it is easy to show that `project . embed = id` and `embed . project = id`, so they are two parts of an isomorphism between `ListF a (List a)` and `List a`.

What happens if we pass `embed` into `foldr`?
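Instantiating `z = List a` and `alg = embed`, a sketch (with the definitions repeated so the snippet stands alone):

```
{-# LANGUAGE DeriveFunctor #-}

data List a = Nil | Cons a (List a) deriving (Eq, Show)
data ListF a z = NilF | ConsF a z deriving Functor

project :: List a -> ListF a (List a)
project Nil = NilF
project (Cons a z) = ConsF a z

embed :: ListF a (List a) -> List a
embed NilF = Nil
embed (ConsF a z) = Cons a z

-- foldr embed: peel a layer off, recurse, and rebuild the same layer
go :: List a -> List a
go = embed . fmap go . project
```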

What does `go` do? I suggest taking a few moments or minutes to convince yourself that `go` above (specialized to `embed`) is just an identity function on lists.

#### Generalizing

By following the same exact methodology as above, we can similarly derive:

```
embedExpr :: ExprF Expr -> Expr
embedExpr (LitF i) = Lit i
embedExpr (AddF l r) = Add l r
embedExpr (MulF l r) = Mul l r

projectExpr :: Expr -> ExprF Expr
projectExpr (Lit i) = LitF i
projectExpr (Add l r) = AddF l r
projectExpr (Mul l r) = MulF l r

foldExpr :: forall z. (ExprF z -> z) -> Expr -> z
foldExpr alg = go where
  go = alg . fmap go . projectExpr
```

This suggests that we can generalize folding over different data types by using a typeclass:

```
type Algebra f a = f a -> a

newtype Mu f = Mu (forall a. Algebra f a -> a)

class Functor f => Recursive t f | t -> f where
  project :: t -> f t
  embed :: f t -> t

  cata :: Algebra f a -> t -> a
  cata alg = go where
    go = alg . fmap go . project

  embed' :: Mu f -> t
  embed' (Mu go) = go embed
```
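As a sanity check, here is a hypothetical `Recursive` instance for our `List`/`ListF` pair, together with `sumList` expressed via `cata` (a sketch; the GHC extensions shown are assumptions needed to make the class compile):

```
{-# LANGUAGE DeriveFunctor, FlexibleInstances, FunctionalDependencies, RankNTypes #-}

data List a = Nil | Cons a (List a)
data ListF a z = NilF | ConsF a z deriving Functor

type Algebra f a = f a -> a

newtype Mu f = Mu (forall a. Algebra f a -> a)

class Functor f => Recursive t f | t -> f where
  project :: t -> f t
  embed :: f t -> t

  cata :: Algebra f a -> t -> a
  cata alg = go where
    go = alg . fmap go . project

  embed' :: Mu f -> t
  embed' (Mu go) = go embed

instance Recursive (List a) (ListF a) where
  project Nil = NilF
  project (Cons h t) = ConsF h t
  embed NilF = Nil
  embed (ConsF h t) = Cons h t

-- folding with cata
sumList :: List Int -> Int
sumList = cata $ \fz -> case fz of
  NilF -> 0
  ConsF h z -> h + z
```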

Note that I added two new type definitions: `Algebra f a`, which is also known as an F-algebra, and `Mu f`. `foldr` became `cata`, which stands for catamorphism.

Further note that `Recursive t f` is usually defined entirely in terms of `project` (so there is no `embed` nor `embed'`), but this is not completely satisfactory, as there is always an `embed` when there is a `project`, since together they form an isomorphism between `t` and `f t`.

# An aside: F-algebras

As we defined above, `type Algebra f a = f a -> a`. Assuming that `f` is a `Functor`, this is called an *F-algebra*; `a` is called the *carrier* and `f` the *signature* of the algebra. Note that the *F* in F-algebra stands for `f`, a particular functor. However, people often call algebras F-algebras in colloquial speech even when there is no one particular `f`.

F-algebras form a category:

- Objects: `F`-algebras, i.e. arrows `φ :: F a -> a`. `F` is a fixed functor for all objects of this category. Note that `a` is not universally quantified here, but rather fixed to a particular object of the underlying category.
- Arrows: triples `(f, φ, ψ)` where `φ` and `ψ` are F-algebras and `f :: cod φ -> cod ψ` is a homomorphism from `φ` to `ψ`, which means that `f ∘ φ = ψ ∘ F f`. Assuming that `φ :: F a -> a` and `ψ :: F b -> b`, this means that mapping `F a` first with `a -> b` and then reducing the resulting `F b` with `ψ` is the same as first reducing with `φ :: F a -> a` and then applying `a -> b`.

Here is an example of three algebras and two homomorphisms (one of them is almost an isomorphism):

```
-- functor
data F a = Lit Int | Add a a | Mul a a
-- first algebra
φ :: F [[Int]] -> [[Int]]
φ (Lit x) = [[x]]
φ (Add x y) = x ++ y
φ (Mul x y) = (++) <$> x <*> y
-- second algebra
ψ :: F Int -> Int
ψ (Lit x) = x
ψ (Add x y) = x + y
ψ (Mul x y) = x * y
-- third algebra
τ :: F [()] -> [()]
τ (Lit x) = replicate x ()
τ (Add x y) = x ++ y
τ (Mul x y) = (\_ _ -> ()) <$> x <*> y
-- (f, φ, ψ) form an algebra homomorphism
f :: [[Int]] -> Int
f = foldr (+) 0 . fmap (foldr (*) 1)
-- (g, ψ, τ) form an algebra homomorphism
g :: Int -> [()]
g = flip replicate ()
```
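We can spot-check the homomorphism condition `f ∘ φ = ψ ∘ F f` for `(f, φ, ψ)` pointwise on a few sample values (a sketch; Greek names are replaced with ASCII ones, and `deriving Functor` is assumed):

```
{-# LANGUAGE DeriveFunctor #-}

data F a = Lit Int | Add a a | Mul a a deriving Functor

-- first algebra (φ in the text)
phi :: F [[Int]] -> [[Int]]
phi (Lit x) = [[x]]
phi (Add x y) = x ++ y
phi (Mul x y) = (++) <$> x <*> y

-- second algebra (ψ in the text)
psi :: F Int -> Int
psi (Lit x) = x
psi (Add x y) = x + y
psi (Mul x y) = x * y

f :: [[Int]] -> Int
f = foldr (+) 0 . fmap (foldr (*) 1)

-- homomorphism condition, checked pointwise: f ∘ phi = psi ∘ fmap f
check :: F [[Int]] -> Bool
check v = f (phi v) == psi (fmap f v)
```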

- Identity is `(id, φ, φ)`.
- Composition is `(f, φ₂, φ₃) ∘ (g, φ₁, φ₂) = (f ∘ g, φ₁, φ₃)`.

Now that we have defined a category, we should ask ourselves if this category has initial / terminal objects, products and coproducts, etc.

Turns out that for a lot of functors `F`, there is an initial object in the category of `F`-algebras, which is called the *initial F-algebra*. An initial object of a category is an object such that there is a unique arrow from it to any other object. So in our case, an initial `F`-algebra is some morphism `embed :: F μF -> μF` (where `μF` is some object of the underlying category) such that there is a unique homomorphism `⦇φ⦈ :: μF -> x` to every other `F`-algebra `φ :: F x -> x`, satisfying `⦇φ⦈ ∘ embed = φ ∘ F ⦇φ⦈`. Let’s write it out as code:

```
-- functor
data F a = Lit Int | Add a a | Mul a a
-- initial algebra carrier
data μF = ???
-- initial algebra, known as `in` above
embed :: F μF -> μF
-- a unique ⦇φ⦈
cata :: forall x. (F x -> x) -> μF -> x
-- ⦇φ⦈ ∘ embed = φ ∘ F ⦇φ⦈
cata φ . embed = φ . fmap (cata φ)
```

Does this look familiar? These are almost all of the ingredients needed for `Recursive μF F`!

Before we continue, let’s review the laws that the initial algebra satisfies:

- Cancellation: for any `F`-algebra `φ`, there is a unique `⦇φ⦈` such that `⦇φ⦈ ∘ embed = φ ∘ F ⦇φ⦈`.
- Reflection: since `⦇embed⦈` is the unique homomorphism from the initial algebra to itself, and the identity homomorphism `id` is another such homomorphism, necessarily `⦇embed⦈ = id`.
- Fusion: for any two `F`-algebras `φ :: F a -> a`, `ψ :: F b -> b` and an arrow `f :: a -> b`, `f ∘ φ = ψ ∘ F f` (a homomorphism condition) implies `f ∘ ⦇φ⦈ = ⦇ψ⦈`. This is again due to uniqueness: `f ∘ ⦇φ⦈` is an algebra homomorphism between the initial algebra and `ψ`, but `⦇ψ⦈` is also a homomorphism between the initial algebra and `ψ`, hence by uniqueness of algebra homomorphisms out of the initial algebra, they are the same.

From these laws we can prove a fundamental theorem, known as Lambek’s lemma:

#### Lambek’s lemma

The initial algebra `embed :: F μF -> μF` is an isomorphism, with the inverse `project :: μF -> F μF` defined as `project = ⦇F embed⦈`.

Note that `F embed :: F (F μF) -> F μF`, so `⦇F embed⦈ :: μF -> F μF` is the unique algebra homomorphism from the initial `F`-algebra to the `F`-algebra with `F μF` as the carrier.

```
⦇embed⦈ :: μF -> μF
⦇F embed⦈ :: μF -> F μF
embed ∘ ⦇F embed⦈ :: μF -> μF
⦇F embed⦈ ∘ embed :: F μF -> F μF

embed ∘ ⦇F embed⦈
-- fusion
= ⦇embed⦈
-- reflection
= id

⦇F embed⦈ ∘ embed
-- cancellation
= F embed ∘ F ⦇F embed⦈
-- F is a functor
= F (embed ∘ ⦇F embed⦈)
-- see above
= F id
-- F is a functor
= id
```

If we look at the signature of `cata` a bit more closely, flipping its arguments gives `μF -> (∀ x. (F x -> x) -> x)`, i.e. a function from `μF` to `μ x. F x`. At the same time we can see that given `mu :: μ x. F x`, we can run `mu embed` and get a `μF` out, so the natural follow-up question is whether `μF` is isomorphic to `μ x. F x ~ ∀ x. (F x -> x) -> x`.

We will prove it in polymorphic lambda calculus.

```
cata :: Λ x. (F x -> x) -> μF -> x

from :: μF -> (Λ x. (F x -> x) -> x)
from muF = Λ x. λ (f : F x -> x). cata x f muF

to :: (Λ x. (F x -> x) -> x) -> μF
to = λ (f : Λ x. (F x -> x) -> x). f μF embed

to (from muF)
= (λ (f : Λ x. (F x -> x) -> x). f μF embed) (Λ x. λ (f : F x -> x). cata x f muF)
= (Λ x. λ (f : F x -> x). cata x f muF) μF embed
= (λ (f : F μF -> μF). cata μF f muF) embed
= cata μF embed muF
-- by the reflection law, cata μF embed = ⦇embed⦈ = id
= muF
```

This proves `to ∘ from = id`; the other direction, `from ∘ to = id`, follows from the free theorem (parametricity) for the type `Λ x. (F x -> x) -> x`.

# An aside: containers

# An aside: derivatives

# Folding with location

In this section we’ll discuss how to fold over a data structure while having some notion of location: e.g. if we are folding over a tree, we would like to know which path we took to arrive at a node. To motivate this problem, consider how you would accurately report that one of the tree’s nodes is erroneous.

#### Taking `List`’s derivative

First, let’s see what happens if we take the derivative of `List a` with respect to `a`:

```
∂ₐ(List a)
= ∂ₐ(1 + a * List a)
= ∂ₐ(a) * List a + a * ∂ₐ(List a)
= List a + a * ∂ₐ(List a)
-- solving d = List a + a * d for d
= List a * List a
```

On the other hand, as we have shown before, there is an isomorphism between `List a` and `μ x . ListF a x`, so

```
∂ₐ(List a)
= ∂ₐ(μ x . ListF a x)
= ∂ₐ(ListF a (μ x . ListF a x))
= ∂ₐ(ListF a x)[x=μ x . ListF a x] +
    ∂ₓ(ListF a x)[x=μ x . ListF a x] * ∂ₐ(μ x . ListF a x)
= μ w . ∂ₐ(ListF a x)[x=μ x . ListF a x] +
        ∂ₓ(ListF a x)[x=μ x . ListF a x] * w
= ∂ₐ(ListF a x)[x=μ x . ListF a x] *
    List (∂ₓ(ListF a x)[x=μ x . ListF a x])
= x[x=μ x . ListF a x] *
    List (a[x=μ x . ListF a x])
= List a * List a
```

Let’s put `List a` and `∂ₐ(List a)` side by side:

```
List a     = μ w . ListF a w
           = μ w . 1 + a * w

∂ₐ(List a) = μ w . ∂ₐ(ListF a x)[x=μ x . ListF a x] +
                   ∂ₓ(ListF a x)[x=μ x . ListF a x] * w
           = μ w . List a + a * w
```

What do both branches of the recursive structure mean in each case? In the case of `List a`, `1` means that we are at the end of the list, and `a * w` means that we have another element of the list, `a`, and the rest of the list, `w`. In the case of `∂ₐ(List a)`, `List a` means that we are focused on the first element of the list, and `a * w` means that we are descending further into the structure, passing an `a` as we go in; `w` represents descending in.

#### Not quite there yet

Note that when folding a data structure, we don’t really observe one `a` at a time, but rather one layer `ListF a z` at a time, with `z` representing the already folded part of the structure. This means that `∂ₐ(List a) = ∂ₐ(μ x . ListF a x)` is not a good notion of location in a data structure.

We can improve the situation by using `ListF a (z, lz : List a)`, where `z` is the result of folding `lz` (this is known as a paramorphism). This tells us what subtree `z` was produced from, but it doesn’t give us information about the path we took in the data structure to arrive at a particular layer `ListF a z`.

We can represent that path as a list of `∂ₓ(ListF a x)[x=List a] ~ a` (we’ll use `[]` instead of `List` to indicate that it does not depend on the particular data structure we are considering): each element of the path is a particular choice of the direction we take as we descend into the structure.

Generalizing to an arbitrary recursive data type, we get:

```
histCata :: Recursive t f => ([∂ₓ(f x)[x=t]] -> f (z, t) -> z) -> [∂ₓ(f x)[x=t]] -> t -> z
-- We pass [∂ₓ(f x)[x=t]] as an argument to histCata
-- to represent the path we have taken before we started
-- folding.
```

We have a slight problem: `∂ₓ(f x)` cannot be represented as a type constructor / type lambda, and furthermore we don’t yet have any operations consuming and producing values of type `∂ₓ(f x)[x=t]`.
#### Derivative of `ExprF`

```
∂ₓ(ExprF x)
~ ∂ₓ(LitF Int | AddF x x | MulF x x)
~ ∂ₓ(Int + x * x + x * x)
~ 0 + (∂ₓ(x) * x + x * ∂ₓ(x)) + (∂ₓ(x) * x + x * ∂ₓ(x))
~ (x + x) + (x + x)
~ (AddL x | AddR x) | (MulL x | MulR x)
```

Let’s define it as a data type.
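Matching the four summands `(x + x) + (x + x)` (a sketch; the constructor names follow the `projectD`/`embedD` code below):

```
-- (x + x) + (x + x): one constructor per hole position
data ExprFD x = AddL x | AddR x | MulL x | MulR x
  deriving (Eq, Show)
```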

Now let’s look at the definition of `project` for expressions:

```
project :: Expr -> ExprF Expr
project (Lit i) = LitF i
project (Add l r) = AddF l r
project (Mul l r) = MulF l r
```

Notice that when we project, every `Expr` inside of `ExprF Expr` has a particular location in `ExprF` that could be represented by `∂ₓ(ExprF x)`:

```
projectD :: Expr -> ExprF (ExprFD Expr, Expr)
projectD (Lit i) = LitF i
projectD (Add l r) = AddF (AddL r, l) (AddR l, r)
projectD (Mul l r) = MulF (MulL r, l) (MulR l, r)
```

Note that `projectD` is *not* an isomorphism. And in the opposite direction there is a function of type `(ExprFD Expr, Expr) -> Expr` (also not an isomorphism):

```
embedD :: ExprFD Expr -> Expr -> Expr
embedD (AddL r) l = Add l r
embedD (AddR l) r = Add l r
embedD (MulL r) l = Mul l r
embedD (MulR l) r = Mul l r
```

Now we can express `histCata` for expressions as:

```
histCata :: ([ExprFD Expr] -> ExprF (z, Expr) -> z) -> [ExprFD Expr] -> Expr -> z
histCata alg = go where
  go hist = alg hist . fmap (\(d, e) -> (go (d : hist) e, e)) . projectD
```
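As a quick sanity check, here is a hypothetical use of `histCata`: collecting every literal together with the root-to-leaf path taken to reach it (a sketch, with the `Expr` definitions repeated so the snippet stands alone; `Dir` is an illustrative helper that forgets the carried sibling):

```
{-# LANGUAGE DeriveFunctor #-}

data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
data ExprF z = LitF Int | AddF z z | MulF z z deriving Functor
data ExprFD x = AddL x | AddR x | MulL x | MulR x

-- a path element with the carried sibling dropped, for readable output
data Dir = L | R deriving (Eq, Show)

dir :: ExprFD x -> Dir
dir (AddL _) = L
dir (AddR _) = R
dir (MulL _) = L
dir (MulR _) = R

projectD :: Expr -> ExprF (ExprFD Expr, Expr)
projectD (Lit i) = LitF i
projectD (Add l r) = AddF (AddL r, l) (AddR l, r)
projectD (Mul l r) = MulF (MulL r, l) (MulR l, r)

histCata :: ([ExprFD Expr] -> ExprF (z, Expr) -> z) -> [ExprFD Expr] -> Expr -> z
histCata alg = go where
  go hist = alg hist . fmap (\(d, e) -> (go (d : hist) e, e)) . projectD

-- every literal together with the path that leads to it
litPaths :: Expr -> [(Int, [Dir])]
litPaths = histCata alg [] where
  alg hist (LitF i) = [(i, reverse (map dir hist))]
  alg _ (AddF (l, _) (r, _)) = l ++ r
  alg _ (MulF (l, _) (r, _)) = l ++ r
```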

#### Generalizing

To generalize `histCata`

, we’ll need to modify our typeclass by adding another type parameter for the derivative:

```
class Functor f => Recursive t f d | t -> f, t -> d where
  project :: t -> f t
  embed :: f t -> t
  -- and other methods

  projectD :: t -> f (d t, t)
  embedD :: (d t, t) -> t

  histCata :: ([d t] -> f (z, t) -> z) -> [d t] -> t -> z
  histCata alg = go where
    go hist = alg hist . fmap (\(d, e) -> (go (d : hist) e, e)) . projectD
```

# Folding from the left

Consider the difference between `foldl` and `foldr` as seen through the lens of recursion schemes. Folding from the right is folding bottom-up: we start with the leaves (non-recursive data constructors), fold those into `z`, then all data constructors above the leaves, etc. Folding from the left is folding top-down: we peel off one layer of the data structure at a time and then proceed to the next layer until we reach the leaves. But this makes sense only if there is just a single direction to move in! What if we have binary recursive data constructors like `Add` in `Expr`?

We can generalize `foldl` to arbitrary data structures in a few ways:

```
-- descend down all possible paths, summing all values at the leaves
topDown1 :: (Recursive t f d, Monoid z) => z -> (z -> f () -> f z) -> t -> z
-- preorder traversal
topDown2 :: (Recursive t f d, Traversable f) => z -> (z -> f () -> f z) -> (d z -> z) -> t -> z
-- ??
```

# References (that I have not read yet)

- https://bartoszmilewski.com/2013/06/10/understanding-f-algebras/
- http://comonad.com/reader/2009/recursion-schemes/
- https://github.com/gallais/metamorphismsinagda
- https://en.wikipedia.org/wiki/F-coalgebra
- https://web.archive.org/web/20160304030154/http://www.cs.swan.ac.uk/~csetzer/slides/lisbon2012/setzerLisbon2012InductionInductionInduction.pdf
- https://homotopytypetheory.org/2011/04/24/higher-inductive-types-a-tour-of-the-menagerie/
- http://www.cs.swan.ac.uk/~csetzer/slides/dybjer60thBirthdayGothenburgJune2013/dybjer60thBirthdayGothenburgJune2013.pdf
- https://pchiusano.github.io/2014-08-12/zippers-not-useful.html
- https://stackoverflow.com/questions/5919901/data-structure-differentiation-intuition-building
- http://www.randomhacks.net/2011/05/20/derivatives-of-algebraic-data-structures-an-excellent-tutorial/
- http://okmij.org/ftp/continuations/zipper.html
- https://blog.sumtypeofway.com/an-introduction-to-recursion-schemes/
- https://medium.com/@jaredtobin/practical-recursion-schemes-c10648ec1c29
- https://blog.sumtypeofway.com/recursion-schemes-part-iv-time-is-of-the-essence/
- https://www.reddit.com/r/haskell/comments/16qr6q/fixing_gadts_generic_recursion_schemes_for_gadts/
- https://www.reddit.com/r/haskell/comments/3q0w7a/a_couple_of_quick_questions_about_recursionschemes/
- http://blog.ezyang.com/2010/06/well-founded-recursion-in-agda/
- https://hackage.haskell.org/package/recursion-schemes
- https://www.beyondthelines.net/programming/introduction-to-tagless-final/