# Counting type inhabitants.

# Preamble

For Scala readers: Haskell’s `Void` type can be assumed to be roughly equivalent to `Nothing`. Neither has any inhabitants other than diverging or erroneous expressions that fail at runtime. `Maybe` is the same as `Option`, except its empty case is called `Nothing` and its non-empty case is called `Just`.

For Haskell readers: `Any` in Scala is roughly equivalent to `data Any = forall a. Any a`, the least informative existential type. I will write `forall` interchangeably with `∀`.

# Type inhabitants

An *inhabitant* of a type `T` is any expression `e : T` of type `T`. Being able to ascertain the number of such inhabitants for a variety of types matters in pure functional languages since:

- Knowing that a complicated polymorphic type has only a small number of inhabitants (e.g. `forall a. a -> a` having only one inhabitant) means that we can partially or fully understand its behavior based on the type alone, without referring to a particular implementation.
- Knowing that a type `T` has only one inhabitant `e : T` allows us to simplify any `complicated_expression : T` and replace it with `e : T`.
- If we can find the number of inhabitants, we can likely also enumerate them. This allows for type-based auto-completion a la https://twitter.com/edwinbrady/status/1052913078927613954.
- Proving that a type `P` has inhabitants is equivalent to proving the corresponding theorem `P`. If we can automatically prove that a type has no inhabitants, we can save the developer some time:

  ```
  foo :: SomeTypeThatIsUninhabited -> Something
  foo x = impossible -- Idris has something like that.
  ```

There might be some confusion regarding what exactly we mean by “inhabitants”, e.g. are `1 + 1` and `2` different inhabitants of type `Int`? This is a rather deep topic (that I will cover in a future blog post), but for now we will treat two expressions of type `a` as separate inhabitants of `a` if and only if there is some predicate `f : a -> Bool` that distinguishes between them. So all of the following definitions define equal inhabitants of their respective types:

```
foo1 :: Int
foo1 = 1 + 1
foo1 = 2

foo2 :: Int -> Int
foo2 = \x -> x + 1
foo2 = \x -> x + 2 - 1
foo2 = \y -> y + 1
foo2 x = x + 1
foo2 = \y -> (\x -> x + 1) y

foo4 :: Int -> ()
foo4 x = ()
foo4 x = case x of
  10 -> ()
  x  -> ()
```

while these are different

```
foo1 :: Int
foo1 = 1
foo1 = 2

foo2 :: Int -> Int
foo2 = \x -> x
foo2 x = case x of
  1 -> 100
  x -> x

foo3 :: forall a. (a, a) -> a
foo3 (x, y) = x
foo3 (x, y) = y
```

There are a few interesting examples of seemingly different expressions that are in fact the same inhabitant of a type, which can be completely counter-intuitive at first, such as:

```
foo1 :: Void -> Int
foo1 x = 10
foo1 x = 3
-- Consider writing a predicate (Void -> Int) -> Bool
-- that distinguishes between them. Can you do it?
foo2 :: Int -> ()
foo2 x = ()
foo2 x = if isPrime 2147483647 then () else ()
-- These are the same even though they might take
-- different amount of time to run. Performance is
-- out of the picture for now.
foo3 :: Int -> Any
foo3 x = Any x
foo3 x = Any True
foo3 x = Any ()
-- These are the same because distinguishing between
-- different Any values without violating parametricity
-- is impossible. See
-- https://alexknvl.com/posts/any-is-final-object.html
```

Our notion of equivalence between type inhabitants is observational equivalence. We will denote equality in the above sense as `a = b`. `a` and `b` must be of the same type, but they might be either values or expressions.

# Isomorphisms

We will define an *isomorphism* between two types `a` and `b` as two functions, `to : a -> b` and `from : b -> a`, such that `to . from = id` and `from . to = id`. The existence of an isomorphism between two types will be denoted as `a ~ b`.

When we say that `a ~ b` is witnessed by `to` and `from`, we mean that `to` and `from` form one particular isomorphism between `a` and `b`. Ideally we should also prove that `to . from = id` and `from . to = id`, but we will usually omit such proofs. Proving these equalities in polymorphic lambda calculus often relies on parametricity.

Isomorphisms are *the tool* for counting type inhabitants since isomorphic types have the same number of inhabitants. For example, `Either () () ~ Bool`, where we can witness the isomorphism by

```
to :: Either () () -> Bool
to (Right ()) = True
to (Left ()) = False
from :: Bool -> Either () ()
from True = Right ()
from False = Left ()
```
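Since both types here are finite, we can even check the two round-trip laws exhaustively. A small self-contained sketch (repeating the definitions above):

```
to :: Either () () -> Bool
to (Right ()) = True
to (Left ()) = False

from :: Bool -> Either () ()
from True = Right ()
from False = Left ()

main :: IO ()
main = do
  -- check to . from = id and from . to = id on every inhabitant
  print (all (\b -> to (from b) == b) [False, True])
  print (all (\e -> from (to e) == e) [Left (), Right ()])
```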

Using isomorphisms for counting necessitates having types with specific inhabitant counts. For example, `Either () Bool` is isomorphic to a type with 3 inhabitants that we could define as `data Three = One | Two | Three`. Dependently typed languages like Idris allow us to write `Fin 3` in place of `Three`. Neither option is quite satisfactory since we will be dealing with such types all the time and “Fin 3” is 4 characters too many. We will denote such types simply as `3` (`3 : Type`). Scala developers should be careful not to confuse that with the singleton type `3 <: Int with Singleton`, which has only 1 inhabitant in Scala.

```
Void ~ 0
() ~ 1
Bool ~ 2
Three ~ 3
Fin 3 ~ 3
Fin 10 ~ 10
Int ~ 2^32
Any ~ 1
```

##### Coproducts, “`Either`-like” types

We have already established that `Either () () ~ Bool ~ 2`, but can we generalize this to an arbitrary `Either a b`? An `e : Either a b` is either a value of type `a` or a value of type `b`, which seems to imply that the total number of inhabitants should be the sum of the numbers of inhabitants of `a` and `b`, e.g.

```
Either () () ~ 2 = 1 + 1
Either () Bool ~ 3 = 1 + 2
Either Bool () ~ 3 = 2 + 1
Either Bool Bool ~ 4 = 2 + 2
```

As an exercise, see if you can find a witness for the last isomorphism:

```
data Four = One | Two | Three | Four
to :: Either Bool Bool -> Four
to = ...
from :: Four -> Either Bool Bool
from = ...
```

We arrive at the following formula…

`Either a b ~ a + b`

but what do we mean by `+` here? Surely you can not just *add* types! Clearly, we need to update our notion of types to allow such frivolities.

##### Temporary notation

First, we’ll introduce a new notation `|_| : Type ⟶ ℕ`, which maps types to their inhabitant counts. Note that this notation is part of our meta-language and *not* of the programming language. We will also need `[_] : ℕ ⟶ Type`, which maps natural numbers to our newly introduced types `1`, `2`, etc.

Now we can write our formula more rigorously:

`Either a b ~ [|a| + |b|]`

We will revise our notation at the end of this section, but for now it will do.

##### Products, a.k.a. tuples

Let’s look at tuples now.

```
(Bool, ()) ~ Bool ~ 2
(Bool, Bool) ~ Four ~ 4
(Void, Int) ~ Void
(Int, Int) ~ Long
-- Or in other words,
(Word32, Word32) ~ Word64
(Bool, Bool, Bool) ~ 8
```

As an exercise, convince yourself that all of the above isomorphisms are “true” (can be witnessed by some functions `to` and `from`).

An `e : (a, b)` is a tuple of some `x : a` and some `y : b`. If there are `|a|` ways to choose such an `x` and `|b|` ways to choose a `y`, the total number of possible `e`s is `|a| * |b|`, or in other words:

```
(a, b) ~ [|a| * |b|]
(a, b, c) ~ [|a| * |b| * |c|]
-- and so on
```
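As a quick sanity check of the product formula, we can enumerate all inhabitants of `(Bool, Bool, Bool)` and count them (a sketch, not part of the original derivation):

```
-- |(Bool, Bool, Bool)| should be 2 * 2 * 2 = 8
triples :: [(Bool, Bool, Bool)]
triples =
  [ (a, b, c)
  | a <- [False, True]
  , b <- [False, True]
  , c <- [False, True]
  ]

main :: IO ()
main = print (length triples) -- 8
```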

##### Algebraic data types

An arbitrary ADT such as

`data Foo = Bar | Baz Bool | Baf Int`

can be represented in the “coproduct of products” form, `Foo ~ Either () (Either Bool Int)` (write a witness as an exercise).

This means that

`Foo ~ [1 + (2 + 2^32)] ~ [2^32 + 3]`

This also works for types taking type parameters: since `Maybe a ~ Either () a`,

`Maybe a ~ [1 + |a|]`

##### Functions

A function `f : a -> b` assigns a single value `y : b` to every possible value `x : a`, so it is equivalent to a very large tuple of `b`’s, with a total of `|a|` `b`’s. This means that the number of inhabitants of `a -> b` is `|b| * |b| * ... * |b|` (`|a|` times), or in other words:

`a -> b ~ [|b| ^ |a|]`

As an exercise, see if you can write the corresponding witness (in Idris):

```
data Sized : (t : Type) -> (n : Nat) -> Type where
  MkSized : Iso t (Fin n) -> Sized t n

proof : Sized a n -> Sized b m -> Sized (a -> b) (m ^ n)
```

As we will see later, `a -> b` is a little bit different from coproducts and products in the way it interacts with `forall a.` when we have (parametric) type polymorphism. The above formula can be used *only* if `a` and `b` are free from any universally quantified variables.

##### Simpler notation

From this point on, we will write type `a` to mean any type `x` such that `x ~ a`. In other words, we will conflate `a` and the equivalence class of `a` with respect to `~`. Types like `Bool`, `2`, and `Either () ()` will be treated as if they are the same.

This notational trick allows us to write `Either a b ~ a + b` or `(a, b) ~ a * b` without explicitly lifting and unlifting `a` and `b` to the realm of natural numbers.

We will write `a > 0` to mean that `a` has at least one inhabitant.

##### Currying and uncurrying

Another two useful isomorphisms on functions are

```
(a, b) -> c
~ a -> b -> c
(a, b, c) -> d
~ a -> (b, c) -> d
~ (a, b) -> c -> d
~ (a, c) -> b -> d
~ a -> b -> c -> d
-- and so on
```

and

```
-- note our use of `+` and `*` notation
a -> (b, c)
~ (a -> b) * (a -> c)
a -> (b, c, d)
~ (a -> (b, c)) * (a -> d)
~ (a -> b) * (a -> (c, d))
~ (a -> (b, d)) * (a -> c)
~ (a -> b) * (a -> c) * (a -> d)
```

A somewhat similar isomorphism holds for coproducts:

```
(a + b) -> c
~ (a -> c) * (b -> c)
(a + b + c) -> z
~ (a -> z) * (b -> z) * (c -> z)
-- and so on...
```
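A witness for the coproduct version can be written with `either` and the `Left`/`Right` injections:

```
to :: (Either a b -> c) -> (a -> c, b -> c)
to f = (f . Left, f . Right)

from :: (a -> c, b -> c) -> (Either a b -> c)
from (f, g) = either f g

main :: IO ()
main = print (from (negate, fromEnum) (Left 5 :: Either Int Bool)) -- -5
```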

Corollary:

`(n -> a) -> (m -> a) -> x ~ ((n + m) -> a) -> x`

##### Introduction rules

These might seem rather pointless but they will turn out to be invaluable once we get to polymorphic lambda calculus:

```
1 ~ a -> 1 -- for any a
1 ~ 0 -> a -- for any a
0 ~ a -> 0 -- for any a, |a| > 0
a ~ 1 -> a -- for any a
```

and as corollaries

```
a -> x ~ (1 -> a) -> x -- for any a and x
x ~ (0 -> a) -> x -- for any a and x
(a, a)
~ (1 -> a, 1 -> a)
~ (1 + 1) -> a
~ 2 -> a
```
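The last corollary, `(a, a) ~ 2 -> a`, can be witnessed with `Bool` playing the role of `2`:

```
to :: (a, a) -> (Bool -> a)
to (x, y) b = if b then x else y

from :: (Bool -> a) -> (a, a)
from f = (f True, f False)

main :: IO ()
main = print (from (to ('x', 'y'))) -- ('x','y')
```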

##### Distributing `*` over `+`

Why are `struct`s and `enum`s in C/C++ inadequate for representing types? Why do functional languages use the coproducts-of-products representation of data types (algebraic data types) rather than something more complicated?

We can answer these questions by pointing out the following isomorphism:

`a * (b + c) ~ a * b + a * c`

This isomorphism allows us to distribute `*` (products) over `+` (coproducts), which means that we can take any complicated type consisting of any combination of products and coproducts and arrive at a flattened representation that is isomorphic to the original type:

```
(a * (b + c)) * (d + (e * g))
~ (a * b + a * c) * (d + (e * g))
~ (a * b + a * c) * d + (a * b + a * c) * (e * g)
~ a * b * d + a * c * d + (a * b + a * c) * (e * g)
~ a * b * d + a * c * d + a * b * e * g + a * c * e * g
```

If you are familiar with mathematical logic, ADTs are the disjunctive normal form of types.
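The distributivity isomorphism itself is straightforward to witness:

```
to :: (a, Either b c) -> Either (a, b) (a, c)
to (a, Left b)  = Left (a, b)
to (a, Right c) = Right (a, c)

from :: Either (a, b) (a, c) -> (a, Either b c)
from (Left (a, b))  = (a, Left b)
from (Right (a, c)) = (a, Right c)

main :: IO ()
main = print (to (1 :: Int, Right True :: Either () Bool)) -- Right (1,True)
```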

##### Combining it all together

```
Either Bool (Maybe Bool)
~ 2 + (1 + 2)
~ 5
```

```
Bool -> Bool
~ 2 -> 2
~ 2^2
~ 4
```

```
Maybe Bool -> Bool
~ (1 + 2) -> 2
~ 2^3
~ 8
```

```
Bool -> Maybe Bool
~ 2 -> (1 + 2)
~ 3^2
~ 9
```
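We can confirm the `3^2 = 9` count by enumeration: every `f :: Bool -> Maybe Bool` is determined by the pair `(f True, f False)`, so listing all such pairs lists all such functions (a sketch):

```
fns :: [Bool -> Maybe Bool]
fns = [ \b -> if b then x else y | x <- vals, y <- vals ]
  where vals = [Nothing, Just False, Just True]

main :: IO ()
main = print (length fns) -- 9
```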

```
(Bool, Bool, Bool) -> ()
~ (2 * 2 * 2) -> 1
~ 1^8
~ 1
```

```
Bool -> Bool -> (Either Bool Bool) -> Bool
~ (Bool, Bool, Either Bool Bool) -> Bool
~ (2 * 2 * (2 + 2)) -> 2
~ 2 ^ 16
```

```
() -> Bool
~ 1 -> 2
~ 2
```

Please see “Any is a final object” (https://alexknvl.com/posts/any-is-final-object.html) for an explanation of this one.

```
Any -> ()
~ 1 -> 1
~ 1
```

##### Sets and non-regular types [1]

Consider the type `Set a`, an abstract type of sets containing objects of type `a`. Neither Haskell nor Scala has precisely the type we want, since neither language uses the correct notion of equality (observational equality), but both come close enough for our purposes.

In order to calculate `|Set a|`, we first note that *a set* is either a set of 0 elements, or a set of 1 element, or a set of 2 elements, and so on.

```
|Set a| = |Set₀ a| + |Set₁ a| + |Set₂ a| + ...
|Set₀ a| = 1
|Set₁ a| = |a|
|Set₂ a| = |a| (|a| - 1) / 2
```

To derive the last formula, consider how many ways there are to choose the first element `x : a` of a set (`|a|`) and how many ways to choose the second element `y : a` (`|a| - 1`). Then notice that both `{ x, y }` and `{ y, x }` are the same set, but we counted them twice.

As an exercise, see if you can prove that in general:

`|Setₙ a| = |a| (|a| - 1) .. (|a| - n + 1) / n!`

What is `|Setₙ a|` for `n > |a|`? It must be zero: a set must have all of its elements be unique, so there are not enough elements `x : a` to make such a set.

Now, imagine that we have some type `b ~ 1 + a`, which has exactly one more element than `a`. How are `|Set b|` and `|Set a|` related? Let’s say that `b = Maybe a`, and that we have some `x : Set b`. There are two possible cases: either `Nothing ∈ x` or `Nothing ∉ x`. In either case there are `|Set a|` ways to fill the rest of the set with elements `Just (z : a)`, which means that `|Set b| = 2 |Set a|`.

We conclude that `|Set [1 + |a|]| = 2 |Set a|`, or in simplified notation `Set (1 + n) ~ 2 * Set n`, and from `|Set 0| = 1` we can arrive at the final expression `|Set n| = 2^n`.

But wait a second, `2^n ~ n -> 2 ~ n -> Bool`. It turns out that we spent all this time deriving the number of inhabitants of a set when we could’ve used a shortcut: a set is isomorphic to a predicate that tells us whether a particular value is in the set or not.

##### Double negation translation

I could write a whole other blog post about this topic, but here is the gist: `x > 0` if and only if `(x -> 0) -> 0 ~ 1`, and `x ~ 0` if and only if `x -> 0 > 0`.

We will first prove the latter:

- If `x ~ 0`, then `x -> 0 ~ 0 -> 0 ~ 1`.
- Suppose `x -> 0 > 0` and, for contradiction, `x > 0`; then there is some `f : x -> 0` and some `a : x`. This means that `f a : 0` and `0 > 0`, contradiction.

Note that `x -> 0` can not have more than one inhabitant. Suppose we could distinguish between two `a, b : x -> 0` with a predicate `p : (x -> 0) -> Bool`. Then `p` would need to somehow extract a bit of information out of `x -> 0`, but the only way to probe `x -> 0` is to supply it with an argument `x`. Above we have proved that if there is an argument that we could supply to `x -> 0`, then `x -> 0 ~ 0`, which contradicts `a : x -> 0`.

Since `x -> 0` can have at most one inhabitant, we can conclude that:

- `x ~ 0` iff `x -> 0 ~ 1`.
- `x > 0` iff `x -> 0 ~ 0`.

Now we can prove that `x > 0` iff `(x -> 0) -> 0 ~ 1`:

- If `x > 0`, then `x -> 0 ~ 0`, and `(x -> 0) -> 0 ~ 1`.
- If `(x -> 0) -> 0 ~ 1`, then `x -> 0 ~ 0`, and `x > 0`.

As an exercise, prove that `(x -> 0) -> 0` is a covariant functor and a monad. Prove that `x -> 0` is a contravariant functor.

# Counting inhabitants of rank-1 simple-kinded types [1]

In this section we will finally get into polymorphic lambda calculus. Our universe of types expands to encompass types such as `forall a. a -> a` and `forall b. (forall a. (a, b) -> a) -> a -> b`.
.

##### Distributivity of universal quantification

Universal quantification distributes over products:

`∀ x. f x * g x ~ (∀ x. f x) * (∀ x. g x)`

This can be witnessed by (Idris code):

```
to : {f : Type -> Type} -> {g : Type -> Type} ->
     ((x : Type) -> (f x, g x)) -> ((x : Type) -> f x, (x : Type) -> g x)
to fg = (\x => fst $ fg x, \x => snd $ fg x)

from : {f : Type -> Type} -> {g : Type -> Type} ->
       ((x : Type) -> f x, (x : Type) -> g x) -> ((x : Type) -> (f x, g x))
from (f, g) = \x => (f x, g x)
```

The corresponding distributivity law for coproducts,

`∀ x. f x + g x ~ (∀ x. f x) + (∀ x. g x)`

is not nearly as obvious, although it may appear trivial at first sight. Consider how you would derive `e : (∀ x. f x) + (∀ x. g x)`:

```
to :: (∀ x. Either (f x) (g x)) -> Either (∀ x. f x) (∀ x. g x)
to fg = ...
```

The only way to construct an `Either` is to use its `Left` or `Right` constructor, but which one? To determine that, we first need to instantiate `∀ x. f x + g x` with some type:

```
to fg = case (fg :: Either (f ()) (g ())) of
  Left f  -> Left $ ...
  Right g -> Right $ ...
```

Consider the first case. We have an `f ()`, but we need to construct `∀ x. f x`… and evaluating `fg` with a different type parameter might give us a different result, couldn’t it?

Well, this is where we have to rely on parametricity and purity of `fg`. If we didn’t have either one, this isomorphism would not hold!

Parametricity implies that this isomorphism holds, but in Haskell (or Scala) there is no way to write its witness without using coercion or a partial `case` expression (`asInstanceOf` or partial `match`).

##### Distributivity of ∀ over functions

As we will see later, the interaction between `∀` and `⟶` is surprisingly rich, but there are some simple cases where `∀ x. f x -> g x` can be simplified quite easily. If the argument type does not depend on `x`, i.e. it is always the same fixed type `a`, then:

```
∀ x. a -> f x
~ a -> ∀ x. f x
```

which can be witnessed by

```
def to[A, F[_]]: ∀[λ[x => A => F[x]]] => (A => ∀[F]) =
  f => a => ∀(f.apply.apply(a))

def from[A, F[_]]: (A => ∀[F]) => ∀[λ[x => A => F[x]]] =
  f => ∀[λ[x => A => F[x]]](a => f(a).apply)
```

You might want to check out Wikipedia’s page on universal quantification to learn other similar isomorphisms.

##### Yoneda lemma

If `f` is a covariant functor,

`∀ x. (a -> x) -> f x ~ f a`

In order to show that there is indeed an isomorphism, we first construct two functions going between these types:

```
to :: (∀ x. (a -> x) -> f x) -> f a
to f = f id
from :: Functor f => f a -> (∀ x. (a -> x) -> f x)
from fa = \f -> fmap f fa
```

As an exercise I would recommend proving that `to . from = id` and `from . to = id`. One direction will prove very easy; the other will seem impossible. As far as I understand, the proof *must* rely on parametricity, so the equality holds in the relational model of polymorphic lambda calculus. I suggest going all the way back to Reynolds’ papers on polymorphic lambda calculus for more information.

##### Yoneda lemma for contravariant functors

If `f` is a contravariant functor,

`∀ x. (x -> a) -> f x ~ f a`
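As in the covariant case, the two directions are easy to write down (here using `Contravariant` from `Data.Functor.Contravariant` in `base`); proving that they are inverse again relies on parametricity:

```
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Contravariant (Contravariant (..), Predicate (..))

to :: (forall x. (x -> a) -> f x) -> f a
to f = f id

from :: Contravariant f => f a -> (forall x. (x -> a) -> f x)
from fa = \g -> contramap g fa

main :: IO ()
main = print (getPredicate (to (from (Predicate even))) (2 :: Int)) -- True
```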

##### Corollary

If `f` is a covariant functor,

```
∀ x. f x
~ ∀ x. (0 -> x) -> f x
~ f 0
```

If `f` is a contravariant functor,

```
∀ x. f x
~ ∀ x. (x -> 1) -> f x
~ f 1
```

If `f` is a phantom functor (`x` is free in `f x`),

```
∀ x. f x
-- since a phantom functor is covariant
~ f 0
-- since a phantom functor is contravariant
~ f 1
```

##### Simple examples

```
∀ a. a -> a
-- a ~ 1 -> a
~ ∀ a. (1 -> a) -> a
-- Yoneda with f x = x
~ 1
```

```
∀ a b. (a, b) -> a
-- uncurrying
~ ∀ a b. b -> a -> a
-- 1 -> b ~ b
~ ∀ a b. (1 -> b) -> a -> a
-- Yoneda with f x = a -> a
~ ∀ a. a -> a
-- reusing the above proof
~ 1
```

```
∀ a. (a, a) -> a
-- (a, a) ~ 2 -> a
~ ∀ a. (2 -> a) -> a
-- Yoneda with f x = x
~ 2
```

```
∀ a. (a, a) -> (a, a)
-- (a, a) ~ 2 -> a
~ ∀ a. (2 -> a) -> (a, a)
-- Yoneda with f x = (x, x)
~ (2, 2)
~ 4
```

```
∀ a b c. (a -> c) -> b -> a -> c
-- Yoneda with f x = b -> a -> x
~ ∀ a b. b -> a -> a
-- reusing one of the above theorems
~ 1
```

```
∀ a b c. (a -> b) -> (b -> c) -> a -> c
~ ∀ a b. (a -> b) -> ∀ c. (b -> c) -> a -> c
-- Yoneda with f x = a -> x
~ ∀ a b. (a -> b) -> a -> b
-- Yoneda with f x = a -> x
~ ∀ a. a -> a
-- reusing one of the above theorems
~ 1
```

```
∀ a b. (a -> b) -> b -> a -> b
~ ∀ a b. ((a + 1) -> b) -> a -> b
-- Yoneda with f x = a -> x
~ ∀ a. a -> (a + 1)
-- a ~ 1 -> a
~ ∀ a. (1 -> a) -> (a + 1)
-- Yoneda with f x = x + 1
~ 1 + 1
~ 2
```

```
∀ a b c. (a -> b) -> b -> a -> c
-- (0 -> c) -> x ~ x
~ ∀ a b c. (0 -> c) -> ((a -> b) -> b -> a -> c)
-- Yoneda with f x = (a -> b) -> b -> a -> x
~ ∀ a b. (a -> b) -> b -> a -> 0
~ ∀ a. (∀ b. (a -> b) -> b) -> a -> 0
~ ∀ a. a -> a -> 0
~ ∀ a. (2 -> a) -> 0
-- to prove that ∀ a. f a -> 0 is uninhabited
-- we need to prove that f a is inhabited for at least one a
-- substitute a = 1, we get
-- (2 -> 1) ~ 1
-- hence
~ 0
```

```
∀ a. (0 -> a) -> a -> 0
~ ∀ a. (0 -> a) -> (1 -> a) -> 0
~ ∀ a. (1 -> a) -> 0
-- Yoneda with f x = 0
~ 0
```

```
∀ a b. (a -> 0, b) -> 2 * b
~ ∀ a b. b -> (a -> 0) -> 2 * b
-- Yoneda
~ ∀ a. (a -> 0) -> 2
-- Add (0 -> a) ~ 1
~ ∀ a. (0 -> a) -> (a -> 0) -> 2
-- Yoneda
~ (0 -> 0) -> 2
~ 1 -> 2
~ 2
```

```
∀ a. (a -> 0) -> 0
~ ∀ a. (0 -> a) -> (a -> 0) -> 0
~ (0 -> 0) -> 0
~ 1 -> 0
~ 0
```

```
∀ a. (a -> 0) -> a -> 0
~ ∀ a. a -> (a -> 0) -> 0
~ (1 -> 0) -> 0
~ 0 -> 0
~ 1
```

```
∀ a. (a -> 0, a -> 0) -> (a -> 0)
~ ∀ a. a -> (a -> 0) * (a -> 0) -> 0
~ ∀ a. (1 -> a) -> ((a -> 0) * (a -> 0) -> 0)
-- Yoneda with f x = (x -> 0) * (x -> 0) -> 0
~ (1 -> 0) * (1 -> 0) -> 0
~ 0 * 0 -> 0
~ 1
```

```
∀ a b. (a -> b) -> Maybe a -> Maybe b
-- Yoneda with f x = Maybe a -> Maybe x
~ ∀ a. Maybe a -> Maybe a
-- Maybe's definition
~ ∀ a. (1 + a) -> Maybe a
-- Distributing `->` over `+`
~ ∀ a. (1 -> Maybe a) * (a -> Maybe a)
-- Distributing `∀` over `*`
~ (∀ a. Maybe a) * (∀ a. a -> Maybe a)
-- Maybe's definition
~ (∀ a. (1 + a)) * (∀ a. a -> Maybe a)
-- 0 -> a introduction (as a unit)
~ (∀ a. (0 -> a) -> (1 + a)) * (∀ a. a -> Maybe a)
-- Yoneda lemma with f x = 1 + x
~ (1 + 0) * (∀ a. a -> Maybe a)
-- Simplifying via x + 0 = x and 1 * x = x
~ ∀ a. a -> Maybe a
-- unit introduction and Maybe's definition
~ ∀ a. (1 -> a) -> (1 + a)
-- Yoneda lemma with f x = 1 + x
~ 2
```

# Representable functors and containers

The Yoneda lemma tells us that `∀ x. (a -> x) -> g x ~ g a`, but what if instead of `a -> x` we have some arbitrary `f x` and a natural transformation between functors, `∀ x. f x -> g x`?

There is a class of functors `f` such that `f x` is isomorphic to `a -> x` for some `a`, called representable functors. Some examples include:

```
() ~ 0 -> x
x ~ 1 -> x
(x, x, x) ~ 3 -> x
-- basically, anything that looks like a tuple
```
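For instance, `(x, x, x) ~ 3 -> x` can be witnessed explicitly, with a `Three` type standing in for `3`:

```
data Three = One | Two | Three

to :: (x, x, x) -> (Three -> x)
to (a, _, _) One   = a
to (_, b, _) Two   = b
to (_, _, c) Three = c

from :: (Three -> x) -> (x, x, x)
from f = (f One, f Two, f Three)

main :: IO ()
main = print (from (to ('a', 'b', 'c'))) -- ('a','b','c')
```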

If `f` is a representable functor, `f x ~ a -> x`, and `g` is a covariant functor, then

```
∀ x. f x -> g x
~ ∀ x. (a -> x) -> g x
~ g a
```

That’s a nice shortcut, but it is pretty limiting! What if `f = Maybe`, or, more generally, `f x ~ f₁ x + f₂ x`, where `f₁ x ~ a₁ -> x` and `f₂ x ~ a₂ -> x` (in the case of `Maybe`, `a₁ ~ 0` and `a₂ ~ 1`):

```
∀ x. f x -> g x
~ ∀ x. (f₁ x + f₂ x) -> g x
~ ∀ x. ((a₁ -> x) + (a₂ -> x)) -> g x
~ ∀ x. ((a₁ -> x) -> g x) * ((a₂ -> x) -> g x)
~ (∀ x. (a₁ -> x) -> g x) * (∀ x. (a₂ -> x) -> g x)
~ (g a₁) * (g a₂)
```

In the case of `∀ x. Maybe x -> Maybe x`, we get:

```
∀ x. Maybe x -> Maybe x
~ Maybe 0 * Maybe 1
~ 1 * 2
~ 2
```

Much simpler than what we had to do before!

Now, there is another class of functors called containers, where `f x ~ Σᵢ aᵢ -> x`, and `i` can range either over some finite range or over `ℕ`.

Almost every “container-like” type (a type with a functor instance) you can imagine is a container:

```
Maybe x
~ (0 -> x) + (1 -> x)

(Bool, x)
~ 2 * x
~ (1 + 1) * (1 -> x)
~ (1 -> x) + (1 -> x)

Bool -- f x = Bool
~ 2 * (0 -> x)
~ (0 -> x) + (0 -> x)

List x
~ (0 -> x) + (1 -> x) + (2 -> x) + ...
~ Σᵢ xⁱ

Either Bool x
~ 2 + x
~ (0 -> x) + (0 -> x) + (1 -> x)
```

The only exception that I know of is `(x -> 0) -> 0` and similar constructions.

Now we can generalize our previous result to an arbitrary container:

```
∀ x. f x -> g x
~ ∀ x. (Σᵢ fᵢ x) -> g x
~ ∀ x. (Σᵢ aᵢ -> x) -> g x
~ ∀ x. Πᵢ ((aᵢ -> x) -> g x)
~ Πᵢ (∀ x. (aᵢ -> x) -> g x)
~ Πᵢ g aᵢ
```

##### A different view on free theorems

Consider some function `f` that has the same signature as `map` on lists:

```
∀ a b. (a -> b) -> [a] -> [b]
-- Yoneda with f x = [a] -> [x]
~ ∀ a. [a] -> [a]
-- using the container formula
~ ∀ a. (Σᵢ aⁱ) -> [a]
~ Πᵢ [i]
-- expressing it using dependent types and renaming i to n
~ (n : Nat) -> [Fin n]
```

We can see that a function with `map`’s signature is isomorphic to a function that takes the list’s length `n : Nat` and then produces a list of indices of elements from the original list. It may duplicate some indices and skip some, but that’s about it! Compare that with the free theorem for `m : ∀ a b. (a -> b) -> [a] -> [b]`:

`m f = m id . map f = map f . m id`

I don’t know about you, but I find the free theorem to be less intuitive than our approach of finding a neat isomorphism.

Similarly, if we take a look at `filter`’s signature,

```
∀ a. (a -> Bool) -> [a] -> [a]
~ ∀ a. [a] -> (a -> Bool) -> [a]
-- using the container formula
~ ∀ a. (Σᵢ aⁱ) -> (a -> Bool) -> [a]
~ Πᵢ ((i -> Bool) -> [i])
-- expressing it using dependent types and renaming i to n
~ (n : Nat) -> Set (Fin n) -> [Fin n]
```

It takes the list’s length and a set of indices and produces a list of indices. We can see exactly how much freedom the signature leaves us! For example, a function that takes every element that comes *after* one that satisfies the predicate and then shuffles them somehow based on the length of the list is a valid function satisfying the signature `∀ a. (a -> Bool) -> [a] -> [a]`.

# Recursive types

But what to do about `∀ a. (a -> a) -> (a -> a)`? Intuitively, we know that the answer is `ℵ₀` (a countable number of inhabitants) as `\f -> f`, `\f -> f . f`, `\f -> f . f . f`, etc. are all valid expressions of that type. Can we make this proof more rigorous and less syntax-dependent?

Well, time for big guns,

```
∀ a. (a -> a) -> a -> a
-- (a -> a, a) ~ (1 + a) -> a
~ ∀ a. ((1 + a) -> a) -> a
-- define f x = 1 + x
~ ∀ a. (f a -> a) -> a
~ μ x. f x
~ μ x. 1 + x
~ ℵ₀
```

From this we can conclude that the number of inhabitants of `∀ a. (a -> a) -> a -> a` is countable, and that it is isomorphic to the set of Peano (natural) numbers.
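The isomorphism with the naturals can be sketched as the usual pair of Church-numeral conversions (needs `RankNTypes`; `toNat`/`fromNat` are names introduced here for illustration):

```
{-# LANGUAGE RankNTypes #-}

type Church = forall a. (a -> a) -> a -> a

toNat :: Church -> Integer
toNat n = n (+ 1) 0

fromNat :: Integer -> Church
fromNat 0 = \_ z -> z
fromNat k = \s z -> s (fromNat (k - 1) s z)

main :: IO ()
main = print (toNat (fromNat 5)) -- 5
```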

```
∀ a b. (a -> b -> a) -> a -> b -> a
-- move parameters around
~ ∀ a b. b -> (a -> b -> a) -> a -> a
~ ∀ a b. b -> ((a * b + 1) -> a) -> a
~ ∀ b. b -> (μ x. x * b + 1)
-- Yoneda
~ μ x. x * 1 + 1
~ ℵ₀
```

```
∀ a. (a -> a) -> a
-- f x = x
~ μ x. x
~ 0
```

##### Checking that `μ x. f x` is inhabited

Is there a mechanical way to check whether `μ x. f x` is inhabited? Note that if `μ x. f x` is inhabited, i.e. `μ x. f x > 0`, then

```
μ x. f x
~ ∀ x. (f x -> x) -> x > 0
-- substituting x = 0
(f 0 -> 0) -> 0 > 0
-- double-negation translation preserves `> 0`
f 0 > 0
```

and we conclude that `μ x. f x > 0` implies `f 0 > 0` (proof by mniip).

On the other hand,

```
f 0 > 0
-- assuming f is a covariant functor,
-- we can map it with 0 -> (μ x. f x)
f (μ x. f x) > 0
-- and we conclude
μ x. f x > 0
```

Hence, for a covariant functor `f`, `μ x. f x` is inhabited if and only if `f 0` is inhabited.

##### Checking that `μ x. f x` has infinitely many inhabitants

If `f 0 ~ ℵ₀`, then, following the same logic as in our proof of inhabitance, we get:

```
f 0 ~ ℵ₀
-- assuming f is a covariant functor,
-- we can map it with 0 -> (μ x. f x)
f (μ x. f x) ~ ℵ₀
-- and we conclude
μ x. f x ~ ℵ₀
```

But what if `0 < f 0 < ℵ₀`? Let’s assume that `f` is a container, `f x ~ (Σᵢ aᵢ -> x) + r`, where `aᵢ > 0`, `r > 0`, and `i` ranges over a non-empty set `I`. Then `f x ~ 1 + (a₀ -> x) + ((r - 1) + Σᵢ aᵢ -> x)`, where `Σᵢ` now ranges over `I ∖ {0}`.

In other words, `f x ~ 1 + (a₀ -> x) + R`. As an exercise, see how you would construct an infinite sequence of elements of `μ x. 1 + (a₀ -> x) + R`, proving that `μ x. f x ~ ℵ₀`.

## What is not free about Functor laws?

Suppose that `F :: * -> *` is a functor and we write some implementation of `fmap :: forall a b. (a -> b) -> F a -> F b` for it; what should we look out for?

```
∀ a b. (a -> b) -> F a -> F b
-- f b = F a -> F b
-- if F is a functor,
~ ∀ a. F a -> F a
~ F :~> F
```

So `fmap ~ F :~> F`, meaning that the only freedom we have in implementing `fmap` is modifying the context (`F`) without modifying the contents (`a`) of `F a`. This explains why `fmap id = id` is crucially important: if this law were omitted, we could write an `fmap` that combines a natural transformation `F :~> F` with the lifting of `a -> b` to `F a -> F b`.

## Another way to derive theorems for free?

```
∀ a. (a -> Bool) -> F a -> F a
~ ∀ a. (F a, a -> Bool) -> F a
-- G x = (F x, x -> Bool)
~ G :~> F
-- C a b = (a -> b, b -> a)
-- D a b = a -> b
-- α . (G f) = (F f) . α
```

This section is still WIP, but here is something to think about: can every free theorem be expressed as a sufficient condition for some `F :~> G` to be a natural transformation?

# Counting inhabitants for rank-N and higher-kinded types.

```
∀ f. f A -> f B
-- define g x = (x = A)
-- f A ~ g :~> f
~ ∀ f. (g :~> f) -> f B
-- Yoneda
~ g B
~ A = B
```

```
∀ f a b. f (G a) -> f (G b)
~ ∀ a b. ∀ f. f (G a) -> f (G b)
-- using the theorem above
~ ∀ a b. G a = G b
```

This is really tricky and we’ll have to take a pause and talk about…

### Bounding the number of inhabitants above.

Consider any polymorphic type `∀ a. f a`. Can we say anything about the number of its inhabitants by instantiating `a` with different types?

Can the number of inhabitants increase because of instantiation? Most certainly!

```
∀ a. a -> a ~ 1
-- but
2 -> 2 ~ 4
```

Can the number of inhabitants decrease because of instantiation? Yes:

```
∀ a. (a, a) -> a ~ 2
-- but
(0, 0) -> 0 ~ 1
-- and
(1, 1) -> 1 ~ 1
```

Well, I will figure it out later.

# Counting inhabitants of types given laws.

Is hard!