Depth Comonads
I haven’t written much on this blog recently: since starting a PhD all of my writing output has gone towards paper drafts and similar things. Recently, though, I’ve been thinking about streams, monoids, and comonads, and I haven’t managed to wrangle those thoughts into something coherent enough for a paper. This blog post is a collection of those (pretty disorganised) thoughts. The hope is that writing them down will force me to clarify things, but consider this a warning that the rest of this post may well be muddled and confusing.
Streams
The first thing I want to talk about is streams.
record Stream (A : Type) : Type where
  coinductive
  field
    head : A
    tail : Stream A
This representation is coinductive: the type above contains infinite values. Agda, unlike Haskell, treats inductive and coinductive types differently (this is why we need the coinductive keyword in the definition). One of the differences is that it doesn’t check termination for construction of these values:
alternating : Stream Bool
alternating .head = true
alternating .tail .head = false
alternating .tail .tail = alternating
alternating :: [Bool]
alternating = True : False : alternating
We have the equivalent in Haskell above. We’re also using some fancy syntax for the Agda code: copatterns (Abel and Pientka 2013).
Note that this type is only definable in a language with some notion of laziness. If we tried to define a value like alternating above in OCaml we would loop. Haskell has no problem, and Agda, through its coinduction mechanism, can handle it as well.
Update 4-5-22: thanks to Arnaud Spiwack (@aspiwack) for correcting me on this: it turns out the definition of alternating above can be written in OCaml, even without laziness. Apparently OCaml has a facility for strict cyclic data structures. Also, I should be a little more precise with what I’m saying above: even without the extra facility for strict cycles, you can of course write a lazy list with some kind of lazy wrapper type.
There is, however, an isomorphic type that can be defined without coinduction: a function from the natural numbers, ℕ-Stream A = ℕ → A. In this representation the alternating stream becomes a function ℕ-alternating : ℕ-Stream Bool that returns true precisely on even inputs (notice that, in this form, the function ℕ-alternating is the same function as even : ℕ → Bool).
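If it helps, here is a rough Haskell rendering of this indexing representation (the names NStream and nAlternating are mine, and Natural stands in for ℕ):

import Numeric.Natural (Natural)

-- A stream is just a function from the naturals.
type NStream a = Natural -> a

-- In this form the alternating stream really is the "even" function.
nAlternating :: NStream Bool
nAlternating = even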
In fact, we can convert from the coinductive representation to the inductive one. This conversion function is more familiarly recognisable as the indexing function:
_[_] : Stream A → ℕ-Stream A
xs [ zero  ] = xs .head
xs [ suc n ] = xs .tail [ n ]
I’m not just handwaving when I say the two representations are isomorphic: we can prove this isomorphism, and, in Cubical Agda, we can use this to transport programs on one representation to the other.
Proof of isomorphism
tabulate : ℕ-Stream A → Stream A
tabulate xs .head = xs zero
tabulate xs .tail = tabulate (xs ∘ suc)

stream-rinv : (xs : Stream A) → tabulate (xs [_]) ≡ xs
stream-rinv xs i .head = xs .head
stream-rinv xs i .tail = stream-rinv (xs .tail) i

stream-linv : (xs : ℕ-Stream A) (n : ℕ) → tabulate xs [ n ] ≡ xs n
stream-linv xs zero    = refl
stream-linv xs (suc n) = stream-linv (xs ∘ suc) n

stream-reps : ℕ-Stream A ⇔ Stream A
stream-reps .fun      = tabulate
stream-reps .inv      = _[_]
stream-reps .rightInv = stream-rinv
stream-reps .leftInv xs = funExt (stream-linv xs)
One final observation about streams: another way to define a stream is as the cofree comonad of the identity functor.
record Cofree (F : Type → Type) (A : Type) : Type where
  coinductive
  field
    root : A
    step : F (Cofree F A)

𝒞-Stream : Type → Type
𝒞-Stream = Cofree id
Concretely, the Cofree F A type is a possibly infinite tree, with branches shaped like F, and internal nodes labelled with A. It has the following characteristic function:
{-# NON_TERMINATING #-}
trace : ⦃ _ : Functor 𝐹 ⦄ → (A → B) → (A → 𝐹 A) → A → Cofree 𝐹 B
trace ϕ ρ x .root = ϕ x
trace ϕ ρ x .step = map (trace ϕ ρ) (ρ x)
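For comparison, here is a sketch of the same characteristic function in Haskell, with the cofree comonad written out by hand rather than taken from a library (the names here are mine):

data Cofree f a = a :< f (Cofree f a)

-- Label the current node with phi, and unfold the children with rho.
unfoldCofree :: Functor f => (a -> b) -> (a -> f a) -> a -> Cofree f b
unfoldCofree phi rho x = phi x :< fmap (unfoldCofree phi rho) (rho x)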
Like how the free monad turns any functor into a monad, the cofree comonad turns any functor into a comonad. Comonads are less popular and widely used than monads, as there are fewer well-known examples of them. I have found it helpful to think about comonads through spatial analogies. A lot of comonads can represent a kind of walk through some space: the extract operation tells you “what is immediately here”, and the duplicate operation tells you “what can I see from each point”. For the stream, these two operations are inhabited by head and the following:
duplicate : Stream A → Stream (Stream A)
duplicate xs .head = xs
duplicate xs .tail = duplicate (xs .tail)
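Modelling streams as infinite Haskell lists, this duplicate is essentially tails: each entry of the result is the view of the stream from that point onwards (a sketch, only meaningful on infinite inputs):

duplicate :: [a] -> [[a]]
duplicate xs = xs : duplicate (tail xs)   -- diverges on finite lists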
Generalising Streams
There were three key observations in the last section:
- Streams are coinductive. This requires a different termination checker in Agda, and a different evaluation model in strict languages.
- They have an isomorphic representation based on indexing. This isomorphic representation doesn’t need coinduction or laziness.
- They are a special case of the cofree comonad.
Going forward, we’re going to look at generalisations of streams, and we’re going to see what these observations mean in the contexts of the new generalisations.
The thing we’ll be generalising is the index of the stream. Currently, streams are basically structures that assign a value to every ℕ: what does a stream indexed by, for instance, the rational numbers look like? To drive the intuition for this generalisation let’s first look at the comonad instance on the ℕ-Stream type:
ℕ-extract : ℕ-Stream A → A
ℕ-extract xs = xs zero

ℕ-duplicate : ℕ-Stream A → ℕ-Stream (ℕ-Stream A)
ℕ-duplicate xs zero    = xs
ℕ-duplicate xs (suc n) = ℕ-duplicate (xs ∘ suc) n
This is the same instance as is on the Stream type, transported along the isomorphism between the two types (we could have transported the instance automatically, using subst or transport; I have written it out here manually in full for illustration purposes).
The ℕ-duplicate method here can be changed a little to reveal something interesting:
ℕ-duplicate₂ : ℕ-Stream A → ℕ-Stream (ℕ-Stream A)
ℕ-duplicate₂ xs zero    m = xs m
ℕ-duplicate₂ xs (suc n) m = ℕ-duplicate₂ (xs ∘ suc) n m

ℕ-duplicate₃ : ℕ-Stream A → ℕ-Stream (ℕ-Stream A)
ℕ-duplicate₃ xs n m = xs (go n m)
  where
  go : ℕ → ℕ → ℕ
  go zero    m = m
  go (suc n) m = suc (go n m)

ℕ-duplicate₄ : ℕ-Stream A → ℕ-Stream (ℕ-Stream A)
ℕ-duplicate₄ xs n m = xs (n + m)
In other words, duplicate basically adds indices.
There is something distinctly monoidal about what’s going on here: taking the (ℕ, +, 0) monoid as focus, the extract method above corresponds to the monoid’s empty element, and the duplicate method corresponds to the monoid’s binary operator. In actual fact, there is a comonad for any function from a monoid, often called the Traced comonad.
Traced : Type → Type → Type
Traced E A = E → A

extractᵀ : ⦃ _ : Monoid E ⦄ → Traced E A → A
extractᵀ xs = xs ε

duplicateᵀ : ⦃ _ : Monoid E ⦄ → Traced E A → Traced E (Traced E A)
duplicateᵀ xs e₁ e₂ = xs (e₁ ∙ e₂)
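Here is the same comonad sketched in Haskell (this is essentially the Traced comonad from the comonad package, written out by hand):

newtype Traced m a = Traced { runTraced :: m -> a }

-- extract looks up the value at the empty position.
extractT :: Monoid m => Traced m a -> a
extractT (Traced f) = f mempty

-- duplicate combines the two indices with the monoid operation.
duplicateT :: Monoid m => Traced m a -> Traced m (Traced m a)
duplicateT (Traced f) = Traced (\e1 -> Traced (\e2 -> f (e1 <> e2)))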
Reifying Traced
The second observation we made about streams was that they had an isomorphic representation which didn’t need coinduction. What we can see above, with Traced, is a representation that also doesn’t need coinduction. So what is the corresponding coinductive representation? What does a generalised reified stream look like?
So the first approach to reifying a function to a data structure is to simply represent the function as a list of pairs.
C-Traced : Type → Type → Type
C-Traced E A = Stream (E × A)
This representation obviously isn’t ideal: it isn’t possible to construct an isomorphism between C-Traced and Traced. We can (kind of) go in one direction, but even that function isn’t terminating:
{-# NON_TERMINATING #-}
lookup-env : ⦃ _ : IsDiscrete E ⦄ → C-Traced E A → Traced E A
lookup-env xs x =
  if does (x ≟ xs .head .fst)
    then xs .head .snd
    else lookup-env (xs .tail) x
I’m not too concerned with being fast and loose with termination and isomorphisms for the time being, though. At the moment, I’m just interested in exploring the relationship between streams and the indexing functions.
As a result, let’s try and push on this representation a little and see if it’s possible to get something interesting and almost isomorphic.
Segmented Streams
To get a slightly nicer representation we can exploit the monoid a little bit. We can do this by storing offsets instead of the absolute indices for each entry. The data structure I have in mind here looks a little like this:
┏━━━━━━━━━━┳━━━━━━┳━━━━━━┉
┃x         ┃y     ┃z     ┉
┡━━━━━━━━━━╇━━━━━━╇━━━━━━┉
╵⇤a╌╌╌╌╌╌╌⇥╵⇤b╌╌╌⇥╵⇤c╌╌╌╌┈
Above is a stream containing the values x, y, and z. Instead of each value corresponding to a single entry in the stream, however, they each correspond to a segment. The value x, for instance, labels the first segment in the stream, which has a length given by a. y labels the second segment, with length b, z with length c, and so on.
The Traced version of the above structure might be something like this:
str :: Traced m a
str i | i < a         = x
      | i < a + b     = y
      | i < a + b + c = z
      | ...
So the index-value mapping is also segmented. The stream, in this way, is kind of like a ruler, where different values mark out different quantities along the ruler, and the index function takes in a quantity and tells you which entry in the ruler that quantity corresponds to.
In code, we might represent the above data structure with the following type:
record Segments (E : Type) (A : Type) : Type where
  field
    length : E
    label  : A
    next   : Segments E A
open Segments
The question is, then, how do we convert this structure to a Traced representation?
Monuses
We need some extra operations on the monoid in the segments in order to enable this conversion to the Traced representation. The extra operations are encapsulated by the monus algebra: I wrote about this in the paper I submitted with Nicolas Wu to ICFP last year (2021). It’s a simple algebra on monoids which basically encapsulates monoids which are ordered in a sensible way.
The basic idea is that we construct an order on monoids which says “x is smaller than y if there is some z that we can add to x to get to y”.
_≼_ : ⦃ _ : Monoid A ⦄ → A → A → Type _
x ≼ y = ∃ z × (y ≡ x ∙ z)
A monus is a monoid where we can extract that z, when it exists. On the monoid (ℕ, +, 0), for instance, this order corresponds to the normal ordering on ℕ.
Extracting the z above corresponds to a kind of difference operator:
_∸_ : ℕ → ℕ → ℕ
x     ∸ zero  = x
suc x ∸ suc y = x ∸ y
_     ∸ _     = zero
This operator is sometimes called the monus. It is a kind of partial, or truncating, subtraction:
_ : 5 ∸ 2 ≡ 3
_ = refl

_ : 2 ∸ 5 ≡ 0
_ = refl
And, indeed, this operator “extracts” the z, when it exists.
∸‿is-monus : ∀ x y → (x≼y : x ≼ y) → y ∸ x ≡ fst x≼y
∸‿is-monus zero    _       (z , y≡0+z) = y≡0+z
∸‿is-monus (suc x) (suc y) (z , y≡x+z) = ∸‿is-monus x y (z , suc-inj y≡x+z)
∸‿is-monus (suc x) zero    (z , 0≡x+z) = ⊥-elim (zero≢suc 0≡x+z)
Our definition of a monus is simple: a monus is anything where the order ≼, sometimes called the “algebraic preorder”, is total and antisymmetric. This is precisely what lets us write a function which takes the Segments type and converts it back to the Traced type.
{-# NON_TERMINATING #-}
Segments→Traced : ⦃ _ : Monus E ⦄ → Segments E A → Traced E A
Segments→Traced xs i with xs .length ≤? i
... | yes (j , i≡xsₗ∙j) = Segments→Traced (xs .next) j
... | no  _             = xs .label
This function takes an index and checks whether it is greater than or equal to the length of the first segment in the stream of segments. If it is, then it continues searching through the rest of the segments, with the index reduced by the size of that first segment. If not, it returns the label of the first segment.
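Here is a rough Haskell sketch of the same search, specialised to the (ℕ, +, 0) monus; the Segments record and function names below are mine, and, like the Agda version, it only terminates when segment lengths are non-zero:

import Numeric.Natural (Natural)

data Segments e a = Segments { segLength :: e, segLabel :: a, segNext :: Segments e a }

segmentsToTraced :: Segments Natural a -> Natural -> a
segmentsToTraced xs i
  | segLength xs <= i = segmentsToTraced (segNext xs) (i - segLength xs)  -- the guard makes this subtraction a monus
  | otherwise         = segLabel xs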
Taking the old example, we are basically converting to ∸ from +:
str :: Traced m a
str i | i         < a = x
      | i ∸ a     < b = y
      | i ∸ a ∸ b < c = z
      | ...
The first issue here is that this definition is not terminating. That might seem an insurmountable problem at first (we are searching through an infinite stream, after all), but notice that there is one parameter which is decreasing on each recursive call: the index. Well, it only decreases if the segment is non-zero: this can be enforced by changing the definition of the segments type:
record ℱ-Segments (E : Type) ⦃ _ : Monus E ⦄ (A : Type) : Type where
  coinductive
  field
    label    : A
    length   : E
    length≢ε : length ≢ ε
    next     : ℱ-Segments E A
open ℱ-Segments
This type allows us to write the following definition:
module _ ⦃ _ : Monus E ⦄ (wf : WellFounded _≺_) where
  wf-index : ℱ-Segments E A → (i : E) → Acc _≺_ i → A
  wf-index xs i a with xs .length ≤? i
  ... | no _ = xs .label
  wf-index xs i (acc wf) | yes (j , i≡xsₗ∙j) =
    wf-index (xs .next) j (wf j (xs .length , i≡xsₗ∙j ; comm _ _ , xs .length≢ε))

  ℱ-Segments→Traced : ℱ-Segments E A → Traced E A
  ℱ-Segments→Traced xs i = wf-index xs i (wf i)
Trying to build an isomorphism
So the ℱ-Segments type is interesting, but it only really gives one side of the isomorphism. There is no way to write a function Traced E A → ℱ-Segments E A.
The problem is that there’s no way to get the “next” segment from a function E → A. We can find the label of the first segment, by applying the function to ε, but there’s no real way to figure out the size of this segment. We can change Traced a little to provide this size, though.
Ind : ∀ E → ⦃ _ : Monus E ⦄ → Type → Type
Ind E A = E → A × Σ[ length ⦂ E ] × (length ≢ ε)
This new type will return a tuple consisting of the value indicated by the supplied index, along with the distance to the next segment. For instance, on the example stream given in the diagram earlier, supplying an index i that is bigger than a but smaller than a + b, this function should return y along with some j such that i + j ≡ a + b. Diagrammatically:
╷⇤i╌╌╌╌╌╌╌╌⇥╷⇤j╌╌⇥╷
┢━━━━━━━━┳━━┷━━━━━╈━━━━━━┉
┃x       ┃y       ┃z     ┉
┡━━━━━━━━╇━━━━━━━━╇━━━━━━┉
╵⇤a╌╌╌╌╌⇥╵⇤b╌╌╌╌╌⇥╵⇤c╌╌╌╌┈
This can be implemented in code like so:
module _ ⦃ _ : Monus E ⦄ where
  wf-ind : ℱ-Segments E A → (i : E) → Acc _≺_ i → A × ∃ length × (length ≢ ε)
  wf-ind xs i _ with xs .length ≤? i
  ... | no xsₗ≰i =
    let j , _ , j≢ε = <⇒≺ i (xs .length) xsₗ≰i
    in  xs .label , j , j≢ε
  wf-ind xs i (acc wf) | yes (j , i≡xsₗ∙j) =
    wf-ind (xs .next) j (wf j (xs .length , i≡xsₗ∙j ; comm _ _ , xs .length≢ε))

  ℱ-Segments→Ind : WellFounded _≺_ → ℱ-Segments E A → Ind E A
  ℱ-Segments→Ind wf xs i = wf-ind xs i (wf i)
Again, if the monus has finite descending chains, this function is terminating. And the nice thing about this is that it’s possible to write a function in the other direction:
Ind→ℱ-Segments : ⦃ _ : Monus E ⦄ → Ind E A → ℱ-Segments E A
Ind→ℱ-Segments ind =
  let x , s , s≢ε = ind ε
  in λ where
       .label    → x
       .length   → s
       .length≢ε → s≢ε
       .next     → Ind→ℱ-Segments (ind ∘ (s ∙_))
The problem here is that this isomorphism is only half correct. We can prove that converting to Ind and back is the identity, but not the other direction: there are too many functions in Ind. When converting an Ind to segments we only ever query it at segment boundaries (ε, then the start of the next segment, and so on), so whatever the function does at indices strictly inside a segment is simply discarded.
Nonetheless, it’s still interesting!
State Comonad
There is a comonad on state (Waern 2018; Kmett 2018) that is different from store. Notice that above the Ind type has (almost) the same type as State E A.
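As a sketch of that connection in Haskell: when s is a monoid, the state shape s -> (a, s) supports a comonad structure of the kind described in those posts (the names below are mine):

newtype State s a = State { runState :: s -> (a, s) }

extractS :: Monoid s => State s a -> a
extractS (State f) = fst (f mempty)

-- duplicate runs the computation at the combined index, mirroring duplicateᵀ above.
duplicateS :: Monoid s => State s a -> State s (State s a)
duplicateS (State f) = State (\s -> (State (\s' -> f (s <> s')), snd (f s)))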
This is interesting in two ways: first, it gives some concrete, spatial intuition for what’s going on with the state comonad.
Second, it gives a kind of interesting monad instance on the stream. If we apply the Ind→ℱ-Segments function to the implementation of join on state, we should get a join on ℱ-Segments. And we do!
First, we need to redefine Ind to the following:
𝒜-Ind : ∀ E → ⦃ _ : Monus E ⦄ → Type → Type
𝒜-Ind E A = (i : E) → A × Σ[ length ⦂ E ] × (i ≺ length)
This is actually isomorphic to the previous definition, but we return the absolute position of the next segment, rather than the distance to the next segment.
𝒜-iso : ⦃ _ : Monus E ⦄ → 𝒜-Ind E A ⇔ Ind E A
𝒜-iso .fun xs i = let x , s , k , s≡i∙k , k≢ε = xs i in x , k , k≢ε
𝒜-iso .inv xs i = let x , s , s≢ε = xs i in x , i ∙ s , s , refl , s≢ε
𝒜-iso .rightInv _ = refl
𝒜-iso .leftInv xs p i =
  let x , s , k , s≡i∙k , k≢ε = xs i
  in  x , s≡i∙k (~ p) , k , (λ q → s≡i∙k (~ p ∨ q)) , k≢ε
The implementation of join on this type is the following:
𝒜-join : ⦃ _ : Monus E ⦄ → 𝒜-Ind E (𝒜-Ind E A) → 𝒜-Ind E A
𝒜-join xs i =
  let x , j , i<j = xs i
      y , k , k<j = x j
  in  y , k , ≺-trans i<j k<j
This is the same definition of join as for State, modulo the < fiddling.
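For comparison, here is a sketch of join for the ordinary state monad, writing State e a as e -> (a, e) (the names are mine); erasing the ordering proofs from 𝒜-join gives exactly this shape:

joinState :: (e -> (e -> (a, e), e)) -> e -> (a, e)
joinState xs i =
  let (x, j) = xs i   -- run the outer computation at index i
  in  x j             -- run the inner computation at the index it points to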
On a stream, this operation corresponds to taking a stream of streams and collapsing it to a single stream. It does this by taking a prefix of each internal stream equal in size to the segment of the outer entry. Diagrammatically:
┏━━━━━━━━━━┳━━━━━━┳━━━━━━┉
┃xs        ┃ys    ┃zs    ┉
┡━━━━━━━━━━╇━━━━━━╇━━━━━━┉
╵⇤a╌╌╌╌╌╌╌⇥╵⇤b╌╌╌⇥╵⇤c╌╌╌╌┈
            ╱    ╲
           ╱      ╲
          ╱        ╲
         ╱          ╲
        ╱            ╲
       ╷⇤b╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌⇥╷
       ┢━━━━━━━┳━━━━━━┳━━━┷┉
  ys = ┃xʸ     ┃yʸ    ┃zʸ  ┉
       ┡━━━━━━━╇━━━━━━╇━━━━┉
       ╵⇤aʸ╌╌╌⇥╵⇤bʸ╌╌⇥╵⇤cʸ╌┈
Here we start with a stream consisting of the streams xs, ys, and zs, followed by some other streams. Zooming in on ys, we see that it is in a segment of length b, and consists of three values xʸ, yʸ, and zʸ, with segment lengths aʸ, bʸ, and cʸ, respectively.
Calling join on this stream will give us the following stream:
┏━┉━┳━━━━┳━━━━┳━━━━━┳━━━━┉
┃ ┉ ┃xʸ  ┃yʸ  ┃zʸ   ┃    ┉
┡━┉━╇━━━━╇━━━━╇━━━━━╇━━━━┉
│   │⇤aʸ⇥╵⇤bʸ⇥╵⇤╌╌┈⇥│
╵⇤a⇥╵⇤b╌╌╌╌╌╌╌╌╌╌╌╌⇥╵⇤c╌╌┈
Again, we’re focusing on the ys section here, which occupies the segment from a to a ∙ b. After join, this segment is occupied by three elements, xʸ, yʸ, and zʸ.
Notice that this isn’t quite the normal join on streams. That join takes a stream of streams and makes the ith entry of the result the ith entry of the ith inner stream: it’s a diagonalisation, in other words. This one is kind of similar, but it takes chunks of the outer stream.
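For reference, a sketch of that diagonalising join, modelling streams as infinite lists:

-- The i-th entry of the result is the i-th entry of the i-th inner stream.
diagonal :: [[a]] -> [a]
diagonal xss = zipWith (!!) xss [0 ..]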
Theory
All of this so far is very hand-wavy. We have an almost isomorphism (a split surjection, to be precise), but not much in the way of concrete theoretical insights, just some vague gesturing towards spatial metaphors and so on.
Thankfully, there are two separate areas of more serious research that seem related to the stuff I’ve talked about here. The first is update monads and directed containers, and the second is graded comonads. I think I understand graded comonads and the related work better out of the two, but update monads and directed containers seem more closely related to what I’m doing here.
Update Monads and Directed Containers
There are a few papers on this topic: Ahman, Chapman, and Uustalu (2012), and Ahman and Uustalu (2013, 2014, 2016).
The first of these, “When Is a Container a Comonad?” constructs, as the title suggests, a class for containers which are comonads in a standard way.
Here’s the definition of a container:
Container : Type₁
Container = Σ[ Shape ⦂ Type ] × (Shape → Type)

⟦_⟧ : Container → Type → Type
⟦ S , P ⟧ X = Σ[ s ⦂ S ] × (P s → X)
Containers are a generic way to describe a class of well-behaved functors. Any container is a pair of a shape and a position. Lists, for instance, are containers: their shape is described by the natural numbers (the shape here is the length of the list). The positions in such a list are the numbers smaller than the length; in dependently-typed programming we usually use the Fin type for this:
Fin : ℕ → Type
Fin n = ∃ m × (m <ℕ n)
The container version of lists, then, is the following:
ℒ𝒾𝓈𝓉 : Type → Type
ℒ𝒾𝓈𝓉 = ⟦ ℕ , Fin ⟧
Here’s an example of a list represented in the standard way, and as a container:
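As a rough illustration (in Haskell, with a hypothetical ListC type of my own), here is a concrete list in the standard representation and in container form, as a length paired with an indexing function:

-- A container is a shape (here, the length) together with a function from positions to elements.
data ListC a = ListC { len :: Int, index :: Int -> a }

standard :: [Int]
standard = [10, 20, 30]

asContainer :: ListC Int
asContainer = ListC 3 ([10, 20, 30] !!)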
The benefit of using containers is that it gives a standard, generic, and composable way to construct functors that have some nice properties (like strict positivity). They’re pretty annoying to use in practice, though, which is a shame.
Directed containers are containers that have three extra operations.
- A tail-like operation, where a position can be converted into the shape of the container that is the suffix starting at that position.
- A head-like operation, where you can always return the root position.
- A +-like operation, where you take a position on some tail and translate it into a position on the original container, by adding it.
As the paper observes, these are very similar to a “dependently-typed” version of the monoid methods. This seems to me to be very similar to the indexing stuff we were doing earlier on.
The really interesting part is in the paper “Update Monads: Cointerpreting Directed Containers” (Ahman and Uustalu 2014). This paper presents a variant on state monads, called “update monads”.
These are monads that use a monoid action:
record RightAction (𝑃 : Type) (𝑆 : Type) : Type where
  infixl 5 _↓_
  field
    ⦃ monoid⟨𝑃⟩ ⦄ : Monoid 𝑃
    _↓_     : 𝑆 → 𝑃 → 𝑆
    ↓-assoc : ∀ x y z → (x ↓ y) ↓ z ≡ x ↓ (y ∙ z)
    ↓-ε     : ∀ x → x ↓ ε ≡ x
A (right) monoid action is a monoid along with a function ↓ that “acts” on some other set, in a way that coheres with the monoid methods. The definition is given above. One way to think about it is that if a monoid 𝑃 has an action on 𝑆 it means that elements of 𝑃 can kind of be transformed into elements of 𝑆 → 𝑆.
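A tiny concrete example, sketched in Haskell (the act function is my own name): the monoid (Sum Int, <>, mempty) acts on Int by addition, and the action laws follow from associativity of + and its unit 0.

import Data.Monoid (Sum (..))

-- Laws:  s `act` mempty == s    and    (s `act` p) `act` q == s `act` (p <> q)
act :: Int -> Sum Int -> Int
act s (Sum p) = s + p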
Upd : (𝑃 𝑆 : Type) ⦃ _ : RightAction 𝑃 𝑆 ⦄ → Type → Type
Upd 𝑃 𝑆 X = 𝑆 → 𝑃 × X

η : ⦃ _ : RightAction 𝑃 𝑆 ⦄ → A → Upd 𝑃 𝑆 A
η x s = ε , x

μ : ⦃ _ : RightAction 𝑃 𝑆 ⦄ → Upd 𝑃 𝑆 (Upd 𝑃 𝑆 A) → Upd 𝑃 𝑆 A
μ xs s = let p , x = xs s
             q , y = x (s ↓ p)
         in (p ∙ q , y)
It turns out that the dependently-typed version of this gives directed containers.
Grading and the Cofree Comonad
I’m still in the early stages of understanding all of this material, but at the moment graded comonads and transformers are concepts that I’m much more familiar and comfortable with.
The idea behind graded monads and comonads is similar to the idea behind any indexed monad: we’re adding an extra type parameter to the monadic type, which can constrain the operations involved. Graded monads and comonads use a monoid as that index. This works particularly nicely, in my opinion: just allowing any index at all sometimes feels a little unstructured. The grading construction seems to constrain things to the right degree; the use of the monoid, as well, works really well with comonads.
That preamble out of the way, here’s the definition of a graded comonad:
record GradedComonad (𝑆 : Type) ⦃ _ : Monoid 𝑆 ⦄ (𝐶 : 𝑆 → Type → Type) : Type₁ where
  field
    extract : 𝐶 ε A → A
    extend  : (𝐶 y A → B) → 𝐶 (x ∙ y) A → 𝐶 x B

This also has a few laws, which are expressed more cleanly using cokleisli composition:
  _=<=_ : (𝐶 x B → C) → (𝐶 y A → B) → 𝐶 (x ∙ y) A → C
  (g =<= f) x = g (extend f x)

  field
    idˡ : (f : 𝐶 x A₀ → B₀) →
      PathP (λ i → 𝐶 (ε∙ x i) A₀ → B₀) (extract =<= f) f
    idʳ : (f : 𝐶 x A₀ → B₀) →
      PathP (λ i → 𝐶 (∙ε x i) A₀ → B₀) (f =<= extract) f
    c-assoc : (f : 𝐶 x C₀ → D₀) (g : 𝐶 y B₀ → C₀) (h : 𝐶 z A₀ → B₀) →
      PathP (λ i → 𝐶 (assoc x y z i) A₀ → D₀) ((f =<= g) =<= h) (f =<= (g =<= h))
This seems to clearly be related to the stream constructions. Grading is all about the monoidal information about a comonad: the streams above are a comonad which indexes its entries with a monoid.
There are now two constructions I want to show that suggest a link between the stream constructions and graded comonads. First of these is the Cofree degrading comonad:
record G-CofreeF (𝐹 : Type → Type) (𝐶 : 𝑆 → Type → Type) (A : Type) : Type where
  coinductive; constructor _◃_
  field
    here : A
    step : 𝐹 (∃ w × 𝐶 w (G-CofreeF 𝐹 𝐶 A))
open G-CofreeF

G-Cofree : ⦃ _ : Monoid 𝑆 ⦄ → (Type → Type) → (𝑆 → Type → Type) → Type → Type
G-Cofree 𝐹 𝐶 A = 𝐶 ε (G-CofreeF 𝐹 𝐶 A)
This construction is similar to the cofree comonad transformer: it is based on the cofree comonad, but with an extra (graded) comonad wrapped around each level. For any functor 𝐹 and graded comonad 𝐶, G-Cofree 𝐹 𝐶 is a comonad. The implementation of extract is simple:
extract′ : ⦃ _ : Monoid 𝑆 ⦄ ⦃ _ : GradedComonad 𝑆 𝐶 ⦄ → G-Cofree 𝐹 𝐶 A → A
extract′ = here ∘ extract
extend is more complex. First, we need a version of extend which takes a proof that the grade is of the right form:
module _ {𝐶 : 𝑆 → Type → Type} where
  extend[_] : ⦃ _ : Monoid 𝑆 ⦄ ⦃ _ : GradedComonad 𝑆 𝐶 ⦄ →
              x ∙ y ≡ z → (𝐶 y A → B) → 𝐶 z A → 𝐶 x B
  extend[ p ] k = subst (λ z → 𝐶 z _ → _) p (extend k)
Then we can implement the characteristic function on the cofree comonad: traceT. On graded comonads it has the following form:
module Trace ⦃ _ : Monoid 𝑆 ⦄ ⦃ _ : GradedComonad 𝑆 𝐶 ⦄ ⦃ _ : Functor 𝐹 ⦄ where
  module _ {A B} where
    {-# NON_TERMINATING #-}
    traceT : (𝐶 ε A → B) → (𝐶 ε A → 𝐹 (∃ w × 𝐶 w A)) → 𝐶 ε A → G-Cofree 𝐹 𝐶 B
    traceT ϕ ρ = ψ
      where
      ψ : 𝐶 x A → 𝐶 x (G-CofreeF 𝐹 𝐶 B)
      ψ = extend[ ∙ε _ ] λ x → ϕ x ◃ map (map₂ ψ) (ρ x)
This function is basically the unfold for the cofree degrading comonad. If G-Cofree is an internally-labelled tree, then ϕ above is the labelling function, and ρ is the “next” function, returning the children for some root.
Using this, we can implement extend:
extend′ : (G-Cofree 𝐹 𝐶 A → B) → G-Cofree 𝐹 𝐶 A → G-Cofree 𝐹 𝐶 B
extend′ f = traceT f (step ∘ extract)
The relation between this and the stream is that the stream can be defined in terms of this: Stream W = G-Cofree id (GC-Id W).
Finally, the last construction I want to introduce is the following:
module _ ⦃ _ : Monus 𝑆 ⦄ where
  data Prefix-F⊙ (𝐹 : Type → Type) (𝐶 : 𝑆 → Type → Type) (i j : 𝑆) (A : Type) : Type where
    prefix : ((i≤j : i ≤ j) → A × 𝐹 (∃ k × 𝐶 k (Prefix-F⊙ 𝐹 𝐶 k (fst i≤j) A))) →
             Prefix-F⊙ 𝐹 𝐶 i j A

  Prefix⊙ : (𝐹 : Type → Type) (𝐶 : 𝑆 → Type → Type) (j : 𝑆) (A : Type) → Type
  Prefix⊙ 𝐹 𝐶 j A = 𝐶 ε (Prefix-F⊙ 𝐹 𝐶 ε j A)

  Prefix : (𝐹 : Type → Type) (𝐶 : 𝑆 → Type → Type) (A : Type) → Type
  Prefix 𝐹 𝐶 A = ∀ {i} → Prefix⊙ 𝐹 𝐶 i A
This type is designed to mimic sized type definitions. It has an implicit parameter which can be set, by the user of the type, to some arbitrary depth. Basically the parameter means “explore to this depth”; by using the ∀ we say that it is defined up to any arbitrary depth.
When the ≺ relation on the monus is well founded it is possible to implement traceT:
module _ ⦃ _ : GradedComonad 𝑆 𝐶 ⦄ ⦃ _ : Functor 𝐹 ⦄ (wf : WellFounded _≺_) {A B : Type} where
  traceT : (𝐶 ε A → B) → (𝐶 ε A → 𝐹 (∃ w × (w ≢ ε) × 𝐶 w A)) → 𝐶 ε A → Prefix 𝐹 𝐶 B
  traceT ϕ ρ xs =
    extend[ ∙ε _ ] (λ xs′ → prefix λ _ → ϕ xs′ , map (map₂ (ψ (wf _))) (ρ xs)) xs
    where
    ψ : Acc _≺_ y → (x ≢ ε) × 𝐶 x A → 𝐶 x (Prefix-F⊙ 𝐹 𝐶 x y B)
    ψ (acc wf) (x≢ε , xs) =
      extend[ ∙ε _ ]
        (λ x → prefix λ { (k , y≡x∙k) →
          ϕ x , map (λ { (w , w≢ε , xs) →
                           w , ψ (wf k (_ , y≡x∙k ; comm _ _ , x≢ε)) (w≢ε , xs) })
                    (ρ x) })
        xs
Conclusion
Comonads are much less widely used than monads in Haskell and similar languages. Part of the reason, I think, is that they’re too powerful in a non-linear language. Monads are often used to model sublanguages where it’s possible to introduce “special” variables which interact with the monadic context.
pyth = do
  x <- [1..10]
  y <- [1..10]
  z <- [1..10]
  guard (x*x + y*y == z*z)
  return (x,y,z)
The x variable here semantically spans over the range [1..10]. In the following two examples we see the semantics of state and maybe:
sum :: [Int] -> Int
sum xs = flip evalState 0 $ do
  put 0
  for_ xs $ \x -> do
    n <- get
    put (n + x)
  m <- get
  return m
data E = Lit Int | E :+: E | E :/: E

eval :: E -> Maybe Int
eval (Lit n)     = pure n
eval (xs :+: ys) = do x <- eval xs
                      y <- eval ys
                      return (x + y)
eval (xs :/: ys) = do x <- eval xs
                      y <- eval ys
                      guard (y /= 0)
                      return (x `div` y)
The variables n and m introduced in the state example are “special” because their values depend on the computations that came before. In the maybe example the variables introduced could be Nothing.
You can’t do the same thing with comonads because you’re always able to extract the “special” variable with extract :: m a -> a. Instead of having special variable introduction, comonads let you have special variable elimination. But, since Haskell isn’t linear, you can always just discard a variable so this isn’t much use.
Looking at the maybe example, we have a function eval :: E -> Maybe Int that introduces an Int variable with a “catch”: it is wrapped in a Maybe. We want to use the eval function as if it were a normal function E -> Int, with all of the bookkeeping managed for us: that’s what monads and do notation (kind of) allow us to do.
An analogous example with comonads might be having a function consume :: m V -> String. This “handles” a V value, but the “catch” is that it needs an m context to do so. If we want to treat the consume function as if it were a normal function V -> String, then comonads (and codo notation, Orchard and Mycroft 2013) would be a perfect fit.
The reason that this analogous case doesn’t arise very often is that we don’t have many handlers that look like m V -> String in Haskell. Why? Because if we want to “handle” a V we can just discard it: Haskell is a non-linear language, so you do not need to perform any ceremony to discard a variable.
Graded comonads, though, seem to be much more useful than normal comonads. I think it is because they basically get rid of the m a -> a function, changing it into a much more restricted form. In this way, they give a kind of small linear language, but just for the monoidal type parameter.
And there are a lot of uses for the graded comonads. Above we’ve used them for termination checking. A recursive function might have the form a -> b, where a is the thing being recursed on. If we’re using well-founded recursion to show that it’s terminating, though, we add an extra parameter, an Acc _<_ proof, turning this function into Acc _<_ w × a -> b. The Acc _<_ here is the graded comonad, and this recursive function is precisely the “handler”.
Other examples might be privacy or permissions: a function might be able to work on some value, but only if it has a particular permission regarding that value. The permission here is the monoid.
There are other examples I’m sure, those are just the couple that I have been thinking about.
References
Abel, Andreas, and Brigitte Pientka. 2013. “Wellfounded Recursion with Copatterns” (0) (June): 25. http://www2.tcs.ifi.lmu.de/%7Eabel/icfp13-long.pdf.
Ahman, Danel, James Chapman, and Tarmo Uustalu. 2012. “When Is a Container a Comonad?” In Foundations of Software Science and Computational Structures, 74–88. Lecture Notes in Computer Science. Springer, Berlin, Heidelberg. doi:10.1007/978-3-642-28729-9_5.
Ahman, Danel, and Tarmo Uustalu. 2014. “Update Monads: Cointerpreting Directed Containers”: 23 pages. doi:10.4230/LIPICS.TYPES.2013.1.
———. 2013. “Distributive laws of directed containers.” Progress in Informatics (10) (March): 3. doi:10.2201/NiiPi.2013.10.2.
———. 2016. “Directed Containers as Categories” (April). doi:10.4204/EPTCS.207.5.
Kidney, Donnacha Oisín, and Nicolas Wu. 2021. “Algebras for Weighted Search.” Proceedings of the ACM on Programming Languages 5 (ICFP) (August): 72:1–72:30. doi:10.1145/3473577.
Kmett, Edward. 2018. “The State Comonad.” Blog. The Comonad.Reader. http://comonad.com/reader/2018/the-state-comonad/.
Orchard, Dominic, and Alan Mycroft. 2013. “A Notation for Comonads.” In Implementation and Application of Functional Languages, ed by. Ralf Hinze, 1–17. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-41582-1_1.
Waern, Love. 2018. “I made a monad that I haven’t seen before, and I have a few questions about it.” Reddit Post. reddit.com/r/haskell. https://www.reddit.com/r/haskell/comments/7oav51/i_made_a_monad_that_i_havent_seen_before_and_i/.