Monads are too powerful: The expressiveness spectrum

55 points, posted 4 days ago
by hackandthink

51 Comments

spewffs

5 hours ago

Yes, monads in general are too expressive, but the answer isn't to limit the typeclass to something between Applicative and Monad; it's to limit which monads are allowed. The problem is that there should only be one monad: an effect monad loaded with various effects depending on the side effect needed. Instead of defining this or that monad, there should only be the capability to define the effect you need.

In that case, everything runs within the effect monad and then no one would ever really need to learn what a monad is, just that some calls are effectful (like reading a file or throwing an exception).
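
A rough sketch of what that could look like in mtl-style Haskell (the function and the particular effects here are made up, just to show the shape): code declares the effects it needs and never names a concrete monad.

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Monad.Reader (MonadReader, ask)
    import Control.Monad.Except (MonadError, throwError)

    -- The caller declares only the effects it needs, not a concrete monad.
    readConfig :: (MonadReader FilePath m, MonadError String m) => m String
    readConfig = do
      path <- ask
      if null path
        then throwError "no config path given"
        else pure ("pretend contents of " ++ path)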

SchemaLoad

6 hours ago

I tried learning Haskell for a decent chunk of time and could make some stuff, but despite trying to learn, I still could not tell you what a monad actually is. All the explanations for it seemed to make no sense.

ilikebits

5 hours ago

Monads are a generalization of Promises. Each Monad instance defines its own `.then` in a different way. For promises, `.then` is defined as "run this function once you have this deferred value from the last promise". For optionals (`Maybe`), `.then` is defined as "run this function if the last optional had an actual value". For Either, `.then` is defined as "run this function if the last Either returned Right, otherwise early-return with the value from Left" (this is functional early-return, basically).
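
In Haskell the `.then` of these examples is spelled (>>=). A small sketch with the Prelude's Maybe and Either instances (helper names made up):

    -- Maybe: run the next step only if the previous one produced a value.
    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    twice :: Int -> Maybe Int
    twice n = halve n >>= halve      -- Just 3 for 12, Nothing for 10

    -- Either: run the next step on Right, short-circuit with the Left value.
    positive :: Int -> Either String Int
    positive n = if n > 0 then Right n else Left "not positive"

    checked :: Int -> Either String Int
    checked n = positive n >>= \m -> Right (m * 10)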

bhaney

5 hours ago

This is the first explanation of monads I've heard that makes intuitive sense to me and feels like it sufficiently captures the point. Unless I come back in a few hours to see a bunch of replies from uber-haskellers saying "no that's not what a monad is at all," then I'll consider my search for a good monad explanation to finally be over.

tickettotranai

4 hours ago

No I believe he's basically formally correct.

You need to be able to "wrap" values and then also "wrap" functions in the way you expect. That's literally it.

Btw, the list monad example is stupid imo and borderline misleading. The promise/nullable/Either examples are better. You "wrap" a value by putting it as the only element in a list, and "map" pretty much acts as your function wrapper, but technically you need to jump through a couple of hoops to make it monadic, and I'm just not sure the metaphor is helpful here.
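
(For reference, what the standard list instance actually does: return wraps a value as a one-element list, and (>>=) maps the function over the list and concatenates the results.)

    pairs :: [(Int, Char)]
    pairs = [1, 2] >>= \n -> "ab" >>= \c -> return (n, c)
    -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]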

tasuki

5 hours ago

Promises? Even more generally, Monads are a generalization of Chaining.

(I'm trying very hard not to fall into the trying-to-explain-monads trap!)

ww520

5 hours ago

Monad is a pattern for constructing container types so that their instances can be used safely in chained operations. Given a value type, you can build a monad type wrapping the value type with a set of prescribed monadic functions. Instances of the monad type are called monadic values.

Example is best for illustration. I'll use a made-up syntax.

  // 'Maybe' is a monad wrapping any value type T that can be null.
  class Maybe<T> {
    // The value wrapped by the monadic value
    value: T;

    // A constructor to make a Maybe monadic value from a plain value T.
    // This is called 'unit' or 'return' in Haskell, or 'lift' in other languages.
    // It's really a constructor.
    static wrap(v: T) Maybe<T> {
        return new Maybe { value = v } 
    }

    // then() applies fn on the unwrapped value. fn returns a Maybe<T>.
    // This is called 'bind' in Haskell, or 'flatMap' in other languages.
    then(fn: (T) => Maybe<T>) Maybe<T> {
        return this.value == null ? Maybe.wrap(null) : fn(this.value);
    }
  }
That's it! That's all there is to a monad. You can use the same pattern to build other monadic types, like List<T>, Promise<T>, IO<T>, as long as the wrap() and then() functions are built accordingly. Back to this example, to use it:

  let a = Maybe<int>.wrap(4)                   // construct a monadic value
  let b = a.then(x => Maybe<int>.wrap(x + 1))  // add 1 to it
  let c = Maybe<int>.wrap(null)                // construct a null monadic value
  let d = c.then(x => Maybe<int>.wrap(x + 1))  // safely handle null; d is null

  let e = Maybe<int>.wrap(5)
    .then(x => Maybe<int>.wrap(x + 1))
    .then(x => Maybe<int>.wrap(x * 2))    // chain the calls

  let f = Maybe<float>.wrap(5.0)          // The same Maybe on a different type
    .then(x => Maybe<float>.wrap(null))
    .then(x => Maybe<float>.wrap(x * 2))  // chain the calls; safely handle null
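
Roughly the same chain in real Haskell, where wrap is Just/return and then is (>>=):

    e :: Maybe Int
    e = Just 5 >>= \x -> Just (x + 1) >>= \y -> Just (y * 2)   -- Just 12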

1718627440

4 hours ago

Why does it sound so complicated when it is just a wrapper type that conforms to an interface?

yobbo

11 minutes ago

It's a generalised typeclass for wrappers. It's not called "then" but "bind" and spelled (>>=), which doesn't exactly evoke useful associations for newcomers.
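
The class itself is small. A simplified version of what the Prelude declares (the real one also has (>>) and defaults):

    -- Hide the real class so this standalone version compiles.
    import Prelude hiding (Monad, return, (>>=))

    class Applicative m => Monad m where
      return :: a -> m a                  -- "wrap"
      (>>=)  :: m a -> (a -> m b) -> m b  -- "bind" / "then" / "flatMap"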

ww520

an hour ago

It is what you described, at a high level. The devil is in the details. People keep using words to describe the details when code shows everything.

oncallthrow

6 hours ago

A monad is just a monoid in the category of endofunctors

kmstout

5 hours ago

It's like an enchilada, right?

astrange

5 hours ago

It's an implementation of the typeclass Monad, which happens to come with a special "do" keyword.
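
For example, do-notation is sugar for (>>=); these two are equivalent:

    greet :: IO ()
    greet = do
      name <- getLine
      putStrLn ("hello " ++ name)

    greet' :: IO ()
    greet' = getLine >>= \name -> putStrLn ("hello " ++ name)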

munk-a

5 hours ago

Someone may correct me but - in three levels of conciseness...

A monad is a function that can be combined with other functions.

It's a closure (or functor to the cool kids) that can be bound and arranged into a more complex composite closure without a specification of any actual value to operate on.

It's a lazy operation declaration that can operate over a class of types rather than a specific type (though a single type is itself a class of types containing just that one type, so this is more a note on potential than on necessary utility), and that can be composed and manipulated in languages like Haskell to easily create large declarative blocks of code that are very easy to understand and lend themselves to abstract proofs about execution.

You've probably used them or a pattern like them in your code without realizing it.

1718627440

5 hours ago

So it's a function pointer, right? /s

hinkley

6 hours ago

Unfortunately no one can tell you what a monad is. You have to experience it for yourself. - Haskell Morpheus

the__alchemist

5 hours ago

Nan-in received a university professor who came to inquire about Monads.

Nan-in served tea. He poured his visitor’s cup full, and then kept on pouring.

The professor watched the overflow until he no longer could restrain himself. “It is overfull. No more will go in!”

“Like this cup,” Nan-in said, “you are full of your own opinions and speculations. How can I show you a Monad unless you first empty your cup?”

tasuki

5 hours ago

This is as good an explanation of monads as any. Which is to say, bad.

valiant55

6 hours ago

Forget all the academic definitions; at its core, a monad is a container or wrapper that adds additional functionality to a type.

1718627440

5 hours ago

Yes, this is the only definition I get, but then I don't get all the rage about monads, because containers and standardized interfaces are nothing new, so surely that definition must be wrong?

tickettotranai

5 hours ago

Like many things in life it is far easier to give examples than it is to describe the thing.

Examples of monads are Promises and Elvis operators (for values that can be nullptrs). In a sense, exceptions as well. Having heard this, I think if you take a second pass at the type definitions, you may be able to parse them out.

It really is just "a wrapper for values that sticks around when you do something to the values".

Think of it as a coding pattern, and it's much easier to grok

It's handy for IO because, well, did you see the examples I gave? A monad lets you basically ignore the failure mode/weirdness (the async stuff in the context of promises, null type in the case of Elvis) and worry about the computation you actually want to be doing.

Other places you might apply these would be file IO (file not found? cool, don't care, deal with it later) or networking (as long as the connection's good, do this).
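
A sketch of that "deal with it later" shape using Either (the helpers are made up): every failure short-circuits through the chain and gets handled once at the end.

    loadUser :: String -> Either String Int
    loadUser name = if null name then Left "no such user" else Right (length name)

    loadScore :: Int -> Either String Int
    loadScore uid = if uid > 0 then Right (uid * 10) else Left "no score yet"

    report :: String -> String
    report name =
      case loadUser name >>= loadScore of   -- any Left stops the chain here
        Left err    -> "error: " ++ err
        Right score -> "score: " ++ show score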

1718627440

4 hours ago

Sorry, I never understood the functional approach, outside of LISP being cool because code is data.

Promises are wrappers, I see that. How are exceptions wrappers? They stop the code at the point the exception occurs and unwind the stack. They don't allow you to continue doing stuff and deal with it later.

> A monad lets you basically ignore the failure mode/weirdness

I just don't see how this works out in practice?

Ok, I defer dealing with file not found. Does that mean I now perform heavy computation and parsing on something that does not even exist? Wouldn't it be way easier to just do an early return and encode that the file exists in the type? And how does it play out to make the user wait while you do something, only to come around at the end with "well, actually the input file was already missing"?

And then you have some intermediate layer that needs to store all the computation you are doing until you know whether something succeeded or not. All this to save a single return statement.

tickettotranai

4 hours ago

(rewrote this to be less insufferable)

It's not about saving a single return statement, even in the elvis case. How many times have you written code along the lines of "if this isn't null do this, otherwise return. If the result of that isn't null, do this, otherwise return" etc etc.

Elvis ops are a small QoL change, Promises are essential to async. Exceptions (much of the time) are kind of a "catch it, pass it on" logic for me, and man do I wish I didn't need to write it every time.

With networking this really shines, or really just anything async etc

1718627440

4 hours ago

> How many times have you written code along the lines of "if this isn't null do this, otherwise return. If the result of that isn't null, do this, otherwise return" etc etc.

Yeah that sucks, but why would you write it that way? I thought it was common to write "if this is null return, anyways: do this, do that."

> Promises are essential to async. Exceptions (much of the time) are kind of a "catch it, pass it on" logic

I don't see that either. If errors are specific to functions, then there is only one case where I handle them, so it doesn't save anything to put these checks elsewhere. If they can be accumulated over many calls, then they should just be part of the object state (like feof), so I can query them at the end.

bananaflag

6 hours ago

You should first understand what a typeclass and a Functor are.
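
For what it's worth, Functor is the smaller interface underneath: fmap applies a plain function inside the wrapper, and Monad builds on top of that. A couple of Prelude examples:

    -- Functor is the smaller interface: fmap :: (a -> b) -> f a -> f b
    bumped :: Maybe Int
    bumped = fmap (+ 1) (Just 4)          -- Just 5

    shouted :: Either String String
    shouted = fmap (++ "!") (Right "hi")  -- Right "hi!"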

gowld

6 hours ago

The important thing to know first is that a monad is not a single thing like "Optional". "Monad" is a pattern or "interface" (called a "typeclass" in Haskell) that has many implementations (Optional, Either, List, State Transformer, IO (Input/Output), Logger, Continuation, etc). Sort of like how the "Visitor" pattern in C++/Java is not a single thing.

https://hackage.haskell.org/package/base-4.21.0.0/docs/Contr...

https://book.realworldhaskell.org/read/monads.html

A common metaphor for monads is "executable semicolons". They are effectively a way to add (structured) hook computations (that always return a specific type of value) to run every time a "main" computation (akin to a "statement" in other languages) occurs "in" the monad.

It's sort of like a decorator in Python, but more structured. It lets you write a series of simple computational steps (transforming values), and then "dress them up" / "clean them up" by adding a specific computation to run after each step.
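
A sketch of that idea with the Writer monad from the mtl package (step and the log format are made up): each step is a plain transformation, and (>>=) runs the log-appending hook between steps.

    import Control.Monad.Writer (Writer, runWriter, tell)

    step :: Int -> Writer [String] Int
    step x = do
      tell ["saw " ++ show x]   -- the "executable semicolon" run at each step
      return (x + 1)

    run :: (Int, [String])
    run = runWriter (step 1 >>= step >>= step)
    -- (4, ["saw 1","saw 2","saw 3"])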

Ryder123

6 hours ago

This makes SchemaLoad's comment perfectly clear.

(but I do appreciate the effort you put into your reply - reading that monads are more like interfaces is new information to me, and might help down the road)

tickettotranai

5 hours ago

Somehow I hear this all the time, but the haskell people have to realize that code patterns are absolutely a thing in all languages, ever? A lack of syntactic sugar doesn't mean monads don't exist in other languages.

Typeclasses are a distraction; the point is computation ignoring annoying contexty stuff (file not found errors, null on failure, etc), and there are dozens of examples in literally every language ever.

Not all problems are solved with a technical definition.

bitwize

6 hours ago

Just think of it as a design pattern, but a bit more strict than the Gang of Four patterns. Fundamentally it's a relationship between types and other types such that certain operations make sense and follow well-understood rules (the monadic laws). Study the monadic laws, and try playing with the State, IO, and List monads to get a better sense of what those operations are and why they're useful for sequencing in a pure-functional context.
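
The laws themselves are short. A spot-check for Maybe (a sketch, not a proof):

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    leftIdentity, rightIdentity, associativity :: Bool
    leftIdentity  = (return 12 >>= halve) == halve 12            -- True
    rightIdentity = (Just 12 >>= return) == Just 12              -- True
    associativity = ((Just 12 >>= halve) >>= halve)
                    == (Just 12 >>= \x -> halve x >>= halve)     -- True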

jancsika

6 hours ago

It'd be nice to have a process like the following:

1. I free solo a bunch of junk in vanilla javascript with state flowing hither and thither until I'm out of coffee

2. I test the exact behavior(s) I wanted to make possible in the GUI I just wrote.

3. The framework whitelists only the event chains from my test.

4. For any blacklisted event chains, the user gets a Youtube video screencast of the whitelisted test so they can learn the correct usage of my GUI.

tickettotranai

4 hours ago

If someone paired this with a code-lite GUI designer, the world would be a brighter place indeed

PaulHoule

8 hours ago

I'd argue the exact opposite. Compared to what you can do if you can write compilers, anything that involves composing functions is weak beer, and most monad examples cover computational pipelines as opposed to computational graphs. It's like that Graham book On Lisp: it's a really fun book, but then you realize that screwing around with functions and macros doesn't hold a candle to what you learn from the Dragon Book.

fn-mote

7 hours ago

> screwing around with functions and macros doesn't hold a candle to what you learn from the Dragon Book

This depends a lot on what you mean. My first take is that the more you know about macros the more you realize what they can do.

I don’t know what your takeaway from the Dragon Book was, but writing DSLs using macros feels very usefully powerful to me.

I think you are undervaluing modern macros.

taeric

8 hours ago

I maintain that the big advantage of the On Lisp approach is that all of that is available without having to write a new compiler.

Granted, I also don't have as heavy an attachment to pure functional as most people seem to build. Don't get me wrong, wanton nonsense is nonsensical. But that is just as true in immutable contexts.

PaulHoule

8 hours ago

What I found remarkable about that book is that 80% of what is in it can be done with functions and no macros; mostly you can rewrite the examples in Python, except for the coroutines, but Python already has coroutines. It also irks me that I don’t think the explanation of coroutines in Scheme is very clear, but it’s become the dominant one you find on the net and I can’t find a better one.

As for ‘compiler’, you also don’t need to go all the way to bare metal; some runtime like WASM or the JVM, which is more civilized, is a good target these days.

taeric

7 hours ago

Totally fair. I think a lot of the things we used to do in the name of efficiency have been completely lost in the progress of time. Largely from the emergence and refinement of JIT compilers, I think?

That is, a lot of why you would go with macros in the past was to avoid the expense of function calls. Right? We have left the world of caring about function call overhead so far behind, for most projects, that it is hard to really comprehend.

Coroutines still strike me as a hard one to really grok. I remember reading them in Knuth's work and originally thinking it was a fancy way of saying what we came to call functions and methods. I think without defining threads first, defining a coroutine is really hard to nail down. And too many of us take understanding of threads as a given. Despite many of us (myself not immune) having a bad understanding of threads.

bvrmn

6 hours ago

Coroutines as a technique to implement state machines are the first thing that comes to my mind. It's more abstract and requires far fewer fundamentals compared to concurrency.

taeric

6 hours ago

But coroutines really only work any better than "objects" if you understand the implications for the stack pointer? Which requires understanding exactly what a thread is. Right?

That is, a basic class that has defined state and methods to modify the state is already enough to explain a state machine. What makes coroutines better for it?

andersmurphy

6 hours ago

Yeah, I've had fun using macros to create optimised functions at runtime (inline caching effectively) and/or generate code that is more friendly to the JVM JIT.

Also, there's always plenty of use for doing work at compile time.

In some sense they can also be seen as better code generation.

veqq

6 hours ago

But Lisp programs are compilers. That's the whole point of Lisp and macros. Your functions can happily emit assembly directly.

instig007

7 hours ago

> if you can write compilers anything that involves composing functions is weak beer

> screwing around with functions and macros doesn't hold a candle to what you learn from the Dragon Book.

---

So, what is it that you learn from that book that's a revelation for you compared to the weak beer of composable effect systems?

bionhoward

6 hours ago

And here I thought it was a pedantic word for “data box”

jcmontx

6 hours ago

Haskell looks a heck of a lot like F#, even more than OCaml if you ask me.

retrac

3 hours ago

I'd say the family resemblance is the other way around. F# was influenced by Haskell, both in syntax and semantics. Just as Haskell was influenced by early ML.

Haskell and ML make up one of the major language families. They are more like each other than they are like other languages. Inspired by the lambda calculus. Strong static typing with type inference. A succinct math-like syntax that emphasizes pattern matching.

Haskell goes further with syntactic sugar and tries to be almost equation-like:

    a = 5
    f = \x -> x + 1
    g x = x + 2
    f $ g a
SML:

    val a = 5
    val f = fn x => x + 1
    fun g x = x + 2
    f (g a)
But F#, like Haskell, makes no distinction between values and functions:

    let a = 5
    let f = fun x -> x + 1
    let g x = x + 2
    f (g a)
The use of indent-based blocks is another Haskell-ish influence on F#. But now we're awfully close to bikeshedding.

whycombinetor

8 hours ago

Yes. For the same reason that the Yoneda lemma and the Cayley theorem are almost meaningless tautologies once you fully understand what they're saying. "Every small thing (of a certain type) is able to be expressed as a subcase of a bigger thing that contains every single possible subcase in existence." Well no shit.

IshKebab

7 hours ago

Interesting, but it seems like he kind of proved himself wrong? Monads are the only option he presented that are sufficiently powerful for normal programs.

bokumo

6 hours ago

I don't think you're being fair to Chris Penner. He ends his blog post with: "It may take me another 5 years to finally finish it, but at some point we'll continue this journey and explore how we can sequence effects using the hierarchy of Category classes instead." Emphasis by me.

So while it is true that what he has described so far is not sufficiently powerful for normal programs, he has clearly stated that there are more abstractions between Applicative and Monad to explore than what he has presented so far.