Download framework here.
All posts are here:
- Part I - Workers and ParallelWorkers
- Part II - Agents and control messages
- Part III - Default error management
- Part IV - Custom error management
- Part V - Timeout management
- Part VI - Hot swapping of code
- Part VII - An auction framework
- Part VIII - Implementing MapReduce (user model)
- Part IX - Counting words …
I like to try out different programming paradigms. I started out as an object-oriented programmer. In university, I used Prolog. I then learned functional programming. I also experimented with various shared-memory parallel paradigms (e.g. async, tasks, and the like). I now want to learn more about message-based parallel programming (Erlang style). I'm convinced that doing so makes me a better programmer. Plus, I enjoy it …
My usual learning style is to build a framework that replicates a particular programming model and then write code using it. In essence, I build a very constrained environment. For example, when learning functional programming, I didn't use any OO constructs for a while, even though my programming language supported them.
In this case, I built myself a little agent framework based on F# MailboxProcessors. I could have used MailboxProcessors directly, but they are too flexible for my goal. Even writing a simple one of these guys requires using async and recursion in a specific pattern, which I always forget. Also, there are multiple ways to do a Post. I wanted things to be as simple as possible and was willing to sacrifice flexibility for that.
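To make the pain point concrete, here is a minimal sketch (my guess at the shape, not the framework's actual code) of how such a spawnWorker wrapper and the Post operator might look on top of MailboxProcessor, hiding the async-plus-recursion pattern from the caller:

```fsharp
// Hypothetical sketch, not the framework's real implementation:
// wrap MailboxProcessor so callers never see async or recursion.
let spawnWorker (f: 'a -> unit) =
    MailboxProcessor.Start(fun inbox ->
        // the recursive receive loop that is easy to forget
        let rec loop () = async {
            let! msg = inbox.Receive()
            f msg
            return! loop () }
        loop ())

// an operator overloaded to mean Post
let (<--) (agent: MailboxProcessor<'a>) (msg: 'a) = agent.Post msg
```

With a wrapper like this, creating and using an agent collapses to two lines, at the cost of fixing one message-handling shape for everybody.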
Notice that there are serious efforts in this space (such as Axum). This is not one of them. It's just a simple thing I enjoy working on between one meeting and the next.
Workers and ParallelWorkers
The two major primitives are spawning an agent and posting a message.
let echo = spawnWorker (fun msg -> printfn "%s" msg)
echo <-- "Hello guys!"
There are two kinds of agents in my system. A worker is an agent that doesn't keep any state between consecutive messages. It is a stateless guy. Notice that the lambda you pass to create the agent is strongly typed (i.e. msg is of type string). Also notice that I overloaded the <-- operator to mean Post.
Given that a worker is stateless, you can create a whole bunch of them and, when a message is posted, route it to one of them transparently.
let parallelEcho = spawnParallelWorker (fun s -> printfn "%s" s) 10
parallelEcho <-- "Hello guys!"
For example, in the above code, 10 workers are created and, when a message is posted, it gets routed to one of them (using a super duper innovative dispatching algorithm I'll describe in the implementation part). This parallelWorker guy is not really needed; you could easily build it out of the other primitives, but it is kind of cute.
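Since the real dispatching algorithm waits for the implementation part, here is one plausible shape for spawnParallelWorker — a hypothetical round-robin sketch, not necessarily what the framework does:

```fsharp
// Hypothetical sketch: n MailboxProcessors plus a dispatcher that
// forwards messages round-robin. The framework's actual algorithm
// is described in the implementation part.
let spawnParallelWorker (f: 'a -> unit) n =
    let workers = Array.init n (fun _ ->
        MailboxProcessor.Start(fun inbox -> async {
            while true do
                let! msg = inbox.Receive()
                f msg }))
    let next = ref 0
    // the dispatcher is itself an agent, so posting stays non-blocking
    MailboxProcessor.Start(fun inbox -> async {
        while true do
            let! msg = inbox.Receive()
            workers.[!next % n].Post msg
            next := !next + 1 })
```

Because the dispatcher processes one message at a time, the ref cell needs no locking; only the user-supplied f runs concurrently across workers.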
To show the difference between a worker and a parallelWorker, consider this:
let tprint s =
    printfn "%s running on thread %i" s Thread.CurrentThread.ManagedThreadId

let echo1 = spawnWorker (fun s -> tprint s)
let parallelEcho1 = spawnParallelWorker (fun s -> tprint s) 10

let messages = ["a";"b";"c";"d";"e";"f";"g";"h";"i";"l";"m";"n";"o";"p";"q";"r";"s";"t"]

messages |> Seq.iter (fun msg -> echo1 <-- msg)
messages |> Seq.iter (fun msg -> parallelEcho1 <-- msg)
The result of the echo1 iteration is:
a running on thread 11
b running on thread 11
c running on thread 11
d running on thread 11
While the result of the parallelEcho1 iteration is:
a running on thread 13
c running on thread 14
b running on thread 12
o running on thread 14
m running on thread 13
Notice how the latter executes on multiple threads (and not in posting order). Next time I'll talk about agents, control messages and error management.