libmuth tutorial

Helmut Grohne

Abstract

Libmuth is a library that implements userspace threading in C++. Its goal is to let programs use more than one CPU at once while still allowing the programmer to write sequential code where the problem is sequential. This has fundamental consequences for how the library is used. Using it always requires proper locking, because more than one CPU may be in use at any time. Implementing stateful protocols like HTTP becomes easy: one simply writes code that reads a request, writes an answer and closes the connection. The function looks sequential, but many such functions run in parallel. Unlike in other threading environments, a large number of running threads is nothing to worry about, because neither creating nor switching threads is slow or memory-hungry. In particular, the programmer does not have to register many event handlers, since each event can be expressed as a blocking operation in a single thread.


Table of Contents

Microthreads
Runners
Channels
Schedulers

Microthreads

We have talked about threads, but what are microthreads? Microthreads are functions that run in parallel. A microthread does not always run; at times it is delayed, meaning its execution is frozen and its state is kept so that it can be continued later. Some people might call them coroutines. What actually happens when the program runs is that one microthread is run or continued, then delayed, so that another microthread can run or continue. This also means that microthreads need points where they can be interrupted, and the programmer is responsible for providing them. A microthread can voluntarily suspend itself by calling Microthread::delayme. It will be continued after being rescheduled with Microthread::scheduleme. This is used to put a microthread to sleep instead of blocking the whole thread with blocking system calls: once the blocking operation is known not to block, the microthread is resumed and the operation is carried out. When a microthread wants to resume another microthread and suspend itself, it can pass control directly to the other microthread using Microthread::swapto. Channels use this, for instance. The following example shows a simple microthread:

class ExampleThread : public Microthread {
	private:
		ExampleContainer largedata;
		int exampleparameter;
	public:
		ExampleThread(int p) : exampleparameter(p) {}
		void run();
};

void ExampleThread::run() {
	/*
	 * Schedule this microthread for continuation, so that when it is
	 * delayed below it will be resumed again.
	 */
	this->scheduleme();
	/* Voluntarily suspend; execution resumes here after rescheduling. */
	this->delayme();
	/* This will delete this Microthread object. */
	return;
}
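
The example above only uses delayme and scheduleme. The following sketch shows how handing control directly to another microthread with Microthread::swapto might look; the exact argument list of swapto is an assumption here, only the method name is established above.

class PingThread : public Microthread {
	private:
		/* The microthread to hand control to. */
		Microthread *partner;
	public:
		PingThread(Microthread *p) : partner(p) {}
		void run();
};

void PingThread::run() {
	/*
	 * Suspend this microthread and continue the partner directly
	 * instead of going through the queue of pending microthreads
	 * (the parameter of swapto is assumed, not taken from above).
	 */
	this->swapto(partner);
	return;
}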

Runners

On a single-CPU system there is one runner. It simply starts executing the first microthread until that microthread passes control back to the runner. While running, other microthreads may be created and put into the queue of pending microthreads. After gaining control again, the runner executes the next microthread from the queue. Two microthreads might alternately add each other to the queue and thereby simulate running in parallel. On a multi-CPU system, however, there is a runner for each CPU, and each runner may execute a microthread. When there are more microthreads than runners, some microthreads wait in the queue; when there are more runners than microthreads, some runners simply block. Running multiple runners requires an external threading library like pthread. The runner threads are created only once, however, so creating a large number of microthreads does not hurt. To boot the system, simply call runnerLoop with n being the number of runners, or omit n. When compiling without threads, any given n must be 1. The number of CPUs can be autodetected with detectCPUs, which returns 1 when compiled without threads.
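
The following is a minimal boot sketch tying this together with the ExampleThread from above. The names runnerLoop and detectCPUs are used as just described; that a microthread may already be scheduled before runnerLoop is entered is an assumption.

int main() {
	/*
	 * Queue an initial microthread; it deletes itself when its run
	 * method returns (assuming scheduleme may be called before the
	 * runners are started).
	 */
	(new ExampleThread(42))->scheduleme();
	/* One runner per CPU; detectCPUs returns 1 without thread support. */
	runnerLoop(detectCPUs());
	return 0;
}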

Channels

One way to do inter-microthread communication is via channels. Channels work a bit like FIFOs: one microthread can send an object to a channel and another microthread can receive the object from the channel. Objects passed through channels are called messages. A microthread that wants to receive messages from any channel needs to create a ChannelManager object. A Channel object is constructed with a given manager object, and the created channel is owned by the microthread owning the manager. Any microthread can invoke Channel::send to send a message to the microthread owning the channel. The send method will not block, but reschedules the microthread. A microthread owning a channel can receive a message from its channel by invoking Channel::receive. This operation may instantly return a previously sent message, but it may also delay the running microthread and wake it when another microthread sends a message. To receive messages from more than one channel, a ChannelGroup can be created; ChannelGroup::select blocks until it can determine a channel to receive a message from.
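
The sketch below illustrates this flow. Only the class and method names are taken from the description above; the constructor signatures and the message type (here assumed to be void *) are assumptions.

class ReceiverThread : public Microthread {
	public:
		/* Owning a manager lets this microthread own channels. */
		ChannelManager manager;
		/* Assumed: a channel is constructed from its manager. */
		Channel channel;
		ReceiverThread() : channel(manager) {}
		void run();
};

void ReceiverThread::run() {
	/* May return a queued message at once or delay this microthread. */
	void *message = channel.receive();
	/* ... handle the message ... */
}

class SenderThread : public Microthread {
	private:
		Channel *target;
	public:
		SenderThread(Channel *c) : target(c) {}
		void run();
};

void SenderThread::run() {
	/* Does not block; it merely reschedules the microthread. */
	target->send(/* some message */ 0);
}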

Schedulers

Schedulers do not schedule CPU usage as the name might suggest. They schedule blocking operations such as reads from or writes to file descriptors, and they can also delay a microthread for some time, similar to the sleep function. Schedulers are in fact microthreads themselves. Since they carry out the blocking operations, the programmer has to provide a real thread for each scheduler. This implies that there may be more than one active scheduler, so microthreads can actually choose which scheduler to use. All schedulers share the common base class BaseScheduler. This class provides methods like BaseScheduler::read, BaseScheduler::write, BaseScheduler::sleep, BaseScheduler::accept and BaseScheduler::connect, which basically work like the corresponding functions from the C library.
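
As an illustration, the following sketch uses a scheduler for blocking I/O inside a microthread. The method names come from the list above; how a scheduler instance is obtained and the exact parameter lists are assumptions modelled on the C library functions.

class EchoThread : public Microthread {
	private:
		/* Which scheduler to use is this microthread's choice. */
		BaseScheduler *scheduler;
		int fd;
	public:
		EchoThread(BaseScheduler *s, int filedes) : scheduler(s), fd(filedes) {}
		void run();
};

void EchoThread::run() {
	char buffer[512];
	/* Looks like a blocking read, but only this microthread waits. */
	ssize_t n = scheduler->read(fd, buffer, sizeof(buffer));
	if (n > 0)
		scheduler->write(fd, buffer, n);
	/* Delay this microthread for about a second, similar to sleep(3). */
	scheduler->sleep(1);
	return;
}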