Difference between user-level and kernel-supported threads?

Edit: The question was a little confusing, so I’m answering it two different ways.

OS-level threads vs Green Threads

For clarity, I usually say “OS-level threads” or “native threads” instead of “kernel-level threads” (which I confused with “kernel threads” in my original answer below). OS-level threads are created and managed by the OS, and most languages support them (C, recent Java, etc.). They are hard to use well because you are 100% responsible for preventing concurrency problems such as races and deadlocks. In some languages, even the native data structures (such as Hashes or Dictionaries) will break without extra locking code.
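Here’s a rough Go sketch of that locking problem (my own illustration, not from any particular library; goroutines are technically green threads, but the shared-data issue is the same for native threads). Go’s built-in map is not safe for concurrent writes, so without the mutex this program dies with “fatal error: concurrent map writes”:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	counts := map[string]int{} // Go's built-in map is NOT safe for concurrent writes
	var mu sync.Mutex          // remove the Lock/Unlock below and the program crashes
	var wg sync.WaitGroup

	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 10000; j++ {
				mu.Lock()
				counts["hits"]++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counts["hits"]) // 80000 with the lock; a crash or wrong count without it
}
```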

The opposite of an OS thread is a green thread, one that is managed by your language runtime. These threads go by various names depending on the language (goroutines in Go, fibers in Ruby, coroutines in Kotlin, etc.). They exist only inside your language, not in your OS. Because the language chooses where context switches happen (e.g. at the end of a statement), it prevents TONS of subtle race conditions (such as seeing a partially-copied structure, or needing to lock most data structures). The programmer writes what look like “blocking” calls (e.g. data = file.read()), but the language translates them into async calls to the OS and lets other green threads run while waiting for the result.
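As a hedged illustration of “the language translates it into async calls,” here’s a small Go sketch of mine (it assumes a loopback TCP socket is available). Each goroutine makes what looks like a blocking conn.Read, but the runtime parks the goroutine and waits via epoll/kqueue underneath, so a handful of OS threads can service all of them:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go func() { // slow server: accepts, waits, then replies
		for {
			c, err := ln.Accept()
			if err != nil {
				return
			}
			go func(c net.Conn) {
				time.Sleep(100 * time.Millisecond)
				c.Write([]byte("hello"))
				c.Close()
			}(c)
		}
	}()

	done := make(chan string)
	for i := 0; i < 10; i++ {
		go func(id int) {
			conn, err := net.Dial("tcp", ln.Addr().String())
			if err != nil {
				done <- fmt.Sprintf("goroutine %d: dial failed: %v", id, err)
				return
			}
			buf := make([]byte, 16)
			n, _ := conn.Read(buf) // looks blocking; the runtime parks only this goroutine
			done <- fmt.Sprintf("goroutine %d got %q", id, buf[:n])
		}(i)
	}
	for i := 0; i < 10; i++ {
		fmt.Println(<-done)
	}
}
```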

Green threads are much simpler for the programmer, but their performance varies: if you have a LOT of threads, green threads can be better for both CPU and RAM. On the other hand, most green-thread languages can’t take advantage of multiple cores (and you can’t even buy a single-core computer or phone anymore!). And one badly behaved library can stall the entire runtime by making a blocking OS call directly.
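For a rough feel of the “LOT of threads” case, this sketch of mine (the numbers are arbitrary) launches 100,000 goroutines; each costs only a few KB of stack, which is why it finishes comfortably where 100,000 OS threads would struggle:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	start := time.Now()
	for i := 0; i < 100_000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			time.Sleep(100 * time.Millisecond) // pretend to wait on I/O
		}()
	}
	wg.Wait()
	fmt.Printf("100,000 goroutines finished in %v with GOMAXPROCS=%d\n",
		time.Since(start), runtime.GOMAXPROCS(0))
}
```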

The best of both worlds is to have one OS thread per CPU, plus many green threads that are transparently moved around onto those OS threads (often called M:N scheduling). Languages like Go and Erlang do this.
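In Go this shows up as GOMAXPROCS: the scheduler multiplexes goroutines onto at most that many OS threads (one per core by default), so CPU-bound goroutines really do run in parallel. A small sketch, mine rather than anything canonical:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	fmt.Println("cores:", runtime.NumCPU(), "GOMAXPROCS:", runtime.GOMAXPROCS(0))

	var wg sync.WaitGroup
	results := make([]uint64, runtime.NumCPU())
	for i := range results {
		wg.Add(1)
		go func(i int) { // one CPU-bound goroutine per core; each writes its own slot
			defer wg.Done()
			var sum uint64
			for n := uint64(0); n < 50_000_000; n++ {
				sum += n
			}
			results[i] = sum
		}(i)
	}
	wg.Wait()
	fmt.Println("all workers done; they ran in parallel across cores")
}
```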

“system calls and other uses not available to user-level threads”

This is only half true. Yes, you can easily cause problems if you call the OS yourself (e.g. make a blocking call directly). But the language usually provides replacements, so you don’t even notice. These replacements do call the kernel, just not quite in the way you might expect.


Kernel threads vs User Threads

Edit: This is my original answer, but it is about User space threads vs Kernel-only threads, which (in hindsight) probably wasn’t the question.

User threads and Kernel threads are exactly the same. (You can look in /proc/ and see that the kernel threads are there too.)
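On Linux you can poke at this yourself. A quick, Linux-only Go sketch (a heuristic of mine, not an official interface): kernel threads have an empty /proc/&lt;pid&gt;/cmdline, which is why ps prints their names in brackets like [kswapd0]:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		pid, err := strconv.Atoi(e.Name())
		if err != nil {
			continue // not a process directory
		}
		cmdline, _ := os.ReadFile(fmt.Sprintf("/proc/%d/cmdline", pid))
		comm, _ := os.ReadFile(fmt.Sprintf("/proc/%d/comm", pid))
		kind := "user process"
		if len(cmdline) == 0 { // rough heuristic: zombies also have an empty cmdline
			kind = "kernel thread"
		}
		fmt.Printf("%6d  %-14s %s\n", pid, kind, strings.TrimSpace(string(comm)))
	}
}
```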

A User thread is one that executes user-space code. But it can call into kernel space at any time. It’s still considered a “User” thread, even though it’s executing kernel code at elevated security levels.

A Kernel thread is one that only runs kernel code and isn’t associated with a user-space process. These are like “UNIX daemons”, except they are kernel-only daemons. So you could say that the kernel is a multi-threaded program. For example, there is a kernel thread for swap (kswapd on Linux). This forces all swap requests to be “serialized” into a single stream.

If a user thread needs something from the kernel (say, a page that has been swapped out), it calls into the kernel, which marks that thread as sleeping. Later, the swap thread finds the data, so it marks the user thread as runnable. Later still, the “user thread” returns from the kernel back to userland as if nothing had happened.

In fact, all threads start off in kernel space, because the clone() operation happens in kernel space. (And there’s lots of kernel accounting to do before you can ‘return’ to a new process in user space.)
