Why you should use Processes instead of Threads to isolate loads in Python

Key Learning

Python uses a Global Interpreter Lock (GIL) to make sure that memory shared between threads isn't corrupted. This is a design choice of the language that has its pros and cons. One of those cons is that in a multi-threaded application where at least one thread puts a heavy load on the CPU, every other thread slows down as well.

For multi-threaded Python applications that are at least somewhat time-sensitive, you should use Processes over Threads.

Experiment

I wrote a simple Python script to demonstrate this phenomenon. Let's take a look.

The core is this increment function. It takes in a Value and then sets it over and over, incrementing it each loop, until the running_flag is set to false. The final value of count_value is what is graphed later on, and is the measure of how fast things are going.
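The full listing is pasted at the end of the post; a minimal sketch consistent with the description (the exact body may differ, but count_value and running_flag are the names used above) looks roughly like this:

```python
def increment(count_value, running_flag):
    # Spin as fast as possible, bumping the shared counter once per loop.
    # Both arguments are multiprocessing.Value objects, so the same function
    # can be the target of either a Thread or a Process.
    while running_flag.value:
        count_value.value += 1
```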

The other important bit is the load function:
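Again, see the full source at the bottom for the real thing; a rough sketch with the same character (the exact growth rule for z is my own stand-in) would be:

```python
def load(running_flag):
    # Pure CPU-bound busy work. z is an arbitrary-precision int that keeps
    # growing, so every pass through the loop costs more than the last and
    # the interpreter stays busy on this task the whole time.
    z = 1
    while running_flag.value:
        z = z * 3 + 1
```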

Like increment, load is the target of a thread or process. The z variable quickly becomes large, and each iteration of the loop becomes more and more expensive to compute.

The rest of the code is just a way to have different combinations of increment and load running at the same time for varying amounts of time.
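As a sketch of that harness (reusing the increment and load stubs above; the trial length and the choice to always run load as a thread are my assumptions about the setup):

```python
import time
from threading import Thread
from multiprocessing import Process, Value

def run_trial(worker_cls, with_load, duration=2.0):
    # worker_cls is threading.Thread or multiprocessing.Process; they share a
    # constructor signature, so one harness drives both variants.
    count = Value('l', 0)    # shared counter, read back at the end
    flag = Value('b', True)  # cleared to tell the workers to stop

    workers = [worker_cls(target=increment, args=(count, flag))]
    if with_load:
        # load always runs as a thread in the main interpreter, so it only
        # competes for the GIL with increment when increment is also a thread.
        workers.append(Thread(target=load, args=(flag,)))

    for w in workers:
        w.start()
    time.sleep(duration)
    flag.value = False
    for w in workers:
        w.join()
    return count.value

if __name__ == '__main__':
    for cls in (Thread, Process):
        for with_load in (False, True):
            n = run_trial(cls, with_load)
            print(cls.__name__, 'with load' if with_load else 'no load', n)
```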

Result

The graph really tells the story. Without the load thread running, the process and thread versions of increment run at essentially the same rate. With the load thread running, increment in a thread grinds to a halt, while the process version is unaffected.

That’s all! I’ve pasted the full source below so you can try the experiment yourself.
