Forking and IPC in Ruby, Part I



I like to think of forking as the underdog of the concurrency world. In fact, at this point, many programmers have probably never even heard of it. The term “multithreaded” has almost become synonymous with “concurrent” or “parallel.”

The fork() system call creates a “copy” of the current process. For our purposes in Ruby, it enables arbitrary code to run asynchronously. Since that code will be scheduled at the operating system level, it will run concurrently just like any other process.

The goal of this article is to enable the reader to think in terms of processes rather than threads. There are many advantages to thinking in this way, as we will see.

The source code for this tutorial is available on GitHub. When in doubt, try running the code from there.

Note: Since fork() is a POSIX system call, the code in this tutorial won’t do much if you are running Ruby on Windows. I recommend VirtualBox if you want to tinker with Linux, BSD, or another UNIX-like operating system.

The Global Interpreter Lock

Spend more than a few minutes reading about concurrency in Ruby, and you will discover a topic of much debate: the Global Interpreter Lock. Frankly, the GIL has a worse reputation than it deserves, thanks to a great deal of misinformation passed around in the Ruby community. To understand what the GIL actually takes from you, it’s necessary to understand the difference between concurrency and parallelism:

  • Concurrency – Two tasks execute in overlapping time periods, switching between each other quickly enough that they feel simultaneous, e.g. listening to music while editing a document.

  • Parallelism – Two tasks are executed on separate processor cores simultaneously

As of this writing, CRuby’s GIL allows only one thread to execute Ruby code at a time within a single virtual machine. If there is more than one thread in the VM, they take turns holding the lock. As such, threads can execute concurrently but not in parallel. That is not to say parallelism is impossible in Ruby, though, as we will see in a moment.


A Quick Experiment

You can run this code from the repository in CRuby (1.9.3, 2.0.0, etc.) to see the difference between using forks and threads.

# thread_fork_comparison.rb
# runs 4 tasks in 4 threads and 4 forks and reports the times for each

def time_forks(num_forks)
  beginning = Time.now
  num_forks.times do 
    fork do
      yield
    end
  end

  Process.waitall
  return Time.now - beginning
end

def time_threads(num_threads)
  beginning = Time.now
  num_threads.times do 
    Thread.new do
      yield
    end
  end

  Thread.list.each do |t|
    t.join if t != Thread.current
  end
  return Time.now - beginning
end

def calculate(cycles)
  x = 0
  cycles.times do
    x += 1
  end
end

cycles = 10000000

threaded_seconds = time_threads(4) { calculate(cycles) }
puts "Threading finished in #{threaded_seconds} seconds"

forked_seconds = time_forks(4) { calculate(cycles) }
puts "Forking finished in #{forked_seconds} seconds"

output:

Threading finished in 1.670291209 seconds
Forking finished in 0.419124546 seconds

Using forks took roughly a quarter of the time that threads needed. That’s a significant performance gain.

Note: I used a quad-core processor to get these results.

Forking vs. Threading

Despite restricting thread-level parallelism, the CRuby core team has decided to keep the GIL for now, and they have a good reason: writing multithreaded code that performs correctly is “easier” with a global lock. In addition, whenever an interpreter has a GIL, features tend to grow around the guarantees that it provides, making it difficult to remove down the road.

Unlike “threadsafe,” you will rarely encounter the word “forksafe.” Threads share the same memory, so they can operate on data simultaneously, potentially corrupting it. On the other hand, forked processes are given a new virtual memory space, so any changes to data in the fork will occur in the new space, rather than the original. This concept is known as process isolation.
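
A quick sketch can make the distinction concrete: a change made in a thread is visible to the rest of the program, while the same change made in a fork stays inside the child. The filename below is mine, not part of the tutorial’s repository.

# isolation_demo.rb (hypothetical filename, not in the tutorial's repository)
# A thread mutates shared memory; a fork only mutates its own copy

data = { count: 0 }

Thread.new { data[:count] += 1 }.join
puts "after thread: #{data[:count]}"   # => 1 (threads share the parent's memory)

fork do
  data[:count] += 1                    # changes only the child's copy
end
Process.waitall

puts "after fork:   #{data[:count]}"   # => 1 (the parent's copy is untouched)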

A simple comparison looks like this:

  • threading:
    • global data is easily corrupted through parallelism
    • need to selectively lock data to prevent corruption
    • cheaper than forking
    • threads are killed when the program exits
  • forking:
    • more difficult to corrupt data through parallelism
    • need to selectively share data to enable cooperation
    • somewhat expensive, especially if Copy-on-Write is not utilized
    • child processes are not killed when the main process exits normally

With the preliminaries aside, let’s see how forking in Ruby works.

Avoiding Zombies

Creating a fork in Ruby is easy. Kernel#fork can take a block and will execute the code of that block in another process. Since a fork inherits the terminal from its parent, its output can be seen in the same terminal.

# basic_fork.rb
# A fork inherits the terminal of its parent process

fork do
  sleep 2
  puts "Fork: finished"
end

puts "Main: finished"

One of the biggest dangers of forking is losing control of your worker processes. Unlike threads, child processes will not be killed when the main process exits normally. While this can be a good thing in some situations, it’s easy to build up a collection of zombie processes that must be killed manually. If you are going to be creating a lot of processes, you’ll want to keep a process manager handy. I personally like htop.

# zombie_process.rb
# creates a process that won't end on its own. 
# Terminate it in the console with: 
#   $ kill [whatever pid the zombie has]

fork do
  puts "Zombie: *comes out of grave*"
  puts "Zombie: rahhh...kill me with: $ kill #{$$}"
  loop do
    puts "Zombie (#{$$}): brains..."
    sleep 1
  end
end

puts "Main (#{$$}): finished"

The global variable $$ holds the pid of the current process. If you would like the parent process to know the pid of any child it creates, you can use the value returned by fork.

# pid.rb
# Shows different ways of getting pids for parent and child processes

fork_pid = fork do
  puts "child: my pid is #{$$}"
  puts "child: my parent's pid is #{Process.ppid}"
end

puts "parent: my pid is #{Process.pid}"
puts "parent: my child's pid is #{fork_pid}"

Sometimes child processes run in an infinite loop. You can store the pids returned by fork calls and use Process.kill to terminate the child processes from code.

# process_kill.rb
# Shows how to terminate processes programmatically

puts "initializing worker processes..."

pids = 5.times.map do |i|
  fork do
    trap("TERM") do
      puts "Worker#{i}: kill signal received...shutting down"
      exit
    end

    loop do
      puts "Worker#{i}: *crunches numbers*"
      sleep rand(1..3)
    end
  end
end

sleep 5
puts "killing worker processes..."
pids.each { |pid| Process.kill(:TERM, pid) }

One way to prevent zombie processes from piling up is to wait on child processes to finish. This way, if a child hangs, it will be obvious in the terminal. To do this, just add a call to Process.waitall at the point where you would like the program to block until every fork finishes its work. If you know the pid of the process you would like to wait on, you can use Process.wait.

# process_wait.rb
# Sometimes it's useful to wait until all processes have finished

fork do
  3.times do
    puts "Zombie: brains..."
    sleep 1
  end
  puts "Zombie: blehhh *dies*"
end

Process.waitall

puts "Main: finished"

Earlier I said that, unlike threads, forks will not terminate on their own when the main process finishes. That is true when the main process exits normally. If it receives the interrupt signal (SIGINT), as with Ctrl-C, the signal reaches all of its children as well, and they will be interrupted too.

So, if you use Process#waitall, you have an opportunity to interrupt every process with a quick ctrl-c if any of them hang.

# shutup_kids.rb
# If a process receives an interrupt signal, it will pass it on to its children
# Send SIGINT with ctrl-c to make the kids shut up

kids = %w{Bubba Montana}

kids.each do |kid|
  fork do
    loop do
      puts "#{kid}: when.will.we.get.there."
      sleep 1
    end
  end
end

Process.waitall

Sometimes terminating processes outright like this isn’t desirable. Thankfully, you can shut down a process gracefully when it receives a signal by using Kernel#trap.

If you use a trap for special behavior, make sure the child processes still exit when the signal arrives, either inside the trap handler or shortly afterwards. Otherwise the signal won’t kill the process, since its default behavior has been overridden. If you do find yourself stuck with an unkillable child, send a different signal: for example, if the trap handles SIGTERM, send SIGKILL or SIGINT. gnu.org has a great page on signals.

# i_said_shutup_kids.rb
# Signal responses can be customized using Kernel#trap or Signal#trap
# Send interrupt with ctrl-c to shutup

kids = %w{Bubba Montana}

kids.each do |kid|
  fork do
    @whiny = true
    trap("INT") do
      puts "#{kid}: Ugh! Shutup signal RECEIVED, dad!"
      @whiny = false
    end

    loop do
      puts "#{kid}: when.will.we.get...there"
      sleep rand(1..2)
      break unless @whiny
    end

  end
end

Process.waitall

Shared Memory

When a fork is performed, objects created beforehand will be available to the new process.

# shared_memory.rb
# Forks have access to objects created before the fork

data = [1,2,3]

fork do
  puts "data in child: #{data}"
end

puts "data in parent: #{data}"

output:

data in parent: [1, 2, 3]
data in child: [1, 2, 3]

The fork can see data because forks inherit state from their parent processes. This includes variables and open file descriptors. Initially, that information is shared rather than copied. Once either process writes to it, the affected memory is copied so that each process has its own version. If changes made after the fork were visible across processes, there would be no process isolation.

# copy_on_write.rb
# Changes to memory after the fork do not cross the process barrier

data = [1,2,3]

fork do
  sleep 1
  puts "data in child: #{data}"
end

data[0] = "a"
puts "data in parent: #{data}"

Process.waitall

output:

data in parent: ["a", 2, 3]
data in child: [1, 2, 3]

The sharing of process data until a write occurs is known as copy-on-write (CoW) optimization. Reusing data from the parent process significantly reduces the cost of creating child processes, allowing forking to compete with threading.

This is why achieving parallelism through multiple processes is popular in Unix.

Unfortunately, although this is highly relevant to forking in general, it did Ruby little good for a long time. Prior to Ruby 2.0, the garbage collector’s mark-and-sweep algorithm wrote mark flags into the objects themselves, forcing the operating system to copy that memory anyway. Ruby Enterprise Edition fixed the problem with a copy-on-write-friendly collector, but until 2.0 introduced bitmap marking, most Ruby users were left with inefficient forking.

So, for programming purposes, forked data isn’t really “shared,” and it shouldn’t be, given the need for process isolation. Still, in many scenarios we need a way for parent and child processes to exchange changes made after the fork. A naive approach might be to take turns writing to a shared resource. Fortunately, POSIX provides solid interprocess communication mechanisms that allow processes to send data back and forth.
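
As a small preview of Part II, here is a sketch of one such mechanism, IO.pipe, which hands the child a write end and the parent a read end (the filename is mine, not from the repository):

# pipe_preview.rb (hypothetical filename)
# A pipe gives the child a write end and the parent a read end

reader, writer = IO.pipe

fork do
  reader.close                     # the child only writes
  writer.puts "hello from child #{$$}"
  writer.close
end

writer.close                       # the parent only reads
puts "parent received: #{reader.gets}"
Process.waitall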

Conclusion

At this point you should have a basic understanding of why fork() is useful and how it can be used in Ruby. In Part II, we will cover interprocess communication.

Frequently Asked Questions on Forking and IPC in Ruby

What is the main purpose of forking in Ruby?

Forking in Ruby is a technique used to create a new process. This new process, known as a child process, is an exact copy of the parent process, but it runs independently. This allows for concurrent execution of tasks, which can significantly improve the performance of your Ruby application, especially when dealing with heavy computational tasks or IO-bound tasks. Forking can also be used to isolate certain parts of your code, as the child process does not share memory with the parent process.

How does inter-process communication (IPC) work in Ruby?

Inter-process communication (IPC) in Ruby is a mechanism that allows different processes to communicate with each other. This can be achieved through various methods, such as pipes, sockets, or shared memory. For instance, you can use the IO.pipe method to create a pipe, a one-way communication channel between processes: one process writes to the pipe, and the other reads from it, allowing them to exchange information.
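
Besides IO.pipe, the standard socket library offers UNIXSocket.pair, which gives both processes a bidirectional channel. A rough sketch (the message text is my own):

# A socketpair gives both processes a bidirectional channel (sketch)
require "socket"

parent_sock, child_sock = UNIXSocket.pair

fork do
  parent_sock.close
  child_sock.puts "pong"           # the child writes to its end
  child_sock.close
end

child_sock.close
puts parent_sock.gets              # the parent reads "pong"
Process.waitall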

How can I handle errors when forking processes in Ruby?

When forking processes in Ruby, it’s important to handle potential errors to prevent your application from crashing. You can do this by using exception handling techniques, such as the begin-rescue-end block. In the child process, you can wrap your code in a begin-rescue block, and in the rescue block, you can handle the error appropriately, for example, by logging the error message and terminating the child process.
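
A minimal sketch of that pattern, using a non-zero exit status so the parent can detect the failure (the error message is made up):

# Rescue inside the fork and convert the failure into an exit status (sketch)
fork do
  begin
    raise "something went wrong in the worker"   # stand-in for real work
  rescue => e
    warn "child #{$$} failed: #{e.message}"
    exit 1                                       # non-zero so the parent can notice
  end
end

Process.wait
puts "child succeeded? #{$?.success?}"           # => false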

How can I control the execution of child processes in Ruby?

In Ruby, you can control the execution of child processes using various methods provided by the Process module. For instance, you can use the Process.wait method to make the parent process wait until the child process has finished executing. You can also use the Process.kill method to send a signal to the child process, which can be used to terminate the child process or cause it to perform certain actions.
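
For instance, Process.wait2 returns both the pid and a Process::Status object, which makes it easy to check how a child exited. A minimal sketch:

# Process.wait2 returns the pid and a Process::Status for the reaped child (sketch)
pid = fork { exit 7 }

reaped_pid, status = Process.wait2(pid)
puts "child #{reaped_pid} exited with status #{status.exitstatus}"   # => 7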

How can I share data between processes in Ruby?

Sharing data between processes in Ruby can be a bit tricky, as each process has its own memory space. However, you can use IPC mechanisms to achieve this. For example, you can use pipes or sockets to send data from one process to another. Alternatively, you can use shared memory, which is a portion of memory that can be accessed by multiple processes. Ruby provides the DRb (Distributed Ruby) module, which allows you to share Ruby objects between processes.
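
A rough sketch of DRb with a forked server process (the URI, the Counter class, and the one-second sleep standing in for proper readiness handling are all my own choices):

# A forked DRb server sharing a Ruby object with its parent (sketch)
require "drb/drb"

class Counter
  def initialize
    @count = 0
  end

  def increment
    @count += 1
  end
end

uri = "druby://localhost:8787"

server_pid = fork do
  DRb.start_service(uri, Counter.new)   # expose the object to other processes
  DRb.thread.join                       # keep the child alive to serve requests
end

sleep 1                                 # crude stand-in for readiness handling
DRb.start_service                       # start a local service for the client side
counter = DRbObject.new_with_uri(uri)   # a proxy to the child's Counter
puts counter.increment                  # => 1
puts counter.increment                  # => 2

Process.kill(:TERM, server_pid)
Process.wait(server_pid)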

What are the potential issues with forking in Ruby?

While forking can be very useful, it also comes with some potential issues. One of the main issues is the consumption of system resources, as each process requires its own memory space. This can lead to high memory usage if you create a large number of processes. Another issue is the complexity of managing multiple processes and ensuring proper communication between them. It’s also important to handle potential errors in child processes to prevent them from crashing your application.

How can I optimize the performance of forked processes in Ruby?

Optimizing the performance of forked processes in Ruby can be achieved through various techniques. One way is to minimize the amount of data that needs to be shared between processes, as IPC can be expensive in terms of performance. Another way is to balance the workload between processes to ensure that all processes are utilized efficiently. You can also use techniques such as process pooling, where a fixed number of processes are created and reused, to reduce the overhead of creating and destroying processes.
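
One simple way to cap the number of concurrent forks is to reap a finished child before starting the next one once the limit is reached. This is a concurrency cap rather than a true pool of reusable workers, but it sketches the idea (the numbers are arbitrary):

# Cap the number of concurrent forks by reaping one before starting the next (sketch)
MAX_WORKERS = 4

running = []
(1..10).each do |job|
  running << fork do
    puts "worker #{$$} handling job #{job}"
    sleep 1                              # stand-in for real work
  end

  if running.size >= MAX_WORKERS
    finished = Process.wait              # block until any worker exits
    running.delete(finished)
  end
end

Process.waitall                          # reap whatever is still running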

Can I use threads instead of processes in Ruby?

Yes, you can use threads instead of processes in Ruby. Threads are lighter than processes, as they share the same memory space and do not require IPC to communicate. However, due to Ruby’s Global Interpreter Lock (GIL), threads in CRuby do not run in parallel, which can limit their performance on CPU-bound tasks. Therefore, whether to use threads or processes depends on the specific requirements of your application.

How can I debug forked processes in Ruby?

Debugging forked processes in Ruby can be challenging, as each process runs independently. However, you can use various techniques to make it easier. One way is to use logging to record the actions of each process. You can also attach a debugger to a specific process and inspect its state. Additionally, exceptions do not cross the process boundary, so use Process.wait and the child’s exit status to detect when a child process has failed.
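
Tagging every log line with the pid is often the quickest way to untangle interleaved output. A sketch using the standard Logger (the format string is my own):

# Tag every log line with the pid so interleaved output stays readable (sketch)
require "logger"

logger = Logger.new($stdout)
logger.formatter = proc do |severity, time, _progname, msg|
  "[#{time.strftime('%H:%M:%S')}] pid=#{Process.pid} #{severity}: #{msg}\n"
end

2.times do
  fork do
    logger.info "starting work"
    sleep 1
    logger.info "done"
  end
end

Process.waitall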

Can I use forking in Ruby on all operating systems?

Forking in Ruby is supported on Unix-based operating systems, such as Linux and macOS. However, it is not supported on Windows. If you need to create concurrent tasks on Windows, you can use threads or other concurrency models, such as event-driven programming or asynchronous I/O.
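
You can check at runtime whether fork is available on the current platform. A short sketch:

# Check whether fork is available before relying on it (sketch)
if Process.respond_to?(:fork)
  fork { puts "forked child #{Process.pid}" }
  Process.waitall
else
  puts "fork is not supported here; falling back to a thread"
  Thread.new { puts "thread instead" }.join
end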

Robert Qualls

Robert is a voracious reader, Ruby aficionado, and other big words. He is currently looking for interesting projects to work on and can be found at his website.
