Applications that handle multiple connections can be built in one of three ways: by forking processes, by using multiple threads, or by using asynchronous I/O.
Before getting started, let's define a ‘process’ simply as an instance of an application running on your system. Forking a process means duplicating it at its current point of execution, so when the new (child) process is created it has the same state as its parent at the moment of the fork. Unlike threads, each forked process gets its own memory space, into which that state is copied, so one process cannot accidentally modify another's data. Once forked, the child process may follow its own execution path, separate from its parent's, and the two processes can run simultaneously (in parallel). Forking allows a server to handle multiple concurrent connections. For example, the Apache web server can be configured to fork processes: a main process waits for connection requests from remote machines, and when a request arrives, Apache forks a child process to handle it while the main process continues listening for other requests. Note that forking applies to Unix/Linux machines; MS Windows does not fork processes. Also note that processes are sometimes referred to as ‘tasks’.
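To make the copied-memory point concrete, here is a minimal Python sketch (Unix/Linux only, since Windows does not support fork). The child increments its own copy of a variable, and the parent's copy is unaffected:

```python
import os

counter = 0

pid = os.fork()  # Unix/Linux only: duplicate this process
if pid == 0:
    # Child process: this increments only the child's copy of counter.
    counter += 1
    os._exit(0)
else:
    # Parent process: wait for the child to finish, then check our copy.
    os.waitpid(pid, 0)
    print(counter)  # the parent's copy is still 0
```

A real forking server would do this in a loop: accept a connection, fork, let the child handle the client while the parent goes back to accepting.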
There are two other ways that servers can handle multiple connections. One is by using threads. Threads are like sub-processes, but with a major difference: all threads created by an application share the same state and memory space (remember that a forked process gets its own copy of the application state). Creating new threads therefore requires less memory than forking new processes, but because all threads share the same application data, you can run into trouble when two threads try to update the same variable at the same time (a race condition). When using threads you must synchronize access to shared data, typically with a lock, so that two threads cannot modify the same variable concurrently and corrupt it.
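As an illustration (a generic sketch, not tied to any particular server), here is how a lock keeps concurrent updates to a shared variable from conflicting in Python:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock ensures only one thread updates counter at a time.
        with lock:
            counter += 1

# Four threads all updating the same shared counter.
threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 — without the lock, updates could be lost
```

The `with lock:` block is the synchronization the paragraph above describes: each read-modify-write of `counter` happens atomically with respect to the other threads.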
Finally, a server could use asynchronous I/O. This approach uses a single process that does not create new processes or threads. Instead, the process runs an event loop that listens for connections. When a new connection arrives, the event loop adds it to a queue. The loop continually cycles through the queue to see whether any clients are requesting data; the queue may also contain other work (not related to clients, just other things the app needs to do). When the event loop finishes running a piece of work, it removes it from the queue and moves on to the next item. Asynchronous I/O has an advantage over forking because it does not use as much memory (remember that each forked process gets a copy of the application's state). Async I/O does not use threads either, so you don't have to worry about synchronizing access to the variables in your app. The drawback to asynchronous I/O is that each piece of code run from the queue blocks everything else until it completes. So if your app is in the middle of some CPU-intensive code that takes time to run, new connections will not be handled until that code finishes.
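Here is a small sketch of that drawback using Python's asyncio event loop. A coroutine that makes a blocking call (the synchronous `time.sleep`, standing in for CPU-intensive work) stalls the whole loop, so a second coroutine cannot even start until the first finishes:

```python
import asyncio
import time

log = []

async def blocking_task():
    log.append("blocking start")
    # time.sleep is synchronous: it blocks the entire event loop,
    # so no other coroutine can run until it returns.
    time.sleep(0.05)
    log.append("blocking done")

async def quick_task():
    log.append("quick start")
    await asyncio.sleep(0)  # yield to the loop
    log.append("quick done")

async def main():
    await asyncio.gather(blocking_task(), quick_task())

asyncio.run(main())
print(log)  # ['blocking start', 'blocking done', 'quick start', 'quick done']
```

Even though both tasks were scheduled together, `quick_task` could not run until `blocking_task` finished, which is exactly why long-running synchronous work holds up every connection in an async server.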