Updated: August 31, 2022
The final part of this article is not about improving application performance through multithreading. Instead, it examines how thread synchronization deliberately sacrifices some of that performance. This trade-off is necessary, because processing the same data from several unsynchronized threads can produce unpredictable results and make the application behave incorrectly.
If you are interested in ways of creating multi-threaded applications in .NET, we invite you to read Part 1, Part 2, and Part 3 of the article.
Why is thread synchronization essential?
Thread synchronization is essential when the same resource (a variable, an object, a data source, or a file) that is shared across the whole application is used by several threads at the same time, as in the code below.
using System;
using System.Threading;

namespace NonSyncExample
{
    class Program
    {
        static int x = 0;

        // the method that is passed to each thread
        public static void Count()
        {
            x = 1;
            for (int i = 1; i < 9; i++)
            {
                // x is expected to grow by 1 on each iteration
                Console.WriteLine("{0}: {1}", Thread.CurrentThread.Name, x);
                x++; // but five threads execute this increment, so x gets an unpredictable value
                Thread.Sleep(100);
            }
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 5; i++)
            {
                Thread th1 = new Thread(Count);
                th1.Name = "Thread " + i.ToString();
                th1.Start();
            }
            Console.ReadLine();
        }
    }
}
In this example, the variable x is shared by all five threads created from the Thread class (the Main thread does not execute the Count() method). The Count method counts x from 1 to 8, but it is invoked in five threads at once, and the threads switch during execution. As a result, the value of x becomes unpredictable.
The lock statement
The lock statement blocks other threads' access to a resource while it is being used by the current thread. It marks a block of code that can be executed by only one thread at any given moment.
using System;
using System.Threading;

namespace LockExample
{
    class Program
    {
        // door is the object that a thread locks while it works with the shared data
        // and releases when it has finished
        static object door = new object();
        static int x = 0;

        public static void Count()
        {
            lock (door)
            {
                // this section is executed by only one thread at a time
                x = 1;
                for (int i = 1; i < 9; i++)
                {
                    Console.WriteLine("{0}: {1}", Thread.CurrentThread.Name, x);
                    x++;
                    Thread.Sleep(100);
                }
            }
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 5; i++)
            {
                Thread myThread = new Thread(Count);
                myThread.Name = "Thread " + i.ToString();
                myThread.Start();
            }
            Console.ReadLine();
        }
    }
}
An object is passed to the lock statement as a parameter. It can be either a dedicated variable of the Object type or this, a reference to the current instance of the class. When execution reaches the lock statement, the object is locked and only one thread gets exclusive access to it. Once the object is released, another thread can acquire the object passed to lock and gain exclusive access in its turn.
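As an illustration, the sketch below shows both kinds of lock targets; the Counter class and its members are hypothetical and not part of the example above. Locking on a dedicated private object is usually the safer choice, because code outside the class cannot lock that object and cause unexpected contention.

class Counter
{
    // dedicated private lock object; code outside the class cannot lock it
    private readonly object door = new object();
    private int value;

    public void Increment()
    {
        // lock (this) would also work here, but any external code holding a reference
        // to this Counter instance could lock it as well and accidentally block this method
        lock (door)
        {
            value++;
        }
    }
}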
If you’re interested in more, read Microsoft Roslyn – using the compiler as a service
Monitor
Monitor objects of the System.Threading.Monitor class behave in much the same way as the lock statement. In fact, the lock statement is implemented on top of Monitor. An example of using Monitor is provided below:
using System;
using System.Threading;

namespace MonitorExample
{
    class Program
    {
        static int x = 0;
        static object door = new object();

        public static void Count()
        {
            Monitor.Enter(door);
            try
            {
                x = 1;
                for (int i = 1; i < 9; i++)
                {
                    Console.WriteLine("{0}: {1}", Thread.CurrentThread.Name, x);
                    x++;
                    Thread.Sleep(100);
                }
            }
            finally
            {
                Monitor.Exit(door);
            }
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 5; i++)
            {
                Thread myThread = new Thread(Count);
                myThread.Name = "Thread " + i.ToString();
                myThread.Start();
            }
            Console.ReadLine();
        }
    }
}
The Monitor.Enter() method takes an object as a parameter, just like the lock statement. The object is locked so that only one thread has access to it. The try block contains the code that is executed by only one thread at any given moment. The finally block releases the object passed to the monitor by calling the Monitor.Exit() method so that it becomes accessible to other threads again.
The Monitor class also provides several methods for managing the lock (a small producer/consumer sketch built on them follows the list):
- Wait() releases the lock on the object and moves the calling thread to the waiting queue, allowing the next thread from the queue to lock the object;
- Pulse() moves a thread from the waiting queue to the ready queue so that it can lock the object;
- PulseAll() moves all of the threads from the waiting queue to the ready queue, where one of the threads will be allowed to acquire the lock on the object.
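Below is that sketch; the queue, the item count, and the names (MonitorPulseExample, Producer, Consumer) are illustrative assumptions rather than part of the article's own examples. Monitor.Wait() releases the lock while the consumer waits, and Monitor.Pulse() wakes one waiting thread so it can re-acquire the lock.

using System;
using System.Collections.Generic;
using System.Threading;

namespace MonitorPulseExample
{
    class Program
    {
        static readonly object door = new object();
        static readonly Queue<int> items = new Queue<int>();

        static void Producer()
        {
            for (int i = 1; i <= 5; i++)
            {
                lock (door)
                {
                    items.Enqueue(i);
                    Monitor.Pulse(door); // wake one thread waiting on door
                }
                Thread.Sleep(100);
            }
        }

        static void Consumer()
        {
            lock (door)
            {
                for (int received = 0; received < 5; received++)
                {
                    while (items.Count == 0)
                        Monitor.Wait(door); // release the lock and move to the waiting queue
                    Console.WriteLine("Consumed {0}", items.Dequeue());
                }
            }
        }

        static void Main(string[] args)
        {
            new Thread(Consumer).Start();
            new Thread(Producer).Start();
            Console.ReadLine();
        }
    }
}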
If you’re interested in more, read .NET Core Framework Complete Review
AutoResetEvent (ManualResetEvent)
This class is a wrapper over WinAPI events. It allows threads to be synchronized by exchanging signals through an event object. The AutoResetEvent object switches between the non-signaled and signaled states and acts as both the sender and the receiver of these signals.
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Threading;
using System.Windows.Threading;

namespace MyApp
{
    public partial class MainWindow : Window
    {
        AutoResetEvent ewh;
        Thread thr;

        public void consrenew()
        {
            // Cons is a control declared in the window's XAML
            this.Dispatcher.BeginInvoke(DispatcherPriority.Normal,
                (ThreadStart)delegate() { Cons.Text = "City at night"; });
            ewh.Reset();
            ewh.WaitOne();
        }

        private void Cons_Click(object sender, RoutedEventArgs e)
        {
            ewh.Set();
        }

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            ewh = new AutoResetEvent(false);
            thr = new Thread(consrenew);
            thr.Start();
        }
    }
}
When a variable of the AutoResetEvent type is created, we can specify whether the object starts in the signaled or non-signaled state by passing true or false to the constructor.
The ewh.Set() method notifies the waiting threads that the ewh object is in the signaled state and one of them can capture it.
The ewh.WaitOne() method blocks the calling thread until the ewh object is switched to the signaled state by Set().
If the program uses several AutoResetEvent objects, the static WaitHandle.WaitAny() and WaitHandle.WaitAll() methods can be used; they take an array of wait handles as a parameter. A minimal sketch of WaitAll() is shown below.
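In this sketch the worker logic, delays, and names (WaitAllExample, done) are assumptions made only for illustration.

using System;
using System.Threading;

namespace WaitAllExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // one event per worker thread, all initially non-signaled
            AutoResetEvent[] done = new AutoResetEvent[3];
            for (int i = 0; i < done.Length; i++)
            {
                done[i] = new AutoResetEvent(false);
                int index = i;
                new Thread(() =>
                {
                    Thread.Sleep(100 * (index + 1)); // simulate some work
                    done[index].Set();               // report completion
                }).Start();
            }

            WaitHandle.WaitAll(done); // blocks until every event is signaled
            Console.WriteLine("All workers finished");
            Console.ReadLine();
        }
    }
}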
The ewh.Reset() method returns the ewh object to the non-signaled state, so threads that call WaitOne() will block again until the next Set().
In the example above, the Dispatcher is used to access the WPF user interface from another thread, and the Cons button is assigned the caption "City at night". After that assignment, the AutoResetEvent is returned to the non-signaled state and ewh.WaitOne() is called, so the thread is suspended until the Cons button is clicked. Clicking the Cons button signals the thread by means of the ewh.Set() method, and the thread continues its execution.
AutoResetEvent returns to the non-signaled state automatically as soon as a single waiting thread has been released. This is its main advantage over ManualResetEvent, where the Reset() method has to be called manually. In all other respects, AutoResetEvent and ManualResetEvent provide the same means of thread synchronization through events and signals.
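The difference can be seen in a small sketch like the one below (the names and delays are assumptions made for illustration): one Set() on a ManualResetEvent releases every waiting thread and the event stays signaled until Reset() is called, whereas an AutoResetEvent would release only one waiting thread per Set().

using System;
using System.Threading;

namespace ManualResetExample
{
    class Program
    {
        static ManualResetEvent gate = new ManualResetEvent(false); // initially non-signaled

        static void Main(string[] args)
        {
            for (int i = 0; i < 3; i++)
            {
                new Thread(() =>
                {
                    gate.WaitOne(); // every thread waits at the gate
                    Console.WriteLine("Thread released");
                }).Start();
            }

            Thread.Sleep(500);
            gate.Set(); // releases all three threads; with an AutoResetEvent only one would wake up
            Console.ReadLine();
        }
    }
}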
Mutex
The System.Threading.Mutex type is a wrapper over the WinAPI mutex object.
using System;
using System.Threading;

namespace MutexExample
{
    class Program
    {
        static Mutex mutex = new Mutex();
        static int x = 0;

        public static void Count()
        {
            mutex.WaitOne();
            x = 1;
            for (int i = 1; i < 9; i++)
            {
                Console.WriteLine("{0}: {1}", Thread.CurrentThread.Name, x);
                x++;
                Thread.Sleep(100);
            }
            mutex.ReleaseMutex();
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 5; i++)
            {
                Thread myThread = new Thread(Count);
                myThread.Name = "Thread " + i.ToString();
                myThread.Start();
            }
            Console.ReadLine();
        }
    }
}
Mutex offers two main methods for thread synchronization: mutex.WaitOne() suspends the calling thread until it acquires the mutex, and mutex.ReleaseMutex() releases the mutex so that another thread can acquire it.
However, a Mutex can be used not only for thread synchronization but also for synchronization between processes. For instance, it can be used to ensure that only one instance of an application is running.
using System;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Threading;

namespace SingleAppExample
{
    class Program
    {
        static void Main(string[] args)
        {
            bool exist;
            // get the GUID of the executing assembly to use as the mutex name
            string guid = Marshal.GetTypeLibGuidForAssembly(Assembly.GetExecutingAssembly()).ToString();
            Mutex mutex = new Mutex(true, guid, out exist);
            if (exist)
            {
                Console.WriteLine("App is working");
            }
            else
            {
                Console.WriteLine("App is already working. And this copy will be closed now.");
                Thread.Sleep(5000);
                return;
            }
            Console.ReadLine();
        }
    }
}
An overloaded constructor is used when the Mutex object is created.
The first parameter specifies whether the calling thread should receive initial ownership of the mutex.
The second parameter is the name of the mutex; here it receives the GUID of the executing .NET assembly as a unique identifier.
The third, out parameter of the bool type returns true if the named mutex did not exist before and was created for the calling thread. This happens only for the first copy of the application; when a second copy is run, it is closed after 5 seconds.
Semaphore
The System.Threading.Semaphore type is a wrapper over the WinAPI semaphore object. A semaphore limits access to a shared resource to a certain number of threads at a time. In the example below, the semaphore field sem is declared static, so a single semaphore is shared by all Reader instances.
using System;
using System.Threading;

namespace SemaphoreExample
{
    class Reader
    {
        static Semaphore sem = new Semaphore(5, 5);
        Thread thr;
        int n = 5; // how many times this reader visits the library

        public Reader(int i)
        {
            thr = new Thread(Read);
            thr.Name = "Reader " + i.ToString();
            thr.Start();
        }

        public void Read()
        {
            while (n > 0)
            {
                sem.WaitOne();
                Console.WriteLine("{0} enters the library", Thread.CurrentThread.Name);
                Console.WriteLine("{0} reads", Thread.CurrentThread.Name);
                Thread.Sleep(500);
                Console.WriteLine("{0} leaves the library", Thread.CurrentThread.Name);
                sem.Release();
                n--;
                Thread.Sleep(500);
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            for (int i = 1; i < 10; i++)
            {
                Reader r = new Reader(i);
            }
            Console.ReadLine();
        }
    }
}
Two parameters are passed to the semaphore constructor:
- The first parameter specifies how many threads can enter the semaphore immediately after its creation.
- The second parameter specifies the maximum number of threads that the semaphore allows in at the same time.
The example models the behavior of readers in a library. Each Reader may visit the library no more than five times, and no more than five readers can be inside the library (guarded by sem) at the same time. After a while, when one reader leaves the library (sem.Release()), another one takes his place. No matter how many threads are created in the Main method, the Read() method guarded by the semaphore sem is executed by no more than five threads at a time.
How synchronization affects application performance
It is better to use synchronization only when it is truly necessary. Thread synchronization in multi-threaded applications often degrades performance, because only a limited number of threads, and frequently only one, can execute a synchronized section at any given moment.
Using the lock statement or Monitor is usually the most efficient option, although not always. The lock statement, Monitor, and Mutex take similar parameters and solve the same problem, but a Mutex is more flexible than a critical section: it can be named and shared between processes. On the other hand, a Mutex is a kernel object, so acquiring and releasing it is considerably slower than entering a critical section with lock or Monitor.
Critical sections are simple and efficient when there are few competing threads. However, as the number of competing threads grows, the number of lock entry and exit cycles grows as well.
It is more appropriate to use a semaphore when the number of competing threads is large: the semaphore limits the number of competing threads without changing the program model.
The event-based model of synchronization (AutoResetEvent and ManualResetEvent) also reduces application performance because of the blocking involved, but the overhead is minor when the synchronized section of code is chosen correctly.
In terms of performance, it is a bad idea to protect a large object with one global lock. Any lock consumes significant system resources, not only because threads often end up executing one at a time, but also because acquiring and releasing the lock (or mutex) itself takes time. It is therefore recommended to use several separate locks for the different resources that have to be processed by a single thread (or by a limited number of threads) at a time. Keep in mind that locking and unlocking a resource too frequently also hurts application performance.
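A minimal sketch of this recommendation is shown below; the Storage class, its fields, and its method names are hypothetical and only illustrate the idea of giving each independent resource its own lock object.

using System.Collections.Generic;

class Storage
{
    // a separate lock object for each independent resource
    private readonly object logLock = new object();
    private readonly object cacheLock = new object();

    private readonly List<string> log = new List<string>();
    private readonly Dictionary<string, int> cache = new Dictionary<string, int>();

    public void AddLogEntry(string entry)
    {
        lock (logLock) // a thread writing to the log does not block threads using the cache
        {
            log.Add(entry);
        }
    }

    public void UpdateCache(string key, int value)
    {
        lock (cacheLock) // cache access is serialized independently of the log
        {
            cache[key] = value;
        }
    }
}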
Summary
The article, which consists of four parts, considers the principal ways of thread creation and management in .NET.
The thread pool (see Part 1) automates the creation, destruction, and reuse of threads.
Low-level thread management with objects of the Thread type (see Part 2) allows you to create exactly as many threads as needed, give each thread a name, and set thread priorities.
Asynchronous delegates (see Part 2) run on the thread pool, so they inherit all the characteristics of the pool, with its advantages and disadvantages.
BackgroundWorker (see Part 2) also uses the thread pool to perform tasks asynchronously. It is intended for running tasks in the background, has access to the visual interface of the application without the use of Invoke and Dispatcher, and supports inheriting a user class from it.
The TPL and PLINQ libraries (see Part 3) are used to parallelize individual code fragments or queries. They fully automate the creation and destruction of threads.
The thread synchronization methods (Part 4) are essential when application resources are shared by several threads and unpredictable results must be avoided.
Each of the methods described above is effective only in certain situations. Despite their drawbacks, modern computers are built to process data in parallel on several processor cores, and such processing would be impossible without multi-threaded applications.