Press "Enter" to skip to content

Contention, Coherency and Software Engineering

Today I want to tell you a few words about how you can describe your system through mathematical equations, at least to some degree. You will get familiar with terms like Contention, Coherency, and Coherency Delay. Additionally, I want to show you the laws, and their mathematical equations, that can help you calculate the impact of these three mechanics on your application.

This article is more focused on overall system design and architecture than anything I have written up to this point, so consider yourself warned. In it, I will try to answer one question: can we scale systems indefinitely?

However, to fully understand today’s topic, we will start by answering a simpler question: what is scalability?

What is Scalability?

In my opinion, one of the best definitions of scalability can be found on Wikipedia (yeah, you did not expect that, for sure), and it goes as follows:

A property of a system to handle a growing amount of work by adding more resources (…)

From such a statement, we can expect that if our system is scalable, we can add more and more resources (threads, CPUs, nodes) and potentially handle any increase in incoming traffic.

Unfortunately, this holds true only in an ideal world. In our disappointing reality, we have to take into consideration the three concepts mentioned in the introduction.

Before I start explaining why it works this way, I want to give you more details about linear scalability.

Linear Scalability

In general, it is the best-case scenario that we want to achieve. Here, adding resources to the system always results in a proportional performance increase. What is even more important, we can do it INDEFINITELY without suffering any negative effects. The mathematical equation that we can use to calculate the impact of linear scalability is quite small and simple.

Linear Scalability Equation

S(N) = N

S → overall improvement in execution time of the whole task

N → improved resources (Threads, CPU, Nodes)

Now we can see that the overall improvement in execution time depends only on the added resources, so the more we add, the bigger the performance gains we achieve. Unfortunately, as I stated above, such a case is almost never possible in more complex systems.
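
For example, under linear scalability, tripling the resources (N = 3) makes the whole task finish exactly three times faster: S(3) = 3.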

After this brief explanation of linear scalability, we can move further and explore other topics for today, starting with Contention.

What is Contention?

Simply stated, it is competition over access to a shared resource. Almost anything can be such a resource, from low-level things like CPU cycles, through higher-level ones like database access threads, up to even higher abstractions like system sockets.

As in every competition, there can be only one winner, but, what is more significant, after the “winner” is chosen, the rest of the “players” have to wait for the “winner” to complete its task, still holding and blocking their own resources, and only then does the whole competition start from the beginning.

Again, as in any other competition, the more competitors we have, the more time it takes to pick the winner. In our software-oriented world, this means we can expect all acceptable time limits to be exceeded once we put our system under sufficient load. Meanwhile, our system will keep utilizing resources that may be needed elsewhere, which may lead to more and more failures.

The situation is more complex in the case of thread pools of any kind. There, we can end up with multiple winners, because we have many threads in the pool, and multiple losers. Additionally, the process of choosing the next winner in such cases strongly depends on the particular pool implementation.
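
To make this a bit more tangible, below is a minimal Python sketch, purely my own illustration, in which several worker threads contend for a single lock. The hold time and number of rounds are arbitrary values I picked for the demo; the point is that the average time spent waiting for the lock grows as the number of competitors grows:

import threading
import time

lock = threading.Lock()        # the shared resource every worker competes for
wait_times = []
stats_lock = threading.Lock()  # protects the shared wait_times list

def worker(hold_time=0.005, rounds=20):
    for _ in range(rounds):
        start = time.perf_counter()
        with lock:                       # only one "winner" holds the resource at a time
            waited = time.perf_counter() - start
            time.sleep(hold_time)        # simulate work done while holding it
        with stats_lock:
            wait_times.append(waited)

for n in (2, 8):
    wait_times.clear()
    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{n} threads -> average wait {sum(wait_times) / len(wait_times):.4f} s")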

Now that you know what Contention is and how it may affect your application, we can move on to the first of the laws I want to describe today.

What Is Amdahl’s Law?

Amdahl’s Law was formulated by, surprisingly, Gene Amdahl and was presented in 1967. It describes the theoretical improvement in the latency of task execution, under a fixed workload, in a system whose resources are improved; for example, in the case of multithreading, improving resources means adding more threads.

In short and more understandable terms, it can be used to calculate the maximum improvement gained from adding more resources to our system. What is crucial here is that only the code that uses a particular resource benefits from this resource improvement, which is quite a logical conclusion, so we are limited by the execution time of non-benefiting parts.

For example:

If we add more threads to our system, only the parts that are multithreaded benefit from it, and we are limited by the execution time of the single-threaded part.

Amdahl’s Law Equation

S(N) = N / (1 + σ(N − 1))

S → overall improvement in execution time of the whole task

σ → proportion of execution time that the part not benefiting from improved resources originally took

N → improved resources (Threads, CPU, Nodes)

When the parameter σ is set to 0, Amdahl’s Law reduces to the linear scalability equation.

You can replace σ with (1 − p) and, after some transformations, arrive at another, equivalent form of the equation:

S(N) = 1 / ((1 − p) + p/N)

p → proportion of execution time that the part benefiting from improved resources originally took

Similarly to σ = 0, for p = 1 Amdahl’s Law reduces to the linear scalability equation. Both cases describe the maximum potential benefit, achieved when the whole application benefits from the improved resources.

Keep in mind that both equations are equivalent and return the same result for the same data. I will include calculations in the examples.
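
If you want to check this equivalence yourself, here is a short Python sketch, just an illustration using the numbers from the example below, that evaluates both forms of Amdahl’s Law and prints identical results:

def amdahl_sigma(n, sigma):
    # form based on sigma, the serial (non-benefiting) proportion
    return n / (1 + sigma * (n - 1))

def amdahl_p(n, p):
    # form based on p = 1 - sigma, the parallelizable (benefiting) proportion
    return 1 / ((1 - p) + p / n)

sigma = 4 / 9        # the 4-hour serial part of a 9-hour job
p = 1 - sigma        # the 5-hour parallelizable part
for n in (2, 3, 10):
    print(n, round(amdahl_sigma(n, sigma), 3), round(amdahl_p(n, p), 3))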

Example:

We have a job that runs for 9 hours. We assume that a 5-hour-long part of it can be parallelized and improved, and that after the improvement this part will run three times as fast as before (N = 3).

Now we know:

N = 3

p = 5/9 ≈ 0.56

σ = 4/9 ≈ 0.44

so

S(3) = 3 / (1 + (4/9)(3 − 1)) = 3 / (17/9) = 27/17 ≈ 1.59

S(3) = 1 / ((1 − 5/9) + (5/9)/3) = 1 / (17/27) = 27/17 ≈ 1.59

As you can see, in such a case we will be able to complete our task about 1.59 times faster than before, which is roughly a 59% improvement (the job finishes in approximately 5 hours and 40 minutes instead of 9).

There are 2 notable things here:

  1. S can be at most 3, in the case when the whole program benefits from the resource improvement (σ = 0, i.e. p = 1).
  2. The bigger the benefiting part, the bigger the improvement.

Above you can see a plotted graph for N from 0 to 10 and σ = 0.44 for Amdahl’s Law.

Amdahl’s Law and Contention

Contention limits the parallelization of our system by making resources less and less available to each requestor. The more contended our code base is, the bigger σ becomes, and the smaller the overall impact of our improvements will be.

After explaining Contention, its impact on our software, and how we can calculate it, we can move to the second concept for today.

What is Consistency?

Consistency is about the ordering of reads and writes: ensuring that both are observed in an order that makes sense. It is one of the most important topics in overall system design. That is why it has its representation in the CAP theorem under the letter C. The two most important consistency models are eventual consistency and strong consistency. There are other consistency models as well, but in my opinion they are less important.

What is Coherency?

It is a term strongly related to consistency, and it is about ensuring that all interested parties see a particular piece of data in the same state at a given moment in time. It is a crucial concept for all systems that share some part of their state between two or more nodes.

What is Coherency Delay?

If coherency is about ensuring that any request to a specific part of the state returns the same value, Coherency Delay can be viewed as the time required to reach system-wide coherence.

One thing worth mentioning here is that adding more nodes to the system increases the Coherency Delay.

Coherency Delay and Gunther’s Law

Gunther’s Law, or the Universal Scalability Law (USL), was formulated by Neil J. Gunther in 1993. It builds on top of Amdahl’s Law but additionally accounts for the overhead of interprocess communication.

In plain English, it allows us to calculate the scalability of our system while taking into account Concurrency, Contention (Amdahl’s Law), and Coherency Delay, which makes the final results more in line with real life. In fact, thanks to introducing Coherency Delay into the equation, we are able to see the negative effects of our attempts to scale the system up.

Gunther’s Law Equation

S(N) = N / (1 + σ(N − 1) + κN(N − 1))

S → overall improvement in execution time of the whole task

σ → proportion of execution time that the part not benefiting from improved resources originally took

κ → proportion of execution time that the system spends on achieving coherence, in other words, the Coherency Delay; there is no way to derive it analytically, so you have to measure it, or just experiment by plugging different values into the equation to see how well your system can scale

N → improved resources (Threads, CPU, Nodes)

When the parameter describing Coherency Delay (κ) is set to 0, Gunther’s Law reduces to the Amdahl’s Law equation.
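
As a quick illustration, and only as a sketch, the whole law fits in one small Python function; note how it collapses to the Amdahl form when kappa is 0:

def usl(n, sigma, kappa):
    # Universal Scalability Law: contention (sigma) plus coherency delay (kappa)
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

print(round(usl(3, 4 / 9, 0.0), 3))    # 1.588, the Amdahl example (kappa = 0)
print(round(usl(3, 4 / 9, 0.07), 3))   # 1.299, the Gunther example below
print(round(usl(10, 4 / 9, 0.07), 3))  # 0.885, scaling past the optimum hurts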

Example:

The data is the same as for Amdahl’s Law, but here, additionally, our system spends 7% of its time on bringing its state in line.

N = 3

σ = 4/9 ≈ 0.44

κ = 0.07
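
Plugging these values into the equation gives:

S(3) = 3 / (1 + (4/9)(3 − 1) + 0.07 · 3 · (3 − 1)) = 3 / 2.31 ≈ 1.30

S(10) = 10 / (1 + (4/9)(10 − 1) + 0.07 · 10 · (10 − 1)) = 10 / 11.3 ≈ 0.885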

So what can we conclude from the above calculations?

  1. Our performance improvement decreased: instead of a 59% improvement (S ≈ 1.59), we now get only about 30% (S ≈ 1.30).
  2. If we keep adding resources to the system, we can see a decrease in performance instead of an increase; here, when we scale N to 10, S falls to about 0.885, roughly 0.11 below the performance of a single, unimproved resource.
  3. Coherency Delay (κ) has a much bigger impact on the overall improvement than Contention, because its term grows with the square of N, as you can see from the calculations.

Plotted Gunther’s Law

Below you can find a plotted graph of Gunther’s Law for N from 0 to 10, with σ = 0.44 and κ = 0.07. You can see that, around N = 9, we start to notice real performance degradation: the system becomes slower than it was before any improvement at all (S drops below 1).

How to Choose the Optimal N?

The answer is quite easy but requires doing some math, as do most things in this article. First of all, you can view Gunther’s Law as a function S(N), and as with any function (well, in practice only some of them), we can calculate the point N at which S(N) reaches its maximum value. Having this value gives us two very important pieces of information:

  1. We know how many resources we have to prepare before go-live.
  2. We can calculate how far we can scale the system in its current state.

And all of this is achievable with a plain old calculator or paper and pencil with no computer included (depending on how advanced your calculator is).

Optimal N equation:

N_opt = sqrt((1 − σ) / κ)

Example:

We are calculating the number of nodes needed to maximize our improvement, using the data from the Gunther’s Law example.

σ = 4/9 ≈ 0.44

κ = 0.07

N_opt = sqrt((1 − 0.44) / 0.07) = sqrt(8) ≈ 2.83

As you can see, the optimal number of nodes is about 2.83, but since we are unable to add partial nodes, I would go for 3 nodes and end up with N = 3.

To verify it, we will calculate S(4) and see how big an improvement we achieve with 4 nodes instead of 3. Keep in mind that S(3) ≈ 1.30.

S(4) = 4 / (1 + (4/9)(4 − 1) + 0.07 · 4 · (4 − 1)) = 4 / 3.17 ≈ 1.26

As you can see, S(4) is smaller (slightly but still) than S(3).
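
If you would rather let a machine do the arithmetic, the following short Python sketch (again, only an illustration) reproduces both the optimum and the verification step:

import math

def usl(n, sigma, kappa):
    # Universal Scalability Law used throughout this article
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

sigma, kappa = 4 / 9, 0.07
n_opt = math.sqrt((1 - sigma) / kappa)
print(round(n_opt, 2))                 # 2.82 with the exact sigma, 2.83 if you round sigma to 0.44
print(round(usl(3, sigma, kappa), 2))  # 1.3
print(round(usl(4, sigma, kappa), 2))  # 1.26, already past the optimum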

Answering the Final Question

Can we scale our systems indefinitely? Definitely not.

Unless your system is totally stateless (which is rare), we can only scale it up to the point described by Gunther’s Law. By adding more resources beyond that limit, you will end up decreasing your system’s performance instead of improving it.

Even in the absolute best scenario, with almost no Coherency Delay, you will end up limited by Amdahl’s Law which is still far from linear scalability. To achieve such a level of scalability, you will basically have to have no state and perfectly parallelized software with no serial choke-points.

Above, you can see plotted graphs for linear scalability (blue line), Amdahl’s Law (green line), and the Universal Scalability Law (red line) for N from 0 to 5. You can clearly see the difference between them, and especially how far the last two are from the desired (linear) scalability.

Of course, the bigger N becomes, the more visible this difference will be. From my research while writing this article, there are no general rules for the value of the κ parameter. Its real value depends only on the complexity of your system.

Conclusion

After all this math and text, I am glad that you managed to reach this paragraph. I hope that you learned something new about system design. I also hope that this knowledge will come in handy one day.

Just keep in mind that you can estimate the scalability of your system before you release it to production.

Thank you for your time.
