Key Takeaways
- Time complexity, expressed through Big O notation, represents the relationship between the computing time of an algorithm and the amount of input it processes. It is particularly relevant for functions such as sorting and recursive calculations, which require significant computing time.
- Efficient algorithms have lower time complexity, reducing the computing time. An example of an efficient algorithm is the binary search algorithm with a time complexity of O(log(n)). In contrast, inefficient algorithms, such as the bogosort algorithm, have high time complexity, making them less desirable for tasks requiring efficient computing.
- While time complexity is a crucial factor in algorithm efficiency, it is not the only consideration. Depending on the task at hand, other factors such as the specific requirements of the application, the size of the input data, and the availability of computational resources can also influence the choice of algorithm.
Time Complexity
Time complexity is, as mentioned above, the relation between an algorithm's computing time and the amount of input it processes. That amount is usually the size of an array or an object. Time complexity isn't particularly useful for simple functions like fetching usernames from a database, concatenating strings or encrypting passwords; it is used more for sorting functions, recursive calculations and things which generally take more computing time. This is not because we don't care about such a function's execution time, but because the difference is negligible. We don't care if it takes 10ms instead of 3ms to fetch a username. However, if we have a recursive sorting algorithm which takes 400ms and we can reduce that to 50ms, that would be an interesting thing to do. As you might guess, the lower the computing time, the more efficient the algorithm is. The question that follows is: 'how can we define time complexity in a universal way?' That's where we'll use the 'Big O notation'.
Big O notation
The Big O notation is a notation for the time complexity of an algorithm. It is a mathematical expression of an upper bound on how the computing time scales as the input grows. For example, if we double the size of an input array, by how much does the computing time increase? This might become clear with two examples:
$numbers = array(14,82,4,0,24,28);
foreach($numbers as $number)
{
    echo $number;
}
Imagine that the $numbers array is the argument of the function. We have a foreach loop running through its items. If we calculate the time that the code takes to run, what happens if we double the size of the array? We can easily see in this example that it will double the time to run: there is a linear relationship between the size of the array and the computing time. So if we write the size of the array as n, we can write the time complexity as O(n).
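If you want to see this scaling for yourself, a minimal sketch like the one below (the helper name touchAll and the dummy work inside the loop are mine, purely for illustration) times the same O(n) loop for n and 2n elements; the exact milliseconds depend on your machine, but the ratio should come out close to 2.
function touchAll($numbers)
{
    foreach($numbers as $number)
    {
        $doubled = $number * 2; //a constant amount of work per element
    }
}
$small = range(1, 1000000); //n elements
$large = range(1, 2000000); //2n elements
$start = microtime(true);
touchAll($small);
$timeSmall = microtime(true) - $start;
$start = microtime(true);
touchAll($large);
$timeLarge = microtime(true) - $start;
echo "Ratio: " . round($timeLarge / $timeSmall, 2); //roughly 2 for an O(n) loop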
Another example:
$numbers = array(14,82,4,0,24,28);
foreach($numbers as $number1)
{
    foreach($numbers as $number2)
    {
        if($number1 >= $number2)
        {
            echo $number1 . " is greater than or equal to " . $number2;
        }
        else
        {
            echo $number1 . " is smaller than " . $number2;
        }
    }
}
In this piece of code, there is a foreach loop located inside another foreach loop. Let's say 'n' is the size of $numbers. Then we loop 'n' times through 'n' items, which makes the total number of iterations n². As you might guess, we write the time complexity as O(n²).
The Big O notation expresses the scaling of the computing time as an upper bound in the limit of large inputs, so only the dominant term matters. For example:
$numbers = array(14,82,4,0,24,28);
foreach($numbers as $number1)
{
    foreach($numbers as $number2)
    {
        if($number1 >= $number2)
        {
            echo $number1 . " is greater than or equal to " . $number2;
        }
        else
        {
            echo $number1 . " is smaller than " . $number2;
        }
    }
}
foreach($numbers as $number)
{
    echo $number;
}
You might feel the urge to write that time complexity as O(n²+n). While, technically, that is not wrong, it is rather meaningless: Big O describes the behaviour in the limit of the input size going to infinity, and in that limit the term with the highest exponent of a polynomial always dominates, so O(n²+n) simplifies to O(n²). Because Big O only describes how the time scales, not the absolute time, constant factors are dropped as well. This means that, for example, you can replace O(5n) by O(n). The small sketch below makes the first point concrete.
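Here is a tiny sketch (the function name countOperations is mine) that counts the iterations the code above performs for a given n; the single loop's contribution shrinks to almost nothing as n grows, which is why it is dropped from the notation.
function countOperations($n)
{
    $nested = $n * $n; //iterations of the nested foreach loops
    $single = $n; //iterations of the single foreach loop
    $share = 100 * $single / ($nested + $single); //how much the single loop contributes
    echo "n = " . $n . ": n^2 term = " . $nested . ", n term = " . $single . " (" . round($share, 4) . "% of the total)\n";
}
countOperations(1000); //the n term is only about 0.1% of the work
countOperations(1000000); //the n term is only about 0.0001% of the work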
Efficient algorithms
Now that we know how to express time complexity, we can take a look at some examples of efficient algorithms. For the first one, I want to introduce another special notation: O(log(n)), which shows a logarithmic relationship. An example of an algorithm with this complexity is the binary search algorithm. In short: you want to find a name in an alphabetically ordered list, so you go to the middle of the list. If the name you are searching for comes before the middle entry, you go to the middle of the first half (so one quarter of the way in). You continue halving the remaining range like this until you find the right name. The time complexity of that algorithm is O(log(n)). If you were to find the name by looping through the list entry after entry, the time complexity would be O(n). While that isn't bad, O(log(n)) is many times better, so binary search can be qualified as an efficient algorithm.
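Here is a minimal sketch of a binary search in PHP, assuming the array of names is already sorted alphabetically (the function name and the sample data are mine):
function binarySearch($sortedNames, $target)
{
    $low = 0;
    $high = count($sortedNames) - 1;
    while($low <= $high)
    {
        $middle = intdiv($low + $high, 2);
        $comparison = strcmp($sortedNames[$middle], $target);
        if($comparison === 0)
        {
            return $middle; //found the name
        }
        elseif($comparison < 0)
        {
            $low = $middle + 1; //the name comes after the middle, search the right half
        }
        else
        {
            $high = $middle - 1; //the name comes before the middle, search the left half
        }
    }
    return -1; //the name is not in the list
}
$names = array("Alice", "Bob", "Carol", "Dave", "Erin", "Frank");
echo binarySearch($names, "Dave"); //prints 3
Every iteration halves the remaining range, so the loop runs about log2(n) times, which is where the O(log(n)) comes from.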
Inefficient algorithms
Just as there are efficient algorithms, there are inefficient algorithms as well. One of them is the bogosort algorithm. While (fortunately) nobody actually uses it, it serves as a demonstration of how you should not do it. When used to sort a list of numbers in descending order, it will, at random, choose an order for the list. It will then check whether the list is in the correct order; if it is not, it will randomize it again. As you can see, that algorithm isn't very efficient, and it has a time complexity of O(n x n!) (with n! being the factorial of n). If you want to sort arrays in a time-efficient manner, look for another algorithm, Heapsort for example.
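Purely as a curiosity, a bogosort sketch could look like this (don't use it for anything real; shuffle() is PHP's built-in random reordering, the other names are mine):
function isSortedDescending($array)
{
    for($i = 1; $i < count($array); $i++)
    {
        if($array[$i] > $array[$i - 1])
        {
            return false; //found a pair in the wrong order
        }
    }
    return true;
}
function bogosortDescending($array)
{
    while(!isSortedDescending($array))
    {
        shuffle($array); //pick a random order and hope for the best
    }
    return $array;
}
print_r(bogosortDescending(array(3, 1, 4, 1, 5)));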
Writing an algorithm and optimizing it
I will now demonstrate how we can apply time complexity by first writing an algorithm, and then writing a better one. You will see why the latter is better by looking at its complexity. I want to write a function with an array as argument. The array will consist of a number of positive integers. The function will then return a new array, containing these integers, sorted by increasing size. The first algorithm I will use is called insertion sort. In short: it walks through the array and, for each integer, shifts the greater integers to its left one position to the right until the integer sits in its correct place. A more detailed description can be read here. I implemented the algorithm this way:
function insertionSort($array)
{
    for($i = 1; $i < count($array); $i++)
    {
        $currentNumber = $array[$i];
        $j = $i;
        while (($j - 1 >= 0) && ($currentNumber < $array[$j - 1])) //while there is a greater number directly to the left
        {
            $array[$j] = $array[$j - 1]; //shift that greater number one position to the right
            $j--;
        }
        //there are no greater numbers to the left anymore
        $array[$j] = $currentNumber; //place the current number in the gap that is left
    }
    return $array;
}
$array = array(4,29,9,2,9);
print_r(insertionSort($array));
You see that there is a while loop inside a for loop. The worst case scenario is a time complexity of O(n²). While the algorithm does a good job at what it's designed for, O(n²) is not good if you're dealing with bigger arrays. I will now demonstrate a better algorithm for the job (essentially a counting sort): this algorithm will first find the maximum of the array that is passed as argument. It will then create an associative array named $counting, in which each index counts the number of times that its value appears in the original array. Finally, it loops through the counting array and adds every index to a new array as many times as its count indicates. For example, if the value of $counting[23] is 3, it will add 23 to the new array three times.
function findMax($array)
{
    $maximum = $array[0];
    for($i = 1; $i < count($array); $i++)
    {
        $maximum = ($array[$i] > $maximum ? $array[$i] : $maximum); //keep the larger of the two
    }
    return $maximum;
}
function increasingSort($array)
{
    $size = findMax($array);
    $counting = array();
    for($i = 0; $i <= $size; $i++)
    {
        $counting[$i] = 0; //start with a count of zero for every possible value
    }
    for($i = 0; $i < count($array); $i++)
    {
        $counting[$array[$i]]++; //count how often each value appears
    }
    $ordered = array();
    for($i = 0; $i < count($counting); $i++)
    {
        for($j = 0; $j < $counting[$i]; $j++)
        {
            $ordered[] = $i; //append the value $i as many times as it was counted
        }
    }
    return $ordered;
}
$array = array(29,1,2,2,2,28,98);
print_r(increasingSort($array));
The time complexity of this algorithm is O(n + k), where k is the largest value in the array; as long as the values are not much larger than the number of elements, that is effectively O(n), a lot better than the insertion sort algorithm. However, note that this algorithm is not suitable when the numbers are very large and vary a lot, as the $counting array will then have a huge size. Always make sure that the algorithm fits the situation.
Time complexity is not everything
Now that I have shown you what time complexity is, note that computing time should never be your only focus. While you should always try to find out whether your algorithm is time-efficient enough, there are other aspects to consider, too. The computing time doesn't matter if you only need to sort ten items, so don't waste time on that. Also, for most tasks like sorting, searching entries, etc. there are already various efficient and tested algorithms available, waiting for you to Google them.
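In PHP, for instance, the built-in sort() function is such a tested, efficient implementation (O(n log n) on average), and it will almost always beat a hand-written sort for everyday tasks:
$numbers = array(29, 1, 2, 2, 2, 28, 98);
sort($numbers); //PHP's built-in, efficient sort
print_r($numbers); //Array ( [0] => 1 [1] => 2 [2] => 2 [3] => 2 [4] => 28 [5] => 29 [6] => 98 )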
Frequently Asked Questions (FAQs) on Time Complexity of Algorithms
What is the Importance of Understanding Time Complexity in Algorithms?
Understanding time complexity is crucial in algorithm design and programming. It provides a measure of the time an algorithm takes to run as a function of the size of the input data. This understanding allows programmers to predict the running time of an algorithm and choose the most efficient one for a particular task. It also helps in optimizing code, making it run faster and consume less computational resources, which is particularly important in large-scale data processing and real-time applications.
How is Time Complexity Different from Space Complexity?
Time complexity and space complexity are two different aspects of algorithmic efficiency. Time complexity refers to the computational time taken by an algorithm to run, while space complexity refers to the amount of memory space an algorithm needs to execute. An efficient algorithm ideally has both low time and space complexity, but there’s often a trade-off between the two. For instance, an algorithm might run faster (low time complexity) but require more memory (high space complexity), or vice versa.
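A small sketch of that trade-off (both function names are mine): checking an array for duplicates with two nested loops uses O(1) extra memory but O(n²) time, while remembering the values already seen in an associative array brings the time down to O(n) at the cost of O(n) extra memory.
function hasDuplicatesSlow($array) //O(n²) time, O(1) extra space
{
    for($i = 0; $i < count($array); $i++)
    {
        for($j = $i + 1; $j < count($array); $j++)
        {
            if($array[$i] === $array[$j])
            {
                return true;
            }
        }
    }
    return false;
}
function hasDuplicatesFast($array) //O(n) time, O(n) extra space
{
    $seen = array();
    foreach($array as $value)
    {
        if(isset($seen[$value]))
        {
            return true;
        }
        $seen[$value] = true;
    }
    return false;
}
var_dump(hasDuplicatesFast(array(1, 2, 3, 2))); //bool(true)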
What is Big O Notation and How is it Used in Time Complexity?
Big O notation is a mathematical notation used to describe the limiting behavior of a function when the argument tends towards a particular value or infinity. In the context of time complexity, Big O notation is used to classify algorithms based on how their running time or number of iterations grows as the input size grows. It provides an upper bound of the complexity, helping programmers understand the worst-case scenario for an algorithm's time efficiency.
Can you Explain the Difference Between O(1), O(n), and O(n^2) Time Complexities?
O(1) represents constant time complexity. It means the algorithm takes the same amount of time to execute, regardless of the input size. O(n) represents linear time complexity, where the running time of the algorithm increases linearly with the size of the input data. O(n^2) represents quadratic time complexity, where the running time of the algorithm is proportional to the square of the input size. As the input size increases, algorithms with O(n^2) complexity will take significantly more time to execute than those with O(1) or O(n) complexities.
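A quick PHP illustration of the three classes (the function names are mine, purely for illustration):
function firstElement($array) //O(1): one lookup, regardless of the array size
{
    return $array[0];
}
function sumElements($array) //O(n): one pass over the array
{
    $sum = 0;
    foreach($array as $value)
    {
        $sum += $value;
    }
    return $sum;
}
function countEqualPairs($array) //O(n²): every element is compared with every element
{
    $pairs = 0;
    foreach($array as $number1)
    {
        foreach($array as $number2)
        {
            if($number1 === $number2)
            {
                $pairs++;
            }
        }
    }
    return $pairs;
}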
What is the Role of Data Structures in Time Complexity?
Data structures play a significant role in determining the time complexity of an algorithm. Different data structures have different time complexities for operations like insertion, deletion, and searching. For instance, searching for an item in a hash table can be done in O(1) time, while it takes O(n) time in a linked list. Therefore, choosing the right data structure can significantly improve the efficiency of an algorithm.
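In PHP, an associative array behaves like a hash table, so looking up a key with isset() is O(1) on average, while in_array() has to scan the values one by one, which is O(n). A quick sketch:
$usernames = array("alice", "bob", "carol");
var_dump(in_array("carol", $usernames)); //O(n): scans the values until it finds a match, prints bool(true)
$lookup = array_flip($usernames); //turn the values into keys once
var_dump(isset($lookup["carol"])); //O(1) on average: a direct hash lookup, prints bool(true)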
How Can I Reduce the Time Complexity of an Algorithm?
Reducing the time complexity of an algorithm often involves optimizing the code or choosing a more efficient algorithm or data structure. This could mean eliminating unnecessary computations, using more efficient sorting or searching methods, or leveraging data structures that allow faster access or manipulation of data. However, it’s important to note that reducing time complexity might increase space complexity, so a balance must be struck based on the specific requirements of your application.
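As a concrete sketch of such an optimization (the function names and data are mine): if you repeatedly need the sum of a sub-range of an array, recomputing it each time costs O(n) per query, but building a prefix-sum array once lets every following query run in O(1).
function buildPrefixSums($values) //assumes sequential integer keys starting at 0
{
    $prefix = array(0); //$prefix[$i] holds the sum of the first $i values
    foreach($values as $i => $value)
    {
        $prefix[$i + 1] = $prefix[$i] + $value;
    }
    return $prefix;
}
function rangeSum($prefix, $from, $to) //sum of $values[$from] .. $values[$to] in O(1)
{
    return $prefix[$to + 1] - $prefix[$from];
}
$values = array(3, 8, 2, 5, 7);
$prefix = buildPrefixSums($values);
echo rangeSum($prefix, 1, 3); //8 + 2 + 5 = 15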
What is Asymptotic Analysis in the Context of Time Complexity?
Asymptotic analysis is a method of describing limiting behavior and is often used in the analysis of algorithms. In the context of time complexity, asymptotic analysis provides a way to compare algorithms based on their efficiency as the input size grows. It helps us understand the growth rate of an algorithm’s time complexity, providing insights into its long-term performance for large input sizes.
What is the Difference Between Best Case, Average Case, and Worst Case Time Complexity?
Best case, average case, and worst case time complexities describe how an algorithm performs under different inputs. The best case time complexity is the scenario where the algorithm performs the fastest. The worst case time complexity is the scenario where the algorithm performs the slowest. The average case time complexity is the expected performance averaged over all possible inputs. Understanding these scenarios helps programmers predict how an algorithm will perform under different conditions.
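Linear search is a simple illustration (a sketch, the function name is mine): if the target happens to be the first element, the search finishes after one comparison (best case, O(1)); if it is the last element or missing, every element has to be checked (worst case, O(n)); on average about half of the elements are checked, which is still O(n).
function linearSearch($array, $target)
{
    foreach($array as $index => $value)
    {
        if($value === $target)
        {
            return $index;
        }
    }
    return -1; //not found
}
$numbers = array(7, 3, 9, 4, 1);
echo linearSearch($numbers, 7); //best case: the target is the first element
echo "\n";
echo linearSearch($numbers, 1); //worst case: the target is the last element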
How Does Recursion Affect Time Complexity?
Recursion can significantly affect the time complexity of an algorithm. A recursive function calls itself to solve a problem, which can lead to multiple function calls and increased time complexity. However, recursion can also simplify the code and make it easier to understand, despite its potential impact on performance. The time complexity of recursive algorithms is often calculated using recurrence relations.
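As a small sketch: a recursive factorial satisfies the recurrence T(n) = T(n-1) + O(1), which solves to O(n), while the naive recursive Fibonacci function below calls itself twice per call, giving T(n) = T(n-1) + T(n-2) + O(1), which grows roughly like O(2^n).
function factorial($n) //T(n) = T(n-1) + O(1), so O(n)
{
    if($n <= 1)
    {
        return 1;
    }
    return $n * factorial($n - 1);
}
function fibonacci($n) //T(n) = T(n-1) + T(n-2) + O(1), roughly O(2^n)
{
    if($n <= 1)
    {
        return $n;
    }
    return fibonacci($n - 1) + fibonacci($n - 2);
}
echo factorial(10); //3628800
echo "\n";
echo fibonacci(10); //55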
What is Amortized Time Complexity?
Amortized time complexity is a way to express the time complexity of an algorithm over a sequence of operations, rather than for a single operation. It provides a more comprehensive view of an algorithm’s performance, taking into account both the costly and less costly operations. This is particularly useful for algorithms where a single operation might be expensive, but when averaged over a large number of operations, the cost is relatively low.
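A classic illustration is a dynamic array that doubles its capacity whenever it runs out of space: most appends cost O(1), an occasional append triggers an O(n) copy, but averaged over a long sequence of appends the cost per append is O(1) amortized. PHP arrays already grow like this internally, so the class below is a purely illustrative sketch.
class DynamicArray
{
    private $items = array();
    private $size = 0;
    private $capacity = 1;

    public function append($value)
    {
        if($this->size === $this->capacity)
        {
            $this->capacity *= 2; //the occasional O(n) step: double the capacity and copy everything
            $resized = array();
            for($i = 0; $i < $this->size; $i++)
            {
                $resized[$i] = $this->items[$i];
            }
            $this->items = $resized;
        }
        $this->items[$this->size] = $value; //the usual O(1) step: write into the next free slot
        $this->size++;
    }

    public function get($index)
    {
        return $this->items[$index];
    }
}
$list = new DynamicArray();
for($i = 0; $i < 10; $i++)
{
    $list->append($i * $i);
}
echo $list->get(9); //81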