Then you state that increasingSort is O(n); this is not true. Otherwise you would of course have bested all the other sorting algorithms out there: http://bigocheatsheet.com/

Before you start talking about complexity, one should talk about the problem instances. Comparisons are often a good place to start (decision problems).

Another important instance is the function instance. Consider the following code:

    foreach ($big_array as $item => $number) {
        $result[$item] = in_array($number, $big_array);
    }

and:

    foreach ($big_array as $item => $number) {
        $result[$item] = array_key_exists($item, $big_array);
    }

These look very similar, but in_array is O(n) while array_key_exists is O(1), because the latter uses a hash lookup.

So in comes the ‘function problem’.
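The same distinction shows up in other languages; here is a sketch in Python, where membership tests on a list scan linearly while a dict does a hash lookup, analogous to in_array versus array_key_exists:

```python
big_list = list(range(100_000))
big_dict = {k: k for k in big_list}

# O(n): `in` on a list compares against elements one by one.
found_linear = 99_999 in big_list

# O(1) on average: `in` on a dict hashes the key and jumps to it.
found_hashed = 99_999 in big_dict
```

Both return the same answer; only the cost per lookup differs, which is exactly what makes the two PHP loops above behave so differently at scale.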

Then there is memory usage. If you use a lot of memory, the system has to allocate that memory, adding extra complexity to your application. This means you will also have to define when memory usage becomes O(something) and when it is simply O(1). The same goes for passing in a pointer versus a copy, ….
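A minimal Python sketch of the pointer-versus-copy point (the function names here are made up for illustration): passing a reference costs O(1) extra memory, while copying the input costs O(n) in both memory and time before any real work starts.

```python
import copy

def process_by_reference(items):
    # No copy is made: O(1) extra memory, the caller's list is shared.
    return len(items)

def process_by_copy(items):
    # Deep copy first: O(n) extra memory and O(n) time up front.
    local = copy.deepcopy(items)
    return len(local)
```

Both produce the same result, but the second carries a hidden O(n) cost that a pure operation count would miss.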

The size of the array that needs to be sorted is irrelevant, because complexity is calculated independently of the array size. AFAIR, the best complexity you can achieve for comparison-based sorting of a vector is O(n log n). Otherwise you would be given a nice prize for finding an O(n)-class algorithm for sorting a vector :D

Also, please don’t use functions in the conditional. They need to be executed every time the conditional is evaluated, which obviously takes more time than a simple comparison of two variables, or of a variable and a constant.
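As a sketch of that advice in Python: a while-loop condition is re-evaluated on every iteration, so a function call in it (here `len`) runs n times, whereas hoisting the bound into a variable evaluates it once.

```python
data = list(range(1000))

# Naive: len(data) is called on every loop test.
total = 0
i = 0
while i < len(data):
    total += data[i]
    i += 1

# Hoisted: evaluate the bound once, then compare two plain variables.
total2 = 0
i = 0
n = len(data)
while i < n:
    total2 += data[i]
    i += 1
```

Both loops compute the same sum; the second just swaps n function calls for n variable comparisons.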

The total number of $j iterations will always be fixed at n, no matter how many iterations of $i there are. The $j iterations will just be spread out differently across the $i iterations, depending on the number of unique values.

– if every value of $array is different from all the others, then $i runs O(n) times and $j === 1 for every $i (each value is distinct, so by definition it can only appear once), which gives O(n).

– if every value of $array is identical to all the others, then $i === 1 and $j is O(n), which gives O(n).

So at the extremes (input array is uniform or every element of input array is unique) we have O(n).

In between: let’s say the number of distinct values is half the size of the array (i.e. n/2), something like [1,2,3,1,2,3]; then $j === 2 and I think we’re at O(n/2 * 2), i.e. back at O(n).

I might be wrong on this: I’d be interested to see why, if so.
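The argument above can be checked with a small Python sketch (assuming the algorithm being discussed loops once per distinct value and then once per occurrence of that value): the total inner-loop work is the sum of the occurrence counts, which is always n regardless of how the values are distributed.

```python
from collections import Counter

def total_inner_iterations(arr):
    # Outer loop ($i): one pass per distinct value.
    # Inner loop ($j): one pass per occurrence of that value.
    counts = Counter(arr)
    return sum(counts.values())  # always equals len(arr)
```

All three scenarios land on n: `total_inner_iterations([1,2,3,4])` is 4 (all unique), `total_inner_iterations([5,5,5,5])` is 4 (all identical), and `total_inner_iterations([1,2,3,1,2,3])` is 6 (the in-between case).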

With this concept the second algorithm has no complexity gain. It’s a concept, not something up for debate; that is the whole point.

Regards.
