Before getting into Big O Notation fastest to slowest time complexity, it's better to first understand what asymptotic analysis is and its fundamentals.

### Asymptotic Analysis in data structures and algorithms

Asymptotic analysis calculates the running time of an operation on a mathematical basis. In other words, asymptotic analysis describes the mathematical relationship between an algorithm's input size and its run-time performance.

#### Why do we use asymptotic notation?

By using asymptotic analysis, the time required by an algorithm to execute can be estimated, and it falls under three categories.

- Best case ( Omega Notation ) – minimum time required for a programme to execute.
- Average case ( Theta Notation ) – average time required for a programme to execute.
- Worst case ( Big O Notation ) – maximum time required for a programme to execute.
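The three cases can be seen in a simple linear search. The helper below is a minimal sketch written for this article (the name `linear_search` and the comparison counter are illustrative, not from any library): finding the first element is the best case, while finding the last element (or none) is the worst case.

```python
def linear_search(items, target):
    """Return the number of comparisons made while searching for target."""
    comparisons = 0
    for value in items:
        comparisons += 1
        if value == target:
            return comparisons  # found early: best case approaches Omega(1)
    return comparisons          # scanned everything: worst case is O(n)

data = [3, 7, 1, 9, 5]
print(linear_search(data, 3))  # best case: 1 comparison
print(linear_search(data, 5))  # worst case: 5 comparisons (n = 5)
```

On average, a successful search inspects about half the list, which is why the average case is also linear in n.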

#### What are the factors that affect the running time of an algorithm?

- Speed of the computer
- The efficiency of the programming language
- Compiler performance
- Number of lines and loops in the code
- Number of operations within a loop

### What is Big Omega notation?

Big omega notation expresses that the algorithm takes at least a certain amount of time to run. Big omega is the asymptotic lower bound of an algorithm.

For **Ω(f(n))**, the running time will be at least **k · f(n)** for some constant **k**.

### What is Big O notation?

Big O notation expresses the maximum time that the algorithm will take to run; it is the most that the running time can grow to. Big O notation is the asymptotic upper bound of an algorithm.

For **O(f(n))**, the running time will be at most **k · f(n)** for some constant **k**.
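The upper bound can be checked empirically. The sketch below (the counter function is hypothetical, written only for illustration) counts the operations of a single loop and confirms they never exceed k · f(n) with f(n) = n and k = 1.

```python
def count_ops(n):
    """Count the constant-time operations performed by one loop over n items."""
    ops = 0
    for _ in range(n):
        ops += 1  # one constant-time operation per iteration
    return ops

# For f(n) = n and k = 1, the operation count stays at or below k * f(n),
# so the loop is O(n).
k = 1
for n in (10, 100, 1000):
    assert count_ops(n) <= k * n
```

Constant k absorbs machine-dependent factors such as compiler and hardware speed, which is why Big O ignores them.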

### What is Big Theta notation?

Big theta notation expresses the exact running-time growth, which lies between the upper bound and the lower bound of an algorithm. Big theta notation is the asymptotically tight bound of an algorithm.

For **Θ(f(n))**, the running time lies between the least value **k₁ · f(n)** and the most value **k₂ · f(n)** for some constants **k₁** and **k₂**.

### Big O Notation fastest to slowest time complexity

Big O notation mainly gives an idea of how complex an operation is. It expresses how long an operation will take to run as the size of the data set increases.

**1 < log(n) < √n < n < n log(n) < n² < n³ < 2ⁿ < 3ⁿ < nⁿ**
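This ordering can be verified numerically. The sketch below evaluates each growth function at n = 16 and checks that every value is at least as large as the one before it (at exactly n = 16, log₂(n) and √n happen to tie at 4, so the comparison is non-strict).

```python
import math

n = 16
growth = [
    1,                  # constant
    math.log2(n),       # logarithmic: 4
    math.sqrt(n),       # square root: 4.0 (ties log2 at exactly n = 16)
    n,                  # linear: 16
    n * math.log2(n),   # linearithmic: 64
    n**2,               # quadratic: 256
    n**3,               # cubic: 4096
    2**n,               # exponential: 65536
    3**n,               # exponential with a larger base
    n**n,               # n to the n: dominates everything above
]

# Each function grows at least as fast as the previous one at this n.
assert all(a <= b for a, b in zip(growth, growth[1:]))
```

For small n the ordering can differ (for example, log(n) exceeds 1 only once n > 2), which is why these comparisons are stated asymptotically.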

The Big O complexities can be ordered from fastest to slowest ( best to worst ) according to time complexity as follows.

- O(**1**) – constant. Speed doesn't change with the size of the data set.
- O(**log n**) – logarithmic change. A little more time-consuming than O(1).
- O(**n**) – linear change. Time taken is directly proportional to the size of the data set.
- O(**n log n**) – linearithmic change. Time slightly more than doubles when the size of the data set doubles.
- O(**n²**) – quadratic change. Time taken increases fourfold when the size of the data set doubles.
- O(**n³**) – cubic change. Time taken increases eightfold when the size of the data set doubles.
- O(**2ⁿ**) – exponential change. Time taken doubles for every new element added to the data set.
- O(**n!**) – factorial change. Time taken grows with the factorial of the size of the data set.
- O(**nⁿ**) – the most time-consuming function here and the slowest to run because of its time complexity.

This is valid for larger data sets. However, **n!** can fall anywhere between **1** and **nⁿ** depending on the value of **n**.
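As a rough illustration of a few of these classes, the sketch below shows one hypothetical function per class (the names are illustrative, chosen for this article): a constant-time lookup, a linear pass, a quadratic pairing, and a factorial enumeration of orderings.

```python
from itertools import permutations

def constant(items):
    """O(1): one index lookup, independent of the data set size."""
    return items[0]

def linear(items):
    """O(n): touch every element exactly once."""
    return sum(items)

def quadratic(items):
    """O(n^2): build every ordered pair of elements."""
    return [(a, b) for a in items for b in items]

def factorial(items):
    """O(n!): enumerate every possible ordering of the elements."""
    return list(permutations(items))

data = [1, 2, 3]
print(constant(data))        # 1
print(linear(data))          # 6
print(len(quadratic(data)))  # 9  == 3^2 pairs
print(len(factorial(data)))  # 6  == 3!  orderings
```

Doubling the input barely affects the constant function, doubles the linear one, quadruples the quadratic one, and makes the factorial one explode, matching the ordering above.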

### What is the importance of Big O notation in programming?

Working with large data sets is very difficult, and there are many things to consider before selecting a data structure and an algorithm to use. Big O notation expresses the time complexity, which gives an idea of the time taken to perform a certain algorithm, and it can also express the space complexity, which is the amount of memory that an algorithm will require.