Exponentiation, speed and complexity.

Published: 16th of September, 2011

This article is about different implementations of exponentiation in programming, with speed in mind.

A question you often hear today is: do we *really* need to know how the functions and methods we use are implemented in our programming languages, or is this just a thing of the past? The majority (if not all -- please correct me if I'm wrong) of modern languages commonly used on the web and mobile have these mathematical operations built in, so unless you are really curious about "how stuff works", there is no need to know, right?

I'm a graphics engineer and self-taught computer enthusiast who uses programming languages to make stuff, and I recently decided to strengthen my computer science foundations. That is how I came across Introduction to Computer Science and Programming, the MIT OpenCourseWare lectures, where in lecture 8 (Complexity: Log, Linear, Quadratic, Exponential Algorithms) Prof. Eric Grimson and Prof. John Guttag also discuss exponentiation and show examples of how to implement it.

There are at least three ways:
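The code examples did not survive here, so below is a sketch of the three approaches in the spirit of the lecture (the course uses Python); the function names and exact details are my own reconstruction, not the lecture's code. The first two take time linear in the exponent, while the third halves the exponent at each step by successive squaring:

```python
def exp_iterative(a, b):
    """O(b): multiply a together b times in a loop."""
    result = 1
    for _ in range(b):
        result *= a
    return result

def exp_recursive(a, b):
    """O(b): the same linear cost, expressed recursively."""
    if b == 0:
        return 1
    return a * exp_recursive(a, b - 1)

def exp_fast(a, b):
    """O(log b): successive squaring.

    When b is even, a**b == (a**(b//2))**2, so one recursive call
    handles half the exponent; when b is odd, peel off one factor.
    """
    if b == 0:
        return 1
    if b % 2 == 0:
        half = exp_fast(a, b // 2)
        return half * half
    return a * exp_fast(a, b - 1)
```

For example, `exp_fast(2, 1000)` needs on the order of log2(1000) ≈ 10 squarings, whereas `exp_iterative(2, 1000)` performs 1000 multiplications.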

Let's make a simple comparison of the first two (linear, O(n)) approaches versus the third (logarithmic, O(log n)) one:

Comparison of orders O(n) and O(log n).

| Notation | n = 10 | n = 100 | n = 1000 |
|----------|------------|------------|------------|
| O(n) | 10 | 100 | 1000 |
| O(log n) | 3.32192809 | 6.64385619 | 9.96578428 |
Now think of n as representing the time (perhaps even seconds) required to complete the task.
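The logarithm values in the table are base-2 logarithms, and can be reproduced directly (assuming Python 3.3+ for `math.log2`):

```python
import math

# Base-2 logarithms of the table's n values.
for n in (10, 100, 1000):
    print(n, math.log2(n))
# 10   -> about 3.32192809
# 100  -> about 6.64385619
# 1000 -> about 9.96578428
```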

The fast method of exponentiation represented above is (usually) what a built-in provides, but is there any need to think about this, or in this direction, at all? I would say absolutely. Speed, performance, lower bandwidth traffic -- they all add up to an overall better user experience.
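You can get a rough feel for the gap yourself; this is only an illustrative timing sketch (absolute numbers depend entirely on your machine and interpreter), comparing a naive multiplication loop against Python's built-in `pow`:

```python
import timeit

# Time 1000 runs of each: a naive O(n) loop vs the built-in pow.
naive = timeit.timeit("r = 1\nfor _ in range(1000): r *= 3", number=1000)
builtin = timeit.timeit("pow(3, 1000)", number=1000)

print(f"naive loop:   {naive:.4f}s")
print(f"built-in pow: {builtin:.4f}s")
```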

What do you think? Do you ask yourself if you can make something run faster, or smoother?
Do you know how exponentiation is implemented in your favourite programming language?
