Too Much Time?

If your software runs correctly but too slowly, you need to optimize it. Optimization is a special form of refactoring: the visible functionality doesn't change, but the code improves in some way. Here you are trying to turn a slow program into a fast one.

Once upon a time in a land far, far away, computer clock speeds doubled every eighteen months. Every program got faster and faster, and every user was happy. The end.

This little fairy tale has come to an end in more ways than one. Program functionality has grown apace with processor speed, and processor clock speeds are no longer increasing due to the "power wall" - further increases in speed come at huge costs in power consumption.

The rise of mobile computing - smartphones and tablets - has raised another issue: faster code is necessary to keep the system running all day without recharging.

Tricks of the Trade

So despite decades of processor speed improvements, runtime optimization is still important. There are a number of ways, obvious or subtle, to improve the runtime of a program. Here are some examples, starting with ways that tend to yield the biggest "bang for the buck":

  • Look for inefficient loops that require O(n²) or worse (perhaps even exponential) runtime;
  • Precompute results and move their computation out of loops;
  • Store past results for reuse (caching);
  • Use better data structures to reduce the need to perform expensive operations;
  • Write special-purpose code for common cases;
  • Compute approximate results rather than exact answers; and
  • Rewrite modules to use more-efficient algorithms.

Each of these techniques has served me well at least once in the more than 1,000,000 lines of code I've written.

Inefficient loops tend to arise when the amount of data to be processed is larger than what the programmer expected; more-efficient loops often require more coding, and the rule is "make it right before you make it fast." Programmers don't always know in advance where the "hotspots" will be, so they use simpler code and optimize later.
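
As a minimal sketch of what this looks like in practice (the duplicate-check task and the names here are made up for illustration, not taken from any program mentioned above): the nested-loop version compares every pair of values and takes O(n²) time, while sorting first and scanning adjacent elements reduces the work to O(n log n).

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // O(n^2): compare every pair of values.
    bool hasDuplicateSlow(const std::vector<int>& values) {
        for (std::size_t i = 0; i < values.size(); ++i)
            for (std::size_t j = i + 1; j < values.size(); ++j)
                if (values[i] == values[j])
                    return true;
        return false;
    }

    // O(n log n): sort a copy, then look for equal neighbors.
    bool hasDuplicateFast(std::vector<int> values) {
        std::sort(values.begin(), values.end());
        return std::adjacent_find(values.begin(), values.end()) != values.end();
    }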

Common subexpressions occur so frequently in code that any decent optimizing compiler will look for values that are guaranteed not to change in a loop, then move their computation out of the loop. Still, there are limits to this capability, and a good programmer can often find values that change rarely or not at all.
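
A small illustrative sketch (the decibel-scaling example is hypothetical): the conversion factor below depends only on a value that never changes inside the loop, so it can be computed once beforehand. A compiler may hoist it automatically, but it is safer not to count on that.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Before: the gain factor is recomputed on every pass through the
    // loop, even though it depends only on gainDb, which never changes.
    void scaleSlow(std::vector<double>& samples, double gainDb) {
        for (std::size_t i = 0; i < samples.size(); ++i)
            samples[i] *= std::pow(10.0, gainDb / 20.0);   // loop-invariant
    }

    // After: compute the factor once, outside the loop.
    void scaleFast(std::vector<double>& samples, double gainDb) {
        const double factor = std::pow(10.0, gainDb / 20.0);
        for (std::size_t i = 0; i < samples.size(); ++i)
            samples[i] *= factor;
    }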

Depending on the application, new input data may look very similar to previous input data. If the calculations are expensive compared to the size of the parameters, storing old sets of parameters with the calculated results can be very effective in reducing runtime. The bad news is that memory consumption will go up, and typically the caching strategy (how many results, how long to keep them) must be determined experimentally.
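
Here is one way such a cache might look in C++ (the expensive routine is a stand-in, and as noted above a real cache would also need a size limit and eviction policy tuned experimentally):

    #include <cmath>
    #include <unordered_map>

    // Stand-in for a costly routine called repeatedly with similar inputs.
    double expensiveCalculation(double x) {
        double sum = 0.0;
        for (int i = 1; i <= 1000000; ++i)
            sum += std::sin(x / i);
        return sum;
    }

    // Cache results keyed by the input parameter. Memory use grows with
    // the number of distinct inputs seen so far.
    double cachedCalculation(double x) {
        static std::unordered_map<double, double> cache;
        auto it = cache.find(x);
        if (it != cache.end())
            return it->second;           // hit: reuse the stored result
        double result = expensiveCalculation(x);
        cache.emplace(x, result);        // miss: compute and remember
        return result;
    }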

Better data structures help when a program spends much of its time looking things up. Trees, heaps, and hash tables all support far faster searches than singly-linked lists.
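
For example (a generic sketch, not code from any particular program), a membership test against a linked list takes time proportional to the list length, while the same test against a hash set takes roughly constant time on average:

    #include <list>
    #include <string>
    #include <unordered_set>

    // O(n) per query: scan the linked list from the front every time.
    bool containsSlow(const std::list<std::string>& names,
                      const std::string& query) {
        for (const std::string& name : names)
            if (name == query)
                return true;
        return false;
    }

    // O(1) average per query: build the hash set once, then look up.
    bool containsFast(const std::unordered_set<std::string>& names,
                      const std::string& query) {
        return names.count(query) != 0;
    }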

Special-case code can have a huge impact on runtime for subsets of the possible input data, when those subsets have properties that permit computational shortcuts. A classic example comes from computational geometry: the calculations for orthogonal or 45-degree lines are much simpler than those for lines at arbitrary angles.
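
A simplified sketch of the idea (assuming grid-aligned coordinates, so exact comparisons are safe): computing the length of an orthogonal or 45-degree segment avoids the square root needed in the general case.

    #include <cmath>

    struct Segment { double x1, y1, x2, y2; };

    // General case: an arbitrary angle needs a square root.
    double length(const Segment& s) {
        return std::hypot(s.x2 - s.x1, s.y2 - s.y1);
    }

    // Special cases: horizontal and vertical segments reduce to a single
    // subtraction, and 45-degree segments to a constant multiply.
    // (Floating-point data would need a tolerance instead of ==.)
    double lengthFast(const Segment& s) {
        const double dx = std::fabs(s.x2 - s.x1);
        const double dy = std::fabs(s.y2 - s.y1);
        if (dy == 0.0) return dx;                   // horizontal
        if (dx == 0.0) return dy;                   // vertical
        if (dx == dy)  return dx * std::sqrt(2.0);  // 45 degrees
        return std::hypot(dx, dy);                  // arbitrary angle
    }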

Many complex programs try to compute the "best" answer using complex heuristics, because the cost to try all possible solutions is too high - often exponential in the size of the problem, or worse. The nature of the solution space may allow the use of approximations at some points during the program. These approximations allow a rapid approach to an optimum point, at which time more-complex calculations may be used to complete the job. And of course the user sometimes wants an approximate answer to get quick evaluations of design alternatives. A full optimization to obtain results 3% better than an approximation is wasted if the user's target is missed by 10% anyway.
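
One generic pattern (a sketch only, not tied to any particular application): use a cheap, coarse evaluation to get close to the optimum, then spend the expensive, fine-grained evaluations only in that neighborhood.

    #include <algorithm>
    #include <functional>

    // Approximate, then refine: a coarse pass narrows down where the
    // minimum of f lies on [lo, hi]; the fine pass runs only in that
    // neighborhood instead of over the whole interval.
    double approximateMinimum(const std::function<double(double)>& f,
                              double lo, double hi) {
        const int coarseSteps = 20, fineSteps = 200;

        double bestX = lo, bestVal = f(lo);
        const double step = (hi - lo) / coarseSteps;
        for (int i = 1; i <= coarseSteps; ++i) {        // coarse pass
            const double x = lo + i * step;
            const double v = f(x);
            if (v < bestVal) { bestVal = v; bestX = x; }
        }

        const double fineLo = std::max(lo, bestX - step);
        const double fineHi = std::min(hi, bestX + step);
        const double fineStep = (fineHi - fineLo) / fineSteps;
        for (int i = 0; i <= fineSteps; ++i) {          // fine pass
            const double x = fineLo + i * fineStep;
            const double v = f(x);
            if (v < bestVal) { bestVal = v; bestX = x; }
        }
        return bestX;
    }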

Finally, it may be necessary to rewrite code with better algorithms. Obviously this is an expensive proposition, but sometimes there is no other way to get significant gains. After noticing that a design rule checking program spent half its time sorting data, I redefined the data structures and algorithms to eliminate the sorting step. Additional efficiencies made possible by these changes resulted in an overall speedup of 3x.
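
The details of that change were specific to the program, but one common way a repeated sort can be designed out (a hypothetical sketch, not the actual design rule checker code) is to keep results in an ordered container so they never need a separate sorting pass:

    #include <algorithm>
    #include <set>
    #include <vector>

    // Before: results accumulate in a vector and are sorted every time
    // they are consumed, paying O(n log n) on each pass.
    void processResults(std::vector<double>& results) {
        std::sort(results.begin(), results.end());
        // ... walk the sorted values ...
    }

    // After: keep the results in an ordered container from the start.
    // Each insertion is O(log n) and the separate sorting step disappears.
    std::multiset<double> orderedResults;

    void addResult(double r) { orderedResults.insert(r); }
    // orderedResults can be walked in sorted order at any time.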

Conclusions

Some of these methods increase memory usage, so they must be used judiciously. It is even possible that a program could have both runtime and memory usage problems, so the best solution must be determined experimentally.

Runtime optimization takes effort, and often it increases the complexity of the code. It is important not to optimize too early, especially in a research and development environment where the final form of the program is not fully known. You don't want to optimize code that is already fast enough, and you need to minimize the number of bugs while you are still evaluating whether the program does its job. "Make it right before you make it fast."
