
October 13, 2023

Why Amdahl's Law Doesn't Apply to Parallel Computers

You are a cleaner. Your job is at a house an hour away from your home: you hop on a bus at 8AM, arrive at 9AM, and then work for 9 hours until 6PM without stopping (theoretical cleaners don't get lunch breaks). Every full clean costs you one hour for the bus ride plus nine hours of cleaning. Now suppose you hire more cleaners. The cleaning itself can be split among them, but the bus ride cannot: even with an unlimited crew, the job can never finish in less than the one hour of travel. Programs work the same way. No matter how fast the parallelizable portion runs, the total time is always limited by the part of the work that must be done sequentially.
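The cleaner analogy maps directly onto Amdahl's law: with a parallel fraction p of the work and n workers, the best possible speedup is 1 / ((1 - p) + p/n). Here is a minimal sketch using the analogy's own numbers (one hour of travel plus nine hours of cleaning gives p = 0.9; these figures illustrate the formula, they aren't a benchmark):

```python
def amdahl_speedup(parallel_fraction, workers):
    """Speedup predicted by Amdahl's law for a given parallel fraction."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# The cleaner analogy: 1 hour of travel (serial) plus 9 hours of
# cleaning (parallelizable), so 90% of the job can run in parallel.
p = 9 / 10

for n in (1, 2, 10, 100, 1_000_000):
    print(f"{n:>9} cleaners -> speedup {amdahl_speedup(p, n):.2f}x")
```

Even with a million cleaners the speedup never reaches 10x, because the one-hour bus ride puts a hard floor under the total time.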

Parallel slowdown, the effect where certain tasks actually get slower when scaled beyond a certain number of parallel nodes, is one manifestation of this limitation. It is caused by the time needed for communication between the parallel parts of the task: as nodes are added, each node does less useful work, but the coordination overhead keeps growing. Past the point where communication dominates, adding processors makes the program slower, and in extreme cases slower than running it on a single processor.
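One way to see parallel slowdown is a toy cost model in which every added node contributes a fixed slice of communication overhead. This is a sketch, not a measurement: the serial, parallel, and per-node communication costs below are invented just to make the curve visible.

```python
def runtime(n, serial=1.0, parallel=9.0, comm_per_node=0.05):
    """Toy model: serial part + evenly divided parallel part
    + communication cost that grows with the number of nodes."""
    return serial + parallel / n + comm_per_node * n

times = {n: runtime(n) for n in (1, 4, 8, 16, 32, 64)}
best = min(times, key=times.get)

for n, t in times.items():
    print(f"{n:>3} nodes -> {t:.3f} time units")
print(f"fastest at {best} nodes")
```

With these numbers the runtime bottoms out around 16 nodes and then climbs again: past that point, the communication term outgrows the savings from dividing the work further.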

A more practical approach is to break the serial portion of the program into pieces that can themselves be parallelized, and then to optimize those pieces. For example, you can schedule the most critical and expensive tasks on faster hardware while allocating non-critical tasks to lower-power processors. On modern heterogeneous machines such as smartphones, this improves both performance and battery life. The only hard limit is the fraction of the program that genuinely must run sequentially, and in practice some such fraction is inevitable.
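As a rough sketch of that idea, here is a toy scheduler that sends tasks above a cost threshold to a faster core and everything else to a slower, lower-power core. All task names, costs, core speeds, and the threshold are made-up assumptions for illustration, not any real platform's scheduling policy:

```python
# Hypothetical task list: (name, cost in abstract work units).
tasks = [("render", 8.0), ("network", 1.0), ("ui", 6.0), ("logging", 0.5)]

FAST_SPEED, SLOW_SPEED = 2.0, 1.0  # assumed relative core speeds
THRESHOLD = 4.0                    # costs above this go to the fast core

# Expensive tasks run on the fast core; cheap ones on the efficient core.
fast_time = sum(cost for _, cost in tasks if cost > THRESHOLD) / FAST_SPEED
slow_time = sum(cost for _, cost in tasks if cost <= THRESHOLD) / SLOW_SPEED

# Both cores run concurrently, so total time is the longer of the two.
makespan = max(fast_time, slow_time)
print(f"fast core: {fast_time}, slow core: {slow_time}, total: {makespan}")
```

Compared with running every task on the slow core, the expensive work finishes sooner while the cheap background tasks stay off the power-hungry core, which is the performance-plus-battery trade the paragraph above describes.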