105% Complete

One of my UI pet peeves is misleading progress indicators.

This morning, I’m running a disk drive diagnostic program. The first test claimed it would take 200 seconds. After only 10 seconds, the display said “10% done”. Huh? Last time I checked, 10 ÷ 200 was 0.05, or 5%. Near the end of the test, the display said “100% done” about 10 seconds before it actually completed.

To a user, incorrect progress bars are annoying and useless. I don’t know about you, but when something is “100% done”, I think of it as, well, done. Not close to done. Those eternal seconds between “100% done” and actually done are immensely frustrating. People have even written research papers on designing progress bars to improve the user’s perception of a program’s progress. But this kind of research is light years ahead of real life, where we can’t even get a decent linear progress bar to work.

From my programmer’s perspective, misleading progress indicators are especially perplexing. You have to write extra code to get it wrong. Since programmers are lazy, nobody should be writing extra code, and all of our progress bars should just work.

Consider the disk drive diagnostic. You’d expect a plain-Jane linear progress indicator to be computed something like this:

indicator_value = work_done / total_work

Pretty simple, right? Ten seconds into a 200-second test should give us 0.05.

Now it’s likely that the calculation is done with integer arithmetic, and the final result will be presented as a percentage. So we have to scale up the numbers like so:

indicator_value = 100 * work_done / total_work

Now, 100 * 10 seconds ÷ 200 seconds gives us 5. So why in the world would the disk diagnostic claim 10%? Because somebody decided to do more work than was necessary.
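To make the arithmetic concrete, here’s a minimal sketch in C (the function name and the little test harness are mine, not anything from the actual diagnostic):

#include <stdio.h>

/* Integer percentage: the division truncates, it never rounds up. */
int percent_done(int work_done, int total_work)
{
    return 100 * work_done / total_work;
}

int main(void)
{
    printf("%d\n", percent_done(10, 200));   /* prints 5 */
    return 0;
}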

Somebody apparently decided to have the indicator display progress in 10 percentage point increments (10%, 20%, 30%, etc.). Fine, you have to draw the line somewhere. So we revise our code to something like this:

indicator_value = 100 * work_done / total_work / 10 * 10

Mathematicians unfamiliar with integer arithmetic in most programming languages are now scratching their heads. The salient detail is that there’s an implicit floor function on the result of each division operation when working with integers. Conceptually, the above is equivalent to:

indicator_value = floor(floor(100 * work_done / total_work) / 10) * 10
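Here’s a quick C sketch of that truncating version (again, the name is mine). Ten seconds in, it reports 0, and it only reaches 100 when the work is genuinely finished:

/* Truncate twice: once for the percentage, once for the 10-point bucket. */
int percent_done_truncated(int work_done, int total_work)
{
    return 100 * work_done / total_work / 10 * 10;
}

/* percent_done_truncated(10, 200)  == 0    (5% displays as 0%)   */
/* percent_done_truncated(199, 200) == 90   (99% displays as 90%) */
/* percent_done_truncated(200, 200) == 100                        */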

Any programmer who finds him- or herself at this point should stop, take a breath, and check off the progress indicator feature as complete.

But many programmers don’t stop there. Something bothers them. They instinctively worry about the “rounding down” that the integer division does. They want to fix it. They want to round up. Next thing you know, they’ve written an expression like this:

indicator_value = (100 * work_done / total_work + 5) / 10 * 10

Adding 5 (which is half of the 10 percentage point interval) will bias the number up. In many cases, this is the right thing to do. If you’re writing code to credit frequent flier miles to my account, I want you to round up. But we’re talking about code for a progress indicator. Don’t over-promise. Manage the user’s expectations.

It’s true that, 10 seconds into a 200-second operation, we’re as close to 10% as we are to 0%. But we’re not 10% done. We’re not even close. And, at the other end, you’ll show 100% done 10 seconds before we’re actually done. That’s an outright lie. User interfaces shouldn’t lie to users.
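For what it’s worth, a quick sketch shows that the rounded-up version reproduces exactly the behavior I saw this morning (function name mine):

/* Round to the nearest 10-point bucket by adding half the interval first. */
int percent_done_rounded(int work_done, int total_work)
{
    return (100 * work_done / total_work + 5) / 10 * 10;
}

/* percent_done_rounded(10, 200)  == 10    ("10% done" after only 10 seconds) */
/* percent_done_rounded(190, 200) == 100   ("100% done" 10 seconds too early) */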

Don’t round up on progress indicators. Ever!

“But now users will panic,” some of you are sure to complain, “because it’ll seem like nothing is happening for 20 whole seconds!”

Perhaps so. Maybe it’s time to revisit the decision to show progress in 10 percentage point increments. You could have done even less work.

[Happy Pi Day!]