FPS: A common yet flawed metric of game performance
One of the most common ways to provide a simple measure of graphics performance in game titles is frame rate, expressed in frames per second (FPS). However, this measure can be quite deceiving, especially with today's faster video hardware. While it may provide a rough indication of performance, FPS is a very poor basis for making judgments about optimization.
Non-Linearity of FPS values
The problem with using FPS to measure performance, from a mathematical perspective, is that it is non-linear. But before I go there, let's look at it another way: simply put, it answers the wrong side of the question. When evaluating code performance in a real-time rendering application, the concern is how long it takes to render each frame, and how much time various sections of code contribute to that total. FPS, however, answers the flip side of this question: it's like asking how long it took to get from point A to point B, and being told that the car was traveling at 60 miles per hour. OK, if we know the distance from A to B we could figure it out, but it's not what we asked!
Now, this may seem like I'm being a bit picky on the details, and to be honest part of it comes from a pet peeve. Namely, that it seems like at least once a week I see a question to the effect of a complaint that some change cut the poster's frame rate by hundreds of FPS (say, from 900FPS down to 450FPS), or a common variation on this theme, citing a similarly large-sounding drop as proof that a feature is unusably slow.
Hey, notice something going on here? The frame time is changing linearly with the number of objects rendered, but the frame rate is not! In fact, it is highly non-linear, as shown in the above graph, which plots the frame rate for execution times of 1 millisecond through 40 milliseconds. Now, to illustrate how radically this can slant one's perception of performance, do you think the person complaining above would react the same way to a drop from 60FPS to 56.25FPS? Probably not, I think... but check this out: a drop from 900FPS to 450FPS and a drop from 60FPS to 56.25FPS represent exactly the same increase in frame time, about 1.11 milliseconds per frame.
Given such a disparity, one can see how bad conclusions could be reached, especially when comparing methods in different contexts. For example, if one method in an app caused a drop from 900FPS to 450FPS, while another method in another engine caused a drop from 60FPS to 55FPS, which might you think to be more expensive? If you've been paying attention, you should suspect that the 5FPS drop is a sign of a greater performance cost than the 450FPS drop seen with the first method! In fact, that 5FPS drop represents 36.4% more execution time than the 450FPS drop!
So take that as food for thought if you are currently using an FPS counter as a measure of your performance. If you want a quick indication of performance between profiling sessions, use frame time instead. In the DX9 framework, for example, you could modify the CD3DApplication::UpdateStats() function to use something like:
_sntprintf( m_strFrameStats, cchMaxFrameStats, _T("%0.2f ms/frame, %s"),
            1000.0f / m_fFPS, m_strDeviceStats ); // report ms per frame instead of FPS
Till next time....