Everyone in these things looks at graphical hardware or resolution, when there is a lot more going on than even the "video" enthusiasts talk about. Let's talk about what makes up a frame. A game is primarily driven by what is referred to as a game loop, which has 3 basic/primary parts: logical update, physics update (typically based on the logical operations), and graphical analysis/render. These parts of the game loop run constantly, and can be expanded into: pre-input update, input check/processing, post-input update, physics move, collision detection, collision resolution, calculation of the camera frustum, post-process effect pass(es) (this includes shaders, some of which have to be done one at a freaking time over the ENTIRE scene), and then render.
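To make that loop structure concrete, here is a toy sketch of one pass through the expanded loop. Every class and method name here is mine for illustration, not from any real engine:

```python
# Toy sketch of one pass through the expanded game loop described above.
# All names are illustrative stand-ins, not real engine code.

class World:
    """Minimal stand-in that just records which phases ran, in order."""
    def __init__(self):
        self.log = []
    def pre_input_update(self):          self.log.append("pre_input")
    def poll_input(self):                self.log.append("input"); return []
    def post_input_update(self, inputs): self.log.append("post_input")
    def physics_move(self):              self.log.append("physics_move")
    def detect_collisions(self):         self.log.append("collision_detect"); return []
    def resolve_collisions(self, hits):  self.log.append("collision_resolve")
    def compute_frustum(self):           self.log.append("frustum"); return []
    def post_process(self, visible):     self.log.append("post_process")
    def render(self, visible):           self.log.append("render")

def run_frame(world):
    """One frame: logic, then physics, then graphics -- always in that order."""
    world.pre_input_update()
    inputs = world.poll_input()
    world.post_input_update(inputs)
    world.physics_move()
    hits = world.detect_collisions()
    world.resolve_collisions(hits)
    visible = world.compute_frustum()
    world.post_process(visible)
    world.render(visible)

w = World()
run_frame(w)
```

The point of the ordering: render always comes last, because everything before it decides what there even is to draw.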
Now, so far I am just giving a list of steps that a game loop goes through, but in reality these steps happen every "frame" (I will get back to frame locking later), and their cost depends on the number of GameObjects and the complexity of the commands being issued. Your logical operations run on the primary processor (the CPU), and physics can additionally be offloaded to the GPU in some engines (because graphics alone isn't enough for it to do), though plenty of physics still runs on the CPU. So in reality, if these commands are rather complex, and there are a large number of them, then this "can" get rather slow.
Usually the biggest bottleneck in a game is a trade-off between physics, AI, and graphics. Physics becomes a bottleneck if you have a highly complex physics environment (deformable or interactive terrain, or just a lot of concave collidable surfaces). AI can be a bottleneck if you have a highly complex AI algorithm, or even a lot of simple AI algorithms running concurrently, and mind you, if each one has a unique instruction or decision set then it gets worse by a long way. Graphics only really becomes a bottleneck when we start talking about highly detailed models or complex shaders running, but that can still get out of hand quickly.
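Since which of the three dominates differs per game, the practical move is to time each portion of the loop separately. A minimal profiling sketch (the phase names and stand-in workloads are made up, purely to make it self-contained):

```python
import time

def profile_phases(phases):
    """Time each named phase of one frame; returns milliseconds per phase.
    'phases' maps a name to a callable -- all names here are illustrative."""
    timings = {}
    for name, fn in phases.items():
        start = time.perf_counter()
        fn()
        timings[name] = (time.perf_counter() - start) * 1000.0
    return timings

# Stand-in workloads so the sketch runs on its own.
def fake_physics():
    sum(i * i for i in range(50_000))

def fake_ai():
    sorted(range(10_000), key=lambda i: -i)

def fake_render():
    pass  # pretend the GPU handles this part

timings = profile_phases({"physics": fake_physics,
                          "ai": fake_ai,
                          "render": fake_render})
worst = max(timings, key=timings.get)  # the phase worth optimizing first
```

Real engines use dedicated profilers rather than wall-clock timing like this, but the idea is the same: measure before you optimize.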
So this is probably quite a bit of babble to many people, but what it amounts to is: if your processor, RAM, or GPU is slower than "benchmarked," then these bottlenecks start to become a cascading issue. Under the right circumstances it is possible to have an unoptimized game run at a higher frame rate, and do so consistently, but this is the exception, not the rule. Usually a target frame rate needs to be determined (this determination can be made at any stage of development, but it is typically better to decide sooner rather than later), then each portion of the game can be profiled and optimized accordingly. Now, if the game can smoothly run at, say, 75 fps, but every so often there are drops down to 60 fps, there are a couple of options: either let this happen (maybe market the game as 60 fps anyway to save face), or cap the game at 60fps-render. There is a difference between capping something at XXfps-render and at XXfps. When you cap a game at a certain render limit, you are telling the game to process everything for as many frames as it can handle, but only render at the capped rate (run a clock in the background, and every time the clock hits value X, call render). The other form of cap is where you hold the entire update hostage to a clock, and then have it process things. This can be exaggerated by having a number of things in the game be controlled by "real time" rather than "system ticks" (but that is a lengthier discussion that is trickier to describe without examples that, for the life of me, I can't think of at the moment).
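The difference between the two caps can be sketched like this. It's a deterministic toy: I picked a 64 fps cap and a fake clock that advances in exact binary fractions purely so the arithmetic works out cleanly; none of the names are real engine code:

```python
# Sketch of the two capping strategies described above, driven by a fake
# clock so the result is deterministic. All names are illustrative.

RENDER_INTERVAL = 1.0 / 64.0   # "cap renders at 64 fps" (64 chosen so
                               # the float math is exact in this sketch)

def run_capped_render(total_time, update_dt=1.0 / 256.0):
    """Render-cap: update as fast as possible, render only when the
    background clock says another render interval has passed."""
    updates = renders = 0
    clock = 0.0
    next_render = 0.0
    while clock < total_time:
        # ... full game-state update would go here ...
        updates += 1
        clock += update_dt          # pretend each update costs this long
        if clock >= next_render:
            renders += 1            # render() would go here
            next_render += RENDER_INTERVAL
    return updates, renders

def run_capped_everything(total_time):
    """Full cap: hold the whole update hostage to the clock, so updates
    and renders both happen together at the capped rate."""
    updates = renders = 0
    clock = 0.0
    while clock < total_time:
        # ... update + render together ...
        updates += 1
        renders += 1
        clock += RENDER_INTERVAL
    return updates, renders

u1, r1 = run_capped_render(1.0)      # many updates, ~64 renders
u2, r2 = run_capped_everything(1.0)  # 64 updates, 64 renders
```

With the render-only cap the game state still gets 256 updates in the simulated second; with the full cap it only gets 64, which is why the two feel different even at the same displayed frame rate.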
All in all, at the end of the day: there is a lot more at play than just the rendering, or just the graphical hardware. You have to consider the entire machine.
Source: actual experience optimizing numerous games (as an uncredited contractor).
In the video there was mention of what it takes to enable motion blur in a rendered video, and the question was raised as to what it would take to do this "on the fly" in a game. If you want motion blur in a game, there are 2 ways to accomplish this effectively. The first way is to take the current frame without blur, then before that frame is rendered, grab the next frame, find what is different, and apply blur to the difference. The other way is to use interpolation: take frame 1, then grab frame 2, but instead of rendering frame 2, render frame 1.5, which is halfway in between, with blur (basically what Marla was talking about with the smart TVs). And it is not cut and dried which of these is faster. If there is a low number of things on screen, then you can easily do either without an extreme impact. If you have a lot of simple objects on screen, then you want the first approach, because we are pretty sure physics finished all the way. But if there are a lot of complex objects, then we do the second approach, because if the physics didn't finish all the way, it is possible to hand the renderer a "majority" of the new information plus the old and still be successful, and as long as the incomplete data amounts to minor corrections or changes near the edges of the frame, the eye will not notice "as much." This is probably the most taxing shader/post-process effect you can do in a game currently.
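Both approaches can be sketched on a toy "frame" (just a list of pixel brightness values; the "blur" here is a crude average, purely illustrative of the idea, not a real shader):

```python
# Toy sketches of the two in-game motion blur strategies described above.
# A "frame" is a flat list of pixel brightness values (illustrative only).

def interpolate_frames(frame1, frame2, t=0.5):
    """Interpolation approach: blend two frames; t=0.5 gives 'frame 1.5'."""
    return [(1 - t) * a + t * b for a, b in zip(frame1, frame2)]

def difference_blur(frame1, frame2):
    """Difference approach: find what changed between the frames and
    'blur' only those pixels (here, by averaging old with new)."""
    out = []
    for a, b in zip(frame1, frame2):
        out.append((a + b) / 2 if a != b else b)
    return out

# A single bright pixel moving one step to the left between frames.
f1 = [0.0, 0.0, 1.0, 0.0]
f2 = [0.0, 1.0, 0.0, 0.0]
half = interpolate_frames(f1, f2)   # the in-between "frame 1.5"
diff = difference_blur(f1, f2)      # only the changed pixels are blended
```

On this tiny example the two methods happen to agree; the real trade-off the comment describes is about cost and about which one tolerates unfinished physics data better, not about the output on a trivial scene.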
Typically in a rendering program, the reason motion blur can cause such a dramatic increase in compile/render time is that instead of looking at the objects in the scene (typically, if the program is not 100% layer-driven, with every object on a different layer, then the program just has to ASSuME things, and even then user ability and scene density are still called into question), it has to determine on a pixel-by-pixel basis what is and isn't blurred. A simple test for this: in your rendering software, take 2 objects starting on either side of the screen (toward the edges), then over about 2 frames move them to roughly the opposite sides of the screen (best results if each only travels about 3 quarters of the screen). What you should notice is that around the center the 2 objects seem to merge together as 1. In a game this would not happen, as each object is treated independently from every other object.
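A toy version of that pixel-by-pixel behavior shows why the crossing trails merge. This uses a 1-D "screen" and averages sub-frame samples per pixel, the way an offline renderer accumulates motion blur; all the numbers and names are mine:

```python
# Why two crossing objects "merge" under per-pixel motion blur:
# the renderer averages every sub-frame sample at each pixel, and samples
# from different objects land in the same pixels. Toy 1-D screen.

WIDTH = 9

def render_sample(positions):
    """One sub-frame sample: light up the pixel under each object."""
    row = [0.0] * WIDTH
    for p in positions:
        row[p] = 1.0
    return row

def pixel_motion_blur(samples):
    """Average all sub-frame samples pixel by pixel (no object identity)."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

# Object A sweeps left-to-right while object B sweeps right-to-left.
samples = [render_sample([i, WIDTH - 1 - i]) for i in range(WIDTH)]
blurred = pixel_motion_blur(samples)
# Every pixel along both paths receives energy, so the two trails form
# one continuous streak -- the blur has no idea which object was which.
```

A game compositing per-object blur would keep the two streaks as separate entities; the per-pixel average cannot, which is exactly the merging you see in the two-object test above.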