What Does the End of Moore's Law Mean for Gaming?

This is a very strange time for computer technology. It's always a strange time for one reason or another, but this time is extra strange. The only thing we were ever sure of is no longer a sure thing.

Indeed, my PC is starting to run into trouble with the latest AAA games; I'm looking forward to having to upgrade, maybe for the last time.

Isn't this always the case with all technology? If you spend enough effort perfecting one technology, the returns eventually flatten out and it becomes too expensive for too little gain.
It is only when a new technology is invented that perfecting things becomes worthwhile again, and the whole cycle starts over.

Explain to the Luddite: If stuff that directly interacts with the player needs to be on a single core (or at least a small number of cores to allow serial processing), does that mean that the extra cores can be used to manage separate events, rather than ramping up the graphics?

For example, say I was making a zombie game optimized for a computer with 18 cores. I figure I need 8 for the actual player experience, and 4 more for the graphics (those proportions are pulled from nowhere). Could I then use the spare 6 cores to write what's happening in the rest of the city? For example, could you use that processing power to track herds of zombies as they move around distant areas of the city interacting with survivor outposts and natural events like forest fires, switching them over to the main cores as they got close enough to the player to become part of their game? Because they don't need to have their threads intimately connected with what's happening on a millisecond-to-millisecond basis, could you use that spare computing power to run large events in the background and report the results as necessary, instead of just trying to make things look better?

I'm a sucker for big strategy games, and one of the most interesting parts to me is when I break through the fog of war and find out what's been happening on the other side of the globe after 60 turns. I'm interested in knowing if this current plateau can be used to make games more broad by applying the same coding to a wider scope, rather than more complex.

I agree with you, Shamus: Sony and Microsoft (and for that matter AMD) betting the farm on a ton of low-power cores in their CPUs has been rather disastrous if you look at the technical side of things. New N' Tasty has big framerate issues on those consoles, because the Unity engine it runs on is happiest with as much clock speed as possible, which those APUs just don't provide. My PC running a dual-core i3 has no problem with that game, because it barely gives a shit about the second core and enjoys the 3.1 GHz. The Dishonored "HD" collection has trouble maintaining 30 FPS on those machines, which is absolutely inexcusable for a port from the last generation of consoles, and I'm more than happy to point at the CPU for that too.

I see the economic side of it, and the influence of octo-core phones and tablets that need to use as little juice and generate as little heat as possible can't be overstated. I'm sure the big two figured that it would make heat less of an issue (avoiding RROD-like problems), and that game programming would reach some magical zenith where more cores make for more performance.

I'm perfectly happy with graphics now. I was perfectly happy in every generation, actually, because the games I was attracted to had a distinct and enjoyable art style and aesthetic. Textures really don't need to be higher res, shadows don't need to be more accurate, and we don't need a million piles of alpha-effects on the screen to make the games look better. I'd be fine with that arms race if the cost of production wasn't bankrupting publishers and causing such a safe and conservative mindset in the AAA space; a line has to be drawn somewhere before all the big players get sucked into a graphical vortex.

I guess my point is this: Amazing graphics have never made a shitty game worth playing through, and I've never put down an enjoyable experience because there wasn't enough eye candy.

Still, isn't the solution pretty obvious at this point - go back and reverse all the damage done by Wirth's Law by recoding our software from scratch to be more efficient? Or is game programming actually pretty efficient as it is, and it's everything else that's the problem?

With VR right around the bend, I disagree that graphics power isn't something we need to worry about. The graphics themselves may not be getting too much more complex anymore, but it won't be too many more years before we have VR hardware pushing 4K with full antialiasing at 90FPS. If nothing else, that's gonna require a TON of VRAM.

P.S. Thanks

Thunderous Cacophony:
Explain to the Luddite: If stuff that directly interacts with the player needs to be on a single core (or at least a small number of cores to allow serial processing), does that mean that the extra cores can be used to manage separate events, rather than ramping up the graphics?

For example, say I was making a zombie game optimized for a computer with 18 cores. I figure I need 8 for the actual player experience, and 4 more for the graphics (those proportions are pulled from nowhere). Could I then use the spare 6 cores to write what's happening in the rest of the city? For example, could you use that processing power to track herds of zombies as they move around distant areas of the city interacting with survivor outposts and natural events like forest fires, switching them over to the main cores as they got close enough to the player to become part of their game? Because they don't need to have their threads intimately connected with what's happening on a millisecond-to-millisecond basis, could you use that spare computing power to run large events in the background and report the results as necessary, instead of just trying to make things look better?

I'm a sucker for big strategy games, and one of the most interesting parts to me is when I break through the fog of war and find out what's been happening on the other side of the globe after 60 turns. I'm interested in knowing if this current plateau can be used to make games more broad by applying the same coding to a wider scope, rather than more complex.

Yes. You can do something like that if you prefer. I think the Stardock games do this (Galactic Civilization series) where they spend extra computing time on AI cycles. A more powerful computer will have better AI.
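
Roughly, the shape of that "spare cores simulate the rest of the city" idea looks like the sketch below. Everything in it is invented for illustration, and a real engine would use a job system rather than raw threads, but the division of labour is the point: a background thread coarsely updates distant herds and hands results to the game thread through a mutex-guarded queue.

```cpp
// Toy sketch (invented names; a real engine would use a job system, not raw
// threads): a background thread coarsely simulates distant zombie herds and
// hands results to the game thread through a mutex-guarded queue.
#include <atomic>
#include <chrono>
#include <mutex>
#include <queue>
#include <thread>

struct HerdUpdate { int herd_id; float x, y; };

std::mutex g_queue_mutex;
std::queue<HerdUpdate> g_pending_updates;
std::atomic<bool> g_running{true};

void SimulateDistantCity() {
    int tick = 0;
    while (g_running) {
        // Coarse, infrequent update: nobody is looking at these herds,
        // so once per second is plenty.
        HerdUpdate u{tick % 8, float(tick), float(tick * 2)};
        {
            std::lock_guard<std::mutex> lock(g_queue_mutex);
            g_pending_updates.push(u);
        }
        ++tick;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

void GameFrame() {
    // The main thread drains the queue once per frame and would promote any
    // herd that has wandered close enough into the full-detail simulation.
    std::lock_guard<std::mutex> lock(g_queue_mutex);
    while (!g_pending_updates.empty()) {
        HerdUpdate u = g_pending_updates.front();
        g_pending_updates.pop();
        (void)u; // ...check distance to the player, promote if nearby...
    }
}

int main() {
    std::thread world(SimulateDistantCity);
    for (int frame = 0; frame < 300; ++frame) {
        GameFrame();
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
    g_running = false;
    world.join();
}
```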

Honestly, I think both Sony and Microsoft under-powered their consoles' CPUs. If I've read the specs right, neither, despite a wealth of cores, is running at even 2 GHz; my nearly five-year-old PC is faster. Squeezing things into a sub-$500 box that can fit under a television without overheating has its drawbacks. The ongoing 720p-900p-1080p 30fps/60fps debacle certainly suggests that on that side, at least, a bit more graphics processing power might not have gone amiss, either. And companies like NVidia seem to be doing some interesting things in making graphics cards take over other normally CPU-intensive tasks like physics.

I guess the question becomes less about when we'll reach the limits of available hardware than about when the current limits of available hardware, or near enough, will come down in price to the point that typical consumers will have such devices in their homes.

On the plus side, recent news suggests breakthroughs in SSD technology mean we can soon expect 10TB SSD drives about the size of a stick of gum. And Microsoft is making all sorts of noises about how wonderful and efficient DirectX 12 will be; time will tell. We've still got space to grow for a time, though Shamus' 10-year projection may well still prove accurate.

Callate:

On the plus side, recent news suggests breakthroughs in SSD technology mean we can soon expect 10TB SSD drives about the size of a stick of gum. And Microsoft is making all sorts of noises about how wonderful and efficient DirectX 12 will be; time will tell. We've still got space to grow for a time, though Shamus' 10-year projection may well still prove accurate.

Samsung has already been showing off a 16TB SSD, and they say they will double that next year.

Xeorm:

Thunderous Cacophony:

I'm a sucker for big strategy games, and one of the most interesting parts to me is when I break through the fog of war and find out what's been happening on the other side of the globe after 60 turns. I'm interested in knowing if this current plateau can be used to make games more broad by applying the same coding to a wider scope, rather than more complex.

Yes. You can do something like that if you prefer. I think the Stardock games do this (Galactic Civilization series) where they spend extra computing time on AI cycles. A more powerful computer will have better AI.

While this kind of thing can be parallelised it's probably not the kind of thing that can really soak up that much processor time to great effect. This is probably more a time and game design limitation than anything.

@Thunderous Cacophony:

It's actually more complex than that. You cannot start processing certain events because they require output from operations that haven't completed yet, but, depending on complexity, events can be split into operations that can be processed in parallel. Actually writing code to do this directly, or writing code to detect this and split the load automatically, is difficult. This is usually a matter of the development cost not being worth the increase in performance; it tends to be a bigger deal in areas like supercomputing, where the difference is days versus weeks of run time.
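
To put the dependency point in code, here's a contrived sketch (all names invented): the damage calculation below cannot start until movement has resolved, no matter how many cores are available, while the two independent region updates can happily go to spare cores.

```cpp
// Contrived sketch of the dependency problem: ComputeDamage needs the output
// of ResolveMovement, so that pair can only ever run in series, while the two
// independent region updates can be farmed out to spare cores.
#include <future>
#include <iostream>

int ResolveMovement(int units)   { return units * 2; }   // stand-in work
int ComputeDamage(int positions) { return positions + 7; }
int UpdateRegion(int region_id)  { return region_id * 10; }

int main() {
    // Dependent chain: no number of cores helps here.
    int positions = ResolveMovement(50);
    int damage    = ComputeDamage(positions);   // must wait for `positions`

    // Independent work: safe to run in parallel.
    auto north = std::async(std::launch::async, UpdateRegion, 1);
    auto south = std::async(std::launch::async, UpdateRegion, 2);

    std::cout << damage + north.get() + south.get() << "\n";
}
```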

I'm not sure of the reason for Shamus' assertion about games being CPU-bound; current-generation games usually scale down their graphical fidelity if they cannot run on current systems. Certainly, graphics are easier to brute-force with computing power, but consoles are not at that point. Games that hit CPU limits are usually from the RTS genre, since they have to handle AI and pathing for hundreds of units at a time. These also tend to be uncommon on consoles, and the ones that do exist usually have vastly fewer units.

Thunderous Cacophony:
Explain to the Luddite: If stuff that directly interacts with the player needs to be on a single core (or at least a small number of cores to allow serial processing), does that mean that the extra cores can be used to manage separate events, rather than ramping up the graphics?

For example, say I was making a zombie game optimized for a computer with 18 cores. I figure I need 8 for the actual player experience, and 4 more for the graphics (those proportions are pulled from nowhere). Could I then use the spare 6 cores to write what's happening in the rest of the city? For example, could you use that processing power to track herds of zombies as they move around distant areas of the city interacting with survivor outposts and natural events like forest fires, switching them over to the main cores as they got close enough to the player to become part of their game? Because they don't need to have their threads intimately connected with what's happening on a millisecond-to-millisecond basis, could you use that spare computing power to run large events in the background and report the results as necessary, instead of just trying to make things look better?

I'm a sucker for big strategy games, and one of the most interesting parts to me is when I break through the fog of war and find out what's been happening on the other side of the globe after 60 turns. I'm interested in knowing if this current plateau can be used to make games more broad by applying the same coding to a wider scope, rather than more complex.

Unfortunately it's a lot more complicated than that. Shamus was going for a fairly simplified analogy just to explain the basic gist of it.

The biggest issues with multi-core programming are dependencies and shared resources.

With your RTS example, things might be able to work (but it would definitely not map to 18 times the speed; 10 might be optimistic). There would be a number of complications, though. Turn order would royally screw a lot of things up. If one player's actions are dependent on another player's actions, then you have to wait for the previous player. And you kind of need that dependency; otherwise you could have two players trying to move units to the same location, or two units moving to attack each other, each moving to where the other was on the previous turn.

Then you also have shared resources when making changes to the game board. If you have two processes trying to modify the same data, you run into race conditions: unpredictable behaviour that depends on the order in which the processes modify the data. Typically you deal with these by limiting the number of processes that can modify a structure at a given time, requiring all the others to wait.
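
Here's a minimal sketch of that, with everything invented for illustration: two threads bump the same counter, and the only thing keeping the result predictable is the lock.

```cpp
// Minimal sketch of a race condition and the usual fix: two threads both
// modify shared "game board" data, so access has to be serialized with a
// mutex, or the count comes out wrong on some runs.
#include <iostream>
#include <mutex>
#include <thread>

int g_units_on_tile = 0;     // shared game-board state
std::mutex g_board_mutex;    // guards the shared state

void MoveUnitsOntoTile(int count) {
    for (int i = 0; i < count; ++i) {
        std::lock_guard<std::mutex> lock(g_board_mutex); // other thread waits
        ++g_units_on_tile; // without the lock, this read-modify-write races
    }
}

int main() {
    std::thread a(MoveUnitsOntoTile, 100000);
    std::thread b(MoveUnitsOntoTile, 100000);
    a.join();
    b.join();
    std::cout << g_units_on_tile << "\n"; // 200000 with the lock; who knows without it
}
```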

Even beyond all this, code that works well with multiple cores is just a lot harder to write. You're going to have to invest a lot more programming and debugging hours to make sure you're not letting through an error that only shows up in a one-in-a-million situation.

However, there are situations without dependencies where you benefit from having a shit-ton of cores. Some examples of this are physics engines and fluid simulation. At this year's SIGGRAPH (the computer graphics conference), the majority of presenters were doing their computation work on the GPU instead of the CPU. Fluid simulation is hard because it requires you to compare each of tens of thousands of particles to tens of thousands of other particles, making hundreds of millions of calculations per frame [1]. All of these calculations are based on the last position of each particle, so you don't need to know the new position of the other particles to calculate the position of the one you're working on.
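
A toy version of that "read old state, write new state" pattern, with CPU threads standing in for the GPU and all the numbers made up:

```cpp
// Toy version of the "no dependencies" case: each particle's new position is
// computed only from the previous frame's positions, so the work can be
// chopped across cores freely. CPU threads stand in for the GPU here.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float x, y; };

void UpdateRange(const std::vector<Particle>& prev, std::vector<Particle>& next,
                 std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i) {
        float fx = 0.f, fy = 0.f;
        // Compare against every other particle's OLD position (the naive
        // O(n^2) pass the footnote talks about trimming down).
        for (std::size_t j = 0; j < prev.size(); ++j) {
            if (j == i) continue;
            float dx = prev[j].x - prev[i].x;
            float dy = prev[j].y - prev[i].y;
            float d2 = dx * dx + dy * dy + 1e-6f;
            fx += dx / d2;
            fy += dy / d2;
        }
        next[i].x = prev[i].x + 0.001f * fx;   // write only our own slot
        next[i].y = prev[i].y + 0.001f * fy;
    }
}

int main() {
    const std::size_t n = 2000;
    std::vector<Particle> prev(n), next(n);
    for (std::size_t i = 0; i < n; ++i)
        prev[i] = {std::sin(float(i)), std::cos(float(i))};

    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = n / cores + 1;
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        std::size_t b = c * chunk, e = std::min(n, b + chunk);
        if (b >= e) break;
        workers.emplace_back(UpdateRange, std::cref(prev), std::ref(next), b, e);
    }
    for (auto& w : workers) w.join();
}
```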

Actually, something that's becoming increasingly popular is executing code on the GPU for programs that have absolutely no intent of drawing anything.

[1] This is a bit of a simplification; you can cut this number down by ignoring particles that are too far away to have a noticeable effect.

"and the really taxing high-end graphics are so expensive to produce that only the top AAA developers can afford to use them."

Then how did GSC Game World do this:
https://youtu.be/YAYLHAPPkvw?list=PLD2B82E405CF9650C

And 4A made Metro: Last Light (still probably the best-looking game)... they are poor EVEN if we account for cheaper labor...
That is the only thing I don't get. That and Witcher 3... are they just BETTER developers with superior techies, or is there something else too? :P

Callate:
Honestly, I think both Sony and Microsoft under-powered their consoles' CPUs. If I've read the specs right, neither, despite a wealth of cores, is running at even 2 GHz; my nearly five-year-old PC is faster.

It is an Accelerated Processing Unit, which puts eight low-power CPU cores and the GPU on the same die. This has some advantages and disadvantages, but ultimately it isn't much more powerful than similarly priced PCs.

Callate:

I guess the question becomes less about when we'll reach the limits of available hardware than about when the current limits of available hardware, or near enough, will come down in price to the point that typical consumers will have such devices in their homes.

The limits of hardware are still a ways off, but more problematically, consumers, and to some extent businesses, aren't demanding more powerful hardware, so tech companies have neither the volume for cheaper parts nor the incentive to research more powerful tech.

The best time to buy a new gaming rig is two years into a new generation. If this one is going to last another 8 years, then things are great. Like someone else said above, developers can focus on gameplay, innovations, level design and story. Who knows, maybe AAA will try some new things, maybe even take risks...

Xeorm:
I think the Stardock games do this (Galactic Civilization series) where they spend extra computing time on AI cycles. A more powerful computer will have better AI.

Wow, really? That really sucks!
The goal is to make the game as good as possible on all systems, for the best experience.
It would be really unfair if my sweet ass PC would kick my ass, just because I OC'ed it up to the heavens...
Really unfair, and probably unwanted by devs. Optional would be another story though; can NEVER have too many of those! :D

Honestly, multi-cores are doing just fine. It is not a coincidence that the very tasks which are most CPU intensive are also the most amenable to being split up. Graphics, physics, pathfinding (and related AI operations), fog of war - what do they have in common? They're operating over many objects in a wide area. This makes them expensive - but it also makes them multi-task-able. Operations like the cited input-effect example are CPU-trivial; very early games had the same limitations.

The problem is almost always a limitation in the software's design, not a limitation in the hardware's capability. However, devising clever software that can efficiently do the calculation is a very difficult thing to do, and most developers are on such a deadline that they just can't afford to do it. But, regardless, allow me to give an example that demonstrates what I mean.

I once had to write some code that ran a simulation on 1 million point particles for 1 million iterations per particle. That's a total of 1 trillion iterations! Fortunately, certain conditions would occur that allowed a particle to be dropped from the simulation for good after a certain point. Now, the naive solution to this problem is to set up a double for-loop that iterates over each advancement of the simulation and then over each particle, with an if-statement inside to check whether the particle still needs to be simulated. The strange thing is, this solution still takes an hour to run, because the computer is still doing 1 trillion iterations, even if some of those iterations don't do anything. This was indeed how I did the simulation at first.

Then a clever thing dawned on me. Since once a particle was taken out of the simulation it stayed out, why bother iterating over it further? The easiest way to force the particle out of iteration, without having to check it every time, was to structure my particles as a linked list. Then, instead of a for-loop, I used a while loop that iterated until it reached the end of the list. When a particle was forced out of the calculation, it was simply unlinked from the list. This meant that as the calculation proceeded, fewer and fewer iterations actually had to be done to achieve the equivalent of doing the full 1 trillion iterations naively. Even further, the linked list was allocated as a contiguous array so that it remained aligned in memory, reducing cache thrashing (which can bring even the fastest processor to its knees).

The combination of a linked list whose links are updated as particles are removed from the simulation, plus keeping the particle data contiguous in memory, took the code from 1 trillion iterations in 1 hour to the equivalent calculation in only 3 minutes! From 60 minutes of run time to 3: a factor-of-20 improvement from simply recognizing a better, more efficient way to do the same calculation (the linked list) that also took advantage of the intrinsics of the hardware (contiguous data to prevent cache thrashing).
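
A rough reconstruction of the trick (not the original code) looks something like this:

```cpp
// Rough reconstruction of the trick (not the original code): particles sit in
// one contiguous array (cache-friendly), and each one carries the index of the
// next live particle. A finished particle is unlinked once and never visited
// again, so each pass only walks the particles that still need work.
#include <vector>

struct Particle {
    double state = 1.0;
    int next = -1;   // index of the next live particle; -1 marks the end
};

int main() {
    const int n = 1000000;
    std::vector<Particle> particles(n);              // contiguous storage
    for (int i = 0; i < n; ++i) {
        particles[i].state = 1.0 + (i % 100) * 0.01; // spread of start values
        particles[i].next  = (i + 1 < n) ? i + 1 : -1;
    }
    int head = 0;

    for (int step = 0; step < 1000 && head != -1; ++step) {
        int prev = -1;
        for (int i = head; i != -1; ) {
            Particle& p = particles[i];
            p.state *= 0.999;                        // stand-in for real physics
            int next = p.next;
            if (p.state < 0.5) {                     // stand-in exit condition
                // Unlink: later passes skip this particle entirely.
                if (prev == -1) head = next;
                else particles[prev].next = next;
            } else {
                prev = i;
            }
            i = next;
        }
    }
}
```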

As an example of the effect of cache thrashing, I had an earlier program that did some calculations on a grid of numbers. Following the equations for the calculation, I was initially iterating by column and then by row; that is, column iteration on the outer loop and row iteration on the inner loop. This was exactly the wrong order, because it constantly caused the processor to mis-predict the next line of data needed. Consequently, the processor was being forced to constantly wipe and reload lines of data from main memory instead of having them ready in the processor cache. This is cache thrashing. My code took 35 minutes to run on the entire grid.

Then I figured out how to switch the iteration order without invalidating the calculation, iterating by row and then by column. Doing this kept the memory accesses contiguous, and the processor was able to properly predict the next data line and have it preloaded while it was working on the prior one. Doing it this way, my code took only 5 minutes to run! A factor-of-7 improvement!
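
The loop-order point in miniature, again just a sketch rather than the original code:

```cpp
// The loop-order point in miniature. C/C++ store a 2-D grid row by row, so
// putting the row on the outer loop and the column on the inner loop touches
// memory sequentially; the other order hops `cols` elements at a time and
// defeats the cache and the prefetcher.
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const std::size_t rows = 4000, cols = 4000;
    std::vector<double> grid(rows * cols, 1.0);      // row-major storage
    double slow_sum = 0.0, fast_sum = 0.0;

    // Thrashing pattern: column on the outer loop, row on the inner loop.
    for (std::size_t j = 0; j < cols; ++j)
        for (std::size_t i = 0; i < rows; ++i)
            slow_sum += grid[i * cols + j];

    // Friendly pattern: row on the outer loop, column on the inner loop.
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            fast_sum += grid[i * cols + j];

    // Same answer either way; wildly different run time on a big enough grid.
    std::cout << slow_sum << " " << fast_sum << "\n";
}
```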

Both these incidents happened a decade or more ago.

The point here is that the hardware is already hella, stupid fast, and has been for a long time now. It's our approach to developing efficient code, which sometimes requires some clever tricks, that is significantly lagging.

Thunderous Cacophony:
Explain to the Luddite: If stuff that directly interacts with the player needs to be on a single core (or at least a small number of cores to allow serial processing), does that mean that the extra cores can be used to manage separate events, rather than ramping up the graphics?

First, you are not a luddite if you are going on about multi-core processors. Just sayin.

There is a level of detail that will make a difference in a game; beyond that, no appreciable benefits will be had. That applies to more than graphics. Do you need to know that the unseen unit driving to a destination has a leaky left front tire? Or that it hasn't met the recommended maintenance level? No, that can all be covered with a simple die roll; one 20-sided die can cover whether the unit breaks down. Indeed, the breakdown itself might be incorporated as nothing more than a percentage decrease in combat efficiency. A minuscule decrease at that.
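
In code, the whole "does the distant truck break down" question collapses to something like this throwaway sketch (numbers invented):

```cpp
// Throwaway sketch of the "one d20 covers it" idea: instead of simulating tire
// pressure and maintenance logs for an unseen unit, roll a die and shave a
// little combat efficiency off on a bad result. All numbers are invented.
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> d20(1, 20);

    double combat_efficiency = 1.0;
    if (d20(rng) == 1) {            // 1-in-20 chance the unseen unit breaks down
        combat_efficiency *= 0.95;  // a minuscule decrease, as described above
    }
    std::cout << combat_efficiency << "\n";
}
```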

Now, just as the ever-increasing level of simulation detail approaches the wall of 'no appreciable gain', so does the programming effort. All that extra detail requires actual work to be done by humans (you, in this case), and frankly the game/software needs to ship at some point. Unless the game is a hobby project, in which case the acceptable level of detail/work/realism is defined differently.

When we're already bumping our heads on limitations on consoles, I don't think we're quite to that point yet.

Kenjitsuka:

Xeorm:
I think the Stardock games do this (Galactic Civilization series) where they spend extra computing time on AI cycles. A more powerful computer will have better AI.

Wow, really? That really sucks!
The goal is to make the game as good as possible on all systems, for the best experience.
It would be really unfair if my sweet ass PC would kick my ass, just because I OC'ed it up to the heavens...
Really unfair, and probably unwanted by devs. Optional would be another story though; can NEVER have too many of those! :D

The difference isn't too much. With AI you run very quickly into diminishing returns. Even then, I don't remember it being much of an improvement to have a much better computer. Plus, better AI means your allied friends are less derps than usual.

Rack:
While this kind of thing can be parallelised it's probably not the kind of thing that can really soak up that much processor time to great effect. This is probably more a time and game design limitation than anything.

No, not at all. Good AI is processor- and memory-intensive. So much so that the majority of the effort goes into minimizing the resources the AI needs, rather than making it strictly better.

Thanks for all the responses, folks; I know little about how computers actually work, and it's good to learn a bit more. It seems like the answer is "There are some situations where it can help, but it's not a cure-all." I guess I'm not secretly the greatest programming mind of my generation.

Reading through the comment of @geizr, I wonder how much of the coding for the next ten years is going to be people rebuilding engines to do things in a more efficient way, giving you all the practical benefits with none of those pesky hardware changes.

freaper:
Indeed, my PC is starting to run into trouble with the latest AAA games; I'm looking forward to having to upgrade, maybe for the last time.

We are not there yet; expect at least another decade of upgrades, and that's assuming materials science doesn't give us another huge leap in processing performance. This is just a what-if, but if a graphene-based processor could be made that allowed significantly higher clock speeds, like 100 GHz, the upgrade cycle would continue.

Things are slowing down, but we still have quite a ways to go.

It also depends on the genre of game. Turn-based asynchronous games can easily split up their processes, and the PC can divide and perform its tasks while waiting for the human player to finish his turn. Problems arise, of course, if one starts to hammer the end-turn button.

Fighting games, OTOH, have very little ability to divvy up work amongst cores, due to their nature of constantly, quickly processing. There's just no time for cores to chat between themselves.

Xeorm:

Kenjitsuka:

Xeorm:
I think the Stardock games do this (Galactic Civilization series) where they spend extra computing time on AI cycles. A more powerful computer will have better AI.

Wow, really? That really sucks!
The goal is to make the game as good as possible on all systems, for the best experience.
It would be really unfair if my sweet ass PC would kick my ass, just because I OC'ed it up to the heavens...
Really unfair, and probably unwanted by devs. Optional would be another story though; can NEVER have too many of those! :D

The difference isn't too much. With AI you run very quickly into diminishing returns. Even then, I don't remember it being much of an improvement to have a much better computer. Plus, better AI means your allied friends are less derps than usual.

Rack:
While this kind of thing can be parallelised it's probably not the kind of thing that can really soak up that much processor time to great effect. This is probably more a time and game design limitation than anything.

No, not at all. Good AI is processor- and memory-intensive. So much so that the majority of the effort goes into minimizing the resources the AI needs, rather than making it strictly better.

GalCiv doesn't punish you for having a faster processor; difficulty select is still in play. A better CPU just means that smarter enemies are now an option. Technically, you could force a higher difficulty level than your rig can handle, but the turn times will be positively glacial.

As for my question: How much of the recent CPU development slowdown is due to the shifting nature of the market? Since the proven successes of the first BlackBerrys/iPods, development moved away from faster CPUs toward more energy-efficient, cooler CPUs. While there's certainly parallel development between the two paths, how much has mobile device R&D taken away from home consoles and PCs? With the slowing of the mobile device market, will big CPU development heat up again?

P-89 Scorpion:

Callate:

On the plus side, recent news suggests breakthroughs in SSD technology mean we can soon expect 10TB SSD drives about the size of a stick of gum. And Microsoft is making all sorts of noises about how wonderful and efficient DirectX 12 will be; time will tell. We've still got space to grow for a time, though Shamus' 10-year projection may well still prove accurate.

Samsung has already been showing off a 16TB SSD, and they say they will double that next year.

Yes. But is it the size of a stick of gum? :D

Sorry, I'm spouting off a bit: Maximum PC had a recent article about how 3D NAND flash memory pioneered by Micron and Intel enables vertical chip-stacking that's more power- and space-efficient, thus the stick-of-gum thing. Samsung's drive is the 2.5-inch form factor that's currently more typical of SSDs.

Which is not to say I would turn up my nose at one if it were offered to me right now. Nearly five-year-old computer and all that. But as long as console manufacturers are going to insist on a package that's roughly the size of a VCR, I can imagine a tiny, power-efficient SSD drive being an attractive proposition, assuming the price on the technology drops as quickly as tech writers would like to believe.

DrunkOnEstus:

I'm perfectly happy with graphics now. I was perfectly happy in every generation, actually, because the games I was attracted to had a distinct and enjoyable art style and aesthetic. Textures really don't need to be higher res, shadows don't need to be more accurate, and we don't need a million piles of alpha-effects on the screen to make the games look better. I'd be fine with that arms race if the cost of production wasn't bankrupting publishers and causing such a safe and conservative mindset in the AAA space; a line has to be drawn somewhere before all the big players get sucked into a graphical vortex.

The obsession on both sides of the fence with MORE GRAPHICS, to the exclusion of everything else and regardless of the impracticality and infeasibility of it, is easily what is dragging down the video game industry more than anything else. Graphics should have been improving at a VERY slow rate: slow enough that they only improve as creating them becomes faster and easier with the same amount of time and people involved, so that the costs of development are kept as low as humanly possible.

I guess my point is this: Amazing graphics have never made a shitty game worth playing through, and I've never put down an enjoyable experience because there wasn't enough eye candy.

This^. To any reasonable person, as long as the graphics are good enough that you can tell what you're looking at, they are insignificant to the experience of any game; it's everything ELSE that is far, far more important, by an absolutely massive amount.

Are you sure about that "consoles/gaming are probably held back by their CPU" part? It is quite possible they used a far too slow CPU to begin with, but I was surprised how well my 2011 550€ PC (i5-2400) runs Witcher 3. The CPU sure as hell wasn't an issue.

I'd even say the currently slower advancement in CPU speeds is very interesting for that very reason: you probably don't need the biggest CPU possible, just one that is fast enough, as the difference after that gets negligible. Developers are also less likely to make their games too CPU-intensive, thus making them run on more systems and avoiding all kinds of hassle for customers. Well, unless you're a console manufacturer without any foresight who wants to save $10 per machine, I guess...

Clock speed levelling off has nothing to do with Moore's Law.

Moore's Law says nothing about clock speed; it's about performance. Just because clock speeds have stopped increasing does not mean the performance increase has also slowed. Improvements in architecture, transistor count, and memory bandwidth are far more important.

While the improvement in single-core performance is slowing (albeit slightly), the increase in overall multi-threaded performance is still holding true to Moore's Law. Programmers just need to start making better use of multiple cores.

Comparing the tailing off of clock speed increases to a slowdown in Moore's Law is a fundamental misunderstanding of the theory.

Compare the performance of, say, an 800 MHz Pentium III and a current-gen ARM core running at the same frequency. The current-gen ARM core will run rings around the P3.

The big problem is that we are reaching the limits of what the physical properties of the silicon die can do; transistors are getting to the point where they cannot be made much smaller. Intel has 10nm technology on its roadmap for 2017, and you cannot get much smaller than that without single-atom transistors.

There's plenty more concurrency to exploit in modern software -- in particular the move away from thread models and into message passing (Erlang-style) and fork/join models (OpenCL, CUDA). If there's graphics power to spare, you can always have it do serial tasks.

Consumer software has a long way to go to catch up to enterprise/embedded systems, but that mostly means there are more insanely expensive GPUs to be bought down the line! New architectures ahoy!

I've had a similar thought, at least about graphics, many times. The need for the tech has been pushing the industry. At first, I worried about what would happen when we hit the computing event horizon. But two things have changed my opinion. First is the resurgence of indies and their use of pixelated graphics. Though semi-unrelated forces caused both movements, it proved gaming would be fine as long as people had ideas they could interpret in gameplay. The technology was always there to do it in any fashion they wanted, but the key was the culture not being so addicted to the hottest and sexiest that simpler designs couldn't appeal.

The second was the potential of a universal format. With nowhere else to go, this would serve as a perfectly good reason to drop proprietary hardware and have a single format. The only snag in this nirvana is oddball Nintendo inventing new ways to play with the current tech.

But to be fair, their recent consoles are another solution to keeping the hardware generations going.

 
