
The future of PC graphics and gaming... my 10-year look ahead

jmotivator
With the release of the nVidia 3080 and 3090, we have, in theory, reached the upward limit of graphics output. The 3090, for example, can run a game at 8K @ 60fps.

There is a weird gray area in human vision vs. reaction: humans can seemingly react faster than our eyes can theoretically see (there is evidence that skilled gamers get benefits from frame rates up to 240, even though the eye doesn't technically process images that fast). That, however, has more to do with the precision of the mouse and the rate at which hit boxes update.

So 8K is, at normal viewing distance, higher resolution than the eye can resolve, and anything over 60fps is arguably beyond human perception... so what is the next big leap for graphics?

I think the easy answer is pushing polygons, but that hits its theoretical ceiling at about 33 million, at which point each pixel on an 8K display represents a polygon. It could be argued that the theoretical cap is really 11 million polygons, since a true polygon would need at least 3 pixels.
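
A quick back-of-the-envelope check of those figures, assuming an 8K UHD display of 7680 x 4320 pixels (Python, for illustration only):

# Pixel budget of an 8K UHD display and the implied polygon ceiling.
width, height = 7680, 4320
pixels = width * height
print(pixels)        # 33,177,600 -> the "about 33 million" figure
print(pixels // 3)   # 11,059,200 -> the "about 11 million" figure if a polygon needs >= 3 pixels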

Complex game models today have polygon counts in the hundreds of thousands, so I don't think the polygon cap is really worth dwelling on; we will hit the theoretically useful limit soon, if the 3090 isn't already capable of it.

I think the next big move in graphics will be in two old technology pathways that were until fairly recently limited by a processing bottleneck: Voxels and Physics.

Voxel graphics have been around forever and have been very useful for creating moldable terrain. They turn the virtual world into a set of discrete 3D pixels (usually cubes); Minecraft is a rudimentary voxel engine, for example. But as graphics cards push their FLOP counts to supercomputer levels, it is becoming plausible that in a few years pixel counts will no longer matter (much as 2D render speeds eventually became a pointless comparison), and we will measure graphics in voxels per second (VPS).
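
As a rough illustration of the idea, here is a minimal sketch of a voxel world stored as a dense 3D grid of material codes; the grid size, materials, and "crater" rule are made up for illustration, not from any real engine:

# Tiny voxel world: a 32 x 32 x 32 grid where each cell holds a material code.
AIR, DIRT = 0, 1
N = 32
world = [[[AIR for _ in range(N)] for _ in range(N)] for _ in range(N)]

# Fill the bottom half of the grid with "dirt" terrain.
for x in range(N):
    for z in range(N):
        for y in range(N // 2):
            world[x][y][z] = DIRT

# Moldable terrain: carve a spherical crater by setting voxels back to air.
def carve_sphere(cx, cy, cz, r):
    for x in range(max(0, cx - r), min(N, cx + r + 1)):
        for y in range(max(0, cy - r), min(N, cy + r + 1)):
            for z in range(max(0, cz - r), min(N, cz + r + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r:
                    world[x][y][z] = AIR

carve_sphere(16, 15, 16, 4)   # dig a crater out of the terrain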

On top of that, I would not be surprised to see a return to discrete physics cards, which would have their own drivers performing a whole host of calculations on virtualized material physics. For instance, a 3D world could be generated and the artist would assign a voxel-based block a material identifier like "Concrete"; when that object was struck by a small, fast-moving voxel-based entity with the identifier "lead", the physics card would calculate the impacts, fractures, and relative speeds of the particles, then hand off to the graphics card/engine where and how to render the resulting voxels.
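
A toy version of that material-identifier idea; the hardness numbers and the fracture rule below are invented for illustration, and a real physics accelerator would obviously be far more sophisticated:

# Crude material/impact model with made-up values, just to show the handoff idea.
MATERIALS = {
    "concrete": {"hardness": 40.0},
    "lead":     {"hardness": 15.0},
}

def impact(block_material, projectile_material, projectile_speed):
    # Fracture the block if the projectile's "energy" beats a hardness threshold,
    # then the resulting debris voxels would be handed off to the renderer.
    energy = MATERIALS[projectile_material]["hardness"] * projectile_speed
    if energy > MATERIALS[block_material]["hardness"] * 10:
        return "fracture"
    return "bounce"

print(impact("concrete", "lead", 400.0))   # fast lead projectile -> "fracture"
print(impact("concrete", "lead", 10.0))    # slow projectile -> "bounce"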

The one outcome of this would be a temporary return to solo gaming for the wow-factor experiences, until network speeds are fast enough to share that much change data... OR we would see the birth of dumb-terminal gaming, where all users connect remotely to a central data center and all user interactions are managed server side, with only control inputs and screen draws exchanged between the end user and the server... like the ultimate version of game streaming.
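
In code, that dumb-terminal model reduces to something like the loop below; this is a sketch only, and the host name, port, and wire format are hypothetical:

import socket
import struct

def read_controller():
    return 0                                             # stub: pretend no buttons are pressed

def display(frame_bytes):
    print(f"drew a {len(frame_bytes)}-byte frame")       # stub: a real client would decode and present

def stream_loop(host="gameserver.example", port=7777):
    with socket.create_connection((host, port)) as sock:
        while True:
            # Upstream: nothing but a tiny packet of control input.
            sock.sendall(struct.pack("!I", read_controller()))
            # Downstream: a length-prefixed, already-rendered frame from the data center.
            size = struct.unpack("!I", sock.recv(4))[0]
            display(sock.recv(size))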

So I guess that is three things to look for in the next 10 years:

1) Voxel graphics accelerators
2) Discrete Physics accelerators
3) Growth in server side gaming
 
I should proofread these things. In my defense I was doing three things at the same time...

Oh crap... I should have proofread that company email before hitting send... :lamo
 
Everything will be monetized, so you pay $0.10 for every step you take in a game.
 
Since I do zero game creation or modding I have no idea what all that meant, but it sure sounded interesting!! Basically, my understanding is that with a higher-powered graphics card I get faster rendering and more "stuff" to see. With that understanding, and my personal interest in how better graphics capabilities improve my gaming experience, I've been really impressed with the attention to detail in landscape renderings. I started to pay more attention to that kind of thing with Morrowind, but more recent games such as Red Dead Redemption 2 and The Witcher 3 have gorgeous scenery. In all these games, however, the NPCs still seem to move unnaturally. I've seen great improvement in skin and clothing textures, but all the NPCs seem to walk at the same speed, keep their heads pointed in the same direction, adopt the same dozen or so poses, etc. They also never move ANYTHING in their villages. If there's a barrel or a bale of hay next to a barn one day, it will be there every time you go by that spot, even though the folks in town are supposed to be working.

I guess that isn't really a graphics issue as much as it is an AI issue, but if we could see as much attention put into creating a more dynamic environment as we have seen put into textures and landscapes, I figure that would improve the gaming experience in my eyes.
 

Yes, a graphics card is essentially a high-powered math co-processor. What graphics processors excel at is floating-point (i.e., decimal) calculations. This is essential for doing the billions of geometry calculations needed to work out the visible shape of polygons (usually three-pointed 2D objects in a virtual 3D space) from changing perspectives. For example, consider this virtual icosahedron (a 20-sided polyhedron):

[Attached image: a rotating icosahedron]

As it "turns" in virtual space, the graphics card calculates the angles of all of the triangles relative to the virtual camera position and then flattens that into a 2D image to draw a single frame (a 2D composite of all polygons at a point in time).
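
That per-frame work boils down to rotating vertices and projecting them onto the screen plane. Here is a minimal single-vertex sketch; the focal length and camera distance are arbitrary illustration values:

import math

def rotate_y(x, y, z, angle):
    # Rotate a point around the vertical (Y) axis.
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(x, y, z, focal=2.0, camera_z=5.0):
    # Flatten a 3D point to 2D with a simple pinhole projection (divide by depth).
    depth = camera_z - z
    return (focal * x / depth, focal * y / depth)

vertex = (1.0, 1.0, 1.0)                  # one corner of a polyhedron
for frame in range(4):
    rotated = rotate_y(*vertex, angle=frame * 0.1)
    print(project(*rotated))              # 2D screen position of that corner, frame by frame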

In normal 3D rendering, objects are essentially empty shells composed of thousands of such polygons.

Voxels are a different approach, where everything in the world is built from a single, repeating polyhedron (usually a cube); rather than being an empty shell, the cubes run all the way through the object (think a cardboard box vs. a solid block of Legos).
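
The storage difference between the two approaches is easy to see with a little arithmetic (the cube size below is arbitrary):

# Hollow "shell" vs solid voxel representation of an N x N x N cube.
n = 64
solid = n ** 3                     # every voxel stored, inside and out
shell = n ** 3 - (n - 2) ** 3      # only the one-voxel-thick outer layer
print(solid, shell)                # 262144 vs 23816 -> roughly 11x more data for the solid version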

I think that with advances in voxel technology and physics acceleration you could, in theory, do away with motion capture and clunky, canned movements, use a set of rules that govern how human beings walk, and let the engine render everything in real time, given enough horsepower.
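
A toy example of what "a set of rules that govern how human beings walk" might look like in its simplest possible form; the stride rate and swing amplitude are made-up parameters, not from any real animation system:

import math

def leg_angles(t, stride_rate=2.0, max_swing=0.6):
    # Gait rule: the two legs swing as out-of-phase sine waves instead of playing canned keyframes.
    phase = t * stride_rate
    left = max_swing * math.sin(phase)
    right = max_swing * math.sin(phase + math.pi)
    return left, right

for frame in range(5):
    print(leg_angles(frame / 60))   # leg angles in radians, sampled at 60 fps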

Modern games fool you into seeing movement as less canned and more natural by, for instance, turning off the canned animations on a character that is supposed to be unconscious or dead. The "rag-doll physics" of the lifeless body imparts the illusion that the body had individual purpose beforehand.

Also, now that you hint at it, I have to post this because it is hilarious and accurate...


 
OP, you are wrong about how human eyes work, and this pokes many holes in the theory.

(also, the 8k/60fps is based on upscaling images that are actually rendered at lower resolutions, so there's tons of room left in increased GPU horsepower)
 
All I know is that it's too expensive to try and keep up.
 
All I know is that it's too expensive to try and keep up.

You certainly can dump pretty absurd amounts of money into PC gaming for marginal increases.

Years back there was a (terrible) tech YouTube channel that built a $30,000 PC setup that let seven people game on the same machine (it had seven monitors, keyboards, mice, etc.). Of course, those seven people could have just bought seven computers and the result would have been way, way better...
 
All I know is that it's too expensive to try and keep up.
I try to dump enough money into a machine that I'll get 5-7 years out of it.

I went pretty high end in 2014 when I had my previous machine built. It was and still is fine for most of what's on the market. The only thing that was messing me up was lack of memory but that was easy to resolve with external hard drives. Anyway, it had been 6 years and I kind of wanted a new machine anyway so I dropped $3k on my current machine but it should serve well for another 5-7 years or more.
 
I try to dump enough money into a machine that I'll get 5-7 years out of it.

I went pretty high end in 2014 when I had my previous machine built. It was and still is fine for most of what's on the market. The only thing that was messing me up was lack of memory but that was easy to resolve with external hard drives. Anyway, it had been 6 years and I kind of wanted a new machine anyway so I dropped $3k on my current machine but it should serve well for another 5-7 years or more.
I don't play games enough to make it worth it.
 
Personally I think 120fps is about as high as is reasonable before you hit diminishing returns; beyond 120fps is... nice for those who like to pump up the numbers. But I recently got the Series X and I have a TV with 120Hz support, and it is really, really nice, I gotta say; the smoothness of the image is something else. I upgraded directly from the Xbox One, which was a very underpowered machine, so it was nice to have such a massive jump in performance. Those extra pixels and that extra performance really show.

But I agree with you Jm, I think we have reached the upper limits of what hardware can really give us.

The only thing I would say, though, is that faster than we can flick a switch we will begin the transition from local hardware to cloud gaming, with internet speeds reaching heights that allow it, at least in urban areas. I don't know what that will do to traditional hardware and graphical fidelity, but I think after that the switch to far more interactive experiences begins, and I'm talking holograms, VR, AR and the rest of it. I know it sounds far-fetched now, but because I agree with you that we have reached the upper and almost final limits of performance and fidelity on traditional hardware, I don't really see where else we can go besides that.

And VR, despite its relative success, is still niche, very expensive, and lacking the killer apps to make the expense worth it at this time. I think this will change quickly and lead to the aforementioned movement into those mediums.

That's my take anyway.
 
Graphics have been improving over the last decade, but not by as much as in decades past. Games look more polished than they did a decade ago, but the fundamentals have not taken any big leaps; it seems we've hit a roadblock. This is not just software but hardware: PCs are nearing the limits of the existing architecture, and manufacturers have mostly been finding ways to improve performance within that architecture rather than making major hardware leaps.

On the software side only so much can be done as well; there needs to be another breakthrough in architecture to support new hardware, which can then support new software that makes leaps in graphics. This is not just a bit-width issue like the change from 32-bit to 64-bit, but a bandwidth issue as well. The limitations of binary itself are also showing through, and researchers have explored multiple alternatives.

One proposed idea is ternary computing, which uses three digit values instead of two; the Soviet Union built such computers in the 1960s, but they were largely abandoned in favor of binary. Another is quantum computing, which is very promising but nowhere near ready for prime time. The last one I will bring up is a long shot: analog computing, which can be much faster since there is no digital processing step, but analog circuits introduce errors, and it is nearly impossible to produce analog circuitry small enough to support home computing anytime in the near future.
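
For anyone curious what "three digit values instead of two" means in practice, here is a minimal base-3 conversion (plain unbalanced ternary for simplicity; the Soviet machines actually used balanced ternary with digits -1, 0, +1):

def to_ternary(n):
    # Represent a non-negative integer in base 3.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 3))
        n //= 3
    return "".join(reversed(digits))

print(to_ternary(42))   # "1120": 1*27 + 1*9 + 2*3 + 0*1 = 42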

There are likely many other proposed ideas I did not mention. However, it is clear this is not just a resolution issue or just a software issue; we are simply near the limits of the current PC, and as such improvements will get smaller and smaller as designers try to squeeze out what they can from a limited architecture.

Edit: one thing that may make games much better for another decade is returning to tight, efficient coding, like in the 1980s, when bloated software could not be papered over with more hardware, so coders had to use every resource with minimal waste; there were GUI operating systems that could run off a floppy disk in 64 KB of RAM. Such efficiency is possible in future games, but I doubt any company would go that far unless hardware absolutely hit a brick wall, since that level of optimization requires top-end coders who want a lot of money and need a lot of time to make such a game.
 
I'm surprised nobody has mentioned ray tracing yet. That's the future of photorealistic graphics.

The OP is right that all these advances will cost a lot of computational power.
 
I'm in the process of building a new gaming rig, though I don't think I'll be in quite that deep. It was a great read.
 
I'm surprised nobody has mentioned ray tracing yet. That's the future of photorealistic graphics.

The OP is right that all these advances will cost a lot of computational power.
The problem is we are nearing the brick wall on hardware, so we are approaching the point where either coders need to be highly efficient to push such advances on current hardware, or the next generation of hardware (quantum, ternary, etc.) needs to be introduced, as the current 64-bit binary architecture is nearing its limits with not much room left to improve overall.
 

I'm in the process of building a new gaming rig around the AMD Ryzen 5 5600X, making my damn-near-new Nvidia 1060 obsolete. Time to put the fam on short rations again. LOL.

The biggest hurdle is lag. How long before game companies spec “quantum internet, 8,000 qubits per nanosecond, required”?
 
The problem is we are nearing the brick wall on hardware, so we are approaching the point where either coders need to be highly efficient to push such advances on current hardware, or the next generation of hardware (quantum, ternary, etc.) needs to be introduced, as the current 64-bit binary architecture is nearing its limits with not much room left to improve overall.

The limiting factor on computational power is becoming the machine's ability to dump heat.

Many servers are liquid-cooled. It wouldn't surprise me to see home computers in the near future adapt the same technology.

Like all nascent technologies, that would be expensive at first but gradually become cheaper.
 
The limiting factor on computational power is becoming the machine's ability to dump heat.

Many servers are liquid-cooled. It wouldn't surprise me to see home computers in the near future adapt the same technology.

Like all nascent technologies, that would be expensive at first but gradually become cheaper.
The limiting factor is actually not the ability to dump heat. Nearly two decades ago there were Pentium 4s and Celerons clocked as high as many modern chips, yet they performed slower. The issue is bandwidth and how it is utilized: multi-core processors worked better than highly clocked, heat-pumping single cores (when software allowed it) because the bandwidth and processing were distributed among multiple cores.

Nowadays we still have the same problem: no matter how fast a CPU or GPU is, it can only work as fast as its bandwidth allows. Adding more clock speed and more heat is like building a 1-million-GPH water pump but only having lines that can handle 1,000 GPH. The processor does no good when the data bus is slower than what the CPU can handle.
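
The pump-and-pipe analogy is literally just a min() over the stages involved (the numbers below are illustrative, not real hardware figures):

def effective_throughput(compute_rate, bus_rate):
    # The pipeline can only move data as fast as its slowest stage.
    return min(compute_rate, bus_rate)

print(effective_throughput(compute_rate=1_000_000, bus_rate=1_000))   # the 1,000 GPH "pipe" wins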

Bus limitations have been attacked from multiple directions, from fiber optics to going beyond binary, and many others. I work in automotive, and car computers with their CAN bus networks are hitting a similar issue to home computers: it is not what a processor can handle, but how fast data can be sent into and out of that processor.


A good example: a long time ago I had a 3.8 GHz Celeron with 1.25 GB of RAM, and my brother had an AMD machine at around 2 GHz with 512 MB of RAM. His computer ran nearly twice as fast. In the end I realized the bus speed on that AMD processor was close to the speed of the processor itself, while the Celeron was severely bottlenecked. This is an example of how bus speeds can matter more than clock speed and raw power; it is also why Intel focused so much on areas outside raw power and clock speed and was nearly undisputed in the market until recently with its i5 and i7 processors.


In short, we have a bandwidth and bit-width limitation. I doubt it will halt advancement, as the bandwidth we have today was unforeseen in the 1980s. I imagine that, like the computers of the '80s and '90s, we will hit a brick wall before the next big leap in tech; until that leap, advancements will be small rather than big due to the limitations of existing platforms.
 
Naww. Just think how fast those GPUs could mine bitcoin!
 
Interesting announcements from Epic last week; the next iteration of Unreal Engine should probably have nVidia worried.

If you all haven't seen, Epic unveiled their plans for Unreal Engine 5 last week, which will, if the announcements are to be believed, send a shudder through card manufacturers, specifically nVidia.

Unreal Engine will, they report, include a new software alternative to ray tracing that is, by their account, more powerful than ray tracing while not even being designed to use the RT cores on graphics cards. Their new lighting system was demoed on old 1080 cards to show how lightweight it is compared to much less efficient RT brute-force methods.
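
To be clear about what "lighting in software" means at its most basic, here is a generic diffuse (Lambert) term computed with plain arithmetic and no RT hardware at all; this is a textbook illustration, not Epic's actual technique:

def lambert(normal, light_dir):
    # Diffuse brightness = clamped dot product of the surface normal and the light direction.
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    return max(0.0, nx * lx + ny * ly + nz * lz)

print(lambert((0.0, 1.0, 0.0), (0.0, 0.7071, 0.7071)))   # ~0.707 for a light at 45 degrees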

It is possible, so I have heard, that Unreal will be able to leverage RT cores, but they don't seem to think it will be necessary.

If Unreal can natively bring RTX-style lighting to cards without RT cores, then a large part of the current nVidia design becomes obsolete, and AMD cards, which are currently awful at ray tracing, become far more desirable.

CAVEAT 1: These kinds of announcements are rarely all that accurate.

CAVEAT 2: Even if true, Unreal Engine 5 games are likely years away.

CAVEAT 3: Even if true, the number of games today that even implement ray tracing is low.

CAVEAT 3.1: If Unreal does enhanced lighting natively, maybe more games will implement it...

CAVEAT 4: Ease of development is still the name of the game, and if developers don't develop in Unreal 5 then this won't matter....
 
There will certainly be a cap for our human senses where they cannot perceive anything higher.

We are probably getting pretty close to it; after that it will just be improvements to game design.

Look at Skyrim and how modders have used that same technology to vastly improve the game in all areas.
 