I am probably unqualified to speak about this, as I am using an RX 550 low profile and a 768P monitor and almost never play newer titles, but I want to kickstart a discussion, so hear me out.
The push for more realistic graphics has been going on for longer than most of us can remember, and it made sense for most of its lifespan, as anyone who has looked at an older game can confirm - I, for one, have fun making fun of weird-looking 3D people.
But I feel games’ graphics have reached the point of diminishing returns. Today’s AAA studios spend millions of dollars just to match the graphical level of their previous titles - often sacrificing other, more important things along the way - and people are unnecessarily spending lots of money on power-hungry, heat-generating GPUs.
I understand getting an expensive GPU for high-resolution, high-refresh-rate gaming, but for 1080p? You shouldn’t need anything more powerful than a 1080 Ti for years. I think game studios should slow down their graphical improvements, as they are unnecessary - in my opinion - and just prevent people with lower-end systems from enjoying games. And who knows, maybe we will start seeing 50-watt gaming GPUs that are viable, capable of running games at medium/high settings, and going for cheap - even iGPUs render good graphics now.
TLDR: why pay more and hurt the environment with higher power consumption when what we have is enough - and possibly overkill?
Note: it would be insane of me to claim there is no big difference between the two pictures - Tomb Raider (2013) vs Shadow of the Tomb Raider (2018) - but can you really call either of them bad, especially the right picture (five years old)?
Note 2: this is not much more than a discussion starter that is unlikely to evolve into something larger.
Is it diminishing returns? Yes, of course.
Is it taxing on your GPU? Absolutely.
But, consider Control.
Control is a game made by the people who made Alan Wake. It’s a fun AAA title that is better than it has any right to be. Packed with content. No microtransactions. It has it all. The only reason it’s as good as it is? Nvidia paid them a shitload of money to put raytracing in their game to advertise the new (at the time) 20 series cards. Control made money before it even released thanks to GPU manufacturers.
Would the game be as good if it didn’t have raytracing? Well, technically yes. You can play it without raytracing and it plays the same. But it wouldn’t be as good if Nvidia hadn’t paid them, and that means raytracing has to be included.
A lot of these big budget AAA “photorealism” games for PC are funded, at least partially, by Nvidia or AMD. They’re the games you’ll get for free if you buy their new GPU that month. Consoles are the same way. Did Bloodborne need to have shiny blood effects? Did Spiderman need to look better than real life New York? No, but these games are made to sell hardware, and the tradeoff is that the games don’t have to make piles of money (even if some choose to include mtx anyway).
Until GPU manufacturers can find something else to strive for, I think we’ll be seeing these incremental increases in graphical fidelity, to our benefit.
This is part of the problem, not a justification. You are saying that companies such as Nvidia have so much power/money that the whole industry must waste effort making more demanding games just to keep their products relevant.
Right? “Vast wealth built on various forms of harm is good actually because sometimes rich people fund neat things that I like!” Yeah sure, tell that to somebody who just lost their house to one of the many climate-related disasters lately.
I’m actually disgusted that “But look, a shiny! The rich are good actually! Some stupid ‘environment’ isn’t shiny cool like a videogame!” has over fifty upvotes to my one downvote. I can’t even scrape together enough sarcasm at the moment to bite at them with. Just… gross. Depressing. Ugh.
Maybe it’s not what you’re saying but how you’re saying it.
Yeah, it’s too bad that that’s always true for every game and not just ~30 AAA titles a year.
Too bad Dave the Diver, Dredge, and Silksong will never get made 😔
So the advantage is that it helps create more planned obsolescence and make sure there will be no one to play the games in 100 years?
Is that a real question? Like, what are we even doing here?
The advantage is that game companies are paid by hardware companies to push the boundaries of gamemaking, an art form that many creators enjoy working in and many humans enjoy consuming.
“It’s ultimately creating more junk so it’s bad” what an absolutely braindead observation. You’re gonna log on to a website that’s bad for the environment from your phone or tablet or computer that’s bad for the environment and talk about how computer hardware is bad for the environment? Are you using gray water to flush your toilet? Are you keeping your showers to 2 minutes, unheated, and using egg whites instead of shampoo? Are you eating only locally grown foods because the real Earth killer is our trillion dollar shopping industry? Hope you don’t watch TV or go to movies or have any fun at all while Taylor Swift rents her jet to Elon Musk’s 8th kid.
Hey, buddy, Earth is probably over unless we start making some violent changes 30 years ago. Why would you come to a discussion on graphical fidelity to peddle doomer garbage, get a grip.
reminder to be nice on our instance
Sorry hoss lost my cool won’t happen again 😎
cheers m8 💜 it happens
Would the game be as good if it didn’t have raytracing? Well, technically yes. You can play it without raytracing and it plays the same. But it wouldn’t be as good if Nvidia hadn’t paid them, and that means raytracing has to be included.
To place another point on this: Control got added interest because their graphics were so good. Part of this was Nvidia providing marketing money that Remedy didn’t have before, but I think the graphics themselves helped this game break through the mainstream in a way that their previous games did not. Trailers came out with these incredible graphics, and critics and laygamers alike said “okay I have to check this game out when it releases.” Now, that added interest would mean nothing if the game wasn’t also a great game beyond the initial impressions, but that was never a problem for Remedy.
For a more recent example, see Baldur’s Gate 3. Larian plugged away at the Divinity: OS series for years, and they were well-regarded, but I wouldn’t say they quite hit “mainstream”. Cue BG3, where Larian got added money from Wizards of the Coast that they could invest into the graphics. The actual gameplay is not dramatically different from the Divinity games, but the added graphics made people go “this is a Mass Effect” and suddenly this is the biggest game in the world.
We are definitely at a point of diminishing returns with graphics, but it cannot be denied that high-end, expensive graphics drive interest in new game releases, even if those graphics are not cutting-edge.
This comment is 100% on-point, I’m just here to address a pet peeve:
Queue = a line that you wait in, like waiting to check out at the store.
Cue = something that sets off something else, e.g. “when Jimmy says his line, that’s your cue to enter stage left.”
So when you said:
Queue BG3, where Larian got added money from Wizards of the Coast that they could invest into the graphics.
What you meant was:
Cue BG3, where Larian got added money from Wizards of the Coast that they could invest into the graphics.
haha thanks, i thought it felt off when i was typing it.
“We are definitely at a point of diminishing returns with graphics,”
WTF. How can you look at current NeRF (neural radiance field) developments and honestly say that? This has never been further from the truth in the history of real-time graphics. When have we ever been so close to unsupervised, ACTUALLY photo-realistic (in the strict sense) graphics?
Wizards of the Coast didn’t give any funds to BG3; Larian actually paid them for the IP.
I played Control on a 10 year old PC. I’m starting to think I missed out on something
I played it on PS5 and immediately went for the higher frame rate option instead.
I think Ghostwire Tokyo was a much better use of RT than Control.
Also found Ratchet and Clank to be surprisingly good for 30fps. I can’t put my finger on exactly what was missing from the 60fps non RT version, but it definitely felt lesser somehow.
Until GPU manufacturers can find something else to strive for
Machine Learning / AI has been a MASSIVE driver for Nvidia, for quite some time now. And bitcoin…
Shadow can definitely look a lot better than this picture suggests.
The biggest advancements in game graphics have not occurred in characters, except for perhaps in terms of animation and subsurface scattering tech.
The main character always gets a disproportionate graphical resource allocation, and we achieved “really damn good” in that category a while ago.
Adam Jensen didn’t look that much better in Mankind Divided than he did in Human Revolution, but Prague IS SO MUCH MORE DETAILED than Detroit was.
Then there’s efficiency improvements in rendering brought by systems like nanite, material shader improvements, more detailed lighting systems and more efficient ambient occlusion.
Improvements in inverse kinematics are something I’m really excited about, as well.
Sounds like I need to try Mankind Divided. Ultra-detailed Prague as a level sounds badass.
It is. Adam works for a secret underground Interpol base in the middle of the city. There are abusive secret societies to dismantle, murder cases to solve, drug rings to bust, corrupt cops to beat up. Mankind Divided is a prime example of making a hub world medium-sized but super detailed being just as good, if not better, than huge and full of nothing.
I cannot elaborate on that, as I am unqualified - remember, I have never played newer titles.
My main point is that a headshot of the main character is not a good yardstick. The MC is always going to be rendered with enough oomph to look good, no matter the settings or game generation.
The difference in recent years has been in environment detail, material shading, and lighting - things you maybe can’t even enable due to playing on older hardware.
While I agree ray tracing is a total energy hog, that’s not the only area seeing advancement. Rendering pipelines like nanite enable better graphics AND lower power consumption.
Three thoughts:

1. I wonder if you would still have this take if you played a newer, high-quality AAA game on a high-end setup. I don’t mean to imply that your mind will definitely be blown - I really don’t know - but it would be interesting to see what doing so would do to your opinion.

2. Gaming is about entertainment. There is no denying that better/bigger/smoother/more immersive tends to add to the entertainment. So devs push those boundaries both for marketing reasons and because they want to push the limits. I have a hard time seeing a world in which game development as a whole says “hey, we could keep pushing the limits, but it would be more environmentally friendly and cheaper for our customers if we all just stopped advancing game performance.”

3. There are SO MANY smaller studios and indie devs making amazing games that can run smoothly on 5/10/15-year-old hardware. And there is a huge number of older games that are still a blast to play.
Another point in favour of new graphics tech: you mentioned you’re worried about artists needing to do more work. As someone who has done 3D work, I can tell you that it’s actually easier to make something photoreal. The hard part is making it look good within the limitations of a game engine: getting something that looks just as good with simpler material shaders and fewer polygons.
Tech like nanite actually eliminates the need for all that work. You can give the game engine the full-quality asset, and it handles all the difficult stuff to render it efficiently. This is why we are now seeing games that look as good as Unrecord coming from tiny new studios like DRAMA.
deleted by creator
I totally agree. And I would add some of my favorite games like Outer Wilds, Satisfactory, or The Witness to the list that look great but don’t try to be realistic. Their art style only serves the purpose of their respective core gameplay.
Satisfactory is a great example. In no way realistic looking but at the same time it is.
Mass Effect on replay definitely is not about looking realistic, but damn does it have a style
deleted by creator
Windwaker and TF2 as well.
I loved Borderlands cel shading art style for this reason.
I see where you’re coming from, but I don’t agree - well, at least not anymore. I used to. Thing is, we’ve reached a point, for me personally, where an old game doesn’t necessarily look bad anymore, as my brain will fill in the gaps, even as an adult. I can hardly do that with any of the really old games, as they lack polygons and details, but anything more modern is good enough. Art style is more important than graphics quality. And art style doesn’t exclude realism - the same way a real-life thing can look good or bad, even though both have real-world graphics, if you will.
I also don’t think it’s as easy as making every game comic-style or pixel art; for some genres it simply doesn’t work, as part of the gameplay is immersion in the world. Thankfully, though, we’ve reached a point where even indie developers can use something like Unreal Engine or Unity to create high-quality games. In the future this will advance further thanks to AI animation, voiceover, textures and even models.
The thought that today’s state of technology is enough and we should stop improving sounds pretty Amish to me.
The word you want is Luddite
Luddites, the original ones were pretty rad. They were anti tech for anti capitalist reasons.
I agree that Luddite is the more correct term since it’s more general now, but I hate that the term got warped over time to mean anyone that hates any new tech
I quite like what Cory Doctorow has to say about it. (Author of Little Brother, coiner of the term “enshittification” (and much much more obviously))
I didn’t mean we should stop improving, what I meant is we should focus more on the efficiency and less on the raw power.
I think game and engine developers should do both. If it’s possible to improve efficiency and performance it should be done. But at the same time hardware is improving as well and that performance gain should be used.
I’m kinda worried a little bit about recent developments in hardware, though. At the moment GPU power mostly increases with energy consumption and only a little with improved architecture. That was different some years ago. But in my eyes that’s a problem the hardware manufacturers have, not the game developers.
Performance is always about doing as much as possible with as little as possible. Making a game run faster automatically makes it more efficient, because the only way it can run faster is by doing less work. It’s just that whenever it can run faster, the game has more room for other things.
Its resource consumption and graphics output are directly linked. If you gain more efficiency, that gives you headroom to either reduce resource consumption or increase the graphics output - but you can’t maximize both. You have to decide what to do with the resources you have: use them, or don’t. You can’t do both at the same time.
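To put toy numbers on that headroom tradeoff (these figures are made up for illustration, not taken from any real game): an efficiency gain within a fixed frame budget can be spent either on more rendering work or on letting the GPU idle more.

```python
# Toy frame-budget arithmetic (illustrative numbers only).
# At 60 fps you get ~16.67 ms per frame.
FRAME_BUDGET_MS = 1000 / 60

# Suppose a rendering optimization cuts per-frame work from 16 ms to 10 ms.
old_frame_ms, new_frame_ms = 16.0, 10.0

# Option 1: spend the headroom on more graphics - fill the budget back up.
extra_work_ms = FRAME_BUDGET_MS - new_frame_ms  # ~6.7 ms for richer visuals

# Option 2: save power - render the same image and let the GPU sit idle
# for (roughly) the fraction of the frame it no longer needs.
duty_cycle = new_frame_ms / FRAME_BUDGET_MS  # 0.6, i.e. ~40% idle time

print(f"headroom: {extra_work_ms:.1f} ms, GPU busy fraction: {duty_cycle:.2f}")
```

The point is simply that the same efficiency gain buys one or the other, not both at once.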
To kinda reiterate the point in a different light, I’d like more games with cel shading, which can look somewhat timeless if done right. I think Risk Of Rain 2 is cel shaded, and it looks fantastic while not also being the centerpiece of the game.
Yes, it basically boils down to diminishing returns; it also eats up all the good potential for creative graphic design.
I don’t think higher graphics requirements hurt creativity, you can have an unrealistic looking game that is very GPU-intensive, I was mainly concerned about the costs and wasted money/efforts.
But lowering the graphics budget - and the budget in general - can make creativity/risk-taking a more appealing option for AAA studios.
Edit: I just noticed both sentences kind of contradict each other, but you get the point.
I understand the sentiment, but it seems like you’re drawing arbitrary lines in the sand for what the “correct” amount of power for gaming is. Why waste 50 watts of GPU (or more like 150 total system watts) on a game that something like a Steam Deck will run almost identically at 15 watts? Ten times less power for definitely not ten times less fidelity. We could go all the way back to the original Game Boy at 0.7 watts; the fidelity drops, but so does the power. What is the “correct” wattage?
I agree that the top-end GPUs are shit at efficiency and we could cut back. But I don’t agree that fidelity and realism should stop advancing. Some type of efficiency requirement would be nice, but every year games should get more advanced and every year GPUs should get better (and hopefully stay efficient).
I agree that I shouldn’t have set the arbitrary 50-watt figure; I just looked at my GPU and bigger ones and came up with that number.
I agree that the top-end GPUs are shit at efficiency and we could cut back.
According to Steam survey, 4090, 3090, 6900XT, and 7900 XTX combined are being used by about 1.7% of gamers.
This number is, of course, inflated (at least slightly) because people who have money to buy these cards are also more likely to buy games and people owning older/cheaper cards are more likely to be playing pirated copies.
The top-tier cards are a showcase of technological advancement. They are not really used by a large number of people, so there’s not much point in cutting them. It would only reduce the baseline for the next generation, leading to less advancement.
That’s a very good point, but a little misleading. A better number would add up all the top-tier cards from every generation, not just the past two. Just because they’re old doesn’t mean they aren’t still relatively inefficient for their generation.
If we kept the generations exactly the same but got rid of the top one or two cards, technological advancement would be happening just as fast. Because really, the top-tier cards are about the silicon lottery and putting in as much power as possible while keeping stable clocks. They aren’t different from an architecture perspective within the same generation; it’s about being able to sell the best silicon and more VRAM at a premium.
But as you said, it’s still a drop in the bucket compared to the overall market.
Because it impresses people and so it sells. If they didn’t do that, all those EAs and Ubisofts would have to find a new selling point like making their games good or something.
I’ve been honestly blown away with how newer games look since I upgraded my graphics card.
Remnant 2 is not even a AAA game but does such a good job with light and reflections that it looks better than anything released 5+ years ago.
Then you have games like Kena: Bridge of Spirits, which have a very nice art style but take advantage of current hardware to add particles everywhere.
I like seeing advances in graphics technology, but if the cost is a 10-year dev cycle and the game still comes out s-s-s-stuttering on high-end PCs and current-gen consoles, then scale back some.
I think we hit a point where it’s just not feasible enough to do it anymore.
“I want shorter games with worse graphics made by people who are paid more to work less and I’m not kidding”
EDIT: Never mind, I thought that was a sarcastic comment mocking the other user.
And what’s wrong with that, exactly? Would you prefer broken games made by underpaid and overworked people?
As for “worse graphics”: AC: Unity came out in 2014, The Witcher 3 in 2015, and Arkham Knight is also from 2015. All of those have technically worse graphics, but they don’t look much different from modern games that need much beefier systems to run.
And here’s AC: Unity compared to a much more modern game.
I’m pretty sure that’s in support of the concept.
Ah, my bad. I didn’t even realize it was a known quote; I just thought it was a sarcastic reply making fun of the other user.
It’s from a tweet. It’s earnest. You can google the quote to get more context.
You picked the absolute best examples of their respective years while picking the absolute worst example of the current year - that makes the comparison a bit partial, doesn’t it? Why not compare them to Final Fantasy XVI, or one of the remakes like Dead Space or Resident Evil 4? Or pick the worst example of previous years, like Rambo: The Video Game (2014)?

While good graphics don’t make a good game, better hardware lets devs spend less time making better graphics. Two of the three examples you gave have static lighting (ACU and BAK), while the bad example you gave has dynamic lighting. Baking static lighting into the map is a hugely time-consuming part of making a game; I can assure you, from second-hand experience, that at least one of those two games had to compromise gameplay because they couldn’t change the map after the light-baking was done.

And I’m just scratching the surface of the things that are time-consuming when making graphics as good as the games you mentioned. As an example, you have the infamous “squeezing through the gap” cutscene that a lot of last-generation AAA games had, because it allowed the game to load the next area. That was time spent choosing the best moments to do it, recording the scenes, scripting, testing, etc. - all because the hardware was a limiting factor. Now that consumers have better hardware, that isn’t a problem anymore, but consumers had to upgrade to allow it. The same was true for a lot of other techniques, like tessellation and GPU particles: consumers all had to upgrade to let devs make the game prettier at lower cost. And it will also be true of ray tracing and Nanite - both cut a LOT of dev time while making the game prettier, but require consumers to upgrade their hardware.

Graphics are not all about looks; they are also about saving dev time, which makes even the worst-looking graphics better.
If Rambo: The Video Game (2014) were made with today’s tech, it would look much better while costing the devs the same amount of time. Please don’t take my comment as a critique; I’m just trying to show that not everything is black and white, especially in something as complex as AAA development.
EDIT: I guess the absolute worst example of the current year would be Gollum, not Forspoken.
If Rambo The Video Game (2014) was made with the tech of today, it would look much better while costing the devs the same amount of time.
I don’t think this is quite correct. A while back devs were talking about a AAApocalypse. Basically as budgets keep on growing, having a game make its money back is exceedingly hard. This is why today’s games need all sorts of monetisation, are always sequels, have low-risk game mechanics, and ship in half broken states. Regardless of the industry basically abandoning novel game engines to focus on Unreal (which is also a bad thing for other reasons), game production times are increasing, and the reason is that while some of the time is amortised, the greater graphical fidelity makes the lower fidelity work stand out. I believe an “indie” or even AA game could look better today for the same amount of effort than 10 years ago, but not a AAA game.
For example, you could not build Baldur’s Gate 3 in Unreal. This is an unhealthy state for the industry to be in.
Yeah, I agree with everything you said. But what I was trying to say is that it is not all of the graphics push that is hurting production; I believe that in this generation alone we have many new graphics techniques aiming to improve image quality while taking load off the devs. Just look at Remnant II, which has the graphical fidelity of a AAA game on the budget of a AA. Also, some of the production time increase is due to the feature creep a lot of games have: every new game has to have millions of fetch quests, a colossal open-world map, skill trees, an online mode, crafting, a looting system, etc., even when it makes no sense for the game to have them. Almost every single game mentioned in this thread suffers from this - Batman is notorious for its Riddler trophies, The Witcher has more question marks on the map than actual map, and Assassin’s Creed… well, do I even need to mention it? So the production time increase is not all the fault of increased graphical fidelity.
I saw that episode. Can’t disagree.
I already wrote another comment on this, but to sum up my thoughts in a top comment:
Most (major) games nowadays don’t look worlds better than the Witcher 3 (2015), but they still play the same as games from 2015 (or older), while requiring much better hardware with high levels of energy consumption. And I think it doesn’t help that something like an RTX 4060 card (power consumption of a 1660 with performance slightly above a 3060) gets trashed for not providing a large increase in performance.
It’s not so much the card itself as the price and the misleading marketing around it (two versions to trick the average buyer, one with literally double the memory). It’s also a downgrade from the previous-gen, now-cheaper 3070. It’s corporate greed with purposeful misleading. If the card were 100 € cheaper, it would actually be really good. I think that’s the consensus among reviewers like GN, but don’t quote a random dude on the Internet, ahah.
It’s less the fact that there is a version with double the memory and more the fact that the one with less memory has a narrower memory bus than the previous generation, resulting in worse performance than the previous generation’s card in certain scenarios.
I don’t know about the 2 versions, but the 3070 bit is part of what I mean.
Price has been an issue with all hardware recently - even for other things, due to inflation in the last few years - but it’s not exclusive to the 4060. More importantly, from what I can tell, the 3070 has a 1.2x to 1.4x increase in performance in games, but it consumes about 1.75x the power (rough numbers, I’m kinda busy rn). Because I don’t have much time right now I can’t look at prices, but when you consider the massive difference in consumption, the price difference might start making more sense and only seem ridiculous if you just focus on power.
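Using the rough figures in the comment above (which are approximations from memory, not benchmark data), the performance-per-watt comparison can be sketched like this:

```python
# Rough perf-per-watt comparison, using the approximate figures from the
# comment above (1.2-1.4x performance at ~1.75x power) - not benchmark data.
def perf_per_watt(relative_perf: float, relative_power: float) -> float:
    """Performance per watt relative to a baseline card (baseline = 1.0)."""
    return relative_perf / relative_power

baseline_4060 = perf_per_watt(1.0, 1.0)        # 1.0 by definition
best_case_3070 = perf_per_watt(1.4, 1.75)      # 0.8
worst_case_3070 = perf_per_watt(1.2, 1.75)     # ~0.69

print(f"4060 baseline:   {baseline_4060:.2f}")
print(f"3070 best case:  {best_case_3070:.2f}")
print(f"3070 worst case: {worst_case_3070:.2f}")
```

Even at the optimistic end of the range, the 3070 comes out 20%+ less efficient than the 4060 under these assumed numbers, which is the point being made about consumption.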
still play the same as games from 2015
I wish they played more like games from the late 90’s, early 2000’s, instead of stripping out a lot of depth in favor of visuals. Back then, I expected games to get more complex and look better. Instead, they’ve looked better, but played worse each passing year.
Part of this is also just that, to a lot of companies, simpler = broader appeal, which translates to more money.
Sometimes I question whether it’s just that, or whether it’s also because they generally employ like a hundred artists to make models and textures and whatnot, but only have like five people doing the programming. Why is it not an even mix? Even if simplicity were a good thing, I’m pretty sure there would be fewer technical issues if they had more people doing the technical things and not just creating more visual assets. It’s not just the complexity of the game mechanics that has suffered; the quality and general stability of the software have declined, too.
Then again, the games I would like to see come back from back in the day were made by small teams. Bigger teams can also be problematic in and of themselves. Communication issues, versioning, making sure everything actually connects together when they are made by separate departments, etc. When it’s one dude with a vision and passion, it can actually be a better thing than something made by a huge team of thousands of people. 🤷🏻♂️
I remember seeing an article somewhere about this. Effectively, there are really bad diminishing returns with these game graphics: you could triple the detail, but there’s only so much that can fit on the screen, or in your eyes.
And at the same time, they’re bloating many of these AAA games’ sizes with all manner of garbage, while simultaneously cutting the corners of what is actually good about them.
There’s definitely something to be said about proper use of texture quality, instead of relying on VRAM to push detail. For games that go for realism, I think it’s interesting to look at games like Battlefield 1 - which even today looks incredible despite very clearly having low-quality textures. Makes sense: the game has you running around and only periodically stopping, so the dirt doesn’t need to be much more than some pixelated blocks. On the other hand, even just the ground of Baldur’s Gate 3 looks as polished as the rest of Battlefield 1’s visual appeal.
Both of these games are examples of polish put in the right places (in regard to visual aesthetics), and they seem to benefit from it greatly, without a high barrier to displaying it. Meanwhile, still-visually-compelling games like 2077 or RDR2 do look great overall but take so many more resources to push those visuals. Granted, there are other factors at play, like genre - which of course dictates other measures taken to maintain the ratio of performance to fidelity - and both of these games are much larger in scope.
I think in some cases there’s a lot of merit to it, for example Red Dead Redemption, both games are pretty graphically intensive (if not cutting edge) but it’s used to further the immersion of the game in a meaningful way. Red Dead Redemption 2 really sells this rich natural environment for you to explore and interact with and it wouldn’t quite be the same game without it.
Also, that example of Tomb Raider is really disingenuous: the level of fidelity in the environments is night and day between the two, as is the quality of the animation. In your example the only real thing you can compare is the skin shaders, which are not even close between the two. SotTR really sells that you are looking at real people - something the 2013 game approached but never really achieved, IMO.
If you don’t care, then good for you! My wallet wishes I didn’t, but it’s a fun hobby nonetheless to try to push things to their limits, and I am personally fascinated by the technology. I always have some of the fastest hardware every other generation, and I enjoy playing with it and doing stuff to make it all work as well as possible.
You are probably correct in thinking that, for the average person, we are approaching a point where they just really don’t care. I just wish they would push for more clarity in image presentation at this point; modern games are a bit of a muddy mess sometimes, especially with FSR/DLSS.
It mattered a lot more early on, because doubling the polygon count on screen meant you could do a lot more gameplay-wise: larger environments, more stuff on screen, etc. These days you can pretty much do what you want if you are happy to drop a little fidelity in individual objects.
Also, that example of Tomb Raider is really disingenuous: the level of fidelity in the environments is night and day between the two, as is the quality of the animation. In your example the only real thing you can compare is the skin shaders, which are not even close between the two. SotTR really sells that you are looking at real people - something the 2013 game approached but never really achieved, IMO.
I’ve noticed this a lot in comparisons claiming to show that graphics quality has regressed (either over time, or from an earlier demo reel of the same game), where the person trying to make the point cherry-picks drastically different lighting or atmospheric scenarios that put the later image in a bad light. Like, no crap Lara looks better in the 2013 image, she’s lit from an angle that highlights her facial features and inexplicably wearing makeup while in the midst of a jungle adventure. The Shadow of the Tomb Raider image, by comparison, is of a dirty-faced Lara pulling a face while being lit from an unflattering angle by campfire. Compositionally, of course the first image is prettier – but as you point out, the lack of effective subsurface scattering in the Tomb Raider 2013 skin shader is painfully apparent versus SofTR. The newer image is more realistic, even if it’s not as flattering.
I’m someone who doesn’t care about graphics a whole lot. I play most modern games at 1080p Mid/high on my RTX 3060.
And yet, I totally agree with your points. Many times, older games had rich-looking environments from a distance, but if you go close or try to interact with them, it just breaks the illusion. Like, leaves can’t move independently, or plants just don’t react to your trampling them, etc.
A lot of graphical improvements are also accompanied by improvements in how elements interact with other elements in the game. And that definitely adds to the immersion, when you can feel like you’re a part of the environment.
Yahtzee from The Escapist recently did a video on this exact topic.
Here’s the link for those interested.
I agree with everything he said. But I’ve also been saying things like that for thirty years. I remember complaining, when Morrowind came out, about companies using extra processing for shitty 3D graphics instead of sticking with high-quality 2D that works perfectly fine and putting that extra processing power to work on better AI or something.
I think the problem is that better graphics are the one thing they can do that will please a mass audience. Sure, there are plenty of other things they could be doing, but I would bet that each of them has a niche appeal with fewer fans to spread the cost among. Thus producers of “AAA” titles pretty much by definition have to pursue that mass audience. The question is when they reach that point of diminishing returns and it becomes more profitable to produce lower-cost niche titles for smaller audiences. And we also have to factor in the assumption our society has that anything with niche appeal is necessarily “lower” in status than mass-appeal stuff.
I think we are approaching that point, if we haven’t already reached it. Indie stuff is becoming more and more popular, and more prevalent. It’s just hard to tell because indie stuff tends to target a smaller but more passionate audience. For example, while I am looking forward to trying Starfield out, I may be too busy playing yet more Stardew Valley to buy it right away, and end up grabbing it in a sale. (I haven’t even really checked if it’ll run on my current gaming laptop.)
Holy shit, Yahtzee is still at it? Good for him.
Games don’t need better, more complex graphics. They need adequate time and resources during the development process. They need to actually be completed by their release date, not just barely playable. They need to be held to a higher standard of quality when publishers judge if they’re ready to sell.
I’m with you. I barely notice the changes in graphics, just my GPU fan speeds increasing over the years.
I’m more interested in games whose graphics look good enough, but that do more interesting things with the extra horsepower we have these days.
But what will all these hardware companies do if we stop raising system requirements?!
Yeah there seems to be a planned obsolescence element here overall!
Everything is ruined by marketing and its capitalist roots, and game development is no exception.
They push for fidelity just because it sells well; the fact that this creates a need for much more powerful hardware is not a drawback for them. It’s actually good, since it’s something you can profit on.
Games need artistic direction and vision much more than they need photorealism (which is great for some kinds of games, but not a universal standard).
Photorealism really only fits a few genres. Almost all indie games have some kind of interesting art style, like The Long Dark, for example.