• Canadian_Cabinet @lemmy.ca
    link
    fedilink
    arrow-up
    120
    arrow-down
    1
    ·
    1 year ago

    Unironically, I agree with this. I have yet to see a use case for ray tracing that makes it worth the 50% hit in FPS.

    • schema@lemmy.world
      link
      fedilink
      arrow-up
      57
      ·
      edit-2
      1 year ago

      A lot of the benefits of ray tracing are on the dev side of things. It’s a bit hard to explain without going into technical details.

      Essentially, getting light to look “right” is very, very hard. To do it, devs employ a lot of different techniques. One of the older techniques is baking the light on static objects, essentially pre-rendering where light goes and how it bounces. This has been done for a long time; even in Half-Life, the lights are baked for static geometry. So in a way, we have been using ray tracing in games for a long time. However, it isn’t real-time ray tracing, as the information gets stored in lightmap textures, so there is no performance impact beyond storing the texture in RAM/VRAM and drawing it together with the others.
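
      To make the lightmap idea concrete, here’s a tiny CPU-side sketch (the Lightmap struct and shade_static_surface are made-up illustrative names, not any engine’s API): at draw time, the baked result is just a texture fetch multiplied into the surface colour.

      ```cpp
      #include <array>
      #include <cstdio>

      // Toy "lightmap": a tiny grid of pre-baked light intensities (0..1),
      // produced offline by tracing rays against the static geometry.
      struct Lightmap {
          static constexpr int W = 4, H = 4;
          std::array<float, W * H> texels;

          // Nearest-texel lookup; real engines filter bilinearly and store RGB.
          float sample(float u, float v) const {
              int x = static_cast<int>(u * (W - 1) + 0.5f);
              int y = static_cast<int>(v * (H - 1) + 0.5f);
              return texels[y * W + x];
          }
      };

      // At draw time the baked light is just a texture fetch multiplied into
      // the surface colour; no runtime light computation at all.
      float shade_static_surface(float albedo, const Lightmap& lm, float u, float v) {
          return albedo * lm.sample(u, v);
      }

      int main() {
          Lightmap lm{{ // pretend these values came out of an offline bake
              0.1f, 0.2f, 0.4f, 0.8f,
              0.1f, 0.3f, 0.6f, 0.9f,
              0.2f, 0.4f, 0.7f, 1.0f,
              0.2f, 0.5f, 0.8f, 1.0f }};
          std::printf("lit surface: %.2f\n", shade_static_surface(0.5f, lm, 0.75f, 0.25f));
      }
      ```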

      The inherent problem with that technique is that it only really works for static geometry. If you move your light or any object in the scene, your lightmaps will no longer match. To solve this, there are mixed modes that use real-time lights, dynamic lightmaps, and other tricks. However, these are often subject to problems and/or the limitations of real-time lights: you can only have a limited number before taking a serious performance hit, especially if the lights cast shadows. Soft shadows, shadows over large areas, and very detailed shadows are also extremely hard to do without advanced tricks. Ambient occlusion and global illumination aren’t something you can just give to lights, either (there are screen-space GI and AO, but they don’t look good in all circumstances and you have limited control; some engines also have other techniques for real-time GI).
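
      As a rough sketch of why each real-time light costs so much (toy code with made-up names, not how any particular engine does it): forward-style shading evaluates every light for every shaded point, so the work scales with lights times pixels, before you even add a shadow map per light.

      ```cpp
      #include <cmath>
      #include <cstdio>
      #include <vector>

      struct Vec3 { float x, y, z; };

      static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
      static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
      static float len(Vec3 a)         { return std::sqrt(dot(a, a)); }

      struct PointLight { Vec3 pos; float intensity; };

      // Forward-style shading: every shaded point loops over every light, so the
      // cost is roughly (pixels x lights). Shadow-casting lights would each need
      // an extra depth pass on top of this, which is not even shown here.
      float shade_point(Vec3 p, Vec3 n, const std::vector<PointLight>& lights) {
          float lit = 0.0f;
          for (const PointLight& l : lights) {
              Vec3 to_light = sub(l.pos, p);
              float dist = len(to_light);
              Vec3 dir = {to_light.x / dist, to_light.y / dist, to_light.z / dist};
              float n_dot_l = std::fmax(0.0f, dot(n, dir));   // Lambert diffuse term
              lit += l.intensity * n_dot_l / (dist * dist);   // inverse-square falloff
          }
          return lit;
      }

      int main() {
          std::vector<PointLight> lights = {{{0, 2, 0}, 4.0f}, {{3, 1, 0}, 2.0f}};
          float v = shade_point({0, 0, 0}, {0, 1, 0}, lights);
          std::printf("diffuse light at point: %.3f\n", v);
      }
      ```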

      There is also the problem of applying baked light to dynamic objects, such as characters. This has been solved by baking so-called “light probes”: invisible spheres that store the light data, with the closest data then applied to characters and other dynamic objects. This again has some problems, as it’s hard to apply multiple light probes to the same object, so the lighting might be off. Light direction isn’t accurate either, which makes normal maps look very flat under this light, and local shadows don’t work with light probes.

      The same is done for reflections, using static reflection probes. These are essentially 360° “screenshots” storing the reflection at that point in space. However, they cost disk space/RAM/VRAM, and they hold no information about moving objects (which is why you sometimes can’t see yourself in a mirror in games). The reflections can also look out of place or distorted when the probe is too far from the reflecting surface (again, these cost VRAM and RAM, so you don’t want to place one in front of every single reflective surface). It takes a lot of time to find the right balance. For everything else, screen-space reflections are usually used, as any other real-time reflection is extremely costly: you essentially render the whole scene again for each local reflection. Screen-space reflection is an advanced technique that works very well for things like reflective floors, but you quickly see its downsides on strongly mirrored surfaces, as it lacks any information that isn’t on screen. Some games, like Hitman, use the mix of these techniques extremely well.
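
      A minimal sketch of the light-probe lookup described above, with invented names and a single colour per probe instead of a full directional basis:

      ```cpp
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      struct Vec3 { float x, y, z; };

      static float dist2(Vec3 a, Vec3 b) {
          float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
          return dx * dx + dy * dy + dz * dz;
      }

      // A baked "light probe": a fixed point in space with pre-computed incoming
      // light. Real engines store a directional basis (e.g. spherical harmonics)
      // rather than one colour, but the lookup idea is the same.
      struct LightProbe { Vec3 pos; float r, g, b; };

      // Dynamic objects don't touch the lightmaps at all; they just take the
      // closest probe. One sample standing in for a whole character is exactly
      // why probe-lit objects can look flat or slightly "off".
      const LightProbe& nearest_probe(Vec3 character_pos, const std::vector<LightProbe>& probes) {
          std::size_t best = 0;
          for (std::size_t i = 1; i < probes.size(); ++i)
              if (dist2(character_pos, probes[i].pos) < dist2(character_pos, probes[best].pos))
                  best = i;
          return probes[best];
      }

      int main() {
          std::vector<LightProbe> probes = {
              {{0, 1, 0}, 0.8f, 0.7f, 0.6f},   // probe baked in a sunlit spot
              {{5, 1, 0}, 0.1f, 0.1f, 0.2f},   // probe baked in shadow
          };
          const LightProbe& p = nearest_probe({4, 0, 0}, probes);
          std::printf("ambient applied to character: %.1f %.1f %.1f\n", p.r, p.g, p.b);
      }
      ```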

      Coming back to lighting, there are now better techniques, used for example by Unreal and some other engines (and now Unity, experimentally). The light gets stored in more predictable data structures, such as 3D textures, so you can store the direction of all the light in each cell. That light then gets applied to objects passing through those cells. It looks pretty good and the runtime cost is fairly low, but storage is a trade-off between texture resolution and fidelity: these textures cost a lot of VRAM, and without advanced techniques and tricks they have their own limits (e.g. on scene size). They also take a lot of time to rebuild every time you change the scene, and they don’t eliminate all the problems mentioned above, like reflections, moving lights, etc.
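
      Here’s a toy version of that 3D-texture idea, assuming a simple uniform grid and nearest-cell lookup (all names invented; real implementations interpolate between cells and compress the data heavily, but the resolution-vs-memory trade-off is the same):

      ```cpp
      #include <cstdio>
      #include <vector>

      // One cell of the volume: a baked light colour plus a dominant direction,
      // so shading can react to where the light mostly comes from.
      struct Cell { float r, g, b; float dir_x, dir_y, dir_z; };

      // A toy "light volume": the level is covered by an n*n*n grid of cells,
      // conceptually a 3D texture. Memory grows with resolution cubed, which is
      // where the VRAM vs. fidelity trade-off comes from.
      struct LightVolume {
          int n;                    // cells per axis
          float cell_size;          // world-space size of one cell
          std::vector<Cell> cells;  // n*n*n entries

          const Cell& sample(float x, float y, float z) const {
              auto clamp_idx = [&](float v) {
                  int i = static_cast<int>(v / cell_size);
                  return i < 0 ? 0 : (i >= n ? n - 1 : i);
              };
              int ix = clamp_idx(x), iy = clamp_idx(y), iz = clamp_idx(z);
              return cells[(iz * n + iy) * n + ix];   // nearest cell; real engines interpolate
          }
      };

      int main() {
          // A 2x2x2 volume over a 10x10x10 unit room (5-unit cells), filled with a
          // gradient so one corner is bright and the opposite corner is dark.
          LightVolume vol{2, 5.0f, {}};
          vol.cells.resize(8);
          for (int i = 0; i < 8; ++i)
              vol.cells[i] = {0.1f + 0.1f * i, 0.1f + 0.1f * i, 0.1f + 0.1f * i, 0.0f, -1.0f, 0.0f};

          const Cell& c = vol.sample(7.5f, 2.5f, 7.5f);   // any point just reads its cell
          std::printf("baked light at point: %.1f, dominant direction y: %.0f\n", c.r, c.dir_y);
      }
      ```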

      Then there is the problem of character lighting itself. Using light probes on characters usually looks pretty bad, as it removes a lot of the detail from advanced skin shaders. Even with the techniques mentioned above, character lighting is still extremely hard to do. There are also other problems, like ambient shadowing in already-shadowed areas, and balancing character lighting against scene lighting.

      For that reason, most AAA games use separate light rigs for characters: essentially floating lights that ONLY affect the character and move with them. When the mix with the scene lights is done right, the rig adapts to the current situation in terms of light direction, color, and intensity. In most AAA games you can often spot situations where rim light comes from a direction with no actual light source. But this way, the devs and artists have full control over how the characters are lit, essentially like a real movie production would, but without the limitations of the real world.
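
      A small sketch of what such a character light rig might look like (purely illustrative names; every studio does this differently): a light that follows the character and only blends toward the scene’s lighting, so artists keep the final say.

      ```cpp
      #include <cstdio>

      struct Vec3 { float x, y, z; };

      static Vec3 lerp(Vec3 a, Vec3 b, float t) {
          return {a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t};
      }

      // A character-only light that ignores world geometry. It follows the
      // character and only blends toward the scene's lighting, so artists keep
      // final control; that's also why rim light can come from "nowhere".
      struct CharacterLightRig {
          Vec3 offset_from_character;  // e.g. up and behind, for a rim light
          Vec3 color;                  // artist-authored base colour
          float scene_blend;           // 0 = purely artistic, 1 = fully match the scene

          Vec3 world_position(Vec3 character_pos) const {
              return {character_pos.x + offset_from_character.x,
                      character_pos.y + offset_from_character.y,
                      character_pos.z + offset_from_character.z};
          }

          Vec3 final_color(Vec3 scene_light_color) const {
              return lerp(color, scene_light_color, scene_blend);
          }
      };

      int main() {
          CharacterLightRig rim{{0.0f, 2.0f, -1.5f}, {0.9f, 0.9f, 1.0f}, 0.4f};
          Vec3 character = {10.0f, 0.0f, 3.0f};
          Vec3 scene_light = {1.0f, 0.6f, 0.3f};   // warm sunset light in the level

          Vec3 pos = rim.world_position(character);
          Vec3 col = rim.final_color(scene_light);
          std::printf("rim light at (%.1f, %.1f, %.1f), colour (%.2f, %.2f, %.2f)\n",
                      pos.x, pos.y, pos.z, col.x, col.y, col.z);
      }
      ```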

      Now, ray tracing as you know it right now is not quite there yet, but eventually it is the solution to a lot of the problems mentioned above. Things like polygon density, light count, global illumination, ambient occlusion, light direction, reflections, and much more are simply “there” for you to use. That doesn’t mean it will automatically make everything look great, but given the overwhelming number of tricks current-gen games need just to look good, it opens up a whole new world of possibilities.
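
      To illustrate why those effects come “for free”, here’s a minimal toy ray-tracing query (illustrative only, nothing like a production RT pipeline): the same “shoot a ray, see what it hits” primitive answers shadows, reflections, and bounce lighting, instead of needing a separate baked trick for each.

      ```cpp
      #include <cmath>
      #include <cstdio>
      #include <vector>

      struct Vec3 { float x, y, z; };
      static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
      static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

      struct Sphere { Vec3 center; float radius; };

      // The one primitive everything is built on: shoot a ray, see what it hits.
      // Returns the distance to the closest hit, or a negative value for a miss.
      float trace(Vec3 origin, Vec3 dir, const std::vector<Sphere>& scene) {
          float closest = -1.0f;
          for (const Sphere& s : scene) {
              Vec3 oc = sub(origin, s.center);
              float b = dot(oc, dir);
              float c = dot(oc, oc) - s.radius * s.radius;
              float disc = b * b - c;            // ray/sphere quadratic (dir is unit length)
              if (disc < 0.0f) continue;         // no intersection with this sphere
              float t = -b - std::sqrt(disc);    // nearest root along the ray
              if (t > 0.001f && (closest < 0.0f || t < closest)) closest = t;
          }
          return closest;
      }

      int main() {
          std::vector<Sphere> scene = {{{0, 0, 5}, 1.0f}, {{2, 0, 5}, 1.0f}};
          Vec3 point = {0, -2, 5};               // a point on the floor below sphere 0
          Vec3 to_light = {0, 1, 0};             // direction toward an overhead light

          // Shadows: trace a ray toward the light; a hit means the point is shadowed.
          // Reflections and bounce lighting reuse the exact same trace() call with
          // different directions instead of lightmaps, probes, or screen-space hacks.
          bool shadowed = trace(point, to_light, scene) > 0.0f;
          std::printf("point is %s\n", shadowed ? "in shadow" : "lit");
      }
      ```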

      Also, while it won’t directly change the final game, it will eventually simplify things for devs, so more time can be invested into other things.

      As it’s used right now, ray tracing is more of a gimmick, because devs still focus most of their resources on the current ways of handling light; most people simply don’t have cards with sufficient ray-tracing capability. So for the moment, I agree that the performance hit is not worth it. Eventually, though, it might become the default way games are rendered, and while we’re not quite there in terms of performance, I think things will become a lot more consistent and predictable for ray tracing.

      • Juki@lemmy.world
        link
        fedilink
        arrow-up
        15
        ·
        1 year ago

        YES, thank you! You saved me a lot of writing haha

        This is spot on, and it’s the real advantage of ray tracing: when it becomes the norm, it’ll look better, provide effects that are currently extremely difficult or impossible, and do so with minimal dev pain.

      • QueriesQueried@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        Worth mentioning that we’re also only about halfway through the typical time it takes for big features like this to hit significant saturation, as with PhysX. It’s pretty common for a GPU feature (and sometimes a CPU/chipset one) to take 3-4 generations to trickle down through new products and used-product sales to reach decent enough adoption. And depending on how Apple handles ray tracing, they might slow down the transition away from rasterization.

      • ClamDrinker@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        1 year ago

        Awesome, a great explanation for a layperson. Because the industry has been faking lighting for so long, and lighting is quite important, it has become incredibly good at it. But that also takes a lot of development time which could be spent on adding more content or features. There’s a reason opinion on ray tracing is extremely positive within the game development industry. Nobody expects it to become the norm overnight, though, and the period of hybrid support for both ray tracing and legacy lighting is only just starting.

    • Vqhm@lemmy.world
      link
      fedilink
      arrow-up
      45
      ·
      1 year ago

      Remember PhysX, back when it was a separate card (a Physics Processing Unit), before they shoved it onto the GPU, before they even had multithreading? Yeah, it evolved. But the original implementation was not ideal.

        • Hexarei@programming.dev
          link
          fedilink
          arrow-up
          1
          ·
          1 year ago

          Honestly, all of the “it runs badly even on a 4090” stuff is talking about 4K with all the settings maxed. It runs at a solid minimum of 30 FPS for me at 1440p on my 3080 at nearly ultra settings. As long as you’re not expecting 60 FPS at 4K, you can enable RT overdrive on affordable hardware.

    • Tau
      link
      fedilink
      arrow-up
      16
      ·
      1 year ago

      Ray tracing is good, but the problem is that we are in a transitional period (and Nvidia keeps upselling its products).

    • Fades@lemmy.world
      link
      fedilink
      arrow-up
      13
      ·
      edit-2
      1 year ago

      “I haven’t personally experienced a game that made use of it, so it must not exist.”

      This u?

    • AdrianTheFrog@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      edit-2
      1 year ago

      The problem with ray tracing is that its real strengths are in places where traditional rendering doesn’t work at all. As soon as ray-traced games stop needing a rasterized option, ray tracing will really become useful. Most of its advantages are in dynamic scenes where you can’t just bake the lighting, or in reflections, which without ray tracing break if you look at them slightly wrong.

      Edit: Most of the Minecraft ray-tracing implementations are lacking, in my opinion, but Minecraft is a game that is well suited to ray tracing. Really, just anything with a dynamic world.

    • BigDaddySlim@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      arrow-down
      1
      ·
      1 year ago

      Yup, I have a 3090 and even then I don’t bother with RTX. It’s a gimmick Jensen and Nvidia love to push as a must-have feature. In reality, you don’t notice it if you’re playing a game normally; it’s a “stop and smell the roses” feature you turn on once to check it out, then turn off immediately when the frame dips start.

      • ThunderingJerboa@kbin.social
        link
        fedilink
        arrow-up
        9
        ·
        1 year ago

        How can you implement anything meaningful with ray tracing when, shocker, not everyone can use ray tracing? Games are unfortunately designed for the median crowd. I would argue the next console generation will be the point where ray tracing becomes the norm. We saw this fairly recently with SSDs: they floated around in the consumer market for 10-14 years as a cool piece of tech, but most games were still designed around a hard disk. Now that most consoles have an SSD as the base standard, games can be designed around that specification and take advantage of it. Even though I am a PC stan, I understand consoles have a huge impact on the gaming industry.

      • Holzkohlen@feddit.de
        link
        fedilink
        arrow-up
        4
        ·
        1 year ago

        That is why they came up with DLSS and then frame generation. But of course it’s proprietary tech confined to Nvidia’s newest, most expensive cards. Utterly useless.

        • ThunderingJerboa@kbin.social
          link
          fedilink
          arrow-up
          5
          ·
          1 year ago

          Are we going to ignore that FSR and XeSS probably wouldn’t have existed without this push? Even if you don’t use ray tracing, I think it’s fair to say you can benefit from DLSS (even though one can argue it’s a cheap gimmick to raise your FPS count); having it as an option is a good thing.

          • miss_brainfart@lemmy.ml
            link
            fedilink
            arrow-up
            7
            ·
            edit-2
            1 year ago

            Frame interpolation is still a weird one to me.

            Like, the latency is obviously still tied to the base framerate, and lower framerates mean less information to calculate good interpolated frames from…

            Basically, the tech is at its worst on the low-end hardware that needs it the most (which is probably why they chose to restrict it to new models, now that I think about it).
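
            Rough back-of-the-envelope numbers for that point (just the arithmetic, not any vendor’s actual pipeline; the 2x factor assumes one generated frame per real frame): interpolation doubles the frames you see, but your inputs are still only sampled at the base rate.

            ```cpp
            #include <cstdio>

            int main() {
                // Frame interpolation doubles what you *see*, but your inputs are still
                // only sampled once per real (base) frame, so responsiveness follows the
                // base framerate, not the displayed one.
                const double base_fps[] = {30.0, 60.0, 120.0};
                for (double fps : base_fps) {
                    double base_frame_ms = 1000.0 / fps;   // time between real frames
                    double displayed_fps = fps * 2.0;      // one generated frame per real frame
                    std::printf("base %5.0f fps -> shown %5.0f fps, input still updates every ~%.1f ms\n",
                                fps, displayed_fps, base_frame_ms);
                }
            }
            ```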

            A 4090 owner turning on DLSS3 is kinda like a dental surgeon getting a third car for their birthday.

            Upscaling has come a long way though, and the anti-aliasing they use in DLSS is so good that they’ve released it as a standalone feature. That I can appreciate; anything is better than what some games do with TAA.

    • CopernicusQwark@lemmy.world
      link
      fedilink
      arrow-up
      9
      ·
      1 year ago

      Spider-Man 2 has launched with ray tracing always on and looks and plays phenomenally. Super immersive to swing around the city and have proper reflections off all the skyscrapers!