Probably a dumb question but:

With the upcoming GTA 6, which people speculate might only run at 30FPS, I was wondering why there isn’t some setting on current-gen consoles to turn on motion smoothing.

For example my 10 year old TV has a setting for motion smoothing that works perfectly fine even though it probably has less performance than someone’s toaster.

It seems like this is being integrated in some instances on NVIDIA and AMD cards, such as DLSS Frame Generation and AMD Fluid Motion Frames, which are only compatible with a limited set of games.

But I wonder: why can’t this be something that’s globally integrated into modern tech, so that we don’t have to play anything under 60FPS anymore in 2025? I honestly couldn’t play something at 30FPS since it’s so straining and hard to see things properly.

  • Blackmist@feddit.uk · 1 point · 1 day ago

    Because it’s not a good way to get from 30fps to 60. It’s a way to go from 60 to 120 (and 240 on the new 50x0 series) where you won’t notice the extra latency.

    Most 30fps games on consoles have a 60fps setting anyway that turns off the extra graphical wankery and tones the resolution down a touch.

  • jarfil@beehaw.org · 4 points · 3 days ago (edited)

    Motion smoothing means that instead of showing:

    • Frame 1
    • 33ms rendering
    • Frame 2

    …you would get:

    • Frame 1
    • 33ms rendering
    • #ms interpolating Frames 1 and 2
    • Interpolated Frame 1.5
    • 16ms wait
    • Frame 2

    It might be fine for non-interactive stuff where you can get all the frames in advance, like cutscenes. For anything interactive though, it just increases latency while adding imprecise partial frames.

    It will never turn 30fps into true 60fps like:

    • Frame 1
    • 16ms rendering
    • Frame 2
    • 16ms rendering
    • Frame 3
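
    To put rough numbers on it, here’s a toy timing sketch (illustrative only, assuming a flat 33ms per rendered frame and that an interpolated frame can only be built once both of its source frames exist):

    ```python
    RENDER_MS = 33       # assumed time to render one real frame (~30fps source)
    HALF_FRAME_MS = 16   # frame spacing at ~60fps

    def true_60fps(n):
        """Real 60fps: a brand-new frame reaches the screen every ~16ms."""
        return [(f"frame {k}", k * HALF_FRAME_MS) for k in range(1, n + 1)]

    def interpolated_from_30fps(n):
        """Interpolation: real frame k can't be shown until frame k+1 exists,
        so every real frame arrives a full render (~33ms) late, with a guessed
        in-between image slotted after it."""
        events = []
        for k in range(1, n):
            shown_at = (k + 1) * RENDER_MS            # when frame k+1 finishes
            events.append((f"frame {k}", shown_at))
            events.append((f"frame {k}|{k+1} (guess)", shown_at + HALF_FRAME_MS))
        return events

    print(true_60fps(6))
    print(interpolated_from_30fps(4))
    ```

    Both streams put an image up every ~16ms, but in the interpolated one half the images are guesses and every real frame arrives a full render late.
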
    • Boomkop3@reddthat.com · 3 points · 1 day ago (edited)

      It’s worse

      • render frame 1 - 33ms
      • render frame 2 - 33ms
      • interpolate frame 1|2
      • show frame 1
      • start rendering frame 3…
      • wait 16ms
      • show frame 1|2
      • wait 16ms
      • show frame 2
      • interpolate frame 2|3
      • start working on frame 4…
      • wait 16ms
      • show frame 2|3
      • wait 16ms
      • show frame 3 -> this is a whole 33ms late!

      And that’s ignoring the extra processing time of the interpolation and the asynchronous workload. It’s so slow that if you wiggle your joystick 15 times per second, the image on the screen will be moving in the opposite direction.
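
      You can sanity-check that last claim with a quick sketch (assuming the ~33ms of extra display delay from the timeline above): a 15Hz wiggle has a ~67ms period, so a ~33ms lag is about half a period, and the screen ends up showing roughly the opposite of what your hand is doing.

      ```python
      import math

      DELAY_S = 0.033    # assumed extra display latency from the interpolation pipeline
      WIGGLE_HZ = 15     # wiggling the stick left/right 15 times per second

      def stick(t):
          return math.sin(2 * math.pi * WIGGLE_HZ * t)   # stick position at time t

      for i in range(6):
          t = i / 60
          hand = stick(t)              # where your hand actually is
          screen = stick(t - DELAY_S)  # what the delayed image reflects
          print(f"t={t*1000:4.0f}ms  hand={hand:+.2f}  screen={screen:+.2f}")
      ```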

      • jarfil@beehaw.org · 1 point · 18 hours ago (edited)

        Hm… good point… but… let’s see, assuming full parallel processing:

        • […]
        • Frame -2 ready
        • Frame -1 ready
          • Show frame -2
          • Start interpolating -2|-1 (should take less than 16ms)
          • Start rendering Frame 0 (will take 33ms)
          • User input 0 (will be received in 20ms if wired)
        • Wait 16ms
          • Frame -2|-1 ready
        • Show Frame -2|-1
        • Wait 4ms
          • Process User input 0 (max 12ms to get into next frame)
          • User input 1 (will be received in 20ms if wired)
        • Wait 12ms
        • Frame 0 ready
          • Show Frame -1
          • Start interpolating -1|0 (should take less than 16ms)
          • Start rendering Frame 1 {includes User input 0} (will take 33ms)
        • Wait 8ms
          • Process User input 1 (…won’t make it into a frame before User input 2 is received)
          • User input 2 (will be received in 20ms if wired)
        • Wait 8ms
          • Frame -1|0 ready
        • Show Frame -1|0
        • Wait 12ms
          • Process User Input 1+2 (…will it take less than 4ms?)
        • Wait 4ms
        • Frame 1 ready {includes user input 0}
          • Show Frame 0
          • Start interpolating 0|1 (should take less than 16ms)
          • Start rendering Frame 2 {includes user input 1+2… maybe} (will take 33ms)
        • Wait 16ms
          • Frame 0|1 ready {includes partial user input 0}
        • Show Frame 0|1 {includes partial user input 0}
        • Wait 16ms
        • Frame 2 ready {…hopefully includes user input 1+2}
          • Show Frame 1 {includes user input 0}
        • […]

        So…

        • From user input to partial display: 66ms
        • From user input to full display: 83ms
        • Some user inputs will be bundled up
        • Some user inputs will take some extra 33ms to get displayed

        Effectively, an input-to-render equivalent of somewhere between a blurry 15fps and an abysmal 8.6fps.

        Could be interesting to run a simulation and see how many user inputs get bundled or “lost”, and what the maximum latency would be.
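
        Here’s a crude version of that simulation (assumptions only: flat 33ms renders, an input is picked up by the next render to start, its effect shows up partially in the following interpolated frame and fully one real frame later; controller transit time and the interpolation cost itself are ignored):

        ```python
        import math
        import random
        from collections import Counter

        FRAME_MS = 33        # assumed render time per real frame (30fps source)
        HALF_FRAME_MS = 16   # interpolated frame is shown half a frame after a real one

        def display_times(t_input):
            """When an input's effect first appears: (blended into an interpolated
            frame, first full frame containing it)."""
            start = math.ceil(t_input / FRAME_MS) * FRAME_MS  # next render that can use it
            finish = start + FRAME_MS                         # that render is done
            return finish + HALF_FRAME_MS, finish + FRAME_MS  # partial, full (held for next frame)

        def simulate(n_inputs=2000, duration_ms=60_000, seed=0):
            rng = random.Random(seed)
            inputs = [rng.uniform(0, duration_ms) for _ in range(n_inputs)]
            partial, full, per_render = [], [], Counter()
            for t in inputs:
                p, f = display_times(t)
                partial.append(p - t)
                full.append(f - t)
                per_render[math.ceil(t / FRAME_MS)] += 1      # inputs sharing one render
            bundled = sum(c - 1 for c in per_render.values() if c > 1)
            print(f"to partial display: {min(partial):.0f}-{max(partial):.0f}ms")
            print(f"to full display:    {min(full):.0f}-{max(full):.0f}ms")
            print(f"inputs bundled with an earlier one: {bundled} of {n_inputs}")

        simulate()
        ```

        Add the ~20ms of wired controller transit on top of these if you want to compare against the numbers above.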

        Still, at a fixed 30fps, the latency would be:

        • 20ms best case
        • 53ms worst case (missed frame)
        • Boomkop3@reddthat.com · 1 point · 17 hours ago

          You’ve just invented time travel.

          The basic flow is
          [user input -> render 33ms -> frame available]
          It is impossible to have a latency lower than this; a newer frame simply does not exist yet.

          But with interpolation you also need consistent time between frames. You can’t just present a new frame and the interpolated frame instantly after each other. First you present the interpolated frame, then you wait half a frame and present the new frame it was interpolated towards.

          So your minimum possible latency is 1.5 frames, or roughly 33 + 17 ≈ 50ms (which is horrible)
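
          Written out with assumed round numbers:

          ```python
          RENDER_MS = 1000 / 30          # ≈33ms: the frame can't exist before it's rendered
          HALF_FRAME_MS = RENDER_MS / 2  # ≈17ms: even pacing puts the real frame half a
                                         # frame after the interpolated one
          print(RENDER_MS + HALF_FRAME_MS)  # ≈50ms floor, vs ≈33ms with no interpolation
          ```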

          One thing I wonder tho… could you use the motion vectors from the game engine that are available before a frame even exists?

          • jarfil@beehaw.org · 1 point · 16 hours ago

            You’ve just invented time travel.

            Oops, you’re right. Got carried away 😅

            could you use the motion vectors from the game engine that are available before a frame even exists?

            Hm… you mean like what video compression algorithms do? I don’t know of any game doing that, but it could be interesting to explore.

            • Boomkop3@reddthat.com · 1 point · 15 hours ago

              No, modern game engines produce a whole lot more information than is strictly necessary to generate a frame, like a depth map and such. One of those buffers is a map of where everything on screen is going and how fast.

              It wouldn’t include movement produced by shaders, but it should cover all polygons on screen, which would allow you to just warp the previous frame forward, no next frame required.
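
              Something like this, roughly (the function name, shapes and numbers are all made up for illustration; a real implementation would also need to handle disocclusions, depth ordering and shader-driven motion): take the last rendered frame plus the engine’s motion-vector buffer and push each pixel forward along its vector, with no second frame needed.

              ```python
              import numpy as np

              def extrapolate(prev, motion, t=0.5):
                  """Warp the previous frame forward along per-pixel motion vectors.
                  prev:   (H, W, 3) last rendered image
                  motion: (H, W, 2) per-pixel movement in pixels over one frame,
                          as the engine's motion-vector pass reports it
                  t:      how far into the next frame interval to extrapolate"""
                  h, w, _ = prev.shape
                  out = prev.copy()                        # fallback: keep the old pixel
                  ys, xs = np.mgrid[0:h, 0:w]
                  nx = np.clip(xs + (motion[..., 0] * t).round().astype(int), 0, w - 1)
                  ny = np.clip(ys + (motion[..., 1] * t).round().astype(int), 0, h - 1)
                  out[ny, nx] = prev[ys, xs]               # splat each pixel forward
                  return out

              # Toy usage: everything drifts 2px to the right per frame.
              frame = np.zeros((4, 4, 3), dtype=np.uint8)
              motion = np.zeros((4, 4, 2))
              motion[..., 0] = 2.0
              half_step = extrapolate(frame, motion, t=0.5)  # predicted in-between image
              ```

              That’s extrapolation rather than interpolation, so it never has to wait for the next real frame; similar in spirit to the reprojection tricks VR runtimes use.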

  • MentalEdge · 48 points · 4 days ago (edited)

    Because it introduces latency.

    Higher framerates only partly improve the experience by looking better; they also make the game feel more responsive, because what you input is reflected in-game that fraction of a second sooner.

    Increasing framerate while incurring higher latency might look nicer for an onlooker, but it generally feels a lot worse to actually play.

  • narc0tic_bird@lemm.ee · 11 points · 4 days ago

    Input latency, for one: the next real frame is delayed because the interpolated frame has to be shown before it.

    And image quality. The generated frame is, as I said, interpolated. Whether that’s just using an algorithm or machine learning, it’s not even close to being accurate (at this point in time).