• deegeese (+11) · 9 months ago

    From the description it kind of reminds me of the performance gains seen in the 1990s when the industry moved from direct geometry calls to using display lists to cut down on the number of times the GPU is waiting for instructions.
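
    For anyone who missed that era, a toy sketch of the difference in legacy OpenGL 1.x (the calls are real GL 1.x functions, the geometry is made up): immediate mode chatters at the driver for every vertex, while a display list is recorded once and replayed with a single call.

    ```c
    /* Legacy OpenGL 1.x sketch: immediate mode vs. a display list.
       Immediate mode re-sends every vertex to the driver each frame;
       a display list records the geometry once and replays it with a
       single call, so the GPU spends far less time waiting. */
    #include <GL/gl.h>

    GLuint mesh_list;

    void build_mesh_list(void) {
        mesh_list = glGenLists(1);
        glNewList(mesh_list, GL_COMPILE);   /* record once */
        glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        /* ...imagine a few thousand more of these... */
        glEnd();
        glEndList();
    }

    void draw_frame_immediate(void) {
        glBegin(GL_TRIANGLES);              /* every call crosses into the driver */
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();
    }

    void draw_frame_list(void) {
        glCallList(mesh_list);              /* replay the whole mesh in one call */
    }
    ```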

  • echo64@lemmy.world (+13/-4) · 9 months ago

    One of the big reasons the PS5 usually beats out the Xbox, despite worse hardware on paper, is this kind of thinking.

    They built a custom SSD controller that the GPU has direct access to. This means that when something needs to happen involving the SSD and the GPU, the GPU can just access the memory it needs directly, without waiting on the CPU.

    Just let the GPU do what it does best without having to wait on the CPU, and everything goes so much more smoothly.
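
    Conceptually the split looks something like the sketch below. Every type and function in it is a stub invented for illustration - it is not the PS5 SDK or any real API - but it shows why cutting the CPU out of the loop matters.

    ```c
    /* Conceptual sketch only: every type and function below is a stub
       invented for illustration, not the PS5 SDK or any real API. */
    #include <stddef.h>

    typedef struct { unsigned char *data; size_t size; } gpu_buffer;

    /* Stubs standing in for "the CPU does the work": */
    void *cpu_read_file(const char *path);
    void *cpu_decompress(void *blob);
    void  gpu_upload(gpu_buffer *dst, void *decoded);

    /* Stub standing in for "the GPU-side I/O block does the work": */
    void  gpu_queue_ssd_read(const char *path, gpu_buffer *dst);

    /* Traditional path: the CPU touches every byte between SSD and VRAM. */
    void load_asset_via_cpu(const char *path, gpu_buffer *dst) {
        void *blob    = cpu_read_file(path);   /* CPU blocks on the SSD */
        void *decoded = cpu_decompress(blob);  /* CPU burns cycles      */
        gpu_upload(dst, decoded);              /* extra copy into VRAM  */
    }

    /* Direct path: the CPU only enqueues a request; DMA and hardware
       decompression move the data straight into GPU-visible memory. */
    void load_asset_direct(const char *path, gpu_buffer *dst) {
        gpu_queue_ssd_read(path, dst);
    }
    ```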

  • mindbleach@sh.itjust.works (+8/-7) · 9 months ago

    At some point… do you need the CPU? There’s stuff it will be better at, yes, and more power is always better. But the GPU can run any code.

    The whole computer outside the video card could be reduced to a jumped-up southbridge.

    • zalgotext@sh.itjust.works (+16/-1) · 9 months ago

      GPUs are ridiculously, ludicrously good at doing an absolute shit-ton of very simple, non-dependent calculations simultaneously. CPUs are good at… Well, everything else. So yes, you do still need the CPU.
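
      A toy way to see that split in plain C: the first loop is embarrassingly parallel (GPU territory), the second carries a dependency and branches on its own result (CPU territory).

      ```c
      #include <stddef.h>

      /* GPU-friendly: every element is independent, so thousands of
         lanes can run this at once. */
      void scale_all(float *out, const float *in, float k, size_t n) {
          for (size_t i = 0; i < n; i++)
              out[i] = in[i] * k;
      }

      /* CPU-friendly: each step depends on the previous result and
         branches on the data, so there is nothing to fan out. */
      float chase(const float *in, size_t n, float threshold) {
          float acc = 0.0f;
          for (size_t i = 0; i < n; i++)
              acc = (acc > threshold) ? acc * 0.5f - in[i] : acc + in[i];
          return acc;
      }
      ```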

      • mindbleach@sh.itjust.works (+2/-9) · 9 months ago

        GPUs are pretty good at doing half a shit-ton of mildly complex calculations simultaneously. And even the things they’re not so good at, they can still do in parallel.

        Remember that GPU ray-tracing didn’t start with bespoke hardware. The first Nvidia card celebrated by path-tracing nerds was the GTX 480 from 2010. “GPGPU” shenanigans began even before CUDA. People were coercing work out of video cards by converting data to colors. You ever worry about the middle bytes of a long int getting saturated? It’s not good times.
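
        For anyone who never had to do it, the color-packing trick went roughly like this (a plain-C sketch, using a 32-bit value for simplicity):

        ```c
        #include <stdint.h>

        /* Old-school GPGPU smuggling: one 32-bit value per RGBA8 texel,
           one byte per channel. */
        typedef struct { uint8_t r, g, b, a; } rgba8;

        rgba8 pack_u32(uint32_t v) {
            rgba8 c = { v & 0xFF, (v >> 8) & 0xFF,
                        (v >> 16) & 0xFF, (v >> 24) & 0xFF };
            return c;
        }

        uint32_t unpack_u32(rgba8 c) {
            return (uint32_t)c.r | ((uint32_t)c.g << 8) |
                   ((uint32_t)c.b << 16) | ((uint32_t)c.a << 24);
        }

        /* The failure mode: fixed-function blending clamps each channel
           to 0..255 on its own, so an overflowing byte saturates instead
           of carrying into its neighbor -- the "middle bytes" worry. */
        rgba8 blend_add_clamped(rgba8 x, rgba8 y) {
            unsigned r = x.r + y.r, g = x.g + y.g, b = x.b + y.b, a = x.a + y.a;
            rgba8 c = { r > 255 ? 255 : r, g > 255 ? 255 : g,
                        b > 255 ? 255 : b, a > 255 ? 255 : a };
            return c;
        }
        ```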

        Couple that with how good design practices have pushed toward stream processing, just to make good use of many-core CPUs, and the question is worth asking. Especially when true parallel hardware is easier to scale up, costs way less per flop, and won’t run into looming obstacles in SRAM size.

        I guess the hybrid alternative path is just Xeon Phi.

        • CausticFlames (+6/-1) · 9 months ago

          GPUs as the ONLY compute source in a computer cannot and will not function, mainly due to how pipelining works on existing architectures (among other things).

          You’re right in that GPUs are excellent at parallelization. Unfortunately, when you pipeline several instructions to run in parallel, you actually increase each individual instruction’s execution time (while decreasing the OVERALL execution time).
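
          Toy numbers to make that latency/throughput trade concrete (all values invented for illustration):

          ```c
          #include <stdio.h>

          /* Toy pipeline numbers, invented for illustration: five stages,
             each stretched a little by latch/forwarding overhead. */
          int main(void) {
              double unpipelined_ns = 5.0;  /* one instruction, start to finish */
              double stage_ns       = 1.2;  /* 1.0 ns of work + 0.2 ns overhead */
              int    stages         = 5;

              /* Each instruction now takes longer... */
              printf("latency: %.1f ns -> %.1f ns\n",
                     unpipelined_ns, stages * stage_ns);
              /* ...but one finishes every stage-time once the pipe is full. */
              printf("throughput: %.2f -> %.2f instr/ns\n",
                     1.0 / unpipelined_ns, 1.0 / stage_ns);
              return 0;
          }
          ```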

          GPUs are stupidly good at churning out triangles and pinning them to a matrix they can then apply “transformations” or other altering actions to. A GPU would struggle HARD if it had to handle system calls and “time splitting” the way an OS juggles background tasks.
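
          The happy path, for contrast, is something like one matrix-vector multiply per vertex, millions of times per frame, all independent (plain-C sketch):

          ```c
          /* The bread-and-butter GPU workload: transform a vertex by a 4x4
             matrix.  Every vertex is independent, so thousands of lanes can
             chew through these at once. */
          typedef struct { float x, y, z, w; } vec4;
          typedef struct { float m[4][4]; } mat4;  /* row-major in this sketch */

          vec4 transform(const mat4 *m, vec4 v) {
              vec4 r;
              r.x = m->m[0][0]*v.x + m->m[0][1]*v.y + m->m[0][2]*v.z + m->m[0][3]*v.w;
              r.y = m->m[1][0]*v.x + m->m[1][1]*v.y + m->m[1][2]*v.z + m->m[1][3]*v.w;
              r.z = m->m[2][0]*v.x + m->m[2][1]*v.y + m->m[2][2]*v.z + m->m[2][3]*v.w;
              r.w = m->m[3][0]*v.x + m->m[3][1]*v.y + m->m[3][2]*v.z + m->m[3][3]*v.w;
              return r;
          }
          ```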

          This isn’t even MENTIONING the instruction-set changes that would be needed for, say, x86 to run on a GPU alone.

          TL;DR: CPUs are here to stay for a really, really long time.

          • mindbleach@sh.itjust.works (+3/-2) · 9 months ago

            … why would you run x86?

            Never mind that “cannot function” is not the same thing as “slow.” Every reply has been a technically proficient attack rather than sincere consideration of what is possible. The article is about rearranging the established relationship of CPU and GPU - the root comment asks “at some point.” An all-caps dismissal of running existing software is a tell.

            We’re not talking about binaries you already have. We’re not necessarily talking about general software. This is about future games. We’re not even talking about a system with no CPU - the root comment describes reducing the importance of components. Crucial pieces of discrete hardware in past computers live on in modern motherboards as a tiny fraction of some chip.

            Even CPUs themselves are experimenting with heterogeneous core layouts, where an itty-bitty Atom or ARMv7 handles the basics, while some wildly different silicon either sits idle or does all the work. The difference between that and an APU chewing through SPIR-V might be less than you think.

            • CausticFlames (+2) · 9 months ago (edited)

              You are the one who brought up the question of whether we even need the CPU at all. Also, it wasn’t meant to be an attack - just an explanation of why you’d still need a CPU.

              “why would you run x86”

              All I meant was that a large portion of software and compatibility tooling still uses it, and our modern desktop CPU architectures are still inspired by it. My point was that things like CUDA are vastly different.

              But if what you meant by your original comment was not to do away with the CPU entirely, then yes! By all means - plenty of software is now migrating to taking advantage of the GPU as much as possible. I was only addressing your question, “at some point do we even need the CPU?” - the answer is yes :)

        • Socsa@sh.itjust.works (+2/-1) · 9 months ago

          Lol, there are a lot of people in here who got their digital design education from Linus Tech Tips downvoting you.

          • mindbleach@sh.itjust.works (+2) · 9 months ago

            It’s a whole mess of legitimate reasons we’re not already doing it (which I know), misapplied to ‘so it can never make sense,’ with a tone of ‘how dare you.’

            The obvious example application ages ago would’ve been a console - y’know, specialty hardware with bespoke software, optimized for maximum oomph at minimum up-front cost. But everything since the PS3-360-Wii generation has been a whole-ass computer judged on its ability to handle multiplatform games. Even your damn phone is expected to run Fortnite. Everything’s gotta have an everything.

            Maybe people are no longer used to considering how computing could get weird.

            Maybe they don’t recognize how weird it already got.

      • mindbleach@sh.itjust.works (+3/-8) · 9 months ago

        Are they, though? They’re hardware-threaded. Context switches are how they deal with a cache miss.
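
        Back-of-the-envelope version of why that works, with invented numbers: if a load takes hundreds of cycles and each thread only has a few cycles of math between loads, you need dozens of resident threads per lane to keep the ALUs busy - which is exactly what switching on a miss buys you.

        ```c
        #include <stdio.h>

        /* Toy latency-hiding arithmetic -- numbers invented.  How many
           resident threads does one lane need so that swapping to another
           thread on every stall keeps the ALU busy? */
        int main(void) {
            int load_latency_cycles = 400;  /* time for a memory access   */
            int work_between_loads  = 10;   /* ALU cycles each thread has */

            printf("threads needed per lane: ~%d\n",
                   load_latency_cycles / work_between_loads);
            return 0;
        }
        ```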

        This specific news sounds a lot like what an interrupt-driven scheduler would do.

        The bigger obstacle to a GPU OS is surely that the video card does not tend to talk to itself… and evidently that’s being addressed.

    • KeenFlame@feddit.nu (+2) · 9 months ago

      Buncha dry students here giving you shit. It is not a stupid question.

      Someday we might not need a CPU. The biggest hurdle probably isn’t even the chip architecture, but that the software would need to be remade - and that’s not something you do in a day, exactly.

      • Socsa@sh.itjust.works (+2) · 9 months ago (edited)

        Right, GPGPU is a thing. You can do branch logic on a GPU and you can do SIMD on a CPU. But in general, logic and compute have somewhat orthogonal requirements, which means you end up with divergent designs if you start optimizing in either direction.

        This is a software-architecture and conceptual problem as well. You simply can’t do conditional SIMD. You can compute both graphs in parallel and “branch” when the tasks join (which is a form of speculative execution), but that’s rarely more efficient than defining and dispatching compute tasks on demand once you get to the edges of the performance curve.
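
        Concretely, “compute both and select” looks like this in scalar C; a real SIMD or GPU version does the same thing with a per-lane mask (sketch only):

        ```c
        #include <stddef.h>

        /* "Conditional" work, SIMD style: no per-element branch.  Both
           sides of the if/else are computed for every element and a mask
           picks the survivor -- so every lane pays for both. */
        void conditional_simd_style(float *out, const float *in, size_t n) {
            for (size_t i = 0; i < n; i++) {
                float if_true  = in[i] * 2.0f;         /* "then" side */
                float if_false = in[i] * 0.5f + 1.0f;  /* "else" side */
                float mask     = (in[i] > 0.0f) ? 1.0f : 0.0f;
                out[i] = mask * if_true + (1.0f - mask) * if_false;
            }
        }
        ```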

    • mindbleach@sh.itjust.works (+3/-3) · 9 months ago (edited)

      Fuck me for playing what-if, apparently.

      Not like this news is explicitly about upending the typical CPU-GPU relationship.