• a baby duck@lemmy.world

    Intel skepticism aside, I hope they can deliver on this. M-series Macs seem streets ahead in terms of battery life right now and it doesn’t feel great buying any other portable.

    • brucethemoose@lemmy.world

      Honestly, a lot of that is budget.

      Apple makes low-clocked, very wide SoCs, and is always the first customer on the most cutting-edge silicon node. That is very expensive, and Apple can eat the cost with their outrageous prices.

      Intel (and AMD) go more for “balance,” with smaller, cheaper dies and higher peak clocks. Their OEMs also “cheap out,” padding margins by bundling bloatware that drains the battery further. You can find PCs with big batteries and better stock configs, but they cost more.

      AMD is only just now getting into the “premium” game with the upcoming Strix Halo chip (roughly M2 Pro-class, spec-wise). Intel isn’t there yet, but rumor has it they will follow.

      • sugar_in_your_tea@sh.itjust.works

        bloatware

        Even if you remove all that crap, battery life is nowhere near the same as on the M-series chips. So while bloatware may be a problem, it’s not anywhere close to the main reason battery life suffers.

        • brucethemoose@lemmy.world

          It can be if you run Linux and throttle the chips. Even my older G14 lasts a long time: the AMD SoC is great, it can run fanless when throttled down, and it simply has a bigger battery than razor-thin Macs.

          But again, it’s just not configured this way in most laptops, which sacrifice battery for everything else because, well, OEMs are idiots.

          • sugar_in_your_tea@sh.itjust.works

            I don’t, I just run stock. I have an E495 and get something like 3–5 hours of battery life, depending on what I’m doing, and after a few years of ownership I still get around 3 hours.

          • bamboo@lemm.ee

            Current-gen MacBooks have massive batteries. The 14-inch MacBook Pro is 70–73 Wh, the same as your G14, and the 16-inch MBP is 100 Wh, the legal limit for carrying onto an airplane. Even the 13-inch Air, Apple’s thinnest and smallest, is still 52 Wh.
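
            As a back-of-envelope illustration (the 7 W figure below is an assumed light-use average, not a measurement), runtime is just capacity divided by average draw:

            # Rough runtime estimate: hours = capacity (Wh) / average system draw (W).
            # The 7 W draw is an illustrative assumption, not a measured number.
            batteries_wh = {"MacBook Pro 14": 72, "MacBook Pro 16": 100, "MacBook Air 13": 52}
            assumed_draw_w = 7.0
            for model, wh in batteries_wh.items():
                print(f"{model}: ~{wh / assumed_draw_w:.1f} h at {assumed_draw_w:.0f} W average draw")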

          • Melco@lemmy.world

            What specific driver and Linux tools do you use to throttle your CPU?

            Also, throttling often produces the opposite result for battery life: the chip spends more time in higher power states to do the same amount of work, whereas at a faster clock the work finishes sooner and the CPU drops back to a lower-power state more quickly and stays there longer.

            I’d be interested to hear your results. Have you done any tests comparing a throttled versus an unthrottled system with the tools you’re using?

            • brucethemoose@lemmy.world

              On my G14, I just use the ROG utility to disable turbo, plus some kernel tweaks. I’ve used ryzenadj before, but it’s been a while. And yes, I measured battery drain in the terminal (though, again, it’s been a while).
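
              Not my exact setup, but a minimal sketch of the knobs involved, assuming the intel_pstate or acpi-cpufreq/amd-pstate drivers; the sysfs paths vary by machine, writing them needs root, and power_now isn’t exposed on every laptop:

              # Minimal sketch: turn off turbo/boost via sysfs and read battery draw.
              # Which path exists depends on the CPU frequency driver; writes need root.
              from pathlib import Path

              TURBO_KNOBS = {
                  Path("/sys/devices/system/cpu/intel_pstate/no_turbo"): "1",  # 1 = turbo off
                  Path("/sys/devices/system/cpu/cpufreq/boost"): "0",          # 0 = boost off
              }

              for knob, off_value in TURBO_KNOBS.items():
                  if knob.exists():
                      knob.write_text(off_value)
                      print(f"wrote {off_value} to {knob}")

              power_now = Path("/sys/class/power_supply/BAT0/power_now")  # microwatts, if present
              if power_now.exists():
                  print(f"battery draw: {int(power_now.read_text()) / 1_000_000:.1f} W")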

              Also, throttling often produces the opposite result for battery life: the chip spends more time in higher power states to do the same amount of work, whereas at a faster clock the work finishes sooner and the CPU drops back to a lower-power state more quickly and stays there longer.

              “Race to sleep” is true to some extent, but past a certain point the extra voltage needed for higher clocks dramatically outweighs the benefit of the CPU sleeping longer. Modern CPUs turbo to ridiculously inefficient frequencies by default before they thermally throttle themselves.
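
              A toy calculation (made-up voltage/frequency points, not real chip data) shows the tradeoff: dynamic power scales roughly with V²·f and task time with 1/f, so energy per task goes roughly as V², and the last few hundred MHz need a big voltage bump:

              # Toy "race to sleep" arithmetic with illustrative operating points.
              # Dynamic power ~ C·V²·f; time for a fixed task ~ 1/f,
              # so energy per task ~ C·V² (the frequency cancels out).
              operating_points = [(2.0, 0.70), (3.5, 0.90), (5.0, 1.30)]  # (GHz, volts), made up
              base = operating_points[0][1] ** 2
              for freq_ghz, volts in operating_points:
                  print(f"{freq_ghz:.1f} GHz @ {volts:.2f} V -> ~{volts**2 / base:.1f}x energy per task")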

        • Valmond@lemmy.world

          Isn’t the screen eating most of the power in laptops? I just have an old T490 that I don’t use very much, so I might not be that well informed.

          • sugar_in_your_tea@sh.itjust.works

            I thought so too, but if Apple is getting more than 2x the battery life of its competitors while driving a denser screen, then I suppose it’s not as significant as I had thought.

    • pycorax@lemmy.world

      There were some benchmarks showing Ryzen getting very close, and in some cases winning, with the Zen 4-based Z1 Extreme already. Those chips just aren’t in laptops.

    • kelvie@lemmy.ca

      I think AMD is close on power draw when actually doing work (performance per watt), but it’s the little things, like sleep and sitting completely idle, where the entire MacBook draws so little power, that need to catch up – and that’s not entirely on the processor.

  • emax_gomax@lemmy.world

    I couldn’t find any clarification in the article, but I’m guessing these are still x86_64, and from the description it seems like they’ve stacked a lot of different components onto a single chip. Normally both of those things would make it a big powerhouse, so I’m not sure how it’s going to beat ARM on battery, which competes by having a smaller, simpler ISA that doesn’t need as many resources or as much complexity to process.

    • ForgotAboutDre@lemmy.world

      Extra components mean more specific hardware to complete each task. This more specific hardware can often process the same data faster and with less power consumption. The drawbacks are cost, complexity, and that these components are only good for that one task.

      CPUs are great because they are multipurpose and can do anything, given infinite time and storage. That flexibility means they aren’t as optimised.

      People are not creating custom code to solve their own problems. They are running very common applications, using very common libraries for similar functions. So for the general user, specific hardware for encryption, video codecs, networking, etc. will reduce power consumption and increase processing speed in a practical way.
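
      One crude way to see this on a Linux box (this just reads /proc/cpuinfo; the flag names are the usual x86 ones) is to check which fixed-function features the CPU advertises, which common crypto and media libraries then use automatically:

      # Crude illustration: list a few acceleration-related x86 feature flags
      # from /proc/cpuinfo that common libraries pick up when present.
      FLAGS_OF_INTEREST = {
          "aes": "AES-NI (hardware-accelerated encryption)",
          "sha_ni": "SHA extensions (hardware-accelerated hashing)",
          "avx2": "wide vector units (bulk data processing)",
      }

      cpu_flags = set()
      with open("/proc/cpuinfo") as f:
          for line in f:
              if line.startswith("flags"):
                  cpu_flags = set(line.split(":", 1)[1].split())
                  break

      for flag, what in FLAGS_OF_INTEREST.items():
          present = "yes" if flag in cpu_flags else "no"
          print(f"{flag:8s} {present:3s}  {what}")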

      • emax_gomax@lemmy.world

        Out of curiosity, this wouldn’t be automatically supported, right? Like, you’d need the OS or dependent libraries to know about these special chips and take advantage of them, for things like encryption for example. Is it common to define tailored hardware for this kind of functionality, or is this Intel trying to set up a very tailored mass-market product for laptops?

        • ForgotAboutDre@lemmy.world

          You need software support to use them, but that kind of support is already common. It does take time to develop, test, and deploy, though.

          The software lives in kernels, drivers, and libraries, and Intel already supports things like this.

          You may need to wait, or run a bleeding-edge version of your OS, to get these extra features.

        • pycorax@lemmy.world

          It’s somewhat common. On the media encoding/decoding front, Intel has been doing this with stuff like QuickSync, AMD with AMF and Nvidia with NVENC.
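
          For example (hedged: which of these encoders exists depends on your ffmpeg build, drivers, and hardware, and the file names are placeholders), the same ffmpeg invocation just swaps the encoder to hand the work to the fixed-function media block:

          # Illustrative only: ask ffmpeg for a hardware H.264 encoder.
          # Encoder availability depends on the ffmpeg build and installed drivers.
          import subprocess

          HW_ENCODERS = {
              "intel": "h264_qsv",     # Intel Quick Sync Video
              "nvidia": "h264_nvenc",  # NVIDIA NVENC
              "amd": "h264_amf",       # AMD AMF (mostly on Windows builds)
          }

          def hw_encode(vendor, src="input.mp4", dst="output.mp4"):
              subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", HW_ENCODERS[vendor], dst], check=True)

          # hw_encode("intel")  # offloads the encode to the media engine instead of the CPU cores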

      • IchNichtenLichten@lemmy.world

        So they’re promising ARM-beating battery life while just beginning to incorporate the kind of custom silicon that Apple has been integrating for years now?

        I’ll believe it when I see it.

        • Thrashy@lemmy.world

          Right now Intel and AMD have less to fear from Apple than from Qualcomm. The people who can do what they need on a Mac, and want to, are already doing so; it’s businesses locked into the Windows ecosystem that drive the bulk of Intel’s and AMD’s laptop sales, and ARM laptops running Windows are the main short-term threat.

          If going wider and integrating more coprocessors gets them closer to matching Apple Silicon in performance per watt, that’s great, but Apple snatching up their traditional PC market sector is a fairly distant threat in comparison.

        • GamingChairModel@lemmy.world

          Well, specifically, they’re promising battery life that beats Qualcomm’s implementation of an ARM laptop SoC.

          Qualcomm is significantly behind Apple. I’m not convinced that the ISA matters all that much for battery life. AMD’s x86_64 performance per watt blew Intel’s out of the water in recent generations, and Qualcomm/Samsung’s ARM chips can’t compete with Apple’s ARM chips in the mobile, tablet, or laptop space.

          • Rinox@feddit.it

            Afaik, most laptops with Qualcomm X chips seem to be even more efficient than Apple’s MacBooks, at least when running native code. The biggest problem they are having is platform maturity: Microsoft has spent the last decade making all the wrong decisions, and now they’re waiting for software developers to port their code to ARM, while Apple has had a four-year head start.

            The chips are not bad, though. As for competing, there’s really no head-to-head contest, since Apple uses its chips exclusively in its own laptops, so there’s literally no room for direct competition.

            • GamingChairModel@lemmy.world

              The biggest problem they are having is platform maturity

              Maybe that’s an explanation for desktop/laptop performance, but I look at the mobile SoC space where Apple holds a commanding lead over ARM chips from Qualcomm, and where Qualcomm has better performance and efficiency than Samsung’s Exynos line, and I’m thinking a huge chunk of the difference between manufacturers can’t simply be explained by ISA or platform maturity. Apple has clearly been prioritizing battery life and efficiency for 10+ generations of Apple Silicon in the mobile market, and has a lead independent of its ISA, even as it trickled over to the laptop and desktop market.

        • ForgotAboutDre@lemmy.world

          Yeah. I think they will struggle to match Apple, and by the time they do, Apple will have progressed further.

          Another big issue is that these features need deep, well-implemented software support. That’s really easy for Apple: they control all the hardware and software, write all the drivers, and can modify their kernel to their heart’s content. A better processor alone is still unlikely to match Apple’s overall performance. Intel has to support more operating systems and interface with far more hardware, over which it has little control. It won’t be until years after release that these processors realistically reach their potential, by which time Intel and Apple will both have released newer chips with more features that Intel users won’t be able to use for a while.

          This strategy has Intel on the back foot, and they will remain there indefinitely. They really need a bolder strategy if they want to reclaim the best desktop processors. It’s pretty embarrassing that an Apple laptop with an integrated GPU completely wipes the floor with Intel desktop CPUs and dedicated GPUs in certain workflows; it can often be cheaper to buy the Apple device if you’re in a creative profession.

          Qualcomm will have similar issues, but at least they won’t be limited by the inferior x86 architecture. x86 mainly serves backwards compatibility and Intel/AMD. ARM is used in phones because, under the same fab and power restrictions, it makes better processors. This has been known for a long time, but consumers wouldn’t accept it until Apple proved it.

          I wouldn’t be surprised if these Intel chips flop initially, Intel cuts its losses, and stops developing new ones. Then we’ll see lots of articles saying Intel should never have stopped, that the chips were really competitive relative to their contemporaries, not realising the software just took that much time to use them effectively.

    • brucethemoose@lemmy.world

      People overblow the importance of ISA.

      Honestly, a lot of the differences come down to business decisions. There is a balance between price, raw performance, and power efficiency. Apple tends to focus exclusively on the latter two at the expense of price, while Intel (and AMD) have a bad habit of chasing cheap raw performance.

      • InvertedParallax@lemm.ee

        Decode overhead is fairly fixed, and proportionately has become tiny over the decades. Most larger instructions dispatch to microcode, and compilers know better than to use them much.

        There’s a price to x86, but for larger cores it’s pretty small, we’ve learned to work around it.

        Apple bothered to do the things Intel was too lazy to do for so long, particularly beefing up the out-of-order machinery and other resources where Intel didn’t want to spend the silicon. Intel has always been cheap, nickel-and-diming its way out of performance, and this is the cost.

      • GamingChairModel@lemmy.world

        Apple does two things that are very expensive:

        1. They use a huge physical area of silicon for their high-performance chips. The “Pro” line of M chips has a die size of around 280 square mm, the “Max” line is about 500 square mm, and the “Ultra” line is possibly more than 1000 square mm. This is incredibly expensive to manufacture and package.
        2. They pay top dollar for effectively exclusive access to TSMC’s new nodes. They lock up the first year or so of TSMC’s manufacturing capacity at any given node, after which there is enough capacity to accommodate designs from other TSMC clients (AMD, NVIDIA, Qualcomm, etc.). That means you can go out and buy an Apple device made on TSMC’s latest node before AMD or Qualcomm have even announced the lines that will use it.

        Those are business decisions that others simply can’t afford to follow.

        • InvertedParallax@lemm.ee

          800 mm² is roughly the reticle limit; they’re not past that on a single die, it wouldn’t make sense.

          They chiplet past 500; the economics break down otherwise.

          • GamingChairModel@lemmy.world

            They chiplet past 500

            I don’t know if I’m using the right vocabulary, maybe “die size” is the wrong way to describe it. But the Ultra line packages two Max SoCs with a high performance interconnect, so that the whole package does use about 1000 mm^2 of silicon.

            My broader point is that much of Apple’s performance comes from their willingness to actually use a lot of silicon area to achieve that performance, and it’s very expensive to do so.

            • InvertedParallax@lemm.ee

              You could say total die size, but you wouldn’t say die; that implies a single cut (exposure) of silicon.

              But agreed: Apple just took all the tricks Intel dabbled with and turned them up to 11. Intel was always too cheap, because they had crazy volumes (and, once upon a time, a good process) and there was no point.

    • Blue_Morpho@lemmy.world

      The RISC ISA isn’t simple anymore. It has more instructions than a ’90s CISC CPU.

      ARM has 64-bit, 32-bit, and 16-bit (Thumb) instructions.

      The legacy 8-to-32-bit Intel ISA doesn’t eat power if it isn’t used. It wastes a little silicon, but that’s an extremely tiny amount on modern CPUs.

    • SaltySalamander@fedia.io

      I’m guessing these are still x86_64, and from the description it seems like they’ve stacked a lot of different components onto a single chip. Normally both of those things would make it a big powerhouse, so I’m not sure how it’s going to beat ARM on battery

      You realize that ARM chips are also SoCs, containing all of those same, or similar, bits and bobs, right?

  • FiveMacs@lemmy.ca

    Can you just make the batteries better without beating our arms… there’s really no need for violence, Intel.