• EliasChao@lemmy.one (OP)

    Isn’t it funny that every tech commenter was like “Apple would have to re-engineer their whole iMessage stack if they want to cut off access to Beeper Mini”?

    • Nogami@lemmy.world

      That would seem to imply that tech commenters know less than Apple about Apple’s own servers. Shocking.

      My bet is that, if Apple comments at all, they'll talk about closing a security vulnerability rather than cutting off Android users.

        • Nogami@lemmy.world

          And the founder's quote is hilarious.

          “if Apple truly cares about the privacy and security of their own iPhone users, why would they stop a service that enables their own users to now send encrypted messages to Android users, rather than using unsecure SMS?”

          One of these things is their own iPhone users. The other is not.

          Swoosh.

          If you want security, stay in the Apple ecosystem and you won't need to send messages to insecure Android users.

        • jard

          deleted by creator

    • Dmian@lemmy.world

      The thing with this service is, if I understand it correctly, that they were using someone else’s device ID to send messages.

      So, say for example that someone started using my Mac Mini’s ID (my Mac being located in Madrid, Spain) to send iMessages in the US….

      Did people really expect Apple not to notice?

      It worked when it was some hacker’s project because at that time, a few stolen Apple device IDs didn’t raise too many red flags. But at a large scale, and used by a company, it may be easy for Apple to detect.

      And don’t be fooled: the system worked by taking someone else’s legitimate device ID and posing as that device to send messages through the system. So this company could be making money by using your Apple device ID. I’m not OK with that.
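
      (Just to illustrate why this is trivial to spot at scale: a toy "impossible travel" check on device-ID registrations. This is a made-up sketch, not Apple's actual logic; every name and threshold here is invented.)

      ```python
      # Hypothetical sketch: flag a device ID that shows up somewhere it
      # couldn't plausibly have travelled to. Not Apple's real detection logic.
      from dataclasses import dataclass
      from math import radians, sin, cos, asin, sqrt

      @dataclass
      class Registration:
          device_id: str
          lat: float
          lon: float
          timestamp: float  # seconds since epoch

      def distance_km(a: Registration, b: Registration) -> float:
          """Great-circle (haversine) distance between two registrations."""
          dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
          h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
          return 2 * 6371 * asin(sqrt(h))

      def looks_spoofed(prev: Registration, curr: Registration, max_kmh: float = 900.0) -> bool:
          """True if the same device ID moved faster than a plane could fly."""
          hours = max((curr.timestamp - prev.timestamp) / 3600, 1e-6)
          return distance_km(prev, curr) / hours > max_kmh

      # The Madrid Mac mini's ID suddenly sending from New York an hour later:
      madrid = Registration("MACMINI123", 40.4168, -3.7038, 0)
      new_york = Registration("MACMINI123", 40.7128, -74.0060, 3600)
      print(looks_spoofed(madrid, new_york))  # True -> worth a second look
      ```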

      • jard

        deleted by creator

        • chiisana@lemmy.chiisana.net

          I was under the impression that interaction with Apple’s servers required some kind of “proof” (honor system, really) that you’re using an Apple device, which used a device ID that was spoofed; just like Hackintoshes have done for push notifications for years.

          Worth noting that the Hackintosh scene got to the point where someone wrote scripts to generate random strings and brute-force them until they hit a valid device ID, so they’d literally assume someone else’s legitimate device identity to get push notifications.
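
          (For the curious, the brute-force idea was roughly the following. This is a made-up sketch, not working code against any real endpoint; `looks_valid_to_server` stands in for whatever acceptance check the push service actually performed.)

          ```python
          # Hypothetical sketch of the serial brute-force idea described above.
          import random
          import string

          def random_serial(length: int = 12) -> str:
              """Generate a plausible-looking alphanumeric serial string."""
              return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

          def find_usable_serial(looks_valid_to_server) -> str:
              """Keep guessing until some real device's serial happens to be accepted."""
              while True:
                  candidate = random_serial()
                  if looks_valid_to_server(candidate):
                      # At this point you're impersonating someone else's legitimate device.
                      return candidate
          ```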

          • jard

            deleted by creator

            • chiisana@lemmy.chiisana.net

              Thanks for digging into this and confirming my understanding!

              At a quick glance, this looks more secure than the old Hackintosh push-notification trick (which relied solely on a single device ID/serial number); it appears to be some kind of certificate-based identity system. That makes it harder to abuse, because without access to Apple’s private signing keys it should be very difficult to get a certificate signed by Apple to spoof the interaction. Though I wonder how the devices get that certificate in the first place, and whether that step becomes the next vector to compromise (i.e., if a signed certificate is issued during device activation, you could swipe one from a Mac you own, or the activation process itself becomes the next attack vector).
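
              (Conceptually, something like this. A made-up sketch using the Python `cryptography` package, assuming an RSA-signed chain; it is not Apple’s actual scheme, and the certificate inputs are placeholders.)

              ```python
              # Rough sketch of certificate-based device identity in general,
              # NOT Apple's actual implementation.
              from cryptography import x509
              from cryptography.hazmat.primitives.asymmetric import padding

              def chains_to_root(device_cert_pem: bytes, root_cert_pem: bytes) -> bool:
                  """True if the device certificate's signature verifies against the root's public key."""
                  device_cert = x509.load_pem_x509_certificate(device_cert_pem)
                  root_cert = x509.load_pem_x509_certificate(root_cert_pem)
                  try:
                      root_cert.public_key().verify(
                          device_cert.signature,
                          device_cert.tbs_certificate_bytes,
                          padding.PKCS1v15(),
                          device_cert.signature_hash_algorithm,
                      )
                      return True
                  except Exception:
                      return False

              # The point: without the root's *private* key you can't mint a certificate
              # that passes this check, so spoofing needs a certificate swiped from a
              # real device (or a compromise of the activation step that issues them).
              ```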

              Having interacted very briefly with Eric Migicovsky a long time ago (because of Pebble), this doesn’t surprise me that much. He’s a great guy and appears to want to do the right thing to help everyone. Beeper originally wanted to do this in the cloud with Mac systems/VMs, which is a costly endeavour. This POC lets the interaction run natively without Beeper essentially MITM’ing all of their users, so it would save the company a lot of money. The POC was allegedly done by some high school kid, and given Eric’s Pebble fame, I think he’s just thrilled that they could save some money and help a kid get started.

              In any case, it’s certainly interesting to see how this has been playing out, and I’m curious to see where it goes from here, because I doubt this is the end of the story.

        • TORFdot0@lemmy.world

          Beeper Mini still needed a device serial to register with Apple’s servers, which makes it easy for Apple to spot a swath of fake device serials being registered.
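
          (Toy illustration of that point, not Apple’s real pipeline: if you hold the list of serials you actually manufactured, a burst of registrations using unknown serials stands out immediately. The serial values below are invented.)

          ```python
          # Hypothetical check: what fraction of today's registrations used a serial
          # that was never manufactured?
          def fake_serial_rate(registered: list[str], manufactured: set[str]) -> float:
              if not registered:
                  return 0.0
              unknown = sum(1 for s in registered if s not in manufactured)
              return unknown / len(registered)

          manufactured = {"C02ABC123", "C02DEF456"}
          todays_registrations = ["C02ABC123", "ZZZFAKE001", "ZZZFAKE002", "ZZZFAKE003"]
          print(fake_serial_rate(todays_registrations, manufactured))  # 0.75 -> obvious spike
          ```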

    • misk

      Why would Apple have to reverse engineer their own protocol?