Per one tech forum this week: "Google has quietly installed an app on all Android devices called 'Android System SafetyCore'. It claims to be a 'security' application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more, making this application 'spyware' and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to 'Settings' > 'Apps', then delete the application."

  • Armand1@lemmy.world · 12 hours ago

    For people who have not read the article:

    Forbes states that there is no indication that this app can or will “phone home”.

    Its stated use is to let other apps scan an image they already have access to, to find out what kind of thing it is (known as "classification"). For example, to find out whether a picture you've been sent is a dick pic so the app can blur it.

    My understanding is that, if this is implemented correctly (a big 'if'), this can be completely safe.

    Apps requesting classification could be limited to classifying only files they already have access to. Remember that Android nowadays has "scoped storage", which lets you restrict folder access. If that's the case, it's no less safe than not having SafetyCore at all. It just saves you space, as companies like Signal, WhatsApp etc. no longer need to train and ship their own machine-learning models inside their apps; it becomes a common library / API any app can use (roughly the access model sketched below).
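
    To make that concrete, here's a purely hypothetical sketch. SafetyCore's real API surface isn't public, so every name below is made up; the point is just the access model, where the calling app supplies bytes it could already read:

    ```kotlin
    // Hypothetical sketch: SafetyCore's real API isn't public, so these
    // names are invented. The app hands over image bytes it could already
    // read under scoped storage; the classifier never browses storage itself.

    enum class ImageLabel { SAFE, SENSITIVE, UNKNOWN }

    // Stand-in for a shared, system-provided, on-device classifier.
    interface OnDeviceImageClassifier {
        fun classify(imageBytes: ByteArray): ImageLabel
    }

    class IncomingMediaHandler(private val classifier: OnDeviceImageClassifier) {
        // The messaging app already holds these bytes (e.g. a received photo),
        // so classifying them gives the classifier no new data access.
        fun shouldBlur(imageBytes: ByteArray): Boolean =
            classifier.classify(imageBytes) == ImageLabel.SENSITIVE
    }
    ```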

    It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don’t know enough to say.

    Besides, you think that Google isn’t already scanning for things like CSAM? It’s been confirmed to be done on platforms like Google Photos well before SafetyCore was introduced, though I’ve not seen anything about it being done on devices yet (correct me if I’m wrong).

    • ZILtoid1991@lemmy.world · 6 hours ago

      Issue is, a certain cult (Christian dominionists), with the help of many billionaires (including Muskrat), has installed a fucking dictator in the USA, all in service of their vow to "save every soul on Earth from hell". If you get a porn ban, it'll phone not only home, but directly to the FBI's new "moral police" unit.

    • Ulrich@feddit.org · 11 hours ago

      Forbes states that there is no indication that this app can or will “phone home”.

      That doesn’t mean that it doesn’t. If it were open source, we could verify it. As is, it should not be trusted.

        • FauxLiving@lemmy.world · 8 hours ago

          The Graphene devs say it's a local-only service.

          Open source would be better (and I can easily see open-source alternatives being made if you're not locked into a Google Android-based phone), but the idea is sound, and I can deny network privileges to the app with Graphene, so it doesn't matter if it does decide to one day try to phone home… so I'll give it a shot.

          • Armand1@lemmy.world · 7 hours ago

            God, I wish I could completely deny internet access to some of my apps on stock Android. It's obvious why they don't allow it, though.

            • xspurnx@lemmy.dbzer0.com · 6 hours ago

              Check out NetGuard. It's an app that pretends to be a VPN client so most of your traffic has to go through it, and then you can deny/allow internet access per app. Even works without root (rough idea sketched below).
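
              The trick looks roughly like this (my own sketch, not NetGuard's actual code): an Android VpnService captures traffic into a local tun interface, and whatever it never forwards is effectively dropped:

              ```kotlin
              import android.net.VpnService
              import android.os.ParcelFileDescriptor

              // Sketch of the local-VPN firewall trick, not NetGuard's real code.
              // Only the apps we want to block are routed into the tun interface;
              // since we never forward their packets, those apps go offline.
              class PerAppFirewall : VpnService() {

                  private var tun: ParcelFileDescriptor? = null

                  fun blockApps(blockedPackages: List<String>) {
                      val builder = Builder()
                          .setSession("per-app-firewall")
                          .addAddress("10.0.0.2", 32) // private address for the tun
                          .addRoute("0.0.0.0", 0)     // route all IPv4 into the tun

                      // Capture only the listed apps; every other app bypasses the
                      // "VPN" and keeps its normal, direct connection.
                      blockedPackages.forEach { builder.addAllowedApplication(it) }

                      // We never read from or forward the tun, so the captured
                      // packets go nowhere: the blocked apps have no internet.
                      tun = builder.establish()
                  }
              }
              ```

              (A real app also has to declare the service in its manifest and call VpnService.prepare() so the user can approve the VPN.)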

    • Opinionhaver@feddit.uk · 11 hours ago

      Doing the scanning on-device doesn't mean the findings can't be reported further. I don't want others going through my private stuff without asking - not even machine learning.

    • lepinkainen@lemmy.world · 12 hours ago

      This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous amount of safeties to protect people's privacy, and still it got shouted down.

      I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing

      • lka1988@lemmy.dbzer0.com · 8 hours ago

        I have 5 kids. I’m almost certain my photo library of 15 years has a few completely innocent pictures where a naked infant/toddler might be present. I do not have the time to search 10,000+ pics for material that could be taken completely out of context and reported to authorities without my knowledge. Plus, I have quite a few “intimate” photos of my wife in there as well.

        I refuse to consent to a corporation searching through my device on the basis of “well just in case”, as the ramifications of false positives can absolutely destroy someone’s life. The unfortunate truth is that “for your security” is a farce, and people who are actually stupid enough to intentionally create that kind of material are gonna find ways to do it regardless of what the law says.

        Scanning everyone’s devices is a gross overreach and, given the way I’ve seen Google and other large corporations handle reports of actually-offensive material (i.e. they do fuck-all), I have serious doubts over the effectiveness of this program.

        • Ledericas@lemm.ee · 39 minutes ago

          I'm not surprised if they're also using AI, which is very error-prone.

      • Natanael@infosec.pub · 11 hours ago

        Apple had it report suspected matches, rather than warning locally

        It got canceled because the fuzzy hashing algorithms turned out to be so insecure it's unfixable (it's easy to plant false positives; see the toy sketch below).
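
        To illustrate the false-positive problem with a toy example (nothing to do with Apple's actual NeuralHash): with a plain 8x8 average hash, any edit that keeps each pixel on the same side of the image's mean brightness leaves the hash unchanged, so very different images can match:

        ```kotlin
        // Toy perceptual hash: one bit per pixel, set when the pixel is
        // brighter than the image's mean. Real fuzzy hashes are fancier,
        // but share the weakness that nearby inputs collide by design.
        fun averageHash(pixels: IntArray): Long {
            require(pixels.size == 64) // 8x8 grayscale, values 0..255
            val mean = pixels.average()
            var hash = 0L
            for ((i, p) in pixels.withIndex()) {
                if (p > mean) hash = hash or (1L shl i)
            }
            return hash
        }

        fun main() {
            val original = IntArray(64) { if (it % 2 == 0) 200 else 50 }
            // Very different pixel values, but every pixel stays on the same
            // side of the mean, so the hash comes out identical.
            val doctored = IntArray(64) { if (it % 2 == 0) 255 else 0 }
            println(averageHash(original) == averageHash(doctored)) // true
        }
        ```

        The real attacks on NeuralHash went further: researchers crafted innocuous-looking images whose hashes matched a chosen target image's hash.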

        • Clent@lemmy.dbzer0.com · 5 hours ago

          The official reason they dropped it was security concerns. The more likely reason was the massive outcry that occurs when Apple does these questionable things. Crickets when it's Google.

          The feature was re-added as an optional child-safety feature called "Communication Safety" that, on child accounts, automatically blocks nudity sent to children.

      • Noxy@pawb.social · 11 hours ago

        it had a ridiculous amount of safeties to protect people’s privacy

        The hell it did, that shit was gonna snitch on its users to law enforcement.

      • Modern_medicine_isnt@lemmy.world · 12 hours ago

        Overall, I think this needs to be done by a neutral 3rd party. I just have no idea how such a 3rd party could stay neutral. Same with social media content moderation.