Many of you might have seen the Australian ban on social media for under-16s, announced with no idea of how to implement it. There have been mentions of “double-blind age verification”, but I can’t find any information on it.

Out of curiosity, how would you implement this with privacy in mind if you really had to?

  • socsa@piefed.social
    ↑1 · 24 minutes ago

    It can’t be. The entire concept is a Trojan horse to kill the anonymous internet.

  • Kissaki@programming.dev
    ↑1 · edited · 44 minutes ago

    Who is the authority on age? A state agency or service, much like the state already issues IDs that state your age.

    Preferably, we want the user to interact with a website and that website to request age verification, but the website should not talk to the government directly; the exchange should go through the user.

    Thus, something like this:

    1. State agency issues a certificate to the user
    2. User sets a password to encrypt that certificate
    3. User connects to random website A
    4. Random website A creates an age verification request that only the state agency can resolve, and sends it to the user
    5. User forwards the request to a state service, authenticating with their certificate
    6. State agency signs a confirming response
    7. User passes the response along to random website A

    There may be simpler or less convoluted alternatives, but I’m sure it would be possible, and I think it lays out how “double-blind” verification could work.

    Random website A does not learn the user’s identity or age - only the specific claim it asked to verify - and the state agency only sees a request, not where it came from or what it is for, beyond whatever the request and the user’s forwarding reveal. A rough sketch of that flow is below.
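
    A minimal sketch of that flow in Python, assuming Ed25519 signatures from the third-party cryptography package; all names here are hypothetical, and a real scheme would also need the request encrypted to the agency, replay protection, and actual user authentication:

    ```python
    import json
    import secrets

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The state agency's long-term signing key; websites only ever see the public half.
    agency_key = Ed25519PrivateKey.generate()
    agency_public_key = agency_key.public_key()

    # Step 4: website A builds a one-time challenge stating only the claim it cares about.
    challenge = {"nonce": secrets.token_hex(16), "claim": "age_over_16"}

    # Steps 5-6: the user forwards the challenge; the agency checks the user's record
    # (user authentication omitted here) and signs a yes/no answer bound to the nonce.
    def agency_respond(challenge: dict, user_meets_claim: bool) -> dict:
        payload = json.dumps({**challenge, "result": user_meets_claim}, sort_keys=True).encode()
        return {"payload": payload, "signature": agency_key.sign(payload)}

    # Step 7: the user passes the signed response back; website A checks the signature
    # and its own nonce, learning nothing beyond "claim satisfied: yes/no".
    def website_verify(response: dict, expected_nonce: str) -> bool:
        try:
            agency_public_key.verify(response["signature"], response["payload"])
        except InvalidSignature:
            return False
        claims = json.loads(response["payload"])
        return claims["nonce"] == expected_nonce and claims["result"] is True

    response = agency_respond(challenge, user_meets_claim=True)
    print(website_verify(response, challenge["nonce"]))  # True
    ```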

  • Simulation6
    ↑11 · 12 hours ago

    Sites are just going to ask people ‘Are you over 16? (Y/N)’. Site is now legally covered, and that is all anyone cares about.

  • PlexSheep@infosec.pub
    ↑15 · edited · 14 hours ago

    If governments got their shit together, we could have something like age assertion with the eID chips in our IDs. Imagine that. The important thing is that website.com just asks the government “Is this user an adult?” and the government replies “Yes”. No information besides the relevant bit is provided, and it goes through a trusted authority.

    Yeah, not gonna happen, just like using the keys in my Personalausweis to send encrypted mail.

    • FooBarrington@lemmy.world
      ↑8 · 9 hours ago

      The system would have to be built so that the government can’t connect the user to the website, as you don’t want the government to build profiles on website usage by person. Though the bigger challenge here is trust - even a technically perfect system could be circumvented by the operators.

      A good example of this were the COVID tracking apps. The approach was designed so that as little information as possible was leaked.

      • Buddahriffic@lemmy.world
        ↑3 · 6 hours ago

        You could have a system where a government site cryptographically signs a birth year plus a random token provided by the site you want to use.

        Step 1: access the site
        Step 2: the site sends a random token
        Step 3: the user’s browser sends the token plus the user’s authentication information to the government site
        Step 4: the government site replies with a string containing the birth year, the token, and a signature
        Step 5: the browser sends that string back to the original site, which uses the government’s public key to verify the signature, showing that the birth year is attested by the government

        There’s no need for any direct connection between the user’s identity and the site, or between the government and the site. A rough sketch of the idea follows.
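
        A rough sketch of steps 2-5, again hypothetical and using Ed25519 from the third-party cryptography package; a real version would also bind the signature to an expiry and to the user’s session:

        ```python
        import secrets
        from datetime import date

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        gov_key = Ed25519PrivateKey.generate()   # held by the government service
        gov_public_key = gov_key.public_key()    # published for every site to use

        # Step 2: the site generates a random token tied to this visit.
        token = secrets.token_hex(16)

        # Step 4: after authenticating the user (omitted), the government signs birth year + token.
        def gov_attest(birth_year, token):
            statement = f"{birth_year}:{token}".encode()
            return statement, gov_key.sign(statement)

        # Step 5: the site checks the signature and its token, then computes the age itself.
        def site_check(statement, signature, expected_token, min_age):
            try:
                gov_public_key.verify(signature, statement)
            except InvalidSignature:
                return False
            birth_year, received_token = statement.decode().split(":", 1)
            return received_token == expected_token and date.today().year - int(birth_year) >= min_age

        statement, signature = gov_attest(birth_year=1990, token=token)
        print(site_check(statement, signature, token, min_age=16))  # True
        ```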

  • /home/pineapplelover@lemm.ee
    ↑4 · 13 hours ago

    Well, Australia will probably do something privacy-invading and fascist.

    I guess if you want it to be somewhat private, you could have some kind of hash or token generated from your identification information. I bet that would be fairly private.

  • conciselyverbose@sh.itjust.works
    ↑27 ↓2 · 1 day ago

    You can’t.

    Age verification is not compatible with any remotely acceptable version of the internet. It’s an obscene privacy violation in all cases by definition.

    Any implementation short of a webcam watching you while you use the site is trivial to bypass with someone else’s ID, while opening numerous massive tracking/security holes for no reason.

  • eyeon@lemmy.world
    ↑10 · 1 day ago

    All I can think of are some variations of you trusting a service to validate your id and give you a token that just asserts your id has been validated.

    But it’s still not really privacy-preserving, because it relies on trusting both parties not to collaborate against your privacy. If at some point the ID provider decides to start keeping records of which tokens were generated from your ID, and the service provider tracks what was consumed with each token, then it can all be put back together.

    • phlegmy@sh.itjust.works
      ↑1 · 14 hours ago

      That’s when you add an extra validator (an extra point of failure, admittedly).
      Server 1 generates a token for server 2 to validate.
      You send the token to server 2, who validates and generates you a token for server 3. Then finally server 3 validates the token and grants/denies your access.

      The more nodes you have across different countries, the harder it is for the last server to discover your identity.

      Definitely not without its flaws, but I wonder if a decentralised node setup similar to the Tor network could work. (A toy sketch of the relay idea follows.)
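
      A toy sketch of that relay, hypothetical and using Ed25519 from the third-party cryptography package; each hop only checks the previous hop’s signature, loosely like onion routing:

      ```python
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      # Three independent validators, ideally in different jurisdictions.
      keys = [Ed25519PrivateKey.generate() for _ in range(3)]
      public_keys = [k.public_key() for k in keys]

      def hop(token, signature, index):
          # Each server verifies the previous server's signature, then signs the token itself.
          if index > 0:
              public_keys[index - 1].verify(signature, token)  # raises InvalidSignature if forged
          return token, keys[index].sign(token)

      token, sig = b"age-check-passed:nonce-1234", None
      for i in range(3):
          token, sig = hop(token, sig, i)

      # The final site only needs the last server's public key to accept the token.
      public_keys[2].verify(sig, token)
      print("access granted")
      ```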

  • hector@sh.itjust.works
    ↑13 ↓2 · 2 days ago

    My friend has worked with a government to create zero-knowledge proofs from IDs. Turns out there’s a lot of good software engineered to solve that problem.

    The UX is still shit tho

  • letsgo@lemm.ee
    ↑30 · 2 days ago

    Not a cryptographic expert by any means, but maybe something like this would work. It would be implemented in common places people shop - supermarkets, for instance. You’d go up to customer service and show your ID for visual confirmation only; no records could be created. In return, the service rep would give you a list of randomised GUIDs, against which the only permissible record is “has been taken”. Each time you need to prove your age, you’d feed in one of those GUIDs. (A rough sketch of the issuer’s side is below.)
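
    A rough sketch of that issuer’s side, with hypothetical names; the only state kept is the set of unspent GUIDs, with nothing tying them to a person:

    ```python
    import uuid

    # GUIDs issued after an in-person ID check; nothing links a GUID to the person who got it.
    unspent = set()

    def issue_batch(count=10):
        batch = [str(uuid.uuid4()) for _ in range(count)]
        unspent.update(batch)
        return batch

    def redeem(guid):
        # Each GUID proves age exactly once; the only record kept is "has been taken".
        if guid in unspent:
            unspent.remove(guid)
            return True
        return False

    codes = issue_batch()
    print(redeem(codes[0]))  # True
    print(redeem(codes[0]))  # False - already taken
    ```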

      • litchralee@sh.itjust.works
        ↑14 · edited · 2 days ago

        Sadly, this type of scheme suffers from: 1) repudiation, and 2) transferability. An ideal system would be non-repudiable, meaning that when a GUID is used, it is unmistakably an action that could only be undertaken by the age-verified person. But a GUID cannot guarantee that, since it’s easy enough for an adult to start selling their valid GUIDs online to the highest bidder en masse. And being a simple string, it can easily and confidentially be transferred to the buyer, so that no one but those two would know that the transaction actually took place, or which GUID was passed along.

        As a general rule, when complex questions arise which might possibly be solved by encryption, it’s fairly safe to assume that expert cryptographers have already looked at the problem and that no easy or obvious solution exists. That’s not to say that cryptographers must never be questioned, but that the field is complicated enough that incomplete answers abound.

        IMO, the other comments have it right: there does not exist a general solution to validate age without also compromising anonymity or revealing one’s identity to someone. And that alone is already a privacy compromise.

        • JeremyHuntQW12@lemmy.world
          ↑3 · edited · 2 days ago

          You upload your identity to a site and it gives you a date-stamped token which confirms your age.

          Then, when that token is uploaded to an SM site, the SM site verifies the identity of the giver with the site that issued the token. The identity is a hash generated by the token site and contained in both the token and a namespace at the token site, so only the token site knows the real identity. Once the token has been confirmed, the namespace is re-used.

          So you can’t really sell the token, because it’s linked back to the identity you uploaded to the token site. You need to be logged in to the token site. (A rough sketch of this is below.)
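
          One rough reading of that in code, with hypothetical names; the “namespace” here is just the lookup the token site keeps so it can answer the confirmation call:

          ```python
          import hashlib
          import secrets
          from datetime import datetime, timezone

          # --- token site ---
          namespaces = {}  # identity hash -> currently outstanding token

          def issue_token(identity_document):
              identity_hash = hashlib.sha256(identity_document).hexdigest()
              token = f"{identity_hash}:{datetime.now(timezone.utc).date()}:{secrets.token_hex(8)}"
              namespaces[identity_hash] = token
              return token

          def confirm(token):
              identity_hash = token.split(":", 1)[0]
              # Only the token site can map this hash back to a real identity/login.
              if namespaces.get(identity_hash) == token:
                  del namespaces[identity_hash]  # "the namespace is re-used" once confirmed
                  return True
              return False

          # --- social media site ---
          token = issue_token(b"passport data page")  # user uploads ID, gets a token
          print(confirm(token))                       # SM site asks the token site: True
          print(confirm(token))                       # a second use fails: False
          ```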

          • litchralee@sh.itjust.works
            ↑5 · edited · 2 days ago

            To make sure we’re all on the same page, this proposal involves creating an account with a service provider, then uploading some sort of preexisting, established proof-of-identity (eg passport data page), and then requesting a token against that account. The token is timestamped and non-fungible, so that when the token is presented to an age-restricted website, that website can query the service provider to verify that: 1) the token is still valid, 2) the person associated with the token is at least a certain age.

            If I understood that correctly, what you’re describing is an account service combined with an identity service, which could achieve the objectives of a proof-of-age service, but does not minimize privacy complications. And we already have account services of varying degrees of complexity: Google Accounts, OAuth, etc. Basically any service where you log in, since the point of logging in is to associate with an account, although one person can have multiple accounts. Passing around tokens isn’t strictly necessary, since you can just ask the user to prove account ownership by signing into their Google Account, for example. An account service need not necessarily verify age, eg signing in to post a comment on a news article.

            Compare this with an identity service like ID.me, which provides records on an individual; there cannot be multiple records for the same live person. This type of service is distinct from an account service, but some accounts are necessarily tied to a single identity, such as online banking. But apart from KYC regulations or filing one’s taxes online, an identity service isn’t required for most day-to-day activities, and any additional uses pose identity theft concerns.

            Proof-of-age – as I understand it from the Australian legislation – does not necessarily demand an identity service be used to satisfy the law, but the question in this Lemmy thread is whether that’s a distinction without a difference. We don’t want to be checking identities if we don’t have to, for privacy and identity theft reasons.

            In short, can a person be uniquely, anonymously age-verified online? I suspect not. Your proposal might be reasonable for an identity service, but does not move us further towards a theoretical privacy-centric proof-of-age validation mechanism. If such a mechanism doesn’t exist, then the Australian legislation would be mandating identity checks for subject websites, which then become targets for the holder of those identity records. This would be bad.

    • LordCrom@lemmy.world
      ↑1 · 2 days ago

      To be certain the list isn’t being handed out willy-nilly, your ID must be scanned, and that scan will be kept for auditing purposes. If only 10 GUIDs can be given out at a time, this is the only way, plus it identifies IDs that are used too often.

      And I can guarantee the powers that be will turn this into a service like the stupid id.me, where you create an account for GUID access.

  • ben_dover@lemmy.ml
    ↑11 ↓5 · 1 day ago

    In blockchain tech, there’s the concept of “zero-knowledge proofs”, where you can prove you hold certain information without revealing the information itself.

    • sinceasdf@lemmy.world
      ↑4 · 1 day ago

      It would be interesting to see a government tackle setting up a trustless system like this, as cybersecurity best practices would require. I think it’s a thorny issue without a trusted authority, though.

      What stops an ID from being posted publicly or shared en masse? One ID could be used unlimited times - just sell the key to minors for $1 at no risk to oneself, since there’s no record of the ‘transaction’ being passed around. That’s better for individual privacy, but it undermines the political impetus for wanting the verification in the first place. Usage would probably have to be monitored or capped, which defeats much of the advantage of an anonymous protocol (or you accept that abuse is unenforceable).

    • IphtashuFitz@lemmy.world
      ↑3 · 1 day ago

      So how would you use it to solve this problem? There still needs to be some sort of foolproof way of saying “person X is only 14 years old”.

      • planish@sh.itjust.works
        ↑6 · 1 day ago

        You would prove something like “I possess a private key that matches one of the public keys in this list of keys belonging to people at least X years old”, but without revealing which entry in the list is yours. That is the zero-knowledge proof’s cool trick.

  • e0qdk@reddthat.com
    ↑38 · edited · 2 days ago

    Frankly, the only sane option is an “Are you over the age of (whatever is necessary) and willing to view potentially disturbing adult content?” style confirmation.

    Anything else is going to become problematic/abusive sooner or later.

    • actually@lemmy.world
      ↑13 · 2 days ago

      Doesn’t this assume the issuing agency’s employees are all morally sound and not leaking data, unnoticed, through a badly designed internal system built by people who are out of touch? Most things like this are designed that way, regardless of country.

      I’m sure one could make it watertight, but it’s so hard, and it still depends on trusting people. The conversation here is about one piece of a larger system. There are probably a hundred moving parts in any bureaucracy.

      • demesisx@infosec.pub
        ↑29 ↓2 · 2 days ago

        This is the situation EVERYWHERE. How do we know there aren’t back doors in our OSes? We literally have no clue. We do THE BEST WE CAN using the clues we have.

        • pro3757@programming.dev
          ↑17 · edited · 2 days ago

          Yeah, these things quickly boil down to the trusting trust thing (see Ken Thompson’s Turing award lecture). You can’t trust any system until you’ve designed every bit from scratch.

          You gotta put your trust somewhere, or you won’t be able to implement jack.

          • socsa@piefed.social
            ↑1 · 12 minutes ago

            This isn’t as limiting as it seems at first glance, though. Sending pictures of a message enciphered with a true one-time pad doesn’t rely on the security of the transport or the camera. From there you can choose to make a compromise for convenience and get to things like private-key cryptography, where the ciphers are done via basic XOR arithmetic you can do by hand. (A toy example follows.)
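
            For illustration, the XOR step really is that small; this is a toy sketch, and a real one-time pad needs a truly random, never-reused key at least as long as the message:

            ```python
            import secrets

            message = b"attack at dawn"
            pad = secrets.token_bytes(len(message))  # the one-time pad, shared in advance

            ciphertext = bytes(m ^ k for m, k in zip(message, pad))    # encrypt: XOR with the pad
            recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))  # decrypt: XOR again

            print(recovered == message)  # True
            ```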

        • actually@lemmy.world
          ↑7 ↓2 · 2 days ago

          I don’t know anything about cryptology, but I can imagine how many things can go wrong when hooking up the parts and running them.

          If the law says an age verification system must be made, then it will be made.

          But I think one either has age verification or privacy, not both, in any country in the world.

          I’m totally sure many of the discussions here about crypto are way above my head. But I’m equally sure that while any one part will look fine on paper, the sum total will be used by an expanding government agency, by criminals, or both.

      • MalReynolds@slrpnk.net
        ↑5 · 2 days ago

        I’ve always thought it should be the relevant ID-issuing organisation; the damage to privacy has already been done with them, so you might as well leverage it.

    • leisesprecher@feddit.org
      ↑12 ↓7 · 2 days ago

      God I hate cryptography so much for making me feel stupid every time I read anything about it.

      I want to feel smat!

      • demesisx@infosec.pub
        ↑22 · edited · 2 days ago

        I find it intimidating for sure. They say “never roll your own crypto” and I take those words to heart. Still, it would suck to have to hire someone and just trust their work. That person could be another Sam Bankman-Fried or Do Kwon, and you’d be party to their scam with no idea.

        • leisesprecher@feddit.org
          ↑2 ↓12 · 2 days ago

          I’m not sure what these things have to do with each other. How exactly would cryptography have prevented SBF, you know, a crypto bro?

          • demesisx@infosec.pub
            ↑13 ↓4 · 2 days ago

            It wouldn’t have. You totally misunderstood my comment. Reread it.

            To paraphrase: when you hire a cryptographer to work on your project, you have to hope they are not a scammer, because they could easily lie to you about the soundness of their cryptography and you’d have no idea. You see, SBF and Do Kwon were liars. If they had been cryptographers (they weren’t), their employer would have had to believe them, since they would be experts in something nearly impossible for a layman to understand.

            Do you get it yet?

            • leisesprecher@feddit.org
              ↑4 ↓8 · 2 days ago

              I get what you’re trying to say, but I’m not sure it makes sense.

              I mean, that’s literally every field you’re not an expert in. And most of us are experts in less than one field.

              You don’t know about medicine, car engines, electricity or tax laws, you have your guys for that. Even in our field, we have guys for databases, OSes, networking, because quite frankly nobody understands those really.

              So I’m not sure what the point of your comment is. That having experts is good? Yeah, I guess? Did we need to have that reinforced?

              • demesisx@infosec.pub
                ↑7 · 2 days ago

                If a doctor or mechanic was wrong, at least you’d have an inkling that things were wrong and you’d be able to sue them. Whereas with cryptography, no one has ANY IDEA WHATSOEVER if there are back doors until they are used to rob people blind. In all of the cases you mentioned, victims of those abuses have recourse whereas in cryptography, if things are wrong, they often CANNOT be patched and it’s even exceptionally hard for an expert to prove what went wrong.

      • demesisx@infosec.pub
        ↑19 ↓5 · edited · 2 days ago

        You seem to be joking, but ZK and homomorphic encryption don’t necessarily need to involve blockchain, though they can.

        This is like someone mentioning UUIDs and you leaving a weird sarcastic comment about databases (and everyone suddenly villainizing them because they’ve been used for scams).

        • PoolloverNathan@programming.dev
          ↑11 ↓5 · 2 days ago

          I believe they were referring to last year’s trend of blockchain being introduced to everything unnecessarily (as a marketing buzzword, similar to AI).

          • demesisx@infosec.pub
            ↑14 ↓4 · edited · 2 days ago

            I got the joke. What I didn’t get is why it was even remotely relevant to the discussion at hand since ZK is used a lot in crypto but it’s also used everywhere else. It muddied the waters and made the joke somewhat nonsensical, IMO. Perhaps OP was unaware of how prevalent ZK is in the crypto world…

            Oh well. Have a good day.

            • jonathan@lemmy.zip
              ↑3 ↓5 · edited · 2 days ago

              You say you got the joke, but everything else you said suggests you didn’t. Just to be clear I wasn’t being critical of your reply, I was mocking the cryptobros the other poster mentioned.

  • Draconic NEO@programming.dev
    ↑22 ↓4 · 2 days ago

    It can’t. It requires an invasion of privacy to verify information about the individual that the site has no right to access.

    Digital age verification goes against privacy. Let’s not delude ourselves into thinking it can be done privately.

  • Asidonhopo@lemmy.world
    ↑23 ↓1 · 2 days ago

    I seem to remember Leisure Suit Larry verified age using trivia questions that only older people would answer correctly. I know this because at 8 years old I guessed enough of them on my father’s friend’s computer to play it.

    • Kissaki@programming.dev
      ↑1 · 38 minutes ago

      I talked to a friend of mine last week and they didn’t know about the old PS/2 mouse/keyboard connectors. They had seen them before, but weren’t familiar with them. Nobody who has only ever used USB devices will remember those.

    • onlinepersona@programming.devOP
      ↑7 · 2 days ago

      oof, I’d fail trivia questions for my age group because I had a… complicated childhood. But it would probably be a problem for foreigners who didn’t grow up in the country. Imagine coming from Chile and having to know Australian trivia from the 70s or something to sign up for a social media platform 😄

      Anti Commercial-AI license

  • incogtino@lemmy.zip
    ↑24 ↓3 · 2 days ago

    A joke answer, but with a kernel of truth - IRL age verification often requires a trusted verifier (working under threat of substantial penalty), but usually doesn’t require that verifier to keep any records of individual verification actions.

    https://chinwag.au/verification/

    • onlinepersona@programming.devOP
      ↑5 · 2 days ago

      As in, you have to roll up to an “age verification bureau” and say “I’d like to sign up to $platform, please verify that I’m of legal age to use it and tell them so”, then you buy a “token” that you can enter upon signing up? Am I understanding that correctly?

      Anti Commercial-AI license

      • Pup Biru@aussie.zone
        ↑1 · 23 hours ago

        yes and no: the government already has systems in place that know your age, or it can pay 3rd parties to maintain records… so yes, kinda - you’d have to verify with them, or they’d already have the records - but you wouldn’t need to do that for each platform: it’d likely act like a social login (“login with facebook” etc) where you just tap a button and have the service attest to identity details without providing the identity itself

      • JustEnoughDucks@feddit.nl
        ↑1 · edited · 2 days ago

        Here in Belgium we have cryptographically signed tokens on our legally mandated IDs.

        You can use that token to do all sorts of things (my company uses them as authorship signatures in our quality system for medical devices). If we had some standard like that, then we could have software that generates an OTP based on the token, checked against a big list of valid OTPs in a website API or so, not linked to the token itself (so you would have to trust the software that generates the OTP). You would get people using the same OTP, but that wouldn’t matter because it would just be a validity check. Kind of like the old product key generators for games.

        Sure, this could be abused or circumvented by a programmer or a hack, but for 95% of the population it would be effective age verification without giving away any information or statistics. People could also abuse it by saving a code and using it constantly, but then they would already have been verified. Teens would share codes around too, but it would still be far more effective than nothing, especially for the low stakes of age verification. (A rough sketch of the OTP idea is below.)
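
        One way to read that idea is an HMAC-based one-time code, similar in spirit to HOTP. This is a hypothetical sketch; the part the comment glosses over is how the secret and the list of valid codes get distributed without linking them to the card:

        ```python
        import hashlib
        import hmac

        # Hypothetical: a secret derived from (but not equal to) the eID card's key material.
        card_secret = b"derived-from-eid-card"

        def one_time_code(counter):
            # HOTP-style: HMAC a counter with the secret, then truncate to a short code.
            digest = hmac.new(card_secret, counter.to_bytes(8, "big"), hashlib.sha256).digest()
            return digest.hex()[:8]

        # The issuer pre-generates a window of valid codes and hands only this set to the website,
        # so the website never sees the card secret itself.
        valid_codes = {one_time_code(c) for c in range(1000)}
        print(one_time_code(42) in valid_codes)  # True
        ```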

      • incogtino@lemmy.zip
        ↑3 · edited · 2 days ago

        I wasn’t thinking in detail, just addressing an assumption I think a lot of age verification discussions include, which is that the verifier would have to be trusted to maintain some sort of account for you, retaining your data etc.

        I have no idea what the legislation says, but I’d be a happier privacy-conscious user if the verification platforms were independent (i.e. not in any other data business) and regulated, with a requirement they don’t retain my personal data at all (like the liquor store example)

        So the verifier gathers data from you, matches it with a request from the platform, provides confirmation that some standard has been met, and deletes almost all personal information - I acknowledge that this may not rise to the double-blind standard of the original request

        Edited to add:

        • you don’t have to ‘buy’ a token, the platform needs to pay verifiers as a cost of business

        • some other comments are asking how you prevent the verifier knowing the platform - to my mind you don’t, instead the verifier retains a request id record from the platform, but forgets entirely who you are