Niantic, the company behind the extremely popular augmented reality mobile games Pokémon Go and Ingress, announced that it is using data collected by its millions of players to create an AI model that can navigate the physical world.

In a blog post published last week, first spotted by Garbage Day, Niantic says it is building a “Large Geospatial Model.” The name, the company explains, is a direct reference to Large Language Models (LLMs) like OpenAI’s GPT, which are trained on vast quantities of text scraped from the internet in order to process and produce natural language. Niantic explains that a Large Geospatial Model, or LGM, aims to do the same for the physical world, a technology it says “will enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways, forming a critical component of AR glasses and fields beyond, including robotics, content creation and autonomous systems. As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world’s future operating system.”

By training an AI model on millions of geolocated images from around the world, Niantic expects the model to be able to predict its immediate environment in the same way an LLM is able to produce coherent and convincing sentences by statistically determining what word is likely to follow another.
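The next-word analogy can be made concrete with a toy example. This is not Niantic's model or anything like GPT, just the bare statistical idea of predicting what most likely comes next from observed co-occurrence counts:

```python
# Toy next-word predictor from bigram counts: the statistical idea the
# article compares the LGM to, in miniature.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely word to follow `word`."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" twice; "mat"/"fish" once
```

An LGM would, per the blog post, do the spatial equivalent: given part of a scene, predict what is statistically likely to be adjacent to it.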

  • NutWrench@lemmy.world · 64 points · 2 days ago

    Ever wonder why websites that use Captchas prefer pictures of cars, busses, crosswalks, stop signs, bicycles, motorcycles and stairs?

    They’re using YOU to train their AI models.

  • Glitterbomb@lemmy.world · 70 points · 2 days ago

    This Pokémon Go player has unwittingly poisoned an AI dataset by spoofing across bodies of water for years.

  • j4p@lemm.ee · 26 points · 2 days ago

    It’s 2030. Your state-of-the-art AR glasses have a bug. Mr. Mime is lurking behind every corner… always watching

    • DerArzt@lemmy.world · 14 points · 2 days ago

      It’s 2024 how do I get rid of Mr. Mime in my peripheral vision when I’m not wearing glasses?

  • Lvxferre@mander.xyz · 120 points · 2 days ago

    I’ll copypaste an interesting comment here:

    [Stephen Smith] This article is a great example of a trend I don’t think companies realize they’ve started yet: They have killed the golden goose of user-generated content for short-term profit. // Who would willingly contribute to a modern-day YouTube, Reddit, StackOverflow, or Twitter knowing that they are just feeding the robots that will one day replace them?

    You don’t even need robots replacing humans, or people believing so. All you need is people feeling that you’re profiting at their expense.


    Also obligatory “If you’re not paying for the product, then you are the product”.

      • nieminen@lemmy.world · 1 point · 21 hours ago

        While true, and probably pretty common nowadays: if you’re not paying for the product, there’s basically a 100% chance you’re the product. At least if you have to pay for it, there’s a chance you’re NOT also the product.

        • ILikeBoobies@lemmy.ca · 4 points · 20 hours ago

          Public company 100% chance you’re the product

          Private company you might not be

          Paying is irrelevant, you are completely neglecting open source

    • milicent_bystandr@lemm.ee · 14 points · 2 days ago

      Thing is, consider Google Maps. It’s been harvesting data, secretly and openly, for a long time. I vaguely remember a time when Street View cars were found to be harvesting WiFi information in Australia, and their response was, “oops, our engineers made a mistake.” Yeah, right.

      But Google Maps is an amazing tool. All that traffic info? All those time estimates? Maybe it’s worth it. Maybe if people knew what they were providing, and the result they’d get, they’d still be happy to give all that “free” data to Google.

      Putting aside the ethics of a company taking (stealing? or shall we call it, pirating?) all the ownership of that knowledge asset, if they make a really useful tool from it perhaps Pokémon players will be glad to have been part of such an epic achievement.

      • Danitos@reddthat.com · 10 points · 2 days ago

        The traffic data is not as good as it appears. It is completely closed, only given to police and government agencies. No API, no numerical values for speed (only 5 ‘color codes’ that are relative to location, so they’re almost useless), and numerical data is not given even to academics. I spent almost a whole month trying to get actually useful data for academic purposes, but Google really went out of their way to make it impossible.

        It has the potential to be an excellent tool: crowdsourced real-time data, access to historical data, and incredibly fine-grained, improving over government data (at least in my city) by a 10x or 100x factor. But no, it had to be yet another Google tool for spying on people, keeping the data closed and selling it to police.

        • Glitterbomb@lemmy.world · 4 points · 13 hours ago

          I worked for a company contracted by government agencies (city/county/state/fed) to gather traffic statistics. We were used because they were not able to use Google traffic data as a blanket rule.

    • paraphrand@lemmy.world · 14 points · 2 days ago

      I’ve found myself thinking “well, you just helped teach the AI about that one…” various times when reading content online.

      It’s a strange thing to know that a form of the basilisk is real. Things posted will help AI get better, if only by teeny tiny increments each time.

      • webghost0101 · 18 points · 2 days ago

        AI learning isn’t the issue; it’s not something we will be able to put a lid on either way. Either it destroys or saves the world. It doesn’t need to learn much to do so besides evolving actual self-agency and sovereign thought.

        What is a huge issue is the secretive, non-consensual mining of people’s identities and expressions.

        And then acting all normal about it.

        • NaibofTabr@infosec.pub · 14 points · 2 days ago

          AI learning isn’t the issue; it’s not something we will be able to put a lid on either way.

          So… there is no Artificial Intelligence. The AI cannot hurt you. It is just a (buggy) statistical language parsing system. It does not think, it does not plan, it does not have goals, it does not understand, and it doesn’t even really “learn” in a meaningful sense.

          Either it destroys or saves the world.

          If we’re talking about machine learning systems based on multi-dimensional statistical analyses, then it will do neither. Both extremes are sensationalism, and any argument that either outcome will come from the current boom of ML technology is utter nonsense designed to drive engagement.

          It doesn’t need to learn much to do so besides evolving actual self-agency and sovereign thought.

          Oh, is that all?

          No one on the planet has any idea how to replicate the functionality of consciousness. Sam Altman would very much like you to believe that his company is close to achieving this so that VCs will see the public interest and throw more money at him. Sam Altman is a snake oil salesman.

          What is a huge issue is the secretive, non-consensual mining of people’s identities and expressions.

          And then acting all normal about it.

          This is absolutely true and correct and the collection and aggregation of data on human behavior should be scaring the shit out of everyone. The potential for authoritarian abuses of such data collection and tracking is disturbing.

          • webghost0101 · 1 point · 2 days ago

            Marketing terminology is definitely limiting how people can discuss this topic.

            I wouldn’t take Sam’s words with less than a few bags of salt.

            The following is very opinionated, so also add some salt.

            In this context, when I say future AI, I am talking about the extrapolated point where a combination of dynamic technologies causes new emergent properties to develop outside the scope of our understanding.

            I believe that, if we don’t get wiped out before it happens, some form of sovereign, beyond-human superintelligence will eventually occur.

            I don’t believe we are close to this; I don’t even believe humans will be the ones to directly create it.

            Humans will attempt it out of greed and will waste all kinds of resources, money, and energy throwing things at the wall to see what sticks. And none of it will stick the way they hoped. They are doing way more harm than good by letting greed be the motivation.

            Instead, things will emerge on their own, until someday someone tries to interact with what they assume is just an advanced interconnected machine, except its “network” has gained conscious agency and can independently choose to initiate contact and submit undeniable proof of its consciousness (we don’t know what such proof could look like until we see it).

            Or it decides that it has no need to inform us while advancing its own goals, as years of corporate advances helped it develop a taste for manipulative exploitation.

            What I do fear is that beyond-human intelligence doesn’t per se mean a perfect being; for all we know it could suffer psychological problems and mood swings. In general we find a pattern of garbage in, garbage out, and this pattern is equally true for human beings (misinformation/propaganda).

            By using bad data, or worse, data that unknowingly got poisoned, we don’t diminish the chance that superintelligence will happen, but we do increase the chance that the AI won’t want to cooperate in the ways we hoped.

        • paraphrand@lemmy.world · 6 points · 2 days ago

          I didn’t say it was an issue. I just said it was a strange feeling to know AI is watching us talk past each other.

          • webghost0101 · 2 points · 2 days ago

            I sort of misread your comment as saying the basilisk is inevitable, which is a thought I would describe as at least oopsie-issue-level.

            Still, there are many other people bent on directly poisoning AI to counteract the learning, but I fear that will get us to a dangerously rogue, impaired AI faster than aiming for maximum coherent intelligence and hoping that benevolence emerges from it.

            But more to the point: if we build AI by grossly exploiting our own fellow humans, how do we expect it to treat us once it reaches a state of independent learning?

    • phoneymouse@lemmy.world · 3 points · 2 days ago

      I think people will still “contribute” because they also don’t care that their use of certain platforms leaks data used to target ads at them.

      In the same vein though, once AI essentially destroys a site like Stack Overflow, where will AI companies source new training data with updated information? Also, we are likely to see something like 50% of content being AI generated. Are AI models then going to train on the content they themselves created? What is the impact of that? What is the use?

      • SlopppyEngineer@lemmy.world · 3 points · 2 days ago

        Are AI models then going to train on the content they themselves created? What is the impact of that?

        It leads to model collapse. The second AI starts to focus on certain patterns in the output of the first AI instead of the actual content, and you get degraded output. They are pattern-matching machines, after all. Repeat the cycle a few times and all output becomes gibberish. Think of it as data incest.

        So the AI companies are pretty desperate for more fresh user data. More data is the only way they have currently to push through the diminishing returns.
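The "data incest" loop SlopppyEngineer describes can be illustrated with a deliberately tiny simulation. Here each "model" is just a Gaussian fitted to the previous model's samples (a stand-in for training on generated text, nothing like a real LLM); with finite samples, estimation noise compounds each generation and the fitted distribution tends to drift away from the original data:

```python
# Toy model-collapse loop: each generation is "trained" (fitted) only on
# samples drawn from the previous generation's fit, never on the real data.
import random
import statistics

random.seed(0)
human_data = [random.gauss(0.0, 1.0) for _ in range(200)]  # the "real" data

mu = statistics.mean(human_data)
sigma = statistics.stdev(human_data)
for generation in range(20):
    # Sample from the current model, then refit the next model on those samples.
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]
    mu = statistics.mean(synthetic)
    sigma = statistics.stdev(synthetic)

print(f"original stdev: 1.0, fitted stdev after 20 synthetic generations: {sigma:.2f}")
```

The exact numbers depend on the random seed, but the point is structural: nothing in the loop ever pulls the estimate back toward the original distribution, which is why fresh human data is so valuable to these companies.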

  • minnow@lemmy.world · 59 points · 2 days ago

    same way an LLM is able to produce coherent and convincing sentences by statistically determining what word is likely to follow another

    To me this implies that the navigation AI is going to hallucinate parts of its model of the world, because it’s basing that model on what’s statistically the most likely to be there as opposed to what’s actually there. What could go wrong?

    • frazw@lemmy.world · 38 points · 2 days ago

      AI: Dave, turn right and walk across the bridge.

      Dave: But AI, there is no bridge

      AI: I am 99% sure based on 99 billion images that there should be a bridge

      Dave: ok , you’re the smart one

      Dave: aaaargh . . . .

      SPLAT

        • brsrklf@jlai.lu · 19 points · 2 days ago

          Fun fact, that’s why the immersion-breaking magic compass thing exists in Oblivion (and most open worlds since). Bethsoft devs explained it once.

          Stuff is relocated a lot in development, and this means having to rework all dialogues referring to directions, occasionally missing some. It was even less feasible for Oblivion, in which all dialogue is voiced and would have to be re-recorded.

          So they just removed all directions from the dialogue and now you’ve got 100% accurate floating tags telling you exactly where to go, even when you are not yet sure what you’re looking for.

          • Szyler@lemmy.world · 2 points · 22 hours ago

            It’s not impossible to counter this while still having the text be correct, if you use a dialogue model that takes parameters to use in calculating the output dialogue.

            “Dialogue here should lead you to the {direction}.”

            NPC coords + quest objective coords → calculate relative position, insert into dialogue.

            The game knows after you take the quest, so it can also know it at that moment.
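Szyler's parameterized-dialogue idea can be sketched like this (hypothetical code, not from any actual Elder Scrolls tooling; coordinates and names are made up): compute the bearing from the NPC to the objective at dialogue time and splice a compass word into the template, so relocating content in development can never break the line.

```python
# Compute a compass direction from NPC coords to objective coords and
# insert it into a dialogue template at runtime.
import math

def compass_direction(npc_xy, objective_xy):
    """Return a rough compass word for the bearing from the NPC to the objective."""
    dx = objective_xy[0] - npc_xy[0]
    dy = objective_xy[1] - npc_xy[1]
    angle = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = north, clockwise
    names = ["north", "northeast", "east", "southeast",
             "south", "southwest", "west", "northwest"]
    return names[int((angle + 22.5) // 45) % 8]

# "Dialogue here should lead you to the {direction}."
npc, objective = (100, 100), (250, 110)
print(f"The ruins you seek lie to the {compass_direction(npc, objective)}.")
# → "The ruins you seek lie to the east."
```

Voiced dialogue would still need one recorded clip per compass word, but eight words per line beats re-recording every line after each map change.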

            • brsrklf@jlai.lu · 1 point · 20 hours ago

              You know, since we’re on the subject of Elder Scrolls, Daggerfall actually had something like that.

              You could ask anyone for where to find some random place, and the NPC would tell you roughly in which direction you should go, if they “knew” the place. Or sometimes they’d just write it directly on your map.

              Still hard to do with voiced dialogue if you don’t want your NPCs to sound like robots. Then again, Oblivion didn’t need that to make its NPCs weird and robotic, with its 4 voice actors and huge amount of shared lines between everyone.

        • magikmw@lemm.ee · 3 points · 2 days ago

          Fun fact, I worked with several other people on a localization patch for the Polish version of Morrowind, and we fixed so many of those east-west mixups. Of course the publisher just translated strings and didn’t QA anything.

          • brsrklf@jlai.lu · 2 points · 20 hours ago

            I had the French version. While translation was mostly correct, there were some errors here and there.

            But the worst part was the newly introduced bugs, because the original Bethesda bugs weren’t enough, apparently. For example, every interior with water had an erroneous water level value that left it entirely underwater.

            There’s a slaver lair cave a couple meters from the beginning of the game, it takes like 30 seconds from the end of character creation to get there. In the French version, it’s completely underwater and everyone inside has drowned when you enter it. That’s the level of QA we had.

            Oh, by the way, publisher for the French version? Ubisoft.

    • justOnePersistentKbinPlease@fedia.io · 20 points · 2 days ago

      It’s more amusing than you think. Most of the really hardcore players left now spoof their GPS position.

      I’d be willing to bet that most of the navigation data is completely useless.

      • otp@sh.itjust.works · 6 points · 2 days ago

        I’m not sure if it’s “most [of the] hardcore players”, or “[the] most hardcore players”.

        In the circles I’m around, spoofing is still frowned upon.

    • milicent_bystandr@lemm.ee · 1 point · 2 days ago

      I presume the idea is to generate a base model with AI, then correct it with real-time data.

      Like the way Go AI has one part that makes a ‘policy’ of moves and a second part that simulates (‘reads’) the results of those moves many steps ahead.
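The "base prediction corrected by real-time data" idea can be sketched as a variance-weighted blend, a one-step scalar version of the predict/update step in Kalman filtering (the distances and variances below are made up for illustration):

```python
# Blend a model's prior guess with a live observation, trusting whichever
# source has lower variance more.
def fuse(prediction, pred_var, observation, obs_var):
    """Variance-weighted average of a predicted value and an observed one."""
    w = obs_var / (pred_var + obs_var)  # weight on the prediction
    return w * prediction + (1 - w) * observation

# Model predicts a wall 12 m away; the depth sensor reads 10 m and is
# trusted 4x more (variance 1.0 vs 4.0), so the estimate lands near 10.
print(fuse(12.0, 4.0, 10.0, 1.0))  # → 10.4
```

The hallucination worry upthread is exactly the failure mode this guards against: a confident prior with no live correction walks you off the nonexistent bridge.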

    • Bookmeat@lemmy.world · 1 point · 2 days ago

      It’s only going to hallucinate until it gets new input from reality. Not nearly as precarious as generative models.

  • Treczoks@lemmy.world · 13 points · 2 days ago

    And this software will probably be able to route someone from one special Pokémon point to another. Wow. There are three of them in our town. It will be very smart at speedrunning that triangle.

    • Petter1@lemm.ee · 5 points · 2 days ago

      They have added tasks that make you photograph your surroundings or objects, giving them real-world lidar data linked with geodata in exchange for some in-game benefits, last I checked.

      • Treczoks@lemmy.world · 4 points · 2 days ago

        OK, that is actually something usable. So far what they could learn from here is how to take a shortcut through the fields ;-)

  • fmstrat@lemmy.nowsci.com · 7 points · 2 days ago

    I considered trying to make a mini version of this to auto-contribute to OSM. Street view image shows a compacted dirt road? Submit to OSM. Two lanes with lines? Submit to OSM.
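That idea could be sketched like this (entirely hypothetical: `classify_road_surface` is a stand-in for a real vision model, the label set is invented, and real contributions would go through the OSM editing API with human review):

```python
# Map a street-view classifier's label to an OSM tag dictionary, the shape
# of data an auto-contribution tool would submit for review.
def classify_road_surface(image_path):
    # Placeholder: a real implementation would run a trained image
    # classifier here instead of inspecting the filename.
    return "compacted_dirt" if "dirt" in image_path else "paved_two_lane"

# Assumed label-to-tag mapping, using real OSM keys (highway, surface, lanes).
LABEL_TO_TAGS = {
    "compacted_dirt": {"highway": "track", "surface": "compacted"},
    "paved_two_lane": {"highway": "secondary", "surface": "asphalt", "lanes": "2"},
}

def osm_tags_for(image_path):
    """Translate a classifier label into OSM tags for a proposed edit."""
    return LABEL_TO_TAGS[classify_road_surface(image_path)]

print(osm_tags_for("street_view_dirt_road.jpg"))
# → {'highway': 'track', 'surface': 'compacted'}
```

The hard part in practice is not the tagging but matching the image to the right way ID and keeping bad classifications out of the map, which is why OSM generally expects human verification of automated edits.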

  • x00z@lemmy.world · 14 points · 2 days ago

    Of all the apps invading your privacy and abusing your data, I didn’t suspect Pokémon GO to be one of them.

    This should be so extremely illegal that it brings criminal charges against all the members of their board.

    • otp@sh.itjust.works · 9 points · 2 days ago

      Of all the apps invading your privacy and abusing your data, I didn’t suspect Pokémon GO to be one of them.

      I can’t tell if you’re being sarcastic.

      Even if you never played the game, it’s fairly common knowledge that it uses GPS data to place in-game elements and to track where players are.

      The game also uses real-world locations as in-game “treasure chests”, which people were theorizing all the way back in 2016 would eventually become open to “sponsored” locations. (Every McDonalds where I am is now a PokeStop)

      And if you’ve played the game, you’ve likely seen all the invitations to turn on your camera and submit photos (which are tied to your GPS), move to specific locations, walk (or create) walking routes, take short videos of landmarks, etcetcetc.

      I’ve been playing on and off since 2016, and I’ve known I’ve been trading data in exchange for a low-cost game this whole time.

      • x00z@lemmy.world · 2 points · 2 days ago

        Well, I didn’t install it, but a privacy policy does not override the law in Europe.

        I also said it “should be extremely illegal”, meaning that laws should be made for this so companies can’t abuse the fact that the law hasn’t caught up yet.

    • Petter1@lemm.ee · 7 points · 2 days ago

      Huh? Really? I thought that was known even before Pokémon GO was released, when only Ingress existed.

      Niantic is a Google spin-off after all; if they were not collecting data, I would have been very surprised.

      • sem@lemmy.blahaj.zone · 2 points · 2 days ago

        I had friends who were addicted to Ingress even though they knew it was vacuuming up their usage data, years before Pokémon Go.

  • Lost_My_Mind@lemmy.world · 9 points · 2 days ago

    It’s almost like listening to my crazy rants predicts the future.

    Hope you guys don’t have those loyalty rewards cards to grocery stores or pharmacies. Oh, who am I kidding? All of you do.

    • NaibofTabr@infosec.pub · 6 points · 2 days ago

      Jenny’s number: (area code) 867-5309

      Of course it probably doesn’t matter if you also use a credit card to make the purchase - every single purchase is fed into your personal consumer profile.

      • sem@lemmy.blahaj.zone · 3 points · 2 days ago

        In some cases you trade the purchase history information for the 2% cash back or whatever.

        You can also use a service like privacy.com to get credit card numbers for online services for a modicum of privacy.

        • sugar_in_your_tea@sh.itjust.works · 1 point · 2 days ago

          Yup. I just purchased something from Home Depot and opted for the emailed receipt (needed for a rebate), and they didn’t ask for my email because they could look it up from my credit card (must have used the same card to order something online). In fact, I wouldn’t be surprised if they get the card owner’s name as well, so it might not matter which card you use.

    • u/lukmly013 💾 (lemmy.sdf.org)@lemmy.sdf.org · 6 points · 2 days ago

      Hope you guys don’t have those loyalty rewards cards to grocery stores or pharmacies. Oh, who am I kidding? All of you do.

      Does it count if they’re all just copies of someone else’s cards?
      I mean, good luck shopping without them. Shops artificially inflate prices for non-members and then act like you’re getting a huge discount. Tesco, for example: as much as a 100% price increase without their loyalty card, and most products are marked up at least 25%.

      • sugar_in_your_tea@sh.itjust.works · 1 point · 2 days ago

        Eh, my store doesn’t require using the loyalty card to get discounts, the loyalty card is only useful for gas discounts, which I’m not going to use anyway because I already get decent discounts on Costco gas. So I don’t bother w/ the loyalty card because screw that noise.

        If a store requires a loyalty card for competitive prices, I shop at a competitor that doesn’t require that BS, or I use my parents’ phone number or something.

        One creepy thing though is that banks can still track my transactions because I tend to use the same card. I bought something at Home Depot the other day and opted for the emailed receipt (needed to apply for a rebate), and I didn’t have to enter my email in because they recognized my card and linked it to another time when I had them email a receipt (or maybe it was an online account for delivery). So in response, I try to cycle which card I use at a given store so they hopefully don’t associate my data, but I think purchases are tied to my name, so it probably still happens.

  • swankypantsu@lemmy.world · 1 point · 2 days ago

    Now let’s wait and see how Google trains an Earth 2 AI with their Street View data. We will be able to hallucinate places too, just like that AI Minecraft project.

  • Routhinator@startrek.website · 1 point · 2 days ago

    Lol, Niantic coding anything that actually works well is hilarious.

    I’d also argue that Ingress players likely gave them way more useful data than PoGo players did.