Dropbox removed ability to opt your files out of AI training

  • @reksas
    18 points · 5 months ago

    Time for Dropbox users to upload all kinds of crap for the AI to “learn” from, all within ToS of course.

    I bet there are many ways to make your files poison the AI training data. It’s going to be fun for those AI guys to sort out which files are probably safe and which are not. I think even if ONE user manages to slip in something that corrupts the training data and it’s not noticed soon enough, it might cause problems for them. Though someone who actually knows something about the subject might want to tell me if I’m talking shit or not.

    I’m not against AI in general, but if it’s trained with data that was obtained from unwilling people, like this, then its makers can fuck off.

    • @JonEFive@midwest.social
      3 points · 5 months ago

      It really depends on what the AI training is looking for. You can potentially poison an AI training model, but you’ll likely have to add enough data to be statistically relevant.

      • @reksas
        1 point · 5 months ago

        Enough data as in many different people each having to upload one or two files that contain such data, or one person having to upload a very large file that contains a lot of data that causes problems?

        • @JonEFive@midwest.social
          2 points · 5 months ago

          It’s honestly difficult for me to say because there are so many different ways to train AI. It really depends more on what the trainers configure to be a data point. Volume of files vs size of a single file aren’t as important as what the AI believes is a data point and how the data points are weighted.

          Just as a simple example, a data point may be considered a row on a spreadsheet without regard for how that data was split up across files. So ten files with 5 rows each might have the same weight as one file with 50 rows. But there’s also a penalty concept in some models, so the trainer can set it so that data that all comes from one file may be penalized. Or the opposite could be true if data coming from the same file is deemed to be more important in some way.
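          A minimal sketch of that idea, with made-up names and a made-up penalty formula (no real training framework works exactly like this): each row is a data point, and an optional per-file penalty shrinks the weight of rows that all come from one file.

          ```python
          # Hypothetical sketch: per-row data points with an optional per-file
          # penalty. The function name and penalty formula are illustrative only.
          from collections import Counter

          def weight_rows(files, file_penalty=0.5):
              """Assign each row a weight; rows from a file that contributes
              many rows get down-weighted so one bulk upload can't dominate."""
              counts = Counter(name for name, rows in files for _ in rows)
              weighted = []
              for name, rows in files:
                  # More rows from one file -> smaller per-row weight when penalized.
                  w = 1.0 / (counts[name] ** file_penalty)
                  weighted.extend((row, w) for row in rows)
              return weighted

          # Ten files with 5 rows each vs. one file with 50 rows:
          many = [(f"f{i}", [f"r{i}{j}" for j in range(5)]) for i in range(10)]
          one = [("big", [f"r{j}" for j in range(50)])]

          total_many = sum(w for _, w in weight_rows(many))
          total_one = sum(w for _, w in weight_rows(one))
          # With the penalty, the same 50 rows carry less total weight when they
          # sit in one file than when spread over ten files.
          ```

          With `file_penalty=0` the two cases weigh the same, which is the "ten files with 5 rows each might have the same weight as one file with 50 rows" case above; a positive penalty flips it, and a negative one would do the opposite.
          
          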

          In terms of how AIs make their decisions, that can also vary. But generally speaking, if 1000 pieces of data are used that are all similar in some way and one of them is somewhat different from the others, it is less likely that that one-off data will be used. It’s much more likely to have an effect if 100 of the 1000 pieces of data have that same information. There’s always the possibility of that 1/1000 data being used; it’s just less likely to have a noticeable effect.

          AIs build confidence in responses based on how much a concept is reinforced, so you’d have to know something about the training algorithm to be able to intentionally impact the results.
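          A toy illustration of the 1-in-1000 vs. 100-in-1000 point (this is not a real training algorithm, just frequency sampling): a "model" that echoes its training data in proportion to how often each record appears almost never reproduces a single poisoned record, while 100 poisoned records out of 1000 show up roughly 10% of the time.

          ```python
          # Toy illustration, not a real training algorithm: responses are drawn
          # from the training data in proportion to record frequency.
          import random

          def train(data):
              # "Training" here is trivial; the model just mirrors data frequencies.
              return data

          def respond(model, rng):
              return rng.choice(model)

          rng = random.Random(42)
          clean = ["good"] * 999 + ["poison"] * 1      # one poisoned record
          mostly = ["good"] * 900 + ["poison"] * 100   # coordinated poisoning

          hits_one = sum(respond(train(clean), rng) == "poison" for _ in range(10_000))
          hits_many = sum(respond(train(mostly), rng) == "poison" for _ in range(10_000))
          # hits_one lands near 10 (~0.1%); hits_many near 1000 (~10%).
          ```

          Real models generalize rather than sample verbatim, but the intuition carries over: a lone outlier gets drowned out unless the poisoned pattern is frequent enough to be statistically relevant.
          
          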

          • @reksas
            0 points · 5 months ago

            Thank you, this was the kind of information I was hoping for.