stravanasu

  • 169 Posts
  • 639 Comments
Joined 3 years ago
Cake day: July 5th, 2023

  • Sharing similar music is of course interesting.

    But this is a community for fans. Discussions about other bands’ technical skills, production volume, or whatever are welcome, as long as they’re made in a non-disparaging way. I think one has the right to be a fan of something or someone, and to enjoy its popularity if that happens.

    Of course you have the right not to like something or not to agree with its popularity. But if your point is to be disparaging, it probably makes more sense to create a community for similarly-minded people: you can criticize the band or its popularity there as much as you like without offending one another.


  • It is actually not so difficult to see this for yourself in a much simplified setting. One can easily build a “Small Language Model” that extracts correlations between only three consecutive words. On the web there are plenty of short scripts that do this; here and here are two examples. The output created by such an SLM can contain surprisingly long, grammatical sentences (see the examples in the links above); this is remarkable given that all the model learned were correlations between triplets of words.

    Now take a large amount of output from such an SLM and use it to train a second, identical or even better SLM; then check the output generated by this second one. You’ll see that the new output is less coherent than that of the first SLM. Feed the output of the second SLM to a third, and you’ll see even less coherent text coming out. And so on.
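    To make this concrete, here is a minimal sketch of such a trigram-based SLM in Python (the tiny corpus and all names are illustrative, not taken from the scripts linked above): it records which word follows each pair of words, generates text by sampling from those counts, and the “retrain on the model’s own output” step is a one-liner.

```python
import random

def train(words, order=3):
    """Map each (order-1)-word context to the words observed after it."""
    model = {}
    for i in range(len(words) - order + 1):
        context = tuple(words[i:i + order - 1])
        model.setdefault(context, []).append(words[i + order - 1])
    return model

def generate(model, length=30, seed=0):
    """Walk the model: repeatedly sample a successor of the last two words."""
    rng = random.Random(seed)
    out = list(rng.choice(sorted(model)))   # start from some known context
    while len(out) < length:
        successors = model.get(tuple(out[-2:]))
        if not successors:                  # dead-end context: stop early
            break
        out.append(rng.choice(successors))
    return out

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "while the cat watched the dog and the dog watched the cat").split()

first = train(corpus)
print(" ".join(generate(first)))

# the degradation step: train a second SLM purely on the first one's output
second = train(generate(first, length=200, seed=1))
print(" ".join(generate(second, seed=2)))
```

    Every three-word window of the first model’s output is a window seen in the corpus, which is why locally it reads fine; the second model only ever sees the first model’s narrower choices, which is where the loss of coherence across generations comes from.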


  • They aren’t out of context, and you have just said the same thing. Data processing can help in removing noise, but it can’t help in creating information or extracting information that wasn’t there in the first place. In fact – again as you said – it can end up destroying part of the original information.

    LLMs extract word correlations from textual data. Already in this process they lose information, since they can’t capture correlations beyond a certain (albeit large) length, and miss some correlations even at shorter lengths. And in generating output they insert spurious correlations that replace (destroy) some of the original ones. So the output contains even less information than the original training data, and a new LLM trained on such output will give back even less still.
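    The information-loss point can be made concrete with Shannon entropy: a deterministic processing step can only preserve or reduce it, never increase it. A minimal sketch (the data and the `% 2` mapping are arbitrary illustrations):

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Empirical Shannon entropy of a sample list, in bits per symbol."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

data = [0, 1, 2, 3, 0, 1, 2, 3]    # four equally likely symbols: 2 bits
processed = [x % 2 for x in data]  # a deterministic "processing" step
print(entropy(data), entropy(processed))  # 2.0 bits -> 1.0 bit
```

    Once the mapping has merged two symbols, no further processing can ever tell them apart again; the same holds for the correlations a model fails to capture from its training data.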


  • I’ve been having similar turd-kind encounters with bank apps even within Android. I use the egregious HeliBoard from F-Droid, and my bank app refused to start because I use an “untrusted keyboard” – funny, as it’s way more trustworthy than Gboard or Microslop board apps. It turns out the apps of all banks in my country are like that. So now I simply access the bank via the browser instead. Fuck their apps.

    But I understand that the browser solution may not work for everyone :(

    Partly this problem comes from the incompetence of the app developers, partly from shifting responsibility: it seems to me that they let the Play Store do the checks, so if any hacking happens they can blame the Play Store. And there’s also the modern motto: “if you want to make an app secure, make it unusable”. Even better, I’d then say, “don’t make it at all”! – there, security problem fully solved.

    Putting pressure on banks would be best. Possibly one could also play a “disability” card: I must use such-and-such an app or OS owing to a visual impairment, say. Or collect signatures for a petition… but I imagine we’re a very small minority.

    As a protest, in my case, I changed banks a couple of times.

    But thank you for the USB-ADB tip! I’ll use it when I switch to GrapheneOS.