Troll honeypot, apparently.
Suggested blocks:
Removed by mod
someone needs to teach these kids.
——
Moderated by:
beep, boop I am a human.
I got lemmy for that!
——
Moderated by:
beep, boop I am a human.
De Nile is a river in Africa and I hear it is lovely.
Because the pope is an abusive leader with no morals.
@jordanlund Unlike the measure you were wrongly claiming would help curb abuse (110), this one would!
whatcha gonna do when they…
That’s unfortunate. I mean, let’s acknowledge that admins to an extent need to protect mods, and again that mods do labor for free. Lots of mods do the work they do because they want the communities they cherish to flourish.
But let’s also acknowledge that there is such a thing as user churn and that it will ultimately be the downfall of any site. With enough money you can burn through the churn indefinitely. Volunteer sites will just disconnect and users will go to the for-profit sites like reddit, causing more users to be tracked, the internet to be more centralized, etc…
I think it’s also worth mentioning that mods can create their own burner accounts and use those to troll users. Mods are also humans and if they want to speak their minds they should be able to do so. But, speaking their minds from a position of power puts them in a different power dynamic from users, which reflects not only on them, but on the community and the instance as a whole.
And then users get to see literal scat porn on the front page from a username like “admins are assholes” because a mod wanted to troll on main. (edit: this was an actual bit of content I reported not too long ago. Maybe it was “mods are assholes”, I’m not sure, but it was along that vein.)
Oh shi
I can but as a user and not a moderator or admin it won’t make much difference. I also don’t expect mods (volunteers with a stack of work to do in front of them) to take the time to understand every user before reporting them, though it would be nice if there were a system that facilitated that!
For example, if users (not just comments) were able to be upvoted/downvoted, they could gain or lose “standing” in a community or on an instance. Depending on how much standing they have, their posts and comments would have visibility in line with that standing. Good-standing posts might be promoted slightly more than regular posts. Poor-standing posts might be shown to users half as much as regular posts. The scope of this would be per community, based on the user’s interaction with that community and the community’s opinion of the user.
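To make that concrete, here’s a rough sketch of what I mean (Python, with thresholds and names I’m inventing purely for illustration; this isn’t Lemmy’s actual code or schema):

```python
from collections import defaultdict

class CommunityStanding:
    """Tracks a user's net vote score per community and turns it into a visibility weight."""

    def __init__(self):
        # standing[(user, community)] -> net score from votes on that user's content
        self.standing = defaultdict(int)

    def record_vote(self, user, community, delta):
        """delta is +1 for an upvote, -1 for a downvote on the user's content."""
        self.standing[(user, community)] += delta

    def visibility_multiplier(self, user, community):
        """Good standing gets a small boost; poor standing is shown about half as often."""
        score = self.standing[(user, community)]
        if score > 50:
            return 1.25   # promoted slightly more than regular posts
        if score < -10:
            return 0.5    # shown to users half as much
        return 1.0        # regular visibility

# Usage: weight a post's rank by the author's standing in that community.
standings = CommunityStanding()
standings.record_vote("alice", "technology", +1)
base_rank = 100
weighted_rank = base_rank * standings.visibility_multiplier("alice", "technology")
```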
That way, less of the moderation is placed on the human moderators. Those moderators will still be needed because no automated or crowd-sourced system is perfect but the goal would be to lean on moderators less and less.
No worries. I make the distinction between people referring to me and people agreeing with my self-deprecation in the context of me making a good-faith comment or argument. The former is fine. The latter is, well apparently it’s trolling, I’m learning.
Oh absolutely. Not to mention that, if we want to get pedantic and legal, at least for reddit circumventing bans breaks the ToS, and you can’t accept the terms of the site if you’ve already been banned from it. Can you technically do it? Yes, absolutely. Will your new account be shadowbanned? Probably, but that’s not to say you couldn’t circumvent that as well when creating the account.
I’m only talking about the rules of one instance here, not the fediverse as a whole. I don’t think we should prevent people from creating instances, and I know we don’t have the power to do so. Lemmy is open source, so someone could try to sneak in code that would prevent certain instances from being set up, but it probably won’t get accepted and those people could still use an earlier version. These are all good, healthy things.
You’re absolutely correct that a medication isn’t going to react the same for every person. People can have weird or even fatal reactions to medications. Any local pharmacist should be able to answer questions about medications and interactions.
To be clear, I am just saying that if adderall works for someone, vyvanse is likely to work for that person as well, because the drugs are so similar. Vyvanse’s biggest difference from adderall is that it’s a prodrug, meaning it starts off as a drug that has no effect on the body until natural enzymes (mostly in our red blood cells) convert it into what is basically adderall.
Adderall is mixed amphetamine salts.
The mixture is composed of equal parts racemic amphetamine and dextroamphetamine, which produces a 3:1 ratio between dextroamphetamine and levoamphetamine, the two enantiomers of amphetamine.
Compared to vyvanse:
Lisdexamfetamine is an inactive prodrug that is converted in the body to dextroamphetamine, a pharmacologically active compound which is responsible for the drug’s activity.
So technically, Adderall is dextroamphetamine and levoamphetamine. I can’t speak more to this because of my lack of knowledge, but “dextro” and “levo” are “right” and “left”, basically meaning the right- and left-handed mirror-image forms (the enantiomers) of the molecule. Vyvanse on the other hand converts into just the right-handed form of the molecule.
This is tricky too… I haven’t looked at your comment history so I’m willing to accept that maybe you did something somewhere to upset this mod. This may not be the case but if it was and that was the reason you were banned, that presents a few problems:
Or to pose the question another (hypothetical, not referring to you) way: should we let a nazi/far-right/fascist remain a contributor to a leftist tech support community so long as they are not antagonistic to other users / abide by the rules of that community? I don’t have a clear answer here. I think most people would agree that it’s fair to allow this if the person is truly not antagonistic and is adding value. Other people might say that that person doesn’t deserve to exist in polite society no matter what “value” they might add.
That’s an extreme example, but I’m trying to steelman the mod’s possible reason for banning you.
Another thing for a moderator code of conduct might be to provide adequate ban reasons, possibly generalizing them only when there are safety or legal reasons to do so. I mean, mods / admins shouldn’t need to write a novel, but it also wouldn’t be technically too difficult to create a page that links to the specific posts which were removed or which led to the ban. I realize I’m butting up against outing reporters, so it would be important to maintain that aspect of privacy. I’m just saying that when someone writes something like “repeated” in a mod report, it would be handy to see the multiple instances. Otherwise, a mod might just look at someone’s moderation history, see mod reports (which may be unfair) and decide those reports constitute “repeated” behavior.
Did OP ever stop laughing?
My motivations may be strange, but they are bigger than just this one account. If it helps, I’ll say that I ran a “social network” prior to facebook existing (if you want to call about 20 users a social network; still, it had basic forum features). I got hooked on digg, then went to reddit, and now I’m on lemmy. Part of this is time wasting that I’d like to redirect elsewhere, and part of this is genuine interest in trying to fix the ills of social media. I’ve been thinking deeply on this for years and I haven’t come up with anything useful for a fix yet.
I try to behave online and try to challenge people’s ideas intelligently, but I’m a monkey-brained human like everyone else and sometimes the most appropriate reaction truly is to just call someone a name. Or, at least it feels that way.
I’m not sure I agree that it’s a good idea to circumvent bans handed out inappropriately, at least for me, because it asserts that I know better than the (multiple) people who agreed “naw, ban this guy”. Maybe if I were trying to get the word out about atrocities, sure. But in my case, I was responding to a guy who was basically saying “yeah trump is old but biden is old so i don’t know, hard to say”, so this wasn’t the hill I was willing to die on. I mean, I guess it was the hill I got a 7 day ban on, but that was a bit unexpected. I don’t like that I can’t speak my mind in politics, but hey, if I broke existing rules, those are the rules.
Moderation on digg, reddit and lemmy has interested me because it almost doesn’t need moderators - the community moderates itself, yet its power to do so is weak. Downvotes have little effect, simply showing a lower score on a post. Hiding a massively downvoted comment once it crosses a threshold, so it’s basically invisible unless someone clicks a button to view it, would make it less likely for people to pile on and might reduce trolling to a small degree. Even if people do dogpile, the upvotes on responses by other users in that thread could be used to further minimize the visibility of the comment. If a user gets more downvotes than upvotes over time, that user may find themselves only allowed to make a few comments a day until their “community reputation” perks up. Or, take the existing system and, when a report comes in, consider how many downvotes versus upvotes the person receives and in which communities (communities could have their own reputation with the admins too; instead of outright banning a community, give it a warning system) when deciding whether to ban or to grant an appeal.
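As a rough sketch of the collapse-below-a-threshold and comments-per-day ideas (Python, with numbers I’m just making up; a real implementation would need a lot more nuance):

```python
HIDE_THRESHOLD = -15          # net score below which a comment is collapsed
LIMITED_COMMENTS_PER_DAY = 3  # cap while a user's reputation is net negative

def is_collapsed(comment_score: int) -> bool:
    """Collapsed comments stay hidden unless a reader clicks to expand them."""
    return comment_score <= HIDE_THRESHOLD

def daily_comment_limit(upvotes_received: int, downvotes_received: int):
    """Return None for no limit, or the number of comments allowed per day
    until the user's community reputation perks back up."""
    if downvotes_received > upvotes_received:
        return LIMITED_COMMENTS_PER_DAY
    return None

print(is_collapsed(-20))            # True: hidden behind a click
print(daily_comment_limit(40, 90))  # 3: rate limited until reputation recovers
```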
Some sites have experimented with some of these features. I’m halfway curious to implement ActivityPub myself and try to make something like this for lemmy, but I’ve got a lot of research to do before I get anywhere close to doing that.
It’s going to happen. I’m just hoping there’s a mechanism to keep tabs on which mods do this over time so admins can determine if action needs to be taken.
Like, what I’m learning is that I could moderate a community right now and, when I argue with someone and don’t like what they have to say, I can ban that person from my community. And OK, this is fair to an extent for creating safe spaces online. I guess it gets challenging (maybe for admins) when the community is a larger one like politics and the speech is not really unsafe but more unwanted.
Something a code of conduct would help clear up.
Experimenting with a signature containing some quotes from moderators I have received. See my profile background for the groomer bit. It’s either true or it’s something that they shouldn’t be making light of by pretending to be one. Not sure how else to raise these issues.