I’m gay

  • 108 Posts
  • 295 Comments
Joined a year ago
Cake day: Jan 28, 2022


Thanks for this article, it starts out with a strong scientific background. What I personally found interesting when I started to investigate nicotine vapes a few years back was the lack of solid evidence showing any real harm from nicotine vapes, and also how shoddy nearly all the science on how addictive nicotine is turned out to be (almost all of it was conducted on cigarettes, failing to control for other chemicals, or used outdated animal models of addiction which exaggerate addictive potential).

As we’ve seen throughout the entirety of human history, making substances illegal does not stop people from using them. I’m glad someone has taken the time to investigate this, and I hope we eventually learn the lesson that banning substances doesn’t work. In fact, all the evidence points towards declining usage and increasing safety as drugs are legalized and controlled, since they become less adulterated and the taxes can be used for purposes such as fighting addiction.




Thank you for sharing this. You’re absolutely right that it’s not up to you to educate others. In fact, the concept of educational burden is often brought up when we talk about minorities. If someone unknowingly does something racist or sexist, they often push back and ask for an explanation from the affected party. This is a burden they are placing on others because they have not educated themselves. But it is also misplaced, because they are the ones causing harm, and they are usually the ones in a position of power or privilege.


People mixing their own pre-workout often make this mistake and drop in a tablespoon or more of caffeine, which can and does kill people. For scale, a typical serving is around 200 mg, while a tablespoon of pure caffeine powder is on the order of 10 grams - well within the range reported in fatal overdoses. Caffeine is a risky substance when you use it in a purified form, a risk shared by many drugs whose active dose is very small.


I think a focus on the source of the misinformation is misplaced

It’s the power of that source to generate misinfo at a faster speed and for close to no cost that’s a more pressing issue here.

I don’t think this is particularly likely to happen, but imagine I use an LLM to create legal documents to spin up non-profit companies for very little cost, and I hire a single lawyer to simply file these documents without looking at them, only reviewing them if they get rejected. I could create an entire network of fake reporting companies fairly easily.

I could then have an LLM write up a bunch of fake news, post it to websites for these fake reporting companies, and embed an additional layer of reporting on top of the reporting to make it seem legit. Perhaps some of the reports are actually twitter bots, Instagram bots, etc. spinning up images with false info on them, with paid bot farms surfacing these posts enough for them to catch on and spread naturally on outrage or political content alone. This kind of reporting might seem above-board enough to actually make it to some reporting websites, which in turn could cause it to show up in major media. This could end up with real people creating Wikipedia pages, or updating existing information on the internet, and sourcing these entirely manufactured stories.

While there are some outlets out there who do their research, and there are places which fact-check or might question these sources, imagine I’m able to absolutely flood the internet with this. At what point of all total reporting/sharing/news/tweeting/youtubing/tiktoking/etc. does this become something which our system can actually support investigating?

I also think it’s important to consider the human element - imagine I am an actor interested in spreading misinformation and I have access to an LLM. I can outsource the bulk of my writing to it - I can simply tell it to write a piece about something I wish to spread, then review it as a human and make minor tweaks to the phrasing, combine multiple responses, or otherwise use it as a fast synthesis engine. I now have more time to spread this misinformation online, meaning that I can reach more venues and create misinformation much more quickly than I could previously. In fact, I’m positive this vector is already being used by many.

However, none of that touches on what I think is the most pressing issue of all: the use of AI outside its scope, and a fundamental misunderstanding of the bias inherent in systemic structures. I’ve seen cases where AI was used to determine when people should or shouldn’t receive governmental assistance. I’ve seen AI used to flag when people should be audited. I’ve seen AI used by police to determine who’s likely to commit a crime. Language models aren’t regularly used at policy scale yet, but language models also have deeply problematic biases.

I think we need to rethink when AI is appropriate and what its limitations are, and to consider the ethical implications during the very design of the model itself, or we’re going to have far-reaching consequences which simply amplify existing systemic biases by reinforcing them in their application. Imagine that we trained a model on IRS audits and used it to determine whether someone deserved an audit. We’d end up with an even more racist system than we currently have. We need to stop the over-application of AI, because we often have a fundamental misunderstanding of scope, reach, and the very systems we are training on.
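To make that last point concrete, here is a toy sketch in Python. Nothing in it reflects any real agency’s system - the data is entirely synthetic and the variable names are made up - it only demonstrates the mechanism by which a model trained on biased historical decisions reproduces those decisions, even when the protected attribute itself is excluded from the features:

```python
# Toy illustration: training on biased outcome labels reproduces the bias.
# All data is synthetic; "group" stands in for a protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
zip_score = group + rng.normal(0, 1.0, n)  # a "neutral" feature that proxies it
true_risk = rng.normal(0, 1.0, n)          # actual behavior: identical across groups

# Historical labels came from a biased process that flagged group 1 more
# often, independent of true risk.
flagged = (true_risk + 1.5 * group + rng.normal(0, 1.0, n)) > 1.0

# The protected attribute is deliberately left out of the training features...
X = np.column_stack([true_risk, zip_score])
model = LogisticRegression().fit(X, flagged)

# ...yet the proxy feature carries the bias straight into the predictions.
print(f"group 0 flag rate: {model.predict(X[group == 0]).mean():.1%}")
print(f"group 1 flag rate: {model.predict(X[group == 1]).mean():.1%}")
```

Even though true risk is identical across groups by construction, the model flags group 1 far more often, because the historical labels it was trained to imitate did the same.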


Why do you think that I perceive ChatGPT in this way? I voiced an opinion about the biases that ChatGPT and most AI models have due to their large training sets, which reflect systemic biases.


Why do you ask this question?


Can you help me understand what you mean by propaganda device?



@Gaywallet@beehaw.org (creator) to Technology@beehaw.org · GPT-4 Announced · seven days ago

Unfortunately, AI’s typical problem with biases, in particular those against certain minorities who are discriminated against online, did not warrant much attention in this release. It only gets a tiny mention under limitations:

> GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts.






Our finances are fully transparent and we encourage anyone who enjoys using our service who has the means to consider contributing. 💜

Our current estimated yearly income is $143, slightly less than $12/mo.


This is an incredibly anecdotal story. It’s one that highlights the experience of one elder doctor and how they don’t like the expansion of a technology they don’t understand and don’t wish to adapt to. There are countless studies and even metastudies out there about how incredibly useful and important telehealth is. Hell, there are even reviews of metastudies which highlight how useful this technology is and how abundant the data proving its efficacy has become. The article doesn’t spend any time touching on the other side of the argument. It’s hyperfocused on this one doctor’s opinion of healthcare and their perception of it. The one patient he focuses on is exactly the kind of patient for whom the kind of telehealth he was practicing (Zoom-style, narrative-only telehealth) is not particularly well suited. There’s a reason that telehealth devices exist to allow the use of a sphygmomanometer, stethoscope, otoscope, and other important checkup tools, and that hybrid telehealth environments exist where a nurse can perform these exams and report findings to a doctor who’s present virtually.

As an aside, I’m not sure what to think of the publication OpenMind Magazine. They’re relatively new and they claim to have a focus on unbiased reporting, but they also claim to be here to address and debunk conspiracies, deceptions, and controversies. If this is meant to be a think piece, the failure to address the obvious gap between this anecdote-based thinking and a very well established scientific field makes me think twice about whether this outlet is truly based on fact or whether it’s actually just a conservative mouthpiece trying to pass itself off as fact-focused.

With all of that being said, I do think there’s an important consideration to be made in healthcare, one that’s been discussed in extreme depth in the literature - what kinds of care are better for telehealth and which are best in person (or at least, what tech we would need for the two to be comparable). There are absolutely important considerations about which specialties and workflows do well in the telehealth field and which are not well suited to it. Emergency and trauma care, for example, are unlikely to have any telehealth components for a long time. Dermatology and mental health, on the other hand, are extremely successful in the telehealth space and were early adopters. There’s also a specific set of skills, and a way of approaching diagnosis, that are fundamentally different for the people you see in person and those you see via telehealth, and if you are not adequately trained on these considerations it makes a lot of sense that you might not work well across the two mediums.


I don’t think the criticism is meant for every homeowner, it’s specific to the treatment of housing as only a means to make money off of, which is not what you’re doing here.



This will never end up happening, because big business has its hands in every government, but tracking of any sort really needs to be opt-in rather than opt-out. In California, for example, this is how it works for companies which like to send out those “we want to share information with our business partners” emails, documents, etc. If you are a California resident and do not reply, by law the company must assume that you opted out.




I understand. I’m certainly open to the idea, but I do want to mention that we don’t think propaganda pieces are particularly useful or relevant on this instance. If something is posted to news with bad sources, we’re more than willing to remove it. In fact, we would like you to report it. Fact checking websites are not propaganda.

I have a decent idea of where you may have been participating and understand your concern. We have blocked several lemmy instances which we believe do not act in good faith and are not concerned with being nice or inviting to users and care more about enforcing echo chambers.


If I had to venture a guess, the reason you’d like its own community, as opposed to posting slow-news-style articles in the existing news community, is that you do not want to see traditional-style news. Is this accurate? Or is there more that I’m missing?



Every day we stray further from the light… I’m so sick and tired of the middle of this god forsaken country









I was first introduced to this concept through a TED talk on behavioral economics as it relates to language - as mentioned in this article, speakers of languages which grammatically associate the future and the present also happen to save more money for retirement, practice safer sex, prioritize their physical health, etc. It’s made me think a lot about all the other ways language likely interacts with how we think, what values we place on society (and society places on us), and other far-reaching effects of language on cognition. Thank you for this article, as it talks through, in detail, many of these differences based on language structure and has provided me with a plethora of papers to read through!



This got posted to the technology sub as well, and someone posted a link to sshfs; we might look into utilizing that plus block storage to resolve this. For now we’ve trimmed logs again, purged some old info, and are hopefully waiting on an update to pict-rs to resolve the rest. If we run into any trouble I’ll definitely look into reaching out to you, thank you for offering.


There is no one definition of nice, and I think that is nicely captured by this itemized list of ideas on niceness. The rule is purposefully ambiguous, because no two humans are exactly the same. We need to arrive at a space where most people feel comfortable, and do our best to eliminate the discomfort that can arise from differences of opinion, differences in how people want to be treated, and differences in use of language.

I talk a bit in the linked post about why we have chosen to avoid long itemized lists of how to be(e)have and what the spirit behind the singular rule is. It’s also on our sidebar, along with one other post with a few additional points worth exploring. Hopefully these links and the discussion present in this post are enough to give you an idea of how to act. We certainly aren’t in the habit of banning people immediately and without discussion unless they really heinously cross certain lines (openly advocating for violence against minorities, for example), so as long as you strive to behave in the way you think aligns with being nice, the worst that might happen is that we engage with you in a discussion about your behavior if it ever strays into a gray area.


A few months ago I read Hoffman’s book The Case Against Reality. It was an interesting read, one in which I ended up learning more about quantum mechanics than I ever thought I would when I picked the book up. Frankly, I think the book could be distilled down to a much shorter version, as the central concept is not a particularly complicated one, just one which challenges conventional ways of thinking; I think this talk does a better job. If you’re curious to learn more about the science that supports this particular way of thinking, or want a more in-depth exploration of what it means, particularly with relevance to the concept of spacetime, I’d suggest giving the book a read.






New version bugs - Language undetermined error, Subscribed/local/all not defaulting
If you haven't set a language in your profile and you try to post, the default option is "undetermined" and anything you try to reply/post will give you the unhelpful error language_not_allowed. To an end user this doesn't provide any guidance on what happened or how to fix it.

Similarly, if you haven't set a new default since updating, going to the main page of an instance should show whatever your previously saved option was among subscribed/local/all, but it will always show all (since that is what it defaults to on your profile).

I’m having a lot of trouble trying to find the CAT-SEB (Cognitive Analytic Therapy-Swedish Enlistment Battery) which appears to be the IQ test of choice used here. Does anyone happen to have a link to the questions or a sample or some kind of idea of what they’re actually testing?


JKR uses a lot of her time and money to further the TERF agenda. She even proudly considers herself a TERF. The new Harry Potter game is going to generate a decent amount of revenue for her, which means it’s directly funding a hateful ideology.

Some queer people and allies have decided to fight against this however they could, which meant hurling insults at people who talked about the game, posting memes which spoiled the main plot of the game, and really anything else they had control over. It’s been a bit of a moderation nightmare, even if you didn’t decide to take a side in the matter.


Okay, so I need to be sure I have something that can make sense of S3 calls to storage. I feel like we’re getting closer, I’m just still way out of my own technological depth.


Is there any way to do this and avoid having to use S3? I don’t want a surprise bill from Amazon because we exceeded some threshold they have on the free tier (nor do I want to have to make a new free-tier account every 12 months).


Great article, thanks! Completely unsurprising, but I’m glad that issues like this are being surfaced through mediums in which they will receive attention, because these companies certainly aren’t proactively trying to identify and fix these kinds of issues.



> I am willing to contribute storage (I have several TB), but I am somewhat bandwidth limited, so I need to be a bit careful with hosting too many images to not impact the other services that I run on the same connection.

How would you accomplish this? I have plenty of bandwidth and plenty of storage I can subsection as a possible solution (hell, even buying a Raspberry Pi and an old hard drive wouldn’t be all that expensive, and it would potentially be a fun project), but I really don’t even have an idea of how to connect this to the lemmy instance.


If it’s only used for images I’m not all that concerned… images not loading when the rest of the page loads really only matters when the focus of the post is a meme, and I’m not too concerned about those not loading.


Thank you for adding the additional context, hopefully it can help people calibrate how much they should believe this writing.


Honestly it’s kinda fascinating in some extremely weird way…


Of note, there are no sources for this. To an extent, this is to be expected. Hersh does happen to have a history of breaking a few important stories, but those previous stories were backed up by a lot more paperwork than this particular one.



Tech proficient users of Beehaw, we need your help
We've posted a number of times about our increasing storage issues. We're currently at the cusp of using 80% of the 25 GB we have available in the current tier for the online service we run this instance on. This has caused some issues with the server crashing in recent days. We've been monitoring and reporting on this [progress](https://beehaw.org/post/237080?scrollToComments=true) occasionally, including support requests and comments on the main lemmy instance.

Of particular note, it seems that pictures tend to be the culprit when it comes to storage issues. The last time a discussion around pict-rs came up, the following [comment](https://lemmy.ml/comment/280731) stuck out to me as a potential solution:

> Storage requirements depend entirely on the amount of images that users upload. In case of slrpnk.net, there are currently 1.6 GB of pictrs data. You can also use s3 storage, or something like sshfs to mount remote storage.

Is there anyone around who is technically proficient enough to help guide us through potential solutions using "something like sshfs" to mount remote storage? As it currently exists, our only feasible option seems to be upgrading from $6/month to $12/month to double our current storage capacity (25 GB -> 50 GB), which seems like an undesirable solution.
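For anyone wondering what "something like sshfs" could look like in practice, here's a minimal sketch. Everything in it is hypothetical - the hostname, the paths, and the assumption that pict-rs can simply read from a mounted directory - so treat it as an illustration of the general shape of mounting remote storage over SSH, not a tested recipe:

```python
# Minimal sketch: mount a remote directory locally via sshfs.
# Assumptions (hypothetical): sshfs is installed, key-based SSH auth works,
# and storage.example.com is a machine with spare disk space.
import subprocess

REMOTE = "beehaw@storage.example.com:/data/pictrs"  # hypothetical remote path
MOUNTPOINT = "/var/lib/pictrs"                      # local dir the app uses

# "reconnect" re-establishes the mount after network hiccups; "allow_other"
# lets users besides the mounting user access the files (requires
# user_allow_other to be enabled in /etc/fuse.conf).
subprocess.run(
    ["sshfs", REMOTE, MOUNTPOINT, "-o", "reconnect,allow_other"],
    check=True,  # raise CalledProcessError if the mount fails
)
print(f"{REMOTE} mounted at {MOUNTPOINT}")
```

The trade-off to keep in mind is that every image read then crosses the network, so latency and the remote box's bandwidth become the limiting factors.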

What is up with the way that article is written? Is it meant to be targeting incels? There’s a weird level of hand-holding tutorial interspersed with sexist ideology about owning a girlfriend. There’s also a weird shift from NFT art of women to trying to find a date in VR, with no mention that you’re trying to interact with a human. If this was written by a human (and not an AI), I am very concerned.


I find it rather interesting that some of the places most keen on adopting AI are some of the places most plagued by racism. Experts in the field pretty unanimously agree that nearly all AI systems inherit racial bias from their training data, so deploying one in a system that’s already deeply racist is just not a good idea.

Unfortunately at the end of the day, capitalism is likely to win. This will likely be sold to police departments in the coming months and years, despite this article or any attention it’s going to receive.


This is why you need regulations. When a company, whose ultimate motive is profit, is allowed to operate however it wishes, it will maximize profits. It’s a really simple formula, and it’s been proven time and time again; there’s a whole field of really smart math behind it. You cannot be anti-regulation without being anti-human.


Never heard the term ‘feudal security’ before. Interesting read, thanks!







This is exactly the kind of AI application that is almost assured to appear in financially strained systems, especially chronically underfunded systems of government, and it is the kind most at risk of causing serious harm, because nearly all such algorithms are biased and, in particular, racist.

This is the use of AI that scares me the most, and I’m glad it’s facing scrutiny. I just hope we put in extremely strong protections ASAP. Sadly, most people in politics do not see how dangerous using AI for these applications can be, so we will most likely see a lot more of this before we see any regulation.

If you’re curious as to why these kinds of applications are nearly all biased, the following quote from the article helps to explain:

> The Allegheny Family Screening Tool was specifically designed to predict the risk that a child will be placed in foster care in the two years after the family is investigated.

They are comparing variables to an outcome, and the outcome is one which is influenced by existing social structures and biases. This is like correlating the risk of ending up in jail with factors which loosely correlate with race. What will end up happening is that you’ll find the strongest indicators of race, in particular of being black, and these will also be the strongest indicators of ending up in jail, because our system has these biases and jails black individuals at a much higher rate than individuals of other races.

The same is happening here. The chances of a child being placed in foster care depend heavily on the parents’ race. We are not assessing how well the child is being treated or whether they might need support; we are assessing the risk that the child will be moved to foster care (which can alternatively be read as assessing the likelihood that the child is of a non-white race). This distinction is critical to understanding how AI reinforces existing biases.


> For one there are a lot of real nazi’s on the internet. I thought most of them were fake trolls just shit posting for lulz.

I can see how someone might fall into this trap, but scholars of speech and social phenomena have been warning of this for quite some time. When an entire scientific field starts to emerge around events happening in real time, it can help to pay attention to what the experts think, even if you disagree, as they can often provide context or insight.

Also, you say this after plugging a website with literally “nazi” in the URL? Odd.

> When I thought free speech I thought that doctors, lawyers should be able to give their professional opinions without fear of censorship.

What an interesting take. I’ve never heard of anyone complaining about this. I happen to work in medicine and not law, so I can’t comment on the latter, but I’ve never heard a clinician of any sort (not just doctors but PAs, NPs, etc.) talk about this fear. In fact, most medical schools have spent a lot more effort over the last 10-20 years focusing on how to talk with patients, because a lack of censorship has caused fractured clinician-patient relationships, especially among minorities.

> Things get much more complicated though. There appear to be government actors and company actors who try to prevent sites from growing by posting purposefully provocative content.

While I’m sure this is true on a certain level, I doubt anywhere on lemmy is large enough for this to truly be happening. I would love to see instances of this happening on this platform, however, to better understand how to defend against this kind of behavior.







Beehaw strives to be open about everything we do, including our state of affairs financially. As a reminder, you can view our financials and information about this website on the link in the post. Here's a quick recap of the income we earned in 2022:

| Month | Income |
| --- | --- |
| Feb | $18.33 |
| Mar | $4.31 |
| Apr | $0.00 |
| May | $8.96 |
| Jun | $9.28 |
| Jul | $0.00 |
| Aug | $0.00 |
| Sep | $38.76 |
| Oct | $19.46 |
| Nov | $5.52 |
| Dec | $18.80 |
| **Monthly average** | **$11.22** |

We're currently earning a bit more on average per month than it costs to run our current server, but we will likely need to upgrade how much storage we have available within the next year to support growth and ongoing activities. I'm unsure of the cost to do so as I'm not managing that part of the server, but for the sake of transparency I wanted to highlight what's happening. If you have any questions about how we run things, feel free to chime in.