

You can’t build a fandom full of that many fucked-up people without some of them getting horny in a fucked-up way.
Edit to add: example of a non-erotic fanfic https://archiveofourown.org/works/73396436




That header pic sure takes me back


Felicitations!
(“A job,” Blake thinks. “I need to find one of those.”)


From the replies:
But every man under the age of 30 that I’ve ever met at Lighthaven, that I’ve had the opportunity to speak with privately, has completely and totally integrated IQ differences into their ontology.
The entirety of Lighthaven needs locker insertion


I would also not put my finger on those microscope slides


womp, and wait for it, womp


Do you want Tylers Durden? Because this is how you get Tylers Durden.


Train your chatbot on TV Tropes, and the password will always be swordfish.


The post names Joscha Bach as someone Aella tried to exclude.
You do not under any circumstances have to hand it to Aella


“Yes, I am hammering myself in the balls. But maybe it’s worth it?”


It’s someone who learned to stop worrying and love the Bomb.


The phrase “ambient AI listening in our hospital” makes me hear the “Dies Irae” in my head.


A longread on AI greenwashing begins thusly:
The expansion of data centres - which is driven in large part by AI growth - is creating a shocking new demand for fossil fuels. The tech companies driving AI expansion try to downplay AI’s proven climate impacts by claiming that AI will eventually help solve climate change. Our analysis of these claims suggests that rather than relying on credible and substantiated data, these companies are writing themselves a blank cheque to pollute on the empty promise of future salvation. While the current negative effects of AI on the climate are clear, proven and growing, the promise of large-scale solutions is often based on wishful thinking, and almost always presented with scant evidence.
(Via.)


It’s morgin’ time


Limor Fried and I had a class together at MIT in 2001. This has no bearing on the present circumstances and offers me no real insight (anything I could say about our extremely limited interactions would amount to confirmation bias). It’s just the odd little factoid that comes to mind whenever adafruit Does Something Online.


Presuming that they are all liars and cheaters is both contrary to the instincts of a scientist and entirely warranted by the empirical evidence.


First of all, like, if you can’t keep track of your transcripts, just how fucking incompetent are you?
Second, I would actually be interested in a problem set where the problems can’t be solved. What happens if one prompts the chatbot with a conjecture that is plausible but false? We cannot understand the effect of this technology upon mathematics without understanding the cost of mathematical sycophancy. (I will not be running that test myself, on the “meth: not even once” principle.)


Mathematicians: [challenge promptfondlers with a fair set of problems]
OpenAI: [breaks the test protocol, whines]
We will aim to publish more information next week, but as I noted above, this was a quite chaotic sprint (you caught us by surprise! please give us time to prepare next time!). We will not be able to gather all the transcripts as they are quite scattered.
Some of the prompts included guidance to iterate on its previous work…
Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course?