Let's assume we develop the capacity to create virtual worlds that are near indistinguishable from the real world. We hook you up to a machine and you now find yourself in what is effectively a parallel reality where you get to be the king of your own universe (if you so desire). Nothing is off limits - everything you've ever dreamt of is possible. You can be the only person there, you can populate it with unconscious AI that appears conscious, or you can have other people visit your world and visit theirs as well, or spend time in "public worlds" with millions of other real people.
Would you try it, and do you think you'd prefer it over the real world? Do you see it as a negative from an individual perspective if a significant part of the population basically spends their entire lives there?
That's why I said AI that appears conscious
What's the difference between seeming conscious and being conscious?
We literally have no idea and have not figured out a good way to test this.
We do know. Consciousness is what you're experiencing now. Then again, general anesthesia is what non-consciousness feels like: nothing. It by definition cannot be experienced.
What we don't know is how to measure it. There's no way to confirm that something is or isn't conscious.
That's true from my pov, but I can't really prove it. It's kinda like the biggest "trust me bro" that we all assume is true.
Not digging into the ethics, just the ideas are fascinating.
Yeah, I agree. The only thing one can be 100% sure of is that they're conscious themselves.
Consciousness means that you’re capable of having a subjective experience. It feels like something to be you.
If you only seem conscious, then you can't experience anything. You might as well not exist at all.
I guess it depends on how realistic the fake consciousness is. Is it indistinguishable from real consciousness? Or would I be acutely aware that every relationship I create is fake? I mean, if we're claiming it absolutely is not real, then I'll always know that, and it kinda taints the whole idea. It makes me wonder about the whole concept. Like, if we did find a way to determine consciousness somehow, could that knowledge interfere with building an emotional relationship with an indistinguishable but fake conscious AI?
It's not fake consciousness per se, but a character that acts as if it were conscious despite the fact that it's not. A so-called "philosophical zombie".
You could have real relationships with other real people in the simulation. AI could be your barista, your driver, random people in the city, etc.
How do you test that? How do you know that the people around you actually have consciousness and don't just seem to? If you can't experience anything, how do you fake consciousness? And is this fake consciousness really any less real than ours? I think anything that resembles consciousness well enough to fool people could be argued to be real, even if it's different from ours.
I don't think it matters in this case. I decided that they are not conscious and only seem to be because I didn't want this thread to turn into a debate about whether it's immoral to abuse AI systems or not.
I think it matters a great deal! I would like to believe that not only would I not use such a system, I would actively fight to have it made illegal.
Why? That's like making it illegal to kick your Roomba.
No. I'm very certain that my Roomba is not conscious. But if we can't tell whether or not these people are conscious, then I don't think it's right to have this power over them. A better parallel than a Roomba would be an animal.
No. I wrote the premise myself, and I specifically said they appear conscious, not that they are conscious. I get what you're saying, but that does not apply here. In this specific case we know for a fact that they're not conscious. The only other conscious beings there, besides you, are the other real people in the simulation. Not the AI characters.