First off, if there’s a better place to ask this, I’d appreciate a nudge in that direction.
I’ve seen a lot of chatter about Newcomb’s paradox on YouTube lately (MinutePhysics, Veritasium, Wikipedia) and I’ve been dwelling on it more than I probably should.
To explain the problem briefly for the uninitiated: there is a super intelligent being that knows you to the core and can accurately (with 99.99+% accuracy) predict your actions/decisions. It has 2 boxes. You have the option to take either just the first box, or both boxes. In the first it always puts $1,000. In the second it will put either $1 million if it thinks you’ll take just the first box; or $0 if it thinks you’ll take both.
The apparent contradiction is explained in the videos.
So the solution to the problem I’ve come to is that you should remove your own ability to decide from your “decision” on whether to take the second box.
That is, you walk in the room, you flip a coin (or some similar random chooser) and on heads take both; on tails just take the first.
I think I’m failing to imagine all the consequences of this, but I can’t decide what this would imply about the superintelligence’s choice of whether to put the $1 million into the box.
Any thoughts on this?
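For concreteness, the randomized chooser the post describes is trivial to sketch (the 50/50 coin is the post’s own; any external random source would do just as well):

```python
import random

# The post's strategy: take the decision out of your own hands
# and let a fair coin choose for you.
take_both = random.random() < 0.5  # heads
choice = "both boxes" if take_both else "first box only"
print(choice)
```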
It’s easy to set up any number of seemingly impossible situations if you start with the premise of a super entity that can accurately predict the future.
What you’re given is [Ridiculous Assumption] + [Normal Situation] = [Paradox]
But the trick is to quickly gloss over the [Ridiculous Assumption] and frame it as [Normal Situation] = [Paradox]
Yes, however, that doesn’t mean there can’t be utility in engaging with the thought experiment.
The prisoner’s dilemma is one I brushed aside for a long time as just a dumb problem when I first encountered it, but over the years I’ve seen its utility in all kinds of situations.
If the being has the ability to observe the entire universe (and thus knows every experience you’ve had), then the result of a coin flip is also calculable, so a determinist would say the being can calculate your next choice rather than predict it.
You have the option to take either just the first box, or both boxes. In the first it always puts $1,000. In the second it will put either $1 million if it thinks you’ll take just the first box; or $0 if it thinks you’ll take both.
I think that’s slightly wrong. IIRC, the machine will put the million in box 2 if it thinks you’ll only choose box 2.
There has to be a way to get the million for this to work, and the way you laid it out would make it impossible to get the million.
Also, I’m not sure that we, as the chooser, know for sure exactly how accurate the machine is - if we did, and it was 99.99% accurate then it’d be a pretty easy choice to only pick box 2 and thus get the million 99.99% of the time.
I think there has to be some doubt about the machine’s predictive accuracy to make the choice of box 2 only a risky one.
However I’ve only watched the MinutePhysics video, and I was getting quite confused tbh! 😁
Yeah. OP’s alternate scenario, where there’s not supposed to be a way to get the million, is a lot more fragile, since then there’s a huge incentive to get the intelligence to screw up its prediction.
In the original setup, where you can choose either the jackpot box or both the jackpot and $1000 boxes, that incentive basically goes away. Like, maybe you successfully change the odds to 25% $1M, 25% $1.001M, 25% $1000, and 25% $0 (assuming the intelligence’s ability to predict the coin flip is no better than chance). But in the original problem, depending on your analysis, you’ve either got a 99.999% chance of $1M (based on the one-box camp’s analysis of taking one box), or you’ve got a 100% chance of getting $1000 more than you would by taking one box (based on the two-box camp’s analysis of taking two boxes). It doesn’t seem to me that a 25% chance of getting $0 would seem like an improvement to either of those camps.
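The numbers in the comment above can be checked with a quick sketch (all probabilities here are the thread’s own assumptions: the predictor is ~99.99% accurate about a deliberate choice, but no better than chance about a coin flip):

```python
M = 1_000_000  # jackpot box
K = 1_000      # guaranteed box

# Deliberate one-boxing: the predictor almost certainly filled the jackpot box.
ev_one_box = 0.9999 * M + 0.0001 * 0

# Coin-flip strategy: the predictor's guess and your flip are independent
# 50/50 events, so each (prediction, flip) combination has probability 0.25.
ev_coin = 0.25 * M + 0.25 * (M + K) + 0.25 * K + 0.25 * 0

print(f"one-box EV:   ${ev_one_box:,.0f}")
print(f"coin-flip EV: ${ev_coin:,.0f}")
```

Under these assumptions the coin flip roughly halves your expected payoff versus deliberately one-boxing, which is the point being made: neither camp should see the randomizer as an improvement.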
So yeah. The scenario OP describes would be a lot more broken, because people’s behavior would be much more chaotic.
So the way to get the $1,000,000 is to be the kind of person who would pick only the first box, but then change your mind at the last minute.
That’s what makes it a paradox.
You could go further. A 50/50 coin is arbitrary; what if you used a weighted coin instead? That is, both you and the superintelligence know that you’ll pick the single box with probability p, but neither of you know the coin’s outcome until you flip it.
What’s the ideal value of p in this case? Is it not arbitrarily close to 1?
Hmmm, that’s a good and interesting follow on thought. No idea where we’d land for p, nor how we’d begin to calculate it. But interesting line of thinking.
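One way to make the weighted-coin question concrete (purely an assumed model, since the thread leaves open how the predictor treats randomness): suppose the predictor knows your one-box probability p exactly and fills the $1M box whenever one-boxing is your more likely action, i.e. p > 0.5. The expected payoff as a function of p is then easy to tabulate:

```python
# Hypothetical model: the predictor knows p and bets on your more
# likely action, filling the $1M box iff p > 0.5.
def ev(p, M=1_000_000, K=1_000):
    filled = M if p > 0.5 else 0
    # With probability p you take only the jackpot box;
    # with probability 1 - p you take both boxes.
    return p * filled + (1 - p) * (filled + K)

for p in (0.4, 0.5, 0.51, 0.9, 0.99):
    print(f"p={p:.2f}  EV=${ev(p):,.0f}")
```

Under this particular model the optimum is p just above 0.5 (you keep the predictor one-boxing while maximizing your chance of also grabbing the $1000), not p close to 1. A different assumption about how the predictor handles mixed strategies (e.g. filling the box with probability p) changes the answer, which is part of what makes the question interesting.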
Coin flips are not random…
It’s “random” to you because you are not that supercomputer, you are a mortal meatbag.
Don’t be pedantic. We all understand the meaning.
1st, a coin flip is random enough that no computer can predetermine the result of the flip with 99.9% accuracy. The process is chaotic.
2nd, walk into the room with a Geiger counter and pick the box based on the click you get from a cosmic ray.
This is a hypothetical where a human being’s actions can all be predicted with high accuracy. Your actions are constantly being influenced by the inputs you receive, so in order to predict your behavior, you’d also need to predict everything you’re going to be experiencing. This necessarily includes the results of that coin flip and the Geiger counter readings.
This necessarily includes the results of that coin flip and the Geiger counter readings.
The OP said he flips the coin after going into the room, but the computer set up the boxes before he entered. So the computer knowing how you’d react to the coin flip can’t change the boxes.
I don’t know what you’re getting at. Did I say something to suggest I misunderstood this part?
You said this:
“This necessarily includes the results of that coin flip and the Geiger counter readings.”
The premise states the computer sets up the boxes BEFORE you enter the room. The OP states he flips the coin AFTER he enters the room.
The computer cannot change the boxes after he has entered the room. The computer cannot know how you will respond to the coin flip because the flip happens AFTER it has fixed the boxes.
And you’re saying that those two things are somehow contradictory? Because if so, I don’t see how. If this super intelligent computer knows how you’re going to choose ahead of time, then it must also know how the coin is going to land ahead of time.
Yes, they are contradictory. The computer isn’t supernatural. The premise states the computer isn’t 100% accurate. It says 99.9%, but it could say 75% without changing the problem; it uses a high number to simplify the scenario for the reader, so you assume the computer is accurate. The premise is that the computer can reliably predict your behavior. The premise is not that the computer can defy physics.
The result of the coin flip is measurable, though. It could be done by a hyperintelligent being.
You didn’t read the article. The computer isn’t watching you flip the coin and then switching the boxes at the last moment.
The boxes are fixed before you enter the room. The computer has already predicted your choice.
Which is beside the point that the OP posited using a random process to make the choice for you. The method of randomness isn’t the issue. That’s why I said a Geiger counter could be substituted for a coin flip.
I didn’t read the article because I’m familiar with the theory already. I believe the universe is determinate, so every choice is predetermined. Therefore, the “predictor” can calculate your exact choice if it knows all variables of the universe. If it doesn’t, it can calculate a likelihood between 99.9 repeating and 50.0 repeating, based on all the variables it does know.
Because the universe can be measured by this entity, it can also know exactly how the coin will land, or in your example, exactly what reading the Geiger counter will give.
I believe the universe is determinate
That has been experimentally proven false and is outside of all mainstream science.
While you can have a supernatural belief in a clockwork universe, the premise is a supercomputer makes the prediction, not God.
Then the experiments may be flawed. We don’t know what we don’t know, but we have reduced a lot of once-“supernatural” phenomena, like gravity and light, to computable mathematical formulae. Is it unthinkable to believe that everything could be computation, then, if we were aware of every variable involved?
There are a near infinite number of variables involved, but if we knew every variable, we could solve it.
Then the experiments may be flawed. We don’t know what we don’t know
That’s the same excuse flat-Earthers make. Yes, every single observation made over the past 100 years could have been wrong, and tomorrow we could find out that all of quantum mechanics is wrong.
There are a near infinite number of variables involved, but if we knew every variable, we could solve it.
Take a single electron. You can’t define its position and momentum simultaneously. It is fundamentally unsolvable. There aren’t even local hidden variables that we are unaware of; violations of Bell’s inequality have been experimentally demonstrated many times. https://en.wikipedia.org/wiki/Bell%27s_theorem