Wednesday, February 21, 2007

Rules

This is something I told Jay once, I believe to demonstrate that rules sometimes feel truer than case by case evaluations.

The entropy of a closed system always increases. Experiment after experiment confirms this. There are some vanishingly unlikely cases where random fluctuations decrease entropy, but they're rare enough to ignore.

So, let's say we do an experiment in the lab, and this one time, under these conditions, entropy actually decreases. We've got a few options:

1. Conclude something went wrong in the experiment--someone played a trick on us, perhaps. The second law of thermodynamics remains true.

2. Conclude the second law of thermodynamics is false.

3. Conclude the second law of thermodynamics is false in this one instance. The rule stretches very far, but not far enough to include this particular experiment, on this particular day, even though nothing was wrong with the experiment.

We're not forced to pick any one of them. Somehow though, 1 feels truest. It is more likely that the law is true and our experiment is wrong, even though we would properly allow that, with some very small probability, the reverse may be the case instead.

It's as if we've attached truth probabilities to each scenario. Conclusion 1 has a 99% chance of being true. Two and three are .5% each. It is logical to go with the truest of the bunch.
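To make the arithmetic concrete: here is a minimal sketch (mine, not the post's -- only the three probabilities above come from the text) of "going with the truest of the bunch" as picking the conclusion with the highest assigned probability.

```python
# Hypothetical truth probabilities attached to each conclusion.
# The numbers are the ones assigned above (99%, 0.5%, 0.5%); the code
# just formalizes "go with the truest of the bunch."
conclusions = {
    "experiment was flawed": 0.99,
    "second law is false": 0.005,
    "second law fails in this one instance": 0.005,
}

# Sanity check: the assigned probabilities should sum to 1.
assert abs(sum(conclusions.values()) - 1.0) < 1e-9

# "Go with the truest": pick the conclusion with the highest probability.
best = max(conclusions, key=conclusions.get)
print(best)  # experiment was flawed
```

Note that the code only mechanizes the final comparison; the interesting step -- where the 0.99 and the 0.005s come from -- happens before any of this runs.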

Now. Where did those truth values come from? So far as I can tell, we just "felt" them. We "intuited" them. We can't smell or see the probability in any literal sense, and yet we know it.

This sounds mystical. It is mystical, to talk of feelings and intuitions as means of getting truth. It's not problematic.

If there were a non-mystical explanation for scientific truth that religion lacked, Richard Dawkins would be on surer ground--we might even grant him the leeway to be the condescending twit that he is.

But there isn't such an explanation, neither in his book nor any other. Indeed, it is not, as of now, clear how knowledge--scientific or religious--is possible at all. Plato's "justified true belief" definition withered, and epistemology is currently in flux, with no clear front-runner. Nor is it enough to say in layman's terms, "prove everything" or "always be skeptical," because that standard would satisfy no one but the most Cartesian solipsist. You should be skeptical to a point, and you should prove things to a point. But neither Dawkins nor most people demand that we prove, say, that we are not brains in vats, or that the future will resemble the past, or that parallel lines will never meet.

My point is not that science is false, or unsupported. It is true, and robust, and I can believe that, because I believe that I can "feel" truth--somehow--that there's nothing wrong with saying it feels as if our experiment was probably a goof and it feels as if the second law of thermodynamics is true. But, given that, I can no longer say to the theist: "Only that which is proven can be believed." I am left with the lamer: "God doesn't feel true."

And my point is not that theism is true. My point is that it is not obviously false. And British evolutionary biologists could do with showing a bit more respect to those who disagree with them.

9 comments:

Jay Goodman Tamboli said...

You're not a scientist, and the way you talk about science makes my head hurt.

Conversely, Dawkins is a scientist, and the way he talks about religion and philosophy makes my head hurt.

Science and religion use completely different analytical frameworks, and they have to. For example, I think scientists may follow your idea of assessing probabilities, but it's not because they "believe" one thing or another, but because they know one route will likely find the source of error faster. The choice has nothing to do with one thing being "truer." It is possible to find an answer, and they're simply trying to find the answer the fastest way. In religion, we don't think there's any way to find the answer, so the best we can do is make guesses as to what's more likely.

Scott said...

For example, I think scientists may follow your idea of assessing probabilities, but it's not because they "believe" one thing or another, but because they know one route will likely find the source of error faster.

What error? Let us combine our possibilities into 2:

1. Error in experiment. Probability 99%.

2. Second law of thermodynamics is false. Probability 1%.

You may be saying that both consist of an error, the first in the current experiment, the second in every other experiment that has confirmed the second law of thermodynamics. If so, our choices are really:

1. This experiment is flawed. Probability 99%.

2. Every other experiment confirming the second law is false. Probability 1%.

If you agree that people assign probabilities to possibilities, what do you mean by that if not the belief of a person in the (probability of) truth of that possibility? I believe number one is probably true, and number two is probably false. It is not clear what you mean by “they know one route will likely find the source of error faster.” How does that apply here? I am only giving a list of conclusions possibly following from the evidence, not a list of actions, so I’m not sure how the word “route” even applies.

This leads me to believe you and I are speaking past one another. I hope you can show me how I’m misreading you.

It is possible to find an answer, and they're simply trying to find the answer the fastest way. In religion, we don't think there's any way to find the answer, so the best we can do is make guesses as to what's more likely.

I fail to see how guessing at what's more likely, which you describe as the religious mechanism, differs at all from the decision we made between the list of conclusions up above, where we did the exact same thing: guessed that conclusion number 1 was more likely.

Jay Goodman Tamboli said...

[snip Jay talking about "error"]

What error? Let us combine our possibilities into 2:

[snip]

1. This experiment is flawed. Probability 99%.

2. Every other experiment confirming the second law is false. Probability 1%.


I think "error" was a bad choice of words. There's certainly an inconsistency between the predicted results and actual results, and you're correct in saying that there are two possibilities: the experiment was flawed or the theory was flawed. The second possibility, though, is not the second possibility you give. Experiments may agree with the theory, but none can "prove" the theory. More importantly, even if the theory is wrong, that doesn't mean all the other experiments were wrong. The theory may be good enough to predict the outcome in 99% of cases, but you've just found the 1% that's a little different. Your formulation as three possibilities in the original post was better.

I think my real problem with your original post stems from the following:

It's as if we've attached truth probabilities to each scenario. Conclusion 1 has a 99% chance of being true. Two and three are .5% each. It is logical to go with the truest of the bunch.

[snip]

This sounds mystical. It is mystical, to talk of feelings and intuitions as means of getting truth. It's not problematic.


What do you mean by "to go with"? A scientist may believe that one situation is correct ("true"), but he's not going to publish that, and he's going to be very careful relying on it in future experiments. There's a world of difference between the scientist's pre-experiment belief in the Second Law of Thermodynamics and his post-experiment belief that his experiment was flawed. Rather than take the 99% possibility and believe it outright, he's probably going to just believe that that possibility is 99% likely. He's going to continue believing all the possibilities are possible, excluding none.

Maybe I'm misunderstanding your argument, but I think we agree that absolute scientific knowledge is impossible. Your reason seems to be that there's some intuition necessary, but I think instead the reason must be that a scientist must always be open to (and welcome) the possibility that our models are flawed.

Scott said...

What do you mean by "to go with"?

“Go with” was a poor choice of words; it makes it seem as if the scientist rounded up the truth value of something to 100% for no apparent reason. The mistake is mine.

I suppose if you asked the scientist what he believed was true, he'd say the 99% possibility is true--but of course, by saying he believes something is true, he must not mean it is 100% certain, but rather 99% certain (or some nearby number). In that sense, I meant he'd "go with" the answer.

Past that I think we're speaking past one another. My point is not that absolute scientific knowledge is impossible, though it probably is: my point is that the assignment of truth probabilities to various solutions is a mystical, intuitive procedure.

As such, this statement:

Maybe I'm misunderstanding your argument, but I think we agree that absolute scientific knowledge is impossible. Your reason seems to be that there's some intuition necessary, but I think instead the reason must be that a scientist must always be open to (and welcome) the possibility that our models are flawed.

…is orthogonal to my point. If absolute scientific knowledge is impossible (let us set that as a working hypothesis), then the logical scientist holds every position tentatively, even if he holds some more stubbornly than others. But one way or the other, he finds some positions more probably true than others, and the point is that the procedure of attaching truth probabilities to all the possible conclusions--experimental error, theoretical error, whatever--is a mystical, intuitive one.

The scientist, after performing the experiment above, will not declare that the second law of thermodynamics is false or inapplicable in this circumstance. He will, I imagine, go find out what went wrong with his experiment. Why does he take the second action and not the first? Because one explanation has been judged (somehow) to be more probably true than the other.

Jay Goodman Tamboli said...

[M]y point is that the assignment of truth probabilities to various solutions is a mystical, intuitive procedure.

Fair enough. I don't know if I'd use those words to describe it, as there is some basis in reality, but usually the assignment isn't a pure calculation of probabilities. I take it, then, that your major argument is that this mysticism means that we shouldn't assume the answer we think is most likely is the true one. It's not a real calculation (and I'd say it probably couldn't be).

In that case, Dawkins seems to be "go[ing] with" the answer he thinks is most likely. He is "round[ing] up the truth value of something to 100% for no apparent reason." And therein is your complaint.

Is that a fair characterization? I think I agree with that.

Scott said...

I don't know if I'd use those words to describe it, as there is some basis in reality, but usually the assignment isn't a pure calculation of probabilities.

There is a basis in reality, but it rests on a feeling of the weight of certain factors (the religious weigh their introspective feelings of the existence of God particularly heavily, feelings that are real). The scientist thought about the evidence he had--the experiments he'd witnessed, his faith in his teachers' reliability, his trust in his own perception--and decided conclusion 1 was more likely.

But another scientist could have looked at the same evidence and weighed each bit differently and come up with conclusion 2. The latter scientist would be wrong, but it's not clear why. Why indeed couldn't this experiment have proven that the second law of thermodynamics is not applicable in every circumstance?

Certainly there is no obvious means of weighing different bits of evidence. Yet we do. Hence, mystic.

I take it, then, that your major argument is that this mysticism means that we shouldn't assume the answer we think is most likely is the true one.

Not at all. I probably believe the opposite, but that's not my point, which is that there is mysticism in the broth in both scientific and religious exploration. In science, the assigning of probabilities is mystic--I don't know why or how it happens. I know that conclusion 1 is more likely, but I don't know why. One could say that there is more evidence for one than two, but that only raises the question of how that evidence is weighed.

In that case, Dawkins seems to be "go[ing] with" the answer he thinks is most likely. He is "round[ing] up the truth value of something to 100% for no apparent reason." And therein is your complaint.

Dawkins is not doing that. He characterizes himself as a probabilistic atheist--he counts the existence of God as having a 99% chance of being false. My point is that though he finds religion obviously false because it is believed without proof, one can make the same criticism of certain aspects of the scientific process, such as how we assign truth probabilities to conclusions.

Jay Goodman Tamboli said...

Blogger just ate my comment.

My point is that though [Dawkins] finds religion obviously false because it is believed without proof, one can make the same criticism of certain aspects of the scientific process, such as how we assign truth probabilities to conclusions.

But a religious person is making an assumption about the end truth, while a scientist is making assumptions (and weakly-held ones at that) about probabilities over different outcomes. Doesn't that make a difference?

Wild Pegasus said...

I believe to demonstrate that rules sometimes feel truer than case by case evaluations.

That's not a surprise. Rules tend to be easier than cases.

- Josh

Scott said...

But a religious person is making an assumption about the end truth, while a scientist is making assumptions (and weakly-held ones at that) about probabilities over different outcomes. Doesn't that make a difference?

I'm not sure how you're measuring the strength of assumptions. The scientist holds possibility number one, by hypothesis, quite tightly. He would not discard it easily.

As to assumptions about end truth, that seems an uncharitable interpretation of the religious. One can imagine an organic, non-end-truth means of building a case for God. We could assume that people who say they feel the existence of God are not delusional, but really do feel something. For any one person, we could hold that with a small degree of probability, but as we find more and more people with the same belief, a case starts to be built that there is a God. We did not assume the end truth--we simply made minor assumptions along the way that grew organically.

This is similar to assuming one experiment has discovered a truth, say that entropy increases in circumstance A, and building experiment by experiment to eventually give us the end truth that there is a second law of thermodynamics.
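In modern terms, the "organic" accumulation described above maps onto repeated Bayesian updating on weak evidence: no single report proves anything, but many independent ones can build a strong case. A hypothetical sketch (every number here is invented for illustration, not taken from the discussion):

```python
def update(prior, p_given_h, p_given_not_h):
    """One Bayes update: posterior probability of H given one report."""
    num = p_given_h * prior
    return num / (num + p_given_not_h * (1.0 - prior))

# Start out very doubtful that H ("there is a God", or "entropy
# increases in circumstance A") is true.
prior = 0.01

# Each of 100 independent reports is only slightly more likely if H
# is true (0.6) than if it is false (0.5) -- a weak, minor assumption.
for _ in range(100):
    prior = update(prior, 0.6, 0.5)

# No single report settled anything, yet the accumulated case is strong.
print(prior)
```

The sketch makes no claim about whether the 0.6-versus-0.5 likelihoods are right for any real testimony; choosing them is exactly the intuitive weighing the thread is arguing about.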