Today I was talking to a Dominican novice, newly vested and with his first month of Novitiate life behind him. We talked about many things: learning chant, praying with the Psalms, Lectio Divina, and the Thomistic understanding of predestination.
Probably the most interesting thing we talked about was mystery. Both of us had the experience of coming to accept the Thomistic view of predestination despite its being closer to Calvin's view than we would have liked.
I suspect that our reluctance to embrace a stronger view of predestination is due to our Western upbringing. In the West, we generally assume that we have not only free will, but strong wills capable of overpowering almost anything else. In reality, our wills are generally pretty weak, which we find out very quickly when we try to give up our small comforts.
The newly-vested Dominican novice explained that it is (not to cast aspersions on all the excellent reasoning done by Dominicans like Garrigou-Lagrange who convinced him that the Thomistic view was correct) ultimately a mystery as to how exactly free will and God's sovereignty intersect in each moment of our lives.
This prompted me to think that theology is fundamentally an apocalyptic exercise. An apocalypse is, literally speaking, an uncovering. It is an unveiling of Truth.
When a bridegroom unveils his bride, he is not providing himself or the witnesses to the wedding with all the answers about who his bride is and all that she has done. Instead, he is revealing the mystery to whom he has committed himself for life.
Even if he is an unusually good husband, he will spend his entire life learning more about his bride, and at the end of his life, she will still be a bit of a mystery. No matter how intimate they become, the bridegroom who becomes the husband will never quite know everything there is to know about her, the bride who became the wife. And vice versa.
In the same way, when we Christians unveil the truth very haltingly and with frequent missteps as we do our humble theological work (or even when Doctors of the Church like St. Thomas Aquinas do that work exceptionally well), we are not providing ourselves with all the answers about who God is and all that He has wrought.
Even an unusually good theologian who investigates thoroughly the things of God will never quite know everything there is to know about God. Indeed, they may feel, like St. Thomas Aquinas did at the end of his life, that all their great theological treatises and syntheses are like mere humble straw compared to the immensity of the mystery of God which has been revealed to them.
As with the physical and social sciences, our theological investigations, no matter how many questions we have reasoned through, leave us with yet more questions. The scientific work we do uncovers some important answers, and it also leaves us with more mysteries. Science is apocalyptic in the sense that it unveils, yes, but what it unveils is that there is a still deeper mystery.
Theology, the Queen of the Sciences which St. Thomas Aquinas served so faithfully and well, is likewise an apocalyptic science. It is not the writing down of all the answers, but rather the work of unveiling the divine mystery.
And just as with the bride and her bridegroom, the beauty of the mystery is indeed all the greater for the unveiling.
Related: Is Thomas Aquinas a substance dualist?
The above is a picture I took of a statue of St. Thomas Aquinas at the Dominican House of Studies.
I'm at the end of my wisdom, and here I will remain as its limits grow into the event horizon of love.
He who learns must suffer, and, even in our sleep, pain that we cannot forget falls drop by drop upon the heart, and in our own despair, against our will, comes wisdom to us by the awful grace of God. - Aeschylus
Monday, May 14, 2018
Fair Questions: How do religious doctrines confer an evolutionary advantage?
One of the questions that came up during the podcast that Sam Harris did with Bret Weinstein surprised me, although it shouldn't have. Bret's answer to that question also surprised me (and seemed fairly surprising to Sam Harris as well), but not in the sense that it cut against my intuitions about what the correct answer would be.
I was just surprised that he generally agreed with my answer, because other left-libertarian evolutionary biologists don't strike me as the sorts of individuals who ever would agree with me on this topic.
When Bret mentioned that he believed that specific religious doctrines had evolutionary value, I thought, "Well, of course." It seems very obvious that they do, given how pervasive religion is, and how evolutionary processes work.
The more interesting question is how that plays out with regard to certain doctrines, and that's what Sam Harris brought up very effectively with his example about the Catholic doctrine of transubstantiation.
I can understand why Harris chose transubstantiation as his example. It's not obvious how that might confer an advantage in evolutionary terms.
On the other hand, the moral doctrine of the Catholic Church which states that it is wrong to use artificial contraceptives for selfish ends is much easier to understand as a doctrine that contributes to evolutionary fitness. It's a doctrine that provides an incentive for those who are Catholic to be open to having more children, and that's generally going to improve evolutionary success for those Catholics, ceteris paribus.
But it isn't just the one doctrine. Bret's point that the structure of the religion has to be doing a lot of the work is a useful one, I think, for understanding the value of religion in evolutionary terms. Most people who are opposed to religion will admit that the communities formed under the banner of religion are strong and people find it difficult to leave those support systems.
This robust social support system as a function of religion has fairly obvious evolutionary value as well. But what is less obvious is how the intellectual structures of a religion confer an evolutionary advantage. Here is where I have some ideas to offer, although I'm afraid that I'm offering them in an overly simplistic formulation.
Let's take Buddhism as an example. Buddhism, when it is practiced, tends to reduce (to a greater or lesser degree) an individual's attachment to seeking pleasure and avoiding pain. And this allows individuals to act both more rationally and more compassionately, because a huge barrier to being more rational and compassionate is our inordinate focus on short-term interests, which tends to lead to shorter and/or less healthy lives and thus fewer offspring in many cases.
So Buddhism's focus on reducing our attachments to seeking pleasure and avoiding pain for their own sake helps us mitigate behaviors that will reduce our evolutionary success, because it improves our ability to make rational decisions about resource utilization and fosters more pro-social behaviors.
But even within that general framework, we still can't easily explain how something specific and more spiritualized like the Mahayana Buddhist teaching on the transference of merit or the Roman Catholic teaching about indulgences fits into this picture. Nonetheless, I actually think it is explicable, albeit in a more subtle way and with some difficulty.
Both of these practices function to help us become less selfish the more we practice them. They may seem like mere pie-in-the-sky magical thinking to those who don't share their respective cosmologies, but what they are doing to us is gradually making us less likely to engage in the sort of immature short-term pleasure-seeking and pain-avoidance behaviors that ultimately make us less successful from an evolutionary standpoint.
They do this in a couple of ways. First, because these practices are prompting us to keep our focus on something other than ourselves, they are training the mind to be less self-centered in how it deals with the world. Second, because these practices take up the time that we might otherwise be inclined to worry about pleasure and pain, we are less likely to enter a downward spiral of anxious thinking and go back to our less valuable behaviors (from an evolutionary standpoint) in those moments.
These practices are just a small sample of the myriad religious practices that serve the same basic functions, and both ancient Buddhist and ancient Christian religious traditions have huge collections of such practices that together are a fairly comprehensive training program for reducing the ill effects of egotistical thinking and behavior. (Click here for more examples of this in the Catholic Church)
So most religious doctrines are part of an intellectual framework that helps promote pro-social behaviors, reduces our reliance on the transient pleasures that drive much of our egregiously irrational behavior, and provides a motivating narrative for us as we navigate a difficult life.
The Roman Catholic teaching on transubstantiation or the Mahayana Buddhist teaching on the Buddha-nature may seem very far away from being valuable in terms of evolutionary success, but they are highly important as motivators. If you believe that God has given Himself to you for your salvation, and that He has gone so far as to give you His very body, that's a strong motivation to keep going in the face of struggles that might otherwise cause you to fall into fatalism.
In a similar way, if you believe that your Buddha-nature means that you have an opportunity to transcend all the suffering of this world, that's highly motivating and may keep you going through rough times so that you can have more offspring and be more successful from an evolutionary standpoint.
It may be that there are specific religious doctrines that, individually, are almost impossible to understand as useful in light of evolutionary processes, but I suspect that taken as a whole body of beliefs, most religions have pretty obvious evolutionary value.
Unless, of course, you're already committed to the view that they couldn't possibly improve human fitness from an evolutionary standpoint. In that case, it would be very difficult to see how they might.
Related: Fair Questions: How did religion help us survive?
Wednesday, April 11, 2018
Waking Up Half-Asleep: Scientific Racism and Sam Harris
The latest controversy surrounding Sam Harris has been generated by his willingness to admit that he was wrong about Charles Murray, who earned great notoriety for his book The Bell Curve. Others disagree, and continue to claim that racism is either Murray's motivation for the research and the policies he supports or something he conveniently ignores.
I think the question of Murray's views on race has been beaten to death, re-animated, and beaten to death again so many times that it's largely a futile effort to address it again.
What interested me more was the subject of scientific racism, and Harris' response to Ezra Klein's points about it. You can see the transcript here at Vox if you prefer not to listen to the podcast version.
I want to look at a portion of the conversation and quote the transcript to address the issue of scientific racism:
Ezra Klein: Something you brought up a couple times is something I wrote in my piece, and I am actually very happy to talk about this. I say that the belief that African-Americans are genetically less intelligent than whites, and then also inferior in other ways, which I’m not saying you guys said, is our oldest, most ancient justification for racial inequality and bigotry. Do you disagree with that? When you look at American history, when you look at what we said at the dawn of this country and all the way through the 1950, the 60s, when I say that, am I wrong?
Sam Harris: In a sense you’re wrong. I agree with the spirit of it. I think you could say the Bible is just as much of a justification, the notion that the race of Ham came under a curse and that these races have a separate theological stature. You had Bible-thumping racist maniacs defending slavery and without any reference to science. That’s a great American tradition.
I think tribalism is at the bottom of it and perceiving other people who look different and sound different from yourself as ineradicably different. I think that is a problem we must outgrow, and I fully agree with the social concerns that follow from noticing how far we have to go in outgrowing that.
Ezra Klein: One of the things I detect in this conversation, this maybe gets to something we discussed that we would talk about later and maybe we’ve hit that point. Something I detect here is the idea that, and I want to think about how to phrase this carefully, because I want to do it without making you defensive, is that ideas can only fit into this lineage if they are being said with racial animus, if they are being said by someone who doesn’t like the people they’re talking about.
I think an important thing when we study the history of racism in this country is that it has always had a scientific wrapper. It has always been not something people thought they were doing because they were hateful, it was something they thought they were doing, because it was true.
This is the area where I think Sam Harris' tribalism (which Ezra Klein accused him of ignoring) is actually a factor. Sam Harris has a fundamental commitment to science and scientific values. He's a neuroscientist himself. He's part of the tribe of scientists and pro-science public intellectuals. He's gone so far as to claim that morality can be grounded in a scientific framework.
My sense is that Harris's self-perception is that he's taken his tendencies toward tribalism into account and mitigated those already. Also, he doesn't pretend that no one has ever used science to support racist ideas. He acknowledges this, albeit briefly.
I don't think the tribalism is a problem for him here in a straightforward and obvious way. It's more roundabout, and by way of his views on religion.
Even very rigorous philosophers who regularly take into account their own biases, when they happen to be religious, can easily tilt the way they interpret the strength of the arguments against their religious views to avoid dealing with the full strength of those arguments. I'm sure Harris is familiar with this process from long experience.
What I would suggest is that the same tilting of the way the strength of the arguments is interpreted is happening here. The strength of the arguments regarding the extent to which the Bible was used to rationalize slavery in the U.S. is given a bit greater weight because Harris is committed to opposing religions generally, and thus the various religious tribes (who generally oppose him).
At the same time, he is unlikely to weight the strength of the arguments regarding the extent to which scientific research was used to rationalize slavery and eugenics and so on in the U.S. so heavily because his view of science is that it is helping us leave those ideas behind rather than keeping us in those ideas as he believes religion does.
My view of the issue is that religious views rather than scientific views played a larger role in maintaining slavery, and that religious views rather than scientific views played a larger role in ending slavery (and the Jim Crow laws and segregation which followed) in the U.S.
It was much more the American moral sense formed by Christianity that motivated the change in views on the issue of slavery and the issues of Jim Crow laws and segregation. The scientific evidence we have now does indeed support the view that traditional racial categories are not very accurate as a description of intra-species differences in humans, but this evidence was not yet well-established and popularized during the colonial era or the Civil War era.
What was well-established at that point were the Christian moral arguments for slavery and the Christian moral arguments against slavery. And it was those religiously-motivated moral arguments that carried the day, and still inoculate us against a return to slavery.
Scientific research doesn't, by itself, produce moral progress in a society. It's typically used in a self-serving way by both sides of any given controversial moral issue, but it's not what motivates societies to change. And that's for a very simple reason: science is generally not what motivates people to change (religion is much more motivating), and it's the people who need to do the changing for societies to change.
Now, I would feel safe betting that Sam Harris would agree with me that people ought to be more motivated by scientific findings. He and I would likely reach an accord quickly on that point.
Certainly more quickly than an accord could be reached by Harris and Klein on the question of whether or not Klein's publishing of articles contra Murray and Harris poisons the space for debate on important issues.
I think it's probably true that Klein's behavior has contributed to poisoning the space for public debate on the policy implications of IQ score differentials, but only ever so slightly. It was a poisonous space for debate long before Klein was involved, and I don't see his contribution being a very large one.
I would also say that Klein is dead wrong about the most ancient justification for racial inequality and bigotry being the belief that folks with dark skin are less intelligent than those with light skin. The roots are deeper than that. Bigotry based on physical characteristics is much older than any of the recorded history for the colonial era (or even the medieval era), and likely much older than any recorded history...period.
We need to look deeper for the roots of bigotry, and even slavery, which is older than any of the religious traditions or scientific findings that have recently been used to rationalize its continuance. We can and should do this while still guarding against future uses of moral reasoning grounded in those religious traditions and scientific findings to bring slavery back or to continue other kinds of ongoing injustices.
I think Klein is exactly right, however, that racism in the United States has consistently had a scientific wrapper. And that we ought to be skeptical of our ability to claim today that we can be reasonably sure that the scientific data showing differences in average IQ scores among populations represents a real innate difference in IQ due to heritable traits.
I can acknowledge that the data exists while remaining agnostic as to the exact causes and their implications (which I do). This curious agnosticism with regard to the question would probably have been a better approach for many people who used scientific research to prop up their racism in the past or the people who continue to use it that way today.
I think we need to make sure that we are not waking up to the problem of racism, still half-asleep and stumbling around while unable to see that the same old pitfalls are still there.
It is probably better to give ourselves a chance to wake up fully before we attempt to leave our resting place too boldly, and then we can make our way more safely while avoiding those old pitfalls.
Sadly, I'm not sure any of us are fully awake at this point.
Thursday, February 8, 2018
Fair Questions: Why doesn't science show that Buddhist monks are less afraid of death?
Recently, I was pointed to an article in Newsweek which described a study that was done to test the hypothesis that the Buddhist belief that the self (as we generally think of it, a persisting reality) is an illusion would result in Buddhist monks having less fear of death compared to lay Buddhists, Hindus, and Christians.
This hypothesis was thoroughly falsified. The Tibetan Buddhist monks actually reported more fear of self-annihilation upon dying than any of the other groups. And in the test of selflessness (which should be a result of practicing the Buddhist focus on impermanence), they were actually less selfless than others. This was an interesting day for science, and it's always nice to see a hypothesis falsified, because that's scientific progress.
The researcher quoted in the Newsweek article seemed quite surprised by the results. I worry that this is largely because the researcher doesn't understand Buddhism very well, though I could be very wrong about that. My own grasp of Buddhism is better than the average Westerner's (as you can see from my extensive writings on it), but it is certainly not complete.
At the very least, you can read in the paper they wrote after the study that the researchers relied on knowledgeable Tibetan Buddhist scholar-monks to calibrate their survey questions and understand the degree to which the answers conformed to standard Tibetan Buddhist teaching. That's good methodology.
I do have some suspicions about the possible causes of the research results with regard to the Tibetan Buddhist monks being less selfless than the lay Buddhists in Tibet and Bhutan. I also have some suspicions with regard to the fact that they appeared to be more afraid of self-annihilation. Regarding the fear of self-annihilation, I think they need to do a comparable study with other Mahayana and Theravada Buddhists (both lay and monastic).
The reason I suggest that they should do more research with other Mahayana and Theravada Buddhists is that the Mahayana tradition generally, and the Vajrayana tradition of Tibetan Buddhism specifically, have some beliefs that differ from Theravada Buddhism in ways that are relevant to how one would view death.
In Mahayana teachings, there's a strong emphasis on buddha-nature and Buddhahood, and actualizing Buddhahood would cause one to essentially live forever as a bodhisattva. Monks are traditionally considered to be the ones most likely to reach that state (as you can read here), and they would have the most to lose by self-annihilation upon death. On the other hand, lay Buddhists have to be resigned to the high probability of suffering a long time (perhaps millions or billions of years) on another plane of existence, so self-annihilation might not look so bad from their perspective.
Theravada teaching tends to place more emphasis on the cessation of existence within the cycle of saṃsāra (being reborn over and over again and suffering for all or most of eternity). From that perspective, too, self-annihilation could look pretty good.
Another important point with regard to the selfishness of the monks when presented with the life-extending medicine is that traditional Buddhist teaching places monks and care for the basic material needs of monks very high on the moral priorities list because they are the most likely to become enlightened and escape saṃsāra. Therefore, one might have less incentive to extend the lifespan of someone who is very likely to die and be reborn in a naraka and suffer for millions or billions of years before getting another chance to be a monk and gain the opportunity to escape saṃsāra.
That said, there may be a deeper and simpler reason that serious Buddhist practitioners who meditate often would be more attached to their own continued existence. During deep meditation, one can find a tranquility or a bliss which far surpasses the banality of daily life in the quality of experience.
One can also notice that while there is no self in the way that we typically think of it as a persisting psychological reality, there is something which is aware of the contents of the psyche, and that something is what remains with us even after a deep meditation which changes us so dramatically that we can no longer pretend that there is a persisting psychological reality which is the ground of our being.
It is this something which is aware that presumably persists through the endless cycles of death and rebirth known as saṃsāra, through both the terrifying and torturous narakas and the highest heavenly planes. One would guess that Buddhist monastics would be highly cognizant of the fact that this something which persists through life after life, if it were to cease, would mean the cessation of their own being, and their chance at living on as an enlightened bodhisattva.
While none of them would believe that the mere lack of a persisting psychological reality (known popularly as the self) is anything to fear, because meditation would have made that plain to them, they might be quite fearful of the final cessation of that something which is aware.
After all, they've developed a closeness with it through meditation that most people never develop. They may have become attached to this something through long familiarity, and it may be wrenching to consider losing it forever, this truly persisting thing without which we would not experience bliss or tranquility (so far as we know).
I'm not saying that any of these beliefs or experiences are necessarily causally related to the greater fear of death or the selfish behavior of the Buddhist monks.
I don't know with certainty why Tibetan Buddhist monks would have a greater fear of death than lay Buddhists or members of other religions in the same geographical area.
But I do think the researchers need to consider the complexity of Buddhist beliefs when thinking about these experiments and what they measure.
Related: What is the role of the Sangha in Buddhism?
By Stephen Shephard - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1130661
Friday, January 5, 2018
Fair Questions: What can we learn by living with different assumptions?
I had a discussion related to my previous post about the Slow Death of God (the story of former pastor Ryan Bell's year without believing in God). One of my friends asked a good question pursuant to that discussion, and I want to address it more fully than I did in my brief response to him.
The question as posed is in two parts:
"Do you think that living for a year without God is a reasonable way to go about discovering the truth? One of your responses to your friends indicates that living out a certain way of life is a good way to confirm what one already thinks is true. However, is it a good or reliable method of discovering the truth?"
The first part of this question is asking whether Bell's experiment was a reasonable way of discovering the truth about whether God exists or not. My short answer to that question is: No.
The second part of this question is asking whether it's a reliable way of discovering truth. My short answer to that is: Maybe. It depends on what kind of truth you want to discover.
But why?
1. Not all truths are falsifiable by experience.
For example, suppose I were to live for a year under the assumption that our Vice President, Mike Pence, is my best buddy who goes for beers with me at the local bar every weekend. That is an assumption that could (and would) be falsified by my experience of life. I've never met the man, didn't vote for him, am not a member of his political party or a donor to it, don't have any friends who can introduce me to him, and I doubt he'll respond to my messages asking to have a beer with him now that he's busy performing his role as Vice President of the United States. It would very quickly become obvious that my assumption was incorrect.
On the other hand, a proposition like "the contents of scientific theories describe reality as it truly is rather than reality as filtered through our perceptual mechanisms" isn't really susceptible to that sort of falsification because it's not a proposition we could falsify by living as if it's true for a year. Similarly, a proposition like "there is a ground of being from which all that exists gained its existence and sustains its existence and we call this God" isn't susceptible to being falsified by living a year as if it were true.
So if people want to do "a year with ________" or "a year without ________" experiments to discover a truth, it's important that they actually choose a truth that can be falsified (or at least made significantly more or less probable in light of a Bayesian analysis) by means of doing that experiment. Ryan Bell didn't really consider experimental design effectively when deciding to live for a year without God, obviously.
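To make that Bayesian aside concrete, here is a toy sketch of my own (the numbers are invented for illustration, not drawn from Bell's experiment or anyone's actual beliefs). A claim is testable by a "year with/without" experiment only when the evidence the year would produce is much more likely under one hypothesis than the other; when the evidence is equally likely either way, the posterior never moves off the prior, no matter how long the experiment runs.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from Bayes' theorem, given a prior P(H)
    and the likelihood of the evidence under H and under not-H."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Falsifiable claim: "the Vice President has beers with me every weekend."
# Evidence E: a full year passes with no beers together. E is nearly
# impossible if the claim is true, nearly certain if it is false,
# so the posterior collapses toward zero.
falsifiable = bayes_update(prior=0.5, p_e_given_h=0.001, p_e_given_not_h=0.999)

# Unfalsifiable claim: the evidence of ordinary daily life is equally
# likely whether the claim is true or false, so the year of living
# "as if" teaches us nothing about its probability.
unfalsifiable = bayes_update(prior=0.5, p_e_given_h=0.5, p_e_given_not_h=0.5)

print(falsifiable)    # far below the prior
print(unfalsifiable)  # unchanged from the prior
```

The point of the sketch is only that the second kind of proposition is structurally immune to this sort of experiment, which is the problem with treating a year without God as a test of God's existence.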
2. Cognitive biases always tilt us toward thinking we've confirmed what we assume.
When we live as if something is true for a significant period of time, it gives our confirmation bias lots of opportunities to work its magic. One of the byproducts of that is that we find all sorts of new reasons to believe whatever proposition it is.
And because of the familiarity principle, the more we expose ourselves to the phenomenon of living this way, the more we appreciate it and enjoy it, as long as there aren't direct and obvious negative consequences to doing so that we can't explain away by way of confirmation bias. Add into the mix that in this process, we will generally seek and find a community of people who believe it as well, and then normal emotional attachments, groupthink, and the in-group bias we all suffer from will tend to keep us on that path.
Atheists often (correctly) point to that tendency to rationalize whatever one already believes when they note how many people share the religion of their parents. Rationalizing what we already believe or how we already behave is a perfectly natural consequence of how our brains work. And that remains true once we change those beliefs. Our brain picks right up and rationalizes our new belief in the same way.
I've often noticed that new believers (whether a new skeptic or new Christian) don't have good reasons that they can articulate for their new belief. Those good reasons and good arguments tend to accumulate over a number of years of thought and dialogue rather than actually being the primary driver of the change in belief.
Of course, another byproduct of living as if something were true is that we understand better the position of those who hold what we commit to living out as true. And that's quite valuable. I've found it to be valuable, at any rate. That seems to me to be the better reason for living for a time as an atheist, or a Buddhist, or a Christian, or a Muslim, and so on. It would be a valuable way to understand those with whom you disagree. That said...
In the end, it's not reasonable to live a year without God as a means of deciding whether God exists or not because it's not the kind of proposition that can be tested that way. And it's not reliable because of the outsized impact of our normal human tendency to rationalize whatever it is we are currently doing on our conclusions about what's true.
Sunday, May 28, 2017
The Anthropomorphosis of Science
A number of atheists I've dialogued with or whose debates I've observed in the past have pointed out that it's not fair to give credit to God for the events we like and not blame God for the events we don't like. Which is odd, because plenty of people do blame God for events that they don't like. Most theists have probably been angry at God for precisely that reason at some point in life.
Some of the more philosophically sophisticated atheists chose to point out instead that it wasn't fair to give credit to God for what one's surgeon had done to save one's life. Instead of giving credit to the proximate cause of our healing (a good surgeon with good tools), we superstitious theists would invoke the ultimate cause of the universe that we called God, anthropomorphizing distant forces that we didn't actually understand.
It is with this background that I've observed the rhetoric around the recent March for Science. Undoubtedly, plenty of those who are part of the March for Science are theists. Many theists are scientific realists right along with most atheists. I certainly was.
But it is not just theists who engage in anthropomorphizing abstract causes which are not proximate to the outcome. Exhibit A from the March for Science coverage is an article in Salon entitled "Science saved my life," which contains a genuinely touching description of how science kept its author from going down a dangerous path of addiction and an unfulfilling life.
It's not very different from the AA or NA testimonies I've read or heard in person. But instead of finding God being the cause of her ability to relinquish her addiction, it was finding science.
Except, it wasn't science in an abstract sense that she found. She had an encounter with methodical, practical learning, the pursuit of knowledge which gradually drew her out of her reliance on old addictions as coping mechanisms. And she fell in love with this kind of learning, the grand unveiling of the mysteries of the universe insofar as we tiny-brained hominids can unveil them.
Like most people who found something more enticing than an addiction to alcohol, or marijuana, or prescription painkillers, or various other and more profoundly mind-altering substances, our erstwhile convert to scientific realism attributes her transformation to the system of ideas abstracted into one concept rather than just owning that she found something healthier than her old addictions to shape her life.
Just as my grandfather attributed his abandonment of his old addictions to his finding religion, the author attributes her abandonment of those same old addictions to finding science. It is, of course, not a bad thing to find something healthier to replace our addictions. Even if the replacement is just as addictive and we are overly attached to it, it may nonetheless be far less damaging than our old addictions.
This should indeed be celebrated rather than being mourned. I'm genuinely glad that she found science and that it allowed her to be freed of those old addictions to transient pleasures.
At the same time, I can't ignore that this is part of a broader trend. The popular conception of science, like the popular conception of God, has become reductively abstracted and oddly anthropomorphic.
Science is now treated more frequently as a causal explanation ("It's science!" a la Bill Nye and memes) rather than a body of methodologies that gradually allow us to uncover our numerous errors about the world and our experience of it.
Science is no longer that unpopular but necessary discipline for discovery and innovation that codifies the best of human learning heuristics into a broad field of study; it has become a popular invocation of epistemological authority. We can see this in a variety of other popular memes.
As someone who is very pro-science, who wants to support science education, scientific research, and political decision-making more informed by science, I find this trend troubling. When religion becomes a tool used to make claims unthinkingly (and simultaneously authoritatively) such that it's difficult to question it, that's a serious problem for free inquiry.
And what I see today is science being used the same way religion has been by people who have a poor and popular understanding of it, so I worry that scientific inquiry may be compromised by a popular perception of it completely at odds with the free inquiry it ought to manifest and indeed fulfill.
This is why I'm generally opposed to the anthropomorphosis of science.
Related: The Benefit of Doubt: The Question of Science
Note: The above is a picture I took of part of one of my science fair trophies.
Saturday, March 18, 2017
Surveying the Moral Landscape Again
Back in 2013, I responded to a TED Talk given by Sam Harris about how science can determine human values by pointing out that even if he's correct, this has consequences with which he should not be pleased.
Though my argument about the consequences of his view may have been highly unusual, I was far from the only philosopher to take issue with Harris' claim that science can determine human values. And he graciously responded by offering a challenge to other philosophers seeking to refute his position.
Ryan Born's surprisingly brief essay was the winner of that challenge (in the sense that his response was judged to be the best), though he definitely did not persuade Harris to recant his position that science can determine morality. Nonetheless, his essay from 2014 is well worth reading.
It provides an excellent summary of the basic argument against Harris' claim that he has found a way to determine human values using scientific means. Harris, of course, responded to the essay with one of his own to defend his position. And, having read it, I found it very helpful.
At the very least, it makes his position more understandable, whether one agrees with it or not. The rebuttal he offers to Ryan Born's points is fairly effective, and I recommend that everyone with an interest in the topic read it at his blog under the title "Clarifying the Moral Landscape" for that reason.
Harris' response isn't intended to address my previous critique, and I don't want to re-hash that argument here even though my argument could certainly use some refinement. Nonetheless, I do want to address some of his responses to Born's essay. Harris writes:
Many readers seem confused by the fact that my account of ethics isn’t overtly prescriptive.
I also disagree with the distinction Ryan draws between “descriptive” and “prescriptive” enterprises. Ethics is prescriptive only because we tend to talk about it that way—and I believe this emphasis comes, in large part, from the stultifying influence of Abrahamic religion. We could just as well think about ethics descriptively. Certain experiences, relationships, social institutions, and technological developments are possible—and there are more or less direct ways to arrive at them. Again, we have a navigation problem. To say we “should” follow some of these paths and avoid others is just a way of saying that some lead to happiness and others to misery. “You shouldn’t lie” (prescriptive) is synonymous with “Lying needlessly complicates people’s lives, destroys reputations, and undermines trust” (descriptive). “We should defend democracy from totalitarianism” (prescriptive) is another way of saying “Democracy is far more conducive to human flourishing than the alternatives are” (descriptive). In my view, moralizing notions like “should” and “ought” are just ways of indicating that certain experiences and states of being are better than others.
I don't find myself confused at all that Harris' account of ethics is first and foremost a descriptive account of how we experience various states. It makes perfect sense if one's goal is to understand morality as a science in the general sense in which he uses the term "science". Science is a descriptive endeavor, though its descriptions help clarify our sense of what the world is like and how to navigate it.
And in the same way, Harris thinks that the descriptions of science help us clarify our sense of the moral options available to us so that we can navigate them. It's completely coherent with his general worldview, which tries to reduce everything to and ground everything in the descriptive methodology of science defined more generally.
The spuriousness of our traditional categories in moral philosophy can be seen in how we teach our children to be good. Why do we want them to be good in the first place? Well, at a minimum, we’d rather they not wind up bludgeoned in a ditch. More generally, we want them to flourish—to live happy, creative, meaningful lives—and to help make the world a better place. All this entails talking about rules and heuristics (deontology), a person’s character (virtue ethics), and the good and bad consequences of certain actions (consequentialism). But it all reduces to a concern for the well-being of our children and (generally to a lesser extent) of the people with whom they will interact. I don’t believe that any sane person is concerned with abstract principles and virtues—such as justice and loyalty—independent of the ways they affect our lives.
Harris is right, I think, that traditional categories in moral philosophy suggest clear distinctions that might not exist with such stark separation as they are often presented, though I do think they have more merit than he does (you can see why below).
I'm less sure that he's right that all these factors (moral duties and heuristics, personal character, consequences for ourselves and others) reduce to a concern for well-being. Mature ethical reasoning does seem to take well-being into account in some way, but I'm not sure why the consistent inclusion of well-being suggests that we can reduce those other factors to well-being.
What's the evidence for the claim that they all reduce to a concern about consequences? The general attitude of parents cited by Harris doesn't persuade me on this point any more than Ryan Born's points persuaded him.
Ryan also seems to take for granted that the traditional categories of consequentialism, deontology, and virtue ethics are conceptually valid and worth maintaining. However, I believe that partitioning moral philosophy in this way begs the very question at issue—and this is one reason I tend not to identify myself as a “consequentialist.” Everyone knows—or thinks he knows—that consequentialism fails to capture much of what we value. This is true almost by definition, because, as Ryan observes, “serious competing theories of value and morality exist.”
But if the categorical imperative (one of Kant’s foundational contributions to deontology, or rule-based ethics) reliably made everyone miserable, no one would defend it as an ethical principle. Similarly, if virtues such as generosity, wisdom, and honesty caused nothing but pain and chaos, no sane person could consider them good. In my view, deontologists and virtue ethicists smuggle the good consequences of their ethics into the conversation from the start.
Harris is correct that both deontologists and virtue ethicists take consequences into account, though I'm not sure that it's fair to say that either group of ethicists is smuggling them in. The difference between a consequentialist and a deontologist is one of the order in which their principles are invoked.
For a consequentialist, the first principle in ethical reasoning is the consideration of consequences; intentions are considered because they often lead to consequences, and virtues are considered because our character produces consequences for ourselves and others.
For a deontologist, the order is different. The first principle in ethical reasoning is the intentional carrying out of our moral duty (the Kantian categorical imperative, for example), though we may need to cultivate various virtues in order to carry out our moral duty reliably, and the foundational moral duty may be defined in the way that it is (at least in part) because it reduces the harmful consequences of our behavior when it is carried out.
I used this example because I wanted to avoid presenting my own position (which is in the virtue ethics tradition) in a self-serving way, but we could understand virtue ethics similarly as positing that the first principle of ethical reasoning is to define and cultivate virtues, virtues being understood as habits of character which lead us to behave intentionally in ways that consistently reduce harmful consequences for others and for ourselves.
That is not my own theory of virtue ethics, but it has some important parallels with the other examples, and so I've used it here. Regardless, what differentiates the various theories of ethics from one another is the order in which the various factors in ethical reasoning are invoked.
Of course, the differing orders of application of these factors can have serious consequences for our navigation of the moral landscape, and thus for our well-being, and perhaps Harris might see them as worthwhile distinctions to make for that reason.
Note: The above is a picture I took while running alongside a river.
Surveying the Moral Landscape - Surveying the Moral Landscape Again
Friday, July 8, 2016
Fair Questions: What is the relationship between faith and confirmation bias?
Recently, I was discussing confirmation bias with a friend, who proposed an explanation for it: that it begins with accepting a proposition without evidence (which he described as faith), is then socially reinforced by a culture that encourages believing propositions without evidence, and is further enhanced by the rampant narcissism of our age.
While I share the concerns about narcissism and social trends that reinforce accepting propositions as true without any evidence, I am less certain that the causal explanation properly starts with accepting a proposition without evidence. My understanding (perhaps because of my pro-science bias) is that our cognitive biases are actually caused by our adaptive responses to evolutionary pressures.
I read a piece in Scientific American by famous skeptic Michael Shermer quite a while ago in which he offers the explanation (based on evolutionary psychology) that the mental heuristics we use to explain phenomena select for propositions to believe not based on whether those propositions are true, but rather based on what the costs are if we make an error. (For some helpful visualizations and an audio explanation of the concepts involved, you can watch Shermer's TED Talk about it.)
This is the general explanation for our cognitive biases: our brains have been shaped by millions of years of evolutionary pressure, resulting in mental heuristics for assessing relationships between events that work to help us avoid the kinds of errors that might cost us our lives, rather than to help us find the truth. Our cognitive biases are the result of efficient risk management, not the result of a choice to have faith in something.
The theory that faith is the cause of our confirmation bias problem is a result of confirmation bias on the part of those who already believe that faith is bad and subsequently interpret the evidence of human cognitive errors in light of that belief rather than assessing the evidence that suggests it is caused by scientifically understandable evolutionary processes.
That said, there is a relationship between faith and confirmation bias that should be mentioned, especially because many people in the post-industrial West seem quite prone to it. This relationship has to do with our mental habits. We can develop a habit of believing in things without going through a critical thinking process or we can develop a habit of believing in things after we have gone through a critical thinking process to help filter out untrue conclusions.
As Shermer notes, belief is our default modus operandi. It is our natural predisposition to believe things not based on whether those things can pass the test of critical thinking, but rather on the different costs of committing various errors if we don't believe them. This is why it's important, if we want to work towards true conclusions, to have a critical thinking process to mitigate our tendency to engage in these cognitive errors which are as natural to us as breathing is.
The most pressing danger isn't the cognitive biases that have so far kept us alive long enough to worry about questions of truth, but rather the mental habits that render us unable to even approach questions of truth. If a person makes an assumption that the universe is intelligible, or makes the assumption that the universe is largely incomprehensible, then these assumptions by themselves as single instances will not make a person unable to reason well or evaluate evidence effectively.
We all have operationalized philosophical assumptions. Maybe your assumption is that the scientific method leads to accurate conclusions about how the world really is. There are, admittedly, many problems with this assumption (and it's one I used to believe without thinking critically about it), but making assumptions is necessary to get through life. It's not necessarily a problem to make an assumption.
Any mathematician or logician can tell you that in order to even perform a single mathematical or logical operation, various assumptions are required. Axioms have to be selected to even begin a reasoning process; there is no reasoning process without a set of assumptions. But if assumptions that we accept as first principles (take on faith) are not the problem, then what is the problem?
What will render a person unable to approach questions of truth is a mental habit of accepting claims as true without a critical thinking process, even in circumstances in which such a critical thinking process is quite possible. It may not be possible in a life-or-death situation that requires a rapid response, but it is often quite possible in the post-industrial West where people are not living under the kinds of harsh survival pressures our ancestors lived under nearly constantly.
And I do think that we have a moral obligation to engage in the kinds of critical thinking processes that can help us mitigate our cognitive biases in situations in which that is possible. Whether we are methodists or particularists with regard to the Problem of the Criterion, we have an obligation to critically think about what logical consequences flow from the assumptions we make in our response to that problem.
So while I agree with Michael Shermer that we "are natural-born supernaturalists" as human beings, I do not think that the search for truth is fruitless. It's just very difficult, and we should expect to combat our cognitive biases every day in order to have a fruitful search for truth.
Faith and Confirmation Bias - Faith and Evidence - Faith and Reason
Sunday, July 3, 2016
Orthodoxy: The Maniac
This past year, I read Orthodoxy by G.K. Chesterton after many years of my friends recommending it to me. My reading list is rather lengthy, both in terms of the actual word count of the books and the number of books on my list to read at some point. Fortunately, Orthodoxy is a fairly short book of only 154 pages in the edition I purchased, and the font size is not tiny as it would be if the publisher were trying to cram more words into fewer pages.
As I mentioned in my rather lengthy review of Waking Up by Sam Harris, I was struck early on by how similar my journey is to his journey in some ways and by how starkly different our journeys are in other ways. The immediately apparent similarity was that Chesterton's journey to Christian orthodoxy had begun as an earnest journey away from Christianity to find newer and bolder truths than the tired pablum of his ancestors.
In the Introduction to the book, he describes this process as being like a man who sailed away from England seeking other shores and through a quirk of navigational error somehow found himself back in England, thinking at first that he had discovered a new land and subsequently realizing that he had returned to where he started, to that place he was seeking to escape from the banality of his homeland.
I happily echo his thoughts here, for my own journey in my early twenties is one of venturing away from Catholicism to try to find an enlightened belief system that provided a higher truth, a greater truthiness, if you will pardon my borrowing a Stephen Colbert expression. While I was getting my first university degree, I started taking a serious look at atheism as an alternative to my current beliefs that was also conveniently compatible with my views on politics and my trust in science.
I eventually discarded atheism as a live option, but as one can see from how much I have written on the subject, it was indeed a live option for me at one point and I still wish to foster respectful dialogue between atheists and theists. The other live option for me was Buddhism, and I struggled mightily to find a neutral standard that would allow me to choose between Buddhism and Christianity in a way that wasn't self-serving. This was important to me because I know that given the chance, we humans are prone to choose the path that affords us the least resistance rather than the path of truth.
We all teeter on the edge of believing in our own competence to discern the truth so much that we make really stupid choices because our competence is so much smaller than we believe it to be. Truth ought to help us offset this unfortunate consequence of the Dunning-Kruger Effect, but it is precisely this cognitive bias which leads us to believe that it doesn't affect us very much because we are so competent that it couldn't possibly overcome our reason.
In the chapter entitled "The Maniac" which follows the Introduction, Chesterton explains in his usual vibrant writing style what happens when we give in to the Dunning-Kruger Effect:
The idea that believing in one's self leads to success is quite the superstition indeed, requiring us as it does to conclude on the basis of insufficient evidence that things will work out in some unknown fashion that can only be called supernatural because we know from painful experience that success is not the natural consequence of complete self-confidence. Our experience tells us that complete self-confidence often precedes egregious failure, and that complete self-confidence generally blinds us to the inevitability of failure which we can only see in hindsight.
But believing in one's self is not the only way to be a maniac, as Chesterton observes later in the chapter:
This description and the examples that follow very much speak to my own experience. When I was younger, my clinical depression was caused in large part by my analytical bent. I very much wanted (and so did many of my peers) to be able to figure the world out and stow it in the simplistic logical categories and framework I had formed at the time through my philosophical training.
Unfortunately, the world is not even close to being simple enough for me or anyone else to comprehend it with such paltry mental tools as I was using. The inevitable disappointment that occurs when reality is far too complex for our little minds to hold (even though we've been taught from many quarters that such understanding is readily available) is something I try to help others of my generation with when it hits them.
As is often the case, Chesterton expresses it quite pithily:
As a poet myself, I know all too well how valuable poetry is for my sanity. It is the beautiful outlet for all the world's beauty which would otherwise split my head open by the force of its immensity, a healthy release valve for the daunting infinity of reality which I am all too happy to explore.
Given the chance and no healthy outlet, we will retreat from the beautiful infinite reality rather than exploring it; we will find a small piece of reality and cling to it as if it were the only real thing. It is easier to believe that we have found the only thing that is real than it is to believe that this vast infinite something is ultimately incomprehensible to us, and so we believe with all our might in the transcendent value of the one thing we believe to be real and significant in life.
We will value it above all else, and we follow the valuation to its logical conclusion. Those who value power and strength will follow it to authoritarianism and fascism. Those who value liberty will follow that value to the point of doing horrible things to themselves simply because they are at liberty to do so. Those who value the rule of law will uphold even the most unjust law rather than allowing it to be broken or having mercy on the one who broke it.
It matters not to the maniac that he is objectively wrong; he has created a subjectively coherent reality inside his head which he cannot deny and makes perfect logical sense within its own confines and given its own axioms. And because this subjective reality cannot be denied, he must act on it even when it comes into conflict with the needs of others or the evidence of reality's lack of adherence to the boundaries which seem so clear and reliable in his mind.
This is the problem with much of modern philosophy: it's so often maniacal. It's not that the maniac who grounds the entirety of his worldview on liberty, or power, or law is wrong about that one thing being quite valuable. Those things are indeed valuable. But even the casual observer will be able to immediately discern that where the maniac went wrong is in reducing all of reality to one valuable thing rather than accepting that many parts of reality might be equally or unequally valuable.
He went wrong not by proposing a transcendent value, but rather by excluding all other values in a simplistic logical fashion. He went wrong not by trying to explain everything, but rather by making everything so small that it could be easily explained.
The problem with the maniac isn't so much that he makes everything up; his problem is that he makes everything less than he knows it to be.
Note: The above is an image I captured of the cover of my copy of the book being reviewed here.
As I mentioned in my rather lengthy review of Waking Up by Sam Harris, I was struck early on by how similar my journey is to his journey in some ways and by how starkly different our journeys are in other ways. The immediately apparent similarity was that Chesterton's journey to Christian orthodoxy had begun as an earnest journey away from Christianity to find newer and bolder truths than the tired pablum of his ancestors.
In the Introduction to the book, he describes this process as being like a man who sailed away from England seeking other shores and through a quirk of navigational error somehow found himself back in England, thinking at first that he had discovered a new land and subsequently realizing that he had returned to where he started, to that place he was seeking to escape from the banality of his homeland.
"For if this book is a joke it is a joke against me. I am the man who with the utmost daring discovered what had been discovered before. If there is an element of farce in what follows, the farce is at my own expense; for this book explains how I fancied I was the first to set foot in Brighton and then found I was the last. It recounts my elephantine adventures in pursuit of the obvious. No one can think my case more ludicrous than I think it myself; no reader here can accuse me of trying to make a fool of him: I am the fool of this story, and no rebel shall hurl me from my throne. I freely confess all the idiotic ambitions of the end of the nineteenth century. I did, like all other solemn little boys, try to be in advance of the age. Like them I tried to be some ten minutes in advance of the truth. And I found that I was eighteen hundred years behind it. I did strain my voice with a painfully juvenile exaggeration in uttering my truths. And I was punished in the fittest and funniest way, for I have kept my truths: but I have discovered, not that they were not truths, but simply that they were not mine."
I happily echo his thoughts here, for my own journey in my early twenties was one of venturing away from Catholicism to try to find an enlightened belief system that provided a higher truth, a greater truthiness, if you will pardon my borrowing a Stephen Colbert expression. While I was getting my first university degree, I started taking a serious look at atheism as an alternative to my beliefs at the time, one that was also conveniently compatible with my views on politics and my trust in science.
I eventually discarded atheism as a live option, but as one can see from how much I have written on the subject, it was indeed a live option for me at one point and I still wish to foster respectful dialogue between atheists and theists. The other live option for me was Buddhism, and I struggled mightily to find a neutral standard that would allow me to choose between Buddhism and Christianity in a way that wasn't self-serving. This was important to me because I know that given the chance, we humans are prone to choose the path that affords us the least resistance rather than the path of truth.
We all teeter on the edge of believing so much in our own competence to discern the truth that we make really stupid choices, because our competence is so much smaller than we believe it to be. Truth ought to help us offset this unfortunate consequence of the Dunning-Kruger Effect, but it is precisely this cognitive bias which leads us to believe that it doesn't affect us very much, because we imagine ourselves so competent that it couldn't possibly overcome our reason.
In the chapter entitled "The Maniac" which follows the Introduction, Chesterton explains in his usual vibrant writing style what happens when we give in to the Dunning-Kruger Effect:
'Once I remember walking with a prosperous publisher, who made a remark which I had often heard before; it is, indeed, almost a motto of the modern world. Yet I had heard it once too often, and I saw suddenly that there was nothing in it. The publisher said of somebody, "That man will get on; he believes in himself." And I remember that as I lifted my head to listen, my eye caught an omnibus on which was written, "Hanwell." I said to him, "Shall I tell you where the men are who believe most in themselves? For I can tell you. I know of men who believe in themselves more colossally than Napoleon or Caesar. I know where flames the fixed star of certainty and success. I can guide you to the thrones of Supermen. The men who really believe in themselves are all in lunatic asylums." He said mildly that there are a good many men who really believe in themselves and who were not in lunatic asylums. "Yes there are," I retorted, "and you of all men ought to know them. That drunken poet from whom you would not take a dreary tragedy, he believed in himself. That elderly minister with an epic from whom you were hiding in a back room, he believed in himself. If you consulted your business experience instead of your ugly individualistic philosophy, you would know that believing in himself is one of the commonest signs of a rotter. Actors who can't act believe in themselves; and debtors who won't pay. It would be much truer to say that a man will certainly fail, because he believes in himself. Complete self-confidence is not merely a sin; complete self-confidence is a weakness. Believing in one's self is a hysterical and superstitious belief..."'
The idea that believing in one's self leads to success is quite the superstition indeed. It requires us to conclude, on the basis of insufficient evidence, that things will work out in some unknown fashion that can only be called supernatural, because we know from painful experience that success is not the natural consequence of complete self-confidence. Our experience tells us that complete self-confidence often precedes egregious failure, and that it generally blinds us to the inevitability of that failure, which we can only see in hindsight.
But believing in one's self is not the only way to be a maniac, as Chesterton observes later in the chapter:
"There is a notion adrift everywhere that imagination, especially mystical imagination, is dangerous to a men's mental balance. Poets are commonly spoken of as psychologically unreliable; and generally there is a vague association between wreathing laurels in your hair and sticking straws in it. Facts and history utterly contradict this view. Most of the very great poets have been not only sane, but extremely business-like; and if Shakespeare ever really held horses, it was because he was much the safest man to hold them. Imagination does not breed insanity. Exactly what does breed insanity is reason. Poets do not go mad; but chess-players do. Mathematicians go mad, and cashiers; but creative artists very seldom. I am not, as will be seen, in any sense attacking logic: I only say that this danger does lie in logic, not in imagination."
This description and the examples that follow very much speak to my own experience. When I was younger, my clinical depression was caused in large part by my analytical bent. I very much wanted (and so did many of my peers) to be able to figure the world out and stow it in the simplistic logical categories and framework I had formed at the time through my philosophical training.
Unfortunately, the world is not even close to being simple enough for me or anyone else to comprehend it with such paltry mental tools as I was using. The inevitable disappointment that occurs when reality is far too complex for our little minds to hold (even though we've been taught from many quarters that such understanding is readily available) is something I try to help others of my generation with when it hits them.
As is often the case, Chesterton expresses it quite pithily:
"The general fact is simple. Poetry is sane because it floats easily in an infinite sea; reason seeks to cross the infinite sea, and so make it finite. The result is mental exhaustion, like the physical exhaustion of Mr. Holbein. To accept everything is an exercise, to understand everything a strain. The poet only desires exaltation and expansion, a world to stretch himself in. The poet only asks to get his head into the heavens. It is the logician who seeks to get the heavens into his head. And it is his head that splits."
As a poet myself, I know all too well how valuable poetry is for my sanity. It is the beautiful outlet for all the world's beauty which would otherwise split my head open by the force of its immensity, a healthy release valve for the daunting infinity of reality which I am all too happy to explore.
Given the chance and no healthy outlet, we will retreat from the beautiful infinite reality rather than exploring it; we will find a small piece of reality and cling to it as if it were the only real thing. It is easier to believe that we have found the only thing that is real than it is to believe that this vast infinite something is ultimately incomprehensible to us, and so we believe with all our might in the transcendent value of the one thing we believe to be real and significant in life.
We will value it above all else, and we will follow the valuation to its logical conclusion. Those who value power and strength will follow it to authoritarianism and fascism. Those who value liberty will follow that value to the point of doing horrible things to themselves simply because they are at liberty to do so. Those who value the rule of law will uphold even the most unjust law rather than allowing it to be broken or having mercy on the one who broke it.
"A man cannot think himself out of mental evil; for it is actually the organ of thought that has become diseased, ungovernable, and, as it were, independent. He can only be saved by will or faith. The moment his mere reason moves, it moves in the old circular rut; he will go round and round his logical circle...
Such is the madman of experience; he is commonly a reasoner, frequently a successful reasoner. Doubtless he could be vanquished in mere reason, and the case against him put logically. But it can be put much more precisely in general and even aesthetic terms. He is the clean and well-lit prison of one idea: he is sharpened to one painful point."
It matters not to the maniac that he is objectively wrong; he has created a subjectively coherent reality inside his head which he cannot deny and which makes perfect logical sense within its own confines and given its own axioms. And because this subjective reality cannot be denied, he must act on it even when it comes into conflict with the needs of others or with the evidence of reality's lack of adherence to the boundaries which seem so clear and reliable in his mind.
"They are universal only in the sense that they take one thin explanation and carry it very far. But a pattern can stretch forever and still be a small pattern. They see a chessboard white on black, and if the universe is paved with it, it is still white on black. Like the lunatic, they cannot alter their standpoint; they cannot make a mental effort and suddenly see it black on white.
Take first the more obvious case of materialism. As an explanation of the world, materialism has a sort of insane simplicity. It has just the quality of the madman's argument; we have at once the sense of it covering everything and the sense of it leaving everything out. Contemplate some able and sincere materialist, as, for instance, Mr. McCabe, and you will have exactly this unique sensation. He understands everything, and everything does not seem worth understanding. His cosmos may be complete in every rivet and cog-wheel, but still his cosmos is smaller than our world."
This is the problem with much of modern philosophy: it's so often maniacal. It's not that the maniac who grounds the entirety of his worldview on liberty, or power, or law is wrong about that one thing being quite valuable. Those things are indeed valuable. But even the casual observer will be able to immediately discern that where the maniac went wrong is in reducing all of reality to one valuable thing rather than accepting that many parts of reality might be equally or unequally valuable.
He went wrong not by proposing a transcendent value, but rather by excluding all other values in a simplistic logical fashion. He went wrong not by trying to explain everything, but rather by making everything so small that it could be easily explained.
The problem with the maniac isn't so much that he makes everything up; his problem is that he makes everything less than he knows it to be.
I did try to found a heresy of my own; and when I had put the last touches to it, I discovered that it was orthodoxy. -- G.K. Chesterton
Wednesday, April 13, 2016
Fair Questions: Are atheists immune from superstition?
I've known a fair number of atheists and spent a lot of time in dialogue with atheists, and many of them take a very dim view of superstition just as I do. They often see the omens, astrology, routine prophecies of the end times, the wearing of charms, and so on as a misunderstanding of causality rooted in magical thinking and agency over-detection, just as I do.
Given that most of them are scientific realists, their general rejection of the supernatural (not just the supernatural causality of superstition) seems to provide them with a means of resisting the tug of magical thinking and agency over-detection more often than others on average. At the very least, they are more likely to explicitly reject certain kinds of magical thinking and agency over-detection, especially those associated with religion, pseudo-science, and various "paranormal" claims.
Though I'm deeply religious myself, I certainly see value in cultivating a resistance to superstition, including superstitious practices which inevitably tend to accrete around religions. Fortunately, having two parents educated in the hard sciences helped to form me in such a way that I have a well-internalized resistance to superstition of my own.
That said, I can certainly fall prey to magical thinking and agency over-detection at times. I notice it occasionally, and there are probably times when I fail to notice it. That's largely because these are normal human cognitive traits, thought patterns into which we fall quite easily because they were useful for our survival in times of harsh survival pressure, under which quick and dirty heuristics serve better than slower, more accurate methodologies for arriving at correct conclusions.
Despite my intuitions, there is very good evidence that atheists aren't really any less prone to utilizing those quick and dirty heuristics (magical thinking and agency over-detection) than anyone else when the chips are down. Understandably, they are likely to resort to those heuristics under pressure just as anyone else is, their resistance to superstition decreasing the more dangerous it gets in the foxhole.
It makes sense that our risk-management instincts kick in under pressure, so I certainly don't think anyone needs to feel shame for that. Given the way we evolved and the instincts (quick and dirty as they are) propelling us to make decisions under pressure, we should expect ourselves to fall prey to magical thinking and agency over-detection because that's just what our brains do as naturally as they do anything else.
But what it means for us is that we all have to be constantly vigilant against our own cognitive errors and not rely on the luxury of explicit reasoning which might give us the impression that we are immune from the vagaries of our highly evolved brains, brains which are evolved primarily for survival and perhaps have philosophical musing as a latent function.
Saturday, October 3, 2015
Waking Up: Sex, Drugs, and Near Death Experiences
In the fifth chapter of Waking Up, Sam Harris helpfully points out some potential pitfalls of contemporary spiritual seeking and practice. He leaves us with some warnings about the alternatives after discussing the benefits of meditation in the fourth chapter. This, I think, is necessary. I know from my own spiritual exploration that there is a vast gulf between healthy spirituality and popular spirituality, and I believe that we have a duty to help others avoid the unhealthy spiritual paths, many of which are spiritual in name only.
Speaking of names, people often follow someone with a special name, whether because it is well known or obscure. People often want to find what they may call a guru, an enlightened teacher of spiritual truths that are for whatever reason otherwise inaccessible to us. And as Sam Harris points out, it can be difficult to find a good guru when we have no easy standards to apply for discerning a good one from a bad one. He offers the following example from The End of Faith as an illustration:
"I know a group of veteran spiritual seekers who, after searching for a teacher among the caves and dells of the Himalayas for several months, finally discovered a Hindu yogi who seemed qualified to lead them into the ethers. He was as thin as Jesus, as limber as an orangutan, and wore his hair matted, down to his knees. They promptly brought this prodigy to America to instruct them in the ways of spiritual devotion. After a suitable period of acculturation, our ascetic--who was, incidentally, also admired for his physical beauty and for the manner in which he played his drum--decided that sex with the prettiest of his patrons' wives would suit his pedagogical purposes admirably. These relations were commenced at once, and endured for some time by a man whose devotion to wife and guru, it must be said, were now being sorely tested. His wife, if I am not mistaken, was an enthusiastic participant in this 'tantric' exercise, for her guru was both 'fully enlightened' and as dashing a swain as Lord Krishna. Gradually, this saintly man further refined his spiritual requirements, as well as his appetites. The day soon dawned when he would eat nothing for breakfast but a pint of Haagen-Dazs vanilla ice cream topped with cashews. We might well recognize that the meditations of a cuckold, wandering the frozen-food aisles of a supermarket in search of an enlightened man's enlightened repast, were anything but devotional. This guru was soon sent back to India with his drum."
It seems very unlikely that this guru was engaged in the spiritual practice of Killing the Self as a true ascetic would be. It seems more likely that he was killing his craving for ice cream and for sexual release by abusing the trust placed in him by his erstwhile students.
This is not an isolated incident, unfortunately. There are plenty of cases of spiritual teachers abusing their authority and using devotees for sexual gratification (just as happens in secular organizations with charismatic figures), and this happens in both the major religions and in minor cults that fade away quickly after the charismatic leader dies or is revealed as a charlatan. Abuse of authority is inevitable in a world filled with morally imperfect beings. It's just that the way we often regard our spiritual leaders is dangerous, and that the way spiritual leaders often come to regard themselves is also dangerous.
Even in religions like Christianity, which claims that we are all sinful and prone to moral failure, and Islam, which is explicitly suspicious of clericalism, there is a strong tendency toward the idolization of clerics. Harris helps us to understand why that is the case:
"A relationship with a guru, or indeed any expert, tends to run along authoritarian lines. You don't know what you need to know, and the expert presumably does; that's why you are sitting in front of him in the first place. The implied hierarchy is unavoidable."
While we may not be able to avoid a functional hierarchy, explicit or implicit, what we can do is try to mitigate our tendency to idolize experts, or worse, to idolize non-experts because we cannot discern whether a person is or is not an expert. We can keep in mind that every person has moral failures, even the experts, and that we need to be very careful not to idolize another person no matter their status.
This is of course not the only pitfall for spiritual seekers. Drugs have long been utilized by those seeking an experience of the spiritual type that would provide them with a release from the cares of this world, sometimes in the context of religious ceremonies. Typically, the drugs used for this purpose are significantly stronger than alcohol or tobacco with regard to their effects on our perceptions, able to profoundly alter our conscious experience.
And Sam Harris is no stranger to using drugs, albeit in a more controlled, responsible way than most who use them.
"...if they don't try a psychedelic like psilocybin or LSD at least once in their adult lives, I will wonder whether they had missed one of the most important rites of passage a human being can experience.
This is not to say that everyone should take psychedelics. As I will make clear below, these drugs pose certain dangers. Undoubtedly, some people cannot afford to give the anchor of sanity even the slightest tug. It has been many years since I took psychedelics myself, and my abstinence is born of a healthy respect for the risks involved. However, there was a period in my early twenties when I found psilocybin and LSD to be indispensable tools, and some of the most important hours of my life were spent under their influence. Without them, I might never have discovered that there was an inner landscape of mind worth exploring."
He goes on to add that taking these drugs is always a roll of the dice, that it is not to be undertaken lightly. And this is from a guy who, like me, opposes the War on Drugs and sees it as a failure.
"There is no getting around the role of luck here. If you are lucky, and you take the right drug, you will know what it is to be enlightened (or to be close enough to persuade you that enlightenment is possible). If you are unlucky, you will know what it is to be clinically insane. While I do not recommend the latter experience, it does increase one's respect for the tenuous condition of sanity, as well as one's compassion for people who suffer from mental illness."
This is something Harris knows from experience; he details a very bad trip on LSD elsewhere in this chapter. And he recognizes that the use of psychedelics is not a matter of them always producing wonderful effects, particularly on a large scale.
"However, we should not be too quick to feel nostalgia for the counterculture of the 1960s. Yes, crucial breakthroughs were made, socially and psychologically, and drugs were central to the process, but one need only read accounts of the time, such as Joan Didion's Slouching Toward Bethlehem, to see the problem with a society bent on rapture at any cost. For every insight of lasting value produced by drugs, there was an army of zombies with flowers in their hair shuffling toward failure and regret. Turning on, tuning in, and dropping out is wise, or even benign, only if you can then drop into a mode of life that makes ethical and material sense..."
In the end, he recommends that these drugs not be used regularly when there are more safe, reliable methods available. He recommends meditation, of course.
"I believe that psychedelics may be indispensable for some people--especially those who, like me, initially need convincing that profound changes in consciousness are possible. After that, it seems wise to find ways of practicing that do not present the same risks. Happily, such methods are widely available."
Of course, drugs and meditation are not the only ways in which we can experience profound changes in our consciousness. Harris devotes some time to dealing with the specter of what are generally called Near Death Experiences and the conclusions many people draw from them.
He focuses specifically on a couple of popular cases in recent years that have captured the imaginations of a lot of people in the U.S. One is the experience of a very young child of a pastor detailed in a book entitled Heaven is For Real (which I've read myself and found interesting but not convincing). The other, which he spends more time addressing systematically, is Proof of Heaven: A Neurosurgeon's Journey into the Afterlife, written by Eben Alexander.
Having read the excerpts, I am even less convinced by Alexander's account. Harris seems beyond the point of being unconvinced. As a neuroscientist, Harris minces no words in dealing with Alexander's account of the profound near death experience he encountered that fateful day.
"The proof he offers is either fallacious (CT scans do not measure brain activity) or irrelevant (it does not matter, even slightly, that his form of meningitis was 'astronomically rare')--and no combination of fallacy and irrelevancy adds up to sound science."
Like Harris, I haven't found any accounts of near death experiences that lead me to believe in a particular religious tradition, though there are certainly accounts that happen to cohere very nicely with my religious beliefs. This is primarily because a few anecdotal experiences of people on the edge of death are not sufficient grounds upon which to build an eschatology, at least in my not very humble opinion.
Unfortunately, this is a spiritual pitfall many people can fall into. Not that they will have a profound experience like Eben Alexander, but that they may easily believe that a certain religious belief is true on the shaky grounds that someone had an experience that seemed like heaven to them.
We humans tend to believe in the power of our visions to such an extent that it obscures our vision of what's important and how we discover what's true. That is perhaps the most important lesson of this chapter, whether it's sex, drugs, or death that's obscuring our vision.
Note: Photo credit goes to me.
Saturday, August 29, 2015
The Benefit of Doubt: The Limits of Science
Several weeks ago, I explained two differing positions among scientists and philosophers of science about what science is, and like most discussions about what science is, the answer came down to the answer to the demarcation problem. How do we tell the difference between scientific and non-scientific thought? Or in other terms, what are the limits of science, the boundaries at which science ends and something else begins?
As a disclaimer, I am not one of those people who seeks to undermine science; I am not a Young Earth Creationist, nor do I deny anthropogenic influences on climate change and the urgency of changing human behavior with regard to our planet. To the contrary, I seek to preserve that which is best about science; I am strongly opposed to the movement to revise scientific theories to suit the members of a religious group or subject science to mere political concerns. I am also strongly opposed to the current project to turn science into a religion, not because I am concerned that it might damage religion, but because it will most definitely cause grave injury to science.
Like many philosophers of science, I value a clear understanding of what makes science distinct from other methodologies and how science relates to them. In order to understand the limits of a methodology or belief system, we need to examine the assumptions underlying it. Studying the assumptions of any methodology or belief system (science being both) allows us to derive its limitations. If, for example, I want to understand the limits of arithmetic in Base 10, then I merely need to examine the assumptions (or axioms) underlying its operations in order to work out what it can and cannot do.
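The arithmetic analogy can be made concrete with a small sketch (purely illustrative, using the natural numbers rather than Base 10 specifically): from the single assumption that only natural numbers exist, it follows that subtraction is not always defined, a limit derived entirely from the system's starting assumptions rather than from experiment.

```python
# Purely illustrative: deriving a limit of a system from its assumptions.
# Assumption: we work only with the natural numbers (0, 1, 2, ...).
# Derived limit: subtraction is not closed -- some questions have no answer.

def natural_subtract(a: int, b: int):
    """Subtract b from a, staying within the natural numbers."""
    result = a - b
    return result if result >= 0 else None  # no natural number equals 3 - 5

print(natural_subtract(5, 3))  # 2: within the system's limits
print(natural_subtract(3, 5))  # None: beyond what the system can express
```

No amount of testing inside the system is needed to find this limit; it falls out of the axioms themselves, which is the sense in which examining assumptions reveals what a methodology can and cannot do.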
So what are some of the assumptions of the scientific method?
- The world is intelligible to the human mind via repeated observation.
- The world operates exclusively on general principles which can be tested ceteris paribus.
- The principles on which the world operates can be expressed mathematically.
- The principles on which the world operates are exclusively natural.
- The observations of our senses can be supplemented by instruments we construct.
- The instruments we construct are more reliable than our senses.
- The power of logical inference allows us to arrive at correct conclusions about the past based on present circumstances and about the far reaches of the universe based on local conditions.
So given these useful assumptions, what can we now work out about the limits of science?
1. Because the assumption of scientific methodology is that the world is intelligible to the human mind, the truth value of the claims it makes is conditional upon that assumption being correct. If it is not the case that the world is intelligible to the human mind, then all of scientific belief is false, no matter how useful we might find it. And there is simply no way for us to test this without begging the question at issue. This limitation of science is primarily a consideration of philosophers and will not likely impact scientific work at any point.
2. The world may or may not operate on general principles exclusively. But because anomalies and data points far off the beaten track are treated as irrelevant to scientific theory formation in terms of the articulation of the theory, there is no way for science as a body of propositions to incorporate anything other than general principles into itself. Unlike the first point, this has practical implications for science; anything which does not operate by general principles, which is truly exceptional, cannot be incorporated into the body of scientific knowledge. And science has no way of knowing how many such truly exceptional events there are.
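This exclusion of the exceptional is something routine data analysis does quite literally: anomalous points are screened out before the "general principle" is fitted. A minimal sketch of that practice, with invented data and an arbitrary threshold (a median-based robust fit stands in here for whatever procedure a real analysis would use):

```python
import statistics
from itertools import combinations

# Hypothetical measurements: a clean linear trend plus one "exceptional" point.
data = [(0, 0.1), (1, 1.0), (2, 2.1), (3, 3.1), (4, 4.0), (5, 50.0)]

def robust_fit(points):
    """Theil-Sen estimator: the median of pairwise slopes, so a single
    anomaly cannot drag the fitted 'general principle' around."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2) if x1 != x2]
    m = statistics.median(slopes)
    c = statistics.median(y - m * x for x, y in points)
    return m, c

m, c = robust_fit(data)
residuals = [abs(y - (m * x + c)) for x, y in data]
cutoff = 3 * statistics.median(residuals)  # arbitrary illustrative threshold
kept = [p for p, r in zip(data, residuals) if r <= cutoff]
# The point (5, 50.0) never enters the refitted theory, however real it was.
m2, c2 = robust_fit(kept)
```

Whatever happened at that sixth data point, whether instrument glitch or genuine exception, the resulting model is structurally incapable of representing it; it records only the general trend.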
3. I happen to agree that the principles on which the world operates can be expressed mathematically; I could always be wrong. It could well be that the forms of mathematics which our tiny hominid brains have been using are not sufficient to complete the task of expressing those general principles. And how would we ever identify that such is the case? If it were the case, wouldn't we simply assume that our failure to express the general principles upon which the world operates is due to a lack of evidence or a problem with our testing procedures, and that we merely need to keep trying harder?
4. Science could not function effectively as a methodology for studying natural phenomena if all scientists went around assuming that everything (or at least the more difficult to understand things) were the result of supernatural principles or some sort of divine exception to the natural order. Scientists need to assume, while doing the work of science, that the principles on which the world operates are exclusively natural. So what if it is the case that there are supernatural principles at work, and they impinge upon the natural order? Science would never notice supernatural principles; if they exist, then scientists would simply assume that they do not and continue searching for a merely natural explanation for the evidence or ignore it as an outlier among their data points.
5. I very much agree that the observation of our senses can be supplemented by instruments we construct; as always, I could be wrong about that, whether in general or in specific cases. A telescope might or might not be providing me with something that supplements my senses. And there's no way to know for sure, given that all I have to base my conclusions on is the information provided by my senses. Which doesn't seem like a serious issue until we consider the next point...
6. Why do we think that our instruments are more reliable than our own senses? Well, there's a great deal of scientific evidence suggesting that our senses are far from perfectly reliable, and that we only perceive a small portion of what there is to perceive in many areas. Granted, the evidence rests on assumption #1 and may not be true, but if it is, then we cannot fully trust our sensory inputs to provide us with accurate information. It is entirely understandable to trust simpler mechanisms not prone to the same kinds of errors as our complex perceptual mechanisms which have many more points of failure. But because we have to rely on our senses to determine that they are more reliable, and we are strongly subject to confirmation bias, it's entirely possible that we are prejudiced in our conclusions on that point and we could well be wrong.
7. It might or might not be true that the general principles which we use to understand local phenomena in the universe apply in all parts of the universe, but this can at least be tested in principle as we explore more of the universe. What is not testable in principle is the application of general principles we discover now to the events of the distant past. Unless we develop a time machine which can take us into the past (and which we can be sure isn't contaminating the evidence), there is no way to verify that our logical inferences about the distant past hold true.
In understanding these limits of science, we can better perform scientific inquiry because we are less likely to leap to conclusions about the results of any study or experiment and less likely to overstate the implications of the results beyond what the evidence actually supports. In understanding the limits of science, we keep the doubt in science. A benefit of doubt is that it keeps our minds open to new information, something which is critical for successful scientific inquiry.
A scientific community which does not doubt itself rigorously is a scientific community without the full benefits of doubt, and those of us who are interested in scientific research should be reluctant to give such a community the full benefit of the doubt.
The Gospel of Science - The Question of Science - The Limits of Science
Note: The above is a picture of the top of one of my science fair trophies.