Peter Boghossian is an atheist philosopher and professor at Portland State University. In this book, he intends to equip an army of “street epistemologists” with the tools to use everyday life interactions as interventions to show people that faith is a bad epistemology and to disabuse them of it. His target is not religion, but faith itself as a way of knowing truth. You can read more about the book on its Amazon page (where a slightly condensed version of this review was originally posted) or about street epistemology more broadly on the SE website.
This is a well-written, provocative book with a lot to recommend it. Since I care about spreading true beliefs in the world more than just about anything, I agree with probably 90-95% of it. I’m in a similar position to when I read John Loftus’ book Outsider Test for Faith, which I also enjoyed and really related to. Another way to look at it is that we are now discussing the meta problem of epistemology in the Christianity vs. atheism debate, rather than the details of arguments for God, the historicity of the Bible, etc. That meta problem is probably my favorite subject!
It makes sense to start with the things I liked about the book…
The Triumph of Reason
On my own podcast, I have an episode entitled Logic Always Wins, and I think Peter Boghossian would approve. You can’t escape reason. It is, indeed, the freight train coming your way. As much as you try to avoid it, reality is unfortunately based on, well, reality, and you are doomed and destined to rub up against it. Everything in this book that celebrates reason and damns “pretending to know things you don’t know” should be applauded. And I do applaud it. We are on the same team, Pete!
One of the things I love about the whole Street Epistemology crew is their focus on sympathetically listening to your interlocutor, not strawmanning their positions, and remaining open to new ideas yourself (doxastic openness)1. That’s something I strive for as well, and I really appreciate it here. Unfortunately, I would argue that close to 90% of participants on both sides of the debate (atheist and Christian) could not be described this way; they are often instead vitriolic and unsympathetic. While Peter might sometimes come across as a bit patronizing to Christians in this book, he at least has sympathy. He truly wants to free people from bad ideas. That can never be a bad goal.
I’ve consumed a fair share of skeptic material, and I have probably never heard a skeptic seriously recommend reading Christian apologists, especially with doxastic openness. Peter does just that by recommending William Lane Craig and Alvin Plantinga.
In what is probably the core of the book, Boghossian explains the Socratic method and how to use it. The Socratic method is such a stark contrast to the adversarial way most people debate today that it’s very encouraging to see it recommended as a primary tool. It represents an admirable call to make people think more deeply instead of trying to win debate points.
Ok, I’m now going to talk about the things I didn’t like as much, but I want to stress that Peter Boghossian and I are ultimately on the same page – we both want to spread the habit of good reasoning. I also should note here that, with Peter, I mourn the fact that we don’t teach critical thinking in school, or at least not very much of it, even though it’s one of the most important tools in life. I sincerely hope that changes too, so I’ll join Peter in that fight.
Definition of “Faith”
I think it’s very helpful, ultimately, that Peter defines faith as “pretending to know things you don’t know.” It’s helpful because I completely agree that is a bad thing to do. My bigger gripe is that he quickly moves to saying that Christian faith also fits that definition.
I recently read a fascinating book called Salvation By Allegiance Alone, which forcefully argues that the Greek word pistis in the New Testament should be translated in most cases as “allegiance” or “trust” instead of the rather weak and nebulous “faith.” So if Boghossian is implying that Christianity itself teaches that we should have “faith” in the sense of pretending-to-know-things-we-don’t, then I completely reject that, and I think most theologians would as well. I think the Christian faith is much closer to “hope” as Boghossian defines it.
A more robust definition of Christian faith would be acting in trust/allegiance/hope based on something else. The “something else” in my case would be reason, evidence, and experience. At no point in my line of reasoning do I pretend to know things I don’t know. I might be wrong in my reasoning, but my desire is to act in trust and hope (i.e. have faith) based on proper reasoning. The reasoning I used to get to that point can be questioned, of course, and I welcome it. But I think I stand well within a Biblical theology when I say that “faith” is not supposed to be pretending to know things you don’t.
Perhaps Boghossian is right that the way most Christians hold their beliefs is by “pretending to know things they don’t know.” But he has a much higher hill to climb if he wants to argue that the Bible and Christian theology in general encourage that sort of thing, since prima facie they do not. And his statement on page 28 that people of faith hold their beliefs in a way that is immune to revision is also a gross generalization. Just ask people like Gary Habermas and Mike Licona who, by their own accounts, went through periods of severe doubt, quite seriously considering other religions, etc. I count myself within that group as well, and even my Christian beliefs look quite different than they did 15 or 20 years ago.
Peter Makes Bold (Naive?) Claims
Multiple times in the book Peter flat out says there is no evidence for God:
“Not a single argument for the existence of God has withstood scrutiny. Not one….The fine-tuning argument, fail. The Kalam cosmological argument, fail. All refuted. All failures.” (p. 28)
“Here’s the evidence for the existence of God: Nothing. There is no evidence for God’s existence.” (p. 132)
“Atheism is a conclusion that’s based on the best available evidence for the existence of God—which is that there is none.” (p. 164)
To me this is highly questionable coming from someone who puts doxastic openness on such a high pedestal. If the Kalam is such a failure, why is it (from what I’ve read) the most debated argument for God’s existence in the philosophical literature today2? If the fine-tuning argument is such a failure, why did leading atheist cosmologist Sean Carroll recently say it was the best argument for God’s existence (he quickly followed up by saying he doesn’t think it’s a very good one – but still)3? Why is the argument from consciousness such a failure when atheist philosopher Thomas Nagel thinks that reductive naturalism is doomed since it can’t accommodate consciousness4? Why is the moral argument such a failure when atheist philosopher Michael Ruse seems to think that reductive naturalism could never be a foundation for objective morality5, and most people (including atheists) seem to assume, when carefully questioned, that at least some morals are objective?
It’s perfectly fine for Peter to think that the evidence, by far, favors God’s non-existence. It’s also perfectly fine to say that no argument for God ultimately succeeds in his view. But it seems like quite an overreach to say there is zero evidence, or to deny that there are any even somewhat plausible arguments for God’s existence. It’s particularly odd to have him say this to a person he’s been practicing SE on (as page 132 implies), since SE is supposed to be all about doxastic openness and not making bold, shaky claims to convince the person you’re talking to.
I could accept the idea that these claims were simply rhetorical, but that is a tough pill to swallow when the whole enterprise of this book is to be clear and precise and use good reasoning, in other words to get beyond rhetoric.
I have to bring up here his one comment on page 73 regarding Gary Habermas’ work on Jesus’ resurrection. Once again, it’s one of the few times in the entire book I was a bit shocked by Boghossian’s overreach, and perhaps even naiveté or doxastic closure.
“Every religious apologist is epistemically debilitated by an extreme form of confirmation bias. Gary Habermas, for example, exemplifies this cognitive malady.” [Emphasis original]
“…when confronted by basic, rudimentary objections (people lied, someone ransacked the tomb, the witnesses were unreliable), he takes the most remote logical possibility and turns that into not just a probability but an actuality.”
If I didn’t know I was reading a leading atheist philosopher, I would simply assume that paragraph was written by someone who knew little of Habermas’ work (or Licona, Craig, N.T. Wright, etc.). The alternative possibilities are dealt with quite seriously by these apologists, but some are actually fairly easy to dismiss and are indeed dismissed by almost all atheist scholars of the Resurrection. Habermas is famous for basing his case on only the facts that 90-95% of all scholars accept. For example, skeptical scholars of the Resurrection realize that the chance that the disciples lied is so remote that they don’t even put that forward as an option, and usually go with some form of hallucination hypothesis. Gerd Lüdemann and Bart Ehrman would be two examples of this6 7. So why does Boghossian trot out “people lied” as if it’s considered a viable option for explaining the Resurrection in the literature today? It isn’t.
Perhaps I need to be more sympathetic in my reading of Boghossian in those passages. Perhaps he’s simply comparing one remote possibility (people lied) with one that is infinitely more remote in his mind (Jesus supernaturally rose). Fair enough. However, that is not what was implied in that passage. And I think he’s still somewhat at fault, considering he’s writing a book all about doxastic openness; in these cases at least he seems to exhibit a surprising lack of nuance when referring to “the other side.”
A Scalpel, But Sometimes You Need A Machete
Probably my biggest issue with the whole Street Epistemology (SE) enterprise is that it can have the side effect of being myopic or narrow in its epistemology (ironically). The method intensely focuses on as much detail and precision as possible. You’ll see SE practitioners say things like “ok, what is the best example of [reason for belief]?” This is understandable, but it can lead the patient to accidentally set up a strawman of their own beliefs, which the SE practitioner then, of course, knocks down. Take any complex, somewhat vague idea, and you can quickly make someone sound silly for believing it, even if it’s widely accepted by nearly everyone as true. The same goes for simple physics questions that people haven’t thought about deeply enough: if you prod around and get them to try to articulate their understanding, you’ll probably get them to contradict themselves or say something silly.
Of course it can be a useful exercise to put people in this position so that you force them to think through their beliefs more clearly. But I don’t think it’s necessarily the best or quickest way to the truth in the end. You might actually instill unwarranted doubt about their position (e.g. the physics example above or the spam detector bot example below), and that doesn’t seem helpful. Why not attack the true weak points rather than instill doubt generally, or where it doesn’t even need to be?
Doing this, you could even be accused of increasing the amount of confusion in their minds surrounding the topic. And clarity should always be the goal. To be clear, I’m not talking about the necessary confusion that results from realizing cognitive dissonance that was implicitly already there. I’m talking about adding unnecessary and unwarranted confusion that doesn’t seem helpful and might actually be harmful to reasonable thinking.
A lot of larger, more complex views are held for cumulative reasons and/or due to a heuristic, sometimes a smart one, sometimes a not-so-smart one. My podcast episode on heuristics is exactly about this: even a belief like “the earth is spherical” is usually, and properly, held by individual people via a heuristic, since they usually haven’t done the scientific experiments to prove it themselves.
I have a background in computer science and artificial intelligence, so let me give my prime example, based on the field of AI, to illustrate the problem I’m talking about. Let’s say you could question an email spam detector about why it chose to classify a particular email as spam. This bot was trained using a machine learning (ML) algorithm, so it’s not using an explicit set of rules to make its judgments; instead it’s using a neural net or another ML structure like that. It’s also important to remember that, in the real world, machine learning-based systems like this far outstrip simpler algorithms that use a set of explicit rules (the classic “expert systems”). So here’s the theoretical situation where an SE practitioner is questioning the bot:
SE practitioner: Why did you classify that email as spam?
Bot processes the weighted properties it uses and tries to answer by picking a salient attribute
Bot: Uh, it had a lot of all caps.
SE: So do you classify all emails that have a lot of all caps as spam?
Bot: No, of course not.
SE: Ok, so let’s go back then – that must not be your real reason. Why did you classify the email as spam?
Bot looks for other reasons within its complex, heuristic algorithm
Bot: It was from an unknown sender not in the address book.
SE: Ah so this is the real reason you think?
Bot is unable to fully articulate why it classified the email as spam, even though its model assigned it a 90% probability
Bot: Uh yes, I guess so.
SE: Can you imagine someone getting an email from an unknown address and it not being spam?
Bot: Well of course.
SE: So would you agree that you’re not using a reliable method of knowing what is spam or not?
Bot: I guess so…
Bot leaves the conversation unsure of its abilities to know what is spam or not, even though it has a 95% success rate in the real world
I hope the above illustrates the problem that can arise when we rely on the scalpel alone and get too myopic, precise, and syllogistic. It needs to be noted, of course, that in the case of a spam detector bot it actually does have good reasons for its beliefs, but in the real world people often don’t. But my point here is that the scalpel doesn’t necessarily get at things properly and can sometimes even add unnecessary confusion.
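The bot’s predicament can be made concrete in code. Below is a minimal sketch of my own (a toy illustration, not from the book; the feature names and weights are made up) showing how a learned spam filter combines many weak signals into one probability, so that no single feature is ever “the real reason” for a classification:

```python
# Toy sketch of a learned spam classifier: a logistic model over weak signals.
# All weights are invented for illustration; a real filter would learn them.
import math

# Hypothetical log-odds weights a training run might have produced.
WEIGHTS = {
    "all_caps_ratio_high": 1.2,
    "unknown_sender": 0.9,
    "contains_link": 0.6,
    "short_body": 0.3,
}
BIAS = -1.5  # prior log-odds that any given email is spam

def spam_probability(features):
    """Sum every active feature's weight; no feature alone decides the verdict."""
    score = BIAS + sum(WEIGHTS[f] for f in features)
    return 1 / (1 + math.exp(-score))  # squash log-odds into a probability

email = ["all_caps_ratio_high", "unknown_sender", "contains_link"]
print(f"P(spam) = {spam_probability(email):.2f}")

# Remove any single feature and the probability shifts gradually rather than
# flipping -- which is why the bot cannot name one feature as "the reason"
# under Socratic questioning.
for f in email:
    rest = [x for x in email if x != f]
    print(f"without {f}: P(spam) = {spam_probability(rest):.2f}")
```

The design point is that the classification lives in the aggregate of many small weights, so a question like “is that your real reason?” has no good answer even when the overall judgment is reliable.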
Let me stress again that the scalpel method is very important! It’s just not all there is, and it can easily be misused. In one of his interviews, Anthony Magnabosco (whose work I generally like a lot) equivocated on the word “feelings” when using SE on some Mormons. He asked them (essentially), “so do you think feelings are a reliable method for knowing truth?” This is fine, except that the way the Mormons had just used “feelings” implied some sort of supernatural revelation, which is different from the casual way we normally use “feelings” to mean simply emotions. So very quickly Anthony had (accidentally) strawmanned their position; or you could even see it as the Mormons accidentally strawmanning their own position.
My point is not that “feelings” are a good reason to believe something! It’s that, due to how SE works, it can lead to accidental strawmanning and equivocations that create unnecessary stumbling blocks to the truth, and I think these are avoidable if you use a less myopic approach to epistemology.
I also want to be clear that the “scalpel” of Socratic dialogue is an essential tool in our toolkit of reasoning – so don’t get me wrong. But I do think that when it is used exclusively, and in the way that it’s used in SE, it can inadvertently make people focus entirely on overly precise, syllogistic reasoning. Contrast this with the fact that the way we know true things about the world is often more heuristic and probabilistic, due to the complex nature of the world. Abductive reasoning (inference to the best explanation) is probably the primary way we properly reason about the world, and yet you’ll hardly see it on display during a street epistemology session.
Let me take a second to mention here that I feel this way even when I see a video of a street epistemologist question someone about karma or astrology or anything else I don’t believe in. This is not a smokescreen as a defense of my own Christian beliefs. I genuinely think there is a better way to go about this that engenders a better epistemology than SE normally does.
I should note that Peter briefly mentions System 1 and System 2 thinking (as described by Daniel Kahneman8) on page 97 and how System 2 can be used to analyze System 1. That’s all well and good and accurate. The problem is, System 1 is similar to an inner AI that is constantly collecting data and can/should be used properly at times as the best decision mechanism we have (the spam detector bot is an example of this). I fully agree we need System 2 to critique and adjust System 1, but we have an anemic epistemology if we throw System 1 entirely out the window. If you disagree, I invite you to listen to episode five of my epistemology series and then give me your response. 🙂
Speaking of abductive reasoning…
Ignoring Alternative Explanations
Another issue with SE is that it conveniently avoids discussing alternative explanations for the most part. It places all the burden of proof on the patient, with a heavy implication that if they can’t prove their case, then it’s wrong. But anyone with knowledge of Bayes’ Theorem knows that you also need to know how probable the alternative explanation is. What if it also doesn’t make a lot of sense, or is itself low probability? The SE practitioner can conveniently avoid that coming up most of the time, due to the way SE works. It just keeps prodding one side only for, once again, a sort of syllogistic proof, with a heavy implication that if the proof fails then the hypothesis is not just inconclusive but wrong and should immediately be abandoned.
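To see why the alternative matters, here is a quick illustration (the numbers are mine and purely illustrative): under Bayes’ Theorem, a hypothesis that predicts the evidence only weakly can still be the rational favorite if its rival predicts the evidence even more weakly.

```python
# Illustrative only: Bayes' theorem judges a hypothesis by how well it predicts
# the evidence *relative to its rivals*, not in isolation.

def posterior(prior_h, likelihood_h, prior_alt, likelihood_alt):
    """P(H | E) when H and one alternative are the only live options."""
    joint_h = prior_h * likelihood_h        # P(H) * P(E | H)
    joint_alt = prior_alt * likelihood_alt  # P(alt) * P(E | alt)
    return joint_h / (joint_h + joint_alt)

# Suppose a hypothesis explains the evidence imperfectly (likelihood 0.3),
# but the only alternative explains it even worse (likelihood 0.05).
p = posterior(prior_h=0.5, likelihood_h=0.3, prior_alt=0.5, likelihood_alt=0.05)
print(f"P(H | E) = {p:.2f}")
```

Even with an unimpressive likelihood of 0.3, the hypothesis ends up well above 50% simply because its rival does worse, which is exactly the comparison SE tends to leave off the table.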
So what’s my recommended SE alternative?
I’m not sure I have one, but I do know I want to encourage a robust and well-rounded epistemology as much as I can. When I talk about epistemology, I certainly cover syllogistic, precise reasoning, but I also spend a lot of time on other ways we organize data and make judgments, like abductive reasoning. After all, we usually use those alternative methods as we navigate the complex world we live in, since they are more suited to the task. And if they weren’t more suited, then why is the ML-based bot so much better at spam detection than the rules-based, syllogistic one?
There are certainly times when you should simply ask questions in a Socratic Method sort of way, although I think it should be less flowchart-like than SE recommends (which tends to encourage a narrow epistemology). When there is openness to some challenging questions, then I think ultimately moving into a sympathetic, mutually steel-manning and friendly debate could actually be a great way forward. It has these advantages going for it:
- Both parties are seen as equals, and implicitly both parties have a burden of explaining themselves. This contrasts with SE, where only one person is the patient and bears all the burden.
- Since you’re focusing on doxastic openness, the principle of charity, and steel-manning opposing ideas, you will hopefully find yourselves sympathetically engaging each other rather than being antagonistic. If done properly, you should end up feeling like two people sitting side-by-side trying to put the jigsaw puzzle of reality together, even if you argue some over which pieces go where. (This is similar to a feeling Boghossian expresses on page 125: “…confer upon the subject the feeling that he is not alone, that we are equals, and that we as humans are all facing the same ultimate questions.” <– I love this!)
- All relevant forms of reasoning are on the table. If I’m talking to someone and I realize that their reasoning makes sense on a surface (perhaps heuristic) level, but there are some more precise points they haven’t resolved, I can bring that up. Or perhaps it’s the opposite, and I think they have a myopic, syllogistic reason for their view, but thinking more broadly would really challenge their position (this is often the case with conspiracy theories, for example – they often have a precise, tight logic to them, but zooming out to the forest level can undermine them quickly). I am not restricted to a flowchart of Socratic questions the way SE often seems to be.
For anyone who actually read this whole thing, especially skeptics, I really appreciate it! I hope I didn’t come across as arrogant or as if I have all the answers. I truly do want to keep learning and to model doxastic openness myself. I’m actually having an atheist on my podcast soon to do a little SE on me. So we’ll see how that goes! I would love to hear your own thoughts on this, so feel free to let me know what you think in the comments.
Let me finally say once again that there is much that I do like about SE and its goals. The people who practice it (including Peter Boghossian) seem to truly want to spread good reasoning in the world, and in a non-antagonistic way. I’ve noticed that multiple SE YouTubers end their videos by requesting that viewers not post offensive comments, since the SE patients sometimes watch the videos later, which is admirable. It appears they really are not trying to publicly shame anyone but instead simply to bring the light of critical thinking to more people. No one can deny that is a noble goal, and I humbly hope that this review of the book, and of the larger enterprise of SE, can help sharpen it in a useful way.
Like my posts? Then subscribe!
- The reverse of doxastic openness is doxastic closure, meaning you are not willing to revise your current beliefs
- I originally wrote this review for Amazon and therefore was going off my memory since I couldn’t include references anyway. Luckily, this was the only one I had trouble finding a reference for after the fact. I can’t remember exactly where I heard it, but a lesser, similar statement can be found in the Stanford Encyclopedia of Philosophy entry on the Kalam where it says “This argument has been the subject of much recent debate”
- From his interview on Unbelievable?
- See his book Mind and Cosmos which is wholly centered on this idea
- I’m most familiar with this idea from Ruse via William Lane Craig when he quotes him in numerous debates, but this Guardian article (particularly the opening paragraph) gets the idea across
- You can hear Lüdemann himself describe his views in debate with William Lane Craig
- Ehrman’s book How Jesus Became God includes a description of this thesis
- See his book Thinking Fast and Slow which is all about this