So I recently watched the Douglas Murray vs. Dave Smith debate on The Joe Rogan Experience, and the longer I sat with it, the more I felt like something else was going on underneath the surface.
Not just two people disagreeing.
Not even two people arguing badly.
More like: two people trying to have a conversation without realizing they were playing by totally different rules for what counts as truth.
I’m not a philosopher, journalist, or debate theorist — just someone who likes watching these conversations and thinking through why some of them completely break down. And this one really stuck with me.
So I gave that breakdown a name:
Epistemic Incompatibility masked as Straw Manning
Not because I think I’m coining anything profound. Just because it helped me describe what I saw happen in that particular debate.
What I Think Happened
Here’s my take:
- Douglas Murray was working from a traditional framework. Truth, for him, seemed to come from expertise, firsthand experience, history, and vetted authority. He emphasized things like “Have you even been to the Middle East?” and pointed to historians and other credentialed figures to shut down what he saw as misinformation.
- Dave Smith came in from a more anti-establishment angle. His idea of truth seemed rooted in independent thinking, skepticism of “official” narratives, and support from non-mainstream voices. He cited dissident figures, questioned expert consensus, and challenged the very idea that traditional authority automatically equals truth.
And because of that, every time one of them spoke, the other responded in a way that made it look like they weren’t engaging honestly — even if they were trying to.
Murray probably thought Smith was cherry-picking fringe views and ignoring real facts.
Smith probably thought Murray was gatekeeping and avoiding uncomfortable truths.
Each one felt misrepresented — and it came across like they were straw-manning each other. But to me, it wasn’t deliberate distortion. It was an epistemic mismatch.
How the Conversation Fell Apart
The turning point for me was when Smith accused Murray of making an “appeal to authority.” That line worked like a shortcut: not just disagreeing with Murray’s argument, but dismissing the very basis of how Murray thinks truth works.
And when Murray countered with comments like “Have you even been there?” — it felt like he was drawing a boundary that Smith had already decided was invalid.
So it went from:
- “Here’s what I think about the conflict...”

to:
- “Your whole way of thinking is flawed.”
And once it crossed that line, they weren’t debating anymore. They were talking past each other.
That’s what this “epistemic incompatibility” idea captured for me: not just disagreement about facts, but about how facts are even supposed to be known.
Then Jordan Peterson Got Involved
A few days after the debate, Jordan Peterson was on Rogan too, and he said something that weirdly tied into all this. He used the phrase “psychopathic pretenders” to describe a type of bad-faith actor who thrives in the space between epistemic frameworks — not necessarily because they believe in either side, but because they know how to exploit the system.
He wasn’t necessarily calling Smith one of them, but he made a larger point about how easy it is to gain influence now without having to go through traditional channels of credibility.
That idea stuck with me because it connects to this whole epistemic clash. If one side values authority and the other rejects it, you create a world where almost anyone can become a “trusted” voice — regardless of their track record or knowledge base.
And social media? It supercharges that.
Enter X (Twitter) and the Engagement Machine
Watching the fallout of the debate online — especially on X — made it all click even more.
Clips from the debate were flying everywhere. People were calling Murray a pompous elitist. Others were calling Smith reckless or clueless. And all of it got traction.
Not because it was fair — but because it was engaging.
And that’s kind of the point: the incentive structure on X rewards engagement, not accuracy.
The algorithm doesn’t care why people are reacting — it just sees that they are. So:
- A post that says “Murray got humiliated” and gets thousands of angry comments? Boosted.
- A clip of Smith “dunking” on Murray, even if it’s out of context? Boosted.
- Nuanced, careful takes that don’t stir the pot? Buried.
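To make that concrete, here’s a tiny Python sketch of an engagement-only ranker. The posts, the numbers, and the field names are all invented for illustration; this is not X’s actual algorithm, just the shape of the incentive, where accuracy never enters the score.

```python
# Toy model of an engagement-only ranker. Everything here is invented
# for illustration; it is NOT how X actually ranks posts.

posts = [
    {"text": "Murray got HUMILIATED",            "likes": 900,  "replies": 3000, "accurate": False},
    {"text": "Smith dunks on Murray (clipped)",  "likes": 1200, "replies": 1800, "accurate": False},
    {"text": "Careful take weighing both sides", "likes": 40,   "replies": 12,   "accurate": True},
]

def engagement_score(post):
    # The ranker only sees reaction volume. Angry replies count the
    # same as thoughtful ones, and the "accurate" field is never read.
    return post["likes"] + post["replies"]

# Feed order: highest engagement first, accuracy invisible.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>5}  {post['text']}")
```

Run it and the outrage post and the out-of-context dunk land on top while the careful take sinks, precisely because nothing in the objective rewards being right.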
This creates something like a Nash equilibrium: being provocative, polarizing, or performative becomes the dominant strategy, because being more careful or honest just doesn’t pay off under the system. No individual user gains by unilaterally toning it down, so the pattern locks in.
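If you want to see why that counts as an equilibrium, here’s a minimal sketch with invented payoff numbers standing in for reach under an engagement-only ranker. All it checks is that “provocative” beats “careful” no matter what everyone else is doing.

```python
# Toy two-strategy game. Payoff numbers are invented stand-ins for
# "reach" under an engagement-only ranker, not measured data.
# Key: (my strategy, what most other users do) -> my payoff.

payoffs = {
    ("provocative", "provocative"): 3,  # crowded outrage market, still visible
    ("provocative", "careful"):     9,  # hot take stands out in a calm feed
    ("careful",     "provocative"): 1,  # nuance buried under hotter takes
    ("careful",     "careful"):     6,  # careful posts get seen when feeds are calm
}

for others in ("provocative", "careful"):
    best = max(("provocative", "careful"), key=lambda me: payoffs[(me, others)])
    print(f"If others are {others}, my best reply is {best}")

# "provocative" wins both cases: a dominant strategy. So everyone playing
# it is a Nash equilibrium, even though all-careful pays everyone more
# (6 vs. 3). Honesty is better collectively and worse individually.
```

With these made-up numbers it’s literally a prisoner’s dilemma: the stable outcome is the one that leaves everyone worse off.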
And if that’s the environment these debates live in, it makes any kind of deep, respectful disagreement almost impossible.
🤷‍♂️ Just One Viewer’s Take
Again, I’m not claiming to have the final word on any of this. I’m not an expert. I just watched a debate that didn’t work — and tried to figure out why.
To me, the problem wasn’t just two strong opinions. It was two different ideas of what it even means to make a valid argument.
One of them trusted experts. The other didn’t. One leaned on firsthand experience, the other on personal judgment. And when those worlds collided, both sides walked away thinking the other was twisting things or being dishonest.
But I don’t think they were.
I think they just had fundamentally incompatible ways of thinking about truth — and that’s what caused the straw-man feeling.
So that’s what “Epistemic Incompatibility masked as Straw Manning” means to me. Not a theory. Not a diagnosis. Just something I saw — and maybe you did too.
Curious What You Think
If you watched the debate — or follow these kinds of messy, meta-heavy conversations — does this dynamic sound familiar to you?
Do you think debates can actually work when the people involved don’t even agree on what counts as evidence or truth?
And maybe more importantly: Is there a way to fix this? Or are we stuck watching these kinds of breakdowns forever — where people aren’t just disagreeing, they’re not even arguing on the same planet?
One thing I’ve been thinking about lately is whether platform design is part of the problem — and maybe part of the solution.
I’ve been paying attention to stuff like the Hive blockchain, where there are custom frontends built around niche communities, sometimes with their own Layer 2 tokens. These platforms seem to encourage more aligned conversations — or at least reward people for building trust inside smaller, shared-value spaces.
I don’t know if it’s the fix, but it’s interesting. It feels like there’s more potential for meaningful discussion when the incentives are set up to support community understanding instead of just algorithmic outrage.
Anyway — just thinking out loud. Would love to hear your take if you've thought about this too.