Join me for a discussion with Bernardo Kastrup, an independent philosopher who holds a Ph.D. in philosophy (ontology, philosophy of mind) and another Ph.D. in computer engineering (reconfigurable computing, artificial intelligence), as we discuss his argument against physicalism and his version of Objective Idealism.
Hi Richard. I wonder whether it is necessary to believe in “abstract ideas”, “abstract objects”, and “universals” in order to believe in “a priori knowledge”. Is there any contradiction in being a nominalist and a Kantian epistemologist? Could you write a text about it? Thanks.
Reblogged this on Quaerere Propter Vērum and commented:
This was excellent.
I’m only halfway through, but (at least in the first half) I don’t think your guest gave you a satisfying answer to the question of why a simulation wouldn’t work.
As someone who writes a lot opposing computationalism, I’m often asked that same question or a similar one: What would a quantum-level simulation of a brain actually do if not produce consciousness? What would happen instead?
On the presumption that a quantum-level simulation of a heart would fully describe the heart’s behavior, what exactly would be absent from such a simulation of the brain? Such a simulation ought at least to simulate the meat, the biological function.
The opposing proposition is: Simulated water isn’t wet. (Or, my own contribution: Simulated lasers don’t emit photons. Might consciousness be like the photons, arising only from a specific physical situation?)
My answer to the question is that I see many possibilities. Maybe the simulation only simulates the biological function — a comatose brain. Or maybe there is neural activity, but it amounts to white noise. Or maybe it’s more coherent but doesn’t amount to thought. Or it amounts to insane thought, or addled thought. Or maybe lucid thought — that’s always a possibility. But there seem to be many other, more likely outcomes. (And when has anyone ever gotten software right?)
I think the real category error here is in thinking that a computer is anything at all like a brain, or that the two function in anything like the same fashion.
But I do think a very interesting question is what would happen if we simulated a brain down to the quantum level.
Seasonal distractions got in the way, but I finally got back to this and finished listening. Very interesting interview; I really enjoy this series, and thank you for it.
That said, I find I’m not very sympathetic to idealism. (I just published a post about why.) I find it hard to see how our mental content could account for why reality appears so lawful, consistent, and persistent to us. Minds, as I’ve encountered them, don’t seem capable of such precision and completeness.
In fact, I’m not sure something as complex as a mental process is ever capable of the completeness and consistency required to define reality as we see it. Who came up with transcendental numbers? That’s one righteous mentality!
I did very much enjoy the last part. I may not agree about idealism, but I very much agree when it comes to machine consciousness. At the end, I thought you seemed hung up on accounting for why a brain can be conscious but not a machine. Is it biological naturalism?
May I suggest “structuralism” (a kind of physicalism)? We know the brain has an exceptionally complex structure and behavior. Maybe what’s missing in a software simulation is that structure.
And maybe that structure and behavior could therefore produce consciousness in something other than biology, whereas a numeric simulation merely simulates that structure. (“Simulated water isn’t wet.” My humble contribution: “Simulated lasers don’t emit photons.” But “Simulated kidneys don’t pee” is pretty good! 😀 )
I can’t help but wonder if a structurally isomorphic “brain” wouldn’t be conscious.