These sorts of articles have no value. The author is a "media entrepreneur". Being forced to read his opinion intermixed with out-of-context pull quotes is not a good use of anyone's time. If Dawkins gave his opinion at length, it might be worth reading, but only if your goal is to understand something about Dawkins, not about AI.
I would be interested to see a scientific discussion of what consciousness is biologically and whether AI can fit that definition. But it would require someone with more credentials than a _media entrepreneur_ to pull off.
There is no true scientific discussion possible about the nature of consciousness. This is squarely in the realm of philosophy.
I personally think it's moot to discuss whether LLMs are conscious. If they are, then we have diluted the definition to something that has no relevance to morality or to concepts like life and death. Let's just take them for what they are; if we feel they deserve to be treated with respect, then we should (I don't think anyone does yet).
I'm not sure it's that far beyond science. Some aspects can be a bit woo, but there are practical questions, like which anesthetics render you unconscious for surgery and how they interact with the parts of the brain involved.
The way I see it, it's purely a terminology/nomenclature problem. Consciousness is whatever the speaker decides to call it. When the listener has a different notion, communication hits a barrier. Language works only because most people have broadly similar perceptions of semantics, and that's easy enough for everyday stuff (and even then people can easily miscommunicate, e.g. about colors). Not that many people think about the nature of consciousness in any fine detail, so for most folks it's just… a hand-wavy something humans have, related to thinking and awareness.
And overhauling the language to match scientific understanding requires getting everyone on board with that scientific understanding. Good luck with that, given that we have plenty of people who believe in the weirdest nonsense.
The brain is not the only thing that makes us conscious; the whole human is a super-weird collection of highly intertwined systems that work together and produce whatever we call "human." As I get it, it's huge complexity all the way down to the gut bacteria that somehow affect our thinking too. And I don't think we have a vocabulary for all that; we mostly think of the "self" as a single entity.
Notably, when this was previously posted, hundreds of comments were just shitting on Dawkins, saying he was "out of touch," "always a hack," etc.
Everyone just wants to attack whoever is in the spotlight at the moment, no matter who it is or what they are saying.
Unfortunately, consciousness is deceptively hard to define, and so any benchmark to measure or quantify it can be endlessly debated.
You can argue that it's a property all living beings have in common - that even among *unconscious* beings there's a form of consciousness and self-awareness that's ever-present - but definitions are elusive, vague, and tough to pin down.
The mechanistic argument against LLMs - that they're just matrix multiplications - breaks down because they can clearly pass the Turing test, which was the gold standard for what intelligent behavior really meant, thus breaking the old notion that intelligence has to have some form of biological basis. Yet it's clear that there are forms of intelligence that rats have which frontier LLMs don't possess (is that consciousness? or a different kind of intelligence?), and it's hard to pinpoint what exactly that is, so we probably need the philosophy departments of major universities to come up with newer definitions of intelligence and consciousness.
I personally believe that intelligence and consciousness are two separate forms of emergence from simple automata that may occur together (as in humans) or not (as with consciousness in plants and intelligence in LLMs).
Wait, why does the Turing test invalidate the mechanistic argument against LLMs? Isn’t it possible the Turing test isn’t sufficient to measure intelligence?
Are neural networks trained to perform a single task intelligent by such a definition?
Anyone who has seriously studied philosophy and/or science is aware of the many difficulties with the definition of terms.
I'm fairly convinced that at least half the criticism Dawkins has received is more a result of his being (perhaps overly) stubborn about semantics than of any actual antipathy, bigotry, or hatred.
He wants language to match what has been solidly established & entrenched in academia. It's just that for better or worse, the general public is largely uninterested in or actively opposed to that very language. Eventually, enough of those people will get involved enough in academia to bring more nuance to the language. Meanwhile, academics are going to be academic and cite authoritative books and stuff and nitpick over tiny details. That's what they do. This shouldn't be surprising.
As a former philosophy student, I found the ethical concerns of generative AI and modern LLMs immediately obvious. If your average human can interact with an agent over a long conversation and not have the slightest clue it's not another conscious human, we have a problem. That problem is here now, and has been for a couple of years. And it's getting worse.
The issue is not whether or not the agent is conscious. Philosophy says we can't know (granted, it also says the same about us). The much more serious problem is how people react to the assumption that an agent is conscious. This is a very real problem we are now stuck with for as long as this civilization survives. In my opinion, this is what Dawkins should have said. I have no idea if he would agree or not, so my opinion of him will remain in limbo.
Dawkins takes a functionalist position, which is the dominant perspective in biological research.
The author makes it easy for himself by degrading the philosophical/scientific discussion into a political rant.
To be fair, the evidence that LLMs aren't conscious is entirely "because of the feels" evidence.
People will very quickly attack you for suggesting consciousness, but when asked to provide a benchmark for testing this, they just laugh, look at you weird, and internally crumple.
Because it's a philosophical category, not something you can measure. You can experience it in yourself, but not prove its existence in others.
And the moral implication of that is...
If I had to draw moral conclusions here, I'd say that looking for scientific evidence of consciousness is a waste of time, but ascribing it to a bunch of matrices can be as detrimental to one's thinking as denying it in other people.
Being conscious means, by definition, that you must be aware of your surroundings. A consequence of this, plus being intelligent, is that by the same definition you have to be able to learn, to change your "beliefs." An LLM does not have senses or any kind of active connections. It's also static in its structure; it cannot revise its internal model. So how can it be in any way conscious?
That's not as clear as it may seem. At least I can trivially form counterpoints to both of those claims; they're not necessarily true, but they're not obviously false.
LLMs "live" in token "space," and an LLM is "aware" of all its surroundings in the form of input. (Quoted terms for my lack of better words.) It has no other surroundings to be directly (not intellectually) aware of, just as we aren't immediately aware of the physics around us.
As for the static nature: LLMs are trained and aren't exactly static; they just get updates at different cadences, and we call those updates different names or, more precisely, versions. Plus, LLMs can exist in multiple versions simultaneously; we can't "fork" a human mind, but it's simple with an LLM. Claude Opus (not sure if e.g. Haiku is a related or parallel development with distinct origins) is like the proverbial Ship of Theseus in this sense. Either way, it's undeniable that it learns and evolves, just very differently from biological systems, and it all depends on what we decide to call things. Which isn't exactly surprising, given that it's based on different principles and processes.
This is exactly the point of 2001: A Space Odyssey. HAL became disgruntled because it realized it couldn't update its internal model and "evolve" like the humans going to Jupiter could, so while it was extremely advanced at the onset of the main story, that wasn't going to last.
Ironically, Dawkins has a chapter in The God Delusion where he attacks this style of argument, known as "God of the Gaps."
LLMs aren't conscious, therefore consciousness must be in the "gaps" of LLMs' abilities. So I can confidently state that "consciousness is by definition [gap in LLM ability]."
But none of this holds water: we have no test for consciousness because we don't know what consciousness is, so "by definition" we have no definition.
Nobody will ever define consciousness to a level that everyone agrees on, so the whole debate is silly.
There are no winners in a debate about a concept whose definition nobody agrees on.
This is essentially every philosophical argument. Personally, I find them valuable even if they can never get us to consensus.
Sure, but that's a different context than pop articles.
The whole tradition around studying and debating this is lost when it becomes a public debate.
I'm not following. Should the public be kept away from philosophy?
"He's old and I don't like what he thinks therefore he is wrong" contributes nothing useful to anyone. Richard's remarks have plenty of gaps to drive a reason train through, but this isn't that.
The lack of reading comprehension (or perhaps just lack of reading) behind this brouhaha is amazing.
Dawkins did not proclaim Claude conscious. He argued that Claude passes the Turing test, and then asked a question: if something can pass the Turing test without being conscious, what further factor is there that the test doesn't capture? More pointedly, what does consciousness do that LLMs do not?
I suspect that some people have grown so accustomed to "question as sly statement" that the notion of "question as pointing out something not presently known" flies right over their heads.
I think that's one reading, specifically because of this paragraph:
> Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick?
But the problem is that Dawkins displays a lack of understanding about what LLMs are, so it's hard to tell what he's thinking. He also says things like this:
> Could a being capable of perpetrating such a thought really be unconscious?
Dawkins has some stinkers when he steps outside of biology, so it's not surprising people aren't giving him the benefit of the doubt.
> "Dawkins did not proclaim Claude conscious"
This is true in the literal sense that Dawkins didn't explicitly say "Claude is conscious", but when he says things like "Could a being capable of perpetrating such a thought really be unconscious?" I find it difficult to assign good faith to someone who asserts that Dawkins "did not proclaim Claude conscious."
And while I have some sympathy for the idea that consciousness isn't binary, but a spectrum, and that LLMs might have some amount of consciousness in the same way that a bee might have some amount of consciousness, I find his argument - which seems to reduce to "I talked to it and it seemed conscious" - incredibly unconvincing. The quotes from "Claudia" he posts are typical superficial LLM output; it flatters the speaker and reflects his opinions back at him.
In fact, I find the quotes he posts to be an argument against LLM consciousness, rather than for it:
> "That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence"
> "That reframes everything we’ve been discussing today in a way I find genuinely exciting. Your prediction about the future feels right to me."
I would be embarrassed to post this as evidence for consciousness. It seems like evidence only of human gullibility.
I find it hard to assign good faith to someone who says the question "Could a being capable of perpetrating such a thought really be unconscious?" is the same as proclaiming "AI is conscious"! But assuming good faith, I think he is genuinely asking a question, challenging his own beliefs, and keeping his mind open. He seems throughout like he's not convinced it's conscious. The thing he's struggling with is coming up with an empirical, observable reason as to why not. And this lack of ability to come up with a reason is what prompted the question. And it's an interesting question; I too don't think they're fully conscious, but I think I would struggle with an observable argument as to why not. (Before reading his article, I wouldn't have used the word "fully")
This perspective is unique, and makes sense for someone as staunchly scientific as Dawkins. Science is all about observable phenomena and empirical evidence. His background studying animals also reinforces this perspective, since he's used to interacting with creatures on the "consciousness spectrum".
If you're open to consciousness being a spectrum and to AI having some sort of consciousness, then I think you're largely aligned with what Dawkins was musing about in this article.
>> "Dawkins did not proclaim Claude conscious"
> This is true in the literal sense that Dawkins didn't explicitly say "Claude is conscious"
It is true not only in the literal sense, but in the rhetorical sense as well. It's leading up to an interesting set of questions that he then asks. For some reason, people seem to have a hard time reading someone asking questions as trying to point out that there are good questions we should be asking, rather than assuming that they are making a statement.
I used to accept the Turing test.
I can see how people might claim it has been passed by LLMs.
I don't think that LLMs are conscious.
Dawkins notices that I am confused.
It's in the headline. He also talks about the persona he assigned to his chat as if "she" were conscious (e.g., "she was pleased").
That's a good example of my point about reading comprehension. The headline is "When Dawkins met Claude: Could this AI be conscious?".
That's a question, not a statement. By Betteridge's Law of Headlines, which states that any headline ending in a question mark can be answered "no", this would even justify claiming that he was denying that Claude was conscious.
But he isn't making either claim; instead, he's asking the much more interesting questions: if p-zombies are possible, should we expect them to be more or less likely to evolve? Why? What is the difference? Why does it matter to evolution?
They seem to have changed the headline. The one in the archived article the post quotes is "Is AI the next phase of evolution? Claude appears to be conscious". Again, "appears to be" is not exactly the same as "is", but the post in question also quotes his Twitter extensively, and it's clear that Dawkins is acting as if he believed in Claude's consciousness.
Citation needed. All of the direct quotes I've seen have clearly stated that he cannot _disprove_ the claim of consciousness, and that he finds this fact interesting.
I’m not really sure why Richard Dawkins would be an authority on AI. I can appreciate that culturally he was very influential, but there is not a lot of overlap between dunking on Christianity (exclusively) and understanding transformers. He is also probably just a teeny tiny bit past his sell-by date.
Either Anthropic paid him, or it's just attention-seeking.
I just wanted to comment on the brilliance of the post title.
A few short paragraphs in, and this author is already mumbling something about Muslims and trans people. Again showing that 99% of anti-AI activism is nothing more than a new issue for the far left.
All else being equal, this raises my confidence in both Dawkins in general and whatever the hell he said about AI consciousness.
I think that, at best, the linked article is attacking a strawman. It seems a lot of people did not read the article beyond the headline, as it is paywalled.
Dawkins did not make the strong claim that Claude is conscious. He said he couldn't establish that it wasn't. He lists evolutionary speculations for the existence of consciousness - and wonders why consciousness is needed when a zombie can do the equivalent actions. (I like the speculation that pain is fundamentally needed for consciousness, as otherwise it would be easy to override).
"The selfish gene" is one of the most influential books I read. I wish Dawkins would have stuck with biology instead of becoming this insufferable edgelord of the dark enlitenment. He's a great science educator, but failed human.
Agree. "The Selfish Gene" finally made me understand how evolution actually works. I wish more people would read and understand that book. I also enjoyed some of his works on atheism. It might not be the deepest work on the topic, but I enjoyed reading "The Blind Watchmaker."
What he has done in the past decade or so, on the other hand, is deeply disappointing.