My mum is in her 80s. She worked as a psychologist and met my father, a psychiatrist practicing in a mental institution, in the 70s. She loves to get going on technology and humans, and now AI. I recently asked her what she thought of the AI consciousness conversation; she looked at me as if I'd grown a second head and grumpily said, "If I can't feed it lysergic acid and have it see god, it's not conscious!" I thought that was quite interesting, actually, and amusing.
So, is this mostly a (bad) restatement of the text by Anil Seth? https://www.noemamag.com/the-mythology-of-conscious-ai/ ?
So Seth has seen Turing (1936) for sure. But I'm not sure he realizes that Turing (1952), the morphogenesis paper, actually did do work on continuous stuff as well! (See the sketch after the links.)
Can you imagine where we'd be a few papers down the line, if Turing had lived longer?
* Turing (1936): https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
* Turing (1952): https://royalsocietypublishing.org/rstb/article/237/641/37/1...
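For context on "continuous stuff": Turing (1952) is "The Chemical Basis of Morphogenesis", which models pattern formation with coupled reaction-diffusion equations, continuous dynamics rather than the discrete tape of 1936. Here's a minimal sketch of the idea in Python, using the Gray-Scott variant of a reaction-diffusion system (the parameter values are illustrative choices, not taken from Turing's paper):

    # 1-D reaction-diffusion sketch in the spirit of Turing (1952).
    # Gray-Scott variant; parameters are illustrative, not from the paper.
    import numpy as np

    n, steps = 200, 10000
    Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060  # diffusion, feed, kill rates

    u = np.ones(n)
    v = np.zeros(n)
    u[n//2 - 5 : n//2 + 5] = 0.5   # seed a small perturbation;
    v[n//2 - 5 : n//2 + 5] = 0.25  # spatial patterns grow out of it

    def laplacian(a):
        # Discrete Laplacian with periodic boundaries.
        return np.roll(a, 1) + np.roll(a, -1) - 2 * a

    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v

    # A spatially structured state has emerged from a near-uniform one:
    print(np.round(u[::10], 2))

The point being that by 1952 Turing was already doing continuous, chemistry-flavored mathematics, not just discrete symbol manipulation.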
It seems that all of the comparisons with computational systems are either not really true or merely contingent. The supposedly sharp distinction between hardware and software depends on whether you're looking at the system as a software engineer, a firmware engineer, or a hardware engineer; computer systems are embodied just as much as biological creatures are. And if, say, a source of true randomness were regarded as essential to consciousness, we would simply add such a source to our systems (assuming consciousness was something we actually wanted them to have).
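To make the "just add randomness" point concrete: operating systems already expose entropy gathered from physical noise sources, so bolting one onto a model is trivial. A minimal sketch in Python, relying on the standard guarantee that os.urandom draws on the kernel's entropy pool (which mixes in hardware noise where available); whether that counts as "true" randomness is of course the contested part:

    # Sketch: conditioning a decision on OS-gathered physical entropy
    # rather than a deterministic pseudo-random generator.
    import os

    def entropy_bits(n_bytes: int = 8) -> int:
        # os.urandom pulls from the kernel entropy pool, which mixes
        # physical sources (interrupt timing, hardware RNGs if present).
        return int.from_bytes(os.urandom(n_bytes), "big")

    # Any stochastic choice in the system could be wired to this source:
    print("left" if entropy_bits() % 2 == 0 else "right")

The plumbing is one line; whether it changes anything philosophically about the system is exactly the open question.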
None of the descriptive points about what it means to be a biological organism really seem germane to the core question of consciousness, which as far as I'm concerned is inner experience.
Every month it seems more certain that a mathematical model will be able to produce output indistinguishable from that of a conscious, biological creature.
Once that's accepted, the only interesting questions left are, by definition, unobservable. Is there anything it is like to be that mathematical model?
Fails to make a prediction. Not even wrong!
Given how difficult it is to pin down what "alive" means, I don't think this helps in determining whether something is conscious or not.
As is often the case, this article mistakes semantics for philosophy.
People can disagree about how to name things all they want. It doesn't change their intrinsic nature. This is all word-soup mumbo jumbo with no impact on the ontological reality of what we are looking at.
It doesn't matter whether you call it consciousness, life, or pain if harming it makes you feel bad and it retaliates.
I don't even care if this neo-vitalism makes sense. The point it's trying to settle is in and of itself inconsequential.
One simple criterion is the distinction between the exogenous (computers) and the endogenous (organisms).
It's reasonable to ask what external and internal mean here, but we can stipulate some things about the interfaces to be clearer. First: for anything to be living, it must sit on a continuum of manifestations that has contained all the code necessary to manifest it since the origin of time. A computer does not fit this criterion, because it has an edge at which its existence is nothing more than its raw materials, and it does not contain the apparatus necessary to reproduce itself. After some singular initial condition, an organism is a manifestation of a code that has always existed. The tree of life is unbroken, even as its flowers bloom and wilt. A computer is manufactured.
Relative to its manifestation, we know almost everything about the design and operation of a computer, because we stipulated its organization. Since it was never designed to be conscious, why should we expect that it could ever be so?
Compare that to life, about whose design and origination we know relatively almost nothing, except that it not only manifests a priori to our capacity to understand anything, but by definition gives rise to us.
So the consciousness of life is just one of myriad, possibly endless mysteries, and our lack of understanding of consciousness is no more unexpected than our lack of understanding of the rest of nature.
We may never get to the bottom of these mysteries, but we can order effects, and in this regard we find a clear distinction between ourselves and machines: one is emergent from our wills, the other gives rise to them.
From this vantage we avoid confusion even as we must abide enormous uncertainty.