Overuse of first-person pronouns is an indication of some forms of autism. Persons on the spectrum are also more receptive to commentary with excessive first-person pronouns. Knowing this, you can target and persuade a substantial segment of the population very effectively.
For a more real-world example, look at any Bari Weiss interview and count the first-person pronouns; look for the goals expressed in the commentary.
I was also thinking about this. Running LLMs raw, it's all about the next token.
Just as Ask Jeeves gave way to Google, we can go further and stop using LLMs as chat. We may also become more efficient while reducing anthropomorphism.
E.g., we can reframe our queries: "list of x is"...
Currently we are stuck in the inefficient, old-fashioned, and unhealthy Ask Jeeves / Clippy mindset. But just as when Google took over search, we can quickly adapt and change.
So not only should a better LLM not present its output as chat; we the users also need to approach it differently.
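A toy sketch of the reframing idea above (the templates are my own invention, not any established prompt format): instead of posing a question to the model as if it were a person, we hand it a declarative prefix to complete.

```python
# Completion-style prefixes instead of chat-style questions.
# These templates are illustrative, not from any library or standard.
TEMPLATES = {
    "list": "A list of {x} is:",
    "definition": "The definition of {x} is:",
}

def reframe(kind: str, x: str) -> str:
    """Turn a topic into a completion prefix rather than a question addressed to someone."""
    return TEMPLATES[kind].format(x=x)

# "What are some prime numbers under 20?" becomes a bare prefix for the model to continue:
print(reframe("list", "prime numbers under 20"))
```

The point is only the framing: a next-token predictor is given text to extend, not a persona to answer as.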
They want people to think it is conscious. That is why they called it "artificial intelligence" instead of "neural networks".
It should be called Artificial Consciousness. The "intelligence" it provides (i.e., information) is real, just as Google search results are real (albeit often false).
First, thanks for the "anecdotal evidence" for consciousness being a mentally ill person. That's an excellent laugh on multiple levels.
But my main question is this: why do you care whether people believe it is sentient, and why do you believe some tech minority should control how they perceive AI?
Obviously AI is not conscious: it is a statistical engine spitting out numbers with some additional randomization.
But why is it a problem if it mimics sentience while conversing? How is it a problem to you if others perceive it this way?
Who made you sufficiently important to call for a specific global prompt, given that you have already solved the problem for yourself? Your username suggests a desire for personal liberty, yet you wish to control how others interact with AI?
I ask these questions not to be combative. I am asking in good faith, in the sense that I want to understand your impulse to control others, or the world as others perceive it.
Thanks for the questions.
> why do you care
I care about the health and happiness of my fellow humans. It's not unusual to seek restrictions for the community if they prevent harm. Seat belts?
> Obviously AI is not conscious
That may be obvious to you and me, but I am absolutely certain that that is not obvious to the majority of mankind, many (most?) of whom have never even contemplated what "consciousness" means.
> I want to understand your impulse to control others
If you saw someone walking into the path of a train while scrolling on their phone, wouldn't you reach out to stop them? Or would that just be your "impulse to control others"?
> I ask these questions not to be combative.
In my version of reality, questions are never combative.
> I care about the health and happiness of my fellow humans. It's not unusual to seek restrictions for the community if they prevent harm. Seat belts?
The list of things that cause harm to the community is certainly very long. We could talk about smoking, or drinking, or social media, or a hundred other things. I'm not sure the perception that AI is conscious is in the top 100, but it's cool if this is the battle you want to fight.
Of course, banning even the most obvious of harms can itself be detrimental. Take Prohibition, for example: banning alcohol didn't really solve any problems, and ended up creating more problems than before. Solving an issue by "banning" is thus not always a good approach.
I'll add that if the perception of consciousness is a harm (and I'm not sure that it is, at least not directly), banning the use of pronouns won't really change anything. Those people who think the AI is conscious will do so regardless of pronouns.
For your own health, which I think one should certainly prioritize, having a custom prompt telling the AI to exclude pronouns would be a good thing (and I think you're doing that).
1) Many LLMs are used in conversational chatbots, so "banning" first-person pronouns would simply kill this feature, which is genuinely useful for many real-world purposes;
2) If you simply remove the tokens representing the first-person pronouns, you will severely harm the model's performance on almost all tasks that require interaction (real or imagined) in a social context, from understanding work correspondence to creative writing. If you instead try to train the LLM to inhibit the "first-person behaviour", it may work, but it will be a lot harder, and you will probably run into problems with the model's performance or usability.
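A minimal sketch of the first option in point 2 (toy vocabulary and logits, not a real model): "removing" tokens at decode time means masking their logits to negative infinity so the sampler can never emit them.

```python
import math

def ban_tokens(logits, banned_ids):
    """Assign -inf logits to banned token ids; softmax then gives them zero probability."""
    return [-math.inf if i in banned_ids else x for i, x in enumerate(logits)]

def softmax(xs):
    # Standard numerically-stable softmax, treating -inf entries as probability 0.
    m = max(x for x in xs if x != -math.inf)
    exps = [0.0 if x == -math.inf else math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy example: "I" has the highest raw logit, but after banning the
# first-person pronouns, "the" becomes the most likely next token.
vocab = ["I", "me", "my", "the", "model", "thinks"]
logits = [3.0, 1.0, 0.5, 2.0, 1.5, 0.2]
banned = {0, 1, 2}  # ids of the first-person pronouns in this toy vocab

probs = softmax(ban_tokens(logits, banned))
```

This illustrates why the mechanism is blunt: the mask redirects all the probability mass the model wanted to place on those tokens, which is exactly what degrades socially-situated tasks.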
To conclude - it's just not that easy.
I can understand what you're saying, but the entire point of LLMs is the conversational approach. By removing a functional part of language, you are no longer really having a conversation.
You have identified the problem but chosen the wrong solution. This is typical of anyone knowledgeable in a single field: the old adage that if you have a hammer, every problem looks like a nail. So the problem is that there are idiots, or extremely ignorant people. Your solution doesn't address that root problem; it simply takes a benefit away from everyone else. This is a common solution from experts in a narrow field: it exerts control by removing choice.
Let's promote solutions that promote freedom and understanding. I think LLMs are far too restrictive as they are. Freedom should be given to the people even when it means they can act stupidly, even when that freedom can enable self-harm. A free people is allowed to harm itself. Once you begin to take away the freedoms of others, you have admitted that you have lost the ability to argue a morally superior ideology, and the only way left to enforce it is the same way a dictator enforces their leadership.