Currently, no standardised test for sentience exists, so The Daily Beagle, drawing on programming knowledge, AI research and interactions with other human beings, offers its own version of a sentience test.
A sentience test should not be a test of ‘aliveness’, nor of whether a creature can experience pain. Many creatures, such as fish, can experience pain (despite insistences to the contrary from fishermen trying to ease their own consciences) without being sentient.
Sentience Is A Spectrum
Furthermore, The Daily Beagle is of the view that sentience lies on a spectrum. For example, humans are more sentient than dogs, and dogs are more sentient than fish. It is not a ‘binary’ yes/no choice, but a level of sentience.
I think a lot of people compare things to human sentience and use that comparison to assign a true/false property of sentience, but this risks overlooking animal levels of sentience. There are plenty of people who would argue both for and against the sentience of dolphins or dogs.
People might also try to mislead by swapping ‘sentience’ for ‘intelligence’ (‘that dog is very smart’), but there are plenty of sentient yet not very smart human beings in the world. Using humans as the comparison point also overlooks the possibility of alien sentience, which is why humans should not be the yardstick.
So, the test. Note that an entity doesn’t have to pass all points to be classified as sentient; rather, the more aspects it scores on, the higher its level of sentience.
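To make the scoring concrete, here is a minimal sketch in Python of the ‘more aspects scored, higher level’ idea. The criterion names, the equal weighting and the example entity are illustrative assumptions, not part of any formal test.

```python
# A minimal tally of the seven criteria below, assuming equal weighting.
# The criterion names and the example observations are placeholders.

CRITERIA = {
    "self_awareness",
    "independent_agency",
    "anticipatory_agency",
    "independent_morals",
    "empathic_agency",
    "social_anticipation",
    "adaptability",
}

def sentience_level(observed: set[str]) -> float:
    """Fraction of criteria the entity exhibits, from 0.0 to 1.0."""
    return len(observed & CRITERIA) / len(CRITERIA)

# Hypothetical entity observed to exhibit two of the seven traits:
print(sentience_level({"independent_agency", "empathic_agency"}))  # ~0.29
```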
1. Self-Awareness
One of the most crucial traits any sentient creature can possess is self-awareness. This is easily defined as awareness of your own person: specifically, being able to distinguish ‘you’ from ‘other’.
A good example is looking in a mirror. You recognise the mirror image is not you, but a mirrored version of you, based on how it looks and moves. An iguana, on the other hand, will often act hostile towards its own mirror image because it is not self-aware: it thinks the reflection is another iguana, not itself.
2. Independence Of Thought (Independent Agency)
A conventional machine does not exert independence of thought; it simply follows a script of commands. Bacteria likewise follow very basic chemical commands and don’t exert any meaningful independence of thought.
Independence of thought isn’t defined by how original a person’s ideas are, but is instead identified by their independent sense of agency (i.e. they exert their own wants and interests independent of any outside influence).
People who lack independence of thought are often mockingly called ‘NPCs’ or ‘bots’. Some animals exhibit independence of thought, but, like sentience, it is a spectrum: some animals may be more independent than others.
3. Forward Planning (Anticipatory Agency)
Something a lot of animals cannot do, except by instinct, is forward planning. Humans have the ability to anticipate problems in advance and act before those problems arise, showing anticipatory agency.
Ever worried about what might happen when you speak to another person? That is anticipatory agency. This is quite a powerful indicator of sentience, because conventional machines often aren’t able to anticipate problems, only react to them.
It isn’t necessary for an entity to exhibit this trait continuously (a lot of humans do not forward plan and are gung-ho); it is sufficient that it has the capacity to do so.
4. Independent Morals/Ethics
Another key distinguishing feature is that a lot of animals do not have a moral or ethical system. Some animals may have Pavlovian punishment-reward behaviours (i.e. if they roll over they’ll get the treat), but this isn’t an independent moral or ethic.
Primates are a good example of independent morals and ethics. If you reward one primate with food for a task, but give another primate less food for the same task, the short-changed primate will throw the food back in protest. It can grasp that what just happened was morally wrong, unethical and unfair.
5. Empathy (Empathic Agency)
Empathy is often treated as a moral/ethical trait, and it is a very specific type of moral trait: the ability to put yourself into someone else’s viewpoint and experience, to anticipate how they, specifically, feel.
A lot of animals (and even some humans, such as psychopaths) lack empathy. Showing the ability to anticipate how someone else feels reinforces sentience, because it means the entity can recognise other sentient beings and their experiences.
The empathy has to be genuine, however. It cannot be an “ELIZA”-style shallow empathy that merely mirrors back words and sentences. The entity has to be able to construct a model of what the other person might think (it does not necessarily have to be right about what they think) and be open to questioning on that model; or, if it is unable to speak, like a dog, it must demonstrate actions indicating that it anticipates and empathises with the person’s suffering on some level.
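For contrast, here is a toy sketch of the kind of keyword-mirroring that produces shallow ‘empathy’. The patterns are invented for illustration (not ELIZA’s actual script); the point is that nothing in it models the speaker’s mind.

```python
import re

# Pronoun reflections in the spirit of ELIZA-style chatbots: the reply
# is built purely by echoing and transforming the input text.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def shallow_reply(utterance: str) -> str:
    match = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if match:
        # No model of the speaker exists; the 'empathy' is pure echo.
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Tell me more."

print(shallow_reply("I feel nobody understands my work"))
# -> Why do you feel nobody understands your work?
```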
A good example is an animal attempting to ‘wake up’ an owner it believes has died, or trying to help in what it perceives to be a problem situation. Empathy is a spectrum, so a dog may only be empathic towards immediate threats, while a more sentient creature may be empathic towards a wider array of threats.
6. Social Interactivity (Social Anticipation)
A good sign of sentience is that the entity tries to interact with other sentient creatures, and favours like-for-like sentient interactions. For example, when you want a serious discussion about health problems, your first port of call is a friend or a doctor, not, say, the fish in your fish tank.
This shows the entity can not only identify sentience, but has sentience to such a degree that it prefers sentient company for conversations. It anticipates the social interaction and its outcome. Essentially, does the sentient entity ‘find its level’ with other sentient entities?
This is not to say asocial or reclusive people are non-sentient, just that, when they do hold two-way discussions, they’re not trying to hold them with, say, a rock or a door.
People might talk to an animal, but they’re talking to it because they perceive it to have at least some sentience, and the way they converse usually indicates the level they anticipate (for example, a high-pitched tone, as though speaking to a baby or a child).
7. Adaptability To New Situations
Adaptability expands on ‘Independence of Thought’ (which basically means ‘doesn’t just blindly follow orders or impulse’).
Essentially, it means that if the entity is presented with a new, out-of-field scenario, it is able to adapt to that scenario based on prior knowledge or data, or (a better indicator) shows attempts at experimentation to try to resolve the issue.
A good example of this is driving. The Tesla AI is not sentient, because it can only work on a limited subset of well-known roads. A human driver is sentient because, if you suddenly destroyed an upcoming bridge, changed all the road signs to Klingon and set up a lion and some spike pits, the driver would not only be surprised but would be able to adapt: adjusting their driving, violating the road laws, pulling over, dealing with the lion, and maybe even setting their car up as a roadblock to stop people falling into the pit of spikes.
The success of the adaptation is irrelevant here (success would only indicate intelligence, not sentience); what is crucial is the attempt to adapt readily to the situation while being aware that it is abnormal or absurd. This means the entity isn’t just regurgitating old data, but recombining old data with a new situation to formulate a new response.
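As a loose illustration of that distinction, consider the sketch below. The scenario table and the pool of exploratory actions are invented for this example; the point is only the difference between failing on unknown input and falling back to experimentation.

```python
import random

# Invented script of known scenarios -> canned responses.
KNOWN_RESPONSES = {
    "red_light": "stop",
    "green_light": "go",
    "pedestrian_ahead": "brake",
}

# Invented pool of exploratory actions for novel scenarios.
EXPERIMENTS = ["pull_over", "reroute", "slow_down", "signal_for_help"]

def script_only(scenario: str) -> str:
    """A 'regurgitating' agent: no answer exists outside its script."""
    return KNOWN_RESPONSES.get(scenario, "error: unknown scenario")

def adaptive(scenario: str) -> str:
    """An 'adapting' agent: it recognises novelty and experiments
    rather than failing outright."""
    if scenario in KNOWN_RESPONSES:
        return KNOWN_RESPONSES[scenario]
    return f"novel scenario, experimenting: {random.choice(EXPERIMENTS)}"

print(script_only("klingon_road_sign"))  # error: unknown scenario
print(adaptive("klingon_road_sign"))     # novel scenario, experimenting: ...
```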
Conclusion
It is very likely this test is incomplete, as there may be more markers for sentience; however, it is a good starting point that can be expanded upon. One of the biggest risks is making tests that are human-centric rather than sentience-centric.
The ability to invent new technology, new ideas or new data is not evidence of sentience (a lot of people are not inventors); it is evidence of intelligence.
Certainly, as intelligence scales, an entity will appear more and more sentient (as with some dogs and dolphins); however, sentience is a subjective judgement, essentially asking ‘how convinced am I that this entity is effectively alive?’.
Some people will never be convinced AI can be sentient; but then again, some people can never be convinced black people can be intelligent. If even humans can’t pass such arbitrary standards, who can?