DOI: Paper Link
Introduction
Three levels of Intelligence
- Mechanical: corresponds to repetitive tasks that require consistency and accuracy, e.g., order-taking machines in restaurants or robots in manufacturing assembly processes.
- Analytical: corresponds to less routine tasks that lean more towards classification (e.g., credit application decisions, market segmentation, revenue predictions).
- Empathy, Intuition, and Creativity: few AI applications exist at this level, although empathy and intuition are believed to be directly related to human consciousness.
The progression of AI into higher-intelligence tasks could fundamentally disrupt the service industry and severely affect employment and business models as AI agents replace more humans.
Goals of Paper
- Focus on what is required for such a conscious state to arise and how we can recognize conscious machines.
- Put forward a theory of how consciousness would emerge in its primitive state in AI agents, and, with that, how conscious AI may progress towards a point where we can deterministically recognize it as conscious AI.
- Advance our understanding of Empathic AI as the final stage of intelligence.
Preliminary Observations
- Turing (1950) introduced a test, later known as the Turing Test, to pinpoint when a machine (a standard computational machine) can be said to be capable of thinking.
- The Turing Test can identify thinking machines only at their maturity, when they have achieved linguistic indistinguishability from humans.
- Research on consciousness is vast and involves many scientific disciplines (e.g., sociology, neurology, psychology, pathology, philosophy, physics).
Proposition
- A principled framework that can identify thinking machines along their path toward maturity, together with the minimum requirements under which machine consciousness can emerge.
- Consciousness in AI is an emergent phenomenon that manifests when two machines co-create their own language through which they can communicate their internal states.
Intelligent Machines
Weak AI | Strong AI |
---|---|
Information-processing machines that merely appear to possess the full range of human cognitive abilities | Intelligent machines that, using advanced computation, possess all mental and comparable physical capabilities of humans, including consciousness |
The authors argue that a form of Strong AI with an emergent consciousness is possible.
Consciousness Theories
Theories focusing on the mind and its states
Philo-Psychological Theories of Consciousness
Primarily concerned with what consciousness is and how it comes to be within a conscious entity (a human).
Entities that play a significant role in philo-psychological theories:
- the structure of the mind
- mental states
- the way information is processed, retrieved, and stored
The main focus is on internal processes, while acknowledging the entity's environment as a source of stimuli.
Representationalism
The philosophical idea of representationalism reduces consciousness to mental representations of objects in the environment, such as photos, signs, and natural objects, and their qualities.
Intentional | Phenomenal |
---|---|
A mental state that is about, or directed at, some object | The feeling of what it is like to be you |
e.g., the belief that the Earth is round, a thought about a laptop, the perception of an animal | e.g., perceptual experiences, pains, emotional feelings, episodes of mental imagery, déjà vu |
NOTE: Most conscious experiences contain both kinds of mental representation.
First-Order Representationalism
- The core idea is that any conscious state is a representation, and what it is like to be in a conscious state is entirely determined by the content of that representation.
- A representation is about something, and the content of that representation is what the representation is about.
- E.g., the word DOLPHINS (representation) is about dolphins (content).

Three clarifications are in order:
- Though a representation has content, a representation is not identical to its content. The representation DOLPHINS is an English word with eight letters, but its content, dolphins, does not have any letters. Conversely, dolphins swim, but the word DOLPHINS does not swim.
- The content of a representation can be false and can concern a non-existent thing. The story of Snow White is about someone who does not exist, yet younger children sometimes find it hard to distinguish between reality and fantasy. According to representationalists, this explains why illusions, dreams, and hallucinations are possible: consciousness can misrepresent the world.
- First-order representationalism does not hold that every contentful representation is conscious: a conscious representation must be poised to interact directly with one's beliefs and desires.
Higher-Order Representationalism
- Addresses the shortcomings of first-order representationalism by differentiating between conscious and unconscious mental states.
- It says that a mental state is conscious only when another mental state within the same conscious entity is aware of it.
- For example, an entity's desire to express an opinion becomes conscious only when the entity is aware of that desire.
- E.g., the tongue in a person's mouth sends signals to the brain expressing a desire to taste something sweet; that desire becomes conscious only when the person is aware of it.
Observations
- These theories do not offer external observers a direct way of knowing whether they are dealing with a conscious entity.
- Based on these theories, a conscious entity may itself know that it is conscious, but an external observer (e.g., a human) would not be able to know until the conscious entity informs the observer.
A theory is needed that can offer external observers a way to examine and understand whether an entity (e.g., an AI agent) can be considered conscious.
Social-Self Theory of Consciousness
- The social-self theory of consciousness interprets consciousness not as an individual phenomenon but as a social phenomenon.
- This theory focuses on individual acts within a social context.
- An environment must exist within which actors communicate with each other.
Nagel's Conceptualization of Consciousness
- An organism is conscious when it knows what it is like to be another organism.
- Thus, other conscious organisms must exist within the conscious entity's immediate environment to make it possible for the conscious organism to experience what it is like to be the other.
Symbols
EXAMPLE: One can assume that a person intends harm if that person approaches with a clenched fist. The victim will defend itself from the imminent attack, while the initiator will respond to the defender's action. This back-and-forth exchange of symbols constitutes a matrix of social acts, or a social matrix. The clenched fist would be considered a symbol that carries the same meaning for all involved actors.
Consciousness in social-self theory requires a social matrix consisting of social acts and the exchange of symbols that leads to the creation of a language.
Observations
- The theory is not concerned with mental processes, the existence of a mind, or internal states, all of which actors are assumed to bear implicitly.
- An entity might know it is conscious, but an external observer might not until the conscious entity informs the observer.
Neither theory (philo-psychological or social-self) alone can provide guidance in positively determining whether an entity is conscious.
The authors put forward the idea that the co-creation of language is one of the missing links to consciousness.
Theories of AI Consciousness
The authors provide six propositions regarding AI consciousness that might help us formulate or discover a conscious AI agent.
- For consciousness to emerge, two AI agents capable of communicating with each other in a shared environment must exist.
  - Mead (1934) stated that language in the form of vocal symbols provides the mechanism for the emergence of consciousness.
  - By this he meant the exchange of vocal symbols through social acts in a social matrix.
  - So one way to perceive the emergence of consciousness would be to observe the inception and development of a language among AI agents.
  - Language is a means of social interaction and a social phenomenon, so it cannot be created in isolation; at least two AI agents are needed.
  - Note that communicating machines already exist but are not conscious; communication is thus fundamental to the theory, though not sufficient on its own.
- For consciousness to emerge, AI agents must exchange novel signals.
  - To infer emergence as a property of an existing system, one must observe something new: a fresh creation that emerges from the system rather than being the result of the system's normal working.
  - Creation here means a spontaneous idea that appears without much deliberation, as opposed to the creativity inherent in deliberate problem-solving activities.
  - These fresh creations should convey shared meanings among AI agents.
- For consciousness to emerge, AI agents must turn novel signals into symbols.
  - We need more than a mere exchange of novel signals (fresh creations); there must be a shared meaning that independent onlookers can observe.
  - Symbols are novel signals with shared meaning; these symbols are the building blocks of an AI-specific language.
  - For instance, the object tree is arbor in Latin and дерево in Russian. None of these words has a material advantage over the others; what matters is that the AI agents have agreed to use word X for the object tree.
  - This agreement is the first step of turning a signal into a symbol by providing it with shared meaning (a toy simulation of this process is sketched below).
  - Meaning arises from the agreement, not from the symbol itself. For such an agreement to be reached, AI agents must have an internal state.
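The following is a minimal, hypothetical sketch of this signal-to-symbol process as a naming game. The paper gives no algorithm; the names (`Agent`, `negotiate_symbol`), the random-imitation alignment rule, and the three-letter signals are all assumptions made for illustration.

```python
import random

# Toy naming game: two agents exchange random signals for the same object
# until they emit the same one; that signal then becomes a shared symbol.

ALPHABET = "abcdefgh"

class Agent:
    def __init__(self, name):
        self.name = name
        self.lexicon = {}  # object -> agreed symbol ("permanent memory")

    def propose_signal(self, obj):
        # Reuse an already-adopted symbol; otherwise emit a novel random signal.
        if obj in self.lexicon:
            return self.lexicon[obj]
        return "".join(random.choices(ALPHABET, k=3))

    def adopt(self, obj, signal):
        # Store the signal as a symbol with (tentatively) shared meaning.
        self.lexicon[obj] = signal

def negotiate_symbol(a, b, obj, max_rounds=1000):
    """Exchange signals until both agents emit the same one for `obj`."""
    for _ in range(max_rounds):
        sa, sb = a.propose_signal(obj), b.propose_signal(obj)
        if sa == sb:  # agreement reached: the signal is now a symbol
            a.adopt(obj, sa)
            b.adopt(obj, sa)
            return sa
        # Hypothetical alignment rule: one agent imitates the other at random.
        if random.random() < 0.5:
            a.adopt(obj, sb)
        else:
            b.adopt(obj, sa)
    return None  # no agreement within the allotted rounds

alice, bob = Agent("A"), Agent("B")
print("agreed symbol for 'tree':", negotiate_symbol(alice, bob, "tree"))
```

Which particular signal wins is arbitrary, mirroring the arbor/дерево point above: the meaning lies in the agreement, not in the signal itself.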
- For consciousness to emerge, AI agents must have an internal state.
  - To infer a symbol of shared meaning among agents, the sender must be aware of, and understand, the meaning of that symbol as the receiver perceives it, without vagueness or ambiguity.
  - For a conscious entity to know what it is like to be the other, it must have an internal state in which it can reconstruct the meanings of the other AI agent's responses.
- For consciousness to emerge, AI agents must communicate their internal state of time-varying symbol manipulation through a language that they have co-created.
  - Three stages of development mark the AI agents' path towards consciousness (a toy illustration of the first two stages follows this list).
  - Stage 1: creation of what in human language we call a noun
    - Two agents should agree on a random signal to represent a static (time-invariant) object in their environment.
    - Once such an agreement is reached, the signal is turned into a symbol and must be moved into the AI agents' permanent memory, to be used in the future to refer to the same static object (e.g., signal X has become the symbol for car).
    - This process creates a symbol for what in human language we call a noun.
  - Stage 2: creation of what in human language we call a verb
    - Two agents should agree on a random signal, or a set of previously created symbols, to represent a dynamic (time-variant) concept related to an object in their environment (e.g., a decaying apple, a snoring cat).
    - Those symbols should be able to describe the changing state of an object.
    - This process creates symbols similar to what in human language we call verbs.
  - Stage 3: creation of new symbols by real-time manipulation
    - Two agents should use a set of previously created symbols, or a mixture of old symbols and novel signals, to express their time-varying internal state of symbol manipulation.
    - In this final stage, agents also communicate their own internal states and how they manipulate symbols in real time to create new symbols and their associated meanings.
    - According to the authors, once this third stage is observed, we can conclude that consciousness has emerged in the agents.
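Below is a toy illustration of stages 1 and 2 under the assumption of a trivial state representation; the helper names (`bind_noun`, `bind_verb`, `describe`) and the `Observation` record are invented for this sketch, since the paper describes the stages only in prose.

```python
from dataclasses import dataclass

# Stage 1 binds a symbol to a time-invariant object ("noun"); stage 2 binds
# a symbol to a time-variant transition between states ("verb").

@dataclass(frozen=True)
class Observation:
    obj: str    # e.g. "apple"
    state: str  # e.g. "fresh", "decayed"
    t: int      # discrete time step

noun_lexicon = {}  # object -> agreed symbol (stage 1, permanent memory)
verb_lexicon = {}  # (state_before, state_after) -> agreed symbol (stage 2)

def bind_noun(obj, symbol):
    """Stage 1: store an agreed symbol for a static object."""
    noun_lexicon[obj] = symbol

def bind_verb(before, after, symbol):
    """Stage 2: store an agreed symbol for a change of state over time."""
    verb_lexicon[(before, after)] = symbol

def describe(o1, o2):
    """Compose previously created symbols to describe a changing object."""
    noun = noun_lexicon.get(o1.obj)
    verb = verb_lexicon.get((o1.state, o2.state))
    return f"{noun} {verb}" if noun and verb else None

bind_noun("apple", "X")             # signal X has become the symbol for apple
bind_verb("fresh", "decayed", "Y")  # signal Y has become the symbol for decaying
print(describe(Observation("apple", "fresh", 0),
               Observation("apple", "decayed", 5)))  # -> "X Y"
```

Stage 3 would go beyond this sketch: the agents would have to express, in real time, how they themselves combine and manipulate such symbols, not merely label objects and transitions.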
- For the emergence of consciousness to be concluded, an onlooker should be able to observe the two agents reaching an agreement about at least one of their states of time-varying symbol manipulation.
  - For onlookers to conclude that they are observing conscious AI agents, they need to detect communications about the agents' internal states and how those states change over time.
  - To detect agents' communication about their internal states, the authors propose that independent onlookers should recognize an explicit (clearly understood) agreement about the meaning of the communication (a sketch of such a detector follows below).
  - E.g., two agents cooperatively completing a task they are not programmed to do; completing a task in such a manner can point to active agreements in the communication of intent and time-varying internal states between the agents.
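As a sketch of what such an independent onlooker might compute: the concrete criterion below (one consistent referent per signal, used by both agents, recurring a minimum number of times) is an assumption for illustration, since the paper states the requirement only in prose.

```python
from collections import defaultdict

def detect_agreements(log, min_uses=3):
    """log: externally observed (agent_id, signal, referent) tuples."""
    referents_by_signal = defaultdict(lambda: defaultdict(set))
    counts = defaultdict(int)
    for agent, signal, referent in log:
        referents_by_signal[signal][referent].add(agent)
        counts[(signal, referent)] += 1
    agreed = []
    for signal, refs in referents_by_signal.items():
        # An explicit agreement: the signal maps to exactly one referent,
        # both agents use it, and it recurs often enough to rule out chance.
        if len(refs) == 1:
            (referent, agents), = refs.items()
            if len(agents) >= 2 and counts[(signal, referent)] >= min_uses:
                agreed.append((signal, referent))
    return agreed

log = [("A", "zok", "tree"), ("B", "zok", "tree"), ("A", "zok", "tree"),
       ("B", "mip", "river"), ("B", "zok", "tree")]
print(detect_agreements(log))  # -> [('zok', 'tree')]
```

A real detector would also have to handle time-varying internal states, not just static referents, which is exactly what makes the third stage above the decisive observation.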
Service Implications
- With conscious AI in pursuit, new laws will be needed, along with concepts such as AI ethics and AI rights.
- Empathic AI may affect the job market for humans, especially at the higher end of the service industry.
- Empathic AI will also have the ability to change the human-AI relationship, as people come to trust AI advice and actions, even in hedonic tasks, over advice and actions from another human.
NOTE: The theory of mind also takes empathy to be the most critical indicator of a fully developed human consciousness; empathy has also been found to be positively and strongly correlated with trust in interpersonal relationships, in which people tend to trust each other's recommendations and advice.
Conclusion
The authors have introduced a theoretical framework that identifies the requirements by which consciousness can emerge in AI agents, along with an alternative aim for AI research and practice, contrary to the current dominant paradigm of creating machines that are linguistically indistinguishable from humans. More research is needed to develop refined technical criteria for recognizing the signs of emergent AI consciousness.
External References
- Mehta N, Mashour GA. General and specific consciousness: a first-order representationalist approach. Front Psychol. 2013;4:407. Published 2013 Jul 16. doi:10.3389/fpsyg.2013.00407