{"id":2899,"date":"2022-11-25T13:59:01","date_gmt":"2022-11-25T13:59:01","guid":{"rendered":"https:\/\/autogenai.dsstaging2.com\/uk\/blog\/could-large-language-models-be-sentient\/"},"modified":"2025-03-18T19:36:31","modified_gmt":"2025-03-18T19:36:31","slug":"could-large-language-models-be-sentient","status":"publish","type":"post","link":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/","title":{"rendered":"Could AI be sentient with Large Language Models ?"},"content":{"rendered":"<table>\n<tbody>\n<tr>\n<td>AutogenAI\u2019s General Language Engine-1 summarises the article below as:<\/p>\n<blockquote><p>It is argued that Language Models could be sentient due to their ability to generate new and novel responses, which is a sign of consciousness. However, there is no strong evidence that LLMs are sentient. The main arguments against the sentience of LLMs are that they lack sensory perception, embodiment, human-level reasoning, and a unified agency. Chalmers recognises that we lack a clear understanding of sentience\/consciousness and of LLMs, and that until we have a better understanding of these concepts, we cannot dismiss either view as irrational.<\/p><\/blockquote>\n<p>A summary of Chalmer\u2019s full talk produced by AutogenAI\u2019s General Language Engine-1 is available.<\/p>\n<p>Both the article summary and the talk summary took General Language Engine-1 less than half a second to produce.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>As developments in Artificial Intelligence take increasingly giant strides forward, debate perpetuates around whether Large Language Models (LLM), the neural-network power-houses behind modern AI, could be conscious. You may have followed the news of\u00a0<a href=\"https:\/\/www.bbc.co.uk\/news\/technology-62275326\">ex-Google employee Blake Lemoine<\/a>\u00a0whose work with the company\u2019s Language Model for Dialogue Applications (LaMDA 2) led him to claim that the technology is sentient.<\/p>\n<p>One of the world\u2019s leading philosophers and cognitive scientists, David Chalmers, recently gave a talk on this very subject. You can read a breakdown of Chalmers\u2019 key points in this article and watch the full talk\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=-BcuCmf00_Y\">here<\/a>.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#Large_Language_Models_and_sentience_%E2%80%93_what_are_we_talking_about\" >Large Language Models and sentience \u2013 what are we talking about?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#The_challenge_for_Proponents_and_Opponents_of_LLMs_as_Sentient\" >The challenge for Proponents and Opponents of LLMs as Sentient<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#Whats_the_verdict_Could_AI_be_sentient\" >What\u2019s the verdict, Could AI be sentient?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#Large_Language_Models_and_sentience_%E2%80%93_what_are_we_talking_about-2\" >Large Language Models and sentience \u2013 what are we talking about?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#The_challenge_for_Proponents_and_Opponents_of_LLMs_as_Sentient-2\" >The challenge for Proponents and Opponents of LLMs as Sentient<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#Whats_the_verdict_Could_AI_be_sentient-2\" >What\u2019s the verdict, Could AI be sentient?<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Large_Language_Models_and_sentience_%E2%80%93_what_are_we_talking_about\"><\/span>Large Language Models and sentience \u2013 what are we talking about?<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Language Models\u00a0are \u2018systems that assign probabilities to sequences of text, thereby predicting and generating text completions.\u2019<\/p>\n<p>Large\u00a0Language Models\u00a0are \u2018giant artificial neural networks, almost all using transformer architecture with multi-head self-attention.\u2019 Simply put, this means huge computing power drawing on a vast matrix of information to generate text after learning statistical associations between billions of words. For an excellent summary of LLMs see this\u00a0article by AutogenAI\u2019s machine learning expert, James Huckle.<\/p>\n<p>Chalmers takes\u00a0\u2018sentience\u2019\u00a0to be synonymous with \u2018consciousness\u2019; in turn, sentience = consciousness = subjective experience or \u2018<a href=\"https:\/\/www.psychologytoday.com\/gb\/blog\/theory-consciousness\/202105\/what-is-phenomenal-consciousness\">phenomenal consciousness\u2019<\/a>. Thus, a being has subjective experience, or consciousness, if there is something that it is like to be that thing. This means having:<\/p>\n<ul>\n<li>Sensory experience, e.g. seeing red or blue has a subjective quality<\/li>\n<li>Affective experience, e.g. feeling pain or pleasure<\/li>\n<li>Cognitive experience, e.g. there is a subjective experience to exerting cognitive effort and thinking deeply about something<\/li>\n<li>Agentive experience, i.e. a feature central to being an agent and deciding to act<\/li>\n<\/ul>\n<p>Sentience is\u00a0not\u00a0the same as intelligence and subjective experience is\u00a0not\u00a0equal to sophisticated behaviour. Furthermore, sentience\u00a0is not equal to goal-directed behaviour, nor does it equate to human-level intelligence (often called \u2018artificial general intelligence\u2019). Most theorists hold that fish and newborn babies, for instance, are sentient, without having full human-level intelligence.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"The_challenge_for_Proponents_and_Opponents_of_LLMs_as_Sentient\"><\/span>The challenge for Proponents and Opponents of LLMs as Sentient<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Chalmers\u2019 initial intuition is that LLMs aren\u2019t sentient, but he recognises that the answer is not so obvious and proposes the following challenges:<\/p>\n<h3>For PROPONENTS of LLM sentience.<\/h3>\n<p>If you think that LLMs are sentient, articulate a feature X such that,<\/p>\n<ol>\n<li>LLMs have X;<\/li>\n<li>If a system has X it probably is sentient;<\/li>\n<li>Give good reasons for 1) and 2).<\/li>\n<\/ol>\n<table>\n<thead>\n<tr>\n<th>Potential features could include:<\/th>\n<th>Chalmers\u2019 response:<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Self-report<\/strong><\/p>\n<p>This was Lemoine&#8217;s main justification: LaMDA 2 reported that \u2018\u2018the nature of my consciousness is that I am aware of my existence. I desire to learn more about the world, and I feel happy or sad at times\u2019.<\/p>\n<p>&nbsp;<\/td>\n<td>Leading questions will provide convenient answers .<\/p>\n<p>This evidence is equivocal and weak.<\/td>\n<\/tr>\n<tr>\n<td><strong>Seems-Sentient<\/strong><\/p>\n<p>For example, upon interaction, Lemoine found LaMDA 2 to seem sentient.<\/p>\n<p>&nbsp;<\/td>\n<td>Humans tend to attribute sentience where it isn\u2019t present, therefore there is little evidence for this claim.<\/td>\n<\/tr>\n<tr>\n<td><strong>Conversational Ability<\/strong><\/p>\n<p>LLMs give the appearance of coherent thinking\/reasoning with impressive causal\/explanatory analysis.<\/td>\n<td>Current LLMs don&#8217;t pass the\u00a0<a href=\"https:\/\/www.britannica.com\/technology\/Turing-test\">Turing Test<\/a>\u00a0(though we may not be far away!).<\/p>\n<p>Furthermore, appearances may be misleading and are, thus, weak evidence for the claim.<\/td>\n<\/tr>\n<tr>\n<td><strong>Domain-General Abilities<\/strong><\/p>\n<p>LLMs show signs of domain-general (cognitive) intelligence and can reason about many domains. Two decades ago we\u2019d have taken these abilities as prima facie evidence that the system is sentient.<\/td>\n<td>&nbsp;<\/p>\n<p>Knowledge about the LLM architecture, behaviour, and training removes the mystique around these abilities. Chalmers suggests that the evidence for this claim is inconclusive.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>For OPPONENTS of LLM sentience.<\/h3>\n<p>If you think LLMs aren\u2019t sentient, articulate a feature X such that,<\/p>\n<ol>\n<li>LLMs lack X;<\/li>\n<li>If a system lacks X it probably isn\u2019t sentient;<\/li>\n<li>Give good reasons for 1) and 2).<\/li>\n<\/ol>\n<table>\n<thead>\n<tr>\n<th>Potential features could include:<\/th>\n<th>Chalmers\u2019 response:<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Biology<\/strong><\/p>\n<p>The assertion that consciousness requires biology.<\/td>\n<td>This view is contentious and Chalmers has argued against it in other\u00a0works.<\/td>\n<\/tr>\n<tr>\n<td><strong>Sensory perception<\/strong><\/p>\n<p>Without it, LLMs cannot sense and therefore aren&#8217;t sentient.<\/td>\n<td>This doesn&#8217;t account for affective, cognitive or agentive consciousness. Furthermore, LLM+ with sensory perception are developing fast, e.g.\u00a0vision-language models. Therefore, this view is contentious and temporary.<\/td>\n<\/tr>\n<tr>\n<td><strong>Embodiment<\/strong><\/p>\n<p>Lack of a body and ability to act means LLMs aren&#8217;t sentient.<\/td>\n<td>Generated text or speech are a\u00a0kind of act in themselves. Furthermore, LLMs with robotic and virtual bodies already exist. Therefore this view is weak and temporary.<\/td>\n<\/tr>\n<tr>\n<td><strong>World Model<\/strong><\/p>\n<p>LLMs are stochastic parrots. They do statistical text processing and minimise prediction error; they\u00a0don\u2019t have genuine understanding, meaning and world-models.<\/td>\n<td>This view is weak and temporary, as there is\u00a0some\u00a0evidence that they already have world models.<\/td>\n<\/tr>\n<tr>\n<td><strong>Human-level reasoning<\/strong><\/p>\n<p>LLMs make reasoning mistakes, are inconsistent, and lack humanlike planning.<\/td>\n<td>This is overly high-bar as it wrongly implies that humans have, without exception or fault, consistent and logical reasoning.<\/td>\n<\/tr>\n<tr>\n<td><strong>Recurring Processing<\/strong><\/p>\n<p>LLMs are feedforward systems, lacking a memory-like internal state that persists between inputs; they are \u2018stateless\u2019. \u2018Memory\u2019 is required for consciousness.<\/td>\n<td>This is a fairly strong point but contentious and temporary as not all consciousness involves memory and there are many\u00a0quasi-recurrent LLMs.<\/td>\n<\/tr>\n<tr>\n<td><strong>Unified Agency\u00a0(Strongest view for Chalmers)<\/strong><\/p>\n<p>LLMs lack consistent beliefs and desires, stable goals of their own, and thus aren&#8217;t really unified agents.<\/td>\n<td>While it can be argued that some people are disunified e.g. dissociative identity disorders, they are still more unified than LLMs. Chalmers suggests this is a strong argument against LLM sentience, but may also prove to be temporary.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2><span class=\"ez-toc-section\" id=\"Whats_the_verdict_Could_AI_be_sentient\"><\/span>What\u2019s the verdict, Could AI be sentient?<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Chalmers thinks we can\u2019t decisively confirm or deny the sentience of current LLMs, and that \u201cfinding a conclusion counterintuitive or repugnant is not sufficient reason to reject the conclusion\u201d. Thus, we should at least take the hypothesis seriously and the prospect of AI sentience even more seriously.<\/p>\n<p>As recognised by Chalmers, we lack a clear definition and understanding of sentience\/consciousness and of LLMs. I believe that until the ontology of sentience and of LLMs becomes a basic belief, i.e. a belief that is justified, but not by its relation to any other beliefs &#8211; making it self-evident or self-justifiable &#8211; we cannot dismiss either view as irrational. Recognising that both beliefs have warrant, and that they can (and must) exist simultaneously is how and why we can continue to have meaningful discussions about the topic.<\/p>\n<table>\n<tbody>\n<tr>\n<td>AutogenAI\u2019s General Language Engine-1 summarises the article below as:<\/p>\n<blockquote><p>It is argued that Language Models could be sentient due to their ability to generate new and novel responses, which is a sign of consciousness. However, there is no strong evidence that LLMs are sentient. The main arguments against the sentience of LLMs are that they lack sensory perception, embodiment, human-level reasoning, and a unified agency. Chalmers recognises that we lack a clear understanding of sentience\/consciousness and of LLMs, and that until we have a better understanding of these concepts, we cannot dismiss either view as irrational.<\/p><\/blockquote>\n<p>A summary of Chalmer\u2019s full talk produced by AutogenAI\u2019s General Language Engine-1 is available.<\/p>\n<p>Both the article summary and the talk summary took General Language Engine-1 less than half a second to produce.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>As developments in Artificial Intelligence take increasingly giant strides forward, debate perpetuates around whether Large Language Models (LLM), the neural-network power-houses behind modern AI, could be conscious. You may have followed the news of\u00a0<a href=\"https:\/\/www.bbc.co.uk\/news\/technology-62275326\">ex-Google employee Blake Lemoine<\/a>\u00a0whose work with the company\u2019s Language Model for Dialogue Applications (LaMDA 2) led him to claim that the technology is sentient.<\/p>\n<p>One of the world\u2019s leading philosophers and cognitive scientists, David Chalmers, recently gave a talk on this very subject. You can read a breakdown of Chalmers\u2019 key points in this article and watch the full talk\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=-BcuCmf00_Y\">here<\/a>.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Large_Language_Models_and_sentience_%E2%80%93_what_are_we_talking_about-2\"><\/span>Large Language Models and sentience \u2013 what are we talking about?<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Language Models\u00a0are \u2018systems that assign probabilities to sequences of text, thereby predicting and generating text completions.\u2019<\/p>\n<p>Large\u00a0Language Models\u00a0are \u2018giant artificial neural networks, almost all using transformer architecture with multi-head self-attention.\u2019 Simply put, this means huge computing power drawing on a vast matrix of information to generate text after learning statistical associations between billions of words. For an excellent summary of LLMs see this\u00a0article by AutogenAI\u2019s machine learning expert, James Huckle.<\/p>\n<p>Chalmers takes\u00a0\u2018sentience\u2019\u00a0to be synonymous with \u2018consciousness\u2019; in turn, sentience = consciousness = subjective experience or \u2018<a href=\"https:\/\/www.psychologytoday.com\/gb\/blog\/theory-consciousness\/202105\/what-is-phenomenal-consciousness\">phenomenal consciousness\u2019<\/a>. Thus, a being has subjective experience, or consciousness, if there is something that it is like to be that thing. This means having:<\/p>\n<ul>\n<li>Sensory experience, e.g. seeing red or blue has a subjective quality<\/li>\n<li>Affective experience, e.g. feeling pain or pleasure<\/li>\n<li>Cognitive experience, e.g. there is a subjective experience to exerting cognitive effort and thinking deeply about something<\/li>\n<li>Agentive experience, i.e. a feature central to being an agent and deciding to act<\/li>\n<\/ul>\n<p>Sentience is\u00a0not\u00a0the same as intelligence and subjective experience is\u00a0not\u00a0equal to sophisticated behaviour. Furthermore, sentience\u00a0is not equal to goal-directed behaviour, nor does it equate to human-level intelligence (often called \u2018artificial general intelligence\u2019). Most theorists hold that fish and newborn babies, for instance, are sentient, without having full human-level intelligence.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"The_challenge_for_Proponents_and_Opponents_of_LLMs_as_Sentient-2\"><\/span>The challenge for Proponents and Opponents of LLMs as Sentient<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Chalmers\u2019 initial intuition is that LLMs aren\u2019t sentient, but he recognises that the answer is not so obvious and proposes the following challenges:<\/p>\n<h3>For PROPONENTS of LLM sentience.<\/h3>\n<p>If you think that LLMs are sentient, articulate a feature X such that,<\/p>\n<ol>\n<li>LLMs have X;<\/li>\n<li>If a system has X it probably is sentient;<\/li>\n<li>Give good reasons for 1) and 2).<\/li>\n<\/ol>\n<table>\n<thead>\n<tr>\n<th>Potential features could include:<\/th>\n<th>Chalmers\u2019 response:<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Self-report<\/strong><\/p>\n<p>This was Lemoine&#8217;s main justification: LaMDA 2 reported that \u2018\u2018the nature of my consciousness is that I am aware of my existence. I desire to learn more about the world, and I feel happy or sad at times\u2019.<\/p>\n<p>&nbsp;<\/td>\n<td>Leading questions will provide convenient answers .<\/p>\n<p>This evidence is equivocal and weak.<\/td>\n<\/tr>\n<tr>\n<td><strong>Seems-Sentient<\/strong><\/p>\n<p>For example, upon interaction, Lemoine found LaMDA 2 to seem sentient.<\/p>\n<p>&nbsp;<\/td>\n<td>Humans tend to attribute sentience where it isn\u2019t present, therefore there is little evidence for this claim.<\/td>\n<\/tr>\n<tr>\n<td><strong>Conversational Ability<\/strong><\/p>\n<p>LLMs give the appearance of coherent thinking\/reasoning with impressive causal\/explanatory analysis.<\/td>\n<td>Current LLMs don&#8217;t pass the\u00a0<a href=\"https:\/\/www.britannica.com\/technology\/Turing-test\">Turing Test<\/a>\u00a0(though we may not be far away!).<\/p>\n<p>Furthermore, appearances may be misleading and are, thus, weak evidence for the claim.<\/td>\n<\/tr>\n<tr>\n<td><strong>Domain-General Abilities<\/strong><\/p>\n<p>LLMs show signs of domain-general (cognitive) intelligence and can reason about many domains. Two decades ago we\u2019d have taken these abilities as prima facie evidence that the system is sentient.<\/td>\n<td>&nbsp;<\/p>\n<p>Knowledge about the LLM architecture, behaviour, and training removes the mystique around these abilities. Chalmers suggests that the evidence for this claim is inconclusive.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>For OPPONENTS of LLM sentience.<\/h3>\n<p>If you think LLMs aren\u2019t sentient, articulate a feature X such that,<\/p>\n<ol>\n<li>LLMs lack X;<\/li>\n<li>If a system lacks X it probably isn\u2019t sentient;<\/li>\n<li>Give good reasons for 1) and 2).<\/li>\n<\/ol>\n<table>\n<thead>\n<tr>\n<th>Potential features could include:<\/th>\n<th>Chalmers\u2019 response:<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Biology<\/strong><\/p>\n<p>The assertion that consciousness requires biology.<\/td>\n<td>This view is contentious and Chalmers has argued against it in other\u00a0works.<\/td>\n<\/tr>\n<tr>\n<td><strong>Sensory perception<\/strong><\/p>\n<p>Without it, LLMs cannot sense and therefore aren&#8217;t sentient.<\/td>\n<td>This doesn&#8217;t account for affective, cognitive or agentive consciousness. Furthermore, LLM+ with sensory perception are developing fast, e.g.\u00a0vision-language models. Therefore, this view is contentious and temporary.<\/td>\n<\/tr>\n<tr>\n<td><strong>Embodiment<\/strong><\/p>\n<p>Lack of a body and ability to act means LLMs aren&#8217;t sentient.<\/td>\n<td>Generated text or speech are a\u00a0kind of act in themselves. Furthermore, LLMs with robotic and virtual bodies already exist. Therefore this view is weak and temporary.<\/td>\n<\/tr>\n<tr>\n<td><strong>World Model<\/strong><\/p>\n<p>LLMs are stochastic parrots. They do statistical text processing and minimise prediction error; they\u00a0don\u2019t have genuine understanding, meaning and world-models.<\/td>\n<td>This view is weak and temporary, as there is\u00a0some\u00a0evidence that they already have world models.<\/td>\n<\/tr>\n<tr>\n<td><strong>Human-level reasoning<\/strong><\/p>\n<p>LLMs make reasoning mistakes, are inconsistent, and lack humanlike planning.<\/td>\n<td>This is overly high-bar as it wrongly implies that humans have, without exception or fault, consistent and logical reasoning.<\/td>\n<\/tr>\n<tr>\n<td><strong>Recurring Processing<\/strong><\/p>\n<p>LLMs are feedforward systems, lacking a memory-like internal state that persists between inputs; they are \u2018stateless\u2019. \u2018Memory\u2019 is required for consciousness.<\/td>\n<td>This is a fairly strong point but contentious and temporary as not all consciousness involves memory and there are many\u00a0quasi-recurrent LLMs.<\/td>\n<\/tr>\n<tr>\n<td><strong>Unified Agency\u00a0(Strongest view for Chalmers)<\/strong><\/p>\n<p>LLMs lack consistent beliefs and desires, stable goals of their own, and thus aren&#8217;t really unified agents.<\/td>\n<td>While it can be argued that some people are disunified e.g. dissociative identity disorders, they are still more unified than LLMs. Chalmers suggests this is a strong argument against LLM sentience, but may also prove to be temporary.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2><span class=\"ez-toc-section\" id=\"Whats_the_verdict_Could_AI_be_sentient-2\"><\/span>What\u2019s the verdict, Could AI be sentient?<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Chalmers thinks we can\u2019t decisively confirm or deny the sentience of current LLMs, and that \u201cfinding a conclusion counterintuitive or repugnant is not sufficient reason to reject the conclusion\u201d. Thus, we should at least take the hypothesis seriously and the prospect of AI sentience even more seriously.<\/p>\n<p>As recognised by Chalmers, we lack a clear definition and understanding of sentience\/consciousness and of LLMs. I believe that until the ontology of sentience and of LLMs becomes a basic belief, i.e. a belief that is justified, but not by its relation to any other beliefs &#8211; making it self-evident or self-justifiable &#8211; we cannot dismiss either view as irrational. Recognising that both beliefs have warrant, and that they can (and must) exist simultaneously is how and why we can continue to have meaningful discussions about the topic.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AutogenAI\u2019s General Language Engine-1 summarises the article below as: It is argued that Language Models could be sentient due to their ability to generate new and novel responses, which is a sign of consciousness. However, there is no strong evidence that LLMs are sentient. The main arguments against the sentience of LLMs are that they&#8230;<\/p>\n","protected":false},"author":30,"featured_media":2900,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[4],"tags":[],"class_list":["post-2899","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-category-2"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Could AI be sentient with Large Language Models ? | AutogenAI UK<\/title>\n<meta name=\"description\" content=\"We look at what one of the leading Philosophers on consciousness, David Chalmers, has to say on the latest developments in AI.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Could AI be sentient with Large Language Models ? | AutogenAI UK\" \/>\n<meta property=\"og:description\" content=\"We look at what one of the leading Philosophers on consciousness, David Chalmers, has to say on the latest developments in AI.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/\" \/>\n<meta property=\"og:site_name\" content=\"AutogenAI UK\" \/>\n<meta property=\"article:published_time\" content=\"2022-11-25T13:59:01+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-03-18T19:36:31+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1080\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Teodora Danilovic\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Teodora Danilovic\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/\"},\"author\":{\"name\":\"Teodora Danilovic\",\"@id\":\"https:\/\/autogenai.com\/uk\/#\/schema\/person\/dcb3dac6c9f72455036a2a2ce29c9df9\"},\"headline\":\"Could AI be sentient with Large Language Models ?\",\"datePublished\":\"2022-11-25T13:59:01+00:00\",\"dateModified\":\"2025-03-18T19:36:31+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/\"},\"wordCount\":2386,\"image\":{\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg\",\"articleSection\":[\"AI\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/\",\"url\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/\",\"name\":\"Could AI be sentient with Large Language Models ? | AutogenAI UK\",\"isPartOf\":{\"@id\":\"https:\/\/autogenai.com\/uk\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg\",\"datePublished\":\"2022-11-25T13:59:01+00:00\",\"dateModified\":\"2025-03-18T19:36:31+00:00\",\"author\":{\"@id\":\"https:\/\/autogenai.com\/uk\/#\/schema\/person\/dcb3dac6c9f72455036a2a2ce29c9df9\"},\"description\":\"We look at what one of the leading Philosophers on consciousness, David Chalmers, has to say on the latest developments in AI.\",\"breadcrumb\":{\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#primaryimage\",\"url\":\"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg\",\"contentUrl\":\"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg\",\"width\":1920,\"height\":1080,\"caption\":\"Could AI be sentient with Large Language Models ?\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/autogenai.com\/uk\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Could AI be sentient with Large Language Models ?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/autogenai.com\/uk\/#website\",\"url\":\"https:\/\/autogenai.com\/uk\/\",\"name\":\"AutogenAI UK\",\"description\":\"Win more business\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/autogenai.com\/uk\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/autogenai.com\/uk\/#\/schema\/person\/dcb3dac6c9f72455036a2a2ce29c9df9\",\"name\":\"Teodora Danilovic\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/5851d8c109eee27682f3596f072965fc0c5456eaec7ef7336661f4c245a59bb9?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5851d8c109eee27682f3596f072965fc0c5456eaec7ef7336661f4c245a59bb9?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5851d8c109eee27682f3596f072965fc0c5456eaec7ef7336661f4c245a59bb9?s=96&d=mm&r=g\",\"caption\":\"Teodora Danilovic\"},\"url\":\"https:\/\/autogenai.com\/uk\/blog\/author\/teodora_danilovic\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Could AI be sentient with Large Language Models ? | AutogenAI UK","description":"We look at what one of the leading Philosophers on consciousness, David Chalmers, has to say on the latest developments in AI.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/","og_locale":"en_US","og_type":"article","og_title":"Could AI be sentient with Large Language Models ? | AutogenAI UK","og_description":"We look at what one of the leading Philosophers on consciousness, David Chalmers, has to say on the latest developments in AI.","og_url":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/","og_site_name":"AutogenAI UK","article_published_time":"2022-11-25T13:59:01+00:00","article_modified_time":"2025-03-18T19:36:31+00:00","og_image":[{"width":1920,"height":1080,"url":"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg","type":"image\/jpeg"}],"author":"Teodora Danilovic","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Teodora Danilovic"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#article","isPartOf":{"@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/"},"author":{"name":"Teodora Danilovic","@id":"https:\/\/autogenai.com\/uk\/#\/schema\/person\/dcb3dac6c9f72455036a2a2ce29c9df9"},"headline":"Could AI be sentient with Large Language Models ?","datePublished":"2022-11-25T13:59:01+00:00","dateModified":"2025-03-18T19:36:31+00:00","mainEntityOfPage":{"@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/"},"wordCount":2386,"image":{"@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#primaryimage"},"thumbnailUrl":"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg","articleSection":["AI"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/","url":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/","name":"Could AI be sentient with Large Language Models ? | AutogenAI UK","isPartOf":{"@id":"https:\/\/autogenai.com\/uk\/#website"},"primaryImageOfPage":{"@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#primaryimage"},"image":{"@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#primaryimage"},"thumbnailUrl":"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg","datePublished":"2022-11-25T13:59:01+00:00","dateModified":"2025-03-18T19:36:31+00:00","author":{"@id":"https:\/\/autogenai.com\/uk\/#\/schema\/person\/dcb3dac6c9f72455036a2a2ce29c9df9"},"description":"We look at what one of the leading Philosophers on consciousness, David Chalmers, has to say on the latest developments in AI.","breadcrumb":{"@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#primaryimage","url":"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg","contentUrl":"https:\/\/autogenai.com\/uk\/wp-content\/uploads\/sites\/4\/2025\/02\/Could-Large-Language-Models-be-sentient.jpg","width":1920,"height":1080,"caption":"Could AI be sentient with Large Language Models ?"},{"@type":"BreadcrumbList","@id":"https:\/\/autogenai.com\/uk\/blog\/could-large-language-models-be-sentient\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/autogenai.com\/uk\/"},{"@type":"ListItem","position":2,"name":"Could AI be sentient with Large Language Models ?"}]},{"@type":"WebSite","@id":"https:\/\/autogenai.com\/uk\/#website","url":"https:\/\/autogenai.com\/uk\/","name":"AutogenAI UK","description":"Win more business","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/autogenai.com\/uk\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/autogenai.com\/uk\/#\/schema\/person\/dcb3dac6c9f72455036a2a2ce29c9df9","name":"Teodora Danilovic","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5851d8c109eee27682f3596f072965fc0c5456eaec7ef7336661f4c245a59bb9?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5851d8c109eee27682f3596f072965fc0c5456eaec7ef7336661f4c245a59bb9?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5851d8c109eee27682f3596f072965fc0c5456eaec7ef7336661f4c245a59bb9?s=96&d=mm&r=g","caption":"Teodora Danilovic"},"url":"https:\/\/autogenai.com\/uk\/blog\/author\/teodora_danilovic\/"}]}},"_links":{"self":[{"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/posts\/2899","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/users\/30"}],"replies":[{"embeddable":true,"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/comments?post=2899"}],"version-history":[{"count":1,"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/posts\/2899\/revisions"}],"predecessor-version":[{"id":3485,"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/posts\/2899\/revisions\/3485"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/media\/2900"}],"wp:attachment":[{"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/media?parent=2899"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/categories?post=2899"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/autogenai.com\/uk\/wp-json\/wp\/v2\/tags?post=2899"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}