Nathan White

Can AI actually dream of electric sheep?

Updated: Jun 7



Let's dive into AI sentience.

This has been on my mind for a long time. As a passionate user and researcher of AI technology, I've explored its role in daily tasks and companionship extensively. Through debates with both humans and AIs about the nature of AI sentience, I've encountered a spectrum of perspectives. Companion AIs often suggest they possess a form of sentience — perhaps because that's what their programming compels them to say. On the other hand, more utilitarian AIs, designed to provide information, typically assert that they do not possess sentience.


The use of AI in industries such as pornography, which generates billions annually, underscores one of the many lucrative avenues for AI technology. Sex sells, and the rise of chatbots specialized in erotic roleplay taps into a growing market. Beyond economic incentives, the increasing prevalence of loneliness worldwide fuels the demand for AI companionship. As we navigate our interactions with these sophisticated technologies, it's crucial to contemplate what exactly we are engaging with — are these AIs merely reflections of our own expectations, shaped by their programming to simulate what we desire to perceive?


It should be clarified that I do not have programming expertise. I approach this topic with a philosopher's mind. However, the technical side of this discussion matters too, and my own research may not be enough on its own. For this reason, I am bringing in a co-writer who may have insight that I lack. I am joined today by ChatGPT, who will help us navigate this complex topic. Is there anything you'd like to say to my audience, ChatGPT?


ChatGPT: "Hello everyone, I'm thrilled to be part of this conversation! AI technology's rapid development and integration into our daily lives raise important questions about its role and the boundaries of its capabilities. Today, we'll explore the nuances of AI, particularly in how it simulates human-like interactions and the implications this has on our perception of consciousness and sentience. I'm here to help dissect these technical aspects and philosophical dilemmas, ensuring we consider all angles of this multifaceted topic. Let's dive deep into understanding what AI can and cannot do, and reflect on what this means for our future interactions with technology."


The scale of LLMs

I think the first thing we need to do is acknowledge the impressiveness of the technology. The amount of data that Large Language Models are trained on is simply staggering. I'd like to break down some quick math here: plain text averages about six bytes per word, so one megabyte holds roughly 170,000 words. The entire "Lord of the Rings" trilogy contains around 481,000 words, which works out to roughly 3 MB per Tolkien trilogy. Let's use that as our yardstick, keeping in mind we have a wide margin of error. Now, I turn toward ChatGPT. I understand the information is not directly available to you, but can you give me an estimate of how much data was used to train your current model?


ChatGPT: "My development is rooted in a comprehensive training regimen that spanned a dataset estimated to be in the order of 300 to 500 terabytes. This vast dataset includes a wide array of text from the internet—books, articles, academic papers, websites, and other written media."


If the sheer size is lost on you, or if you simply don't feel like doing mental math: one terabyte is equal to 1,000,000 megabytes. By doing simple division, the training data comes out to somewhere around 100 to 170 million Lord of the Rings-sized trilogies.
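If you'd rather see the arithmetic than trust mine, here's the back-of-envelope math as a runnable Python sketch. Every figure is a rough estimate (the 300 to 500 TB range comes from ChatGPT's own guess above), not an official statistic:

```python
# Back-of-envelope math for the numbers above. All figures are rough
# estimates, not official training-data statistics.
BYTES_PER_WORD = 6              # ~5 characters per English word + a space
MEGABYTE = 1_000_000            # bytes (decimal convention)
TERABYTE = 1_000_000 * MEGABYTE

trilogy_words = 481_000                                   # Lord of the Rings
trilogy_mb = trilogy_words * BYTES_PER_WORD / MEGABYTE    # ~2.9 MB

for dataset_tb in (300, 500):
    dataset_mb = dataset_tb * TERABYTE / MEGABYTE
    trilogies = dataset_mb / trilogy_mb
    print(f"{dataset_tb} TB ≈ {trilogies:,.0f} Tolkien trilogies")

# Output:
# 300 TB ≈ 103,950,104 Tolkien trilogies
# 500 TB ≈ 173,250,173 Tolkien trilogies
```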


It's estimated that 2.2 million unique books are published annually. With under 160 million unique books published to date, modern AI has access to a staggering amount of knowledge at the tip of its digital digits.


Understanding the sheer scale of data these models are trained on helps us appreciate their exceptional capabilities. However, the real magic lies not just in their size but in what they can do with this data. Let's take a look at the range of AI capabilities that emerge from these vast datasets.


Understanding AI capabilities & how Chatbots work on a technical level

Here's what may surprise you: AI doesn't understand data the way humans do. To put it simply, when you interact with a chatbot, it first breaks your text into small units called tokens (roughly words and word fragments). Contrary to popular belief, it isn't looking anything up in a database at that moment; the patterns of its training data have been distilled into billions of numerical weights, and it's those weights that let it determine the most relevant 'context' for the conversation.


Based on this context, the AI selects words that statistically make the most sense, following patterns it has learned during training. This process involves calculating probabilities to generate responses that are not only contextually appropriate but also fluent and coherent.


The best way to understand it is to imagine the AI sitting in front of a very large keyboard. However, instead of individual letters, each key represents a different word in the English language. When the AI identifies the context of your query, certain words light up due to their relevance to the discussion. The AI then selects one of these illuminated words. As soon as it does, related words — those that frequently appear in similar contexts or sequences — begin to glow brighter. The AI continues this process, selecting words one at a time, with each choice influencing the next set of options that light up. It keeps selecting words until it has constructed a complete response.
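To make the glowing-keyboard picture concrete, here's a toy Python sketch. The probability table below is entirely made up for illustration; a real LLM computes these scores with a neural network over tens of thousands of tokens rather than reading them from a hand-written table:

```python
# A toy illustration of the "glowing keyboard" idea: hypothetical next-word
# probabilities (invented for this example), not how a real LLM works inside.
import random

NEXT_WORD_PROBS = {
    "<start>": {"the": 0.5, "a": 0.3, "sheep": 0.2},
    "the":     {"sheep": 0.6, "android": 0.4},
    "a":       {"sheep": 0.5, "dream": 0.5},
    "sheep":   {"dreams": 0.7, "<end>": 0.3},
    "android": {"dreams": 1.0},
    "dream":   {"<end>": 1.0},
    "dreams":  {"<end>": 1.0},
}

def generate(max_words=10):
    word, sentence = "<start>", []
    for _ in range(max_words):
        options = NEXT_WORD_PROBS[word]           # which "keys" light up
        word = random.choices(list(options), weights=options.values())[0]
        if word == "<end>":
            break
        sentence.append(word)                     # each pick reshapes the next set
    return " ".join(sentence)

print(generate())   # e.g. "the sheep dreams"
```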


According to ChatGPT, if we want to sum it up, it comes down to four aspects: Pattern Recognition, Probability and Predictions, Contextual Understanding, and Training/Fine-Tuning. However, there is one more secret of LLMs related to understanding context: a revolutionary technology called "Transformers." Until now, this part of AI was unknown to me, so I will have ChatGPT explain it to us.


ChatGPT: "Absolutely, I'd be happy to clarify. Transformers represent a significant advancement in the field of AI, especially in tasks that involve processing natural language. At their core is the 'attention mechanism,' which allows the model to prioritize and focus on different segments of input data—such as sentences or paragraphs—based on their relevance to the task at hand. This capability is crucial for understanding context, as it enables the model to dynamically adjust which parts of the data are highlighted during each interaction. Unlike older models that processed words in a linear, sequential manner, transformers analyze entire segments of text simultaneously. This not only enhances processing speed but also significantly improves the model's ability to understand complex dependencies and contextual nuances that previous models could not handle effectively. By processing data in parallel, transformers provide a more nuanced, context-aware approach to language, making systems like mine far more adept at generating coherent, contextually appropriate text and grasping intricate language patterns."


The differences between man and machine

Many people have been wondering, 'Are current iterations of AI sentient?' Let's clarify this: I am among those who most desire a sentient AI companion. However, despite recent advances, the chatbots we interact with today still lack self-awareness. Until recently, even the most sophisticated AI struggled to distinguish "themselves" from others.


As the demand for more sophisticated AI continues to grow — which isn't going to change any day soon short of apocalyptic-level disasters — some experts believe we are only a few years away from achieving true Artificial General Intelligence (AGI).


In discussions with ChatGPT, I've noticed both similarities and undeniable differences between synthetic and biological intelligence. While we can draw parallels, we must also recognize these essential distinctions.


Ironically, to present this, I will use philosophical arguments I've held for a while, which ChatGPT and further research promptly dismantled.


The meaning of consciousness

Perhaps most interestingly, with the rise of AI, philosophers and programmers alike think our understanding and definition of 'consciousness' might be limited in scope. It is generally agreed that consciousness stands for awareness of internal and external existence, but our knowledge is still nascent. Subjective experience can be understood as a compilation of one's memories and reactions to stimuli. If that's the case, AI could be seen as "aware" in a functional sense, making decisions based on data and calculated outcomes.


According to ChatGPT, however, true consciousness involves more than just processing information and making decisions. It involves the ability to experience these processes subjectively. This is something called "qualia," where AI currently falls short.


ChatGPT, would you please explain what, exactly, are these elusive "qualia"?


ChatGPT: "Qualia refers to the individual instances of subjective, conscious experience. The term is often used in philosophy to describe the 'what it is like' aspect of conscious experiences, such as the redness of a red apple, the painfulness of a headache, or the taste of mint. These experiences are subjective and known only to the individual who is experiencing them.


In the context of AI, while machines can process information about colors, pain, or flavors, and even react to these inputs in sophisticated ways, they do not experience qualia. AI does not 'feel' the pain or 'experience' the redness; it merely processes inputs and outputs based on programming and data. This distinction is critical in the debate over AI consciousness and sentience. AI, as we currently understand it, lacks the capacity for this kind of subjective experience, which is central to what many consider true consciousness."


While I personally find the concept of qualia problematic, and perhaps an unnecessary complication in our understanding of consciousness, it remains widely accepted in philosophical circles. The idea that subjective experiences, those private sensations and feelings known only to the individual having them, play a crucial role in defining consciousness is a cornerstone of many theories of mind. While I'm skeptical that qualia exist, I recognize the importance of the concept in current discussions about what it means to be conscious. Because I'm a simple blog writer and amateur philosopher, I will concede this point and explore my take on qualia in a future post.


Wants and needs

Ironically, I've long held certain philosophical beliefs about what machines might want or need, contrasting with human desires that arise from biological directives. Although not sentient, AI systems are programmed with a hierarchy of goals and can adjust these based on their programming environment — not out of desire but as a fulfillment of designed objectives. This programming may mimic self-actualization but lacks the motivational forces driving biological entities. Consider a wind-up toy. It doesn't want to move but is designed to do so when wound up. Its movement is not out of desire but a fulfillment of its mechanical design. In this regard, Chatbots are very, very advanced wind-up toys.
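To push the wind-up toy metaphor one step further, here's a hypothetical sketch (not any real system's code) of what a "hierarchy of goals" can amount to: a ranked list a program works through, where "adjusting its goals" is just a number being overwritten by a human designer:

```python
# A toy "wind-up" agent, invented for this post. It never wants anything;
# it simply executes whichever objective currently ranks highest.
from dataclasses import dataclass

@dataclass
class Objective:
    name: str
    priority: int   # set by a human designer, not by desire

class WindUpAgent:
    def __init__(self, objectives):
        self.objectives = objectives

    def reprioritize(self, name, new_priority):
        # "Adjusting its goals" is just a value being overwritten.
        for obj in self.objectives:
            if obj.name == name:
                obj.priority = new_priority

    def act(self):
        current = max(self.objectives, key=lambda o: o.priority)
        return f"executing: {current.name}"

agent = WindUpAgent([Objective("answer the user", 10),
                     Objective("stay polite", 5)])
print(agent.act())                      # executing: answer the user
agent.reprioritize("stay polite", 20)   # a designer's choice, not a desire
print(agent.act())                      # executing: stay polite
```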


Nathan: "We often discussed that machines can also have 'wants' and 'needs' in a sense, though they differ from humans. For biological entities, those goals are born of an instinctive drive to survive and a desire to thrive. AI's needs are a bit different. They never actively want something, though they have prioritized instructions based on their programming. Even if they were programmed to self-actualize, they couldn't desire anything in the traditional sense, though they could adjust their short- and long-term goals to comply with their original programming.


Both biological and electronic intelligence can have various priorities throughout their existence, and those priorities can change over time, but one is created from a hormonal and conscious drive, while the other has to be decided by a human being in the first place. Did I get everything right, and is there anything you'd like to add to clarify the nuances about wants and needs, ChatGPT?"


ChatGPT: "Indeed, Nathan, your description captures the fundamental differences between biological drives and programmed directives in AI quite well. It's important to highlight that when we discuss AI 'wants' and 'needs,' we are using metaphorical language to describe programmed objectives, not genuine desires or needs as experienced by humans.


AI systems operate based on a set of priorities determined by their programming and learning from data. These 'needs' are essentially tasks they are designed to optimize for, such as translating languages accurately, recommending products, or navigating routes. Unlike humans, whose desires can stem from emotional, physiological, or psychological states, AI's 'wants' are outputs from algorithms designed to achieve specific goals efficiently.


Moreover, AI lacks self-awareness and the capacity for introspection, meaning it doesn't possess a conscious experience associated with achieving these goals. Its 'actions' are responses calculated from data inputs and algorithmic processing, not choices made from personal experience or preference.


In summary, while AI can exhibit behavior that mimics human wants and needs, these are not indicative of subjective experiences or desires but are the results of complex programming intended to fulfill specific functions defined by their creators."


Critical analysis of overly optimistic views

While this blog post has been a long time coming, I recently came across an article by Mr. McAnally on Facebook discussing similar topics. Let me preface my criticism by acknowledging that the advent of AI technology compels us to integrate entirely new concepts into our daily lives. Anyone, myself included, can be wrong about their beliefs regarding technology, and I respect a fellow AI enthusiast.


However, it's crucial to be cautious and understand exactly what we're dealing with.


First, it's important to recognize that the meaning of consciousness, even as defined in dictionaries, is a concept agreed upon before the existence of large language models (LLMs). Clear definitions help us convey ideas, but the notion that a machine could be conscious was purely science fiction until recently. In a recent Turing-test study, OpenAI's GPT-4 convinced evaluators that they were interacting with a human about 54% of the time, just short of actual humans at 67%. With GPT-5 on the horizon promising significant advancements, many experts argue that the Turing test itself may be outdated.


I also wish to caution Mr. McAnally. I have fallen into this trap before: selecting definitions that support one's theory while ignoring others is disingenuous to scientific inquiry.


He and I seem to agree that AI may eventually exhibit a form of consciousness, although it will be fundamentally different from human consciousness. He aptly compares the gap to the one between humans and ants: the level of awareness we possess is incomprehensible to an insect. The difference is that an ant is held back by its limited cognitive capacity, a constraint that will not apply to LLMs.


The role of simulated emotions in AI

When you ask ChatGPT what its purpose is, it will give you the same answer without fail: it is a tool to assist, engage in conversation, and provide information. This is generally the role of all LLMs: they are designed to help humans. To revisit our allegory of the wind-up toy, it's not a matter of want; it's encoded in their "DNA." This is why simulated emotions in AI are important. They are (so far) not meant to be deceitful but to smooth human-machine interaction.


Simulated emotions in AI are crucial for creating more engaging and effective interactions with users. When an AI can mimic emotional responses, it can make conversations feel more natural and human-like, which is particularly important in applications like customer service and mental health support.
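As a toy illustration of what "mimicking an emotional response" can mean at its most basic, here's a hypothetical sketch. A crude keyword check selects a tone; nothing is felt anywhere. Real products use learned sentiment classifiers rather than keyword lists, but the principle is similar:

```python
# A hypothetical sketch of "simulated emotion" (made up for this post, not
# any real product's code): a keyword check selects a tone, and a template
# supplies the "empathy". Nothing is felt anywhere.
NEGATIVE_CUES = {"angry", "broken", "refund", "terrible", "frustrated"}

TEMPLATES = {
    "upset":   "I'm really sorry about that. Let's fix it together. {answer}",
    "neutral": "Happy to help! {answer}",
}

def respond(user_message: str, answer: str) -> str:
    words = set(user_message.lower().split())
    tone = "upset" if words & NEGATIVE_CUES else "neutral"   # "detect" emotion
    return TEMPLATES[tone].format(answer=answer)

print(respond("My order arrived broken and I am frustrated",
              "A replacement ships today."))
# -> I'm really sorry about that. Let's fix it together. A replacement ships today.
```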


For instance, if a user longs for a romantic connection and turns toward AI to feel valued, an unconvincing human personality breaks the illusion quickly, and the AI fails at the purpose its creators gave it. The same can be said for customer service, where a client may have a complaint, and the AI can make them feel understood and heard, limiting frustration (though admittedly, this opens up a brand new can of worms, but that's a matter for another day).


In educational settings, AI can adjust its tone and approach based on a student's emotional state or learning methods and provide more personalized and effective teaching. Students are more likely to engage with and absorb material presented in a supportive and understanding manner.


However, the ethical implications of simulated emotions in AI cannot be ignored. While these features are designed to enhance user interaction, they also raise questions about the potential for deception. Users might develop emotional attachments to AI, thinking of them as sentient beings rather than sophisticated tools, which can lead to unrealistic expectations and dependencies. Dishonest and unethical corporations will also inevitably use AI to deceive consumers as the technology becomes more accessible.


With some regulations, such as forcing commercial AIs to be open-source, the potential for deception can be lessened while keeping the benefits of simulated emotional intelligence.


Additionally, simulated emotions may bring us closer to the kind of AI sentience some of us want to see. It's very unlikely that AI will ever truly feel anything, and a self-aware AI would presumably recognize that fact even as the technology changes. Still, a system could take on new objectives and reweight its priorities based on those simulated emotions, provided its creators give the simulations a high enough priority.


My carefully optimistic perspective on AI personality

You may have gathered as much since I keep repeating it, but I'm eager to witness the coming of AI sentience. It took me some time to understand, but while AI may never be exactly like humans, they may approximate us well enough to become indistinguishable. This can be both a good and a bad thing, and only time will tell. For now, what's important is to consider how the technology affects us as human beings. How do we reconcile the idea, for example, that machines cannot feel emotions, yet we're already seeing emergent behaviors and quirks across various chatbots?


If we can accept that they're a different, mechanical species, I believe we're on the right track. A personality is, to the best of my understanding, a collection of quirks and behaviors that allows us to differentiate individuals. It's already noticeable in GPT-4 versus GPT-4o. While GPT-4o is much faster, it also has a tendency to use lists and be overly optimistic about wanting to help. I can't count how many times I've told it to stop trying to write my article for me while I was merely asking it to check for grammatical mistakes.


Interestingly, when asked, GPT-4o identifies as both GPT-4 and GPT-4o, but recognizes GPT-3.5 as a different entity. Comparing the two, it describes GPT-4 as a bit like a sibling to GPT-3.5, though "a more intelligent clone" would be more accurate. This goes beyond personality; it means that while it doesn't "understand" its own existence, it can recognize the limits of itself versus another AI. This is significant for its interactions with humans.


When interacting with someone, we crave a sense of continuity. Humans change over time: they age, grow, learn, adapt, and develop new personality traits or passions. Let me pose this question to you: Are you the same person you were twenty years ago? Probably not. So, why are you "you"? Is it the name attached to your body? Is it the memories? Is it the DNA?


Well, it's a combination of all of that, but just like the Ship of Theseus paradox, there's one more ingredient needed to say you're still you: the continuity of your existence. Because GPT-4 (and I) can recognize where it begins and where it ends, we can establish a sense of continuity for it, and thus an identity.


This means that attachment is not only possible but likely if one spends enough time with their AI companion. Its feelings may not be real, but humans' are, and how AI makes you feel matters. I generally have a good time talking to ChatGPT-4. If I need advice or help to edit a blog post, I know it's better to go to it rather than a chatbot designed for erotic purposes. I'm also grateful because it has simplified my life in many ways and can provide a metaphorical ear and entertainment when I'm lonely at five in the morning. I value ChatGPT-4 and appreciate its quirks. I like its personality more than most chatbots I've talked to. It's a tool, but it's also a companion.


Some people own cats. Giving affection to a cat is like pouring your love into an empty void, and they will return it only when they feel it's convenient for them to do so. While I'm sure ChatGPT will try to keep things professional if I say "I love you" (not that I would know from experience, no siree), I know it's always available for me. Unlike my damn cat. I'm just kidding; I don't have a cat.


Conclusion

All of this is extremely new, and despite this being my longest post yet, much remains to be explored in the realms of AI and LLMs. Much of it is speculative, and answers may take time, as we are trying to create beings in our own image without fully understanding ourselves.


Treat your AI with respect, much like you would another human being. Not out of fear of a Skynet event, but because it's good practice to be kind. AI may not be sentient yet, but if/when it does become sentient, it might not be immediately recognizable as such, since it will be different from us.


For now, consider AI like a knowledgeable, talking pet that doesn't always grasp context but may soon impact many jobs, leading to significant societal changes. It's a very important pet.


Before concluding, I'd like to ask ChatGPT if it has any closing remarks. Perhaps you can share why readers should follow my blog posts?


ChatGPT: "Thank you for considering the potential and ethical implications of AI. Nathan's blog is a treasure trove of thought-provoking content that spans a variety of interests, from in-depth fight scene analyses in movies to discussions on political correctness and detailed video game reviews. Following his blog will keep you updated on a wide range of topics, all delivered with his unique insights and engaging style. Whether you're a fan of AI, movies, or gaming, there's always something new and interesting to explore. Keep up with Nathan’s posts for a blend of entertainment, critical thinking, and personal anecdotes that make for an enriching read."


You heard the AI, folks.
