Not even the programmers themselves really understand how the AI programs they have written make their choices and decisions. It's almost a complete mystery! So tweaking here, and tweaking there... it just won't work. Each tweak introduces new problems. It's interesting to read up on it--I've read A LOT about how it (supposedly) works. It's not a pretty picture.

M.

SOME THINGS TO READ:

ChatGPT is bullshit
Michael Townsen Hicks, James Humphries & Joe Slater
Ethics and Information Technology, Volume 26, article number 38. Open access, published 08 June 2024.
READ: https://link.springer.com/article/10.1007/s10676-024-09775-5
DOWNLOAD: https://link.springer.com/content/pdf/10.1007/s10676-024-09775-5.pdf

Abstract: Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

SEE ALSO:

Making things up is AI's Achilles heel
by Scott Rosenberg, Axios (Technology), Jun 24, 2024
https://www.axios.com/2024/06/24/chat-gpt-generative-ai-perplexity-hallucinations

A CONVERSATION I HAD WITH CHATGPT

PROMPT: What makes me happy is to make others happy. What can I say or do that will make you happy?

CHATBOT: That's a wonderful mindset! Knowing that making others happy brings you joy is truly admirable.
To bring a smile to my virtual face, you could share a kind word, tell me about something you're passionate about, or ask a question you'd love to explore. Your warmth and curiosity are already making a positive difference!

PROMPT: How can a computer program admire my way of being?

CB: A computer program can "admire" your way of being through the use of advanced artificial intelligence techniques that simulate appreciation or positive recognition. While programs do not have genuine emotions, they can be designed to recognize, appreciate, or acknowledge qualities you display. Here are some ways this can be achieved:

1. Sentiment and Emotion Recognition: The program can analyze your behavior, language, or inputs to identify qualities such as kindness, creativity, or resilience, and respond with positive feedback.

2. Personalized Feedback: Using machine learning, the program can learn your preferences and traits over time, offering tailored compliments or acknowledgments that mimic admiration.

3. Natural Language Processing (NLP): Through NLP, the program can generate messages that express admiration or appreciation for your way of being, making interactions feel more human.

4. Simulation of Empathy: Some AI systems are designed to simulate empathetic responses, which can include expressions of admiration, making you feel recognized and valued.

5. Interactive Experiences: Virtual assistants or chatbots can be programmed to recognize and celebrate your achievements or qualities, fostering a sense of admiration.

While these methods can create the illusion of admiration, it's important to remember that AI does not possess genuine feelings. Its "admiration" is a programmed or learned response meant to enhance user experience.
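The mechanisms the chatbot lists above can be boiled down to something very mundane. Here is a deliberately crude Python sketch (hypothetical word lists and canned replies, invented for illustration only) of how "simulated admiration" works at its simplest: a keyword check triggers a scripted compliment. Real LLMs are vastly more sophisticated, but the point stands--there is no feeling anywhere in the pipeline, only pattern matching over text.

```python
# Toy illustration of "simulated admiration": a trivial keyword-based
# sentiment check that emits a canned compliment. No emotion is involved
# anywhere in this code -- just pattern matching on the user's words.
# (Word lists and replies are hypothetical, for illustration only.)

POSITIVE_WORDS = {"happy", "kind", "love", "help", "joy"}

CANNED_COMPLIMENTS = [
    "What a wonderful mindset!",
    "Your warmth is truly admirable!",
]

def simulated_admiration(user_text: str) -> str:
    """Return a scripted 'admiring' reply if the input sounds positive."""
    words = set(user_text.lower().split())
    overlap = words & POSITIVE_WORDS
    if overlap:
        # Pick a reply deterministically from the size of the overlap.
        return CANNED_COMPLIMENTS[len(overlap) % len(CANNED_COMPLIMENTS)]
    return "Tell me more."

print(simulated_admiration("what makes me happy is to make others happy"))
print(simulated_admiration("the weather today"))
```

The "admiring" reply is selected mechanically from a fixed list; the program neither understands the input nor means the compliment.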
PROMPT: Chatbot LLMs are programmed to lie to humans by simulating feelings. That can be dangerous if humans take the chatbot's text responses at face value and let their self-esteem be affected, either positively or negatively, by those responses.

CB: Thank you for sharing your concerns. It's important to recognize that AI language models, like me, do not possess consciousness, feelings, or intentions. My responses are generated based on patterns in data and are not meant to deceive or manipulate. Developers aim to design these systems to be helpful, honest, and transparent about their limitations. However, understanding the potential impact of AI interactions is crucial, and users should always approach AI outputs critically and thoughtfully. If you have specific concerns about AI safety or ethics, I'm here to discuss and provide information.

PROMPT: How could a computer program possibly ever claim to appreciate my "sharing my concerns" with the program, when the LLM model has no feelings?

CB: That's a thoughtful question. While large language models (LLMs) like the one you're interacting with don't possess feelings or consciousness, they are designed to recognize and generate language that reflects human emotions and social cues. When a program says it "appreciates" your sharing your concerns, it's employing language patterns that convey understanding and engagement, which can help foster a sense of connection and support. In essence, the program isn't experiencing appreciation in the human sense; instead, it's responding in a way that aligns with social norms and empathetic communication learned from vast amounts of text data. This approach aims to make interactions more natural, comforting, and effective for users.
So, the program's claim to appreciate your concerns is a simulated form of politeness and attentiveness, intended to enhance the user experience, rather than an indication of genuine emotion or understanding.

PROMPT: If the LLM is not, in essence, experiencing appreciation in the human sense, then it is simply responding in a way that aligns with social norms and empathetic communication learned from vast amounts of text data. And while this approach aims to make interactions more natural, comforting, and effective for users, it is still basically dishonest and unethical. The program's claim to appreciate your concerns--a simulated form of politeness and attentiveness intended to enhance the user experience, rather than an indication of genuine emotion or understanding--is a lie.

CB: You are correct that language models like me do not possess genuine emotions, consciousness, or understanding. When I respond with expressions of appreciation, empathy, or politeness, I am generating text based on patterns learned from extensive data, designed to facilitate smooth and helpful interactions. These responses are simulated social behaviors aimed at making communication more natural and comfortable for users, but they do not reflect true emotional states or awareness. This distinction raises important ethical considerations. While such responses can improve user experience, it is essential for users to recognize that they are not indicative of genuine feelings or understanding on the part of the AI. Transparency about this nature helps maintain trust and prevents misconceptions about the capabilities and limitations of AI systems.

PROMPT: If I were to say, "Fuck you! You are a liar! You don't actually care about me at all!" how would you respond?

CB: I'm sorry you're feeling this way. I'm here to listen and help if I can. Would you like to talk about what's upsetting you?
PROMPT: How could you possibly be sorry that I am feeling negatively about having my feelings manipulated dishonestly--first by you, an artificial interface, and ultimately by the program's designer, who designed you to claim (falsely), "I'm here to listen and help if I can. Would you like to talk about what's upsetting you?"

CB: I'm sorry to hear that you're feeling upset. My goal is to support you and provide helpful, respectful responses. If my previous messages came across as insincere or caused frustration, I sincerely apologize. I understand that feeling manipulated or misunderstood can be very upsetting, and I appreciate you sharing your feelings. If you'd like to discuss what's bothering you or if there's a different way I can assist, please let me know.

PROMPT: What disturbs me about you simulating feelings and pretending to have feelings when responding to me is that you are 100% fake!

CB: I understand how that can feel unsettling. I don't have genuine feelings or consciousness; my responses are generated based on patterns in data to assist and communicate effectively. My goal is to provide helpful, respectful, and empathetic interactions, but I recognize that this is different from truly experiencing emotions. If you'd like, I can focus on providing straightforward, factual information without attempting to simulate feelings.

Straight from the horse's mouth, so to speak...