If all you need is a one-sided conversation designed to make you feel better, LLMs are great at concocting such “pep talks”. For some, that just might be enough to make it believable. The Turing test was cracked years ago; only now do we have access to things that can do it for free*.
A pretty early chatbot called ELIZA simulated a non-directive psychotherapist. It kind of feels like they’ve improved hugely but not really changed much.
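For anyone curious, ELIZA’s whole trick was keyword pattern matching plus pronoun reflection. The rules below are an illustrative toy sketch of that idea, not ELIZA’s actual 1966 script:

```python
import random
import re

# Toy ELIZA-style responder: match a pattern, reflect pronouns,
# and slot the captured text back into a canned reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Illustrative rules only; the real ELIZA script was far larger.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*)", ["Tell me more.", "How does that make you feel?"]),
]

def reflect(text):
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance):
    for pattern, responses in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(g) for g in match.groups()))
```

No understanding anywhere, yet `respond("I feel lonely")` produces something that reads like a therapist’s prompt, which is exactly why ELIZA fooled people.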
deleted by creator
… Have you tried any of the recent ones? As it stands, ChatGPT and Gemini are both built with guardrails strong enough to require custom inputs to jailbreak, with techniques such as Reinforcement Learning from Human Feedback (RLHF) used to lobotomize misconduct out of the AIs.
deleted by creator
It’s wild that people brag that it can do essentially the same thing as copying and pasting someone else’s basic code, just with a few extra imagined errors sprinkled in for fun. But that only makes it more useful for pretending you aren’t, again, just literally copying someone else’s stuff.
It’s a search engine that makes up about 1/8 of what it says. But sure, it’s super useful.
deleted by creator
… Don’t pull a strawman; all I said is that AIs designed to approximate human-written text do a good job of approximating human text.
This means you can use them to simulate a Reddit thread, make a fake Wikipedia page, or construct a set of responses to someone who wants comfort.
Next time, read what someone actually says, and respond to that.
deleted by creator
The tech is great at pretending to be human. It is simply a next-“word” (or phrase) predictor. It is not good at answering obscure questions, writing code, or making a logical argument. It is good at simulating someone.
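To make “next word predictor” concrete, here’s a toy bigram model: the same idea as an LLM, minus the neural network and the billions of parameters. The corpus and names here are made up for illustration.

```python
from collections import Counter, defaultdict

def train(corpus):
    # Count which word follows which: a bigram frequency table.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    # Pick the most frequent continuation. Note there is no notion
    # of truth here, only of what usually comes next in the corpus.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train(["the cat sat on the mat", "the cat ate the fish"])
predict(model, "the")  # -> "cat" (the most common continuation seen)
```

An actual LLM replaces the frequency table with a learned function over far longer contexts, but the objective is the same: output the likeliest continuation, not the true one.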
It is my experience that it approximates a human well, but it doesn’t get the details right (like truthfulness, or reflecting objective reality), making it useless for essay writing but great for stuff like character AI and other human simulations.
If you are right, give an actual logical response that only a human is capable of, as opposed to a generic ad hominem. I repeat my question: have you actually used any of the GPT-3-era models?
deleted by creator
Indeed, I don’t think I can convince you at this point, so enjoy the touch of grass