If a user prompts ChatGPT to summarize a copyrighted book, it will do so.
So will a human. Let’s stop extending copyright law. Also, how do you know it read the book, and not a summary of it, of which there are loads on the internet?
Beyond that, it’ll try to summarize a book, but it often can’t do so successfully, although it will act like it has. Give it a try on something even a little bit obscure and it can’t really give you good information. I tried with Blindsight, which isn’t in the popular culture but was a Hugo nominee, so not completely obscure. It knew who the characters were and had a general sense of the tone, but it completely fabricated every major plot point I asked about. I did the same with A Head Full of Ghosts, which is better known but still not something everyone has read, and got the same result.
One thing I found that’s really fun is to ask it a question and then follow up with something like “Are you sure about that?” It’ll almost always correct itself and make up something else. It’ll go one step further and incorporate details you ask about. Give it a prompt like “Are you sure this character died of natural causes? I thought they were killed by Bob” and it will very frequently say you’re right and make up a story along those lines that’s plausible within the text. It doesn’t work on really popular stuff; you can’t convince it that Optimus Prime saves Luke Skywalker in RotJ, but on anything even a little less well known it’ll tell you details that it’s making up out of whole cloth, with complete confidence.
Another highly amusing thing to do is to ask it about nonexistent chemicals or antenna types (try “inverted tripole” or “dinitrogen azide”). It always generates plausible but incorrect answers (eloquent bullshit).
Also, how do you know it read the book, and not a summary of it, of which there are loads on the internet?
In the case of ChatGPT, it’s hard to tell. OpenAI won’t even reveal what their training dataset was.
Researchers have done some tests to tease this out, and they’re pretty confident that it has read quite a few books and memorized passages from them verbatim. See one of my favorite papers in a while, “Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4.”
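The kind of probe described in that paper works by masking a character name in a passage and checking whether the model can fill it back in, which is strong evidence of memorization since the name can’t be guessed from context alone. A minimal sketch of how such a cloze prompt could be constructed (the passage and character name below are illustrative, and the actual step of querying a model is omitted):

```python
def make_name_cloze(passage: str, name: str, mask: str = "[MASK]"):
    """Replace one occurrence of a character name with a mask token.

    Returns the cloze prompt and the held-out answer. If the model
    restores the exact name, that suggests it memorized the text.
    """
    if name not in passage:
        raise ValueError(f"{name!r} not found in passage")
    cloze = passage.replace(name, mask, 1)  # mask only the first occurrence
    return cloze, name

# Illustrative passage in the style of Blindsight (not a real quote).
passage = ("Siri Keeton watched the alien vessel drift closer, "
           "wondering what Sarasti would make of it.")

prompt, answer = make_name_cloze(passage, "Sarasti")
print(prompt)   # passage with the name replaced by [MASK]
print(answer)   # the held-out name the model must recover
```

In the full evaluation, the masked prompt goes to the model and its guess is compared against the held-out name; a high hit rate across many passages from one book indicates the book was in the training data.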
So will a human. Let’s stop extending copyright law. Also, how do you know it read the book, and not a summary of it, of which there are loads on the internet?
This is why I am pro AI art. It’s no different than a human taking inspiration from other work.
Nobody comes up with anything truly original. It’s all inspired by someone before them.
I don’t know how anyone is pro AI anything, other than the pigs making money from it. Only bad can result from it. And it will.
Only bad can result from it, just because some company is making profits?
No, that wasn’t a correlation. Only bad can result from it. Also, companies making a profit love it. Separate things.
I don’t know how anyone can be anti AI.
It’s just a tool. To say that only bad can result from it is a bold claim that doesn’t make any sense.
Can you provide an example?
Just wait and see.
I’m not anti AI, I’m against companies making profit out of other people’s work without paying them.