Turnitin, a service that checks papers for plagiarism, says its detection tool found millions of papers that may have a significant amount of AI-generated content.
I believe that in theory. But I’ve tried Mixtral and Copilot (which I believe is based on ChatGPT) on some test items (e.g., “respond to this…” and “write an email listing this…” type queries), and maybe it’s unique to my job, but what it spits out would take more work to revise to the same quality level than writing from scratch would.
It’s better than the bottom 20% of communicators, but most professionals are above that threshold, so the drop in quality is very apparent. Maybe we’re talking about different sample sets.
First, I’m glad you made it to the fediverse, Loon-god. You’ll always be a Warriors legend.
Second, anecdotally, even the crappy results generated by LLMs have value for me. Writing emails, Jira tickets, documentation, etc. is incredibly painful for me. I’ll start an email and suddenly folding the laundry I’ve ignored for 2 days is the most important thing in the world. Then the email that should take 5 minutes takes me an hour and ends up way too long and dense.
With an LLM I give it a few bullet points with general details, it spits out a paragraph or so, I edit the paragraph for tone and add specific details, and then I’m done in about 5 minutes.
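In case anyone wants to try the same flow, here’s a minimal sketch of what it could look like scripted against the OpenAI Python client; the model name, prompt wording, and bullet content below are made-up placeholders, not anyone’s real setup (assumes the `openai` v1+ package and an API key in the environment):

```python
# Sketch: bullet points in, rough email draft out, then hand-edit the draft.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder bullets with the general details you'd normally jot down.
bullets = """\
- can't make the 3pm sync on Thursday
- propose moving it to Friday at 10am
- still need the Q3 numbers from finance before then
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system",
         "content": "Draft a short, plain-spoken work email from the bullet points the user provides."},
        {"role": "user", "content": bullets},
    ],
)

draft = response.choices[0].message.content
print(draft)  # then edit the draft for tone and add the specific details yourself
```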
LLMs help me complete tasks that I really, really don’t want to do, which has a lot of value to me. They aren’t going to replace me at my job, but they’ve really upped my productivity.
Or maybe you are just using them wrong 🤔
Of course, yeah. That’s definitely possible. But I’d be more likely to believe that if I’d seen even one example of it actually being more effective than just writing the email, rather than churning out grammatically correct filler. Can you give me an example of someone actually getting equivalent quality in a real-world corporate setting? A YouTube video? A Lemmy sub? I’m trying to be informed.
I have used it several times for long-form writing as a critic rather than as a “co-writer.” I write something myself, tell it to pretend to be the person who would be reading the thing (“Act as the beepbooper reviewing this beepboop…”), and ask for critical feedback. It usually has some genuinely great advice, which I then incorporate into my draft. It ends up taking just as long as writing the thing normally, but the result is materially better than what I would have written without it.
I’ve also used it to generate an outline to use as a skeleton while writing. Its own writing is often really flat and leans heavily on the passive voice, so it kind of sucks at doing the writing for you if you want the result to be good. But it works well in these ways as a collaborator, and I think a lot of people miss that side of it.
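If anyone wants to try that critic pass, here’s a minimal sketch of what it could look like with the OpenAI Python client; the model name, reviewer role, and file path are illustrative placeholders, not anyone’s actual prompt (same assumptions: `openai` v1+ package, API key in the environment):

```python
# Sketch: have the model review something you already wrote, instead of writing it for you.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Something you already wrote yourself, e.g. a cover letter or design doc (placeholder path).
with open("draft.md", encoding="utf-8") as f:
    my_draft = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works for this pattern
    messages=[
        {"role": "system",
         "content": ("Act as the hiring manager reviewing this cover letter. "
                     "Give blunt, specific, critical feedback. Do not rewrite it.")},
        {"role": "user", "content": my_draft},
    ],
)

print(response.choices[0].message.content)  # fold the useful feedback back into your draft yourself
```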
That’s definitely a more plausible use and very helpful. Thanks! (I’d love it if there were a sub that just had these kinds of tips to try out.)
I considered writing up at least a post somewhere after reading your comment and adding my reply, but to be honest I don’t even know where it would be best received.