• 1 Post
  • 29 Comments
Joined 6 months ago
Cake day: October 14th, 2025

  • Right, I mean if you made the context window enormous, such that you can include the entire set of embeddings and a set of memories (or maybe an index of memories that can be “recalled” with keywords), you’ve got a self-observing loop that can learn and remember facts about itself. I’m not saying that’s AGI, but I find it somewhat unsettling that we don’t have an agreed-upon definition. If a for-profit corporation made an AI that could be considered a person with rights, I imagine they’d be reluctant to be convincing about it.
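    The “index of memories recalled with keywords” part can be sketched as a toy inverted index. Everything here (the class, the whitespace tokenizer, the ranking) is my own invention for illustration; a real agent would feed the recalled memories back into an actual model.

```python
# Hypothetical keyword-recall memory store; not any real agent framework.
from collections import defaultdict

class KeywordMemory:
    def __init__(self):
        self.memories = []             # full memory texts
        self.index = defaultdict(set)  # keyword -> ids of memories containing it

    def remember(self, text):
        mem_id = len(self.memories)
        self.memories.append(text)
        for word in text.lower().split():   # crude whitespace tokenizer
            self.index[word].add(mem_id)
        return mem_id

    def recall(self, *keywords):
        # Return memories matching any keyword, most-matched first.
        hits = defaultdict(int)
        for kw in keywords:
            for mem_id in self.index.get(kw.lower(), ()):
                hits[mem_id] += 1
        ranked = sorted(hits, key=lambda m: -hits[m])
        return [self.memories[m] for m in ranked]

mem = KeywordMemory()
mem.remember("the user prefers concise answers")
mem.remember("the user lives in Canada")
recalled = mem.recall("Canada")   # -> ["the user lives in Canada"]
```

    The recalled snippets would then be stuffed into the context window alongside the prompt, which is the “self-observing loop” part.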



  • There’s no reason an LLM couldn’t be hooked up to a database, where it can save outputs and then retrieve them again to “think” further about them. In fact, any LLM that can answer questions about previous prompts/responses has to be able to do this. If you prompted an LLM to review all of its database entries, generate a new response based on that data, then save that output to the database, and repeat at regular intervals, I could see calling that a kind of thinking. If you do the same process but with the whole model and all the DB entries, that’s in the region of what I’d call a strange loop. Is that AGI? I don’t think so, but I also don’t know how I would define AGI, or if I’d recognize it if someone built it.
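    That review-generate-save loop is simple enough to sketch. The `fake_llm` stand-in and the one-table schema are assumptions for illustration, not any real model API; a real system would make an actual model call where the placeholder is.

```python
# Minimal sketch of the "save outputs, re-read them, think further" loop.
import sqlite3

def fake_llm(prompt):
    # Placeholder: a real system would call an actual model here.
    return f"reflection on {prompt.count('|') + 1} prior thoughts"

def think_step(db):
    rows = db.execute("SELECT text FROM thoughts ORDER BY id").fetchall()
    prompt = " | ".join(r[0] for r in rows)      # review all entries
    new_thought = fake_llm(prompt)               # generate from them
    db.execute("INSERT INTO thoughts (text) VALUES (?)", (new_thought,))
    db.commit()
    return new_thought

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE thoughts (id INTEGER PRIMARY KEY, text TEXT)")
db.execute("INSERT INTO thoughts (text) VALUES ('seed observation')")
db.commit()

for _ in range(3):                               # "repeat at regular intervals"
    think_step(db)
```

    Each pass reads everything written so far, so later outputs are conditioned on earlier ones, which is the whole point of the loop.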





  • This is in some ways an easier problem than classifying LLM vs non-LLM authorship. That only has two possible outcomes, and it’s pretty noisy because LLMs are trained to emulate the average human. Here, you can generate an agreement score based on language features per comment, and cluster the comments by how they disagree with the model. Comments that disagree in particular ways (never uses semicolons, claims to live in Canada, calls interlocutors “buddy”, writes run-on sentences, etc.) would be clustered together more tightly. The more comments two profiles have in the same cluster(s), the more confident the match becomes. I’m not saying this attack is novel or couldn’t be accomplished without an LLM, but it seems like a good fit for what LLMs actually do.
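    A toy version of that clustering step, assuming a few invented style features (semicolon use, “buddy”, run-on sentences) and a naive greedy grouping; this is a sketch of the idea, not a tested deanonymization method.

```python
# Toy comment clustering by style-feature agreement; features and threshold
# are invented for illustration.
import re

def style_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "semicolons": text.count(";") > 0,
        "says_buddy": "buddy" in text.lower(),
        "long_sentences": bool(sentences)
            and len(words) / len(sentences) > 25,   # crude run-on proxy
    }

def agreement(a, b):
    # Fraction of style features two comments share.
    fa, fb = style_features(a), style_features(b)
    return sum(fa[k] == fb[k] for k in fa) / len(fa)

def cluster(comments, threshold=1.0):
    # Greedy: join the first cluster whose exemplar you fully agree with.
    clusters = []
    for c in comments:
        for group in clusters:
            if agreement(group[0], c) >= threshold:
                group.append(c)
                break
        else:
            clusters.append([c])
    return clusters

comments = [
    "Listen buddy; you have no idea.",
    "Sure thing buddy; read it again.",
    "I disagree with the premise entirely.",
]
groups = cluster(comments)
```

    The two “buddy”-plus-semicolon comments land in the same cluster, and profile matching would then count shared cluster memberships across accounts.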


  • Why not? If LLMs are good at predicting mean outcomes for the next symbol in a string, and humans have idiosyncrasies that deviate from that mean in a predictable way, I don’t see why you couldn’t detect and correlate certain language features that map to a specific user. You could use things like word choice, punctuation, slang, common misspellings, sentence structure… For example, I started with a contradicting question, I used “idiosyncrasies”, I wrote “LLMs” without an apostrophe, “language features” is a term of art, as is “map” as a verb, etc. None of these are indicative on their own, but unless people are taking exceptional care to either hyper-normalize their style or explicitly spike their language with confounding elements, I don’t see why an LLM wouldn’t be useful for this kind of espionage.

    I wonder if this will have a homogenizing effect on the anonymous web. It might become an accepted practice to communicate in a highly formalized style to make this kind of style fingerprinting harder.
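    To make the fingerprinting idea concrete, here’s a sketch that aggregates a few of those signals into a per-profile vector; the specific features and the similarity measure are assumptions picked for illustration, nothing more.

```python
# Hypothetical per-profile style fingerprint from word-choice and
# punctuation signals; invented features, not a real stylometry toolkit.
def fingerprint(comments):
    text = " ".join(comments)
    n = max(len(text.split()), 1)          # normalize rates by word count
    return {
        "apostrophe_llms": text.count("LLM's") / n,
        "llms_plain": text.count("LLMs") / n,
        "semicolon_rate": text.count(";") / n,
        "idiosyncrasies": text.lower().count("idiosyncras") / n,
    }

def similarity(fp_a, fp_b):
    # 1 minus mean absolute feature difference (1.0 = identical style).
    keys = fp_a.keys()
    return 1 - sum(abs(fp_a[k] - fp_b[k]) for k in keys) / len(list(keys))

profile_a = ["I write LLMs without an apostrophe; always have."]
profile_b = ["LLMs are fine; their idiosyncrasies amuse me."]
profile_c = ["LLM's are gonna take over!!!"]

sim_ab = similarity(fingerprint(profile_a), fingerprint(profile_b))
sim_ac = similarity(fingerprint(profile_a), fingerprint(profile_c))
```

    Profiles a and b share the plain “LLMs” and the semicolons, so they score closer to each other than either does to c, which is the kind of correlation the attack would lean on.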



  • Exactly. Being a migrant isn’t exactly a picnic. I think it’s reasonable to assume most people would like to live near their families and homes if that’s a viable option. I still think people should be able to go anywhere in the world if they want to, but they shouldn’t have to. A lot of the “problems” of immigration are just the point at which other people’s problems become inconvenient for me. If we can make the whole world a nice place to live, we’ll be well on our way to making borders not matter so much.



  • It depends a lot on what you want to do and a little on what you’re used to. There’s some configuration overhead, so it may not be worth the extra hassle if you’re only running a few services (and they don’t have dependency conflicts). IME, once you pass a certain complexity level it becomes easier to run new services in containers, but if you’re not sure how they’d benefit your setup, you’re probably fine not worrying about it until there’s a clear need.







  • It’s not bad, I’m going to finish the first book and will probably pick up the next one. Part of the charm is looking back at an era when ripping off Tolkien wasn’t such a cliché that people actively avoided it. Brooks is far from the only person to do it, so I’m not trying to be too hard on him, and it’s different enough that I’m still invested. My only real complaint about the writing is how he keeps reminding us that the fate of the world hangs in the balance. Like, yeah, we know, Allanon laid out the stakes very clearly in the opening lore dump. Show, don’t tell. Overall I’m glad I picked it up.

    EDIT: Also Walker Boh hasn’t shown up yet so I’m gonna at least try to get to them.


  • OK, I’m about halfway through The Sword of Shannara and I’m enjoying it, but it really feels like we’re just doing The Fellowship of the Ring Magic Sword. They just got through the halls of the dead (which they had to take because it’s the fastest route, even though it’s guarded by the dead and super dangerous) and I 100% expected Gandalf Allanon to die fighting the Balrog lake tentacle monster. But they get out, and Shea gets washed away by the river, and now Allanon’s like “welp, the one guy who could wield the Magic Sword against the Evil Sorcerer, the guy this whole quest is about, might be dead now. He might not be, but let’s abandon him and go find the Magic Sword anyway”. I mean, the author has explicitly stated that Allanon might have other plans up his sleeve, but I really don’t understand why finding Shea isn’t priority #1.