• 1 Post
  • 35 Comments
Joined 1 year ago
Cake day: July 16th, 2023


  • scratchee@feddit.uk to cats@lemmy.world · Black Cats · 3 points · 21 days ago

    As the owner of 2 black cats… as far as I’m concerned all black cats are a superposition of each other until you get within a foot or so, spot the one tiny clue that gives them away, and they finally collapse into a specific cat.







  • I disagree; they’re not talking about the low-trust online sources that will indeed undergo massive changes, they’re talking about organisations with chains of trust, and they make a compelling case that those won’t be affected as much.

    Not that you’re wrong either, but your points don’t really apply to their scenario. People who have built their career in photography have more to lose, and more opportunity to be discovered, so they really don’t want to play silly games when a single proven fake would end their career for good. It’ll happen, no doubt, but it’ll be rare and big news, a great embarrassment for everyone involved.

    Online discourse, random photos from events, anything without that chain of trust (or where the “chain of trust” is built by people who don’t actually care), that’s where this is a game changer.




  • Reasoning is obviously useful, but I’m not convinced it’s required to be a good driver. In fact, most driving decisions must be made rapidly, and I doubt humans can be described as “reasoning” when we’re just reacting to events. Decisions that take long enough could be handed to a human (“should we rush for the ferry, or divert for the bridge?”). It’s only in the middle ground between the two that we’ll maintain this big advantage (“that truck ahead is bouncing around, I don’t like how the load is secured, so I’m going to back off”). That’s a big advantage, but how much of our time is spent with our minds fully focused and engaged anyway? Once we’re on autopilot, is there much reasoning going on?

    Not that I think this will be quick; I expect at least another couple of decades before self-driving cars can even start to compete with us outside of specific curated situations. And once they do, they’ll continue to fuck up royally whenever the situation is weird and outside their training, causing big news stories. The key question will be whether they can compete with humans on average by outperforming us in quick responses and by consistently not getting distracted/tired/drunk.






  • Whilst I agree that universe-consuming nanobots are a bit far-fetched, I’m not sure I’m sold on the replication problem.

    Life has replication errors on purpose, because we depend on them for mid- to long-term survival.

    It’s easy to write program code with arbitrarily high error protection. You could make a program that produces 1 unhandled error for every 100,000 consumed universes, and it wouldn’t be particularly hard; you just need enough spare space (a rough sketch of the redundancy math is below).

    Mutation and cancer are potential problems for technology, but they’re decidedly solvable problems.

    Life only makes it hard because life is chaotic and complex; there’s no error-correcting-code ratio we can bump from 5 to 20 and call it a day.
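
    As a rough illustration of the “enough spare space” point, here is a minimal sketch (my own example, not anything from the comment; it assumes stored copies get corrupted independently) of how even a plain repetition code with majority voting drives the failure rate down as redundancy grows:

    from math import comb

    def majority_failure_probability(p: float, n: int) -> float:
        """Chance that a majority vote over n independent copies gives the wrong answer."""
        needed = n // 2 + 1  # number of corrupted copies it takes to flip the vote
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(needed, n + 1))

    if __name__ == "__main__":
        p = 0.01  # assumed: each copy independently has a 1% chance of being corrupted
        for n in (1, 5, 21, 101):
            print(f"copies={n:>3}  failure probability ~ {majority_failure_probability(p, n):.3e}")

    With a 1% per-copy corruption rate, 21 copies already push the failure probability below 10⁻¹⁵, and 101 copies push it to roughly 10⁻⁷³. That exponential suppression is the sense in which spare space buys arbitrarily good error protection, and real error-correcting codes do far better per bit of overhead than naive repetition.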