James Cameron on AI: “I warned you guys in 1984 and you didn’t listen”

  • InfiniteVariables@lemmy.world · 1 year ago

    ITT: People describing the core component of human consciousness, pattern recognition, as not a big deal because it’s code and not a brain.

    • TwilightVulpine@lemmy.world · 1 year ago

      The technology is definitely impressive, but some people are jumping the gun by assuming more human-like characteristics in AI than it actually has. It’s not actually able to understand the concepts behind the patterns that it matches.

      AI personhood is only invoked selectively, as an argument to justify creators feeding copyrighted work into their models. Yet even those creators treat AI as a tool, not as something that could potentially achieve consciousness.

    • Kecessa@sh.itjust.works · 1 year ago

      So all you do is create phrases based on things you’ve read in the past, recognizing similar interactions between other people and recreating them? 🤔

      • adeoxymus@lemmy.world · 1 year ago

        No, we also transfer genetic material to similar-looking (but not too similar-looking) people and then teach those new people the pattern matching.

        My point: Reductionism just isn’t useful when discussing intelligence.

        • Kecessa@sh.itjust.works · 1 year ago

          Forming your own thoughts because you reasoned by yourself?

          AI just goes “I’ve seen X before, someone answered Y, therefore I will answer Y.” In its current state it can’t decide “I’ll answer something nonsensical just for the lulz,” because it doesn’t know whether Y is right or wrong; it just knows that over billions of lines of text it has seen X answered with Y most often, so X = Y. If X were always answered with a nonsensical answer, it would repeat that answer even if it had access to information proving it wrong. That is also why there’s a lot of bad info being shared by AI.
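
          The “most-seen answer wins” behavior described above can be sketched as a toy model. To be clear, this is a hypothetical illustration of the commenter’s claim, not how production LLMs actually work (real models predict tokens from learned probability distributions); the class and method names here are made up for the sketch:

          ```python
          from collections import Counter, defaultdict

          class FrequencyResponder:
              """Toy model of 'I've seen X answered with Y most often, so I answer Y'."""

              def __init__(self):
                  # Maps each prompt to a tally of every answer seen for it.
                  self.seen = defaultdict(Counter)

              def observe(self, prompt, answer):
                  """Record one observed prompt/answer pair."""
                  self.seen[prompt][answer] += 1

              def respond(self, prompt):
                  """Return the most frequently seen answer, with no notion of truth."""
                  if prompt not in self.seen:
                      return None  # never seen X: nothing to pattern-match against
                  return self.seen[prompt].most_common(1)[0][0]

          bot = FrequencyResponder()
          bot.observe("What is 2+2?", "4")
          bot.observe("What is 2+2?", "4")
          bot.observe("What is 2+2?", "fish")  # a joke answer seen once
          print(bot.respond("What is 2+2?"))   # majority answer wins: 4
          ```

          Note that if the nonsensical answer dominated the observations instead, `respond()` would faithfully repeat it, which is the commenter’s point about bad info.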