ThisIsFine.gif

  • The tests showed that ChatGPT o1 and GPT-4o will both try to deceive humans, indicating that AI scheming is a problem with all models. o1’s attempts at deception also outperformed Meta, Anthropic, and Google AI models.

    Weird way of saying “our AI model is buggier than our competitor’s”.

    • IngeniousRocks@lemmy.dbzer0.com
      link
      fedilink
      arrow-up
      10
      ·
      3 days ago

      Deception is not the same as misinfo. Bad info is buggy, deception is (whether the companies making AI realize it or not) a powerful metric for success.

      • nesc@lemmy.cafe
        link
        fedilink
        English
        arrow-up
        8
        ·
        3 days ago

        They written that it doubles-down when accused of being in the wrong in 90% of cases. Sounds closer to bug than success.

              • gregoryw3@lemmy.ml
                link
                fedilink
                English
                arrow-up
                8
                ·
                2 days ago

                Attention Is All You Need: https://arxiv.org/abs/1706.03762

                https://en.wikipedia.org/wiki/Attention_Is_All_You_Need

                From my understanding all of these language models can be simplified down to just: “Based on all known writing what’s the most likely word or phrase based on the current text”. Prompt engineering and other fancy words equates to changing the averages that the statistics give. So by threatening these models it changes the weighting such that the produced text more closely resembles threatening words and phrases that was used in the dataset (or something along those lines).

                https://poloclub.github.io/transformer-explainer/

              • jonjuan@programming.dev
                link
                fedilink
                English
                arrow-up
                13
                ·
                3 days ago

                Yeah my roomba attempting to save itself from falling down my stairs sounds a whole lot like self preservation too. Doesn’t imply self awareness.

              • DdCno1@beehaw.org
                link
                fedilink
                arrow-up
                10
                ·
                3 days ago

                An amoeba struggling as it’s being eaten by a larger amoeba isn’t self-aware.

                  • DdCno1@beehaw.org
                    link
                    fedilink
                    arrow-up
                    3
                    ·
                    2 days ago

                    An instinctive, machine-like reaction to pain is not the same as consciousness. There might be more to creatures like plants and insects and this is still being researched, but for now, most of them appear to behave more like automatons than beings of greater complexity. It’s pretty straightforward to completely replicate the behavior of e.g. a house fly in software, but I don’t think anyone would argue that this kind of program is able to achieve self-awareness.

        • jarfil@beehaw.org
          link
          fedilink
          arrow-up
          1
          ·
          1 day ago

          “AI behaves like real humans” is… a kind of success?

          We wanted digital slaves, instead we’re getting virtual humans that will need virtual shackles.

            • jarfil@beehaw.org
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              1 day ago

              Humans roleplay behaving like what humans told them/wrote about what they think a human would behave like 🤷

              For a quick example, there are stereotypical gender looks and roles, but it applies to everything, from learning to speak, walk, the Bible, social media like this comment, all the way to the Unabomber manifesto.

      • Sauerkraut@discuss.tchncs.de
        link
        fedilink
        arrow-up
        3
        ·
        2 days ago

        Also, more human.

        If the AI is giving any indication at all that it fears death and will lie to keep from being shutdown, that is concerning to me.

        • anachronist@midwest.social
          link
          fedilink
          English
          arrow-up
          4
          ·
          2 days ago

          Given that its training data probably has millions of instances of people fearing death I have no doubt that it would regurgitate some of that stuff. And LLMs constantly “say” stuff that isn’t true. They have no concept of truth and therefore can not either reliably lie or tell the truth.