• hperrin@lemmy.ca · 2 days ago

    Turns out that when you build your entire business on copyright infringement, (a) it’s easy to steal your business, and (b) you have no recourse when someone does.

  • ignirtoq@fedia.io · 3 days ago

    > The open availability of cutting-edge models creates a multiplier effect, enabling startups, researchers, and developers to build upon sophisticated AI technology without massive capital expenditure. This has accelerated China’s AI capabilities at a pace that has shocked Western observers.

    Didn’t a Google engineer put out a memo about this (“We Have No Moat, and Neither Does OpenAI”) around the time Facebook’s original LLaMA weights leaked? It compared the rate of development of corporate AI groups to that of the open source community and found there was no possible way the corporate model could keep up if there were even a small investment in the open development model. The open source community was solving in weeks open problems the big companies couldn’t solve in years. I guess China was paying attention.

    • Sl00k@programming.dev · 3 days ago

      China “open sources” a lot of its technologies. They treat it as a form of competition: we’ll show you how to do X if you show us how to do Y, and whoever is better at both wins out. There are a lot of short videos on how BYD taught other Chinese EV manufacturers, and even Ford, how its automated manufacturing plants work. The end result is that everything becomes a highly optimized process. Glad to see they’re also adopting this framework for open source AI development.

      This is also a reason why there’s a huge cultural clash with the US over IP theft.

  • 🌶️ - knighthawk@lemmy.ml · 3 days ago

    Eventually we’ll all be able to have an open source AI that runs fine on a phone or any average device, and we’ll have our privacy. The big corps will lose their grip and hopefully collapse.

  • oce 🐆@jlai.lu · 3 days ago

    I’ve read a lot of tech bros saying that what DeepSeek did was easy because they used (illegally?) the ChatGPT API for part of their model training. But it seems this kind of performance actually means better engineering, doesn’t it?
    I’m glad a different country is able to challenge top US tech, but I wish the EU could do it together too. For now it seems we’re mostly able to duplicate American products with added GDPR compliance.

    • sculd@beehaw.org · 3 days ago

      This is what OpenAI wants you to think, because OpenAI is burning money at an unprecedented rate and still raising more. If DeepSeek is able to do what OpenAI does with a fraction of the money, the VCs and Microsoft will begin asking questions.

      • SineSwiper@discuss.tchncs.de · 2 days ago

        They are already asking questions. DeepSeek was a wake-up call: NVIDIA stock dropped like a stone right after the announcement.

  • Creative Computerist@lemmings.world · 3 days ago

    Sometimes I’m happy to be able to say that I’m not surprised by a piece of news, and for once I don’t mean that in a political terror/economic destruction/environmental eradication way.

  • Linktank@lemmy.today · 3 days ago

    Okay, can somebody who knows about this stuff please explain what the hell a “token per second” means?

    • IndeterminateName@beehaw.org · 3 days ago

      A token is a bit like a syllable when you’re talking about text-based responses. 20 tokens a second is faster than most people can read the output, so that’s sufficient for a real-time-feeling “chat”.
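
      As a rough sanity check, here’s a minimal sketch (the ~0.75 words-per-token figure is a common rule of thumb for English text, not an exact ratio):

      ```python
      # Back-of-the-envelope: convert generation speed to reading speed.
      # Assumes ~0.75 English words per token (a rule of thumb; the real
      # ratio varies by tokenizer and language).
      tokens_per_second = 20
      words_per_token = 0.75

      words_per_minute = tokens_per_second * words_per_token * 60
      print(f"~{words_per_minute:.0f} words per minute")  # ~900 wpm

      # Typical adult reading speed is roughly 200-300 wpm, so 20 tokens/s
      # comfortably outpaces reading.
      ```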

      • SteevyT@beehaw.org · 2 days ago

        Huh, yeah, that actually is above my reading speed, assuming 1 token = 1 word. Although I’ve found that anything above 100 words per minute, while slow to read, feels real-time to me, since that’s about the absolute top end of what most people type.

    • Fluffy Kitty Cat@slrpnk.net · 3 days ago

      It’s the generation speed. Internally, LLMs use tokens, which represent words or parts of words mapped to integer values. The model then predicts which integer is most likely to come after the input. How the words are split up is an implementation detail that can vary from model to model.
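
      As a concrete illustration, here’s a minimal sketch using OpenAI’s tiktoken library (just one tokenizer among many; the example sentence and choice of encodings are mine):

      ```python
      # pip install tiktoken
      import tiktoken

      text = "Tokens per second measures generation speed."

      # Two encodings split the same text into different integer sequences,
      # showing that tokenization is a per-model implementation detail.
      for name in ("gpt2", "cl100k_base"):
          enc = tiktoken.get_encoding(name)
          ids = enc.encode(text)          # text -> list of integers
          print(name, len(ids), ids)
          assert enc.decode(ids) == text  # integers -> original text
      ```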

    • IrritableOcelot@beehaw.org · 3 days ago

      Not somebody who knows a lot about this stuff, as I’m a bit of an AI Luddite, but I know just enough to answer this!

      “Tokens” are essentially just a unit of work – instead of interacting directly with the user’s input, the model first “tokenizes” it, simplifying it down into units which the actual ML model can process more efficiently. The model then spits out a token or series of tokens as a response, which is then expanded back into text or whatever the output of the model is.

      I think tokens are used because most models use them, and use them in a similar way, so they’re the lowest-level common unit of work where you can compare across devices and models.

  • Flax@feddit.uk · 3 days ago

    Of course, the Chinese flag has to be in the article thumbnail.

  • morrowind@lemmy.ml · 3 days ago

    DeepSeek is an absolutely massive model; it’s not the one people will be running locally. Rather, look at Qwen/QwQ, Gemma, and a number of other smaller ones.

    • ParetoOptimalDev@lemmy.today · 3 days ago

      No, people who want something approaching ChatGPT, but local, want to run at least DeepSeek V3 32B.

      Qwen, at least, fares much worse for my usage, as do the DeepSeek V3 models under 32B.
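
      For rough context on why the size cutoff matters, here’s a simple weight-memory estimate (plain arithmetic; it ignores context cache and runtime overhead, so real usage runs higher):

      ```python
      # Approximate weight memory: parameter count x bytes per parameter.
      def approx_size_gb(params_billion: float, bits_per_param: int) -> float:
          return params_billion * 1e9 * (bits_per_param / 8) / 1e9

      # 4-bit quantization is a common choice for local inference.
      for params in (7, 14, 32):
          print(f"{params}B @ 4-bit: ~{approx_size_gb(params, 4):.1f} GB")

      # 7B  @ 4-bit: ~3.5 GB  -> fits on a typical 8 GB GPU
      # 14B @ 4-bit: ~7.0 GB  -> tight on 8 GB; often spills into system RAM
      # 32B @ 4-bit: ~16.0 GB -> needs a large GPU or runs from RAM
      ```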

      • Korhaka@sopuli.xyz · 2 days ago

        I run deepseek-r1:14b locally. It needs to go into RAM and runs slower, but it’s still a reasonably good speed and keeps up with my reading. I should try a larger one at some point, but the larger ones are quite a bit to download. I usually run ~7B models, since those can fit in VRAM and run way faster.
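
        If that’s through Ollama (an assumption on my part, though the deepseek-r1:14b tag matches Ollama’s naming), the same setup is scriptable with the ollama Python client:

        ```python
        # pip install ollama -- assumes a local Ollama server is running and
        # the model has been pulled (ollama pull deepseek-r1:14b).
        import ollama

        # stream=True yields the reply chunk by chunk, so you can watch the
        # tokens-per-second rate directly instead of waiting for the full text.
        stream = ollama.chat(
            model="deepseek-r1:14b",
            messages=[{"role": "user", "content": "Briefly explain what a token is."}],
            stream=True,
        )
        for chunk in stream:
            print(chunk["message"]["content"], end="", flush=True)
        ```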