lidd1ejimmy@lemmy.ml to Memes@lemmy.ml · English · 4 months ago
Offline version of ChatGPT (lemmy.ml, image, 9 comments)
neidu2@feddit.nl · 4 months ago (edited)
Technically possible with a small enough model to work from. It’s going to be pretty shit, but “working”. Now, if we were to go further down in scale, I’m curious how/if a 700MB CD version would work. Or how many 1.44MB floppies you would need for the actual program and smallest viable model.
Naz@sh.itjust.works · 4 months ago
*squints* That says, “PHILLIPS DVD+R”. So we’re looking at a 4.7GB model, or just a hair under: the tiniest, most incredibly optimized implementation of <INSERT_MODEL_NAME_HERE>
curbstickle@lemmy.dbzer0.com · 4 months ago
llama 3 8b, phi 3 mini, Mistral, moondream 2, neural chat, starling, code llama, llama 2 uncensored, and llava would fit.
NoiseColor@lemmy.world · 4 months ago
Might be a dvd. 70b ollama llm is like 1.5GB. So you could save many models on one dvd.
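The floppy/CD/DVD question above is simple ceiling arithmetic. A minimal sketch: the model and runtime sizes below are made-up placeholders (the thread never settles on a "smallest viable model"), but the media capacities are real, and note that a "1.44 MB" floppy actually holds 1,474,560 bytes (1440 KiB, a mixed-unit marketing name).

```python
import math

# Real media capacities, in bytes
FLOPPY = 1_474_560          # "1.44 MB" floppy = 1440 KiB
CD = 700 * 1024**2          # 700 MiB CD-R
DVD = 4_700_000_000         # 4.7 GB single-layer DVD+R (decimal gigabytes)

def disks_needed(payload_bytes: int, capacity_bytes: int) -> int:
    """How many disks a payload spans, ignoring filesystem overhead."""
    return math.ceil(payload_bytes / capacity_bytes)

# Hypothetical payload: a ~2 GiB quantized model plus a ~10 MiB runtime
payload = 2 * 1024**3 + 10 * 1024**2
print(disks_needed(payload, FLOPPY))  # 1464 floppies
print(disks_needed(payload, CD))      # 3 CDs
print(disks_needed(payload, DVD))     # 1 DVD
```

Even this modest hypothetical payload lands well past a thousand floppies, which is why the thread's scaling-down thought experiment stalls at the CD stage.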