

I think not? Which is kind of a drawback to the fediverse. You gotta be careful what you post cause it’s probably gonna be somewhere forever.
I could be wrong though


I bet you could! The interface can literally be whatever you want with FPGAs. You’d just have to keep things organized and program them one at a time, I think.


I think I’ve heard that they can run LLMs!


I am also kinda new, but it seems like it leans towards multiple accounts. Some Lemmy instances don’t federate with each other, so I have two.
And then there seems to be a lot of style and content overlap between Pixelfed and Mastodon, so I just have a Pixelfed account and follow a few folks from Mastodon there.
It would be weird to see Pixelfed-type posts on my Lemmy feeds, but I guess that’s just how I’m using it so far.


I also have a 5060 (Ti) with 16GB of VRAM. I tend to use GPT-OSS:20B or Qwen3:14B with a context of ~30k. I have a custom system prompt on Open WebUI for the style of response I like. That takes up about 14GB of my 16GB VRAM.
But yeah, it is slower and not as “smart” as the cloud-based models. I think the inconvenience of the speed and having to fact check/test code is worth the privacy and environmental trade-offs, though.
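If anyone wants to try something similar, here’s a rough sketch of how you might set a bigger context window in Ollama (assuming that’s the backend — the model tag and exact context value are just examples, tune them to your VRAM):

```
# Hypothetical Ollama Modelfile — model tag and context size are examples
FROM gpt-oss:20b
# raise the context window to ~30k tokens (costs more VRAM)
PARAMETER num_ctx 30720
```

Then build it with `ollama create gpt-oss-30k -f Modelfile` and point Open WebUI at the new model.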


That is why I like small, specialized, locally hosted AI. It runs acceptably fast and quiet on my gaming PC, it’s private, and I can give it knowledge in small doses on specific topics and projects.


Exactly. Once they can correlate a couple of things, they can correlate and search for even more info until all your accounts are revealed.