

I swear everything I learn about the US banking system just makes me wonder “how can this be legal?” Being able to just close an account holding 200 real dollars, with no recourse, is insane.
Migrated over from Hazzard@lemm.ee
Unfortunately, I don’t think this would work.
The answer to where you should plug in is directly into your dedicated GPU. Routing frames from your dedicated GPU out through your iGPU causes throughput issues, because the data has to be constantly shuttled back and forth across the PCIe bus. Even in simple games at low resolutions, where bandwidth wouldn’t be a problem, you’d still be introducing extra input lag. That’s why connecting your display to your motherboard is usually considered a rookie mistake.
But obviously, if you’re outputting through your dedicated GPU, that silicon is still active even while the iGPU does the rendering, which I believe would erase any potential power savings.
I think the better solution, if you really want to maximize power savings, is to use a conservative power limit on your main GPU, and to do things like capping your framerate or dropping the resolution to reduce power draw in applications where you don’t need the extra grunt. Modern GPUs should be pretty good at minimizing idle power draw.
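On Linux with an NVIDIA card, for example, a conservative power limit can be set with `nvidia-smi` (a sketch assuming an NVIDIA GPU and driver; the supported wattage range varies per card, and 170 W here is just an illustrative value):

```shell
# Query the card's supported power limit range (values vary per card)
nvidia-smi -q -d POWER

# Cap the board power limit to e.g. 170 W (requires root; resets on reboot)
sudo nvidia-smi -pl 170
```

Framerate caps can then be layered on top through the driver or in-game settings.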
People do all the time and it makes no sense to me.
I assume it’s people who are highly motivated by hype and the community conversation, wanting to play something while it’s in the zeitgeist; the same sort of people who skip straight to a direct narrative sequel without bothering to play anything before it, presumably just because it’s popular and catches their eye.
Probably the same drive that keeps pre-orders and day one sales so high, despite it pretty much always being a better idea to wait a year or so for sales/updates/etc.
The problem isn’t the tech itself. Getting a pretty darn clean 4K output from a 1080p or 1440p render, at a small fixed frametime cost, is amazing.
The problem is that the tech has been abused as permission to slack on optimization, or used in contexts where there just isn’t enough data for a clean picture, like in upscaling to 1080p or less. Used properly, on a well optimized title, this stuff is an incredible proposition for the end user, and I’m excited to see it keep improving.
Mhm, of course, critical thinking in general is absolutely important, although I take some issue with describing looking for artifacts as “vague hunches”. Fake photos have existed for ages, and we’ve found consistent ways to spot and identify them, such as checking shadows, the directionality of light in a scene, the fringes of detailed objects, black levels and highlights, and even advanced techniques like bokeh and motion blur. You don’t see many people casting doubt on the validity of old pictures with Trump and Epstein together, for example, despite the long existence of photoshop and advanced VFX. Hell, even this image could have been photoshopped, and you’re relying on your eyes to catch the evidence of that if that were the case.
The techniques I’ve outlined here aren’t likely to become irrelevant in the next 5+ years, given they’re based on how the underlying technology works, similar to how LLMs aren’t likely to 100% stop hallucinating any time soon. More than that, I actually think there’s a lot less incentive to work these minor kinks out than something like LLM hallucination, because these images already fool 99% of people, and who knows how much additional processing power it would take to run this at a resolution where you could get something like flawless tufts of grass, in a field that’s already struggling to make a profit given the high costs of generating this output. And if/when these techniques become invalid, I’ll put in the effort to learn new ones, as it’s worthwhile to be able to quickly and easily identify fakes.
As much as I wholeheartedly agree that we need to think critically and evaluate things based on facts, we live in a world where the U.S. President was posting AI videos of Obama just a couple weeks ago. He may be an idiot who is being obviously manipulative, but it’s naive to think we won’t eventually get bad actors like him who try to manipulate narratives around current events, where we can’t rely on simply fact-checking history, or who weave a lie without obvious logical gaps. We need some kind of technique to verify images to settle the inevitable future “he said, she said” debates. The only real alternative is to never trust a new photo again, because we can’t 100% prove that anything new hasn’t been doctored.
We’ve survived in a world with fake imagery for decades now; I don’t think we need to roll over and accept AI as unbeatable just because it fakes things differently, or because it might hypothetically get better at hiding itself in the future.
Anyway, rant over. You’re right, critical thinking is paramount, but being able to clearly spot fakes is a super useful skill to add to that kit, even if it can’t 100% confirm an image as real. I believe these are useful tools to have, which is why I took the time to point them out, despite others having already dated the image and proven it isn’t AI before I got here.
True, someone else did some reverse image searching before I got here, but I think it’s an important skill to develop without relying on dating the image, as that will only work for so long, and there will likely be more important things than memes that will need to be proven/disproven in the future. A reverse image search probably won’t help us with the next political scandal, for example. It’s a pretty good backup to have when it applies though, nice that it proves me correct here.
Haha, that’s just because I used a bullet point list. No em dashes though, at the very least.
I’d recommend you get some practice identifying and proving AI generated images. I agree this has a bit of that “look”, but in this case I’m quite certain it’s just repeated image compression/a cheap camera. Here are the major details I looked at after seeing your comment:
It’s useful to be able to prove or disprove your suspicions, as well as to be able to back them up with something as simple as “this is AI generated, just look at the grass”. Hope this helps!
Exactly what I’ve done. Set my settings to hide NSFW, blocked most of the “soft” communities like hot girls and moe anime girls and whatever else (blocking the lemmynsfw.com instance is a great place to start), and I use All frequently. That’s how I’ve found all the communities I’ve subscribed to, but frankly, my /all feed is small enough that I usually see all my subscribed communities anyway.
Hard to blame them. Proton is dang impressive, and if it works, it works.
Ugh, this is what our legacy product has. Microservices that literally cannot be scaled, because they rely on internal state, and are also all deployed on the same machine.
Trying to do things like just updating Python versions is a nightmare, because you have to do all the work 4 or 5 times.
Want to implement a linter? Hope you want to do it several times. And all for zero microservice benefits. I hate it.
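A minimal sketch of why internal state blocks scaling (hypothetical counter service; the shared dict stands in for an external store like Redis or a database):

```python
class StatefulCounter:
    """Keeps state in process memory: two replicas behind a
    load balancer silently drift apart."""
    def __init__(self):
        self._count = 0

    def hit(self):
        self._count += 1
        return self._count


class StatelessCounter:
    """Externalizes state, so any replica gives the same answer
    and the service can actually be scaled out."""
    def __init__(self, store):
        self._store = store

    def hit(self):
        self._store["count"] = self._store.get("count", 0) + 1
        return self._store["count"]


# Two "replicas" of the stateful service:
a, b = StatefulCounter(), StatefulCounter()
a.hit(); a.hit()        # requests routed to replica A
print(b.hit())          # prints 1, not 3 -- replica B never saw A's traffic

# Two replicas sharing an external store:
shared = {}
c, d = StatelessCounter(shared), StatelessCounter(shared)
c.hit(); c.hit()
print(d.hit())          # prints 3, regardless of which replica answers
```

Untangling that usually means pushing the in-process state out into a store like this before the replicas can be split across machines.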
Haha, another frustration I have with the US financial system is how seemingly easy it is to avoid legislation by renaming stuff.
It’s not a loan, your honor, it’s “buy now, pay later”. We just described what a loan is, and called it that, so we now expect complete immunity from any existing legislation, thank you very much. And now it seems to be “Earned Wage Access”, which is just uh… payday loans from an app.
I don’t know much about the specifics here, but it certainly sounds an awful lot like a way to store and move your money around. It’s not a banking app, it’s a cash app. Really just feels like your government is willing to play remarkably dumb in exchange for (I assume) lobbying money. A lot of stuff is more profitable with zero consumer protections.