Migrated over from Hazzard@lemm.ee

  • 0 Posts
  • 12 Comments
Joined 2 months ago
Cake day: June 28th, 2025

  • Haha, another frustration I have with the US financial system is how seemingly easy it is to avoid legislation simply by renaming stuff.

    It’s not a loan, your honor, it’s “buy now, pay later”. We just described what a loan is and gave it a new name, so we now expect complete immunity from any existing legislation, thank you very much. And now it seems to be “Earned Wage Access”, which is just, uh… payday loans from an app.

    I don’t know much about the specifics here, but it certainly sounds an awful lot like a way to store and move your money around. It’s not a banking app, it’s a cash app. Really just feels like your government is willing to play remarkably dumb in exchange for (I assume) lobbying money. A lot of stuff is more profitable with zero consumer protections.



  • Hazzard@lemmy.zip to Linux Gaming@lemmy.world · New PC, use both GPUs?

    Unfortunately, I don’t think this would work.

    The answer to where you should plug in is: directly into your dedicated GPU. Rendering on the dedicated GPU but outputting through the iGPU means constantly shuttling frame data back and forth across the PCIe bus, which causes throughput issues. Even in simple games at low resolutions, where bandwidth wouldn’t be a problem, you’d still be introducing extra input lag. That’s why connecting your display to your motherboard is usually considered a rookie mistake.

    And obviously, if you flip it around and output from the dedicated GPU while rendering on the iGPU, the dedicated card’s silicon is still active, which I believe would erase any potential power savings.

    I think the better solution, if you really want to maximize power savings, would be to set a conservative power limit on your main GPU and do things like capping your framerate or picking lower resolutions in applications where you don’t need the extra grunt. Modern GPUs should be pretty good at minimizing idle power draw.
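
    For what it’s worth, if you’re on an NVIDIA card, that power cap can even be scripted. Here’s a minimal sketch using the pynvml bindings (assuming the nvidia-ml-py package is installed and the script runs with root privileges; the 200 W target is just an example number, not a recommendation):

    ```python
    # Minimal sketch: cap an NVIDIA GPU's power limit via NVML.
    # Assumes the nvidia-ml-py (pynvml) package and root privileges;
    # the 200 W target below is an arbitrary example.
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

    # Current draw and the driver-allowed limit range, all in milliwatts.
    draw = pynvml.nvmlDeviceGetPowerUsage(handle)
    lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    print(f"drawing {draw / 1000:.0f} W, allowed range {lo / 1000:.0f}-{hi / 1000:.0f} W")

    # Clamp the requested cap into the supported range before applying it.
    target_mw = max(lo, min(hi, 200_000))
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)

    pynvml.nvmlShutdown()
    ```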



  • The problem isn’t the tech itself. Getting a pretty darn clean 4K output from 1080p or 1440p at a small, fixed frametime cost is amazing.

    The problem is that the tech has been abused as permission to slack on optimization, or used in contexts where there just isn’t enough data for a clean picture, like upscaling to 1080p or lower. Used properly, on a well-optimized title, this stuff is an incredible proposition for the end user, and I’m excited to see it keep improving.


  • Hazzard@lemmy.zip to cats@lemmy.world · Cat raised with dogs

    Mhm, of course, critical thinking in general is absolutely important, although I take some issue with describing looking for artifacts as “vague hunches”. Fake photos have existed for ages, and we’ve found consistent ways to spot and identify them, such as checking shadows, the directionality of light in a scene, the fringes of detailed objects, black levels and highlights, and even advanced techniques like bokeh and motion blur. You don’t see many people casting doubt on the validity of old pictures of Trump and Epstein together, for example, despite the long existence of Photoshop and advanced VFX. Hell, even this image could have been photoshopped, and you’d be relying on your eyes to catch the evidence if it were.

    The techniques I’ve outlined here aren’t likely to become irrelevant in the next 5+ years, given that they’re based on how the underlying technology works, similar to how LLMs aren’t likely to 100% stop hallucinating any time soon. More than that, I actually think there’s a lot less incentive to work these minor kinks out than something like LLM hallucination, because these images already fool 99% of people, and who knows how much additional processing power it would take to render something like flawless tufts of grass, in a field that’s already struggling to make a profit given the high cost of generating this output. And if/when these techniques become invalid, I’ll put in the effort to learn new ones, as it’s worthwhile to be able to quickly and easily identify fakes.

    As much as I wholeheartedly agree that we need to think critically and evaluate things based on facts, we live in a world where the U.S. President was posting AI videos of Obama just a couple of weeks ago. He may be an idiot who is being obviously manipulative, but it’s naive to think we won’t eventually get bad actors like him manipulating narratives around current events, where we can’t simply fact-check against history, or weaving lies without obvious logical gaps; we’ll need some technique for verifying images to settle the inevitable “he said, she said” debates. The only real alternative is to never trust a new photo again, because we can’t 100% prove anything new hasn’t been doctored.

    We’ve survived in a world with fake imagery for decades now; I don’t think we need to roll over and accept AI as unbeatable just because it fakes things differently, or because it might hypothetically get better at hiding itself in the future.

    Anyway, rant over. You’re right, critical thinking is paramount, but being able to clearly spot fakes is a super useful skill to add to that kit, even if it can’t 100% confirm an image as real. I believe these are useful tools to have, which is why I took the time to point them out despite the image already having been shown not to be AI by others who dated it before I got here.


  • True, someone else did some reverse image searching before I got here, but I think it’s an important skill to develop without relying on dating the image, as that will only work for so long, and there will likely be more important things than memes that need to be proven or disproven in the future. A reverse image search probably won’t help us with the next political scandal, for example. It’s a pretty good backup to have when it applies, though, and it’s nice that it proves me right here.



  • I’d recommend you get some practice identifying and proving AI-generated images. I agree this has a bit of that “look”, but in this case I’m quite certain it’s just repeated image compression/a cheap camera. Here are the major details I looked at after seeing your comment:

    • The grass at the bottom left. AI is frequently sloppy with little details and straight lines, usually the ones in the background. In this case, you can pick any blade of grass and follow it, and its path makes sense. The same goes for the lines in the tiles, the water stains, etc.
    • The birthmark on the large brown dog. In this case, this is a set of three photos, which gives us an easy way to spot AI. AI-generated images start from random noise, so you’d never get the exact same birthmark, consistent across different angles, from a prompt like “large brown dog with white birthmark on chest”. Spotting a change in the birthmark, or a detail like it, would be a dead giveaway, but I can’t spot any.
    • There are other tricks as well, such as looking for strange variations in contrast and exposure from the underlying noise, but those are more difficult to explain in text. Corridor Digital has some good videos demonstrating it with visual examples if you’re interested, but suffice it to say I don’t pick up on that here either (there’s a rough sketch of the idea after this list).

    It’s useful to be able to prove or disprove your suspicions, as well as to be able to back them up with something as simple as “this is AI generated, just look at the grass”. Hope this helps!
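
    If you want to poke at that noise trick programmatically, here’s a rough sketch of the idea in Python. To be clear, this is purely illustrative and not a reliable detector; it assumes Pillow, NumPy, and SciPy are installed, and the 32 px block size and blur sigma are arbitrary choices of mine:

    ```python
    # Rough sketch of the "noise consistency" idea: real photos tend to
    # carry fairly uniform sensor noise across the frame, while generated
    # images often show patchy, uneven fine texture. A probe, not a detector.
    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_filter

    def noise_spread(path: str, block: int = 32) -> float:
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        # High-pass residual: subtract a blurred copy to isolate fine texture.
        residual = gray - gaussian_filter(gray, sigma=2)
        h, w = residual.shape
        stds = np.array([
            residual[y:y + block, x:x + block].std()
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)
        ])
        # A big patch-to-patch spread relative to the mean just means the
        # image deserves a closer manual look, nothing more.
        return float(stds.std() / stds.mean())
    ```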


  • Hazzard@lemmy.zip to Fediverse@lemmy.world · NSFW on Lemmy

    Exactly what I’ve done. I set my account to hide NSFW, blocked most of the “soft” communities like hot girls and moe anime girls and whatever else (blocking the lemmynsfw.com instance is a great place to start), and I use All frequently. That’s how I’ve found all the communities I’ve subscribed to, but frankly, my /all feed is small enough that I usually see all my subscribed communities anyway.



  • Ugh, this is exactly what our legacy product has: microservices that literally cannot be scaled, because they rely on internal state, and that are all deployed on the same machine anyway.

    Trying to do something as simple as updating Python versions is a nightmare, because you have to do all the work 4 or 5 times.

    Want to implement a linter? Hope you want to do it several times. And all for zero microservice benefits. I hate it.
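
    For anyone who hasn’t had the pleasure: “relies on internal state” looks something like this toy endpoint (a hypothetical Flask example of the anti-pattern, not our actual code), where the data lives in process memory, so a second replica behind a load balancer would immediately disagree with the first:

    ```python
    # Toy illustration of the anti-pattern: state kept in process memory.
    # Run two replicas behind a load balancer and each keeps its own
    # 'jobs' dict, so the answer depends on which instance you hit.
    from flask import Flask, jsonify

    app = Flask(__name__)
    jobs = {}  # in-memory state; dies with the process, invisible to replicas

    @app.route("/jobs/<job_id>", methods=["POST"])
    def create_job(job_id):
        jobs[job_id] = "queued"
        return jsonify(status="queued"), 201

    @app.route("/jobs/<job_id>")
    def get_job(job_id):
        # A replica that didn't handle the POST returns "not found" here.
        return jsonify(status=jobs.get(job_id, "not found"))
    ```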