About me on lionir.ca
Images aren’t federated through ActivityPub, so I don’t really see how deleting media is supposed to work.
Yes, they are. Every instance downloads everyone’s images for a “cached” version that is currently never used. This is what makes this problem especially insidious and straight up dangerous in cases like CSAM.
It’s a basic curl command; that shouldn’t be “arcane” if you’re setting up a server.
This is the equivalent of saying that every instance admin needs to know how to use curl, while most people have never used a command line. Not only that, but you need machine access to learn the API key, which I would wager many instance admins do not have.
I think this is the result of not prioritising work that makes moderation possible for non-technically inclined people, and it is genuinely a failure of the system.
Development priorities on Lemmy are decided by developers, and people who are not developers are simply pushed away. Most community leaders and moderators are not developers. The mental gymnastics used to justify this lack of tooling are tiring.
They can, if they read the manual. Mods can’t, but instance admins can.
Yes, if you run arcane commands following docs that sit in a pull request that has not yet been merged. That is not accessible to many instance admins, and it is only “technically supported”, which is the worst kind of support from my point of view.
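For reference, the command in question probably looks something like this - a minimal sketch assuming pict-rs’s internal purge endpoint, where the host, port, and image alias are placeholders and the API key has to be read out of the server’s pict-rs configuration:

```bash
# Hypothetical example: purging one cached image from a Lemmy
# instance's pict-rs store. The alias, host, and port are
# placeholders; PICTRS_API_KEY must match the api_key set in the
# pict-rs config, which requires machine access to read.
curl -X POST \
  -H "X-Api-Token: ${PICTRS_API_KEY}" \
  "http://127.0.0.1:8080/internal/purge?alias=example-image.jpg"
```

Even this one-liner assumes shell access to the server to dig the API key out of the config, which is exactly the accessibility problem.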
I remember hearing this story a long time ago. It’s still so shocking that this happened.
This is just enlightened centrism. No. Nobody needs to defend the harms done by technology.
We can accept the harm if the good is worth it - we have no need to defend it.
LLMs can work without the harm.
It makes sense to make technology better by reducing the harm it causes when it is possible to do so.
I mean, I don’t understand the point of encryption that unintended parties can decrypt. It just seems like theatre to me.
But yeah, obviously the intended parties have to be able to decrypt it. I messed up in my wording.
This is a false equivalence. Encryption only works if nobody but the intended parties can decrypt it. LLMs work even if you censor illegal content from their output.
I personally think it’s likely. Facebook is one of the companies targeted by the EU’s DMA, and since they co-authored this standard, it seems likely they’d want to use it to comply with the DMA. If Facebook uses it, others will adopt it because of Facebook’s sheer control over messaging services.
they are.
We can try to rationalize it, sure, but I think that doing so is generally a disservice. I don’t want to make decisions based on this severity scale. The people who suffer from these problems are all equally deserving of help.
I mean, maybe calling it evil is part of the problem?
I call it evil because it is intentional and premeditated.
There are degrees in everything. Punching somebody is less bad than killing somebody.
Trying to grade everything by degrees is bound to reveal ignorance and imply that certain things are more acceptable than others.
I don’t want to hurt people with my ignorance and I do not want to tell someone that what they experienced is less bad than something else. They are bad and we’ll leave it at that.
Btw it’s totally humane because we invented the shit.
I am working with this definition: “Characterized by kindness, mercy, or compassion”. There is a difference between human-made and humane.
No. I think that it would still be bad if it were self-use because it is ultimately doing something that someone doesn’t consent to.
If you were to use this on yourself or someone consenting, I see no issues there - be kinky all you want.
Consent is the core foundation for me.
The reason why imagining someone is different is that it is often less intentional - thoughts are not actions.
Drawing someone to resemble someone you know is very intentional. Even worse, there is a good chance that if you are drawing someone you know naked, you never asked for their consent because you know you wouldn’t get it.
I don’t like grading evil for this very reason, so I will refrain from doing so - thank you for catching me doing that.
That said, AI CSAM could enable other forms of abuse through blackmail. I can also see very harmful things happening to a child or teenager because people may share this material in a targeted way.
I think both are inhumane and disgusting.
Does imagining a different partner while having sex or masturbating count? I would imagine most people would say, “no”.
You can’t share that, though, so while I still think it is immoral, it is also kind of impossible to know.
Now a highly skilled portrait artist paints a near replica of somebody he knows, but has never seen in the nude. They never mention their friend by name, but the output is lifelike and unmistakably them.
Maybe a digital artist finds a few social media pictures of a person and decides to test drive Krita, manipulating them into appearing nude.
Those would be immoral and reprehensible. The law already protects against such cases on the basis of using someone’s likeness.
It’s harmful because it shares images of someone doing things they would never do. It’s not caricature, it’s simply a fabrication. It doesn’t provide criticism - it is simply erotic.
Taking that a step further, is it illegal to find somebody’s doppelganger and to dress them up so that they look more like their double?
If the goal is to look like a specific person, I would imagine the law could address it. Otherwise, it is simply coincidence; there’s no intent there.
I don’t think it is a stretch or slippery slope. Just as a picture is captured by a camera, a drawing is captured by a person or a machine.
Both should be the same and it is often already the case in many jurisdictions around the world when it comes to CSAM.
How is AI pedophile stuff worse than actual pedophile stuff?
It’s not worse - it’s just as bad.
Everybody gets horny, idiot.
Please don’t call people idiots needlessly.
Does it matter if someone jerks off to JaLo in the Fappening or some random AI generated BS?
The issue is that this technology can be used to create somewhat realistic pornographic material of anyone without their consent. For creators and the average person, this is incredibly harmful. I don’t want porn of myself to be made, and neither do a lot of creators online.
Not only are these images an affront to people’s dignity, but it can also be incredibly harmful for someone to see porn of themselves that they never made, with their face on someone else’s body.
This is a matter of human decency and consent. It is not negotiable.
As mentioned by @ram@lemmy.ca, this can also be used for other harmful things like CSAM which is genuinely terrifying.
I haven’t played Nier, but I’d say that perfectly describes Platinum Games’ games (or at least the ones I’ve played).
Bayonetta has basic combat? Isn’t Platinum known for their combat and gameplay?
I think this is the CPU you’re looking for: https://www.amd.com/en/products/cpu/amd-ryzen-5-5600x
Yeah, sorry, you’ve been interacting in bad faith in this entire thread. We will not allow that kind of behaviour here.
@rikudou@lemmings.world Seems the bot does duplicates sometimes, might want to look into that.
The DMA (Digital Markets Act) has clauses that force big companies that are considered “gatekeepers” to allow interoperability with other services.