• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 24th, 2023

  • Do they really? Carving into people’s flesh causes controversy? The US sure is wild.

    Even if some of my examples do occasionally cause controversy in the US (I do realize you lot tend to idealize free speech as an absolute rather than as a freedom that, although very important, is always weighed against other very important rights like security and bodily autonomy), they still stand as limits to free speech that the large majority generally accepts. Accepts enough, at least, that those controversies don’t end in the blanket decriminalization of mutilation and vandalism. So I still dispute the claim that my stance is not “the default opinion”. It may rarely be formulated this way, but I posit that the absolutism you defend is, in actuality, the rarer opinion of the two.

    The restriction of free speech your initial comment builds on is a fringe consequence of the law in question, and it doesn’t even stop the information from circulating, only the tools you can use to write it. My point is that this is not at all uncommon in law, even in American law, and that it does not, in fact, prevent information from circulating.

    Because you never explain why the circulation of information is important for a healthy society, your answer stays really vague. The single example you give doesn’t help: if scientific and tech-related information were truly free to circulate, scientists wouldn’t need sci-hub. And if that were the main idea, universities would be free in the US (the country that values free speech the most) rather than in European countries that take a much more relative view of it. The well-known “everything is political” is the reason you don’t restrict free speech to explicitly political statements: how would you draw that line in law? It’s easier and more efficient to make the right general, then carve out exceptions on a case-by-case basis (confidential information, hate speech, calls for violence, threats of murder…).

    Should confidential information be allowed to circulate from your ex-President to Putin, then?



  • Yeah, a lot of speech is restricted. Restricting speech isn’t bad in itself; it generally only becomes a problem when it’s used to suppress political opposition. Copyright, hate speech, death threats, doxxing, personal data, defense-related confidentiality… those are all kinds of speech that are strictly regulated when they’re not outright banned, for the express purpose of guaranteeing safety, and that’s generally accepted.

    In this case it isn’t even the content of the speech that’s restricted. Only a very particular medium, one that consists of generating speech through a poorly understood method of rock carving, is restricted, and only when it’s applied to what is argued to be a sensitive subject. The content of the speech isn’t in question at all. You can’t carve a cybersecurity text into the flesh of an unwilling human either, or even paint it on someone’s property, but you can generate exactly the same speech with pen and paper and it’s a-okay.

    If your point isn’t that the unrelated scenarios in your original comment are somehow the next step, I still don’t see how that’s bad.

  • That’s absolutely true: generative AI is mostly a parlor trick with very few applications beyond placeholder art and faster replies to emails. But even for your kind of engineering problem, there’s still a big issue that’s often disregarded.

    If we keep your example of an AI for a city grid, an important part of this type of engineering problem is guaranteeing that the system has as few catastrophic failures as possible (usually fewer than 1 per 10⁹ hours of uptime for systems where “catastrophic” means a certain number of dead bodies or a very high monetary cost, like a city grid, train signalling, flight control…). AI models may well end up being discarded for those problems, because even if you observe better accuracy in simulations and experiments, mathematically proving that 10⁹ figure is impossible when we don’t know how the model works. Demonstrating such a threshold experimentally is possible in principle, but a 10⁹ figure would require something like centuries of concurrent testing in every city in the world…

    I’ve just had a class with this exact example for trains. They were testing a system that reads signalling with a camera, as a step towards a more autonomous train. Deep learning performed better than classical image processing, but image processing lets you prove that the train will misread a signal less than x% of the time with far higher certainty than a black box can, so they had to go with the latter if they ever wanted to pass safety certification.
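
    To give a rough sense of the scale, here’s a back-of-the-envelope sketch of my own (not from the class above), using the standard “rule of three” bound for zero observed failures; the 1,000-deployment figure is just an illustrative assumption:

        # Rough estimate: test hours needed to *demonstrate* (not prove) a
        # catastrophic-failure rate below 1 per 1e9 operating hours.

        TARGET_RATE = 1e-9        # allowed catastrophic failures per hour of uptime
        HOURS_PER_YEAR = 24 * 365

        # "Rule of three": with zero failures observed, bounding the rate below
        # TARGET_RATE at ~95% confidence needs roughly 3 / TARGET_RATE test hours.
        required_hours = 3 / TARGET_RATE   # ~3e9 hours

        print(f"Single system: ~{required_hours / HOURS_PER_YEAR:,.0f} years of testing")

        # Even with, say, 1,000 identical deployments running in parallel:
        parallel = 1_000
        print(f"{parallel} parallel deployments: "
              f"~{required_hours / (parallel * HOURS_PER_YEAR):,.0f} years")

    That comes out to roughly 340 years even with a thousand systems running around the clock, which is why certification leans on proofs about the algorithm rather than on field testing alone.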

    So I guess deep learning explainability might be an even more significant challenge than finding a dataset that isn’t racially biased…