“Everything I don’t like is posted in bad faith”
I really wish this just said life before the internet.
I think the problem is that certain views are much stronger indicators of someone being willing to eventually shove their views down your throat. If I were a big corporation shopping for, say, spam filter software, I’d rather sign a 3-year contract with a regular company than, for example, a company that is openly fundamentalist Christian. Why? Because the Christians are much more likely to start randomly making ridiculous changes that only make sense to other Christians, like spam-filtering anything with the word “Allah”, etc. They may not do that now, but I need to look further than just right now, because I don’t want to get locked into an ecosystem that is going to turn sour. Sure, I can always switch, but why not just choose the one with less risk of that at the outset?
Now some beliefs that I disagree with are less like this than others. For instance, if the devs disagreed with me about their favorite movies, I’m not going to take that into consideration, because that’s not the sort of belief whose holders are likely to abuse their power to advance it. But transphobia? That is exactly the sort of thing that someone, as has been proven many times now, will sit on and downplay until they are given the power and influence to act on it. Using their software contributes to their influence, especially in the browser world.
Lastly, all other things equal, I’d rather use the product of a team full of smart people than a team full of dumb people. Transphobia is a dumb belief to have; it is a result of being unintelligent. Many smart people (and let’s be honest, especially developers) won’t want to work with someone like that. Whether you think that’s reasonable or not, it’s hard to deny. It’s certainly hard to picture any great trans developers wanting to contribute. So a lot of things add up, especially when looking a few links down the causal chain, to make it more than just a matter of whether they believe differently than I do.
Yeah, same. I respect the huge amount of work it takes to make a suite like that, but… I’m lucky I’ve worked with Blender a lot, which gave me a good impression of open source software. If LibreOffice were the first thing I experimented with in the open source world (and I think for many, many people it probably is), I would probably think “wow, open source software is a joke, I guess you get what you pay for after all”. It really makes a horrible impression. I wonder why LibreOffice has so many usability pains compared to Blender, despite the fact that both applications have very high demand. Maybe it’s just that LibreOffice seems really dull to contribute to?
Yeah, and this is before we even get into availability-heuristic biases that would screw over even people who do understand percentages. Most people are very bad estimators. If they live in a town with 40% Hispanic people, they’re gonna overestimate the total % of Hispanic people nationwide.
For sure. Many Americans are confused by percentages. They do not understand that 20% is equivalent to saying “in a room of 100 people, 20 of them are trans”, and even if they did understand that, they wouldn’t have the proactive reasoning to make sure their percentage estimates add up and overlap in a way that makes sense, e.g. not implying that the same 20 people in the room are all Hispanic Asian atheist Catholic bisexual transgender millionaires.
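To make the overlap point concrete, here’s a minimal sketch in Python (the six traits and the 20% figures are hypothetical, just mirroring the room-of-100 example above). The Fréchet inequalities bound what fraction of the room could be in every group at once, given only the per-group estimates:

    # Frechet bounds: given estimated shares for several groups, the share of
    # people belonging to ALL of them at once is constrained by the marginals.
    def intersection_bounds(shares):
        """Return (lower, upper) bounds on the joint share of all groups."""
        k = len(shares)
        lower = max(0.0, sum(shares) - (k - 1))  # overlap is forced only when shares sum past 100%
        upper = min(shares)                      # the joint share can't exceed the smallest group
        return lower, upper

    # Six traits, each estimated at 20% of the room:
    print(intersection_bounds([0.20] * 6))  # -> (0.0, 0.2)

The lower bound is zero, so nothing in the estimates requires any overlap at all; imagining the same 20 people carrying all six traits assumes perfect correlation, which is exactly the incoherence described above.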
Post-Zuckerberg? I guess I’m confused about the eras of Facebook. He’s still CEO, isn’t he? Wouldn’t that make the whole history of the company the Zuckerberg era?
Probably a win-win in Musk’s eyes
Vertical video is better for content focused on a single standing performer, because it allows as much of the screen resolution as possible to show the body. Horizontal is better for a performer lying down or any traditional horizontal sex acts, for the same reason.
I’m probably reading a little too far into this, but IMO Gen Z is much less interested in “simulations” of intercourse and is more interested in something “real”, i.e. someone doing a dance. Intercourse feels like a fantasy: you’re supposed to imagine that you’re the one having intercourse, and it’s that fantasy which is appealing. Something like dancing or dirty talking is more honest about what it is, since a video of someone dancing or talking is essentially the same experience as if they were actually there in front of you. I believe that because Gen Z is more digitally native than older generations, they see digital content not as a substitute or fantasy for a real thing, but rather as a real thing in itself, and the nature of the content they consume reflects that.

Another example of this is the shift from real-life streamers, who fake personalities but pretend they are presenting their real selves, to vtubers, who implicitly acknowledge that they are playing a fictional character for their stream, as symbolized by their avatars. The human streamers are a fantasy substitute for a real human friend, but with a vtuber the content does not pretend to be different from what it actually is: a pretend character putting on a show for your enjoyment. By acknowledging its artificiality and integrating it into the content itself, the stream shifts from being something “fake” and “simulated” to being something “real”. To me it’s the exact same dynamic manifesting in a different area.
Now of course, I do understand that vertical content also simply means you don’t need to rotate your phone, and that Gen Z is almost exclusively using the Internet on the phone, whereas older generations use the desktop. But this too is essentially a reflection of the feeling that digital content is not an artificial recreation confined to a specific display area (a TV or computer) but rather perpetually available (your phone), as would be appropriate for something which has taken on the status of being real rather than fake. The two forces reinforce each other, imo.
I think the difference between middle management and C-suite is nothing more than (1) the extent to which you can keep yourself from drinking your own psychotic kool-aid, and (2) the ability to present your psychotic kool-aid tactfully.
This is such a ridiculous thing to say, and sadly, I think that it’s only on a subconscious level that he’s saying this to help MS AI products. Having interacted with many such people, I think they repeat the corporate line so much they actually end up buying into it themselves. CEOs are smart enough to know they’re saying psycho bullshit. Middle management ends up fooling themselves, and as a result of their true belief, they don’t see any need to mask their insane nonsense ideas like “we should tell the people we lay off to talk to AI, maybe they’ll use ours”. A CEO at least has the sense to realize “this makes me look like a psycho who is trying to milk money out of people I fired. I’ll save this idea for a closed-doors board meeting or something, not LinkedIn”.
I believe this difference is what makes middle management so uniquely despicable. It’s sort of like how (at least for me) I am less frustrated by televangelist con artists, who clearly don’t believe what they say and are just trying to get rich, than by the substantially more deranged megachurch pastors who actually are true believers yet still fool themselves into thinking it’s reasonable to have a private jet. At least the televangelist knows what they are and (privately) owns up to their bastard nature. But the true believer wants it all. Not only do they want the direct satisfaction of their greed, but they want to feel morally justified in it.
Those are some good nuances that genuinely forced me to refine my thinking, thank you! I’m actually not claiming that the brain is the sole boundary of the real me; rather, it is the majority of me, and my body is a contributor. The real me does change as my body changes, just in less meaningful ways. Likewise, some changes in the brain change the real me more than others. However, regardless of what constitutes the real me (and believe me, the philosophical rabbit hole there is one I love to explore), in this case I’m really just talking about the straightforward immediate implications of a brain implant on my privacy. An arm implant would also be quite bad in this regard, but a brain implant is clearly worse.
There have already been systems that can reconstruct very rough, garbled images of what people are thinking of. I’m less worried about an implant that tells me what to do or controls me directly, and more worried about an implant that has a pretty accurate picture of my thoughts and reports it to authorities. It’s surely possible to build a system that can approximate positive or negative mood states, and in combination this is very dangerous. If the government can tell that I’m happy when I think about Luigi Mangione, then they can respond to that information however they want. Eventually, in the same way that I am conditioned by the panopticon to stop at a stop sign, even in the middle of a desolate desert where I can see for miles around that there are no cars, no police, no cameras, nothing that could possibly make a difference to me running the stop sign, the system will similarly condition automatic compliance in thoughts themselves. That is, compliance is brought about not by any actual exertion of power or force, but merely by the omnipresent possibility of its exertion.
(For this we only need moderately complex brain implants, not sophisticated ones that actually control us physiologically.)
I am not depressed, but I will never get a brain implant for any reason. The brain is the final frontier of privacy, it is the one place I am free. If that is taken away I am no longer truly autonomous, I am no longer truly myself.
I understand this is how older generations feel about lots of things, like smartphones, which I am writing this from, and I understand how stupid it sounds to say “but this is different!”, but like… really. This is different. Whatever scale smartphones, driver’s licenses, personalized ads, the internet, smart home speakers… whatever scale all these things lie on in terms of “panopticon-ness”, a brain implant is so exponentially further along that scale as to make all the others vanish to nothingness. You can’t top a brain implant. A brain implant is a fundamentally unspeakable horror which would inevitably be used to subjugate entire peoples in a way so systematically flawless as to be almost irreversible.
This is how it starts. First it will be used for undeniable goods: curing depression, psychological ailments, anxiety, and so on. Next thing you know it’ll be an optional way to pay your check at restaurants, file your taxes, read a recipe: convenience. Then it will be the main way to do those things, and then suddenly it will be the only way to do those things. And once you have no choice but to use a brain implant to function in society, you’ll have no choice but to accept “thought analytics” being reported to your government and corporations. No benefit is worth a brain implant; don’t even think about it (but luckily, I can’t tell if you do).
I sometimes wonder if the way my dog thinks internally, if mapped as accurately as possible into English by some mental translator tool, would be cutesy and stereotypically doglike, or if their intense excitement would map most closely to curse words instead.
I’ll be very interested to some day figure out what the explanation for this is. It’s extremely bizarre and very creepy. Also, it’s crazy that Internet access can just be whisked away so easily by the government. I guess satellite is just about the only way around that.
In the same sense that some users might post only articles about ICE in California, or only articles about hurricanes in Florida, I still think that’s not very strange. Some people are particularly invested in specific topics. Maybe the author is, or is close to, a rape victim and is therefore especially interested in the topic. People dedicate their whole lives and careers to specific activist topics, so I don’t think it’s too strange for someone to dedicate most of their posting activity on one particular website to one. Anyways, I’m not sure what the ulterior motive would be here; what do you think is the real reason for posting so many articles about rape?
But reasoning about it is intelligent, and the point of this study is to determine the extent to which these models are reasoning or not, which again has nothing to do with emotions. And furthermore, the real question here is my initial one: whether pattern following should automatically be disqualified as intelligence, as the person summarizing this study (and notably not the study itself) claims.
Sorry, I can see why my original post was confusing, but I think you’ve misunderstood me. I’m not claiming that I know the way humans reason. In fact, you and I are in total agreement that it is unscientific to assume hypotheses without evidence. This is exactly what I am saying is the mistake in the statement “AI doesn’t actually reason, it just follows patterns”. That is unscientific if we don’t know whether “actually reasoning” consists of following patterns or something else. As far as I know, the jury is out on the fundamental nature of how human reasoning works. It’s my personal, subjective feeling that human reasoning works by following patterns. But I’m not saying “AI does actually reason like humans because it follows patterns like we do”. Again, I see how what I said could have come off that way. What I mean more precisely is:
It’s not clear whether AI’s pattern-following techniques are the same as human reasoning, because we aren’t clear on how human reasoning works. My intuition tells me that humans doing pattern following is just as valid an initial guess as humans not doing pattern following, so shouldn’t we have studies to back up whichever direction we lean?
I think you and I are in agreement, we’re upholding the same principle but in different directions.
But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we’re not looking for emotional judgements; we’re trying to evaluate logical reasoning capabilities. A sociopath would be just as capable of solving logic puzzles as a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions. So I’m not sure that emotions have much relevance to the topic of AI or human reasoning and problem solving, at least not this particular aspect of it.
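For concreteness, here’s a minimal sketch of the kind of simple program I mean: the classic recursive Towers of Hanoi solver in Python (the peg names and disk count are arbitrary):

    # Recursive Towers of Hanoi: moves n disks from `source` to `target`
    # via `spare`, producing the provably optimal 2**n - 1 move sequence.
    def hanoi(n, source, target, spare, moves):
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)  # clear the smaller disks out of the way
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top

    moves = []
    hanoi(3, "A", "C", "B", moves)
    print(len(moves), moves)  # 7 moves for 3 disks

A few lines, no emotions, and perfect play at any disk count; whatever is happening here, emotional judgement clearly isn’t the load-bearing part of the puzzle.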
As for analogizing LLMs to sociopaths, I think that’s a bit odd too. The reason we (stereotypically) find sociopathy concerning is that a sociopath has their own desires which, in combination with a disinterest in others’ feelings, incentivize them to be deceitful or harmful in some scenarios. But LLMs are largely designed specifically to be servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we’re giving them far too much credit as sentient autonomous beings, and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack a lot of the other subsystems that are necessary for an entity to function in a way that can be considered autonomous, having free will, desires of its own choosing, etc.
Yeah this is exactly what turned me off from it when I looked into it. I kind of like that it would lend a more physical-space quality to it, but ultimately I’m hardly ever online, so it would just be me being totally out of the loop all the time without a bouncer. I know I could figure out how to do it, but it’s a lot of effort for something where I’m not even sure I’ll like what it gives me.