As far as I can tell, this product never panned out. It was backed by 132 people for 150k GBP in 2017. It was called the “Cyclotron Bike”.
My ADHD tax is currently $377 every month for Vyvanse.
Maybe more apt for me would be, “We don’t need to teach math, because we have calculators.” Like…yeah, maybe a lot of people won’t need the vast amount of domain knowledge that exists in programming, but all this stuff originates from human knowledge. If it breaks, what do you do then?
I think someone else in the thread said good programming is about the architecture (maintainable, scalable, robust, secure). Many LLMs are legit black boxes, and it takes humans to understand what’s coming out, why, and whether it’s valid.
Even if we have a fancy calculator doing things, there still need to be people who do math and can check. I’ve worked more with analytics than LLMs, and more times than I can count, the data was bad. You have to validate before anything else; otherwise, garbage in, garbage out.
It sounds like a poignant quote, but it also feels superficial. Like, something a smart person would say to a crowd to make them go, “Ahh!” but that doesn’t hold water for long.
I generally agree. It’ll be interesting to see what happens with models, the datasets behind them (particularly copyright claims), and more localized AI models. There have been tasks where AI greatly helped and sped me up, particularly quick Python scripts to solve a rote problem, along with early, rough documentation.
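For what it’s worth, here’s a made-up example of the kind of rote script I mean (the data and function here are hypothetical, not something an AI actually produced for me): a quick one-off that sums a column by category, the sort of chore that’s faster to generate and verify than to hand-write.

```python
from collections import defaultdict

def total_by_category(rows):
    """Sum the 'amount' field per 'category' — a typical one-off data chore."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["category"]] += float(row["amount"])
    return dict(totals)

# Toy rows standing in for a CSV export or log dump.
rows = [
    {"category": "food", "amount": "12.50"},
    {"category": "food", "amount": "7.25"},
    {"category": "gas", "amount": "30.00"},
]
print(total_by_category(rows))  # {'food': 19.75, 'gas': 30.0}
```

The point isn’t the code itself; it’s that output like this is trivial for an expert to eyeball and validate, which is exactly why it’s a good fit for AI assistance.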
However, using this output as justification to shed head count is questionable for me because of the further business impacts (succession planning, tribal knowledge, human discussion around creative efforts).
If someone is laying people off specifically to gap fill with AI, they are missing the forest for the trees. Morale impacts whether people want to work somewhere, and I’ve been fortunate enough to enjoy the company of 95% of the people I’ve worked alongside. If our company shed major head count in favor of AI, I would probably have one foot in and one foot out.
This has been my general worry: the tech is not good enough, but it looks convincing to people with no time. People don’t understand you need at least an expert to process the output, and likely a pretty smart person for the inputs. It’s “trust but verify”, like working with a really smart parrot.
It’s a fair point. I was talking more so about just generalized bundling. I think both are accurate.
That’s just going back to cable. 🙃
You look on at the festive dish that’s seemingly grown consciousness. Others impatiently wait behind you, expecting you to dig in.
There was a similar study reported the other day about using fMRI imaging and AI to recreate the “thought content” of someone’s brain. It required training the AI on that specific person’s brain, among other training. It does seem these techniques can work with some specialized models, but yeah, it doesn’t seem like hooking someone’s brain up to this would create a movie of their mind or something.
I think the more dangerous part is “This is step 0,” given that this tech would have seemed impossible 10 years ago. Very strange times.
Game designer.
I’m a Director of Game Design now.
Ah yes. Paying for privacy on a walled garden website. Genius business moves.
I think the internet thinks economics is a hard science. I think it’s mostly due to the math involved.
This is a moment. Take it bird by bird.
Imbroglio (https://apps.apple.com/us/app/imbroglio/id969264934) is one of my favorite minimalist paid iOS games. I haven’t played it in a while, but it’s a unique dungeon puzzle game where you place attacks as floor tiles on the board ahead of playing. There are consistent rules with ramping challenge, which made it super replayable for me. I loved trying different floor designs and finding strategies, and there’s a small progression system that’s fun. It hasn’t been updated in a few years, but it’s a great design despite the rough appearance.
What if Mark has been a sentient AI for some time?