Yes, because when you run systems like that, you use the AI, and you have the people as a fallback for when the AI fails.
It was primarily watched by people in India because the AI was failing the vast majority of the time.
So yeah, the state-of-the-art AI is… failing at its job 70% of the time, instead of the hoped-for goal of 5%.
Nah, this problem is actually too hard to solve with LLMs. They don’t have any structure or understanding of what they’re saying, so there’s no way to write better guardrails… unless you build some other system that tries to make sense of what the LLM says, and that approaches the difficulty of just building an intelligent agent in the first place.
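To make that concrete, here’s a rough sketch (Python, with a made-up `call_llm` stand-in for whatever model API you’d actually use) of what most “guardrails” amount to in practice: schema and pattern checks on the raw text, retries, and a human fallback. Nothing in it understands what the model said, which is why it can catch malformed output but not wrong output.

```python
import json
import re


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model API is in use."""
    raise NotImplementedError


def guarded_answer(prompt: str, max_retries: int = 3) -> dict | None:
    """Ask the model for a JSON answer, re-asking if the output fails shallow checks.

    The 'guardrail' is just validation of the raw text: it rejects malformed
    output, but it has no idea whether the content is actually true.
    """
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # structurally broken output, try again
        # Shallow sanity checks: required field present, confidence in range.
        if not isinstance(data.get("answer"), str):
            continue
        if not (0.0 <= float(data.get("confidence", -1)) <= 1.0):
            continue
        # Filter boilerplate phrasing; says nothing about correctness.
        if re.search(r"as an ai language model", data["answer"], re.I):
            continue
        return data
    return None  # give up and escalate to a human reviewer
```

Anything smarter than that means building a second system that actually models what the answer should be, which is the hard part you were trying to skip.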
So no, if this law came into effect, people would just stop using AI for this; the risk is too cavalier. And imo, they probably should stop for cases like this unless there’s direct human oversight of everything coming out of it. Which, also, probably just wouldn’t happen.