How is this different from the capabilities of Tesla’s FSD, which is considered level 2? It seems like Mercedes just decided they’ll take on liability to classify an equivalent level 2 system as level 3.
Ummm, yeah, that’s the real difference between level 2 and level 3: who is liable.
Ah so it’s marketing BS then, got it.
No… it means they’re confident enough to assume the risk; Tesla is not. They’ve been using their tech in Europe for a while now without issue, while Teslas still love to hit a variety of new and exciting objects.
And that’s a huge difference for consumers. I would never use a self-drive feature where I’m still responsible; that’s pointless and would just create more anxiety for me.
They’re assuming liability, but that doesn’t mean the system is safer or more capable than other systems.
They’re not confident enough to assume the risk if you look at the requirements you have to meet to use it: under 40 mph, on approved freeways, in heavy traffic, during daylight hours, with clear skies and clearly painted lane markings. This is essentially useless for the majority of people; it’s just going to inch ahead for you in gridlock traffic, provided the road meets all the other requirements.
In California, that actually sounds extremely useful.
Yeah, I don’t really understand it either. Under those conditions, any comparable level 2 system would operate without ever requiring the driver to take over.
According to the Mercedes website, the cars have both radar and lidar sensors. Teslas have radar only (no lidar), but Tesla apparently decided to move away from radar toward optical-only sensing; I’m not sure whether radar currently plays any role in FSD.
That’s important because FSD relies on optical sensors alone to tell not only where an object is, but that it exists at all. Based on videos I’ve seen of FSD, I suspect that if it hasn’t ingested the data to recognize, say, a plastic bucket, it won’t know that the bucket isn’t just part of the road (or at best it can tell that the road looks a little weird). A radar or lidar sensor, though, directly measures distance and builds 3-D data about the world without needing to recognize objects. That means the system can say “hey, there’s something there I don’t recognize, time to hit the brakes and alert the driver about what to do next”.
Of course, this still leaves a number of problems, like understanding at a higher level what happened after an accident. My guess is there will still be issues.
It’s also limited to slow traffic on some roads.
“DRIVE PILOT can be activated in heavy traffic jams at a speed of 40 MPH or less on a pre-defined freeway network approved by Mercedes-Benz.” https://www.mbusa.com/en/owners/manuals/drive-pilot#:~:text=DRIVE PILOT can be activated in heavy traffic jams at a speed of 40 MPH or less on a pre-defined freeway network approved by Mercedes-Benz.
You’ve inadvertently pointed out how Tesla deliberately skirts the law. Teslas are way more capable than what level 2 describes, but they choose to stay at level 2 so they don’t have to take responsibility for their public testing.
Yeah, it’s pretty much an insurance product. They came up with a set of boundary conditions someone would underwrite for their “stay between the lines” tech.
It’s not about the sensors; it’s about the software. That’s the solution.
Please tell me how software will be able to detect objects in low-light or no-light conditions if, say, the cameras have poor dynamic range and no low-light sensitivity?