because proper driverless cars properly use LIDAR, which doesn't give a shit about your skin color.
And can easily see an object the size of a child many metres ahead, and doesn't give a shit whether it's a child or not. It's an object in the path, or an object with a vector that's about to put it in the path.
So change "Driverless Cars" to "Elon's poor implementation of a Driverless Car"
Or better yet..
"Camera-only, AI-powered pedestrian detection systems" are Worse at Spotting Kids and Dark-Skinned People
You've extruded into air through a 0.4 nozzle and you got 0.6mm worth of die swell?? That seems excessive for any filament type.
What speed did you extrude that at?
There's just no way I could believe you could extrude into thin air through a 0.4 nozzle and end up with 1.2mm worth of swell with any material, unless it was undergoing an actual chemical reaction.
Something that expands THAT much after extruding would leave a monumental mess.
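For context on why those numbers seem excessive: die swell is usually expressed as the ratio of extrudate diameter to nozzle (die) diameter. A quick sketch of the arithmetic, using the diameters claimed in this thread (the "typical" range noted in the comment is my rough ballpark, not a measured spec):

```python
# Die swell ratio = extrudate diameter / nozzle diameter.
# 1.0 means no swell; thermoplastic filaments typically land somewhere
# modestly above 1.0 (rough ballpark, not a measured spec).

def swell_ratio(extrudate_mm: float, nozzle_mm: float) -> float:
    return extrudate_mm / nozzle_mm

NOZZLE = 0.4  # mm

print(round(swell_ratio(0.6, NOZZLE), 2))  # ratio for a 0.6 mm extrudate: 50% expansion
print(round(swell_ratio(1.2, NOZZLE), 2))  # ratio for a 1.2 mm extrudate: 200% expansion
```

A 1.2 mm extrudate out of a 0.4 mm nozzle would be a 3:1 swell ratio, which is the part being called out as implausible without a chemical reaction (foaming) going on.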
won’t add much to an existing array of visible spectrum cameras.
You do realize LIDAR is just a camera, but with an accurate distance per pixel, right?
It absolutely adds everything.
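To make the "camera with distance per pixel" framing concrete: each LIDAR return is effectively a pixel addressed by azimuth and elevation angles, carrying a measured range, which converts directly to a 3D point. A minimal sketch of that conversion (the angle/range values are made-up examples, not from any real sensor):

```python
import math

# Convert one LIDAR return (range + beam angles) to Cartesian coordinates.
# This is the sense in which LIDAR is "a camera with distance per pixel":
# every (azimuth, elevation) pixel yields a true 3D point.

def polar_to_xyz(range_m: float, azimuth_rad: float, elevation_rad: float):
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

# A return 20 m dead ahead at sensor height:
print(polar_to_xyz(20.0, 0.0, 0.0))  # (20.0, 0.0, 0.0)
```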
But its surroundings are reliably captured by functional sensors
No, it's not. That's the point: LIDAR is the functional sensor required.
You cannot rely on stereoscopic cameras.
The depth resolution is not there.
It's not there for humans.
It's not there for the simple reason of physics: stereo depth error grows with the square of distance and only shrinks linearly with baseline.
Unless you spread those cameras out to a width that's impractical, and even then it STILL wouldn't be as accurate as LIDAR.
You are more than welcome to try it yourself.
You can be even as stupid as Elon and dump money and reputation into thinking it's easier or cheaper without LIDAR.
It doesn't work, and it'll never work as well as a LIDAR system.
Stereoscopic cameras will always be more expensive than LIDAR from a computational standpoint.
AI will do a hell of a lot better recognizing things via a LIDAR camera than a stereoscopic camera.
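The physics claim above falls out of the standard stereo depth equation Z = f·B/d, where f is focal length in pixels, B is the camera baseline, and d is disparity: a matching error of Δd pixels becomes a depth error of roughly Z²·Δd/(f·B). A sketch with assumed numbers (the focal length, baselines, and sub-pixel matching error below are illustrative assumptions, not specs of any real system):

```python
# Stereo depth error model: dZ ≈ Z^2 * dd / (f * B)
# Derived from Z = f*B/d. Error grows quadratically with distance Z
# but is only reduced linearly by widening the baseline B.

def depth_error(z_m: float, focal_px: float, baseline_m: float,
                disparity_err_px: float = 0.25) -> float:
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

F = 1000.0  # focal length in pixels (assumed)
for baseline in (0.2, 1.0):      # 20 cm vs an impractical 1 m rig
    for z in (10.0, 50.0):       # target distance in metres
        print(f"B={baseline} m, Z={z} m -> error ≈ {depth_error(z, F, baseline):.3f} m")
```

With these assumed numbers, going from 10 m to 50 m inflates the error 25x, and widening the baseline 5x only claws back a factor of 5, which is the "spread those cameras out to an impractical width" point.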
challenges in self driving are not with data acquisition.
What?!?! Of course it is.
We can already run all this shit through a simulator and it works great, but that's because the computer knows the exact position, orientation, and velocity of every object in the scene.
In the real world, the underlying problem is that the computer doesn't know what's around it, or what those things are doing or are going to do.
It's 100% a data acquisition problem.
Source? I do autonomous vehicle control for a living. In environments much more complicated than a paved road with an accepted set of rules.
to be fair, at the height these things will be flying, you won't hear them on most days.