That's a reasonable basic overview.
I'm surprised that rotating scanners are still used. It's been twenty years since Velodyne built their first one. They work OK, but cost too much. I was expecting flash LIDAR or MEMS mirrors to take over. Continental, the auto parts company, bought the leading flash LIDAR company over a decade ago, but the volume market a big parts company needs never appeared.
Waymo is still using rotating LIDARs even for the little ones at the vehicle corners. Those need less range. There needs to be a cheap, flush-mounted replacement for those things. The location is too vulnerable. Maybe millimeter phased array radar mounted behind Fiberglas body panels. Waymo needs to solve that problem before they do New York.
The LIDAR on top may not be a problem. Insisting that it has to go away to "look like a car" is like insisting that cars had to have the form factor of horse-propelled buggies. Early cars looked like buggies, but that didn't last.
One big advantage of pulsed LIDAR over continuous is that the interference problem between identical units is much less. The duty cycle is tiny. Data from one pulse round trip is collected in less than a microsecond. Just put some randomization in the pulse timing and getting multiple conflicts in a row goes away.
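To put rough numbers on that, here's a toy simulation in Python (not any vendor's actual scheme; the 10 µs pulse period, 1 µs listening window, and jitter amount are made-up round numbers). With fixed timing, an unlucky phase alignment between two units repeats on every single pulse; with a little per-pulse jitter, the occasional conflict still happens, but long runs of consecutive conflicts disappear.

    import random

    # Toy model of two pulsed lidars firing at the same nominal rate (assumed
    # numbers, not any real unit's timing). A "conflict" is a pulse from unit B
    # landing inside unit A's listening window around one of A's own pulses.
    PULSE_PERIOD_US = 10.0    # assumed time between pulses
    LISTEN_WINDOW_US = 1.0    # round-trip collection window, per the comment above

    def pulse_times(n, jitter_us, phase_us):
        """n pulse timestamps, each interval dithered by +/- jitter_us."""
        t, out = phase_us, []
        for _ in range(n):
            out.append(t)
            t += PULSE_PERIOD_US + random.uniform(-jitter_us, jitter_us)
        return out

    def longest_conflict_run(jitter_us, n=10000):
        """Longest run of consecutive A pulses that collide with some B pulse."""
        a = pulse_times(n, jitter_us, phase_us=0.0)
        b = pulse_times(n, jitter_us, phase_us=0.3)   # unlucky phase alignment
        run = best = j = 0
        for ta in a:
            # both lists are sorted; advance j to the B pulse nearest ta
            while j + 1 < len(b) and abs(b[j + 1] - ta) < abs(b[j] - ta):
                j += 1
            if abs(b[j] - ta) < LISTEN_WINDOW_US:
                run += 1
                best = max(best, run)
            else:
                run = 0
        return best

    print("fixed timing:", longest_conflict_run(0.0))  # every pulse conflicts
    print("with jitter :", longest_conflict_run(2.0))  # runs shrink to a handful

The exact numbers don't matter; the point is that uncorrelated timing error turns a persistent collision into isolated glitches that are easy to reject.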
Waymo has in-house radar, I think in the ~70 GHz window in the atmospheric absorption spectrum. The units are pretty obvious: flat panels about the size of a paperback book, mounted near the other sensors, IIRC.
The old Velodyne units were actually susceptible to damage if you left two of them running right next to each other. At one point I heard a proposal, for a different but similar unit, to use GPS time to sync the rotations of all the live units so they wouldn't be pointed at each other, but in practice it didn't seem to be a huge issue.
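For what it's worth, the proposal is simple to sketch. Here's a hypothetical illustration in Python (the 10 Hz spin rate, bearings, and beam width are all made-up numbers, and as noted above this was only ever a proposal): each unit derives its spindle azimuth from a shared GPS time base plus a per-unit phase offset, so units locked to the same phase never stare into each other, while units 180 degrees out of phase meet face-to-face once per revolution.

    # Hypothetical sketch of the GPS-time sync idea mentioned above (a proposal,
    # never a shipped feature as far as I know). Each unit computes its azimuth
    # from shared GPS time plus a per-unit phase offset.
    SPIN_HZ = 10.0  # assumed rotation rate

    def azimuth_deg(gps_time_s, phase_deg):
        """Azimuth a unit points at, derived from GPS time plus a phase offset."""
        return (gps_time_s * SPIN_HZ * 360.0 + phase_deg) % 360.0

    def mutual_stare(phase_a, phase_b, bearing_a_to_b=90.0, beam_halfwidth=2.0, steps=3600):
        """Do the two units ever point at each other during one revolution?"""
        bearing_b_to_a = (bearing_a_to_b + 180.0) % 360.0
        for i in range(steps):
            t = i / (steps * SPIN_HZ)  # sample one full revolution
            a_hits_b = abs((azimuth_deg(t, phase_a) - bearing_a_to_b + 180) % 360 - 180) < beam_halfwidth
            b_hits_a = abs((azimuth_deg(t, phase_b) - bearing_b_to_a + 180) % 360 - 180) < beam_halfwidth
            if a_hits_b and b_hits_a:
                return True
        return False

    print("same phase    :", mutual_stare(0.0, 0.0))    # False: never face each other
    print("opposite phase:", mutual_stare(0.0, 180.0))  # True: face-to-face once per rev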
BTW I once gave you guff about continuing to bring up Conti's flash LIDAR, and in retrospect I wish I hadn't, I really enjoy your contributions here.
> They work OK, but cost too much.
Costs have dropped dramatically in the past 20 years and continue to do so.
> There needs to be a cheap, flush-mounted replacement for those things.
Why? The corners are the optimal mounting position for maximum visibility. Corner mounting allows the car, in effect, to see around corners in ways no centrally mounted sensor can.
> Waymo needs to solve that problem before they do New York.
What? Because of vandalism?
Have you ever seen the corners of a car that has been parked in a big East-coast city? They will sustain damage during the course of normal operation and storage, and many people will not stop and leave their insurance information, especially if the damage is perceived as minor and happens while the car is parked and the owner not present. Currently, the corners of a car are relatively non-critical to its function and usually not too expensive to repair. If both of those change, we'll see more expensive damage that is more challenging to repair as well as less likely to be handled by the responsible party.
Also, having the sensors stick out from the corners makes the car's collision box and turning radius bigger. That doesn't help in any tight situation, but I imagine that's not that different between e.g. SF and New York. What is different is the sheer volume of cars and pedestrian activity.
Here's an interesting "lidar gem" from Hacker News a few years ago:
https://news.ycombinator.com/item?id=33554679
Lidar obstacle detection algorithm from a Git repo leaked onto Tor
This is a drivable region mapping (obstacle detection) algorithm found in what appears to be a git repo leaked from an autonomous vehicle company in 2017. The repo was available through one or more Tor hidden services for several years.
The lidar code appears to be written for the Velodyne HDL-32E. It operates in a series of stages, each stage refining the output of the previous stage. This algorithm is in the second stage. It is the primary obstacle detection method, with the other methods making only small improvements.
The leaked code uses a column-major matrix of points and it explicitly handles NaNs (the no-return points). We've rewritten it to use a much more cache-efficient row-major matrix layout and a conditional that will ignore the NaN points without explicit testing.
This is an amazingly effective method of obstacle detection, considering its simplicity.
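For anyone curious what that NaN trick looks like in practice, here's a minimal sketch (mine, not the leaked code) of the two details called out above: a row-major grid of points, and a height-jump conditional that no-return points fail automatically, because every comparison involving NaN is false. The ring/azimuth layout and the 0.15 m threshold are assumptions for illustration.

    import math

    # Minimal sketch (not the leaked code) of the two details described above:
    # a row-major grid of points, rows[ring][azimuth] = (x, y, z), with
    # no-return points stored as (nan, nan, nan), and an obstacle conditional
    # that NaN points fail automatically.
    HEIGHT_JUMP_M = 0.15  # assumed threshold for a step too tall to drive over

    def mark_obstacles(rows):
        """Flag cells where the height jump between adjacent rings exceeds the threshold."""
        n_rings, n_cols = len(rows), len(rows[0])
        obstacle = [[False] * n_cols for _ in range(n_rings)]
        for ring in range(n_rings - 1):
            for col in range(n_cols):            # walks each ring's row in order
                dz = rows[ring + 1][col][2] - rows[ring][col][2]
                # A no-return in either ring makes dz NaN, and NaN > x is False,
                # so those cells are skipped with no explicit isnan() test.
                if abs(dz) > HEIGHT_JUMP_M:
                    obstacle[ring + 1][col] = True
        return obstacle

    # Tiny example: flat ground, one no-return, and one curb-like step.
    nan = math.nan
    grid = [
        [(0.0, 0.0, 0.00), (0.0, 1.0, 0.00), (0.0, 2.0, 0.00)],
        [(1.0, 0.0, 0.02), (nan, nan, nan), (1.0, 2.0, 0.25)],
    ]
    print(mark_obstacles(grid))
    # -> [[False, False, False], [False, False, True]]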
"Its particular superpower is that it can generate high resolution images of its surroundings much better than radar can."
Is this true though? Car radars are fixed. I guess a comparable lidar would be fixed too and have n points for n lasers.
A revolving radar would have continuous resolution all the way around, while a lidar only samples discrete points?
I thought the advantages of lidar were accuracy and being better at measuring the heights of objects, whereas radar flattens the view.
The issue isn't one of fixed vs. rotating; it's that radar fundamentally can't achieve the resolution necessary to distinguish important features in the environment. It's easily fooled by oddly-shaped objects, especially concave features like corners, so while it's great for answering the question "am I close to something?", it's not reliable for telling you what that something is, especially at longer ranges.
I believe automotive radar has a cone of sensitivity that is read as a single "pixel" worth of data. Even if the radar spun like lidar, the radar cone of sensitivity is thousands of times wider than the lidar beam so you can't make much of a picture with radar.
IIRC the data coming out of the Conti radars was preprocessed to give bearing, distance, and size of an object in the FOV of the unit. I don't know if I ever saw the true raw data out of one of them, but I'm curious what it looks like.
Very high-tech radars can generate amazing imagery, but they'll never top what lidar can do. Conceptually they're both doing the same sort of thing with EM radiation, but lidar uses a much smaller wavelength, which gives it an intrinsic resolution advantage, particularly at the distances and hardware sizes relevant to cars.
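You can put rough numbers on the wavelength argument with the diffraction limit, angular resolution ≈ wavelength / aperture. A quick back-of-the-envelope in Python, using assumed, car-plausible apertures (~2.5 cm of optics for a 905 nm lidar, ~10 cm of antenna for a 77 GHz radar):

    import math

    # Back-of-the-envelope diffraction-limited beamwidth, theta ~ lambda / D.
    # Aperture sizes are rough assumptions for car-sized hardware.
    C = 3.0e8  # speed of light, m/s

    def beamwidth_deg(wavelength_m, aperture_m):
        return math.degrees(wavelength_m / aperture_m)

    def feature_size_at(distance_m, wavelength_m, aperture_m):
        """Approximate smallest resolvable lateral feature at a given distance."""
        return distance_m * wavelength_m / aperture_m

    sensors = [
        ("905 nm lidar, 2.5 cm aperture", 905e-9, 0.025),
        ("77 GHz radar, 10 cm antenna  ", C / 77e9, 0.10),
    ]
    for name, wl, ap in sensors:
        print(f"{name}: ~{beamwidth_deg(wl, ap):.3f} deg beam, "
              f"~{feature_size_at(100, wl, ap) * 100:.1f} cm feature at 100 m")
    # 905 nm lidar, 2.5 cm aperture: ~0.002 deg beam, ~0.4 cm feature at 100 m
    # 77 GHz radar, 10 cm antenna  : ~2.232 deg beam, ~389.6 cm feature at 100 m

Real systems don't reach the diffraction limit, and imaging radars use clever processing to beat a single physical antenna, but the two-to-three orders of magnitude gap in wavelength is why the comparison isn't close.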
Fantastic tech that Musk hates
In a recent No Priors podcast with the Waymo Co-CEO Dmitri Dolgov, he talks about how they evaluated just driving with cameras and how it isn't good enough for full autonomy and doesn't meet their bar for safety [1].
1: https://www.youtube.com/watch?v=d6RndtrwJKE&t=1119s
So there's a video of him addressing this: he doesn't hate the tech. He mentions that it's wildly expensive for cars, but they use it heavily at SpaceX.
The issue isn't that it's wildly expensive for cars. But rather for Tesla.
Because the company has promised that existing Tesla owners would be able to use FSD.
Having to retrofit them to add LiDAR sensors would be cost-prohibitive.
Also he wants to reuse the foundational machine vision tech in Optimus bot, which probably won't have lidar.
Based on the presentations we've seen, what sets Tesla apart is its datasets, not the core technology.
And those don't translate across to the Optimus bot.
It's not just Musk. Most automobile manufacturers have maintained that they need to find a way to do it with cheap and pretty sensors.
Theoretically, if a human can drive a car using a pair of eyes connected to a brain, it should be possible to do the same using two cameras connected to some kind of image-processing unit.
> have maintained that they need to find a way to do it with cheap
If the goal is to make roads safer, aiming for cheap is good: it means more people can afford the safer car. If it's not safer than humans, it shouldn't be on the road in the first place.
I hate to say this, but Musk was right. We already have billions of RGB photos that can help models understand the world; lidar just doesn't have the same kind of training data. RGB sensors are just going to pull further ahead as teams start using large foundation models to simulate ground truth.
Related: https://www.viksnewsletter.com/p/teslas-big-bet-cameras-over...
Waymo tried cameras-only recently as a research project.[1][2] They seem to do about as well as Tesla, which they don't consider good enough.
[1] https://www.forbes.com/sites/bradtempleton/2024/10/30/waymo-...
[2] https://arxiv.org/pdf/2410.23262
One of the cool things about the Waymo Driver is that it can be configured to work with different degrees of quality depending on the sensors available. In a low-risk environment closed to humans (e.g. operating forklifts in an autonomous warehouse), it would work fine with just cameras. Waymo hasn't been very boastful to date, but some of the capabilities are hinted at in this interview: https://www.youtube.com/watch?v=d6RndtrwJKE