What Google's Self-Driving Car Sees

By Norman Chan

LIDAR point clouds aren't anything new.

There's an image floating around tech blogs today, from a tweet, showing what appears to be 3D point cloud data from one of Google's self-driving cars. The image shows a third-person perspective of the car parked at a stoplight, with waves of point data radiating around it, identifying signs, trees, pedestrians, and other traffic obstacles. This kind of imagery isn't anything new, though. We know that Google's autonomous cars perceive objects using those bulky sensor devices mounted to the top of the chassis. These sensors, as it turns out, are made by Velodyne (a company also known for high-end audio equipment, notably subwoofers). Here's a promo video released by Velodyne a while ago showing the data processed by its sensors:

These sensors employ LIDAR, which stands for Laser Imaging Detection and Ranging. It's sometimes referred to as "laser radar," though it doesn't actually use radio waves or microwaves like traditional radar. Instead, the sensor fires low-power lasers (rated eye-safe) while spinning at high speed to sample the surrounding environment. Velodyne's latest LIDAR unit packs 64 lasers into a module that can spin at 900 RPM, gathering 1.3 million points of data per second, which are combined into a 3D point cloud. The LIDAR data can be used not only to recognize moving objects, but also to infer the material of those objects from their reflectivity--for example, license plates on other cars, which is useful for Google Street View privacy blurring.
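To get a feel for how spinning-laser measurements become a point cloud, here's a minimal sketch in Python. It assumes a simplified return format of (range in meters, azimuth and elevation in degrees, reflectivity 0-255) and converts each return to Cartesian coordinates; this is purely illustrative and is not Velodyne's actual data format or processing pipeline.

```python
import math

def lidar_to_cartesian(returns):
    """Convert simulated LIDAR returns into 3D points.

    Each return is (range_m, azimuth_deg, elevation_deg, reflectivity),
    a simplified, hypothetical format for illustration only.
    """
    points = []
    for dist, azimuth, elevation, reflectivity in returns:
        az = math.radians(azimuth)
        el = math.radians(elevation)
        # Standard spherical-to-Cartesian conversion:
        # project the range onto the horizontal plane, then split
        # that projection into x/y by the azimuth angle.
        x = dist * math.cos(el) * math.cos(az)
        y = dist * math.cos(el) * math.sin(az)
        z = dist * math.sin(el)
        points.append((x, y, z, reflectivity))
    return points

def reflective_points(points, threshold=200):
    """Flag highly reflective returns (e.g. license plates, road signs)
    by thresholding the reflectivity channel."""
    return [p for p in points if p[3] >= threshold]
```

A real sensor sweeping 64 lasers through a full rotation simply produces a very long list of such returns; the same per-return conversion, applied over a million times a second, yields the dense cloud seen in the imagery above.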