A Short Introduction to Automotive Lidar Technology
A guide to the operating principles, techniques and technology in lidar systems for self driving cars.
Ubiquitous adoption of lidar in self driving cars needs one major thing: lower cost.
Lidar has proven to be a capable technology for level 4 autonomous driving, and is already used in self driving taxis by Waymo and Cruise. But the spinning lidar domes on top of these cars cost thousands of dollars, and that number needs to drop by at least an order of magnitude.
There are over 140 startups in the lidar space looking to make that happen.
In this post, we will cover the basics of automotive lidar technology:
Lidar for autonomous vehicles
Wavelength of operation
Photodetectors
Ranging techniques
Mechanical lidar
Scanning systems
MEMS mirrors
Solid-state lidar
Flash lidar
Optical phased arrays
References
Read time: 12 mins
Lidar for Autonomous Vehicles
Lidar stands for Light Detection and Ranging, a method that uses infrared laser light to measure the distance to a remote object. This technology is not new. For years, it has been used for imaging vegetation, urban terrain, hidden archeological sites and building construction, and more recently in augmented reality. Its particular superpower is generating high resolution images of its surroundings, something radar cannot match. While lidar and radar are fundamentally similar in operation, lidar's use of much shorter wavelengths (laser light instead of microwaves) gives it the ability to produce highly detailed images.
In last week’s article, we looked at the camera versus lidar debate for self driving cars. If you missed that, you can read it below.
Since 2020, lidar has become especially relevant as the “eyes” of autonomous vehicles. Its ability to rapidly generate precise 3D images of the surroundings is critical for making accurate distance estimates while self driving. The downside of lidar is cost: laser sources, detectors and the associated electronics and mechanics are expensive. The rise of solid-state lidar technologies may yet offer a price point competitive enough for widespread adoption of lidar in self driving cars.
The next sections will explain the inner workings of lidar technology.
Wavelength of Operation
Lidar systems are predominantly designed to operate at one of two wavelengths, both in the infrared region (750 nanometers to 15 micrometers) of the electromagnetic spectrum but outside the visible range (380 to 700 nanometers):
905 nm (near infrared, or NIR)
1550 nm (short wave infrared, or SWIR)
The choice of wavelength in a lidar system depends on the output power of laser sources, sensitivity of detectors and the interference from natural and artificial light sources in the same spectrum.
Sunlight, which carries significant energy in the infrared region of the spectrum, is one of the dominant sources of interference. A measure of its impact is the solar photon flux: the amount of sunlight reaching the earth at a given wavelength.
There are noticeable dips in the solar photon flux at 905, 940 and 1550 nm due to absorption by water vapor in the upper atmosphere, which conveniently reduces interference for systems at ground level. Unfortunately, the same effect absorbs the lidar signal itself in foggy and rainy road conditions. The proximity of the 905 nm wavelength to the visible range raises two other concerns:
905 nm light passes through the eye and is absorbed by the retina, which can be damaged by prolonged exposure. As a result, there are strict eye safety standards that lidar systems must adhere to.
There are plenty of interference sources near the visible range, both from the sun and from vehicle headlamps, that degrade system performance.
However, at shorter wavelengths, photodetectors are generally more sensitive and laser sources are more powerful and less expensive. Ouster, for example, has adopted 850 nm for its lidar technology despite the higher solar photon flux, citing better visibility in damp conditions, improved source and detector performance, and patented approaches to rejecting environmental interference.
The 1550 nm wavelength mitigates some of these problems: interference from solar radiation is lower, and eye safety concerns are reduced because this wavelength only penetrates as far as the cornea, protecting the retina. Better eye safety means more power can be transmitted at 1550 nm for longer periods, providing a longer detection range. The downside of 1550 nm is that high absorption by water vapor makes it difficult to use in wet conditions.
The choice of wavelength also depends on the capabilities and economics of photodetectors.
Photodetectors
Avalanche photodiodes (APDs) are the most commonly used detectors in lidar. They are specially engineered PN semiconductor junctions that use the photoelectric effect to generate electron-hole pairs in response to incident photons. They produce a current proportional to the number of incident photons, with a gain that depends on the reverse bias applied to the diode.
APDs are most often built with Silicon (Si), Germanium (Ge), or Indium Gallium Arsenide (InGaAs), and each material responds differently to infrared wavelengths. Silicon APDs respond well to NIR and are inexpensive to manufacture, while InGaAs APDs work well at SWIR wavelengths but are more expensive.
A popular detector used in lidar systems is the single-photon avalanche diode (SPAD). Unlike traditional avalanche photodiodes (APDs), which generate a signal proportional to the amount of light, SPADs produce a near-binary response to the arrival of a photon by operating in “Geiger mode”, where the photodiode is reverse-biased beyond its breakdown voltage.
In this regime, avalanche breakdown generates a large current pulse even when a single photon is incident. The timing of photon arrivals can therefore be determined with picosecond (trillionth of a second) accuracy, which enables accurate distance measurement. An added benefit is that SPADs can be implemented in a CMOS process, making them low cost and allowing large amounts of signal processing to be integrated right next to the detector array.
At 905 nm especially, silicon photomultipliers (SiPMs) have largely replaced Si APDs. SiPMs are arrays of microcells, each comprising a SPAD with a quenching resistor to self-limit the avalanche current. SiPMs provide very high photoelectric gain and can resolve the number of incident photons from the output current level.
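As a rough illustration of that last point, the photon count can be estimated by dividing the integrated output charge of an SiPM pulse by the charge produced by a single microcell avalanche. This is a minimal sketch; the gain and pulse charge values are assumed for illustration and are not specs of any particular device.

```python
# Minimal sketch: estimating photon count from SiPM output charge.
# The gain and pulse-charge values are illustrative assumptions only.

ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def estimate_photon_count(output_charge_c: float, gain: float = 1e6) -> float:
    """Estimate the number of fired microcells (~ detected photons)
    from the total integrated charge of one SiPM output pulse."""
    charge_per_microcell = gain * ELEMENTARY_CHARGE  # charge from one avalanche
    return output_charge_c / charge_per_microcell

# Example: a pulse integrating to ~0.8 pC with an assumed gain of 10^6
print(round(estimate_photon_count(0.8e-12)))  # ~5 photons
```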
Ranging Techniques
The detection of object distance using lidar is called ranging, and there are two popular approaches that are often used.
1. Direct Time-of-Flight (dToF)
Much like sonar echo-location or pulsed Doppler radar, ToF sensing with lidar involves emitting laser bursts and measuring the time taken to detect the reflected signal. The total time elapsed from signal emission to reception is called the round-trip delay. Since the travel time to the object is half the round-trip delay, the distance is calculated using the speed of light in the propagating medium.
The smallest distance that can be measured using ToF depends on the resolution of the timing electronics: a nearby object may produce a round-trip delay too short for the detector to resolve. Hence the minimum range of such systems is usually limited to a few centimeters.
The largest distance that can be measured depends on the transmitted power, detector sensitivity and free space path loss. If the reflected signal is indistinguishable from background noise, then the distance to the object cannot be resolved. Commercial dToF systems have a maximum range of 100-200 meters.
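To make the arithmetic concrete, here is a minimal sketch of the dToF conversions described above. The round-trip delay and timer resolution values are illustrative, not tied to any particular sensor.

```python
# Minimal sketch of direct time-of-flight ranging arithmetic.
# Numerical values are illustrative assumptions.

C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(delay_s: float) -> float:
    """Distance to the target given the measured round-trip delay."""
    return C * delay_s / 2.0

def range_resolution(timer_resolution_s: float) -> float:
    """Smallest distance step resolvable by the timing electronics."""
    return C * timer_resolution_s / 2.0

print(distance_from_round_trip(667e-9))  # ~667 ns round trip -> ~100 m target
print(range_resolution(1e-9))            # 1 ns timer -> ~15 cm distance steps
```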
Most lidar systems today use dToF ranging because of its simplicity. A slightly different temporal detection approach is to use continuous wave signals and detect the phase shift of the reflected wave. This method is called indirect ToF (iToF) or, more specifically, amplitude modulated continuous wave (AMCW). It is less sensitive to timing drift and better suited to short distance measurements.
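A hedged sketch of the iToF idea: with a modulation frequency f_mod, the measured phase shift maps to distance as d = c·Δφ/(4π·f_mod), and the range beyond which the phase wraps around is c/(2·f_mod). The 10 MHz modulation frequency below is an assumed example value.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def amcw_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase shift of an amplitude-modulated continuous wave."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Beyond this range the phase wraps and aliases to a nearer distance."""
    return C / (2 * mod_freq_hz)

F_MOD = 10e6  # assumed 10 MHz modulation frequency
print(amcw_distance(math.pi / 2, F_MOD))  # 90 degree shift -> ~3.75 m
print(unambiguous_range(F_MOD))           # ~15 m before the phase wraps
```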
2. Frequency Modulated Continuous Wave (FMCW)
While ToF uses pulsed or continuous wave signals at a fixed wavelength, there are benefits to modulating that wavelength. Lidars that modulate the wavelength, or equivalently the frequency, of the transmitted signal are called FMCW lidars. While many sources online claim that FMCW lidar is new technology, it is not: it has been around since the 1960s, and the concept is widely used in automotive radar.
Each burst of frequency modulated signal is called a “chirp”. Because the reflection arrives after a time delay, there is an instantaneous frequency difference between the transmitted and received signals. This “beat” frequency can be downconverted using a mixer and used to compute both the distance and the velocity of the object. I have explained before how this works for radar, and the same principles apply to lidar.
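To show how the beat frequency encodes both quantities, here is a minimal sketch of the standard FMCW relations, the same ones used in radar. The chirp bandwidth, chirp duration and center wavelength below are assumed values.

```python
# Minimal FMCW sketch: range from the beat frequency of a linear chirp,
# velocity from the Doppler shift. Chirp parameters are assumptions.

C = 299_792_458.0      # speed of light, m/s
BANDWIDTH = 1e9        # chirp bandwidth, Hz (assumed)
T_CHIRP = 10e-6        # chirp duration, s (assumed)
WAVELENGTH = 1550e-9   # laser center wavelength, m

def range_from_beat(beat_freq_hz: float) -> float:
    """Target range from the range-induced beat frequency of one chirp."""
    return C * beat_freq_hz * T_CHIRP / (2 * BANDWIDTH)

def velocity_from_doppler(doppler_shift_hz: float) -> float:
    """Radial velocity from the Doppler frequency shift of the reflection."""
    return doppler_shift_hz * WAVELENGTH / 2

print(range_from_beat(13.3e6))        # ~13.3 MHz beat -> ~20 m range
print(velocity_from_doppler(12.9e6))  # ~12.9 MHz Doppler -> ~10 m/s
```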
FMCW lidar systems are complex to implement because they need a frequency-tunable laser source for modulation and additional electronics to extract information from the transmitted and received signals. But they suffer less interference from nearby lidar systems, because the instantaneous frequencies differ at any point in time. FMCW lidar also requires lower peak laser power than ToF, which has implications for eye safety requirements, especially at 905 nm.
Mechanical Lidar Systems
1. Scanning Lidar
A mechanical lidar mounts an infrared laser on a brushless DC motor that rotates the sensor, giving it a 360° field of view (FOV) in the horizontal direction and eliminating blind spots. The FOV in the vertical direction is still limited to about 90-95°. An example of a mechanical scanning lidar sensor is Waymo’s Laser Bear Honeycomb, which is often seen mounted on top of its self driving fleet of cars. The motor and its associated precision moving parts add to the bill of materials and are subject to wear and tear from repeated use. As a result, scanning lidar systems are bulky and expensive.
2. MEMS-Mirror Lidar
Instead of moving the laser source and sensor as in mechanical scanning, another approach is to reflect the laser light off a movable micro-electromechanical systems (MEMS) mirror. By oscillating the MEMS mirror back and forth at a fixed rate, the laser beam can be scanned across 3D space. MEMS mirrors can be actuated electrostatically (electric field only), electromagnetically (electric and magnetic fields), or electrothermally (with heat). Below is a nice demonstration of the concept; video credits: TTP.
A trade-off in MEMS mirror design is weight versus scanning rate: a heavier mirror scans more slowly. While the video above shows 1D scanning, 2D MEMS mirrors have also been implemented, with a fast axis and a slow axis. The mirror moves quickly along one direction for fast raster scanning, while the slower perpendicular axis steps the beam to a new position for the next rapid scan.
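As a toy illustration of the fast-axis/slow-axis idea, the sketch below generates one frame of raster scan angles: a sinusoidal fast axis and a stepped slow axis. The amplitudes, line counts and points per line are arbitrary assumptions, not parameters of any real mirror.

```python
import math

# Toy raster pattern for a 2D MEMS mirror: fast sinusoidal horizontal axis,
# slowly stepped vertical axis. All values are illustrative assumptions.
FAST_AMPLITUDE_DEG = 15.0   # half-angle of the fast (horizontal) axis
SLOW_RANGE_DEG = 10.0       # total vertical coverage
POINTS_PER_LINE = 100
NUM_LINES = 20

def raster_angles():
    """Yield (horizontal, vertical) beam angles for one full frame."""
    for line in range(NUM_LINES):
        vertical = -SLOW_RANGE_DEG / 2 + SLOW_RANGE_DEG * line / (NUM_LINES - 1)
        for i in range(POINTS_PER_LINE):
            horizontal = FAST_AMPLITUDE_DEG * math.sin(2 * math.pi * i / POINTS_PER_LINE)
            yield horizontal, vertical

frame = list(raster_angles())
print(len(frame), frame[0], frame[-1])  # 2000 points covering the frame
```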
Arguably, the greatest benefit is the fact that MEMS mirrors can be fabricated using back-end-of-line processes in a legacy CMOS foundry and are considered a mature technology. This enables low-cost implementations of scanning lidar technology.
Solid-State Lidar Systems
1. Flash Lidar
Instead of scanning 3D space, think of flash lidar as a photographic capture that illuminates the entire scene in front of it. Flash lidar uses a vertical-cavity surface-emitting laser (VCSEL) as the source, whose output is diffused to illuminate the target area. The reflected signals are detected with a SiPM array. These lidar flashes are captured at rates up to 30 frames per second, providing a real-time rendering of 3D space. By the nature of how it works, flash lidar has a reduced FOV compared to a rotating mechanical lidar scanner.
The resolution of flash lidar is limited by how many pixels fit into a given area, much like a digital camera. Compared to the scanning type, flash lidar has a lower signal-to-noise ratio (SNR) because the limited optical laser power must be distributed across all pixels in the array. Detection sensitivity is also limited by background noise in the environment at the same wavelength as the laser. SNR is the ultimate limit on the detection range of flash lidar, with sensing distances up to 100 meters and centimeter-scale resolution reported in the literature.
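A back-of-the-envelope sketch of the pixel-budget point: angular resolution is roughly the field of view divided by the pixel count along that axis, and the per-pixel laser power falls as the array grows. The field of view, array size and laser power below are assumed values for illustration.

```python
# Back-of-the-envelope flash lidar budget. All numbers are assumptions.
FOV_H_DEG, FOV_V_DEG = 120.0, 30.0   # illuminated field of view
PIXELS_H, PIXELS_V = 320, 80         # detector array size
TOTAL_OPTICAL_POWER_W = 5.0          # peak laser power spread over the scene

angular_res_h = FOV_H_DEG / PIXELS_H  # degrees per pixel, horizontal
angular_res_v = FOV_V_DEG / PIXELS_V  # degrees per pixel, vertical
power_per_pixel = TOTAL_OPTICAL_POWER_W / (PIXELS_H * PIXELS_V)

print(angular_res_h, angular_res_v)   # 0.375 deg per pixel on each axis
print(power_per_pixel)                # ~0.2 mW per pixel -> limits SNR and range
```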
Some companies have adopted a multi-beam approach to flash lidar, illuminating only those parts of the scene where the detector is looking for information. This directs greater optical power at fewer, more relevant pixels in the array, enhancing SNR. It is a combination of scanning and flash lidar, with the advantages of both.
Overall, the lack of moving parts means the system is much more reliable, immune to vibration, and capable of a higher data capture rate.
2. Optical Phased Array (OPA) Lidar
The most recent approach, still in the research phase, is to use silicon photonics to implement scanning lidar on a chip. The idea is borrowed from phased array antennas, which scan the radiated beam by adjusting the phase of the signal fed to each element of an antenna array. Phase shifts are implemented either with integrated optical waveguides or with integrated heaters that slow light via the thermo-optic effect. Depending on the phase shifts, the direction of the radiated wavefront can be steered in 3D space. I have explained this in a previous article.
Now, the same approach is being used to steer infrared lasers by applying phase shifts through integrated optical modulators on a photonics platform. The benefits of OPA are greatly increased scanning speeds, thanks to purely electronic control and the absence of moving parts. The cost and reliability benefits of a fully integrated approach on 300-mm diameter silicon wafers are also attractive.
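The sketch below applies the textbook phased-array steering relation at optical wavelengths: to steer the beam by an angle θ, the phase applied to successive elements must advance by Δφ = 2π·d·sin(θ)/λ, where d is the element pitch. The pitch and element count are assumed values for illustration.

```python
import math

# Textbook phased-array steering relation, at an optical wavelength.
# Element pitch and count are assumed values for illustration.
WAVELENGTH_M = 1550e-9        # laser wavelength
PITCH_M = WAVELENGTH_M / 2    # half-wavelength element spacing (~775 nm)
NUM_ELEMENTS = 64

def element_phases(steer_angle_deg: float):
    """Per-element phase (radians, wrapped to 2*pi) that steers the main beam."""
    dphi = 2 * math.pi * PITCH_M * math.sin(math.radians(steer_angle_deg)) / WAVELENGTH_M
    return [(n * dphi) % (2 * math.pi) for n in range(NUM_ELEMENTS)]

phases = element_phases(20.0)  # steer 20 degrees off boresight
print(round(phases[1], 3))     # phase advance per element, ~1.07 rad
```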
The use of optical frequencies presents its own challenges in the context of phased arrays:
Thermal management: The heat generated by many on-chip laser sources must be dissipated effectively.
Proximity of elements: Phased arrays require elements spaced half a wavelength apart. At 1550 nm laser wavelength, that means each laser source needs to be spaced under a micron apart.
Scanning angle: In phased arrays, the best quality beam is at “boresight” or right in front of the array. As the beam is scanned away from the center, say beyond 60°, grating lobes degrade the beam width.
Analog Photonics, an MIT spin-off founded by Prof. Michael Watts, is working on commercializing OPA technology and is worth keeping an eye on.
References
EETimes: What’s the Direction for Automotive LiDAR: 905 nm or 1550 nm?
Texas Instruments: An introduction to automotive lidar
Aeye: Time of Flight vs. FMCW LiDAR: A side-by-side comparison
IEEE Spectrum: Lidar on a chip puts self driving cars in the fast lane
Phlux: The role of infrared sensors in light detection and ranging - lidar
N. Li et al., “A Progress Review on Solid‐State LiDAR and Nanophotonics‐Based LiDAR Sensors,” Laser & Photonics Reviews, vol. 16, no. 11, p. 2100511, Nov. 2022, doi: 10.1002/lpor.202100511.
D. Wang, C. Watkins, and H. Xie, “MEMS Mirrors for LiDAR: A Review,” Micromachines, vol. 11, no. 5, p. 456, Apr. 2020, doi: 10.3390/mi11050456.
If you like this post, please click ❤️ on Substack, subscribe to the publication, and tell someone if you like it. 🙏🏽
If you enjoyed this issue, reply to the email and let me know your thoughts, or leave a comment on this post.
Join a Discord community of professionals, enthusiasts and students, and get in on the discussion.
The views, thoughts, and opinions expressed in this newsletter are solely mine; they do not reflect the views or positions of my employer or any entities I am affiliated with. The content provided is for informational purposes only and does not constitute professional or investment advice.