UCSD tests new tech to help make self-driving cars safer

UC San Diego’s multi-radar system is intended to help self-driving cars determine the dimensions of other vehicles in traffic.
(Courtesy)

To help make self-driving cars safer in challenging weather, engineers at UC San Diego are developing new technologies and fusing them with existing ones to improve how the vehicles “see” other cars.

Dinesh Bharadia, a professor of electrical and computer engineering at UCSD’s Jacobs School of Engineering, said self-driving cars today predominantly use radar, cameras and/or a sensing system called LiDAR (light detection and ranging) to detect other vehicles and maneuver around them. The systems send out signals and look at the reflections that bounce back to determine whether an object is nearby.
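
The ranging principle Bharadia describes can be summarized in a few lines. The Python sketch below uses illustrative numbers that are not from the UCSD system; it simply converts the round-trip time of a reflected pulse into a distance, the basic calculation behind both radar and LiDAR.

# Minimal sketch of the time-of-flight idea behind radar and LiDAR ranging:
# the sensor emits a pulse, times the echo, and converts the round trip to distance.
# The numbers below are illustrative, not taken from the UCSD system.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to a reflector, given the round-trip travel time of one pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# An echo that returns after about 200 nanoseconds corresponds to a reflector roughly 30 m away.
print(f"{range_from_echo(200e-9):.1f} m")  # -> 30.0 m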

However, in murky weather, the sensors can have a difficult time.

“If those reflections can make it back and carry information about other cars, that’s well and good. The problem comes when you have foggy weather, because the fog reflects the signal back … the car sees fog as a car,” said Bharadia, who spent a month working with Toyota in its research into self-driving cars. “So that makes you stop randomly at fog or other calamities.”

Existing systems have their pros and cons, he said.

For example, the light wavelengths LiDAR uses are smaller than fog or smog particles, so the signal cannot always see through them. When it does, the light reflects back to the source.

With radar, when radio waves are transmitted and bounced off objects, only a small fraction of signals gets reflected back to the sensor.

“This is the problem with using a single radar for imaging. It receives just a few points to represent the scene, so the perception is poor. There can be other cars in the environment that you don’t see,” said Kshitiz Bansal, a Ph.D. student in computer science and engineering at UC San Diego.

Pulling from existing technology, the UCSD system consists of two radar sensors strategically placed on the hood and spaced an average car’s width apart. Having two radar sensors arranged this way enables the system to see more space and detail than a single radar sensor.

“So if a single radar is causing this blindness, a multi-radar setup will improve perception by increasing the number of points that are reflected back,” Bansal said. “By having two radars at different vantage points with an overlapping field of view, we create a region of high resolution with a high probability of detecting the objects that are present.”
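
A rough illustration of that two-radar idea, assuming hypothetical mounting positions and made-up detection points rather than the UCSD team's actual data: each sensor's sparse returns are shifted into a common vehicle frame and stacked, giving a denser point cloud than either radar produces alone.

# Hypothetical sketch of the multi-radar setup described above: each radar reports
# detections in its own coordinate frame, and translating both into a shared
# vehicle frame yields a denser combined point cloud than either sensor alone.
# Sensor positions and point data are made-up illustrations, not UCSD values.
import numpy as np

# Assumed mounting offsets on the hood, roughly a car's width apart (meters).
LEFT_RADAR_OFFSET = np.array([-0.75, 0.0])
RIGHT_RADAR_OFFSET = np.array([0.75, 0.0])

def to_vehicle_frame(points_xy: np.ndarray, sensor_offset: np.ndarray) -> np.ndarray:
    """Shift points from a sensor-centered frame into the common vehicle frame."""
    return points_xy + sensor_offset

def fuse_point_clouds(left_points: np.ndarray, right_points: np.ndarray) -> np.ndarray:
    """Stack both radars' returns into one denser cloud for downstream perception."""
    return np.vstack([
        to_vehicle_frame(left_points, LEFT_RADAR_OFFSET),
        to_vehicle_frame(right_points, RIGHT_RADAR_OFFSET),
    ])

# Each radar alone sees only a few sparse returns from the same car ahead...
left = np.array([[1.2, 10.1], [1.5, 10.4]])
right = np.array([[0.9, 10.0], [1.1, 10.3], [1.6, 10.5]])
# ...but the fused cloud has more points, so the car's extent is easier to estimate.
print(fuse_point_clouds(left, right).shape)  # (5, 2)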

During test drives on clear days and nights, the system performed as well as a LiDAR sensor at determining the dimensions of cars moving in traffic, according to UCSD. Its performance did not change in tests simulating foggy weather.

UC San Diego's self-driving-car technology determines where a car might be in foggy weather.
(Courtesy)

Self-driving cars use sensors to identify different objects and “draw” 3D geometric boxes around them so they know what to avoid. “Once you have these boxes, you can navigate around them nicely and easily,” Bharadia said. “[With this new approach], even in bad weather, we can make these boxes accurately, even more so than with human eyes.”
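
As a toy illustration of those boxes (the field names and dimensions below are hypothetical, not the team's actual representation), each detected object can be stored as a center plus length, width and height, and the planner treats any point inside the box as space to avoid.

# Toy illustration of the 3D "boxes" described above: each detected object is
# represented by a center and dimensions, and the enclosed volume is treated
# as space to avoid. All names and values here are illustrative only.
from dataclasses import dataclass

@dataclass
class Box3D:
    # Center of the box in the vehicle frame (meters).
    cx: float
    cy: float
    cz: float
    # Dimensions of the box (meters).
    length: float
    width: float
    height: float

    def contains(self, x: float, y: float, z: float) -> bool:
        """True if a point lies inside the box, i.e. inside space the car should avoid."""
        return (abs(x - self.cx) <= self.length / 2
                and abs(y - self.cy) <= self.width / 2
                and abs(z - self.cz) <= self.height / 2)

# A detected car roughly 10 m ahead; a waypoint straight ahead would collide.
car = Box3D(cx=0.0, cy=10.0, cz=0.8, length=4.5, width=1.8, height=1.5)
print(car.contains(0.0, 10.0, 0.8))   # True  -> occupied, steer around it
print(car.contains(3.0, 10.0, 0.8))   # False -> clear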

During the tests, the research team “hid” another vehicle using a fog machine, and the system accurately determined its 3D geometry.

“Right now we are trying to integrate this work with other sensor technology like cameras and LiDAR. We see the deployment as a holistic [system],” Bharadia said. “You can’t rely on one system alone; you should rely on all possible sensors. A fusion algorithm is required to be developed now, and that is what my team is working on.”
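
The team's fusion algorithm is still being developed, so the sketch below only illustrates one generic fusion pattern with made-up numbers, a confidence-weighted average of per-sensor range estimates, not the UCSD approach: when fog makes the camera and LiDAR estimates unreliable, their weights drop and the radar estimate dominates.

# Generic sensor-fusion sketch, NOT the UCSD team's algorithm (which is still in
# development). Each sensor contributes a (range, confidence) pair and the fused
# estimate is a confidence-weighted average; all numbers are invented.
from typing import Dict, Tuple

def fuse_ranges(estimates: Dict[str, Tuple[float, float]]) -> float:
    """Combine per-sensor (range_m, confidence) pairs into one weighted estimate."""
    total_weight = sum(conf for _, conf in estimates.values())
    return sum(rng * conf for rng, conf in estimates.values()) / total_weight

# In clear weather all sensors agree; in fog the camera and LiDAR confidences
# drop, so the fused range is pulled toward the radar's estimate.
clear = {"camera": (30.2, 0.9), "lidar": (30.0, 0.9), "radar": (29.8, 0.8)}
foggy = {"camera": (12.0, 0.1), "lidar": (15.0, 0.2), "radar": (29.9, 0.8)}
print(round(fuse_ranges(clear), 1))  # -> 30.0
print(round(fuse_ranges(foggy), 1))  # -> 25.6, weighted toward the radar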

“We are working with Toyota to license the technology and test it extensively,” he added, “but looking for partners that would be willing to build it into their products. Our goal is to finish this with all the integration we want to do in the next two to three years, but deployment is a different matter.” ◆