THE UBER CRASH WON'T BE THE LAST SHOCKING SELF-DRIVING DEATH

2018-03-30 Industry News
EVERYONE WORKING IN the autonomous vehicle space said it was inevitable. In America, and in the rest of the world, cars kill people: around 40,000 in the US and 1.25 million around the globe each year. High speeds, metal boxes. Self-driving cars would be better. But no one promised perfection. Eventually, they’d hurt someone.

Still, the death of Elaine Herzberg, struck by a self-driving Uber in Tempe, Arizona, two weeks ago, felt like a shock. Even more so after the Tempe Police Department released a video of the incident, showing both the exterior view—a low-quality dash cam captured the victim suddenly emerging from darkness on the side of the road—and the interior view displaying Uber’s safety driver, the woman hired to watch and then take control of the vehicle if the technology failed, looking away from the road.

But the shock cut in different ways. Most of the world saw the video and thought, gee, that crash looked inevitable. In an interview with the San Francisco Chronicle, Tempe’s police chief even suggested as much. “It’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway,” Chief Sylvia Moir said. (The police department later said some of Moir’s comments were “taken out of context.”)

But autonomous vehicle developers were disturbed. Many experts say the car should have picked up on a pedestrian in a wide-open roadway, after she had already crossed a wide-open lane. “I think the sensors on the vehicles should have seen the pedestrian well in advance,” Steven Shladover, a UC Berkeley research engineer, told WIRED. “This is one that should have been straightforward.” Something went wrong here.

That divide—between a public used to the very particular foibles of human drivers, and the engineers building self-driving technology—is a really important one, because Uber’s self-driving crash will not be the last. As companies like General Motors, Ford, Aptiv, Zoox, and Waymo continue to test their vehicles on public roads, there will be more dust-ups, fender-benders, and yes, crashes that maim and kill.

“The argument is that the rate of accidents is supposed to go down, when autonomy is matured to a certain level,” says Mike Wagner, co-founder and CEO of Edge Case Research, which helps robotics companies build more robust software. “But how we get from here to there is not always entirely clear, especially if it needs a lot of on-road testing.”

As companies work out bugs and fiddle with their machine learning algorithms, expect these vehicles to mess up in peculiar ways. Crashes that look unavoidable will be the ones this tech is built to prevent. Maneuvers that seem easy to people will stump the robots. Someday soon, self-driving cars could be much, much safer than human drivers. But in the meantime, it’s helpful to understand how these vehicles work—and the strange ways in which they go wrong.

Sensors
Sensors are a self-driving car’s “eyes”: they help the vehicle understand what’s around it. Cameras are nice for picking up on lane lines and signs, but they capture data in 2-D. Radars are cheap, great over long distances, and can “see” around some barriers, but don’t offer much detail.

That’s where lidar comes in. The laser sensor uses pulses of light to paint a 3-D picture of the world around it. The lidar Uber used, from a company called Velodyne, is viewed by many as the best system on the market right now. (At around $80,000, it’s also one of the most expensive.) But even the best lidar works a bit like the game Battleship. The laser pulses have to land on enough parts of the object to provide a detailed understanding of its shape, and do it within a few seconds. Ideally, that gives the sensor an accurate reading of the world, the kind of well-informed, on-target guess that might help one player sink (or in this case, avoid) another’s fleet.
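As a rough, back-of-the-envelope illustration of that Battleship problem, the Python sketch below estimates how many pulses from a spinning lidar might land on an object of a given width at a given range. The sensor numbers (angular resolution, channel count) and object widths are assumptions made up for this example, not the specifications of the unit on Uber's cars.

import math

# Back-of-the-envelope estimate of how many lidar pulses land on an object
# of a given width at a given range. Sensor numbers are illustrative
# assumptions, not the specs of any particular unit.
HORIZONTAL_RES_DEG = 0.2   # assumed angle between adjacent pulses in one sweep
VERTICAL_CHANNELS = 16     # assumed number of stacked laser channels crossing the object

def returns_on_object(width_m: float, range_m: float) -> int:
    """Rough count of pulses that hit an object of the given width."""
    angular_width_deg = math.degrees(math.atan2(width_m, range_m))
    horizontal_hits = int(angular_width_deg / HORIZONTAL_RES_DEG)
    return horizontal_hits * VERTICAL_CHANNELS

# A person seen roughly edge-on (~0.5 m wide) versus in profile with a bike (~1.8 m)
for width_m in (0.5, 1.8):
    for range_m in (10, 30, 60):
        n = returns_on_object(width_m, range_m)
        print(f"width {width_m} m at {range_m} m -> ~{n} returns per sweep")

The farther away and narrower the object, the fewer returns land on it, and the sparser the shape the software has to make sense of.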

But it’s possible, especially if the vehicle is moving at high speed, that the lasers don’t land on the right things. This might be especially true, experts say, if an object is moving perpendicular to the vehicle, like Herzberg was. This will sound strange to human drivers, who are much more likely to see a person or a bicycle when their full forms are revealed in profile. But when the perspective is less consistent, when an object’s appearance changes from second to second, it’s harder for the system to interpret and classify what’s doing the moving.

Classification
And classification is key. These systems “learn” about the world via machine learning, and must be fed a gigantic dataset of road images (curbs, pedestrians, cyclists, lane lines) before they can identify objects on their own.
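As a loose sketch of what that training-and-labeling step looks like, the snippet below fits a small off-the-shelf classifier to made-up feature vectors standing in for labeled road data. The class names, feature count, and numbers are illustrative assumptions, not Uber's actual label set or pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
CLASSES = ["pedestrian", "cyclist", "plastic_bag", "cardboard"]

# Pretend each detected object is summarized by four numeric features
# (say, height, width, reflectivity, speed); real systems learn from
# millions of labeled road images instead.
centers = rng.normal(scale=3.0, size=(len(CLASSES), 4))
X_train = np.vstack([c + rng.normal(size=(1000, 4)) for c in centers])
y_train = np.repeat(np.arange(len(CLASSES)), 1000)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# At runtime the planner acts on the predicted label and its confidence;
# a pedestrian mistaken for a plastic bag gets a very different response.
detection = (centers[0] + rng.normal(size=4)).reshape(1, -1)
probs = clf.predict_proba(detection)[0]
print({c: round(p, 2) for c, p in zip(CLASSES, probs)})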

But the system can go wrong. It’s possible Uber engineers somehow flubbed the machine learning process, and that the self-driving car software interpreted a pedestrian and her bike as a plastic bag or piece of cardboard. Even little things have been observed to fool these systems, like a few patches of tape on a stop sign. Self-driving cars have also been known to see shimmering exhaust as solid objects.

Wagner, who has studied this problem, discovered one system that could not see through certain kinds of weather, even if objects were still totally visible to the human eye. “If there were the tiniest amount of fog, the neural network lost them,” he says.

If the classification is off, the system’s predictions might be off, too. These systems expect humans to move in certain ways, and plastic bags to move in others. Those sorts of predictions could have been botched, too. If classification is the problem, Uber might have to collect hundreds of thousands of additional images to retrain its system. The toy function below, an assumption-laden sketch rather than any vendor's actual planner, shows why the label matters so much.
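# Toy illustration of class-conditional prediction: the same observed motion
# is extrapolated differently depending on what the object is believed to be.
# Labels, speeds, and the two-second horizon are all assumptions.
def predicted_lateral_position(label: str, x_m: float, vx_mps: float, horizon_s: float = 2.0) -> float:
    if label == "pedestrian":
        # Assume a pedestrian keeps crossing at walking speed.
        return x_m + vx_mps * horizon_s
    if label == "cyclist":
        # Assume a cyclist keeps moving, a bit faster.
        return x_m + 1.5 * vx_mps * horizon_s
    # Anything classed as clutter (a bag, cardboard) is expected to stay put,
    # so the planner sees little reason to brake for it.
    return x_m

for label in ("pedestrian", "cyclist", "plastic_bag"):
    print(label, predicted_lateral_position(label, x_m=3.0, vx_mps=1.4))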

The Software
Or crashes like Uber’s could be caused by bugs. Autonomous vehicles run on hundreds of thousands of lines of code. An engineer could have introduced a problem somewhere. Or maybe the system erroneously discarded sensor data it should have used to ID and then evade the woman.
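As a purely hypothetical illustration of that last failure mode, the snippet below shows how a simple "ghost object" filter with a badly chosen confidence threshold could silently throw away a real detection. The threshold, labels, and data structure are invented for this example.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

# Assumed threshold for suppressing "ghost" detections; set too high,
# and real objects vanish from the planner's view.
MIN_CONFIDENCE = 0.6

def filter_detections(detections: list[Detection]) -> list[Detection]:
    """Drop anything the perception stack isn't sure about."""
    return [d for d in detections if d.confidence >= MIN_CONFIDENCE]

frame = [Detection("pedestrian", 0.55), Detection("vehicle", 0.92)]
print(filter_detections(frame))  # the pedestrian detection is silently dropped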

Likely, this crash and future crashes will be combinations of many things. “My guess is this is the outcome of a complex sequence of things that have never happened before,” says Raj Rajkumar, who studies autonomous systems at Carnegie Mellon University. In other words: a perfect storm. One system failed, then its backup failed, too. The final fail-safe, the system that’s supposed to kick in at the last moment to prevent any dangers, failed, too. That was the human safety driver.

“One of the processes of building a robot that has to do real things is that real things are incredibly complicated and hard to understand,” says Wagner. Robots don’t understand eye contact or waves or nods. They might think random things are walls or bushes or paper bags. Their mistakes will seem mysterious to the human eye, and alarming. But those developing the tech persist—because getting drunk, sleepy, or distracted will seem mysterious to the robots.



source: https://www.wired.com/story/uber-self-driving-crash-explanation-lidar-sensors/



