What can possibly go wrong with robots smarter than humans?
Here is a succinct summary of the technical aspects of how cars can drive themselves.
TAGS: #autonomousdriving #AI
Part of the fascination (and fear) about Artificial Intelligence is how computers are becoming better than humans in many arenas.
Because human drivers can be inexperienced, drunk, too tired, too distracted, etc., I predict that, at the current rate of progress, the cost of “human error” will eventually exceed the cost of misjudgments by computers controlling vehicles. At that point governments, auto makers, insurance companies, and others will make it more difficult to own cars, not least because self-driving cars can safely travel faster than the roughly 70 mph that people can handle.
If Uber’s and/or Lyft’s Level 5 ventures succeed, parking will be a thing of the past, and so will the jobs of human drivers.
Uber has, since 2016, been offering rides in self-driving cars (with safety drivers aboard) around Pittsburgh, and Waymo has been testing driverless rides around Chandler, AZ.
Levels of autonomy
(from the Society of Automotive Engineers):
- Level 1, Driver Assistance: the driver stays fully engaged.
- Level 2, Partial Automation: adaptive cruise control, lane keeping.
- Level 3, Conditional Automation: the driver must be ready to take over.
- Level 4, High Automation: no controls for human use; operates within a geofence (the target of Apollo 2.5), including closed-venue, low-speed environments such as minibuses, valet parking, and delivery robots (the target of Apollo 3.0).
- Level 5, Full Automation: drives anywhere, with no geofence.
The computer needs to be able to control the vehicle’s steering, throttle, and braking systems to execute its plans. So vehicles need to be equipped with by-wire systems: including but not limited to brake-by-wire, steering-by-wire, throttle-by-wire, and shift-by-wire.
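To make the control loop behind a by-wire throttle concrete, here is a minimal sketch of a PID speed controller driving a toy vehicle model. This is an illustration only, not Apollo’s actual controller; the gains and the simple dynamics are invented for the example.

```python
# Minimal PID speed controller sketch (illustrative gains and vehicle model,
# not Apollo's actual implementation).

class PID:
    def __init__(self, kp, ki, kd, integral_limit=10.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.integral_limit = integral_limit   # simple anti-windup clamp
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        self.integral = max(-self.integral_limit,
                            min(self.integral_limit, self.integral))
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

target, speed, dt = 30.0, 0.0, 0.1      # m/s, m/s, seconds
controller = PID(kp=0.4, ki=0.05, kd=0.1)
for _ in range(1200):                   # simulate 120 seconds
    # Throttle command clamped to [0, 1], as a by-wire interface would expect.
    throttle = max(0.0, min(1.0, controller.step(target - speed, dt)))
    # Toy longitudinal dynamics: throttle accelerates, drag slows the car.
    speed += (5.0 * throttle - 0.05 * speed) * dt

print(f"speed after 120 s: {speed:.1f} m/s")
```

The same structure, with separate controllers and a real vehicle model, applies to steering and braking.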
Every auto manufacturer has a self-driving car program:
Honda’s models from 2017 onward are built with by-wire controls.
The Lincoln MKZ is what Apollo is currently tested on.
On Teslas even the glovebox lock is controlled by the computer.
Additional organizations work with the Autonomous Technology Certification Facility (ATCF)
BTW, David Silver worked in Ford’s self-driving car program and now teaches Udacity’s hands-on online Nanodegree programs on self-driving cars: the 4-month Intro and the advanced Engineer program (two three-month terms).
- Slack for students
Students work on Udacity’s car, named Carla.
Udacity was founded by Sebastian Thrun (from Germany), the “father” of the self-driving car. When he was a professor at Stanford, his team won the DARPA Grand Challenge car race. He then joined Google.
Apple has not openly discussed their self-driving car program.
In 2016, Apple scaled back its 1,000-employee “Titan” self-driving car program.
A 2018 disclosure stated that 5,000 Apple employees knew about a self-driving car program within the company.
In April 2018, Apple hired Google’s former AI boss to run Siri and machine learning.
Alphabet (Google’s parent company) holds a seven percent stake in Uber, and also owns Waymo.
Baidu is the Google of China, best known for its search engine.
Silver created a free intro class using Baidu’s Apollo library at:
DuerOS is Baidu’s conversational AI program with embedded AI speech and image recognition. See https://duer.baidu.com/en/html/dueros/index.html
Example GPU hardware: ASUS GTX1080-A8G-Gaming graphics card.
Architecture of Processes
This 2017 TED Talk [9:10] by David Silver describes the various technologies necessary:
An updated diagram:
The eventual design for version 3.0 of Baidu’s design adds a “Guardian” component:
The “CANbus” is a Controller Area Network (CAN), which transfers data between devices without the assistance of a host computer. Attach a temperature sensor to the surface of the main IC on the ESD CAN card (an Altera FPGA chip) to monitor its surface temperature and make sure it is not overheating.
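CAN traffic is just small fixed-size frames, each carrying up to 8 data bytes. The sketch below packs a throttle command into such a payload. The message layout (command id, throttle value, timestamp, XOR checksum) is entirely hypothetical: real vehicles use proprietary CAN message definitions.

```python
import struct

# Hypothetical 8-byte CAN payload for a throttle command:
# 1-byte command id, 2-byte throttle (hundredths of a percent),
# 4-byte timestamp in ms, 1-byte XOR checksum. Invented for illustration.

def pack_throttle_frame(cmd_id, throttle_pct, timestamp_ms):
    body = struct.pack(">BHI", cmd_id, int(throttle_pct * 100), timestamp_ms)
    checksum = 0
    for b in body:
        checksum ^= b                  # simple XOR over the body bytes
    return body + bytes([checksum])

def unpack_throttle_frame(frame):
    body, checksum = frame[:-1], frame[-1]
    calc = 0
    for b in body:
        calc ^= b
    if calc != checksum:
        raise ValueError("checksum mismatch")
    cmd_id, raw, ts = struct.unpack(">BHI", body)
    return cmd_id, raw / 100.0, ts

frame = pack_throttle_frame(0x10, 37.5, 123456)
print(len(frame), unpack_throttle_frame(frame))
```

Sending such a frame on a real bus would go through a CAN interface card; the point here is only the byte-level framing.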
HMI (Human-Machine Interface)
You can run an off-line demo without the expensive hardware by installing “rosbag” and playing back recorded data. See: https://github.com/ApolloAuto/apollo/tree/master/docs/demo_guide
It’s kinda like the Grand Theft Auto games (but you can’t get out of the car to beat up prostitutes).
It uses Baidu’s Python-based Apollo Dreamview visualization software running under Linux: Ubuntu 14.04 (Apollo is based on Linux kernel 4.4.32).
It needs a three-dimensional model (point cloud) of the road network, including roads, buildings, tunnels, etc., along with road names, the speed limit for each stretch of road, traffic lights, and other traffic-control information.
Apollo uses the OpenDRIVE map standard, which its competitors also use. Baidu has 300 survey vehicles mapping all the highways in China.
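To show the kind of lookups a planner makes against such a map, here is a toy, flattened stand-in for a road network. Real OpenDRIVE maps are far richer (XML, with lane geometry as parametric curves); all segment ids and values below are invented.

```python
# Toy road-network fragment: each segment carries the attributes the text
# mentions (name, speed limit, traffic-control info). Illustrative only.

road_network = {
    "seg-001": {"name": "Main St", "speed_limit_mps": 13.4,
                "successors": ["seg-002"], "traffic_light": None},
    "seg-002": {"name": "Main St", "speed_limit_mps": 13.4,
                "successors": ["seg-003"], "traffic_light": "tl-07"},
    "seg-003": {"name": "Highway 1", "speed_limit_mps": 29.1,
                "successors": [], "traffic_light": None},
}

def route_speed_limits(start, network):
    """Walk successor links from a segment and collect speed limits."""
    limits, seg = [], start
    while seg is not None:
        info = network[seg]
        limits.append((seg, info["speed_limit_mps"]))
        seg = info["successors"][0] if info["successors"] else None
    return limits

print(route_speed_limits("seg-001", road_network))
```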
[4:50] A particle filter, a sophisticated form of triangulation, estimates the vehicle’s position from its distances to various landmarks (street lights, traffic signs, manhole covers).
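Here is a minimal 1-D particle filter along those lines: particles guess the vehicle’s position on a road, are weighted by how well noisy range measurements to known landmarks match, and are resampled. Landmark positions, noise levels, and particle counts are illustrative, not from Apollo.

```python
import math
import random

# 1-D particle-filter localization sketch from noisy landmark ranges.
random.seed(0)
LANDMARKS = [20.0, 60.0, 90.0]   # known landmark positions from the map
true_pos = 47.0                  # the position we are trying to recover

def measure(pos):
    """Noisy distance from `pos` to each landmark."""
    return [abs(pos - lm) + random.gauss(0, 0.5) for lm in LANDMARKS]

def weight(particle, z):
    """Gaussian likelihood of the observed ranges given the particle."""
    w = 1.0
    for lm, obs in zip(LANDMARKS, z):
        expected = abs(particle - lm)
        w *= math.exp(-((obs - expected) ** 2) / (2 * 0.5 ** 2))
    return w

particles = [random.uniform(0, 100) for _ in range(2000)]
for _ in range(5):                                   # measurement updates
    z = measure(true_pos)
    weights = [weight(p, z) for p in particles]
    particles = random.choices(particles, weights, k=len(particles))
    particles = [p + random.gauss(0, 0.3) for p in particles]  # motion jitter

estimate = sum(particles) / len(particles)
print(f"estimated position: {estimate:.1f}")
```

After a few updates the particle cloud collapses around the true position.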
Self-driving cars need to know where they are in the world more precisely than GPS (Global Positioning System) can provide. A GNSS (Global Navigation Satellite System) receiver needs at least 4 of the roughly 30 satellites to calculate its position (based on each signal’s time of flight).
BTW, RTK (Real Time Kinematic) positioning uses ground stations as “ground truth” to sharpen GNSS accuracy to around 10 centimeters.
GPS updates every 10 seconds, which is too slow for real-time vehicle control.
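The time-of-flight arithmetic behind GNSS ranging also shows why timing precision (and corrections such as RTK) matter so much. A quick back-of-envelope calculation, with an illustrative travel time:

```python
# Back-of-envelope GNSS time-of-flight arithmetic (illustrative numbers).

C = 299_792_458.0        # speed of light, m/s
flight_time = 0.070      # seconds: a plausible signal travel time from orbit
distance_m = C * flight_time
print(f"range to satellite: {distance_m / 1000:.0f} km")

# A single microsecond of clock error already ruins car-level accuracy:
clock_error_s = 1e-6
print(f"range error from 1 us clock error: {C * clock_error_s:.0f} m")  # ~300 m
```

One microsecond of clock error corresponds to roughly 300 meters of range error, which is why a receiver needs a fourth satellite to solve for its own clock bias.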
For a vehicle to “localize” itself to single-digit centimeter accuracy, it uses several technologies.
The Inertial Measurement Unit (IMU) consists of a 3-axis gyroscope and a 3-axis accelerometer. It updates at 1,000 Hz (near real time). The system has to reconcile two XY coordinate frames: the vehicle’s and the map’s. In a 3D gyroscope, the spin axis stays fixed relative to the global coordinate system while the three gimbals rotate around it.
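Reconciling the two frames boils down to a rotation plus a translation. A 2-D simplification (the real pipeline works in 3-D): a point sensed relative to the car is rotated by the vehicle’s heading and shifted by its map position. The numbers are illustrative.

```python
import math

def body_to_map(px, py, veh_x, veh_y, heading_rad):
    """Transform a point from the vehicle (body) frame to the map frame.

    (px, py) is forward/left of the car; (veh_x, veh_y, heading_rad) is the
    vehicle's pose on the map. 2-D rotation followed by translation.
    """
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    return (veh_x + px * cos_h - py * sin_h,
            veh_y + px * sin_h + py * cos_h)

# Vehicle at (100, 50) on the map, heading 90 degrees (facing +y).
# An obstacle 10 m straight ahead should land at map point (100, 60).
mx, my = body_to_map(10.0, 0.0, 100.0, 50.0, math.pi / 2)
print(round(mx, 6), round(my, 6))
```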
LiDARs today typically use 32 lasers emitting 1 or 2 million pulses per second; a 64-laser system emitting 6.4 million pulses a second would give superior vertical resolution and quicker refreshes, and would be better able to capture small, fast objects such as animals darting into the road, according to Alex Lidow, CEO and cofounder of Efficient Power Conversion, a provider of the gallium nitride chips found in many modern lidars.
- https://backchannel.com/how-my-public-records-request-triggered-waymos-self-driving-car-lawsuit-1699ff35ac28#.vi4talr7i by https://medium.com/@meharris/
High-definition (HD) maps are built using computer vision to recognize objects within the captured images.
Perception covers classification, detection, and segmentation, using CNNs (Convolutional Neural Networks) on data from cameras, radar, and LiDAR (Light Detection and Ranging).
Deep (learning) neural networks draw bounding boxes around detected objects and identify which lane the car is in.
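A standard way to score how well a predicted bounding box matches reality is Intersection over Union (IoU), the overlap area divided by the combined area. A minimal sketch, with boxes given as (x_min, y_min, x_max, y_max) in pixels:

```python
# Intersection-over-Union for axis-aligned bounding boxes.

def iou(a, b):
    # Width and height of the overlapping region (zero if boxes are disjoint).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 patch: IoU = 25 / 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```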
An RNN (Recurrent Neural Network) is used to predict the trajectories of other road users over time.
To project trajectories over short and long time horizons, the software works in Frenet coordinates (distance traveled along the road, and lateral offset from its centerline) and creates waypoints that plot the plan.
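In Frenet coordinates, “stay in lane” simply means keeping the lateral offset d near zero while the longitudinal distance s advances. Here is a sketch of converting a Cartesian point to (s, d) against a piecewise-linear reference line; the path and point are illustrative, and a real planner would use smooth curves.

```python
import math

def to_frenet(px, py, path):
    """Project (px, py) onto a polyline and return Frenet (s, d).

    s is distance along the path to the projection point; d is the signed
    lateral offset (positive when the point lies left of the path direction).
    """
    best = None
    s_base = 0.0
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        # Project the point onto this segment, clamped to the segment's ends.
        t = max(0.0, min(seg_len, ((px - x1) * dx + (py - y1) * dy) / seg_len))
        cx, cy = x1 + dx * t / seg_len, y1 + dy * t / seg_len
        dist = math.hypot(px - cx, py - cy)
        cross = dx * (py - y1) - dy * (px - x1)      # sign of lateral offset
        if best is None or dist < best[0]:
            best = (dist, s_base + t, math.copysign(dist, cross))
        s_base += seg_len
    return best[1], best[2]

path = [(0, 0), (10, 0), (10, 10)]        # an L-shaped reference line
s, d = to_frenet(4.0, 1.0, path)          # 4 m along the path, 1 m to its left
print(s, d)
```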
Planning the expected route…
Analyzing the actual route traveled.
MIT 6.S094: Introduction to Deep Learning and Self-Driving Cars http://selfdrivingcars.mit.edu/