Delft AI Meetup: AI for Intelligent Vehicles

Roel M. Hogervorst

2019/09/19

Categories: blog Tags: data science ADAS autonomous cars AI inspiration

I was at the Delft AI meetup, an initiative of the Computer Science department of TU Delft. These meetups combine a talk from someone in industry with a talk by an academic.

There were two speakers: Bram Bakker from Cygnify BV and Dariu Gavrila, professor at TU Delft.

Bram Bakker (Cygnify BV): The disruption of driving

He did a PhD on reinforcement learning in Leiden.

Some definitions first: autonomous driving and advanced driver assistance systems (AD/ADAS). Cygnify is based in Leiden and works on both outside perception and inside driver monitoring. They are hiring.

Self-driving cars are coming, it seems. What is powering this development?

Since I build and design systems at my work, I am interested in how machine learning is used in practice. This talk was really about practice. As always, the ML is only a small part of the machine: sensors are connected to a big chunk of perception.

Levels of automation: we are currently at level 2 (the driver must continuously monitor the system). From level 3 on, the driver doesn't need to monitor, except when prompted. Level 4 is hands off, eyes off. Level 5 is full automation.

I did not know this! Of current ADAS users, 23% find it annoying or bothersome and another 21% simply shut it off; 60-90% of people with driving assistance on their current car don't want it on their next car. Some are actively fighting with the system.

What are the current problems?

- Unexpected and rare situations (the system only works on cases for which we have training data)
- Overreliance on the system by humans
- Overbearing ADAS

You have to make a decision; there are no gradual levels: either let the human drive and support him or her, or let the car drive under certain conditions and don't expect the human to take over.

His company focuses on level 1/2 driving with assistance (that doesn't suck): the human drives, but the system monitors the outside world and the driver's state, draws attention to important elements or hazards through a good HMI, and only emergency-brakes when attention is clearly not being paid.

They want to do an augmented reality HUD.

Prediction of other road users (forecast what another road user is going to do).

Social LSTM (look at one road user while taking the other users into account; Alahi et al. 2016), and SS-LSTM, which adds the scene (Xue et al. 2018). Multimodal social scene prediction (earlier work was on pedestrians; this also handles other, wheeled road users). Wheeled users are heavily constrained by the road, so use map lane data.

Separate the high-level direction from the trajectory realisation for those directions.
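To make that two-stage idea concrete, here is a toy sketch of my own (not from the talk): first classify a coarse high-level direction from the observed positions, then realize a trajectory for it. I use a trivial constant-velocity model as a stand-in for the LSTM decoder; all names and the model choice are my assumptions.

```python
import numpy as np

def high_level_direction(track):
    """Classify the coarse heading of an observed 2D track
    (toy stand-in for the high-level direction model)."""
    dx, dy = np.asarray(track[-1]) - np.asarray(track[0])
    if abs(dx) >= abs(dy):
        return "east" if dx >= 0 else "west"
    return "north" if dy >= 0 else "south"

def realize_trajectory(track, n_steps=5):
    """Realize a future trajectory with a constant-velocity model
    (toy stand-in for the learned trajectory realisation)."""
    track = np.asarray(track, dtype=float)
    velocity = track[-1] - track[-2]  # last observed displacement
    return track[-1] + np.outer(np.arange(1, n_steps + 1), velocity)

observed = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]
print(high_level_direction(observed))   # coarse intent
print(realize_trajectory(observed, 3))  # predicted future positions
```

The split mirrors the point from the talk: the discrete "where is this user heading" decision and the continuous "which exact path" realisation are separate problems.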

The complete system: let the driver drive, but add assistance.

The next talk was by prof. dr. Dariu Gavrila (TU Delft): self-driving vehicles in the city.

He talked about one of the biggest issues for self-driving cars: non-cars. Pedestrians and cyclists are hard to predict. Self-driving is easy on the highway and on American streets, because those are built for cars.

The main challenge: pedestrians and cyclists are vulnerable.

He had a beautiful picture of Delft with bicycles and pedestrians that really showed how difficult it is to detect them, so I have to include a similar one here:

![](/post/2019-09-19-delf-ai-meetup_files/01 Afternoon traffic in Amsterdam (Photo credit- Copenhagenize Design Co).jpg)

(They do have scientific programmers here.)

The talk was all about predicting and dealing with vulnerable road users. Vulnerable road users are difficult for human drivers, and also for self-driving cars.

Prediction of what road users will do. They want an adaptive driving style: safe, comfortable and time-efficient.

System components:

They need to compress the data: turn pixels into stixels, stick-shaped superpixels with a 3D position, a semantic class and an id. (A cool sort of semantic class: not just "car", but car 1, car 2, etc.)
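A minimal sketch of what such a compression step could look like (my own illustration, not the TU Delft implementation): group one image column of per-pixel (depth, class) measurements into run-length "stixels" that each carry a vertical extent, a mean depth and a semantic class.

```python
from dataclasses import dataclass

@dataclass
class Stixel:
    top: int       # first pixel row of the stick
    bottom: int    # last pixel row (inclusive)
    depth: float   # mean depth of the segment
    label: str     # semantic class, e.g. "car", "pedestrian"

def column_to_stixels(pixels):
    """Compress one image column of (depth, label) pixels into stixels
    by merging consecutive rows that share the same label."""
    stixels, start = [], 0
    for row in range(1, len(pixels) + 1):
        if row == len(pixels) or pixels[row][1] != pixels[start][1]:
            depths = [d for d, _ in pixels[start:row]]
            stixels.append(Stixel(start, row - 1,
                                  sum(depths) / len(depths),
                                  pixels[start][1]))
            start = row
    return stixels

column = [(9.0, "road"), (9.1, "road"), (4.0, "car"), (4.2, "car")]
print(column_to_stixels(column))  # two stixels instead of four pixels
```

The point of the real representation is the same: a handful of stixels per column instead of thousands of raw pixels, while keeping 3D position and semantics.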

Eurocity persons detection dataset: eurocity-dataset.tudelft.nl (it cost a quarter of a million).

Other things: fast pedestrian detection that takes care of occlusion, and the probability of a pedestrian crossing (a combination of camera and radar is faster).

Intent recognition: predict whether a person will continue or stop. Bayesian, with states such as continue, stop, accelerate, and a mixture of Gaussians.
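As an illustration of the Bayesian idea, here is a discrete toy filter of my own (not the actual mixture-of-Gaussians model from the talk): keep a belief over the intent states and update it with each new observation, using an assumed transition matrix and per-state observation likelihoods.

```python
import numpy as np

STATES = ["continue", "stop", "accelerate"]

# Assumed transition model: row = current state, column = next state.
TRANSITION = np.array([[0.8, 0.1, 0.1],
                       [0.2, 0.7, 0.1],
                       [0.3, 0.1, 0.6]])

def update_belief(belief, likelihood):
    """One Bayes-filter step: predict with the transition model,
    then weight by the likelihood of the new observation."""
    predicted = belief @ TRANSITION
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.full(3, 1 / 3)  # uniform prior over the intents
# An observation that strongly suggests slowing down:
belief = update_belief(belief, np.array([0.1, 0.8, 0.1]))
print(STATES[int(belief.argmax())])  # most likely intent: "stop"
```

The real system would feed the filter continuous measurements (position, speed, head orientation) through the Gaussian mixture instead of these hand-picked likelihoods.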

This also works with cyclists, using a dynamic Bayesian network.

Predictions for head orientation, direction, etc.

There was not a focus on a complete system; these are separate parts, but they certainly do useful things.

What did I learn:

Architectures used: neural networks, dynamic Bayesian networks, LSTMs.