Drone data supports the artificial worlds of autonomous vehicles
Simulations – especially those aimed at representing reality as closely as possible – are particularly important for attaining higher levels of autonomy in self-driving vehicles. The biggest challenge is obtaining the measurement data needed to improve the accuracy of these simulations, at least until suitably large vehicle fleets become available to provide it. In addition to video and lidar data recorded from the vehicle's perspective, a complete picture of driving behavior is required, covering a wide variety of traffic situations. One way to gather such traffic measurements is aerial surveillance using drones. The IT Designers Group, in which the Steinbeis Transfer Center for Software Engineering also plays an active role, has already gained experience with this method.
At AI Day 2021, Tesla demonstrated the considerable progress it has made in developing its driving strategy module. To develop its existing driving strategy system, as well as a new AI-based planning system, it uses an artificial 3D training environment. The training data required for this is collected from the fleet of vehicles already sold to customers, which according to company sources currently comprises over 1.5 million vehicles. This makes it possible to query and collect data for more challenging driving situations – a particularly useful way to improve the detection rate of rare incidents with the potential to jeopardize safe automated driving. Fleet-based data collection is currently the most straightforward method available, and in all likelihood it will also be used to provide suitable data for uncommon traffic situations – provided enough vehicles are available to take measurements.
Artificial worlds play an even bigger role at Alphabet subsidiary Waymo, which has been working on self-driving vehicles for more than a decade. Waymo runs a much smaller fleet of around 1,000 vehicles, so the majority of its test drives are carried out in simulations. According to Waymo, its testing covered roughly 30 million kilometers in 2020.
Drone data – an alternative to test drives
In the meantime, drone surveillance has become established as a viable alternative for collecting traffic measurement data. Camera drones can be deployed quickly and flexibly at a variety of locations with interesting traffic, typically recording road sections and intersection areas approximately 500 meters in length. Important requirements for flying drones include appropriately trained operators, flight authorizations, and a limit of roughly half an hour per recording. Measurements can nevertheless be spread over several flights, making it relatively easy to record several hours of traffic data.
The experts at the IT Designers Group use this measurement method for product development purposes. They have already succeeded in tracking roughly 1,350 vehicles during a thirty-minute drone flight along an inner-city arterial road. In total, the vehicles covered a distance of 530 kilometers, with a combined driving time of 41 hours. Although this falls a long way short of the volume of data provided by vehicle fleets, it does allow critical sections of road to be monitored continuously, whereas vehicles taking measurements only pass through such areas intermittently.
“To reconstruct road traffic, the video data is processed in an automated evaluation pipeline. The stages we go through include camera tracking, vehicle detection and tracking, regression of the 3D bounding boxes of the vehicles, lane detection, and lane assignment,” explains Dr. Stefan Kaufmann (IT Designers), who is working on the project. To recognize and track vehicles, a neural network is used to detect objects and place them into categories depending on whether they are a car, a van, a truck or bus, or a motorcycle. To train the system, the project team is currently using 66,000 manually annotated or verified vehicle images. Another neural re-identification (ReID) network supports this tracking process by distinguishing vehicles from each other based on visual characteristics. This also makes it possible to spot the same vehicles again in different video sequences. The network was trained with 350,000 vehicle images that were automatically extracted from existing data. By using clustering methods, vehicle trajectories can be grouped according to similarities in direction of travel. This produces driving lane information. So far, this method has provided reliable results for uncomplicated road layouts such as state highways. Currently, manual adjustments are still required for multi-lane intersections.
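The lane-detection step can be illustrated with a simplified sketch: trajectories are grouped by the similarity of their mean direction of travel. The greedy angle-threshold clustering below is only an illustrative stand-in for the project's actual clustering method; all function names and the 20-degree threshold are assumptions.

```python
import math

def mean_heading(trajectory):
    """Mean direction of travel (radians) of a trajectory given as (x, y) points."""
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    return math.atan2(dy, dx)

def angle_diff(a, b):
    """Smallest absolute difference between two angles."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def cluster_by_heading(trajectories, threshold_rad=math.radians(20)):
    """Greedily group trajectories whose mean headings differ by less than the threshold.

    Each resulting cluster approximates one direction of travel, i.e. one group of lanes.
    """
    clusters = []  # list of (representative_heading, [member trajectories])
    for traj in trajectories:
        h = mean_heading(traj)
        for rep, members in clusters:
            if angle_diff(h, rep) < threshold_rad:
                members.append(traj)
                break
        else:
            clusters.append((h, [traj]))
    return clusters
```

For a straight road, this separates eastbound from westbound traffic into two clusters; finer thresholds, or clustering on full trajectory shapes, would be needed to separate parallel lanes or turning lanes at intersections – consistent with the manual adjustments the article mentions for multi-lane intersections.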
Measurements include vehicle dimensions, direction of travel, vehicle positions in both geocoordinates and local coordinates, and the distance traveled in each lane. Vehicle speeds and acceleration rates are also derived from this information. A comparison with reference vehicles revealed no significant deviations in the measurement data [1], although further work is still needed to quantify the remaining measurement error.
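How speeds and accelerations are derived from tracked positions can be sketched with simple finite differences. The snippet below is a minimal illustration under the assumption of a fixed sampling interval (the video frame rate); it is not the project's actual code.

```python
def derive_speed_and_acceleration(positions, dt):
    """Derive speeds (m/s) and accelerations (m/s^2) from sampled positions.

    positions: list of (x, y) tuples in meters, sampled every dt seconds.
    Returns (speeds, accelerations), computed with forward differences.
    """
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        # Distance covered between two consecutive samples, divided by the interval.
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    # Acceleration as the change in speed per interval.
    accelerations = [(v1 - v0) / dt for v0, v1 in zip(speeds, speeds[1:])]
    return speeds, accelerations
```

In practice, trajectories extracted from video are noisy, so smoothing (e.g. a moving average or Kalman filter) would be applied before differentiating.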
The outlook: increasing complexity and 2D strategies
As such, the current system already provides useful input data for simulation and training systems. In initial experiments, the team at the IT Designers Group succeeded in replicating measurement data in a traffic simulation based on the Kerner-Klenov microscopic simulation model. Individual simulation parameters are determined for every vehicle with the aim of reproducing the measurements as realistically as possible. To do this, each vehicle is simulated individually within the traffic flow of the measured vehicle trajectories. Based on hundreds of sequences, a genetic algorithm optimizes the simulation parameters of each driving profile to reproduce the measured speed profile as closely as possible. This has enabled the team to achieve an average match level of 89% for individual adjustments [2]. So far, the simulated vehicles move only one-dimensionally along their lanes; they may change lanes, but the maneuvers involved are simple and continuous.
The next steps for the experts at the IT Designers Group will be to automatically optimize and extract driving profiles so they can be integrated into simulation environments as training scenarios. The aim is also to support more complex, two-dimensional driving strategies. Although they require significantly more measurement data for training, AI-based driving models offer a suitable option for this. The next step will be to collect this data as part of a research project called LUKAS (a German acronym for “local environment model for cooperative, automated driving in complex traffic situations” – www.projekt-lukas.de), which is being funded by the German Federal Ministry for Economic Affairs.
Contact
Prof. Dr. Joachim Goll (author)
Steinbeis Entrepreneur
Steinbeis Transfer Center Software Engineering (Esslingen)
www.stz-softwaretechnik.de
Dr. Stefan Kaufmann (author)
Assistant
IT-Designers GmbH (Esslingen)