Nexar now ‘sees’ 130 million road miles per month to train autonomous vehicles for the most extreme driving situations
Nexar generates the world’s largest independent vision dataset, providing AV creators with the anomalous data they need for training, along with mapping and change detection
Nexar, maker of the popular Nexar dash cams and provider of crowd-sourced street-level visual data, announced today that it has reached a new monthly coverage milestone, capturing 130 million miles of driving per month, reflecting 30% growth over the past six months. The data adds to Nexar’s 3.2 trillion images from the road and helps close the crucial data scarcity gap plaguing autonomous vehicle manufacturers as they train their AI models to handle edge cases and detect changes in the real world. Nexar’s user base is growing month over month, and the company expects to double its coverage within the next twelve months.
“Nexar’s secret is our scalable data ingestion technology and the ability to refine raw footage into usable data,” said Eran Shir, founder and CEO. “No other independent company in the world collects this data and has the platform to ingest it. The data will help AV makers better train their cars, understand change in the world as it happens and usher in the self-driving cars of the future.”
Nexar provides data used for two critical areas of AV training. The first is training for edge (or corner) cases, including collisions and other anomalous driving situations. While training AVs for extreme cases is one of the key breakthroughs needed to reach Level 4 and Level 5 self-driving cars, data scarcity is a real issue: there is an abundance of data on regular driving, but because extreme situations are rare, edge cases are seldom captured by car manufacturers.
As the most popular smart dash cam provider, Nexar currently “sees” several hundred collisions a month, along with many other edge cases including hard brakes, lane drift, near misses, low-impact collisions and more. The collision reconstruction technology Nexar announced in March adds valuable data about the objects around the vehicle and their relationship to the incident. Additionally, Nexar identifies the real-world changes that affect AVs, such as construction zones, road sign changes, lane changes and more.
The second use of Nexar’s massive dataset is change detection, which supports the detailed maps required for AVs. Today, HD maps for AVs rely on expensive lidar, yet the real world keeps changing after a map is built, and Nexar imagery and AI can detect those changes. Nexar’s road coverage makes it possible to monitor and detect changes in street signs, construction zones, potholes, guard rails, sidewalks, free parking spots, pavement quality, power boxes, fading pavement markings and more. Over time, more change categories will be added.
“In many cases, images from the road trump lidar for the purpose of change detection,” said Shir. “An even more interesting application for our crowd-sourced imagery is using it for the future of V2V, which will apply to both human drivers and AVs. While we can currently tell the state of free parking spots in an area, in the future we will help you find one, while it’s still empty.”
A free subset of Nexar data has been available for several years through Berkeley Deep Drive, with Nexar data comprising all of the BDD100K dataset, a large-scale, diverse driving dataset that supports heterogeneous multitask learning across many training tasks. Nexar data is provided in strict observance of privacy and anonymization requirements, so that faces, license plates and other identifying information are obscured.
Developers interested in Nexar sample imagery for AI training can apply here for a free sample.