Today we published a report comparing the ground truth speed limit signs in Phoenix, AZ, with OSM and City of Phoenix data. The bottom line is that maps become stale at a rate of 2.5-6% per year and that we need to change how we think about map creation, or else maps won't be good or scalable enough for AVs and other cases where software, not humans, will do the driving. This post explains why we need to change how we think about map accuracy, completeness, freshness and coverage and how to get there.
Since the dawn of humanity, maps have served as visual and semantic representations of the real world, as accurate and as fresh as each era allowed. In the days of Magellan, maps were closely held trade secrets, and they enabled the voyages of discovery that propelled humanity into the modern age. Using maps, even partial and incorrect ones, humans explored the world and discovered new continents.
The modern maps we all use every day are a wonder. They cost billions of dollars annually to maintain and thousands of people work to keep them up to date. All in all, they do a great job helping us find where we want to go and how to get there.
However, these maps are ill-equipped for the autonomous era, and can even pose a safety hazard. Autonomous vehicles and autonomy features in vehicles require autonomous maps: complete, accurate, and fresh maps covering the world, reflecting the true ground truth rather than a stale representation of it.
In short, autonomous maps will change how we think about the nature of maps.

Re-defining Coverage
In industry parlance, coverage is how many miles the map contains data for. That data could have been collected five years ago and never rechecked since, yet it still counts toward the map’s coverage. In the age of autonomy, we must ask whether “coverage” is the right metric and what it should be made of.
First, let’s define the coverage we’re talking about. A coverage metric for the autonomous map should always be qualified by a time window indicating when the data was actually collected: the 5-year coverage is X, while the 1-year coverage is Y and the 1-week coverage is Z. 100% 5-year coverage is very different from 100% 1-week coverage, because the real world changes constantly.
This new metric will change practices. When we map a road segment, we will stamp it with a date and time so its reliability can be judged, just like the inspection sticker in an elevator, which reassures us the elevator is safe to use because its last inspection is still valid.
We need to move away from an opaque approach which does not tell us when the map data was last verified, to a transparent approach where machines and humans can understand exactly when the data was last “seen”. This will become essential when we discuss freshness and completeness (see below).
The right timestamp cadence varies by use case, since each road element has its own half-life decay function. For instance:
- Work zones are transient elements on the road and require verification on a daily, sometimes even hourly, basis. The right coverage, therefore, is a daily or even hourly update.
- Traffic signs are fairly static, yet they do change; our data shows a rate of 2.5% per year. In this case, once-a-year coverage yields only 97.5% confidence in the accuracy of any given traffic sign, well below the automotive-grade minimum reliability of 99.9%, which poses a safety problem. In some cases traffic signs behave like transient elements, for instance signs in work zones. For traffic signs we suggest verification every 1-6 months.
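The decay of confidence between visits can be modeled simply. Assuming changes accumulate at a constant annual rate, the probability that a sign is still as mapped after t years is (1 - rate)^t, which reproduces the 97.5% one-year figure above:

```python
def confidence(change_rate_per_year, years_since_visit):
    """Probability a sign is still as mapped, assuming independent
    changes at a constant annual rate (a simple decay model)."""
    return (1.0 - change_rate_per_year) ** years_since_visit

# Signs change at roughly 2.5% per year (the rate observed in the report).
print(confidence(0.025, 1.0))     # 0.975 after one year
print(confidence(0.025, 1 / 12))  # ~0.998 after one month
```

This is a minimal sketch; a production system would estimate change rates per sign type and region rather than using one global constant.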
To do this efficiently, we propose a measure called MTBV, Mean Time Between Visits. An MTBV value gives an immediate indication of how often each road is surveyed and essentially defines the map’s SLA, just as MTBF (Mean Time Between Failures) provides an SLA for industrial products.
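As an illustrative sketch, MTBV for a road segment can be computed directly from its visit log (the timestamps below are hypothetical):

```python
from datetime import datetime

def mtbv(visits):
    """Mean Time Between Visits, in days, from a sorted list of visit times."""
    gaps = [(b - a).days for a, b in zip(visits, visits[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical visit log for one road segment:
visits = [
    datetime(2024, 1, 1),
    datetime(2024, 1, 8),
    datetime(2024, 1, 22),
    datetime(2024, 2, 5),
]
print(mtbv(visits))  # gaps of 7, 14 and 14 days -> mean of ~11.7 days
```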
Re-defining Freshness and Accuracy
The world constantly changes, and every map element and layer has its own half-life of change. For a traffic jam or a parking spot it may be seconds or minutes; for a street name it may be a century. That half-life determines when we should start doubting the accuracy of a particular data point. If we strive for 99.9% accuracy for traffic control signs, and we know that 0.2% of signs change every month, we will need to check and validate all traffic signs at least every month, hunting for the 0.2% that have changed.
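Inverting the same simple decay model gives the longest revisit interval that keeps accuracy at or above a target. Notably, 0.2% change per month against a 99.9% target implies a revisit roughly every half month; monthly validation keeps accuracy above the target only for the first half of each cycle:

```python
import math

def max_revisit_months(monthly_change_rate, target_accuracy):
    """Longest interval (in months) between validations that keeps
    expected accuracy at or above `target_accuracy`, assuming a
    constant-rate decay model: accuracy(t) = (1 - rate) ** t."""
    return math.log(target_accuracy) / math.log(1.0 - monthly_change_rate)

# 0.2% of signs change per month; target automotive-grade 99.9% accuracy:
print(max_revisit_months(0.002, 0.999))  # ~0.5 months, i.e. roughly biweekly
```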
To do this methodically requires a new measure that reflects our confidence in the accuracy of each element in the map. Different use cases will have different accuracy requirements, and in most cases, accuracy will be closely related to freshness.
If I’m operating an AV fleet, 99% accuracy of speed limits means that my vehicles may end up breaking the law 1% of the time, an entirely unacceptable rate. The same is true if a human is driving with an ISA (Intelligent Speed Assist) feature enabled.
If the MTBV measures how often we visit and validate a road, the half-life accuracy measure tells us how often we need to visit a road in order to maintain a certain level of accuracy and safety. It essentially indicates what level of freshness is required to meet our goals.
By combining the two, we can determine the quality of our autonomous map. If our MTBV for a certain road is one week, we know that our generated map will be great for road geometry, lane markings and traffic signs that have a much longer half-life, but will be poor for road construction and road surface conditions that can change daily, and will be meaningless for curb monitoring that could change by the minute.
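A rough sketch of that combination: given a road’s MTBV and an assumed change rate per layer (the rates below are illustrative, not figures from the report), we can score each layer’s expected accuracy at the midpoint of the revisit cycle:

```python
def layer_accuracy(mtbv_days, daily_change_rate):
    """Expected fraction of a layer that is still correct, on average,
    at the midpoint of the revisit cycle (a rough quality proxy)."""
    return (1.0 - daily_change_rate) ** (mtbv_days / 2)

mtbv_days = 7  # this road is re-surveyed weekly

# Illustrative daily change rates per layer (assumptions, not measured data):
layers = {
    "road geometry":  0.00002,  # nearly static
    "traffic signs":  0.00007,  # roughly 2.5% per year
    "work zones":     0.25,     # can appear or vanish within days
    "curb occupancy": 0.99,     # changes by the minute
}
for name, rate in layers.items():
    print(f"{name}: {layer_accuracy(mtbv_days, rate):.3f}")
```

With a one-week MTBV, the static layers stay above 99.9% while the fast-moving layers fall to near-useless levels, matching the intuition above.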
Re-defining Completeness

When we open a navigation app, we expect the map to contain the roads, waypoints, and points of interest. Not much more than that. However, if we are building a map for an AV, for a car equipped with ADAS, or for modern city management, providing just the basemap doesn’t cut it. We need a detailed understanding of all road features and their semantic implications: lanes (centerlines and boundaries), road markings, signs, lights and curbs, construction zones and detours. We also want to capture the typical driving and movement patterns of other road agents like vehicles and pedestrians.
Moreover, for each map layer, we want assurance that we have a complete accounting of every item that belongs in that layer. If our one-month coverage is all of San Francisco, a complete stop-sign layer means that in the last month we detected and added to the map, or validated, 100% of the stop signs in the city. Not 85% or 98%. Until today, maps did not come with a measure of completeness; they are really just slightly updated digital representations of paper maps, much as early television news was simply radio news read on camera. We are typically not told how much data is missing; the underlying assumption is that 100% of the items in the real world are represented on the map.
However, we all know this isn’t the case, especially in residential areas. Completeness is a tough nut to crack, as it gets into the ‘we don’t know what we don’t know’ realm. How could we actually know whether our map layer is complete? That is where a dense visual dataset of every road is essential: it allows us to develop methodologies to determine our level of completeness.
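One plausible methodology, not claimed by the post itself, is capture-recapture estimation borrowed from ecology: if two independent drives over the same area each detect a subset of the stop signs, the overlap between them lets us estimate how many signs exist in total, and therefore how complete our layer is. All ids below are hypothetical:

```python
def estimate_total(pass_a, pass_b):
    """Lincoln-Petersen capture-recapture estimate of the true number of
    items, from two independent survey passes (sets of detected item ids)."""
    overlap = len(pass_a & pass_b)
    if overlap == 0:
        raise ValueError("no overlap between passes; estimate undefined")
    return len(pass_a) * len(pass_b) / overlap

# Hypothetical stop-sign detections from two different drives over one area:
drive1 = {"s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"}
drive2 = {"s03", "s04", "s05", "s06", "s07", "s08", "s09", "s10"}

total_est = estimate_total(drive1, drive2)       # 8 * 8 / 6 ~= 10.7 signs
completeness = len(drive1 | drive2) / total_est  # 10 seen / 10.7 ~= 0.94
print(f"estimated total: {total_est:.1f}, completeness: {completeness:.2f}")
```

The estimator assumes the two passes detect signs independently; correlated misses (e.g. a sign occluded on both drives) would bias it, which is one reason dense, repeated visual coverage matters.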
A Map Making Revolution
The autonomous map requires a paradigm shift in map making. We are moving from a slow, expensive operation dependent on dedicated fleets of data-collection vehicles, à la Google StreetView, into an era of crowdsourced vision, in which every vehicle equipped with sensors and cameras contributes to creating and updating the autonomous map.
The mapping paradigm must change, and it is changing. First, the old ways break when we demand freshness measured in months, weeks, days or minutes; it is simply not practical. Second, creating and maintaining a complete map through crowdsourcing is orders of magnitude cheaper than roaming the roads with a dedicated fleet, which lets it scale globally and to every level of freshness required. The cost difference is dramatic: dollars per mapped mile instead of hundreds of dollars per mapped mile.
The autonomous mapping revolution is enabled by a perfect storm of trends maturing right now: connected cameras are embedded in every vehicle; edge AI is powerful and cheap enough to scale and automate data collection, localization and synthesis; communication costs are plummeting and bandwidth is expanding faster than ever. Most interestingly, the typical consumers of mapping services (OEMs, AV companies, fleets and other players that depend on accurate maps to operate) are increasingly realizing that they need to take part in producing maps, not just consuming them, and that crowdsourcing is the only scalable path forward. Some, like Tesla, do it by selling their own cars and keeping their own maps; others have built a software model with a robust network of data collectors and supply it to OEMs.
At Nexar we already map as much as 200 million miles of road every month using crowdsourced vision, and this is just the beginning.
To create a true real-time map, many more OEMs and fleets need to join the effort, and an industry that has traditionally had a culture of non-cooperation must come together to collaborate, because no single OEM or fleet on the planet can see every corner of the earth every minute. As we all come to realize that accurate real-time mapping is not just a convenience but a critical safety and operations issue, we will recognize that collaboration is a must. Just as the airlines realized that when it comes to safety data standards they must collaborate and share information, so will the mobility industry.
You can read the summary findings of the report here.