We challenged researchers with teaching deep networks to identify traffic lights, and now they share what they learned in the process. Read on for their insights and a look at their open-source models.
Deep learning plays a key role in the race to autonomous vehicles, and we wanted to open some of our deep driving challenges to the outside world. So we invited aspiring researchers to test their chops against real-world data collected by our app, win prizes (1st place $5,000, 2nd place $2,000, 3rd place iPhone 7), and join our mission to make the roads safer.
Nexar is building the world’s largest open vehicle-to-vehicle (V2V) network by turning smartphones into connected AI dash-cams. Combining deep learning with millions of crowdsourced driving miles collected by our users, Nexar’s technology provides a new, safer driving experience with the potential to save the lives of the 1.3 million people who die on the road every year.
Today we are excited to officially announce the winners of the first Nexar Challenge (Using Deep Learning for Traffic Light Recognition), review the leading projects, and open-source their models.
In the traffic light challenge, participants took their first steps toward a collision-free world by teaching convolutional neural networks to recognize traffic lights and their status in a car’s driving direction. We provided 18,659 labeled images for training the models and 500,000 unlabeled images for testing the final solutions. Submissions were scored on both classification accuracy and model size (in megabytes), with scoring favoring smaller models that can be embedded on-device.
We’ve invited our leading participants to share their solutions, experiences, and insights from the first Nexar challenge.
But before we get to that, we’d like to take a moment to thank everyone who participated and helped make this contest a huge success! We couldn’t be more excited about the results. From ensembles of SqueezeNet-based networks trained with different augmentation methods to on-device app solutions, we were astounded by the participants’ energy, enthusiasm, and motivation to dive into deep learning techniques and continually improve their submitted models.
One last thing: Something we loved seeing from this challenge was that it incentivized people from outside the field to get their feet wet in deep learning. For example, our first-place winner learned deep learning methods from scratch in only 10 weeks! To ensure that this trend of introducing newcomers to the field continues, we are releasing all of these winning models as open source, in the hope that the entire deep learning community — both new and veteran researchers — can learn and benefit from them.
Let’s let the leading participants take it from here.
1st Place ($5,000): David Brailovsky
“The solution was based on the SqueezeNet architecture. It is a compact CNN that achieves high accuracy. The final model was an ensemble of models trained with different data augmentation methods and pre-trained on ImageNet.”
Things that worked well
“Transfer learning: Using weights that were pre-trained on ImageNet got the network to high accuracy very quickly.
“SqueezeNet: Training models based on SqueezeNet worked very well, both for fine-tuning pre-trained models and for training from scratch. The small model file is also very convenient for experimenting without running out of space on your drive.
“Cleaning training data: Even though it was a small portion of the training set, fixing mistakes in the training data proved to be effective in increasing the accuracy.
“Ensemble of models: Averaging together several separately trained models worked consistently well; the combined accuracy was almost always higher than that of the individual models.”
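The averaging David describes can be done directly on the models’ softmax outputs. A minimal numpy sketch, with hypothetical probabilities from three separately trained models over three example classes (no light / red / green — the class names and values here are illustrative, not taken from his submission):

```python
import numpy as np

# Hypothetical softmax outputs from three separately trained models
# for a single image, over three classes: [no light, red, green].
model_a = np.array([0.10, 0.70, 0.20])
model_b = np.array([0.05, 0.60, 0.35])
model_c = np.array([0.20, 0.45, 0.35])

# Averaging the per-class probabilities combines the models' "votes";
# errors that only one model makes tend to be washed out.
ensemble = (model_a + model_b + model_c) / 3

# The ensemble's prediction is the class with the highest mean probability.
prediction = int(np.argmax(ensemble))
```

Because each input distribution sums to 1, the averaged distribution does too, so it can be used anywhere a single model’s softmax output would be.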
Things that didn’t work
“Localization: An attempt to first find the location of the traffic light in the image didn’t work so well. I suspect it is because I didn’t annotate enough of the training data. Would be interesting to see if this can be effective after annotating more images.
“Separating night & day: In an attempt to simplify the task, I tried to split the problem into recognizing daytime and nighttime images separately. In my experiments this didn’t improve the network. My assumption is that the network was able to extract that information by itself.
“This was the first time I applied deep learning to a real problem, so I learned a lot from this challenge — from how to use frameworks and run on GPUs to reading papers in search of new ideas and methods for improving the model. Hope to see more challenges in the future!🙌”
More details about David’s approach can be found on his blog post.
2nd Place ($2,000): Guy Hadash
“The solution used a SqueezeNet implementation in Keras, which has no fully connected layers, only convolutional layers — so the number of parameters is low. The network was trained from scratch. For preprocessing, I resized the images to 224x224, divided pixel values by 255, and added augmentation: shearing of up to 20%, zooming of up to 20%, and random horizontal flips. For the learning policy, I used Adam with the default learning rate of 0.001.”
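Two of the preprocessing steps Guy lists — scaling pixel values into [0, 1] and random horizontal flips — can be sketched in plain numpy (shearing and zooming, which Keras’s augmentation utilities also provide, are omitted here for brevity; the input image below is random dummy data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy 224x224 RGB image with 8-bit pixel values in [0, 255].
image = rng.integers(0, 256, size=(224, 224, 3)).astype(np.float32)

# Dividing by 255 scales pixel values into [0, 1], which keeps the
# network's inputs in a numerically friendly range.
image = image / 255.0

# Random horizontal flip: mirror the image along its width axis
# half of the time. Traffic-light scenes remain plausible when mirrored,
# so this effectively doubles the training distribution.
if rng.random() < 0.5:
    image = image[:, ::-1, :]
```

In practice these steps would run inside the training data pipeline so each epoch sees freshly augmented copies of the images.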
Guy says he “tried a lot of things in order to improve my accuracy:”
- Dropout — “Tried to add dropout in a few places (including the input) — didn’t help.”
- Special Resizing — “Tried to resize with max cropping instead of the default resize function. The assumption was that the traffic light would be lighter than its surroundings, so the network would learn better with this kind of resize — didn’t help.”
- Optimizers and parameters — “I tried different optimizers and parameters in order to achieve higher accuracy; this improved my model until I arrived at my current parameters.”
- Model ensemble — “I tried to ensemble my two models into one. It did give me higher accuracy, but I still didn’t reach 0.95, and the size was much bigger.”
“This was an amazing experience. When I started the challenge I had never used any deep learning frameworks. I had only theoretical knowledge, and my decision to participate in the challenge was mainly to learn the practical side.
“I feel now that I know Keras pretty well, including its inner code. I’ve already pushed one commit and fixed another bug locally.
“I saw how things that you expect to work sometimes don’t, and how to understand why.
“I really hope I will find the time to participate in the next challenge. Aside from the cash prize, which is great ($2,000!!!), I actually learned much more than I expected.
“Btw — the Nexar team could have just canceled the competition when no one reached the 0.95 bar and not given any prize, but they decided to act generously. Thanks Nexar!”
3rd Place (iPhone 7): Alon Burg
“I developed a compact network architecture composed of 3 convolutional layers with max pooling and a single fully connected layer (1.2MB model size), and trained it from scratch.”
- “To avoid overfitting, I used Dropout layers, as well as data augmentation such as horizontal flip, rotation and zoom.”
- “As for working tools, after trying to use TensorFlow straight up, I switched to Keras together with Python notebooks, which helped experimentation and visualization a lot.”
- “Cropping — I had a feeling that cropping the lower half of the picture might help the training, but it seemed like this actually decreased the learning.”
- “I have spent many hours working with Amazon GPU instances which helped me experiment, but in the end, since the model size had to remain small, experimenting on my own laptop was fast.”
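Alon’s 1.2MB figure can be sanity-checked with back-of-the-envelope arithmetic: each float32 weight costs 4 bytes, so a model’s size is roughly its parameter count times 4. The layer sizes below are hypothetical (the post doesn’t give them), but a plausible 3-conv + 1-FC network lands in the same ballpark:

```python
# Parameters in a conv layer: kernel_h * kernel_w * in_channels * out_channels,
# plus one bias per output channel.
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out

# Parameters in a fully connected layer: inputs * outputs, plus biases.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# Hypothetical compact architecture: 3 conv layers + 1 fully connected layer.
total = (
    conv_params(3, 3, 16)           # conv1: 3x3 kernels, RGB in, 16 filters
    + conv_params(3, 16, 32)        # conv2
    + conv_params(3, 32, 64)        # conv3
    + dense_params(64 * 7 * 7, 64)  # FC after pooling down to 7x7 feature maps
    + dense_params(64, 3)           # output: no light / red / green
)

# float32 weights: 4 bytes each.
size_mb = total * 4 / (1024 ** 2)
```

With these assumed sizes the model comes to roughly 0.9MB, the same order of magnitude as the reported 1.2MB, and it shows why the single fully connected layer dominates the parameter count.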
4th Place: Avihay Assouline
“My main goal was trying to get an end-to-end image classifier running on a mobile phone. I used Caffe and DIGITS to transfer-learn a GoogLeNet pre-trained network. Also, image augmentation was used to further improve the results.”
“Great experience running an image classifier that can run on a production app.”
“I learned about the current state of DL frameworks and their accessibility. To differentiate from Udacity, perhaps future challenges can focus on tasks/datasets that are unique to Nexar.”
More details about Avihay’s approach can be found on his blog post.
Congratulations once again to our winners! Stay tuned for our next challenge and for the Challenge Award Deep-Drive Event (March 22nd), with presentations from leading deep learning researchers and from the amazing David Brailovsky, who learned deep learning in only 10 weeks and developed the 1st-place solution in this Nexar Challenge.
Nexar's Blog — turning cars into vision sensors to see and make sense of the world around us.
Our mission is to create the technology that will make driving and cities better and safer.
Thanks to Bruno Fernandez-Ruiz.