
Heard About Qualcomm’s Neural Processing Engine? Why You Should Care

Just over a month ago, on July 25th, Qualcomm® unveiled its Snapdragon™ Neural Processing Engine (NPE) software development kit (SDK). The announcement noted that Facebook will integrate the technology into its augmented reality camera, and Nexar was also part of the announcement. The news didn’t receive front-page coverage, but here’s why it should have.

Qualcomm released a developer kit that will be adopted broadly to make us safer, give us back some measure of privacy, and speed the path to practical artificial intelligence applications.

We Can Make Our World Safer

Our brains have their limitations: sometimes distracted or tired, at other times rusty or misinformed. While many conversations about artificial intelligence (AI) dwell on augmented reality and gaming, one of its better applications may be increasing our safety, tipping us off at the right moment to notice, take, or avoid some action.

At Nexar, we’ve implemented the NPE technology to do just this. We’ve turned the same phone that distracts drivers with texts into a fully fledged Advanced Driver Assistance System (ADAS) at no additional cost, offering the 1.2 billion cars on roads globally today the quickest path to greater safety. Audible ADAS collision-avoidance alerts have been estimated to decrease road accidents by 50%. We can only hope this technology, and what we’re developing on top of it, saves even more lives in the future.
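To make the idea concrete (this is an illustration, not Nexar’s actual algorithm), the core of a forward-collision warning reduces to a time-to-collision calculation: given the distance to the lead vehicle and the closing speed, both of which an on-device vision model would estimate from camera frames, alert when the time to impact drops below a threshold. A minimal sketch in Python:

    # Illustrative forward-collision warning logic (hypothetical, not
    # Nexar's implementation). Assumes an on-device vision model already
    # estimates the gap to the lead vehicle from successive camera frames.

    TTC_ALERT_SECONDS = 2.5  # a common alert threshold in the ADAS literature

    def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
        """Seconds until impact if both speeds stay constant."""
        if closing_speed_mps <= 0:
            return float("inf")  # the gap is not shrinking; no collision course
        return distance_m / closing_speed_mps

    def should_alert(distance_m: float, closing_speed_mps: float) -> bool:
        return time_to_collision(distance_m, closing_speed_mps) < TTC_ALERT_SECONDS

    if __name__ == "__main__":
        print(should_alert(30.0, 15.0))  # True: 30 m closing at 15 m/s is 2.0 s out
        print(should_alert(60.0, 10.0))  # False: 6.0 s of headroom

The hard part, of course, is producing those distance and speed estimates from a phone camera in real time, which is exactly what an on-device engine like the NPE makes feasible.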

Our Privacy Has a Fighting Chance

We have long surrendered our private information to digital service providers like Google and Facebook in exchange for convenient logins and access. We’ve accepted, and even expected, that they know everything about us, and we trust them to keep this information safe. Yet we keep seeing examples of how little ownership we retain over our digitized data. Now that the foundational data and algorithms exist and AI can run at full speed on the device itself, there is no need to capture and upload all that data in the first place.

As just one example, there is a growing industry of home-care AI devices that can detect whether a patient has fallen to the floor and an ambulance should be called. This is now done without relaying video or data from the home 24/7. The device simply knows how to track a person and understand what is a normal pose and what is not, and how long that person sleeps or stays still throughout the day. It places a call only when it detects that a senior citizen is in distress.
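A minimal sketch of the kind of logic such a device might run locally (the pose model, thresholds, and ratios below are all hypothetical), where only a per-frame pose summary is ever computed and no video leaves the home:

    # Hypothetical on-device fall monitoring. Assumes a local pose-estimation
    # model summarizes each frame as a torso-height ratio: ~1.0 standing,
    # near 0.0 lying flat. Raw video never leaves the device.

    LYING_THRESHOLD = 0.3    # below this, treat the pose as "on the floor"
    DISTRESS_SECONDS = 60.0  # lying this long triggers a call for help

    class FallMonitor:
        def __init__(self):
            self.lying_since = None  # timestamp when the lying pose began

        def update(self, torso_height_ratio: float, now: float) -> bool:
            """Feed one frame's pose summary; return True if help is needed."""
            if torso_height_ratio >= LYING_THRESHOLD:
                self.lying_since = None  # person is upright again
                return False
            if self.lying_since is None:
                self.lying_since = now   # lying pose just started
            return now - self.lying_since > DISTRESS_SECONDS

    if __name__ == "__main__":
        monitor = FallMonitor()
        print(monitor.update(0.9, now=0.0))   # False: standing
        print(monitor.update(0.1, now=1.0))   # False: just went down
        print(monitor.update(0.1, now=95.0))  # True: down too long, place the call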

Qualcomm’s announcement furthers this privacy stance by supporting common deep learning frameworks such as Caffe, Caffe2 and TensorFlow. That means a model can be trained in the cloud and then brought onboard the mobile phone to work autonomously, without the need to divulge private data.
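In practice the workflow looks roughly like this: train with a supported framework in the cloud, export the trained model, and convert it once into Qualcomm’s on-device format so inference runs locally. A sketch with TensorFlow (the model here is a toy; the converter name comes from the NPE SDK, but its exact flags vary by SDK version):

    import tensorflow as tf

    # Cloud side: train a small image classifier with a supported framework.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 3), name="input"),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax", name="probs"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(train_images, train_labels, epochs=5)  # training happens here

    # Export the trained graph for conversion.
    tf.saved_model.save(model, "saved_model")

    # Device side: convert once to the SDK's DLC format and ship the file
    # with the app. After that, inference runs entirely on the handset and
    # no user data needs to be uploaded. (Flags vary by SDK version.)
    #   snpe-tensorflow-to-dlc --input_network saved_model \
    #       --input_dim input 1,224,224,3 --out_node probs \
    #       --output_path model.dlc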

Every Device Will Have A Camera and A Brain

Evan Nisselson, a partner at LDV Capital, has written and spoken more than once about how the Internet of Eyes (IoEyes) is developing. He and others project that by 2022, 45 billion cameras will have depth-analysis capabilities and process our world live. These cameras would be in everything from household and automotive appliances to toys, manufacturing equipment, and security devices.

Qualcomm, despite its head start with this launch, is far from the only company focused on giving any device the capacity to capture and comprehend the world around it. Qi Lu, Baidu’s COO, touched on AI-first devices in a recent WIRED interview, noting the development of technology that favors facial recognition over voice, or image recognition paired with a finger interface.

New Capability Means Opportunity

Until recently, making AI-enabled neural networks work on small, low-power, inexpensive devices was a formidable challenge. These neuron-based algorithms needed huge amounts of memory and computing power, which confined them to mission-critical, real-time cloud environments; it was impossible to run AI on any and all devices.

With time, deep learning frameworks have advanced, the technology to compress neural networks has improved, and the chipset manufacturers’ runtime environments have matured. Now that the floodgates of moving AI from the cloud to edge devices have opened, we can expect to live smarter and more immediately augmented lives, with more than just connectivity and music in the palm of our hands. We’ll also have vision-enhanced AI.
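To see why compression matters, consider one of the simplest tricks in the toolbox, post-training 8-bit quantization, sketched below with plain NumPy. Storing each weight as one byte plus a shared scale cuts memory roughly 4x versus float32; real toolchains do this per layer with calibration data, but the idea is the same (illustrative only):

    import numpy as np

    def quantize(weights: np.ndarray):
        """Map float32 weights onto int8 with a single linear scale."""
        scale = np.abs(weights).max() / 127.0
        q = np.round(weights / scale).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    w = np.random.randn(256, 256).astype(np.float32)
    q, scale = quantize(w)
    print(w.nbytes, "->", q.nbytes)                # 262144 -> 65536 bytes: 4x smaller
    print(np.abs(w - dequantize(q, scale)).max())  # small per-weight rounding error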

On the face of it, this is yet another tech announcement of the kind that goes unnoticed by most of us. I’d argue, however, that this was a moment to remember: overnight, our smartphones became 1000x smarter and more useful, and the huge range of vision-AI applications this unlocks is technology that matters in our daily lives.