Teetering on the edge of an AI revolution

Edge AI is a tremendous opportunity for start-ups that use AI and machine learning to disrupt and deliver faster, more flexible, sustainable and affordable products and services, advocates Rhett Evans (additional reporting by Caroline Hayes).

AI has become part of daily life – in cloud services such as social media, call centres and chatbots – and it is accelerating workloads including genome sequencing in medicine, retail data analytics and financial trading.

AI is now coming out of the cloud and into the edge. Frameworks such as TensorFlow Lite that support development on embedded computing platforms enable developers to build lightweight inference engines to handle tasks including object recognition, activity detection, gesture detection and people counting, at the point of presence.
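To illustrate what a lightweight inference engine actually does, here is a minimal hand-rolled sketch of quantised inference in Python. The layer sizes, weights, scale and gesture labels are invented for demonstration; a real deployment would use a framework such as TensorFlow Lite with a trained, converted model rather than hand-written code.

```python
# Illustrative sketch only: the core of a lightweight inference engine is a
# forward pass over quantised weights. All values below are made up.

def dense_int8(inputs, weights, biases, scale):
    """One fully connected layer over int8-quantised weights."""
    outputs = []
    for row, bias in zip(weights, biases):
        acc = sum(w * x for w, x in zip(row, inputs))  # integer multiply-accumulate
        outputs.append(acc * scale + bias)             # dequantise to a float score
    return outputs

def classify(features, weights, biases, labels, scale=0.05):
    scores = dense_int8(features, weights, biases, scale)
    best = max(range(len(scores)), key=scores.__getitem__)
    return labels[best]

# Hypothetical three-feature gesture classifier with two classes.
WEIGHTS = [[12, -3, 40], [-8, 25, -15]]   # one row of int8 weights per class
BIASES = [0.1, -0.2]
LABELS = ["swipe", "tap"]

print(classify([5, 2, 9], WEIGHTS, BIASES, LABELS))  # → swipe
```

Integer multiply-accumulates like these are exactly the operations that embedded AI accelerators are built to execute cheaply, which is why quantised models suit the edge.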

Edge AI can deliver lower latency, lower power consumption and greater privacy by eliminating data-intensive interactions with AI applications in the cloud.

A change is coming

Edge AI platforms will outperform, outclass and replace systems currently built on scalar processors – machine vision, for example. This is a tremendous opportunity for start-ups founded on skills in AI and machine learning to disrupt the ‘old order’ and deliver new platforms that are faster, more flexible, more sustainable and more affordable. At the same time, established players need to modernise using this new technology or risk being left behind.

To get an idea of the shake-up that edge AI presents, consider the dramatic changes in expectations of mobile phones over the past few years. People were once content with getting their emails, doing some web browsing, basic photography and video using their phones. Today, it’s hard to survive without a smartphone for banking, shopping, home automation, navigation, medical care or streaming entertainment, and the list is growing. Any handset entering the market now must be able to handle these tasks, and more, to be accepted.

In the future, even the tiniest chips will include some AI. For example, the latest MEMS inertial sensors have their own integrated machine learning core. We will soon expect to find intelligence embedded in every ‘thing’ we use, look at, touch, wear – at work, at home, when travelling, shopping and in entertainment venues.

Applications for edge AI

Many smart devices already rely on capabilities such as voice recognition, facial recognition and motion detection, and edge AI will enable them to become more responsive, more adaptive, more accurate, more richly featured, more easily portable (or wearable) and more affordable, while using less power.

Industrial uses

AI embedded in mobile edge equipment, such as drones or autonomous guided vehicles, enhances situational awareness to improve safety and reduce transit times for parts and materials within the factory. On production lines, low-cost, high-speed/high-accuracy image comparison and anomaly detection enable 100% visual inspection of manufactured items at line-speed. Intelligent condition monitoring systems inside factory equipment can detect and diagnose problems early, minimising false alarms and allowing repairs to be scheduled for minimum impact on productivity.
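As a simplified illustration of how a condition monitoring system might flag trouble early, the sketch below marks sensor readings that deviate sharply from the running statistics of recent ‘healthy’ behaviour. The window size, threshold and readings are invented for demonstration; production systems typically use trained models rather than a plain z-score.

```python
# Illustrative sketch: flag readings far outside recent normal behaviour.
from collections import deque
from statistics import mean, stdev

class ConditionMonitor:
    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)  # rolling window of normal readings
        self.threshold = threshold           # z-score cut-off for an anomaly

    def update(self, reading):
        """Return True if the reading looks anomalous."""
        anomaly = False
        if len(self.history) >= 10:          # need enough data for statistics
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                anomaly = True
        if not anomaly:                      # only learn from normal-looking data
            self.history.append(reading)
        return anomaly

# Hypothetical vibration readings: ten healthy values, then a fault.
monitor = ConditionMonitor()
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 9.5]
flags = [monitor.update(r) for r in readings]
print(flags)  # only the final reading is flagged
```

Running entirely on the device, a detector like this only needs to transmit the rare anomalous events, which is the latency and bandwidth advantage of edge AI in a nutshell.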

Another growth area is industrial wearables and protective gear, which improve safety, productivity and traceability.

AI in power tools and hand tools will be able to detect sub-optimal use and give corrective tips to improve longevity and accelerate employee training.

The retail experience

Digital signage and smart shelves use pose estimation, facial recognition and natural language understanding to assess shoppers’ moods and responses, as well as tracking sales volumes on particular lines. AI in checkout systems can reduce queues.

AI edge technologies for human machine interaction are the most commonly used, says Jerrold Wang, analyst at Lux Research and lead author of The Digital Transformation of the Consumer Journey report into how AI and the IoT are changing consumer habits. Examples include computer vision, voice recognition, natural language processing, smart cameras and sensors, and augmented reality. “Though these digital technologies are empowering the five segments of the consumer journey – awareness, consideration, purchase, use and retention – the hot spots of technology innovation and deployment are focused in the consideration and use segments,” he adds.

The 2020 report by Lux Research cited AI-enabled personalised product recommendations and automated product utilisation as the areas with the most opportunities for disruption to the conventional consumer experience. They will help retailers customise value-added services and offers and build customer retention.

Lux Research states that companies in the consumer product market risk being left behind if they don’t develop AI and IoT strategies immediately. Personalisation through digitally enabled products has been adopted by large retail conglomerates as well as smaller companies “launching new solutions on a nearly daily basis”.

Cameras, smart shelves and sensors also contribute to sustainable, cost-effective retail models, as data provided by the edge devices are used for inventory and logistics to reduce waste.

Edge AI is also used in frictionless retail. Adroit Worldwide Media’s AWM frictionless technology, for example, allows customers to fill their baskets and leave without having to scan items and queue to pay at the checkout. This frictionless retail model is creating a new format of shops and outlets on college campuses and in office buildings.

Last year, employees at Walmart in the US began accessing their own in-store voice assistant, Ask Sam. The mobile phone app uses voice technology to help staff look up prices, access store maps and find products. The response is quicker than typing a query on a screen. It uses machine learning to build its knowledge base with the questions being regularly reviewed to find patterns and trends such as most-requested items.

Health and security

Smart wearable medical devices deliver fast and accurate early detection of medical emergencies (such as stroke or cardiac problems) or onset of conditions needing treatment. In consumer wearables, activity recognition enhanced with AI improves performance measurement, fitness advice, therapeutic monitoring and elderly care (for example, fall detection).

For security applications, accurate facial recognition provides a cost-effective access mechanism, admitting recognised users into buildings and secure areas while helping to prevent unauthorised access. Human presence and activity detection, with pose estimation, can provide advance warning of malicious intent (for example, carrying weapons or using tools to gain entry).

AI in our homes will also increase, with affordable smart appliances using natural-language skills for richer interactions with users and analysing sensor data to provide extra services. For example, appliances will automatically order food deliveries, schedule predictive maintenance, suggest new recipes and identify and remedy incorrect use of the equipment.

Implementing AI at the edge

Edge applications usually face tight constraints including size, weight, power, thermal dissipation and cost. Processor cycles and memory are often strictly limited. An efficient and lightweight solution is needed from both the hardware and software perspectives. To add to the challenge, responses often need to be deterministic and delivered in real time.


Lightweight AI frameworks are needed to build inference engines suitable for deployment on mobile and edge devices; the aforementioned TensorFlow Lite is one example. In addition, embedded processors are becoming available that are architected to run AI applications within a limited power budget. NXP, for example, offers the i.MX 8M Plus application processor, which is supported by the eIQ software development environment for machine learning at the edge.

The i.MX 8 family is aimed squarely at edge applications in terms of power consumption, size, processing performance and peripheral integration. The i.MX 8M Plus adds an integrated neural processing unit (NPU) that accelerates machine-learning inference. The NPU can run neural network algorithms for various tasks such as human pose and emotion detection, multi-object surveillance and word/speech recognition.

NXP’s eIQ machine-learning environment integrates neural network compilers, software libraries and inference engines such as TensorFlow Lite, Arm NN, DeepViewRT and ONNX. It also supports TensorFlow Lite Micro for machine learning on microcontrollers such as NXP’s Arm Cortex-M processors that are suited to use in endpoint devices.

About The Author

Rhett Evans is business manager, embedded at Anders
