Intelligent Internet of Things (IoT) Edge Device*
Fast & Ultra-Low Power Recognition At The Edge* for LoRa / NB-IoT Networks
Kelzal produces the Perception Appliance™, an AI-driven Ultra-Low Power or Fast Object and Activity Recognition System. The Perception Appliance™ initially targets Surveillance and Monitoring across several industry verticals, such as Building Automation and Security, followed by Mobile Robots and Autonomous Vehicles.
The Ultra-Low Power Perception Appliance™ operates for years on a battery. The Fast Perception Appliance™ acquires and tracks 100x faster and categorizes objects more than 10x faster than typical current AI frame-based camera systems.
The Perception Appliance™ provides better machine visual perception at lower power than current systems, in the visible and near-infrared spectrum. Better visual perception means lower latency and more robust & reliable tracking and recognition across a much larger range of real-world conditions than current systems.
Less Power To Process Only Changing Pixels and React Faster
Perception is the primary feature of AI solutions for today’s real-world applications such as facial recognition, autonomous driving, industrial robotics, drone navigation, security monitoring, and many others. Although considerable progress has been made during the past decade in the application of AI to these problems, industry has yet to harness the full capabilities of contemporary cognitive science and neural computing. As a result, today’s perception systems (e.g. surveillance cameras with facial recognition) require substantial computing hardware and electrical power, as well as high-speed data connections to the cloud, precluding the possibility of low-cost, intelligent, battery-powered remote edge devices.
Perception Appliance™ Solution
The Perception Appliance™ leverages Event-Based Vision (EBV), Neuromorphic Processing, and 3rd Generation Deep Neural Networks (3rd Gen NN) to enable high-performance cognition at ultra-low power. Event-triggered sensors promise to greatly enhance the performance of almost any sensory suite, especially for surveillance and monitoring solutions, mobile robots, and even autonomous vehicles. The EBV instantly reports only the pixels that change, via an asynchronous bus to the downstream processor. The pixel event stream from the EBV becomes the input to the neural network in the neuroprocessor. The neuroprocessor is highly optimized for 3rd Gen NN, which encode both spatial and temporal information. The result is a substantial increase in both accuracy and speed of perception – at far less power.
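To illustrate the idea behind event-based vision, the minimal sketch below contrasts it with frame-based capture: rather than emitting full frames, an EBV sensor emits (x, y, timestamp, polarity) events only for pixels whose intensity changes beyond a threshold. The function name, event format, and threshold are illustrative assumptions, not Kelzal's actual pipeline, and a real sensor does this asynchronously in hardware rather than by differencing frames.

```python
# Illustrative sketch only: real EBV sensors generate events asynchronously
# in hardware; here we simulate the concept by differencing two frames.
import numpy as np

def frame_to_events(prev_frame, curr_frame, t, threshold=15):
    """Emit (x, y, t, polarity) events for pixels that changed by > threshold."""
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(int(x), int(y), t, 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

# A static scene produces no events at all; a single changing pixel
# produces a single event - this sparsity is what saves power downstream.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[2, 1] = 200  # a bright object appears at (x=1, y=2)

static_events = frame_to_events(prev, prev, t=0.000)   # []
motion_events = frame_to_events(prev, curr, t=0.001)   # [(1, 2, 0.001, 1)]
```

In a full system, a stream of such events (rather than dense frames) would feed the spiking neural network, which can exploit the event timestamps directly.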
Perception Appliance™ Use Cases
The Ultra-Low Power Perception Appliance™ provides AI at the Edge* that monitors and reports activity (humans, vehicles, animals, drones, …) using only a few milliwatts of power, allowing years of operation on camera batteries. This ultra-low power operation enables recognition at the edge and long-lasting performance on IoT networks without battery replacement – the ideal Intelligent Edge IoT Device for telecom network operators. The device uses the new internet of things (IoT) networks (LoRa, NB-IoT, SigFox), which are becoming extremely popular and pervasive, to communicate perception results and receive over-the-air updates. Several immediate use cases include: (i) monitoring human activity within office buildings for HVAC optimization, lighting, energy efficiency, etc.; (ii) home security; (iii) commercial building and campus security, including recognition of objects, activities, gaits, and faces, as well as active shooter detection & localization; (iv) warehouse and factory monitoring; (v) robot perception and control, including autonomous vehicles.
The Fast Perception Appliance™ provides Visual Perception to Autonomous Vehicles, Industrial Robots, and Coworking Robots (Cobots) that is orders of magnitude faster, more reliable, and at lower latency for faster reactivity than conventional approaches. It contributes to safer and more reactive Autonomous Vehicles, Industrial Robots, and Cobots in real-world environments (< 1 ms tracking with < 3 ms recognition/classification) by providing hundreds of milliseconds of extra time for reactive maneuvers compared to current systems.
Will your AI see the kid following the bouncing ball soon enough?
< 1 ms tracking & < 3 ms recognition
Perception Appliance™ object and activity recognition
works even facing direct blinding sunlight
Image Free Rapid Detection
Kelzal’s Image Free Sensing™, combined with patent-pending, brain-inspired artificial intelligence for rapid response and collision avoidance, processes far more streamlined data than large frame-based images. We use brain-inspired intelligence to extract the minimum data needed for processing, with no need for computationally intensive object recognition algorithms running over redundant image frames. Moreover, the Kelzal system will not miss any motion in high-speed situations, even the fleeting muzzle flash of an active shooter at close range or in the distance.
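A back-of-the-envelope calculation shows why an event stream is more streamlined than redundant image frames: a frame-based camera ships every pixel every frame, while an event-based sensor's data rate scales with scene activity. All numbers below (VGA resolution, 1% active pixels, 8 bytes per event) are illustrative assumptions, not Kelzal specifications.

```python
# Illustrative data-rate comparison: frame-based camera vs. event stream.
FRAME_W, FRAME_H, FPS = 640, 480, 30           # assumed VGA frame-based camera
BYTES_PER_PIXEL = 1                             # 8-bit grayscale
frame_bytes_per_sec = FRAME_W * FRAME_H * FPS * BYTES_PER_PIXEL  # ~9.2 MB/s

ACTIVE_FRACTION = 0.01                          # assume 1% of pixels change per frame
BYTES_PER_EVENT = 8                             # assumed packed (x, y, t, polarity)
event_bytes_per_sec = (FRAME_W * FRAME_H * FPS
                       * ACTIVE_FRACTION * BYTES_PER_EVENT)      # ~737 KB/s

reduction = frame_bytes_per_sec / event_bytes_per_sec  # 12.5x less data here
```

Under these assumptions the event stream carries 12.5x less data, and the gap widens further for quieter scenes, since a fully static scene generates essentially no events at all.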
Low Power / Low Weight
Kelzal’s main sensors are passive rather than active. They do not transmit signals, which can drain power rapidly, and they are typically much lighter than active sensors for the same coverage. This means less overhead and less added weight, resulting in longer battery life.
Works in Diverse Light Conditions
Kelzal’s superior dynamic-range performance detects objects in extremely diverse light conditions. Objects can be detected, tracked, and recognized even when the sensor is oriented directly toward the sun or under the dim light of a quarter moon. It works even when bright light and dark shadows occur in the same field of view. This means more reliable and robust performance at all times of day, and even at night.
Autonomous Vehicles and Mobile Robots
Kelzal is committed to detection and intelligent response, creating a level of autonomous situational awareness that lets the autonomous platform avoid collisions and resume operations. Kelzal combines sensor fusion with robust multiprocess detection techniques to eliminate false positives and to ensure that reactive actions are contextually appropriate and taken only when they are truly needed.