Modar (JR) Alaoui
Founder and Chief Executive Officer
By now, it is widely understood that Artificial Intelligence is a main catalyst for innovation and organizational growth. In fact, AI has been part of our imaginations and simmering in research labs since a handful of computer scientists rallied around the term at the Dartmouth Conference in 1956 and birthed the field of AI.
However, it wasn't until recently that AI started delivering on its long-awaited promise, thanks in large part to advances in machine learning, and more particularly deep learning.
Recent advances in microprocessors, the commoditization of GPU-based supercomputers, and today's massive datasets available for training algorithms, both supervised and unsupervised, have clearly enabled the (r)evolution of deep learning.
As is the case for most emerging technologies, those who leverage these enablers ride the early waves of deep learning. Not only do they discover the early challenges, but they also disrupt greatly by solving them to the benefit of their respective technologies, whether in speech, text, or image recognition.
Those of us in the image analysis area come at it from personal experience, which allows us to use deep learning as a medium to continuously push the boundaries of facial expression recognition.
Popular deep learning architectures such as Convolutional Neural Networks (CNNs) address image and speech recognition applications. CNNs tend to be easier to train than other deep, feed-forward neural networks since they can be trained with standard backpropagation. They also have far fewer parameters to estimate, making them a highly attractive architecture for image analysis, especially in our case of emotion tracking through facial micro-expression recognition.
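The parameter advantage comes from weight sharing: a convolutional filter reuses the same small set of weights across the whole image, while a fully-connected layer learns a separate weight for every pixel-to-unit connection. A minimal sketch of the arithmetic, using an illustrative 48x48 grayscale face crop and hypothetical layer sizes (these numbers are assumptions for the example, not Eyeris specifics):

```python
# Hedged sketch: compare trainable-parameter counts for one layer
# applied to a 48x48 grayscale image. All sizes are illustrative.

H, W = 48, 48          # input image height and width
hidden_units = 256     # hypothetical fully-connected layer width
k, filters = 5, 32     # hypothetical 5x5 conv kernels, 32 feature maps

# Fully-connected layer: every pixel connects to every hidden unit.
fc_params = H * W * hidden_units + hidden_units   # weights + biases

# Convolutional layer: each filter's weights are shared across all
# spatial positions, so only the kernel itself is learned.
conv_params = k * k * filters + filters           # weights + biases

print(fc_params)    # 590080
print(conv_params)  # 832
```

Even at this toy scale the convolutional layer estimates roughly 700x fewer parameters, which is one reason CNNs train tractably on image data with plain backpropagation.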
While a number of applications can benefit from emotion recognition today, we have purposely chosen our industry verticals to solve harder problems by leveraging unique technology differentiators, including the integration of deep learning architectures into our expression recognition algorithms for continuous, improved learning in relatively short timeframes.
Our mission of advancing Ambient Intelligence (AmI) allows us to enable a new era of Human Machine Interaction (HMI) in which embedded systems, including everyday devices and machines, can understand and predict users' emotions and respond accordingly in time-critical situations to enhance user experiences. Predictability and improved accuracy through rapid adaptation are key areas that affect user and environment personalization and delivery.
While there are many variants of deep architectures, most remain branches of a few original parent architectures. Since not all of these architectures are implemented on the same datasets, it is not always possible today to compare their performance directly.
Deep learning, however, is a fast-growing field, so new architectures, variants, and algorithms are expected to keep branching out, each targeting either many problems or a specific one in its respective area. Industries like healthcare, for drug discovery and toxicology, or automotive, for scene recognition and camera-view interpretation, are ripe for further development with deep learning architectures in the coming years.
Deep learning is a tool that allows algorithm training through one of its architectures using supervised (labeled) data, unsupervised (unlabeled) data, or reinforcement learning. In each case, data, in both quantity and quality, represents the "raw material" that, via the deep learning "tool", permits algorithm training and is often a crucial determinant of accuracy.
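The supervised case can be sketched in a few lines: labeled pairs are the "raw material", and gradient descent on a loss is the "tool" that turns them into a trained model. A deliberately toy, framework-free example fitting a 1-D linear rule (all numbers here are illustrative, not from the article):

```python
import numpy as np

# Hedged sketch of supervised training: labeled data (x, y) plus
# gradient descent on a mean-squared-error loss. Toy 1-D linear fit.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5                      # labels generated by a known rule

w, b, lr = 0.0, 0.0, 0.1               # parameters and learning rate
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)   # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)         # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))        # converges near the true (3.0, 0.5)
```

The same loop with the labels removed has nothing to fit, which is the practical sense in which data quantity and quality gate what the "tool" can produce.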
Both the large number of datasets available today and those being implicitly amassed by companies of all sizes, from startups to large corporations, are shaping the future of deep learning. Being part of, and contributing to, this future with our own proprietary datasets and artificially intelligent algorithms is certainly exciting.
This raw-material data, together with the deep learning tool, is enabling the "fruit": advanced decision-making algorithms, some of which, including our technology, outperform human logic, speed, and overall performance. And this is what excites us the most.
Modar is a tech entrepreneur and technologist with a special interest in Embedded Vision for user facial behavioral measurement. He is a frequent speaker on Artificial Intelligence (AI), Deep Learning (DL), Face Analytics and Emotion Recognition through facial micro-expressions, Human Machine Interaction (HMI), Robotics Vision, and the keyword for the next decade: Ambient Intelligence (AmI).
He is the Founder and CEO of Eyeris, maker of the world's leading Deep Learning-based Artificially Intelligent emotion recognition and face analytics technology. Eyeris' flagship product, EmoVu, is hardware-agnostic Computer Vision software that reads people's facial micro-expressions in real time, as part of the most comprehensive suite of face analytics.