During Q1, I discussed our first significant engagement for a hearable device, and during Q2, I announced that we had won that and other hearable designs that are currently scheduled for production during Q4 2017. I am pleased to state we have seen a substantial increase in hearable engagements during the last four weeks. To keep pace with this demand, we’ve recently added three senior-level positions in product management, hardware solutions architecture, and system engineering. Their wide-ranging experience in voice, mobile SoCs, Bluetooth, embedded hardware/software, ultra-low-power design, MEMS sensors, and IoT, hearable, and wearable applications will be a huge asset to the company.
These roles add depth of expertise for the platform’s extensive feature set, enhance the company’s in-house knowledge of end-user applications, and increase our capacity for engagements with key customers to facilitate additional design win activity. They will also help us refine the design of our next-generation voice-enabled Sensor Processing Platform.
The sharp ramp and more recent acceleration in design activity for hearable devices that we’ve seen during the last several months are not surprising. As I noted in our Q1 2017 conference call, some analysts were already predicting the market for hearable devices would grow to approximately $17 billion by 2020 and represent over 50% of the entire wearable market.
Analysts also predicted this growth would be driven by a new breed of “smart” voice-enabled hearable devices that include motion and/or biometric sensors, and that we would start seeing smart hearable devices in the market by the end of 2017. Our EOS™ S3 voice-enabled Sensor Processing Platform not only puts us on the cutting edge of this exciting trend, but also enables these new smart hearable designs that include sensor processing to consume less power than the competing solutions in the market today.
A common thread among the hearable device designs we are seeing today is voice. As it stands today, our EOS S3 is the only MCU-based SoC in the market with a hardware-integrated Low Power Sound Detector (LPSD) based on Sensory’s TrulyHandsfree technology. With it, we can enable always-listening sound detection at about one-tenth the power consumption of an MCU running TrulyHandsfree in software. That represents a huge power saving when you consider the tiny batteries used in hearable devices.
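For readers who like to see the concept in code, here is a minimal sketch of what an always-listening flow looks like when sound detection lives in hardware. This is illustrative only: every function name below (hw_lpsd_enable, mcu_deep_sleep, and so on) is a hypothetical placeholder, not an actual QuickLogic or Sensory API. The point is simply that the host core sleeps until the hardware detector fires, rather than burning cycles on continuous software recognition.

```c
/* Illustrative always-listening loop. All platform hooks below are
 * hypothetical placeholders, not real QuickLogic or Sensory APIs. */
#include <stdbool.h>

/* Provided by the (hypothetical) platform SDK. */
extern void hw_lpsd_enable(void);           /* arm the hardware sound detector */
extern void mcu_deep_sleep(void);           /* halt the core until an interrupt */
extern bool recognizer_check_trigger(void); /* full keyword check in software */
extern void handle_voice_command(void);

static volatile bool sound_detected = false;

/* Fired by the hardware Low Power Sound Detector. Until this runs,
 * the MCU core stays asleep -- this is where the roughly 10x power
 * saving over software-only detection comes from. */
void hw_lpsd_irq_handler(void)
{
    sound_detected = true;
}

int main(void)
{
    hw_lpsd_enable();

    for (;;) {
        mcu_deep_sleep();            /* core idle; detector runs in hardware */

        if (sound_detected) {
            sound_detected = false;
            /* Only spend CPU cycles on recognition after the hardware
             * has flagged a likely trigger phrase. */
            if (recognizer_check_trigger()) {
                handle_voice_command();
            }
            hw_lpsd_enable();        /* re-arm before sleeping again */
        }
    }
}
```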
To further leverage our hardware LPSD, we have recently broadened our relationship with Sensory, Inc. to enable direct connection to the Amazon Alexa AI Voice Assistant. This makes it easier for our customers who are developing designs targeting Alexa. Of course, the EOS S3 platform is voice-trigger agnostic, so it can also support OK Google, Cortana, Siri, and even foreign-language cloud services such as AISpeech.
Voice is clearly the driver for hearable designs today, and I believe it will be a check-box requirement for new designs going forward. While this obviously plays to our strengths, I believe the competitive advantages of our multi-core EOS S3 platform will grow as hearable designs evolve to include gesture, motion, and biometric sensor processing. In these designs, our EOS S3 SoC can serve as both the host and sensor processor, and can handle both cloud-based and local (deeply embedded on the edge device) voice commands.
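To make the host/dispatch idea concrete, here is a hypothetical sketch of how a host loop might split work between a local recognizer and a cloud assistant. None of these names come from our SDK; they are placeholders showing the data flow: simple commands are resolved on the device with no network round trip, while open-ended speech is streamed to the cloud service.

```c
/* Hypothetical host-side dispatch: local commands handled on-device,
 * everything else streamed to a cloud voice assistant. All names are
 * illustrative placeholders, not a real API. */
#include <stddef.h>
#include <stdint.h>

typedef enum {
    CMD_NONE,        /* local recognizer found nothing */
    CMD_VOLUME_UP,
    CMD_VOLUME_DOWN,
    CMD_PAUSE
} local_cmd_t;

/* Platform hooks -- assumed, not actual SDK functions. */
extern size_t mic_read(int16_t *buf, size_t max_samples);
extern local_cmd_t local_recognize(const int16_t *buf, size_t n);
extern void cloud_stream(const int16_t *buf, size_t n);
extern void apply_local_cmd(local_cmd_t cmd);

/* Process one audio frame: handle it locally if it matches a deeply
 * embedded command, otherwise defer to the cloud service. */
void voice_dispatch_frame(void)
{
    int16_t audio[320];                      /* 20 ms at 16 kHz */
    size_t n = mic_read(audio, 320);

    local_cmd_t cmd = local_recognize(audio, n);
    if (cmd != CMD_NONE) {
        apply_local_cmd(cmd);                /* no network round trip */
    } else {
        cloud_stream(audio, n);              /* Alexa, Google, etc. */
    }
}
```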
Another benefit that is unique to our EOS S3 platform is its ArcticPro™ embedded FPGA (eFPGA) technology. With the added flexibility of hardware programmability, our customers can optimize the market reach of a given product design and more easily utilize the EOS S3 platform across various end-product designs. This capability has resonated strongly with some of our ODM customers that target multiple OEM designs from a single hardware platform.
To further support our increased engagement and design win activity, Sue has successfully renewed our bank line of credit with improved covenant terms. As it stands today, we have drawn $6 million and have an additional $6 million available at our election.
For more information, please see our SEC Form 8-K filed on September 5, 2017.
1) Brian, will Quick ever get to a point where you can discuss customer names? 2) If I were building a set of ear buds to compete with Apple’s AirPods or Samsung’s IconX, would I need to buy one EOS chip for each ear bud to develop the product, or just one for the pair?
Thanks for your questions. Regarding your first question – we always try to share customer names with the public if, and when, a customer is comfortable with us doing so. As a core silicon provider, we are often bound by very restrictive non-disclosure agreements with our customers, and as such, are restricted from sharing information until given explicit approval. To your second question – it would be one EOS S3 device per ear bud. The primary reason is that an ear bud OEM would likely want to run always-on voice using a microphone in each ear bud, as well as other sensor fusion functions such as biometric sensing (e.g., heart rate) and a pedometer. Since each ear bud duplicates these sensors, it makes sense to have an EOS S3 in each ear bud as well. And we all like the sound of that.
Hello Brian, would you see a natural extension of the ‘hearables’ market being home appliances? -Paul (disclosure; former QUIK employee)
Why yes, Paul…a natural evolution of home appliances would be the ability to ‘hear’. For instance, when preparing food in the kitchen, it would be much easier (and much more sanitary) to speak to your smart refrigerator with a “put milk on the shopping list” command, rather than having to remember it for a written list later, or touching surfaces with contaminated hands. Imagine simply telling your washing machine “wash a load of blue jeans starting in 2 hours”, and having it know exactly what temperature water to use and what cycle to run, and having the laundry ready when you are ready to unload it.
The ease of simply speaking is the key user experience here.