Original Link: https://www.anandtech.com/show/7782/audience-announces-mq100-motion-processor-

Just a few weeks ago at CES, Audience announced its fourth generation of voice processors, which added an always-on voice activation feature called VoiceQ designed to enable experiences similar to the "OK Google" feature on the Moto X. Today Audience is announcing a product for an entirely different segment: the MQ100, an always-on sensor hub with sensor fusion that uses the same proprietary DSP architecture Audience developed for its eS line of voice processors.

Audience's positioning for the MQ100 is clever: combining inputs from the accelerometer, gyro, and other sensors makes it analogous to an inner ear, not far off from the cochlear functions it says it emulates in its voice processors. The MQ100 is built on the same 40nm process as the rest of the eS series and uses the same ISA. It's capable of 9-axis sensor fusion at under 5 mW peak and around 1 mW average, which Audience says is lower than the 10-20 mW of similar designs built on straight ARM IP or on competitors' fixed-point DSPs (Audience's DSP architecture is floating point and SIMD). Audience isn't developing its own sensor fusion software, but rather opening up its ISA to software partners to port their algorithms over, and will supply the necessary drivers for the sensors that plug into it.
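For readers unfamiliar with what "9-axis sensor fusion" actually involves, the sketch below shows one of the simplest approaches, a complementary filter that blends gyroscope, accelerometer, and magnetometer samples into a roll/pitch/yaw estimate. This is purely illustrative: it is not Audience's software (which comes from its partners), and the names, sample rate, and blend constant are assumptions for the example.

```c
/*
 * Minimal 9-axis complementary filter sketch - illustrative only,
 * not Audience's or any partner's algorithm. Sign conventions and
 * axis orientation are assumed for the example.
 */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;          /* one sensor sample  */
typedef struct { float roll, pitch, yaw; } att;  /* orientation (rad)  */

#define ALPHA 0.98f   /* weight given to the integrated gyro estimate */

/* Advance the attitude estimate by one sample period dt (seconds). */
static void fuse(att *a, vec3 gyro, vec3 accel, vec3 mag, float dt)
{
    /* 1. Propagate with gyro rates (rad/s): accurate short term, but drifts. */
    float roll_g  = a->roll  + gyro.x * dt;
    float pitch_g = a->pitch + gyro.y * dt;
    float yaw_g   = a->yaw   + gyro.z * dt;

    /* 2. Absolute roll/pitch from the gravity vector: noisy but drift-free. */
    float roll_a  = atan2f(accel.y, accel.z);
    float pitch_a = atan2f(-accel.x,
                           sqrtf(accel.y * accel.y + accel.z * accel.z));

    /* 3. Tilt-compensated heading from the magnetometer. */
    float mx = mag.x * cosf(pitch_a) + mag.z * sinf(pitch_a);
    float my = mag.x * sinf(roll_a) * sinf(pitch_a) + mag.y * cosf(roll_a)
             - mag.z * sinf(roll_a) * cosf(pitch_a);
    float yaw_m = atan2f(-my, mx);

    /* 4. Blend: trust the gyro over short intervals, accel/mag long term. */
    a->roll  = ALPHA * roll_g  + (1.0f - ALPHA) * roll_a;
    a->pitch = ALPHA * pitch_g + (1.0f - ALPHA) * pitch_a;
    a->yaw   = ALPHA * yaw_g   + (1.0f - ALPHA) * yaw_m;
}

int main(void)
{
    att a = {0};
    /* Device lying flat and still: gravity on +Z, field pointing "north". */
    vec3 gyro  = {0.0f, 0.0f, 0.0f};
    vec3 accel = {0.0f, 0.0f, 9.81f};
    vec3 mag   = {0.3f, 0.0f, 0.5f};

    for (int i = 0; i < 100; i++)            /* 100 samples at 100 Hz */
        fuse(&a, gyro, accel, mag, 0.01f);

    printf("roll %.2f  pitch %.2f  yaw %.2f (rad)\n", a.roll, a.pitch, a.yaw);
    return 0;
}
```

Even this toy version runs trigonometric and floating-point math on every sample, which is why running it continuously on a dedicated low-power DSP rather than waking the application processor is the whole point of a part like the MQ100.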

In addition, the same motion processing software will be available in the eS line of voice processors, filling in the curious gap in product numbering that we saw earlier with the new MotionQ-enabled combinations. Audience expects MQ100 samples in Q2 2014.
