13 Comments
peevee - Tuesday, May 22, 2018 - link
Isn't it named deceptively compared to what it can actually do? Application of models is not the same as machine learning, which is building them.

boeush - Tuesday, May 22, 2018 - link
If the PLE is sufficiently flexible, it might support actual backpropagation (or other genuinely learning-type processing).

Aside from that, a major part of any learning process is running the network through the training set over and over again to obtain results that feed into iterative corrections to the network's structure or weights. That forward-pass workload could still be significantly accelerated even by processors that only support "application of models".
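To make that concrete, here is a minimal numpy sketch (a toy illustration, not ARM's actual programming model) of where the split falls: the forward pass in every training step is the same math as inference, so an inference-only engine could accelerate it, while the gradient computation and weight update still need more general hardware.

```python
import numpy as np

# Toy single-layer network: a = relu(W @ x), trained with SGD on MSE.
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(4, 8))   # weights being trained
X = rng.normal(size=(100, 8))         # training set inputs
Y = rng.normal(size=(100, 4))         # training set targets
lr = 0.01

for epoch in range(50):
    for x, y in zip(X, Y):
        # Forward pass: identical math to inference, so an
        # inference-only accelerator could run this part.
        z = W @ x
        a = np.maximum(z, 0.0)          # ReLU

        # Backward pass and weight update: requires gradients,
        # which a pure "apply the model" engine does not expose,
        # so this part stays on a general-purpose processor.
        grad_a = 2.0 * (a - y)          # d(MSE)/da
        grad_z = grad_a * (z > 0)       # ReLU derivative
        W -= lr * np.outer(grad_z, x)   # dL/dW = grad_z * x^T
```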
mode_13h - Wednesday, May 23, 2018 - link
The emphasis on 8-bit processing suggests it won't be well-suited to training.
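A hedged toy sketch of why (the scale and learning rate here are made up purely to show the effect): int8 weights are fine for applying a trained model, but typical training updates are far smaller than one quantization step, so they simply round away.

```python
import numpy as np

def quantize_int8(w, scale):
    """Symmetric int8 quantization, as commonly used for inference."""
    return np.clip(np.round(w / scale), -127, 127).astype(np.int8)

scale = 0.01                    # one int8 step = 0.01 in real units
w = 0.5342                      # a trained fp32 weight
print(quantize_int8(w, scale))  # 53 -- fine for inference

# A training step: the gradient update is much smaller than one
# quantization step, so the int8 representation never moves.
lr, grad = 0.001, 0.2
w_updated = w - lr * grad                # 0.5340
print(quantize_int8(w_updated, scale))   # still 53 -- the update is lost
```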
hastalavista - Wednesday, May 23, 2018 - link
Why do you think so? I think going beyond 8 bits would dramatically increase power consumption, making the whole chip much less interesting.

peevee - Wednesday, May 23, 2018 - link
From the architecture in the pic above, it is not flexible at all (or at least not in an efficient manner).

Yojimbo - Wednesday, May 23, 2018 - link
"Machine learning" is the technology and the discipline. "Training" is the building of models, and "inference" is the application of the models. Both training and inference are part of machine learning.peevee - Wednesday, May 23, 2018 - link
peevee - Wednesday, May 23, 2018 - link
Let's agree to disagree on the definition of terms. For me, learning means learning.

jospoortvliet - Sunday, May 27, 2018 - link
I get your point, but I find it hard to argue that inferencing isn't part of the machine learning discipline...

gerWal - Wednesday, May 23, 2018 - link
Inference is a logical fit for mobile devices, whilst training is better suited to desktops/servers, as they do not have the same power constraints as mobile devices.

peevee - Wednesday, May 23, 2018 - link
Of course. It's just that inference is not learning; it is the application of something learned somewhere else.

jospoortvliet - Sunday, May 27, 2018 - link
What surprises me is that ARM doesn't put more into the learning part. I think that a year from now this might be more of a 'thing' than today, with things going off-line more and more for various reasons (performance/latency, control, legal compliance, etc.).

Krysto - Thursday, May 24, 2018 - link
I wonder if Qualcomm will end up using this in its next-gen high-end chip, considering its DSP is something like 10-20x slower than the promised performance of the ARM MLP.

jospoortvliet - Sunday, May 27, 2018 - link
Hard to be sure if it really is 10-20x slower, given you can't truly compare the silicon. Also, this whole area is still so new that it isn't unthinkable Qualcomm makes a 10-20x performance improvement in their next gen.