So far, we have used the SVM (Support Vector Machine) as our main classifier for porting machine learning models to a microcontroller: but recently we came across an interesting alternative that can be much smaller while providing similar accuracy!
We chose SVM as the focus of the MicroML framework because we knew memory usage could be kept efficient once the support vector encoding is ported to plain C.
We were able to port many real-world models (motion identification, wake-word detection) to small microcontrollers such as the old Arduino Nano (32 KB flash, 2 KB RAM).
The compromise in our implementation was to sacrifice flash space (which is usually quite abundant) to save as much RAM as possible, since RAM is usually the most limiting factor.
Because of this choice, if your model grows too large (high-dimensional data, or data that cannot be separated well), the generated code still fits in RAM, but "overflows" the available flash.
In a couple of our previous posts we warned that model selection can be a necessary step before deploying a model to an MCU: first you should check whether it fits, and if it doesn't, you should train another model in the hope of getting fewer support vectors, since each of them increases the code size.
A new algorithm: Relevance Vector Machines
It was fortunate that we came across a new algorithm called the Relevance Vector Machine. It was patented by Microsoft until last year, but it is now free to use.
It uses the same formulation as the SVM (a weighted sum of kernel evaluations), but applies a Bayesian model to it.
This makes it possible to obtain probabilities for the classification results, something the SVM lacks entirely in the first place.
In the second place, the algorithm tries to learn a much sparser representation of the support vectors, as you can see in the picture below.
A fairly lightweight model that can achieve high accuracy
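On the first point, a minimal sketch with sklearn's own SVC shows why probabilities are only an add-on for SVM: by default you get raw decision scores, and `probability=True` bolts Platt-style calibration on top, whereas the RVM produces probabilities natively.

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# A plain SVC only exposes uncalibrated decision scores, not probabilities
clf = SVC(kernel='rbf', gamma='scale').fit(X, y)
print(clf.decision_function(X[:1]))

# probability=True adds a Platt-scaling calibration step to approximate them
clf_p = SVC(kernel='rbf', gamma='scale', probability=True).fit(X, y)
print(clf_p.predict_proba(X[:1]))   # one probability per class
```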
Training the Classifier
Since it was patented, it apparently never got a mainstream implementation. Fortunately, there is one that follows the sklearn paradigm. You have to install it first:
Training a classifier is very easy, since the interface is the usual fit/predict.
The constructor accepts parameters similar to those of the SVC classifier in sklearn:
kernel: one of linear, poly, rbf
You can read the sklearn documentation to learn more.
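As a sketch of that fit/predict interface, here is the equivalent workflow with sklearn's own SVC on the IRIS dataset; an RVM implementation that follows the sklearn paradigm would be a drop-in replacement for the classifier line.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# kernel accepts 'linear', 'poly' or 'rbf', as described above
clf = SVC(kernel='rbf', gamma='scale')
clf.fit(X_train, y_train)
print('accuracy:', clf.score(X_test, y_test))
```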
Moving to C
Now that we have a trained classifier, we have to port it to plain C, so it can compile on the microcontroller of our choice.
You now have plain C code that you can drop into any microcontroller project.
To test the effectiveness of this new algorithm, we applied it to the datasets from our previous posts, comparing the size and accuracy of SVM and RVM side by side.
| Dataset | SVM flash (bytes) | SVM accuracy (%) | RVM flash (bytes) | RVM accuracy (%) | Flash delta | Accuracy delta |
|---|---|---|---|---|---|---|
| Accelerometer motion (linear kernel) | 36888 | 92 | 7056 | 85 | -80% | -7% |
| Accelerometer motion (gaussian kernel) | 45348 | 95 | 7766 | 95 | -82% | -0% |
| Wake word (linear kernel) | 18098 | 86 | 3602 | 53 | -80% | -33% |
| Wake word (gaussian kernel) | 21788 | 90 | 4826 | 62 | -78% | -28% |
As you can see, the results are quite surprising:
- You can achieve up to 82% space reduction without any loss of accuracy on the high-dimensional dataset (gaussian kernel on the accelerometer motion data).
- Sometimes you may get poor accuracy (no more than 62% on the wake-word dataset).
As always, you should test which of the two algorithms gives the best result for your use case, but there are a few guidelines you can follow:
- If you need the highest accuracy and have enough space, SVM will probably perform better.
- If you need the smallest footprint or top speed, test whether RVM achieves satisfactory accuracy.
- If both SVM and RVM achieve comparable performance, go with RVM: in most cases it is much lighter than SVM and runs faster.
For reference, here is the code generated for an SVM classifier and an RVM classifier trained on the IRIS dataset.
As you can see, the RVM computes only 2 kernels and performs 2 multiplications. The SVM computes 10 kernels and performs 13 multiplications.
This pattern recurs across models, so RVM is much faster at inference.