…sample rate. The top-level inference API is provided by the Arm NN library.
3. Executing Inference
5. Decoding and Processing Inference Output
Using the `Optimize()` function, we optimize the graph for inference and load the optimized network into memory to execute on the compute device with the `LoadNetwork()` function of the runtime context.
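As a rough sketch (not verbatim sample code), the optimize-and-load step with the Arm NN C++ API can look like the following; the backend list and the `network` variable, an `armnn::INetworkPtr` produced by a model parser, are assumptions for this example.

```c++
#include <armnn/ArmNN.hpp>

// Create the runtime context that owns backends and loaded networks.
armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

// `network` is assumed to be an armnn::INetworkPtr built by a parser.
std::vector<armnn::BackendId> backends = {armnn::Compute::CpuAcc,
                                          armnn::Compute::CpuRef};

// Optimize the graph for the preferred backends...
armnn::IOptimizedNetworkPtr optNet =
    armnn::Optimize(*network, backends, runtime->GetDeviceSpec());

// ...and load the optimized network into memory on the compute device.
armnn::NetworkId networkId;
runtime->LoadNetwork(networkId, std::move(optNet));
```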
The speech recognition pipeline has three steps to perform: data pre-processing, running inference, and decoding the inference results (see the sketch below).
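Purely as an illustration of how those three stages fit together per window of audio, here is a hypothetical driver loop; the `audioStream` object and the `PreProcessing`/`PostProcessing` method names are assumptions, not the sample's actual API.

```c++
// Hypothetical driver loop; names below are illustrative assumptions.
while (audioStream.HasNext())
{
    // 1. Pre-process a window of audio into model-ready features.
    std::vector<float> window = audioStream.Next();
    std::vector<int8_t> preprocessedData = asrPipeline->PreProcessing(window);

    // 2. Run inference on the pre-processed window.
    common::InferenceResults<int8_t> results;
    asrPipeline->Inference<int8_t>(preprocessedData, results);

    // 3. Decode the raw scores into recognised text.
    asrPipeline->PostProcessing<int8_t>(results, labels);
}
```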
…positioned window of data, sized appropriately for the given model, to pre-process before inference.
After all the MFCCs needed for an inference have been extracted from the audio data, we convolve them with 1-dimensional Savitzky-Golay filters to compute the first and second MFCC derivatives.
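To make the derivative computation concrete, here is a minimal sketch of a first-derivative estimate over one MFCC coefficient's time series, assuming a 5-point Savitzky-Golay kernel; the sample's actual window length and normalisation may differ.

```c++
#include <cstddef>
#include <vector>

// First-derivative estimate of a time series using the 5-point
// Savitzky-Golay kernel {-2, -1, 0, 1, 2} / 10 (assumed parameters).
std::vector<float> FirstDerivative(const std::vector<float>& mfcc)
{
    const float kernel[] = {-2.f, -1.f, 0.f, 1.f, 2.f};
    const float norm = 10.f; // sum of squared kernel taps
    std::vector<float> out(mfcc.size(), 0.f);
    for (std::size_t i = 2; i + 2 < mfcc.size(); ++i)
    {
        float acc = 0.f;
        for (std::size_t k = 0; k < 5; ++k)
        {
            acc += kernel[k] * mfcc[i + k - 2];
        }
        out[i] = acc / norm; // edges are left as zero in this sketch
    }
    return out;
}
```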
#### Executing Inference
```c++
asrPipeline->Inference<int8_t>(preprocessedData, results);
```
The inference step calls the `ArmnnNetworkExecutor::Run` method, which prepares the input tensors and executes the inference. A compute device performs inference for the loaded network using the `EnqueueWorkload()` function of the runtime context. We allocate memory for the output data once and map it to output tensor objects. After a successful inference, we read data from the output tensor objects.
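For illustration, the core of such a `Run` method using the Arm NN runtime API might look like the sketch below; the binding IDs (0) and the `inputData`/`outputData` buffers are assumptions for this example.

```c++
// Bind the pre-processed input buffer to the network's input slot.
armnn::InputTensors inputTensors = {
    {0, armnn::ConstTensor(runtime->GetInputTensorInfo(networkId, 0),
                           inputData.data())}};

// Map a pre-allocated, reusable output buffer to the output slot.
armnn::OutputTensors outputTensors = {
    {0, armnn::Tensor(runtime->GetOutputTensorInfo(networkId, 0),
                      outputData.data())}};

// Execute the loaded network on the compute device.
runtime->EnqueueWorkload(networkId, inputTensors, outputTensors);
```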
#### Decoding and Processing Inference Output
The output from the inference must be decoded to obtain the recognised characters from the speech.
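As a sketch of one common approach, greedy CTC-style decoding takes the highest-scoring label at each time step, collapses consecutive repeats, and drops the blank token; the sample's actual decoder may differ in detail.

```c++
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <string>
#include <vector>

// Greedy CTC-style decoding sketch: `scores` holds one row of label
// scores per time step; `labels` maps indices to characters.
std::string DecodeOutput(const std::vector<std::vector<float>>& scores,
                         const std::vector<std::string>& labels,
                         std::size_t blankIndex)
{
    std::string text;
    std::size_t prev = blankIndex;
    for (const auto& timestep : scores)
    {
        auto best = static_cast<std::size_t>(std::distance(
            timestep.begin(),
            std::max_element(timestep.begin(), timestep.end())));
        // Emit a character only when it is not blank and not a repeat.
        if (best != blankIndex && best != prev)
        {
            text += labels[best];
        }
        prev = best;
    }
    return text;
}
```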