Lines Matching full:inference
371 3. Executing Inference
373 5. Decoding and Processing Inference Output
395 To interpret the result of running inference on the loaded network, we need to load…
454 Using the `Optimize()` function, we optimize the graph for inference and load the optimized network …
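To make the optimize-and-load step concrete, here is a minimal sketch assuming a TfLite model parsed with `armnnTfLiteParser`; the model path is a placeholder and error handling is omitted.

```C++
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>
#include <vector>

int main()
{
    // Parse the model file into an armnn::INetwork ("model.tflite" is a placeholder path)
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // Create the runtime that will own the optimized network
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

    // Optimize the graph for the preferred backends, falling back to the reference backend
    std::vector<armnn::BackendId> backends = { armnn::Compute::CpuAcc, armnn::Compute::CpuRef };
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, backends, runtime->GetDeviceSpec());

    // Load the optimized network; networkId identifies it for later EnqueueWorkload() calls
    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));
    return 0;
}
```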
544 …on pipeline has 3 steps: data pre-processing, running inference, and decoding the inference results
565 The pre-processing step returns a `cv::Mat` object containing data ready for inference.
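The exact pre-processing depends on the loaded model; the sketch below assumes an SSD-style network expecting a 300x300 RGB float input, so the size, colour conversion, and scaling are placeholders for whatever the model actually requires.

```C++
#include <opencv2/opencv.hpp>

// Produce a cv::Mat whose data is ready to be copied into the network input tensor
cv::Mat PreProcess(const cv::Mat& frame)
{
    cv::Mat resized;
    cv::resize(frame, resized, cv::Size(300, 300));        // scale to the model input size
    cv::cvtColor(resized, resized, cv::COLOR_BGR2RGB);     // OpenCV frames are BGR, most models expect RGB
    cv::Mat processed;
    resized.convertTo(processed, CV_32FC3, 1.0 / 255.0);   // convert to float and normalise to [0, 1]
    return processed;
}
```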
567 #### Executing Inference
571 objectDetectionPipeline->Inference(processed, results);
573 The inference step calls the `ArmnnNetworkExecutor::Run` method, which prepares the input tensors and exe…
580 ##### Executing Inference utilizing the Arm NN C++ API argument
581 A compute device performs inference for the loaded network using the `EnqueueWorkload()` function o…
590 … for output data once and map it to output tensor objects. After successful inference, we read data
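A minimal sketch of this flow is shown below, reusing the `parser`, `runtime`, and `networkId` from the optimization sketch above; the single input/output assumption and the float data type are simplifications, and error handling is omitted.

```C++
// Assumes 'parser', 'runtime' and 'networkId' from the optimization sketch above
// and a model with a single input and a single output (a simplification).
std::string inputName  = parser->GetSubgraphInputTensorNames(0)[0];
std::string outputName = parser->GetSubgraphOutputTensorNames(0)[0];
auto inputBinding  = parser->GetNetworkInputBindingInfo(0, inputName);   // {LayerBindingId, TensorInfo}
auto outputBinding = parser->GetNetworkOutputBindingInfo(0, outputName);

armnn::TensorInfo inputInfo  = inputBinding.second;
armnn::TensorInfo outputInfo = outputBinding.second;
inputInfo.SetConstant(true); // newer Arm NN versions require input TensorInfos to be marked constant

std::vector<float> inputData(inputInfo.GetNumElements());   // filled by the pre-processing step
std::vector<float> outputData(outputInfo.GetNumElements()); // allocated once and reused for each frame

armnn::InputTensors  inputTensors  { { inputBinding.first,  armnn::ConstTensor(inputInfo, inputData.data()) } };
armnn::OutputTensors outputTensors { { outputBinding.first, armnn::Tensor(outputInfo, outputData.data()) } };

// Execute inference on the chosen compute device; on success the results are in outputData
if (runtime->EnqueueWorkload(networkId, inputTensors, outputTensors) != armnn::Status::Success)
{
    // handle the failure
}
```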
595 ##### Executing Inference utilizing the TensorFlow Lite and Arm NN delegate file argument
597 then the TfLite Interpreter performs inference for the loaded network using the `Invoke()` function.
604 After successful inference, we read data from the TfLite Interpreter output tensor and copy
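The sketch below shows this delegate path end to end, assuming the classic Arm NN TfLite delegate API and a placeholder model path; the backend choice is an example and error checking is omitted.

```C++
#include <tensorflow/lite/interpreter.h>
#include <tensorflow/lite/kernels/register.h>
#include <tensorflow/lite/model.h>
#include <armnn_delegate.hpp>
#include <memory>

void RunWithArmnnDelegate()
{
    // Build the TfLite Interpreter for the model ("model.tflite" is a placeholder path)
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);

    // Create the Arm NN delegate (accelerated CPU backend as an example) and attach it,
    // so supported subgraphs run through Arm NN instead of the reference TfLite kernels
    armnnDelegate::DelegateOptions delegateOptions(armnn::Compute::CpuAcc);
    std::unique_ptr<TfLiteDelegate, decltype(&armnnDelegate::TfLiteArmnnDelegateDelete)>
        armnnTfLiteDelegate(armnnDelegate::TfLiteArmnnDelegateCreate(delegateOptions),
                            armnnDelegate::TfLiteArmnnDelegateDelete);
    interpreter->ModifyGraphWithDelegate(armnnTfLiteDelegate.get());
    interpreter->AllocateTensors();

    // Copy pre-processed frame data into the input tensor, then run inference
    float* input = interpreter->typed_input_tensor<float>(0);
    /* ... fill 'input' with pre-processed data ... */
    interpreter->Invoke();

    // Read the results back from the output tensor after a successful Invoke()
    float* output = interpreter->typed_output_tensor<float>(0);
    (void)input;
    (void)output;
}
```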
610 ##### Decoding and Processing Inference Output
611 The output from inference must be decoded to obtain information about detected objects in the frame…
634 //results - inference output
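As an illustration of the decoding step, here is a minimal sketch assuming an SSD-style model whose output buffers (boxes, classes, scores, detection count) have already been copied out of the output tensors; the structure name and the score threshold are placeholders.

```C++
#include <vector>

// One decoded detection: a normalised bounding box, a class label index and a confidence score
struct Detection { float yMin, xMin, yMax, xMax; int label; float score; };

std::vector<Detection> DecodeResults(const float* boxes, const float* classes,
                                     const float* scores, int numDetections,
                                     float threshold = 0.5f)
{
    std::vector<Detection> detections;
    for (int i = 0; i < numDetections; ++i)
    {
        if (scores[i] < threshold) { continue; }   // drop low-confidence boxes
        detections.push_back({ boxes[4 * i + 0], boxes[4 * i + 1],
                               boxes[4 * i + 2], boxes[4 * i + 3],
                               static_cast<int>(classes[i]), scores[i] });
    }
    return detections;                              // coordinates are normalised to [0, 1]
}
```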