
sudo apt install gcc-9 g++-9
-DCMAKE_C_COMPILER=gcc-9 -DCMAKE_CXX_COMPILER=g++-9
…ries can be downloaded from https://github.com/ARM-software/armnn/releases/download/v21.11/ArmNN-l…
sudo apt install python3-opencv
If not, our build system has a script to download and cross-compile the required OpenCV modules
If no OpenCV libraries were found, the cross-compilation build is extended with x264, ffmpeg and Op…
The application links against the TensorFlow Lite library libtensorflow-lite.a
sudo apt install build-essential
cmake ../lite -DTFLITE_ENABLE_XNNPACK=OFF
cd flatbuffers-2.0.6
cmake .. -DCMAKE_INSTALL_PREFIX:PATH=`pwd`
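For reference, the surrounding FlatBuffers build steps likely look something like this (the download step and the in-tree `build` directory are assumptions):

```bash
tar xf flatbuffers-2.0.6.tar.gz      # assumes the v2.0.6 source tarball is present
cd flatbuffers-2.0.6
mkdir build && cd build              # assumed build directory
cmake .. -DCMAKE_INSTALL_PREFIX:PATH=`pwd`
make install                         # installs headers and libflatbuffers under the prefix
```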
* cross-compilation for an Arm-based target platform.
* CMAKE_TOOLCHAIN_FILE - choose one of the available cross-compilation toolchain files:
  * `cmake/aarch64-toolchain.cmake`
  * `cmake/arm-linux-gnueabihf-toolchain.cmake`
* ARMNN_LIB_DIR - point to the custom location of the Arm NN libs and headers.
* OPENCV_LIB_DIR - point to the custom location of the OpenCV libs and headers.
* BUILD_UNIT_TESTS - set to `1` to build tests. In addition to the main application, `object_dete…
* USE_ARMNN_DELEGATE - set to `True` to build the application with TfLite and delegate file mode. def…
* TFLITE_LIB_ROOT - point to the custom location of the TfLite lib
* TENSORFLOW_ROOT - point to the custom location of the TensorFlow root directory
* FLATBUFFERS_ROOT - point to the custom location of the FlatBuffers root directory
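To illustrate, a build configured with a toolchain file and unit tests enabled might be invoked like this (paths are placeholders):

```bash
cmake -DCMAKE_TOOLCHAIN_FILE=cmake/aarch64-toolchain.cmake \
      -DARMNN_LIB_DIR=/path/to/armnn \
      -DBUILD_UNIT_TESTS=1 ..
```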
sudo apt-get update
sudo apt-get -yq install pkg-config
sudo apt-get -yq install libgtk2.0-dev zlib1g-dev libjpeg-dev libpng-dev libxvidcore-dev libx264-dev
sudo apt-get -yq install libavcodec-dev libavformat-dev libswscale-dev ocl-icd-opencl-dev
* object_detection_example - application executable
cmake -DARMNN_LIB_DIR=/path/to/armnn -DOPENCV_LIB_DIR=/path/to/opencv ..
cmake -DARMNN_LIB_DIR=/path/to/armnn/build/lib/ -DUSE_ARMNN_DELEGATE=True -DTFLITE_LIB_ROOT=/path/t…
-DTENSORFLOW_ROOT=/path/to/tensorflow/ -DFLATBUFFERS_ROOT=/path/to/flatbuffers/ ..
### Cross-compilation
This section explains how to cross-compile the application and its dependencies on a Linux x86 mach…
You will require a working cross-compilation toolchain supported by your host platform. For Raspberry…
* https://releases.linaro.org/components/toolchain/binaries/latest-7/aarch64-linux-gnu/
* https://releases.linaro.org/components/toolchain/binaries/latest-7/arm-linux-gnueabihf/
Choose aarch64-linux-gnu if the `lscpu` command shows the architecture as aarch64, or arm-linux-gnueabihf if…
ldd --version
sudo apt-get update
sudo apt-get -yq install pkg-config
cmake -DARMNN_LIB_DIR=<path-to-armnn-libs> -DCMAKE_TOOLCHAIN_FILE=cmake/arm-linux-gnueabihf-toolcha…
cmake -DARMNN_LIB_DIR=<path-to-armnn-libs> -DCMAKE_TOOLCHAIN_FILE=cmake/aarch64-toolchain.cmake ..
Add the `-j` flag to the make command to run compilation in multiple threads.
* bin directory - contains the object_detection_example executable,
* lib directory - contains the cross-compiled OpenCV, ffmpeg and x264 libraries,
The full list of libraries to copy to your board after cross-compilation:
libtensorflow-lite.a
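One way to get the cross-compiled artifacts onto the board is scp; the user, address and destination below are placeholders, not values from this guide:

```bash
# placeholder user/host/destination; adjust to your board
scp build/bin/object_detection_example user@<board-ip>:/home/user/object_detection/
scp build/lib/*.so* user@<board-ip>:/home/user/object_detection/lib/
```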
* --video-file-path: Path to the video file to run object detection on **[REQUIRED]**
* --model-file-path: Path to the Object Detection model to use **[REQUIRED]**
* --label-path: Path to the label set for the provided model file **[REQUIRED]**
* --model-name: The name of the model being used. Accepted options: SSD_MOBILE | YOLO_V3_TINY **[RE…
* --output-video-file-path: Path to the output video file with detections added in. Defaults to /tm…
* --preferred-backends: Takes the preferred backends in preference order, separated by commas.
* --profiling_enabled: Enabling this option prints timing information for important ML-related
  milestones in microseconds. By default, this option is disabled.
LD_LIBRARY_PATH=/path/to/armnn/libs:/path/to/opencv/libs ./object_detection_example --label-path /p…
--video-file-path /path/to/video/file --model-file-path /path/to/model/file
--model-name [YOLO_V3_TINY | SSD_MOBILE] --output-video-file-path /path/to/output/file
LD_LIBRARY_PATH=/path/to/armnn/libs:/path/to/opencv/libs ./object_detection_example --label-path /p…
--video-file-path /path/to/video/file --model-file-path /path/to/model/file
--model-name [YOLO_V3_TINY | SSD_MOBILE]
* https://github.com/ARM-software/ML-zoo/tree/master/models/object_detection/ssd_mobilenet_v1
* https://github.com/ARM-software/ML-zoo/tree/master/models/object_detection/yolo_v3_tiny
---
1. Pre-processing the Captured Frame
label colour, which is ordered according to the object class index at the output node of the model. Lab…
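A minimal sketch of that colour-by-class-index idea, with a hypothetical `palette` (the real example defines its own colour list):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Hypothetical palette; the order must match the class index order
// at the model's output node.
const std::vector<cv::Scalar> palette = { {0, 0, 255}, {0, 255, 0}, {255, 0, 0} };

cv::Scalar ColourForClass(size_t classIndex)
{
    return palette[classIndex % palette.size()]; // wrap if more classes than colours
}
```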
armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(modelPath.c_str());
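For context, the parser used here comes from the Arm NN TfLite parser factory; a minimal sketch, assuming `modelPath` holds the path to a .tflite file:

```cpp
#include "armnnTfLiteParser/ITfLiteParser.hpp"

// Create the TfLite parser, then parse the model file into an INetwork.
armnnTfLiteParser::ITfLiteParserPtr parser = armnnTfLiteParser::ITfLiteParser::Create();
armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(modelPath.c_str());
```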
backend-specific optimizations. The backends are identified by a string unique to the backend,
device with `LoadNetwork()`. This function creates the backend-specific workloads
m_Runtime->GetDeviceSpec(),
runtime->LoadNetwork(0, std::move(optNet), errorMessage));
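Putting these fragments together, an end-to-end optimize-and-load sketch might look like this (the backend choice and error handling are assumptions):

```cpp
#include "armnn/ArmNN.hpp"

// Create a runtime, optimize the parsed network for the preferred backends,
// then load the optimized network onto the device.
armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

std::vector<armnn::BackendId> backends = { armnn::Compute::CpuAcc, armnn::Compute::CpuRef };
armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(*network,
                                                     backends,
                                                     runtime->GetDeviceSpec());

armnn::NetworkId networkId = 0;
std::string errorMessage;
runtime->LoadNetwork(networkId, std::move(optNet), errorMessage);
```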
std::vector<std::string> inputNames = parser->GetSubgraphInputTensorNames(0);
auto inputBindingInfo = parser->GetNetworkInputBindingInfo(0, inputNames[0]);
m_interpreter->AllocateTensors();
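For the TfLite delegate path, the interpreter whose tensors are allocated above is typically built as follows (a sketch, again assuming `modelPath`):

```cpp
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Load the .tflite flatbuffer and build an interpreter for it.
auto model = tflite::FlatBufferModel::BuildFromFile(modelPath.c_str());
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
interpreter->AllocateTensors();
```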
In this example we enable the fast math optimization and reduce all Float32 operators to Float16.
/* enable fast math optimization */
m_interpreter->ModifyGraphWithDelegate(std::move(theArmnnDelegate));
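A sketch of how the delegate itself might be created and attached, based on the public Arm NN delegate API (the two-argument `DelegateOptions` constructor is an assumption):

```cpp
#include <armnn_delegate.hpp>

// Prefer the Arm NN CPU accelerated backend.
std::vector<armnn::BackendId> backends = { armnn::Compute::CpuAcc };

// Request the fast math and Fp32-to-Fp16 reduction optimizations.
armnn::OptimizerOptions optimizerOptions;
optimizerOptions.m_ReduceFp32ToFp16 = true;
optimizerOptions.m_ModelOptions.push_back(
    armnn::BackendOptions("CpuAcc", { { "FastMathEnabled", true } }));

armnnDelegate::DelegateOptions delegateOptions(backends, optimizerOptions);
std::unique_ptr<TfLiteDelegate, decltype(&armnnDelegate::TfLiteArmnnDelegateDelete)>
    theArmnnDelegate(armnnDelegate::TfLiteArmnnDelegateCreate(delegateOptions),
                     armnnDelegate::TfLiteArmnnDelegateDelete);
m_interpreter->ModifyGraphWithDelegate(std::move(theArmnnDelegate));
```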
The generic object detection pipeline has three steps: perform data pre-processing, run inference and de…
in the post-processing step.
#### Pre-processing the Captured Frame
objectDetectionPipeline->PreProcessing(frame, processed);
The pre-processing step consists of resizing the frame to the required resolution, padding and doing…
The pre-processing step returns a `cv::Mat` object containing data ready for inference.
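To illustrate the idea (not the example's exact code), a resize-with-padding step in OpenCV might look like this:

```cpp
#include <opencv2/imgproc.hpp>
#include <algorithm>

// Scale the frame to fit the model input, then pad the remainder with zeros.
cv::Mat ResizeWithPad(const cv::Mat& frame, int inputWidth, int inputHeight)
{
    double scale = std::min(inputWidth  / static_cast<double>(frame.cols),
                            inputHeight / static_cast<double>(frame.rows));
    cv::Mat resized;
    cv::resize(frame, resized, cv::Size(), scale, scale);

    cv::Mat padded = cv::Mat::zeros(inputHeight, inputWidth, frame.type());
    resized.copyTo(padded(cv::Rect(0, 0, resized.cols, resized.rows)));
    return padded;
}
```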
objectDetectionPipeline->Inference(processed, results);
//outputTensors were pre-allocated before
runtime->EnqueueWorkload(0, inputTensors, outputTensors);
from the pre-allocated output data buffer.
if (m_interpreter->Invoke() == kTfLiteOk)
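After `Invoke()` succeeds, results are read back from the interpreter's output tensors; a minimal sketch, where the output index and element type are model-dependent assumptions:

```cpp
if (m_interpreter->Invoke() == kTfLiteOk)
{
    // First output tensor, interpreted as floats; shape handling is model-specific.
    const float* outputData = m_interpreter->typed_output_tensor<float>(0);
    // ... hand outputData to the post-processing step
}
```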
For YOLO V3 Tiny models, we decode the output and perform non-maximum suppression to filter out any…
below a confidence threshold and any redundant bounding boxes above an intersection-over-union thre…
You are encouraged to experiment with the threshold values for confidence and intersection-over-union (Io…
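For reference, the intersection-over-union used by the suppression step is just the overlap area divided by the union area; a generic sketch (not the example's exact types):

```cpp
#include <algorithm>

// Axis-aligned box given by top-left corner plus width/height.
struct Box { float x, y, w, h; };

float Iou(const Box& a, const Box& b)
{
    const float x1 = std::max(a.x, b.x);
    const float y1 = std::max(a.y, b.y);
    const float x2 = std::min(a.x + a.w, b.x + b.w);
    const float y2 = std::min(a.y + a.h, b.y + b.h);
    const float inter = std::max(0.0f, x2 - x1) * std::max(0.0f, y2 - y1);
    const float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}
```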
The post-processing step accepts a callback function to be invoked when the decoding is finished. We wi…
//results - inference output
objectDetectionPipeline->PostProcessing(results, [&frame, &labels](od::DetectedObjects detects) -> …
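Inside such a callback the detections are typically drawn back onto the frame; a hedged sketch, where the accessor names on the detection objects are assumptions rather than the example's real API:

```cpp
for (const auto& object : detects)
{
    // GetBoundingBox()/GetLabel() are assumed names for illustration only.
    cv::Rect box = object.GetBoundingBox();
    cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
    cv::putText(frame, object.GetLabel(), box.tl(),
                cv::FONT_HERSHEY_SIMPLEX, 0.6, cv::Scalar(0, 255, 0));
}
```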