Setup Intel OpenVINO and AWS Greengrass on Ubuntu
- How to install OpenVINO on Linux: https://software.intel.com/en-us/articles/OpenVINO-Install-Linux
- How OpenVINO works with AWS Greengrass: https://software.intel.com/en-us/articles/OpenVINO-IE-Samples#inpage-nav-16
- First, set up the conversion tool, Model Optimizer: https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
- Command: `source /bin/setupvars.sh`
- Command: `cd /deployment_tools/model_optimizer/install_prerequisites`
- Command: `sudo -E ./install_prerequisites.sh`
- Model Optimizer uses Python 3.5, whereas the Greengrass samples use Python 2.7. So that Model Optimizer does not affect the global Python configuration, activate a virtual environment as follows:
- Command: `sudo ./install_prerequisites.sh venv`
- Command: `cd /deployment_tools/model_optimizer`
- Command: `source venv/bin/activate`
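The isolation this venv step provides can be sketched generically. The path `/tmp/mo_venv_demo` below is purely illustrative and not part of the OpenVINO tree:

```shell
# Minimal sketch of virtualenv isolation: the interpreter prefix and any
# installed packages resolve inside the venv only while it is activated,
# so Model Optimizer's Python 3 dependencies never touch the global setup.
python3 -m venv /tmp/mo_venv_demo
. /tmp/mo_venv_demo/bin/activate
python -c 'import sys; print(sys.prefix)'   # prints /tmp/mo_venv_demo
deactivate
rm -rf /tmp/mo_venv_demo
```

After `deactivate`, `python` resolves back to the system interpreter, which is why the Greengrass samples' Python 2.7 setup stays untouched.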
- For CPU, models should be converted with data type FP32; for GPU/FPGA, data type FP16 gives the best performance.
- For classification using the BVLC AlexNet model:
  Command: `python mo.py --framework caffe --input_model /bvlc_alexnet.caffemodel --input_proto /deploy.prototxt --data_type --output_dir --input_shape [1,3,227,227]`
- For object detection using the SqueezeNetSSD-5Class model:
  Command: `python mo.py --framework caffe --input_model /SqueezeNetSSD-5Class.caffemodel --input_proto /SqueezeNetSSD-5Class.prototxt --data_type --output_dir`
- Here, the path prefix before each model file is the location where the user downloaded the models, the `--data_type` value is FP32 or FP16 depending on the target device, and the `--output_dir` value is the directory where the user wants to store the IR. The IR consists of an .xml file describing the network structure and a .bin file containing the weights. This .xml should be passed to the model parameter mentioned in the Configuring the Lambda Function section. In the BVLC AlexNet model, the prototxt defines the input shape with batch size 10 by default. To use any other batch size, the entire input shape must be provided as an argument to Model Optimizer. For example, for batch size 1, provide `--input_shape [1,3,227,227]`.
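Spelling out the dropped placeholders as shell variables, the AlexNet conversion might look like the sketch below. `MODELS_DIR`, `DATA_TYPE`, and `OUTPUT_DIR` are illustrative names of my own, not OpenVINO ones; adjust the paths to your setup before running:

```shell
# Hypothetical end-to-end AlexNet conversion, run from the
# model_optimizer directory with the venv activated.
MODELS_DIR=$HOME/models      # where bvlc_alexnet.caffemodel was downloaded (assumed)
DATA_TYPE=FP32               # FP32 for CPU targets, FP16 for GPU/FPGA
OUTPUT_DIR=$HOME/alexnet_ir  # where the .xml/.bin IR pair will be written (assumed)
python mo.py --framework caffe \
  --input_model "$MODELS_DIR/bvlc_alexnet.caffemodel" \
  --input_proto "$MODELS_DIR/deploy.prototxt" \
  --data_type "$DATA_TYPE" \
  --output_dir "$OUTPUT_DIR" \
  --input_shape [1,3,227,227]
```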
The Greengrass samples are in:
`/opt/intel/computer_vision_sdk/inference_engine/samples/python_samples/greengrass_samples/`
LD_LIBRARY_PATH:
`/opt/intel/computer_vision_`
PYTHONPATH:
`/opt/intel/computer_vision_`
PARAM_CPU_EXTENSION_PATH:
`/opt/intel/computer_vision_`
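Tying these together, the Lambda environment might be set up as below. Only the sample directory path is confirmed by the text above; everything after `computer_vision_` in the three exports was truncated in the page, so those suffixes are guesses and must be checked against your local install tree:

```shell
# Hypothetical environment setup before launching the Greengrass sample.
# The suffixes after computer_vision_ below are guesses, not verified values.
export LD_LIBRARY_PATH=/opt/intel/computer_vision_sdk/inference_engine/lib            # guessed suffix
export PYTHONPATH=/opt/intel/computer_vision_sdk/python                               # guessed suffix
export PARAM_CPU_EXTENSION_PATH=/opt/intel/computer_vision_sdk/inference_engine/lib/libcpu_extension.so  # guessed suffix
cd /opt/intel/computer_vision_sdk/inference_engine/samples/python_samples/greengrass_samples/
```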