src-anolis-ai / openvino
!1 Initial OpenVINO anolis 23 package

Status: Merged
Branches: aubreyli:a23 → src-anolis-ai:a23
Author: aubreyli, created 2024-09-05 14:09
Add initial openvino package files, version 2024.3.0

```
[root@1fd2ea1fbf8c /]# cat /etc/os-release
NAME="Anolis OS"
VERSION="23.1"
ID="anolis"
VERSION_ID="23.1"
PLATFORM_ID="platform:an23.1"
PRETTY_NAME="Anolis OS 23.1"
ANSI_COLOR="0;31"
HOME_URL="https://openanolis.cn/"
BUG_REPORT_URL="https://bugzilla.openanolis.cn/"
```

Test results:

- Query devices

```
[root@ea11229aa5ca Release]# ./hello_query_device
[ INFO ] Build ................................. 2024.3.0-1-1e3b88e4e3f
[ INFO ]
[ INFO ] Available devices:
[ INFO ] CPU
[ INFO ]     SUPPORTED_PROPERTIES:
[ INFO ]         Immutable: AVAILABLE_DEVICES : ""
[ INFO ]         Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ]         Immutable: RANGE_FOR_STREAMS : 1 8
[ INFO ]         Immutable: EXECUTION_DEVICES : CPU
[ INFO ]         Immutable: FULL_DEVICE_NAME : 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
[ INFO ]         Immutable: OPTIMIZATION_CAPABILITIES : WINOGRAD FP32 INT8 BIN EXPORT_IMPORT
[ INFO ]         Immutable: DEVICE_TYPE : integrated
[ INFO ]         Immutable: DEVICE_ARCHITECTURE : intel64
[ INFO ]         Mutable: NUM_STREAMS : 1
[ INFO ]         Mutable: INFERENCE_NUM_THREADS : 0
[ INFO ]         Mutable: PERF_COUNT : NO
[ INFO ]         Mutable: INFERENCE_PRECISION_HINT : f32
[ INFO ]         Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ]         Mutable: EXECUTION_MODE_HINT : PERFORMANCE
[ INFO ]         Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ]         Mutable: ENABLE_CPU_PINNING : YES
[ INFO ]         Mutable: SCHEDULING_CORE_TYPE : ANY_CORE
[ INFO ]         Mutable: MODEL_DISTRIBUTION_POLICY : ""
[ INFO ]         Mutable: ENABLE_HYPER_THREADING : YES
[ INFO ]         Mutable: DEVICE_ID : ""
[ INFO ]         Mutable: CPU_DENORMALS_OPTIMIZATION : NO
[ INFO ]         Mutable: LOG_LEVEL : LOG_NONE
[ INFO ]         Mutable: CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE : 1
[ INFO ]         Mutable: DYNAMIC_QUANTIZATION_GROUP_SIZE : 32
[ INFO ]         Mutable: KV_CACHE_PRECISION : f16
[ INFO ]         Mutable: AFFINITY : CORE
[ INFO ]
[ INFO ] GPU
[ INFO ]     SUPPORTED_PROPERTIES:
[ INFO ]         Immutable: AVAILABLE_DEVICES : 0
[ INFO ]         Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 2 1
[ INFO ]         Immutable: RANGE_FOR_STREAMS : 1 2
[ INFO ]         Immutable: OPTIMAL_BATCH_SIZE : 1
[ INFO ]         Immutable: MAX_BATCH_SIZE : 1
[ INFO ]         Immutable: DEVICE_ARCHITECTURE : GPU: vendor=0x8086 arch=v12.0.0
[ INFO ]         Immutable: FULL_DEVICE_NAME : Intel(R) Iris(R) Xe Graphics (iGPU)
[ INFO ]         Immutable: DEVICE_UUID : 8680499a010000000002000000000000
[ INFO ]         Immutable: DEVICE_LUID : 90c10c46fd7f0000
[ INFO ]         Immutable: DEVICE_TYPE : integrated
[ INFO ]         Immutable: DEVICE_GOPS : {f16:4147.2,f32:2073.6,i8:8294.4,u8:8294.4}
[ INFO ]         Immutable: OPTIMIZATION_CAPABILITIES : FP32 BIN FP16 INT8 EXPORT_IMPORT
[ INFO ]         Immutable: GPU_DEVICE_TOTAL_MEM_SIZE : 30785712128
[ INFO ]         Immutable: GPU_UARCH_VERSION : 12.0.0
[ INFO ]         Immutable: GPU_EXECUTION_UNITS_COUNT : 96
[ INFO ]         Immutable: GPU_MEMORY_STATISTICS : ""
[ INFO ]         Mutable: PERF_COUNT : NO
[ INFO ]         Mutable: MODEL_PRIORITY : MEDIUM
[ INFO ]         Mutable: GPU_HOST_TASK_PRIORITY : MEDIUM
[ INFO ]         Mutable: GPU_QUEUE_PRIORITY : MEDIUM
[ INFO ]         Mutable: GPU_QUEUE_THROTTLE : MEDIUM
[ INFO ]         Mutable: GPU_ENABLE_SDPA_OPTIMIZATION : YES
[ INFO ]         Mutable: GPU_ENABLE_LOOP_UNROLLING : YES
[ INFO ]         Mutable: GPU_DISABLE_WINOGRAD_CONVOLUTION : NO
[ INFO ]         Mutable: CACHE_DIR : ""
[ INFO ]         Mutable: CACHE_MODE : optimize_speed
[ INFO ]         Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ]         Mutable: EXECUTION_MODE_HINT : PERFORMANCE
[ INFO ]         Mutable: COMPILATION_NUM_THREADS : 8
[ INFO ]         Mutable: NUM_STREAMS : 1
[ INFO ]         Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ]         Mutable: INFERENCE_PRECISION_HINT : f16
[ INFO ]         Mutable: ENABLE_CPU_PINNING : NO
[ INFO ]         Mutable: DEVICE_ID : 0
[ INFO ]         Mutable: DYNAMIC_QUANTIZATION_GROUP_SIZE : 0
[ INFO ]
```

- benchmark on CPU

```
[root@ea11229aa5ca Release]# ./benchmark_app -m resnet-v1-50.xml -niter 1 -d CPU -hint latency
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2024.3.0-1-1e3b88e4e3f
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2024.3.0-1-1e3b88e4e3f
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 7.02 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     data (node: data) : f32 / [...] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     prob (node: prob) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[ WARNING ] data: layout is not set explicitly, so it is defaulted to NCHW. It is STRONGLY recommended to set layout manually to avoid further issues.
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     data (node: data) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     prob (node: prob) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 105.04 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: main_graph
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 1
[ INFO ]   NUM_STREAMS: 1
[ INFO ]   INFERENCE_NUM_THREADS: 4
[ INFO ]   PERF_COUNT: NO
[ INFO ]   INFERENCE_PRECISION_HINT: f32
[ INFO ]   PERFORMANCE_HINT: LATENCY
[ INFO ]   EXECUTION_MODE_HINT: PERFORMANCE
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   ENABLE_CPU_PINNING: YES
[ INFO ]   SCHEDULING_CORE_TYPE: ANY_CORE
[ INFO ]   MODEL_DISTRIBUTION_POLICY:
[ INFO ]   ENABLE_HYPER_THREADING: NO
[ INFO ]   EXECUTION_DEVICES: CPU
[ INFO ]   CPU_DENORMALS_OPTIMIZATION: NO
[ INFO ]   LOG_LEVEL: LOG_NONE
[ INFO ]   CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1
[ INFO ]   DYNAMIC_QUANTIZATION_GROUP_SIZE: 32
[ INFO ]   KV_CACHE_PRECISION: f16
[ INFO ]   AFFINITY: CORE
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] data ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 1 inference requests, limits: 1 iterations)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 20.66 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ CPU ]
[ INFO ] Count:      1 iterations
[ INFO ] Duration:   19.65 ms
[ INFO ] Latency:
[ INFO ]    Median:  19.65 ms
[ INFO ]    Average: 19.65 ms
[ INFO ]    Min:     19.65 ms
[ INFO ]    Max:     19.65 ms
[ INFO ] Throughput: 50.89 FPS
```

- benchmark on GPU

```
[root@ea11229aa5ca Release]# ./benchmark_app -m resnet-v1-50.xml -niter 1 -d GPU -hint latency
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2024.3.0-1-1e3b88e4e3f
[ INFO ]
[ INFO ] Device info:
[ INFO ] GPU
[ INFO ] Build ................................. 2024.3.0-1-1e3b88e4e3f
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 6.65 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     data (node: data) : f32 / [...] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     prob (node: prob) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[ WARNING ] data: layout is not set explicitly, so it is defaulted to NCHW. It is STRONGLY recommended to set layout manually to avoid further issues.
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     data (node: data) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     prob (node: prob) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 380.66 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: main_graph
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 1
[ INFO ]   PERF_COUNT: NO
[ INFO ]   ENABLE_CPU_PINNING: NO
[ INFO ]   MODEL_PRIORITY: MEDIUM
[ INFO ]   GPU_HOST_TASK_PRIORITY: MEDIUM
[ INFO ]   GPU_QUEUE_PRIORITY: MEDIUM
[ INFO ]   GPU_QUEUE_THROTTLE: MEDIUM
[ INFO ]   GPU_ENABLE_LOOP_UNROLLING: YES
[ INFO ]   GPU_DISABLE_WINOGRAD_CONVOLUTION: NO
[ INFO ]   CACHE_DIR:
[ INFO ]   CACHE_MODE: optimize_speed
[ INFO ]   PERFORMANCE_HINT: LATENCY
[ INFO ]   EXECUTION_MODE_HINT: PERFORMANCE
[ INFO ]   COMPILATION_NUM_THREADS: 8
[ INFO ]   NUM_STREAMS: 1
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   INFERENCE_PRECISION_HINT: f16
[ INFO ]   DEVICE_ID: 0
[ INFO ]   EXECUTION_DEVICES: GPU.0
[ INFO ]   DYNAMIC_QUANTIZATION_GROUP_SIZE: 0
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] data ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 1 inference requests, limits: 1 iterations)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 7.75 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ GPU.0 ]
[ INFO ] Count:      1 iterations
[ INFO ] Duration:   6.52 ms
[ INFO ] Latency:
[ INFO ]    Median:  6.51 ms
[ INFO ]    Average: 6.51 ms
[ INFO ]    Min:     6.51 ms
[ INFO ]    Max:     6.51 ms
[ INFO ] Throughput: 153.47 FPS
```
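The throughput figures in the Step 11 reports follow directly from the iteration count and duration (FPS = iterations / duration). A quick sanity check against the numbers in the logs above; the values are hard-coded from the two runs, and a small discrepancy is expected because benchmark_app prints the duration rounded to 0.01 ms:

```python
# Sanity-check benchmark_app throughput: FPS = iterations / duration (seconds).
# Values copied from the Step 11 reports in the test logs above.
runs = {
    "CPU": {"iterations": 1, "duration_ms": 19.65, "reported_fps": 50.89},
    "GPU": {"iterations": 1, "duration_ms": 6.52, "reported_fps": 153.47},
}

for device, r in runs.items():
    fps = r["iterations"] / (r["duration_ms"] / 1000.0)
    # Allow ~1% slack for the rounding of the printed duration.
    assert abs(fps - r["reported_fps"]) / r["reported_fps"] < 0.01
    print(f"{device}: computed {fps:.2f} FPS vs reported {r['reported_fps']} FPS")
```

With a single iteration (`-niter 1`), throughput is just the reciprocal of the one measured latency, so the CPU figure matches exactly and the GPU figure agrees to within rounding.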
This Pull Request must pass the following review checks:

| Type | Assignees | Status |
| --- | --- | --- |
| Review | Meredith (approved), forrest_ly (approved) | Completed (2 of 1 required) |
| Test | Meredith (test passed), forrest_ly (test passed) | Completed (2 of 1 required) |
How to merge this Pull Request manually:

```
git checkout a23
git pull https://gitee.com/aubrey-intel/openvino.git a23
git push origin a23
```
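The three commands above assume a local clone with `origin` pointing at the upstream repository and an existing `a23` branch. The same flow can be rehearsed safely with throwaway local repositories standing in for the two remotes (all paths, user settings, and commit messages below are illustrative, not the real Gitee repos):

```shell
#!/bin/sh
# Rehearse the manual-merge flow using local stand-ins for the two remotes.
set -e
tmp=$(mktemp -d)

# Stand-in for the upstream repo ("origin") with an a23 branch.
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email "test@example.com"
git config user.name "test"
git commit -q --allow-empty -m "base"
git branch -M a23
git remote add origin "$tmp/origin.git"
git push -q origin a23
# Point the bare repo's HEAD at a23 so clones check it out by default.
git -C "$tmp/origin.git" symbolic-ref HEAD refs/heads/a23

# Stand-in for the contributor's fork with one extra commit on a23.
git clone -q "$tmp/origin.git" "$tmp/fork"
cd "$tmp/fork"
git config user.email "test@example.com"
git config user.name "test"
git commit -q --allow-empty -m "pr change"

# The documented flow, run from the maintainer's working copy.
cd "$tmp/work"
git checkout -q a23
git -c pull.ff=only pull -q "$tmp/fork" a23
git push -q origin a23

# Confirm the PR commit reached origin's a23 branch.
result=$(git -C "$tmp/origin.git" log --format=%s -n 1 a23)
echo "$result"
```

Because the fork's `a23` is a descendant of upstream's `a23`, the pull fast-forwards and the final log line on origin is the contributor's commit.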
Comments (25) · Commits (6) · Files (2) · Code issues (0)
Review
- Code Owner: not set
- Reviewers: forrest_ly, Lvcongqing (HelloWorld_lvcongqing), aubreyli (aubrey-intel), Meredith (yueeranna)
- Minimum reviewers: 1

Test
- Testers: forrest_ly, Lvcongqing (HelloWorld_lvcongqing), aubreyli (aubrey-intel), Meredith (yueeranna)
- Minimum testers: 1
Priority: not specified
Labels: anolis_cla_pass, anolis_test_fail
Linked Issues: none (linked Issues are closed when the Pull Request is merged)
Milestone: none
Participants: 3