Ascend / pytorch
Content risk flag: this issue has been flagged as containing sensitive information such as code security bugs or privacy leaks, so it is not accessible to members outside the repository.
YOLOv11 inference with torch_npu fails after the NPU device is selected
#ICW0G4 · Bug · TODO
xingwang111 created this issue on 2025-09-02 16:42
1. Problem (error log attached below):
I am running YOLOv11 inference with torch_npu. If I select the NPU device with model.to("npu:0"), inference raises an error. If I do not select a device, there is no error, but inference is far too slow: a single image takes about 3 s to complete.

2. Software versions:
-- CANN version: 8.0.RC2.alpha003
-- TensorFlow/PyTorch/MindSpore version:
-- Python version: 3.11
-- torch / torch_npu versions: 2.1.0 / 2.1.0.post10

3. Test steps:
I ran the official yolov11s model on the NPU. Inference is too slow, about 3 s per image. The code is as follows:

```
# Copyright (c) 2025 Huawei Technologies Co., Ltd
# [Software Name] is licensed under Mulan PSL v2.
# You can use this software according to the terms and conditions of the Mulan PSL v2.
# You may obtain a copy of Mulan PSL v2 at:
#     http://license.coscl.org.cn/MulanPSL2
# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
# See the Mulan PSL v2 for more details.
import argparse
import torch
import torch_npu
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors, save_one_box
import time
import cv2


def main(args):
    # torch_npu.npu.set_compile_mode(jit_compile=False)
    torch.npu.set_compile_mode(jit_compile=False)
    torch_npu.npu.set_device(0)

    model = YOLO(args.pth)
    # Leaving this commented out avoids the error, but inference is then far too slow
    # model.to("npu:0")
    model.model.eval()
    print("Model loaded")

    cap = cv2.VideoCapture("./test2.mp4")
    # Check that the video opened successfully
    print("Video opened")
    if not cap.isOpened():
        print("Failed to open video file")

    is_obb = False
    print("Starting inference")
    while cap.isOpened():
        # Read the video frame by frame
        ret, frame = cap.read()
        # Check whether the frame was read successfully
        if not ret:
            print("End of video or read failure")
            break
        images = cv2.resize(frame, (640, 640))
        # results = model.predict(source=frame, conf=0.25, device=torch.device("npu:0"), half=False)
        results = model.predict(source=frame, conf=0.25, imgsz=640, half=False)
        # for result in results:
        #     pred_boxes = result.boxes  # Boxes object for bounding box outputs
        #     print(pred_boxes)
        #     for i, d in enumerate(reversed(pred_boxes)):
        #         box = d.xyxyxyxy.reshape(-1, 4, 2).squeeze() if is_obb else d.xyxy.squeeze()
        #         box = box.numpy()
        #         x1, y1, x2, y2 = box[0], box[1], box[2], box[3]
        #         # Draw the bounding box in green
        #         cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        # cv2.imshow("frame", frame)
        # cv2.waitKey(1)

    cap.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="yolov11 dataset convert")
    parser.add_argument("--pth", type=str, help="weight", default="yolo11s.pt")
    parser.add_argument("--dataset", type=str, help="dataset")
    parser.add_argument("--batchsize", type=int, help="batchsize")
    args = parser.parse_args()
    main(args)
```

This code runs to completion, but inference is slow; it feels as if the model is not running on the NPU at all.

If I uncomment model.to("npu:0"), it fails immediately:

```
Traceback (most recent call last):
  File "/home/app/Yolov11_for_PyTorch/infer.py", line 62, in <module>
    main(args)
  File "/home/app/Yolov11_for_PyTorch/infer.py", line 43, in main
    results = model.predict(source=frame, conf=0.25, imgsz=640, half=False)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/app/Yolov11_for_PyTorch/ultralytics/engine/model.py", line 553, in predict
    self.predictor.setup_model(model=self.model, verbose=is_cli)
  File "/home/app/Yolov11_for_PyTorch/ultralytics/engine/predictor.py", line 310, in setup_model
    self.model = AutoBackend(
                 ^^^^^^^^^^^^
  File "/root/miniconda3/envs/yolo11/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/app/Yolov11_for_PyTorch/ultralytics/nn/autobackend.py", line 152, in __init__
    model = model.fuse(verbose=verbose)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/app/Yolov11_for_PyTorch/ultralytics/nn/tasks.py", line 206, in fuse
    m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/app/Yolov11_for_PyTorch/ultralytics/utils/torch_utils.py", line 261, in fuse_conv_and_bn
    w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
                                               ~~~~~~~^~~~~~~~~~~~~~~~
RuntimeError: call aclnnAdds failed, detail:EZ9999: Inner Error!
EZ9999: 2025-09-02-16:41:22.122.179 Parse dynamic kernel config fail.
        TraceBack (most recent call last):
        AclOpKernelInit failed opType
        Op Add does not has any binary.
        Kernel Run failed. opType: 3, Add launch failed for Add, errno:561000.

[ERROR] 2025-09-02-16:41:22 (PID:3108812, Device:0, RankID:-1) ERR01100 OPS call acl api failed
```
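The traceback bottoms out in a plain elementwise Add inside fuse_conv_and_bn, and the runtime reports "Op Add does not has any binary", which may point at the CANN/torch_npu installation rather than at the YOLO code itself. A minimal check like the sketch below (not part of the original report; it assumes the torch 2.1.0 / torch_npu 2.1.0.post10 / CANN 8.0.RC2.alpha003 environment listed above) exercises the same Add path with Ultralytics out of the loop:

```
# Diagnostic sketch, not part of the original report: run a plain tensor Add on
# the NPU to see whether the aclnnAdds failure reproduces outside Ultralytics.
import torch
import torch_npu  # registers the "npu" device type with PyTorch

print(torch_npu.npu.is_available())   # expected True when the driver/CANN stack is usable
torch_npu.npu.set_device(0)

x = torch.ones(3, 3).to("npu:0")
y = x + 1.0                           # a scalar Add, the op family the error log says has no kernel binary
print(y.cpu())
```

If this small script raises the same aclnnAdds/EZ9999 error, the problem lies in the CANN or torch_npu installation; if it passes, the problem is more likely in how the YOLO model is moved to the device.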
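Separately, the roughly 3 s per frame in the slow path would be consistent with the model still sitting on the CPU. A rough way to check that, sketched below, is to print where the weights actually live and to pass the device through predict() as in the commented-out line of the repro. The image path "test.jpg" is a placeholder, and device=torch.device("npu:0") is an assumption carried over from that commented-out line; whether the ultralytics code under /home/app/Yolov11_for_PyTorch accepts an NPU device there is exactly what this report is about.

```
# Sketch only: check model placement and time a single inference call.
# "test.jpg" is a placeholder image path; the device argument mirrors the
# commented-out predict() call in the repro above and is not a confirmed fix.
import time

import torch
import torch_npu
from ultralytics import YOLO

model = YOLO("yolo11s.pt")
print(next(model.model.parameters()).device)   # "cpu" here would explain ~3 s per frame

start = time.time()
results = model.predict(source="test.jpg", conf=0.25, imgsz=640,
                        device=torch.device("npu:0"), half=False)
print(f"first call took {time.time() - start:.2f} s (includes warm-up)")
print(next(model.model.parameters()).device)   # should report npu:0 if placement succeeded
```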
Comments (11)
Status: TODO
Assignee: not set
Labels: not set
Project: no project
Milestone: no milestone
Pull Requests: none linked
Branch: none linked