# inference-server-torch

**Repository Path**: daryl6/inference-server-torch

## Basic Information

- **Project Name**: inference-server-torch
- **Description**: An inference server based on TorchServe
- **Primary Language**: Python
- **License**: LGPL-3.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-10-12
- **Last Updated**: 2022-11-06

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# Inference Server

``` shell
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

# Start the container
docker run --rm --name torchserve-cpu-dev -it \
    -p 8080:8080 -p 8181:8081 -p 8082:8082 -p 7070:7070 -p 7071:7071 \
    -v $PWD/model-store:/home/model-server/model-store \
    pytorch/torchserve:latest-cpu

# Start the service (note: point --ts-config at the patched copy,
# otherwise the appended option never takes effect)
cp config.properties config && echo install_py_dep_per_model=true >> config
torchserve --start --model-store model-store/ --models \
    nnunet.3d_fullres=nnunet.3d_fullres.mar mnist=mnist.mar \
    --ts-config config
```

## Exporting a model as TorchScript

See the example: models/mnist/mnist/save_as_torch_script.py (a minimal sketch of the idea appears at the end of this README).

## Model packaging

If `MAR-INF/MANIFEST.json` inside the .mar file has no `modelFile` field, the serialized model is loaded directly as TorchScript (see the archiver sketch at the end of this README).

## Service testing

``` bash
# Management API
curl http://localhost:18081/models/{name}
```
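Beyond the management call above, the inference API (default port 8080, which the `docker run` command maps through) can be exercised the same way; the sample image file name here is a hypothetical placeholder:

``` bash
# Inference API: upload a handwritten-digit image to the registered mnist model
curl http://localhost:8080/predictions/mnist -T test_digit.png
```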
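For the TorchScript export referenced above (models/mnist/mnist/save_as_torch_script.py), the following is a minimal sketch of what such a script does; the model definition and output file name are illustrative assumptions, not the repository's actual code:

``` python
import torch
import torch.nn as nn

# Hypothetical MNIST classifier; the repository's real model
# definition lives under models/mnist/mnist/.
class MnistNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, 10),
        )

    def forward(self, x):
        return self.net(x)

model = MnistNet()
model.eval()

# Trace the model with a dummy input to produce a TorchScript module,
# then serialize it so torch-model-archiver can package it.
example = torch.randn(1, 1, 28, 28)
scripted = torch.jit.trace(model, example)
scripted.save("mnist.pt")  # assumed output file name
```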
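To package that TorchScript file into a .mar, an invocation along these lines applies; per the packaging note above, omitting `--model-file` leaves `modelFile` out of `MAR-INF/MANIFEST.json`, so TorchServe loads the serialized file via `torch.jit.load`. The choice of the built-in `image_classifier` handler is an assumption:

``` shell
pip install torch-model-archiver

# No --model-file: MANIFEST.json gets no modelFile entry, so the
# serialized .pt is treated as TorchScript rather than an eager model.
torch-model-archiver \
    --model-name mnist \
    --version 1.0 \
    --serialized-file mnist.pt \
    --handler image_classifier \
    --export-path model-store/
```

The resulting mnist.mar lands in model-store/, matching the `mnist=mnist.mar` registration in the startup command above.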