# tvm-vta

**Repository Path**: cavin_sun/tvm-vta

## Basic Information

- **Project Name**: tvm-vta
- **Description**: VTA Hardware Design Stack, cloned from https://github.com/apache/tvm-vta.git
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-02-24
- **Last Updated**: 2022-02-17

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

VTA Hardware Design Stack
=========================

[![Build Status](https://ci.tlcpack.ai/job/tvm-vta/job/main/badge/icon)](https://ci.tlcpack.ai/job/tvm-vta/job/main/)

VTA (Versatile Tensor Accelerator) is an open-source deep learning accelerator complemented with an end-to-end, TVM-based compiler stack. The key features of VTA include:

- Generic, modular, open-source hardware
  - Streamlined workflow for deploying to FPGAs.
  - Simulator support for prototyping compilation passes on regular workstations.
- Driver and JIT runtime for both the simulator and FPGA hardware back-ends.
- End-to-end TVM stack integration
  - Direct optimization and deployment of models from deep learning frameworks via TVM.
  - Customized and extensible TVM compiler back-end.
  - Flexible RPC support to ease deployment and program FPGAs with the convenience of Python.