YOLOX is a high-performance anchor-free YOLO detector that exceeds YOLOv3 through YOLOv5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support. Documentation: https://yolox.readthedocs.io/
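Since YOLOX supports ONNX export, one quick way to sanity-check an exported model is to run it through ONNX Runtime. The sketch below is illustrative only: the file name yolox_s.onnx and the 640x640 input size are assumptions, and real inference still needs YOLOX's own preprocessing and postprocessing (box decoding, NMS) as described in its documentation.

```python
# Minimal sketch: run a YOLOX model exported to ONNX with ONNX Runtime.
# "yolox_s.onnx" and the 640x640 input shape are assumptions for illustration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolox_s.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy image tensor in NCHW layout standing in for a preprocessed frame.
image = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Raw network outputs; box decoding and NMS are handled by YOLOX's demo utilities.
outputs = session.run(None, {input_name: image})
print([o.shape for o in outputs])
```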
NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
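As a concrete example of the TensorRT workflow, the sketch below builds a serialized engine from an ONNX file using the TensorRT 8.x Python API; the model path and the FP16 flag are illustrative assumptions, not part of any particular project.

```python
# Minimal sketch: build a TensorRT engine from an ONNX model (TensorRT 8.x style).
# "model.onnx" and "model.engine" are placeholder paths.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # optional: allow FP16 kernels if supported

serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```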
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
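For the Jetson case, the Hello AI World project exposes a small Python API; a detection loop looks roughly like the sketch below, assuming the jetson_inference / jetson_utils packages are installed on the device. The model name and camera URI are illustrative choices, not requirements.

```python
# Minimal sketch of a detection loop with the Hello AI World Python bindings.
# Assumes a Jetson with jetson_inference / jetson_utils installed and a CSI camera.
import jetson_inference
import jetson_utils

net = jetson_inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson_utils.videoSource("csi://0")       # or a video file / V4L2 device
display = jetson_utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)          # runs TensorRT-accelerated inference
    display.Render(img)
    display.SetStatus("detections: {:d}".format(len(detections)))
```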
Implementation of popular deep learning networks with the TensorRT network definition API
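Projects in this style build networks layer by layer with the TensorRT network definition API, usually in C++. The Python equivalent below sketches the same idea for a tiny conv + ReLU graph, with random weights standing in for real checkpoint values.

```python
# Minimal sketch: define a tiny network with the TensorRT network definition API
# (Python flavor). Random weights are placeholders for values loaded from a checkpoint.
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

inp = network.add_input("input", trt.float32, (1, 3, 32, 32))

# Keep NumPy arrays alive while the network is in use.
w = trt.Weights(np.random.rand(8, 3, 3, 3).astype(np.float32))
b = trt.Weights(np.random.rand(8).astype(np.float32))

conv = network.add_convolution_nd(inp, 8, (3, 3), w, b)
conv.stride_nd = (1, 1)
relu = network.add_activation(conv.get_output(0), trt.ActivationType.RELU)
network.mark_output(relu.get_output(0))
```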
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
PyTorch, ONNX, and TensorRT implementation of YOLOv4
TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens the support and …
An easy to use PyTorch to TensorRT converter
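Typical usage of such a converter, following torch2trt's documented entry point, looks like the sketch below; the ResNet-18 model and the FP16 flag are only illustrative.

```python
# Minimal sketch: convert a PyTorch model with torch2trt on a machine with
# CUDA and TensorRT installed. Model choice and fp16_mode are illustrative.
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

# Conversion traces the model with example inputs.
model_trt = torch2trt(model, [x], fp16_mode=True)

# The converted module is called like a regular PyTorch module.
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))  # rough check of numerical agreement
```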
Multi-object tracking and segmentation using YOLOv5 with StrongSORT and OSNet
Deep learning API and server in C++14 with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and t-SNE
YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny implemented in TensorFlow 2.0 and on Android; converts YOLOv4 .weights to TensorFlow, TensorRT, and TFLite
Plug-and-play modules to boost the performance of your AI systems
micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ), 8-bit (TensorRT); 2) pruning: normal, reg…
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
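Compilation with this kind of frontend typically takes an eager or scripted module plus input specifications; the sketch below assumes Torch-TensorRT's torch_tensorrt.compile() API, with the model, shapes, and precisions chosen purely for illustration.

```python
# Minimal sketch: compile a PyTorch model with Torch-TensorRT.
# ResNet-50, the input shape, and the FP16 precision set are illustrative.
import torch
import torch_tensorrt
from torchvision.models import resnet50

model = resnet50(pretrained=True).eval().cuda()

trt_module = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
    enabled_precisions={torch.float16},  # allow TensorRT to pick FP16 kernels
)

x = torch.randn(1, 3, 224, 224).cuda()
print(trt_module(x).shape)
```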
TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet
Deep learning gateway on Raspberry Pi and other edge devices
OpenMMLab Model Deployment Framework
C++ library based on TensorRT integration