Explore and extend models from the latest cutting-edge research; a minimal loading sketch follows the list below.
HybridNets - End2End Perception Network
ResNet-style video classification networks pretrained on the Kinetics 400 dataset
SlowFast networks pretrained on the Kinetics 400 dataset
X3D networks pretrained on the Kinetics 400 dataset
YOLOP pretrained on the BDD100K dataset
MiDaS models for computing relative depth from a single image.
Classify birds using this fine-grained image classifier
Reference implementation for music source separation
A set of compact enterprise-grade pre-trained STT Models for multiple languages.
A set of compact enterprise-grade pre-trained TTS Models for multiple languages
Pre-trained Spoken Language Classifier
Pre-trained Spoken Number Detector
Pre-trained Voice Activity Detector
YOLOv5 in PyTorch > ONNX > CoreML > TFLite
DeepLabV3 models with ResNet-50, ResNet-101 and MobileNet-V3 backbones
Transformer models for English-French and English-German translation
ResNeXt models trained with billion-scale weakly-supervised data.
A simple generative image model for 64x64 images
High-quality image generation of fashion, celebrity faces
ResNet and ResNeXt models proposed in "Billion scale semi-supervised learning for image classification"
PyTorch implementations of popular NLP Transformers
U-Net with batch normalization for biomedical image segmentation with pretrained weights for abnormality segmentation in brain MRI
EfficientNets are a family of image classification models that achieve state-of-the-art accuracy while being an order of magnitude smaller and faster. Trained with mixed precision using Tensor Cores.
ResNet50 model trained with mixed precision using Tensor Cores.
ResNet with bottleneck 3x3 Convolutions substituted by 3x3 Grouped Convolutions, trained with mixed precision using Tensor Cores.
ResNeXt with Squeeze-and-Excitation module added, trained with mixed precision using Tensor Cores.
Single Shot MultiBox Detector model for object detection
The Tacotron 2 model for generating mel spectrograms from text
WaveGlow model for generating speech from mel spectrograms (generated by Tacotron2)
RoBERTa: A Robustly Optimized BERT Pretraining Approach
The 2012 ImageNet winner achieved a top-5 error of 15.3%, more than 10.8 percentage points lower than that of the runner-up.
Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion.
Fully-Convolutional Network model with ResNet-50 and ResNet-101 backbones
Efficient networks that generate more features from cheap operations
GoogLeNet was based on a deep convolutional neural network architecture codenamed "Inception" which won ImageNet 2014.
Harmonic DenseNet pre-trained on ImageNet
Also called GoogLeNet v3, a famous ConvNet from 2015 trained on ImageNet
Boosting Tiny and Efficient Models using Knowledge Distillation.
An efficient network optimized for speed and memory, based on residual blocks
Proxylessly specialize CNN architectures for different hardware platforms.
A new ResNet variant.
Deep residual networks pre-trained on ImageNet
Next generation ResNets, more efficient and accurate
An efficient ConvNet optimized for speed and memory, pre-trained on ImageNet
AlexNet-level accuracy with 50x fewer parameters.
Award-winning ConvNets from the 2014 ImageNet ILSVRC challenge
Wide Residual Networks
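All of the models listed above are published through PyTorch Hub, and most can be pulled into a script with `torch.hub.load(repo, entrypoint)`. Below is a minimal sketch, assuming the `pytorch/vision` repository and its `resnet50` entrypoint (the "Deep residual networks pre-trained on ImageNet" entry); the other entries follow the same pattern with their own repository and entrypoint names.

```python
import torch

# Minimal sketch: load ResNet-50 from the pytorch/vision Hub repository.
# Note: newer torchvision releases prefer weights="IMAGENET1K_V1" over
# pretrained=True, which still works but may emit a deprecation warning.
model = torch.hub.load('pytorch/vision', 'resnet50', pretrained=True)
model.eval()

# The ImageNet classifiers above expect 224x224 RGB input normalized with
# ImageNet statistics; a random tensor is enough to check the wiring.
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # torch.Size([1, 1000]) -- one score per ImageNet class

# The entrypoints exposed by a repository can be discovered with:
print(torch.hub.list('pytorch/vision'))
```

Each model card on the Hub page documents its own repository string, entrypoint name, and expected preprocessing, so the same few lines adapt to the detection, speech, and NLP models listed above.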
Meet the PyTorch Korea user community on GitHub.
Check out the PyTorch tutorials being translated into Korean.
Share your thoughts with other users and help each other out!