PyTorch Recipes
Recipes are bite-sized, actionable examples of how to use specific PyTorch features, different from our full-length tutorials.
Learn how to use PyTorch's torch.nn package to create and define a neural network for the MNIST dataset.
Basics
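For orientation, here is a minimal sketch of the kind of network this recipe builds; the layer sizes below are illustrative assumptions, not taken from the recipe itself:

```python
import torch
from torch import nn
import torch.nn.functional as F

class Net(nn.Module):
    """A small convolutional classifier for 28x28 MNIST images."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.fc1 = nn.Linear(9216, 128)   # 64 channels * 12 * 12 after pooling
        self.fc2 = nn.Linear(128, 10)     # 10 digit classes

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2)
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)

net = Net()
print(net(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])
```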
Learn how state_dict objects and Python dictionaries are used in saving or loading models from PyTorch.
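A minimal sketch of the idea, using a toy nn.Linear and an illustrative file name:

```python
import torch
from torch import nn

model = nn.Linear(4, 2)
# A state_dict is an ordinary Python dict mapping parameter names to tensors.
print(model.state_dict().keys())  # odict_keys(['weight', 'bias'])

torch.save(model.state_dict(), "model_weights.pt")      # persist the weights only
model.load_state_dict(torch.load("model_weights.pt"))   # restore into the same architecture
```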
Learn about the two approaches for saving and loading models for inference in PyTorch - via the state_dict and via the entire model.
Saving and loading a general checkpoint model for inference or resuming training can be helpful for picking up where you last left off. In this recipe, explore how to save and load multiple checkpoints.
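The pattern, sketched with placeholder values (the epoch, loss, and file name are illustrative):

```python
import torch
from torch import nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Save everything needed to resume training, not just the model weights.
torch.save({
    "epoch": 5,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "loss": 0.42,
}, "checkpoint.pt")

ckpt = torch.load("checkpoint.pt")
model.load_state_dict(ckpt["model_state_dict"])
optimizer.load_state_dict(ckpt["optimizer_state_dict"])
start_epoch = ckpt["epoch"] + 1  # resume where training left off
```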
In this recipe, learn how saving and loading multiple models can be helpful for reusing models that you have previously trained.
Learn how warmstarting the training process by partially loading a model or loading a partial model can help your model converge much faster than training from scratch.
Learn how saving and loading models across devices (CPUs and GPUs) is relatively straightforward using PyTorch.
Learn when you should zero out gradients and how doing so can help increase the accuracy of your model.
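The core of the recipe in a few lines; the model and data here are stand-ins:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(3):
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    optimizer.zero_grad()            # clear old gradients first...
    loss = F.mse_loss(model(x), y)
    loss.backward()                  # ...because backward() accumulates into .grad
    optimizer.step()
```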
Learn how to use PyTorch's benchmark module to measure and compare the performance of your code.
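A minimal sketch of torch.utils.benchmark usage (the matmul snippet is an arbitrary example):

```python
import torch
import torch.utils.benchmark as benchmark

x = torch.randn(1024, 1024)
timer = benchmark.Timer(
    stmt="x @ x",            # the snippet to time
    globals={"x": x},
)
print(timer.timeit(100))     # runs the snippet 100 times and reports timing statistics
```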
Learn how to measure snippet run times and collect instructions.
Learn how to use PyTorch's profiler to measure operator time and memory consumption.
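A minimal sketch, profiling a toy model on CPU:

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(512, 512)
x = torch.randn(64, 512)

with profile(activities=[ProfilerActivity.CPU], profile_memory=True) as prof:
    model(x)

# Per-operator CPU time and memory, sorted by total CPU time.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```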
Learn how to use PyTorch's profiler with the Instrumentation and Tracing Technology API (ITT API) to visualize operator labeling in the Intel® VTune™ Profiler GUI.
Learn how to use the torch.compile IPEX backend.
Learn how to use torch.compiler.set_stance.
Compiler
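A minimal sketch of both ideas, assuming a PyTorch release recent enough to ship torch.compiler.set_stance:

```python
import torch

@torch.compile
def gelu(x):
    # An arbitrary pointwise function to compile.
    return x * torch.sigmoid(1.702 * x)

x = torch.randn(1000)
gelu(x)  # the first call triggers compilation; later calls reuse the compiled code

# set_stance adjusts compiler behavior, e.g. temporarily fall back to eager:
with torch.compiler.set_stance("force_eager"):
    gelu(x)
```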
Learn how to use the meta device to reason about shapes in your model.
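The idea in brief; the sizes are illustrative:

```python
import torch
from torch import nn

# On the meta device no real memory is allocated, but shapes and
# dtypes still propagate through the forward pass.
with torch.device("meta"):
    model = nn.Linear(1024, 10)

x = torch.randn(32, 1024, device="meta")
print(model(x).shape)  # torch.Size([32, 10]), computed without real data
```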
Learn tips for loading an nn.Module from a checkpoint.
Learn how to use the torch logging APIs to observe the compilation process.
New extension points in nn.Module.
Learn an end-to-end example of how to use AOTInductor for Python runtime.
Learn how to export models for popular use cases.
Compiler,TorchCompile
Learn how to use Captum to attribute the predictions of an image classifier to their corresponding image features and visualize the attribution results.
Interpretability,Captum
Learn basic usage of TensorBoard with PyTorch, and how to visualize data in the TensorBoard UI.
Visualization,TensorBoard
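A minimal sketch (the log directory and tag names are arbitrary):

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/demo")
for step in range(100):
    writer.add_scalar("loss/train", torch.rand(1).item(), step)  # dummy metric
writer.close()
# Inspect with: tensorboard --logdir=runs
```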
Apply dynamic quantization to a simple LSTM model.
Quantization,Text,Model-Optimization
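A minimal sketch of dynamic quantization on a toy LSTM:

```python
import torch
from torch import nn

model = nn.LSTM(input_size=32, hidden_size=64, num_layers=1)

# Dynamic quantization converts weights to int8 ahead of time;
# activations are quantized on the fly at inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.LSTM}, dtype=torch.qint8
)
```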
Learn how to export your trained model in TorchScript format, and how to load your TorchScript model in C++ and run inference.
TorchScript
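The export side in brief (the model and file name are placeholders); the C++ side loads the same archive with torch::jit::load:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
scripted = torch.jit.script(model)   # compile the module to TorchScript
scripted.save("model.pt")            # portable archive, loadable from C++

loaded = torch.jit.load("model.pt")  # or torch::jit::load("model.pt") in C++
print(loaded(torch.randn(1, 4)))
```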
Learn how to use Flask, a lightweight web server, to quickly set up a web API from your trained PyTorch model.
Production,TorchScript
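A minimal sketch, assuming a TorchScript model saved as model.pt; the route and JSON format are illustrative choices, not from the recipe:

```python
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)
model = torch.jit.load("model.pt").eval()

@app.route("/predict", methods=["POST"])
def predict():
    data = request.get_json()  # e.g. {"input": [[...feature values...]]}
    x = torch.tensor(data["input"], dtype=torch.float32)
    with torch.no_grad():
        out = model(x)
    return jsonify(out.tolist())

if __name__ == "__main__":
    app.run()
```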
List of recipes for performance optimizations for using PyTorch on Mobile (Android and iOS).
Mobile,Model-Optimization
Learn how to build an Android application from scratch that uses the LibTorch C++ API and a TorchScript model with a custom C++ operator.
Mobile
Learn how to fuse a list of PyTorch modules into a single module to reduce the model size before quantization.
Learn how to reduce the model size and make it run faster without losing much on accuracy.
Mobile,Quantization
Learn how to convert the model to TorchScript and (optionally) optimize it for mobile apps.
Learn how to add the model to an iOS project and use the PyTorch pod for iOS.
Learn how to add the model to an Android project and use the PyTorch library for Android.
Learn how to use the mobile interpreter on iOS and Android devices.
How to use the PyTorch profiler to profile RPC-based workloads.
Production
Use torch.cuda.amp to reduce runtime and save memory on NVIDIA GPUs.
Model-Optimization
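The canonical pattern, sketched with a toy model (requires a CUDA device):

```python
import torch

device = "cuda"
model = torch.nn.Linear(512, 512).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 underflow

for _ in range(10):
    x = torch.randn(64, 512, device=device)
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(x).square().mean()  # the forward pass runs in mixed precision
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```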
Tips for achieving optimal performance.
How to use the run_cpu script for optimal runtime configurations on Intel® Xeon CPUs.
Tips for achieving the best inference performance on AWS Graviton CPUs.
Learn to leverage Intel® Advanced Matrix Extensions.
Override torch operators with Torch Function modes and torch.compile.
Speed up the optimizer using torch.compile.
Speed up training with LRScheduler and the torch.compiled optimizer.
Horizontally fuse pointwise ops with torch.compile.
Learn how to use user-defined kernels with torch.compile.
Learn how to use compile time caching in torch.compile.
Learn how to configure compile time caching in torch.compile.
Learn how to use regional compilation to control cold start compile time.
Introduction to Intel® Extension for PyTorch*.
Ease-of-use quantization for PyTorch with Intel® Neural Compressor.
Quantization,Model-Optimization
Learn how to use DeviceMesh.
Distributed-Training
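A minimal sketch, assuming 8 GPUs in a 2x4 data-parallel by tensor-parallel layout and a process group initialized by torchrun:

```python
from torch.distributed.device_mesh import init_device_mesh

# 2 x 4 mesh over 8 CUDA devices; the dimension names are our own labels.
mesh_2d = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "tp"))

dp_group = mesh_2d["dp"].get_group()  # process group along the data-parallel dim
tp_group = mesh_2d["tp"].get_group()  # process group along the tensor-parallel dim
```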
How to use ZeroRedundancyOptimizer to reduce memory consumption.
How to use RPC with direct GPU-to-GPU communication.
How to enable TorchScript support for Distributed Optimizer.
Distributed-Training,TorchScript
Learn how to checkpoint distributed models with the Distributed Checkpoint package.
Learn how to use CommDebugMode for DTensors.
Learn how to deploy a model to Vertex AI with TorchServe.