Merge pull request #668 from jiangzhonglian/main
Update docs: partial translation
Showing 155 changed files with 8,339 additions and 2,417 deletions.
@@ -1,53 +1,48 @@

# torch.utils.bottleneck [¶](#module-torch.utils.bottleneck "Permalink to this heading")

> Translator: [片刻小哥哥](https://github.com/jiangzhonglian)
>
> Project page: <https://pytorch.apachecn.org/2.0/docs/bottleneck>
>
> Original page: <https://pytorch.org/docs/stable/bottleneck.html>

`torch.utils.bottleneck` is a tool that can be used as an initial step for debugging bottlenecks in your program. It summarizes runs of your script with the Python profiler and PyTorch's autograd profiler.

Run it on the command line with

```py
python -m torch.utils.bottleneck /path/to/source/script.py [args]
```

where `[args]` are any number of arguments to `script.py`, or run `python -m torch.utils.bottleneck -h` for more usage instructions.

!!! warning "Warning"

    Because your script will be profiled, please make sure that it exits in a finite amount of time.

!!! warning "Warning"

    Due to the asynchronous nature of CUDA kernels, when running against CUDA code, the cProfile output and the CPU-mode autograd profiler may not show correct timings: the reported CPU time covers the time used to launch the kernels but does not include the time the kernels spend executing on the GPU, unless the operation does a synchronize. Operations that do synchronize appear extremely expensive under regular CPU-mode profilers. In cases where the timings are incorrect like this, the CUDA-mode autograd profiler may be helpful.

!!! note "Note"

    To decide which (CPU-only or CUDA-mode) autograd profiler output to look at, first check whether your script is CPU-bound ("CPU total time is much greater than CUDA total time"). If it is CPU-bound, looking at the results of the CPU-mode autograd profiler will help. If, on the other hand, your script spends most of its time executing on the GPU, it makes sense to start looking for the responsible CUDA operators in the output of the CUDA-mode autograd profiler.

    Of course, reality is much more complicated: depending on which part of the model you are evaluating, your script might not fall into either of these two extremes. If the profiler outputs do not help, you can try looking at [`torch.autograd.profiler.emit_nvtx()`](autograd.html#torch.autograd.profiler.emit_nvtx "torch.autograd.profiler.emit_nvtx") together with `nvprof`. However, take into account that the NVTX overhead is very high and often produces a heavily skewed timeline. Similarly, the Intel® VTune™ Profiler can be used via [`torch.autograd.profiler.emit_itt()`](autograd.html#torch.autograd.profiler.emit_itt "torch.autograd.profiler.emit_itt").

!!! warning "Warning"

    If you are profiling CUDA code, the first profiler that `bottleneck` runs (cProfile) will include the CUDA startup time (the cost of CUDA buffer allocation) in its time reporting. This should not matter if your bottlenecks make the code much slower than the CUDA startup time.

For more complicated uses of the profilers (such as the multi-GPU case), see <https://docs.python.org/3/library/profile.html> or [`torch.autograd.profiler.profile()`](autograd.html#torch.autograd.profiler.profile "torch.autograd.profiler.profile") for more information.
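As a concrete illustration of the command-line workflow above, here is a minimal sketch of a script that `bottleneck` could profile. The file name, tensor shapes, and loop count are invented for the example; the only real requirement is the one stated in the warning: the script must exit in a finite amount of time.

```py
# hypothetical_script.py -- a tiny workload for torch.utils.bottleneck.
# Profile it from the command line with:
#   python -m torch.utils.bottleneck hypothetical_script.py
import torch

x = torch.randn(64, 100)                      # batch of inputs
w = torch.randn(100, 10, requires_grad=True)  # weights to differentiate

for _ in range(100):                          # finite loop: the script must exit
    loss = (x @ w).pow(2).mean()
    loss.backward()                           # gradients accumulate into w.grad

print("final loss:", loss.item())
```

`bottleneck` then prints a cProfile summary followed by the autograd profiler summaries for this run.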
@@ -1,53 +1,19 @@

# torch.__config__ [¶](#module-torch.__config__ "Permalink to this heading")

> Translator: [片刻小哥哥](https://github.com/jiangzhonglian)
>
> Project page: <https://pytorch.apachecn.org/2.0/docs/config_mod>
>
> Original page: <https://pytorch.org/docs/stable/config_mod.html>

> `torch.__config__.show()` [[source]](_modules/torch/__config__.html#show)[¶](#torch.__config__.show "Permalink to this definition")

Returns a human-readable string with descriptions of the configuration of PyTorch.

> `torch.__config__.parallel_info()` [[source]](_modules/torch/__config__.html#parallel_info)[¶](#torch.__config__.parallel_info "Permalink to this definition")

Returns a detailed string with parallelization settings.
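Both functions return plain strings, so a quick sketch of their use is simply to print them (useful when filing bug reports or checking which BLAS/OpenMP backends a build was compiled with):

```py
import torch

# Both helpers return ordinary strings that can be printed or logged as-is.
build_info = torch.__config__.show()               # compiler and build flags
threading_info = torch.__config__.parallel_info()  # intra-op threading settings

print(build_info)
print(threading_info)
```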
@@ -1,53 +1,65 @@

# C++ [¶](#c "Permalink to this heading")

> Translator: [片刻小哥哥](https://github.com/jiangzhonglian)
>
> Project page: <https://pytorch.apachecn.org/2.0/docs/cpp_index>
>
> Original page: <https://pytorch.org/docs/stable/cpp_index.html>

!!! note "Note"

    If you are looking for the PyTorch C++ API docs, go directly [here](https://pytorch.org/cppdocs/).

PyTorch provides several features for working with C++, and it is best to choose among them based on your needs. At a high level, the following support is available:

## TorchScript C++ API [¶](#torchscript-c-api "Permalink to this heading")

[TorchScript](https://pytorch.org/docs/stable/jit.html) allows PyTorch models defined in Python to be serialized and then loaded and run in C++, capturing the model code via compilation or by tracing its execution. You can learn more in the [Loading a TorchScript Model in C++ tutorial](https://pytorch.org/tutorials/advanced/cpp_export.html). This means you can define your models in Python as much as possible, but subsequently export them via TorchScript for Python-free execution in production or embedded environments. The TorchScript C++ API is used to interact with these models and the TorchScript execution engine, including:

* Loading serialized TorchScript models saved from Python
* Doing simple model modifications if needed (e.g. pulling out submodules)
* Constructing inputs and doing preprocessing using the C++ Tensor API

## Extending PyTorch and TorchScript with C++ Extensions [¶](#extending-pytorch-and-torchscript-with-c-extensions "Permalink to this heading")

TorchScript can be augmented with user-supplied code through custom operators and custom classes. Once registered with TorchScript, these operators and classes can be invoked in TorchScript code run from Python or from C++ as part of a serialized TorchScript model. The [Extending TorchScript with Custom C++ Operators](https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html) tutorial walks through interfacing TorchScript with OpenCV. In addition to wrapping a function call with a custom operator, C++ classes and structs can be bound into TorchScript through a pybind11-like interface, which is explained in the [Extending TorchScript with Custom C++ Classes](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html) tutorial.

## Tensor and Autograd in C++ [¶](#tensor-and-autograd-in-c "Permalink to this heading")

Most of the tensor and autograd operations in the PyTorch Python API are also available in the C++ API. These include:

* `torch::Tensor` methods such as `add` / `reshape` / `clone`. For the full list of available methods, see: <https://pytorch.org/cppdocs/api/classat_1_1_tensor.html>
* The C++ tensor indexing API, which looks and behaves the same as the Python API. For details on its usage, see: <https://pytorch.org/cppdocs/notes/tensor_indexing.html>
* The tensor autograd APIs and the `torch::autograd` package, which are crucial for building dynamic neural networks in the C++ frontend. For more details, see: <https://pytorch.org/tutorials/advanced/cpp_autograd.html>

## Authoring Models in C++ [¶](#authoring-models-in-c "Permalink to this heading")

The "author in TorchScript, infer in C++" workflow requires model authoring to be done in TorchScript. However, there might be cases where the model has to be authored in C++ (e.g. workflows where a Python component is undesirable). To serve such use cases, we provide the full capability of authoring and training a neural net model purely in C++, with familiar components such as `torch::nn` / `torch::nn::functional` / `torch::optim` that closely resemble the Python API.

* For an overview of the PyTorch C++ model authoring and training API, see: <https://pytorch.org/cppdocs/frontend.html>
* For a detailed tutorial on how to use the API, see: <https://pytorch.org/tutorials/advanced/cpp_frontend.html>
* Docs for components such as `torch::nn` / `torch::nn::functional` / `torch::optim` can be found at: <https://pytorch.org/cppdocs/api/library_root.html>

## Packaging for C++ [¶](#packaging-for-c "Permalink to this heading")

For guidance on how to install and link with libtorch (the library that contains all of the above C++ APIs), see: <https://pytorch.org/cppdocs/installing.html>. Note that on Linux two kinds of libtorch binaries are provided: one compiled with the GCC pre-cxx11 ABI and the other with the GCC cxx11 ABI; you should make the selection based on the GCC ABI your system uses.
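To make the "author in Python, run in C++" workflow above concrete, here is a hedged sketch of the Python half: scripting a model and saving it to a file that C++ code would later open with `torch::jit::load`. The module and file names are invented for the example.

```py
import torch

class TinyModel(torch.nn.Module):  # hypothetical model, for illustration only
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel()
scripted = torch.jit.script(model)   # compile the module to TorchScript
scripted.save("tiny_model.pt")       # this archive is what C++ would load

# Round-trip check from Python; in C++ the equivalent call would be
# torch::jit::load("tiny_model.pt").
reloaded = torch.jit.load("tiny_model.pt")
x = torch.randn(1, 4)
assert torch.allclose(model(x), reloaded(x))
```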
@@ -1,53 +1,28 @@

# torch.cpu [¶](#module-torch.cpu "Permalink to this heading")

> Translator: [片刻小哥哥](https://github.com/jiangzhonglian)
>
> Project page: <https://pytorch.apachecn.org/2.0/docs/cpu>
>
> Original page: <https://pytorch.org/docs/stable/cpu.html>

This package implements the abstractions found in `torch.cuda` to facilitate writing device-agnostic code.

| | |
| --- | --- |
| [`current_stream`](generated/torch.cpu.current_stream.html#torch.cpu.current_stream "torch.cpu.current_stream") | Returns the currently selected [`Stream`](generated/torch.cpu.Stream.html#torch.cpu.Stream "torch.cpu.Stream") for a given device. |
| [`is_available`](generated/torch.cpu.is_available.html#torch.cpu.is_available "torch.cpu.is_available") | Returns a bool indicating if the CPU is currently available. |
| [`synchronize`](generated/torch.cpu.synchronize.html#torch.cpu.synchronize "torch.cpu.synchronize") | Waits for all kernels in all streams on the CPU device to complete. |
| [`stream`](generated/torch.cpu.stream.html#torch.cpu.stream "torch.cpu.stream") | Wrapper around the context manager StreamContext that selects a given stream. |
| [`device_count`](generated/torch.cpu.device_count.html#torch.cpu.device_count "torch.cpu.device_count") | Returns the number of CPU devices (not cores). |
| [`StreamContext`](generated/torch.cpu.StreamContext.html#torch.cpu.StreamContext "torch.cpu.StreamContext") | Context manager that selects a given stream. |

## Streams and events [¶](#streams-and-events "Permalink to this heading")

| | |
| --- | --- |
| [`Stream`](generated/torch.cpu.Stream.html#torch.cpu.Stream "torch.cpu.Stream") | Note: |
29acafc
Successfully deployed to the following URLs:
pytorch-doc-zh – ./
pytorch-doc-zh-git-master-apachecn.vercel.app
pytorch-doc-zh-apachecn.vercel.app
pytorch-doc-zh.vercel.app