Accelerating PyTorch Transformers by replacing nn.Transformer with Nested Tensors and torch.compile()¶
What you will learn

- Learn about the low-level building blocks PyTorch provides to build custom Transformer layers (nested tensors, scaled_dot_product_attention, torch.compile(), and FlexAttention)
- Discover how the above improve memory usage and performance, using MultiHeadAttention as an example
- Explore advanced customizations using the aforementioned building blocks

Prerequisites

- PyTorch v2.6.0 or later
Over the past few years, the PyTorch team has developed various low-level features that can be composed to create a variety of transformer variants, including the following:

- Nested Tensors with the torch.jagged layout (AKA NJTs)
- scaled_dot_product_attention
- torch.compile()
- FlexAttention
This tutorial gives a brief overview of the above technologies and demonstrates how they can be composed to yield flexible, performant transformer layers with an improved user experience.
One may observe that the torch.nn module currently provides various Transformer-related layers. In particular, it includes TransformerEncoderLayer, TransformerEncoder, TransformerDecoderLayer, TransformerDecoder, Transformer, and MultiheadAttention. This family of layers was initially implemented following the `Attention is All You Need <https://arxiv.org/abs/1706.03762>`_ paper. The components discussed in this tutorial provide improved user experience, flexibility, and performance over the existing nn layers.
Is this tutorial for me?¶
If you are wondering what building blocks the torch library provides for writing your own transformer layers, and what the best practices are, you are in the right place. Please keep reading!
If you are looking for an out-of-the-box implementation of a popular transformer architecture, note that there are many open-source libraries that provide them.
If you are only interested in performant attention score modifications, please check out the FlexAttention blog, which contains a gym of masks.
Introducing the Building Blocks¶
First, we will provide a brief overview of the four technologies mentioned in the introduction.
Nested tensors generalize the shape of regular dense tensors, allowing representation of ragged-sized data with the same tensor UX. In the context of transformers, we can think of nested tensors as a tool for representing variable sequence lengths. They eliminate the need for the error-prone practices of explicit padding and masking (think key_padding_mask in nn.MultiHeadAttention).
scaled_dot_product_attention is a primitive for \(\text{softmax}(\frac{QK^T}{\sqrt{E}} + B)V\) that dispatches into either fused implementations of the operator or a fallback implementation. It works out of the box in eager mode (i.e. the default mode of using PyTorch where operations are executed on the fly as they are encountered) and also integrates seamlessly with torch.compile(). As of 2.6, it also offers grouped query attention natively.
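As a minimal illustrative sketch (the shapes below are assumptions for the example), the primitive can be called directly on dense inputs in the (N, num_heads, L, E_head) convention:

```python
import torch
import torch.nn.functional as F

# Batch of 2, 8 heads, sequence length 16, head dim 64.
q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 8, 16, 64)
v = torch.randn(2, 8, 16, 64)

# Dispatches to a fused kernel when one is available, otherwise a fallback;
# is_causal applies the causal mask without materializing it.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```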
torch.compile() is a compiler introduced in version 2.0 that is able to capture a graph of PyTorch code and perform various optimizations on it, such as fusing together sequences of ops. Nested tensors with the torch.jagged layout and scaled_dot_product_attention work seamlessly with compile. In the context of transformers, the value add of using compile with nested tensors and SDPA is that it can remove the framework overhead seen in eager mode and fuse sequences of ops in transformers, such as projection and activation.
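A minimal sketch of the workflow (the function here is a stand-in, and we use the debug "eager" backend so the sketch runs without a GPU toolchain; in practice you would call torch.compile() with its default backend to get fused kernels):

```python
import torch

def fused_candidate(x):
    # A sequence of pointwise ops that the compiler can fuse into one kernel.
    return torch.relu(x) * 2.0 + 1.0

# backend="eager" captures the graph without code generation -- illustrative only.
compiled = torch.compile(fused_candidate, backend="eager")

x = torch.randn(8)
print(torch.allclose(fused_candidate(x), compiled(x)))  # True
```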
FlexAttention is a primitive that allows users to modify attention scores prior to the softmax operation. It generalizes the additive B term above for scaled_dot_product_attention, allowing for arbitrary calculation. It requires compile to achieve good performance.
The above building blocks are "All You Need" (as of October 2024)¶
The main premise in this section is that most transformer variations are GPT-style, consisting of layers like Embedding, Positional Encoding, Attention Blocks, and Feed Forward networks. If we were to try to classify the differences in this space, we might land on something like the following:

- Layer type (activation functions such as SwiGLU, normalization functions such as RMSNorm, positional encodings such as Sinusoidal and Rotary)
- Layer ordering, such as where to apply norms and positional encoding
- Modifications to attention scores, such as ALiBi, relative positional bias, and so on
In a pre-compiler environment, you might write a custom transformer and notice that it works correctly but is slow. To address this, you might develop a custom fused kernel for the specific series of ops. In a compiler environment, you can simply do the initial step and then compile and benefit from improved performance.
MultiheadAttention¶
Remember that MultiheadAttention takes in a query, key, and value, and consists of an input projection, a scaled_dot_product_attention operator, and an output projection. The main takeaway we want to demonstrate here is the improvement yielded when we replace padded/masked inputs with nested tensors. The improvements are threefold:
- User experience: Remember that nn.MultiheadAttention requires query, key, and value to be dense torch.Tensors. It also provides a key_padding_mask that is used to mask out the padding tokens in the key that arise due to different sequence lengths within a batch. Since there is no query_padding_mask in nn.MHA, users have to take care to mask/slice the outputs appropriately to account for query sequence lengths. NestedTensor cleanly removes the need for this sort of error-prone padding mask.
- Memory: Instead of materializing a dense [B, S, D] tensor with a [B, S] padding mask (where B is the batch size, S is the max sequence length in the batch, and D is the embedding size), nested tensors cleanly represent the batch of varying sequence lengths. As a result, the inputs and intermediate activations use less memory.
- Performance: Since padding is not materialized and unnecessary computation on padding is skipped, performance and memory usage improve.
We will demonstrate the above by building upon the MultiheadAttention layer in the `Nested Tensor tutorial <https://pytorch.org/tutorials/prototype/nestedtensor.html>`_ and comparing it to the vanilla nn.MultiheadAttention layer.
import torch
import torch.nn as nn
import torch.nn.functional as F
class MultiHeadAttention(nn.Module):
"""
Computes multi-head attention. Supports nested or padded tensors.
Args:
E_q (int): Size of embedding dim for query
E_k (int): Size of embedding dim for key
E_v (int): Size of embedding dim for value
E_total (int): Total embedding dim of combined heads post input projection. Each head
has dim E_total // nheads
nheads (int): Number of heads
dropout (float, optional): Dropout probability. Default: 0.0
bias (bool, optional): Whether to add bias to input projection. Default: True
"""
def __init__(
self,
E_q: int,
E_k: int,
E_v: int,
E_total: int,
nheads: int,
dropout: float = 0.0,
bias=True,
device=None,
dtype=None,
):
factory_kwargs = {"device": device, "dtype": dtype}
super().__init__()
self.nheads = nheads
self.dropout = dropout
self._qkv_same_embed_dim = E_q == E_k and E_q == E_v
if self._qkv_same_embed_dim:
self.packed_proj = nn.Linear(E_q, E_total * 3, bias=bias, **factory_kwargs)
else:
self.q_proj = nn.Linear(E_q, E_total, bias=bias, **factory_kwargs)
self.k_proj = nn.Linear(E_k, E_total, bias=bias, **factory_kwargs)
self.v_proj = nn.Linear(E_v, E_total, bias=bias, **factory_kwargs)
E_out = E_q
self.out_proj = nn.Linear(E_total, E_out, bias=bias, **factory_kwargs)
assert E_total % nheads == 0, "Embedding dim is not divisible by nheads"
self.E_head = E_total // nheads
self.bias = bias
def forward(
self,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
attn_mask=None,
is_causal=False,
) -> torch.Tensor:
"""
Forward pass; runs the following process:
1. Apply input projection
2. Split heads and prepare for SDPA
3. Run SDPA
4. Apply output projection
Args:
query (torch.Tensor): query of shape (``N``, ``L_q``, ``E_qk``)
key (torch.Tensor): key of shape (``N``, ``L_kv``, ``E_qk``)
value (torch.Tensor): value of shape (``N``, ``L_kv``, ``E_v``)
attn_mask (torch.Tensor, optional): attention mask of shape (``N``, ``L_q``, ``L_kv``) to pass to SDPA. Default: None
is_causal (bool, optional): Whether to apply causal mask. Default: False
Returns:
attn_output (torch.Tensor): output of shape (N, L_t, E_q)
"""
# Step 1. Apply input projection
if self._qkv_same_embed_dim:
if query is key and key is value:
result = self.packed_proj(query)
query, key, value = torch.chunk(result, 3, dim=-1)
else:
q_weight, k_weight, v_weight = torch.chunk(
self.packed_proj.weight, 3, dim=0
)
if self.bias:
q_bias, k_bias, v_bias = torch.chunk(
self.packed_proj.bias, 3, dim=0
)
else:
q_bias, k_bias, v_bias = None, None, None
query, key, value = (
F.linear(query, q_weight, q_bias),
F.linear(key, k_weight, k_bias),
F.linear(value, v_weight, v_bias),
)
else:
query = self.q_proj(query)
key = self.k_proj(key)
value = self.v_proj(value)
# Step 2. Split heads and prepare for SDPA
# reshape query, key, value to separate by head
# (N, L_t, E_total) -> (N, L_t, nheads, E_head) -> (N, nheads, L_t, E_head)
query = query.unflatten(-1, [self.nheads, self.E_head]).transpose(1, 2)
# (N, L_s, E_total) -> (N, L_s, nheads, E_head) -> (N, nheads, L_s, E_head)
key = key.unflatten(-1, [self.nheads, self.E_head]).transpose(1, 2)
# (N, L_s, E_total) -> (N, L_s, nheads, E_head) -> (N, nheads, L_s, E_head)
value = value.unflatten(-1, [self.nheads, self.E_head]).transpose(1, 2)
# Step 3. Run SDPA
# (N, nheads, L_t, E_head)
attn_output = F.scaled_dot_product_attention(
query, key, value, dropout_p=self.dropout, is_causal=is_causal
)
# (N, nheads, L_t, E_head) -> (N, L_t, nheads, E_head) -> (N, L_t, E_total)
attn_output = attn_output.transpose(1, 2).flatten(-2)
# Step 4. Apply output projection
# (N, L_t, E_total) -> (N, L_t, E_out)
attn_output = self.out_proj(attn_output)
return attn_output
Utilities¶
In this section, we include a utility that generates semi-realistic data, using a Zipf distribution for sentence lengths. This is used to generate the nested query, key, and value tensors. We also include a benchmark utility.
import numpy as np
def zipf_sentence_lengths(alpha: float, batch_size: int) -> torch.Tensor:
# generate fake corpus by unigram Zipf distribution
# from wikitext-2 corpus, we get rank "." = 3, "!" = 386, "?" = 858
sentence_lengths = np.empty(batch_size, dtype=int)
for ibatch in range(batch_size):
sentence_lengths[ibatch] = 1
word = np.random.zipf(alpha)
while word != 3 and word != 386 and word != 858:
sentence_lengths[ibatch] += 1
word = np.random.zipf(alpha)
return torch.tensor(sentence_lengths)
# Generate a batch of semi-realistic data using Zipf distribution for sentence lengths
# in the form of nested tensors with the jagged layout.
def gen_batch(N, E_q, E_k, E_v, device, dtype=torch.float32, query_seq_len_1=False):
# generate semi-realistic data using Zipf distribution for sentence lengths
sentence_lengths = zipf_sentence_lengths(alpha=1.2, batch_size=N)
# Note: the torch.jagged layout is a nested tensor layout that supports a single ragged
# dimension and works with torch.compile. The batch items each have shape (B, S*, D)
# where B = batch size, S* = ragged sequence length, and D = embedding dimension.
if query_seq_len_1:
query = torch.nested.nested_tensor(
[torch.randn(1, E_q, dtype=dtype, device=device) for l in sentence_lengths],
layout=torch.jagged,
)
else:
query = torch.nested.nested_tensor(
[
torch.randn(l.item(), E_q, dtype=dtype, device=device)
for l in sentence_lengths
],
layout=torch.jagged,
)
key = torch.nested.nested_tensor(
[
torch.randn(s.item(), E_k, dtype=dtype, device=device)
for s in sentence_lengths
],
layout=torch.jagged,
)
value = torch.nested.nested_tensor(
[
torch.randn(s.item(), E_v, dtype=dtype, device=device)
for s in sentence_lengths
],
layout=torch.jagged,
)
return query, key, value, sentence_lengths
import math
import timeit
def benchmark(func, *args, **kwargs):
torch.cuda.synchronize()
torch.cuda.reset_peak_memory_stats()
begin = timeit.default_timer()
output = func(*args, **kwargs)
torch.cuda.synchronize()
end = timeit.default_timer()
return output, (end - begin), torch.cuda.max_memory_allocated()
We will now demonstrate the performance improvements of using nested tensors + compile for self-attention in the MultiheadAttention layer, compared against the traditional nn.MultiheadAttention + compile with padding and masking.
N, E_q, E_k, E_v, E_total = 512, 512, 512, 512, 512
E_out = E_q
d_model = E_q
nheads = 8
dropout = 0.0
bias = True
device = "cuda"
torch.manual_seed(6)
query, key, value, sentence_lengths = gen_batch(N, E_q, E_k, E_v, device)
S = sentence_lengths.max().item()
print(
f"Total sequence length in nested query {sentence_lengths.sum().item()}, max sequence length {S}"
)
padded_query, padded_key, padded_value = (
t.to_padded_tensor(0.0) for t in (query, key, value)
)
torch.manual_seed(6)
mha_layer = MultiHeadAttention(
E_q, E_k, E_v, E_total, nheads, dropout=dropout, bias=bias, device="cuda"
)
torch.manual_seed(6)
vanilla_mha_layer = nn.MultiheadAttention(
E_q, nheads, dropout=dropout, batch_first=True, bias=bias, device="cuda"
)
# ``nn.MultiheadAttention`` uses a non conventional initialization for layers, so do this for exact parity :(
mha_layer.out_proj.weight = nn.Parameter(
vanilla_mha_layer.out_proj.weight.clone().detach()
)
mha_layer.packed_proj.weight = nn.Parameter(
vanilla_mha_layer.in_proj_weight.clone().detach()
)
mha_layer.out_proj.bias = nn.Parameter(vanilla_mha_layer.out_proj.bias.clone().detach())
mha_layer.packed_proj.bias = nn.Parameter(
vanilla_mha_layer.in_proj_bias.clone().detach()
)
new_mha_layer = torch.compile(mha_layer)
# warmup compile
nested_result_warmup = new_mha_layer(query, query, query, is_causal=True)
# benchmark
nested_result, nested_time, nested_peak_memory = benchmark(
new_mha_layer, query, query, query, is_causal=True
)
padded_nested_result = nested_result.to_padded_tensor(0.0)
# For the vanilla ``nn.MultiheadAttention``, we need to construct the ``key_padding_mask``
# Further, ``nn.MultiheadAttention`` forces one to materialize the ``attn_mask`` even if using ``is_causal``
src_key_padding_mask = torch.where(padded_query == 0.0, -math.inf, 0)[:, :, 0]
attn_mask = torch.empty((N, S, S), device=device).fill_(float("-inf"))
for i, s in enumerate(sentence_lengths):
attn_mask[i, :s, :s] = nn.Transformer.generate_square_subsequent_mask(s)
attn_mask = attn_mask.unsqueeze(1).expand(N, nheads, S, S).reshape(N * nheads, S, S)
vanilla_mha_layer = torch.compile(vanilla_mha_layer)
# warmup compile
warmup_vanilla_result = vanilla_mha_layer(
padded_query,
padded_query,
padded_query,
attn_mask=attn_mask,
key_padding_mask=src_key_padding_mask,
need_weights=False,
is_causal=True,
)
# benchmark
(padded_result, _), padded_time, padded_peak_memory = benchmark(
vanilla_mha_layer,
padded_query,
padded_query,
padded_query,
key_padding_mask=src_key_padding_mask,
need_weights=False,
attn_mask=attn_mask,
is_causal=True,
)
print(f"{padded_time=:.5f}, padded_peak_memory={padded_peak_memory/1e9:.2f} GB")
print(f"{nested_time=:.5f}, nested_peak_memory={nested_peak_memory/1e9:.2f} GB")
print(
"Max difference between vanilla and nested result",
(padded_result - padded_nested_result).abs().max().item(),
)
print(f"Nested speedup: {(padded_time/nested_time):.2f}")
print(
f"Nested peak memory reduction {((padded_peak_memory - nested_peak_memory)/1e9):.2f} GB"
)
For reference, here are some sample outputs on an A100:
padded_time=0.03454, padded_peak_memory=4.14 GB
nested_time=0.00612, nested_peak_memory=0.76 GB
Max difference between vanilla and nested result 0.0
Nested speedup: 5.65
Nested peak memory reduction 3.39 GB
We can also see the same improvement in the backward pass:
for i, entry_length in enumerate(sentence_lengths):
# padding-specific step: remove output projection bias from padded entries for fair comparison
padded_result[i, entry_length:, :] = 0.0
_, padded_bw_time, padded_bw_peak_mem = benchmark(
lambda: padded_result.sum().backward()
)
_, nested_bw_time, nested_bw_peak_mem = benchmark(
lambda: padded_nested_result.sum().backward()
)
print(f"{padded_bw_time=:.5f}, padded_bw_peak_mem={padded_bw_peak_mem/1e9:.2f} GB")
print(f"{nested_bw_time=:.5f}, nested_bw_peak_mem={nested_bw_peak_mem/1e9:.2f} GB")
print(f"Nested backward speedup: {(padded_bw_time/nested_bw_time):.2f}")
print(
f"Nested backward peak memory reduction {((padded_bw_peak_mem - nested_bw_peak_mem)/1e9):.2f} GB"
)
print(
"Difference in out_proj.weight.grad",
(mha_layer.out_proj.weight.grad - vanilla_mha_layer.out_proj.weight.grad)
.abs()
.max()
.item(),
)
print(
"Difference in packed_proj.weight.grad",
(mha_layer.packed_proj.weight.grad - vanilla_mha_layer.in_proj_weight.grad)
.abs()
.max()
.item(),
)
print(
"Difference in out_proj.bias.grad",
(mha_layer.out_proj.bias.grad - vanilla_mha_layer.out_proj.bias.grad)
.abs()
.max()
.item(),
)
print(
"Difference in packed_proj.bias.grad",
(mha_layer.packed_proj.bias.grad - vanilla_mha_layer.in_proj_bias.grad)
.abs()
.max()
.item(),
)
Sample outputs on an A100:
padded_bw_time=2.09337, padded_bw_peak_mem=5.10 GB
nested_bw_time=0.01452, nested_bw_peak_mem=3.24 GB
Nested backward speedup: 144.13
Nested backward peak memory reduction 1.86 GB
Difference in out_proj.weight.grad 0.000244140625
Difference in packed_proj.weight.grad 0.001556396484375
Difference in out_proj.bias.grad 0.0
Difference in packed_proj.bias.grad 0.001953125
GPT-style layer¶
A basic GPT-style transformer layer consists of a causal self-attention layer followed by a feed-forward network (FFN) with skip connections. Implementing this is fairly straightforward using the MultiheadAttention layer above, and it gives equivalent results to an nn.TransformerEncoderLayer with is_causal=True.
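For illustration, here is a minimal, self-contained sketch of such a layer using scaled_dot_product_attention directly (the names, pre-norm ordering, and GELU FFN are assumptions for this example, not this tutorial's canonical implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GPTBlock(nn.Module):
    """A minimal pre-norm GPT-style layer: causal self-attention followed by
    an FFN, each wrapped in a skip connection. Names and ordering here are
    illustrative assumptions for this sketch."""

    def __init__(self, d_model: int, nheads: int, dim_ff: int):
        super().__init__()
        assert d_model % nheads == 0, "d_model must be divisible by nheads"
        self.nheads, self.E_head = nheads, d_model // nheads
        self.norm1 = nn.LayerNorm(d_model)
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, dim_ff), nn.GELU(), nn.Linear(dim_ff, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Causal self-attention + skip connection
        y = self.norm1(x)
        q, k, v = torch.chunk(self.qkv(y), 3, dim=-1)
        # (N, L, d_model) -> (N, nheads, L, E_head)
        q, k, v = (
            t.unflatten(-1, [self.nheads, self.E_head]).transpose(1, 2)
            for t in (q, k, v)
        )
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.out_proj(attn.transpose(1, 2).flatten(-2))
        # Feed-forward network + skip connection
        return x + self.ffn(self.norm2(x))
```

As with the attention layer earlier in this tutorial, wrapping such a block in torch.compile() removes framework overhead and fuses sequences of ops such as projection and activation.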
We demonstrate examples of implementing the rest of the nn layers here but omit that from this tutorial for brevity.
Going one step further¶
So far, we have demonstrated how to implement a performant MultiheadAttention layer that follows the traditional nn.MultiheadAttention. Going back to our classification of modifications to the transformer architecture, remember that we classified the modifications into layer type, layer ordering, and modifications to attention scores. We trust that changing layer type and layer ordering (such as swapping LayerNorm for RMSNorm) is fairly straightforward.
In this section, we will discuss various functionalities using the aforementioned building blocks, including the following:

- Cross attention
- Fully masked rows no longer cause NaNs
- Modifying attention score: ALiBi with FlexAttention and NJT
- Packed projection
Cross Attention¶
Cross attention is a form of attention where the query and key/value tensors are from different sequences.
One example of this is in nn.TransformerDecoderLayer, where the query comes from the decoder and the key/value come from the encoder.
The above MultiheadAttention layer nicely generalizes to this case with nested tensors for both query and key/value.
query, _, _, q_len = gen_batch(N, E_q, E_k, E_v, device)
_, key, value, kv_len = gen_batch(N, E_q, E_k, E_v, device)
print(
f"Total sequence length in nested query {q_len.sum().item()}, max sequence length {q_len.max().item()}"
)
print(
f"Total sequence length in nested key/value {kv_len.sum().item()}, max sequence length {kv_len.max().item()}"
)
out = new_mha_layer(query, key, value, is_causal=False)
As above, we can compare this against the vanilla compiled nn.MultiheadAttention.
torch.manual_seed(6)
query, _, _, q_len = gen_batch(N, E_q, E_k, E_v, device)
_, key, value, kv_len = gen_batch(N, E_q, E_k, E_v, device)
padded_query, padded_key, padded_value = (
t.to_padded_tensor(0.0) for t in (query, key, value)
)
key_padding_mask = torch.where(padded_key == 0.0, -math.inf, 0)[:, :, 0]
# warmup compile
warmup_nested_result = new_mha_layer(query, key, value, is_causal=False)
warmup_vanilla_result = vanilla_mha_layer(
padded_query,
padded_key,
padded_value,
key_padding_mask=key_padding_mask,
need_weights=False,
is_causal=False,
)
nested_result, nested_time, nested_peak_memory = benchmark(
new_mha_layer, query, key, value, is_causal=False
)
(padded_result, _), padded_time, padded_peak_memory = benchmark(
vanilla_mha_layer,
padded_query,
padded_key,
padded_value,
key_padding_mask=key_padding_mask,
need_weights=False,
is_causal=False,
)
padded_nested_result = nested_result.to_padded_tensor(0.0)
for i, entry_length in enumerate(q_len):
# padding-specific step: remove output projection bias from padded entries for fair comparison
padded_result[i, entry_length:, :] = 0.0
print(
"Max difference between vanilla and nested result",
(padded_result - padded_nested_result).abs().max().item(),
)
print(f"Nested speedup: {(padded_time/nested_time):.2f}")
print(
f"Nested peak memory reduction {((padded_peak_memory - nested_peak_memory)/1e9):.2f} GB"
)
Sample outputs on an A100:
Max difference between vanilla and nested result 0.0
Nested speedup: 4.01
Nested peak memory reduction 1.40 GB
Fully masked rows no longer cause NaNs¶
There has been a long-standing issue with nn.MultiheadAttention and scaled_dot_product_attention where if a row was fully masked out, the output of the attention layer would be NaN. See the issue. This is because the softmax over an empty set is undefined.
Thanks to `this PR <https://github.com/pytorch/pytorch/pull/133882>`_, this is no longer the case. Instead, the output corresponding to fully masked rows in scaled_dot_product_attention will be 0. For cases where nn.MHA does not employ the "fast-path", this will also apply.
Using a custom MHA layer with NJTs is strongly recommended over the existing "fast-path" in nn.MultiheadAttention, as NJT's ability to model raggedness appropriately makes it possible to properly express empty sequences.
FlexAttention + NJT¶
NJT also composes with the FlexAttention module. This is a generalization of the MultiheadAttention layer that allows for arbitrary modifications to the attention score. The example below takes the alibi_mod that implements `ALiBi <https://arxiv.org/abs/2108.12409>`_ from the `attention gym <https://github.com/pytorch-labs/attention-gym>`_ and uses it with nested input tensors.
from torch.nn.attention.flex_attention import flex_attention
def generate_alibi_bias(H: int):
"""Returns an alibi bias score_mod given the number of heads H
Args:
H: number of heads
Returns:
alibi_bias: alibi bias score_mod
"""
def alibi_mod(score, b, h, q_idx, kv_idx):
scale = torch.exp2(-((h + 1) * 8.0 / H))
bias = (q_idx - kv_idx) * scale
return score + bias
return alibi_mod
query, key, value, _ = gen_batch(N, E_q, E_k, E_v, device)
n_heads, D = 8, E_q // 8
alibi_score_mod = generate_alibi_bias(n_heads)
query = query.unflatten(-1, [n_heads, D]).transpose(1, 2).detach().requires_grad_()
key = key.unflatten(-1, [n_heads, D]).transpose(1, 2).detach().requires_grad_()
value = value.unflatten(-1, [n_heads, D]).transpose(1, 2).detach().requires_grad_()
out_flex2 = flex_attention(query, key, value, score_mod=alibi_score_mod)
In addition, one can also use the block_mask utility of FlexAttention with NJTs via the create_nested_block_mask function. This is useful for taking advantage of the sparsity of the mask to speed up the attention computation. In particular, the function creates a sparse block mask for a "stacked sequence" of all the variable-length sequences in the NJT combined into one, while properly masking out inter-sequence attention. In the following example, we show how to create a causal block mask using this utility.
from torch.nn.attention.flex_attention import create_nested_block_mask
def causal_mask(b, h, q_idx, kv_idx):
return q_idx >= kv_idx
query, key, value, _ = gen_batch(N, E_q, E_k, E_v, device)
block_mask = create_nested_block_mask(causal_mask, 1, 1, query, _compile=True)
query = query.unflatten(-1, [n_heads, D]).transpose(1, 2).detach().requires_grad_()
key = key.unflatten(-1, [n_heads, D]).transpose(1, 2).detach().requires_grad_()
value = value.unflatten(-1, [n_heads, D]).transpose(1, 2).detach().requires_grad_()
out_flex = flex_attention(query, key, value, block_mask=block_mask)
Packed Projection¶
Packed projection is a technique that makes use of the fact that when the input for projection (matrix multiplication) is the same (self-attention), we can pack the projection weights and biases into single tensors. It is especially useful when the individual projections are memory-bound rather than compute-bound. There are two examples we will demonstrate here:

- Input projection for MultiheadAttention
- SwiGLU activation in the feed-forward network of a transformer layer
Input projection for MultiheadAttention¶
When doing self-attention, the query, key, and value are the same tensor. Each of these tensors is projected with a Linear(E_q, E_total) layer. Instead, we can pack these into one layer, which is what we do in the MultiheadAttention layer above.
Let us compare the performance of the packed projection against the usual method:
class InputProjection(nn.Module):
def __init__(self, E_q, E_total, bias=False, device=None, dtype=None):
factory_kwargs = {"device": device, "dtype": dtype}
super().__init__()
self.q_proj = nn.Linear(E_q, E_total, bias=bias, **factory_kwargs)
self.k_proj = nn.Linear(E_q, E_total, bias=bias, **factory_kwargs)
self.v_proj = nn.Linear(E_q, E_total, bias=bias, **factory_kwargs)
def forward(self, x):
return self.q_proj(x), self.k_proj(x), self.v_proj(x)
class PackedInputProjection(nn.Module):
def __init__(self, E_q, E_total, bias=False, device=None, dtype=None):
factory_kwargs = {"device": device, "dtype": dtype}
super().__init__()
self.packed_proj = nn.Linear(E_q, E_total * 3, bias=bias, **factory_kwargs)
def forward(self, query):
return torch.chunk(self.packed_proj(query), 3, dim=-1)
B, D, dtype = 256, 8192, torch.bfloat16
torch.set_float32_matmul_precision("high")
in_proj = torch.compile(InputProjection(D, D, device="cuda", dtype=torch.bfloat16))
packed_in_proj = torch.compile(
PackedInputProjection(D, D, device="cuda", dtype=torch.bfloat16)
)
q, _, _, sequence_lengths = gen_batch(B, D, D, D, device="cuda", dtype=torch.bfloat16)
# warmup
in_proj(q)
packed_in_proj(q)
# benchmark
(q_out, k_out, v_out), time, _ = benchmark(in_proj, q)
(q_out, k_out, v_out), time_packed, _ = benchmark(packed_in_proj, q)
# On my A100 prints 1.05x speedup
print(
f"InputProjection: {time:5f} s, PackedInputProjection: {time_packed:5f} s, speedup: {time/time_packed:.2f}x"
)
SwiGLU feed-forward network of a transformer layer¶
The Swish-Gated Linear Unit (SwiGLU) is a non-linear activation function that is increasingly popular in the feed-forward networks of transformer layers (for example, Llama). A feed-forward network with SwiGLU activation is defined as:
class SwiGLUFFN(nn.Module):
def __init__(
self,
dim,
hidden_dim,
multiple_of,
ffn_dim_multiplier=None,
device=None,
dtype=None,
):
factory_kwargs = {"device": device, "dtype": dtype}
super().__init__()
hidden_dim = int(2 * hidden_dim / 3)
# custom dim factor multiplier
if ffn_dim_multiplier is not None:
hidden_dim = int(ffn_dim_multiplier * hidden_dim)
hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)
self.w1 = nn.Linear(dim, hidden_dim, bias=False, **factory_kwargs)
self.w2 = nn.Linear(hidden_dim, dim, bias=False, **factory_kwargs)
self.w3 = nn.Linear(dim, hidden_dim, bias=False, **factory_kwargs)
def forward(self, x):
return self.w2(F.silu(self.w1(x)) * self.w3(x))
An alternative way of implementing this that uses packed projection is:
class PackedSwiGLUFFN(nn.Module):
def __init__(
self,
dim,
hidden_dim,
multiple_of,
ffn_dim_multiplier=None,
device=None,
dtype=None,
):
factory_kwargs = {"device": device, "dtype": dtype}
super().__init__()
hidden_dim = int(2 * hidden_dim / 3)
# custom dim factor multiplier
if ffn_dim_multiplier is not None:
hidden_dim = int(ffn_dim_multiplier * hidden_dim)
hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)
self.w13 = nn.Linear(dim, 2 * hidden_dim, bias=False, **factory_kwargs)
self.w2 = nn.Linear(hidden_dim, dim, bias=False, **factory_kwargs)
def forward(self, x):
x1, x3 = torch.chunk(self.w13(x), 2, dim=-1)
return self.w2(F.silu(x1) * x3)
We can compare the performance of the two implementations as follows. Depending on your hardware, you might see different results. On an A100 I saw a 1.12x speedup for D=128.
D = 128
swigluffn = torch.compile(SwiGLUFFN(D, D * 4, 256, device="cuda", dtype=torch.bfloat16))
packed_swigluffn = torch.compile(
PackedSwiGLUFFN(D, D * 4, 256, device="cuda", dtype=torch.bfloat16)
)
q, _, _, sentence_lengths = gen_batch(D, D, D, D, device="cuda", dtype=torch.bfloat16)
# warmup
swigluffn(q)
packed_swigluffn(q)
# benchmark
_, time, _ = benchmark(swigluffn, q)
_, time_packed, _ = benchmark(packed_swigluffn, q)
# On my A100 prints 1.08x speedup
print(
f"SwiGLUFFN: {time} s, PackedSwiGLUFFN: {time_packed} s, speedup: {time/time_packed:.2f}x"
)
Extended examples¶
We intend to update this tutorial to demonstrate more examples of how to use the various performant building blocks, such as KV-Caching, Grouped Query Attention, and more. Further, there are several good examples of using various performant building blocks to implement various transformer architectures.
Conclusion¶
In this tutorial, we have introduced the low-level building blocks PyTorch provides for writing transformer layers and demonstrated examples of how to compose them. We hope this tutorial has educated the reader on the ease with which flexible and performant transformer layers can be implemented by users of PyTorch.