Commit

Update llmfoundry/models/mpt/modeling_mpt.py
Co-authored-by: Vitaliy Chiley <[email protected]>
ShashankMosaicML and vchiley committed Nov 17, 2023
1 parent a560f31 commit a70f05e
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions llmfoundry/models/mpt/modeling_mpt.py
@@ -147,8 +147,8 @@ def gen_attention_mask_in_length(sequence_id: Union[None, torch.Tensor], S: int,
     return query_attention_mask_in_length, key_attention_mask_in_length
 
 def apply_sequence_id(attn_bias: torch.Tensor,
-                      sequence_id: torch.LongTensor,
-                      max_seq_len: int) -> torch.Tensor:
+                      sequence_id: torch.LongTensor,
+                      max_seq_len: int) -> torch.Tensor:
     seq_len = sequence_id.shape[-1]
     if seq_len > max_seq_len:
         raise ValueError(

(The two additions and two deletions differ only in leading whitespace, which this copy does not preserve.)
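The diff shows only the signature of `apply_sequence_id` and the start of its length check; the rest of the body is truncated. A minimal self-contained sketch of such a helper, assuming the standard packed-sequence behavior (masking attention between tokens with different `sequence_id` values — that masking logic is an assumption, not taken from this diff), might look like:

```python
import torch

def apply_sequence_id(attn_bias: torch.Tensor,
                      sequence_id: torch.LongTensor,
                      max_seq_len: int) -> torch.Tensor:
    seq_len = sequence_id.shape[-1]
    if seq_len > max_seq_len:
        raise ValueError(
            f'sequence_id length {seq_len} exceeds max_seq_len {max_seq_len}')
    # Hypothetical body: block attention between positions whose sequence_id
    # differs, so packed sequences cannot attend to one another.
    cannot_attend = torch.logical_not(
        torch.eq(sequence_id.view(-1, seq_len, 1),
                 sequence_id.view(-1, 1, seq_len))).unsqueeze(1)
    min_val = torch.finfo(attn_bias.dtype).min
    attn_bias = attn_bias[..., :seq_len, :seq_len]
    return attn_bias.masked_fill(cannot_attend, min_val)
```

With `sequence_id = [[0, 0, 1, 1]]`, positions in the first packed sequence keep a bias of 0 toward each other but receive a large negative bias toward positions in the second, and vice versa.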
