
[CI] Replace "openai/triton" with "triton-lang/triton" (#3900)

Jokeren authored May 13, 2024
1 parent a9f7506 commit d7c8b3d
Showing 17 changed files with 27 additions and 27 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/integration-tests.yml
@@ -103,7 +103,7 @@ jobs:
id: set-matrix
if: env.enable_integration == 'true'
run: |
-          if [ x"${{ github.repository }}" == x"openai/triton" ]; then
+          if [ x"${{ github.repository }}" == x"triton-lang/triton" ]; then
echo '::set-output name=matrix-CUDA::[["self-hosted", "A100"], ["self-hosted", "H100"]]'
echo '::set-output name=matrix-HIP::[["self-hosted", "gfx90a"]]'
else
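The guard in this workflow step uses a classic portability idiom: both operands of the comparison are prefixed with `x`, which keeps old `[` implementations from misparsing empty or dash-leading values. A minimal standalone sketch (here `REPO` is just a placeholder standing in for `${{ github.repository }}`; the real CI context is not available outside Actions, and POSIX `=` is used in place of the bash-only `==` seen in the workflow):

```shell
# Placeholder for ${{ github.repository }}; not a real CI value.
REPO="triton-lang/triton"

# The leading "x" on both sides protects `[` against empty or
# dash-leading operands on old shells.
if [ x"$REPO" = x"triton-lang/triton" ]; then
  echo "runners: self-hosted A100/H100"   # prints this branch
else
  echo "runners: ubuntu-latest"
fi
```

With modern shells `[ "$REPO" = "..." ]` is equally safe, but the `x` prefix is harmless and survives copy-paste into older environments.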
2 changes: 1 addition & 1 deletion .github/workflows/integration-tests.yml.in
@@ -111,7 +111,7 @@ jobs:
id: set-matrix
if: env.enable_integration == 'true'
run: |
-          if [ x"${{ github.repository }}" == x"openai/triton" ]; then
+          if [ x"${{ github.repository }}" == x"triton-lang/triton" ]; then
echo '::set-output name=matrix-CUDA::[["self-hosted", "A100"], ["self-hosted", "H100"]]'
echo '::set-output name=matrix-HIP::[["self-hosted", "gfx90a"]]'
else
6 changes: 3 additions & 3 deletions .github/workflows/llvm-build.yml
@@ -287,23 +287,23 @@ jobs:
${{ github.workspace }}/llvm-*-${{ matrix.config.target-os }}-${{ matrix.config.arch }}.tar.gz
- name: Azure Login
-        if: ${{ (github.repository == 'openai/triton') }}
+        if: ${{ (github.repository == 'triton-lang/triton') }}
uses: azure/login@v2
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

- name: Upload LLVM Artifacts to Azure
-        if: ${{ (github.repository == 'openai/triton') }}
+        if: ${{ (github.repository == 'triton-lang/triton') }}
run: |
az storage blob upload --account-name tritonlang --auth-mode login --container-name llvm-builds --file "${{ env.llvm_install_dir }}.tar.gz" --name "${{ env.llvm_install_dir }}.tar.gz" --overwrite
URL=$(az storage blob url --account-name tritonlang --auth-mode login --container-name llvm-builds --name "${{ env.llvm_install_dir }}.tar.gz")
echo "Blob URL: ${URL}"
- name: Azure Logout
-        if: ${{ (github.repository == 'openai/triton') }}
+        if: ${{ (github.repository == 'triton-lang/triton') }}
run: |
az logout
az cache purge
2 changes: 1 addition & 1 deletion .github/workflows/test-backends.yml
@@ -16,7 +16,7 @@ jobs:
- name: Prepare runner matrix
id: set-matrix
run: |
-          if [ x"${{ github.repository }}" == x"openai/triton" ]; then
+          if [ x"${{ github.repository }}" == x"triton-lang/triton" ]; then
echo '::set-output name=matrix-optional::[["self-hosted", "gfx90a"], ["self-hosted", "arc770"]]'
else
echo '::set-output name=matrix-optional::["ubuntu-latest"]'
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -64,4 +64,4 @@ We are committed to accepting functional bug fixes that meet our quality standar

## Controversial Changes

-More controversial design changes (e.g., changes in our IRs/APIs/Passes) are evaluated on a case-by-case basis under the subjective judgment of core maintainers. While it is possible for contributors to propose and land deep design changes upstream (see https://github.com/openai/triton/pull/1305), the community should expect such occurrences to be relatively rare.
+More controversial design changes (e.g., changes in our IRs/APIs/Passes) are evaluated on a case-by-case basis under the subjective judgment of core maintainers. While it is possible for contributors to propose and land deep design changes upstream (see https://github.com/triton-lang/triton/pull/1305), the community should expect such occurrences to be relatively rare.
8 changes: 4 additions & 4 deletions README.md
@@ -6,7 +6,7 @@ We're hiring! If you are interested in working on Triton at OpenAI, we have role

| **`Documentation`** | **`Nightly Wheels`** |
|-------------------- | -------------------- |
-| [![Documentation](https://github.com/openai/triton/actions/workflows/documentation.yml/badge.svg)](https://triton-lang.org/) | [![Wheels](https://github.com/openai/triton/actions/workflows/wheels.yml/badge.svg?branch=release/2.0.x)](https://github.com/openai/triton/actions/workflows/wheels.yml) |
+| [![Documentation](https://github.com/triton-lang/triton/actions/workflows/documentation.yml/badge.svg)](https://triton-lang.org/) | [![Wheels](https://github.com/triton-lang/triton/actions/workflows/wheels.yml/badge.svg?branch=release/2.0.x)](https://github.com/triton-lang/triton/actions/workflows/wheels.yml) |


# Triton
@@ -35,7 +35,7 @@ pip install -U --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/
# Install from source

```
-git clone https://github.com/openai/triton.git;
+git clone https://github.com/triton-lang/triton.git;
cd triton;
pip install ninja cmake wheel; # build-time dependencies
@@ -45,7 +45,7 @@ pip install -e python
Or with a virtualenv:

```
-git clone https://github.com/openai/triton.git;
+git clone https://github.com/triton-lang/triton.git;
cd triton;
python -m venv .venv --prompt triton;
@@ -208,7 +208,7 @@ Version 2.0 is out! New features include:

# Contributing

-Community contributions are more than welcome, whether it be to fix bugs or to add new features at [github](https://github.com/openai/triton/). For more detailed instructions, please visit our [contributor's guide](CONTRIBUTING.md).
+Community contributions are more than welcome, whether it be to fix bugs or to add new features at [github](https://github.com/triton-lang/triton/). For more detailed instructions, please visit our [contributor's guide](CONTRIBUTING.md).


# Compatibility
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -158,7 +158,7 @@ def documenter(app, obj, parent):
'filename_pattern': '',
# TODO: Re-enable the grouped-gemm tutorial. It currently hits this
# assertion:
-    # https://github.com/openai/triton/blob/main/lib/Dialect/TritonNvidiaGPU/Transforms/FenceInsertion.cpp#L127
+    # https://github.com/triton-lang/triton/blob/main/lib/Dialect/TritonNvidiaGPU/Transforms/FenceInsertion.cpp#L127
'ignore_pattern': r'(__init__\.py|11.*.py)',
'within_subsection_order': FileNameSortKey,
'reference_url': {
6 changes: 3 additions & 3 deletions docs/getting-started/installation.rst
@@ -2,7 +2,7 @@
Installation
============

-For supported platform/OS and supported hardware, review the `Compatibility <https://github.com/openai/triton?tab=readme-ov-file#compatibility>`_ section on Github.
+For supported platform/OS and supported hardware, review the `Compatibility <https://github.com/triton-lang/triton?tab=readme-ov-file#compatibility>`_ section on Github.

--------------------
Binary Distributions
@@ -35,14 +35,14 @@ You can install the Python package from source by running the following commands

.. code-block:: bash
-  git clone https://github.com/openai/triton.git;
+  git clone https://github.com/triton-lang/triton.git;
cd triton/python;
pip install ninja cmake wheel; # build-time dependencies
pip install -e .
Note that, if llvm is not present on your system, the setup.py script will download the official LLVM static libraries and link against that.

-For building with a custom LLVM, review the `Building with a custom LLVM <https://github.com/openai/triton?tab=readme-ov-file#building-with-a-custom-llvm>`_ section on Github.
+For building with a custom LLVM, review the `Building with a custom LLVM <https://github.com/triton-lang/triton?tab=readme-ov-file#building-with-a-custom-llvm>`_ section on Github.

You can then test your installation by running the unit tests:

2 changes: 1 addition & 1 deletion docs/index.rst
@@ -67,4 +67,4 @@ Check out the following documents to learn more about Triton and how it compares
programming-guide/chapter-2/related-work
programming-guide/chapter-3/debugging

-.. _Triton: https://github.com/openai/triton
+.. _Triton: https://github.com/triton-lang/triton
8 changes: 4 additions & 4 deletions docs/meetups/08-22-2023/notes.md
@@ -31,11 +31,11 @@ Recording link [here](https://drive.google.com/file/d/19Nnc0i7zUyn-ni2RSFHbPHHiP
- Community can start with the latest stable branch and rebase 3rd party plugin on top of that. OAI has no resources to commit to, but community can contribute.
3. Linalg updates
- Discussion on Github for Linalg as a middle layer between the language and target hardware. Includes support for block pointers and modulo operators.
-  - Please join the conversation [here](https://github.com/openai/triton/discussions/1842)
+  - Please join the conversation [here](https://github.com/triton-lang/triton/discussions/1842)
- Branch pushed is behind the tip, will work on getting it caught up on the tip.
4. Intel GPU Backend status update.
-  - Please refer to slides [here](https://github.com/openai/triton/blob/main/docs/meetups/Intel%20XPU%20Backend%20for%20Triton%20-%20Update%20-%200823.pptx)
+  - Please refer to slides [here](https://github.com/triton-lang/triton/blob/main/docs/meetups/Intel%20XPU%20Backend%20for%20Triton%20-%20Update%20-%200823.pptx)
5. Intel working on the CPU backend for Triton.
-  - Please refer to slides [here](https://github.com/openai/triton/blob/main/docs/meetups/Intel%20XPU%20Backend%20for%20Triton%20-%20Update%20-%200823.pptx)
+  - Please refer to slides [here](https://github.com/triton-lang/triton/blob/main/docs/meetups/Intel%20XPU%20Backend%20for%20Triton%20-%20Update%20-%200823.pptx)
6. AMD updates
-  - Please refer to slides [here](https://github.com/openai/triton/blob/main/docs/meetups/Triton_AMD_update_0823.pdf).
+  - Please refer to slides [here](https://github.com/triton-lang/triton/blob/main/docs/meetups/Triton_AMD_update_0823.pdf).
2 changes: 1 addition & 1 deletion docs/programming-guide/chapter-3/debugging.rst
@@ -5,7 +5,7 @@ Debugging Triton
This tutorial provides guidance for debugging Triton programs.
It is mostly documented for Triton users.
Developers interested in exploring Triton's backend, including MLIR code transformation and LLVM code generation,
-can refer to this `section <https://github.com/openai/triton?tab=readme-ov-file#tips-for-hacking>`_ to explore debugging options.
+can refer to this `section <https://github.com/triton-lang/triton?tab=readme-ov-file#tips-for-hacking>`_ to explore debugging options.

------------------------------------
Using Triton's Debugging Operations
2 changes: 1 addition & 1 deletion lib/Dialect/TritonGPU/Transforms/AccelerateMatmul.cpp
@@ -85,7 +85,7 @@ warpsPerTileV2(tt::DotOp dotOp, const ArrayRef<int64_t> shape, int numWarps) {
shapePerWarp[rank - 2] = 16;
// TODO (@daadaada): double-check.
// original logic in
-  // https://github.com/openai/triton/blob/master/lib/codegen/analysis/layout.cc#L252
+  // https://github.com/triton-lang/triton/blob/master/lib/codegen/analysis/layout.cc#L252
// seems buggy for shape = [32, 16] ?
do {
if (ret[0] * ret[1] >= numWarps)
2 changes: 1 addition & 1 deletion python/setup.py
@@ -578,7 +578,7 @@ def get_install_requires():
zip_safe=False,
# for PyPI
keywords=["Compiler", "Deep Learning"],
-        url="https://github.com/openai/triton/",
+        url="https://github.com/triton-lang/triton/",
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
2 changes: 1 addition & 1 deletion python/test/regression/test_cast_matmul.py
@@ -1,5 +1,5 @@
"""
-issue: https://github.com/openai/triton/issues/2523
+issue: https://github.com/triton-lang/triton/issues/2523
fused type convert and matmul, base on triton matmul, the different with matmul:
1. force C's dtype=dot_out_dtype to ["float16", "float32"]
2. accept A and B with dtype=["float32", "float64"]
2 changes: 1 addition & 1 deletion python/test/unit/language/test_core.py
@@ -5124,7 +5124,7 @@ def test_fp8_dot_acc(in_type_str, low_precision_acc, device):
def test_enable_fp_fusion(enable_fp_fusion, device):
if is_hip():
pytest.skip(
-            'test_enable_fp_fusion for HIP currently broken in https://github.com/openai/triton. Use https://github.com/ROCmSoftwarePlatform/triton'
+            'test_enable_fp_fusion for HIP currently broken in https://github.com/triton-lang/triton. Use https://github.com/ROCmSoftwarePlatform/triton'
)

# Sequential multiply add can be fused by backend
2 changes: 1 addition & 1 deletion test/TritonGPU/loop-pipeline.mlir
@@ -1146,7 +1146,7 @@ module attributes {"triton_gpu.target" = "cuda:80", "triton_gpu.num-ctas" = 1 :
%51 = tt.addptr %50, %47 : tensor<64x256x!tt.ptr<i8>, #blocked>, tensor<64x256xi32, #blocked>

// Check that both loads in the loop are pipelined.
-    // TODO(jlebar): https://github.com/openai/triton/pull/3472 disables the
+    // TODO(jlebar): https://github.com/triton-lang/triton/pull/3472 disables the
// relevant optimization. Once we've reenabled it, we can uncomment this test.
// CHECK: scf.for
// COM: CHECK-NOT: tt.load
2 changes: 1 addition & 1 deletion third_party/proton/README.md
@@ -9,7 +9,7 @@ Proton is a lightweight profiler for Triton, designed to be used for code writte
The following command installs the latest version of Proton.

```bash
-git clone https://github.com/openai/triton
+git clone https://github.com/triton-lang/triton
cd triton/python
pip install .
```
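The commit's 27 substitutions across 17 files are a single mechanical string rename, the kind of edit typically produced with `grep`/`sed` rather than by hand. A self-contained sketch of that transformation, demonstrated on a temporary file (the file content here is illustrative, not taken from the repo; on macOS/BSD, `sed -i` needs an explicit suffix argument, e.g. `sed -i ''`):

```shell
# Demonstrate the rename on a scratch file instead of a real checkout.
tmp=$(mktemp)
printf 'git clone https://github.com/openai/triton.git\n' > "$tmp"

# sed's s|old|new|g replaces every occurrence on every line, in place.
sed -i 's|openai/triton|triton-lang/triton|g' "$tmp"

cat "$tmp"   # -> git clone https://github.com/triton-lang/triton.git
rm -f "$tmp"
```

In an actual checkout, something like `git grep -l 'openai/triton' | xargs sed -i 's|openai/triton|triton-lang/triton|g'` applies the same substitution to every tracked file that still mentions the old org, and a follow-up `git grep 'openai/triton'` confirms nothing was missed.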
