
PyTorch ROCm

Christian Bayle requested to merge bayle/pytorch:pytorch-rocm into master

Here is how I filled the gap to build pytorch-rocm. This builds with experimental.

Unfortunately, using it and running this small basic script:

cat ./test-torch.py
---------
import torch
print("===== Torch =====")
print("Torch CUDA availability:",torch.cuda.is_available())
print("Version CUDA:",torch.version.cuda)
print("CUDA device count:",torch.cuda.device_count())
print("version:",torch.__version__)
print("CUDA current device:",torch.cuda.current_device())
print("Device Name:",torch.cuda.get_device_name(torch.cuda.current_device()))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device:",device)
print("Random Table To:")
print(torch.rand(2, 5).to(device))
---------
export HSA_OVERRIDE_GFX_VERSION=11.0.0
python3 ./test-torch.py 
===== Torch =====
Torch CUDA availability: False
Version CUDA: None
CUDA device count: 0
version: 2.6.0+debian
Traceback (most recent call last):
  File "/home/chris/git/github.com/cbayle/InvokeAI/uv/./test-torch.py", line 8, in <module>
    print("CUDA current device:",torch.cuda.current_device())
                                 ~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/usr/lib/python3/dist-packages/torch/cuda/__init__.py", line 971, in current_device
    _lazy_init()
    ~~~~~~~~~~^^
  File "/usr/lib/python3/dist-packages/torch/cuda/__init__.py", line 310, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
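For what it's worth, a ROCm-enabled build reports its HIP version through torch.version.hip rather than torch.version.cuda, so a small check along these lines (just a sketch) can tell a CPU-only package apart from a ROCm package that merely fails to see the GPU:

---------
import torch

# On a ROCm build, torch.version.hip is a version string and
# torch.version.cuda is None; on a CPU-only build both are None.
print("HIP version:", torch.version.hip)
print("CUDA version:", torch.version.cuda)

if torch.version.hip is None and torch.version.cuda is None:
    print("Package was built without any GPU backend (CPU only)")
elif torch.version.hip is not None and not torch.cuda.is_available():
    print("ROCm build, but no usable GPU detected at runtime")
---------

Given the AssertionError above, I would expect torch.version.hip to be None here, which would suggest the ROCm backend was not compiled in at all.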

It looks like something is missing. Maybe

export PYTORCH_ROCM_ARCH=gfx1100

is not right for my gfx1102? I'll try a rebuild ...
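To double-check which gfx target the card actually reports, something like this should work (a sketch that assumes the rocminfo utility from the ROCm stack is installed):

---------
import re
import subprocess

# rocminfo prints the ISA name of each GPU agent, e.g. gfx1102.
# PYTORCH_ROCM_ARCH has to cover that target (or a compatible one
# selected via HSA_OVERRIDE_GFX_VERSION) for kernels to load.
out = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
print("Detected gfx targets:", sorted(set(re.findall(r"gfx[0-9a-f]+", out))))
---------

Since gfx1102 is RDNA3 like gfx1100, HSA_OVERRIDE_GFX_VERSION=11.0.0 is the usual workaround, but it can only help if gfx1100 kernels were actually built into the package.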

/usr/lib/libtorch-test/KernelFunction_test passes fine.

I have attached the full build log in case you see something obvious: pytorch-rocm_2.6.0+dfsg-3_amd64.build
