Will Keras 3 support Intel GPU with oneAPI #20497

Open
HaroldYin1024 opened this issue Nov 15, 2024 · 6 comments

@HaroldYin1024

HaroldYin1024 commented Nov 15, 2024

Dear Keras Team,

The latest version of PyTorch, 2.5, already supports Intel GPUs. Users can run PyTorch on an Intel GPU with minimal code changes:

# CUDA CODE
tensor = torch.tensor([1.0, 2.0]).to("cuda")

# CODE for Intel GPU
tensor = torch.tensor([1.0, 2.0]).to("xpu")

Will the Keras team follow up to support the Intel oneAPI ecosystem? Thank you very much!

@fchollet
Member

Then Keras with the PyTorch backend should already be working with the XPU device, right? Did you try it?

In general, Keras only executes computation via a backend, so support for new devices happens at the level of the backend, not at the level of Keras itself. Keras code is agnostic to CPU/GPU/TPU/etc., aside from automatic device placement and device-count-related logic (in the distribution API).
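
As a quick way to try this, a minimal sketch with the torch backend might look like the following (this assumes PyTorch 2.5+ with the Intel XPU prerequisites installed; where the tensor ends up depends on the backend's default device selection):

import os
os.environ["KERAS_BACKEND"] = "torch"  # must be set before importing keras

import torch
import keras

print(torch.xpu.is_available())  # True when the Intel XPU runtime is set up (PyTorch 2.5+)
x = keras.ops.ones((2, 3))       # with the torch backend this is a torch.Tensor
print(x.device)                  # shows which device the backend placed the tensor on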

@HaroldYin1024
Author

> Then Keras with the PyTorch backend should already be working with the XPU device, right? Did you try it?

I see. I didn't find it in the documentation, so I thought it wasn't supported. Thank you for the information; I will try it later.

@HaroldYin1024
Author

> Then Keras with the PyTorch backend should already be working with the XPU device, right? Did you try it?

Dear @fchollet,
I have tried training MNIST with Keras 3.0 using Torch as the backend, but the Intel GPU is not utilized during training. The issue remains.
I can confirm that I have installed all the prerequisites for PyTorch on Intel XPU, and I can successfully train MNIST on the GPU with PyTorch alone. So I suspect the problem is on Keras's side.

HaroldYin1024 reopened this Nov 16, 2024
@HaroldYin1024
Author

HaroldYin1024 commented Nov 16, 2024

@fchollet

I have tested this by editing the source code in /keras/src/backend/torch/core.py: I replaced all occurrences of "cuda" with "xpu", and the Intel GPU was then successfully utilized during my training.

My PyTorch version is 2.5. This latest version provides new APIs for Intel GPUs, such as torch.xpu.is_available() and .to("xpu").

Just wanted to provide the Keras team with this information; I hope a permanent solution will be implemented in a later version. Thanks.
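
For illustration only, the kind of device-selection logic this points at could look roughly like the sketch below (the helper name _default_torch_device is hypothetical; this is not the actual contents of /keras/src/backend/torch/core.py):

import torch

def _default_torch_device():
    # Hypothetical priority order: CUDA first, then Intel XPU, then Apple MPS, then CPU.
    if torch.cuda.is_available():
        return "cuda"
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"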

@fchollet
Member

@HaroldYin1024 we can modify the backend to set the XPU as the default device if it is available. We already do this for the MPS device on Macs. Do you see potential cases where one would have both XPU and CUDA available at the same time?

@fchollet
Member

@HaroldYin1024 I have made this change, so you can try installing Keras at HEAD and see if that works for you.
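
After installing Keras at HEAD from the keras-team/keras repository, a quick sanity check could be something along these lines (a sketch only; it just reports where the torch backend places a fresh tensor):

import os
os.environ["KERAS_BACKEND"] = "torch"

import keras

x = keras.ops.zeros((4, 4))
print(x.device)  # expected to report an XPU device if the default-device change works and an Intel GPU is present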
