Issues: oneapi-src/oneDNN

Issues list

supported matmul data types (labels: documentation, question; see the matmul sketch after this list)
#2196 opened Oct 31, 2024 by jinz2014
[ARM] Support fp16 data type in JIT Reorder kernel (labels: enhancement, help wanted, platform:cpu-aarch64)
#2185 opened Oct 28, 2024 by dmitry-gorokhov
[BRGEMM Ukernel] Call of execute with empty post op parameter (labels: bug, platform:cpu-x64, sighting)
#2179 opened Oct 22, 2024 by Devjiu
Bug in memory_desc_init_by_tag: Incorrect Differentiation Between Memory Tags abcd and acbd (labels: platform:cpu-aarch64, sighting; see the stride sketch after this list)
#2175 opened Oct 21, 2024 by taoye9
MacOS ci release mode build issue with gcc-14 (labels: platform:cpu-aarch64, sighting)
#2167 opened Oct 15, 2024 by theComputeKid
Extend support for JIT Backward Convolution Operators with ARM SVE 128bit (labels: enhancement, platform:cpu-aarch64)
#2165 opened Oct 14, 2024 by snadampal
[ARM] Support 8bit/4bit weights decompression for Matmul primitive (labels: enhancement, help wanted, platform:cpu-aarch64)
#2081 opened Sep 4, 2024 by dmitry-gorokhov
[ARM] Support 32-bit CPUs within ACL integration (labels: enhancement, help wanted, platform:cpu-aarch64)
#2069 opened Sep 2, 2024 by dmitry-gorokhov
[ARM] Support FP16 post-ops fusion into ACL kernels (labels: enhancement, help wanted, platform:cpu-aarch64)
#2067 opened Aug 30, 2024 by dmitry-gorokhov
Build with SYCL fails using intel/llvm compiler (labels: sighting)
#2035 opened Aug 13, 2024 by dvrogozh
brg:sve_256 fails benchdnn accuracy tests (labels: bug, help wanted, platform:cpu-aarch64)
#2008 opened Jul 24, 2024 by jondea
brgconv:sve_256 uses a lot of memory (labels: help wanted, platform:cpu-aarch64, sighting)
#2007 opened Jul 24, 2024 by jondea
New/other Matrix multiplication algorithm implementation (labels: enhancement, help wanted, platform:cpu-aarch64)
#1971 opened Jun 20, 2024 by vineel96
GPU tests pass when they probably shouldn't (labels: bug, help wanted, platform:gpu-intel)
#1961 opened Jun 13, 2024 by nwnk
Generic OpenCL kernels are broken (labels: enhancement, help wanted, platform:gpu-intel)
#1960 opened Jun 13, 2024 by nwnk
batchnorm requires consistent in- and output mem format_tags (labels: sighting; see the batch-normalization sketch after this list)
#1944 opened Jun 4, 2024 by IngmarVoigt2
[ACL] 3D convolution kernel NEConv3D is not integrated (labels: enhancement, help wanted, platform:cpu-aarch64)
#1908 opened May 10, 2024 by alvoron
GEMM API for efficient LLM inference with W8A16 (labels: enhancement, help wanted, platform:cpu-aarch64)
#1788 opened Jan 20, 2024 by oleotiger
[nvidia|amd] Add missing synchronization (labels: bug, help wanted, platform:gpu-amd, platform:gpu-nvidia)
#1732 opened Oct 3, 2023 by densamoilov
[nvidia] batch normalization primitive fails correctness check (labels: bug, platform:gpu-nvidia)
#1725 opened Sep 14, 2023 by dzarukin
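
For the matmul data-type question (#2196), here is a minimal sketch of how a supported type combination can be probed at runtime. It assumes the oneDNN v3.x C++ API and uses an arbitrary bf16 x bf16 -> f32 problem; shapes and type choices are illustrative, not taken from the issue.

```cpp
#include <iostream>
#include "dnnl.hpp"

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);

    // Arbitrary GEMM sizes, just to have a concrete descriptor to test.
    const memory::dim M = 64, K = 64, N = 64;
    memory::desc src_md({M, K}, memory::data_type::bf16, memory::format_tag::ab);
    memory::desc wei_md({K, N}, memory::data_type::bf16, memory::format_tag::ab);
    memory::desc dst_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

    try {
        // Primitive descriptor creation fails with dnnl::error when no
        // implementation supports this data-type combination on this engine.
        matmul::primitive_desc pd(eng, src_md, wei_md, dst_md);
        std::cout << "bf16 x bf16 -> f32 matmul supported by: "
                  << pd.impl_info_str() << "\n";
    } catch (const error &e) {
        std::cout << "combination not supported: " << e.what() << "\n";
    }
    return 0;
}
```

Since primitive descriptor creation throws when the engine has no matching implementation, the try/catch doubles as a runtime support check for whatever combination the documentation leaves unclear.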
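For the memory-tag report (#2175), a small sketch of why format_tag::abcd and format_tag::acbd must be kept distinct: both describe the same logical 4-D shape but with different physical strides. The shape below is arbitrary and the v3.x C++ API is assumed.

```cpp
#include <iostream>
#include "dnnl.hpp"

int main() {
    using namespace dnnl;

    // Same logical shape for both descriptors (values are arbitrary).
    memory::dims shape = {2, 3, 4, 5};
    memory::desc md_abcd(shape, memory::data_type::f32, memory::format_tag::abcd);
    memory::desc md_acbd(shape, memory::data_type::f32, memory::format_tag::acbd);

    // Dense strides differ: abcd -> {60, 20, 5, 1}, acbd -> {60, 5, 15, 1},
    // so the two descriptors must not be treated as the same layout.
    for (auto s : md_abcd.get_strides()) std::cout << s << " ";
    std::cout << "\n";
    for (auto s : md_acbd.get_strides()) std::cout << s << " ";
    std::cout << "\n";

    // The descriptors describe different layouts, so they should not compare equal.
    std::cout << (md_abcd == md_acbd ? "equal (unexpected)" : "different (expected)")
              << "\n";
    return 0;
}
```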
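For the batchnorm format_tag report (#1944), a hedged sketch that simply asks the library whether it will create a forward batch-normalization primitive with an nchw source and an nhwc destination. Shape, epsilon, and flags are placeholders, and the v3.x C++ API is assumed; this is not code from the issue itself.

```cpp
#include <iostream>
#include "dnnl.hpp"

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);

    // Placeholder N, C, H, W shape; the point is only the mismatched tags.
    memory::dims shape = {1, 16, 8, 8};
    memory::desc src_md(shape, memory::data_type::f32, memory::format_tag::nchw);
    memory::desc dst_md(shape, memory::data_type::f32, memory::format_tag::nhwc);

    try {
        batch_normalization_forward::primitive_desc pd(eng,
                prop_kind::forward_inference, src_md, dst_md, 1.e-5f,
                normalization_flags::use_global_stats);
        std::cout << "mixed nchw/nhwc accepted: " << pd.impl_info_str() << "\n";
    } catch (const error &e) {
        std::cout << "mixed nchw/nhwc rejected: " << e.what() << "\n";
    }
    return 0;
}
```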