
Mac OS: Bringing metal support #14

Open · baileyheading opened this issue Jul 14, 2024 · 51 comments

@baileyheading (Contributor) commented Jul 14, 2024

Work is underway to get metal working for this project. Keen to get help from anyone interested.

TODO:

  • add conv_transpose_1d for Metal (already has a CPU implementation)
  • modify the ggml_pad backend and add ggml_pad_ext for Metal
  • add pad_reflect_1d to Metal (already has a CPU implementation)
  • add unfold_1d to Metal (already has a CPU implementation)

Existing PRs from @balisujohn are a decent example to follow for the changes already made to CUDA:
https://github.com/ggerganov/ggml/pulls/balisujohn

@baileyheading (Contributor Author) commented Jul 14, 2024

That was fast! Before, it had a problem haha.

gpt_vocab_init: loading vocab from '../models/tokenizer.json'
gpt_vocab_init: vocab size = 255
autoregressive_model_load: loading model from '../models/ggml-model.bin'
autoregressive_model_load: ggml tensor size    = 432 bytes
autoregressive_model_load: backend buffer size = 1889.29 MB
autoregressive_model_load: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Max
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 103079.22 MB
autoregressive_model_load: model size  =  1510.54 MB
ggml_metal_free: deallocating
diffusion_model_load: loading model from '../models/ggml-diffusion-model.bin'
diffusion_model_load: ggml tensor size    = 432 bytes
diffusion_model_load: backend buffer size = 689.28 MB
diffusion_model_load: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Max
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 103079.22 MB
diffusion_model_load: model size  =   688.49 MB
GGML_ASSERT: /Users/user_name/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/user_name/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0

@baileyheading (Contributor Author)

Some lldb (debug) info

GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
Process 38817 stopped
* thread #5, queue = 'ggml-metal', stop reason = signal SIGABRT
    frame #0: 0x000000018e276a60 libsystem_kernel.dylib`__pthread_kill + 8
libsystem_kernel.dylib`:
->  0x18e276a60 <+8>:  b.lo   0x18e276a80               ; <+40>
    0x18e276a64 <+12>: pacibsp 
    0x18e276a68 <+16>: stp    x29, x30, [sp, #-0x10]!
    0x18e276a6c <+20>: mov    x29, sp
Target 0: (tortoise) stopped.
(lldb) tb
No breakpoints currently set.
(lldb) bt
* thread #5, queue = 'ggml-metal', stop reason = signal SIGABRT
  * frame #0: 0x000000018e276a60 libsystem_kernel.dylib`__pthread_kill + 8
    frame #1: 0x000000018e2aec20 libsystem_pthread.dylib`pthread_kill + 288
    frame #2: 0x000000018e1bba30 libsystem_c.dylib`abort + 180
    frame #3: 0x0000000100677378 libggml.dylib`__ggml_metal_graph_compute_block_invoke.cold.40 + 88
    frame #4: 0x000000010066287c libggml.dylib`__ggml_metal_graph_compute_block_invoke + 21672
    frame #5: 0x000000018e0fe428 libdispatch.dylib`_dispatch_client_callout2 + 20
    frame #6: 0x000000018e112850 libdispatch.dylib`_dispatch_apply_invoke3 + 336
    frame #7: 0x000000018e0fe3e8 libdispatch.dylib`_dispatch_client_callout + 20
    frame #8: 0x000000018e0ffc68 libdispatch.dylib`_dispatch_once_callout + 32
    frame #9: 0x000000018e1119b8 libdispatch.dylib`_dispatch_apply_redirect_invoke + 260
    frame #10: 0x000000018e0fe3e8 libdispatch.dylib`_dispatch_client_callout + 20
    frame #11: 0x000000018e110080 libdispatch.dylib`_dispatch_root_queue_drain + 864
    frame #12: 0x000000018e1106b8 libdispatch.dylib`_dispatch_worker_thread2 + 156
    frame #13: 0x000000018e2aafd0 libsystem_pthread.dylib`_pthread_wqthread + 228

@baileyheading (Contributor Author) commented Jul 14, 2024

The GGML_ASSERT is from ggml-metal.m, I think. I don't know if that causes the abort or something else.

EDIT: yep, the asserts cause the abort. That means there is probably an issue somewhere, but I can at least proceed to the next stages for now by skipping over them.

@baileyheading (Contributor Author)

@balisujohn so this is what you were referring to about the unsupported ggml operations? It's very fast, at least up until the problems.

ggml_metal_graph_compute_block_invoke: error: unsupported op 'UPSCALE'
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:918: !"unsupported op"

@baileyheading (Contributor Author) commented Jul 14, 2024

It says it's not supported, but it looks supported to me!

[screenshot]

EDIT: right, _EXT is different

@balisujohn (Owner)

This is actually quite suspicious, ggml_upscale_ext should use GGML_OP_UPSCALE

@baileyheading (Contributor Author)

This is actually quite suspicious, ggml_upscale_ext should use GGML_OP_UPSCALE

any thoughts on where to look?

@baileyheading (Contributor Author) commented Jul 15, 2024

Actually! I'm comparing the CUDA code
CUDA

        case GGML_OP_UPSCALE:
            ggml_cuda_op_upscale(ctx, dst);
            break;

METAL
[screenshot]

Metal checks for support with a function that returns a bool, but I tried setting it to "true" already with no luck.

EDIT: the Metal code actually has this:
[screenshot]
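
As a rough orientation, the Metal backend has two places that must know about an op: a supports check that returns a bool, and the dispatch switch in the graph-compute block that actually encodes the kernel. The sketch below is paraphrased and illustrative only, not copied from ggml-metal.m:

    // Illustrative sketch only (not the real ggml-metal.m source); needs "ggml.h".
    // 1) The supports check: if this returns false for an op in the graph,
    //    the "unsupported op" error and GGML_ASSERT seen in the logs fire.
    static bool metal_supports_op_sketch(const struct ggml_tensor * op) {
        switch (op->op) {
            case GGML_OP_UPSCALE:
                return true;
            default:
                return false;
        }
    }

    // 2) The dispatch switch inside the graph-compute block, which selects the
    //    compiled pipeline and encodes the kernel for the op:
    //        case GGML_OP_UPSCALE:
    //            // set pipeline, bind buffers, dispatch threadgroups...
    //            break;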

@baileyheading (Contributor Author) commented Jul 15, 2024

I believe that PAD is also supported

ggml_metal_graph_compute_block_invoke: error: unsupported op 'PAD'
GGML_ASSERT: /Users/baileyheading/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:919: !"unsupported op"
ggml_metal_graph_compute_block_invoke: error: unsupported op 'UPSCALE'
GGML_ASSERT: /Users/baileyheading/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:919: !"unsupported op"
ggml_metal_graph_compute_block_invoke: error: unsupported op 'PAD'
GGML_ASSERT: /Users/baileyheading/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:919: !"unsupported op"

Update: I'm removing some of the GGML_ASSERTs to see whether it's just the detection of support that is broken

@baileyheading (Contributor Author) commented Jul 15, 2024

It seems to me like these numbers are somehow wrong for the Metal implementation

        case GGML_OP_UPSCALE:
            return true;
        case 49:
            return true;

I did some printouts, and apparently the upscale op it wants is 49, which doesn't seem to be what GGML_OP_UPSCALE is defined as. I added 49 for debugging, but I'm gonna swap that op out for it to try and get past the error.

Update: Yeah, it got past that, but then hit an error for a different op that also exists, so I figure there's something wrong with these numbers somewhere.

@balisujohn (Owner) commented Jul 15, 2024

Possibly I made a mistake adding the other new ops. Though I'm not sure why it would work in CUDA and not Metal.

@baileyheading (Contributor Author) commented Jul 15, 2024

@balisujohn they're out by two, which seems consistent with this kind of thing:

dst->op in default case: 49 (what was sent from main.cpp presumably)
dst->op in default case: 51 (Metal's definition of the number)
ggml_metal_graph_compute_block_invoke: error: unsupported op 'UPSCALE'

but if they're only defined in one place, I don't understand how this would happen

@baileyheading (Contributor Author)

I numbered them. What confuses me is that Metal believes 51 is upscale, but the incoming op says it is 49 and has the name "UPSCALE". Where are they defined differently?

[screenshot]

@balisujohn (Owner)

These are out of order with the OP enum, possibly leading to at a minimum the wrong op names being printed. All of the things I linked should reference the same ops in the same order.

https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/src/ggml.c#L2701
https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/include/ggml/ggml.h#L424
https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/src/ggml.c#L2613
[screenshot]
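
To make the off-by-two concrete, here is a made-up sketch of the kind of mismatch being described (the names and numeric values are illustrative, not the real ggml tables): the op enum in ggml.h and the parallel name table in ggml.c must list the same ops in the same order.

    // Made-up example, not the real ggml code.
    enum example_op {
        // ... earlier ops ...
        EXAMPLE_OP_UNFOLD_1D      = 49,  // newly added op
        EXAMPLE_OP_PAD_REFLECT_1D = 50,  // newly added op
        EXAMPLE_OP_UPSCALE        = 51,
    };

    // If the parallel name table was NOT updated with the two new entries,
    // index 49 still holds "UPSCALE" (its old position), so printing the name
    // of op 49 shows 'UPSCALE' even though op 49 is now a different operation --
    // which matches the 49-vs-51 discrepancy in the printouts above.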

@balisujohn (Owner)

(I think the op failing on metal actually isn't upscale)

@baileyheading (Contributor Author)

so what would need to be reordered to see what is actually failing?

@balisujohn (Owner) commented Jul 15, 2024

The other two things I linked need to be added to and reordered to have the same number of elements as https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/include/ggml/ggml.h#L424, and in the same order. A very silly mistake on my end to not have these matching already.

@baileyheading (Contributor Author) commented Jul 15, 2024

Should be adding these two I think

[screenshot]

@balisujohn are they familiar?

@balisujohn (Owner)

yeah that's right, assuming the order is correct those are the two new ops (yet to be upstreamed to ggml proper)

@baileyheading (Contributor Author)

yeah that's right, assuming the order is correct those are the two new ops (yet to be upstreamed to ggml proper)

And I've confirmed they are not part of the Metal code, of course, so these are the ones I need to add. Which other ones did you add?

@balisujohn (Owner) commented Jul 15, 2024

I added a CUDA kernel for conv_transpose_1d; I don't know if there's a Metal implementation, but there was an existing CPU implementation. I modified the ggml_pad backend and added ggml_pad_ext, so the Metal kernel may need to be modified, assuming there is an existing one. I also modified the ggml_get_rows CUDA kernel to fix a bug; it's unlikely the same bug exists on Metal.

@balisujohn (Owner) commented Jul 15, 2024

https://github.com/ggerganov/ggml/pulls/balisujohn

Useful for reference, though it's missing the get_rows change.
The tortoise branch is all of these PRs squashed together plus the get_rows fix.

@baileyheading (Contributor Author)

I've started a TODO list at the top of the issue. Is there a better part of the repo to have such a thing?

@balisujohn (Owner)

I think the top of the issue is fine. Here's a link to join the Discord; a while back someone there expressed interest in Metal, so they might be willing to help: https://discord.gg/q9zufuhc

@baileyheading (Contributor Author) commented Jul 15, 2024

Conversion without understanding is an interesting task haha. Technically, I can run this on the CPU to get expected outputs for different inputs, then translate the code as best I can and test the ops one by one. Have you got a more refined approach you could recommend, @balisujohn, given you've just done this for CUDA?

edit: I did find the tests finally! I guess that's something.

@baileyheading (Contributor Author) commented Jul 15, 2024

This is hardly going to get the desired result, but I do at least know where to make the modifications.

                case GGML_OP_PAD_REFLECT_1D:
                {
                    fprintf(stderr, "%s", "GGML_OP_PAD_REFLECT_1D\n");
                }
                break;
                case GGML_OP_CONV_TRANSPOSE_1D:
                {
                    fprintf(stderr, "%s", "GGML_OP_CONV_TRANSPOSE_1D\n");
                }
                break;
                case GGML_OP_UNFOLD_1D:
                {
                    fprintf(stderr, "%s", "GGML_OP_UNFOLD_1D\n");
                }
                break;

It ran through to the end and made the .wav file (albeit static noise only, haha).
(Obviously it fails the tests.)

GGML_OP_PAD_REFLECT_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D

I see these run a few times. I wonder how far off the time for this run (with those parts of the code missing) is from the time it will take when it's done. It took ~26 seconds on an M3 Max.

If this were to run on a server, would some of this just need to load once? Or do all these steps need to run each time? And is it possible to stream, or is everything done all at once?

@balisujohn (Owner) commented Jul 15, 2024

You could avoid repeating the model loads if streaming, but you will still need to run the autoregressive model, diffusion model, and vocoder for each generation. I am working on making tortoise.cpp faster.

@balisujohn (Owner)

Regarding how to get the ops working: each op has tests in the tests folder. You should be able to define a flag at the top of a particular test file to make it build with Metal, then run that test from the ggml/build/bin folder after calling make in ggml/build. See for example https://github.com/balisujohn/ggml/blob/tortoise/tests/test-conv-transpose-1d.cpp, which is currently set to build with the CUDA backend.
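
For reference, here is a minimal sketch of what that per-test backend toggle might look like; the define names and the helper are assumptions on my part (the real test file may be structured differently), but the ggml_backend_*_init calls are the public ggml-backend API:

    // Hypothetical sketch of a per-test backend toggle (not the actual test file).
    // Requires "ggml-backend.h", plus "ggml-metal.h" / "ggml-cuda.h" for the
    // respective backends.
    #define USE_METAL_BACKEND 1   // assumed flag name; switch per backend

    static ggml_backend_t make_test_backend(void) {
    #if defined(USE_METAL_BACKEND)
        return ggml_backend_metal_init();
    #elif defined(USE_CUDA_BACKEND)
        return ggml_backend_cuda_init(0);  // device 0
    #else
        return ggml_backend_cpu_init();    // CPU fallback
    #endif
    }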

@baileyheading (Contributor Author)

I'll get up to speed with those tests next time I work on the operations.

What do you have planned to make tortoise faster?

@balisujohn (Owner)

There are unnecessary ops which can be removed, in particular some copies from float32 to float16 that can go once ggml_conv_1d supports float32 weight tensors. Also probably some unnecessary transposes, since the weights can be saved already transposed.

@baileyheading (Contributor Author)

On a related note, how fast is it already on CUDA?

@balisujohn (Owner)

[screenshot]

This was a measurement I got on a 1070ti for the full process including model loading.

@baileyheading (Contributor Author)

I got to 98% of diffusion before I hit the unsupported op, in about 26 seconds for "./tortoise" on the M3 Max. How far from the end do you think that point is?

@balisujohn (Owner)

About 5-8 more seconds probably. The vocoder usually is pretty fast.

@balisujohn (Owner)

That's a weird place for it to fail. I'd recommend checking with print statements to see whether it fails before or after the last call to the diffusion graph.

@baileyheading (Contributor Author)

Well, I know the first unsupported op is pad_reflect_1d, so which part of the code is that?

@balisujohn (Owner)

Oh, I was getting "illegal instruction" and "unsupported op" confused; pad_reflect_1d is in the vocoder only.

@baileyheading (Contributor Author)

It must surely have been very close to the end then 😅. Do we really need those ops? 🤣

@balisujohn (Owner)

You can save any tensor you want to wav, but I think without those ops it won't sound very good lol

@balisujohn (Owner)

An interesting ablation would be to try replacing pad_reflect with a normal pad, with all the other ops left the same, to see if it totally ruins the generation.

@baileyheading (Contributor Author)

lol. I'm the kind of programmer who would try that just out of curiosity. I'm currently using Piper for tts and it's very fast, but sounds so fake. Maybe it would still sound better 🤣

@baileyheading (Contributor Author)

In all seriousness though, are these ops similar? That's potentially where I should start for seeing how to write Metal (which I have zero experience with).

@balisujohn (Owner)

Lmao, it actually still works fine on CUDA if you replace it with the line below, and the tests also pass.
Apparently pad_reflect_1d is pretty inconsequential, but I think I'll still keep it to retain consistency with tortoise-tts.

  ggml_tensor * cur = ggml_pad_ext(ctx0, vocoder_noise, 3, 3, 0, 0, 0, 0, 0, 0);

Yeah, I'd definitely look at the pad implementation for Metal, because pad reflect is just padding that mirrors the elements of the tensor, whereas normal pad is padding with a constant value. I'd start by trying to make pad_reflect_1d a working clone of pad, then change the behavior to do a reflect pad instead. You can check https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad1d.html for a description of the intended behavior.
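
For what it's worth, here is a reference-only sketch (plain C, not a Metal kernel and not the ggml CPU code) of the reflect-pad behavior described above, assuming symmetric padding of p elements on each side of a length-n row with p <= n - 1:

    // Reference sketch of 1-D reflection padding.
    // Input [1, 2, 3, 4] with p = 2 gives [3, 2, 1, 2, 3, 4, 3, 2],
    // whereas a constant pad would give [c, c, 1, 2, 3, 4, c, c].
    void pad_reflect_1d_ref(const float * src, float * dst, int n, int p) {
        for (int i = 0; i < n + 2 * p; i++) {
            int j = i - p;                        // index into the unpadded input
            if (j < 0)  j = -j;                   // mirror around the first element
            if (j >= n) j = 2 * (n - 1) - j;      // mirror around the last element
            dst[i] = src[j];
        }
    }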

@baileyheading (Contributor Author) commented Jul 17, 2024

These are the unsupported operations, as per my earlier comments:

GGML_OP_PAD_REFLECT_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D

Do the other two have evil twins (like the reflection one), or are they quite distinct from the existing ggml ops? Regardless, I didn't think to look at the PyTorch docs, thanks; I've only ever used PyTorch to run repos.

@balisujohn (Owner)

No equivalent as far as I know, so those are totally unavoidable for getting a working Metal forward pass. Also, the Metal kernel for pad may be out of date; ggerganov patched it in the PR, but I'm not sure if that's in the tortoise branch.

@baileyheading (Contributor Author) commented Jul 17, 2024

I'm doing a bit of a "can ChatGPT write Metal" experiment. At the very least I've now got kernel functions and things set up, so I'm in a position to work the problem. Things like "stride" and "block_size": I see that for CUDA you set block_size to 256? Is that a performance thing, or is it the value required to get the right result?

with reference to https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html of course

[screenshot]

PS: This code could be way, way off. It's better than nothing at this stage though.

EDIT: my conv_transpose_1d code currently never completes, so I can't tell if unfold gets the right result (it's just static, but skipping conv_transpose_1d could perhaps cause that).
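
In case it helps anyone sanity-check the Metal attempt, here is a reference-only sketch of 1-D unfold semantics in plain C for the simplest case (single row, stride 1, no padding or dilation), loosely following the torch.nn.Unfold description; the actual ggml op's layout and parameters may differ:

    // Reference sketch only; not the ggml CPU or Metal implementation.
    // Each output window w copies k consecutive input elements starting at w.
    void unfold_1d_ref(const float * src, float * dst, int n, int k) {
        int n_windows = n - k + 1;
        for (int w = 0; w < n_windows; w++) {
            for (int j = 0; j < k; j++) {
                dst[w * k + j] = src[w + j];
            }
        }
    }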

@balisujohn (Owner)

So the way CUDA works, you get some number of blocks, all with the same number of threads per block. Generally you will assign a thread to each index of the output tensor, so the computation can be massively parallelized. You will have some extra threads if your output tensor size is not a multiple of the CUDA block size; for these threads you check whether the output index is out of bounds, and if it is, you do nothing. I don't know how much of that transfers to Metal.
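
A tiny sketch of that pattern (generic CUDA, not one of the actual tortoise.cpp/ggml kernels): one thread per output element, a fixed block size, and a bounds check so the extra threads in the last block do nothing.

    // Generic illustration of the one-thread-per-output-element pattern.
    __global__ void example_op_kernel(const float * src, float * dst, int n_out) {
        const int i = blockIdx.x * blockDim.x + threadIdx.x;  // global output index
        if (i >= n_out) {
            return;  // extra thread past the end of the output: do nothing
        }
        dst[i] = src[i];  // placeholder body; a real op computes dst[i] from src
    }

    // Launch: round the grid size up so every output index gets a thread.
    // const int block_size = 256;
    // example_op_kernel<<<(n_out + block_size - 1) / block_size, block_size>>>(src, dst, n_out);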

@balisujohn (Owner)

You can just use whatever block size the other ggml metal kernels use. I think it can affect performance but that is pretty secondary here.

@baileyheading (Contributor Author) commented Jul 17, 2024

You can just use whatever block size the other ggml metal kernels use. I think it can affect performance but that is pretty secondary here.

Thanks for the insights. I now see that various block sizes are defined as constants in the .metal code. I'll look into how CUDA translates to Metal. At some point I can perhaps find someone who actually knows what they're doing (as opposed to me being clueless) to have a look at my progress and provide some feedback.

@balisujohn (Owner)

There is some code in main.cpp for saving tensors to and loading them from disk, and also for comparing tensors to one another. This code might be helpful for confirming that the ops work the same way on Metal as on CUDA. You can fetch intermediate result tensors out of the computational graph after the vocoder computation using ggml_graph_get_tensor, after first tagging them in the graph function with ggml_set_name.
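
For anyone trying this, the pattern is roughly the following; the tensor, variable, and probe names are placeholders (not the actual ones in main.cpp), while ggml_set_name and ggml_graph_get_tensor are existing ggml functions:

    // Inside the graph-building function: tag the tensor you want to inspect.
    ggml_set_name(cur, "probe_after_pad_reflect");   // "cur" and the name are placeholders

    // After the graph has been computed on the Metal backend:
    struct ggml_tensor * probe = ggml_graph_get_tensor(gf, "probe_after_pad_reflect");
    if (probe != NULL) {
        // save it to disk here and diff it against the tensor from a CPU/CUDA run
    }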

@balisujohn (Owner)

For conv_transpose_1d you can make a new PR directly to ggml with the Metal kernel to get feedback. For the ones which are still being added via pending PRs, you can make a PR into my PR branches and I will accept it, which should give you credit in the final PR for each op; you can then get feedback from ggml contributors via the open PR discussions.
