Implement FrameBufferBackend for slices #23

Merged: 1 commit merged into bernii:main on Mar 27, 2024

Conversation

@jounathaen (Collaborator)

This is useful, e.g., if the framebuffer is declared by another entity and is passed to the application as a pointer and a size.
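
For reference, here is a rough sketch of what such a slice-based backend could look like. The trait definition below is paraphrased from embedded-graphics-framebuf and may not match the crate's exact signatures; treat it as an illustration of the idea rather than the merged implementation:

```rust
use embedded_graphics::pixelcolor::PixelColor;

// Paraphrased shape of the backend trait (an assumption; see the crate's
// `backends` module for the authoritative definition).
pub trait FrameBufferBackend {
    type Color: PixelColor;
    fn set(&mut self, index: usize, color: Self::Color);
    fn get(&self, index: usize) -> Self::Color;
    fn nr_elements(&self) -> usize;
}

// Back the framebuffer with a plain mutable slice, e.g. memory that another
// entity allocated and handed to the application as a pointer and a length.
impl<'a, C: PixelColor> FrameBufferBackend for &'a mut [C] {
    type Color = C;

    fn set(&mut self, index: usize, color: C) {
        self[index] = color;
    }

    fn get(&self, index: usize) -> C {
        // PixelColor requires Copy, so returning by value is fine.
        self[index]
    }

    fn nr_elements(&self) -> usize {
        self.len()
    }
}
```

A caller that only has a raw pointer and a length would first turn it into a slice with core::slice::from_raw_parts_mut (upholding the usual validity, aliasing, and lifetime rules) and then hand that slice to the framebuffer.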

@svenstaro commented Mar 14, 2024

Should we add a note somewhere that this might not work with DMA if the buffer lives in PSRAM connected via SPI (which, by its nature, is then not DMA-capable)?

@jounathaen (Collaborator, Author)

Hmm, this might indeed be a problem. However, I think this issue existed before as well, and I don't think it is related to this PR.

I'm not that deep into this code anymore, but isn't what we do basically implementing ReadBuffer for every &mut [C; N] via

unsafe impl<'a, C: PixelColor, const N: usize> DMACapableFrameBufferBackend for &'a mut [C; N] {

together with a note that the requirements from https://docs.rs/embedded-dma/latest/embedded_dma/trait.ReadBuffer.html still apply? Indeed, the rules there don't state that the buffer needs to live in DMA-capable memory.

Maybe you have a better idea, but my suggestion would be to add such a note to embedded_dma.
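
To make that point concrete, here is a hypothetical sketch of satisfying embedded_dma's ReadBuffer contract. Everything except the ReadBuffer trait itself is made up for illustration; the contract concerns pointer/length stability and lifetimes, and it cannot express whether the backing memory is actually reachable by a DMA engine (e.g. SPI-attached PSRAM):

```rust
use embedded_dma::ReadBuffer;

// Hypothetical wrapper around a statically borrowed pixel buffer; this is
// illustration only, not the crate's actual DMA plumbing.
struct RawFrameBuf<C: 'static, const N: usize> {
    pixels: &'static mut [C; N],
}

// Safety: `read_buffer` returns a pointer and length that remain valid and
// stable for as long as `self` exists. Nothing in this contract requires the
// memory to live in DMA-capable RAM; a buffer in SPI-attached PSRAM would
// still satisfy the trait while being useless for actual DMA transfers.
unsafe impl<C: 'static, const N: usize> ReadBuffer for RawFrameBuf<C, N> {
    type Word = C;

    unsafe fn read_buffer(&self) -> (*const Self::Word, usize) {
        (self.pixels.as_ptr(), N)
    }
}
```

Since the type system cannot check the placement of the memory, documenting the limitation (here or in embedded_dma) is about the most that can be done.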

@svenstaro

So, I think this PR by itself is totally fine and fulfills my feature request in #22. DMA not being possible with PSRAM via SPI is an orthogonal issue.

@jounathaen (Collaborator, Author)

@bernii With two "approvals", I'll take the liberty of merging this now 🙂

jounathaen merged commit f609f2e into bernii:main on Mar 27, 2024. 6 checks passed.