
drivers: video: video_mcux_smartdma: add SMARTDMA video driver #72827

Open
wants to merge 13 commits into base: main

Conversation

@danieldegrasse (Collaborator) commented May 15, 2024

Add SMARTDMA video driver. This driver uses the SMARTDMA engine as a
parallel camera interface, which can read QVGA frames from a camera
device. Due to SRAM constraints, the video driver divides the camera
stream into multiple horizontal video buffers as it streams them back to
an application.

Note that this PR also contains two additional major changes:

  • an addition to the video API to enable streaming partial frames within one video buffer
  • a rework of the SMARTDMA driver interface to better support custom firmware use cases like the camera in this PR

This PR depends on #72826 for camera support

PR has been tested on the following boards:

  • frdm_mcxn947//cpu0: samples/subsys/video/capture
  • mimxrt595_evk//cm33: samples/subsys/display/lvgl (using G1120B0MIPI shield)
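For a sense of scale on the buffer split described above, here is a rough back-of-the-envelope sketch; it is not taken from the driver, and the 16-line slice height is a hypothetical value chosen only for illustration.

	/* A full QVGA RGB565 frame needs 320 * 240 * 2 = 153600 bytes, more than
	 * the MCXN947 can comfortably spare, so the driver hands the application
	 * smaller buffers, each holding a horizontal slice of the frame.
	 */
	#define FRAME_WIDTH       320
	#define FRAME_HEIGHT      240
	#define BYTES_PER_PIXEL   2   /* RGB565 */
	#define LINES_PER_BUFFER  16  /* hypothetical slice height */

	#define PITCH             (FRAME_WIDTH * BYTES_PER_PIXEL)      /* 640 bytes per line */
	#define BUFFER_SIZE       (PITCH * LINES_PER_BUFFER)           /* 10240 bytes per buffer */
	#define BUFFERS_PER_FRAME (FRAME_HEIGHT / LINES_PER_BUFFER)    /* 15 buffers per frame */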

@zephyrbot (Collaborator) commented May 15, 2024

The following west manifest projects have been modified in this Pull Request:

Name Old Revision New Revision Diff

Note: This message is automatically posted and updated by the Manifest GitHub Action.

@zephyrbot added the manifest, manifest-hal_nxp, and DNM (This PR should not be merged / Do Not Merge) labels on May 15, 2024
@josuah (Collaborator) commented May 20, 2024

  • an addition to the video API to enable streaming partial frames within one video buffer

There is some news about it, with a proposed API: #66994 (comment)

danieldegrasse added a commit to nxp-upstream/zephyr that referenced this pull request May 20, 2024
Introduce driver for ov7670 camera, supporting QCIF, QVGA, CIF, and VGA
resolution in YUV and RGB mode.

Support was verified on the FRDM-MCXN947, using the SmartDMA camera
engine, which is enabled in the following PR:
zephyrproject-rtos#72827

Signed-off-by: Daniel DeGrasse <[email protected]>
@josuah (Collaborator) commented May 23, 2024

Here (you need to click on "Load diff") is a new API that seems to be underway:

	if (drv_data->fifo_length == 0) {
		/* No more data pending: this buffer ends the frame. */
		vbuf->flags = VIDEO_BUF_EOF;
	} else {
		/* More data to come: mark this buffer as a fragment and
		 * submit the work item that handles the next one.
		 */
		vbuf->flags = VIDEO_BUF_FRAG;
		k_work_submit_to_queue(&ac_work_q, &drv_data->buf_work);
	}
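For context, a consumer of the proposed flags could look roughly like the sketch below. The flag names come from the snippet above; video_dev stands for the capture device, and process_fragment() and frame_complete() are hypothetical placeholders, not part of any in-tree API.

	/* Hypothetical consumer loop for the fragment flags quoted above. */
	struct video_buffer *vbuf;

	while (video_dequeue(video_dev, VIDEO_EP_OUT, &vbuf, K_FOREVER) == 0) {
		process_fragment(vbuf->buffer, vbuf->bytesused);

		if (vbuf->flags & VIDEO_BUF_EOF) {
			/* Last fragment: the frame is complete. */
			frame_complete();
		}
		/* VIDEO_BUF_FRAG means more fragments of this frame will follow. */

		video_enqueue(video_dev, VIDEO_EP_OUT, vbuf);
	}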

@danieldegrasse (Collaborator, Author) commented:

Here (you need to click on "Load diff") is a new API that seems to be underway:

No issues with this API. I suppose we would like to make this PR dependent on that API merging? I could also cherry-pick the commits adding the API from that PR (@ArduCAM, would you be OK with that approach?)

@ArduCAM commented May 25, 2024

Here (you need to click on "Load diff") is a new API that seems to be underway:

No issues with this API. I suppose we would like to make this PR dependent on that API merging? I could also cherry-pick the commits adding the API from that PR (@ArduCAM, would you be OK with that approach?)

Yes, I have marked it as resolved.

danieldegrasse added a commit to nxp-upstream/zephyr that referenced this pull request Jun 1, 2024
Introduce driver for ov7670 camera, supporting QCIF, QVGA, CIF, and VGA
resolution in YUV and RGB mode.

Support was verified on the FRDM-MCXN947, using the SmartDMA camera
engine, which is enabled in the following PR:
zephyrproject-rtos#72827

Signed-off-by: Daniel DeGrasse <[email protected]>
danieldegrasse added a commit to nxp-upstream/zephyr that referenced this pull request Jun 3, 2024
Introduce driver for ov7670 camera, supporting QCIF, QVGA, CIF, and VGA
resolution in YUV and RGB mode.

Support was verified on the FRDM-MCXN947, using the SmartDMA camera
engine, which is enabled in the following PR:
zephyrproject-rtos#72827

Signed-off-by: Daniel DeGrasse <[email protected]>
nashif pushed a commit that referenced this pull request Jun 7, 2024
Introduce driver for ov7670 camera, supporting QCIF, QVGA, CIF, and VGA
resolution in YUV and RGB mode.

Support was verified on the FRDM-MCXN947, using the SmartDMA camera
engine, which is enabled in the following PR:
#72827

Signed-off-by: Daniel DeGrasse <[email protected]>
coreboot-org-bot pushed a commit to coreboot/zephyr-cros that referenced this pull request Jun 7, 2024
Introduce driver for ov7670 camera, supporting QCIF, QVGA, CIF, and VGA
resolution in YUV and RGB mode.

Support was verified on the FRDM-MCXN947, using the SmartDMA camera
engine, which is enabled in the following PR:
zephyrproject-rtos/zephyr#72827

(cherry picked from commit 3ce3ed3)

Original-Signed-off-by: Daniel DeGrasse <[email protected]>
GitOrigin-RevId: 3ce3ed3
Change-Id: I20f10c984e4a0c2a6c805fde3625c0b6310f4fe4
Reviewed-on: https://chromium-review.googlesource.com/c/chromiumos/third_party/zephyr/+/5607905
Tested-by: ChromeOS Prod (Robot) <[email protected]>
Reviewed-by: Fabio Baltieri <[email protected]>
Commit-Queue: Fabio Baltieri <[email protected]>
@danieldegrasse danieldegrasse marked this pull request as ready for review June 7, 2024 18:58
@zephyrbot added the platform: NXP Drivers, area: MIPI-DSI, platform: STM32, and area: DMA labels on Jun 7, 2024
@josuah (Collaborator) commented Sep 12, 2024

Ah, my bad... without the frame offset, the sink would never know if two fragments are in the right order or if one was missed, since all it sees would be a big block of pixels.

An alternative to a timestamp would be frame_number: easier for counting lost frames and less strain on the time API, but it does not help with end-to-end latency or with synchronizing display of a feed over time when the source sends bursts of data and then pauses.

@danieldegrasse (Collaborator, Author) commented Sep 13, 2024

Ah, my bad... without the frame offset, the sink would never know if two fragments are in the right order or if one was missed, since all it sees would be a big block of pixels.

An alternative to a timestamp would be frame_number: easier for counting lost frames and less strain on the time API, but it does not help with end-to-end latency or with synchronizing display of a feed over time when the source sends bursts of data and then pauses.

Would frame_number work here, though? The issue I see is that there is a distinction between the fragment number within a frame and the frame number itself. Some sinks probably need to be able to count both.

Ah, my bad... without the frame offset, the sink would never know if two fragments are in the right order or if one was missed, since all it sees would be a big block of pixels.

Yes, exactly- the EOF flag alone won't cover the whole case here; we need a way to know that a fragment was dropped. That could be a total fragment counter, or a position indicator within the frame. Personally I prefer a position indicator, since we can probably keep the size to a uint16_t- I doubt sources will produce more than UINT16_MAX fragments per frame.
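As a small illustration of the position-indicator option being discussed, a sink could detect a dropped fragment roughly as sketched below; the names frag_offset and frag_lines are made up for this sketch and are not part of any proposed API.

	#include <stdbool.h>
	#include <stdint.h>

	static uint16_t expected_offset;

	/* Returns true if a gap is detected before the current fragment.
	 * (expected_offset would be reset to 0 at the end of each frame.)
	 */
	static bool fragment_dropped(uint16_t frag_offset, uint16_t frag_lines)
	{
		bool dropped = (frag_offset != expected_offset);

		/* The next fragment should start right after this one. */
		expected_offset = frag_offset + frag_lines;
		return dropped;
	}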

A Contributor commented:

Hi @danieldegrasse ,
This sample was moved in the tree, and is now at samples/drivers/video/capture. So these 3 files need to be moved.

@danieldegrasse (Collaborator, Author) replied:

Thanks for the heads up- awaiting feedback from @josuah on the proposed API changes here: #72827 (comment). Once we settle on an approach there, I can update the PR.

A Collaborator replied:

It all makes sense; thank you for explaining all the details. This will help when applying this new API to future or existing drivers. Continuing on the main thread.

josuah previously approved these changes Sep 21, 2024

@josuah (Collaborator) left a comment:

Marking "approve" for the ongoing changes, though letting you pursue with the extra APIs as well.

Should the extra APIs be introduced on a separate PR? This could other vendors to integrate the changes.

			       enum video_endpoint_id ep,
			       struct video_caps *caps)
{
	if (ep != VIDEO_EP_OUT) {
A Collaborator commented:

Following #73009, should it also accept VIDEO_EP_ALL?

There might be a separate PR converting all drivers, so this could be left as it is...

@@ -293,7 +293,8 @@ static int video_mcux_csi_enqueue(const struct device *dev, enum video_endpoint_
 	}
 
 	to_read = data->csi_config.linePitch_Bytes * data->csi_config.height;
-	vbuf->bytesused = to_read;
+	vbuf->bytesframe = vbuf->bytesused = to_read;
+	vbuf->flags = VIDEO_BUF_EOF;
A Collaborator commented:

Maybe other pull requests will be needed to propagate this change to the other drivers too, so that it becomes possible to rely on it from application/subsystem code.

A Collaborator replied:

It all makes sense; thank you for explaining all the details. This will help when applying this new API to future or existing drivers. Continuing on the main thread.

@mmahadevan108 (Collaborator) commented:

@danieldegrasse, can you help rebase?

@dleach02 (Member) commented Oct 1, 2024

Marking "approve" for the ongoing changes, though letting you pursue with the extra APIs as well.

Should the extra APIs be introduced on a separate PR? This could other vendors to integrate the changes.

@danieldegrasse any comment on @josuah question?

@josuah (Collaborator) commented Oct 1, 2024

Side note: the fragmentation APIs introduced do make sense to me, and I even started to experiment with them in #63421

@danieldegrasse (Collaborator, Author) commented:

Marking "approve" for the ongoing changes, though letting you pursue with the extra APIs as well.

Should the extra APIs be introduced on a separate PR? This could other vendors to integrate the changes.

We need to introduce some level of API changes in this PR- the camera engine here fundamentally can't stream a full frame into memory on the MCXN947- the SRAM is too small to support it.

Are you ok with the changes proposed in this comment: #72827 (comment)? If so, I can update the PR and get things moving forwards

@josuah (Collaborator) commented Oct 3, 2024

Are you OK with the changes proposed in this comment: #72827 (comment)? If so, I can update the PR and get things moving forward.

Yes, absolutely, this looks like it would work! Sorry if my position was ambiguous.
For uncompressed formats, I suppose pitch * frame_offset gives the position in bytes.
Thank you for the time spent organizing this!

@danieldegrasse (Collaborator, Author) commented:

Yes, absolutely, this looks like it would work! Sorry if my position was ambiguous.
For uncompressed formats, I suppose pitch * frame_offset gives the position in bytes.
Thank you for the time spent organizing this!

No worries, I'll work on getting this PR updated

Add min and max line count to the video_caps structure- these fields can
be used by applications to determine the size of video buffer they need
to allocate for a video endpoint.

The min and max line count fields are designed to enable supporting
endpoints that may produce or consume partial frames within each video
buffer, and may support arbitrarily sized video buffers.

Signed-off-by: Daniel DeGrasse <[email protected]>
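A rough sketch of how an application could use these caps fields to size its buffers is shown below. It assumes the fields are exposed as min_line_count/max_line_count with the LINE_COUNT_HEIGHT value mentioned in the following commit, that video_dev is the capture device, and that the get calls succeed; it is an illustration, not code from this PR.

	struct video_caps caps;
	struct video_format fmt;

	video_get_caps(video_dev, VIDEO_EP_OUT, &caps);
	video_get_format(video_dev, VIDEO_EP_OUT, &fmt);

	/* Full-frame endpoints report LINE_COUNT_HEIGHT; partial-frame
	 * endpoints report the smallest slice they can produce.
	 */
	size_t lines = (caps.min_line_count == LINE_COUNT_HEIGHT) ?
		       fmt.height : caps.min_line_count;

	/* Each video buffer must hold at least this many bytes. */
	size_t buf_size = fmt.pitch * lines;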
Update existing video drivers to handle the min/max line count field
within the video_caps structure. All drivers work with full frames
currently, so use the special LINE_COUNT_HEIGHT value to indicate this.

Signed-off-by: Daniel DeGrasse <[email protected]>
Update video samples to use min/max line count capabilities when
allocating video buffers.

Signed-off-by: Daniel DeGrasse <[email protected]>
Add line_offset field to the video_buffer structure. This field
indicates the offset (in horizontal lines) within a frame that a video
buffer starts at. This is useful for video devices that produce or
consume video buffers that constitute a partial frame.

Signed-off-by: Daniel DeGrasse <[email protected]>
Since all video drivers in tree use a full frame, their video buffers
will always start at a line_offset of 0 within the frame.

Signed-off-by: Daniel DeGrasse <[email protected]>
Handle line_offset field within video capture sample. Other samples do
not support partial framebuffers and therefore can ignore this field.

Signed-off-by: Daniel DeGrasse <[email protected]>
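A minimal sketch of what handling line_offset can look like on the consumer side follows; display_write() is the standard Zephyr display API, while vbuf, fmt, and display_dev are local names assumed for this example, and this is not code taken from the sample.

	/* Place a partial-frame buffer at its vertical position in the frame.
	 * vbuf is a dequeued video buffer, fmt is the current video format,
	 * and display_dev is a Zephyr display device.
	 */
	struct display_buffer_descriptor desc = {
		.buf_size = vbuf->bytesused,
		.width = fmt.width,
		.pitch = fmt.width,
		.height = vbuf->bytesused / fmt.pitch, /* lines held in this buffer */
	};

	display_write(display_dev, 0, vbuf->line_offset, &desc, vbuf->buffer);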
The SMARTDMA is a programmable DMA engine, and supports custom firmware
in order to run complex DMA operations. Update the driver to increase
the flexibility users have when configuring the SMARTDMA with
custom firmware, and remove the RT500 display firmware specific
definitions and functionality from the driver.

This display setup is now handled from the MIPI DSI driver, since the
firmware used for this case is specific to the MIPI DSI IP.

This change also requires an update to the RT500 devicetree, as the
register definition for the SMARTDMA has changed, so the base address
must as well.

Signed-off-by: Daniel DeGrasse <[email protected]>
Add SMARTDMA video driver. This driver uses the SMARTDMA engine as a
parallel camera interface, which can read QVGA frames from a camera
device. Due to SRAM constraints, the video driver divides the camera
stream into multiple horizontal video buffers as it streams them back to
an application.

Signed-off-by: Daniel DeGrasse <[email protected]>
Add definition for SMARTDMA device to the MCXN94x devicetree.

Signed-off-by: Daniel DeGrasse <[email protected]>
Add support for LPI2C7. This peripheral is used for interfacing with
cameras connected to the J9 header.

Signed-off-by: Daniel DeGrasse <[email protected]>
Add support for the SMARTDMA camera engine, and an OV7670 parallel camera
definition for the frdm_mcxn947 board. This support has been tested with
the video capture sample.

Signed-off-by: Daniel DeGrasse <[email protected]>
Add support for using the SMARTDMA engine on the FRDM-MCXN947 board with
the video capture sample.

Signed-off-by: Daniel DeGrasse <[email protected]>
Do not apply format setting unless needed. Also, correct the check for
the RGB565 format setting- the Zephyr display API treats RGB565 and
BGR565 as big endian, so the format needed here is BGR565.

Signed-off-by: Daniel DeGrasse <[email protected]>
@danieldegrasse (Collaborator, Author) commented:

No worries, I'll work on getting this PR updated

The PR has been updated. Something I realized while working on this is that we can't really provide framebuffer size bounds until the format is set; therefore, I chose to report the min/max number of frame lines rather than the buffer size, as this should be independent of format (I think).

Labels

  • area: Devicetree Binding (PR modifies or adds a Device Tree binding)
  • area: DMA (Direct Memory Access)
  • area: MIPI-DSI
  • area: Samples (Samples)
  • area: Video (Video subsystem)
  • platform: ESP32 (Espressif ESP32)
  • platform: NXP Drivers (NXP Semiconductors, drivers)
  • platform: NXP (NXP)
  • platform: STM32 (ST Micro STM32)
7 participants