[HOW-TO] Synchronize gain adjustment between two Raspberry Pi cameras #1116
Ah, I see that I've answered this on the forum! Might it be easier to keep the discussion here, going forward? |
Before we decide whether to continue the discussion here, could you provide a link to the thread where it was previously discussed? It depends on the details covered there. My main goal is to achieve intensity-equivalent images, and gain synchronization is only one possible solution. |
I was referring to the reply that I posted here: https://forums.raspberrypi.com/viewtopic.php?t=376829#p2254996 |
I think we can continue the discussion here. My Global Shutter Raspberry Pi cameras are externally triggered, but each camera selects its own analogue gain, so the output images differ in brightness. Any ideas on how I could solve this problem? |
Just to understand, can you say a little more about what you're doing? I think I understood that:
|
Following the Raspberry Pi camera documentation (https://www.raspberrypi.com/documentation/accessories/camera.html), I connected the camera's GND and XTR pins to a Raspberry Pi Pico and ran the MicroPython code on the Pico controller.
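For reference, the Pico side can be as simple as the documentation's PWM approach; this is a hedged sketch, assuming XTR is driven from GP28 and a 60 fps trigger (pin and timing values are illustrative):

```python
# MicroPython on the Pico: drive XTR with a PWM signal. The PWM frequency
# sets the frame rate; the low portion of each cycle is the trigger pulse.
from machine import Pin, PWM

FRAMERATE = 60       # frames per second
SHUTTER_US = 6000    # trigger pulse width (exposure) in microseconds

frame_length_us = 1000000 / FRAMERATE
pwm = PWM(Pin(28))   # GP28 wired to the camera's XTR pad (illustrative)
pwm.freq(FRAMERATE)
# duty_u16 sets the high fraction of the cycle out of 65535, so the
# remaining low time equals the requested shutter duration.
pwm.duty_u16(int((1 - SHUTTER_US / frame_length_us) * 65535))
```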
Afterward, I ran the main() function and successfully achieved synchronized image capture. To verify this, I ran the check_sync() function and got an output like:
When I uncommented the line "AnalogueGain": 8.0 in the start_camera(index) function and ran check_sync() again, I got an output like:
The difference is three orders of magnitude: the difference between the cameras' timestamps is now measured in milliseconds, not microseconds. So I conclude that setting the gain breaks synchronization. The same break happens for different exposure times, but that is expected, since the shutter value is explicitly defined in the MicroPython code.
So answers to your questions:
It is fine for me if it doesn't break synchronization and does not lead to dropped frames. I'll check it.
|
Thanks for all the information. I probably need to pass this on to someone who has actually used the external trigger mechanism, but unfortunately he's on holiday so it would be into next week before he could get back to you. But just to comment on a few other things:
|
First of all, I want to thank you for this discussion and your help. I'm confident that we’ll find a solution through this dialogue.
In the check_sync() function, I start the two cameras in separate threads, retrieve the timestamps from the request metadata (using the capture_timestamp() function), and print the difference for every twentieth frame.
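Roughly, the check looks like this (a simplified sketch rather than my exact code; the timestamp comes from the request metadata's SensorTimestamp field, in nanoseconds):

```python
import threading
from queue import Queue

from picamera2 import Picamera2

def capture_timestamps(cam_index, out_queue, n_frames=400):
    # One thread per camera: capture requests and record the sensor
    # timestamp (nanoseconds) from each request's metadata.
    picam2 = Picamera2(cam_index)
    picam2.configure(picam2.create_video_configuration())
    picam2.start()
    for _ in range(n_frames):
        request = picam2.capture_request()
        out_queue.put(request.get_metadata()["SensorTimestamp"])
        request.release()
    picam2.stop()

def check_sync(n_frames=400):
    q0, q1 = Queue(), Queue()
    threads = [
        threading.Thread(target=capture_timestamps, args=(i, q, n_frames))
        for i, q in ((0, q0), (1, q1))
    ]
    for t in threads:
        t.start()
    for i in range(n_frames):
        diff_ns = q1.get() - q0.get()
        if i % 20 == 0:  # print the difference for every twentieth frame
            print(f"frame {i}: timestamp difference {diff_ns / 1000:.1f} us")
    for t in threads:
        t.join()
```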
How should I display the obtained array? The same as before, using cv2.imshow? Thank you. |
The x-axis represents the intensity of the pixel at [i, j] from the first camera, and the y-axis represents the intensity of the same pixel from the second camera. Blue dots show data with both cameras set to a 15 ms exposure and a fixed analog gain of 15. Red dots represent data with both cameras set to the same 15 ms exposure but with a fixed analog gain of 5. I would like to:
The imx296.json file has the following algorithms:
Since AWB and contrast are already off, what else can I disable to achieve a linear grayscale intensity response? |
Hi again, a few things to comment on here.

Firstly, your YUV420 modifications looked OK to me. It's more efficient because the hardware does the conversion for you, rather than doing it slowly in software. OpenCV should understand and display the single channel greyscale image directly.

As regards exposure comparisons, it might be worth looking at some raw images, which you can capture in DNG files. This is exactly what comes from the sensor. You should find that, after subtracting the black level, it is exactly linear in both exposure time and analogue gain (until pixels start to saturate).

I'm assuming your graphs are referring to the processed output images. By far the biggest non-linearity here is controlled by the rpi.contrast algorithm, so disabling that is the first thing. Other algorithms may have an effect, and you could try disabling those too - maybe rpi.alsc, rpi.ccm, rpi.sharpen (it might only be rpi.black_level that's essential, but obviously if removing any causes it to go horribly wrong then you'll need to put those back). The "x." trick should work for all of them.

I still don't understand why changing the analogue gain should cause the sync to change. Perhaps I could ask @njhollinghurst to comment on that? |
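In case it helps, the renaming can be scripted with Picamera2's tuning helper; a minimal sketch, assuming the version-2 tuning file layout where "algorithms" is a list of single-entry dicts:

```python
from picamera2 import Picamera2

# Load the IMX296 tuning and disable selected ISP stages by renaming them
# with an "x." prefix, so libcamera no longer recognises them.
tuning = Picamera2.load_tuning_file("imx296.json")
for name in ("rpi.contrast", "rpi.ccm", "rpi.sharpen"):
    for algo in tuning["algorithms"]:
        if name in algo:
            algo["x." + name] = algo.pop(name)

picam2 = Picamera2(0, tuning=tuning)
```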
Working with raw images was one of the solutions I considered. Still, it's not the easiest, because handling the raw signal means manually re-implementing some useful image preprocessing algorithms (like rpi.denoise and rpi.agc). Aside from other algorithms, I still need to denoise and enhance the weak raw signal. How can I demosaic the raw signal into a grayscale image and then apply denoising and enhancement in real time, without saving the images to disk? Any advice or suggested libraries? Yes, please ask @njhollinghurst to join this discussion. This is an important issue to resolve, as it could help others who want to synchronize two cameras with an external trigger board. |
I only really suggested looking at some raw images to check that exposure and gain cause exactly linear changes in pixel level. But it should be possible to emulate this in the processed output images by disabling other stages (most obviously rpi.contrast). It might be worth experimenting with just a single camera, where the image doesn't change, to confirm that this really works. |
With regard to the timestamps... Do you have an independent way to check if the cameras are synchronized? For example by filming a mobile phone stopwatch application. The timestamps actually record when the start of the frame was received by the Raspberry Pi. With a rolling-shutter camera, it's closely related to the exposure time. But a global-shutter camera has the ability to retain the frame internally for several milliseconds (I don't know why it might do this, but it's theoretically possible) so there is room for doubt. |
It seems that adding 'x.' to 'rpi.contrast' makes the response linear enough. Thank you. |
In the early stages of my work, I used a 'running lights' setup: 10 LEDs arranged in a line. Each LED emits light for a certain period before turning off, while the next LED in line begins to emit. I can control the duration each LED stays on, ranging from 100 microseconds to 10 seconds. I used this setup to check camera synchronization, and everything seemed to work well. Afterward, I relied solely on the frame metadata timestamps. When I don't specify 'AnalogueGain' in the camera controls, the timestamp difference between frames from the first and second camera is minimal (up to 20 microseconds). However, explicitly setting the same or different 'AnalogueGain' increases the timestamp difference by three orders of magnitude. Similarly, setting different exposures causes the synchronization to break down within 5-10 seconds. Initially, the timestamp difference is small, but after a short time, it increases dramatically. Sync.mp4 |
I'm guessing that one of three things is going wrong:
Is it possible to repeat the LED experiment when either one or both cameras are having their analogue gains frequently set -- do the images go out of sync, or only the timestamps? Is the error an integer number of frame intervals? How does the error evolve over time? Don't try to change the shutter duration using the API -- it should be fixed and should match the duration of the trigger pulse. |
It’s possible to repeat the LED experiments, but I’m not sure why you said the controls are 'frequently set'. In line 33, I configured both cameras and started them once. After that, in an infinite loop, I capture and release requests. So the controls are only set once, right? With line 25 commented out, synchronization works. When I uncomment that line, synchronization fails.
|
Ah, I thought you might have been changing the gain for every frame. I'll try to reproduce this. What is the rate and duration of your trigger pulse? Does the trigger start before or after the time.sleep() call in your script? |
OK, I have reproduced this. The difference is usually close to one frame interval (16667000 ns). For the first few frames (which you didn't print) it can be larger. The "AnalogueGain" parameter changes how many frames are discarded at startup, but I think the matching timestamps without "AnalogueGain" were largely fortuitous. For example, changing the duration of the sleep to 0.1 can also affect synchronization. After starting, both cameras will be running in sync from the external trigger source, but the Python program is sleeping, so frames are being buffered and eventually dropped. EDITED: Here's a sketch of a possible hack to keep the channels in sync:
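(A minimal sketch, assuming nanosecond SensorTimestamps and picamera2 request objects; names are illustrative.)

```python
FRAME_TIME_NS = 16_667_000  # one frame interval at 60 fps, in nanoseconds

def capture_pair_and_check(cam_a, cam_b):
    # Capture one request from each running camera, then flag pairs whose
    # sensor timestamps differ by more than half a frame interval.
    req_a = cam_a.capture_request()
    req_b = cam_b.capture_request()
    delta_ns = (req_a.get_metadata()["SensorTimestamp"]
                - req_b.get_metadata()["SensorTimestamp"])
    if abs(delta_ns) > FRAME_TIME_NS // 2:
        print(f"pair out of sync by {delta_ns / 1e6:.3f} ms")
    return req_a, req_b  # caller must release both requests
```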
The first frame may still be out of sync. |
According to the manual (https://www.raspberrypi.com/documentation/accessories/camera.html):
So the trigger starts before the time.sleep(0.5) on line 29. I haven’t tested synchronization over extended periods, usually just running check_sync() and observing for 15-40 seconds. If the difference stayed small (10-20 microseconds) during this time, I assumed synchronization was established. However, I can't say the synchronization process is fully reliable: sometimes it failed, and I had to restart the MicroPython script. Through experimentation, I found the success-to-fail ratio to be around 9:1. Even when synchronization was successful, some frame pairs occasionally weren’t captured in sync. But since this happened once every 1-3 seconds, it was acceptable for me; at 60 fps, I could skip those frames and work with the synchronized ones.

I am a bit concerned about your statement that "there is no guarantee both threads will see the same frame." On the Raspberry Pi forum, I found examples showing that threads are the only viable way to operate two cameras in sync. Without threads, the cameras capture frames independently, even with the external trigger; I verified this myself. In the Raspberry Pi camera manual (https://www.raspberrypi.com/documentation/accessories/camera.html), under the section "External Trigger on the GS Camera," it states that "The Global Shutter (GS) camera can be triggered externally by pulsing the external trigger (denoted on the board as XTR) connection on the board. Multiple cameras can be connected to the same pulse, allowing for an alternative way to synchronize two cameras." Based on this, I understood that synchronizing two GS cameras is a proven capability. When you say "no guarantee," do you mean "it may work sometimes, and sometimes not," or "it works, but a small portion of frame pairs may occasionally be captured asynchronously"? Is there an alternative method to capture synchronized frames without using Python threads?

Thank you for your example, I’ll check it tomorrow. And thanks again for all the help. |
The example code shows the two cameras being operated by separate threads.
Possibly the best solution is to capture a pair of frames, compare their timestamps, and while they differ by more than half a frame-time (8ms), discard the frame with the earlier timestamp (remember to release the request) and capture another one from that camera only. |
You might want code something like this:
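(A sketch of the approach just described, assuming 60 fps, picamera2 requests, and nanosecond SensorTimestamps; the caller releases both returned requests.)

```python
FRAME_TIME_NS = 16_667_000  # one frame interval at 60 fps, in nanoseconds

def capture_synced_pair(cam_a, cam_b):
    # While the two sensor timestamps differ by more than half a frame-time,
    # release the earlier request and capture a replacement from that
    # camera only. The caller releases both returned requests.
    req_a = cam_a.capture_request()
    req_b = cam_b.capture_request()
    while True:
        ts_a = req_a.get_metadata()["SensorTimestamp"]
        ts_b = req_b.get_metadata()["SensorTimestamp"]
        if abs(ts_a - ts_b) <= FRAME_TIME_NS // 2:
            return req_a, req_b
        if ts_a < ts_b:
            req_a.release()
            req_a = cam_a.capture_request()
        else:
            req_b.release()
            req_b = cam_b.capture_request()
```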
I'd have thought at 60fps you might see some frame drops, but you'll have to see how it goes. |
Both examples you provided work perfectly. Based on the timestamp differences, the cameras are capturing frames synchronously, both with and without explicitly setting the analog gain. It seems that this issue is now resolved. I appreciate your assistance — thank you! However, there’s still one important question I haven’t been able to resolve: Is there a way to allow the first camera to automatically select the optimal analog gain (with AeEnable=True) while locking the second camera to the same gain value? |
David's version is better - it corrects the mismatch before you get the output, whereas my code only detects it afterwards! Yes it is possible to copy the analog gain from one camera to the other, inside the main loop (roughly as suggested on the forum), with a scale-factor if necessary - but there would be a lag of 3 or 4 (?) frames, due to the way controls and frames get queued. If the scene's brightness changes only slowly, that might be acceptable. |
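A rough sketch of that gain-copying loop, assuming picamera2 calls and an illustrative scale factor:

```python
from picamera2 import Picamera2

cam_a = Picamera2(0)  # runs auto-exposure and picks the gain
cam_b = Picamera2(1)  # follows camera A's gain
for cam in (cam_a, cam_b):
    cam.configure(cam.create_video_configuration())
cam_a.set_controls({"AeEnable": True})
cam_b.set_controls({"AeEnable": False})
cam_a.start()
cam_b.start()

GAIN_SCALE = 1.0  # optional per-camera correction factor (illustrative)

while True:
    req_a = cam_a.capture_request()
    req_b = cam_b.capture_request()
    gain_a = req_a.get_metadata()["AnalogueGain"]
    # Copy camera A's reported gain to camera B; it takes effect a few
    # frames later because controls are queued.
    cam_b.set_controls({"AnalogueGain": gain_a * GAIN_SCALE})
    # ... process the image pair here ...
    req_a.release()
    req_b.release()
```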
It's fairly fundamental to libcamera that cameras run separately, often in separate processes, and can't influence one another. To get round that, I think you'd have to get involved at a fairly low level. For example, whenever the AGC/AEC selects new exposure/gain values, you wouldn't apply them immediately; instead you'd broadcast the new values and then have a "client AGC/AEC algorithm", perhaps running on both devices, that would receive these messages and could apply them at the same time. So there'd be some work involved. I wonder if there might still be some alternatives. Here are a couple of ideas.
|
I have two externally triggered Raspberry Pi global shutter cameras connected to a Raspberry Pi 5, with each camera running in its own thread. They capture nearly identical but slightly shifted fields of view, and I can apply an affine transformation to spatially align them. However, the luminous flux between the two cameras differs by up to 10%. Both cameras have a fixed exposure, but due to the shifted fields of view and the difference in light flux, each camera pre-processes its image differently and selects a different analog gain.
My goal is to find a fast way to make the output image arrays as pixel-wise equivalent as possible in terms of pixel brightness.
I've plotted the full-range relationship between pixel intensities from both cameras to create a lookup table. But this is only valid when both cameras have the same fixed analog gain.
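Applying such a table is cheap with NumPy; a minimal sketch with an identity placeholder table and illustrative array names:

```python
import numpy as np

# lut[v] maps a camera-2 grey level v onto camera-1's intensity scale;
# an identity table stands in for the measured curve here.
lut = np.arange(256, dtype=np.uint8)

frame_cam2 = np.zeros((600, 800), dtype=np.uint8)  # stand-in for a real frame
matched = lut[frame_cam2]  # fancy indexing applies the table per pixel
```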
Is there a way for the first camera to automatically select the optimal gain (with AeEnable=True) while locking the second camera to that same gain value? In other words, the first camera would adjust its gain, and the second camera would then match its gain to the first camera.
I appreciate your help in advance.