
[HOW-TO] Synchronize gain adjustment between two Raspberry Pi cameras #1116

Raccoon987 opened this issue Sep 17, 2024 · 27 comments

@Raccoon987

Raccoon987 commented Sep 17, 2024

I have two externally triggered Raspberry Pi global shutter cameras connected to a Raspberry Pi 5, with each camera running in its own thread. They capture nearly identical but slightly shifted fields of view, and I can apply an affine transformation to spatially align them. However, the luminous flux between the two cameras differs by up to 10%. Both cameras have a fixed exposure, but due to the shifted fields of view and the difference in light flux, each camera pre-processes its image differently and selects a different analog gain.

My goal is to find a fast way to make the output image arrays as pixel-wise equivalent as possible in terms of pixel brightness.

I've plotted the full range relationship between pixel intensities from both cameras, to create a lookup table. But this is only valid when both cameras have the same fixed analog gain.
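
For concreteness, here is a minimal sketch of how such a lookup table could be built and applied; the calibration pairs below are hypothetical placeholders, not measured values.

import numpy as np
import cv2

# Hypothetical calibration pairs: intensity of the same scene point as seen by
# camera 1 and camera 2, measured with both cameras at a fixed analog gain.
cam2_levels = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
cam1_levels = np.array([0, 36, 71, 105, 138, 170, 200, 228, 255], dtype=float)

# Tabulate the camera2 -> camera1 mapping for all 256 intensity levels.
lut = np.interp(np.arange(256), cam2_levels, cam1_levels)
lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)

# Apply it to an 8-bit grayscale frame from camera 2 so it matches camera 1.
camera2_gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
matched = cv2.LUT(camera2_gray, lut)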

Is there a way for the first camera to automatically select the optimal gain (with AeEnable=True) while locking the second camera to that same gain value? In other words, the first camera would adjust its gain, and the second camera would then match its gain to the first camera.

I appreciate your help in advance.

@davidplowman
Collaborator

Ah, I see that I've answered this on the forum! Might it be easier to keep the discussion here, going forward?

@Raccoon987
Author

Raccoon987 commented Sep 23, 2024

Before we decide whether to continue the discussion here, could you provide a link to where it was previously discussed? It depends on the details covered there. My main goal is to achieve intensity-equivalent images, and gain synchronization is only one possible solution.

@davidplowman
Collaborator

I was referring to the reply that I posted here: https://forums.raspberrypi.com/viewtopic.php?t=376829#p2254996

@Raccoon987
Author

Raccoon987 commented Sep 23, 2024

I think we can continue the discussion here.

My Global Shutter Raspberry Pi cameras are externally triggered by a Raspberry Pi Pico, as outlined in the GS camera manual. This lets me capture images simultaneously, with the same exposure set on both cameras. However, explicitly setting the analog gain breaks the synchronization, as does setting different exposures for each camera. I'm not sure why this happens; perhaps you could explain it to me.
I only need monochrome images, so I've disabled AWB (AwbEnable = False) and set {"rpi.awb": {"bayes": 0}} in the imx296.json file.
The best approach to producing equal images would be to equalize the light flux before it reaches the camera lenses. Unfortunately, for various reasons, I can't physically make the light fluxes equal. My next option is to reduce the sensitivity of one of the camera sensors to balance the light. I've found that the camera's effective ISO can only be adjusted through the exposure time and the analog gain. This is the last stage where a linear reduction of the light, or of the sensor sensitivity, proportional to the roughly 20% difference in light flux could still yield equal images.
Beyond this point, the accumulated and converted light signal is processed by image preprocessing algorithms controlled by the imx296.json file.
Since the cameras receive different light fluxes, they independently calculate the gain values.
After several experiments, I noticed that the difference between the resulting images seems to depend on pixel intensity. The ratio between the corresponding bright pixels from both cameras is not the same as the ratio between mid-range or dark pixels — there's a nonlinear relationship.
I plotted pixel intensity from the first camera against the second camera for the full range (0 to 255), and this relationship was nonlinear even when gain and exposure were fixed. Without locking the gain, there’s an additional uncontrolled variation in intensity, as each camera selects its own gain.
When I set AeEnable = False, I get synchronized image capture, with the analog gain fixed at 1 for both cameras — but the images are too dark.
I don’t want to completely disable the gain adjustment algorithm because it’s useful.
I realize this issue extends beyond the original topic title — sorry for that.

Any ideas on how I could solve this problem?

@davidplowman
Collaborator

Just to understand, can you say a little more about what you're doing? I think I understood that:

  • You're setting up both cameras and starting them. But they will sit and do nothing until the external trigger pulse. Is that correct?
  • Then you're capturing the first frame that comes out of each camera?
  • The exposure time will, I believe, be determined by the pulse length. Though I think we'd recommend setting the exposure time on both cameras to a fixed value. Does this all describe what you're doing?
  • I don't really understand the analogue gain issue. As far as I know, you should be able to set the analogue gain explicitly for both sensors. In what way is this not working?
  • After that I was finding it a bit harder to follow. The pixel levels coming out of the camera are basically linear in exposure time and analogue gain. The final processed images will not be linear, however, because of the gamma transform that gets applied.
  • I wasn't entirely sure why you still wanted to let the analogue gain values vary. You can feed gain values from one camera to the other, though you need to have the camera running so that you know what gain value to apply.

@Raccoon987
Author

Raccoon987 commented Sep 24, 2024

According to the Raspberry Pi camera documentation (https://www.raspberrypi.com/documentation/accessories/camera.html), I connected the camera's GND and XTR pins to the Raspberry Pico and ran the MicroPython code on the Pico controller.

# On the Raspberry Pi: enable external trigger mode for the imx296 driver.
sudo su
echo 1 > /sys/module/imx296/parameters/trigger_mode
exit

# On the Pico: generate the trigger pulse train (the pulse width sets the exposure).
from machine import Pin, PWM
from time import sleep

pwm = PWM(Pin(28))
framerate = 60
shutter = 2000  # In microseconds
frame_length = 1000000 / framerate
pwm.freq(framerate)
pwm.duty_u16(int((1 - (shutter - 14) / frame_length) * 65535))

Afterward, I ran the main() function and successfully achieved synchronized image capture. To verify this, I ran the check_sync() function and got an output like:

16000
18000
10000
22000
... 

When I uncommented the line "AnalogueGain": 8.0 in the start_camera(index) function and ran check_sync() again, I got an output like

16000000
12000000
11000000
...

The difference is three orders of magnitude: the cameras' timestamps now differ by milliseconds rather than microseconds, so I conclude that this breaks the synchronization. The same breakage happens with different exposure times, but that is expected, since the shutter value is explicitly defined in the MicroPython code.

from picamera2 import Picamera2
import threading
import time
import cv2
import numpy as np
import copy
import pprint

def capture_and_process(picam, result_list, meta_list, index):
    request = picam.capture_request()
    metadata = request.get_metadata()
    array = request.make_array(name="main") 
    array = cv2.cvtColor(array, cv2.COLOR_RGB2GRAY)
    result_list[index] = array
    meta_list[index] = metadata
    request.release()

def capture_timestamp(picam, result_list, index):
    request = picam.capture_request()
    metadata = request.get_metadata()
    ts = int(metadata["SensorTimestamp"])
    result_list[index] = ts
    request.release()

def start_camera(index):
    picam = Picamera2(index)
    print("Camera sensor modes: ", picam.sensor_modes)
    config = picam.create_preview_configuration(
        controls={"FrameDurationLimits": (16667, 16667),
                  "FrameRate": 60,
                  "ExposureTime": 2000,
                  "Saturation": 0,
                  "AwbEnable": False, 
                  #"AnalogueGain": 8.0,
                  })
    print(f"camera {index} main config: ", config["main"])
    picam.start(config)
    time.sleep(0.5)
    return picam

def check_sync():
    picams = [start_camera(i) for i in range(2)]
    results = [None] * len(picams)  
    try:
        c = 0
        while True:
            threads = [threading.Thread(target=capture_timestamp, args=(picam, results, index)) for index, picam in
                       enumerate(picams)]

            for thread in threads:
                thread.start()

            for thread in threads:
                thread.join()  
            c += 1
            if c % 20 == 0:
                print("timestamp delta between two cameras: ", results[0],  results[1], abs(results[0] - results[1]))    
    except KeyboardInterrupt:
        # Ctrl + C to properly stop cameras
        print("Stopping cameras...")
    finally:
        [c.stop() for c in picams]
        print("Cameras stopped.")  

def main():
    picams = [start_camera(i) for i in range(2)]
    results = [None] * len(picams)
    metadata = [{}] * len(picams)
    
    try:
        while True:
            threads = [threading.Thread(target=capture_and_process, args=(picam, results, metadata, index)) for index, picam in
                       enumerate(picams)]

            for thread in threads:
                thread.start()
            for thread in threads:
                thread.join()

            cv2.imshow('Master/Bottom', np.flip(results[0], axis=1))
            cv2.imshow('Slave/Top', results[1])
           
            if cv2.waitKey(1) == ord('q'):
                break
    except KeyboardInterrupt:
        pass
    finally:
        [c.stop() for c in picams]
        print("Cameras stopped.")  

So answers to your questions:

  1. yes
  2. yes
  3. yes
  4. A construction like this one:
while True:
    cam2.set_controls({'AnalogueGain': cam1.capture_metadata()['AnalogueGain']})

That would be fine for me, provided it doesn't break synchronization or lead to dropped frames. I'll check it.

  5. How can I turn off the gamma transform? Based on my experiments, the pixel intensity relationship is almost linear in the low and mid-range intensity regions but becomes nonlinear for bright pixels. In the linear region, the slope changes slightly. I would prefer a fully linear response, as I don't need a 'nice' picture, just one that is simple and predictable.

  6. I want the first camera to automatically adjust its gain, as this adjustment does a good job of keeping the image neither too bright nor too dark. I would then like to link the second camera's gain to that of the first. Otherwise, depending on the environment and on each camera's preprocessing, the first image might come out brighter or darker than the second. That behavior is unpredictable for me.

@davidplowman
Collaborator

Thanks for all the information. I probably need to pass this on to someone who has actually used the external trigger mechanism, but unfortunately he's on holiday so it would be into next week before he could get back to you.

But just to comment on a few other things:

  1. When you quoted those numbers (16000, 18000 and so on), it wasn't clear to me what they were. I couldn't spot where you were printing them in the code either. Did I miss something or could you clarify?

  2. One problem with setting the camera's analogue gain while it is running is that it takes several frames to take effect. For it to take effect immediately, you would need to stop the camera, set the analogue gain, then restart it (see the sketch after this list). But that's a relatively slow process too, so it depends what kind of frame rate you are hoping to achieve.

  3. You can turn off the gamma transform by finding "rpi.contrast" in the camera tuning file and changing it to "x.rpi.contrast" (which effectively "comments it out"). The tuning file will be called imx296.json, probably under /usr/share/libcamera/ipa/rpi/pisp (Pi 5) or /usr/share/libcamera/ipa/rpi/vc4 (other Pis). Of course, the resulting image will look dark but very contrasty.

  4. To get the greyscale version of an image, it would be more efficient to avoid cv2 and ask for 'YUV420' format instead. Then you could take the top "height" rows of the array directly.
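
A minimal sketch of the stop / set / restart approach from point 2, assuming picam is an already-configured and started Picamera2 instance (restarting is slow, so this is only practical when the gain rarely changes):

# Stop the camera, apply the new analogue gain, then restart so the new value
# is in effect for the first frame after the restart.
picam.stop()
picam.set_controls({"AnalogueGain": 8.0})
picam.start()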

@Raccoon987
Author

Raccoon987 commented Sep 25, 2024

First of all, I want to thank you for this discussion and your help. I'm confident that we’ll find a solution through this dialogue.

  1. I forgot to include the capture_timestamp() function in my code. I’ve now added it to my code snippet. The numbers 16,000, 18,000, etc., represent the difference between the timestamps of frames captured by the first and second cameras. This means that the time shift between frame j of the first camera and frame j of the second camera is only 16 or 18 microseconds, indicating that the cameras capture frames simultaneously. However, values like 16,000,000 or 12,000,000 show that the difference is now measured in milliseconds, which, compared to the exposure time of 2 ms and the frame duration of 16.6 ms, indicates non-simultaneous capture.

In the check_sync() function, I start the two cameras in separate threads, retrieve the timestamps from the request metadata (using the capture_timestamp() function), and print the difference for every twentieth frame.

  2. Yes. With an FPS of 60, I can't use this method to set equal gains.

  3. I'll try this.

  4. Why is the 'YUV420' format more efficient for grayscale images? Is this the correct modification?

w, h = 640, 480

def capture_and_process(picam, result_list, meta_list, index):
    request = picam.capture_request()
    metadata = request.get_metadata()
    array = request.make_array(name="main") 
    y_h = array.shape[0] * 2 // 3
    array = array[:y_h, :]
    result_list[index] = array
    meta_list[index] = metadata
    request.release()

def start_camera(index):
    picam = Picamera2(index)
    print("Camera sensor modes: ", picam.sensor_modes)
    config = picam.create_preview_configuration(
        main={
            "size": (w, h),  
            "format": "YUV420",  
        },
        controls={"FrameDurationLimits": (16667, 16667),
                  "FrameRate": 60,
                  "ExposureTime": 2000,
                  "Saturation": 0,
                  "AwbEnable": False, 
                  #"AnalogueGain": 8.0,
                  })
    print(f"camera {index} main config: ", config["main"])
    picam.start(config)
    time.sleep(0.5)
    return picam 

How should I display the resulting array? The same as before, using cv2.imshow?

Thank you.

@Raccoon987
Author

Raccoon987 commented Sep 30, 2024

[Plot: intensity of the same pixel from camera 1 (x-axis) vs. camera 2 (y-axis), at fixed analog gains of 15 (blue) and 5 (red)]

The x-axis represents the intensity of the pixel at [i, j] from the first camera, and the y-axis represents the intensity of the same pixel from the second camera. Blue dots show data with both cameras set to a 15 ms exposure and a fixed analog gain of 15. Red dots represent data with both cameras set to the same 15 ms exposure but with a fixed analog gain of 5.
All points lie above a dashed diagonal line because the luminous flux between the two cameras differs by up to 10% or more. The relationship is nonlinear, but I can easily equalize the image intensity using this curve.
However, explicitly setting the analog gain disrupts synchronization. When the gain isn’t fixed, each camera independently chooses its gain, causing the curve to shift—sometimes below the dashed diagonal if the 'weaker' first camera has a much higher gain than the second.
For each frame, we get two sets of metadata. For each pair of [i, j] pixels, their intensities fall on a curve like the one shown in the image, but the curve's position and shape depend on the camera parameters stored in the metadata.

I would like to:

  1. For a known gain difference between the cameras and other information from the frame metadata, be able to reproduce the full curve. Or get a function like: camera1_intensity = F(camera1_gain, camera2_gain, camera1_metadata, camera2_metadata)(camera2_intensity)
    Afterward, I can create a lookup table for each of the 255 intensity values.
  2. (Optional) Flatten this curve to achieve a linear relationship.

The imx296.json file has the following algorithms:

"rpi.black_level"
"rpi.lux"
"rpi.dpc"
"rpi.noise"
"rpi.geq"
"rpi.denoise"
"rpi.awb"         Turn off by setting "AwbEnable": False in camera controls or "rpi.awb": {"bayes": 0} in .json file
"rpi.agc"
"rpi.alsc"
"rpi.contrast"    turn off by setting "x.rpi.contrast"
"rpi.ccm"
"rpi.sharpen"
"rpi.hdr" 

Since AWB and contrast are already off, what else can I disable to achieve a linear grayscale intensity response?
Also, how can I predict the curve's position based on the frame metadata and the gain difference between the cameras?

@davidplowman
Collaborator

Hi again, a few things to comment on here.

Firstly, your YUV420 modifications looked OK to me. It's more efficient because the hardware does the conversion for you, rather than doing it slowly in software. OpenCV should understand and display the single channel greyscale image directly.

As regards exposure comparisons, it might be worth looking at some raw images, which you can capture in DNG files. This is exactly what comes from the sensor. You should find that, after subtracting the black level, this is exactly linear in both exposure time and analogue gain (until pixels start to saturate).
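
As a reference point, here is a minimal sketch of grabbing such a DNG with Picamera2; the control values are illustrative, and the configuration must include a raw stream.

from picamera2 import Picamera2

picam2 = Picamera2(0)
# Include a raw stream so the unprocessed Bayer data is available, and fix
# exposure/gain so repeated captures are comparable.
config = picam2.create_still_configuration(
    raw={}, controls={"ExposureTime": 2000, "AnalogueGain": 4.0})
picam2.configure(config)
picam2.start()
request = picam2.capture_request()
request.save_dng("camera0.dng")  # raw data: linear in exposure/gain after black-level subtraction
request.release()
picam2.stop()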

I'm assuming your graphs are referring to the processed output images. By far the biggest non-linearity here is controlled by the rpi.contrast algorithm, so disabling that is the first thing. Other algorithms may have an effect, and you could try disabling those too - maybe rpi.alsc, rpi.ccm, rpi.sharpen (it might only be rpi.black_level that's essential, but obviously if removing any causes it to go horribly wrong then you'll need to put those back). The "x." trick should work for all of them.
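
If editing the file under /usr/share/libcamera is inconvenient, here is a sketch of the same "x." trick done in Python at startup; it assumes the version-2 tuning-file layout with an "algorithms" list, as used on the Pi 5.

from picamera2 import Picamera2

# Load the stock tuning file, rename the algorithms to be disabled, and start
# the camera with the modified tuning instead of the on-disk file.
tuning = Picamera2.load_tuning_file("imx296.json")
for algo in tuning["algorithms"]:
    for name in ("rpi.contrast", "rpi.ccm", "rpi.sharpen"):
        if name in algo:
            algo["x." + name] = algo.pop(name)

picam2 = Picamera2(0, tuning=tuning)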

I still don't understand why changing the analogue gain should cause the sync to change. Perhaps I could ask @njhollinghurst to comment on that?

@Raccoon987
Author

Working with raw images was one of the solutions I considered. Still, it's not the easiest, because handling the raw signal means manually re-implementing some useful preprocessing algorithms (such as rpi.denoise and rpi.agc). Apart from the other algorithms, I would still need to denoise and amplify the weak raw signal. How can I demosaic the raw signal into a grayscale image and then apply denoising and enhancement in real time, without saving the images to disk? Any advice or third-party libraries?

Yes, please ask @njhollinghurst to join this discussion. This is an important issue to resolve, as it could help others who want to synchronize two cameras with an external trigger board.

@davidplowman
Collaborator

I only really suggested looking at some raw images to check that exposure and gain cause exactly linear changes in pixel level. But it should be possible to emulate this in the processed output images by disabling other stages (most obviously rpi.contrast). It might be worth experimenting with just a single camera, where the image doesn't change, to confirm that this really works.

@njhollinghurst
Contributor

With regard to the timestamps... Do you have an independent way to check if the cameras are synchronized? For example by filming a mobile phone stopwatch application.

The timestamps actually record when the start of the frame was received by the Raspberry Pi. With a rolling-shutter camera, it's closely related to the exposure time. But a global-shutter camera has the ability to retain the frame internally for several milliseconds (I don't know why it might do this, but it's theoretically possible) so there is room for doubt.

@Raccoon987
Author


[Plot: camera 1 vs. camera 2 pixel intensities with "rpi.contrast" disabled]

It seems that adding 'x.' to 'rpi.contrast' makes the response linear enough. Thank you.

@Raccoon987
Author

Raccoon987 commented Oct 1, 2024

In the early stages of my work, I used a 'running lights' setup — 10 LEDs arranged in a line. Each LED emits light for a certain period before turning off, while the next LED in line begins to emit. I can control the duration each LED stays on, ranging from 100 microseconds to 10 seconds. I used this setup to check camera synchronization, and everything seemed to work well. Afterward, I relied solely on the frame metadata timestamps.

When I don't specify 'AnalogueGain' in the camera controls, the timestamp difference between frames from the first and second camera is minimal (up to 20 microseconds). However, explicitly setting the same or a different 'AnalogueGain' increases the timestamp difference by three orders of magnitude. Similarly, setting different exposures causes the synchronization to break down within 5-10 seconds: initially the timestamp difference is small, but after a short time it increases dramatically.

[Photo of the LED setup; video attachment: Sync.mp4]

@njhollinghurst
Contributor

I'm guessing that one of 3 things is going wrong:

  • The cameras are capturing images at unexpected times when controls are frequently set
  • The timestamps are reported incorrectly when controls are frequently set
  • One or both of the pipelines is dropping frames, causing timestamps to jump by a whole number of frames

Is it possible to repeat the LED experiment when either one or both cameras are having their analogue gains frequently set -- do the images go out of sync, or only the timestamps? Is the error an integer number of frame intervals? How does the error evolve over time?

Don't try to change the shutter duration using the API -- it should be fixed and should match the duration of the trigger pulse.

@Raccoon987
Author

It's possible to repeat the LED experiments, but I'm not sure why you said "controls are frequently set". In line 33 I configure both cameras and start them once; after that, in an infinite loop, I capture and release requests. It looks like the controls are only set once, right? With line 25 commented out, synchronization works. When I uncomment that line, synchronization fails.

01:  from picamera2 import Picamera2
02:  import threading
03:  import time
04:  import numpy as np
05:  import pprint
06:  
07:  
08:  
09:  def capture_timestamp(picam, result_list, index):
10:      request = picam.capture_request()
11:      metadata = request.get_metadata()
12:      ts = int(metadata["SensorTimestamp"])
13:      result_list[index] = ts
14:      request.release()
15:  
16:  def start_camera(index):
17:      picam = Picamera2(index)
18:      print("Camera sensor modes: ", picam.sensor_modes)
19:      config = picam.create_preview_configuration(
20:          controls={"FrameDurationLimits": (16667, 16667),
21:                    "FrameRate": 60,
22:                    "ExposureTime": 2000,
23:                    "Saturation": 0,
24:                    "AwbEnable": False, 
25:                    #"AnalogueGain": 8.0,
26:                    })
27:      print(f"camera {index} main config: ", config["main"])
28:      picam.start(config)
29:      time.sleep(0.5)
30:      return picam
31:  
32:  def check_sync():
33:      picams = [start_camera(i) for i in range(2)]
34:      results = [None] * len(picams)  
35:      try:
36:          c = 0
37:          while True:
38:              threads = [threading.Thread(target=capture_timestamp, args=(picam, results, index)) for index, picam in
39:                         enumerate(picams)]
40:  
41:              for thread in threads:
42:                  thread.start()
43:  
44:              for thread in threads:
45:                  thread.join()  
46:              c += 1
47:              if c % 20 == 0:
48:                  print("timestamp delta between two cameras: ", results[0],  results[1], abs(results[0] - results[1]))    
49:      except KeyboardInterrupt:
50:          # Ctrl + C to properly stop cameras
51:          print("Stopping cameras...")
52:      finally:
53:          [c.stop() for c in picams]
54:          print("Cameras stopped.")  
55:
56:
57:  # Run check_sync() with line 25 commented out. The cameras independently select their own analog gain.
58:  check_sync()
59:
60:>> 16000
61:>> 18000
62:>> 10000
63:>> 22000
64:>> ...
65:
66:  # Run check_sync() with line 25 uncommented. Both cameras have a fixed analog gain of 8.0.
67:  check_sync()
68:
69:>> 12000000
70:>> 11000000
71:>> 16000000
72:>> ...

@njhollinghurst
Contributor

njhollinghurst commented Oct 3, 2024

Ah, I thought you might have been changing the gain for every frame.

I'll try to reproduce this. What is the rate and duration of your trigger pulse? Does the trigger start before or after the time.sleep(0.5) on line 29?

@njhollinghurst
Contributor

njhollinghurst commented Oct 3, 2024

OK, I have reproduced this. The difference is usually close to one frame interval (16667000 ns). For the first few frames (which you didn't print) it can be larger.

The "AnalogueGain" parameter changes how many frames are discarded at startup. But I think the matching timestamps without "AnalogueGain" were largely fortuitous. For example, changing the duration of the sleep to 0.1 can also affect synchronization.

After starting, both cameras will be running in sync from the external trigger source. But the Python program is sleeping, so frames are being buffered and eventually dropped. capture_request() will pick up a buffered frame. There is no guarantee that both threads will see the same frame. Also, depending on CPU load, subsequent frames could be dropped.

EDITED: Here's a sketch of a possible hack to keep the channels in sync:

def capture_timestamp(picam, result_list, index, after):
    request = picam.capture_request(flush=after)
    metadata = request.get_metadata()
    ts = int(metadata["SensorTimestamp"])
    result_list[index] = ts
    request.release()
...
def check_sync():
    picams = [start_camera(i) for i in range(2)]
    results = [time.monotonic_ns()] * len(picams)
    try:
        c = 0
        while True:
            threads = [threading.Thread(target=capture_timestamp, args=(picam, results, index, results[1-index] + 1000000)) for index, picam in
                       enumerate(picams)]
...

The first frame may still be out of sync.

@Raccoon987
Author

According to the manual (https://www.raspberrypi.com/documentation/accessories/camera.html):

  1. Enable external trigger mode as superuser.
  2. Run the MicroPython code on the Pico microcontroller:
from machine import Pin, PWM
from time import sleep

pwm = PWM(Pin(28))
framerate = 60
shutter = 2000  # In microseconds
frame_length = 1000000 / framerate
pwm.freq(framerate)
pwm.duty_u16(int((1 - (shutter - 14) / frame_length) * 65535))
  3. Run check_sync() to check synchronization.

So the trigger starts before the time.sleep(0.5) on line 29.

I haven’t tested synchronization over extended periods, usually just running check_sync() and observing for 15-40 seconds. If the difference was small (10-20 microseconds) during this time, I assumed synchronization was established. However, I can't say the synchronization process is fully reliable—sometimes it failed, and I had to restart the MicroPython script. Through experimentation, I found the success-to-fail ratio to be around 9:1. Even when synchronization was successful, some frame pairs occasionally weren’t captured in sync. But since this happened once every 1-3 seconds, it was acceptable for me. With a 60 fps rate, I could skip those frames and work with the synchronized ones.

I am a bit concerned about your statement that "there is no guarantee both threads will see the same frame." On the Raspberry Pi forum, I found examples showing that threads are the only viable way to operate two cameras in sync. Without threads, the cameras capture frames independently, even with the external trigger—I verified this myself. In the Raspberry Pi camera manual (https://www.raspberrypi.com/documentation/accessories/camera.html), under the section "External Trigger on the GS Camera," it states that "The Global Shutter (GS) camera can be triggered externally by pulsing the external trigger (denoted on the board as XTR) connection on the board. Multiple cameras can be connected to the same pulse, allowing for an alternative way to synchronize two cameras." Based on this, I understood that synchronizing two GS cameras is a proven capability.

When you say 'no guarantee,' does that mean 'it may work sometimes, and sometimes not,' or that 'it works, but a small portion of frame pairs may occasionally be captured asynchronously'? Is there an alternative method to capture synchronized frames without using Python threads?

Thank you for your example, I’ll check it tomorrow. And thanks again for all the help.

@njhollinghurst
Contributor

njhollinghurst commented Oct 4, 2024

The example code shows the two cameras being operated by separate rpicam-vid processes; your code operates both cameras in the same program but it starts them at different times.

When you say 'no guarantee,' does that mean 'it may work sometimes, and sometimes not,' or that 'it works, but a small portion of frame pairs may occasionally be captured asynchronously'? Is there an alternative method to capture synchronized frames without using Python threads?

  1. The cameras are perfectly synchronized. But each ISP has a small queue of captured frames which can overflow during startup (especially from the camera which is started first). After an overflow the frame at the head of each queue will not necessarily correspond. So there's no guarantee that both threads will see the "same" frame in each iteration.
  2. Also, during a long running time, there's a very small risk that some frame will be "dropped" and not processed. When that happens, the two contexts may "slip" by one frame. A defensive program should be able to handle this.

Possibly the best solution is to capture a pair of frames, compare their timestamps, and while they differ by more than half a frame-time (8ms), discard the frame with the earlier timestamp (remember to release the request) and capture another one from that camera only.

@davidplowman
Collaborator

davidplowman commented Oct 4, 2024

You might want some code something like this:

from picamera2 import Picamera2

cam0 = Picamera2(0)
cam1 = Picamera2(1)
frame_rate = 30.0
frame_duration = 1000000 / frame_rate
config0 = cam0.create_preview_configuration(controls={'FrameRate': frame_rate})
config1 = cam1.create_preview_configuration(controls={'FrameRate': frame_rate})
cam0.configure(config0)
cam1.configure(config1)
cam0.start()
cam1.start()

while True:
    req0 = cam0.capture_request()
    req1 = cam1.capture_request()
    while True:
        ts0 = req0.get_metadata()['SensorTimestamp'] / 1000  # use microseconds
        ts1 = req1.get_metadata()['SensorTimestamp'] / 1000
        if ts0 + frame_duration / 2 < ts1:  # req0 too early, next frame should match better
            req0.release()
            req0 = cam0.capture_request()
        elif ts1 + frame_duration / 2 < ts0:  # req1 too early
            req1.release()
            req1 = cam1.capture_request()
        else:
            break

    print("Frames at times", ts0, "and", ts1, "difference", ts0 - ts1)
    req0.release()
    req1.release()

I'd have thought at 60fps you might see some frame drops, but you'll have to see how it goes.

@Raccoon987
Author

Both examples you provided work perfectly. Based on the timestamp differences, the cameras are capturing frames synchronously, both with and without explicitly setting the analog gain. It seems that this issue is now resolved. I appreciate your assistance — thank you!

However, there’s still one important question I haven’t been able to resolve: Is there a way to allow the first camera to automatically select the optimal analog gain (with AeEnable=True) while locking the second camera to the same gain value?
The first camera would choose a gain value based on the AGC/AEC algorithm, and the second camera would set its gain strictly to match the first camera's choice.
From our previous discussion, it seems this isn’t possible — or at least not in a straightforward way. Is that correct?

@njhollinghurst
Contributor

David's version is better - it corrects the mismatch before you get the output, whereas my code only detects it afterwards!

Yes, it is possible to copy the analogue gain from one camera to the other inside the main loop (roughly as suggested on the forum), with a scale factor if necessary - but there would be a lag of 3 or 4 (?) frames, due to the way controls and frames get queued. If the scene's brightness changes only slowly, that might be acceptable.
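
A rough sketch of that idea, grafted onto David's matching loop above (cam0 runs its own AGC, cam1 follows; gain_scale is a hypothetical factor for the flux difference, and the copied gain only takes effect a few frames later):

gain_scale = 1.0  # hypothetical factor to compensate the ~10% flux difference

while True:
    req0 = cam0.capture_request()
    req1 = cam1.capture_request()
    # (timestamp matching from David's example above would go here)
    gain0 = req0.get_metadata()["AnalogueGain"]
    # Drive camera 1 from camera 0's AGC result; it is applied a few frames later.
    cam1.set_controls({"AnalogueGain": gain0 * gain_scale, "AeEnable": False})
    req0.release()
    req1.release()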

@Raccoon987
Author

In my situation, the brightness can fluctuate unpredictably, so I don’t think I can afford a 3-frame delay. Is there any way to handle this at a lower level, without using Picamera2? For example, could I instruct the AGC algorithm of the second camera to use the sensor data, statistics, and metadata from the first camera to determine its gain value?

[Fig. 17 from the "Raspberry Pi Camera Algorithm and Tuning Guide", page 55]

@davidplowman
Collaborator

It's fairly fundamental to libcamera that cameras run separately, often in separate processes, and can't influence one another.

To get round that, I think you'd have to get involved at a fairly low level. For example, whenever the AGC/AEC selects new exposure/gain values, you wouldn't apply them immediately; instead you'd broadcast the new values and then have a "client AGC/AEC algorithm", perhaps running on both devices, that would receive these messages and could apply them at the same time. So there'd be some work involved.

I wonder if there might still be some alternatives. Here are a couple of ideas.

  1. Could you run one camera at a slightly different AGC/AEC level to the other? The easiest way to accomplish this would be to set an EV value. For example, if you wanted camera 2 to be 10% brighter (in linear terms, before gamma is applied), you would set the EV to ln(1.1)/ln(2) = 0.1375. So you'd do cam.set_controls({'ExposureValue': 0.1375}).

  2. Are you still running with "rpi.contrast" disabled? If so, you could request a low resolution image from one or both of your cameras, and use the Y channel of that image to do a simple exposure/gain calculation (e.g. divide a target Y value by the achieved value to get the relative gain). It may not be quite as responsive as the built-in algorithm, but you should at least have it running synchronised.
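
A rough sketch of idea 2, assuming a 'lores' YUV420 stream is configured, rpi.contrast is disabled so Y is roughly linear, AeEnable is off on this camera, and target_y is chosen empirically (this is a simple proportional correction, not the built-in AGC):

import numpy as np

target_y = 80.0  # hypothetical target mean Y level (0-255)

def update_gain(picam, request, current_gain, min_gain=1.0, max_gain=16.0):
    yuv = request.make_array("lores")
    lores_w, lores_h = picam.camera_configuration()["lores"]["size"]
    y_plane = yuv[:lores_h, :lores_w]            # Y plane is the top 'height' rows
    mean_y = max(float(y_plane.mean()), 1.0)     # avoid division by zero
    new_gain = current_gain * target_y / mean_y  # divide target by achieved level
    new_gain = float(np.clip(new_gain, min_gain, max_gain))
    picam.set_controls({"AnalogueGain": new_gain})
    return new_gain

# Idea 1 for comparison: an EV offset of log2(1.1) = ln(1.1)/ln(2) ≈ 0.1375 asks the AGC
# for a target roughly 10% brighter in linear terms, via cam.set_controls({"ExposureValue": 0.1375}).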

@Raccoon987
Author

Raccoon987 commented Oct 8, 2024

I found your idea of 'running one camera at a slightly different AGC/AEC level using the ExposureValue' very interesting, so I tested it yesterday. I understand this is related to gamma correction, where I_out = I_in**(1.2), but could you explain the use of ln(1.1)/ln(2) in more detail? For a 20% difference in light fluxes, shouldn't the value be ln(1.2)/ln(2)?

Here are the prerequisites:

  • Fixed light flow from a bright lamp.
  • A stationary reflective surface with a grayscale gradient (test chart image: grayscale_gradient_a3-min).
  • camera_0: weaker signal.
  • camera_1: stronger signal.
  • camera list = [camera_0, camera_1]

I got good results (equal pixel intensity in both cameras' frames) with the ExposureValue list set to [0., 0.1375] (passing the 0.1375 value to the stronger camera's set_controls), and AeEnable set to True.

However, when AeEnable was set to False, or when I explicitly set AnalogueGain to a fixed value for both cameras (e.g. AnalogueGain = 4), the pixel brightness varied.

I wanted to ask about the meaning of the AeEnable camera control. The manual says it 'allows the AEC/AGC algorithm to be turned on or off. When off, there are no automatic updates to the camera’s gain or exposure settings.'
My 'FrameDurationLimits' are set to (16667, 16667), 'FrameRate' is 60, and 'ExposureTime' is 2000. These parameters must be set for both cameras, so the AEC/AGC algorithm can only adjust the gain. If I fine-tune the 'ExposureValue' to balance the difference in light flux, then it seems that manually setting a fixed, equal AnalogueGain for both cameras should also produce identical images. But that is not the case.

Also, AeEnable can be set to True, False, or None. What does the None value mean?

I could consider the issue solved by tuning the optimal ExposureValue, setting AeEnable to True, and leaving AnalogueGain unspecified. But I know that in some cases the cameras can choose a combination of AnalogueGain values that results in equal image intensities even without the 'ExposureValue' parameter: for example, the "weaker" camera could use a higher AnalogueGain and the stronger camera a lower one to balance the light flux difference, although this only works under specific lighting conditions.
What I need is a robust solution: set the optimal 'ExposureValue' for one of the cameras and get equal images for all lighting levels and spatial distributions.
