Replies: 5 comments 3 replies
-
I think I figured out something that might work! There are these properties of the jack client called `last_frame_time` and `frames_since_cycle_start`. If I store these params on every process call, alongside `datetime.datetime.now()`, I think I can work out the relationship between frame times and clock time. Then I think I can assume that the sound comes out of the speakers NPER blocks after the process call.
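For what it's worth, here's a minimal pure-Python sketch of that bookkeeping, assuming you record `(last_frame_time, datetime.now())` pairs at the top of each `process()` callback. The constants and function names are made up for illustration; the sample rate, block size, and NPER values just mirror the numbers discussed in this thread:

```python
import datetime

SAMPLERATE = 192000   # assumed jackd sample rate
BLOCKSIZE = 1024      # assumed jackd period (block) size
NPER = 3              # assumed number of buffered periods

def frame_to_walltime(frame, pairs):
    """Estimate the wall-clock time of a given jack frame number.

    `pairs` is a list of (last_frame_time, datetime) tuples recorded
    in each process() callback. Uses the most recent pair and
    extrapolates at the nominal sample rate.
    """
    ref_frame, ref_time = pairs[-1]
    offset_s = (frame - ref_frame) / SAMPLERATE
    return ref_time + datetime.timedelta(seconds=offset_s)

def estimated_speaker_time(pairs):
    """Assume audio written during this process() call reaches the
    DAC NPER blocks after the block that just started."""
    ref_frame, _ = pairs[-1]
    return frame_to_walltime(ref_frame + NPER * BLOCKSIZE, pairs)
```

In a real client you would append a new pair on every callback and re-fit (or at least re-anchor) the frame-to-clock relationship periodically, since the audio clock and the system clock drift relative to each other.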
-
It would still be cool if I could figure out how to pulse a pin at the exact start of each block, though. I can't quite make sense of the whole "master timebase" thing and where it comes from, or even whether it's a software clock or a hardware clock.
-
**Jackd timing**

Yes! So there are some calls to get specific transport timing in the client, e.g. autopilot/autopilot/stim/sound/jackclient.py Line 372 in 04b5968. I used this to do exactly that — pulse a pin exactly when a sound starts (autopilot/autopilot/stim/sound/jackclient.py Line 455 in 04b5968; autopilot/autopilot/stim/sound/jackclient.py Line 578 in 04b5968) — and that gets it within 1 ms (that's also using pigpio scripts, which have a bit of startup latency). From those values you can tell when the frame you feed into the buffer will get pushed to the card. Ideally what I had wanted to do was to directly connect a jack port to pigpiod so you could just have jack flip the pin, but I never quite got there.

**pigpio timestamps**

The whole deal there is that pigpio sends up the event and the tick for every callback. The tick is the 32-bit "seconds" part of a system timestamp, so you need to send another 32-bit int that has the microseconds part. The strategy is: if you get a paired tick and timestamp, you can use the offset between them to convert between ticks and timestamps. You can see how those work by comparing the fork to the base branch: joan2937/pigpio@master...sneakers-the-rat:pigpio:master. When the pi class is instantiated, it calls a synchronize method, which gets several pairs of ticks and system timestamps. Then if there is some unexpected tick (e.g. the ticks wrap around, as they do), the synchronize call is made again.
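A hedged sketch of that tick-to-timestamp conversion, assuming (as in stock pigpio) that the tick is an unsigned 32-bit microsecond counter that wraps, and that you have one synchronized `(tick, timestamp)` pair in hand. The helper names are hypothetical, not from the fork:

```python
TICK_WRAP = 2 ** 32  # pigpio ticks are 32-bit and wrap (~every 72 min)

def make_tick_converter(ref_tick, ref_timestamp_us):
    """Given one paired (tick, system-timestamp-in-microseconds)
    observation, return a function converting later ticks to
    timestamps. Modular arithmetic handles a single wraparound;
    in practice you'd re-synchronize when ticks jump unexpectedly,
    as the fork's synchronize method does."""
    def tick_to_timestamp_us(tick):
        delta = (tick - ref_tick) % TICK_WRAP
        return ref_timestamp_us + delta
    return tick_to_timestamp_us
```

Averaging several pairs (rather than trusting one) reduces the jitter from the socket round-trip that delivers each tick.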
It depends! You can configure your NTP client to behave in different ways ( https://wiki.auto-pi-lot.com/index.php/NTP ). Probably the best thing to do would be to synchronize before the experiment and then turn off NTP during it to avoid that. +1 to the eternal need to actually refactor the task-running logic to add hooks for the different phases.
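One possible shape for that sync-then-freeze approach, assuming the Pi uses systemd-timesyncd (chrony or ntpd have their own equivalents; see the wiki page above for client-specific configuration):

```shell
# Force a resync just before the session starts
sudo systemctl restart systemd-timesyncd
timedatectl show-timesync            # optionally inspect sync status

# Stop the client so the system clock free-runs (no steps or slews)
# for the duration of the experiment
sudo systemctl stop systemd-timesyncd

# ... run the experiment ...

# Resume syncing afterwards
sudo systemctl start systemd-timesyncd
```

The clock will still drift while free-running, but it won't jump, which is usually the bigger hazard for aligning timestamps mid-session.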
here's a guide! https://wiki.auto-pi-lot.com/index.php/NTP
I assume yes, but I haven't measured. The pigpio method has to do a whole socket-based message exchange with the daemon, so I assume that would be slower than directly calling Python's time methods.
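A quick way to put numbers on that, using only the standard library; the pigpio comparison is left commented out since it needs a running pigpiod, and the helper name is made up:

```python
import time
import datetime

def mean_call_us(fn, n=1000):
    """Rough mean per-call latency of a zero-argument callable,
    in microseconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) * 1e6 / n

# Local clock sources for comparison:
#   mean_call_us(datetime.datetime.now)
#   mean_call_us(time.monotonic)
# pigpio's pi.get_current_tick adds a socket round-trip to pigpiod
# on top of whatever the local calls cost:
#   mean_call_us(pi.get_current_tick)   # requires a pigpio.pi() handle
```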
-
In the past we have used hardware (the chips used in sound equipment to show levels) to convert a volume into a TTL and send that back into a digital IO, to get the exact time that a sound passed some dB level. The "problem" with asking jackd for the onset is that if the sound ramps, or has a silent period at the beginning, the sound onset will not be when the animal hears it.
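A software analogue of that level-detector idea, as a sketch: scan the waveform you handed to jack for the first window whose RMS level crosses a dB threshold, and treat that as the onset the animal could actually hear. The function name and defaults are made up for illustration:

```python
import math

def onset_index(samples, threshold_db=-40.0, win=64):
    """Return the sample index of the first window whose RMS level
    crosses `threshold_db` (dB relative to full scale; samples are
    assumed to lie in [-1, 1]). Returns None if nothing crosses.

    Like the level-detector chip, this reports when the sound
    becomes audible, not when the (possibly ramped or zero-padded)
    buffer starts playing."""
    threshold = 10.0 ** (threshold_db / 20.0)
    for start in range(0, len(samples) - win + 1, win):
        window = samples[start:start + win]
        rms = math.sqrt(sum(s * s for s in window) / win)
        if rms >= threshold:
            return start
    return None
```

This only corrects the software-side estimate, of course; it can't see analog-path latency the way the hardware detector can.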
-
Thanks @sneakers-the-rat! I'm still processing a lot of what you wrote. Before you wrote back, I also posted a related question on the jackclient-python github page: spatialaudio/jackclient-python#118 (comment)

Concerning audio, I think the best way for now is just for me to log

To synchronize my raspberry pi with my neural data acquisition system, I'm thinking the ideal way would be if I could sample a hardware clock on the pi. Do you know if there's a way to expose a system clock (or a downsampled version of it) on a GPIO pin? But for this to be useful, I would need a way to log events from Autopilot according to their time on that very same clock. If there's the potential for drift between the hardware clock and the values returned by get_current_tick or time.now, then this isn't useful.

If sampling the hardware clock doesn't make sense, then what I would do is raise a GPIO at every trial start, capture these times with the neural data acquisition system, and linearly fit them to the isoformatted trial start times in the Autopilot logs. This is basically how I've handled this before.

For synchronizing between the Parent and Child, NTP sounds good — I didn't realize it could be that accurate! I could also connect them on a shared GPIO line and store the time of pulses on each one. It looks like pigpiod checks the GPIO lines every 5 µs, even though callbacks are only generated every 1 ms, so presumably these tick times are pretty accurate.
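The linear-fit step described above can be as simple as an ordinary least-squares line through the (acquisition pulse time, logged trial time) pairs. A self-contained sketch with hypothetical names, assuming both time series have already been converted to seconds:

```python
def fit_clocks(pulse_times, log_times):
    """Least-squares linear fit mapping neural-acquisition pulse
    times to Autopilot log times:

        log_time ~= slope * pulse_time + intercept

    With many trials this absorbs both the constant offset between
    the two clocks and any linear drift between them."""
    n = len(pulse_times)
    mx = sum(pulse_times) / n
    my = sum(log_times) / n
    sxx = sum((x - mx) ** 2 for x in pulse_times)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(pulse_times, log_times))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept
```

Checking the residuals of the fit is also a useful sanity check: if they're structured rather than jitter-sized, something other than offset-plus-drift is going on.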
-
Hey all, I'm embarking on a project to get my synchronization as precise as possible between the neural data I record, the video data of the behavior, and behavioral timestamps from Autopilot. The goal is to make PSTHs from auditory cortical neurons in response to sound, so I definitely need 5 ms precision, but 1 ms precision or better would be ideal. I re-read section 4.1 of the preprint, but what I'm trying to do is a little different, because for this, I don't need to minimize latency, I just want to have as precise an estimate as possible of when the sounds actually come out of the speaker.
I'm wondering if there's a way to get the EXACT tick time at which jack audio plays a frame? As I understand it, jackd calls `process` every frame time (autopilot/autopilot/stim/sound/jackclient.py, Line 358 in 04b5968). I think a frame is 1024 samples at 192 kHz, so that's every 5.3 ms. These frames get loaded into a buffer of length 3 frames (`nper`), so presumably they get played about 15.9 ms after they get loaded. I also think I understand that the frame size and buffer size are configurable options when booting jackd (autopilot/autopilot/agents/pilot.py, Line 101 in 04b5968).

So I'm wondering if jackd outputs the tick time that it plays each frame in some way. If that's not possible, I could just take a tick time (or datetime.datetime.now) at the time that the sound is loaded into the buffer by `process`. I think this will precede the time it actually plays by 15.9 ms or so, right? And also, there will be a bit of random variability here, because `process` is not guaranteed to be called at exactly the same time as the current frame is being played by jackd; it could be delayed by other things, which is why we need that 3-frame buffer in the first place.

On a somewhat related note, Jonny, can you say a bit more about the choice to have your pigpio fork "return isoformatted timestamps rather than tick numbers" (autopilot/autopilot/hardware/gpio.py, Line 7 in 04b5968)? I assume that the raspberry pi occasionally synchronizes its clock with network time using `ntp` or similar, and so the relationship between ticks and isoformatted timestamps might change during an experiment, right? I will also need to synchronize times from Parent and Child pis, so I need to figure this out at some point. Is there a big performance difference between using datetime.datetime.now() and using something like pigpio's get_current_tick?

thanks!!