I was looking for a way to ACK that a capture has started and that it has ended from a Python script using the API, since I am trying to automatically start and stop captures in the Logic 2 software. I found the post linked above, which basically encapsulates my question, except that it relates to the Logic 1 software. So I was wondering: has the feature described in that post been implemented in Logic 2?
@joshmansky Using our automation script linked below as an example:
https://saleae.github.io/logic2-automation/getting_started.html
The capture starts directly after the with manager.start_capture() function call, and the capture ends when the capture.wait() function exits.
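For example, here’s a minimal sketch adapted from that guide (the port, device ID, and channel settings are placeholders for your setup):

```python
from saleae import automation

# Minimal sketch adapted from the getting-started guide.
# The port, device_id, and channel settings are placeholders.
with automation.Manager.connect(port=10430) as manager:
    device_config = automation.LogicDeviceConfiguration(
        enabled_digital_channels=[0, 1],
        digital_sample_rate=10_000_000,
    )
    capture_config = automation.CaptureConfiguration(
        capture_mode=automation.TimedCaptureMode(duration_seconds=5.0),
    )
    with manager.start_capture(
            device_id='F4241',  # placeholder device serial
            device_configuration=device_config,
            capture_configuration=capture_config) as capture:
        # Recording begins around the time start_capture() returns...
        capture.wait()
        # ...and has finished once wait() returns.
```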
Having said that, I can see how this behavior might be too limited for your needs (given the topic that you linked). If you’re looking for an explicit ACK signal at the exact moment a capture begins, we unfortunately have not implemented that capability.
If the behavior I mentioned above isn’t what you were looking for, can you share more details about why you need this capability? We’d love to learn more about your use case so we can get an idea of what kind of problem we would be attempting to solve.
Thanks for the quick response. I would then urge that this be included in a later update to the Logic 2 Python API, since there is no good fix I can make myself without changes to the API. Here is why it matters: I call .start_capture(), immediately test some output (resulting in digital pulses), and then, once that testing concludes, immediately call .stop(), since I am using manual capture mode. However, because I currently cannot confirm that data is flowing once start_capture() executes, or that it has stopped flowing once I call stop() (there is no way to ACK either fact), I have to sleep for 0.25 seconds before and after testing the outputs to ensure I get each pulse completely, i.e. the leading and trailing edges of the first and last pulses. While this works for now, it is not a foolproof fix. It would be much preferable to have a feature similar to National Instruments software, where you can acknowledge that data is moving in and out of the buffer when starting and stopping data acquisition, to ensure you are getting all of the outputs in their entirety.
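To make this concrete, here’s roughly what my script does (a sketch with placeholder device settings; run_output_test() stands in for my actual test routine):

```python
import time

from saleae import automation

def run_output_test():
    """Placeholder for my actual test routine that drives the digital pulses."""
    ...

with automation.Manager.connect(port=10430) as manager:
    with manager.start_capture(
            device_id='F4241',  # placeholder device serial
            device_configuration=automation.LogicDeviceConfiguration(
                enabled_digital_channels=[0],
                digital_sample_rate=10_000_000,
            ),
            capture_configuration=automation.CaptureConfiguration(
                capture_mode=automation.ManualCaptureMode(),
            )) as capture:
        time.sleep(0.25)   # padding: hope the device is actually sampling by now
        run_output_test()  # generate the digital pulses under test
        time.sleep(0.25)   # padding: make sure the trailing edge is captured
        capture.stop()     # end the manual capture
```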
@joshmansky Thanks for describing that for me. I’ll check with the software team to see if we have any other recommendations for you. For stopping the capture, adding some sort of delay after your data stops flowing may be your best option. Without an ACK signal, starting the capture seems like the tricky part, as a static delay is not foolproof. I’ll keep you updated with our recommendations.
Thanks for the details on your use case. I’ve just checked the source code to see exactly when we send the response to the start_capture request versus when exactly the device starts recording.
What I found is that it’s pretty close, but you’re right, start_capture can theoretically return before the hardware is recording.
start_capture is blocking for 95% of the initialization process, but at the tail end, we start another thread that allocates and queues the USB read requests and then sends the last command to the device to actually start sampling.
start_capture does synchronously configure the device and do most of the heavy lifting.
Given the architecture, the best place to be sure that the capture has started is when we’ve finished sending that start command message on the read thread. The firmware executes the command synchronously, so the device should be recording before it ACKs the start command.
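To illustrate that sequence, here’s some simplified pseudocode (not our actual source, and every name in it is made up):

```python
import threading
import time

# Simplified illustration of the sequence described above -- not Saleae's
# actual code; all names are hypothetical. It shows why start_capture()
# can return moments before the device is sampling.
device_sampling = threading.Event()

def read_thread():
    # Allocate and queue the USB read requests (simulated).
    time.sleep(0.001)
    # Send the final start command; the firmware executes it synchronously,
    # so the device is recording before it ACKs.
    device_sampling.set()

def start_capture():
    # ~95% of initialization happens synchronously here (simulated).
    time.sleep(0.1)
    # The tail end runs on a separate thread.
    threading.Thread(target=read_thread, daemon=True).start()
    # Returns now -- possibly before device_sampling is set.

start_capture()
print("start_capture returned; device sampling:", device_sampling.is_set())
```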
My guess is that the latency between start_capture returning in Python and the hardware actively recording should be extremely short; however, we would really need to characterize that to be sure.
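If you wanted to characterize it on your end, one option (sketch only; emit_marker_pulse() would be your own hook into whatever drives your test signal, not part of our API) is to fire a known pulse the moment start_capture returns and then check whether its leading edge made it into the capture:

```python
import time

from saleae import automation

def emit_marker_pulse():
    """Hypothetical hook into your own hardware that drives a single pulse."""
    ...

with automation.Manager.connect(port=10430) as manager:
    with manager.start_capture(
            device_id='F4241',  # placeholder device serial
            device_configuration=automation.LogicDeviceConfiguration(
                enabled_digital_channels=[0],
                digital_sample_rate=10_000_000,
            ),
            capture_configuration=automation.CaptureConfiguration(
                capture_mode=automation.ManualCaptureMode(),
            )) as capture:
        emit_marker_pulse()  # fire the pulse as soon as start_capture returns
        time.sleep(0.25)
        capture.stop()
        # Export the data and check whether the marker's leading edge was
        # recorded; repeat many times to estimate how often it is missed.
        capture.export_raw_data_csv(directory='.', digital_channels=[0])
```

Repeating that loop a few hundred times would give a rough distribution of how often (if ever) the first edge is lost.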
We actually chose this API design for this specific purpose, to support what you’re describing. I’ve added this to our feedback for the next iteration of the interface.
I’d like to say that any remaining delay is going to be smaller than the overhead of responding to the gRPC call, but I can’t guarantee it, and without characterizing it myself, there could be something in there that I’m missing completely.