Automating long recordings? Using multiprocessing for the saves?

Hello!

I’ve got a Saleae Logic Pro 8 I would like to acquire multiple hours of data from after a digital trigger.

I did find mention in the documentation about being able to automate long recordings in Python, but no real information other than this short snippet:

The software already has exceptionally deep buffer capabilities. However, there are still cases where longer recording would be preferred, for hours or even days. Since the software can't be used to record continuously for that length of time, the long capture must be broken into a series of shorter captures that are saved to disk. That results in small delays between captures that will result in lost data; however, in most cases, the save and capture restart time is well under 1 second. This operation usually only needs to be performed once an hour.

For this operation, you can either use the existing sample code or create your own application from scratch. The basic process is to use one command over and over again. That command is "CAPTURE_TO_FILE". See the documentation for more details. Once the capture has completed and the file has been saved, the software will reply over the socket "ACK". Then the software is ready to receive a new capture-to-file command.
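For reference, the capture loop that paragraph describes could be sketched roughly like this. This is untested and based only on the quoted description plus my assumptions about the legacy Socket API (the port number, the NUL-terminated command framing, and the `ACK` reply format are all assumptions), so treat it as a sketch, not working code:

```python
import socket

HOST, PORT = "127.0.0.1", 10429  # assumed default port of the legacy Socket API


def frame(command: str) -> bytes:
    """Frame a command for the legacy socket protocol (assumed NUL-terminated ASCII)."""
    return command.encode("ascii") + b"\0"


def capture_to_file(sock: socket.socket, path: str) -> None:
    # Send CAPTURE_TO_FILE and block until the software replies "ACK",
    # which per the docs means the capture finished and the file was saved.
    sock.sendall(frame(f"CAPTURE_TO_FILE, {path}"))
    reply = b""
    while not reply.rstrip(b"\0").endswith(b"ACK"):
        reply += sock.recv(4096)


# Repeated captures: each iteration records one chunk and saves it, so the
# only lost data is the save/restart gap between consecutive files.
# with socket.create_connection((HOST, PORT)) as sock:
#     for i in range(10):
#         capture_to_file(sock, f"captures/chunk_{i}.logicdata")
```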

However, I could not find the existing sample code it mentioned, nor any other mention of that command anywhere else other than that single spot.

The issue I’m trying to solve right now is that while the file is saving, the acquisition stops for far longer than 1 second. Splitting the capture into multiple files with a brief pause between captures would be fine, but the current wait loses a lot of data.

I was considering moving the save/export steps to a separate process, but moving to multiple processes has proved difficult. I’m having trouble passing info from the capture session to the save process; trying to redefine things in the save process gives me the error “Cannot use Manager after it has been closed”.
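For what it’s worth, capture handles from the automation API wrap a live connection to the Logic software, so they can’t be serialized and shipped to another process through a multiprocessing queue, which is essentially what that Manager error is telling you. Here is a generic illustration of the underlying pitfall (no Saleae-specific code, just Python’s pickling rule; `CaptureHandle` is a made-up stand-in class):

```python
import pickle
import threading


class CaptureHandle:
    """Stand-in for an object that owns a live connection (here, a lock)."""

    def __init__(self):
        self._lock = threading.Lock()  # not picklable, just like a socket


handle = CaptureHandle()

try:
    # multiprocessing queues pickle objects under the hood before sending them
    pickle.dumps(handle)
except TypeError as exc:
    print(f"cannot send across processes: {exc}")
```

Threads avoid this entirely because they share the same memory and the same live connection, which is why a thread-based approach tends to be the path of least resistance here.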

Has anyone done anything like this, or have advice for me as I try to?

Here’s what I’ve got right now:

This code reads the trigger and records for 5 seconds, but it isn’t producing the file for analysis. I’m probably doing something wrong with the queue, but I thought I’d include it in case it’s helpful.

This code is meant to start the recording in one process, then save it in a second process. However, I’m getting that “Cannot use Manager…” error message when I try it this way.

Any help would be appreciated, thanks!

Sorry about that! The documentation seemed to refer to our older (no longer supported) Socket API interface for our legacy Logic 1.x software.

I’ve updated the support article below:

Hm… we’ll get this on our backlog to review your Python script and will run some tests on our end. I’ll keep you updated on our findings and recommendations.

Hi @bu5,

Good news, we tested this and confirmed it can be done with Logic 2. We recommend using the Python main thread to perform the captures, and using threads to save and close finished captures. Here is an example I tested a while ago which lets you export, save, and close captures in the background while the next capture is running. Let me know if you have any trouble with it!

from saleae import automation
import os
import os.path
from datetime import datetime
import threading

# Connect to the running Logic 2 Application on port `10430`.
# Alternatively you can use automation.Manager.launch() to launch a new Logic 2 process - see
# the API documentation for more details.
# Using the `with` statement will automatically call manager.close() when exiting the scope. If you
# want to use `automation.Manager` outside of a `with` block, you will need to call `manager.close()` manually.
with automation.Manager.connect(port=10430) as manager:

    # Configure the capturing device to record on digital channels 0, 1, 2, and 3,
    # with a sampling rate of 10 MSa/s, and a logic level of 3.3V.
    # The settings chosen here will depend on your device's capabilities and what
    # you can configure in the Logic 2 UI.
    device_configuration = automation.LogicDeviceConfiguration(
        enabled_digital_channels=[0, 1, 2, 3],
        digital_sample_rate=10_000_000,
        digital_threshold_volts=3.3,
    )

    # Record 5 seconds of data before stopping the capture
    capture_configuration = automation.CaptureConfiguration(
        capture_mode=automation.TimedCaptureMode(duration_seconds=5.0)
    )

    # Start a capture - the capture will be automatically closed when leaving the `with` block
    # Note: We are using serial number 'F4241' here, which is the serial number for
    #       the Logic Pro 16 demo device. You can remove the device_id and the first physical
    #       device found will be used, or you can use your device's serial number.
    #       See the "Finding the Serial Number of a Device" section for information on finding your
    #       device's serial number.

    threads = []
    for i in range(5):
        print(f'starting capture {i}...')

        capture = manager.start_capture(
            device_id='F4241',
            device_configuration=device_configuration,
            capture_configuration=capture_configuration)

        # Wait until the capture has finished
        # This will take about 5 seconds because we are using a timed capture mode
        capture.wait()

        def _worker(cap, index):
            # Store output in a timestamped directory
            output_dir = os.path.join(
                os.getcwd(), f'output-{datetime.now().strftime("%Y-%m-%d_%H-%M-%S")}-{index}')
            os.makedirs(output_dir)

            # Export raw digital data to a CSV file
            cap.export_raw_data_csv(
                directory=output_dir, digital_channels=[0, 1, 2, 3])

            # Finally, save the capture to a file
            capture_filepath = os.path.join(output_dir, 'example_capture.sal')
            cap.save_capture(filepath=capture_filepath)

            cap.close()

        # Pass `i` in explicitly so each thread keeps its own index - a closure
        # over the loop variable would see whatever value `i` holds when it runs.
        thread = threading.Thread(target=_worker, args=(capture, i))
        threads.append(thread)
        thread.start()
    # Wait for all background save/export threads to finish before the
    # `with` block closes the manager connection.
    for th in threads:
        th.join()