Is there a way to skip the export_data_table step, since it is extremely slow? In my use case it takes around 20-30 min to export around 60 s of data… This sadly makes my plan to use it for an integration test almost impossible…
So, are there plans to support exporting directly into a Python list instead, where each entry is a dict with the keys name, type, start_time, duration, "mosi", "miso"? If not, are there any other plans to export directly and skip the CSV step?
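For now my workaround is to round-trip through a temporary file and parse the CSV back into that structure myself, roughly like this (a sketch; the column names depend on which analyzers are attached, e.g. the SPI analyzer produces "mosi"/"miso" columns):

import csv
import os
import tempfile

def export_to_dicts(capture, analyzers):
    # Round-trip export_data_table() through a temporary CSV and
    # parse it back into a list of dicts (name, type, start_time,
    # duration, plus analyzer columns such as mosi/miso).
    with tempfile.TemporaryDirectory() as tmpdir:
        path = os.path.join(tmpdir, 'table.csv')
        capture.export_data_table(filepath=path, analyzers=analyzers)
        with open(path, newline='') as f:
            # DictReader yields every field as a string; convert
            # start_time/duration to float downstream as needed.
            return list(csv.DictReader(f))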
Also, I found what I believe is a bug, but I'm unsure where to report it…
When running a timed capture I am able to use more than 3 SPI analyzers, but when running a manual capture with more than 3 SPI analyzers it hangs every time and never finishes…
Tried with: logic2@2.4.1 with logic2_automation@1.0.5 and logic2@2.4.6 with logic2_automation@1.0.5 with the same result!
Edit1:
I accidentally ran a manual capture with capture.wait(), stopped it from within the program, and then it worked! So I tried using capture.stop() followed by capture.wait(), and that actually seems to fix my problem, even though your documentation clearly says to never combine these… Weird!
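For reference, this is roughly the shape that works for me now (a sketch; device ID, channels, and sample rate are placeholders):

from saleae import automation

with automation.Manager.connect(port=10430) as manager:
    device_config = automation.LogicDeviceConfiguration(
        enabled_digital_channels=[0, 1, 2, 3],
        digital_sample_rate=10_000_000,
    )
    capture_config = automation.CaptureConfiguration(
        capture_mode=automation.ManualCaptureMode(),
    )
    with manager.start_capture(
        device_id='YOUR_DEVICE_ID',
        device_configuration=device_config,
        capture_configuration=capture_config,
    ) as capture:
        # ... run the test stimulus here ...
        capture.stop()  # stop the manual capture first...
        capture.wait()  # ...then wait; only this combination returned
                        # for me with more than 3 SPI analyzers attached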
Edit2:
The above, from the code at least, only worked when I have what seems to be redundant…
@anid Sorry about the slow export times! Exporting to a CSV is fairly inefficient, especially assuming a dense data set within your 60 s capture. We should certainly consider export file types other than CSV.
I’d be happy to double check the export speeds you are seeing on your end. Could you share your capture file (.sal file format) with me? I can run a quick test to see if I’m seeing the same 20min-30min export time.
As for your bug report, I'd be happy to look into why that's happening in your automation script. Can you share a copy of the .py script file that shows the issue? If possible, feel free to remove the parts of your automation script that aren't needed to reproduce the issue.
The slow export time may actually just have been caused by too small a buffer size. I didn't know about the buffer_size_megabytes parameter (it's only 2 GB by default, right?); when I set it to 36 GB of RAM, a 60 s log completed exporting in about 300 s, which is certainly a lot better. Still, skipping the file and exporting directly into a Python format would be greatly appreciated and faster; it would also probably save a couple of disks from being totaled in CI/CD rigs by excessive writing!
It never used more than 2 GB for the raw log and 6 GB for the parsers; I guess it was swapping a lot of RAM to disk before…
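For reference, the parameter I mean (a sketch matching the numbers above):

from saleae import automation

# 36 GB capture buffer instead of the 2 GB default, for a 60 s timed capture.
capture_config = automation.CaptureConfiguration(
    capture_mode=automation.TimedCaptureMode(duration_seconds=60.0),
    buffer_size_megabytes=36 * 1024,
)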
I am unsure if I can give you the .sal file but I will see if I have the time to create a basic script with the same problem that I can share!
Sorry for the delay in replying. So we can get this solution (what you sent me in the logic2_automation-1.0.6…zip file) to work with a couple of modifications:
We tweaked the pyproject.toml file so that we could pin the dependencies on grpcio & grpcio-tools to the very specific versions we require.
I think you are missing a run-time dependency on protobuf. Otherwise the wheel file does not install protobuf if it is missing. So you should have:
[project]
name = "logic2-automation"
version = "1.0.6"
authors = [
    { name="Saleae, Inc.", email="support@saleae.com" },
]
description = "Library for using the Saleae Logic 2 Automation API"
readme = "README.md"
requires-python = ">=3.7"
classifiers = [
    "Programming Language :: Python :: 3",
]
dependencies = [
    "grpcio>=1.13.0",
    "protobuf>=X.YY.Z",  # <-- NEED THIS SET to whatever version works with grpcio 1.13.0
    "pywin32; platform_system == 'Windows'",
]
…
So is it possible for you to officially release the above, at least as an alpha or beta release? That way I can point my tools team to an 'official' download.
Ah, I didn’t realize we were depending on the protobuf dep from grpcio-tools previously. Here is a build that should include protobuf as an explicit dep: logic2_automation-1.0.6.tar.gz.zip (15.8 KB)
Thanks.
Do you think this will be published into some sort of official release or an alpha/beta release? Our tooling team would prefer getting the source via that mechanism versus being handed a developer release as an attachment for traceability and versioning purposes.
First of all, the Logic 2 Automation API is something I had been waiting on for some time; since I was still using Logic 1, I missed its release. It is fantastic, although I see some drawbacks too.
Being used to Logic 1, I was looking for a quick-capture feature with a preset configuration in an already open Logic 2 app; it is helpful for semi-automatic bench tests when developing a digital product. Unfortunately I could not find it. So I looked for a method to load saved presets, but with no success. The next step was to create the configuration from scratch in Python, but I had a problem with defining channel names; it did not work, although it seems to be implemented. By the way, is the grpc_channel_arguments (Optional[List[Tuple[str, Any]]]) property for this purpose?
Finally I made a configuration with default channel names and it worked nicely (see the sketch below). Logic 2 by itself is a huge step forward compared to Logic 1, so I am happy to use its Automation API.
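The working from-scratch setup looked roughly like this (a sketch; device ID, channels, sample rate, and duration are placeholders):

from saleae import automation

with automation.Manager.connect(port=10430) as manager:
    # Configuration built from scratch, leaving channel names at their defaults.
    device_config = automation.LogicDeviceConfiguration(
        enabled_digital_channels=[0, 1],
        digital_sample_rate=10_000_000,
    )
    capture_config = automation.CaptureConfiguration(
        capture_mode=automation.TimedCaptureMode(duration_seconds=5.0),
    )
    with manager.start_capture(
        device_id='YOUR_DEVICE_ID',
        device_configuration=device_config,
        capture_configuration=capture_config,
    ) as capture:
        capture.wait()  # timed capture: wait for the 5 s to elapse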
Just please consider my thoughts, and if there is already a solution for my concerns, I would love to hear from you.
Hi @markgarrison, I am getting this error when I run the "pip install logic2-automation" command:
ERROR: Could not find a version that satisfies the requirement logic2-automation (from versions: none)
ERROR: No matching distribution found for logic2-automation
@MR_E It was great chatting with you via email! I’ll post the solution you discovered here so that other users who run into this issue might solve it as well.
I realized from the PyPI webpage that Python 3.7 or greater is required for this package. As soon as I installed and launched Python 3.8, the package distribution was found and installed successfully.
Hi All - Has anyone used the API to invoke multiple analyzers? If so, are there specific settings that should be used? I'd like to use the API to call 4 analyzers at once, for instance, and have them displayed, but this is not happening. I assume it's me not calling it appropriately; a sketch of what I'm doing is below.
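This is the shape of what I'm calling, in case it helps (a sketch; the channel assignments are made up, and `capture` comes from an earlier manager.start_capture(...) call):

# Add four SPI analyzers to an existing capture.
spi_pins = [
    {'MISO': 0, 'MOSI': 1, 'Clock': 2, 'Enable': 3},
    {'MISO': 4, 'MOSI': 5, 'Clock': 6, 'Enable': 7},
    {'MISO': 8, 'MOSI': 9, 'Clock': 10, 'Enable': 11},
    {'MISO': 12, 'MOSI': 13, 'Clock': 14, 'Enable': 15},
]
analyzers = [
    capture.add_analyzer('SPI', label=f'SPI {i}', settings=pins)
    for i, pins in enumerate(spi_pins)
]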