Just a quick (potentially unrelated) question. I’ve been trying to add support for exporting .vcd files from Logic 2, as my workplace uses some other LAs, and VCD is the best universal file format.
Literally right as this update was released, I had been working on an LLA using the SDK to achieve this. However, would this new automation API allow me to pull the raw data? I see custom analyzer exports, but I'm a bit confused, as the reply functions are empty. In another comment you mentioned that this API likely won't support pulling the raw data anytime soon?
If I am to stick with the SDK, how can I automatically give the LLA all channels? It seems like the channels must be hard-coded?
A simple “rpc Ping (EmptyRequest) returns (PingReply) {}” would be handy for checking that the app is running. For long running tests, if the application crashes, we can just restart it. It could also be like a “rpc GetStatus(Empty) returns (GetStatusReply)” with information like whether it's currently capturing, the current device, open tabs (if that's applicable), the current app version, and any other nice-to-have info about the current state. If the GetStatusReply begins to grow with a lot of data, I would add a separate “Ping” rpc just to check that the app is running.
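Until a Ping/GetStatus RPC exists, a rough client-side liveness check can be approximated by simply attempting a TCP connection to the automation port. A sketch only: the default port 10430 comes from the examples in this thread, and a successful connect proves only that something is listening, not that the app is healthy.

```python
import socket

def logic2_is_running(host: str = "127.0.0.1", port: int = 10430,
                      timeout: float = 2.0) -> bool:
    """Rough stand-in for a Ping RPC: try to open a TCP connection
    to the automation server and report whether it succeeded."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This is exactly why a real Ping/GetStatus RPC would still be valuable: a port check cannot tell a healthy app from a hung one.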
Great idea - I like the idea of a request to get app information (like version) as a starting point.
in message LogicDeviceConfiguration:
I see no way to specify channel names. For manual review after tests, that would be very handy for us. Maybe a repeated ChannelOptions channels = 1;… message ChannelOptions { int32 channel_index = 1; string name = 2; double glitch_filter = 3 …
I’ve added this to our internal list of features for the next release.
To begin with this is not important I think, but maybe in the future (since protobuf is very forward/backwards compatible) you could add an rpc to select a preset with capture settings. Then the user can create some presets in the software and just select those from code, instead of specifying all the settings via rpcs every time. That would be a good beginner way to get complex settings set up remotely.
I agree, and we actually have this on our list already.
Lastly, perhaps some way to restore the whole application to default. I.e. close all captures, reset all channel settings (names, filters etc). Maybe that’s not necessary, just a thought. If you’re running many tests in succession you don’t want settings from a previous capture to hang around.
One of the goals was to make sure that tests would be reproducible, and not affected by application state. I’ll add this to our internal list, though, as resetting the app state might still be handy.
Since this is a public API, I think it's good to stick to the style guide.
Thanks for this feedback - these have all been updated.
This API does not give access to the raw data, LLAs and measurements are still the only way to access raw data.
The analyzer export function takes a file path where the export file will be saved. In the Logic 1.x API, we supported 2 modes - exporting an analyzer to a file, or streaming the export results over the socket back to the application directly.
In the Logic 2.x API, we decided not to support the second option, so we wouldn’t need to worry about the case where the export data was too large to easily handle from python.
We haven’t written this yet, but we want to write helpers which you can use to automatically save the export to a temporary path, then load the exported file into python (and parse the format), then delete the temporary file.
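A sketch of what such a helper could look like. The names `export_and_parse`, `export_fn`, and `parse_fn` are hypothetical, not the final API; the only part taken from the post above is the flow of exporting to a temporary path, parsing, and cleaning up.

```python
import csv
import os
import tempfile

def export_and_parse(export_fn, parse_fn, suffix=".csv"):
    # Export to a temporary path, parse the file into Python objects,
    # then delete the temp file. `export_fn` is whatever automation API
    # call writes the export (e.g. a bound capture-export method), and
    # `parse_fn` turns the open file into whatever structure you want.
    fd, path = tempfile.mkstemp(suffix=suffix)
    os.close(fd)
    try:
        export_fn(path)
        with open(path, newline="") as f:
            return parse_fn(f)
    finally:
        os.remove(path)
```

For a CSV export, `parse_fn` could be as simple as `lambda f: list(csv.DictReader(f))`.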
The new Logic 2 API allows you to configure the channels when you add the analyzer, so I wouldn't call this manual. But if your goal was to load saved files and automate export, I could see how that would be a problem, because we have no API yet to check which channels were enabled in a loaded file.
By the way - I would suggest you create an export converter, instead of an LLA, to export VCD captures. We provide a python sample that parses our raw binary export format, which is the fastest option:
You could just export the capture to binary, then feed the export file into a separate python script that converts from binary to VCD.
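For the binary-to-VCD side of that pipeline, the writer itself is small once the transitions are in memory. A minimal sketch, assuming you have already parsed the binary export into per-channel lists of (time, value) transitions (e.g. with Saleae's sample parser, which is not shown here); the function name and argument shape are my own:

```python
def write_vcd(path, channels, timescale="1 ns"):
    # `channels` maps a channel name to a list of (time, value) transitions,
    # with time in units of the timescale and value 0 or 1.
    # Identifier codes '!', '"', '#', ... are fine for a few dozen channels.
    ids = {name: chr(33 + i) for i, name in enumerate(channels)}
    events = sorted(
        (t, ids[name], v) for name, trans in channels.items() for t, v in trans
    )
    with open(path, "w") as f:
        f.write(f"$timescale {timescale} $end\n")
        for name, ident in ids.items():
            f.write(f"$var wire 1 {ident} {name} $end\n")
        f.write("$enddefinitions $end\n")
        last_t = None
        for t, ident, v in events:
            if t != last_t:
                f.write(f"#{t}\n")   # timestamp record
                last_t = t
            f.write(f"{v}{ident}\n")  # scalar value change
```

Merging all channels into one time-sorted event stream is what lets multiple channels share a single `#time` record, which most VCD consumers expect.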
Longer term, I really want to add support for custom python data export extensions. However that’s not on the roadmap yet, so it won’t happen anytime soon, unfortunately.
Hello, I have tried out the “getting started”.
My first tries yesterday stopped after “capture.add_analyzer(…)”!
Today, after restarting the Windows computer, the “getting started” example works fine.
Now I would like to generate an interface for C#.
Where can I download the current interface description (*.proto file)?
Or is the proto definition in “Update July 12th 2022” the current one?
It would be nice to have a version history in the *.proto file.
I noticed that the internal file format (as stored in the .sal zip) is different from the exported formats. How would I go about parsing the .bin files as unzipped from the .sal?
In the code for the new automation API, the fully processed analyzer export is labelled as export_analyzer_legacy which makes it seem as though that method is outdated. The examples also suggest that export_data_table should be preferred. Is there a particular reason behind this, or is it just referring to Logic 1’s analyzer export behavior?
The files inside the *.sal archive are stored using our internal format, which contains a LOT of information specific to accelerating rendering. Also, analog data doesn't have the DC calibration applied in that format. I strongly recommend not using those files in any 3rd party tools, since the format is undocumented and subject to frequent change.
However, you can automate the process of exporting those saved *.sal files to our binary export format, and we include python code to parse those exported files.
You could use code like this to automate the export:
from saleae import automation

file_path = '<absolute path to *.sal file>'
export_path = '<absolute path to directory where *.bin files will be exported>'

# Connect to the running Logic 2 Application on port `10430`
manager = automation.Manager(port=10430)

with manager.load_capture(file_path) as capture:
    capture.export_raw_data_binary(export_path)

manager.close()
Note: we did find a bug with this earlier this week. If you don't specify which channels to export, it is supposed to automatically export all channels. However, this only works if you don't have “gaps” in your enabled channels. For example, if you enable channels 0 and 1, the automatic channel export works, but if there is a gap in the enabled channels (for example, channels 0 and 2), the export will return an error. We'll have this fixed in the next release. This does not apply if you specify the export channels.
I’d love to hear about what you’re using the automation interface for, and if/why you prefer one export format (or API) over the other!
Thanks for bringing this up. One of the big breaking changes we're going to make to the new API before we make it public is improving the names of everything.
This export option refers to the export code that’s “built in” to low-level analyzers. This export code is in the analyzer plugin, and every analyzer has a unique format. Most are CSV, but that’s not the case for all of them.
We do want to push users toward the data table. The data table, thanks to “FrameV2,” now has very rich data for most of our built-in analyzers, as well as HLAs. It can still be used to export individual analyzers, as well as more than one analyzer in a single file. It also makes our job of maintaining and improving analyzer export much easier. We’re going to add more features very soon to the API to let you select which columns to export, specify the radix per-analyzer, add a search query, and more.
The future of the “built in” analyzer export option (called export_analyzer_legacy) is also in flux for a few reasons. First, we didn’t implement some features of the analyzer plugin API in the Logic 2 software yet, and this negatively impacts this type of export for analyzers. The main issue stems from the fact that the original analyzer plugin system was never designed to handle a circular buffer, which requires us to delete old results while continuing to process more data. There isn’t a good way to fix this without breaking the API, because a lot of API features always assume that the first frame of results is frame zero, and that it won’t be deleted or changed later. This can cause problems exporting now, and the API features we didn’t implement yet would have caused significantly more issues. We actually disabled this export type specifically for our MDIO, CAN, and LIN “built-in” exports because they were completely broken by these problems.
However, I don't think “legacy” is the right name for the API; we aren't planning to remove it. A lot of 3rd party analyzer plugins depend on this export format, so it's important that we keep it around, and a lot of existing automation users from 1.x depend on it as well.
I didn't want to have one function named export_analyzer and another named export_data_table, because at first glance, the first sounds more applicable, while the second is the function you probably want. Perhaps that should be called export_analyzer_data_table or export_analyzer_table. However, “table” isn't the best name here either. Technically the UI component is a table, but when you're actually trying to get the analyzer results into a file, I don't think about it as a table; I think of it as simply exporting an analyzer.
I originally drafted the name “export_analyzer_native” because I consider the export format that’s built into the analyzer to be “native” to that analyzer, but without understanding the analyzer API, I don’t think that makes sense to other users.
Now I would like to generate an interface for C#.
Where can I download the current interface description (*.proto file)?
Or is the proto definition in “Update July 12th 2022” the current one?
It would be nice to have a version history in the *.proto file.
We just made logic2-automation public; it contains both the Python library and the gRPC saleae.proto file.
Saleae.Automation.Manager.ManagerClient SaleaeClient = new Manager.ManagerClient(channel);
var response = SaleaeClient.GetDevices(new GetDevicesRequest());
But I always get the message:
Grpc.Core.RpcException: Status(StatusCode="Unavailable", Detail="Error starting gRPC call. HttpRequestException: The SSL connection could not be established, see inner exception. IOException: The handshake failed due to an unexpected packet format.", DebugException="System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
---> System.IO.IOException: The handshake failed due to an unexpected packet format.")
I'm an absolute newbie to programming with gRPC, so is there a way to get debug information to analyze the problem?
We are going to be using Logic analyzers as part of a new automated testing framework, where they will be listening to inter-component communications for qualification purposes. The end goal is that a product can be configured on a platter, flashed with the latest firmware, and run through a large suite of tests that previously would have been conducted by hand.
At the moment the Logic is only set up to monitor short I2C and SPI exchanges with predetermined values that we had already proven test cases for. Generally they consist of a single message that configures a response, and the reply back.
As for the analyzer export, it mainly has to do with picking out the messages from the resulting table. I’ve only utilized it for testing short exchanges, but when processing with pandas which is commonly used for tabular data in python, my code for parsing out messages from the data table was considerably more complicated than with the legacy export.
For instance in the case of I2C, I had to locate sequences that started with a type of start and ended with a type of stop, then parse out the address row and data rows from within that sequence. While this isn’t unusually complicated, with pandas the solution consisted of a lot of anti-patterns and would be very slow for long exchanges. On the other hand, the legacy export was in a format that was readily usable with pandas, and making the switch eliminated a fair amount of unwieldy code. I found it easier to parse since relevant data was distributed across columns rather than rows.
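To make that concrete, the start/stop sequence grouping described above can be sketched like this in pandas. The column names and frame types are illustrative only, not the exact data-table schema:

```python
import pandas as pd

# Hypothetical data-table export: one row per I2C frame, with a 'type'
# column ('start', 'address', 'data', 'stop') and a 'data' column holding
# the byte for address/data rows.
df = pd.DataFrame({
    'type': ['start', 'address', 'data', 'data', 'stop',
             'start', 'address', 'data', 'stop'],
    'data': [None, 0x50, 0xAA, 0xBB, None, None, 0x51, 0xCC, None],
})

# Number each transaction by counting 'start' rows, then collect the
# address and data bytes per transaction.
df['txn'] = (df['type'] == 'start').cumsum()
transactions = [
    {
        'address': int(g.loc[g['type'] == 'address', 'data'].iloc[0]),
        'data': [int(b) for b in g.loc[g['type'] == 'data', 'data']],
    }
    for _, g in df.groupby('txn')
]
print(transactions)
# → [{'address': 80, 'data': [170, 187]}, {'address': 81, 'data': [204]}]
```

With a legacy-style export where address and data already sit in separate columns per row, none of this regrouping is needed, which matches the experience described above.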
Others may think differently, and I could see the data table working better for iteration, or if we didn't use pandas. I will admit that I didn't create a generalized solution for the legacy export, and future tests will likely require similar sequence processing. But a central goal of this project is to make hardware testing easy for non-developers to interface with, and heavy iteration with specialized logic works against that. Until we find a better method or library for handling the data, the legacy export with pandas seems more useful.
I now use “http” instead of “https”, and “StartCapture” instead of “GetDevices”.
But I still get an error message:
Grpc.Core.RpcException
HResult=0x80131500
Message = Status(StatusCode="Unavailable", Detail="Error starting gRPC call. HttpRequestException: An error occurred while sending the request. IOException: The response ended prematurely.", DebugException="System.Net.Http.HttpRequestException: An error occurred while sending the request.
---> System.IO.IOException: The response ended prematurely.")
at System.Net.Http.HttpConnection.FillAsync()
at System.Net.Http.HttpConnection.ReadNextResponseHeaderLineAsync(Boolean foldedHeadersAllowed)
at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
…
Because of that, I tried to capture the data stream with the tool Wireshark on a loopback adapter.
As an appendix, here are two screenshots and two data streams from attempts with Python and C#.
As I use http instead of https, I have to set “System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport” to “true”, as described in the link below:
First of all, I'd like to say that it's great to see that Saleae is working on such a nice automation API!
I currently don’t have the logic analyzer lying here, so I’ve tested the demo code with the Logic Pro 16 demo device. Works great!
In the next step, I’ve tried to automate loading a capture and adding analyzers to it. Also worked quite out of the box.
However, I am now stuck when I want to add an HLA on top of one of the Async Serial analyzers.
from saleae import automation
import os
import os.path
from datetime import datetime

# Connect to the running Logic 2 Application on port `10430`
manager = automation.Manager(port=10430)

# Load capture from file
with manager.load_capture(filepath="my_pre-recorded_capture.sal") as capture:
    # Add analyzers to the capture
    uart_analyzer_rx = capture.add_analyzer('Async Serial', label=f'UART RX', settings={
        'Input Channel': 4,
        'Bit Rate (Bits/s)': 230400
    })
    uart_analyzer_tx = capture.add_analyzer('Async Serial', label=f'UART TX', settings={
        'Input Channel': 5,
        'Bit Rate (Bits/s)': 230400
    })
    hla_tx = capture.add_analyzer('My Own HLA', label=f'HLA TX', settings={
        'Input Analyzer': uart_analyzer_rx
    })

# Close the connection
manager.close()
Running it yields:
[2022-07-22 15:55:40.112498] [I] [tid 21456] [main] [analyzer_node.cpp:2078] all frames written to database
Traceback (most recent call last):
File "Z:\Spielwiese\Logic_Automation\saleae_example_captured_file.py", line 22, in <module>
hla_tx = capture.add_analyzer('My Own HLA', label=f'HLA TX', settings={
File "Z:\Tools\Python3\lib\site-packages\saleae\automation.py", line 566, in add_analyzer
raise RuntimeError(
RuntimeError: Unsupported analyzer setting value type
Sorry this wasn’t clear - the current release only supports Low Level Analyzers, but we will be adding High Level Analyzer support relatively soon. We’ll update here as soon as that is available.
Hey, this is very exciting news, and this automation will be very useful for our organization's debugging process.
I wanted to ask if the current version of the automation package can be used with Python 3.8, or if it has to be at least 3.9.
@matan2.cohen Great to hear! We don't have an official stance on 3.8 yet, but testing locally, it passes the automation test suite, and we made some changes before release to make the typing annotations compatible with 3.8.
3.7 currently does have a typing annotation conflict.
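For illustration, this is the kind of version conflict that can arise: some typing features only exist from Python 3.8 on, such as typing.Literal. Whether this is the library's actual conflict is an assumption; the function below is a generic example, not code from the package.

```python
# Parses and runs on Python 3.8+; on 3.7 the import raises ImportError,
# because typing.Literal was only added to the standard library in 3.8.
from typing import Literal

def radix_base(radix: Literal["hex", "dec", "bin"]) -> int:
    """Map a radix name to its numeric base."""
    return {"hex": 16, "dec": 10, "bin": 2}[radix]
```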
I'm using Logic 2.3.58 on Ubuntu 20.04 with Python 3.8.
When running some of the examples here, I'm getting this error:
File "/home/user/.local/lib/python3.8/site-packages/saleae/automation.py", line 117, in error_handler
raise grpc_error_to_exception(exc) from None
File "/home/user/.local/lib/python3.8/site-packages/saleae/automation.py", line 115, in error_handler
yield
File "/home/user/.local/lib/python3.8/site-packages/saleae/automation.py", line 490, in start_capture
reply: saleae_pb2.StartCaptureReply = self.stub.StartCapture(
File "/home/user/.local/lib/python3.8/site-packages/grpc/_channel.py", line 946, in call
return _end_unary_response_blocking(state, call, False, None)
File "/home/user/.local/lib/python3.8/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"@1659935099.152517236","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3260,"referenced_errors":[{"created":"@1659935099.152516313","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":167,"grpc_status":14}]}"