for another release fixing reported bugs very quickly and some neat new UI features.
Start Button Stopped Working
I have a simple setup with two traces and an I2C analyzer enabled. It was working OK, then I’m not sure what happened. It asked me to redefine the trigger channel, which had previously been set up, and now the Start button is dimmed and I cannot start a capture. I would send a saved capture file, but I can’t because I can’t get a capture to work. There are no errors shown after fixing the capture channel issue. Below is a screenshot. Quitting and restarting fixed the issue. I’ll see if I can replicate how I got to that point.
Please keep us posted. I’ll try reproducing it locally as well.
I’m having issues initializing the SMBus analyzer on this version. No matter what channels I choose, the SMBus analyzer says “Please select different inputs for the channels”
Using a Logic Pro 16 on macOS 10.15.4
It happened again. It was running waiting for a trigger. I sent a packet that should have triggered a capture. When I looked back at the screen, there were no traces and Start was dimmed. I’m not sure if the trigger did this, or if it happened before the trigger.
Thanks for letting us know, we might have a bug in this analyzer. We’ll look into it!
Does it happen every time? I can’t reproduce it locally…
Do you see any error if you hover over the Start button?
Found a little spelling mistake in the new version.
I’ve noticed a slight problem with at least the last few versions. I’m using Kubuntu 19.10 on a laptop with a 1080p screen and an NVIDIA graphics card (1650, I think). Basically, the window comes up maximized but the right-hand side is cut off. I can just barely see the green Start button on the edge. I cannot see the maximize or close buttons up top, only the minimize button. I can fix this by clicking on the top bar and dragging so that the system resizes it out of the maximized state. If I then click the maximize button on the top bar, it does the right thing. Closing the program does not seem to save the position, since this is an AppImage, so every time I start the program I have to drag it and then maximize it again. It’s not that annoying now that I know how to deal with it, but I figured I’d mention it so that perhaps it could eventually be looked into.
There are no errors if I hover over the start button.
It does not happen every time. I’ll capture my configuration (i.e. sample rate, etc.) and send it to you to see if you can reproduce it.
It means that the session initialization failed (that’s the only state where we don’t show errors). I don’t know why yet…
More information. I start a capture and it triggers on the I2C data OK. Then I start the next capture, and it locks up with the green button grayed out and the traces gone. It looks like it is starting the next capture, but it doesn’t. I’ve attached my settings.
(Attachment Capture Settings.pdf is missing)
I finally got around to working on some hobby stuff this weekend. This time around I decided to play with the HLA plugin framework, developing an HLA for the Silicon Labs Si4735 radio chip. Right now I have only focused on enumerating FM commands and properties, but I will likely expand on this to fully enumerate the FM RDS information and the AM radio commands/properties… but it’s a start. I will likely focus on supporting the Si4707 Weatherband features before this, though.
Comments on HLA
First off, this is a pretty great experience once you wrap your head around the framework. Here are some ideas/suggestions:
Provide a python test script to simulate some very basic frame sequences that the Logic software may send to the python module (depending on the DLL analyzer plugin being used).
- This would help to self-document the normal communication flow of an HLA plugin and also provide some means for newcomers to debug their plugins. Maybe there is another way to debug that I am not aware of (I am far from a python expert)?
- This could double as a standard unit test (pytest, unittest, etc) for the module which is probably a good idea anyways.
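Along the lines of the test-script suggestion above, a minimal self-contained harness might look something like this. Note the frame dicts and the `decode` signature only approximate what the Logic software sends — exact field names depend on the low-level analyzer — so this is a debugging-aid sketch, not the real API:

```python
# Feed synthetic I2C-style frames to an HLA-like decode() method.
# Frame shapes approximate what Logic sends; field names vary by
# low-level analyzer, so treat this as a sketch for local debugging.

def make_frames(address, data_bytes):
    """Build a plausible start/address/data.../stop frame sequence."""
    frames = [{'type': 'start', 'data': {}}]
    frames.append({'type': 'address', 'data': {'address': address}})
    for b in data_bytes:
        frames.append({'type': 'data', 'data': {'data': b}})
    frames.append({'type': 'stop', 'data': {}})
    return frames

class RecordingHla:
    """Stand-in HLA that just records the frame types it saw."""
    def __init__(self):
        self.seen = []

    def decode(self, frame):
        self.seen.append(frame['type'])
        return None

hla = RecordingHla()
for f in make_frames(0x22, [0x01, 0x02]):
    hla.decode(f)
print(hla.seen)  # ['start', 'address', 'data', 'data', 'stop']
```

Swapping `RecordingHla` for your actual HLA class would let you step through its state machine in a debugger, and the same sequences double as pytest/unittest cases.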
It seems there is a bug in passing the first I2C frame’s data in a capture to the HLA. I assume this may be specific to the I2C DLL analyzer plugin itself, but I have not looked into it yet. I think you can see this behavior even in the provided HLA example *.sal captures, such as the gyroscope_hla capture.
With my Si4735 HLA, I notice that when zooming out, the frames I use to convey a full transaction (write command, read response) become difficult to separate visually; multiple transactions appear joined/squished together. On the normal DLL/C++ plugin frames, a special bubble is displayed on top of the frames indicating how many frames exist in the space. This also exists in the HLA, but it activates way too late, in my opinion, to be useful in this case.
- If an example capture is desired I can produce one. I will likely create a public GIT repo for my HLA soon.
- One possibility to combat this could be to alternate the tint of the frames or use ellipses to indicate that the strings are truncated (which I think would be nice in any case). I made mock-ups in the image below.
- Also is there any plan to support multi-layered bubble text in the HLAs? This could also aid in readability, but I am not sure if there are any technical issues with this.
As I was developing the Si4735 HLA, I noticed the I2C DLL analyzer plugin seems to report “null” in Decoded Protocols. I believe these entries exist due to accommodations made to support sending “start” and “stop” frames (which do not include any clocked data) to the HLA. See image below.
If an HLA returns frame data strings containing ‘=’ (there are other special characters as well, such as ‘>’), they are not handled properly and show up garbled in the UI. I suppose some escape logic may need to be included. Or maybe there is already a way to handle this on the python HLA module side?
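Until that is fixed in the app, one stopgap is to substitute the problem characters before returning them in a frame string. This is a hypothetical workaround sketch — the function name and the replacement characters are arbitrary choices of mine, not part of the HLA API:

```python
# Hypothetical workaround: substitute characters the bubble renderer
# appears to mangle before returning them in an HLA frame string.
SAFE_SUBSTITUTIONS = {'=': ':', '>': ')'}  # replacements are arbitrary

def sanitize_label(text):
    """Replace characters known to render incorrectly in frame bubbles."""
    return ''.join(SAFE_SUBSTITUTIONS.get(ch, ch) for ch in text)

print(sanitize_label('RSSI=42 >thresh'))  # RSSI:42 )thresh
```

Obviously this loses the original characters, so it’s only worthwhile until proper escaping exists.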
A word of advice to anyone developing HLA modules for I2C: be sure to offer address-filtering support so that your HLA can coexist with other HLAs that handle other addresses on the bus.
NOTE: I believe the Saleae HLA examples do not demonstrate this idea; I think the gyro example should probably be set up for it.
Using the hla_gyroscope’s Hla.py as the basis for my Si4735 HLA, I included a check when receiving the “address” frame, based on a user setting. Since most I2C slaves offer some flexibility over their addresses, I provided a “choices” option listing the acceptable addresses for this slave. See the python code snippet.
Python Code Snippet
address = frame['data']['address']

# Ignore transactions not associated with specified address
if (address & 0xFE) != (self.address << 1):
    self.current_transaction = None
else:
    self.current_transaction.address = address
    self.current_transaction.is_read = (address & 0x01) == 1
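For anyone who wants to play with this pattern outside the app, here is a self-contained sketch of the same address check. The class and method names are hypothetical stand-ins; in a real Saleae HLA the address would come from a ChoicesSetting and the class would subclass HighLevelAnalyzer:

```python
# Self-contained sketch of I2C address filtering in an HLA-like class.
# In a real HLA, `address_setting` would be a ChoicesSetting value;
# here it is passed in directly so the sketch runs standalone.

class Transaction:
    def __init__(self):
        self.address = None
        self.is_read = False

class AddressFilteringHla:
    def __init__(self, address_setting='0x11'):
        self.address = int(address_setting, 16)  # 7-bit slave address
        self.current_transaction = None

    def on_address_frame(self, frame):
        # The frame carries the 8-bit address byte (7-bit addr + R/W bit).
        address = frame['data']['address']
        if (address & 0xFE) != (self.address << 1):
            # Not our slave: drop the transaction so later frames are ignored.
            self.current_transaction = None
            return
        self.current_transaction = Transaction()
        self.current_transaction.address = address
        self.current_transaction.is_read = (address & 0x01) == 1

hla = AddressFilteringHla('0x11')
hla.on_address_frame({'data': {'address': 0x23}})  # 0x11 << 1, read bit set
print(hla.current_transaction.is_read)  # True
```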
P.S. If anyone familiar with the Si4735 chip is carefully inspecting my images, take note that my plugin, at the time of taking these screenshots, had the interrupt flags for RSQ and RDS swapped. =)
Other Misc Comments
It seems V2’s support for glitch filtering is much more restrictive than V1’s implementation.
- Is there a plan to improve this to be as flexible as V1?
Specifically, V1 was able to set the filter in units of time (ms/us). It was also possible to set this on a per-channel basis. The combination of these improves the flexibility of mixing high-frequency analysis with low-frequency. For instance, capturing a slow 200 kHz I2C bus alongside high-speed SPI at 20-40 MHz. With I2C, glitches can be a problem when sampling at high frequencies, so it would be nice to have the ability to set the filter only for I2C but not for the high-speed SPI.
Also, V1 would reprocess the current capture when the filter settings were changed. I found this useful when trying to tune a filter setting based on current observations.
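To illustrate the kind of per-channel, time-based filtering being asked for, here is a naive software sketch. This is my own illustration of the concept, not how Saleae actually implements glitch filtering:

```python
def glitch_filter(samples, sample_rate_hz, min_pulse_s):
    """Suppress runs shorter than min_pulse_s by absorbing them into the
    preceding level. Naive sketch of V1-style time-based filtering."""
    min_samples = max(1, int(min_pulse_s * sample_rate_hz))
    out = list(samples)
    i, n = 0, len(out)
    while i < n:
        j = i
        while j < n and out[j] == out[i]:
            j += 1
        # Only absorb short runs that are not at the capture boundaries.
        if (j - i) < min_samples and i > 0 and j < n:
            for k in range(i, j):
                out[k] = out[i - 1]
        i = j
    return out

# A 1-sample spike on a 500 kS/s channel is removed by a 4 us filter:
print(glitch_filter([0, 0, 0, 1, 0, 0], 500_000, 4e-6))  # [0, 0, 0, 0, 0, 0]
```

Because the threshold is expressed in seconds and the function is applied per channel, the same 4 us setting would pass every edge of a 20 MHz SPI clock untouched if that channel simply skipped the filter.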
Another hopefully small request: in V1, when using markers, the marker would snap not only to edges (which V2 does) but also to sample points. In V2 it seems you can move the marker seamlessly down to the nanosecond level. While in some cases this can be nice, I find V1’s behavior more useful in general, since it helps to convey the actual sample points. If you sample at a really low frequency, it becomes difficult to know where the “real data” is versus the “interpolated”.
- Maybe there can be an option to switch between behaviors here?
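The snapping behavior being requested is simple to state; a hypothetical one-liner sketch (the function name is mine, not from the app):

```python
def snap_to_sample(t_seconds, sample_rate_hz):
    """Snap a marker time to the nearest real sample point, so the marker
    always lands on captured data rather than an interpolated position."""
    return round(t_seconds * sample_rate_hz) / sample_rate_hz

# At 1 MS/s, a marker dropped at 1.2347 ms snaps to the 1235th sample:
print(snap_to_sample(0.0012347, 1_000_000))  # 0.001235
```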
Analog Anti-Aliasing Filter
Is there a possibility to allow the user to disable anti-aliasing filtering done on the Analog signals?
I assume that if the input waveform contains frequencies that are not well below the cutoff, the voltages indicated in the captured analog waveform cannot really be relied upon.
Currently, with the filtering, I do not feel comfortable trusting the voltage measurements unless the sample rate is around 5-10x faster than the highest frequency in the waveform. With an option to disable filtering we would not have any such limitation. Of course, the waveform could be nonsensical when the signal’s frequencies exceed the Nyquist rate. But even in such cases, the unfiltered voltage information could be useful data to have.
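To make the Nyquist point concrete, here is a small numpy sketch showing what happens without any anti-alias filtering: a tone above fs/2 folds down and shows up as a lower apparent frequency in the sampled data:

```python
import numpy as np

fs = 10_000        # sample rate, Hz
f_signal = 9_000   # input tone above Nyquist (fs/2 = 5 kHz)

t = np.arange(0, 0.1, 1 / fs)           # 0.1 s of samples
x = np.sin(2 * np.pi * f_signal * t)    # sampled without any filtering

# The 9 kHz tone aliases down to |fs - f_signal| = 1 kHz.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
apparent = freqs[np.argmax(spectrum)]
print(apparent)  # 1000.0
```

This is exactly the trade-off described above: with the filter off, the samples faithfully record the input voltages at the sample instants, but the reconstructed frequency content is wrong once the signal exceeds fs/2.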
Effect of changing sample rate on the filtered signal’s reported voltage levels:
There is another topic of concern: the analog interfaces’ underlying bandwidth through the probes/jumpers to the ADCs, and at these frequencies there may be more substantial crosstalk. But if we can assume this is a non-issue, then one use-case is taking a long capture to develop some statistics over the waveform. This does require some care on the user’s part to pick a good sampling frequency, and maybe the Logic is not flexible enough in selecting a desired sample rate for this to be practical.
Aliasing in Drawing Analog Waveform Lines
Not to be confused with the previous section above: it seems the “analog drawing process” has some aliasing issues (for lack of a better word). As you can see in the image below, the waveform has missing segments.
I see the general use case of this as taking long analog captures to build up a statistical model of the sampled voltage levels. And to be clear, this isn’t necessarily about really high-frequency signals; it could just be about minimizing RAM utilization on the host PC (or dealing with scenarios where the PC cannot keep up with the requested sample rate).
On a somewhat related note to my comment above on “Marker snapping”:
- Is there any plan to add the marker dots that V1 had for displaying the sample points on the analog waveform? I suppose this information is already conveyed to some extent when you have “measurements” turned on and hover over the waveform.
Stream to Terminal
This hamburger-menu option on a loaded plugin does not seem to do anything unless you recapture. Is this intended behavior? It is not an issue in either case, but I was sort of expecting it to regenerate the terminal output… but maybe this is not desirable for other reasons.
Thanks a lot for the update. I dragged out my saved GNSS output and looked at the Terminal output - great! This is exactly what I was looking for.
There is still one niggle, which is that the terminal doesn’t decode the full capture.
I’ve attached three files:
- uart_mismatch.sal - the capture,
- uart_partial_decode.txt - the decode I got by CTRL-A+CTRL-C/V into a text editor (emacs).
- uart_decode.txt - manual decode by fiddling a bit with “cut” and emacs search&replace.
The terminal window only captures the first 624 lines, whereas the full .sal contains 3446 lines.
This wouldn’t be a major issue if I could get a full decode and export of the terminal output, but the Export to TXT/CSV does the old-school export (which makes good sense).
The partial file you sent me actually contains the last 624 lines of the capture, not the first 624 lines.
The terminal does have a history limit, unfortunately. We haven’t planned this out exactly yet but we designed the terminal update to make it really easy to connect to other applications in the future - I think a good use case for this would be recording 100% of all terminal data to a file, for instance.
624 lines is much shorter than I expected though, I’ll look into this - I didn’t think the limit would kick in until millions of characters were recorded.
Ah, my bad. I didn’t stop and check if it was the first or the final part of the capture that was kept in the terminal - I went looking for the $GNTXT sentences which were missing.
But for sure the terminal will have to have a history limit. If you could hit a “record to file” button, turn on the terminal to see the log, and then “stop recording” that would be really nice.
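The “bounded scrollback for display, full record to file” idea above can be sketched in a few lines. This is my own illustration of the concept (class and method names are hypothetical), using the 624/3446 line counts from this thread as the example:

```python
import os
import tempfile
from collections import deque

class TerminalLog:
    """Keep a bounded scrollback for on-screen display while recording
    100% of the terminal data to a file. A sketch of the idea only."""
    def __init__(self, path, max_lines=10_000):
        self.scrollback = deque(maxlen=max_lines)  # what the UI would show
        self._file = open(path, 'w')               # the complete record

    def write_line(self, line):
        self.scrollback.append(line)  # old lines fall off automatically
        self._file.write(line + '\n')

    def close(self):
        self._file.close()

path = os.path.join(tempfile.mkdtemp(), 'terminal.log')
log = TerminalLog(path, max_lines=624)
for i in range(3446):
    log.write_line(f'$GNGGA line {i}')
log.close()

print(len(log.scrollback))         # 624  (display is truncated)
print(sum(1 for _ in open(path)))  # 3446 (file keeps everything)
```

The `deque(maxlen=...)` gives the history limit for free, while the file handle never drops a line, which matches the record/stop-recording workflow suggested above.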
Right now I’m not in a hurry, as I figured out how to convert the text file to what I wanted pretty automatically.
I’m glad to hear you’re working with GPS serial data. I’m planning on writing an example python high level analyzer (HLA) to help decode the strings in the app.
Right now, there is an example python HLA that helps show serial text data in the timeline view. This is what it looks like with your capture:
You can get that HLA here: https://github.com/saleae/logic2-extensions It’s part of the “hla_simple_example” package, called “Text Messages”. It’s very simple and lets you select a delimiter character and/or timeout, and simply joins the text characters together to form strings. It provides a better graph overlay experience for serial data than the standard serial analyzer, as you can see in the screenshot.
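For anyone curious, the character-joining behavior described can be approximated in a few lines. This is a sketch of the idea, not the actual Text Messages source (which also supports a timeout):

```python
def join_serial_text(chars, delimiter='\n'):
    """Join per-character serial frames into message strings split on a
    delimiter, approximating the 'Text Messages' example HLA's behavior."""
    messages, buf = [], []
    for ch in chars:
        if ch == delimiter:
            if buf:
                messages.append(''.join(buf))
                buf = []
        else:
            buf.append(ch)
    if buf:  # flush a trailing message with no final delimiter
        messages.append(''.join(buf))
    return messages

print(join_serial_text(list('$GNTXT,hi\n$GNGGA\n')))
# ['$GNTXT,hi', '$GNGGA']
```

In the real HLA, each joined string becomes one frame spanning its characters’ time range, which is what makes the graph overlay readable for NMEA-style sentences.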
I’ve just increased the scrollback to 10,000 lines on the terminal and tested it with your capture, works great. That will probably be included in 2.2.11 today, if all goes well.