Logic 2.2.7 - High-Speed Digital Edition!

Download Links

TLDR;

2.2.7 brings dramatic performance improvements to digital capture processing and digital rendering! Our aim was to get the software to run smoothly when recording digital signals near the limit of the devices’ performance. Digital data processing is now 10x faster, meaning fewer “backlogged” captures!

We have a new changelog page! Check it out here

Alpha Mailing List
Join the Alpha Users Mailing List to be notified about the next release!

Reminder: releases now include a new interface for writing high-level protocol decoders in Python. Check out the documentation here: github.com/saleae/logic2-examples

Bug Fixes:

  • Timing markers now snap to digital transitions again
  • Saved calibration files are now loaded properly when offline
  • Analyzer export now respects the currently selected display radix, instead of always exporting ASCII
  • Custom protocol analyzers: if no results are added in GenerateFrameTabularText, the frame is now hidden from the protocol results list view
  • Trace colors now update immediately on change

Behind the scenes:

  • detailed performance instrumentation
  • significantly improved UI performance in live mode (5-10x)
  • dramatically improved processing performance of digital data (10x)
  • dramatically improved rendering performance of digital data (3x)
  • migrated analyzer management to a better state management system
  • refactored processing state updates from the data processing system

Features:

  • Custom protocol analyzers that have a “FilePath” or “FolderPath” TextSetting now have UI to select files or folders
  • Press Ctrl+0 to reset text zoom (Ctrl +/- zooms text size)
  • Cmd/Ctrl+click on a decoded protocol result opens the analyzer sidebar tab and scrolls to that result

Notes

There is a LOT more in here behind the scenes. We’ve spent the last week focused on improving digital processing and rendering performance. Our goal: to ensure the software can run in real time on as many systems as possible while recording digital channels near the worst-case data throughput. And we’ve done it! At least on our machines :slight_smile: Your mileage may vary. Quick note: we haven’t dived into analyzer processing performance yet, so you will still see considerable slowdown when recording dense protocol data. We’ll get to that soon!

For reference, our test involves recording 8 channels at 250 MSPS, capturing 115200 baud serial, I2C, and 12 MHz SPI in the same capture.

We do have a fairly large known issue in the application that we’re working on next: the capture process leaks memory pretty badly. If the in-app reported memory usage is 1 GB, the actual memory usage might be 5x higher. In addition to hurting performance, this also causes a typical “soft crash” when trying to restart a capture or close a tab, where the memory deletion takes so long that the UI times out and the processing engine stops responding. This is currently our highest priority; it’s responsible for the majority of front-end “soft crashes”, and it mainly affects customers pushing the performance limits of the devices.

We can’t stress enough the extent of the performance improvements we’ve accomplished over the last week or so. I want to say thanks to Ryan, John, and Rani for their efforts. Digital processing performance, one of the main bottlenecks in “real time” mode in the software, is now over 10x faster in single-threaded tests! Digital rendering is several times faster than it was before, and we’ve managed to improve the scheduling efficiency to get more work done per second and per CPU core. I can’t wait to bring these improvements to protocol analysis, digital triggering, and analog processing & rendering.

If you would like to test it out, try connecting your device to high-speed digital signals - be sure to keep them at least 4 times slower than the sample rate! Then record near the worst case (6 digital channels @ 500 MSPS, 12 digital channels @ 250 MSPS, or 16 channels @ 125 MSPS). See how long the software can run in “looping” mode before you see the dreaded “backlog (X) s.” on the capture progress window. Let us know how long you’re able to record before seeing this! Internally, we’ve seen it run over 20 minutes without backlogging.

I am now using the beta release much more often than the 1.x releases, as there are a lot of nice features and improvements.

I do wish you had the screen capture stuff, but I am working around it.

One wish-list item still is, when creating or editing an analyzer, to have the ability to move the dialog. Sometimes, for example with SPI, it is hard to see which line is which when a lot of the screen is covered by the dialog.

Another sort of wish-list item (maybe it belongs on another list) is the ability for the SPI analyzer (or maybe an HLA above it) to specify an additional IO pin (often a 9th-bit SPI line), for example when I am working on display driver code. Many displays have an additional IO pin, DC (D/C), which indicates whether the byte is a command or data. Example in the capture:


That top line is the DC, and it would be nice to have that state associated with the decoded textual data, especially when you might want to export the txt/csv file.

Note: out of curiosity I tried the export-file case here, and even though I tell the system to display the data in hex, it looks like the text file still comes out as ASCII.

Again, I am enjoying some of the new features, like just letting the capture run and then stopping it when it looks like it has captured what I am looking for.

Kurt

So far the speed is greatly appreciated with 2.2.7. I am super excited to see that you expect to have the memory issues and crashes fixed in the next release. Do you have any timeline on that?

Also, one small thing: in the Timing Markers dialog, if I click the three dots and select “delete selected”, it does not delete the timing marker unless I first click on the name or bar in the signal window before clicking the three dots.

It’s currently our top priority. Hard to give a timeline (until we find the issue, at least), but I’m hoping for the end of next week.

unless I first click on the name or bar in the signal window before clicking the three dots.

Sounds like a tiny bug; we’ll look into it.

Hi, this release already looks really nice, but for me the I2C analyzer results don’t show up in the table. I can see the decoding next to the traces, and there is also something in the terminal, but nothing in the table view. Thanks for looking into that.

Which OS are you using?
Unfortunately, the table is unstable (mainly on Windows). We’re using ElasticSearch at the moment, and we’re planning to switch to another DB that will be much faster and, more importantly, stable :slight_smile:

Hi Rani,

Sorry for the lack of details. Yes, I have Windows 10 Enterprise and my HW is a Logic 16 Pro. It’s strange that, I think, the first acquisition after installation worked, but never again.

BTW, I now use the beta version and it’s very useful that I can search in the table view. However, I can only search by one byte. It would be useful to add the possibility to search for multiple chunks of data. Or (and) even better, to filter the communication based on device address. We often debug I2C with multiple slaves, and usually I’m interested in only one at a time. Thanks for the consideration :slight_smile:

It’s strange that, I think, the first acquisition after installation worked, but never again.

We know :frowning_face:
The next version will be much better!

BTW, I now use the beta version and it’s very useful that I can search in the table view. However, I can only search by one byte. It would be useful to add the possibility to search for multiple chunks of data.

Have you tried the new High Level Analyzers? You can write a simple Python script and combine multiple bytes into one packet (or filter some of them).
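
To give a rough idea, here is a minimal plain-Python sketch of the kind of filter/combine logic such a script could contain. The frame dictionaries and the decode() entry point below are illustrative stand-ins; the actual way an HLA plugs into Logic 2 follows the templates in the logic2-examples repo and may differ between alpha builds.

```python
# Illustrative sketch only: the real HLA entry points follow the templates in
# github.com/saleae/logic2-examples; the frame fields used here ("type",
# "data", "start_time", "end_time") are stand-ins for whatever the I2C
# analyzer actually reports.

class I2cAddressFilter:
    """Collects low-level I2C frames into packets and drops packets whose
    address does not match the one device we care about."""

    def __init__(self, wanted_address):
        self.wanted_address = wanted_address
        self.current = None  # packet currently being assembled

    def decode(self, frame):
        """Feed one I2C frame; returns a combined packet on stop, else None."""
        if frame["type"] == "start":
            self.current = {"address": None, "data": [],
                            "start_time": frame["start_time"]}
        elif frame["type"] == "address" and self.current is not None:
            self.current["address"] = frame["data"]
        elif frame["type"] == "data" and self.current is not None:
            self.current["data"].append(frame["data"])
        elif frame["type"] == "stop" and self.current is not None:
            packet, self.current = self.current, None
            packet["end_time"] = frame["end_time"]
            if packet["address"] == self.wanted_address:
                return packet  # only the slave we are interested in
        return None


if __name__ == "__main__":
    flt = I2cAddressFilter(wanted_address=0x3C)
    frames = [
        {"type": "start", "start_time": 0.0},
        {"type": "address", "data": 0x3C},
        {"type": "data", "data": 0xA5},
        {"type": "data", "data": 0x01},
        {"type": "stop", "end_time": 1.0e-4},
    ]
    for f in frames:
        packet = flt.decode(f)
        if packet:
            print(packet)  # one combined, address-filtered packet
```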

Sorry, I hope you don’t mind me asking more questions, but I’m wondering if there is more documentation on the HLAs, and potentially how to convert an existing analyzer to one. Actually, the first question is whether it makes sense to do so. And is there more documentation than what is in the couple of samples?

Again, I have an analyzer which understands the half-duplex serial protocol of the Dynamixel servos.

Which I have mentioned before, and the last time I tried it with the V2 software it was still working.

But maybe it makes more sense to implement it as an HLA and let the normal Serial analyzer take care of all of the other issues…

It handles two versions of the protocol (known as Protocol 1 and Protocol 2), where all of the messages start with a fixed header.

Protocol 1:
0xff 0xff <ID, cannot be 0xff> …

Protocol 2 is similar:
0xff 0xff 0xfd 0x00 <length, 2 bytes> <2-byte CRC checksum>
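
To make the header layouts concrete, here is a rough plain-Python sketch of how the first few buffered bytes could be classified (byte values as above; hooking this into an HLA or the existing C++ analyzer is not shown):

```python
# Sketch of the header checks described above, using the Dynamixel
# Protocol 1 / Protocol 2 byte values.

PROTOCOL2_HEADER = bytes([0xFF, 0xFF, 0xFD, 0x00])

def classify_header(buf: bytes):
    """Return 2, 1, or None depending on which protocol header (if any)
    the buffered bytes start with. Assumes at least four bytes buffered."""
    # Check Protocol 2 first, since its first two bytes also look like
    # the start of a Protocol 1 header.
    if buf[:4] == PROTOCOL2_HEADER:
        return 2
    # Protocol 1: 0xff 0xff <ID>, where the ID cannot be 0xff.
    if len(buf) >= 3 and buf[0] == 0xFF and buf[1] == 0xFF and buf[2] != 0xFF:
        return 1
    return None

# classify_header(bytes([0xFF, 0xFF, 0xFD, 0x00, 0x07, 0x00]))  -> 2
# classify_header(bytes([0xFF, 0xFF, 0x01, 0x02]))              -> 1
```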


Now I’m wondering how (or if) one can do some of the capabilities of the current analyzer in an HLA. For example, currently the code outputs different things depending on what the instruction is.

For example, if the instruction is a ping (1), the current code adds a few result strings:

    P, PING, PING ID(id)

Which one is used depends, of course, on how much room is available on the screen to display it.

Some of the instructions are more fun, like a read register, where I add in multiple results:

   R, READ, RD(<id>), RD(<id>) REG: <first reg>  

And I also have the ability to set which type of servo I might be talking to, both for Protocol 1 (most likely AX) and for Protocol 2, for which I have defined some of the different servo types.
With this I add a couple of others:

RD(id) REG <reg>(REGISTER NAME)
RD(id) REG <reg>(REGISTER NAME) LEN: <len>

There are more commands and also response packets, but I think this shows some of the capabilities.
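
As a rough plain-Python illustration of those tiered result strings (the register-name table below is only a placeholder, not the real Dynamixel register map):

```python
# The analyzer adds several strings per frame, from shortest to longest;
# which one gets displayed depends on the screen room available.

REGISTER_NAMES = {0x03: "ID", 0x04: "BAUD_RATE"}  # placeholder subset

def ping_result_strings(servo_id):
    return ["P", "PING", "PING ID({})".format(servo_id)]

def read_result_strings(servo_id, first_reg, length):
    reg_name = REGISTER_NAMES.get(first_reg, "?")
    return [
        "R",
        "READ",
        "RD({})".format(servo_id),
        "RD({}) REG: {}".format(servo_id, first_reg),
        "RD({}) REG {}({})".format(servo_id, first_reg, reg_name),
        "RD({}) REG {}({}) LEN: {}".format(servo_id, first_reg, reg_name, length),
    ]

# read_result_strings(1, 0x03, 2)[-1]  -> 'RD(1) REG 3(ID) LEN: 2'
```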

So again I wonder if it makes sense to convert.

My guess is that for this case maybe not, as I already have one that is reasonably working. The question is more: if I wanted to do another analyzer of the same complexity, would it make sense to stick with the older-style C++ analyzers or to try out the HLA?

There’s no way to have Logic watch for a trigger, and then record say 10 seconds before and 10 seconds after the trigger, is there? I have an intermittent fault which sometimes takes days to reproduce, so I don’t want Logic to store the data (so it doesn’t use up memory on the PC) until the trigger event happens. I don’t have enough memory to store days of data.

BUG - Empty save data

This measurement, with only those changes, and hitting Ctrl+S results in the attached 73 kB file.

lala3.sal (73.0 KB)

I can also provide the file with no modifications (60 MB).

What do you mean by empty save data? Are you talking about the measurement or the data itself?

I just wrote an HLA analyzer quite similar to yours; it took 2 days including testing.
Using VSCode + the Python extension makes this a breeze. No need to compile, you can see your code in action directly in Logic 2. For testing purposes, I used unittest to test it before really linking it to Logic 2 and seeing the result.
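
For what it’s worth, those tests don’t need Logic 2 running at all; you can feed the decode logic hand-built frames from a plain unittest case. A minimal sketch (the frame layout and class name are purely illustrative):

```python
import unittest

# Stand-in for the decode logic under test; in practice you would import
# your own HLA module here instead.
class MyHla:
    def decode(self, frame):
        # Toy example: combine two data bytes into one 16-bit value.
        high, low = frame["data"]
        return {"type": "word", "value": (high << 8) | low}

class DecodeTests(unittest.TestCase):
    def test_combines_two_bytes(self):
        frame = {"type": "data", "data": [0x12, 0x34]}
        result = MyHla().decode(frame)
        self.assertEqual(result["value"], 0x1234)

if __name__ == "__main__":
    unittest.main()
```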

Thanks,

Of course it might take me a little longer :smiley: as I am a C/C++ developer and have only played around with Python when doing some ROS stuff, and then usually wanted to convert the Python over to C…

Sounds like something you can do with HLAs. You’re welcome to ask here if you have any issues with implementing it :slight_smile:

Hi,

The 100% CPU load issue on pause is still there on macOS, unfortunately…
Rani, I’ve captured the Logic 2.2.7 process with Xcode’s Instruments while the app is paused and still eating over 100% CPU time, in case it helps.

BR,
Emmanuel.