Please allow for lower memory limit and temporary analyzer disable

Hi, I have an uncommon use case. I need to capture a few minutes of data before and after a trigger, but the trigger happens only once every few days (a very rare bug that I’m trying to diagnose).

The data is 4 digital channels at a 2 MS/s sample rate. After an hour of capture, Logic 2 shows only ~40 MB of memory used.

Because the setup needs to run continuously for days, I am using an old laptop, which isn’t very powerful. I have set the memory buffer size to 0.5 GB (the minimum). The problem is that the waveform refresh rate is very fast at the beginning of the capture but slows down significantly once the buffer fills up (to one refresh every few seconds).

It would be nice if I could decrease the buffer size to around 10 MB, because that would still capture the few minutes of data I really need, and I think it would improve the refresh rate and responsiveness, since there would be less data to analyze and display each second. I’m also afraid that at some point the CPU won’t keep up and the capture will be stopped by a buffer overflow, which I often experience at higher sampling rates. I can only check the setup once every few hours, so if it stops in between, it could miss the trigger, and then I would have to wait days for the next one.
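For scale, here is a rough estimate of how long different buffer sizes would last, based on the ~40 MB/hour fill rate from my hour-long test (my own back-of-envelope arithmetic, so treat the numbers as approximate):

```python
# Back-of-envelope: how long a buffer would last at the observed fill
# rate (~40 MB per hour, from the hour-long test capture mentioned above).
observed_mb_per_hour = 40

for buffer_mb in (10, 20, 512):
    minutes = buffer_mb / observed_mb_per_hour * 60
    print(f"{buffer_mb:4d} MB buffer -> ~{minutes:.0f} minutes of data")
```

So even a 10 MB buffer would cover roughly a quarter of an hour around the trigger, which is more than I need.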

It would also be great if I could disable the analyzers during capture, because I don’t need them on the fly and I don’t mind waiting several seconds after the capture. I think this would further improve performance. Right now I have to delete them before the capture and recreate them manually afterwards, which is cumbersome.


Thanks for sending this in, and sorry for the trouble with the software!

We’re troubleshooting a pretty significant problem with the software that affects users trying to record for days at a time, even if the data you’re recording is very low speed and you’re using a small buffer size.

We’re having trouble isolating the problem, mainly because of the large amount of time involved in running a single test.

Could you supply us with some more information?

  1. Performance specs of your laptop. (CPU, installed memory)
  2. Description of the density of the data you are recording (estimated transitions per second, maximum frequency, and, if it’s bursty, how long the gaps between bursts of activity are).
  3. Which protocol analyzers are you using, and with what settings?
  4. Important - how much time passes before you really start to notice the update rate slow down?

Please let us know if you experience any crashes or if the application completely freezes, or if you see any error messages. If you have any crashes, please share your machine ID - details here: https://support.saleae.com/troubleshooting/sharing-your-machine-id

Lastly, here is a quick tip for your protocol analyzers. Set up all of your settings the way you like, then save a preset. Then delete all of your analyzers and save another preset, perhaps named “no analyzers”.

You can then quickly add and remove your analyzers just by loading the two different presets.

That said, I can see that a “disable” option on individual analyzers would be useful here. We actually had this feature a little over a year ago, when we were first adding analyzers to the new software, but removed it later because it wasn’t very useful and didn’t work properly.

Thank you for the reply. It’s great to know you are working on the problem.

  1. The laptop is a Lenovo MIIX 310. The specs are: Intel Atom x5-Z8350 CPU, 4 GB RAM, Windows 10 64-bit.

  2. I record 4 channels:

  • UART at 1M baud (debug console): usually 0.3–2 ms bursts with ~600 transitions per ms. Burst interval and count are very irregular.
  • UART at 115200 baud (GSM modem TX, AT commands and data): 1.5–25 ms bursts with ~60 transitions per ms. Mostly 50–150 ms between bursts, sometimes irregular.
  • UART at 115200 baud (GSM modem RX): same as above.
  • Logic level indicating the error condition for the trigger: one low-to-high transition once every few days.
  3. Async Serial on the three UART channels (I didn’t remove the analyzers before the capture).

  4. I notice a slow-down to ~1 refresh per second just 30-60 minutes after the start of the capture, at 30-50 MB of memory used (I’ll be able to check more precisely tomorrow, because I am away for the weekend at the moment). It keeps slowing down to one refresh every few seconds once the buffer is full. I don’t think it slows down further after the buffer has filled up.
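To put rough numbers on that density (my own worst-case arithmetic from the burst figures above, not a measurement - the duty-cycle assumptions are mine):

```python
# Worst-case estimate of the average transition rate across the three
# UARTs, using the pessimistic ends of the ranges quoted above.

# 1M-baud UART: ~600 transitions/ms during bursts; assume bursts are
# active 10% of the time (an assumption - the real interval is irregular).
uart_1m = 600e3 * 0.10

# Two 115200-baud UARTs: ~60 transitions/ms during up-to-25 ms bursts
# roughly every 50 ms, i.e. about a 50% duty cycle each.
uart_115k = 2 * 60e3 * 0.50

total = uart_1m + uart_115k
print(f"~{total/1e3:.0f}k transitions/s on average")
```

That is a very low rate compared to the 2 MS/s sample rate, so the slow-down doesn’t look data-bound.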

I had one full system hang (black screen; I had to hard-reset the laptop) a day after the capture started. But I’ve had the same issue with this laptop before, without Logic running, so I don’t think Logic is the cause here.

At higher sampling rates (I tried 6 MS/s and above) I get a desynchronization error message in Logic anywhere from a few minutes to a few hours after the capture starts. After the failure I need to reconnect the Logic; otherwise it won’t start another capture.

My hardware is a Logic 8 PRO connected to USB 2.0 (this laptop doesn’t have USB 3) with the shortest cable I could find - about 70 cm total, using a Micro-USB to USB-A socket adapter.

Thanks for the suggestion about presets. I’ll use them when I get back home and report on the slowdown without the analyzers.

I just discovered something that could be important.
The screen refresh rate slows down over time during capture, but even at its slowest (buffer full), it returns to real-time speed once the trigger has occurred.

More details on my setup and rough measurements:

  • screen zoom: 100 ms
  • trigger: channel 3, rising edge
  • capture duration after trigger: 1000 seconds
  • trim pre-trigger data: disabled
  • glitch filter: channel 3: 1 second

The slow-down goes like this:

  • after 45 minutes of capture: 4 screen updates per second, 20 MB of buffer used
  • after 1.5 hours of capture: 2 screen updates per second, 40 MB of buffer used
  • after 3 hours of capture: less than 1 screen update per second, 140 MB of buffer used
  • after many hours, with the buffer full: one screen update about every 3.5 seconds
  • the slow-down doesn’t seem to get worse once the buffer is full

I also had one strange occurrence. I had to leave the setup for 48 hours without checking it, and when I came back, Logic 2 was not running. I don’t know what happened - whether Logic 2 closed by itself or the system rebooted. What I am sure of is that it wasn’t a power outage: if it had been, the laptop would have gone to sleep after 5 minutes and hibernated after 3 hours, with Logic still running. Luckily the trigger didn’t happen during those 48 hours, but it did happen half an hour after I started Logic 2 again :smiley:

Hope this helps.

This may be a completely unrelated issue, but the fact that Logic 2 stopped some time during a 48-hour run could be explained by an issue I brought up a while back: I2C Analyzer hoarding temp-files - especially if you were still running with the analyzers attached.

I experienced similar symptoms when running (I2C) analyzers: a bug caused Logic not to free the memory from old temp files for as long as the app was running, which would completely fill my hard drive and cause Logic to crash. I hit this frequently because my hard drive was already short on space. If you were running for 48 hours, I imagine even a fairly large hard drive would have time to fill up.

Forgot to mention: all analyzers were removed in my recent runs.

@kazink Thanks for all the detailed information! Mark is out of office right now and is planning to return around mid-February. In the meantime, your observation below is quite telling:

it returns back to real-time speed once the trigger has occurred.

Once the trigger is found, does the frame rate continue to operate at “real-time speed” for the full 1000 second duration after the trigger?

Also, we would be curious to look into the error reports uploaded during the potential crash you experienced while it ran for 48 hours. Feel free to send us your machine ID and we can take a look (instructions below):

Some time after the trigger there is a very slight slow-down. Near the end of the 1000-second period the display gets somewhat choppy. I think if I set the after-trigger time higher, the slow-down would be more significant.

I also tried looping mode: it too slows down after some time.

My machine ID is 4287592b-e7c0-46fe-8ad0-c0ec75508d44

@kazink Thanks. As a first step, I’ll get this on our backlog to review any error reports via your machine ID. Hopefully that gives us some more clues to work with in case the crashes are related.


One more thing that may help: before the trigger, when the buffer is full, only the waveform display is slow. The trigger progress (wait) indicator animates smoothly.

@kazink Thanks. I’ve scheduled a meeting with the software team this Wednesday to go over all the information you’ve shared with us thus far. Feel free to share any further information from now until then if you feel it may help with our investigation. We’ll keep you updated!


I brought this up as a suggestion years ago: I wanted to be able to set the sampling frequency on a per-channel basis. Take, say, a SPI bus where CSn goes low once per transmission. I may only want a 100 kHz sampling rate on that, a much higher rate on CLK to look at edges, and something smaller for the data lines.

In addition, it would be very nice to see the capability of appending triggered captures, which would help this gentleman. If you had a trigger condition that happens once every 12 hours over a weekend, you would want to capture 5 seconds or 10 ms before the event and 1 second after, then go back to sleep until the next trigger. So when you come in 3 days later, you have only 20 seconds of data, with the 5 events clipped out in front of you.

By allowing per-channel sampling rates, the internal buffers required could be cut sharply. There is no sampling machine in the world I’m aware of that can capture hours of data at high sampling rates, so you have to throw out much of the pre- and post-trigger data - which nobody wants to keep anyway. Adjustable sampling rates would provide another throttle for the buffer size needed.
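The point about buffers can be illustrated with made-up numbers (my own toy example, assuming raw storage proportional to sample rate; the Logic’s actual transition-based storage works differently):

```python
# Hypothetical SPI capture: how much buffer a per-channel sample rate
# could save, assuming raw storage of 1 bit per sample per channel.
full_rate = 50e6  # 50 MS/s on every channel (the uniform-rate status quo)

# Illustrative per-channel rates: slow CSn, fast CLK, medium data lines.
per_channel = {"CSn": 100e3, "CLK": 50e6, "MOSI": 25e6, "MISO": 25e6}

uniform_bits = full_rate * len(per_channel)   # bits/s, uniform rate
mixed_bits = sum(per_channel.values())        # bits/s, per-channel rates

print(f"buffer cut to {mixed_bits / uniform_bits:.0%} of the uniform-rate size")
```

Even in this crude model, slowing just two of four channels roughly halves the buffer requirement.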

dave bassett


Thanks for all the details, and sorry for my delay. Two big things stand out from the data you sent us.

  • glitch filter: channel 3: 1 second

and

This is a very good catch! Before the trigger is found, our software rotates memory through the circular buffer.

Once the trigger is found, we won’t delete any more memory. In fact, if we run out of memory after the trigger is found, the capture will just stop early instead of deleting more data.
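A minimal sketch of that pre-trigger/post-trigger behavior (my own illustration in Python, not Saleae’s actual implementation - in reality the post-trigger budget is separate from the pre-trigger ring):

```python
from collections import deque

class PreTriggerBuffer:
    """Illustrative circular buffer: rotates (drops oldest data) until the
    trigger is seen, then keeps everything and stops early when full."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # deque drops the oldest item itself
        self.triggered = False
        self.stopped = False

    def push(self, sample):
        if self.stopped:
            return
        if not self.triggered:
            self.buf.append(sample)            # pre-trigger: rotate freely
        elif len(self.buf) < self.buf.maxlen:
            self.buf.append(sample)            # post-trigger: never delete...
        else:
            self.stopped = True                # ...so a full buffer ends the capture

    def trigger(self):
        self.triggered = True

buf = PreTriggerBuffer(capacity=4)
for s in range(10):
    buf.push(s)        # only the newest 4 samples survive the rotation
buf.trigger()
buf.push(10)           # buffer already full, so the capture stops early
print(list(buf.buf), buf.stopped)  # [6, 7, 8, 9] True
```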

However that doesn’t quite add up.

Question: right before the trigger is found, is the buffer full?

Second, you mentioned you are using the glitch filter. Unfortunately, the glitch filter has very poor performance and can run into trouble even on cutting-edge processors. We really need to add a warning label to it, as it may be preventing processing from running in real time, which causes the app to basically break when running for long stretches of time.

Can you disable the glitch filter and test this again? Also, could you describe why you need the glitch filter?

Usually yes, but the refresh speeds up no matter how much of the buffer was used when the trigger happened. It speeds up, then slows down again while the after-trigger buffer fills up.

I disabled the glitch filter, but it didn’t change anything. The refresh rate still slows down over time. It’s hard to tell, but the slow-down looks the same as with the filter on.

I used the filter because there were some EMI spikes where the setup was sitting, and Logic often registered a short positive pulse on the trigger line and triggered when it shouldn’t. The filter was there to avoid that. I have now moved the setup to another place, so it shouldn’t happen any more, and I can safely leave the glitch filter disabled.

@david.bassett Thanks for the suggestions! I went ahead and added them to the idea posts on our feedback site. I created a new one, “Set Sampling Rate on a per Channel Basis”, for the per-channel sampling rate idea, which would likely require updated hardware.

I also added your idea for a “sleeping trigger” as a comment on the existing “More Complex Digital Triggers” idea post.

Hi Tim,

Thanks. Any new HW on the horizon I should be saving money for this year?

db


@david.bassett Good question. Although we’re constantly brainstorming new hardware ideas and gathering feedback, we don’t have any new hardware to announce.

Thanks @kazink for all of the details!

Getting this fixed is a pretty long-term project for us. However, given your requirements, you might be able to reach your goals right now just by switching to a much faster computer with more RAM and an SSD. Any chance that’s an option for you?

On our end, our solution is a rewrite of our data collection for both analog and digital data, to dramatically simplify allocation and to request and re-use pages directly from the operating system. It’s going to be super fast once we’re done, but I don’t have an estimate, and it’s likely to land well past a time that would be useful for your immediate needs.

Thanks, it’s not a big problem right now. I thought it would cause many capture interruptions, but it doesn’t. The screen just refreshes slowly, which isn’t a huge deal.

But what about just allowing a lower buffer limit? Would lowering the minimum memory buffer size to 10-20 MB be a huge software overhaul? The control already accepts such values if I enter them manually (but the capture does not respect values below 0.5 GB).

The real problem here is that the memory limit isn’t properly enforced, unfortunately. There is a lot of overhead that isn’t accounted for properly, but that doesn’t explain all of the error either. We’re not sure what memory isn’t getting tracked; the underlying issue is that the way we track memory usage is very error-prone. Our redesign project should fix this by unifying memory tracking with the allocations themselves, rather than the very manual bookkeeping we do today. We also suspect we have some leaks in there.
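The difference between manual bookkeeping and tracking tied to the allocation itself can be illustrated like this (a toy Python sketch of the design idea, nothing to do with Saleae’s actual code):

```python
# Toy illustration: when every allocation and free goes through one place,
# the usage counter cannot drift out of sync with what was really allocated.
class TrackedPool:
    def __init__(self):
        self.used = 0  # single source of truth, updated by alloc/free only

    def alloc(self, nbytes):
        self.used += nbytes
        return bytearray(nbytes)

    def free(self, block):
        self.used -= len(block)

pool = TrackedPool()
a = pool.alloc(1024)
b = pool.alloc(4096)
pool.free(a)
print(pool.used)  # 4096 - exact, with no separate ledger to forget to update
```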

In the meantime, you can edit this right now. Just close the software, and edit our config.json file with a text editor:

C:\Users\<Your User Name>\AppData\Roaming\Logic\config.json

Search that file for bufferSizeMb and replace the current value with a new integer like 128. (A power of 2 is not required.)

Save the file, then open the software. You will see the new memory amount in the application. Just don’t edit it in the app - it should stay set as long as you don’t press the +/- buttons.
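If you’d rather script the change than edit the file by hand, a small helper like this should work (an untested sketch - `set_buffer_size_mb` is my own helper name, the `bufferSizeMb` key is the one mentioned above, and Logic 2 must be closed before running it):

```python
import json

def set_buffer_size_mb(cfg_path, mb):
    """Rewrite bufferSizeMb in Logic 2's config.json, keeping other keys."""
    with open(cfg_path, "r", encoding="utf-8") as f:
        cfg = json.load(f)
    cfg["bufferSizeMb"] = int(mb)   # plain integer; a power of 2 is not required
    with open(cfg_path, "w", encoding="utf-8") as f:
        json.dump(cfg, f, indent=2)

# On Windows the file lives at:
#   C:\Users\<Your User Name>\AppData\Roaming\Logic\config.json
# e.g.: set_buffer_size_mb(r"C:\Users\me\AppData\Roaming\Logic\config.json", 128)
```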

Again, sorry for the trouble! This is a priority for us, but unfortunately it’s a huge task and we haven’t been able to make progress through our normal bug-fixing cycle. We need to get it onto our roadmap as a larger project.