Feature Request - Change Glitch Filter in Read-Only Session

I’m not sure whether I’m missing something, nor what the proper protocol for feature requests is, so here goes:

I’m using Logic 2.3.0

I would like to apply a glitch filter “after the fact” to a finished recording, but cannot activate it (apparently the session is read-only). The glitch filter was deactivated for the initial recording. During manual inspection of the recording, I found a few glitches that may not have been caused by the DUT (though I’m not sure about that). What I was aiming at was playing around with the glitch filter settings to see how the decoded values change; ultimately, I’m trying to find an error in my DUT, and I’d like to check whether (with “correct” glitch filter settings) the SPI communication would become “correct” in terms of my problem domain. Right now, all I can see is that the byte values in the data table/terminal do not match the datasheet specifications.

I do have both analog and digital recordings – would it be possible to modify the analog recordings and “replay” the digital ones from them (provided the analog sampling rate is high enough)? I realize this would tamper with the recorded data, and I’d be more than willing to accept that (maybe with a warning that the operation is irreversible, just like trimming); I could always make a backup :wink:
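To illustrate what I mean by “replaying”, here is a rough sketch (Python, working on an exported analog CSV rather than inside the Logic software, so nothing that exists today): re-derive a digital channel from an analog one by thresholding with a bit of hysteresis. The column names and the threshold voltages are just assumptions for the example.

```python
import csv

LOW_THRESHOLD = 0.8    # volts, assumed V_IL for a 3.3 V system
HIGH_THRESHOLD = 2.0   # volts, assumed V_IH for a 3.3 V system

def redigitize(path, column):
    """Yield (time_s, level) transitions derived from one analog channel."""
    level = None
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = float(row["Time [s]"])               # assumed column name
            v = float(row[column])
            if level is None:                        # establish the initial level
                level = v >= HIGH_THRESHOLD
                yield (t, level)
            elif level and v <= LOW_THRESHOLD:       # falling edge
                level = False
                yield (t, level)
            elif not level and v >= HIGH_THRESHOLD:  # rising edge
                level = True
                yield (t, level)

transitions = list(redigitize("analog_export.csv", "Channel 0"))
```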

Re-doing the recordings is time-intensive, since I first need to prepare the DUT (which means re-placing probes, setting up the function generator, preparing RF probes, injecting the proper test signals, …). I am a limited-budget hobbyist, so I cannot leave everything connected; I have only one bench to “play” with :zipper_mouth_face:


This may or may not help you, but in the old Logic program you could do this. So, if need be, you can recapture with that and then edit the glitch filter to your heart’s content.


Thanks @indeterminatus, you’re right, the Logic 2 software does not currently have any way to edit the glitch filter after the recording starts. @Collin is right too, the old 1.X software did support this.

Basically, the old software never modified the data that it recorded. The glitch filter was implemented as a filter that sat between the recorded data and anything in the software that tried to access it. That approach is extremely complicated. In the new software, we decided to try a much simpler approach: while recording, simply apply the glitch filter to the raw data stream and store the filtered results. We were able to build this very quickly, but it lacks that important ability to adjust the glitch filter after the fact.
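To make the trade-off clearer, here is a rough sketch (Python, not our actual implementation) of what the capture-time approach does: any pulse shorter than the configured minimum width is dropped from the edge stream before it is stored, so the suppressed edges are gone for good.

```python
def glitch_filter(transitions, min_width_s):
    """Drop pulses shorter than min_width_s from an alternating edge stream.

    transitions: iterable of (time_s, level) edges; yields the filtered edges.
    Because only the filtered result is stored, the suppressed edges cannot
    be recovered later -- that is the limitation discussed above.
    """
    pending = None                      # the edge that opened the current pulse
    for t, level in transitions:
        if pending is None:
            pending = (t, level)
        elif t - pending[0] < min_width_s:
            # The pulse starting at `pending` is too short: discard both of
            # its edges, which restores the level that preceded the glitch.
            pending = None
        else:
            yield pending
            pending = (t, level)
    if pending is not None:
        yield pending                   # width of the final pulse is unknown
```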

Could you do me a favor, and add a feature suggestion for an after-capture adjustable glitch filter here?
https://ideas.saleae.com/

We use our ideas site to track feature demand and use that information to help plan.

I’d like to allow adjusting the glitch filter after the fact, but I’m not sure which route we should use for it. The “on the fly” glitch filter was extremely complicated, and still has a few bugs to this day, but it provided the most flexibility. We might also be able to support simply re-processing the recorded data with a more aggressive glitch filter, but that has a lot of limitations, and might not be worth it.

Lastly, I think of the glitch filter as a last resort. Have you checked our support page on the subject?
https://support.saleae.com/troubleshooting/seeing-spikes-in-digital-capture

Cleaning up the input signals as much as possible might eliminate the need for a glitch filter entirely.


Done.

Thank you for sharing your thought processes (especially regarding your uncertainty; that is proof of true professionalism to me :slightly_smiling_face:). I can see the problems and challenges associated with this.

I totally agree with you that cleaning up the input signals would be the way to go. I already did my best to perform the measurements with the shortest inductance path I could manage. Now I’m at the point where I see a spike in a recording and I don’t know where it comes from. Checking for EM radiation that may be picked up by my probes is no longer possible, so I cannot rule actual interference out (but it’d be interesting to know, since it might be the cause of my problem in the first place). What I thought I might be able to do is simulate the situation in the Logic software, pretending that those spikes were not there, to see if the transmitted/received bytes would then become correct/match the expectations.
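For example (and this is just how I picture the experiment, done outside the Logic software): take the exported SCK/MOSI edge lists, clean them up with something like the glitch filter sketched earlier in the thread, and re-decode the bytes to compare against the datasheet. SPI mode 0, MSB-first, 8-bit words, and ignoring chip select are assumptions about my DUT.

```python
def level_at(transitions, t):
    """Level of a channel at time t, given its (time_s, level) edge list."""
    level = not transitions[0][1]       # level before the first recorded edge
    for edge_t, edge_level in transitions:
        if edge_t > t:
            break
        level = edge_level
    return level

def decode_spi_mode0(sck, mosi, bits_per_word=8):
    """Return MOSI words latched on rising SCK edges (mode 0, MSB first)."""
    words, current, nbits = [], 0, 0
    for t, level in sck:
        if not level:                   # only rising edges latch data in mode 0
            continue
        current = (current << 1) | int(level_at(mosi, t))
        nbits += 1
        if nbits == bits_per_word:
            words.append(current)
            current, nbits = 0, 0
    return words

# e.g. compare decode_spi_mode0(cleaned_sck, cleaned_mosi) with the datasheet
```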

Maybe I’m going about this the wrong way and have to try something completely different, but I don’t expect you to solve that problem for me – I still have a lot to learn, so please take into consideration that I am not a hardware professional who knows exactly what they’re doing. I’m a software engineer just starting to tinker with micro-controllers and whatnot, so it’s quite likely that I did the measurements wrong. I consider the Logic more of a “capture, then analyze in detail” kind of device, so the feedback cycles are quite long (they might be hours or even days).

Now that I’m thinking about it, it may be helpful to warn (in real time, during capture) about potential glitches (say, something an order of magnitude “off” from the standard deviation of the frequency on the clock signal line). That way, the user would be warned about potential errors in the measurement setup, and it would be far easier to fix those and restart the capture than to find such a problem hours or days later.
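Something along these lines is what I picture (using the median pulse width rather than the standard deviation of the frequency, just to keep the sketch simple); the 10x factor is my own guess, not anything the software offers today:

```python
from statistics import median

def flag_suspect_pulses(transitions, factor=10.0):
    """Flag clock pulses that are much narrower than the typical pulse.

    transitions: list of (time_s, level) edges on the clock line.
    Returns (time_s, width_s) for every pulse narrower than median/factor.
    """
    widths = [(t0, t1 - t0)
              for (t0, _), (t1, _) in zip(transitions, transitions[1:])]
    if not widths:
        return []
    typical = median(w for _, w in widths)
    return [(t, w) for t, w in widths if w < typical / factor]
```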


Would it be possible to use capture data as a virtual probe, i.e. to process the saved data file through the regular input data pipeline as if it were the live, raw data stream? This could be implemented by defining a raw input ABI that software-generated raw data could use as well.

This would actually be useful in other contexts as well, for instance simulations and tests, where people could merge actual hardware capture data with a synthetic, software-generated data stream.
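Sketched in Python just to illustrate the shape of such an interface (the names are hypothetical; nothing like this exists in the Logic software today): the capture pipeline would consume an abstract raw-sample source, so a saved capture, a synthetic generator, or a combination of both could stand in for live hardware.

```python
from abc import ABC, abstractmethod
from typing import Iterator, Sequence

class RawSampleSource(ABC):
    """Anything that can feed raw digital samples into the input pipeline."""
    @abstractmethod
    def samples(self) -> Iterator[int]: ...

class FileReplaySource(RawSampleSource):
    """Replays a previously saved capture as if it were live data."""
    def __init__(self, raw: Sequence[int]):
        self._raw = raw
    def samples(self) -> Iterator[int]:
        yield from self._raw

class SyntheticSource(RawSampleSource):
    """Software-generated pattern, e.g. for simulations and tests."""
    def __init__(self, pattern: Sequence[int], repeats: int = 1):
        self._pattern, self._repeats = pattern, repeats
    def samples(self) -> Iterator[int]:
        for _ in range(self._repeats):
            yield from self._pattern

def combine_channels(a: RawSampleSource, b: RawSampleSource):
    """Present two sources as channels of one session, sample by sample."""
    return zip(a.samples(), b.samples())

# e.g. merge a replayed hardware capture with a synthetic clock pattern:
session = combine_channels(FileReplaySource([0, 1, 1, 0]),
                           SyntheticSource([0, 1], repeats=2))
```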