The reason for large memory usage when analyzing parallel inputs

When I tried the ‘parallel inputs’ function in the analyzer tab, I found that Saleae’s memory usage was much larger than when no analyzer function was used at all.

Could I ask why this happens? I would think that if the analysis is done after data acquisition, it shouldn’t need that much memory.

@pangipark Thanks for letting us know, and sorry for the trouble with that. Could you upload a sample .sal capture file using the link below?

I suspect you might be running into the behavior below:

Thank you for your reply.
I uploaded the sample.sal file to Dropbox.

@pangipark Thanks! I got your .sal capture file over here. It looks like the capture file you sent contains about 140 seconds of parallel data to decode. I suspect that might be causing the large amount of memory usage mentioned in the support article I shared with you in my previous reply.

I’ll run some tests and let you know our findings.


Thank you for your help!!!

@timreyes I found a way. Unchecking ‘Show in protocol result table’ and ‘Stream to terminal’ helped reduce the memory usage.

However, I have a suggestion about exporting the analyzed data.

When I try to export the analyzed values, it takes a long time, since the file includes ‘Time [s]’ information expressed as a floating-point type.

I’m wondering whether the time information is really needed, because the analyzed values were already sampled by a ‘known’ clock, which means we already know the timing information, such as the period between values.

Of course, this is only correct when the sampling is done by a regular clock, not by random triggers. However, I think most people who use the simple parallel analyzer option sample their data with a known, regular clock.
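For example, with a regular clock the timestamp of each decoded value can be reconstructed from its row index alone. A minimal sketch (the 10 MHz clock rate here is just an illustration, not my actual clock):

```python
# Timestamps are redundant when the sample clock is known:
# the n-th decoded value occurs at t0 + n / f_clk.
CLOCK_HZ = 10_000_000  # hypothetical 10 MHz sample clock

def timestamp(n: int, t0: float = 0.0) -> float:
    """Time of the n-th decoded value, in seconds."""
    return t0 + n / CLOCK_HZ

print(timestamp(5))  # 5e-07 s with a 10 MHz clock
```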

Therefore, I’d like to suggest adding an option to skip exporting the timing data, which slows down the export.

Thank you.

@pangipark I can confirm that your capture file, with analyzers configured, consumes several GB of memory (it maxed out my 8 GB of RAM and started paging to disk).

Thanks for sharing the above observation. Referring to the support article I previously sent you, that makes sense: the memory usage is mainly due to the indexing system that our protocol analyzers use to process decoded data and make it available for search. Unchecking the option to “Show in Data Table” does reduce memory usage a bit.

Did you have a chance to look at the workarounds in the support article I sent you, and do those help (namely, splitting the capture into multiple smaller ones)?

We unfortunately don’t have a way of easily removing columns from an export without modifying the underlying source code of the analyzer.
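That said, if the floating-point ‘Time [s]’ column is what slows down your downstream tooling, one option is to strip it from the CSV after export. A rough sketch in Python (the file names and the column header are assumptions; adjust them to match your export):

```python
import csv

# Copy an exported analyzer CSV, dropping the "Time [s]" column.
# "export.csv" and the column name are assumptions; adjust as needed.
with open("export.csv", newline="") as src, \
     open("export_no_time.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    header = next(reader)
    keep = [i for i, name in enumerate(header) if name != "Time [s]"]
    writer.writerow([header[i] for i in keep])
    for row in reader:
        writer.writerow([row[i] for i in keep])
```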

I’m curious to know more about how you would use and parse the export file if it simply contained a list of all the decoded values from your capture without any timing info.

Long story short, we certainly need to optimize memory usage when analyzers are added.

I just thought of a workaround for this.

In the data table, you can right-click and delete columns. Afterwards, you can export the contents of the data table by clicking the 3 dots to the right of the data table search box. Hopefully this helps.