I ran into the issue that the automation API functions “add_analyzer” and “add_high_level_analyzer” only return once they are finished, so I can only have one running while the capture is in progress. Also, the API offers no way to uncheck “show in data table”. My capture files are 450 s long (672 MB), but it takes about one hour to analyze a single file. Since I have two high-speed UART analyzers, and “show in data table” is always enabled when analyzers are added through automation, everything gets streamed into the tmp folder on my SSD. That makes the analysis time skyrocket, as my disk is much slower than RAM.
To speed the measurements up, I automated all of them without any analyzers and recorded the captures with the automation API, which was really convenient and quick. So now I have a moderate number of .sal files, which I then moved to a computer with 160 GB of RAM.
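For reference, the recording side of my automation is roughly the following minimal sketch (device ID, channels, sample rate and file name are placeholders for my actual setup):

```python
from saleae import automation

# Minimal sketch of my recording script: no analyzers are added,
# so saving the .sal file is quick. Device ID, channels and sample
# rate are placeholders for my real configuration.
with automation.Manager.connect(port=10430) as manager:
    device_configuration = automation.LogicDeviceConfiguration(
        enabled_digital_channels=[0, 1],
        digital_sample_rate=500_000_000,
    )
    capture_configuration = automation.CaptureConfiguration(
        capture_mode=automation.TimedCaptureMode(duration_seconds=450.0),
    )
    with manager.start_capture(
        device_id='MY-DEVICE-ID',
        device_configuration=device_configuration,
        capture_configuration=capture_configuration,
    ) as capture:
        capture.wait()  # blocks until the 450 s timed capture ends
        capture.save_capture('capture_001.sal')
```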
I then ran 6 instances of Logic in parallel, each on a different port, with each instance controlled by a Python script that opens a capture, adds the analyzers (waiting for each one to finish before adding the next), exports the .csv, closes the capture, opens the next capture, and so on (see the sketch below). Unfortunately, this filled 300 GB on my SSD to the point where I had nearly no room left and had to stop the instances.
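Roughly, each instance's script does the following (analyzer names, channel numbers and bit rates are simplified placeholders; the key point is that each add_analyzer() call only returns once that analyzer has processed the whole file):

```python
import glob
from saleae import automation

PORT = 10430  # each of the 6 Logic instances listens on its own port

with automation.Manager.connect(port=PORT) as manager:
    for sal_path in sorted(glob.glob('captures/*.sal')):
        capture = manager.load_capture(sal_path)
        try:
            # add_analyzer() blocks until the analyzer has finished,
            # so the two UART analyzers run one after the other.
            uart_a = capture.add_analyzer(
                'Async Serial', label='UART A',
                settings={'Input Channel': 0, 'Bit Rate (Bits/s)': 3_000_000})
            uart_b = capture.add_analyzer(
                'Async Serial', label='UART B',
                settings={'Input Channel': 1, 'Bit Rate (Bits/s)': 3_000_000})
            # "show in data table" cannot be disabled here, which is what
            # streams the results into the tmp folder on the SSD.
            capture.export_data_table(
                sal_path.replace('.sal', '.csv'),
                analyzers=[uart_a, uart_b])
        finally:
            capture.close()
```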
I really don’t want to open each capture by hand, add the analyzers by hand (at least that way they run in parallel), and then export the results by hand, as I have a large number of files. Is there a way to solve this so the analysis doesn’t take ages and doesn’t fill up my SSD?
Also, I tried to load an analyzer preset onto an open capture file in the GUI (hoping it would add all my analyzers at once so I don’t have to set them up by hand), but that doesn’t seem to be possible.
