Is it possible to restore data from a crashed Logic 2?

Hello.

I use Linux. When I start Logic 2, it stores its data in the /tmp folder. I do not have more than 15 GB of free space there…

Yesterday I started a test to catch a rare error in my device. Unfortunately, Logic 2 crashed during the night…

BUT!! I have found over 13 GB of data in /tmp/analyzer_db.7c4948f1-bf00-4bbf-af6e-f66c15eda591/1

Are those *.sst files recoverable??

Is there any way to set the storage location to a drive other than /tmp??? I have about 3 TB of free space elsewhere.

Regards
Maciek

Hi @laski.maciej , sorry to hear about the Logic 2 software crashing on you…

We can look into what caused the crash for you. Would you mind sending us your Machine ID using the ticket submission link below?
https://contact.saleae.com/hc/en-us/requests/new

This will create a support ticket, and I’d be happy to chat with you over email about this. Just let me know that you had posted on our discuss forum so I know it’s you.

Instructions on sharing your Machine ID with us are below:
https://support.saleae.com/troubleshooting/sharing-your-machine-id

Unfortunately, we don’t have a way to recover captures after a crash occurs, nor is there a way to change the tmp directory. We’ve recently received reports from other users that the analyzer_db files use quite a lot of HDD space, especially in captures longer than 1 hour.

Some details behind how analyzer_db files work:
For long recordings (>1 hour), we recommend saving a preset with the analyzers you plan to use.
https://support.saleae.com/user-guide/using-logic/saving-loading-and-exporting-data

Before your capture, we recommend removing your analyzers. After the capture is complete, you can quickly add your analyzers back by loading your preset, following the instructions in the link above.

Right now, our search indexing system uses the analyzer_db files to manage search indexing for analyzers that are added to the capture. These files get cleared when the app restarts, but if you record for several hours and leave the app open, we don’t clear the indexing files (i.e., the analyzer_db files).
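If you want to see how much space these indexing files are taking on your system, something like the snippet below should work. This is just a sketch based on the path mentioned earlier in this thread; the UUID in the directory name will differ per session.

```shell
# Check how much space Logic 2's indexing files are currently using.
# The analyzer_db.<uuid> directory name pattern comes from the path
# mentioned earlier in this thread; the exact UUID differs per session.
du -sh /tmp/analyzer_db.* 2>/dev/null

# And how much free space is left on the filesystem backing /tmp:
df -h /tmp
```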

This is something we are planning to improve very soon, but will take quite a lot of work. Sorry again for all the trouble this is causing…

Hello,

It’s a pity. The error occurred exactly that time :)… Damn you, Murphy, and your laws. I haven’t been able to catch it again since.

I’m 99% sure the reason for the crash was 0 MB free on /tmp; even Linux doesn’t feel well without free space on the root partition.

Unfortunately, I have “send anonymous data” and “automatic crash report” turned off. It’s our company policy not to send anything outside, so I don’t think my Machine ID will help you.

The method with the analyzers turned off is everything I need!!!

Thank you!!!

@laski.maciej, I think you’re on to something with your statement below:

I’m 99% sure the reason for the crash was 0 MB free on /tmp; even Linux doesn’t feel well without free space on the root partition.

Also, you’re right that the Machine ID may not help in this case. Hopefully the method of turning off the analyzers works out, and sorry for the extra steps you have to deal with in the meantime!

The indexing system taking up a large amount of HDD space is something we have on our backlog to improve.