V2.3.* sucks CPU time when "idle"

at least running Logic-2.3.7-master.AppImage and 2.3.8 on Ubuntu 18.04, the software constantly uses well more than one full CPU core when doing “nothing” – output from “top” with both 2.3.7 and 2.3.8 running in parallel (2.3.7 connected to a Logic Pro 16, 2.3.8 in demo mode for crash debugging only)

output of “top” – the high load average of ~12 is due solely to those two Logic 2.* processes!

top - 18:55:14 up 12 days,  2:52, 17 users,  load average: 12.56, 12.23, 11.27
Tasks: 429 total,   3 running, 315 sleeping,  26 stopped,   0 zombie
%Cpu(s): 66.1 us, 18.3 sy,  0.1 ni, 14.6 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
KiB Mem : 32628996 total,  1229052 free, 26454812 used,  4945132 buff/cache
KiB Swap: 34603004 total, 29399724 free,  5203280 used.  4567832 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                   
 4511 koh8rt    20   0 2265952 692384  58264 R 177.6  2.1   1016:17 Logic                                                                                     
 5845 koh8rt    20   0 1271632 362884  94844 R 177.6  1.1  30:44.60 Logic                                                                                     
 4401 koh8rt    20   0 6908640 1.474g  73596 S  78.2  4.7 514:51.41 Logic                                                                                     
 5706 koh8rt    20   0 3166884 219916 121732 S  75.6  0.7  12:55.21 Logic

update:
after typing this, I just found/downloaded 2.3.9 which “covers several important CPU utilization bugs”, and now this looks MUCH better (just did a very quick capture).

@harald.koenig2 Glad to hear the update in 2.3.9 fixed it for you! That seems to be the case for most users so far who experienced the high CPU usage in previous versions.

Hi!
I’m back using the Logic Pro 16 again after a lot of sw development,
right now using Logic-2.3.26, and I notice it’s sucking CPU again.
not as bad as last year, but a fully idle Logic process (with the analyzer unplugged after capturing a very short I2C sequence) is using a constant(?) ~20% CPU time:

top - 19:41:36 up 4 days, 47 min,  8 users,  load average: 3.84, 4.57, 4.29
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s): 10.5 us,  5.1 sy,  0.0 ni, 82.4 id,  0.0 wa,  0.0 hi,  2.0 si,  0.0 st
MiB Mem :  31864.2 total,   2204.2 free,  14710.6 used,  14949.4 buff/cache
MiB Swap:  33792.0 total,  32545.0 free,   1247.0 used.  14484.3 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                               
2850319 koh8rt    20   0 4245892   2.5g  51304 S  18.6   8.1 281:13.30 Logic                                                                                 

$ ps up 2850319
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
koh8rt   2850319 14.5  8.1 4243844 2653196 pts/4 Sl   May18 281:14 /tmp/.mount_Logic2BkMyWd/Logic --type=renderer --field-trial-handle=2499542314422784525,942

the process got started May 18, did 2-3 short I2C scans/analyzes, and is idle since then.

output of strace is hard to read, as there are 54 threads around:

$ ll /proc/2850319/task/|wc
     54     479    2750
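
As a side note, the thread names and per-thread CPU times can also be listed without strace; a minimal sketch using `ps -T` (this shell's own PID is used as a stand-in for 2850319 so the example runs anywhere):

```shell
# Sketch: list a process's threads with thread id, name, and CPU time.
# 2850319 is the PID from above; $$ (this shell) is only a stand-in
# so the example is self-contained.
pid=$$
ps -T -o spid,comm,time -p "$pid"
```

The SPID column is the kernel thread ID, i.e. the same numbers strace prints in its `[pid NNN]` prefixes.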

strace shows ~400-600 syscalls per second, so it's really active ;)
$ strace -fp 2850319 -e '!futex' -tt 2>&1 | cut -d' ' -f3 | cut -c-8 | uniq -c 
      1 2850319
    139 19:44:29
    669 19:44:30
    685 19:44:31
    474 19:44:32
    689 19:44:33
    685 19:44:34
    466 19:44:35
    696 19:44:36
    464 19:44:37
    703 19:44:38
    671 19:44:39
    477 19:44:40

and there are some 20 “working” threads (for 10k syscalls traced):

$ time strace -fp 2850319 -e '!futex' -tt 2>&1 | cut -d' ' -f2 | head -10000 | sort | uniq -c| sort -nr | cat -n
     1     4893 2850324]
     2     3176 2850371]
     3     1145 2850327]
     4      325 2853316]
     5      323 2850369]
     6      108 2850319]
     7        6 202388]
     8        5 203703]
     9        2 2853363]
    10        2 2850361]
    11        2 2850358]
    12        2 2850357]
    13        2 2850355]
    14        2 202389]
    15        1 Process
    16        1 2853361]
    17        1 2853360]
    18        1 2850344]
    19        1 2850334]
    20        1 2850322]
    21        1 203702]

real    0m16.831s
user    0m0.813s
sys     0m1.790s

is this a known issue, or a “feature” ?

screenshot:


Hi Harald, I wasn’t able to reproduce this, but could you update to 2.3.27 and try again? I don’t think we changed anything that might affect this directly, but we did fix a memory leak that grew over time in the background, and it might be related to this issue. A quick local test with a few I2C captures showed CPU usage returning to 0.5% - 2% while idling.

is this a known issue, or a “feature” ?

Well, if it’s cold, and the extra utilization is helping heat your home, then maybe a feature? :stuck_out_tongue:

Hi Mark,

in the mean time I’ve installed 2.3.27 too, and will try/test in the next few days and report then.

the 2.3.26 process is still “running” (now at 400 cpu minutes;)
is there anything I can do to gather data before I kill it, which might help understanding ?

Harald


is there anything I can do to gather data before I kill it, which might help understanding ?

Yes, first make sure data collection is enabled (main menu → preferences, scroll to the bottom)

Then, press ctrl+alt+shift+x to bring up the secret crash menu and click “saleaeAssert”. The software should crash after a moment.

Then send us your machine ID and the date & time you crashed the app. The machine ID can be copied from the main menu → about. Assuming the machine is online, that should submit a crash report to our servers that hopefully will include a valid stack trace for all running threads, and we might be able to identify which one is causing the trouble. If you can determine the thread ID of the thread that has high CPU beforehand, we might be able to locate that thread ID in the crash report.

There is a good chance this won’t collect enough information to identify the problem, but it’s worth a try.
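
One way to determine that thread ID beforehand is to sum each thread's utime+stime from /proc; a sketch assuming the usual Linux /proc layout (`$$`, this shell, stands in for the Logic PID):

```shell
# Sketch: find the thread that has burned the most CPU so far, by summing
# utime+stime (in clock ticks) from every /proc/<pid>/task/<tid>/stat.
# $$ (this shell) is only a stand-in for the busy Logic PID.
pid=$$
for s in /proc/"$pid"/task/*/stat; do
  # strip the leading "pid (comm) " so spaces in the thread name
  # cannot shift the field numbers; utime+stime are then fields 12+13
  awk '{ sub(/^[^)]*\) /, ""); print $12 + $13, FILENAME }' "$s"
done | sort -nr | head -n 1
```

The filename of the winning entry contains the thread ID, which should match the hot thread in `top -H`.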

Hi Mark,

the machine id data is: {"Environment":"production","Branch":"master","Commit":"3ef37852b20015c0b6d2d267567a8e0d6d0c0f54","Version":"2.3.26","MachineID":"50b16193-0327-4cb0-a370-69041e791850"}

both data collection options were disabled at start; I’ve now enabled them.
but still ctrl+alt+shift+x does not trigger anything ?!
is it possible that this only works if it was enabled at startup ?
anything else I have to check or take care of?

and if it did work, how would this data be “automatically” transferred to you?
very likely this will not work by default behind Bosch firewalls and web proxy !?
I can still enable and allow HTTP transfers (at least I guess so;) before triggering whatever button…

any chance to get that data (and store it to disk for manual transfer via email) e.g. via SIGHUP or SIGABRT or similar ?

will keep it running for any chance collecting debug data for you…

Harald


My fault, ctrl+alt+shift+C, not X.

And you’re right, I’m not sure if it will upload successfully from your company network.

Alternatively, you might be able to attach GDB to the process which is consuming the CPU (there are usually a handful of Logic processes running, so be sure to attach to the correct one) and then try the command generate-core-file [file] to see if you can create a core file.

Please send this into the support team using the link below and don’t post it here, as it may contain private data.
https://contact.saleae.com/hc/en-us/requests/new

If it’s too large to send, even compressed, let us know and we can send you a file request link which you can use to upload large files.
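
For reference, that GDB step can be scripted non-interactively; a sketch under stated assumptions (the PID and core path are placeholders, and a `sleep` process stands in for Logic so the example is self-contained):

```shell
# Sketch: attach gdb in batch mode and dump a core of a target PID.
# In the real case, attach to the Logic process that is hot in `top -H`.
sleep 30 & pid=$!                # stand-in target process, not Logic
core=/tmp/logic-sketch.core     # placeholder output path
if command -v gdb >/dev/null 2>&1; then
  # generate-core-file writes a core image of the attached process
  gdb -p "$pid" -batch \
      -ex "generate-core-file $core" \
      -ex detach >/dev/null 2>&1 || true   # ptrace may be restricted; ignore
fi
kill "$pid" 2>/dev/null
result=ok
```

Note that attaching may require ptrace permission (e.g. `kernel.yama.ptrace_scope` on Ubuntu), so running gdb as the process owner or via sudo may be necessary.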

Hi Mark,

sorry for the long delay (sw issues again;)

the process is still running/idle, and within the last week it has grown to ~10 GByte RAM (RSS, 15G virt), so it’s leaking memory “like hell”:

$ top -p 2850319
    PID USER     PR NI   VIRT   RES   SHR S %CPU %MEM    TIME+ COMMAND
2850319 koh8rt   20  0  15.6g  9.8g 73808 S 18.8 31.4  1536:47 Logic

$ ps up 2850319
USER         PID %CPU %MEM      VSZ      RSS TTY   STAT START    TIME COMMAND
koh8rt   2850319 13.0 31.4 16309704 10246352 pts/4 Sl   May18 1537:06 /tmp/.mount_Logic2BkMyWd/Logic --type=renderer --field-trial-handle=2499542314422784525,94
this is top output in thread view (“H”):

    PID USER     PR NI   VIRT   RES   SHR S %CPU %MEM     TIME+ COMMAND
2850327 koh8rt   20  0  15.6g  9.8g 73956 R  6.7 31.4 600:43.47 Compositor
2850319 koh8rt   20  0  15.6g  9.8g 73956 R  5.4 31.4 499:19.02 Logic
2850324 koh8rt   20  0  15.6g  9.8g 73956 S  0.8 31.4  70:32.17 Chrome_ChildIOT
2850371 koh8rt   20  0  15.6g  9.8g 73956 S  0.8 31.4  73:06.84 DedicatedWorker
2850356 koh8rt   20  0  15.6g  9.8g 73956 S  0.7 31.4  26:28.88 TaskWorker
2850359 koh8rt   20  0  15.6g  9.8g 73956 S  0.7 31.4  26:28.54 TaskWorker
2850360 koh8rt   20  0  15.6g  9.8g 73956 S  0.7 31.4  26:29.35 TaskWorker
2850362 koh8rt   20  0  15.6g  9.8g 73956 S  0.7 31.4  26:29.82 TaskWorker
2850355 koh8rt   20  0  15.6g  9.8g 73956 S  0.6 31.4  26:26.13 TaskWorker
2850357 koh8rt   20  0  15.6g  9.8g 73956 S  0.6 31.4  26:20.72 TaskWorker
2850358 koh8rt   20  0  15.6g  9.8g 73956 S  0.6 31.4  26:29.91 TaskWorker
2850361 koh8rt   20  0  15.6g  9.8g 73956 S  0.6 31.4  26:32.17 TaskWorker
2850330 koh8rt   20  0  15.6g  9.8g 73956 S  0.2 31.4  11:47.39 CompositorTileW
2850331 koh8rt   20  0  15.6g  9.8g 73956 S  0.2 31.4  11:48.97 CompositorTileW
2853316 koh8rt   20  0  15.6g  9.8g 73956 S  0.2 31.4  13:55.50 SubscriptionMan
3326243 koh8rt   20  0  15.6g  9.8g 73956 S  0.2 31.4   0:04.40 SubscriptionMan
2850329 koh8rt   20  0  15.6g  9.8g 73956 S  0.1 31.4  11:47.02 CompositorTileW
2853363 koh8rt   20  0  15.6g  9.8g 73956 S  0.1 31.4   8:02.98 StreamTerminal:

so I tried to get more info from PIDs 2850327 and 2850319:

Thread 5 (Thread 0x7f1a0ff14700 (LWP 2850327)):
#0  futex_abstimed_wait_cancelable (private=, abstime=0x7f1a0ff135c0, clockid=, expected=0, futex_word=0x7f1a0ff136b8) at …/sysdeps/nptl/futex-internal.h:320
#1  __pthread_cond_wait_common (abstime=0x7f1a0ff135c0, clockid=, mutex=0x7f1a0ff13668, cond=0x7f1a0ff13690) at pthread_cond_wait.c:520
#2  __pthread_cond_timedwait (cond=0x7f1a0ff13690, mutex=0x7f1a0ff13668, abstime=0x7f1a0ff135c0) at pthread_cond_wait.c:656
#3 0x000055c38ea34ce0 in ()
#4 0x00000000000d9e42 in ()
#5 0x00000000390d1526 in ()
#6 0x00000000000d9e42 in ()
#7 0x0000000039fd58f6 in ()
#8 0x0000000000000000 in ()

Thread 1 (Thread 0x7f1a140dd200 (LWP 2850319)):
#0  futex_abstimed_wait_cancelable (private=, abstime=0x7ffee8946e50, clockid=, expected=0, futex_word=0x7ffee8946f48) at …/sysdeps/nptl/futex-internal.h:320
#1  __pthread_cond_wait_common (abstime=0x7ffee8946e50, clockid=, mutex=0x7ffee8946ef8, cond=0x7ffee8946f20) at pthread_cond_wait.c:520
#2  __pthread_cond_timedwait (cond=0x7ffee8946f20, mutex=0x7ffee8946ef8, abstime=0x7ffee8946e50) at pthread_cond_wait.c:656
#3 0x000055c38ea34ce0 in ()
#4 0x00000000000d9e42 in ()
#5 0x00000000390cd601 in ()
#6 0x00000000000d9e44 in ()
#7 0x000000001ec23309 in ()
#8 0x0000000000000000 in ()

generating core files for the two main threads with gdb worked, but they are HUGE due to the memory leak:

-rw-rw-r-- 1 koh8rt koh8rt 16G May 26 16:10 Logic2.core.2850319.Logic
-rw-rw-r-- 1 koh8rt koh8rt 1.7G May 26 16:14 Logic2.core.2850319.Logic.gz
-rw-rw-r-- 1 koh8rt koh8rt 1.6G May 26 16:25 Logic2.core.2850319.Logic.xz

-rw-rw-r-- 1 koh8rt koh8rt 16557415936 May 26 16:17 Logic2.core.2850327.Compositor
-rw-rw-r-- 1 koh8rt koh8rt 1764252181 May 26 16:24 Logic2.core.2850327.Compositor.gz
-rw-rw-r-- 1 koh8rt koh8rt 1676216586 May 26 16:21 Logic2.core.2850327.Compositor.xz

and even though they compress “nicely”, I’m not sure how to transfer them to you…

finally I’ve tried ctrl+alt+shift+C and got the secret crash menu.
but when clicking “saleaeAssert”, the window and process just vanished, with no output file or gdb process trying to dump data…

I’ve attached strace logs over 10 secs per thread; maybe this can also give some hints about what it’s looping on and why ?!

now I’m restarting with 2.3.27 for some not-so-short/small analog measurements.
I’ll keep Logic2 running again and keep an eye on both cpu and memory for the next days;)

Harald

(Attachment strace-10s.7z is missing)

2nd try with strace attachment:
tried 7zip before, now using “allowed” zip…

strace-10s.zip (66 KB)

quick update:

yesterday I’ve started 2.3.27, did a few measurements, and so far I did not disconnect the 16 PRO; still it uses up CPU (top over 10 secs):

    PID USER     PR NI    VIRT    RES   SHR S %CPU %MEM     TIME+ COMMAND
3380646 koh8rt   20  0 2143108 240424 38348 S 23.7  0.7 249:27.62 Logic

    PID USER     PR NI    VIRT    RES   SHR S %CPU %MEM     TIME+ COMMAND
3380654 koh8rt   20  0 2143108 240832 38348 S  5.9  0.7  71:06.00 Compositor
3380646 koh8rt   20  0 2143108 240832 38348 S  4.8  0.7  60:12.85 Logic
3380684 koh8rt   20  0 2143108 240832 38348 S  1.0  0.7  10:45.54 TaskWorker
3380686 koh8rt   20  0 2143108 240832 38348 S  1.0  0.7  10:46.78 TaskWorker
3380683 koh8rt   20  0 2143108 240832 38348 S  0.9  0.7  10:47.10 TaskWorker
3380685 koh8rt   20  0 2143108 240832 38348 S  0.9  0.7  10:46.73 TaskWorker
3380687 koh8rt   20  0 2143108 240832 38348 S  0.9  0.7  10:47.50 TaskWorker
3380688 koh8rt   20  0 2143108 240832 38348 S  0.9  0.7  10:48.03 TaskWorker
3380689 koh8rt   20  0 2143108 240832 38348 S  0.9  0.7  10:46.90 TaskWorker
3380690 koh8rt   20  0 2143108 240832 38348 S  0.9  0.7  10:46.13 TaskWorker
3380651 koh8rt   20  0 2143108 240832 38348 S  0.7  0.7   8:41.64 Chrome_ChildIOT

I just cleanly restarted 2.3.27 and did not run any measurements; still it’s using CPU time and memory is growing:

$ top -p $( pgrep -f /proc/self/exe )
   PID USER     PR NI   VIRT   RES   SHR S %CPU %MEM TIME+ COMMAND
 136912 koh8rt   20  0 1545804 224300 116764 S 17.0  0.7 0:40.65 Logic

and “ps” output in steps of 1 min:

$ while ps hup $( pgrep -f /proc/self/exe ) ; do sleep 1m ; done | tee ps-logic-1m-1.log | cut -c-$COLUMNS
koh8rt   136912 18.9 0.6 1545804 225064 pts/3 Sl  11:38 0:46 /proc/self/exe --type=renderer
koh8rt   136912 18.3 0.6 1547084 226164 pts/3 Sl  11:38 0:56 /proc/self/exe --type=renderer
koh8rt   136912 18.0 0.6 1548364 226568 pts/3 Sl  11:38 1:06 /proc/self/exe --type=renderer
koh8rt   136912 17.7 0.6 1550924 227308 pts/3 Sl  11:38 1:16 /proc/self/exe --type=renderer
koh8rt   136912 17.5 0.6 1561168 220136 pts/3 Sl  11:38 1:25 /proc/self/exe --type=renderer
koh8rt   136912 17.4 0.6 1561168 221736 pts/3 Sl  11:38 1:35 /proc/self/exe --type=renderer
koh8rt   136912 17.3 0.6 1561168 222748 pts/3 Sl  11:38 1:45 /proc/self/exe --type=renderer
koh8rt   136912 16.7 0.6 1561168 223936 pts/3 Sl  11:38 1:52 /proc/self/exe --type=renderer
koh8rt   136912 16.4 0.6 1552972 225984 pts/3 Sl  11:38 1:59 /proc/self/exe --type=renderer
koh8rt   136912 15.9 0.6 1552972 226456 pts/3 Rl  11:38 2:05 /proc/self/exe --type=renderer
koh8rt   136912 15.5 0.6 1552972 228264 pts/3 Sl  11:38 2:12 /proc/self/exe --type=renderer
koh8rt   136912 15.4 0.7 1552972 229536 pts/3 Sl  11:38 2:20 /proc/self/exe --type=renderer
koh8rt   136912 15.1 0.6 1561168 222324 pts/3 Sl  11:38 2:26 /proc/self/exe --type=renderer
koh8rt   136912 15.0 0.6 1561168 224132 pts/3 Sl  11:38 2:34 /proc/self/exe --type=renderer
koh8rt   136912 14.7 0.6 1561168 225756 pts/3 Sl  11:38 2:40 /proc/self/exe --type=renderer
koh8rt   136912 14.5 0.6 1552972 225920 pts/3 Sl  11:38 2:47 /proc/self/exe --type=renderer
koh8rt   136912 14.3 0.6 1552972 226708 pts/3 Sl  11:38 2:53 /proc/self/exe --type=renderer
koh8rt   136912 14.3 0.6 1552972 226612 pts/3 Sl  11:38 3:02 /proc/self/exe --type=renderer
koh8rt   136912 14.4 0.6 1552972 227988 pts/3 Sl  11:38 3:12 /proc/self/exe --type=renderer
koh8rt   136912 14.3 0.7 1554252 229148 pts/3 Sl  11:38 3:19 /proc/self/exe --type=renderer

cpu usage is quite constant at ~8 sec / min,
and the memory increase isn’t noticeable yet (will keep it running for now;)
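
the CPU-seconds-per-minute figure above can also be computed directly from /proc/<pid>/stat; a minimal sketch ($$ is a stand-in PID; utime and stime are read in clock ticks after stripping the "(comm)" field):

```shell
# Sketch: measure the CPU seconds a PID burns per sampling interval
# (the Logic process above showed ~8 CPU-seconds per minute).
# $$ (this shell) is a stand-in for the real target PID.
pid=$$
hz=$(getconf CLK_TCK)
ticks() { awk '{ sub(/^[^)]*\) /, ""); print $12 + $13 }' /proc/"$1"/stat; }
t0=$(ticks "$pid")
sleep 2
t1=$(ticks "$pid")
echo "used $(( (t1 - t0) / hz )) CPU seconds in 2 wall seconds"
```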

Harald

Thanks @harald.koenig2. We’ll run some local tests here with 2.3.27 to compare.

Quick question - you mentioned two things in your last message:

and:

Just wanted to check and verify: is the RAM usage remaining constant while the software is idling?