While drafting an HLA for the Quadrature Encoder post, I noticed a minor discrepancy in the calculated rate when computing it in Python versus computing it in Excel from the Simple Parallel analyzer's exported CSV output.
I think I've narrowed it down to the GetTimeString() helper function from the Analyzer SDK possibly not providing the full resolution that is available in the Logic 2 software's internal timestamps.
In particular, I observed:
The FrameV2 Start field shows: 1.000 024 872 s
However, the Simple Parallel CSV output for this same frame (second row below) shows:
"Simple Parallel","data",0.999963752,6.1116e-05,0x0000000000000000
"Simple Parallel","data",1.00002487,6.1024e-05,0x0000000000000000
The same timestamps also appear in the HLA's CSV export:
"Quadrature Encoder","QuadEncoder",0.999963752,6.1116e-05,10961,16382.699868925474,16441.5,24574.04980338821
"Quadrature Encoder","QuadEncoder",1.00002487,6.1024e-05,10962,16361.256544543792,16443,24541.884816815687
This causes the following discrepancy when computing a rate value:
>>> print(1/(1.000024872 - 0.999963752)) # full timestamp precision
16361.256544495765
>>> print(1/(1.00002487 - 0.999963752)) # CSV output timestamp precision
16361.791943431652
As you can see, the rate calculated by the Quadrature Encoder analyzer matches the one computed from the internal (full-resolution) timestamp value, 1.000024872. However, if you use Excel to calculate the same value from the CSV, it matches the other (reduced-resolution) result, because the CSV timestamp is missing the last digit.
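For what it's worth, formatting the full-resolution value to 9 significant digits in Python (just my guess at what the exporter might be doing; I don't know its actual implementation) reproduces both CSV values exactly:
>>> print(f"{1.000024872:.9g}")  # loses the 10th significant digit
1.00002487
>>> print(f"{0.999963752:.9g}")  # already 9 significant digits, so unchanged
0.999963752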
Is GetTimeString() limited to 9 significant digits rather than 1 ns resolution? I would have expected 1.000024872 instead of 1.00002487 in the CSV output file, to exactly match the value in the table view (and, I assume, the internal timestamp value). It seems like there is some rounding / slight loss of precision going on here. Alternatively, is there any other way to get the timestamp value in the CSV output to exactly match the GUI's Start column in the Data table view?
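In case it helps the discussion, one workaround I'm considering (a minimal, untested sketch; the dt_s and rate_hz field names are made up for illustration) is to compute the time delta inside the HLA itself and emit it as a FrameV2 data field, since float fields appear to land in the CSV at full binary64 precision, as the rate column in my export above shows:

from saleae.analyzers import HighLevelAnalyzer, AnalyzerFrame

class QuadEncoder(HighLevelAnalyzer):
    result_types = {
        'QuadEncoder': {'format': 'dt={{data.dt_s}} s, rate={{data.rate_hz}} Hz'}
    }

    def __init__(self):
        self._prev_start = None  # GraphTime of the previous frame's start

    def decode(self, frame):
        fields = {}
        if self._prev_start is not None:
            # GraphTime - GraphTime yields a GraphTimeDelta; float() converts
            # it to seconds, so the CSV carries the full binary64 value.
            dt = float(frame.start_time - self._prev_start)
            fields['dt_s'] = dt
            fields['rate_hz'] = 1.0 / dt
        self._prev_start = frame.start_time
        return AnalyzerFrame('QuadEncoder', frame.start_time, frame.end_time, fields)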
Note: the most pedantic eyes might notice the two full-precision calculations above are still slightly off from each other:
- 16361.256544543792 (value calculated by the Python HLA extension)
- 16361.256544495765 (value calculated by the Python 3.11.2 interpreter above)
… but I'm guessing that might be a separate quantization-like issue, related to Saleae's internal saleae.data.GraphTime data type vs. Python's built-in float (i.e., IEEE-754 binary64)?