Is LLA data synchronous across channels?

When writing an LLA can I assume that all channels contain the same amount of data at any particular point in time?

What I am trying to do is compare the difference in time between edges in different channels. It makes the detection logic simpler if I can assume that there are as many samples in a reference channel as there are in a target channel. I’m using code like:

      while (refEdge < tagEdge && refData->DoMoreTransitionsExistInCurrentData())
         {
         pendingRefEdge = refEdge;
         refData->AdvanceToNextEdge();
         refData->AdvanceToNextEdge();
         refEdge = refData->GetSampleNumber();
         }

to find the most recent edge in a reference channel to the current edge in a target channel. I occasionally get a reference edge one edge earlier than the correct edge:

image

Update: I’m using DoMoreTransitionsExistInCurrentData() to try and handle the end of data case without missing the last measurement.

@P.Jaquiery that’s a risky assumption to make, in my view. Without knowing which channels you are comparing, one shouldn’t assume that all channels contain the same amount of data. A Tx channel may well contain more than the Rx channel: Tx sends a bunch of data to a particular byte address, and then the Rx Ack response should be shorter, but it all depends.

However, @timreyes and co will know much better than I.

Cheers

@b.hughes1 the LLA data API is presented in terms of sample number rather than edge epochs. It’s not altogether unreasonable to assume that the hardware collects samples across digital inputs so at any particular point in time Logic should in principle have enough information to answer questions about the state of the logic inputs at a particular epoch. However, as you suggest, the team at Saleae are in the best position to know. :slight_smile:

As it happens, time moves on and I’ve discovered the logic flaw in my code and corrected it:

      while (refEdge < tagEdge)
         {
         pendingRefEdge = refEdge;

         // Stop when no further reference transition exists at or before
         // the target edge; the current refEdge is then the closest match.
         if (!refData->WouldAdvancingToAbsPositionCauseTransition(tagEdge + 1))
            break;

         refData->AdvanceToNextEdge();   // step over a full pulse to the
         refData->AdvanceToNextEdge();   // next edge of the same polarity
         refEdge = refData->GetSampleNumber();
         }

so this is somewhat academic, but interesting nonetheless.


@P.Jaquiery
Nice pickup on your minor coding error. I’m sure Tim and co will get back to you with something.
Hope you have more success with your effort.

Cheers

It’s already paying off. I can dump the values to a CSV then plot them using Excel (or something similar) to get a nice graph showing the behavior of the digital phase locked loop I’m working with!

The various discontinuities are due to multiple edges in the reference signal resulting in “jumps” in the phase signal as the target pulse drifts into lock. Not ideal, but good enough for me to diagnose the issue I’m chasing down.


@P.Jaquiery ,
That is awesome and the PLL plot is gold. Are you suggesting it’s a PLL issue, or are you hunting another issue? You are all over it, my friend.
I’d imagine the PLL used inside the Saleae is a good one, so if it’s your issue, it’s either the source PLL or the Saleae.
If I am way off, let me know.

Are you getting an “extra” edge (glitch) in the reference signal? If so, did you try to “deglitch” your waveform (glitch filter)?

You may also try choosing a different digital threshold, depending on the hardware voltage levels for your logic inputs and analyzer capability (as a different level might be more stable):

Even though the logic analyzer presents the digital signal as a 1 or 0 (high or low), the real world is still fundamentally analog – the underlying signal is really above or below a certain logic level threshold, and can be glitchy due to “bouncy” or “noisy” signals in the underlying hardware. The noise can be mitigated by good grounding (did you connect the black ground wire of your reference channel?), good filtering (glitch filter), good threshold limits, etc.

Good luck!

Edit: re-considering the details of your question –
All of the Analyzer SDK traversal APIs appear to work in terms of a ‘sample_number’ or ‘num_samples’ (or the state via ‘NextEdge’) rather than a value in units of time (i.e., seconds). However, I think that all of the channels share a common data stream, and therefore the ‘sample_number’ parameter used in the AdvanceToAbsPosition() API for each channel is a globally common reference point that is synchronized across all channels ( @timreyes ?)

Therefore, I assume that if each channel advances via AdvanceToAbsPosition() to the same ‘sample_number’ value, you could individually query each channel’s state (via the GetBitState() API) to determine whether a specific channel is in the BIT_LOW or BIT_HIGH state at that same ‘sample_number’ in the stream.

Thus, each channel may not have the same number of EDGES, but I think that ALL of the channels are being tracked together in the analyzer data stream with a shared ‘sample_number’ reference (clocked in ‘samples’ rather than ‘time’) and therefore each channel has a known value (i.e., specific state) at any given ‘sample_number’ within the stream.

Based on this, I think you can use AdvanceToNextEdge() and GetSampleNumber() on the ‘main’ channel you’d like to use as your ‘clock’ reference (i.e., time reference point, expressed in terms of an absolute ‘sample_number’ vs. actual ‘time’ in seconds), and then use the AdvanceToAbsPosition() API on your other channel(s) to synchronize to the same ‘sample_number’ (point in ‘time’) as the ‘main’ channel.

Ultimately, there may not be the same number of ‘edges’ on a given channel, but I believe all of the channels share the same ‘sample_number’ as a ‘timestamp’ (in sample counts rather than time).

However, the ‘sample_number’ value can be related to time (in seconds) via other API functions, such as:
GetTimeString(), which uses the trigger sample and sample rate to calculate a time string, OR
GetSampleRate() API to get the sample rate (in Hz) as a 32-bit integer value (U32) which can be used to derive the sample period (in seconds) from the equation:

sample period (in seconds) = 1 / sample rate (in Hz)

Note: If doing so, be sure to properly manage the data type conversion from U32 to double – an integer division operation truncates toward zero rather than returning any fractional result :wink:


As there seems to be some interest in this issue I’ll fill in the details a little. We have a data acquisition system that locks its sampling clock to an external clock using a digital PLL. The PLL is taking longer than we would like to lock, but we haven’t had much visibility on how it is performing, hence the LLA.

The system is now working pretty well. The wraps and discontinuities are because I’m using a SPI chip enable signal to obtain the sampling rate. However, it is a multi-channel ADC, and we get multiple enables for one nominal sample time (sampling multiple channels). As the PLL comes into lock the phase slides over the sample window and my detector hops from edge to edge as that happens. The signal looks like this:


Oh, did I mention there are 100 samples per input clock tick? When the system is in lock the rising purple edge should be aligned with the first rising edge in the orange pulse burst. I’m using the red input as the reference because at lock time the purple pulse should be within the large high period of the red trace.

With a little script magic I can remove the discontinuities shown in the graph I showed in an earlier message and get:


which I’m pretty pleased with!


and after a little work the graph now looks like:

image

which I’m quite happy with! :smiley:


@P.Jaquiery
This is the output I’d expect from the PLL. Nicely done and as you should be happy :slight_smile:

Hi all,

Just to chime in on the original question - sorry for our delay!

This is one of the weak points of the analyzer API. When we (eventually) get back to LLAs, one of the first things I want to add is a way to detect or advance to the next edge on a group of signals, rather than requiring the user to operate on one at a time.

If there were API calls that allowed operating on more than one channel at once, what API call would you want to see for this? What would your while loop look like with your proposed functions?

To address the question:

When writing an LLA can I assume that all channels contain the same amount of data at any particular point in time?

  1. No, the Logic 2 software processes each digital channel on its own thread, and so at any given point one channel may have more data available than another.

  2. If a particular point is available on one channel, it will eventually be available on all channels.

  3. All functions are transparent to how much data is currently available, except for DoMoreTransitionsExistInCurrentData. That’s the only function that considers only the data available for processing at the moment it is called. (Note: it may actually block for up to 250 ms waiting for more data; details below.) All other functions, such as AdvanceToNextEdge, will simply block until the necessary data has been processed. (Note: if the event or sample they are waiting for never arrives, because the capture has completed, they will never return.)

  4. The exception to this is for loaded captures, or analyzers added after a capture is complete (and all data has finished processing). In this case, all data for all channels is available the moment the analyzer starts, and DoMoreTransitionsExistInCurrentData will be able to check all remaining data for the given channel.

  5. DoMoreTransitionsExistInCurrentData will pause up to 250 ms waiting for more data if no more transitions currently exist. This prevents the CPU from going to 100% if your analyzer gets stuck in a loop checking this function at the end of the capture, which most of our analyzers do. When the user removes the analyzer or closes the capture, this function will not return; instead it will exit the thread.

If you want to find the next edge on a group of channels, there isn’t a clean way to do this today.
You can start by calling DoMoreTransitionsExistInCurrentData on each channel. If any channel returns true, then you can use GetSampleOfNextEdge and WouldAdvancingToAbsPositionCauseTransition to determine which channel changed state first, without advancing any of the channels.
In the case where DoMoreTransitionsExistInCurrentData returns false on all channels, your only real choice is to “march” all signals forward by a fixed amount. I would recommend a value less than 10ms for this amount, as that’s approximately the smallest amount of data our software will process and release to the analyzers at one time.

This solution is pretty ugly, and there is a small (possibly zero) chance of blocking without processing the last transition in the capture, if that transition appears within the march time from the end of the capture.

The “march” behavior is required because analyzers must report progress and advance their way through the capture. This is critical in looping and triggered capture modes, because the system must delete older data from the circular buffer as more data is recorded. If an analyzer never advances, then it will be terminated when the circular buffer fills and begins to delete old data.

It’s generally a good idea to test your analyzer with no signals connected to your device, and make sure that it’s able to report 100% progress. If it gets stuck at 0% progress, the analyzer will automatically be disabled once the first chunk of old data is deleted. Also, several operations, like exporting analyzer results, won’t be possible until the analyzer completes.


The block behavior is what I expected for the Advance/Advancing methods and point 2 is expected also.

DoMoreTransitionsExistInCurrentData behavior is also expected.

A method (on AnalyzerX?) like:

      typedef std::vector<Channel *> ChannelList;
      typedef std::vector<size_t> Indexes;
      Indexes AdvanceToFirstEdge(U64 startSample, ChannelList const &channels, U64 &edgeSample);

that returns a list of channel indexes with an edge at edgeSample. On no match it returns an empty list, and edgeSample is set to the last sample searched across the channels, so the next search can start at that sample.

A bool HaveFinalSample(U64 &finalSample) method on ‘AnalyzerX’ could resolve the end-of-sampling tidy-up issue for analyzers that care.
