Custom protocol implementation - advice


I would like to design a custom protocol analyzer for the debug port of an MPC56 MCU. The port uses serial communication that is easily interpreted via the SPI protocol; however, it can use 35-bit or 9-bit messages, so I need a way for the analyzer to accept both message lengths.

Could you point me to a method for designing such an analyzer? I suspect that an HLA will not do the job, as it relies on the underlying SPI protocol analyzer, which, at least to my knowledge, cannot distinguish between frames of different sizes.


You could build your own C++ analyzer with the Analyzer SDK; the built-in SPI analyzer can serve as a starting point. See the Analyzer SDK & Analyzer Code pages for more info.
With frame sizes of 9 and 35 bits, and no other indicator that a frame is still in progress, you need to check whether the next transition after the 9th bit takes too long to arrive. These two functions are useful for this:
U64 GetSampleOfNextEdge();
bool WouldAdvancingCauseTransition( U32 num_samples );
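As a rough sketch of how that frame-length decision could work (a hypothetical helper with made-up names; inside a real analyzer you would get the sample positions from mClock->GetSampleNumber() and mClock->GetSampleOfNextEdge(), or simply call WouldAdvancingCauseTransition() with a sample count):

```cpp
#include <cstdint>

// Hypothetical helper: after sampling the 9th bit, decide whether the frame
// has ended by checking whether the next clock edge is farther away than an
// idle timeout. All names here are illustrative, not part of the SDK.
bool FrameEndedAfterNineBits( uint64_t current_sample, uint64_t next_edge_sample,
                              uint64_t sample_rate_hz, double idle_timeout_s )
{
    // convert the timeout from seconds to a number of samples
    uint64_t timeout_samples = uint64_t( idle_timeout_s * double( sample_rate_hz ) );
    // if the next transition is beyond the timeout, the 9-bit frame is over
    return ( next_edge_sample - current_sample ) > timeout_samples;
}
```

The idle timeout you choose would depend on your bus clock rate; anything comfortably longer than one clock period but shorter than the inter-frame gap should work.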


Thank you for the reply. I’ve downloaded the SPI analyzer and I’m now trying to figure out how to insert the logic I need in the code.

I think I should look into the function void SpiAnalyzer::GetWord(), especially in the loop part:

for( U32 i = 0; i < bits_per_transfer; i++ )
{
    // timeout_samples is my guess at the idle threshold the check needs
    if( mClock->WouldAdvancingCauseTransition( timeout_samples ) == false && i == 8 )
    {
        // We have a 9-bit data transaction, save and exit
    }
}

Can you tell me if I’m on the right path here? I was not able to find the documentation of the functions you referenced, so I’m doing a bit of guesswork.

EDIT: I managed to have the custom low-level protocol analyzer output correct data for different-sized transactions by checking the state of the DSDO line (corresponding to SPI’s MISO) in the SpiAnalyzer::GetWord() loop. However, I’ve noticed that I can’t use a custom High Level Analyzer with my low-level analyzer. Can you confirm whether that’s really the case? I’d like to build an HLA on top of the C++ analyzer.

Also, could you point me to a way to measure the time between two clock edges?


@b.kereziev Glad to hear you’re making progress! Unfortunately, HLAs are only supported on a specific set of prebuilt analyzers. The officially supported list is below:

One thing to note is that SPI is in fact on the supported list, though I don’t know off the top of my head whether a variation of the SPI analyzer could be supported. In the meantime, I’ll review this with the team here and get back to you (in addition to your question about measuring the time between clock edges).


We don’t officially support connecting a custom C++ LLA to HLAs yet, although we do want to support that in the future. It is possible to do this today with an undocumented API, but we will break that API in the future, so your custom LLA would stop working completely when that happens and would require code changes to fix. If you want to try that despite the warning, let us know and we can send you an example.

However, I have a simpler suggestion. I had a similar problem when writing an HLA for USB PD, which contains some 8-bit words and some 10-bit words.

A simple solution that avoids reworking the SPI analyzer’s framing logic is to modify it to use a word size of 1 bit per word. Then, in your HLA, you can group those bits together however you like. This would let you do the whole project in Python, and you wouldn’t need to worry about the API breaking later.
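The grouping itself is simple; here is a language-neutral sketch of the logic (shown in C++ to match the rest of this thread, though in an actual HLA it would be Python). Frame boundaries between 9- and 35-bit words would be decided by timing gaps between the 1-bit input frames; the function name and bit order are assumptions for illustration:

```cpp
#include <cstdint>
#include <vector>

// Sketch of the bit-grouping an HLA would perform on 1-bit SPI words:
// pack a run of single-bit frames, MSB first, into one output value.
uint64_t PackBits( const std::vector<int>& bits )
{
    uint64_t word = 0;
    for( int bit : bits )
        word = ( word << 1 ) | uint64_t( bit & 1 );
    return word;
}
```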

Here is a simple HLA example which combines groups of input frames into a single output frame:

You could model your HLA off of that approach. Let us know if you have any questions about it, or need any help!


Hi again,
sorry for the late reply. I assume you found the API documentation on the Analyzer SDK website in the meantime.

It’s a pity that one cannot use HLAs on top of your own low-level analyzer at the moment, but you might still be able to implement everything you need directly in your low-level analyzer in C++.

The Analyzer base class has a member function called GetSampleRate(); you can use it to calculate the time between two samples. Watch out: you can only move forward in time within your analyzer, never backwards. This can be a bit tricky sometimes.

For example (untested code):
U64 edgeA = mClock->GetSampleNumber();     // current position on the clock channel
U64 edgeB = mClock->GetSampleOfNextEdge(); // sample number of the next clock transition
double bitRate = double( GetSampleRate() ) / ( 2.0 * double( edgeB - edgeA ) );

If you sum up multiple clock ticks, you get a more accurate result. The factor of 2 comes from the fact that the bit rate equals the full clock frequency, but this example measures only half a clock period (e.g. only the low phase).
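For instance, averaging over several half-periods could look like this (untested sketch with made-up names; in the analyzer, the first and last edge positions would come from repeated GetSampleOfNextEdge() calls):

```cpp
#include <cstdint>

// Sketch: estimate the bit rate from the span between the first and last of a
// run of clock edges. half_period_count is the number of half-periods covered.
double EstimateBitRate( uint64_t first_edge, uint64_t last_edge,
                        uint32_t half_period_count, uint64_t sample_rate_hz )
{
    double samples_per_half_period =
        double( last_edge - first_edge ) / double( half_period_count );
    // bit rate equals the full clock frequency; one full period = two half-periods
    return double( sample_rate_hz ) / ( 2.0 * samples_per_half_period );
}
```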


Hello, I am interested in the undocumented API for connecting a custom LLA to HLAs. Would it be possible to get the example you mentioned?


Yes, in fact we’ve decided to make the existing API official, so we will be publishing documentation about that soon (targeting before the end of the year, but I reserve the right to be wrong about that).

For now, please take a look at this example for how our SPI analyzer does it:

We will be adding a method to the API for your analyzer to register as supporting FrameV2. You will need to add that call once it’s released, which will be at the same time we publish the documentation and release the associated software update.

You can use the FrameV2 API fully today, without that new method; however, you will not see FrameV2 entries in the data table until it’s added. In the interim, you will need to forward the FrameV2 results through an HLA in order to see them in the data table.

Basically, today the data table has a whitelist of LLAs whose FrameV2 results it will show. Once we make the API public, we will add the mentioned method, which you will then need to add to your LLA for the data table to display FrameV2 results.


Thanks for the update. To get the FrameV2 results into an HLA, will I need to provide additional code to forward the results, or is this handled by the class member variable mResults? In that case, will I just need to add an HLA in the Logic 2 UI to see the results, as you mentioned?

Also, just to clarify: once the HLA is there, I should be able to use the FrameV2 results in the HLA Python code, correct?



The only code you need to add to your C++ LLA is the necessary code to create FrameV2 objects and submit them with mResults->AddFrameV2.

Once that is done, you can test this with either a custom HLA, or the “LLA Frame V2 Tester” already in the marketplace.

I followed the setup instructions here for the custom LLA; however, in my code I don’t have a reference to the FrameV2 class. Is there an updated SDK I’m missing?

Actually, I think I found the right SDK here (Please correct me if I’m wrong). I’ll test this out with the new FrameV2 implementation.

I need to do about the same. I’ve spent a week making my LLA, and for the last two days I’ve been struggling to make an HLA, but for some reason it was not working.
Now I see that it was not possible to make it work with the tools at hand.

So, OK, I’m pulling the alpha version of the SDK so I can add V2 frames instead of the V1 frames I’m adding right now.
May I know what data I have to provide to save a V2 frame?

This is how I save a V1 frame:

void RFAnalyzer::RecordFrameV1(U64 starting_sample, U64 ending_sample, RFframeType type, U64 data1, U64 data2) {
	Frame frame;

	frame.mStartingSampleInclusive = starting_sample;
	frame.mEndingSampleInclusive = ending_sample;
	frame.mFlags = 0;
	frame.mType = (U8)type;
	frame.mData1 = data1;
	frame.mData2 = data2;

	mResults->AddFrame(frame);
}


Can you explain what to change so I can save it as V2?


In only one day I made a decoder for sigrok PulseView, yet it has taken me over a week to try to do the same for Saleae Logic. This is how my PulseView decoder looks:

Hi @compras,

I suggest you think of FrameV2 objects as dictionaries to which you can add as many keys as you like, because that is how you will access them later from Python.

The main commonality between FrameV1 and FrameV2 is that both need to have a starting and ending sample. This determines where the frame goes in the capture.

Besides that, your FrameV2 needs a type (any string you like; later, in Python, you can have different format strings for different types). From your screenshot, maybe your LLA only has 2 types (bit and sync), but your HLA might have more.
Note: your LLA might not even need to decode the “sync” frame, because your HLA might be able to compute it from a gap between bits and add it as a new frame.
Then, your FrameV2 can have key/value pairs, stored in what will later be a dictionary.

For example, if your LLA needs to produce one FrameV2 per bit, you could just store a field called “bitstate” with a boolean.

This example creates and adds a FrameV2 with type “bit” and one key/value pair (key “bitstate”, value true):

FrameV2 framev2;
framev2.AddBoolean( "bitstate", true );
mResults->AddFrameV2( framev2, "bit", starting_sample, ending_sample );
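Mapped onto your RecordFrameV1 helper, the change is essentially to replace the fixed mData1/mData2 slots with named keys. Here is a self-contained sketch using a stand-in type so it runs outside the SDK (the real FrameV2 and mResults->AddFrameV2 come from the alpha Analyzer SDK; the stand-in names below are mine, and only the dictionary idea carries over):

```cpp
#include <cstdint>
#include <map>
#include <string>

// Stand-in for the SDK's FrameV2, to illustrate its dictionary-like model;
// the real class offers AddBoolean, AddInteger, AddString, and similar setters.
struct FrameV2Sketch
{
    std::map<std::string, int64_t> values;
    void AddInteger( const std::string& key, int64_t value ) { values[ key ] = value; }
};

// V2 analogue of RecordFrameV1: instead of packing everything into mData1 and
// mData2, each value gets its own named key, which Python later sees as a
// dictionary entry on the frame.
FrameV2Sketch MakeRfFrame( uint64_t data1, uint64_t data2 )
{
    FrameV2Sketch frame;
    frame.AddInteger( "data1", int64_t( data1 ) );
    frame.AddInteger( "data2", int64_t( data2 ) );
    return frame; // with the real SDK: mResults->AddFrameV2( frame, type, start, end );
}
```

Note that the type string and the start/end samples are passed to AddFrameV2 itself rather than stored as fields, which is the other structural difference from the V1 Frame.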

OK, I kind of understand this… I just need to run some tests to clear it up.
Thanks, Mark!