Extension output message format

Hi
I was wondering why I cannot print characters like “<” or “>” in the output strings produced by my Python extension.
Is there a special character encoding I have to respect? I’ve also tried using repr(), but no luck.

Another question: my output strings are slowly getting too long. Is there a way to display them on two lines in the analyzer results?

@davide.ferrari For your question below:

Another question: my output strings are slowly getting too long. Is there a way to display them on two lines in the analyzer results?

Are you referring to the Data Table to the right of the software’s UI? If so, each row in the data table corresponds to a single decoded protocol bubble. So, if your HLA extension outputs a long string onto a single protocol bubble, it will appear as a single row in the data table.

Would you prefer to have a “wraparound” option, similar to how Excel can handle long text in a single box? Let us know your preferred solution. I’d love to get this improvement idea recorded below:
https://ideas.saleae.com/b/feature-requests/

As for your question on printing “<” or “>”, I’m not quite sure. Let me get this in front of our software team. We’ll follow up on that.

@davide.ferrari Is the “<” and “>” issue in an HLA format string? This is an outstanding issue that needs to be fixed, but you can work around it with triple braces, for example: "Value: {{{ data.value }}}"
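
For context, here is a minimal sketch of where that format string lives, assuming the standard HLA extension API and an Async Serial input (the class and field names are just placeholders):

```python
from saleae.analyzers import HighLevelAnalyzer, AnalyzerFrame

class Hla(HighLevelAnalyzer):
    result_types = {
        'mytype': {
            # Triple braces around the field, per the workaround above, so
            # values containing "<" or ">" still render in the bubble text.
            'format': 'Value: {{{ data.value }}}'
        }
    }

    def decode(self, frame: AnalyzerFrame):
        # The 'data' key assumes an Async Serial input frame; adjust for your LLA.
        return AnalyzerFrame('mytype', frame.start_time, frame.end_time,
                             {'value': frame.data['data']})
```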

Thank you all.
“Would you prefer to have a ‘wraparound’ option?”
Yes.
I don’t know if this would be the best solution for me, but as a quick hack it would do.

I’m trying to implement a protocol analyzer that decodes both layer 2 and layer 7 (I guess).
So I would need to represent both layers.
The quick hack could help me, but a better solution would be to feed the output of the layer 2 analyzer into a second analyzer for the upper layer, so that the latter could represent the messages in its own format, independently of the lower layer. Is it possible to connect two custom protocol analyzers?
My problem is that the messages of the upper layer are spread across multiple messages of the lower layer, so the upper layer would also have to specify start/stop timing information that differs from that of the lower layer.

@davide.ferrari Thanks for getting the wraparound idea posted!

For your question below:

Is it possible to connect two custom protocol analyzers?

I’m assuming your requirements are the following:

  • You are currently using a pre-installed low-level analyzer (LLA) - for example, Async Serial, SPI, I2C, etc.
  • You have built a custom High-Level analyzer (HLA) extension via Python that will sit on top of the LLA above
  • Your HLA currently decodes two separate layers at the same time

If this is the case, then yes, you could split your HLA into two HLAs and attach them both to the same LLA, thereby decoding the LLA’s output in two different ways.

As an example, I can share the image below:

You will notice that I currently have two HLAs (Concatenator_2 and Concatenator_1) attached to the Async Serial Analyzer on Channel 0, both of which decode the data in different ways. As shown in the data table, their results appear on separate rows.
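
In code terms, the two HLAs are simply two independent extensions pointed at the same input. As a rough sketch of what such a pair could look like (assuming an Async Serial LLA; in practice each class would live in its own extension):

```python
from saleae.analyzers import HighLevelAnalyzer, AnalyzerFrame

# First extension: show each serial byte as hex.
class HexView(HighLevelAnalyzer):
    result_types = {'hex': {'format': 'hex: {{data.text}}'}}

    def decode(self, frame: AnalyzerFrame):
        return AnalyzerFrame('hex', frame.start_time, frame.end_time,
                             {'text': frame.data['data'].hex()})

# Second extension: show the same byte as ASCII instead.
class AsciiView(HighLevelAnalyzer):
    result_types = {'ascii': {'format': 'ascii: {{data.text}}'}}

    def decode(self, frame: AnalyzerFrame):
        return AnalyzerFrame('ascii', frame.start_time, frame.end_time,
                             {'text': frame.data['data'].decode('ascii', errors='replace')})
```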

Thank you, Tim.
Actually, my idea would have been to provide the output of the first analyzer to the second analyzer too (as well as to Logic 2), so that I can already perform some low-level analysis in the “layer 2” analyzer and then pass the decoded packets to the second one for further processing.
In this way I can actually simulate a device that receives the packets, and the way I represent the different layers would be independent of each other.

@davide.ferrari Ah I think I see what you mean. Unfortunately, one HLA’s output cannot feed into another HLA as an input, though this could be a great feature request to post on our ideas site as well.

In your case, the best approach right now might be:

  • HLA_1 to implement layer 2 decoding only
  • HLA_2 to implement layer 7 decoding only (perhaps by also including layer 2 decoding in the background of this HLA, since HLA_1 results cannot be used as an input); see the sketch below
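
To make that second bullet concrete, here is a rough sketch of how HLA_2 might buffer lower-layer frames and emit a single layer 7 frame spanning them, which also addresses the start/stop timing concern (class names, the data key, and the end-of-message check are placeholders, assuming an Async Serial LLA underneath):

```python
from saleae.analyzers import HighLevelAnalyzer, AnalyzerFrame

class Layer7Hla(HighLevelAnalyzer):
    result_types = {
        'l7_message': {'format': 'L7: {{data.payload}}'}
    }

    def __init__(self):
        self._buffer = []        # accumulated layer 2 payload chunks
        self._start_time = None  # start of the first frame in the message

    def decode(self, frame: AnalyzerFrame):
        # Remember when the layer 7 message started.
        if self._start_time is None:
            self._start_time = frame.start_time

        self._buffer.append(frame.data['data'])

        # Placeholder end-of-message check; replace with your layer 2 framing rule.
        if not self._message_complete(self._buffer):
            return None  # keep accumulating, emit nothing yet

        result = AnalyzerFrame(
            'l7_message',
            self._start_time,   # spans from the first lower-layer frame...
            frame.end_time,     # ...to the last one
            {'payload': b''.join(self._buffer)}
        )
        self._buffer = []
        self._start_time = None
        return result

    def _message_complete(self, chunks):
        # Placeholder: e.g. look for a terminator byte or a length field.
        return bool(chunks) and chunks[-1] == b'\n'
```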

I can see how implementing a growing number of layers in this manner would be a hassle to code. The ability to chain HLAs into each other seems like it would become more and more useful as more layers are implemented (i.e. Layer 1 HLA → input to Layer 2 HLA → input to Layer 3 HLA → etc.).

This is related to Multiple inputs to HLAs - Logic 2 - Ideas and Feature Requests - Saleae and, I’m pretty sure, there is an idea that directly addresses using an HLA as a source for another HLA - but I can’t find it. If there isn’t such an idea, there should be! :slight_smile:

Thank you, good idea!