I was curious about how the MDIO bus is supposed to work versus the Realtek implementation, and found the following timing diagram while googling for an RTL8367RB datasheet:
… and the following timing characteristics:
The strange thing is that the MDC-to-MDIO Delay Time (symbol t4) is specified as 0 to 40 ns (typically 2.8 ns) after the falling edge of MDC.
However, searching for another MDIO specification, I found an example from Microchip (KSZ9131RNX):
… and the Microchip device has these timing characteristics:
Notice how Microchip’s t[val] parameter is 80 ns max, measured from the rising edge of MDC (rather than from the falling edge specified for the Realtek device above)?
I’m not an expert on the MDIO bus interface and don’t have the IEEE 802.3 specification handy, but it seems like the Realtek part’s MDIO output timing may have been designed in a way that doesn’t strictly comply with the IEEE specification.
I base this assumption on a quote from the IEEE spec provided in the NXP community post (linked previously), which states (bold added for emphasis):
According to IEEE 802.3: “When the MDIO signal is sourced by the PHY, it is sampled by the STA synchronously with respect to the rising edge of MDC. The clock to output delay from the PHY, as measured at the MII connector, shall be a minimum of 0 ns, and a maximum of 300 ns…”
If this understanding is correct, then the current Saleae MDIO analyzer won’t support the Realtek device’s signal behavior as-is, since it samples exactly on MDC falling edges. That apparently assumes the falling edge is delayed far enough from the MDC rising edge to satisfy the IEEE 802.3 maximum delay of 300 ns, or at least whatever a given PHY actually requires (the Microchip part, for example, only needs 80 ns for MDIO to be valid).
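To put rough numbers on that, here is a quick back-of-the-envelope check at the IEEE 802.3 Clause 22 maximum MDC rate of 2.5 MHz; the 50% duty cycle is my assumption, since the actual MDC waveform depends on the MAC driving it:

```cpp
// Back-of-the-envelope timing check: 2.5 MHz MDC (IEEE 802.3 Clause 22
// maximum) with an assumed 50% duty cycle.
#include <cstdio>

int main()
{
    const double mdc_period_ns   = 400.0;               // 1 / 2.5 MHz
    const double rise_to_fall_ns = mdc_period_ns / 2.0; // 200 ns at 50% duty

    // IEEE 802.3 allows the PHY up to 300 ns (from the MDC rising edge)
    // to drive MDIO, i.e. up to 100 ns AFTER the falling edge here.
    const double ieee_max_ns = 300.0;
    std::printf( "Compliant PHY may still change MDIO %.0f ns after MDC falls\n",
                 ieee_max_ns - rise_to_fall_ns );

    // Realtek datasheet: MDIO changes 0-40 ns after the MDC falling edge,
    // so a sample taken exactly on that edge races the output transition.
    std::printf( "Realtek MDIO settles up to %.0f ns after the rising edge\n",
                 rise_to_fall_ns + 40.0 );
    return 0;
}
```

Either way, the data line can legally still be moving at the exact falling-edge sample point, so sampling right on that edge has no guaranteed margin.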
Thus, it looks like a custom modification (or a future update) of the MDIO analyzer would be needed, for example:
- Delaying the MDIO sample point by a user-defined amount after the MDC falling edge, to better match Realtek’s implementation
- Implementing a user-defined delay after the MDC rising edge for more IEEE 802.3-compliant behavior (instead of referencing the falling edge); non-compliant implementations could then pad in some extra delay to absorb variations in the MDC high time (a rough sketch of both options follows below)
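For what it’s worth, here is a minimal sketch of what that sampling step could look like, written against the Saleae Analyzer SDK’s AnalyzerChannelData interface. Only the relevant fragment is shown: mMdc / mMdio stand for the analyzer’s channel-data pointers, and delay_ns / use_rising_edge are hypothetical user settings I’ve made up for illustration.

```cpp
// Sketch only: imagined as running inside the analyzer's worker thread,
// where mMdc and mMdio are AnalyzerChannelData* members and
// GetSampleRate() is available from the Analyzer base class.
#include <AnalyzerChannelData.h>

// Convert the hypothetical user-defined delay (ns) to a sample count.
U64 delay_samples = ( U64 )( ( delay_ns * 1e-9 ) * GetSampleRate() );

// Walk MDC to the chosen reference edge: rising for IEEE-style timing,
// falling to mimic the current analyzer's behavior. (The state after a
// rising edge is HIGH; after a falling edge it is LOW.)
mMdc->AdvanceToNextEdge();
BitState wanted = use_rising_edge ? BIT_HIGH : BIT_LOW;
if( mMdc->GetBitState() != wanted )
    mMdc->AdvanceToNextEdge(); // wrong polarity; take one more edge

// Sample MDIO a fixed delay after the reference edge, not on the edge.
mMdio->AdvanceToAbsPosition( mMdc->GetSampleNumber() + delay_samples );
BitState bit = mMdio->GetBitState();
// ... shift 'bit' into the word being decoded ...
```

Exposing both the reference edge and the delay as analyzer settings would let one code path cover the Realtek part (falling edge plus a small delay) as well as strict IEEE 802.3 behavior (rising edge plus up to ~300 ns).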
Out of curiosity, did you have any luck slowing down the MDC clock speed?