Does the difference between Mealy and Moore state machines have any real significance when it comes to a C implementation? What would that difference normally be?
A long time ago it was much easier for me to understand the Mealy/Moore advantages and disadvantages in RTL. The distinction (output depending only on the current state, versus on the current state plus the current input) made sense, as did the fact that a Mealy machine can sometimes be built with one fewer state. Associating timing diagrams with each FSM implementation also made the difference between them clearer.
Say I'm making a state machine in C. In one case a LUT is indexed by the current state and the current inputs (Mealy); in the Moore case the LUT just looks up the current state. In either case the output happens after the return from the LUT (I think, though I could be wrong). I haven't thought of a clear way in which a Mealy machine has an advantage when coded in C. Code readability, speed, code density, and design ease may all be relevant topics; from my perspective the two models seem almost the same.
Maybe this difference is just a topic for academics, and in practice the difference between C implementations is negligible. If you know the key ways a C state machine implementation would differ between Mealy and Moore, and whether there are real (and significant) advantages, I'd be curious to know. I'd like to emphasize: I'm not asking about RTL implementations.
I did see one other Mealy/Moore post here: Mealy v/s. Moore
But this isn't really the level of explanation I am looking for.
There is a mechanical procedure for converting one formalism into the other, so there is no structural difference between the two.
Speaking of differences in implementation, the two formalisms differ only in the output function, which tells you which symbol to emit. Specifically, a Moore machine may be slightly simpler to implement, because you have less information to track when generating the output, but the difference is really small.
Here is how a simple Moore machine might look in C:
int state = INITIAL;
while (!isTerminal(state)) {
    char input = nextInputSymbol();
    fprintf(output, "%c", nextOutputSymbol(state));
    state = nextState(state, input);
}
and here is the Mealy machine:
int state = INITIAL;
while (!isTerminal(state)) {
    char input = nextInputSymbol();
    fprintf(output, "%c", nextOutputSymbol(input, state));
    state = nextState(state, input);
}
As you see, the only difference is in the signature of nextOutputSymbol(). Whether the implementation of that function is easier in one formalism or the other really depends on the specific application.
nextInputSymbol() is just a routine that fetches a new symbol (it might be a scanf or the like), and nextState() depends on the specific machine, but its complexity will not change much between a Mealy machine and its equivalent Moore machine.
In particular, both nextOutputSymbol() and nextState() boil down to a table lookup, a switch, or an if/else chain, with no real implementation difficulty. They are just boring to write, really.
NOTE: I omitted error checking in the code snippets to keep us focused on the main point of the discussion. Real-world code would perform some extra checks, like handling EOF in nextInputSymbol() or breaking out of the loop on error states.