Simon Rabeder wrote: the time quantisation of the slate, or anything that confirms your sync for that matter, is only at the frame rate.
True, if one tries to do it visually. More often than not, though, at least for me, the camera footage carries some sort of audio, say from the on-board mic. Matching the spikes in the two waveforms is easy, and it's my preferred method when I'm syncing in a DAW. I agree it isn't strictly necessary beyond 1/4 frame, but there's no harm in it either.
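To make the "matching the spikes" idea concrete, here is a minimal sketch of how that alignment can be automated with cross-correlation; the function name, the toy sample rate, and the synthetic click are all illustrative assumptions, not anything from a particular DAW:

```python
# Hypothetical sketch: estimate the offset between the camera's scratch
# audio and the recorder's track by cross-correlating the two waveforms.
import numpy as np

def estimate_offset(camera_audio: np.ndarray, recorder_audio: np.ndarray,
                    sample_rate: int) -> float:
    """Return the delay in seconds of recorder_audio relative to camera_audio."""
    # Full cross-correlation; the peak marks the best alignment.
    corr = np.correlate(recorder_audio, camera_audio, mode="full")
    lag = int(np.argmax(corr)) - (len(camera_audio) - 1)
    return lag / sample_rate

# Toy check at a deliberately low sample rate: the same "clap"
# placed 0.25 s later in the second track.
sr = 8_000
click = np.hanning(64)
a = np.zeros(sr)
a[100:164] = click                      # camera scratch track
b = np.zeros(sr)
b[100 + sr // 4 : 164 + sr // 4] = click  # recorder track, 0.25 s late
print(estimate_offset(a, b, sr))        # → 0.25
```

The same peak-picking works on real scratch audio, though noisy material may need band-limiting or envelope extraction first so the correlation peak stays sharp.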
Also, as you say, this could be used to sync audio from multiple sources. Granted, that's usually a bad strategy, but I can see it happening on the rare occasion when there are, say, more lav mics than the recorder has channels.
The way I see it, once that view is free of the frame-based grid snap, there's no reason to impose a new one: the waveform could simply move with the accuracy of the current zoom level. That way, one can zoom in as far as one's sync needs require, assess drift in long recordings, and so on. No harm in it, and I don't see it as harder to implement than, say, a 1/4-frame grid.
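The zoom-level idea above amounts to quantising a drag to whole screen pixels rather than to a frame grid. A tiny sketch, with an entirely made-up function name and figures, just to show the arithmetic:

```python
# Hypothetical sketch of zoom-dependent drag quantisation: snap a dragged
# waveform to whole pixels, so precision follows the zoom level instead of
# a fixed frame (or 1/4-frame) grid.
def drag_snap_samples(drag_pixels: int, samples_per_pixel: float) -> int:
    """Convert a drag distance in screen pixels to an offset in audio samples."""
    return round(drag_pixels * samples_per_pixel)

# Zoomed out (1024 samples per pixel), a 3-pixel drag moves coarsely...
print(drag_snap_samples(3, 1024))  # → 3072
# ...while zoomed in to 1 sample per pixel it becomes sample-accurate.
print(drag_snap_samples(3, 1.0))   # → 3
```

The nice property is that no extra grid setting is needed: the finest achievable move is always exactly one pixel at the current zoom.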