Very good point about multi-view analysis. It departs a bit from what I had in mind, which was quality-degrading issues in the video itself, but I like to consider what the imaging system can deliver as a whole, whether it is a single-camera or multi-camera system.
For 3D quantitative analysis, I think 2 cameras is the theoretical minimum, but real motion-tracking applications never use fewer than 4. (I've used 2-camera systems for tracking eyes or fingers in a small volume, but as soon as you want to track body parts you move to 4, 8, or even more.)
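For reference, the two-camera minimum comes from the fact that each calibrated view turns a pixel into a ray, and two rays are enough to pin down the 3D point. A minimal sketch of linear (DLT) triangulation in Python; the camera matrices and the test point are made up purely for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coords."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 for the homogeneous point X via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # back to Euclidean coordinates

# Two synthetic cameras: identical intrinsics, second one shifted along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.2, 0.1, 5.0, 1.0])
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, x1, x2))   # ~ [0.2, 0.1, 5.0]
```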
So, what are the main problems and defects of a multi-camera setup? (Considering qualitative analysis only for now.)
Mis-synchronization is probably one of the biggest… We can always synchronize to frame level in software, but sub-frame sync requires hardware support. That's a clever use of WiFi, for sure.
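For the software-only, frame-level part, one common trick is to record a shared event (a clap, a flash) and estimate the offset from it. A rough sketch, assuming both cameras also record audio at the same sample rate and run at the same frame rate; the function and parameter names are just placeholders:

```python
import numpy as np

def frame_offset(audio_a, audio_b, sample_rate, fps):
    """Estimate the frame offset between two recordings from their audio
    tracks (e.g. a hand clap), via cross-correlation of crude envelopes.
    Positive result: the clap appears later in recording A, i.e. camera A
    started recording first."""
    env_a = np.abs(audio_a - np.mean(audio_a))
    env_b = np.abs(audio_b - np.mean(audio_b))
    # Brute-force cross-correlation; fine for a sketch, slow on long clips.
    corr = np.correlate(env_a, env_b, mode="full")
    lag_samples = np.argmax(corr) - (len(env_b) - 1)
    lag_seconds = lag_samples / sample_rate
    return round(lag_seconds * fps)   # offset in whole frames
```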
I don't know what constitutes an acceptable synchronization level for sports analysis. (I know that stereoscopic video requires almost pixel-level sync, for example.)
On the subject, I have plans for a half-software, half-hardware sub-frame synchronization method using the rolling shutter of consumer USB cameras and an Arduino-powered strobe light. I'll post more details if I ever get it working.
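Not the actual method yet, but the measurement side would roughly look like this, assuming the strobe pulse is much shorter than the sensor's readout time so it only lights up a band of rows; the names and the threshold are placeholders:

```python
import numpy as np

def subframe_flash_time(frame_gray, readout_time_s):
    """Estimate when a short strobe flash occurred within a rolling-shutter
    frame, from the vertical position of the bright band it leaves.
    frame_gray: 2D array (rows x cols) of pixel intensities.
    readout_time_s: time to read the full sensor, top row to bottom row.
    Returns the flash time in seconds relative to the start of readout."""
    row_brightness = frame_gray.mean(axis=1)
    baseline = np.median(row_brightness)
    lit_rows = np.where(row_brightness > baseline * 1.5)[0]  # ad-hoc threshold
    if lit_rows.size == 0:
        return None   # no flash visible in this frame
    center_row = lit_rows.mean()
    return (center_row / frame_gray.shape[0]) * readout_time_s
```

Comparing this value for the same flash as seen by two cameras would then give their relative sub-frame offset.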

