1 (edited by Chas Tennis 2015-Jan-25 15:15:18)

Topic: Processing for Brightest or Darkest Pixels to Show Ball Path, etc.

I have found that pictures such as the following are very informative for athletic motions, especially for showing arms, balls, rackets, etc.  However, these pictures are produced by manually selecting portions of high-speed video frames and using Photoshop layering to assemble the composite.  I don't make them myself, so I can't say how much time is involved in producing a composite picture from a series of frames from a high-speed video.

Chas Tennis wrote:

These composite pictures of selected video frames are being posted lately in the Tennis Talk Forum by Anatoly Antipin ("Toly"). 

In a Tennis Talk forum reply he says that he uses Kinovea to select single frames and save them as JPEG images. He then uses Photoshop and PowerPoint, applying the Photoshop multiple-layers technique.  I don't produce these myself, so I can't provide much more information on the process.

These pictures are one of the best ways that I've seen for showing athletic motions. I'm noticing things that I never noticed before. In particular, the rare Fuzzy Yellow Balls videos of the server taken from above show some very interesting details.  Also, where the ball appears along its trajectory, the spacing between ball images together with the camera frame rate can provide timing. See the ball's trajectory and ball spacing in the tennis serve.

If anyone has samples of similar display methods, please post along with some of your techniques.


http://i1.ytimg.com/vi/QUwxiqFUi58/hqdefault.jpg

YouTube video from Anatoly Antipin showing a composite of selected video frames as part of the video.
http://www.youtube.com/watch?v=QUwxiqFUi58
Videos
http://www.youtube.com/channel/UCVtnV90bBCB50nkd8EQDFOQ


Some other composite pictures of video frames. 

http://i43.tinypic.com/2w3qibr.jpg
Stosur toss and impact location.


http://i46.tinypic.com/s3kmxx.jpg
Rare serves from above showing racket and hand movement.  See the FYB YouTube videos: http://www.youtube.com/watch?v=2FpeYGG9XAg and others filmed from above.


http://i47.tinypic.com/x5y53s.jpg
Serve showing the racket going from edge-on to the ball through impact, lasting ~0.02 second, and some of the follow-through.  This internal shoulder rotation is the rotation that contributes the most to racket head speed at impact.


http://i44.tinypic.com/jsdsgj.jpg
Composite picture of Roger Federer forehand.

I discussed the issue with someone who mentioned a video-processing technique that saves, for example, the brightest pixel occurring at each location in a series of video frames.  Suppose a tennis ball travels across the frame with a tennis court as the dark background.  The processing would detect the tennis ball as the brightest pixels, at a different location in each frame, save those pixels from every frame, and display all of the ball images in a later composite video or still.  If the frames were made into a composite picture, the trajectory of the ball would be shown, and the distance between the ball images would give an indication of time or velocity.  Tracking does not do that.  In theory this could produce results similar to the above composite pictures, but generated by the computer rather than manually.

The technique might also work by saving the lowest-brightness pixels from a series of frames.  Dark objects on light backgrounds would be the best candidates.
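The brightest-pixel and darkest-pixel ideas can be sketched in a few lines. This is only an illustration, assuming a fixed camera and grayscale frames stored as plain 2D lists (0 = black, 255 = white); the frame sizes and pixel values below are made up.

```python
def brightest_composite(frames):
    """Keep, for every pixel position, the brightest value seen in any frame."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for frame in frames:
        for y in range(h):
            for x in range(w):
                if frame[y][x] > out[y][x]:
                    out[y][x] = frame[y][x]
    return out

def darkest_composite(frames):
    """Same idea for dark objects on light backgrounds: keep the darkest value."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[255] * w for _ in range(h)]
    for frame in frames:
        for y in range(h):
            for x in range(w):
                if frame[y][x] < out[y][x]:
                    out[y][x] = frame[y][x]
    return out

# Synthetic example: a bright "ball" (value 200) crossing a dark court
# (value 20), one frame each with the ball at x = 0, 2, 4 in the middle row.
background = [[20] * 5 for _ in range(3)]
frames = []
for ball_x in (0, 2, 4):
    frame = [row[:] for row in background]
    frame[1][ball_x] = 200
    frames.append(frame)

trail = brightest_composite(frames)
# trail[1] == [200, 20, 200, 20, 200]: all three ball positions at once,
# and the spacing between them reflects the frame interval.
```

The same spacing-per-frame logic is what lets the composite indicate velocity: equal gaps mean constant speed at the given frame rate.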

This technique has been applied to some soccer footage, where the ball's trajectory appears in the composite.

https://www.youtube.com/watch?v=l7l5YKssHPg


http://fcl.uncc.edu/nhnguye1/balltracking.html

Also, since in athletics most objects move in one direction, saving or processing only the forward edge of each object might be informative and would allow more frames to be displayed.  This has been done in the composite pictures above by manual layering in Photoshop.  Transparency and color coding might be useful capabilities.

Often a video frame or still that combines just 3 frames, the current frame plus the object positions from the frames before and after (color-coded? transparent?), can be very informative. If the objects from the frames before and after were always displayed as well, important observations would be more apparent. With normal stop-action single-frame viewing you have to remember the positions instead of seeing them.
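One way to sketch the three-frame color-coding idea: map the previous frame into the red channel, the current frame into green, and the next frame into blue, so each ghost gets its own color. This is a toy illustration with made-up grayscale frames, not a full video pipeline.

```python
def three_frame_overlay(prev_frame, cur_frame, next_frame):
    """Color-code three consecutive grayscale frames into one RGB image:
    previous -> red, current -> green, next -> blue.  Where the scene has
    not changed, the channels overlap and mix toward gray/white."""
    h, w = len(cur_frame), len(cur_frame[0])
    return [[(prev_frame[y][x], cur_frame[y][x], next_frame[y][x])
             for x in range(w)] for y in range(h)]

# Made-up frames: a bright dot (255) moving one pixel per frame along a row.
prev_f = [[255, 0, 0]]
cur_f  = [[0, 255, 0]]
next_f = [[0, 0, 255]]

rgb = three_frame_overlay(prev_f, cur_f, next_f)
# rgb[0] == [(255, 0, 0), (0, 255, 0), (0, 0, 255)]:
# red ghost = where the dot was, green = where it is, blue = where it goes.
```

A transparent-blend variant would average the three frames instead of splitting them into channels, which is closer to Kinovea's superposition described later in the thread.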

If anyone has related links or experience in this topic, please post.

The main idea would be to produce informative video or still composite displays like those above but without the time required by the manual processing.

Re: Processing for Brightest or Darkest Pixels to Show Ball Path, etc.

It's actually relatively easy to do for fixed cameras. A basic approach is the following: average all the pixels across all the frames, which gives you the naked background. Then for each frame (or every second or third frame, or whatever), compare each pixel of the frame against the average background. If the pixel is different, it must belong to the moving subject, so you copy it into the final image.

This naïve approach can be improved in many ways to remove noise, ghosting, etc.
I probably mentioned it elsewhere, but this was a feature of the ancestor of Kinovea back in 2005. I know that doesn't help in the least, sorry. I still want to work on this though. Maybe now that we can get good-quality ultra-wide-angle lenses on the cheap, the need to implement it for moving cameras is less pressing (making it work for moving cameras has been the blocking point).

3 (edited by Chas Tennis 2015-Mar-01 00:06:55)

Re: Processing for Brightest or Darkest Pixels to Show Ball Path, etc.

In tennis it is often said that a very effective drill for serving is throwing.  I wanted to find the basis of this claim.  I believed the basis is that the drill includes internal shoulder rotation, a rotation of the upper arm.  I was watching an instructional video on the throw as a drill for the serve and decided to step frame-by-frame through the thrower and the server.

http://images2.snapfish.com/232323232%7Ffp83232%3Euqcshlukaxroqdfv%3B%3C%3B6%3Dot%3E83%3A6%3D44%3A%3D348%3DXROQDF%3E2823%3A%3A48%3B%3B257ot1lsi

By luck, this frame appears to be from interlaced video: the double images (one from each video field) are 16 milliseconds apart.  The upper arm is rotating the forearm to generate hand speed. The effect shows up especially well because the upper arm rotates the forearm faster than the other body parts are moving.

For the serve, the upper arm is rotating the racket, held at an angle to the forearm, for racket head speed.
http://images2.snapfish.com/232323232%7Ffp83232%3Euqcshlukaxroqdfv6%3B%3A7%3Dot%3E83%3A6%3D44%3A%3D348%3DXROQDF%3E2854%3A32643257ot1lsi

I was impressed at how informative a double image could be. 

I'd like to try looking at videos where, say, the frame before, the current frame, and the frame after all appear together.

For the moving-camera issue: if the camera is hand-held and pointing in slightly different directions, and if something fixed in the background can be identified, it might be used to register the frames together.  That approach depends on a small pointing-angle variation, not what you would get walking along holding a camera.
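The registration idea above could be sketched like this: locate a distinctive, stationary background feature in every frame and shift each frame so that feature lines up with its position in a reference frame. This is a toy sketch with made-up frames and a single marker pixel standing in for the background feature; real footage would need robust feature matching and sub-pixel alignment.

```python
def find_marker(frame, marker=255):
    """Return (y, x) of the first pixel equal to the marker value,
    a stand-in for locating a fixed background feature."""
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v == marker:
                return y, x
    raise ValueError("marker not found")

def align_to(frame, ref_pos, marker=255, fill=0):
    """Shift the frame (integer translation) so its marker lands on
    ref_pos; pixels shifted in from outside the frame are filled."""
    my, mx = find_marker(frame, marker)
    dy, dx = ref_pos[0] - my, ref_pos[1] - mx
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = frame[y][x]
    return out

# Two frames of the same scene; the camera drifted one pixel right, so the
# fixed marker (255) appears one pixel further left in the second frame.
frame_a = [[0, 0, 255, 0],
           [0, 7, 0, 0]]
frame_b = [[0, 255, 0, 0],
           [7, 0, 0, 0]]

ref = find_marker(frame_a)           # marker at (0, 2) in the reference
aligned_b = align_to(frame_b, ref)   # frame_b shifted back into register
```

Once the frames are registered this way, the brightest-pixel or background-difference compositing could be applied as if the camera had been fixed.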

Re: Processing for Brightest or Darkest Pixels to Show Ball Path, etc.

Ghosting for a single frame like you have here might be doable in Kinovea by
- loading the same video twice in dual mode,
- synchronizing with a 1-frame delay,
- enabling image superposition.

Here is an example (deinterlaced 25 fps):
http://www.kinovea.org/screencaps/0.8.x/syncghost.jpg

I have marked the relevant buttons in the capture.
With interlaced video this gives 4 visible fields, but the blending at 50% makes it hard to see details.