1

Would it be possible to tweak the program so that two videos with different frame rates (24, 25, 30 or 60 fps) could be synchronized? This would make it possible to compare, for example, sports clips downloaded from YouTube and the like.
If the sync could be done on more than one frame, then a movement sequence of two athletes moving at different speeds could be compared?
Just an idea...

2

Hi,
Normally the synchronization is already frame-rate independent.
Let me know if you find otherwise, that'd be a defect.

Expected behavior: you set the sync point on an event in each video, even with differing frame rates; then when hitting Play both videos are played at their respective speeds, and they still pass the sync point together.

You can also slow down one video but not the other, etc. They should still get to the sync point at the same moment.
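
In rough code, the idea is something like this (a sketch with hypothetical names, not the actual implementation):

```csharp
using System;

// Hypothetical sketch (not the actual Kinovea code): each video knows its own
// frame rate and the frame index of its sync event. A position on the common
// timeline (zero at the sync point) maps to a frame in each video, so both
// pass their sync frame at the same moment whatever their frame rates.
public class SyncedVideo
{
    public double FramesPerSecond;   // e.g. 25 or 30
    public int SyncFrame;            // frame index of the sync event in this video

    // commonTime is in seconds, 0 at the sync point, negative before it.
    public int FrameAt(double commonTime)
    {
        return SyncFrame + (int)Math.Round(commonTime * FramesPerSecond);
    }
}
```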

3

Yes. The sync works when the videos are played, but moving forwards and backwards with the arrows from a synchronized point goes "one frame at a time" regardless of the different frame rates? I am taking snapshots of the technique and not only watching the synchronized films...
Great piece of software anyway wink

4

Ah yes, frame by frame and play have different behavior. When doing frame by frame, it seemed more sensible to actually move frame by frame in each video.
But I can see how it makes little sense if the two videos have differing frame rates.
Maybe both could be aligned on the faster of the two frame rates. If we have video A @ 30 fps and video B @ 15 fps, going to the next frame would move A by 1 frame and B by 0.5 frame (staying on the current frame).

We can keep a counter to detect when the slowest video should advance one frame.
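A sketch of that counter idea (hypothetical names, not actual code): the faster video always moves one frame, the slower one accumulates fractions of a frame and only advances when a whole frame is due.

```csharp
// Hypothetical sketch: frame-by-frame stepping over two videos with different
// frame rates, e.g. A @ 30 fps (fast) and B @ 15 fps (slow).
public class FrameStepper
{
    private readonly double ratio;   // slow fps / fast fps, e.g. 15.0 / 30.0 = 0.5
    private double accumulator;

    public int FastFrame { get; private set; }
    public int SlowFrame { get; private set; }

    public FrameStepper(double fastFps, double slowFps)
    {
        ratio = slowFps / fastFps;
    }

    public void StepForward()
    {
        FastFrame++;              // the faster video always advances one frame
        accumulator += ratio;     // the slower one accumulates a fraction of a frame
        if (accumulator >= 1.0)
        {
            SlowFrame++;          // ...and moves only when a whole frame is due
            accumulator -= 1.0;
        }
    }
}
```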
I'll add it to the suggestion queue smile

5

...and the function would become fantastic if you could synchronize two videos of the same performance by two athletes doing it at different speeds (top athlete versus "amateur"). Example: the first athlete does the same sequence (e.g. from "knee lift" to "heel down") in 0.75 seconds (45 frames at 60 fps) and the second athlete does it in 1.08 seconds (27 frames at 25 fps). I still want to compare them closely by synchronizing the beginning AND the end of the sequence... :)

6

Oh, I see. You want to compare the body positions without the differences in motion dynamics being an issue.
Currently we can only analyze body organisation together with gesture speed, and it would be nice to decorrelate the two, that is, to normalize the speed.

I'm a bit scared of multiple-point sync due to the complexity it may add to the user interface (it adds the issue of removing sync points) and to handling dynamic sync and frame-by-frame sync.
I'd hope to find a simpler way to achieve this gesture speed normalization if possible…
(It could be done manually by trial and error, playing with the slow motion of one of the videos, but that's not a very friendly solution.)
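
For that manual workaround, the slow-motion setting would simply be the ratio of the two durations; a quick sketch with the numbers from the previous post:

```csharp
using System;

// Sketch of the manual workaround, using the example numbers above.
double durationA = 45.0 / 60.0;   // athlete A: 45 frames at 60 fps = 0.75 s
double durationB = 27.0 / 25.0;   // athlete B: 27 frames at 25 fps = 1.08 s

// Playing video A at about 69% speed stretches its sequence to match B's duration.
double slowMotionA = durationA / durationB;
Console.WriteLine("Set video A playback speed to {0:P0}", slowMotionA);
```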

7

Hello Joan. I found an interesting program on the web that can be used to stretch a video clip to a certain frame rate or duration by interpolating adjacent frames, thus smoothing a rough clip. The program is called MotionPerfect. Could something like this do the synchronization of movement?

8

Ah yes, MotionPerfect slow motion; I've always been very impressed by their demo video.
On the suggestion / todo-list, I have referred to this as "Smooth slow motion".

This type of extreme slow motion, reconstructing missing frames from the content of adjacent frames, would definitely be a killer feature and quite a challenge!
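
To give an idea of the gap: the naive version is easy, a simple blend of two adjacent frames (sketch below, plain GDI+ with made-up names), but it only produces a cross-fade with ghosting on anything that moves. The real difficulty is estimating the motion so the in-between frame shows intermediate positions.

```csharp
using System.Drawing;

// Very naive sketch, NOT what MotionPerfect or Twixtor do: build an in-between
// frame by linearly blending two adjacent frames. Moving objects come out as a
// ghosted double image; proper smooth slow motion estimates the motion instead.
public static class FrameBlender
{
    // t = 0 returns frameA, t = 1 returns frameB, t = 0.5 is halfway in between.
    public static Bitmap Blend(Bitmap frameA, Bitmap frameB, double t)
    {
        Bitmap result = new Bitmap(frameA.Width, frameA.Height);
        for (int y = 0; y < frameA.Height; y++)
        {
            for (int x = 0; x < frameA.Width; x++)
            {
                Color a = frameA.GetPixel(x, y);
                Color b = frameB.GetPixel(x, y);
                result.SetPixel(x, y, Color.FromArgb(
                    (int)(a.R + (b.R - a.R) * t),
                    (int)(a.G + (b.G - a.G) * t),
                    (int)(a.B + (b.B - a.B) * t)));
            }
        }
        return result;
    }
}
```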

As far as I can tell this is extremely tricky to implement.
Also in the same vein is the Re:vision Twixtor set of plug-ins¹. Some say it's the best in the industry at this type of stuff.

I'm not too confident that I can code this all by myself to a usable result though hmm I have a few leads and reference papers, but it's quite a long shot. Very interesting though.

¹ Some Twixtor videos on vimeo.

9

The killer feature would give us a "poor man's" high-speed camera and unlimited comparison options...! But still - your KINOVEA is the best smile!

10

Is the display drawn on a Direct2D/3D surface? I guess this would have great speed benefits and could help with doing slow motion.

11

Phalanger wrote:

Is the display drawn on a Direct2D/3D surface? I guess this would have great speed benefits and could help with doing slow motion.

No, the image is drawn on a regular GDI+ surface. Indeed, the slowness of this is causing issues with full HD footage sad
I have tried several performance tweaks, including native BitBlt, but nothing has been particularly helpful.

I have no experience with Direct2D though, maybe it could help make the rendering faster…
If you want to try something, the painting is done in /ScreenManager/PlayerScreen/UserInterface/PlayerScreenUserInterface2.cs in method SurfaceScreen_Paint and FlushOnGraphics (there are some comments at the top of this one).
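
For reference, a GDI+ paint handler with the usual speed-oriented settings looks roughly like this (made-up class and variable names, not the actual Kinovea code):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;

// Illustration only: the class and its fields are hypothetical.
public class PlayerSurface : Control
{
    private Bitmap currentFrame;            // the decoded frame to display
    private Rectangle destinationRectangle; // where it goes on the control

    private void SurfaceScreen_Paint(object sender, PaintEventArgs e)
    {
        Graphics g = e.Graphics;
        g.CompositingMode = CompositingMode.SourceCopy;          // skip alpha blending
        g.InterpolationMode = InterpolationMode.NearestNeighbor; // cheapest resize filter
        g.PixelOffsetMode = PixelOffsetMode.Half;
        g.DrawImage(currentFrame, destinationRectangle);
    }
}
```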

12

Is it acceptable to use third party components like SlimDX (http://slimdx.mdxinfo.com)?  Or would pure Direct2D be better?

Also, does Kinovea support DirectShow input? I think it would be better if ffmpeg was simply a fallback, as many DirectShow codecs are more powerful (using hardware decoding).

13

Phalanger wrote:

Is it acceptable to use third party components like SlimDX (http://slimdx.mdxinfo.com)?  Or would pure Direct2D be better?

Third party components are fine as long as they are open source. SlimDX is under the MIT license so it's perfectly fine.
I was actually considering using this for audio support but haven't had time to really dig into it.

Phalanger wrote:

Also, does Kinovea support DirectShow input? I think it would be better if ffmpeg was simply a fallback, as many DirectShow codecs are more powerful (using hardware decoding).

That is more critical since the whole program is sort of built around the ffmpeg time representation coming from the decoder. (I don't know how DX works, maybe there is common ground.)

Also, I'm under the impression that the FFMpeg library is pretty well optimised; from my experiments the issue was more with the rendering than with the decoding.
Having better instrumentation of the performance would be a plus to understand where the bottleneck really is. Maybe the bottleneck is not the same for videos with very large image sizes as for very fast-paced ones, for example.
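
Something as simple as the sketch below would already tell us a lot (the decode and render calls are just placeholders, not actual Kinovea functions):

```csharp
using System.Diagnostics;

// Sketch: time the decode and the render of each frame separately
// to see which one dominates.
Stopwatch decodeWatch = Stopwatch.StartNew();
// DecodeNextFrame();      // placeholder for the ffmpeg decode call
decodeWatch.Stop();

Stopwatch renderWatch = Stopwatch.StartNew();
// RenderCurrentFrame();   // placeholder for the GDI+ blit
renderWatch.Stop();

Trace.WriteLine(string.Format("decode: {0} ms, render: {1} ms",
    decodeWatch.ElapsedMilliseconds, renderWatch.ElapsedMilliseconds));
```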

And of course FFMpeg already handles almost every input whereas DirectShow will have to rely on installed codecs. If we can mix the two and use DX where it is known to outperform FFMpeg, why not.
I'd also like to keep the dependencies on Windows-specific stuff to a minimum in the event someone is motivated to try a port to Mono…

14

Is FFMpeg not able to use DirectShow? I thought it had this...

15

I'm fairly sure it cannot use the DirectShow codecs. It's not in the philosophy of the ffmpeg project and they are just two very different beasts.

I think software like MPlayer may use a higher abstraction and mix the two to increase format coverage (Microsoft-specific formats are sometimes not supported in ffmpeg), but it sounds troublesome. (I don't know if it's worth it performance-wise…)
There's probably plenty of room for improvement in the rendering stack.