1,426

(24 replies, posted in General)

I've started to read about Direct2D and I must say it's pretty exciting. smile
Moving the rendering to the graphics card will not only increase performance through hardware acceleration, it will also free the CPU for other computations.
Stuff like tracking multiple points might be able to run in real time, that would be awesome.

The drawback is portability to other OSes (although I've seen something about X support…) and to older Windows versions. (Which might mean having to maintain/release two versions, but that will be a good excuse to finally try some of the new features of .NET 3.5, like the graphics tablet API.)

Anyway, yes, it would be good to use this opportunity to refactor the player screen and decoding/rendering flow.
I have added a diagram of the current architecture here to help.

I like the idea of buffering the incoming images. It won't be necessary when images are already extracted to memory, but when decoding directly from file it will help smooth out the decoding time.
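The buffering idea can be sketched as a small producer/consumer setup (a minimal illustration only; the names and the queue size are hypothetical, not Kinovea's actual classes):

```python
import queue
import threading

def decode_frames(decode, frame_count, buffer):
    """Producer: decodes ahead of playback; decode time per frame may vary."""
    for i in range(frame_count):
        buffer.put(decode(i))  # blocks when the buffer is full

def play(buffer, frame_count):
    """Consumer: pulls frames at a steady pace, shielded from decode jitter."""
    return [buffer.get() for _ in range(frame_count)]

# A small bounded buffer: a few frames of read-ahead absorbs the
# occasional slow decode without holding much memory.
buf = queue.Queue(maxsize=8)
producer = threading.Thread(target=decode_frames,
                            args=(lambda i: "frame%d" % i, 20, buf))
producer.start()
frames = play(buf, 20)
producer.join()
```

The bounded queue is the key point: the decoder can run ahead during easy frames and the renderer never stalls on a single slow one.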

On the other side of the flow, when outputting to screen, the ideal approach is, as you say, independent modules that take the image and data and draw them on their surface, be it GDI+, Direct2D or whatever.
(An "output modules" architecture would make more sense when we want to add back support for document rendering; we would just have to implement a PDF or ODF renderer.)

Passing the list of items to the renderer would completely change the approach for drawing though… As of now, each drawing has a Draw method, and is responsible for drawing itself on the GDI+ surface according to its own internal data. It's convenient and good encapsulation.

Abstracting the output would mean completely redesigning this aspect. It needs to be thought through carefully.
For instance, the exact same flow is used when saving a snapshot or a video: we just pass a new Bitmap to draw on instead of the control's Graphics.

One other way would be to abstract the renderers and pass the concrete selected renderer to the Draw method of each object.
The objects would draw themselves using universal primitives, and those would be translated to actual methods in the concrete renderer.
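That pattern might look something like the sketch below (Python for illustration; all class and method names are hypothetical, and a trivial recording backend stands in for a real GDI+ or Direct2D surface):

```python
class Renderer:
    """Abstract surface: each concrete backend (GDI+, Direct2D, PDF...)
    would translate these universal primitives into its own calls."""
    def draw_line(self, p1, p2):
        raise NotImplementedError
    def draw_text(self, pos, text):
        raise NotImplementedError

class LogRenderer(Renderer):
    """A trivial backend that just records the primitives it receives."""
    def __init__(self):
        self.calls = []
    def draw_line(self, p1, p2):
        self.calls.append(("line", p1, p2))
    def draw_text(self, pos, text):
        self.calls.append(("text", pos, text))

class Arrow:
    """A drawing keeps its own data and renders itself through
    whichever backend it is handed, preserving the encapsulation."""
    def __init__(self, start, end, label):
        self.start, self.end, self.label = start, end, label
    def draw(self, renderer):
        renderer.draw_line(self.start, self.end)
        renderer.draw_text(self.start, self.label)

surface = LogRenderer()
Arrow((0, 0), (10, 10), "A").draw(surface)
```

The drawings keep their Draw method and internal data; only the surface they talk to becomes swappable.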

What do you think ?

tirosh wrote:

3) Where can i download the experimental versions?

Sticky thread at the top of this section. "Experimental version - XXX"

Hi,

First of all, I will use this opportunity to say that the idea initially discussed at the top of this thread has been implemented in the latest experimental versions.

So when you have a video configured as 210 fps, the speed slider will actually not go from 0 to 200%, but from 0 to 28.5% (if the video file is crafted as 30 fps).
And the speed percentage will refer to the speed relative to the actual action.
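The arithmetic behind those numbers can be sketched as follows (an illustration only, not Kinovea code; the function name is made up):

```python
def slider_max(capture_fps, file_fps, max_file_speed=200.0):
    """Re-express the old slider ceiling (200% of file speed)
    as a percentage of the real action speed."""
    slowdown = file_fps / capture_fps   # e.g. 30 / 210 ~= 0.1428
    return max_file_speed * slowdown

ceiling = slider_max(210, 30)           # ~= 28.57%, the 28.5% figure above
```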

So your issue becomes "how do I get back to 100%?".

Unfortunately, it's not simple. It was also asked a while back on the French part of the forum, and the issue I found then is that all the frames making up the slow motion have to be decoded anyway. We can't just skip 9 frames out of 10 to go 10 times faster. We have to decode everything (to reconstruct a given image, information from the previous ones is needed).

In this case this means decoding 210 images per second, even if we only render some of them to recreate the illusion of normal speed.
It will be extremely hard to keep the pace. For videos shot at 300, 600, 1200 fps or more, it will just be impossible.

Currently I don't see a real-time solution to this. The only thing we could do is your point 2: create a new video and decimate frames along the way.
In that case we could recreate the original motion. It would be like creating a timelapse of a slow motion video smile
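Decimation itself is cheap once the frames are decoded; a minimal sketch of the idea (illustrative names, fixed integer ratio assumed):

```python
def decimate(decoded_frames, capture_fps, target_fps):
    """Every frame must still be decoded (inter-frame coding),
    but only one out of `step` is written to the new file."""
    step = capture_fps // target_fps        # e.g. 210 // 30 = 7
    return decoded_frames[::step]

kept = decimate(list(range(21)), 210, 30)   # keeps frames 0, 7, 14
```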

1,429

(24 replies, posted in General)

Phalanger wrote:

Ok, rendering on an external form seems good, with smooth playback with HD videos which before were issues.  However it does jump now and then (it's not clean the code as it was a hack around).

It sounds super promising! big_smile
It would be nice to have some instrumentation on this, monitoring how many ms are spent on the rendering when using DX and when using GDI+.
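Something as simple as averaging the wall-clock time per paint call would already tell us a lot. A rough sketch (the real code would time the actual GDI+/DX paint path; the workload here is a placeholder):

```python
import time

def average_render_ms(render, frame, samples=50):
    """Average wall-clock milliseconds per call to `render`."""
    start = time.perf_counter()
    for _ in range(samples):
        render(frame)
    return (time.perf_counter() - start) * 1000.0 / samples

# Placeholder workload standing in for a real backend's paint call.
avg_ms = average_render_ms(lambda f: sum(f), list(range(1000)))
```

Running the same probe against both backends on the same video would show where the milliseconds actually go.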

Phalanger wrote:

One thing which I think must be changed is the use of the invoking the paint method on the picture box.  This is not a fast method.  It would be much better to call this function directly (and maybe from an external loop).  Going through the operating system/control seems like a possible performance issue.

Yes, you're probably right. I don't quite remember the rationale for always going through the system's paint event. It's possible that it was simply the most convenient way to get a reference to the Graphics object we are painting on.
The timer itself runs in another thread, so asking for the next frame is done asynchronously and I don't think that can change. However, once we have the frame in question, rendering it on screen can (should) be triggered right away. As you point out, there's no need to go through the Windows message queue; that can only slow things down.

Phalanger wrote:

I have a feeling the only way it will really work well (like high performance is rewriting this control).  That way frames are extracted straight into the working canvas memory, and drawn from there.  Other drawings on top are added to a list, which adds them to the rendering line to place on top of the image as it is renders.  Maybe when you have more than one video, they should be rendered on the same surface (even if they have separate controls).

Does this sound right?

I'm not sure I followed everything; if I understood correctly, you suggest extracting the frames from FFMpeg directly onto the canvas / graphics card memory, instead of using an intermediate Bitmap object. (?)

I don't know how this would work with the working zone concept. Currently the user may apply some image filters like "Auto Contrast" or "Sharpen" for example. These filters are applied on pre-extracted copies of the frames, to speed things up and add flexibility (combining filters, etc.).
I'm not sure we would be able to make the whole thing fast enough to allow for these filters to be applied in real time and still play smoothly…

The working zone and its pre-extraction of frames helps in many other regards too. For example, you can jump straight to any frame without having to decode all the ones in between (this wouldn't be quite possible without pre-extraction: we would have to seek to the closest keyframe, in the video coding sense, and then start decoding from there).
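To illustrate the cost of random access without pre-extraction (a back-of-the-envelope sketch assuming a fixed keyframe interval, which real files don't guarantee):

```python
def frames_to_decode(target, keyframe_interval):
    """Without pre-extraction, jumping to `target` means seeking back to
    the nearest previous keyframe and decoding forward from there."""
    keyframe = (target // keyframe_interval) * keyframe_interval
    return target - keyframe + 1

# With a keyframe every 100 frames, jumping to frame 457 decodes 58 frames;
# with pre-extracted frames it is a single lookup in memory.
cost = frames_to_decode(457, 100)
```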
Also, some effects will always need to have a list of frames in memory anyway, e.g. the "overview" and "reverse" functions, and in the future the more complex special effects like chronophotography, etc. So far this model has proven very convenient.

1,430

(6 replies, posted in Français)

OK, can we dig into this point a bit? I admit I don't see precisely what an extra column would bring.

When switching the time display to "frame numbers" or "total in milliseconds", the current column C will be filled with numeric values. Suppose we add an extra column: what type of information would we put in it? Is the goal to have the time both in the classic format and in numeric form at the same time?
Thanks

1,431

(6 replies, posted in General)

I'm not sure I see the point of blocking access to a program while at the same time letting everyone know that it is freely available on the Internet.
Will there be anything other than Kinovea in the restricting container?

Also, the enclosed instance of Kinovea would have to retain the right to be copied, redistributed, etc. by users: will that still be true? Will the enclosed Kinovea be independent or just part of a larger whole? Can a user get access to the Kinovea binaries individually? If not, they are not independent programs…
I would understand this model for a launcher or an installer, with the distributed program still available individually for anyone to repackage, but I can't see how it could work, technically, for a blocking wrapper.

As the developer, would this be okay with you?

Thanks for asking.
The project strives for maximum openness and availability.
Note that I'm not the only copyright holder; there are more than 15 translators, and of course all the included libraries with their own licenses.

Also note that you can charge for the program itself, without wrapping it up; this is perfectly allowed by the license.

edit:
I want to state that I like the idea of creating a service-based business around Kinovea. Providing your expertise, knowledge, teaching skills, etc.

1,432

(6 replies, posted in General)

Hi,
I can't really offer legal advice, but here goes:
- Derivative works have to be under the GPLv2.
- A wrapper constitutes a derivative work.
From the license: A derivative work is "a work containing the Program or a portion of it, either verbatim or with modifications (…)".

If one could simply create a proprietary wrapper around GPL software, the whole notion of copyleft would vanish.
From the GPL FAQ: I'd like to incorporate GPL-covered software in my proprietary system. Can I do this?

Maybe you need to define more precisely what you mean by wrapper. If it could be used with any other program, then they are effectively two different programs, and the answer is different.

1,433

(24 replies, posted in General)

I'm fairly sure it cannot use the DirectShow codecs. It's not in the philosophy of the FFMpeg project, and they are just two very different beasts.

I think software like MPlayer may use a higher abstraction and mix the two to increase format coverage (Microsoft-specific formats are sometimes not supported in FFMpeg), but it sounds troublesome. (I don't know if it's worth it performance-wise…)
There's probably plenty of room for improvement in the rendering stack.

1,434

(24 replies, posted in General)

Phalanger wrote:

Is it acceptable to use third party components like SlimDX (http://slimdx.mdxinfo.com)?  Or would pure Direct2D be better?

Third party components are fine as long as they are open source. SlimDX is under the MIT license, so it's perfectly fine.
I was actually considering using this for audio support but haven't had time to really dig into it.

Phalanger wrote:

Also does Kinovea support DirectShow input? I think it would be better if ffmpeg was simply a fallback, as many DirectShow codecs are more powerful (using hardware decoding).

That one is more critical, since the whole program is sort of built around the FFMpeg time representation coming from the decoder. (I don't know how DirectShow works; maybe there is common ground.)

Also, I'm under the impression that the FFMpeg library is pretty well optimized; from my experiments the issue was more with the rendering than with the decoding.
Better instrumentation of the performance would be a plus to understand where the bottleneck really is. Maybe the bottleneck is not the same for videos with very large images as for very fast-paced ones, for example.

And of course FFMpeg already handles almost every input, whereas DirectShow will have to rely on installed codecs. If we can mix the two, and use DirectShow where it is known to outperform FFMpeg, why not.
I'd also like to keep the dependencies on Windows-specific stuff to a minimum, in case someone is motivated to try a port to Mono…

Apparently it was the 3.5 framework that misbehaved during install and changed the config file inappropriately.

There seem to be others with the same issue, a bug is open at Microsoft but they can't reproduce: Microsoft Connect.

Some other threads about the same issue:
http://stackoverflow.com/questions/1292 … sexception
http://community.sharpdevelop.net/forum … 24166.aspx

Apparently the workaround is to repair the config file with ServiceModelReg.exe (I didn't quite get what the suggested technique was…), or to remove the serviceModel block in the machine.config file, or to replace it with a working one. (Back up first!)

There doesn't seem to be any way for a program to detect this and repair it automatically.
If I understood correctly, this issue prevents any .NET 2.0 application from working.
I'm not even sure that removing the .NET framework and reinstalling it would make the problem go away.
sad

That is because VLC (and similarly Kinovea, MPlayer and other tools) does not use codecs at all.

We use FFMpeg's libavcodec library, which itself implements decoding and encoding of the files. The library code is tightly integrated into the software; other software cannot use it. There is nothing Kinovea installs that could be used by Windows Media Player; they just work totally differently with regard to decoding files.

The FFDShow codec tries to gather the best of both worlds: it turns the libavcodec library into an installable Windows codec, so with one single piece of software you have the whole FFMpeg suite of decoding/encoding tools available to those codec-based players.

1,437

(1 replies, posted in Français)

This is an experimental version: it needs your feedback to improve!

The installer is available here: [s]Kinovea.Setup.0.8.11.exe[/s] See the thread for version 0.8.12

The main new feature this month is the delay on live in the capture screen.

Delay on live
You can set a number of seconds of delay for the display of the video stream.

Note that the delay can be configured on the fly; it does not stop the video stream. For example, it is possible to:
- Watch a scene live, then increase the delay to watch it again.
- Pause the capture, then play with the delay slider to browse through recently captured events.

The maximum delay you can reach depends on the memory configuration in the Preferences, on the Play/Analyze page (until capture gets its own settings page), and on the image size of the video source.

Fixes: bug 223.
Also, the flickering problem while recording on the capture screen should be fixed.
However, the recording feature for the captured stream is not ready for serious work yet; the time scale of the created videos is wrong most of the time.

Snapshot: capture and delay. When the buffer is 100% full, the display can be delayed all the way to the right.
http://www.kinovea.org/screencaps/0.8.x/capture.png

1,438

(4 replies, posted in General)

Thanks for your interest smile
You should be able to switch to Spanish already. See menu Options > Language > Español.

1,439

(11 replies, posted in General)

This is an experimental version: it needs your feedback to improve.

The installer is available here: [s]Kinovea.0.8.11[/s] Check the 0.8.12 thread

Delay on live
The single highlight this month is the delayed live feature in the capture screen.
You can set a number of seconds to delay the display of the live stream.

Note that the delay can be changed at any time, it doesn't disrupt the video flow. For example you could:
- Watch a scene live, then increase the delay by a few seconds to watch it again.
- Pause the capture, then play with the delay to browse the recent events captured.

The amount of time you can delay depends on the memory configuration in the Preferences, on the Play/Analyze page (until capture gets its own settings page), and on the image size of the video source.
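Roughly, the reachable delay is the number of frames that fit in the buffer divided by the frame rate. A back-of-the-envelope sketch (the buffer layout and RGB frame format are assumptions, not the actual implementation):

```python
def max_delay_seconds(memory_mb, width, height, fps, bytes_per_pixel=3):
    """How many seconds of video fit in the capture buffer."""
    frame_bytes = width * height * bytes_per_pixel
    frames_that_fit = (memory_mb * 1024 * 1024) // frame_bytes
    return frames_that_fit / fps

# e.g. a 256 MB buffer holding 640x480 RGB frames at 30 fps
delay = max_delay_seconds(256, 640, 480, 30)
```

This is why larger source images shrink the maximum delay for a given memory setting.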
Please experiment with it and report what you think smile.

Fixed bugs : 223.
Also, there shouldn't be any more flickering when recording during capture.
However, the recording feature is not ready for serious use yet; the time scale of generated videos will usually be wrong.

Snapshot: capture and delay. When the buffer is filled 100%, you can delay all the way to the right.
http://www.kinovea.org/screencaps/0.8.x/capture.png

1,440

(6 replies, posted in Français)

lof123 wrote:

1- It would therefore be interesting to have these two tables side by side, separated by one or more columns, rather than one above the other.

I'll try to look into how to do that.

lof123 wrote:

2- Also, if possible, it would be very useful to have an extra column next to the "Track" table showing the various key images at time t of column C. For example, if there is a key image at frame 1000, then in the track table the number 1000, or some other marker, would be written in an extra column next to time 1000 of column C.

Hmm, for now, as long as it's possible, I'd like to avoid making the user go through an extra configuration window. So we would have to find a solution acceptable to everyone.
For example: if the chosen format is textual (classic, or classic + frame numbers), then we add an extra column with the frame numbers; otherwise we do nothing.

lof123 wrote:

3- Finally, one last question: is it possible to display the video in full screen when doing tracking?

Not at the moment, but it should be possible to zoom in (CTRL + mouse wheel).