Hello,

Milmou wrote:

After doing the trajectory tracking and the speed calculation, I can't enlarge the speed indicators. They appear tiny on screen…

Hmm, I think this is a bug; I will file a bug report about it.
From what I can tell, the problem occurs when the image is larger than the space available in the playback screen.
The image is then scaled down, and the speed indicators are shrunk proportionally, without any lower limit on their size.
(edit: bug 230)
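For reference, the fix would presumably amount to clamping the scale factor applied to the indicators. A rough Python sketch of the idea only; the function name and minimum value are hypothetical, not Kinovea's actual code:

```python
def indicator_scale(image_scale, min_scale=0.5):
    """Return the scale to apply to speed indicators.

    image_scale: factor used to shrink the image to fit the playback screen.
    min_scale: hypothetical lower bound so the labels stay legible; the
    actual fix in Kinovea may use a different value or mechanism.
    """
    return max(image_scale, min_scale)
```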

Milmou wrote:

Moreover, after setting the speed unit to m/s, it still appears on screen in px/f…
How can I change this?

To get measurements in meters, you must "calibrate" the image. Add a line somewhere over an object of known size and enter its size manually. Otherwise, the software has no way to map pixels to a real-world unit.
See the help topic "Using Kinovea > Measuring > Measuring speeds".
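To illustrate the idea, here is a minimal Python sketch of what calibration computes (hypothetical helper names, not Kinovea's code; it assumes the motion plane is parallel to the image plane):

```python
def meters_per_pixel(reference_px, reference_m):
    # Calibration: a line drawn over an object of known real size gives
    # the pixel-to-meter ratio for the whole (assumed parallel) plane.
    return reference_m / reference_px

def speed_m_per_s(displacement_px, elapsed_frames, fps, m_per_px):
    # Convert a pixel displacement over a number of frames into m/s.
    return displacement_px * m_per_px / (elapsed_frames / fps)
```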


(2 replies, posted in Français)

Hello,
This is one of those issues that requires fairly heavy optimizations and a reorganization of the architecture.
The problem is known; the longer a format takes to decode, the more critical it becomes (e.g. full HD in H.264).

FAQ wrote:

Why does my video's playback speed decrease on its own?
This happens when Kinovea detects that it cannot extract frames from the video fast enough to sustain the requested rate.
The speed then stabilizes at a sustainable threshold, but it is advisable to lower it a bit further manually to get smooth playback again.
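The throttling described in the FAQ boils down to capping the playback speed at what the decoder can sustain. A minimal sketch of that logic (hypothetical names, not the actual implementation):

```python
def stabilized_speed(requested_pct, frame_interval_ms, decode_ms):
    """Cap the playback speed at a sustainable percentage.

    frame_interval_ms: time between frames at 100% speed.
    decode_ms: measured time to decode one frame.
    A frame must be decoded within the interval it is shown for, so the
    sustainable speed is proportional to frame_interval_ms / decode_ms.
    """
    sustainable_pct = 100.0 * frame_interval_ms / decode_ms
    return min(requested_pct, sustainable_pct)
```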


(1 reply, posted in General)

You might be able to do this by launching Kinovea from the command line and passing arguments.
Unfortunately, I don't think this is documented anywhere, not even in the manual, although the feature has been there for a while now.

To launch a file directly at 75% speed, you can do this at the command line:

> kinovea.exe -file test.mpg -speed 75

Here is the missing doc:
Usage:
kinovea.exe [-file <path>] [-speed <0-200>] [-noexp] [-stretch]

-file: complete path of a video to launch; default: 'unknown'
-speed: percentage of original speed to play the video; default: 100.
-stretch: The video will be expanded to the screen size; default: false.
-noexp: The file explorer will not be visible; default: false.

Examples:
1. > kinovea.exe -file test.mkv -speed 50
2. > kinovea.exe -file test.mkv -stretch -noexp

To do this you need to run Kinovea directly from cmd.exe, from a .bat script, or from another program that supports running shell commands.


(2 replies, posted in General)

kicker9 wrote:

What I would like to be able to do is have the users who send me their video clips be able to receive back my analysis not just through text, but also in speech. Does Kinovea have this feature available, or is there a plug-in available for this?

Hi,
Sorry, it is not possible at the moment. And Kinovea does not have a plug-in system.
It will not be added until Kinovea supports audio as input, which is still on hiatus.


(2 replies, posted in General)

Are you referring to tracking a player during a whole match?
The tracking function is intended for tracking joints or small objects. Speed measurement on it will only work in very controlled situations (perpendicularity, line calibration, etc.).

TeamTermin wrote:

Screen flicker during capture is gone!!!

Great!
However, as noted in the 0.8.11 thread, there is a high chance that the final video created is not properly timed (it will play too fast relative to real time).

Currently this is not possible, and it is not to be expected in the foreseeable future.

The main reason is that some of the main use cases involve saving with the drawings painted on, which requires re-encoding. When muxing to AVI, not every codec would be available; also, depending on the codec, not all framerates are possible, which would mean yet another special case for saving with slow motion.

Initially the code tried not to re-encode when it wasn't needed, but that proved too much of a hassle to maintain.
It's a bit of a shame, but it's currently the only maintainable solution for me…

Hi,
Thanks for the report.
Does it happen with the dual snapshot button (to save an image)?
Does the crash occur right away, or after a few seconds of saving?
Do you have the overlay function activated (bug 220)?
Thanks


(1 reply, posted in Bug reports)

If I follow correctly, it shouldn't. Maybe it's a bug.

The one important thing to consider is that "what you record is what is displayed". It's not silently recording the live event; it's recording what you see on screen.
You should even be able to change the delay and browse in time during recording, and it should be reflected in the video.

Note that due to the current implementation, the recorded video is very likely to play faster than the actual events, especially for large picture sources. (This will be corrected eventually.)


(6 replies, posted in Français)

Ah, I see.
Yes, it would indeed be good to export the title or the absolute time of the key images that are "crossed" during the tracking of a trajectory.

lof123 wrote:

The other point (placing the two tables "Key images" and "Tracking" at the same height, separated by 3 or 4 columns) seems more important to me and would make it easier to automate the calculations.

I'll look into it when I get the chance, but I'm a bit afraid it will be complicated in the current state of things, because of the technique used to transform the kva format into office formats (which has other advantages besides).
Basically, the data file is traversed sequentially from top to bottom, and the spreadsheet file is created as we go, line by line.

Both ideas are likely to run into the same problem, by the way…


(24 replies, posted in General)

Hi,
My feeling is that some of these improvements are fairly independent and should be considered separately for clarity, even if they share the same goal. I'm not against completely rewriting the player screen, a good deep refactoring has been needed for far too long, but if it's possible to tackle these issues incrementally, so much the better.
Having a buffer at the input, for example, is not correlated to how the images are displayed.

In this regard, it may be more convenient to split the issues into different threads.
Maybe it's also time to move the discussion to the development mailing list… (And sorry Alexander for the thread hijack.)

So if I wanted to identify independent threads:
1 - Adding an input buffer.
2 - Moving the decoder to its own thread.
3 - Moving rendering code from the player control to a new rendering control.
3b - Making rendering controls from different screens use the same rendering surface.
3c - Making the rendering control responsible for knowing how to draw specific metadata objects.
3d - Adding a Direct2D rendering control.

3b, 3c and 3d would depend on 3, but I think the first three could be coded independently from one another.
I hope I didn't miss anything. What you refer to as the "drawing data class" is the "Metadata" class, I think. It keeps all the objects that might be drawn on top of the image.
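To illustrate items 1 and 2, here is a minimal producer/consumer sketch in Python: a decoder thread fills a bounded input buffer while the player consumes from it. Kinovea itself is C#, and all names here are hypothetical; this only shows the shape of the design.

```python
import queue
import threading

def decoder(frames, buf, stop):
    # Producer: decode frames ahead of time into a bounded buffer.
    for f in frames:
        if stop.is_set():
            break
        buf.put(f)      # blocks when the look-ahead buffer is full
    buf.put(None)       # sentinel: end of stream

def player(buf, render):
    # Consumer: the playback side pulls decoded frames and renders them.
    while True:
        f = buf.get()
        if f is None:
            break
        render(f)

rendered = []
buf = queue.Queue(maxsize=8)    # small look-ahead buffer
stop = threading.Event()
t = threading.Thread(target=decoder, args=(range(5), buf, stop))
t.start()
player(buf, rendered.append)
t.join()
```

The bounded queue is what smooths out decoding-time spikes: the decoder can run ahead during easy frames and the player drains the reserve during hard ones.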

I have some more comments, but I think the discussion will become hard to track without splitting.
The mailing list page is here, if that's fine with you…


(24 replies, posted in General)

I've started reading about Direct2D and I must say it's pretty exciting.
Moving the rendering to the graphics card will not only increase performance through hardware acceleration, it will also free the CPU for other computations.
Things like tracking multiple points might then be able to run in real time; that would be awesome.

The drawback is portability to other OSes (although I've seen something about X support…) and to older Windows versions. (Which might mean having to maintain/release two versions, but that would be a good excuse to finally try some of the new features of .NET 3.5, like the graphics tablet API.)

Anyway, yes, it would be good to use this opportunity to refactor the player screen and decoding/rendering flow.
I have added a diagram of the current architecture here to help.

I like the idea of buffering the incoming images. It won't be necessary when images are already extracted to memory, but when decoding directly from the file it will help smooth out the decoding time.

On the other side of the flow, when outputting to the screen, the ideal is, as you say, independent modules that take the image and data and draw them on their surface, be it GDI+, Direct2D or whatever.
(An "output modules" architecture would also make sense when we want to add back support for document rendering: we would just have to implement a PDF or ODF renderer.)

Passing the list of items to the renderer would completely change the approach to drawing, though… As of now, each drawing has a Draw method and is responsible for drawing itself on the GDI+ surface according to its own internal data. It's convenient and provides good encapsulation.

Abstracting the output would mean completely redesigning this aspect. It needs to be thought through carefully.
For instance, the exact same flow is used when saving a snapshot or a video: we just pass a new Bitmap to draw on instead of the control's Graphics.

Another way would be for the renderers to be abstracted and the selected concrete renderer passed to the Draw method of each object.
The objects would draw themselves using universal primitives, and these would be translated to actual method calls in the concrete renderer.
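A minimal Python sketch of that last idea, with hypothetical names (the real code is C#): drawings keep their Draw method but call universal primitives on an abstract renderer, and each backend (GDI+, Direct2D, PDF…) implements those primitives.

```python
class Renderer:
    # Abstract surface: concrete backends implement the primitives.
    def line(self, p1, p2):
        raise NotImplementedError
    def text(self, s, pos):
        raise NotImplementedError

class LoggingRenderer(Renderer):
    # Stand-in backend that just records the primitive calls.
    def __init__(self):
        self.ops = []
    def line(self, p1, p2):
        self.ops.append(("line", p1, p2))
    def text(self, s, pos):
        self.ops.append(("text", s, pos))

class ArrowDrawing:
    # The object still draws itself, but through universal primitives,
    # so it never knows which concrete surface it is drawn on.
    def __init__(self, start, end):
        self.start, self.end = start, end
    def draw(self, renderer):
        renderer.line(self.start, self.end)

r = LoggingRenderer()
ArrowDrawing((0, 0), (10, 5)).draw(r)
```

This keeps the encapsulation of the current design (each drawing owns its geometry) while decoupling it from GDI+.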

What do you think?

tirosh wrote:

3) Where can i download the experimental versions?

Sticky thread at the top of this section. "Experimental version - XXX"

Hi,

First of all, I will use this opportunity to say that the idea initially discussed at the top of this thread has been implemented in the latest experimental versions.

So when you have a 210 fps video configured, the speed slider will actually not go from 0 to 200%, but from 0 to about 28.5% (if the video file itself is encoded at 30 fps).
And the speed percentage will refer to the speed relative to the actual action.
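The arithmetic behind that slider remapping, as a small sketch (hypothetical function name):

```python
def real_speed_max_pct(file_fps, capture_fps, slider_max_pct=200.0):
    # Remap the slider's upper bound (200% of *file* speed) to speed
    # relative to the real action: a 210 fps capture stored at 30 fps
    # plays at 30/210 of real speed at 1x file speed.
    return slider_max_pct * file_fps / capture_fps
```

real_speed_max_pct(30, 210) gives about 28.6%, i.e. the ~28.5% upper bound mentioned above.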

So your issue becomes "how do I get back to 100%?".

Unfortunately, it's not simple. It was also asked a while back on the French part of the forum, and the issue I found then is that all the frames making up the slow motion have to be decoded anyway. We can't just skip 9 frames out of 10 to go 10 times faster; we have to decode everything (to reconstruct a given image, information from the previous ones is needed).

In this case that means decoding 210 images per second, even if we only render some of them to recreate the illusion of normal speed.
It will be extremely hard to keep that pace. For videos shot at 300, 600, 1200 fps or more, it will simply be impossible.
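Point 2 below amounts to decoding every frame but keeping only one in N. A minimal sketch of that decimation (hypothetical names):

```python
def frames_to_show(total_frames, capture_fps, display_fps):
    # Every frame must still be decoded (inter-frame prediction needs the
    # previous ones), but to recreate real-time motion only one frame in
    # `step` is kept for display or written to the new video.
    step = capture_fps // display_fps      # e.g. 210 // 30 = 7
    return [i for i in range(total_frames) if i % step == 0]
```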

Currently I don't see a real-time solution to this. The only thing we could do is your point 2: create a new video and decimate frames along the way.
In that case we could recreate the original motion. It would be like creating a timelapse of a slow-motion video.


(24 replies, posted in General)

Phalanger wrote:

Ok, rendering on an external form seems good, with smooth playback for HD videos which were issues before. However it does jump now and then (the code is not clean as it was a hack-around).

It sounds super promising!
It would be nice to have some instrumentation on this, monitoring how many milliseconds are spent on rendering when using DX and when using GDI+.

Phalanger wrote:

One thing which I think must be changed is invoking the paint method on the picture box. This is not a fast method. It would be much better to call this function directly (and maybe from an external loop). Going through the operating system/control seems like a possible performance issue.

Yes, you're probably right. I don't quite remember the rationale for always going through the system's paint event. It's possible that it was simply the most convenient way to get a reference to the Graphics object we are painting on.
The timer itself runs in another thread, so asking for the next frame is done asynchronously, and I don't think this can change. However, once we have the frame in question, rendering it on the screen can (and should) be triggered right away. As you point out, there is no need to go through the Windows message queue; that can only slow things down.

Phalanger wrote:

I have a feeling the only way it will really work well (as in high performance) is rewriting this control. That way frames are extracted straight into the working canvas memory and drawn from there. Other drawings on top are added to a list, which adds them to the rendering pipeline to be placed on top of the image as it renders. Maybe when you have more than one video, they should be rendered on the same surface (even if they have separate controls).

Does this sound right?

I'm not sure I followed everything; if I understood correctly, you suggest extracting the frames from FFmpeg directly onto the canvas / graphics card memory, instead of using an intermediate Bitmap object(?)

I don't know how this would work with the working zone concept. Currently the user may apply image filters like "Auto Contrast" or "Sharpen", for example. These filters are applied to pre-extracted copies of the frames, to speed things up and add flexibility (combining filters, etc.).
I'm not sure we could make the whole thing fast enough to apply these filters in real time and still play smoothly…

The working zone and its pre-extraction of frames help in many other regards too. For example, you can jump straight to any frame without having to decode all the ones in between (this wouldn't really be possible without pre-extraction: we would have to seek to the closest keyframe (in the video-coding sense) and then start decoding from there).
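For illustration, seeking without a pre-extracted cache would look roughly like this (hypothetical names, just to show the cost of random access):

```python
def frames_to_decode(target, keyframes):
    # Random access without a frame cache: find the closest preceding
    # keyframe (in the video-coding sense) and decode forward from it,
    # since intermediate frames depend on their predecessors.
    start = max(k for k in keyframes if k <= target)
    return list(range(start, target + 1))
```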
Also, some effects will always need a list of frames in memory anyway, e.g. the "overview" and "reverse" functions, and in the future the more complex special effects like chronophotography, etc. So far this model has proven very convenient.