Screen flicker during capture is gone!!!
Great!
However, as noted in the 0.8.11 thread, there is a high chance that the final video created is not properly timed. (It will go too fast relative to real time.)
Currently this is not possible and not to be expected in the foreseeable future.
The main reason is that some of the main usages include saving with drawings painted on, which requires re-encoding. When muxing to AVI, not all codecs would be available; also, depending on the codec, not all frame rates are possible, which would mean yet another special case for saving with slow motion.
The code initially tried not to re-encode when it wasn't needed, but it proved too much of a hassle to maintain.
It's a bit of a shame but it's currently the only maintainable solution for me…
Hi,
Thanks for the report.
Does it happen with the dual snapshot button (to save an image) ?
Does the crash occur right away or after a few seconds of saving ?
Do you have the overlay function activated (bug 220) ?
Thanks
If I follow, it shouldn't. Maybe it's a bug.
The one important thing to consider is that "what you record is what is displayed". It's not silently recording the live event, it's recording what you see on screen.
You should even be able to change the delay and browse in time during recording, it should be reflected in the video.
Note that due to the current implementation, the recorded video is quite likely to play faster than the actual events, especially for large picture sources. (This will be corrected eventually.)
Ah, I see.
Yes, indeed, it would be nice to export the title or the absolute time of the key images that are "crossed" while tracking a trajectory.
The other point (laying out the two tables "Key images" and "Tracking" at the same height, separated by 3 or 4 columns) seems more important to me and would make it easier to automate the calculations.
I will look into it when I get a chance, but I'm a bit afraid it will be complicated in the current state of things, because of the technique used to transform the kva format into office formats (which has other advantages besides).
Roughly, the data file is traversed sequentially from top to bottom, and the spreadsheet file is created as we go, line by line.
Both ideas are likely to run into the same problem anyway…
Hi,
My feeling is that some of these improvements are fairly independent and should be considered separately for clarity, even if they share the same goal. I'm not against completely rewriting the player screen, a good deep refactoring has been needed for far too long, but if it's possible to tackle these issues incrementally, it's better.
Having a buffer at input for example is not correlated to how the images are displayed.
In this regard, it may be more convenient to split the issues into different threads.
Maybe it's also time to move the discussion to the development mailing list… (And sorry Alexander for the thread hijack.)
So if I wanted to identify independent threads:
1 - Adding an input buffer.
2 - Moving the decoder to its own thread.
3 - Move rendering code from the player control to a new rendering control.
3b - Make rendering controls from different screens use the same rendering surface.
3c - Make rendering control responsible for knowing how to draw specific metadata objects.
3d - Adding Direct2D rendering control.
3b, 3c and 3d would be dependent on 3, but I think the first 3 could be coded independently from one another.
I hope I didn't miss anything. What you refer to as the "drawing data class" is the "Metadata" class, I think. It keeps all the things that might be drawn on top of the image.
I have some more comments, but I think the discussions will start to be hard to track without splitting.
The mailing list page is here if that's fine with you…
I've started to read about Direct2D and I must say it's pretty exciting.
Moving the rendering to the graphic card will not only increase performance through hardware acceleration, it will also free the CPU for other computations.
Stuff like tracking multiple points might be able to run in real time, that would be awesome.
The drawback is portability to other operating systems (although I've seen something about X support…) and to older Windows versions. (Which might mean having to maintain/release two versions, but this would be a good excuse to finally try some of the new features of .NET 3.5, like the graphics tablet API.)
Anyway, yes, it would be good to use this opportunity to refactor the player screen and decoding/rendering flow.
I have added a diagram of the current architecture here to help.
I like the idea of buffering the incoming images. It won't be necessary when images are already extracted to memory, but when decoding directly from file, it will help smooth out the decoding time.
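To make the idea concrete, here is a minimal sketch of such a buffer, assuming a simple bounded queue shared between the decoding thread and the rendering thread. The class and member names are made up for illustration; this is not existing Kinovea code.

```csharp
// Hypothetical sketch, not Kinovea code: a small bounded buffer between the
// decoder (producer) and the renderer (consumer), so that variations in
// decoding time are absorbed instead of showing up as stutter on screen.
using System.Collections.Generic;
using System.Drawing;
using System.Threading;

public class FrameBuffer
{
    private readonly Queue<Bitmap> queue = new Queue<Bitmap>();
    private readonly int capacity;

    public FrameBuffer(int capacity) { this.capacity = capacity; }

    // Called from the decoding thread.
    public void Push(Bitmap frame)
    {
        lock (queue)
        {
            while (queue.Count >= capacity)
                Monitor.Wait(queue);    // Buffer full: pause the decoder.
            queue.Enqueue(frame);
            Monitor.Pulse(queue);       // Wake the renderer if it was waiting.
        }
    }

    // Called from the rendering/timer thread.
    public Bitmap Pop()
    {
        lock (queue)
        {
            while (queue.Count == 0)
                Monitor.Wait(queue);    // Buffer empty: wait for the decoder.
            Bitmap frame = queue.Dequeue();
            Monitor.Pulse(queue);       // Wake the decoder if it was waiting.
            return frame;
        }
    }
}
```

A capacity of just a handful of frames would probably already absorb most decoding-time spikes without costing much memory.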
On the other side of the flow, when outputting to screen, the ideal way is, like you say, independent modules that take the image and the data and draw them on their surface, be it GDI+, Direct2D or whatever.
(An "output modules" architecture would also make more sense when we want to add back support for document rendering; we would just have to implement a PDF or ODF renderer.)
Passing the list of items to the renderer would completely change the approach for drawing though… As of now, each drawing has a Draw method, and is responsible for drawing itself on the GDI+ surface according to its own internal data. It's convenient and good encapsulation.
Abstracting the output would mean completely redesigning this aspect. It needs to be thought through carefully.
For instance, the exact same flow is used when saving a snapshot or a video. We just pass a new Bitmap to draw on instead of the control's Graphics.
One other way would be to abstract the renderers and pass the selected concrete renderer to the Draw method of each object.
The objects would draw themselves using universal primitives, and these would be translated into actual methods in the concrete renderer.
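To make the discussion concrete, here is a rough sketch of what that could look like. All the names are made up for illustration, and the primitives would obviously need to be much richer than this.

```csharp
// Hypothetical sketch of the "abstract renderer" idea, not existing Kinovea code.
// Each drawing keeps its own data and draws itself through universal primitives;
// a GDI+ (or Direct2D) implementation translates them into actual calls.
using System.Drawing;

public interface IRenderer
{
    void DrawLine(Color color, float width, PointF a, PointF b);
    void DrawText(string text, Font font, Color color, PointF location);
}

public class GdiPlusRenderer : IRenderer
{
    private readonly Graphics canvas;
    public GdiPlusRenderer(Graphics canvas) { this.canvas = canvas; }

    public void DrawLine(Color color, float width, PointF a, PointF b)
    {
        using (Pen pen = new Pen(color, width))
            canvas.DrawLine(pen, a, b);
    }

    public void DrawText(string text, Font font, Color color, PointF location)
    {
        using (SolidBrush brush = new SolidBrush(color))
            canvas.DrawString(text, font, brush, location);
    }
}

public class ArrowDrawing
{
    public PointF Start, End;

    // The drawing stays responsible for itself, but no longer knows about GDI+.
    public void Draw(IRenderer renderer)
    {
        renderer.DrawLine(Color.Red, 2f, Start, End);
    }
}
```

A Direct2D renderer would then just be another implementation of the same interface, and saving a snapshot would amount to passing a renderer built on the Bitmap's Graphics instead of the control's.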
What do you think ?
3) Where can I download the experimental versions?
Sticky thread at the top of this section. "Experimental version - XXX"
Hi,
First of all, I will use this opportunity to say that the idea initially discussed at the top of this thread has been implemented in the latest experimental versions.
So when you have a 210fps video configured, the speed slider will actually not go from 0 to 200%, but from 0 to 28.5% (if the video file is crafted as 30fps).
And the speed percentage will refer to the speed relative to the actual action.
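As a worked example of the numbers above (illustration only: a 210 fps action stored in a file flagged as 30 fps):

```csharp
// Illustration of the slider range computation described above.
public static class SpeedExample
{
    public static void Main()
    {
        double captureFps = 210.0;  // Frame rate the action was actually filmed at.
        double fileFps = 30.0;      // Nominal frame rate written into the video file.
        double oldSliderMax = 2.0;  // The old slider went from 0% to 200% of file speed.

        // At 100% file speed, the action is seen at fileFps / captureFps of real time.
        double maxRealSpeed = oldSliderMax * fileFps / captureFps;
        System.Console.WriteLine(maxRealSpeed);  // 0.2857…, i.e. the ~28.5% above.
    }
}
```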
So your issue becomes "how do I get back to 100%?".
Unfortunately, it's not simple. It was also asked a while back on the French part of the forum, and the issue I found then is that all the frames making up the slow motion have to be decoded anyway. We can't just skip 9 frames out of 10 to go 10 times faster. We have to decode everything (to reconstruct a given image, information from the previous ones is needed).
In this case this means decoding 210 images per second, even if we only render some of them to recreate the illusion of normal speed.
It will be extremely hard to keep the pace. For videos shot at 300, 600, 1200 fps or more, it will just be impossible.
Currently I don't see a real time solution to this. The only thing we could do is your point 2, create a new video and decimate frames on the way.
In this case we could recreate the original motion. It would be like creating a time-lapse video of a slow motion.
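For what it's worth, here is a minimal sketch of that decimation idea. The frame source is a hypothetical stand-in for the real decoding pipeline; the point is only that every frame still goes through the decoder, and the skipping happens afterwards.

```csharp
// Minimal sketch of the "decimate while saving" idea (point 2 above).
using System.Collections.Generic;
using System.Drawing;

public static class Decimator
{
    // Keep one frame out of 'factor': a 210 fps action saved in a 30 fps file,
    // decimated by 7, plays back at the original speed.
    public static IEnumerable<Bitmap> Decimate(IEnumerable<Bitmap> decodedFrames, int factor)
    {
        int index = 0;
        foreach (Bitmap frame in decodedFrames)
        {
            if (index % factor == 0)
                yield return frame;   // Kept and written to the output file.
            // Skipped frames were still decoded: inter-frame compression means
            // we cannot jump over them, only drop them after decoding.
            index++;
        }
    }
}
```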
OK, rendering on an external form seems good, with smooth playback of the HD videos that used to be a problem. However, it does jump now and then (the code isn't clean, as it was hacked together).
It sounds super promising!
It would be nice to have some instrumentation on this, monitoring how many ms are spent on the rendering when using DX and when using GDI+.
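Something as simple as the following would probably be enough to start with; the RenderTimer class and the way it wraps the render call are just an illustration, not existing code.

```csharp
// Hypothetical instrumentation sketch: time the rendering call itself so the
// GDI+ and Direct2D paths can be compared on the same videos.
using System.Diagnostics;

public class RenderTimer
{
    private readonly Stopwatch stopwatch = new Stopwatch();
    private long totalMs;
    private int samples;

    public void Measure(System.Action renderOnce)
    {
        stopwatch.Reset();
        stopwatch.Start();
        renderOnce();               // e.g. () => RenderFrame(graphics, frame)
        stopwatch.Stop();
        totalMs += stopwatch.ElapsedMilliseconds;
        samples++;
    }

    public double AverageMilliseconds
    {
        get { return samples == 0 ? 0 : (double)totalMs / samples; }
    }
}
```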
One thing which I think must be changed is invoking the paint method through the picture box. This is not a fast method. It would be much better to call the function directly (and maybe from an external loop). Going through the operating system/control seems like a possible performance issue.
Yes, you're probably right. I don't quite remember the rationale for always going through the system's paint event. It's possible that it was simply the most convenient way to get a reference to the Graphics object we are painting on.
The timer itself runs in another thread, so asking for the next frame is done asynchronously and I don't think this can change. However, when we have the frame in question, rendering it on the screen can (should) be triggered right away. As you point out, there is no need to go through the Windows message queue; that can only slow things down.
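For reference, here is a hedged sketch of the two paths as I understand them; the control and method names are purely illustrative.

```csharp
// Illustrative sketch of the two rendering paths discussed above.
using System.Drawing;
using System.Windows.Forms;

public class ViewportPanel : Panel
{
    private Bitmap currentFrame;

    // Indirect path: schedule a repaint through the Windows message queue.
    public void ShowFrameThroughPaintEvent(Bitmap frame)
    {
        currentFrame = frame;
        Invalidate();   // OnPaint runs later, when WM_PAINT is processed.
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        if (currentFrame != null)
            e.Graphics.DrawImageUnscaled(currentFrame, 0, 0);
    }

    // Direct path: draw right away on a Graphics taken from the control.
    // This must still run on the UI thread, so marshal with Invoke if the
    // frame arrives from the timer/decoding thread.
    public void ShowFrameDirectly(Bitmap frame)
    {
        currentFrame = frame;
        using (Graphics g = CreateGraphics())
            g.DrawImageUnscaled(frame, 0, 0);
    }
}
```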
I have a feeling the only way it will really work well (as in high performance) is rewriting this control. That way frames are extracted straight into the working canvas memory and drawn from there. Other drawings on top are added to a list, which feeds them to the rendering pipeline to be placed on top of the image as it is rendered. Maybe when you have more than one video, they should be rendered on the same surface (even if they have separate controls).
Does this sound right?
I'm not sure I followed everything; if I understood correctly, you suggest extracting the frames from FFMpeg directly into the canvas / graphics card memory, instead of using an intermediate Bitmap object(?).
I don't know how this would work with the working zone concept. Currently the user may apply some image filters like "Auto Contrast" or "Sharpen" for example. These filters are applied on pre-extracted copies of the frames, to speed things up and add flexibility (combining filters, etc.).
I'm not sure we would be able to make the whole thing fast enough to allow for these filters to be applied in real time and still play smoothly…
The working zone and its pre-extraction of frames helps in many other regards too. For example, you can jump straight to any frame without having to decode all the ones in between (this wouldn't be quite possible without pre-extraction, we would have to seek to the closest keyframe (in the video coding sense) and then start decoding from there).
Also, some effects will always need to have a list of frames in memory anyway, e.g. the "overview" and "reverse" functions, and in the future the more complex special effects like chronophotography, etc. So far this model has proven very convenient.
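As a purely illustrative sketch (not the actual Kinovea classes), this is roughly why pre-extraction makes random access and filtering cheap:

```csharp
// Once the working zone is decoded into memory, jumping to any frame is a
// dictionary lookup instead of a seek to the previous keyframe plus decoding.
using System.Collections.Generic;
using System.Drawing;

public class WorkingZone
{
    // Frames of the working zone, keyed by their timestamp.
    private readonly SortedDictionary<long, Bitmap> frames =
        new SortedDictionary<long, Bitmap>();

    public void Add(long timestamp, Bitmap frame)
    {
        frames[timestamp] = frame;
    }

    // Direct jump: no need to decode all the frames in between.
    public Bitmap GetFrameAt(long timestamp)
    {
        Bitmap frame;
        return frames.TryGetValue(timestamp, out frame) ? frame : null;
    }

    // Filters like "Sharpen" or "Auto Contrast" are applied once on the copies.
    public void ApplyFilter(System.Func<Bitmap, Bitmap> filter)
    {
        List<long> keys = new List<long>(frames.Keys);
        foreach (long key in keys)
            frames[key] = filter(frames[key]);
    }
}
```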
OK, can we dig into this point a bit more? I admit I don't see precisely what an extra column would bring.
By switching the time display to "frame number" or "total in milliseconds", the current column C will be filled with numeric values. Suppose we add an extra column: what type of information would we put in it? Is it to have the time both in its classic form and in numeric form?
Thanks
I'm not sure I see the point of blocking access to a program while at the same time letting everyone know that it is freely available on the Internet?
Will there be anything else than Kinovea in the restricting container ?
Also, the enclosed instance of Kinovea would have to retain the right to be copied, redistributed, etc. by users; will that still be true? Will the enclosed Kinovea be independent or just part of a larger whole? Can a user get access to the Kinovea binaries individually? If not, they are not independent programs…
I would understand this model for a launcher or an installer, with the distributed program still available individually for anyone to repackage, but I can't see how it could work, technically, for a blocking wrapper.
As the developer, would this be okay with you?
Thanks for asking.
The project strives for maximum openness and availability.
Note that I'm not the only copyright holder: there are more than 15 translators, and of course all the included libraries with their own licenses.
Also note that you can charge for the program itself, without wrapping it up; this is perfectly allowed by the license.
edit:
I want to state that I like the idea of creating a service-based business around Kinovea. Providing your expertise, knowledge, teaching skills, etc.
Hi,
I can't really offer legal advice, but here goes:
- Derivative works have to be under the GPLv2.
- A wrapper constitutes a derivative work.
From the license: A derivative work is "a work containing the Program or a portion of it, either verbatim or with modifications (…)".
If one could simply create a proprietary wrapper around a GPL software, the whole notion of copyleft would vanish.
From the GPL FAQ: I'd like to incorporate GPL-covered software in my proprietary system. Can I do this?
Maybe you need to define better what you mean by wrapper. If it could be used for any other program, then they are effectively two different programs, and the answer is different.
I'm fairly sure it cannot use the DirectShow codecs. It's not in the philosophy of the FFmpeg project, and they are just too different beasts.
I think software like MPlayer may use a higher abstraction and mix the two to increase format coverage (Microsoft-specific stuff is sometimes not supported in FFmpeg), but it sounds troublesome. (I don't know if it's worth it performance-wise…)
There's probably plenty of room for improvement in the rendering stack.
Is it acceptable to use third party components like SlimDX (http://slimdx.mdxinfo.com)? Or would pure Direct2D be better?
Third party components are fine as long as they are open source. SlimDX is under the MIT license so it's perfectly fine.
I was actually considering using this for audio support but haven't had time to really dig into it.
Also, does Kinovea support DirectShow input? I think it would be better if FFmpeg was simply a fallback, as many DirectShow codecs are more powerful (using hardware decoding).
That is more critical, since the whole program is sort of built around the FFmpeg time representation coming from the decoder. (I don't know how DX works, maybe there is common ground.)
Also, I'm under the impression that FFMpeg library is pretty well optimised, from my experiments the issue was more with the rendering than with the decoding.
Having better instrumentation of the performance would be a plus to understand where the bottleneck really is. Maybe the bottleneck is not the same for videos with very large image sizes as for very fast-paced ones, for example.
And of course FFmpeg already handles almost every input, whereas DirectShow will have to rely on installed codecs. If we can mix the two, and use DX where it is known to outperform FFmpeg, why not.
I'd also like to keep the dependencies on Windows specific stuff at a minimum in the event someone is motivated to try a port to Mono…