I added a `-video2` parameter. The synchronization mark is more complicated because it's not a property of the screen but an annotation on the video, so it's handled differently.

Since you are processing the videos with ffmpeg and looking for black frames, I wonder if you could trim the beginning of the video directly with ffmpeg without re-encoding. https://superuser.com/questions/258032/ … of-a-video
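For example, something like this should work (a sketch with placeholder file names and timestamp; with stream copy the cut can only land on a keyframe, so it's fast but not necessarily frame-accurate):

```python
import subprocess

# Trim everything before the sync point (2.4 s here, a placeholder value
# that would come from the black-frame detection) without re-encoding.
subprocess.run([
    "ffmpeg",
    "-ss", "2.4",        # start time of the kept segment
    "-i", "input.mp4",   # placeholder input file
    "-c", "copy",        # copy the streams as-is, no re-encode
    "trimmed.mp4",       # placeholder output file
], check=True)
```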

Thanks, very interesting. I think it shows me something I hadn't really taken into account in the new system.

The new system uses a concept of "windows", which is conceptually similar to the old workspaces: they contain the one or two screens with their content. But the XML for these is stored in the application data folder and managed by the program. So the way to start on a specific screen configuration from the command line is now to pass the window id or name, instead of the path to an external file. All the existing windows are listed in the UI so it's easy to reload old ones, and creating new ones is also done from the UI.

This means something appears to be missing compared to what you want to do: you would have to create/modify the window XML stored in app data, which is doable but not ideal. It would be much simpler if you could pass the paths to the two videos on the command line, no?

For a single video you can directly pass the path on the command line or use the -video parameter; there is no need to create a window or workspace file. Maybe there should be a -video2 parameter that lets you start in compare mode directly.
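For instance (hedged: the install path and file names are assumptions, and -video2 is the proposed parameter discussed above):

```python
import subprocess

# Hypothetical: open two videos directly in compare mode from the command
# line. -video exists today; -video2 is the proposed second parameter.
subprocess.run([
    r"C:\Program Files\Kinovea\Kinovea.exe",  # assumed install path
    "-video", r"C:\videos\athlete_A.mp4",
    "-video2", r"C:\videos\athlete_B.mp4",
])
```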

Assuming you have reduced the exposure duration to remove the motion blur and then increased the gain to try to compensate for the darkness of the image, and that this caused a lot of noise/grain: there is no easy solution, you must add external lights if you are indoors in a low-light room. Then you can use a lower gain and not have so much noise.

Hi,
Synchronization works via the "time origin" annotation and is saved in KVA files. So the way to do this would be to have a KVA file for each video with a time origin set at the timestamp of the sync point.

Please note that this workspace file format will be obsolete in the next version, as the multi-window system has been rewritten and better integrated into the program, and furthermore "workspace" means a different thing now.

Are you generating these files from code or are you editing them manually? The timestamps are not in seconds or frames; they are in a unit that depends on the video file, so it's a bit more complicated.
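To give an idea of the kind of conversion involved, here is a hedged sketch assuming the unit is the video stream's time base as reported by ffprobe (the actual unit Kinovea uses may differ):

```python
import subprocess
from fractions import Fraction

def seconds_to_stream_ts(path: str, seconds: float) -> int:
    """Convert a time in seconds to the stream's native timestamp unit,
    using the time_base reported by ffprobe (e.g. "1/90000")."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=time_base",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    time_base = Fraction(out)          # seconds per timestamp tick
    return round(seconds / time_base)  # ticks = seconds / time_base

# With a time base of 1/90000, a sync point at 2.5 s gives 225000 ticks.
```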

This will be fixed in the next release.
See: https://github.com/Kinovea/Kinovea/issues/180

Thanks.

It's related to the auto-numbering of file names. I can reproduce it if I set the file name to contain a very large number like "test1234567890123". When the program tries to automatically increment the number it fails because the number is too large.
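For illustration, a minimal sketch of the auto-numbering idea (in Python, not Kinovea's actual C# code; Python's arbitrary-precision integers are precisely what sidesteps this kind of overflow):

```python
import re

def increment_filename(name: str) -> str:
    """Increment the trailing number in a file name, preserving padding."""
    match = re.search(r"(\d+)$", name)
    if match is None:
        return name + "1"  # no counter yet, start one
    digits = match.group(1)
    incremented = str(int(digits) + 1).zfill(len(digits))
    return name[:match.start()] + incremented

# Works even when the number exceeds a 32-bit integer:
print(increment_filename("test1234567890123"))  # test1234567890124
```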

It will be fixed in the next release.

Is it a very long video? I don't think I've had the issue you are mentioning, where you click on the cursor and it jumps to a different frame.

You can also try the "time grab": ALT + left mouse drag in the video viewport. When you click down (anywhere), that position is registered to the current frame, and when you drag left or right it moves relative to that, so I think it's what you are after.
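In pseudo-code the idea is roughly this (a sketch, not the actual implementation; the names and the frames-per-pixel ratio are illustrative):

```python
def on_mouse_down(x: int, current_frame: int) -> tuple[int, int]:
    """Register the anchor: wherever you click maps to the current frame."""
    return x, current_frame

def on_mouse_drag(x: int, anchor: tuple[int, int], frames_per_pixel: float) -> int:
    """Dragging left/right moves relative to the anchored frame."""
    x0, frame0 = anchor
    return frame0 + round((x - x0) * frames_per_pixel)

anchor = on_mouse_down(400, 120)        # click anywhere at frame 120
print(on_mouse_drag(420, anchor, 0.5))  # drag 20 px right -> frame 130
```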

edit: actually I see what you mean on certain videos. It might be a bug in the mapping between pixels and timestamps, I'll have to double check.

The way they solve this in video editing software like Shotcut or Premiere Pro, which also need frame-by-frame navigation, is a workflow known as "Proxy editing".

Essentially they create lower-resolution, editing-friendly versions of the files; you create your edits on these proxies, and when you are ready to make the final export they swap the original footage back in.

This seems the way to go, and the fact that it's an industry standard is an indication that no amount of buffer optimization or GPU acceleration would truly solve it.

I could add a function to create a frame-by-frame friendly version of the video. I think the simplest would be to create the file next to the original with a suffix, as this would open the way to automatically detecting the KVA annotation file when opening the original video. This way we can have a similar workflow: add your annotations on the proxy, then export a video based on the original.
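In the meantime something equivalent can be done externally; a hedged sketch assuming MJPEG as the intra-only codec (every frame is a keyframe) and a 720p preset, with "_proxy" as a placeholder suffix convention:

```python
import pathlib
import subprocess

def make_proxy(original: str) -> pathlib.Path:
    """Create a frame-by-frame friendly proxy next to the original."""
    src = pathlib.Path(original)
    dst = src.with_name(src.stem + "_proxy.avi")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-vf", "scale=-2:720",  # 720p preset, keeps the aspect ratio
        "-c:v", "mjpeg",        # intra-only: every frame is a keyframe
        "-q:v", "4",            # quality factor, lower is better
        "-an",                  # drop audio for a lighter proxy
        str(dst),
    ], check=True)
    return dst
```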

One caveat is measurements and tracking, as the lower resolution will reduce precision. But I think for the main use case of visually inspecting technique/posture it would be a good solution.

There are two ways to go about it: either from the file browser, by right-clicking the thumbnail and using a simple convert menu, or by first opening the video and having an option under export.

For the resolution it should probably be a preset while keeping the original aspect ratio.

2) Video export configuration options (e.g., FPS, bitrate key-in).

5) Possibility to support plug-in open-source HPE models

Thanks for posting (or reposting) these.

It led me to a technical solution that I hadn't fully considered before and that could address both.

It is possible to send binary data to external programs through "STDIN". FFmpeg readily accepts this, for example. So I could start the external process, loop to feed it the images, and retrieve the result at the end.

This is quite appealing to me because it's very open ended and avoids having to write a C++ wrapper or export a temporary video file first. I already have functions to "enumerate" the video frames, with or without the user's drawings painted on, so it should just be a matter of feeding those to the external program and hiding this technical aspect under the UI.
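A minimal sketch of the mechanism, assuming raw BGR frames of a known size and H.264 as an example target (none of this is the actual Kinovea code):

```python
import subprocess
import numpy as np

WIDTH, HEIGHT, FPS, N_FRAMES = 1280, 720, 30, 90  # example values

# Start ffmpeg reading raw frames from STDIN ("-i -") and encoding to H.264.
proc = subprocess.Popen([
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "bgr24",
    "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
    "-i", "-",                  # read the video stream from STDIN
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "out.mp4",
], stdin=subprocess.PIPE)

# Feed the frames one by one; in Kinovea this would be the enumerated
# video frames with the user's drawings painted on.
for i in range(N_FRAMES):
    frame = np.full((HEIGHT, WIDTH, 3), i * 2 % 256, dtype=np.uint8)  # dummy image
    proc.stdin.write(frame.tobytes())

proc.stdin.close()
proc.wait()
```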

Not all external programs will support that out of the box, but it should be easier and more maintainable to implement the wrapper externally.

As an experiment I could hook the ffmpeg executable to the video export function and expose H.264 encoding as a "profile". (I don't really want the export dialog to turn into a Handbrake-style user interface with a million options; if this approach works we can have custom profiles defined elsewhere and hidden from view.)

Hi,
I looked into performance a bit. Specifically, I have some 4K @ 60 fps videos which are impossible to browse on the timeline.

Could you tell if you are seeing the same:

1. Forward playback is actually decent. It's not buttery smooth but it runs without slowing down too much. I measured between 9 and 25 ms per frame to decode, and on average it can sustain about 45-50 fps, so it's usable for the purpose of watching the action in real time.

2. Forward frame by frame also works, for example holding the right arrow key down and letting it move forward.

3. It's backward frame by frame and clicking around in the timeline that are horrendous.

Do you get the same behavior?

When you get this it's mainly related to the encoding. This particular video is encoded with key frames every 6 seconds (360 frames).

So whenever we click in the timeline, it will randomly drop in the middle of one of these segments. The way video decoding works, we go back to the previous keyframe and then every frame has to be decoded forward until we reach the target.

So I'm getting 2 or 3 seconds to reach the requested position, just because it has to actually decode hundreds of other frames to get there.
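A back-of-the-envelope check with the numbers above:

```python
gop = 360            # frames between keyframes (6 s at 60 fps)
decode_ms = (9, 25)  # measured per-frame decode time

# A random click lands on average half-way into a segment, so about
# 180 frames must be decoded just to reach the target.
avg_frames = gop / 2
print(avg_frames * decode_ms[0] / 1000, "to",
      avg_frames * decode_ms[1] / 1000, "seconds")  # 1.62 to 4.5 seconds
```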

As an experiment I exported it back using the export function (export in Kinovea uses MJPEG exactly for this reason: every frame is a keyframe), and now I can do frame by frame and timeline browsing without issues on the same 4K 60 fps video. (But yes, the file is 10x bigger.)

(edit: my computer is not particularly beefy).

Yes please!

Even if I have a long list of areas I want to improve personally, I'm always very curious about the community's priorities and workflows. Especially the little things that can have a big impact in terms of usability.

One area where I badly need help is communication. I'm not on social media; I want to focus my time on coding. When I wrap up a release I find it incredibly hard to switch into communication mode and make nice video demonstrations of the features, on top of the written documentation, which is itself an endeavor.

I'm currently in the final stretch of version 2025.1. It has many new features and improvements all over the place.

One area that has been bothering me personally and has been neglected for many years, all the while being the very first thing people see when they open the program, is the file browser. It's currently pretty unusable and completely fails at its mission of quickly navigating the video collection. So in 2026 I want to spend a few weeks upgrading it, to make it less stupid.

Yeah, 4K + H.265 is brutal for frame by frame. Pending further investigation, the current strategy is to increase the memory buffers in Preferences > Playback > Memory and then reduce the working zone so that it fits in the buffer; then it should be smooth. But that's very limiting at these resolutions. The required memory is width*height*3*frames, so for 200 frames that's ~5 GB.
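For reference, the arithmetic behind that figure:

```python
width, height, frames = 3840, 2160, 200  # 4K, 200-frame working zone

bytes_needed = width * height * 3 * frames  # 3 bytes per pixel (RGB)
print(round(bytes_needed / 1024**3, 2), "GiB")  # 4.63 GiB, roughly 5 GB
```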

Hi Chas, yes, I love this interaction mode, I use it all the time. I'm not 100% sure we are talking about the same thing though, because this should already work pretty much the same in Kinovea. If you grab the navigation cursor and move it back and forth it should update the image immediately. The smoothness depends on whether the images are cached in memory; if they are not, it depends on the video encoding.

There is another interaction mode called "time grab": holding the ALT key, you can grab anywhere in the video viewport with the mouse and move left/right to go back and forth in time.

I'm not sure if it's just me, but the images are quite small and they don't match what you wrote in terms of coordinates.

I will add an option to disable bilinear filtering and the pixel offset. Indeed, that will make it easier to test that everything works as expected.

On mouse click the program first receives the mouse coordinates on the video surface; these are integer coordinates in a top-down system. They are converted to image coordinates based on the scaling, panning, rotation and mirroring done in the viewport. Then this gets converted to world coordinates based on the active spatial calibration and possibly the lens distortion (+ more complications for camera tracking).

Sub-pixel precision mainly depends on the zoom level. You can go up to 10x zoom, so at most you get 10 locations per dimension inside a pixel. These are stored as floating point.
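A much-simplified sketch of the two conversions, assuming only zoom and pan (no rotation, mirroring or lens distortion) and a plain scale calibration; all names are illustrative, not the actual code:

```python
def surface_to_image(mouse_x: int, mouse_y: int, zoom: float,
                     pan_x: float, pan_y: float) -> tuple[float, float]:
    """Integer mouse coordinates on the viewport -> floating point image
    coordinates. At 10x zoom each screen pixel maps to 1/10th of an image
    pixel, which is where the sub-pixel precision comes from."""
    return (mouse_x - pan_x) / zoom, (mouse_y - pan_y) / zoom

def image_to_world(x: float, y: float, meters_per_pixel: float) -> tuple[float, float]:
    """Image coordinates -> world coordinates with a simple scale
    calibration (the real chain also handles plane calibration and
    lens distortion)."""
    return x * meters_per_pixel, y * meters_per_pixel

# At 10x zoom, adjacent mouse positions differ by 0.1 image pixel:
print(surface_to_image(101, 50, 10.0, 0.0, 0.0))  # (10.1, 5.0)
```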

I updated the original post with links to 2024.1.1

Links
   
    Kinovea-2024.1.1.exe (installer)
    Kinovea-2024.1.1.zip (self contained archive)

This is a minor release mainly addressing the crash when exporting to Excel and updating translations.

Fixes

    Fixed - General: in preferences, capture folders selectors were not working.
    Fixed - General: using the portable version from within a synchronized OneDrive directory could cause a crash.
    Fixed - Export: exporting spreadsheet to Microsoft Excel was causing a crash.
    Fixed - Export: copy image to clipboard wasn't working correctly.
    Fixed - Player: keyboard shortcuts weren't working for dual screen commands when the focus was on an individual screen.
    Fixed - Player: timeline position wasn't restored correctly after an export operation.
    Fixed - Annotations: drawings resuscitated by undoing the deletion of a keyframe were not properly registered for tracking.
    Fixed - Annotations: grabbing of lines perfectly vertical or horizontal wasn't working.
    Fixed - Annotations: positions of drawings during rendering were not accurate inside the pixel.
    Fixed - Camera motion: reset of camera motion data wasn't working.

Translation updates

    Spanish, Croatian, Indonesian, Japanese, Korean, Polish, Chinese (traditional), Russian, Ukrainian.

This is the current state of translation:
(If you see your language below but it's not available in the interface, it's because the translation is too incomplete. Contributions are very welcome; click on the image to go to Weblate.)

https://www.kinovea.org/setup/kinovea.2024.1/2024.1.1/weblate.png