The way this is solved in video editing software like Shotcut or Premiere Pro, which also needs frame-by-frame navigation, is a workflow known as "proxy editing".

Essentially the software creates lower resolution, editing-friendly versions of the files; you make your edits on these proxies, and when you are ready for the final export it swaps the original footage back in.

This seems like the way to go, and the fact that it's an industry standard is an indication that no amount of buffer optimization or GPU acceleration would truly solve the problem.

I could add a function to create a frame-by-frame-friendly version of the video. I think the simplest would be to create the file next to the original with a suffix, as this would open the way for automatically detecting the kva annotation file when opening the original video. This way we can have a similar workflow: add your annotations on the proxy, then export a video based on the original.

One caveat is for measurements and tracking, as the lower resolution will reduce precision. But I think for the main use case of visually inspecting technique/posture it would be a good solution.

There are two ways to go about it: either from the file browser, by right-clicking the thumbnail and using a simple convert menu, or by first opening the video and having an option under Export.

For the resolution it should probably be a preset while keeping the original aspect ratio.
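
For illustration only, the conversion itself could be done by shelling out to the ffmpeg command line tool. This is a hedged sketch, and the suffix, preset height and choice of MJPEG (an all-intra codec, so every frame is a key frame) are assumptions rather than a decided design:

    # Hypothetical sketch: create a lower-resolution, all-intra proxy next to the original.
    import subprocess
    from pathlib import Path

    def make_proxy(video_path, height=720, suffix="_proxy"):
        src = Path(video_path)
        dst = src.with_name(src.stem + suffix + ".avi")   # MJPEG sits naturally in AVI
        subprocess.run([
            "ffmpeg", "-y", "-i", str(src),
            "-c:v", "mjpeg", "-q:v", "3",          # intra-only, decoder-friendly
            "-vf", f"scale=-2:{height}",           # preset height, keep the aspect ratio
            "-an",                                 # audio is not needed for annotation work
            str(dst),
        ], check=True)
        return dst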

2) Video export configuration options (e.g., FPS, bitrate key-in).

5) Possibility to support plug-in open-source HPE models

Thanks for posting (or reposting) these.

It led me to think about a technical solution that I hadn't fully considered before and that could address both.

It is possible to send binary data to external programs through "STDIN". FFmpeg, for example, readily accepts this. So I could start the external process, go into a loop to feed it the images, and retrieve the result at the end.

This is quite appealing to me because it's very open-ended and avoids having to write a C++ wrapper or export a temporary video file first. I already have functions to "enumerate" the video frames with or without the user's drawings painted on, so it should just be a matter of feeding that to the external program and hiding this technical aspect behind the UI.

Not all external programs will support that out of the box, but it should be easier and more maintainable to implement the wrapper externally.

As an experiment I could hook the ffmpeg executable into the video export function and expose H.264 encoding as a "profile". (I don't really want the export dialog to turn into a Handbrake-style user interface with a million options; if this approach works we can have custom profiles defined elsewhere and hidden from view).
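
As a rough sketch of the mechanism (Python only for illustration; the frame format, function names and parameters here are assumptions, not the actual Kinovea code):

    # Feed raw frames to ffmpeg through its standard input and let it do the encoding.
    import subprocess

    def encode_frames(frames, width, height, fps, out_path):
        proc = subprocess.Popen([
            "ffmpeg", "-y",
            "-f", "rawvideo", "-pix_fmt", "bgr24",
            "-s", f"{width}x{height}", "-r", str(fps),
            "-i", "pipe:0",                        # read the images from STDIN
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            out_path,
        ], stdin=subprocess.PIPE)
        for frame in frames:                       # each frame is width*height*3 bytes
            proc.stdin.write(frame)
        proc.stdin.close()
        proc.wait()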

Hi,
I looked into performance a bit. Specifically I have some 4K @ 60 fps videos that are impossible to browse on the timeline.

Could you tell if you are seeing the same:

1. Forward playback is actually decent. It's not buttery smooth but it runs without slowing down too much. I measured between 9 and 25 ms per frame to decode, and on average it can sustain about 45-50 fps, so it's usable for watching the action in real time.

2. Forward frame-by-frame also works, for example holding the right arrow key down and letting it step forward.

3. It's backward frame-by-frame and clicking around in the timeline that are horrendous.

Do you get the same behavior?

When you see this behavior it's mainly related to the encoding. This particular video is encoded with key frames every 6 seconds (360 frames).

So whenever we click in the timeline, it will land randomly in the middle of one of these segments. The way video decoding works, the decoder has to go back to the previous key frame and then decode all the frames forward until it reaches the target.

So I'm getting 2 or 3 seconds just to reach the requested position, because the decoder has to actually decode hundreds of other frames to get there.
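
As a back-of-the-envelope check using the numbers above (360-frame segments, 9-25 ms per decoded frame; the arithmetic is only an estimate):

    # Rough seek latency estimate: on average the target lands mid-segment.
    gop = 360
    frames_to_decode = gop / 2
    for ms_per_frame in (9, 25):
        print(frames_to_decode * ms_per_frame / 1000)   # ~1.6 to ~4.5 seconds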

As an experiment I exported it back using the export function (export in Kinovea uses MJPEG exactly for this reason, so every frame is a keyframe), and now I can do frame by frame and timeline browsing without issues, on the same 4K 60 fps video. (But yes the file is 10x bigger).

(edit: my computer is not particularly beefy).

Yes please!

Even though I have a long list of areas I want to improve personally, I'm always very curious about the community's priorities and workflows, especially the little things that can have a big impact in terms of usability.

One area where I badly need help is communication. I'm not on social media; I want to focus my time on coding. When I wrap up a release I find it incredibly hard to switch into communication mode and make nice video demonstrations of the features, on top of the written documentation, which is itself an endeavor.

I'm currently in the final stretch of version 2025.1. It has many new features and improvements all over the place.

One area that is bothering me personally and has been neglected for many years, all the while being the very first thing people see when they open the program, is the file browser. It's currently pretty unusable and completely fails at its mission of quickly navigating the video collection. So in 2026 I want to spend a few weeks upgrading it and making it less stupid.

Yeah, 4K + H.265 is brutal for frame-by-frame. Pending further investigation, the current strategy is to increase the memory buffers in Preferences > Playback > Memory and then reduce the working zone so it fits in the buffer; then it should be smooth. But that's very limiting at these resolutions: the required memory is width*height*3*frames, which for 200 frames of 4K is about 5 GB.
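
For reference, the arithmetic behind that figure (4K frames, 3 bytes per pixel):

    # Memory needed to buffer 200 uncompressed 4K frames.
    width, height, frames = 3840, 2160, 200
    print(width * height * 3 * frames / 1e9)   # ~4.98, i.e. roughly 5 GB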

Hi Chas, yes I love this interaction mode, I use it all the time. I'm not 100% sure we are talking about the same thing though, because this should work pretty much the same in Kinovea already. If you grab the navigation cursor and move it back and forth it should update the image immediately. The smoothness depends on whether the images are cached in memory; if they are not, it depends on the video encoding.

There is another interaction mode called "time grab": if you hold the ALT key you can grab anywhere in the video viewport with the mouse and move left/right to go back and forth in time.

I'm not sure if it's me but the images are quite small and they don't match what you wrote in terms of coordinates.

I will add an option to disable bilinear filtering and pixel offset. Indeed it will make it easier to test whether everything works as expected.

On mouse click the program first receives the mouse coordinates on the video surface; these are integer coordinates in a top-down system. They are converted to image coordinates based on the scaling, panning, rotation and mirroring done in the viewport. Then this gets converted to world coordinates based on the active spatial calibration and possibly the lens distortion (+ more complications for camera tracking).

Sub-pixel precision mainly depends on the zoom level. You can go up to 10x zoom, so at most you get 10 locations in each dimension inside the pixel. These are stored as floating point.
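
If it helps picture the first step of that chain, here is a hedged sketch of the viewport-to-image conversion with zoom and pan only (the names are made up; rotation, mirroring, calibration and lens distortion are left out):

    def viewport_to_image(mouse_x, mouse_y, zoom, pan_x, pan_y):
        # Undo panning, then undo zoom; the result is a fractional (sub-pixel) coordinate.
        return ((mouse_x - pan_x) / zoom, (mouse_y - pan_y) / zoom)

    # At 10x zoom a 1-pixel mouse move changes the image coordinate by 0.1,
    # which is where the "10 locations inside the pixel" figure comes from.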

I updated the original post with links to 2024.1.1

Links
   
    Kinovea-2024.1.1.exe (installer)
    Kinovea-2024.1.1.zip (self contained archive)

This is a minor release mainly addressing the crash when exporting to Excel and updating translations.

Fixes

    Fixed - General: in preferences, capture folders selectors were not working.
    Fixed - General: using the portable version from within a synchronized OneDrive directory could cause a crash.
    Fixed - Export: exporting spreadsheet to Microsoft Excel was causing a crash.
    Fixed - Export: copy image to clipboard wasn't working correctly.
    Fixed - Player: keyboard shortcuts weren't working for dual screen commands when the focus is on an individual screen.
    Fixed - Player: timeline position wasn't restored correctly after an export operation.
    Fixed - Annotations: drawings resuscitated by undoing the deletion of a keyframe were not properly registered for tracking.
    Fixed - Annotations: grabbing of lines perfectly vertical or horizontal wasn't working.
    Fixed - Annotations: positions of drawings during rendering were not accurate inside the pixel.
    Fixed - Camera motion: reset of camera motion data wasn't working.

Translation updates

    Spanish, Croatian, Indonesian, Japanese, Korean, Polish, Chinese (traditional), Russian, Ukrainian.

This is the current state of translation:
(If you see your language below but it's not available in the interface, it's because the translation is too incomplete. Contributions are very welcome; click on the image to go to Weblate.)

https://www.kinovea.org/setup/kinovea.2024.1/2024.1.1/weblate.png

Just so you know, the code base is going through some heavy refactoring right now related to the multi-instance use case.

You are right that there is no place in the UI that tells the user which plugins they have installed and whether they loaded correctly or had an issue on load; that would be good to have.

The rest is probably a discussion for github as it's more development related.

For the time being you need to check the logs. Each plugin writes at least one line when it initializes.

2698 - DEBUG - [Main] - RootKernel - Loading built-in camera managers.
2703 - INFO  - [Main] - CameraTypeManager - Initialized DirectShow camera manager.
2716 - INFO  - [Main] - CameraTypeManager - Initialized IP Camera camera manager.
2717 - INFO  - [Main] - CameraTypeManager - Initialized Camera simulator camera manager.
2717 - DEBUG - [Main] - RootKernel - Loading camera managers plugins.
2731 - DEBUG - [Main] - CameraTypeManager - Loaded camera plugins manifests.
3036 - INFO  - [Main] - CameraTypeManager - Initialized GenICam camera manager.

Regarding logs there were recent changes:
- the logs are now only at level WARN by default, so they won't print any of that; you need to go to "Help > Enable debug logs" to see these messages logged.
- the logs are now inside the Logs folder rather than at the root of the application data folder.
- each instance is logging to its own log file.

Then camera discovery runs while you are on the camera explorer tab. For the GenICam plugin you should see the whole tree of GenTL providers. As long as you stay on that tab, you should keep seeing the discovery result lines; the interval between them is adaptive.

343075 - DEBUG - [Main] - CameraTypeManager - Discovered 1 cameras in 1205 ms. (DirectShow: 0 (1 ms), IP Camera: 0 (0 ms), Camera simulator: 1 (0 ms), GenICam: 0 (1204 ms)).

I usually open the log file in BareTail so it auto-updates continuously and you can highlight certain words.
https://baremetalsoft.com/baretail/

The browser might support more protocols than Kinovea does; make sure to configure the camera so it produces an MJPEG stream. If that's already the case and the browser can access the stream while Kinovea can't, it could be a firewall block.

Hi, I will look into the dual replay sync issues. Could you describe the image size and frame rate of each camera? I'll emulate the same setup to investigate. Are they both 720 x 540 @ 520 fps?

For color, when you are using a Bayer stream format (the telltale sign is the little grid pattern on top of the black and white frames), you have two options:

1. Debayering on the fly: in the camera parameters, check the "Enable software demosaicing" option. This adds a little overhead, so if it makes dropped frames even worse, try the second option.

2. Debayering during playback: in Preferences > Capture > General, enable "Record uncompressed video". This ensures the raw frames are saved as-is to the file, which is necessary because we need pixel-perfect precision to retain the Bayer grid data all the way through. Then, when loading the video in the player, go to Image > Demosaicing and select the Bayer format. Note that these raw uncompressed files are not supported by many media players, and those that do support them might not have the option to rebuild color during playback.

Regarding the first approach, I realize that since you are using "retroactive" mode there could be a third way, where the debayering is done at the end, during the saving process. This is not implemented at the moment.

(Color cameras always start with these patterned B&W images but they do the debayering in hardware, which is not necessarily faster and requires 3 times the bandwidth; that's why the only way to get the top frame rate advertised by the vendor is to use the Bayer format. Also, don't bother with the higher bit depth formats like 10-bit or 12-bit: Kinovea doesn't use them, they get converted the same way and just eat bandwidth for no additional benefit.)
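
To give a rough idea of the bandwidth difference for the setup discussed above (8-bit Bayer vs. debayered RGB; approximate figures only):

    # Approximate sensor bandwidth for 720 x 540 @ 520 fps.
    width, height, fps = 720, 540, 520
    bayer = width * height * 1 * fps   # ~202 MB/s, one byte per pixel off the sensor
    rgb = width * height * 3 * fps     # ~607 MB/s, three bytes per pixel after debayering
    print(round(bayer / 1e6), round(rgb / 1e6))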

You need to calibrate the space with a line or a grid. Right-click it and choose "Calibrate".
https://www.kinovea.org/help/en/measure … ation.html

Hi, that definitely sounds like a bug.
Could you connect to one of the machines, go to the log folder (via the Help menu), and collect the files whose names start with "Unhandled exception"? These contain the last things the program was doing before the issue and will help me identify exactly where the problem is. Send them to joan at kinovea dot org or create an issue on github and attach them there.
If possible, also collect the log.txt and log.txt.1 files; send these via mail only, as they can sometimes contain personal data like file paths.
Thank you.

Currently there is no option to default the trigger to "armed", but it can be added; I'll write it down.

Hi, sorry for the late response.

In general users don't see or use the image-based coordinates. The pixel coordinates (when there is no calibration) are aligned to the bottom-left of the pixels, not to their centers.
There are several confounding factors, in particular the way the image is painted on screen with pixel offset and bilinear interpolation. But arithmetic between pixel locations should work the same. Several functions return fractional pixel positions (zooming in, tracking, calibration).

It's hard to describe without making things even more confusing, so I'll post an image. This is a 4x4 image magnified, with the 4 center pixels colored in. I disabled pixel offset and bilinear interpolation (this could be an option if you need this for research). In yellow is Kinovea's default coordinate system.

https://www.kinovea.org/screencaps/2024.1/pixel-grid-coordinates.png

So if your image coordinates start at the top-left of the top-left pixel, then the center is at 2, 2. This is what the KVA file will store in the calibration node. If your image coordinates start at the center of the top-left pixel, then the center is at 1.5, 1.5, which is maybe what you expected.
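
Converting between the two conventions is just a half-pixel shift; a trivial sketch (the function name is made up):

    def corner_to_center(x, y):
        # Corner-origin (Kinovea default) -> center-of-pixel origin.
        return (x - 0.5, y - 0.5)

    print(corner_to_center(2.0, 2.0))   # (1.5, 1.5) for the 4x4 example image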

But as mentioned, this is confounded by the rendering options: pixel offset mode means everything is pre-offset by half a pixel, which is relevant when the user selects locations in the image. I'm not saying this is bug-free, but the tracking work done last autumn makes me think there is no issue as large as half a pixel; let me know if you find something problematic.

I think the only case where this would be relevant is if you have an external system giving you pixel-based coordinates.