1

Hi,

I'm currently using Kinovea for a biomechanics research project and would like to clarify some specific technical details.

To better illustrate my question, I created and attached an image with three examples labeled A, B, and C:

https://prnt.sc/G_U8UNLCB57E

    Image 1 (Point A): A black dot is placed exactly at integer coordinate (1, 1).
    My question is: when I click exactly on this position, does Kinovea return the value (1.0, 1.0)? Or is there some internal offset (like 0.5) being applied that alters the reported value?

    Image 2 (Point B): A white dot is placed near (-1, -1).
    In this case, what is the actual coordinate that Kinovea returns when I click this position? Is it truly (-1.0, -1.0), or a slightly shifted value due to interpolation or pixel offset logic?

    Image 3 (Point C): A point is selected near (0.15, 0.5) in the zoomed-in view.
    Does Kinovea actually support and store subpixel values like this (e.g., 0.15)?

Based on these examples, I would greatly appreciate if you could help me understand:

    How does Kinovea handle subpixel precision?
    If I click somewhere between pixels, does the software interpolate and return floating point coordinates (e.g., 0.33, 1.72)? If so, how is this value determined: visually, through bilinear interpolation, or some other method?

    Are the options “pixel offset” and “bilinear interpolation” configurable in the interface or settings?
    I'd like to experiment with turning them on and off, but I wasn't able to locate a toggle. Are these options exposed to the user?

    What differences should I expect in the returned coordinates for points A, B, and C with these options enabled or disabled?

Thank you again for developing such a powerful and user-friendly tool. I'm trying to ensure the greatest possible precision in my workflow, and these clarifications would be incredibly helpful.

2

I'm not sure if it's just me, but the images are quite small, and they don't match what you wrote in terms of coordinates.

I will add an option to disable bilinear filtering and pixel offset. That will indeed make it easier to test whether everything works as expected.

On mouse click the program first receives the mouse coordinates on the video surface; these are integer coordinates in a top-down system. These are converted to image coordinates based on the scaling, panning, rotation and mirroring done in the viewport. Then this gets converted to world coordinates based on the active spatial calibration and possibly the lens distortion (plus more complications for camera tracking).
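To make the chain of transforms concrete, here is a minimal sketch in Python. The function names and the simplified math are my own illustration of the pipeline described above (zoom and pan only, with a uniform scale calibration); they are assumptions, not Kinovea's actual code, which also handles rotation, mirroring and lens distortion.

```python
def mouse_to_image(mx, my, zoom, pan_x, pan_y):
    """Map integer mouse coordinates on the video surface to image
    coordinates, undoing the viewport zoom and pan (rotation and
    mirroring omitted for brevity)."""
    return ((mx - pan_x) / zoom, (my - pan_y) / zoom)

def image_to_world(ix, iy, scale, origin_x, origin_y):
    """Map image coordinates to world coordinates using a simple uniform
    spatial calibration: a scale factor (units per pixel) and a
    user-chosen origin. The y axis is flipped because image coordinates
    are top-down while world coordinates are usually bottom-up."""
    return ((ix - origin_x) * scale, (origin_y - iy) * scale)

# Example: a click at (250, 130) in a viewport zoomed 2x and panned by
# (50, 30), with a calibration of 0.1 units per pixel and the world
# origin placed at image pixel (100, 100).
ix, iy = mouse_to_image(250, 130, zoom=2.0, pan_x=50, pan_y=30)
wx, wy = image_to_world(ix, iy, scale=0.1, origin_x=100, origin_y=100)
```

Note that the intermediate image coordinates are already floating point as soon as the zoom factor is not 1, which is where subpixel values come from.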

Sub-pixel precision mainly depends on the zoom level. You can go up to 10x zoom, so at most you get 10 distinct locations in each dimension inside a pixel. These are stored as floating point values.
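The relationship between zoom and click precision can be sketched as follows. This is an assumption-based model of the behavior described above, not Kinovea's code: at zoom factor z, adjacent screen pixels map to image positions 1/z apart, so a click can only land on a 1/z grid within the image.

```python
def click_resolution(zoom):
    """Smallest distinguishable step, in image pixels, between two
    adjacent mouse-click positions at the given zoom factor."""
    return 1.0 / zoom

def clicked_image_x(screen_x, zoom):
    """Image-space x coordinate for a click at integer screen_x,
    stored as a float (zoom only, no pan, for clarity)."""
    return screen_x / zoom

# At the maximum 10x zoom, clicks are 0.1 image pixels apart, i.e.
# 10 addressable locations per pixel in each dimension.
step = click_resolution(10.0)
x = clicked_image_x(13, 10.0)
```

So a reported value like 0.15 is representable in storage, but whether you can actually click on it depends on the current zoom: at 10x the nearest reachable positions are 0.1 and 0.2.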