Thanks for the follow-up and the details, this might help someone else down the line! I wasn't aware of this feature in Windows; I'll have to look into it at some point to see why it's not working.

FYI you should be able to rotate the camera directly inside Kinovea using the image menu.

Something that might happen is that a Windows update silently changes the driver to a newer version from Microsoft and breaks some features. I have had that happen with other cameras. Check in Device Manager whether the button to roll back to the previous driver version is enabled.


(26 replies, posted in General)

Thanks for looking into that! I also noticed it at some point but hadn't had time to investigate.


(26 replies, posted in General)

P1 wrote:

Could it be hardware related in some way?

It sounds more like a software bug. I'll look into it.


(26 replies, posted in General)

If you want to use both versions at the same time, at least one of them should be the "portable" version provided as a zip file. Otherwise they will share the preferences folder, and since the format has changed this will cause issues. But I don't think it could cause this kind of issue.

Could you share one of these vertical 4K videos with me? Do they open in portrait automatically or did you explicitly use the rotation menu? In the second photo, are they really zoomed in or is that just how they would look at 1:1 scale on your screen? (What's the resolution of your screen?)


(26 replies, posted in General)

I haven't come across this issue. The video export was rewritten and maybe that is what introduced it. One major change is that the video is now exported at the original image size, whereas before it was exported at the display size. So for large image sizes this may make the export take longer.

I still want to have a system to choose different export formats for web vs. archiving as soon as possible, but it's a bit of a big undertaking.

If you can change the gamma in the Daheng software but not in Kinovea then there is a bug somewhere. Maybe the property has an unexpected name or something. I don't have access to the camera right now, but I will look into this in a couple of weeks; the plugins need to be fixed for 2023.1 anyway.

I've never formally tested the max throughput of the IP camera module. The low-level code for this comes from an external library; there might be a couple of extra buffer copies that could be factored out by rewriting it. I don't know if that's the culprit though.

Once the buffers are captured they go through a pipeline that is shared with the other camera types, and that pipeline is known to support this kind of speed, so the issue is probably somewhere before it.

If you film a high-precision stopwatch, can you figure out whether the lost frames happen at random intervals or are bunched together, like a whole chunk missing at once?
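For example, here is a minimal sketch of the kind of gap analysis I mean. The names and the 1.5× threshold are illustrative assumptions; it only presumes you can get a timestamp per received frame, from the filmed stopwatch or from a log:

```python
# Classify dropped frames from a list of frame timestamps (illustrative sketch).
def classify_drops(timestamps_ms, nominal_interval_ms):
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    drops = [(i, g) for i, g in enumerate(gaps) if g > 1.5 * nominal_interval_ms]
    # Isolated large gaps scattered through the list suggest random single drops;
    # a few very large gaps suggest whole chunks going missing at once.
    for i, g in drops:
        missing = round(g / nominal_interval_ms) - 1
        print(f"gap after frame {i}: {g:.1f} ms (~{missing} frame(s) missing)")

# Example at a nominal 100 fps (10 ms interval): one chunk of 4 frames lost.
classify_drops([0, 10, 20, 70, 80, 90], nominal_interval_ms=10)
```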

As an experiment you could run the server on the same machine and capture through the loopback interface, to see if you still hit the limit.
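Something like this rough Python sketch can tell whether the raw loopback path sustains the bandwidth at all, which would help rule the network stack in or out as the bottleneck. It is purely illustrative and not part of Kinovea; the frame size and port number are made up:

```python
# Rough loopback throughput check: stream dummy frames over 127.0.0.1
# and report the achieved rate.
import socket, threading, time

FRAME = bytes(1920 * 1080 * 3)          # one uncompressed 1080p RGB frame (~6 MB)
N = 200

def server():
    with socket.create_server(("127.0.0.1", 5005)) as srv:
        conn, _ = srv.accept()
        with conn:
            for _ in range(N):
                conn.sendall(FRAME)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                          # let the server start listening

received = 0
start = time.perf_counter()
with socket.create_connection(("127.0.0.1", 5005)) as cli:
    while received < N * len(FRAME):
        chunk = cli.recv(1 << 20)
        if not chunk:
            break
        received += len(chunk)
elapsed = time.perf_counter() - start
print(f"{received / elapsed / 1e6:.0f} MB/s, ~{N / elapsed:.0f} frames/s")
```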

Updated calibration dialog.

https://www.kinovea.org/screencaps/2023.2/2023.2-calibration.png

- New: flipping and rotating axes, pixel size hint, coordinates offset.
- Improved: numeric boxes instead of text boxes, to support changing the numbers with the mouse wheel.
- Plus: added a menu in the coordinate system drawing to re-align it with the grid.

Pixel size can be thought of as the error bar at the center of the grid: if the digitization is off by one pixel, this is how much the measurement is off in real-world units. (In pixels of the original video, not screen pixels; it's always good to zoom in.)
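A quick worked example with made-up numbers:

```python
# A grid 2.0 m wide that spans 400 px of the original video
# gives a pixel size of 5 mm.
grid_width_m = 2.0
grid_width_px = 400
pixel_size_mm = grid_width_m / grid_width_px * 1000   # 5.0
# A 1 px digitization error at the grid center is therefore about 5 mm
# in real-world units (pixels of the original video, not screen pixels).
print(f"{pixel_size_mm:.1f} mm per pixel")
```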

No special treatment for tracking: the offset is just added at the end, so it should work as before. I don't think tracking both the coordinate system and the calibration grid is a realistic scenario, so until someone comes up with a real use case it will just keep giving precedence to the grid. We'll see later how that works with camera motion compensation.
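Here is how I picture the transform chain, as a sketch only; this is my reading of "added at the end", not Kinovea's actual code:

```python
# Calibration maps pixels to world units; the constant offset is added last,
# so tracking (which updates the calibration) leaves the offset behavior unchanged.
def to_world(point_px, calibrate, offset):
    x, y = calibrate(point_px)        # plane calibration from the (possibly tracked) grid
    return (x + offset[0], y + offset[1])

scale = 0.005  # m per pixel, illustrative
world = to_world((120, 40), lambda p: (p[0] * scale, p[1] * scale), offset=(6.0, 0.0))
print(world)  # (6.6, 0.2)
```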

Known limitation: the scatter plot diagram shows the axes un-flipped.

Another feature of the Distance grid tool is that it lets you reverse the X axis direction and anchor the origin at the bottom-right. This can be useful for measuring things going right to left. I'll try to get all of this into the normal grid.

edit: actually this can be done by manually flipping the grid.
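Numerically, the reversed axis just measures from the right edge of the grid instead of the left. An illustrative sketch, with all names and numbers made up:

```python
# World X of a point, with or without the reversed axis.
def world_x(px_x, grid_left_px, grid_right_px, grid_width_m, reversed_axis=False):
    m_per_px = grid_width_m / (grid_right_px - grid_left_px)
    if reversed_axis:
        return (grid_right_px - px_x) * m_per_px   # distance from the right edge
    return (px_x - grid_left_px) * m_per_px        # distance from the left edge

print(world_x(700, 100, 900, 4.0))                      # 3.0 m from the left
print(world_x(700, 100, 900, 4.0, reversed_axis=True))  # 1.0 m from the right
```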


(1 reply, posted in Bug reports)

Which version of Kinovea? Is this Windows 11 on a desktop/laptop or on a tablet computer? Do you have the NVIDIA Broadcast virtual webcam enabled?

Another option is a second way of entering the calibration grid dimensions: instead of entering the lengths of the sides, enter the coordinates of the bottom-left and top-right corners. If the user chooses this way, we calculate the sides and the offset from the entered values. (In theory this could be an arbitrary quad, but forcing a rectangle keeps things simpler.)
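A sketch of the calculation this implies, with hypothetical names:

```python
# From the world coordinates of the bottom-left and top-right corners,
# recover the side lengths and the origin offset of the rectangle.
def sides_and_offset(bottom_left, top_right):
    width = top_right[0] - bottom_left[0]
    height = top_right[1] - bottom_left[1]
    offset = bottom_left              # world coordinate of the grid's bottom-left corner
    return (width, height), offset

# e.g. corners entered as (6.0, 0.0) and (10.0, 2.0):
print(sides_and_offset((6.0, 0.0), (10.0, 2.0)))  # ((4.0, 2.0), (6.0, 0.0))
```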

That was kind of the direction the Distance grid tool experiment was going. But now I think that tool won't be necessary anymore if the offset can be set up on the normal grid. The distance line can then become a display option (and it can have a vertical counterpart as well).

The link to the Meta AI tracking project is super interesting. When the basic camera motion integration is done I want to look into connecting machine learning algorithms more easily. There is so much to leverage: super slow motion, pose estimation, background/foreground segmentation, image stabilization…

To be clear, you can track the grid, but you can also track the coordinate system origin independently. If you first move the coordinate system off the grid corner, you can then right-click > Tracking > Start tracking. If both the coordinate system origin and the calibration grid are tracked, priority is given to the grid.

To be honest, I think the only real use case for tracking the calibration grid itself is when the camera is moving. I've been working on camera motion compensation for a while and it should be in the next version, although I'm not yet sure the first version will interact correctly with measurements. (It works for placing drawings that "stick" to world objects despite camera motion, which is already useful for visualizing trajectories and drawing in world space.)

Tracking the system's origin has a use case outside camera motion: for example when you want to measure something relative to something else (a moving coordinate system). It's not very common in my experience though.

Yeah, I think having a configurable "offset" for the coordinate system would be nice; the two options aren't mutually exclusive. It might actually be useful in the context of camera tracking, to set up your calibration in the middle of the sequence but still get meaningful values. Although I'm not sure camera tracking will be precise enough for making measurements.


(1 reply, posted in Français)

Hello,
In this case the simplest approach is to have two separate Kinovea instances, one for capture and the other for replay. The replay mechanism is completely independent from the capture window: it just watches for new files arriving in the target folder and starts the video automatically. (In theory it can even run on a different machine on the local network.)

OK, but it needs to interact nicely with changing the origin manually by dragging the coordinate system object around by its axes or origin. If we change it manually on screen and come back to this dialog, it needs to show the correct value.

And we can also activate tracking on the coordinate system. In that case the value to show here would be a bit tricky; maybe it should be grayed out.

I think there are two ways to think about this. Option 1: it defines where the origin of the coordinate system is in relation to the calibration grid, which is what you wrote. When we activate the display of the coordinate system it will be moved to this new origin. If the origin is far enough away it could be outside the image.

Option 2: it could define the coordinate at the origin, that is, a fixed offset applied to the coordinate system such that the intersection of the axes is not {0, 0} but something else.

This is sort of what I do in the new "Distance grid" object (on the horizontal axis only, though it could be done on both axes). In the calibration of this grid you set two distances, for example 6 m and 10 m, and then the first coordinate on the main axis already reads 6 m. This was designed as an experiment for long jump measurements. The important advantage of doing it this way is when the true origin of your coordinate system is way outside the image: here we have a camera looking at the end of the long jump pit, with markers at known distances. The coordinate system is still aligned with the grid but the values are pre-transformed. (edit: it doesn't fully work with the coordinate system at the moment though, only for the distance line inside the grid.)
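A worked sketch of option 2 with those long jump numbers; the pixel positions are made up:

```python
# The camera only sees markers known to be at 6 m and 10 m from the true
# origin (outside the image), so the axes stay aligned with the grid but
# the values are pre-offset.
marker_a_px, marker_a_m = 100, 6.0    # illustrative pixel positions
marker_b_px, marker_b_m = 900, 10.0

m_per_px = (marker_b_m - marker_a_m) / (marker_b_px - marker_a_px)  # 0.005

def world_x(px):
    return marker_a_m + (px - marker_a_px) * m_per_px

print(world_x(100))   # 6.0  -> the grid origin already reads 6 m
print(world_x(500))   # 8.0  -> a point mid-frame reads 8 m
```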

Can you tell if your use case is more amenable to one approach or the other? I wonder if people who want to change the origin numerically are actually looking for option 2.