Thanks for the feedback. Yeah, it's always hard to know which features people are actively using when I want to simplify the architecture, and sometimes I only get feedback once something is removed.

I want to keep the toggle between "fading in/out" and "always visible" as direct as possible. So I'm thinking of a Visibility menu with three sub-options: always visible, default fading and custom fading, plus a fourth entry that opens a dialog box to configure the custom fading. In this dialog we need to be able to change the duration of the opaque section and the duration of the fading sections on each side of it. The old option had everything in a single dialog, so you were presented with all the complexity even when you just wanted to switch to always visible. I think separating the typical cases from the "advanced" case will be clearer and more usable.

(9 replies, posted in Bug reports)

Oh, this is great!

I reproduced the problem and pushed a fix, let me know if it works for you.

The goal of this piece of code is to set the timer refresh rate for the display, and the only thing we really want is to avoid it being too high. For example, if this is a high-speed camera running at 500 fps we don't want the burden of refreshing the UI at that rate; it would compete for computer resources with the recording, which actually *needs* to happen at 500 fps. So it needs to stay `min()`. In some cases though, the camera framerate can't be known in advance; this is the case for IP cameras (for now, until maybe ONVIF is implemented or something). In that case we'll just ignore the (unknown at this point anyway) camera framerate altogether and use the configured display framerate or the monitor framerate.

The `cameraGrabber.Framerate` is going to come from the camera modules (FrameGrabber.cs in Kinovea.Camera.HTTP for example).

`pipelineManager.Frequency` is only going to be valid after some frames have been received, so it shouldn't be used during connection.
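
For context, the selection boils down to something like this sketch (only `cameraGrabber.Framerate` is a real name from the code; the rest are placeholders I made up for illustration):

```csharp
// Sketch only: pick the UI refresh rate. A camera framerate of 0 or less means
// "unknown" (e.g. IP cameras); preferredDisplayFramerate is the configured
// display framerate or the monitor refresh rate.
public static double GetDisplayFramerate(double cameraFramerate, double preferredDisplayFramerate)
{
    if (cameraFramerate <= 0)
        return preferredDisplayFramerate;

    // Never refresh the UI faster than the preference: a 500 fps camera
    // should not force 500 UI refreshes per second.
    return Math.Min(cameraFramerate, preferredDisplayFramerate);
}

// The display timer interval in milliseconds would then be:
// (int)Math.Round(1000.0 / GetDisplayFramerate(cameraGrabber.Framerate, preferredDisplayFramerate));
```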

Thanks!

Hi,
A number of drawings are "owned" by the keyframe they are added to, and others are video-level objects. The following are video-level: stopwatches, trajectories, spotlights, auto-numbers. Some other tools are special like the coordinate system and magnifier.

Yeah, the persistence/fading needs some work. It was simplified at some point to make the code easier to manage, but now it's a bit lacking in features. The drawings use whatever option was set in the preferences when they were added, and they are stored with this value, so you can't really change it afterwards.

Internally the fading in/out model is more capable though: it's able to fade for any number of frames around the insertion point, or even have a period of time where it's fully opaque and only fade around these boundaries. This is used, for example, in the next version to support subtitle import. The issue is one of user interface at the moment: I need to find a good way to expose these features that isn't as confusing as the dialog box there once was.
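
As a rough sketch of that internal model (the names are made up for illustration, not the actual Kinovea classes), the opacity at a given frame is a trapezoid: fully opaque inside a window, fading linearly on each side of it:

```csharp
// Illustrative sketch of a trapezoidal opacity envelope around an insertion point.
// All names are hypothetical.
public static double GetOpacity(long frame, long opaqueStart, long opaqueEnd, long fadeInFrames, long fadeOutFrames)
{
    if (frame >= opaqueStart && frame <= opaqueEnd)
        return 1.0; // Fully opaque inside the window.

    if (frame < opaqueStart && fadeInFrames > 0)
    {
        long distance = opaqueStart - frame;
        return Math.Max(0.0, 1.0 - (double)distance / fadeInFrames); // Fading in before the window.
    }

    if (frame > opaqueEnd && fadeOutFrames > 0)
    {
        long distance = frame - opaqueEnd;
        return Math.Max(0.0, 1.0 - (double)distance / fadeOutFrames); // Fading out after the window.
    }

    return 0.0; // Invisible outside the fading range.
}
```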

Edit: you're right, the spotlight is a special case, it's using a hard coded value and doesn't honor the fading option.

No, this is for the next version.

This was changed recently for the next version. There is now a concept of time origin that you can set in each video independently with a single click, and the synchronization will be done using the time origins of each video. I think it will make things clearer/easier. The time origin will also be saved in the KVA file so you won't have to find it again when comparing that specific video with another one.
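
A simplified sketch of the idea (not the actual implementation): each video's time is expressed relative to its own time origin, and the resulting common time is what gets synchronized.

```csharp
// Sketch: the common timeline is zero at each video's time origin,
// negative before it and positive after it.
public static long ToCommonTime(long videoTime, long timeOrigin)
{
    return videoTime - timeOrigin;
}

public static long ToVideoTime(long commonTime, long timeOrigin)
{
    return commonTime + timeOrigin;
}
```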

PS: necroposting is fine on this forum.

(1 reply, posted in General)

Hi,
At the moment it is hard coded. And yes, the bitrate settings are very aggressive to minimize any loss of information when saving, to best support the archival use case (e.g. splitting a long session into smaller chunks). It's the first time I've heard of a player complaining that the bitrate is too high though… Which player is it?

I've kind of tried to stay away from opening this particular can of worms :-). Codec options can become complex fast in terms of UI, and this complexity trickles down into the code. Maybe having just two options would work: one for archival and one for web/presentation export? Maybe it could even be selected automatically based on the option to paint the drawings onto the images (for archival this should never be done).
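
Just to illustrate the shape of that two-profile idea, here is a minimal sketch (the profile names and the bits-per-pixel values are arbitrary, not what Kinovea actually uses):

```csharp
// Hypothetical export profiles; the real bitrate settings are hard coded elsewhere.
public enum ExportProfile { Archival, Presentation }

public static int GetBitrate(ExportProfile profile, int width, int height, double framerate)
{
    // Rough bits-per-pixel heuristic: generous for archival, modest for web/presentation.
    double bitsPerPixel = (profile == ExportProfile.Archival) ? 0.5 : 0.1;
    return (int)(width * height * framerate * bitsPerPixel);
}
```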

For the shortcut issue, after thinking more about it, I think the best way would be to use the existing distinction of showing/hiding the common controls. There is even the F5 shortcut to toggle it already. If the common controls are hidden, the videos should really behave as two completely independent videos.
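
Something along these lines, as a sketch (all the names here are illustrative, not the actual player code):

```csharp
// Sketch: route playback shortcuts to both players only when the common controls
// are visible; otherwise each video behaves as a fully independent player.
private void HandlePlayPauseShortcut()
{
    if (commonControlsVisible)
    {
        leftPlayer.TogglePlay();
        rightPlayer.TogglePlay();
    }
    else
    {
        ActivePlayer().TogglePlay();
    }
}
```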

"permanently lock dual video playback in their synchronized state based on the working zone starting points"

I am not sure what you mean by that, can you clarify the intent?

(9 replies, posted in Bug reports)

Hi,
In addition to the configured framerate and the display framerate, there is also the actual, received framerate. It's possible that Kinovea is not receiving frames at the correct speed for whatever reason (low light, network conditions, camera is lying, etc.). There should be a "Signal" field in the infobar that shows up after a while. This shows the framerate received.
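
That "Signal" value is essentially a moving measurement of how fast frames actually arrive. A sketch of the idea (hypothetical code, not the actual pipeline):

```csharp
using System;
using System.Collections.Generic;

// Sketch: estimate the received framerate from the arrival times of the last N frames.
public class ReceivedFramerateEstimator
{
    private readonly Queue<DateTime> arrivals = new Queue<DateTime>();

    public double OnFrameReceived()
    {
        DateTime now = DateTime.UtcNow;
        arrivals.Enqueue(now);

        // Keep a sliding window of the last 100 arrivals.
        while (arrivals.Count > 100)
            arrivals.Dequeue();

        if (arrivals.Count < 2)
            return 0;

        double seconds = (now - arrivals.Peek()).TotalSeconds;
        return seconds > 0 ? (arrivals.Count - 1) / seconds : 0;
    }
}
```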

The way recording with delay works has completely changed in the next version, so we'll see if that fixes your particular problem. It will now always save at the camera's configured framerate; the display framerate is not used anymore.

It's possible the issue is that the received stream is at 30 fps but Kinovea wasn't able to record it to disk fast enough unless you force it down to 21. Or it's possible the stream received is at 21 in the first place. The "Signal" value should give the answer to that. Another way I use is to pause the stream and navigate through the recent frames with the delay slider, to see if the stopwatch matches what the delay says.

Based on your description it looks like the actual received framerate is 21, even before attempting to record to disk. Unless the camera is lying, this should be fixable in other ways (most probably adding more light, to avoid triggering auto-framerate by way of auto-exposure).

For this case of the camera not sending what it's configured to, currently the best way to fix it is at playback time, by going to menu Video > Configure timing and fixing the framerate there. In the future I'll also look at adding an option to save the "received framerate" in the video, but ideally this option would be per-camera and saved in the preferences, as the goal is to fix cameras that are lying about their framerate. Since the release of the next version is coming soon, I pushed this to the roadmap for the one after.

(3 replies, posted in General)

To get more screen real estate you can explicitly collapse the key image panel with the arrow on the right. You can also switch to full screen with F11 (and revert to normal with F11 as well).

I agree it would be nice if the magnifier showed the drawings, and also the cursor. Then you could use it as a real magnifier… I'll have to revisit this.

(3 replies, posted in General)

Hi,
It could be a bug but I can't reproduce it at the moment. There is a "dual save" button at the bottom right, in the common controls panel; is this what you are using? Which version of Kinovea are you running?

Yeah, it's strange. I had seen these AMCap screenshots.

Here is what I get in AMCap and Graphedit, on multiple computers.

https://kinovea.org/screencaps/0.9.1/kayeton-1280x720.png
https://kinovea.org/screencaps/0.9.1/kayeton-1920x1080.png

Gain is also not supported. Exposure handling is sketchy.

(4 replies, posted in General)

Just in case, you can show both the time and frame number simultaneously using the last menu under Options > Time.

Another time format could be "Normalized" time, where the entire video or zone would be remapped to the 0..1 range. Could be useful for comparisons.
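
It would just be a linear remapping over the working zone, something like this sketch:

```csharp
// Sketch: remap a timestamp inside the working zone to the 0..1 range.
public static double NormalizedTime(long time, long zoneStart, long zoneEnd)
{
    return zoneEnd > zoneStart ? (double)(time - zoneStart) / (zoneEnd - zoneStart) : 0.0;
}
```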

Absolute times would be great for line-scan footage, together with a way to calibrate the time span and flow of the columns, and a way to show time coordinates by placing special vertical lines or points on the frames...

Opening a dedicated thread on this camera as it seems fairly popular due to its price and announced specs.

It is based on the Omnivision sensor OV4689, and is advertised as 1920x1080 @ 60 fps, 1280x720 @ 120 fps and 640x360 @ 330 fps, rolling shutter, for around 100€ depending on where you source it from.

It is variously known as KYT-U400-***, RYS HFR USB2.0 Camera, Webcam UVC High Fram Rate USB Camera, Kayeton 330 fps, etc. It is made by Kayeton (Shenzhen). The USB vendor ID is VID=15aa and the product ID is PID=1555 (although I would be surprised if this was a legit ID from USB-IF).
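
If you want to double-check that your unit reports the same IDs, on Windows a quick WMI query can list matching devices (this is just a generic sketch, not Kinovea code):

```csharp
using System;
using System.Management; // Requires a reference to System.Management.

// Sketch: list Plug and Play devices whose device ID contains the Kayeton VID/PID.
class FindKayeton
{
    static void Main()
    {
        string query = "SELECT Name, DeviceID FROM Win32_PnPEntity WHERE DeviceID LIKE '%VID_15AA&PID_1555%'";
        using (var searcher = new ManagementObjectSearcher(query))
        {
            foreach (ManagementObject device in searcher.Get())
                Console.WriteLine("{0} - {1}", device["Name"], device["DeviceID"]);
        }
    }
}
```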

I received a unit a few days ago and so far I'm not impressed…

Issues I have on my camera:
- It cannot be configured to 1920x1080 @ 60 fps, only 50 fps.
- It cannot be configured to 1280x720 @ 120 fps, only 100 fps.
- When configured at 1920x1080 @ 50 fps, it is sending frames at ~49 fps.
- When configured at 1280x720 @ 100 fps, it is sending frames at ~99 fps.
- When configured at 640x360 @ 330 fps, it is sending frames at ~322 fps.
- Auto exposure can be toggled off, but changing the exposure value manually doesn't have any effect.

I'm wondering if I just lost the Shenzhen roulette or if there is a different driver somewhere, or if all shipped units are actually like this. The sales rep on Alibaba is unresponsive.

If you have this camera please report whether you can configure it according to the vendor claims, in any software, thanks.

Note that Kayeton also has a "Global shutter" 1280x720 @ 120 fps camera, a different model, based on an unnamed OV sensor and doing only one resolution/framerate. If you have this one instead please state so.

Thanks

Hi,
You can also use "Copy image to clipboard" and "Paste image from clipboard". But you're right, it doesn't let you change the opacity at the moment.

As a workaround you can get 50% opacity between the two videos by activating the "superposition" option down in the common controls, next to the synchronization button.

My guess is @fajitas' source video isn't MJPEG but H.264, so when asking for a specific time it lands on a non-keyframe, and the output is created with the first frame missing its reconstruction information, possibly with a weird decoding timestamp if the file has B-frames (bi-directional prediction: the info to rebuild the frame is stored in both adjacent keyframes). This trips Kinovea up. In any case I don't see how to turn this random frame into a full keyframe in the output without transcoding somehow...

In the next version there is a way to import a sequence of images as if it was a video, if they are named correctly like image001.png, image002.png, image003.png, etc. FFMpeg should be able to export that. Maybe an avenue to explore.
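
If it helps, I believe something along the lines of `ffmpeg -i input.mp4 image%03d.png` should dump the frames with that naming pattern (adjust the input file name and the number of digits to your case).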