Yes, this would be nice to have. At the moment the contour is hardcoded to white, but this should be feasible: each spotlight item already has some information stored in the KVA file, like its center and radius.

Yeah, backward playback (and even backward stepping) is surprisingly complicated. The issue is that most video codecs are optimized for forward playback, and the content of a given frame is stored as an incremental update over a previous keyframe. So to display a past frame we usually need to go back an unknown number of frames to find that keyframe, and then decode forward until we reach the target frame.

At the moment there are a few strategies in Kinovea to alleviate this:
1. Use an intra-frame-only codec by default for anything saved by the program. This makes it easier to step backward/forward later, as every frame is a keyframe.
2. Keep a small cache of images in memory while playing forward, with frames both before and after the current point (see the sketch below). This way we can at least go back a few images without seeking. It also helps smooth forward playback at normal speed.
3. When the working zone fits entirely in memory, all the frames are cached, which makes random access fast again. In fact, if you do this you will see that the menu Video > Reverse becomes active and you can switch the entire video to backward playback.
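As an illustration of strategy 2, here is a minimal sketch of the idea (not Kinovea's actual implementation; the types and names are just for the example). Keeping a handful of recently decoded frames lets us step back without seeking to a keyframe and re-decoding.

```csharp
using System.Collections.Generic;

// Minimal sketch of a cache of recently decoded frames, keyed by timestamp.
public class RecentFrameCache
{
    private readonly int capacity;
    private readonly SortedDictionary<long, byte[]> frames = new SortedDictionary<long, byte[]>();

    public RecentFrameCache(int capacity)
    {
        this.capacity = capacity;
    }

    // Called as frames are decoded during forward playback.
    public void Add(long timestamp, byte[] image)
    {
        frames[timestamp] = image;

        // Evict the oldest frame once we exceed the capacity.
        if (frames.Count > capacity)
        {
            long oldest = 0;
            foreach (long key in frames.Keys) { oldest = key; break; }
            frames.Remove(oldest);
        }
    }

    // Returns true if the requested frame is still cached,
    // in which case no seek/decode is needed to display it.
    public bool TryGet(long timestamp, out byte[] image)
    {
        return frames.TryGetValue(timestamp, out image);
    }
}
```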

Hi,
There is a keyboard shortcut for this: CTRL+LEFT arrow and CTRL+RIGHT arrow.
For quickly scanning a long video there are also the Page Up and Page Down shortcuts, which jump in increments of 10% of the video length.


(36 replies, posted in General)

This is Kinovea 0.9.1.
This version introduces capture-and-replay automation, improves capture performance (especially for delayed video), and adds many other improvements. It requires .NET 4.8.

 

Many thanks to everybody who helped with testing this version. Special thanks to rkantos, Faultyclubs, and Reiner Hente for their patience in spite of my continuous requests to test buggy builds tongue

Plenty of changes aren't described here; please consult the full changelog for details.

 
1. Capture automation

We can now trigger recording based on microphone volume, and stop recording after a specific duration. This enables a hands-free, continuous recording workflow.

http://www.kinovea.org/screencaps/0.9.1/091-capture-automation2.png

Recording footage around an event of interest is natively supported with the existing delay feature. For example, say we are filming a golf swing and want to capture from 3 seconds before impact to 2 seconds after it. We set the video delay to 3 seconds in the capture screen, and set "stop recording by duration" to 5 seconds. When the club impact triggers the recording, it will start saving the video stream from 3 seconds ago.
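In other words, with a delay of d seconds and a stop-after duration of T seconds, the saved clip roughly covers the interval [trigger − d, trigger + (T − d)]; in the golf example that is [−3 s, +2 s] around the impact.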

When using multiple instances of Kinovea, each instance now has a deterministic name. By default it will be a number in sequence, but you can also use the `-name` argument on the command line for full control. Each instance can use its own preferences file. This is useful for creating advanced setups, for example having an instance dedicated to capture and another to replay, or for instrumenting Kinovea from other programs. Multiple Kinovea instances can listen to the same microphone for synchronized recording by audio trigger.

> kinovea.exe -name replay

http://www.kinovea.org/screencaps/0.9.1/091-naming.png

We can also run a script on the resulting file after the capture is complete. This can be useful to copy the file somewhere else or process it further.

 
2. Capture performance

A lot of care went into the performance of delayed capture, and it should now be almost on par with real-time capture. You still need to toggle the option under Preferences > Capture > Recording.

The act of compressing the images for storage is usually the main bottleneck when recording with the typical cameras used in Kinovea (high-end webcams and machine vision cameras). We can now bypass this compression step entirely and record uncompressed videos. Be mindful that uncompressed videos take a lot of storage space. This option is under Preferences > Capture > General.
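To give an order of magnitude (hypothetical numbers): a 1920×1080 stream at 100 fps in 24-bit RGB is about 1920 × 1080 × 3 bytes × 100 ≈ 622 MB/s of raw data, so plan the target drive accordingly.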

Modern storage options like SSD, NVMe or RAMDisks all have higher bandwidth than the USB link of the camera on the other side, so hopefully whatever the camera can send to the PC can be recorded without drops. The simulator camera and the infobar above the capture area can be used to diagnose issues.

http://www.kinovea.org/screencaps/0.9.1/091-captureinfobar.png

On top of recording uncompressed videos, we can now record "raw" video streams if the camera supports it. This records the raw sensor images: grayscale, with color implicitly encoded in a Bayer grid pattern. The player has a new option to rebuild color images from raw files, under menu Image > Demosaicing. The advantage is that the storage bandwidth is only that of a grayscale video, which cuts requirements by a factor of 3.
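Continuing the hypothetical numbers above, the same 1920×1080 stream at 100 fps recorded as 8-bit Bayer data is about 1920 × 1080 × 1 byte × 100 ≈ 207 MB/s instead of ≈ 622 MB/s, hence the factor of 3.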

http://www.kinovea.org/screencaps/0.9.1/091-debayering3.png

 
3. Replay folders

This is a new concept, completing support for a fully hands-free capture-and-replay workflow. In this mode a playback screen is associated with an entire folder, and any new video file created in this folder (usually by the capture module) is instantly loaded and starts playing.

http://www.kinovea.org/screencaps/0.9.1/091-openreplayobs.png

Typically we would use this within a single instance of Kinovea, but as it is based on the file system, we can also have a separate instance of Kinovea dedicated to replay. It should even be possible to put the replay instance on a different machine on the network, copying over the captured files using a post-recording command.
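Conceptually, this is the kind of mechanism at work. Below is a minimal sketch using .NET's FileSystemWatcher, not Kinovea's actual code; the folder path and file filter are placeholders.

```csharp
using System;
using System.IO;

public static class ReplayFolderSketch
{
    public static void Main()
    {
        // Watch the capture output folder for new video files (placeholder path and filter).
        var watcher = new FileSystemWatcher(@"D:\Captures", "*.mp4");

        // When the capture module finishes writing a new file, react to it.
        watcher.Created += (sender, e) =>
        {
            Console.WriteLine("New capture detected: " + e.FullPath);
            // A real replay observer would wait for the file to be fully written,
            // then load it in the playback screen and start playing it.
        };

        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching for new captures. Press Enter to stop.");
        Console.ReadLine();
    }
}
```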

http://www.kinovea.org/screencaps/0.9.1/091-replayobserver.gif

In the above screencast, the left screen is a camera filming the stopwatch. The right screen is open using a replay folder observer on the folder where the captured videos are saved. In this case the capture was configured to stop by itself after 2 seconds. As soon as the capture is completed, the playback automatically starts in the other screen.

 
4. Time origin and relative clock

Many analysis scenarios involve a specific moment within the video that everything else is related to. A golfer's club-ball impact, a baseball pitcher's release point, a long jumper's take-off, the start of a race, etc. We can now navigate to this precise moment and mark it as the zero point, the origin of all times for the clip. Every other moment will now be relative to this origin, using negative time before the event and positive time after it.
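For example, with the origin set on the impact frame at 12.500 s of absolute video time, a frame at 12.100 s reads −0.400 s and a frame at 13.200 s reads +0.700 s (relative time = absolute time − origin time).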

http://www.kinovea.org/screencaps/0.9.1/091-timeorigin.png

A new simple clock tool lets you see relative time directly on the image.

http://www.kinovea.org/screencaps/0.9.1/091-relativeclock.gif

 
5. Annotation importers

We can now import .srt subtitles and OpenPose keypoint files.

OpenPose is a deep-learning software stack for human posture recognition. The result of OpenPose's 25-point body model is automatically imported into a dedicated custom tool in Kinovea. At this point this is not meant to be used for measurements, but rather for general posture assessment.

http://www.kinovea.org/screencaps/0.9.1/091-openpose.gif

 

Thanks!
Don't hesitate to post feedback, questions, feature requests, or bug reports, either in this thread or in dedicated threads.


(9 replies, posted in Bug reports)

Merged!
Super thanks big_smile


(9 replies, posted in Bug reports)

Yeah, before the nud (numeric up/down field) it was really hard to set small values precisely; that's why the slider is logarithmic. This is probably no longer really relevant, so yes, it could use a linear slider instead. It could also be an option. The scenario for very small values is matching two cameras that have different capture latencies.

I still think it should be in seconds though. Internally everything is in frames, but from a user point of view I don't think frames make sense for the general concept of delay; what is the scenario where you think about delay in frames? Also, for the case of pre/post recording, like recording for x seconds before and after a trigger event, it's natural to have this in seconds.
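(For reference, the conversion is trivial either way: delay_frames ≈ round(delay_seconds × camera_fps), e.g. 0.5 s at 120 fps is 60 frames.)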

Thanks for the feedback. Yeah, it's always hard to know which features people are actively using when I want to simplify the architecture, and sometimes I only get the feedback once something is removed hmm

I want to keep the toggle between "fading in/out" and "always visible" as direct as possible. So I'm thinking of a Visibility menu with 3 sub-options: always visible, default fading and custom fading. And then a fourth menu entry opening a dialog box to configure the custom fading. In this dialog we need to be able to change the duration of the opaque section and the duration of the fading sections on each side of it. The old option had everything in a single dialog, so you were presented with all the complexity even when you just wanted to switch to always visible. I think it will be clearer and more usable to separate the typical cases from the "advanced" cases.


(9 replies, posted in Bug reports)

Oh this is great! smile

I reproduced the problem and pushed a fix, let me know if it works for you.

The goal of this piece of code is to set the timer refresh rate for display, and the only thing we really want is to avoid it being too high. For example, if this is a high-speed camera running at 500 fps, we don't want the burden of refreshing the UI at that rate; it would compete for computer resources with the recording, which actually *needs* to happen at 500 fps. So it needs to stay `min()`. In some cases though, the camera framerate can't be known in advance; this is the case for IP cameras (for now, until maybe ONVIF is implemented or something). In that case we just ignore the camera framerate altogether (it is unknown at this point anyway) and use the configured display framerate or the monitor framerate.
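In simplified form, the intent is something like this (a sketch of the logic, not the exact Kinovea code; the method and parameter names are made up):

```csharp
using System;

public static class DisplayRate
{
    // Cap the UI refresh rate at the configured display rate so a 500 fps camera
    // doesn't force 500 UI repaints per second; if the camera framerate is unknown
    // at connection time (e.g. IP cameras), fall back to the display rate alone.
    public static double GetDisplayFramerate(double cameraFps, double configuredDisplayFps)
    {
        if (cameraFps <= 0)
            return configuredDisplayFps;

        return Math.Min(cameraFps, configuredDisplayFps);
    }
}
```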

The `cameraGrabber.Framerate` value comes from the camera modules (FrameGrabber.cs in Kinovea.Camera.HTTP for example).

`pipelineManager.Frequency` is only going to be valid after some frames have been received, so it shouldn't be used during connection.

Thanks!

Hi,
A number of drawings are "owned" by the keyframe they are added to, and others are video-level objects. The following are video-level: stopwatches, trajectories, spotlights, auto-numbers. Some other tools are special like the coordinate system and magnifier.

Yeah, the persistence/fading needs some work. It was simplified at some point to make the code easier to manage, but now it's a bit lacking in features. Drawings use whatever option was set in the preferences when they were added, and they are stored with that value, so you can't really change it afterwards.

Internally the fading in/out model is more capable though: it can fade over any number of frames around the insertion point, or even have a period of time where the drawing is fully opaque and only fade around those boundaries. This is used, for example, in the next version to support subtitle import. The issue at the moment is one of user interface: I need to find a good way to expose these features that isn't as confusing as the dialog box there once was.
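For what it's worth, that model can be pictured as a simple opacity function of the distance to the insertion frame. This is only a sketch of the idea, not Kinovea's code, and the parameter names are made up:

```csharp
using System;

public static class FadingSketch
{
    // The drawing is fully opaque within `opaqueFrames` of its insertion frame,
    // fades out linearly over `fadingFrames` on each side, and is invisible beyond that.
    public static double GetOpacity(long frame, long insertionFrame, long opaqueFrames, long fadingFrames)
    {
        long distance = Math.Abs(frame - insertionFrame);

        if (distance <= opaqueFrames)
            return 1.0;

        if (distance <= opaqueFrames + fadingFrames)
            return 1.0 - (double)(distance - opaqueFrames) / fadingFrames;

        return 0.0;
    }
}
```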

Edit: you're right, the spotlight is a special case; it uses a hardcoded value and doesn't honor the fading option.

No, this is for the next version.

This was changed recently for the next version. There is now a concept of time origin that you can set in each video independently with a single click, and the synchronization will be done using the time origins of each video. I think it will make things clearer/easier. The time origin will also be saved in the KVA file so you won't have to find it again when comparing that specific video with another one.

PS: necroposting is fine on this forum smile


(1 replies, posted in General)

Hi,
At the moment it is hardcoded. And yes, the bitrate settings are very aggressive, to minimize any loss of information when saving and to best support the archival use case (e.g. splitting a long session into smaller chunks). It's the first time I hear of a player complaining because the bitrate is too high though… What player is it?

I've kind of tried to stay away from opening this particular can of worms :-). Codec options can become complex fast in terms of UI, and this complexity trickles down into the code. Maybe having just two options would work, one for archival and one for web/presentation export? Maybe it could even be selected automatically based on the option to paint the drawings onto the images (for archival this should never be done).

For the shortcut issue, after thinking more about it, I think the best way would be to use the existing distinction of showing/hiding the common controls. There is even the F5 shortcut to toggle it already. If the common controls are hidden, the videos should really behave as two completely independent videos.

permanently lock dual video playback in their synchronized state based on the working zone starting points

I am not sure what you mean by that, can you clarify the intent?


(9 replies, posted in Bug reports)

Hi,
In addition to the configured framerate and the display framerate, there is also the actual, received framerate. It's possible that Kinovea is not receiving frames at the correct speed for whatever reason (low light, network conditions, the camera is lying, etc.). There should be a "Signal" field in the infobar that shows up after a while; this shows the received framerate.

The way recording with delay works has completely changed in the next version, so we'll see if that fixes your particular problem. It will now always save at the camera's configured framerate; the display framerate is not used anymore.

It's possible the issue is that the stream is received at 30 but Kinovea wasn't able to record it to disk fast enough unless you force it down to 21. Or it's possible the stream is received at 21 in the first place. The "Signal" value should give the answer to that. Another check I use is to pause the stream and navigate through the recent frames with the delay slider, to see if the stopwatch matches what the delay says.

Based on your description it looks like the actual received framerate is 21, even before attempting to record to disk. Unless the camera is lying, this should be fixable in other ways (most probably by adding more light, to avoid triggering auto-framerate by way of auto-exposure).
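For what it's worth, the consequence of the mismatch is easy to quantify: frames actually captured at 21 fps but tagged as 30 fps in the file play back about 30 / 21 ≈ 1.4× faster than real time, which is exactly what the playback-side fix below compensates for.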

For this case of the camera not sending what it's configured to, currently the best way to fix it is at playback time, by going to menu Video > Configure timing and fixing the framerate there. In the future I'll also look into adding an option to save the "received framerate" in the video, but ideally this would be a per-camera option saved in the preferences, as the goal is to work around cameras that lie about their framerate. As the release of the next version is coming soon, I pushed this to the roadmap for the following one.


(3 replies, posted in General)

To get more screen real estate you can explicitly collapse the key image panel with the arrow on the right. You can also switch to full screen with F11 (and revert to normal with F11 as well).

I agree it would be nice if the magnifier showed the drawings, and also the cursor. Then you could use it as a real magnifier… I'll have to revisit this.