Could you describe the symptoms more? When you go to the camera tab, is there no entry for the camera? Or is there a spot but the thumbnail is empty? Or do you see the thumbnail but when you open it the screen is empty, or black, etc.?
Could you send me the log.txt at joan at kinovea.org, thanks!

512

(36 replies, posted in General)

Hi, yes the PS Eye webcam will only work with 32-bit applications. I don't know of any workaround. At the moment the 32-bit build is broken and it's a bit of a pain to maintain, so the incentive isn't there. So right now there is no plan to add back support for 32-bit. If someone contributes it and it's not a chore to maintain I'll merge it though.

513

(1 replies, posted in General)

Hi,
You can go to menu Tools > Angular kinematics. In the lower right corner there are buttons to export the data to CSV. The first column will be the time in milliseconds and then one column per angle. You can check/uncheck sources to include/exclude angles.
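As a quick sketch of reading that export back, assuming a header row with the time column first and one named column per angle (the column labels below are illustrative, not necessarily the exact ones Kinovea writes):

```python
import csv
import io

# Hypothetical sample in the shape described: time in milliseconds
# first, then one column per angle.
sample = """Time (ms),Knee,Elbow
0,172.5,88.1
8,171.9,90.4
16,170.2,93.0
"""

def load_angles(text):
    """Parse the export into a dict: angle name -> list of (t_ms, degrees)."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    series = {name: [] for name in header[1:]}
    for row in reader:
        t = float(row[0])
        for name, value in zip(header[1:], row[1:]):
            series[name].append((t, float(value)))
    return series

angles = load_angles(sample)
print(sorted(angles))     # ['Elbow', 'Knee']
print(angles["Knee"][0])  # (0.0, 172.5)
```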

Hi, there are multiple framerates to distinguish: the one announced by the vendor on their website, the one exposed by the driver (which can be configured in AMCap, Kinovea, or other DirectShow applications), and finally the one actually sent by the camera.

If the info bar says something like

1280×720 @ 120 fps (MJPG) - Signal: 101.00 fps

Provided the exposure duration is short enough (less than 1/fps), this means the camera isn't actually sending what the driver announces: the driver says 120, the camera sends 101. This seems to be a recurring problem with these modules from Shenzhen.

AMCap doesn't do any dynamic measurement as far as I know, it shows what the driver says. In Kinovea you will also be shown what the driver says in the configuration window, but then while streaming it will count the frames really received from the camera and show you the actual framerate.
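The dynamic measurement can be sketched like this (illustrative only, not Kinovea's actual code): count the frames really delivered over a time window, regardless of what the driver claims.

```python
def measure_fps(grab_frame, clock, window_s=1.0):
    """Count frames really received from the camera over a time window.
    grab_frame() blocks until the next frame; clock() returns seconds."""
    start = clock()
    now = start
    count = 0
    while now - start < window_s:
        grab_frame()
        count += 1
        now = clock()
    return count / (now - start)

# Simulated camera: the driver claims 120 fps but frames arrive at 101 fps.
t = [0.0]
def clock():
    return t[0]
def grab_frame():
    t[0] += 1 / 101

print(round(measure_fps(grab_frame, clock)))  # 101
```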

Does the camera not work at all in 0.9.1 or does it work with this framerate discrepancy?

515

(36 replies, posted in General)

The threshold should floor at 60 fps, can you really set it to 30?
To be clear, this mechanism is only used by the capture screen. When the video opens in playback, what does the header above the screen say? It should say 24 fps. If both files show 24 fps in the header, the composite video should also be 24 fps. I'll double check the behavior when the files don't have the same framerate; it's possible some trickery makes it fall back to 100 fps for some reason.

516

(36 replies, posted in General)

Thanks, I think I can reproduce both issues. The Synchronize button is definitely broken for the right video; this looks systematic, collateral damage of recent changes in this area. Will fix asap.

I can also reproduce the auto playback issue by manually recording each camera to force a gap in file creation times. The dual playback needs to be made more robust. In theory when the late video starts it should force the other one to move back and they should continue in sync. This seems to work when done manually but not when the videos are started automatically.

There is also a third issue you might encounter from time to time: both videos start replaying but aren't properly locked, and one of them starts to drift away after a few iterations.

517

(36 replies, posted in General)

inorkuo wrote:

i'm still not sure what the threshold and replacement settings do but I will play around with it.

Typically when capturing with a high speed camera sending, say, 300 fps, we don't write 300 fps in the final file but something more reasonable like 30 fps, so the player can read it without trying to decode at 300 fps, which would be too intensive. Physical high speed cameras and phones do the same when saving.

The first line, "Framerate replacement threshold", is the value above which this behavior kicks in. By default it's 150 fps. Anything under this keeps its value in the final file; anything above is replaced. I had a hard time with the wording; it's hard to convey the meaning succinctly.

The second line, "Replacement framerate" is the actual framerate that will be written in the video metadata.

The old behavior was that anything above 100 fps was silently converted to 30 fps.
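The rule can be sketched as follows (default values from the description above; a minimal illustration, not Kinovea's actual code):

```python
def file_framerate(capture_fps, threshold=150.0, replacement=30.0):
    """Framerate written into the file metadata: below the threshold the
    true capture rate is kept, above it the replacement rate is used so
    players don't have to decode at the full capture rate."""
    return capture_fps if capture_fps <= threshold else replacement

print(file_framerate(120))  # 120, kept as-is
print(file_framerate(300))  # 30.0, replaced
```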

inorkuo wrote:

there are a few more issues I've run across. in dual playback mode, the "synchronize videos on the current frames" does not work.

Can you describe how the problem manifests itself? What this is supposed to do is create a "time origin" point in each video at its respective current location. Then synchronization works based on these time origins.

- When you hit the button, does it change the timing in individual videos (they should now have negative times before the sync point)?
- Does it work if you move the videos to the desired point and set up each individual video instead, using "Mark current time as time origin" (button or right click in each video)?

inorkuo wrote:

the last issue is with automatic playback with two playback screens, when new videos are created, often only one will play and I have to manually press play to get both to play together. it seems that there is a correlation to the difference in the length of the two videos. I have the "stop recording by duration" set to 3 seconds but sometimes, one video will be 3.1s and the other will be 2.5s. it seems that when the difference is small, both videos will play. when the difference is larger, only one will play.

Are the cameras captured to the same folder? This is not supported in replay at the moment, as it just looks for the most recent file; they should be saved to different folders and each replay screen should be pointed to its respective folder. But even then both screens should load the same video and start playback, so I'm not sure what's going on.

One video being 2.5 s indicates another issue: either there are frame drops or the camera isn't sending a true 120 fps.

You can add the bug on GitHub and add the screenshots there. Most probably there is a reset of the scaling, so the video and drawings are exported at the original video size, and it's missing a revert to the current scale based on the video fitting the screen space or other custom zooming.

Ah yes, the first idea should totally be doable. Even by default, it feels like if there is a key image sitting exactly at the time origin it should be framed in red to match the color scheme and the tick mark in the timeline.

Yes, this would be nice to have. At the moment the contour is hardcoded to white, but this should be feasible; each spotlight item already has some information stored in the KVA file, like center and radius.

Yeah, backward playback (and even stepping) is surprisingly complicated. The issue is that most video codecs are optimized for forward playback: the content of a given image is an incremental update over previous frames. So to display a past frame we usually need to go back an unknown number of frames to find the previous keyframe, and then decode forward until we reach the target frame.

At the moment there are a few strategies in Kinovea to alleviate this:
1. Use an intra-frame codec by default for anything saved by the program. This makes it easier to later step backward/forward, as every frame is a keyframe.
2. Keep a small cache of images in memory while playing forward, with frames both before and after the current point. This way we can at least go back a few images without seeking. This also helps smooth forward playback at normal speed.
3. When the working zone fits entirely in memory, all the frames are cached, which makes random access fast again. In fact if you do this you will see that the menu Video > Reverse is active and you can switch the entire video to backward playback.
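As illustrative arithmetic, here is why strategy 1 helps: with a keyframe every N frames, displaying an arbitrary past frame costs up to N decodes, while with an intra-frame codec every frame is a keyframe and the cost is always 1.

```python
def backward_step_cost(target, keyframe_interval):
    """Frames that must be decoded to display frame `target`, when every
    keyframe_interval-th frame (0, N, 2N, ...) is a keyframe: seek back to
    the nearest keyframe, then decode forward up to the target."""
    keyframe = (target // keyframe_interval) * keyframe_interval
    return target - keyframe + 1

print(backward_step_cost(199, 100))  # 100: seek to frame 100, decode 100 frames
print(backward_step_cost(199, 1))    # 1: every frame is a keyframe
```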

Hi,
There is a keyboard shortcut for this: CTRL+LEFT arrow and CTRL+RIGHT arrow.
For quickly scanning a long video there are also Page Up and Page Down shortcuts, which jump by increments of 10% of the video's length.

523

(36 replies, posted in General)

This is Kinovea 0.9.1.
This version introduces capture-and-replay automation, improves capture performance, especially for delayed video, and adds many other improvements. This version requires .NET 4.8.

 

Many thanks to everybody who helped with testing this version. Special thanks to rkantos, Faultyclubs, and Reiner Hente for their patience in spite of my continuous requests for testing buggy builds.

Plenty of changes aren't described here; please consult the full changelog for details.

 
1. Capture automation

We can now trigger recording based on microphone volume, and stop recording after a specific duration. This enables a hands-free, continuous recording workflow.

http://www.kinovea.org/screencaps/0.9.1/091-capture-automation2.png

Recording footage around an event of interest is natively supported with the existing delay feature. For example, say we are filming a golf swing and want to capture from 3 seconds before impact to 2 seconds after it. We set the video delay to 3 seconds in the capture screen, and set "stop recording by duration" to 5 seconds. When the club impact triggers the recording, it will start saving the video stream from 3 seconds ago.
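The arithmetic behind the golf example, as a tiny sketch (setting names are illustrative, not the actual preference keys):

```python
def pre_post_settings(pre_s, post_s):
    """Record from pre_s seconds before the trigger to post_s seconds
    after it: delay the displayed stream by the pre-trigger time, and
    stop recording after the total duration."""
    return {"delay_s": pre_s, "stop_recording_after_s": pre_s + post_s}

print(pre_post_settings(3, 2))  # delay 3 s, stop after 5 s
```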

When using multiple instances of Kinovea, each instance now has a deterministic name. By default it will be a number in sequence but you can also use the `-name` argument on the command line for full control. Each instance can use its own preferences file. This is useful to create advanced setups, for example having an instance dedicated to capture and another to replay, or for instrumenting Kinovea from other programs. Multiple Kinovea instances can listen to the same microphone for synchronized recording by audio trigger.

> kinovea.exe -name replay

http://www.kinovea.org/screencaps/0.9.1/091-naming.png

We can also run a script on the resulting file after the capture is complete. This can be useful to copy the file somewhere else or process it further.

 
2. Capture performance

A lot of care went into the performance of delayed capture, and it should now be almost on par with real-time capture. You still need to toggle the option under Preferences > Capture > Recording.

The act of compressing the images for storage is usually the main bottleneck when recording with the typical cameras used in Kinovea (high-end webcams and machine vision cameras). We can now bypass this compression step entirely and record uncompressed videos. Be mindful that uncompressed videos take a lot of storage space. This option is under Preferences > Capture > General.

Modern storage options like SSDs, NVMe drives or RAM disks all have higher bandwidth than the camera's USB link on the other side, so hopefully whatever the camera can send to the PC can be recorded without drops. The simulator camera and the infobar above the capture area can be used to diagnose issues.

http://www.kinovea.org/screencaps/0.9.1/091-captureinfobar.png

On top of recording uncompressed videos, we can now record "raw" video streams if the camera supports it. This records the raw sensor images: grayscale, with color implicitly encoded in a Bayer grid pattern. The player has a new option to rebuild color images from raw files, under menu Image > Demosaicing. The advantage is that the storage bandwidth is only that of a grayscale video, which cuts requirements by a factor of 3.
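Back-of-the-envelope bandwidth figures for the uncompressed options above (assuming 3 bytes per pixel for color and 1 byte per pixel for raw Bayer):

```python
def write_bandwidth_mb_s(width, height, bytes_per_pixel, fps):
    """Sustained write bandwidth needed for uncompressed recording, in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

print(round(write_bandwidth_mb_s(1280, 720, 3, 120)))  # 332 (color, 720p @ 120 fps)
print(round(write_bandwidth_mb_s(1280, 720, 1, 120)))  # 111 (raw Bayer: 3x less)
```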

http://www.kinovea.org/screencaps/0.9.1/091-debayering3.png

 
3. Replay folders

This is a new concept, completing support for a fully hands-free capture-and-replay workflow. In this mode a playback screen is associated with an entire folder and any new video file created in this folder, usually by the capture module, will be instantly loaded and start playing.

http://www.kinovea.org/screencaps/0.9.1/091-openreplayobs.png

Typically we would use this within a single instance of Kinovea, but since it is based on the file system, we can also have a separate instance of Kinovea dedicated to replay. It should even be possible to put the replay instance on a different machine on the network, copying over the captured files with a post-recording command.

http://www.kinovea.org/screencaps/0.9.1/091-replayobserver.gif

In the above screencast, the left screen is a camera filming the stopwatch. The right screen is open using a replay folder observer on the folder where the captured videos are saved. In this case the capture was configured to stop by itself after 2 seconds. As soon as the capture is completed, the playback automatically starts in the other screen.

 
4. Time origin and relative clock

Many analysis scenarios involve a specific moment within the video to which everything else is related: a golfer's club-ball impact, a baseball pitcher's release point, a long jumper's take-off, the start of a race, etc. We can now navigate to this precise moment and mark it as the zero point, the origin of all times for the clip. Every other moment is then relative to this origin, using negative time before the event and positive time after it.
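The relative clock is then just the offset from that origin (times in milliseconds for this sketch):

```python
def relative_ms(t_ms, origin_ms):
    """Time displayed relative to the marked origin: negative before
    the event, positive after it."""
    return t_ms - origin_ms

print(relative_ms(2400, 3000))  # -600: 600 ms before the event
print(relative_ms(3250, 3000))  # 250: 250 ms after it
```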

http://www.kinovea.org/screencaps/0.9.1/091-timeorigin.png

A new simple clock tool lets you see relative time directly on the image.

http://www.kinovea.org/screencaps/0.9.1/091-relativeclock.gif

 
5. Annotation importers

We can now import .srt subtitles and OpenPose keypoint files.

OpenPose is a deep-learning software stack for human posture recognition. The output of OpenPose's 25-point body model is automatically imported into a dedicated custom tool in Kinovea. At this point this is not meant for measurements but rather for general posture assessment.

http://www.kinovea.org/screencaps/0.9.1/091-openpose.gif

 

Thanks!
Don't hesitate to post feedback, questions, feature requests, or bug reports, either in this thread or in dedicated threads.

524

(9 replies, posted in Bug reports)

Merged!
Super thanks!

525

(9 replies, posted in Bug reports)

Yeah, before the nud it was really hard to set small values precisely; that's why the slider is logarithmic. I guess this is no longer really relevant now, so yes, it could use a linear slider instead. It could also be an option. The scenario for very small values is matching two cameras that have different capture latencies.

I still think it should be in seconds though. Internally everything is in frames, but from a user's point of view I don't think frames make sense for the general concept of delay; what is the scenario where you think about delay in frames? Also, for pre/post recording, like recording for x seconds before and after a trigger event, it's natural to have this in seconds.
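For reference, a logarithmic slider mapping can be sketched like this (the ranges are illustrative, not Kinovea's actual values): most of the slider travel goes to small delays, which is what made sub-second values easy to set before the nud existed.

```python
def slider_to_delay(pos, min_s=0.01, max_s=10.0):
    """Map a slider position in [0, 1] to a delay in seconds on a log
    scale: half the travel covers 0.01 s to ~0.32 s."""
    return min_s * (max_s / min_s) ** pos

print(round(slider_to_delay(0.0), 2))  # 0.01
print(round(slider_to_delay(0.5), 2))  # 0.32
print(round(slider_to_delay(1.0), 2))  # 10.0
```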