That's pretty cool! Thanks for posting this.

I didn't know about the Palette Gear, this is super interesting, I love the modularity of it!

I missed your previous message. It is interesting to know that the thumbnail works but the video feed doesn't. The thumbnail uses a slightly simpler way to capture the image, but the fact that it works means it should be possible to make the camera work natively in Kinovea.

Could you send me the log file corresponding to trying to open the camera directly in Kinovea?

Hi,
Which version of Kinovea? Can you try on 0.8.25?
When you go into the camera parameters from inside the capture screen, can you list the various image formats and pick a different one? I've noticed that sometimes the default parameters read from the camera don't correspond to a meaningful combination, and a black screen results. Picking a configuration among the exposed options usually fixes the issue.

739

(6 replies, posted in Cameras and hardware)

I don't have any experience with this camera. The global shutter and 1224x1024 @ 145 fps definitely sound interesting on paper, if it delivers. They mention "DirectShow" compatibility, so yes, it should work with Kinovea.

I didn't know about this vendor; they have several other interesting listings. It's always hard to know whether they are the actual manufacturer or just rebranding. They seem to be present on the usual Shenzhen online markets like Aliexpress, which helps to get an idea of pricing, but this specific model is not listed there yet.

I think these guys have the same unit under a different brand. Too bad none of them list the actual imaging sensor, or we could have reviewed the claims.

740

(6 replies, posted in Cameras and hardware)

In my experience the camera actually delivers 100fps when set to 120fps. That's the rate at which Kinovea receives the frames anyway.

I'll have to check, but I think it's possible amcap assumes the camera is faithful and writes the requested framerate into the file's metadata even if it's inaccurate.

Can you film a stopwatch with amcap and verify that the video's timekeeping is correct compared to the physical stopwatch?
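To make the suggested check concrete, here is a minimal sketch of the arithmetic behind it. `effective_fps` is a hypothetical helper, not part of Kinovea or amcap: count the frames in the clip and divide by the real time that elapsed on the filmed stopwatch.

```python
def effective_fps(frame_count, stopwatch_elapsed_s):
    """Capture rate implied by the filmed stopwatch: frames recorded
    divided by the real time that elapsed on the stopwatch face."""
    return frame_count / stopwatch_elapsed_s

# A clip labeled 120 fps that contains 1200 frames while the filmed
# stopwatch advanced 12.0 s was really captured at 100 fps.
print(effective_fps(1200, 12.0))  # 100.0
```

If the result differs from the framerate written in the file, the camera is not delivering what it advertises.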

Yeah, after thinking about it I came to the conclusion that this was your use case: replaying a high-speed video at its real-time speed.

Currently there is no satisfying way to do it. Tweaking the reference video framerate to increase the range of the speed slider is a workaround based on a side effect; I don't think it's a good idea to overload this option with a role completely different from what it was intended for.

The issue is that if you want to replay, say, a 1000 fps video at real-time speed with the current player, it will need to decode and display those 1000 frames every second. It will most likely fail, which to me means this use case requires a different playback approach altogether.

I very much like the idea of being able to go way higher in speed. If we can do 10x we should be able to do 100x, because at that point it wouldn't be limited by computer performance.

This would be very useful to quickly scan through a long video, and it would provide timelapse out of the box, which I think has unexplored applications in sports, for analyzing cyclic motions or long-term trends in posture.

So, I would say:
1. A new playback approach for very high framerates, most likely seeking forward in the file instead of decoding frame by frame.
2. A logarithmic speed slider, so you have fine control for slow motion but can still reach 100x speed at the other end.
3. An increased size of the speed slider (this is limited by the size of the screen in the dual-playback configuration at the minimum supported monitor resolution).
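The seek-forward idea in point 1 boils down to picking which source frames to show and skipping the rest. A minimal sketch of that frame selection, with a hypothetical helper name (this is not Kinovea code):

```python
def frames_to_seek(capture_fps, speed, display_fps, duration_s):
    """Source frame indices to seek to when replaying a high-speed
    clip at `speed` x real time on a `display_fps` monitor. Every
    frame in between is skipped rather than decoded."""
    shown = int(duration_s * display_fps)
    step = capture_fps * speed / display_fps
    return [round(i * step) for i in range(shown)]

# 1000 fps clip, 1x real time, 50 Hz display: show every 20th frame.
print(frames_to_seek(1000, 1.0, 50, 0.1))  # [0, 20, 40, 60, 80]
```

The decoder only ever has to produce `display_fps` frames per second, regardless of the clip's capture rate, which is why this approach would scale to 100x where frame-by-frame decoding cannot.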

The first few times I used the program, I adjusted the video framerate and couldn't figure out why certain things were happening in my playback window.

Hmm, changing the reference video framerate should be a very rare action, needed only when the video file is broken: incorrect information written into it makes it play back faster or slower than expected. Maybe there is another use case, but that's the original reason for adding this option. In particular, it should not be used to emulate slow or fast motion. It's a pretty recent addition.

So this is something the user would do once, usually right after opening the video and discovering it doesn't play at real-time speed at 100%. At that point the video is considered broken and all bets are off. If we let the capture speed stay at its previous value it would be wrong for the primary use case of that option (a non-high-speed video with incorrect information in it).

For a high-speed video the reference framerate written in the file already doesn't correspond to real time, so I'm not sure why it would ever need to be changed. The scenario would be a high-speed camera not capturing at the advertised framerate, but even then you would just change the capture framerate, not the reference video framerate.
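To illustrate why the two framerates play different roles, here is the basic arithmetic relating them. The helper names are made up for the example; only the formulas matter:

```python
def slowdown_factor(capture_fps, reference_fps):
    """How much slower the clip plays than real time at 100% speed:
    the ratio of the real capture rate to the rate stamped in the file."""
    return capture_fps / reference_fps

def playback_duration_s(frame_count, reference_fps):
    """Duration of the clip as the player sees it, driven entirely by
    the reference framerate written in the file."""
    return frame_count / reference_fps

# 2 s of action captured at 1000 fps but stamped 25 fps in the file:
frames = 2 * 1000
print(slowdown_factor(1000, 25))        # 40.0 (plays 40x slower)
print(playback_duration_s(frames, 25))  # 80.0 seconds on screen
```

Changing the capture framerate fixes the real-time mapping without touching how the file decodes; changing the reference framerate alters decoding itself, which is why it should stay reserved for broken files.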

My feeling is that this video framerate option is too easy to tamper with relative to its role. Maybe the menu should be named "Advanced video timing options" or something.

Unfortunately not directly, the way it's possible for the trajectory tool.

The only way currently is to change the global settings under Options > Preferences > Drawing > Tracking > Default tracking parameters. You could add a trajectory to find the optimal parameters for the video and then set this up so that other trackable objects use it when their tracking is enabled.

That is a great idea :)

I agree, and I think it should work like this (except that the first-to-finish video wraps and waits on the first frame instead of the last, because it was much easier to implement that way :-)). Sometimes the algorithm is thrown back and misses the slot, though.

The entire thing uses real time in the common slider and translates real time to local video time (taking slow motion and capture speed into account) to seek to the correct point in each video. The correct behavior of this seek unfortunately depends on file formats. I have tested a few of these scenarios, but yeah, there is definitely room for improvement in this regard.
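The real-time-to-video-time translation mentioned above can be sketched in one line. This is an illustrative simplification under the assumption of a single capture/reference framerate pair, not the actual Kinovea implementation:

```python
def real_to_video_time_s(real_time_s, capture_fps, reference_fps):
    """File position (video time) for a point on the common real-time
    slider. One real second captured at capture_fps yields capture_fps
    frames, which span capture_fps / reference_fps seconds of video."""
    return real_time_s * capture_fps / reference_fps

# Real second 2.0 of an event captured at 600 fps, stamped 30 fps:
print(real_to_video_time_s(2.0, 600, 30))  # 40.0
```

Each video in the comparison applies this mapping with its own framerate pair, which is how clips captured at different rates stay aligned on the common slider.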

For the scenario you describe you should unlock the speed sliders from the preferences, in Playback > General. This will let you match the durations of arbitrary sequences. This is requested more often than I anticipated; it seems it would be better as a button directly in the common-controls UI instead of hidden in the preferences.

edit: I agree it is currently much harder than it should be to perfectly match the duration of one performance in one video to the other, if you want to compare *forms* without regard to timing, as in your example of comparing two swings performed at different speeds. Currently you need to tweak the slow motion in one of the videos until it seems to cover the same duration.

A two-point synchronization system would mitigate that. You would mark the start and end of the movement in each video, and time would be remapped so that these sections have the same duration in both. It would still be a linear mapping, so the rest of the algorithm shouldn't change.
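The two-point remapping described above is just a linear interpolation between the marked sections. A minimal sketch, with hypothetical parameter names:

```python
def remap_time(t, a_start, a_end, b_start, b_end):
    """Linear two-point remapping: map time t in video A, where the
    movement spans [a_start, a_end], to the matching time in video B,
    where the same movement spans [b_start, b_end]."""
    u = (t - a_start) / (a_end - a_start)  # progress through the movement
    return b_start + u * (b_end - b_start)

# Swing marked at 1.0-3.0 s in A and 2.0-6.0 s in B:
print(remap_time(2.0, 1.0, 3.0, 2.0, 6.0))  # 4.0 (midpoint -> midpoint)
```

Because the mapping is linear, it composes cleanly with the existing real-time/video-time translation: it only adds an offset and a scale factor per video.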

I don't think it's a hard restriction though. When you change the video framerate it updates the capture framerate accordingly, because it means the previous data was incorrect, so it's better to reset everything. But then you can still manually change the capture framerate up or down; it should work. I admit I haven't tested that configuration. A clear use case for this is timelapse videos.

Thanks for the feedback!

1) The original framerate of the video is indicated in the infobar above the video, maybe the playback framerate could be shown here too. It would be similar to what happens in the capture screen.

2) I think I understand what you mean. The issue is that 200% of real time may amount to hundreds or thousands of frames per second to replay. That may not be sustainable, or even displayable by the monitor. This has become less true in recent years; maybe the maximum should be revisited.

The other aspect of rescaling the range of playback speeds is that it lets you select slow motion more easily. If you have a 10x ratio, for example, which is one of the most common (30 to 300), the natural speed of the video would sit too close to the left end to be slowed down further with the mouse. Another approach would be to make the slider logarithmic instead of linear, as is already the case for the delay slider in the capture screen.
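For reference, a logarithmic slider mapping looks like this. The function name and the min/max speeds are illustrative choices, not Kinovea's actual values:

```python
import math

def slider_to_speed(pos, min_speed=0.01, max_speed=100.0):
    """Map a slider position in [0, 1] to a playback speed on a log
    scale. With a range symmetric around 1x, pos=0.5 lands exactly on
    1x, so slow motion and fast forward get equal slider travel."""
    lo, hi = math.log10(min_speed), math.log10(max_speed)
    return 10 ** (lo + pos * (hi - lo))

print(slider_to_speed(0.5))  # 1.0
print(slider_to_speed(1.0))  # 100.0
```

This is what solves the "too close to the left" problem: on a linear 30-to-300 slider, 1x sits at 10% of the travel, while on a log slider it sits wherever the range is centered.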

3) I agree. At the moment the way to do that is with the keyboard UP/DOWN arrows (jumps 25%), SHIFT+UP/DOWN (jumps 10%) and CTRL+UP/DOWN (jumps 1%).

So after thinking a bit more about this I feel that the most important point is to properly reload the screen state when reloading a video analysis.

I propose to put more of the screen state in the KVA file: working zone boundaries, playback speed, pan, zoom, scale, aspect ratio, mirror, magnifier. These are all things we would want to reload when sharing the analysis with someone. Not sure about playback position.

Most of these values already live alongside the others in memory; they are just not serialized in the output.

Adjustment for high speed video, coordinate system calibration and lens distortion calibration are already stored and reloaded.

Depending on how the KVA file is loaded, decide to restore or discard this screen state:
- if side-loaded automatically as part of a video launch (filename match): restore.
- if loaded as part of the crash-recovery mechanism: restore.
- if loaded as part of program state restoring on launch (not implemented yet): restore.
- if loaded explicitly after the video was already independently loaded: discard.
- if loaded automatically by being player.kva in the application data directory: discard.
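The decision rule above reduces to "restore for automatic or system-driven loads, discard for explicit user loads and the default template". A sketch of that predicate, with made-up context names standing in for the five cases:

```python
# Hypothetical labels for the load contexts listed above; the real
# implementation would use whatever enum Kinovea defines internally.
RESTORE_CONTEXTS = {"sidecar_autoload", "crash_recovery", "startup_restore"}
DISCARD_CONTEXTS = {"explicit_load", "default_player_kva"}

def should_restore_screen_state(load_context):
    """Restore screen state for automatic loads; discard it for loads
    the user performs explicitly or for the default player.kva."""
    return load_context in RESTORE_CONTEXTS

print(should_restore_screen_state("crash_recovery"))  # True
print(should_restore_screen_state("explicit_load"))   # False
```

Keeping the rule in one place like this would make it easy to adjust when the program-state-restore-on-launch case is eventually implemented.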

The second point is to reload the complete program state on launch. This includes the number and type of screens, which video or camera goes in which screen, synchronization, superposition, etc. This is a different feature, built on top of the other, and one I feel has lower priority.

Thanks for the feedback!
I have considered the concept of "projects" before but felt it was adding a layer of complexity that wasn't really required.

I very much like the idea of being able to launch the program back exactly where you left it. There are some pieces already in there that should make this not too complicated. In particular the crash recovery mechanism is able to restore the state of the video + analysis after a crash even if it wasn't explicitly saved. This works through a special field in the KVA file that references the video it was created on.

From reading your post I feel that a KVA file with the video reference already matches the concept of "project" that you describe. A key missing feature is the ability to open a KVA file and automatically load the video it references. It would also need to push back the state of zoom/scale, mirror, magnifier, working zone and the current position of the playhead.

"Load key image data" is important in itself to be able to load several KVA files onto a single video. This is useful for comparison purposes for example, or if you have a standard KVA with template lines or other reference material.

Regarding opening a video in the active window, I haven't received a lot of feedback about this. You can replace the first video by drag & drop from the explorer. The algorithm tries to make things work intuitively based on the state of the screens, especially when there is an empty screen or when the single screen is a capture one.

But yeah, when both screens are filled, the "open video" menu could take the active one into account. Although I'm not sure replacing the active one is the most intuitive: you could argue that the user is working on it, so it might be better to replace the other one. I don't feel the active screen is a really safe hint to rely on here. I agree that the current way, replacing the second screen, is not perfect.

The file explorer should switch to the directory containing the last opened video so if the user really wants to control where to open the next file when the two screens are already used, the drag & drop method should be the easiest and safest way.

750

(44 replies, posted in General)

Another thing that will come into play is that the option defining the memory allocated for the capture buffers is no longer limited to 1 GB. The maximum setting will be computed from the available physical memory.

This should allow much longer delays on x64 systems with more RAM.
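To give a sense of the relationship between buffer memory and delay, here is the back-of-the-envelope calculation. The helper is illustrative; actual frame sizes depend on the camera's pixel format:

```python
def max_delay_seconds(buffer_bytes, width, height, bytes_per_pixel, fps):
    """Longest delay the capture buffer can hold: buffer size divided
    by the bandwidth of the incoming frames."""
    frame_bytes = width * height * bytes_per_pixel
    return buffer_bytes / (frame_bytes * fps)

# 4 GiB buffer, 1920x1080 RGB24 (3 bytes/pixel) at 30 fps:
print(round(max_delay_seconds(4 * 1024**3, 1920, 1080, 3, 30), 1))  # 23.0
```

So roughly every additional 4 GiB of RAM dedicated to the buffer buys another ~23 seconds of delay at 1080p30, which is why lifting the 1 GB cap matters on x64 systems.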