856

(3 replies, posted in Bug reports)

Hi,

1.
OK, for some reason I was under the impression that there was an issue between MJPEG (the new codec used for captured files) and the AVI container at certain frame rates.
I just checked and there doesn't seem to be any issue. I don't have iTunes or cSwing though, so please test this file against these players and let me know if it works (only a few frames). If it works I will restore the container format selector.

2.
Yes, it is a consequence of a fundamental change in the recording architecture. There is no going back to the previous architecture because it was not good enough to allow the recording of HD streams at 30 fps, which I think is a must. The only way to emulate it would be to implement the old behavior on top of the new architecture, but that would be a bit of a pain.

Can you describe a bit more how you combine recording and delay? How do you know when to hit record?

857

(28 replies, posted in General)

Oh, I completely missed the messages from April 30th and May 1st, sorry.

@gollum: I'll double check the issue with exported framerate. I did several tests where it worked and properly recorded the framerate. I have to check if the value written at the codec level is the same as the value written at the container level.

Regarding naming, thanks for the examples, it helps. Yes, being able to include the alias of the camera in the filename is a must-have.

For auto-incrementing counters, maybe there should be several of them: one global counter that increments whenever a video is recorded, then screen-specific counters, or maybe a global counter that is dual-camera aware and only increments once, and then maybe camera-specific counters.

It would also probably be nice to be able to manipulate the current value of the counters besides just resetting them.

To change the camera alias you can:
- Right-click the thumbnail and choose "Rename".
- Inside the camera configuration dialog, directly edit the alias (blue text). I always wondered if this hidden editbox was hard to discover, I have my answer. Maybe an explicit rename button would be more usable.
Note that you can change the icon by clicking on it in both dialogs.

@cmdi035: I have seen this effect with some H.264 encoded content, is it your case? I don't have a solution at the moment. The images themselves should be in the correct order but the timestamps are all mangled, which is why the cursor jumps back and forth, and why there are issues with seeking.

858

(28 replies, posted in General)

Hi,
Thanks for the detailed message.
I'll try to recap. I think you have correctly identified the various causes. I'll just repost them here for clarity.

1. Framerate mismatch due to auto-exposure.
Kinovea uses the camera configuration information to write the framerate in the file. So if the camera is configured for 30 fps but due to Auto Exposure only actually streams at 15 fps, the resulting video will be sped up.

2. Framerate mismatch due to using a default value.
Kinovea probably cannot read the configuration of the frame grabber. When you go into the configuration dialog, do you have the framerate drop-down available?

3. Issues with dual mode when one or both of the videos have a mismatched framerate.
Both videos do not share a common truth for what they consider real-time.
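
To illustrate cause 1 with assumed numbers (not taken from a real file):

```python
# Assumed numbers illustrating cause 1 (framerate mismatch due to auto-exposure).
configured_fps = 30.0  # value written into the file from the camera configuration
actual_fps = 15.0      # real delivery rate once auto-exposure lengthens the exposure

# Players trust the value written in the file, so playback runs too fast.
speed_factor = configured_fps / actual_fps  # 2.0: the video plays twice too fast

# A 60 s real-world event is compressed into 30 s of playback.
played_duration_s = 60.0 / speed_factor
```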

Possible solutions:

  • A. At playback time, being able to correct the information written into the file with a user provided value. I think this is what you propose. I like this idea because it would open other use-cases as well. There are caveats though.

  • B. At record time, write the "signal" value to the file. This value is the computed framerate from the actual stream. It is smoothed over a time window but still subject to noise and variation from auto exposure. I don't particularly like this option because it will result in a lot of files marked as 29.99 fps or 30.01 fps, even though the frames were actually captured by the camera at the correct frequency. The noise is coming from various buffers on the computer side.

  • C. At record time, ask the user to specify a value. This could disrupt the recording workflow.

Caveats of A:
How do you know the actual recording framerate? It depends on the current exposure duration and may change during the recording if exposure is on Auto. It is not always 1/exposure duration anyway (Logitech and Microsoft cameras differ on their strategy here).

There is also possible confusion between this and the high speed camera dialog, which only changes timecodes.
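
The correction behind option A is simple arithmetic on the timestamps. A sketch with a hypothetical helper (not an actual Kinovea function):

```python
def correct_timestamps(timestamps_ms, stated_fps, actual_fps):
    """Rescale file timestamps so playback matches real time.

    Hypothetical helper, only to show the arithmetic of option A;
    the user would provide actual_fps.
    """
    scale = stated_fps / actual_fps
    return [t * scale for t in timestamps_ms]

# File claims 30 fps but the camera really delivered 15 fps: frame
# spacing in the file is ~33.3 ms while the real spacing was ~66.7 ms.
raw = [0.0, 1000.0 / 30, 2000.0 / 30]
fixed = correct_timestamps(raw, stated_fps=30, actual_fps=15)  # intervals doubled
```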


4. Naming pattern for dual cameras
I completely agree and I have been experiencing that very same annoyance myself recently while filming stereoscopic videos.

There are other improvements to be had in this area. I'd like to
- be able to specify separate capture directories (to be able to record on two different drives in parallel for performance),
- make it easier to change the capture directory,
- be able to use macros in the directory path so they get named with the date automatically,
- have separate prefix/suffix for left/right cameras, etc.
I'll probably start another thread for this to brainstorm ideas.

859

(28 replies, posted in General)

I forgot to mention an important change in the way the recording works with regards to the delay feature.

In previous versions of Kinovea, when you started the recording, you would record what was visible on screen. If you had delay in place of say 10 seconds, then the first image of the resulting video would be 10 seconds earlier than the time you pressed record.

This is no longer the case due to the decoupling of the preview and the recording: the configured delay is no longer taken into account when recording. You always record the "live" feed, minus the base latency between the camera and the computer memory (which depends on the selected stream format, by the way).

This new feature is introduced in 0.8.24.
It may not be entirely intuitive so I'll attempt to clarify the behavior here.

http://www.kinovea.org/screencaps/0.8.x/0824-dss.png

It controls the frequency at which the frames are displayed on screen vs the frequency at which they are captured from the camera, during live capture sessions.
It has an impact on the smoothness of display and on the maximum duration of the delay feature.

When the camera captures a frame, it is stored in an internal buffer. This buffer is then consumed independently by the display module and by the recording module.
The recording module always consumes all the frames to avoid any frame drop in the resulting video.

For the display on screen there are two choices, controlled by this setting.

1. Camera frame signal.
Each time the camera receives a frame it is added to the internal buffer and the display module receives a signal.
In this strategy, the display module uses this signal to take the latest frame from the buffer and push it to its internal delay buffer, and to take one frame from the delay buffer and display it on screen. (If delay is zero they are the same frame.)
This is the historical mode.

Advantage: If the computer is fast enough, the delay buffer contains all the camera frames. If you pause the stream you can navigate with the full temporal granularity.
Drawback: If the computer is not fast enough, it may cause jittery display, especially in dual camera mode.

2. Forced framerate.
In this strategy, an independent timer loop is used by the display module. Only when this timer ticks does the display module take the latest frame from the buffer, push it to its internal delay buffer, take one frame from the delay buffer, and display it on screen.

Advantage: Smoother display, especially when using dual camera mode (unless you go too low in framerate).
Advantage 2: If you lower the framerate, you can fit a longer time period into the delay buffer.
Drawback: Some frames are missing if you pause the stream and analyse it.

Notes:
- The selected framerate has no relation to the camera framerate.
- The selected framerate has no impact on the framerate stored in the recorded videos. (videos are created at the camera framerate).
- In either mode, the recording module takes precedence in terms of performance. The display module will always drop frames if the computer has trouble keeping up.
- Setting the framerate to higher than the camera framerate will not provide any benefits.
- You have to reconnect the stream for this option to take effect.
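
To make the delay mechanics concrete, here is a toy sketch of a display-side delay buffer. The class and method names are mine, not Kinovea's:

```python
from collections import deque

class DelayBuffer:
    """Toy model of the display delay buffer; a simplified sketch only."""

    def __init__(self, capacity_frames):
        # Bounded buffer: once full, the oldest frame falls out automatically.
        self.frames = deque(maxlen=capacity_frames)

    def push_and_display(self, frame):
        # Push the newest frame taken from the camera buffer, then display
        # the oldest frame still held -- that is the delayed image.
        self.frames.append(frame)
        return self.frames[0]

    def delay_seconds(self, display_fps):
        # Lowering the display framerate stretches the same buffer over a
        # longer time period ("Advantage 2" of forced framerate above).
        return self.frames.maxlen / display_fps

buf = DelayBuffer(capacity_frames=250)
assert buf.delay_seconds(25) == 10.0  # 10 s of delay headroom at 25 fps
assert buf.delay_seconds(10) == 25.0  # 25 s at 10 fps, with the same memory
```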


The default is to use "Forced framerate" at a value of 25 fps. Feedback is welcome on whether this is a good default.

If you have questions or comments please post below.

861

(28 replies, posted in General)

Experimental version. As always feedback is very appreciated! wink
Beware of regressions and report anything suspicious. Do not assume the issue is known.

----

Special thanks to Milan Djupovac and the folks at Sport Diagnostic Center Šabac (Serbia) for the ongoing testing and support of this version.
And many thanks to the Jumpers Rebound Center in Gillingham (UK) who donated a Microsoft LifeCam Studio for testing purposes.
And also many thanks to DVC Machinevision BV (Netherlands) for the super deal on the Basler camera many months back.

----

This release is almost entirely about cameras and capture.

  1. Improved capture performance and camera configurability

  2. Support for Basler cameras.

  3. Capture history.

  4. Other goodies.


1. Improved capture performance and camera configurability
This has taken the better part of the multi-month effort of this release. I also acquired 7 various USB cameras in the process and was donated one smile

Parts of the low level capture architecture were rewritten from scratch and gave birth to a new capture "pipeline" with a more direct path from the camera to the disk and an improved multi-threading model.

The camera configuration dialog is now more detailed, and lets you choose the precise stream format and framerate, configure exposure duration, gain and focus, when the camera supports it.

http://www.kinovea.org/screencaps/0.8.x/0824-cameraconfiguration.png

The maximum performance will be reached when using the MJPEG stream format with cameras that have on-board compression, as the stream will be directly pushed to the capture file without any transcoding. This should enable the capture of two full HD streams without frame drops.
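
Some back-of-the-envelope numbers behind that claim. The compression ratio is an assumption for illustration (it varies with content and quality settings):

```python
# Rough bandwidth estimate for one full HD stream; assumed figures only.
width, height, fps = 1920, 1080, 30

# Uncompressed YUY2 (2 bytes per pixel) coming off the camera:
raw_mb_per_s = width * height * 2 * fps / 1e6  # ~124 MB/s per stream

# On-board MJPEG with an assumed ~1:10 compression ratio:
mjpeg_mb_per_s = raw_mb_per_s / 10             # ~12 MB/s per stream

# Two full HD MJPEG streams written straight to disk, no transcoding:
total_mb_per_s = 2 * mjpeg_mb_per_s            # well within modern disk bandwidth
```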

At the top of the camera screen, the status bar contains new information:

http://www.kinovea.org/screencaps/0.8.x/0824-camerastatus.png

  • Signal: the actual frame rate received from the camera. May or may not match the configuration.

  • Data: the bandwidth between the camera and Kinovea.

  • Drops: the number of frame drops during the current or last recording session.

The new output file format is MJPEG inside an MP4 container, whichever option is selected as stream input (not configurable).

For the live image, another change is the "Display synchronization strategy", to decouple the preview framerate from the captured framerate. I did not find a concise sentence to quickly convey all the implications of this setting, so I'll attempt to describe it in this topic.


2. Support for Basler cameras
This version introduces preliminary support for Basler high-end industrial cameras, through their Pylon SDK.
I was only able to test it using a black and white camera so if you have access to a color camera please report how it works for you.

Live view, configuration and recording should all be supported.

3. Capture history
A little feature that was added almost at the last minute, but I think it could prove quite useful.
Basically each time you make a recording, an entry is saved in the history panel, and from there you can launch the videos.

Note that you can import your current capture directory (or any other directory for that matter) into the history using the button on the left. This can also be useful when you recorded a session on the camera and later dumped the SD card on the main computer.
After some threshold the days are grouped into months.

http://www.kinovea.org/screencaps/0.8.x/0824-capturehistory.png


4. Other goodies
- A new tool "Test grid" under menu Image, for cameras. This can be used to verify that the camera is level, locate the center of the image, etc.
- A new timecode "total microseconds" and the ability to select up to one million fps in the high speed camera dialog. For those users that have really high speed cameras.

A number of defects were fixed and even more things were crammed in, please check the raw changelog.

Enjoy!

mccanndavid wrote:

The drag and drop candidate sizing functionality once present, is now seemingly gone on my version(?)

Maybe you have inadvertently reverted to an older version?

When you enter the track configuration you can either drag the border of the search window from the right panel or enter window sizes manually in pixels in the text boxes on the lower left panel.
(Both features were introduced simultaneously in version 0.8.22.)


mccanndavid wrote:

I am wondering when the magnified window would be used/useful? I thought it would be for selecting points, but I can't select within the window and thus need to switch to direct zoom every time.

The magnified window is more to have a "picture in picture" type of effect, for presentation purposes.

Hi,

You are correct in your observations. It's a current limitation of the software. You can define a coordinate system based on a grid, and you can track a grid, but both features don't work together. The coordinate system is not updated by the tracking. The points will always be expressed in the coordinate system set at the point the grid was calibrated.

There is currently no work around.

The new tool is available at this location: /tools/hotfixes/0.8.23/3 - Bike fit.xml

For the interested: download the file, go to Kinovea program files and under "DrawingTools\Custom", replace the existing one, restart Kinovea.

Thanks!

Super cool!

You can send me the file at joan at kinovea dot org. I will upload it on the website for the time being and then include it in the next version.

Yes it is definitely possible, the tools framework supports it.
It is implemented on the "Human Model 2" tool, from the extra "Options" menu.

If someone wants to look at the XML of both files it should be possible to port that feature to the bike fit tool. It's not documented though.

edit: what I mean is that it's possible right now, without waiting for a new Kinovea version. But the XML file of the tool has to be modified.

Ghosting for a single frame like you have here might be doable in Kinovea by
- loading the same video twice in dual mode,
- synchronizing with a 1-frame delay,
- enabling image superposition.

Here is an example (deinterlaced 25 fps):
http://www.kinovea.org/screencaps/0.8.x/syncghost.jpg

I have marked the relevant buttons in the capture.
With interlaced video this gives 4 visible fields, but the blending at 50% makes it hard to see details.
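
The superposition itself is just a per-pixel 50% blend of the two synchronized frames. A sketch with flat lists of grayscale values standing in for images (values are mine, for illustration):

```python
def superpose(frame_a, frame_b, alpha=0.5):
    """Blend two frames pixel by pixel, like the image superposition feature.

    Frames are plain lists of grayscale values here, a simplified
    stand-in for real images.
    """
    return [alpha * a + (1 - alpha) * b for a, b in zip(frame_a, frame_b)]

# Current frame and the same video delayed by one frame: the subject
# appears at both positions, each at 50% opacity.
current = [10, 200, 10]
delayed = [10, 10, 200]
ghosted = superpose(current, delayed)
```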

It's actually relatively easy to do for fixed cameras. A basic approach is the following: you average all the pixels from all the frames, which gives you the naked background. Then for each frame (or each second or third frame, or whatever), you compare each pixel of the frame against the averaged background. If the pixel is different, it must pertain to the moving subject, so you copy it to the final image.
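
A minimal sketch of that naive approach, using plain lists of grayscale values as stand-ins for frames (all names, values and the threshold are mine, for illustration):

```python
def extract_motion_trail(frames, threshold=50):
    """Naive motion-trail compositing for a fixed camera.

    frames: list of frames, each a flat list of grayscale pixel values.
    Pixels differing from the averaged background by more than `threshold`
    are assumed to belong to the moving subject and copied into the result.
    """
    n = len(frames)
    width = len(frames[0])
    # 1. Average all frames to recover the naked background.
    background = [sum(f[i] for f in frames) / n for i in range(width)]
    # 2. Start from the background, then stamp each frame's moving pixels.
    result = list(background)
    for frame in frames:
        for i, px in enumerate(frame):
            if abs(px - background[i]) > threshold:
                result[i] = px
    return result

# A bright subject (255) moving left to right over a flat background (100):
frames = [[255 if j == i else 100 for j in range(5)] for i in range(5)]
trail = extract_motion_trail(frames)  # the subject is visible at every position
```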

A lot can be improved upon this naïve approach to remove noise and ghosting, etc.
I probably mentioned it elsewhere but this was a feature of the ancestor to Kinovea back in 2005. I know it doesn't help in the least, sorry. I still want to work on this though. Maybe now that we can have ultra wide angle good quality lenses on the cheap the need to implement it for moving cameras is less important (making it work for moving cameras has been the blocking point).

Very good point about multiple view analysis. It departs a bit from what I thought with video quality-degrading issues, but I like to consider what the imaging system can deliver as a whole, be it a single or multiple camera system.

For 3D quantitative analysis, I think 2 cameras is the theoretical minimum but that real motion tracking applications never use less than 4 cameras. (I've used 2-camera systems for tracking eyes or fingers in a small volume, but as soon as you want to track body parts you move to 4 to 8 or even more).

So, what can constitute the main problems and defects of a multi-camera setup? (Considering qualitative analysis only for now.)
Mis-synchronization is probably one of the biggest… We can always synchronize to frame-level in software, but sub-frame sync requires hardware support. That's a clever use of WiFi for sure.
I don't know what constitutes an acceptable synchronization level for sport analysis. (I know that stereoscopic video requires almost pixel-level sync, for example.)
On the subject, I have plans for a half software, half hardware sub-frame level synchronization method using the rolling shutter on consumer USB cameras and an Arduino powered strobe light, I'll post more details if I ever get it working.
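
For a rough sense of scale, frame-level software sync leaves up to one frame period of uncertainty between two free-running cameras. Illustrative arithmetic only:

```python
def max_sync_error_ms(fps):
    """Worst-case offset of frame-level (software-only) synchronization.

    Two free-running cameras can be out of phase by up to one frame
    period; a rough upper bound, not a measured figure.
    """
    return 1000.0 / fps

error_30 = max_sync_error_ms(30)    # up to ~33.3 ms at 30 fps
error_240 = max_sync_error_ms(240)  # up to ~4.2 ms at 240 fps
```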

Resolving power - lack of focus
- Impacts: the sharpness of details.
- Relevance to sport analysis: high.
- Component: Lens and lens mount.
- How to control or improve:
If the subject is always distant, a fixed focus camera may be sufficient. A camera with fixed focus should hopefully be focused at infinity at the factory.
The most versatile solution is a manual focus that can be adjusted with a ring or lever.
The most efficient solution may be a motorized focus that we can control in software (Logitech C920, Microsoft LifeCam). Note that even with motorized focus some webcams can't focus to infinity, and anything farther than a few meters will not be optimally focused.
Some lenses have variable focal length; in this case focusing should usually be redone after changing the focal length.
Some devices have auto-focus capabilities, in this case care should be taken as to where in the image the focus has been locked.

Resolving power - long exposure
- Impacts: the sharpness of details on moving subjects.
- Relevance to sport analysis: very high.
- Component: Sensor.
- How to control or improve:
Some cameras have auto-exposure: they adjust exposure to measured light levels. This lowers reproducibility, and the final exposure chosen may not be adequate (long exposure increases motion blur).
The most versatile solution is a camera for which exposure duration can be changed manually and is capable of short exposures (Exact requirement to be assessed).
- Compromise: low exposure means less light collected at the pixel sites. For laboratory setups artificial lights may be needed.

Resolving power - pixel count and lens resolution
- Impacts: the sharpness of details.
- Relevance to sport analysis: high.
- Component: Sensor and lens.
- How to control or improve:
Some devices are actually limited by their lens, when the lens itself is not able to project an image sharp enough to distinguish details that are two pixels apart.
More pixels is better but only if the lens is adequate. For a given sensor size, more pixels means smaller ones, which makes it more difficult for the lens to match resolution.
Lens quality is measured with various metrics like lp/ph or MTF curves. A recent evolution is the use of megapixel ratings. The lens fitted on the camera should have a megapixel rating at least as high as the pixel count of the sensor (ex: a 3MP-rated lens for 1920x1080 images). A good introductory resource on lens quality measurement methodology is at Cambridge in Colour.

Resolving power - image processing and JPEG compression
- Impacts: the sharpness of details.
- Relevance to sport analysis: high.
- Component: Image processing chip on the camera or recording software.
- How to control or improve:
The best solution is a camera that can provide the raw images to the computer, and to perform the color grading there.
The issue is that bandwidth is limited, so it is not always possible to transmit full color frames at the full framerate.
A camera should allow us to control the JPEG compression levels. (No USB camera currently does this to my knowledge).

Spherical distortions - wide angle and ultra wide angle lenses
- Impacts: measurements of distances and speeds.
- Relevance to sport analysis: mid to high.
- Component: Lens.
- How to control or improve:
Lenses with normal field of view (less than around 65°) usually have very low distortion.
For wide angle, a lens without distortions should be preferred, but the cost can skyrocket pretty quickly.
The distortion can be calibrated in software and taken into account for measurements.
- Compromise: A subject evolving at the same distance from the camera will cover fewer pixels, meaning less resolution.