
(28 replies, posted in General)

Experimental version. As always, feedback is very appreciated!
Beware of regressions and report anything suspicious. Do not assume the issue is known.

----

Special thanks to Milan Djupovac and the folks at Sport Diagnostic Center Šabac (Serbia) for the ongoing testing and support of this version.
And many thanks to the Jumpers Rebound Center in Gillingham (UK) who donated a Microsoft LifeCam Studio for testing purposes.
And also many thanks to DVC Machinevision BV (Netherlands) for the super deal on the Basler camera many months back.

----

This release is almost entirely about cameras and capture.

  1. Improved capture performance and camera configurability.

  2. Support for Basler cameras.

  3. Capture history.

  4. Other goodies.


1. Improved capture performance and camera configurability
This took up most of the multi-month effort behind this release. I also acquired 7 various USB cameras in the process and received one as a donation.

Parts of the low-level capture architecture were rewritten from scratch, resulting in a new capture "pipeline" with a more direct path from the camera to the disk and an improved multi-threading model.
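To give an idea of the shape of such a pipeline, here is a rough producer/consumer sketch in Python (Kinovea itself is not written in Python, and all the names below are hypothetical; this is only an illustration of the general pattern, including how a "drops" counter could arise):

```python
# Hypothetical sketch of a capture pipeline: a camera thread pushes frames
# into a bounded queue, a writer thread drains them to disk.
import queue
import threading

frames = queue.Queue(maxsize=64)   # bounded: backpressure instead of unbounded RAM use
drops = 0

def grab_loop(camera_frames):
    """Camera thread: enqueue frames, count a drop when the queue is full."""
    global drops
    for frame in camera_frames:
        try:
            frames.put_nowait(frame)
        except queue.Full:
            drops += 1             # the writer could not keep up with the camera
    frames.put(None)               # end-of-stream sentinel

def write_loop(out):
    """Writer thread: drain the queue to the output (a list stands in for a file)."""
    while (frame := frames.get()) is not None:
        out.append(frame)
```

The bounded queue is the key design choice: when the disk cannot keep up, frames are dropped and counted rather than accumulating in memory.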

The camera configuration dialog is now more detailed: it lets you choose the precise stream format and framerate, and configure exposure duration, gain, and focus when the camera supports it.

http://www.kinovea.org/screencaps/0.8.x/0824-cameraconfiguration.png

Maximum performance is reached when using the MJPEG stream format with cameras that have on-board compression, as the stream is pushed directly to the capture file without any transcoding. This should enable the capture of two full HD streams without frame drops.
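To get a feel for the data rates involved, here is a back-of-the-envelope estimate in Python. The bits-per-pixel figure is an assumption for JPEG-compressed frames, not a measured Kinovea number; real bitrates depend on scene content and compression level.

```python
# Rough bandwidth estimate for MJPEG capture (illustrative numbers only).

def mjpeg_bitrate_mbps(width, height, fps, bits_per_pixel=1.5):
    """Estimate an MJPEG stream bitrate in megabits per second.

    bits_per_pixel is an assumed average for JPEG-compressed frames.
    """
    return width * height * fps * bits_per_pixel / 1e6

# Two full-HD 30 fps streams written straight to disk:
per_stream = mjpeg_bitrate_mbps(1920, 1080, 30)
print(f"{per_stream:.0f} Mbps per stream, {2 * per_stream:.0f} Mbps total")
```

Under these assumptions, two streams stay around 25 MB/s combined, which is within reach of an ordinary hard drive as long as no transcoding happens on the way.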

At the top of the camera screen, the status bar contains new information:

http://www.kinovea.org/screencaps/0.8.x/0824-camerastatus.png

  • Signal: the actual frame rate received from the camera. May or may not match the configuration.

  • Data: the bandwidth between the camera and Kinovea.

  • Drops: the number of frame drops during the current or last recording session.

The new output file format (whichever stream format is selected as input) is MJPEG inside an MP4 container (not configurable).

For the live image, another change is the "Display synchronization strategy", to decouple the preview framerate from the captured framerate. I did not find a concise sentence to quickly convey all the implications of this setting, so I'll attempt to describe it in this topic.


2. Support for Basler cameras
This version introduces preliminary support for Basler high-end industrial cameras, through their Pylon SDK.
I was only able to test it using a black and white camera, so if you have access to a color camera please report how it works for you.

Live view, configuration and recording should all be supported.

3. Capture history
This little feature was added almost at the last minute, but I think it could prove quite useful.
Basically each time you make a recording, an entry is saved in the history panel, and from there you can launch the videos.

Note that you can import your current capture directory (or any other directory, for that matter) into the history using the button on the left. This can also be useful when you recorded a session on the camera and later dumped the SD card onto the main computer.
After some threshold the days are grouped into months.

http://www.kinovea.org/screencaps/0.8.x/0824-capturehistory.png


4. Other goodies
- A new tool "Test grid" under menu Image, for cameras. This can be used to verify that the camera is level, locate the center of the image, etc.
- A new timecode "total microseconds" and the ability to select up to one million fps in the high speed camera dialog. For those users who have really high speed cameras.

A number of defects were fixed and even more things were crammed in; please check the raw changelog.

Enjoy!

mccanndavid wrote:

The drag and drop candidate sizing functionality once present, is now seemingly gone on my version(?)

Maybe you have inadvertently reverted to an older version?

When you enter the track configuration you can either drag the border of the search window from the right panel or enter window sizes manually in pixels in the text boxes on the lower left panel.
(Both features were introduced simultaneously in version 0.8.22.)


mccanndavid wrote:

I am wondering when the magnified window would be used/useful? I thought it would be for selecting points, but I can't select within the window and thus need to switch to direct zoom every time.

The magnified window is more to have a "picture in picture" type of effect, for presentation purposes.

Hi,

You are correct in your observations. It's a current limitation of the software. You can define a coordinate system based on a grid, and you can track a grid, but the two features don't work together. The coordinate system is not updated by the tracking. The points will always be expressed in the coordinate system set at the time the grid was calibrated.

There is currently no workaround.

The new tool is available at this location: /tools/hotfixes/0.8.23/3 - Bike fit.xml

For the interested: download the file, go to Kinovea program files and under "DrawingTools\Custom", replace the existing one, restart Kinovea.

Thanks!

Super cool!

You can send me the file at joan at kinovea dot org. I will upload it on the website for the time being and then include it in the next version.

Yes it is definitely possible, the tools framework supports it.
It is implemented on the "Human Model 2" tool, from the extra "Options" menu.

If someone wants to look at the XML of both files it should be possible to port that feature to the bike fit tool. It's not documented though.

edit: what I mean is that it's possible right now, without waiting for a new Kinovea version. But the XML file of the tool has to be modified.

Ghosting for a single frame like you have here might be doable in Kinovea by
- loading the same video twice in dual mode,
- synchronizing with a 1-frame delay,
- enabling image superposition.

Here is an example (deinterlaced 25 fps):
http://www.kinovea.org/screencaps/0.8.x/syncghost.jpg

I have marked the relevant buttons in the capture.
With interlaced video this gives 4 visible fields, but the blending at 50% makes it hard to see details.
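As a rough illustration of what the 50% superposition does, here is a Python sketch. Frames are modeled as flat lists of grayscale pixel values and the helper names are made up; a real implementation would operate on image arrays.

```python
# Sketch of the superposition effect: blend frame N with frame N-1 at 50%.

def blend(frame_a, frame_b, alpha=0.5):
    """Blend two equally sized frames: alpha*a + (1-alpha)*b per pixel."""
    return [int(alpha * a + (1 - alpha) * b) for a, b in zip(frame_a, frame_b)]

def ghost(frames, delay=1, alpha=0.5):
    """Superpose each frame with the frame `delay` steps earlier."""
    return [blend(frames[i], frames[i - delay], alpha)
            for i in range(delay, len(frames))]

# A 3-pixel "image" where the subject (value 255) moves one pixel per frame:
frames = [[0, 0, 0], [255, 0, 0], [0, 255, 0]]
print(ghost(frames))  # each output frame shows two subject positions at half intensity
```

The 50% factor is exactly why details become hard to see: each subject position only keeps half of its original contrast against the background.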

It's actually relatively easy to do for fixed cameras. A basic approach is the following: average all the pixels across all the frames, which gives you the naked background. Then, for each frame (or every second or third frame, or whatever), compare each pixel of the frame against the average background. If the pixel is different, it must belong to the moving subject, so you copy it to the final image.
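The basic approach above can be sketched as follows. Frames are modeled as flat lists of grayscale values (real code would use 2D color images), and the threshold is an arbitrary illustrative value.

```python
# Naive background averaging and motion extraction for a fixed camera.

def average_background(frames):
    """Per-pixel mean over all frames approximates the static background."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def extract_motion(frames, threshold):
    """Copy pixels that differ from the background into one composite image."""
    background = average_background(frames)
    composite = list(background)
    for frame in frames:
        for i, px in enumerate(frame):
            if abs(px - background[i]) > threshold:
                composite[i] = px   # pixel likely belongs to the moving subject
    return composite

# A 3-pixel "image": the subject (value 200) moves one pixel per frame,
# the last two frames show only the background (value 10).
frames = [[200, 10, 10], [10, 200, 10], [10, 10, 200], [10, 10, 10], [10, 10, 10]]
print(extract_motion(frames, threshold=100))
```

With these inputs the composite shows the subject at every position it occupied. Note how the moving subject pollutes the averaged background itself, which is one of the sources of noise and ghosting mentioned below.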

A lot can be improved upon this naïve approach to remove noise and ghosting, etc.
I probably mentioned it elsewhere, but this was a feature of the ancestor of Kinovea back in 2005. I know it doesn't help in the least, sorry. I still want to work on this though. Maybe now that good-quality ultra wide angle lenses are available on the cheap, the need to implement it for moving cameras is less important (making it work for moving cameras has been the blocking point).

Very good point about multiple view analysis. It departs a bit from what I had in mind with video quality-degrading issues, but I like to consider what the imaging system can deliver as a whole, be it a single or multiple camera system.

For 3D quantitative analysis, I think 2 cameras is the theoretical minimum, but real motion tracking applications never use fewer than 4 cameras. (I've used 2-camera systems for tracking eyes or fingers in a small volume, but as soon as you want to track body parts you move to 4, 8, or even more.)

So, what can constitute the main problems and defects of a multi-camera setup? (Considering qualitative analysis only for now.)
Mis-synchronization is probably one of the biggest… We can always synchronize to frame level in software, but sub-frame sync requires hardware support. That's a clever use of WiFi for sure.
I don't know what constitutes an acceptable synchronization level for sport analysis. (I know that stereoscopic video requires almost pixel-level sync, for example.)
On the subject, I have plans for a half software, half hardware sub-frame level synchronization method using the rolling shutter on consumer USB cameras and an Arduino powered strobe light, I'll post more details if I ever get it working.

Resolving power - lack of focus
- Impacts: the sharpness of details.
- Relevance to sport analysis: high.
- Component: Lens and lens mount.
- How to control or improve:
If the subject is always distant, a fixed focus camera may be sufficient. A camera with fixed focus should hopefully be focused at infinity at the factory.
The most versatile solution is a manual focus that can be adjusted with a ring or lever.
The most efficient solution may be a motorized focus that we can control in software (Logitech C920, Microsoft LifeCam). Note that even with motorized focus some webcams can't focus to infinity, and anything farther than a few meters will not be optimally focused.
Some lenses have variable focal length; in this case focusing should usually be redone after changing the focal length.
Some devices have auto-focus capabilities, in this case care should be taken as to where in the image the focus has been locked.

Resolving power - long exposure
- Impacts: the sharpness of details on moving subjects.
- Relevance to sport analysis: very high.
- Component: Sensor.
- How to control or improve:
Some cameras have auto-exposure: they will adjust exposure to measured light levels. This lowers reproducibility, and the final exposure chosen may not be adequate (long exposure increases motion blur).
The most versatile solution is a camera for which exposure duration can be changed manually and is capable of short exposures (Exact requirement to be assessed).
- Compromise: low exposure means less light collected at the pixel sites. For laboratory setups artificial lights may be needed.

Resolving power - pixel count and lens resolution
- Impacts: the sharpness of details.
- Relevance to sport analysis: high.
- Component: Sensor and lens.
- How to control or improve:
Some devices are actually limited by their lens, when the lens itself is not able to project an image sharp enough to distinguish details that are two pixels apart.
More pixels are better, but only if the lens is adequate. For a given sensor size, more pixels means smaller ones, which makes it more difficult for the lens to match the resolution.
Lens quality is measured with various metrics like lp/ph or MTF curves. A recent evolution is the use of megapixel ratings. The lens fitted on the camera should have a megapixel rating at least as high as the pixel count of the sensor (ex: a 3 MP rated lens for 1920x1080 images). A good introductory resource on lens quality measurement methodology is at Cambridge in Colour.
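The megapixel rule of thumb is a quick arithmetic check; here it is spelled out in Python (function names are just for illustration):

```python
# Checking the megapixel-rating rule of thumb: the lens rating should
# meet or exceed the sensor's pixel count.

def sensor_megapixels(width, height):
    """Pixel count of the sensor in megapixels."""
    return width * height / 1e6

def lens_is_adequate(lens_mp_rating, width, height):
    """True when the lens rating covers the sensor's pixel count."""
    return lens_mp_rating >= sensor_megapixels(width, height)

print(sensor_megapixels(1920, 1080))    # a full HD sensor is about 2.07 MP
print(lens_is_adequate(3, 1920, 1080))  # so a 3 MP rated lens is sufficient
```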

Resolving power - image processing and JPEG compression
- Impacts: the sharpness of details.
- Relevance to sport analysis: high.
- Component: Image processing chip on the camera or recording software.
- How to control or improve:
The best solution is a camera that can provide the raw images to the computer, so the color processing can be performed there.
The issue is that bandwidth is limited, so it is not always possible to transmit full color frames at the full framerate.
A camera should allow us to control the JPEG compression levels. (No USB camera currently does this to my knowledge).

Spherical distortions - wide angle and ultra wide angle lenses
- Impacts: measurements of distances and speeds.
- Relevance to sport analysis: mid to high.
- Component: Lens.
- How to control or improve:
Lenses with normal field of view (less than around 65°) usually have very low distortion.
For wide angle, a lens without distortions should be preferred, but the cost can skyrocket pretty quickly.
The distortion can be calibrated in software and taken into account for measurements.
- Compromise: a subject evolving at the same distance from the camera will cover fewer pixels, so less resolution.

I would like to use this thread to compile a list of quality-degrading factors in video, how relevant they are to sport analysis, which components are involved, and how we may improve upon them.

This list should be general and relevant to anything that can provide a stream of images and store it on-device or transmit it to a computer (DSLR, industrial camera, USB camera, IP camera, smartphone, etc.). The trigger for this topic, though, is the advent of high-quality, interchangeable, small lenses for surveillance-type cameras. We are now very near a day when little USB cameras can be considered serious imaging devices.

Please add your input, illustrative images, comments, remarks, additional degrading factors, formatting suggestions, etc.
Maybe at some point we can create a PDF or something. It should be useful for evaluating new hardware on the market and as a buyer's guide.

Here are some topics that could be covered:

  • limited resolving power - lack of focus.

  • limited resolving power - long exposure.

  • limited resolving power - pixel count and lens resolution.

  • limited resolving power - image processing and JPEG compression.

  • spherical distortions - wide and ultra wide angle lenses.

  • vignetting - mechanical and optical.

  • noise.

  • flares.

  • limited temporal sampling granularity - low framerate.

  • temporal distortion - rolling shutter.

  • limited illuminance - low aperture.

  • limited dynamic range.

  • limited depth of field.

  • chromatic aberrations.

  • unfaithful color reproduction.


(4 replies, posted in General)

Ok, this is officially very cool :-)
I wonder why I never tried that, thank you for the heads up!

Exposure compensation seems to change the gain level. I could not find a setting for exposure duration unfortunately. The auto-exposure is what degrades the framerate in low-light.

It made me realize that the image size should be displayed in Kinovea for these streams, as it's not always given.


(4 replies, posted in General)

Hi,
I have not yet tried to serve a smartphone camera stream as MJPEG. It is quite relevant to the current effort in the capture module.
What application do you use on the device, and what framerate/frame size does it achieve?

Hi,
The second issue is linked to the first. Kinovea will mark the file as 30fps. If the incoming stream was actually 9fps, it will be played back accelerated (here, about 3.3× too fast).

There are several things that can cause a decrease in stream framerate. The easiest one is when the exposure duration is set to longer than the frame interval. Exposure will take precedence and the framerate will be lowered automatically and silently.
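The exposure/framerate interaction can be stated as a one-line formula; here is a small Python sketch (the function name is just for illustration):

```python
# Why a long exposure silently caps the framerate: a frame cannot be
# delivered faster than its exposure completes.

def effective_fps(requested_fps, exposure_ms):
    """Achievable framerate once the exposure duration is accounted for."""
    max_fps_from_exposure = 1000.0 / exposure_ms
    return min(requested_fps, max_fps_from_exposure)

print(effective_fps(30, 10))   # a 10 ms exposure fits in a 33 ms frame: 30 fps
print(effective_fps(30, 110))  # a 110 ms exposure forces roughly 9 fps
```

Note how a 110 ms auto-exposure in low light lands almost exactly on the 9fps figure reported above.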

Go into the settings, then device properties, locate the exposure parameter, make sure "Auto" is not checked, and slide to the left. The image will get darker and darker as you decrease the exposure duration; this is expected. Test with various settings to see the impact on framerate.

The next version of Kinovea will allow you to get more out of these cameras. The Microsoft LifeCam Studio has MJPEG on-board compression, and Kinovea can leverage it to improve recording performance.


(7 replies, posted in General)

Hi,
I have not, but my development machine is now x64. I hope to be able to work on this and have something out sometime this year.