Kinovea - Forums → Posts by joan
Super cool!
You can send me the file at joan at kinovea dot org. I will upload it on the website for the time being and then include it in the next version.
Yes it is definitely possible, the tools framework supports it.
It is implemented on the "Human Model 2" tool, from the extra "Options" menu.
If someone wants to look at the XML of both files it should be possible to port that feature to the bike fit tool. It's not documented though.
edit: what I mean is that it's possible right now, without waiting for a new Kinovea version. But the XML file of the tool has to be modified.
Ghosting for a single frame like you have here might be doable in Kinovea by
- loading the same video twice in dual mode,
- synchronizing with a 1-frame delay,
- enabling image superposition.
Here is an example (deinterlaced 25 fps):
I have marked the relevant buttons in the capture.
With interlaced video this gives 4 visible fields, but the blending at 50% makes it hard to see details.
It's actually relatively easy to do for fixed cameras. A basic approach is the following: you average all the pixels from all the frames and it gives you the naked background. Then for each frame (or each second or third frame or whatever), you compare each pixel from the frame against the average background. If the pixel is different, it must pertain to the moving subject, so you copy it to the final image.
A lot can be improved upon this naïve approach to remove noise and ghosting, etc.
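For the curious, here is a rough sketch of the idea in Python with NumPy. It is not an existing Kinovea feature; the function name and the threshold value are mine. It includes one small refinement over the naive loop: for each pixel it keeps the frame that deviates most from the background, so later frames don't overwrite the subject from earlier ones.

```python
import numpy as np

def composite_motion(frames, diff_threshold=30.0):
    """Stroboscopic composite from a fixed-camera sequence (illustrative sketch).

    frames: list of HxWx3 uint8 arrays from a fixed camera.
    diff_threshold: per-pixel deviation (0-255 scale) above which a pixel
    is considered part of the moving subject. Value is an assumption; tune it.
    """
    stack = np.stack(frames).astype(np.float32)        # (N, H, W, 3)
    background = stack.mean(axis=0)                    # average of all frames ~ naked background
    # per-pixel, per-frame deviation from the background, averaged over channels
    diff = np.abs(stack - background).mean(axis=3)     # (N, H, W)
    best = diff.argmax(axis=0)                         # frame where each pixel deviates most
    h, w = best.shape
    rows, cols = np.mgrid[0:h, 0:w]
    candidate = stack[best, rows, cols]                # (H, W, 3) strongest subject pixels
    moving = diff.max(axis=0) > diff_threshold         # keep only clearly different pixels
    composite = np.where(moving[..., None], candidate, background)
    return composite.astype(np.uint8)
```

Noise removal, soft-edge blending, etc. would go on top of this, but the core is really just averaging and thresholding.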
I probably mentioned it elsewhere but this was a feature of the ancestor to Kinovea back in 2005. I know it doesn't help in the least, sorry. I still want to work on this though. Maybe now that we can have ultra wide angle good quality lenses on the cheap the need to implement it for moving cameras is less important (making it work for moving cameras has been the blocking point).
Very good point about multiple view analysis. It departs a bit from what I thought with video quality-degrading issues, but I like to consider what the imaging system can deliver as a whole, be it a single or multiple camera system.
For 3D quantitative analysis, I think 2 cameras is the theoretical minimum but that real motion tracking applications never use less than 4 cameras. (I've used 2-camera systems for tracking eyes or fingers in a small volume, but as soon as you want to track body parts you move to 4 to 8 or even more).
So, what are the main problems and defects of a multi-camera setup? (Considering qualitative analysis only for now.)
Mis-synchronization is probably one of the biggest… We can always synchronize to frame-level in software, but sub-frame sync requires hardware support. That's a clever use of WiFi for sure.
I don't know what constitutes an acceptable synchronization level for sports analysis. (I know that stereoscopic video requires almost pixel-level sync, for example.)
On the subject, I have plans for a half software, half hardware sub-frame level synchronization method using the rolling shutter on consumer USB cameras and an Arduino powered strobe light, I'll post more details if I ever get it working.
Resolving power - lack of focus
- Impacts: the sharpness of details.
- Relevance to sport analysis: high.
- Component: Lens and lens mount.
- How to control or improve:
If the subject is always distant, a fixed-focus camera may be sufficient. A camera with fixed focus should hopefully be focused at infinity at the factory.
The most versatile solution is a manual focus that can be adjusted with a ring or lever.
The most efficient solution may be a motorized focus that we can control in software. (Logitech C920, Microsoft LifeCam). Note that even with motorized focus some webcams can't focus to infinity, and anything farther than a few meters will not be optimally focused.
Some lenses have variable focal length; in this case focusing should usually be redone after changing the focal length.
Some devices have auto-focus capabilities, in this case care should be taken as to where in the image the focus has been locked.
Resolving power - long exposure
- Impacts: the sharpness of details on moving subjects.
- Relevance to sport analysis: very high.
- Component: Sensor.
- How to control or improve:
Some cameras have auto-exposure: they adjust exposure to the measured light level. This lowers reproducibility, and the exposure chosen may not be adequate (long exposure increases motion blur).
The most versatile solution is a camera for which exposure duration can be changed manually and is capable of short exposures (Exact requirement to be assessed).
- Compromise: low exposure means less light collected at the pixel sites. For laboratory setups artificial lights may be needed.
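To give an idea of the orders of magnitude involved, here is a back-of-the-envelope calculation of motion blur in pixels. All the figures (subject speed, field of view, image width) are assumed values for illustration, not measurements:

```python
def motion_blur_px(speed_m_s, exposure_s, field_width_m, image_width_px):
    """Motion blur in pixels for a subject crossing the field of view.

    The blur length in the scene is speed * exposure; it is converted
    to pixels using the horizontal scale of the image.
    """
    metres_per_pixel = field_width_m / image_width_px
    return (speed_m_s * exposure_s) / metres_per_pixel

# A sprinter at ~9 m/s filmed across an assumed 5 m wide field of view at 1280 px:
blur_auto = motion_blur_px(9.0, 1 / 30, 5.0, 1280)    # ~77 px of blur at 1/30 s
blur_fast = motion_blur_px(9.0, 1 / 1000, 5.0, 1280)  # ~2 px of blur at 1/1000 s
```

Going from a typical auto-exposure duration to a short manual one cuts the blur by more than an order of magnitude, which is exactly why manual exposure control matters so much here.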
Resolving power - pixel count and lens resolution
- Impacts: the sharpness of details.
- Relevance to sport analysis: high.
- Component: Sensor and lens.
- How to control or improve:
Some devices are actually limited by their lens, in that the lens itself is not able to project an image sharp enough to distinguish details that are two pixels apart.
More pixels is better but only if the lens is adequate. For a given sensor size, more pixels means smaller ones, which makes it more difficult for the lens to match resolution.
Lens quality is measured with various metrics like lp/ph or MTF curves. A recent evolution is the use of Megapixel ratings. The lens fitted on the camera should have a megapixel rating at least as high as the pixel count of the sensor. (ex: a 3MP-rated lens for 1920x1080 images). A good introductory resource on lens quality measurement methodology is at Cambridge in Colour.
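As a rough rule of thumb, the lens has to resolve one line pair per two pixels (the Nyquist limit). Here is the arithmetic, with an assumed sensor size for illustration:

```python
def required_lens_lp_per_mm(sensor_width_mm, image_width_px):
    """Lens resolution needed to match the sensor (Nyquist limit).

    One line pair needs at least two pixels, so the lens must resolve
    image_width_px / 2 line pairs across sensor_width_mm of glass.
    """
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return 1.0 / (2.0 * pixel_pitch_mm)

# Assumed example: a 1/3" sensor (~4.8 mm wide) imaged at 1920 px across.
lp = required_lens_lp_per_mm(4.8, 1920)  # 200 lp/mm, a demanding figure for a small lens
```

This is why shrinking pixels without improving the glass quickly stops paying off: the smaller the pixels, the higher the lp/mm the lens must deliver.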
Resolving power - image processing and JPEG compression
- Impacts: the sharpness of details.
- Relevance to sport analysis: high.
- Component: Image processing chip on the camera or recording software.
- How to control or improve:
The best solution is a camera that can provide the raw images to the computer, so that the processing can be performed there.
The issue is that bandwidth is limited, so it is not always possible to transmit full color frames at the full framerate.
A camera should allow us to control the JPEG compression levels. (No USB camera currently does this to my knowledge).
Spherical distortions - wide angle and ultra wide angle lenses
- Impacts: measurements of distances and speeds.
- Relevance to sport analysis: mid to high.
- Component: Lens.
- How to control or improve:
Lenses with normal field of view (less than around 65°) usually have very low distortion.
For wide angle, a lens without distortions should be preferred, but the cost can skyrocket pretty quickly.
The distortion can be calibrated in software and taken into account for measurements.
- Compromise: a subject evolving at the same distance from the camera will cover fewer pixels, so resolution on the subject is lower.
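To illustrate why uncorrected distortion biases distance measurements, here is the radial term of the Brown-Conrady model that calibration tools typically estimate (the coefficient value below is made up for the example; a real one comes from a checkerboard calibration):

```python
import numpy as np

def apply_radial_distortion(points, k1, k2=0.0):
    """Map undistorted normalized image coordinates to distorted ones
    using the radial part of the Brown-Conrady model.

    k1, k2 are lens-specific coefficients estimated by calibration;
    negative k1 gives barrel distortion, typical of wide-angle lenses.
    """
    pts = np.asarray(points, dtype=float)
    r2 = (pts ** 2).sum(axis=1, keepdims=True)  # squared distance from image centre
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

# With an assumed barrel coefficient k1 = -0.2, the same physical segment
# measures shorter near the image edge than at the centre:
center = apply_radial_distortion([[0.0, 0.0], [0.1, 0.0]], k1=-0.2)
edge = apply_radial_distortion([[0.8, 0.0], [0.9, 0.0]], k1=-0.2)
len_center = center[1, 0] - center[0, 0]
len_edge = edge[1, 0] - edge[0, 0]
```

So a speed measured near the border of a wide-angle image can be off by tens of percent unless the distortion model is applied first.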
I would like to use this thread to compile a list of quality-degrading factors in video, how much they are relevant to sport analysis, which component are involved, and how we may improve upon them.
This list should be general and relevant to anything that can provide a stream of images and store it on-device or transmit it to a computer (DSLR, industrial camera, USB camera, IP camera, smartphone, etc.). The trigger for this topic though, is the advent of high-quality, interchangeable, small lenses for surveillance-type cameras. We are now very near a day where little USB cameras can be considered serious imaging devices.
Please add your input, illustrative images, comments, remarks, additional degrading factors, formatting suggestions, etc.
Maybe at some point we can create a PDF or something. It should be useful for evaluating new hardware on the market and as a buyer's guide.
Here are some topics that could be covered:
limited resolving power - lack of focus.
limited resolving power - long exposure.
limited resolving power - pixel count and lens resolution.
limited resolving power - image processing and JPEG compression.
spherical distortions - wide and ultra wide angle lenses.
vignetting - mechanical and optical.
noise.
flares.
limited temporal sampling granularity - low framerate.
temporal distortion - rolling shutter.
limited illuminance - low aperture.
limited dynamic range.
limited depth of field.
chromatic aberrations.
unfaithful color reproduction.
Ok, this is officially very cool :-)
I wonder why I never tried that, thank you for the heads up!
Exposure compensation seems to change the gain level. I could not find a setting for exposure duration unfortunately. The auto-exposure is what degrades the framerate in low-light.
It made me realize that the image size should be displayed in Kinovea for these streams, as it's not always given.
Hi,
I have not yet tried to serve a smartphone camera stream as MJPEG. It is quite relevant to the current effort in the capture module.
What application do you use on the device and what type of framerate/frame size does it achieve?
Hi,
The second issue is linked to the first. Kinovea will mark the file as 30fps. If the incoming stream was actually 9fps it will be played back accelerated.
There are several things that can cause a decrease in stream framerate. The easiest one is when the exposure duration is set to longer than the frame interval. Exposure will take precedence and the framerate will be lowered automatically and silently.
Go into the settings, device properties, locate the exposure parameter, make sure "Auto" is not checked, and slide to the left. The image will get darker and darker as you decrease the duration of exposure, this is expected. Test with various settings to see the impact on framerate.
The next version of Kinovea will allow you to get more out of the cameras. The Microsoft LifeCam Studio has MJPEG on-board compression and Kinovea can leverage it to improve recording performance.
Hi,
I have not, but my development machine is now x64. I hope to be able to work on this and have something out sometime this year.
You should have something roughly like this:

The right click must be done on the tiny square to the right of "Capture", but I guess you already found that if you rendered the pin.
This camera module is quite interesting: the ELP-USBFHD01M from Ailipu Technology. (You may find it on your favorite Chinese reseller at about 35€ + shipping).
It is based on an Omnivision OV2710 sensor which does 1920x1080@30fps, 1280x720@60fps and 640x480@120fps.
They apparently implemented MJPEG compression on all of these sensor outputs and the camera is UVC compliant.
It comes with various M12 lenses options so the lens might be interchangeable.
It is not clear whether it has manual exposure or not. I just ordered some to find out if it delivers on the specs.
Re: lenses, I think the C920 has a proprietary lens mount and the lens can't be swapped easily. (Haven't disassembled mine yet).
One thing that crossed my radar is this S-Mount for the C920 board.
Ah, if only all USB cameras used standard S-mount so we could swap in M12 lenses that would be great…
There are high-quality wide-angle M12 lenses that would be very interesting to use, and we could reuse them when upgrading cameras.
This guy went the other way and created a CS-mount adapter.
I have also yet to find a 60fps USB 2.0 camera with more than 640x480 resolution.
- The PS3Eye does 640x480 @ 75fps.
- The logitech C910 is apparently also capable of 640x480 @ 60 fps. The option was removed in the C920 for some reason.
- The C920-c (business version?) is reported by some sources as 960x720 @ 60 fps but I can't verify.
- I don't know about the C930 or C615.
- There is also a Chinese clone Gucee that claims 640x480 @ 60 fps.
1080p @ 60 fps is nowhere to be found…
It shouldn't be a bandwidth problem for the recent Logitech cameras like the C920 because unlike the PS3Eye and other cameras they have on-board MJPEG and H.264 compression. Maybe the compression chip is not fast enough to keep up, or they just didn't bother with it as it's less likely to interest their primary market.
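The bandwidth argument is easy to check with napkin math. Assuming YUY2 (16 bits/pixel, the common uncompressed UVC format) and a practical USB 2.0 ceiling of roughly 35 MB/s (an approximation, not a spec figure):

```python
def raw_stream_mb_per_s(width, height, fps, bits_per_pixel=16):
    """Bandwidth of an uncompressed video stream in MB/s.

    16 bits/pixel assumes YUY2, the usual uncompressed UVC pixel format.
    """
    return width * height * fps * bits_per_pixel / 8 / 1e6

usb2_practical = 35.0  # MB/s, rough real-world USB 2.0 throughput (assumption)

raw_1080p60 = raw_stream_mb_per_s(1920, 1080, 60)  # ~249 MB/s, hopeless over USB 2.0 raw
raw_vga60 = raw_stream_mb_per_s(640, 480, 60)      # ~37 MB/s, right at the limit
```

Uncompressed 1080p60 needs about 7x what USB 2.0 can move, so on-board compression is the only way; 640x480 @ 60 fps raw sits right at the ceiling, which fits the observation that nothing above that resolution/framerate combination exists on USB 2.0 without compression.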
The next options are USB 3.0 and GigE cameras, but the price jump is painful. Also the stream is raw so now it's the PC-side and the hard-drive that have to cope with compression and writing speed.