I would like to use this thread to compile a list of quality-degrading factors in video: how relevant they are to sport analysis, which components are involved, and how we might improve upon them.

This list should be general and relevant to anything that can provide a stream of images and store it on-device or transmit it to a computer (DSLR, industrial camera, USB camera, IP camera, smartphone, etc.). The trigger for this topic, though, is the advent of high-quality, interchangeable, small lenses for surveillance-type cameras. We are very near the day when little USB cameras can be considered serious imaging devices.

Please add your input, illustrative images, comments, remarks, additional degrading factors, formatting suggestions, etc.
Maybe at some point we can create a PDF or something. It should be useful for evaluating new hardware on the market and as a buyer's guide.

Here are some topics that could be covered:

  • limited resolving power - lack of focus.

  • limited resolving power - long exposure.

  • limited resolving power - pixel count and lens resolution.

  • limited resolving power - image processing and JPEG compression.

  • geometric distortion (barrel/pincushion) - wide and ultra-wide angle lenses.

  • vignetting - mechanical and optical.

  • noise.

  • flares.

  • limited temporal sampling granularity - low framerate.

  • temporal distortion - rolling shutter.

  • limited illuminance - small maximum aperture.

  • limited dynamic range.

  • limited depth of field.

  • chromatic aberrations.

  • unfaithful color reproduction.
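Several of these factors can be quantified with back-of-the-envelope arithmetic. As an illustration of the long-exposure item, here is a minimal sketch (all numbers are made-up assumptions, not measurements) estimating how many pixels of motion blur a given exposure produces:

```python
# Rough motion-blur estimate for the "long exposure" factor.
# All numbers below are illustrative assumptions, not measurements.

def motion_blur_px(subject_speed_m_s, exposure_s, field_width_m, image_width_px):
    """Pixels travelled by the subject during a single exposure."""
    px_per_meter = image_width_px / field_width_m
    return subject_speed_m_s * exposure_s * px_per_meter

# A hand moving at 8 m/s, filmed with a 1/30 s exposure,
# over a 4 m wide field of view imaged on 1280 px:
blur = motion_blur_px(8, 1/30, 4.0, 1280)  # ~85 px of smear
```

Even a modest 1/30 s exposure wipes out fine detail on fast movements, which is why short exposures matter so much more for sport analysis than for webcam chat.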

872

(4 replies, posted in General)

Ok, this is officially very cool :-)
I wonder why I never tried that, thank you for the heads up!

Exposure compensation seems to change the gain level. Unfortunately, I could not find a setting for exposure duration. The auto-exposure is what degrades the framerate in low light.

It made me realize that the image size should be displayed in Kinovea for these streams, as it's not always given.

873

(4 replies, posted in General)

Hi,
I have not yet tried serving a smartphone camera stream as MJPEG. It is quite relevant to the current effort in the capture module.
What application do you use on the device, and what framerate/frame size does it achieve?

Hi,
The second issue is linked to the first: Kinovea will mark the file as 30 fps, so if the incoming stream was actually 9 fps it will be played back accelerated.

There are several things that can cause a decrease in stream framerate. The easiest one is when the exposure duration is set to longer than the frame interval. Exposure will take precedence and the framerate will be lowered automatically and silently.
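To make the trade-off concrete, here is a minimal sketch of the relationship (illustrative numbers, not tied to any specific camera):

```python
def effective_fps(requested_fps, exposure_s):
    # A new frame cannot start before the previous exposure ends,
    # so the frame interval is at least the exposure duration.
    return min(requested_fps, 1.0 / exposure_s)

# 30 fps requested, but with a 1/9 s auto-exposure in low light:
actual = effective_fps(30, 1/9)   # ~9 fps delivered

# If the file header still claims 30 fps, playback is accelerated:
speedup = 30 / actual             # ~3.3x too fast
```

This is why the file from the earlier post plays back accelerated: the header framerate and the delivered framerate silently diverge.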

Go into the settings, device properties, locate the exposure parameter, make sure "Auto" is not checked, and slide to the left. The image will get darker and darker as you decrease the exposure duration; this is expected. Test various settings to see the impact on framerate.

The next version of Kinovea will let you get more out of these cameras. The Microsoft LifeCam Studio has on-board MJPEG compression and Kinovea can leverage it to improve recording performance.

875

(7 replies, posted in General)

Hi,
I have not, but my development machine is now x64. I hope to be able to work on this and have something out sometime this year.

876

(12 replies, posted in Cameras and hardware)

You should have something roughly like this :

http://www.kinovea.org/screencaps/0.8.x/0824-graphstudionext.png

The right click must be done on the tiny square to the right of "Capture", but I guess you already found that if you rendered the pin.

877

(12 replies, posted in Cameras and hardware)

This camera module is quite interesting: the ELP-USBFHD01M from Ailipu Technology. (You may find it on your favorite Chinese reseller at about 35€ + shipping.)

It is based on an Omnivision OV2710 sensor which does 1920x1080@30fps, 1280x720@60fps and 640x480@120fps.
They apparently implemented MJPEG compression on all of these sensor outputs and the camera is UVC compliant.
It comes with various M12 lens options, so the lens might be interchangeable.
It is not clear whether it has manual exposure or not. I just ordered some to find out if it delivers on the specs.

Re: lenses, I think the C920 has a proprietary lens mount and the lens can't be swapped easily. (I haven't disassembled mine yet.)

One thing that crossed my radar is this S-Mount for the C920 board.

Ah, if only all USB cameras used the standard S-mount so we could swap in M12 lenses, that would be great…
There are high-quality wide-angle M12 lenses that would be very interesting to use, and we could reuse them when upgrading cameras.

This guy went the other way and created a CS-mount adapter.

879

(12 replies, posted in Cameras and hardware)

I have also yet to find a 60fps USB 2.0 camera with more than 640x480 resolution.
- The PS3Eye does 640x480 @ 75fps.
- The Logitech C910 is apparently also capable of 640x480 @ 60 fps. The option was removed in the C920 for some reason.
- The C920-c (business version?) is reported by some sources as 960x720 @ 60 fps but I can't verify.
- I don't know about the C930 or C615.
- There is also a Chinese clone Gucee that claims 640x480 @ 60 fps.

1080p @ 60 fps is nowhere to be found…

It shouldn't be a bandwidth problem for the recent Logitech cameras like the C920 because unlike the PS3Eye and other cameras they have on-board MJPEG and H.264 compression. Maybe the compression chip is not fast enough to keep up, or they just didn't bother with it as it's less likely to interest their primary market.

The next options are USB 3.0 and GigE cameras, but the price jump is painful. Also, the stream is raw, so the PC and the hard drive then have to cope with the compression and writing speed.
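A quick bandwidth check makes the compression point concrete. Here is a rough sketch (the USB 2.0 effective throughput figure is an assumption; real numbers vary by controller and device):

```python
def raw_bandwidth_mb_s(width, height, fps, bytes_per_pixel=2):
    # Uncompressed YUY2 video carries 2 bytes per pixel.
    return width * height * bytes_per_pixel * fps / 1e6

vga60  = raw_bandwidth_mb_s(640, 480, 60)    # ~37 MB/s
hd1080 = raw_bandwidth_mb_s(1920, 1080, 60)  # ~249 MB/s

USB2_MB_S = 35  # rough practical ceiling for USB 2.0 (assumption)
# 640x480@60 already saturates the bus uncompressed;
# 1080p60 is hopeless without on-board compression.
```

This is why on-board MJPEG/H.264, as on the C920, is the only way a USB 2.0 device can credibly claim high resolutions at high framerates.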

880

(12 replies, posted in Cameras and hardware)

One approach to streamline the workflow without live streaming would be to use a WiFi-enabled SD card. Ideally we would like to access the card as a network drive…

I haven't tried these in a while so I can't recommend a specific product. The original Eye-Fi had some workflow limitations that made it a bit frustrating to use. I don't know if they improved upon this.

I do not know if the camera can stay on while the WiFi-enabled part of the card is reading the files. There might be sharing issues at the filesystem level preventing this.

If someone has experimented with these please share.

The GoPro also has some WiFi options and an embedded HTTP server. I don't know if anyone ever found out how to access the SD card's files this way.

881

(12 replies, posted in Cameras and hardware)

slim_n_trim wrote:

So that we can open files on the GoPro via Kinovea directly (live capturing).

Opening files is different from live capture. Opening files should be no problem, the SD card is seen as a storage device.

I have no direct experience with the cable you mention and haven't passed the page through translation, however note that a cable for FPV is bound to be analog (to limit latency) and will connect to a radio transmitter rather than a PC.

Other video-out options that I know of on the GoPro are the small preview stream over WiFi (not supported in Kinovea) and the HDMI out, which requires dedicated capture hardware (and I do not know how well such devices are supported in Kinovea).

Yes you should be able to do that.
Move to the wanted position with the arrow keys and hit the working zone buttons (green square brackets).

First of all, I'm very sorry for the lost file. Let's get to the bottom of this so that it doesn't happen again.

I'll try to summarize my understanding of the issues, please correct me if I'm wrong.

1. Your workflow is to record video pairs with two Sony DSC-H20 using the internal memory cards. Then you transfer the files manually to the computer. (No live capture).
2. You record a single master video for each camera.
3. For each camera master video, you then create several sub-sequences. You do this in Kinovea by setting a working zone and saving it.

Issue 1: When reopening a pair of sub-sequences, the endpoint and duration do not match, although the original working zones matched.

4. For each pair of sub-sequences, you open the files in Kinovea and create a new working zone to fix the endpoint from issue 1.
5. You save the fixed-up sub-sequences again.

Issue 2: When saving one of the videos, the other video file becomes corrupted.
Issue 3: QuickTime cannot open any video after it has passed through Kinovea saving.

Please correct any misunderstanding.

Let's also name things:
1. Creation of master files A and B in the cameras.
2. Creation of files A1, A2, …, A12, and B1, B2, …, B12 in Kinovea.
3. Creation of files A1fix, A2fix, etc. and B1fix, B2fix, etc. in Kinovea.

The major issue is #2, file corruption.
There is one critical bug that can cause this; it happens when you overwrite a file that is currently open. For example: you open A12 and B12, fix the A12 endpoint, then in the Save File dialog save it back over either A12 or B12. Please create a backup of your files before attempting to reproduce the issue.

I'll add the necessary check to prevent this failure.
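The check could look something like this sketch (hypothetical helper names, not Kinovea's actual implementation): before saving, compare the target path against every file currently open in a player screen and refuse the overwrite.

```python
import os

def is_safe_save_target(target_path, open_file_paths):
    """Return False if target_path is one of the files currently
    open in a player screen (overwriting it would corrupt it).
    Hypothetical sketch, not Kinovea's actual code."""
    target = os.path.normcase(os.path.abspath(target_path))
    open_set = {os.path.normcase(os.path.abspath(p)) for p in open_file_paths}
    return target not in open_set

# Saving the fixed sub-sequence over the still-open B12 is refused:
is_safe_save_target("B12.mp4", ["A12.mp4", "B12.mp4"])  # False
```

Normalizing the paths first matters on Windows, where the same file can be referenced with different casing.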

edit: analysing the log, I think it is indeed what happened:

1673000 - DEBUG - [Main] - VideoReaderFFMpeg - [File] - Filename : Part11-030-Oil.MP4
...
1683468 - DEBUG - [Main] - VideoReaderFFMpeg - [File] - Filename : Part11-462-H2O.MP4
...
1802000 - DEBUG - [Saving] - FrameServerPlayer - Saving selection [1001]->[225225] to: Part11-030-Oil.MP4
...
1823468 - DEBUG - [Saving] - FrameServerPlayer - Saving selection [1001]->[238238] to: Part11-462-H2O.MP4

884

(12 replies, posted in Cameras and hardware)

Hum… More digging landed on a topic where it is said that the HDV source describes itself with VideoInfoHeader2, which is not supported by SampleGrabber, the piece of DirectShow used to collect the stream images directly and make them available to the application. They tried with files from the camera, so it may not be the same for the source itself.

To at least confirm what is said in the forum post, please download GraphStudioNext; it is a lower-level DirectShow application that can be used to diagnose these issues (GraphEditPlus or graphedt should work as well).

Go to "Graph > Insert Video Source" and click on the camcorder. If it doesn't appear here, go to "Graph > Insert Filter" and look for it there.
Once added, right click on the little "Capture" pin and then "Properties…". In the "Capture" tab you will see the raw description of the media types exposed by the source.
Each media type should have either a VideoInfoHeader or a VideoInfoHeader2, and then a BitmapInfoHeader beneath it.

1. Do all the media types really use only VideoInfoHeader2?
2. If you right-click the capture pin, do "Render Pin", then start Play, do you get the video?

885

(12 replies, posted in Cameras and hardware)

Ok, in order to understand the origin of the issue, please test with SnapshotMaker from the AForge.NET sample applications (the third one).
- Can you enumerate the video resolutions?
- Can you connect to the camera?

Thanks