Some more updates

USB topology
If you plan on using several webcams, or if you have a lot of other USB devices connected to your computer, it is important to understand a bit about USB topology to get the best out of your system.

I want to write a longer piece about this because it will be better served by schematics. In a nutshell, you want to connect each webcam to a separate root hub so that it gets its full bandwidth and power.

USB works with host controllers (pieces of hardware), root hubs (one per host controller), regular hubs and devices. A root hub serves several USB ports on the computer, and all of these ports share the bus bandwidth. Cameras need a lot of bandwidth, so it's best to connect them to ports that belong to different root hubs.

Do not bother connecting the camera to a regular hub: the camera will share the bandwidth with the other devices on the same root hub, in addition to the devices connected downstream of the hub. A bus-powered hub will also usually not be able to source enough current for a camera connected downstream.
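As a rough back-of-the-envelope illustration of why two uncompressed cameras on one root hub are a problem (the ~35 MB/s practical throughput figure below is an assumption; real isochronous throughput varies by controller):

```python
# Rough estimate of USB 2.0 bus sharing for uncompressed webcams.
# The ~35 MB/s practical figure is an assumption; real controllers vary.

USB2_PRACTICAL_BYTES_PER_S = 35_000_000  # assumed usable isochronous throughput

def camera_bandwidth(width, height, fps, bytes_per_pixel=2):
    """Bandwidth of an uncompressed stream (2 bytes/pixel for YUY2)."""
    return width * height * bytes_per_pixel * fps

one_cam = camera_bandwidth(640, 480, 30)      # ~18.4 MB/s
two_cams = 2 * one_cam                        # ~36.9 MB/s

print(one_cam <= USB2_PRACTICAL_BYTES_PER_S)   # True: one camera fits
print(two_cams <= USB2_PRACTICAL_BYTES_PER_S)  # False: two on one root hub do not
```

With each camera on its own root hub, each gets the full bus to itself and both fit comfortably.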

To better understand a computer's USB topology, I like to use USB Tree Viewer, which shows the whole USB tree from controllers to devices. It also shows the low-level USB descriptors, which can help you understand what capabilities a camera supports at the USB level, especially for UVC-compliant cameras like the C920.

USB 3.0 ports won't provide increased bandwidth to USB 2.0 devices. There is an entire USB 2.0 compatibility lane inside the USB 3.0 pipe.

872

(12 replies, posted in Cameras and hardware)

Thanks for the report.
Many cards from AVerMedia are advertised as "DirectShow compatible", which does not seem to be the case for other brands. Do you have any experience with these?

The hardest part of making sure Kinovea works well with all kinds of cameras is to get the actual hardware for testing and debugging.

Although donations and my personal investment cover the cost of hosting and bandwidth for the website, getting cameras to test the software with is still an issue. There are many cameras on the market and I cannot justify buying them just to make Kinovea work with a particular product.

And this is where you may enter the picture :-)
With hardware donations you help Kinovea support more cameras and help it become a better solution for you and for everyone else; in the spirit of open source.

This program is mostly oriented towards improving the capture module because this is where being a spare-time project makes things difficult.

You can order hardware through the Amazon Wish List or send it yourself directly using the project postal address listed below.
In any case please do look at the Amazon wish list to get an idea of what is currently needed.


Amazon Wish List
The wish list contains items that I think would be most interesting to get in order to make progress. There is a small comment on the right of the item to explain why it would be useful and the type of work that will be done with the donated device.


Direct shipping
You can also send your hardware directly to Kinovea's postal address. Do not send hardware expecting a return shipment later unless we have formally agreed to it.


Kinovea
14 rue Jean Jacques Rousseau
App 723
33400 Talence
France


Camera manufacturers
If you are a camera manufacturer and your cameras have an SDK, donating a device is a great way to promote your brand and models. Kinovea has been downloaded more than 100,000 times and is a tool of choice in many sport science universities.
Get in touch: asso@kinovea.org


Recently donated
One Logitech C920 HD – October 2014.
I bought this 70€ camera with a year's worth of donations, so I consider it the first donated hardware.
It has boosted the development of a better model for the Capture module and helps with testing HD streaming and recording of USB 2.0 cameras.

Thanks

Permalink : http://www.kinovea.org/en/hardware-donations/

874

(9 replies, posted in General)

Hi,

You can export the calibration parameters from within the calibration dialog using menu File > Save. The default directory is under "CameraCalibration" in the application data directory. Then from the second PC open a video, open the calibration dialog again and do File > Open and point it to the xml file.

To be clear, "Open" and "Save" use the Kinovea format, while "Import" imports the Agisoft Lens format. You can also directly reuse the Agisoft file and import it from the second PC.
You need to open/import it for each video.

The same file can be used for the same camera as long as the camera configuration (focal length, aspect ratio, etc.) doesn't change.

For example, the PS3Eye has two zoom positions selected by turning the lens ring.
On the GoPro, depending on the configuration, you can end up in Wide or Medium (170° and 127° respectively, I think). A different file is needed for each.

Technically, each camera unit will have slightly different values. For example, one of the computed parameters is the position of the lens center relative to the center of the sensor. It will not be exactly the same from one camera to the next.

That being said, the error introduced by these microscopic differences is probably less than the error of the calibration process itself, and less than the error introduced during coordinates digitization. So it may be fine to reuse a file created for a different camera unit unless you are already in an extremely controlled setting.

I'll gladly publish contributed calibration files on a dedicated page on the site so we can share them and experiment. Send them to joan at kinovea dot org and add details on the configuration with which they were created. If you validate that they also work at other configurations please mention it as well.

Interesting.
In AForge the filter graph is built using Intelligent Connect: the capture source and sample grabber are added to the graph, then a call to Render asks the filter graph manager to do the connection. Can you tell if the CodeLabs sample uses DirectConnect instead, by any chance?
I've seen that even when doing RGB24 to RGB24 there are extra filters added by the manager for some reason, including a color space converter…
I'm thinking of bypassing Intelligent Connect for media types supported natively by the application; it would avoid extra copies and allocations along the way, in addition to avoiding this type of surprise.

With regards to flipping, it's a known difficulty with DirectShow: some formats need to be flipped, others do not.

Currently the code assumes the image is flipped (as it should be for RGB24) and inverts it when creating the bitmap. As it works on other machines, I believe that assumption is correct, but maybe there is another filter upstream that makes a faulty conversion…
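For reference, here is a minimal sketch of the row reordering that has to happen for bottom-up RGB24 buffers (pure Python; a real implementation would use the stride reported by the driver, which is assumed here to be exactly width × 3, with no row padding):

```python
def flip_bottom_up(buffer: bytes, width: int, height: int, bpp: int = 3) -> bytes:
    """Reorder the rows of a bottom-up image buffer into top-down order.

    Assumes the stride is exactly width * bpp (no row padding).
    """
    stride = width * bpp
    rows = [buffer[i * stride:(i + 1) * stride] for i in range(height)]
    return b"".join(reversed(rows))

# A 2x2 image stored bottom-up: the visually-lower row comes first in memory.
bottom_up = b"\x01\x01\x01\x02\x02\x02" + b"\x03\x03\x03\x04\x04\x04"
top_down = flip_bottom_up(bottom_up, 2, 2)
# top_down now starts with the \x03 row, i.e. the visually-top row.
```

If the source image was in fact not flipped, applying this unconditionally is exactly what produces an upside-down picture.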

OK, I still have a PS3Eye around. Although the lens is dead it's still good for testing.

I'm using CL-Eye-Driver-5.3.0.0341. In Graph Studio, Graph > Insert Filter, I have it listed as "PS3Eye Camera" in both the DirectShow Filters category and the Video Capture Sources category. You can change the category in the top-left combo box.

I also have it listed as "USB Camera-B4.04.27.1" in the WDM Streaming Capture devices. This is the actual name of the device at the USB level. It's not UVC compliant though, so Windows cannot automatically wrap it in a DirectShow capture source filter (hence the need for a third party driver). This filter only has the audio source.

Anyway, the list of supported media types is presented strangely: each framerate is a different media type. Normally there would be a list of media types for the various frame sizes, and a list of framerates supported at each frame size.

I can see that the main media types supported are RGB32 and RGB24, with two resolutions each.
We can also see this in the demo application, under Options > Video capture pin. Half of the entries are listed as "(32 bits)" and the lower half as "(24 bits)".

One way to get the problem you are having would be if some component interpreted an RGB32 sample as an RGB24 image. Unfortunately, due to a limitation in the current version, RGB32 is automatically chosen in Kinovea. I'm experimenting with a version that lifts this limitation, so I might contact you off-list for testing if that's OK with you.
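To illustrate that failure mode with a toy example (the helper below is made up for illustration): if a component reads an RGB32 buffer with RGB24 arithmetic, the extra padding byte makes every subsequent pixel drift by one byte, which quickly scrambles the image:

```python
def pixel_at(buffer, x, bytes_per_pixel):
    """Read one pixel (B, G, R) from a packed single-row buffer."""
    i = x * bytes_per_pixel
    return tuple(buffer[i:i + 3])

# One row of 3 identical RGB32 pixels, laid out as B, G, R, X (padding).
row = bytes([10, 20, 30, 0] * 3)

correct = pixel_at(row, 1, 4)  # (10, 20, 30): proper RGB32 addressing
wrong = pixel_at(row, 1, 3)    # (0, 10, 20): off by the padding byte
```

The drift grows by one byte per pixel, so after a few pixels the channels are completely shuffled, which matches the kind of color corruption being described.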

Ooh, very interesting :-)

I am deep into the capture code these days so this case is of utmost interest.

Basically it should work like this :
1. The camera driver exposes a number of formats that it supports, usually what the camera can source.
2. The capture code creates a "Sample Grabber" specifying format RGB24.
3. The capture code asks DirectShow to connect the camera driver to the sample grabber, and DirectShow adds the necessary converters in between (the path from source to destination might have several steps).
4. During capture, samples are grabbed in RGB24, a Bitmap object is created with the sample inside, and the bitmap is sent downstream to the rest of the code.

In 1, the source format from the driver can already be RGB24, but usually it's some form of YUV format like YUY2 or I420. It may also be a compressed format if the camera has on-board compression (MJPEG or H.264).
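For reference, this is what the YUV-to-RGB step amounts to for one YUY2 macropixel, using the standard BT.601 "studio swing" coefficients (the coefficients are the usual ones, but a given camera or converter could of course use a different matrix, which is one way colors end up wrong):

```python
def yuy2_pair_to_rgb(y0, u, y1, v):
    """Decode one YUY2 macropixel (2 pixels sharing chroma) to two RGB triples.

    Uses BT.601 'studio swing' coefficients (Y in 16..235, chroma centered on 128).
    """
    def to_rgb(y):
        c, d, e = y - 16, u - 128, v - 128
        clamp = lambda x: max(0, min(255, int(x)))
        r = clamp(1.164 * c + 1.596 * e)
        g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
        b = clamp(1.164 * c + 2.017 * d)
        return (r, g, b)
    return to_rgb(y0), to_rgb(y1)

# A neutral mid-gray: chroma at 128 contributes nothing.
print(yuy2_pair_to_rgb(128, 128, 128, 128)[0])  # (130, 130, 130)
```

Each macropixel is 4 bytes for 2 pixels, which is why YUY2 averages 2 bytes per pixel.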

What might happen is that DirectShow cannot find the proper color space converter to go from the YUV format to RGB24, but normally if there is no match the connection fails entirely.
Another possibility is that it finds the proper color conversion chain but somewhere along it something interprets the source incorrectly.

I think I still have a working PS3Eye, I'll also see if I can find something.
I don't know if the CodeLabs driver is based on the Microsoft UVC driver, as for most cameras; I'll check that later.

If you want to start to investigate, you can download GraphStudioNext. Add your camera and look at the properties of the output pin.

879

(12 replies, posted in Cameras and hardware)

I also need to investigate these capture cards more. It seems the gaming market is creating new demand for them, which could drive prices down.

If the use-case is display only, like delayed playback for example, it might be interesting.

If you want to record to disk it's more difficult; I'm afraid at the moment Kinovea isn't quite capable of recording full HD to disk in real time without dropping frames. Using the on-camera storage is a more appropriate option for recording scenarios.

Some more experiments on my side…

Drivers
On Windows 7 there are two capable drivers, giving slightly different options:
the Microsoft driver that installs itself by default, and the Logitech driver that must be installed manually.

The MS driver provides the stream in YUY2 (an uncompressed format), MJPG and H264.
The Logitech driver provides RGB24 and I420 (two uncompressed formats), MJPG and H264. (These options will be available for selection in the next version.) I haven't yet found out what the actual native uncompressed format of the camera is.

However there is a catch: the official Logitech driver exposes the H.264 stream in an unusual way (on a secondary pin of the DirectShow filter), which makes it invisible to Kinovea and to the majority of DirectShow-based capture applications.

On the upside, when using the Logitech driver, it is possible to get and set the exposure duration with 100µs granularity, with a lowest value at 300µs.

Both drivers lie to some degree about the list of framerates they support. For example, when using MJPG output, the drivers report that they support 7 framerates, from 5 to 30 fps, but whatever we choose, the camera will always use 30 fps.
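A simple way to check what framerate a driver actually delivers, regardless of what it advertises, is to timestamp incoming frames and compute the achieved rate:

```python
def achieved_fps(timestamps):
    """Average framerate over a list of frame arrival times (in seconds)."""
    if len(timestamps) < 2:
        return 0.0
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])

# 31 frames spaced 1/30 s apart -> 30 fps, whatever the driver claims.
ts = [i / 30 for i in range(31)]
print(round(achieved_fps(ts), 1))  # 30.0
```

This is also useful to spot the automatic-exposure framerate drops discussed elsewhere in the thread.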

DirectShow internals
When we connect to the camera, DirectShow does some black magic to find a suitable decoder to bring the camera output down to the application. DirectShow searches the codecs installed on the computer and looks for the best ones to complete the connection.

This means that when the camera outputs H264, for example, a specific codec is selected to do the decoding. It may be a third-party codec installed as part of another application, or one provided with Windows. The behavior of this codec has a direct impact on performance and image quality.

The problem is that codecs register themselves with a "merit" value that changes their likelihood of being selected, so the chosen codec can vary from machine to machine.
More tests are needed to see if it would be practical to try to bypass this system completely or find a set of known-to-work-well codecs for various camera output formats, or at least detect which codec ends up being used.

So I got my C920 earlier this week and I've started experimenting with it. Let's make this the official Logitech C920 thread, and everyone should post their findings or issues with this camera. I'll keep in mind the holy grail of the thread: being able to record and preview two C920s at their full 1920×1080 @ 30 fps, simultaneously.

1. Exposure time.
The camera has a setting for exposure duration, which is great. It is extremely important for sport video, as we already discussed on the forum. Combined with the HD frame size and the 30 fps, it makes this a very interesting tool indeed.

To change the exposure time you have to go into the device property pages, under Camera Control. I think the Logitech software also has a page where you can change it.

The mapping between the settings and actual durations is not published by Logitech though, and there is conflicting information between the DirectShow spec and Logitech. They do have a lower-level API where it's possible to set the duration in increments of 100µs, so I'll try to use that at some point in the future.
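For what it's worth, the DirectShow spec defines the CameraControl_Exposure value as the base-2 logarithm of the exposure time in seconds, which is exactly where the conflict with Logitech's own mapping lies. A quick conversion helper under that spec interpretation:

```python
def directshow_exposure_to_seconds(value):
    """Convert a CameraControl_Exposure value to seconds, per the DirectShow spec.

    The spec defines the value as log2(exposure time in seconds);
    negative values therefore mean fractions of a second.
    Logitech's actual mapping reportedly deviates from this.
    """
    return 2.0 ** value

print(directshow_exposure_to_seconds(-5))  # 0.03125 -> 1/32 s
```

So a spec-compliant driver reporting -5 would mean 1/32 s, but on the C920 the observed durations don't line up with this table.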

2. Framerate auto adjustment.
If you leave the exposure control on automatic, the exposure duration will adjust to the lighting conditions. As exposure and framerate are partly interdependent, it may also alter the framerate. You don't want this.

Especially if you are indoors, the exposure will set itself to longer values, automatically decreasing the framerate.
To get the full 30 fps you need to set exposure on manual (and maybe get powerful artificial lights).

There seems to be a bug in the camera driver, though: it forgets the settings when you restart the streaming.

3. Support in Kinovea
The list of available image sizes / framerates is somewhat limited in the current versions of Kinovea. I have started to change the code to get the full range of capabilities.

The camera has on-board compression and can stream to H.264 and MJPEG in addition to uncompressed.
At the moment Kinovea only exposes the configuration options corresponding to the H.264 stream, due to an underlying limitation that I'm working on lifting.

USB 2.0 doesn't have the bandwidth for uncompressed full HD @ 30 fps, so getting the compressed stream is the only way to reach the top score. However, it means that the images have to be decompressed on the computer to be displayed, and then recompressed to save to disk. I'll later see if it's possible to save the MJPEG stream directly to disk.
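The arithmetic behind that claim, assuming YUY2 at 2 bytes per pixel (and using the raw 480 Mbit/s signalling rate, which real isochronous transfers never even reach):

```python
# Uncompressed 1080p30 in YUY2 (2 bytes/pixel) versus the USB 2.0 bus.
frame_bytes = 1920 * 1080 * 2                # one uncompressed frame
stream_bytes_per_s = frame_bytes * 30        # 124,416,000 B/s, ~119 MiB/s
usb2_theoretical = 480_000_000 // 8          # 60,000,000 B/s signalling rate

print(stream_bytes_per_s > usb2_theoretical)  # True: cannot fit, even in theory
```

The uncompressed stream needs roughly twice the theoretical bus rate, so on-camera MJPEG or H.264 compression is the only route to 1080p30 over USB 2.0.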

882

(13 replies, posted in Cameras and hardware)

Follow-up:
After some testing and off-forum communication, the issue should be resolved in the next version of Kinovea.

883

(13 replies, posted in Cameras and hardware)

Hmm, Point Grey have just changed their web site, so most Google links are dead. It has become harder to find relevant documentation.

Anyway, in order to understand the problem better I would be interested in the following:
- Are you able to configure the stream size/framerate in V1?
- Could you download Graph Studio Next? It is an open source DirectShow utility working at a lower level.

1. Go to Graph > Insert Video Source > FlyCapture2 Camera.
It should add a box for the camera on the main panel. (At that point it crashes for me; the DirectShow filter apparently expects a camera to actually be attached.)
2. How many capture pins are there on the right side of the box (the little protruding squares)?
3. If you right-click the little capture pin and choose Properties, do you get the stream configuration tab (where you can change video format, image size and framerate) in the resulting dialog?
4. And if you go to the Interfaces tab in this dialog, do you see the IAMStreamConfig entry?

Thanks

884

(13 replies, posted in Cameras and hardware)

Yes, I will run some experiments and get back to you for testing.

Thanks for the details.
In the lab I was in, they used USB-to-Ethernet extension cables, so most of the wiring was inside the walls. It's another alternative for indoor settings.

Regarding multi-core processors, they are definitely a game-changer. There are at least 3 intensive threads running in parallel: the image production in the camera driver/grabber, the display of images in the UI thread, and the compression/storage to disk in another background thread. If the CPU has to block any of these to run the others, performance will suffer.

I am making progress on a new architecture that will allow me to better control how everything works together at a low level, and to make sure the stream of images flows down to disk with maximum fluidity.