By default it will only "see" (as in create a thumbnail for) the cameras that are connected directly to the computer.

For network cameras you have to use the "Manual connection" button under the camera list and configure the camera IP and access URL there.

As far as I can see, the camera streams H.264, which is not supported as an IP camera stream in Kinovea. Only MJPEG streams are supported.

OK, that is strange and sounds like a bug. You should be able to see velocity as soon as a trajectory point has two neighbors (one before and one after), and acceleration as soon as it has four. So only the first and last points should show "###".

What happens when you display "Position" instead? Do you get anything?
Can you save to a KVA file and send it by mail please? (joan at kinovea dot org), thanks.
Can you right click the trajectory and go into "Data analysis" dialog. Do you get the position over time and other plots there?

The ELP has some advantages but it definitely has lower image quality than the C920. There are some sample videos of the ELP at the bottom of the blog post, shot in less-than-ideal lighting (as is often the case).

(I think you should be able to get the Logitech for less than 70€ now, as the price has dropped).

I can't really comment on whether 480p is enough for your application. Try to find someone with a cheap webcam and see for yourself if it's acceptable.

Regarding gait analysis, one aspect to keep in mind is USB bandwidth. I have seen podiatry setups with a lot of USB peripherals (gait platform, other specialized devices, etc.), and this can create issues for the cameras if you want two of them streaming at the same time on top of all the other devices.

A C920 should give very decent results. You can configure the stream format, image size, framerate, exposure and gain directly in Kinovea. There is also special code to store the MJPEG stream directly to disk in order to reach peak performance during recording and avoid frame drops.
One drawback is that the focus is designed for room-scale distances and degrades a bit if the subject is 5 meters away or more.
I haven't personally tested the C930 but from all I've gathered there is no real advantage as far as filming sport is concerned.

Theoretically the Reflex would give a much better image but I'm not sure it will work at all. Does the HDMI output let you stream the live content seen by the sensor to a TV? Or is it only for playing back videos previously recorded on the internal storage? Also, video capture boxes haven't been tested and I haven't had any reports that any of them work.

coxisambo wrote:

The only thing is that when you ask to show measure it shows position and displacement values, but for velocity, acceleration and their components it appear ### instead of numbers.

This usually happens when there aren't enough data points to compute the value. Since velocity and acceleration are derivatives and the data is filtered to remove noise, there is a minimum number of points that needs to be present before and after the position of interest in order to calculate the kinematic quantity at that point.
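As a rough illustration, here is a sketch with plain central differences (not Kinovea's actual filtered implementation, which uses wider windows) showing why the edges of a trajectory cannot have a value:

```python
# Minimal sketch: central differences need neighbors on both sides,
# so derivatives are undefined at the edges of the trajectory.

def central_velocity(positions, dt):
    """Velocity via central difference; None where a neighbor is missing."""
    v = [None] * len(positions)
    for i in range(1, len(positions) - 1):
        v[i] = (positions[i + 1] - positions[i - 1]) / (2 * dt)
    return v

def central_acceleration(positions, dt):
    """Acceleration via second central difference; undefined at both ends."""
    a = [None] * len(positions)
    for i in range(1, len(positions) - 1):
        a[i] = (positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt**2
    return a

x = [0.0, 1.0, 4.0, 9.0, 16.0]       # positions at 1 s intervals
print(central_velocity(x, 1.0))      # → [None, 2.0, 4.0, 6.0, None]
print(central_acceleration(x, 1.0))  # → [None, 2.0, 2.0, 2.0, None]
```

The `None` entries at the ends play the role of the "###" markers: there is simply no derivative to compute there. With noise filtering, the window is wider and more edge points are affected.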

coxisambo wrote:

The white hand that moves during left button mouse clicked do not allow position the line on to the articular centre, it diminish digitizing precision. A cross or a transparent hand would be better.

Yes, fair point. Note that you can zoom in to 600% with CTRL+mouse scroll, which should provide sub-pixel accuracy.

803

(8 replies, posted in General)

Ah, maybe I wasn't clear enough about what the feature is supposed to do.

The image will not be undistorted; it is the coordinate system that will now take the distortion into account. So if you add a planar or perspective coordinate system, the coordinates used for positions, distances, angles, speed, etc. will be distortion corrected.

If you add lines or grids, you will see them bend along the distortion field.
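To picture what "bending along the distortion field" means, here is a toy sketch using a generic Brown radial model with made-up coefficients (not Kinovea's code): a segment that is straight in the corrected coordinate system maps to a curve in image space.

```python
def distort(xn, yn, k1=-0.2, k2=0.05):
    """Apply a simple radial distortion model to normalized coordinates.
    The coefficients k1, k2 are made up for illustration."""
    r2 = xn * xn + yn * yn
    f = 1 + k1 * r2 + k2 * r2 * r2
    return xn * f, yn * f

# Endpoints and midpoint of a straight horizontal segment at y = 0.5:
points = [(-0.8, 0.5), (0.0, 0.5), (0.8, 0.5)]
bent = [distort(x, y) for x, y in points]
for p in bent:
    print(p)  # the midpoint lands at a different y: the drawn line bends
```

The farther a point sits from the image center, the more it is pulled inward (with these barrel-distortion coefficients), which is exactly the bending you see on drawn lines and grids.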

Rectifying the images in real time is costly and Kinovea's architecture is not suitable for this as it stands.

Regarding accuracy, there are arguments in both directions. On one hand, it is better to digitize points on the actual pixels captured by the sensor rather than on rectified images, which are interpolated. On the other hand, a circular marker will no longer be circular at the image periphery, which could limit the accuracy of automated tracking. Currently the philosophy is to respect the original image from the sensor as much as possible.

Hi,
Yes they are UVC compliant and you can configure and drive them in Kinovea.

Note that there are a lot of models from this vendor, with a few baseline hardware designs and many lens variations on top of them.
For high framerates you'll be looking at the ones based on the OV2710 sensor (ELP-USBFHD01M).

I wrote a review of this model on my personal blog here.

In my experiments it doesn't do 120 fps but 100 fps, and only at the lower resolution (640×480). 1280×720 @ 60 fps is nice, though there are other shortcomings.

805

(1 replies, posted in Français)

Hello,
Unfortunately this feature is not supported at the moment.
If you film with a phone or a tablet, hold the device in landscape orientation to avoid producing videos that are taller than they are wide.

I reproduce a problem but I'm not sure it's the same.

For me it's the 29.97 fps video that is further along the timecodes, not the other way around.
It is running too fast by a bit less than 1%, which is the expected error from the rounding issue mentioned earlier.

The issue only occurs during playback itself, as soon as I pause and restart playback the synchronization error is reset to zero. There is no issue when using the common slider to set the common time. It also has no impact on time-dependent computations like chronometer, kinematics, etc.

I acknowledge the problem with timer precision. I'll have to think about the proper way to fix it.

Unless I misunderstand what you mean, I think synchronizing on timecodes is the same as what is currently done. The global time is converted to video-local times (to account for the synchronization point, which is zero in your case, I presume) and then converted to a video-local timestamp to reach the correct frame.

What you describe could be caused by a rounding error somewhere. I will try to reproduce the problem.

Do you have the same effect when you manually set the common time (click/drag inside the common frame tracker)?
There could indeed be an issue with the fact that the playback timer has a granularity of 1ms which cannot express 29.97fps precisely. I will double check that there is an adjustment to account for that.
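The arithmetic behind that suspicion (my assumption about the mechanism, not a confirmed diagnosis) is simple:

```python
# A 1 ms timer cannot represent the 29.97 fps frame period exactly.
true_period_ms = 1000 / 29.97            # ≈ 33.367 ms per frame
timer_period_ms = round(true_period_ms)  # 33 ms is the closest the timer gets
speed_error = (true_period_ms - timer_period_ms) / true_period_ms
print(f"{true_period_ms:.3f} ms vs {timer_period_ms} ms "
      f"-> playback runs ~{speed_error:.1%} too fast")
```

Depending on how the period is actually rounded internally the exact figure differs, but the error is on the order of 1%, which is consistent with the drift described.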

808

(8 replies, posted in General)

Yes, it will work. It doesn't matter whether you film from 10 cm or 5 cm. The program knows the expected geometry of the checkerboard and compares it to what the image shows. Whether the squares occupy 25 pixels or 100 is not important for this. It is however important that all parts of the image are covered, to take asymmetries into account, so the pattern should be filmed from various angles.

It may appear that the distortion changes when you move the camera closer or farther away from the screen but in reality it does not. The periphery of the image still has the same amount of distortion. If you filmed a checkerboard ten times bigger from ten times farther away you would get the same image.

If you run the calibration several times you will see that the coefficients are not exactly the same. It should still work accurately, and this margin of error is probably lower than the error introduced when digitizing point/object positions.

The calibration of distortion works on the projected image so once calibrated it will work no matter how far the subject is.

I would like to understand whether it's a bug or not first.

The way it is supposed to work is that there is a global time defined outside of both videos, and the synchronization ensures that each video's local time maps to the current global time. Normally they should not diverge. At most, if the videos have framerates that are not multiples of each other, one of them might slightly oscillate between being late and early relative to the other.
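A minimal sketch of that mapping, with hypothetical names (this is not Kinovea's actual implementation): the global time goes through the video's sync offset to a local time, which is then snapped to the nearest frame timestamp.

```python
def global_to_frame(global_time_ms, sync_offset_ms, frame_period_ms):
    """Map a global time to the nearest video-local frame timestamp."""
    local_time = global_time_ms - sync_offset_ms   # video-local time
    frame_index = round(local_time / frame_period_ms)
    return frame_index * frame_period_ms           # video-local timestamp

# Two videos at different framerates follow the same global clock:
t = 1000.0
print(global_to_frame(t, 0.0, 1000 / 29.97))  # 29.97 fps: snaps to ≈ 1001 ms
print(global_to_frame(t, 0.0, 1000 / 25.0))   # 25 fps: exactly 1000 ms
```

The 29.97 fps video lands about one millisecond away from the 25 fps one at this instant; that sub-frame offset oscillates from frame to frame but, because each lookup starts from the global time, it never accumulates into a drift.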

There is a dual cursor in the common navigation bar at the bottom. Does it diverge over time?
What are the framerates of each video? (you can see this from the high speed camera menu).

(Assuming version 0.8.24)

810

(8 replies, posted in General)

bluesrumba wrote:

I saw that there is a new function "camera calibration". I thought that could be used to correct the wide angle of some cameras (like the action cameras) .

Here are some notes I wrote last year regarding lens calibration. They should still be relevant I think, let me know if it works.

Summary: film the checkerboard pattern, import 5 images into Agisoft, export the calibration file, import the calibration into Kinovea.

1. In Agisoft Lens: Tools > Show chessboard.
2. Film the screen with the camera from up close. About 10 cm for a GoPro. Film from 5 different angles: one central and the others from the corners. This assumes a flat screen. See example images below.
3. Open the video in Kinovea.
4. In Kinovea: Find 5 clear images (no motion blur) corresponding to the 5 various points of view and export them as images.
5. In Agisoft Lens: Tools > add photos. Add the 5 images.
6. In Agisoft Lens: Tools > Calibration. Check every checkbox except "skew" and "k4". Run calibration.
7. In Agisoft Lens: File > Save calibration…
8. In Kinovea: Image > Camera calibration. File > Import > Agisoft Lens. Import the file and Apply.

Done.

You can reuse the same calibration file for all videos filmed with this camera with the same lens settings. (If you change from wide angle to normal, another calibration file is needed.) Other cameras of the same model will have similar calibration files, but for the most accurate result you'll want to use a file specific to your camera.

The calibration computes the focal length, the misalignment of the lens center with regards to the sensor center, optical axis orientation with regards to the sensor plane, and distortion coefficients.
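Put together, those parameters form the standard pinhole-plus-distortion camera model. Here is a generic textbook sketch covering the focal length, the lens-center offset and the radial distortion coefficients (this is not Agisoft's exact parameterization, and the numbers are made up):

```python
def project(Xc, Yc, Zc, fx, fy, cx, cy, k1, k2):
    """Project a 3D point in camera coordinates to a pixel position
    using a pinhole model with radial distortion (illustrative only)."""
    xn, yn = Xc / Zc, Yc / Zc           # normalized image coordinates
    r2 = xn * xn + yn * yn
    d = 1 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    u = fx * xn * d + cx                # cx, cy encode the lens-center
    v = fy * yn * d + cy                # misalignment w.r.t. the sensor center
    return u, v

print(project(0.1, 0.0, 1.0, 800, 800, 322, 239, -0.2, 0.05))
```

Calibration is the inverse problem: given many observed corner positions of a known checkerboard, solve for fx, fy, cx, cy and the distortion coefficients that best explain them.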

Verification

1. Go into the image tab in the calibration window and check the rectified image.

2. With a perspective plane.
- Reopen the checkerboard video in Kinovea.
- Add a perspective grid on the checkerboard and verify that the lines are correctly distorted.
- Zoom to the maximum, adjust the corners of the perspective grid, and then calibrate the grid using the number of blocks covered horizontally and vertically.
- Display the coordinate system and check that it is correctly distorted. You can see the error accumulating at the periphery.
- Add a line covering a number of blocks, display its measurement and check that it matches.

My calibration images looked like this:
http://www.kinovea.org/screencaps/0.8.x/0823-calibration.jpg