(44 replies, posted in General)

I'm happy to announce the general availability of Kinovea 0.8.25.

This post describes some of the improvements in version 0.8.25 over version 0.8.24.
This release focuses on usability and polishing of existing features, and introduces one new feature in the Capture module.

 
1. General

Starting with version 0.8.25, a native x64 build is provided. There are now 4 download options. The `zip` files are the portable versions and run self-contained in their extraction directory. The `exe` files are the installer versions.
The minimum requirements have not changed and Kinovea still runs under all Windows versions from Windows XP to Windows 10.

The interface is now translated to Arabic thanks to Dr. Mansour Attaallah from the Faculty of Physical Education, Alexandria University - Egypt.

 
2. File explorer

Thumbnails details

The details overlaid on the thumbnails have been extended and made configurable. The framerate and creation time have been added to the fields that can be displayed; the framerate is displayed by default. Right-click the empty space in the explorer to bring up the thumbnail context menu and choose the fields you would like shown.

http://www.kinovea.org/screencaps/0.8.25/0825-thumbnaildetails.png

 
3. Playback module

Interactive navigation cursor

The video now updates immediately when moving the playback cursor. This behavior was previously only active when the working zone was entirely loaded in memory; it is now enabled by default. The experience should be largely improved, but if you are on a less powerful system and navigation becomes problematic, the behavior of the cursor can be reverted from Preferences > Playback > General > "Update image during time cursor movement".

http://www.kinovea.org/screencaps/0.8.25/0825-interactivecursor.png

Video framerate

The internal framerate of the video can be customized from the bottom part of the dialog in Video > Configure video timing. This setting changes the "default" framerate of the video by overriding what is written in the file. This is a different concept from slow motion: the setting redefines the nominal speed of the video, the 100% mark. This is useful when a video has the wrong framerate embedded in it, which can happen occasionally. In general use you would not need this setting very often, but it can save the odd file. Note that this setting is also not the same as the capture framerate that can be set from the same configuration box.

http://www.kinovea.org/screencaps/0.8.25/0825-framerateconfig.png

 
4. Annotation tools & measurements

Named objects

All drawing tool instances (angles, arrows, markers, chronometers, etc.) now have a custom "Name" property. This makes it easier to match drawings with their values when exporting data to a spreadsheet. Regarding spreadsheet export, all lines and point markers are now exported, whether or not they have the "Display measure" option active in Kinovea.

http://www.kinovea.org/screencaps/0.8.25/0825-nameddrawingusage.png
http://www.kinovea.org/screencaps/0.8.25/0825-nameddrawingexport.png

Custom length unit

A new custom length unit can be used to cover use-cases that are not natively supported by Kinovea. By default Kinovea natively supports Millimeters, Centimeters, Meters, Inches, Feet and Yards. The extra option can be used to define a new unit such as Micrometers or Kilometers depending on the scale of the video being analyzed, or any unit specific to your field. The default value for this option is "Percentage (%)". The percentage unit makes sense when analyzing the dimensions of objects purely relative to one reference object. The mapping between video pixels and real-life dimensions in the custom unit is defined by a calibration line, or a calibration grid for non-orthogonal planes. Any line or grid can be used as the calibration object.

The unit is defined in Preferences > Playback > Units > Custom length unit. It can then be used in any line or grid during calibration.
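To illustrate the principle described above, here is a minimal sketch of line-based calibration: one line of known real-world length defines a scale factor that converts every measured pixel distance into the custom unit. The function names are mine, not Kinovea's.

```python
# Hypothetical sketch of line calibration: a calibration line of known
# real-world length defines a scale (units per pixel) applied to all
# subsequent pixel measurements.
import math

def scale_from_calibration(p1, p2, real_length):
    """Scale factor (custom unit per pixel) from a calibration line."""
    pixel_length = math.dist(p1, p2)
    return real_length / pixel_length

def measure(p1, p2, scale):
    """Length of a segment, expressed in the custom unit."""
    return math.dist(p1, p2) * scale

# A 200-pixel calibration line declared to be 50 "units" long
# (micrometers, kilometers, or percent of a reference object).
scale = scale_from_calibration((0, 0), (200, 0), 50.0)
print(measure((0, 0), (400, 0), scale))  # twice the calibration line -> 100.0
```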

http://www.kinovea.org/screencaps/0.8.25/0825-customlengthunitconfig.png
http://www.kinovea.org/screencaps/0.8.25/0825-customlengthunitusage.png

Default tracking parameters

A default tracking profile can be defined from Preferences > Drawings > Tracking. This profile is applied by default to newly added tracks and to trackable custom tools like the bikefit tool or the goniometer. The parameters can be expressed as a percentage of the image size or in actual pixels. Note that in the case of tracks, the tracking profile can also be modified on a per-object basis after the track is added. This is not currently possible for other objects.

http://www.kinovea.org/screencaps/0.8.25/0825-defaulttrackingprofile.png

    
5. Capture module

File naming automation

The file naming engine has been rewritten from scratch to support a variety of automation scenarios that were not previously well supported. The complete path of captured files is configured from Preferences > Capture > Image naming and Preferences > Capture > Video naming.

A complete path is constructed by concatenating three top-level values: a root directory, a sub directory and the file name. It is possible to define different values for these three top-level variables for the left and right screens, and for images and videos. The sub directory can stay empty if you do not need this level of customization. Defining root directories on different physical drives for the left and right screens can improve recording performance by parallelizing the writes.

The sub directory and the file name can contain "context variables" that are automatically replaced just in time when saving the file. These variables start with a % sign followed by a keyword. In addition to date and time components, you can use the camera alias, the configured framerate and the received framerate in the file name.
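The substitution mechanism can be sketched in a few lines. This is an illustration of the behavior described above, not Kinovea's actual code, and the default alias and framerate values here are made up for the example.

```python
# Illustrative sketch of just-in-time context variable substitution:
# %keyword tokens are replaced with live values at the moment of saving.
import datetime

def expand(pattern, camalias="Back camera", camfps="30.00", now=None):
    """Replace %-prefixed context variables in a naming pattern."""
    now = now or datetime.datetime.now()
    variables = {
        "%year": f"{now.year:04d}",
        "%month": f"{now.month:02d}",
        "%day": f"{now.day:02d}",
        "%hour": f"{now.hour:02d}",
        "%minute": f"{now.minute:02d}",
        "%second": f"{now.second:02d}",
        "%camalias": camalias,
        "%camfps": camfps,
    }
    for keyword, value in variables.items():
        pattern = pattern.replace(keyword, value)
    return pattern

t = datetime.datetime(2016, 8, 15, 14, 11, 27)
print(expand("%year%month%day-%hour%minute%second", now=t))  # 20160815-141127
```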

http://www.kinovea.org/screencaps/0.8.25/0825-filenaming.png

The complete list of context variables and their corresponding keywords can be found by clicking the "%" button next to the text boxes.

http://www.kinovea.org/screencaps/0.8.25/0825-contexvariables.png

A few examples:

    Root: "C:\Users\joan\Documents"
    Sub directory: "Kinovea\%year\%year%month\%year%month%day"
    File: "%year%month%day-%hour%minute%second"

    Result: "C:\Users\joan\Documents\Kinovea\2016\201608\20160815\20160815-141127.jpg"

    Root: "D:\videos\training\joan"
    Sub directory:
    File: "squash - %camalias - %camfps"

    Result: "D:\videos\training\joan\squash - Back camera - 30.00.mp4"

If the file name component does not contain any variable, Kinovea will try to find a number in it and automatically increment it in preparation for the next video, so as not to disrupt the flow during multi-attempt recording sessions.
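The auto-increment behavior could look something like the sketch below. This is my own guess at the rule; the exact heuristic Kinovea applies (e.g. which number it picks, how padding is handled) may differ.

```python
# Hedged sketch of auto-increment: bump the last run of digits in a
# variable-free file name, preserving zero-padding.
import re

def next_name(filename):
    """Increment the last number found in the name; return unchanged if none."""
    matches = list(re.finditer(r"\d+", filename))
    if not matches:
        return filename
    m = matches[-1]
    incremented = str(int(m.group()) + 1).zfill(len(m.group()))
    return filename[:m.start()] + incremented + filename[m.end():]

print(next_name("squash-take-007"))  # squash-take-008
print(next_name("attempt9"))         # attempt10
```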

Capture mosaic

The capture mosaic is a new feature introduced in Kinovea 0.8.25. It uses the buffer of images backing the delay feature as a source and displays several images from this buffer simultaneously on screen. The result is a collection of video streams coming from the same camera but slightly shifted in time or running at different framerates. The capture mosaic can be configured by clicking the mosaic button in the capture screen:

http://www.kinovea.org/screencaps/0.8.25/0825-capturemosaicbutton.png
http://www.kinovea.org/screencaps/0.8.25/0825-capturemosaicconfig.png

Modes:

1. The single view mode corresponds to the usual capture mode: a single video stream is presented, shifted in time by the value of the delay slider.

2. The multiple views mode splits the video stream and presents the action shifted a bit further in time in each stream. For example, if the delay buffer can contain 100 images (this depends on the image size and the memory options) and the mosaic is configured to show 4 images, it will show:

  • the real time image;

  • a second image from 33 frames ago;

  • another one from 66 frames ago;

  • and a fourth one from 100 frames ago.

Each quadrant will continue to update and show its own delayed stream. This can be helpful to get several opportunities to review a fast action.
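The arithmetic behind the example above amounts to spreading the views evenly across the delay buffer. The sketch below is my own illustration of that spacing, not Kinovea's actual implementation.

```python
# Hypothetical sketch: n views spaced evenly over a delay buffer of
# `capacity` frames, from the live image (delay 0) to the oldest frame.
def view_delays(capacity, views):
    """Frame delay of each view, evenly spaced over the buffer."""
    step = capacity / (views - 1)
    return [int(i * step) for i in range(views)]

print(view_delays(100, 4))  # [0, 33, 66, 100]
```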

3. The slow motion mode splits the video stream and presents the action in slow motion. Each stream runs at the same speed factor. In order to provide continuous slow motion, the streams have to periodically catch up with real time; having several streams is what allows the slow motion to remain continuous in real time.

4. The time freeze mode splits the video stream and shows several still images taken from the buffer. The images are static, and the entire collection refreshes at once, providing a new frozen view of the motion.

    
6. Feedback

Feel free to use this post for feedback, bug reports, usability issues, feature suggestions, etc.

If you go to Options > Time you can choose a format for timecodes. Using "Total milliseconds" for example should turn all times into numerical values in the interface and when exporting to spreadsheet. Then you can use them for time arithmetic.

In the data analysis window of tracks the times are always in milliseconds, regardless of the time format option. So when copying to the clipboard and pasting into a spreadsheet it should also allow arithmetic.

So I haven't experienced this issue myself yet, but it has been widely reported that a recent auto-update of Windows 10 breaks MJPEG streams for applications based on DirectShow. This impacts, for example, Logitech cameras in Kinovea.

There is a long thread over at MS Dev forums and the issue is apparently breaking many high profile applications like Skype.

Basically they wanted to allow multiple applications to consume camera streams, so they moved the decoding stage upstream in the pipeline and removed access to the compressed stream.

They are working on a fix which should be pushed through auto-update in September.

This is definitely handled differently in 0.8.24.

When using comparison, both videos should progress relative to a common absolute time. There is also an option in the preferences to unlock the speed sliders if that's required. The speed slider percentage is relative to real time (it takes the capture framerate into account if one has been set).

After the synchronization point has been set between videos, the times before it are indeed expressed as negative.

0.8.25 also has an additional setting to force a different reference playback framerate, in case the video metadata is wrong.

It's also my experience that most "cheaper" sensors will auto-expose in low light conditions and degrade the framerate down to 1/exposure duration.

Thanks for providing the straight-from-camera files. Here are the links:
- ex100f-1.avi. 240fps, 1/10000, 512×384px. (8.17 MB).
- ex100f-2.avi. 480fps, 1/10000, 224×160px. (7.36 MB).

Cool. Thanks for posting!

That's a lot of light for a 100 µs exposure, that's good; was it a very sunny day? It also seems you are facing the sun, which would help with the short exposure time. The dynamic range doesn't seem very high though. I wonder how much light would be required for indoor filming.

The rolling shutter distortion is visible on the club on the way down.

Do you still have the raw file straight from camera and could you upload it somewhere? I'm wary of YouTube compression artifacts. We can also host it here if it's not too large.

Do we know what sensor this device is using?

(3 replies, posted in General)

This operation is not currently possible but that's a very good idea!

A slight generalization of this would be "alignment by coordinate systems" or "by calibration", as the approach could also work with the line-based calibration / coordinate system (just origin + scale, axes stay aligned with image axes). Even if less accurate it's often the only calibration available.

Lens distortion correction might also come into play.

The original goal with superposition was actually to compute this transform matrix automatically, refining it using the video sequence to ignore the foreground layer. I very much like the idea of being able to do something manually before implementing an automation of it.

We need the full homography matrix by the way, not just an affine transform, as it must map an arbitrary quad to an arbitrary quad. I've been thinking about how to finally build a platform to experiment with these ideas more easily. I also need to revisit and homogenize the matrix maths in some places. No ETA.
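For reference, a full 3x3 homography can be recovered from 4 point correspondences with a direct linear transform. This is a standard textbook sketch (assuming numpy), not code from Kinovea:

```python
# Minimal DLT sketch: solve for the 3x3 homography H mapping 4 source
# points to 4 destination points, via the null space of the constraint
# matrix (SVD row with the smallest singular value).
import numpy as np

def homography(src, dst):
    """3x3 matrix H mapping each src point to the matching dst point."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply(h, point):
    """Apply H to a 2D point in homogeneous coordinates."""
    x, y, w = h @ np.array([point[0], point[1], 1.0])
    return (x / w, y / w)

# Map the unit square onto an arbitrary quad.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 10), (50, 12), (55, 60), (8, 52)]
H = homography(src, dst)
print(apply(H, (0, 0)))  # ≈ (10.0, 10.0)
```

An affine transform has only 6 degrees of freedom and preserves parallelism, which is why the full 8-degree-of-freedom homography is needed to map arbitrary quads.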

This looks pretty promising!

(2 replies, posted in Cameras and hardware)

Yeah, this is the color model, unfortunately I hadn't been able to test it at the time and 0.8.24 won't work with any of the color models.

The issue should have been fixed for 0.8.25, but the code is still based on the Basler Pylon API v4, which was the current version when I worked on this last fall. Since then Basler updated their software stack to v5, which breaks compatibility. I will have to revisit this for 0.8.26.

coxisambo wrote:

Sometimes the camera is not well placed and it is not 100% horizontal.

The grid coordinate system should be well suited for this, as it's one of its main purposes. Add a perspective grid, right-click one of the corners and open the Calibrate dialog.

coxisambo wrote:

Another thing is to calculate inclinations, or an angle between two segments that are not interconnected by an axis of movement. An angle defined by four points would be the solution. Digitization is then from distal to proximal in both arms.

Yes, that would be a nice tool to have. It might be doable as a custom tool.

Well, the circle tool is designed as an annotation tool rather than a measurement tool. As such it is not included in the spreadsheet export. The center and radius are saved (in pixels) in the KVA file.

The marker tool is going to be the preferred option for exporting the individual coordinates of objects.

I just thought of the fact that you are filming underwater!

The lens distortion is going to be different due to the refractive index of water vs air. Ideally you should perform the lens distortion calibration underwater as well, not reuse the coefficients computed in air. I don't know exactly how much of a difference it will make but it's worth a test I think.

Yes, the frame shift most likely depends on the format, or even on the encoder in the camera. If you want, you can send me a file exhibiting the problem so that I can see if it's a bug that could be fixed somehow. If it's less than 5 MB, send it to joan at kinovea dot org; if it's larger, host it somewhere else and send me the link.

Regarding filtering, I would still suggest testing a digitization of a file for which you have ground truth available, ideally coming from a physical measurement system rather than from another optical system. The filtering helps smooth the minuscule noise introduced by the manual or automated tracking process, where even subpixel placement at 600% zoom might not be enough to get the correct coordinates. I would expect this to be universally beneficial for precision and repeatability.

Note that the radial distortion calibration will also not be perfect, and is usually less accurate at the periphery. The tracking works only in 2D, so deviation from the plane of motion is also going to add errors. If you are computing derivatives, the noise is going to increase the error. If you compute or save acceleration data, for example, I would definitely try to evaluate the accuracy first, to know where you stand.

There is a filtering pass on the raw coordinates to remove the high-frequency noise produced by the digitization. There is more information about the exact process and the selected cutoff frequency in the About tab of the data analysis dialog.

The approach comes from the sport science literature; I don't know its relevance to the burst swimming of fishes, but I think it should still be better than the raw coordinates.
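The principle of such a low-pass filtering pass can be shown with a numpy-only sketch. The sport science literature typically uses a zero-phase Butterworth filter; as a stand-in, this illustration uses a simple centered moving average, so the window size here plays the role of the cutoff frequency. None of this is Kinovea's actual code.

```python
# Sketch of low-pass filtering of digitized coordinates: a centered
# moving average attenuates high-frequency digitization noise while
# mostly preserving the slower underlying motion.
import numpy as np

def smooth(coords, window=5):
    """Centered moving average; the window size sets the effective cutoff."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(coords, pad, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

fps = 100.0
t = np.arange(0, 1, 1 / fps)
signal = np.sin(2 * np.pi * 2 * t)                  # slow 2 Hz motion
rng = np.random.default_rng(0)
noisy = signal + 0.05 * rng.standard_normal(t.size)  # digitization noise
smoothed = smooth(noisy)
print(np.abs(smoothed - signal).mean() < np.abs(noisy - signal).mean())
```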

The spreadsheet export from the main menu does not apply the filtering; it's just the raw data. The shift by 4 frames is strange. Maybe it's one of those files where the first image has a non-zero time coordinate, which causes some issues.

Export through the spreadsheet export in the main menu, or through the data analysis dialog from the trajectory context menu?