I would like to know why multiple views are necessary for calibrating lens distortion. When I upload annotations from calibration grids of a checkerboard recorded from 3 different angles (as the manual describes, and as other calibration software and code describe), the calibration file does not appear to work: it produces a wildly distorted image and I am unable to obtain linear measurements. However, if I create 5 distortion grids from a single view of a checkerboard filling the field of view, recorded in an orthogonal plane, calibration with this set of annotations corrects the image and the resulting linear measures almost perfectly.

I do not know how the calibration is coded in Kinovea, and in my paper on Kinovea's validity for linear measurement I would like to explain why recording a checkerboard from a single view is sufficient to obtain effective distortion calibration. I suspect that none of the validity papers published so far corrected for distortion because they also ran into issues applying it the way the manual describes. I am following a very good coding blog on distortion calibration in general, and they mention that a single view might be sufficient.

Has anybody else had problems with the multiple-view process? Can anybody tell me what algorithm Kinovea uses for distortion calibration, or explain why multiple views would be necessary, or why one view is sufficient to obtain a good calibration? I have a video tutorial on calibrating Kinovea from one view of a checkerboard coming out with my article, once I can address this question.
Thanks for reporting about this.
The distortion model is the one from OpenCV.
http://docs.opencv.org/doc/tutorials/ca … ation.html
Kinovea solves for the three radial distortion coefficients k1, k2, k3, as well as the tangential distortion coefficients p1, p2 and the intrinsic parameters fx, fy, cx, cy.
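For anyone wondering what those coefficients actually do, here is a minimal sketch of the OpenCV (Brown-Conrady) distortion model that those parameters belong to. The coefficient values below are made up purely for illustration, they are not Kinovea defaults:

```python
# Hypothetical coefficients, for illustration only (not Kinovea's values).
k1, k2, k3 = -0.25, 0.08, 0.0                 # radial distortion
p1, p2 = 0.001, -0.0005                       # tangential distortion
fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0   # intrinsics (focal lengths, principal point)

def distort(x, y):
    """Apply the OpenCV distortion model to a point in normalized camera coordinates,
    then project it to pixel coordinates with the intrinsics."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * x_d + cx, fy * y_d + cy

# A point on the optical axis is unaffected and lands on the principal point:
print(distort(0.0, 0.0))  # (640.0, 360.0)
```

With a negative k1 (barrel distortion, typical of action cameras) points are pulled toward the image center, and the effect grows with the distance from the center, which is why a pattern filling the whole field of view constrains the fit so well.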
When you run the calibration, the coefficients found are written to the log and should be available in the camera calibration dialog.
I imagine there are two places where this process could fail: either in the initial calibration, or when applying it.
You may be right that only one view is sufficient. I think the idea behind multiple views was to have data in all parts of the image: with only one view, if the pattern is filmed at an angle, some corners of the image may have no data at all. I'm not sure why it would fail with multiple views, though; it worked in the past. I'll have to retry it.
It should also be possible to automatically find the corners in the image and avoid the whole process of manually placing them.
You can also tweak the coefficients manually to find a good fit. In that case, usually only k1 and k2 are used.
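To build some intuition for what tweaking k1 and k2 does, here is a small pure-Python sketch (coefficient values invented for the example) that inverts the radial-only part of the model by fixed-point iteration, which is the usual way an undistortion step recovers the corrected point:

```python
def undistort_radial(x_d, y_d, k1, k2, iters=10):
    """Invert x_d = x * (1 + k1*r^2 + k2*r^4) by fixed-point iteration:
    start from the distorted point and repeatedly divide by the scale
    evaluated at the current estimate."""
    x, y = x_d, y_d
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1 + k1 * r2 + k2 * r2 * r2
        x, y = x_d / scale, y_d / scale
    return x, y

# Round-trip check with made-up coefficients: distort a point, then undistort it.
k1, k2 = -0.2, 0.05
x, y = 0.3, -0.4
r2 = x * x + y * y
s = 1 + k1 * r2 + k2 * r2 * r2
x_d, y_d = x * s, y * s
x_u, y_u = undistort_radial(x_d, y_d, k1, k2)
print(x_u, y_u)  # close to the original (0.3, -0.4)
```

This also shows why k1 dominates when you adjust coefficients by hand: it multiplies r^2, while k2 multiplies r^4 and only matters near the image edges.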
Thank you Joan. That is what I thought, so I recorded the checkerboard from a large screen, close enough that it filled the camera's field of view. I have been reading the OpenCV material, and I am happy to know the model is from OpenCV: now I have the code to understand how it is programmed. They have nice tutorials on lens distortion that other users might find helpful:
https://learnopencv.com/camera-calibrat … ng-opencv/
I tried several times before giving up and moving to the single view, which, to my great surprise, worked completely. I will try again with the 3 views, but I am pretty sure it is failing somewhere.