856

(12 replies, posted in Cameras and hardware)

Hum… More digging landed on this topic, where it is said that the HDV source describes itself with VideoInfoHeader2, which is not supported by SampleGrabber, the DirectShow component used to collect the stream images directly and make them available to the application. They tested with files from the camera, so it may not be the same for the live source itself.

To at least confirm what is said in that forum post, please download GraphStudioNext; it is a lower-level DirectShow application that can be used to diagnose these issues (GraphEditPlus or graphedt should work as well).

Go to "Graph > Insert Video Source" and click on the camcorder. If it doesn't appear here, go to "Graph > Insert Filter" and look for it there.
Once added, right-click on the little "Capture" pin and then "Properties…". In the "Capture" tab you will see the raw description of the media types exposed by the source.
Each media type should have either a VideoInfoHeader or a VideoInfoHeader2, and then a BitmapInfoHeader beneath it.

1. Do all the media types really only use VideoInfoHeader2?
2. If you right-click the capture pin and do "Render Pin", then start Play, do you get the video?
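
If it's easier, you can also check the media types programmatically. Here is a rough C++ sketch (not code from Kinovea; it assumes COM is initialized and that you already obtained the camcorder's capture pin as an IPin pointer) that enumerates the media types and reports which format structure each one uses:

    // Rough sketch: list the format structure of each media type exposed by a pin.
    // Link with strmiids.lib for the format GUIDs.
    #include <dshow.h>
    #include <cstdio>

    void DumpMediaTypes(IPin* pin)
    {
        IEnumMediaTypes* types = nullptr;
        if (FAILED(pin->EnumMediaTypes(&types)))
            return;

        AM_MEDIA_TYPE* mt = nullptr;
        while (types->Next(1, &mt, nullptr) == S_OK)
        {
            if (mt->formattype == FORMAT_VideoInfo)
                printf("VideoInfoHeader\n");          // usable by SampleGrabber
            else if (mt->formattype == FORMAT_VideoInfo2)
                printf("VideoInfoHeader2\n");         // not usable by SampleGrabber
            else
                printf("other format type\n");

            // Manual equivalent of DeleteMediaType().
            if (mt->cbFormat != 0) CoTaskMemFree(mt->pbFormat);
            if (mt->pUnk) mt->pUnk->Release();
            CoTaskMemFree(mt);
        }
        types->Release();
    }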

857

(12 replies, posted in Cameras and hardware)

Ok, in order to understand the origin of the issue, please test with SnapshotMaker from the AForge.NET sample applications (the third one).
- Can you enumerate the video resolutions?
- Can you connect the camera?

Thanks

858

(23 replies, posted in General)

Hello,

The automated tracking performance will depend on the subject, background, markers, light, motion blur, and other factors.
In any case, you should expect that some points may need to be adjusted manually to get the correct trajectory.

One limitation of the current version is that angle tracking (and other tools) does not let you configure the sizes of the tracking windows (search window and object window).

The trajectory tool does not have this limitation and lets you alter the tracking window sizes. You may want to try this one first. Analysis windows with kinematics plots are also only supported for the trajectory tool.

The trajectory tool tracker's default settings are the same as the angle tool's, but the tracking windows are visible, so you can use it to better understand why the automated tracking fails in the case of the angle tool. The usual issue is that too much of the background is included in the object window. In that case you would have to somehow make the markers appear bigger (by using bigger ones or moving the camera closer, for example). You should use round-shaped markers. It may take a few tries to find the optimum sizes for the combination of markers/background/motion blur in your videos.

You could also use three trajectory tools and compute the angle externally.
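
If you go that route, computing the angle from the three exported trajectories is straightforward. A minimal sketch (the marker positions are purely hypothetical values for one frame):

    // Minimal sketch: angle at vertex B formed by points A-B-C, in degrees.
    #include <cmath>
    #include <cstdio>

    struct Point { double x, y; };

    const double kPi = 3.14159265358979323846;

    double AngleAt(Point a, Point b, Point c)
    {
        double v1x = a.x - b.x, v1y = a.y - b.y;   // vector B -> A
        double v2x = c.x - b.x, v2y = c.y - b.y;   // vector B -> C
        double dot = v1x * v2x + v1y * v2y;
        double cross = v1x * v2y - v1y * v2x;
        return std::atan2(std::fabs(cross), dot) * 180.0 / kPi;  // 0..180
    }

    int main()
    {
        // Hypothetical positions of hip, knee and ankle markers on one frame.
        Point hip = {312, 240}, knee = {330, 360}, ankle = {318, 470};
        printf("Knee angle: %.1f degrees\n", AngleAt(hip, knee, ankle));
        return 0;
    }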

When the tracking eventually fails in the trajectory tool, rewind to the first point of failure and use the right-click menu to delete the bogus points that come after it.

edit: Another thing to note: the current algorithm doesn't fare very well with occlusions. In squat jump scenarios, for example, an arm might temporarily occlude a hip marker. In that case it might be required to manually adjust the tracker during the frames where the marker is hidden.

Also review this topic dedicated to markers.

Since the primary target of the C920 is video calls, web casting and the like, its "infinity" focus is quite short (a few meters). When used in the field, we'll usually want to focus farther to get the clearest images possible.
Check out the "Hubble-fix" from wxforums.net; the result is quite striking!

860

(6 replies, posted in Bug reports)

No, unfortunately it's not something that can be easily fixed, as the constant framerate assumption is scattered throughout the code.

861

(6 replies, posted in Bug reports)

Thanks for the sample!
I reproduced the problem right away. Unfortunately I cannot fix it at the moment.

It is an unusual file in the sense that it actually has a variable framerate. Basically, inside a video file, in addition to the global framerate, each individual frame has a timestamp. Here the video framerate is set to 4 fps, for 5526 frames (23 minutes). What Kinovea does when asked for the next frame is: 1. decode one frame and display it, 2. check the actual frame timestamp, 3. update the frame position in the timeline. This approach works to correct small variations or non-integer frame intervals.

It doesn't work with this kind of video because there aren't really 5526 individual frames. It might be an encoder optimized for screen capture: when the image doesn't change, no frame is saved in the video at all, and whenever the screen does change, a frame is stored with the proper timestamp. Thus the sequence of timestamps is highly non-linear and depends on the content dynamics. But when Kinovea reads the file back, being basically a constant-framerate player, it jumps from frame to frame and eats the time gaps in between, giving a much accelerated result.
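
You can see this for yourself by dumping the per-frame timestamps of the file. This is not the code used in Kinovea, just a rough sketch with FFmpeg's libavformat to illustrate where those timestamps come from:

    // Rough sketch: print the timestamp (in seconds) of every video frame,
    // to visualize how irregular the intervals are in a variable framerate file.
    extern "C" {
    #include <libavformat/avformat.h>
    }
    #include <cstdio>

    int main(int argc, char** argv)
    {
        AVFormatContext* ctx = nullptr;
        if (argc < 2 || avformat_open_input(&ctx, argv[1], nullptr, nullptr) < 0)
            return 1;
        avformat_find_stream_info(ctx, nullptr);

        int video = av_find_best_stream(ctx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        if (video < 0)
            return 1;
        AVRational tb = ctx->streams[video]->time_base;

        AVPacket pkt;
        while (av_read_frame(ctx, &pkt) >= 0)
        {
            if (pkt.stream_index == video && pkt.pts != AV_NOPTS_VALUE)
                printf("%.3f\n", pkt.pts * av_q2d(tb));  // big gaps where nothing moved on screen
            av_packet_unref(&pkt);
        }
        avformat_close_input(&ctx);
        return 0;
    }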

862

(1 replies, posted in Ideas and feature requests)

Thanks,

d.j.i.p wrote:

I often need to rotate my video because they are underwater videos and I have no other solution than taking them upside down. I use an other software to rotate it but a 2 in one with kinovea would be perfect !

Because there are so many different operations one could want to perform on the image (color adjustments, rotation, distortion rectification, cropping, etc.), the current plan is not to try to do them in real time at all, but to provide an enriched export dialog with these options. It will be for a later development though.

There is currently no announcement list to subscribe to; the closest thing to it is the Twitter feed.

863

(6 replies, posted in Bug reports)

Please check with version 0.8.23, as the video decoding has been updated.
If the issue is still reproduced, could you send me a short sample for analysis? joan at kinovea dot org.
Thanks

864

(6 replies, posted in Bug reports)

Hi,
There are several possible causes for mis-synchronization.
What Kinovea version are you using? (While we are at it, what Windows version?)

Re: real-time playback, do you mean that the speed slider decreases by itself, or do you mean that at 100% speed the action is played in too slow/fast motion?
When in dual playback, both speed sliders should be locked (unless specified otherwise in the preferences).
Would you happen to be comparing a video taken with a high-speed camera against a video taken with a normal camera? If so, what is the capture framerate of the high-speed video?

865

(12 replies, posted in Cameras and hardware)

rcfan2 wrote:

I'm guessing that this issue never got resolved - as my Canon HV30 has the same issue.

Unfortunately I never got the chance to test an HDV camcorder first-hand and see what the problem was. Now it seems all manufacturers stopped producing them years ago, FireWire is no longer supported on new computers, etc…

1. Add a perspective grid.
2. Place its corners on the corners of a rectangular object visible in the scene, lying on the same plane as the one you want to make measurements on.
3. Right-click the grid, use "Calibrate" and enter the physical dimensions of the rectangle.

You can display the coordinate system with Image > Coordinate system.
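
For reference, internally the plane calibration boils down to mapping pixel coordinates to plane coordinates through a homography computed from the four corners. This is only a sketch of the principle, not Kinovea's actual code, and it assumes the 3x3 matrix H has already been computed elsewhere:

    // Sketch of the principle: map an image point to coordinates on the calibrated
    // plane through a 3x3 homography H (computed elsewhere from the 4 grid corners).
    struct PointD { double x, y; };

    PointD MapToPlane(const double H[3][3], PointD p)
    {
        double X = H[0][0] * p.x + H[0][1] * p.y + H[0][2];
        double Y = H[1][0] * p.x + H[1][1] * p.y + H[1][2];
        double W = H[2][0] * p.x + H[2][1] * p.y + H[2][2];
        return { X / W, Y / W };   // homogeneous divide gives values in the unit entered at calibration
    }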

Hello,
The calibration by line does not have any notion of horizontal or vertical. It maps a 2D distance in pixels to a distance in the specified unit. At bottom it is just a scaling factor from pixels to centimeters or whatever unit you chose. 2D distances are computed using the usual Euclidean distance.

This can give wrong results if the video has rectangular pixels that were not automatically detected (the video looks squashed or stretched), or, more commonly, if the plane of motion is not perpendicular to the camera's optical axis. For the latter case you can use the perspective coordinate system.
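
As a concrete illustration of that scaling factor (not the actual code, and the 100 cm / 250 px values are made up):

    // Sketch: line calibration is a single scale factor applied to pixel distances.
    #include <cmath>

    // Example: the calibration line was drawn over 250 px and the user said it is 100 cm.
    const double scale = 100.0 / 250.0;   // cm per pixel

    double DistanceCm(double x1, double y1, double x2, double y2)
    {
        double dx = x2 - x1, dy = y2 - y1;
        return std::sqrt(dx * dx + dy * dy) * scale;   // Euclidean distance in pixels, scaled to cm
    }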

Are you working with AVCHD / H.264 videos perhaps? There are several issues with them that haven't been addressed yet.

Depending on the flavor of the encoder and the parameters used, these issues are more or less critical. Sometimes the working zone in/out points can't be set properly.

Hi,
Please try this with 0.8.23. Both the dual screen synchronization code and the recording code have changed since.

Note that this type of error:

937 - ERROR - [5] - VideoFile - GetThumbnail Error : Frame reading failed

is specific to the home screen with the thumbnails. It is not super critical. It does hint at a problem, but it might not be related to your issue.

Upcoming in Kinovea
The configuration dialog now shows the various available stream formats (RGB24, MJPG, H264, etc.). On the C920, this allows the use of the streams that are compressed on-board by the camera, decreasing the bandwidth requirements.

For some specific stream formats (currently RGB24 and MJPG), DirectShow's Intelligent Connect is not used and the plumbing is done manually. This has proven to be more reliable.

The vendor-specific exposure property from Logitech is now supported, allowing configuration in 100 µs increments instead of the default DirectShow property, which is imprecise and poorly supported by Logitech cameras. On the C920 the minimum value is 300 µs.

A new capture pipeline has been implemented and integrated to streamline capture and recording.

For the MJPEG stream of the C920 and other cameras, the frames received are only decompressed for the preview. When recording, they are pushed directly to the final file. The recording is given higher priority than the preview, where a frame drop is not catastrophic. This greatly increases the throughput. I am able to record the full 1920x1080 @ 30 fps on my machine. We are still doing tests with Milan for the dual-record scenario.
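
To give an idea of the logic (a simplified sketch, not the actual pipeline code): the compressed frame received from the camera goes to the recorder untouched, and decompression only happens on the preview path, which is allowed to skip frames.

    // Simplified sketch of the idea: the recorder gets the JPEG bytes as-is,
    // only the preview decodes them, and only the preview may drop frames.
    #include <cstdint>
    #include <vector>

    using Frame = std::vector<uint8_t>;   // one JPEG-compressed frame from the camera

    struct Recorder { void Write(const Frame&) { /* append the JPEG to the output file */ } };
    struct Preview  { bool Busy() { return false; } void Show(const Frame&) { /* decode + display */ } };

    void OnFrameReceived(const Frame& jpeg, Recorder& rec, Preview& prev, bool recording)
    {
        if (recording)
            rec.Write(jpeg);     // priority path: no decompression, maximum throughput

        if (!prev.Busy())
            prev.Show(jpeg);     // preview path: dropping a frame here is not catastrophic
    }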

MJPEG is probably going to become the application's preferred codec for output. It is simpler to implement and debug, mature JPEG libraries can be used, and its intraframe-only compression scheme makes it more suitable for frame-by-frame playback.