616

(1 replies, posted in General)

There is no hard limit. I guess it depends on the computer. But it doesn't handle occlusions very well, so unless you are tracking things that somehow move on non-crossing paths, the limiting factor will be tracking failures caused by something moving in front of the object/joint. If you go frame by frame to verify and adjust when needed it shouldn't be a problem.

I agree it's annoying; it has happened to me as well. However, I think it's important to have a shortcut for synchronized playback, and using the same shortcut as for the single-video case makes sense in my opinion.

The problem is that at the moment there is no distinction between opening two videos for synchronization and opening them just to look at two videos individually. We need to figure out a heuristic to distinguish these use cases. Maybe keep them independent until the user actually clicks the synchronization button?

It might work but I'm not 100% sure.
One thing that is quite certain, I think, is that you will need an externally powered USB hub, one that provides the standard 500mA *per port*, rather than an unpowered one that pulls 500mA from the PC and shares it among its downstream ports. These cameras are relatively power hungry.

If you select the MJPEG stream type in the settings, the camera will compress the stream on its side and the bandwidth may be manageable. In this case Kinovea won't recompress it before storing, so it's the most streamlined scenario. If the limiting factor is the I/O bandwidth of the drive, another trick would be to set the output files so that one camera automatically records to the SSD and the other to the internal HDD.

Two C920s recording to an internal SSD via USB 2.0 works in Kinovea, but it's already a scenario that requires the right USB setup.

If you have enough RAM on the laptop, another trick could be to set up a RAM drive. It's like having a drive backed by memory, so lower latency and greater bandwidth. I've never tried that myself but in theory it should give the maximum performance. It will compete with the size of the delay buffer if you want to use the live delay feature though, and you have to remember to copy the files over to your main drive before shutting down, otherwise they are lost.

As far as I'm aware the only way is to send the AV or HDMI out of the camera through a capture box or card on the PC. I don't have a lot of experience with that setup, so maybe someone who has used it can chime in.

620

(16 replies, posted in Cameras and hardware)

Hi everyone,

Has anyone tested this camera in Kinovea?
It is supposed to do 4096x2160 @ 30 fps, 1920x1080 @ 60 fps and 1280x720 @ 90 fps.
Does it work at all in Kinovea? Is it streaming in MJPEG?

621

(14 replies, posted in General)

litch09 wrote:

I think I have found a small bug in the linear kinematics tool when using a high-speed camera recording. I have recorded at 120 fps and manually change the video timing to reflect this. Then I track something (e.g. someone running) - the calculations of velocity are correct, but the x-axis (time) of the plot doesn't seem to be adjusted for the high speed setting (the time is 4 times that indicated in playback window).

I can reproduce this if I change the reference video framerate in the Video > Configure video timing dialog. Can you confirm this is what you are doing?

For a high-speed camera you should only change the top part of this dialog: High speed camera > Capture framerate. If you only change this, it should work as expected, I think.

The bottom part is for when a video has metadata indicating that its nominal framerate is one number but the actual framerate is something else, say 30 fps vs 29.97 or something. It does get confusing with high-speed cameras though. But in your case the camera did intend to create a 30 fps file as advertised (or whatever the framerate), and the playback should normally be in slow motion. Maybe these should be two different dialogs, because the bottom part is for a rarer scenario, when the video file is broken.
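To illustrate the relationship with the numbers from this thread (just a sketch of the arithmetic, not the actual implementation; the specific framerates are the ones reported above):

```python
# Slow-motion factor: ratio of the capture framerate to the file's
# nominal framerate. With 120 fps captured into a 30 fps file,
# the plot's time axis covers 4x the playback duration.
capture_fps = 120.0  # entered under High speed camera > Capture framerate
video_fps = 30.0     # the file's nominal framerate, left at its original value

slowdown = capture_fps / video_fps
print(slowdown)  # 4.0, matching the "time is 4 times" observation

# Real time represented by a given decoded frame:
frame_index = 60
real_time_s = frame_index / capture_fps
print(real_time_s)  # 0.5 seconds of real action
```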

Please retry by just putting your 120 fps in the capture framerate, leaving the video framerate at its original value, and let us know if it works.

622

(1 replies, posted in General)

Hi,

Not sure if this will suit your needs but here is a little-known trick that may help.

1. Create a text file with the following contents:

<?xml version="1.0" encoding="utf-8"?>
<KinoveaSyntheticVideo>
  <FormatVersion>1.0</FormatVersion>
  <ImageSize>800;600</ImageSize>
  <FramesPerSecond>10</FramesPerSecond>
  <DurationFrames>100</DurationFrames>
  <BackgroundColor>255;255;255;255</BackgroundColor>
  <FrameNumber>false</FrameNumber>
</KinoveaSyntheticVideo>

2. Change the extension of the file to .ksv,
3. Open the file in Kinovea as a video. It will be a blank video of the specified size, framerate and duration,
4. Add drawings to your liking,
5. Copy the current frame by right-clicking the frame outside of the drawings and choosing "Copy image to clipboard".

It will copy the entire frame and won't retain transparency, so it's not as good as if you could copy just the drawing to paste it on a different background, but maybe it's acceptable for your scenario. You can change the background color in the KSV file; the color format is ARGB.
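If you need to produce several of these blank clips with different parameters, the file can also be generated with a small script. This is just a sketch of mine (`write_ksv` is a made-up helper name); the element names and value formats are copied from the example above:

```python
# Write a minimal Kinovea synthetic video (.ksv) file.
# Element names and value formats mirror the hand-written example above.

def write_ksv(path, width=800, height=600, fps=10, frames=100,
              argb=(255, 255, 255, 255), show_frame_number=False):
    content = f"""<?xml version="1.0" encoding="utf-8"?>
<KinoveaSyntheticVideo>
  <FormatVersion>1.0</FormatVersion>
  <ImageSize>{width};{height}</ImageSize>
  <FramesPerSecond>{fps}</FramesPerSecond>
  <DurationFrames>{frames}</DurationFrames>
  <BackgroundColor>{';'.join(str(c) for c in argb)}</BackgroundColor>
  <FrameNumber>{'true' if show_frame_number else 'false'}</FrameNumber>
</KinoveaSyntheticVideo>
"""
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)

write_ksv("blank.ksv")  # open this file in Kinovea as a video
```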

Now if you create your drawings in an actual video as a reference, you could do this:
1. Save the reference analysis as a KVA file,
2. Create a KSV file with the same image size, framerate and duration as the original video,
3. Use menu File > Load key images data… and import your KVA into the blank video.

Note that the human model tools don't have tracking by default, so if you want to store several positions you will either have to create a new drawing at each pose you are interested in, or make them trackable.

To diagnose recording problems you can check the infobar at the top of the capture screen. Look for the bandwidth and drops.

Make sure the camera is configured to MJPEG, which should reduce the bandwidth.
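As a rough back-of-envelope for why MJPEG helps (the 20:1 compression ratio is purely an assumption of mine; real ratios vary a lot with the scene and the camera):

```python
# Rough bandwidth estimate for one camera stream.
# The 20:1 MJPEG ratio is an assumed ballpark, not a measured value.
width, height, fps = 1920, 1080, 30
bytes_per_pixel = 3  # uncompressed RGB24

raw_mb_s = width * height * bytes_per_pixel * fps / 1e6
mjpeg_mb_s = raw_mb_s / 20  # assumed compression ratio

print(f"raw: {raw_mb_s:.0f} MB/s, MJPEG: {mjpeg_mb_s:.1f} MB/s")
```

The uncompressed figure alone is close to saturating a USB 2.0 link, which is why two raw streams on one bus tend to drop frames while two MJPEG streams can fit.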

624

(14 replies, posted in General)

chrishall123 wrote:

SlowMotion Loop feed - got it running in 4 screen split, but its confusing as to the order of the screens to watch....

Is it possible to slo mo 1 x full screen?

Yeah, I know what you mean. Continuously slowing down real time is an ill-posed problem though: if there were only one view there would have to be gaps in time when it runs out of space.

I tried to design it so that if there are several people constantly doing things in sequence, you could still theoretically see each of their actions fully in slow motion. But in practice you have to guess when you are supposed to change your focus and to which sub-view. It's not possible for the program to know when the action you are interested in starts, so it can't switch by itself.

Another approach would be to assume that the next run/action in the sequence doesn't happen until you have finished viewing the first one in slow motion. So for example, if the action takes 5 seconds and you are watching at half speed, people have to be spaced by at least 10 seconds.
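The spacing requirement from that example is just the action duration divided by the playback speed. As a quick sketch (the function name is mine, not something in the program):

```python
# Minimum spacing between successive actions so that slow-motion playback
# of one action finishes before the next action begins.
def min_spacing_s(action_duration_s, playback_speed):
    # Watching at half speed (0.5) takes twice the real duration.
    return action_duration_s / playback_speed

print(min_spacing_s(5.0, 0.5))   # 10.0 seconds, as in the example above
print(min_spacing_s(5.0, 0.25))  # 20.0 seconds at quarter speed
```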

With this approach there could be a button that lets you force a sync with real time and reset the buffer, so you can ensure the synchronization jump doesn't happen at the wrong time. The synchronization gap happening at the wrong time is what the multi-view is trying to solve by showing views with staggered ages: the gap in one view is filled by another view.

Another improvement would be to display a countdown showing when it will go out of space and do a synchronization jump.

I don't know if any of that made sense; it's a tricky feature and I'm not sure it's possible to make it intuitive without understanding the internal mechanism.

625

(8 replies, posted in Bug reports)

Cool! I also think it's one of the most powerful features in the program, but yeah, the documentation is certainly lacking.
I will get around to at least writing a page with a raw list of each node and the attributes it supports.

If you haven't seen it already, here is an article discussing custom tools: http://www.kinovea.org/en/creating-a-custom-tool/

arnopluk wrote:

Is it possible to specify the position of labels for angles (like 'radius')?

No, at the moment the text distance is hardcoded. It's a good point though, I'll add that to the backlog for the next version.

arnopluk wrote:

With 'optionGroup' you can specify options which can be selected by the user. Can you specify the default options which are used? (In Human Model 2 I found the keyword 'DefaultOptions', but no example how to use it.)

Yes. I'll recap how this works for everyone and then add more details on declaring default options.

The visibility of each object (segment, handle, ellipse, angle, distance, position, computed point) can be controlled by named "options". These options end up in the context menu of the drawing and can be toggled on/off by the user. Options can also control the enabling of "constraints" placed on handles, so that a handle can be freely moved or constrained depending on what the user wants.

This is controlled by specifying optionGroup="name of option" as an attribute on the object. The visibility or constraints of multiple objects can be controlled by the same option, and there can be multiple options in the same drawing.

Option discovery is entirely based on the textual name, so be sure to copy & paste the name correctly, otherwise they will end up as two different options.

By default all options are "off". To specify default options that are "on", create a "DefaultOptions" node and add the named options under "OptionGroup" subnodes, as follows:

<DefaultOptions>
    <OptionGroup>Display ankle angles</OptionGroup>
    <OptionGroup>Display hips angles</OptionGroup>
</DefaultOptions>

Options listed here that are not matched to an option declared elsewhere will be ignored.

arnopluk wrote:

I would like to have 2 constraints in effect for my handle: 1. Fixed length ('DistanceToPoint') and 2. Rotation in steps of 5 degrees ('RotationSteps'). However, only the constraint that is configured last, is used by Kinovea.

Yes, you are right. I didn't anticipate that use case; all the other constraints are mutually exclusive I think, but rotation steps could be combined with the others. This is not possible at the moment.

arnopluk wrote:

Is it possible to lock handles so they can't be changed by the user?
For example: Lock the angle-to-vertical of a certain line, while allowing a user to change its length.

For that specific scenario I think you could use a "LineSlide" constraint, where a handle is only allowed to move along a line defined by two other points. The LineSlide constraint takes "point1", "point2" and "position" attributes. The "position" attribute further determines where the handle can go in relation to the two existing points, and can take the following values: BeforeSegment, BeforeAndOnSegment, OnSegment, AfterAndOnSegment, AfterSegment, Anywhere. Check the "Archery top view" tool for an example.

So you would first make sure you have two points defining the fixed-angle line in the point list (without necessarily creating a visible segment from them). Then add a third point and a handle referencing it, with a constraint that only allows it to slide along the line defined by the other two, maybe using AfterAndOnSegment. Then you can create a segment object that goes from the angle origin to the sliding point.

I just realized that creating a completely locked point is harder than it should be. You can create a "Point" but it won't be visible unless it's either a handle or a computed point. Right now the only way seems to be to create two dummy points and create a computed point based on them. Or maybe a collapsed segment. I should probably add a simpler constraint that completely locks a handle in place.

arnopluk wrote:

How can I make my own icons for the custom tools?

The gist of it is that the "Icon" node contains the Base64-encoded image data.
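As a sketch, here is one way to produce the Base64 payload with a few lines of Python (the file name is a placeholder, any small PNG should do):

```python
# Encode an icon image to Base64 so it can be pasted into the
# custom tool's "Icon" node. The path below is a placeholder.
import base64
from pathlib import Path

icon_path = Path("my-tool-icon.png")
if icon_path.exists():
    encoded = base64.b64encode(icon_path.read_bytes()).decode("ascii")
    print(encoded)  # paste this string as the text content of the Icon node
else:
    print("Put your icon file next to this script first.")
```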

arnopluk wrote:

Can the position of annotation in the capture screen be saved?
For example: Default annotation in a fixed-camera setup.

You can have a KVA file that is always loaded when you open a camera.

Create the annotations in the playback screen (maybe by reloading a video captured from the camera for alignment) and save them to a KVA file named "capture.kva" (File > Save > Save only the analysis). Then place that file in Kinovea %appdata% directory. The location of this directory depends on whether you are running the installed or zipped version, but you can get to it from menu Help > Open log folder.

The same can be done for the playback screen using "playback.kva". It's an application-wide thing though, there is currently no support for per-camera default KVA.

626

(6 replies, posted in Bug reports)

Thanks for looking it up. OK, I'll get the latest driver, dig up the camera if I can find it, and try to see what's going on.

627

(6 replies, posted in Bug reports)

Oh, you are on Windows 7; in that case the program name in the Processes tab should have a *32 at the end if it's 32-bit.

It could still be this, because Kinovea 0.8.24 was only released as a 32-bit app. It's only from 0.8.25 onward that there is an x64 build. So this would be very consistent with it being the origin of the issue.

628

(6 replies, posted in Bug reports)

Hi,

If it's indeed a 64-bit/32-bit issue, it's possible that the CL-Eye test software is itself 32-bit, and that would be the reason it works with the driver. If you go to the Task Manager, in the Details tab, and show the "Platform" column, it will tell you whether the application is 32-bit.

Try to load the file in Kinovea 0.8.26 from the download page; it has a lot of updates, including in the loading library.

Hi,
Yeah, I haven't seen this error in a while. It happens when the loading process manages to load the file, but then something bad happens when it attempts to decode the first frame.

The exact error can be seen in the log (you can get to the log from the Help menu). It should be either "First frame couldn't be loaded" or "First frame loaded but negative timestamp", which would be my guess. In the code I have added a comment that some AVCHD files exhibited the second behavior. This negative timestamp breaks a lot of things down the line for the timekeeping, so it is not supported. I think it's an encoder issue: the first timestamp should be zero. Some files have a positive first timestamp, which Kinovea tries to work with as well. I haven't seen reports of this lately, so most likely the encoder that was producing these files is either not used anymore or has been fixed.