18 new observational references have just been contributed, thanks!

Most of them represent team sport fields, but there are also archery targets and a tennis court.
This type of reference opens a new perspective: using them as coach boards, or for some kind of notational analysis.

I can definitely see the need for opening them in their own screen.
It would also be interesting to have them turned into a video, so you could have several sets of positions/annotations, maybe synchronized with the actual video, etc.
(Combine this with dual export and you have the play sequence on the left, and the corresponding step-by-step diagram on the right, nice :))

One way to go about this would be:
1 - Support .svg in the explorer and allow opening directly in a player screen. (currently only for .bmp, .jpg, .png)
2 - When opening an image file (any supported type), ask the user whether to turn it into a video, prompting for duration and frame rate.
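For reference, the frame arithmetic behind such a conversion is simple; here is a minimal sketch (illustrative only, not Kinovea code — the ffmpeg command in the comment is just one hypothetical way the encoding could be done):

```python
# Turning a still image into a fixed-length video means emitting the same
# frame N times, where N = duration x frame rate.

def still_to_video_frame_count(duration_s: float, fps: int) -> int:
    """Number of identical frames needed to cover the requested duration."""
    return round(duration_s * fps)

# A 10 s clip at 25 fps needs 250 copies of the image. A tool could then hand
# those parameters to an encoder, e.g. (hypothetical command line):
#   ffmpeg -loop 1 -i board.png -t 10 -r 25 board.mp4
```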

Other options?
Other settings for creating the video? (background color, image size…)

Having .svg thumbnails in the explorer would be interesting, since currently I don't think there is a shell extension that provides them. There was a proprietary one for Windows XP, but it has been discontinued.

Note: this is for mid-term/long term. Don't expect to see this in the next experimental version, but some parts of it may be added on the go.

If I understand correctly, you have used the "High speed camera" option on the 30/30 video and set it to 120.

Typically you would use this option the other way around, on the 120 fps video. The option only remaps the timecode onto real-time action; it has no effect on the playback speed.
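To make the remapping concrete, here is a minimal sketch of the idea (illustrative only, not Kinovea's actual code):

```python
def real_time_seconds(frame_index: int, capture_fps: float) -> float:
    """Real-world time of a frame shot with a high speed camera.

    A file may be tagged for 30 fps playback, but if the footage was
    actually captured at 120 fps, frame 120 happened 1 second into the
    action, not 4 seconds. Only the displayed timecode changes; playback
    speed is unaffected.
    """
    return frame_index / capture_fps
```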

I don't think you will be able to run both videos at the same real-time "pace" (slowing the normal one down 4× but not the other). That was possible in an earlier version, but the slow motion sliders have since been locked together.
(Second time this comes up. Need a key modifier for independent action on the active screen…)
To move a single video left/right you can use the set of control buttons under each video. (But yes, keyboard shortcuts will act on both.)

1,188

(13 replies, posted in General)

That is what I meant by

In a later installment, we will have to reconfigure the screen's controls dynamically to get the best screen real estate possible.

I was thinking the following:
Player screen:
- keeping the tools bar and the key image panel (the key image panel can be collapsed manually already)
- condensing the other controls onto a single line: the playback buttons right next to the slow motion control, then the navigation bar, then the save buttons.

Capture screen:
- keeping the tools bar and recently recorded files panel. (the panel can be collapsed manually)
- condensing the rest onto a single line: the next image and next video boxes right next to the delay control.

The advantage of this is that it keeps the screens entirely functional while improving screen real estate a lot. Seems like a good compromise.
However it might be easier (coding-wise) to go with just the image, so feedback is important here :)

1,189

(2 replies, posted in Bug reports)

Bump.
Bug 253 is a similar issue but a different exception…
I'm looking for a log (at DEBUG verbosity level) of this problem. (I still can't reproduce it.)

1,190

(0 replies, posted in Français)

A particularly experimental version ;) Please report all regressions!

Installer: [s]Kinovea.Setup.0.8.16.exe[/s] 0.8.17

The announcement topic on the English forum.

I'm looking for a contributor who could translate these version announcements, blog posts, and other messages from English to French!

1,191

(16 replies, posted in General)

Very experimental, much feedback needed! ;)
Beware of regressions and report anything suspicious. Do not assume the issue is known.

Installer: [s]Kinovea.Setup.0.8.16.exe[/s] 0.8.17

The drawings code was refactored to ease the addition and creation of new drawings in the future. The amount of work that went into the refactoring is quite substantial but the level of flexibility is starting to be interesting.
Although it's not an explicit goal right now, we are not too far from a plug-in system where a third party could provide a sport-specific tool and have it integrated dynamically. (Thinking about this level of flexibility helps with the design.)

Visible changes (tip of the iceberg, really)
- The grid and plane are now first class drawings: you can add as many of them as you want, and they will also be saved and loaded in the KVA files.
- Option to display "time ticks" on track path.
- Importing a KVA file now *merges* it on top of the existing key images (before it would do a "replace").
- New conf boxes for color and style.

Impacted areas (test, test, test for regressions)
1. KVA file format. The format has changed in some incompatible ways. I wrote a conversion routine so existing KVA (and embedded analysis) should be OK, but the color/style information will be lost. Please report any issue.
2. Export to spreadsheet.
3. File reading (FFMpeg updated).
4. Language selection.

On top of that:
- Full screen mode (discuss). (There is no menu to get out yet, use F11 shortcut key!)
- Support for WebM file format.


Otherwise, exterminated bugs: 209, 245, 247, 248, 250, 255.
(250 = captured file is playing too fast).

+ Raw changelog.

Some snaps of the conf boxes.
http://www.kinovea.org/screencaps/0.8.x/trackshapes.png
^^ Track shape
http://www.kinovea.org/screencaps/0.8.x/confline.png
http://www.kinovea.org/screencaps/0.8.x/confpreset.png

1,192

(1 replies, posted in General)

Bumping an old topic to add some info :)

- Currently the player is mostly single-threaded. There is a second thread, but it is only used for the timing ticks. Decoding and rendering both happen on the main thread (this can explain poor performance with HD videos).

- The file explorer loads thumbnails in a background thread.

- When you see a progress bar, it's generally indicating a task running in a background thread. (saving to file, applying image filters, loading image cache)

- In the capture screen the main thread is used for rendering. The actual grabbing from the device runs in a second thread.
Until recently the recording happened on the main thread as well. This caused the timing issue reported (recorded video playing too fast).
This should be fixed in the next version, as the recording now happens in its own thread.

There are plans for asynchronous decoding (a decoding thread would fill a queue and the main thread would consume it). We discussed this with Phalanger some time ago, and I would like to try to address it this summer. (Either this or creating a good framework for generalized "trackability" of the drawings.)
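The producer/consumer scheme described above could be sketched like this (illustrative Python, not the actual C# implementation; all names are made up):

```python
import queue
import threading

def decode_frames(frame_count):
    # Stand-in for the real decoder: yields frame indices in order.
    for i in range(frame_count):
        yield i

def run_pipeline(frame_count, queue_size=8):
    """Decoding thread fills a bounded queue; main thread consumes it."""
    frames = queue.Queue(maxsize=queue_size)  # bounded: decoder can't run away
    SENTINEL = object()

    def decoder():
        for frame in decode_frames(frame_count):
            frames.put(frame)        # blocks when the queue is full
        frames.put(SENTINEL)         # signal end of stream

    threading.Thread(target=decoder, daemon=True).start()

    rendered = []
    while True:
        frame = frames.get()         # blocks until a frame is ready
        if frame is SENTINEL:
            break
        rendered.append(frame)       # stand-in for rendering
    return rendered
```

The bounded queue is the important design point: it decouples decoding from rendering without letting the decoder consume unbounded memory.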

Thanks!
It's not clear whether it adds streaming or just remote control of the hardware. It sounds like changing the shutter speed, adjusting the lens, or triggering the capture (but the capture itself would still happen on the device). Do you have streaming working in another application?

1,194

(4 replies, posted in Bug reports)

In fact I'm considering a release in the next few days, because there have been some wide internal changes and I would like to have them thoroughly tested as soon as possible.

edit: new version is online.

1,195

(4 replies, posted in Bug reports)

The issue with Samsung cameras should be fixed in the next experimental version (0.8.16).

1,196

(1 replies, posted in Bug reports)

It has to do with the encoding of the video.

Some codecs encode each frame independently; others combine data from several frames to improve the compression ratio.
In the latter case, the encoding is generally optimized for "forward decoding". Going back one frame actually means going back to the previous key frame (the last full frame), then decoding forward all the way to the target frame.

MTS uses AVCHD, which uses this technique along with other, more complicated patterns (which sometimes confuse Kinovea to the point where the frames are not in order). Also, since this is HD video and decoding each frame takes some time, the issue is more visible than for lower resolution samples, even with a similar encoding pattern.

If you are analyzing very short clips, you can constrain the working zone to 3 seconds or less to trigger the "caching" of frames in memory.
Transcoding to an intra-frame-only codec may also fix the issue. (MJPEG is such a codec, but there are others.)
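To illustrate the cost described above, here is a toy model (purely illustrative, not actual decoder code) of stepping back one frame with a fixed keyframe interval:

```python
def previous_keyframe(target: int, keyframe_interval: int) -> int:
    """Index of the last keyframe at or before the target frame."""
    return (target // keyframe_interval) * keyframe_interval

def frames_decoded_to_step_back(current: int, keyframe_interval: int) -> int:
    """How many frames must be decoded to display frame current-1.

    With inter-frame codecs, we must seek to the previous keyframe and
    decode forward from there up to the target frame, inclusive.
    """
    target = current - 1
    start = previous_keyframe(target, keyframe_interval)
    return target - start + 1
```

Stepping back from a frame just after a keyframe is cheap, but stepping back onto the frame just before a keyframe forces decoding almost a whole group of pictures, which is why backward navigation feels so much slower on HD AVCHD footage.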

edit: Oh, you mean it doesn't work at all? Hmm, yes, that might be one instance where the decoding process is confused. I'll have to dig deeper into this matter to see what is going on…
If you have logging activated at DEBUG level, you can see what it is trying to do and what is really happening.

1,197

(13 replies, posted in General)

Update:
I started to look into this. I think we could have primitive full screen support first, and then improve upon it. By primitive, I mean that it will go into full screen (hiding the window bar and Windows task bar), the menu and main tool bar will be hidden, the file explorer collapsed, but the screen itself will still have all its controls on it.
In a later installment, we will have to reconfigure the screen's controls dynamically to get the best screen real estate possible.

I have just made some proof-of-concept tests in a sandbox, so it's still subject to unforeseen issues.

On the topic of tools, several ideas could be mixed together in a bigger scheme:

- Stick figure tool to map body position during performance. (how many joints? proportions?)
- Lower-body only posture (to represent knee bending, hips level, used in podiatrics for instance)
- Full body posture
- Profile line (used during squat jump and other tests analysis for example)
- Drawing a stick figure to represent the body during an exercise or stretching position.

All these tools would share a lot of source code. They might all be represented by:
- a set of joints,
- a set of segments linking some joints together,
- constraints on segments? a hierarchy? (to be defined)

The idea would be to design a tool family that would work in a generic way for this type of joints+segments tools.
(instead of creating X static tools)

Specs:
- The set of joints, segments, constraints would be described in a text file.
- The user would be able to create their own files to expand the toolbox.
- Everything else like rendering and manipulation of the joints would be handled in a generic way.
- Some all-purpose instances would be bundled by default, like the ones cited above.
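For illustration, such a description file could look like this. This is a hypothetical sketch of the XML mentioned below; every element and attribute name is made up, not an actual Kinovea format:

```xml
<!-- Hypothetical tool description: names are invented for illustration. -->
<tool name="LowerBodyPosture">
  <joints>
    <joint id="hip"/>
    <joint id="knee"/>
    <joint id="ankle"/>
  </joints>
  <segments>
    <segment id="thigh" from="hip" to="knee"/>
    <segment id="shin" from="knee" to="ankle"/>
  </segments>
  <constraints>
    <!-- e.g. keep segment lengths fixed while dragging a joint -->
    <rigid segment="thigh"/>
    <rigid segment="shin"/>
  </constraints>
</tool>
```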


Input very welcome on:

- An already existing file format created for this purpose. (Otherwise it will be XML with an "as simple as possible" syntax.)
Todo: look into stick figure animation software, software to design exercises or yoga positions, etc.

- Imagine yourself using such a tool. What do you expect when you drag a joint around? What types of constraints?

Thanks

edit: Checking on stick figure animation software. Very interesting usability.

1,199

(8 replies, posted in Cameras and hardware)

I don't think it'll work. As kinoveafan wrote, the software has to be made specifically for the multicam driver. That is, we can't use it directly through DirectShow; special code has to be written and integrated into the pipeline. Sorry if this wasn't clear enough.

1,200

(1 replies, posted in Cameras and hardware)

Hi,

Short answer: no, and unfortunately not likely to be in the near future.

Long answer:
Relevant bits from Wikipedia (I had also read about it a while ago):

GigE Vision is an interface standard introduced in 2006 for high-performance industrial cameras.
(…)
The GigE Vision standard is - by a few definitions (for example, the ITU-T) - an open standard. However, it is available only by signing a non-disclosure agreement, so it is considered by many European countries to be a closed standard.
(…)
It is available under license to any organization for a nominal fee.
(…)
One consequence of the license is that it is not possible to write open source software using the GigE Vision specification, as it could reveal the details of the standard, which is why most image acquisition SDKs for GigE Vision are closed source.
There are currently at least two different free software projects trying to implement the GigE Vision protocol by reverse engineering.

We'll have to keep an eye on the progress of those reverse engineering projects.