Kinovea - Forums → Posts by joan
In fact I'm considering a release in the next few days because there have been some wide internal changes and I would like to have them more thoroughly tested as soon as possible.
edit: new version is online.
The issue with Samsung cameras should be fixed in the next experimental version (0.8.16).
It has to do with the encoding of the video.
Some codecs encode each frame independently; others interleave data across several frames to improve the compression ratio.
For the latter case, the encoding is generally optimized for "forward decoding". Going back one frame means we actually have to go back to the previous key frame (last full frame), then decode back all the way to the target frame.
MTS files use AVCHD, which relies on this technique along with other, more complicated patterns (which sometimes confuse Kinovea to the point where frames come out of order). Also, since this is HD video and each frame takes time to decode, the issue is more visible than with lower-resolution footage, even with a similar encoding pattern.
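The seek-backward behavior described above can be sketched in a few lines. This is a simplified model of GOP-based decoding, not Kinovea's actual code: to show a given frame after a backward seek, a decoder must restart at the previous keyframe and decode forward.

```python
# Simplified model of backward seeking in a GOP-based stream:
# stepping back one frame means returning to the last keyframe
# and decoding forward again, which is why it feels slow on HD footage.

def last_keyframe_before(target, keyframes):
    """Return the position of the last keyframe at or before `target`."""
    return max(k for k in keyframes if k <= target)

def frames_decoded_to_reach(target, keyframes):
    """Number of frames actually decoded to display `target` after a seek."""
    start = last_keyframe_before(target, keyframes)
    return target - start + 1

# With a keyframe every 25 frames, stepping back to frame 59
# forces decoding 10 frames (keyframe 50 through frame 59).
keyframes = [0, 25, 50, 75]
print(frames_decoded_to_reach(59, keyframes))  # 10
```

An intra-frame-only codec is the degenerate case where every frame is a keyframe, so the cost of any seek is exactly one decoded frame.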
If you are analyzing very short clips, you can constrain the working zone to 3 seconds or less to trigger the caching of frames to memory.
Transcoding to an intra-frame-only codec may also fix the issue. (MJPEG is such a codec, but there are others.)
edit: Oh, you mean it doesn't work at all? Hmm, yes, that might be one instance where the decoding process is confused. I'll have to dig deeper into this matter to see what is going on…
If you have the logs on Debug mode activated, you can see what it is trying to do and what is really happening.
Update:
I started to look into this. I think we could have a primitive full screen support first, and then improve upon it. By primitive, I mean that it will go into full screen (hiding the window bar and Windows task bar), the menu and main tool bar will be hidden, the file explorer collapsed, but the screen itself will still have all its controls on it.
In a later iteration, we will have to reconfigure the screen's controls dynamically to get the most screen real estate possible.
I have just made some proof-of-concept tests in a sandbox, so it's still subject to unforeseen issues.
On the topic of tools, several ideas could be mixed together in a bigger scheme:
- Stick figure tool to map body position during performance. (How many joints? Proportions?)
- Lower-body only posture (to represent knee bending, hips level, used in podiatrics for instance)
- Full body posture
- Profile line (used during squat jump and other tests analysis for example)
- Drawing a stick figure to represent the body during an exercise or stretching position.
All these tools would share a lot of source code. They might all be represented by:
- a set of joints
- a set of segments linking some joints together.
- Constraints on segments? Hierarchy? (To be defined)
The idea would be to design a tool family that would work in a generic way for this type of joints+segments tools.
(instead of creating X static tools)
Specs:
- The set of joints, segments, constraints would be described in a text file.
- The user would be able to create his own files to expand his toolbox.
- Everything else like rendering and manipulation of the joints would be handled in a generic way.
- Some all-purpose instances would be bundled by default, like the ones cited above.
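To make the spec concrete, here is a minimal sketch of what a user-defined joints+segments tool file could look like, and how it might be parsed generically. The file format, element names, and tool name here are purely hypothetical illustrations, not an actual Kinovea format:

```python
# Hypothetical XML description of a joints+segments tool, and a generic
# parser for it. Element and attribute names are invented for illustration.
import xml.etree.ElementTree as ET

TOOL_XML = """
<tool name="LowerBodyPosture">
  <joints>
    <joint id="hip"/>
    <joint id="knee"/>
    <joint id="ankle"/>
  </joints>
  <segments>
    <segment from="hip" to="knee"/>
    <segment from="knee" to="ankle"/>
  </segments>
</tool>
"""

def parse_tool(xml_text):
    """Extract the tool name, joint ids, and segment pairs generically."""
    root = ET.fromstring(xml_text)
    joints = [j.get("id") for j in root.find("joints")]
    segments = [(s.get("from"), s.get("to")) for s in root.find("segments")]
    return root.get("name"), joints, segments

name, joints, segments = parse_tool(TOOL_XML)
print(name, joints, segments)
```

The point is that rendering and manipulation code only ever sees a list of joints and a list of (from, to) pairs, so one code path can serve every tool file.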
Input very welcome on:
- An already existing file format created for this purpose. (otherwise it will be XML with "as simple as possible" syntax)
Todo: look into stick figure animation software, software to design exercises or yoga positions, etc.
- Imagine yourself using such a tool. What do you expect when you drag a joint around? What types of constraints?
Thanks
edit: Checking on stick figure animation software. Very interesting usability.
I don't think it'll work. As kinoveafan wrote, the software has to be made specifically for the multicam driver. That is, we can't use it directly with DirectShow, special code has to be written and integrated in the pipeline. Sorry if this wasn't clear enough.
Hi,
Short answer: no, and unfortunately not likely to be in the near future.
Long answer:
Relevant bits from Wikipedia (I had also read about it a while ago):
GigE Vision is an interface standard introduced in 2006 for high-performance industrial cameras.
(…)
The GigE Vision standard is - by a few definitions (for example, the ITU-T) - an open standard. However, it is available only by signing a non-disclosure agreement, so it is considered by many European countries to be a closed standard.
(…)
It is available under license to any organization for a nominal fee.
(…)
One consequence of the license is that it is not possible to write open source software using the GigE Vision specification, as it could reveal the details of the standard, which is why most image acquisition SDKs for GigE Vision are closed source.
There are currently at least two different free-software projects trying to implement the GigE Vision protocol by reverse engineering.
Will have to see the state of advancement of the reverse engineering projects.
I have been contemplating the migration from Subversion to Mercurial for a long while now.
I'll probably try the migration sometime soon.
If you are using the source code and have a problem with this, please discuss in this thread.
The advantages of Mercurial over SVN: easier merging of external contributions, easier branch merges, more accessible branch history, and history replicated everywhere as a backup.
The CodingTeam forge where the code lives already supports Mercurial.
It is possible to do live capture in high speed using Casio's driver
Really? Interesting! I assumed the casio would only record highspeed to its internal card.
Do you have more details? Is it a new firmware for the camera? What is the output connection? What software allows this?
(Careful, I'm not a specialist at all.)
If the PC has a graphics card with two outputs, you can plug in a second monitor. Some cards even have a TV output, so you can mirror the computer's display on a TV screen. (That said, at an equivalent size a PC monitor may be cheaper, and the resolution will be better.)
I don't really see how to send the video to a display without also sending the rest of the software's interface. There is no full-screen mode in Kinovea yet, and even then, both screens would "see" the same thing.
Sending the camera feed directly to a display screen is possible, but then the problem is getting that feed back to the PC…
If anyone has found solutions, feel free to share…
is kinovea based on VLC?
No, but both share use of the FFMpeg decoding library.
is it possible to auto pause the video on a comment (ideally outside of the Kinovea application playback with VLC or WMP etc)
Auto pause is done through the time freeze export, lower right button. Do you know of an application that can be told to auto pause a video based on the video's metadata?
I can get the stopwatch to show, but it doesn't then run?
Yes, and I realize it's not very intuitive. When you add a stopwatch it just sits there. You have to right-click > start stopwatch.
Maybe it would be better to attach it a few frames back and turn the counter on at the current frame, I don't know.
Is there a Kinovea player application?
Kinovea :)
I am getting confused with the frame rate settings. Footage will always be at the same rate; can this be set as a preference?
The frame rate setting is only for high-speed cameras. Do you only have videos filmed in high speed?
2: having the subtitle run under the video for a set number of frames, or for the duration of the working zone, should be simpler to implement than synced subtitles
This could be the same feature as the suggested "drawing stays on screen until the next drawing in the sequence appears": basically, all the drawings on the current key image would stay 100% opaque until the next key image.
This would correspond to attaching the drawings to the "section" between two key images instead of just the first image.
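The persistence rule above boils down to a small opacity function. This is an illustrative sketch of the idea, with invented names, not Kinovea's rendering code:

```python
# Sketch of "drawing persists until the next key image": a drawing attached
# to key image K is fully opaque for every frame in [K, next key image),
# instead of only around its own frame. All names here are illustrative.

def drawing_opacity(frame, key_images, attached_to):
    """Opacity (0.0 or 1.0) of a drawing attached to key image `attached_to`."""
    later = [k for k in sorted(key_images) if k > attached_to]
    end = later[0] if later else float("inf")
    return 1.0 if attached_to <= frame < end else 0.0

# Key images at frames 10 and 50: a drawing on frame 10 stays visible
# through frame 49, then disappears when the next key image takes over.
print(drawing_opacity(30, [10, 50], 10))  # 1.0
print(drawing_opacity(50, [10, 50], 10))  # 0.0
```

A drawing on the last key image has no successor, so it would stay visible until the end of the working zone.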
Oh, hadn't thought about subtitles comments. Interesting idea.
Currently there is the label tool and the comment box, but as you noted these are attached to a single frame, and more for interactive review.
I have used one or two subtitle-authoring programs in the past, and there are some design challenges to address. Providing a way to synchronize the text with the video complicates the interface a lot.
Please discuss what you think would be the best approach to this :)
There was once a sub project to have extensive support for this and other features. (wiki). (not planned at the moment)
If we display the coordinate axis on the video, users would probably expect to be able to redefine it directly… (?)
I have the same question regarding the ave speed calculation. I'm measuring bar speed for the bench press in the sagittal plane.
"is it X or Y or the resultant."
The "resultant".
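The resultant is just the magnitude of the velocity vector built from the X and Y components. A quick sketch with made-up bar-speed values:

```python
# Resultant speed from horizontal and vertical components: v = sqrt(vx^2 + vy^2).
# The sample values below are invented for illustration.
import math

def resultant_speed(vx, vy):
    """Magnitude of the velocity vector from its X and Y components."""
    return math.hypot(vx, vy)

# e.g. 0.3 m/s horizontal drift combined with 0.4 m/s vertical bar speed
print(resultant_speed(0.3, 0.4))  # 0.5
```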

Also, can you provide the ave speed as part of the exported data?
Well, I see there are requests for more detailed data, or data presented differently, etc. Maybe we need to design the "export to spreadsheet" feature differently so that users have more control over what gets exported and how… (Currently you can play with the XSLT files in the program directory, but it's rather complex and you can't do everything anyway.)
Any plans on calculating acceleration values?
I just added this to the idea backlog. You can "+1" it there.