1,141

(1 replies, posted in General)

Bumping an old topic to add some info :)

- Currently the player is mostly single-threaded. There is a second thread, but it is only used for the timing ticks. Decoding and rendering both happen on the main thread (this can explain poor performance with HD videos).

- The file explorer loads thumbnails in a background thread.

- When you see a progress bar, it generally indicates a task running in a background thread (saving to file, applying image filters, loading the image cache).

- In the capture screen, the main thread is used for rendering. The actual grabbing from the device runs in a second thread.
Until recently the recording happened on the main thread as well. This caused the reported timing issue (recorded video playing too fast).
This should be fixed in the next version, as the recording now happens in its own thread.

There is a plan for asynchronous decoding (a decoding thread would fill a queue and the main thread would consume it). We had discussed this with Phalanger some time ago, and I would like to try to address it this summer. (Either this or creating a good framework for generalized "trackability" of the drawings.)
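
For the curious, the general shape would be a classic producer-consumer setup. A minimal sketch, in Python for illustration (the decoder and renderer objects are hypothetical stand-ins, not Kinovea's actual classes):

    import queue
    import threading

    # Bounded buffer of decoded frames shared by the two threads.
    frame_queue = queue.Queue(maxsize=8)

    def decode_loop(decoder):
        # Producer: decodes ahead of the playhead, blocks when the buffer is full.
        while True:
            frame = decoder.decode_next()
            frame_queue.put(frame)      # blocks if the queue is full
            if frame is None:           # end of stream marker
                break

    def render_loop(decoder, renderer):
        # Consumer: the main thread only renders and never waits on the codec,
        # unless the producer falls behind.
        threading.Thread(target=decode_loop, args=(decoder,), daemon=True).start()
        while True:
            frame = frame_queue.get()
            if frame is None:
                break
            renderer.draw(frame)

The bounded queue is the important part: it absorbs occasional slow frames without letting memory usage grow unchecked.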

Thanks!
It's not clear whether it adds streaming or just remote control of the hardware. It sounds like changing shutter speed, adjusting the lens, or triggering the capture (but the capture itself would still happen on the device). Do you have streaming working in another application?

1,143

(4 replies, posted in Bug reports)

In fact I'm considering a release in the next few days, because there have been some wide-ranging internal changes and I would like to have them more thoroughly tested as soon as possible.

edit: new version is online.

1,144

(4 replies, posted in Bug reports)

The issue with Samsung cameras should be fixed in the next experimental version (0.8.16).

1,145

(1 replies, posted in Bug reports)

It has to do with the encoding of the video.

Some codecs encode each frame independently; others combine data from several frames to improve the compression ratio.
In the latter case, the encoding is generally optimized for "forward decoding". Going back one frame means we actually have to go back to the previous key frame (the last full frame), then decode forward all the way to the target frame.
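
To make the cost concrete, stepping backward in such a video amounts to something like the following rough sketch (Python for illustration; the decoder calls are hypothetical stand-ins, not Kinovea's actual code):

    # Rough sketch of "go back one frame" with an inter-frame codec.
    # seek_to_keyframe_before() and decode_next() are hypothetical calls.
    def step_backward(decoder, current_index):
        target = current_index - 1
        # Jump back to the last key frame (full frame) at or before the target...
        key_index = decoder.seek_to_keyframe_before(target)
        # ...then decode forward, frame by frame, up to the target.
        frame = None
        for _ in range(key_index, target + 1):
            frame = decoder.decode_next()
        return frame

The farther the target is from its key frame, the more frames a single backward step has to decode.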

MTS files use AVCHD, which uses this technique along with other, more complicated patterns (which sometimes confuse Kinovea to the point where the frames are not in order). Also, since this is HD video and decoding each frame takes some time, the issue is more visible than with lower resolution samples, even if they use a similar encoding pattern.

If you are analyzing very short clips, you can constrain the working zone to 3 seconds or less to trigger the "caching" of frames to memory.
Transcoding to an "intra-frame only" codec may also fix the issue (MJPEG is such a codec, but there are others).
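
(For a rough order of magnitude on the caching option: assuming 25 fps and 1080p frames held uncompressed in RGB24, which are illustrative numbers rather than Kinovea's exact internals, 3 seconds is 75 frames at about 6 MB each, around 450 MB in total. That is why the cached zone has to stay short.)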

edit: Oh, you mean it doesn't work at all? Hmm, yes, that might be one instance where the decoding process gets confused. I'll have to dig deeper into this matter to see what is going on…
If you have logging activated in Debug mode, you can see what it is trying to do and what is really happening.

1,146

(13 replies, posted in General)

Update:
I started to look into this. I think we could have primitive full screen support first, and then improve upon it. By primitive, I mean that it will go into full screen (hiding the window bar and the Windows task bar), the menu and main tool bar will be hidden, and the file explorer collapsed, but the screen itself will still have all its controls on it.
In a later installment, we will have to reconfigure the screen's controls dynamically to get the most screen real estate possible.

I have just made some proof of concept tests in a sandbox, so it's still subject to unforeseen issues.

On the topic of tools, several ideas could be mixed together in a bigger scheme:

- Stick figure tool to map body position during performance. (How many joints? What proportions?)
- Lower-body-only posture (to represent knee bending, hips level; used in podiatry for instance)
- Full body posture
- Profile line (used during squat jump and other tests analysis for example)
- Drawing a stick figure to represent the body during an exercise or stretching position.

All these tools would share a lot of source code. They might all be represented by:
- a set of joints
- a set of segments linking some joints together.
- constraints on segments? A hierarchy? (to be defined)

The idea would be to design a tool family that would work in a generic way for this type of joints+segments tools.
(instead of creating X static tools)

Specs:
- The set of joints, segments, and constraints would be described in a text file (see the hypothetical sketch after this list).
- The user would be able to create his own files to expand his toolbox.
- Everything else like rendering and manipulation of the joints would be handled in a generic way.
- Some all-purpose instances would be bundled by default, like the ones cited above.
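
To make this concrete, a tool file might look something like the following. The syntax is purely hypothetical, just to illustrate the joints + segments idea; nothing is settled:

    <KinoveaTool name="Lower body posture">
      <Joints>
        <Joint id="hip" />
        <Joint id="knee" />
        <Joint id="ankle" />
      </Joints>
      <Segments>
        <Segment from="hip" to="knee" />
        <Segment from="knee" to="ankle" />
      </Segments>
      <!-- Constraints are still to be defined: fixed segment lengths,
           allowed angle ranges, parent/child hierarchy, etc. -->
    </KinoveaTool>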


Input very welcome on:

- An already existing file format created for this purpose. (Otherwise it will be XML with an "as simple as possible" syntax.)
Todo: look into stick figure animation software, software to design exercises or yoga positions, etc.

- Imagine yourself using such a tool. What do you expect when you drag a joint around? What types of constraints?

Thanks

edit: Checking on stick figure animation software. Very interesting usability.

1,148

(8 replies, posted in Cameras and hardware)

I don't think it'll work. As kinoveafan wrote, the software has to be made specifically for the multicam driver. That is, we can't use it directly through DirectShow; special code has to be written and integrated into the pipeline. Sorry if this wasn't clear enough.

1,149

(1 replies, posted in Cameras and hardware)

Hi,

Short answer: no, and unfortunately it's not likely to be supported in the near future.

Long answer:
Relevant bits from Wikipedia (I had also read about it a while ago):

GigE Vision is an interface standard introduced in 2006 for high-performance industrial cameras.
(…)
The GigE Vision standard is - by a few definitions (for example, the ITU-T) - an open standard. However, it is available only by signing a non-disclosure agreement, so it is considered by many European countries to be a closed standard.
(…)
It is available under license to any organization for a nominal fee.
(…)
One consequence of the license is that it is not possible to write open source software using the GigE Vision specification, as it could reveal the details of the standard, which is why most image acquisition SDKs for GigE Vision are closed source.
There are currently at least two different free software projects trying to implement the GigE Vision protocol by reverse engineering.

We'll have to see how far along the reverse engineering projects are.

1,150

(1 replies, posted in General)

I have been contemplating the migration from Subversion to Mercurial for a long while now.
I'll probably try the migration sometime soon.

If you are using the source code and have a problem with this, please discuss it in this thread.

The advantages of Mercurial over SVN will be: easier merging of external contributions, easier branch merges, more accessible branch history, and full history replicated everywhere as a backup.
The CodingTeam forge where the code lives already supports Mercurial.
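
For those following along, the conversion itself should be straightforward with Mercurial's bundled convert extension. A sketch with a placeholder URL, not the actual commands or paths that will be used:

    # Enable the convert extension in ~/.hgrc:
    #   [extensions]
    #   convert =
    # Convert the Subversion repository into a new Mercurial repository:
    hg convert http://server/svn/kinovea kinovea-hg
    # The converted repository has no working copy checked out yet:
    cd kinovea-hg
    hg update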

feedback wrote:

It is possible to live capture in high speed using Casio's driver

Really? Interesting! I assumed the Casio would only record high speed to its internal card.
Do you have more details? Is it a new firmware for the camera? What is the output connection? What software allows this?

1,152

(13 replies, posted in Français)

(Fair warning: I'm not a specialist at all :) )

If the PC has a graphics card with two outputs, you can connect a second monitor. Some cards even have a TV output, so you can get a copy of the computer's screen on a TV set. (That said, at an equivalent size a PC monitor may well be cheaper, and the resolution will be better.)

I don't really see how to send the video to a display without also sending the rest of the software's interface. There is no full screen mode in Kinovea yet, and even with one, both screens would "see" the same thing.
Sending the camera feed directly to a display screen is possible, but then the problem is getting that feed back to the PC…

If anyone has found solutions, don't hesitate to share…

HDAV wrote:

is kinovea based on VLC?

No, but both use the FFmpeg decoding library.

HDAV wrote:

is it possible to auto pause the video on a comment (ideally outside of the Kinovea application playback with VLC or WMP etc)

Auto pause is done through the time freeze export, the lower right button. Do you know of an application that can be told to auto pause a video based on the video's metadata?

HDAV wrote:

I can get the stop watch to show but it doesnt then run?

Yes, and I realize it's not very intuitive. When you add a stopwatch it just sits there. You have to right-click > Start stopwatch.
Maybe it would be better to attach it a few frames back and turn the counter on at the current frame, I don't know.

HDAV wrote:

Is there a Kinovea player application?

Kinovea :)

HDAV wrote:

I am getting confused with the frame rate settings footage will always be at the same rate can this be set as preference?

The frame rate setting is only for high speed cameras. Are all of your videos filmed in high speed?

1,154

(5 replies, posted in Bug reports)

HDAV wrote:

2: the subtitle to run under the video for a set number of frames or for the duration of the working area should be simpler to implement than sync'd subtitles

This could be the same feature as the suggested "Drawing stays on screen until next drawing in sequence appears": basically, all the drawings on the current key image would stay 100% opaque until the next key image.
This would correspond to attaching the drawings to the "section" between two key images instead of just to the first image.
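
In code terms, the rule could be as simple as this sketch (Python for illustration; the names and the fading fallback are hypothetical, not Kinovea's actual logic):

    # A drawing attached to the "section" between two key images stays fully
    # opaque inside that section, then fades out with distance.
    def drawing_opacity(frame, key_frame, next_key_frame, fade_frames=10):
        if key_frame <= frame < next_key_frame:
            return 1.0                   # 100% opaque inside the section
        # Outside the section, fade with distance from the nearest edge.
        if frame < key_frame:
            distance = key_frame - frame
        else:
            distance = frame - next_key_frame + 1
        return max(0.0, 1.0 - distance / fade_frames)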

1,155

(5 replies, posted in Bug reports)

Oh, I hadn't thought about subtitle comments. Interesting idea.
Currently there are the label tool and the comment box, but as you noted these are attached to a single frame, and are more for interactive review.
I have used one or two subtitle authoring programs in the past and there are some design challenges to address. Providing a way to synchronize the text with the video complicates the interface a lot.
Please discuss what you think would be the best approach to this :)