Awesome! Thanks for the feedback smile
The problem with the second screen is general and exists even with different camera models.
When the second screen is created, it doesn't realize that the first camera in the device list is already streaming to the other screen, and it tries to connect to it. It doesn't fail, we just get a black screen…
Apparently it's a bit tricky to know if a given device is already in use by another application. However, detecting that it is in use by the other screen should be doable… (The hard part is not breaking the automatic reconnection mechanism.)
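To make the idea concrete, here is a minimal sketch of one possible approach, not Kinovea's actual code: keep a shared registry of the DirectShow moniker strings already claimed by a capture screen, and have the second screen skip those when it auto-connects. DeviceRegistry and its members are hypothetical names; the enumeration uses AForge.NET's FilterInfoCollection.

using System.Collections.Generic;
using AForge.Video.DirectShow;

// Hypothetical sketch: a registry of devices already claimed by a capture screen,
// keyed by their DirectShow moniker string.
public static class DeviceRegistry
{
    private static readonly HashSet<string> inUse = new HashSet<string>();

    // Returns false if the other screen is already streaming from this device.
    public static bool TryClaim(string monikerString)
    {
        lock (inUse) { return inUse.Add(monikerString); }
    }

    public static void Release(string monikerString)
    {
        lock (inUse) { inUse.Remove(monikerString); }
    }

    // Pick the first connected camera not already claimed by another screen.
    public static FilterInfo FindFreeDevice()
    {
        var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
        foreach (FilterInfo device in devices)
        {
            if (TryClaim(device.MonikerString))
                return device;
        }
        return null;
    }
}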

Yes, it makes sense smile

So,
CTRL + Key = next 1% spot.
Shift + Key = next 10% spot.
Key alone = next 25% spot.

Left and Right respectively decrease and increase the delay during live capture. (A small sketch of the key mapping follows below.)
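Purely as an illustration (hypothetical names, not the actual Kinovea handler), the modifier mapping described above could look like this:

using System.Windows.Forms;

// Illustrative sketch of the navigation step sizes described above.
public static class NavigationStep
{
    public static double FromModifiers(Keys modifiers)
    {
        if ((modifiers & Keys.Control) == Keys.Control)
            return 0.01;   // Ctrl + key: 1% jump.
        if ((modifiers & Keys.Shift) == Keys.Shift)
            return 0.10;   // Shift + key: 10% jump.
        return 0.25;       // Key alone: 25% jump.
    }
}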

----------------
Regarding capture and left/right reversal.
The idea was that when you press pause and the buffer is frozen, you would want to use the same interface for playing back and stepping through the recent frames.
I think the most intuitive way for playback is left-to-right navigation, left being the start of the video and right being the end (as in the playback screen).
So in that case it would be better if you can go forward in the video by pressing RIGHT.

Let's say the user didn't apply any delay. The slider is all the way left. The left end is the most recent frame, the right end is the oldest.
As soon as you pause frame grabbing, we will want to flip the slider over so that left is now the start and right is now the end. Maybe even force the cursor back to the left.

At that point, sliding right (with the mouse or keys) should make you go forward in time. This is the opposite of live capture, where sliding right makes you go back in time.
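A minimal sketch of that reversal, assuming the buffer is indexed from oldest (0) to most recent; the helper name is hypothetical:

// Hypothetical helper: while grabbing live, the slider expresses delay
// (left = most recent frame); once paused, it becomes a playhead
// (left = oldest frame, right = most recent).
public static class DelaySliderMapping
{
    // sliderPosition in [0, 1], bufferCount = number of buffered frames,
    // frame index 0 = oldest frame, bufferCount - 1 = most recent.
    public static int ToFrameIndex(double sliderPosition, int bufferCount, bool paused)
    {
        if (bufferCount == 0)
            return 0;

        int last = bufferCount - 1;
        return paused
            ? (int)(sliderPosition * last)           // Paused: sliding right goes forward in time.
            : (int)((1.0 - sliderPosition) * last);  // Live: sliding right goes back in time (more delay).
    }
}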

(Maybe the reversal of the function will not be so obvious, I don't know.)

Yes, I think the PS3Eye driver is one example of a driver that doesn't support two cameras connected at once.
(Unless using the SDK that the people at CodeLaboratories propose, but that means developing code specifically around these devices.)

Ah yes, sorry, my mistake smile
That function only appeared in a later version.
The experimental version is available from the topic attached at the top of this subforum. Here for 0.8.13.


(10 replies, posted in General)

biosol wrote:

the whole legs are blurring and sometimes it appears to go forward - backward for a while.  Maybe there is a setting I need to change or a de-interlacing issue?  I don't know...

Yes exactly, it looks like combing from interlaced frames. Try with menu Image > Deinterlace.

biosol wrote:

Also, just for additional info, the SiliconCoach people told me to have at least a Core2 cpu, so maybe trying to use my old P4 system for anything other than web surfing and email is not practical any more.  What I'm kind of trying to figure out is if Kinovea would run fine on the new Core i3 cpu that have the graphics built into the cpu (2100/2200 I think)?

I'm afraid I'm not very knowledgeable when it comes to CPU types and power smile Maybe some other users can share their experience with regard to hardware specs.
Currently most of the drain is on the CPU, but there is a sub-project to get the rendering done through Direct2D, which should leverage hardware acceleration from the graphics card and give a good performance boost.

biosol wrote:

Last, do you have a release schedule for the beta?  Just wondering when the next version might be coming out?

The next experimental version should hopefully be released at the beginning of April.
The next official version will be released when the current one is stable enough (no new features are planned) and after the translations and manual have been updated.

Hello,

One scenario that hasn't been tested (to my knowledge) is how the program reacts when two cameras of the exact same brand and model are connected to the computer.
If anyone can test this setup, please report back. (Experimental version 0.8.13 is needed.)

When clicking the configuration button, does the device list display two different entries?
I realize it may also depend on the device driver, since it needs to be capable of handling two devices at once.
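For reference, here is a small sketch (illustrative only, not part of Kinovea) of how the same enumeration could be checked outside the program with AForge.NET's DirectShow device listing: if the driver exposes both cameras properly, two entries with distinct moniker strings should be printed.

using System;
using AForge.Video.DirectShow;

// Sketch: list every DirectShow video input device with its moniker string.
// Two identical cameras should show up as two entries with different monikers.
public static class ListDevices
{
    public static void Main()
    {
        var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
        foreach (FilterInfo device in devices)
        {
            Console.WriteLine("{0} -> {1}", device.Name, device.MonikerString);
        }
    }
}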

Thanks


(10 replies, posted in General)

biosol wrote:

I have loaded in two standard definition videos (captured from a commercial program) and the playback seems choppy and the speed of the playback seems inconsistent.  Is a more powerful computer required to playback 2 video screens?  Should I install a different version of .net or some other software?

Do you have this with all videos or only these specific ones?
There can be some issues of jumpy playback with some H.264 files.
If you see the speed slider moving to the left by itself, this indeed indicates that the program has trouble keeping up the pace (do you know the frame rate of the video?).

biosol wrote:

I do have SiliconCoach running on a powerful system, a Core2Quad 9550 Windows 7_64 with 8GB RAM and a GeForce GTS250 video card.  I could install Kinovea on this system if you're sure it will not affect my SiliconCoach installation.  Please advise.

I cannot think of anything that would affect your existing programs. Files are installed in Program Files, and there is one entry in the registry to remember the installation language.
Most notably, Kinovea doesn't register itself to open any file type or to be launched when a video device is connected to the machine. (I know some other software does that, but personally I find it a bit intrusive.)

And regarding your last question, no, there are currently no functions that work on the two capture screens simultaneously.
Thanks

daww wrote:

My experience today told me that it would be good if after capturing a video (you get a little vignette under the main screen) it would be great if you could double click directly that vignette to read it or better if you could right click on it and have a menu to open it rename it etc...

Thanks for the feedback, also check the other thread for capture screen suggestions.

The way to do it is to use the Dual export button on the lower right, to the right of the common navigation bar.

But there is a catch. If you are on Windows 7 + High DPI settings + Kinovea 0.8.7, this button might not be visible at all.
This bug has since been fixed, so you may want to try the experimental version. Alternatively, you can set the DPI settings back to 100% through Control Panel > Display.


(4 replies, posted in Français)

Yes, there can be issues with H.264-encoded files (which is the case for .MTS files), but normally they are supported.
When you get the chance, I'd still be interested to know whether the negative stopwatch problem is still there in the more recent versions smile

edit: I reproduced the issue with the latest version as well, on .m2ts files.
The navigation cursor also goes back and forth during playback hmm


(4 replies, posted in Français)

Hmm, strange.
If this is on version 0.8.7, it would be worth trying the experimental version, because the identification of frame positions has been slightly modified.
A given frame could be associated with the wrong position (off by a few frames), which could have produced this kind of bug.
If it's already on version 0.8.13 (or any version from 0.8.8 onward), then we'll have to investigate further.


(16 replies, posted in Cameras and hardware)

For the frame rate, it depends on the camera. Sometimes it is set and fixed on the server side, and we just retrieve whatever the server sends. Other times, you can specify parameters in the URL.

For example for AXIS network cameras, here is the list of possible parameters.

The final URL might look like this:

http://<CameraIP>/axis-cgi/mjpg/video.cgi?resolution=640x480&compression=10&color=0&text=0

This will really depend on the camera API and capabilities though. The camera manual should cover the details.

I'd assume little difference in terms of image quality between JPEG and MJPEG streams, as MJPEG is just a series of JPEG-compressed images inside a container (there is no temporal compression as in most other video encoding methods).
Some (most?) cameras might only support one of the two stream types.
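For what it's worth, AForge.NET exposes both flavors through the same IVideoSource interface, so supporting whichever one a camera offers is mostly a matter of picking the right class. A sketch, with a hypothetical useMjpeg flag and factory name:

using AForge.Video;

// Sketch: both stream types implement IVideoSource, so the rest of the
// capture pipeline does not need to care which one the camera supports.
public static class NetworkSourceFactory
{
    public static IVideoSource Create(string url, bool useMjpeg)
    {
        if (useMjpeg)
            return new MJPEGStream(url);   // Multipart stream of JPEG frames.

        return new JPEGStream(url);        // Single JPEG snapshot, polled repeatedly.
    }
}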


(16 replies, posted in Cameras and hardware)

Well, hopefully none of this will be necessary as network cameras should be natively supported in the coming versions. At least those exporting MJPEG or JPEG streams.
(With many thanks to the AForge.NET framework, already used in Kinovea for Directshow devices and image filters)

I tried to keep the workflow unchanged for users of classic capture devices.
The network camera will appear as just another device in the device list: you select it and then change its parameters.

Here are some screenshots of the updated interface:

http://www.kinovea.org/screencaps/0.8.x/devicenetwork.png

Then, when you reopen the source configuration, you may change the actual source (here, receiving a WebcamXP JPEG stream from another PC on the local network).
http://www.kinovea.org/screencaps/0.8.x/devicenetworkconfig.png

The most recently used network camera will be automatically tried.
The URL list will also keep the last 5 addresses that succeeded, for quick access.
The network camera source will not use the same disconnection monitoring mechanics. If nothing is coming from the other side, the screen will just stay black. (But as soon as the source is live again, it is displayed.)

This also makes for a cheap camera split: stream the source (e.g. using WebcamXP), then open two capture screens in Kinovea, both connected to the host. Each stream gets its own buffer, so you can play/pause/delay independently.
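A rough sketch of what that split amounts to underneath (hypothetical buffer type and placeholder URL, not the actual capture screen code): two independent stream objects pointed at the same host, each filling its own frame buffer.

using System.Collections.Generic;
using System.Drawing;
using AForge.Video;

// Sketch: two independent connections to the same MJPEG source,
// each with its own buffer, so delay/pause can differ per screen.
public class BufferedNetworkSource
{
    private readonly MJPEGStream stream;
    public readonly List<Bitmap> Buffer = new List<Bitmap>();

    public BufferedNetworkSource(string url)
    {
        stream = new MJPEGStream(url);
        stream.NewFrame += (sender, e) =>
        {
            // Clone the frame: AForge reuses the bitmap after the handler returns.
            lock (Buffer) { Buffer.Add((Bitmap)e.Frame.Clone()); }
        };
    }

    public void Start() { stream.Start(); }
    public void Stop() { stream.SignalToStop(); }
}

// Usage: one source per capture screen, both reading the same (placeholder) host URL.
// var screen1 = new BufferedNetworkSource("http://host:8080/video.mjpg");
// var screen2 = new BufferedNetworkSource("http://host:8080/video.mjpg");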

Of course much testing will be needed wink


(16 replies, posted in Cameras and hardware)

(Moved posts to a new thread)


(16 replies, posted in Cameras and hardware)

OK, I hadn't realized that AForge.NET already provides access to JPG / MJPEG streams of IP cameras through another API smile
I was able to serve the webcam with WebcamXP and then reconnect to it with AForge as a JPG stream on http://localhost:8080/cam_1.jpg, with a very good frame rate.
I also tested some open network cameras streaming from the Internet.
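For anyone who wants to reproduce the test, here is a minimal console sketch along those lines (the URL is the local WebcamXP address mentioned above; the class name and 5-second duration are arbitrary):

using System;
using System.Threading;
using AForge.Video;

// Sketch: pull the WebcamXP JPEG source for a few seconds and count frames.
public static class JpegStreamTest
{
    public static void Main()
    {
        int frames = 0;
        var source = new JPEGStream("http://localhost:8080/cam_1.jpg");
        source.NewFrame += (sender, e) => Interlocked.Increment(ref frames);

        source.Start();
        Thread.Sleep(5000);
        source.SignalToStop();
        source.WaitForStop();

        Console.WriteLine("Received {0} frames in 5 seconds.", frames);
    }
}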

It works well. What's needed now is a way to integrate this into the existing code and user interface, plus more testing.