
Tracking objects with the Eyetribe Server

Posted: 01 Mar 2014, 15:28
by garbage
Hi,

I’m new to this community and would like to say hello to everybody first. I have read all the posts on this forum and would like to discuss one of the crucial requirements for any good gaze-supported software: the tracking of objects.

Can you describe the current approach of the Eyetribe Server in more detail? To what extent is a tracker available? I am wondering whether you have considered advanced tracking algorithms, for example ones based on hidden Markov models. This implementation (http://youtu.be/W4hpgdNp3rA) seems quite promising. The research paper can be found here: http://pages.bangor.ac.uk/~eesa0c/pdfs/mantiuk13gdot.pdf.
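
To make the idea concrete, here is how I understand the core of that approach. This is only a toy per-frame sketch in Python; the names and sigma values are mine, and it leaves out the temporal (HMM) smoothing the paper adds on top:

[code]
import math

def score_object(gaze_pos, gaze_vel, obj_pos, obj_vel,
                 sigma_pos=80.0, sigma_vel=40.0):
    # Heuristic likelihood that the gaze is on this object: combine
    # positional proximity with velocity agreement. Units are pixels
    # and pixels/second; the sigma values are guesses, not tuned.
    d_pos = math.dist(gaze_pos, obj_pos)
    d_vel = math.dist(gaze_vel, obj_vel)
    return math.exp(-(d_pos / sigma_pos) ** 2 - (d_vel / sigma_vel) ** 2)

def tracked_object(gaze_pos, gaze_vel, objects):
    # objects: dict name -> (position, velocity), supplied by the
    # application every frame.
    return max(objects,
               key=lambda n: score_object(gaze_pos, gaze_vel, *objects[n]))
[/code]

The velocity term is the interesting part: it is what lets smooth pursuit stay locked onto a moving target even while the raw gaze point lags and jitters.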

IMHO tracking is the most crucial issue in expanding the capabilities of eye-controlled systems. Therefore I am wondering which features will be provided by the Eyetribe Server and which tasks have to be addressed in the application domain.

Thanks!

Re: Tracking objects with the Eyetribe Server

Posted: 02 Mar 2014, 11:03
by bugi74
Different use cases require different algorithms. The linked algorithm might work well in the demonstrated case, but in other cases it might actually work worse, so it cannot be the only tracking algorithm available. I do not think it is the TET server's task to implement all of them (and let the user choose one), especially when e.g. the linked algorithm requires the application to provide quite a bit of dynamic extra data (the objects that could possibly be looked at, and their velocities). Also, new algorithms will be developed over time; it is unrealistic to expect TET to implement them all, at least as quickly as we users would want it (always "yesterday already").

It is enough if the server provides us developers with sufficient tracking data to implement whatever is needed ourselves.

However, the server currently does not provide some of the raw/source data, which, for example, adds an extra step when implementing one's own calibration/coordinate-mapping algorithms. Not a show-stopper, though. So, instead of explaining their server's tracking, I'd just wish they would expose more of the data, and provide a way to bypass the built-in calibration requirement.
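
To illustrate what I mean: if the raw eye features were exposed, fitting one's own mapping would be a few lines with the textbook second-order polynomial regression. A minimal sketch (the inputs and their meaning are hypothetical, nothing to do with TET's actual internals):

[code]
import numpy as np

def _design_matrix(eye_xy):
    # Six second-order polynomial terms: 1, x, y, xy, x^2, y^2.
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_poly_calibration(eye_xy, screen_xy):
    # Least-squares fit from raw eye features (e.g. pupil-glint
    # vectors) to screen coordinates, using N >= 6 calibration
    # targets. eye_xy, screen_xy: (N, 2) float arrays.
    coeffs, *_ = np.linalg.lstsq(_design_matrix(eye_xy), screen_xy, rcond=None)
    return coeffs  # shape (6, 2): one column per screen axis

def apply_calibration(coeffs, eye_xy):
    # Map raw eye features to screen coordinates.
    return _design_matrix(eye_xy) @ coeffs
[/code]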

Re: Tracking objects with the Eyetribe Server

Posted: 03 Mar 2014, 13:15
by garbage
Thanks for the fast reply. I am sure the presented algorithm is not the last word on this subject. Nevertheless, it is a modern approach to overcoming the challenges of tracking objects in a highly dynamic scenario.

I have some experience with pricey commercial eye trackers as well as with the ITU Gaze Tracker and some other open-source approaches. In every case, my experience has been that the main challenge is not gathering the raw gaze-fixation data but interpreting those measurements in a meaningful way. IMHO, when developing useful applications there is always a point where tracking is the limiting factor.

Letting application developers design their own tracking algorithms is a nice feature, but the task requires a lot of background knowledge of eye tracking. I cannot see that most of the target audience for a hundred-dollar sensor is willing and able to develop a state-of-the-art tracking algorithm on their own. Hence I think it is crucial to provide libraries with robust, reliable, and flexible tracking solutions. As long as these cannot be provided, I seriously doubt this product is ready for prime time, even if it only targets application developers.

Therefore I would like to repeat my question: which tracking algorithms does the Eyetribe Server provide, and are any developments expected from the TET team or any other group supporting this device?

Re: Tracking objects with the Eyetribe Server

Posted: 04 Mar 2014, 03:38
by Martin
Regarding the initial post: the example requires the algorithm to know the objects' shapes and locations on the display. It is an active application that yields smoother interaction, since it has been designed with gaze control in mind. The reason we released two samples for the Unity graphics engine is to enable this sort of interaction with full 3D control.

The YouTube video you posted is a nice example of how to use a priori knowledge to produce a great experience (on a $25,000 laboratory rig). Still, I believe an even better implementation has yet to be seen.

This is ongoing work, by us at The Eye Tribe, our partners, and the many innovative developers who have now received their devices.

We nowadays don't talk all that much about the inner workings of the algorithms (if you really want to know, come work with us). That's been a change for us, coming from academia to start this company. Still, the goal has always been to make eye tracking widely available; now we are trying a different approach. Step one: provide a solid and affordable product for early adopters. The reasoning is that thousands of minds around the world are a far better approach than building all applications in-house.

You can rest assured that we a) have a team working on algorithms and b) have shipped a very capable device. The next major update is in the works; it will be made available for download when it is ready.

Meanwhile, a solid starting point for those interested in learning more about event detection algorithms and gaze data processing techniques is:

"Identifying Fixations and Saccades in Eye-Tracking Protocols" by D. Salvucci and J. Goldberg (2000). Follow the paper trail from there.
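
For those who want to experiment right away, the dispersion-threshold variant (I-DT) from that paper fits in a few lines of Python. This is a minimal sketch with illustrative thresholds, not a description of our server's implementation:

[code]
def dispersion(window):
    # Salvucci & Goldberg's dispersion measure:
    # (max x - min x) + (max y - min y) over the window.
    xs = [x for _, x, _ in window]
    ys = [y for _, _, y in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    # samples: chronologically ordered (t, x, y) tuples, t in seconds,
    # x/y in degrees of visual angle. Returns fixations as
    # (t_start, t_end, centroid_x, centroid_y). The thresholds are
    # typical values, not prescriptions.
    fixations = []
    i = 0
    while i < len(samples):
        # Initial window spanning the minimum fixation duration.
        j = i
        while j < len(samples) - 1 and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if samples[j][0] - samples[i][0] < min_duration:
            break  # not enough data left for a full window
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Grow the window until dispersion exceeds the threshold.
            while j < len(samples) - 1 and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            fixations.append((window[0][0], window[-1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1  # drop the first sample and try again
    return fixations
[/code]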

Stay tuned.