Converting gaze coordinates to window coordinates

PostPosted: 04 Dec 2015, 03:11
by sunfish7
I can't see how to do this when the window is not fullscreen.

The demo doesn't behave correctly if the window is not fullscreen, even with the in-app recalibration.

I think to do this properly it may be necessary to recover the Unity app window's coordinates in screen space, but there doesn't seem to be any obvious way to do this:

http://forum.unity3d.com/threads/how-to ... ion.67228/

π

PS even if it IS fullscreen, I suspect it is still going to be slightly inaccurate, because some vertical space is consumed by Unity's editor header and footer (unless the project is built into a standalone app).

UnityGazeUtils.cs contains:

Code: Select all
        public static Point2D getGazeCoordsToUnityWindowCoords(Point2D gp)
        {
            double rx = gp.X * ((double)Screen.width / GazeManager.Instance.ScreenResolutionWidth);
            double ry = (GazeManager.Instance.ScreenResolutionHeight - gp.Y) * ((double)Screen.height / GazeManager.Instance.ScreenResolutionHeight);

            return new Point2D(rx, ry);
        }


...but that isn't going to work correctly. For example, if you're looking at the bottom of the monitor, ry will map that to the bottom of the window, which is not the same pixel. If there is a 1-inch footer below the window, there is now a 1-inch discrepancy between the pixel you're looking at and the pixel being reported.
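A corrected mapping would translate by the window's on-screen origin rather than rescale. A minimal sketch, assuming the gaze point arrives in monitor pixels with a top-left origin, and that the window's top-left corner (windowX/windowY, hypothetical here, since Unity does not expose it) can somehow be obtained:

```csharp
using System;

// Illustrative only: Unity does not expose the player window's screen
// position, so windowX/windowY below are hypothetical values that
// would have to come from a per-platform native call.
public static class GazeWindowMapping
{
    // Map a gaze point given in monitor pixels (origin at the
    // monitor's top-left) to window-local pixels, given the window's
    // top-left corner in the same coordinate space.
    public static (double X, double Y) MonitorToWindow(
        double gazeX, double gazeY,
        double windowX, double windowY)
    {
        // Translate by the window origin rather than rescaling by the
        // monitor/window ratio; the rescaling in the snippet above only
        // coincides with this when the window covers the whole monitor.
        return (gazeX - windowX, gazeY - windowY);
    }
}
```

With a window whose top-left corner sits at (100, 50), a gaze point at monitor pixel (350, 250) maps to window pixel (250, 200). Any Y-flip for Unity's bottom-left coordinate convention would then be applied against Screen.height, not the monitor height.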

Re: Converting gaze coordinates to window coordinates

PostPosted: 08 Dec 2015, 10:53
by Anders
I can't see how to do this when the window is not fullscreen. The demo doesn't behave correctly if the window is not fullscreen, even with the in-app recalibration.

We have several Unity demos on our GitHub. Which one are you talking about?

I think to do this properly it may be necessary to recover the Unity app window's coordinates in screen space, but there doesn't seem to be any obvious way to do this

That is correct. Unity does not support getting the anchor points of the player window. If a developer wishes to retrieve this information, they have to implement it themselves per platform. Searching the Unity Forums on this topic will tell you the same.
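On Windows, for instance, the per-platform implementation could go through P/Invoke. A sketch only, Windows-specific and untested here; the active-window assumption may not hold in every setup:

```csharp
using System;
using System.Runtime.InteropServices;

// Windows-only sketch: query the player window's rectangle in screen
// coordinates via user32.dll. Other platforms would need their own
// native calls (e.g. Cocoa on macOS, X11 on Linux).
public static class WindowRectWin32
{
    [StructLayout(LayoutKind.Sequential)]
    public struct RECT { public int Left, Top, Right, Bottom; }

    [DllImport("user32.dll")]
    private static extern IntPtr GetActiveWindow();

    [DllImport("user32.dll")]
    private static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);

    // Returns the window's rectangle in monitor pixels, or null if the
    // call fails (e.g. when not running on Windows).
    public static RECT? GetPlayerWindowRect()
    {
        RECT r;
        if (GetWindowRect(GetActiveWindow(), out r))
            return r;
        return null;
    }
}
```

The RECT's Left/Top would then serve as the window origin when translating monitor-space gaze coordinates into window-local ones.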

Getting the anchor point of the window is something that is supported by e.g. JavaFX. See our JavaFX sample for an example of this.

...but that isn't going to work correctly. For example, if you're looking at the bottom of the monitor, ry will map that to the bottom of the window, which is not the same pixel. If there is a 1-inch footer below the window, there is now a 1-inch discrepancy between the pixel you're looking at and the pixel being reported.

The snippet you refer to assumes fullscreen state, so it will not work for a floating window.

Our system is calibrated against the physical monitor chosen during the calibration process, which means gaze data is reported in that monitor's resolution. So unless you can retrieve the position of the window within that screen, you have no way of mapping gaze into the window. In your situation the limitation is in Unity. Go ahead and send them a feature request on this :-)