I managed to add some compensation for the z-shift. I didn't use calibration over a distance, as it was proving too tedious and the sensor would often report bad data that I couldn't easily filter out. I switched my approach to using a formula:
ZInitial = headPosition.Z at sweet spot;
ZCurrent = headPosition.Z;
YGaze = gazePoint.Y;
YMouse = cursorPoint.Y;
YDelta = YMouse - YGaze;
YConstant = Math.Sqrt(Math.Abs(YDelta))/(ZCurrent - ZInitial);
Then to compute the delta needed to put it back in the right spot:
YDelta = ((ZCurrent-ZInitial)*YConstant)^2
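As a quick sanity check, the two formulas invert each other: squaring `(ZCurrent - ZInitial) * YConstant` gives back the magnitude of the original delta (the sign is discarded by the square root and the square). Here's a minimal Python sketch of that round trip; the numeric values are made up for illustration:

```python
import math

# Hypothetical example values, not real tracker output.
z_initial = 0.55   # head Z recorded at the sweet spot
z_current = 0.70   # head Z after the user leaned back
y_delta = 80.0     # measured cursor-minus-gaze error in pixels

# Calibration step: derive the constant from one measured error.
y_constant = math.sqrt(abs(y_delta)) / (z_current - z_initial)

# Correction step: reconstruct the delta from the current Z offset.
y_recovered = ((z_current - z_initial) * y_constant) ** 2

print(y_recovered)  # equals abs(y_delta); the delta's sign is lost
```

Note that only |YDelta| survives the round trip, which is one reason a separate constant per side of the sweet spot is needed.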
If I could get a good value for the head's Z-distance from the sensor, I'm sure it would work smoothly. The code currently has a problem when the user moves closer to the sensor. I think that's because I'm incorrectly judging head distance based on the pupils. It would be nice to know the actual distance somehow; maybe more calibration and tweaking?
How it works:
1. The user runs a normal calibration run to set the sensor up properly (looking at the circles).
2. The user uses the GazeDot and positions their body so that the gaze dot lines up with where they are looking on the screen.
3. User clicks a button which calls SetSweetSpot()
4. User moves farther away, looking at the tip of their cursor and clicks a button that calls SetDeltas()
5. User moves closer to the sensor and clicks a button that calls SetDeltas() again (which is supposed to calculate YConstantOnNearZ but has issues right now)
6. You now have a method of correcting the gaze offset by a bit.
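The steps above can be sketched in miniature. This is an illustrative Python model of the workflow (steps 2–6), not the plugin code below; it mirrors the same near/far convention, and all names and values are hypothetical:

```python
import math

class CorrectionSketch:
    """Toy model of the sweet-spot calibration workflow."""

    def __init__(self):
        self.z_initial = 0.0
        self.k_far = 0.0   # constant learned on one side of the sweet spot
        self.k_near = 0.0  # constant learned on the other (less reliable) side

    def set_sweet_spot(self, z_current):
        # Step 3: remember the head Z where gaze needs no correction.
        self.z_initial = z_current

    def set_deltas(self, z_current, y_mouse, y_gaze):
        # Steps 4-5: one measured error per side of the sweet spot.
        k = math.sqrt(abs(y_mouse - y_gaze)) / (z_current - self.z_initial)
        if z_current < self.z_initial:
            self.k_far = k
        else:
            self.k_near = k

    def correct_y(self, y, z_current):
        # Step 6: subtract the modeled offset for the current head Z.
        k = self.k_far if z_current < self.z_initial else self.k_near
        return y - ((z_current - self.z_initial) * k) ** 2
```

For example, after `set_sweet_spot(0.5)` and `set_deltas(0.7, 180.0, 100.0)` (an 80-pixel error while leaned away from the sweet spot), `correct_y(100.0, 0.7)` pulls the gaze Y back by 80 pixels.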
I was able to keep the gaze location within 100 pixels of the actual gaze location. It's not perfect, but it's a heck of a lot better, and it seems to work nicely within a 2 ft range of the sweet spot. It would no doubt be better still if I could get a realistic head position, or if the sensor had depth sensing built in.
The X correction when you move side to side will most likely need to be done by figuring out the projection from the sensor to the user, and I don't understand the math behind that very well (I was never good at 3D maths).
Here's a video showing the result (video is unlisted):
http://www.youtube.com/watch?v=0PvtbHz_arE

Code:
using System;
using System.Drawing;
using System.Runtime.InteropServices;
using TETCSharpClient;
using TETCSharpClient.Data;

namespace NetworkController.Plugin.EyeTribe
{
    public class Correction : IGazeUpdateListener, IDisposable
    {
        public static Correction Instance;
        public double ZInitial;
        public double ZCurrent;
        public double YGaze;
        public double YMouse;
        public double YDelta;
        public double YConstantOnFarZ;
        public double YConstantOnNearZ;

        public Correction()
        {
            Instance = this;
            GazeManager.Instance.AddGazeListener(this);
        }

        public void Dispose()
        {
            GazeManager.Instance.RemoveGazeListener(this);
            Instance = null;
        }

        /// <summary>
        /// Call while looking at a point on the screen when the dot from the normal
        /// EyeTribe result sits exactly on that point; the current head Z becomes the sweet spot.
        /// </summary>
        public void SetSweetSpot()
        {
            ZInitial = ZCurrent;
        }

        /// <summary>
        /// Assuming the user is looking directly at the tip of the mouse cursor, the offsets can be calculated.
        /// The user should lean back or forward enough to produce a noticeable error.
        /// </summary>
        public void SetDeltas()
        {
            YDelta = YMouse - YGaze;
            // Yc = Sqrt(|Yd|) / (Zc - Zi)
            if (ZCurrent < ZInitial)
                YConstantOnFarZ = Math.Sqrt(Math.Abs(YDelta)) / (ZCurrent - ZInitial);
            else
                YConstantOnNearZ = Math.Sqrt(Math.Abs(YDelta)) / (ZCurrent - ZInitial); // separate constant for near, due to the bad head-position Z value
        }

        public void OnGazeUpdate(GazeData gazeData)
        {
            var isValid = ((gazeData.State & GazeData.STATE_TRACKING_GAZE) != 0)
                          && ((gazeData.State & GazeData.STATE_TRACKING_PRESENCE) != 0)
                          && ((gazeData.State & GazeData.STATE_TRACKING_EYES) != 0)
                          && ((gazeData.State & GazeData.STATE_TRACKING_FAIL) == 0)
                          && ((gazeData.State & GazeData.STATE_TRACKING_LOST) == 0)
                          && gazeData.SmoothedCoordinates != null
                          && gazeData.SmoothedCoordinates.X != 0
                          && gazeData.SmoothedCoordinates.Y != 0;
            if (!isValid) return;

            var headPosition = gazeData.HeadPosition(); // See other post for code: http://theeyetribe.com/forum/viewtopic.php?f=11&t=35&sid=301d70c38eb44f37495cf997dc8d9b11
            if (headPosition == null) return;

            // Now that the data is probably valid we can make use of it.
            var cursorPoint = GetCursorPosition();
            var gazePoint = gazeData.SmoothedCoordinates;
            ZCurrent = headPosition.Z;
            YGaze = gazePoint.Y;
            YMouse = cursorPoint.Y;
        }

        public Point2D CorrectPoint(Point2D point)
        {
            var result = new Point2D(point);
            var y = (ZCurrent - ZInitial) * (ZCurrent < ZInitial ? YConstantOnFarZ : YConstantOnNearZ);
            result.Y = result.Y - (y * y); // Yd = ((Zc - Zi) * Yc)^2; note the square only recovers the delta's magnitude, not its sign
            return result;
        }

        public void OnCalibrationStateChanged(bool isCalibrated) { }

        public void OnScreenIndexChanged(int screenIndex) { }

        #region Windows Mouse API

        /// <summary>
        /// Struct representing a point.
        /// </summary>
        [StructLayout(LayoutKind.Sequential)]
        public struct POINT
        {
            public int X;
            public int Y;

            public static implicit operator Point(POINT point)
            {
                return new Point(point.X, point.Y);
            }
        }

        /// <summary>
        /// Retrieves the cursor's position, in screen coordinates.
        /// </summary>
        /// <see>See MSDN documentation for further information.</see>
        [DllImport("user32.dll")]
        public static extern bool GetCursorPos(out POINT lpPoint);

        public static Point GetCursorPosition()
        {
            POINT lpPoint;
            GetCursorPos(out lpPoint); // if this fails, lpPoint stays zeroed
            return lpPoint;
        }

        #endregion
    }
}