Subjective satisfaction for feedback? – What about botched calibration attempts?
Subjective satisfaction ratings for feedback
Before attempting to send feedback data, I’m never really sure how to rate my satisfaction on the scale; I don’t know what “good” is supposed to mean.
It makes me wonder how other people make that decision. For example, somebody might use the eye tracker with very high expectations; their “poor” experience might actually be the best that the eye tracker can do.
Are the stars a more important indicator?
Can the calibration stars reveal the true experience? For example, say somebody gets five stars but gives a low satisfaction rating. Doesn’t a five-star calibration indicate that the eye tracker was working optimally, regardless of how they felt about it? In that case, the satisfaction rating would not be as important.
Effect of fumbled or outlier calibrations
What happens if somebody screws up their calibration, e.g. blinking too much, missing a couple of targets, or moving around too much?
In some of the previous versions, I was getting one star in some attempts and five stars in others. Had the feedback-sending procedure been working in those versions, I’m not sure how you would have interpreted that. (Although, in those cases, I think it had to do with all the flickering that I was getting.)
Your preferences for the conditions under which we should send feedback?
Is it best if we only send feedback on calibrations that get 4 to 5 stars? Should we evaluate how we just performed after a calibration attempt, and cancel the send and try again if we felt we didn’t do well (e.g. didn’t follow the targets accurately)?
I imagine that you need us to send successful attempts, at least to some extent.
I use speech recognition, where you can correct mistakes and those corrections improve the software for the future. However, I’m mindful of the situations where I say “correct that” and, instead of correcting the word, I decide to change the entire thought. I wonder if that throws off the speech recognition, or if there’s some mechanism to compensate for it.
The same thing goes for using speech recognition with Google search. You say your search term, and if it’s recognized incorrectly, you go in and fix it manually, which is data that Google can use. Sometimes, though, I’ll go in not to fix the recognition attempt but to type a completely different term. I wonder how they deal with that; maybe if the edited term differs too much from the original, it’s ignored.
Similarly with the eye-tracking calibration: if I rub my eyes mid-calibration, maybe there’s a way for you to detect the divergence (a rough sketch of the kind of check I’m imagining is below). Maybe you have a way to gather useful data regardless of the conditions of the calibration. Or maybe you don’t, and we should try to be as attentive as possible during a calibration.
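Just to illustrate the kind of divergence check I mean, here is a minimal sketch in Python. The function and the numbers are entirely made up by me, and I have no idea how your calibration pipeline actually works; the idea is only that a target where the eyes wandered (say, during an eye rub) should stand out as an outlier against the other targets.

# Hypothetical sketch, not your actual code: flag calibration targets whose
# gaze error diverges from the rest (e.g. because of an eye rub or a missed target).
from statistics import median

def flag_outlier_targets(errors_px, k=3.0):
    """errors_px: mean gaze error in pixels for each calibration target.
    Returns the indices of targets whose error stands out from the others."""
    med = median(errors_px)
    # median absolute deviation: a robust measure of the typical spread
    mad = median(abs(e - med) for e in errors_px) or 1.0
    return [i for i, e in enumerate(errors_px) if e > med + k * mad]

# Example: target 3 was fumbled, so its error is an obvious outlier
print(flag_outlier_targets([14.0, 11.5, 12.8, 96.0, 13.2]))  # -> [3]

If something along those lines exists, maybe the fumbled targets could be dropped, or the whole attempt discounted automatically, before any feedback gets sent.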
Thanks.