Browsing by Author "Vickers, Stephen"
Now showing 1 - 16 of 16
[Metadata only] Designing gaze gestures for gaming: An investigation of performance. (Association for Computing Machinery, 2010) Istance, Howell; Hyrskykari, A.; Immonen, L.; Mansikkamaa, S.; Vickers, Stephen

[Open Access] Eye-gaze interaction techniques for use in online games and environments for users with severe physical disabilities. (De Montfort University, 2011) Vickers, Stephen
Multi-User Virtual Environments (MUVEs) and Massively Multiplayer Online Games (MMOGs) are a popular, immersive genre of computer game. For some disabled users, eye gaze offers the only input modality with the potential for sufficiently high bandwidth to support the range of time-critical interaction tasks required to play. Although there has been much research into gaze interaction techniques over the past twenty years, much of it has focused on 2D desktop application control. Some work has investigated the use of gaze as an additional input device for gaming, but very little has examined using gaze on its own. Further, configuring these techniques usually requires expert knowledge, often beyond the capabilities of a parent, carer or support worker. The work presented in this thesis addresses these issues through the investigation of novel gaze-only interaction techniques, which enable at least a beginner level of game play together with a means of adapting the techniques to suit an individual. To achieve this, a collection of novel gaze-based interaction techniques has been evaluated through empirical studies. These have been encompassed within an extensible software architecture that has been made available for free download. Further, a metric of reliability is developed that, when used within a specially designed diagnostic test, allows the interaction technique to be adapted to suit an individual. Methods of selecting interaction techniques based upon game task are also explored, and a novel methodology based on expert task analysis is developed to aid selection.

[Metadata only] EyeGuitar: Making rhythm based music video games accessible using only eye movements. (ACM, 2011) Vickers, Stephen; Istance, Howell; Smalley, M.

[Metadata only] For Your Eyes Only: Controlling 3D Online Games by Eye-Gaze (Springer, 2009) Istance, Howell; Hyrskykari, A.; Vickers, Stephen; Chaves, T.
Massively multiplayer online role-playing games, such as World of Warcraft, have become the most widespread 3D graphical environments, with millions of active subscribers worldwide. People with severe motor impairments should be able to take part in these games without the extent of their disability being apparent to others online. Eye gaze is a high-bandwidth modality that can support this. We have developed a software device that uses gaze input in different modes to emulate the mouse and keyboard events appropriate for interacting with online games. We report an evaluation study that investigated gaze-based interaction with World of Warcraft using the device. We found that it is feasible to carry out tasks representative of game play at a beginner's skill level using gaze alone. The results from the locomotion task part of the study show similar performance for gaze-based interaction compared with a keyboard and mouse. We discuss the usability issues that arose when completing three types of tasks in the game, and the implications of these for playing this type of game using gaze as the only input modality.

[Metadata only] Gaze gestures or dwell based interaction? (ACM, 2012-03) Hyrskykari, Aulikki; Istance, Howell; Vickers, Stephen
The two cardinal problems recognized with gaze-based interaction techniques are how to avoid unintentional commands and how to overcome the limited accuracy of eye tracking. Gaze gestures are a relatively new technique for giving commands that has the potential to overcome these problems. We present a study that compares gaze gestures with dwell selection as an interaction technique. The study involved 12 participants and was performed in the context of using an actual application. The participants gave commands to a 3D immersive game using gaze gestures and dwell icons. We found that gaze gestures are not only a feasible means of issuing commands in the course of game play, but also exhibited performance at least as good as, or better than, dwell selections. The gesture condition produced less than half the errors of the dwell condition. The study shows that gestures provide a robust alternative to dwell-based interaction, with the reliance on positional accuracy substantially reduced.
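As a rough illustration of the dwell-selection baseline that the gesture studies above compare against, the sketch below shows the usual pattern: a selection fires when gaze stays within a small radius of one point for a fixed time. This is a minimal sketch, not the authors' implementation; the class name, thresholds and API are assumptions.

```python
import math
import time

class DwellSelector:
    """Fires a selection when gaze stays within a small radius
    of one point for a fixed dwell time (illustrative values)."""

    def __init__(self, dwell_time_s=0.5, radius_px=40):
        self.dwell_time_s = dwell_time_s
        self.radius_px = radius_px
        self._anchor = None   # (x, y) where the current dwell started
        self._start = None    # timestamp of the first sample in the dwell

    def update(self, x, y, now=None):
        """Feed one gaze sample; returns the dwell point when a
        selection fires, otherwise None."""
        now = time.monotonic() if now is None else now
        if self._anchor is None or math.dist(self._anchor, (x, y)) > self.radius_px:
            # Gaze moved away: restart the dwell at the new point.
            self._anchor, self._start = (x, y), now
            return None
        if now - self._start >= self.dwell_time_s:
            fired, self._anchor, self._start = self._anchor, None, None
            return fired  # caller maps this point to a UI target
        return None
```

Fed a gaze stream sample by sample, `update` returns at most one point per completed dwell; the tension the study measures is that shortening `dwell_time_s` speeds interaction but raises the rate of unintentional "Midas touch" selections.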
[Metadata only] Gaze interaction with virtual on-line communities: Levelling the playing field for disabled users. (Springer Link, 2010) Bates, R.; Vickers, Stephen; Istance, Howell

[Open Access] Gazing into a Second Life: Gaze-driven adventures, control barriers, and the need for disability privacy in an online virtual world. (2008) Vickers, Stephen; Istance, Howell; Bates, R.
Online virtual worlds such as Second Life and World of Warcraft offer users the chance to participate in potentially limitless virtual worlds, all via a standard desktop PC, mouse and keyboard. This paper addresses some of the interaction barriers and privacy concerns that people with disabilities may encounter when using these worlds, and introduces an avatar Turing test that should be passed for worlds to be accessible to all users. The paper then focuses on the needs of high-level motor-disabled users who may use gaze control as an input modality for computer interaction. A taxonomy and survey of interaction are introduced, and an experiment in gaze-based interaction is conducted within these virtual worlds. The results of the survey highlight the barriers where people with disabilities cannot interact as efficiently as able-bodied users. Finally, the paper discusses methods for enabling gaze-based interaction for high-level motor-disabled users and calls for game designers to consider disabled users when designing game interfaces.

[Open Access] An investigation into determining head pose for gaze estimation on unmodified mobile devices (ACM, 2014-03) Ackland, Stephen; Istance, Howell; Coupland, Simon; Vickers, Stephen
Traditionally, devices able to determine a user's gaze have been large, expensive and often restrictive. We investigate the prospect of using common webcams and mobile devices such as laptops, tablets and phones, without modification, as an alternative means of obtaining a user's gaze. A person's gaze is fundamentally determined by the pose of the head as well as the orientation of the eyes. This initial work investigates the first of these factors: an estimate of the 3D head pose (and subsequently the positions of the eye centres) relative to a camera device. Specifically, we seek a low-cost algorithm that requires only a one-time calibration for an individual user and can run in real time on the aforementioned mobile devices with noisy camera data. We use our head tracker to estimate the four eye corners of a user over a 10-second video. We present the results at several different frame rates (fps) to analyse the impact of lower-quality cameras on the tracker. We show that our algorithm is efficient enough to run at 75 fps on a common laptop, but struggles with tracking loss when the frame rate falls below 10 fps.
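For readers unfamiliar with the problem, a common generic approach to webcam head-pose estimation is to fit a small rigid 3D face model to detected 2D landmarks with a PnP solver; the sketch below uses OpenCV's solvePnP. This is not the paper's own algorithm, and the model coordinates and pinhole focal-length guess are illustrative assumptions.

```python
import cv2
import numpy as np

# Generic rigid 3D face model (mm, nose tip at the origin).
# Illustrative coordinates, not the paper's model.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),        # nose tip
    (0.0, -63.6, -12.5),    # chin
    (-43.3, 32.7, -26.0),   # left eye outer corner
    (43.3, 32.7, -26.0),    # right eye outer corner
    (-28.9, -28.9, -24.1),  # left mouth corner
    (28.9, -28.9, -24.1),   # right mouth corner
])

def head_pose(image_points, frame_w, frame_h):
    """Estimate head rotation/translation from six 2D landmarks
    (image_points: 6x2 array) detected in one webcam frame."""
    focal = frame_w  # rough pinhole guess for an uncalibrated webcam
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(
        MODEL_POINTS, np.asarray(image_points, dtype=np.float64),
        camera_matrix, None, flags=cv2.SOLVEPNP_ITERATIVE)
    return (rvec, tvec) if ok else None
```

Given the pose, the 3D eye-corner positions of the model can be projected back into the frame, which is the kind of per-frame output the paper evaluates at different frame rates.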
[Open Access] Keeping an eye on the game: Eye gaze interaction with massively multiplayer online games and virtual communities for motor impaired users. (2008) Vickers, Stephen; Istance, Howell; Hyrskykari, A.; Ali, N.; Bates, R.
Online virtual communities are becoming increasingly popular within both the able-bodied and disabled user communities. These games assume the use of keyboard and mouse as standard input devices, which in some cases is not appropriate for users with a disability. This paper explores gaze-based interaction methods and highlights the problems associated with gaze control of online virtual worlds. The paper then presents a novel 'Snap Clutch' software tool that addresses these problems and enables gaze control. The tool is tested with an experiment showing that effective gaze control is possible, although task times are longer. Errors caused by gaze control are identified and potential methods for reducing them are discussed. Finally, the paper demonstrates that gaze-driven locomotion can potentially achieve parity with mouse- and keyboard-driven locomotion, and shows that gaze is a viable modality for game-based locomotion for able-bodied and disabled users alike.

[Metadata only] Performing Locomotion Tasks in Immersive Computer Games with an Adapted Eye-Tracking Interface (ACM, 2013) Vickers, Stephen; Istance, Howell; Hyrskykari, Aulikki
Young people with severe physical disabilities may benefit greatly from participating in immersive computer games. In-game tasks can be fun, engaging, educational and socially interactive. But for those who are unable to use traditional methods of computer input, such as a mouse and keyboard, there is a barrier to interaction that must first be overcome. Eye-gaze interaction is one method of input that can potentially achieve the levels of interaction required for these games. Which gaze interaction technique is used depends upon the task being performed, the individual performing it, and the equipment available. To fully realize the impact of participation in these environments, techniques need to be adapted to the person's abilities. We describe an approach to designing and adapting a gaze interaction technique to support locomotion, a task central to immersive game playing. This is evaluated with a group of young people with cerebral palsy and muscular dystrophy. The results show that by adapting the interaction technique, participants are able to significantly improve their in-game character control.

[Metadata only] Snap clutch, a moded approach to solving the Midas touch problem. (ACM, 2008) Istance, Howell; Bates, R.; Hyrskykari, A.; Vickers, Stephen
This paper proposes a simple approach to an old problem, that of the 'Midas Touch'. It uses modes to enable different types of mouse behavior to be emulated with gaze, and gestures to switch between these modes. A lightweight gesture is also used to switch gaze control off when it is not needed, thereby removing a major cause of the problem. The ideas have been trialed in Second Life, which is characterized by a feature-rich set of interaction techniques and a 3D graphical world. The use of gaze with this type of virtual community is of great relevance to severely disabled people, as it can enable them to be in the community on a similar basis to able-bodied participants. The assumption here, though, is that this group will use gaze as a single modality and that dwell will be an important selection technique. The Midas Touch problem needs to be considered in the context of fast dwell-based interaction. The solution proposed here, Snap Clutch, is incorporated into the mouse emulator software. The user trials reported here show it to be a very promising way of dealing with some of the interaction problems that users of these complex interfaces face when using gaze by dwell.
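To make the moded idea concrete, here is a minimal sketch of gaze-driven mode switching in the spirit of Snap Clutch: a glance off a screen edge "snaps" the emulator into another mode, and one mode turns gaze control off entirely (the Midas-touch escape hatch). The edge-to-mode mapping, class names and margin are assumptions, not the published design.

```python
from enum import Enum

class Mode(Enum):
    DWELL_CLICK = "dwell click"   # dwell emulates a left click
    DRAG = "mouse drag"           # gaze drags the pointer
    LOOK = "look only"            # pointer follows gaze, no clicks
    OFF = "gaze control off"      # gaze ignored entirely

# Hypothetical mapping of off-screen glance direction to mode.
EDGE_TO_MODE = {"top": Mode.DWELL_CLICK, "left": Mode.DRAG,
                "right": Mode.LOOK, "bottom": Mode.OFF}

class ModedGazeEmulator:
    def __init__(self, screen_w, screen_h, margin_px=5):
        self.w, self.h, self.m = screen_w, screen_h, margin_px
        self.mode = Mode.DWELL_CLICK

    def edge(self, x, y):
        """Return which screen edge (if any) a gaze sample fell beyond."""
        if y < self.m: return "top"
        if y > self.h - self.m: return "bottom"
        if x < self.m: return "left"
        if x > self.w - self.m: return "right"
        return None

    def update(self, x, y):
        """Feed one gaze sample; an off-screen glance switches mode,
        otherwise the sample is handled by the current mode."""
        e = self.edge(x, y)
        if e is not None:
            self.mode = EDGE_TO_MODE[e]
            return
        if self.mode is Mode.OFF:
            return  # gaze control is clutched out; do nothing
        # ... dispatch (x, y) to the emulation logic for self.mode ...
```

The design point is that the switching gesture is deliberately cheap (a glance, not a dwell), so turning gaze control off and on costs the user almost nothing.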
[Metadata only] Soft computing head tracking interaction for telerobotic control. (IEEE, 2010) Vickers, Stephen; Coupland, Simon

[Metadata only] Towards dynamic accessibility through soft gaze gesture recognition (IEEE, 2012-09-05) Shell, Jethro; Vickers, Stephen; Istance, Howell; Coupland, Simon
It is difficult for some groups of users with physical disabilities to operate standard input devices such as a keyboard and mouse. Eye-gaze technologies, and more specifically gaze gestures, are emerging to assist such users. There is a high level of inter- and intra-user variation in the ability to perform gaze gestures, due to the high levels of noise in gaze patterns. In this paper we use a novel fuzzy transfer learning approach to construct a fuzzy system for gaze gesture recognition that can be automatically adapted for different users and/or user groups. We show that the fuzzy system is able to recognise gestures across groups of both able-bodied (AB) and disabled users using a base of AB data, surpassing an expert-constructed classifier.

[Open Access] User performance of gaze-based interaction with on-line virtual communities. (2008) Vickers, Stephen; Istance, Howell; Hyrskykari, A.; Ali, N.
We present the results of an investigation into gaze-based interaction techniques with on-line virtual communities. The purpose of this study was to gain a better understanding of user performance with a gaze interaction technique developed for interacting with 3D graphical on-line communities and games. The study involved 12 participants, each of whom carried out two equivalent sets of three tasks in a world created in Second Life. One set was carried out using a keystroke and mouse emulator driven by gaze, and the other set was carried out with the normal keyboard and mouse. The study demonstrates that subjects were easily able to perform a set of tasks with eye gaze after only a minimal amount of training. It has also identified the causes of user errors and the amount of performance improvement that could be expected if these errors can be designed out.

[Metadata only] The validity of using non-representative users in gaze communication research. (ACM, 2012-03) Istance, Howell; Vickers, Stephen; Hyrskykari, Aulikki
Gaze-based interaction techniques have been investigated for the last two decades, and in many cases their evaluation has been based on trials with able-bodied users and conventional usability criteria, mainly speed and accuracy. The target user group of many of the gaze-based techniques investigated is, however, people with different types of physical disabilities. We present the outcomes of two studies that compare the performance of two groups of participants with a physical disability (one being cerebral palsy and the other muscular dystrophy) with that of a control group of able-bodied participants doing a task using a particular gaze interaction technique. One study used a task based on dwell-time selection, and the other used a task based on gaze gestures. In both studies, the groups of participants with physical disabilities performed significantly worse than the able-bodied control participants. We question the ecological validity of research into gaze interaction intended for people with physical disabilities that uses only able-bodied participants in evaluation studies, without any testing with members of the target user population.

[Open Access] What were we all looking at? Identifying objects of collective visual attention (Taylor & Francis, 2015) Ma, Zhong; Vickers, Stephen; Istance, Howell; Ackland, Stephen; Zhao, Xinbo; Wang, Wenhu
We aim to identify the salient objects in an image by applying a model of visual attention. We automate the process by predicting those objects in an image that are most likely to be the focus of someone's visual attention. Concretely, we first generate fixation maps from the eye-tracking data, which express the ground truth of people's visual attention for each training image. Then, we extract high-level features based on the bag-of-visual-words image representation as input attributes, along with the fixation maps, to train a support vector regression model. With this model, we can predict a new query image's saliency. Our experiments show that the model is capable of providing a good estimate of human visual attention in test image sets with one salient object and with multiple salient objects. In this way, we seek to reduce the redundant information within the scene, and thus provide a more accurate depiction of the scene.
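The training pipeline described in the last abstract, bag-of-visual-words features regressed against fixation-map ground truth, can be sketched roughly as below. This is a hedged reconstruction of the general technique, not the authors' code: the vocabulary size, saliency targets and all variable names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

# Hypothetical inputs: descriptors[i] is an (n_i x d) array of local
# feature vectors (e.g. SIFT) for training image i, and saliency[i]
# is a scalar score derived from that image's fixation map.

def bovw_histograms(descriptors, n_words=200):
    """Quantize local descriptors against a learned visual vocabulary
    and return one normalized word histogram per image."""
    vocab = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(descriptors))
    hists = []
    for d in descriptors:
        words = vocab.predict(d)                      # nearest visual word
        h = np.bincount(words, minlength=n_words).astype(float)
        hists.append(h / h.sum())                     # assumes d is non-empty
    return np.array(hists), vocab

def train_saliency_model(descriptors, saliency):
    """Fit a support vector regressor mapping BoVW histograms to
    fixation-derived saliency scores."""
    X, vocab = bovw_histograms(descriptors)
    model = SVR(kernel="rbf").fit(X, saliency)
    return model, vocab
```

At query time, a new image's descriptors are quantized with the same `vocab` and passed to `model.predict`, mirroring the paper's step of predicting saliency for unseen images.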