Browsing by Author "Istance, Howell"
Now showing 1 - 20 of 25
[Metadata only] Agents negotiation & communication within a real time cooperative multiagent system (2004)
Al-Hudhud, Ghada; Ayesh, Aladdin; Istance, Howell; Turner, Martin J.

[Metadata only] Attentive interfaces for users with disabilities: eye gaze for intention and uncertainty estimation (2009)
Prendinger, Helmut; Hyrskykari, Aulikki; Nakayama, Minoru; Istance, Howell; Bee, Nikolaus; Takahasi, Yosiyuki
Attentive user interfaces (AUIs) capitalize on the rich information that can be obtained from users' gaze behavior in order to infer relevant aspects of their cognitive state. Eye gaze is an excellent cue not only to interest and intention, but also to preference and confidence in comprehension. AUIs are built with the aim of adapting the interface to the user's current information need, and thus reducing the workload of interaction. Given these characteristics, AUIs are believed to offer particular benefits for users with severe disabilities, for whom operating a physical device (such as a mouse) may be very strenuous or infeasible. This paper presents three studies that attempt to gauge uncertainty and intention on the part of the user from gaze data, and compares the success of each approach. The paper discusses how applying the approaches adopted in each study to user interfaces can support users with severe disabilities.

[Open Access] Designing a Gamified System to Promote Health (UCL, 2015-02-23)
Kucharczyk, E.; Scase, M. O.; Istance, Howell
Although gamified health interventions have the potential to enhance the quality of life of older users, there are significant design issues that need to be considered when designing games and gamified systems for an older target market.

[Metadata only] Designing gaze gestures for gaming: An investigation of performance (ACM, 2010)
Istance, Howell; Hyrskykari, A.; Immonen, L.; Mansikkamaa, S.; Vickers, Stephen

[Metadata only] EyeGuitar: Making rhythm based music video games accessible using only eye movements (ACM, 2011)
Vickers, Stephen; Istance, Howell; Smalley, M.

[Metadata only] For Your Eyes Only: Controlling 3D Online Games by Eye-Gaze (Springer, 2009)
Istance, Howell; Hyrskykari, A.; Vickers, Stephen; Chaves, T.
Massively multiplayer online role-playing games, such as World of Warcraft, have become the most widespread 3D graphical environments, with millions of active subscribers worldwide. People with severe motor impairments should be able to take part in these games without the extent of their disability being apparent to others online. Eye gaze is a high-bandwidth modality that can support this. We have developed a software device that uses gaze input in different modes to emulate the mouse and keyboard events appropriate for interacting with online games. We report an evaluation study that investigated gaze-based interaction with World of Warcraft using the device. We found that it is feasible to carry out tasks representative of game play at a beginner's skill level using gaze alone. The results from the locomotion task part of the study show similar performance for gaze-based interaction compared with a keyboard and mouse.
We discuss the usability issues that arose when completing three types of task in the game, and the implications of these for playing this type of game using gaze as the only input modality.

[Metadata only] Gaze gestures or dwell based interaction? (ACM, 2012-03)
Hyrskykari, Aulikki; Istance, Howell; Vickers, Stephen
The two cardinal problems recognized with gaze-based interaction techniques are how to avoid unintentional commands and how to overcome the limited accuracy of eye tracking. Gaze gestures are a relatively new technique for giving commands that has the potential to overcome both problems. We present a study that compares gaze gestures with dwell selection as an interaction technique. The study involved 12 participants and was performed in the context of using an actual application: participants gave commands to a 3D immersive game using gaze gestures and dwell icons. We found that gaze gestures are not only a feasible means of issuing commands in the course of game play, but also exhibited performance at least as good as dwell selections. The gesture condition produced fewer than half as many errors as the dwell condition. The study shows that gestures provide a robust alternative to dwell-based interaction, with the reliance on positional accuracy substantially reduced.

[Metadata only] Gaze interaction with virtual on-line communities: Levelling the playing field for disabled users (Springer, 2010)
Bates, R.; Vickers, Stephen; Istance, Howell

[Metadata only] Gaze-Aware Systems and Attentive Applications (IGI Global, 2011)
Istance, Howell; Hyrskykari, A.
In this chapter, we examine systems that use the current focus of a person's visual attention to make the system easier to use, less effortful and, hopefully, more efficient.
If the system can work out which object the person is interested in, or is likely to interact with next, then the need for the person to deliberately point at, or otherwise identify, that object can be removed. This approach can be applied to interaction with real-world objects and people as well as to objects presented on a display close to the system user. We examine what we can infer about a person's focus of visual attention, and their intention to act, from studying their eye movements, and what, if anything, the system should do about it. A detailed example of an attentive system is presented, in which the system estimates the difficulty a reader has understanding individual words when reading in a foreign language and displays a translation automatically if it judges one is needed.

[Open Access] Gazing into a Second Life: Gaze-driven adventures, control barriers, and the need for disability privacy in an online virtual world (2008)
Vickers, Stephen; Istance, Howell; Bates, R.
Online virtual worlds such as Second Life and World of Warcraft offer users the chance to participate in potentially limitless virtual worlds, all via a standard desktop PC, mouse and keyboard. This paper addresses some of the interaction barriers and privacy concerns that people with disabilities may encounter when using these worlds, and introduces an avatar Turing test that should be passed for worlds to be accessible to all users. The paper then focuses on the needs of users with high-level motor disabilities who may use gaze control as an input modality for computer interaction. A taxonomy and survey of interaction are introduced, and an experiment in gaze-based interaction is conducted within these virtual worlds. The results of the survey highlight the barriers that prevent people with disabilities from interacting as efficiently as able-bodied users.
Finally, the paper discusses methods for enabling gaze-based interaction for users with high-level motor disabilities, and calls for game designers to consider disabled users when designing game interfaces.

[Metadata only] Human attention-based regions of interest extraction using computational intelligence (IEEE, 2015)
Al-Azawi, M.; Yang, Yingjie; Istance, Howell
Machine vision remains a challenging topic that continues to attract research. Efforts have been made to design machine vision systems (MVS) inspired by the human vision system (HVS). Attention is one of the important properties of the HVS: it allows a human to focus on only part of a scene at a time, with regions containing more abrupt features attracting attention more than others. This property improves the speed with which the HVS recognizes and identifies the contents of a scene. In this paper we discuss human attention and its application in MVS. In addition, a new method of extracting regions of interest, and hence interesting objects, from images is presented. The method uses neural networks as classifiers to separate important from unimportant regions.

[Open Access] An investigation into determining head pose for gaze estimation on unmodified mobile devices (ACM, 2014-03)
Ackland, Stephen; Istance, Howell; Coupland, Simon; Vickers, Stephen
Traditionally, devices that can determine a user's gaze are large, expensive and often restrictive. We investigate the prospect of using common webcams and unmodified mobile devices such as laptops, tablets and phones as an alternative means of obtaining a user's gaze. A person's gaze is fundamentally determined by the pose of the head as well as the orientation of the eyes. This initial work investigates the first of these factors: an estimate of the 3D head pose (and subsequently the positions of the eye centres) relative to a camera device.
Specifically, we seek a low-cost algorithm that requires only a one-time calibration for an individual user and can run in real time on the aforementioned mobile devices with noisy camera data. We use our head tracker to estimate the four eye corners of a user over a 10-second video. We present results at several different frame rates (fps) to analyse the impact of lower-quality cameras on the tracker. We show that our algorithm is efficient enough to run at 75 fps on a common laptop, but struggles with tracking loss when the frame rate drops below 10 fps.

[Open Access] Irregularity-based image regions saliency identification and evaluation (2016)
Al-Azawi, M.; Yang, Yingjie; Istance, Howell
Extracting salient regions from images remains challenging, since it requires some understanding of the image and its nature. A technique suitable for one application is not necessarily useful in another; saliency enhancement is therefore application-oriented. In this paper, a new technique for extracting the salient regions of an image is proposed, which utilizes the local features of the region surrounding each pixel. The level of saliency is then decided by a global comparison of the saliency-enhanced image. To make the process fully automatic, a new fuzzy-based thresholding technique is also proposed.
The paper also contains a survey of state-of-the-art saliency evaluation methods and proposes a new saliency evaluation technique.

[Metadata only] Irregularity-based saliency identification and evaluation (IEEE, 2013)
Al-Azawi, M.; Yang, Yingjie; Istance, Howell

[Open Access] Keeping an eye on the game: Eye gaze interaction with massively multiplayer online games and virtual communities for motor impaired users (2008)
Vickers, Stephen; Istance, Howell; Hyrskykari, A.; Ali, N.; Bates, R.
Online virtual communities are becoming increasingly popular within both the able-bodied and disabled user communities. These games assume the use of a keyboard and mouse as standard input devices, which in some cases is not appropriate for users with a disability. This paper explores gaze-based interaction methods and highlights the problems associated with gaze control of online virtual worlds. The paper then presents a novel 'Snap Clutch' software tool that addresses these problems and enables gaze control. The tool is tested in an experiment showing that effective gaze control is possible, although task times are longer. Errors caused by gaze control are identified, and potential methods for reducing them are discussed. Finally, the paper demonstrates that gaze-driven locomotion can potentially achieve parity with mouse- and keyboard-driven locomotion, and shows that gaze is a viable modality for game-based locomotion for able-bodied and disabled users alike.

[Metadata only] A new gaze points agglomerative clustering algorithm and its application in regions of interest extraction (IEEE, 2014-02-21)
Al-Azawi, M.; Yang, Yingjie; Istance, Howell
In computer vision applications it is necessary to extract regions of interest in order to reduce the search space and improve image content identification. Human-oriented regions of interest can be extracted by collecting feedback from the user.
This feedback is usually provided by the user assigning different ranks to the identified regions in the image; the ranks are then used to adapt the identification process. Eye-tracking technology is now widely used in many applications, and one suggested application is to use the data collected from an eye tracker, representing the user's gaze points, to extract regions of interest. In this paper we introduce a new agglomerative clustering algorithm that uses a blob extraction technique and statistical measures to cluster the gaze points obtained from the eye tracker. The algorithm is fully automatic: it needs no human intervention to specify the stopping criterion. In the proposed algorithm, points are replaced with small regions (blobs), and these blobs are then grouped together to form a cloud, from which the interesting regions are constructed.

[Metadata only] Performing Locomotion Tasks in Immersive Computer Games with an Adapted Eye-Tracking Interface (ACM, 2013)
Vickers, Stephen; Istance, Howell; Hyrskykari, Aulikki
Young people with severe physical disabilities may benefit greatly from participating in immersive computer games. In-game tasks can be fun, engaging, educational, and socially interactive. But for those who are unable to use traditional methods of computer input such as a mouse and keyboard, there is a barrier to interaction that they must first overcome. Eye-gaze interaction is one method of input that can potentially achieve the levels of interaction required for these games. How we use eye gaze, or the gaze interaction technique, depends upon the task being performed, the individual performing it, and the equipment available. To fully realize the impact of participation in these environments, techniques need to be adapted to the person's abilities. We describe an approach to designing and adapting a gaze interaction technique to support locomotion, a task central to immersive game playing.
The technique was evaluated by a group of young people with cerebral palsy and muscular dystrophy. The results show that by adapting the interaction technique, participants were able to significantly improve their in-game character control.

[Embargo] Real-Time 3D Head Pose Tracking Through 2.5D Constrained Local Models with Local Neural Fields (Springer, 2019-03-04)
Ackland, Stephen; Chiclana, Francisco; Istance, Howell; Coupland, Simon
Tracking the head in a video stream is a common thread within the computer vision literature, supplying the research community with a large number of challenging and interesting problems. Head pose estimation from monocular cameras is often treated as an extended application after the face tracking task has already been performed, typically by passing the resultant 2D data through a simpler algorithm that best fits the data to a static 3D model to determine the 3D pose estimate. This work describes the 2.5D Constrained Local Model, which combines a deformable 3D shape point model with 2D texture information to provide direct estimation of the pose parameters, avoiding the need for additional optimization strategies. It achieves this through an analytical derivation of a Jacobian matrix describing how changes in the model's parameters create changes in the shape within the image under a full-perspective camera model. In addition, the model has very low computational complexity and can run in real time on modern mobile devices such as tablets and laptops. The Point Distribution Model of the face is built in a unique way, so as to minimize the effect of changes in facial expression on the estimated head pose and hence make the solution more robust.
Finally, the texture information is trained via Local Neural Fields (LNFs), a deep learning approach that utilizes small discriminative patches to exploit spatial relationships between the pixels and provide strong peaks at the optimal locations.

[Metadata only] Simulation and visualization of a scalable real time multiple robot system (2005)
Al-Hudhud, Ghada; Ayesh, Aladdin; Turner, Martin J.; Istance, Howell

[Metadata only] Snap clutch, a moded approach to solving the Midas touch problem (ACM, 2008)
Istance, Howell; Bates, R.; Hyrskykari, A.; Vickers, Stephen
This paper proposes a simple approach to an old problem, that of the 'Midas Touch'. It uses modes to enable different types of mouse behavior to be emulated with gaze, and gestures to switch between these modes. A lightweight gesture is also used to switch gaze control off when it is not needed, thereby removing a major cause of the problem. The ideas have been trialled in Second Life, which is characterized by a feature-rich set of interaction techniques and a 3D graphical world. The use of gaze with this type of virtual community is of great relevance to severely disabled people, as it can enable them to participate in the community on a similar basis to able-bodied participants. The assumption here, though, is that this group will use gaze as a single modality and that dwell will be an important selection technique, so the Midas Touch problem needs to be considered in the context of fast dwell-based interaction. The solution proposed here, Snap Clutch, is incorporated into the mouse emulator software. The user trials reported here show it to be a very promising way of dealing with some of the interaction problems that users of these complex interfaces face when using gaze by dwell.
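The dwell-with-modes idea running through several of these items can be sketched in a few lines. The following is a minimal illustrative sketch, not the actual Snap Clutch implementation; the class and parameter names (`DwellSelector`, `dwell_ms`, `radius_px`) are invented for the example. A selection fires only after gaze stays within a small radius for a dwell period, and a "clutch" toggle switches gaze control off entirely, which is the basic defence against unintentional (Midas Touch) selections.

```python
import math

class DwellSelector:
    """Toy dwell-based selector with an on/off 'clutch' mode.

    While the clutch is disengaged, gaze samples are ignored entirely,
    so simply looking around cannot trigger accidental selections.
    """

    def __init__(self, dwell_ms=500, radius_px=40):
        self.dwell_ms = dwell_ms    # time gaze must remain on one spot
        self.radius_px = radius_px  # tolerance for eye-tracker jitter
        self.engaged = True         # clutch state: gaze control on/off
        self._anchor = None         # centre of the current fixation
        self._elapsed = 0.0         # time accumulated on that fixation

    def toggle_clutch(self):
        """Switch gaze control on/off and discard any partial dwell."""
        self.engaged = not self.engaged
        self._anchor, self._elapsed = None, 0.0

    def feed(self, x, y, dt_ms):
        """Feed one gaze sample; return (x, y) of a selection, or None."""
        if not self.engaged:
            return None
        if self._anchor and math.dist(self._anchor, (x, y)) <= self.radius_px:
            self._elapsed += dt_ms
            if self._elapsed >= self.dwell_ms:
                self._anchor, self._elapsed = None, 0.0  # reset after firing
                return (x, y)
        else:
            self._anchor, self._elapsed = (x, y), 0.0    # start a new fixation
        return None
```

With a 100 ms dwell and 20 ms samples, a steady gaze fires a selection after a handful of samples; after `toggle_clutch()`, the same gaze stream produces nothing, which is the moded behaviour the Snap Clutch work argues for.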