Browsing by Author "Rafferty, Joseph"
Now showing 1 - 12 of 12

Item Embargo
An approach to provide dynamic, illustrative, video-based guidance within a goal-driven smart home (Springer, 2016-10-27)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
The global population is aging in a never-before-seen way, introducing an increasing prevalence of ageing-related cognitive ailments, such as dementia. This aging is coupled with a reduction in the global support ratio, reducing the availability of formal and informal support and therefore the capacity to care for those suffering from these ageing-related ailments. Assistive Smart Homes (SH) are a promising form of technology enabling assistance with activities of daily living, providing support for sufferers of cognitive ailments and increasing their independence and quality of life. Traditional SH systems have deficiencies that have been partially addressed through goal-driven SH systems, which incorporate flexible activity models, termed goals. Goals may be combined to provide assistance with dynamic and variable activities. This paradigm shift, however, introduces the need to provide dynamic assistance within such SHs. This study presents a novel approach to achieve this through video-based content analysis and a mechanism to facilitate matching analysed videos to dynamic activities/goals. The mechanism behind this approach is detailed, followed by the presentation of an evaluation which showed promising results.

Item Open Access
Automatic Metadata Generation Through Analysis of Narration Within Instructional Videos (Springer US, 2015-08-08)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
Current activity recognition based assistive living solutions have adopted relatively rigid models of inhabitant activities. These solutions have a number of deficiencies associated with the use of such models. To address this, a goal-oriented solution has been proposed, in which goal models offer a method of flexibly modelling inhabitant activity. The flexibility of these goal models can dynamically produce a large number of varying action plans that may be used to guide inhabitants. In order to provide illustrative, video-based instruction for these numerous action plans, a number of video clips would need to be associated with each variation. To address this, rich metadata may be used to automatically match appropriate video clips from a video repository to each specific, dynamically generated activity plan. This study introduces a mechanism for automatically generating suitable rich metadata representing the actions depicted within video clips to facilitate such video matching. The performance of this mechanism was evaluated using eighteen video files; during this evaluation, metadata was automatically generated with a high level of accuracy.

Item Embargo
Automatic Summarization of Activities Depicted in Instructional Videos by Use of Speech Analysis (Springer, 2014-12)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
Existing activity recognition based assistive living solutions have adopted a relatively rigid approach to modelling activities. To address the deficiencies of such approaches, a goal-oriented solution has been proposed that offers a method of flexibly modelling activities. This approach does, however, have a disadvantage in that the performance of goals may vary, hence requiring differing video clips to be associated with these variations. In order to address this shortcoming, the use of rich metadata to facilitate automatic sequencing and matching of appropriate video clips is necessary. This paper introduces a mechanism for automatically generating rich metadata which details the actions depicted in video files to facilitate matching and sequencing. This mechanism was evaluated with 14 video files, producing annotations with a high degree of accuracy.
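
Editor's note: the two items above describe deriving action metadata from a video's narration. Purely as an illustration of the general idea, and not of the papers' actual mechanism, the following minimal Python sketch extracts action keywords from a narration transcript, assuming speech-to-text has already been performed; the verb list, function names and output format are hypothetical.

    # Illustrative sketch only: derive simple action metadata from a narration transcript.
    # Assumes transcription has already been done; names and formats are hypothetical.
    import re

    ACTION_VERBS = {"fill", "pour", "boil", "stir", "open", "close", "place", "add", "switch"}

    def extract_action_metadata(transcript: str) -> list[dict]:
        """Return one metadata record per sentence that mentions a known action verb."""
        records = []
        for index, sentence in enumerate(re.split(r"[.!?]+", transcript)):
            words = sentence.lower().split()
            verbs = [w for w in words if w in ACTION_VERBS]
            if verbs:
                records.append({"segment": index, "actions": verbs, "text": sentence.strip()})
        return records

    if __name__ == "__main__":
        narration = "Fill the kettle with water. Switch the kettle on. Pour the water into the cup."
        for record in extract_action_metadata(narration):
            print(record)

Such records could then be indexed so that a clip is retrievable by the actions it demonstrates, which is the role rich metadata plays in the matching described above.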

Item Open Access
From Activity Recognition to Intention Recognition for Assisted Living Within Smart Homes (IEEE Transactions on Human-Machine Systems, 2017-01-05)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
The global population is aging; projections show that by 2050, more than 20% of the population will be aged over 64. This will lead to an increase in ageing-related illness, a decrease in informal support, and ultimately issues with providing care for these individuals. Assistive smart homes provide a promising solution to some of these issues. Nevertheless, they currently have issues hindering their adoption. To help address some of these issues, this study introduces a novel approach to implementing assistive smart homes. The devised approach is based upon an intention recognition mechanism incorporated into an intelligent agent architecture. This approach is detailed and evaluated. Evaluation was performed across three scenarios. Scenario 1 involved a web interface, focusing on testing the intention recognition mechanism. Scenarios 2 and 3 involved retrofitting a home with sensors and providing assistance with activities over a period of 3 months. The average accuracy for these three scenarios was 100%, 64.4%, and 83.3%, respectively. Future work will extend and further evaluate this approach by implementing advanced sensor-filtering rules and evaluating more complex activities.

Item Open Access
Goal Lifecycles and Ontological Models for Intention Based Assistive Living within Smart Environments (Computer Systems Science and Engineering, 2015-01)
Rafferty, Joseph; Chen, Liming; Nugent, Chris; Liu, Jun
Current ambient assistive living solutions have adopted a traditional sensor-centric approach, involving data analysis and activity recognition to provide assistance to individuals. The reliance on sensors and activity recognition in this approach introduces issues with scalability and the ability to model activity variations. This study introduces a novel approach to assistive living which intends to address these issues via a paradigm shift from a sensor-centric approach to a goal-oriented one. The goal-oriented approach focuses on identification of user goals in order to pro-actively offer assistance by either pre-defined or dynamically constructed instructions. This paper introduces the architecture of this goal-oriented approach and describes an ontological goal model to serve as its basis. The use of this approach is illustrated in a case study which focuses on assisting a user with activities of daily living.

Item Metadata only
A Goal-driven, assistive agent for instructing and guiding user activities (IOS Press BV, 2015-11)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
Current ambient assistive living solutions have adopted a traditional sensor-centric approach, involving data analysis and activity recognition to provide assistance to individuals. The reliance on sensors and activity recognition in this approach introduces a number of issues. This study introduces a novel approach to assistive living which intends to address these issues via a paradigm shift from a sensor-centric approach to a goal-oriented one. The goal-oriented approach focuses on identification of user goals in order to pro-actively offer assistance by either pre-defined or dynamically constructed video-based instruction. This extended abstract introduces the architecture of this goal-oriented approach, covers the novel developments required to realize it and discusses the current state of the research.
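
Editor's note: the intention/goal recognition items above describe identifying which goal a user is pursuing so that assistance can be offered pro-actively. The Python sketch below illustrates the general idea only, not the papers' implementation: the goal definitions, action names and scoring rule are all hypothetical.

    # Illustrative sketch only: naive goal/intention matching from observed actions.
    # Goal plans, action names and the scoring rule below are hypothetical.

    GOAL_PLANS = {
        "make_tea": ["fill_kettle", "boil_water", "add_teabag", "pour_water"],
        "make_toast": ["take_bread", "load_toaster", "start_toaster", "butter_toast"],
    }

    def recognise_goal(observed_actions: list[str]) -> tuple[str, float]:
        """Return the goal whose action plan overlaps most with the observed actions."""
        best_goal, best_score = "unknown", 0.0
        for goal, plan in GOAL_PLANS.items():
            score = len(set(observed_actions) & set(plan)) / len(plan)
            if score > best_score:
                best_goal, best_score = goal, score
        return best_goal, best_score

    if __name__ == "__main__":
        print(recognise_goal(["fill_kettle", "boil_water"]))  # ('make_tea', 0.5)

In a real goal-driven system the recognised goal would then drive the selection of pre-defined or dynamically constructed instructions, as the abstracts describe.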

Item Embargo
Learning Behaviour for Service Personalisation and Adaptation (Springer, 2014-12-05)
Chen, Liming; Skillen, Kerry-Louise; Burns, William; Quinn, Susan; Rafferty, Joseph; Nugent, Chris; Donnelly, Mark P.; Solheim, Ivar
Context-aware applications within pervasive environments are increasingly being developed as services and deployed in the cloud. As such, these services are increasingly required to adapt to individual users to meet their specific needs or to reflect changes in their behaviour. To address this emerging challenge, this paper introduces a service-oriented framework for service personalisation, with special emphasis placed on behaviour learning for user model and service function adaptation. The paper describes the system architecture and the underlying methods and technologies, including modelling and reasoning, behaviour analysis and a personalisation mechanism. The approach has been implemented in a service-oriented prototype system and evaluated in a typical scenario of providing personalised travel assistance for the elderly, using help-on-demand services deployed on a smartphone.

Item Metadata only
A mechanism for nominating video clips to provide assistance for instrumental activities of daily living (Springer, 2015-11)
Rafferty, Joseph; Nugent, Chris; Liu, J.; Chen, Liming
Current assistive smart homes have adopted a relatively rigid approach to modelling activities. The use of these activity models has introduced factors which block adoption of smart home technology. To address this, goal-driven smart homes have been proposed; these are based upon more flexible activity structures. However, this goal-driven approach has a disadvantage in that the flexibility of its activity modelling can make it difficult to provide illustrative guidance. To address this, a video analysis and nomination mechanism is required to provide suitable assistive clips for a given goal. This paper introduces a novel mechanism for nominating a suitable video clip given a pool of automatically generated metadata. This mechanism was evaluated using a voice-based assistant application and a tool emulating assistance requests by a goal-driven smart home. The initial evaluation produced promising results.

Item Open Access
NFC based provisioning of instructional videos to assist with instrumental activities of daily living (IEEE, 2014-11-06)
Rafferty, Joseph; Nugent, Chris; Chen, Liming; Qi, Jun; Dutton, Rachael; Zirk, Anna; Boye, Lars Thomas; Kohn, Michael; Hellman, Ritta
Existing assistive living and prompting based solutions have adopted a relatively complex approach to supporting individuals. These solutions have involved sensor-based monitoring, activity recognition and assistance provisioning. Traditionally, they have suffered from a number of issues rooted in scalability and the performance levels associated with the activity recognition process. This paper introduces a simple approach to assistive living within a user's residence through the use of NFC tags and smart devices. The core concept of this approach is presented and subsequently placed within the context of related work. A description of the architecture is provided, and results following technical evaluation of the first system prototype are discussed.
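
Editor's note: the NFC item above describes triggering instructional videos when a resident scans a tag placed on an object in the home. The Python sketch below is a minimal illustration of that core concept only, not of the paper's system: the tag-to-video table, the read_tag_id() stub and the URLs are hypothetical placeholders.

    # Illustrative sketch only: map a scanned NFC tag identifier to an instructional video.
    # Tag IDs, URLs and the stubbed reader below are hypothetical placeholders.

    TAG_TO_VIDEO = {
        "04:A2:24:12:B3:80:01": "https://example.org/videos/use-washing-machine.mp4",
        "04:9F:31:0A:C2:80:02": "https://example.org/videos/make-a-hot-drink.mp4",
    }

    def read_tag_id() -> str:
        """Stand-in for a real NFC reader; a deployment would use a device-specific API."""
        return "04:A2:24:12:B3:80:01"

    def video_for_tag(tag_id: str) -> str | None:
        """Look up the instructional video associated with a scanned tag, if any."""
        return TAG_TO_VIDEO.get(tag_id)

    if __name__ == "__main__":
        url = video_for_tag(read_tag_id())
        print(url or "No video associated with this tag.")

Because the tag itself identifies the object and hence the activity, no sensor-based activity recognition is needed, which is the simplicity the abstract highlights.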

Item Embargo
Ontological Goal Modelling for Proactive Assistive Living in Smart Environments (Springer, 2013-12)
Rafferty, Joseph; Chen, Liming; Nugent, Chris
Existing assistive living solutions have traditionally adopted a bottom-up approach, involving sensor-based monitoring and data analysis through to activity recognition and assistance provisioning. This approach, however, suffers from applicability and scalability issues associated with sensor density and variations in how users perform activities. In an effort to alleviate these challenges, the current study proposes a goal-oriented, top-down approach to assistive living which offers a paradigm shift from a sensor-centric view to a goal-oriented view. The basic concept of the approach is that if a user's goal can be identified, then assistance can be provided proactively through pre-defined or dynamically constructed activity-related instructions. The paper first introduces the system architecture for the proposed approach. It then describes an ontological goal model to serve as the basis for such an approach. The utility of the approach is illustrated in a use scenario focused on assisting a user with their activities of daily living.

Item Embargo
Ontology-based Activity Recognition Framework and Services (ACM, 2013-12-02)
Chen, Liming; Nugent, Chris; Rafferty, Joseph
This paper introduces an ontology-based integrated framework for activity modeling, activity recognition and activity model evolution. Central to the framework are ontological activity modeling and semantic-based activity recognition, supported by an iterative process that incrementally improves the completeness and accuracy of activity models. In addition, the paper presents a service-oriented architecture for the realization of the proposed framework, which can provide activity context-aware services in a scalable, distributed manner. The paper further describes and discusses the implementation and testing experience of the framework and services in the context of smart home based assistive living.

Item Metadata only
Special issue on Sensing, Data Analysis and Platforms for Ubiquitous Intelligence (MDPI, 2018)
Chen, Liming; Chen, G.; Yu, H.; Rafferty, Joseph
The 14th IEEE International Conference on Ubiquitous Intelligence and Computing (UIC 2017), held in San Francisco, USA, 4–8 August 2017, provided opportunities for researchers and practitioners to share and disseminate research results related to the topics of sensors. Ubiquitous sensors, devices, networks and information are paving the way towards a smart world, in which computational intelligence is distributed throughout the physical environment to provide reliable and relevant services to people. This ubiquitous intelligence will change the computing landscape because it will enable new breeds of applications and systems to be developed, significantly extending the realm of computing possibilities. By enhancing everyday objects with sensing and intelligence, many tasks and processes could be simplified, and the physical spaces where people interact, such as workplaces, homes or cities, could become more efficient, safer and more enjoyable. This Special Issue will select top-quality papers from IEEE UIC 2017, covering fundamental sensing, smart objects, devices, human-object interactions, data analysis, and their applications for intelligent environments, smart systems, services, and personalisation and adaptation. Authors of selected papers are invited to significantly consolidate and improve their highly recommended papers with substantial new content for this Special Issue, which will inform and stimulate the research communities. Potential topics include, but are not limited to: AutoID technologies such as RFID/iBeacon; embedded chips, sensors, and actuators; wearable devices and embodied interaction; smart objects and interactions; smart human-machine/robot interaction; smart systems and services; human activity recognition; adaptive, autonomic and context-aware systems; big data in ubiquitous systems; smart environments and applications (intelligent traffic and transportation, smart healthcare and active assisted living, smart education and learning); virtual personal assistants and cognitive experts; and socially intelligent robots and applications.
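
Editor's note: the ontological goal modelling and ontology-based activity recognition items above describe representing activities (or goals) as structured descriptions and recognising them by semantically matching observed sensor events against those descriptions. The Python sketch below is a rough illustration of that idea only, using a toy description format rather than an actual OWL ontology; the activity descriptions and sensor events are hypothetical.

    # Illustrative sketch only: toy semantic matching of sensor events to activity descriptions.
    # Activity descriptions and observed events are hypothetical, not from the papers.

    ACTIVITY_DESCRIPTIONS = {
        "MakeTea": {"requires": {"kettle", "cup", "teabag"}},
        "WatchTV": {"requires": {"television", "remote_control", "sofa"}},
    }

    def match_activity(observed_objects: set[str]) -> list[str]:
        """Return activities whose required objects are all present in the observed events."""
        return [
            name
            for name, description in ACTIVITY_DESCRIPTIONS.items()
            if description["requires"] <= observed_objects
        ]

    if __name__ == "__main__":
        events = {"kettle", "cup", "teabag", "fridge"}
        print(match_activity(events))  # ['MakeTea']

In the ontology-based framework described above, such descriptions would be expressed as ontological concepts and refined iteratively as new observations accumulate.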