Browsing by Author "Liu, Jun"
Now showing 1 - 8 of 8
Item Embargo
An approach to provide dynamic, illustrative, video-based guidance within a goal-driven smart home (Springer, 2016-10-27)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
The global population is aging in a never-before seen way, introducing an increase in aging-related cognitive ailments, such as dementia. This aging is coupled with a reduction in the global support ratio, reducing the availability of formal and informal support and therefore the capacity to care for those suffering from these aging-related ailments. Assistive Smart Homes (SH) are a promising form of technology enabling assistance with activities of daily living, providing support for sufferers of cognitive ailments and increasing their independence and quality of life. Traditional SH systems have deficiencies that have been partially addressed through goal-driven SH systems, which incorporate flexible activity models, called goals. Goals may be combined to provide assistance with dynamic and variable activities. This paradigm shift, however, introduces the need to provide dynamic assistance within such SHs. This study presents a novel approach to achieve this through video-based content analysis and a mechanism for matching analysed videos to dynamic activities/goals. The mechanism behind this approach is detailed, followed by the presentation of an evaluation that showed promising results.

Item Open Access
Automatic Metadata Generation Through Analysis of Narration Within Instructional Videos (Springer, 2015-08-08)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
Current activity recognition based assistive living solutions have adopted relatively rigid models of inhabitant activities. These solutions have some deficiencies associated with the use of these models. To address this, a goal-oriented solution has been proposed.
In a goal-oriented solution, goal models offer a method of flexibly modelling inhabitant activity. The flexibility of these goal models means they can dynamically produce a large number of varying action plans that may be used to guide inhabitants. In order to provide illustrative, video-based instruction for these numerous action plans, a number of video clips would need to be associated with each variation. To address this, rich metadata may be used to automatically match appropriate video clips from a video repository to each specific, dynamically generated activity plan. This study introduces a mechanism for automatically generating suitable rich metadata representing the actions depicted within video clips to facilitate such video matching. The performance of this mechanism was evaluated using eighteen video files; during this evaluation, metadata was automatically generated with a high level of accuracy.

Item Embargo
Automatic Summarization of Activities Depicted in Instructional Videos by Use of Speech Analysis (Springer, 2014-12)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
Existing activity recognition based assistive living solutions have adopted a relatively rigid approach to modelling activities. To address the deficiencies of such approaches, a goal-oriented solution has been proposed that offers a method of flexibly modelling activities. This approach does, however, have a disadvantage in that the performance of goals may vary, requiring differing video clips to be associated with these variations. To address this shortcoming, the use of rich metadata to facilitate automatic sequencing and matching of appropriate video clips is necessary. This paper introduces a mechanism for automatically generating rich metadata which details the actions depicted in video files to facilitate matching and sequencing.
This mechanism was evaluated with 14 video files, producing annotations with a high degree of accuracy.

Item Open Access
From Activity Recognition to Intention Recognition for Assisted Living Within Smart Homes (IEEE, 2017-01-05)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
The global population is aging; projections show that by 2050, more than 20% of the population will be aged over 64. This will lead to an increase in aging-related illness, a decrease in informal support, and, ultimately, issues with providing care for these individuals. Assistive smart homes provide a promising solution to some of these issues. Nevertheless, they currently have issues hindering their adoption. To help address some of these issues, this study introduces a novel approach to implementing assistive smart homes. The devised approach is based upon an intention recognition mechanism incorporated into an intelligent agent architecture. This approach is detailed and evaluated. Evaluation was performed across three scenarios. Scenario 1 involved a web interface, focusing on testing the intention recognition mechanism. Scenarios 2 and 3 involved retrofitting a home with sensors and providing assistance with activities over a period of 3 months. The average accuracy for these three scenarios was 100%, 64.4%, and 83.3%, respectively. Future work will extend and further evaluate this approach by implementing advanced sensor-filtering rules and evaluating more complex activities.

Item Open Access
Goal Lifecycles and Ontological Models for Intention Based Assistive Living within Smart Environments (Computer Systems Science and Engineering, 2015-01)
Rafferty, Joseph; Chen, Liming; Nugent, Chris; Liu, Jun
Current ambient assistive living solutions have adopted a traditional sensor-centric approach, involving data analysis and activity recognition to provide assistance to individuals.
The reliance on sensors and activity recognition in this approach introduces issues with scalability and with the ability to model activity variations. This study introduces a novel approach to assistive living which intends to address these issues via a paradigm shift from a sensor-centric approach to a goal-oriented one. The goal-oriented approach focuses on identification of user goals in order to proactively offer assistance through either pre-defined or dynamically constructed instructions. This paper introduces the architecture of this goal-oriented approach and describes an ontological goal model to serve as its basis. The use of this approach is illustrated in a case study which focuses on assisting a user with activities of daily living.

Item Metadata only
A Goal-driven, assistive agent for instructing and guiding user activities. (IOS Press, 2015-11)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
Current ambient assistive living solutions have adopted a traditional sensor-centric approach, involving data analysis and activity recognition to provide assistance to individuals. The reliance on sensors and activity recognition in this approach introduces a number of issues. This study introduces a novel approach to assistive living which intends to address these issues via a paradigm shift from a sensor-centric approach to a goal-oriented one. The goal-oriented approach focuses on identification of user goals in order to proactively offer assistance through either pre-defined or dynamically constructed video-based instruction.
This extended abstract introduces the architecture of this goal-oriented approach, covers the novel developments required to realize it, and discusses the current state of the research.

Item Metadata only
A mechanism for nominating video clips to provide assistance for instrumental activities of daily living (Springer, 2015-11)
Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming
Current assistive smart homes have adopted a relatively rigid approach to modeling activities. The use of these activity models has introduced factors which block adoption of smart home technology. To address this, goal-driven smart homes have been proposed; these are based upon more flexible activity structures. This goal-driven approach does, however, have a disadvantage in that the flexibility of its activity modeling can make it difficult to provide illustrative guidance. To address this, a video analysis and nomination mechanism is required to provide suitable assistive clips for a given goal. This paper introduces a novel mechanism for nominating a suitable video clip from a pool of automatically generated metadata. This mechanism was evaluated using a voice-based assistant application and a tool emulating assistance requests from a goal-driven smart home. The initial evaluation produced promising results.

Item Metadata only
Using Ambient Intelligence for Disaster Management (Springer, 2006-10-09)
Augusto, J.C.; Liu, Jun; Chen, Liming
This paper presents an architecture to support the decision-making process of disaster managers. Here we focus on a core aspect of this process: taking decisions in the presence of conflicting options. We exemplify this problem with three simple scenarios related to diverse contexts and explain how our system will advise in all these cases.