Automatic Summarization of Activities Depicted in Instructional Videos by Use of Speech Analysis

Date

2014-12

Publisher

Springer

Type

Book chapter

Peer reviewed

Yes

Abstract

Existing activity recognition-based assistive living solutions have adopted a relatively rigid approach to modelling activities. To address the deficiencies of such approaches, a goal-oriented solution has been proposed that offers a method of flexibly modelling activities. This approach does, however, have a disadvantage: the performance of goals may vary, requiring differing video clips to be associated with these variations. To address this shortcoming, rich metadata is needed to facilitate automatic sequencing and matching of appropriate video clips. This paper introduces a mechanism for automatically generating rich metadata that details the actions depicted in video files in order to facilitate such matching and sequencing. This mechanism was evaluated with 14 video files, producing annotations with a high degree of accuracy.
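The abstract and keywords indicate a pipeline built on automated speech recognition, parsing and an ontology, but no implementation details are given here. As a minimal, purely illustrative sketch, the following assumes the narration of a video has already been transcribed by an ASR system and approximates the annotation step with a hard-coded verb list and a simple JSON output schema; none of these names or structures come from the paper itself.

```python
# Illustrative sketch only: a rule-based approximation of turning an ASR
# transcript of an instructional video's narration into action metadata.
# The transcript, verb vocabulary and output schema are assumptions, not
# the authors' actual mechanism.

import json
import re

# Hypothetical vocabulary of action verbs; a real system would draw these
# from an ontology of activities rather than a hard-coded set.
ACTION_VERBS = {"fill", "boil", "pour", "add", "stir", "place", "switch", "wait"}


def annotate_transcript(transcript: str) -> list[dict]:
    """Split a narration transcript into sentences and keep those that
    appear to describe an action, recording the verb found in each."""
    annotations = []
    sentences = re.split(r"[.!?]+", transcript)
    for index, sentence in enumerate(sentences):
        words = sentence.strip().lower().split()
        if not words:
            continue
        verb = next((w for w in words if w in ACTION_VERBS), None)
        if verb:
            annotations.append({
                "step": len(annotations) + 1,
                "sentence_index": index,
                "action": verb,
                "description": sentence.strip(),
            })
    return annotations


if __name__ == "__main__":
    # Hypothetical ASR output for a short "make tea" clip.
    transcript = (
        "First, fill the kettle with water. Switch the kettle on and wait "
        "for it to boil. Pour the water into the cup and add a tea bag."
    )
    print(json.dumps(annotate_transcript(transcript), indent=2))
```

The resulting list of step records stands in for the "rich metadata" described above: each entry ties an action keyword to a segment of the narration, which is the kind of information that could later be matched and sequenced against goal variations.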

Keywords

Annotation, Automated speech recognition, Parsing, Ontology, Assistive living, Smart environments, Video, Guidance

Citation

Rafferty, J. et al. (2014) Automatic Summarization of Activities Depicted in Instructional Videos by Use of Speech Analysis. The 6th International Work-Conference on Ambient Assisted Living (IWAAL 2014), Lecture Notes in Computer Science, vol. 8868, pp. 123-130.
