High-level geospatial information discovery and fusion for geocoded multimedia

Date

2013

Journal Title

International Journal of Pervasive Computing and Communications

Journal ISSN

1742-7371

Publisher

Emerald

Type

Article

Peer reviewed

Yes

Abstract

Purpose: Improvements in, and the portability of, technologies and smart devices have enabled rapid growth in the amount of user-generated media, such as photographs and videos. While various media generation and management systems exist, discovering the right information for the right purpose remains a challenge. This paper aims to propose an approach to reverse geocoding that cross-references multiple geospatial data sources to enrich media, thereby enabling better organisation and searching of the media to create an overall picture of places.

Design/methodology/approach: The paper presents a system architecture that incorporates the proposed approach, aggregating several geospatial databases to enrich geo-tagged media with human-readable information, further enabling the goal of creating an overall picture of places. The approach enables semantic information relating to points of interest to be attached to the media.

Findings: Implementation of the proposed approach shows that a single geospatial data source does not contain enough information to accurately describe the high-level geospatial information for geocoded multimedia. However, fusing several geospatial data sources together enables richer, more accurate high-level geospatial information to be tagged to the geocoded multimedia.

Originality/value: The contribution of this paper shows that high-level geospatial information can be retrieved from many data sources and fused together to enrich geocoded multimedia, which can facilitate better searching and retrieval of the multimedia.
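As a minimal illustrative sketch only, and not the authors' implementation, the fusion idea described in the abstract can be pictured as querying several reverse-geocoding sources for the same coordinates and merging their answers into one tag set on the media item. All source names and returned values below are hypothetical stubs standing in for real geospatial databases.

```python
# Illustrative sketch: fusing reverse-geocoding results from several
# (hypothetical, stubbed) geospatial sources to enrich a geo-tagged photo.
from dataclasses import dataclass, field


@dataclass
class Media:
    """A geocoded media item (e.g. a photograph) and its enrichment tags."""
    path: str
    lat: float
    lon: float
    tags: dict = field(default_factory=dict)


def gazetteer_source(lat: float, lon: float) -> dict:
    # Stub standing in for a place-name database (hypothetical values).
    return {"city": "Belfast", "country": "United Kingdom"}


def poi_source(lat: float, lon: float) -> dict:
    # Stub standing in for a point-of-interest database (hypothetical values).
    return {"poi": "Ulster University", "category": "education"}


def fuse(lat: float, lon: float, sources) -> dict:
    """Cross-reference several sources; later sources fill gaps left by
    earlier ones rather than overwriting them."""
    fused: dict = {}
    for query in sources:
        for key, value in query(lat, lon).items():
            fused.setdefault(key, value)  # keep the first answer per field
    return fused


photo = Media("img_0042.jpg", lat=54.6042, lon=-5.9272)
photo.tags = fuse(photo.lat, photo.lon, [gazetteer_source, poi_source])
print(photo.tags)
# {'city': 'Belfast', 'country': 'United Kingdom',
#  'poi': 'Ulster University', 'category': 'education'}
```

The point of the sketch is the abstract's central claim: no single source supplies every field, but combining sources yields a richer, human-readable tag set than any one source alone.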

Keywords

GPS, Multimedia, Geospatial, Geotag, Reverse geocoding, Semantic enrichment

Citation

Ennis, A. et al. (2013) High-level geospatial information discovery and fusion for geocoded multimedia. International Journal of Pervasive Computing and Communications, 9(4), pp. 367-382.
