Low memory visual saliency architecture for data reduction in wireless sensor networks

Citation data:

IET Wireless Sensor Systems, ISSN: 2043-6386, Vol: 2, Issue: 2, Page: 115-127

Publication Year:
2012
Metrics:
Usage: 13 (abstract views: 13)
Captures: 5 (readers: 5)
Citations: 6 (citation indexes: 6)
Repository URL:
https://ro.ecu.edu.au/ecuworks2012/567
DOI:
10.1049/iet-wss.2011.0038
Author(s):
Ngau, C. W. H.; Ang, Li-minn; Seng, K. P.
Publisher(s):
Institution of Engineering and Technology (IET); IET Digital Library
Tags:
Engineering; Bandwidth limitation; Communication overheads; Computation complexity; Hardware implementations; High resolution; Image compression techniques; Low memory; Memory requirements; Realtime processing; Reduction rate; Visual perception; Visual saliency; Wireless sensor network (WSNs); Data reduction; Hardware; Image compression; Memory architecture; Telecommunication systems; Visualization; Wireless sensor networks
Article description:
Traditionally, to reduce communication overheads caused by bandwidth limitations in wireless sensor networks (WSNs), image compression techniques are applied to high-resolution captures. Higher data reduction rates can be achieved by first removing redundant parts of the capture prior to the application of image compression. To locate these redundant parts, biologically plausible visual saliency processing is used to isolate parts that seem important based on visual perception. Although visual saliency proves to be an effective method for providing a distinctive difference between important and unimportant regions, its computational complexity and memory requirements often impair implementation. This study presents an implementation of a low-memory visual saliency architecture with reduced computational complexity for data reduction in WSNs through salient patch transmission. A custom softcore microprocessor-based hardware implementation on a field programmable gate array is then used to verify the architecture. Real-time processing demonstrated that data reductions of more than 50% are achievable for simple to medium scenes without the application of image compression techniques. © 2012 The Institution of Engineering and Technology.
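The salient-patch transmission idea in the abstract can be illustrated with a minimal sketch. Note this is not the paper's architecture: the original uses a biologically plausible, low-memory saliency model in hardware, whereas here a simple per-patch intensity contrast against the global mean stands in as the saliency measure, and the patch size, threshold rule, and function names are all assumptions for illustration only.

```python
import numpy as np

PATCH = 8  # hypothetical patch size; the paper's block size is not stated here

def patch_saliency(img, patch=PATCH):
    """Crude saliency proxy: each patch's mean intensity contrast
    against the global image mean (stand-in for a full saliency model)."""
    h, w = img.shape
    h, w = h - h % patch, w - w % patch  # crop to a whole number of patches
    blocks = img[:h, :w].reshape(h // patch, patch, w // patch, patch)
    patch_means = blocks.mean(axis=(1, 3))
    return np.abs(patch_means - img.mean())

def salient_patches(img, patch=PATCH):
    """Select patches to transmit and report the achieved data reduction.

    Patches whose saliency exceeds half the maximum are kept; everything
    else is dropped before any image compression is applied.
    """
    sal = patch_saliency(img, patch)
    mask = sal > 0.5 * sal.max()        # assumed threshold rule
    reduction = 1.0 - mask.mean()       # fraction of patches not transmitted
    return mask, reduction
```

On a simple scene (a single bright object on a dark background), only the patches covering the object are marked for transmission, so the reduction exceeds the 50% figure the abstract reports for simple scenes; cluttered scenes would keep more patches and reduce less.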