Large-scale crowdsourced subjective assessment of picturewise just noticeable difference

dc.cclicence: CC-BY-NC
dc.contributor.author: Lin, Hanhe
dc.contributor.author: Chen, Guangan
dc.contributor.author: Jenadeleh, Mohsen
dc.contributor.author: Hosu, Vlad
dc.contributor.author: Reips, Ulf-Dietrich
dc.contributor.author: Hamzaoui, Raouf
dc.contributor.author: Saupe, Dietmar
dc.date.acceptance: 2022-03-23
dc.date.accessioned: 2022-03-31T15:54:59Z
dc.date.available: 2022-03-31T15:54:59Z
dc.date.issued: 2022-03-31
dc.description: TRR 161 (Project A05). The file attached to this record is the author's final peer-reviewed version. The publisher's final version can be found by following the DOI link.
dc.description.abstract: The picturewise just noticeable difference (PJND) for a given image, compression scheme, and subject is the smallest distortion level that the subject can perceive when the image is compressed with this compression scheme. The PJND can be used to determine the compression level at which a given proportion of the population does not notice any distortion in the compressed image. To obtain accurate and diverse results, the PJND must be determined for a large number of subjects and images. This is particularly important when experimental PJND data are used to train deep learning models that can predict a probability distribution model of the PJND for a new image. To date, such subjective studies have been carried out in laboratory environments. However, the number of participants and images in all existing PJND studies is very small because of the challenges involved in setting up laboratory experiments. To address this limitation, we develop a framework to conduct PJND assessments via crowdsourcing. We use a new technique based on slider adjustment and a flicker test to determine the PJND. A pilot study demonstrated that our technique could decrease the study duration by 50% and double the perceptual sensitivity compared to the standard binary search approach that successively compares a test image side by side with its reference image. Our framework includes a robust and systematic scheme to ensure the reliability of the crowdsourced results. Using 1,008 source images and distorted versions obtained with JPEG and BPG compression, we apply our crowdsourcing framework to build the largest PJND dataset, KonJND-1k (Konstanz just noticeable difference 1k dataset). A total of 503 workers participated in the study, yielding 61,030 PJND samples that resulted in an average of 42 samples per source image. The KonJND-1k dataset is available at http://database.mmsp-kn.de/konjnd-1kdatabase.html
dc.funder: Other external funder (please detail below)
dc.funder.other: DFG (German Research Foundation)
dc.identifier.citation: H. Lin, G. Chen, M. Jenadeleh, V. Hosu, U. Reips, R. Hamzaoui, D. Saupe (2022) Large-scale crowdsourced subjective assessment of picturewise just noticeable difference. IEEE Transactions on Circuits and Systems for Video Technology, 32 (9), pp. 5859-5873
dc.identifier.doi: https://doi.org/10.1109/tcsvt.2022.3163860
dc.identifier.uri: https://hdl.handle.net/2086/21792
dc.language.iso: en_US
dc.peerreviewed: Yes
dc.projectid: Project-ID 251654672
dc.publisher: IEEE
dc.researchinstitute: Institute of Engineering Sciences (IES)
dc.subject: Just noticeable difference
dc.subject: Satisfied user ratio
dc.subject: Crowdsourcing
dc.subject: Flicker test
dc.title: Large-scale crowdsourced subjective assessment of picturewise just noticeable difference
dc.type: Article
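The abstract contrasts the proposed slider-and-flicker technique with the standard binary search over distortion levels. As a rough illustration only (not the authors' implementation), the baseline can be sketched as a binary search for the smallest distortion level the subject reports noticing; `subject_notices` is a hypothetical stand-in for the subject's side-by-side comparison response, assumed monotone in the distortion level.

```python
def find_pjnd(levels, subject_notices):
    """Binary search for the smallest perceivable distortion level (PJND).

    levels: distortion levels sorted from least to most distorted.
    subject_notices: callback returning True if the subject sees a
        difference between the image at this level and the reference.
    Returns the PJND level, or None if no level is noticeable.
    """
    lo, hi = 0, len(levels) - 1
    result = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if subject_notices(levels[mid]):
            result = levels[mid]   # noticeable: try a smaller distortion
            hi = mid - 1
        else:
            lo = mid + 1           # not noticeable: increase distortion
    return result
```

Each probe in this loop requires a fresh side-by-side judgment from the subject, which is what makes the procedure slow; the paper's slider adjustment with a flicker test replaces these discrete probes with a continuous adjustment.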

Files

Original bundle
Name: TCSVT22.pdf
Size: 4.21 MB
Format: Adobe Portable Document Format
Description: Main article
License bundle
Name: license.txt
Size: 4.2 KB
Format: Item-specific license agreed upon to submission