• Title/Summary/Keyword: Color Metrics

Improved Angle-of-View Measurement Method for Display Devices

  • Lee, Eun-Jung;Chong, Jong-Ho;Yang, Sun-A;Lee, Hun-Jung;Shin, Mi-Ok;Kim, Su-Young;Choi, Dong-Wook;Lee, Seung-Bae;Lee, Han-Yong;Berkeley, Brian H.
    • Journal of Information Display / v.11 no.1 / pp.17-20 / 2010
  • With the increasing demand for better FPD image quality, better evaluation metrics and more advanced display quality measurement methods are required. There are many measurement methods for evaluating the viewing angle of display devices, but these methods, which include luminance drop, color shift, and contrast-ratio decrease, are imperfect because human perception does not completely correlate with them. In this paper, a new method of measuring the perceptual viewing angle of FPDs is proposed that accounts for human visual perception by using the color space of a color appearance model.

Improved Method for Angle-of-View Measurement of Display Devices

  • Lee, Eun-Jung;Chong, Jong-Ho;Yang, Sun-A;Lee, Hun-Jung;Shin, Mi-Ok;Kim, Su-Young;Choi, Dong-Wook;Lee, Seung-Bae;Lee, Han-Yong;Berkeley, Brian H.
    • Korean Information Display Society Conference Proceedings / 2009.10a / pp.979-982 / 2009
  • With the increasing demand for better FPD image quality, better evaluation metrics and more advanced display quality measurement methods are required. There are many measurement methods for evaluating the viewing angle of different display devices. However, these methods, which include luminance drop, color shift, and contrast-ratio decrease, are imperfect because human perception does not completely correlate with them. In this paper, we propose a new method for measuring the perceptual viewing angle of FPDs that considers human visual perception and uses the color space of a color appearance model.
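
The abstracts above do not spell out the measurement procedure, but the general idea, quantifying the perceptual colour change of a display as the viewing angle increases, can be illustrated with a minimal sketch. The sketch below uses CIELAB as a simple stand-in for a full colour appearance model such as CIECAM02, and the 3.0 Delta-E threshold, the angle list, and the XYZ measurements are illustrative assumptions, not values from the papers.

```python
# Hypothetical sketch: estimating a "perceptual viewing angle" from colour shift.
# Measured XYZ values of a white patch at several off-axis angles are converted to
# CIELAB (a simple stand-in for a full colour appearance model) and the viewing angle
# is taken as the first angle whose colour difference from the on-axis measurement
# exceeds a chosen threshold.
import numpy as np

D65 = np.array([95.047, 100.0, 108.883])  # reference white (XYZ, D65)

def xyz_to_lab(xyz, white=D65):
    """Convert XYZ tristimulus values to CIELAB."""
    r = np.asarray(xyz, dtype=float) / white
    eps, kappa = 216 / 24389, 24389 / 27
    f = np.where(r > eps, np.cbrt(r), (kappa * r + 16) / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def perceptual_viewing_angle(angles_deg, xyz_measurements, threshold=3.0):
    """Return the first angle whose Delta-E*ab from the on-axis colour exceeds threshold."""
    lab = xyz_to_lab(xyz_measurements)
    delta_e = np.linalg.norm(lab - lab[0], axis=-1)   # Delta-E*ab vs. on-axis
    over = np.nonzero(delta_e > threshold)[0]
    return angles_deg[over[0]] if over.size else angles_deg[-1]

# Example with made-up measurements of a white patch from 0 to 60 degrees.
angles = np.array([0, 10, 20, 30, 40, 50, 60])
xyz = np.array([[95.0, 100.0, 108.0],
                [94.0,  99.0, 107.0],
                [92.0,  97.0, 104.0],
                [89.0,  94.0, 100.0],
                [82.0,  87.0,  91.0],
                [74.0,  79.0,  82.0],
                [65.0,  70.0,  71.0]])
print(perceptual_viewing_angle(angles, xyz))  # first angle where the colour shift exceeds the threshold
```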

Optimum Parameter Ranges on Highly Preferred Images: Focus on Dynamic Range, Color, and Contrast (선호도 높은 이미지의 최적 파라미터 범위 연구: 다이내믹 레인지, 컬러, 콘트라스트를 중심으로)

  • Park, Hyung-Ju;Har, Dong-Hwan
    • The Journal of the Korea Contents Association / v.13 no.1 / pp.9-18 / 2013
  • To measure the parameters of consumers' preferred image quality, this research proposes three image quality assessment factors: dynamic range, color, and contrast. According to previous research, these factors have both physical image-quality and psychological characteristics. We identified the specific ranges of the preferred image quality metrics. The Digital Zone System, used for dynamic range, generally shows a 6~10 stop range in portrait, nightscape, and landscape images. Total RGB mean values range over 67.2~215.2 for portrait, 46~142 for nightscape, and 52~185 for landscape; portraits have the widest total RGB range, followed by landscape and nightscape. Total scene contrast ranges are 196~589 for portrait, 131~575 for nightscape, and 104~767 for landscape. In portraits, skin-tone RGB mean values fall in Zone V, the exposure standard, but in practice image consumers prefer a skin-tone level in Zone IV. The total-scene to main-subject contrast ratio is 1:1.2, so we conclude that image consumers prefer an out-of-focus background in portraits. Through this research, the ranges of preferred image-quality metrics can be measured, and we expect the specific dynamic range, color, and contrast information on preferred image quality to positively influence product development.
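
As context for the ranges reported above, the sketch below shows how a total RGB mean, a simple scene-contrast value, and a zone placement could be computed for an image. It is only an illustration: the authors' exact Digital Zone System mapping and contrast definition are not given in the abstract, so the formulas here are assumptions.

```python
# A minimal sketch (not the authors' exact protocol) of the kinds of measurements the
# abstract reports: the mean RGB level of an image, a simple scene-contrast value, and
# a Digital Zone System placement. The zone mapping and contrast definition are
# illustrative assumptions.
import numpy as np

def rgb_mean(image):
    """Mean of all R, G and B values in an 8-bit RGB image (H x W x 3 array)."""
    return float(np.asarray(image, dtype=float).mean())

def scene_contrast(image):
    """Difference between the brightest and darkest luma values in the scene."""
    img = np.asarray(image, dtype=float)
    luma = img @ np.array([0.299, 0.587, 0.114])  # Rec.601 luma as a luminance proxy
    return float(luma.max() - luma.min())

def zone(value, black=0.0, white=255.0):
    """Map an 8-bit level onto an 11-step (0-X) zone scale, Zone V being middle grey."""
    steps = np.clip((value - black) / (white - black), 0.0, 1.0) * 10.0
    return int(round(steps))

# Example on a random test image.
rng = np.random.default_rng(0)
img = rng.integers(30, 220, size=(480, 640, 3))
print(rgb_mean(img), scene_contrast(img), zone(rgb_mean(img)))
```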

A Watermarking Technique for User Authentication Based on a Combination of Face Image and Device Identity in a Mobile Ecosystem

  • Al-Jarba, Fatimah;Al-Khathami, Mohammed
    • International Journal of Computer Science & Network Security / v.21 no.9 / pp.303-316 / 2021
  • Digital content protection has recently become an important requirement in biometrics-based authentication systems because of the challenges involved in designing a feasible and effective user authentication method. Biometric approaches are more effective than traditional methods but cannot be considered entirely reliable on their own. This study develops a reliable and trustworthy method for verifying that the owner of the biometric traits is the actual user and not an impostor. A watermarking-based approach is developed using a combination of a color face image of the user and a mobile equipment identifier (MEID). To employ watermarks that cannot easily be removed or destroyed, a blind image watermarking scheme based on the fast discrete curvelet transform (FDCuT) and the discrete cosine transform (DCT) is proposed. FDCuT is applied to the color face image to obtain the frequency coefficients of its curvelet decomposition, and DCT is then applied to the high-frequency curvelet coefficients. The mid-band frequency coefficients are modified using two uncorrelated noise sequences according to the MEID watermark bits to obtain the watermarked image. An analysis is carried out to verify the performance of the proposed scheme using conventional performance metrics. Compared with an existing approach, the proposed approach better protects multimedia data from unauthorized access and effectively prevents anyone other than the actual user from using the identity or images.
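
The curvelet stage aside, the core embedding idea described above, modifying mid-band DCT coefficients with one of two uncorrelated noise sequences per watermark bit and detecting the bit blindly by correlation, can be sketched briefly. The block size, mid-band positions, and gain below are illustrative choices, not the paper's parameters, and the FDCuT step is omitted.

```python
# A minimal sketch of the DCT / noise-sequence part of such a scheme. Each watermark
# bit selects one of two uncorrelated pseudo-random sequences, which is added to the
# mid-band DCT coefficients of a block; blind detection correlates the mid-band
# coefficients with both sequences and picks the stronger match.
import numpy as np
from scipy.fft import dctn, idctn

BLOCK = 8
# Mid-band coefficient positions of an 8x8 DCT block (an illustrative choice).
MIDBAND = [(1, 4), (2, 3), (3, 2), (4, 1), (2, 4), (3, 3), (4, 2), (1, 5)]

def embed_bit(block, bit, pn0, pn1, gain=5.0):
    """Embed a single watermark bit into the mid-band DCT coefficients of one block."""
    coeffs = dctn(block.astype(float), norm="ortho")
    pn = pn1 if bit else pn0
    for (u, v), p in zip(MIDBAND, pn):
        coeffs[u, v] += gain * p
    return idctn(coeffs, norm="ortho")

def detect_bit(block, pn0, pn1):
    """Blind detection: correlate the mid-band coefficients with both sequences."""
    coeffs = dctn(block.astype(float), norm="ortho")
    mid = np.array([coeffs[u, v] for (u, v) in MIDBAND])
    return int(np.dot(mid, pn1) > np.dot(mid, pn0))

rng = np.random.default_rng(42)
pn0, pn1 = rng.standard_normal(len(MIDBAND)), rng.standard_normal(len(MIDBAND))
block = np.full((BLOCK, BLOCK), 128.0)          # a smooth block keeps the demo detection clear
marked = embed_bit(block, bit=1, pn0=pn0, pn1=pn1)
print(detect_bit(marked, pn0, pn1))             # should recover the embedded bit (1)
```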

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast, caused by the absorption and scattering that affect light as it propagates underwater. To cope with the poor quality of underwater images, this paper proposes a multiscale-fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to a high-quality underwater image. The network also introduces a channel attention mechanism so that it pays more attention to the channels containing important information, and detail information is protected by real-time superposition with the feature information. Experimental results demonstrate that the proposed method produces results with correct colors and complete details and outperforms existing methods on quantitative metrics.
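
As an illustration of the channel attention idea mentioned in the abstract, the sketch below shows a squeeze-and-excitation style attention block in PyTorch. The reduction ratio and layer sizes are assumptions for illustration; this is not the authors' exact architecture.

```python
# A minimal sketch of a squeeze-and-excitation style channel attention block, the kind
# of mechanism that lets a network "pay more attention to the channels containing
# important information".
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight channels

# Example: reweight a batch of 64-channel feature maps.
feats = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```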

The Impact of Audiovisual Elements on Learning Outcomes - Focusing on MOOC -

  • Li Meng;Hong, Chang-kee
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.98-112 / 2024
  • As digital education progresses, MOOCs (Massive Open Online Courses) are increasingly utilized by learners, making research on MOOC learning outcomes necessary. In this study, we systematically investigated the impact of audiovisual elements on learning outcomes in MOOCs, highlighting the nuanced role these components play in enhancing educational effectiveness. Through a comprehensive survey and analysis involving descriptive statistics, reliability metrics, and regression techniques, we quantified the influence of text, graphics, color, teacher images, sound effects, background music, and the teacher's voice on learner attention, cognitive load, and satisfaction. We found that background music and text layout significantly improve engagement and reduce cognitive burden, underscoring their pivotal role in the instructional design of MOOCs. Our findings contribute new insights to the field of digital education, emphasizing the importance of integrating audiovisual elements thoughtfully to foster better learning environments and outcomes. The study not only advances academic understanding of multimedia learning effects but also offers practical guidance for educators and course designers seeking to enhance the efficacy of MOOCs.
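
For readers unfamiliar with the analysis pipeline named in the abstract, the sketch below shows one common way such survey data are handled: Cronbach's alpha as a reliability metric and an ordinary least-squares regression of a satisfaction score on audiovisual design ratings. The variable names and synthetic data are hypothetical and are not taken from the study.

```python
# A small sketch of a typical survey analysis: an internal-consistency check
# (Cronbach's alpha) for a multi-item scale, followed by an OLS regression of a
# learning-outcome score on audiovisual design variables. All data are synthetic.
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
n = 200
# Hypothetical predictors: ratings of background music, text layout, teacher voice (1-5).
X = rng.integers(1, 6, size=(n, 3)).astype(float)
satisfaction = 1.0 + 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.5, n)

# OLS via least squares (intercept added as a column of ones).
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, satisfaction, rcond=None)
print("alpha:", round(cronbach_alpha(X), 3), "coefficients:", np.round(coef, 2))
```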

Image Quality Assessment Considering both Computing Speed and Robustness to Distortions (계산 속도와 왜곡 강인성을 동시 고려한 이미지 품질 평가)

  • Kim, Suk-Won;Hong, Seongwoo;Jin, Jeong-Chan;Kim, Young-Jin
    • Journal of KIISE / v.44 no.9 / pp.992-1004 / 2017
  • To assess image quality accurately, an image quality assessment (IQA) metric should reflect the human visual system (HVS) properly; in other words, the structure, color, and contrast ratio of the image should be evaluated with various factors taken into account. In addition, as mobile embedded devices such as smartphones become popular, fast computing speed is important. In this paper, the proposed IQA metric combines color similarity, gradient similarity, and phase similarity synergistically to satisfy the HVS, and it is designed with optimized pooling and quantization for fast computation. The proposed IQA metric is compared against 13 existing methods using four evaluation criteria. The experimental results show that the proposed metric ranks first on three of the evaluation criteria and second on the remaining one, next to VSI, the most notable existing IQA metric, while its computing speed is on average about 20% faster than VSI's. In addition, we find that the proposed IQA metric correlates more strongly with the HVS than existing IQA metrics.
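
The abstract does not give the metric's formulas, but the general structure of such full-reference metrics, per-pixel similarity maps that are multiplied and then pooled into a single score, can be sketched as follows. Sobel gradients stand in for the gradient term; the phase-similarity term and the paper's optimized pooling and quantization are omitted, and the constants are illustrative.

```python
# A minimal sketch of combining and pooling similarity maps in a full-reference IQA
# metric: gradient similarity from Sobel filters and a simple per-channel colour
# similarity, averaged into one score. This illustrates the general structure only,
# not the paper's actual metric.
import numpy as np
from scipy.ndimage import sobel

C1, C2 = 0.01, 0.01  # small stabilizing constants (illustrative values)

def gradient_magnitude(gray):
    gx, gy = sobel(gray, axis=0), sobel(gray, axis=1)
    return np.hypot(gx, gy)

def similarity(a, b, c):
    return (2 * a * b + c) / (a ** 2 + b ** 2 + c)

def simple_iqa(ref, dist):
    """Mean-pooled product of gradient and colour similarity maps (higher = more similar)."""
    ref, dist = np.asarray(ref, float), np.asarray(dist, float)
    gray_r, gray_d = ref.mean(axis=-1), dist.mean(axis=-1)
    grad_sim = similarity(gradient_magnitude(gray_r), gradient_magnitude(gray_d), C2)
    color_sim = similarity(ref, dist, C1).mean(axis=-1)
    return float((grad_sim * color_sim).mean())   # simple average pooling

rng = np.random.default_rng(3)
ref = rng.random((64, 64, 3))
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
print(simple_iqa(ref, ref), simple_iqa(ref, noisy))  # identical pair scores 1.0
```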

A Multi-Level Integrator with Programming Based Boosting for Person Authentication Using Different Biometrics

  • Kundu, Sumana;Sarker, Goutam
    • Journal of Information Processing Systems / v.14 no.5 / pp.1114-1135 / 2018
  • A multiple classification system based on a new boosting technique is proposed for person authentication, utilizing different biometric traits: color face, iris, and eye, along with fingerprints of the right and left hands, handwriting, palm print, gait (silhouettes), and wrist vein. The images of the different biometric traits were taken from standard databases such as FEI, UTIRIS, CASIA, IAM, and CIE. The system comprises three different super-classifiers that individually perform person identification. The individual classifiers within each super-classifier identify different biometric features, and their conclusions are integrated within their respective super-classifiers. The decisions from the individual super-classifiers are then integrated through a mega-super-classifier to reach the final conclusion using programming-based boosting. The mega-super-classifier system, which combines different super-classifiers in a compact form, is more reliable than a single classifier or even a single super-classifier system. The system has been evaluated with accuracy, precision, recall, and F-score metrics through the holdout method and confusion matrices for each of the single classifiers, the super-classifiers, and finally the mega-super-classifier. The performance results are appreciable, and the learning and recognition times are reasonable, making the system efficient and effective.
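
The evaluation step mentioned above (accuracy, precision, recall, and F-score derived from a confusion matrix under a holdout split) is standard, and a minimal sketch follows. The two-class confusion matrix used in the example is made up purely for illustration.

```python
# A short sketch of deriving accuracy, precision, recall and F-score from a confusion
# matrix, as used to evaluate classifiers under a holdout split.
import numpy as np

def metrics_from_confusion(cm):
    """Per-class precision/recall/F-score and overall accuracy from a confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)        # column sums: predicted counts
    recall = tp / cm.sum(axis=1)           # row sums: actual counts
    f_score = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f_score

# Rows: actual classes (genuine, impostor); columns: predicted classes.
cm = [[45, 5],
      [3, 47]]
acc, prec, rec, f1 = metrics_from_confusion(cm)
print(round(acc, 3), np.round(prec, 3), np.round(rec, 3), np.round(f1, 3))
```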

Blind Quality Metric via Measurement of Contrast, Texture, and Colour in Night-Time Scenario

  • Xiao, Shuyan;Tao, Weige;Wang, Yu;Jiang, Ye;Qian, Minqian.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.4043-4064 / 2021
  • Night-time image quality evaluation is an urgent requirement in visual inspection. Night-time lighting conditions result in low brightness, low contrast, loss of detail, and colour dissonance, which makes delicately evaluating image quality at night a daunting task. This article presents a new blind quality assessment metric for realistic night-time scenarios based on a comprehensive consideration of contrast, texture, and colour. Specifically, the color-gray-difference (CGD) histogram of image blocks, which represents contrast features, is computed first. Next, texture features measured by the mean subtracted contrast normalized (MSCN)-weighted local binary pattern (LBP) histogram are calculated. Then, statistical features in the Lαβ colour space are extracted. Finally, the quality prediction model is built with support vector regression (SVR) on the extracted contrast, texture, and colour features. Experiments conducted on the NNID, CCRIQ, LIVE-CH, and CID2013 databases indicate that the proposed metric is superior to the compared BIQA metrics.
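
The MSCN coefficients mentioned in the abstract are a standard ingredient of blind quality metrics, and a minimal sketch of their computation follows. The Gaussian window parameter and the histogram settings are illustrative; the CGD histogram, the Lαβ colour statistics, and the SVR stage are not shown.

```python
# A sketch of MSCN (mean subtracted contrast normalized) coefficients, the local
# normalization commonly used to weight texture features in blind quality metrics.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(gray, sigma=7 / 6, c=1.0):
    """Mean subtracted contrast normalized coefficients of a grayscale image."""
    gray = np.asarray(gray, dtype=float)
    mu = gaussian_filter(gray, sigma)                                   # local mean
    sigma_map = np.sqrt(np.abs(gaussian_filter(gray * gray, sigma) - mu * mu))
    return (gray - mu) / (sigma_map + c)                                # normalized coefficients

rng = np.random.default_rng(7)
night_image = rng.integers(0, 60, size=(128, 128)).astype(float)  # dark synthetic scene
coeffs = mscn(night_image)
hist, _ = np.histogram(coeffs, bins=32, range=(-3, 3), density=True)
print(coeffs.mean(), coeffs.std(), hist.shape)
```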

Improvement of GOCI-II Ground System for Monitoring of Level-1 Data Quality (천리안 해양위성 2호 Level-1 영상의 품질관리를 위한 지상국 시스템 개선)

  • Sun-Ju Lee;Kum-Hui Oh;Gm-Sil Kang;Woo-Chang Choi;Jong-Kuk Choi;Jae-Hyun Ahn
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1529-1539 / 2023
  • The data from Geostationary Ocean Color Imager-II (GOCI-II), which observes the color of the sea to monitor marine environments, undergoes various correction processes in the ground station system, producing data from Raw to Level-2 (L2). Quality issues arising at each processing stage accumulate step by step, leading to an amplification of errors in the satellite data. To address this, improvements were made to the GOCI-II ground station system to measure potential optical quality and geolocation accuracy errors in the Level-1A/B (L1A/B) data. A newly established Radiometric and Geometric Performance Assessment Module (RGPAM) now measures five optical quality factors and four geolocation accuracy factors in near real-time. Testing with GOCI-II data has shown that RGPAM's functions, including data processing, display and download of measurement results, work well. The performance metrics obtained through RGPAM are expected to serve as foundational data for real-time radiometric correction model enhancements, assessment of L1 data quality consistency, and the development of reprocessing strategies to address identified issues related to the GOCI-II detector's sensitivity degradation.
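
The abstract does not describe RGPAM's algorithms, but one plausible, purely hypothetical example of a geolocation-accuracy style check is sketched below: the root-mean-square pixel offset between measured and reference ground-control-point positions, compared against a tolerance. The point data and the 1-pixel threshold are invented for illustration.

```python
# A hypothetical sketch of a geolocation-accuracy check such a monitoring module might
# run: RMS pixel offset between image positions of ground control points and their
# expected positions, flagged against a tolerance.
import numpy as np

def geolocation_rmse(measured_px, reference_px):
    """RMS pixel offset between measured and reference ground-control-point positions."""
    diff = np.asarray(measured_px, float) - np.asarray(reference_px, float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

measured = [[1024.3, 512.8], [2048.1, 300.2], [1500.0, 900.5]]
reference = [[1024.0, 512.0], [2047.5, 300.0], [1499.2, 901.0]]
rmse = geolocation_rmse(measured, reference)
print(rmse, "within spec" if rmse < 1.0 else "out of spec")  # 1-pixel tolerance is illustrative
```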