• Title/Summary/Keyword: 컬러 화질 향상 (colour image quality enhancement)


A study on colour appearance by the size of colour stimulation at foveal vision (중심와 시각에서 색채 자극의 크기에 따른 컬러 어피어런스 연구)

  • Hong, Ji-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.3 / pp.23-28 / 2018
  • Next-generation displays are evolving from the conventional display environment, represented by existing televisions, toward the mobile environment. A mobile display, as a personal display, plays a role similar to a home theatre, with the advantage of being small and relatively lightweight. The display industry is therefore interested in diverse product applications, in reproducing colours more accurately, and in improving image quality on display devices of various sizes. To address these interests, a psychophysical experiment was conducted in this study. The experiment compared colour stimuli of different sizes, corresponding to foveal vision, while gradually increasing the lightness of the background, based on the assumption that the perceived colours may differ with the lightness of the background and the size of the colour stimulus. Previous studies, which did not consider the lightness of the background, reported that colours are identified more clearly as the size of the colour stimulus increases; in contrast, the present experiment showed that the attributes of the identified colours differed depending on both the lightness of the background and the size of the colour stimulus. Based on these results, it is possible to reduce colour-conversion errors that can occur when an input image is moved from a large screen to a mobile-sized display, and thus to reproduce colours more accurately and improve image quality.
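The paper reports psychophysical findings rather than an algorithm, but the colour-conversion step it motivates can be illustrated. The sketch below converts an sRGB colour to CIELAB and applies a placeholder chroma gain that depends on stimulus size (in visual degrees) and background lightness; the gain formula, the reference angle, and the function name `adjust_for_display` are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# D65 reference white for the XYZ -> CIELAB conversion
_WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    linear = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ linear
    t = xyz / _WHITE_D65
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def adjust_for_display(lab, stimulus_deg, reference_deg=10.0, background_L=50.0):
    """Scale chroma with a placeholder size/background-dependent gain.

    The gain below is purely illustrative; a real correction would be fitted
    to psychophysical data such as the results reported in the paper.
    """
    L, a, b = lab
    size_gain = 1.0 + 0.05 * (reference_deg - stimulus_deg) / reference_deg
    background_gain = 1.0 + 0.02 * (background_L - 50.0) / 50.0
    gain = size_gain * background_gain
    return np.array([L, a * gain, b * gain])

# Example: compensate a mid-saturation red when moving from a 10-degree
# stimulus on a TV to a 2-degree stimulus on a bright mobile display.
lab = srgb_to_lab([0.8, 0.3, 0.3])
print(adjust_for_display(lab, stimulus_deg=2.0, background_L=70.0))
```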

Edge-adaptive demosaicking method for complementary color filter array of digital video cameras (디지털 비디오 카메라용 보색 필터를 위한 에지 적응적 색상 보간 방법)

  • Han, Young-Seok; Kang, Hee; Kang, Moon-Gi
    • Journal of Broadcast Engineering / v.13 no.1 / pp.174-184 / 2008
  • The complementary color filter array (CCFA) is widely used in consumer-level digital video cameras because it offers high sensitivity and a good signal-to-noise ratio in low-light conditions and is compatible with the interlaced scanning used in broadcast systems. However, the full-color images obtained from a CCFA suffer from color artifacts such as false color and zipper effects. These artifacts can be removed with edge-adaptive demosaicking (ECD) approaches, which are generally applied to the primary color filter array (PCFA). Unfortunately, the unique array pattern of the CCFA makes it difficult to adopt ECD approaches directly, so applying ECD approaches suited to the CCFA is one of the major issues in reconstructing full-color images. In this paper, we propose a new ECD algorithm for the CCFA. To estimate edge directions precisely and enhance the quality of the reconstructed image, a function of spatial variances is used as a weight, and new color conversion matrices are presented to account for various edge directions. Experimental results indicate that the proposed algorithm outperforms the conventional method with respect to both objective and subjective criteria.
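As a rough illustration of the edge-adaptive weighting idea described above (directional weights derived from local variation so that interpolation follows edges rather than crossing them), the sketch below interpolates the green channel of a primary (RGGB Bayer) mosaic. The paper's CCFA-specific color conversion matrices and interlaced-scan handling are not reproduced; the simple difference-based weights stand in for its spatial-variance weights.

```python
import numpy as np

def interpolate_green_edge_adaptive(mosaic):
    """Edge-adaptive green interpolation on an RGGB Bayer mosaic.

    Horizontal/vertical weights are inversely proportional to the local
    directional differences: the direction with less variation (i.e. along
    an edge) dominates, which suppresses false color and zipper artifacts.
    """
    m = np.asarray(mosaic, dtype=np.float64)
    green = m.copy()
    h, w = m.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 0:                    # red or blue site
                gl, gr = m[y, x - 1], m[y, x + 1]   # horizontal green pair
                gu, gd = m[y - 1, x], m[y + 1, x]   # vertical green pair
                w_h = 1.0 / (1.0 + abs(gl - gr))    # flat horizontally -> large weight
                w_v = 1.0 / (1.0 + abs(gu - gd))    # flat vertically   -> large weight
                green[y, x] = (w_h * (gl + gr) / 2 + w_v * (gu + gd) / 2) / (w_h + w_v)
    return green

# Example: a synthetic 8x8 mosaic with a sharp vertical edge
if __name__ == "__main__":
    img = np.zeros((8, 8))
    img[:, 4:] = 255.0
    print(interpolate_green_edge_adaptive(img))
```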

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng; Rosli, Ahmad Nurzid; Jang, Chol-Hee; Lee, Kee-Sung; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.1-21 / 2012
  • In recent years, the mobile phone has evolved extremely quickly. It is now equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, along with additional sensors such as GPS and a digital compass. This evolution helps application developers harness the power of smartphones to create rich environments offering a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude, but these systems have a major limitation: the AR content is rarely overlaid accurately on the real target. Another line of research is context-based AR using image recognition and tracking, in which the AR content is precisely overlaid on the real target; however, real-time performance is restricted by the retrieval time, and such systems are hard to deploy over a large area. In this work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (media such as text, pictures, and video) in their smartphone viewfinder when they point the phone at a certain building or landmark. For this, landmark recognition is applied, using SURF point-based features in the matching process because of their robustness. To ensure that image retrieval and matching are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks lying in the pointed direction; the query image is matched only against this selected data, so the matching speed increases significantly. The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required: users can simply look for an appropriate topic by searching for a keyword in the linked data, which helps the system find the target URI needed to generate correct AR content. To recognize target landmarks, images of each selected building or landmark are captured from different angles and distances, a procedure that effectively builds a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and the user's location information are used to restrict the retrieval range. Compared with existing research using clusters and GPS information, where the retrieval time is around 70~80 ms, our approach reduces the retrieval time to around 18~20 ms on average, so the total processing time is reduced from 490~540 ms to 438~480 ms. The improvement becomes more pronounced as the database grows, demonstrating that the proposed system is efficient and robust in many cases.
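A minimal sketch of the contextual filtering step described above: landmarks are kept only if they lie within a search radius of the user's GPS position and roughly along the compass heading, before any image matching is attempted. The landmark record layout and the radius and field-of-view thresholds are assumptions for illustration, not values from the paper.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def select_candidates(landmarks, user_lat, user_lon, heading_deg,
                      radius_m=300.0, fov_deg=60.0):
    """Keep only landmarks that are nearby and roughly in the pointed direction.

    `landmarks` is a list of dicts with 'id', 'lat', 'lon' keys (hypothetical
    schema); radius_m and fov_deg are illustrative thresholds. Only the
    surviving candidates would then go through SURF matching.
    """
    selected = []
    for lm in landmarks:
        if haversine_m(user_lat, user_lon, lm["lat"], lm["lon"]) > radius_m:
            continue                               # too far away
        diff = abs(bearing_deg(user_lat, user_lon, lm["lat"], lm["lon"]) - heading_deg)
        diff = min(diff, 360.0 - diff)             # wrap-around angular difference
        if diff <= fov_deg / 2.0:                  # inside the pointed field of view
            selected.append(lm)
    return selected

# Example: user near Seoul City Hall pointing roughly north-east
landmarks = [{"id": "A", "lat": 37.5672, "lon": 126.9790},
             {"id": "B", "lat": 37.5640, "lon": 126.9750}]
print(select_candidates(landmarks, 37.5663, 126.9779, heading_deg=45.0))
```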