• Title/Summary/Keyword: color conversion component


Robust pattern watermarking using wavelet transform and multi-weights (웨이브렛 변환과 다중 가중치를 이용한 강인한 패턴 워터마킹)

  • 김현환;김용민;김두영
    • The Journal of Korean Institute of Communications and Information Sciences, v.25 no.3B, pp.557-564, 2000
  • This paper presents a watermarking algorithm for embedding a visually recognizable pattern (mark, logo, symbol, stamp, or signature) into an image. First, the color image (RGB model) is converted to the YCbCr model, and the Y component is decomposed with a 3-level wavelet transform. The wavelet coefficients are then combined with the pattern watermark, a PN (pseudo-noise) code of the kind used in spread-spectrum communication, and multilevel watermark weights, and the resulting values are inserted in the discrete wavelet domain. A new calculation method performs the wavelet transform with integer values to account for quantization error, and the color conversion uses fixed-point arithmetic so that a hardware implementation is straightforward later on. A multilevel threshold scheme is also introduced to make the watermark robust to common signal distortions and malicious attacks while preserving image quality with respect to the human visual system. Experimental results showed that the proposed algorithm outperformed similar watermarking algorithms and was robust to common signal processing and geometric transforms such as brightness and contrast changes, filtering, scaling, JPEG lossy compression, and geometric deformation.
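
A minimal sketch of the embedding pipeline this abstract describes, assuming OpenCV and PyWavelets with illustrative weight values; it is not the authors' fixed-point, hardware-oriented implementation, and the PN spreading and multilevel weighting details are simplified assumptions.

```python
import cv2
import numpy as np
import pywt

def embed_pattern(rgb_image, pattern_bits, weights=(2.0, 4.0, 8.0), seed=1234):
    # RGB -> YCbCr conversion (floating point here; the paper uses fixed-point arithmetic).
    ycrcb = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2YCrCb).astype(np.float32)
    y = ycrcb[:, :, 0]

    # 3-level discrete wavelet transform of the luminance component.
    coeffs = pywt.wavedec2(y, 'haar', level=3)

    # Spread the binary pattern with a PN sequence and add it to the detail sub-bands
    # with a per-level weight (multilevel weighting).
    rng = np.random.default_rng(seed)
    for level in range(1, 4):
        cH, cV, cD = coeffs[level]
        pn = rng.choice([-1.0, 1.0], size=cH.shape)
        bits = np.resize(pattern_bits, cH.shape) * 2 - 1   # {0,1} -> {-1,+1}
        coeffs[level] = (cH + weights[level - 1] * pn * bits, cV, cD)

    # Reconstruct the marked Y channel and convert back to RGB.
    y_marked = pywt.waverec2(coeffs, 'haar')[:y.shape[0], :y.shape[1]]
    ycrcb[:, :, 0] = y_marked
    return cv2.cvtColor(np.clip(ycrcb, 0, 255).astype(np.uint8), cv2.COLOR_YCrCb2RGB)
```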


Implementation of the System Converting Image into Music Signals based on Intentional Synesthesia (의도적인 공감각 기반 영상-음악 변환 시스템 구현)

  • Bae, Myung-Jin;Kim, Sung-Ill
    • Journal of IKEEE, v.24 no.1, pp.254-259, 2020
  • This paper describes the implementation of a system that converts images into music based on intentional synesthesia. Color, texture, and shape features of the input image were converted into the melody, harmony, and rhythm of the music, respectively. Melody notes were selected probabilistically according to the color histogram. Image texture was mapped to harmony and the minor key using seven characteristics of the GLCM (gray-level co-occurrence matrix), a statistical texture feature extraction method. Finally, shape information was extracted from the edge image: line components were detected with the Hough transform, and the rhythm was selected according to the distribution of line angles.
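
A rough sketch of the three feature paths named in this abstract, assuming OpenCV and scikit-image (graycomatrix/graycoprops require scikit-image 0.19 or later); the mappings from these features to actual notes, chords, and rhythms are not reproduced here and would be additional, paper-specific tables.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def image_to_music_features(bgr_image):
    # 1) Color: a coarse hue histogram -> probability distribution for drawing melody notes.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue_hist = cv2.calcHist([hsv], [0], None, [12], [0, 180]).ravel()
    melody_probs = hue_hist / hue_hist.sum()

    # 2) Texture: GLCM statistics (a subset of the seven used in the paper) -> harmony choice.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = {p: float(graycoprops(glcm, p)[0, 0])
               for p in ('contrast', 'homogeneity', 'energy', 'correlation')}

    # 3) Shape: edge image + Hough transform -> distribution of line angles -> rhythm choice.
    edges = cv2.Canny(gray, 100, 200)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    angles = [] if lines is None else [float(line[0][1]) for line in lines]

    return melody_probs, texture, angles
```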

Color Image Splicing Detection using Benford's Law and color Difference (밴포드 법칙과 색차를 이용한 컬러 영상 접합 검출)

  • Moon, Sang-Hwan;Han, Jong-Goo;Moon, Yong-Ho;Eom, Il-Kyu
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.5, pp.160-167, 2014
  • This paper presents a spliced color image detection method that uses Benford's Law and color difference. For a suspicious image, the discrete wavelet transform and the discrete cosine transform are performed after color conversion. The differences between the ideal Benford distribution and the empirical Benford distribution of the suspicious image are extracted as features, and the differences between the Benford distributions of the color components are used as additional features. The method achieves superior splicing detection performance with only 13 features. After training an SVM classifier on the extracted feature vectors, we determine whether an image contains a splicing forgery. Experimental results show that the proposed method outperforms existing methods in splicing detection accuracy while using a smaller number of features.
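
A sketch of the feature idea described in this abstract, under stated assumptions: the paper's exact 13-feature definition is not reproduced; instead, per-channel first-digit statistics of one-level DWT detail bands and a DCT are compared with the ideal Benford distribution.

```python
import cv2
import numpy as np
import pywt

# Ideal Benford distribution of the first digit d = 1..9: P(d) = log10(1 + 1/d).
BENFORD = np.log10(1 + 1 / np.arange(1, 10))

def first_digit_hist(coeffs):
    mags = np.abs(coeffs).ravel()
    mags = mags[mags >= 1]                                  # keep magnitudes with a leading digit
    first = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
    hist = np.bincount(first, minlength=10)[1:10].astype(float)
    return hist / max(hist.sum(), 1.0)

def benford_features(bgr_image):
    feats = []
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    for c in range(3):
        channel = ycrcb[:, :, c]
        _, (cH, cV, cD) = pywt.dwt2(channel, 'haar')        # one-level DWT detail bands
        h, w = channel.shape
        dct = cv2.dct(np.ascontiguousarray(channel[:h - h % 2, :w - w % 2]))  # cv2.dct needs even sizes
        for coeffs in (cH, cV, cD, dct):
            # Deviation of the empirical first-digit distribution from the ideal one.
            feats.append(np.abs(first_digit_hist(coeffs) - BENFORD).sum())
    return np.array(feats)
```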

New Prefiltering Methods based on a Histogram Matching to Compensate Luminance and Chrominance Mismatch for Multi-view Video (다시점 비디오의 휘도 및 색차 성분 불일치 보상을 위한 히스토그램 매칭 기반의 전처리 기법)

  • Lee, Dong-Seok;Yoo, Ji-Sang
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.6, pp.127-136, 2010
  • In multi-view video, illumination mismatch between neighboring views can occur because of the different locations of the cameras, imperfect camera calibration, and so on. Such discrepancy degrades the performance of multi-view video coding, because inter-view prediction, which references pictures obtained from neighboring views at the same time instant, becomes less accurate. In this paper, we propose an efficient histogram-based prefiltering algorithm that compensates mismatches between the luminance and chrominance components of multi-view video to improve its coding efficiency. To compensate illumination variation efficiently, all camera frames of a multi-view sequence are adjusted to a predefined reference through histogram matching. A cosited filter, which is used for chroma subsampling in many video coding schemes, is applied to each color component prior to histogram matching to improve its performance. The histogram matching is carried out in the RGB color space after conversion from the YCbCr color space, using an effective color conversion technique that takes the edge direction and the pixel value range of the image into account. Experimental results show that the compression ratio of the proposed algorithm is improved compared with other methods.
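
A minimal sketch of the histogram-matching step this abstract describes, assuming 8-bit YCbCr input frames and OpenCV; the cosited chroma filter and the edge- and range-aware color conversion mentioned in the abstract are omitted.

```python
import cv2
import numpy as np

def match_histogram(channel, ref_channel):
    # Map pixel values so that the channel's cumulative histogram follows the reference's.
    src_hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(ref_channel, bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / channel.size
    ref_cdf = np.cumsum(ref_hist) / ref_channel.size
    lut = np.interp(src_cdf, ref_cdf, np.arange(256)).astype(np.uint8)
    return lut[channel]

def compensate_view(ycrcb_frame, ycrcb_reference):
    # Histogram matching is done per channel in RGB, then the frame is converted back.
    rgb = cv2.cvtColor(ycrcb_frame, cv2.COLOR_YCrCb2RGB)
    ref = cv2.cvtColor(ycrcb_reference, cv2.COLOR_YCrCb2RGB)
    matched = np.dstack([match_histogram(rgb[:, :, c], ref[:, :, c]) for c in range(3)])
    return cv2.cvtColor(matched, cv2.COLOR_RGB2YCrCb)
```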

Optimal combination of 3-component photoinitiation system to increase the degree of conversion of resin monomers (레진 모노머의 중합전환률 증가를 위한 3종 중합개시 시스템의 적정 비율)

  • Kim, Chang-Gyu;Moon, Ho-Jin;Shin, Dong-Hoon
    • Restorative Dentistry and Endodontics, v.36 no.4, pp.313-323, 2011
  • Objectives: This study investigated the optimal combination of a 3-component photoinitiation system, consisting of CQ, p-octyloxy-phenyl-phenyl iodonium hexafluoroantimonate (OPPI), and 2-dimethylaminoethyl methacrylate (DMAEMA), to increase the degree of conversion of resin monomers, and analyzed the effect of the ratio of the photoinitiators to the co-initiator. Materials and Methods: Each photoinitiator (CQ and OPPI) and the co-initiator (DMAEMA) were mixed at three levels: 0.2 wt% (low concentration, L), 1.0 wt% (medium concentration, M), and 2.0 wt% (high concentration, H). A total of nine groups selected using the Taguchi method were tested according to the following proportions of components in the photoinitiation system: LLL, LMM, LHH, MLM, MMH, MHL, HLH, HML, and HHM. Each monomer was polymerized using a quartz-tungsten-halogen curing unit (Demetron 400, USA) for 5, 20, 40, 60, and 300 seconds, and the degree of conversion (DC) was determined at each exposure time using FTIR. Results: Significant differences in DC values were found among the groups. The MMH and HHM groups exhibited greater initial DC than the others. No significant difference was found with respect to the ratio of the photoinitiators (CQ, OPPI) to the co-initiator (DMAEMA). The concentration of CQ did not affect the DC values, but that of OPPI did strongly. Conclusions: The MMH and HHM groups appear to be the best choices for obtaining an increased DC. The MMH group is indicated for bright, translucent shades, and the HHM group is good for dark, opaque-colored resin.

Terrain Cover Classification Technique Based on Support Vector Machine (Support Vector Machine 기반 지형분류 기법)

  • Sung, Gi-Yeul;Park, Joon-Sung;Lyou, Joon
    • Journal of the Institute of Electronics Engineers of Korea SC, v.45 no.6, pp.55-59, 2008
  • For effective mobility control of a UGV (unmanned ground vehicle), terrain cover classification is as important a component as terrain geometry recognition and obstacle detection. The vision-based terrain cover classification algorithm consists of pre-processing, feature extraction, classification, and post-processing. In this paper, we present a method to classify terrain covers based on color and texture information. Color space conversion is performed for pre-processing, the wavelet transform is applied for feature extraction, and an SVM (support vector machine) is used as the classifier. Experimental results show that the proposed algorithm has a promising classification performance.
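
A sketch of the pipeline named in this abstract (color-space conversion, wavelet feature extraction, SVM classification), assuming OpenCV, PyWavelets, and scikit-learn; the actual color space, feature definition, and kernel used in the paper are not specified here and are assumptions.

```python
import cv2
import numpy as np
import pywt
from sklearn.svm import SVC

def patch_features(bgr_patch):
    # Pre-processing: color-space conversion (HSV assumed here) to separate color from intensity.
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    color_feat = hsv.reshape(-1, 3).mean(axis=0)

    # Feature extraction: mean absolute energy of the wavelet detail sub-bands of the V channel.
    _, (cH, cV, cD) = pywt.dwt2(hsv[:, :, 2].astype(np.float32), 'haar')
    texture_feat = [np.mean(np.abs(band)) for band in (cH, cV, cD)]
    return np.concatenate([color_feat, texture_feat])

def train_terrain_classifier(patches, labels):
    # Classification: an SVM trained on labeled image patches (one label per terrain class).
    X = np.stack([patch_features(p) for p in patches])
    return SVC(kernel='rbf').fit(X, labels)
```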

A Study on the Improvement of Skin Loss Area in Skin Color Extraction for Face Detection (얼굴 검출을 위한 피부색 추출 과정에서 피부색 손실 영역 개선에 관한 연구)

  • Kim, Dong In;Lee, Gang Seong;Han, Kun Hee;Lee, Sang Hun
    • Journal of the Korea Convergence Society, v.10 no.5, pp.1-8, 2019
  • In this paper, we propose an improved facial skin color extraction method to solve the problem that parts of the facial surface are lost, and skin color extraction fails, because of shadow or illumination during the skin color extraction process. In the conventional HSV method, when the facial surface is brightly illuminated, the skin color component is lost during skin color extraction, so a loss area appears on the face surface. To solve this problem, after extracting the skin color we identify, among the lost pixels, those whose H channel value lies within the skin color range of the HSV color space, and combine the coordinates of the lost part with the coordinates of the original image so that the lost region is minimized. In the face detection step, the face was detected by applying the LBP Cascade Classifier, which represents texture feature information, to the extracted skin color image. Experimental results show that the proposed method improves the detection rate and accuracy by 5.8% and 9.6%, respectively, compared with conventional RGB and HSV skin color extraction followed by face detection with the LBP cascade classifier.
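
A minimal sketch of the idea in this abstract, with assumed HSV thresholds (the paper's exact ranges are not given here): a baseline HSV skin mask, recovery of bright pixels whose hue still falls in the skin range, and LBP cascade face detection on the result. The cascade file path is the standard OpenCV LBP frontal-face cascade, assumed to be available locally.

```python
import cv2

def extract_skin_and_detect(bgr_image, cascade_path='lbpcascade_frontalface.xml'):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Baseline skin mask (hue, saturation, value ranges are illustrative assumptions).
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

    # Recover "lost" pixels: bright, low-saturation pixels whose hue still lies in the skin range.
    lost = cv2.inRange(hsv, (0, 10, 200), (25, 40, 255))
    mask = cv2.bitwise_or(skin, lost)
    skin_image = cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

    # Face detection with an LBP cascade on the skin-color image.
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(skin_image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return skin_image, faces
```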

Detection of Red Tide Distribution in the Southern Coast of the Korea Waters using Landsat Image and Euclidian Distance (Landsat 영상과 유클리디언 거리측정 방법을 이용한 한반도 남부해역 적조영역 검출)

  • Sur, Hyung-Soo;Kim, Seok-Gyu;Lee, Chil-Woo
    • Journal of the Korean Association of Geographic Information Studies, v.10 no.4, pp.1-13, 2007
  • We generate an image that accumulates the first two principal components of the GLCM (gray-level co-occurrence matrix) texture features computed from the original picture, and these images are then preprocessed for corner detection and region detection. Experimental results show that the accumulated two-principal-component images retain most of the information of the six texture features, with an eigenvalue ratio of 94.6%. Compared with the red tide region obtained from sea color and with the red tide region of images that keep all principal components, this representation produced the best results. In addition, we construct a Euclidean space using Euclidean distance measurements between red tide regions and clear sea, and an arbitrary sea region is identified as red tide or clear sea by its Euclidean distance and spatial distribution.
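
A sketch of the comparison step described in this abstract, under assumed details: six GLCM texture properties per image block are reduced to two principal components, and each block is labeled by its Euclidean distance to reference vectors for red tide and clear sea. The block partitioning, property set, and reference samples are illustrative, not the paper's exact choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA

PROPS = ('contrast', 'dissimilarity', 'homogeneity', 'energy', 'correlation', 'ASM')

def glcm_features(gray_block):
    # Six GLCM texture properties of an 8-bit grayscale block.
    glcm = graycomatrix(gray_block, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0] for p in PROPS])

def classify_blocks(blocks, red_tide_refs, clear_sea_refs):
    feats = np.stack([glcm_features(b) for b in blocks])
    ref_feats = np.stack([glcm_features(b) for b in list(red_tide_refs) + list(clear_sea_refs)])

    # Keep the first two principal components of the texture features.
    pca = PCA(n_components=2).fit(ref_feats)
    proj = pca.transform(feats)
    red_center = pca.transform(np.stack([glcm_features(b) for b in red_tide_refs])).mean(axis=0)
    sea_center = pca.transform(np.stack([glcm_features(b) for b in clear_sea_refs])).mean(axis=0)

    # Euclidean distance decides whether each block is closer to red tide or to clear sea.
    return np.linalg.norm(proj - red_center, axis=1) < np.linalg.norm(proj - sea_center, axis=1)
```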


Effects of Light Sources in Poultry House on Growth Performance, Carcass Yield, Meat Quality and Blood Components of Finishing Broilers (계사 내 광원이 육계 후기의 생산성, 도체수율, 육질 특성 및 혈액성분에 미치는 영향)

  • Hong, Eui-Chul;Kang, Bo-Seok;Kang, Hwan-Ku;Jeon, Jin-Joo;You, Are-Sun;Kim, Hyun-Soo;Son, Jiseon;Kim, Chan-Ho;Kim, Hee-Jin
    • Korean Journal of Poultry Science, v.47 no.3, pp.159-167, 2020
  • This study investigated the effect of different light sources in the poultry house on the performance, carcass yield, meat quality, and blood composition of finishing broilers. Two hundred and forty male broilers (1-day-old, 42.2±0.1 g) were divided into three groups and subjected to different light source treatments (incandescent, LED, and fluorescent lamps) from 3 weeks of age (four replications/treatment, 20 birds/replication). After rearing for 6 weeks, the carcass yield and meat quality of broilers with similar body weight (BW; 3.4±0.07 kg) were investigated, and blood components were analyzed. Corn-soybean meal-based feed was provided as starter (CP 22.5%, ME 3,020 kcal/kg), early (CP 18.5%, ME 3,050 kcal/kg), and finishing (CP 18%, ME 3,100 kcal/kg) diets. BW, BW gain, feed intake, and feed conversion ratio did not differ significantly among treatments, and there was no significant difference in live weight or carcass yield. There was no significant difference in meat color, shear force, or water holding capacity; however, cooking loss was highest in the LED treatment, at 17.2% (P<0.05). Among the blood components, only glucose (a blood biochemistry component) differed among treatments: it was 234.5 mg/dL, 256.9 mg/dL, and 250.1 mg/dL in the three treatments, respectively, with a significant difference between the incandescent and LED treatments (P<0.05). These results can be used as basic data for investigating the effect of lighting on broiler production.