• Title/Summary/Keyword: color combination image


Design expression method of the sexual image of evening dresses shown in the haute couture collection (오뜨꾸뛰르 컬렉션에 나타난 섹슈얼 이미지 이브닝 드레스 디자인 표현방법)

  • Peng, Xiaochun;Eum, Jungsun;Yoo, Youngsun
    • The Research Journal of the Costume Culture
    • /
    • v.24 no.5
    • /
    • pp.642-652
    • /
    • 2016
  • The purpose of this study is to understand the concept of a sexual image and verify the methods of its design expression through case studies of sexual-image evening dresses shown in the Haute Couture Collections over the last 10 years (2005~2014). The results of the analysis are as follows: First, "expression by seeing through to the natural body" expressed a natural sexual image that combined the fashion of previous eras with the beauty of the natural body by using see-through materials and classic or ethnic images. Second, "expression of a sexual-image look that emphasizes romantic detail" illustrated a romantic sexual image that emphasizes feminine sensitivity by mixing various ornamental elements such as see-through material and the symbolism of underwear imagery. Third, "creation of a nude look using skin color" expressed the sexual image of an evening dress by inducing erotic association through a combination of opaque skin-colored material and design shapes intended for body exposure. Fourth, "combination of sexual symbols and heterogeneous elements" expressed decadent, avant-garde, and futuristic sexual images by using women's underwear and sexual symbols as design motifs mixed with mismatched elements. Fifth, "use of a fantastic black image" expressed an exclusive and refined sexual image as well as a decadent and primitive sexual image through the fantastic image of the color black. The results of this study are expected to be used in the design process of the evening-dress industry, which aims for quality improvement.

The Image Evaluation for Tone Variation in Same Color of Clothing and Lipstick of the Clothing Wearers (의복과 립스틱의 동일색상 톤 변화에 따른 의복착용자의 이미지 평가)

  • Jeong, Su-Jin
    • Journal of the Korea Fashion and Costume Design Association
    • /
    • v.9 no.2
    • /
    • pp.15-30
    • /
    • 2007
  • The purpose of this study is to investigate the effect of makeup, clothing tone, and clothing style on a wearer's image when lipstick and clothing are coordinated in the same color. The experimental materials developed for this study were a set of stimuli and response scales (7-point semantic differential scales). The stimuli were 64 color pictures manipulated by computer simulation. The experiment used a 2 × 2 × 4 × 4 factorial design. The stimuli combined eyeshadow color (brown), clothing style (a formal jacket/skirt style and a casual cardigan/pants style), lipstick and clothing color (red and orange), lipstick tone (vivid, light, dull, and dark), and clothing tone (vivid, light, dull, and dark). The subjects were 384 female undergraduates living in Gyeongsangnam-do. The investigation was carried out in a lecture hall between 10 a.m. and 3 p.m. in May 2006. The data were analyzed using the SPSS program; factor analysis, 4-way ANOVA, t-tests, and Duncan tests were used as analysis methods (a minimal sketch of the factorial analysis follows this entry). The image factors arising from variations in clothing style, clothing color, and makeup color comprise four dimensions (visibility, attractiveness, tenderness, and stability). In the visibility dimension, the image was perceived as glowing and luxurious with vivid-tone clothing, regardless of lipstick tone and color. Depending on the variation of clothing style, clothing color and tone, and makeup color (eyeshadow color plus lipstick color and tone), the images of a clothing wearer were expressed in diverse ways, appeared differently across image dimensions, and could be produced as different images. The analysis of images according to the combination of makeup and clothing color, tone, and style thus provides basic material for image consulting or color coordination.

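The abstract above reports a 2 × 2 × 4 × 4 factorial design analyzed with a 4-way ANOVA in SPSS. As a rough illustration only, the following Python sketch runs an analogous factorial ANOVA with statsmodels on synthetic ratings; the column names, cell sizes, and data are hypothetical, not taken from the study.

```python
# Hypothetical sketch of a 2x2x4x4 factorial ANOVA analogous to the study's
# SPSS analysis; the data below are randomly generated, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
styles = ["formal", "casual"]                  # 2 clothing styles
colors = ["red", "orange"]                     # 2 colors (lipstick = clothing)
tones = ["vivid", "light", "dull", "dark"]     # 4 tones

rows = []
for style in styles:
    for color in colors:
        for c_tone in tones:                   # clothing tone
            for l_tone in tones:               # lipstick tone
                for _ in range(6):             # 6 hypothetical raters per cell
                    rows.append({
                        "style": style, "color": color,
                        "clothing_tone": c_tone, "lipstick_tone": l_tone,
                        "rating": rng.normal(4.0, 1.0),  # 7-point scale stand-in
                    })
df = pd.DataFrame(rows)

# Four-way ANOVA with all interactions, mirroring the factorial design.
model = smf.ols(
    "rating ~ C(style) * C(color) * C(clothing_tone) * C(lipstick_tone)",
    data=df,
).fit()
print(anova_lm(model, typ=2))
```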

Analysis of wedding servicescape color combination image -focusing on the comparison between hotel banquet hall, general wedding hall and sanctuary- (결혼 예식장 종류에 따른 서비스스케이프 배색 이미지 분석 -호텔 예식 연회, 일반예식장, 종교 결혼식장과의 비교를 중심으로-)

  • Kim, Kyung-Hee;Jo, Mi-Na;Yang, Il-Sun
    • Science of Emotion and Sensibility
    • /
    • v.14 no.1
    • /
    • pp.73-82
    • /
    • 2011
  • This study aimed to analyze wedding servicescape color combination images, focusing on a comparison between hotel banquet halls, general wedding halls, and sanctuaries. The survey was conducted among 400 customers (aged 20~39) living in Seoul and Kyunggi Province, and 315 responses were analyzed. The statistical analyses were performed using SPSS/WIN 17.0; reliability analysis, factor analysis, t-tests, and ANOVA were used. Based on the factor analysis, the color images of wedding halls were classified into three factors: delicateness, nobleness, and vivaciousness. Cronbach's alpha was calculated for the reliability of the survey instrument (a minimal sketch of this reliability check follows this entry). Overall, wedding hall color images scored 3.60 for 'clear', 3.50 for 'mild', and 3.38 for 'delicate'. Comparing wedding hall types, 'vivaciousness' was 3.00 at general wedding halls, 'nobleness' was 3.64 at hotel banquet halls, and 'delicateness' was 3.60 at hotel banquet halls. Demographic differences in wedding hall color image were found by sex, marital status, and monthly income, but not by age, education, or occupation. The results of this study will serve as a basis for wedding hall color marketing research.

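The entry above reports Cronbach's alpha for scale reliability before comparing the factors. A minimal sketch of that reliability check, using the standard formula on hypothetical Likert responses rather than the study's survey data:

```python
# Minimal Cronbach's alpha sketch on hypothetical Likert responses;
# the data are random stand-ins, not the wedding-servicescape survey data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# 315 hypothetical respondents rating 5 "delicateness" items on a 5-point scale
responses = rng.integers(1, 6, size=(315, 5)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```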

GAN-based Color Palette Extraction System by Chroma Fine-tuning with Reinforcement Learning

  • Kim, Sanghyuk;Kang, Suk-Ju
    • Journal of Semiconductor Engineering
    • /
    • v.2 no.1
    • /
    • pp.125-129
    • /
    • 2021
  • As interest in deep learning grows, techniques for controlling the color of images in the image processing field are evolving as well. However, there is no clear standard for color, and it is not easy to find a way to represent only the color itself, as a color palette does. In this paper, we propose a novel color palette extraction system based on chroma fine-tuning with reinforcement learning. It helps recognize the color combination that represents an input image. First, we use RGBY images to create feature maps by transferring a backbone network with well-trained model weights verified on super-resolution convolutional neural networks. Second, the feature maps are fed into three fully connected layers for color-palette generation with a generative adversarial network (GAN). Third, we use a reinforcement learning method that adjusts only the chroma information of the GAN output by slightly moving the Y component of each pixel's YCbCr value up and down. The proposed method outperforms existing color palette extraction methods, achieving an accuracy of 0.9140.
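
The pipeline above ends with a reinforcement-learning step that nudges each palette color in YCbCr space. As a simplified stand-in (not the paper's RL agent), the sketch below converts an RGB palette to YCbCr with the standard BT.601 full-range equations, applies a small up/down adjustment to the Y component, and converts back; the palette values, step size, and action vector are made up for illustration.

```python
# Simplified stand-in for the paper's fine-tuning step: nudge each palette
# color's Y component in YCbCr space and convert back to RGB.
# BT.601 full-range conversion; palette and step size are illustrative only.
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc: np.ndarray) -> np.ndarray:
    y, cb, cr = ycc[..., 0], ycc[..., 1] - 128.0, ycc[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def nudge_palette(palette_rgb: np.ndarray, actions: np.ndarray, step: float = 4.0) -> np.ndarray:
    """actions: +1 / -1 / 0 per palette color, e.g. chosen by an RL policy."""
    ycc = rgb_to_ycbcr(palette_rgb.astype(float))
    ycc[..., 0] = np.clip(ycc[..., 0] + step * actions, 0, 255)  # move Y up/down
    return ycbcr_to_rgb(ycc)

# Hypothetical 5-color palette (e.g. a GAN output) and one set of actions.
palette = np.array([[200, 40, 40], [240, 200, 60], [30, 120, 200],
                    [20, 160, 90], [230, 230, 230]], dtype=float)
print(nudge_palette(palette, np.array([+1, -1, 0, +1, -1])))
```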

Content-based Image Retrieval using Spatial-Color and Gabor Texture on A Mobile Device (모바일 디바이스상에서 공간-칼라와 가버 질감을 이용한 내용-기반 영상 검색)

  • Lee, Yong-Hwan;Lee, June-Hwan;Cho, Han-Jin;Kwon, Oh-Kin;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology
    • /
    • v.13 no.4
    • /
    • pp.91-96
    • /
    • 2014
  • Mobile image retrieval is one of the most exciting and fastest growing research fields in the area of multimedia technology. As the amount of digital content continues to grow, users experience increasing difficulty finding specific images in their image libraries. This paper proposes a new, efficient and effective mobile image retrieval method that applies a weighted combination of color and texture, utilizing spatial color information and second-order statistics. The mobile image search system runs in real time on an iPhone and can easily be used to find a specific image. To evaluate the performance of the new method, we assessed the iPhone simulation's performance in terms of average precision and recall using several image databases and compared the results with those obtained using existing methods. Experimental trials revealed that the proposed descriptor exhibited a significant improvement of over 13% in retrieval effectiveness compared to the best of the other descriptors.
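
The abstract describes ranking by a weighted combination of a spatial-color feature and a Gabor texture feature. Below is a minimal sketch of that fusion step, assuming the per-feature descriptors have already been extracted as vectors and using L1 distances with a made-up weight; none of this is the paper's exact descriptor.

```python
# Minimal sketch of weighted color + texture distance fusion for ranking;
# the feature vectors and the weight are illustrative assumptions, not the
# paper's actual spatial-color / Gabor descriptors.
import numpy as np

def l1_distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.abs(a - b).sum())

def combined_distance(q_color, q_texture, db_color, db_texture, w_color=0.6):
    """Weighted sum of normalized color and texture distances (w_texture = 1 - w_color)."""
    d_color = np.array([l1_distance(q_color, c) for c in db_color])
    d_texture = np.array([l1_distance(q_texture, t) for t in db_texture])
    # normalize each distance set to [0, 1] so the weights are comparable
    d_color /= d_color.max() + 1e-12
    d_texture /= d_texture.max() + 1e-12
    return w_color * d_color + (1.0 - w_color) * d_texture

rng = np.random.default_rng(0)
db_color = rng.random((100, 64))     # 100 database images, 64-bin color feature
db_texture = rng.random((100, 48))   # 48-dim texture feature
query_color, query_texture = rng.random(64), rng.random(48)

dist = combined_distance(query_color, query_texture, db_color, db_texture)
print("top-5 matches:", np.argsort(dist)[:5])
```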

Image Retrieval Using Spacial Color Correlation and Local Texture Characteristics (칼라의 공간적 상관관계 및 국부 질감 특성을 이용한 영상검색)

  • Sung, Joong-Ki;Chun, Young-Deok;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.5 s.305
    • /
    • pp.103-114
    • /
    • 2005
  • This paper presents a content-based image retrieval (CBIR) method using a combination of color and texture features. As the color feature, a color autocorrelogram is chosen, extracted from the hue and saturation components of a color image. As texture features, BDIP (block difference of inverse probabilities) and BVLC (block variation of local correlation coefficients) are chosen, extracted from the value component. During feature extraction, the color autocorrelogram and the BVLC are simplified in consideration of their computational complexity. After feature extraction, the vector components of these features are efficiently quantized in consideration of their storage space. Experiments on the Corel and VisTex DBs show that the proposed retrieval method yields a 9.5% maximum precision gain over the method using only the color autocorrelogram and a 4.0% gain over BDIP-BVLC. The proposed method also yields 12.6%, 14.6%, and 27.9% maximum precision gains over methods using wavelet moments, CSD, and the color histogram, respectively.
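
The color autocorrelogram used above records, for each quantized color and distance, the probability that a pixel of that color has a same-colored pixel at that distance. The sketch below computes a basic version on a quantized channel using axis-aligned neighbor offsets; the quantization and distance set are assumptions, and the paper's simplifications are not reproduced.

```python
# Basic color autocorrelogram sketch: for each quantized color c and distance d,
# estimate Pr[ neighbor at distance d has color c | pixel has color c ].
# The quantization level and the distance set are illustrative choices only.
import numpy as np

def autocorrelogram(labels: np.ndarray, n_colors: int, distances=(1, 3, 5, 7)):
    h, w = labels.shape
    result = np.zeros((n_colors, len(distances)))
    ys, xs = np.mgrid[0:h, 0:w]
    for di, d in enumerate(distances):
        same = np.zeros(n_colors)
        total = np.zeros(n_colors)
        # axis-aligned offsets at distance d (a common simplification)
        for dy, dx in ((0, d), (0, -d), (d, 0), (-d, 0)):
            ny, nx = ys + dy, xs + dx
            valid = (ny >= 0) & (ny < h) & (nx >= 0) & (nx < w)
            src = labels[valid]
            dst = labels[ny[valid], nx[valid]]
            for c in range(n_colors):
                mask = src == c
                total[c] += mask.sum()
                same[c] += (dst[mask] == c).sum()
        result[:, di] = same / np.maximum(total, 1)
    return result

rng = np.random.default_rng(0)
labels = rng.integers(0, 8, size=(64, 64))        # stand-in for 8-level quantized hue
print(autocorrelogram(labels, n_colors=8).shape)  # (8, 4)
```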

A Study on the Effects of Stimulus Color, Background Color, and Presentation Distance on Human Depth Perception in Stereoscopic Images (입체영상에서 자극의 색상, 배경색, 제시거리가 인간의 심도지각에 미치는 영향에 관한 연구)

  • 박경수;이안재
    • Proceedings of the ESK Conference
    • /
    • 1995.04a
    • /
    • pp.181-186
    • /
    • 1995
  • This study investigated the effects of several factors - stimulus color, background color, and predicted depth - that affect depth perception in stereoscopic displays. Two experiments were conducted: in the first, the subjects were asked to indicate the depth perceived from a presented image (rectangle) using a matching mark, and in the second, the subjects were asked to adjust one image (a controllable rectangle) to have the same perceived depth as another image (a fixed rectangle) using the keyboard. The depth perceived under various combinations of levels of these factors was compared with the depth predicted by the geometry of stereopsis. Through the two experiments, we found that stimulus color, predicted depth, and the interaction between stimulus color and background color significantly affected perceived depth, and that red was perceived as closest to the observer, followed by yellow, green, and then blue.
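
The comparison above is against the depth predicted by the geometry of stereopsis. The paper's exact formulation is not given in this entry; as an assumption, the textbook relation between on-screen disparity and predicted depth for a viewer at distance D with interocular separation e is:

```latex
% Standard stereoscopic-geometry relation (an assumption; not quoted from the paper).
% D: viewing distance to the screen, e: interocular distance, d: on-screen disparity.
% Predicted depth behind the screen (uncrossed disparity):
\Delta z_{\text{behind}} = \frac{D\, d}{e - d}
% Predicted depth in front of the screen (crossed disparity):
\Delta z_{\text{front}} = \frac{D\, d}{e + d}
```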

Trends in image processing techniques applied to corrosion detection and analysis (부식 검출과 분석에 적용한 영상 처리 기술 동향)

  • Beomsoo Kim;Jaesung Kwon;Jeonghyeon Yang
    • Journal of the Korean institute of surface engineering
    • /
    • v.56 no.6
    • /
    • pp.353-370
    • /
    • 2023
  • Corrosion detection and analysis is a very important topic for reducing costs and preventing disasters. Recently, image processing techniques have been widely applied to corrosion identification and analysis. In this work, we briefly introduce traditional image processing techniques and machine learning algorithms applied to detect or analyze corrosion in various fields. Machine learning, especially CNN-based algorithms, has recently been widely applied to corrosion detection, and research on applying machine learning to region segmentation is also very active. Corrosion is reddish-brown in color and has a very irregular shape, so a combination of techniques that consider color and texture, various mathematical techniques, and machine learning algorithms is used to detect and analyze it. We present examples of the application of traditional image processing techniques and machine learning to corrosion detection and analysis.
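
The survey above notes that corrosion is typically reddish-brown, so color-based segmentation is one of the traditional techniques it covers. A minimal sketch of that idea using OpenCV HSV thresholding follows; the threshold ranges, file names, and morphology kernel are illustrative assumptions, not values from the surveyed papers.

```python
# Minimal color-based corrosion-candidate segmentation sketch using HSV
# thresholding; the hue/saturation/value ranges and file names are
# illustrative assumptions, not values taken from the surveyed literature.
import cv2
import numpy as np

img = cv2.imread("steel_surface.jpg")            # hypothetical input image (BGR)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Reddish-brown hues sit near both ends of OpenCV's 0-179 hue range.
mask_low  = cv2.inRange(hsv, (0,   60, 40), (25, 255, 220))
mask_high = cv2.inRange(hsv, (160, 60, 40), (179, 255, 220))
mask = cv2.bitwise_or(mask_low, mask_high)

# Clean up speckle with a small morphological opening, then closing.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

corroded_ratio = mask.mean() / 255.0             # fraction of pixels flagged
print(f"corrosion-candidate area: {corroded_ratio:.1%}")
cv2.imwrite("corrosion_mask.png", mask)
```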

e-Catalogue Image Retrieval Using Vectorial Combination of Color Edge (컬러에지의 벡터적 결합을 이용한 e-카탈로그 영상 검색)

  • Hwang, Yei-Seon;Park, Sang-Gun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.579-586
    • /
    • 2002
  • The edge descriptor proposed by the MPEG-7 standard is a representative approach to content-based image retrieval using edge information. In that descriptor, the edge information is an edge histogram derived from a gray-level image. This paper proposes a new method for extracting color edge information from color images and a new approach to content-based image retrieval based on the color edge histogram. The proposed method and technique are applied to e-catalogue image retrieval. For evaluation, image retrieval results using the proposed approach are compared with those using the MPEG-7 edge descriptor, and the statistics show the efficiency of the proposed method. The proposed color edge model is built by combining the R, G, B channel components vectorially and characterizing the vector norm of the edge map. A color edge histogram based on the direction of the color edge model is then used for content-based image retrieval.
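
The abstract describes a color edge model built by combining the R, G, B channel gradients vectorially, taking the vector norm, and histogramming edge directions. The sketch below is a simplified reading of that idea using per-channel Sobel gradients; the direction taken from the summed channel gradients, the magnitude threshold, and the bin count are assumptions, so this is not the paper's exact model.

```python
# Simplified color-edge sketch: per-channel Sobel gradients are stacked into
# one vector per pixel, the vector norm gives edge strength, and a direction
# histogram is built from the summed channel gradients. The summation used for
# direction and the 8-bin quantization are illustrative simplifications.
import numpy as np
from scipy import ndimage

def color_edge_histogram(rgb: np.ndarray, n_bins: int = 8, mag_thresh: float = 30.0):
    rgb = rgb.astype(float)
    gx = np.stack([ndimage.sobel(rgb[..., c], axis=1) for c in range(3)], axis=-1)
    gy = np.stack([ndimage.sobel(rgb[..., c], axis=0) for c in range(3)], axis=-1)

    # vector norm over all six gradient components -> color edge magnitude
    magnitude = np.sqrt((gx ** 2).sum(axis=-1) + (gy ** 2).sum(axis=-1))

    # a single direction per pixel from the summed channel gradients
    direction = np.arctan2(gy.sum(axis=-1), gx.sum(axis=-1))   # in [-pi, pi]

    strong = magnitude > mag_thresh
    bins = ((direction[strong] + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    return hist / max(hist.sum(), 1.0)            # normalized direction histogram

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128, 3))  # stand-in for a catalogue image
print(color_edge_histogram(image))
```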

A Watermarking Technique for User Authentication Based on a Combination of Face Image and Device Identity in a Mobile Ecosystem

  • Al-Jarba, Fatimah;Al-Khathami, Mohammed
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.9
    • /
    • pp.303-316
    • /
    • 2021
  • Digital content protection has recently become an important requirement in biometrics-based authentication systems due to the challenges involved in designing a feasible and effective user authentication method. Biometric approaches are more effective than traditional methods, yet they cannot be considered entirely reliable on their own. This study develops a reliable and trustworthy method for verifying that the owner of the biometric traits is the actual user and not an impostor. A watermarking-based approach is developed using a combination of a color face image of the user and a mobile equipment identifier (MEID). To employ watermark techniques that cannot be easily removed or destroyed, a blind image watermarking scheme based on the fast discrete curvelet transform (FDCuT) and the discrete cosine transform (DCT) is proposed. FDCuT is applied to the color face image to obtain the frequency coefficients of the image's curvelet decomposition, and DCT is then applied to the high-frequency curvelet coefficients to obtain further frequency coefficients. The mid-band frequency coefficients are modified using two uncorrelated noise sequences with the MEID watermark bits to obtain the watermarked image. An analysis is carried out to verify the performance of the proposed scheme using conventional performance metrics. Compared with an existing approach, the proposed approach better protects multimedia data from unauthorized access and effectively prevents anyone other than the actual user from using the identity or the images.
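
The scheme above embeds MEID bits by modifying mid-band frequency coefficients with two uncorrelated noise sequences after an FDCuT + DCT decomposition. Curvelet transforms are omitted here; the sketch below shows only a DCT mid-band spread-spectrum embedding step on 8×8 blocks of a grayscale stand-in image, with made-up block size, gain, and mid-band mask, so it is a simplified illustration rather than the proposed scheme.

```python
# Simplified DCT mid-band spread-spectrum embedding sketch (the FDCuT stage of
# the proposed scheme is omitted). Block size, gain, mid-band mask, and the
# grayscale stand-in image are illustrative assumptions.
import numpy as np
from scipy.fft import dctn, idctn

BLOCK, GAIN = 8, 6.0
# mid-band positions of an 8x8 DCT block (one common choice, not the paper's)
MIDBAND = [(u, v) for u in range(BLOCK) for v in range(BLOCK) if 5 <= u + v <= 8]

rng = np.random.default_rng(42)
pn0 = rng.choice([-1.0, 1.0], size=len(MIDBAND))   # PN sequence for bit 0
pn1 = rng.choice([-1.0, 1.0], size=len(MIDBAND))   # PN sequence for bit 1

def embed(image: np.ndarray, bits: list) -> np.ndarray:
    out = image.astype(float).copy()
    h, w = out.shape
    blocks = [(y, x) for y in range(0, h - BLOCK + 1, BLOCK)
                     for x in range(0, w - BLOCK + 1, BLOCK)]
    for bit, (y, x) in zip(bits, blocks):
        coeffs = dctn(out[y:y + BLOCK, x:x + BLOCK], norm="ortho")
        pn = pn1 if bit else pn0
        for (u, v), p in zip(MIDBAND, pn):          # additive spread-spectrum mark
            coeffs[u, v] += GAIN * p
        out[y:y + BLOCK, x:x + BLOCK] = idctn(coeffs, norm="ortho")
    return np.clip(out, 0, 255)

# Hypothetical MEID bits embedded into a random grayscale stand-in "face" image.
meid_bits = [1, 0, 1, 1, 0, 0, 1, 0]
cover = rng.integers(0, 256, size=(64, 64)).astype(float)
watermarked = embed(cover, meid_bits)
print("mean absolute change:", np.abs(watermarked - cover).mean())
```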