• Title/Summary/Keyword: texture

Search Results: 407

Audio Texture Synthesis using EM Optimization (EM 최적화를 이용한 오디오 텍스처 합성)

  • Roe, Chang-Hwan; Yoo, Min-Joon; Lee, In-Kwon
    • Proceedings of the Korean HCI Society Conference / 2007.02a / pp.274-280 / 2007
  • Audio texture synthesis is a method for generating a new audio clip of arbitrary length from a given short audio clip. It is an efficient way to produce sound effects precisely synchronized with video in animation or film, or background music of arbitrary length. Recently, Lie Lu proposed a method that divides a given example audio clip into several segments, connects the segments in a graph, and synthesizes an audio clip of arbitrary length by traversing the graph. Although this relatively simple approach produces clips that feel similar to the original, the results often sound repetitive, because segments of the original are merely concatenated one after another. In this paper, unlike Lie Lu's method, we propose a method that synthesizes directly from the given example audio clip, producing a result that feels similar to the original while reducing repetitiveness. In particular, we use EM optimization for accurate synthesis. The proposed method first divides the example audio clip into units of a fixed size and synthesizes an audio clip of arbitrary length by overlapping these units by a fixed amount. Each part of the synthesized clip is then compared against the example clip to find the segment of the example that best matches it, and the matched segments are blended back into the result to form a new clip. This process is repeated to obtain an optimized result in which the divided segments are joined as naturally as possible. Because the synthesis is formulated as an optimization, the result can easily be controlled: by adding constraints to the optimization problem, the user can place a desired part of the music at a specific position in the output to shape its flow, or recover partially lost sound data. Compared with existing synthesis methods, audio texture synthesis with EM optimization produces qualitatively better results with less repetitive patterns. A user survey is presented to support this claim.
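The divide/match/blend loop described in the abstract can be sketched as an EM-style patch-matching iteration. This is only an illustrative reconstruction from the abstract, not the authors' implementation; the function name, window/hop parameters, and the L2 matching criterion are assumptions.

```python
import numpy as np

def em_audio_texture(example, out_len, win=64, hop=32, iters=10, rng_seed=0):
    """EM-style texture synthesis sketch: alternately match output
    windows to example segments (E-step) and overlap-add the matched
    segments back into the output (M-step)."""
    rng = np.random.default_rng(rng_seed)
    # Candidate segments cut from the example clip.
    starts = np.arange(0, len(example) - win + 1, hop)
    segs = np.stack([example[s:s + win] for s in starts])
    # Initialise the output with noise at the example's amplitude.
    out = rng.standard_normal(out_len) * example.std()
    for _ in range(iters):
        acc = np.zeros(out_len)
        cnt = np.zeros(out_len)
        for p in range(0, out_len - win + 1, hop):
            window = out[p:p + win]
            # E-step: nearest example segment under L2 distance.
            best = segs[np.argmin(((segs - window) ** 2).sum(axis=1))]
            # M-step: overlap-add the matched segment.
            acc[p:p + win] += best
            cnt[p:p + win] += 1
        out = acc / np.maximum(cnt, 1)
    return out
```

The averaging of overlapped segments is what makes the joins between units smooth as the iteration converges.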


Face Relighting Based on Virtual Irradiance Sphere and Reflection Coefficients (가상 복사조도 반구와 반사계수에 근거한 얼굴 재조명)

  • Han, Hee-Chul; Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.13 no.3 / pp.339-349 / 2008
  • We present a novel method to estimate the light source direction and relight the face texture image of a single 3D model under arbitrary unknown illumination conditions. We create a virtual irradiance sphere to detect the light source direction from a given illuminated texture image, using both normal vector mapping and weighted bilinear interpolation. We then derive a relighting equation with estimated ambient and diffuse coefficients. We provide the results of a series of experiments on light source estimation, relighting, and face recognition to show the efficiency and accuracy of the proposed method in restoring the shaded and shadow areas of a face texture image. Our approach to face relighting can be used not only for illumination-invariant face recognition applications but also for reducing visual load and improving visual performance in tasks using 3D displays.
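A relighting equation with ambient and diffuse coefficients, as the abstract describes, is in its simplest form the Lambertian model. The sketch below shows that core idea only; the function and parameter names are hypothetical, and the paper's irradiance-sphere light estimation is not reproduced.

```python
import numpy as np

def relight(texture, normals, light_dir, ambient=0.2, diffuse=0.8):
    """Lambertian relighting sketch: scale each texel by an ambient term
    plus a diffuse term depending on the angle between its surface
    normal and the estimated light direction."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # n.l per texel, clamped so back-facing texels receive no diffuse light.
    ndotl = np.clip(normals @ l, 0.0, None)
    return texture * (ambient + diffuse * ndotl)
```

With an estimated light direction, the same equation can be inverted to remove the original shading before applying a new light.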

Liver Tumor Detection Using Texture PCA of CT Images (CT영상의 텍스처 주성분 분석을 이용한 간종양 검출)

  • Sur, Hyung-Soo; Chong, Min-Young; Lee, Chil-Woo
    • The KIPS Transactions: Part B / v.13B no.6 s.109 / pp.601-606 / 2006
  • With the great development of medical technology, the amount of image data used in medical institutions is increasing rapidly. Automated image-processing methods are therefore needed to analyze large numbers of medical images, rather than relying on doctors' visual inspection alone. In this paper, we propose a method that acquires texture information from the liver region of abdominal CT images using the GLCM (gray-level co-occurrence matrix) and automatically detects liver tumors by applying PCA to this data. Most existing liver tumor detection methods rely on a single feature, intensity; instead, we transform eight GLCM texture features into four principal component accumulation images. In our experiments, the four principal component accumulation images account for 89.9% of the variance, and detection performance is comparable to the roughly 92% achieved using intensity alone. This means that liver tumors can still be detected even when the dimensionality of the image data is reduced by half, from eight dimensions to four.
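The GLCM-features-then-PCA pipeline can be sketched with plain numpy. This is a generic illustration of the technique named in the abstract, not the paper's code; the feature set (contrast, energy, homogeneity, entropy), quantization levels, and pixel offset are assumptions.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """GLCM texture features for one pixel offset: build the
    co-occurrence matrix, then derive contrast, energy,
    homogeneity and entropy from it."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return np.array([
        (p * (i - j) ** 2).sum(),             # contrast
        (p ** 2).sum(),                       # energy
        (p / (1 + abs(i - j))).sum(),         # homogeneity
        -(p[p > 0] * np.log(p[p > 0])).sum()  # entropy
    ])

def pca_reduce(X, k):
    """Project feature rows of X onto their first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T
```

Each CT region yields one feature row; `pca_reduce` then compresses the feature space as the paper does with its eight GLCM features.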

Overview of Inter-Component Coding in 3D-HEVC (3D-HEVC를 위한 인터-컴포넌트 부호화 방법)

  • Park, Min Woo; Lee, Jin Young; Kim, Chanyul
    • Journal of Broadcast Engineering / v.20 no.4 / pp.545-556 / 2015
  • An HEVC-compatible 3D video coding method (3D-HEVC) has recently been developed as an extension of the High Efficiency Video Coding (HEVC) standard. In order to deal efficiently with the multi-view video plus depth (MVD) format, 3D-HEVC exploits inter-component prediction, which allows prediction between texture and depth map images in addition to the temporal prediction used in conventional single-layer video coding such as H.264/AVC and HEVC. The performance of inter-component prediction is normally affected by the accuracy of the disparity vector, so it is important to have an accurate disparity vector for inter-component prediction. This paper therefore introduces a disparity derivation method and inter-component algorithms that use the disparity vector for efficient 3D video coding. Simulation results show that 3D-HEVC provides higher coding performance than the simulcast approach using HEVC and the simple multi-view extension (MV-HEVC).
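The disparity vectors used in inter-component prediction are typically derived from depth samples. A common conversion in MVD coding maps an 8-bit depth value back to metric depth via inverse-depth quantization and then to disparity as f·B/Z; the sketch below illustrates that conversion under those assumptions (8-bit samples, uniform inverse-depth quantization), not the paper's specific derivation method.

```python
def depth_to_disparity(depth_value, f, baseline, z_near, z_far):
    """Sketch of depth-sample-to-disparity conversion for MVD coding:
    an 8-bit depth value is mapped back to a metric depth Z over
    [z_near, z_far], then to a disparity f * B / Z."""
    # Inverse-depth quantisation: 255 is the nearest plane, 0 the farthest.
    inv_z = depth_value / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    z = 1.0 / inv_z
    return f * baseline / z
```

Near objects (large depth samples) thus get large disparities, which is why depth-map accuracy directly affects inter-view prediction quality.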

Sensory Characteristics and Consumer Acceptance of Yakgwa with Glutinous Rice Flour (찹쌀가루 첨가 약과의 관능적 특성 및 소비자 기호도)

  • Park, Jin-Sook; Shin, Malshick; Choe, Eunok; Lee, Kyong-Ae
    • Journal of the East Asian Society of Dietary Life / v.26 no.3 / pp.271-277 / 2016
  • This study was performed to identify the sensory characteristics of the Korean traditional cookie Yakgwa prepared by partially replacing wheat flour with glutinous rice flour, as well as to conduct a cross-cultural comparison of the sensory descriptions of the Yakgwa sample set between Korean and Chinese panelists. Highly trained Korean and Chinese panelists identified 22 sensory attributes by descriptive analysis. The addition of glutinous rice flour decreased the soybean oil odor, moistness, and oiliness of the Yakgwa samples and increased their hardness and crispness. Consumers from Korea (n=89) and China (n=56) participated in the consumer test. Yakgwa with 50% glutinous rice flour had significantly higher overall acceptability than the other Yakgwa samples among both Korean and Chinese consumers.

Improved Bag of Visual Words Image Classification Using the Process of Feature, Color and Texture Information (특징, 색상 및 텍스처 정보의 가공을 이용한 Bag of Visual Words 이미지 자동 분류)

  • Park, Chan-hyeok; Kwon, Hyuk-shin; Kang, Seok-hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.79-82 / 2015
  • Bag of visual words (BoVW) is an image classification and retrieval method that automatically sorts and searches a database using feature vectors built from local feature points. However, a system that uses feature points alone may retrieve or classify images the user did not want. To address this weakness, we compose the visual words not only from feature points but also from color information, which expresses the overall mood of an image, and texture information, which expresses repeated patterns; this makes more varied searches possible. In our tests, we compare classification results using words built from feature points only against results for words augmented with color and texture information. The proposed method achieves an accuracy of 80-90%.
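The BoVW pipeline the abstract describes (build a codebook, quantize descriptors, histogram the words) can be sketched with a minimal k-means. This is a generic illustration under assumed parameters, not the authors' system; as the abstract suggests, color or texture descriptors can simply be concatenated onto the feature vectors before quantization.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means used to build the visual-word codebook."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return centers

def bovw_histogram(descriptors, centers):
    """Quantise an image's local descriptors against the codebook and
    return a normalised visual-word histogram."""
    d = ((descriptors[:, None, :] - centers[None]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

Each image's histogram is then the fixed-length vector fed to a classifier or a nearest-neighbor search.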


Image Contrast Enhancement using Adaptive Unsharp Mask and Directional Information (방향성 정보와 적응적 언샾 마스크를 이용한 영상의 화질 개선)

  • Lee, Im-Geun
    • Journal of the Korea Society of Computer and Information / v.16 no.3 / pp.27-34 / 2011
  • In this paper, a novel approach for image contrast enhancement is introduced. The method is based on the unsharp mask and the directional information of the image. Since unsharp mask techniques give better visual quality than conventional sharpening masks, there has been much work on image enhancement using unsharp masks. The proposed algorithm decomposes the image into several blocks and extracts directional information using the DCT. Based on its geometric properties, each block is labeled with an appropriate type and processed by an adaptive unsharp mask. The masking process is skipped in flat areas to reduce noise artifacts, while in texture and edge areas the adaptive unsharp mask is applied to enhance contrast based on the edge direction. Experiments show that the proposed algorithm produces contrast-enhanced images with superior visual quality, suppressing noise effects and enhancing edges at the same time.
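The skip-flat-areas idea can be sketched as follows. Note this simplified version gates the unsharp mask on a local-energy threshold instead of the paper's DCT-based block labeling and directional masks; all names and thresholds are assumptions.

```python
import numpy as np

def box_blur(img, r=1):
    """Simple box blur used as the low-pass filter of the unsharp mask."""
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, 0), dx, 1)
    return out / (2 * r + 1) ** 2

def adaptive_unsharp(img, flat_thresh=0.01, gain=1.5):
    """Adaptive unsharp mask sketch: boost the high-pass residual only
    where local activity exceeds a threshold, leaving flat areas
    untouched so noise is not amplified."""
    low = box_blur(img)
    detail = img - low
    activity = box_blur(detail ** 2)   # local energy as an activity proxy
    mask = activity > flat_thresh
    return img + gain * detail * mask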

Realistic 3D Scene Reconstruction from an Image Sequence (연속적인 이미지를 이용한 3차원 장면의 사실적인 복원)

  • Jun, Hee-Sung
    • The KIPS Transactions: Part B / v.17B no.3 / pp.183-188 / 2010
  • A factorization-based 3D reconstruction system is realized to recover a 3D scene from an image sequence. The image sequence is captured by an uncalibrated perspective camera from several views. Many matched feature points across all images are obtained by a feature tracking method, and these data are supplied to the 3D reconstruction module to obtain a projective reconstruction. The projective reconstruction is converted to a Euclidean reconstruction by enforcing several metric constraints. After the triangular meshes are obtained, realistic reconstruction of the 3D models is completed by texture mapping. The developed system is implemented in C++; the Qt library is used for the system's user interface, and the OpenGL graphics library is used for the texture mapping routine and the model visualization program. Experimental results using synthetic and real image data demonstrate the effectiveness of the developed system.
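The core of factorization-based reconstruction is splitting the measurement matrix of tracked points into motion and shape factors. The sketch below shows the classic affine (Tomasi-Kanade-style) version for simplicity; the paper works with projective factorization and metric upgrade, which this does not reproduce.

```python
import numpy as np

def factorize(W):
    """Affine factorization sketch: split the centred 2F x P measurement
    matrix into camera motion (rank-3 left factor) and 3D shape (right
    factor) via SVD. The result is defined up to an affine ambiguity."""
    Wc = W - W.mean(axis=1, keepdims=True)   # translate centroid to origin
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # motion
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # shape
    return M, S
```

Metric constraints (orthonormal camera rows) are then enforced to resolve the remaining ambiguity, analogous to the Euclidean upgrade the abstract mentions.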

Generation and Comparison of 3-Dimensional Geospatial Information using Unmanned Aerial Vehicle Photogrammetry Software (무인항공사진측량 소프트웨어를 이용한 3차원 공간정보 생성 및 비교)

  • Yang, Sung-Ryong; Lee, Hak-Sool
    • Journal of the Society of Disaster Information / v.15 no.3 / pp.427-439 / 2019
  • Purpose: We generated geospatial information from unmanned aerial vehicle (UAV) imagery using different software packages and analyzed the positional accuracy of the resulting orthoimages and DSMs and the texture mapping of the 3D meshes. Method: The same UAV image data were processed with two different software packages to generate spatial information. The resulting orthoimages and DSMs were compared quantitatively, by calculating the RMSE of the horizontal and vertical position errors, and qualitatively. Results: There were no significant differences in the positional accuracy of the orthoimages and DSMs generated by each package, but there were differences in the texture mapping of the 3D meshes, indicating that 3D mesh creation is affected by the choice of UAV photogrammetry software. Conclusion: The choice of software has no practical effect on the creation of orthoimages and DSMs for UAV-based geospatial analysis; however, for 3D visualization, texture mapping results differ depending on the software.
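The quantitative comparison step, RMSE of horizontal and vertical position errors at checkpoints, can be sketched as below. The function name and the (X, Y, Z) column convention are assumptions for illustration.

```python
import numpy as np

def position_rmse(pred, truth):
    """RMSE sketch for checkpoint accuracy: horizontal RMSE over the
    X/Y residuals, vertical RMSE over the Z residual. Rows are
    checkpoints; columns are assumed to be (X, Y, Z) in map units."""
    d = np.asarray(pred, float) - np.asarray(truth, float)
    rmse_h = np.sqrt((d[:, 0] ** 2 + d[:, 1] ** 2).mean())
    rmse_v = np.sqrt((d[:, 2] ** 2).mean())
    return rmse_h, rmse_v
```

Running this on checkpoints measured in each package's orthoimage/DSM gives directly comparable horizontal and vertical accuracy figures.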

A Study on the Analysis of Jeju Island Precipitation Patterns using the Convolution Neural Network (합성곱신경망을 이용한 제주도 강수패턴 분석 연구)

  • Lee, Dong-Hoon; Lee, Bong-Kyu
    • Journal of Software Assessment and Valuation / v.15 no.2 / pp.59-66 / 2019
  • Because agriculture and tourism carry overwhelming weight in Jeju's economy, the analysis of precipitation is more important there than in other regions. Currently, numerical models are used to analyze the precipitation of Jeju Island from meteorological satellite observation data. However, since precipitation on Jeju varies more than in other regions, it is difficult to obtain satisfactory results with the existing numerical models. In this paper, we propose a method for analyzing Jeju precipitation patterns using texture analysis based on a convolutional neural network (CNN). The proposed method converts the water vapor imagery and temperature information for the Jeju Island area from the weather satellite into texture images, which are then fed into the CNN to analyze the precipitation patterns of Jeju Island. We implement the proposed method and show its effectiveness through experiments.
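The texture-image-to-CNN step rests on convolutional filtering of the satellite channels. The sketch below shows only that building block, a single convolution layer with ReLU and global average pooling over fixed kernels; the actual paper trains a full CNN, and all names and kernel choices here are assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution (no padding), the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * kernel).sum()
    return out

def texture_features(img, kernels):
    """One convolution layer + ReLU + global average pooling: the kind
    of texture response a CNN computes from a satellite channel."""
    return np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
```

In a trained CNN the kernels are learned from labeled precipitation patterns rather than fixed by hand.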