• Title/Summary/Keyword: Color Model

309 search results

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / Vol. 14B, No. 4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching allow the facial region to be detected efficiently from a video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning, we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are deformed by means of a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video images.
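The abstract does not spell out its RBF deformation step, so the following is a minimal sketch, assuming a Gaussian kernel, of how control-point displacements could be propagated to surrounding non-feature vertices; the function name and the `sigma` parameter are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_deform(control_pts, control_disp, vertices, sigma=1.0):
    """Propagate control-point displacements to nearby vertices with a Gaussian RBF.

    control_pts  : (m, 3) rest positions of the animated control points
    control_disp : (m, 3) displacements driven by the animation parameters
    vertices     : (n, 3) non-feature vertices around the control points
    """
    # Pairwise distances between control points and the RBF kernel matrix
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)                      # (m, m)
    # Solve for RBF weights, one linear system per coordinate
    w = np.linalg.solve(phi, control_disp)               # (m, 3)
    # Evaluate the interpolant at the non-feature vertices
    d_v = np.linalg.norm(vertices[:, None, :] - control_pts[None, :, :], axis=-1)
    return vertices + np.exp(-(d_v / sigma) ** 2) @ w

# Toy usage: four control points pull two neighbouring vertices upward
ctrl = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
disp = np.array([[0, 0, .1], [0, 0, .1], [0, 0, 0], [0, 0, 0]], float)
verts = np.array([[.5, .2, 0], [.5, .8, 0]], float)
print(rbf_deform(ctrl, disp, verts))
```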

2D Spatial-Map Construction for Workers Identification and Avoidance of AGV (AGV의 작업자 식별 및 회피를 위한 2D 공간 지도 구성)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers / Vol. 49, No. 9 / pp.347-352 / 2012
  • In this paper, a 2D spatial-map construction method for worker identification and avoidance by an AGV is proposed, using a spatial-coordinate detection scheme based on a stereo camera. In the proposed system, the face area of a moving person is detected from the left image of the stereo pair by using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera embedded on the mobile robot can be controlled to track the moving target in real time. Moreover, using the disparity map obtained from the left and right images captured by the tracking-controlled stereo camera system and the perspective transformation between a 3-D scene and an image plane, a depth map can be computed. Experiments on AGV driving with 240 frames of stereo images show that the error between the calculated and measured values of the worker's width is very low, 2.19% and 1.52% on average.
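As a rough illustration of the face-detection step, the sketch below segments skin-colored pixels in YCbCr with OpenCV and returns the centroid of the resulting mask; the Cb/Cr thresholds are commonly cited ranges, not the values used in the paper.

```python
import cv2
import numpy as np

def face_centroid(bgr_frame):
    """Segment skin-colored pixels in YCbCr and return the centroid of the skin mask."""
    # OpenCV stores the channels in Y, Cr, Cb order
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Remove small speckles before computing moments
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                   # no skin pixels found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid (x, y)

# Usage: center = face_centroid(cv2.imread("left_frame.png"))
```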

Design of Moving Picture Retrieval System using Scene Change Technique (장면 전환 기법을 이용한 동영상 검색 시스템 설계)

  • Kim, Jang-Hui;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / Vol. 44, No. 3 / pp.8-15 / 2007
  • Recently, it has become important to process multimedia data efficiently. In particular, retrieval of multimedia information requires both user-interface and retrieval techniques. This paper proposes a new technique that effectively detects cuts in image information compressed with MPEG. A cut is a turning point between scenes, and cut detection is the basic first step for video indexing and retrieval. Existing methods, which compare only the previous and current frames, have the weakness of detecting false cuts caused by screen changes such as fast object motion, camera movement, or a flash. The proposed technique first detects shots using the DC (Direct Current) coefficients of the DCT (Discrete Cosine Transform), and the database is composed of these detected shots. Features are then extracted with the HMMD color model and the edge histogram descriptor (EHD) among the MPEG-7 visual descriptors, and retrieval is performed sequentially by the proposed matching technique. The experiments show that the implemented video segmentation system performs more quickly and precisely than existing techniques.
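The paper reads DC coefficients directly from the MPEG stream; the sketch below only approximates that idea on decoded grayscale frames, using the fact that the DC coefficient of an 8x8 DCT block is proportional to the block mean. The threshold and function names are assumptions.

```python
import numpy as np

def dc_image(gray):
    """Approximate the MPEG DC image by averaging each 8x8 block of a grayscale frame."""
    h, w = gray.shape
    g = gray[: h - h % 8, : w - w % 8].astype(np.float32)
    return g.reshape(g.shape[0] // 8, 8, g.shape[1] // 8, 8).mean(axis=(1, 3))

def detect_cuts(frames, threshold=30.0):
    """Flag frame indices whose DC image differs strongly from the previous frame."""
    cuts, prev = [], None
    for i, frame in enumerate(frames):          # frames: iterable of 2D uint8 arrays
        dc = dc_image(frame)
        if prev is not None and np.abs(dc - prev).mean() > threshold:
            cuts.append(i)                      # large average DC change -> cut candidate
        prev = dc
    return cuts
```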

Implement of Hand Gesture Interface using Ratio and Size Variation of Gesture Clipping Region (제스쳐 클리핑 영역 비율과 크기 변화를 이용한 손-동작 인터페이스 구현)

  • Choi, Chang-Yur;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 13, No. 1 / pp.121-127 / 2013
  • A vision-based hand-gesture interface for substituting a pointing device is proposed in this paper, which uses the ratio and size variation of the gesture region. The proposed method uses the skin hue and saturation of the hand region from the HSI color model to extract the hand region effectively; this removes non-hand regions and reduces noise caused by the light source. Also, because the method detects not a static hand shape but the ratio and size variation of the hand movement from the clipped hand region in real time, the computation is reduced and a faster response is guaranteed. To evaluate the performance of the proposed method, it was applied as a pointing device to a computerized self visual acuity testing system. As a result, the proposed method showed an average gesture recognition rate of 86% and a coordinate-movement recognition rate of 87%.
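A minimal sketch of the clipping step, using an OpenCV HSV segmentation as a stand-in for the paper's HSI hue-and-saturation model; the thresholds and the returned (ratio, size) pair are illustrative, not the paper's calibration.

```python
import cv2

def hand_region_stats(bgr_frame):
    """Clip the hand region by hue/saturation and return the bounding-box
    aspect ratio and area, the two cues the interface reacts to."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 0), (25, 255, 255))   # skin-like hue & saturation
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Largest skin-colored blob is assumed to be the hand
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return w / h, w * h      # (ratio, size) of the clipped gesture region
```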

A Study on Digital Fingerprinting Technology for the Copyright Protection of the Image Contents Printout (이미지 콘텐츠 출력물의 저작권보호를 위한 디지털 핑거프린팅 기술에 관한 연구)

  • Seo, Yong-Seok;Kim, Won-Gyum;Lee, Seon-Hwa;Suh, Young-Ho;Hwang, Chi-Jung
    • Proceedings of the Korea Contents Association Conference / 2006 Fall Conference / pp.242-245 / 2006
  • This paper addresses an image fingerprinting scheme for the print-to-capture model performed by a photo printer and a digital camera. When an image is captured by a digital camera, various kinds of distortions such as noise, geometrical distortions, and lens distortions are applied slightly and simultaneously. In this paper, we consider several steps to extract fingerprints from the distorted image in the print-and-capture scenario. To embed an ID into an image as a fingerprint, multi-bit embedding is applied: we embed 64 bits of ID information into the spatial domain of color images. To restore a captured image from distortions, a noise reduction filter is applied and a rectilinear tiling pattern is used as a template; to create this template, the multi-bit fingerprint is embedded repeatedly, like a tiling pattern, into the spatial domain of the image. The experiments show that the fingerprint is successfully extracted from images captured by a digital camera.
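The paper's exact embedding is not reproduced here; the following is a generic spread-spectrum sketch of spatial-domain multi-bit embedding with correlation-based extraction. Real print-and-capture extraction would additionally need the noise filtering and template-based registration described above, and all parameters below are assumptions.

```python
import numpy as np

def embed_fingerprint(img_bgr, bits, strength=2.0, seed=7):
    """Additively embed a multi-bit ID in the spatial domain of the blue channel.

    Each bit modulates a pseudo-random +/-1 pattern (generic spread spectrum)."""
    rng = np.random.default_rng(seed)
    h, w = img_bgr.shape[:2]
    patterns = rng.choice([-1.0, 1.0], size=(len(bits), h, w))
    tile = np.zeros((h, w), np.float32)
    for b, p in zip(bits, patterns):
        tile += (1.0 if b else -1.0) * p
    out = img_bgr.astype(np.float32)
    out[..., 0] += strength * tile / np.sqrt(len(bits))   # channel 0 = blue in BGR
    return np.clip(out, 0, 255).astype(np.uint8)

def extract_fingerprint(img_bgr, n_bits=64, seed=7):
    """Recover each bit by correlating the blue channel with its pattern (naive, non-blind-robust)."""
    rng = np.random.default_rng(seed)
    h, w = img_bgr.shape[:2]
    patterns = rng.choice([-1.0, 1.0], size=(n_bits, h, w))
    residual = img_bgr[..., 0].astype(np.float32)
    residual -= residual.mean()
    return [int((residual * p).sum() > 0) for p in patterns]
```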


Object classification for domestic waste based on Convolutional neural networks (심층 신경망 기반의 생활폐기물 자동 분류)

  • Nam, Junyoung;Lee, Christine;Patankar, Asif Ashraf;Wang, Hanxiang;Li, Yanfen;Moon, Hyeonjoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019 Fall Conference / pp.83-86 / 2019
  • During urbanization, the problem of domestic waste in cities grows rapidly, and ineffective waste management aggravates urban pollution and can cause severe physical environmental and economic problems. In addition, the increase in bulky domestic waste that is hard to manage hinders urban development. When domestic waste is processed, a fee is charged for bulky waste items, and manually classifying the many types of bulky domestic waste is time-consuming and costly. It is therefore important to introduce a system that classifies bulky domestic waste automatically. This paper proposes such a classification system, and its contributions are fourfold. 1) We compare the accuracy and speed of VGG-19, Inception-V3, and ResNet50, Convolutional Neural Network (CNN) models suited to high-accuracy, robust classification; the highest classification accuracy on the proposed 20-class bulky-waste dataset is 86.19%. 2) To handle the class-imbalance problem, two methods are used: Class Weight VGG-19 (CW-VGG-19) and Extreme Gradient Boosting VGG-19. 3) A dataset containing 20 classes was manually collected and verified, with more than 500 color images per class. 4) A deep-learning-based mobile application was developed.
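A minimal transfer-learning sketch of the class-weighted VGG-19 (CW-VGG-19) idea using tf.keras; the head layers, hyperparameters, and per-class counts are placeholders, not the paper's settings.

```python
import tensorflow as tf

NUM_CLASSES = 20  # the paper's 20 bulky-waste categories

# Transfer learning from ImageNet weights; the classification head is illustrative.
base = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Inverse-frequency class weights counter the imbalance; `counts` would come
# from the real training set (placeholder values here).
counts = {i: 500 for i in range(NUM_CLASSES)}
total = sum(counts.values())
class_weight = {i: total / (NUM_CLASSES * c) for i, c in counts.items()}

# model.fit(train_ds, validation_data=val_ds, epochs=20, class_weight=class_weight)
```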


Automatic Detecting of Joint of Human Body and Mapping of Human Body using Humanoid Modeling (인체 모델링을 이용한 인체의 조인트 자동 검출 및 인체 매핑)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 15, No. 4 / pp.851-859 / 2011
  • In this paper, we propose a method that automatically extracts the silhouette and joints of a person from consecutive input images and tracks the joints, so that the object can be traced for interaction between human and computer. The proposed method also maps the human body using the joints to represent the person's actions. To implement the algorithm, we model the human body with 14 joints based on body proportions. The proposed method converts RGB color images acquired through a single camera into hue, saturation, and value images and extracts the body silhouette using the difference between the background and the input image. Joints are then extracted automatically from the corner points of the extracted silhouette and the body-model data. The motion of the object is tracked by applying a block-matching method to the areas around the joints across the image sequence, and the human motion is mapped using the joint positions. The proposed method was applied to test videos, and the results show that it automatically extracts the joints and effectively maps the human body with the detected joints, so that the person's actions are aptly expressed by the joint locations.
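As a sketch of the silhouette-extraction step, the code below differences the HSV value channel of a background image and the current frame with OpenCV; the threshold and morphology kernel are illustrative choices rather than the paper's settings.

```python
import cv2

def extract_silhouette(background_bgr, frame_bgr, threshold=30):
    """Extract the body silhouette from the background/input difference
    computed on the HSV value channel."""
    bg_v = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV)[..., 2]
    fr_v = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[..., 2]
    diff = cv2.absdiff(fr_v, bg_v)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    # Close small gaps so the silhouette forms one connected region
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```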

Hole-Filling Method Using Extrapolated Spatio-temporal Background Information (추정된 시공간 배경 정보를 이용한 홀채움 방식)

  • Kim, Beomsu;Nguyen, Tien Dat;Hong, Min-Cheol
    • Journal of the Institute of Electronics and Information Engineers / Vol. 54, No. 8 / pp.67-80 / 2017
  • This paper presents a hole-filling method using extrapolated spatio-temporal background information to obtain a synthesized view. A new temporal background model using a non-overlapped patch-based background codebook is introduced to extrapolate temporal background information. In addition, a depth-map-driven spatial local background estimation is addressed to define spatial background constraints that represent the lower and upper bounds of a background candidate. Background holes are filled by comparing the similarities between the temporal background information and the spatial background constraints. Additionally, a depth-map-based ghost removal filter is described to resolve the mismatch between a color image and the corresponding depth map of a virtual view after 3-D warping. Finally, inpainting with a priority function that includes a new depth term is applied to fill the remaining holes. Experimental results demonstrate that the proposed method yields subjective and objective improvements over state-of-the-art methods.
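The sketch below covers only the final stage, filling leftover holes with OpenCV's Telea inpainting; the temporal background codebook, spatial constraints, and depth-aware priority of the paper are not reproduced, and the radius value is an assumption.

```python
import cv2
import numpy as np

def fill_remaining_holes(synth_bgr, hole_mask, radius=5):
    """Fill leftover disocclusion holes in a synthesized view with image inpainting.

    synth_bgr : warped/synthesized color view (uint8, BGR)
    hole_mask : nonzero where pixels are still missing after background filling
    """
    mask = (hole_mask > 0).astype(np.uint8) * 255
    return cv2.inpaint(synth_bgr, mask, radius, cv2.INPAINT_TELEA)

# Usage: filled = fill_remaining_holes(warped_view, holes_after_background_fill)
```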

Face Detection for Automatic Avatar Creation by using Deformable Template and GA (Deformable Template과 GA를 이용한 얼굴 인식 및 아바타 자동 생성)

  • Park Tae-Young;Kwon Min-Su;Kang Hoon
    • Journal of the Korean Institute of Intelligent Systems / Vol. 15, No. 1 / pp.110-115 / 2005
  • This paper proposes a method that detects the contours of the face, eyes, and mouth in a color image in order to create an avatar automatically. First, we use the HSI color model to exclude the effect of varying lighting conditions, and we find skin regions in the input image by using the skin color defined on the HS plane. We then use deformable templates and a Genetic Algorithm (GA) to detect the contours of the face, eyes, and mouth. The deformable templates consist of B-spline curves and control-point vectors, which can represent various shapes of the face, eyes, and mouth, and the GA is a useful search procedure based on the mechanics of natural selection and natural genetics. Second, an avatar is created automatically by using the detected contours and Fuzzy C-Means clustering (FCM), where FCM is used to reduce the number of face colors. As a result, we could create avatars resembling hand-drawn caricatures that represent the user's identity, differing from those generated by existing methods.
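A compact NumPy sketch of the FCM color-reduction step, quantizing face pixels into a small palette; the number of clusters, fuzzifier `m`, and iteration count are illustrative assumptions.

```python
import numpy as np

def fcm_reduce_colors(pixels, n_colors=8, m=2.0, n_iter=30, seed=0):
    """Standard Fuzzy C-Means on colors: pixels (N, 3) float -> (palette, hard labels)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(pixels), n_colors))
    u /= u.sum(axis=1, keepdims=True)                    # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=-1) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))               # FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u.argmax(axis=1)                     # palette and per-pixel labels

# Usage: palette, labels = fcm_reduce_colors(face_pixels.reshape(-1, 3).astype(float))
```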

Implementation of Mutual Conversion System between Body Movement and Visual·Auditory Information (신체 움직임-시·청각 정보 상호변환 시스템의 구현)

  • Bae, Myung-Jin;Kim, Sung-Ill
    • Journal of IKEEE / Vol. 22, No. 2 / pp.362-368 / 2018
  • This paper implements a system that mutually converts between body motion signals and visual and auditory signals. The study is based on intentional synesthesia that can be perceived through learning. Euler angles output by a wearable armband (Myo) were used to represent body movements; as the muscle sense, roll, pitch, and yaw signals were used. As the visual and auditory signals, the HSI (Hue, Saturation, Intensity) color model and MIDI (Musical Instrument Digital Interface) signals were used, respectively. The conversion between body motion signals and visual and auditory signals was made easy to infer by applying a one-to-one correspondence. Simulation results compared the input motion signals with the output ones using ROS (Robot Operating System) and the 3D simulation tool Gazebo, demonstrating the mutual conversion between body motion information and visual and auditory information.
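A toy sketch of one possible one-to-one mapping from roll/pitch/yaw to an HSI color and a MIDI note number; the value ranges below are assumptions, not the correspondence used in the paper.

```python
def motion_to_color_and_note(roll, pitch, yaw):
    """Map Myo roll/pitch/yaw (degrees) to an (H, S, I) color and a MIDI note number."""
    hue = yaw % 360.0                                        # yaw -> hue in 0..360
    saturation = min(max((pitch + 90.0) / 180.0, 0.0), 1.0)  # pitch -90..90 -> 0..1
    intensity = min(max((roll + 180.0) / 360.0, 0.0), 1.0)   # roll -180..180 -> 0..1
    midi_note = 21 + int(intensity * 87)                     # piano range A0(21)..C8(108)
    return (hue, saturation, intensity), midi_note

print(motion_to_color_and_note(roll=45.0, pitch=30.0, yaw=120.0))
```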