• Title/Summary/Keyword: 배경 생성 (background generation)


An Algorithm for Converting 2D Face Image into 3D Model (얼굴 2D 이미지의 3D 모델 변환 알고리즘)

  • Choi, Tae-Jun;Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.4
    • /
    • pp.41-48
    • /
    • 2015
  • Recently, the spread of 3D printers has increased the demand for 3D models. However, creating a 3D model normally requires a trained specialist using specialized software. This paper presents an algorithm that produces a 3D model from a single two-dimensional frontal face photograph, so that ordinary people can easily create 3D models. The background and foreground are separated in the photograph, and a predetermined number of vertices are placed on the separated foreground image at regular intervals. The vertex positions are then extended into three dimensions using the gray level of the pixel at each vertex and the characteristic shapes of the eyebrows and nose of a typical human face. Foreground/background separation uses the edge information of the silhouette, and the AdaBoost algorithm with Haar-like features is employed to locate the eyes and nose. The 3D models obtained with this algorithm are good enough for 3D printing, although a small amount of manual touch-up may be required. The algorithm should be useful for providing 3D content as 3D printers continue to spread.
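
As a rough illustration of the detection and lifting steps, the sketch below uses OpenCV's bundled Haar cascades (an AdaBoost classifier over Haar-like features) and raises a regular vertex grid into 3D from pixel gray levels; the grid step and depth scale are arbitrary assumptions, not the paper's values.

```python
import cv2
import numpy as np

def face_photo_to_vertices(path, grid_step=8, depth_scale=30.0):
    """Rough sketch: place a regular vertex grid on the detected face region
    and lift each vertex to 3D using its gray level (brighter = closer)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # AdaBoost classifiers with Haar-like features (OpenCV's bundled cascades)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no frontal face found")
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])

    # Regular grid of vertices over the face region; z from the pixel gray level.
    vertices = []
    for vy in range(y, y + h, grid_step):
        for vx in range(x, x + w, grid_step):
            z = gray[vy, vx] / 255.0 * depth_scale
            vertices.append((vx, vy, z))
    return np.array(vertices, dtype=np.float32), eyes
```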

Completion of Occluded Objects in a Video Sequence using Spatio-Temporal Matching (시공간 정합을 이용한 비디오 시퀀스에서의 가려진 객체의 복원)

  • Heo, Mi-Kyoung;Moon, Jae-Kyoung;Park, Soon-Yong
    • The KIPS Transactions:PartB
    • /
    • v.14B no.5
    • /
    • pp.351-360
    • /
    • 2007
  • Video completion refers to a computer vision technique that restores damaged frames in a video sequence by filling missing pixels with suitable colors. We propose a new video completion technique for filling image holes created by removing an unwanted object from a video sequence in which two objects cross each other in the presence of camera motion. We remove the object closer to the camera, which leaves image holes, and these holes are then filled with color information from other frames. First, spatio-temporal volumes of the occluding and occluded objects are built around the centroids of the objects. Second, a temporal search based on voxel matching separates and removes the occluding object. Finally, the remaining holes are filled using a spatial search, and seams on the boundary of the completed pixels are removed with a simple blending technique. Experimental results on real video sequences show that the proposed technique produces naturally completed videos.
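
The temporal part of the idea, filling a hole pixel from another frame in which the same location is visible, can be sketched as follows; this is a minimal stand-in that ignores camera-motion alignment and the paper's voxel-matching volumes.

```python
import numpy as np

def temporal_fill(frames, hole_masks):
    """Minimal sketch: fill each hole pixel from the same pixel location in the
    temporally nearest frame where that location is not occluded.
    frames: (T, H, W, 3) uint8, hole_masks: (T, H, W) bool (True = missing)."""
    frames = frames.copy()
    T = frames.shape[0]
    for t in range(T):
        ys, xs = np.nonzero(hole_masks[t])
        for y, x in zip(ys, xs):
            # search outward in time for a frame where this pixel is visible
            for dt in range(1, T):
                for s in (t - dt, t + dt):
                    if 0 <= s < T and not hole_masks[s, y, x]:
                        frames[t, y, x] = frames[s, y, x]
                        break
                else:
                    continue
                break
    return frames
```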

Slow Sync Image Synthesis from Short Exposure Flash Smartphone Images (단노출 플래시 스마트폰 영상에서 저속 동조 영상 생성)

  • Lee, Jonghyeop;Cho, Sunghyun;Lee, Seungyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.27 no.3
    • /
    • pp.1-11
    • /
    • 2021
  • Slow sync is a photography technique in which a user takes a long-exposure image with the camera flash to illuminate both the foreground and the background. Unlike short exposure with flash or long exposure without flash, slow sync yields a bright foreground and background in dim environments. However, taking a slow sync image with a smartphone is difficult because the smartphone camera has a weak, continuous flash and cannot fire the flash during a long exposure. This paper proposes a deep learning method that takes a short-exposure flash image as input and produces a slow sync image as output. We present a deep learning network with a weight map for spatially varying brightening. We also propose a dataset of smartphone short-exposure flash images and slow sync images for supervised learning, using the linearity of RAW images to synthesize each slow sync image from a short-exposure flash image and a long-exposure no-flash image. Experimental results show that our method, trained on this dataset, synthesizes slow sync images effectively.
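
Since the network architecture is not detailed in the abstract, the toy sketch below only illustrates the weight-map idea: a small convolutional network predicts a per-pixel gain that is applied to the flash image. The layer sizes are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class WeightMapBrightener(nn.Module):
    """Toy sketch: predict a per-pixel weight map from a short-exposure flash
    image and apply it as a spatially varying gain (not the paper's network)."""
    def __init__(self, channels=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Softplus(),  # positive gains
        )

    def forward(self, flash_img):              # flash_img: (N, 3, H, W), linear RAW-like
        weight_map = self.backbone(flash_img)  # (N, 1, H, W) spatial gain
        return flash_img * weight_map          # brightened "slow sync" estimate
```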

Automatic FE Mesh Generation Technique using Computer Aided Geometric Design for Free-form Discrete Spatial Structure (CAGD를 이용한 프리폼 이산화 공간구조물의 유한요소망 자동생성기법)

  • Lee, Sang-Jin
    • Journal of Korean Association for Spatial Structures
    • /
    • v.10 no.2
    • /
    • pp.77-86
    • /
    • 2010
  • This paper provides the background theory and numerical results of automatic finite element (FE) mesh generation for free-form discrete structures. The present method adopts computer aided geometric design (CAGD) techniques to overcome the case-by-case limitations of traditional automatic FE mesh generators. The technique involves two steps: first, the shape of the structure is represented with a geometric model based on CAGD; second, the discrete FE mesh of the spatial structure is generated over that geometric model. The numerical results show that the present technique makes it very easy to produce FE meshes for free-form spatial structures, and that features of traditional automatic mesh generators can be reused in the process. Furthermore, it shows potential for use in the shape optimization of large spatial structures.
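
The two steps can be sketched as sampling a parametric CAGD surface on a regular (u, v) grid and connecting the samples into quadrilateral elements; the surface used below is a made-up closed-form example rather than an actual NURBS/B-spline model.

```python
import numpy as np

def mesh_from_surface(surface, n_u=20, n_v=20):
    """Sketch: sample a CAGD parametric surface S(u, v) -> (x, y, z) on a regular
    (u, v) grid and connect the samples into quadrilateral finite elements.
    'surface' is any callable; a NURBS/B-spline evaluator would be used in practice."""
    us = np.linspace(0.0, 1.0, n_u)
    vs = np.linspace(0.0, 1.0, n_v)
    nodes = np.array([surface(u, v) for v in vs for u in us])   # (n_u*n_v, 3)

    elements = []
    for j in range(n_v - 1):
        for i in range(n_u - 1):
            n0 = j * n_u + i
            elements.append((n0, n0 + 1, n0 + n_u + 1, n0 + n_u))  # quad connectivity
    return nodes, np.array(elements, dtype=int)

# Example: a shallow free-form "dome" given in closed form purely for illustration.
nodes, elems = mesh_from_surface(
    lambda u, v: (u * 10.0, v * 10.0, 3.0 * np.sin(np.pi * u) * np.sin(np.pi * v)))
```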


Automatic Pedestrian Removal Algorithm Using Multiple Frames (다중 프레임에서의 보행자 검출 및 삭제 알고리즘)

  • Kim, ChangSeong;Lee, DongSuk;Park, Dong Sun
    • Smart Media Journal
    • /
    • v.4 no.2
    • /
    • pp.26-33
    • /
    • 2015
  • In this paper, we propose an efficient automatic pedestrian removal system for frames in a video sequence. It first detects pedestrians in a frame using a Histogram of Oriented Gradients (HOG) / linear Support Vector Machine (SVM) classifier, searches for suitable background patches, and then uses those patches to replace the removed pedestrians. Background patches are retrieved from a reference video sequence, and a modified feather blender algorithm is applied to make the boundaries of the replaced blocks look natural. The proposed system automatically detects objects and generates natural-looking patches, whereas most existing systems require a manual search. In the experiments, the average PSNR of the replaced blocks is 19.246.
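
A rough sketch of this pipeline, using OpenCV's default HOG people detector and a blurred-mask feather blend in place of the paper's modified feather blender, might look like this (the margin and blur parameters are assumptions):

```python
import cv2
import numpy as np

def remove_pedestrians(frame, background_frame):
    """Sketch: detect pedestrians with OpenCV's default HOG + linear SVM people
    detector and replace each detection with the co-located patch from a
    pedestrian-free reference frame, feathering the patch border."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    out = frame.copy()
    H, W = frame.shape[:2]
    for (x, y, w, h) in rects:
        x, y = max(0, x), max(0, y)
        w, h = min(w, W - x), min(h, H - y)
        patch = background_frame[y:y + h, x:x + w].astype(np.float32)
        orig = out[y:y + h, x:x + w].astype(np.float32)
        my, mx = max(1, h // 8), max(1, w // 8)
        mask = np.zeros((h, w), np.float32)
        mask[my:h - my, mx:w - mx] = 1.0            # 1 inside, 0 at the border
        mask = cv2.GaussianBlur(mask, (0, 0), max(w, h) / 16.0)[..., None]
        out[y:y + h, x:x + w] = (mask * patch + (1.0 - mask) * orig).astype(np.uint8)
    return out
```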

Unified coding scheme of speech and music (음악 및 음성 신호의 융합 압축 기술)

  • O, Eun-Mi
    • Broadcasting and Media Magazine
    • /
    • v.16 no.4
    • /
    • pp.59-71
    • /
    • 2011
  • Audio and speech compression rest on different technical foundations, but with the recent convergence of mobile multimedia devices, the signals to be compressed are increasingly mixed, and target bit rates and quality levels are converging as well. At present, different compression technologies are applied within the same device, but for multimedia devices that serve speech and music simultaneously, handling both with a single coding scheme has become an important issue. In particular, considering the popularization of smartphones and music content portal services, a unified compression technology that efficiently compresses both speech and music signals is increasingly needed. This article introduces the background and standardization status of Unified Speech and Audio Coding (USAC), the most recent work of the MPEG audio group. Below 64 kbps, USAC delivers better quality than AMR-WB+ and HE-AAC v2, the technically best-performing codecs in that range, and it guarantees equivalent quality at higher bit rates. We examine the switching structure of USAC that contributes to this quality, together with its technically improved core modules: parametric stereo coding, high-frequency (bandwidth extension) coding, and the entropy coding scheme. USAC, which efficiently compresses diverse audio signals, is likely to be used in scenarios such as digital radio, mobile TV, and audio books. In addition, because USAC also performs well in the presence of background noise or background music, it can be useful for user-generated content such as YouTube videos and podcasts.

A Hardware Implementation of Moving Object Detection Algorithm using Gaussian Mixture Model (가우시안 혼합 모델을 이용한 이동 객체 검출 알고리듬의 하드웨어 구현)

  • Kim, Gyeong-hun;An, Hyo-Sik;Shin, Kyung-wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.407-409
    • /
    • 2015
  • In this paper, a hardware implementation of a moving object detection (MOD) algorithm is described, based on a Gaussian mixture model (GMM) and background subtraction. Effective Gaussian Mixture Learning (EGML) is used to model and update the background. Some approximations of the EGML calculations are applied to reduce hardware complexity, and pipelining is used to improve operating speed. The Gaussian parameters are adjustable for various environmental conditions to achieve better MOD performance. The MOD processor is verified using FPGA-in-the-loop verification and operates at a 109 MHz clock frequency on an XC5VSX95T FPGA device.
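
A software sketch of the underlying background-subtraction idea is shown below; it keeps only a single running Gaussian per pixel rather than the EGML mixture with hardware-friendly approximations, and the learning rate and threshold are illustrative.

```python
import numpy as np

class GaussianBackgroundModel:
    """Simplified sketch of GMM-style background subtraction: one running Gaussian
    per pixel (the paper uses a multi-Gaussian EGML model). alpha is the learning
    rate, k the match threshold in standard deviations."""
    def __init__(self, first_frame, alpha=0.01, k=2.5):
        self.mean = first_frame.astype(np.float32)
        self.var = np.full(first_frame.shape, 15.0 ** 2, np.float32)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(np.float32)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var   # pixels that don't match
        # online update of mean/variance only where the pixel matches the background
        match = ~foreground
        self.mean[match] += self.alpha * diff[match]
        self.var[match] += self.alpha * (diff[match] ** 2 - self.var[match])
        return foreground.astype(np.uint8) * 255
```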


A Content-Based Image Classification using Neural Network (신경망을 이용한 내용기반 영상 분류)

  • 이재원;김상균
    • Journal of Korea Multimedia Society
    • /
    • v.5 no.5
    • /
    • pp.505-514
    • /
    • 2002
  • In this paper, we propose a method of content-based image classification using a neural network. The images for classification are object images that can be divided into foreground and background. To deal with the object images efficiently, the object region is extracted with a region segmentation technique in a preprocessing step. The classification features are texture and shape features extracted from the wavelet-transformed image. The neural network classifier is built from the extracted features using the back-propagation learning algorithm. Among the various texture features, the diagonal moment was the most effective. A test with 300 training images and 300 test images, composed of 10 images from each of 30 classes, shows correct classification rates of 72.3% and 67%, respectively.
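
A minimal sketch of such a feature-extraction and classification pipeline, assuming PyWavelets and scikit-learn and not the paper's exact feature set, could look like this:

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_texture_features(gray_image):
    """Sketch: texture features from a 2-D wavelet transform (illustrative, not
    necessarily the paper's features). Uses energy and variance per sub-band."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_image.astype(np.float32), "haar")
    feats = []
    for band in (cA, cH, cV, cD):
        feats.append(np.mean(band ** 2))        # sub-band energy
        feats.append(np.var(band))              # second central moment
    return np.array(feats)

# Back-propagation classifier over the extracted features (hypothetical arrays
# train_images / y_train are assumed to exist).
# X_train = np.stack([wavelet_texture_features(img) for img in train_images])
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X_train, y_train)
```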


Multi-Small Target Tracking Algorithm in Infrared Image Sequences (적외선 연속 영상에서 다중 소형 표적 추적 알고리즘)

  • Joo, Jae-Heum
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.1
    • /
    • pp.33-38
    • /
    • 2013
  • In this paper, we propose an algorithm to track multiple small targets in infrared image sequences, even when targets disappear or appear, by using a background estimation filter, a Kalman filter, and the mean shift algorithm. We detect target candidates in a still image from the difference between the original image and the background estimation image, and we track multiple targets with a Kalman filter and target selection. Finally, we refine the target positions with the mean shift algorithm. In the experiments, we compare the performance of several background estimation filters and verify that the proposed algorithm outperforms classic methods.
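
The Kalman prediction and mean shift refinement steps can be sketched with OpenCV as below; the constant-velocity model, noise covariances, and window size are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

def make_kalman():
    """Sketch: constant-velocity Kalman filter for one small target (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def refine_with_meanshift(prob_map, predicted_xy, win=15):
    """Refine a predicted target position with mean shift on an 8-bit detection
    probability map (e.g. the absolute difference to the background estimate)."""
    x = max(0, int(predicted_xy[0]) - win // 2)
    y = max(0, int(predicted_xy[1]) - win // 2)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, (rx, ry, rw, rh) = cv2.meanShift(prob_map, (x, y, win, win), criteria)
    return rx + rw / 2.0, ry + rh / 2.0
```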

Test Case Generation Technique for Interoperability Testing (상호운용성 테스트를 위한 테스트케이스 생성 기법)

  • Lee Ji-Hyun;Noh Hye-Min;Yoo Cheol-Jung;Chang Ok-Bae;Lee Jun-Wook
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.1
    • /
    • pp.44-57
    • /
    • 2006
  • With the rapid growth of network technology, the latest systems integrate two or more products from different vendors that interact with each other to perform a given function. Interoperability testing is therefore considered an essential aspect of verifying the correctness of integrated systems; it tests the ability of software and hardware on different machines from different vendors to share data. Most previous studies model communication system behavior with Extended Finite State Machines (EFSMs) and use the EFSM as the input to a test scenario generation algorithm. There have been many studies on systematic and optimal test case generation algorithms using EFSMs, but research on generating the EFSM model itself, which is the foundation of test scenario generation, has been insufficient. This study proposes a technique for generating an EFSM from an informal requirement analysis document for more complete interoperability testing, and implements a prototype test case generation tool that generates test cases semi-automatically. We also describe the theoretical basis and the algorithms applied in the prototype implementation.
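
A minimal sketch of an EFSM data structure and a simple transition-coverage test case generator (standing in for the paper's algorithm, which is not specified in the abstract) might look like this:

```python
from collections import deque

class EFSM:
    """Minimal EFSM sketch: states plus transitions labeled with an input event,
    a guard predicate, and an action on the context variables (illustrative only)."""
    def __init__(self, initial):
        self.initial = initial
        self.transitions = []   # (src, dst, event, guard, action)

    def add(self, src, dst, event, guard=lambda ctx: True, action=lambda ctx: None):
        self.transitions.append((src, dst, event, guard, action))

def transition_coverage_paths(efsm):
    """Generate one test sequence (event path from the initial state) per transition,
    a simple transition-coverage criterion."""
    cases = []
    for target in efsm.transitions:
        # BFS for the shortest event path from the initial state to the target's source
        queue, seen = deque([(efsm.initial, [])]), {efsm.initial}
        while queue:
            state, path = queue.popleft()
            if state == target[0]:
                cases.append(path + [target[2]])
                break
            for (s, d, e, _, _) in efsm.transitions:
                if s == state and d not in seen:
                    seen.add(d)
                    queue.append((d, path + [e]))
    return cases
```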