• Title/Abstract/Keywords: Real scene image

Search results: 223 items (processing time: 0.027 s)

CG와 동영상의 지적합성 (Intelligent Composition of CG and Dynamic Scene)

  • 박종일;정경훈;박경세;송재극
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송공학회 1995년도 학술대회
    • /
    • pp.77-81
    • /
    • 1995
  • Video composition is to integrate multiple image materials into one scene. It considerably enhances the degree of freedom in producing various scenes. However, we need to adjust the viewing points and the image planes of the image materials for high-quality video composition. In this paper, we propose an intelligent video composition technique concentrating on the composition of CG and real scene. We first model the camera system. The projection is assumed to be perspective, and the camera motion is assumed to be 3D rotational and 3D translational. Then, we automatically extract the camera parameters comprising the camera model from the real scene by a dedicated algorithm. After that, the CG scene is generated according to the camera parameters of the real scene. Finally, the two are composed into one scene. Experimental results justify the validity of the proposed method.
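The camera model the abstract assumes (perspective projection with a 3D rotation and 3D translation) can be sketched as follows; the function name, the simplified single-focal-length pinhole model, and the example values are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def project(points_3d, R, t, f):
    """Project 3D world points through a perspective camera with
    rotation R (3x3), translation t (3,), and focal length f."""
    cam = points_3d @ R.T + t             # world -> camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]   # perspective divide

# Identity pose: a point at (1, 2, 4) with f = 2 projects to (0.5, 1.0).
pts = np.array([[1.0, 2.0, 4.0]])
uv = project(pts, np.eye(3), np.zeros(3), f=2.0)
print(uv)  # [[0.5 1. ]]
```

Estimating R, t, and f from the real scene (the paper's dedicated algorithm) is the hard part; once recovered, rendering the CG with the same parameters makes the composite geometrically consistent.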

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems
    • /
    • 제19권4호
    • /
    • pp.427-438
    • /
    • 2023
  • Plenty of works have indicated that single image super-resolution (SISR) models relying on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The up-to-date dataset for realistic STISR is TextZoom, but the current methods trained on this dataset have not considered the effect of multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. The multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attentions are introduced to capture the local information and inter-channel interaction information of text images; finally, this paper designs a multi-scale residual attention module by skillfully fusing multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the scene text recognizer (ASTER) by 1.2% compared to the text super-resolution network.
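The channel and spatial attentions the abstract mentions can be illustrated with a minimal NumPy sketch; the sigmoid-of-mean gating and the sequential fusion order are simplifying assumptions for illustration, not the paper's actual module.

```python
import numpy as np

def channel_attention(feat):
    """Gate each channel by a sigmoid of its global average (feat: C,H,W)."""
    w = 1.0 / (1.0 + np.exp(-feat.mean(axis=(1, 2))))   # (C,) weights in (0,1)
    return feat * w[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location by a sigmoid of the channel-averaged map."""
    m = 1.0 / (1.0 + np.exp(-feat.mean(axis=0)))        # (H, W) weights in (0,1)
    return feat * m[None, :, :]

def fused(feat):
    # Apply channel attention first, then spatial attention.
    return spatial_attention(channel_attention(feat))

out = fused(np.random.rand(8, 16, 16))
print(out.shape)  # (8, 16, 16)
```

Real attention modules learn the gating weights with small convolutional or fully connected layers; this sketch only shows where the two kinds of weights act on the feature tensor.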

도로교통 영상처리를 위한 고속 영상처리시스템의 하드웨어 구현 (An Onboard Image Processing System for Road Images)

  • 이운근;이준웅;조석빈;고덕화;백광렬
    • 제어로봇시스템학회논문지
    • /
    • 제9권7호
    • /
    • pp.498-506
    • /
    • 2003
  • A computer vision system applied to an intelligent safety vehicle is required to run on small-sized, real-time, special-purpose hardware rather than on a general-purpose computer. In addition, the system should be highly reliable even in adverse road traffic environments. This paper presents the design and implementation of an onboard hardware system for high-speed image processing to analyze road traffic scenes. The system is mainly composed of two parts: an early processing module on an FPGA and a postprocessing module on a DSP. The early processing module is designed to extract several image primitives, such as the intensity of a gray-level image and edge attributes, in real time; in particular, the module is optimized for the Sobel edge operation. The postprocessing module on the DSP utilizes the image features from the early processing module for image understanding or image analysis of a road traffic scene. The performance of the proposed system is evaluated by an experiment on lane-related information extraction. The experiment shows successful results with an image processing speed of twenty-five frames of 320×240 pixels per second.

Modeling the Visual Target Search in Natural Scenes

  • Park, Daecheol;Myung, Rohae;Kim, Sang-Hyeob;Jang, Eun-Hye;Park, Byoung-Jun
    • 대한인간공학회지
    • /
    • 제31권6호
    • /
    • pp.705-713
    • /
    • 2012
  • Objective: The aim of this study is to predict human visual target search in real scene images using the ACT-R cognitive architecture. Background: Humans use bottom-up and top-down processes at the same time, relying on characteristics of the image itself and on knowledge about images; modeling of human visual search therefore needs to include both processes. Method: In this study, visual target search performance in real scene images was analyzed by comparing experimental data with the results of an ACT-R model. Ten students participated in the experiment, and the model was simulated ten times. The experiment was conducted under two conditions, indoor images and outdoor images. An ACT-R model was established that determines the first saccade region by calculating the saliency map and the spatial layout; the proposed model used these as guides for visual search and adopted visual search strategies according to the guide. Results: The analysis found no significant difference in performance time between the model prediction and the empirical data. Conclusion: The proposed ACT-R model is able to predict the human visual search process in real scene images using the saliency map and spatial layout. Application: This study is useful for conducting model-based evaluation of visual search, particularly in real images. It can also be adopted in diverse image-processing programs, such as aids for the visually impaired.

고속 컨텐츠 인식 동영상 리타겟팅 기법 (Fast Content-Aware Video Retargeting Algorithm)

  • 박대현;김윤
    • 한국컴퓨터정보학회논문지
    • /
    • 제18권11호
    • /
    • pp.77-86
    • /
    • 2013
  • This paper proposes a fast video retargeting technique that resizes video frames while preserving the main content. In conventional seam carving, the cumulative energy must be updated every time a seam is extracted, and since the cumulative energy is computed by dynamic programming, a delay in the overall computation time is unavoidable. In this paper, the whole video is divided into scenes with similar features; in the first frame of each scene, multiple seams are extracted at once from all seam candidates, reducing the number of cumulative-energy updates and thus speeding up the process. From the second frame of each scene onward, the correlation between adjacent frames is exploited: instead of recomputing the cumulative energy, all seams are extracted solely by referring to the seam information of the previous frame. The proposed system therefore greatly reduces the computation spent on cumulative energy, does not require analyzing all frames, and avoids content jitter. Experimental results show that the proposed method is suitable for real-time processing in terms of speed and memory usage, and that it can resize video while preserving its content.
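The cumulative-energy dynamic program at the core of seam carving, whose repeated updates this paper avoids, can be sketched in a few lines; the toy energy map is an illustrative example.

```python
import numpy as np

def vertical_seam(energy):
    """Minimum-energy vertical seam via dynamic programming:
    M[i, j] = E[i, j] + min(M[i-1, j-1], M[i-1, j], M[i-1, j+1])."""
    h, w = energy.shape
    M = energy.copy()
    for i in range(1, h):
        left = np.r_[np.inf, M[i - 1, :-1]]
        right = np.r_[M[i - 1, 1:], np.inf]
        M[i] += np.minimum(np.minimum(left, M[i - 1]), right)
    # Backtrack from the minimum of the bottom row.
    seam = [int(np.argmin(M[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(M[i, lo:hi])))
    return seam[::-1]  # column index of the seam in each row, top to bottom

energy = np.array([[1., 9., 9.],
                   [9., 1., 9.],
                   [9., 9., 1.]])
print(vertical_seam(energy))  # [0, 1, 2]
```

Because each removed seam invalidates M, standard seam carving recomputes this table per seam; the paper's batched per-scene extraction and seam reuse across adjacent frames cut exactly this cost.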

각도 정보를 이용한 카메라 보정 알고리듬 (A Calibration Algorithm Using Known Angle)

  • 권인소;하종은
    • 제어로봇시스템학회논문지
    • /
    • 제10권5호
    • /
    • pp.415-420
    • /
    • 2004
  • We present a new algorithm for the calibration of a camera and the recovery of 3D scene structure, up to a scale, from image sequences, using known angles between lines in the scene. Traditional calibration methods using scene constraints require various scene constraints because of the stratified approach. The proposed method requires only one type of scene constraint, a known angle, and it directly recovers metric structure, up to an unknown scale, from projective structure. Specifically, we recover the matrix that is the homography between the projective structure and the Euclidean structure using angles. Since this matrix is unique for a given set of image sequences, we can easily deal with the problem of varying intrinsic parameters of the camera. Experimental results on synthetic and real images demonstrate the feasibility of the proposed algorithm.

동적 환경에서의 효과적인 움직이는 객체 추출 (An Effective Background Subtraction in Dynamic Scenes)

  • 한재혁;김용진;유세운;이상화;박종일
    • 한국HCI학회:학술대회논문집
    • /
    • 한국HCI학회 2009년도 학술대회
    • /
    • pp.631-636
    • /
    • 2009
  • In computer vision, segmentation methods for extracting the foreground have been studied actively. In particular, background subtraction, which extracts the foreground from the difference between the current frame and a background image free of foreground objects, yields high-quality foreground extraction at a modest computational cost, so it is widely applied in vision systems that require real-time processing. However, background subtraction alone cannot extract an accurate foreground when the background changes dynamically. This paper proposes a method that performs segmentation effectively in environments where static and dynamic backgrounds coexist. The proposed method is a hybrid: in static background regions, the foreground is extracted by conventional background subtraction, while in dynamic background regions, it is extracted using depth information. The effectiveness of the proposed method was verified in an environment where a dynamic video is projected onto a static background by a projector.
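The hybrid scheme the abstract describes can be sketched in NumPy; the function name, the per-pixel dynamic-region mask, and the threshold values are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def hybrid_foreground(frame, background, depth, dyn_mask,
                      color_thr=30.0, depth_thr=0.5):
    """Hybrid segmentation sketch: background subtraction in static
    regions, depth thresholding where dyn_mask is True."""
    static_fg = np.abs(frame - background) > color_thr  # intensity difference
    depth_fg = depth < depth_thr                        # closer than background
    return np.where(dyn_mask, depth_fg, static_fg)

# Toy example: one bright pixel in the static region, one near object
# in the dynamic (bottom) row.
frame = np.full((4, 4), 100.0); frame[0, 0] = 200.0
background = np.full((4, 4), 100.0)
depth = np.ones((4, 4)); depth[3, 3] = 0.2
dyn = np.zeros((4, 4), dtype=bool); dyn[3, :] = True
print(hybrid_foreground(frame, background, depth, dyn).sum())  # 2
```

Depth is insensitive to the projected imagery that corrupts intensity differences, which is why it takes over in the dynamic regions.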


Development of 3D Stereoscopic Image Generation System Using Real-time Preview Function in 3D Modeling Tools

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • 한국멀티미디어학회논문지
    • /
    • 제11권6호
    • /
    • pp.746-754
    • /
    • 2008
  • A 3D stereoscopic image is generated by interleaving every scene with video editing tools, using views rendered from two cameras in 3D modeling tools such as Autodesk MAX(R) and Autodesk MAYA(R). However, the depth of an object in a static scene and the continuous stereo effect under view transformation are not represented naturally, because after choosing arbitrary settings for the angle of convergence and the distance between the model and the two cameras, the user must render the view from both cameras. The user therefore has to repeat a process of adjusting the camera interval and re-rendering, which takes too much time. In this paper, we propose a 3D stereoscopic image editing system that solves these problems. The system generates the views of the two cameras and confirms the stereo effect in real time within the 3D modeling tools, so the user can intuitively judge the immersion of the 3D stereoscopic image in real time by using the 3D stereoscopic preview function.


A Real-time Universal Indoor Parking System: Applying 3D-scene Interpretation and Motion Analysis

  • Lee, Changheun;Ahn, Doehyon
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2002년도 ICCAS
    • /
    • pp.99.6-99
    • /
    • 2002
  • Contents: 1. Introduction / 2. The system implementation / 3. Low-level image processing / 4. Scene sequence interpretation / 5. Implementation and experimental results


직선 조합의 에너지 전파를 이용한 고속 물체인식 (Fast Object Recognition Using Local Energy Propagation from Combinations of Salient Line Groups)

  • 강동중
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2000년도 제15차 학술회의논문집
    • /
    • pp.311-311
    • /
    • 2000
  • We propose a DP-based formulation for matching line patterns by defining a robust and stable geometric representation based on conceptual organizations. The endpoint proximity and collinearity of image lines, the two main conceptual organization groups, are useful cues for matching the model shape in the scene. For endpoint proximity, we detect junctions from image lines and then search for junction groups using geometric constraints between the junctions. A junction chain similar to the model chain is searched for in the scene based on local comparison, and a dynamic-programming-based search algorithm reduces the time complexity of searching for the model chain in the scene. Our system can find a reasonable matching even when severely distorted objects exist in the scene. We demonstrate the feasibility of the DP-based matching method using both synthetic and real images.
