• Title/Abstract/Keywords: Scene Recognition

Search results: 193

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems / Vol. 19, No. 4 / pp.427-438 / 2023
  • Many works have indicated that single image super-resolution (SISR) models relying on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most up-to-date dataset for realistic STISR is TextZoom, but the current methods trained on this dataset have not considered the effect of multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attention is introduced to capture the local information and inter-channel interaction information of text images; finally, a multi-scale residual attention module is designed by fusing the multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the scene text recognizer (ASTER) by 1.2% compared with the text super-resolution network.
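The core idea, fusing multi-scale convolution branches with channel and spatial attention inside a residual block, can be illustrated with a minimal PyTorch sketch. The kernel sizes, channel width, and reduction ratio below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiScaleResidualAttention(nn.Module):
    """Sketch of a multi-scale residual attention block (illustrative sizes)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # Multi-scale branches with 3x3 and 5x5 receptive fields.
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        # Channel attention: global pooling -> bottleneck MLP -> sigmoid weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, channels, 1), nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over channel-pooled maps -> sigmoid mask.
        self.spatial_att = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        feat = self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))
        feat = feat * self.channel_att(feat)                      # channel re-weighting
        pooled = torch.cat([feat.mean(1, keepdim=True),
                            feat.max(1, keepdim=True)[0]], dim=1)
        feat = feat * self.spatial_att(pooled)                    # spatial re-weighting
        return x + feat                                           # residual connection
```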

실내 환경 이미지 매칭을 위한 GMM-KL프레임워크 (GMM-KL Framework for Indoor Scene Matching)

  • Kim, Jun-Young;Ko, Han-Seok
    • 대한전기학회:학술대회논문집 / 대한전기학회 2005년도 학술대회 논문집 정보 및 제어부문 / pp.61-63 / 2005
  • Retrieving an indoor scene reference image from a database using visual information is an important issue in robot navigation. The scene matching problem for a navigation robot is not easy because the input image taken during navigation is affinely distorted. We present a probabilistic framework for matching between features in the input image and features in the database reference images to guarantee robust scene matching. By recasting scene matching in this probabilistic framework, we obtain higher precision than the existing feature-to-feature matching scheme. To construct the probabilistic framework, we represent each image as a Gaussian mixture model fitted with the expectation-maximization algorithm over SIFT (Scale Invariant Feature Transform) features.
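A rough sketch of the pipeline described above, under the assumptions that OpenCV provides the SIFT descriptors, scikit-learn fits the per-image GMMs, and the KL divergence between GMMs (which has no closed form) is approximated by Monte Carlo sampling; the component count and sample size are illustrative.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def image_to_gmm(gray, n_components=8):
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)       # 128-D SIFT descriptors
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(desc)
    return gmm

def kl_divergence(gmm_p, gmm_q, n_samples=2000):
    # KL(p || q) estimated by sampling from p and comparing log-likelihoods.
    x, _ = gmm_p.sample(n_samples)
    return float(np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x)))

def best_match(query_gray, reference_grays):
    q = image_to_gmm(query_gray)
    refs = [image_to_gmm(r) for r in reference_grays]
    # The reference whose GMM is closest to the query GMM in the KL sense wins.
    return int(np.argmin([kl_divergence(q, r) for r in refs]))
```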


Comparisons of Object Recognition Performance with 3D Photon Counting & Gray Scale Images

  • Lee, Chung-Ghiu;Moon, In-Kyu
    • Journal of the Optical Society of Korea / Vol. 14, No. 4 / pp.388-394 / 2010
  • In this paper, the object recognition performance of a photon counting integral imaging system is quantitatively compared with that of a conventional gray scale imaging system. For 3D imaging of objects with a small number of photons, the elemental image set of a 3D scene is obtained using the integral imaging setup. We assume that the elemental image detection follows a Poisson distribution. A computational geometrical ray back-propagation algorithm and a parametric maximum likelihood estimator are applied to the photon counting elemental image set in order to reconstruct the original 3D scene. To evaluate the photon counting object recognition performance, the normalized correlation peaks between the reconstructed 3D scenes are calculated while the total number of image channels in the integral imaging system is changed, for both varied and fixed total numbers of photons in the reconstructed sectional image. It is quantitatively illustrated that the recognition performance of the photon counting integral imaging (PCII) system can approach that of a conventional gray scale imaging system as the number of image viewing channels is increased up to a threshold point. We also present experiments to find the threshold point on the total number of image channels in the PCII system that guarantees recognition performance comparable to a gray scale imaging system. To the best of our knowledge, this is the first report comparing object recognition performance with 3D photon counting and gray scale images.
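A toy illustration of photon-limited detection followed by Poisson maximum-likelihood reconstruction is sketched below. It assumes a single 2-D irradiance map stands in for the elemental image set and that, for a common per-pixel rate, the ML estimate reduces to averaging the photon counts over channels; the function names are hypothetical.

```python
import numpy as np

def photon_counting_channels(irradiance, n_photons, n_channels, rng=None):
    rng = rng or np.random.default_rng(0)
    p = irradiance / irradiance.sum()          # normalized scene irradiance
    rate = n_photons * p                       # expected photons per pixel
    return [rng.poisson(rate) for _ in range(n_channels)]

def ml_reconstruction(channels):
    # For Poisson counts sharing a common rate, the ML estimate is the channel mean.
    return np.mean(channels, axis=0)

def normalized_correlation_peak(recon, reference):
    r = (recon - recon.mean()) / (recon.std() + 1e-12)
    t = (reference - reference.mean()) / (reference.std() + 1e-12)
    return float(np.mean(r * t))               # peak of the normalized correlation
```

Increasing `n_channels` while keeping the photon budget fixed mimics the experiment of varying the number of image viewing channels and watching the correlation peak approach the gray scale baseline.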

Construction Site Scene Understanding: A 2D Image Segmentation and Classification

  • Kim, Hongjo;Park, Sungjae;Ha, Sooji;Kim, Hyoungkwan
    • 국제학술발표논문집 / The 6th International Conference on Construction Engineering and Project Management / pp.333-335 / 2015
  • A computer vision-based scene recognition algorithm is proposed for monitoring construction sites. The system analyzes images acquired from a surveillance camera to separate regions and classify them as building, ground, or hole. The mean shift image segmentation algorithm is tested for separating meaningful regions of construction site images. The system would benefit current monitoring practices in that the information extracted from images could capture the environmental context.
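The mean shift stage can be approximated with OpenCV's pyrMeanShiftFiltering as a stand-in: it performs the mean shift smoothing, after which connected regions of similar color can be grouped and classified. The radii below are illustrative, and the building/ground/hole classifier is not shown.

```python
import cv2

def mean_shift_regions(bgr_image, spatial_radius=21, color_radius=30):
    # Spatial/color radii control how aggressively neighboring pixels are merged.
    return cv2.pyrMeanShiftFiltering(bgr_image, spatial_radius, color_radius)
```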


마커 없는 증강 현실 구현을 위한 물체인식 (Object Recognition for Markerless Augmented Reality Embodiment)

  • 폴 안잔 쿠마;이형진;김영범;이슬람 모하마드 카이룰;백중환
    • 한국항행학회논문지 / Vol. 13, No. 1 / pp.126-133 / 2009
  • In this paper, we propose an object recognition technique for implementing markerless augmented reality. First, the SIFT (Scale Invariant Feature Transform) algorithm is used to find feature points in the object image; these feature points have the advantage of being invariant to scale, rotation, and translation, and they are also partially invariant to illumination changes. Using the independent characteristics of the extracted feature points, matching points in another image of the scene can be found; once a match with the trained image is established, the matched points are used to locate the object in the scene. In this work, the object is recognized in the current frame by matching against a template image generated from the first frame of the scene. Recognition experiments on four kinds of objects confirm that the proposed method performs well.
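A hedged sketch of the kind of pipeline the abstract describes, assuming OpenCV SIFT features, ratio-test matching, and a RANSAC homography to localize the template in the current frame; the parameter values are illustrative.

```python
import cv2
import numpy as np

def locate_object(template_gray, frame_gray, min_matches=10):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template_gray, None)
    kp2, des2 = sift.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.7 * n.distance]
    if len(good) < min_matches:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # maps the template corners into the frame for overlay placement
```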


Density Change Adaptive Congestive Scene Recognition Network

  • Jun-Hee Kim;Dae-Seok Lee;Suk-Ho Lee
    • International journal of advanced smart convergence / Vol. 12, No. 4 / pp.147-153 / 2023
  • In recent times, the absence of effective crowd management has led to numerous stampede incidents in crowded places. A crucial component for enhancing on-site crowd management is crowd counting technology. Current approaches to analyzing congested scenes have evolved beyond simple crowd counting, which outputs only the number of people in the target image, toward estimating a density map. This development aligns with the demands of real-life applications, since the same number of people can exhibit vastly different crowd distributions, so counting alone is no longer sufficient. CSRNet stands out as one representative method in this advanced category. In this paper, we propose a crowd counting network that is adaptive to changes in the density of people in the scene, addressing the performance degradation observed in the existing CSRNet (Congested Scene Recognition Network) when the density changes. To overcome this weakness, we introduce a system that takes the image's information as input and adjusts the output of CSRNet based on features extracted from the image, improving the algorithm's adaptability to density changes and supplementing the shortcomings identified in the original CSRNet.
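The idea of correcting a density-map regressor with an image-level factor can be sketched as follows; `csrnet` stands for any pretrained density-map network, and the adapter head is an illustrative assumption rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class DensityAdapter(nn.Module):
    """Rescale a predicted density map with a per-image correction factor."""
    def __init__(self, csrnet: nn.Module, feat_channels: int = 512):
        super().__init__()
        self.csrnet = csrnet
        # Predict a positive per-image factor from globally pooled features.
        self.scale_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_channels, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1), nn.Softplus(),
        )

    def forward(self, image, features):
        density = self.csrnet(image)               # raw density map
        factor = self.scale_head(features)         # image-level density correction
        density = density * factor.view(-1, 1, 1, 1)
        count = density.sum(dim=(1, 2, 3))         # crowd count = integral of density
        return density, count
```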

3차원 손 특징을 이용한 손 동작 인식에 관한 연구 (A study on hand gesture recognition using 3D hand feature)

  • 배철수
    • 한국정보통신학회논문지 / Vol. 10, No. 4 / pp.674-679 / 2006
  • In this paper, we propose a gesture recognition system that uses 3D hand feature data. The proposed system generates a dense range image with a 3D sensor, extracts 3D features of the hand motion, and classifies the gesture. It also segments the hand robustly under various lighting and background conditions and, because it does not depend on color information, shows robust recognition even for complex hand motions such as sign language. The overall procedure consists of 3D image acquisition, arm segmentation, hand and wrist segmentation, hand pose estimation, 3D feature extraction, and gesture classification; recognition experiments on sign-language postures demonstrate the effectiveness of the proposed system.
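As a minimal sketch of the final classification stage only, assuming the 3D hand features have already been extracted into fixed-length vectors and substituting a generic SVM for the classifier (the segmentation and pose estimation stages are not modeled):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_gesture_classifier(feature_vectors, labels):
    # feature_vectors: (n_samples, n_features) array of 3D hand descriptors
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(np.asarray(feature_vectors), np.asarray(labels))
    return clf
```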

가상 세계 거주자의 지각 메커니즘 설계 및 구현 (Design and Implementation of the Perception Mechanism for the Agent in the Virtual World)

  • 박재우;정근재;박종희
    • 한국콘텐츠학회논문지 / Vol. 11, No. 8 / pp.1-13 / 2011
  • To create a human-like agent in a virtual world, careful design of perception, recognition, judgment, and action is important. In this context, we develop the perception and recognition functions of an autonomous agent. Starting from distinguishing shapes and regions by the color differences of the image, the most primitive data acquired within the field of view, we develop a perception mechanism that uses points, lines, and colors as basic units. To recognize the perceived shapes intelligently, we develop an inference algorithm that infers the original shape from an occluded or partially lost shape, using general property information about objects obtained from an ontology. Individually identified two-dimensional shapes and their spatial relations to other shapes form three-dimensional shapes, and the objects with those shapes compose scenes. Each three-dimensional shape occupies its own region in a scene, and the agent recognizes objects and phenomena by analyzing the objects and scenes. Using this scene recognition capability, we develop a method by which the agent accumulates and uses knowledge in the spatio-temporal domain, and we show the implementation results through an example situation.
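One simple way to group pixels into regions by color difference, in the spirit of the perception mechanism described above, is sketched below. The coarse quantization step and connected-component labeling are assumptions; the point/line primitives and the ontology-based inference are not modeled.

```python
import numpy as np
from scipy import ndimage

def color_regions(rgb, step=32):
    # Coarsely quantize the color, then label connected components of equal color;
    # pixels whose colors differ by less than the step size tend to share a region.
    q = np.asarray(rgb, dtype=np.int32) // step
    key = q[..., 0] * 64 + q[..., 1] * 8 + q[..., 2]   # one id per quantized color
    labels = np.zeros(key.shape, dtype=np.int32)
    next_label = 0
    for color in np.unique(key):
        comp, n = ndimage.label(key == color)          # connected pixels of this color
        mask = comp > 0
        labels[mask] = comp[mask] + next_label
        next_label += n
    return labels                                      # region labels start at 1
```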

직선 조합의 에너지 전파를 이용한 고속 물체인식 (Fast Object Recognition using Local Energy Propagation from Combination of Salient Line Groups)

  • 강동중
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.311-311 / 2000
  • We propose a DP-based formulation for matching line patterns by defining a robust and stable geometric representation based on conceptual organizations. The endpoint proximity and collinearity of image lines, as two main conceptual organization groups, are useful cues for matching the model shape in the scene. As the endpoint-proximity cue, we detect junctions from the image lines and then search for junction groups using geometric constraints between the junctions. A junction chain similar to the model chain is searched for in the scene based on local comparison, and a dynamic programming-based search algorithm reduces the time complexity of this search. Our system can find a reasonable match even when severely distorted objects exist in the scene. We demonstrate the feasibility of the DP-based matching method using both synthetic and real images.
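A schematic version of the DP search over junction chains, assuming each junction is summarized by a single opening angle and using illustrative match and gap costs (the actual geometric representation in the paper is richer):

```python
import numpy as np

def dp_chain_match(model_angles, scene_angles, gap_cost=1.0):
    """Align a model junction chain to a scene junction chain by dynamic programming."""
    m, n = len(model_angles), len(scene_angles)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, :] = 0.0                                    # the chain may start anywhere in the scene
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = D[i - 1, j - 1] + abs(model_angles[i - 1] - scene_angles[j - 1])
            skip_scene = D[i, j - 1] + gap_cost      # tolerate extra scene junctions
            skip_model = D[i - 1, j] + gap_cost      # tolerate missing model junctions
            D[i, j] = min(match, skip_scene, skip_model)
    end = int(np.argmin(D[m, 1:])) + 1               # best end position in the scene
    return D[m, end], end
```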


Novel View Generation Using Affine Coordinates

  • Sengupta, Kuntal;Ohya, Jun
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 1997년도 Proceedings International Workshop on New Video Media Technology / pp.125-130 / 1997
  • In this paper we present an algorithm to generate new views of a scene, starting from images taken by weakly calibrated cameras. Errors in 3D scene reconstruction usually get reflected in the quality of the newly generated view, so we seek a direct method for reprojection. We use the knowledge of dense point matches and their affine coordinate values to estimate the corresponding affine coordinate values in the new scene. We borrow ideas from the object recognition literature and extend them significantly to solve the reprojection problem. Unlike epipolar line intersection algorithms for reprojection, which require at least eight matched points across three images, we need only five matched points. The theory of reprojection is combined with hardware-based rendering to achieve fast rendering. We demonstrate our results of novel view generation from stereo pairs for arbitrary locations of the virtual camera.
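Under an affine camera, affine combinations are preserved by projection, which is the property such reprojection schemes exploit. The sketch below is a simplified reading of the idea: it assumes a basis of four matched points plus the point to transfer (which may correspond to the five matches mentioned above), and recovers the three affine coefficients by least squares from the reference views before transferring them to the new view.

```python
import numpy as np

def affine_coordinates(point_views, basis_views):
    # point_views: list of 2-D observations of the point in the reference views
    # basis_views: list of (4, 2) arrays giving the basis points in the same views
    rows, rhs = [], []
    for p, B in zip(point_views, basis_views):
        E = (B[1:] - B[0]).T                 # 2x3: basis edge vectors in this view
        rows.append(E)
        rhs.append(np.asarray(p) - B[0])
    A = np.vstack(rows)                      # (2 * n_views, 3)
    b = np.concatenate(rhs)
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha                             # affine coordinates, view independent

def reproject(alpha, basis_new_view):
    B = np.asarray(basis_new_view)           # (4, 2) basis points in the new view
    return B[0] + (B[1:] - B[0]).T @ alpha   # predicted 2-D location in the new view
```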
