• Title/Abstract/Keyword: Complex scene

Search results: 134 items (processing time 0.027 s)

압축 도메인 상에서 메크로 블록 타입과 DC 계수를 사용한 급격한 장면 변화 검출 알고리즘 (Abrupt Scene Change Detection Algorithm Using Macroblock Type and DC Coefficient in Compressed Domain)

  • 이흥렬;이웅희;이웅호;정동석
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅲ
    • /
    • pp.1527-1530
    • /
    • 2003
  • Video is an important and challenging medium that requires sophisticated indexing schemes for efficient retrieval from visual databases. Scene change detection is the first step in automatic indexing of video data. Recently, several scene change detection algorithms in the pixel and compressed domains have been reported in the literature. However, pixel-domain methods are computationally complex and not very robust at detecting scene changes. In this paper, we propose a robust abrupt scene change detection algorithm using macroblock types and DC coefficients. Experimental results show that the proposed algorithm robustly detects most abrupt scene changes in the compressed domain.

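The compressed-domain idea above can be illustrated with a minimal sketch: compare a per-frame statistic of the DC coefficients between consecutive frames and flag an abrupt change when the jump exceeds a threshold. This is only an illustrative simplification (the paper also uses macroblock types); the function name and threshold are assumptions, not the paper's.

```python
# Illustrative sketch only: abrupt scene change detection from per-frame
# DC coefficients. The real algorithm also inspects macroblock types.
def detect_scene_changes(dc_frames, threshold=30.0):
    """dc_frames: one list of DC coefficients per frame.
    Returns indices of frames flagged as abrupt changes."""
    changes = []
    prev_mean = None
    for i, dcs in enumerate(dc_frames):
        mean = sum(dcs) / len(dcs)  # coarse brightness proxy for the frame
        if prev_mean is not None and abs(mean - prev_mean) > threshold:
            changes.append(i)
        prev_mean = mean
    return changes
```

Because DC coefficients are available without full decoding, such a test runs far faster than pixel-domain differencing.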

Relation between Game Motivation and Preference to Cutscenes

  • 완소음;조동민
    • 만화애니메이션 연구
    • /
    • 통권36호
    • /
    • pp.573-592
    • /
    • 2014
  • With the rapid development of software and hardware technologies and ever-increasing computing power, video games can accommodate and process more and more content of growing complexity and refinement. The cutscene has developed into a main narrative device, becoming necessary for expressing key plots and important scenarios in games. A good cutscene can strengthen players' engagement with the game's virtual world and lead them to share the joys and sorrows of its characters, while a badly designed or overused cutscene impairs immersion and harms the gaming experience. Developers should therefore not design cutscenes solely from the designer's point of view, leaving players as passive receivers; instead, they should minimize the interruption that cutscenes cause to players' immersion. Only designs informed by players' demands and preferences, grounded in an understanding of how cutscenes affect immersion, will be accepted by players.

Cross Mask와 에지 정보를 사용한 동영상 분할 (Dynamic Scene Segmentation Algorithm Using a Cross Mask and Edge Information)

  • 강정숙;박래홍;이상욱
    • 대한전자공학회논문지
    • /
    • 제26권8호
    • /
    • pp.1247-1256
    • /
    • 1989
  • In this paper, we propose a dynamic scene segmentation algorithm using a cross mask and edge information. The method, a combination of the conventional feature-based and pixel-based approaches, uses edges as features and determines moving pixels with a cross mask centered on each edge pixel by computing a similarity measure between two consecutive image frames. With simple calculation, the proposed method works well for images consisting of a complex background or several moving objects. The method also works satisfactorily in the case of rotational motion.

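The cross-mask test described above can be sketched as follows: for each edge pixel, sum absolute intensity differences between two frames over a cross-shaped neighborhood and mark the pixel as moving when the sum is large. The mask radius, threshold, and function names are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of the cross-mask similarity test between two frames.
def cross_mask_offsets(radius=1):
    """Offsets of a cross-shaped mask: the centre plus arms along both axes."""
    arm = [d for d in range(-radius, radius + 1) if d != 0]
    return [(0, 0)] + [(d, 0) for d in arm] + [(0, d) for d in arm]

def moving_edge_pixels(frame1, frame2, edge_pixels, threshold=20):
    """Mark an edge pixel as moving when the absolute intensity difference,
    summed over the cross mask, exceeds the threshold."""
    h, w = len(frame1), len(frame1[0])
    moving = []
    for y, x in edge_pixels:
        diff = 0
        for dy, dx in cross_mask_offsets():
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:  # stay inside the image
                diff += abs(frame1[yy][xx] - frame2[yy][xx])
        if diff > threshold:
            moving.append((y, x))
    return moving
```

Restricting the test to edge pixels is what keeps the computation cheap compared to full pixel-based differencing.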

복잡한 배경에서 움직이는 물체의 영역분할에 관한 연구 (A Segmentation Method for a Moving Object on A Static Complex Background Scene.)

  • 박상민;권희웅;김동성;정규식
    • 대한전기학회논문지:전력기술부문A
    • /
    • 제48권3호
    • /
    • pp.321-329
    • /
    • 1999
  • Moving object segmentation extracts a moving object of interest from consecutive image frames and has been used for factory automation, autonomous navigation, video surveillance, and VOP (Video Object Plane) detection in MPEG-4. This paper proposes a new segmentation method in which difference images, calculated from three consecutive input frames, are used to obtain both a coarse object area (AI) and its movement area (OI). The AI is extracted by removing the background using background area projection (BAP). Missing parts of the AI are recovered with the help of the OI: boundary information from the OI confines the missing parts of the object and provides initial curves for active contour optimization. The optimized contours, together with the AI, form the boundaries of the moving object. Experimental results for a fast-moving object on a complex background scene are included.

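The three-frame differencing at the core of the method above is commonly realized as a double difference: the intersection of the two consecutive difference masks localizes the object in the middle frame. The sketch below shows only that step (the paper's BAP and active-contour stages are omitted), and the threshold value is an assumption.

```python
# Hedged sketch of three-frame (double) differencing: intersecting two
# consecutive difference masks yields a coarse object area for the middle frame.
def double_difference(f_prev, f_curr, f_next, threshold=15):
    h, w = len(f_curr), len(f_curr[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d1 = abs(f_curr[y][x] - f_prev[y][x]) > threshold  # motion prev->curr
            d2 = abs(f_next[y][x] - f_curr[y][x]) > threshold  # motion curr->next
            mask[y][x] = 1 if (d1 and d2) else 0
    return mask
```

The intersection suppresses the "ghost" regions that a single difference image leaves behind at the object's previous position.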

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • 제17권7호
    • /
    • pp.1807-1822
    • /
    • 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros at work or during travel. Formal signboards, placed with marketing objectives, are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex background, implicit noise, lighting, and orientation of these scenic text photos make the problem more difficult, and Arabic scene text extraction and recognition adds a number of further complications. The method described in this paper uses a two-phase methodology to extract Arabic text, with word-boundary awareness, from scenic images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by traditional two-layer neural networks for recognition. This study also presents how an Arabic training and synthetic dataset can be created to exemplify superimposed text in different scene images: a dataset of 10k cropped images containing Arabic text was created for the detection phase, and a 127k Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach, which is highly flexible in identifying complex Arabic text scene images such as arbitrarily oriented, curved, or deformed texts, is used for detection. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that future researchers will build on this work in image processing, improving noise handling for scene text images in any language by enhancing the functionality of the VGG-16-based model using neural networks.
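The two accuracy figures reported above (91.8% word segmentation, 94.2% character recognition) correspond to standard position-wise and exact-match metrics. A minimal sketch, assuming the usual definitions (the paper's exact evaluation protocol is not given here):

```python
# Hedged sketch of the evaluation metrics, as commonly defined.
def character_accuracy(predicted, ground_truth):
    """Fraction of positions where the predicted character matches."""
    assert len(predicted) == len(ground_truth)
    correct = sum(p == t for p, t in zip(predicted, ground_truth))
    return correct / len(ground_truth)

def word_accuracy(pred_words, true_words):
    """Fraction of words segmented/recognized exactly right."""
    correct = sum(p == t for p, t in zip(pred_words, true_words))
    return correct / len(true_words)
```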

명도 정보와 분할/합병 방법을 이용한 자연 영상에서의 텍스트 영역 추출 (Text Region Extraction of Natural Scene Images using Gray-level Information and Split/Merge Method)

  • 김지수;김수형;최영우
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • 제32권6호
    • /
    • pp.502-511
    • /
    • 2005
  • In this paper, we propose a hybrid analysis method (HAM) that uses gray-level information to extract text embedded in natural images. The proposed method combines gray-intensity information analysis with split/merge analysis. Extraction results show that the proposed method outperforms previous work on both simple and complex images.
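The split half of a split/merge scheme like the one above can be sketched as a recursive quadtree subdivision driven by gray-level homogeneity. This is a generic illustration, not the paper's HAM: the homogeneity criterion (intensity range) and the `max_range` value are assumptions, and the merge step is omitted.

```python
# Hedged sketch of the quadtree "split" step on a gray-level image
# (list of rows). Blocks are split until their intensity range is small.
def split_blocks(img, y, x, h, w, max_range=40):
    vals = [img[i][j] for i in range(y, y + h) for j in range(x, x + w)]
    if max(vals) - min(vals) <= max_range or h < 2 or w < 2:
        return [(y, x, h, w)]  # homogeneous (or too small): keep as a leaf
    h2, w2 = h // 2, w // 2
    blocks = []
    for by, bx, bh, bw in [(y, x, h2, w2), (y, x + w2, h2, w - w2),
                           (y + h2, x, h - h2, w2), (y + h2, x + w2, h - h2, w - w2)]:
        blocks += split_blocks(img, by, bx, bh, bw, max_range)
    return blocks
```

Text regions, with their strong gray-level contrast, survive as clusters of small blocks that a subsequent merge pass can group into candidate regions.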

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems
    • /
    • 제19권4호
    • /
    • pp.427-438
    • /
    • 2023
  • Plenty of works have indicated that single image super-resolution (SISR) models relying on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most recent dataset for realistic STISR is TextZoom, but current methods trained on it have not considered the effect of multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attentions are introduced to capture the local information and inter-channel interaction information of text images; finally, this paper designs a multi-scale residual attention module by skillfully fusing multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the scene text recognizer ASTER by 1.2% compared to the text super-resolution network.
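The multi-scale learning idea above can be illustrated in one dimension: pool a signal at several window sizes, upsample each pooled version back, and fuse by averaging. This is a stand-in for the paper's multi-scale mechanism only; the attention weighting is omitted and all names and window sizes are illustrative.

```python
# Hedged 1-D sketch of multi-scale feature fusion (attention omitted).
def pool(signal, window):
    """Non-overlapping average pooling."""
    return [sum(signal[i:i + window]) / window
            for i in range(0, len(signal) - window + 1, window)]

def upsample(signal, factor):
    """Nearest-neighbour upsampling back to the original resolution."""
    return [v for v in signal for _ in range(factor)]

def multiscale_fuse(signal, windows=(1, 2, 4)):
    """Average the signal as seen at several scales."""
    scales = [upsample(pool(signal, w), w) for w in windows]
    n = min(len(s) for s in scales)
    return [sum(s[i] for s in scales) / len(scales) for i in range(n)]
```

Coarser scales smooth out noise while the finest scale preserves detail; a learned attention module would replace the uniform average with data-dependent weights.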

Adaptive Face Mask Detection System based on Scene Complexity Analysis

  • Kang, Jaeyong;Gwak, Jeonghwan
    • 한국컴퓨터정보학회논문지
    • /
    • 제26권5호
    • /
    • pp.1-8
    • /
    • 2021
  • The COVID-19 pandemic has produced vast numbers of confirmed cases worldwide and left the public anxious. Wearing a mask properly is essential to preventing the spread of the virus, yet some people do not wear one, or wear one incorrectly. In this paper, we propose an efficient mask detection system for video images. The proposed method first detects all face regions in the input image using YOLOv5 and classifies the scene into one of three complexity levels (Simple, Moderate, Complex) according to the number of detected faces. Then, depending on the scene complexity, a Faster R-CNN based on one of three ResNets (ResNet-18, 50, 101) detects the facial regions and identifies whether a mask is worn properly. Experiments on a public mask detection dataset confirm that the proposed scene-complexity-based adaptive model outperforms the other models.
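The adaptive step above reduces to a small dispatch: map the detected face count to a complexity class, and pick the Faster R-CNN backbone accordingly. The thresholds below are illustrative assumptions; the paper's actual cut-offs are not given here.

```python
# Hedged sketch of scene-complexity-based backbone selection.
# The face-count thresholds are illustrative, not the paper's values.
def scene_complexity(num_faces, simple_max=3, moderate_max=10):
    if num_faces <= simple_max:
        return "Simple"
    if num_faces <= moderate_max:
        return "Moderate"
    return "Complex"

BACKBONE = {"Simple": "ResNet-18", "Moderate": "ResNet-50", "Complex": "ResNet-101"}

def select_backbone(num_faces):
    """Deeper backbones are reserved for crowded, harder scenes."""
    return BACKBONE[scene_complexity(num_faces)]
```

The design trades accuracy for speed adaptively: simple scenes get the cheap ResNet-18 model, while crowded scenes justify the cost of ResNet-101.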

텐서보팅을 이용한 텍스트 배열정보의 획득과 이를 이용한 텍스트 검출 (Extraction of Text Alignment by Tensor Voting and its Application to Text Detection)

  • 이귀상;또안;박종현
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • 제36권11호
    • /
    • pp.912-919
    • /
    • 2009
  • This paper presents a new method for detecting text in natural images using 2-D tensor voting and an edge-based method. The characters of a text string are usually arranged along a smooth, continuous curve and lie close to one another, and these properties can be detected effectively by tensor voting. 2-D tensor voting computes the continuity of tokens as curve saliency, a property used in many kinds of image analysis. We first use edge detection to find candidate regions where text may be located, then verify the continuity of these candidates with tensor voting, removing noise regions and retaining only text regions. Experimental results confirm that the proposed method effectively detects text regions in complex natural images.
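A much-simplified stand-in for the curve saliency computation above: accumulate a 2x2 structure tensor from the directions toward a token's neighbours and take the eigenvalue gap. Full tensor voting uses oriented stick votes with a decay field; this sketch, with its names and radius, is only an assumption-laden illustration of why collinear tokens score high.

```python
import math

# Hedged sketch: "curve saliency" of a token as the eigenvalue gap of a
# structure tensor built from unit vectors toward nearby tokens.
def curve_saliency(points, idx, radius=3.0):
    px, py = points[idx]
    sxx = sxy = syy = 0.0
    for j, (qx, qy) in enumerate(points):
        if j == idx:
            continue
        dx, dy = qx - px, qy - py
        d = math.hypot(dx, dy)
        if 0 < d <= radius:  # only nearby tokens cast votes
            ux, uy = dx / d, dy / d
            sxx += ux * ux; sxy += ux * uy; syy += uy * uy
    # Eigenvalue gap lam1 - lam2 of [[sxx, sxy], [sxy, syy]]:
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    return math.sqrt(max(tr * tr - 4 * det, 0.0))
```

Tokens lying on a line produce an anisotropic tensor (large gap), while corner-like or isotropic neighbourhoods produce a small gap, which is the cue used to keep text candidates and discard noise.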

이동로봇주행을 위한 영상처리 기술

  • 허경식;김동수
    • 전자공학회지
    • /
    • 제23권12호
    • /
    • pp.115-125
    • /
    • 1996
  • This paper presents a new algorithm for the self-localization of a mobile robot using a one-dimensional perspective invariant (the cross ratio). Most conventional model-based self-localization methods suffer from complex data-structure building, map updating, and matching processes; use of the simple cross ratio can alleviate these problems. The algorithm is based on two basic assumptions: that the ground plane is flat and that two locally parallel side-lines are available. It is also assumed that an environmental map is available for matching between the scene and the model. To extract an accurate steering angle for the mobile robot, we take advantage of geometric features such as vanishing points. Feature points for the cross ratio are extracted robustly using a vanishing point and the intersection points between the two locally parallel side-lines and vertical lines. The local position estimation problem is also treated for the case when fewer than four feature points exist in the viewed scene. The robustness and feasibility of our algorithms have been demonstrated through real-world experiments in indoor environments using an indoor mobile robot, KASIRI-II (KAist Simple Roving Intelligence).

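The cross ratio that the paper matches between scene and model is the classical projective invariant of four collinear points. A minimal sketch, assuming the standard definition (function name and coordinate convention are illustrative):

```python
# The cross ratio of four collinear points, given by their 1-D coordinates.
# It is invariant under perspective projection, which is why it can be
# matched between the camera view and the environmental map.
def cross_ratio(a, b, c, d):
    return ((c - a) * (d - b)) / ((c - b) * (d - a))
```

Because any perspective image of the same four points yields the same value, a single cross ratio suffices to match observed feature points against the model without rebuilding complex data structures.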