• Title/Summary/Keyword: 움직임 검출 (motion detection)

Search results: 408

The Implementation of Motion Vector Detection Algorithm for the Optical-Sensor (광센서용 움직임 벡터 검출 알고리즘 구현)

  • Park, Nho-Kyung;Park, Sang-Bong;Park, Min-Hyeong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.5 / pp.251-257 / 2010
  • In this paper, we propose a modified motion vector detection algorithm for image pixels in an optical sensor. It is designed to reduce the amount of computation and to detect motion more accurately than the previous block matching algorithm. The proposed algorithm is implemented on a Cyclone FPGA and fabricated using SEC 0.35um CMOS 1-poly-4-metal technology. Tests with a Cartesian robot meet the desired performance.
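
A minimal sketch of the block matching idea the abstract compares against (not the paper's modified algorithm): for one block of the current frame, a full search over a small window in the previous frame finds the displacement that minimises the sum of absolute differences (SAD). Block size and search range are illustrative values.

```python
import numpy as np

def block_motion_vector(prev, curr, y, x, block=8, search=4):
    """Return (dy, dx) minimising SAD for the block at (y, x) of `curr`."""
    ref = curr[y:y + block, x:x + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```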

MPEG Video Segmentation using Hierarchical Frame Search (계층적 프레임 탐색을 이용한 MPEG 비디오 분할)

  • 김주민;최영우;정규식
    • Proceedings of the IEEK Conference / 2000.09a / pp.215-218 / 2000
  • Research on video segmentation, which is needed for efficient browsing of digital video data, is being actively conducted. In this study, video data are first segmented into shots, and each shot is further divided into sub-shots by analyzing camera operation and object motion. The DC images of I-frames are used to classify picture groups as Shot (scene change), Move (camera operation or object motion), or Static (little change in the image), and the P- and B-frames of each picture group are then examined to detect exact cut positions, dissolves, camera operation, and object motion. A hierarchical neural network and multiple features are used to improve the accuracy of picture-group classification. Exact cut positions are detected with a statistical method based on the macroblock types of P- and B-frames, while dissolves, camera operation, and object motion are detected with a neural network that uses the macroblock types and motion vectors of P- and B-frames. The study confirms that the hierarchical search reduces processing time, that picture groups can be subdivided with the hierarchical neural network and multiple features, that exact cuts can be detected with macroblock types and the statistical method, and that dissolves, camera operation, and object motion can be detected with the neural network.
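
An illustrative sketch of the coarse first stage only (not the paper's hierarchical neural-network classifier): label a picture group as "shot", "move", or "static" from the mean absolute difference of consecutive I-frame DC images. The thresholds t_cut and t_move are hypothetical tuning parameters.

```python
import numpy as np

def classify_picture_group(dc_prev, dc_curr, t_cut=40.0, t_move=8.0):
    """dc_prev, dc_curr: DC images (2-D arrays) of consecutive I-frames."""
    diff = np.abs(dc_curr.astype(np.float32) - dc_prev.astype(np.float32)).mean()
    if diff > t_cut:
        return "shot"    # likely a cut; refine position with P/B macroblock statistics
    if diff > t_move:
        return "move"    # camera or object motion; inspect motion vectors
    return "static"
```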

Region-Based Moving Object Segmentation for Video Monitoring System (비디오 감시시스템을 위한 영역 기반의 움직이는 물체 분할)

  • 이경미;김종배;이창우;김항준
    • Journal of the Institute of Electronics Engineers of Korea CI / v.40 no.1 / pp.30-38 / 2003
  • This paper presents an efficient region-based motion segmentation method for segmenting moving objects in a traffic scene, with a focus on a Video Monitoring System (VMS). The presented method consists of two phases: motion detection and motion segmentation. Using an adaptive thresholding technique, the differences between two consecutive frames are analyzed to detect the movements of objects in the scene. To segment the detected regions into meaningful objects that have similar intensity and motion information, the regions are initially segmented using a k-means clustering algorithm, and then neighboring regions with similar motion information are merged. Since the segmentation phase deals not with the whole image but only with the detected regions, the computational cost is reduced dramatically. Experimental results demonstrate robustness to occlusions among multiple moving objects as well as to changes in environmental conditions.
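
A minimal sketch of the two-phase idea under simplifying assumptions: (1) detect moving pixels by differencing consecutive grey-level frames with an adaptive threshold derived from the difference statistics, and (2) split the detected pixels into regions with k-means on intensity and position. k=3 and the threshold rule are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_and_segment(prev, curr, k=3):
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    thresh = diff.mean() + 2.0 * diff.std()      # adaptive threshold from frame difference
    ys, xs = np.nonzero(diff > thresh)           # detected moving pixels
    if len(ys) < k:
        return None                              # not enough motion to segment
    feats = np.column_stack([curr[ys, xs], ys, xs]).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    return ys, xs, labels                        # per-pixel region labels
```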

Automatic Moving Object Segmentation using Robust Edge Linking for Content-based Coding (내용 기반 코딩을 위한 강력한 에지 연결에 의한 움직임 객체 자동 분할)

  • 김준기;이호석
    • Journal of KIISE: Computer Systems and Theory / v.31 no.5_6 / pp.305-320 / 2004
  • Moving object segmentation is a fundamental function for content-based applications. Moving object edges are produced by matching the detected moving edges with the current frame edges. However, object edges often become disconnected when the object and background colors happen to be similar or when the object's movement decreases. Edge disconnectedness is a serious problem because it degrades the object's visual quality so conspicuously that the result can become inadequate for content-based coding. We solve this problem with a robust and comprehensive edge linking algorithm, and we also develop an automatic moving object segmentation algorithm. These algorithms produce a completely linked moving object edge boundary and accurate moving object segmentation, process CIF video at 30 frames/s on a PC, and can be used for MPEG-4 content-based coding.
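
Not the authors' edge-linking algorithm, just a minimal sketch of one common way to reconnect broken edges: close small gaps in a binary moving-edge map with morphological closing (scipy.ndimage) and then fill the enclosed region to obtain an object mask. The gap size is an illustrative parameter.

```python
import numpy as np
from scipy import ndimage

def link_edges(edge_map, gap=3):
    """edge_map: 2-D boolean array of detected moving-object edges."""
    structure = np.ones((gap, gap), dtype=bool)
    closed = ndimage.binary_closing(edge_map, structure=structure)  # bridge small breaks
    filled = ndimage.binary_fill_holes(closed)                      # object mask from linked boundary
    return closed, filled
```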

Analysis of Human Activity Using Motion Vector (움직임 벡터를 이용한 사람 활동성 분석)

  • Kim, Sun-Woo;Choi, Yeon-Sung;Yang, Hae-Kwon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.10a / pp.157-160 / 2011
  • In this paper, we propose a method for recognizing and analyzing human activities using motion vectors in a real-time surveillance system. Image subtraction techniques are employed to detect blobs (humans) in the foreground, and the motion vector values obtained from EPZS (Enhanced Predictive Zonal Search) during MPEG-4 video encoding are used. Human activities are recognized and classified into meta-classes such as {Active, Inactive}, {Moving, Non-moving}, and {Walking, Running}, and each step is separated using step-by-step threshold values. Approximately 150 conditions were created for the simulation, and the method achieved a high success rate of about 86~98% in distinguishing each step in the simulation images.
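
A hypothetical sketch of the step-by-step thresholding idea: classify a blob's activity from the mean magnitude of its motion vectors. The thresholds are placeholders, not the values used in the paper.

```python
import numpy as np

def classify_activity(motion_vectors, t_active=0.5, t_moving=2.0, t_running=6.0):
    """motion_vectors: array of shape (N, 2) with (dx, dy) per macroblock of the blob."""
    speed = np.linalg.norm(motion_vectors, axis=1).mean() if len(motion_vectors) else 0.0
    if speed < t_active:
        return "Inactive"
    if speed < t_moving:
        return "Active / Non-moving"
    return "Walking" if speed < t_running else "Running"
```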

Motion Activity Detection using Wireless 3-Axis Accelerometer Sensor for Elder and Feeble Person (노약자 보호를 위한 무선 3축 가속도 센서를 이용한 움직임 검출 시스템)

  • Choi, Jeong-Yeon;Jung, Sung-Boo;Lee, Hyun-Kwan;Eom, Ki-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.565-568 / 2009
  • This paper proposes a system for monitoring the motion activity of elderly and feeble persons using motion activity data. The proposed system uses a wireless 3-axis acceleration sensor module produced by Freescale (the Wireless Sensing Triple Axis Reference Design board, ZSTAR). The sensed data are classified into three classes using an SVM classifier. The performance of the proposed system is evaluated by simulating several cases such as walking, fast walking, and falling; the classification results and the graphs of the sensed data show a success rate of 80%.
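
A minimal sketch, assuming windowed 3-axis accelerometer samples: extract a simple feature vector per window and train an SVM (scikit-learn) to separate classes such as walk, fast walk, and fall. The feature choice and labels are illustrative and may differ from the paper's.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(window):
    """window: array of shape (T, 3) with x, y, z acceleration samples."""
    mag = np.linalg.norm(window, axis=1)                      # per-sample acceleration magnitude
    return np.concatenate([window.mean(0), window.std(0), [mag.max(), mag.min()]])

def train_classifier(windows, labels):
    """windows: list of (T, 3) arrays; labels: e.g. ['walk', 'fast_walk', 'fall', ...]."""
    X = np.array([window_features(w) for w in windows])
    return SVC(kernel="rbf").fit(X, labels)
```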

Shot Motion Classification Using Partial Decoding of INTRA Picture in Compressed Video (압축비디오에서 인트라픽쳐 부분 복호화를 이용한 샷 움직임 분류)

  • Kim, Kang-Wook;Kwon, Seong-Geun
    • Journal of Korea Multimedia Society / v.14 no.7 / pp.858-865 / 2011
  • In order to allow the user to efficiently browse, select, and retrieve a desired video part without having to deal directly with gigabytes of compressed data, classification of shot motion characteristics has to be carried out in preparation for such user interaction. Organizing video information for a video database requires segmenting a video into its constituent shots and subsequently characterizing them in terms of content and camera movement within each shot. The conventional way to classify shot motion is to use motion vector elements; however, this approach is limited in estimating global camera motion because motion vectors represent only local movement. For shot classification based on motion information, we propose a new scheme that partially decodes INTRA pictures and compares the x and y displacement vector curves between the decoded I-frame and the next P-frame in the compressed video data.
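
An illustrative stand-in for the displacement-curve comparison (not the paper's exact procedure): estimate a global (x, y) displacement between a decoded I-frame and the following P-frame by matching their row and column intensity projections, then label the shot motion from the dominant displacement. Thresholds and labels are placeholders.

```python
import numpy as np

def global_shift(frame_a, frame_b, max_shift=16):
    def best_shift(p, q):
        shifts = range(-max_shift, max_shift + 1)
        errs = [np.abs(np.roll(p, s) - q).sum() for s in shifts]  # circular shift: rough sketch only
        return list(shifts)[int(np.argmin(errs))]
    dx = best_shift(frame_a.mean(axis=0), frame_b.mean(axis=0))   # column projection -> horizontal motion
    dy = best_shift(frame_a.mean(axis=1), frame_b.mean(axis=1))   # row projection -> vertical motion
    return dx, dy

def classify_shot(dx, dy, t=2):
    if abs(dx) <= t and abs(dy) <= t:
        return "static"
    return "pan" if abs(dx) >= abs(dy) else "tilt"
```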

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.9B no.5 / pp.563-570 / 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDPs (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAPs (Face Animation Parameters) to synchronize a generic face model with a real face.
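
A compact sketch of the factorization idea referenced above (Tomasi-Kanade style): stack the tracked 2D feature points into a measurement matrix, subtract the per-frame centroid, and use a rank-3 SVD to separate a motion matrix from a shape matrix. The paraperspective refinement and the mapping to MPEG-4 FAPs are omitted here.

```python
import numpy as np

def factorize(points):
    """points: array of shape (2*F, P); x rows then y rows for each of F frames, P tracked points."""
    W = points - points.mean(axis=1, keepdims=True)    # registered measurement matrix
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                      # motion/camera matrix (2F x 3)
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]            # shape matrix (3 x P)
    return M, S
```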

Video Event Detection according to Generating of Semantic Unit based on Moving Object (객체 움직임의 의미적 단위 생성을 통한 비디오 이벤트 검출)

  • Shin, Ju-Hyun;Baek, Sun-Kyoung;Kim, Pan-Koo
    • Journal of Korea Multimedia Society / v.11 no.2 / pp.143-152 / 2008
  • Many investigators are currently studying methodologies for expressing events for the semantic retrieval of video data. However, most approaches still rely on annotation-based retrieval, in which each item of data is annotated individually, or on content-based retrieval using low-level features. We therefore propose a method that creates motion units and extracts events from those units to enable more semantic retrieval than existing methods. First, motions are classified by event unit; second, semantic units are defined for the classified object motions. To apply these to event extraction, rules that can be matched against the low-level features are created, from which semantic events can be retrieved at the level of a video shot. To evaluate its applicability, we performed an experiment on extracting semantic events from video images and obtained a precision rate of approximately 80%.
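
A hypothetical sketch of the rule idea only: map low-level per-shot motion measurements to semantic motion units, then combine units into an event label. The unit names, event names, and thresholds are illustrative placeholders, not the paper's definitions.

```python
def motion_unit(speed, direction_change_deg):
    """Map low-level measurements of one object track segment to a semantic unit."""
    if speed < 0.2:
        return "stay"
    return "turn" if direction_change_deg > 45 else "move"

def event_from_units(units):
    """Combine the semantic units of a shot into a shot-level event label."""
    if units.count("move") >= 3 and "turn" in units:
        return "wandering"
    if all(u == "stay" for u in units):
        return "loitering"
    return "passing"
```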

Multiple face detection and tracking using active camera and skin color (액티브 카메라와 피부색상에 의한 다중 얼굴 검출 및 추적)

  • 김광희;이배호
    • Proceedings of the IEEK Conference / 2001.09a / pp.377-380 / 2001
  • This paper presents a tracking algorithm that detects and tracks the face positions of multiple people indoors using an active camera and that is robust to the effects of illumination and background. The algorithm consists of two stages, face region detection and tracking. Because fast and efficient face region detection leads to better tracking performance, skin color features, which have a distinctive color distribution, are used. A facial color model is built in the YCbCr color space from skin color pixels extracted from sample images, and a Gaussian function is used to determine the similarity between each pixel of the input image and the facial color model. The final face region is determined using the elliptical and anatomical characteristics of a face within the extracted region. Tracking combines the extracted face region with motion detection based on motion estimation using a temporal Gaussian filter. In addition, a prediction buffer reduces the search area, decreasing the amount of computation and increasing the processing speed, and a pan/tilt camera enables mutual feedback. The presented algorithm was simulated on a PC and produced good results.
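
A minimal sketch of the skin-color step described above: fit a Gaussian to the (Cb, Cr) values of sample skin pixels and score each pixel of a new frame by its similarity under that model. The RGB-to-YCbCr coefficients follow the common ITU-R BT.601 convention; the rest of the pipeline (elliptical/anatomical verification, tracking) is omitted.

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """rgb: array (..., 3) with 8-bit R, G, B; returns (..., 2) Cb, Cr (BT.601, full range)."""
    r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([cb, cr], axis=-1)

def fit_skin_model(skin_samples_rgb):
    """Fit a 2-D Gaussian to the Cb, Cr values of sampled skin pixels."""
    cbcr = rgb_to_cbcr(skin_samples_rgb).reshape(-1, 2)
    return cbcr.mean(axis=0), np.cov(cbcr, rowvar=False)

def skin_similarity(frame_rgb, mean, cov):
    """Per-pixel unnormalised Gaussian similarity to the skin color model."""
    d = rgb_to_cbcr(frame_rgb) - mean
    inv = np.linalg.inv(cov)
    m = np.einsum("...i,ij,...j->...", d, inv, d)   # squared Mahalanobis distance
    return np.exp(-0.5 * m)
```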
