• Title/Summary/Keyword: 이동영상 (moving video)

Search Results: 3,197

Moving Object Tracking using Differential Image (차영상을 이용한 이동 객체 추적)

  • 오명관;한군희;최동진;전병민
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2004.05a
    • /
    • pp.396-400
    • /
    • 2004
  • In this study, we propose a tracking system for a single moving object. The system estimates motion using the differential image and then tracks the moving object by controlling the camera's Pan/Tilt device. The proposed system is divided into an image acquisition and preprocessing phase, a motion estimation phase, and an object tracking phase. Motion is estimated with the differential image method; the threshold for the binary differential image is decided adaptively, and the object area is grouped with a block-based recursive labeling algorithm. Experiments show that the motion of the moving object can be estimated and that the object is tracked correctly without being lost.

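The pipeline described in this abstract (frame differencing, adaptive thresholding of the binary difference image, grouping by labeling) can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: Otsu's method stands in for their adaptive threshold, OpenCV's connected-component labeling stands in for their block-based recursive labeling, and the `min_area` parameter is an assumption.

```python
import cv2
import numpy as np

def estimate_moving_object(prev_gray: np.ndarray, curr_gray: np.ndarray,
                           min_area: int = 50):
    """Return the centroid of the largest moving region, or None if nothing moves."""
    diff = cv2.absdiff(curr_gray, prev_gray)                     # differential image
    # Adaptive threshold on the difference image (Otsu used here as a stand-in).
    _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Group the object area (connected components used in place of the paper's
    # block-based recursive labeling).
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    candidates = [(stats[i, cv2.CC_STAT_AREA], i) for i in range(1, n)
                  if stats[i, cv2.CC_STAT_AREA] >= min_area]     # label 0 is background
    if not candidates:
        return None
    _, best = max(candidates)
    return tuple(centroids[best])                                # (x, y) in pixels
```

The offset of this centroid from the image centre would then drive the Pan/Tilt control described in the abstract.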

Real-time MPEG-4 Video Encoder for Live Video Service over CDMA network (CDMA 망에서의 실시간 동영상 서비스를 위한 MPEG-4 비디오 인코더)

  • Lee Yong-Hee;Song Joon-Ho;Kim In-Kwon;Shin Heon-Shik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.8B
    • /
    • pp.707-715
    • /
    • 2006
  • One of the most promising services on wireless networks is multimedia data service. With recently emerged wireless communication technologies, which were conventionally devoted to mobile phone services, pre-encoded content as well as live video data can be transmitted over the same network. As there is still room for improvement in the data transmission bandwidth of wireless networks, video data services are likely to be in greater demand. In this paper, a real-time MPEG-4 video encoder is described as a part of a whole system for live video services over wireless networks. As only minimal assumptions are made about the underlying networks, the presented system and service can easily be supported by different network systems.

A Study on Tracking a Moving Object using Photogrammetric Techniques - Focused on a Soccer Field Model - (사진측량기법을 이용한 이동객체 추적에 관한 연구 - 축구장 모형을 중심으로 -)

  • Bae Sang-Keun;Kim Byung-Guk;Jung Jae-Seung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.24 no.2
    • /
    • pp.217-226
    • /
    • 2006
  • Extracting and tracking objects are fundamental and important steps in digital image processing and computer vision, and many algorithms for them have been developed. In this research, a method is suggested for tracking a moving object using a pair of CCD cameras and calculating its coordinates. A 1/100 miniature of a soccer field was made to apply the developed algorithms. Candidates are first selected from the acquired images using the RGB values of the moving object (a soccer ball), and the object is then extracted from the candidates using its size (MBR size); its image coordinates are thereby obtained. The real-time position of the moving object is tracked within the boundary of its expected motion, which is determined by centering on the moving object. The 3D position of the moving object can be obtained by performing relative orientation, absolute orientation, and space intersection on the pair of CCD camera images.
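
As a rough illustration of the space-intersection step mentioned above, the sketch below triangulates a single matched image point from two calibrated cameras by linear (DLT) triangulation. The 3x4 projection matrices `P1` and `P2` are assumed to come from the relative and absolute orientation steps; this is an illustrative sketch, not the paper's code.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """Return the 3D point (X, Y, Z) seen at pixel x1 in camera 1 and x2 in camera 2."""
    u1, v1 = x1
    u2, v2 = x2
    # Each observation contributes two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # least-squares solution is the last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]                  # de-homogenise
```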

The Recognition of Crack Detection Using Difference Image Analysis Method based on Morphology (모폴로지 기반의 차영상 분석기법을 이용한 균열검출의 인식)

  • Byun Tae-bo;Kim Jang-hyung;Kim Hyung-soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.1
    • /
    • pp.197-205
    • /
    • 2006
  • This paper presents a moving object tracking method using a vision system. In order to track an object in real time, the image of the moving object has to be kept at the origin of the image coordinate axes. Accordingly, a fuzzy control system is investigated for tracking the moving object, which controls the camera module through a Pan/Tilt mechanism. So that this system can later be applied to a mobile robot, we design and implement an image processing board for the vision system. The fuzzy controller is also implemented on a StrongARM board. Experiments show that the proposed fuzzy controller is useful for a real-time moving object tracking system.
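
To give a concrete flavour of the fuzzy tracking idea, here is a minimal one-axis fuzzy controller that maps the object's horizontal offset from the image centre to a pan command (tilt would mirror it). The membership functions, rule outputs, and normalisation constant are illustrative assumptions, not the paper's rule base.

```python
import numpy as np

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with corners a, b, c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_pan(error_px: float, half_width: float = 160.0) -> float:
    """Return a pan speed in [-1, 1] from the pixel error (object x minus image centre x)."""
    e = float(np.clip(error_px / half_width, -1.0, 1.0))       # normalise the error
    # Memberships: Negative, Zero, Positive.
    mu = {"N": tri(e, -2.0, -1.0, 0.0),
          "Z": tri(e, -0.5, 0.0, 0.5),
          "P": tri(e, 0.0, 1.0, 2.0)}
    # Rules (singleton outputs): if N pan left, if Z hold, if P pan right.
    out = {"N": -1.0, "Z": 0.0, "P": 1.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values()) + 1e-9
    return num / den                                            # weighted-average defuzzification
```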

Moving Object Tracking in UAV Video using Motion Estimation (움직임 예측을 이용한 무인항공기 영상에서의 이동 객체 추적)

  • Oh, Hoon-Geol;Lee, Hyung-Jin;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.10 no.4
    • /
    • pp.400-405
    • /
    • 2006
  • In this paper, we propose a moving object tracking algorithm that uses motion estimation in UAV (Unmanned Aerial Vehicle) video. The proposed algorithm is based on generating an initial image from a detected reference image and tracking the moving object through the time-varying image sequence. With this procedure, the tracking process remains stable even when the UAV camera sways, because the position of the moving object is corrected, and the tracking time is relatively reduced. A block matching algorithm is used to determine the similarity between the reference image and the moving object. Experimental results show that the proposed algorithm performs better than the existing full search algorithm.

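The block matching step described above can be sketched as an exhaustive SAD (sum of absolute differences) search of a reference block over a small window in the next frame. The block and search-window sizes here are illustrative assumptions; the paper's actual search strategy may differ.

```python
import numpy as np

def block_match(ref_block: np.ndarray, frame: np.ndarray, top_left, search: int = 16):
    """Return the (row, col) in `frame` that minimises the SAD against `ref_block`."""
    bh, bw = ref_block.shape
    r0, c0 = top_left
    best, best_pos = np.inf, (r0, c0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            # Skip candidate positions that fall outside the frame.
            if r < 0 or c < 0 or r + bh > frame.shape[0] or c + bw > frame.shape[1]:
                continue
            sad = np.abs(frame[r:r + bh, c:c + bw].astype(np.int32)
                         - ref_block.astype(np.int32)).sum()
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos, best
```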

ALGORITHMS FOR MOVING OBJECT DETECTION: YSTAR-NEOPAT SURVEY PROGRAM (이동천체 후보 검출을 위한 알고리즘 개발: YSTAR-NEOPAT 탐사프로그램)

  • Bae, Young-Ho;Byun, Yong-Ik;Kang, Yong-Woo;Park, Sun-Youp;Oh, Se-Heon;Yu, Seoung-Yeol;Han, Won-Young;Yim, Hong-Suh;Moon, Hong-Kyu
    • Journal of Astronomy and Space Sciences
    • /
    • v.22 no.4
    • /
    • pp.393-408
    • /
    • 2005
  • We developed and compared two automatic algorithms for moving object detection in the YSTAR-NEOPAT sky survey program. One method, the starlist comparison method, identifies moving object candidates by comparing the photometry data tables from successive images. The other, the image subtraction method, identifies candidates by subtracting one image from another, which isolates sources moving against the background stars. The efficiency and accuracy of these algorithms were tested using actual survey data from the YSTAR-NEOPAT telescope system. For the detected candidates, we performed eyeball inspection of animated images to confirm the validity of the asteroid detections. The main conclusions are as follows. First, the optical distortion in the YSTAR-NEOPAT wide-field images can be properly corrected by comparison with the USNO-B1.0 catalog, and the astrometric accuracy can be kept at around 1.5 arcsec. Second, image subtraction provides more robust and accurate detection of moving objects. For two thresholds of 2.0 and $4.0\sigma$, the image subtraction method uncovered 34 and 12 candidates, most of which were confirmed to be real. The starlist comparison method detected many more candidates, 60 and 6 at each threshold level, but nearly half of them turned out to be false detections.
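
A minimal sketch of the image-subtraction idea follows: subtract two aligned exposures so that static stars cancel, then flag pixels whose residual exceeds an n-sigma threshold (the abstract's 2.0 and 4.0 sigma levels could be passed as `nsigma`). Registration and PSF matching, which a real survey pipeline needs, are omitted here; this is not the YSTAR-NEOPAT code.

```python
import numpy as np

def moving_source_mask(img1: np.ndarray, img2: np.ndarray, nsigma: float = 4.0) -> np.ndarray:
    """Boolean mask of pixels whose difference exceeds nsigma times the background scatter."""
    diff = img2.astype(np.float64) - img1.astype(np.float64)   # static stars largely cancel
    med = np.median(diff)
    sigma = 1.4826 * np.median(np.abs(diff - med))             # robust (MAD-based) scatter estimate
    return np.abs(diff - med) > nsigma * sigma
```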

A Study on the Availability of the On-Board Imager(OBI) and Cone-Beam CT(CBCT) in the Verification of Patient Set-up (온보드 영상장치(On-Board Imager) 및 콘빔CT(CBCT)를 이용한 환자 자세 검증의 유용성에 대한 연구)

  • Bak, Jino;Park, Sung-Ho;Park, Suk-Won
    • Radiation Oncology Journal
    • /
    • v.26 no.2
    • /
    • pp.118-125
    • /
    • 2008
  • Purpose: For on-line image guided radiation therapy (on-line IGRT), kV X-ray images and cone beam CT images were obtained by an on-board imager (OBI) and cone beam CT (CBCT), respectively. The images were then compared with simulated images to evaluate the patient's setup and correct for deviations. The setup deviations between the acquired images (kV or CBCT images) and the simulated images were computed with 2D/2D and 3D/3D match programs, respectively, and we investigated the correctness of the calculated deviations. Materials and Methods: After simulation and treatment planning for the RANDO phantom, the phantom was positioned on the treatment table. The phantom setup was performed with side-wall lasers, which standardized the treatment setup of the phantom with respect to the simulated images, after tolerance limits for the laser line thickness had been established. After a known translation or rotation was applied to the phantom, kV X-ray images and CBCT images were obtained, and 2D/2D and 3D/3D matches with the simulation CT images were carried out. Finally, the results were analyzed for the accuracy of the positional correction. Results: In the case of the 2D/2D match using kV X-ray and simulation images, setup correction within $0.06^{\circ}$ for rotation only, 1.8 mm for translation only, and 2.1 mm and $0.3^{\circ}$ for combined rotation and translation was possible. For the 3D/3D match using CBCT images, correction within $0.03^{\circ}$ for rotation only, 0.16 mm for translation only, and 1.5 mm for translation with $0.0^{\circ}$ for rotation was possible. Conclusion: The use of OBI or CBCT for on-line IGRT makes it possible to exactly reproduce the simulated setup of a patient in the treatment room. Fast detection and correction of a patient's positional error is possible in two dimensions via kV X-ray images from the OBI and, with higher accuracy, in three dimensions via CBCT. Consequently, on-line IGRT represents a promising and reliable treatment procedure.
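
For illustration only, the sketch below shows one simple way a 2D/2D match could report a setup deviation: a rigid (Kabsch) fit of matched landmark points between the simulation image and the kV image, yielding a rotation angle and a translation. The clinical matching software is intensity-based and more elaborate; the landmark-based approach here is an assumption made to keep the example short.

```python
import numpy as np

def rigid_2d_deviation(sim_pts: np.ndarray, kv_pts: np.ndarray):
    """sim_pts, kv_pts: (N, 2) matched points. Returns (rotation_deg, (tx, ty))."""
    sc, kc = sim_pts.mean(axis=0), kv_pts.mean(axis=0)
    H = (sim_pts - sc).T @ (kv_pts - kc)        # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    t = kc - R @ sc                             # translation that maps sim onto kV
    return angle, (float(t[0]), float(t[1]))
```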

Implementation of a Self Controlled Mobile Robot with Intelligence to Recognize Obstacles (장애물 인식 지능을 갖춘 자율 이동로봇의 구현)

  • 류한성;최중경
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.312-321
    • /
    • 2003
  • In this paper, we implement a robot that has the ability to recognize obstacles and move automatically to a destination. We present two results: a hardware implementation of an image processing board and a software implementation of a visual feedback algorithm for a self-controlled robot. In the first part, the mobile robot depends on commands from a control board that performs the image processing. We have long studied a self-controlled mobile robot system equipped with a CCD camera; the system consists of an image processing board implemented with DSPs, a stepping motor, and a CCD camera. We propose an algorithm in which commands are delivered for the robot to move along the planned path. The distance the robot is supposed to move is calculated on the basis of the absolute coordinate frame and the coordinates of the target spot, and the image signal acquired by the CCD camera mounted on the robot is captured at every sampling time so that the robot automatically avoids obstacles and finally reaches the destination. The image processing board consists of a DSP (TMS320VC33), ADV611, SAA7111, ADV7176A, CPLD (EPM7256ATC144), and SRAM memories. In the second part, the visual feedback control uses two vision algorithms: obstacle avoidance and path planning. The first algorithm works on cells, parts of the image divided by blob analysis. The input image is improved by preprocessing consisting of filtering, edge detection, NOR converting, and thresholding; the main image processing then includes labeling, segmentation, and pixel density calculation. In the second algorithm, after an image frame goes through preprocessing (edge detection, converting, thresholding), a histogram is measured vertically (along the y-axis direction). The binary histogram of the image then shows waveforms with only black and white variations. Here we use the fact that, since obstacles appear as sectional diagrams as if they were walls, there is no variation in the histogram. The intensities of the line histogram are measured vertically at intervals of 20 pixels, so we can find uniform and nonuniform regions of the waveforms and define a period of uniform waveforms as an obstacle region. The experiments show that the algorithm is very useful for the robot to move while avoiding obstacles.
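
The column-histogram obstacle test in the second algorithm can be sketched as follows: count white pixels in 20-pixel-wide column bands of the thresholded edge image and flag runs of bands with nearly constant counts as wall-like obstacle regions. The uniformity tolerance and the band grouping are illustrative assumptions, not the authors' exact rule.

```python
import numpy as np

def obstacle_bands(binary_edges: np.ndarray, band: int = 20, tol: float = 0.15):
    """binary_edges: 2D array of 0/1 edge pixels. Returns [(start_col, end_col), ...]."""
    h, w = binary_edges.shape
    # Vertical histogram: white-pixel count in each column band.
    counts = np.array([binary_edges[:, c:c + band].sum()
                       for c in range(0, w - band + 1, band)], dtype=float)
    mean = counts.mean() + 1e-9
    flat = np.abs(np.diff(counts)) / mean < tol          # little variation between neighbours
    regions, start = [], None
    for i, is_flat in enumerate(flat):
        if is_flat and start is None:
            start = i * band                             # uniform run begins
        elif not is_flat and start is not None:
            regions.append((start, (i + 1) * band))      # uniform run ends: obstacle region
            start = None
    if start is not None:
        regions.append((start, w))
    return regions
```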

A Dynamic Video Adaptation Scheme based on the Predictions of the Size and the Quality of Encoded Video Streams (동영상 크기 및 품질 예측에 기반한 동적 동영상 어댑테이션)

  • 김종항;남기용;이상민;낭종호
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.538-540
    • /
    • 2004
  • Dynamic video adaptation using a proxy [1] has the advantage that the video can be adapted dynamically to the characteristics of the mobile terminal and the current network conditions. However, previously proposed video adaptation methods require repeated decoding and encoding to measure quality, so producing a suitably adapted video takes a long time, which makes them hard to use in real situations where latency is the primary concern. In this paper, we propose a dynamic video adaptation method that produces the adapted video without repeated decoding and encoding. Focusing on the file size and quality of the encoded video, the characteristics of the video codec are analyzed and the results are stored as a table at the proxy. When a mobile terminal requests a video, the proxy refers to the analysis table for the corresponding codec, decodes and encodes the video at the highest possible quality, and transmits the adapted video.

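The table-driven adaptation described above might be sketched as below: the proxy keeps a pre-analysed table of codec profiles with their expected quality and simply picks the best profile that fits the terminal and the link, instead of re-encoding repeatedly to probe quality. The table entries, fields, and request parameters are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Profile:
    bitrate_kbps: int
    width: int
    height: int
    expected_quality: float     # e.g. a pre-measured PSNR for this codec setting

# Pre-analysed codec characteristics stored at the proxy (hypothetical numbers).
CODEC_TABLE: List[Profile] = [
    Profile(64, 176, 144, 29.5),
    Profile(128, 176, 144, 32.0),
    Profile(256, 320, 240, 33.8),
    Profile(384, 320, 240, 35.1),
]

def choose_profile(max_bitrate_kbps: int, max_width: int) -> Optional[Profile]:
    """Pick the highest-quality profile that fits the terminal's screen and the link's bandwidth."""
    feasible = [p for p in CODEC_TABLE
                if p.bitrate_kbps <= max_bitrate_kbps and p.width <= max_width]
    return max(feasible, key=lambda p: p.expected_quality) if feasible else None
```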

Development of a deep-learning based automatic tracking of moving vehicles and incident detection processes on tunnels (딥러닝 기반 터널 내 이동체 자동 추적 및 유고상황 자동 감지 프로세스 개발)

  • Lee, Kyu Beom;Shin, Hyu Soung;Kim, Dong Gyu
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.20 no.6
    • /
    • pp.1161-1175
    • /
    • 2018
  • An unexpected event in a road tunnel can easily be followed by a large secondary accident because of drivers' limited sight. Therefore, a series of automated incident detection systems have been put into operation, which, however, show very low detection rates owing to the very low image quality of CCTVs in tunnels. To overcome that limit, a deep learning based tunnel incident detection system was developed, which had already shown high detection rates by November 2017. However, since the object detection process deals only with still images, the moving direction and speed of vehicles could not be identified, and it was hard to detect stopping and reversing vehicles. Therefore, apart from object detection, an object tracking method was introduced and combined with the detection algorithm to track moving vehicles, and a stopping/reversing discrimination algorithm was proposed and implemented in the combined incident detection process. The performance on detecting stopping, reverse driving, and fire incidents was evaluated, each showing a 100% detection rate, whereas detection of the 'person' object showed a relatively low success rate of 78.5%. Nevertheless, it is believed that a richer image big-data set could dramatically enhance the detection capacity of the automatic incident detection system.
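
A stopping/reversing check of the kind described here could, in its simplest form, look at the displacement of a tracked vehicle's centroid along the tunnel's traffic direction over a short window of frames, as in the sketch below. The thresholds and direction vector are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def classify_motion(centroids, traffic_dir=(1.0, 0.0), stop_px: float = 2.0) -> str:
    """centroids: sequence of (x, y) pixel positions of one tracked vehicle, oldest first."""
    pts = np.asarray(centroids, dtype=float)
    d = np.asarray(traffic_dir, dtype=float)
    d /= np.linalg.norm(d)                                   # unit vector along traffic flow
    displacement = float((pts[-1] - pts[0]) @ d)             # signed travel along the flow
    if abs(displacement) < stop_px:
        return "stopped"
    return "normal" if displacement > 0 else "reverse"
```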