• Title/Summary/Keyword: Video-based Tracking (영상 기반 추적)


Making 2.5D with Vanishing Point in Photoshop (Photoshop Vanishing Point를 이용한 2.5D 제작에 관한연구)

  • Yoon, Young-Doo;Choi, Eun-Young
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.12
    • /
    • pp.146-153
    • /
    • 2009
  • Thanks to developments in computer graphics technology, graphic design software is now easily accessible to any home computer user, since it is free from the complicated algorithms and expensive graphics tools that were required in the past. The term 2.5D is commonly used by computer graphic designers to refer to a form of pseudo-3D built from 2D elements. In this study, 2.5D, which was previously used to strengthen visual effects and engine efficiency, is combined with Adobe Photoshop and After Effects and incorporated into motion graphics. Today, motion graphics dominate the advertising and video markets, and since viewers have developed higher expectations, more dynamic 3D space graphics are preferred over work produced on an outdated 2D basis. In this study, I produce a 2.5D image generated through the Vanishing Point filter of Adobe Photoshop and After Effects, based on still-image information captured at an axonometric projection angle. I also compare the effectiveness of the production process and the camera-angle flexibility of the previous 3D process and the new 2.5D process.

The Analysis of 2001 Land Use Distribution of Daejeon Metropolitan City based on KOMPSAT-1 EOC Imagery (KOMPSAT-1 EOC 자료를 활용한 2001년도 대전시 토지이용 현황의 공간적 분포 분석)

  • Kim, Youn-Soo;Jeon, Gap-Ho;Lee, Kwang-Jae
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.7 no.3
    • /
    • pp.13-21
    • /
    • 2004
  • The dissemination of commercial satellite images with high spatial resolution comparable to aerial photos is an active trend in the remote sensing community, thanks to recent developments in satellite and sensor technology. Such high-resolution satellite images provide a unique tool for monitoring ongoing urban land use change. In particular, KOMPSAT-1, which was launched in December 1999 and has operated successfully up to now, repeatedly provides panchromatic images over the Korean peninsula with a spatial resolution of 6.6 m. Based on these KOMPSAT-1 EOC image data, we can attempt to analyze and assess temporal urban land use change, which previously could not be done because of the lack of such data. The aim of this paper is to analyze and assess the spatial land use characteristics of Daejeon Metropolitan City based on KOMPSAT-1 EOC data. The land use map of the year 2001 is generated by modifying the year 2000 land use map published by the National Geographic Information Institute, using visual interpretation of a KOMPSAT-1 EOC image acquired in 2001. This study can serve as the starting point of a time-series analysis for long-term land use change monitoring with KOMPSAT-1 EOC data.


Preliminary Study Related with Application of Transportation Survey and Analysis by Unmanned Aerial Vehicle(Drone) (드론기반 고속도로 교통조사분석 활용을 위한 기초연구)

  • Kim, Soo-Hee;Lee, Jae-Kwang;Han, Dong-Hee;Yoon, Jae-Yong;Jeong, So-Young
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.6
    • /
    • pp.182-194
    • /
    • 2017
  • Most drone (unmanned aerial vehicle) research related to traffic management involves detecting and tracking roads or vehicles. The purpose of analyzing video footage in the transportation sector is to overcome the limitations of existing traffic data collection systems (vehicle detectors, DSRC, etc.), and drones are a good alternative in this regard. However, because of their limited maximum flight time, they are better suited to complementing the existing collection systems than to replacing them, so further research is needed on how to use drones for transportation analysis. Traffic problems often arise from one particular section or point and then spread to the whole road network, and drones can be fully utilized to analyze these particular sections. Building on a review of traffic survey analysis, this study extracts traffic flow parameters from video images (covering 800-1000 m) of highway unit segments taken by drones, with the footage captured at high altitude using recently developed imaging technology.
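
As a rough illustration of the kind of traffic flow parameters mentioned in this abstract, the sketch below computes volume, density, and space-mean speed for one observed highway segment from hypothetical vehicle trajectories. The trajectory format, function name, and parameter names are assumptions for illustration, not the study's implementation.

```python
# Minimal sketch (not the paper's method): macroscopic flow parameters for one
# highway segment observed from drone footage. Assumed trajectory format:
# {vehicle_id: [(t_seconds, position_m), ...]} with positions along the segment.

def flow_parameters(trajectories, observation_s):
    """Estimate volume (veh/h), density (veh/km) and space-mean speed (km/h)."""
    n_vehicles = len(trajectories)
    # Volume: vehicles observed in the segment, scaled to an hourly rate.
    volume_veh_h = n_vehicles * 3600.0 / observation_s

    # Space-mean speed: harmonic mean of each vehicle's speed over the segment.
    speeds_m_s = []
    for points in trajectories.values():
        (t0, x0), (t1, x1) = points[0], points[-1]
        if t1 > t0:
            speeds_m_s.append((x1 - x0) / (t1 - t0))
    space_mean_speed_kmh = (
        len(speeds_m_s) / sum(1.0 / v for v in speeds_m_s) * 3.6 if speeds_m_s else 0.0
    )

    # Fundamental relation q = k * v  ->  k = q / v.
    density_veh_km = volume_veh_h / space_mean_speed_kmh if space_mean_speed_kmh else 0.0
    return volume_veh_h, density_veh_km, space_mean_speed_kmh
```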

Ensemble Deep Network for Dense Vehicle Detection in Large Image

  • Yu, Jae-Hyoung;Han, Youngjoon;Kim, JongKuk;Hahn, Hernsoo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.45-55
    • /
    • 2021
  • This paper proposes an algorithm for efficiently detecting densely packed small vehicles in large images. It consists of two ensemble deep-learning network stages organized in a coarse-to-fine manner and detects vehicles precisely within selected sub-images. In the coarse step, a voting space is built from the results of several deep-learning networks run individually, and these voting spaces are combined into a voting map from which sub-regions are selected. In the fine step, each sub-region selected in the coarse step is passed to a final deep-learning network. Sub-regions are defined using dynamic windows; in this paper, a pre-defined mapping table is used to define the dynamic windows for perspective road images. The identity of a vehicle moving across sub-regions is determined by the closest bottom-center point of the detected vehicle's bounding box, and the vehicle is tracked using this box information over consecutive images. The proposed algorithm is evaluated for detection performance and real-time cost using day and night images captured by CCTV cameras on the road.
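
The identity-association rule described in this abstract (match a detection to the track whose bounding-box bottom-center point is closest across consecutive frames) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the box format (x, y, w, h), the distance threshold, and the function names are assumptions.

```python
import math

# Illustrative sketch of nearest bottom-center association between an existing set
# of tracks and the detections of the current frame. Boxes are assumed (x, y, w, h).

def bottom_center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h)

def associate(tracks, detections, max_dist=50.0):
    """Assign each detection to the track with the closest bottom-center point."""
    assignments = {}
    for det_id, det_box in detections.items():
        dx, dy = bottom_center(det_box)
        best_track, best_dist = None, max_dist
        for track_id, track_box in tracks.items():
            tx, ty = bottom_center(track_box)
            dist = math.hypot(dx - tx, dy - ty)
            if dist < best_dist:
                best_track, best_dist = track_id, dist
        assignments[det_id] = best_track  # None means a new track should be opened
    return assignments
```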

Feature Point Filtering Method Based on CS-RANSAC for Efficient Planar Homography Estimating (효과적인 평면 호모그래피 추정을 위한 CS-RANSAC 기반의 특징점 필터링 방법)

  • Kim, Dae-Woo;Yoon, Ui-Nyoung;Jo, Geun-Sik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.6
    • /
    • pp.307-312
    • /
    • 2016
  • Markerless tracking for augmented reality based on homography can augment virtual objects correctly and naturally on a live view of the real-world environment by using the correct pose and direction of the camera. The RANSAC algorithm is widely used for estimating the homography, and CS-RANSAC is a recent variant that incorporates a constraint satisfaction problem (CSP) into RANSAC to increase accuracy and decrease processing time. However, the performance of CS-RANSAC can degrade when feature points that lead to low-accuracy homographies are selected in the sampling step. In this paper, we propose a feature point filtering method based on CS-RANSAC for efficient planar homography estimation. The proposed algorithm evaluates which feature points yield high-accuracy homographies, using the Symmetric Transfer Error, and removes unnecessary feature points from subsequent sampling steps to increase accuracy and decrease processing time. To evaluate the proposed method, we compared it with the basic CS-RANSAC algorithm and the basic RANSAC algorithm in terms of processing time, error rate (Symmetric Transfer Error), and inlier rate. The experiments show that, compared with basic CS-RANSAC, the proposed method reduces processing time by 5%, reduces the Symmetric Transfer Error by 14%, and produces more accurate homographies.
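
The Symmetric Transfer Error used for the filtering step measures a correspondence's reprojection error in both directions, under the homography H and its inverse. A minimal sketch under that standard definition (function names and the threshold-based filter are assumptions, not the paper's code):

```python
import numpy as np

# Sketch of Symmetric Transfer Error (STE) filtering, assuming H is a 3x3 homography
# mapping points from image 1 to image 2. Not the paper's implementation.

def project(H, pt):
    """Apply a 3x3 homography to a 2D point (x, y)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

def symmetric_transfer_error(H, p1, p2):
    """d(p2, H p1)^2 + d(p1, H^-1 p2)^2 for one correspondence."""
    H_inv = np.linalg.inv(H)
    fwd = np.sum((project(H, p1) - np.asarray(p2, float)) ** 2)
    bwd = np.sum((project(H_inv, p2) - np.asarray(p1, float)) ** 2)
    return fwd + bwd

def filter_correspondences(H, matches, threshold):
    """Keep only correspondences whose STE under H is below the threshold."""
    return [(p1, p2) for p1, p2 in matches
            if symmetric_transfer_error(H, p1, p2) < threshold]
```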

A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm (기계학습(machine learning) 기반 터널 영상유고 자동 감지 시스템 개발을 위한 사전검토 연구)

  • Shin, Hyu-Soung;Kim, Dong-Gyou;Yim, Min-Jin;Lee, Kyu-Beom;Oh, Young-Sup
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.1
    • /
    • pp.95-107
    • /
    • 2017
  • In this study, a preliminary investigation was undertaken toward the development of an automatic tunnel incident detection system based on a machine learning algorithm, intended to detect incidents occurring in a tunnel in real time and to identify the type of incident. Two road sites with operating CCTVs were selected, and part of the CCTV footage was processed to produce training data sets. The data sets consist of the position and time information of moving objects on the CCTV screen, extracted by detecting and tracking objects entering the screen with a conventional image processing technique available in this study, and each sample is matched with one of 6 event categories such as lane change, stopping, etc. The training data were learned by a resilient-propagation neural network with two hidden layers; 9 architectural models were set up for parametric studies, from which the 300 (first hidden layer) - 150 (second hidden layer) model was found to be optimal, giving the highest accuracy on both the training data and on test data not used for training. This study shows that highly variable and complex traffic and incident features can be identified well, without any hand-crafted feature rules, by using machine learning. In addition, the detection capability and accuracy of the machine-learning-based system will improve automatically as big data of tunnel CCTV images accumulates.
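
As an illustration only of the reported 300-150 hidden-layer layout classifying trajectory features into 6 event categories, a minimal sketch using scikit-learn is shown below. The paper trained a resilient-propagation network, whereas this sketch uses scikit-learn's default Adam optimizer; the feature layout and placeholder data are assumptions.

```python
# Sketch only: a classifier with the 300-150 hidden-layer layout reported in the
# abstract. The feature vectors and labels below are placeholders, not the paper's data.
import numpy as np
from sklearn.neural_network import MLPClassifier

# X: one row per tracked object, e.g. flattened (x, y, t) samples of its CCTV trajectory.
# y: one of 6 event categories (lane change, stopping, ...).
X = np.random.rand(200, 30)          # placeholder feature vectors
y = np.random.randint(0, 6, 200)     # placeholder event labels

model = MLPClassifier(hidden_layer_sizes=(300, 150), max_iter=500)
model.fit(X, y)
print(model.predict(X[:5]))          # predicted event categories for the first samples
```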

Infrared LED Pointer for Interactions in Collaborative Environments (협업 환경에서의 인터랙션을 위한 적외선 LED 포인터)

  • Jin, Yoon-Suk;Lee, Kyu-Hwa;Park, Jun
    • Journal of the HCI Society of Korea
    • /
    • v.2 no.1
    • /
    • pp.57-63
    • /
    • 2007
  • Our research was carried out in order to implement a new pointing device for human-computer interaction in a collaborative environment based on a tiled display system. We mainly focused on tracking the position of an infrared light source and applying our system to various areas. Beyond the simple functionality of mouse pointing and clicking, we developed a device that helps people communicate more naturally with the computer. The strong point of our system is that it can be deployed anywhere a camera can be installed. Because the system processes only infrared light, the computational overhead for LED recognition is very low. Furthermore, by analyzing the user's movement, various actions can be performed more conveniently. The system was tested for presentation and game control.
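
Because the camera in such a setup sees essentially only the infrared LED, locating the pointer reduces to finding the brightest blob in each frame. A minimal OpenCV sketch under that assumption (not the authors' implementation; the threshold value and function name are illustrative):

```python
import cv2

# Sketch: locate an infrared LED as the brightest blob in an IR-filtered grayscale frame.
# The threshold value is an assumption, not a value from the paper.

def find_pointer(frame_gray, threshold=200):
    """Return the (x, y) centroid of the bright blob, or None if nothing is lit."""
    _, mask = cv2.threshold(frame_gray, threshold, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(mask, binaryImage=True)
    if moments["m00"] == 0:
        return None
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])
```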


Pictorial Model of Upper Body based Pose Recognition and Particle Filter Tracking (그림모델과 파티클필터를 이용한 인간 정면 상반신 포즈 인식)

  • Oh, Chi-Min;Islam, Md. Zahidul;Kim, Min-Wook;Lee, Chil-Woo
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2009.02a
    • /
    • pp.186-192
    • /
    • 2009
  • In this paper, we present a recognition method for frontal upper-body poses of a human. In HCI (Human-Computer Interaction) and HRI (Human-Robot Interaction), when an interaction is established the human usually faces the robot or computer directly and uses hand gestures, so we focus on frontal upper-body poses. There are two main difficulties: first, a human pose consists of many parts and therefore has a high DOF (degree of freedom), which makes modeling the pose difficult; second, matching image features to the model information is difficult. We therefore use a pictorial model to represent the main poses, which span most of the space of frontal upper-body poses, and recognize them against a database of main poses. For the recognized main pose, the model parameters are used to initialize a particle filter, which predicts the posterior distribution of the pose parameters and refines the pose by updating the model parameters from the particle with the maximum likelihood. By recognizing the main poses and then tracking the specific pose in this way, we recognize frontal upper-body poses of a human.
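
The particle-filter stage described here follows the usual predict / weight / resample cycle, with the maximum-likelihood particle taken as the pose estimate. A generic sketch of one such step is shown below; the Gaussian motion noise and the likelihood function are placeholders, not the authors' observation model.

```python
import numpy as np

# Generic particle-filter step: diffuse pose parameters, weight particles by an image
# likelihood, resample, and report the most likely particle. Illustration only.

def particle_filter_step(particles, likelihood, motion_noise=0.05):
    """particles: (N, D) array of pose parameters; likelihood: function (D,) -> float."""
    n = len(particles)
    # Predict: perturb pose parameters with Gaussian noise.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    # Weight: evaluate the observation likelihood of every particle.
    weights = np.array([likelihood(p) for p in particles]) + 1e-12
    weights = weights / weights.sum()
    # Resample particles in proportion to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    resampled = particles[idx]
    # Pose estimate: the particle with the maximum likelihood.
    best = particles[np.argmax(weights)]
    return resampled, best
```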


A Robust Marker Detection Algorithm Using Hybrid Features in Augmented Reality (증강현실 환경에서 복합특징 기반의 강인한 마커 검출 알고리즘)

  • Park, Gyu-Ho;Lee, Heng-Suk;Han, Kyu-Phil
    • The KIPS Transactions:PartA
    • /
    • v.17A no.4
    • /
    • pp.189-196
    • /
    • 2010
  • This paper presents an improved marker detection algorithm using hybrid features such as corners, line segments, regions, and adaptive threshold values. In typical augmented reality environments, markers are often occluded and illumination is poor; the existing ARToolKit fails to recognize markers in these situations, especially under partial concealment of the marker by the user, large changes of illumination, and dim conditions. To solve these problems, an adaptive threshold technique is adopted to extract the marker region, and a corner extraction method based on line segments is presented to cope with marker occlusion. In addition, a compensation method that aligns the size and center of the extracted marker with the registered one is proposed to increase template matching efficiency, because the inner marker size of warped images is slightly distorted by corner movement and warping. Experimental results show that the proposed algorithm robustly detects markers under severe illumination change and occlusion and can distinguish similar markers, with the matching efficiency increased by almost 30%.
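
Of the hybrid features listed, the adaptive threshold step is the easiest to illustrate: instead of a single global threshold, each pixel is compared against a statistic of its local neighbourhood, which keeps the marker region intact under uneven lighting. A minimal OpenCV 4 sketch (block size and offset are illustrative values, not the paper's):

```python
import cv2

# Sketch: adaptive thresholding for marker-region extraction under uneven lighting,
# followed by contour extraction of candidate regions. Parameters are assumptions.

def extract_marker_region(gray_image):
    binary = cv2.adaptiveThreshold(
        gray_image, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,   # threshold against a local Gaussian-weighted mean
        cv2.THRESH_BINARY_INV,            # markers are dark on a light background
        blockSize=31, C=7)
    # Candidate marker regions are the external contours of the binary image.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return binary, contours
```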

Wearable User Interface based on EOG and Marker Recognition (EOG와 마커인식을 이용한 착용형 사용자 인터페이스)

  • Kang, Sun-Kyoung;Jung, Sung-Tae;Lee, Sang-Seol
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.6 s.44
    • /
    • pp.133-141
    • /
    • 2006
  • Recently, many wearable computers have been developed, but they still have many user interface problems from both the input and output perspectives. This paper presents a wearable user interface based on an EOG (electrooculogram) sensing circuit and marker recognition. In the proposed interface, the EOG sensor circuit, which tracks eye movement by sensing the potential difference across the eye, is used as a pointing device. Objects to be manipulated are represented by human-readable markers, and the marker recognition system detects and recognizes markers in the camera's input image. When a marker is recognized, the corresponding property window and method window are displayed on the head-mounted display, and the user manipulates the object by selecting a property or method item from the window. With the EOG sensor circuit and the marker recognition system, objects can be manipulated by eye movement alone in a wearable computing environment.
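
As a rough illustration of using EOG as a pointing device, horizontal and vertical potential differences can be scaled into cursor displacements after baseline removal. The sketch below is an assumption-level illustration: the class name, calibration gains, and linear mapping are not taken from the paper's circuit or software.

```python
# Illustration only: mapping EOG horizontal/vertical channel voltages to cursor motion.
# Gains and the resting baseline are assumed values, not the paper's calibration.

class EOGPointer:
    def __init__(self, gain_x=400.0, gain_y=300.0):
        self.gain_x = gain_x          # pixels per volt, horizontal channel
        self.gain_y = gain_y          # pixels per volt, vertical channel
        self.baseline = (0.0, 0.0)    # resting potentials, set during calibration

    def calibrate(self, resting_h, resting_v):
        self.baseline = (resting_h, resting_v)

    def to_cursor_delta(self, eog_h, eog_v):
        """Convert one EOG sample (volts) to a cursor displacement in pixels."""
        dh = eog_h - self.baseline[0]
        dv = eog_v - self.baseline[1]
        return (self.gain_x * dh, self.gain_y * dv)
```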
