• Title/Summary/Keyword: matching algorithm

Search Results: 2,267

Development of an Image Processing Algorithm for Paprika Recognition and Coordinate Information Acquisition using Stereo Vision (스테레오 영상을 이용한 파프리카 인식 및 좌표 정보 획득 영상처리 알고리즘 개발)

  • Hwa, Ji-Ho;Song, Eui-Han;Lee, Min-Young;Lee, Bong-Ki;Lee, Dae-Weon
    • Journal of Bio-Environment Control / v.24 no.3 / pp.210-216 / 2015
  • The purpose of this study was to develop an image processing algorithm that recognizes paprika and acquires its 3D coordinates from stereo images, so that the end-effector of a paprika auto-harvester can be controlled precisely. First, H and S thresholds were set using HSI histogram analysis to extract the ROI (region of interest) from raw paprika cultivation images. Next, the fundamental matrix of the stereo camera system was calculated to match the extracted ROIs between corresponding images. Epipolar lines were acquired using the F matrix, and an $11{\times}11$ mask was used to compare pixels along each line. The distance between extracted corresponding points was calibrated using the 3D coordinates of a calibration board. Nonlinear regression analysis was used to model the relation between the pixel disparity of corresponding points and depth (Z). Finally, the program calculated the horizontal (X) and vertical (Y) coordinates using the stereo camera's geometry. The average error was 5.3 mm in the horizontal coordinate, 18.8 mm in the vertical coordinate, and 5.4 mm in depth. Most of the error occurred at depths of 400~450 mm and in distorted regions of the image.
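
As a rough illustration of the pipeline this abstract describes, the sketch below shows color thresholding for the ROI and the idealized pinhole relation between disparity and depth. All threshold values are hypothetical, OpenCV's HSV stands in for the paper's HSI analysis, and the paper replaces the closed-form depth relation with a nonlinear regression fit to a calibration board:

```python
import cv2
import numpy as np

# Hypothetical H/S bands for red paprika; the paper derives its own
# values from histogram analysis of cultivation images.
H_LO, H_HI = 0, 12        # hue band (OpenCV hue range is 0-179)
S_LO, S_HI = 120, 255     # saturation band

def extract_roi(bgr):
    """Threshold the H and S channels to isolate paprika-colored regions."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (H_LO, S_LO, 0), (H_HI, S_HI, 255))

def depth_from_disparity(d_px, focal_px, baseline_mm):
    """Idealized pinhole stereo geometry: Z = f * B / d. The paper fits a
    nonlinear regression between disparity and calibrated depth instead."""
    return focal_px * baseline_mm / d_px

# X and Y then follow from similar triangles once Z is known:
#   X = (u - cx) * Z / f,  Y = (v - cy) * Z / f
```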

Motion Linearity-based Frame Rate Up Conversion Method (선형 움직임 기반 프레임률 향상 기법)

  • Kim, Donghyung
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.7 / pp.734-740 / 2017
  • A frame rate up-conversion scheme is needed when moving pictures with a low frame rate are played on appliances with a high frame rate. Frame rate up-conversion methods interpolate a new frame from two consecutive frames of the original source. They can be divided into frame repetition and motion estimation-based frame interpolation. Frame repetition has very low complexity, but it can yield jerky artifacts. Interpolation based on motion estimation and compensation can in turn be divided into pixel-based and block-based methods. In pixel-based interpolation, the interpolated frame is classified into four areas, each interpolated with a different method. Block-based interpolation has relatively low complexity, but it can yield blocking artifacts. The proposed method is a frame rate up-conversion method based on block motion estimation and compensation that exploits the linearity of motion; it uses two previous frames and one next frame for motion estimation and compensation. The simulation results show that the proposed algorithm effectively enhances objective quality, particularly for high-resolution images. In addition, the proposed method has similar or higher subjective quality than other conventional approaches.
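
A minimal sketch of the block motion estimation such up-conversion methods build on, assuming a full search over a small window; the paper's actual search strategy and its use of two previous frames plus one next frame are not spelled out in the abstract:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def block_mv(prev, curr, y, x, bs=16, rng=8):
    """Full-search motion estimation for the block of `curr` whose
    top-left corner is (y, x); illustrative parameter values."""
    ref = curr[y:y + bs, x:x + bs]
    H, W = prev.shape
    best_cost, best_mv = None, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + bs <= H and 0 <= xx and xx + bs <= W:
                cost = sad(prev[yy:yy + bs, xx:xx + bs], ref)
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv

# Motion linearity assumption: a block moving by mv between consecutive
# frames sits halfway along mv in the frame interpolated between them,
# so the new pixel is averaged from both directions:
#   F_interp(p) = 0.5 * (F_t(p - mv/2) + F_{t+1}(p + mv/2))
```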

Performance Evaluation of Scaling based Dynamic Time Warping Algorithms for the Detection of Low-rate TCP Attacks (Low-rate TCP 공격 탐지를 위한 스케일링 기반 DTW 알고리즘의 성능 분석)

  • So, Won-Ho;Shim, Sang-Heon;Yoo, Kyoung-Min;Kim, Young-Chon
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.3 s.357 / pp.33-40 / 2007
  • In this paper, the low-rate TCP attack, a kind of shrew attack, is considered and a scaling-based dynamic time warping (S-DTW) algorithm is introduced. The low-rate TCP attack cannot be detected by detection methods for conventional flooding DoS/DDoS (Denial of Service/Distributed Denial of Service) attacks because of its low average traffic rate. It is, however, a periodic short burst that exploits the homogeneity of the minimum retransmission timeout (RTO) of TCP flows, and pattern matching mechanisms have therefore been proposed to detect it among legitimate input flows. A DTW mechanism has been proposed as one detection approach for attack input streams consisting of many legitimate or attack flows. This approach, however, has a problem in that a legitimate input stream may be flagged as an attack. In addition, it is difficult to decide a threshold that separates the legitimate from the malicious. Thus, the causes of this problem are analyzed through simulation, and scaling by the maximum autocorrelation value is executed before computing the DTW. We also discuss the results of applying various scaling approaches and of using the standard deviation of the monitored input streams.
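
A minimal sketch of the two ingredients named here, classic DTW and a scaling step; the scaling shown divides a zero-mean stream by its maximum autocorrelation value, and the paper's exact normalization may differ:

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def scale_by_max_autocorr(x):
    """Normalize a monitored traffic stream by its maximum
    autocorrelation value before computing DTW."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    peak = np.correlate(x, x, mode="full").max()
    return x / peak if peak > 0 else x
```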

Development of Homogeneous Road Section Determination and Outlier Filter Algorithm (국도의 동질구간 선정과 이상치 제거 방법에 관한 연구)

  • Do, Myung-Sik;Kim, Sung-Hyun;Bae, Hyun-Sook;Kim, Jong-Sik
    • Journal of Korean Society of Transportation / v.22 no.7 s.78 / pp.7-16 / 2004
  • A homogeneous road section is defined as one with similar traffic characteristics in terms of demand and supply. The criteria on the demand side are the diverging rate, the ratio of green time to cycle time at signalized intersections, and the distance between signalized intersections; the criteria on the supply side are traffic patterns such as traffic volume and speed. In this study, pointing out the problems of simply removing obscure data, an effective method to generate valuable data is proposed using data collected from Gonjiam IC to Jangji IC on national highway No. 3. Travel times were collected with a license-plate matching method, and traffic volume and speed were collected from detectors. Furthermore, a method of selecting homogeneous road sections is proposed that considers the demand and supply aspects simultaneously. This method, using an outlier filtering algorithm, can be applied to building travel time forecasting models and to revising obscure or missing data transmitted from detectors. The point and link data collected at the same time on the national highway can serve as a basis for predicting travel times and revising obscure data in the future.
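
The abstract does not give the filtering rule, so the sketch below uses a generic median-absolute-deviation filter as a stand-in to show how travel-time outliers might be flagged before model building:

```python
import numpy as np

def mad_filter(times, k=3.0):
    """Drop travel-time observations far from the section median, using
    the median absolute deviation (MAD) as a robust spread estimate.
    The cutoff k and the rule itself are illustrative, not the paper's."""
    t = np.asarray(times, dtype=float)
    med = np.median(t)
    mad = np.median(np.abs(t - med))
    if mad == 0:
        return t
    # 1.4826 * MAD approximates one standard deviation for normal data.
    keep = np.abs(t - med) <= k * 1.4826 * mad
    return t[keep]
```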

High Performance Object Recognition with Application of the Size and Rotational Invariant Feature of the Fourier Descriptor to the 3D Information of Edges (푸리에 표현자의 크기와 회전 불변 특징을 에지에 대한 3차원 정보에 응용한 고효율의 물체 인식)

  • Wang, Shi;Chen, Hongxin;I, Jun-Ho;Lin, Haiping;Kim, Hyong-Suk;Kim, Jong-Man
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.6 / pp.170-178 / 2008
  • A high-performance object recognition algorithm using the Fourier description of the 3D information of objects is proposed. Object boundaries contain sufficient information for recognition of most objects. However, they are not well utilized as the key to object recognition, since obtaining accurate boundary information is not easy; moreover, object boundaries vary strongly with the size and orientation of the object. The proposed algorithm is based on 1) accurate object boundaries extracted from the 3D shape obtained by a laser scan device, and 2) reduction of the required database using the size- and rotation-invariant features of the Fourier descriptor. The Fourier information is compared with the database, and recognition is done by selecting the best matching object. The experiments were done on the rich database of MPEG-7 Part B.
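
A minimal sketch of the size- and rotation-invariant Fourier descriptor idea, assuming the boundary is given as complex points x + j*y: dropping the DC term, keeping only coefficient magnitudes, and normalizing by the first harmonic yield translation, rotation/start-point, and scale invariance respectively:

```python
import numpy as np

def fourier_descriptor(boundary, n_coeffs=32):
    """Invariant Fourier descriptor of a closed boundary given as a
    sequence of complex points x + 1j*y."""
    z = np.asarray(boundary, dtype=complex)
    F = np.fft.fft(z)
    F[0] = 0               # drop DC term    -> translation invariance
    mag = np.abs(F)        # drop phase      -> rotation/start-point invariance
    mag = mag / mag[1]     # first-harmonic normalization -> scale invariance
    return mag[1:n_coeffs + 1]

def best_match(desc, database):
    """Nearest-neighbor matching against stored descriptors."""
    dists = [np.linalg.norm(desc - d) for d in database]
    return int(np.argmin(dists))
```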

Efficient Methods for Detecting Frame Characteristics and Objects in Video Sequences (내용기반 비디오 검색을 위한 움직임 벡터 특징 추출 알고리즘)

  • Lee, Hyun-Chang;Lee, Jae-Hyun;Jang, Ok-Bae
    • Journal of KIISE:Software and Applications / v.35 no.1 / pp.1-11 / 2008
  • This paper extracts motion vector characteristics to support efficient content-based video search. Traditionally, the present frame of a video is divided into blocks of equal size and a BMA (block matching algorithm) is used, which predicts the motion of each block from the reference frame on the time axis. However, BMA has several restrictions, and the vectors it obtains sometimes differ from the actual motion. To solve this problem, the full search method has been applied, but it requires a large volume of calculation. Thus, as an alternative, the present study extracts spatio-temporal characteristics of motion vectors, called Motion Vector Spatio-Temporal Correlations (MVSTC). As a result, motion vectors can be predicted more accurately using the motion vectors of neighboring blocks. However, because there are multiple reference block vectors, this additional information must be sent to the receiving end; we therefore need to consider how to predict the motion characteristics of each block and how to define an appropriate search range. Based on the proposed algorithm, we examine motion prediction techniques for motion compensation and present the results of applying them.
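
A minimal sketch of spatio-temporal motion-vector prediction of the kind MVSTC exploits, assuming a component-wise median over neighboring and co-located vectors; the paper's actual weighting of these correlations is not given in the abstract:

```python
import numpy as np

def predict_mv(mv_left, mv_top, mv_topright, mv_prev_colocated):
    """Predict a block's motion vector as the component-wise median of
    its spatial neighbors (left, top, top-right) and the co-located
    vector from the previous frame."""
    candidates = np.array([mv_left, mv_top, mv_topright, mv_prev_colocated])
    return np.median(candidates, axis=0)

# A good prediction narrows the search: instead of a full search over a
# +/-16-pixel window, only a small refinement (e.g. +/-2 pixels) around
# the predicted vector needs to be examined.
```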

Aberration Retrieval Algorithm of Optical Pickups Using the Extended Nijboer-Zernike Approach (확장된 네이보어-제르니케 방법에 의한 광픽업의 파면수차 복원 알고리즘)

  • Jun, Jae-Chul;Chung, Ki-Soo;Lee, Gun-Kee
    • Journal of the Institute of Convergence Signal Processing / v.11 no.1 / pp.32-40 / 2010
  • In this work, a method of acquiring the pupil function of an optical system is proposed. With the pupil function, the wavefront aberration and the intensity distribution of the pupil can be analyzed. The system can be applied directly to the manufacturing line of optical pickups and also performs well in analyzing various properties of optical instruments. Obtaining pupil functions from 3D beam data is a kind of inverse problem. The extended Nijboer-Zernike (ENZ) approach, recently proposed by a Netherlands research group, is adopted to solve this inverse problem. The ENZ approach is one of several available aberration retrieval methods, but it is new in the sense that it uses a highly efficient representation of pupil functions by means of their Zernike coefficients. These coefficients are estimated by a matching procedure in the focal region between the theoretical 3D intensity distribution and the measured 3D intensity distribution. By modifying the original ENZ approach, an algorithm applicable to more general circumstances such as high-numerical-aperture instruments is developed. With this scheme, an MS Windows-based GUI program was developed, and its good performance was verified with generated 3D beam data.
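
Once the ENZ basis functions are precomputed and the intensity model is linearized for small aberrations, estimating the Zernike coefficients reduces to a linear least-squares fit between theoretical and measured intensity samples. The sketch below shows only that final fitting step; building the ENZ basis itself requires the series expansion and is omitted:

```python
import numpy as np

def estimate_coeffs(basis_stack, measured):
    """Estimate complex Zernike coefficients by linear least squares.

    basis_stack : (n_coeffs, n_samples) complex array of precomputed
                  ENZ basis responses sampled over the focal region
                  (assumed available; not constructed here).
    measured    : (n_samples,) array of measured 3D intensity samples
                  flattened over the same focal-region grid.
    """
    A = basis_stack.T                    # design matrix, one column per coefficient
    beta, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return beta                          # estimated Zernike coefficients
```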

The Development of Image Processing System Using Area Camera for Feeding Lumber (영역카메라를 이용한 이송중인 제재목의 화상처리시스템 개발)

  • Kim, Byung Nam;Lee, Hyoung Woo;Kim, Kwang Mo
    • Journal of the Korean Wood Science and Technology / v.37 no.1 / pp.37-47 / 2009
  • For the inspection of wood, machine vision is currently the most common automated method. It is required to sort wood products by grade and to locate surface defects prior to cut-up. Many different sensing methods have been applied to wood inspection, including optical, ultrasonic, and X-ray sensing. Nowadays scanning systems mainly employ CCD line-scan cameras to meet the need for accurate detection of lumber defects and real-time image processing, but such systems require an exact feeding system and low deviation of lumber thickness. In this study a low-cost CCD area sensor was used to develop an image processing system for lumber being fed. When domestic red pine was fed on the conveyor belt, images covering irregular intervals of the capture area were acquired because the belt slipped on the rollers. To overcome incorrect image merging caused by the unstable feeding speed, a template matching algorithm was applied that measures the similarity between the pattern of the current image and that of the next one. When lumber was fed at over 13.8 m/min, the area sensor produced unreadable image patterns due to motion blur. The red channel of the RGB filter performed well in removing the background of the green conveyor belt from the merged image. A threshold value reduction method, an image-based thresholding algorithm, performed well for knot detection.
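
A minimal sketch of merging frames from an unevenly fed belt with template matching, assuming grayscale frames and an illustrative strip height; cv2.TM_CCOEFF_NORMED stands in for whatever similarity measure the paper uses:

```python
import cv2
import numpy as np

def find_overlap(curr, nxt, strip_h=60):
    """Locate the trailing strip of the current frame inside the next
    frame by normalized cross-correlation, so consecutive frames can be
    merged without duplicated content despite belt slip."""
    template = curr[-strip_h:, :]            # bottom strip of current frame
    res = cv2.matchTemplate(nxt, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(res)  # best match and its position
    # Rows y .. y+strip_h-1 of `nxt` repeat the strip; fresh content
    # starts just below it.
    return y + strip_h, score

def merge(curr, nxt, strip_h=60):
    """Append only the non-overlapping part of the next frame."""
    start, _ = find_overlap(curr, nxt, strip_h)
    return np.vstack([curr, nxt[start:, :]])
```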

Lane Detection in Complex Environment Using Grid-Based Morphology and Directional Edge-link Pairs (복잡한 환경에서 Grid기반 모폴리지와 방향성 에지 연결을 이용한 차선 검출 기법)

  • Lin, Qing;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.6 / pp.786-792 / 2010
  • This paper presents a real-time lane detection method that can accurately find lane-mark boundaries in complex road environments. Unlike many existing methods that pay much attention to the post-processing stage, fitting the lane-mark position among a great number of outliers, the proposed method aims to remove those outliers as much as possible at the feature extraction stage, so that the search space at the post-processing stage can be greatly reduced. To achieve this goal, a grid-based morphology operation is first used to generate the regions of interest (ROI) dynamically. Within the ROI, a directional edge-linking algorithm with directional edge-gap closing links edge pixels into edge-links that lie in valid directions, and these directional edge-links are then grouped into pairs by checking for a valid lane-mark width at a given height of the image. Finally, lane-mark colors are checked inside the edge-link pairs in the YUV color space, and lane-mark types are estimated employing a Bayesian probability model. Experimental results show that the proposed method is effective in identifying lane-mark edges among heavy clutter edges in complex road environments, and the whole algorithm achieves an accuracy rate of around 92% at an average speed of 10 ms/frame for images of size $320{\times}240$.
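
The abstract does not detail the grid-based morphology operation, so the sketch below shows one plausible reading: a per-cell morphological top-hat that makes bright lane marks stand out against locally varying road brightness (grid and kernel sizes are illustrative):

```python
import cv2
import numpy as np

def grid_tophat(gray, grid=(8, 8), ksize=9):
    """Apply a morphological top-hat cell by cell so that bright, narrow
    lane marks pop out even when road brightness varies across the image.
    Grid layout and structuring-element size are illustrative choices."""
    h, w = gray.shape
    gh, gw = h // grid[0], w // grid[1]
    out = np.zeros_like(gray)
    # Horizontal line element wider than a lane mark's cross-section.
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, 1))
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = gray[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            out[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw] = \
                cv2.morphologyEx(cell, cv2.MORPH_TOPHAT, se)
    return out
```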

Generation and Detection of Cranial Landmark

  • Heo, Suwoong;Kang, Jiwoo;Kim, Yong Oock;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.26-32 / 2015
  • Purpose: When a surgeon examines the morphology of a patient's skull, the locations of craniometric landmarks in a 3D computed tomography (CT) volume are among the most important pieces of information for surgical purposes. These locations can be found manually by the surgeon from the 3D rendered volume or from the 2D sagittal, axial, and coronal CT slices. Since there are many landmarks on the skull, finding them manually is time-consuming, exhausting, and occasionally inexact. These inefficiencies raise a demand for an automatic localization technique for craniometric landmark points, so in this paper we propose a novel method to find these surgically useful landmark points automatically. Materials and Methods: First, we align the experimental data (CT volumes) using the Frankfurt Horizontal Plane (FHP) and the Mid-Sagittal Plane (MSP), which are defined by 3 and 2 cranial landmark points, respectively. The target landmark of our experiment is the anterior nasal spine. Before constructing the statistical cubic model used to detect the landmark location in a given CT volume, reference points for the anterior nasal spine were chosen manually by a surgeon from several CT volume sets. The statistical cubic model is constructed by calculating weighted intensity means of these CT sets around the reference points. The landmark location in any given CT volume is then found by locating the position where the similarity function (a squared-difference function) with this model attains its minimum. Results: We used 5 CT volumes to construct the statistical cubic model, and 20 CT volumes, including those used to build the model, for testing. The subjects' ages range up to 2 years (24 months). The detected point in each dataset was close to the reference point chosen manually by the surgeon, and the similarity function always had its global minimum at the detected point. Conclusion: The experiments show that the proposed method performs well in locating the landmark point. This algorithm would help surgeons work efficiently with morphological information about the skull, and we expect it also has potential for locating anatomical landmarks beyond the cranial ones.
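
A minimal sketch of the model-matching step, assuming a brute-force scan of the squared-difference similarity function over the whole volume; in practice the search would likely be restricted to a region after the FHP/MSP alignment described above:

```python
import numpy as np

def detect_landmark(volume, model):
    """Slide the statistical cubic model over a CT volume and return the
    position minimizing the squared-difference similarity function.
    Brute force for clarity; real use would confine the search region."""
    dz, dy, dx = model.shape
    Z, Y, X = volume.shape
    best, best_pos = np.inf, None
    for z in range(Z - dz + 1):
        for y in range(Y - dy + 1):
            for x in range(X - dx + 1):
                patch = volume[z:z + dz, y:y + dy, x:x + dx]
                ssd = np.sum((patch.astype(float) - model) ** 2)
                if ssd < best:
                    best, best_pos = ssd, (z, y, x)
    return best_pos, best
```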