• Title/Summary/Keyword: Object detecting


A Study on Abalone Young Shells Counting System using Machine Vision (머신비전을 이용한 전복 치패 계수에 관한 연구)

  • Park, Kyung-min; Ahn, Byeong-Won; Park, Young-San; Bae, Cherl-O
    • Journal of the Korean Society of Marine Environment & Safety, v.23 no.4, pp.415-420, 2017
  • In this paper, an algorithm for object counting on a conveyor system using machine vision is proposed. Object counting systems based on image processing have been applied in a variety of industries, for purposes such as measuring floating populations and traffic volume. The commonly used counting methods rely on template matching or machine learning for detection and tracking; however, processing time must be short to detect objects on a fast-moving conveyor belt. To meet this requirement, the proposed image processing algorithm is region-based. In this experiment, we counted young abalone shells that are similar in shape, size, and color on a conveyor system that moves in one direction. Information on objects in the region of interest of the first frame is used as a reference, and a continuously updated second frame is compared against it; an object is counted when the information in the two frames matches. The count was exact when the young shells were evenly spaced without overlap, and objects missed when shells moved without any gap between them were estimated from size information. The proposed algorithm can be applied to various object counting tasks on conveyor systems.
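
The abstract above describes a region-based counting scheme for a one-directional conveyor. Below is a minimal sketch of that idea using OpenCV background subtraction and connected components; the video file name, ROI, and size/threshold values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of region-based counting in a fixed ROI on a one-directional
# conveyor (illustrative only; file name, ROI, and thresholds are assumed).
import cv2
import numpy as np

cap = cv2.VideoCapture("conveyor.mp4")              # hypothetical input video
x, y, w, h = 0, 200, 640, 80                        # counting region of interest
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
kernel = np.ones((3, 3), np.uint8)
total, prev_blobs = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame[y:y + h, x:x + w])
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    blobs = sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 500)
    # Count only blobs that newly entered the region since the previous frame;
    # overlapping shells would need the size-based correction described above.
    total += max(0, blobs - prev_blobs)
    prev_blobs = blobs

print("counted objects:", total)
```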

Detecting and Extracting Changed Objects in Ground Information (지반정보 변화객체 탐지·추출 시스템 개발)

  • Kim, Kwangsoo; Kim, Bong Wan; Jang, In Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.39 no.6, pp.515-523, 2021
  • An integrated underground spatial map consists of underground facilities, underground structures, and ground information, and is updated periodically. In this paper, we design and implement a system that detects and extracts only the changed ground objects in order to speed up map updates. To find the changed objects, every object in the newly input map is compared with the corresponding object in the reference map of the integrated map. Since the overall process of comparing objects and generating results is divided by function, the implemented system consists of several modules: an object comparer, a changed object detector, a history data manager, a changed object extractor, a change type classifier, and a changed object saver. Two metrics, detection rate and extraction rate, are used to evaluate the performance of the system. When the system was applied to boreholes, ground wells, soil layers, and rock floors in Pyeongtaek, 100% of the inserted, deleted, and updated objects in each layer were detected. In addition, the system ensures that the reference map is up to date by downloading it whenever maps are compared. In the future, additional research using various data sets is needed to confirm the stability and effectiveness of the developed system before applying it in the field.
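
As a rough illustration of the insert/delete/update comparison the system performs between a reference map and a newly input map, the snippet below diffs two layers keyed by object ID; the borehole IDs and attributes are hypothetical, not taken from the paper.

```python
# Illustrative sketch (not the authors' system) of classifying changed objects
# between a reference map and a newly input map, keyed by object ID.
from typing import Dict, List, Tuple

def diff_objects(reference: Dict[str, dict],
                 new: Dict[str, dict]) -> Tuple[List[str], List[str], List[str]]:
    """Return IDs of inserted, deleted, and updated objects."""
    inserted = [oid for oid in new if oid not in reference]
    deleted = [oid for oid in reference if oid not in new]
    updated = [oid for oid in new
               if oid in reference and new[oid] != reference[oid]]
    return inserted, deleted, updated

# Hypothetical borehole layer keyed by ID with attribute dictionaries.
ref = {"BH-001": {"depth": 12.0}, "BH-002": {"depth": 8.5}}
cur = {"BH-001": {"depth": 12.0}, "BH-002": {"depth": 9.0}, "BH-003": {"depth": 5.2}}
print(diff_objects(ref, cur))   # (['BH-003'], [], ['BH-002'])
```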

Assessment of the Object Detection Ability of Interproximal Caries on Primary Teeth in Periapical Radiographs Using Deep Learning Algorithms (유치의 치근단 방사선 사진에서 딥 러닝 알고리즘을 이용한 모델의 인접면 우식증 객체 탐지 능력의 평가)

  • Hongju Jeon; Seonmi Kim; Namki Choi
    • Journal of the Korean Academy of Pediatric Dentistry, v.50 no.3, pp.263-276, 2023
  • The purpose of this study was to evaluate the performance of a You Only Look Once (YOLO)-based model for object detection of proximal caries in periapical radiographs of children. A total of 2,016 periapical radiographs of primary dentition were selected from the M6 database as training material, of which 1,143 were labeled as proximal caries by an experienced dentist using an annotation tool. After converting the annotations into a training dataset, YOLO, a single convolutional neural network (CNN) model, was trained on the dataset. Accuracy, recall, specificity, precision, negative predictive value (NPV), F1-score, the precision-recall curve, and the average precision (AP, the area under the precision-recall curve) were calculated to evaluate the object detection performance on the 187 test images. The results showed that the CNN-based object detection model performed well in detecting proximal caries, with a diagnostic accuracy of 0.95, a recall of 0.94, a specificity of 0.97, a precision of 0.82, an NPV of 0.96, and an F1-score of 0.81. The AP was 0.83. This model could be a valuable tool for dentists in detecting carious lesions in periapical radiographs.
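
For reference, the sketch below computes the evaluation metrics listed in the abstract from raw detection counts; the TP/FP/TN/FN values in the usage line are made up for illustration and are not the paper's data.

```python
# Hedged sketch: evaluation metrics from detection counts
# (the TP/FP/TN/FN values below are hypothetical, not the paper's results).
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # also called sensitivity
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "recall": recall,
        "specificity": tn / (tn + fp),
        "precision": precision,
        "npv": tn / (tn + fn),
        "f1": 2 * precision * recall / (precision + recall),
    }

print(detection_metrics(tp=94, fp=21, tn=160, fn=6))   # illustrative numbers only
```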

An Object Detection and Tracking System using Fuzzy C-means and CONDENSATION (Fuzzy C-means와 CONDENSATION을 이용한 객체 검출 및 추적 시스템)

  • Kim, Jong-Ho; Kim, Sang-Kyoon; Hang, Goo-Seun; Ahn, Sang-Ho; Kang, Byoung-Doo
    • Journal of Korea Society of Industrial Information Systems, v.16 no.4, pp.87-98, 2011
  • Detecting a moving object in video and tracking it are basic and necessary preprocessing steps in many video systems, such as object recognition, context awareness, and intelligent visual surveillance. In this paper, we propose a method that can detect a moving object quickly and accurately under conditions where the background and lighting change in real time. Furthermore, the system detects an object robustly even when the target object is partially occluded by other objects. For effective detection, an eigenspace and Fuzzy C-means (FCM) clustering are combined, and the CONDENSATION algorithm is used to track the detected object robustly. First, training data collected from background images are linearly transformed using Principal Component Analysis (PCA). Second, an eigen-background is constructed from the principal components that best discriminate between objects and the background. Next, an object is detected with FCM using the convolution of the eigenvectors from the previous step with the input image. Finally, the object is tracked by using the coordinates of the detected object as the input to the CONDENSATION algorithm. Images containing various moving objects at the same time were collected and used as training data so that the system can adapt to changes in lighting and background with a fixed camera. Test results show that the proposed method detects an object robustly under changes in lighting and background and under partial movement of the object.
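
A simplified stand-in for the eigen-background step is sketched below: PCA is fitted on background frames and foreground pixels are flagged by reconstruction error. The FCM clustering and CONDENSATION tracking stages are not reproduced, and the component count and threshold are assumptions.

```python
# Hedged sketch: an "eigen-background" from background frames via PCA, with
# foreground pixels flagged by reconstruction error (a simplified stand-in for
# the paper's PCA + FCM + CONDENSATION pipeline; parameters are assumed).
import numpy as np
from sklearn.decomposition import PCA

def fit_eigen_background(frames: np.ndarray, n_components: int = 10) -> PCA:
    """frames: (N, H*W) array of flattened grayscale background frames."""
    pca = PCA(n_components=n_components)
    pca.fit(frames)
    return pca

def foreground_mask(pca: PCA, frame: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Pixels poorly reconstructed by the eigen-background are foreground."""
    flat = frame.reshape(1, -1)
    recon = pca.inverse_transform(pca.transform(flat))
    return (np.abs(flat - recon) > threshold).reshape(frame.shape)
```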

A Method for Detecting Stopped Objects within an Intersection Using an Adaptive Background Image (적응적 배경영상을 이용한 교차로 내 정지 객체 검출 방법)

  • Kang, Sung-Jun; Sur, Am-Seog; Jeong, Sung-Hwan
    • Journal of the Korea Academia-Industrial cooperation Society, v.14 no.5, pp.2430-2436, 2013
  • This study proposes a method for detecting stopped objects, which pose a hazard within an intersection. An inverse perspective transform is applied to real-time images from CCTV cameras installed at the intersection so that object sizes remain consistent. A detection area is established in the transformed image, and an adaptive background image is generated using motion information of the objects. Candidate regions of stopped objects are then detected by differencing against the background image. To verify whether a candidate region truly contains a stopped object, a method using image gradient information and the Edge Histogram Descriptor (EHD) is proposed. To evaluate the performance of the proposed algorithm, experiments were carried out on images recorded during commuting hours and daytime through a DVR installed at the intersection. The experiments show that stopped vehicles within the detection region of the intersection are detected efficiently, and the processing speed of 13~18 frames per second, depending on the size of the detection region, is judged to be sufficient for real-time processing.
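
The core of the candidate-detection step, a running-average adaptive background with background differencing, can be sketched as follows; the adaptation rate, threshold, and input file are assumed values, and the gradient/EHD verification stage is omitted.

```python
# Hedged sketch: adaptive background via running average plus differencing to
# find candidate stopped regions (parameters and file name are illustrative).
import cv2
import numpy as np

alpha = 0.01                                        # background adaptation rate (assumed)
cap = cv2.VideoCapture("intersection.mp4")          # hypothetical CCTV recording
ok, frame = cap.read()
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    diff = cv2.absdiff(gray, background)
    candidates = (diff > 25).astype(np.uint8) * 255  # candidate stopped regions
    # 'candidates' would next be verified with gradient/EHD cues as described above.
    # Update the background slowly so briefly moving objects fade out while
    # long-stopped objects remain distinguishable from the older background.
    cv2.accumulateWeighted(gray, background, alpha)
```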

Object Feature Extraction and Matching for Effective Multiple Vehicles Tracking (효과적인 다중 차량 추적을 위한 객체 특징 추출 및 매칭)

  • Cho, Du-Hyung; Lee, Seok-Lyong
    • KIPS Transactions on Software and Data Engineering, v.2 no.11, pp.789-794, 2013
  • A vehicle tracking system makes it possible to derive vehicle movement paths for avoiding traffic congestion and to prevent traffic accidents in advance by recognizing traffic flow, monitoring vehicles, and detecting road accidents. To track vehicles effectively, the vehicles that appear in a sequence of video frames first need to be identified by extracting the features of each object in the frames; the same vehicle then needs to be recognized across consecutive frames by matching the objects' feature values. In this paper, we identify objects by binarizing the difference image between a target image and a reference image and applying a labeling technique. As feature values, we use the center coordinate of the minimum bounding rectangle (MBR) of each identified object and the averages of the 1D fast Fourier transform (FFT) coefficients along the horizontal and vertical directions of the MBR. A vehicle is tracked by regarding the pair of objects with the highest similarity between two consecutive frames as the same object. Experimental results show that the proposed method outperforms existing methods that use geometric features in tracking accuracy.
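
A compact reading of the feature extraction and matching described above is sketched below: the MBR center plus the averages of 1D FFT magnitudes along the horizontal and vertical profiles, matched by nearest feature distance. The function names and the plain Euclidean distance are assumptions, not the paper's exact similarity measure.

```python
# Hedged sketch: MBR-centre + 1D-FFT features and nearest-feature matching
# between consecutive frames (a simplified reading of the abstract).
import numpy as np

def object_features(patch: np.ndarray, bbox: tuple) -> np.ndarray:
    """patch: grayscale pixels inside the object's MBR; bbox: (x, y, w, h)."""
    x, y, w, h = bbox
    center = np.array([x + w / 2.0, y + h / 2.0])
    fft_h = np.abs(np.fft.fft(patch.mean(axis=0)))   # horizontal profile FFT
    fft_v = np.abs(np.fft.fft(patch.mean(axis=1)))   # vertical profile FFT
    return np.concatenate([center, [fft_h.mean(), fft_v.mean()]])

def match(prev_feats: list, cur_feats: list) -> list:
    """Pair each current object with its most similar previous object."""
    pairs = []
    for i, cf in enumerate(cur_feats):
        j = int(np.argmin([np.linalg.norm(cf - pf) for pf in prev_feats]))
        pairs.append((j, i))
    return pairs
```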

Decimation-in-time Search Direction Algorithm for Displacement Prediction of Moving Object (이동물체의 변위 예측을 위한 시간솎음 탐색 방향 알고리즘)

  • Lim, Kang-mo; Lee, Joo-shin
    • Journal of the Korea Institute of Information and Communication Engineering, v.9 no.2, pp.338-347, 2005
  • In this paper, a decimation-in-time search direction algorithm for predicting the displacement of a moving object is proposed. The algorithm is initialized by detecting the moving object in sequential frames and obtaining its moving angle and moving distance. The moving direction of the object in the current frame is then obtained by applying the decimation-in-time search direction mask: the moving object is detected by thinning out frames from the sequence, and its moving direction is predicted with a search mask selected according to the moving angle quantized into 8 directions. To examine the validity of the proposed algorithm, the velocities of a driving car were measured and tracked, and to evaluate its efficiency, the proposed algorithm was compared with the full search algorithm. The results show that the number of displacement searches is reduced by up to 91.8% on average with the proposed algorithm, and the tracking processing time is 32.1 ms on average.
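
The direction-quantization idea can be illustrated as below: the displacement between two frames is mapped to one of 8 search directions so that the next search is restricted to that direction. The paper's mask definition itself is not reproduced; the direction labels and coordinates are illustrative.

```python
# Hedged sketch: quantizing a displacement into one of 8 search directions
# (an illustration of the idea, not the paper's exact search mask).
import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def search_direction(prev_center, cur_center) -> str:
    dx = cur_center[0] - prev_center[0]
    dy = prev_center[1] - cur_center[1]      # image y-axis points downward
    angle = math.degrees(math.atan2(dy, dx)) % 360
    return DIRECTIONS[int(((angle + 22.5) % 360) // 45)]

print(search_direction((100, 120), (112, 111)))   # e.g. "NE"
```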

A Framework for Object Detection by Haze Removal (안개 제거에 의한 객체 검출 성능 향상 방법)

  • Kim, Sang-Kyoon; Choi, Kyoung-Ho; Park, Soon-Young
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.5, pp.168-176, 2014
  • Detecting moving objects in a video sequence is a fundamental and critical task in video surveillance, traffic monitoring and analysis, and human detection and tracking. It is very difficult to detect moving objects in a video sequence degraded by environmental factors such as fog. In particular, the color of an object becomes similar to that of its surroundings and the saturation is reduced, making it very difficult to distinguish the object from the background. For this reason, the performance and reliability of object detection and tracking are poor in foggy weather. In this paper, we propose a novel method to improve object detection performance by combining a haze removal algorithm with a local histogram-based object tracking method. For the quantitative evaluation of the proposed system, the information retrieval measures recall and precision are used to quantify how much the performance improves before and after haze removal. As a result, the visibility of the image is enhanced and object detection performance is improved.
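
To illustrate the "dehaze, then detect" pipeline, the sketch below implements a minimal dark-channel-style haze estimate and scene recovery; it is not the paper's haze removal algorithm, and the patch size, omega, and t0 parameters are conventional assumptions.

```python
# Hedged sketch: a minimal dark-channel-style dehazing step that could precede
# object detection (not the paper's algorithm; parameters are assumptions).
import cv2
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Per-pixel minimum over color channels, then over a local patch."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def simple_dehaze(img: np.ndarray, omega: float = 0.95, t0: float = 0.1) -> np.ndarray:
    """img: 8-bit BGR image; returns a dehazed float image in [0, 1]."""
    img_f = img.astype(np.float32) / 255.0
    # Atmospheric light: the pixel with the largest dark-channel value.
    atmosphere = img_f.reshape(-1, 3)[np.argmax(dark_channel(img_f).ravel())]
    transmission = 1.0 - omega * dark_channel(img_f / atmosphere)
    transmission = np.clip(transmission, t0, 1.0)[..., None]
    return np.clip((img_f - atmosphere) / transmission + atmosphere, 0.0, 1.0)
```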

A Multiple Vehicle Object Detection Algorithm Using Feature Point Matching (특징점 매칭을 이용한 다중 차량 객체 검출 알고리즘)

  • Lee, Kyung-Min; Lin, Chi-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.17 no.1, pp.123-128, 2018
  • In this paper, we propose a multi-vehicle object detection algorithm using feature point matching that tracks vehicle objects efficiently. The proposed algorithm extracts feature points of the vehicles using the FAST algorithm. The image is divided into a 5x5 grid of regions: a region that contains feature points is marked True, a region without feature points is marked False, and False regions are blacked out to remove information unrelated to vehicle objects. The post-processed area is then set as the maximum search window for the vehicle, and a minimum search window is set from the outermost feature points of the vehicle. These search windows compensate for the drawbacks of the mean-shift algorithm's search window size when tracking vehicle objects. To evaluate the performance of the proposed method, it is compared and tested against the SIFT and SURF algorithms. The proposed method is about four times faster than SIFT and detects vehicles more efficiently than SURF.
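
The FAST-plus-grid preprocessing can be approximated as below: keypoints are detected, the image is split into a 5x5 grid, and cells without keypoints are blacked out. The grid size comes from the abstract; the FAST threshold and function structure are assumptions, and the mean-shift tracking step is omitted.

```python
# Hedged sketch: FAST keypoints plus a 5x5 grid mask that blacks out cells
# containing no keypoints (an approximation of the preprocessing above).
import cv2
import numpy as np

def mask_non_vehicle_regions(gray: np.ndarray, grid: int = 5) -> np.ndarray:
    """gray: 8-bit grayscale frame; returns the frame with empty cells blacked out."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    keypoints = fast.detect(gray, None)
    h, w = gray.shape
    cell_h, cell_w = h // grid, w // grid
    keep = np.zeros((grid, grid), dtype=bool)
    for kp in keypoints:
        x, y = kp.pt
        keep[min(int(y // cell_h), grid - 1), min(int(x // cell_w), grid - 1)] = True
    out = gray.copy()
    for r in range(grid):
        for c in range(grid):
            if not keep[r, c]:          # black out cells without feature points
                out[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w] = 0
    return out
```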

Development of an Integrated Traffic Object Detection Framework for Traffic Data Collection (교통 데이터 수집을 위한 객체 인식 통합 프레임워크 개발)

  • Yang, Inchul; Jeon, Woo Hoon; Lee, Joyoung; Park, Jihyun
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.18 no.6, pp.191-201, 2019
  • A fast and accurate integrated traffic object detection framework was proposed and developed, combining a computer-vision-based deep learning approach for automatic object detection, a multi-object tracking technique, and video pre-processing tools. The proposed method can detect traffic objects such as cars, buses, trucks, and vans from video recordings taken under various external conditions, including video stability, weather, and camera angle, and can count the objects by tracking them in real time. In experimental scenarios designed to cover the various conditions that are likely to affect video quality, the proposed method achieves outstanding performance of 98%~100% accuracy, except in cases of rain and snow.
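
As a small illustration of the counting-by-tracking output stage, the snippet below counts each tracked object once per class from a stream of (frame, track_id, class) records; the detector and tracker producing these records are not specified in the abstract, and the sample data are hypothetical.

```python
# Hedged sketch: per-class counting of tracked objects from detection records
# (the framework's actual detector and tracker are not reproduced here).
from collections import defaultdict

def count_by_class(tracked_detections) -> dict:
    """tracked_detections: iterable of (frame_idx, track_id, class_name)."""
    seen = set()
    counts = defaultdict(int)
    for _, track_id, cls in tracked_detections:
        if track_id not in seen:         # count each tracked object only once
            seen.add(track_id)
            counts[cls] += 1
    return dict(counts)

stream = [(0, 1, "car"), (1, 1, "car"), (1, 2, "bus"), (2, 3, "truck")]
print(count_by_class(stream))            # {'car': 1, 'bus': 1, 'truck': 1}
```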