• Title/Summary/Keyword: object detect


A Study on Detection and Resolving of Occlusion Area by Street Tree Object using ResNet Algorithm (ResNet 알고리즘을 이용한 가로수 객체의 폐색영역 검출 및 해결)

  • Park, Hong-Gi;Bae, Kyoung-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.77-83 / 2020
  • Technologies for 3D spatial information, such as Smart City and Digital Twin, are developing rapidly to manage land and solve urban problems scientifically. In constructing 3D spatial information, objects are built into a digital database from aerial photographs. In practice, the tasks of extracting a texturing image, which is an actual photograph of an object's wall, and attaching that image to the object wall are important. However, occluded areas occur in the texturing images. In this study, the ResNet algorithm, a deep learning technique, was tested to solve this problem. A dataset was constructed, and street trees were detected using the ResNet algorithm. The ability of the ResNet algorithm to detect street trees depended on the brightness of the image, and the algorithm could detect street trees in images taken at side and inclination angles.
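The abstract does not include code, but the detection step could plausibly be set up as a binary patch classifier built on a pretrained ResNet. A minimal sketch follows, assuming a hypothetical folder layout (`patches/train/{street_tree,background}`) and standard PyTorch/torchvision APIs; it is an illustration, not the authors' implementation.

```python
# Sketch: fine-tune a pretrained ResNet-50 to classify texture-image patches
# as "street tree" vs. "background" so tree-occluded regions can be masked.
# Dataset paths and class layout are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ImageFolder layout assumed: patches/train/{street_tree,background}/*.jpg
train_set = datasets.ImageFolder("patches/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: tree / background

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                     # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```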

Real-Time Landmark Detection using Fast Fourier Transform in Surveillance (서베일런스에서 고속 푸리에 변환을 이용한 실시간 특징점 검출)

  • Kang, Sung-Kwan;Park, Yang-Jae;Chung, Kyung-Yong;Rim, Kee-Wook;Lee, Jung-Hyun
    • Journal of Digital Convergence / v.10 no.7 / pp.123-128 / 2012
  • In this paper, we propose a landmark-detection system for more accurate object recognition. The system is divided into a learning stage and a detection stage. The learning stage builds an interest-region model that defines the search region of each landmark, providing the prior information needed by the detection stage, and trains a detector for each landmark to find it within its search region. The detection stage then uses the interest-region model built in the learning stage to set up the search region of each landmark in an input image. The proposed system uses the Fast Fourier Transform for landmark detection because it is fast, and it is less likely to lose track of objects. The proposed method was applied to surveillance video; even for objects moving at an irregular rate, tracking was found to be stable. The experimental results show that the proposed approach achieves superior performance to previous methods on various data sets.
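To make the FFT-based matching step concrete, here is a minimal sketch (an assumption-laden illustration, not the paper's implementation) that locates a landmark template inside its search region via cross-correlation computed with the convolution theorem, which is what makes the per-landmark detection fast.

```python
# Sketch: FFT-based cross-correlation of a landmark template against a search region.
import numpy as np

def fft_match(search_region: np.ndarray, template: np.ndarray):
    """Return (row, col) of the best template match inside search_region."""
    # Zero-pad the template to the search-region size and remove the means.
    padded = np.zeros_like(search_region, dtype=np.float64)
    padded[:template.shape[0], :template.shape[1]] = template - template.mean()
    region = search_region - search_region.mean()

    # Cross-correlation via the convolution theorem: corr = IFFT(FFT(s) * conj(FFT(t))).
    corr = np.fft.ifft2(np.fft.fft2(region) * np.conj(np.fft.fft2(padded))).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```

The peak of the (circular) correlation surface gives the landmark's offset within the search region; in a real tracker this would be repeated per landmark per frame.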

Real Time Hornet Classification System Based on Deep Learning (딥러닝을 이용한 실시간 말벌 분류 시스템)

  • Jeong, Yunju;Lee, Yeung-Hak;Ansari, Israfil;Lee, Cheol-Hee
    • Journal of IKEEE / v.24 no.4 / pp.1141-1147 / 2020
  • Hornet species are so similar in shape that they are difficult for non-experts to classify, and because the insects are small and move fast, detecting and classifying them in real time is even more difficult. In this paper, we develop a system that classifies hornet species in real time using a bounding-box-based deep learning algorithm. To minimize the background area included in the bounding box when labeling the training images, we propose selecting only the head and body of the hornet. We also experimentally compare existing bounding-box-based object recognition algorithms to find the one best able to detect hornets in real time and classify their species. In the experiments, when the Mish function was used as the activation function of the convolution layers and the hornet images were tested with a YOLOv4 model that applies a Spatial Attention Module (SAM) before the object detection block, the average precision was 97.89% and the average recall was 98.69%.
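As a rough illustration of how such a trained YOLOv4 detector might be run in real time, the sketch below loads a Darknet model through OpenCV's DNN module. The .cfg/.weights file names and the class list are hypothetical placeholders, not the authors' released artifacts.

```python
# Sketch: real-time hornet detection/classification with a trained YOLOv4 model.
import cv2

classes = ["vespa_mandarinia", "vespa_crabro", "vespa_velutina"]   # assumed labels
net = cv2.dnn.readNetFromDarknet("yolov4-hornet.cfg", "yolov4-hornet.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture(0)                         # live camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    for cid, conf, (x, y, w, h) in zip(class_ids, confidences, boxes):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{classes[int(cid)]} {float(conf):.2f}",
                    (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("hornets", frame)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break
```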

A Study of Tram-Pedestrian Collision Prediction Method Using YOLOv5 and Motion Vector (YOLOv5와 모션벡터를 활용한 트램-보행자 충돌 예측 방법 연구)

  • Kim, Young-Min;An, Hyeon-Uk;Jeon, Hee-gyun;Kim, Jin-Pyeong;Jang, Gyu-Jin;Hwang, Hyeon-Chyeol
    • KIPS Transactions on Software and Data Engineering / v.10 no.12 / pp.561-568 / 2021
  • In recent years, autonomous driving has become a high-value-added technology attracting attention in science and industry. For smooth self-driving, objects must be detected and their movement speeds estimated accurately in real time. CNN-based deep learning algorithms and conventional dense optical flow are computationally expensive, which makes real-time detection and speed estimation difficult. In this paper, using a single camera image, fast object detection is performed with the YOLOv5 deep learning algorithm, and the speed of each object is estimated quickly with a local dense optical flow, a modification of conventional dense optical flow restricted to the detected object region. Based on this algorithm, we present a system that predicts the collision time and collision probability, and we intend this system to contribute to preventing tram accidents.
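A minimal sketch of the detection-plus-local-flow idea is given below, under stated assumptions: YOLOv5 is loaded via torch.hub, Farneback dense flow is computed only inside each detected box, and the video file name and any thresholds are placeholders. It illustrates the pipeline shape, not the published system.

```python
# Sketch: YOLOv5 pedestrian detection + local dense optical flow for speed estimation.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")    # pretrained detector

def local_flow_speed(prev_gray, curr_gray, box):
    """Mean optical-flow magnitude (pixels/frame) inside one bounding box."""
    x1, y1, x2, y2 = [int(v) for v in box]
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray[y1:y2, x1:x2], curr_gray[y1:y2, x1:x2],
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(mag.mean())

cap = cv2.VideoCapture("tram_front_camera.mp4")            # placeholder video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    det = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for *box, conf, cls in det.xyxy[0].tolist():
        if int(cls) == 0:                                  # COCO class 0 = person
            speed = local_flow_speed(prev_gray, gray, box)
            print(f"pedestrian speed ~{speed:.1f} px/frame, conf {conf:.2f}")
    prev_gray = gray
```

Converting the pixel speed into a collision time would additionally require the camera geometry or a pixel-to-metre scale, which the sketch leaves out.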

Estimation Method for Kinematic Constraint of Unknown Object by Active Sensing (미지 물체의 구속상태에 관한 실시간 추정방법)

  • Hwang Chang-Soon
    • Transactions of the Korean Society of Mechanical Engineers A / v.29 no.2 s.233 / pp.188-200 / 2005
  • Control of a multi-fingered robotic hand is usually based on theoretical analysis of the kinematics and dynamics of the fingers and the object. However, implementing such analyses on real robotic hands is difficult because of errors and uncertainties in real situations. This article presents a control method for estimating the kinematic constraint of an unknown object by active sensing. The experimental system has a two-fingered robotic hand suspended vertically for manipulation in the vertical plane. The fingers, with three degrees of freedom, are driven by wires directly connected to voice-coil motors without reduction gears. The fingers are equipped with three-axis force sensors and with dynamic tactile sensors that detect slippage between the fingertip surfaces and the object. To estimate the kinematic constraint of the unknown object accurately, i.e. the constraint direction and the constraint center, four kinds of active sensing and feedback control algorithms were developed: two position-based and two force-based. Furthermore, a compound and more effective algorithm was developed by combining two of them. Force sensors are mainly used to adapt to errors and uncertainties encountered during constraint estimation. Several experimental results involving the motion of lifting a finger off an unknown object are presented.
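As an illustration of the position-based idea only (not the paper's controller), the sketch below assumes the fingertip's contact points have been logged while probing the object and recovers a constraint direction (prismatic case) or a constraint center (revolute case) by least squares.

```python
# Sketch: estimating constraint direction / center from logged 2-D contact points.
import numpy as np

def constraint_direction(points: np.ndarray) -> np.ndarray:
    """Dominant motion direction of contact points (prismatic constraint)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]                                  # unit vector of largest variance

def constraint_center(points: np.ndarray) -> np.ndarray:
    """Least-squares circle fit (revolute constraint): returns center (cx, cy)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    cx, cy, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([cx, cy])
```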

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. Firstly, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is effectively supervised, inspired by random forest ideas, to reduce the randomness of E2LSH. Secondly, the graph-based visual saliency (GBVS) algorithm is applied to compute the saliency map of each image and to weight the visual words according to this saliency prior. Finally, a saliency-map-weighted visual language model is used to accomplish object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
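The saliency-weighting step can be pictured with the following sketch, in which a plain random-hyperplane hash stands in for E2LSH and a precomputed saliency map stands in for GBVS; both are simplifying assumptions rather than the paper's components. Each local feature votes for its visual word with a weight equal to the saliency at its keypoint.

```python
# Sketch: saliency-weighted bag-of-visual-words histogram with a hashing dictionary.
import numpy as np

rng = np.random.default_rng(0)
NUM_BITS = 10                                      # 2**10 visual "words"
proj = rng.normal(size=(NUM_BITS, 128))            # random hyperplanes for 128-D SIFT

def visual_word(descriptor: np.ndarray) -> int:
    """Hash one SIFT descriptor to a word index by random hyperplane signs."""
    bits = (proj @ descriptor) > 0
    return int(bits.dot(1 << np.arange(NUM_BITS)))

def weighted_histogram(descriptors, keypoints_xy, saliency_map):
    """Each feature votes for its word, weighted by saliency at its keypoint."""
    hist = np.zeros(2 ** NUM_BITS)
    for desc, (x, y) in zip(descriptors, keypoints_xy):
        hist[visual_word(desc)] += saliency_map[int(y), int(x)]
    return hist / (hist.sum() + 1e-12)             # L1-normalized image signature
```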

Object Detection Method for The Wild Pig Surveillance System (멧돼지 감시 시스템을 위한 객체 검출 방법)

  • Kim, Dong-Woo;Song, Young-Jun;Kim, Ae-Kyeong;Hong, You-Sik;Ahn, Jae-Hyeong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.5 / pp.229-235 / 2010
  • In this paper, we propose a method to improve the efficiency of moving-object detection in a real-time surveillance camera system. Existing methods based on differential images and background images have difficulty detecting a moving object that enters from outside the video stream. The proposed method keeps the current background image when no moving object is detected from the difference between the previous and current frames, and the background image is renewed once the moving object has left the frame. To distinguish people from wild pigs, the proposed system estimates a bounding box enclosing each moving object in the detection region. Simulation results show that the proposed method outperforms the existing methods.
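A minimal sketch of the frame-difference and background-update logic described above is given below; the thresholds, video source, and the aspect-ratio rule for separating people from wild pigs are illustrative assumptions, not the authors' values.

```python
# Sketch: frame differencing with background renewal and bounding-box classification.
import cv2

cap = cv2.VideoCapture("field_camera.mp4")          # placeholder source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
background = prev_gray.copy()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Frame difference decides whether anything is moving right now.
    frame_diff = cv2.absdiff(prev_gray, gray)
    moving = cv2.countNonZero(cv2.threshold(frame_diff, 25, 255, cv2.THRESH_BINARY)[1]) > 500

    if not moving:
        background = gray.copy()                    # renew background once the object is gone

    # Background difference yields the moving-object mask and bounding boxes.
    mask = cv2.threshold(cv2.absdiff(background, gray), 25, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 2000:                            # ignore small noise blobs
            label = "person" if h > 1.3 * w else "wild pig"   # crude aspect-ratio rule
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    prev_gray = gray
```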

Moving Object Detection Using Sparse Approximation and Sparse Coding Migration

  • Li, Shufang;Hu, Zhengping;Zhao, Mengyao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.5 / pp.2141-2155 / 2020
  • To meet the requirements of background change, illumination variation, moving-shadow interference, and high accuracy in object detection with a moving camera, while striving for real-time operation and high efficiency, this paper presents an object detection algorithm based on sparse approximation recursion and sparse coding migration in subspace. First, low-rank sparse decomposition is used to reduce the dimensionality of the data. Combined with dictionary-based sparse representation, the computational model is established by a recursive formula of sparse approximation, with the video sequences taken as subspace sets, and the moving object is obtained by the background difference method, which effectively reduces computational complexity and running time. Following the idea of sparse coding migration, the above operations are carried out in a down-sampled space to further reduce the computational and memory requirements; this adapts to multi-scale target objects and overcomes the impact of large anomaly areas. Finally, experiments are carried out on the VDAO dataset containing 59 sets of videos. The experimental results show that the algorithm can detect moving objects effectively from a camera moving at uniform speed, with both low computational complexity and low storage requirements, so the proposed algorithm is suitable for detection systems with strict real-time requirements.
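The low-rank-plus-sparse intuition behind such methods can be shown with a rough stand-in: a rank-k SVD of the down-sampled frame matrix approximates the background subspace, and the thresholded residual marks moving objects. This is an illustrative simplification, not the paper's recursive sparse-approximation model.

```python
# Sketch: low-rank background + sparse foreground split on down-sampled frames.
import numpy as np

def lowrank_sparse_split(frames: np.ndarray, rank: int = 1, thresh: float = 0.1):
    """frames: (T, H, W) down-sampled grayscale video scaled to [0, 1].
    Returns (background, foreground_masks) with the same shape."""
    T, H, W = frames.shape
    D = frames.reshape(T, H * W)                  # each row is one vectorized frame
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # low-rank part ~ background subspace
    S = np.abs(D - L) > thresh                    # sparse residual ~ moving objects
    return L.reshape(T, H, W), S.reshape(T, H, W)
```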

Brain Dynamics and Interactions for Object Detection and Basic-level Categorization (물체 탐지와 범주화에서의 뇌의 동적 움직임 추적)

  • Kim, Ji-Hyun;Kwon, Hyuk-Chan;Lee, Yong-Ho
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.05a / pp.219-222 / 2009
  • Rapid object recognition is one of the mainstream research themes aimed at revealing how humans recognize objects and interact with the environment in the natural world. This field of study matters because, from an evolutionary perspective, it is highly important to quickly perceive external objects and judge their characteristics in order to plan subsequent reactions. In this study, we investigated how humans detect natural-scene objects and categorize them within a limited time frame. We recorded magnetoencephalography (MEG) while participants performed detection (e.g. object vs. texture) or basic-level categorization (e.g. cars vs. dogs) tasks, in order to track the dynamic interactions in the human brain during rapid object recognition. The results revealed that detection and categorization involve different temporal and functional connections that together support the successful recognition process as a whole. These results imply that brain dynamics are important for our interaction with the environment. The implications of this study can be further extended to investigate the effect of subconscious emotional factors on the dynamics of brain interactions during rapid recognition.


Object Identification and Localization for Image Recognition (이미지 인식을 위한 객체 식별 및 지역화)

  • Lee, Yong-Hwan;Park, Je-Ho;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.11 no.4 / pp.49-55 / 2012
  • This paper proposes an efficient method of object identification and localization for image recognition. The proposed algorithm utilizes correlogram back-projection in the YCbCr chromaticity components to handle the problem of sub-region querying. Exploiting similar spatial color information enables the primary location and candidate regions to be detected and located accurately, without additional information about the number of objects. Compared with existing methods, the experimental results show an improvement of 21%. These results reveal that the color correlogram is markedly more effective than the color histogram for this task. The main contribution of this paper is that a different treatment of color spaces and a histogram measure that incorporates spatial color information are applied to object localization. This approach opens up new opportunities for object detection in interactive imaging and 2D-based augmented reality.
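For orientation, the sketch below uses plain histogram back-projection on the Cr/Cb chroma channels as a simpler stand-in for the paper's correlogram back-projection (the correlogram additionally encodes spatial co-occurrence of colors). File names are placeholders and the whole snippet is an assumption-level illustration.

```python
# Sketch: chroma-histogram back-projection in YCrCb to localize a queried object region.
import cv2

query = cv2.imread("object_patch.png")            # placeholder sub-region query
scene = cv2.imread("scene.png")                   # placeholder target image

query_ycc = cv2.cvtColor(query, cv2.COLOR_BGR2YCrCb)
scene_ycc = cv2.cvtColor(scene, cv2.COLOR_BGR2YCrCb)

# 2-D chroma histogram of the query (channels 1 and 2 = Cr, Cb); luma is ignored.
hist = cv2.calcHist([query_ycc], [1, 2], None, [32, 32], [0, 256, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-project: each scene pixel is scored by how likely its chroma is under the query.
likelihood = cv2.calcBackProject([scene_ycc], [1, 2], hist, [0, 256, 0, 256], scale=1)
likelihood = cv2.GaussianBlur(likelihood, (15, 15), 0)

# The peak of the likelihood map is the primary candidate location of the object.
_, _, _, max_loc = cv2.minMaxLoc(likelihood)
print("most likely object location (x, y):", max_loc)
```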