• Title/Summary/Keyword: Object accuracy


A Formal Specification and Accuracy Checking of 2+1 View Integrated Metamodel Using Z and Object-Z (Z/Object-Z 사용한 2+1 View 통합 메타모델의 정형 명세와 명확성 검사)

  • Song, Chee-Yang
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.1 / pp.449-459 / 2014
  • The 2+1 view integrated metamodel proposed earlier was defined only as a graphical class model, so its syntactic clarity and accuracy cannot be guaranteed precisely due to the informal specification. This paper formally specifies the syntactic semantics of the 2+1 view integrated metamodel using Z and Object-Z and checks the accuracy of the metamodel with the Z/Eves tool. The formal specification expresses the syntax and static semantics of the metamodel in separate Z and Object-Z schemas, applying conversion rules between the class model and Z/Object-Z. The accuracy of the resulting Z specification is verified with Z/Eves, which checks the syntax, types, and domains of a Z specification. This transformation and checking helps establish more accurate syntactic semantics for the metamodel's constructs and verify the accuracy of the metamodel.
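
As a rough illustration of the kind of class-model-to-Z conversion the abstract describes, the following minimal LaTeX sketch renders a hypothetical metamodel class as a Z state schema. The schema name, declarations, and invariant are invented for illustration, and the `schema` environment is assumed to come from a Z typesetting package such as zed-csp; this is not the authors' specification.

```latex
\documentclass{article}
\usepackage{zed-csp}  % assumed Z typesetting package providing zed/schema/\where

\begin{document}

% Hypothetical conversion of a metamodel class "View" into a Z state schema:
% class attributes become declarations, multiplicity constraints become predicates.
\begin{zed}
  [NAME, ELEMENT]
\end{zed}

\begin{schema}{View}
  name     : NAME \\
  elements : \power ELEMENT
\where
  elements \neq \emptyset
\end{schema}

\end{document}
```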

A Method for Improving Accuracy of Object Recognition and Pose Estimation by Using Kinect sensor (Kinect센서를 이용한 물체 인식 및 자세 추정을 위한 정확도 개선 방법)

  • Kim, Anna;Yee, Gun Kyu;Kang, Gitae;Kim, Yong Bum;Choi, Hyouk Ryeol
    • The Journal of Korea Robotics Society / v.10 no.1 / pp.16-23 / 2015
  • This paper presents a method for improving the pose recognition accuracy of objects using a Kinect sensor. First, using the SURF algorithm, one of the most widely used local feature point algorithms, we modify the algorithm's internal parameters for efficient object recognition: adjusting the distance between the box filters, modifying the Hessian matrix, and eliminating improper keypoints. Second, the object orientation is estimated based on a homography. Finally, a novel auto-scaling method is proposed to improve the accuracy of object pose estimation. The proposed algorithm is experimentally tested with objects in a plane, and its effectiveness is validated.
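
The general pipeline the abstract outlines (SURF keypoints with a tuned Hessian threshold, filtering of weak matches, and homography-based orientation estimation) can be sketched with OpenCV roughly as follows. The file names and threshold values are placeholders, and SURF is only available in non-free opencv-contrib builds, so this is a sketch rather than the authors' implementation.

```python
import cv2
import numpy as np

# SURF lives in the non-free opencv-contrib modules; hessianThreshold is the
# tunable detector parameter referred to above (value is a placeholder).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

model = cv2.imread("model_object.png", cv2.IMREAD_GRAYSCALE)      # reference view
scene = cv2.imread("kinect_rgb_frame.png", cv2.IMREAD_GRAYSCALE)  # Kinect RGB frame

kp1, des1 = surf.detectAndCompute(model, None)
kp2, des2 = surf.detectAndCompute(scene, None)

# Match descriptors and discard improper keypoints with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Estimate the model-to-scene homography; RANSAC rejects remaining outliers.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# For a planar object, the in-plane rotation can be read roughly from the
# homography's upper-left block.
theta = np.degrees(np.arctan2(H[1, 0], H[0, 0]))
print(f"estimated in-plane rotation: {theta:.1f} deg, inliers: {int(mask.sum())}")
```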

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication / v.11 no.4 / pp.76-85 / 2019
  • We introduce a vision-based object detection method for real-time video surveillance systems in low-end edge computing environments. Recently, object detection accuracy has improved thanks to deep-learning approaches such as the Region Convolutional Neural Network (R-CNN), which uses a two-stage inference pipeline. One-stage detection algorithms such as single-shot detection (SSD) and You Only Look Once (YOLO), by contrast, trade some accuracy for speed and can be used in real-time systems. However, high-performance hardware such as general-purpose computing on graphics processing units (GPGPU) is still required to achieve excellent object detection performance and speed. To address this hardware requirement, which is burdensome for low-end edge computing environments, we propose a sub-frame analysis method for object detection. Specifically, we divide a whole image frame into smaller sub-frames and then run inference on them with a Convolutional Neural Network (CNN)-based detection network, which is much faster than a conventional network designed for full-frame images. With the proposed method, we reduce the computational requirement significantly without losing throughput or object detection accuracy.
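
A minimal sketch of the sub-frame idea: split a frame into tiles, run a detector on each tile, and map the detections back to full-frame coordinates. `run_detector` stands in for whatever CNN detector is used and is a hypothetical placeholder, as is the 2x2 tiling.

```python
import numpy as np

def run_detector(tile):
    """Placeholder for a CNN-based detector; returns [(x, y, w, h, score), ...]
    in tile-local pixel coordinates."""
    raise NotImplementedError

def detect_by_subframes(frame, rows=2, cols=2):
    """Split a frame into rows x cols sub-frames, detect in each sub-frame,
    and shift the boxes back into full-frame coordinates."""
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols
    detections = []
    for r in range(rows):
        for c in range(cols):
            tile = frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for (x, y, bw, bh, score) in run_detector(tile):
                detections.append((x + c * tw, y + r * th, bw, bh, score))
    return detections
```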

A Design and Implementation of Object Recognition based Interactive Game Contents using Kinect Sensor and Unity 3D Engine (키넥트 센서와 유니티 3D 엔진기반의 객체 인식 기법을 적용한 체험형 게임 콘텐츠 설계 및 구현)

  • Jung, Se-hoon;Lee, Ju-hwan;Jo, Kyeong-Ho;Park, Jae-Seong;Sim, Chun Bo
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1493-1503 / 2018
  • We propose an object recognition system and experiential game content that use Kinect to maximize the object recognition rate with underwater robots. To validate the excellence of the proposed system, we implement an ice hockey game based on object-aware interactive content. The object recognition system, a preprocessing module, is built on Kinect and OpenCV, and network sockets are used for object recognition communication between client and server. The degradation of object recognition at long distances, a problem in existing research, is solved by combining the system development methods suggested in this study. In the performance evaluation, the underwater robot recognized all target objects (90.49%) at a distance of 2 m with 80% accuracy, giving an F-measure of 42.46%. At 2.5 m it recognized 82.87% of the target objects with 60.5% accuracy (F-measure 34.96%), and at 3 m it recognized 98.50% of the target objects with 59.4% accuracy (F-measure 37.04%).
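
As an illustration of the client/server split the abstract mentions (an OpenCV-based recognition module pushing results to the Unity game over a network socket), a minimal hedged sketch follows. The host, port, and JSON message format are assumptions for illustration, not the authors' protocol.

```python
import json
import socket

HOST, PORT = "127.0.0.1", 9000  # assumed address of the Unity-side listener

def send_detection(label, x, y, confidence):
    """Send one recognition result as a JSON line over TCP (hypothetical format)."""
    msg = json.dumps({"label": label, "x": x, "y": y, "conf": confidence}) + "\n"
    with socket.create_connection((HOST, PORT), timeout=1.0) as sock:
        sock.sendall(msg.encode("utf-8"))

# Example: report a puck detected by the OpenCV preprocessing module.
send_detection("puck", 320, 240, 0.93)
```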

The Accuracy analysis of a RFID-based Positioning System with Kalman-filter (칼만필터를 적용한 RFID-기반 위치결정 시스템의 정확도 분석)

  • Heo, Joon;Kim, Jung-Hwan;Sohn, Hong-Gyoo;Yun, Kong-Hyun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2007.04a / pp.447-450 / 2007
  • Positioning technology for moving objects is an important and essential component of ubiquitous computing, and RFID (Radio Frequency IDentification) is a core technology of ubiquitous wireless communication. In this study we applied Kalman filter theory to an RFID-based positioning system in order to trace a time-variant moving object and verified the positioning accuracy using the RMSE (Root Mean Square Error). The purpose of this study is to verify the effect of the Kalman filter on positioning accuracy, to analyze through simulations how each design factor affects positioning accuracy, and to suggest a standard for the optimal design factors of an RFID-based positioning system. The simulation results show that the Kalman filter improves the positioning accuracy remarkably, while the detection range of the RFID tags is not a determining factor. A smaller standard deviation of the detection range improves the positioning accuracy and also reduces its fluctuation. A larger detection rate of the RFID tags yields smaller fluctuation in the positioning accuracy, a more stable system, and better positioning accuracy.
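
A minimal constant-velocity Kalman filter for smoothing 2D position fixes, in the spirit of the abstract; the noise covariances and time step below are placeholder values, not the study's design factors.

```python
import numpy as np

dt = 1.0  # time step between position fixes (placeholder)

# State: [x, y, vx, vy]; constant-velocity motion model.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(2) * 1.0                         # RFID ranging noise (assumed)

x = np.zeros(4)        # initial state
P = np.eye(4) * 10.0   # initial covariance

def kalman_step(z):
    """One predict/update cycle for a raw RFID position fix z = (x, y)."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.asarray(z, dtype=float) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x[:2]  # filtered position estimate
```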


Object Detection Accuracy Improvements of Mobility Equipments through Substitution Augmentation of Similar Objects (유사물체 치환증강을 통한 기동장비 물체 인식 성능 향상)

  • Heo, Jiseong;Park, Jihun
    • Journal of the Korea Institute of Military Science and Technology / v.25 no.3 / pp.300-310 / 2022
  • A vast amount of labeled data is required to train a deep neural network. A typical strategy for improving the performance of a neural network on a given training data set is data augmentation. The goal of this work is to offer a novel image augmentation method that improves object detection accuracy: an object in an image is removed, a similar object from the training data set is placed in its area, and an in-painting algorithm fills any removed space not covered by the similar object. Our technique shows improvements of up to 2.32 percent in mAP in testing on a military vehicle dataset with the YOLOv4 object detector.
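
A rough sketch of the substitution-augmentation step described above: remove a labeled object region, paste a similar object crop from the training set, and inpaint any pixels the substitute does not cover. OpenCV's `cv2.inpaint` stands in for the paper's in-painting algorithm, and the box handling is simplified.

```python
import cv2
import numpy as np

def substitute_object(image, box, substitute_crop):
    """Replace the object inside `box` (x, y, w, h) with `substitute_crop`
    (an image of a similar object from the training set), inpainting any
    removed area the substitute does not cover."""
    x, y, w, h = box
    out = image.copy()

    # Mark the whole original object area as "removed".
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255

    # Paste the similar object, centered in the original box.
    sh, sw = substitute_crop.shape[:2]
    sh, sw = min(sh, h), min(sw, w)
    oy, ox = y + (h - sh) // 2, x + (w - sw) // 2
    out[oy:oy + sh, ox:ox + sw] = substitute_crop[:sh, :sw]
    mask[oy:oy + sh, ox:ox + sw] = 0  # these pixels are now filled

    # Inpaint whatever part of the removed area is still empty.
    return cv2.inpaint(out, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```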

Restoring CCTV Data and Improving Object Detection Performance in Construction Sites by Super Resolution Based on Deep Learning (Super Resolution을 통한 건설현장 CCTV 고해상도 복원 및 Object Detection 성능 향상)

  • Kim, Kug-Bin;Suh, Hyo-Jeong;Kim, Ha-Rim;Yoo, Wi-Sung;Cho, Hun-Hee
    • Proceedings of the Korean Institute of Building Construction Conference / 2023.05a / pp.251-252 / 2023
  • As technology improves with the 4th industrial revolution, smart construction is becoming a key part of safety management in architecture and civil engineering. Construction sites can be managed efficiently by applying object detection technology to CCTV data. In this study, deep-learning-based super resolution is proposed to improve the accuracy of object detection at construction sites. As the resolution of the training and test data increases, the accuracy of the object detection model improves. Therefore, different object detection models can be considered depending on the scale of the construction site.
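
The pipeline the abstract describes (upscale low-resolution CCTV frames with a deep-learning super-resolution model, then run object detection on the restored frames) might look roughly like the sketch below. It relies on OpenCV's `dnn_superres` contrib module and a pre-trained EDSR model file, both of which are assumptions rather than the study's actual setup.

```python
import cv2

# dnn_superres ships with opencv-contrib; the model file path is a placeholder.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")       # pre-trained super-resolution weights (assumed)
sr.setModel("edsr", 4)           # 4x upscaling

frame = cv2.imread("cctv_frame_lowres.jpg")   # low-resolution CCTV frame
restored = sr.upsample(frame)                 # restored high-resolution frame

# Any detector can then be run on `restored`; `detect_objects` is hypothetical.
# boxes = detect_objects(restored)
cv2.imwrite("cctv_frame_sr.jpg", restored)
```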


Lidar Based Object Recognition and Classification (자율주행을 위한 라이다 기반 객체 인식 및 분류)

  • Byeon, Yerim;Park, Manbok
    • Journal of Auto-vehicle Safety Association / v.12 no.4 / pp.23-30 / 2020
  • Recently, self-driving research has been actively conducted at various institutions. Accurate recognition is important because information about surrounding objects is needed for safe autonomous driving. This study mainly deals with the signal processing of LiDAR, a sensor widely used for its high recognition accuracy, among the sensors used for object recognition. First, we cluster and track objects by predicting their relative position and speed. The characteristic points of each object are extracted from its point cloud data through the proposed algorithm, and vehicles and pedestrians are classified using the number of characteristic points and the distances among them. The algorithm for classifying cars and pedestrians was implemented and verified using a test vehicle equipped with LiDAR sensors. The accuracy of the proposed object classification algorithm was about 97%, an improvement of about 13.5% over a deep-learning-based algorithm.
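
A hedged sketch of the first stage the abstract mentions, grouping LiDAR points into object candidates. DBSCAN from scikit-learn is used here as a generic clustering step, not the paper's algorithm, and the eps/min_samples values and size features are placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_lidar_points(points_xyz, eps=0.5, min_samples=5):
    """Group LiDAR points (N x 3 array) into object clusters in the x-y plane.
    Returns a list of point arrays, one per detected cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz[:, :2])
    clusters = []
    for label in set(labels):
        if label == -1:          # -1 marks noise points
            continue
        clusters.append(points_xyz[labels == label])
    return clusters

def simple_size_features(cluster):
    """Bounding-box extent of a cluster; a crude stand-in for the paper's
    characteristic-point features used to separate vehicles from pedestrians."""
    mins, maxs = cluster.min(axis=0), cluster.max(axis=0)
    return maxs - mins  # (dx, dy, dz)
```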

Vanishing point-based 3D object detection method for improving traffic object recognition accuracy

  • Jeong-In, Park
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.93-101 / 2023
  • In this paper, we propose a method of creating a 3D bounding box for an object using vanishing points, in order to increase the accuracy of object recognition when detecting traffic objects in video from a camera. Recently, this 3D bounding box generation algorithm has been applied when vehicles captured by a traffic video camera are detected using artificial intelligence. The vertical vanishing point (VP1) and horizontal vanishing point (VP2) are derived by analyzing the camera installation angle and the direction of the captured image, and based on these, the moving objects in the video under analysis are specified. When this algorithm is applied, object information such as the location, type, and size of a detected object is easy to obtain, and moving objects such as cars can be tracked to determine each object's location, coordinates, speed, and direction of movement. When applied to actual roads, tracking improved by 10%; in particular, the recognition rate and tracking of shaded areas (small parts of vehicles hidden by large cars) improved by 100%, and the accuracy of traffic data analysis was improved.
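
As a small illustration of the geometry involved, the vanishing point of a set of parallel scene lines can be computed in homogeneous coordinates as the intersection of their image projections. The line endpoints below are arbitrary example values, not taken from the paper.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line1, line2):
    """Intersection of two homogeneous lines; for projections of parallel
    scene lines this intersection is their vanishing point."""
    vp = np.cross(line1, line2)
    return vp[:2] / vp[2]  # back to pixel coordinates

# Example: two lane markings converging in the image (arbitrary points).
l1 = line_through((100, 700), (550, 300))
l2 = line_through((1180, 700), (730, 300))
print(vanishing_point(l1, l2))   # approximate horizontal vanishing point
```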

Accurate Pig Detection for Video Monitoring Environment (비디오 모니터링 환경에서 정확한 돼지 탐지)

  • Ahn, Hanse;Son, Seungwook;Yu, Seunghyun;Suh, Yooil;Son, Junhyung;Lee, Sejun;Chung, Yongwha;Park, Daihee
    • Journal of Korea Multimedia Society / v.24 no.7 / pp.890-902 / 2021
  • Although object detection accuracy on still images has improved significantly with advances in deep learning techniques, object detection on video data remains challenging due to the real-time requirement and the accuracy drop caused by occlusion. In this research, we propose a pig detection method for a video monitoring environment. First, from video obtained with a tilted-down-view camera, we determine motion based on the average size of each pig at each location in the training data and extract key frames from this motion information. For each key frame, we then apply YOLO, which is known to have a superior trade-off between accuracy and execution speed among deep-learning-based object detectors, to obtain the pigs' bounding boxes. Finally, we merge the bounding boxes between consecutive key frames to reduce false positives and false negatives. Based on experiments with a video data set obtained from a pig farm, we confirmed that the pigs could be detected with an accuracy of 97% at a processing speed of 37 fps.
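
A minimal sketch of the post-processing step described above, merging boxes from consecutive key frames by IoU so that isolated false positives and short dropouts are smoothed out; the IoU threshold and (x, y, w, h) box format are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def merge_keyframe_boxes(prev_boxes, curr_boxes, iou_thr=0.5):
    """Keep current boxes confirmed by the previous key frame and carry over
    previous boxes that briefly disappeared (e.g. due to occlusion)."""
    confirmed = [c for c in curr_boxes if any(iou(c, p) >= iou_thr for p in prev_boxes)]
    carried = [p for p in prev_boxes if all(iou(p, c) < iou_thr for c in curr_boxes)]
    return confirmed + carried
```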