• Title/Abstract/Keyword: Object recognition system

Search results: 721 items

우리나라 국민의 우주위험인식 수준과 국가 재난정책 (Public's Recognition of the Space Object's Re-entry Situations and the National Space Disaster Management Policy)

  • 김시은;조성기;홍정유
    • 한국안전학회지
    • /
    • Vol. 31, No. 6
    • /
    • pp.84-92
    • /
    • 2016
  • Since mankind began its space missions, the number of artificial space objects has been increasing exponentially. These objects include not only spacecraft still in use but also machines that are no longer operational. Such defunct objects pose a serious danger and a real threat to human lives and property, because they can re-enter the Earth's atmosphere and fall to the ground, causing a large-scale disaster. As space activities continue to grow, re-entries of space objects will occur much more often in the future. Therefore, not only natural space objects such as asteroids but also artificial space objects such as satellites and space stations can cause disasters by falling to the ground. To protect the nation and its property, the government has established a space disaster management center at the Korea Astronomy and Space Science Institute. In this study, we surveyed the public's recognition of space object re-entry situations and analyzed the results to contribute to building a national space disaster management policy.

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • 한국측량학회지
    • /
    • Vol. 30, No. 6-2
    • /
    • pp.643-651
    • /
    • 2012
  • Object recognition is a high-level process and one of the most difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry, intelligent and autonomous surface reconstruction, has not yet been achieved. Object recognition requires a robust shape description of objects, but most shape descriptors are designed for 2D image data. Such descriptors therefore have to be extended to handle 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space with a hierarchical approach for segmenting point cloud data. The experiments demonstrate the effectiveness and robustness of the proposed method for shape description and point cloud segmentation. Geometric characteristics of various roof types are well described, which will eventually serve as the basis for object modeling. Segmentation accuracy on simulated data was evaluated by measuring the coordinates of corners on the segmented patch boundaries. The overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
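
A chain code in 2D encodes a boundary as a sequence of direction codes between neighboring pixels; extending it to 3D object space means encoding steps between neighboring voxels with a larger direction alphabet. The following is a minimal sketch of that idea, assuming an ordered path of 26-connected voxel coordinates; it illustrates the general 3D chain-code notion, not the authors' exact hierarchical descriptor.

```python
# Minimal sketch of a chain code extended to 3D voxel space. The 26-neighbor
# direction set and the encode/decode helpers are illustrative assumptions.
import itertools

import numpy as np

# All 26 unit steps between a voxel and its neighbors, each given an integer code.
DIRECTIONS = [d for d in itertools.product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
CODE_OF = {d: i for i, d in enumerate(DIRECTIONS)}

def chain_code_3d(voxels):
    """Encode an ordered path of neighboring voxels as a list of 3D chain codes."""
    codes = []
    for a, b in zip(voxels[:-1], voxels[1:]):
        step = tuple(np.sign(np.subtract(b, a)))  # clamp each axis step to -1/0/1
        codes.append(CODE_OF[step])
    return codes

def decode_3d(start, codes):
    """Rebuild the voxel path from a start voxel and its chain codes."""
    path = [tuple(start)]
    for c in codes:
        path.append(tuple(np.add(path[-1], DIRECTIONS[c])))
    return path

if __name__ == "__main__":
    boundary = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
    codes = chain_code_3d(boundary)
    assert decode_3d(boundary[0], codes) == boundary
    print(codes)
```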

Vision-based garbage dumping action detection for real-world surveillance platform

  • Yun, Kimin;Kwon, Yongjin;Oh, Sungchan;Moon, Jinyoung;Park, Jongyoul
    • ETRI Journal
    • /
    • Vol. 41, No. 4
    • /
    • pp.494-505
    • /
    • 2019
  • In this paper, we propose a new framework for detecting the unauthorized dumping of garbage in real-world surveillance camera footage. Although several action/behavior recognition methods have been investigated, these studies are hardly applicable to real-world scenarios because they mainly focus on well-refined datasets. Because dumping actions in the real world take a variety of forms, building a new method that discloses these actions, rather than exploiting previous approaches, is a better strategy. We detect the dumping action from the change in the relation between a person and the object being held by that person. To find the held object of indefinite form, we use a background subtraction algorithm and human joint estimation. The held object is then tracked, and a relation model between the joints and the object is built. Finally, the dumping action is detected through a voting-based decision module. In the experiments, we show the effectiveness of the proposed method by testing it on real-world videos containing various dumping actions. In addition, the proposed framework is implemented in a real-time monitoring system through a fast online algorithm.
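
As a rough illustration of the foreground step mentioned above, the sketch below uses OpenCV's MOG2 background subtractor to pull candidate foreground blobs from a video; the file name, thresholds, and blob-size filter are assumptions, and the paper's joint-relation model and voting module are not reproduced here.

```python
# Background-subtraction sketch for finding candidate held/dumped objects,
# assuming OpenCV and a hypothetical video file "surveillance.mp4".
import cv2

cap = cv2.VideoCapture("surveillance.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)                  # per-pixel foreground mask
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
    # `blobs` are candidate foreground regions; the paper then relates them to
    # estimated human joints and votes on dumping events over time.
cap.release()
```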

A Study on Image Labeling Technique for Deep-Learning-Based Multinational Tanks Detection Model

  • Kim, Taehoon;Lim, Dongkyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 14, No. 4
    • /
    • pp.58-63
    • /
    • 2022
  • Recently, the improvement in computational processing power brought about by the rapid development of computing technology has greatly advanced the field of artificial intelligence, and research to apply it in various domains is active. In the national defense field in particular, attention is paid to intelligent recognition among machine learning techniques, and efforts are being made to develop object identification and monitoring systems using artificial intelligence. To this end, various image processing technologies and object identification algorithms are applied to create models that can identify friendly and enemy weapon systems and personnel in real time. In this paper, we conducted image processing and object identification focused on tanks among the various weapon systems. We first processed the tank images using a convolutional neural network, a deep learning technique, examined the resulting feature maps, and derived the characteristics of the tanks that are crucial for learning. Then, using the YOLOv5 network, a CNN-based object detection network, we built one model trained by labeling the entire tank and another trained by labeling only the turret, and compared the results. The model and labeling technique proposed in this paper can identify the type of tank more accurately and contribute to the intelligent recognition systems to be developed in the future.
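
The labeling contrast described above (whole tank versus turret only) comes down to which bounding box is written into the YOLOv5 label file. The sketch below shows the standard YOLO normalization for both cases; the class id, image size, and box coordinates are made-up illustrative values, not the paper's dataset.

```python
# YOLOv5 label lines for whole-tank vs. turret-only annotation of the same image.
def to_yolo_line(class_id, box, img_w, img_h):
    """Convert a pixel box (x_min, y_min, x_max, y_max) to a YOLOv5 label line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2.0 / img_w
    yc = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

if __name__ == "__main__":
    img_w, img_h = 1280, 720
    full_tank_box = (200, 300, 900, 650)   # hypothetical whole-vehicle box
    turret_box = (420, 330, 700, 470)      # hypothetical turret-only box
    print(to_yolo_line(0, full_tank_box, img_w, img_h))   # whole-tank labeling
    print(to_yolo_line(0, turret_box, img_w, img_h))      # turret-only labeling
```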

2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템 (Emotion Recognition and Expression System of Robot Based on 2D Facial Image)

  • 이동훈;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol. 13, No. 4
    • /
    • pp.371-376
    • /
    • 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from 2D facial images, using the motion and position of multiple facial features. A tracking algorithm recognizes a moving user from the mobile robot, and a face-region detection algorithm removes the skin color of the hands and the background outside the face region from the captured user image. After normalization, in which the image is enlarged or reduced according to the distance to the detected face region and rotated by the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented so that the robot can recognize the user's emotion. A multilayer perceptron, an artificial neural network (ANN), is used as the pattern recognizer, with the backpropagation (BP) algorithm for learning. The emotion recognized by the robot is expressed on a graphic LCD: two coordinate values are changed according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) of an avatar on the LCD change with those coordinates, so that the system can express complex human emotions.
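
As a minimal sketch of the MLP-with-backpropagation classifier named in the abstract, the code below trains a one-hidden-layer sigmoid network on a toy feature vector; the layer sizes, emotion count, and training data are placeholders, not the paper's configuration.

```python
# Toy multilayer perceptron trained with backpropagation on facial-feature vectors.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 12, 16, 4          # feature dim, hidden units, emotion classes
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def train_step(x, target, lr=0.1):
    """One backpropagation update for a single (features, one-hot emotion) pair."""
    global W1, W2
    h, y = forward(x)
    delta_out = (y - target) * y * (1 - y)        # output-layer error term
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # hidden-layer error term
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)

# Toy usage: a random feature vector mapped to a one-hot "emotion" label.
x = rng.random(n_in)
t = np.eye(n_out)[2]
for _ in range(100):
    train_step(x, t)
```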

Development of Infants Music Education Application Using Augmented Reality

  • Yeon, Seunguk;Seo, Sukyong
    • 한국멀티미디어학회논문지
    • /
    • Vol. 21, No. 1
    • /
    • pp.69-76
    • /
    • 2018
  • Augmented reality (AR) technology has rapidly been applied to various application areas, including e-learning and e-education. Focusing on the design and development of an Android tablet application, this study aimed to develop infant music education using AR technology. We used a tablet instead of a personal computer because it is more easily accessible and more convenient. Our system allows infant users to play with teaching aids such as blocks or puzzles to mimic game-like musical play. The user sets a puzzle piece on the playground in front of the tablet and presses the play button. The system then extracts a region of interest from the images acquired by the internal camera and separates the foreground from the background. The block recognition software analyzes and recognizes the piece and shows the result using AR technology. To obtain a reasonable recognition ratio, we ran experiments with more than 5,000 frames of actual play scenarios and found that a recognition rate of up to 95% can be secured when the threshold values are selected well over the various condition parameters.
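
The ROI extraction and foreground/background separation described above can be sketched with OpenCV as follows; the playground ROI coordinates and Otsu thresholding are stand-ins for the paper's tuned condition parameters, and the actual block recognition step is not shown.

```python
# ROI-plus-foreground sketch, assuming a captured camera frame on disk.
import cv2

frame = cv2.imread("camera_frame.png")            # hypothetical captured frame
x, y, w, h = 100, 200, 400, 300                   # assumed playground region
roi = frame[y:y + h, x:x + w]

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
# Otsu picks a threshold automatically; the paper tunes thresholds per condition.
_, fg_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
foreground = cv2.bitwise_and(roi, roi, mask=fg_mask)

cv2.imwrite("block_foreground.png", foreground)   # passed on to block recognition
```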

Study On Masked Face Detection And Recognition using transfer learning

  • Kwak, NaeJoung;Kim, DongJu
    • International Journal of Advanced Culture Technology
    • /
    • Vol. 10, No. 1
    • /
    • pp.294-301
    • /
    • 2022
  • COVID-19 is a crisis with numerous casualties. The World Health Organization (WHO) declared the use of masks an essential safety measure during the COVID-19 pandemic. Whether or not a mask is worn has therefore become an important issue when entering and exiting public places and institutions. However, masks make face recognition a very difficult task because certain parts of the face are hidden, so face identification and identity verification in access systems have become difficult. In this paper, we propose a system that detects masked faces using transfer learning of YOLOv5s and recognizes the user using transfer learning of FaceNet. Transfer learning is performed while varying the learning rate, number of epochs, and batch size; the results are evaluated, and the best model is selected as the representative model. We confirmed that the proposed model performs well at masked face detection and masked face recognition.
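
A rough sketch of the detect-then-recognize flow is given below, assuming a YOLOv5 model already fine-tuned on masked faces (hypothetical weights file) and the facenet-pytorch implementation of FaceNet; the paths, preprocessing, and enrolled-user gallery are assumptions rather than the paper's setup.

```python
# Detect masked faces with a fine-tuned YOLOv5 model, then embed each face with
# FaceNet for identity comparison. Weights path and image path are hypothetical.
import numpy as np
import torch
from facenet_pytorch import InceptionResnetV1
from PIL import Image

detector = torch.hub.load("ultralytics/yolov5", "custom", path="masked_face_best.pt")
embedder = InceptionResnetV1(pretrained="vggface2").eval()

def embed(face_img):
    """160x160 RGB face crop -> 512-d FaceNet embedding."""
    x = torch.tensor(np.asarray(face_img.resize((160, 160))), dtype=torch.float32)
    x = (x.permute(2, 0, 1) / 255.0 - 0.5) / 0.5     # normalize roughly to [-1, 1]
    with torch.no_grad():
        return embedder(x.unsqueeze(0))[0]

image = Image.open("entrance_frame.jpg")             # hypothetical access-gate frame
boxes = detector(image).xyxy[0]                      # rows: x1, y1, x2, y2, conf, cls
for x1, y1, x2, y2, conf, cls in boxes.tolist():
    face = image.crop((int(x1), int(y1), int(x2), int(y2)))
    query = embed(face)
    # Compare `query` against enrolled embeddings (not shown) with cosine
    # similarity to verify the user's identity.
```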

서비스 로봇을 위한 지시 물체 분할 방법 (Segmentation of Pointed Objects for Service Robots)

  • 김형오;김수환;김동환;박성기
    • 로봇학회논문지
    • /
    • Vol. 4, No. 2
    • /
    • pp.139-146
    • /
    • 2009
  • This paper describes how a person indicates an unknown object to a robot with a pointing gesture. Using a stereo vision sensor, the proposed method consists of three stages: detection of the operator's face, estimation of the pointing direction, and extraction of the pointed object. The operator's face is recognized using Haar-like features. We then estimate the 3D pointing direction from the shoulder-to-hand line. Finally, we segment the unknown object from the 3D point cloud within the estimated region of interest. Based on this method, we implemented an object registration system on our mobile robot and obtained reliable experimental results.
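
The pointing-direction step can be sketched as casting a ray from the shoulder through the hand and keeping the point-cloud points near that ray as the region of interest; the code below shows this geometry with made-up coordinates and an assumed distance threshold, not the paper's calibration.

```python
# Select point-cloud points close to the shoulder-to-hand pointing ray.
import numpy as np

def points_near_pointing_ray(shoulder, hand, cloud, max_dist=0.05):
    """Return point-cloud points within max_dist (meters) of the shoulder->hand ray."""
    origin = np.asarray(hand, dtype=float)
    direction = np.asarray(hand, dtype=float) - np.asarray(shoulder, dtype=float)
    direction /= np.linalg.norm(direction)
    rel = cloud - origin
    t = rel @ direction                      # projection length along the ray
    ahead = t > 0                            # keep only points in front of the hand
    closest = origin + np.outer(t, direction)
    dist = np.linalg.norm(cloud - closest, axis=1)
    return cloud[ahead & (dist < max_dist)]

if __name__ == "__main__":
    cloud = np.random.rand(1000, 3) * 2.0    # toy point cloud in meters
    roi = points_near_pointing_ray(shoulder=[0.0, 0.0, 1.4],
                                   hand=[0.3, 0.0, 1.2], cloud=cloud)
    print(roi.shape)
```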

지능기법을 이용한 영상분활 및 물체추적에 관한 연구 (A Study on Image Segmentation and Tracking based on Intelligent Method)

  • 이민중;황기현;김종윤;진태석
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2007년도 하계종합학술대회 논문집
    • /
    • pp.311-312
    • /
    • 2007
  • This paper proposes a global search and a local search method to track an object in real time. The global search recognizes the target object among candidate objects by searching the entire image, and the local search recognizes and tracks only the target object through a block search. Object color and feature information are used to achieve fast object recognition. Finally, we conducted an experiment with an object tracking system based on a pan/tilt structure.
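
The global-then-local search idea can be sketched with simple template matching: search the whole frame once, then restrict subsequent searches to a block around the last known position. The code below uses OpenCV template matching as a stand-in recognizer; the template file, window margin, and score threshold are assumptions, and the paper's color/feature-based intelligent matching is not reproduced.

```python
# Global search over the full frame vs. local block search around the last hit.
import cv2

template = cv2.imread("target_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical target
th, tw = template.shape

def global_search(frame_gray):
    """Search the whole frame for the target; return its top-left corner or None."""
    res = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    return loc if score > 0.7 else None

def local_search(frame_gray, last_loc, margin=40):
    """Search only a block around the previous location for real-time tracking."""
    x, y = last_loc
    x0, y0 = max(0, x - margin), max(0, y - margin)
    window = frame_gray[y0:y0 + th + 2 * margin, x0:x0 + tw + 2 * margin]
    res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (wx, wy) = cv2.minMaxLoc(res)
    return (x0 + wx, y0 + wy) if score > 0.7 else None
```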

A Study on Design and Implementation of Speech Recognition System Using ART2 Algorithm

  • Kim, Joeng Hoon;Kim, Dong Han;Jang, Won Il;Lee, Sang Bae
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 4, No. 2
    • /
    • pp.149-154
    • /
    • 2004
  • In this research, we selected speech recognition as the means of controlling an electric wheelchair system by voice alone, and used DTW (Dynamic Time Warping), which is speaker-dependent and has a relatively high recognition rate among speech recognition methods. However, the system must have a small memory footprint and fast processing speed to run in real time. We therefore introduced VQ (Vector Quantization), which is widely used as a compression algorithm in speaker-independent recognition, to secure fast recognition and small memory, but found that the recognition rate decreased after applying VQ. To improve the recognition rate, we applied the ART2 (Adaptive Resonance Theory 2) algorithm as a post-processing step and obtained an improvement of about 5%. To utilize ART2, an error range has to be applied: when the difference between the smallest and second-smallest DTW distances is 20 or more, the error range is applied. With ART2 applied in this way, we obtained fast processing and a high recognition rate. Moreover, since the system is mounted on a moving object, it had to be implemented as an embedded system, so we selected the TMS320C32 chip, which can process a large number of calculations relatively fast. Considering that the stored data is speech, we used 128 KB of RAM and 64 KB of ROM to hold the large amount of data, and for speech input we used a 16-bit stereo audio codec, securing relatively accurate data through its high resolution.
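
A minimal sketch of the DTW distance used for the speaker-dependent matching step is shown below, operating on two sequences of per-frame feature vectors; the feature extraction, VQ codebook, and ART2 post-processing are not included.

```python
# Dynamic Time Warping distance between two speech feature sequences.
import numpy as np

def dtw_distance(a, b):
    """DTW distance between feature sequences a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    utterance = rng.random((40, 12))     # toy 40-frame, 12-dim feature sequence
    template = rng.random((35, 12))
    print(dtw_distance(utterance, template))
```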