• Title/Summary/Keyword: Object recognition system

Search results: 717

Leakage Prevention System of Mobile Data using Object Recognition and Beacon (사물인식과 비콘을 활용한 모바일 내부정보 유출방지 시스템)

  • Chae, Geonhui;Choi, Seongmin;Seol, Jihwan;Lee, Jaeheung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.5 / pp.17-23 / 2018
  • The rapid development of mobile technology has increased the use of mobile devices, and with it the risk of security incidents. Information leakage through photos is the most representative case. Previous prevention methods have the disadvantage of blocking photography entirely, even for legitimate purposes. In this paper, we design and implement a system that prevents information leakage through photos using object recognition and beacons. The system inspects photos with deep-learning-based object recognition and verifies whether security policies are violated. In addition, the location of the mobile device is identified through beacons, and the appropriate rules are applied. A web application for administrators allows photography rules to be set per location. As soon as a user takes a photo, the rules for that location are applied to automatically detect photos that do not conform to the security policy.
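
A minimal sketch of the policy-checking flow described in the abstract above, in Python. The recognizer, beacon lookup, and rule table below are illustrative placeholders, not the authors' implementation:

```python
from typing import Dict, List, Set

# Per-location photography rules, as set by the administrator web application
# (hypothetical location names and banned-object labels).
LOCATION_RULES: Dict[str, Set[str]] = {
    "lab_a": {"monitor", "document", "whiteboard"},   # objects banned in lab A
    "lobby": set(),                                   # no restrictions in the lobby
}

def detect_objects(photo_path: str) -> List[str]:
    """Placeholder for the deep-learning object recognizer."""
    raise NotImplementedError

def location_from_beacon(beacon_id: str) -> str:
    """Placeholder: map the strongest beacon's ID to a named location."""
    return {"b-001": "lab_a", "b-002": "lobby"}.get(beacon_id, "unknown")

def violates_policy(photo_path: str, beacon_id: str) -> bool:
    """Apply the rules of the device's current location to a newly taken photo."""
    banned = LOCATION_RULES.get(location_from_beacon(beacon_id), set())
    detected = set(detect_objects(photo_path))
    return bool(detected & banned)   # any banned object present -> policy violation
```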

The 3-D Underwater Object Recognition Using Neural Networks and Ultrasonic Sensor Fabricated with 1-3 Type Piezoelectric Composites (1-3형 압전복합체로 제작한 초음파센서와 신경회로망을 이용한 3차원 수중 물체인식)

  • 조현철;이기성
    • The Transactions of the Korean Institute of Electrical Engineers C / v.50 no.7 / pp.324-325 / 2001
  • In this study, the characteristics of an ultrasonic sensor fabricated with PZT-polymer 1-3 type composites are investigated, and 3-D underwater object recognition using the self-made ultrasonic sensor and an SOFM neural network is presented. The ultrasonic sensor satisfies the requirements of commercial underwater ultrasonic sensors. The 3-D underwater object recognition rates for the training data and the testing data are both 100%. The experimental results show that an ultrasonic sensor fabricated with PZT-polymer 1-3 type composites can be applied to sonar systems.

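As a rough illustration of the SOFM-based recognition step mentioned above, the sketch below implements a small self-organizing feature map in NumPy; feature extraction from the raw ultrasonic echoes, the grid size, and the vector dimension are assumptions, not the paper's settings.

```python
import numpy as np

class SOFM:
    """Minimal self-organizing feature map; each neuron is later labelled by
    the class of the training echoes it wins, giving a simple classifier."""

    def __init__(self, grid=(6, 6), dim=16, lr=0.5, sigma=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(grid[0] * grid[1], dim))          # neuron weights
        self.coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        """Index of the best-matching unit for one feature vector."""
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, X, epochs=50):
        for t in range(epochs):
            lr = self.lr * (1 - t / epochs)                         # decaying rate
            sigma = self.sigma * (1 - t / epochs) + 1e-3            # shrinking radius
            for x in X:
                b = self.bmu(x)
                d = np.linalg.norm(self.coords - self.coords[b], axis=1)
                h = np.exp(-(d ** 2) / (2 * sigma ** 2))[:, None]   # neighbourhood
                self.w += lr * h * (x - self.w)
```

After training, each neuron can be labelled by majority vote over the training echoes it wins, and a test echo is then classified by the label of its best-matching unit.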

Implementation of Character and Object Metadata Generation System for Media Archive Construction (미디어 아카이브 구축을 위한 등장인물, 사물 메타데이터 생성 시스템 구현)

  • Cho, Sungman;Lee, Seungju;Lee, Jaehyeon;Park, Gooman
    • Journal of Broadcast Engineering / v.24 no.6 / pp.1076-1084 / 2019
  • In this paper, we introduce a system that extracts metadata by recognizing characters and objects in media using deep learning technology. In the broadcasting field, multimedia contents such as video, audio, image, and text have long been converted to digital content, but the volume of unconverted resources remains vast. Building media archives requires a great deal of manual work, which is time consuming and costly; a deep-learning-based metadata generation system can therefore save both time and cost. The whole system consists of four elements: a training data generation module, an object recognition module, a character recognition module, and an API server. The deep learning network module and the face recognition module are implemented to recognize characters and objects from the media and describe them as metadata. The training data generation module was designed separately to facilitate the construction of data for training the neural networks, and the face recognition and object recognition functions were configured as an API server. We trained the two neural networks using data for 1,500 persons and 80 kinds of objects, and confirmed an accuracy of 98% on the character test data and 42% on the object data.
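
Since the recognition functions are exposed as an API server, a sketch of what such an endpoint could look like is shown below, using Flask; the route name, response fields, and the two recognizer stubs are assumptions rather than the authors' actual interface.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def recognize_objects(image_bytes):
    """Placeholder for the object recognition module (e.g. an 80-class detector)."""
    return ["car", "person"]

def recognize_characters(image_bytes):
    """Placeholder for the face-recognition module trained on the cast."""
    return ["actor_0012"]

@app.route("/metadata", methods=["POST"])
def metadata():
    # Receive one media frame and return its descriptive metadata as JSON.
    frame = request.files["frame"].read()
    return jsonify({
        "objects": recognize_objects(frame),
        "characters": recognize_characters(frame),
    })

if __name__ == "__main__":
    app.run(port=8000)
```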

Simultaneous and Coded Driving System of Ultrasonic Sensor Array for Object Recognition in Autonomous Mobile Robots

  • Kim, Ch-S.;Choi, B.J.;Park, S.H.;Lee, Y.J.;Lee, S.R.
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 2003.10a / pp.2519-2523 / 2003
  • Ultrasonic sensors are widely used in mobile robot applications to recognize external environments, because they are cheap, easy to use, and robust under varying lighting conditions. In most cases, a single ultrasonic sensor is used to measure the distance to an object based on time-of-flight (TOF) information, whereas multiple sensors are used to recognize the shape of an object, such as a corner, plane, or edge. However, the conventional sequential driving technique involves a long measurement time. This problem can be resolved by pulse-coding the ultrasonic signals, which allows multiple sensors to be fired simultaneously and adjacent objects to be distinguished. Accordingly, the current study presents a new simultaneous coded driving system for an ultrasonic sensor array for object recognition in autonomous mobile robots. The proposed system is designed and implemented using a DSP and an FPGA: a micro-controller board is built around the DSP, Polaroid 6500 ranging modules are modified to fire the coded signals, and a 5-channel coded-signal generating board is built with the FPGA. To verify the proposed method, experiments were conducted in an environment with overlapping signals, and the flight distances for each sensor were obtained from the received overlapping signals using correlation and conversion to a bipolar PCM-NRZ signal.

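The separation of overlapping echoes by correlation can be sketched as below; the sampling rate, the bipolar codes, and the peak-picking rule are illustrative values, not the parameters of the implemented DSP/FPGA system.

```python
import numpy as np

FS = 400_000           # sampling rate [Hz], assumed for illustration
C_AIR = 343.0          # speed of sound in air [m/s]

def tof_by_correlation(received: np.ndarray, code: np.ndarray) -> float:
    """Estimate one sensor's round-trip time of flight from the overlapping
    received signal by correlating it with that sensor's own bipolar code."""
    corr = np.correlate(received, code, mode="valid")
    lag = int(np.argmax(np.abs(corr)))       # sample index of the correlation peak
    return lag / FS                          # seconds

def distance_from_tof(tof_s: float) -> float:
    return C_AIR * tof_s / 2.0               # round trip -> one-way distance [m]
```

Because each sensor correlates the common received signal with its own code, all sensors can be fired at once and still be resolved individually.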

The Method of Abandoned Object Recognition based on Neural Networks (신경망 기반의 유기된 물체 인식 방법)

  • Ryu, Dong-Gyun;Lee, Jae-Heung
    • Journal of IKEEE / v.22 no.4 / pp.1131-1139 / 2018
  • This paper proposes a method of recognizing abandoned objects using convolutional neural networks. The method first detects a candidate area for an abandoned object in the image and, if such an area is found, applies a convolutional neural network to it to recognize which object is present. Experiments were conducted through an application system that detects illegal trash dumping. The results showed that abandoned-object areas were detected efficiently. The detected areas are fed into the convolutional neural network and classified as trash or not. To do this, we trained the network with our own trash dataset and an open database. As a result of training, we achieved high accuracy on a test set not included in the training set.
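
A minimal PyTorch sketch of the second, classification stage described above: a detected region is cropped, resized, and classified as trash or not. The network depth, input resolution, and class count are assumptions; the detection stage is not shown.

```python
import torch
import torch.nn as nn

class TrashClassifier(nn.Module):
    """Tiny CNN that labels a cropped abandoned-object region as trash / not trash."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),        # two classes: trash / not trash
        )

    def forward(self, x):                      # x: (N, 3, 64, 64) cropped regions
        return self.head(self.features(x))

model = TrashClassifier()
crop = torch.randn(1, 3, 64, 64)               # a detected region, resized to 64x64
is_trash = model(crop).argmax(dim=1)           # 0 or 1 per region
```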

Realization of Object Detection Algorithm and Eight-channel LiDAR sensor for Autonomous Vehicles (자율주행자동차를 위한 8채널 LiDAR 센서 및 객체 검출 알고리즘의 구현)

  • Kim, Ju-Young;Woo, Seong Tak;Yoo, Jong-Ho;Park, Young-Bin;Lee, Joong-Hee;Cho, Hyun-Chang;Choi, Hyun-Yong
    • Journal of Sensor Science and Technology / v.28 no.3 / pp.157-163 / 2019
  • The LiDAR sensor, widely regarded as one of the most important sensors, has recently undergone active commercialization owing to the significant growth in the production of ADAS and autonomous vehicle components. LiDAR works by radiating a laser beam at a particular angle and acquiring a three-dimensional image by measuring the lapsed time of the beam that returns after being reflected. The sensor has been incorporated into various devices such as drones and robots. This study focuses on object detection and recognition employing sensor fusion: detection and recognition can be executed as a single function by incorporating sensors capable of recognition, such as image sensors, optical sensors, and propagation sensors, and the limitations of any single sensor can be overcome by combining multiple sensors. In this paper, the performance of an eight-channel scanning LiDAR was evaluated and an object detection algorithm based on it was implemented. Furthermore, object detection characteristics during daytime and nighttime in a real road environment were verified. The experimental results confirm that a detection performance of 92.87% can be achieved.
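
The ranging principle summarized above (the lapsed time of the returned beam, combined with the beam angles, gives a 3-D point) can be sketched as follows; the angles and timing value are illustrative, not the specification of the eight-channel sensor.

```python
import math

C_LIGHT = 299_792_458.0                     # speed of light [m/s]

def range_from_tof(tof_s: float) -> float:
    return C_LIGHT * tof_s / 2.0            # round trip -> one-way range [m]

def to_point(tof_s: float, azimuth_deg: float, elevation_deg: float):
    """Convert one return (time of flight + beam angles) into an (x, y, z) point."""
    r = range_from_tof(tof_s)
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# e.g. a return after 200 ns at azimuth 10 degrees on a channel tilted -2 degrees:
x, y, z = to_point(200e-9, 10.0, -2.0)      # a point roughly 30 m away
```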

People Counting System by Facial Age Group (얼굴 나이 그룹별 피플 카운팅 시스템)

  • Ko, Ginam;Lee, YongSub;Moon, Nammee
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.69-75 / 2014
  • Existing people counting systems that use a single overhead-mounted camera have limitations in object recognition and counting in various environments. Those limitations are attributable to overlapping, occlusion, and external factors such as over-sized belongings and dramatic lighting changes. This paper therefore proposes a new people counting system by facial age group, using two depth cameras at overhead and frontal viewpoints, in order to improve object recognition accuracy and make people counting robust to external factors. The proposed system counts pedestrians through five processes: overhead image processing, frontal image processing, identical-object recognition, facial age group classification, and incoming/outgoing counting. The system was developed with C++, OpenCV, and the Kinect SDK, and a target group of 40 people (10 per age group) was set up to evaluate people counting and facial age group classification performance. The experimental results indicated approximately 98% accuracy in people counting and 74.23% accuracy in facial age group classification.
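
A toy sketch of the final counting step described above, assuming the earlier stages already provide, for each matched pedestrian, a crossing direction and an estimated age group (group names and the interface are hypothetical):

```python
from collections import defaultdict

class AgeGroupCounter:
    """Keeps separate in/out tallies per facial age group."""

    def __init__(self):
        self.counts = defaultdict(lambda: {"in": 0, "out": 0})

    def update(self, age_group: str, direction: str) -> None:
        """direction is 'in' or 'out', decided from the overhead trajectory."""
        if direction not in ("in", "out"):
            raise ValueError("direction must be 'in' or 'out'")
        self.counts[age_group][direction] += 1

    def report(self) -> dict:
        return {group: dict(tally) for group, tally in self.counts.items()}

counter = AgeGroupCounter()
counter.update("adult", "in")    # a matched pedestrian entering, classified as adult
counter.update("teen", "out")
print(counter.report())
```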

Image Based Human Action Recognition System to Support the Blind (시각장애인 보조를 위한 영상기반 휴먼 행동 인식 시스템)

  • Ko, ByoungChul;Hwang, Mincheol;Nam, Jae-Yeal
    • Journal of KIISE / v.42 no.1 / pp.138-143 / 2015
  • In this paper we develop a novel human action recognition system, based on communication between an ear-mounted Bluetooth camera and an action recognition server, to aid scene recognition for the blind. First, when a blind user captures an image of a specific location with the ear-mounted camera, the captured image is transmitted to the recognition server through a smartphone synchronized with the camera. The recognition server sequentially performs human detection, object detection, and action recognition by analyzing human poses. The recognized action information is sent back to the smartphone, and the user can hear it through text-to-speech (TTS). Experimental results with the proposed system showed 60.7% action recognition performance on test data captured in indoor and outdoor environments.
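
The smartphone-side flow described above might look roughly like the following; the server URL, response field, and speak() routine are placeholders rather than the authors' actual interface.

```python
import requests

SERVER_URL = "http://recognition-server.example/recognize"   # hypothetical endpoint

def speak(text: str) -> None:
    """Placeholder for the phone's text-to-speech engine."""
    print(f"[TTS] {text}")

def recognize_and_speak(image_path: str) -> None:
    # Send the captured frame to the action recognition server, then voice the result.
    with open(image_path, "rb") as f:
        resp = requests.post(SERVER_URL, files={"image": f}, timeout=10)
    resp.raise_for_status()
    action = resp.json().get("action", "unknown action")     # e.g. "person sitting"
    speak(action)
```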

Implementation of Object Tracking System with Multi Camera by Using Background Generation Technique (배경 생성 기법을 이용한 다중 카메라 객체 추적 시스템 구현)

  • Jo, Hyun-Tae;Jang, Jae-Nee;Kang, Nam-Oh;Paik, Joon-Ki
    • Proceedings of the IEEK Conference / 2008.06a / pp.947-948 / 2008
  • Recently, many efforts have been made in the research and application of object tracking systems. However, existing object tracking algorithms have limitations when applied to a real-time, multi-camera object tracking system. In this paper, we present and implement a novel background generation and target-object recognition algorithm for a real-time multi-camera object tracking system.

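A generic sketch of a background-generation step of the kind the paper describes, using a per-camera running-average model in NumPy; the adaptation rate and threshold are illustrative, and this is not the paper's specific algorithm.

```python
import numpy as np

class RunningBackground:
    """Per-camera background model: running average of frames plus a
    difference threshold that segments moving (foreground) pixels."""

    def __init__(self, alpha: float = 0.02, threshold: float = 25.0):
        self.alpha, self.threshold = alpha, threshold
        self.background = None

    def update(self, gray_frame: np.ndarray) -> np.ndarray:
        """Update the background model and return a binary foreground mask."""
        frame = gray_frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold
        self.background += self.alpha * (frame - self.background)   # slow adaptation
        return mask.astype(np.uint8) * 255
```

One such model would be kept per camera, and the resulting masks feed the target-object recognition and tracking stages.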

Real-Time Place Recognition for Augmented Mobile Information Systems (이동형 정보 증강 시스템을 위한 실시간 장소 인식)

  • Oh, Su-Jin;Nam, Yang-Hee
    • Journal of KIISE: Computing Practices and Letters / v.14 no.5 / pp.477-481 / 2008
  • Place recognition is necessary for a mobile user to be provided with place-dependent information. This paper proposes a real-time, video-based place recognition system that identifies the user's current place while moving through a building. For scene feature extraction, existing methods based on global feature analysis have the drawback of being sensitive to partial occlusion and noise, while local-feature-based methods usually attempt object recognition, which is hard to apply in a real-time system because of its high computational cost. In addition, statistical methods such as HMMs (hidden Markov models) and Bayesian networks have been used to derive the place recognition result from the feature data; the former is not practical because gathering training data requires huge effort, while the latter usually depends on object recognition only. This paper proposes a combined approach of global and local feature analysis to complement the drawbacks of both. The proposed method is applied to a mobile information system and shows real-time performance with competitive recognition results.
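
To illustrate the combined global/local feature idea, the sketch below scores a query frame against a stored place image with a weighted sum of a global colour-histogram similarity and an ORB keypoint match ratio; the weights and scoring rule are assumptions, not the paper's exact method.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=300)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def global_feature(img):
    """Coarse whole-image signature: a normalized 8x8x8 colour histogram."""
    hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

def local_feature(img):
    """Occlusion-tolerant signature: ORB descriptors of local keypoints."""
    _, descriptors = orb.detectAndCompute(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
    return descriptors

def place_score(query_img, place_img, w_global=0.5, w_local=0.5):
    g = float(np.dot(global_feature(query_img), global_feature(place_img)))
    dq, dp = local_feature(query_img), local_feature(place_img)
    if dq is None or dp is None:
        return w_global * g                   # fall back to the global cue only
    matches = matcher.match(dq, dp)
    l = len(matches) / max(len(dq), 1)        # fraction of matched keypoints
    return w_global * g + w_local * l         # highest-scoring stored place wins
```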