• Title/Summary/Keyword: Object recognition system

Object Recognition of Ultrasonic Transducer fabricated with Porous Piezoelectric Ceramics (다공질 압전 소자로 제작한 초음파 트랜스듀서의 물체복원)

  • Cho, Hyun-Chul; Lee, Su-Ho; Park, Jung-Hak; Choi, Heon-Il; SaGong, Geon
    • Proceedings of the KIEE Conference / 1996.07c / pp.1495-1497 / 1996
  • In this study, object restoration with an ultrasonic transducer fabricated from porous piezoelectric ceramics is presented, using a modified SCL (Simple Competitive Learning) neural network. From acquired object data of 16×16 pixels, the modified SCL network restores a 32×32 high-resolution image from the 16×16 low-resolution image. The experimental results show that the ultrasonic transducer fabricated with porous piezoelectric ceramics could be applied to a sonar system.

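The entry above applies a competitive-learning network to ultrasonic image data. As a rough illustration of the underlying idea, here is a minimal sketch of simple competitive learning (winner-take-all prototype updates) in NumPy; the 16×16 input size, unit count, and learning rate are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of Simple Competitive Learning (SCL), assuming 16x16 inputs
# flattened to 256-dimensional vectors; all hyperparameters are illustrative.
import numpy as np

def train_scl(samples, num_units=32, lr=0.1, epochs=20, seed=0):
    """Winner-take-all training: each input pulls its nearest prototype toward it."""
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    prototypes = rng.random((num_units, dim))
    for _ in range(epochs):
        for x in samples:
            # Find the winning (closest) prototype.
            winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
            # Move only the winner toward the input (competitive update).
            prototypes[winner] += lr * (x - prototypes[winner])
    return prototypes

# Example: cluster synthetic 16x16 "object" images into prototype patterns.
images = np.random.default_rng(1).random((100, 16 * 16))
prototypes = train_scl(images)
print(prototypes.shape)  # (32, 256)
```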

Real-time Identification of Traffic Light and Road Sign for the Next Generation Video-Based Navigation System (차세대 실감 내비게이션을 위한 실시간 신호등 및 표지판 객체 인식)

  • Kim, Yong-Kwon; Lee, Ki-Sung; Cho, Seong-Ik; Park, Jeong-Ho; Choi, Kyoung-Ho
    • Journal of Korea Spatial Information System Society / v.10 no.2 / pp.13-24 / 2008
  • A next-generation video-based car navigation system is being researched to supplement the drawbacks of existing 2D-based navigation and to provide various services for safe driving. The components of this navigation system include a road object database, a road lane identification module, and a crossroad identification module. In this paper, we propose a traffic light and road sign recognition method that can be effectively exploited for crossroad recognition in video-based car navigation systems. The method uses object color information and other spatial features in the video image. The results show an average recognition rate of 90% at 30-60 m for traffic lights and 97% at 40-90 m for road signs. The algorithm also achieves a processing time of 46 ms/frame, which indicates its suitability for real-time processing.

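The entry above relies on object color information plus spatial features in the video frame. A minimal sketch of the color-thresholding step is shown below using OpenCV; the HSV ranges and minimum blob area are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: isolate candidate traffic-light regions by HSV color thresholding.
# The HSV ranges and area threshold are illustrative, not from the paper.
import cv2

def find_red_light_candidates(frame_bgr, min_area=50):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges.
    mask1 = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    mask2 = cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    mask = cv2.bitwise_or(mask1, mask2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to be a light; spatial checks would follow here.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

frame = cv2.imread("frame.jpg")  # hypothetical input frame
if frame is not None:
    print(find_red_light_candidates(frame))
```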

A Study on the Automated Payment System for Artificial Intelligence-Based Product Recognition in the Age of Contactless Services

  • Kim, Heeyoung; Hong, Hotak; Ryu, Gihwan; Kim, Dongmin
    • International Journal of Advanced Culture Technology / v.9 no.2 / pp.100-105 / 2021
  • Contactless service is rapidly emerging as a new growth strategy as consumers avoid face-to-face situations during the global coronavirus disease 2019 (COVID-19) pandemic, and various technologies are being developed to support the fast-growing contactless service market. The restaurant industry in particular is one of the fields most in need of contactless-service technology; the representative case is the kiosk, which reduces labor costs for restaurant owners and provides psychological comfort and satisfaction to customers. In this paper, we propose a solution for restaurant store operation through an unmanned kiosk that uses state-of-the-art artificial intelligence (AI) image recognition. The proposed system is especially useful for products without barcodes, such as bakery items and fresh foods (fruits, vegetables, etc.), and for self-service restaurants on highways, where such products increase labor costs and handling effort. The system recognizes barcode-less products with an image-based AI detection algorithm and makes the payment automatically. To test feasibility, we built an AI vision system with a commercial camera and conducted an image recognition test by training object detection models on donut images. The system also includes a self-learning loop that uses mismatched information collected during operation, allowing recognition performance to be upgraded continuously. We proposed a fully automated payment system with AI vision technology and demonstrated its feasibility through the performance test. The system realizes contactless self-checkout in the restaurant business and reduces the cost of managing human resources.
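
The entry above describes detecting barcode-less products with an image-recognition model and totaling the bill automatically. A minimal sketch of that flow is given below; the `detect_products` function, label names, and price table are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of the checkout flow: detect products in a tray image, then
# map each recognized label to a price and total the bill.
# `detect_products` stands in for any trained object-detection model.
from typing import List, Tuple

PRICE_TABLE = {"donut": 1500, "croissant": 2200, "bagel": 1800}  # hypothetical prices (KRW)

def detect_products(image_path: str) -> List[Tuple[str, float]]:
    """Placeholder for model inference; returns (label, confidence) pairs."""
    return [("donut", 0.97), ("donut", 0.93), ("croissant", 0.88)]

def total_bill(image_path: str, min_confidence: float = 0.5) -> int:
    detections = [d for d in detect_products(image_path) if d[1] >= min_confidence]
    # Low-confidence or unknown items would be flagged for the self-learning loop.
    return sum(PRICE_TABLE.get(label, 0) for label, _ in detections)

print(total_bill("tray.jpg"))  # 1500 + 1500 + 2200 = 5200
```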

Online Face Avatar Motion Control based on Face Tracking

  • Wei, Li; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.12 no.6 / pp.804-814 / 2009
  • In this paper, a novel system for controlling avatar motion by face tracking is presented. The system is composed of three main parts: first, a face feature detection algorithm based on the LCS (Local Cluster Searching) method; second, an HMM-based feature point recognition algorithm; and finally, an avatar control and animation generation algorithm. In the LCS method, the face region is divided into many small regions in the horizontal and vertical directions, and the method then judges whether each cross point is an object point, an edge point, or a background point. The HMM method distinguishes the mouth, eyes, nose, etc. among these feature points. Based on the detected facial feature points, the 3D avatar is controlled in two ways: orientation and animation. The avatar orientation control information is acquired by analyzing the facial geometric information, and the avatar animation is generated smoothly from the face feature points. Finally, to evaluate the performance of the developed system, we implemented it on Windows XP; the results show that the system achieves excellent performance.

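The entry above derives avatar orientation from the geometry of the detected facial feature points. A minimal sketch of one such geometric cue, estimating head roll from the two eye positions, is given below; the landmark format and the roll-from-eye-line heuristic are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch: estimate head roll (in-plane rotation) from the line joining
# the two eye centers; the landmark coordinates here are illustrative.
import math

def head_roll_degrees(left_eye, right_eye):
    """Angle of the eye line relative to horizontal; 0 means an upright head."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Example landmarks (x, y) in image coordinates.
print(head_roll_degrees((120, 200), (180, 212)))  # ~11.3 degrees of roll
```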

Image Objects Detection Method for the Embedded System (임베디드 시스템을 위한 영상객체의 검출방법)

  • Kim, Yun-Il; Rho, Seung-Ryong
    • Journal of Institute of Control, Robotics and Systems / v.15 no.4 / pp.420-425 / 2009
  • In this paper, image detection and recognition algorithms are studied for an embedded carrier system. Many techniques have been suggested to detect and recognize objects, but they tend to require heavy computation to achieve a high hit rate. Advanced and modified methods need to be studied for embedded systems, where low power consumption and real-time response are required. The proposed methods were implemented using the Intel(R) Open Source Computer Vision Library provided by Intel Corporation, then cross-compiled, run, and tested on an embedded system using an ARM920T processor. They showed a response time of 1.6 s and a hit rate of 95%, and supported the automated moving carrier system smoothly.
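
The entry above targets low-computation detection with OpenCV on an ARM board. A minimal sketch of a lightweight detection path (grayscale, threshold, contours, bounding boxes) is shown below; the threshold value and area filter are illustrative, not the paper's settings.

```python
# Hedged sketch of a low-cost detection pipeline suited to embedded targets:
# grayscale -> fixed threshold -> contours -> bounding boxes.
import cv2

def detect_objects(gray, thresh=128, min_area=100):
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

image = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical test image
if image is not None:
    for x, y, w, h in detect_objects(image):
        print("object at", x, y, "size", w, "x", h)
```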

Development of a Simple Computer Vision System (컴퓨터 시각 장치의 개발)

  • 박동철; 석민수
    • Journal of the Korean Institute of Telematics and Electronics / v.20 no.1 / pp.1-6 / 1983
  • To give a sensor-based robot system the capability of recognizing task objects by computer vision, an image digitizer and some basic software techniques were developed and are reported here. The image digitizer was developed with a CROMEMCO SYSTEM III microcomputer and a C.C.T.V. camera to convert the analog-valued scene into a digitized image that could be processed by a digital computer. The basic software techniques for the computer vision system were aimed at the recognition of 3-dimensional objects. Experiments with these techniques were carried out using the image of a cube, which can be considered a typical simple 3-dimensional object.

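The entry above developed basic software techniques for recognizing a simple 3D object from a digitized image. One such basic technique, extracting the edges that outline the object, is sketched below with a small Sobel filter; this illustrates the kind of processing involved and is not the original 1983 code.

```python
# Hedged sketch: Sobel edge extraction, a basic step toward outlining a simple
# 3D object (such as a cube) in a digitized grayscale image.
import numpy as np

def sobel_edges(image):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    edges = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            edges[i, j] = np.hypot(gx, gy)  # gradient magnitude
    return edges

# Example on a synthetic 8x8 image with a bright square (a crude cube face).
img = np.zeros((8, 8))
img[2:6, 2:6] = 255.0
print(sobel_edges(img).round(0))
```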

The Cooperate Middleware System based on Web-Service for Logistics Information Process with Applies RFID (RFID를 활용하여 물류정보 처리를 위한 웹 서비스 기반의 연동 미들웨어 시스템)

  • Kim, Yei-Chang; Park, Myung-Soo
    • Journal of Digital Convergence / v.5 no.2 / pp.1-13 / 2007
  • Recently, RFID has emerged as the main technology in logistics services. As the existing recognition technology based on bar codes runs into many problems due to its own limits, RFID has become the center of attention as a way to solve them. However, RFID is not without obstacles: companies have their own operating systems, while RFID is developed regardless of each company's special features. An RFID middleware system based on web services is expected to remove these obstacles. This paper shows how to operate web-service-based middleware and how to store in the database the tag information taken from the reader system. The middleware ensures that companies adopting the RFID system for their logistics service gain adaptability to any system whatsoever, made available by defining logistics information, tag information, and reader information. For this purpose, we implement, as the basic web service, a middleware system that turns all data into XML (eXtensible Markup Language) carried in SOAP (Simple Object Access Protocol), the standard data format.

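The entry above converts RFID tag and reader data into XML carried over SOAP. A minimal sketch of building such a SOAP envelope with the Python standard library is shown below; the element names and tag fields are hypothetical, not the paper's schema.

```python
# Hedged sketch: wrap RFID tag data in a SOAP 1.1 envelope as XML.
# Element names and tag fields are hypothetical placeholders.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_tag_report(reader_id: str, tag_ids: list) -> bytes:
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    report = ET.SubElement(body, "TagReport", {"readerId": reader_id})
    for tag_id in tag_ids:
        ET.SubElement(report, "Tag", {"id": tag_id})
    return ET.tostring(envelope, encoding="utf-8", xml_declaration=True)

print(build_tag_report("reader-01", ["E2000017221101441890"]).decode())
```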

Human Action Recognition Based on 3D Human Modeling and Cyclic HMMs

  • Ke, Shian-Ru; Thuc, Hoang Le Uyen; Hwang, Jenq-Neng; Yoo, Jang-Hee; Choi, Kyoung-Ho
    • ETRI Journal / v.36 no.4 / pp.662-672 / 2014
  • Human action recognition is used in areas such as surveillance, entertainment, and healthcare. This paper proposes a system to recognize both single and continuous human actions from monocular video sequences, based on 3D human modeling and cyclic hidden Markov models (CHMMs). First, for each frame in a monocular video sequence, the 3D coordinates of joints belonging to a human object, through actions of multiple cycles, are extracted using 3D human modeling techniques. The 3D coordinates are then converted into a set of geometrical relational features (GRFs) for dimensionality reduction and discrimination increase. For further dimensionality reduction, k-means clustering is applied to the GRFs to generate clustered feature vectors. These vectors are used to train CHMMs separately for different types of actions, based on the Baum-Welch re-estimation algorithm. For recognition of continuous actions that are concatenated from several distinct types of actions, a designed graphical model is used to systematically concatenate different separately trained CHMMs. The experimental results show the effective performance of our proposed system in both single and continuous action recognition problems.
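
The entry above reduces the geometrical relational features with k-means clustering before CHMM training. A minimal sketch of that quantization step is shown below using scikit-learn; the feature dimensionality and cluster count are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: quantize per-frame feature vectors (e.g., GRFs) into discrete
# symbols with k-means, producing the observation sequence an HMM would consume.
# Dimensions and cluster count are illustrative, not from the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
grf_sequence = rng.random((300, 20))      # 300 frames x 20-dim features (synthetic)

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(grf_sequence)
symbols = kmeans.predict(grf_sequence)    # one cluster index per frame

print(symbols[:20])  # discrete observation sequence for cyclic-HMM training
```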

Implementation and Verification of Deep Learning-based Automatic Object Tracking and Handy Motion Control Drone System (심층학습 기반의 자동 객체 추적 및 핸디 모션 제어 드론 시스템 구현 및 검증)

  • Kim, Youngsoo; Lee, Junbeom; Lee, Chanyoung; Jeon, Hyeri; Kim, Seungpil
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.163-169 / 2021
  • In this paper, we implemented a deep learning-based automatic object tracking and handy motion control drone system and analyzed its performance. The drone system automatically detects and tracks targets by analyzing images obtained from the drone's camera with deep learning algorithms consisting of YOLO, MobileNet, and DeepSORT. These deep learning-based detection and tracking algorithms achieve both higher target detection accuracy and higher processing speed than the conventional color-based CAMShift algorithm. In addition, to facilitate controlling the drone by hand from the ground control station, we classified handy motions and generated flight control commands through motion recognition using the YOLO algorithm. It was confirmed that this deep learning-based target tracking and handy motion control system stably tracks the target and allows the drone to be controlled easily.
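
The entry above maps recognized handy-motion classes to flight commands and keeps the tracked target in view. A minimal sketch of those two control pieces is shown below; the motion labels, command names, and proportional gain are hypothetical, not the authors' values.

```python
# Hedged sketch: (1) map a recognized handy-motion class to a flight command,
# (2) turn a tracked bounding box into a yaw correction that keeps the target
# centered. Labels, commands, and the gain are hypothetical placeholders.

MOTION_TO_COMMAND = {
    "arms_up": "takeoff",
    "arms_cross": "land",
    "left_arm_out": "move_left",
    "right_arm_out": "move_right",
}

def command_for_motion(motion_label: str) -> str:
    return MOTION_TO_COMMAND.get(motion_label, "hover")

def yaw_correction(bbox, frame_width, gain=0.005):
    """Proportional yaw rate from the horizontal offset of the box center."""
    x, y, w, h = bbox
    offset = (x + w / 2) - frame_width / 2
    return gain * offset

print(command_for_motion("arms_up"))              # takeoff
print(yaw_correction((700, 200, 80, 160), 1280))  # 0.5 -> yaw right
```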

Development of a Vision Based Fall Detection System For Healthcare (헬스케어를 위한 영상기반 기절동작 인식시스템 개발)

  • So, In-Mi; Kang, Sun-Kyung; Kim, Young-Un; Lee, Chi-Geun; Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.11 no.6 s.44 / pp.279-287 / 2006
  • This paper proposes a method to detect fall actions by using stereo images in order to recognize emergency situations. It uses 3D information to extract the visual information for learning and testing, and it uses an HMM (Hidden Markov Model) as the recognition algorithm. The proposed system extracts background images from the two camera images. It extracts a moving object from the input video sequence by using the difference between the input image and the background image. After that, it finds the bounding rectangle of the moving object and extracts 3D information by using the calibration data of the two cameras. We measured the recognition rate of fall actions using the variation of the rectangle width and height and the variation of the 3D location of the rectangle's center point. Experimental results show that the variation of the 3D location of the center point achieves a higher recognition rate than the variation of the width and height.

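The entry above segments the moving person by background differencing and then tracks the bounding rectangle and its center point. A minimal sketch of the 2D part of that pipeline with OpenCV is shown below; a sudden drop of the box center is used here as a crude fall cue, while the paper's stereo 3D features and HMM classifier are not reproduced.

```python
# Hedged sketch: background subtraction -> largest moving blob -> bounding box.
# A rapid drop of the box center between frames is used as a crude fall cue;
# the paper's stereo 3D features and HMM classifier are not reproduced here.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()
previous_center_y = None

def process_frame(frame_bgr, drop_threshold=40):
    global previous_center_y
    mask = subtractor.apply(frame_bgr)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    center_y = y + h / 2
    fell = previous_center_y is not None and (center_y - previous_center_y) > drop_threshold
    previous_center_y = center_y
    return fell

cap = cv2.VideoCapture("hallway.avi")  # hypothetical test video
ok, frame = cap.read()
while ok:
    if process_frame(frame):
        print("possible fall detected")
    ok, frame = cap.read()
```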