• Title/Summary/Keyword: Marker Recognition rate


The Study of Noise Reduction For Marking the Tag Clearly In Implementation of Augmented Reality (증강현실 구현에서 태그를 명확하게 하기 위한 잡음 제거에 관한 연구)

  • Lee, Gyeong-Ho; Kim, Young-Seop
    • Journal of the Semiconductor & Display Technology / v.9 no.4 / pp.63-66 / 2010
  • Detecting marker coordinates is important in tag-based augmented reality systems: if a marker is not detected, objects cannot be augmented. In this paper, we propose a noise reduction method for augmented reality. The input image is transformed from RGB into the HSI color space, and the blue color information is used to produce a binary image, to which erosion and dilation operators are then applied. Experimental results show that the proposed method produces a tag image that remains recognizable under various lighting conditions, and that the tag can be detected through labeling by using the area of its rectangle. The tag recognition rate is improved by removing the noise.
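
A minimal sketch of the kind of preprocessing this abstract describes, assuming OpenCV and NumPy; HSV is used here as a stand-in for the HSI transform, and the threshold, kernel size, and minimum area are illustrative values rather than those of the paper.

```python
import cv2
import numpy as np

def binarize_tag(image_bgr, threshold=128):
    """Binarize a tag image and clean it with erosion followed by dilation."""
    # Stand-in for the RGB-to-HSI transform described in the abstract:
    # OpenCV provides HSV, which serves the same illustrative purpose here.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Threshold one channel to obtain a binary image (the paper uses the
    # blue color information of its transform).
    _, binary = cv2.threshold(hsv[:, :, 2], threshold, 255, cv2.THRESH_BINARY)
    # Erosion then dilation (morphological opening) removes small noise blobs.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.dilate(cv2.erode(binary, kernel), kernel)

def find_tag_candidates(binary, min_area=500):
    """Label connected components and keep only candidates large enough to be a tag."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    # stats[i, cv2.CC_STAT_AREA] is the pixel area of component i (0 is background).
    return [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
```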

Human Activity Recognition Using Body Joint-Angle Features and Hidden Markov Model

  • Uddin, Md. Zia; Thang, Nguyen Duc; Kim, Jeong-Tai; Kim, Tae-Seong
    • ETRI Journal / v.33 no.4 / pp.569-579 / 2011
  • This paper presents a novel approach for human activity recognition (HAR) using the joint angles from a 3D model of a human body. Unlike conventional approaches in which the joint angles are computed from inverse kinematic analysis of the optical marker positions captured with multiple cameras, our approach utilizes the body joint angles estimated directly from time-series activity images acquired with a single stereo camera by co-registering a 3D body model to the stereo information. The estimated joint-angle features are then mapped into codewords to generate discrete symbols for a hidden Markov model (HMM) of each activity. With these symbols, each activity is trained through the HMM, and later, all the trained HMMs are used for activity recognition. The performance of our joint-angle-based HAR has been compared to that of a conventional binary and depth silhouette-based HAR, producing significantly better results in the recognition rate, especially for the activities that are not discernible with the conventional approaches.
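
The recognition step described above, classifying a symbol sequence by the activity HMM with the highest likelihood, can be sketched as follows. The scaled forward algorithm and the toy `models` layout are an illustration of the general technique, not the authors' implementation.

```python
import numpy as np

def hmm_log_likelihood(obs, start_prob, trans, emit):
    """Forward algorithm: log P(observation symbol sequence | discrete HMM)."""
    # alpha[i] tracks the (scaled) probability of being in state i after each symbol.
    alpha = start_prob * emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]   # propagate, then weight by emission
        log_lik += np.log(alpha.sum())         # accumulate the scaling factors
        alpha /= alpha.sum()
    return log_lik

def classify_activity(obs, models):
    """Pick the activity whose trained HMM gives the highest likelihood.
    models: {activity_name: (start_prob, trans, emit)} with NumPy arrays."""
    return max(models, key=lambda name: hmm_log_likelihood(obs, *models[name]))
```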

Implementation of Real-time Recognition System for Korean Sign Language (한글 수화의 실시간 인식 시스템의 구현)

  • Han, Young-Hwan
    • The Journal of the Korea Contents Association / v.5 no.4 / pp.85-93 / 2005
  • In this paper, we propose a recognition system that tracks the unmarked hand of a person performing sign language against a complex background. First, we measure the entropy of the difference image between consecutive frames. Within candidate regions of high entropy, we use skin-color information to extract the hand region from the background. From the extracted hand region, we detect a contour and recognize the sign by applying an improved centroidal profile method. In experiments on six kinds of sign language movements, unlike existing methods, the system recognizes sign language stably under complex backgrounds and illumination changes without a marker. It achieves a recognition rate of more than 95% per person and 90-100% per movement at 15 frames/second.
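
A rough sketch of the hand-region extraction step described above, assuming OpenCV; the entropy block size and the YCrCb skin-color bounds are illustrative values, not those of the paper.

```python
import cv2
import numpy as np

def entropy_map(prev_gray, curr_gray, block=16):
    """Block-wise entropy of the difference image between two consecutive frames."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    rows, cols = diff.shape[0] // block, diff.shape[1] // block
    out = np.zeros((rows, cols))
    for by in range(rows):
        for bx in range(cols):
            patch = diff[by*block:(by+1)*block, bx*block:(bx+1)*block]
            hist = np.bincount(patch.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]
            out[by, bx] = -(p * np.log2(p)).sum()   # high entropy = candidate region
    return out

def skin_mask(frame_bgr):
    """Skin-color mask in YCrCb space (bounds are illustrative, not the paper's)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
```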


HMM-Based Human Gait Recognition (HMM을 이용한 보행자 인식)

  • Sin, Bong-Kee; Suk, Heung-Il
    • Journal of KIISE: Software and Applications / v.33 no.5 / pp.499-507 / 2006
  • Recently, human gait has been considered a useful biometric for high-performance human identification systems. This paper proposes a view-based pedestrian identification method using the dynamic silhouettes of a human body modeled with hidden Markov models (HMMs). Two types of gait models have been developed, both with an endless-cycle architecture: a discrete HMM using a self-organizing-map-based VQ codebook, and a continuous HMM using feature vectors projected into a PCA space. Experimental results showed a consistent performance trend over a range of model parameters and a recognition rate of up to 88.1%. Compared with other methods, the proposed models and techniques are believed to have sufficient potential for successful application to gait recognition.
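
One way to read the "endless cycle" architecture is a left-to-right HMM whose last state loops back to the first, so that a gait cycle can repeat indefinitely. The sketch below builds such a transition matrix; the self-loop probability is an illustrative value, not a parameter from the paper.

```python
import numpy as np

def cyclic_transition_matrix(n_states, stay_prob=0.7):
    """Left-to-right HMM transitions whose last state wraps to the first,
    modeling a gait cycle that repeats endlessly."""
    trans = np.zeros((n_states, n_states))
    for i in range(n_states):
        trans[i, i] = stay_prob                       # remain in the current gait phase
        trans[i, (i + 1) % n_states] = 1 - stay_prob  # advance; the last state wraps around
    return trans

print(cyclic_transition_matrix(4))
```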

Implementation and Verification of Artificial Intelligence Drone Delivery System (인공지능 드론 배송 시스템의 구현 및 검증)

  • Lee, Sungnam
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.33-38 / 2024
  • In this paper, we propose the implementation of an artificial intelligence drone delivery system, motivated by the rapidly increasing use of drones and the human errors that accompany it. The system requires an accurate control algorithm, assuming that last-mile delivery is made to an apartment veranda. To recognize the delivery location, a recognition system based on the YOLO algorithm was implemented, and a delivery mechanism was installed on the drone to measure the distance to the target and extend the delivery reach, ensuring stable delivery even at long distances. Experiments confirmed that the recognition system recognized the marker with a match rate of more than 60% at distances under 10 m while the drone hovered stably. In addition, a drone carrying a 500 g package withstood the torque applied as the rail extended to 1.5 m and then stably placed the package on the veranda at the end of the rail.
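
A minimal sketch of marker detection with a confidence cutoff, assuming the Ultralytics YOLO package; `marker.pt` is a hypothetical custom-trained weights file, and the 0.6 threshold simply mirrors the 60% match rate reported above, not the system's actual configuration.

```python
from ultralytics import YOLO  # assumption: Ultralytics YOLO as the detector

model = YOLO("marker.pt")  # hypothetical weights trained on the delivery marker

def detect_marker(frame, min_conf=0.6):
    """Return (x1, y1, x2, y2, confidence) for marker detections above min_conf."""
    results = model(frame, verbose=False)
    hits = []
    for box in results[0].boxes:
        conf = float(box.conf[0])
        if conf >= min_conf:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            hits.append((x1, y1, x2, y2, conf))
    return hits
```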

Gesture Recognition System using Motion Information (움직임 정보를 이용한 제스처 인식 시스템)

  • Han, Young-Hwan
    • The KIPS Transactions: Part B / v.10B no.4 / pp.473-478 / 2003
  • In this paper, we propose a gesture recognition system that uses motion information from the hand region extracted from images with complex backgrounds. First, we measure the entropy of the difference image between consecutive frames. Within candidate regions of high entropy, we use skin-color information to extract the hand region from the background. From the extracted hand region, we detect a contour using the chain code and recognize the hand gesture by applying an improved centroidal profile method. In experiments on six kinds of hand gestures, unlike existing methods, the system recognizes gestures stably under complex backgrounds and illumination changes without a marker. It achieves a recognition rate of more than 95% per person and 90-100% per gesture at 15 frames/second.
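
A minimal sketch of a centroidal profile, the distance from the shape centroid to the contour as a function of angle, assuming OpenCV; the bin count and max-based normalization are illustrative choices rather than the paper's "improved" variant.

```python
import cv2
import numpy as np

def centroidal_profile(binary_hand, n_bins=36):
    """Distance from the hand centroid to its contour, binned by angle."""
    contours, _ = cv2.findContours(binary_hand, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # contour centroid
    pts = contour.reshape(-1, 2)
    dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
    angles = np.arctan2(dy, dx)        # angle of each contour point around the centroid
    dists = np.hypot(dx, dy)           # distance of each contour point from the centroid
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    profile = np.zeros(n_bins)
    for b in range(n_bins):
        if np.any(bins == b):
            profile[b] = dists[bins == b].max()
    return profile / profile.max()     # normalize for scale invariance
```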

Implementation of URL Connecting Application Service Platform Based on Recognition of AR Marker Using LED Panel and Smartphone (LED 전광판과 스마트폰을 이용한 AR 마커인식 기반의 URL 연결 서비스 플랫폼 구현)

  • Park, Kunwon; Hwang, Junho; Yoo, Myungsik
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.8 / pp.692-698 / 2013
  • As mobile marketing through smartphones has grown, smartphone application services using AR markers, QR codes, and augmented reality have attracted much attention. Furthermore, as outdoor advertising migrates to LED signage, visible light communication technologies are being tried for mobile marketing. In this paper, we present the implementation of an AR marker-based URL access application service that uses the smartphone camera and visible light communication technologies. We analyze the performance of the implemented system in terms of connection time and success rate.
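
The evaluation mentioned above, connection time and success rate over repeated trials, could be scripted roughly as below; `open_url_from_marker` is a hypothetical stand-in for the platform's marker-to-URL connection step, not an API from the paper.

```python
import time
import statistics

def benchmark_connection(open_url_from_marker, frames, timeout_s=5.0):
    """Measure success rate and mean connection time over captured camera frames."""
    times, successes = [], 0
    for frame in frames:
        start = time.perf_counter()
        ok = open_url_from_marker(frame, timeout=timeout_s)  # hypothetical connection call
        elapsed = time.perf_counter() - start
        if ok:
            successes += 1
            times.append(elapsed)
    success_rate = successes / len(frames)
    mean_time = statistics.mean(times) if times else float("nan")
    return success_rate, mean_time
```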

Quantitative Analysis of Bioactive Marker Compounds from Cinnamomi Ramulus and Cinnamomi Cortex by HPLC-UV

  • Jeong, Su Yang; Zhao, Bing Tian; Moon, Dong Cheul; Kang, Jong Seong; Lee, Je Hyun; Min, Byung Sun; Son, Jong Keun; Woo, Mi Hee
    • Natural Product Sciences / v.19 no.1 / pp.28-35 / 2013
  • In this study, quantitative and pattern recognition analyses for the quality evaluation of Cinnamomi Ramulus and Cinnamomi Cortex were developed using HPLC/UV. For quantitative analysis, three major bioactive compounds were determined. The HPLC/UV separation conditions were optimized using an ODS C18 column (250 × 4.6 mm, 5 μm) with a gradient of acetonitrile and water as the mobile phase, a flow rate of 1.0 mL/min, and a detection wavelength of 265 nm. The method was fully validated with respect to linearity, accuracy, precision, recovery, and robustness, and was applied successfully to the quantification of the three major compounds in extracts of Cinnamomi Ramulus and Cinnamomi Cortex. The HPLC analytical method for pattern recognition analysis was validated by repeated analysis of thirty-eight Cinnamomi Ramulus and thirty-five Cinnamomi Cortex samples. The results indicate that the established HPLC/UV method is suitable for quantitative analysis.
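
As an illustration of the linearity check that such method validation typically involves, the sketch below fits a calibration curve and reports R²; the concentrations and peak areas are made-up numbers, not data from the paper.

```python
import numpy as np

# Hypothetical calibration data: standard concentrations (ug/mL) vs. HPLC peak areas.
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([120.0, 238.0, 601.0, 1195.0, 2410.0])

slope, intercept = np.polyfit(conc, area, 1)      # least-squares calibration line
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"area = {slope:.2f} * conc + {intercept:.2f}, R^2 = {r2:.4f}")

# An unknown sample's concentration follows from inverting the calibration line.
unknown_area = 750.0
print("estimated concentration:", (unknown_area - intercept) / slope)
```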

Implementation of augmented reality and object tracking using multiple camera (다중 카메라를 이용한 객체추적과 증강현실의 구현)

  • Kim, Hag-Hee
    • Journal of the Korea Society of Computer and Information / v.16 no.6 / pp.89-97 / 2011
  • In conventional object tracking and search, objects are tracked by extracting them from images captured by a single fixed camera, and a zoom function is used to obtain detailed information about the tracked objects. This study proposes a system that uses multiple cameras to recognize and track an object and presents information about the region containing the tracked object as augmented reality. Experiments on the proposed system showed that the number of pixels included in the computation was markedly reduced, the object recognition rate was improved, and the time needed to identify the information was shortened. Compared with existing methods, the system detects object motion more accurately and in less time.
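
One way to realize the "fewer pixels in the computation" idea is to run frame differencing only inside a region of interest around the previously tracked object. The sketch below, assuming OpenCV, illustrates that idea; it is not the paper's multi-camera algorithm, and the threshold is an arbitrary value.

```python
import cv2

def motion_bbox_in_roi(prev_gray, curr_gray, roi, thresh=25):
    """Detect motion only inside roi = (x, y, w, h) to cut the number of pixels processed."""
    x, y, w, h = roi
    diff = cv2.absdiff(curr_gray[y:y+h, x:x+w], prev_gray[y:y+h, x:x+w])
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    bx, by, bw, bh = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x + bx, y + by, bw, bh)   # bounding box in full-frame coordinates
```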

Gesture Spotting by Web-Camera in Arbitrary Two Positions and Fuzzy Garbage Model (임의 두 지점의 웹 카메라와 퍼지 가비지 모델을 이용한 사용자의 의미 있는 동작 검출)

  • Yang, Seung-Eun
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.127-136 / 2012
  • Much research on vision-based hand gesture recognition has been conducted to let users operate various electronic devices more easily. Accurate hand gesture recognition requires 3D position calculation and the classification of meaningful gestures among similar ones. This paper describes a simple and cost-effective method for 3D position calculation and gesture spotting (recognizing a meaningful gesture among similar meaningless gestures). The 3D position is obtained by calculating the relative position of the two cameras through a pan/tilt module and a marker, regardless of where the cameras are placed. A fuzzy garbage model is proposed to provide a variable reference value for deciding whether a user gesture is a command gesture. The reference is obtained from a fuzzy command gesture model and a fuzzy garbage model, which return scores indicating the degree to which a gesture belongs to the command and garbage classes, respectively. Two-stage user adaptation is also proposed to enhance performance: off-line (batch) adaptation for inter-personal differences and on-line (incremental) adaptation for intra-personal differences. Experiments were conducted with five different users. The command recognition rate is more than 95% when only one command-like meaningless gesture exists and more than 85% when the command is mixed with many other similar gestures.
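
A toy sketch of the spotting decision described above: a command score and a garbage score are computed for a gesture, and it is accepted as a command only when the command score wins by a margin. The Gaussian membership functions, the 1-D feature, and the margin are illustrative stand-ins for the paper's fuzzy models.

```python
import numpy as np

def gaussian_membership(x, mean, std):
    """Simple fuzzy membership function (illustrative stand-in for the paper's models)."""
    return float(np.exp(-0.5 * ((x - mean) / std) ** 2))

def spot_gesture(feature, command_params, garbage_params, margin=0.2):
    """Accept a gesture as a command only if its command score beats the
    garbage score by the given margin (the variable-reference idea)."""
    command_score = gaussian_membership(feature, *command_params)
    garbage_score = gaussian_membership(feature, *garbage_params)
    return command_score - garbage_score > margin

# Example: a 1-D feature (e.g., a normalized trajectory descriptor).
print(spot_gesture(0.9, command_params=(1.0, 0.2), garbage_params=(0.0, 0.5)))
```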