• Title/Summary/Keyword: Vision-based Control


Object Recognition of Robot Using 3D RFID System

  • Roh, Se-Gon;Park, Jin-Ho;Lee, Young-Hoon;Choi, Hyouk-Ryeol
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.62-67
    • /
    • 2005
  • Object recognition in robotics has generally depended on computer vision. Recently, RFID (Radio Frequency IDentification) technology has been proposed to support recognition and has been rapidly and widely adopted. This paper introduces a more advanced RFID-based recognition scheme. A novel tag, named the 3D tag, was designed to facilitate understanding of the object. A conventional RFID-based system only detects the existence of an object, so the robot still has to locate the object and carry out complex processing, such as pattern matching, to identify it. The 3D tag, however, not only detects the existence of the object but also estimates its position and orientation. These characteristics allow the robot to considerably reduce its dependence on other sensors for object recognition. In this paper, we analyze the detection characteristics of the 3D tag and the position and orientation estimation algorithm of the 3D tag-based RFID system.

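As a rough sketch of how a multi-antenna 3D tag could support pose estimation (the per-antenna data layout and the fusion rule below are illustrative assumptions, not the paper's actual estimator), detections might be combined like this:

```python
import numpy as np

def estimate_tag_pose(detections):
    """Fuse responses from a 3D tag's antenna elements into a pose estimate.

    Each detection carries the known mounting position and unit direction
    of one responding antenna element (hypothetical layout). Position is
    estimated as the centroid of responding elements; orientation as
    their normalized mean direction.
    """
    positions = np.array([d["position"] for d in detections])
    directions = np.array([d["direction"] for d in detections])
    position = positions.mean(axis=0)
    mean_dir = directions.mean(axis=0)
    orientation = mean_dir / np.linalg.norm(mean_dir)
    return position, orientation

# Two responding antenna elements of one 3D tag (made-up coordinates).
detections = [
    {"position": [0.0, 0.0, 0.1], "direction": [1.0, 0.0, 0.0]},
    {"position": [0.1, 0.0, 0.1], "direction": [0.0, 1.0, 0.0]},
]
pos, ori = estimate_tag_pose(detections)
```

Unlike a plain tag, which yields only a present/absent bit, even this crude fusion yields a position and a direction the robot can act on.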

Development of a Robot Personality based on Cultural Paradigm using Fuzzy Logic (퍼지 로직을 이용한 문화 패러다임 기반의 로봇 성격 개발)

  • Qureshi, Favad Ahmed;Kim, Eun-Tai;Park, Mi-Gnon
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2008.04a
    • /
    • pp.385-391
    • /
    • 2008
  • Robotics has emerged as an important field for the future. Our vision is that future robots will transcend their present limitations and work side by side with humans for the greater good of mankind. We developed a face robot for this purpose. However, lifelike robots demand a certain level of intelligence. Some researchers have proposed an event-based learning approach, in which the robot is treated as a small child that develops its own personality by learning from surrounding entities. Others have proposed an entirely new personality for the robot itself, in which the robot has its own internal states, intentions, beliefs, desires, and feelings. Our approach should not only develop a robot personality model but also understand human behavior and incorporate it into that model. Human personality is very complex and rests on many factors, such as the physical surroundings, the social surroundings, and internal states and beliefs. This paper discusses the development of a platform to evaluate this and to establish a standard through a society-based approach that includes the cultural paradigm. For this purpose, fuzzy control theory is used; since fuzzy theory closely mirrors human analytical thinking, it provides a very good platform for developing such a model.

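The fuzzy machinery the abstract refers to can be sketched in a few lines. The rule set, the sociability trait, and the output levels below are illustrative assumptions, not the paper's model:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def respond(stimulus, sociability):
    """Map a stimulus intensity in [0, 1] to a response intensity,
    modulated by a culture-dependent sociability trait in [0, 1].
    Uses singleton consequents and weighted-average defuzzification."""
    weights = [
        tri(stimulus, -0.5, 0.0, 0.5),  # low stimulus
        tri(stimulus, 0.0, 0.5, 1.0),   # medium stimulus
        tri(stimulus, 0.5, 1.0, 1.5),   # high stimulus
    ]
    levels = [0.1, 0.5, 0.9]            # reserved / neutral / expressive
    base = sum(w * l for w, l in zip(weights, levels)) / (sum(weights) or 1.0)
    # An expressive (high-sociability) culture amplifies the response.
    return min(1.0, base * (0.5 + sociability))

r = respond(0.8, 0.7)  # response of a fairly sociable personality
```

Cultural paradigms would enter as different trait values (or different rule bases) per culture; the inference step itself stays the same.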

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.3136-3150
    • /
    • 2015
  • Vision-based 3D tracking of the articulated human hand is one of the major issues in human-computer interaction and in understanding the control of a robot hand. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between the actual hand observed by the Kinect and a hypothesized 3D hand model. Since a 3D hand pose has 23 degrees of freedom, tracking the hand articulation imposes an excessive computational burden when minimizing this shape discrepancy. To address this, we first created a 3D hand model that represents the hand with 17 different parts. Second, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model; it was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Experiments showed that the proposed method improved hand-part recognition rates and ran at 20-30 fps. The results confirm its practical use in classifying the hand area, and the 3D hand pose was successfully tracked and recovered in real time.
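A minimal sketch of the classification stage, assuming scikit-learn for the Random Forest: the 8x8 synthetic patches and the two rectangle-difference features below stand in for the paper's rendered hand-model depth images and its richer Haar-like feature set (which distinguishes 17 hand parts, not 2):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def haar_features(patch):
    """Two-rectangle Haar-like responses on a depth patch:
    mean(left) - mean(right) and mean(top) - mean(bottom)."""
    h, w = patch.shape
    return np.array([
        patch[:, : w // 2].mean() - patch[:, w // 2 :].mean(),
        patch[: h // 2, :].mean() - patch[h // 2 :, :].mean(),
    ])

def synth_patch(part):
    """Synthetic 8x8 depth patch: part 0 slopes left-to-right,
    part 1 slopes top-to-bottom (toy stand-ins for hand parts)."""
    ramp = np.linspace(0.0, 1.0, 8)
    base = np.tile(ramp, (8, 1)) if part == 0 else np.tile(ramp[:, None], (1, 8))
    return base + 0.05 * rng.standard_normal((8, 8))

# Train on synthetic depth patches, as the paper does with rendered images.
X = np.array([haar_features(synth_patch(k % 2)) for k in range(200)])
y = np.array([k % 2 for k in range(200)])
clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
```

Because each feature is a difference of rectangle means, it can be evaluated in constant time from an integral image, which helps make the 20-30 fps figure plausible.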

Development of Multi-functional Tele-operative Modular Robotic System For Watermelon Cultivation in Greenhouse

  • H. Hwang;Kim, C. S.;Park, D. Y.
    • Journal of Biosystems Engineering
    • /
    • v.28 no.6
    • /
    • pp.517-524
    • /
    • 2003
  • There have been worldwide research and development efforts to automate various bio-production processes, and those efforts will expand, with priority given to tasks that are labor intensive, produce high value-added products, or take place in hostile environments. In bio-production, the versatility and robustness of automated systems have been major bottlenecks, along with economic efficiency. This paper introduces a new concept of automation based on tele-operation, which offers a way to overcome the inherent difficulties in automating bio-production processes: the operator (farmer), computer, and automatic machinery share roles, each exploiting its strengths, to accomplish given tasks successfully. Among greenhouse watermelon cultivation processes, the tasks of pruning, watering, pesticide application, and harvesting with loading were chosen to realize the proposed concept, based on their labor intensiveness and functional similarities. The developed system was composed of five major hardware modules: a wireless remote monitoring and task-control module, a wireless remote image-acquisition and data-transmission module, a gantry system equipped with a 4-d.o.f. Cartesian-type robotic manipulator, exchangeable modular end-effectors, and a guided watermelon loading and storage module. The system was operated through a graphical user interface on a touch-screen monitor, with wireless data communication among operator, computer, and machine. The proposed system demonstrated a practical and feasible approach to automation in the volatile field of bio-production.

Development of Mask-RCNN Based Axle Control Violation Detection Method for Enforcement on Overload Trucks (과적 화물차 단속을 위한 Mask-RCNN기반 축조작 검지 기술 개발)

  • Park, Hyun suk;Cho, Yong sung;Kim, Young Nam;Kim, Jin pyung
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.5
    • /
    • pp.57-66
    • /
    • 2022
  • The Road Management Administration cracks down on overloaded vehicles by installing low-speed or high-speed weigh-in-motion (WIM) systems at toll gates and on the main lines of expressways. In recent years, however, drivers have increasingly evaded this enforcement system by illegally manipulating the variable axle of an overloaded truck: when entering the checkpoint, all axles of the vehicle are lowered so that it passes normally, and when driving on the main road, the variable axle is illegally lifted, pushing the remaining axle loads above 10 tons. This study therefore developed a technique to detect the state of the variable axle of a truck driving on the road using roadside camera images. In particular, the technique forms a basis for enforcement against vehicles that lift the variable axle after leaving the checkpoint, by linking the vehicle with the checkpoint's measurement information. In this study, the vehicle's tires were recognized using the Mask R-CNN algorithm, the recognized tires were virtually aligned before and after the checkpoint, and the height difference between them was measured to determine whether the variable axle had been lifted after the vehicle left the checkpoint.
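The final geometric check reduces to comparing per-axle tire heights in image coordinates. The sketch below assumes tire bounding boxes have already been produced by a detector such as Mask R-CNN; the box format and pixel tolerance are illustrative assumptions:

```python
def lifted_axles(boxes_before, boxes_after, tol=10):
    """Flag axles whose tire rose between the two observations.

    Boxes are (x1, y1, x2, y2) in pixels with y increasing downward;
    axles are matched by sorted x-position. A tire whose vertical
    center rises by more than `tol` pixels after the checkpoint is
    flagged as a lifted variable axle.
    """
    def centers(boxes):
        return [(b[1] + b[3]) / 2 for b in sorted(boxes)]
    return [cb - ca > tol
            for cb, ca in zip(centers(boxes_before), centers(boxes_after))]

# Axle 2's tire sits 50 px higher after the checkpoint (made-up boxes).
before = [(100, 380, 160, 420), (300, 380, 360, 420)]
after = [(100, 380, 160, 420), (300, 330, 360, 370)]
flags = lifted_axles(before, after)
```

In practice the two views come from different camera positions, so the boxes would first need to be registered to a common reference, which is what the paper's virtual alignment step provides.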

A Study on the Implementation of RFID-Based Autonomous Navigation System for Robotic Cellular Phone (RCP) (RFID를 이용한 RCP 자율 네비게이션 시스템 구현을 위한 연구)

  • Choe Jae-Il;Choi Jung-Wook;Oh Dong-Ik;Kim Seung-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.5
    • /
    • pp.480-488
    • /
    • 2006
  • The industrial and economic importance of the cellular phone (CP) is growing rapidly. Combined with IT technology, the CP is one of the most attractive technologies of today; however, unless a new breakthrough is found, its growth may soon slow down. Robot technology (RT) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced features such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition, among many others. In this paper, we present a new technological concept named RCP (Robotic Cellular Phone), which integrates RT and CP with the vision of advancing CP, IT, and RT together. RCP consists of three sub-modules: $RCP^{Mobility}$ (RCP Mobility System), $RCP^{Interaction}$, and $RCP^{Integration}$. The main focus of this paper is $RCP^{Mobility}$, which combines an autonomous navigation system from RT mobility with the CP. Through $RCP^{Mobility}$, we can provide the CP with robotic functions such as auto-charging and real-world robotic entertainment; ultimately, the CP may become a robotic pet for human beings. $RCP^{Mobility}$ consists of various controllers, the two main ones being the trajectory controller and the self-localization controller. While the former is responsible for the wheel-based navigation of the RCP, the latter provides localization information as the RCP moves. With the coordinates acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype of $RCP^{Mobility}$ is presented: we describe the overall structure of the system and provide experimental results on RCP navigation.
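The division of labor between the two controllers can be illustrated with a toy loop: the self-localization step reduces detected floor-tag IDs to a coordinate, and the trajectory step turns that coordinate into a heading command. The tag layout and interfaces here are assumptions for illustration, not the paper's implementation:

```python
import math

def localize(tag_positions, detected_ids):
    """Self-localization sketch: estimate the RCP's position as the
    centroid of the floor tags its RFID reader currently detects."""
    pts = [tag_positions[i] for i in detected_ids]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def heading_to(pos, goal):
    """Trajectory-controller sketch: heading (rad) toward a waypoint."""
    return math.atan2(goal[1] - pos[1], goal[0] - pos[0])

# Hypothetical grid of floor tags; the reader currently sees tags 1 and 2.
tags = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (0.0, 1.0)}
pos = localize(tags, [1, 2])
heading = heading_to(pos, (0.5, 1.0))
```

The coarse RFID fix bounds drift, and the wheel-based trajectory controller interpolates between fixes, which is the refinement loop the abstract describes.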

Sampling-based Control of SAR System Mounted on A Simple Manipulator (간단한 기구부와 결합한 공간증강현실 시스템의 샘플 기반 제어 방법)

  • Lee, Ahyun;Lee, Joo-Ho;Lee, Joo-Haeng
    • Korean Journal of Computational Design and Engineering
    • /
    • v.19 no.4
    • /
    • pp.356-367
    • /
    • 2014
  • A robotic spatial augmented reality (RSAR) system, which combines robotic components with a projector-based AR technique, is unique in its ability to expand the user-interaction area by dynamically changing the position and orientation of a projector-camera unit (PCU). For a moving PCU mounted on a conventional robotic device, we can compute its extrinsic parameters using robot kinematics, assuming the link and joint geometry is available. In an RSAR system based on a user-created robot (UCR), however, it is difficult to calibrate or measure the geometric configuration, which limits the applicability of conventional kinematics methods. In this paper, we propose a data-driven kinematics control method for a UCR-based RSAR system. The proposed method utilizes a pre-sampled set of camera calibrations acquired at a sufficient number of kinematics configurations over fixed joint domains; the sampled set is then compactly represented as a set of B-spline surfaces. The proposed method has two merits. First, it does not require any kinematics model such as link lengths or joint orientations. Second, the computation is simple, since it merely evaluates a few polynomials rather than relying on Jacobian computation. We describe the proposed method and demonstrate the results on an experimental RSAR system with a PCU mounted on a simple pan-tilt arm.
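To make the "evaluate instead of derive" idea concrete: the sketch below replaces the paper's B-spline surfaces with plain bilinear interpolation over a pre-sampled (pan, tilt) calibration grid, which shares the key property that no link geometry or Jacobian is needed:

```python
import numpy as np

def eval_sampled(grid_pan, grid_tilt, samples, pan, tilt):
    """Evaluate a pre-sampled extrinsic parameter at (pan, tilt) by
    bilinear interpolation over the calibration grid. `samples[i, j]`
    holds the value measured at (grid_pan[i], grid_tilt[j])."""
    i = int(np.clip(np.searchsorted(grid_pan, pan) - 1, 0, len(grid_pan) - 2))
    j = int(np.clip(np.searchsorted(grid_tilt, tilt) - 1, 0, len(grid_tilt) - 2))
    u = (pan - grid_pan[i]) / (grid_pan[i + 1] - grid_pan[i])
    v = (tilt - grid_tilt[j]) / (grid_tilt[j + 1] - grid_tilt[j])
    return ((1 - u) * (1 - v) * samples[i, j] + u * (1 - v) * samples[i + 1, j]
            + (1 - u) * v * samples[i, j + 1] + u * v * samples[i + 1, j + 1])

# Toy 2x2 calibration grid for one scalar extrinsic parameter.
grid_pan, grid_tilt = [0.0, 10.0], [0.0, 10.0]
samples = np.array([[0.0, 1.0], [2.0, 3.0]])
mid = eval_sampled(grid_pan, grid_tilt, samples, 5.0, 5.0)
```

A B-spline surface fit would additionally smooth over calibration noise and give higher-order continuity; the lookup structure is otherwise the same.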

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial-expression animation technique and system that provide robust 3D head-pose estimation and real-time facial-expression control. Much research on 3D face animation has addressed facial-expression control itself rather than 3D head-motion tracking; however, head-motion tracking is one of the critical issues to be solved for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head-motion tracking and facial-expression control at the same time. The proposed system consists of three major phases: face detection, 3D head-motion tracking, and facial-expression control. For face detection, a non-parametric HT skin-color model and template matching let us detect the facial region efficiently in each video frame. For 3D head-motion tracking, we exploit a cylindrical head model that is projected onto the initial head-motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced using the optical-flow method. For facial-expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head-motion and facial-expression information, the animation parameters describing the variation of the facial features are acquired from a geometrically transformed frontal head-pose image. Finally, facial-expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are moved using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation, with robust head-pose estimation and facial variation, from input video.
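The RBF step at the end can be sketched directly. Assuming a Gaussian kernel (the abstract does not specify the basis), feature-point displacements are interpolated out to nearby non-feature vertices:

```python
import numpy as np

def rbf_deform(features, displacements, vertices, sigma=1.0):
    """Propagate feature-point displacements to surrounding vertices.

    Solves K w = d for the Gaussian-RBF weights (exact interpolation
    at the feature points), then evaluates the field at each vertex.
    """
    def kernel(a, b):
        d2 = np.square(a[:, None, :] - b[None, :, :]).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    w = np.linalg.solve(kernel(features, features), displacements)
    return vertices + kernel(vertices, features) @ w

# One feature point pushed 1 unit in x; a nearby vertex follows partially.
features = np.array([[0.0, 0.0]])
displacements = np.array([[1.0, 0.0]])
vertices = np.array([[0.0, 0.0], [0.0, 2.0]])
moved = rbf_deform(features, displacements, vertices)
```

The kernel width `sigma` controls how far each control point's influence extends over the surrounding mesh.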

Comparison of the Static Balance Ability according to the Subjective Visual Vertical in Healthy Adults

  • Kwon, Jung Won;Yeo, Sang Seok
    • The Journal of Korean Physical Therapy
    • /
    • v.32 no.3
    • /
    • pp.152-156
    • /
    • 2020
  • Purpose: The subjective visual vertical (SVV) test is used to evaluate otolith function in the inner ear. This study compared differences in static balance ability according to SVV results in healthy adults. Methods: The study recruited 30 healthy subjects with no neurological or musculoskeletal disorders. The subjects were divided into an experimental group (SVV > 2°) and a control group (SVV < 2°). Static balance ability was evaluated using the Fourier index, which quantifies balance capacity objectively. Results: The mean SVV angle in the experimental and control groups was 4.44° and 0.59°, respectively. In the Fourier analysis, the F1 frequency band of the experimental group showed a significantly higher value than the control group under one condition (p<0.05). In the F2-4 and F5-6 frequency bands, the experimental group showed significantly increased Fourier values under four conditions (p<0.05), and in the F7-8 band under three conditions (p<0.05). Conclusion: These results indicate increased trunk sway during static balance in the experimental group, which had the larger SVV angle, compared to the control group. The SVV can thus be applied to evaluate the vestibular system and balance ability in normal adults.

Resizing effect of image and ROI in using control charts to monitor image data (이미지 데이터를 모니터링하는 관리도에서 이미지와 ROI 크기 조정의 영향)

  • Lee, JuHyoung;Yoon, Hyeonguk;Lee, Sungmin;Lee, Jaeheon
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.3
    • /
    • pp.487-501
    • /
    • 2017
  • A machine vision system (MVS) is a computer system that uses one or more image-capturing devices to provide image data for analysis and interpretation. Recently, a number of industrial and medical-device applications have proposed control charts for use with image data. Image-based control charting differs somewhat from traditional control-charting applications, and these differences can be attributed to several factors, such as the type of data monitored and how the control charts are applied. In this paper, we investigate the effect of resizing the image and the region of interest (ROI) when control charts are used to monitor grayscale image data in industry.
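As a minimal illustration of the monitoring setup (the statistic and the limits below are textbook defaults, not the paper's specific charts), one can chart the mean grayscale intensity of the ROI with individuals-chart limits:

```python
import numpy as np

def roi_mean(image, r0, r1, c0, c1):
    """Monitored statistic: mean grayscale intensity in ROI [r0:r1, c0:c1]."""
    return float(np.asarray(image)[r0:r1, c0:c1].mean())

def xbar_limits(roi_means, L=3.0):
    """Individuals-chart limits from an in-control (Phase I) sequence,
    estimating sigma from the average moving range (d2 = 1.128 for n=2)."""
    x = np.asarray(roi_means, dtype=float)
    sigma_hat = np.abs(np.diff(x)).mean() / 1.128
    center = x.mean()
    return center - L * sigma_hat, center, center + L * sigma_hat

# Phase I: ROI means from hypothetical in-control frames.
means = [10.0, 12.0, 10.0, 12.0]
lcl, cl, ucl = xbar_limits(means)
```

Resizing the image or the ROI changes how many pixels are averaged into each plotted point, and hence the chart's variance and detection power, which is exactly the adjustment effect the paper studies.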