• Title/Summary/Keyword: hand coordinate system

Search Results: 85

Camera Calibration using the TSK fuzzy system (TSK 퍼지 시스템을 이용한 카메라 켈리브레이션)

  • Lee Hee-Sung;Hong Sung-Jun;Oh Kyung-Sae;Kim Eun-Tai
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.56-58 / 2006
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, on the other hand, is a popular fuzzy model that can approximate any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules; it exhibits not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple technique for camera calibration in machine vision using the TSK fuzzy model. The proposed method divides the world into regions according to the camera view and uses the clustered 3D geometric knowledge. The TSK fuzzy system is employed to estimate the camera parameters by combining partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration.
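The TSK model described in this abstract combines local linear mappings through fuzzy rules. Below is a minimal sketch of first-order TSK inference, assuming Gaussian membership functions and illustrative rule parameters (not taken from the paper):

```python
import numpy as np

def tsk_infer(x, centers, sigmas, coeffs, biases):
    """First-order TSK fuzzy inference.

    x       : (d,) input vector (e.g., image coordinates).
    centers : (r, d) Gaussian membership centers, one per rule.
    sigmas  : (r, d) Gaussian membership widths.
    coeffs  : (r, d, m) linear consequent coefficients per rule.
    biases  : (r, m) consequent offsets.
    Returns the (m,) output as the firing-strength-weighted
    average of the rule consequents.
    """
    # Rule firing strengths from the product of Gaussian memberships.
    w = np.exp(-0.5 * np.sum(((x - centers) / sigmas) ** 2, axis=1))
    # Linear consequent of each rule: y_i = A_i x + b_i.
    y = np.einsum("rdm,d->rm", coeffs, x) + biases
    # Normalized weighted average over all rules.
    return (w[:, None] * y).sum(axis=0) / (w.sum() + 1e-12)
```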

Inviscid Rotational Flows Near a Corner and Within a Triangle

  • Suh, Yong-Kweon
    • Journal of Mechanical Science and Technology / v.15 no.6 / pp.813-820 / 2001
  • Solutions of inviscid rotational flows near corners of arbitrary angle and within triangles of arbitrary shape are presented. The corner-flow solution has a rotational component as a particular solution. The addition of irrotational components yields a general solution, which is indeterminate unless a far-field condition is imposed. When the corner angle is less than 90°, the flow asymptotically becomes rotational; for a corner angle larger than 90°, it tends to become irrotational. The general solution for the corner flow is then applied to rotational flows within a triangle (Method I). The error level depends on the geometry, and a parameter space is presented by which the error level of the solutions can be estimated. On the other hand, Method II, which employs three separate coordinate systems, is developed. The error level given by Method II is moderate but less dependent on the geometry.
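As a hedged illustration of the decomposition the abstract refers to, consider the special case of constant vorticity $\omega_0$ in a corner of interior angle $\alpha$ with walls at $\theta = 0$ and $\theta = \alpha$ (the paper treats more general rotational flows). The stream function satisfies a Poisson equation and splits into a rotational particular solution plus irrotational (harmonic) corner eigenfunctions, whose coefficients must come from the far-field condition:

```latex
\nabla^2 \psi = -\omega_0, \qquad
\psi = \psi_p + \sum_{n \ge 1} c_n\, r^{n\pi/\alpha} \sin\!\left(\frac{n\pi\theta}{\alpha}\right),
\qquad
\psi_p = \frac{\omega_0 r^2}{4}\left(\cos 2\theta + \tan\alpha\,\sin 2\theta - 1\right).
```

Near the corner the particular term scales as $r^2$ while the leading irrotational term scales as $r^{\pi/\alpha}$, so the rotational part dominates for angles below 90° and the irrotational part for angles above 90°, consistent with the abstract; this simple constant-vorticity form is only an illustration and degenerates at exactly 90°, where a separate particular solution is required.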

Non-restraint Master Interface of Minimally Invasive Surgical Robot Using Hand Motion Capture (손동작 영상획득을 이용한 최소침습수술로봇 무구속 마스터 인터페이스)

  • Jang, Ik-Gyu
    • Journal of Biomedical Engineering Research / v.37 no.3 / pp.105-111 / 2016
  • Introduction: A surgical robot is an alternative instrument for difficult and precise surgical operations, and it should be operationally intuitive so that natural motions are transferred. In a surgical robot master interface, hand motion derived from contact with a mechanical handle is subject to limitations such as mechanical singularity, isotropy, and coupling problems. In this paper, we confirm and verify the feasibility of an intuitive non-restraint master interface that tracks hand motion with infrared cameras and only three reflective markers, without a hardware handle, for the surgical robot master interface. Materials & methods: We configured the software and hardware system, arranged six infrared cameras, and attached three reflective markers to the hand to measure three-dimensional coordinates, from which we obtain the seven motions of grasp, yaw, pitch, roll, px, py, and pz. We connected the Virtual-Master to the slave surgical robot (Laparobot) and examined its feasibility. To verify the motion results, we compared the results of the non-restraint master with those of a clinometer (and protractor) by measuring 0-180 degrees at 10-degree intervals with 1,000 samples, and recorded the standard deviation, which represents the error rate of the values. Results: We confirmed that the average angle values of the non-restraint master interface correspond accurately to the results of the clinometer (and protractor) and have low error rates during motion. Investigation & Conclusion: In this paper, we confirmed the feasibility and accuracy of a 3D non-restraint master interface that offers intuitive motion without a contact hardware handle. As a result, high intuitiveness and dexterity of the surgical robot can be expected.
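The abstract derives position, yaw, pitch, roll, and grasp from three reflective markers. A minimal sketch of one way to build a hand frame from three 3D marker positions is shown below; the frame construction, Euler-angle convention, and grasp proxy are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def hand_pose_from_markers(p1, p2, p3):
    """Estimate a hand frame from three reflective-marker positions.

    p1, p2, p3 : (3,) marker coordinates in the camera/world frame.
    Returns (position, yaw, pitch, roll, spread), where spread is the
    distance between two markers used as a grasp-like measure.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    origin = (p1 + p2 + p3) / 3.0              # hand position: marker centroid
    x_axis = p2 - p1                           # in-plane reference direction
    x_axis = x_axis / np.linalg.norm(x_axis)
    normal = np.cross(p2 - p1, p3 - p1)        # normal of the marker triangle
    z_axis = normal / np.linalg.norm(normal)
    y_axis = np.cross(z_axis, x_axis)
    R = np.column_stack((x_axis, y_axis, z_axis))  # hand frame -> world rotation

    # Z-Y-X Euler angles (yaw, pitch, roll) extracted from the rotation matrix.
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])

    spread = np.linalg.norm(p3 - p2)           # proxy for grasp opening
    return origin, yaw, pitch, roll, spread
```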

Golf Swing Classification Using Fuzzy System (퍼지 시스템을 이용한 골프 스윙 분류)

  • Park, Junwook;Kwak, Sooyeong
    • Journal of Broadcast Engineering / v.18 no.3 / pp.380-392 / 2013
  • A method to classify a golf swing motion into seven sections using a Kinect sensor and a fuzzy system is proposed. The inputs to the fuzzy logic are the positions of the golf club and its head, which are extracted from the golfer's joint positions and the color information obtained by a Kinect sensor. The proposed method consists of three modules: one for extracting joint information, another for detecting and tracking the golf club, and the other for classifying golf swing motions. The first module extracts the hand position from the joint information provided by the Kinect sensor. The second module detects the golf club as well as its head with the Hough line transform, based on the hand coordinates. Using fuzzy logic as the classification engine reduces recognition errors and consequently improves the robustness of the classification. In experiments on real-time video clips, the proposed method shows a classification reliability of 85.2%.
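As a hedged illustration of the second module, the following sketch searches for the club shaft as the longest Hough line segment in a window around the hand position; the window size and edge/Hough thresholds are assumed values, not the paper's settings:

```python
import cv2
import numpy as np

def detect_club_line(frame_bgr, hand_xy, roi_half=150):
    """Detect the golf-club shaft as the longest Hough line near the hand.

    frame_bgr : BGR frame from the sensor's color stream.
    hand_xy   : (x, y) hand position in image coordinates (e.g., from skeleton data).
    roi_half  : half-size of the square search window around the hand (assumed value).
    Returns ((x1, y1), (x2, y2)) in full-image coordinates, or None.
    """
    h, w = frame_bgr.shape[:2]
    x, y = int(hand_xy[0]), int(hand_xy[1])
    x0, y0 = max(x - roi_half, 0), max(y - roi_half, 0)
    x1, y1 = min(x + roi_half, w), min(y + roi_half, h)

    roi = cv2.cvtColor(frame_bgr[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    # Keep the longest segment as the club-shaft candidate.
    best = max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    ax, ay, bx, by = best
    return (ax + x0, ay + y0), (bx + x0, by + y0)
```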

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • With growing interest in Human-Computer Interaction (HCI), research on HCI has been actively conducted, and with it research on Natural User Interface/Natural User eXperience (NUI/NUX) that uses the user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture or voice recognition, but these algorithms have a weakness: their implementation is complex and training takes a long time, because they must go through steps including preprocessing, normalization, and feature extraction. Recently, Kinect was launched by Microsoft as an NUI/NUX development tool, attracting attention, and studies using Kinect have been conducted. In a previous study, the authors implemented a hand-mouse interface with outstanding intuitiveness using the physical features of the user. However, it had weaknesses such as unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracting the user's physical features through Kinect in real time. A virtual monitor is a virtual space that can be controlled by the hand mouse, such that coordinates on the virtual monitor are accurately mapped onto coordinates on the real monitor. The hand-mouse interface based on the virtual monitor concept maintains the outstanding intuitiveness that was the strength of the previous study and enhances the accuracy of the mouse functions. Further, we increased the accuracy of the interface by recognizing the user's unnecessary actions using a concentration indicator derived from electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface with 50 people ranging in age from their teens to their fifties. In the intuitiveness experiment, 84% of the subjects learned how to use it within one minute; in the accuracy experiment, the accuracies of the mouse functions were drag 80.4%, click 80%, and double-click 76.7%. With the intuitiveness and accuracy of the proposed hand-mouse interface confirmed through these experiments, it is expected to be a good example of an interface for controlling systems by hand in the future.
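The core of the virtual monitor idea is a mapping from a hand position inside a user-specific virtual rectangle onto real-monitor pixels. A minimal sketch of such a mapping is given below; how the rectangle's corners are obtained from the user's physical features, and the screen resolution, are assumptions for illustration:

```python
import numpy as np

def virtual_to_screen(hand_xyz, top_left, top_right, bottom_left,
                      screen_w=1920, screen_h=1080):
    """Map a 3D hand position inside a 'virtual monitor' rectangle
    to real-monitor pixel coordinates.

    top_left, top_right, bottom_left : (3,) corners of the virtual monitor,
        assumed to be derived from the user's physical features
        (e.g., shoulder position and arm length) -- illustrative only.
    Returns (px, py) clamped to the screen, plus the signed distance of
    the hand from the virtual plane (useful for click-like gestures).
    """
    hand = np.asarray(hand_xyz, dtype=float)
    u = np.asarray(top_right, float) - np.asarray(top_left, float)    # horizontal edge
    v = np.asarray(bottom_left, float) - np.asarray(top_left, float)  # vertical edge
    rel = hand - np.asarray(top_left, float)

    # Normalized coordinates within the virtual rectangle.
    s = np.dot(rel, u) / np.dot(u, u)
    t = np.dot(rel, v) / np.dot(v, v)

    px = int(np.clip(s, 0.0, 1.0) * (screen_w - 1))
    py = int(np.clip(t, 0.0, 1.0) * (screen_h - 1))

    # Signed distance from the virtual plane along its normal.
    n = np.cross(u, v)
    depth = np.dot(rel, n) / np.linalg.norm(n)
    return px, py, depth
```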

GPS-based Augmented Reality System for Social Network Proposition (소셜 네트워크를 위한 GPS기반 증강현실 시스템 제안)

  • Liu, Jie;Jin, Seong-geun;Lee, Seong-Ok;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.10a / pp.903-905 / 2012
  • Research on augmented reality (AR) has recently been expanding actively, and adding AR features to social network systems has become a necessity. In this paper, a GPS-based augmented reality system for social networks is introduced and proposed. The system can automatically add friends who have recently checked in on Facebook by synchronizing location coordinates, and the added location coordinates are represented in the real-world environment through AR. Marker-based AR, the commonly used approach, imposes a large cost on handheld devices in processing and storage space; this disadvantage of marker-based AR systems can be avoided by using location-based AR applications. The proposed GPS-based augmented reality system for social networks automatically searches for the optimal speed between Wi-Fi and 4G networks, and a handheld AR system for iOS is intended as future work.

Development of an Automatic Seeding System Using Machine Vision for Seed Line-up of Cucurbitaceous Vegetables (기계시각을 이용한 박과채소 종자 정렬파종시스템 개발)

  • Kim, Dong-Eok;Cho, Han-Keun;Chang, Yu-Seob;Kim, Jong-Goo;Kim, Hyeon-Hwan;Son, Jae-Ryoung
    • Journal of Biosystems Engineering / v.32 no.3 / pp.179-189 / 2007
  • Most seeds of the cucurbitaceous rootstock species used for grafting are sown by hand. This study was carried out to develop an on-line algorithm for discriminating seed direction using machine vision, together with an automatic seeding system. The seeding system was composed of a supplying device, a feeding device, a machine vision system, a reversing device, a seeding device, and a system control section. The machine vision system was composed of a color CCD camera, a frame grabber, an image inspection chamber, lighting, and a personal computer. The seed image was segmented into seed and background regions by thresholding the H value of the HSI color coordinate system. Seed direction was discriminated by comparing the center of the rectangle circumscribing the seed with the center of the seed region. It took about 49 ms to identify and redirect a seed. The line-up status was good for more than 95% of the sown seeds. The seeding capacity of this system was shown to be 10,140 grains per hour, which is three times faster than that of a typical worker.
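A hedged sketch of the two image-processing steps named in the abstract — hue thresholding and comparing the bounding-rectangle center with the seed-region center — is shown below; the hue range, morphology, and the left/right decision rule are illustrative assumptions, not the paper's calibration:

```python
import cv2
import numpy as np

def seed_direction(image_bgr, h_low=10, h_high=35):
    """Decide which end of a seed points forward from a single image.

    Segments the seed from the background by thresholding the hue channel
    (h_low/h_high are illustrative values), then compares the centroid of
    the seed region with the center of its bounding rectangle: the wide
    (blunt) end pulls the centroid toward it.
    Returns 'left' or 'right' along the image x-axis, or None if no seed.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array((h_low, 40, 40)), np.array((h_high, 255, 255)))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    seed = max(contours, key=cv2.contourArea)

    x, y, w, h = cv2.boundingRect(seed)
    rect_cx = x + w / 2.0                 # center of the circumscribed rectangle
    m = cv2.moments(seed)
    blob_cx = m["m10"] / m["m00"]         # centroid of the seed region

    return "right" if blob_cx > rect_cx else "left"
```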

3D Object Location Identification Using Finger Pointing and a Robot System for Tracking an Identified Object (손가락 Pointing에 의한 물체의 3차원 위치정보 인식 및 인식된 물체 추적 로봇 시스템)

  • Gwak, Dong-Gi;Hwang, Soon-Chul;Ok, Seo-Won;Yim, Jung-Sae;Kim, Dong Hwan
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.24 no.6 / pp.703-709 / 2015
  • In this work, a robot aimed at grasping and delivering an object in response to a simple finger-pointing command from a hand- or arm-handicapped person is introduced. In this robot system, a Leap Motion sensor is utilized to obtain the finger-motion data of the user, and a Kinect sensor is used to measure the 3D (three-dimensional) position information of the desired object. Once the object is pointed at by the handicapped user, its exact 3D position is determined using an image-processing technique and a coordinate transformation between the Leap Motion and Kinect sensors. The obtained information is transmitted to the robot controller, and the robot grasps the target and delivers it to the handicapped person successfully.
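As an illustration of the coordinate transformation between the two sensors and the selection of the pointed object, the sketch below transforms the pointing ray from an assumed Leap Motion frame into the Kinect frame with a known extrinsic calibration (R, t) and picks the candidate object closest to the ray; this is a plausible construction, not the paper's exact pipeline:

```python
import numpy as np

def select_pointed_object(finger_origin, finger_dir, R, t, object_positions):
    """Find which object the user is pointing at.

    finger_origin, finger_dir : (3,) pointing ray in the Leap Motion frame
        (e.g., fingertip position and pointing direction).
    R, t : rotation (3x3) and translation (3,) from the Leap Motion frame
        to the Kinect frame, assumed known from a prior extrinsic calibration.
    object_positions : (N, 3) candidate object centers in the Kinect frame
        (e.g., from depth-image segmentation).
    Returns the index of the object whose center is closest to the ray.
    """
    # Transform the pointing ray into the Kinect frame.
    o = R @ np.asarray(finger_origin, float) + np.asarray(t, float)
    d = R @ np.asarray(finger_dir, float)
    d = d / np.linalg.norm(d)

    # Perpendicular distance of each object center from the ray.
    rel = np.asarray(object_positions, float) - o
    along = rel @ d                          # projection onto the ray
    perp = rel - np.outer(along, d)          # component orthogonal to the ray
    dist = np.linalg.norm(perp, axis=1)

    # Ignore objects behind the fingertip.
    dist[along < 0] = np.inf
    return int(np.argmin(dist))
```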

3D Pointing for Effective Hand Mouse in Depth Image (깊이영상에서 효율적인 핸드 마우스를 위한 3D 포인팅)

  • Joo, Sung-Il;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.19 no.8 / pp.35-44 / 2014
  • This paper proposes a 3D pointing interface designed for the efficient operation of a hand mouse. The proposed method uses depth images to secure high-quality results even under changes in lighting and environmental conditions, and uses the normal vector of the palm to perform 3D pointing. First, the hand region is detected and tracked using an existing conventional method; based on the information thus obtained, the region of the palm is predicted and the region of interest is obtained. Once the region of interest has been identified, it is approximated by a plane equation and the normal vector is extracted. Next, to ensure stable control, interpolation is performed using the extracted normal vector and the intersection point is detected. For stability and efficiency, a dynamic weight based on the sigmoid function is applied to the detected intersection point, which is finally converted into the 2D coordinate system. This paper explains the methods of detecting the region of interest and the direction vector, and proposes a method of interpolating and applying the dynamic weight in order to stabilize control. Lastly, qualitative and quantitative analyses are performed on the proposed 3D pointing method to verify its ability to deliver stable control.
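A minimal sketch of two of the steps above — fitting a plane to the palm points to obtain its normal, and smoothing the pointed position with a sigmoid-based dynamic weight — is given below; the weighting constants and the blending rule are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def palm_normal(points):
    """Fit a plane to 3D palm points (least squares) and return its unit normal.

    points : (N, 3) 3D points of the palm region, e.g., back-projected
             from the depth image.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The singular vector of least variance of the centered points is the normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def sigmoid_weight(delta, k=0.05, pivot=50.0):
    """Dynamic weight in (0, 1): small jitters get low weight (stable cursor),
    large intentional moves get weight near 1. k and pivot are illustrative."""
    return 1.0 / (1.0 + np.exp(-k * (np.linalg.norm(delta) - pivot)))

def update_cursor(prev_xy, new_xy):
    """Blend the new intersection point with the previous cursor position
    using the dynamic weight (a simple smoothing scheme)."""
    prev_xy, new_xy = np.asarray(prev_xy, float), np.asarray(new_xy, float)
    w = sigmoid_weight(new_xy - prev_xy)
    return (1.0 - w) * prev_xy + w * new_xy
```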

Development of 3-D Stereotactic Localization System and Radiation Measurement for Stereotactic Radiosurgery (방사선수술을 위한 3차원 정위 시스템 및 방사선량 측정 시스템 개발)

  • Suh, Tae-Suk;Suh, Doug-Young;Park, Sung-Hun;Jang, Hong-Seok;Choe, Bo-Young;Yoon, Sei-Chul;Shinn, Kyung-Sub;Bahk, Yong-Whee;Kim, Il-Hwan;Kang, Wee-Sang;Ha, Sung-Whan;Park, Charn-Il
    • Journal of Radiation Protection and Research / v.20 no.1 / pp.25-36 / 1995
  • The purpose of this research is to develop a stereotactic localization and radiation measurement system for efficient and precise radiosurgery. An algorithm to obtain the 3-D stereotactic coordinates of the target has been developed using Fisher CT or angio localization. The stereotactic localization procedure was programmed on a PC and consists of three steps: (1) transferring patient images into the PC; (2) marking the positions of the target and the reference points of the localizer on the patient image; (3) computing the stereotactic 3-D coordinates of the target from the position information of the localizer. The coordinate transformation was performed quickly in real time. The difference between the coordinates computed from the angio and CT localization methods was within 2 mm, which is generally acceptable and supports the reliability of the developed localization system. We measured dose distributions in small fields of an NEC 6 MVX linear accelerator using various detectors: ion chamber, film, and diode. The specific quantities measured include the output factor, percent depth dose (PDD), tissue maximum ratio (TMR), and off-axis ratio (OAR). There was a small variation in the measured data according to the kind of detector used, but the overall trends of the measured beam data were similar enough to rely on our measurements. Measurements were also performed with a hand-made spherical water phantom and film for a standard arc set-up, and the dose distribution was obtained as expected. In conclusion, a PC-based 3-D stereotactic localization system was developed to determine the stereotactic coordinates of the target, and a convenient technique for small-field measurement was demonstrated. These methods will be very helpful for stereotactic radiosurgery.
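Step (3) maps points marked in image coordinates into the stereotactic coordinate system using the localizer's reference points. One standard way to do this, shown as a hedged sketch below, is a point-based least-squares rigid registration (Kabsch); the paper's CT/angio localizer geometry may use a different, localizer-specific computation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping localizer reference
    points in image coordinates (src) to their known stereotactic-frame
    coordinates (dst); both are (N, 3) with N >= 3 correspondences.
    Returns R (3x3) and t (3,) so that dst ~= src @ R.T + t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def target_stereotactic_coords(target_img_xyz, R, t):
    """Map a marked target point from image coordinates into the
    stereotactic coordinate system."""
    return R @ np.asarray(target_img_xyz, float) + t
```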
