• Title/Summary/Keyword: 3-D position


Localization Using 3D-Lidar Based Road Reflectivity Map and IPM Image (3D-Lidar 기반 도로 반사도 지도와 IPM 영상을 이용한 위치추정)

  • Jung, Tae-Ki;Song, Jong-Hwa;Im, Jun-Hyuck;Lee, Byung-Hyun;Jee, Gyu-In
    • Journal of Institute of Control, Robotics and Systems, v.22 no.12, pp.1061-1067, 2016
  • Accurate vehicle position is essential for autonomous navigation. However, GPS position errors arise in downtown areas due to multipath caused by tall buildings. In this paper, the GPS position error is corrected using a camera sensor and a highly accurate map built with a 3D-Lidar. The input image is converted into a top-view image through inverse perspective mapping (IPM) and map-matched against a map containing 3D-Lidar intensity (reflectivity) values. Performance was compared with the traditional method, which converts the map into a pinhole-camera image before matching it with the input image. As a result, the longitudinal error declined by 49% and the computational complexity by 90%.
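
For context, the IPM step described above is, in standard practice, a homography warp of the camera image onto the ground plane. The following is a minimal Python/OpenCV sketch of such a top-view warp; the point correspondences, file name, and output size are hypothetical, not taken from the paper.

```python
# Minimal sketch of inverse perspective mapping (IPM): warp a forward-facing
# camera image to a top-view (bird's-eye) image via a ground-plane homography.
# The correspondences below are made up; in practice they come from the
# camera's calibration.
import cv2
import numpy as np

def ipm_top_view(image, src_pts, dst_pts, out_size):
    """Warp `image` to a top view using the homography defined by four
    ground-plane correspondences (src: image pixels, dst: map pixels)."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(image, H, out_size)

img = cv2.imread("front_camera.png")                    # hypothetical frame
src = [(220, 300), (420, 300), (640, 480), (0, 480)]    # road trapezoid (px)
dst = [(0, 0), (400, 0), (400, 600), (0, 600)]          # map rectangle (px)
top_view = ipm_top_view(img, src, dst, (400, 600))
```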

3-D Positioning Using Stereo Vision and Guide-Mark Pattern For A Quadruped Walking Robot (스테레오 시각 정보를 이용한 4각보행 로보트의 3차원 위치 및 자세 검출)

  • ;;;Zeungnam Bien
    • Journal of the Korean Institute of Telematics and Electronics, v.27 no.8, pp.1188-1200, 1990
  • In this paper, the 3-D positioning problem for a quadruped walking robot is investigated. In order to determine the robot's exterior position and orientation in a world coordinate system, a stereo 3-D positioning algorithm is proposed. The proposed algorithm uses a Guide-Mark Pattern (GMP) specially designed for fast and reliable extraction of 3-D robot position information from the uncontrolled working environment. Some experimental results, along with an error analysis and several means of reducing the effects of vision processing error in the proposed algorithm, are discussed.
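
As background for the stereo approach, the 3-D position of an image feature follows from rectified-stereo triangulation. Below is a minimal sketch under standard pinhole-camera assumptions; the focal length, baseline, and pixel values are illustrative and this is not the paper's GMP-based method.

```python
# Minimal sketch of the stereo depth relation underlying 3-D positioning:
# for a rectified stereo pair with focal length f (px), baseline B (m), and
# disparity d = u_left - u_right (px), depth is Z = f * B / d.
# All numbers below are illustrative, not from the paper.
def triangulate(u, v, d, f=700.0, B=0.12, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with disparity d into camera coordinates."""
    Z = f * B / d                 # depth along the optical axis (m)
    X = (u - cx) * Z / f          # lateral offset (m)
    Y = (v - cy) * Z / f          # vertical offset (m)
    return X, Y, Z

print(triangulate(400.0, 260.0, 21.0))   # -> roughly (0.46, 0.11, 4.0) m
```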


Three-dimensional evaluation of the association between tongue position and upper airway morphology in adults: A cross-sectional study

  • Yuchen Zheng;Hussein Aljawad;Min-Seok Kim;Su-Hoon Choi;Min-Soo Kim;Min-Hee Oh;Jin-Hyoung Cho
    • The Korean Journal of Orthodontics, v.53 no.5, pp.317-327, 2023
  • Objective: This study aimed to evaluate the association between low tongue position (LTP) and the volume and dimensions of the nasopharyngeal, retropalatal, retroglossal, and hypopharyngeal segments of the upper airway. Methods: A total of 194 subjects, including 91 males and 103 females, were divided into a resting tongue position (RTP) group and an LTP group according to their tongue position. Subjects in the LTP group were divided into four subgroups (Q1, Q2, Q3, and Q4) according to intraoral space volume. The 3D Slicer software was used to measure the volume and the minimum and average cross-sectional areas for each group. Airway differences between the RTP and LTP groups were analyzed to explore the association between tongue position and the upper airway. Results: No significant differences were found in airway dimensions between the RTP and LTP groups. For both the retropalatal and retroglossal segments, the volume and average cross-sectional area were significantly greater in patients with an extremely low tongue position. Regression analysis showed that the retroglossal airway dimensions were positively correlated with intraoral space volume and negatively correlated with the A point-nasion-B point angle and the palatal plane to mandibular plane angle. Males generally had larger retroglossal and hypopharyngeal airways than females. Conclusions: Tongue position did not significantly influence upper airway volume or dimensions, except in the extremely LTP subgroup.

3D View Controlling by Using Eye Gaze Tracking in First Person Shooting Game (1 인칭 슈팅 게임에서 눈동자 시선 추적에 의한 3차원 화면 조정)

  • Lee, Eui-Chul;Cho, Yong-Joo;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society, v.8 no.10, pp.1293-1305, 2005
  • In this paper, we propose a method for controlling the gaze direction of a 3D FPS game character through eye gaze detection in successive images captured by a USB camera attached beneath an HMD. The proposed method is composed of three parts. In the first part, we detect the user's pupil center with a real-time image processing algorithm applied to the successive input images. In the second part, calibration, the geometric relationship is determined between positions gazed at on the monitor and the corresponding detected eye positions. In the last part, the final gaze position on the HMD monitor is tracked, and the 3D view in the game is controlled by that gaze position based on the calibration information. Experimental results show that our method can be used by handicapped game players who cannot use their hands. It can also increase interest and immersion by synchronizing the gaze direction of the game player with that of the game character.
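
The calibration part described above can be approximated, for illustration, by fitting a linear map from detected pupil centers to monitor coordinates over a few known gaze targets. The sketch below uses a generic least-squares affine fit; the calibration points and screen resolution are assumptions, not the paper's exact geometric model.

```python
# Minimal sketch of a gaze calibration step: fit an affine map from detected
# pupil centers to monitor coordinates, then apply it to new pupil positions.
import numpy as np

def fit_affine(pupil_xy, screen_xy):
    """Solve screen = [px, py, 1] @ A in the least-squares sense (A is 3x2)."""
    P = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])   # N x 3
    A, *_ = np.linalg.lstsq(P, screen_xy, rcond=None)
    return A

def gaze_point(A, pupil):
    px, py = pupil
    return np.array([px, py, 1.0]) @ A

# Hypothetical 4-target calibration (pupil px -> screen px):
pupil = np.array([[310, 240], [350, 242], [312, 270], [352, 268]], float)
screen = np.array([[0, 0], [1280, 0], [0, 1024], [1280, 1024]], float)
A = fit_affine(pupil, screen)
print(gaze_point(A, (330, 255)))   # -> approximately the screen center
```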


A Design and Implementation of Worker Motion 3D Visualization Module Based on Human Sensor

  • Sejong Lee
    • Journal of the Korea Society of Computer and Information, v.29 no.9, pp.109-114, 2024
  • In this paper, we design and implement a worker motion 3D visualization module based on human sensors. The three key modules that make up this system are Human Sensor Implementation, Data Set Creation, and Visualization. Human Sensor Implementation provides functions for setting and installing the human sensor locations and for collecting worker motion data through the human sensors. Data Set Creation provides functions for converting and storing motion data, creating near-real-time worker motion data sets, and processing and managing sensor and motion data sets. Visualization provides functions for visualizing the worker's 3D model, evaluating motions, calculating loads, and managing large-scale data. In worker 3D model visualization, the motion data sets (Skeleton & Position) are synchronized and mapped onto the worker's 3D model, and the model's motion animation is visualized together with the analysis results. The human-sensor-based worker motion 3D visualization module designed and implemented in this paper can be widely used as a foundational technology in the smart factory field.
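
As an illustration of the Data Set Creation step, the sketch below pairs skeleton frames with position samples by nearest timestamp before they would be mapped onto the 3D model. The field names and matching tolerance are assumptions for illustration, not the module's actual interface.

```python
# Minimal sketch: synchronize skeleton and position streams by timestamp.
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class SkeletonFrame:
    t: float        # timestamp (s)
    joints: dict    # joint name -> (x, y, z)

@dataclass
class PositionSample:
    t: float        # timestamp (s)
    xyz: tuple      # worker position in the workspace

def synchronize(skel, pos, tol=0.05):
    """Match each skeleton frame to the nearest position sample in time,
    dropping frames with no sample within `tol` seconds.
    `pos` must be sorted by timestamp."""
    times = [p.t for p in pos]
    pairs = []
    for f in skel:
        i = bisect_left(times, f.t)
        cands = [j for j in (i - 1, i) if 0 <= j < len(pos)]
        if not cands:
            continue
        j = min(cands, key=lambda j: abs(times[j] - f.t))
        if abs(times[j] - f.t) <= tol:
            pairs.append((f, pos[j]))
    return pairs
```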

A Study of the Effects of Vowels on the Pronunciation of English Sibilants (영어 치찰음 발음에 미치는 모음의 영향 연구)

  • Koo, Hee-San
    • Speech Sciences, v.15 no.3, pp.31-38, 2008
  • The aim of this study was to find how English vowels affect the pronunciation of the English sibilants /dʒ, ʒ, z/ by Korean learners of English. Fifteen nonsense syllables constructed from the five vowels /a, e, i, o, u/ were each pronounced six times by twelve Korean learners of English. Test scores were taken from the scoreboard of a speech-training software program designed for English pronunciation practice and improvement. Results show that 1) the subjects had the lowest scores in the /a_a/ position, and 2) the subjects had lower scores in the /i_i/ position than in the /e_e/, /o_o/, and /u_u/ positions when pronouncing /dʒ/, /ʒ/, and /z/ in the respective inter-vocalic positions. This study found that, for the group studied, Korean learners of English have more difficulty pronouncing sibilants in the /a_a/ and /i_i/ positions than in the other positions.


Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.2, pp.79-88, 2004
  • Gaze detection uses computer vision to locate the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application, such as man-machine interfaces that help the handicapped use computers and view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system built around a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. Finally, the gaze position given by the facial movement is computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position from the eye's movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor with an RMS error of about 4.8 cm between the computed and the real positions.
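
For the facial-movement step described above, the gaze direction reduces to the normal vector of the plane through the tracked 3D feature points. A minimal sketch with illustrative coordinates (not the paper's data) is below.

```python
# Minimal sketch: facial gaze direction as the unit normal of the plane
# spanned by three tracked, non-collinear 3D feature points.
import numpy as np

def face_normal(p1, p2, p3):
    """Unit normal of the plane through three non-collinear points."""
    n = np.cross(np.subtract(p2, p1), np.subtract(p3, p1))
    return n / np.linalg.norm(n)

# Hypothetical eye-corner / nose-tip positions in camera coordinates (cm):
print(face_normal((-3.0, 0.0, 60.0), (3.0, 0.0, 60.0), (0.0, -4.0, 58.0)))
# -> approximately [0.0, 0.447, -0.894]
```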

Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li;Lee, Eung-Joo;Ok, Soo-Yol;Bae, Sung-Ho;Lee, Suk-Hwan;Choo, Young-Yeol;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society, v.10 no.12, pp.1566-1576, 2007
  • Camera pose information recovered from a 2D face image is very important for synchronizing a virtual 3D face model with the real face, and also for many other uses such as human-computer interfaces, 3D object estimation, and automatic camera control. In this paper, we present a camera pose determination algorithm that works from a single 2D face image, using the relationship between mouth position information and face region boundary information. Our algorithm first corrects color bias with a lighting compensation algorithm. The image is then nonlinearly transformed into the YCbCr color space, and the visible chrominance features of the face in this color space are used to detect the face region. For each face candidate, the nearly inverse relationship between the Cb and Cr clusters of the face features is used to detect the mouth position. The geometric relationship between the mouth position and the face region boundary then determines the camera's rotation angles about the x-axis and y-axis, and the relationship between face region size and camera-face distance determines the camera-face distance. Experimental results demonstrate the validity of our algorithm, and the correct determination rate is high enough to apply it in practice.
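
The face-region detection step above relies on skin chrominance clustering in the YCbCr space. The sketch below uses a common heuristic threshold range in OpenCV's YCrCb representation; the bounds and input file are assumptions, not the paper's learned cluster model.

```python
# Minimal sketch: skin-like region mask via chrominance thresholding.
# Note OpenCV orders the channels Y, Cr, Cb; the Cr/Cb bounds below are
# widely used heuristic values, not the paper's model.
import cv2
import numpy as np

def skin_mask(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], np.uint8)     # (Y, Cr, Cb) lower bound
    upper = np.array([255, 173, 127], np.uint8)  # (Y, Cr, Cb) upper bound
    return cv2.inRange(ycrcb, lower, upper)      # 255 where skin-like

mask = skin_mask(cv2.imread("face.jpg"))         # hypothetical input image
```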


The Chemical Structure of Acertannin. (Acertannin의 화학구조)

  • 우린근
    • YAKHAK HOEJI, v.6 no.1, pp.11-16, 1962
  • The position of the galloyl groups in acertannin, as 3,6-digalloylpolygalitol, has been established by well-defined processes. In the course of these processes, eight new compounds, including octamethoxyacertannin, 2,4-dimethoxy-1,5-anhydro-D-sorbitol, 6-tosylpolygalitol, 2,3,4-tribenzoyl-6-tosylpolygalitol, 3,6-anhydro-1,5-anhydro-D-sorbitol, and 2,4-ditosyl-3,6-anhydro-1,5-anhydro-D-sorbitol, have been characterized.
