• Title/Summary/Keyword: three dimensional vision

Search Results: 220

Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots (자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링)

  • Kim, Min-Yeong;Jo, Hyeong-Seok;Kim, Jae-Hun
    • Journal of Institute of Control, Robotics and Systems / v.8 no.9 / pp.776-787 / 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a mobile welding robot that can navigate autonomously within the enclosure has been developed. To achieve the welding task in the closed space, the robotic welding system needs a sensor system for work-environment recognition and weld-seam tracking, together with a specially designed environment-recognition strategy. In this paper, a three-dimensional laser vision system based on optical triangulation is developed in order to provide the robot with a 3D map of the work environment. For this sensor system, a spatial filter based on neural-network technology is designed to extract the center of the laser stripe and is evaluated in various situations. An environment-modeling algorithm is proposed and tested, composed of a laser-scanning module for 3D voxel modeling and a plane-reconstruction module for mobile-robot localization. Finally, an environment-recognition strategy for the mobile welding robot is developed in order to recognize work environments efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the strategy and tactics for sensing the work environment are described and discussed in detail with a series of experiments.
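
The optical-triangulation principle behind such a laser vision sensor can be sketched in a few lines. The geometry below (camera at the origin, laser source offset along the x-axis, laser plane at a fixed angle) and all numeric values are illustrative assumptions, not the paper's calibration:

```python
import math

def triangulate_depth(u_px, f_px, baseline_m, laser_angle_rad):
    """Depth of a laser-stripe point by optical triangulation.

    u_px            horizontal image offset of the stripe point (pixels)
    f_px            camera focal length (pixels)
    baseline_m      camera-to-laser baseline (meters)
    laser_angle_rad laser-plane angle measured from the baseline
    """
    tan_cam = u_px / f_px                 # viewing-ray slope (tan of camera angle)
    tan_las = math.tan(laser_angle_rad)   # laser-ray slope
    # Intersect the camera ray x = z * tan_cam with the laser ray
    # z = (baseline - x) * tan_las to get the depth of the stripe point.
    return baseline_m * tan_las / (1.0 + tan_las * tan_cam)
```

With a 45-degree laser plane, a stripe point imaged on the optical axis (u = 0) lies at exactly the baseline distance, and points imaged further off-axis lie closer.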

A reliable quasi-dense corresponding points for structure from motion

  • Oh, Jangseok;Hong, Hyunggil;Cho, Yongjun;Yun, Haeyong;Seo, Kap-Ho;Kim, Hochul;Kim, Mingi;Lee, Onseok
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.9 / pp.3782-3796 / 2020
  • Three-dimensional (3D) reconstruction is an important research area in computer vision, and the ability to detect and match features across multiple views of a scene is a critical initial step. The tracking matrix W obtained from feature correspondences can be applied to structure-from-motion (SFM) algorithms for 3D modeling. We often fail to generate an acceptable number of features when processing face or medical images, because such images typically contain large homogeneous regions with minimal variation in intensity. In this study, we seek to locate sufficient matching points not only in general images but also in face and medical images, where it is difficult to determine the feature points. The algorithm is implemented with an adaptive threshold value on top of the scale-invariant feature transform (SIFT), affine SIFT, speeded-up robust features (SURF), and affine SURF. By applying the algorithm to face and general images and studying the geometric errors, we achieve quasi-dense matching points that satisfy well-functioning geometric constraints. We also demonstrate a 3D reconstruction with respectable performance by applying a column-space-fitting algorithm, which is an SFM algorithm.
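
The matching stage such pipelines build on is standard nearest-neighbor descriptor matching; a minimal numpy sketch using Lowe's ratio test (the fixed 0.75 threshold is a common default, not the paper's adaptive value):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.75):
    """Nearest-neighbor descriptor matching with Lowe's ratio test:
    keep a match only when the best candidate in desc2 is clearly
    closer than the second-best one."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every row of desc2
        j1, j2 = np.argsort(dists)[:2]              # nearest and second nearest
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```

Ambiguous descriptors (two near-equal candidates) are dropped, which trades match density for reliability before geometric verification.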

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.23-35 / 2002
  • Gaze detection locates the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system in which a single camera is set above a monitor and the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils, and lip corners) automatically in 2D camera images. From the feature points detected in the initial images, we compute the initial 3D positions of those features by camera calibration and a parameter-estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at one position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D positions of the features. In experiments, we obtain the gaze position on a 19-inch monitor, and the accuracy between the computed and real positions is about 2.01 inches of RMS error.
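
The final step, casting the face-plane normal onto the screen, can be sketched as follows. The monitor is assumed here to lie in the plane z = 0 of the camera frame, and the ray origin is an assumption (e.g. the facial center); both are illustrative, not the paper's exact setup:

```python
import numpy as np

def gaze_point_on_monitor(p1, p2, p3, origin):
    """Cast the normal of the plane through the 3D feature points
    p1, p2, p3 from `origin` and intersect it with the monitor
    plane z = 0.  All points are 3D numpy arrays in the camera frame."""
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    if n[2] == 0.0:
        return None                # gaze parallel to the monitor plane
    t = -origin[2] / n[2]          # solve (origin + t*n).z == 0
    return origin + t * n
```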

Recognizing a polyhedron by network constraint analysis

  • Ishikawa, Seiji;Kubota, Mayumi;Nishimura, Hiroshi;Kato, Kiyoshi
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1991.10b / pp.1591-1596 / 1991
  • The present paper describes a method of recognizing a polyhedron employing the notion of network constraint analysis. Typical difficulties in three-dimensional object recognition, other than shading, reflection, and hidden-line problems, include the case where the appearance of an object varies with the observation point and the case where the object to be recognized is occluded by other objects placed in front of it, resulting in incomplete information on the object's shape. These difficulties can, however, be solved to a large extent by taking account of certain local constraints defined on a polyhedral shape. The present paper assumes a model-based vision system employing an appearance-oriented model of a polyhedron, which is obtained by placing the polyhedron at the origin of a large sphere and observing it from various positions on the surface of the sphere. The model is represented by the sets of adjacent face pairs of the polyhedron observed from those positions. Since the shape of a projected face constrains the shape of its adjacent faces, a local constraint relation holds between these faces. Each projected face of an unknown polyhedron in an acquired image is examined for matches with the faces in the model, producing a network of constraint relations between faces in the image and faces in the model. These network constraint relations are analyzed taking the adjacency of faces into consideration. If the analysis finally yields a one-to-one match of the faces between the unknown polyhedron and a model, the unknown polyhedron is understood to be one of the memorized models placed in a certain posture. In the performed experiment, a polyhedron was observed from 320 regularly arranged points on a sphere to provide its appearance model, and a polyhedron that was arbitrarily postured, occluded, or subject to another difficulty was successfully recognized.
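
The network-constraint idea can be illustrated with a small arc-consistency-style pruning loop over candidate face matches. The data structures below are a simplification of the paper's adjacent-face-pair model, with face labels invented for the example:

```python
def prune_candidates(cands, img_adj, model_adj):
    """cands:     dict image_face -> set of candidate model faces
    img_adj:   dict image_face -> set of adjacent image faces
    model_adj: set of frozenset face pairs adjacent in the model
    Repeatedly discard a candidate m for image face f if some image
    face adjacent to f has no candidate adjacent to m in the model."""
    changed = True
    while changed:
        changed = False
        for f, neighbors in img_adj.items():
            for m in list(cands[f]):
                consistent = all(
                    any(frozenset((m, m2)) in model_adj for m2 in cands[g])
                    for g in neighbors)
                if not consistent:
                    cands[f].discard(m)
                    changed = True
    return cands
```

If every image face ends with exactly one surviving candidate, a one-to-one face match (and hence the model and posture) has been found.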
Estimating Surface Orientation Using Statistical Model From Texture Gradient in Monocular Vision (단안의 무늬 그래디언트로 부터 통계학적 모델을 이용한 면 방향 추정)

  • Chung, Sung-Chil;Choi, Yeon-Sung;Choi, Jong-Soo
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.7 / pp.157-165 / 1989
  • To recover three-dimensional information in shape from texture, the distorting effects of projection must be distinguished from properties of the texture on which the distortion acts. In this paper, we show an approximate maximum-likelihood estimation method in which we find the orientation of the visible surface (hemisphere) on the Gaussian sphere using local analysis of the texture. In addition, assuming orthographic projection as the image-formation system and a circle as the texel (texture element), we derive the surface orientation from the distribution of variations, by means of the orthographic projection of tangent directions that occur uniformly along the arc length of a circle. We present the orientation parameters of the textured surface as slant and tilt in gradient space, and also the surface normals of the resulting surface orientation as a needle map. The algorithm is applied to a geographic contour (an artificially generated Cheju-do) and synthetic texture.
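
Surface orientation in gradient space maps directly to the slant and tilt parameters the abstract reports; a small sketch of that standard conversion (shape-from-texture conventions in general, not the paper's estimator itself):

```python
import math

def slant_tilt(p, q):
    """Convert a surface gradient (p, q) in gradient space to the
    slant/tilt parameterization of surface orientation: slant is the
    angle between the surface normal and the viewing direction, tilt
    is the image-plane direction of steepest depth change."""
    slant = math.atan(math.hypot(p, q))
    tilt = math.atan2(q, p)
    return slant, tilt
```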
Distinction of Real Face and Photo using Stereo Vision (스테레오비전을 이용한 실물 얼굴과 사진의 구분)

  • Shin, Jin-Seob;Kim, Hyun-Jung;Won, Il-Yong
    • Journal of the Korea Society of Computer and Information / v.19 no.7 / pp.17-25 / 2014
  • For devices that leave video records, it is an important issue, when securing an identifying image, to distinguish whether the input image shows a real object or a photo. A simple approach that uses a single image and a sensor to distinguish the target by distance measurement has many weaknesses. This paper therefore proposes a way to distinguish a photo from a real object by using stereo images. The method not only measures the distance to the target but also checks for a three-dimensional effect by building a depth map of the face area. Pictures of both photos and real faces are taken, and the measured depth-map values are fed to a learning algorithm, which looks for patterns that distinguish real faces from photos through iterative learning. The usefulness of the proposed algorithm was verified experimentally.
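
The core discrimination signal, depth variation across the face region, can be sketched as a threshold classifier. The disparity-to-depth constants and the threshold below are illustrative assumptions, and a fixed threshold stands in for the paper's learned decision rule:

```python
import numpy as np

def is_real_face(disparity_map, f_px=800.0, baseline_m=0.06,
                 depth_std_threshold_m=0.005):
    """Classify a face region as real or a flat photo from a stereo
    disparity map: a printed photo reconstructs to a nearly constant
    depth, while a real face shows centimeter-scale relief."""
    d = np.asarray(disparity_map, dtype=float)
    depth = f_px * baseline_m / d[d > 0]   # stereo depth: Z = f*B/d
    return bool(np.std(depth) > depth_std_threshold_m)
```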

3D Pose Estimation of a Circular Feature With a Coplanar Point (공면 점을 포함한 원형 특징의 3차원 자세 및 위치 추정)

  • Kim, Heon-Hui;Park, Kwang-Hyun;Ha, Yun-Su
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.13-24 / 2011
  • This paper deals with the 3D-pose (orientation and position) estimation problem for a circular object in 3D space. Circular features can be found on many objects in the real world and provide crucial cues in vision-based object recognition and localization. In general, because a circular feature in 3D space is perspectively projected when imaged by a camera, it is difficult to fully recover the three-dimensional orientation and position parameters from the projected curve alone. This paper therefore proposes a 3D pose estimation method for a circular feature that uses a coplanar point. We first interpret a circular feature with a coplanar point in both the projective space and 3D space. A procedure for estimating the 3D orientation/position parameters is then described. The proposed method is verified with a numerical example and evaluated by a series of experiments analyzing its accuracy and sensitivity.
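
The difficulty noted in the abstract stems from the fact that a 3D circle images to a general conic under perspective projection, so pose parameters are entangled in the conic coefficients. A small synthetic check of that fact (the camera intrinsics and circle pose used in the test are illustrative assumptions, not the paper's estimation procedure):

```python
import numpy as np

def project_points(points_3d, K):
    """Perspective-project 3D points (N,3) with intrinsic matrix K (3,3)."""
    uvw = (K @ points_3d.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def fit_conic(pts):
    """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to 2D points:
    the coefficient vector is the null-space vector of the design
    matrix, taken from the SVD."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```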

A Parallel Mode Confocal System using a Micro-Lens and Pinhole Array in a Dual Microscope Configuration (이중 현미경 구조를 이용한 마이크로 렌즈 및 핀홀 어레이 기반 병렬 공초점 시스템)

  • Bae, Sang Woo;Kim, Min Young;Ko, Kuk Won;Koh, Kyung Chul
    • Journal of Institute of Control, Robotics and Systems / v.19 no.11 / pp.979-983 / 2013
  • The three-dimensional measurement method of confocal systems is a spot-scanning method with high resolution and good illumination efficiency. However, conventional confocal systems have the weak point that they must perform XY-axis scanning to cover the field of view (FOV) by spot scanning. There are some methods to mitigate this problem, involving the use of a galvano mirror [1], a pin-hole array, etc. In this paper, we therefore propose an improved parallel-mode confocal system using a micro-lens and pin-hole array in a dual microscope configuration. We make area scanning possible by using a combination of an MLA (Micro Lens Array) and a pin-hole array, and use an objective lens to improve light transmittance and signal-to-noise ratio. Additionally, the objective lens is exchangeable, so a lens can be selected considering the reflection characteristics of the measured object and the proper magnification. Experiments were performed with 5X and 2.3X objective lenses, and height calibration was done using a VLSI calibration target.

A Study on the Evaluation of Driver's Collision Avoidance Maneuver based on GMDH (GMDH를 이용한 운전자의 충돌 회피 행동 평가에 관한 연구)

  • Lee, Jong-Hyeon;Oh, Ji-Yong;Kim, Gu-Yong;Kim, Jong-Hae
    • Journal of IKEEE / v.22 no.3 / pp.866-869 / 2018
  • This paper presents an analysis of human driving behavior, expressed with a GMDH (Group Method of Data Handling) technique and focusing on the driver's collision-avoidance maneuver. The driving data are collected using a three-dimensional driving simulator based on CAVE, which provides stereoscopic immersive vision. A GMDH is introduced and applied to the measured data in order to build a mathematical model of driving behavior. From the obtained model, it is found that the longitudinal distance between cars ($x_1$), the longitudinal relative velocity ($x_2$), and the lateral displacement between cars ($x_4$) play important roles in the collision-avoidance maneuver under 3D environments.
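
A GMDH model is grown from two-input quadratic "partial descriptors" selected layer by layer; a minimal sketch of fitting one such descriptor by least squares (the data and variable names in the test are illustrative, not the simulator measurements):

```python
import numpy as np

def fit_partial_descriptor(x1, x2, y):
    """Least-squares fit of the standard two-input quadratic GMDH
    partial descriptor:
    y ~ a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict_partial(coeffs, x1, x2):
    """Evaluate a fitted partial descriptor on new inputs."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    return A @ coeffs
```

In a full GMDH, many such descriptors are fitted over all input pairs, scored on validation data, and the best ones become the inputs of the next layer.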

Prevention of Complication and Management of Unfavorable Results in Reduction Malarplasty (광대뼈 축소성형술 시 합병증의 예방과 불만족스러운 결과에 대한 해결방안)

  • Yang, Jung Hak;Lee, Ji Hyuck;Yang, Doo Byung;Chung, Jae Young
    • Archives of Plastic Surgery / v.35 no.4 / pp.465-470 / 2008
  • Purpose: Reduction malarplasty is a popular aesthetic surgery for contouring a wide and prominent zygoma. However, a few patients complain about the postoperative result and want the midfacial contour revised. We analyzed the etiology of unfavorable results and treated unsatisfactory midfacial contours after reduction malarplasty. Methods: A total of 53 patients underwent a secondary operation to correct unfavorable results after primary reduction malarplasty performed elsewhere. The midfacial contour was evaluated with plain films and three-dimensional computed tomography. Unfavorable midfacial contours were corrected by secondary malarplasty. Flaring of the zygomatic arch was reduced with an infracturing technique, and a prominent zygomatic body was reduced with shaving. A drooped or displaced zygoma complex was suspended to a higher position and fixed with interosseous wiring. As an adjuvant procedure, autologous fat injection was performed in the depressed region of the zygomatic body. Results: The etiology of unfavorable midfacial contour after reduction malarplasty was classified into 7 categories: undercorrection of the zygomatic arch (n=8), undercorrection of the zygomatic arch and undercorrection of the zygomatic body (n=6), undercorrection of the zygomatic arch and overcorrection of the zygomatic body (n=28), overcorrection of the zygomatic body (n=3), simple asymmetry (n=4), malunion (n=2), or nonunion (n=2). A slim and balanced malar contour was achieved with treatment, and most patients were satisfied with the results of the surgery. Conclusion: To prevent unfavorable results after reduction malarplasty, complete analysis of the facial contour, choice of an appropriate operative technique, precise osteotomy under direct vision, and secure fixation of the zygoma position are important.