• Title/Summary/Keyword: Perspective Plane Image


Noncontact 3-dimensional measurement using He-Ne laser and CCD camera (He-Ne 레이저와 CCD 카메라를 이용한 비접촉 3차원 측정)

  • Kim, Bong-chae;Jeon, Byung-cheol;Kim, Jae-do
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.21 no.11
    • /
    • pp.1862-1870
    • /
    • 1997
  • A fast and precise technique to measure the 3-dimensional coordinates of an object is proposed. 3-dimensional measurement of an object is essential in design and inspection. Using the developed system, a surface model of a complex shape can be constructed. 3-dimensional world coordinates are projected onto the camera plane by the perspective transformation, which plays an important role in this measurement system. Two measuring methods are proposed according to the shape of the object: one rotates the object, and the other translates the measuring unit. The measuring speed, which depends on the image processing time, is 200 points per second. Measurement resolution is examined with respect to two parameters: the angle between the laser beam plane and the camera, and the distance between the camera and the object. These experiments show that the measurement resolution ranges from 0.3 mm to 1.0 mm. The constructed surface model can be used with manufacturing tools such as rapid prototyping machines.
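The perspective transformation this abstract relies on is the standard pinhole projection of world coordinates onto the camera plane. A minimal sketch follows; the intrinsic matrix, pose, and point values are illustrative stand-ins, not the parameters of the paper's He-Ne laser system:

```python
import numpy as np

def project_points(points_w, K, R, t):
    """Project Nx3 world points onto the image plane via the
    perspective transformation x ~ K [R | t] X."""
    X = np.asarray(points_w, dtype=float)   # (N, 3) world coordinates
    cam = X @ R.T + t                       # world -> camera frame
    uv = cam @ K.T                          # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]           # perspective divide

# Toy setup (assumed values): focal length 800 px, principal point
# (320, 240), camera at the world origin looking down +Z.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)

pts = np.array([[0.0, 0.0, 2.0],    # on the optical axis
                [0.1, -0.05, 2.0]])
print(project_points(pts, K, R, t))
# The on-axis point lands exactly on the principal point (320, 240).
```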

Implementation of Transformation Algorithm for a Leg-wheel Hexapod Robot Using Stereo Vision (스테레오 영상처리를 이용한 바퀴달린 6족 로봇의 형태변형 알고리즘 구현)

  • Lee, Sang-Hun;Kim, Jin-Geol
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.202-204
    • /
    • 2006
  • In this paper, a detection scheme for spatial coordinates based on a stereo camera is proposed for the transformation algorithm of a leg-wheel hexapod robot. The robot is designed to combine the fast mobility of wheel driving on flat terrain with the adaptability of legged walking on uneven terrain. In the proposed system, depth information is detected using the disparity data obtained from the left and right images captured by the stereo camera system and the perspective transformation between a 3-D scene and an image plane. Using the acquired environmental data and the transformation algorithm, the robot decides between wheel driving and leg walking, and can calculate the width of a passage and adjust its own width accordingly.
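Recovering depth from the disparity of a rectified stereo pair follows the standard relation Z = fB/d. A minimal sketch, where the focal length and baseline are assumed illustrative values rather than the robot's actual calibration:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth map from a disparity map for a rectified stereo pair:
    Z = f * B / d, with f in pixels and baseline B in metres."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0                    # zero disparity -> point at infinity
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Assumed calibration: f = 400 px, baseline = 10 cm.
disp = np.array([[40.0, 20.0],
                 [0.0, 10.0]])
print(depth_from_disparity(disp, focal_px=400.0, baseline_m=0.1))
# Top-left cell: 400 * 0.1 / 40 = 1.0 m.
```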


Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • v.33 no.5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is a burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how the coordinates of the 3D real world are projected onto the 2D plane, where the intrinsic parameters are internal factors of the camera and the extrinsic parameters are external factors such as the position and rotation of the camera. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, is essential for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed before estimation. As a solution to this challenge, this study introduces a novel framework that enables camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with certain specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera's coordinates through the perspective-n-point (PnP) method, which computes the extrinsic parameters by matching 3D and 2D coordinate pairs. This framework presents a substantial advancement, as it streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV technology application and data collection at construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
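The pose-from-3D-2D-pairs step can be illustrated with a basic direct linear transform (DLT): estimate the 3x4 projection matrix from at least six correspondences, then read the camera centre off its null space. This is a generic sketch with synthetic values, not the authors' keypoint pipeline (production code would typically use a PnP solver such as OpenCV's):

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate the 3x4 projection matrix P (x ~ P X) from n >= 6
    3D-2D correspondences by the direct linear transform."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        A.append([0, 0, 0, 0] + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)     # right null vector, up to scale

def camera_center(P):
    """The camera centre C is the right null vector of P (P C = 0)."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]

# Synthetic check (assumed camera): project six non-coplanar points
# with a ground-truth camera, then recover its centre.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])   # centre at (0, 0, -2)
P_true = K @ np.hstack([R, t[:, None]])
Xs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0], [0.0, 1.0, 2.0],
               [1.0, 1.0, 3.0], [-1.0, 0.0, 2.0], [0.0, -1.0, 3.0]])
xh = np.hstack([Xs, np.ones((6, 1))]) @ P_true.T
xs = xh[:, :2] / xh[:, 2:3]
P_est = dlt_projection_matrix(Xs, xs)
print(camera_center(P_est))    # close to (0, 0, -2)
```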

O-ring Size Measurement Based on a Small Machine Vision Inspection Equipment (소형 머신 비전 검사 장비에 기반한 O링 치수 측정)

  • Jung, YouSoo;Park, Kil-Houm
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.19 no.4
    • /
    • pp.41-52
    • /
    • 2014
  • In this paper, an O-ring size measurement algorithm based on small machine vision inspection equipment, which can replace expensive and large machine vision inspection equipment, is presented. The small machine vision inspection equipment acquires an image from a CCD camera shooting a measurement plane located on a backlight, and the proposed size measurement algorithm is applied to the image. To improve the size measurement accuracy, camera lens distortion correction and perspective distortion correction are performed in software. Considering the O-ring's shape, an ellipse fitting model is applied, and to increase the reliability of the ellipse fitting, the RANSAC algorithm is applied.
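The RANSAC ellipse fitting mentioned above can be sketched as an algebraic conic fit inside a RANSAC loop: fit a conic to random minimal samples of five points and keep the model with the most inliers. The sample size, threshold, and synthetic data below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def fit_conic(pts):
    """Least-squares conic a x^2 + b xy + c y^2 + d x + e y + f = 0
    through the given points (algebraic fit via SVD null vector)."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

def ransac_conic(pts, n_iter=200, thresh=1e-3, rng=None):
    """RANSAC wrapper: fit conics to random 5-point minimal samples and
    keep the model with the most inliers (algebraic-distance criterion)."""
    rng = np.random.default_rng(rng)
    D_all = np.column_stack([pts[:, 0] ** 2, pts[:, 0] * pts[:, 1],
                             pts[:, 1] ** 2, pts[:, 0], pts[:, 1],
                             np.ones(len(pts))])
    best, best_inliers = None, 0
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        theta = fit_conic(sample)
        inliers = np.count_nonzero(np.abs(D_all @ theta) < thresh)
        if inliers > best_inliers:
            best, best_inliers = theta, inliers
    return best, best_inliers

# Synthetic demo (illustrative, not the paper's data): 40 exact points
# on the ellipse x^2/4 + y^2 = 1 plus 10 off-curve outliers.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
ellipse = np.column_stack([2 * np.cos(t), np.sin(t)])
outliers = np.random.default_rng(0).uniform(2.5, 4.0, size=(10, 2))
theta, n_inliers = ransac_conic(np.vstack([ellipse, outliers]),
                                thresh=1e-6, rng=1)
print(n_inliers)   # the 40 ellipse points survive as inliers
```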

Volume measurement of limb edema using three dimensional registration method of depth images based on plane detection (깊이 영상의 평면 검출 기반 3차원 정합 기법을 이용한 상지 부종의 부피 측정 기술)

  • Lee, Wonhee;Kim, Kwang Gi;Chung, Seung Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.7
    • /
    • pp.818-828
    • /
    • 2014
  • After the emergence of Microsoft Kinect, interest in three-dimensional (3D) depth images increased significantly. The depth image data of an object can be converted to 3D coordinates by simple arithmetic and then reconstructed as a 3D model on a computer. However, because surface coordinates can be acquired only from the front area facing the Kinect, a total solid with a closed surface cannot be reconstructed. In this paper, a 3D registration method for multiple Kinects is suggested, in which surface information from each Kinect is collected simultaneously and registered in real time to build a 3D total solid. To unify the relative coordinate systems used by the individual Kinects, a 3D perspective transform is adopted. To detect the control points needed to generate the transformation matrices, a 3D randomized Hough transform is used. Once the transformation matrices are generated, real-time 3D reconstruction of various objects is possible. To verify the usefulness of the suggested method, human arms were reconstructed in 3D and their volumes measured using four Kinects. This volume measuring system was developed to monitor the level of lymphedema in patients after cancer treatment, and the measurement difference from medical CT was lower than 5%, the expected CT reconstruction error.
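Registering the sensors requires estimating a transformation between their coordinate systems from matched control points. As a simplified stand-in for the paper's 3D perspective transform, a rigid-body (Kabsch) alignment can be sketched as follows; the point set and pose below are synthetic:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch/Procrustes: least-squares rotation R and translation t
    with dst ~= src @ R.T + t, from matched control points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Illustrative check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
c, s = np.cos(0.5), np.sin(0.5)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
R_est, t_est = rigid_transform(src, src @ R_true.T + t_true)
print(np.round(t_est, 6))   # recovers (1, 2, 3)
```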

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.383-388
    • /
    • 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: correspondences between feature points detected in the images are established, and depth is estimated from the motion of those feature points. Approaches using motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in feature point detection. This paper presents a novel approach to defocus-based depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. We first discuss the optical properties of a camera system, because image blur varies with the camera parameter settings. The camera system is modeled by integrating a thin-lens camera model, which accounts for the light and optical properties, with a perspective projection camera model, which accounts for depth from lens translation. Depth from lens translation is then formulated using feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments on sequences of real and synthetic images compare the presented method with depth from lens translation, and the results demonstrate the validity and applicability of the proposed method to depth estimation.
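The SVD factorization step can be illustrated with the classic batch Tomasi-Kanade rank-3 factorization of the measurement matrix; the paper's sequential variant updates this incrementally, and the affine ambiguity is left unresolved in this sketch:

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of the centred 2F x N measurement matrix W
    into motion M (2F x 3) and shape S (3 x N), Tomasi-Kanade style.
    The affine ambiguity (M A, A^-1 S) is not resolved here."""
    W_c = W - W.mean(axis=1, keepdims=True)   # remove per-row translation
    U, s, Vt = np.linalg.svd(W_c, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])             # motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]      # shape factor
    return M, S

# Synthetic affine measurements (illustrative): 5 frames of 20 points,
# each frame a random 2x3 affine camera plus translation.
rng = np.random.default_rng(1)
S_true = rng.normal(size=(3, 20))
W = np.vstack([rng.normal(size=(2, 3)) @ S_true + rng.normal(size=(2, 1))
               for _ in range(5)])
M, S = factorize(W)   # M @ S reproduces the centred measurements
```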


Practical Reading of Gilles Deleuze on Frame from Filmmaking Perspective (들뢰즈의 프레임: 영화제작 관점에서 읽기)

  • Kim, Jung-Ho;Kim, Jae Sung
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.11
    • /
    • pp.527-548
    • /
    • 2019
  • For Deleuze, the frame is a closed system with numerous subsets of information. The frame can be defined by mathematics and physics: it is a geometric system of equilibrium and harmony with variables or coordinates. As in painting, linear perspective represents three-dimensional depth on a two-dimensional plane through vanishing points and horizontal lines in the frame. Linear perspective makes it possible to assume an infinity toward the vanishing point and an infinity toward the outside of the frame, opposite the vanishing point. Not only the figures and lines on the drawing paper, but also the space between those figures and lines, came to be recognized: that space is the third dimension. Through the centripetal and centrifugal forces of the frame, the frame follows the physical rules of force and movement. Deframing works against the dominant linear perspective and the central tendency of the frame. Film contains four-dimensional time while reproducing three-dimensional space in two dimensions. The outside of the frame, or outside the field of view, may contain thought, the fifth dimension.

Position Control of Mobile Robot for Human-Following in Intelligent Space with Distributed Sensors

  • Jin Tae-Seok;Lee Jang-Myung;Hashimoto Hideki
    • International Journal of Control, Automation, and Systems
    • /
    • v.4 no.2
    • /
    • pp.204-216
    • /
    • 2006
  • Recent advances in hardware technology and the state of the art in mobile robot and artificial intelligence research can be employed to develop autonomous, distributed monitoring systems. A mobile service robot requires the perception of its present position to coexist with humans and support them effectively in populated environments. To realize these abilities, the robot needs to keep track of relevant changes in the environment. This paper proposes a localization method for a mobile robot that uses images from distributed intelligent networked devices (DINDs) in intelligent space (ISpace). The scheme combines the position observed using dead-reckoning sensors with the position estimated from images of a moving object, such as a walking human, to determine the location of the mobile robot. The moving object is assumed to be a point object and is projected onto an image plane to form a geometric constraint equation that provides position data for the object based on the kinematics of the intelligent space. Using the a priori known path of the moving object and a perspective camera model, geometric constraint equations are derived that relate the image-frame coordinates of the moving object to the estimated position of the robot. The proposed method uses the error between the observed and estimated image coordinates to localize the mobile robot, and a Kalman filtering scheme is used to estimate the robot's location. The approach is applied to a mobile robot in ISpace to show the reduction of uncertainty in determining the robot's location, and its performance is verified by computer simulation and experiment.
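The Kalman filtering scheme that fuses dead reckoning with image-based position estimates can be sketched as a minimal predict/update cycle. The random-walk motion model and identity observation model below are simplifying assumptions, not the paper's ISpace kinematics:

```python
import numpy as np

def kalman_step(x, P, u, z, Q, R_cov):
    """One predict/update cycle: the dead-reckoning motion u drives the
    prediction; the image-based position estimate z corrects it."""
    # Predict: x' = x + u (random-walk motion model), inflate covariance.
    x_pred = x + u
    P_pred = P + Q
    # Update with the observed position z (observation matrix H = I).
    K = P_pred @ np.linalg.inv(P_pred + R_cov)    # Kalman gain
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(len(x)) - K) @ P_pred
    return x_new, P_new

# Toy run (assumed noise levels): a stationary robot believed at (5, 5)
# while the camera system repeatedly observes (0, 0); the estimate is
# pulled toward the observation and the covariance shrinks.
x, P = np.array([5.0, 5.0]), np.eye(2)
Q, R_cov = 0.01 * np.eye(2), np.eye(2)
for _ in range(50):
    x, P = kalman_step(x, P, np.zeros(2), np.zeros(2), Q, R_cov)
print(x)   # near (0, 0)
```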

2D Spatial-Map Construction for Workers Identification and Avoidance of AGV (AGV의 작업자 식별 및 회피를 위한 2D 공간 지도 구성)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.347-352
    • /
    • 2012
  • In this paper, a 2D spatial-map construction scheme for worker identification and avoidance by an AGV, using spatial-coordinate detection based on a stereo camera, is proposed. In the proposed system, the face area of a moving person is detected in the left image of each stereo pair using the YCbCr color model, and its center coordinates are computed using the centroid method; with these data, the stereo camera mounted on the mobile robot can be controlled to track the moving target in real time. Moreover, a depth map can be obtained using the disparity map computed from the left and right images captured by the tracking-controlled stereo camera system and the perspective transformation between a 3-D scene and an image plane. Experiments on AGV driving with 240 frames of stereo images show that the error ratio between the calculated and measured values of the worker's width is as low as 2.19% and 1.52% on average.
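The centroid method used to compute the face-center coordinates can be sketched on a binary detection mask; the toy mask below stands in for the YCbCr-thresholded face region:

```python
import numpy as np

def centroid(mask):
    """Center coordinates of a detected region by the centroid method:
    the mean of the pixel coordinates where the binary mask is set."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                       # nothing detected
    return float(xs.mean()), float(ys.mean())

# Toy stand-in for a thresholded face region: a 3x3 block of set pixels.
mask = np.zeros((6, 8), dtype=bool)
mask[2:5, 3:6] = True
print(centroid(mask))   # -> (4.0, 3.0)
```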

A Study on the code and design elements as a way of transition (애니메이션 화면 전환 수단으로서의 조형 요소 변화에 대한 연구)

  • Kim, Jean-Young
    • Cartoon and Animation Studies
    • /
    • s.14
    • /
    • pp.83-99
    • /
    • 2008
  • In general, collective transitions such as the cut and the dissolve represent a change of scene in film. By generating frame images one by one, animation can assign intended sensibility and narrative factors to various parts of a scene and transfer them to a different symbolic dimension of expression. Today, sequential scene composition is no longer a special treatment unique to 2D animation, as image-handling techniques such as morphing and metamorphosis have become diverse and elaborate. But 2D manual animation draws the spectator continuously and strongly into different visual dimensions, beyond character and background, that is, object and space; this is its strong attraction. These characteristics enable a literary function that allows delicate metaphor through the composition of the full scene and communicates an implicative meaning system. The analysis of the scene has broken the boundary between the symbolic perspective world and the plane formative world, and has become more diverse and complicated. Analyzing the composition of formative elements in animation film scenes and their application effects is therefore helpful for analyzing and applying modern image scenes with new methods of absorption.
