• Title/Summary/Keyword: 3D coordinate extraction

Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting (레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발)

  • Im, D. H.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.24 no.2
    • /
    • pp.159-166
    • /
    • 1999
  • A system was developed to continuously extract real 3-D geometric feature information from the 2-D images of objects fed randomly on a conveyor. Two sets of structured laser lighting were used, and the laser structured-light projection image was acquired by the camera on the signal of a photo-sensor mounted on the conveyor. A camera calibration matrix that transforms 2-D image coordinates into 3-D world-space coordinates was obtained from six known points. The maximum error after calibration was 1.5 mm within a height range of 103 mm. A correlation equation between the shift of the laser light and the object height was generated; the height estimated from this correlation showed a maximum error of 0.4 mm within the 103 mm height range. Interactive 3-D geometric feature extraction software was developed with Microsoft Visual C++ 4.0 under Windows, and the extracted 3-D geometric feature information was reconstructed into a 3-D surface using MATLAB.

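The calibration matrix described above maps known 3-D world points to their 2-D image projections and is estimated from six known points. A minimal NumPy sketch of that general technique (a direct linear transformation solved by least squares) is given below; it is an illustration under the usual pinhole assumptions, not the paper's actual implementation.

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Estimate a 3x4 camera calibration (projection) matrix from
    known 3-D world / 2-D image point pairs via least squares (DLT)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(A)
    # The smallest right singular vector minimizes ||A p|| with ||p|| = 1.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

def project(P, world_pt):
    """Project a 3-D world point to 2-D image coordinates."""
    X = np.append(world_pt, 1.0)
    u, v, w = P @ X
    return np.array([u / w, v / w])
```

With six point pairs there are 12 linear equations for the 11 degrees of freedom of the 3x4 matrix, so the least-squares solution is well determined.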

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.26 no.1
    • /
    • pp.21-28
    • /
    • 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low performance on diverse tasks, lack of robustness of task results, high system cost, and failure to earn the operator's trust. This paper proposes a scheme that can overcome these task limitations of conventional computer-controlled automatic systems: man-machine hybrid automation via tele-operation, which can handle various bioproduction processes. The scheme was divided into two categories: efficient task sharing between the operator and the CCM (computer-controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the tasks of object identification and extraction of the 3D coordinates of an object were selected. The 3D coordinate information was obtained through camera calibration, using the camera as a measurement device. Two stereo images were acquired by moving the camera a fixed distance horizontally, normal to the focal axis, and capturing an image at each location. The transformation matrix for camera calibration was obtained via a least-squares approach using six known pairs of points in the 2D image and 3D world space, and the 3D world coordinate was computed from the two sets of image pixel coordinates with the calibrated transformation matrix. As the interface between operator and CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. The operator indicated an object by touching the captured image on the touch-pad screen; a local image-processing area of a certain size was then specified around the touch, and image processing was performed within that local area to extract the desired features of the object. MS Windows based interface software was developed using Visual C++ 6.0 with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.

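The scheme above recovers the 3-D world coordinate of a point from its pixel coordinates in two images taken by a horizontally shifted, calibrated camera. A generic linear-triangulation sketch is shown below, assuming 3x4 projection matrices of the kind estimated in the previous sketch; it illustrates the technique rather than reproducing the authors' code.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3-D world point from its pixel coordinates in two
    calibrated views (3x4 projection matrices P1, P2) by solving the
    homogeneous linear system in a least-squares sense."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize to metric world coordinates
```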

Acceleration of Viewport Extraction for Multi-Object Tracking Results in 360-degree Video (360도 영상에서 다중 객체 추적 결과에 대한 뷰포트 추출 가속화)

  • Heesu Park;Seok Ho Baek;Seokwon Lee;Myeong-jin Lee
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.3
    • /
    • pp.306-313
    • /
    • 2023
  • Realistic and graphics-based virtual reality content is built on 360-degree video, and viewport extraction driven by the viewer's intention or by an automatic recommendation function is essential. This paper designs a viewport extraction system based on multiple object tracking in 360-degree video and proposes the parallel computing structure required for extracting multiple viewports. The viewport extraction process is parallelized with pixel-wise threads that transform ERP coordinates to 3D spherical surface coordinates and then map the 3D spherical surface coordinates to 2D coordinates within the viewport. The proposed structure was evaluated on the computation time of up to 30 simultaneous viewport extractions in aerial 360-degree video sequences and achieved up to 5240 times acceleration over CPU-based computation, whose time is proportional to the number of viewports. When high-speed I/O or memory buffers are used to reduce ERP frame I/O time, viewport extraction can be accelerated by a further 7.82 times. The proposed parallelized viewport extraction structure can be applied to simultaneous multi-access services for 360-degree video or virtual reality content and to per-user video summarization services.
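
A serial reference version of the per-pixel coordinate chain described above (viewport pixel → ray on the unit sphere → ERP sampling position) might look like the NumPy sketch below. The axis and angle conventions are assumptions made for illustration; the paper accelerates this computation by assigning one GPU thread per pixel.

```python
import numpy as np

def viewport_to_erp(i, j, vp_w, vp_h, fov_deg, yaw_deg, pitch_deg,
                    erp_w, erp_h):
    """Map one viewport pixel (i, j) to the ERP pixel it samples.

    Convention assumed here: x right, y down, z forward; longitude is
    measured around the vertical axis, latitude from the equator.
    """
    # Viewport pixel -> point on the normalized image plane (z = 1).
    half = np.tan(np.radians(fov_deg) / 2.0)
    x = (2.0 * (i + 0.5) / vp_w - 1.0) * half
    y = (2.0 * (j + 0.5) / vp_h - 1.0) * half * vp_h / vp_w
    d = np.array([x, y, 1.0])
    d /= np.linalg.norm(d)

    # Rotate the ray by the viewport center orientation (pitch, then yaw).
    p, yw = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(yw), 0, np.sin(yw)],
                   [0, 1, 0],
                   [-np.sin(yw), 0, np.cos(yw)]])
    d = Ry @ Rx @ d

    # Unit-sphere direction -> spherical angles -> ERP pixel coordinates.
    lon = np.arctan2(d[0], d[2])            # [-pi, pi]
    lat = np.arcsin(np.clip(d[1], -1, 1))   # [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * erp_w
    v = (lat / np.pi + 0.5) * erp_h
    return u, v
```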

Computerized Management System for General Hospitals - MEDIOS (종합병원관리 전산화 System-MEDIOS)

  • 이승훈
    • Journal of Biomedical Engineering Research
    • /
    • v.3 no.1
    • /
    • pp.55-58
    • /
    • 1982
  • In this paper, a method for camera position estimation in the gaster (stomach) using electronic endoscopic image sequences is proposed. To obtain suitable image sequences, the gaster is divided into three sections. Camera position modeling for 3D information extraction is presented, and the image distortion caused by the endoscope lens is corrected. The feature points are represented with respect to the reference coordinate system with an error rate below 10 percent. A fast distortion-correction algorithm is also proposed; it uses an error table, which is faster than the coordinate-transform method based on n-th order polynomials.

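The abstract above prefers an error-table look-up to repeatedly evaluating an n-th order polynomial coordinate transform for lens distortion correction. The sketch below shows the generic precompute-then-look-up idea in NumPy; `correct_fn` is a placeholder for whatever calibrated distortion model is used and is purely illustrative.

```python
import numpy as np

def build_error_table(width, height, correct_fn):
    """Precompute per-pixel correction offsets (dx, dy) once, so that
    runtime correction is a table lookup instead of evaluating a
    polynomial distortion model for every frame."""
    table = np.zeros((height, width, 2), dtype=np.float32)
    for y in range(height):
        for x in range(width):
            xc, yc = correct_fn(x, y)   # e.g. an n-th order polynomial model
            table[y, x] = (xc - x, yc - y)
    return table

def correct_point(table, x, y):
    """Apply the precomputed correction to an integer pixel coordinate."""
    dx, dy = table[y, x]
    return x + dx, y + dy
```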

Implementation of 3-Dimensional Cloth Animation Based on a Cloth Design System (의복 디자인 시스템을 이용한 웹 3차원 의복 애니메이션 구현)

  • Kim, Ju-Ri;Lee, Hae-Jung;Joung, Suck-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.10
    • /
    • pp.2157-2163
    • /
    • 2011
  • In this paper, we designed a 2D/2.5D cloth design system and a 3D cloth animation system. Together they make 3D cloth animation possible by using the coordinate points extracted from the 2D and 2.5D cloth design system, in order to realize a system that allows customers to try on clothes in a virtual space. To achieve natural draping, the system uses mesh creation and transformation algorithms, a path extraction algorithm, a warp algorithm, and brightness extraction and application algorithms. The extracted coordinate points are received as text-format data and entered as clothing information in the cloth file; the cloth file holds the 2D pattern and is used by the 3D cloth animation system. The resulting 3D cloth animation system builds a web-based fashion mall using ISB (Internet Space Builder) and lets customers view the clothing animation on the web by adding the animation process to the simulation result.

Estimation of 3D Rotation Information of Animation Character Face (애니메이션 캐릭터 얼굴의 3차원 회전정보 측정)

  • Jang, Seok-Woo;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.8
    • /
    • pp.49-56
    • /
    • 2011
  • Recently, animation content has become widely available along with the development of the cultural industry. In this paper, we propose a method to analyze the face of an animation character and extract the 3D rotational information of the face. The suggested method first generates a dominant color model of the face by learning face images of the animation character. Our system then detects the face and its components with this model and establishes two coordinate systems: a base coordinate system and a target coordinate system. The three-dimensional rotational information of the character's face is estimated from the geometric relationship between the two coordinate systems. Finally, to visually represent the extracted 3D information, a 3D face model reflecting the rotation information is displayed. Experiments show that our method can extract the 3D rotation information of a character face reasonably well.
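
The method above estimates 3-D rotation from the geometric relationship between a base coordinate system and a target coordinate system. One standard way to recover a rotation matrix that aligns corresponding unit axes of two frames is the SVD-based (Kabsch) solution sketched below in NumPy; this is a hedged illustration of the general technique, not necessarily the authors' formulation.

```python
import numpy as np

def rotation_between_frames(base_axes, target_axes):
    """Find the rotation matrix R that best maps base_axes to
    target_axes (both 3x3 arrays whose rows are corresponding unit
    vectors), using the SVD-based Kabsch solution."""
    H = base_axes.T @ target_axes
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T

def euler_angles_zyx(R):
    """Extract yaw (z), pitch (y), roll (x) angles in degrees from R."""
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    pitch = np.degrees(np.arcsin(-np.clip(R[2, 0], -1, 1)))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return yaw, pitch, roll
```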

Extraction of gaze point on display based on EOG for general paralysis patients (전신마비 환자를 위한 EOG 기반 디스플레이 상의 응시 좌표 산출)

  • Lee, D.H.;Yu, J.H.;Kim, D.H.
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.5 no.1
    • /
    • pp.87-93
    • /
    • 2011
  • This paper proposes a method for extracting the gaze point on a display using the EOG (electrooculography) signal. Based on the linear property of the EOG signal, the proposed method corrects the scaling, rotation, and origin differences between the EOG-derived coordinate system and the display coordinate system, without compensating for head movement. The performance of the proposed method was evaluated by measuring the difference between the extracted gaze point and a displayed circular target on a monitor with 1680*1050 resolution. Experimental results show that the average distance errors at the gaze points are 3% (56 pixels) on the x-axis and 4% (47 pixels) on the y-axis, respectively. This method can be used as a pointing-device interface for general paralysis patients or as an HCI for VR game applications.
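
Correcting the scaling, rotation, and origin differences between the EOG coordinate frame and the display, as described above, amounts to fitting a 2-D similarity transform from a few calibration fixations. The NumPy sketch below uses a standard Umeyama-style least-squares alignment; it is an illustrative stand-in, not the paper's exact calibration procedure.

```python
import numpy as np

def fit_similarity(eog_pts, screen_pts):
    """Estimate scale s, rotation R (2x2), and translation t so that
    s * R @ eog + t approximates the matching screen coordinates.
    Both inputs are (N, 2) arrays of calibration points."""
    mu_e, mu_s = eog_pts.mean(axis=0), screen_pts.mean(axis=0)
    E, S = eog_pts - mu_e, screen_pts - mu_s
    H = E.T @ S / len(eog_pts)              # source-target covariance
    U, sig, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(sig) @ D) / E.var(axis=0).sum()
    t = mu_s - s * R @ mu_e
    return s, R, t

def to_screen(s, R, t, eog_xy):
    """Map one EOG-derived coordinate to a display coordinate."""
    return s * R @ np.asarray(eog_xy) + t
```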

A Study on Depth Data Extraction for Object Based on Camera Calibration of Known Patterns (기지 패턴의 카메라 Calibration에 기반한 물체의 깊이 데이터 추출에 관한 연구)

  • 조현우;서경호;김태효
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2001.06a
    • /
    • pp.173-176
    • /
    • 2001
  • In this paper, a new measurement system for extracting depth data based on camera calibration with a known pattern is implemented. The relation between the 3D world coordinate system and the 2D image coordinate system is analyzed, a new camera calibration algorithm is established from the analysis, and the intrinsic and extrinsic parameters of the CCD camera are obtained. Assuming the measurement plane is horizontal, approximate solutions that minimize the 2D plane equation and the coordinate transformation equation are obtained with the Newton-Raphson method and stored in a look-up table for real-time processing. A slit laser light is projected onto the object, and a 2D image is obtained on the x-z plane of the measurement system. A 3D shape image is obtained as the 2D (x-z) images are continuously acquired while the object moves in the y direction. The 3D shape images are displayed on a computer monitor using OpenGL. In the measurement results, the depth data showed about ±1% error per pixel. The error appears to come from vibration of the mechanical and optical system; improving the system will require better mechanical stability and a more precise optical system.

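The system above solves its plane and coordinate-transformation equations once with the Newton-Raphson method and stores the results in a look-up table for real-time depth recovery. The NumPy sketch below shows that precompute-then-look-up pattern; the residual functions are placeholders, not the paper's actual equations.

```python
import numpy as np

def newton_raphson(f, df, x0, tol=1e-9, max_iter=50):
    """Solve f(x) = 0 for a scalar unknown with the Newton-Raphson method."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def build_depth_lut(columns, residual, d_residual, z0=100.0):
    """Precompute, for every image column, the depth that zeroes the
    residual of the (placeholder) plane/transformation equations, so
    that runtime depth recovery is a single table lookup."""
    return np.array([
        newton_raphson(lambda z, c=c: residual(c, z),
                       lambda z, c=c: d_residual(c, z), z0)
        for c in columns
    ])
```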

AUTOMATIC ROAD NETWORK EXTRACTION USING LIDAR RANGE AND INTENSITY DATA

  • Kim, Moon-Gie;Cho, Woo-Sug
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.79-82
    • /
    • 2005
  • Recently, the need for road data has continued to increase in industrial society, and roads are being repaired and newly constructed in many areas. As national, city, and regional development proceeds, updating and acquiring road data for GIS (Geographical Information System) is essential. In this study, a fusion of range data (3D ground coordinate system data) and intensity data from stand-alone LiDAR is used for road extraction, followed by digital image processing. LiDAR intensity data is still being studied, and this study shows the feasibility of road extraction using it. Because intensity and range data are acquired at the same time, LiDAR avoids the problems of multi-sensor data fusion. A further advantage of the intensity data is that it is already geocoded, at real-world scale, and can be used to produce ortho-photos. Lastly, quantitative and qualitative analyses of the extracted road image are presented by comparison with a 1:1,000 digital map.


A Study on Iris Recognition by Iris Feature Extraction from Polar Coordinate Circular Iris Region (극 좌표계 원형 홍채영상에서의 특징 검출에 의한 홍채인식 연구)

  • Jeong, Dae-Sik;Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.3
    • /
    • pp.48-60
    • /
    • 2007
  • Previous research on iris feature extraction transforms the original iris image into a rectangular one by stretching and interpolation, which distorts the iris patterns and consequently reduces iris recognition accuracy. We therefore propose a method that extracts iris features in polar coordinates without distorting the iris patterns. Our proposed method has three strengths compared with previous research. First, we extract iris features directly from the circular iris image in polar coordinates. Although this requires a little more processing time, there is no degradation in recognition accuracy, and we compare the recognition performance of the polar coordinate representation with the rectangular one using Hamming distance, cosine distance, and Euclidean distance. Second, the center of the pupil generally differs from that of the iris because of camera angle, head position, and the user's gaze direction, so we propose an iris feature detection method based on the polar coordinate circular iris region that uses the pupil and iris positions and radii simultaneously. Third, we address the overlapped points that arise in the polar coordinate circular method, where each overlapped point would be extracted from the same position of the iris region; to handle this, we modify the Gabor filter's size and frequency on the first track to account for the low-frequency iris patterns caused by the overlapped points. Experimental results showed an EER of 0.29% and d' of 5.9 with the conventional rectangular image, and an EER of 0.16% and d' of 6.4 with the proposed method.
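
Extracting features directly from the circular iris region in polar coordinates, using both the pupil and iris centers and radii as described above, can be pictured as sampling gray values along concentric tracks between the two boundaries. The NumPy sketch below, together with a Hamming-distance comparison of binary codes, is a simplified illustration under assumed parameters, not the paper's Gabor-filter implementation.

```python
import numpy as np

def sample_iris_polar(image, pupil_c, pupil_r, iris_c, iris_r,
                      n_tracks=8, n_angles=256):
    """Sample gray values along concentric tracks between the pupil and
    iris boundaries, reading the circular region directly in polar
    coordinates (no rectangular unwarping of the image itself)."""
    h, w = image.shape
    samples = np.zeros((n_tracks, n_angles), dtype=np.float32)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for t in range(n_tracks):
        frac = (t + 0.5) / n_tracks
        for a, th in enumerate(thetas):
            # Interpolate between the (possibly non-concentric) pupil
            # and iris boundary points along this angle.
            px = pupil_c[0] + pupil_r * np.cos(th)
            py = pupil_c[1] + pupil_r * np.sin(th)
            ix = iris_c[0] + iris_r * np.cos(th)
            iy = iris_c[1] + iris_r * np.sin(th)
            x = int(round(px + frac * (ix - px)))
            y = int(round(py + frac * (iy - py)))
            if 0 <= x < w and 0 <= y < h:
                samples[t, a] = image[y, x]
    return samples

def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two binary iris codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size
```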