• Title/Summary/Keyword: precise image


3D Fingertip Estimation based on the TOF Camera for Virtual Touch Screen System (가상 터치스크린 시스템을 위한 TOF 카메라 기반 3차원 손 끝 추정)

  • Kim, Min-Wook;Ahn, Yang-Keun;Jung, Kwang-Mo;Lee, Chil-Woo
    • The KIPS Transactions: Part B / v.17B no.4 / pp.287-294 / 2010
  • Time-of-Flight (TOF) is one technique for obtaining an object's 3D depth information. However, the depth image has low resolution and the fingertip occupies only a very small region, so it is difficult to find the fingertip's precise 3D position using the TOF depth image alone. In this paper, we estimate the fingertip's 3D location using an arm model and reliable 3D hand-location information, modeling the hand as a hexahedron. With the proposed method we obtain more precise 3D fingertip information than by using the depth image alone.
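As background for the prerequisite step, a minimal sketch (not from the paper) of back-projecting a TOF depth pixel to a 3D camera-space point; the intrinsics below are hypothetical values for a low-resolution TOF sensor:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth to a 3D camera-space point."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical intrinsics for a 176x144 TOF sensor; depth in metres.
p = backproject(u=100, v=70, depth=0.8, fx=200.0, fy=200.0, cx=88.0, cy=72.0)
```

The fingertip pixel found in the depth image would be lifted to 3D this way before the arm-model constraint is applied.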

T-joint Laser Welding of Circular and Square Pipes Using the Vision Tracking System (용접선 추적 비전장치를 이용한 원형-사각 파이프의 T형 조인트 레이저용접)

  • Son, Yeong-Il;Park, Gi-Yeong;Lee, Gyeong-Don
    • Laser Solutions / v.12 no.1 / pp.19-24 / 2009
  • Because of its fast and precise welding performance, laser welding is becoming an excellent new welding method. However, precise focusing and robust seam tracking are required to apply laser welding in practical settings. To laser-weld a T-joint such as a circular pipe on a square pipe, which occurs in three-dimensional structures such as an aluminum space frame, a visual sensor system was developed to automate focusing and seam tracking. The developed sensor system consists of a digital CCD camera, a structured laser, and a vision processor. It is moved and positioned by a two-axis motorized stage attached, together with the laser welding head, to a six-axis robot manipulator. After the stripe-type structured laser illuminates the target surface, images are captured by the digital CCD camera. From each image, the seam error and defocusing error are calculated using image-processing algorithms, including efficient techniques for handling continuously changing image patterns. These errors are corrected by the stage, off-line, during welding or teaching. Laser welding of a circular pipe on a square pipe succeeded with the vision tracking system, which reduced the path-positioning and defocusing errors caused by robot teaching or by geometrical variation of the specimens and jig holding.

  • PDF
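The seam-error measurement described above can be sketched as follows; the centroid-based stripe localization and the taught reference column are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def seam_error(stripe_img, ref_col):
    """Estimate lateral seam error (pixels) from a vertical laser-stripe image.

    For each row, take the intensity-weighted centroid of the stripe,
    then return the mean deviation from the taught reference column.
    """
    rows, cols = stripe_img.shape
    col_idx = np.arange(cols)
    centroids = []
    for r in range(rows):
        line = stripe_img[r].astype(float)
        if line.sum() > 0:  # skip rows where the stripe is not visible
            centroids.append((line * col_idx).sum() / line.sum())
    return float(np.mean(centroids) - ref_col)

# Synthetic stripe centered at column 12, taught reference at column 10:
img = np.zeros((5, 20))
img[:, 12] = 255.0
err = seam_error(img, ref_col=10)  # positive -> stripe lies right of the seam path
```

In a real system this per-frame error would be converted to millimetres via the camera calibration and fed to the stage controller.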

ANALYSIS OF THE IMAGE SENSOR CONTROL METHOD

  • Park, Jong-Euk;Kong, Jong-Pil;Heo, Haeng-Pal;Kim, Young-Sun;Yong, Sang-Soon
    • Proceedings of the KSRS Conference / 2007.10a / pp.464-467 / 2007
  • All image data acquisition systems, such as digital cameras and camcorders, use an image sensor to convert light into electronic data. Such image sensors are also used in satellite cameras for high-quality, high-resolution image data. There are two kinds of image sensors: the CCD (charge-coupled device) sensor and the CMOS (complementary metal-oxide-semiconductor) image sensor. The CCD sensor control system is more complex than the CMOS one: to obtain high-quality image data from a CCD sensor, precise timing control signals and several voltage sources are needed in the control system. In this paper, a comparison of the CCD and CMOS sensors, the CCD sensor's characteristics, and the control system are described.

  • PDF

A Study on Glass Processing System

  • Song, Jai-Chul
    • International Journal of Advanced Smart Convergence / v.4 no.2 / pp.84-93 / 2015
  • This study concerns the development of a cover glass grinding processing system. The system was developed for mass production, grinding cover glasses with a highly precise mechanism, and the resulting quality was improved. In the development process, we established a complete process technology through mechanical design, image processing technology, spindle control, a mark-identification algorithm, and related components. Through this cover glass grinding development, we established the process technology, image processing technology, machine mechanisms, and control algorithms.

SEMI-AUTOMATIC EXTRACTION OF AGRICULTURAL LAND USE AND VEGETATION INFORMATION USING HIGH RESOLUTION SATELLITE IMAGES

  • Lee, Mi-Seon;Kim, Seong-Joon;Shin, Hyoung-Sub;Park, Jong-Hwa
    • Proceedings of the KSRS Conference / 2008.10a / pp.147-150 / 2008
  • This study develops a semi-automatic extraction of agricultural land use and vegetation information using high-resolution satellite images. An IKONOS satellite image (May 25, 2001) and a QuickBird satellite image (May 1, 2006), which resemble the spatial resolution and spectral characteristics of KOMPSAT3, were used. Precise agricultural land-use classification was attempted using the ISODATA unsupervised classification technique, and the result was compared with on-screen digitized land use accompanied by field investigation. For the extraction of vegetation information, three crops (paddy rice, corn, and red pepper) were selected, and their spectral characteristics were collected over each growing period using a ground spectroradiometer. The vegetation indices RVI, NDVI, ARVI, and SAVI were evaluated for the crops. The evaluation process is under development using the ERDAS IMAGINE Spatial Modeler Tool.

  • PDF
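The vegetation indices named in the abstract have standard band-ratio definitions; a sketch using the usual SAVI soil factor L = 0.5 and ARVI gamma = 1 (the paper may have tuned these differently):

```python
import numpy as np

def vegetation_indices(nir, red, blue, L=0.5, gamma=1.0):
    """Standard band-ratio vegetation indices from surface reflectances."""
    nir, red, blue = (np.asarray(b, dtype=float) for b in (nir, red, blue))
    rvi = nir / red                                   # ratio vegetation index
    ndvi = (nir - red) / (nir + red)                  # normalized difference
    savi = (1 + L) * (nir - red) / (nir + red + L)    # soil-adjusted
    rb = red - gamma * (blue - red)                   # atmospheric-resistance term
    arvi = (nir - rb) / (nir + rb)
    return {"RVI": rvi, "NDVI": ndvi, "SAVI": savi, "ARVI": arvi}

# Example reflectances typical of a healthy crop canopy pixel:
idx = vegetation_indices(nir=0.5, red=0.1, blue=0.08)
```

Applied per pixel over the multispectral image, these produce the index maps compared across the three crops' growing periods.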

Scene-based Nonuniformity Correction by Deep Neural Network with Image Roughness-like and Spatial Noise Cost Functions

  • Hong, Yong-hee;Song, Nam-Hun;Kim, Dae-Hyeon;Jun, Chan-Won;Jhee, Ho-Jin
    • Journal of the Korea Society of Computer and Information / v.24 no.6 / pp.11-19 / 2019
  • In this paper, a new scene-based nonuniformity correction (SBNUC) method is proposed, applying image-roughness-like and spatial-noise cost functions on a deep neural network structure. Classic approaches to nonuniformity correction generally require many sequential image data sets to acquire accurate correction offset coefficients. The proposed method, however, can estimate the offset from only a couple of images, owing to the characteristics of the deep neural network scheme. A real-world SWIR image set is used to verify the performance of the proposed method, and the results show an image-quality improvement of up to 70.3 dB PSNR. This is about 8.0 dB more than the improved IRLMS algorithm, which additionally requires a precise image-registration step on consecutive image frames.
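The PSNR figures quoted above are computed in the standard way; a minimal sketch (the peak value and frames below are illustrative, not the paper's data):

```python
import numpy as np

def psnr(reference, corrected, peak=None):
    """Peak signal-to-noise ratio in dB between a reference and a corrected frame."""
    reference = np.asarray(reference, dtype=float)
    corrected = np.asarray(corrected, dtype=float)
    mse = np.mean((reference - corrected) ** 2)
    if peak is None:
        peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
out = ref + 1.0          # uniform residual error of one digital level
value = psnr(ref, out)   # 10 * log10(100^2 / 1) = 40 dB
```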

Interpretation of Real Information-missing Patch of Remote Sensing Image with Kriging Interpolation of Spatial Statistics

  • Yiming, Feng;Xiangdong, Lei;Yuanchang, Lu
    • Proceedings of the KSRS Conference / 2003.11a / pp.1479-1481 / 2003
  • The aim of this paper is to interpret real information-missing patches of an image using the kriging interpolation technique of spatial statistics. The TM image of the Jingouling Forest Farm of the Wangqing Forestry Bureau in Northeast China, acquired on 1 July 1997, was used as the test material. Based on a classification of the TM image, the pixel-missing patches were interpolated by kriging using the image-processing software ERDAS and the geographic information system software Arc/Info. The interpolation results passed a precision examination. This paper provides a method and means for interpreting information-missing patches of an image.

  • PDF
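Ordinary kriging as used above can be sketched for a single missing pixel; the linear variogram and the four-sample layout are illustrative assumptions, not fitted from TM data:

```python
import numpy as np

def ordinary_kriging(xy, z, target, variogram):
    """Ordinary kriging estimate of one target point from scattered samples.

    xy: (n, 2) sample coordinates; z: (n,) sample values;
    variogram: callable gamma(h) giving semivariance at lag distance h.
    """
    n = len(z)
    # Kriging system: semivariances between samples, plus a Lagrange
    # row/column enforcing that the weights sum to one (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = variogram(np.linalg.norm(xy[i] - xy[j]))
    b = np.ones(n + 1)
    b[:n] = [variogram(np.linalg.norm(p - target)) for p in xy]
    w = np.linalg.solve(A, b)       # weights w[:n] and Lagrange multiplier w[n]
    return float(w[:n] @ z)

# Four known pixels around a missing one, with a linear variogram:
xy = np.array([[0.0, 1.0], [0.0, -1.0], [1.0, 0.0], [-1.0, 0.0]])
z = np.array([2.0, 2.0, 4.0, 4.0])
est = ordinary_kriging(xy, z, target=np.array([0.0, 0.0]),
                       variogram=lambda h: h)
```

In practice the variogram model is first fitted to the semivariances of the intact pixels within each class, then the system above is solved for every missing pixel.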

DTM GENERATION OF RADARSAT AND SPOT SATELLITE IMAGERY USING GROUND CONTROL POINTS EXTRACTED FROM SAR IMAGE

  • Park, Doo-Youl;Kim, Jin-Kwang;Lee, Ho-Nam;Won, Joong-Sun
    • Proceedings of the KSRS Conference / 2005.10a / pp.667-670 / 2005
  • Ground control points (GCPs) can be extracted from SAR data, given a precise orbit, for DTM generation using optical images and other SAR data. In this study, we extracted GCPs from ERS SAR data and the SRTM DEM. Although it is very difficult to identify GCPs in an ERS SAR image, the geometry of optical images and other SAR data can be corrected, and a more precise DTM can be constructed from stereo optical images. Twenty GCPs were obtained from the ERS SAR data using precise Delft orbit information. After correction, the mean planimetric distance errors of the GCPs were 3.7m, 12.1m, and -0.8m, with standard deviations of 19.9m, 18.1m, and 7.8m in geocentric X, Y, and Z coordinates, respectively. The geometry of the SPOT stereo pair was corrected using 13 GCPs, with r.m.s. errors of 405m, 705m, and 8.6m in the northing, easting, and height directions, respectively; the geometry of the RADARSAT stereo pair was corrected using 12 GCPs, with r.m.s. errors of 804m, 7.9m, and 6.9m in the northing, easting, and height directions, respectively. DTMs were generated from the SPOT and RADARSAT stereo images by area-based matching with pyramid images. Points of the obtained DTMs were compared with points estimated from a national 1:5,000 digital map. For the DTM from SPOT stereo images, the mean distance errors in the northing, easting, and height directions were -7.6m, 9.6m, and -3.1m, with standard deviations of 9.1m, 12.0m, and 9.1m; for the DTM from RADARSAT stereo images, the mean distance errors were likewise -7.6m, 9.6m, and -3.1m, with standard deviations of 9.1m, 12.0m, and 9.1m. These results meet the accuracy of DTED Level 2.

  • PDF
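The per-axis residual statistics reported above (mean, standard deviation, r.m.s.) can be computed as follows; the checkpoint residuals below are hypothetical:

```python
import numpy as np

def residual_stats(observed, reference):
    """Per-axis mean, sample standard deviation, and r.m.s. of geolocation residuals."""
    r = np.asarray(observed, dtype=float) - np.asarray(reference, dtype=float)
    mean = r.mean(axis=0)
    std = r.std(axis=0, ddof=1)           # sample standard deviation
    rms = np.sqrt((r ** 2).mean(axis=0))  # root-mean-square error
    return mean, std, rms

# Hypothetical northing/easting/height residuals (metres) at four checkpoints:
obs = np.array([[3.0, -2.0, 1.0],
                [5.0,  0.0, -1.0],
                [4.0,  2.0, 0.0],
                [4.0,  0.0, 0.0]])
ref = np.zeros((4, 3))
mean, std, rms = residual_stats(obs, ref)
```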

Information Fusion of Photogrammetric Imagery and Lidar for Reliable Building Extraction (광학 영상과 Lidar의 정보 융합에 의한 신뢰성 있는 구조물 검출)

  • Lee, Dong-Hyuk;Lee, Kyoung-Mu;Lee, Sang-Uk
    • Journal of Broadcast Engineering / v.13 no.2 / pp.236-244 / 2008
  • We propose a new building detection and description algorithm for Lidar data and photogrammetric imagery using color segmentation, line-segment matching, and perceptual grouping. The algorithm consists of two steps. In the first step, from the initial building regions extracted from the Lidar data and the color-segmentation results from the photogrammetric imagery, we extract coarse building boundaries, applying a split-and-merge technique to the aerial imagery. In the second step, we extract precise building boundaries from the coarse boundaries and the edges in the aerial imagery using line-segment matching and perceptual grouping. The contribution of this algorithm is that color information in the photogrammetric imagery is used to complement collapsed building boundaries obtained from Lidar. Moreover, the linearity of the edges and the construction of closed roof forms are used to reflect the characteristics of man-made objects. Experimental results on multisensor data demonstrate that the proposed algorithm produces more accurate and reliable results than the Lidar sensor alone.

Investigation of Sensor Models for Precise Geolocation of GOES-9 Images (GOES-9 영상의 정밀기하보정을 위한 여러 센서모델 분석)

  • Hur, Dong-Seok;Kim, Tae-Jung
    • Korean Journal of Remote Sensing / v.22 no.4 / pp.285-294 / 2006
  • A numerical formula that represents the relationship between a point in a satellite image and its ground position is called a sensor model. For precise geolocation of satellite images, an error-free sensor model is needed. However, the sensor model based on GOES ephemeris data has some error, in particular after the Image Motion Compensation (IMC) mechanism has been turned off. To solve this problem, we investigated three sensor models: a collinearity model, a direct linear transform (DLT) model, and an orbit-based model. We matched GOES images against a global coastline database and used the successful matches as control points. With these control points we improved the initial geolocation accuracy using the three models and compared their results. We show that the orbit-based model is a suitable sensor model for precise geolocation of GOES-9 images.
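The DLT sensor model mentioned above maps ground coordinates (X, Y, Z) to image coordinates (u, v) with 11 parameters; a sketch fitted by linear least squares on synthetic control points (the affine test mapping and cube-corner GCPs are assumptions for illustration, not GOES data):

```python
import numpy as np

def fit_dlt(ground, image):
    """Fit the 11-parameter direct linear transform (DLT) mapping ground
    (X, Y, Z) points to image (u, v) points, by linear least squares."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(ground, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.append(v)
    L, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
    return L

def apply_dlt(L, point):
    """Project a ground point to image coordinates with DLT parameters L."""
    X, Y, Z = point
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return u, v

# Synthetic control points: cube corners mapped by a known affine projection.
gcps = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], float)
uv_true = np.array([(2 * X + Y + 10.0, X + 3 * Z + 5.0) for X, Y, Z in gcps])
L = fit_dlt(gcps, uv_true)
uv_fit = np.array([apply_dlt(L, p) for p in gcps])
```

With coastline-matched control points in place of the synthetic ones, the same fit yields a corrected image-to-ground mapping; at least six well-distributed, non-coplanar GCPs are required.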