• Title/Summary/Keyword: image central position

Search results: 57

Moving Path Tracing of Image Central Position with Autocorrelation Function

  • Kim, Young-Bin;Ryu, Kwang-Ryol;Sclabassi, Robert J.
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.05a / pp.302-305 / 2008
  • For a complete image to be stitched from several mosaic images, tracing the displacement direction and distance between successive images is an important step. The input image is modeled by a general second-order two-dimensional Taylor series, converted to a $3{\times}3$ correlation block, and the block data are stored. A moving factor and coordinate are calculated by comparing successive correlation blocks. The experimental results show a success rate of 85% for moving path tracing as successive images are moved by 10% of the image central position.
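
As an illustration of the block-matching idea in this abstract, the following is a minimal sketch (not the paper's Taylor-series formulation): it estimates the shift between two consecutive grayscale frames by comparing small zero-mean correlation blocks over a search window. The function names and the search radius are hypothetical; frames are assumed to be NumPy arrays.

```python
import numpy as np

def correlation_block(img, cy, cx):
    """3x3 neighbourhood around (cy, cx), zero-mean and unit-norm."""
    block = img[cy - 1:cy + 2, cx - 1:cx + 2].astype(float)
    block -= block.mean()
    return block / (np.linalg.norm(block) + 1e-9)

def estimate_shift(prev_img, curr_img, cy, cx, search=10):
    """Estimate the (dy, dx) shift of the block centred at (cy, cx)
    by maximising the correlation score over a small search window."""
    ref = correlation_block(prev_img, cy, cx)
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = correlation_block(curr_img, cy + dy, cx + dx)
            score = float((ref * cand).sum())   # normalised correlation
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```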


Vision Sensor System for Weld Seam Tracking of I-Butt Joint with Height Variation (높이 변화가 있는 막대기 용접선 추적용 시각센서)

  • Kim Moo-Yeon;Kim Jae-Woong
    • Journal of Welding and Joining / v.22 no.6 / pp.43-49 / 2004
  • In this study, a visual sensor system that can detect an I-butt weld joint with height variation, together with a seam tracking algorithm, was investigated. The three-dimensional position of an object can be acquired by distance measurement, i.e., optical trigonometry based on the spatial relations among the camera, the object, and the structured light of a visible laser. For the optical system design, the effects of laser intensity, iris number, and object material on image quality were investigated. For the image processing, a region of interest is defined within the whole image, and the laser line is extracted by using the gray-level difference in the image. From the extracted laser line, the weld joint is recognized by searching for the peak position calculated with the central difference method. Through a series of welding experiments, good tracking performance was confirmed in GMA welding.
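
The abstract relies on optical trigonometry between the camera, the object, and the laser plane. The sketch below is a heavily simplified stand-in, assuming a pre-calibrated image scale (mm per pixel) and laser angle rather than the paper's actual calibration; it only shows how a stripe-row offset could be converted into a height estimate.

```python
import numpy as np

def stripe_row(column_intensity):
    """Row index of the brightest pixel in one image column,
    i.e. where the laser stripe crosses that column."""
    return int(np.argmax(column_intensity))

def surface_height(pixel_offset, mm_per_pixel, laser_angle_deg):
    """Simplified triangulation: a higher surface shifts the stripe in the
    image, and the shift (in mm) divided by tan(laser angle) gives the
    height.  mm_per_pixel and laser_angle_deg are assumed calibration
    values, not those of the paper."""
    shift_mm = pixel_offset * mm_per_pixel
    return shift_mm / np.tan(np.radians(laser_angle_deg))
```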

A Study on the Image Processing of Visual Sensor for Weld Seam Tracking in GMA Welding (GMA 용접에서 용접선 추적용 시각센서의 화상처리에 관한 연구)

  • Chung, Kyu-Chul;Kim, Jae-Woong
    • Journal of Welding and Joining / v.18 no.3 / pp.60-67 / 2000
  • In this study, we constructed a preview-sensing visual sensor system for weld seam tracking in GMA welding. The visual sensor consists of a CCD camera, a diode laser system with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatter and/or arc light. To obtain the weld joint position and edge points accurately from the captured image, we compared the Hough transform method with the central difference method. As a result, we show that the Hough transform method extracts the points more accurately and can be applied to real-time weld seam tracking. Image processing is carried out to extract the straight lines that represent the laser stripe. After extracting the lines, the weld joint position and edge points are determined from the intersection points of the lines. Even when a spatter trace appears in the image, it is possible to recognize the position of the weld joint. Weld seam tracking was precisely implemented by adopting the Hough transform method, and the weld seam can be tracked when the offset angle is within $\pm15^{\circ}$.
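
As a rough illustration of the Hough-transform stage, the sketch below binarises an 8-bit grayscale laser-stripe image and extracts candidate straight lines with OpenCV's standard Hough transform. The threshold values are placeholders, not the parameters used in the paper.

```python
import cv2
import numpy as np

def laser_stripe_lines(gray, vote_threshold=80):
    """Binarise the laser-stripe image and extract straight lines with the
    standard Hough transform (rho, theta parameterisation)."""
    _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    lines = cv2.HoughLines(binary, 1, np.pi / 180, vote_threshold)
    return [] if lines is None else [tuple(map(float, l[0])) for l in lines]
```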


A Study on the Image Processing of Visual Sensor for Weld Seam Tracking in GMA Welding

  • Kim, J.-W.;Chung, K.-C.
    • International Journal of Korean Welding Society / v.1 no.2 / pp.23-29 / 2001
  • In this study, a preview-sensing visual sensor system is constructed for weld seam tracking in GMA welding. The visual sensor system consists of a CCD camera, a diode laser system with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatter and/or arc light. Among the image processing methods, the Hough transform method is compared with the central difference method in terms of its capability to extract accurate feature positions. As a result, it was revealed that the Hough transform method extracts the feature positions more accurately and can be applied to real-time weld seam tracking. Image processing that includes the Hough transform method is carried out to extract the straight lines that represent the laser stripe. After extracting the lines, the weld joint position and edge points are determined by intersecting the lines. Even when the image includes a spatter trace, it is possible to recognize the position of the weld joint. Weld seam tracking was precisely implemented by adopting the Hough transform method, and the weld seam can be tracked when the offset angle is within $\pm15^{\circ}$.
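
Since the joint position here is obtained by intersecting the extracted lines, a small sketch of that step is given below, assuming the lines are available in Hough normal form (rho, theta). This is generic plane geometry, not code from the paper.

```python
import numpy as np

def hough_intersection(rho1, theta1, rho2, theta2):
    """Intersection of two lines given in Hough normal form
    x*cos(theta) + y*sin(theta) = rho."""
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    x, y = np.linalg.solve(A, b)   # raises LinAlgError for parallel lines
    return float(x), float(y)
```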


PANORAMIC IMAGE OF MANDIBULAR CONDYLE ACCORDING TO HEAD POSITION (두부 위치에 따른 하악 과두의 파노라마상)

  • Kim Jeong Hwa;Choi Soon Chul
    • Journal of Korean Academy of Oral and Maxillofacial Radiology / v.20 no.2 / pp.219-225 / 1990
  • Panoramic radiography is convenient in the clinic and visualizes areas that other techniques do not show. However, the technique is limited by image distortion, which results from the relationship of the ramus to the focal trough and from the direction of the central ray. Using 7 dry skulls, this study determined the effect of rotating the patient's head on reducing this distortion and determined the magnification ratio of the mandibular condyle images in rotated head positions. The results were as follows: 1. In general, the anterolateral portion of the mandibular condyle was best visualized in panoramic radiography. 2. There was no significant difference between the image readability of the anteromedial portion and that of the anterocentral portion of the mandibular condyle. 3. The anterolateral portion of the mandibular condyle was better visualized with the head rotated by 20 degrees or by the horizontal condylar inclination than in the conventional position or with the head rotated by 10 degrees. 4. The magnification ratio of the anteroposterior diameter of the mandibular condyle image was smallest with the head rotated by the horizontal inclination of the condyle and largest with the head rotated by 20 degrees.


Compact Catadioptric Wide Imaging with Secondary Planar Mirror

  • Ko, Young-Jun;Yi, Soo-Yeong
    • Current Optics and Photonics / v.3 no.4 / pp.329-335 / 2019
  • Wide-FOV imaging systems are important for acquiring rich visual information. A conventional catadioptric imaging system places a camera in front of a curved mirror to acquire a wide-FOV image; this is a cumbersome setup and causes unnecessary occlusions in the acquired image. In order to reduce both the burden of camera deployment and the occlusions in the images, this study uses a secondary planar mirror in the catadioptric imaging system. A compact design of the catadioptric imaging system and a condition on the position of the secondary planar mirror that preserves central imaging are presented. The image acquisition model of the catadioptric imaging system with a secondary planar mirror is discussed based on the principles of geometric optics. In the experiments, the acquired image is restored to a distortion-free image by backward mapping.
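
The restoration step mentioned at the end of the abstract can be sketched as a generic backward mapping, with the paper's catadioptric projection model replaced by a placeholder function `project_fn`; only the sampling mechanics are shown, under that assumption.

```python
import numpy as np
import cv2

def restore_by_backward_mapping(captured, project_fn, out_shape):
    """Generic backward mapping: for every pixel (u, v) of the restored
    output image, project_fn returns the (x, y) position to sample in the
    captured catadioptric image.  project_fn stands in for the paper's
    mirror/camera projection model."""
    h, w = out_shape
    us, vs = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x, map_y = project_fn(us, vs)       # source coordinate per output pixel
    return cv2.remap(captured, map_x.astype(np.float32),
                     map_y.astype(np.float32), cv2.INTER_LINEAR)
```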

Effect of slice inclination and object position within the field of view on the measurement accuracy of potential implant sites on cone-beam computed tomography

  • Saberi, Bardia Vadiati;Khosravifard, Negar;Nourzadeh, Alireza
    • Imaging Science in Dentistry / v.50 no.1 / pp.37-43 / 2020
  • Purpose: The purpose of this study was to evaluate the accuracy of linear measurements in the horizontal and vertical dimensions based on object position and slice inclination in cone-beam computed tomography (CBCT) images. Materials and Methods: Ten dry sheep hemi-mandibles, each with 4 sites (incisor, canine, premolar, and molar), were evaluated when either centrally or peripherally positioned within the field of view (FOV) with the image slices subjected to either oblique or orthogonal inclinations. Four types of images were created of each region: central/cross-sectional, central/coronal, peripheral/cross-sectional, and peripheral/coronal. The horizontal and vertical dimensions were measured for each region of each image type. Direct measurements of each region were obtained using a digital caliper in both horizontal and vertical dimensions. CBCT and direct measurements were compared using the Bland-Altman plot method. P values <0.05 were considered to indicate statistical significance. Results: The buccolingual dimension of the incisor and premolar areas and the height of the incisor, canine, and molar areas showed statistically significant differences on the peripheral/coronal images compared to the direct measurements (P<0.05). Molar area height in the central/coronal slices also differed significantly from the direct measurements (P<0.05). Cross-sectional images of either the central or peripheral position had no marked difference from the gold-standard values, indicating sufficient accuracy. Conclusion: Peripheral object positioning within the FOV in combination with applying an orthogonal inclination to the slices resulted in significant inaccuracies in the horizontal and vertical measurements. The most undesirable effect was observed in the molar area and the vertical dimension.
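
The Bland-Altman comparison used in the study can be sketched as follows; the 1.96-sigma limits of agreement are the conventional choice, and the measurement arrays are hypothetical placeholders rather than the study's data.

```python
import numpy as np

def bland_altman(cbct_mm, caliper_mm):
    """Agreement between CBCT and direct caliper measurements:
    returns the bias (mean difference) and the 95% limits of agreement."""
    diff = np.asarray(cbct_mm, dtype=float) - np.asarray(caliper_mm, dtype=float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, (bias - spread, bias + spread)
```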

A Study on a Visual Sensor System for Weld Seam Tracking in Robotic GMA Welding (GMA 용접로봇용 용접선 시각 추적 시스템에 관한 연구)

  • Kim, Dong-Ho;Kim, Jae-Woong
    • Journal of Welding and Joining / v.19 no.2 / pp.208-214 / 2001
  • In this study, we constructed a visual sensor system for real-time weld seam tracking in GMA welding. The sensor part consists of a CCD camera, a band-pass filter, a diode laser system with a cylindrical lens, and a vision board for inter-frame processing. We used a commercial robot system that includes a GMA welding machine. To extract the weld seam, we used inter-frame processing on the vision board, with which we could remove the noise due to spatter and fume in the image. Since the image obtained by inter-frame processing was of good quality, we could use the simplest methods to extract the weld seam from the image, namely the first differential and the central difference method. We also applied a moving average to the successive weld seam position data to reduce data fluctuation. In the experiments, the developed robot system with the visual sensor was able to track the most common weld seams, such as a fillet joint, a V-groove, and a lap joint, whose weld seams include planar and height-directional variation.
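
The inter-frame processing and moving-average steps can be sketched as below. The exact operation performed on the vision board is not specified in the abstract, so the frame-combination rule shown (a per-pixel minimum of consecutive frames) is only one plausible way to suppress transient spatter; the window size is likewise an assumption.

```python
import numpy as np

def interframe_process(frame_prev, frame_curr):
    """Keep only what is present in both consecutive frames; transient bright
    noise such as spatter or fume appears in one frame only and is suppressed."""
    return np.minimum(frame_prev, frame_curr)

def smooth_positions(seam_positions, window=5):
    """Moving average over the successive seam-position data to reduce
    frame-to-frame fluctuation."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(seam_positions, dtype=float), kernel, mode='valid')
```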


Visibility detection approach to road scene foggy images

  • Guo, Fan;Peng, Hui;Tang, Jin;Zou, Beiji;Tang, Chenggong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.9 / pp.4419-4441 / 2016
  • A cause of vehicle accidents is the reduced visibility due to bad weather conditions such as fog. Therefore, an onboard vision system should take visibility detection into account. In this paper, we propose a simple and effective approach for measuring the visibility distance using a single camera placed onboard a moving vehicle. The proposed algorithm is controlled by a few parameters and mainly includes camera parameter estimation, region of interest (ROI) estimation and visibility computation. Thanks to the ROI extraction, the position of the inflection point may be measured in practice. Thus, combined with the estimated camera parameters, the visibility distance of the input foggy image can be computed with a single camera and just the presence of road and sky in the scene. To assess the accuracy of the proposed approach, a reference target based visibility detection method is also introduced. The comparative study and quantitative evaluation show that the proposed method can obtain good visibility detection results with relatively fast speed.
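
The visibility computation itself is not detailed in the abstract; the sketch below assumes the commonly used pinhole-camera relation in which the visibility distance is inversely proportional to the image-row gap between the inflection point of the road-region intensity profile and the horizon. The parameter `lam` and the profile handling are assumptions, not the paper's formulation.

```python
import numpy as np

def inflection_row(road_intensity_profile):
    """Inflection point of the vertical intensity profile of the road ROI,
    taken as the row where the first derivative of a smooth S-shaped
    luminance curve is largest in magnitude."""
    profile = np.asarray(road_intensity_profile, dtype=float)
    return int(np.argmax(np.abs(np.gradient(profile))))

def visibility_distance(v_inflection, v_horizon, lam):
    """Assumed pinhole-camera relation: distance grows as the inflection row
    approaches the horizon row; lam bundles camera height, focal length and
    pitch obtained from the camera parameter estimation."""
    return lam / max(v_inflection - v_horizon, 1e-6)
```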

A Study on the Vision Sensor System for Tracking the I-Butt Weld Joints (I형 맞대기 용접선 추적용 시각센서 시스템에 관한 연구)

  • Bae, Hee-Soo;Kim, Jae-Woong
    • Journal of the Korean Society for Precision Engineering / v.18 no.9 / pp.179-185 / 2001
  • In this study, a visual sensor system for tracking the weld seam of I-butt weld joints in GMA welding was constructed. The sensor system consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatter and arc light. In order to obtain an enhanced image, the quantitative relationship between laser intensity and iris number was investigated. Through repeated experiments, the shutter speed was set at 1 millisecond to minimize the effect of spatter on the image, and as a result most of the spatter traces in the image were found to be reduced. A region of interest was defined within the entire image, and the gray level of the detected laser line was compared to that of the weld line. The differences between these gray levels allow the position of the weld joint to be located using the central difference method. The results showed that, as long as the weld line was within $\pm15^{\circ}$ of the longitudinal straight line, the system constructed in this study could track the weld line successfully. Since the processing time was reduced to 0.05 s, it is expected that the developed method could be adopted for high-speed welding such as laser welding.
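
The central difference step used to locate the joint can be sketched as follows, assuming a 1-D gray-level profile sampled along the detected laser line; the function names are illustrative, not from the paper.

```python
import numpy as np

def central_difference(profile):
    """Central-difference derivative of a 1-D gray-level profile:
    f'(x) ~ (f(x + 1) - f(x - 1)) / 2."""
    p = np.asarray(profile, dtype=float)
    return (p[2:] - p[:-2]) / 2.0

def weld_joint_position(profile):
    """The joint gap shows up as the sharpest gray-level change along the
    laser line, i.e. the largest-magnitude central difference."""
    d = central_difference(profile)
    return int(np.argmax(np.abs(d))) + 1   # +1 compensates the trimmed border
```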
