• Title/Summary/Keyword: Image pixel


Region-based Multi-level Thresholding for Color Image Segmentation (영역 기반의 Multi-level Thresholding에 의한 컬러 영상 분할)

  • Oh, Jun-Taek;Kim, Wook-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.6 s.312 / pp.20-27 / 2006
  • Multi-level thresholding is a method widely used in image segmentation. However, most existing methods are not well suited to direct use in practical applications, nor are they readily extended to a full image segmentation step. This paper proposes region-based multi-level thresholding as an image segmentation method. First, we classify the pixels of each color channel into two clusters using the EWFCM (Entropy-based Weighted Fuzzy C-Means) algorithm, an improved FCM algorithm that incorporates spatial information between pixels. To obtain better segmentation results, the number of clusters is then reduced by a region-based reclassification step based on the similarity between the regions in one cluster and the other clusters. The clusters are created from the per-channel classification information of the pixels. Finally, because many regions still remain in the image, we perform region merging as a post-processing step using a Bayesian algorithm based on the Kullback-Leibler distance between a region and its neighboring regions. Experiments show that region-based multi-level thresholding is superior to cluster- and pixel-based multi-level thresholding and to the existing methods, and that much better segmentation results are obtained with the post-processing method.
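
The merging criterion above relies on a Kullback-Leibler distance between a region and its neighbors. As a rough illustration only (the paper's EWFCM clustering, Bayesian merging rule, and thresholds are not reproduced), a symmetric KL distance between the intensity histograms of two regions might be computed as follows; the helper `merge_if_similar`, its `threshold`, and the 32-bin histograms are hypothetical choices.

```python
import numpy as np

def kl_distance(hist_p, hist_q, eps=1e-12):
    """Symmetric Kullback-Leibler distance between two normalized histograms."""
    p = hist_p / (hist_p.sum() + eps)
    q = hist_q / (hist_q.sum() + eps)
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def merge_if_similar(region_a, region_b, threshold=0.5, bins=32):
    """Decide whether two neighboring regions (1-D arrays of pixel values)
    should be merged, based on the KL distance of their histograms."""
    ha, _ = np.histogram(region_a, bins=bins, range=(0, 255))
    hb, _ = np.histogram(region_b, bins=bins, range=(0, 255))
    return kl_distance(ha.astype(float), hb.astype(float)) < threshold
```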

Implementation of Web-based Remote Multi-View 3D Imaging Communication System Using Adaptive Disparity Estimation Scheme (적응적 시차 추정기법을 이용한 웹 기반의 원격 다시점 3D 화상 통신 시스템의 구현)

  • Ko Jung-Hwan;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.1C / pp.55-64 / 2006
  • In this paper, a new web-based remote 3D imaging communication system employing an adaptive matching algorithm is proposed. In the proposed method, feature values are extracted from the stereo image pair through estimation of the disparity and the similarities between pixels of the stereo images. The matching window size for disparity estimation is then selected adaptively according to the magnitude of this feature value. Finally, the detected disparity map and the left image are transmitted to the client side over the network channel, where the right image is reconstructed and intermediate views are synthesized in real time by a linear combination of the left and right images using interpolation. Experiments on real-time web-based transmission and intermediate-view synthesis with two stereo image sets, 'Joo' and 'Hoon', captured by a real camera show that the PSNRs of the intermediate views reconstructed with the proposed transmission scheme reach 30 dB for 'Joo' and 27 dB for 'Hoon', while the delay required to obtain the intermediate image of 4 views is kept to a fast 67.2 ms on average.
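
The adaptive matching idea, choosing the disparity-estimation window from a per-pixel feature value, could be sketched (very loosely, and not with the paper's actual feature definition, window sizes, or thresholds) as a toy SAD block matcher; `small_win`, `large_win`, and `var_thresh` below are assumed parameters.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=32,
                             small_win=3, large_win=9, var_thresh=100.0):
    """Toy SAD block matching on rectified grayscale images: a larger
    window is used in low-texture (low local variance) areas, a smaller
    one near detailed features."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            # crude 'feature value': local variance in a 5x5 neighborhood
            y0, y1 = max(0, y - 2), min(h, y + 3)
            x0, x1 = max(0, x - 2), min(w, x + 3)
            win = small_win if left[y0:y1, x0:x1].var() > var_thresh else large_win
            r = win // 2
            patch_l = left[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1].astype(float)
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                patch_r = right[max(0, y - r):y + r + 1,
                                max(0, x - d - r):x - d + r + 1].astype(float)
                if patch_r.shape != patch_l.shape:
                    continue  # skip windows clipped by the image border
                sad = np.abs(patch_l - patch_r).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```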

A Deblurring Algorithm Combined with Edge Directional Color Demosaicing for Reducing Interpolation Artifacts (컬러 보간 에러 감소를 위한 에지 방향성 컬러 보간 방법과 결합된 디블러링 알고리즘)

  • Yoo, Du Sic;Song, Ki Sun;Kang, Moon Gi
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.7 / pp.205-215 / 2013
  • In digital imaging systems, the Bayer pattern is widely used, and the observed image is degraded by optical blur during the image acquisition process. Generally, demosaicing and deblurring are performed separately in order to convert a blurred Bayer image into a high-resolution color image. However, the demosaicing process often generates visible artifacts such as the zipper effect and Moire artifacts when interpolating across the edge direction in a Bayer pattern image, and these artifacts are emphasized by the deblurring process. To solve this problem, this paper proposes a deblurring algorithm combined with an edge-directional color demosaicing method. The proposed method consists of an interpolation step and a region classification step. In the interpolation step, interpolation and deblurring are performed simultaneously along the horizontal and vertical directions, respectively. In the region classification step, the characteristics of local regions are determined at each pixel position and the directionally obtained values are fused region-adaptively. The proposed method also uses a blur model based on wave optics, and the deblurring filter is calculated from the estimated characteristics of the local regions. Simulation results show that the proposed deblurring algorithm prevents the boosting of artifacts and outperforms conventional approaches in both objective and subjective terms.
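
As a small, generic illustration of edge-directional interpolation (not the paper's joint deblurring-demosaicing pipeline or its wave-optics blur model), the green value at a red/blue site of a Bayer mosaic can be estimated along the horizontal and vertical directions and fused with inverse-gradient weights; the function name and weighting scheme below are assumptions.

```python
import numpy as np

def directional_green_estimate(bayer, y, x):
    """At a red/blue site (y, x) of a Bayer mosaic (float ndarray, interior
    pixel), estimate green along the horizontal and vertical directions and
    fuse the two estimates with inverse-gradient weights, a common
    edge-directed demosaicing heuristic."""
    g_h = 0.5 * (bayer[y, x - 1] + bayer[y, x + 1])   # horizontal green average
    g_v = 0.5 * (bayer[y - 1, x] + bayer[y + 1, x])   # vertical green average
    grad_h = abs(bayer[y, x - 1] - bayer[y, x + 1])   # horizontal gradient
    grad_v = abs(bayer[y - 1, x] - bayer[y + 1, x])   # vertical gradient
    w_h = 1.0 / (1.0 + grad_h)                        # weight against strong edges
    w_v = 1.0 / (1.0 + grad_v)
    return (w_h * g_h + w_v * g_v) / (w_h + w_v)
```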

Object-Based Integral Imaging Depth Extraction Using Segmentation (영상 분할을 이용한 객체 기반 집적영상 깊이 추출)

  • Kang, Jin-Mo;Jung, Jae-Hyun;Lee, Byoung-Ho;Park, Jae-Hyeung
    • Korean Journal of Optics and Photonics / v.20 no.2 / pp.94-101 / 2009
  • A novel method for the reconstruction of 3D shape and texture from elemental images has been proposed. Using this method, we can estimate a full 3D polygonal model of objects with seamless triangulation. In the triangulation process, however, all the objects are stitched together, which generates phantom surfaces that bridge depth discontinuities between different objects. To solve this problem, points should be connected only within a single object, and we adopt a segmentation process to this end. The entire procedure of the proposed method is as follows. First, the central pixel of each elemental image is computed to extract the spatial positions of objects by correspondence analysis. Second, the object points of the central pixels from neighboring elemental images are projected onto a specific elemental image. The center sub-image is then segmented and each object is labeled. We use the normalized cut algorithm for segmentation of the center sub-image, and to enhance the speed of segmentation we apply the watershed algorithm before the normalized cut. Using the segmentation results, the subdivision process is applied only to pixels within the same object. The refined grid is filtered with median and Gaussian filters to improve reconstruction quality. Finally, the vertices are connected and an object-based triangular mesh is formed. We conducted experiments using real objects and verified the proposed method.
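
The watershed pre-segmentation used to speed up the normalized cut might look roughly like the sketch below, written with scikit-image; the percentile-based marker selection is a guess, and the normalized-cut grouping and object labeling stages are omitted.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def oversegment_center_subimage(gray):
    """Watershed over-segmentation of a grayscale center sub-image, as a
    cheap first pass before a (not shown) normalized-cut grouping step."""
    gradient = sobel(gray)  # edge strength used as the watershed landscape
    # crude markers: darkest vs. brightest pixels; a real pipeline would
    # pick these more carefully (e.g. from local minima or object priors)
    markers = np.zeros(gray.shape, dtype=np.int32)
    markers[gray < np.percentile(gray, 20)] = 1
    markers[gray > np.percentile(gray, 80)] = 2
    return watershed(gradient, markers)
```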

Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan;Sun, Lina;Li, Xiaotao;Su, Jian;Hao, Zhaobing;Lu, Xue
    • Journal of the Optical Society of Korea / v.18 no.4 / pp.317-329 / 2014
  • In vision measurement systems based on structured light, the key to detection precision is accurately determining the central position of the projected laser line in the image. The purpose of this research is to extract laser line centers using a decision function designed to distinguish the real centers from candidate points with a high recognition rate. First, the image is preprocessed with a difference-image method to segment the laser line. Second, feature points at the integer-pixel level are selected as initial laser line centers using the eigenvalues of the Hessian matrix. Third, since the light intensity of a laser line obeys a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian-matrix eigenvalues of the candidate centers is presented to balance the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian function and the constant function, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function captures the characteristic that one eigenvalue approaches zero and increases the sensitivity of the decision function to that characteristic, which corresponds to the longitudinal direction of the laser line. The sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, making the decision function more sensitive to that feature, which is related to the transverse direction of the laser line. In the proposed model, the decision function assigns higher values to the real centers by jointly considering the properties in the longitudinal and transverse directions of the laser line. Moreover, the method provides a decision value between 0 and 1 for arbitrary candidate centers, which yields a normalized measure for different laser lines in different images. Pixels whose normalized results are close to 1 are determined to be the real centers by progressive scanning of the image columns. Finally, the zero point of a second-order Taylor expansion along the eigenvector direction is employed to further refine the extracted central points to the subpixel level. The experimental results show that the method based on this normalization model accurately extracts the coordinates of the laser line centers and achieves a higher recognition rate in two groups of experiments.
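
The integer-pixel candidate selection from Hessian-matrix eigenvalues could be sketched as follows; the paper's Gaussian/sinusoidal recognition functions and normalized decision model are not reproduced, and `sigma` and `ratio` are assumed parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(image, sigma=2.0):
    """Per-pixel eigenvalues of the Hessian of a Gaussian-smoothed image."""
    ixx = gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2 (columns)
    iyy = gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2 (rows)
    ixy = gaussian_filter(image, sigma, order=(1, 1))  # mixed derivative
    trace = ixx + iyy
    root = np.sqrt((ixx - iyy) ** 2 + 4.0 * ixy ** 2)
    lam1 = 0.5 * (trace + root)
    lam2 = 0.5 * (trace - root)
    return lam1, lam2  # lam2 <= lam1 everywhere

def candidate_line_centers(image, sigma=2.0, ratio=4.0):
    """Integer-pixel candidates for a bright line: one eigenvalue strongly
    negative (across the line), the other near zero (along the line)."""
    lam1, lam2 = hessian_eigenvalues(image.astype(float), sigma)
    return (lam2 < 0) & (np.abs(lam2) > ratio * np.abs(lam1))
```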

A System Model of Iterative Image Reconstruction for High Sensitivity Collimator in SPECT (SPECT용 고민감도 콜리메이터를 위한 반복적 영상재구성방법의 시스템 모델 개발)

  • Bae, Seung-Bin;Lee, Hak-Jae;Kim, Young-Kwon;Kim, You-Hyun;Lee, Ki-Sung;Joung, Jin-Hun
    • Journal of radiological science and technology / v.33 no.1 / pp.31-36 / 2010
  • The low-energy high-resolution (LEHR) collimator is the most widely used collimator in SPECT imaging. It has an advantage in image resolution but has difficulty achieving high sensitivity because of its narrow hole size and long septa. Throughput in SPECT can be improved by increasing the counts per second with the use of high-sensitivity collimators. The purpose of this study is to develop a system model for iterative image reconstruction that recovers the resolution degradation caused by high-sensitivity collimators with larger holes. We used a fan-beam model instead of a parallel-beam model to calculate the detection probabilities, in order to accurately model a high-sensitivity collimator with wider holes. In addition, weight factors were calculated and applied to the probabilities as a function of the incident angle of the incoming photons and the distance from the source to the collimator surface. The proposed system model yielded equivalent performance with the same counts (i.e., a shortened acquisition time) and improved image quality for the same acquisition time. The proposed method can be effectively applied to improve the resolution of the pixel collimators of next-generation solid-state detectors.
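
The abstract does not give the system matrix in detail, so the sketch below only shows the standard MLEM iteration into which such a fan-beam system model (with incidence-angle and distance weights) would be plugged; the matrix layout and iteration count are assumptions.

```python
import numpy as np

def mlem(system_matrix, projections, n_iter=20):
    """Generic MLEM update x_{k+1} = x_k / (A^T 1) * A^T (p / (A x_k)),
    where the system matrix A would encode the collimator response."""
    A = system_matrix                     # shape: (n_projection_bins, n_image_pixels)
    x = np.ones(A.shape[1])               # flat initial image estimate
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iter):
        forward = A @ x + 1e-12           # forward projection of current estimate
        x = x / np.maximum(sens, 1e-12) * (A.T @ (projections / forward))
    return x
```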

Line-of-Sight (LOS) Vector Adjustment Model for Restitution of SPOT 4 Imagery (SPOT 4 영상의 기하보정을 위한 시선 벡터 조정 모델)

  • Jung, Hyung-Sup
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.2 / pp.247-254 / 2010
  • In this paper, a new approach for correcting the geometric distortion of SPOT 4 imagery is studied. Two new equations were derived from the geometric relationship between the satellite and the Earth, and a line-of-sight (LOS) vector adjustment model for SPOT 4 imagery was implemented. The model adjusts the LOS vector under the assumption that the orbital information provided by the receiving station is uncertain and that this uncertainty produces a constant error over the image. The model is verified using a SPOT 4 image with a high look angle and thirty-five ground points measured by GPS, comprising 10 GCPs (Ground Control Points) and 25 check points. Over all thirty-five points, the image geometry calculated from the given satellite information (satellite position, velocity, attitude, look angles, etc.) was distorted by a constant error. The study confirmed that the LOS vector adjustment model can be applied to SPOT 4 imagery. Using this model, the RMSEs (Root Mean Square Errors) of the twenty-five check points, obtained while increasing the number of GCPs from two to ten, were less than one pixel. As a result, the LOS vector adjustment model can efficiently correct the geometry of SPOT 4 images with only two GCPs. The method is also expected to give good results for other satellite images whose geometry is similar to that of SPOT.
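
As a deliberately simplified illustration of the constant-error assumption (the paper adjusts the LOS vectors themselves, not image-space coordinates), one could estimate a constant bias from GCP residuals and evaluate the check-point RMSE as follows; the array shapes and function name are assumptions.

```python
import numpy as np

def constant_bias_correction(pred_gcp, obs_gcp, pred_check, obs_check):
    """Estimate a constant (line, sample) bias from GCP residuals and
    report the check-point RMSE after removing it.  Inputs are (N, 2)
    arrays of predicted and observed image coordinates.  This is a gross
    simplification: the paper adjusts the LOS vectors themselves."""
    bias = (obs_gcp - pred_gcp).mean(axis=0)       # mean residual over GCPs
    residual = obs_check - (pred_check + bias)     # remaining check-point error
    rmse = np.sqrt((residual ** 2).sum(axis=1).mean())
    return bias, rmse
```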

A Study on the Dynamic Range Expansion of the Shack-Hartmann Wavefront Sensor using Image Processing (영상처리 기법을 이용한 샥-하트만 파면 센서의 측정범위 확장에 대한 연구)

  • Kim, Min-Seok;Kim, Ji-Yeon;Uhm, Tae-Kyung;Youn, Sung-Kie;Lee, Jun-Ho
    • Korean Journal of Optics and Photonics / v.18 no.6 / pp.375-382 / 2007
  • The Shack-Hartmann wavefront sensor is composed of a lenslet array that generates spot images, from which local slopes are calculated and the overall wavefront is measured. The general principle of wavefront reconstruction is that the spot centroid of each lenslet is calculated from the pixel intensity values in its subaperture, and the overall wavefront is then reconstructed from the local slopes obtained from the deviations of the centroids from their reference positions. Hence, the spot image of each lenslet has to remain within its subaperture for exact measurement of the wavefront. However, the spot of a lenslet deviates from its subaperture when a wavefront with large local slopes enters the Shack-Hartmann sensor. In this research, we propose a spot-image searching method that flexibly finds the area of each measured spot image and determines the centroid of each spot within that area. We also propose algorithms that match these centroids to their reference points unambiguously, even if some of them lie outside the allocated subaperture. Finally, we verify the proposed algorithm with a defocus measurement on an experimental Shack-Hartmann wavefront sensor setup. It is shown that the proposed algorithm can expand the dynamic range without additional devices.
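
The basic centroiding step described at the start of the abstract, without the paper's flexible spot-area search or the centroid-to-reference matching, might be sketched as an intensity-weighted center of mass; the `background` parameter is an assumption.

```python
import numpy as np

def spot_centroid(subimage, background=0.0):
    """Intensity-weighted centroid (center of mass) of a spot image,
    after subtracting an assumed constant background level.
    Returns (row, col) in pixel coordinates, or None if no signal."""
    img = np.clip(subimage.astype(float) - background, 0.0, None)
    total = img.sum()
    if total <= 0:
        return None  # no spot found in this window
    ys, xs = np.indices(img.shape)
    return (float((ys * img).sum() / total), float((xs * img).sum() / total))
```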

Advanced LWIR Thermal Imaging Sight Design (원적외선 2세대 열상조준경의 설계)

  • Hong, Seok-Min;Kim, Hyun-Sook;Park, Yong-Chan
    • Korean Journal of Optics and Photonics / v.16 no.3 / pp.209-216 / 2005
  • A new second-generation advanced thermal imager, which can be used as a battle tank sight, has been developed by ADD. The system uses a 480×6 TDI HgCdTe detector made by Sofradir, operating in the 7.7-10.3 μm wavelength band. The IR optics has dual fields of view, 2.67°×2° in NFOV and 10°×7.5° in WFOV, and also provides athermalization of the system, so the sensor can be used over a wide temperature range without any degradation of system performance. A scanning system able to display 470,000 pixels was developed, greatly increasing the pixel count compared with the first-generation thermal imaging system. To correct the non-uniformity of the detector arrays, a two-point correction method was developed using the thermoelectric cooler. Additionally, to enhance low-contrast images and improve detection capability, we propose a new histogram-processing technique suited to the contrast distribution characteristics of thermal imagery. Through these image processing techniques, we obtained high-quality thermal images. The MRTD of the LWIR thermal sight is below 0.05 K at a spatial frequency of 2 cycles/mrad in the narrow field of view.
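
A generic two-point non-uniformity correction, of the kind referred to in the abstract but not necessarily identical to the ADD implementation, could be sketched as follows; the reference-frame naming and the temperature-domain output are assumptions.

```python
import numpy as np

def two_point_nuc(raw, low_ref, high_ref, t_low, t_high):
    """Two-point non-uniformity correction: per-pixel gain and offset are
    derived from two uniform reference frames recorded at temperatures
    t_low and t_high (e.g. set with a thermoelectric cooler).  All image
    arguments are float arrays of identical shape."""
    gain = (t_high - t_low) / np.maximum(high_ref - low_ref, 1e-6)
    offset = t_low - gain * low_ref
    return gain * raw + offset  # corrected, temperature-equivalent frame
```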

Accelerated Convolution Image Processing by Using Look-Up Table and Overlap Region Buffering Method (Loop-Up Table과 필터 중첩영역 버퍼링 기법을 이용한 컨벌루션 영상처리 고속화)

  • Kim, Hyun-Woo;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.17-22 / 2012
  • Convolution filtering methods have been widely applied in various digital signal processing fields such as image blurring, sharpening, edge detection, and noise reduction. According to the application purpose, the filter mask size, shape, and mask values are selected in advance, and the designed filter is applied to the input image for convolution processing. In this paper, we propose an acceleration method for convolution processing that uses a two-dimensional look-up table (LUT) and an overlap-region buffering technique. First, for a fixed convolution mask, the multiplications between the 8- or 10-bit pixel values of the input image and the filter mask values are computed in advance, and the results stored in the LUT are looked up during the convolution process. Second, based on the symmetric structure of convolution filters, the inherently duplicated operation regions are analyzed, and the results saved in a predefined memory buffer at the previous step are recalled and reused at the current step. Through this buffering, unnecessarily repeated filter operations on the same regions are minimized in a sequential manner. Because the proposed algorithms minimize the amount of computation needed for the convolution operation, they work well both in embedded systems with limited computational resources and on general personal computers. A series of experiments under various conditions verifies the effectiveness and usefulness of the proposed methods.
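
The look-up-table idea, precomputing every pixel-value/mask-coefficient product, could be sketched as below; the overlap-region buffering and the 10-bit case from the paper are not reproduced, and the plain Python loops only serve to show the table structure, not the claimed speed-up.

```python
import numpy as np

def build_lut(mask, bit_depth=8):
    """Precompute value * coefficient for every possible pixel value and
    every mask coefficient (the two-dimensional look-up table idea)."""
    values = np.arange(2 ** bit_depth, dtype=np.float64)
    return np.outer(values, mask.ravel())          # shape: (levels, mask_size)

def lut_convolve(image, mask, bit_depth=8):
    """Filtering that replaces per-pixel multiplications with LUT reads.
    Note: this applies the mask without flipping (cross-correlation),
    which equals convolution for the symmetric masks typically used."""
    lut = build_lut(mask, bit_depth)
    kh, kw = mask.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")  # integer image
    out = np.zeros(image.shape, dtype=np.float64)
    cols = np.arange(kh * kw)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            patch = padded[y:y + kh, x:x + kw].ravel()  # pixel values as LUT rows
            out[y, x] = lut[patch, cols].sum()          # sum of precomputed products
    return out
```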