• Title/Summary/Keyword: Image pixel


Quantitative Analysis of Digital Radiography Pixel Values to Absorbed Energy of Detector Based on the X-Ray Energy Spectrum Model (X선 스펙트럼 모델을 이용한 DR 화소값과 디텍터 흡수에너지의 관계에 대한 정량적 분석)

  • Kim Do-Il;Kim Sung-Hyun;Ho Dong-Su;Choe Bo-young;Suh Tae-Suk;Lee Jae-Mun;Lee Hyoung-Koo
    • Progress in Medical Physics / v.15 no.4 / pp.202-209 / 2004
  • Flat-panel digital radiography (DR) systems have recently become useful and important in diagnostic radiology. DRs with amorphous silicon photosensors normally use CsI(Tl) as the scintillator, which produces visible light corresponding to the absorbed radiation energy. The visible light photons are converted into electric signals in the amorphous silicon photodiodes, which constitute a two-dimensional array. To produce good-quality images, the detailed response of DR detectors to radiation must be studied. The relationship between air exposure and DR output has been investigated in many studies, but only under fixed tube voltages. In this study, we investigated the relationship between DR output and X-rays in terms of the energy absorbed in the detector, rather than air exposure, using the X-ray energy spectrum model SPEC-l8. Measured exposure was compared with calculated exposure to obtain the inherent filtration, which is an important input variable of SPEC-l8. The energy absorbed in the detector was calculated using an algorithm for computing the absorbed energy in a material, and pixel values of real images were obtained under various conditions. A characteristic curve was derived from the relationship between the two parameters, and the results were verified using phantoms made of water and aluminum: the pixel values of the phantom images were estimated and compared with the characteristic curve under various conditions. The relationship between DR output and the energy absorbed in the detector was found to be almost linear. In an experiment using the phantoms, the estimated pixel values agreed with the characteristic curve, although scattered photons introduced some errors; the effect of scattered X-rays requires further study because it was not included in the calculation algorithm. The results of this study can provide useful information for the pre-processing of digital radiography images.
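The near-linear characteristic curve between detector-absorbed energy and DR pixel value described above can be sketched as a simple straight-line fit. The calibration values below are illustrative placeholders, not the study's measurements:

```python
import numpy as np

# Hypothetical calibration pairs: detector-absorbed energy (arbitrary units)
# and the corresponding mean DR pixel values, standing in for the study's
# measured characteristic curve.
absorbed_energy = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
pixel_value = np.array([410.0, 820.0, 1225.0, 1640.0, 2050.0, 2460.0])

# Fit pixel = a * energy + b, mirroring the near-linear relationship the
# study reports between DR output and absorbed energy.
a, b = np.polyfit(absorbed_energy, pixel_value, 1)

def predict_pixel(energy):
    """Predict a DR pixel value from detector-absorbed energy."""
    return a * energy + b
```

Once fitted on real calibration exposures, such a curve can be inverted for pre-processing, mapping raw pixel values back to absorbed energy.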


Usability Evaluation of Applied Low-dose CT When Examining Urinary Calculus Using Computed Tomography (컴퓨터 단층촬영을 이용한 요로결석 검사에서 저선량 CT의 적용에 대한 유용성 평가)

  • Kim, Hyeon-Jin;Ji, Tae-Jeong
    • The Journal of the Korea Contents Association / v.17 no.6 / pp.81-85 / 2017
  • The aim of this study was to evaluate the usability of a low-dose computed tomography (LDCT) protocol for examining urinary calculus. The subjects were urological patients who visited a medical institution in Busan from June to December 2016, and the protocol used was low-dose CT with 50% Adaptive Statistical Iterative Reconstruction (ASIR). In the quantitative analysis, the mean pixel value and standard deviation within the kidney region of interest (ROI) of the axial image were 26.21±7.08 for the pre-scan abdomen CT and 20.03±8.16 for low-dose CT; for the coronal image they were 22.07±7.35 and 21.67±6.11, respectively. In the qualitative analysis, the four raters' mean scores for observed kidney artifacts were 19.14±0.36 with the abdomen CT protocol and 19.17±0.43 with low-dose CT, and the mean scores for resolution and contrast were 19.35±0.70 and 19.29±0.58, respectively. The exposure dose analysis showed mean CTDIvol and DLP values of 18.02 mGy and 887.51 mGy·cm for the pre-scan abdomen CT, versus 7.412 mGy and 361.22 mGy·cm for the low-dose CT protocol, corresponding to dose reduction rates of 58.82% and 59.29%, respectively.
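The quantitative measures reported above, mean/SD within a rectangular kidney ROI and the percent dose reduction, can be sketched as follows (helper names are illustrative, not from the paper):

```python
import numpy as np

def roi_stats(image, rows, cols):
    """Mean pixel value and standard deviation inside a rectangular ROI.

    rows/cols are (start, stop) index pairs in NumPy slice convention.
    """
    roi = image[rows[0]:rows[1], cols[0]:cols[1]]
    return float(roi.mean()), float(roi.std())

def dose_reduction_rate(reference, low_dose):
    """Percent reduction of a dose index (e.g. CTDIvol or DLP)."""
    return (reference - low_dose) / reference * 100.0
```

With the study's CTDIvol values (18.02 mGy vs. 7.412 mGy), `dose_reduction_rate` gives roughly 58.9%, matching the ~59% reduction reported.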

Analysis of Contrast Medium Dilution Rate for changes in Tube Current and SOD, which are Parameters of Lower Limb Angiography Examination (하지 혈관조영검사 시 매개변수인 관전류와 SOD에 변화에 대한 조영제 희석률 분석)

  • Kong, Chang gi;Han, Jae Bok
    • Journal of the Korean Society of Radiology / v.14 no.5 / pp.603-612 / 2020
  • This study examines how the tube current (mA) and source-to-object distance (SOD), parameters of lower-limb angiography, interact with the dilution rate of contrast media of different concentrations (300, 320, 350) to affect the image. Using a water phantom containing a 3 mm vessel model, custom made to the diameter of a peripheral vessel, we measured SNR and CNR while varying the tube current, SOD, and dilution of contrast media of each concentration, and analyzed the coefficients of variation (CV < 10). SNR and CNR were measured with ImageJ 1.50i (National Institutes of Health, USA); the mean pixel value (MPV) and standard deviation (SD) were read for the region of interest (ROI) and background of the phantom from DICOM (Digital Imaging and Communications in Medicine) 3.0 files transmitted to PACS. For tube current, comparing 146 mA and 102 mA, the coefficient of variation of both SNR and CNR remained below 10 down to a CM:N/S dilution of 30%:70%, but reached 10 or more at dilutions of 20%:80% to 10%:90%. The same pattern held for SOD: comparing 32.5 cm with 22.5 cm, and 32.5 cm with 12.5 cm, the coefficient of variation of both SNR and CNR stayed below 10 down to a CM:N/S dilution of 30%:70% and reached 10 or more at 20%:80% to 10%:90%. Consequently, for peripheral angiography of the lower extremities and other interventional procedures, setting a low tube current, bringing the table as close as possible to the image receptor, and using contrast medium of concentration 300 at a CM:N/S dilution of 30%:70% is suggested as the most efficient way to obtain images of appropriate density while reducing both the burden on the kidneys and the radiation exposure.
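The three figures of merit used above, SNR, CNR, and the coefficient of variation with its CV < 10 cut-off, reduce to short formulas over the ROI statistics (function names are illustrative):

```python
import numpy as np

def snr(roi_mean, roi_sd):
    """Signal-to-noise ratio of a vessel ROI: mean over its own noise."""
    return roi_mean / roi_sd

def cnr(roi_mean, bg_mean, bg_sd):
    """Contrast-to-noise ratio between vessel ROI and background."""
    return abs(roi_mean - bg_mean) / bg_sd

def coefficient_of_variation(values):
    """CV (%) across repeated measurements; the study keeps CV < 10."""
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean() * 100.0
```

A dilution series would be accepted only while `coefficient_of_variation` of the repeated SNR/CNR readings stays under 10.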

Detection Ability of Occlusion Object in Deep Learning Algorithm depending on Image Qualities (영상품질별 학습기반 알고리즘 폐색영역 객체 검출 능력 분석)

  • LEE, Jeong-Min;HAM, Geon-Woo;BAE, Kyoung-Ho;PARK, Hong-Ki
    • Journal of the Korean Association of Geographic Information Studies / v.22 no.3 / pp.82-98 / 2019
  • The importance of spatial information is rapidly rising. In particular, 3D spatial information construction and modeling of real-world objects, as in smart cities and digital twins, has become an important core technology. The constructed 3D spatial information is used in fields such as land management, landscape analysis, environment, and welfare services. Three-dimensional modeling with images achieves high visibility and realism of objects through texturing. However, texturing inevitably contains occlusion areas caused by physical obstructions such as roadside trees, adjacent objects, vehicles, and banners at the time of image acquisition. Such occlusion areas are a major cause of degraded realism and accuracy in the constructed 3D model. Various studies have been conducted to resolve occlusion areas, recently including deep learning algorithms for detecting and filling them. Deep learning requires sufficient training data, and the quality of the collected training data directly affects performance. This study therefore analyzed the ability to detect occlusion areas in images of various qualities, to verify how deep learning performance depends on the quality of the training data. Images containing occlusion-causing objects were generated at artificially quantified quality levels and fed to the implemented deep learning algorithm. The study found that, for brightness adjustment, the detection ratio dropped to 0.56 for brighter images, and that detection for pixel-size and artificial-noise adjustment decreased rapidly from the middle quality level onward. In the F-measure evaluation, the largest change, 0.53 points, occurred for the resolution of noise-adjusted images. The occlusion-detection ability per image quality will serve as a valuable criterion for the practical application of deep learning, and specifying the required level of image acquisition is expected to contribute greatly to its practical use.
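The F-measure used above to score occlusion-object detection is the harmonic mean of precision and recall over true/false positives and false negatives; a minimal sketch:

```python
def f_measure(tp, fp, fn):
    """F-measure (F1) from detection counts: harmonic mean of
    precision (tp / (tp + fp)) and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Comparing this score across brightness, pixel-size, and noise levels is exactly the kind of per-quality evaluation the study performs.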

Region-based Multi-level Thresholding for Color Image Segmentation (영역 기반의 Multi-level Thresholding에 의한 컬러 영상 분할)

  • Oh, Jun-Taek;Kim, Wook-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.6 s.312 / pp.20-27 / 2006
  • Multi-level thresholding is widely used in image segmentation, but most existing methods are not suited to direct use in applications, nor are they extended to a full image segmentation step. This paper proposes region-based multi-level thresholding as an image segmentation method. First, the pixels of each color channel are classified into two clusters using the EWFCM (entropy-based weighted fuzzy C-means) algorithm, an improved FCM algorithm that incorporates spatial information between pixels. To obtain better segmentation results, the number of clusters is then reduced by a region-based reclassification step based on the similarity between the regions within one cluster and the other clusters, where the clusters are created from the per-channel pixel classification. Finally, because many regions still remain in the image, a post-processing step merges regions using a Bayesian algorithm based on the Kullback-Leibler distance between a region and its neighboring regions. Experiments show that region-based multi-level thresholding is superior to cluster- and pixel-based multi-level thresholding and to existing methods, and that the post-processing step yields much better segmentation results.
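The first step above, splitting each channel's pixels into two fuzzy clusters, can be sketched with plain fuzzy C-means. Note this omits the paper's entropy-based weighting and spatial information (the "EW" part of EWFCM); only the FCM core is shown:

```python
import numpy as np

def fcm_two_clusters(pixels, m=2.0, iters=50):
    """Plain fuzzy C-means on 1-D pixel intensities with two clusters.

    Returns the cluster centers and the fuzzy membership matrix.
    The paper's EWFCM additionally uses entropy-based weights and
    spatial relations between pixels, which this sketch leaves out.
    """
    x = np.asarray(pixels, dtype=float)
    centers = np.array([x.min(), x.max()])       # spread-out initialization
    u = None
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))         # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)        # rows sum to 1
        centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
    return centers, u
```

Each pixel is then assigned to the cluster where its membership is largest, giving the two-level threshold per color channel.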

Implementation of Web-based Remote Multi-View 3D Imaging Communication System Using Adaptive Disparity Estimation Scheme (적응적 시차 추정기법을 이용한 웹 기반의 원격 다시점 3D 화상 통신 시스템의 구현)

  • Ko Jung-Hwan;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.1C / pp.55-64 / 2006
  • In this paper, a new web-based remote 3D imaging communication system employing an adaptive matching algorithm is suggested. In the proposed method, feature values are extracted from the stereo image pair by estimating the disparity and the similarities between the pixels of the stereo images, and the matching window size for disparity estimation is then adaptively selected according to the magnitude of this feature value. Finally, the detected disparity map and the left image are transmitted to the client over the network channel, where the right image is reconstructed and intermediate views are synthesized in real time by a linear combination of the left and right images using interpolation. Experiments on real-time web-based transmission and intermediate-view synthesis with two stereo image pairs, 'Joo' and 'Hoon', captured by a real camera, show that the PSNRs of the intermediate views reconstructed with the proposed transmission scheme reach 30 dB for 'Joo' and 27 dB for 'Hoon', and that the delay required to obtain the intermediate image of 4 views is kept to a very fast 67.2 ms on average.
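The client-side synthesis step, blending the left and right images into an intermediate view, can be sketched as below. This grossly simplifies the paper's method by assuming one global horizontal disparity instead of a per-pixel disparity map:

```python
import numpy as np

def intermediate_view(left, right, disparity, alpha):
    """Synthesize a view at fractional position alpha (0 = left camera,
    1 = right camera) by linearly blending the left image with a
    horizontally shifted right image.

    Assumption: a single integer disparity for the whole image; the
    paper instead warps per pixel using the transmitted disparity map.
    """
    shifted = np.roll(right, int(round(alpha * disparity)), axis=1)
    return (1.0 - alpha) * left + alpha * shifted
```

With a real disparity map, the same linear combination is applied per pixel, which is what keeps the client-side synthesis fast enough for real time.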

A Deblurring Algorithm Combined with Edge Directional Color Demosaicing for Reducing Interpolation Artifacts (컬러 보간 에러 감소를 위한 에지 방향성 컬러 보간 방법과 결합된 디블러링 알고리즘)

  • Yoo, Du Sic;Song, Ki Sun;Kang, Moon Gi
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.7 / pp.205-215 / 2013
  • In digital imaging systems, the Bayer pattern is widely used, and the observed image is degraded by optical blur during acquisition. Generally, demosaicing and deblurring are performed separately to convert a blurred Bayer image into a high-resolution color image. However, demosaicing often generates visible artifacts such as the zipper effect and moiré when interpolating across the edge direction in a Bayer-pattern image, and these artifacts are then amplified by the deblurring process. To solve this problem, this paper proposes a deblurring algorithm combined with an edge-directional color demosaicing method. The proposed method consists of an interpolation step and a region classification step. In the interpolation step, interpolation and deblurring are performed simultaneously along the horizontal and vertical directions, respectively. In the region classification step, the characteristics of local regions are determined at each pixel position, and the directionally obtained values are fused region-adaptively. The method also uses a blur model based on wave optics, and the deblurring filter is computed from the estimated characteristics of the local regions. Simulation results show that the proposed algorithm prevents the boosting of artifacts and outperforms conventional approaches in both objective and subjective terms.
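The core idea of edge-directional demosaicing, interpolating along rather than across an edge, can be shown with a minimal green-channel estimate at a red site of an RGGB mosaic. This illustrates only the directional-interpolation principle, not the paper's joint deblurring-demosaicing pipeline:

```python
import numpy as np

def green_at_red(bayer, y, x):
    """Edge-directional green estimate at a red pixel (y, x) in an RGGB
    Bayer mosaic: average the green neighbors along the direction with
    the smaller gradient, avoiding interpolation across the edge that
    causes zipper artifacts. Illustrative sketch only."""
    gh = abs(bayer[y, x - 1] - bayer[y, x + 1])  # horizontal gradient
    gv = abs(bayer[y - 1, x] - bayer[y + 1, x])  # vertical gradient
    if gh <= gv:
        return (bayer[y, x - 1] + bayer[y, x + 1]) / 2.0
    return (bayer[y - 1, x] + bayer[y + 1, x]) / 2.0
```

In the paper, horizontal and vertical estimates are computed everywhere and fused region-adaptively instead of picking one direction with a hard switch.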

Object-Based Integral Imaging Depth Extraction Using Segmentation (영상 분할을 이용한 객체 기반 집적영상 깊이 추출)

  • Kang, Jin-Mo;Jung, Jae-Hyun;Lee, Byoung-Ho;Park, Jae-Hyeung
    • Korean Journal of Optics and Photonics / v.20 no.2 / pp.94-101 / 2009
  • A novel method for reconstructing 3D shape and texture from elemental images has been proposed, which estimates a full 3D polygonal model of objects with seamless triangulation. In the triangulation process, however, all objects are stitched together, generating phantom surfaces that bridge the depth discontinuities between different objects. To solve this problem, points must be connected only within a single object, and we adopt a segmentation process to this end. The proposed method proceeds as follows. First, the central pixel of each elemental image is computed to extract the spatial positions of objects by correspondence analysis. Second, the object points of the central pixels from neighboring elemental images are projected onto a specific elemental image. The center sub-image is then segmented and each object is labeled; we use the normalized cut algorithm for this segmentation, preceded by the watershed algorithm to speed it up. Using the segmentation results, the subdivision process is applied only to pixels within the same object. The refined grid is filtered with median and Gaussian filters to improve reconstruction quality. Finally, the vertices are connected to form an object-based triangular mesh. Experiments using real objects verify the proposed method.
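The requirement that points be connected only within a single object amounts to assigning every segmented pixel an object label and restricting triangulation to pixels sharing a label. A simple 4-connected labeling pass (a stand-in for the paper's watershed plus normalized-cut segmentation) looks like this:

```python
import numpy as np
from collections import deque

def label_objects(mask):
    """4-connected labeling of a binary segmentation mask, so that the
    subdivision/triangulation step can connect points only within one
    object. Stand-in for the paper's watershed + normalized cut."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue                      # already labeled via BFS
        current += 1
        labels[sy, sx] = current
        queue = deque([(sy, sx)])
        while queue:                      # flood-fill one object
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current
```

Triangulating each label separately is what prevents the phantom surfaces that bridge depth discontinuities between objects.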

Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan;Sun, Lina;Li, Xiaotao;Su, Jian;Hao, Zhaobing;Lu, Xue
    • Journal of the Optical Society of Korea / v.18 no.4 / pp.317-329 / 2014
  • In vision measurement systems based on structured light, detection precision hinges on accurately determining the central position of the projected laser line in the image. The purpose of this research is to extract laser-line centers using a decision function built to distinguish the real centers from candidate points with a high recognition rate. First, the image is preprocessed with a difference-image method to segment the laser line. Second, feature points at the integer-pixel level are selected as initial line centers from the eigenvalues of the Hessian matrix. Third, since the intensity of a laser line follows a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian-matrix eigenvalues at the candidate centers is presented to balance the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian and constant functions, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function estimates the characteristic that one eigenvalue approaches zero, corresponding to the longitudinal direction of the laser line, and enhances the decision function's sensitivity to it; the sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, related to the transverse direction, and makes the decision function more sensitive to that feature. The resulting decision function thus assigns higher values to the real centers by jointly considering the longitudinal and transverse properties of the laser line. 
Moreover, the method yields a decision value between 0 and 1 for any candidate center, giving a normalized measure across different laser lines and images; pixels whose normalized value is close to 1 are taken as the real centers by progressively scanning the image columns. Finally, the zero point of a second-order Taylor expansion along the eigenvector direction refines the extracted centers to the subpixel level. Experimental results show that the method based on this normalization model accurately extracts the coordinates of laser-line centers and achieves a higher recognition rate in two groups of experiments.
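The eigenvalue signature exploited above, one Hessian eigenvalue strongly negative across the line and the other near zero along it, can be checked per pixel with finite differences. A minimal sketch (not the paper's full normalized decision function):

```python
import numpy as np

def hessian_eigenvalues(img, y, x):
    """Eigenvalues (ascending) of the 2x2 intensity Hessian at pixel
    (y, x), via central differences. On a ridge-like laser line, the
    smaller eigenvalue is strongly negative (transverse direction) and
    the larger one is near zero (longitudinal direction)."""
    dxx = img[y, x + 1] - 2.0 * img[y, x] + img[y, x - 1]
    dyy = img[y + 1, x] - 2.0 * img[y, x] + img[y - 1, x]
    dxy = (img[y + 1, x + 1] - img[y + 1, x - 1]
           - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4.0
    return np.linalg.eigvalsh(np.array([[dxx, dxy], [dxy, dyy]]))
```

The paper then feeds the two eigenvalues into Gaussian and sinusoidal recognition functions to produce the normalized 0-to-1 decision value.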

A System Model of Iterative Image Reconstruction for High Sensitivity Collimator in SPECT (SPECT용 고민감도 콜리메이터를 위한 반복적 영상재구성방법의 시스템 모델 개발)

  • Bae, Seung-Bin;Lee, Hak-Jae;Kim, Young-Kwon;Kim, You-Hyun;Lee, Ki-Sung;Joung, Jin-Hun
    • Journal of radiological science and technology / v.33 no.1 / pp.31-36 / 2010
  • The low-energy high-resolution (LEHR) collimator is the most widely used collimator in SPECT imaging. LEHR has an advantage in image resolution but has difficulty achieving high sensitivity due to its narrow hole size and long septa. Throughput in SPECT can be improved by increasing counts per second through the use of high-sensitivity collimators. The purpose of this study is to develop a system model for iterative image reconstruction that recovers the resolution degradation caused by high-sensitivity collimators with larger holes. We used a fan-beam model instead of a parallel-beam model to calculate the detection probabilities, in order to accurately model the high-sensitivity collimator with wider holes. In addition, weight factors were calculated and applied to the probabilities as a function of the incident angle of incoming photons and the distance from the source to the collimator surface. The proposed system model yielded equivalent performance with the same counts (i.e., in a shortened acquisition time) and improved image quality in the same acquisition time. The proposed method can be effectively applied to improve the resolution of the pixel collimators of next-generation solid-state detectors.
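The detection probabilities described above form the system matrix A of an iterative reconstruction; a standard place to plug them in is the MLEM update x ← x · (Aᵀ(y / Ax)) / (Aᵀ1). A minimal sketch (the study's fan-beam, angle- and distance-weighted probabilities would populate `system_matrix`; here it is a plain array argument):

```python
import numpy as np

def mlem(system_matrix, measured, iters=20):
    """MLEM reconstruction with a generic system matrix.

    system_matrix[i, j] is the probability that a photon emitted in
    image pixel j is detected in bin i; the paper's contribution is
    computing these entries with a fan-beam model and collimator
    weight factors for wide-hole, high-sensitivity collimators.
    """
    a = np.asarray(system_matrix, dtype=float)
    y = np.asarray(measured, dtype=float)
    x = np.ones(a.shape[1])              # uniform initial image
    sens = a.sum(axis=0)                 # A^T 1, per-pixel sensitivity
    for _ in range(iters):
        proj = a @ x                     # forward projection Ax
        ratio = y / np.maximum(proj, 1e-12)
        x *= (a.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

Because the resolution loss of the wide holes is encoded in A, each MLEM iteration partially deconvolves it, which is how the model recovers resolution at the higher sensitivity.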