• Title/Summary/Keyword: noise in image data


A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science / v.57 no.1 / pp.82-108 / 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics that uses open-source software and photogrammetry, believed to be the most efficient of the 3D scanning approaches. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, the method uses only open-source software throughout the entire process. The results of this study confirm that, in quantitative evaluation, the deviation between measurements of the actual artifact and of the 3D model was minimal, and that quantitative quality analyses of the open-source and commercial software showed high similarity. However, data processing was overwhelmingly faster with the commercial software, presumably owing to the higher computational speed of its improved algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred: the 3D models generated by open-source software showed noise on the mesh surface, a harsh mesh surface, and difficulty in confirming the production marks of relics and the expression of patterns. Nevertheless, some of the open-source software produced quality comparable to that of the commercial software in both quantitative and qualitative evaluations. 
Open-source software for editing 3D models was able not only to post-process, match, and merge the 3D models, but also to adjust scale, produce the joining surface, and render the images necessary for the actual measurement of relics. The final drawing was traced in a CAD program that is also open-source software. In archaeological research, photogrammetry is applicable to various processes, including excavation, report writing, and research on numerical data from 3D models. With breakthrough developments in computer vision, the variety of open-source software has grown and its performance has significantly improved. Given the high accessibility of such digital technology, 3D model data acquired in archaeology will serve as basic data for the preservation and active study of cultural heritage.
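
The quantitative check described above, comparing dimensions measured on the real artifact against the same dimensions read off the 3D model, can be sketched as follows; the measurement values and dimension names are hypothetical:

```python
# Hypothetical caliper measurements (mm) of an artifact vs. the same
# dimensions read off its photogrammetric 3D model.
artifact = [142.0, 87.5, 33.2]   # e.g. height, rim diameter, wall thickness
model    = [141.6, 87.9, 33.0]

def deviations(actual, measured):
    """Per-dimension absolute deviation and percent error."""
    abs_dev = [abs(a - m) for a, m in zip(actual, measured)]
    pct_err = [100.0 * d / a for d, a in zip(abs_dev, actual)]
    return abs_dev, pct_err

abs_dev, pct_err = deviations(artifact, model)
mean_abs_dev = sum(abs_dev) / len(abs_dev)  # summary deviation statistic
```

A small mean absolute deviation, relative to the artifact's size, is what the abstract reports as "minimal" deviation.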

A Study on the Resolution Analysis of Digital X-ray Images with increasing Thickness of PMMA (조직 등가물질 두께 증가에 따른 디지털 엑스선 영상의 해상도 분석에 관한 연구)

  • Kim, Junwoo
    • Journal of the Korean Society of Radiology / v.15 no.2 / pp.173-179 / 2021
  • Scattered x-rays generated in digital radiography systems do increase the signal, but ultimately detectability is reduced because they lower the resolution and raise the noise of images of transmitted objects. An indirect method of measuring scattered x-rays with the modulation transfer function (MTF), which evaluates resolution in the spatial-frequency domain, treats scatter as a drop in the MTF value at zero frequency. In this study, polymethyl methacrylate (PMMA) was used as a patient tissue equivalent, and MTFs were obtained for various thicknesses to quantify the effect of scattered x-rays on resolution. X-ray image signals decreased by 35-83% as PMMA thickness increased, which is attributed to absorption or scattering of x-rays in the PMMA, resulting in a reduced MTF and an increased scatter fraction. Compensating for the MTF degradation caused by the PMMA inflated the MTF when the optical spreading produced by the indirect-conversion detector was not considered. Data fitting or zero-padding of the edge-spread or line-spread function is needed to compensate for the MTF more reasonably.
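
The zero-frequency MTF drop used here as an indirect scatter measure can be illustrated with a toy line-spread function: a narrow primary peak plus a broad scatter pedestal. The field size, Gaussian width, and scatter fraction below are assumptions, not values from the study:

```python
import cmath
import math

def mtf_from_lsf(lsf):
    """|DFT| of the line-spread function, normalised at zero frequency."""
    n = len(lsf)
    spec = [abs(sum(lsf[x] * cmath.exp(-2j * math.pi * k * x / n)
                    for x in range(n))) for k in range(n)]
    return [v / spec[0] for v in spec]

n, sf, sigma = 256, 0.3, 2.0   # field size, assumed scatter fraction, PSF width
# Primary: narrow Gaussian; scatter: flat pedestal over the whole field.
primary = [math.exp(-((x - n / 2) ** 2) / (2 * sigma ** 2)) for x in range(n)]
psum = sum(primary)
lsf = [(1 - sf) * p / psum + sf / n for p in primary]

mtf = mtf_from_lsf(lsf)
# The broad scatter term collapses toward zero frequency, so just above f = 0
# the MTF falls to roughly the primary fraction 1 - SF.
```

Reading the low-frequency plateau of the measured MTF therefore gives 1 minus the scatter fraction, which is the "drop at zero frequency" the abstract refers to.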

Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts

  • June-Goo Lee;HeeSoo Kim;Heejun Kang;Hyun Jung Koo;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology / v.22 no.11 / pp.1764-1776 / 2021
  • Objective: This study aimed to validate a deep learning-based fully automatic calcium scoring (coronary artery calcium [CAC]_auto) system using previously published cardiac computed tomography (CT) cohort data with the manually segmented coronary calcium scoring (CAC_hand) system as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For the validation of the CAC_auto system, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (i.e., 2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score as compared with CAC_hand was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand based on the cardiovascular risk stratification categories (Agatston score: 0, 1-10, 11-100, 101-400, > 400) was evaluated. Results: In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. The CAC_auto system, in measuring the Agatston score, yielded ICCs of 0.99 for all the vessels (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for the Agatston score categorization was 0.94. The main causes of false-positive results were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions). 
Conclusion: The atlas-based CAC_auto system empowered by deep learning provided accurate calcium score measurement and risk category classification as compared with the manual method, which could potentially streamline CAC imaging workflows.
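
The Agatston risk bins and the per-lesion sensitivity figure quoted above can be reproduced with a short sketch; the category boundaries are those listed in the abstract, while the helper names are mine:

```python
def agatston_category(score):
    """Risk categories used in the study: 0, 1-10, 11-100, 101-400, > 400."""
    if score == 0:
        return "0"
    if score <= 10:
        return "1-10"
    if score <= 100:
        return "11-100"
    if score <= 400:
        return "101-400"
    return ">400"

def per_lesion_sensitivity(true_lesions, detected):
    """Fraction of reference (CAC_hand) lesions found by the automatic system."""
    return detected / true_lesions

# Figures reported in the abstract: 5800 of 6218 lesions detected.
sens = per_lesion_sensitivity(6218, 5800)
```

The weighted-kappa agreement reported in the paper is computed over exactly these category assignments for CAC_auto versus CAC_hand.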

Intelligent Railway Detection Algorithm Fusing Image Processing and Deep Learning for the Prevent of Unusual Events (철도 궤도의 이상상황 예방을 위한 영상처리와 딥러닝을 융합한 지능형 철도 레일 탐지 알고리즘)

  • Jung, Ju-ho;Kim, Da-hyeon;Kim, Chul-su;Oh, Ryum-duck;Ahn, Jun-ho
    • Journal of Internet Computing and Services / v.21 no.4 / pp.109-116 / 2020
  • With the advent of high-speed rail, railways are among the most frequently used means of transportation at home and abroad. In environmental terms, they also emit less carbon dioxide and are more energy efficient than other modes of transportation. As interest in railways increases, railway safety has become an important concern. Among safety issues, visual abnormalities occur when obstacles such as animals or people suddenly appear in front of a train; to prevent such accidents, detecting the rail track is a basic prerequisite. Images can be collected through cameras installed along the railway, and rails can be detected either by traditional image processing or by a deep learning algorithm. The traditional method has difficulty detecting rails accurately because of the various noise around them, whereas the deep learning algorithm detects them accurately; the proposed approach combines the two algorithms to detect the exact rail. The accuracy of the proposed algorithm's rail detection is evaluated on the collected data.
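
A minimal illustration of fusing a traditional detector with a network output, assuming both produce binary rail masks; the tiny masks and the AND-fusion rule are illustrative, not the paper's actual algorithm:

```python
def fuse(classical, learned):
    """Pixel-wise AND of the two detections: a pixel counts as rail only when
    the traditional detector and the network agree, suppressing the
    noise-driven false positives of either method alone."""
    return [[c and l for c, l in zip(cr, lr)]
            for cr, lr in zip(classical, learned)]

def accuracy(pred, truth):
    """Fraction of pixels where the prediction matches the ground truth."""
    total = sum(len(row) for row in truth)
    hits = sum(p == t for pr, tr in zip(pred, truth)
               for p, t in zip(pr, tr))
    return hits / total

# Tiny hypothetical 3x4 masks: 1 = rail pixel.
truth     = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]]
classical = [[0, 1, 1, 1], [0, 1, 1, 0], [1, 1, 1, 0]]  # edges + noise
learned   = [[0, 1, 1, 0], [0, 1, 0, 0], [0, 1, 1, 1]]  # network output
fused = fuse(classical, learned)
```

With these toy masks the fused prediction agrees with the ground truth more often than either input alone, which is the motivation the abstract gives for combining the two methods.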

Magnitude of beam-hardening artifacts produced by gutta-percha and metal posts on cone-beam computed tomography with varying tube current

  • Gaeta-Araujo, Hugo;Nascimento, Eduarda Helena Leandro;Fontenele, Rocharles Cavalcante;Mancini, Arthur Xavier Maseti;Freitas, Deborah Queiroz;Oliveira-Santos, Christiano
    • Imaging Science in Dentistry / v.50 no.1 / pp.1-7 / 2020
  • Purpose: This study was performed to evaluate the magnitude of artifacts produced by gutta-percha and metal posts on cone-beam computed tomography (CBCT) scans obtained with different tube currents, with or without metal artifact reduction (MAR). Materials and Methods: A tooth was inserted in a dry human mandible socket, and CBCT scans were acquired after root canal instrumentation, root canal filling, and metal post placement, at various tube currents with and without MAR activation. The artifact magnitude was assessed by the standard deviation (SD) of gray values and the contrast-to-noise ratio (CNR) at various distances from the tooth. Data were compared using multi-way analysis of variance. Results: At all distances, a current of 4 mA was associated with a higher SD and a lower CNR than 8 mA or 10 mA (P<0.05). For the metal posts without MAR, the artifact magnitude as assessed by SD was greatest at 1.5 cm or less from the tooth (P<0.05). When MAR was applied, SD values at distances of 1.5 cm or less from the tooth were reduced (P<0.05). MAR usage did not influence the magnitude of artifacts in the control and gutta-percha groups (P>0.05). Conclusion: Increasing the tube current from 4 mA to 8 mA may reduce the magnitude of artifacts from metal posts. The magnitude of artifacts arising from metal posts was significantly higher at distances of 1.5 cm or less than at greater distances. MAR usage improved image quality near the metal post, but had no significant influence farther than 1.5 cm from the tooth.
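
The two artifact metrics used in the study, the SD of gray values and the CNR, can be computed as below; the gray values and region choices are hypothetical:

```python
import statistics

def contrast_to_noise(roi, background):
    """CNR = |mean(ROI) - mean(background)| / SD(background); the SD of gray
    values alone doubles as the artifact-magnitude metric in the study."""
    noise = statistics.pstdev(background)
    return abs(statistics.fmean(roi) - statistics.fmean(background)) / noise

roi = [120, 118, 122, 121]       # hypothetical gray values near the tooth
background = [80, 84, 78, 82]    # hypothetical artifact-free region
cnr = contrast_to_noise(roi, background)
```

A beam-hardening artifact raises the SD of the background region and so drives the CNR down, which is why the two metrics move in opposite directions in the reported results.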

Counterfeit Money Detection Algorithm using Non-Local Mean Value and Support Vector Machine Classifier (비지역적 특징값과 서포트 벡터 머신 분류기를 이용한 위변조 지폐 판별 알고리즘)

  • Ji, Sang-Keun;Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.55-64 / 2013
  • Due to the popularization of high-performance digital capture equipment and the emergence of powerful image-editing software, anyone can easily produce high-quality counterfeit money, yet the probability that a member of the general public will detect a counterfeit is extremely low. In this paper, we propose a counterfeit money detection algorithm using a general-purpose scanner. The algorithm identifies counterfeits based on features that differ with the printing process. After the non-local mean value is used to extract the noise from each banknote, we compute statistical features of this noise by calculating a gray-level co-occurrence matrix. These features are then used to train and test a support vector machine classifier that labels each note as genuine or counterfeit. In the experiments, we used a total of 324 images of genuine and counterfeit money, and we compared our noise features with those of previous research based on the Wiener filter and the discrete wavelet transform. The accuracy of the algorithm in identifying counterfeit money was over 94%, and its accuracy in identifying the printing source was over 93%; the presented algorithm performs better than previous research.
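
A sketch of the texture-feature step: computing a normalised gray-level co-occurrence matrix and a contrast statistic from a noise-residual patch. The 4-level patch and the single horizontal offset are illustrative; the paper's exact quantisation and feature set may differ:

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for the offset (dx, dy), normalised to
    a probability distribution over level pairs."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    count = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
                count += 1
    return [[v / count for v in row] for row in m]

def contrast(p):
    """Texture contrast: sum of p(i, j) * (i - j)^2."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

# Hypothetical 4-level quantisation of a noise-residual patch.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]
p = glcm(patch, levels=4)
```

Statistics such as contrast, energy, or homogeneity taken from this matrix form the feature vector fed to the SVM classifier.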

Shallow Subsurface Structure of the Yaksoo Area, Ulsan, Korea by Geophysical Surveys (물리탐사기법에 의한 울산광역시 약수지역 천부지하구조 조사)

  • Lee, Jung-Mo;Kong, Young-Sae;Chang, Tae-Woo;Park, Dong-Hee;Kim, Tae-Kyung
    • Journal of the Korean Geophysical Society / v.3 no.1 / pp.57-66 / 2000
  • The location and geometry of the Ulsan Fault play important roles in interpreting the tectonic evolution of the southeastern Korean Peninsula. Dipole-dipole electrical resistivity surveys and seismic refraction surveys were carried out in the Yaksoo area, Ulsan, in order to measure the thickness of the alluvium covering the Ulsan Fault and to find associated fracture zones and, possibly, the location of its major fault plane. The collected data were analyzed and interpreted, together with some results reported previously by others. No low-resistivity anomalies were found in the cross-sectional resistivity image of the survey line east of the Dong River. In contrast, well-developed, continuous low-resistivity anomalies were detected west of the Dong River. This strongly suggests that the major fault plane of the Ulsan Fault is located under, or in the west part of, the Dong River. Two refraction boundaries, corresponding to the groundwater level and the bottom of the alluvium, were found by refraction surveys carried out on a limited part of the east survey line; the thickness of the alluvium was about 30 m. Small faults in the basement rock identified by reflection surveys were detected by neither the resistivity nor the refraction surveys. This might be explained by assuming that the low-resistivity anomaly is more closely related to clay content than to water content; alternatively, it may result from the limited resolution of the resistivity and refraction surveys, and detailed study is required to clarify the reason. Resistivity surveying is often considered a good exploration method for detecting subsurface faults, but it appears less useful than reflection seismic surveying in this work. In a dipole-dipole resistivity survey, the number of separations must be increased to probe deeper with the same resolution. 
However, the signal-to-noise ratio decreases as the number of separations increases. In this survey area, the signal-to-noise ratio was good enough up to sixteen separations, based on the statistical properties of the measurements.
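
The trade-off noted at the end, signal falling as the separation number n grows, follows from the dipole-dipole geometric factor K = π n(n+1)(n+2) a. A homogeneous half-space sketch (the resistivity, current, and dipole spacing values are assumed):

```python
import math

def geometric_factor(a, n):
    """Dipole-dipole geometric factor K = pi * n * (n+1) * (n+2) * a."""
    return math.pi * n * (n + 1) * (n + 2) * a

def signal(rho, current, a, n):
    """Measured voltage over a homogeneous half-space: V = rho * I / K."""
    return rho * current / geometric_factor(a, n)

# Hypothetical homogeneous earth: 100 ohm-m, 1 A injected, 10 m dipoles.
v1  = signal(100.0, 1.0, 10.0, 1)    # first separation
v16 = signal(100.0, 1.0, 10.0, 16)   # sixteenth separation
ratio = v1 / v16                      # how much weaker the deep reading is
```

The measured voltage at n = 16 is hundreds of times smaller than at n = 1, so the ambient noise floor sets a practical limit on the usable number of separations.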


RPC Model Generation from the Physical Sensor Model (영상의 물리적 센서모델을 이용한 RPC 모델 추출)

  • Kim, Hye-Jin;Kim, Jae-Bin;Kim, Yong-Il
    • Journal of Korean Society for Geospatial Information Science / v.11 no.4 s.27 / pp.21-27 / 2003
  • The rational polynomial coefficients (RPC) model is a generalized sensor model used as an alternative to the physical sensor model for IKONOS-2 and QuickBird. As the number and complexity of sensors increase, and as the need for a standard sensor model has become important, the applicability of the RPC model is also increasing. The RPC model can substitute for all sensor models, such as the projective camera, the linear pushbroom sensor, and SAR. This paper is aimed at generating an RPC model from the physical sensor models of KOMPSAT-1 (Korean Multi-Purpose Satellite) and aerial photography. KOMPSAT-1 collects 510-730 nm panchromatic images with a ground sample distance (GSD) of 6.6 m and a swath width of 17 km by pushbroom scanning. We generated the RPC from a physical sensor model of KOMPSAT-1 and of aerial photography. An iterative least-squares solution based on the Levenberg-Marquardt algorithm is used to estimate the RPC; in addition, data normalization and regularization are applied to improve accuracy and minimize noise. The accuracy was evaluated on the 2-D image coordinates. From this test, we found that the RPC model is suitable for both KOMPSAT-1 and aerial photography.
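
The data normalization mentioned above, mapping image and ground coordinates to [-1, 1] before the least-squares fit so the polynomial system stays well conditioned, can be sketched as follows (the sample line coordinates are hypothetical):

```python
def normalize(values):
    """Shift and scale a coordinate list to [-1, 1]; returns the normalised
    values plus the offset and scale needed to undo the mapping later."""
    off = (max(values) + min(values)) / 2.0
    scale = (max(values) - min(values)) / 2.0
    return [(v - off) / scale for v in values], off, scale

lines = [0.0, 2500.0, 5000.0, 7500.0, 10000.0]  # hypothetical image rows
norm, off, scale = normalize(lines)
```

Without this step, raw pixel and geodetic coordinates spanning very different ranges make the normal equations of the cubic rational polynomial nearly singular, amplifying noise in the estimated coefficients.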


Assessment of Attenuation Correction Techniques with a $^{137}Cs$ Point Source ($^{137}Cs$ 점선원을 이용한 감쇠 보정기법들의 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Son, Hye-Kyoung;Park, Yun-Young;Park, Hae-Joung;Yun, Mi-Jin;Lee, Jong-Doo;Jung, Hae-Jo
    • The Korean Journal of Nuclear Medicine / v.39 no.1 / pp.57-68 / 2005
  • Purpose: The objective of this study was to assess attenuation correction algorithms using a $^{137}Cs$ point source for brain positron emission tomography (PET) imaging. Materials & Methods: Four different phantoms were used to test the various attenuation correction techniques. Transmission data from a $^{137}Cs$ point source were acquired after infusing the emission source into the phantoms, and the emission data were subsequently acquired in 3D acquisition mode. Scatter was corrected with a background tail-fitting algorithm. Emission data were then reconstructed with an iterative method using measured (MAC), elliptical (ELAC), segmented (SAC), and remapping (RAC) attenuation correction, respectively. The reconstructed images were assessed both qualitatively and quantitatively; in addition, reconstructed images of a normal subject were assessed by nuclear medicine physicians, and subtracted images were compared. Results: ELAC, SAC, and RAC provided uniform images with less noise for the cylindrical phantom, whereas the MAC result showed a decrease in intensity at the central portion of the attenuation map. Reconstructed images of the Jaszczak and Hoffman phantoms showed better quality with RAC and SAC. The attenuation of the skull was clearly noticeable on images of the normal subject, and attenuation correction that did not consider the skull produced artificial defects in the brain images. Conclusion: More sophisticated and improved attenuation correction methods are needed to obtain better accuracy in quantitative brain PET images.
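
At its core, measured attenuation correction scales each line of response by the exponential of the attenuation coefficient integrated along it; a minimal sketch, with assumed μ values for skull and brain tissue at 511 keV:

```python
import math

def attenuation_correction_factor(mu_samples, step_cm):
    """ACF = exp( integral of mu along the line of response ), approximated by
    a Riemann sum over attenuation coefficients sampled on the ray (per cm)."""
    return math.exp(sum(mu_samples) * step_cm)

# Hypothetical ray through ~1 cm of skull (mu ~ 0.15 /cm) and ~8 cm of brain
# tissue (mu ~ 0.096 /cm at 511 keV), sampled every 1 cm.
ray = [0.15] + [0.096] * 8
acf = attenuation_correction_factor(ray, 1.0)
```

Dropping the skull sample from the ray lowers the correction factor, which is exactly the under-correction that produced the artificial defects noted in the abstract.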

Time-Lapse Crosswell Seismic Study to Evaluate the Underground Cavity Filling (지하공동 충전효과 평가를 위한 시차 공대공 탄성파 토모그래피 연구)

  • Lee, Doo-Sung
    • Geophysics and Geophysical Exploration / v.1 no.1 / pp.25-30 / 1998
  • Time-lapse crosswell seismic data, recorded before and after cavity filling, showed that the filling increased the velocity in a known cavity zone at an old mine site in the Inchon area. The seismic response depicted on the tomograms, together with geologic data from drillings, implies that the cavity is either small or filled by debris. In this study, I evaluated the filling effect by analyzing velocities measured from the time-lapse tomograms. The data, acquired with a downhole airgun and a 24-channel hydrophone system, revealed measurable amounts of source statics, and I present a methodology to estimate them: 1) examine the firing time of each source and remove the effect of irregular firing times, and 2) estimate the residual statics caused by inaccurate source positioning. The proposed multi-step inversion reduces high-frequency numerical noise and enhances the resolution in the zone of interest, and with different starting models it successfully reveals the subtle velocity changes in the small cavity zone. The inversion procedure is: 1) conduct an inversion using regular-sized cells and generate an image of the gross velocity structure by applying a 2-D median filter to the resulting tomogram, and 2) construct the starting velocity model by modifying the final velocity model from the first phase so that the zone of interest consists of small-sized grids. The final velocity model developed from the baseline survey was used as the starting velocity model for the monitor inversion; since a velocity change was expected only in the cavity zone, the number of model parameters in the monitor inversion can be significantly reduced by fixing the model outside the cavity zone to the baseline model.
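
The 2-D median filtering applied between the two inversion phases suppresses isolated high-frequency noise while preserving the gross velocity structure; a minimal sketch on a hypothetical tomogram:

```python
def median_filter_2d(grid):
    """3x3 median filter; edge cells take the median of the cells that exist
    (upper median for even-sized windows)."""
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [grid[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# Hypothetical velocity tomogram (km/s) with one noisy cell.
tomo = [[2.0, 2.0, 2.0],
        [2.0, 9.9, 2.0],   # high-frequency numerical-noise spike
        [2.0, 2.0, 2.0]]
smooth = median_filter_2d(tomo)
```

The spike is removed without smearing the surrounding values, which is why a median filter, rather than a mean filter, is a reasonable choice for deriving the gross velocity model that seeds the second inversion phase.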
