• Title/Summary/Keyword: image acquisition


Assessment of Attenuation Correction Techniques with a $^{137}Cs$ Point Source ($^{137}Cs$ 점선원을 이용한 감쇠 보정기법들의 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Son, Hye-Kyoung;Park, Yun-Young;Park, Hae-Joung;Yun, Mi-Jin;Lee, Jong-Doo;Jung, Hae-Jo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.1
    • /
    • pp.57-68
    • /
    • 2005
  • Purpose: The objective of this study was to assess attenuation correction algorithms using a $^{137}Cs$ point source for brain positron emission tomography (PET) imaging. Materials & Methods: Four different types of phantoms were used to test the attenuation correction techniques. Transmission data from a $^{137}Cs$ point source were acquired after infusing the emission source into the phantoms, and the emission data were subsequently acquired in 3D acquisition mode. Scatter correction was performed with a background tail-fitting algorithm. Emission data were then reconstructed using an iterative reconstruction method with measured (MAC), elliptical (ELAC), segmented (SAC), and remapping (RAC) attenuation correction, respectively. Reconstructed images were assessed both qualitatively and quantitatively. In addition, reconstructed images of a normal subject were assessed by nuclear medicine physicians, and subtracted images were compared. Results: ELAC, SAC, and RAC provided uniform images with less noise for a cylindrical phantom. In contrast, a decrease in intensity at the central portion of the attenuation map was noticed in the MAC result. Reconstructed images of the Jaszczak and Hoffman phantoms showed better quality with RAC and SAC. The attenuation of the skull was clearly visible on images of the normal subject, and attenuation correction that did not account for the skull produced artificial defects in the brain images. Conclusion: More sophisticated attenuation correction methods are needed to achieve better accuracy in quantitative brain PET imaging.
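The segmented and remapping corrections named in this abstract share a simple underlying idea: replace noisy measured attenuation values with known tissue coefficients, and rescale from the transmission energy (662 keV for $^{137}Cs$) to the PET annihilation energy (511 keV). The following is a minimal illustrative sketch of that idea only, not the authors' implementation; the threshold and the rounded water coefficients are assumptions.

```python
import numpy as np

# Approximate narrow-beam attenuation coefficients of water (cm^-1);
# ~0.096 at 511 keV and ~0.086 at 662 keV are commonly cited values.
MU_WATER_511 = 0.096
MU_WATER_662 = 0.086

def segment_and_remap(mu_map_662, tissue_threshold=0.04):
    """Segment a noisy 662 keV transmission map into air/soft tissue,
    then remap the result to 511 keV for PET attenuation correction."""
    mu = np.asarray(mu_map_662, dtype=float)
    # Segmentation step: replace measured values with known coefficients
    segmented = np.where(mu > tissue_threshold, MU_WATER_662, 0.0)
    # Remapping step: scale from the transmission energy to 511 keV
    return segmented * (MU_WATER_511 / MU_WATER_662)
```

In practice a real implementation would use several tissue classes (including bone, whose neglect causes the skull artifacts noted in the conclusion) rather than a single air/tissue split.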

Quantification of Cerebrovascular Reserve Using Tc-99m HMPAO Brain SPECT and Lassen's Algorithm (Tc-99m HMPAO 뇌 SPECT와 Lassen 알고리즘을 이용한 뇌혈관 예비능의 정량화)

  • Kim, Kyeong-Min;Lee, Dong-Soo;Kim, Seok-Ki;Lee, Jae-Sung;Kang, Keon-Wook;Yeo, Jeong-Seok;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.34 no.4
    • /
    • pp.322-335
    • /
    • 2000
  • Purpose: For quantitative estimation of cerebrovascular reserve (CVR), we estimated cerebral blood flow (CBF) using Lassen's nonlinearity correction algorithm and Tc-99m HMPAO brain SPECT images acquired with a consecutive acquisition protocol. Using the CBF values in the basal and acetazolamide (ACZ) stress states, the CBF increase was calculated. Materials and Methods: In 9 normal subjects (age: $72{\pm}4$ years), brain SPECT was performed at the basal and ACZ stress states consecutively after injection of 555 MBq and 1,110 MBq of Tc-99m HMPAO, respectively. The cerebellum was automatically extracted as the reference region on the basal SPECT image using a threshold method. Assuming a basal cerebellar CBF of 55 ml/100g/min, CBF was calculated for every pixel at the basal state using Lassen's algorithm. Cerebellar blood flow at stress was estimated by comparing cerebellar counts at rest and at ACZ stress using Lassen's algorithm. CBF of every pixel at the ACZ stress state was then calculated using Lassen's algorithm and the ACZ cerebellar count. CVR was calculated by subtracting basal CBF from ACZ stress CBF for every pixel, and percent CVR was calculated by dividing CVR by basal CBF. CBF and percent CVR parametric images were generated. Results: The CBF and percent CVR parametric images were obtained successfully in all subjects. Global mean CBF was $49.6{\pm}5.5ml/100g/min$ and $64.4{\pm}10.2ml/100g/min$ at the basal and ACZ stress states, respectively. The increase of CBF at the ACZ stress state was $14.7{\pm}9.6ml/100g/min$. The global mean percent CVR was 30.7%, higher than the 13.8% calculated using count images. Conclusion: Blood flow at the basal and ACZ stress states and cerebrovascular reserve were estimated using basal/ACZ Tc-99m HMPAO SPECT images and Lassen's algorithm, and parametric images of blood flow and cerebrovascular reserve were generated from these values.
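The pixel-wise calculation described above can be sketched compactly. Lassen's linearization is commonly cited in the form $F/F_{ref} = \alpha X / (1 + \alpha - X)$ with $X$ the count ratio to the reference region and $\alpha \approx 1.5$; the sketch below assumes that form and the 55 ml/100g/min cerebellar reference used in the abstract, and is an illustration rather than the authors' code.

```python
import numpy as np

ALPHA = 1.5   # Lassen linearization constant (commonly cited value)
F_REF = 55.0  # assumed basal cerebellar CBF, ml/100g/min (from the abstract)

def lassen_cbf(counts, ref_counts, alpha=ALPHA, f_ref=F_REF):
    """Convert pixel counts to CBF via Lassen's correction:
    F = f_ref * alpha * X / (1 + alpha - X), where X = counts / ref_counts."""
    x = np.asarray(counts, dtype=float) / ref_counts
    return f_ref * alpha * x / (1.0 + alpha - x)

def percent_cvr(basal_counts, acz_counts, basal_ref, acz_ref):
    """Percent cerebrovascular reserve from basal and ACZ-stress images."""
    cbf_basal = lassen_cbf(basal_counts, basal_ref)
    cbf_acz = lassen_cbf(acz_counts, acz_ref)
    return 100.0 * (cbf_acz - cbf_basal) / cbf_basal
```

Applied to whole count images (2D arrays) instead of scalars, the same two functions yield the CBF and percent-CVR parametric images the paper reports.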


Utilization of Smart Farms in Open-field Agriculture Based on Digital Twin (디지털 트윈 기반 노지스마트팜 활용방안)

  • Kim, Sukgu
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2023.04a
    • /
    • pp.7-7
    • /
    • 2023
  • Currently, the main technologies of the fourth industrial revolution are big data, the Internet of Things, artificial intelligence, blockchain, mixed reality (MR), and drones. In particular, the "digital twin," which has recently become a global technological trend, is a virtual model that mirrors a physical object in a computer. By creating and simulating a digital twin of software-virtualized assets instead of real physical assets, accurate information about the characteristics of real farming (current state, agricultural productivity, agricultural work scenarios, etc.) can be obtained. This study aims to streamline agricultural work through automatic water management, remote growth forecasting, drone control, and pest forecasting, operated by an integrated control system built on digital twin data for major open-field production areas together with a purpose-designed smart farm complex. It also aims to disseminate digital environmental-control agriculture in Korea that can reduce labor and improve crop productivity while minimizing environmental load through the use of appropriate amounts of fertilizers and pesticides guided by big data analysis. These open-field agricultural technologies can reduce labor through digital farming and cultivation management, optimize water use and prevent soil pollution in preparation for climate change, and enable quantitative growth management of open-field crops by securing digital data on the national cultivation environment. Improving agricultural productivity in this way is also a direct contribution to carbon-neutral REDD+ activities. Analysis and prediction of growth status based on the acquired high-precision, high-definition image-based crop growth data are very effective for digital farm work management.
The Southern Crop Department of the National Institute of Food Science has conducted research and development on various types of open-field agricultural smart farms, such as subsurface drip irrigation and subsurface drainage. In particular, starting this year, full-scale commercialization is underway through the establishment of smart farm facilities and the dissemination of the technology to agricultural technology complexes across the country. In this study, we describe a case of establishing an agricultural field that combines digital twin technology with open-field smart farm technology, along with future utilization plans.


Generative Adversarial Network-Based Image Conversion Among Different Computed Tomography Protocols and Vendors: Effects on Accuracy and Variability in Quantifying Regional Disease Patterns of Interstitial Lung Disease

  • Hye Jeon Hwang;Hyunjong Kim;Joon Beom Seo;Jong Chul Ye;Gyutaek Oh;Sang Min Lee;Ryoungwoo Jang;Jihye Yun;Namkug Kim;Hee Jun Park;Ho Yun Lee;Soon Ho Yoon;Kyung Eun Shin;Jae Wook Lee;Woocheol Kwon;Joo Sung Sun;Seulgi You;Myung Hee Chung;Bo Mi Gil;Jae-Kwang Lim;Youkyung Lee;Su Jin Hong;Yo Won Choi
    • Korean Journal of Radiology
    • /
    • v.24 no.8
    • /
    • pp.807-820
    • /
    • 2023
  • Objective: To assess whether computed tomography (CT) conversion across different scan parameters and manufacturers using a routable generative adversarial network (RouteGAN) can improve the accuracy and reduce the variability in quantifying interstitial lung disease (ILD) with deep learning-based automated software. Materials and Methods: This study included patients with ILD who underwent thin-section CT. Unmatched CT images obtained using scanners from four manufacturers (vendors A-D), standard or low radiation doses, and sharp or medium kernels were classified into groups 1-7 according to acquisition conditions. CT images in groups 2-7 were converted into the target CT style (group 1: vendor A, standard dose, and sharp kernel) using a RouteGAN. ILD was quantified on original and converted CT images using deep learning-based software (Aview, Coreline Soft). The accuracy of quantification was analyzed using the Dice similarity coefficient (DSC) and pixel-wise overlap accuracy metrics against manual quantification by a radiologist. Five radiologists evaluated quantification accuracy using a 10-point visual scoring system. Results: Three hundred and fifty CT slices from 150 patients (mean age: 67.6 ± 10.7 years; 56 females) were included. The overlap accuracies for quantifying total abnormalities in groups 2-7 improved after CT conversion (original vs. converted: 0.63 vs. 0.68 for DSC, 0.66 vs. 0.70 for pixel-wise recall, and 0.68 vs. 0.73 for pixel-wise precision; P < 0.002 for all). The DSCs of fibrosis score, honeycombing, and reticulation increased significantly after CT conversion (0.32 vs. 0.64, 0.19 vs. 0.47, and 0.23 vs. 0.54; P < 0.002 for all), whereas those of ground-glass opacity, consolidation, and emphysema did not change significantly or decreased slightly. The radiologists' scores were significantly higher (P < 0.001) and less variable on converted CT.
Conclusion: CT conversion using a RouteGAN can improve the accuracy and reduce the variability of deep learning-based ILD quantification across CT images obtained with different scan parameters and manufacturers.
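The overlap metrics named in this abstract (DSC, pixel-wise recall, and pixel-wise precision) are standard comparisons between a predicted binary lesion mask and a manual reference mask. A minimal sketch, for illustration only (not the Aview implementation):

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient: 2*|pred ∩ ref| / (|pred| + |ref|)."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

def pixel_recall(pred, ref):
    """Fraction of reference pixels recovered by the prediction."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    tp = np.logical_and(pred, ref).sum()
    return tp / ref.sum() if ref.sum() else 1.0

def pixel_precision(pred, ref):
    """Fraction of predicted pixels that are in the reference."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    tp = np.logical_and(pred, ref).sum()
    return tp / pred.sum() if pred.sum() else 1.0
```

Computing these per disease pattern (honeycombing, reticulation, etc.) before and after GAN conversion reproduces the kind of comparison reported in the results.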

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.1
    • /
    • pp.82-108
    • /
    • 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning methods. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, this method uses only open-source software throughout the entire process. The results confirm that, in quantitative evaluation, the deviation in numerical measurements between the actual artifact and the 3D model was minimal. In addition, quantitative quality analyses of the open-source software and the commercial software showed high similarity. However, data processing was overwhelmingly faster with the commercial software, which is believed to result from the higher computational speed of its improved algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred: the 3D models generated by open-source software exhibited noise on the mesh surface, a harsh mesh surface, and difficulty in confirming the production marks and the expression of patterns on relics. Nevertheless, some of the open-source software generated quality comparable to that of commercial software in both the quantitative and qualitative evaluations.
Open-source software for editing 3D models could not only post-process, match, and merge 3D models, but also adjust scale, produce joining surfaces, and render the images necessary for the actual measurement of relics. The final completed drawing was traced in a CAD program, which is also open-source software. In archaeological research, photogrammetry is applicable to various processes, including excavation, report writing, and research on numerical data from 3D models. With breakthrough developments in computer vision, the types of open-source software have diversified and their performance has significantly improved. Given the high accessibility of such digital technology, the acquisition of 3D model data in archaeology will serve as basic data for the preservation and active study of cultural heritage.

Research on The Utility of Acquisition of Oblique Views of Bilateral Orbit During the Dacryoscintigraphy (눈물길 조영검사 시 양측 안 와 사위 상 획득의 유용성에 대한 연구)

  • Park, Jwa-Woo;Lee, Bum-Hee;Park, Seung-Hwan;Park, Su-Young;Jung, Chan-Wook;Ryu, Hyung-Gi;Kim, Ho-Shin
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.76-81
    • /
    • 2014
  • Purpose: During dacryoscintigraphy, oblique views in addition to the anterior image may depict lacrimal duct deformities and tracer passage into the nasal cavity more precisely, and may help distinguish passage into the nasolacrimal duct from epiphora occurring during the test. We therefore examined the validity of obtaining oblique views of both orbits in addition to the anterior views. Materials and Methods: The subjects were 78 patients with epiphora due to lacrimal duct blockage, examined from January 2013 to August 2013; the average age was $56.96{\pm}13.36$ years. Using a micropipette, we dropped 1-2 drops of $^{99m}TcO_4^-$, 3.7 MBq (0.1 mCi) in $10{\mu}L$ per drop, into the inferior conjunctival fold, and then performed a dynamic acquisition for 20 minutes at 20 frames per minute. If passage from both eyes to the nasal cavity was confirmed immediately after the dynamic acquisition, oblique views were obtained immediately; if passage was not seen on either side, oblique views of the orbits were obtained after checking the frontal image at 40 minutes. A gamma camera with a pin-hole collimator was used (Siemens Orbiter, Hoffman Estates, IL, USA). Results: Among the 78 patients undergoing dacryoscintigraphy, 35 showed passage into the nasal cavity on the anterior view. Of these 35 patients, 15 showed passage into the nasal cavity in both eyes, and the oblique views revealed clearer passage patterns in 8 of these for both eyes, 2 for the left eye, and 1 for the right eye. Twenty patients showed passage in the left or right eye; among them, 10 showed clearer passage than on the anterior view. Thirteen patients showed possible passage, and 30 patients showed no evidence of tracer movement.
In total, 21 of the 35 patients (60%) showed a clearer pattern of passage with the additional oblique views compared to the anterior view alone. On a 5-point scale assessing their utility, respondents rated the oblique views as helpful for diagnosing passage, delayed passage, and blockage of the nasolacrimal duct, since they showed regions not well seen on the anterior view. Also, when distinguishing passage into the nasolacrimal duct from flow onto the skin in cases of epiphora, the oblique views scored higher (anterior: $4.14{\pm}0.3$, oblique: $4.55{\pm}0.4$). Conclusion: Obtaining oblique views of both orbits in addition to the anterior view during dacryoscintigraphy is considered to improve diagnostic reading, because it reveals areas not observable on the anterior view and makes it possible to determine whether the tracer has drained past the nasolacrimal duct or flowed as epiphora onto the skin.


Evaluation of Combine IGRT using ExacTrac and CBCT In SBRT (정위적체부방사선치료시 ExacTrac과 CBCT를 이용한 Combine IGRT의 유용성 평가)

  • Ahn, Min Woo;Kang, Hyo Seok;Choi, Byoung Joon;Park, Sang Jun;Jung, Da Ee;Lee, Geon Ho;Lee, Doo Sang;Jeon, Myeong Soo
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.201-208
    • /
    • 2018
  • Purpose: The purpose of this study is to compare and analyze set-up errors using combined IGRT, applying ExacTrac and CBCT in stages during stereotactic body radiotherapy (SBRT). Methods and materials: Patients treated with SBRT at Ulsan University Hospital from May 2014 to November 2017 were classified by treatment area: three brain, nine spine, and three pelvis. First, set-up errors were corrected using ExacTrac in the lateral (Lat), longitudinal (Lng), vertical (Vrt), roll, pitch, and yaw directions; then, after applying the ExacTrac couch shifts, CBCT was used to correct set-up errors in the Lat, Lng, Vrt, and rotation (Rtn) directions. Results: When using ExacTrac, the errors in the brain region were Lat $0.18{\pm}0.25cm$, Lng $0.23{\pm}0.04cm$, Vrt $0.30{\pm}0.36cm$, Roll $0.36{\pm}0.21^{\circ}$, Pitch $1.72{\pm}0.62^{\circ}$, Yaw $1.80{\pm}1.21^{\circ}$; in the spine, Lat $0.21{\pm}0.24cm$, Lng $0.27{\pm}0.36cm$, Vrt $0.26{\pm}0.42cm$, Roll $1.01{\pm}1.17^{\circ}$, Pitch $0.66{\pm}0.45^{\circ}$, Yaw $0.71{\pm}0.58^{\circ}$; in the pelvis, Lat $0.20{\pm}0.16cm$, Lng $0.24{\pm}0.29cm$, Vrt $0.28{\pm}0.29cm$, Roll $0.83{\pm}0.21^{\circ}$, Pitch $0.57{\pm}0.45^{\circ}$, Yaw $0.52{\pm}0.27^{\circ}$. When CBCT was performed after the couch movement, the errors in the brain region were Lat $0.06{\pm}0.05cm$, Lng $0.07{\pm}0.06cm$, Vrt $0.00{\pm}0.00cm$, Rtn $0.0{\pm}0.0^{\circ}$; in the spine, Lat $0.06{\pm}0.04cm$, Lng $0.16{\pm}0.30cm$, Vrt $0.08{\pm}0.08cm$, Rtn $0.00{\pm}0.00^{\circ}$; in the pelvis, Lat $0.06{\pm}0.07cm$, Lng $0.04{\pm}0.05cm$, Vrt $0.06{\pm}0.04cm$, Rtn $0.0{\pm}0.0^{\circ}$. Conclusion: Combining ExacTrac with CBCT during stereotactic body radiotherapy reduced patient set-up errors compared with ExacTrac alone. However, combined IGRT increases the patient set-up verification time and the absorbed dose in the body from image acquisition.
Therefore, depending on the patient's situation, using combined IGRT to reduce set-up error can increase the effectiveness of radiation treatment.


A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than securing new customers: retaining existing customers is far more economical, and the acquisition cost of a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent customer churn and improve retention rates are known not only to increase profitability but also to improve their brand image through higher customer satisfaction. Customer churn prediction, long conducted as a sub-research area of CRM, has recently become more important as a big-data-based performance marketing theme due to the development of machine learning technology for business. Until now, research on customer churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, which are highly competitive and urgently need churn management. These studies focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features effective for forecasting churn, or developing new ensemble techniques, and were limited in practical terms because most treated the entire customer base as a single group and developed one predictive model for it. The main purpose of existing related research was thus to improve the performance of the predictive model itself, and research on improving the overall churn prediction process has been relatively lacking.
In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as a single group. It is therefore desirable to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment in order to carry out effective churn prediction in heterogeneous industries. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model to each group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement: clustering is a mechanical, exploratory grouping technique that calculates distances over input features and does not reflect the strategic intent of the firm, such as loyalty. Assuming that successful churn management is achieved more through improvements in the overall process than through the performance of the model itself, this study proposes a segment-based customer churn prediction process based on two-dimensional customer loyalty (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation). CCP/2DL is a series of steps that segments customers along two loyalty dimensions (quantitative and qualitative), performs secondary grouping of the customer segments according to churn patterns, and then independently applies heterogeneous churn prediction models to each churn pattern group. Performance comparisons were conducted against the most commonly applied general churn prediction process and the clustering-based churn prediction process to assess the relative merit of the proposed process.
The general churn prediction process used in this study refers to building a single machine learning model over the entire customer population using the most common churn prediction approach, while the clustering-based churn prediction process first segments customers with clustering techniques and then implements a churn prediction model for each group. In experiments conducted in cooperation with a global NGO, the proposed CCP/2DL showed better performance than the other methodologies for predicting churn. The proposed process is not only effective in predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other performance marketing activities.
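The segment-then-predict idea behind CCP/2DL, as contrasted with a single global model, can be sketched as follows. The segment labels, features, and choice of classifier here are illustrative stand-ins, not details from the paper:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_segment_models(X, y, segments):
    """Fit one churn classifier per customer segment, instead of a
    single model over the whole customer base."""
    models = {}
    for seg in np.unique(segments):
        mask = segments == seg
        models[seg] = GradientBoostingClassifier().fit(X[mask], y[mask])
    return models

def predict_churn(models, X, segments):
    """Route each customer to the model of their own segment and
    return the predicted churn probability."""
    probs = np.empty(len(X))
    for seg, model in models.items():
        mask = segments == seg
        probs[mask] = model.predict_proba(X[mask])[:, 1]
    return probs
```

In the paper's process the segments come from a two-dimensional loyalty segmentation and a secondary grouping by churn pattern, rather than from arbitrary labels as here, and the per-group models may be heterogeneous.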

Basic Research on the Possibility of Developing a Landscape Perceptual Response Prediction Model Using Artificial Intelligence - Focusing on Machine Learning Techniques - (인공지능을 활용한 경관 지각반응 예측모델 개발 가능성 기초연구 - 머신러닝 기법을 중심으로 -)

  • Kim, Jin-Pyo;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.3
    • /
    • pp.70-82
    • /
    • 2023
  • The recent surge of IT and data acquisition is shifting paradigms in all aspects of life, and these advances are also affecting academic fields, where research topics and methods are being improved through academic exchange and connections. In particular, data-based research methods are employed in various academic fields, including landscape architecture, where continuous research is needed. This study therefore investigates the possibility of developing a landscape preference evaluation and prediction model using machine learning, a branch of artificial intelligence. To achieve this goal, machine learning techniques were applied to the landscape field to build a landscape preference evaluation and prediction model and to verify its simulation accuracy. Wind power facility landscapes, recently attracting attention in connection with renewable energy, were selected as the research objects. Images of wind power facility landscapes were collected using web crawling techniques, and an analysis dataset was built. Orange version 3.33, a program from the University of Ljubljana, was used for the machine learning analysis to derive a prediction model with good performance. A model that integrates the evaluation criteria and a separate model structure for each evaluation criterion were used to generate models with the kNN, SVM, Random Forest, Logistic Regression, and Neural Network algorithms, which are suitable for machine learning classification. The generated models were then evaluated to derive the most suitable prediction model. The prediction model derived in this study separately evaluates three criteria (classification by landscape type, classification by distance between the landscape and the target, and classification by preference) and then synthesizes the results to produce a prediction.
As a result, prediction models with high accuracy were developed: 0.986 for classification by landscape type, 0.973 for classification by distance, and 0.952 for classification by preference; verification through evaluation of the prediction results exceeded the required performance values of the model. As an experimental attempt to investigate the possibility of developing prediction models using machine learning in landscape-related research, this study confirmed that a high-performance prediction model can be created by building a dataset through the collection and refinement of image data and then applying it in landscape-related research fields. Based on the results, implications, and limitations of this study, it should be possible to develop various types of landscape prediction models, including for wind power facility, natural, and cultural landscapes. Machine learning techniques can become more useful and valuable in landscape architecture by exploring research methods appropriate to each topic, for example by reducing data classification time through models that classify images according to landscape type, or by analyzing the importance of landscape planning factors through machine-learning-based analysis of landscape prediction factors.
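The model-comparison workflow described above (fit several candidate classifiers, evaluate each, keep the best) can be sketched with scikit-learn in place of Orange. The synthetic features and labels below stand in for the paper's landscape image dataset; this is an illustration of the workflow, not the study's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Candidate algorithms mirroring those named in the abstract
# (a neural network could be added via sklearn.neural_network.MLPClassifier)
CANDIDATES = {
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
}

def select_best_model(X, y, cv=5):
    """Score each candidate by cross-validated accuracy and return the
    name of the best one along with all mean scores."""
    scores = {name: cross_val_score(model, X, y, cv=cv).mean()
              for name, model in CANDIDATES.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

Running this once per evaluation criterion (landscape type, distance, preference) and combining the three selected models corresponds to the separate-model structure the study describes.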