• Title/Abstract/Keywords: Data Acquisition Method


Assessment of Attenuation Correction Techniques with a $^{137}Cs$ Point Source ($^{137}Cs$ 점선원을 이용한 감쇠 보정기법들의 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Son, Hye-Kyoung;Park, Yun-Young;Park, Hae-Joung;Yun, Mi-Jin;Lee, Jong-Doo;Jung, Hae-Jo
    • The Korean Journal of Nuclear Medicine
    • /
    • Vol. 39, No. 1
    • /
    • pp.57-68
    • /
    • 2005
  • Purpose: The objective of this study was to assess attenuation correction algorithms using a $^{137}Cs$ point source for brain positron emission tomography (PET) imaging. Materials & Methods: Four different types of phantoms were used to test the attenuation correction techniques. Transmission data from a $^{137}Cs$ point source were acquired after infusing the emission source into the phantoms, and the emission data were subsequently acquired in 3D acquisition mode. Scatter correction was performed with a background tail-fitting algorithm. Emission data were then reconstructed using an iterative reconstruction method with measured (MAC), elliptical (ELAC), segmented (SAC), and remapping (RAC) attenuation correction, respectively. Reconstructed images were assessed both qualitatively and quantitatively. In addition, reconstructed images of a normal subject were assessed by nuclear medicine physicians, and subtracted images were compared. Results: ELAC, SAC, and RAC provided uniform images with less noise for a cylindrical phantom. In contrast, a decrease in intensity at the central portion of the attenuation map was noticed in the result of MAC. Reconstructed images of the Jaszczak and Hoffman phantoms showed better quality with RAC and SAC. The attenuation of the skull was clearly visible on images of the normal subject, and attenuation correction that ignored the skull produced artificial defects in the brain images. Conclusion: More sophisticated attenuation correction methods are needed to improve the quantitative accuracy of brain PET images.
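
As a minimal sketch of what measured attenuation correction computes: attenuation correction factors are the exponentials of line integrals of the attenuation map along each projection, applied multiplicatively to the emission sinogram. The crude rotation-based projector, the 128x128 water-cylinder attenuation map, and the flat emission data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def line_integrals(mu_map, angles, pixel_size_cm):
    """Crude parallel-beam line integrals of an attenuation map (cm^-1)."""
    sino = []
    for theta in angles:
        rotated = rotate(mu_map, np.degrees(theta), reshape=False, order=1)
        sino.append(rotated.sum(axis=0) * pixel_size_cm)  # integrate along columns
    return np.array(sino)  # (n_angles, n_bins)

def measured_attenuation_correction(emission_sino, mu_map, angles, pixel_size_cm=0.2):
    """MAC in essence: multiply each emission bin by exp(line integral of mu)."""
    acf = np.exp(line_integrals(mu_map, angles, pixel_size_cm))
    return emission_sino * acf

# toy mu-map: water-like cylinder, mu ~ 0.096 cm^-1 at 511 keV (assumed value)
mu = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
mu[(xx - 64) ** 2 + (yy - 64) ** 2 < 50 ** 2] = 0.096
angles = np.linspace(0, np.pi, 96, endpoint=False)
emission = np.ones((96, 128))  # flat "measured" sinogram for illustration
corrected = measured_attenuation_correction(emission, mu, angles)
print(corrected.max())  # central LORs, crossing ~20 cm of water, are boosted ~6.8x
```

Broadly, the elliptical, segmented, and remapping variants differ in how the attenuation map itself is obtained (assumed, classified into tissue classes, or remapped), not in this multiplication step.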

Detection Ability of Occlusion Object in Deep Learning Algorithm depending on Image Qualities (영상품질별 학습기반 알고리즘 폐색영역 객체 검출 능력 분석)

  • LEE, Jeong-Min;HAM, Geon-Woo;BAE, Kyoung-Ho;PARK, Hong-Ki
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • Vol. 22, No. 3
    • /
    • pp.82-98
    • /
    • 2019
  • The importance of spatial information is rising rapidly. In particular, 3D spatial information construction and modeling of real-world objects, as in smart cities and digital twins, has become an important core technology. The constructed 3D spatial information is used in various fields such as land management, landscape analysis, environment, and welfare services. Three-dimensional modeling with imagery achieves high visibility and realism of objects through texturing. However, the texture inevitably contains occlusion areas caused by physical obstructions such as roadside trees, adjacent objects, vehicles, and banners at the time of image acquisition. Such occlusion areas are a major cause of degraded realism and accuracy in the constructed 3D model, and various studies have been conducted to resolve them; recently, deep learning algorithms have been applied to detect and resolve the occlusion areas. Deep learning requires sufficient training data, and the quality of the collected training data directly affects its performance and results. Therefore, this study analyzed the ability to detect occlusion areas in images of varying quality, in order to verify how the quality of the training data affects deep learning performance. Images containing occlusion-causing objects were generated at each artificially quantified quality level and fed to the implemented deep learning algorithm. The study found that, for the brightness-adjusted images, the detection ratio dropped to 0.56 for the brighter images, and that for pixel size and artificial noise the detection ability decreased rapidly once the images were degraded beyond the middle quality level. In the F-measure evaluation, the largest change, 0.53 points, came from varying the resolution of the noise-controlled images. The measured occlusion detection ability by image quality can serve as a valuable criterion for practical applications of deep learning; by indicating the level of image quality required at acquisition time, these results are expected to contribute substantially to such applications.
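
The quality-controlled test sets described above can be reproduced in spirit with a small amount of code. The sketch below degrades a grayscale image along the three axes the study varies (brightness, pixel size, artificial noise) and scores a binary occlusion mask with the F-measure; the specific thresholds and the detector itself are outside the abstract and left as assumptions.

```python
import numpy as np

def degrade(image, brightness=0.0, scale=1.0, noise_std=0.0, rng=None):
    """Quality-degraded copy of a grayscale (H, W) image: brightness shift,
    coarser effective pixel size via down/up-sampling, additive Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = image.astype(np.float64) + brightness
    if scale < 1.0:
        step = int(round(1.0 / scale))
        h, w = out.shape
        small = out[::step, ::step]
        out = np.kron(small, np.ones((step, step)))[:h, :w]  # blocky upsample
    out = out + rng.normal(0.0, noise_std, size=out.shape)
    return np.clip(out, 0.0, 255.0)

def f_measure(pred_mask, true_mask):
    """F1 score of a predicted binary occlusion mask against ground truth."""
    tp = np.logical_and(pred_mask, true_mask).sum()
    precision = tp / max(pred_mask.sum(), 1)
    recall = tp / max(true_mask.sum(), 1)
    return 2.0 * precision * recall / max(precision + recall, 1e-9)
```

Sweeping brightness, scale, and noise_std over graded levels and re-scoring the detector at each level reproduces the study's quality-versus-detection curves in outline.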

Evaluation for applicability of river depth measurement method depending on vegetation effect using drone-based spatial-temporal hyperspectral image (드론기반 시공간 초분광영상을 활용한 식생유무에 따른 하천 수심산정 기법 적용성 검토)

  • Gwon, Yeonghwa;Kim, Dongsu;You, Hojun
    • Journal of Korea Water Resources Association
    • /
    • Vol. 56, No. 4
    • /
    • pp.235-243
    • /
    • 2023
  • Due to the revision of the River Act and the enactment of the Act on the Investigation, Planning, and Management of Water Resources, a regular bed change survey has become mandatory, and a system is being prepared such that local governments can manage water resources in a planned manner. Since bed topography cannot be measured directly, it is measured indirectly via contact-type depth measurements such as level surveys or echo sounders, which offer low spatial resolution and, owing to constraints in data acquisition, do not allow continuous surveying. Therefore, a depth measurement method using remote sensing (LiDAR or hyperspectral imaging) has recently been developed; it surveys a wider area than the contact-type methods by acquiring hyperspectral images from a lightweight hyperspectral sensor mounted on a routinely operated drone and estimating depth with an optimal band-ratio search algorithm. In the existing hyperspectral remote sensing technique, specific physical quantities are analyzed after matching the hyperspectral images acquired along the drone's path to a surface-unit image. Previous studies focused primarily on applying this technology to measure the bathymetry of sandy rivers, and bed materials were rarely evaluated. In this study, the existing hyperspectral image-based water depth estimation technique is applied to a river with vegetation: spatio-temporal hyperspectral imaging and cross-sectional hyperspectral imaging are performed for two cases in the same area, before and after the vegetation is removed. The results show that the water depth estimate is more accurate in the absence of vegetation; in the presence of vegetation, the depth is estimated with the height of the vegetation recognized as the bottom. In addition, highly accurate water depth estimation is achieved not only with conventional cross-sectional hyperspectral imaging but also with spatio-temporal hyperspectral imaging. As such, the possibility of monitoring bed fluctuations (water depth fluctuations) using spatio-temporal hyperspectral imaging is confirmed.
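
The "optimal band ratio search" the abstract refers to is, in the common formulation (e.g., Stumpf-style log-ratios), a brute-force regression of surveyed depth against the log-ratio of every band pair, keeping the pair with the highest R². A minimal sketch under that assumption:

```python
import numpy as np
from itertools import combinations

def optimal_band_ratio(reflectance, depth):
    """Brute-force search over band pairs for the log-ratio best predicting depth.

    reflectance: (n_points, n_bands) reflectance at calibration pixels
    depth:       (n_points,) surveyed depth at the same pixels
    Returns (band_i, band_j, slope, intercept, r2) of the best regression.
    """
    best = None
    for i, j in combinations(range(reflectance.shape[1]), 2):
        x = np.log(reflectance[:, i] / reflectance[:, j])
        if not np.all(np.isfinite(x)):
            continue  # skip pairs with zero/negative reflectance
        slope, intercept = np.polyfit(x, depth, 1)
        resid = depth - (slope * x + intercept)
        r2 = 1.0 - resid @ resid / ((depth - depth.mean()) @ (depth - depth.mean()))
        if best is None or r2 > best[-1]:
            best = (i, j, slope, intercept, r2)
    return best
```

Applying the winning slope and intercept to the chosen ratio across the whole scene then yields the depth map; vegetation violates the model's bottom-reflectance assumption, which is consistent with the abstract's finding that vegetation height is read as the bed.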

Effects of Column Diameter on the Holdups of Bubble, Wake and Continuous Liquid Phase in Bubble Columns with Viscous Liquid Medium (점성액체 기포탑에서 탑의 직경이 기포, wake 및 연속액상 체류량에 미치는 영향)

  • Lim, Dae Ho;Jang, Ji Hwa;Kang, Yong;Jun, Ki Won
    • Korean Chemical Engineering Research
    • /
    • Vol. 49, No. 5
    • /
    • pp.582-587
    • /
    • 2011
  • Holdup characteristics of the bubble, wake, and continuous liquid phases were investigated in bubble columns with viscous liquid media. The effects of column diameter (0.051, 0.076, 0.102, and 0.152 m ID), gas velocity ($U_G$=0.02~0.16 m/s), and liquid viscosity (${\mu}_L$=0.001~0.050 $Pa{\cdot}s$) of the continuous liquid media on the holdups of the three phases were discussed. The bubble, wake, and continuous liquid phases were successfully distinguished by adopting the dual electrical resistivity probe method. Compressed, filtered air and water or aqueous CMC (carboxymethyl cellulose) solutions were used as the gas and liquid phases, respectively. To detect the wake and bubble phases in the bubble column continuously, a data acquisition system (DT 2805 Lab Card) with a personal computer was used. The analog signals from the probe circuit were digitized, and from the digital data the wake phase was detected behind both multi-bubbles and single bubbles rising in the columns. The holdups of the bubble and wake phases decreased, while that of the continuous liquid phase increased, with increasing column diameter or liquid viscosity. Conversely, the holdups of the bubble and wake phases increased, while that of the continuous phase decreased, with increasing gas velocity. The holdup ratio of the wake to the bubble phase decreased with increasing column diameter or gas velocity, but increased with increasing viscosity of the continuous liquid medium. Within these experimental conditions, the holdups of the bubble, wake, and continuous liquid phases could be correlated in terms of the operating variables as: ${\varepsilon}_B=0.043D^{-0.18}U_G^{0.56}{\mu}_L^{-0.13}$, ${\varepsilon}_W=0.003D^{-0.85}U_G^{0.46}{\mu}_L^{-0.10}$, ${\varepsilon}_C=1.179D^{0.09}U_G^{-0.13}{\mu}_L^{0.04}$.
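
The reported correlations are directly computable; below is a small helper using the abstract's units (D in m, $U_G$ in m/s, ${\mu}_L$ in $Pa{\cdot}s$), with an example operating point of our choosing inside the studied ranges.

```python
def holdups(D, U_G, mu_L):
    """Correlated phase holdups (column diameter D in m, gas velocity U_G in m/s,
    liquid viscosity mu_L in Pa.s), valid only inside the studied ranges."""
    eps_B = 0.043 * D**-0.18 * U_G**0.56 * mu_L**-0.13  # bubble phase
    eps_W = 0.003 * D**-0.85 * U_G**0.46 * mu_L**-0.10  # wake phase
    eps_C = 1.179 * D**0.09 * U_G**-0.13 * mu_L**0.04   # continuous liquid phase
    return eps_B, eps_W, eps_C

# example operating point inside the studied ranges (our choice, for illustration)
print(holdups(0.102, 0.08, 0.010))
```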

Application of Off-axis Correction Method for EPID Based IMRT QA (EPID를 사용한 세기조절방사선치료의 정도관리에 있어 축이탈 보정(Off-axis Correction)의 적용)

  • Cho, Ilsung;Kwark, Jungwon;Park, Sung Ho;Ahn, Seung Do;Jeong, Dong Hyeok;Cho, Byungchul
    • Progress in Medical Physics
    • /
    • Vol. 23, No. 4
    • /
    • pp.317-325
    • /
    • 2012
  • The Varian PortalVision (Varian Medical Systems, USA) shows significant over-response as the off-center distance increases, compared to the predicted dose. To correct this discrepancy, an off-axis correction was applied to Varian iX linear accelerators. The portal dose for a $38{\times}28cm^2$ open field was acquired for 6 MV and 15 MV photon beams and also predicted by the PDIP algorithm under the same acquisition conditions. The off-axis correction was applied by modifying the $40{\times}40cm^2$ diagonal beam profile data used for the beam profile calibration. The ratio between predicted and measured dose was modeled as a 4th-order polynomial function of off-axis distance and applied as a weight to the $40{\times}40cm^2$ diagonal beam profile data to correct the dose measured by the EPID detector. The discrepancy between measured and predicted dose was reduced from $4.17{\pm}2.76$ CU to $0.18{\pm}0.8$ CU for the 6 MV beam and from $3.23{\pm}2.59$ CU to $0.04{\pm}0.85$ CU for the 15 MV beam. The gamma passing rate for the pyramid fluence pattern with 4%, 4 mm criteria improved from 98.7% to 99.1% for 6 MV and from 99.8% to 99.9% for 15 MV. IMRT QA was also performed on randomly selected head-and-neck and prostate IMRT plans after applying the off-axis correction. With gamma criteria of 3%, 3 mm and a 10% threshold, the gamma passing rate improved by 3% on average: from $94.7{\pm}3.2%$ to $98.2{\pm}1.4%$ for head-and-neck cases, and from $95.5{\pm}2.6%$ to $98.4{\pm}1.8%$ for prostate cases. The off-axis correction is therefore considered an effective and easily adoptable means of correcting the discrepancy between measured and predicted dose for EPID-based IMRT QA in the clinic.
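
The heart of the correction is a single polynomial fit. A sketch under the abstract's description: a 4th-order polynomial of the predicted-to-measured dose ratio versus off-axis distance, applied as a weight to the diagonal profile. Array contents are illustrative, not measured data.

```python
import numpy as np

def off_axis_correction(off_axis_cm, measured, predicted, profile_r_cm, profile_val):
    """Model the predicted/measured dose ratio as a 4th-order polynomial of
    off-axis distance, then apply it as a weight to the diagonal beam profile."""
    coeffs = np.polyfit(off_axis_cm, predicted / measured, deg=4)
    return profile_val * np.polyval(coeffs, profile_r_cm), coeffs

# illustrative arrays: an EPID that over-responds quadratically off axis
r = np.linspace(-19.0, 19.0, 39)
measured = 1.0 + 4e-4 * r**2                   # response grows off-center
predicted = np.ones_like(r)
diag_r = np.linspace(-28.0, 28.0, 57)          # 40x40 cm^2 diagonal positions
diag_profile = 1.0 - 0.002 * np.abs(diag_r)    # made-up diagonal profile
corrected, c = off_axis_correction(r, measured, predicted, diag_r, diag_profile)
```

Note that the polynomial extrapolates beyond the fitted off-axis range at the profile corners; in practice the fit range should cover the full diagonal.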

On-line Quality Assurance of Linear Accelerator with Electronic Portal Imaging System (전자포탈영상장치(EPID)를 이용한 선형가속기의 기하학적 QC/QA System)

  • Lee, Seok;Jang, Hye-Sook;Choi, Eun-Kyung;Kwon, Soo-Il;Lee, Byung-Yong
    • Progress in Medical Physics
    • /
    • Vol. 9, No. 3
    • /
    • pp.127-136
    • /
    • 1998
  • An on-line geometrical quality assurance system using an electronic portal imaging device (OQuE) has been developed. The EPID is networked to a Pentium PC so that acquired images can be transmitted to the analysis PC. Geometrical QA parameters, including light-radiation field congruence, the collimator rotation axis, and the gantry rotation axis, can be easily analyzed with the help of graphical user interface (GUI) software. Equipped with the EPID (PortalVision, Varian, USA), geometrical quality assurance of a linear accelerator (CL/2100/CD, Varian, USA) networked to OQuE was performed to evaluate the system. Light-radiation field congruence tests by center-of-gravity analysis showed differences of 0.2~0.3 mm for various field sizes. The collimator (or gantry) rotation axis for various angles was obtained by superimposing four shots at different angles. The radius of the collimator rotation axis was measured as 0.2 mm for the upper jaw and 0.1 mm for the lower jaw. Images acquired at various gantry angles were rotated according to the gantry angle and the actual image center obtained from the collimator axis test; the rotated images were superimposed and analyzed by the same method as the collimator rotation axis. The radius of the gantry rotation axis was calculated as 0.3 mm in the anterior/posterior direction (gantry 0$^{\circ}$ and 170$^{\circ}$) and 0.7 mm in the right/left direction (gantry 90$^{\circ}$ and 260$^{\circ}$). Image acquisition for data analysis is faster than the conventional method, and the results met the development goal, being accurate to within a millimeter. The OQuE system is proven to be a good tool for EPID-based geometrical quality assurance of linear accelerators.
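
Both the congruence test and the rotation-axis tests reduce to centroid analysis of portal images. The sketch below uses one simple convention (50% threshold, intensity-weighted centroid, axis radius as the largest deviation of per-angle centroids from their mean); the paper's exact conventions are not specified in the abstract.

```python
import numpy as np

def field_centroid(portal_image, threshold_fraction=0.5):
    """Center of gravity of the radiation field: keep pixels above 50% of the
    maximum, then take the intensity-weighted mean position (row, col)."""
    weights = np.where(portal_image >= threshold_fraction * portal_image.max(),
                       portal_image, 0.0)
    rows, cols = np.indices(portal_image.shape)
    total = weights.sum()
    return (rows * weights).sum() / total, (cols * weights).sum() / total

def rotation_axis(centroids_px, mm_per_px):
    """Axis position and radius from centroids of shots at several angles:
    the axis is taken as the mean centroid, the radius as the largest deviation."""
    pts = np.asarray(centroids_px, dtype=float)
    center = pts.mean(axis=0)
    radius_mm = np.linalg.norm(pts - center, axis=1).max() * mm_per_px
    return center, radius_mm
```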

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 26, No. 4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers: retaining existing customers is far more economical, and the cost of acquiring a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent churn and improve retention are known to benefit not only in profitability but also in brand image, through improved customer satisfaction. Customer churn prediction, long conducted as a sub-area of CRM research, has recently gained importance as a big-data-driven performance marketing theme owing to advances in business machine learning technology. To date, churn prediction research has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, where competition is intense and churn management urgent. These studies focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features effective in forecasting churn, or developing new ensemble techniques; they were limited in practical utility because most treated the entire customer base as a single group when developing the predictive model. In other words, the main purpose of the existing research was to improve the predictive model itself, and relatively little work addressed the overall churn prediction process. In practice, customers exhibit different behavioral characteristics arising from heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as a single group. It is therefore desirable to segment customers by criteria such as loyalty and to operate an appropriate churn prediction model for each segment in order to carry out effective churn prediction in heterogeneous industries. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model to each group. Although this can yield better predictions than a single model for the entire population, there is still room for improvement: clustering is a mechanical, exploratory grouping technique that computes distances over the inputs and does not reflect strategic intent such as loyalty. Assuming that successful churn management is achieved more through improvements to the overall process than through model performance alone, this study proposes a segment-based customer churn prediction process based on two-dimensional customer loyalty (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation).
CCP/2DL is a churn prediction process that segments customers on two loyalty dimensions, quantitative and qualitative, performs a secondary grouping of the segments according to their churn patterns, and then independently applies a heterogeneous churn prediction model to each churn pattern group. To assess the relative merit of the proposed process, its performance was compared with the two most commonly applied alternatives: the general churn prediction process and the clustering-based churn prediction process. The general process refers to predicting churn for the customer base as a single group with a machine learning model, using the most common churn prediction approach; the clustering-based process first segments customers with clustering techniques and then builds a churn prediction model for each group. In cooperation with a global NGO, the proposed CCP/2DL showed better churn prediction performance than the other methodologies. The process is not only effective for predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out related performance marketing activities.
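
A minimal sketch of the flavor of CCP/2DL: bucket customers on two loyalty axes, then train an independent churn model per segment. The 2x2 cut-offs, the feature matrix, and the choice of GradientBoostingClassifier are illustrative assumptions, not the paper's specification (which additionally groups segments by churn pattern).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def loyalty_segment(quant_loyalty, qual_loyalty, q_cut, l_cut):
    """2x2 segmentation on a quantitative axis (e.g., donation amount) and a
    qualitative axis (e.g., engagement score); returns segment ids 0..3."""
    return (quant_loyalty >= q_cut).astype(int) * 2 + (qual_loyalty >= l_cut).astype(int)

def fit_per_segment(X, y, segments):
    """One independent churn model per segment instead of a single global model.
    Assumes every segment contains both churners and non-churners."""
    return {s: GradientBoostingClassifier().fit(X[segments == s], y[segments == s])
            for s in np.unique(segments)}

def churn_scores(models, X, segments):
    """Score each customer with the model of the segment they belong to."""
    scores = np.empty(len(X))
    for s, model in models.items():
        idx = segments == s
        scores[idx] = model.predict_proba(X[idx])[:, 1]
    return scores
```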

Seismic wave propagation through surface basalts - implications for coal seismic surveys (지표 현무암을 통해 전파하는 탄성파의 거동 - 석탄 탄성파탐사에 적용)

  • Sun, Weijia;Zhou, Binzhong;Hatherly, Peter;Fu, Li-Yun
    • Geophysics and Geophysical Exploration
    • /
    • Vol. 13, No. 1
    • /
    • pp.1-8
    • /
    • 2010
  • Seismic reflection surveying is one of the most widely used and effective techniques for coal seam structure delineation and risk mitigation in underground longwall mining. However, the ability of the method can be compromised by the presence of volcanic cover. This problem arises within parts of the Bowen and Sydney Basins of Australia, where seismic surveying can be unsuccessful; as a consequence, such areas are less attractive for coal mining, and techniques to improve the success of seismic surveying over basalt flows are needed. In this paper, we use elastic wave-equation-based forward modelling techniques to investigate the effects and characteristics of seismic wave propagation under different settings, varying the basalt properties, thickness, lateral extent, position relative to the shot, and various forms of inhomogeneity. The modelling results suggest that: 1) basalts with high impedance contrasts and multiple flows generate strong multiples and weak reflections; 2) thin basalts have less effect than thick basalts; 3) partial basalt cover has less effect than full basalt cover; 4) low-frequency seismic waves (especially at large offsets) penetrate the basalt better than high-frequency waves; and 5) the deeper the coal seams are below basalts of limited extent, the less influence the basalts have on the wave propagation. In addition to providing insights into the issues that arise when surveying under basalts, these observations suggest that careful management of seismic noise and the acquisition of long-offset seismic data with low-frequency geophones have the potential to improve the seismic results.
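
The impedance-contrast point (finding 1) can be made concrete with normal-incidence reflection coefficients; the layer properties below are illustrative values, not the paper's models.

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence amplitude reflection coefficient between two layers."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# illustrative properties (kg/m^3, m/s): sediments over basalt over coal measures
r_top = reflection_coefficient(2100, 2200, 2900, 5500)   # sediment -> basalt
r_base = reflection_coefficient(2900, 5500, 2400, 3200)  # basalt -> coal measures
two_way_energy = (1 - r_top**2) ** 2   # energy surviving transmission down and up
print(r_top, two_way_energy)  # ~0.55 reflected amplitude; ~0.49 of energy survives
```

With roughly half the energy reflected at the top of the basalt, strong interbed and surface multiples dominate the record while primaries from the coal measures arrive weakened, which is exactly the situation the modelling explores.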

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia pacific journal of information systems
    • /
    • Vol. 17, No. 4
    • /
    • pp.207-228
    • /
    • 2007
  • For a long time, measurement of Information Technology (IT) organizations' activities was limited mainly to financial indicators. However, as the functions of information systems have diversified, a number of studies have examined new measurement methodologies that complement financial measurement. In particular, research on the IT Balanced Scorecard (BSC), which applies the BSC concept to measuring IT activities, has been conducted in recent years. The BSC provides more than the integration of non-financial measures into a performance measurement system: its core rests on the cause-and-effect relationships between measures, which allow prediction of value chain performance, communication and realization of the corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy, and Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard; it is equally central to the IT BSC. However, the relationship between information systems and enterprise strategy, and the connections among the various IT performance indicators, have not been studied much. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place little or no emphasis on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming has one theoretical and one practical explanation [Blumenberg and Hinz, 2006]. From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical consideration, modeling corporate causalities is a complex task owing to tedious data acquisition and subsequent reliability maintenance. Nevertheless, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like the BSC from simple key performance indicator (KPI) lists. KPI lists present an ad-hoc collection of measures to managers but do not allow a comprehensive view of corporate performance; a performance measurement system like the BSC instead models the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in theory as well as in practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships among those CSFs. For this purpose, we define four BSC perspectives for IT organizations following Van Grembergen's study [2000]: the Future Orientation perspective represents the human and technology resources needed by IT to deliver its services; the Operational Excellence perspective represents the IT processes employed to develop and deliver the applications; the User Orientation perspective represents the user evaluation of IT.
The Business Contribution perspective captures the business value of the IT investments. Each of these perspectives must be translated into corresponding metrics and measures that assess the current situation. Based on previous IT BSC studies and COBIT 4.1, this study suggests 12 CSFs for the IT BSC, comprising 51 KPIs. We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective; the Operational Excellence perspective has positive effects on the User Orientation perspective; and the User Orientation perspective has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling has particular strengths that make it more appropriate than techniques such as multiple regression and LISREL when analyzing small samples, and its use has been gaining interest among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially significant.
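
For readers who want to see what PLS path modeling computes, below is a minimal textbook-style implementation (mode-A outer estimation, centroid inner scheme) for the four-perspective chain, run on simulated indicator data. This is a sketch of the technique, not the estimation code used in the study.

```python
import numpy as np

def standardize(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

def pls_path(blocks, path, max_iter=300, tol=1e-8):
    """Minimal PLS path modeling: mode-A outer estimation, centroid inner scheme.

    blocks: list of (n, p_k) indicator matrices, one per latent construct
    path:   path[i][j] = 1 if construct i is hypothesized to affect construct j
    Returns construct scores and OLS path coefficients per endogenous construct.
    """
    X = [standardize(np.asarray(b, dtype=float)) for b in blocks]
    n, K = X[0].shape[0], len(X)
    A = np.asarray(path)
    link = A + A.T                                   # inner-model neighbours
    w = [np.ones(x.shape[1]) for x in X]
    Y = [standardize(x @ wk) for x, wk in zip(X, w)]
    for _ in range(max_iter):
        Z = [standardize(sum(np.sign(np.corrcoef(Y[k], Y[j])[0, 1]) * Y[j]
                             for j in range(K) if link[k, j]))
             for k in range(K)]                      # centroid inner estimates
        w_new = [x.T @ z / n for x, z in zip(X, Z)]  # mode-A outer weights
        if all(np.allclose(a / np.linalg.norm(a), b / np.linalg.norm(b), atol=tol)
               for a, b in zip(w, w_new)):
            break
        w = w_new
        Y = [standardize(x @ wk) for x, wk in zip(X, w)]
    coefs = {j: np.linalg.lstsq(np.column_stack([Y[i] for i in range(K) if A[i, j]]),
                                Y[j], rcond=None)[0]
             for j in range(K) if A[:, j].any()}     # OLS per endogenous construct
    return Y, coefs

# simulated chain: Future Orientation -> Operational Excellence
#                  -> User Orientation -> Business Contribution
rng = np.random.default_rng(1)
f = [rng.normal(size=200)]
for _ in range(3):
    f.append(0.8 * f[-1] + 0.6 * rng.normal(size=200))
blocks = [np.column_stack([fk + 0.5 * rng.normal(size=200) for _ in range(3)])
          for fk in f]
path = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
scores, path_coefs = pls_path(blocks, path)
print(path_coefs)  # positive coefficients along the hypothesized chain
```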

Expression of CD40, CD86, and HLA-DR in CD1c+ Myeloid Dendritic Cells Isolated from Peripheral Blood in Primary Adenocarcinoma of Lung (원발성 폐선암환자의 말초혈액에서 분리한 CD1c+ 골수성 수지상 세포에서의 CD40, CD86 및 HLA-DR의 발현)

  • Kang, Moon-Chul;Kang, Chang-Hyun;Kim, Young-Tae;Kim, Joo-Hyun
    • Journal of Chest Surgery
    • /
    • Vol. 43, No. 5
    • /
    • pp.499-505
    • /
    • 2010
  • Background: Several animal studies have reported that CD1-restricted T-cells play a key role in tumor immunity. To address this issue in the clinical setting, we studied the expression of markers on CD1c+ myeloid dendritic cells (DCs) isolated from peripheral blood. Material and Method: A total of 24 patients with radiologically suspected or histologically confirmed lung cancer who underwent pulmonary resection were enrolled in this study. The patients were divided by histology into three groups: primary adenocarcinoma of the lung (PACL), primary squamous cell carcinoma of the lung (PSqCL), and benign lung disease (BLD). We obtained 20 mL of peripheral venous blood from each patient using heparin-coated syringes. Data acquisition and analysis were performed by flow cytometry after labeling with monoclonal antibodies. Result: The ratio of CD1c+CD19- dendritic cells to CD1c+ dendritic cells was not significantly different between the three groups. CD40 (p=0.171), CD86 (p=0.037), and HLA-DR (p=0.036) were less expressed in the PACL group than in the BLD group. CD40 (p=0.319), CD86 (p=0.036), and HLA-DR (p=0.085) were also less expressed in the PACL group than in the PSqCL group, but the difference was significant only for CD86. Expression of co-stimulatory markers did not differ between the PSqCL and BLD groups. Expression of markers for activated DCs was dramatically lower in the PACL group than in the other histology groups (CD40, p=0.005; CD86, p=0.013; HLA-DR, p=0.004). Conclusion: These results suggest that CD1c+ myeloid DCs participate in the control of tumor immunity and that the low expression of these markers reflects a lack of dendritic-cell-triggered immune response in adenocarcinoma of the lung.
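
A sketch of this style of flow-cytometry analysis: rectangular gating of CD1c+CD19- events, percent-positive for a co-stimulatory marker, and a nonparametric group comparison. The gate thresholds, channel names, per-patient values, and the choice of the Mann-Whitney U test are illustrative assumptions; the abstract does not state which statistical test was used.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def gate_cd1c_mdc(events, cd1c_cut, cd19_cut):
    """Keep CD1c+ CD19- events (myeloid DCs). `events` maps a channel name to a
    per-event fluorescence-intensity array; cut values are assumed thresholds."""
    keep = (events["CD1c"] > cd1c_cut) & (events["CD19"] < cd19_cut)
    return {ch: v[keep] for ch, v in events.items()}

def percent_positive(gated, marker, cut):
    """Percent of gated DCs expressing a marker (CD40/CD86/HLA-DR) above cut."""
    return 100.0 * np.mean(gated[marker] > cut)

# group comparison on illustrative per-patient %positive values (not study data)
pacl = np.array([12.1, 8.4, 15.0, 9.9, 11.3])
bld = np.array([21.5, 18.2, 25.1, 19.9])
stat, p = mannwhitneyu(pacl, bld, alternative="two-sided")
print(p)
```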