• Title/Summary/Keyword: Data Partitioning (데이터 분할)

Search Results: 2,601

Effects of the Variability of Individual Data on the Group Results; an Acupuncture Study Using fMRI (기능적 자기공명영상을 이용한 침 연구에 있어서 개체 별 다양성이 그룹분석에 미치는 영향 연구)

  • Bae, Seong-In;Jahng, Geon-Ho;Ryu, Chang-Woo;Lim, Sabina
    • Progress in Medical Physics
    • /
    • v.20 no.4
    • /
    • pp.277-289
    • /
    • 2009
  • Recently, functional MRI has been used to investigate the neurobiological mechanisms of acupuncture and the specificity of acupoints. In most previous studies, group data have tended to be regarded as more important than individual data. This study was designed to investigate the effect of the variability of individual data on group results. Functional MRI (fMRI) of the whole brain was performed in fifteen healthy subjects during placebo and acupuncture stimulation at the ST36 acupoint. After a rest period of 30 seconds, the acupuncture needle was inserted and twisted at a rate of 2 Hz for 45 seconds and then removed immediately. This process was repeated three times. Individual and group analyses were performed as voxel-based analyses using the SPM2 software. Visual inspection of the activation and deactivation maps from the individual sessions showed large variability across the fifteen subjects, meaning that the group data reflected the brain activation responses of only a few subjects. We suggest that individual data should be presented to demonstrate the effect of acupuncture.

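As a rough sketch of the stimulation paradigm described above (30 s of rest followed by 45 s of 2 Hz needle manipulation, repeated three times), the block design can be encoded as a boxcar regressor of the kind a GLM package such as SPM consumes. The TR of 3 s is an assumption for illustration, not stated in the abstract.

```python
def boxcar_regressor(rest_s=30, stim_s=45, n_blocks=3, tr=3.0):
    """Per-scan 0/1 stimulation regressor for the rest/stimulation
    block design described in the abstract. The block timings come
    from the text; the TR (scan repetition time) is an assumption."""
    # build a second-by-second timeline: rest = 0, stimulation = 1
    timeline = []
    for _ in range(n_blocks):
        timeline += [0.0] * rest_s + [1.0] * stim_s
    # sample the timeline at one value per scan (every TR seconds)
    n_scans = int(len(timeline) / tr)
    return [timeline[int(i * tr)] for i in range(n_scans)]

reg = boxcar_regressor()  # 75 scans for the 225 s paradigm at TR = 3 s
```

In a real SPM analysis this regressor would additionally be convolved with a hemodynamic response function before fitting.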

Utility Evaluation on Application of Geometric Mean Depending on Depth of Kidney in Split Renal Function Test Using 99mTc-MAG3 (99mTc-MAG3를 이용한 상대적 신장 기능 평가 시 신장 깊이에 따른 기하평균 적용의 유용성 평가)

  • Lee, Eun-Byeul;Lee, Wang-Hui;Ahn, Sung-Min
    • Journal of radiological science and technology
    • /
    • v.39 no.2
    • /
    • pp.199-208
    • /
    • 2016
  • The 99mTc-MAG3 renal scan acquires dynamic renal images using 99mTc-MAG3, continuously visualizing the process of the radiopharmaceutical being absorbed by the kidneys and excreted. Once the test starts, the count ratio between the two kidneys during minutes 1~2.5 is measured to obtain split renal function, which is expressed as each kidney's share of overall renal function. This study compares the split renal function obtained from data acquired with the posterior detector, the conventional method, with that obtained from the geometric mean of the anterior and posterior detector counts, and evaluates the utility of attenuation compensation according to differences in kidney depth. From July 2015 to February 2016, 33 patients who underwent a 99mTc-MAG3 renal scan (13 male, 20 female; average age 44.66, range 5~70; average height 160.40 cm; average weight 55.40 kg) were selected as subjects. The average depth of the kidney was 65.82 mm on the left and 71.62 mm on the right. In the supine position, 30 of the 33 patients showed a higher ratio for the deeper-situated kidney and a lower ratio for the shallower one. This result is attributed to the correction of attenuation between the deeper-situated kidney and the detector. In cases where the two kidneys differ in depth, such as lesions in or around the kidney, spinal malformation, or ectopic kidney, the ratio of the deeper-situated kidney should be compensated to calculate split renal function more accurately than the conventional method (posterior detector counting) allows.
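The geometric-mean method discussed above combines each kidney's anterior (A) and posterior (P) counts as sqrt(A × P) before computing its share of total function. A minimal sketch, with the posterior-only (conventional) estimate for comparison; the count values used in testing are hypothetical, and background correction is omitted.

```python
import math

def split_renal_function(left_ant, left_post, right_ant, right_post):
    """Split renal function (%) from the geometric mean of anterior
    and posterior detector counts, as described in the abstract.
    Count values passed in are hypothetical examples."""
    left_gm = math.sqrt(left_ant * left_post)
    right_gm = math.sqrt(right_ant * right_post)
    total = left_gm + right_gm
    return 100 * left_gm / total, 100 * right_gm / total

def split_posterior_only(left_post, right_post):
    """Conventional estimate using posterior-detector counts only."""
    total = left_post + right_post
    return 100 * left_post / total, 100 * right_post / total
```

With symmetric kidneys at different depths (e.g. left counts 400 anterior / 100 posterior, right counts 100 / 400), the geometric mean yields a 50:50 split while posterior-only counting is skewed to 20:80, which illustrates the attenuation effect the study describes.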

Development of Gated Myocardial SPECT Analysis Software and Evaluation of Left Ventricular Contraction Function (게이트 심근 SPECT 분석 소프트웨어의 개발과 좌심실 수축 기능 평가)

  • Lee, Byeong-Il;Lee, Dong-Soo;Lee, Jae-Sung;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.2
    • /
    • pp.73-82
    • /
    • 2003
  • Objectives: A new software package (Cardiac SPECT Analyzer: CSA) was developed for quantification of volumes and ejection fraction on gated myocardial SPECT. Volumes and ejection fraction by CSA were validated by comparison with those quantified by the Quantitative Gated SPECT (QGS) software. Materials and Methods: Gated myocardial SPECT was performed in 40 patients with ejection fractions from 15% to 85%. In 26 patients, gated myocardial SPECT was acquired again with the patients in situ. A cylinder model was used to eliminate noise semi-automatically, and profile data were extracted using Gaussian fitting after smoothing. The boundary points of the endo- and epicardium were found using an iterative learning algorithm. End-diastolic (EDV) and end-systolic (ESV) volumes and ejection fraction (EF) were calculated. These values were compared with those calculated by QGS; the same gated SPECT data were also repeatedly quantified by CSA, and the variation of the values across sequential measurements of the same patients on the repeated acquisition was assessed. Results: In the 40 patient data sets, EF, EDV and ESV by CSA correlated with those by QGS, with correlation coefficients of 0.97, 0.92 and 0.96. Two standard deviations (2SD) of EF on the Bland-Altman plot was 10.1%. Repeated measurements of EF, EDV and ESV by CSA correlated with each other, with coefficients of 0.96, 0.99 and 0.99, respectively. On repeated acquisition, reproducibility was also excellent, with correlation coefficients of 0.89, 0.97 and 0.98, coefficients of variation of 8.2%, 5.4 mL and 8.5 mL, and 2SD of 10.6%, 21.2 mL and 16.4 mL on the Bland-Altman plot for EF, EDV and ESV. Conclusion: We developed the CSA software for quantification of volumes and ejection fraction on gated myocardial SPECT. The volumes and ejection fraction quantified by this software were found valid in both correctness and precision.
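The quantities compared above follow standard definitions: EF is the fractional volume change between end-diastole and end-systole, and the 2SD agreement limit is read off a Bland-Altman comparison of paired measurements. A minimal sketch; the sample values in the usage example are synthetic, not the study's data.

```python
import math

def ejection_fraction(edv, esv):
    """Left-ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes in mL."""
    return 100 * (edv - esv) / edv

def bland_altman_2sd(values_a, values_b):
    """Two sample standard deviations of the pairwise differences:
    the agreement limit quoted in the abstract (e.g. 10.1% for EF
    between CSA and QGS)."""
    diffs = [a - b for a, b in zip(values_a, values_b)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)
    return 2 * math.sqrt(var)
```

For example, EDV 120 mL and ESV 60 mL gives an EF of 50%.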

Evaluation of Incident Detection Algorithms focused on APID, DES, DELOS and McMaster (돌발상황 검지알고리즘의 실증적 평가 (APID, DES, DELOS, McMaster를 중심으로))

  • Nam, Doo-Hee;Baek, Seung-Kirl;Kim, Sang-Gu
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.7 s.78
    • /
    • pp.119-129
    • /
    • 2004
  • This paper reports the results of the development and validation procedures for the Freeway Incident Management System (FIMS) prototype, developed as part of the Intelligent Transportation Systems Research and Development program. The core of the FIMS is an integration of its component parts: a modular but integrated system for freeway management. The overall approach has been component-oriented, with secondary emphasis placed on the traffic characteristics at the sites. The first step in the development process was the selection of the required data for each component within the existing infrastructure of the Korean freeway system. After thorough review and analysis of vehicle detection data, the pilot site led to the use of different technologies according to the specific needs and character of the implementation. The existing system was thus tested in different configurations at different sections of freeway, increasing the validity and scope of the overall findings. The incident detection module was evaluated according to predefined system validation specifications, which identified two data collection and analysis patterns: the on-line and off-line testing procedural frameworks. The off-line testing used asynchronous analysis, commonly in conjunction with simulation of device input data, to take full advantage of the opportunity to test and calibrate the incident detection algorithms, focusing on APID, DES, DELOS and McMaster. The simulation used synchronous analysis, thereby providing a means for testing the incident detection module.
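The abstract names the four algorithms without detailing them. As a rough illustration of the smoothing-based family (DES is commonly described as a double-exponential-smoothing detector), the sketch below flags an incident when observed detector occupancy exceeds a short-term forecast; the smoothing constant and threshold are illustrative, not the paper's calibration.

```python
def des_detector(occupancy, alpha=0.3, threshold=3.0):
    """Generic double-exponential-smoothing incident flagger: flag
    time step t when the observed occupancy exceeds the one-step
    smoothed forecast by more than `threshold`. Parameters here are
    illustrative, not the calibration used in the paper."""
    s1 = s2 = occupancy[0]          # first- and second-order smoothers
    alarms = []
    for t, obs in enumerate(occupancy[1:], start=1):
        forecast = 2 * s1 - s2      # one-step DES forecast
        if obs - forecast > threshold:
            alarms.append(t)
        s1 = alpha * obs + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    return alarms
```

A sudden occupancy jump (e.g. a queue forming behind an incident) triggers an alarm, while steady traffic does not.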

Performance and Economic Analysis of Domestic Supercritical Coal-Fired Power Plant with Post-Combustion CO2 Capture Process (국내 초임계 석탄화력발전소에 연소 후 CO2 포집공정 설치 시 성능 및 경제성 평가)

  • Lee, Ji-Hyun;Kwak, No-Sang;Lee, In-Young;Jang, Kyung-Ryoung;Shim, Jae-Goo
    • Korean Chemical Engineering Research
    • /
    • v.50 no.2
    • /
    • pp.365-370
    • /
    • 2012
  • In this study, an economic analysis of a supercritical coal-fired power plant with a CO2 capture process was performed. For this purpose, a chemical absorption method using an amine solvent, which is commercially available and most suitable for existing thermal power plants, was studied. To evaluate the economics of a coal-fired power plant with post-combustion CO2 capture in Korea, the energy penalty after CO2 capture was calculated using the power equivalent factor suggested by Bolland et al., and the overnight cost of the power plant (cost of plant construction) and the operation cost reported by the IEA (International Energy Agency) were used. Based on the chemical absorption method with an amine solvent and a stripper regeneration energy of 3.31 GJ/ton CO2, the net power efficiency was reduced from 41.0% (without CO2 capture) to 31.6% (with CO2 capture), the levelized cost of electricity increased from 45.5 USD/MWh (reference case, without CO2 capture) to 73.9 USD/MWh (with CO2 capture), and the cost of CO2 avoided was estimated at 41.3 USD/ton CO2.
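The quoted cost of CO2 avoided follows the standard definition: the increase in levelized cost of electricity divided by the reduction in CO2 emission intensity. A minimal sketch using the abstract's LCOE figures; the emission intensities below are illustrative assumptions (not given in the abstract), chosen to be roughly consistent with the quoted 41.3 USD/ton CO2.

```python
def cost_of_co2_avoided(lcoe_ref, lcoe_cap, em_ref, em_cap):
    """Cost of CO2 avoided (USD/tCO2) from the levelized costs of
    electricity (USD/MWh) and CO2 emission intensities (tCO2/MWh)
    of the reference and capture plants."""
    return (lcoe_cap - lcoe_ref) / (em_ref - em_cap)

# LCOE values from the abstract; emission intensities are assumptions
cost = cost_of_co2_avoided(45.5, 73.9, em_ref=0.80, em_cap=0.11)
```

With these assumed intensities the LCOE increase of 28.4 USD/MWh over 0.69 tCO2/MWh avoided gives approximately 41 USD/tCO2, in line with the abstract.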

High-Speed Implementation and Efficient Memory Usage of Min-Entropy Estimation Algorithms in NIST SP 800-90B (NIST SP 800-90B의 최소 엔트로피 추정 알고리즘에 대한 고속 구현 및 효율적인 메모리 사용 기법)

  • Kim, Wontae;Yeom, Yongjin;Kang, Ju-Sung
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.1
    • /
    • pp.25-39
    • /
    • 2018
  • NIST (National Institute of Standards and Technology) has recently published the second draft of SP 800-90B, the document for evaluating the security of entropy sources, a key element of a cryptographic random number generator (RNG), and has provided a tool implemented in Python. In SP 800-90B, the security evaluation of an entropy source is a process of estimating min-entropy with several estimators, divided into an IID track and a non-IID track. In the IID track, the entropy source is estimated only with the MCV estimator; in the non-IID track, it is estimated with 10 estimators including MCV. The running time of NIST's tool on the non-IID track is approximately 20 minutes, and its memory usage exceeds 5.5 GB. For evaluation agencies that must repeatedly evaluate various samples, and for developers or researchers who must run experiments in various environments, estimating entropy with the tool may be inconvenient and, depending on the environment, impossible. In this paper, we propose high-speed implementations and an efficient memory usage technique for the min-entropy estimation algorithms of SP 800-90B. Our major achievements are three methods for improved speed and reduced memory usage: exploiting the advantages of C++ to speed up the MultiMCW estimator, rebuilding the data storage structure of MultiMMC to reduce memory and improve speed, and rebuilding the data structure of LZ78Y to improve its speed. The tool with our proposed methods applied is 14 times faster and uses 13 times less memory than NIST's tool.
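The MCV (Most Common Value) estimator mentioned above for the IID track has a compact definition in SP 800-90B: take the observed frequency of the most common sample value, form its upper 99% confidence bound, and report minus log2 of that bound. A minimal sketch:

```python
import math
from collections import Counter

def mcv_min_entropy(samples):
    """Most Common Value min-entropy estimate per NIST SP 800-90B:
    p_hat is the frequency of the most common value, p_u its upper
    confidence bound at z = 2.576 (99%), and the estimate -log2(p_u)."""
    L = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / L
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (L - 1)))
    return -math.log2(p_u)
```

A constant sequence yields 0 bits of min-entropy, while a balanced two-symbol sequence yields a little under 1 bit because of the confidence-bound penalty.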

Factors Influencing the Activation of Brown Adipose Tissue in 18F-FDG PET/CT in National Cancer Center (양전자방출단층촬영 시 갈색지방조직 활성화에 영향을 미치는 요인 분석)

  • You, Yeon Wook;Lee, Chung Wun;Jung, Jae Hoon;Kim, Yun Cheol;Lee, Dong Eun;Park, So Hyeon;Kim, Tae-Sung
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.25 no.1
    • /
    • pp.21-28
    • /
    • 2021
  • Purpose: Brown fat, or brown adipose tissue (BAT), is involved in non-shivering thermogenesis and creates heat through glucose metabolism. BAT activation occurs sporadically, affected by internal factors such as age, sex, and body mass index (BMI) and by external factors such as temperature and environment. In this retrospective electronic medical record (EMR) observational study, statistical analysis was conducted to relate BAT activation to these factors. Materials and Methods: EMR data of patients who underwent a PET/CT scan at the National Cancer Center over the two years from January 2018 to December 2019 were collected; a total of 9,155 patients were extracted, yielding 13,442 cases including repeated scans. After univariable logistic regression analysis of whether BAT activation is affected by the environment (outdoor temperature) and the patient's condition (BMI, cancer type, sex, and age), the univariable factors with P<0.1 were selected for a final multivariable regression model of BAT activation. Results: BAT activation occurred in 93 cases (0.7%). According to the univariable logistic regression analysis, the likelihood of BAT activation increased in patients under 50 years old (P<0.001), in females (P<0.001), at outdoor temperatures below 14.5℃ (P<0.001), at lower BMI (P<0.001), and in patients who received the injection before 12:30 PM (P<0.001); it decreased at higher BMI (P<0.001) and in patients diagnosed with lung cancer (P<0.05). In the multivariable results, BAT activation was significantly increased in patients under 50 years (P<0.001), in females (P<0.001), and at outdoor temperatures below 14.5℃ (P<0.001), and significantly decreased at higher BMI (P<0.05). Conclusion: A retrospective study of factors affecting BAT activation was conducted in patients who underwent a PET/CT scan over 2 years at the National Cancer Center. The results confirmed that BAT was significantly activated in normal-weight women under 50 years old who were scanned when the outdoor temperature was below 14.5℃. Based on this result, patients matching these factors can be identified in advance, which should help reduce BAT activation in future practice.
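The univariable associations reported above can be illustrated with a basic epidemiological building block: an odds ratio with a Wald confidence interval computed from a 2×2 table of exposure versus BAT activation. The counts below are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% CI from a 2x2 table:
    a = exposed & activated,   b = exposed & not activated,
    c = unexposed & activated, d = unexposed & not activated.
    The counts are hypothetical illustrations, not the paper's data."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = or_ * math.exp(-z * se)
    hi = or_ * math.exp(z * se)
    return or_, lo, hi
```

A multivariable logistic model, as used in the study, additionally adjusts each such association for the other selected factors.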

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1413-1425
    • /
    • 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. Wildfires destroy vegetation and, depending on their intensity and extent, drive ecosystem changes; these changes in turn affect wildfire occurrence, causing secondary damage. Accurate estimation of the areas affected by wildfires is therefore fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after a fire. In addition, deep learning algorithms such as convolutional neural networks (CNN) and transformer models show high performance for more accurate monitoring of burnt regions. To date, however, the application of deep learning models has been limited, and quantitative performance evaluations for practical field use are scarce. Hence, this study emphasizes a comparative analysis, exploring performance improvements achieved through both model selection and data design. We examined deep learning models for detecting wildfire-damaged areas using Landsat 8 satellite images of California, and conducted a comprehensive comparison of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content. As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. Including the spectral indices alongside the base wavelength bands increased the metric values for all combinations, confirming that augmenting the input data with spectral indices contributes to more precise pixel-level classification. This approach can be applied to other satellite images to support recovery strategies for burnt areas.
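The spectral indices and the evaluation metric named above have standard definitions, sketched below using Landsat 8 OLI band numbering (band 4 = red, band 5 = NIR, band 7 = SWIR2); the reflectance and mask values in the usage test are synthetic.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from Landsat 8
    band 5 (NIR) and band 4 (red) surface reflectances."""
    return (nir - red) / (nir + red)

def nbr(nir, swir2):
    """Normalized burn ratio from band 5 (NIR) and band 7 (SWIR2);
    lower post-fire NBR indicates burnt, drier surfaces."""
    return (nir - swir2) / (nir + swir2)

def iou(pred, truth):
    """Intersection over union for binary masks given as flat 0/1
    lists; mIoU averages this over classes."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union
```

In the paper's setup, NDVI and NBR rasters computed this way are stacked with the raw bands as extra input channels for the segmentation networks.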

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the most significant of these techniques: a method of discovering useful patterns in the data. Frequent pattern mining, one branch of pattern mining, extracts patterns whose frequencies exceed a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single-support model implicitly assumes that all items in the database have the same nature. In real-world applications, however, each item can have its own characteristics, so a pattern mining technique that reflects those characteristics is required. Under a single threshold, mining patterns that contain rare items requires setting the threshold very low, which yields too many patterns full of meaningless items; conversely, with too high a threshold no such pattern can be mined at all. This dilemma is called the rare item problem. To solve it, initial research proposed approximate approaches that split the data into several groups according to item frequencies or grouped related rare items; being approximate, these methods cannot find all frequent patterns, including the rare ones. Hence, the pattern mining model with multiple minimum supports was proposed. In this model, each item has its own minimum support threshold, called the MIS (Minimum Item Support), calculated from the item's frequency in the database. By applying the MIS, the multiple minimum supports model finds all rare frequent patterns without generating meaningless patterns or losing significant ones. Candidate patterns are extracted during frequent pattern mining; in the single minimum support model, the single threshold is compared with the candidates' frequencies, so the characteristics of the items that constitute a candidate pattern are not reflected, and the rare item problem arises. To address this in the multiple minimum supports model, the minimum MIS value over the items of a candidate pattern is used as that pattern's support threshold. To mine frequent patterns, including rare ones, efficiently under this rule, tree-based algorithms of the multiple minimum supports model sort the items in the tree in MIS-descending order, in contrast to the single minimum support model, where items are ordered by frequency. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and compare it with a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple-minimum-supports-based algorithm outperforms the single-minimum-support-based one while demanding more memory for the MIS information. Both compared algorithms show good scalability.
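The core rule described above can be sketched by brute force: a candidate pattern is frequent when its support meets the minimum of its items' MIS values. The item names, MIS values, and transactions below are made up for illustration; real algorithms use tree structures rather than full enumeration.

```python
from collections import Counter
from itertools import combinations

def mine_with_mis(transactions, mis):
    """Return every itemset whose support meets the MINIMUM of the
    MIS values of its items (the multiple-minimum-supports rule).
    Brute-force enumeration, for illustration only."""
    support = Counter()
    for t in transactions:
        for r in range(1, len(t) + 1):
            for combo in combinations(sorted(t), r):
                support[combo] += 1
    return {p: s for p, s in support.items()
            if s >= min(mis[i] for i in p)}
```

With a low MIS for a rare item, patterns containing that item survive even though their absolute support is small, while common items still face a higher bar.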

The Comparison of the Ultra-Violet Radiation of Summer Outdoor Screened by the Landscaping Shade Facilities and Tree (조경용 차양시설과 수목에 의한 하절기 옥외공간의 자외선 차단율 비교)

  • Lee, Chun-Seok;Ryu, Nam-Hyong
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.41 no.6
    • /
    • pp.20-28
    • /
    • 2013
  • The purpose of this study was to compare the ultraviolet (UV) radiation under landscaping shade facilities and a tree with the natural solar UV of open outdoor space at summer middays. UVA+B and UVB were recorded every minute from the 20th of June to the 26th of September 2012, at a height of 1.1 m, in four different shading conditions, using four identical measuring systems, each consisting of a pair of analog UVA+B sensors (220~370 nm, Genicom GUVA-T21GH) and UVB sensors (220~320 nm, Genicom GUVA-T21GH) with a data acquisition system (Comfile Technology Moacon). The four shading conditions were: under a wooden shelter (W4.2 m × L4.2 m × H2.5 m), a polyester membrane structure (W4.9 m × L4.9 m × H2.6 m), a Salix koreensis (H11 × B30), and a brick-paved plot without any shading. Based on the 648 records of 17 sunny days, the time-serial differences of natural solar UVA+B and UVB over the midday period were analysed and compared, and the differences between the four shading conditions were statistically analysed using the 2,052 records of the daytime period from 10 A.M. to 4 P.M. The major findings were as follows. 1. The average UVA+B under the wooden shelter, the membrane, and the tree was 39 μW/cm² (3.4%), 74 μW/cm² (6.4%), and 87 μW/cm² (7.6%) respectively, while the open-air solar UVA+B was 1,148 μW/cm²; that is, the facilities and the tree screened at least 93% of solar UVA+B. 2. The average UVB under the wooden shelter, the membrane, and the tree was 12 μW/cm² (5.8%), 26 μW/cm² (13%), and 17 μW/cm² (8.2%) respectively, while the open-air solar UVB was 207 μW/cm²; the membrane showed the highest level and the wooden shelter the lowest. 3. The time-serial analysis showed that the differences between the three shaded conditions around noon were very small, but the differences in the early morning and late afternoon were clearly large, which appears to be caused by the form and structure of the shading facilities and the tree rather than by the shading materials themselves. In summary, the landscaping shade facilities and the tree screened solar UV very well in outdoor space at summer midday, but screened lateral UV poorly in the early morning and late afternoon. A more careful design of shading facilities, or the use of large trees or forest to block the additional lateral UV, would therefore condition outdoor space more effectively, reducing useless or even harmful radiation during human activities.
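The screening percentages quoted above reduce to simple arithmetic: the fraction of open-air UV blocked under a shade element.

```python
def screening_rate(uv_under, uv_open):
    """Percentage of ambient UV blocked by a shade element, from the
    irradiance measured under it and in the open (same units,
    e.g. uW/cm^2). Example: 39 vs 1,148 uW/cm^2 for the wooden
    shelter's UVA+B in the abstract gives about 96.6%."""
    return 100 * (1 - uv_under / uv_open)
```

The 3.4% transmitted under the wooden shelter corresponds to a screening rate of roughly 96.6%, consistent with the "at least 93%" summary across all three shade elements.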