• Title/Summary/Keyword: Dynamic parameter


Differentiation of True Recurrence from Delayed Radiation Therapy-related Changes in Primary Brain Tumors Using Diffusion-weighted Imaging, Dynamic Susceptibility Contrast Perfusion Imaging, and Susceptibility-weighted Imaging (확산강조영상, 역동적조영관류영상, 자화율강조영상을 이용한 원발성 뇌종양환자에서의 종양재발과 지연성 방사선치료연관변화의 감별)

  • Kim, Dong Hyeon;Choi, Seung Hong;Ryoo, Inseon;Yoon, Tae Jin;Kim, Tae Min;Lee, Se-Hoon;Park, Chul-Kee;Kim, Ji-Hoon;Sohn, Chul-Ho;Park, Sung-Hye;Kim, Il Han
    • Investigative Magnetic Resonance Imaging / v.18 no.2 / pp.120-132 / 2014
  • Purpose: To compare dynamic susceptibility contrast perfusion imaging, diffusion-weighted imaging, and susceptibility-weighted imaging (SWI) for the differentiation of tumor recurrence from delayed radiation therapy (RT)-related changes in patients treated with RT for primary brain tumors. Materials and Methods: We enrolled 24 patients treated with RT for various primary brain tumors who showed newly appearing enhancing lesions more than one year after completion of RT on follow-up MRI. The enhancing lesions were confirmed as recurrences (n=14) or RT-related changes (n=10). We calculated the mean values of normalized cerebral blood volume (nCBV), apparent diffusion coefficient (ADC), and the proportion of dark signal intensity on SWI (proSWI) for the enhancing lesions. All values were compared between the two groups using the t-test. A multivariable logistic regression model was used to determine the best predictor for the differential diagnosis, and the cutoff value of the best predictor obtained from receiver operating characteristic curve analysis was applied to calculate the sensitivity, specificity, and accuracy of the diagnosis. Results: The mean nCBV value was significantly higher in the recurrence group than in the RT-change group (P=.004), and the mean proSWI was significantly lower in the recurrence group (P<.001). No significant difference was observed in the mean ADC values between the two groups. Multivariable logistic regression analysis showed that proSWI was the only independent variable for the differentiation; the sensitivity, specificity, and accuracy were 78.6% (11 of 14), 100% (10 of 10), and 87.5% (21 of 24), respectively. Conclusion: proSWI was the most promising parameter for differentiating newly developed enhancing lesions more than one year after RT completion in brain tumor patients.
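As a quick check on the diagnostic-performance arithmetic reported in this abstract, the following minimal sketch reproduces the sensitivity, specificity, and accuracy from the stated counts (11 of 14 recurrences and 10 of 10 RT-related changes correctly classified by the proSWI cutoff). The function is illustrative only and is not the authors' analysis code.

```python
# Minimal sketch: sensitivity/specificity/accuracy from the confusion counts
# reported in the abstract. Not the authors' code.

def diagnostic_performance(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) as fractions."""
    sensitivity = tp / (tp + fn)                 # true positives among recurrences
    specificity = tn / (tn + fp)                 # true negatives among RT-related changes
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

if __name__ == "__main__":
    sens, spec, acc = diagnostic_performance(tp=11, fn=3, tn=10, fp=0)
    print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, accuracy={acc:.1%}")
    # -> sensitivity=78.6%, specificity=100.0%, accuracy=87.5%
```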

Studies on Rheological Characterization of Barley β-Glucan [mixed-linked (1-3),(1-4)-β-D-Glucan] (보리 β-Glucan [mixed-linked (1-3),(1-4)-β-D-Glucan]의 리올로지 특성)

  • Kim, Mi-Ok;Cha, Hee-Sook;Koo, Sung-Ja
    • Korean Journal of Food Science and Technology / v.25 no.1 / pp.15-21 / 1993
  • Crude β-glucan extracted from barley was purified by stepwise enzyme treatment (thermostable α-amylase, amyloglucosidase, protease). The intrinsic viscosity [η] of the purified β-glucan was determined with a Cannon-Fenske capillary viscometer (size 50, Cannon Instruments, State College, PA) at different pH values (2, 4, 7, 9, 11) and various salt concentrations (0.01 M, 0.03 M, 0.05 M, 0.07 M, 0.1 M, and 0.2 M). The [η] of the purified β-glucan ranged from 0.997 to 2.290 dl/g. The [η] under both alkaline and acidic conditions was lower than at pH 7, and the alkaline solutions showed a lower [η] than the acidic ones. From 0 M to 0.2 M salt, the [η] of the purified β-glucan solution decreased up to 0.03 M NaCl, increased up to 0.05 M NaCl, and then remained constant up to 0.2 M NaCl. The chain stiffness parameter of the purified β-glucan was not affected by temperature from 15°C to 65°C. The shear-rate behavior of β-glucan solutions under various conditions was determined with a Bohlin rheometer (Lund, Sweden). Solutions at concentrations of 1.0 g/dl and 2.0 g/dl behaved as Newtonian fluids, whereas solutions above 3.0 g/dl showed thixotropic and pseudoplastic characteristics. For the 4.0 g/dl solution, barley β-glucan showed damping at a frequency of 0.5: below this frequency it exhibited predominantly viscous behavior, and above it predominantly elastic behavior.
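Intrinsic viscosity is conventionally obtained by extrapolating the reduced viscosity of a dilution series to zero concentration. The sketch below illustrates that general procedure with made-up efflux times and concentrations; it is not the paper's data or analysis.

```python
# Illustrative sketch of intrinsic-viscosity estimation by dilution-series
# extrapolation (Huggins-type plot), using made-up efflux times.
import numpy as np

def intrinsic_viscosity(concentrations, efflux_times, t_solvent):
    """Extrapolate reduced viscosity eta_sp/c to c -> 0 with a linear fit."""
    eta_rel = np.asarray(efflux_times) / t_solvent        # relative viscosity
    eta_sp = eta_rel - 1.0                                 # specific viscosity
    eta_red = eta_sp / np.asarray(concentrations)          # reduced viscosity, dl/g
    slope, intercept = np.polyfit(concentrations, eta_red, 1)
    return intercept                                       # [eta] at c = 0

if __name__ == "__main__":
    c = [0.05, 0.10, 0.20, 0.40]          # g/dl (hypothetical)
    t = [105.0, 110.5, 122.0, 147.5]      # s, capillary efflux times (hypothetical)
    print(f"[eta] ~ {intrinsic_viscosity(c, t, t_solvent=100.0):.3f} dl/g")
```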


Investigation for Shoulder Kinematics Using Depth Sensor-Based Motion Analysis System (깊이 센서 기반 모션 분석 시스템을 사용한 어깨 운동학 조사)

  • Lee, Ingyu;Park, Jai Hyung;Son, Dong-Wook;Cho, Yongun;Ha, Sang Hoon;Kim, Eugene
    • Journal of the Korean Orthopaedic Association / v.56 no.1 / pp.68-75 / 2021
  • Purpose: The purpose of this study was to analyze shoulder joint motion dynamically with a depth sensor-based motion analysis system in a normal group and in patients with shoulder disease, and to report the results along with a review of the relevant literature. Materials and Methods: Seventy subjects participated in the study: 30 subjects in the normal group and 40 patients with shoulder disease. The patients were subdivided into four disease groups: adhesive capsulitis, impingement syndrome, rotator cuff tear, and cuff tear arthropathy. Each subject repeated abduction and adduction three times while the joint angle over time was measured with the depth sensor-based motion analysis system. The maximum abduction angle (θmax), the maximum abduction angular velocity (ωmax), the maximum adduction angular velocity (ωmin), and the abduction/adduction time ratio (tabd/tadd) were calculated, as in the sketch below. These parameters were compared between the 30 normal subjects and the 40 patients, and additionally between the normal group and each of the four disease subgroups (10 patients each), giving a total of five groups. Results: Compared with the normal group, the maximum abduction angle (θmax), the maximum abduction angular velocity (ωmax), and the maximum adduction angular velocity (ωmin) were lower, and the abduction/adduction time ratio (tabd/tadd) was higher, in the patients with shoulder disease. Comparison of the disease subgroups revealed a lower maximum abduction angle (θmax) and maximum abduction angular velocity (ωmax) in the adhesive capsulitis and cuff tear arthropathy groups than in the normal group. In addition, the abduction/adduction time ratio (tabd/tadd) was higher in the adhesive capsulitis, rotator cuff tear, and cuff tear arthropathy groups than in the normal group. Conclusion: Evaluation of the shoulder joint with the depth sensor-based motion analysis system made it possible to measure the range of motion and dynamic motion parameters such as angular velocity. These results show that accurate evaluation of shoulder joint function and an in-depth understanding of shoulder diseases are possible.
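The four kinematic parameters above can be derived from a sampled angle-time trace roughly as follows. This is a minimal sketch with synthetic data and invented function names, not the motion analysis system's software.

```python
# Sketch: derive theta_max, omega_max, omega_min and the abduction/adduction
# time ratio from a sampled shoulder-angle trace. Synthetic data only.
import numpy as np

def shoulder_kinematics(t, theta):
    """t in seconds, theta in degrees; one abduction-adduction cycle."""
    omega = np.gradient(theta, t)              # angular velocity, deg/s
    i_peak = int(np.argmax(theta))             # end of the abduction phase
    theta_max = float(theta[i_peak])
    omega_max = float(omega.max())             # fastest abduction
    omega_min = float(omega.min())             # fastest adduction (negative)
    t_abd = t[i_peak] - t[0]                   # abduction duration
    t_add = t[-1] - t[i_peak]                  # adduction duration
    return theta_max, omega_max, omega_min, t_abd / t_add

if __name__ == "__main__":
    t = np.linspace(0.0, 4.0, 200)
    theta = 150.0 * np.sin(np.pi * t / 4.0) ** 2   # synthetic 0 -> 150 -> 0 deg sweep
    print(shoulder_kinematics(t, theta))
```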

A Theoretical Model for the Analysis of Residual Motion Artifacts in 4D CT Scans (이론적 모델을 이용한 4DCT에서의 Motion Artifact 분석)

  • Kim, Tae-Ho;Yoon, Jai-Woong;Kang, Seong-Hee;Suh, Tae-Suk
    • Progress in Medical Physics / v.23 no.3 / pp.145-153 / 2012
  • In this study, we quantified the residual motion artifact in 4D-CT scans using a dynamic lung phantom that simulates respiratory target motion, and we suggest a simple one-dimensional theoretical model to explain and characterize the sources of motion artifacts in 4D-CT scanning. We set up regular 1D sinusoidal motion and adjusted three levels of amplitude (10, 20, 30 mm) with a fixed period (4 s). The 4D-CT scans were acquired in helical mode, with phase information provided by a belt-type respiratory monitoring system. The images were sorted into ten phase bins ranging from 0% to 90%. The reconstructed images were then imported into a treatment planning system (CorePLAN, SC&J) for target delineation using a fixed contour window, and the dimensions of the three targets were measured along the direction of motion. The target dimension in each phase image followed the same trend. The error was minimum at the 50% phase in all cases (10, 20, 30 mm): ΔS (the change in target dimension relative to the 2 cm diameter of the static target) was 0 cm (0%), 0.1 cm (5%), and 0.1 cm (5%) for the 10, 20, and 30 mm amplitudes, respectively. The error was maximum at the 30% and 80% phases, where ΔS was 0.2 cm (10%), 0.7 cm (35%), and 0.9 cm (45%) for the 10, 20, and 30 mm amplitudes, respectively. Based on these results, we analyzed the residual motion artifact in 4D-CT scans using a simple one-dimensional theoretical model and developed a simulation program. Our results explain the effect of residual motion on the target displacement at each phase and show that the residual motion artifact depends on the target velocity at each phase. This study focuses on providing a more intuitive understanding of the residual motion artifact and on explaining the relationship among the motion parameters of the scanner, the treatment couch, and the tumor. In conclusion, our results could help in choosing the appropriate reconstruction phase and CT parameters to reduce the residual motion artifact in 4D-CT.
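The abstract's one-dimensional argument, that the apparent change in target size grows with the target velocity at the reconstructed phase, can be sketched as below. The sinusoidal phase convention and the acquisition-window length are assumptions; this is not the authors' simulation program.

```python
# Rough 1D sketch of the residual motion artifact: for sinusoidal motion with
# phase 0% taken at peak inhale, z(p) = A*cos(2*pi*p), the apparent size change
# of a target imaged over a short acquisition window dt at phase p is roughly
# |dz/dt| * dt. Amplitude/period follow the phantom setup; dt is an assumption.
import math

def apparent_size_change(amplitude_mm, period_s, phase_fraction, acq_window_s):
    """Approximate target-dimension change (mm) at a given respiratory phase."""
    speed = (2.0 * math.pi * amplitude_mm / period_s
             * abs(math.sin(2.0 * math.pi * phase_fraction)))   # |dz/dt| at the phase
    return speed * acq_window_s                                 # vanishes at 0% and 50%

if __name__ == "__main__":
    for amp in (10.0, 20.0, 30.0):                 # phantom amplitudes, mm
        for phase in (0.3, 0.5, 0.8):              # phases discussed in the abstract
            ds = apparent_size_change(amp, period_s=4.0,
                                      phase_fraction=phase, acq_window_s=0.2)
            print(f"A={amp:4.0f} mm, phase={phase:.0%}: dS ~ {ds:4.1f} mm")
```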

A Study on the Relationship of Learning, Innovation Capability and Innovation Outcome (학습, 혁신역량과 혁신성과 간의 관계에 관한 연구)

  • Kim, Kui-Won
    • Journal of Korea Technology Innovation Society / v.17 no.2 / pp.380-420 / 2014
  • We increasingly see the importance of employees acquiring sufficient expertise or innovation capability to prepare for ever-growing uncertainty in their operating domains. Despite this, there has not been enough research on how the operational inputs to employees' innovation outcomes, innovation activities such as the acquisition, exercise, and promotion of innovation capability, and the resulting innovation outcomes interact with one another. This is believed to be because most current research on innovation focuses on the country, industry, and firm levels rather than on an individual organization's innovation inputs, innovation activities, and innovation outcomes. This study therefore departs from the prevailing frames and focuses on the strategic policies required to enhance an organization's innovation capability by quantitatively analyzing employees' innovation outcomes and the innovation activities most relevant to them. The research model offers both a linear and a structural model of the triad of learning, innovation capability, and innovation outcome, and tests four hypotheses: Hypothesis 1: Different levels of innovation capability produce different innovation outcomes (accepted, p-value = 0.000 < 0.05). Hypothesis 2: Different amounts of learning time produce different innovation capabilities (rejected, p-value = 0.199, 0.220 > 0.05). Hypothesis 3: Different amounts of learning time produce different innovation outcomes (accepted, p-value = 0.000 < 0.05). Hypothesis 4: Innovation capability acts as a significant mediating parameter in the relationship between learning time and innovation outcome (structural modeling test). The structural model, after the tests of Hypotheses 1 through 4, shows that irregular on-the-job training and e-learning directly affect the learning-time factor, while job experience level, employment period, and measured capability level directly affect the innovation-capability factor. This is further supported by the finding that patent time directly affects the innovation-capability factor rather than the learning-time factor. Based on the four hypotheses, this study proposes the following measures to maximize an organization's innovation outcome: first, frequent irregular on-the-job training based on an e-learning system; second, efficient management of employment period, job skill level, and related factors through active sponsorship and an energized community of practice (CoP) as a form of irregular learning; and third, an innovation outcome function of the form Y_i = f(e, i, s, t, w) + ε, based on a sound system of capability-level measurement. This study considers the innovation outcome function the most appropriate and important reference model.
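As a rough illustration of the kind of test behind Hypotheses 1-3 (comparing innovation outcomes between groups with different learning time), here is a minimal two-sample t-test sketch with made-up scores; it is not the study's dataset or analysis.

```python
# Minimal sketch: two-sample t-test of innovation-outcome scores between a
# low-learning-time group and a high-learning-time group. Made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
low_learning = rng.normal(loc=3.2, scale=0.8, size=40)    # hypothetical survey scores
high_learning = rng.normal(loc=3.9, scale=0.8, size=40)

t_stat, p_value = stats.ttest_ind(high_learning, low_learning, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> reject the equal-means null
```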

Respiratory air flow transducer calibration technique for forced vital capacity test (노력성 폐활량검사시 호흡기류센서의 보정기법)

  • Cha, Eun-Jong;Lee, In-Kwang;Jang, Jong-Chan;Kim, Seong-Sik;Lee, Su-Ok;Jung, Jae-Kwan;Park, Kyung-Soon;Kim, Kyung-Ah
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.5 / pp.1082-1090 / 2009
  • Peak expiratory flow rate (PEF) is a very important diagnostic parameter obtained from the forced vital capacity (FVC) test. The expiratory flow rate increases rapidly during the short initial period of the maneuver, which may cause measurement error in PEF, particularly due to the non-ideal dynamic characteristics of the transducer. The present study evaluated the initial rise slope (S_r) of the flow-rate signal in order to compensate the transducer output data. The 26 standard signals recommended by the American Thoracic Society (ATS) were generated and passed through a velocity-type respiratory air-flow transducer while the transducer output signal was simultaneously acquired. Most PEF values and the corresponding outputs (N_PEF) were well fitted by a quadratic equation with a high correlation coefficient of 0.9997, but two signals (ATS #2 and #26) showed significant deviations in N_PEF, with relative errors > 10%. The relationship between the relative error in N_PEF and S_r was found to be linear, and the N_PEF data were compensated on this basis. As a result, the 99% confidence interval of the PEF error turned out to be approximately 2.5%, less than a quarter of the 10% upper limit recommended by the ATS. The present compensation technique was therefore shown to be very accurate and compliant with the ATS international standard, and it would be useful for calibrating respiratory air-flow transducers.
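The two-step compensation described above (a quadratic PEF-output fit followed by a linear correction in the initial rise slope S_r) might be sketched as follows. The arrays, the simulated transducer response, and the order of the correction steps are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a two-step compensation in the spirit of the abstract:
# (1) fit true PEF against the transducer output N_PEF with a quadratic curve,
# (2) model the residual relative error as a linear function of the initial
#     rise slope S_r and correct the fitted PEF accordingly.
# All arrays below are placeholders, not the ATS waveform data.
import numpy as np

def fit_calibration(pef_true, n_pef, s_r):
    quad = np.polyfit(n_pef, pef_true, 2)                # PEF predicted from output
    rel_err = (np.polyval(quad, n_pef) - pef_true) / pef_true
    lin = np.polyfit(s_r, rel_err, 1)                    # relative error vs. initial slope
    return quad, lin

def compensate(n_pef, s_r, quad, lin):
    pef_hat = np.polyval(quad, n_pef)
    return pef_hat / (1.0 + np.polyval(lin, s_r))        # remove the modeled slope bias

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pef = rng.uniform(2.0, 12.0, 26)                     # l/s, hypothetical
    s_r = rng.uniform(20.0, 80.0, 26)                    # l/s^2, hypothetical
    out = 0.8 * pef + 0.02 * pef**2 + 0.001 * s_r * pef  # fake transducer response
    quad, lin = fit_calibration(pef, out, s_r)
    print(np.round(compensate(out, s_r, quad, lin) - pef, 3))   # residual PEF errors
```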

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.119-142 / 2015
  • With the explosive growth in the volume of information, Internet users experience considerable difficulty in obtaining the information they need online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF), and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from domain constraints. The CF technique is broadly classified into memory-based CF, model-based CF, and hybrid CF. Model-based CF addresses the drawbacks of CF by considering a Bayesian model, a clustering model, or a dependency network model. This filtering technique not only alleviates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model building and results in a tradeoff between performance and scalability. This tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high cost involved; cumulative changes in the domain environment that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into CBCF to propose predictive clustering-based CF (PCCF), which addresses the issues of reduced coverage and unstable performance. The method mitigates performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users, and it mitigates reduced coverage by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of change (propensities) in user preferences in propensity clustering. Lastly, a preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method was validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC, and PCCF under an environment in which data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC, and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the proposed method produced an insignificant improvement in performance compared with the existing techniques, and it failed to achieve a significant improvement in the standard deviation that indicates the degree of data fluctuation. Nevertheless, it resulted in a marked improvement over the existing techniques in terms of the range that indicates the level of performance fluctuation.
The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques. Further research will be directed toward enhancing the recommendation performance, which failed to demonstrate significant improvement over the existing techniques, and will consider the introduction of a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
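A minimal sketch of the two ingredients named above, a Markov transition-probability model over preference clusters and soft (fuzzy-style) membership weights used in prediction, with invented data structures; it is not the PCCF implementation evaluated in the paper.

```python
# Sketch: estimate a Markov transition matrix over preference clusters from a
# user's sequence of cluster assignments, then blend cluster-level item scores
# with soft membership weights. Invented structures, not PCCF itself.
import numpy as np

def transition_matrix(cluster_sequence, n_clusters):
    """Row-stochastic matrix of P(next cluster | current cluster)."""
    counts = np.ones((n_clusters, n_clusters))             # Laplace smoothing
    for a, b in zip(cluster_sequence[:-1], cluster_sequence[1:]):
        counts[a, b] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def predict_score(current_cluster, membership, trans, cluster_item_scores):
    """Weight each cluster's mean item score by soft membership propagated
    one step through the transition matrix."""
    next_probs = trans[current_cluster] * membership        # fuzzy-weighted transition
    next_probs /= next_probs.sum()
    return float(next_probs @ cluster_item_scores)

if __name__ == "__main__":
    seq = [0, 0, 1, 2, 1, 1, 0]                             # hypothetical cluster history
    trans = transition_matrix(seq, n_clusters=3)
    membership = np.array([0.5, 0.3, 0.2])                  # hypothetical soft memberships
    item_scores = np.array([4.2, 3.1, 2.5])                 # per-cluster mean rating of an item
    print(round(predict_score(current_cluster=0, membership=membership,
                              trans=trans, cluster_item_scores=item_scores), 2))
```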

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Beyond the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which a total collapse occurs in a single moment. The key variables used in corporate default prediction vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise examined the importance of predictive variables using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider the changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. In order to construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using the data before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007-2008). As a result, we construct a model that shows a pattern similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009) based on the models trained over nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis, logit model), it is shown that the deep learning time series model based on the three resulting variable sets is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data have limitations: nonlinear variables, multi-collinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model mitigates the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and offers greater predictive power. In the context of the Fourth Industrial Revolution, governments at home and abroad are working hard to integrate such systems into everyday life, yet deep learning time series research for the financial industry remains scarce. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
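A minimal sketch of the kind of deep learning time series classifier described above, an LSTM over several years of annual financial ratios that outputs a default probability, using Keras with dummy data; the shapes and hyperparameters are assumptions, not the authors' model.

```python
# Sketch: an LSTM binary classifier over sequences of annual financial ratios
# (shape: companies x years x ratios), in the spirit of the time series model
# described above. Shapes, hyperparameters, and data are placeholders.
import numpy as np
import tensorflow as tf

def build_default_model(n_years=7, n_ratios=20):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_years, n_ratios)),
        tf.keras.layers.LSTM(32),                          # summarize the ratio history
        tf.keras.layers.Dense(1, activation="sigmoid"),    # P(default)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 7, 20)).astype("float32")   # dummy ratio panels
    y_train = rng.integers(0, 2, size=500).astype("float32")    # dummy default labels
    model = build_default_model()
    model.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)
    print(model.predict(X_train[:3], verbose=0).ravel())        # predicted default probabilities
```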