• Title/Summary/Keyword: Parameter


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for representing membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common practice [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to predefined shapes. These kinds of functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power, due to the run-time computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides satisfying computational speed, very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur as well: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, without restricting the shapes of the membership functions, strongly reduces the computation time of the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common in fuzzy controllers and to which we will refer in the following. The above term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 elements of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension of a membership value, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, each memory row would have had to store the membership value of every fuzzy set, with a word dimension of 8 × 5 bits.
Therefore, the dimension of the memory would have been 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; elements 32, 64, and 96 of the universe of discourse, for example, are memorized in this compact form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net): if the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
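To make the representation concrete, below is a minimal software sketch of the compact antecedent memory described above, under the stated hypothesis of at most nfm non-null values per element. The names and the triangular example term set are illustrative assumptions, not taken from the paper, which implements the scheme in hardware.

```python
# Minimal software sketch of the compact membership memory (illustrative only;
# the paper realizes this as hardware). One row per element u of the universe U,
# holding at most NFM (function-index, value) pairs of 3 + 5 = 8 bits each,
# i.e. 24 bits per row instead of the full vectorial 8 * 5 = 40 bits.

NFM = 3        # max non-null membership values per element of U
N_SETS = 8     # membership functions -> 3-bit index
N_LEVELS = 32  # truth levels -> 5-bit value
U_SIZE = 128   # universe of discourse -> 7-bit address

def build_memory(membership):
    """membership[f][u]: value in 0..31 of fuzzy set f at element u."""
    memory = []
    for u in range(U_SIZE):
        pairs = [(f, membership[f][u]) for f in range(N_SETS) if membership[f][u] > 0]
        assert len(pairs) <= NFM, "hypothesis: at most NFM non-null values per element"
        memory.append(pairs)
    return memory

def weight(memory, u, f):
    """Mimic the combinatory net: compare the rule's function index f (the
    uCOD bus value) with the stored indices; emit the value on a match, else 0."""
    for idx, val in memory[u]:
        if idx == f:
            return val
    return 0

# Example: 8 overlapping triangular membership functions over U.
tri = [[max(0, (N_LEVELS - 1) - 3 * abs(u - (16 * f + 8))) for u in range(U_SIZE)]
       for f in range(N_SETS)]
mem = build_memory(tri)
print(weight(mem, 32, 2))  # membership of element 32 in fuzzy set 2
```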


Development of Quantification Methods for the Myocardial Blood Flow Using Ensemble Independent Component Analysis for Dynamic $H_2^{15}O$ PET (동적 $H_2^{15}O$ PET에서 앙상블 독립성분분석법을 이용한 심근 혈류 정량화 방법 개발)

  • Lee, Byeong-Il;Lee, Jae-Sung;Lee, Dong-Soo;Kang, Won-Jun;Lee, Jong-Jin;Kim, Soo-Jin;Choi, Seung-Jin;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.6
    • /
    • pp.486-491
    • /
    • 2004
  • Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to consider applying this method to the analysis of dynamic myocardial $H_2^{15}O$ PET data. In this study, we quantified patients' blood flow using the ensemble ICA method. Materials and Methods: Twenty subjects underwent $H_2^{15}O$ PET scans using an ECAT EXACT 47 scanner and myocardial perfusion SPECT using a Vertex scanner. After transmission scanning, dynamic emission scans were initiated simultaneously with the injection of $555{\sim}740$ MBq $H_2^{15}O$. Hidden independent components can be extracted from the observed mixed data (PET images) by means of ICA algorithms. Ensemble learning is a variational Bayesian method that provides an analytical approximation to the parameter posterior using a tractable distribution. The variational approximation forms a lower bound on the ensemble likelihood, and the maximization of the lower bound is achieved by minimizing the Kullback-Leibler divergence between the true posterior and the variational posterior. In this study, the posterior pdf was approximated by a rectified Gaussian distribution to incorporate a non-negativity constraint, which is suitable for dynamic images in nuclear medicine. Blood flow was measured in nine regions: the apex, four areas in the mid wall, and four areas in the basal wall. Myocardial perfusion SPECT scores and angiography results were compared with the regional blood flow. Results: Major cardiac components were separated successfully by the ensemble ICA method, and blood flow could be estimated in 15 of the 20 patients. Mean myocardial blood flow was $1.2{\pm}0.40$ ml/min/g at rest and $1.85{\pm}1.12$ ml/min/g under stress. Blood flow values obtained by an operator on two different occasions were highly correlated (r=0.99). In the myocardium component image, the image contrast between the left ventricle and the myocardium was 1:2.7 on average. Perfusion reserve was significantly different between regions with and without stenosis detected by coronary angiography (P<0.01). Among the 66 segments with stenosis confirmed by angiography, the segments with a reversible perfusion decrease on perfusion SPECT showed lower perfusion reserve values in $H_2^{15}O$ PET. Conclusions: Myocardial blood flow could be estimated using an ICA method with ensemble learning. We suggest that ensemble ICA incorporating a non-negativity constraint is a feasible method for handling dynamic image sequences obtained by nuclear medicine techniques.
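The ensemble-learning step described in the abstract rests on the standard variational decomposition of the log evidence. In the usual notation (a paraphrase, with $\theta$ collecting the sources and mixing parameters and $q$ the rectified-Gaussian variational posterior):

$$\log p(X) = \underbrace{\int q(\theta)\,\log\frac{p(X,\theta)}{q(\theta)}\,d\theta}_{\text{lower bound }\mathcal{L}(q)} + \mathrm{KL}\big(q(\theta)\,\|\,p(\theta \mid X)\big).$$

Since the KL term is non-negative, maximizing the lower bound $\mathcal{L}(q)$ is exactly minimizing the Kullback-Leibler divergence between the variational and true posteriors; restricting $q$ to rectified Gaussians enforces the non-negativity appropriate to dynamic nuclear medicine images.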

Research on Perfusion CT in Rabbit Brain Tumor Model (토끼 뇌종양 모델에서의 관류 CT 영상에 관한 연구)

  • Ha, Bon-Chul;Kwak, Byung-Kook;Jung, Ji-Sung;Lim, Cheong-Hwan;Jung, Hong-Ryang
    • Journal of radiological science and technology
    • /
    • v.35 no.2
    • /
    • pp.165-172
    • /
    • 2012
  • We investigated the vascular characteristics of tumors and normal tissue using perfusion CT in a rabbit brain tumor model. A VX2 carcinoma suspension of $1{\times}10^7$ cells/ml (0.1 ml) was implanted in the brains of nine New Zealand white rabbits (weight: 2.4 kg-3.0 kg, mean: 2.6 kg). Perfusion CT was scanned once the tumors had grown to 5 mm. Tumor volume and perfusion values were quantitatively analyzed using a commercial workstation (Advantage Windows workstation, AW, version 4.2, GE, USA). The mean volume of the implanted tumors was $316{\pm}181mm^3$, and the largest and smallest tumor volumes were 497 $mm^3$ and 195 $mm^3$, respectively. All implanted tumors were single-nodular, and no intracranial metastasis was observed. On perfusion CT, cerebral blood volume (CBV) was $74.40{\pm}9.63$, $16.08{\pm}0.64$, and $15.24{\pm}3.23$ ml/100g in the tumor core, ipsilateral normal brain, and contralateral normal brain, respectively ($p{\leqq}0.05$). In cerebral blood flow (CBF), there were significant differences between the tumor core and both normal brains ($p{\leqq}0.05$), but no significant difference between the ipsilateral and contralateral normal brain ($962.91{\pm}75.96$ vs. $357.82{\pm}12.82$ vs. $323.19{\pm}83.24$ ml/100g/min). In the mean transit time (MTT), there were significant differences between the tumor core and both normal brains ($p{\leqq}0.05$), but no significant difference between the ipsilateral and contralateral normal brain ($4.37{\pm}0.19$ vs. $3.02{\pm}0.41$ vs. $2.86{\pm}0.22$ sec). In the permeability surface (PS), there were significant differences among the tumor core, ipsilateral, and contralateral normal brain ($47.23{\pm}25.45$ vs. $14.54{\pm}1.60$ vs. $6.81{\pm}4.20$ ml/100g/min) ($p{\leqq}0.05$). In the time to peak (TTP), there were no significant differences among the tumor core, ipsilateral, and contralateral normal brain. In the positive enhancement integral (PEI), there were significant differences among the tumor core, ipsilateral, and contralateral brain ($61.56{\pm}16.07$ vs. $12.58{\pm}2.61$ vs. $8.26{\pm}5.55$ ml/100g) ($p{\leqq}0.05$). In the maximum slope of increase (MSI), there were significant differences between the tumor core and both normal brains ($p{\leqq}0.05$), but no significant difference between the ipsilateral and contralateral normal brain ($13.18{\pm}2.81$ vs. $6.99{\pm}1.73$ vs. $6.41{\pm}1.39$ HU/sec). Additionally, in the maximum slope of decrease (MSD), there was a significant difference between the tumor core and the contralateral normal brain ($p{\leqq}0.05$), but not between the tumor core and the ipsilateral normal brain ($4.02{\pm}1.37$ vs. $4.66{\pm}0.83$ vs. $6.47{\pm}1.53$ HU/sec). In conclusion, VX2 tumors were successfully implanted in the rabbit brain, and the stereotactic inoculation method produced single-nodular tumors without intracranial metastasis, suitable for comparative studies between tumors and normal tissue. Therefore, perfusion CT would be a useful diagnostic tool capable of reflecting the vascularity of tumors.

Low Temperature Growth of MCN(M=Ti, Hf) Coating Layers by Plasma Enhanced MOCVD and Study on Their Characteristics (플라즈마 보조 유기금속 화학기상 증착법에 의한 MCN(M=Ti, Hf) 코팅막의 저온성장과 그들의 특성연구)

  • Boo, Jin-Hyo;Heo, Cheol-Ho;Cho, Yong-Ki;Yoon, Joo-Sun;Han, Jeon-G.
    • Journal of the Korean Vacuum Society
    • /
    • v.15 no.6
    • /
    • pp.563-575
    • /
    • 2006
  • Ti(C,N) films are synthesized by pulsed DC plasma-enhanced metal-organic chemical vapor deposition (PEMOCVD) using the metal-organic compound tetrakis(diethylamido)titanium at $200-300^{\circ}C$. To compare plasma parameters, $H_2$ and $He/H_2$ gases are used as carrier gases in this study. The effect of $N_2$ and $NH_3$ as reactive gases is also evaluated for reducing the C content of the films. Radical formation and ionization behaviors in the plasma are analyzed in situ by optical emission spectroscopy (OES) at various pulsed bias voltages and gas species. The He and $H_2$ mixture is very effective in enhancing the ionization of radicals, especially of $N_2$. Ammonia $(NH_3)$ gas also greatly reduces the formation of the CN radical, thereby considerably decreasing the C content of the Ti(C,N) films. The microhardness of the films ranges from $1,250\;Hk_{0.01}$ to $1,760\;Hk_{0.01}$ depending on gas species and bias voltage; higher hardness is obtained with $H_2$ and $N_2$ gases and a bias voltage of 600 V. Hf(C,N) films were also obtained by pulsed DC PEMOCVD from tetrakis(diethylamido)hafnium and an $N_2/He-H_2$ mixture. The depositions were carried out at temperatures below $300^{\circ}C$ and a total chamber pressure of 1 Torr while varying the deposition parameters. Increasing the nitrogen content in the plasma decreased the growth rate and contributed to amorphous components and a high carbon content in the films. In XRD analysis the dominant lattice plane was the (111) direction, and the maximum microhardness observed was $2,460\;Hk_{0.025}$ for a Hf(C,N) film grown under -600 V and a nitrogen flow ratio of 0.1. The optical emission spectra measured during the PEMOCVD growth of the Hf(C,N) films are also discussed: $N_2,\;N_2^+$, H, He, CH, and CN radicals and metal species (Hf) were detected, and the CH and CN radicals, which play an important role in the overall PEMOCVD process, increased the carbon content.

Comparison of Activity Capacity Change and GFR Value Change According to Matrix Size during 99mTc-DTPA Renal Dynamic Scan (99mTc-DTPA 신장 동적 검사(Renal Dynamic Scan) 시 동위원소 용량 변화와 Matrix Size 변경에 따른 사구체 여과율(Glomerular Filtration Rate, GFR) 수치 변화 비교)

  • Kim, Hyeon;Do, Yong-Ho;Kim, Jae-Il;Choi, Hyeon-Jun;Woo, Jae-Ryong;Bak, Chan-Rok;Ha, Tae-Hwan
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.24 no.1
    • /
    • pp.27-32
    • /
    • 2020
  • Purpose: The glomerular filtration rate (GFR) is an important indicator for evaluating renal function and monitoring the progress of renal disease. Currently, measuring GFR using the serum creatinine value and a 99mTc-DTPA (diethylenetriamine pentaacetic acid) renal dynamic scan remains useful in clinical practice. Since the Gates formula was introduced, GFR has been measured with a gamma camera whenever a 99mTc-DTPA renal dynamic scan is taken. The purpose of this paper is to measure GFR by applying the Gates formula and to examine how the activity and the matrix size affect the GFR value. Materials and Methods: Data from 5 adult patients (age = 62 ± 5; 3 males, 2 females) who had undergone a 99mTc-DTPA renal dynamic scan were analyzed. A dynamic image was obtained for 21 minutes after bolus injection of 15 mCi of 99mTc-DTPA into the patient's vein. To evaluate the glomerular filtration rate according to changes in activity and matrix size, total counts were measured over the 2-3 minute interval after setting regions of interest on both kidneys and background tissue. The distance from the detector to the table was maintained at 30 cm; to evaluate the effect of activity, the pre-syringe (PR) activity was set to 15, 20, 25, and 30 mCi, with the corresponding post-syringe (PO) activity set to 1, 5, 10, and 15 mCi. The matrix size was then changed to 32 × 32, 64 × 64, 128 × 128, 256 × 256, 512 × 512, and 1024 × 1024 to compare and evaluate the resulting values. Results: As the activity increased for a given matrix size, the difference in GFR gradually decreased, from a maximum of 52.95% to a minimum of 16.67%. The GFR differences by matrix size were small (2.4%, 0.2%, and 0.2% when changing from 128 to 256, 256 to 512, and 512 to 1024, respectively) but large at small matrices (54.3% when changing from 32 to 64 and 39.43% from 64 to 128). Finally, relative to the currently used protocol (256 × 256, PR 15 mCi, PO 1 mCi), the largest GFR difference was 82% under the PR 15 mCi / PO 1 mCi condition, and the smallest difference was 0.2% under the PR 30 mCi / PO 15 mCi condition. Conclusion: This paper confirms that, when measuring GFR with the Gates method in a 99mTc-DTPA renal dynamic scan, the GFR value is affected by changes in activity and matrix size. Therefore, each hospital should take care to apply appropriate parameters when calculating GFR from a 99mTc-DTPA renal dynamic scan.
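For reference, below is a minimal sketch of the gamma-camera GFR calculation in the spirit of the Gates (1982) method referenced above. The regression constants and the Tonnesen kidney-depth estimates are the commonly published ones and are assumptions here, not values taken from this paper.

```python
import math

MU = 0.153  # linear attenuation coefficient of 99mTc in soft tissue, cm^-1 (assumed)

def kidney_depth_cm(weight_kg, height_cm):
    """Tonnesen estimates of (right, left) kidney depth."""
    ratio = weight_kg / height_cm
    return 13.3 * ratio + 0.7, 13.2 * ratio + 0.7

def gates_gfr(pre_cts, post_cts, r_cts, r_bkg, l_cts, l_bkg, weight_kg, height_cm):
    """GFR (ml/min) from depth-corrected, background-subtracted kidney counts.
    pre_cts/post_cts: pre- and post-injection syringe counts (PR and PO above)."""
    d_r, d_l = kidney_depth_cm(weight_kg, height_cm)
    uptake = ((r_cts - r_bkg) / math.exp(-MU * d_r)
              + (l_cts - l_bkg) / math.exp(-MU * d_l)) / (pre_cts - post_cts)
    return 100.0 * uptake * 9.8127 - 6.82519  # Gates' published regression
```

Because the syringe and kidney counts enter this formula directly, changes in administered activity and in the counting statistics of the chosen matrix size propagate straight into the GFR value, consistent with the differences reported above.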

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • In addition to stakeholders such as managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults was focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables used in corporate defaults vary over time. Comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed. Grice (2001) likewise found, through Zmijewski's (1984) and Ohlson's (1980) models, that the importance of predictive variables shifts over time. However, the studies carried out in the past use static models; most of them do not consider the changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Based on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. In order to construct a bankruptcy model that is consistent across time, we first train a time series deep learning model using the data before the financial crisis (2000~2006). The parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the learning data and exhibits excellent prediction power. After that, each bankruptcy prediction model is retrained by integrating the learning data and validation data (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over nine years. The usefulness of the corporate default prediction model based on the deep learning time series algorithm is thereby demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multivariate discriminant analysis, the logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. In the case of corporate data, there are limitations of nonlinear variables, multicollinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model mitigates the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and delivers better prediction power. Through the Fourth Industrial Revolution, the current government and overseas governments are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults; it is therefore hoped that it will serve as comparative material for non-specialists beginning a study that combines financial data with deep learning time series algorithms.
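As a concrete illustration of the deep learning time series approach discussed above, here is a minimal sketch of an LSTM default-prediction model on sequences of annual financial ratios. The shapes and names are hypothetical; this is not the authors' exact architecture.

```python
# Minimal LSTM sketch: sequences of annual financial ratios -> P(default).
import torch
import torch.nn as nn

class DefaultLSTM(nn.Module):
    def __init__(self, n_ratios=10, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_ratios, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (firms, years, n_ratios)
        _, (h, _) = self.lstm(x)         # h[-1]: hidden state after the last year
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

model = DefaultLSTM()
x = torch.randn(8, 7, 10)                # e.g. 8 firms, 7 annual observations
labels = torch.zeros(8)                  # 1.0 = default, 0.0 = going concern
loss = nn.BCELoss()(model(x), labels)    # train with binary cross-entropy
loss.backward()
```

The 7-year input window here mirrors the study's 7-year training period; in practice, a Lasso-selected ratio bundle would replace the random input.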

Effectiveness Assessment on Jaw-Tracking in Intensity Modulated Radiation Therapy and Volumetric Modulated Arc Therapy for Esophageal Cancer (식도암 세기조절방사선치료와 용적세기조절회전치료에 대한 Jaw-Tracking의 유용성 평가)

  • Oh, Hyeon Taek;Yoo, Soon Mi;Jeon, Soo Dong;Kim, Min Su;Song, Heung Kwon;Yoon, In Ha;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.31 no.1
    • /
    • pp.33-41
    • /
    • 2019
  • Purpose: To evaluate the effectiveness of the jaw-tracking (JT) technique in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) for esophageal cancer by analyzing the dose volumes of surrounding normal organs, including the low-dose volume regions. Materials and Methods: A total of 27 patients who received radiation therapy for esophageal cancer using a $VitalBeam^{TM}$ (Varian Medical Systems, USA) in our hospital were selected. Using the Eclipse system (Ver. 13.6, Varian, USA), radiation treatment plans were set up with the jaw-tracking technique (JT) and the non-jaw-tracking technique (NJT) for patients with a T-shaped planning target volume (PTV) including the supraclavicular lymph nodes (SCL). PTVs were classified by whether the celiac area was included, to identify the influence on the radiation field. To compare the treatment plans, the organs at risk (OARs) were defined as both lungs, the heart, and the spinal cord, and the plans were evaluated with the conformity index (CI) and homogeneity index (HI). Portal dosimetry was performed to verify clinical applicability using an electronic portal imaging device (EPID), and gamma analysis was performed with the low-dose threshold of the radiation field as a parameter, set to 0%, 5%, and 10%. Results: All treatment plans achieved gamma pass rates of 95% with the 3 mm/3% criteria. For a threshold of 10%, both JT and NJT passed with rates of more than 95%, and both gamma passing rates decreased by more than 1% in IMRT as the low-dose threshold decreased to 5% and 0%. For JT in IMRT on a PTV without the celiac area, $V_5$ and $V_{10}$ of both lungs decreased by 8.5% and 5.3% on average, respectively, and by up to 14.7%. The $D_{mean}$ decreased by $72.3{\pm}51cGy$, and the dose reduction increased when the PTV included the celiac area. The $D_{mean}$ of the heart decreased by $68.9{\pm}38.5cGy$ and that of the spinal cord by $39.7{\pm}30cGy$. For JT in VMAT, $V_5$ decreased by 2.5% on average in the lungs, and slightly in the heart and spinal cord. In VMAT, the dose reduction from JT increased when the PTV included the celiac area. Conclusion: In radiation treatment planning for esophageal cancer, IMRT showed a significant decrease in $V_5$ and $V_{10}$ of both lungs when JT was applied, and the dose reduction was greater when the irradiated area in the low-dose field was larger. Therefore, IMRT benefits more from JT than VMAT for radiation therapy of esophageal cancer and can protect normal organs from MLC leakage and transmission doses in the low-dose field.
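The gamma analysis mentioned above compares the measured (EPID) and calculated dose distributions. In the standard Low et al. formulation (an assumption here, since the abstract does not spell it out), a measured point $\mathbf{r}_m$ passes when

$$\gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c} \sqrt{\frac{\lVert \mathbf{r}_c - \mathbf{r}_m \rVert^2}{\Delta d_M^2} + \frac{\big(D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\big)^2}{\Delta D_M^2}} \le 1,$$

with $\Delta d_M = 3$ mm and $\Delta D_M = 3\%$ for the 3 mm/3% criteria used here; the gamma passing rate is the fraction of points (above the chosen low-dose threshold) with $\gamma \le 1$.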

Implementation Strategy for the Elderly Care Solution Based on Usage Log Analysis: Focusing on the Case of Hyodol Product (사용자 로그 분석에 기반한 노인 돌봄 솔루션 구축 전략: 효돌 제품의 사례를 중심으로)

  • Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.117-140
    • /
    • 2019
  • As the aging phenomenon accelerates and various social problems related to vulnerable elderly people arise, the need for effective elderly care solutions that protect the health and safety of the elderly generation is growing. Recently, more and more people are using smart toys equipped with ICT technology for elderly care. In particular, the log data collected through smart toys are highly valuable as quantitative and objective indicators in areas such as policy-making and service planning. However, research related to smart toys has been limited to areas such as device development and validation of effectiveness; there is a dearth of research that derives insights from log data collected through smart toys and uses them for decision making. This study analyzes log data collected from a smart toy and derives insights to improve the quality of life of elderly users. Specifically, a user-profiling analysis and an elicitation of the behavior-based mechanism of change in quality of life were performed. First, in the user-profiling analysis, two important dimensions for classifying the elderly groups were derived from five factors of the elderly users' living management: 'Routine Activities' and 'Work-out Activities'. Based on these dimensions, hierarchical cluster analysis and K-Means clustering were performed to classify all elderly users into three groups (a sketch of this two-step clustering is given after this abstract). Through the profiling analysis, the demographic characteristics of each elderly group and their smart-toy usage behavior were identified. Second, stepwise regression was performed to elicit the mechanism of change in quality of life. The effects of interaction, content usage, and indoor activity on the improvement of depression and lifestyle of the elderly were identified. In addition, the user's performance evaluation of, and satisfaction with, the smart toy were identified as parameters mediating the relationship between usage behavior and change in quality of life. The specific mechanisms are as follows. First, the interaction between the smart toy and the elderly improves depression by mediating the attitude toward the smart toy: 'Satisfaction toward Smart Toy', the variable that affects the improvement of depression, depends on how users evaluate the smart toy's performance, and it is the interaction with the smart toy that positively affects this evaluation. These results can be interpreted as follows: elderly users with a desire for emotional stability interact actively with the smart toy, assess it positively, and greatly appreciate its effectiveness. Second, content usage was confirmed to have a direct effect on improving lifestyle without going through other variables: elderly users who make heavy use of the content provided by the smart toy improve their lifestyle, and this effect occurs regardless of the user's attitude toward the smart toy. Third, the log data show that a high degree of indoor activity improves both the lifestyle and the depression of the elderly. The more indoor activity, the better the lifestyle of the elderly, and this effect occurs regardless of the user's attitude toward the smart toy. In addition, elderly users with a high degree of indoor activity are satisfied with the smart toy, which leads to an improvement in depression.
However, elderly users who prefer outdoor to indoor activities, or who are less active due to health problems, are unlikely to be satisfied with the smart toy and do not obtain its depression-improving effects. In summary, based on the activities of the elderly, three groups were identified and the important characteristics of each type were described. In addition, this study sought to identify the mechanism by which the elderly's behavior with the smart toy affects their actual lives, and to derive user needs and insights.
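Below is a minimal sketch of the two-step clustering referenced in the abstract (hierarchical clustering to choose the number of groups, then K-Means). The two feature columns are hypothetical stand-ins for the paper's derived 'Routine Activities' and 'Work-out Activities' dimensions.

```python
# Two-step clustering sketch: Ward hierarchy to pick k, K-Means to assign groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.random((120, 2)))  # users x 2 dimensions

tree = linkage(X, method="ward")                  # step 1: hierarchical clustering
k = len(np.unique(fcluster(tree, t=3, criterion="maxclust")))  # cut into <= 3 groups

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)  # step 2
print(np.bincount(labels))  # group sizes, the starting point for profiling
```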

Feasibility of Mixed-Energy Partial Arc VMAT Plan with Avoidance Sector for Prostate Cancer (전립선암 방사선치료 시 회피 영역을 적용한 혼합 에너지 VMAT 치료 계획의 평가)

  • Hwang, Se Ha;NA, Kyoung Su;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.17-29
    • /
    • 2020
  • Purpose: The purpose of this work was to investigate the dosimetric impact of a mixed-energy partial arc technique on prostate cancer VMAT. Materials and Methods: This study involved prostate-only patients planned with 70 Gy in 30 fractions to the planning target volume (PTV). The femoral heads, bladder, and rectum were considered organs at risk (OARs). For this study, mixed-energy partial arcs (MEPA) were generated with the gantry angle set to 180°~230° and 310°~50° for the 6MV arc, and 130°~50° and 310°~230° for the 15MV arc. Avoidance sectors were set at gantry angles 230°~310° and 50°~130° for the first arc and 50°~310° for the second arc. The two plans were then summed, and the dosimetric parameters of each structure were analyzed: maximum dose, mean dose, D2%, homogeneity index (HI), and conformity index (CI) for the PTV, and maximum dose, mean dose, V70Gy, V50Gy, V30Gy, and V20Gy for the OARs, as well as monitor units (MU), in comparison with 6MV 1 ARC and 6MV, 10MV, and 15MV 2 ARC plans. Results: In MEPA, the maximum dose, mean dose, and D2% were lower than in the 6MV 1 ARC plan (p<0.0005). However, the average difference in maximum dose was 0.24%, 0.39%, and 0.60% (p<0.450, 0.321, 0.139) higher than the 6MV, 10MV, and 15MV 2 ARC plans, respectively, and D2% was 0.42%, 0.49%, and 0.59% (p<0.073, 0.087, 0.033) higher than the compared plans. The average difference in mean dose was 0.09% lower than the 10MV 2 ARC plan, but 0.27% and 0.12% (p<0.184, 0.521) higher than the 6MV 2 ARC and 15MV 2 ARC plans, respectively. HI was 0.064±0.006, the lowest value (p<0.005, 0.357, 0.273, 0.801) among all plans. For CI, there were no significant differences: 1.12±0.038 in MEPA vs. 1.12±0.036, 1.11±0.024, 1.11±0.030, and 1.12±0.027 in the 6MV 1 ARC and 6MV, 10MV, and 15MV 2 ARC plans, respectively. MEPA produced a significantly lower rectum dose; in particular, V70Gy, V50Gy, V30Gy, and V20Gy were 3.40, 16.79, 37.86, and 48.09, lower than in the other plans. For the bladder dose, V30Gy and V20Gy were lower than in the other plans. However, the mean doses of the femoral heads were 9.69±2.93 and 9.88±2.5, which were 2.8 Gy~3.28 Gy higher than in the other plans. The mean MU of MEPA was 19.53% lower than the 6MV 1 ARC plan and 5.7% lower than the 10MV 2 ARC plan. Conclusion: This study of prostate radiotherapy demonstrated that choosing MEPA VMAT has the potential to minimize doses to the OARs and improve homogeneity in the PTV, at the expense of a moderate increase in the maximum and mean dose to the femoral heads.
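The abstract reports HI and CI without defining them; under commonly used definitions (an assumption here, in the style of ICRU Report 83),

$$HI = \frac{D_{2\%} - D_{98\%}}{D_{50\%}}, \qquad CI = \frac{V_{RI}}{TV},$$

where $D_{x\%}$ is the dose received by $x\%$ of the PTV, $V_{RI}$ is the volume enclosed by the reference isodose, and $TV$ is the target volume. A lower HI indicates a more homogeneous PTV dose, and a CI closer to 1 indicates a more conformal plan.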

Pipetting Stability and Improvement Test of the Robotic Liquid Handling System Depending on Types of Liquid (용액에 따른 자동분주기의 분주능력 평가와 분주력 향상 실험)

  • Back, Hyangmi;Kim, Youngsan;Yun, Sunhee;Heo, Uisung;Kim, Hosin;Ryu, Hyeonggi;Lee, Guiwon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.2
    • /
    • pp.62-68
    • /
    • 2016
  • Purpose: A cyclosporine assay using a robotic liquid handling system showed deviation of its standard curve and low reproducibility of patients' results. What distinguishes this assay is that methanol is mixed with the samples and the extracts are used for the test. We therefore assumed that the abnormal results came from the use of methanol and conducted this study. The manual of the robotic liquid handling system mentions that several setting parameters can be chosen depending on the viscosity of the liquids, the size of the sampling tips, and the motor speeds, but it gives no exact guidance. This study was undertaken to confirm pipetting ability depending on the type of liquid and to investigate the proper setting parameters for optimum dispensing. Materials and Methods: Four types of liquid (water, serum, methanol, PEG 6000 (25%)) and $TSH^{125}I$ tracer (515 kBq) were used to confirm pipetting ability, and 29 specimens for the cyclosporine test were used to compare results. Eight plastic tubes were prepared for each liquid; $400{\mu}l$ of each liquid was dispensed into the 8 tubes with a multipipette, and $100{\mu}l$ of $TSH^{125}I$ tracer was added to all tubes. From the prepared samples, $100{\mu}l$ was dispensed by the robotic liquid handling system and counted, and the CV(%) was calculated for each liquid type. Then, by adjusting several setting parameters (air gap, dispense speed, delay time), the change in CV(%) was calculated to find the optimum settings. The 29 specimens were tested with three methods: (A) the manual method, (B) the robotic liquid handling system with the existing parameters, and (C) the robotic liquid handling system with the adjusted parameters. Pipetting ability for each liquid type was assessed with CV(%). Taking (A) as the reference, patients' test results were compared between (A) and (B) and between (A) and (C) and assessed with %RE (% relative error) and %Diff (% difference). Results: The CV(%) of the CPM by liquid type was 0.88 for water, 0.95 for serum, 10.22 for methanol, and 0.68 for PEG. As expected, dispensing methanol with the liquid handling system was the problem; the other liquids were handled well. Methanol dispensing was then repeated while adjusting the setting parameters. When the transport air gap was changed from 0 to 2 and 5, the CV(%) was 20.16 and 12.54; when the system air gap was changed from 0 to 2 and 5, the CV(%) was 8.94 and 1.36. With system air gap 2 and transport air gap 2 the CV(%) was 12.96, and with system air gap 5 and transport air gap 5 it was 1.33. When the dispense speed was changed from 300 to 100 the CV(%) was 13.32, and when the dispense delay was changed from 200 to 100 it was 13.55. Compared with (A), the results of (B) increased by 99.44% and the %RE was 93.59%. Compared with (A), the results of (C) (system air gap adjusted from 0 to 5) increased by 6.75% and the %RE was 5.10%. Conclusion: Adjusting the speed and delay time of aspiration and dispensing had no meaningful effect, but changing the system air gap was effective. Proper values were found by adjusting several parameters, and this affected the practical results of the assay. Active testing is needed to optimize the system, and when dispensing a new type of liquid, a proper test is required to check whether the liquid is suitable for the equipment.
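The dispensing metrics used above are simple statistics over replicate counts; a minimal sketch (with hypothetical counts) of how CV(%) and %RE might be computed:

```python
import statistics

def cv_percent(counts):
    """Coefficient of variation of replicate counts (CPM), in percent."""
    return 100.0 * statistics.stdev(counts) / statistics.mean(counts)

def re_percent(reference, measured):
    """% relative error of a result against the manual-method reference (A)."""
    return 100.0 * abs(measured - reference) / reference

methanol_cpm = [9800, 8100, 11200, 9500, 7900, 10800, 9100, 12000]  # hypothetical
print(round(cv_percent(methanol_cpm), 2))  # a high CV flags unstable dispensing
print(round(re_percent(100.0, 105.1), 2))  # e.g. 5.1 %RE vs. the manual method
```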
