
Analysis of Variation for Parallel Test between Reagent Lots in in-vitro Laboratory of Nuclear Medicine Department (핵의학 체외검사실에서 시약 lot간 parallel test 시 변이 분석)

  • Chae, Hong Joo;Cheon, Jun Hong;Lee, Sun Ho;Yoo, So Yeon;Yoo, Seon Hee;Park, Ji Hye;Lim, Soo Yeon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.23 no.2
    • /
    • pp.51-58
    • /
    • 2019
  • Purpose: In the in-vitro laboratories of nuclear medicine departments, a comparability test or parallel test is performed whenever the reagent lot changes, to confirm that results between lots are reliable. The criterion most commonly used in domestic laboratories is to compute the %difference between the results of the two reagent lots; many laboratories set the acceptance limit at less than 20% at low concentrations and less than 10% at medium and high concentrations. If a result deviates from this range, the test is considered failed and is repeated until the result falls within the acceptance range. In this study, we selected several tests performed in nuclear medicine in-vitro laboratories, analyzed their parallel-test results, and established customized %difference criteria for each test. Materials and Methods: From January to November 2018, parallel-test results for reagent lot changes were analyzed for seven items: thyroid-stimulating hormone (TSH), free thyroxine (FT4), carcinoembryonic antigen (CEA), CA-125, prostate-specific antigen (PSA), HBs-Ab, and insulin. The RIA-MAT 280 system, which adopts the immunoradiometric assay (IRMA) principle, was used for TSH, FT4, CEA, CA-125, and PSA. TECAN automated dispensing equipment and a GAMMA-10 counter were used for the insulin test, and HAMILTON automated dispensing equipment and a Cobra gamma counter were used for HBs-Ab. Lot-specific reagents, customized calibrators, and quality-control materials were used in this experiment. Results: %difference by item and control level (p > 0.05 by t-test for every item):

    Item     Control (concentration)   Max    Mean   Median   Min
    TSH      C-1 (low)                 14.8   4.4    3.7      0.0
    TSH      C-2 (middle)              10.1   4.2    3.7      0.0
    FT4      C-1 (low)                 10.0   4.2    3.9      0.0
    FT4      C-2 (high)                 9.6   3.3    3.1      0.0
    CA-125   C-1 (middle)               9.6   4.3    4.3      0.3
    CA-125   C-2 (high)                 6.5   3.5    4.3      0.4
    CEA      C-1 (low)                  9.8   4.2    3.0      0.0
    CEA      C-2 (middle)               8.7   3.7    2.3      0.3
    PSA      C-1 (low)                 15.4   7.6    8.2      0.0
    PSA      C-2 (middle)               8.8   4.5    4.8      0.9
    HBs-Ab   C-1 (middle)               9.6   3.7    2.7      0.2
    HBs-Ab   C-2 (high)                 8.9   4.1    3.6      0.3
    Insulin  C-1 (middle)               8.7   3.1    2.4      0.9
    Insulin  C-2 (high)                 8.3   3.2    1.5      0.1

In some low-concentration measurements, the %difference rose above 10% to nearly 15%, because the target value is calculated at a low concentration. In addition, when a sample was measured immediately after Standard level 6, the highest standard in the dispensing sequence, the result may have been affected by a hook effect. Overall, lot changes of quality-control material showed no significant difference (p > 0.05). Conclusion: Variation between reagent lots is not large in immunoradiometric assays, likely because the selected items have a relatively high detection rate in the immunoradiometric method and because failed runs were remeasured. In most test results the difference was less than 10%, within the acceptance range. TSH control level 1 and PSA control level 1, which have low-concentration target values, exceeded 10% more than twice, but no value approached 20%. Longer-term observation is therefore needed to obtain more homogeneous averages and laboratory-specific acceptance criteria for each item, and further studies should consider additional variables.
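As an illustration of the %difference acceptance check described above, here is a minimal Python sketch. The 20%/10% limits follow the abstract; the function names and sample results are hypothetical.

```python
def percent_difference(old_lot: float, new_lot: float) -> float:
    """%difference of the new-lot result relative to the old-lot result."""
    return abs(new_lot - old_lot) / old_lot * 100.0

def passes_parallel_test(old_lot: float, new_lot: float, low_conc: bool) -> bool:
    """Common domestic criterion: < 20% at low, < 10% at medium/high concentration."""
    limit = 20.0 if low_conc else 10.0
    return percent_difference(old_lot, new_lot) < limit

# Hypothetical TSH control results measured with two reagent lots (uIU/mL).
old_result, new_result = 0.52, 0.57
print(f"%difference = {percent_difference(old_result, new_result):.1f}%")
print("pass:", passes_parallel_test(old_result, new_result, low_conc=True))
```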

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflowing is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous treatments of information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing; existing studies have focused only on a small subset of them. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify information privacy factors in most studies, despite the limits of users' knowledge of and experience with context-aware computing technology. Because context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge of context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. A survey that assumes the participants have sufficient experience with or understanding of the technologies it presents may therefore not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal information privacy concern factors and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid this, some of the main factors found in the literature were presented to the participants. The second round took the main factors provided in the first round and fleshed them out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to do so, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identity data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, existing studies are based on privacy issues that may occur during the lifecycle of acquired user information, whereas our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to what extent users have privacy concerns.
We did not select a traditional questionnaire method because, at the time, users absolutely lacked understanding of and experience with the new technology underlying context-aware personalized services. For understanding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and sensor networks as the most important technological characteristics of context-aware personalized services. In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies, presupposing the development of context-aware technology, focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the sub-factor evaluation, additional studies are needed on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identity data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services under the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
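The concordance analysis mentioned above is commonly implemented as Kendall's coefficient of concordance (W); the abstract does not name the exact statistic, so the following sketch with hypothetical expert rankings is illustrative only.

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance.
    ranks: shape (m, n), m experts each ranking n factors (1 = most important).
    W ranges from 0 (no agreement) to 1 (perfect agreement)."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical rankings: 4 experts ranking 5 privacy-concern factors.
ranks = np.array([
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 2, 4, 3, 5],
])
print(f"Kendall's W = {kendalls_w(ranks):.2f}")
```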

Usefulness of Stomach Extension after Drinking Orange Juice in PET/CT Whole Body Scan (PET/CT 전신 영상에서 오렌지 주스(Orange Juice)를 이용한 위장 확장 영상의 유용성)

  • Cho, Seok-Won;Chung, Seok;Oh, Shin-Hyun;Park, Hoon-Hee;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.86-92
    • /
    • 2009
  • Purpose: PET/CT delineates lesions more clearly by adding anatomical information to the functional image, and it can reduce examination time by using the CT data for attenuation correction. When the stomach is contracted after fasting, physiological ¹⁸F-FDG uptake in the stomach can be misinterpreted as a malignant lesion, occasionally requiring an additional scan for confirmation. To address this shortcoming, giving patients water before the examination to distend the stomach has been attempted; however, because water empties quickly, the stomach was not sufficiently distended on the images, and patients had to drink additional water and be examined again. Gastric emptying time is known to depend on the calorie, protein, and carbohydrate content of what is ingested. In this study, we used orange juice to distend the stomach and evaluated its usefulness. Materials and Methods: PET/CT scans were obtained for a total of 150 patients from February 2008 to October 2008, divided into three groups of 50: the first group drank nothing, the second drank water, and the third drank orange juice. The non-drinking patients (25 men, 25 women) were aged 30~71 years (mean 54), the water-drinking patients (25 men, 25 women; 400 cc) were aged 28~71 years (mean 54), and the orange juice-drinking patients (25 men, 25 women; 400 cc) were aged 32~74 years (mean 56). Patients fasted for 6~8 hours before the test, and none were diabetic. ¹⁸F-FDG 370~555 MBq was injected intravenously, and imaging began after the patients rested for 1 hour; the water or orange juice was drunk just before the whole-body scan. Images were acquired from mid-femur to skull base, with a three-minute emission scan per bed, and were then reconstructed. Stomach distension was measured as vertical and horizontal lengths. Results: On coronal images, the vertical/horizontal stomach lengths were 1.20±0.50 cm / 1.4±0.53 cm in the non-drinking group, 1.67±0.63 cm / 1.65±0.77 cm in the water group, and 3.48±0.77 cm / 3.66±0.77 cm in the orange juice group. On transverse images, they were 2.03±0.62 cm / 1.69±0.68 cm in the non-drinking group, 5.34±1.62 cm / 2.45±0.72 cm in the water group, and 7.74±1.62 cm / 3.57±0.77 cm in the orange juice group. The differences in stomach distension were significant (p<0.001). The SUVs were liver 2.52±0.42 and lung 0.51±0.14 in the non-drinking group, liver 2.47±0.38 and lung 0.50±0.14 in the water group, and liver 2.47±0.38 and lung 0.50±0.14 in the orange juice group; the SUVs showed no significant differences (p>0.759). Conclusions: SUV did not differ greatly among the three groups. When patients drank orange juice or water, stomach distension was greater than with no drink, and fully distended images could be acquired.
Drinking orange juice before the examination can therefore reduce unnecessary additional stomach scans and minimize patient complaints about an uncomfortable, long fast.
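The abstract reports p<0.001 for the group differences without naming the test; a one-way ANOVA is one plausible choice for three groups. The sketch below draws synthetic normal samples matching the reported transverse-image means and SDs (n=50 per group), so it is illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical vertical stomach lengths (cm, transverse image), drawn to
# approximate the reported group means/SDs; n = 50 per group as in the paper.
no_drink = rng.normal(2.03, 0.62, 50)
water    = rng.normal(5.34, 1.62, 50)
juice    = rng.normal(7.74, 1.62, 50)

f_stat, p_val = stats.f_oneway(no_drink, water, juice)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_val:.3g}")
```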

Evaluation of Cryptosporidium Disinfection by Ozone and Ultraviolet Irradiation Using Viability and Infectivity Assays (크립토스포리디움의 활성/감염성 판별법을 이용한 오존 및 자외선 소독능 평가)

  • Park Sang-Jung;Cho Min;Yoon Je-Yong;Jun Yong-Sung;Rim Yeon-Taek;Jin Ing-Nyol;Chung Hyen-Mi
    • Journal of Life Science
    • /
    • v.16 no.3 s.76
    • /
    • pp.534-539
    • /
    • 2006
  • In the ozone disinfection unit process, using a piston-type batch reactor with continuous ozone analysis by a flow injection analysis (FIA) system, the CT values for 1-log inactivation of Cryptosporidium parvum assessed by the DAPI/PI and excystation viability assays were 1.8~2.2 mg/L·min at 25°C and 9.1 mg/L·min at 5°C, respectively. At the low temperature, the ozone requirement rises 4~5 times in order to achieve the same level of disinfection as at room temperature. In a 40 L continuous-flow pilot plant with a constant 5-minute retention time, disinfection was evaluated using excystation, DAPI/PI, and a cell infectivity method at the same time. At a CT value of about 8 mg/L·min, approximately 0.2-log inactivation of Cryptosporidium was estimated by the DAPI/PI and excystation assays, and 1.2-log inactivation by the cell infectivity assay. The difference between the DAPI/PI and excystation assays was not significant for evaluating the CT values of Cryptosporidium by ozone in either the piston-reactor or pilot-reactor experiments. However, in the pilot study there was a significant difference between the viability assays, which are based on intact cell wall structure and function, and the infectivity assay, which is based on the development of oocysts into sporozoites and merozoites. This developmental stage appears to be more sensitive to ozone oxidation than the intactness of the oocyst wall. The difference in CT values estimated by the viability assays between the two studies may come partly from underestimation of the residual ozone concentration due to manual monitoring in the pilot study, or from the difference in reactor scale (50 mL vs 40 L) and type (batch vs continuous). In the UV irradiation process, the adequate It value (UV dose) for 1-log and 2-log inactivation of Cryptosporidium was 25 mWs/cm² and 50 mWs/cm², respectively, at 25°C by DAPI/PI. At 5°C, 40 mWs/cm² was required for 1-log inactivation and 80 mWs/cm² for 2-log inactivation. The roughly 60% increase in the required It value to compensate for the 20°C decrease in temperature was attributed to the low-voltage, low-output lamp emitting weaker UV rays at lower temperatures.
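Quoting a CT per log implies first-order (Chick-Watson) kinetics, under which the predicted log inactivation scales linearly with the CT value. This small sketch uses the figures quoted above (2.0 mg/L·min is the midpoint of the 1.8~2.2 range) and is an illustration rather than the authors' calculation.

```python
def log_inactivation(ct: float, ct_per_log: float) -> float:
    """Predicted log10 reduction for a given CT value, assuming first-order
    (Chick-Watson) kinetics; ct_per_log is the CT for 1-log inactivation."""
    return ct / ct_per_log

# CT-per-log figures quoted above (DAPI/PI and excystation assays).
for label, ct_1log in [("25 C", 2.0), ("5 C", 9.1)]:
    ct = 8.0  # approximate pilot-plant CT value (mg/L*min) from the study
    print(f"{label}: CT = {ct} mg/L*min -> {log_inactivation(ct, ct_1log):.1f} log")
```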

A Study of Factors Associated with Software Developers Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.191-204
    • /
    • 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio of software (SW) developers in South Korea was 25% in the 2012 fiscal year, and for highly qualified SW developers it reaches almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of the IT industry. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Thanks to this effort, it has become easier to find young SW developers at the entry level; however, it is still hard for many IT companies to recruit highly qualified SW developers, because becoming an expert SW developer requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones. This study therefore surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014, targeting 130 SW developers working in the IT industry in South Korea. We gathered the respondents' demographic information and characteristics, the work environment of the SW industry, and the social position of SW developers. Afterwards, a regression analysis and a decision tree method were performed to analyze the data; these two widely used data mining techniques have explanatory power and are mutually complementary. We first performed a linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' up to which one can work as a SW developer was the most significant factor. We suppose that the major cause of this phenomenon is a structural problem of the IT industry in South Korea, which requires SW developers to move from development to management as they are promoted. The 'motivation' to become a SW developer and the 'personality (introverted tendency)' of a SW developer are also highly important factors associated with the job continuity intention. Next, the decision tree method, using the well-known C4.5 algorithm, was performed to extract the characteristics of highly motivated developers and less motivated ones. The results showed that 'motivation', 'personality', and 'expected age' were again important factors influencing job continuity intentions, similar to the regression results. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity; in other words, a person with a high ability to learn new technology tends to work as a SW developer for a longer period. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors influencing job continuity intentions. On the other hand, the 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service-providing company)' did not affect the job continuity intention in either method.
In this research, we measured the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts. They can also help in building SW developer fostering policy and in solving the problem of unfilled SW developer recruitment in South Korea.
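As a sketch of the decision tree step, the following fragment fits a small tree on synthetic survey-like data (the paper used C4.5; scikit-learn's CART-based tree is substituted here for illustration). The feature encodings and the label rule are invented, loosely mimicking the reported decision rules.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 130  # survey size in the paper
# Hypothetical encodings of the factors the paper found important.
X = np.column_stack([
    rng.integers(30, 61, n),  # expected age to keep working as a developer
    rng.integers(1, 6, n),    # motivation to become a developer (1-5 Likert)
    rng.integers(1, 6, n),    # ability to learn new technology (1-5 Likert)
])
# Synthetic label loosely mimicking the reported rules (not real survey data).
y = ((X[:, 0] > 45) & (X[:, 1] >= 3) & (X[:, 2] >= 3)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["expected_age", "motivation", "learning"]))
```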

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.163-176
    • /
    • 2014
  • Social media is becoming the platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs; however, measuring television ratings has so far been given little attention. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch; in a similar way, microblog users interact with each other while watching television or movies, or visiting a new place. For measuring TV ratings, some features are significant during certain hours of the day, or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Modeling the time-related characteristics of features should therefore be key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vitally necessary for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set; after excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, implying that a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the tweets have a relatively high correlation with TV ratings; further, some emoticons and newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find a time-dependency in the correlation of features between the periods before and after the broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectations for the program or their disappointment over not being able to watch it; the features highly correlated before the broadcast differ from those after it. This shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words reach their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words expressing the impossibility of watching the program show high relevance despite their negative meaning.
Understanding this time-dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to, or satisfaction with, broadcast programs using the time dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
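To illustrate the before/after-broadcast time-dependency described above, the following sketch computes a word's correlation with ratings separately for pre- and post-broadcast tweet counts. All numbers here are hypothetical, not drawn from the paper's 149 thousand-tweet data set.

```python
import numpy as np

def word_rating_correlation(word_counts, ratings):
    """Pearson correlation between a word's per-episode tweet counts and ratings."""
    return np.corrcoef(word_counts, ratings)[0, 1]

# Hypothetical per-episode data: counts of one word in pre-broadcast tweets
# (expectation) and post-broadcast tweets (reaction), against episode ratings.
ratings = np.array([10.1, 12.3, 11.0, 13.5, 12.8, 14.0])
pre     = np.array([120, 180, 150, 210, 190, 230])
post    = np.array([ 90, 100,  95, 105,  98, 110])

print(f"r(before broadcast) = {word_rating_correlation(pre, ratings):.2f}")
print(f"r(after broadcast)  = {word_rating_correlation(post, ratings):.2f}")
```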

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error; in addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, three major drawbacks can potentially degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often yield a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach to cope with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost, one of the widely used ensemble learning techniques, constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones.
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, reinforcing the training of misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of the classifiers over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
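MGM-Boost itself is not reproduced here, but the geometric mean-based accuracy it builds on is easy to illustrate: unlike arithmetic accuracy, it collapses when any single class is predicted poorly. A minimal sketch with a made-up imbalanced example:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls; robust to class imbalance,
    which is the notion MGM-Boost injects into the AdaBoost update."""
    cm = confusion_matrix(y_true, y_pred)
    recalls = np.diag(cm) / cm.sum(axis=1)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Imbalanced 3-class example: 97% plain accuracy, but poor minority recall.
y_true = np.array([0] * 90 + [1] * 5 + [2] * 5)
y_pred = np.array([0] * 90 + [0] * 3 + [1] * 2 + [2] * 5)
print(f"arithmetic accuracy     = {(y_true == y_pred).mean():.2f}")
print(f"geometric-mean accuracy = {geometric_mean_accuracy(y_true, y_pred):.2f}")
```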

Studies on the Roadside Revegetation and Landscape Reconstruction Measures (도로녹화(道路綠化) 및 도로조경기술개발(道路造景技術開発)에 관(関)한 연구(硏究))

  • Woo, Bo Myeong;Son, Doo Sik
    • Journal of Korean Society of Forest Science
    • /
    • v.48 no.1
    • /
    • pp.1-24
    • /
    • 1980
  • One of the most important basic problems for developing new techniques in the field of road landscape planting in Korea is to clarify, analyse, and evaluate the existing technical level through actual field surveys of the various kinds of planting techniques. This study is therefore aimed at a detailed grasp of the existing level of road landscape planting techniques through field investigations of executed sites. Particular effort is devoted to the detailed analysis and systematic rearrangement of such main subjects as: 1) the principles and functions of road landscape planting techniques; 2) the essential elements in planning them; 3) advanced practices in executing the planting; and 4) improved methods of maintaining plants and land as an entire system of road landscape planting techniques. Road landscape planting techniques can be described as the planting and landscaping practices that improve road function through the introduction of plants (a green environment) on and around roads. The importance of these techniques has been recognized by landscape architects and road engineers, who emphasize not only the establishment of road landscape features but also the conservation of the human living environment through the planting of suitable trees, shrubs, and other vegetation around roads. Improving present planting practices is essential for establishing beautiful road landscape features, especially in the planning, design, execution, establishment, and maintenance of plantings for environmental conservation belts, roadside trees, footpaths, median strips, traffic islands, interchanges, rest areas, and adjoining route roads.

Statistical Analysis of Operating Efficiency and Failures of a Medical Linear Accelerator for Ten Years (선형가속기의 10년간 가동률과 고장률에 관한 통계분석)

  • Ju Sang Gyu;Huh Seung Jae;Han Youngyih;Seo Jeong Min;Kim Won Kyou;Kim Tae Jong;Shin Eun Hyuk;Park Ju Young;Yeo Inhwan J.;Choi David R.;Ahn Yong Chan;Park Won;Lim Do Hoon
    • Radiation Oncology Journal
    • /
    • v.23 no.3
    • /
    • pp.186-193
    • /
    • 2005
  • Purpose: To improve the management of a medical linear accelerator, the records of operational failures of a Varian CL2100C over a ten-year period were retrospectively analyzed. Materials and Methods: The failures were classified according to the functional subunits involved, with each class rated into one of three levels depending on the operational conditions. The relationships between the failure rate and the working ratio and between the failure rate and the outside temperature were investigated. In addition, the average lifetime of the main parts and the operating efficiency over the last 4 years were analyzed. Results: Among the recorded failures (587 in total), the most frequent involved the parts related to the collimation system, including the monitor chamber, which accounted for 20% of all failures. With regard to operational conditions, second-level failures, which temporarily interrupted treatments, were the most frequent. Third-level failures, which interrupted treatment for more than several hours, were mostly caused by the accelerating subunit. The number of failures increased with the number of treatments and the operating time. The average lifetimes of the klystron and thyratron became shorter as the working ratio increased, reaching 42% and 83% of their expected values, respectively. The operating efficiency was maintained at 95% or higher, although this value decreased slightly. There was no significant correlation between the number of failures and the outside temperature. Conclusion: Maintaining detailed records of equipment problems and failures over a long period provides good knowledge of equipment function as well as the capability to predict future failures. More rigorous equipment maintenance is required for old medical linear accelerators to avoid serious failures in advance and to improve the quality of patient treatment.
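The failure-rate relationships described above can be examined with simple correlation statistics. The following sketch uses synthetic monthly records (all values hypothetical) to show the form of such an analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical monthly records: operating hours and failure counts,
# with failures generated to increase with operating time.
hours = rng.uniform(150, 300, 48)
failures = rng.poisson(hours / 40)

r, p = stats.pearsonr(hours, failures)
print(f"failures vs operating time: r = {r:.2f}, p = {p:.3g}")

# Operating efficiency = uptime / scheduled time (reported >= 95% above).
downtime_h, scheduled_h = 10.0, 220.0
print(f"operating efficiency = {(1 - downtime_h / scheduled_h) * 100:.1f}%")
```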

Analysis of Indicator Microorganism Concentration in the Rice Cultural Plot after Reclaimed Water Irrigation (하수처리수 관개후 벼재배 시험구에서 지표미생물 거동 분석)

  • Jung, Kwang-Wook;Jeon, Ji-Hong;Ham, Jong-Hwa;Yoon, Chun-Gyeong
    • Korean Journal of Ecology and Environment
    • /
    • v.37 no.1 s.106
    • /
    • pp.112-121
    • /
    • 2004
  • A study was performed to examine the effects of UV-disinfected reclaimed water on microorganism concentrations during rice culture. Four treatments, each in triplicate, were used to evaluate the changes in microorganism concentrations: stream water irrigation (STR), biofilter effluent irrigation (BE), UV-disinfected water irrigation at a dose of 6 mW·s/cm² (UV-6), and UV-disinfected water irrigation at a dose of 16 mW·s/cm² (UV-16). The indicator microorganisms of interest were total coliform (TC), fecal coliform (FC), and E. coli. The biofilter effluent from a 16-unit apartment sewage treatment plant was used as reclaimed water, with a flow-through UV disinfection system. Concentrations of indicator microorganisms in the treatment plots ranged from 10² to 10⁵ MPN/100 mL during the 24 hours after irrigation in May and June, when the initial irrigation water for transplanting preparation was biofilter effluent without UV disinfection. This implies that initial irrigation with only non-disinfected reclaimed water for puddling in a paddy field can raise health concerns, because farmers have a greater chance of physical contact with elevated concentrations of microorganisms. The concentrations of microorganisms varied widely with rainfall, and treatments using UV-disinfected water showed significantly lower concentrations than the others, within the range of a paddy field under normal surface water irrigation. The mean concentrations for STR and BE during the growing season were in the range of 4×10³ MPN/100 mL for TC and 2×10³ MPN/100 mL for FC and E. coli, while the mean concentrations for UV-6 and UV-16 were less than 1×10³ MPN/100 mL for all the indicator microorganisms. Overall, UV disinfection was thought to be a feasible and practical alternative for the agricultural reuse of secondary-level effluent in Korea.
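Indicator-organism counts like those above are often summarized by geometric means, since MPN values span orders of magnitude; the abstract does not state how its means were computed, so this sketch is illustrative only. The treatment names follow the abstract, but the counts are hypothetical.

```python
import numpy as np

def geo_mean_mpn(counts):
    """Geometric mean, a common summary for MPN/100 mL microbial counts."""
    return 10 ** np.mean(np.log10(counts))

# Hypothetical fecal-coliform counts (MPN/100 mL) over a growing season.
treatments = {
    "STR (stream water)":      [3e3, 1e3, 5e3, 2e3],
    "BE (biofilter effluent)": [2e3, 4e3, 1e3, 3e3],
    "UV-16 (disinfected)":     [2e2, 5e2, 1e2, 3e2],
}
for name, counts in treatments.items():
    print(f"{name}: {geo_mean_mpn(counts):.0f} MPN/100 mL")
```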