• Title/Abstract/Keyword: determining set


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems, v.20 no.4, pp.89-105, 2014
  • After the emergence of the Internet, social media built on highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conducting opinion mining with visual deliverables. First, we described the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose target social media. Each target medium requires a different way for analysts to gain access: open APIs, search tools, DB-to-DB interfaces, purchased content, and so on. The second phase is pre-processing to generate useful materials for meaningful analysis. If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorite, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% of market share; the firm has kept the No. 1 position in the Korean "ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blogs, forum contents and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified content into more detailed categories such as marketing features, environment, reputation, etc. In this phase, we used free software such as the TM, KoNLP, ggplot2 and plyr packages of the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, a topic word cloud, heat maps, a valence tree map, and other visualized images, to provide vivid, full-colored examples using open library software packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet or loud. The heat map shows the movement of sentiment or volume in a category-by-time matrix, with color density indicating intensity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation, since a tree map can present buzz volume and sentiment in a hierarchical structure for a certain period. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
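As a rough illustration of the analyzing phase described above, the sketch below tallies buzz volume and lexicon-based sentiment per month. It is a minimal Python stand-in, not the study's code (the case study used R packages such as TM, KoNLP, ggplot2 and plyr), and the lexicon and posts are invented placeholders rather than data from the case study.

```python
# Hedged sketch, not the study's code: a toy lexicon-based sentiment tally
# mirroring the analyzing phase, i.e. buzz volume and polarity counts per month.
# The lexicon and posts below are invented placeholders.
import re
from collections import Counter, defaultdict

POSITIVE = {"delicious", "tasty", "love", "best"}
NEGATIVE = {"salty", "bland", "worst", "disappointed"}

posts = [  # (year-month, text) -- illustrative only
    ("2014-01", "This ramen is delicious, best midnight snack."),
    ("2014-01", "Too salty for me, disappointed."),
    ("2014-02", "Love the new flavour, tasty and cheap."),
]

volume = Counter()               # buzz volume per month
polarity = defaultdict(Counter)  # positive/negative hits per month

for month, text in posts:
    volume[month] += 1
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    polarity[month]["positive"] += len(tokens & POSITIVE)
    polarity[month]["negative"] += len(tokens & NEGATIVE)

for month in sorted(volume):
    p, n = polarity[month]["positive"], polarity[month]["negative"]
    print(f"{month}: volume={volume[month]}, positive={p}, negative={n}")
```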

Individualized Determination of Lower Margin in Pelvic Radiation Field after Low Anterior Resection for Rectal Cancer Resulted in Equivalent Local Control and Radiation Volume Reduction Compared with Traditional Method (하전방 절제술을 시행한 직장암 환자에서 방사선조사 영역 하연의 개별화)

  • Park Suk Won;Ahn Yong Chan;Huh Seung Jae;Chun Ho Kyung;Kang Won Ki;Kim Dae Yong;Lim Do Hoon;Noh Young Ju;Lee Jung Eun
    • Radiation Oncology Journal, v.18 no.3, pp.194-199, 2000
  • Purpose: When determining the lower margin of the post-operative pelvic radiation therapy field according to the traditional method (recommended by Gunderson), the organs located in the low pelvic cavity and the perineum are vulnerable to unnecessary radiation. This study evaluated the effect of individualized determination of the lower margin at 2 cm to 3 cm below the anastomotic site on the failure patterns. Materials and Methods: The authors included 88 patients with modified Astler-Coller (MAC) stages B2 through C3, who received low anterior resection and post-operative pelvic radiation therapy from Sept. 1994 to May 1998 at Samsung Medical Center, Sungkyunkwan University. There were 44 male and 44 female patients, and the median age was 57 years (range: 32-81 years). A three-field technique (posterior-anterior and bilateral portals) with 6, 10, or 15 MV X-rays was used to deliver 4,500 cGy to the whole pelvis, followed by a small-field boost to the tumor bed, over 5.5 weeks. Sixteen patients received radiation therapy with traditional field margin determination, where the lower margin was set either at the lower margin of the obturator foramen or at 2 cm to 3 cm below the anastomotic site, whichever was lower. In 72 patients, the lower margin was set at 2 cm to 3 cm below the anastomotic site, irrespective of the obturator foramen, by which a reduction of radiation volume was possible in 55 patients (76%). The authors evaluated and compared the survival, local control, and disease-free survival rates of these two groups. Results: The median follow-up period was 27 months (range: 7-58 months). MAC stages were B2 in 32 (36%), B3 in 2 (2%), C1 in 2 (2%), C2 in 50 (57%), and C3 in 2 (2%) patients, respectively. The overall survival rates of all patients at 2 and 4 years were 94% and 68%, respectively, and the disease-free survival rates at 2 and 4 years were 86% and 58%, respectively. The first failure sites were local only in 4, distant only in 14, and combined local and distant in 1 patient. There was no significant difference in local control or disease-free survival rates (p=0.42, p=0.68) between the two groups with different lower-margin determination policies. Conclusion: Individualized determination of the lower margin depending on the anastomotic site led to equivalent local control and disease-free survival rates, and is expected to contribute to the reduction of unnecessary radiation-related morbidity by reducing the radiation volume, compared with the traditional method of lower margin determination.

The Changes of System Design Premises and the Structural Reforms of Korean Government S&T Development Management System (시스템 설계전제의 변화와 공공부문 과학기술발전관리시스템 구조의 개혁)

  • 노화준
    • Journal of Technology Innovation, v.5 no.2, pp.1-21, 1997
  • The objective of this paper is to consider what structural reforms of the Korean government S&T development management system might look like. Korean society is currently experiencing a drastic socio-economic transformation. The results of this transformation should be reflected in the process of determining the directions and breadth of structural reforms of the government S&T development management system, because government system design is based on premises about the socio-economic conditions under which administrative activities are performed, and socio-economic changes can in turn change those design premises. Moreover, the S&T development management system is a subsystem of the government system, so the directions of structural reform of this subsystem should be considered within the broader framework of changes in the government's development management system. For the last forty years, the Korean government S&T development management system has been based on premises including the transformation from an agrarian society to an industrial society, authoritarianism and centrally controlled institutions, and an extremely small share of private investment in total science and technology R&D. Recently, however, the premises of the Korean government S&T development management system have rapidly changed. These changes include the transformation from an industrial society to a knowledge- and information-intensive society, globalization, localization, and a relatively large share of private investment in total science and technology R&D. The basis of government reform in Korea has been the realization of performance and values through the enhancement of national competitive capacity, the attainment of lean government, decentralization and autonomy. However, the Korean government has attached symbolic value to strategic organizations representing its strong policy intention of science and technology based development. Most problems associated with the Korean government S&T development management system grew worse during the 1990s. Many people perceive that a considerable part of this problem arose because the government could not properly adapt itself to the new administrative environment and the paradigm shift in its role. First of all, the Korean government S&T development management system as a whole failed to develop an integrated vision to guide the processes of formulating science and technology development goals and developing consistent government plans concerning science and technology development. Second, most of the local governments have little organizational capacity and manpower to handle localized activities to promote science and technology in their regions. Third, the measures to coordinate and set priorities for investing resources in the development of science and technology were not effective. Fourth, the MoST has been losing its reputation as the symbol of the top policy maker's ideological commitment to promoting science and technology. Various ideas to reform the government S&T development management system have been suggested recently.
The most frequently cited ideas are as follows: (i) strengthen the functions of the MoST by supplementing strong incentive and regulatory measures; (ii) create a new Ministry of Education, Science & Technology and Research by merging the Ministry of Education and the MoST; (iii) create a new Ministry of Science & Technology and Industry; and (iv) create a National Science and Technology Policy Council under the chairmanship of the President. The four suggested alternatives have been widely discussed among the interested parties, and each has merits as well as weaknesses. The first alternative can be seen as one that cannot resolve current conflicts among various ministries concerning priority setting and resource allocation; however, it can also be seen as a way of showing the top policymaker's strong intention to emphasize science and technology based development. The second alternative gives strategic emphasis to training and supplying the qualified manpower needed for a knowledge- and information-intensive future society. This alternative is considered to be consistent with the new administrative paradigm emphasizing lean government and decentralization; however, opponents worry that the linkages and cooperative research between university and industry could be weakened. The third alternative has been adopted mostly in nations which have strong basic science research but weak industrial innovation traditions; its main weakness for Korea is that the Korean science and technology development system has no strong basic research tradition. The fourth alternative is consistent with the new administrative paradigms and the bases of government reform; however, opponents of this alternative are concerned about the intensive development of science and technology, given Korea's low potential research capabilities in science and technology development. Considering the present Korean socio-economic situation, which demands highly qualified human resources and development strategies that emphasize the accumulation of knowledge-based stocks, I would suggest the route of creating a new Ministry of Education, Science & Technology and Research by integrating education administration functions and science & technology development functions into one ministry.

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems, v.22 no.1, pp.139-157, 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and the random subspace method. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace of the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and the feature subsets of base classifiers in the ensemble. This study proposes a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
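A minimal sketch of the idea described above, under stated assumptions: a KNN random-subspace ensemble whose per-classifier k values and feature subsets are tuned by a toy genetic algorithm on a held-out fitness split. This is not the authors' implementation; the synthetic data, ensemble size, and GA settings are illustrative choices.

```python
# Hedged sketch (not the authors' code): a KNN random-subspace ensemble whose
# per-classifier k and feature subsets are tuned with a toy genetic algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=24, n_informative=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
# Split the training part again: one piece fits the KNNs, the other scores fitness
X_fit, X_hold, y_fit, y_hold = train_test_split(X_tr, y_tr, test_size=0.3, random_state=0)

N_BASE, N_FEAT, SUBSPACE = 10, X.shape[1], 12  # ensemble size, total features, subspace size

def random_member():
    """One chromosome: a (k, feature indices) pair for every base classifier."""
    return [(int(rng.integers(1, 16)), rng.choice(N_FEAT, SUBSPACE, replace=False))
            for _ in range(N_BASE)]

def predict(member, X_train, y_train, X_eval):
    votes = []
    for k, feats in member:
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_train[:, feats], y_train)
        votes.append(clf.predict(X_eval[:, feats]))
    return (np.mean(votes, axis=0) > 0.5).astype(int)  # majority vote

def fitness(member):
    return np.mean(predict(member, X_fit, y_fit, X_hold) == y_hold)

# Tiny GA: keep the best half, refill by mutating the survivors
population = [random_member() for _ in range(12)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:6]
    children = []
    for parent in survivors:
        child = [(max(1, k + int(rng.integers(-2, 3))),
                  rng.choice(N_FEAT, SUBSPACE, replace=False) if rng.random() < 0.3 else feats)
                 for k, feats in parent]
        children.append(child)
    population = survivors + children

best = max(population, key=fitness)
val_acc = np.mean(predict(best, X_tr, y_tr, X_val) == y_val)
print(f"validation accuracy of tuned KNN subspace ensemble: {val_acc:.3f}")
```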

Electrochemical Measurement of Salt Content in Soysauce and Margarine (간장 및 마가린중의 식염함량의 전기화학적 측정법)

  • Lee, Jong-Hyeok;Lee, Byeong-Ho
    • Korean Journal of Food Science and Technology, v.25 no.2, pp.105-108, 1993
  • A newly devised conductivity meter was used for the rapid and convenient determination of the salt content of soy sauce and margarine. Equations (1)-(5) were set up between the electric conductivity (x) of a 100-fold diluted soy sauce solution and the salt content (y):
y = 0.083x - 1.253 (at 15 °C) (1)
y = 0.077x - 2.062 (at 20 °C) (2)
y = 0.071x - 2.686 (at 25 °C) (3)
y = 0.066x - 3.153 (at 30 °C) (4)
y = 0.062x - 3.522 (at 35 °C) (5)
y = (-0.001139t + 0.0999)x + (-0.126t + 0.557) (temperature range: 15-35 °C) (6)
where y = salt content [%], x = conductivity [μΩ⁻¹·cm⁻¹], and t = temperature [°C]. The salt content could be estimated from equations (1)-(6) and the measured conductivity, and agreed with that determined by the conventional method within 0.27% salt content. For margarine, equation (7) was set up between the conductivity (x) and the salt content (y):
y = 0.00266x + 0.057 (at 20 °C) (7)
where y = salt content [%] and x = conductivity [μΩ⁻¹·cm⁻¹]. The salt content estimated with equation (7) and the measured conductivity agreed with that determined by the conventional method within 0.028% salt content. The electric conductivity obtained with the conductivity meter can thus serve as a valuable criterion for salt content testing of Korean soy sauce and margarine, determined within a few seconds or minutes with a handy, compact, portable meter.
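For illustration, the temperature-dependent calibration of equation (6) and the margarine calibration of equation (7) can be applied directly to a conductivity reading; the sketch below does so in Python. The sample readings are made up, not measurements from the study.

```python
# Hedged sketch: applying the paper's calibrations, eqs. (6) and (7), to estimate
# salt content from measured conductivity. The sample readings are illustrative.
def salt_content_soysauce(conductivity_uS_per_cm: float, temp_c: float) -> float:
    """Salt content [%] from the conductivity of a 100-fold diluted soy sauce sample, eq. (6)."""
    if not 15.0 <= temp_c <= 35.0:
        raise ValueError("calibration valid only for 15-35 °C")
    slope = -0.001139 * temp_c + 0.0999
    intercept = -0.126 * temp_c + 0.557
    return slope * conductivity_uS_per_cm + intercept

def salt_content_margarine(conductivity_uS_per_cm: float) -> float:
    """Salt content [%] of margarine at 20 °C, eq. (7)."""
    return 0.00266 * conductivity_uS_per_cm + 0.057

if __name__ == "__main__":
    # Illustrative readings only, not data from the study.
    print(f"soy sauce: {salt_content_soysauce(250.0, 25.0):.2f} % salt")
    print(f"margarine: {salt_content_margarine(400.0):.2f} % salt")
```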

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.23-45, 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales sites, and SNS, and dataset characteristics are also diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms. However, most do not have sufficient knowledge of what kind of classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm is appropriate for the characteristics of a dataset has been a task requiring expertise and effort. This is because the relationship between the characteristics of datasets (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class data. Therefore, the purpose of this study is to empirically analyze whether meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, meta-features of multi-class datasets were grouped into two factors (data structure and data complexity), and seven representative meta-features were selected. Among those, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration measure, in the meta-features to replace the imbalance ratio (IR). We also developed a new index, the Reverse ReLU Silhouette Score, and added it to the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), Contraceptive Method Choice) were selected. The classes of each dataset were classified using the classification algorithms selected in the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, 10-fold cross-validation was applied; oversampling of 10% to 100% was applied to each fold, and the meta-features of the dataset were measured. The meta-features selected are HHI, Number of Classes, Number of Features, Entropy, Reverse ReLU Silhouette Score, Nonlinearity of Linear Classifier, and Hub Score. F1-score was selected as the dependent variable. The results showed that the six meta-features, including the Reverse ReLU Silhouette Score and the HHI proposed in this study, have a significant effect on classification performance. (1) The HHI meta-feature proposed in this study was significant for classification performance. (2) The number of features, unlike the number of classes, has a significant positive effect on classification performance. (3) The number of classes has a negative effect on classification performance. (4) Entropy has a significant effect on classification performance. (5) The Reverse ReLU Silhouette Score also significantly affects classification performance at a significance level of 0.01. (6) The nonlinearity of linear classifiers has a significant negative effect on classification performance. In addition, the analyses by classification algorithm were consistent with these results. In the regression analysis by classification algorithm, the number of features was not significant for the Naïve Bayes algorithm, unlike for the other classification algorithms.
This study makes two theoretical contributions: (1) the two new meta-features (HHI, Reverse ReLU Silhouette Score) were shown to be significant; (2) the effects of data characteristics on classification performance were investigated using meta-features. The practical contributions are: (1) the findings can be utilized in developing a classification-algorithm recommendation system based on the characteristics of datasets; (2) many data scientists search for the optimal algorithm by repeatedly testing and adjusting algorithm parameters because the characteristics of their data differ, and in this process excessive resources are wasted on hardware, cost, time, and manpower. This study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine-learning-based systems. This paper consists of an introduction, related research, the research model, experiments, and conclusion and discussion.
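The HHI meta-feature mentioned above is the sum of squared class shares; the sketch below computes it for a toy label vector. This is a hedged illustration of the standard HHI formula applied to class proportions, and the study's exact normalization may differ.

```python
# Hedged sketch: the Herfindahl-Hirschman Index (HHI) over class proportions,
# used here as a class-imbalance meta-feature in the spirit of the study.
import numpy as np

def hhi(labels: np.ndarray) -> float:
    """Sum of squared class shares: 1/n_classes (balanced) up to 1.0 (single class)."""
    _, counts = np.unique(labels, return_counts=True)
    shares = counts / counts.sum()
    return float(np.sum(shares ** 2))

if __name__ == "__main__":
    balanced = np.array([0, 1, 2] * 100)                   # three classes, equal sizes
    skewed = np.array([0] * 270 + [1] * 20 + [2] * 10)     # strongly imbalanced
    print(f"HHI balanced: {hhi(balanced):.3f}")            # about 0.333
    print(f"HHI skewed:   {hhi(skewed):.3f}")              # closer to 1
```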

Survey on Radiotherapy Protocols for the Rectal Cancers Among the Korean Radiation Oncologists in 2002 for the Development of the Patterns of Care Study of Radiation Therapy (방사선치료 Patterns of Care Study 개발을 위한 2002년 한국 방사선종양학과 전문의들의 직장암 방사선치료 원칙 조사연구)

  • Kim, Jong-Hoon;Kim, Dae-Yong;Kim, Young-Ho;Kim, Woo-Chul;Kim, Chul-Yong;Sung, Jin-Shil;Son, Seung-Chang;Shin, Hyun-Su;An, Young-Chan;Oh, Do-Hum;Oh, One-Yong;Yu, Mi-Ryung;Yu, Hung-Jun
    • Radiation Oncology Journal, v.21 no.1, pp.44-65, 2003
  • Purpose: To conduct a nationwide survey on the principles of radiotherapy for rectal cancer, and to develop the framework of a database for the Korean Patterns of Care Study. Materials and Methods: A consensus committee was established to develop a tool for measuring the patterns in radiotherapy protocols for rectal cancer. The panel was composed of radiation oncologists from 18 hospitals in the Seoul metropolitan area. The committee developed a survey format to analyze radiation oncologists' treatment principles for rectal cancer. The survey items developed for measuring the treatment principles comprised 1) 8 eligibility criteria, 2) 20 items on staging work-ups and prognostic factors, 3) 7 items on principles of combined surgery and chemotherapy, 4) 9 items on patient set-up, 5) 19 items on determining radiation fields, 6) items on radiotherapy treatment plans, 7) 4 items on physical/laboratory examinations to monitor a patient's condition during treatment, and 8) 10 items on follow-up evaluations. These items were sent to the radiation oncologists in charge of gastrointestinal malignancies at all 48 hospitals in Korea, and 30 replies were received (63%). Results: Most of the survey items were answered without major disagreement among the repliers, but on the following items only 50% of repliers were in agreement: 1) indications for preoperative radiation, 2) use of endorectal ultrasound, CT scan, and bone scan for staging work-ups, 3) principles of combining chemotherapy with radiotherapy, 4) use of contrast material for small bowel delineation during simulation, 5) determination of field margins, and 6) use of CEA and colonoscopy for follow-up evaluations. Conclusions: The items on which considerable disagreement was shown among the radiation oncologists seemed to make no serious difference in treatment outcome, but a practical and reasonable consensus should be reached by the committee through a logical process of agreement. These items can be used as a basic database for the Patterns of Care Study, which will survey the practical radiotherapy patterns for rectal cancer in Korea.

A Preliminary Study on Setting Philosophy and Curriculum Development in Nursing Education (간호교육 철학정립 및 교육과정 개발을 위한 기초조사)

  • 정연강;김윤회;양광희;한경자;한상임
    • Journal of Korean Academy of Nursing, v.18 no.2, pp.162-188, 1988
  • The purpose of this study is to guide the direction of Korean nursing education by analyzing (1) the philosophy and objectives, (2) the curriculum, and (3) the educational environment of nursing programs. The analysis is based on data from 50 nursing schools (14 4-year colleges and 35 3-year colleges). The survey was conducted by mail from Dec. 1986 through Jan. 1987. 1) Educational philosophy and objectives: 10 4-year colleges and 8 3-year college programs have a stated curricular philosophy. The most common themes are human beings, health, nursing, nursology, nursing education, and nurses' roles in the present and in the future. 10 nursing schools described the human being as a subject who interacts with the environment physically, mentally and socially. 2 schools described health as the state of functioning well physically, mentally and socially. 13 schools described nursing as the dynamic act of maintaining and promoting the highest possible level of health. 4 schools described nursology as an applied science. 4 schools described nursing education as the process of inducing behavioral change based on individual ability. Opinions about the nurses' role differ between 4-year and 3-year colleges. Responses from 4-year colleges focus on leadership in effecting change, self-regulating and self-determining responsibilities, applying new technology, continuing education, and participation in research to further nursing knowledge. Responses from 3-year colleges focus on education in college, primary health care nursing, direct care provision and public health education. Among the 50 respondents, 40 schools have educational goals, which can be divided into two categories: one is to establish morality and the other is to develop professionalism.
2) Curriculum: The analysis of the curriculum is based only on data from the 4-year colleges, because most 3-year colleges follow the curriculum guideline set by the Ministry of Education. a) Credits in cultural subjects and in the nursing major: The average number of credits required for graduation is 154.6, and the median falls in the range of 140-149 credits. The average credit load for cultural subjects is 43.4; in detail, the averages for required and elective courses are 24.1 and 19.3 credits, respectively. The average credit load for major subjects is 111.2; in detail, the averages for required and elective courses are 100.9 and 10.4 credits, respectively. In 5 colleges, students are not offered even one elective course. b) Credits by class year: The average credits earned are 41.1 in the freshman year, 40.0 in the sophomore year, 38.3 in the junior year and 32.4 in the senior year; cultural subjects are concentrated in the early years. c) Compulsory and elective cultural subjects by institution: The credit range for compulsory cultural subjects is 7-43, with large differences among institutions. While all respondents require liberal arts as compulsory subjects, few list social science, natural science or behavioral science as required subjects; social-science-related subjects are frequently chosen as elective cultural subjects. d) Distribution of credits in cultural subjects by institution: Liberal arts subjects are taught in 20 institutions; English and physical education courses are taught in all institutions. Social science subjects are taught in 15 colleges, with Basic Psychology and Basic Sociology the most popular subjects.
Natural science subjects are taught in 7 colleges, with Biology and Chemistry the most popular subjects. e) Distribution of credits in basic major courses by institution: Most institutions select Anatomy, Microbiology, Physiology, Biochemistry and Pathology as basic major courses. f) Required and elective courses for the nursing major by institution: Subjects and credit ranges in the major vary by institution. More than half of the respondents select the following as required major subjects: (1) Adult Health Nursing and Practice (19.5 credits), (2) Mother and Child Care and Practice (8.9 credits), (3) Community Health Care and Practice (8.5 credits), (4) Psychiatric Nursing Care and Practice (8.1 credits), (5) Nursing Management and Practice (3.9 credits), and (6) Fundamentals of Nursing, Nursing Research, and Health Assessment and Practice. Three institutions select Introduction to Nursing, Rehabilitation Nursing, School Nursing, Public Health Nursing, Nursing English, Communication, and Human Development as electives in the nursing major. 3) Educational environment: a) Nursing institutions: There are forty-three 3-year colleges and seventeen 4-year colleges, 81.4% of which are private. b) Students and faculty: 19.2% of the students are in 4-year colleges and 80.8% are in 3-year colleges. In 4-year colleges, nursing faculty members rank, in order of number, assistant professor, instructor and professor; in 3-year colleges, the order is lecturer, associate professor, full-time instructor and assistant professor. In 4-year colleges, 18.8 students are allocated per nursing faculty member; in 3-year colleges, 33.1 students are allocated per nursing faculty member. c) Clinical practice: 66.7% of the 4-year colleges and 28.5% of the 3-year colleges provide over 1,201 hours of clinical practice. In clinical practice, 11.5 students are allocated per nursing faculty member in 4-year colleges and 17 in 3-year colleges. The survey shows no difference in procedures between 4-year and 3-year colleges, but 3-year colleges use a greater variety of practice sites, such as special hospitals and community health clinics. d) Audiovisual facilities: The survey shows large differences in audiovisual facilities among institutions, and 3-year colleges are less well equipped than 4-year colleges.

Evaluating the Impact of Attenuation Correction Difference According to the Lipiodol in PET/CT after TACE (간동맥 화학 색전술에 사용하는 Lipiodol에 의한 감쇠 오차가 PET/CT검사에서 영상에 미치는 영향 평가)

  • Cha, Eun Sun;Hong, Gun chul;Park, Hoon;Choi, Choon Ki;Seok, Jae Dong
    • The Korean Journal of Nuclear Medicine Technology, v.17 no.1, pp.67-70, 2013
  • Purpose: With the surge in patients with hepatocellular carcinoma, hepatic artery chemoembolization is one of the effective interventional procedures. PET/CT plays an important role in determining the presence of residual cancer cells and metastasis, and in assessing prognosis after embolization. On the other hand, lipiodol, the embolic material used in hepatic artery chemoembolization, produces artifacts in PET/CT images, and these artifacts influence quantitative evaluation. In this study, the extent of the impact of lipiodol on PET/CT images was evaluated in terms of radioactivity density and percentage difference. Materials and Methods: A 1994 NEMA phantom was scanned for 2 minutes and 30 seconds per bed after three inserts were filled with Teflon, water and lipiodol, respectively, and the remaining background volume was filled with a well-mixed radioactive solution of 20 ± 10 MBq. Phantom images were reconstructed with an iterative reconstruction method using 2 iterations and 20 subsets. Regions of interest were set on the Teflon, water and lipiodol inserts, on the region between the inserts where artifacts occur, and on the background, and the radioactivity density (kBq/ml) and the percentage difference were calculated and compared. Results: The radioactivity densities of the regions of interest for Teflon, water, lipiodol, the inter-insert artifact region, and the background were 0.09 ± 0.04, 0.40 ± 0.17, 1.55 ± 0.75, 2.5 ± 1.09, and 2.65 ± 1.16 kBq/ml, respectively (p < 0.05), a statistically significant result. The percentage difference for lipiodol was 118% compared with water, 52% compared with the background activity, and 180% compared with Teflon. Conclusion: We found that attenuation correction introduces errors in PET/CT scans performed after lipiodol injection; the radioactivity density of the lipiodol region was higher than that of the other inserts but lower than the background. Therefore, non-attenuation-corrected images should also be reviewed, and the extent of the impact of lipiodol should be taken into consideration when interpreting PET/CT images acquired after hepatic artery chemoembolization.
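The percentage differences quoted above are consistent with the symmetric percent-difference formula, i.e. the absolute difference divided by the mean of the two densities; the sketch below reproduces the reported 118%, 52% and roughly 180% figures from the reported mean densities. The formula is inferred from those numbers rather than stated explicitly in the abstract.

```python
# Hedged sketch: symmetric percent-difference between ROI radioactivity densities.
# With the paper's reported mean densities (kBq/ml) it reproduces the quoted
# 118%, 52%, and ~180% figures for lipiodol vs. water, background, and Teflon.
def percent_difference(a: float, b: float) -> float:
    """|a - b| relative to the mean of a and b, expressed in percent."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

densities = {"teflon": 0.09, "water": 0.40, "lipiodol": 1.55, "background": 2.65}

for other in ("water", "background", "teflon"):
    diff = percent_difference(densities["lipiodol"], densities[other])
    print(f"lipiodol vs {other}: {diff:.0f}%")
```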

Analysis of Terpenoids as Volatile Compound Released During the Drying Process of Cryptomeria japonica (삼나무 건조 중 발생하는 휘발성 유기화합물 Terpenoids의 분석)

  • Lee, Su-Yeon;Gwak, Ki-Seob;Kim, Seon-Hong;Lee, Jun-Jae;Yeo, Hwan-Myeong;Choi, In-Gyu
    • Journal of the Korean Wood Science and Technology, v.38 no.3, pp.242-250, 2010
  • The aim of this study was to investigate the terpenoids among the total volatile organic compounds (VOCs) released during the drying of Cryptomeria japonica, using a thermal extractor (TE). Considering the drying process of C. japonica, the temperatures of the TE were set at 27 °C, 60 °C, 80 °C, 100 °C, and 120 °C, respectively. The emission factors of VOCs and terpenoids increased as the temperature increased. The proportions of terpenoids in the VOC emission factors were 87.5%, 81.6%, 83.6%, 90.1%, and 97.3% at the above temperatures, respectively; especially at 100 °C and 120 °C, terpenoids accounted for more than 90%. δ-Cadinene showed the highest yield at each temperature, and 32 types of terpenoids were collected. The emitted terpenoids were classified into the sesquiterpene group, which consists of 15 carbons. These 32 sesquiterpenes were tested for useful bioactivity, such as antifungal activity, by the agar dilution method. They showed antifungal activity against Trichophyton rubrum, Trichophyton mentagrophytes, and Microsporum gypseum. At a concentration of 5,000 ppm, the terpenoids showed strong activity of 100% against the three fungi. At 1,000 ppm, the antifungal activities against the three fungi were 95.2%, 98.7%, and 97.3%, and the activities were somewhat reduced at a concentration of 100 ppm.