• Title/Summary/Keyword: choice set

Analysis of the Importance of Subjects to Improve the Educational Curriculum in the Radiological Science - Focused on Radiological Technologists - (방사선(학)과 교육과정 개선을 위한 교과목 중요도 분석 - 방사선사를 중심으로 -)

  • Kim, Jung-Hoon;Ko, Seong-Jin;Kang, Se-Sik;Kim, Dong-Hyun;Kim, Chang-Soo
    • Journal of radiological science and technology, v.35 no.2, pp.125-132, 2012
  • In this study, a group of experts and clinical radiological technologists was surveyed to evaluate the clinical importance of the subjects currently taught in the radiological sciences. For data collection and analysis, an open-ended questionnaire was distributed to the expert group and a multiple-choice questionnaire to the radiological technologists. The subjects were classified into nine groups for the importance analysis; in the questionnaire design, department and hospital type were set as independent variables and the nine subject groups as dependent variables. The clinical radiological technologists perceived diagnostic imaging technology and practical courses, including general radiography, CT, and MRI, as the most clinically necessary subjects, whereas the expert group placed the most weight on basic courses for the major. The results suggest that the curriculum should be revised to combine theory and practice in order to foster radiological technologists capable of adapting to the rapidly changing healthcare environment.

Exploring Learning Progressions for Global Warming: Focus on Middle School Level (지구 온난화에 대한 학습발달과정 탐색: 중학교를 중심으로)

  • Yu, Eun-Jeong;Lee, Kiyoung;Kwak, Youngsun;Park, Jaeyong
    • Journal of Science Education, v.46 no.1, pp.1-16, 2022
  • The purpose of this study is to explore learning progressions for global warming at the middle school level. To this end, we applied a construct modeling approach, covering the construct specification, item design, outcome space, and measurement model steps, from April to October 2021. To develop the student assessment items, we analyzed the 2015 revised middle school curriculum and textbooks and organized a concept hierarchy for each construct into a construct map. The assessment items were developed as multiple-choice, short-answer, and essay questions matched to the selected constructs to strengthen the linkage between constructs and items. An online assessment of the 21 items, graded with three-step criteria per item, showed that many middle school students reached the 'high' level but none fell at the 'low' level. Accordingly, the initially set lower anchor was reset to level 0, the upper anchor was lowered from level 4 to level 3, and the hypothetical learning progression for global warming was presented in the order of phenomenal, conceptual, and mechanistic understanding. The results carry implications for reorganizing the next science curriculum and improving the assessment system.

Major Class Recommendation System based on Deep learning using Network Analysis (네트워크 분석을 활용한 딥러닝 기반 전공과목 추천 시스템)

  • Lee, Jae Kyu;Park, Heesung;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.27 no.3, pp.95-112, 2021
  • In university education, the choice of major classes plays an important role in students' careers. As industries change, however, the major subjects offered by each department are diversifying and growing in number, and students find it difficult to choose classes that fit their career paths. In general, students choose classes based on experience, such as the choices of peers or the advice of seniors. This has the advantage of reflecting the general situation, but it does not reflect individual tendencies or the composition of courses already taken, and it leads to information inequality in which such knowledge is shared only among particular students. Moreover, as classes have recently been held remotely and exchanges between students have decreased, even these experience-based decisions have become harder to make. This study therefore proposes a recommendation model that suggests major classes suited to individual characteristics based on data rather than experience. Recommender systems suggest information and content (music, movies, books, images, etc.) that a particular user may find interesting; they are already widely used in services where individual tendencies matter, such as YouTube and Facebook, and are familiar from the personalized recommendations of over-the-top (OTT) media services. Taking classes is also a kind of content consumption, in that classes suited to an individual are selected from a fixed content list. Unlike other content consumption, however, the consequences of the selection are large. Music or a movie is usually consumed once and quickly, so each item carries relatively little weight and requires little deliberation; a major class is consumed over an entire semester, and each choice affects matters such as career and graduation requirements, so it demands far greater care. Given these characteristics, a recommender system in education supports decision-making that meaningfully reflects individual characteristics which experience-based decisions cannot, even though the item range is comparatively small. This study aims to realize personalized education and raise students' educational satisfaction by presenting a recommendation model for university major classes. The model uses the class history of undergraduate students at a university from 2015 to 2017, with student and major names as metadata. Class history is implicit-feedback data that records only whether content was consumed, not preferences for classes, so embedding vectors derived directly from it have low expressive power. With this issue in mind, the study proposes a Net-NeuMF model that generates student and class vectors through network analysis and uses them as the model's inputs. The model is based on the structure of NeuMF, a representative model for implicit-feedback data that uses one-hot vectors.
The input vectors are generated to represent the characteristics of students and classes through network analysis. To represent a student, each student is set as a node, and two students are connected by a weighted edge if they have taken the same class. Likewise, to represent a class, each class is set as a node, and two classes are connected if any student has taken both. Node2Vec, a representation learning method that quantifies the characteristics of each node, is then applied to these graphs. For evaluation, four indicators commonly used for recommender systems were employed, and experiments were conducted at three embedding dimensions to analyze the effect of dimension on the model. The results show better performance on the evaluation metrics, regardless of dimension, than the existing NeuMF structure with one-hot vectors. This work thus contributes by using networks of students (users) and classes (items) to increase expressiveness over one-hot embeddings, by matching the input representation to the characteristics of each structure that makes up the model, and by showing better performance on various evaluation metrics than existing methodologies.
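
For a concrete picture of the graph construction described in this abstract, here is a minimal Python sketch (not the authors' code) that builds the co-enrollment student graph and embeds it with the community node2vec package; the toy enrollment records and all hyperparameters are assumptions, and the class graph is built the same way with the roles of students and classes swapped.

```python
from collections import defaultdict
from itertools import combinations

import networkx as nx
from node2vec import Node2Vec  # pip package "node2vec"; any Node2Vec implementation works

# Toy enrollment records: (student, class) pairs. Illustrative only.
enrollments = [("s1", "c1"), ("s1", "c2"), ("s2", "c1"),
               ("s2", "c3"), ("s3", "c2"), ("s3", "c3")]

classes_by_student = defaultdict(set)
for student, cls in enrollments:
    classes_by_student[student].add(cls)

# Student graph: nodes are students; edge weight = number of classes taken in common.
G = nx.Graph()
G.add_nodes_from(classes_by_student)
for a, b in combinations(classes_by_student, 2):
    shared = len(classes_by_student[a] & classes_by_student[b])
    if shared:
        G.add_edge(a, b, weight=shared)

# Node2Vec random walks turn each student's neighborhood structure into a dense vector.
n2v = Node2Vec(G, dimensions=32, walk_length=20, num_walks=50, workers=1)
embedding = n2v.fit(window=5, min_count=1)
student_vector = embedding.wv["s1"]  # replaces the one-hot student input of NeuMF
```

These dense vectors then feed the GMF and MLP branches of a NeuMF-style network in place of one-hot inputs, which is the substitution the Net-NeuMF model describes.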

Market Structure Analysis of Automobile Market in U.S.A (미국자동차시장의 구조분석)

  • Choi, In-Hye;Lee, Seo-Goo;Yi, Seong-Keun
    • Journal of Global Scholars of Marketing Science, v.18 no.1, pp.141-156, 2008
  • Market structure analysis is a very useful tool for analyzing the competitive boundary of a brand or company. Most market structure studies, however, deal with nondurable goods such as candies and soft drinks because of the availability of data; in the field of durable goods, limited data availability and the long repurchase cycle constrain research, and these constraints weigh especially heavily in the automobile market. The purpose of this study is to analyze the structure of the automobile market based on ideas suggested by prior studies. Automobile buyers tend to move up a tier at their next purchase, which makes it impossible to analyze the market structure at the level of individual models, so we analyzed it at the brand (company) level. Consideration-set data were used for the analysis, for two reasons: first, because the repurchase cycle is so long, the brand-switching data used in analyses of nondurable goods are not available; second, as mentioned, buyers tend to trade up at the next purchase. We used survey data collected in the U.S. market in 2005 through a questionnaire, with a sample size of 8,291. Nine of the 37 brands sold in the U.S. market, together holding a market share of around 50%, were analyzed; the brands considered included BMW, Chevrolet, Chrysler, Dodge, Ford, Honda, Mercedes, and Toyota. The ratio used in the analysis was derived from the frequencies of the consideration sets; strictly speaking, consideration frequency differs from the brand-switching concept, but for convenience it was treated here like a brand-switching frequency. The study consists of two steps. The first step is to build hypothetical market structures; the second is to choose the best structure among them, for which logit analysis is usually employed. We built three hypothetical structures: type-cost, cost-type, and unstructured. Automobiles were classified into five types (sedan, SUV (Sport Utility Vehicle), pickup, minivan, and full-size van), and purchasing cost was split into two groups at the median value of $28,800. A maximum likelihood test was used to decide the best structure among them. The analysis finds that the U.S. automobile market is hierarchically structured in the form 'automobile type - purchasing cost'; that is, buyers consider function or usage first and purchasing cost next. This study has limitations in its level of analysis and its variable selection. First, only automobile type and purchasing cost were considered as purchase attributes; other attributes clearly need to be considered, and the limited attributes allowed only three hypothetical structures to be analyzed. Second, because of the data, the analysis was attempted at the brand level, whereas model-level analysis would be better since buyers consider models, not brands; more cases would be needed for a model-level study. Even so, brand-level analysis remains worthwhile for its practical meaning when considering the actual competition occurring in the real market. Third, the variable selection for building the nested logit model was limited to the available data. Despite these limitations, the importance of this study lies in attempting a market structure analysis of a durable good.
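
The paper selects among the hypothetical structures with a maximum likelihood test on nested logit models. As a much-simplified stand-in for that selection step (not the paper's specification), the sketch below scores candidate submarket partitions with a two-rate Poisson likelihood over pairwise co-consideration counts and keeps the partition with the highest log-likelihood; the brands, counts, and the Poisson model itself are illustrative assumptions.

```python
import numpy as np
from scipy.special import gammaln

# Illustrative co-consideration counts among four hypothetical brands:
# entry [i, j] = how often brands i and j appear in the same consideration set.
counts = np.array([[ 0, 40, 10,  8],
                   [40,  0, 12,  9],
                   [10, 12,  0, 35],
                   [ 8,  9, 35,  0]])

def poisson_ll(ks):
    """Maximized Poisson log-likelihood of one group of pair counts."""
    if ks.size == 0:
        return 0.0
    lam = ks.mean()  # MLE of the group's rate
    return float(np.sum(ks * np.log(lam) - lam - gammaln(ks + 1)))

def structure_ll(partition):
    """Score a submarket partition: one rate for within-submarket pairs,
    another for across-submarket pairs."""
    iu = np.triu_indices_from(counts, k=1)
    same = np.equal.outer(partition, partition)[iu]
    pair_counts = counts[iu]
    return poisson_ll(pair_counts[same]) + poisson_ll(pair_counts[~same])

candidates = {"type first":   np.array([0, 0, 1, 1]),
              "cost first":   np.array([0, 1, 0, 1]),
              "unstructured": np.array([0, 0, 0, 0])}
for name, part in candidates.items():
    print(f"{name:>12}: log-likelihood = {structure_ll(part):.1f}")
# The candidate with the highest log-likelihood is retained, mirroring the
# paper's "pick the best hypothetical structure by maximum likelihood" step.
```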

Comparison of Collimator Choice on Image Quality of I-131 in SPECT/CT (I-131 SPECT/CT 검사에서 조준기 종류에 따른 영상 비교 평가)

  • Kim, Jung Yul;Kim, Joo Yeon;Nam-Koong, Hyuk;Kang, Chun Goo;Kim, Jae Sam
    • The Korean Journal of Nuclear Medicine Technology, v.18 no.1, pp.33-42, 2014
  • Purpose: I-131 scans generally use a High Energy (HE) collimator. A Medium Energy (ME) collimator is not normally recommended because of excessive septal penetration, but it can be used to improve count-rate sensitivity at lower doses of I-131. This study evaluates I-131 SPECT/CT image quality with the HE and ME collimators and examines the feasibility of clinical application of the ME collimator. Materials and Methods: The ME and HE collimators were mounted in turn on a Siemens Symbia T16 SPECT/CT, using an I-131 point source and the NEMA NU-2 IQ phantom. Images were acquired with a Single Energy Window (SEW) and with Triple Energy Windows (TEW), with and without CTAC and scatter correction, and reconstructed with the iterative method flash 3D under different numbers of iterations and subsets. From the acquired images, the sensitivity, contrast, noise, and aspect ratio of the two collimators were compared. Results: The ME collimator was ahead of the HE collimator in sensitivity (ME: 188.18 cps/MBq; HE: 46.31 cps/MBq). For contrast, the image reconstructed with the HE collimator using TEW, 16 subsets, and 8 iterations with CTAC showed the highest contrast (TCQI = 190.64); under the same conditions the ME collimator showed lower contrast (TCQI = 66.05). The lowest aspect ratios were 1.065 for the ME collimator with SEW and CTAC (+), and 1.024 for the HE collimator with TEW and CTAC (+). Conclusion: Selecting a proper collimator is an important factor for image quality. The findings indicate that the HE collimator, generally used for I-131 scans because of the high-energy γ-rays emitted, remains the most recommendable choice for image quality. However, the ME collimator is also applicable under low-dose, low-sensitivity conditions if the energy window, matrix size, IR parameters, CTAC, and scatter correction are chosen appropriately.
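
The sensitivity figures quoted above (cps/MBq) follow from a simple ratio of measured count rate to source activity. A minimal sketch, with illustrative numbers rather than the paper's raw data:

```python
def sensitivity_cps_per_mbq(total_counts: float, acq_seconds: float, activity_mbq: float) -> float:
    """Collimator sensitivity: count rate divided by source activity."""
    return (total_counts / acq_seconds) / activity_mbq

# e.g., 1.0e6 counts acquired over 120 s from a 44.0 MBq I-131 point source
print(round(sensitivity_cps_per_mbq(1.0e6, 120.0, 44.0), 2))
# ≈ 189.39 cps/MBq, in the range reported for the ME collimator (numbers invented)
```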

Empirical Evaluation on the Size of E-Book Devices in User Comprehensive View (사용자의 이해력 관점에서 전자책 장치의 크기에 관한 실험적 평가)

  • Son, Yong-Bum;Kim, Young-Hak
    • The Journal of the Korea Contents Association, v.12 no.8, pp.167-177, 2012
  • Recently, with the rapid development of information technology, the e-book market has been growing quickly. The choice of e-book device is one of the most important elements for improving a user's comprehension. The effectiveness of e-books versus paper books has been studied previously, but research on the size of e-book devices from the standpoint of user comprehension has not been pursued actively. With this in mind, we selected currently available e-book devices of three sizes (PDA, netbook, and notebook) and conducted an experiment on which device size yields the highest user comprehension. Understanding and memory of the displayed content were set as the main factors for evaluating comprehension. We prepared in advance multiple e-book passages and sets of English words of similar difficulty, and evaluated comprehension through questions answered after each trial. Ninety undergraduate students, the group that uses e-books most widely, participated in the experiment, and the results were analyzed with the SPSS statistical package. The experiment showed that user comprehension was higher on the medium-sized device than on the large one.
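
As a rough stand-in for the SPSS analysis mentioned above, a one-way ANOVA comparing comprehension scores across the three device sizes could look like the following sketch; the scores are invented for illustration and are not the study's data.

```python
from scipy import stats

# Hypothetical comprehension scores per device size (illustrative numbers).
pda      = [62, 58, 65, 60, 57]   # small display
netbook  = [74, 71, 69, 76, 72]   # medium display
notebook = [66, 70, 64, 68, 65]   # large display

f, p = stats.f_oneway(pda, netbook, notebook)
print(f"F = {f:.2f}, p = {p:.4f}")  # p < .05 would indicate comprehension differs by size
```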

The guideline for choosing the right-size of tree for boosting algorithm (부스팅 트리에서 적정 트리사이즈의 선택에 관한 연구)

  • Kim, Ah-Hyoun;Kim, Ji-Hyun;Kim, Hyun-Joong
    • Journal of the Korean Data and Information Science Society, v.23 no.5, pp.949-959, 2012
  • This article seeks the size of decision tree that performs best in a boosting algorithm. We first defined the tree size D as the depth of a decision tree, and then compared the performance of boosting with different tree sizes experimentally. Although the usual practice is to keep the tree size in boosting small, we found that the choice of D has a significant influence on the performance of boosting. Furthermore, D needs to be sufficiently large for some datasets. The experimental results show that an optimal D exists for each dataset and that choosing the right D is important for improving the performance of boosting. We also sought a model for estimating the right D for a given dataset, using variables that describe the nature of the dataset. The suggested model indicates that the optimal tree size D can be estimated from the error rate of a stump tree, the number of classes, the depth of a single tree, and the Gini impurity.
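
The depth-sweep experiment described here can be sketched with scikit-learn as a stand-in for the paper's setup; the dataset, base learner, and depth grid below are assumptions, not the authors' configuration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Sweep the tree size D (depth of the base tree) and compare boosting performance.
for depth in (1, 2, 4, 8):
    base = DecisionTreeClassifier(max_depth=depth)
    clf = AdaBoostClassifier(base, n_estimators=200, random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"D = {depth}: cross-validated accuracy = {score:.3f}")
# The D with the best score is the "right size" for this dataset.
```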

Convergence Analysis of the Least Mean Fourth Adaptive Algorithm (최소평균사승 적응알고리즘의 수렴특성 분석)

  • Cho, Sung-Ho;Kim, Hyung-Jung;Lee, Jong-Won
    • The Journal of the Acoustical Society of Korea, v.14 no.1E, pp.56-64, 1995
  • The least mean fourth (LMF) adaptive algorithm is a stochastic gradient method that minimizes the error in the mean-fourth sense. Despite its potential advantages, the algorithm is much less popular in practice than the conventional least mean square (LMS) algorithm. This seems partly because the analysis of the LMF algorithm is much more difficult than that of the LMS algorithm, so comparatively little is known about its behavior. In this paper, we explore the statistical convergence behavior of the LMF algorithm when the input to the adaptive filter is zero-mean, wide-sense stationary, and Gaussian. Under a system identification mode, a set of nonlinear evolution equations characterizing the mean and mean-squared behavior of the algorithm is derived. A condition for convergence is then found, and it turns out that the convergence of the LMF algorithm depends strongly on the choice of initial conditions. The performance of the LMF algorithm is compared with that of the LMS algorithm, and the mean convergence of the LMF algorithm is observed to be much faster when the two algorithms are designed to achieve the same steady-state mean-squared estimation error.
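
The LMF update follows from the gradient of E[e^4]: the error enters the weight update cubed, where LMS uses it linearly. A minimal system-identification sketch under the Gaussian-input setting described above; the unknown system, step sizes, and signal lengths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8, 0.1])   # unknown system to identify
N, M = 5000, len(w_true)
x = rng.standard_normal(N)                 # zero-mean Gaussian input
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)  # desired signal

def adapt(error_fn, mu):
    """Generic stochastic-gradient adaptive filter; error_fn shapes the update."""
    w = np.zeros(M)
    for n in range(M, N):
        u = x[n - M + 1:n + 1][::-1]       # current input vector
        e = d[n] - w @ u                   # a-priori estimation error
        w = w + mu * error_fn(e) * u
    return w

w_lms = adapt(lambda e: e,    mu=0.01)     # LMS: minimizes E[e^2]
w_lmf = adapt(lambda e: e**3, mu=0.005)    # LMF: minimizes E[e^4], error enters cubed
print(np.round(w_lms, 3), np.round(w_lmf, 3))  # both should approach w_true
```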

Forecasting the Precipitation of the Next Day Using Deep Learning (딥러닝 기법을 이용한 내일강수 예측)

  • Ha, Ji-Hun;Lee, Yong Hee;Kim, Yong-Hyuk
    • Journal of the Korean Institute of Intelligent Systems, v.26 no.2, pp.93-98, 2016
  • For accurate precipitation forecasts, the choice of weather factors and prediction method is very important. Machine learning has recently been widely used for precipitation forecasting, and the artificial neural network, one machine learning technique, has shown good performance. In this paper, we suggest a new method for forecasting precipitation using the DBN, a deep learning technique. A DBN has the advantage that its initial weights are set by unsupervised learning, which compensates for a weakness of artificial neural networks. We used past precipitation, temperature, and parameters of the motion of the sun and moon as features for forecasting precipitation. The dataset consists of observations measured over 40 years at AWS sites in Seoul. Experiments were based on 8-fold cross validation. The model outputs precipitation probabilities on the test set, so a threshold was applied to decide precipitation, and CSI and Bias were used to indicate forecast accuracy. Our experimental results show that the DBN performed better than an MLP.
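
CSI and Bias are standard contingency-table verification scores for rain/no-rain forecasts. A minimal sketch with illustrative counts; the thresholding of the model's output probabilities is assumed to have already produced binary forecasts.

```python
def csi_and_bias(hits: int, misses: int, false_alarms: int):
    """Verification scores from a rain/no-rain contingency table."""
    csi = hits / (hits + misses + false_alarms)     # Critical Success Index, 1.0 is perfect
    bias = (hits + false_alarms) / (hits + misses)  # frequency Bias, 1.0 means unbiased
    return csi, bias

print(csi_and_bias(hits=42, misses=18, false_alarms=25))  # counts invented for illustration
```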

Accuracy of Imputation of Microsatellite Markers from BovineSNP50 and BovineHD BeadChip in Hanwoo Population of Korea

  • Sharma, Aditi;Park, Jong-Eun;Park, Byungho;Park, Mi-Na;Roh, Seung-Hee;Jung, Woo-Young;Lee, Seung-Hwan;Chai, Han-Ha;Chang, Gul-Won;Cho, Yong-Min;Lim, Dajeong
    • Genomics & Informatics, v.16 no.1, pp.10-13, 2018
  • Until now, microsatellite (MS) markers have been a popular choice for parentage verification. Recently, many countries have moved, or are in the process of moving, from MS markers to single nucleotide polymorphism (SNP) markers for parentage testing, and FAO-ISAG has proposed a panel of 200 SNPs to replace MS markers in parentage verification. In many countries, however, most animals have so far been genotyped with MS markers, and a sudden shift to SNP markers would render those animals' data useless. As the National Institute of Animal Science in South Korea plans to move from the standard ISAG-recommended MS markers to SNPs, it faces the dilemma of excluding older animals genotyped with MS markers. This study was performed to facilitate the shift from MS to SNPs so that existing animals with MS data can still be used for parentage verification. We imputed MS markers from the SNPs within the 500-kb region on either side of each MS marker. This method gives laboratories an easy way to combine data from the old and current sets of animals, and it is a cost-efficient alternative to genotyping with additional markers. We used 1,480 Hanwoo animals with both MS and SNP data to impute genotypes in the validation animals, and compared imputation accuracy between the BovineSNP50 and BovineHD BeadChips. Genotype concordances of 40% and 43% were observed for the BovineSNP50 and BovineHD BeadChips, respectively.
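
The genotype concordance quoted at the end is the fraction of imputed MS genotypes that exactly match the observed ones. A minimal sketch; the unordered-allele-pair data layout is an assumption, not the paper's file format.

```python
def concordance(observed, imputed):
    """observed, imputed: per-animal MS genotypes as (allele1, allele2) pairs.
    Genotypes match if they carry the same alleles, regardless of order."""
    matches = sum(set(o) == set(i) for o, i in zip(observed, imputed))
    return matches / len(observed)

obs = [(120, 124), (118, 118), (120, 122)]  # illustrative allele sizes
imp = [(120, 124), (118, 120), (122, 120)]
print(f"{concordance(obs, imp):.0%}")  # 67% here; the paper reports 40% and 43%
```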