• Title/Summary/Keyword: domain model

An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon;Jung, Yeojin;Jung, Jaekwon;Park, Dohyung
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.17-35 / 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period of time. Developing precise forecasting models is considered important because corporations can make strategic decisions about new markets based on the future demand estimated by the models. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand when a market is in its early stage. Among them, the Bass model, which explains demand through two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to ensure reliable results. In the beginning of a new market, however, observations are not sufficient for the models to precisely estimate the market's future demand. For this reason, demand patterns of the most adjacent markets are often used as references in such cases. Reference markets can be those whose products are developed with the same categorical technologies. A market's demand may be expected to follow a pattern similar to that of a reference market when the adoption pattern of a product in the market is determined mainly by the technology related to the product. However, this process does not always yield satisfactory results because the judged similarity between markets depends on intuition and/or experience. There are two major drawbacks that human experts cannot effectively handle in this approach. One is the abundance of candidate reference markets to consider, and the other is the difficulty of calculating the similarity between markets. First, there can be too many markets to consider when selecting reference markets. Mostly, markets in the same category of an industrial hierarchy can serve as reference markets because they are usually based on similar technologies. However, markets can be classified into different categories even if they are based on the same generic technologies, so markets in other categories also need to be considered as potential candidates. Second, even domain experts cannot consistently calculate the similarity between markets with their own qualitative standards. This inconsistency implies missing adjacent reference markets, which may lead to imprecise estimation of future demand. Even when no reference markets are missing, the new market's parameters can hardly be estimated from the reference markets without quantitative standards. For this reason, this study proposes a case-based expert system that helps experts overcome these drawbacks in discovering reference markets. First, this study proposes the use of the Euclidean distance measure to calculate the similarity between markets. Based on their similarities, markets are grouped into clusters. Then, missing markets with the characteristics of each cluster are searched for, and potential candidate reference markets are extracted and recommended to users. After iterating these steps, definite reference markets are determined according to the user's selection among the candidates. Finally, the new market's parameters are estimated from the reference markets. Two techniques are used in this procedure: one is the clustering technique of data mining, and the other is the content-based filtering technique of recommender systems. The proposed system, implemented with these techniques, can determine the most adjacent markets based on whether a user accepts the candidate markets.
Experiments were conducted with five ICT experts to validate the usefulness of the system. In the experiments, the experts were given a list of 16 ICT markets whose parameters were to be estimated. For each market, the experts first estimated its growth curve parameters by intuition alone, and then with the system. A comparison of the experimental results shows that the estimated parameters are closer when the experts use the system than when they guess them without it.
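
As context for the growth curve estimation discussed above, the following is a minimal sketch of fitting the three Bass model parameters (market potential m, innovation coefficient p, imitation coefficient q) to early cumulative demand with SciPy. The demand numbers are hypothetical; the paper's expert system would instead transfer parameters from reference markets chosen via clustering and content-based filtering.

```python
# Hedged sketch: Bass model parameter estimation from early observations.
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions N(t) under the Bass diffusion model."""
    return m * (1.0 - np.exp(-(p + q) * t)) / (1.0 + (q / p) * np.exp(-(p + q) * t))

t = np.arange(1, 9)                                       # 8 early periods (hypothetical)
sales = np.array([12, 30, 55, 90, 130, 160, 175, 180])    # hypothetical cumulative demand

# Initial guesses: market potential near twice the last observation, typical p ~ 0.03, q ~ 0.4
(m_hat, p_hat, q_hat), _ = curve_fit(
    bass_cumulative, t, sales, p0=[sales[-1] * 2, 0.03, 0.4], maxfev=10000
)
print(f"m={m_hat:.1f}, p={p_hat:.4f}, q={q_hat:.4f}")
```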

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.85-109 / 2018
  • A recommender system recommends the items a customer is expected to purchase in the future based on his or her previous purchase behavior. It has served as a tool for realizing one-to-one personalization in e-commerce. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, are designed to generate the recommendation list using a single criterion, the 'overall rating'. However, this has critical limitations in understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect feedback from their customers in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints. Moreover, multidimensional ratings are easy to handle and analyze because they are quantitative. However, recommendation using multicriteria ratings also has the limitation that it may omit detailed information about a user's preference, because in most cases it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommender system that selectively uses the results from 'traditional CF' and 'CF using multicriteria ratings'. The proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, the system is designed to use traditional CF with overall ratings for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset concerning the recommendation of POIs (points of interest). Personalized POI recommendation is receiving more attention as the popularity of location-based services such as Yelp and Foursquare increases. The dataset was collected from university students via a Web-based online survey system. Using the survey system, we collected overall ratings as well as ratings for each criterion for 48 POIs located near K university in Seoul, South Korea. The criteria included 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 items (80%) were used as the training dataset, and the remaining 10 items (20%) were used as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance to that of two comparison models, traditional CF and CF with multicriteria ratings. The performance of the recommender systems was evaluated using two metrics, average MAE (mean absolute error) and precision-in-top-N. Precision-in-top-N represents the percentage of truly high overall ratings among the N items that the model predicted would be most relevant for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that the proposed system (avg. MAE = 0.584) outperformed traditional CF (avg. MAE = 0.591) as well as multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our dataset, which contradicts the results of most previous studies.
This result supports the premise of our study that people have two different types of preference schemes, holistic and composite. Besides MAE, the proposed system outperformed all comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7. The results of paired-samples t-tests show that, in terms of average MAE, the proposed system outperformed traditional CF at the 10% significance level and multicriteria CF at the 1% significance level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
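
To make the evaluation concrete, here is a minimal sketch of two building blocks mentioned above: a memory-based CF prediction on overall ratings and the reported metrics (MAE and precision-in-top-N). The data, the neighbourhood size, and the routing of users between the two CF variants are assumptions; the paper does not publish its exact implementation.

```python
# Hedged sketch: user-based CF prediction plus MAE and precision-in-top-N.
import numpy as np

def predict(ratings, user, item, k=3):
    """Predict ratings[user, item] from the k most similar users who rated the item."""
    mask = ratings[:, item] > 0            # users who rated the item (0 = not rated)
    mask[user] = False
    if not mask.any():
        return np.nan
    u = ratings[user]
    # cosine similarity between the target user and each candidate neighbour
    sims = ratings[mask] @ u / (np.linalg.norm(ratings[mask], axis=1) * np.linalg.norm(u) + 1e-9)
    top = np.argsort(sims)[-k:]
    weights = sims[top]
    return float(ratings[mask][top, item] @ weights / (weights.sum() + 1e-9))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def precision_in_top_n(true_scores, pred_scores, n, threshold=4):
    """Share of truly high overall ratings among the N items predicted as most relevant."""
    top_idx = np.argsort(pred_scores)[::-1][:n]
    return float(np.mean(np.asarray(true_scores)[top_idx] >= threshold))

# tiny usage example with a synthetic user-item matrix
R = np.array([[5, 3, 0, 4],
              [4, 0, 4, 5],
              [1, 1, 5, 2],
              [0, 3, 4, 0]], dtype=float)
print(round(predict(R, user=0, item=2), 2))
```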

Factors Associated with Care Burden among Family Caregivers of Terminally Ill Cancer Patients (말기암환자 가족 간병인의 간병 부담과 관련된 요인)

  • Lee, Jee Hye;Park, Hyun Kyung;Hwang, In Cheol;Kim, Hyo Min;Koh, Su-Jin;Kim, Young Sung;Lee, Yong Joo;Choi, Youn Seon;Hwang, Sun Wook;Ahn, Hong Yup
    • Journal of Hospice and Palliative Care / v.19 no.1 / pp.61-69 / 2016
  • Purpose: It is important to alleviate the care burden on terminal cancer patients and their families. This study investigated the factors associated with care burden among family caregivers (FCs) of terminally ill cancer patients. Methods: We analyzed data from 289 FCs of terminal cancer patients who were admitted to the palliative care units of seven medical centers in Korea. Care burden was assessed using the Korean version of the Caregiver Reaction Assessment (CRA) scale, which comprises five domains. A multivariate logistic regression model with stepwise variable selection was used to identify factors associated with care burden. Results: Diverse associated factors were identified in each CRA domain. Emotional factors had a broad influence on care burden. FCs with emotional distress were more likely to experience changes to their daily routine (adjusted odds ratio (aOR), 2.54; 95% confidence interval (CI), 1.29~5.02), lack of family support (aOR, 2.27; 95% CI, 1.04~4.97), and health problems (aOR, 5.44; 95% CI, 2.50~11.88). Family functionality clearly reflected a lack of support, and severe family dysfunction was linked to financial problems as well. FCs without religion or comorbid conditions felt more burdened. The caregiving duration and daily caregiving hours significantly predicted FCs' lifestyle changes and physical burden. FCs who were employed, had weak social support, or could not visit frequently had low self-esteem. Conclusion: This study indicates that understanding FCs' emotional status and family functioning is helpful in assessing their care burden. Efforts are also needed to lessen their financial burden through social support systems.
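
For readers unfamiliar with how the adjusted odds ratios above are derived, the following is a minimal sketch of a multivariate logistic regression reported as aORs with 95% confidence intervals. The synthetic data and variable names are illustrative only and do not reproduce the study's stepwise selection.

```python
# Hedged sketch: adjusted odds ratios with 95% CIs from a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "emotional_distress": rng.integers(0, 2, 289),   # hypothetical binary predictor
    "daily_care_hours":   rng.uniform(1, 16, 289),   # hypothetical continuous predictor
    "employed":           rng.integers(0, 2, 289),
})
df["high_burden"] = rng.integers(0, 2, 289)          # hypothetical binary outcome

X = sm.add_constant(df[["emotional_distress", "daily_care_hours", "employed"]])
fit = sm.Logit(df["high_burden"], X).fit(disp=0)

odds_ratios = pd.DataFrame({
    "aOR":      np.exp(fit.params),
    "CI 2.5%":  np.exp(fit.conf_int()[0]),
    "CI 97.5%": np.exp(fit.conf_int()[1]),
})
print(odds_ratios.drop("const"))
```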

A Characterization of Oil Sand Reservoir and Selections of Optimal SAGD Locations Based on Stochastic Geostatistical Predictions (지구통계 기법을 이용한 오일샌드 저류층 해석 및 스팀주입중력법을 이용한 비투멘 회수 적지 선정 사전 연구)

  • Jeong, Jina;Park, Eungyu
    • Economic and Environmental Geology / v.46 no.4 / pp.313-327 / 2013
  • In this study, three-dimensional geostatistical simulations of the McMurray Formation, the largest oil sand reservoir in the Athabasca area, Canada, were performed, and optimal sites for steam-assisted gravity drainage (SAGD) were selected based on the predictions. In the selection, factors related to the vertical extendibility of the steam chamber were considered as the criteria for an optimal site. For the predictions, 110 borehole data acquired from the study area were analyzed in the Markovian transition probability (TP) framework, and three-dimensional distributions of the composing media were predicted stochastically with an existing TP-based geostatistical model. The potential of a specific medium at a position within the prediction domain was estimated from the ensemble probability over multiple realizations. From the ensemble map, the cumulative thickness of the permeable media (i.e., Breccia and Sand) was analyzed, and the locations with the highest potential for SAGD applications were delineated. As a supplementary criterion for an optimal SAGD site, the mean vertical extension of a unit of permeable media was also delineated through transition-rate-based computations. The mean vertical extension of the permeable media shows rough agreement with the cumulative thickness in its general distribution. However, the two distributions show distinctive disagreement at a few locations where the cumulative thickness is higher due to highly alternating juxtaposition of the permeable and less permeable media. This observation implies that the cumulative thickness alone may not be a sufficient criterion for an optimal SAGD site, and that the mean vertical extension of the permeable media needs to be considered jointly for a sound selection.
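
As a rough illustration of the ensemble step described above, here is a minimal sketch of deriving a permeability probability map and the cumulative thickness of permeable media from multiple stochastic facies realizations. The grid dimensions, facies codes, and random realizations are hypothetical stand-ins for the TP-based geostatistical output used in the paper.

```python
# Hedged sketch: ensemble probability and cumulative permeable thickness from realizations.
import numpy as np

N_REAL, NZ, NY, NX = 50, 40, 20, 20      # realizations, vertical cells, y, x (hypothetical)
CELL_DZ = 1.0                            # vertical cell size in metres (assumed)
PERMEABLE = [1, 2]                       # hypothetical facies codes: 1 = Breccia, 2 = Sand

rng = np.random.default_rng(42)
realizations = rng.integers(0, 4, size=(N_REAL, NZ, NY, NX))   # 4 facies codes

# Ensemble probability that a cell is permeable, averaged over realizations
is_perm = np.isin(realizations, PERMEABLE)
prob_perm = is_perm.mean(axis=0)                     # shape (NZ, NY, NX)

# Expected cumulative thickness of permeable media per (y, x) column
cum_thickness = prob_perm.sum(axis=0) * CELL_DZ      # shape (NY, NX)

# Candidate SAGD locations: columns with the largest expected permeable thickness
flat_order = np.argsort(cum_thickness, axis=None)[::-1][:5]
best = np.dstack(np.unravel_index(flat_order, cum_thickness.shape))[0]
print("top-5 (y, x) candidates:\n", best)
```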

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.1-16 / 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to the statistics of the World Health Organization, approximately 1.24 million deaths occurred on the world's roads in 2010. In order to reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, driver training programs, and so on. Records on traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationships between traffic accidents and related factors, including vehicle design, road design, weather, driver behavior, etc. Insight derived from such analyses can be used in accident prevention. Traffic accident data mining is the activity of finding useful knowledge about such relationships that is not yet well known and that users may be interested in. Many studies on mining accident data have been reported over the past two decades. Most of them focused mainly on predicting accident risk from accident-related factors. Supervised learning methods such as decision trees, logistic regression, k-nearest neighbors, and neural networks are used for this prediction. However, the prediction models derived from these algorithms are too complex for humans to understand, because the main purpose of these algorithms is prediction, not explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into several groups, but the derived groups themselves are still not easy for humans to understand, so additional analytic work is necessary. Rule-based learning methods are adequate when we want to derive a comprehensible form of knowledge about the target domain. They derive a set of if-then rules that represent relationships between the target feature and other features. Rules are fairly easy for humans to understand and can therefore help provide insight and comprehensible results. Association rule learning methods and subgroup discovery methods are representative rule-based learning methods for descriptive tasks. These two algorithms have been used in a wide range of areas, from transaction analysis and accident data analysis to the detection of statistically significant patient risk groups and the discovery of key persons in social communities. We use both the association rule learning method and the subgroup discovery method to discover useful patterns from a traffic accident dataset consisting of many features, including the driver's profile, the location of the accident, the type of accident, vehicle information, violations of regulations, and so on. The association rule learning method, which is an unsupervised learning method, searches for frequent item sets in the data and translates them into rules. In contrast, the subgroup discovery method is a supervised learning method that discovers rules about user-specified concepts satisfying a certain degree of generality and unusualness. Depending on which aspect of the data we focus our attention on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule learning algorithms. After a set of rules is derived, some postprocessing steps are taken to make the rule set more compact and easier to understand by removing uninteresting or redundant rules.
We conducted a set of experiments mining our traffic accident data in both unsupervised mode and supervised mode to compare these rule-based learning algorithms. The experiments with the traffic accident data reveal that association rule learning, in its purely unsupervised mode, can discover some hidden relationships among the features. Under a supervised learning setting with a combinatorial target feature, however, the subgroup discovery method finds good rules much more easily than the association rule learning method, which requires a lot of effort to tune its parameters.
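
To illustrate the unsupervised association-rule step described above, here is a minimal sketch using the Apriori implementation from the mlxtend library on one-hot encoded accident records. The library choice, feature names, and toy data are assumptions, not the paper's actual setup.

```python
# Hedged sketch: frequent itemsets and association rules on toy accident data.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

accidents = pd.DataFrame({
    "night":         [1, 0, 1, 1, 0, 1],   # hypothetical one-hot features
    "wet_road":      [1, 1, 0, 1, 0, 1],
    "young_driver":  [0, 1, 1, 1, 0, 0],
    "severe_injury": [1, 0, 1, 1, 0, 1],
}).astype(bool)

frequent = apriori(accidents, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```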

The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan;Jung, Woo-Jin;Lee, Sang-Yong Tom
    • Asia pacific journal of information systems / v.24 no.2 / pp.233-253 / 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields, including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, stock markets are driven largely by new information rather than by present and past prices; since such information is unpredictable, stock prices follow a random walk. Despite these theories, Schumaker [2010] asserted that people keep trying to predict the stock market using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include percolation methods, log-periodic oscillations, and wavelet transforms to model future prices. Examples of artificial intelligence approaches that deal with optimization and machine learning are genetic algorithms, support vector machines (SVM), and neural networks. Statistical approaches typically predict the future using past stock market data. Recently, financial engineers have started to predict stock price movement patterns using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs. Through word-of-mouth in SNS, people share product usage experiences, subjective feelings, and the accompanying sentiment or mood with others. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is a domain of data mining that extracts public opinions expressed in SNS. There have been many studies on opinion mining from Web sources such as product reviews, forum posts, and blogs. In relation to this literature, we try to understand the effects of firms' SNS exposure on stock prices in Korea. Similarly to Bollen et al. [2011], we empirically analyze the impact of SNS exposure on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions in Twitter and blogs using natural language processing and analysis tools. It collects the sentences circulating on Twitter in real time, breaks them down into word units, and then extracts keywords. In this study, we classify firms' SNS exposure into two groups: positive and negative. To test the correlation and causal relationship between SNS exposure and stock price returns, we first collected the stock prices of 252 firms and the KRX100 index from the Korea Stock Exchange (KRX) from May 25, 2012 to September 1, 2012. We also gathered the public attitudes (positive, negative) toward these firms from Social Metrics over the same period. We conducted regression analysis between stock prices and the number of SNS exposures. Having checked the correlation between the two variables, we performed a Granger causality test to determine the direction of causation between them. The results show that the number of total SNS exposures is positively related to stock market returns. The number of positive mentions also has a positive relationship with stock market returns. Conversely, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant. This means that the impact of positive mentions is statistically larger than the impact of negative mentions.
We also investigate whether these impacts are moderated by industry type and firm size. We find that the impact of SNS exposure is larger for IT firms than for non-IT firms, and larger for small firms than for large firms. The results of the Granger causality test show that changes in stock price returns are caused by SNS exposures, while causation in the other direction is not significant. Therefore, the relationship between SNS exposure and stock prices has a unidirectional causality: the more a firm is exposed in SNS, the more likely its stock price is to increase, while stock price changes may not cause more SNS mentions.
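
The two analysis steps above (a regression of returns on SNS exposure counts, followed by a Granger causality test) can be sketched as follows. The synthetic series and column names are assumptions; the paper uses Social Metrics mention counts and KRX prices from May 25 to September 1, 2012.

```python
# Hedged sketch: OLS of returns on SNS mentions, then a Granger causality test.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "return":   rng.normal(0, 0.02, 70),   # hypothetical daily stock return
    "mentions": rng.poisson(20, 70),       # hypothetical daily SNS exposure count
})

# Step 1: contemporaneous relationship between returns and SNS exposures
ols = sm.OLS(df["return"], sm.add_constant(df["mentions"])).fit()
print(ols.summary().tables[1])

# Step 2: does the exposure series Granger-cause returns? (dependent column listed first)
grangercausalitytests(df[["return", "mentions"]], maxlag=3)
```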

An Effect of Compassion, Moral Obligation on Social Entrepreneurial Intention: Examining the Moderating Role of Perceived Social Support (공감, 도덕적 의무감, 사회적 지지에 대한 인식이 사회적 기업가적 의도에 미치는 영향)

  • Lee, Chaewon;Oh, Hyemi
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.12 no.5 / pp.127-139 / 2017
  • Over the past 10 years, attention to social entrepreneurship has been increasing among scholars, the public sector, and community development. However, less research has been conducted on how social entrepreneurial intention leads to the creation of a social enterprise and which factors affect social entrepreneurial intention. This paper aims to contribute to identifying the antecedents of entrepreneurial behavior and intention. In particular, we have a strong interest in compassion, which has rarely been used as a variable to explain why people engage in social entrepreneurial activities. We also examine moral obligation and perceived social support as antecedents of social entrepreneurial intention. Findings show that compassion and moral obligation affect social entrepreneurial intention. In particular, this study identifies an external, societal factor through the variable of perceived social support: once individuals recognize that the infrastructure and societal mood are friendly to social entrepreneurship, they tend to attempt social entrepreneurial activities. Only a few empirical studies exist in this research domain. A study of 271 Korean college students examined which personal traits predict certain characteristics of social entrepreneurs (such as having a social vision or looking for social innovation opportunities). In addition to those antecedents, students' experience is a critical factor that enables the continued expansion of social entrepreneurial activities. The results of this research show how we can nurture social entrepreneurs and how we can develop the social environment to promote social entrepreneurship.

Dispute of Part-Whole Representation in Conceptual Modeling (부분-전체 관계에 관한 개념적 모델링의 논의에 관하여)

  • Kim, Taekyung;Park, Jinsoo;Rho, Sangkyu
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.97-116 / 2012
  • Conceptual modeling is an important step for successful system development. It helps system designers and business practitioners share the same view of domain knowledge. When done well, conceptual modeling can be beneficial in increasing productivity and reducing failures. However, the value of conceptual modeling is unlikely to be evaluated uniformly because there is a lack of agreement on how to elicit concepts and how to represent them with conceptual modeling constructs. In particular, designing the relationships between components, known as part-whole relationships, has been regarded as complicated work. The recent study, "Representing Part-Whole Relations in Conceptual Modeling: An Empirical Evaluation" (Shanks et al., 2008), published in MIS Quarterly, can be regarded as one such positive effort. Not only is the study one of the few attempts to clarify how to select modeling alternatives in part-whole design, but it also presents results based on an empirical experiment. Shanks et al. argue that there are two modeling alternatives for representing part-whole relationships: an implicit representation and an explicit one. Based on their experiment, they claim that the explicit representation increases the value of a conceptual model. Moreover, Shanks et al. justify their findings by citing the BWW ontology. Recently, the study by Shanks et al. has faced criticism. Allen and March (2012) argue that Shanks et al.'s experiment lacks validity and reliability because the experimental setting suffers from an error-prone and self-defensive design. They point out that the experiment is intentionally constructed to support the idea that using concrete UML concepts yields positive results in understanding models. Allen and March add that the experiment failed to consider boundary conditions, thus reducing its credibility. Shanks and Weber (2012) flatly contradict the argument made by Allen and March (2012). In their defense, they posit that the BWW ontology is rightly applied in supporting the research and insist that the experiment is fairly acceptable. Therefore, Shanks and Weber argue that Allen and March distort the true value of Shanks et al. by pointing out minor limitations. In this study, we investigate the dispute around Shanks et al. in order to answer the following question: "What is the proper value of the study conducted by Shanks et al.?" More profoundly, we question whether using the BWW ontology can be the only viable option for exploring better conceptual modeling methods and procedures. To understand the key issues of the dispute, we first reviewed previous studies relating to the BWW ontology. We then critically reviewed both Shanks and Weber (2012) and Allen and March (2012). With these findings, we further discuss theories on part-whole (or part-of) relationships that are rarely treated in the dispute. As a result, we found three additional points that are not sufficiently covered by the dispute, whose main focus is on errors of experimental method: Shanks et al. did not use Bunge's ontology properly; the refutation of a paradigm shift lacks a concrete, logical rationale; and the conceptualization of part-whole relations should be reformed. In conclusion, Allen and March properly indicate issues that weaken the value of Shanks et al. In general, their criticism is reasonable; however, they do not provide sufficient answers on how to anchor future studies on part-whole relationships.
We argue that the use of the BWW ontology should be rigorously evaluated against the original philosophical rationales surrounding part-whole existence. Moreover, conceptual modeling of part-whole phenomena should be investigated through a richer lens of alternative theories. The criticism of Shanks et al. should not be regarded as a rejection of evaluating modeling methods for alternative part-whole representations. On the contrary, it should be viewed as a call for research on usable and useful approaches to increase the value of conceptual modeling.

Use of Human Serum Albumin Fusion Tags for Recombinant Protein Secretory Expression in the Methylotrophic Yeast Hansenula polymorpha (메탄올 자화효모 Hansenula polymorpha에서의 재조합 단백질 분비발현을 위한 인체 혈청 알부민 융합단편의 활용)

  • Song, Ji-Hye;Hwang, Dong Hyeon;Oh, Doo-Byoung;Rhee, Sang Ki;Kwon, Ohsuk
    • Microbiology and Biotechnology Letters / v.41 no.1 / pp.17-25 / 2013
  • The thermotolerant methylotrophic yeast Hansenula polymorpha is an attractive model organism for various fundamental studies, such as the genetic control of enzymes involved in methanol metabolism, peroxisome biogenesis, nitrate assimilation, and resistance to heavy metals and oxidative stresses. In addition, H. polymorpha has been highlighted as a promising recombinant protein expression host, especially because of the availability of strong and tightly regulatable promoters. In this study, we investigated the possibility of employing human serum albumin (HSA) as a fusion tag for the secretory expression of heterologous proteins in H. polymorpha. A set of four expression cassettes was constructed, each containing the methanol oxidase (MOX) promoter, a translational HSA fusion tag, and the MOX terminator. The expression cassettes were also designed to contain sequences for accessory elements, including a His8-tag, 2×(Gly4Ser1) linkers, tobacco etch virus protease recognition sites (TEV), multi-cloning sites, and Strep-tags. To determine the effect of the size of the HSA fusion tag on the secretory expression of the target protein, each cassette contained an HSA gene fragment truncated at a specific position based on its domain structure. Using the green fluorescent protein (GFP) gene as the reporter, the properties of each expression cassette were compared under various conditions. Our results suggest that the translational HSA fusion tag is an efficient tool for the secretory expression of recombinant proteins in H. polymorpha.

Development of Science Academic Emotion Scale for Elementary Students (초등학생 과학 학습정서 검사 도구 개발)

  • Kim, Dong-Hyun;Kim, Hyo-Nam
    • Journal of The Korean Association For Science Education / v.33 no.7 / pp.1367-1384 / 2013
  • The purpose of this study was to develop a Science Academic Emotion Scale for elementary students. To construct the scale, the authors extracted a core set of 14 emotions related to science learning situations from Kim & Kim (2013) and a literature review. Items on the scale paired the 14 emotions with science learning situations. The first preliminary scale had 174 items, which were reduced and refined by three science educators. The authors verified the scale using exploratory factor analysis, confirmatory factor analysis, inter-item consistency, and concurrent validity. The second preliminary scale consisted of 141 items, which were reduced to seven factors and 56 items by applying exploratory factor analysis twice. The seven factors are enjoyment-contentment-interest, boredom, shame, discontent, anger, anxiety, and laziness. The 56 items were refined by five science educators, and the scale was then fixed at a final seven-factor, 35-item version by applying confirmatory factor analysis twice. Except for chi-square and GFI (Goodness of Fit Index), the various goodness-of-fit indices of the seven-factor, 35-item model showed good estimates. The Cronbach's α of the scale was 0.85. The Cronbach's α values of the seven factors were 0.95 for enjoyment-contentment-interest, 0.81 for boredom, 0.87 for shame, 0.82 for discontent, 0.87 for anger, 0.77 for anxiety, and 0.81 for laziness. The correlation coefficients between the Science Academic Emotion Scale and the National Assessment System of Science-Related Affective Domain (Kim et al., 1998) were 0.59 for enjoyment-contentment-interest, 0.54 for anxiety, 0.42 for shame, and 0.28 for boredom. Based on these results, the authors judged that the Science Academic Emotion Scale for Elementary Students achieved acceptable validity and reliability.
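
For reference, the internal-consistency statistic reported above can be computed as follows. This is a minimal sketch of Cronbach's α on synthetic item responses; the data do not represent the scale's 35 items or seven factors.

```python
# Hedged sketch: Cronbach's alpha for a set of items belonging to one factor.
import numpy as np

def cronbach_alpha(items):
    """items: 2D array with rows = respondents and columns = items of one factor."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(200, 1))                                  # one latent factor
responses = np.clip(np.round(3 + latent + rng.normal(0, 0.7, (200, 5))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```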