• Title/Summary/Keyword: systems approach method


A Study on the Intelligent Online Judging System Using User-Based Collaborative Filtering

  • Hyun Woo Kim;Hye Jin Yun;Kwihoon Kim
    • Journal of the Korea Society of Computer and Information, v.29 no.1, pp.273-285, 2024
  • With the active utilization of Online Judge (OJ) systems in the field of education, various studies utilizing learner data have emerged. This research proposes a problem recommendation method based on user-based collaborative filtering over learner data to support learners in their problem selection. Assisting learners' problem selection within the OJ system is crucial for enhancing the effectiveness of education, as it shapes the learning path. To achieve this, the system identifies learners with similar problem-solving tendencies and utilizes their problem-solving history. The proposed technique has been implemented on an OJ site for algorithms and programming operated by the Chungbuk Education Research and Information Institute. The technique's service utility and usability were assessed through expert reviews using the Delphi technique. It was also piloted with site users, and an analysis of correctness ratios revealed an approximately 16% higher submission rate for recommended problems compared to overall submissions. A survey targeting users who used the recommended problems yielded a 78% response rate, with the majority indicating that the feature was helpful. However, low selection rates of recommended problems and low response rates within the subset of users who used them highlight the need for future research on improving accessibility, enhancing user feedback collection, and diversifying learner data analysis.
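
The abstract does not include implementation details, but the core of user-based collaborative filtering can be sketched as follows: compute similarity between learners' solve histories, then recommend problems that similar learners solved but the target learner has not attempted. The matrix contents, cosine similarity measure, and parameters below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Hypothetical user-problem matrix: 1 = solved, 0 = not attempted.
# Rows are learners, columns are problems (values are illustrative only).
solved = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two solve-history vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend(user_idx, k=2, n_rec=2):
    """Recommend problems that the k most similar learners solved
    but the target learner has not yet attempted."""
    target = solved[user_idx]
    sims = [(cosine_sim(target, solved[j]), j)
            for j in range(len(solved)) if j != user_idx]
    neighbors = [j for _, j in sorted(sims, reverse=True)[:k]]
    # Score problems by similarity-weighted neighbor solve histories.
    scores = sum(cosine_sim(target, solved[j]) * solved[j] for j in neighbors)
    scores[target > 0] = -1.0  # mask problems the learner already solved
    return np.argsort(scores)[::-1][:n_rec]

print(recommend(0))  # indices of recommended problems for learner 0
```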

Analyzing Contextual Polarity of Unstructured Data for Measuring Subjective Well-Being (주관적 웰빙 상태 측정을 위한 비정형 데이터의 상황기반 긍부정성 분석 방법)

  • Choi, Sukjae;Song, Yeongeun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems, v.22 no.1, pp.83-105, 2016
  • Measuring an individual's subjective wellbeing in an accurate, unobtrusive, and cost-effective manner is a core success factor of the wellbeing support system, which is a type of medical IT service. However, although very accurate, measurements with a self-report questionnaire and wearable sensors are cost-intensive and obtrusive when the wellbeing support system must run in real time. Recently, inferring the state of subjective wellbeing from unstructured data with conventional sentiment analysis has been proposed as an alternative that resolves the drawbacks of self-report questionnaires and wearable sensors. However, this approach does not consider contextual polarity, which results in lower measurement accuracy. Moreover, there is no sentiment word net or ontology for the subjective wellbeing domain. Hence, this paper proposes a method to extract keywords and their contextual polarity representing the subjective wellbeing state from unstructured text on online websites in order to improve the reasoning accuracy of sentiment analysis. The proposed method is as follows. First, a set of general sentiment words is prepared. SentiWordNet, the most widely used sentiment dictionary, was adopted; it contains about 100,000 nouns, verbs, adjectives, and adverbs with polarities from -1.0 (extremely negative) to 1.0 (extremely positive). Second, corpora on subjective wellbeing (SWB corpora) were obtained by crawling online text. A survey was conducted to prepare a learning dataset that includes an individual's opinion and the level of self-reported wellness, such as stress and depression. The participants were asked to respond with their feelings about online news on two topics. Next, three data sources were extracted from the SWB corpora: demographic information, psychographic information, and structural characteristics of the text (e.g., the number of words used in the text, simple statistics on the special characters used). These were considered to adjust the level of a specific SWB factor. Finally, a set of reasoning rules was generated for each wellbeing factor to estimate the SWB of an individual based on the text written by that individual. The experimental results suggested that using contextual polarity for each SWB factor (e.g., stress, depression) significantly improved estimation accuracy compared to conventional sentiment analysis methods incorporating SentiWordNet. Even though literature is available on Korean sentiment analysis, such studies used only a limited set of sentiment words; due to the small number of words, many sentences are overlooked when estimating the level of sentiment. However, the proposed method can identify multiple sentiment-neutral words as sentiment words in the context of a specific SWB factor. The results also suggest that a sentiment-word dictionary containing contextual polarity needs to be constructed along with a common-sense dictionary such as SenticNet. These efforts will enrich and enlarge the application area of sentic computing. The study is helpful to practitioners and managers of wellness services in that several characteristics of unstructured text have been identified for improving SWB measurement. Consistent with the literature, the results showed that gender and age affect the SWB state when individuals are exposed to an identical cue from the online text. In addition, the length of the textual response and the usage pattern of special characters were found to indicate the individual's SWB. These findings imply that better SWB measurement should involve collecting the textual structure and the individual's demographic conditions. In the future, the proposed method should be improved by automated identification of contextual polarity in order to enlarge the vocabulary in a cost-effective manner.
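
As a rough illustration of the baseline the paper improves upon, the sketch below looks up general word polarity with SentiWordNet via NLTK and then overrides it with a small contextual-polarity table per SWB factor. The table contents and the override rule are illustrative assumptions; the paper derives contextual polarity from its SWB corpora rather than from a fixed table.

```python
# Requires: nltk.download('sentiwordnet') and nltk.download('wordnet')
from nltk.corpus import sentiwordnet as swn

def word_polarity(word):
    """Average (positive - negative) score over all SentiWordNet senses."""
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

# Hypothetical contextual-polarity table: words that are neutral in general
# but polar in the context of a specific SWB factor (values are made up).
CONTEXT_POLARITY = {"stress": {"deadline": -0.6, "vacation": 0.5}}

def swb_polarity(word, factor):
    """Contextual polarity if defined for the factor, else the general score."""
    return CONTEXT_POLARITY.get(factor, {}).get(word, word_polarity(word))

print(swb_polarity("happy", "stress"))     # falls back to SentiWordNet
print(swb_polarity("deadline", "stress"))  # contextual override: -0.6
```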

A Study on the Development Strategy of Smart Learning for Public Education (스마트러닝의 공교육 정착을 위한 성공전략 연구)

  • Kim, Taisiya;Cho, Ji Yeon;Lee, Bong Gyou
    • Journal of Internet Computing and Services, v.16 no.6, pp.123-131, 2015
  • Recently, the development of ICT has had a big impact on the education field, and the diffusion of smart devices has brought a new education paradigm. Since people now have the opportunity to use various contents anytime and to communicate interactively, the method of learning has been changing. In 2011, the Korean government established the smart education promotion plan to be a first mover in the paradigm shift from e-learning to smart learning. In particular, the government aimed to improve the quality of learning materials and methods in public schools, and also to decrease the high expenditure on private education. However, the achievements of the smart education policy have not yet emerged, and refinement of smart learning policy and strategy is essential at this moment. Therefore, the purpose of this study is to propose successful strategies for smart learning in public education. First, this study explores the status of public education and the smart learning environment in Korea. Then, it derives the key success factors through SWOT (Strength, Weakness, Opportunity, Threat) analysis and suggests strategic priorities through the AHP (Analytic Hierarchy Process) method. Interviews and a survey were conducted with a total of 20 teachers who work in public schools. As a result, focusing on the weakness-threat (WT) strategy is the highest-priority goal for public education to activate smart learning. Among the sub-factors, promoting education programs for teachers (W2), which is still a weakness, appeared as the most important factor to be improved. The second sub-factor with high priority was efficiently optimizing the capability of the new learning method (S4), a strength of the systematic public education environment. The third was the extension of limited government support (T4), which could be a threat to public schools with no financial support. In other words, the results indicate that government and institutional factors should be considered with high priority to produce visible achievements in smart learning. This study is significant as an initial approach to public education from a strategic perspective. A limitation of this study is that the survey and interviews were conducted only with teachers. Accordingly, future studies need to analyze effectiveness and feasibility by considering the perspectives of field experts and policy makers.
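
The AHP step mentioned above reduces pairwise importance judgments to priority weights. A minimal sketch, using an illustrative pairwise comparison matrix (not the paper's survey data) and the common column-normalization approximation of the principal eigenvector:

```python
import numpy as np

# Hypothetical pairwise comparisons among four SWOT strategy groups
# (SO, WO, ST, WT); A[i, j] = relative importance of group i over group j.
A = np.array([
    [1.0, 1/2, 1/3, 1/5],
    [2.0, 1.0, 1/2, 1/3],
    [3.0, 2.0, 1.0, 1/2],
    [5.0, 3.0, 2.0, 1.0],
])

# Normalize each column, then average across rows -> priority weights.
weights = (A / A.sum(axis=0)).mean(axis=1)
print(dict(zip(["SO", "WO", "ST", "WT"], weights.round(3))))

# Consistency check: estimate lambda_max, then CI = (lambda_max - n)/(n - 1).
n = len(A)
lambda_max = float(np.mean((A @ weights) / weights))
print("consistency index:", round((lambda_max - n) / (n - 1), 4))
```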

Development of a Stock Trading System Using M&W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.63-83, 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than complex analyses such as corporate intrinsic-value analysis and technical auxiliary-index analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past techniques could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they find a point that matches the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be many disparities with reality. Existing research tries to find patterns with stock price predictive power, but this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Despite reports that some patterns have price predictability, there were no reports of performance in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a real trading situation because it assumes that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the test results, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is unfinished. Genetic algorithms (GA) were the most suitable solution, since it was virtually impossible to exhaustively find patterns with high success rates given the enormous number of cases in this simulation. We also performed the simulation using the Walk-forward Analysis (WFA) method, which tests the training section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize over a stock portfolio because there is a risk of over-optimization if we optimize the variables for each individual stock; we selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that prices need some volatility for patterns to take shape, but that higher volatility is not always better.
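
Of the three turning-point methods, the swing wave rule is the most compact to state: a bar is a peak if its price exceeds the n bars on each side, and a valley if it is below them. A minimal sketch under that reading (the parameter n and the price series are illustrative assumptions):

```python
def swing_points(prices, n=2):
    """Swing wave turning points: a bar is a peak if its price is higher
    than the n bars on each side, a valley if lower than all of them."""
    peaks, valleys = [], []
    for i in range(n, len(prices) - n):
        neighbors = prices[i - n:i] + prices[i + 1:i + 1 + n]
        if prices[i] > max(neighbors):
            peaks.append(i)
        elif prices[i] < min(neighbors):
            valleys.append(i)
    return peaks, valleys

prices = [10, 11, 13, 12, 11, 12, 14, 15, 13, 12, 11, 12]
# Alternating peaks and valleys are the turning points that form M/W shapes.
print(swing_points(prices, n=2))  # -> ([2, 7], [4])
```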

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.79-92, 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and assisting users in navigating the integrated data via a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center; that is, other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology is used to represent simple relationships between the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships. The knowledge map enables us to infer further relationships such as co-author and co-topic relationships. To extract the relationships between the integrated data, a relational-data-to-triples transformer is implemented. A topic modeling approach is also introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: those used in knowledge management to store, manage, and process an organization's data as knowledge, and those for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. In this research, a knowledge map service is introduced for integrating national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. Using the lightweight ontology enables us to represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected through the national R&D data by author and performer relationships. A knowledge map for displaying researchers' networks is created, where the network is built from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system's goals are 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based information search on the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents, and GTB data is updated daily from NDSL, and R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and integrated into an integrated database. The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them. The topic modeling approach enables us to extract these relationships and topic keywords based on semantics, not on simple keyword matching. Lastly, we show an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and we introduce the knowledge map services created on top of the knowledge base.
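
The document-topic extraction step can be illustrated with LDA in gensim; each resulting (document, topic, weight) pair would become a document-topic triple in the knowledge base. The toy corpus and parameters below are illustrative assumptions, not the system's actual configuration:

```python
from gensim import corpora, models

# Toy tokenized documents standing in for R&D abstracts.
docs = [
    ["ontology", "knowledge", "map", "triple", "rdf"],
    ["topic", "model", "lda", "document", "keyword"],
    ["patent", "project", "report", "paper", "output"],
]

dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      random_state=42, passes=10)

# Emit each (document, topic, weight) pair as a document-topic triple.
for i, bow in enumerate(bow_corpus):
    for topic_id, weight in lda.get_document_topics(bow):
        print(f"doc{i} -- hasTopic --> topic{topic_id} ({weight:.2f})")
```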

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems, v.20 no.3, pp.109-131, 2014
  • As worldwide demand for nuclear power plant equipment continues to grow, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, preadjudication (or prescreening, for simplicity) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying only on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method used TF-IDF, the de facto standard method for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive a final score (γ) for deciding whether the presented case involves strategic material. The final score (γ) represents the document similarity between past cases and the new case; it is induced not only by conventional TF-IDF but also by a nuclear system similarity score that takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents in the case base considered most similar to the new case and provides them with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are worth looking up, so the user can make a proper decision at relatively low cost. The evaluation of the system was conducted by developing a prototype and testing it with field data. The system workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that serves as a meaningful example of a knowledge service application.
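
The scoring idea (blend document-to-document similarity α with document-to-nuclear-system similarity β into a final score γ, then retrieve the top-3 cases) can be sketched as follows. The blending weight, the β values, and the example cases are illustrative assumptions; the abstract does not give the paper's exact formula.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "zirconium alloy cladding tube for fuel assembly",
    "industrial pump unrelated to nuclear systems",
    "pressure vessel steel forging for reactor coolant system",
]
new_case = "reactor coolant pump pressure boundary forging"

# alpha: document-to-document similarity via conventional TF-IDF.
vec = TfidfVectorizer()
X = vec.fit_transform(past_cases + [new_case])
alpha = cosine_similarity(X[-1], X[:-1]).ravel()

# beta: hypothetical document-to-nuclear-system similarity, e.g. from
# matching case text against a taxonomy of nuclear system components.
beta = np.array([0.9, 0.1, 0.8])

w = 0.6  # blending weight (an assumed value)
gamma = w * alpha + (1 - w) * beta

# Retrieve the top-3 most similar past cases by final score gamma.
for rank, idx in enumerate(np.argsort(gamma)[::-1][:3], start=1):
    print(rank, past_cases[idx], round(float(gamma[idx]), 3))
```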

Multilateral Approach to Forming an Air Logistics Hub in the Northeast Asia Region (동북아 항공물류허브을 구축하기 위한 다자적 접근방안)

  • Hong, Seock-Jin
    • The Korean Journal of Air & Space Law and Policy, v.19 no.2, pp.97-136, 2004
  • The Northeast Asian air cargo market has expanded tremendously as a result of the opening of the Chinese market, and the importance of the Asia-Pacific region in global air transport has also increased. The exchange of human and material resources, services, and information in Northeast Asia, which is expected to increase in the near future, requires that the airlines operating within this region adopt a more liberalized approach. This paper introduces alternatives that can be applied to the Northeast Asian airline industry so as to bring about the integration of regional air transport. First, individual Northeast Asian nations need to alter their policies towards the airline industry. Second, each country should further liberalize its domestic air transport. Third, freer air service agreements should be signed between the nations of Northeast Asia. Fourth, the strategic alliances between the airlines operating in Northeast Asia should be further strengthened. Fifth, this liberalization process should be carried out incrementally, beginning with the more competitive airports and routes, or with less-in-demand routes. Sixth, a shuttle system should be put into place between the main airports of China, Korea, and Japan. Seventh, these three nations should jointly develop aviation safety and security systems that accord with international standards. Eighth, the liberalization of the aviation industry should be undertaken in conjunction with other related fields. Ninth, organizations linking civil aviation bodies in the Asia-Pacific area should be formed, as should links between the governments. By doing so, these countries will be able to establish regular venues through which to exchange opinions on the integration and liberalization of the air cargo market, so as to induce gradual liberalization of the actual market. The liberalization of air transport in Northeast Asia will prove to be a daunting task in the short term. However, if Chinese airlines continue to grow and Japanese airlines complete their move towards a low-cost structure, this process could be completed earlier than expected. Over the last twenty-five years, air transport has undergone tremendous changes, the most important factor behind which has been the increased liberalization of the market. As a result, rates have decreased while demand has increased, turning the air transport industry, long perceived as an industry in decline, into a high-growth industry. The only way to increase regional exchange in air transport is to pursue further liberalization, and the country that implements this liberalization process earliest may well emerge as a leading force within the air transport industry.


An Evaluation Model for Software Usability Using Mental Model and Emotional Factors (정신모형과 감성 요소를 이용한 소프트웨어 사용성 평가 모델 개발)

  • 김한샘;김효영;한혁수
    • Journal of KIISE: Software and Applications, v.30 no.1_2, pp.117-128, 2003
  • Software usability is a characteristic of software that is evaluated in terms of learnability, effectiveness, and satisfaction, and it is a main factor of software quality. Software has to be continuously improved by following guidelines that come from usability evaluation. Usability factors may vary among different software products, and even for the same factor, users may have different opinions according to their experience and knowledge. Therefore, a usability evaluation process must be developed with consideration of many factors, such as the variety of applications and users. Existing approaches, such as satisfaction evaluation and performance evaluation, only evaluate the result and do not perform cause analysis, and their unified evaluation items and contents do not reflect the characteristics of individual products. To address these problems, this paper presents an evaluation model based on the user's mental model and emotions. This model uses evaluation factors for the user tasks, extracted by analyzing usage of the target product. In the mental model approach, the conceptual model of the designer and the mental model of the user are compared, and the differences are taken as a gap and reported as areas to be improved in the future. In the emotional factor approach, emotional factors are extracted for the target product, which is then evaluated in terms of those factors. With the proposed method, we can evaluate software products with customized attributes and deduce guidelines for future improvements. We also take a GUI framework as a sample case and extract directions for improvement. Because this model analyzes user tasks and uses evaluation factors for each task, it not only reflects the characteristics of the product but also exactly identifies the items that should be modified and improved.

Early Identification of Gifted Young Children and Dynamic assessment (유아 영재의 판별과 역동적 평가)

  • 장영숙
    • Journal of Gifted/Talented Education, v.11 no.3, pp.131-153, 2001
  • The importance of identifying gifted children during early childhood is becoming recognized. Nonetheless, most researchers have preferred to study the primary and secondary levels, where children are already and more clearly demonstrating what talents they have and where more reliable predictions of giftedness may be made; comparatively little work has been done at the early-childhood level. When we identify giftedness during early childhood, we have to consider the potential of young children rather than their actual achievement. Giftedness during early childhood is still developing and is less stable than that of older children, which prevents us from making firm and accurate predictions based on children's actual achievement. Dynamic assessment, based on Vygotsky's concept of the zone of proximal development (ZPD), suggests a new way of identifying gifted young children. In light of dynamic assessment, identifying the potential giftedness of young children requires measuring both unassisted and assisted performance. Dynamic assessment usually consists of a test-intervene-retest format that focuses attention on the improvement in the child's performance when an adult provides mediated assistance in mastering the testing task. The advantages of dynamic assessment are as follows. First, it can provide a useful means of assessing young gifted children who have not demonstrated high ability under traditional identification methods. Second, it can assess the learning process of young children. Third, it can lead to individualized education through the early identification of young gifted children. Fourth, it can be a more accurate predictor of potential by linking diagnosis and instruction. Thus, it can enable us to provide educational treatment effectively for young gifted children.


Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.123-138, 2017
  • Since the stock market is driven by the expectations of traders, studies have been conducted to predict stock price movements through analysis of various sources of text data. To predict stock price movements, research has examined not only the relationship between text data and fluctuations in stock prices, but also trading stocks based on news articles and social media responses. Studies that predict the movements of stock prices have applied classification algorithms over a term-document matrix, in the same way as other text mining approaches. Because a document contains many words, it is better to select the words that contribute more when building the term-document matrix: based on frequency, words that show too little frequency or importance are removed, and words are also selected according to their contribution, by measuring the degree to which a word helps correctly classify a document. The basic idea of constructing the term-document matrix is to collect all the documents to be analyzed and to select and use the words that influence the classification. In this study, we analyze the documents for each individual stock and select the words that are irrelevant for all categories as neutral words. We then extract the words around each selected neutral word and use them to generate the term-document matrix. The neutral-word approach starts from the idea that stock movements are less related to the neutral words themselves, while the words surrounding a neutral word are more likely to affect stock price movements. We then apply an algorithm that classifies stock price fluctuations to the generated term-document matrix. In this study, we first removed stop words and selected neutral words for each stock, and we excluded from the selected words those that appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news data as training data and applied the remaining one month of news articles to the model to predict the stock price movements of the next day. We used SVM, boosting, and random forest for building models and predicting the movements of stock prices. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); we used the initial 60 days as a training set and the remaining 20 days as a test set. The proposed word-based algorithm showed better classification performance than the word selection method based on sparsity. This study predicted stock price movements by collecting and analyzing news articles on the top 10 stocks by market capitalization. We used the term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method and the suggested method of removing words from the term-document matrix. The suggested method differs from existing word extraction in that it determines the words to extract using not only the news articles for the corresponding stock but also news items for other stocks; in other words, it removes not only the words that appeared across both rises and falls but also the words that commonly appeared in news for other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy. A limitation of this study is that stock price prediction was set up as classifying rises and falls, and the experiment was conducted only on the top ten stocks by market capitalization, which do not represent the entire stock market. In addition, it is difficult to show investment performance because stock price fluctuations and profit rates may differ. Therefore, further research using more stocks, and predicting yields through trading simulation, is needed.
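
The neutral-word feature construction can be sketched as follows: keep only the words within a small window around predefined neutral words, build a term-document matrix from those context words, and train a classifier on next-day movements. The neutral-word list, window size, articles, and labels below are illustrative assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

# Hypothetical neutral words: present in news regardless of price direction.
NEUTRAL = {"earnings", "announcement"}

def context_words(article, window=2):
    """Keep only words within `window` positions of a neutral word,
    excluding the neutral words themselves."""
    tokens = article.lower().split()
    keep = set()
    for i, tok in enumerate(tokens):
        if tok in NEUTRAL:
            keep.update(range(max(0, i - window),
                              min(len(tokens), i + window + 1)))
    return " ".join(tokens[i] for i in sorted(keep) if tokens[i] not in NEUTRAL)

articles = [
    "strong quarterly earnings beat analyst expectations today",
    "weak earnings missed forecasts after the announcement of losses",
]
labels = [1, 0]  # toy labels: 1 = price rose next day, 0 = fell

# Term-document matrix over context words only, then an SVM classifier.
X = CountVectorizer().fit_transform([context_words(a) for a in articles])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```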