• Title/Summary/Keyword: Knowledge-based systems


Application Methods and Development Assessment Tools for Creative Convergence Education Programs for Elementary and Secondary Schools based on Hyper Blended Practical Model (하이퍼 블렌디드 실천모델 기반 초·중등 창의 융합 교육 프로그램 평가도구 개발 및 적용 방안)

  • Choi, Eunsun;Park, Namje
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.2
    • /
    • pp.117-129
    • /
    • 2022
  • The ability to creatively pursue new knowledge and perspectives across various disciplines has established itself as a basic literacy for living in the 21st-century convergence era. With the development of various creative education programs, assessment tools that can objectively and systematically evaluate learners' academic achievement are also required. Therefore, this paper proposes self-assessment, peer-assessment, creativity-assessment, and reflection tools based on the hyper blended practical model as assessment tools for creative convergence education programs for elementary and secondary school students. The developed assessment tools were refined into a more complete set of evaluation methods through validity tests, in which two items were modified and four items deleted. The assessment tools were then applied to 596 elementary and secondary school students nationwide, and the results were analyzed through one-way ANOVA and word-cloud analysis. The analysis found that the self-assessment and reflection tools need questions developed separately for each grade group. We also proposed using these assessment tools in blended classes and other educational activities in the changing classroom environment. We hope that this paper provides implications for developing evaluation systems and tools for creative convergence education.
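The grade-group comparison described above relies on one-way ANOVA. As a rough illustration of that statistic, here is a minimal plain-Python sketch; the score values below are hypothetical, not the study's data:

```python
from itertools import chain

def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total number of observations
    grand_mean = sum(chain(*groups)) / n
    # Between-group sum of squares: how far each group mean is from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical self-assessment scores for three grade groups (illustrative only)
scores = [[3.2, 3.5, 3.8, 3.1], [4.0, 4.2, 3.9, 4.1], [2.8, 3.0, 2.9, 3.1]]
f_stat = one_way_anova_f(scores)
```

A large F value, compared against the F distribution with the given degrees of freedom, would indicate that at least one grade group differs in mean score, which is the kind of evidence behind the paper's recommendation to develop questions per grade group.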

A Case Study on Utilizing Open-Source Software SDL in C Programming Language Learning (C 프로그래밍 언어 학습에 공개 소스 소프트웨어 SDL 활용 사례 연구)

  • Kim, Sung Deuk
    • Journal of Practical Engineering Education
    • /
    • v.14 no.1
    • /
    • pp.1-10
    • /
    • 2022
  • Learning the C programming language in electronics education is an important basic course for understanding computer programming and acquiring the ability to use microprocessors in embedded systems. To focus on understanding basic grammar and algorithms, a common teaching method is to write programs based on C standard library functions in the console window, learning theory and practice in parallel. However, if a student wants to start a project or go deeper after acquiring basic knowledge of C, using only the C standard library in the console window limits what the student can express or control with a C program. To make it easier for students to use graphics and multimedia resources, and to increase educational value, this paper studies a case of applying Simple DirectMedia Layer (SDL), an open-source software library, to the C programming language learning process. The SDL-based programming course, applied after completion of the basic console-window curriculum, is introduced, and its educational value is evaluated through a survey. More than 56% of respondents expressed positive opinions in terms of improved application ability, stimulated interest, and overall usefulness, while less than 4% had negative opinions.

Innovative Technology of Teaching Moodle in Higher Pedagogical Education: from Theory to Practice

  • Iryna, Rodionova;Serhii, Petrenko;Nataliia, Hoha;Kushevska, Natalia;Tetiana, Siroshtan
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.8
    • /
    • pp.153-162
    • /
    • 2022
  • Relevance. Innovative activities in education should aim to ensure the comprehensive development of the individual and the professional development of students. The main idea of modular technology is that students learn by themselves while the teacher manages their learning activities. The advantage of modular technology is the teacher's ability to design the study of the material in the forms most interesting and accessible to a given study group while achieving the best learning results. Innovative Moodle technology is gaining popularity every day, significantly expanding the space of teaching and learning and allowing students to study inter-faculty university programs in depth. The purpose of this study is to assess the quality of implementation of the Moodle e-learning system. The study was conducted at the South Ukrainian National Pedagogical University named after K. D. Ushinsky in order to identify barriers to the effective implementation of innovative Moodle-based distance learning technologies and to introduce a new model with a positive impact on the development of e-learning. Methodology. The paper used a combination of theoretical and empirical research methods, including scientific analysis of sources on this issue, which allowed us to formulate the initial provisions of the study; analysis of the results of students' educational activities; a pedagogical experiment; questionnaires; and monitoring of students' activities in practical classes. Results. This article evaluates the quality of implementation of the principles of distance learning in teaching and learning at the University. The experiment involved 1,250 students studying at the South Ukrainian National Pedagogical University named after K. D. Ushinsky.
The survey helped to identify the main barriers to the effective implementation of modern distance learning technologies in the educational process of the University: the lack of readiness of teachers and parents, the lack of necessary skills in applying online learning systems, the inability to interact with the teaching staff, and the lack of a sufficient number of online academic consultants. In addition, internal problems are investigated: limited resources, unevenly distributed marketing advantages, an inappropriate administrative structure, and a lack of innovative physical capabilities. The article addresses these problems by gradually implementing a distance learning model suitable for any university, regardless of its specialization. The Moodle-based e-learning system proposed in this paper was designed to eliminate the identified barriers. Models for implementing distance learning were built according to the CAPDM methodology, which helps universities and other educational service providers develop and manage world-class online distance learning programs. Prospects for further research focus on evaluating students' knowledge and abilities over the six months following the introduction of the proposed Moodle-based program.

Multi-day Trip Planning System with Collaborative Recommendation (협업적 추천 기반의 여행 계획 시스템)

  • Aprilia, Priska;Oh, Kyeong-Jin;Hong, Myung-Duk;Ga, Myeong-Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.159-185
    • /
    • 2016
  • Planning a multi-day trip is a complex and time-consuming task. It usually starts with selecting a list of points of interest (POIs) worth visiting and then arranging them into an itinerary, taking into consideration various constraints and preferences. When choosing POIs to visit, one might ask friends for suggestions, search for information on the Web, or seek advice from travel agents; however, those options have their limitations. First, the knowledge of friends is limited to the places they have visited. Second, the tourism information on the internet may be vast, but it can require a lot of time to read and filter. Lastly, travel agents might be biased towards providers of certain travel products when suggesting itineraries. In recent years, many researchers have tried to deal with the huge amount of tourism information available on the internet. They have explored the wisdom of the crowd through the overwhelming number of images shared by people on social media sites. Furthermore, trip planning problems are usually formulated as 'Tourist Trip Design Problems' and solved using various search algorithms with heuristics. Recommendation systems based on a variety of techniques have been set up to cope with the overwhelming tourism information available on the internet. Prediction models of recommendation systems are typically built using a large dataset; however, such a dataset is not always available. For other models, especially those that require input from people, human computation has emerged as a powerful and inexpensive approach. This study proposes CYTRIP (Crowdsource Your TRIP), a multi-day trip itinerary planning system that draws on the collective intelligence of contributors in recommending POIs. In order to enable the crowd to collaboratively recommend POIs to users, CYTRIP provides a shared workspace.
In the shared workspace, the crowd can recommend as many POIs to as many requesters as they can, and they can also vote on the POIs recommended by other people when they find them interesting. In CYTRIP, anyone can make a contribution by recommending POIs to requesters based on requesters' specified preferences. CYTRIP takes input on the recommended POIs to build a multi-day trip itinerary taking into account the user's preferences, the various time constraints, and the locations. The input then becomes a multi-day trip planning problem that is formulated in Planning Domain Definition Language 3 (PDDL3). A sequence of actions formulated in a domain file is used to achieve the goals in the planning problem, which are the recommended POIs to be visited. The multi-day trip planning problem is a highly constrained problem. Sometimes, it is not feasible to visit all the recommended POIs with the limited resources available, such as the time the user can spend. In order to cope with an unachievable goal that can result in no solution for the other goals, CYTRIP selects a set of feasible POIs prior to the planning process. The planning problem is created for the selected POIs and fed into the planner. The solution returned by the planner is then parsed into a multi-day trip itinerary and displayed to the user on a map. The proposed system is implemented as a web-based application built using PHP on a CodeIgniter Web Framework. In order to evaluate the proposed system, an online experiment was conducted. From the online experiment, results show that with the help of the contributors, CYTRIP can plan and generate a multi-day trip itinerary that is tailored to the users' preferences and bound by their constraints, such as location or time constraints. The contributors also find that CYTRIP is a useful tool for collecting POIs from the crowd and planning a multi-day trip.
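The feasibility-filtering step described in the abstract (selecting a set of feasible POIs before planning) can be sketched as a simple greedy filter. The POI names, vote counts, and the time-budget heuristic below are illustrative assumptions, not CYTRIP's actual algorithm:

```python
def select_feasible_pois(pois, time_budget):
    """Greedily keep the highest-voted POIs that still fit in the time budget.

    pois: list of (name, votes, visit_hours) tuples; these fields are an
    illustrative data model, not the one used by CYTRIP.
    """
    chosen, remaining = [], time_budget
    for name, votes, hours in sorted(pois, key=lambda p: -p[1]):
        if hours <= remaining:          # skip POIs that no longer fit
            chosen.append(name)
            remaining -= hours
    return chosen

# Hypothetical crowd-recommended POIs with vote counts and visit durations
pois = [("museum", 12, 3), ("palace", 9, 2), ("tower", 7, 4), ("market", 5, 1)]
print(select_feasible_pois(pois, 6))  # -> ['museum', 'palace', 'market']
```

Only the POIs that survive a filter like this would then be encoded as goals in the PDDL3 problem file, so the planner never receives a goal set that is unachievable within the user's time constraints.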

A Comparative Case Study on the Adaptation Process of Advanced Information Technology: A Grounded Theory Approach for the Appropriation Process (신기술 사용 과정에 관한 비교 사례 연구: 기술 전유 과정의 근거이론적 접근)

  • Choi, Hee-Jae;Lee, Zoon-Ky
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.99-124
    • /
    • 2009
  • Many firms in Korea have adopted advanced information technology in an effort to boost efficiency, but the process of adapting to the new technology can vary from one firm to another. As such, this research focuses on several relevant factors, especially the role of social interaction as a key variable that influences the technology adaptation process and its outcomes. Thus far, how a firm goes through the process of adapting to new technology has not yet been fully explored. Previous studies on the changes an organization undergoes due to information technology have been pursued from various theoretical points of view, evolving from technological and institutional views to an integrated socio-technical view. The technology adaptation process has been understood as something that evolves over time and has been regarded as cycles between misalignment and alignment, gradually approaching a stable aligned state. The adaptation process was defined as an "appropriation" process by Poole and DeSanctis (1994). They suggested that this process is not automatically determined by the technology design itself; rather, people actively select how technology structures should be used, and adoption practices vary accordingly. However, the concepts of the appropriation process in these studies are imprecise, and the suggested propositions are not clear enough to apply in practice. Furthermore, these studies do not substantially suggest which factors change during the appropriation process or what should be done to bring about effective outcomes. Therefore, the research objectives of this study lie in finding causes for the differences in how advanced information technology has been used and adopted among organizations. The study also aims to explore how a firm's interaction with social as well as technological factors results in different organizational changes.
The detailed objectives of this study are as follows. First, this paper focuses on the long-run appropriation process of advanced information technology and looks into the reasons for the diverse types of usage. Second, it categorizes the phases of the appropriation process and clarifies what changes occur and how they evolve during each phase. Third, it suggests guidelines for determining which strategies are needed at the individual, group, and organizational levels. For this, a substantive grounded theory that can be applied to organizational practice has been developed from a longitudinal comparative case study. The technology appropriation process was explored based on Structuration Theory (Giddens, 1984; Orlikowski and Robey, 1991) and Adaptive Structuration Theory (Poole and DeSanctis, 1994), which are examples of socio-technical views on organizational change by technology. Data were obtained from interviews, observations of medical treatment tasks, and questionnaires administered to group members who use the technology. Data coding was executed in three steps following the grounded theory approach. First, concepts and categories were developed from interview and observation data in open coding. Next, in axial coding, categories were related to subcategories along the lines of their properties and dimensions through the paradigm model. Finally, the grounded theory of the appropriation process was developed through the conditional/consequential matrix in selective coding. In this study, eight hypotheses about the adaptation process have been clearly articulated. We also found that the appropriation process evolves through three phases, namely "direct appropriation," "cooperate with related structures," and "interpret and make judgments." As appropriation moves to higher phases, users exhibit more varied types of instrumental use and attitudes.
Moreover, previous structures such as "knowledge and experience," "belief that other members know and accept the use of technology," "horizontal communication," and "embodiment of the opinion collection process" evolve to higher degrees along their property dimensions. Furthermore, users continuously create new spirits and structures while removing some of the previous ones. Thus, from a longitudinal view, faithful and unfaithful appropriation methods appear recursively, but faithful appropriation gradually takes over. In other words, the concept of spirits and structures changes over time in the adaptation process for the purpose of alignment between the task and other structures. These findings call for a revised or extended model of structural adaptation in the IS (Information Systems) literature, now that the vague adaptation process of previous studies has been clarified through an in-depth qualitative study that identifies each phase with accuracy. In addition, based on these results, guidelines can be set up to help determine which strategies are needed at the individual, group, and organizational levels for effective technology appropriation. In practice, managers can focus on the changes of spirits and the elevation of the structural dimension to achieve effective technology use.

New horizon of geographical method (인문지리학 방법론의 새로운 지평)

  • ;Choi, Byung-Doo
    • Journal of the Korean Geographical Society
    • /
    • v.38
    • /
    • pp.15-36
    • /
    • 1988
  • In this paper, I consider the development of methods in contemporary human geography in terms of a dialectical relation of action and structure, and try to draw a new horizon of method toward which geographical research and spatial theory would develop. The positivist geography that was dominant during the 1960s faced both serious internal reflections and strong external criticisms in the 1970s. The internal reflections, which pointed out its ignorance of the spatial behavior of decision-makers and its simplification of complex spatial relations, produced behavioural geography and the systems-theoretical approach. Yet these alternatives still stood on positivist geography, even though they seemed more realistic and complicated than their predecessor. The external criticisms, which argued against the positivist method as phenomenalism and instrumentalism, suggested some alternatives: humanistic geography, which emphasizes the intention and action of the human subject and meaning-understanding, and structuralist geography, which stresses social structure as a totality that would produce spatial phenomena, and a theoretical formulation. Human geography today can be characterized by a strain and conflict between these methods, and hence requires a synthetic integration between them. Philosophy and social theory in general are in the same situation, in which theories of action and structural analysis have been complementary to, or in conflict with, each other. Human geography has fallen into a further problematic with the introduction of a method based on so-called political economy. This method has been suggested not merely as an alternative to positivist geography, but also as a theoretical foundation for critical analysis of space.
The political economy of space, which has analyzed capitalist space and tried to theorize its transformation, may be seen either as following humanistic (or Hegelian) Marxism, as represented in Lefebvre's work, or as following structuralist Marxism, as developed in Castells's or Harvey's work. The spatial theory following humanistic Marxism has argued for a dialectical relation between 'the spatial' and 'the social', and given more attention to practicing human agents than to explaining social structures. On the contrary, that based on structuralist Marxism has argued for social structures producing spatial phenomena, and focused on theorizing the totality of structures. Even though these two perspectives have more recently tended to converge, in that structuralist-Marxist geographers relate the domain of economic and political structures to that of action in their studies of urban culture and experience under capitalism, the political economy of space needs an integrated method with which one can overcome the difficulties of orthodox Marxism. Some novel works in philosophy and social theory developed since the end of the 1970s have oriented towards an integrated method relating a series of concepts of action and structure and reconstructing historical materialism. They include Giddens's theory of structuration, Foucault's genealogical analysis of power-knowledge, and Habermas's theory of communicative action. There are, of course, some fundamental differences between these works. Giddens develops a theory that explicitly relates the domain of action and that of structure in terms of what he calls the 'duality of structure', and wants to bring time-space relations into the core of social theory.
Foucault writes a history in which strategically intentional but nonsubjective power relations have emerged and operated by virtue of multiple forms of constraint within specific spaces, while refusing to elaborate any theory that would underlie a political rationalization. Habermas analyzes how the Western rationalization of economic and political systems has colonized the lifeworld in which we communicate with each other, and wants to formulate a new normative foundation for a critical theory of society that highlights communicative reason (without any consideration of spatial concepts). On the basis of the above considerations, this paper draws a new horizon of method in human geography and spatial theory, some essential ideas of which can be summarized as follows. (1) The concept of space, especially in terms of its relation to society. Space is not an ontological entity independent of society with its own laws of constitution and transformation; it can be produced and reproduced only by virtue of its relation to society. Yet space is not merely a material product of society, but also a place and medium in and through which society can be maintained or transformed. (2) The constitution of space in terms of the relation between action and structure. Spatial actors, who are always knowledgeable under conditions of socio-spatial structure, produce and reproduce their context of action, that is, structure; and spatial structures, as results of human action, enable as well as constrain it. Spatial actions can be distinguished into instrumental-strategic action oriented to success and communicative action oriented to understanding, which (re)produce respectively two different spheres of spatial structure in different ways: the material structure of economic and political system-space in an unacknowledged and unintended way, and the symbolic structure of social and cultural life-space in an acknowledged and intended way.
(3) Capitalist space in terms of its rationalization. The ideal development of space would balance the rationalizations of system-space and life-space, such that system-space provides material conditions for the maintenance of life-space, and life-space in turn supports its further development. But the development of capitalist space in reality is paradoxical and hence crisis-ridden. The economic and political system-space, propelled by steering media like money and power, has outstripped the significance of communicative action and colonized the life-space. That is, we no longer live in a space mediated by communicative action, but in one created for and by money and power. But no matter how seriously our everyday life-space has been monetarized and bureaucratized, here nevertheless lies the practical potential that would rehabilitate the meaning of space, the meaning of our life on the Earth.


Real-time CRM Strategy of Big Data and Smart Offering System: KB Kookmin Card Case (KB국민카드의 빅데이터를 활용한 실시간 CRM 전략: 스마트 오퍼링 시스템)

  • Choi, Jaewon;Sohn, Bongjin;Lim, Hyuna
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.1-23
    • /
    • 2019
  • Big data refers to data that is difficult to store, manage, and analyze with existing software. As consumers' changing lifestyles expand the size and variety of their needs, companies are investing a lot of time and money to understand those needs. Companies in various industries utilize big data to improve their products and services, analyze unstructured data, and respond to customers in real time. The financial industry operates decision support systems that use financial data to develop financial products and manage customer risk. The use of big data by financial institutions can effectively create added value along the value chain and makes more advanced customer relationship management (CRM) strategies possible. Financial institutions can utilize the purchase data and unstructured data generated by credit cards, making it possible to identify and satisfy customers' desires. CRM has a granular process that can be measured in real time as it grows with information knowledge systems. With the development of information services and CRM, the platform has changed, and it has become possible to meet consumer needs in various environments. Recently, as consumer needs have diversified, more companies are providing systematic marketing services using data mining and advanced CRM techniques. KB Kookmin Card, which started as a credit card business in 1980, achieved early stabilization of its processes and computer systems and actively introduced new technologies and systems. In 2011, the bank and credit card businesses were separated, and the company led the 'Hye-dam Card' and 'One Card' markets, which deviated from existing concepts. In 2017, the total use of domestic credit and check cards grew by 5.6% year-on-year to 886 trillion won. In 2018, the company received a long-term rating of AA+ as a result of its credit card evaluation.
This confirmed that its credit rating stood at the top of the industry, supported by effective marketing strategies and services. At present, KB Kookmin Card emphasizes strategies to meet the individual needs of customers and to maximize the lifetime value of consumers by utilizing customers' payment data. KB Kookmin Card combines internal and external big data to conduct marketing in real time and to build monitoring systems. It has built a marketing system that detects real-time behavior using big data such as homepage visits and purchase history, based on customer card information. The system is designed to capture customer action events in real time and execute marketing by utilizing the stores, locations, amounts, and usage patterns of card transactions. The company has created more than 280 different scenarios based on the customer's life cycle and conducts marketing plans that accommodate various customer groups in real time. It operates a smart offering system, a highly efficient marketing management system that detects customers' card usage, behavior, and location information in real time and provides further refined services by combining with various apps. This study traces the change from traditional CRM to the current CRM strategy. Finally, it examines the current CRM strategy through KB Kookmin Card's big data utilization strategy and marketing activities, and proposes a marketing plan for KB Kookmin Card's future CRM strategy. KB Kookmin Card should invest in securing the increasingly sophisticated ICT technology and human resources required for the success and continuous growth of the smart offering system. It is necessary to establish a strategy for securing profit from a long-term perspective and to proceed systematically.
Especially in the current situation, where privacy violations and personal information leakage are pressing issues, efforts should be made to gain customers' acceptance of marketing that uses customer information and to build a corporate image that emphasizes security.
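The scenario-driven real-time marketing described above can be sketched as matching card-transaction events against a library of scenario rules. The scenario names, event fields, thresholds, and offers below are illustrative assumptions, not KB Kookmin Card's actual scenarios:

```python
# Each scenario pairs a predicate over a card-transaction event with an offer.
# All names and values here are hypothetical, for illustration only.
scenarios = [
    ("overseas_use", lambda e: e["location"] == "overseas",
     "travel insurance offer"),
    ("big_purchase", lambda e: e["amount"] >= 500_000,
     "installment plan offer"),
    ("cafe_habit", lambda e: e["category"] == "cafe",
     "coffee discount coupon"),
]

def match_offers(event):
    """Return the offers whose scenario predicate matches this event."""
    return [offer for _, predicate, offer in scenarios if predicate(event)]

# A hypothetical real-time transaction event
event = {"location": "domestic", "amount": 620_000, "category": "electronics"}
print(match_offers(event))  # -> ['installment plan offer']
```

In a production system of the kind the abstract describes, events would arrive on a stream and each of the 280+ scenarios would be evaluated against them as they occur, triggering offers through the associated apps.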

Development of Drawing & Specification Management System Using 3D Object-based Product Model (3차원 객체기반 모델을 이용한 설계도면 및 시방서관리 시스템 구축)

  • Kim Hyun-nam;Wang Il-kook;Chin Sang-yoon
    • Korean Journal of Construction Engineering and Management
    • /
    • v.1 no.3 s.3
    • /
    • pp.124-134
    • /
    • 2000
  • In construction projects, design information, which should contain accurate product information in a systematic way, needs to be applicable throughout the life-cycle of a project. However, paper-based 2D drawings and related documents have difficulty communicating and sharing the owner's and architect's intentions and requirements effectively, and in building a corporate knowledge base through ongoing projects, due to the lack of interoperability between task- or function-oriented software and the burden of handling massive amounts of information. Meanwhile, computer and information technologies are developing so rapidly that practitioners find it hard to adapt them to the industry efficiently. The 3D modeling capabilities of CAD systems have developed enormously and enable users to associate 3D models with other relevant information. However, representing all design information in a CAD system still requires a great deal of effort and cost, and the resulting sophisticated system is difficult to manage. This research focuses on the transition period from 2D-based to 3D-based design information management, which means the co-existence of 2D- and 3D-based management. It proposes a compound 2D/3D CAD system that presents the general design information using a 3D model integrated with 2D CAD drawings for detailed design information. This research developed an integrated information management system for design and specification by associating 2D drawings and 3D models, where the 2D drawings represent detailed designs and parts that are hard to express as 3D objects. To do this, the related management processes were analyzed to build an information model, which in turn became the basis of the integrated information management system.
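The core association the abstract describes, a 3D object linked to its detailed 2D drawings and specification clauses, can be sketched as a small data structure. The class, field names, and example identifiers are illustrative assumptions, not the paper's actual data model:

```python
class DesignObject:
    """Minimal sketch of a 3D design object linked to 2D drawings and specs.

    Field names and identifiers are hypothetical, for illustration only.
    """
    def __init__(self, object_id, name):
        self.object_id = object_id
        self.name = name
        self.drawings = []        # 2D CAD drawings carrying detailed design
        self.spec_sections = []   # related specification clauses

    def attach_drawing(self, drawing_file):
        self.drawings.append(drawing_file)

    def attach_spec(self, section_no):
        self.spec_sections.append(section_no)

# A hypothetical wall object with one detail drawing and one spec section
wall = DesignObject("W-101", "exterior wall")
wall.attach_drawing("wall_detail_A-201.dwg")
wall.attach_spec("04 20 00")
```

Navigating from a 3D object in the model to `wall.drawings` and `wall.spec_sections` is the kind of 2D/3D integration the compound system provides: the 3D model gives the overview, while the attached 2D drawings and spec clauses carry the detail.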


Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have made their internally developed AI technology public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open-source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open-source software have been made, there is a lack of studies that help develop or use deep learning open-source software in industry. This study thus attempts to derive a strategy for adopting such a framework through case studies of deep learning open-source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open-source software, we employed a case study framework that includes the technological factors of perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability; the organizational factors of management support and knowledge and expertise; and the environmental factors of availability of technology skills and services, and platform long-term viability. We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, along with several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open-source framework.
By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers on the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open-source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers on the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies increase the number of deep learning research developers, the ability to use the deep learning framework, and the availability of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open-source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming that deep learning is the right methodology, confirming that a deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework within the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are settled, the next two steps can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using deep learning frameworks according to the needs of each industry and business.
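The gated nature of the procedure — the first three pre-consideration steps must all pass before the usage and proliferation steps begin — can be sketched as a minimal checklist in Python. The step names paraphrase the paper; the `adoption_progress` function and its predicate interface are illustrative assumptions, not part of the study.

```python
# Illustrative sketch of the five-step adoption procedure as a gate
# sequence. Step wording paraphrases the paper; the function itself
# is an assumption added for clarity.
ADOPTION_STEPS = [
    "define the project problem",
    "confirm deep learning is the right methodology",
    "confirm a deep learning framework is the right tool",
    "use the framework in the enterprise",
    "spread the framework within the enterprise",
]

def adoption_progress(step_passed):
    """Return the steps cleared so far. Each step is a gate: the
    procedure stops at the first step that does not pass, so the
    usage/proliferation steps (4-5) are unreachable until all three
    pre-consideration steps (1-3) are satisfied."""
    cleared = []
    for step in ADOPTION_STEPS:
        if not step_passed(step):
            break
        cleared.append(step)
    return cleared
```

For example, an organization that has defined its problem and confirmed the methodology, but not yet chosen a framework, has cleared only the first two gates.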

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in data mining is the size of the data, as most data sets today have huge volumes. Streams of data are normally accumulated into data storages or databases; transactions on the internet, on mobile devices, and in ubiquitous environments produce streams of data continuously. Some data sets are buried unused inside huge data storages because of their size, while others are lost as soon as they are created because, for various reasons, they are never saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in the study of data mining. Stream data is a data set continuously accumulated into data storage from a data source; in many cases its size grows increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money, and time. These characteristics of stream data make it difficult and expensive to store all the stream data accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from each data set in the stream and accumulated into a master rule set storage, which also serves as a model for real-time decision making. One main advantage of this method is that it requires much smaller storage space than the traditional method of saving the whole data set. Another advantage is that the accumulated rule set is used as a prediction model: prompt response to user requests is possible at any time, because the rule set is always ready to make decisions. This makes real-time decision making possible, which is the greatest advantage of this method.
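The accumulation scheme described above — mine a rule set from each stream chunk, merge it into a master rule set, and answer prediction requests from the master set at any time — can be sketched roughly as follows. The rule representation, the support-count merging, and the vote-by-support prediction are illustrative assumptions; the paper does not specify these details.

```python
# Rough sketch of accumulating per-chunk rule sets into a master rule
# set (all names and the merging/voting scheme are assumptions).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    antecedent: frozenset  # conditions, e.g. {"trend=up", "volume=high"}
    consequent: str        # predicted label, e.g. "rise"

@dataclass
class MasterRuleSet:
    # Maps each rule to a support count instead of storing raw stream data,
    # which is what keeps the storage footprint small.
    rules: dict = field(default_factory=dict)

    def accumulate(self, chunk_rules):
        # Merge rules mined from the latest stream chunk; a rule seen
        # again in a later chunk just gains support.
        for r in chunk_rules:
            self.rules[r] = self.rules.get(r, 0) + 1

    def predict(self, features):
        # The master set doubles as a real-time prediction model:
        # fire every rule whose antecedent matches, vote by support.
        votes = {}
        for r, support in self.rules.items():
            if r.antecedent <= set(features):
                votes[r.consequent] = votes.get(r.consequent, 0) + support
        return max(votes, key=votes.get) if votes else None
```

Because only rules and counts are kept, storage stays bounded by the number of distinct rules rather than the (ever-growing) size of the stream, and `predict` can be called at any moment without re-mining.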
Based on theories of ensemble approaches, combining many different models can produce a prediction model with better performance. The consolidated rule set covers the whole data set, while the traditional sampling approach covers only part of it. This study uses stock market data, which is a heterogeneous data set because the characteristics of the data vary over time: the indexes can fluctuate whenever an event influences the stock market, so the variance of the values in each variable is large compared to that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous one, as it is harder to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with that of the suggested method. The first approach induces a rule set from the recent data set to predict the new data set. The second induces a rule set from all the data accumulated since the beginning every time a new data set must be predicted. We found that neither performs as well as the accumulated rule set method. Furthermore, the study experiments with different prediction models: the first builds a prediction model using only the more important rule sets, and the second uses all the rule sets, assigning weights to the rules based on their performance. The second approach shows better performance than the first. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. One limitation is that the method was applied only to stock market data; a more dynamic real-time stream data set is desirable for its further application.
Another open problem is that, as the number of rules grows over time, the method must efficiently manage special rules such as redundant or conflicting ones.
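The better-performing variant above — keeping all rules but weighting each by its observed performance, rather than keeping only the "important" rules — can be sketched as weighted voting. The weighting and update scheme below (nudging a rule's weight toward 1 on a correct firing and toward 0 on an incorrect one) is an assumption for illustration; the paper does not state its exact formula.

```python
# Hedged sketch of performance-weighted rule voting; the update rule
# and learning rate are assumptions, not taken from the paper.
def weighted_predict(rules, features):
    """rules: iterable of (antecedent_set, consequent, weight) triples.
    Every matching rule votes with its weight; ties in coverage are
    broken by accumulated weight rather than rule count."""
    votes = {}
    for antecedent, consequent, weight in rules:
        if antecedent <= features:
            votes[consequent] = votes.get(consequent, 0.0) + weight
    return max(votes, key=votes.get) if votes else None

def update_weight(weight, correct, lr=0.1):
    # Move the weight toward 1.0 when the rule predicted correctly,
    # toward 0.0 when it did not, so poor rules fade without being
    # discarded outright.
    target = 1.0 if correct else 0.0
    return weight + lr * (target - weight)
```

This also suggests one pragmatic handle on the redundant/conflicting-rule problem the abstract raises: conflicting rules can coexist, with the weight updates letting the better-performing side of the conflict dominate the vote over time.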