• Title/Summary/Keyword: Computing Knowledge Assessment (컴퓨팅지식평가)


Development and application of supervised learning-centered machine learning education program using micro:bit (마이크로비트를 활용한 지도학습 중심의 머신러닝 교육 프로그램의 개발과 적용)

  • Lee, Hyunguk;Yoo, Inhwan
    • Journal of The Korean Association of Information Education / v.25 no.6 / pp.995-1003 / 2021
  • As the need rises for artificial intelligence (AI) education, which will be at the core of the coming intelligent information society, attention is also being paid at the national level by including AI-related content in the curriculum. In this study, the PASPA education program was presented to enhance students' creative problem-solving ability while solving everyday problems through supervised machine learning, and micro:bit, a physical computing tool, was used to enhance the learning effect. The teaching and learning process of the PASPA program consists of five steps: Problem Recognition, Argument, Setting Data Standards, Programming, and Application and Evaluation. Applying the program to students confirmed that creative problem-solving ability improved, with significant differences in the sub-areas of domain-specific knowledge and thinking and of critical and logical thinking.
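A minimal sketch of the kind of supervised-learning step the program centers on, assuming accelerometer readings have been logged from a micro:bit into labeled samples; the data values, labels, and the k-nearest-neighbors classifier are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: classify device poses from logged micro:bit
# accelerometer readings with a supervised k-nearest-neighbors model.
# The readings and class labels below are made up for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Each sample: (x, y, z) accelerometer reading logged from the micro:bit.
X_train = [
    (0, 0, -1024),    # lying flat
    (10, 5, -1000),   # lying flat
    (-1020, 30, 15),  # tilted left
    (-990, 20, 40),   # tilted left
]
y_train = ["flat", "flat", "left", "left"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# "Application and evaluation" step: predict the pose of a new reading.
print(model.predict([(-1005, 10, 25)]))  # expected: ['left']
```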

The Current State and Tasks of Citizen Science in Korea (한국 시민과학의 현황과 과제)

  • Park, Jin Hee
    • Journal of Science and Technology Studies / v.18 no.2 / pp.7-41 / 2018
  • Citizen science, which originated in data-collecting activities organized by governmental institutes and scientific associations, has been implemented in various forms of collaboration with scientists. Its themes have extended from ecology to astronomy, distributed computing, and particle physics. Citizen science can contribute to the advancement of science through cost-effective research based on volunteer data collection, and it enhances the public understanding of science by increasing participants' knowledge. Community-led citizen science projects can raise public awareness of environmental problems and promote participation in solving them, and projects grounded in local tacit knowledge can benefit local environmental policy making and implementation. These social values have led many countries to adopt policies that promote citizen science, and the Korean government has also introduced some citizen science projects. However, there are obstacles the government must overcome to promote citizen science, such as low participation by both citizens and scientists. It is important that scientists come to recognize the value of citizen science through successful government-driven projects, and that the tools used to evaluate scientific careers be modified to encourage scientists' participation. Project management should be well planned to strengthen citizen participation. The government should prepare an open-data policy that supports the data reliability of community-led monitoring projects, and it would also be desirable to create a citizen science network for sharing best practices.

Development of a Maker Education Program Using Cement and Mold for Middle School Students and Effect on Convergence Ability for Creativity (시멘트와 거푸집을 이용한 중학교 메이커 교육 프로그램이 창의융합 역량에 미치는 효과)

  • Kim, Seong-Soo;Yoo, Hyun-Seok
    • Journal of the Korea Convergence Society / v.10 no.6 / pp.129-138 / 2019
  • Maker education has mainly centered on programs using digital devices, and programs in which students can quickly realize their creative ideas in various shapes and make them by hand have been insufficient. In this study, we therefore developed a maker education program using cement and molds and analyzed its effect on students' convergence ability for creativity. In the preparation stage, educational use cases of cement and molds and the study objectives and contents were extracted through a literature review. In the development stage, teaching-learning materials were developed, and validity and evaluation tools for measuring convergence ability for creativity were selected. In the implementation stage, expert validity tests of the teaching-learning materials and of the convergence-ability test were conducted. In the evaluation stage, the program's effects on the whole and sub-areas of convergence ability for creativity were analyzed. A t-test over the whole area of convergence ability for creativity showed that students who took the maker education program exhibited a significant change, and the results for the teaching-learning materials showed positive responses in communication, cooperation ability, knowledge, and humanism.

Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems / v.15 no.4 / pp.1-22 / 2009
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration both within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web service repositories not only be well-structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is concomitantly growing. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed. Unfortunately, most existing solutions are either too rudimentary to be useful or too domain-dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specification in WSDL documents. We believe that this is one of the first attempts at applying data mining techniques in the Web service discovery domain. Our proposed approach has several appealing features: (1) it minimizes the requirement of prior knowledge from both service consumers and publishers; (2) it avoids exploiting domain-dependent ontologies; and (3) it is able to visualize the semantic relationships among Web services. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service registries. We report on some preliminary results demonstrating the efficacy of the proposed approach.
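A minimal sketch of the general idea, assuming WSDL documents have already been reduced to bags of terms (operation and message names); TF-IDF vectors with k-means stand in for the paper's string matching plus unsupervised neural network, so the service names and term lists here are illustrative assumptions.

```python
# Hypothetical sketch: group Web services by the terms appearing in their
# WSDL descriptions. TF-IDF + k-means substitute for the paper's string
# matching and unsupervised neural network; the data is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Terms extracted from WSDL operation/message names (illustrative only).
services = {
    "WeatherService": "get temperature forecast city weather",
    "ClimateService": "get weather forecast region temperature",
    "StockService":   "get stock quote ticker price",
    "FinanceService": "get price quote symbol stock",
}

names = list(services)
vectors = TfidfVectorizer().fit_transform(services.values())

# Pairwise similarity between service descriptions.
print(cosine_similarity(vectors).round(2))

# Organize the services into two clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(dict(zip(names, labels)))
```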


A Multi-Middleware Bridge for Dynamic Extensibility and Load Balancing in Home Network Environments (홈 네트워크 환경에서의 동적 확장성과 부하분산을 위한 다중 미들웨어 브리지)

  • Kim, Youn-Woo;Jang, Hyun-Su;Song, Chang-Hwan;Eom, Young-Ik
    • The KIPS Transactions:PartA / v.16A no.4 / pp.263-272 / 2009
  • To implement ubiquitous computing environments with smart home infrastructures, various research efforts on home networks have been carried out by research institutes and companies. Because a variety of home network middlewares have been developed recently, standardization of home network middleware has been delayed, which calls for a middleware bridge that solves the interoperability problem among heterogeneous middlewares. Research on interoperability schemes and the development of various bridges are in progress, such as one-to-one bridges supporting interoperability between two middlewares and one-to-many bridges supporting interoperability among multiple middlewares. However, existing systems and schemes do not consider the dynamic extensibility and performance that smart home environments particularly require. A middleware bridge should provide bridge extensibility with zero configuration for non-expert users, and it should provide a load-balancing scheme for efficient and proper traffic distribution. In this paper, we propose a Multi-Middleware Bridge (MMB) for dynamic extensibility and load balancing in home network environments. MMB provides bridge scalability and load balancing through a distributed system structure. We also verify its interoperability and bridge extensibility and evaluate the performance of the load-balancing algorithm.
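A minimal sketch of the load-balancing idea only, not MMB's actual algorithm: interoperation requests are dispatched to the least-loaded bridge instance. All class and method names here are hypothetical, and the middleware names are just examples.

```python
# Hypothetical sketch: dispatch bridging requests to the bridge instance
# with the fewest pending messages (a simple least-loaded strategy).
class BridgeInstance:
    def __init__(self, name):
        self.name = name
        self.pending = 0

    def translate(self, message, source, target):
        self.pending += 1
        # ... convert the message from the source middleware's format
        # (e.g., UPnP) to the target's (e.g., Jini) and forward it ...
        self.pending -= 1
        return f"{self.name} bridged '{message}' from {source} to {target}"


class LoadBalancer:
    def __init__(self, bridges):
        self.bridges = bridges

    def dispatch(self, message, source, target):
        # Pick the bridge instance with the fewest pending messages.
        bridge = min(self.bridges, key=lambda b: b.pending)
        return bridge.translate(message, source, target)


balancer = LoadBalancer([BridgeInstance("bridge-1"), BridgeInstance("bridge-2")])
print(balancer.dispatch("power-on", source="UPnP", target="Jini"))
```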

Scalable RDFS Reasoning Using the Graph Structure of In-Memory based Parallel Computing (인메모리 기반 병렬 컴퓨팅 그래프 구조를 이용한 대용량 RDFS 추론)

  • Jeon, MyungJoong;So, ChiSeoung;Jagvaral, Batselem;Kim, KangPil;Kim, Jin;Hong, JinYoung;Park, YoungTack
    • Journal of KIISE / v.42 no.8 / pp.998-1009 / 2015
  • In recent years, there has been growing interest in RDFS inference for building rich knowledge bases. However, it is difficult to improve inference performance over large data using a single machine, so researchers are investigating RDFS inference engines for distributed computing environments. Existing engines, however, cannot process data in real time, are difficult to implement, and handle repetitive tasks poorly. To overcome these problems, we propose a method for constructing an in-memory distributed inference engine that uses a parallel graph structure. An ontology based on the triple structure naturally forms a graph, so it is intuitive to design a graph-structure-based inference engine; moreover, the RDFS inference rules can be implemented with the operators of the graph structure, allowing the engine to be designed around the graph rather than around data tables. We evaluate the proposed engine on the LUBM1000 and LUBM3000 datasets to test inference speed. The results indicate that the proposed in-memory distributed inference engine is about 10 times faster than a storage-based inference engine.
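A minimal, single-machine sketch of two RDFS entailment rules (rdfs11, subclass transitivity, and rdfs9, type propagation) iterated to a fixpoint over an in-memory triple set; the paper realizes this with parallel graph operators on a distributed in-memory platform, so this sequential version and its sample triples are only illustrative.

```python
# Hypothetical sketch: naive fixpoint application of two RDFS rules over
# an in-memory set of (subject, predicate, object) triples.
# rdfs11: (A subClassOf B), (B subClassOf C)  ->  (A subClassOf C)
# rdfs9 : (x type A), (A subClassOf B)        ->  (x type B)
SUBCLASS, TYPE = "rdfs:subClassOf", "rdf:type"

triples = {
    ("GraduateStudent", SUBCLASS, "Student"),
    ("Student", SUBCLASS, "Person"),
    ("alice", TYPE, "GraduateStudent"),
}

changed = True
while changed:
    changed = False
    subclass = [(s, o) for s, p, o in triples if p == SUBCLASS]
    new = set()
    for a, b in subclass:
        for c, d in subclass:
            if b == c:
                new.add((a, SUBCLASS, d))      # rdfs11
    for s, p, o in triples:
        if p == TYPE:
            for a, b in subclass:
                if o == a:
                    new.add((s, TYPE, b))      # rdfs9
    if not new <= triples:
        triples |= new
        changed = True

print(sorted(triples))  # includes ('alice', 'rdf:type', 'Person')
```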

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters / v.12 no.3 / pp.192-207 / 2006
  • Today, the SSL protocol is used as a core part of various computing environments and security systems, but it has several problems because of its operational rigidity. First, the SSL protocol places a considerable burden on CPU utilization, lowering the performance of the security service in encrypted transactions, because it encrypts all data transferred between a server and a client. Second, the protocol can be vulnerable to cryptanalysis because a key with a fixed algorithm is used. Third, it is difficult to add and use new cryptography algorithms. Finally, it is difficult for developers to learn and use the cryptography API (Application Program Interface) for the SSL protocol. We therefore need to address these problems and, at the same time, need a secure and convenient way to operate the SSL protocol and handle data efficiently. In this paper, we propose an SSL component designed and implemented using the CBD (Component Based Development) concept to satisfy these requirements. The SSL component provides not only data encryption services like the SSL protocol but also convenient APIs for developers unfamiliar with security. Further, the SSL component can improve productivity and reduce development cost because it can be reused, and when new algorithms are added or existing ones are changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service at the application layer. We first derive the requirements and then design and implement the SSL component together with the confidentiality and integrity components that support it. All of these components are implemented as EJBs, so data can be handled efficiently by encrypting/decrypting only the selected data, and usability is improved by letting the user choose the data and the mechanism as intended. Our tests and evaluation show that the SSL component is more usable and efficient than the existing SSL protocol, because the growth rate of processing time for the SSL component is lower than that of the SSL protocol.
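A minimal sketch of the component idea in Python rather than EJB: a confidentiality-plus-integrity service hidden behind a tiny API so that the caller can protect only the fields it chooses without touching cryptographic details. The Fernet recipe (AES + HMAC) from the `cryptography` package stands in for the selectable algorithms the paper's components negotiate; all class and method names here are hypothetical.

```python
# Hypothetical sketch: a small "confidentiality and integrity component"
# that application code can call without knowing the cryptographic details.
# Fernet (AES-CBC + HMAC-SHA256) stands in for the selectable algorithms
# described in the paper; the names below are made up for illustration.
from cryptography.fernet import Fernet


class SecureChannelComponent:
    def __init__(self, key=None):
        self.key = key or Fernet.generate_key()
        self._fernet = Fernet(self.key)

    def protect(self, plaintext):
        """Encrypt and authenticate only the selected payload."""
        return self._fernet.encrypt(plaintext)

    def unprotect(self, token):
        """Verify integrity and decrypt; raises InvalidToken on tampering."""
        return self._fernet.decrypt(token)


# Only the sensitive field is encrypted, mirroring the selective
# encryption the component offers instead of encrypting all traffic.
channel = SecureChannelComponent()
record = {"user": "alice", "card_no": channel.protect(b"1234-5678-9012-3456")}
print(channel.unprotect(record["card_no"]))
```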

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • A Convolutional Neural Network (ConvNet) is a powerful class of deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s, but at that time neural networks were not widely used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes the feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still challenging. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, which carries more information about the image; when the three fully connected layer features are concatenated, the resulting representation has 9,192 (4,096 + 4,096 + 1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy, so in the third step we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately and the performance of transfer learning is improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. The experiments demonstrate the importance of feature selection for the multiple-layer representation. Our approach achieved 75.6% accuracy compared with 73.9% for the FC7 layer on Caltech-256, 73.1% compared with 69.2% for the FC8 layer on VOC07, and 52.2% compared with 48.7% for the FC7 layer on SUN397. We also show that the proposed approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
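A minimal sketch of the pipeline under stated assumptions: a pre-trained AlexNet from torchvision serves as a fixed feature extractor, activations of its three fully connected layers are concatenated into 9,192-dimensional vectors, PCA reduces them, and a linear classifier is trained. Random tensors stand in for a real dataset so the sketch runs end to end; the layer indices follow torchvision's AlexNet layout, and the PCA size and classifier choice are illustrative, not the paper's exact settings.

```python
# Hypothetical sketch: multiple ConvNet layer activations + PCA for
# transfer learning, following the abstract's pipeline.
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()


@torch.no_grad()
def extract_multi_layer_features(images):
    """Concatenate fc6, fc7, fc8 activations (4096 + 4096 + 1000 = 9192 dims)."""
    x = alexnet.avgpool(alexnet.features(images)).flatten(1)
    feats = []
    for i, layer in enumerate(alexnet.classifier):
        x = layer(x)
        if i in (1, 4, 6):  # the three fully connected (Linear) layers
            feats.append(x)
    return torch.cat(feats, dim=1).numpy()


# Dummy data stands in for a real, preprocessed dataset (N, 3, 224, 224).
train_images, test_images = torch.randn(8, 3, 224, 224), torch.randn(4, 3, 224, 224)
train_labels, test_labels = [0, 1] * 4, [0, 1, 0, 1]

X_train = extract_multi_layer_features(train_images)   # shape (8, 9192)
X_test = extract_multi_layer_features(test_images)

# PCA drops redundant/noisy dimensions; with a real dataset a few hundred
# components would typically be kept instead of 4.
pca = PCA(n_components=4).fit(X_train)
clf = LogisticRegression(max_iter=1000).fit(pca.transform(X_train), train_labels)
print("accuracy:", clf.score(pca.transform(X_test), test_labels))
```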