• Title/Summary/Keyword: Generate Data


Concept and Characteristics of Intelligent Science Lab (지능형 과학실의 개념과 특징)

  • Hong, Oksu;Kim, Kyoung Mi;Lee, Jae Young;Kim, Yool
    • Journal of The Korean Association For Science Education / v.42 no.2 / pp.177-184 / 2022
  • This article aims to explain the concept and characteristics of the 'Intelligent Science Lab', which has been promoted nationwide in Korea since 2021. The Korean Ministry of Education creates a master plan containing a vision for science education every five years. The most recently announced '4th Master Plan for Science Education (2020-2024)' emphasizes the policy of setting up an 'intelligent science lab' in all elementary and secondary schools as an online and offline space for scientific inquiry using advanced technologies such as the Internet of Things and augmented and virtual reality. The 'Intelligent Science Lab' project is being pursued in two main directions: (1) developing an online platform named 'Intelligent Science Lab-ON' that supports science inquiry classes, and (2) building science lab spaces in schools that encourage active student participation while utilizing the online platform. This article presents the key features of the 'Intelligent Science Lab-ON' and the characteristics of the intelligent science lab spaces newly built in schools. Furthermore, it introduces inquiry-based science learning programs developed for intelligent science labs. These programs include scientific inquiry activities in which students generate and collect data ('data generation' type), utilize datasets provided by the online platform ('data utilization' type), or utilize open and public data sources ('open data source' type). The Intelligent Science Lab project is expected not only to encourage students to engage in scientific inquiry that solves individual and social problems based on real data, but also to contribute to presenting a model of online and offline linked scientific inquiry lessons required in the post-COVID-19 era.

A study on the Generation Method of Aircraft Wing Flexure Data Using Generative Adversarial Networks (생성적 적대 신경망을 이용한 항공기 날개 플렉셔 데이터 생성 방안에 관한 연구)

  • Ryu, Kyung-Don
    • Journal of Advanced Navigation Technology / v.26 no.3 / pp.179-184 / 2022
  • An accurate wing flexure model is required to improve the transfer alignment performance of guided weapon systems mounted on the wing of a fighter aircraft or armed helicopter. To solve this problem, mechanical and stochastic modeling methods have been studied, but their modeling accuracy is too low to be applied to weapon systems. Deep learning techniques, which have been actively studied in recent years, are well suited to nonlinear modeling. However, operating a fighter aircraft to secure the large amount of data needed for deep-learning modeling is practically difficult. In this paper, a generative adversarial network (GAN) was used to generate a large number of flexure data samples that are similar to the actual flexure data, and it was confirmed that the generated data resemble the actual data by utilizing similarity measures, which quantify how alike two data objects are.
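
The abstract names a generative adversarial network as the data-generation tool but does not describe its architecture or training setup. Below is a minimal sketch of the GAN idea for 1-D flexure-like samples; the feature count, latent size, network widths, and optimizer settings are assumptions for illustration, not the paper's configuration.

```python
# Minimal GAN sketch for 1-D flexure-like samples (PyTorch). All sizes and
# hyperparameters are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

FEATURES = 3   # assumed: flexure measured about three axes
LATENT = 16    # assumed latent dimension

def mlp(sizes, act):
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(act())
    return nn.Sequential(*layers)

def train_gan(real_data: torch.Tensor, epochs: int = 200, batch: int = 64):
    G = mlp([LATENT, 64, FEATURES], nn.ReLU)      # generator
    D = mlp([FEATURES, 64, 1], nn.LeakyReLU)      # discriminator (logit output)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    for _ in range(epochs):
        real = real_data[torch.randint(0, real_data.size(0), (batch,))]
        fake = G(torch.randn(batch, LATENT))
        # discriminator step: real -> 1, generated -> 0
        d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # generator step: try to make the discriminator label fakes as real
        g_loss = bce(D(fake), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G

# usage: G = train_gan(torch.tensor(measured_flexure, dtype=torch.float32))
#        synthetic = G(torch.randn(1000, LATENT)).detach()
```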

Factors Clustering Approach to Parametric Cost Estimates And OLAP Driver

  • JaeHo, Cho;BoSik, Son;JaeYoul, Chun
    • International conference on construction engineering and project management / 2009.05a / pp.707-716 / 2009
  • The role of the cost modeller is to facilitate the design process through the systematic application of cost factors, so as to maintain a sensible and economic relationship between cost, quantity, utility and appearance, and thus help achieve the client's requirements within an agreed budget. There is a considerable body of research on early-design-stage cost estimates focused on improving accuracy or identifying impact factors. Related research to date shows that cost estimates are undertaken progressively throughout the design stage, making use of the information available at each phase. In addition, early-design-stage cost estimates must analyze information under various kinds of preconditions before the design matures, because a design can be modified and changed at any point in the process depending on the client's requirements. Parametric cost estimating models have been adopted to support decision making in this changeable early-design environment. These models use similar instances or patterns from historical cases, comprising project information, geographic and design features, and data relevant to quantity or cost. The OLAP technique analyzes subject data from multi-dimensional points of view; it supports querying, analysis, and comparison of the required information through diverse queries, and its data structure matches the multi-view analysis framework well. Accordingly, this study implements a multi-dimensional information system for case-based quantity data related to design information using OLAP technology, and then analyzes the impact factors of quantity by design criteria or parameters of the same meaning. On the basis of the factors examined above, the study generates rules on quantity measures and produces resemblance classes using data-mining clustering. This knowledge base consists of sets of classified data as group patterns, which provide an appropriate foundation for the parametric cost estimating method.
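
As a rough illustration of the clustering step described above (resemblance classes of historical quantity data grouped by design parameters), the sketch below clusters a few invented project records with k-means and uses the matching cluster's mean quantity rate as a parametric estimate. The column names, values, and cluster count are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: form resemblance classes of historical cases by
# design parameters, then estimate a new project's quantity rate from its class.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# hypothetical cases: [gross_floor_area_m2, storeys, span_m, concrete_m3_per_m2]
cases = np.array([
    [12000, 10, 8.4, 0.42],
    [18000, 15, 9.0, 0.47],
    [ 6500,  5, 7.2, 0.38],
    [21000, 18, 9.6, 0.49],
    [ 7000,  6, 7.5, 0.39],
])
design_params = cases[:, :3]    # factors used to form resemblance classes
quantity_rate = cases[:, 3]     # quantity measure to be estimated

scaler = StandardScaler().fit(design_params)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaler.transform(design_params))

# estimate a new project's quantity rate from the mean of its resemblance class
new_project = np.array([[15000, 12, 8.8]])
cluster = km.predict(scaler.transform(new_project))[0]
estimate = quantity_rate[km.labels_ == cluster].mean()
print(f"resemblance class {cluster}: estimated concrete rate = {estimate:.2f} m3/m2")
```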

Accuracy Assessment of Reservoir Depth Measurement Data by Unmanned Boat using GIS (GIS를 이용한 무인보트의 저수지 수심측정자료 정확도 평가)

  • Kim, Dae-Sik
    • Journal of Korean Society of Rural Planning / v.30 no.3 / pp.75-84 / 2024
  • This study developed a procedure and method for assessing the accuracy of unmanned-boat survey data, based on the water depth data of Misan Reservoir measured by manned and unmanned boats in 2009 by the Korea Rural Community Corporation. In the first step, the study devised a method to extract the contour map from NGIS data in AutoCAD so that the reservoir boundary map, used to set the survey range of the reservoir water depth and to test survey accuracy, could be generated easily. The coordinate systems of the manned- and unmanned-boat survey data were also unified using ArcGIS to provide a common basis for the accuracy assessment. In the assessment, the spatial correlation coefficient between the grid maps of the two measurement results was 0.95, showing high pattern similarity, although the average error was high at 78 cm. For a more detailed analysis, the study randomly generated a 3,250 m transverse profile route (PR) and extracted the gridded water depth values along it. In the analysis of the extracted depth data on the PR, the mean error of the unmanned-boat measurements was 73.18 cm and the standard deviation of the error was 55 cm compared with the manned boat. These values were set as the standards for correcting the unmanned-boat measurement data by average shift and noise removal. After correcting the unmanned-boat measurements with these values, the study obtained highly accurate results: a reservoir water depth versus surface area curve with R2 = 0.97 and a water depth versus storage volume curve with R2 = 0.999.
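
A minimal sketch of the correction described above (average shift plus noise removal of the unmanned-boat depths relative to the manned-boat reference) follows. The profile values, noise level, and moving-average window are invented for illustration; only the 73 cm / 55 cm figures echo the abstract.

```python
# Sketch of the correction step: remove the systematic bias (average shift) and
# apply simple moving-average noise removal to the unmanned-boat depths.
import numpy as np

def correct_unmanned(manned_cm: np.ndarray, unmanned_cm: np.ndarray, window: int = 5):
    error = unmanned_cm - manned_cm
    mean_shift = error.mean()             # e.g. ~73 cm average difference in the study
    shifted = unmanned_cm - mean_shift    # remove the systematic bias
    kernel = np.ones(window) / window
    smoothed = np.convolve(shifted, kernel, mode="same")   # crude noise removal
    return smoothed, mean_shift, error.std()

# usage with synthetic profile data (hypothetical depths in cm)
rng = np.random.default_rng(1)
x = np.linspace(0, 3250, 200)                  # 3,250 m transverse profile
manned = 500 + 100 * np.sin(x / 400)
unmanned = manned + 73 + rng.normal(0, 55, x.size)
corrected, shift, sigma = correct_unmanned(manned, unmanned)
print(f"applied shift {shift:.1f} cm, error std {sigma:.1f} cm")
```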

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.109-122 / 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and social problem solving by analyzing data that are openly available or collected directly. In Korea, various companies and individuals are attempting big data analysis, but it is difficult even at the initial stage because of limited big data disclosure and collection difficulties. System improvements for big data activation and big data disclosure services are being carried out in various ways in Korea and abroad, mainly as services for opening public data such as the domestic Government 3.0 portal (data.go.kr). In addition to these government efforts, services that share data held by corporations or individuals are running, but it is difficult to find useful data because so little is shared. Moreover, big traffic problems can occur because the entire dataset must be downloaded and examined just to grasp the attributes and basic information of the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the sharing problem. Pre-analysis is a concept proposed in this paper to address the difficulty of sharing big data; it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved by providing information that conveys the properties and characteristics of a dataset when a data user searches for it. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may arise when the original data are disclosed can be mitigated, enabling big data sharing between the data provider and the data user. Second, it is necessary to quickly generate appropriate preprocessing results according to the disclosure level or network status of the raw data and to provide those results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing the data requested by a user, it reduces the data to a size transferable over the current network before transmission so that no big traffic occurs. This paper presents various data sizes according to the disclosure level determined through pre-analysis, and this method is expected to produce low traffic volume compared with the conventional approach of sharing only raw data across many systems. The paper describes how to solve the problems that occur when big data are released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests and consists of a Server Agent and a Client Agent, deployed on the server and client sides respectively. The Server Agent, required by the data provider, performs pre-analysis of the big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data; it also performs fast and efficient preprocessing through distributed big data processing and continuously monitors network traffic. The Client Agent is placed on the data user's side. It can search the big data through the Data Descriptor, which is the result of the pre-analysis, and quickly locate the desired data, which can then be requested from the server and downloaded according to the disclosure level. The Server Agent and Client Agent are separated so that data published by a provider can be used by a user. In particular, the paper focuses on big data sharing, distributed big data processing, and the big traffic problem; it constructs the detailed modules of the client-server model and presents the design method of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data; by publishing the newly processed data through the Server Agent, the data user takes on the role of data provider. Likewise, a data provider can obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis on the sample data. In this way, raw data are processed and the processed big data are utilized by users, naturally forming a shared environment. The roles of data provider and data user are not fixed, providing an ideal sharing service in which everyone can be both a provider and a user. The client-server model thus solves the big data sharing problem, provides a free sharing environment for secure big data disclosure, and makes it easy to find big data.
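
As a conceptual sketch of the Server Agent's pre-analysis step, the function below assembles a "Data Descriptor" from summary statistics and a small sample so that a data user can inspect a dataset without downloading it. The paper performs this with distributed Spark processing; plain pandas is used here only to keep the example self-contained, and the field layout of the descriptor is an assumption.

```python
# Conceptual sketch: build a Data Descriptor (summary + sample) for a dataset so
# that users can judge its contents without transferring the raw data.
import json
import pandas as pd

def build_data_descriptor(df: pd.DataFrame, sample_rows: int = 5) -> dict:
    return {
        "n_rows": int(len(df)),
        "columns": {
            col: {
                "dtype": str(df[col].dtype),
                "n_missing": int(df[col].isna().sum()),
                "summary": (df[col].describe().to_dict()
                            if pd.api.types.is_numeric_dtype(df[col])
                            else df[col].value_counts().head(3).to_dict()),
            }
            for col in df.columns
        },
        "sample_data": df.head(sample_rows).to_dict(orient="records"),
    }

# usage: the Client Agent would receive this JSON instead of the raw file
raw = pd.DataFrame({"region": ["A", "B", "A"], "sales": [120, 95, 130]})
print(json.dumps(build_data_descriptor(raw), indent=2, default=str))
```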

Automatic Training Corpus Generation Method of Named Entity Recognition Using Knowledge-Bases (개체명 인식 코퍼스 생성을 위한 지식베이스 활용 기법)

  • Park, Youngmin;Kim, Yejin;Kang, Sangwoo;Seo, Jungyun
    • Korean Journal of Cognitive Science / v.27 no.1 / pp.27-41 / 2016
  • Named entity recognition classifies elements in text into predefined categories and is used in various applications that receive natural language input. In this paper, we propose a method that can generate named entity training corpora automatically using knowledge bases. We apply two different methods to generate corpora depending on the knowledge base. One method attaches named entity labels to text data using Wikipedia; the other crawls data from the web and labels named entities in the web text using Freebase. We conduct two experiments to evaluate the corpus quality and our proposed method for automatically generating named entity recognition corpora. We randomly extract sentences from the two corpora, called the Wikipedia corpus and the Web corpus, and label them to validate both automatically labeled corpora. We also report the performance of a named entity recognizer trained on the corpora generated by our proposed method. The results show that the proposed method adapts well to new corpora that reflect diverse sentence structures and the newest entities.
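
The core idea of attaching entity labels automatically from a knowledge base can be illustrated with a toy gazetteer lookup, shown below; the entries, BIO tagging scheme, and longest-match strategy are illustrative stand-ins for the paper's Wikipedia- and Freebase-based labeling, not a reproduction of it.

```python
# Toy automatic labelling: tag entity mentions in a sentence using a small
# knowledge-base-derived dictionary (gazetteer) and BIO tags.
import re

# hypothetical knowledge-base entries: surface form -> entity category
gazetteer = {
    "Seoul": "LOCATION",
    "Samsung Electronics": "ORGANIZATION",
    "Lee Sejong": "PERSON",
}

def label_sentence(sentence: str):
    """Return (token, BIO-tag) pairs using longest-match dictionary lookup."""
    tags = []
    # match longer surface forms first so multi-word entities win
    pattern = "|".join(sorted(map(re.escape, gazetteer), key=len, reverse=True))
    pos = 0
    for m in re.finditer(pattern, sentence):
        tags += [(tok, "O") for tok in sentence[pos:m.start()].split()]
        entity_tokens = m.group().split()
        category = gazetteer[m.group()]
        tags.append((entity_tokens[0], f"B-{category}"))
        tags += [(tok, f"I-{category}") for tok in entity_tokens[1:]]
        pos = m.end()
    tags += [(tok, "O") for tok in sentence[pos:].split()]
    return tags

print(label_sentence("Samsung Electronics opened a new campus in Seoul ."))
```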

A Practical standard Air Flow Generator System to Calibrate and Compare Performance of Two Different Respiratory Air Flow Measurement Modules (호흡기류 계측모듈의 교정과 성능 비교를 위한 실용적인 표준기류 생성 시스템)

  • Lee, In-Kwang;Park, Mi-Jung;Lee, Sang-Bong;Kim, Kyoung-Ok;Cha, Eun-Jong;Kim, Kyung-Ah
    • Journal of Biomedical Engineering Research / v.36 no.4 / pp.115-122 / 2015
  • A standard air flow generator system was developed to generate air flows of various levels applied simultaneously to two different air flow transducer modules. The axes of two identical standard syringes for spirometer calibration were connected to each other and driven by a servo-motor. A linear displacement transducer was also connected to the syringe axis to accurately acquire the volume change signal. The user can select either a sinusoidal or a square waveform of volume change and manually input any stroke volume and maximal flow rate level within 0-3 l and 0-15 l/s, respectively. Various volume and flow levels were input to operate the system; the volume signal was then acquired and numerically differentiated to obtain the air flow signal. The measured volumes and maximal air flow rates were compared with the user input data. The relative errors between the user-input and measured stroke volumes were all within 0.5%, demonstrating very accurate driving of the system. For the maximal flow rate, relatively large errors were observed when the syringe was driven very fast within a very short time duration; however, except for these few data points, most measured flow rates showed relative errors of approximately 2%. When the measured and user-input stroke volume and maximal flow rate data were analyzed by linear regression, the correlation coefficients were satisfactorily higher than 0.99 (p < 0.0001). These results demonstrate that the servo-motor controls the syringes accurately enough to generate standard air flows. Therefore, the present system is practical for calibration as well as for performance evaluation and comparison of two different air flow transducer modules.
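
To make the processing step concrete, the sketch below numerically differentiates a sinusoidal volume signal to obtain the flow signal and computes the relative errors against the user-input stroke volume and the analytic peak flow. The sampling rate, stroke volume, and period are example values, not the paper's test settings.

```python
# Numerical differentiation of a sinusoidal volume signal to obtain flow, with
# relative-error checks against the (example) user-input settings.
import numpy as np

fs = 1000.0                                   # assumed sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
stroke_volume_l, period_s = 2.0, 2.0          # example user-input settings

# sinusoidal volume displacement of the syringe: 0 -> 2 l -> 0 over one period
volume = 0.5 * stroke_volume_l * (1 - np.cos(2 * np.pi * t / period_s))
flow = np.gradient(volume, 1 / fs)            # numerical differentiation [l/s]

measured_volume = volume.max() - volume.min()
measured_peak_flow = np.abs(flow).max()
expected_peak_flow = np.pi * stroke_volume_l / period_s   # derivative of the sinusoid

print(f"stroke volume error: {100 * abs(measured_volume - stroke_volume_l) / stroke_volume_l:.3f} %")
print(f"peak flow error:     {100 * abs(measured_peak_flow - expected_peak_flow) / expected_peak_flow:.3f} %")
```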

Design of Convergence Contents information quality of u-convergence tourist information3.0 using flow Theory (플로우 이론을 이용한 u-융복합 관광정보3.0 의 융복합 콘텐츠 정보품질 설계)

  • Sun, Su-Kyun;Lee, Seung-woo
    • Journal of Digital Convergence / v.13 no.9 / pp.191-199 / 2015
  • In this paper, we propose a u-convergence Tourist Information 3.0 system using flow theory. The u-convergence Tourist Information 3.0 system generates a sense of u-challenge and u-skill in order to maximize tourists' enjoyment. It adjusts the rating of the Convergence Contents Information Quality (CCIQ) analysis against the sense of challenge so as to maximize that enjoyment. The CCIQ analysis of the antecedents of content continuity, the sense of u-challenge, and the adequacy of tourist synchronization can generate data that can be analyzed. The content information quality rating, as the leading factor in tourists' u-skill mastery, can generate data on usability. The result is a meta-model in which the content information reaches the quality that best maximizes enjoyment. Designing the information quality of tourist information content around a sense of u-challenge and skill has the advantage of identifying, from the generated data, what gives tourists pleasure. By applying it to future national competency standards, the system is expected to maximize enjoyment of the job.

Intertidal DEM Generation Using Waterline Extracted from Remotely Sensed Data (원격탐사 자료로부터 해안선 추출에 의한 조간대 DEM 생성)

  • 류주형;조원진;원중선;이인태;전승수
    • Korean Journal of Remote Sensing / v.16 no.3 / pp.221-233 / 2000
  • Intertidal topography changes continuously owing to morphodynamic processes. Detecting and measuring topographic change in a tidal flat is important both for establishing an integrated coastal area management plan and for sedimentological study. The objective of this study is to generate an intertidal DEM using leveling data and waterlines extracted from optical and microwave remote sensing data acquired over a relatively short period. The waterline is defined as the border between the exposed tidal flat and the water body; a contour of terrain height in the tidal flat is equivalent to a waterline. Satellite images can therefore be used to generate intertidal DEMs over large areas. Extracting the waterline from a SAR image is difficult, partly because of speckle and partly because of the similarity between the signal returned from the sea surface and that from the exposed tidal flat or land. Waterlines in SAR intensity images and coherence maps can be extracted effectively with the MSP-RoA edge detector. From multiple images obtained over a range of tide elevations, a set of heighted waterlines within the intertidal zone can be built up, and a gridded DEM can then be interpolated. We tested the proposed method over Gomso Bay and succeeded in generating an intertidal DEM with relatively high accuracy.
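
The DEM-building step (pooling heighted waterlines from images acquired at different tide levels and interpolating a grid) can be sketched as below; the synthetic waterline geometry, tide levels, and grid resolution are assumptions, and the MSP-RoA edge detection itself is not reproduced.

```python
# Sketch: assign each extracted waterline the tide height at image acquisition,
# pool the points, and interpolate a gridded intertidal DEM.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
points, heights = [], []
for tide_height_m in [-1.5, -0.5, 0.5, 1.5]:          # tide level per image (assumed)
    # synthetic waterline: a contour that migrates seaward as the tide falls
    x = np.linspace(0, 5000, 80)
    y = 2000 + 800 * tide_height_m + 50 * rng.standard_normal(x.size)
    points.append(np.column_stack([x, y]))
    heights.append(np.full(x.size, tide_height_m))

points = np.vstack(points)
heights = np.concatenate(heights)

# interpolate the heighted waterlines onto a regular grid to form the DEM
gx, gy = np.meshgrid(np.linspace(0, 5000, 100), np.linspace(0, 5000, 100))
dem = griddata(points, heights, (gx, gy), method="linear")
print("DEM grid shape:", dem.shape,
      "height range:", np.nanmin(dem), "to", np.nanmax(dem), "m")
```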

A Study on the Win-Loss Prediction Analysis of Korean Professional Baseball by Artificial Intelligence Model (인공지능 모델에 따른 한국 프로야구의 승패 예측 분석에 관한 연구)

  • Kim, Tae-Hun;Lim, Seong-Won;Koh, Jin-Gwang;Lee, Jae-Hak
    • The Journal of Bigdata / v.5 no.2 / pp.77-84 / 2020
  • In this study, we analyzed win-loss prediction for Korean professional baseball using artificial intelligence models. Based on the models, we predicted the winner of each game as well as each team's final rank in the league, and we developed a website to aid viewers' understanding. Using data from each game's first, third, and fifth innings, we analyzed which model achieves the highest accuracy and the smallest errors, and generated the rankings based on that model. We used predictions for games from May 5, 2020, the season's opening day, to August 30, 2020 to generate the rankings; for the games the Kia Tigers did not play, however, actual game results were used in the data. KNN and AdaBoost were selected as the most optimized machine learning models. As a result, we observe a decreasing trend in the ranking error of the predicted results as the season progresses. The deep learning model recorded an accuracy of 89% and shows the same decreasing trend in ranking error as the machine learning models. We expect this study's results to apply to future KBO predictions as well as to other fields. Broadcasts could be enhanced by posting the predicted winning percentage per inning generated by the AI algorithm, which we expect will bring new interest to KBO fans. Furthermore, the predictions generated at each inning would provide insights to teams so that they can analyze the data and devise successful strategies.
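
As a toy illustration of the model comparison mentioned above, the sketch below trains KNN and AdaBoost classifiers on invented early-inning features and reports their test accuracy; the feature set, data, and hyperparameters are hypothetical and unrelated to the study's actual dataset.

```python
# Toy comparison of KNN and AdaBoost on invented early-inning game features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n = 500
# hypothetical features after the 5th inning: run differential, hits, walks, errors
X = np.column_stack([
    rng.normal(0, 2, n),       # run differential
    rng.poisson(4, n),         # hits
    rng.poisson(2, n),         # walks
    rng.poisson(1, n),         # errors
])
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)   # 1 = home team wins

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("KNN", KNeighborsClassifier(n_neighbors=7)),
                    ("AdaBoost", AdaBoostClassifier(n_estimators=100, random_state=0))]:
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy {acc:.2f}")
```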