• Title/Summary/Keyword: basic data


Estimation of Basic Wind Speeds Reflecting Recent Wind Speed Data (최신 풍속자료를 반영한 기본풍속 산정)

  • Choi, Sang-Hyun;Seo, Kyung-Seok;Sung, Ik-Hyun;Lee, Su-Hyung
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.10 no.1
    • /
    • pp.9-14
    • /
    • 2010
  • Recent increases in the strength and frequency of typhoons due to climate change call for reconsideration of the design wind load in existing design codes for civil engineering structures, in which the basic wind speed is estimated from meteorological data collected through the mid-1990s. In this paper, based on wind speed data recorded at 76 observatories in Korea from 1961 through 2008, the basic wind speeds that can be used in designing civil engineering structures, including buildings and bridges, are estimated by statistical analysis. The wind speed corresponding to a given return period at each location is determined using the Gumbel distribution. The results for the considered locations are compared with the existing design codes. In addition, for design applications, a wind speed map that classifies the country into four basic wind speed zones is proposed based on the resulting basic wind speeds. (A brief illustrative sketch of the Gumbel-based calculation follows this entry.)
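
As a rough illustration of the statistical procedure described above (not code from the paper), the following Python sketch fits a Gumbel distribution to hypothetical annual-maximum wind speeds with SciPy and evaluates the wind speed for a chosen return period; the station data and the 100-year return period are assumptions for the example.

```python
# Sketch: fit a Gumbel distribution to annual-maximum wind speeds and
# estimate the wind speed for a given return period.
# The station data below are hypothetical, not from the paper.
import numpy as np
from scipy import stats

annual_max = np.array([22.1, 25.4, 19.8, 28.3, 24.0, 21.7, 26.5, 23.2,
                       27.9, 20.6])  # m/s, annual maxima at one station

loc, scale = stats.gumbel_r.fit(annual_max)  # location (mu) and scale (beta)

def basic_wind_speed(return_period_years: float) -> float:
    """Wind speed with annual exceedance probability 1/T."""
    p_non_exceedance = 1.0 - 1.0 / return_period_years
    return stats.gumbel_r.ppf(p_non_exceedance, loc=loc, scale=scale)

print(f"100-year wind speed: {basic_wind_speed(100):.1f} m/s")
```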

Exercise Adherence Model of Middle-Aged based on Theory of Self-determination

  • Lee, Miok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.10
    • /
    • pp.143-149
    • /
    • 2018
  • The purpose of this study was to construct and validate an exercise adherence model for middle-aged adults. The model was designed based on self-determination theory. Participants were 215 middle-aged men and women aged 40-60 who had been exercising for more than six months. Data were collected from four large cities in Korea (Seoul, Busan, Gwangju, and Daejeon) using a questionnaire covering basic psychological needs, intrinsic motivation, social support, and exercise adherence. Data were analyzed with SPSS 19.0 and AMOS 20.0. The social support and exercise adherence scales of the questionnaire were partially revised and verified by confirmatory factor analysis. The results were as follows. The model's fit indices were GFI = .938, AGFI = .915, NFI = .912, CFI = .941, and RMSEA = .041, so the model satisfied the fit criteria for a structural equation model. This model, based on self-determination theory, confirmed that basic psychological needs, intrinsic motivation, and social support are important factors in middle-aged adults' exercise adherence. Basic psychological needs and intrinsic motivation had a direct influence on exercise adherence, and social support indirectly influenced exercise adherence through intrinsic motivation. Both basic psychological needs and social support directly affected intrinsic motivation. The most influential factor in middle-aged adults' exercise adherence was intrinsic motivation. In conclusion, intrinsic motivation, such as interest and enjoyment, is important for middle-aged adults to continue exercising, and basic psychological needs are also important for their exercise adherence. The results of this study provide basic data for restoring or maintaining health through continued exercise. Strategies that enhance intrinsic motivation are needed when a chronically ill person must continue long-term exercise.

The Development and Application of the Big Data Analysis Course for the Improvement of the Data Literacy Competency of Teacher Training College Students (예비교사의 데이터 리터러시 역량 증진을 위한 빅데이터 분석 교양강좌의 개발 및 적용)

  • Kim, Seulki;Kim, Taeyoung
    • Journal of The Korean Association of Information Education
    • /
    • v.26 no.2
    • /
    • pp.141-151
    • /
    • 2022
  • Recently, basic literacy education related to digital literacy and data literacy has been emphasized for students who will live in a rapidly developing digital society. Accordingly, demand for education that improves big data and data literacy competencies as basic knowledge is also increasing in general universities and universities of education. Therefore, this study designed and applied a big data analysis course for pre-service teachers and analyzed its impact on data literacy. Analysis of interest in and understanding of the program confirmed that it was appropriate for the level of pre-service teachers, and there was a significant improvement in competencies in all areas of data literacy: 'knowledge', 'skills', and 'values and attitudes'. It is hoped that the results of this study will contribute to enhancing the data literacy of students and pre-service teachers by supporting systematic research on data literacy education.

Adolescent Girls' Bodice Pattern Fit Using the 3-Dimensional Virtual Clothing System (3차원 가상 착의 시스템을 이용한 여자 청소년용 길 원형 맞음새 연구)

  • Kim, Dohkyung;Chun, Jongsuk
    • Human Ecology Research
    • /
    • v.54 no.3
    • /
    • pp.279-292
    • /
    • 2016
  • This research predicted the fit of basic bodice patterns worn on adolescent girls' 3-dimensional scanned bodies. Six 3-dimensional scanned bodies were selected from the sixth Size Korea data. Each of them had good body posture and represented one of three garment sizes: 79-160, 82-160, and 85-160. Experimental basic bodice patterns were drafted by three basic bodice pattern making methods. The fit of each basic bodice pattern was analyzed with the CLO 3D virtual clothing system. The results showed that the experimental basic bodice patterns did not fit adolescent girls well at the neck, shoulder, and back. The fit of the basic bodice patterns varied by pattern making method and size. Basic bodice pattern A, with waist darts ending above the breast line, showed the best fit among the three types of experimental pattern. Among the three sizes 79-160, 82-160, and 85-160, the size 79-160 basic bodice pattern showed the worst fit for adolescent girls. The results show that the placement and size of the bodice darts affect basic bodice pattern fit. The pattern making method for the size 79-160 basic bodice pattern for adolescent girls should be examined in a future study.

On Mathematical Representation and Integration Theory for GIS Application of Remote Sensing and Geological Data

  • Moon, Woo-Il M.
    • Korean Journal of Remote Sensing
    • /
    • v.10 no.2
    • /
    • pp.37-48
    • /
    • 1994
  • In spatial information processing, particularly in non-renewable resource exploration, the spatial data sets, including remote sensing, geophysical, and geochemical data, have to be geocoded onto a reference map and integrated for the final analysis and interpretation. Application of a computer-based GIS (Geographical Information System or Geological Information System) at some point of the spatial data integration/fusion processing is now a logical and essential step. It should, however, be pointed out that the basic concepts of GIS-based spatial data fusion were developed with insufficient mathematical understanding of the spatial characteristics or quantitative modeling framework of the data. Furthermore, many remote sensing and geological data sets available for exploration projects are spatially incomplete in coverage and introduce spatially uneven information distribution. In addition, the spectral information of many spatial data sets is often imprecise due to digital rescaling. Direct application of GIS systems to spatial data fusion can therefore produce seriously erroneous final results. To resolve this problem, some of the important mathematical information representation techniques are briefly reviewed and discussed in this paper, with consideration of the spatial and spectral characteristics of common remote sensing and exploration data. They include the basic probabilistic approach, the evidential belief function approach (Dempster-Shafer method), and the fuzzy logic approach (an illustrative sketch of the Dempster-Shafer combination rule follows this entry). Even though the basic concepts of these three approaches are different, proper application of the techniques and careful interpretation of the final results are expected to yield acceptable conclusions in each case. Actual tests with real data (Moon, 1990a; An et al., 1991, 1992, 1993) have shown that implementation and application of the methods discussed in this paper consistently provide more accurate final results than most direct applications of GIS techniques.
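
To make the evidential belief function approach concrete, the following is a minimal Python sketch of Dempster's rule of combination for two evidence layers over a two-hypothesis frame; the layer names and mass values are illustrative assumptions, not taken from the paper or its test data.

```python
# Sketch: Dempster's rule of combination for two evidence layers over the
# frame {D, N} (e.g., "deposit" / "no deposit"); THETA denotes ignorance.
# Mass values are illustrative only, not taken from the paper.
from itertools import product

THETA = frozenset({"D", "N"})

def combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

layer1 = {frozenset({"D"}): 0.6, THETA: 0.4}   # e.g., geochemical evidence
layer2 = {frozenset({"D"}): 0.3, frozenset({"N"}): 0.2, THETA: 0.5}
print(combine(layer1, layer2))
```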

Optical Flow Estimation of a Fluid Based on a Physical Model

  • Kim, Jin-Woo
    • Journal of information and communication convergence engineering
    • /
    • v.7 no.4
    • /
    • pp.539-544
    • /
    • 2009
  • Estimation of a 3D velocity field, including occluded parts, without mixing tracer into the fluid had not only never been proposed but was also impossible with conventional computer vision algorithms. In this paper, we propose a new method for estimating the three-dimensional optical flow of a fluid based on a physical model, where some boundary conditions are given from a priori knowledge of the flow configuration. The optical flow is obtained by minimizing the mean square errors of a basic constraint and the matching error terms with respect to the visual data using Euler equations. Here, the Navier-Stokes motion equations and the differences between occluded data and observable data are employed as the basic constraints. We verify the effectiveness of the proposed method by applying the algorithm to simulated data with parts artificially deleted and recovering the missing data. Next, applying the method to a fluid with observable surface data and knowledge of the boundary conditions, we demonstrate that the 3D optical flow is obtained by the proposed algorithm. (A simplified optical flow sketch follows this entry.)
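
As a simplified stand-in for the variational formulation described above (the paper's full method uses Navier-Stokes constraints and occlusion terms that are not reproduced here), the following Python sketch implements a classical Horn-Schunck-style 2D optical flow that minimizes a brightness-constancy term plus a smoothness term; the synthetic images and parameter values are assumptions.

```python
# Sketch: Horn-Schunck style 2D optical flow (brightness constancy +
# smoothness regularization), used here only as a simplified stand-in for
# the paper's physically constrained 3D formulation.
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    Ix = np.gradient(I1, axis=1)          # spatial image gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                          # temporal gradient
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(n_iter):               # iterative Euler-Lagrange update
        u_avg, v_avg = avg(u), avg(v)
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v

# Synthetic example: a sinusoidal pattern shifted by one pixel.
I1 = np.tile(np.sin(np.linspace(0, 4 * np.pi, 64)), (64, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
print(u.mean(), v.mean())
```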

Development of microcomputer-based on-line measurement system. (마이크로컴퓨터를 이용한 온-라인 측정 시스템의 개발)

  • ;;Chung, Myung Kyoon;Lee, Dong In
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.5 no.4
    • /
    • pp.274-283
    • /
    • 1981
  • An inexpensive and very simple microcomputer-aided measurement system has been designed for on-line experiments, which simultaneously performs data acquisition, data recording, calculations with the data, and positioning of the necessary sensors. Interfacing between the microcomputer and the data acquisition board, which consists of an A/D converter, an analog multiplexer, a sample-and-hold circuit, etc., has been done through the IEEE-488 interface port and the parallel user port, both provided by the PET computer's main logic board. Data and control signals are transferred between devices without handshaking. By utilizing the BASIC commands PEEK, POKE, SYS, and USR offered by the PET microcomputer, it is possible to link machine code subroutines into the main BASIC program. This facilitates data transfer, programming, and speedy execution of the program. In addition, an X-Y scanning table has been connected to the system in order to automatically position measuring sensors along a pre-determined path of interest.

Assessment of Improving SWAT Weather Input Data using Basic Spatial Interpolation Method

  • Felix, Micah Lourdes;Choi, Mikyoung;Zhang, Ning;Jung, Kwansue
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.368-368
    • /
    • 2022
  • The Soil and Water Assessment Tool (SWAT) has been widely used to simulate the long-term hydrological conditions of a catchment. Two output variables, outflow and sediment yield, have been widely investigated in the field of water resources management, especially in determining the conditions of ungauged subbasins. The presence of missing data in the weather input data can cause poor representation of the climate conditions in a catchment, especially for large or mountainous catchments. Therefore, in this study, a custom module was developed and evaluated to determine the efficiency of utilizing basic spatial interpolation methods in the estimation of weather input data. The module is written in Python and can be considered a pre-processing module for the SWAT model. The results of this study suggest that utilization of the proposed pre-processing module can improve the simulation results for both outflow and sediment yield in a catchment, even in the presence of missing data. (A short sketch of a basic spatial interpolation of this kind follows this entry.)
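
Since the abstract does not include the module itself, the following Python sketch shows one basic spatial interpolation method of the kind such a pre-processing step could use: inverse-distance weighting to fill a missing daily value at a target station from neighboring stations. The station coordinates and rainfall values are hypothetical.

```python
# Sketch: inverse-distance weighting (IDW) to fill a missing daily value at a
# target weather station from neighboring stations, as one basic spatial
# interpolation a SWAT pre-processing module could apply.
# Coordinates and rainfall values below are hypothetical.
import numpy as np

def idw_fill(target_xy, station_xy, station_values, power=2.0):
    """Estimate the value at target_xy from stations with observed values."""
    station_xy = np.asarray(station_xy, dtype=float)
    values = np.asarray(station_values, dtype=float)
    dists = np.linalg.norm(station_xy - np.asarray(target_xy, float), axis=1)
    if np.any(dists == 0.0):              # target coincides with a station
        return float(values[dists == 0.0][0])
    weights = 1.0 / dists**power
    return float(np.sum(weights * values) / np.sum(weights))

# Hypothetical stations (km grid) and daily rainfall (mm); one gauge is missing.
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 12.0), (9.0, 11.0)]
rain_mm = [12.0, 8.5, 15.2, 10.1]
print(idw_fill((5.0, 6.0), stations, rain_mm))
```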


A Study on Establishment of Time Series Model for Deriving Financial Outlook of Basic Research Support Programs (기초연구지원사업의 재정소요 전망 도출을 위한 시계열 모형 수립 연구)

  • Yun, Sujin;Lee, Sangkyoung;Yeom, Kyunghwan;Shin, Aelee
    • Journal of Technology Innovation
    • /
    • v.27 no.4
    • /
    • pp.21-48
    • /
    • 2019
  • In the basic research field, quantitative expansion has proceeded with active government support, but there are as yet no research or policy data suggesting systematic investment plans or data-based estimates of financial requirements. Therefore, this study predicted the future financial requirements of basic research support programs using a time series prediction model. In order to consider various factors, including the characteristics of the basic research field, we selected the ARIMAX model, which can reflect the effects of multiple explanatory variables, rather than the ARIMA model, which predicts the value of a single variable over time. We compared the predictions of the ARIMAX and ARIMA models for model suitability and found that the ARIMAX model reduces the prediction error rate. Based on the ARIMAX model, we predicted the fiscal spending of basic research support programs for the five years from 2017 to 2021. This study is significant in that it estimates the financial requirements of basic research support programs as a pilot study applying a statistical time series model with multiple variables rather than a single variable. In addition, considering policy trends that emphasize the importance of investment in basic research, such as the current government's national policy task of doubling the basic research budget, the results can be used as reference data in establishing basic research investment strategies. (A short illustrative ARIMAX-style sketch follows this entry.)
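
As a hedged illustration of an ARIMAX-type fit (not the model estimated in the paper), the following Python sketch uses statsmodels' SARIMAX with one exogenous regressor on a synthetic annual series and produces a five-year forecast; the model order, data, and exogenous path are assumptions for the example.

```python
# Sketch: ARIMAX-style fit with statsmodels' SARIMAX on a synthetic annual
# budget series with one exogenous variable. The (p, d, q) order and all
# data are illustrative only, not those estimated in the paper.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
years = pd.period_range("2001", "2016", freq="Y")
exog = pd.Series(np.linspace(100, 180, len(years)), index=years)    # e.g., total R&D budget
budget = 0.05 * exog + rng.normal(0, 0.5, len(years)).cumsum() + 3  # basic research spending

model = SARIMAX(budget, exog=exog, order=(1, 1, 0))
result = model.fit(disp=False)

# Forecast five years ahead, given an assumed path of the exogenous variable.
future_years = pd.period_range("2017", "2021", freq="Y")
future_exog = pd.Series(np.linspace(185, 210, 5), index=future_years)
print(result.forecast(steps=5, exog=future_exog))
```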

Proteomics Data Analysis using Representative Database

  • Kwon, Kyung-Hoon;Park, Gun-Wook;Kim, Jin-Young;Park, Young-Mok;Yoo, Jong-Shin
    • Bioinformatics and Biosystems
    • /
    • v.2 no.2
    • /
    • pp.46-51
    • /
    • 2007
  • In proteomics research using mass spectrometry, a protein database search returns protein information from the peptide sequences that best match the tandem mass spectra. The protein sequence database has been a powerful knowledge base for this protein identification. However, as protein sequence information accumulates, the database becomes huge, and it is now hard to consider all protein sequences in a database search because doing so consumes too much computing time. For high-throughput analysis of the proteome, a non-redundant refined database, such as the IPI human database of the European Bioinformatics Institute, has usually been used. While a non-redundant database can return search results quickly, it misses variation among protein sequences. In this study, we considered proteomics data from the standpoint of protein similarity and used a network analysis tool to build a new analysis method. This method should be able to save computing time in the database search while keeping the sequence variation needed to catch modified peptides. (A short illustrative sketch of a similarity-network reduction follows this entry.)
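
As a minimal illustration of the network-based idea (not the authors' pipeline), the following Python sketch builds a protein similarity network with networkx and keeps one representative per connected component while recording the similar sequences it stands for; the identifiers and similarity scores are hypothetical.

```python
# Sketch: build a protein similarity network and keep one representative per
# connected component, reducing database redundancy while recording which
# similar sequences each representative stands for. Identifiers and scores
# below are hypothetical, not from the authors' pipeline.
import networkx as nx

# (protein_a, protein_b, percent_identity) from a hypothetical all-vs-all search
similar_pairs = [
    ("IPI001", "IPI002", 98.5),
    ("IPI002", "IPI003", 97.2),
    ("IPI010", "IPI011", 99.0),
]

G = nx.Graph()
G.add_weighted_edges_from(similar_pairs)

representatives = {}
for component in nx.connected_components(G):
    rep = sorted(component)[0]            # simple deterministic choice
    representatives[rep] = sorted(component - {rep})

print(representatives)
# {'IPI001': ['IPI002', 'IPI003'], 'IPI010': ['IPI011']}
```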
