• Title/Summary/Keyword: Generate Data

Projection and Analysis of Future Temperature and Precipitation using LARS-WG Downscaling Technique - For 8 Meteorological Stations of South Korea - (LARS-WG 상세화 기법을 적용한 미래 기온 및 강수량 전망 및 분석 - 우리나라 8개 기상관측소를 대상으로 -)

  • Shin, Hyung-Jin;Park, Min-Ji;Joh, Hyung-Kyung;Park, Geun-Ae;Kim, Seong-Joon
    • Journal of The Korean Society of Agricultural Engineers / v.52 no.4 / pp.83-91 / 2010
  • Generally, GCM (General Circulation Model) data from IPCC climate change scenarios are used for future weather prediction. IPCC GCM models predict well at the continental scale but poorly at the regional scale. This paper generates future temperature and precipitation for 8 scattered meteorological stations in South Korea by using the MIROC3.2 hires GCM data and applying the LARS-WG downscaling method. The MIROC3.2 A1B scenario data were adopted because, among the scenarios, their pattern is the most similar to the observed data (1977-2006). The results showed that both future precipitation and temperature increase. The annual temperature of the 2080s increased by 3.8-5.0°C; in particular, the future temperature increased by up to 4.5-7.8°C in the winter period (December-February). The future annual precipitation of the 2020s, 2050s, and 2080s increased by 17.5 %, 27.5 %, and 39.0 %, respectively. From the trend analysis of the projected results, the region above the middle of South Korea showed statistical significance for winter precipitation, and the southern region for summer rainfall.

A Test Data Generation to Raise User-Defined Exceptions in First-Order Functional Programs (주어진 프로그램에서 예외상황을 발생시키는 테스트 데이타 생성 방법)

  • Ryu, Suk-Young;Yi, Kwang-Keun
    • Journal of KIISE:Software and Applications / v.27 no.4 / pp.342-356 / 2000
  • We present a static analysis method to automatically generate test data that raise exceptions in input programs. Using the test data from our analysis, a programmer can check whether the raised exceptions are correctly handled with respect to the program's specification. For a given program, starting from the initial constraint that a particular raise expression should be executed, our analysis derives necessary constraints for its input variable. Correctness of our analysis assures that any value that satisfies the derived constraints for the input variable will activate the designated raise expression. In this paper, we formally present such an analysis for a first-order language with the ML-style exception handling constructs and algebraic data values, prove its correctness, and show a set of examples.
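The backward derivation of input constraints from a `raise` expression can be illustrated with a toy sketch (our own simplification in Python, not the paper's static analysis for an ML-like language; the node tags `if_gt`, `raise`, and `ok` are invented for this example). A program is modeled as a decision tree of threshold tests on a single numeric input; walking the tree accumulates the interval constraint under which each raise fires, so any value in that interval is a test datum that activates the exception.

```python
def paths_to_raise(tree, lo=float("-inf"), hi=float("inf")):
    """tree is ('ok',), ('raise', name), or ('if_gt', k, then_b, else_b),
    where 'if_gt' tests input x > k. Returns (name, lo, hi) triples:
    any input x with lo < x <= hi reaches that raise expression."""
    kind = tree[0]
    if kind == "ok":
        return []
    if kind == "raise":
        # Emit the accumulated path constraint, unless it is empty.
        return [(tree[1], lo, hi)] if lo < hi else []
    _, k, then_b, else_b = tree
    # x > k holds on the then-branch, x <= k on the else-branch.
    return (paths_to_raise(then_b, max(lo, k), hi)
            + paths_to_raise(else_b, lo, min(hi, k)))
```

For example, a program that raises `Overflow` when x > 10 and `Underflow` when x <= 0 yields the constraints (10, inf) and (-inf, 0]; picking 11 or -1 as test data triggers the respective exception.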

Set Covering-based Feature Selection of Large-scale Omics Data (Set Covering 기반의 대용량 오믹스데이터 특징변수 추출기법)

  • Ma, Zhengyu;Yan, Kedong;Kim, Kwangsoo;Ryoo, Hong Seo
    • Journal of the Korean Operations Research and Management Science Society / v.39 no.4 / pp.75-84 / 2014
  • In this paper, we dealt with the feature selection problem for large-scale, high-dimensional biological data such as omics data. Most previous approaches use a simple score function to reduce the number of original variables and then select features from the small number of remaining variables. Methods that do not rely on filtering techniques either do not consider the interactions between variables or generate approximate solutions to a simplified problem. By combining set covering and clustering techniques, we developed a new method that can handle the full set of variables and consider the combinatorial effects of variables when selecting good features. To demonstrate the efficacy and effectiveness of the method, we downloaded gene expression datasets from TCGA (The Cancer Genome Atlas) and compared our method with other algorithms, including the feature selection algorithms embedded in WEKA. The experimental results show that our method selects high-quality features that yield more accurate classifiers than other feature selection algorithms.
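The set-covering view of feature selection can be sketched as follows (a toy greedy version of our own; the paper combines set covering with clustering and exact formulations for large-scale data, which this sketch does not reproduce). Each feature "covers" the cross-class sample pairs it separates by more than a threshold, and features are picked greedily until every pair is covered.

```python
import itertools

def greedy_set_cover_features(X, y, threshold=0.0):
    """Greedy set cover: pick features until every pair of samples
    from different classes is separated by some selected feature."""
    n_samples, n_features = len(X), len(X[0])
    # All cross-class sample pairs that need separating.
    uncovered = {(i, j) for i, j in itertools.combinations(range(n_samples), 2)
                 if y[i] != y[j]}
    # Pairs that each feature can tell apart.
    covers = [{(i, j) for (i, j) in uncovered
               if abs(X[i][f] - X[j][f]) > threshold}
              for f in range(n_features)]
    selected = []
    while uncovered:
        best = max(range(n_features), key=lambda f: len(covers[f] & uncovered))
        if not covers[best] & uncovered:
            break  # remaining pairs cannot be separated by any feature
        selected.append(best)
        uncovered -= covers[best]
    return selected
```

Because the selection criterion works on pairs rather than per-feature scores, a feature that is individually weak but separates pairs missed by others can still be chosen, which is the combinatorial effect the abstract refers to.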

Compressing Method of NetCDF Files Based on Sparse Matrix (희소행렬 기반 NetCDF 파일의 압축 방법)

  • Choi, Gyuyeun;Heo, Daeyoung;Hwang, Suntae
    • KIISE Transactions on Computing Practices / v.20 no.11 / pp.610-614 / 2014
  • Like many types of scientific data, results from simulations of volcanic ash diffusion take the form of a clustered sparse matrix in the netCDF format. Since these data sets are large, they incur high storage and transmission costs. In this paper, we suggest a new method that reduces the size of volcanic ash diffusion simulation data by converting the multi-dimensional index to a single dimension and keeping only the starting point and length of each run of consecutive zeros. This method achieves compression almost as good as ZIP compression, but does not destroy the netCDF structure. The suggested method is expected to allow storage space to be used efficiently by reducing both the data size and the network transmission time.
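The core encoding can be sketched in a few lines (an illustrative reconstruction from the abstract, not the authors' implementation): after the multi-dimensional netCDF index is flattened row-major to one dimension, each run of consecutive zeros is stored as a (start, length) pair and only the nonzero values are kept.

```python
def compress(flat):
    """Encode zero runs in a flattened array as (start, length) pairs,
    keeping the nonzero values separately."""
    zero_runs, values = [], []
    i = 0
    while i < len(flat):
        if flat[i] == 0:
            start = i
            while i < len(flat) and flat[i] == 0:
                i += 1
            zero_runs.append((start, i - start))
        else:
            values.append(flat[i])
            i += 1
    return len(flat), zero_runs, values

def decompress(n, zero_runs, values):
    """Rebuild the flattened array from the run-length encoding."""
    flat = [0] * n
    is_zero = [False] * n
    for start, length in zero_runs:
        for k in range(start, start + length):
            is_zero[k] = True
    it = iter(values)
    for k in range(n):
        if not is_zero[k]:
            flat[k] = next(it)
    return flat
```

For clustered sparsity, the zeros collapse into a handful of (start, length) pairs, which is why the scheme approaches ZIP-level ratios while the stored arrays remain ordinary netCDF variables.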

AI Platform Solution Service and Trends (글로벌 AI 플랫폼 솔루션 서비스와 발전 방향)

  • Lee, Kang-Yoon;Kim, Hye-rim;Kim, Jin-soo
    • The Journal of Bigdata / v.2 no.2 / pp.9-16 / 2017
  • Global platform solution companies that own cloud platforms (e.g., Amazon, Google, MS, IBM) are driving AI and big data services on those platforms. This will dramatically change enterprise business value chains and infrastructures in Supply Chain Management, Enterprise Resource Planning, and Customer Relationship Management. Enterprises are focusing on their channels with customers and business partners and are changing their infrastructures into platforms by integrating data, a digital transformation for decision support. AI and deep learning technology are rapidly being combined into these data-driven platforms, which support mobile, social, and big data workloads. Collaboration on platform services among business partners and customers will generate a new ecosystem market, a new form of enterprise revolution as part of the 4th industrial revolution.

The Effects of Typhoon Initialization and Dropwindsonde Data Assimilation on Direct and Indirect Heavy Rainfall Simulation in WRF model

  • Lee, Ji-Woo
    • Journal of the Korean earth science society / v.36 no.5 / pp.460-475 / 2015
  • A number of heavy rainfall events on the Korean Peninsula are indirectly influenced by tropical cyclones (TCs) located over southeastern China. In this study, a heavy rainfall case in the middle Korean region is selected to examine the influence of typhoon simulation performance on the predictability of remote rainfall over Korea as well as direct rainfall over Taiwan. Four numerical experiments are conducted using the Weather Research and Forecasting (WRF) model, toggling on and off two different improvements to the typhoon in the model initial condition (IC): TC bogussing initialization and dropwindsonde observation data assimilation (DA). The Geophysical Fluid Dynamics Laboratory TC initialization algorithm is implemented to replace the initial typhoon with a bogused vortex, while the airborne observations obtained from dropwindsondes are applied through WRF three-dimensional variational data assimilation. Results show that using both TC initialization and DA improves the predictability of the TC track as well as rainfall over Korea and Taiwan. Without either IC improvement, the intensity of the TC is underestimated during the simulation. Using TC initialization alone improves the simulation of direct rainfall but not of indirect rainfall, while using DA alone has a negative impact on the TC track forecast. This study confirms that a well-simulated TC over southeastern China improves remote rainfall predictability over Korea as well as TC direct rainfall over Taiwan.

A Statistical Analysis of SNPs, In-Dels, and Their Flanking Sequences in Human Genomic Regions

  • Shin, Seung-Wook;Kim, Young-Joo;Kim, Byung-Dong
    • Genomics & Informatics / v.5 no.2 / pp.68-76 / 2007
  • Due to the increasing interest in SNPs and mutational hot spots for disease traits, it is becoming more important to define and understand the relationship between SNPs and their flanking sequences. To study the effects of flanking sequences on SNPs, statistical approaches are necessary to assess bias in SNP data. In this study we mainly applied Markov chains to SNP sequences, particularly those located in intronic regions, and to the analysis of in-del data. All of the sequences concerned showed a significant tendency to generate particular SNP types. Most sequences flanking SNPs had lower complexities than average sequences, and some of them were associated with microsatellites. Moreover, many Alu repeats were found in the flanking sequences. We observed an elevated frequency of single-base-pair repeat-like sequences, mirror repeats, and palindromes in the SNP flanking sequence data. Alu repeats are hypothesized to be associated with C-to-T transition mutations or A-to-I RNA editing. In particular, the in-del data revealed an association with particular structures such as palindromes or mirror repeats. The results indicate that the mechanism inducing in-del transitions is probably very different from that responsible for other SNPs. From a statistical perspective, frequent DNA lesions in some regions probably affect the occurrence of SNPs.
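The Markov-chain treatment of flanking sequences amounts to estimating transition probabilities between adjacent bases, which can then be compared between SNP-flanking and background sequences. A minimal sketch (our own illustration; the paper's models and tests are more elaborate):

```python
from collections import Counter

def transition_probs(seq):
    """First-order Markov transition probabilities P(next base | base),
    estimated from adjacent pairs in a DNA sequence string."""
    pairs = Counter(zip(seq, seq[1:]))   # counts of (a, b) adjacencies
    totals = Counter(seq[:-1])           # counts of a as a left neighbor
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}
```

Fitting such a chain separately to flanking and average sequences exposes compositional bias, e.g. the elevated single-base-pair repeat frequency reported in the abstract shows up as inflated self-transition probabilities such as P(A | A).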

Electron Energy Distribution for a Research Electron LINAC

  • Lim, Heuijin;Lee, Manwoo;Yi, Jungyu;Kang, Sang Koo;Kim, Me Young;Jeong, Dong Hyeok
    • Progress in Medical Physics / v.28 no.2 / pp.49-53 / 2017
  • The energy distribution of an electron beam from an electron linear accelerator developed for medical applications was calculated using computational methods. The depth dose data for monoenergetic electrons from 0.1 MeV to 8.0 MeV were calculated with the DOSXYZ/nrc code. The calculated data were used to generate the energy distribution from the measured depth dose data by numerical iteration; the measured data from a previous work and an in-house computer program were used for this step. As a result, the mean energy and most probable energy of the distribution were 5.7 MeV and 6.2 MeV, respectively. These two values agreed with those determined by the IAEA dosimetry protocol using the measured depth dose.
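The unfolding step can be posed as a nonnegative least-squares problem (an assumption on our part; the paper used an in-house iterative program whose details the abstract does not give): the measured depth dose is a weighted sum over energy bins of the monoenergetic depth-dose curves, and the weights are the sought spectrum. A sketch using simple multiplicative updates, which keep the weights nonnegative:

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def unfold_spectrum(D, measured, iters=5000):
    """D[i][j]: dose at depth i from monoenergetic beam j (all >= 0).
    measured[i]: measured depth dose. Solves measured = D @ w for
    nonnegative weights w via Lee-Seung style multiplicative updates."""
    n_e = len(D[0])
    Dt = [[row[j] for row in D] for j in range(n_e)]  # transpose of D
    dtm = matvec(Dt, measured)
    w = [1.0] * n_e
    for _ in range(iters):
        dtdw = matvec(Dt, matvec(D, w))
        # Fixed point of this update satisfies the normal equations.
        w = [w[j] * dtm[j] / (dtdw[j] + 1e-12) for j in range(n_e)]
    return w
```

Given the recovered weights and the bin energies, the mean energy follows as sum(w * E) / sum(w), and the most probable energy is the bin with the largest weight.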

A Study on the Parameter Analysis for the Quantitative Evaluation of Spasticity Implementing Pendulum Test (경직의 정량 평가를 위한 진자실험의 변수분석)

  • Lim, Hyun-Kyoon;Lee, Young-Shin;Cho, Kang-Hee;Chae, Jin-Mok;Kim, Bong-Ok
    • Proceedings of the KSME Conference / 2000.04a / pp.268-273 / 2000
  • A velocity-dependent increase in tonic stretch reflexes is one of the prominent characteristics of spasticity. It is very important for physicians to evaluate spasticity objectively and quantitatively before and after treatment. In this study, an accurate quantitative biomechanical evaluation is made of spasticity caused by disorders of the central nervous system. A sudden leg dropper, designed to generate an objective testing environment at every trial, provides a very effective setup for the test. Kinematic data are acquired by a 3-dimensional motion analysis system (Elite®, B.T.S., Italy): the angles and angular velocities of the lower limb joints, and the lengths and lengthening velocities of the lower limb muscles. A program is also developed to analyze the kinematic data of the lower limb, the contraction and relaxation lengths of the muscles, and the dynamic EMG data at the same time. To evaluate spasticity quantitatively, a total of 31 parameters extracted from the goniogram, EMG, and a muscle model are analyzed, and bilateral correlations are statistically analyzed for all parameters. The described instrumentation and parameters for quantitative and objective evaluation of spasticity show good results.

Application of the STEM II to air pollutant transport/chemistry/deposition in the Korea and Eastern China Area - I. Data preparation and Model verification (STEM II를 이용한 한국과 중국동부 지역의 대기오염물질 이동/화학/침착 모사에 관한 연구 - I. 입력자료 작성과 모델 검증)

  • 이상인;조석연;심상규
    • Journal of Korean Society for Atmospheric Environment / v.10 no.4 / pp.260-280 / 1994
  • The STEM II (Sulfur Transport Eulerian Model II) was adapted to simulate the transport/chemistry/deposition of air pollutants in Eastern China and Korea. A 32-hour model simulation was performed starting from 9 A.M. on November 25, 1989, a period during which no precipitation was observed; the prevailing wind direction was from west to east. The MM4 (Meteorological Model Version 4) was used to generate meteorological data such as temperatures, horizontal wind velocities and directions, humidities, and air densities. Eddy diffusivities, dry deposition velocities, and vertical wind velocities were calculated from the meteorological data. The initial condition and the emission database were constructed from measurements and governmental reports, respectively. The model predictions of NO, NO₂, SO₂, and O₃ at Seoul, Inchon, and Pusan agree reasonably well with measurements. The model's predictability for the primary air pollutants improves considerably as time passes. Thus, it is concluded that the model's predictability can be significantly enhanced by reducing the uncertainties of the initial conditions.
