• Title/Summary/Keyword: model initialization


Numerical Study on the Impact of SST Spatial Distribution on Regional Circulation (상세 해수면 온도자료의 반영에 따른 국지 기상정 개선에 관한 수치연구)

  • Jeon, Won-Bae;Lee, Hwa-Woon;Lee, Soon-Hwan;Choi, Hyun-Jung;Leem, Heon-Ho
    • Journal of Korean Society for Atmospheric Environment / v.25 no.4 / pp.304-315 / 2009
  • Numerical simulations were carried out to understand the effect of Sea Surface Temperature (SST) spatial distribution on regional circulation. A three-dimensional non-hydrostatic atmospheric model, RAMS version 6.0, was applied to examine the impact of SST forcing on regional circulation. New Generation Sea Surface Temperature (NGSST) data were supplied to RAMS so that the results could be compared with modeling based on the default SST data. Several numerical experiments were undertaken to evaluate the effect of SST on initialization: the first used NGSST data (Case NG), the second used RAMS monthly data (Case RM), and the third used seasonally averaged RAMS monthly data (Case RS). Case NG reproduced accurate spatial distributions of SST, but the SSTs of Cases RM and RS were 3~4 °C lower than buoy observations. Analysis of the simulated sea surface conditions revealed large differences in the horizontal temperature and wind fields among the runs. Case RM and Case RS showed similar horizontal and vertical distributions of temperature and wind, whereas Case NG produced a weaker sea breeze and a stronger land breeze. These differences arose from differences in the temperature gradient caused by the different spatial distributions of SST. Diurnal variations of temperature and wind speed in Case NG agreed well with the observations, and statistics such as the root mean square error, index of agreement, and regression were also better than in Case RM and Case RS.
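
The verification statistics cited above are standard model-evaluation measures. As a minimal sketch (not the paper's code), RMSE and Willmott's index of agreement for a simulated versus observed series could be computed as follows; the sample temperature values are purely illustrative:

```python
import numpy as np

def rmse(sim, obs):
    """Root mean square error between simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def index_of_agreement(sim, obs):
    """Willmott's index of agreement, bounded in [0, 1] (1 = perfect)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

# Hypothetical hourly 2 m temperatures for one case at one station:
obs = np.array([14.2, 13.8, 15.1, 17.5, 19.0, 18.4])
sim_ng = np.array([14.0, 13.5, 15.4, 17.9, 18.6, 18.1])
print(rmse(sim_ng, obs), index_of_agreement(sim_ng, obs))
```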

Priority-based reservation protocol for variable-length messages in a WDM-based optical subscriber network (WDM 기반의 광 가입자 망에서 우선순위 기반의 효율적인 가변 길이 메시지 예약 프로토콜)

  • Lee, Jae-hwoon
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.4B / pp.153-161 / 2005
  • In a multi-channel network based on wavelength division multiplexing (WDM) technology, an efficient protocol is needed so that the transmitter and receiver are tuned to the same wavelength during message transmission. This paper proposes a priority-based reservation protocol that can efficiently support variable-length messages with different QoS requirements. In the proposed protocol, high-priority nodes can reserve a data channel before low-priority nodes. However, once a node successfully reserves a data channel, it can keep using the reserved channel until its message transmission is finished, regardless of the node's priority. Moreover, the protocol operates independently of the number of nodes, and any new node can join the network at any time without requiring network re-initialization. The protocol is analyzed with a finite population model, and the throughput-delay characteristics are investigated as performance measures.
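
As an illustration of the reservation rule described (higher priority wins contention for a free data channel, but a granted channel is held for the whole variable-length message), here is a toy Python sketch; the data structures and tie-breaking are assumptions, not the paper's protocol specification:

```python
class Node:
    """A network node with a static priority (lower value = higher priority)."""
    def __init__(self, node_id, priority):
        self.node_id = node_id
        self.priority = priority

def reserve_channels(requests, n_channels):
    """Grant free data channels to pending requests in priority order.

    requests: list of (Node, message_length) pairs waiting for a channel.
    Once granted, a channel stays with its node for the whole variable-length
    message, regardless of later higher-priority arrivals.
    """
    queue = sorted(requests, key=lambda r: (r[0].priority, r[0].node_id))
    grants = {}
    for channel in range(n_channels):
        if not queue:
            break
        node, length = queue.pop(0)
        grants[channel] = (node.node_id, length)
    return grants

grants = reserve_channels(
    [(Node(1, priority=0), 5), (Node(2, priority=2), 3), (Node(3, priority=1), 7)],
    n_channels=2)
print(grants)  # {0: (1, 5), 1: (3, 7)} -- node 2 waits for a free channel
```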

A design and implementation of VHDL-to-C mapping in the VHDL compiler back-end (VHDL 컴파일러 후반부의 VHDL-to-C 사상에 관한 설계 및 구현)

  • 공진흥;고형일
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.12 / pp.1-12 / 1998
  • In this paper, a design and implementation of VHDL-to-C mapping in the VHDL compiler back-end is described. The analyzed data in an intermediate format (IF), produced by the compiler front-end, are transformed into a C-code model of the VHDL semantics by the VHDL-to-C mapper. The C-code model is based on a functional template comprising declaration, elaboration, initialization, and execution parts. The mapping is carried out by applying C mapping templates of 129 types, classified by mapping unit and functional semantics, together with iterative algorithms that combine terminal information to produce C code. To generate the C program, the C code is written into the functional template either directly or by combining a higher-level mapping result with intermediate mapping codes held in a data queue. Experiments showed that the VHDL-to-C mapper could completely handle the VHDL programs analyzed by the compiler front-end, which covers about 96% of the major VHDL syntactic programs in the Validation Suite. As for performance, the code size of the VHDL-to-C output is smaller than that of an interpreter but larger than that of a direct-code compiler, whose generated code grows more rapidly with the size of the VHDL design; the timing overhead of VHDL-to-C needs to be improved through an optimized implementation of the mapping mechanism.
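
The four-part functional template (declaration, elaboration, initialization, execution) can be pictured as a skeleton that the mapper fills with C fragments. The following Python sketch only illustrates that template idea; it is not the actual mapper or its 129 mapping templates, and the section contents are placeholders:

```python
# Illustrative sketch: emit the four-part C functional template that a
# VHDL-to-C back-end might fill from the intermediate format (IF).
TEMPLATE = """\
/* generated from VHDL intermediate format */
{declaration}

void elaborate(void) {{
{elaboration}
}}

void initialize(void) {{
{initialization}
}}

void execute_cycle(void) {{
{execution}
}}
"""

def emit_c_model(sections):
    """Fill the functional template with C fragments mapped from the IF."""
    return TEMPLATE.format(**{k: "\n".join("    " + line for line in v)
                              for k, v in sections.items()})

print(emit_c_model({
    "declaration": ["static int sig_clk;   /* VHDL signal clk */"],
    "elaboration": ["/* bind processes to signals */"],
    "initialization": ["sig_clk = 0;  /* default initial value */"],
    "execution": ["sig_clk = !sig_clk;  /* toy process body */"],
}))
```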

Modelling Gas Production Induced Seismicity Using 2D Hydro-Mechanical Coupled Particle Flow Code: Case Study of Seismicity in the Natural Gas Field in Groningen, Netherlands (2차원 수리-역학적 연계 입자유동코드를 사용한 가스생산 유발지진 모델링: 네덜란드 그로닝엔 천연가스전에서의 지진 사례 연구)

  • Jeoung Seok Yoon;Anne Strader;Jian Zhou;Onno Dijkstra;Ramon Secanell;Ki-Bok Min
    • Tunnel and Underground Space / v.33 no.1 / pp.57-69 / 2023
  • In this study, we simulated induced seismicity in the Groningen natural gas reservoir using 2D hydro-mechanically coupled discrete element modelling (DEM). The code used is PFC2D (Particle Flow Code 2D), commercial software developed by Itasca; to apply it to this study we further developed 1) initialization of an inhomogeneous reservoir pressure distribution, 2) a non-linear pressure-time history boundary condition, and 3) local stress field monitoring logic. We generated a 2D reservoir model with a size of 40 km × 50 km and a complex fault system, and simulated pressure depletion over the period from 1960 to 2020. We simulated fault system failure induced by the pressure depletion, reproduced the spatiotemporal distribution of induced seismicity, and assessed its failure mechanism. We also estimated the ground subsidence distribution and confirmed its similarity to field measurements in the Groningen region. Through this study, we confirm the feasibility of the presented 2D hydro-mechanically coupled DEM for simulating the deformation of a complex fault system by hydro-mechanically coupled processes.
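
The first extension, initializing an inhomogeneous reservoir pressure distribution, might look conceptually like the grid-blending sketch below; the blending scheme, control-point locations, and pressures are illustrative assumptions, not the PFC2D implementation:

```python
import numpy as np

def init_pressure_field(nx, ny, wells, background=35.0e6, radius_m=5_000.0):
    """Initialize an inhomogeneous pore-pressure field on an nx-by-ny grid.

    wells: list of (ix, iy, pressure_Pa) control points; the background
    pressure is blended toward each control point with Gaussian weights
    (an illustrative interpolation choice). Grid spacing assumed 1 km.
    """
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    field = np.full((nx, ny), background)
    sigma_cells = radius_m / 1_000.0  # Gaussian radius in grid cells
    for ix, iy, p in wells:
        d2 = (xs - ix) ** 2 + (ys - iy) ** 2
        w = np.exp(-d2 / (2 * sigma_cells ** 2))
        field = (1 - w) * field + w * p
    return field

# 40 km x 50 km reservoir on a 1 km grid, two low-pressure depletion centres:
p0 = init_pressure_field(40, 50, wells=[(10, 25, 30.0e6), (30, 15, 28.0e6)])
print(p0.min() / 1e6, p0.max() / 1e6)  # MPa range of the initial field
```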

Interactions between Soil Moisture and Weather Prediction in Rainfall-Runoff Application: Korea Land Data Assimilation System (KLDAS) (수리 모형을 이용한 Korea Land Data Assimilation System (KLDAS) 자료의 수문자료에 대한 영향력 분석)

  • Jung, Yong;Choi, Minha
    • Proceedings of the Korean Society of Hazard Mitigation Conference / 2011.02a / pp.172-172 / 2011
  • The interaction between the land surface and the atmosphere is strongly affected by hydrometeorological variables, including soil moisture. Accurate estimation of soil moisture at spatial and temporal scales is crucial to better understand its role in weather systems. KLDAS (Korea Land Data Assimilation System) is a regional land surface information system, specifically for the Korean Peninsula. Like earlier land data assimilation systems, it can provide initial soil field information for atmospheric simulations. In this study, the Weather Research and Forecasting (WRF-ARW) model is applied as a high-resolution tool to produce precipitation data from GFS (Global Forecast System) data, using either the GFS-embedded or the KLDAS soil moisture information as initialization data. WRF-ARW generates precipitation data for a specific region using different parameters in its physics options. The produced precipitation data are then employed as predefined input for hydrological models such as HEC-HMS (Hydrologic Engineering Center - Hydrologic Modeling System) to simulate the water responses of selected regions. The purpose of this study is to show the impact of a hydrometeorological variable such as KLDAS soil moisture on hydrological outcomes on the Korean Peninsula. The study region, the Chongmi River Basin, is located in the center of the Korean Peninsula; the river is 60.8 km long with a 17.01% slope. The region mostly consists of farmland, although the chosen study area lies in mountainous terrain. The basin perimeter is 185 km, the average river width is 9.53 m, and the highest elevation is 676 m. There are four observation locations: the Sulsung, Taepyung, Samjook, and Sangkeug observatories. This watershed was selected as a tentative research location and is studied continuously to quantify the hydrological effects of land surface information. Simulations for a real regional storm case (June 17 to June 25, 2006) were executed. WRF-ARW for this case study used WSM6 microphysics, the Kain-Fritsch cumulus scheme, and the YSU planetary boundary layer scheme. The WRF simulations generated excellent precipitation data in terms of peak precipitation amount and date, and the pattern of daily precipitation, at the four locations. For the Sangkeug observatory, WRF overestimated precipitation by approximately 100 mm/day on June 17, 2006. At Taepyung and Samjook, WRF produced higher precipitation amounts than observed, whether initialized with KLDAS or with GFS-embedded soil moisture data. Detailed results and a discussion of prediction accuracy using the aforementioned methods are to be presented at the 2011 Annual Conference of the Korean Society of Hazard Mitigation.
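
A minimal sketch of the kind of verification implied above (daily bias, peak-amount error, and peak-date offset of WRF precipitation against station observations); all numbers are invented placeholders, not the study's data:

```python
import numpy as np

# Hypothetical daily precipitation (mm/day) at one observatory over the
# June 17-25, 2006 storm window; values are illustrative only.
obs     = np.array([ 5, 80, 140, 60, 10, 0, 15, 30,  5], float)
wrf_kld = np.array([ 8, 95, 150, 70, 12, 2, 20, 25,  6], float)  # KLDAS init
wrf_gfs = np.array([ 4, 70, 240, 90, 20, 5, 10, 40, 10], float)  # GFS init

for name, sim in [("KLDAS", wrf_kld), ("GFS", wrf_gfs)]:
    bias = np.mean(sim - obs)                        # mean daily bias
    peak_err = sim.max() - obs.max()                 # peak-amount error
    peak_lag = int(np.argmax(sim) - np.argmax(obs))  # peak-day offset
    print(f"{name}: bias={bias:+.1f} mm/day, "
          f"peak error={peak_err:+.0f} mm, lag={peak_lag} d")
```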

Underdetermined blind source separation using normalized spatial covariance matrix and multichannel nonnegative matrix factorization (멀티채널 비음수 행렬분해와 정규화된 공간 공분산 행렬을 이용한 미결정 블라인드 소스 분리)

  • Oh, Son-Mook;Kim, Jung-Han
    • The Journal of the Acoustical Society of Korea / v.39 no.2 / pp.120-130 / 2020
  • This paper addresses the underdetermined convolutive mixture problem by improving on the multichannel nonnegative matrix factorization technique widely used in blind source separation. In conventional research based on the Spatial Covariance Matrix (SCM), elements composed of values such as single-channel power gain and inter-channel correlation tend to degrade the quality of the separated sources due to their high variance. In this paper, level and frequency normalization is performed to cluster the estimated sources effectively, and we propose a novel SCM together with an effective distance function for cluster pairs. The proposed SCM is used to initialize the spatial model and for hierarchical agglomerative clustering in a bottom-up approach. The proposed algorithm was evaluated on the 'Signal Separation Evaluation Campaign 2008 development dataset'. Using the 'Blind Source Separation Eval toolbox', an objective source separation quality verification tool, improvement was confirmed in most of the performance indicators, and in particular an SDR gain of 1 dB to 3.5 dB was verified.
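
A minimal sketch of per-bin spatial covariance matrices with a unit-trace level normalization, in the spirit of the normalization described; the paper's exact level and frequency normalization and its cluster distance function are not reproduced here:

```python
import numpy as np

def normalized_scm(X):
    """Per-bin spatial covariance matrices from a multichannel STFT.

    X: complex array of shape (channels, freq_bins, frames).
    Returns SCMs of shape (freq_bins, frames, channels, channels), each
    scaled to unit trace so that level differences are removed and bins
    dominated by the same source cluster together.
    """
    # Outer product x x^H for every (f, t) bin.
    scm = np.einsum("mft,nft->ftmn", X, X.conj())
    trace = np.einsum("ftmm->ft", scm).real
    scm /= np.maximum(trace, 1e-12)[..., None, None]  # level normalization
    return scm

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 257, 100)) + 1j * rng.standard_normal((2, 257, 100))
S = normalized_scm(X)
print(S.shape, np.allclose(np.einsum("ftmm->ft", S).real, 1.0))
```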

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the models. Word vectors here generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in studies of sentiment analysis of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word-vector derivation mechanism to sentences divided into their constituent morphemes. At this point, several questions arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme-vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and then compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive the morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing, namely sentence splitting only, or additional spelling and spacing corrections after sentence splitting. Third, they vary in the form of input fed to the word vector model: either the morphemes themselves or the morphemes with their POS tags attached. The morpheme vectors further vary with the consideration range of POS tags, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model using a context window of 5 and a vector dimension of 300. It appears that using text from the same domain even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of any POS tag, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum-frequency threshold for a morpheme to be included seem to have no definite influence on classification accuracy.
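
Deriving morpheme vectors with the reported settings (CBOW, context window 5, dimension 300) could look like the following gensim sketch; the tokenized sentences and POS tags are invented, and min_count is an assumption:

```python
from gensim.models import Word2Vec

# Hypothetical review sentences already split into morphemes; in the study
# this tokenization would come from a Korean morphological analyzer.
sentences = [
    ["배송", "이", "빠르", "고", "제품", "이", "예쁘", "다"],
    ["가격", "대비", "품질", "이", "좋", "다"],
]
# POS-tag-attached variant mentioned in the abstract (tags here are made up):
sentences_tagged = [[m + "/NNG" for m in s] for s in sentences]

# CBOW (sg=0) with context window 5 and 300-dimensional vectors, matching
# the settings in the abstract; min_count=1 is an assumption for this toy data.
model = Word2Vec(sentences, vector_size=300, window=5, sg=0, min_count=1)
model_tagged = Word2Vec(sentences_tagged, vector_size=300, window=5, sg=0,
                        min_count=1)

vec = model.wv["예쁘"]  # 300-dimensional morpheme vector
print(vec.shape)
```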

Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung;Chang, Yong-Sik
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.117-137 / 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These systems operate on data reflecting current road and traffic conditions as well as distance-based data between locations. Thanks to the rapid development of ubiquitous computing, tremendous amounts of context data have become readily available, making vehicle route planning easier than ever. Previous research on optimizing vehicle route planning focused merely on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated as a way to resolve optimal routing problems beyond distance-based planning, because such information has little impact on routing until a complex traffic situation arises. Furthermore, it was not easy to take full account of traffic contexts in optimal routing, because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of contexts reflecting data related to moving costs has emerged. Hence, this research proposes a framework designed to resolve the optimal route planning problem by taking full account of additional moving costs such as road traffic cost and weather cost, among others; recent technological developments, particularly in the ubiquitous computing environment, have facilitated the collection of such data. The framework is based on the contexts of time, traffic, and environment, and addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity, and estimate the optimal moving cost with dynamic programming that accounts for the context cost as the contexts vary. Second, the velocity reduction rate is applied to find the optimal route (shortest path) using context data on current traffic conditions. The velocity reduction rate expresses the degree to which a vehicle's attainable velocity is lowered by the relevant road and traffic contexts, derived from statistical or experimental data; the knowledge generated in this paper can be referenced by organizations that deal with road and traffic data. Third, in experimentation, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it with the previously used distance-based shortest path. A vehicle's optimal route may change because its velocity varies under unexpected but possible dynamic situations depending on the road condition. This study includes context variables such as 'road congestion', 'work', 'accident', and 'weather', which can alter traffic conditions and affect a moving vehicle's velocity on the road. Since these context variables, except for 'weather', are related to road conditions, the relevant data were provided by the Korea Expressway Corporation; the 'weather' data were obtained from the Korea Meteorological Administration. The recognized contexts are classified as contexts causing a reduction in vehicle velocity, which determines the velocity reduction rate. To find the optimal route (shortest path), we introduce the velocity reduction rate into the context cost to calculate a vehicle's velocity under composite contexts, when one event coincides with another.
We then propose a context-based optimal route (shortest path) algorithm based on dynamic programming, composed of three steps. In the first, initialization, step, the departure and destination locations are given and the path step is set to 0. In the second step, the moving costs between locations on the path, taking composite contexts into account, are estimated using the per-context velocity reduction rate as the path step increases. In the third step, the optimal route (shortest path) is retrieved through back-tracking. In the proposed research model, we design a framework covering context awareness, moving-cost estimation (for both composite and single contexts), and the optimal route (shortest path) algorithm based on dynamic programming. Through illustrative experimentation using the Wilcoxon signed rank test, we show that context-based route planning is much more effective than distance-based route planning. In addition, we find that the optimal solutions (shortest paths) from distance-based planning may not be optimal in real situations, because road conditions are highly dynamic and unpredictable and affect most vehicles' moving costs. While more information is needed for a more accurate estimation of moving vehicles' costs, this study remains applicable to reducing moving costs through effective route planning; for instance, it could support deliverers' decision making and enhance their decision satisfaction when they meet unpredictable dynamic situations on the road. Overall, we conclude that taking contexts into account as part of the costs is a meaningful and sensible approach to resolving the optimal route problem.
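
The three-step procedure (initialize, expand path steps with context-adjusted moving costs, back-track) can be sketched as a small dynamic program; the graph, base travel times, and velocity reduction rates below are illustrative assumptions, not the paper's data:

```python
# Illustrative DP over path steps: edge cost = base travel time divided by
# the fraction of velocity remaining after context-based reduction.
GRAPH = {  # node -> {neighbor: base_minutes}
    "A": {"B": 10, "C": 15},
    "B": {"D": 12},
    "C": {"D": 5},
    "D": {},
}
REDUCTION = {("A", "C"): 0.4, ("C", "D"): 0.2}  # e.g. congestion, weather

def context_cost(u, v):
    rate = REDUCTION.get((u, v), 0.0)  # fraction of velocity lost
    return GRAPH[u][v] / (1.0 - rate)  # lower velocity -> longer travel time

def shortest_path(src, dst):
    """Step 1: initialize; step 2: relax costs step by step; step 3: back-track."""
    cost = {n: float("inf") for n in GRAPH}
    prev = {n: None for n in GRAPH}
    cost[src] = 0.0
    for _ in range(len(GRAPH) - 1):  # Bellman-Ford-style DP over path steps
        for u, nbrs in GRAPH.items():
            for v in nbrs:
                c = cost[u] + context_cost(u, v)
                if c < cost[v]:
                    cost[v], prev[v] = c, u
    path, n = [], dst
    while n is not None:
        path.append(n)
        n = prev[n]
    return path[::-1], cost[dst]

print(shortest_path("A", "D"))  # contexts make A->B->D beat the shorter A->C->D
```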