• Title/Summary/Keyword: computer simulations


A Page Replacement Scheme Based on Recency and Frequency (최근성과 참조 횟수에 기반한 페이지 교체 기법)

  • Lee, Seung-Hoon;Lee, Jong-Woo;Cho, Seong-Je
    • The KIPS Transactions:PartA / v.8A no.4 / pp.469-478 / 2001
  • In a virtual memory system, the page replacement policy exerts a great influence on the performance of demand paging. LRU (Least Recently Used) and LFU (Least Frequently Used) are the typical replacement policies. The LRU policy performs effectively in many cases and adapts to changing workloads better than other policies; however, it cannot distinguish well between frequently and infrequently referenced pages. The LFU policy replaces the page with the smallest reference count. Though it considers all references in the past, it cannot discriminate between references that occurred far back in the past and more recent ones, and thus cannot adapt well to changing workloads. In this paper, we first analyze the memory reference patterns of eight applications. The patterns show that, depending on the case, either the recently referenced pages or the frequently referenced pages are accessed continuously, so it is hard to optimize page replacement using only one of the LRU and LFU policies. This paper attempts to combine the advantages of the two policies and proposes a new page replacement policy. In the proposed policy, the paging list is divided into two lists (an LRU list and an LFU list). By keeping the two lists in recency order and reference-frequency order respectively, we restrain pages that were highly referenced in the past from being replaced by the LRU policy. Results from trace-driven simulations show that there exist points on the spectrum at which the proposed policy performs better than previously known policies for the workloads we considered. In particular, our policy outperforms the existing ones in applications that, after some time, re-access pages that were frequently referenced in the past.
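The combined recency/frequency idea described in this abstract can be illustrated with a toy Python sketch. This is a hypothetical simplification, not the authors' algorithm: a single ordered dict plays both the LRU-list and LFU-list roles, and the rule for sparing a frequently referenced LRU victim is invented here for illustration.

```python
from collections import OrderedDict

class RecencyFrequencyCache:
    """Toy page cache combining recency and frequency: pages are kept in
    recency order with per-page reference counts. On eviction, the LRU
    victim is spared if its reference count is above average, loosely
    mirroring the idea of protecting highly referenced pages from LRU
    replacement. (Illustrative sketch only, not the paper's policy.)"""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page -> reference count, in recency order

    def access(self, page):
        if page in self.pages:
            self.pages[page] += 1
            self.pages.move_to_end(page)  # most recently used at the end
            return "hit"
        if len(self.pages) >= self.capacity:
            self.evict()
        self.pages[page] = 1
        return "miss"

    def evict(self):
        # Candidate from the LRU end; if it was referenced more often than
        # average, evict the least frequently used page instead.
        lru_page = next(iter(self.pages))
        lfu_page = min(self.pages, key=self.pages.get)
        average = sum(self.pages.values()) / len(self.pages)
        victim = lfu_page if self.pages[lru_page] > average else lru_page
        del self.pages[victim]
```

On the reference string a, b, a, c with capacity 2, page a survives the eviction triggered by c because it is both more recent and more frequent than b.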


An Installation and Model Assessment of the UM, U.K. Earth System Model, in a Linux Cluster (U.K. 지구시스템모델 UM의 리눅스 클러스터 설치와 성능 평가)

  • Daeok Youn;Hyunggyu Song;Sungsu Park
    • Journal of the Korean earth science society / v.43 no.6 / pp.691-711 / 2022
  • A state-of-the-art Earth system model, as a virtual Earth, is required for studies of current and future climate change and climate crises. Such a complex numerical model can account for almost all human activities and natural phenomena affecting the Earth's atmosphere. The Unified Model (UM) of the United Kingdom Meteorological Office (UK Met Office) is among the best Earth system models available as a scientific tool for studying the atmosphere. However, owing to the expensive numerical integration and substantial output size required to run the UM, individual research groups have had to rely solely on supercomputers. The limitations of these computing resources, especially environments blocked from outside network connections, reduce the efficiency and effectiveness both of conducting research with the model and of improving its component codes. Therefore, this study presents detailed guidance for installing a new version of the UM on high-performance parallel computers (Linux clusters) owned by individual researchers, which should help researchers work with the UM more easily. The numerical integration performance of the UM on Linux clusters was also evaluated for two model resolutions, N96L85 (1.875° × 1.25° with 85 vertical levels up to 85 km) and N48L70 (3.75° × 2.5° with 70 vertical levels up to 80 km). The one-month integration times using 256 cores for the AMIP and CMIP simulations at N96L85 resolution were 169 and 205 min, respectively, and the one-month integration time for an N48L70 AMIP run using 252 cores was 33 min. Simulated 2-m surface temperature and precipitation intensity were compared with ERA5 re-analysis data; the spatial distributions of the simulated results were qualitatively similar to those of ERA5, despite quantitative differences caused by the different resolutions and by atmosphere-ocean coupling. In conclusion, this study confirms that the UM can be successfully installed and used on high-performance Linux clusters.

A study of Development of Transmission Systems for Terrestrial Single Channel Fixed 4K UHD & Mobile HD Convergence Broadcasting by Employing FEF (Future Extension Frame) Multiplexing Technique (FEF (Future Extension Frame) 다중화 기법을 이용한 지상파 단일 채널 고정 4K UHD & 이동 HD 융합방송 전송시스템 개발에 관한 연구)

  • Oh, JongGyu;Won, YongJu;Lee, JinSeop;Kim, JoonTae
    • Journal of Broadcast Engineering / v.20 no.2 / pp.310-339 / 2015
  • In this paper, the feasibility of a terrestrial fixed 4K UHD (Ultra High Definition) and mobile HD (High Definition) convergence broadcasting service over a single channel, employing the FEF (Future Extension Frame) multiplexing technique of DVB (Digital Video Broadcasting)-T2 (Second Generation Terrestrial) systems, is examined, and the performance of such a service is investigated. FEF multiplexing can adjust the FFT (fast Fourier transform) and CP (cyclic prefix) sizes for each layer, whereas M-PLP (Multiple-Physical Layer Pipe) multiplexing in DVB-T2 systems cannot. The convergence broadcasting service scenario, which provides fixed 4K UHD and mobile HD broadcasting through a single terrestrial channel, is described, and the transmission requirements of the SHVC (Scalable High Efficiency Video Coding) technique are predicted. A convergence broadcasting transmission system structure employing FEF and DVB-T2 transmission technologies is then described. Optimized transmission parameters are derived for transmitting 4K UHD and HD convergence broadcasting with this structure, and their reception performance under AWGN (additive white Gaussian noise), static Brazil-D, and time-varying TU (Typical Urban)-6 channels is examined through computer simulations to find the TOV (threshold of visibility). The results show that, for the 6 and 8 MHz bandwidths, reliable reception of both the fixed 4K UHD and mobile HD layer data can be achieved under static and very fast fading multipath channels.

Numerical Simulation of Dynamic Response of Seabed and Structure due to the Interaction among Seabed, Composite Breakwater and Irregular Waves (II) (불규칙파-해저지반-혼성방파제의 상호작용에 의한 지반과 구조물의 동적응답에 관한 수치시뮬레이션 (II))

  • Lee, Kwang-Ho;Baek, Dong-Jin;Kim, Do-Sam;Kim, Tae-Hyung;Bae, Ki-Seong
    • Journal of Korean Society of Coastal and Ocean Engineers / v.26 no.3 / pp.174-183 / 2014
  • The seabed beneath and near coastal structures may undergo large excess pore water pressure, composed of oscillatory and residual components, under long durations of high wave loading. This excess pore water pressure may reduce the effective stress and, consequently, the seabed may liquefy. If liquefaction occurs in the seabed, the structure may sink or overturn, eventually increasing the failure potential. In this study, to evaluate the liquefaction potential of the seabed, numerical analysis was conducted using a 2-dimensional numerical wave tank expanded to account for an irregular wave field. Under the irregular wave field, the dynamic wave pressure and water flow velocity acting on the seabed and on the surface boundary of the composite breakwater were estimated. The simulation results were then used as input data for a finite element computer program for elastoplastic seabed response. The simulations evaluated the temporal and spatial variations in excess pore water pressure, effective stress, and liquefaction potential in the seabed; additionally, the deformation of the seabed and the displacement of the structure were quantitatively evaluated as functions of time. From the results of the analysis, the liquefaction potential of the seabed in front of and behind the composite breakwater was identified. Since liquefied seabed particles have no resistance to force, scour potential could increase on the seabed. In addition, the strength decrease of the seabed due to liquefaction can increase the structural motion and significantly influence the stability of the composite breakwater. Owing to limitations on allowable paper length, the results were divided into two parts: (I) focusing on the dynamic response of the structure, acceleration, and deformation of the seabed, and (II) focusing on the time variation in excess pore water pressure, liquefaction, and the effective stress path in the seabed. This paper corresponds to (II).

Numerical Simulation of Dynamic Response of Seabed and Structure due to the Interaction among Seabed, Composite Breakwater and Irregular Waves (I) (불규칙파-해저지반-혼성방파제의 상호작용에 의한 지반과 구조물의 동적응답에 관한 수치시뮬레이션 (I))

  • Lee, Kwang-Ho;Baek, Dong-Jin;Kim, Do-Sam;Kim, Tae-Hyung;Bae, Ki-Seong
    • Journal of Korean Society of Coastal and Ocean Engineers / v.26 no.3 / pp.160-173 / 2014
  • The seabed beneath and near coastal structures may undergo large excess pore water pressure, composed of oscillatory and residual components, under long durations of high wave loading. This excess pore water pressure may reduce the effective stress and, consequently, the seabed may liquefy. If liquefaction occurs in the seabed, the structure may sink or overturn, eventually increasing the failure potential. In this study, to evaluate the liquefaction potential of the seabed, numerical analysis was conducted using a 2-dimensional numerical wave tank expanded to account for an irregular wave field. Under the irregular wave field, the dynamic wave pressure and water flow velocity acting on the seabed and on the surface boundary of the composite breakwater were estimated. The simulation results were then used as input data for a finite element computer program for elastoplastic seabed response. The simulations evaluated the temporal and spatial variations in excess pore water pressure, effective stress, and liquefaction potential in the seabed; additionally, the deformation of the seabed and the displacement of the structure were quantitatively evaluated as functions of time. From the results of the analysis, the liquefaction potential of the seabed in front of and behind the composite breakwater was identified. Since liquefied seabed particles have no resistance to force, scour potential could increase on the seabed. In addition, the strength decrease of the seabed due to liquefaction can increase the structural motion and significantly influence the stability of the composite breakwater. Owing to limitations on allowable paper length, the results were divided into two parts: (I) focusing on the dynamic response of the structure, acceleration, and deformation of the seabed, and (II) focusing on the time variation in excess pore water pressure, liquefaction, and the effective stress path in the seabed. This paper corresponds to (I).

Effect of Market Basket Size on the Accuracy of Association Rule Measures (장바구니 크기가 연관규칙 척도의 정확성에 미치는 영향)

  • Kim, Nam-Gyu
    • Asia pacific journal of information systems / v.18 no.2 / pp.95-114 / 2008
  • Recent interest in data mining results from the expansion of the amount of business data and the growing business need to extract valuable knowledge from the data and utilize it in decision-making processes. In particular, recent advances in association rule mining techniques enable us to acquire knowledge of sales patterns among individual items from voluminous transactional data. One of the major purposes of association rule mining is to utilize the acquired knowledge in marketing strategies such as cross-selling, sales promotion, and shelf-space allocation. In spite of the potential applicability of association rule mining, unfortunately, it is not often the case that the marketing mix acquired from data mining leads to realized profit. The main difficulty of mining-based profit realization lies in the fact that association rule mining discovers tremendous numbers of patterns. Because of the many patterns, data mining experts must perform additional mining on the results of the initial mining in order to extract only actionable and profitable knowledge, which consumes much time and cost. In the literature, a number of interestingness measures have been devised for evaluating discovered patterns. Most of the measures can be calculated directly from what is known as a contingency table, which summarizes the sales frequencies of exclusive items or itemsets. A contingency table provides brief insight into the relationship between two or more itemsets of concern. However, it is important to note that some useful information about sales transactions may be lost when a contingency table is constructed. For instance, the size of each market basket (i.e., the number of items in each transaction) cannot be described in a contingency table. Naturally, a larger basket tends to contain more sales patterns. Therefore, if two itemsets are sold together in a very large basket, it can be expected that the basket contains two or more patterns and that the two itemsets belong to mutually different patterns. We should thus classify frequent itemsets into two categories, inter-pattern co-occurrence and intra-pattern co-occurrence, and investigate the effect of market basket size on the two categories. This notion implies that any interestingness measure for association rules should consider not only the total frequency of the target itemsets but also the size of each basket. There have been many attempts at analyzing interestingness measures in the literature, most of which have conducted qualitative comparisons among various measures: they proposed desirable properties of interestingness measures and then surveyed how many properties each measure obeys. However, relatively little attention has been paid to evaluating how valuable the patterns discovered by each measure actually are in the real world. In this paper, two notions regarding association rule measures are proposed. First, a quantitative criterion for estimating the accuracy of association rule measures is presented; according to this criterion, a measure can be considered accurate if it assigns high scores to meaningful patterns that actually exist and low scores to arbitrary patterns that co-occur by coincidence. Next, complementary measures are presented to improve the accuracy of traditional association rule measures. By adopting the factor of market basket size, the devised measures attempt to discriminate the co-occurrence of itemsets in a small basket from a co-occurrence in a large basket. Intensive computer simulations under various workloads were performed to analyze the accuracy of various interestingness measures, including both the traditional and the proposed measures.
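The abstract's central point, that a contingency table discards basket size, can be made concrete with a short Python sketch. The standard measures below follow from the usual 2x2 contingency-table definitions; `size_weighted_support` is a hypothetical complementary measure invented here for illustration and is not necessarily the measure the paper proposes.

```python
def contingency_measures(n11, n10, n01, n00):
    """Standard interestingness measures for itemsets A and B from a 2x2
    contingency table: n11 = baskets containing both, n10 = A only,
    n01 = B only, n00 = neither. Note that basket sizes are not
    recoverable from these four counts."""
    n = n11 + n10 + n01 + n00
    support = n11 / n
    confidence = n11 / (n11 + n10)         # P(B | A)
    lift = confidence / ((n11 + n01) / n)  # P(B | A) / P(B)
    return support, confidence, lift

def size_weighted_support(baskets, a, b):
    """Hypothetical basket-size-aware variant: each co-occurrence of a and
    b contributes 1/|basket|, so co-occurrence in a small basket (likely
    intra-pattern) counts more than co-occurrence in a large basket
    (possibly inter-pattern coincidence)."""
    weighted = sum(1 / len(t) for t in baskets if a in t and b in t)
    return weighted / len(baskets)
```

For example, a co-occurrence inside a 2-item basket contributes twice as much to the weighted measure as the same co-occurrence inside a 4-item basket.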

Monte Carlo Simulations of Selection Responses for Improving High Meat Qualities Using Real Time Ultrasound in Korean Cattle (초음파측정 활용 고급육형 한우개량을 위한 선발반응 Monte Carlo 모의실험)

  • Lee, D. H.
    • Journal of Animal Science and Technology / v.45 no.3 / pp.343-354 / 2003
  • Simulation studies were carried out to investigate the responses of selection for three carcass traits (longissimus muscle area: EMA, fat thickness: BF, and marbling score: MS) based on either adjusted phenotypes (APH) or estimated breeding values (EBV) in a multivariate animal model under different breeding schemes. Selection responses were estimated and compared for six models with respect to breeding schemes using either carcass measurements or real-time ultrasonic (RTU) scans, generated by Monte Carlo computer simulation assuming a closed breeding population. From a base population of 100 sires and 2,000 dams, 20 sires and 1,000 dams were selected in each generation by either APH or EBV for 10 generations. Relative economic weights of the three traits were set equal, EMA(1) : BF(-1) : MS(1), for standardized APH or EBV. The first two models, designed similarly to the current progeny-test program for Korean cattle, selected breeding stock using the three carcass traits with records on male progenies only (Model 1) or on male and female progenies (Model 2); generation intervals on males were assumed to be 6~10 years in these two models. The next two models were designed with selection based on RTU rather than carcass measurements, assuming genetic correlations of 0.81~0.97 between RTU and the corresponding carcass traits, without records (Model 3) or with records (Model 4) on females; in these cases, generation intervals on males were assumed to be 2~4 years. The last two models were designed similarly to Models 3 and 4 except for genetic correlations of 0.63~0.68 between RTU and the corresponding carcass traits, without records (Model 5) and with records (Model 6) on females. The results from 10 replicates of each model and selection method suggested that the responses of indirect selection for carcass traits in Model 4 were 1.66~2.44 times as efficient as those in Model 1. In Model 6, assuming moderate genetic correlations, the efficiencies were 1.18~2.08 times those of Model 1. However, the selection response for marbling score was the smallest among the three carcass traits because of the small variation of the measurements. From these results, this study suggests that indirect selection using RTU technology for improving high meat quality in Korean cattle would be valuable, provided that the measuring rules for marbling score are modified toward larger variation or that the relative economic weights for selection are adjusted.
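The flavor of such a Monte Carlo selection experiment can be sketched in a few lines of Python. This is a drastically simplified single-trait model with invented parameters, not the paper's multivariate animal-model simulation with EBVs and overlapping generations.

```python
import random

def truncation_selection_response(n, prop, h2, generations, seed=0):
    """Minimal Monte Carlo sketch of response to truncation selection on
    phenotype. Each generation, the top `prop` fraction by phenotype
    (breeding value + environmental noise) is selected as parents, and
    each offspring inherits half of each of two random parents' breeding
    values plus a Mendelian sampling term. Returns the mean breeding
    value after `generations` rounds of selection."""
    rng = random.Random(seed)
    # Breeding values with variance h2; phenotypic variance is 1.
    bvs = [rng.gauss(0, h2 ** 0.5) for _ in range(n)]
    for _ in range(generations):
        phen = [(bv + rng.gauss(0, (1 - h2) ** 0.5), bv) for bv in bvs]
        phen.sort(reverse=True)                      # best phenotypes first
        parents = [bv for _, bv in phen[: int(n * prop)]]
        bvs = [rng.choice(parents) / 2 + rng.choice(parents) / 2
               + rng.gauss(0, (h2 / 2) ** 0.5)       # Mendelian sampling
               for _ in range(n)]
    return sum(bvs) / len(bvs)
```

With a heritability of 0.4 and the top 20% selected each generation, the mean breeding value drifts steadily upward, which is the qualitative behavior the paper's far richer six-model comparison quantifies.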

A Computer Simulation for Small Animal Iodine-125 SPECT Development (소동물 Iodine-125 SPECT 개발을 위한 컴퓨터 시뮬레이션)

  • Jung, Jin-Ho;Choi, Yong;Chung, Yong-Hyun;Song, Tae-Yong;Jeong, Myung-Hwan;Hong, Key-Jo;Min, Byung-Jun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine / v.38 no.1 / pp.74-84 / 2004
  • Purpose: Since I-125 emits low-energy (27-35 keV) radiation, a thinner crystal and collimator can be employed, making it favorable for obtaining high-quality images. The purpose of this study was to derive optimized parameters for I-125 SPECT using a new simulation tool, GATE (Geant4 Application for Tomographic Emission). Materials and Methods: To validate the simulation method, the gamma camera developed by Weisenberger et al. was modeled. A NaI(Tl) plate crystal was used, and its thickness was determined by calculating the detection efficiency. Spatial resolution and sensitivity curves were estimated by varying the parameters of the parallel-hole and pinhole collimators. The performance of an I-125 SPECT system equipped with the optimal collimator was also estimated. Results: In the validation study, the simulations were found to agree well with experimental measurements in spatial resolution (4%) and sensitivity (3%). To achieve 98% gamma-ray detection efficiency, the NaI(Tl) thickness was determined to be 1 mm. The hole diameter (mm), length (mm), and shape were chosen as 0.2:5:square and 0.5:10:hexagonal for the high-resolution (HR) and general-purpose (GP) parallel-hole collimators, respectively. The hole diameter, channel height, and acceptance angle of the pinhole (PH) collimator were determined to be 0.25 mm, 0.1 mm, and 90 degrees. The spatial resolutions of the reconstructed images of the I-125 SPECT employing HR:GP:PH were 1.2:1.7:0.8 mm, and the sensitivities of HR:GP:PH were 39.7:71.9:5.5 cps/MBq. Conclusion: The optimal crystal and collimator parameters for I-125 imaging were derived by simulation using GATE. The results indicate that imaging with excellent resolution and sensitivity is feasible using I-125 SPECT.

Multi-user Diversity Scheduling Methods Using Superposition Coding Multiplexing (중첩 코딩 다중화를 이용한 다중 사용자 다이버시티 스케줄링 방법)

  • Lee, Min;Oh, Seong-Keun
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.4A / pp.332-340 / 2010
  • In this paper, we deal with multi-user diversity scheduling methods that simultaneously transmit signals from multiple users using superposition coding multiplexing. Various scheduling methods can be obtained according to the strategy for user selection priority, from the first user to the following users, the strategy for per-user power allocation, and the resulting combinations. For the first user selection, we consider three strategies: 1) higher priority for a user with a better channel state, 2) following the proportional fair scheduling (PFS) priority, and 3) higher priority for a user with a lower average serving rate. For the selection of the following users, we consider the same strategies as for the first user; however, in the second strategy, user priorities can follow the original PFS ordering, or a single additional user can be selected for power allocation according to the PFS criterion, considering the residual power and inter-user interference. For power allocation, we consider two strategies. In the first, power is allocated to provide a permissible per-user maximum rate. In the second, power is allocated to provide a required per-user minimum rate, and the residual power is then reallocated to those users with a rate greater than the required minimum and less than the permissible maximum. We consider three directions for scheduling: maximizing the sum rate, maximizing the fairness, and maximizing the sum rate while maintaining the PFS fairness. We select the max CIR, max-min fair, and PF scheduling methods as the corresponding reference methods [1 and references therein], and then choose candidate scheduling methods whose performances are similar to or better than those of the corresponding reference methods in terms of the sum rate or the fairness, while being better in terms of the alternative metric (fairness or sum rate). Through computer simulations, we evaluate the sum rate and Jain's fairness index (JFI) performance of the various scheduling methods according to the number of users.
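One combination described above, selecting users in order of channel state and allocating each just enough power for a required minimum rate, can be sketched in Python. The Shannon-rate model and the treatment of earlier-allocated power as inter-user interference are illustrative assumptions, not the paper's exact scheduler.

```python
def greedy_sc_schedule(gains, total_power, noise, r_min):
    """Greedy superposition-coding sketch: users are ranked by channel
    gain, and each selected user is allocated just enough power to reach
    a minimum rate r_min (bits/s/Hz), with the power already allocated
    to previously selected users treated as inter-user interference.
    Selection stops when the residual power is exhausted."""
    order = sorted(range(len(gains)), key=lambda u: gains[u], reverse=True)
    residual, interference, allocation = total_power, 0.0, {}
    for u in order:
        # Solve log2(1 + g*p / (noise + g*I)) >= r_min for the power p.
        need = (2 ** r_min - 1) * (noise / gains[u] + interference)
        if need > residual:
            break  # no power left for this and weaker users
        allocation[u] = need
        residual -= need
        interference += need  # this signal interferes with later users
    return allocation
```

For two users with gains 4.0 and 1.0 under unit noise, the strong user needs only 0.25 units of power for 1 bit/s/Hz, while the weak user needs 1.25 units because it also sees the strong user's signal as interference.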

Radio location algorithm in microcellular wide-band CDMA environment (마이크로 셀룰라 Wide-band CDMA 환경에서의 위치 추정 알고리즘)

  • Chang, Jin-Weon;Han, Il;Sung, Dan-Keun;Shin, Bung-Chul;Hong, Een-Kee
    • The Journal of Korean Institute of Communications and Information Sciences / v.23 no.8 / pp.2052-2063 / 1998
  • Various full-scale radio location systems have been developed since ground-based radio navigation systems appeared during World War II, and more recently the global positioning system (GPS) has been widely used as a representative location system. In addition, radio location systems based on cellular systems are being intensively studied as cellular services become more and more popular. However, these studies have focused mainly on macrocellular systems whose base stations are mutually synchronized; there has been no study of systems whose base stations are asynchronous. In this paper, we propose two radio location algorithms for microcellular CDMA systems with asynchronous base stations. The first estimates the position of a personal station as the center of a rectangular area approximating the realistic common area. The second, a road-map-based method, first finds candidate positions, namely the centers of the roads at the pseudo-range distance from the base station to which the personal station belongs, and then estimates the position by monitoring the pilot signal strengths of neighboring base stations. We compare these two algorithms with three widespread algorithms through computer simulations and investigate the effect of interference on measuring pseudo ranges. The proposed algorithms require no recursive calculations and yield smaller position errors than the existing algorithms, because they are less affected by non-line-of-sight propagation in microcellular environments.
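The first proposed idea, taking the center of a rectangular area that approximates the common region of the pseudo-range circles, can be sketched as follows. Intersecting the bounding squares of the circles is a simplification standing in for the paper's rectangular approximation.

```python
def rectangle_center_estimate(stations, ranges):
    """Estimate a personal station's position as the center of the
    rectangle formed by intersecting the axis-aligned bounding squares
    of the pseudo-range circles around each base station. `stations` is
    a list of (x, y) base-station positions and `ranges` the measured
    pseudo ranges (illustrative stand-in for the paper's method)."""
    xmin = max(x - r for (x, y), r in zip(stations, ranges))
    xmax = min(x + r for (x, y), r in zip(stations, ranges))
    ymin = max(y - r for (x, y), r in zip(stations, ranges))
    ymax = min(y + r for (x, y), r in zip(stations, ranges))
    return ((xmin + xmax) / 2, (ymin + ymax) / 2)
```

With base stations at (0, 0) and (10, 0) and pseudo ranges of 6 from each, the overlap rectangle spans x in [4, 6], so the estimate lands at x = 5 on the line between the two stations.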
