• Title/Summary/Keyword: approximate variance (근사적 분산)


Permeability of Viscous Flow Through Packed Bed of Bidisperse Hard Spheres (이분산 구형 입자로 구성된 충전층을 흐르는 점성 유체 흐름의 투과도)

  • Sohn, Hyunjin; Koo, Sangkyun
    • Korean Chemical Engineering Research / v.50 no.1 / pp.66-71 / 2012
  • We determine, both experimentally and theoretically, the permeability of incompressible viscous flow through a packed bed of hard spheres of two different sizes. For size ratios of large to small spheres $\lambda$ = 1.25 and 2, we set up bidisperse packings and measured porosity and permeability at various volumetric ratios of small to large spheres $\gamma$. Bidisperse packing shows lower porosity and permeability than monodisperse packing, and the variation of porosity with $\gamma$ does not follow that of permeability. A theoretical expression for the permeability of viscous flow through a bidisperse packed bed is derived from the drag force acting on each sphere, and its predictions are compared with the experimental data and with previously suggested relations. Our theory agrees better with the experimental results than the previous studies and proves to be simple and accurate in estimating the permeability.
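
As a point of reference for the bidisperse permeability problem above, the sketch below evaluates the classical Kozeny-Carman relation with a Sauter mean diameter for the sphere mixture. This is not the drag-force-based expression derived in the paper, only a commonly used baseline; the porosity, sphere size, and volumetric ratios are illustrative assumptions.

```python
# Baseline permeability estimate for a bidisperse sphere packing.
# Kozeny-Carman: k = eps^3 * d^2 / (180 * (1 - eps)^2), with d taken as the
# Sauter mean diameter of the mixture. Illustrative values only; this is not
# the drag-force-based theory of the paper.

def sauter_mean_diameter(d_small, d_large, gamma):
    """gamma = volumetric ratio of small to large spheres (V_s / V_l)."""
    x_small = gamma / (1.0 + gamma)          # volume fraction of small spheres
    x_large = 1.0 - x_small
    return 1.0 / (x_small / d_small + x_large / d_large)

def kozeny_carman_permeability(porosity, diameter):
    return porosity**3 * diameter**2 / (180.0 * (1.0 - porosity)**2)

if __name__ == "__main__":
    d_small = 1.0e-3                         # 1 mm small spheres (assumed)
    for lam in (1.25, 2.0):                  # size ratios from the abstract
        for gamma in (0.25, 1.0, 4.0):       # assumed volumetric ratios
            d32 = sauter_mean_diameter(d_small, lam * d_small, gamma)
            k = kozeny_carman_permeability(porosity=0.36, diameter=d32)
            print(f"lambda={lam}, gamma={gamma}: k ~ {k:.3e} m^2")
```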

Design of a Mechanical Joint for Zero Moment Crane By Kriging (크리깅을 이용한 제로 모멘트 크레인에 적용되는 조인트의 설계)

  • Kim, Jae-Wook; Jang, In-Gwun; Kwak, Byung-Man
    • Transactions of the Korean Society of Mechanical Engineers A / v.34 no.5 / pp.597-604 / 2010
  • This study focuses on the design of a mechanical joint for a zero moment crane (ZMC), a specialized loading/unloading system used in a mobile harbor (MH). The mechanical joint is based on the concept of the zero moment point (ZMP) and plays an important role in stabilizing the ZMC. For effective stabilization, the joint must be robust to a wide variety of loads and must allow the structures connected to it to perform rotational motion with two degrees of freedom. Following a traditional design process, we designed a new mechanical joint in which a universal joint is coupled with a spherical joint and deformable rolling elements are incorporated. The rolling elements facilitate load distribution and reduce power loss during loading/unloading. Because of the complexity of the proposed system, a Kriging-based approximate optimization method is used to improve optimization efficiency. To validate the design of the proposed mechanical joint, a structural analysis is performed and a small-scale prototype is built.
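
The abstract mentions Kriging-based approximate optimization, i.e. a surrogate model fitted to a few expensive simulation samples and then optimized cheaply. The sketch below is a minimal Gaussian-process (simple Kriging) predictor with a Gaussian correlation function; the kernel, length scale, and toy objective are assumptions for illustration, not the paper's surrogate settings or design variables.

```python
import numpy as np

# Minimal Kriging-style (Gaussian-process) surrogate: fit to a few expensive
# samples, then predict cheaply anywhere else. Kernel choice, length scale and
# the toy objective are illustrative assumptions, not the paper's setup.

def gaussian_kernel(A, B, length_scale=0.5):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * length_scale**2))

def fit_predict(X_train, y_train, X_test, noise=1e-8):
    K = gaussian_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    k_star = gaussian_kernel(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    return k_star @ alpha                     # posterior mean prediction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expensive = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2  # stand-in simulation
    X_train = rng.uniform(-2, 2, size=(8, 1))
    y_train = expensive(X_train)
    X_grid = np.linspace(-2, 2, 201).reshape(-1, 1)
    y_hat = fit_predict(X_train, y_train, X_grid)
    x_best = X_grid[np.argmin(y_hat)]         # candidate optimum on the surrogate
    print("surrogate minimum near x =", x_best)
```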

A Polynomial-Time Algorithm for Linear Cutting Stock Problem (선형 재료절단 문제의 다항시간 알고리즘)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information / v.18 no.7 / pp.149-155 / 2013
  • Conventionally, one seeks the cutting patterns suitable for the stock and the number of times each pattern is used through linear programming. However, since the number of patterns grows exponentially, it is practically impossible to enumerate all existing patterns in advance. This paper therefore proposes an algorithm that exactly determines the number of feasible patterns by applying Suliman's feasible-pattern method. It also suggests a methodology for obtaining exact polynomial-time solutions over the feasible patterns without resorting to linear programming or approximation algorithms. The methodology categorizes the feasible patterns according to whether the first occurrence of every demand is distributed with zero loss or with various losses. When applied to two data sets, the proposed algorithm successfully obtains the optimal solutions.
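
To make the notion of a feasible cutting pattern concrete, the sketch below enumerates all maximal patterns for a one-dimensional stock length and a set of demand lengths. It is a generic enumeration, not Suliman's method or the paper's polynomial-time categorization; the stock length and demand lengths are made-up values.

```python
# Enumerate feasible (maximal) cutting patterns for a 1-D cutting stock problem:
# a pattern is a count vector (a_1, ..., a_m) with sum(a_i * length_i) <= stock,
# and "maximal" means no further piece of any demanded length fits in the waste.
# Stock length and demand lengths below are illustrative.

def feasible_patterns(stock_len, lengths):
    patterns = []

    def extend(idx, remaining, counts):
        if idx == len(lengths):
            if remaining < min(lengths):        # maximal: no piece fits anymore
                patterns.append((tuple(counts), remaining))
            return
        max_count = remaining // lengths[idx]
        for c in range(max_count, -1, -1):
            counts.append(c)
            extend(idx + 1, remaining - c * lengths[idx], counts)
            counts.pop()

    extend(0, stock_len, [])
    return patterns

if __name__ == "__main__":
    for counts, loss in feasible_patterns(stock_len=100, lengths=[45, 36, 31, 14]):
        print(counts, "loss =", loss)
```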

Application and Comparison of GeoWEPP model and USLE model to Natural Small Catchment - A Case Study in Danwol-dong, Icheon-si (소유역에서의 토사유출 산정을 위한 GeoWEPP model과 USLE의 비교.적용 연구 - 이천시 단월동 유역을 사례로)

  • Kim, Min-Seok; Kim, Jin-Kwan; Yang, Dong-Yoon
    • Economic and Environmental Geology / v.40 no.1 s.182 / pp.103-113 / 2007
  • The empirical USLE and the physically based GeoWEPP, a distributed model linked with GIS (Geographical Information System), were applied to a small natural catchment located in Icheon-si, Gyeonggi-do, South Korea. Both models were used to compute the total sediment yield from the study catchment between January 2004 and January 2005. During the study period, the observed total sediment yield was 270.54 ton, while the totals computed by USLE and GeoWEPP were 358.1 ton and 283.30 ton, respectively. Both models overestimated the observed total sediment yield, but the GeoWEPP estimate was closer to the observation. We suggest that the models appear to overestimate because the computed amounts do not account for the suspended sediment that flowed over the weir.
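
For context on the empirical side of the comparison above, the sketch below evaluates the standard USLE relation A = R * K * LS * C * P per map cell and sums the result over a catchment. The factor values and cell area are illustrative assumptions, not the parameters used for the Danwol-dong catchment.

```python
# Universal Soil Loss Equation per cell: A = R * K * LS * C * P
#   R  rainfall erosivity, K soil erodibility, LS slope length/steepness,
#   C  cover management,   P support practice. A is soil loss per unit area,
# so each cell's loss is multiplied by its area before summing.
# All numbers below are illustrative, not the study-site parameters.

def usle_cell_loss(R, K, LS, C, P):
    return R * K * LS * C * P            # e.g. ton / ha / yr

def catchment_yield(cells, cell_area_ha):
    return sum(usle_cell_loss(**cell) * cell_area_ha for cell in cells)

if __name__ == "__main__":
    cells = [
        dict(R=4500.0, K=0.25, LS=1.2, C=0.05, P=1.0),
        dict(R=4500.0, K=0.30, LS=2.4, C=0.10, P=1.0),
        dict(R=4500.0, K=0.20, LS=0.8, C=0.03, P=0.8),
    ]
    print("total yield ~", round(catchment_yield(cells, cell_area_ha=0.09), 2), "ton/yr")
```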

An Efficient Task Assignment Algorithm for Heterogeneous Multi-Computers (이종의 다중컴퓨터에서 태스크 할당을 위한 효율적인 알고리즘)

  • Seo, Kyung-Ryong; Yeo, Jeong-Mo
    • The Transactions of the Korea Information Processing Society / v.5 no.5 / pp.1151-1161 / 1998
  • In this paper, we consider a heterogeneous processor system in which each processor may have different performance and reliability characteristics. To fully utilize this diversity of processing power, it is advantageous to assign the modules of a distributed program to processors so that the execution time of the entire program is minimized. This assignment of tasks to processors to maximize performance is commonly called load balancing, since overloaded processors suffer performance degradation while carrying out their own processing. For the task assignment problem, we propose a new objective function that formulates this imbalance cost. Each module is then assigned to the processor whose capabilities are most appropriate for it, so that the total cost, the sum of inter-processor communication cost, execution cost, and the imbalance cost of the assignment, is minimized. Since finding an optimal assignment is known to be NP-hard, we propose an efficient heuristic algorithm with time complexity $O(n^2m)$ for m task modules and n processors.
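
The abstract above describes minimizing the sum of execution cost, inter-processor communication cost, and an imbalance cost. The sketch below is a simple greedy heuristic in that spirit: it assigns modules one at a time to the processor that currently adds the least total cost. It only illustrates the objective and is not the paper's O(n^2 m) algorithm; the cost matrices, communication graph, and imbalance weight are made up.

```python
import random

# Greedy illustration of the assignment objective described above:
#   total cost = execution cost + inter-module communication cost
#                (charged only when communicating modules sit on different
#                processors) + a quadratic load-imbalance penalty.
# This is NOT the paper's O(n^2 m) heuristic; data and weights are made up.

def greedy_assign(exec_cost, comm, imbalance_weight=1.0):
    m, n = len(exec_cost), len(exec_cost[0])      # m modules, n processors
    assign, load = [None] * m, [0.0] * n

    def delta(module, proc):
        d = exec_cost[module][proc]
        for other, c in comm.get(module, {}).items():
            if assign[other] is not None and assign[other] != proc:
                d += c                             # cross-processor traffic
        d += imbalance_weight * ((load[proc] + exec_cost[module][proc]) ** 2
                                 - load[proc] ** 2)
        return d

    for module in range(m):
        best = min(range(n), key=lambda p: delta(module, p))
        assign[module] = best
        load[best] += exec_cost[module][best]
    return assign

if __name__ == "__main__":
    random.seed(1)
    m, n = 6, 3
    exec_cost = [[random.uniform(1, 5) for _ in range(n)] for _ in range(m)]
    comm = {0: {1: 2.0}, 1: {0: 2.0, 2: 1.5}, 2: {1: 1.5}}
    print("assignment:", greedy_assign(exec_cost, comm))
```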


Cross Correlations between Probability Weighted Moments at Each Sites Using Monte Carlo Simulation (Monte Carlo 모의를 이용한 지점 간 확률가중모멘트의 교차상관관계)

  • Shin, Hong-Joon; Jung, Young-Hun; Heo, Jun-Haeng
    • Journal of Korea Water Resources Association / v.42 no.3 / pp.227-234 / 2009
  • In this study, cross correlations among the sample data at each site are calculated to obtain, via Monte Carlo simulation, the asymptotic cross correlations among the probability weighted moments at each site. The results show that the relation between the asymptotic cross correlations of the probability weighted moments and the inter-site dependence of the sample data is nearly linear with slope 1, and that the smaller the ratio of the concurrent record length to the entire sample size, the weaker this relationship becomes. A simple power function, in which a correction term accounts for the difference in sample size between the two sites, was fitted to each case to estimate the parameters. This result can be used in various studies, including the estimation of the variance of quantiles when cross correlations are taken into account.
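
To make the quantities in the abstract concrete, the sketch below computes the sample probability weighted moments b_r of a data series and runs a small Monte Carlo experiment with correlated normal samples at two sites to estimate the cross correlation between their b_1 values. The marginal distribution, record length, inter-site correlations, and number of simulations are illustrative assumptions, not the simulation design of the paper.

```python
import numpy as np

# Sample probability weighted moment b_r (unbiased estimator, ascending order):
#   b_r = (1/n) * sum_j [ (j-1)(j-2)...(j-r) / ((n-1)(n-2)...(n-r)) ] * x_(j)
# and a small Monte Carlo estimate of the cross correlation of b_1 between two
# sites whose concurrent data are correlated. All settings are illustrative.

def pwm(x, r):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    w = np.ones(n)
    for k in range(1, r + 1):
        w *= (j - k) / (n - k)
    return np.mean(w * x)

def cross_corr_b1(rho, n=50, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    b1_site1, b1_site2 = [], []
    for _ in range(n_sim):
        z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        b1_site1.append(pwm(z[:, 0], 1))
        b1_site2.append(pwm(z[:, 1], 1))
    return np.corrcoef(b1_site1, b1_site2)[0, 1]

if __name__ == "__main__":
    for rho in (0.2, 0.5, 0.8):
        print(f"sample rho={rho}: corr(b1, b1) ~ {cross_corr_b1(rho):.3f}")
```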

Evaluation of Uncertainty Importance Measure for Monotonic Function (단조함수에 대한 불확실성 중요도 측도의 평가)

  • Cho, Jae-Gyeun
    • Journal of Korea Society of Industrial Information Systems / v.15 no.5 / pp.179-185 / 2010
  • In a sensitivity analysis, an uncertainty importance measure is often used to assess how much of the uncertainty of an output is attributable to the uncertainty of an input, and thus to identify the inputs whose uncertainties must be reduced to effectively reduce the uncertainty of the output. A function is called monotonic if the output is either increasing or decreasing with respect to each of the inputs. In this paper, for a monotonic function, we propose a method for evaluating the measure that assesses the expected percentage reduction in the variance of the output due to ascertaining the value of an input. The proposed method can be applied when the output is expressed as a linear or nonlinear monotonic function of the inputs and when the inputs follow symmetric or asymmetric distributions. In addition, the proposed method provides a stable uncertainty importance for each input by discretizing the input distribution. However, the method is computationally demanding since it is based on Monte Carlo simulation.
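
The measure described above, the expected percentage reduction in output variance from learning the value of one input, can be written as [Var(Y) - E(Var(Y | X_i))] / Var(Y) = Var(E[Y | X_i]) / Var(Y). The sketch below estimates it by Monte Carlo, discretizing the input into equal-probability bins and averaging within bins, in the spirit of the abstract; the test function, input distributions, and bin count are illustrative assumptions.

```python
import numpy as np

# Uncertainty importance of input X_i for output Y = g(X):
#   importance_i = [Var(Y) - E(Var(Y | X_i))] / Var(Y) = Var(E[Y | X_i]) / Var(Y),
# estimated by discretizing X_i into equal-probability bins and taking the
# conditional mean of Y within each bin. Test function and distributions are assumed.

def uncertainty_importance(g, sample_inputs, i, n=100_000, n_bins=50, seed=0):
    rng = np.random.default_rng(seed)
    X = sample_inputs(rng, n)                     # shape (n, d)
    Y = g(X)
    var_y = Y.var()
    edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1, 0, n_bins - 1)
    cond_mean = np.array([Y[bins == b].mean() for b in range(n_bins)])
    counts = np.array([(bins == b).sum() for b in range(n_bins)])
    var_cond_mean = np.average((cond_mean - Y.mean()) ** 2, weights=counts)
    return var_cond_mean / var_y

if __name__ == "__main__":
    g = lambda X: X[:, 0] + 2.0 * X[:, 1] ** 3    # monotonic in both inputs
    sample = lambda rng, n: np.column_stack([rng.normal(size=n),
                                             rng.normal(size=n)])
    for i in (0, 1):
        print(f"importance of X_{i} ~ {uncertainty_importance(g, sample, i):.3f}")
```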

A Study of Shelf Life about Li-ion Battery (리튬 2차 전지의 저장 수명에 관한 연구)

  • Kim, Dong-seong; Jin, Hong-Sik
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.339-345 / 2020
  • In the field of defense, one-shot devices such as missiles are stored for a long period after manufacture, so predicting their storage life is essential. This study estimates the shelf life of a Li-ion battery used in one-shot devices. A Li-ion battery that had been used in weapon systems for more than five years was secured, and a non-functional test was performed to check for external changes or failures. After the non-functional test, a discharge test was performed to measure the performance after storage. Through these tests, the initial charging voltage, discharge time, and battery temperature were checked and their trends were identified. An F-test, a one-way ANOVA, and regression analysis were performed to verify aging, and the shelf life of the battery was estimated from an approximation formula derived through the regression analysis. The ANOVA gave a p-value below the reference value of 0.05, and the performance of the battery decreased by more than 15% after a certain period. This change is assumed to result from changes in the physical properties of the lithium polymer cell.
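
The shelf-life estimate described above comes from regressing a performance measure against storage time and extrapolating to a failure threshold. The sketch below fits a simple linear regression with numpy and solves for the time at which capacity has fallen 15% below its initial value; the data points and the linear-degradation assumption are illustrative, not the paper's measurements or fitted formula.

```python
import numpy as np

# Shelf-life estimation in the spirit of the abstract: regress a performance
# measure (here, relative discharge time) on storage years, then solve for the
# time at which it drops 15% below the initial level. The data points and the
# linear model are illustrative assumptions, not the paper's measurements.

storage_years = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
rel_discharge = np.array([1.00, 0.98, 0.95, 0.93, 0.90, 0.86])   # fraction of initial

slope, intercept = np.polyfit(storage_years, rel_discharge, deg=1)
threshold = 0.85 * rel_discharge[0]             # 15% degradation criterion
shelf_life = (threshold - intercept) / slope    # solve intercept + slope*t = threshold

print(f"fitted degradation: {slope:.4f} per year")
print(f"estimated shelf life ~ {shelf_life:.1f} years")
```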

Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity (오프 폴리시 강화학습에서 몬테 칼로와 시간차 학습의 균형을 사용한 적은 샘플 복잡도)

  • Kim, Chayoung; Park, Seohee; Lee, Woosik
    • Journal of Internet Computing and Services / v.21 no.5 / pp.1-7 / 2020
  • Deep neural networks (DNN), used as function approximators in reinforcement learning (RL), can in theory yield realistic results. In empirical benchmarks, temporal difference learning (TD) usually shows better results than Monte Carlo learning (MC). However, some previous works show that MC is better than TD when the reward is very rare or delayed, and another recent study indicates that MC prediction is superior to TD-based methods on complex control tasks where the agent's observation of the environment is partial. Most of these environments can be regarded as 5-step or 20-step Q-learning, where the experiments proceed without long roll-outs to alleviate performance degradation. In other words, in noisy environments, regardless of the controlled roll-outs, it is better to learn with MC, which is more robust to noisy rewards than TD, or with something close to MC. These studies break with the conventional view that TD is better than MC and suggest that combining MC and TD can outperform either alone. Therefore, in this study, building on those previous results, we exploit a random balance between TD and MC in off-policy RL without the complicated reward formulations used in those studies. Comparing a DQN that uses the random mixture of MC and TD with a well-known DQN that uses only TD-based learning, we demonstrate through experiments in OpenAI Gym that the benefits of well-performing TD learning are also obtained by the mixture of TD and MC.
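
A minimal way to read "random balance between MC and TD" is to draw, per transition, a coefficient that blends the full Monte Carlo return with the one-step bootstrapped TD target. The sketch below builds such a blended target for a recorded episode; the uniform blending rule and the toy Q-table are assumptions for illustration, not the exact update used in the paper.

```python
import random

# Blended learning target per time step t of a recorded episode:
#   G_t^MC = r_t + gamma*r_{t+1} + ...                    (Monte Carlo return)
#   G_t^TD = r_t + gamma * max_a Q(s_{t+1}, a)            (one-step TD target)
#   target_t = beta * G_t^MC + (1 - beta) * G_t^TD,  beta ~ Uniform(0, 1)
# The blending rule and the toy Q-table are illustrative assumptions.

def blended_targets(rewards, next_states, q_values, gamma=0.99, rng=random):
    # Monte Carlo returns, computed backwards over the episode.
    mc = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        mc[t] = running
    targets = []
    for t, (r, s_next) in enumerate(zip(rewards, next_states)):
        bootstrap = max(q_values.get(s_next, [0.0]))      # 0 for terminal state
        td = r + gamma * bootstrap
        beta = rng.uniform(0.0, 1.0)                      # random MC/TD balance
        targets.append(beta * mc[t] + (1.0 - beta) * td)
    return targets

if __name__ == "__main__":
    random.seed(0)
    rewards = [0.0, 0.0, 1.0]
    next_states = ["s1", "s2", None]                      # None marks terminal
    q_values = {"s1": [0.2, 0.5], "s2": [0.7, 0.4]}
    print(blended_targets(rewards, next_states, q_values))
```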

Localization Scheme with Weighted Multiple Rings in Wireless Sensor Networks (무선 센서 네트워크에서 가중 다중 링을 이용한 측위 기법)

  • Ahn, Hong-Beom; Hong, Jin-Pyo
    • Journal of KIISE: Information Networking / v.37 no.5 / pp.409-414 / 2010
  • Applications based on geographical location are increasing rapidly in wireless sensor networks (WSN). Recently, various localization algorithms have been proposed, but the majority rely on specific hardware to measure the distance from signal sources. In this paper, we propose Weighted Multiple Rings Localization (WMRL). We assume that each deployed anchor node periodically emits successive beacon signals at different power levels; theoretically, these beacon signals form concentric rings according to their emitted power level. The proposed algorithm assigns a different weighting factor based on the ratio of the radii of the rings. A listening sensor node can determine, for each anchor node, the innermost ring of the propagated signal it can hear. Based on this information, the location of the sensor node is derived as a weighted sum of the coordinates of the surrounding anchor nodes. The proposed algorithm is fully distributed and requires neither additional hardware nor unreliable distance indicators such as RSSI and LQI. Nevertheless, simulation results show that WMRL with two rings outperforms the centroid algorithm by a factor of two, and with three rings its accuracy is approximately equal to that of WCL (Weighted Centroid Localization).
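
The localization step described above reduces to a weighted centroid: each heard anchor contributes its coordinates with a weight determined by the innermost beacon ring the node can hear. The sketch below implements that weighted sum; the ring radii, the weighting rule (inverse of the innermost ring radius), and the anchor layout are illustrative assumptions rather than the exact weights defined in the paper.

```python
# Weighted-centroid localization from ring information, in the spirit of WMRL:
# each anchor is heard up to some innermost ring (1 = closest ring), and the
# node position is the weighted average of anchor coordinates. The weighting
# rule (inverse of the innermost ring radius) and all numbers are assumptions.

def wmrl_estimate(anchors, ring_radii):
    """anchors: list of ((x, y), innermost_ring_index) with 1-based ring index."""
    wx = wy = wsum = 0.0
    for (x, y), ring in anchors:
        w = 1.0 / ring_radii[ring - 1]       # inner rings (stronger signal) weigh more
        wx += w * x
        wy += w * y
        wsum += w
    return wx / wsum, wy / wsum

if __name__ == "__main__":
    ring_radii = [10.0, 20.0, 30.0]          # metres, one per beacon power level
    anchors = [((0.0, 0.0), 1),              # heard at the innermost ring
               ((40.0, 0.0), 2),
               ((0.0, 40.0), 3)]
    print("estimated position ~", wmrl_estimate(anchors, ring_radii))
```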