• Title/Summary/Keyword: Random Process


Two-dimensional concrete meso-modeling research based on pixel matrix and skeleton theory

  • Jingwei Ying;Yujun Jian;Jianzhuang Xiao
    • Computers and Concrete
    • /
    • v.33 no.6
    • /
    • pp.671-688
    • /
    • 2024
  • The modeling efficiency of concrete meso-models that closely resemble real concrete is one of the important issues limiting the accuracy of mechanical simulation. To improve modeling efficiency and bring the numerical aggregate shape closer to real aggregate, this paper proposes a method for generating a two-dimensional concrete meso-model based on a pixel matrix and skeleton theory. First, an initial concrete model (a container for placing aggregate) is generated using a pixel matrix. Then, the skeleton curve of the residual space, i.e., the model after excluding the existing aggregate, is obtained using a thinning algorithm. Finally, the final model is obtained by placing aggregate at the branching points of the skeleton curve. Compared with the traditional Monte Carlo placement method, the proposed method greatly reduces the number of overlaps between aggregates, by up to 95%, and placement efficiency does not decrease significantly with increasing aggregate content. The developed model is close to actual concrete experiments in terms of aggregate gradation, aspect ratio, asymmetry, concavity and convexity, old-new mortar ratio, cracking form, and stress-strain curve. In addition, the cracking and damage process of concrete under uniaxial compression is explained at the mesoscale.
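As a point of comparison for the skeleton-based placement, the traditional Monte Carlo approach that the paper benchmarks against can be sketched as follows. This is a minimal illustration, not the authors' code; the container size, aggregate radii, and rejection loop are assumptions for the example.

```python
import random

def monte_carlo_place(width, height, radii, max_tries=10000, seed=0):
    """Place circular 'aggregates' by random trial-and-error:
    draw a random centre, reject it if it overlaps an existing
    circle or the boundary, and count the wasted attempts."""
    rng = random.Random(seed)
    placed, rejected = [], 0
    for r in radii:
        for _ in range(max_tries):
            x = rng.uniform(r, width - r)
            y = rng.uniform(r, height - r)
            ok = all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
                     for px, py, pr in placed)
            if ok:
                placed.append((x, y, r))
                break
            rejected += 1
    return placed, rejected

circles, wasted = monte_carlo_place(100.0, 100.0, [8, 6, 5, 4, 3] * 6)
print(len(circles), "aggregates placed,", wasted, "overlapping trials rejected")
```

The rejection count grows quickly with aggregate content, which is the inefficiency the branch-point placement along the residual-space skeleton is reported to remove.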

Effects of wilting on silage quality: a meta-analysis

  • Muhammad Ridla;Hajrian Rizqi Albarki;Sazli Tutur Risyahadi;Sukarman Sukarman
    • Animal Bioscience
    • /
    • v.37 no.7
    • /
    • pp.1185-1195
    • /
    • 2024
  • Objective: This meta-analysis aimed to evaluate the impact of wilted and unwilted silage on various parameters, such as nutrient content, fermentation quality, bacterial populations, and digestibility. Methods: Thirty-six studies from Scopus were included in the database and analyzed using a random effects model in OpenMEE software. The studies were grouped into two categories: wilting silage (experiment group) and non-wilting silage (control group). Publication bias was assessed using a fail-safe number. Results: The results showed that wilting before ensiling significantly increased the levels of dry matter, water-soluble carbohydrates, neutral detergent fiber, and acid detergent fiber, compared to non-wilting silage (p<0.05). However, wilting significantly decreased dry matter losses, lactic acid, acetic acid, butyric acid, and ammonia levels (p<0.05). The pH, crude protein, and ash contents remained unaffected by the wilting process. Additionally, the meta-analysis revealed no significant differences in bacterial populations, including lactic acid bacteria, yeast, and aerobic bacteria, or in vitro dry matter digestibility between the two groups (p>0.05). Conclusion: Wilting before ensiling significantly improved silage quality by increasing dry matter and water-soluble carbohydrates, as well as reducing dry matter losses, butyric acid, and ammonia. Importantly, wilting did not have a significant impact on pH, crude protein, or in vitro dry matter digestibility.
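The random-effects pooling used in such a meta-analysis (here via OpenMEE) can be sketched with the DerSimonian-Laird estimator. The effect sizes and variances below are invented for illustration, not taken from the 36 studies.

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis: estimate the
    between-study variance tau^2 from Cochran's Q, then pool with
    inverse-variance weights that include tau^2."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical per-study effects of wilting on dry matter (illustrative only)
effects = [0.8, 1.2, 0.5, 1.0, 0.9]
variances = [0.04, 0.09, 0.02, 0.05, 0.03]
pooled, tau2 = random_effects_pool(effects, variances)
print(f"pooled effect = {pooled:.3f}, tau^2 = {tau2:.4f}")
```

When tau^2 estimates to zero, the random-effects weights collapse to the fixed-effect weights, which is why the model is a safe default for heterogeneous study sets.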

A Study on the Hydraulic Characteristics of Raschig Super-Ring Random Packing in a Counter-Current Packed Tower (역류식 충전탑에서 Raschig Super-ring Random Packing의 수력학적 특성에 대한 연구)

  • Kang, Sung Jin;Lim, Dong-Ha
    • Clean Technology
    • /
    • v.26 no.2
    • /
    • pp.102-108
    • /
    • 2020
  • In recent years, packed columns have been widely used in separation processes such as absorption, desorption, distillation, and extraction in the petrochemical, fine-chemical, and environmental industries. A packed column serves as a contacting facility for gas-liquid and liquid-liquid systems, with random packing materials filling the column. Packed columns offer advantages such as low pressure drop, economic efficiency, suitability for thermally sensitive liquids, easy repair and restoration, and noxious-gas treatment. The performance of a packed column depends strongly on maintaining good gas and liquid distribution throughout the packed bed, so this is an important consideration in packed-column design. In this study, the hydraulic pressure drop, the hold-up as a function of liquid load, and the mass transfer in the air, air/water, and air-NH3/water systems were studied to determine the geometric characteristics of the Raschig super-ring, including its dry pressure drop. Based on the results, design factors and operating conditions for handling noxious gases were obtained. The dry pressure drop of the Raschig super-ring random packing increased linearly with the gas capacity factor at various liquid loads in the air/water system; this value is lower than that of the 35 mm Pall ring, the packing most commonly used in industry. The hydraulic pressure drop of the Raschig super-ring also increased consistently with the gas capacity factor at various liquid loads: when the gas capacity factor increased from 1.855 to 2.323 kg^(1/2) m^(-1/2) s^(-1), the hydraulic pressure drop increased by about 17%. Finally, the liquid hold-up per packing volume, a parameter of the specific liquid load depending on the gas capacity factor, increased steadily up to a gas capacity factor of about 3.84 kg^(1/2) m^(-1/2) s^(-1); above that value, the liquid hold-up increased sharply.

Comparative Study on the Estimation of CO2 Absorption Equilibrium in Methanol Using the PC-SAFT Equation of State and the Two-Model Approach (메탄올의 이산화탄소 흡수평형 추산에 대한 PC-SAFT모델식과 Two-model approach 모델식의 비교연구)

  • Noh, Jaehyun;Park, Hoey Kyung;Kim, Dongsun;Cho, Jungho
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.10
    • /
    • pp.136-152
    • /
    • 2017
  • Two thermodynamic models for the Rectisol process, which uses methanol as the CO2-removal solvent, were compared: the PC-SAFT (Perturbed-Chain Statistical Associating Fluid Theory) equation of state and a two-model approach combining the NRTL (Non-Random Two-Liquid) liquid activity coefficient model, Henry's law, and the Peng-Robinson equation of state. In addition, to determine new binary interaction parameters for the PC-SAFT equation of state and Henry's constants for the two-model approach, absorption equilibrium experiments between carbon dioxide and methanol were carried out at 273.25 K and 262.35 K, and regression analysis was performed. The accuracy of the newly determined parameters was verified against the experimental data. These models and validated parameters were then used to model the carbon dioxide removal process. With the two-model approach, the methanol solvent flow rate required to remove 99.00% of the CO2 was estimated to be approximately 43.72% higher, the cooling-water consumption in the distillation tower 39.22% higher, and the steam consumption 43.09% higher than with the PC-SAFT EOS. In conclusion, when the high-pressure Rectisol process was modeled using the liquid activity coefficient model with Henry's relation, it was sized larger than when the PC-SAFT equation of state was used. The reason is that Henry's law, which assumes the quantity of a sparingly soluble gas dissolved in a liquid at constant temperature is proportional to its partial pressure in the gas phase, cannot accurately predict the absorption behavior of carbon dioxide, which is highly soluble in methanol.
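The Henry's-law relation the conclusion refers to is simply a proportionality between dissolved mole fraction and partial pressure, which is why it degrades for a highly soluble pair such as CO2/methanol. A minimal sketch, with a hypothetical Henry constant rather than a value regressed in the paper:

```python
# Henry's law: p = H * x, so the predicted dissolved mole fraction is x = p / H.
# H below is a made-up constant for illustration, not the paper's regressed value.
H = 16.0e5   # Pa per unit mole fraction (hypothetical)
p = 2.0e5    # CO2 partial pressure, Pa

x = p / H
print(f"predicted CO2 mole fraction: {x:.4f}")
```

Because the prediction is linear in p by construction, it cannot capture the strong non-ideality of a highly soluble gas, which is the deviation the two-model approach inherits and the PC-SAFT equation of state avoids.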

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are demonstrated in computer simulation and experiment. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular because of its capability of reaching a nearly global optimum. However, there is a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation is time-intensive is the expense of computing the cost function, which must Fourier-transform the parameters encoded on the hologram into the fitness value. In an attempt to remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt to combine the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration comprising selection, crossover, and mutation operators [2]. The evaluation of the cost function, which selects the better holograms, plays an important role in the implementation of the GA; however, this evaluation wastes much time Fourier-transforming the encoded hologram parameters into the fitness value. Depending on the speed of the computer, this process can last up to ten minutes.
It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed: the initial population then contains fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initialize the GA is proposed; the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 is a flowchart of the hybrid algorithm compared with the classical GA. The procedure for synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network yields approximately desired holograms that agree fairly well with theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the hybrid algorithm operates like the GA except for the modified initialization; hence the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size. A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved with a population size of 30, 2000 iterations, a crossover probability of 0.75, and a mutation probability of 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is evaluated in Fig. 2.
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared with the GA method, the hybrid algorithm demonstrates efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results agree fairly well with each other. In this paper, the genetic algorithm and a neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared with the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
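The seeding idea, replacing part of the random initial population with approximate solutions from a trained network, can be illustrated on a toy binary optimization problem. The target pattern, fitness function, and "network outputs" (noisy copies of the target) below are stand-ins for the hologram cost function and the ANN, not the authors' setup.

```python
import random

def toy_hybrid_ga(target, pop_size=30, gens=200, seed_frac=0.5, rng_seed=1):
    """GA over binary strings with elitism. A fraction of the initial
    population is 'seeded' with noisy approximations of the target,
    standing in for ANN-generated near-optimal holograms."""
    rng = random.Random(rng_seed)
    n = len(target)
    fitness = lambda h: sum(a == b for a, b in zip(h, target))

    def noisy_copy():  # stand-in for an ANN output: target with 10% bit noise
        return [b ^ (rng.random() < 0.1) for b in target]

    pop = [noisy_copy() if i < int(seed_frac * pop_size)
           else [rng.randint(0, 1) for _ in range(n)]
           for i in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                    # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)  # select among the fitter half
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]       # one-point crossover
            child = [b ^ (rng.random() < 0.001) for b in child]  # mutation
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=fitness)
    return best, fitness(best)

target = [random.Random(7).randint(0, 1) for _ in range(64)]
best, fit = toy_hybrid_ga(target)
print(f"best fitness: {fit}/64")
```

Because elitism preserves the best member, the final fitness can never fall below that of the best seeded individual, which is the mechanism by which a good initialization shortens the search.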


A Partial Encryption Method for the Efficiency and the Security Enhancement of Massive Data Transmission in the Cloud Environment (클라우드 환경에서의 대용량 데이터 전송의 효율성과 보안성 강화를 위한 부분 암호화 방법)

  • Jo, Sung-Hwan;Han, Gi-Tae
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.6 no.9
    • /
    • pp.397-406
    • /
    • 2017
  • When existing encryption algorithms are used for massive-data encryption services in the cloud environment, the time required for data encryption becomes a problem. To compensate for this weakness, partial encryption is generally used; however, existing partial encryption methods have the disadvantage that the encrypted data can be inferred from the remaining unencrypted area. This study proposes a partial encryption method that increases the encryption speed while complying with security standards. The proposed method consists of three processes: header formation, partial encryption, and block shuffle. In step 1, header formation, the header data needed by the algorithm are generated. In step 2, partial encryption, part of the data is encrypted using LEA (Lightweight Encryption Algorithm), and the remaining data are transformed by XORing the unencrypted blocks with a block generated in the encryption process. In step 3, block shuffle, the blocks are mixed according to shuffle data stored in random order in the header, transforming the data into an unrecognizable form. When the proposed method was implemented and applied to a mobile device, all the encrypted data were transformed into an unrecognizable form, so the data could not be inferred and could not be restored without the encryption key. The proposed method improves the encryption speed by approximately 273% compared with LEA alone, so it can process massive data promptly.
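The three-step flow (header formation, partial encryption, block shuffle) can be sketched as follows. LEA itself is not in the Python standard library, so a SHA-256 counter-mode keystream stands in for it; the 16-byte block size, header layout, and key handling are assumptions for illustration, not the paper's specification.

```python
import hashlib
import os
import random

BLOCK = 16

def _keystream(key, nonce, nbytes):
    """Stand-in cipher for LEA: a SHA-256 counter-mode keystream."""
    out, ctr = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:nbytes]

def partial_encrypt(data, key):
    # Step 1: header formation -- a nonce plus padding length; the nonce
    # also seeds the shuffle permutation recoverable at decryption time.
    pad = (-len(data)) % BLOCK
    data += b"\x00" * pad
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    nonce = os.urandom(8)
    perm = list(range(len(blocks)))
    random.Random(int.from_bytes(nonce, "big")).shuffle(perm)
    # Step 2: partial encryption -- encrypt only the first block, then XOR
    # every remaining (unencrypted) block with the ciphertext block produced.
    c0 = bytes(a ^ b for a, b in zip(blocks[0], _keystream(key, nonce, BLOCK)))
    mixed = [c0] + [bytes(a ^ b for a, b in zip(blk, c0)) for blk in blocks[1:]]
    # Step 3: block shuffle according to the permutation from the header.
    shuffled = [mixed[i] for i in perm]
    return nonce + pad.to_bytes(1, "big") + b"".join(shuffled)

def partial_decrypt(blob, key):
    nonce, pad, body = blob[:8], blob[8], blob[9:]
    shuffled = [body[i:i + BLOCK] for i in range(0, len(body), BLOCK)]
    perm = list(range(len(shuffled)))
    random.Random(int.from_bytes(nonce, "big")).shuffle(perm)
    mixed = [None] * len(shuffled)
    for dst, blk in zip(perm, shuffled):   # invert the shuffle
        mixed[dst] = blk
    c0 = mixed[0]
    b0 = bytes(a ^ b for a, b in zip(c0, _keystream(key, nonce, BLOCK)))
    blocks = [b0] + [bytes(a ^ b for a, b in zip(blk, c0)) for blk in mixed[1:]]
    data = b"".join(blocks)
    return data[:len(data) - pad] if pad else data

msg = b"massive cloud payload " * 40
key = b"sixteen byte key"
blob = partial_encrypt(msg, key)
print("roundtrip OK:", partial_decrypt(blob, key) == msg)
```

Only one block goes through the (stand-in) cipher; the rest of the work is XOR and a permutation, which is where the claimed speed advantage over encrypting everything with LEA comes from.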

A Study on the Nonlinear Deterministic Characteristics of Stock Returns (주식 수익률의 비선형 결정론적 특성에 관한 연구)

  • Chang, Kyung-Chun;Kim, Hyun-Seok
    • The Korean Journal of Financial Management
    • /
    • v.21 no.1
    • /
    • pp.149-181
    • /
    • 2004
  • In this study we perform empirical tests using KOSPI returns to investigate the existence of nonlinear characteristics in the generating process of stock returns. The empirical tests fall into three categories: tests for nonlinear dependence, for a nonlinear stochastic process, and for nonlinear deterministic chaos. According to the nonlinearity analysis, stock returns are not normally distributed but leptokurtic, and they appear to exhibit nonlinear dependence; moreover, the nonlinear structure of stock returns cannot be completely explained by nonlinear stochastic models of the ARCH type. A nonlinear deterministic chaos system is a feedback system in which past events influence the present; it has a fractal structure with self-similarity and sensitive dependence on initial conditions. To summarize the chaos analysis of KOSPI returns: the series is persistent, not IID, has long memory, follows a biased random walk, and is estimated to have a fractal distribution. The correlation dimension, as an approximation of the fractal dimension, converged stably to between 3 and 4, and the maximum Lyapunov exponent is positive. This suggests that a chaotic attractor and sensitive dependence on initial conditions exist in stock returns. These results fit the characteristics of a chaos system; we therefore conclude that the generating process of stock returns has a nonlinear deterministic structure and follows a chaotic process.
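A positive maximum Lyapunov exponent of the kind these chaos tests look for can be demonstrated on a known chaotic system. The logistic map below is a textbook stand-in, not the KOSPI estimation procedure, which must work from an observed series rather than known dynamics.

```python
import math

def logistic_lyapunov(r=4.0, x0=0.3, burn_in=1000, n=100_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the orbit
    average of ln|f'(x)| = ln|r - 2*r*x|. For r = 4 the theoretical
    value is ln 2, so a clearly positive estimate indicates chaos."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r - 2.0 * r * x))
        x = r * x * (1.0 - x)
    return acc / n

lam = logistic_lyapunov()
print(f"lambda ~= {lam:.4f}  (theory for r=4: ln 2 = {math.log(2):.4f})")
```

A positive exponent means nearby trajectories separate exponentially, which is exactly the sensitive dependence on initial conditions the abstract attributes to KOSPI returns.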


Development of Time-Dependent Reliability-Based Design Method Based on Stochastic Process on Caisson Sliding of Vertical Breakwater (직립방파제의 케이슨 활동에 대한 확률과정에 기반한 시간의존 신뢰성 설계법 개발)

  • Kim, Seung-Woo;Cheon, Sehyeon;Suh, Kyung-Duck
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.24 no.5
    • /
    • pp.305-318
    • /
    • 2012
  • Although the existing performance-based design method for vertical breakwaters evaluates the average sliding distance during an arbitrary time, it does not calculate the probability of the first occurrence of an event exceeding the allowable sliding distance (i.e., the first-passage probability). Designers need information about the probability that the structure is damaged for the first time, not only for design but also for the maintenance and operation of the structure. Therefore, in this study, a time-dependent reliability design method based on a stochastic process is developed to evaluate the first-passage probability of caisson sliding. Caisson sliding can be formulated as a Poisson spike process because both the occurrence time and the intensity of the severe waves causing caisson sliding are random. The occurrence rate of severe waves is expressed as a function of the distribution function of sliding distance and the mean occurrence rate of severe waves; these quantities, simulated by a performance-based design method, are expressed as multivariate regression functions of the design variables. As a result, because the distribution function of sliding distance and the mean occurrence rate of severe waves are expressed as functions of significant wave height, caisson width, and water depth, the first-passage probability of caisson sliding can be evaluated easily.
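Under the Poisson spike model described above, the first-passage probability has a closed form: if exceedances of the allowable sliding distance occur at mean rate nu, the probability of at least one such event within a lifetime t is 1 - exp(-nu*t). A sketch with illustrative numbers, not the paper's regression functions:

```python
import math

def first_passage_probability(wave_rate, p_exceed, years):
    """P(first exceedance of the allowable sliding distance within t years)
    for a Poisson spike process: exceedances occur at rate nu = lambda * p,
    where lambda is the severe-wave rate and p = P(sliding > allowable)."""
    nu = wave_rate * p_exceed
    return 1.0 - math.exp(-nu * years)

# Illustrative values: 0.5 severe waves/year, a 10% chance that each causes
# sliding beyond the allowable distance, and a 50-year design lifetime.
p = first_passage_probability(wave_rate=0.5, p_exceed=0.1, years=50)
print(f"first-passage probability over 50 years: {p:.3f}")
```

In the paper both lambda and the exceedance probability come from the regression functions of significant wave height, caisson width, and water depth, but the final evaluation step has this simple form.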

Characteristics of Manufacturing for Special Cement Using High Chlorine by-product (고염소 부산물을 이용한 특수시멘트 제조 특성)

  • Moon, Kiyeon;Cho, Jinsang;Choi, Moonkwan;Cho, Kyehong
    • Resources Recycling
    • /
    • v.30 no.6
    • /
    • pp.68-75
    • /
    • 2021
  • This study investigates the manufacturing process of a calcium chloride-based special cement, CCA (calcium chloroaluminate, C11A7·CaCl2), made from limestone together with industrial by-products such as domestic coal ash and cement kiln dust. The manufacturing process of CCA was examined in detail, and the results suggest that the amount of CCA synthesized increases with firing temperature. The process was then investigated at 1200 ℃, which was determined to be the optimum firing temperature. In general, the amount of CCA synthesized tended to increase with firing time; however, the clinker melted when the firing time exceeded 30 min, suggesting that a firing time of less than 20 min is suitable for the clinkering process. The optimal firing conditions for manufacturing CCA were obtained as follows: a heating rate of 10 ℃/min, a firing temperature of 1200 ℃, and a holding time of 20 min. The results also suggest that CCA is easier to manufacture when cement kiln dust with a high chlorine content is used.

Analysis of the Process Capability Index According to the Sample Size of Multi-Measurement (다측정 표본크기에 대한 공정능력지수 분석)

  • Lee, Do-Kyung
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.42 no.1
    • /
    • pp.151-157
    • /
    • 2019
  • This study concerns the process capability index (PCI). We introduce several indices, including the index C_PR, and present the characteristics of C_PR as well as its validity. The difference between the other indices and C_PR is the way the standard deviation is estimated: most indices use the sample standard deviation, while C_PR uses the range R. The sample standard deviation is generally a better estimator than the range, but in the case of the panel process, C_PR is more consistent than the other indices with respect to the non-conforming ratio, which is an important quantity in quality control. The better consistency of the range-based C_PR is explained by introducing the concept of the 'flatness ratio'. At least one million cells are present in one panel, so we cannot inspect all of them. In estimating the PCI, the inspection cost must be considered together with consistency: a smaller sample size lowers the inspection cost, but it also makes the PCI unreliable. Because there is a trade-off between the inspection cost and the accuracy of the PCI, we should use as large a sample as the allowed inspection cost permits. For C_PR to be used throughout the industry, its characteristics must be analyzed; because C_PR is an index built on subgroups, the analysis should consider the subgroup sample size. We present a numerical analysis of C_PR using randomly generated data, and we also show the difference between the range-based C_PR and C_P, a representative index using the sample standard deviation. Regression analysis was used for the numerical analysis of the sample data, and residual analysis and equal-variance analysis were also conducted.
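The distinction the abstract draws, range-based versus standard-deviation-based estimation, can be made concrete. Below, C_P uses the pooled sample standard deviation, and a range-based index in the spirit of C_PR divides the mean range by the bias-correction constant d2 (2.326 for subgroups of five); the specification limits and subgroup data are invented for illustration.

```python
import statistics

D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}  # control-chart constants d2(n)

def capability_indices(subgroups, lsl, usl):
    """Compute C_P (sample standard deviation) and a range-based index
    in the spirit of C_PR, with sigma estimated as mean range / d2(n)."""
    n = len(subgroups[0])
    all_vals = [x for g in subgroups for x in g]
    sigma_s = statistics.stdev(all_vals)
    r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    sigma_r = r_bar / D2[n]
    spread = usl - lsl
    return spread / (6 * sigma_s), spread / (6 * sigma_r)

# Hypothetical panel measurements: three subgroups of five cells each.
subgroups = [[10.1, 9.9, 10.0, 10.2, 9.8],
             [10.0, 10.1, 9.9, 10.3, 9.7],
             [9.95, 10.05, 10.0, 10.1, 9.9]]
cp, cpr = capability_indices(subgroups, lsl=9.4, usl=10.6)
print(f"C_P = {cp:.3f}, range-based index = {cpr:.3f}")
```

The two estimates differ because the range discards within-subgroup information; which behaves better at the non-conforming-ratio level is exactly the question the paper's flatness-ratio argument addresses.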