• Title/Summary/Keyword: Homogeneous Model


Program Development to Evaluate Permeability Tensor of Fractured Media Using Borehole Televiewer and BIPS Images and an Assessment of Feasibility of the Program on Field Sites (시추공 텔리뷰어 및 BIPS의 영상자료 해석을 통한 파쇄매질의 투수율텐서 계산 프로그램 개발 및 현장 적용성 평가)

  • 구민호;이동우;원경식
    • The Journal of Engineering Geology
    • /
    • v.9 no.3
    • /
    • pp.187-206
    • /
    • 1999
  • A computer program to numerically predict the permeability tensor of fractured rocks is developed using information on discontinuities provided by the Borehole Televiewer and the Borehole Image Processing System (BIPS). It takes the orientations and thicknesses of a large number of discontinuities as input data and calculates relative values of the nine elements of the permeability tensor using a formulation based on the EPM model, which regards a fractured rock as a homogeneous, anisotropic porous medium. To assess the feasibility of the program on field sites, the numerically calculated tensor obtained from BIPS logs was compared to the results of pumping tests conducted in boreholes of the study area. The degree of horizontal anisotropy and the direction of maximum horizontal permeability are 2.8 and $N77^{\circ}E$, respectively, as determined from the pumping test data, versus 3.0 and $N63^{\circ}E$ from the numerical analysis by the developed program. The disagreement between the two analyses, especially in the principal direction of anisotropy, seems to be caused by problems in analyzing the pumping test data, in the applicability of the EPM model and the cubic law, and in the simplified relationship between crack size and aperture. Aside from these problems, consideration of hydraulic parameters characterizing the roughness of cracks and infilling materials seems to be required to improve the feasibility of the proposed program. A three-dimensional assessment of its feasibility on field sites can be accomplished by conducting a series of cross-hole packer tests with an injection well and a monitoring well at close distance.
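
EPM formulations of this kind are commonly built on Snow's parallel-plate (cubic-law) model, in which each fracture crossing the sampled borehole interval adds $(b^3/12L)\,(\mathbf{I}-\mathbf{n}\mathbf{n}^{T})$ to the permeability tensor. The sketch below illustrates that idea only; it is not the authors' program, and the orientation convention, helper names, and all input values are hypothetical.

```python
import numpy as np

def normal_from_orientation(dip_dir_deg, dip_deg):
    """Unit normal of a fracture plane from dip direction/dip angle
    (degrees), in an east-north-up frame (hypothetical convention)."""
    dd, d = np.radians(dip_dir_deg), np.radians(dip_deg)
    return np.array([np.sin(d) * np.sin(dd),   # east
                     np.sin(d) * np.cos(dd),   # north
                     np.cos(d)])               # up

def permeability_tensor(normals, apertures, scanline_length):
    """Relative permeability tensor of a fractured medium treated as a
    homogeneous anisotropic porous medium (Snow-type EPM): each fracture
    of aperture b contributes (b^3 / 12L) * (I - n n^T) via the cubic law."""
    k = np.zeros((3, 3))
    for n, b in zip(normals, apertures):
        n = np.asarray(n, float)
        n /= np.linalg.norm(n)                 # ensure unit normal
        k += b**3 / (12.0 * scanline_length) * (np.eye(3) - np.outer(n, n))
    return k

# Hypothetical fracture set logged over a 30 m borehole interval
normals = [normal_from_orientation(167, 70), normal_from_orientation(250, 55)]
apertures = [1.0e-4, 5.0e-5]                   # hydraulic apertures, metres
k = permeability_tensor(normals, apertures, scanline_length=30.0)
vals, vecs = np.linalg.eigh(k)                 # principal permeabilities/directions
print(k)
print("principal values:", vals)
```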


Characteristics of Bearing Capacity under Square Footing on Two-layered Sand (2개층 사질토지반에서 정방형 기초의 지지력 특성)

  • 김병탁;김영수;이종현
    • Journal of the Korean Geotechnical Society
    • /
    • v.17 no.4
    • /
    • pp.289-299
    • /
    • 2001
  • This study examined the ultimate bearing capacity and settlement of a square footing resting on homogeneous and two-layered non-homogeneous sand deposits. Model tests were performed to identify the effects on shallow footing behavior of the footing size, the relative density of the sand, the ratio of upper-layer thickness to footing width (H/B), the inclination of the boundary beneath the upper layer ($\theta$), and the stiffness ratio of the layers. At the same relative density, the bearing capacity factor ($N_{{\gamma}}$) is not constant but is directly related to the footing width, decreasing as the footing width increases. Ultimate bearing capacities predicted by Ueno's method, which accounts for the effects of footing size and confining pressure, agreed with the measurements better than the classical bearing capacity formulas, amounting to 65% or more of the experimental values. Based on the results for two-layered deposits with $\theta=0^{\circ}$, the limiting upper-layer thickness beyond which the influence of the lower layer on the ultimate bearing capacity can be neglected was determined to be twice the footing width. However, for an upper-layer relative density of 73%, this result was valid only for settlement ratios ($\delta$/B) below 0.05. Based on the results for two-layered deposits with an inclined boundary, the influence of the boundary inclination on the ultimate bearing capacity decreased as the upper layer became looser and thicker. As the boundary inclination increased, the ultimate settlement varied between about 0.82 and 1.2 times (upper layer $D_{r}$=73%) and between about 0.9 and 1.07 times (upper layer $D_{r}$=50%) that of the horizontal-boundary case ($\theta=0^{\circ}$).
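
For reference, the "classical bearing capacity formula" that the abstract compares against is often written, for a square surface footing on cohesionless sand, in Terzaghi's form $q_{ult} = 0.4\,\gamma B N_{\gamma}$. The sketch below simply evaluates it; the function name and all numbers are hypothetical.

```python
def q_ult_square(gamma, B, N_gamma):
    """Terzaghi ultimate bearing capacity for a square surface footing on
    cohesionless sand (c = 0, no surcharge): q_ult = 0.4 * gamma * B * N_gamma.
    Note the abstract's finding: N_gamma itself decreases as the footing
    width grows, so a single N_gamma overestimates capacity at large B."""
    return 0.4 * gamma * B * N_gamma

# hypothetical values: unit weight 16 kN/m^3, B = 0.1 m, N_gamma = 150
print(q_ult_square(16.0, 0.1, 150.0), "kPa")
```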


Illegal Cash Accommodation Detection Modeling Using Ensemble Size Reduction (신용카드 불법현금융통 적발을 위한 축소된 앙상블 모형)

  • Lee, Hwa-Kyung;Han, Sang-Bum;Jhee, Won-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.1
    • /
    • pp.93-116
    • /
    • 2010
  • An ensemble approach is applied to the detection modeling of illegal cash accommodation (ICA), a well-known type of fraudulent credit card usage in Far East nations that has not been addressed in the academic literature. The performance of a fraud detection model (FDM) suffers from the imbalanced data problem, which can be remedied to some extent using an ensemble of many classifiers. It is generally accepted that ensembles of classifiers produce better accuracy than a single classifier, provided there is diversity in the ensemble. Furthermore, recent research reveals that it may be better to ensemble some selected classifiers instead of all of the classifiers at hand. For effective detection of ICA, we adopt an ensemble size reduction technique that prunes the ensemble of all classifiers using accuracy and diversity measures; diversity in an ensemble manifests itself as disagreement or ambiguity among members. The data imbalance intrinsic to FDM affects our approach for ICA detection in two ways. First, we suggest a training procedure with over-sampling methods to obtain diverse training data sets. Second, we use variants of the accuracy and diversity measures that focus on the fraud class. We also calculate the diversity measure dynamically during the pruning procedures, Forward Addition and Backward Elimination. In our experiments, neural networks, decision trees, and logit regressions are the base models serving as ensemble members, and the performance of homogeneous ensembles is compared with that of heterogeneous ensembles. The experimental results show that the reduced-size ensemble is, on average over the data sets tested, as accurate as the non-pruned version, which provides benefits in terms of application efficiency and reduced ensemble complexity.
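
A minimal sketch of Forward Addition style ensemble pruning as described above, scoring candidates by a weighted mix of accuracy and pairwise disagreement with the members already chosen. The weighting scheme and names are hypothetical, not the paper's exact fraud-class measures.

```python
import numpy as np

def disagreement(a, b):
    """Pairwise diversity: fraction of samples where two members disagree."""
    return float(np.mean(a != b))

def forward_addition(preds, y, target_size, alpha=0.5):
    """Greedy ensemble size reduction: repeatedly add the member that
    maximises alpha * accuracy + (1 - alpha) * mean disagreement with the
    current ensemble. `preds` is a list of per-member label arrays."""
    selected, remaining = [], list(range(len(preds)))
    while remaining and len(selected) < target_size:
        def score(i):
            acc = float(np.mean(preds[i] == y))
            div = (np.mean([disagreement(preds[i], preds[j]) for j in selected])
                   if selected else 0.0)
            return alpha * acc + (1 - alpha) * div
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```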

Removals of Formaldehyde by Silver Nano Particles Attached on the Surface of Activated Carbon (나노 은입자가 첨착된 활성탄의 포름알데히드 제거특성)

  • Shin, Seung-Kyu;Kang, Jeong-Hee;Song, Ji-Hyeon
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.10
    • /
    • pp.936-941
    • /
    • 2010
  • This study was conducted to investigate formaldehyde removal by silver nano-particles attached on the surface of granular activated carbon (Ag-AC) and to compare the results to those obtained with ordinary activated carbon (AC). The BET analysis showed that the overall surface area and the fraction of micropores (less than $20{\AA}$ in diameter) of the Ag-AC were significantly decreased because the silver particles blocked the small pores on the surface of the Ag-AC. The formaldehyde removal capacity of the Ag-AC, determined using the Freundlich isotherm, was higher than that of the AC. Despite the decreased BET surface area and micropore volume, the Ag-AC had an increased removal capacity for formaldehyde, presumably due to catalytic oxidation by the silver nano-particles. In contrast, the adsorption intensity of the Ag-AC, estimated by 1/n in the Freundlich isotherm equation, was similar to that of the ordinary AC, indicating that the surface modification using silver nano-particles did not affect the adsorption characteristics of the AC. In a column experiment, the Ag-AC also showed a longer breakthrough time than the AC. Simulation results using the homogeneous surface diffusion model (HSDM) fitted the breakthrough curve of formaldehyde well for the ordinary AC, but the predictions showed substantial deviations from the experimental data for the Ag-AC. The discrepancy was due to the catalytic oxidation by the silver nano-particles, which was not incorporated in the HSDM. Consequently, a new numerical model that takes the catalytic oxidation into account needs to be developed to predict the combined oxidation and adsorption process more accurately.
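
The Freundlich isotherm used above is $q = K C^{1/n}$, with $K$ the capacity parameter and $1/n$ the adsorption intensity. A small sketch of fitting both parameters to equilibrium data; the data points here are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(C, K, n_inv):
    """Freundlich isotherm: q = K * C**(1/n), with n_inv = 1/n."""
    return K * C**n_inv

# hypothetical equilibrium data: C in mg/L, q in mg/g
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
q = np.array([1.2, 1.8, 2.6, 4.1, 5.9])

(K, n_inv), _ = curve_fit(freundlich, C, q, p0=(1.0, 0.5))
print(f"K = {K:.2f}, 1/n = {n_inv:.2f}")  # capacity vs. adsorption intensity
```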

Spatial reproducibility of complex fractionated atrial electrogram depending on the direction and configuration of bipolar electrodes: an in-silico modeling study

  • Song, Jun-Seop;Lee, Young-Seon;Hwang, Minki;Lee, Jung-Kee;Li, Changyong;Joung, Boyoung;Lee, Moon-Hyoung;Shim, Eun Bo;Pak, Hui-Nam
    • The Korean Journal of Physiology and Pharmacology
    • /
    • v.20 no.5
    • /
    • pp.507-514
    • /
    • 2016
  • Although 3D complex fractionated atrial electrogram (CFAE) mapping is useful in radiofrequency catheter ablation for persistent atrial fibrillation (AF), the direction and configuration of the bipolar electrodes may affect the electrogram. This study aimed to compare the spatial reproducibility of CFAE under changing catheter orientations and electrode distances in an in-silico left atrium (LA). We conducted this study by importing the heart CT image of a patient with AF into a 3D homogeneous human LA model. Electrogram morphology and CFAE cycle lengths (CLs) were compared for 16 different orientations of a virtual bipolar conventional catheter (conv-cath: size 3.5 mm, inter-electrode distance 4.75 mm). Additionally, the spatial correlations of CFAE-CLs and the percentage of consistent sites with CFAE-CL < 120 ms were analyzed. The results from the conv-cath were compared with those obtained using a mini catheter (mini-cath: size 1 mm, inter-electrode distance 2.5 mm). Depending on the catheter orientation, the electrogram morphology and CFAE-CLs varied (conv-cath: $11.5{\pm}0.7%$ variation, mini-cath: $7.1{\pm}1.2%$ variation); however, the mini-cath produced less variation of CFAE-CL than the conv-cath (p<0.001). There were moderate spatial correlations among the CFAE-CLs measured at the 16 orientations (conv-cath: $r=0.3055{\pm}0.2194$ vs. mini-cath: $r=0.6074{\pm}0.0733$, p<0.001). Additionally, the ratio of consistent CFAE sites was higher for the mini catheter than for the conventional one ($38.3{\pm}4.6%$ vs. $22.3{\pm}1.4%$, p<0.05). Electrograms and CFAE distribution are affected by catheter orientation and electrode configuration in the in-silico LA model. However, there was moderate spatial consistency of CFAE areas, and narrowly spaced bipolar catheters were less influenced by catheter direction than conventional catheters.
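
CFAE-CL is essentially the mean interval between detected deflections of the bipolar electrogram, with sites below 120 ms counted as CFAE. A rough sketch of that measurement on a sampled signal; the detection threshold and refractory period are hypothetical analysis settings, not the study's exact criteria.

```python
import numpy as np

def cfae_cycle_length(egm, fs_hz, threshold, refractory_ms=40.0):
    """Mean inter-deflection interval (ms) of a bipolar electrogram.

    A deflection is a local peak of |egm| above `threshold`, at least one
    refractory period after the previous one. Sites with a short mean
    interval (e.g. < 120 ms) would be classified as CFAE."""
    refractory = int(refractory_ms * fs_hz / 1000.0)
    mag = np.abs(np.asarray(egm, float))
    peaks, last = [], -refractory
    for i in range(1, len(mag) - 1):
        if (mag[i] >= threshold and mag[i-1] <= mag[i] >= mag[i+1]
                and i - last >= refractory):
            peaks.append(i)
            last = i
    if len(peaks) < 2:
        return float("inf")     # too few deflections to define a CL
    return float(np.mean(np.diff(peaks))) * 1000.0 / fs_hz
```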

Study on the Coefficient of Thermal Expansion for Composites Containing 2-Dimensional Ellipsoidal Inclusions (2차원 타원형의 충전제를 함유하는 복합재료의 열팽창 계수 연구)

  • Lee, Kee-Yoon;Kim, Kyung-Hwan;Jeoung, Sun-Kyoung;Jeon, Hyoung-Jin;Joo, Sang-Il
    • Polymer(Korea)
    • /
    • v.31 no.2
    • /
    • pp.160-167
    • /
    • 2007
  • This paper proposes a model for predicting the coefficient of thermal expansion of composites containing fiber-like $(a_1>a_2=a_3)$ and disk-like $(a_1=a_2>a_3)$ inclusions with two-dimensional geometries, analyzed along one axis with a single aspect ratio $(\rho_\alpha=a_1/a_3)$. The analysis follows the procedure developed for elastic moduli using Lee and Paul's approach. The effects of the aspect ratio on the coefficient of thermal expansion of composites containing aligned isotropic inclusions are examined. The model is limited to composites with unidirectionally aligned inclusions and perfect bonding between the matrix and inclusions, both having homogeneous properties. The longitudinal coefficient of thermal expansion $\alpha_{11}$ decreases and approaches the coefficient of thermal expansion of the filler as the aspect ratio increases, whereas the transverse coefficient of thermal expansion $\alpha_{33}$ may increase or decrease with the aspect ratio.
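
As a reference point for the longitudinal trend described above, the classical continuous-fiber limit is often written in Schapery's form; this is a standard limiting case, not the paper's Lee-Paul formulation:

$$\alpha_{11}\;\longrightarrow\;\frac{E_f V_f \alpha_f + E_m V_m \alpha_m}{E_f V_f + E_m V_m}\quad\text{as}\quad\rho_\alpha\to\infty$$

For a filler much stiffer than the matrix ($E_f \gg E_m$), this weighted form tends toward $\alpha_f$, consistent with the abstract's statement that $\alpha_{11}$ approaches the filler's coefficient of thermal expansion at large aspect ratios.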

Analysis of Hydraulic Fracture Geometry by Considering Stress Shadow Effect during Multi-stage Hydraulic Fracturing in Shale Formation (셰일저류층의 다단계 수압파쇄에서 응력그림자 효과를 고려한 균열형태 분석)

  • Yoo, Jeong-min;Park, Hyemin;Wang, Jihoon;Sung, Wonmo
    • Journal of the Korean Institute of Gas
    • /
    • v.25 no.1
    • /
    • pp.20-29
    • /
    • 2021
  • During multi-stage fracturing in a low-permeability shale formation, stress interference occurs between stages, which is called the "stress shadow effect" (SSE). The effect may alter the fracture propagation direction and induce non-uniform fracture geometry. In this study, the effects of the stress shadow on hydraulic fracture geometry and well productivity were investigated with the commercial full-3D fracture model GOHFER. In a homogeneous reservoir model, a multi-stage fracturing process was simulated with and without the SSE. In addition, fracturing was simulated for two shale reservoirs with different geomechanical properties (Young's modulus and Poisson's ratio) to analyze the stress shadow effect. In the simulation results, the stress change caused by the fracture created in the previous stage switched the maximum/minimum horizontal stresses, and the lower-productivity L-direction fracture dominated over the T-direction fracture. Since the Marcellus shale is more brittle than the relatively ductile Eagle Ford shale, the fracture width in the former developed thicker, resulting in a larger fracture volume. Also, because the Marcellus shale's Young's modulus is lower, its stress shadow effect in stage 2 is less significant than that of the Eagle Ford shale. The stress shadow effect strongly depends not only on the spacing between fractures but also on the geomechanical properties. Therefore, the stress shadow effect needs to be taken into account for a more accurate analysis of the fracture geometry and a more reliable prediction of well productivity.
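
The reorientation mechanism above can be stated compactly: the next stage's fracture turns (an L-direction fracture forms) once the stress shadow added to the minimum horizontal stress exceeds the in-situ horizontal stress contrast. A toy check of that condition, with the shadow increment assumed to come from a fracture simulator such as GOHFER; all values are hypothetical.

```python
def stresses_switch(sigma_H, sigma_h, d_sigma_shadow):
    """True if the previous stage's stress shadow raises the minimum
    horizontal stress past the maximum one, switching sigma_H / sigma_h
    and hence rotating the next fracture's propagation direction."""
    return sigma_h + d_sigma_shadow > sigma_H

# hypothetical stresses in MPa: 2 MPa contrast, 3 MPa shadow increment
print(stresses_switch(sigma_H=52.0, sigma_h=50.0, d_sigma_shadow=3.0))  # True
```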

Simulation analysis and evaluation of decontamination effect of different abrasive jet process parameters on radioactively contaminated metal

  • Lin Zhong;Jian Deng;Zhe-wen Zuo;Can-yu Huang;Bo Chen;Lin Lei;Ze-yong Lei;Jie-heng Lei;Mu Zhao;Yun-fei Hua
    • Nuclear Engineering and Technology
    • /
    • v.55 no.11
    • /
    • pp.3940-3955
    • /
    • 2023
  • A new method is proposed for numerical simulation prediction and decontamination effect evaluation of abrasive jet decontamination of radioactively contaminated metal. Based on a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) simulation, the motion patterns and distribution of abrasives can be predicted, and the decontamination effect can be evaluated by image processing and recognition technology. The impact of three key parameters (impact distance, inlet pressure, and abrasive mass flow rate) on the decontamination effect is revealed, and experiments were conducted to verify the reliability of the decontamination effect and of the numerical simulation method. The results show that 60Co and other homogeneous solid-solution radioactive pollutants can be removed by the abrasive jet, with an average removal rate of Co exceeding 80%. The proposed numerical simulation and evaluation method is reliable because of the good agreement between predicted and actual values: the predicted and actual abrasive distribution diameters are Ф57 and Ф55, the total coverage rates are 26.42% and 23.50%, and the average impact velocities are 81.73 m/s and 78.00 m/s, respectively. Further analysis shows that the impact distance has a significant effect on the distribution of abrasive particles on the target surface; the coverage rate of the core area first increases and then decreases with increasing nozzle impact distance, reaching a maximum of 14.44% at 300 mm. It is recommended to set the impact distance around 300 mm, because at this distance the core-area coverage of the abrasive is largest and the impact velocity is stable at its highest value of 81.94 m/s. The nozzle inlet pressure mainly affects the impact kinetic energy of the abrasive and has little effect on the distribution: the greater the inlet pressure, the greater the impact kinetic energy and the stronger the decontamination ability of the abrasive, but in return the energy consumption is higher. For the decontamination of radioactively contaminated metals, it is recommended to set the nozzle inlet pressure at around 0.6 MPa, because most of the Co can be removed at this pressure. Appropriately increasing the abrasive mass and flow rate can enhance the decontamination effectiveness; a total abrasive mass of 50 g per unit decontamination area is suggested, because the core-area coverage rate of the abrasive is relatively large under this condition and the nozzle wear is acceptable.
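
The image-based evaluation above amounts to measuring what fraction of the (core) target area the simulated abrasives actually hit. A minimal sketch of that coverage computation on a binary impact mask; the array shapes and masks below are hypothetical stand-ins for the CFD-DEM post-processing output.

```python
import numpy as np

def coverage_rate(impact_mask, region_mask):
    """Fraction of `region_mask` pixels (e.g. the core area of the target
    surface) covered by detected abrasive impacts in `impact_mask`."""
    hits = np.logical_and(impact_mask, region_mask).sum()
    return hits / region_mask.sum()

# hypothetical 200x200 masks: random impacts, core region as a centred disc
rng = np.random.default_rng(0)
impact = rng.random((200, 200)) < 0.25
yy, xx = np.mgrid[:200, :200]
core = (yy - 100) ** 2 + (xx - 100) ** 2 < 60 ** 2
print(f"core coverage: {coverage_rate(impact, core):.2%}")
```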

Uncertainty Calculation Algorithm for the Estimation of the Radiochronometry of Nuclear Material (핵물질 연대측정을 위한 불확도 추정 알고리즘 연구)

  • JaeChan Park;TaeHoon Jeon;JungHo Song;MinSu Ju;JinYoung Chung;KiNam Kwon;WooChul Choi;JaeHak Cheong
    • Journal of Radiation Industry
    • /
    • v.17 no.4
    • /
    • pp.345-357
    • /
    • 2023
  • Nuclear forensics has been recognized as a mandatory component of international nuclear material control and non-proliferation verification. Radiochronometry for nuclear forensics uses the decay-series characteristics of nuclear materials and the Bateman equation to estimate when nuclear materials were purified and produced. Radiochronometry values carry measurement uncertainty arising from the uncertainty factors in the estimation process, and these uncertainties should be calculated using appropriate evaluation methods that represent accuracy and reliability. The IAEA, US, and EU have conducted research on radiochronometry and its measurement uncertainty; however, the uncertainty calculation method based on the Bateman equation is limited by underestimation of the decay constants and by the impossibility of estimating ages over more than one generation, which highlights the need for uncertainty calculation research using computer simulation such as the Monte Carlo method. In this study, to develop an uncertainty algorithm for nuclear material radiochronometry based on the Bateman equation, we analyzed mathematical models and the Latin Hypercube Sampling (LHS) method to enhance the reliability of radiochronometry. We analyzed the LHS method, which can obtain effective statistical results with a small number of samples, and applied it to a Monte Carlo algorithm for uncertainty calculation by computer simulation, implemented in the MATLAB computational software. The uncertainty calculation model based on mathematical models exhibited characteristics governed by the relationship between sensitivity coefficients and radioactive equilibrium. Computational random sampling showed characteristics dependent on the sampling method, the number of sampling iterations, and the probability distributions of the uncertainty factors. For validation, we compared models from various international organizations, mathematical models, and the Monte Carlo method; the developed algorithm was found to perform calculations at a level of accuracy equivalent to that of overseas institutions and mathematical-model-based methods. To enhance usability, future research and comparison/validation efforts need to incorporate more complex decay chains and non-homogeneous conditions. The results of this study can serve as foundational technology in the nuclear forensics field, providing tools for the identification of signature nuclides and aiding in the research, development, comparison, and validation of related technologies.
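
For a single parent-daughter pair purified at $t=0$ (so $N_D(0)=0$), the Bateman solution gives $N_D/N_P = \frac{\lambda_P}{\lambda_D-\lambda_P}\left(1-e^{-(\lambda_D-\lambda_P)t}\right)$, which can be inverted for the model age $t$. Below is a sketch of propagating measurement uncertainties through this inversion with Latin Hypercube Sampling, written in Python with SciPy's `qmc` module rather than the study's MATLAB code; the nuclide pair and all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm, qmc

def model_age(ratio, lam_p, lam_d):
    """Bateman model age for a parent->daughter pair with N_D(0) = 0:
    R(t) = lam_p/(lam_d - lam_p) * (1 - exp(-(lam_d - lam_p)*t)),
    inverted for t."""
    k = lam_d - lam_p
    return -np.log(1.0 - ratio * k / lam_p) / k

# hypothetical measured atom ratio and decay constants (1/yr), 1-sigma
ratio_mu, ratio_sig = 3.2e-5, 4.0e-7
lam_p_mu, lam_p_sig = np.log(2) / 2.455e5, 1e-10   # e.g. 234U
lam_d_mu, lam_d_sig = np.log(2) / 7.54e4, 1e-9     # e.g. 230Th

# Latin Hypercube Sampling: stratified samples in 3 dimensions
sampler = qmc.LatinHypercube(d=3, seed=1)
u = sampler.random(n=10_000)
ratio = norm.ppf(u[:, 0], ratio_mu, ratio_sig)
lam_p = norm.ppf(u[:, 1], lam_p_mu, lam_p_sig)
lam_d = norm.ppf(u[:, 2], lam_d_mu, lam_d_sig)

ages = model_age(ratio, lam_p, lam_d)
print(f"age = {ages.mean():.1f} yr, u(age) = {ages.std(ddof=1):.1f} yr")
```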

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data are multidimensional time series, which makes it difficult to account for the characteristics of multidimensional data and of time series data at the same time. When dealing with multidimensional data, correlations between variables should be considered; existing methods based on probability, linear models, or distance degrade due to the so-called curse of dimensionality. In addition, time series data are typically preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is an old research field in which statistical methods and regression analysis were used early on, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values; their performance degrades when the model is not solid or when the data contain noise or outliers, so they require training data free of noise and outliers. An autoencoder, an artificial neural network trained to reproduce its input at the output, has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability-distribution or linearity assumptions, and it learns without labeled training data. However, it remains limited in identifying local outliers in multidimensional data, and the dimensionality of the data increases greatly due to the characteristics of time series. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve on the limitations of local outlier identification in multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck and learn their correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of the time series effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance over 41 variables was confirmed for the proposed and comparison models. Reconstruction performance differs by variable; reconstruction works well for the memory, disk, and network modalities in all three autoencoder models, as indicated by their small loss values. The process modality did not show a significant difference across the three models, and the CPU modality showed excellent performance in the CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, the recall of the CMAE was 0.9828, confirming that it detects almost all abnormalities. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the resulting dimensional increase can slow inference, whereas the proposed model is easy to apply to practical tasks with respect to inference speed and model management.
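
A minimal PyTorch sketch of the CMAE idea described above: per-modality encoders sharing one bottleneck, with a time condition concatenated to the latent code before decoding. The layer sizes, the sin/cos time encoding, and all names are hypothetical, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CMAE(nn.Module):
    """Conditional Multimodal Autoencoder sketch: one encoder per modality
    (e.g. CPU, memory, disk, network metric groups), a shared bottleneck,
    and a time condition appended to the latent code for decoding."""
    def __init__(self, modal_dims, latent_dim=8, cond_dim=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 16), nn.ReLU()) for d in modal_dims)
        self.bottleneck = nn.Linear(16 * len(modal_dims), latent_dim)
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim + cond_dim, 16),
                          nn.ReLU(), nn.Linear(16, d)) for d in modal_dims)

    def forward(self, modals, cond):
        # modals: list of (batch, d_m) tensors; cond: (batch, cond_dim)
        h = torch.cat([enc(x) for enc, x in zip(self.encoders, modals)], dim=1)
        z = torch.cat([self.bottleneck(h), cond], dim=1)
        return [dec(z) for dec in self.decoders]

def anomaly_score(model, modals, cond):
    """Total reconstruction error across modalities; higher = more anomalous."""
    with torch.no_grad():
        recon = model(modals, cond)
    return sum(((r - x) ** 2).mean(dim=1) for r, x in zip(recon, modals))

# usage with hypothetical modal dimensions (CPU 10, memory 12, disk 9, network 10)
model = CMAE(modal_dims=[10, 12, 9, 10])
x = [torch.randn(32, d) for d in (10, 12, 9, 10)]
t = torch.rand(32, 1) * 2 * torch.pi
cond = torch.cat([torch.sin(t), torch.cos(t)], dim=1)  # periodic time condition
scores = anomaly_score(model, x, cond)
```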