• Title/Summary/Keyword: Discretization

Ray Effect Analysis Using the Discrete Elements Method in X-Y Geometry (2차원 직각좌표계에서 DEM을 이용한 ray effect의 해석)

  • Choi, Ho-Sin;Kim, Jong-Kyung
    • Journal of Radiation Protection and Research
    • /
    • v.17 no.1
    • /
    • pp.43-56
    • /
    • 1992
  • As one of the methods to ameliorate ray effects, the anomalous computational artifacts caused by the discretization of the angular variable in discrete ordinates approximations, a computational program named TWODET (TWO dimensional Discrete Element Transport) has been developed in a two-dimensional Cartesian coordinate system using the discrete elements method, in which the discrete angle quadratures are steered by the spatially dependent angular fluxes. For a centrally located, isotropically emitting flat source in an absorbing square, TWODET calculations with K-2 and L-3 discrete angular quadratures remedy the ray effect in the edge flux distributions of the square more accurately than DOT 4.3 calculations with S-10 fully symmetric angular quadratures, but TWODET requires about four times the computing time of DOT 4.3. When vacuum boundaries lie just outside the source region in an absorbing square, the TWODET results show severe anomalous ray effects, caused by the sudden discontinuity between the source and the vacuum, just as the DOT 4.3 results do. For the problem of an external source in an absorbing square to which a highly absorbing medium is added, the TWODET results with K-3 and L-4 are as good as, and somewhat better than, the DOT 4.3 results with S-10.
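To make the angular-discretization issue concrete, here is a minimal Python sketch (not the TWODET or DOT code; the quadrature and angular shape are illustrative assumptions) of how discrete ordinates replaces the angular integral of the flux with a finite quadrature sum, the step that gives rise to ray effects:

```python
import numpy as np

# The discrete ordinates method replaces the integral of the angular flux
# over all directions with a finite quadrature sum, phi = sum_m w_m * psi_m;
# ray effects arise because only the discrete directions carry particles.

def scalar_flux(psi, weights):
    """Scalar flux from discrete angular fluxes psi_m and quadrature weights w_m."""
    return np.sum(weights * psi)

# A toy equal-weight quadrature on [0, 2*pi) in x-y geometry:
n_angles = 8                                        # analogous to a low S_N order
angles = 2 * np.pi * (np.arange(n_angles) + 0.5) / n_angles
weights = np.full(n_angles, 2 * np.pi / n_angles)   # weights sum to 2*pi

# The true angular flux is smooth, but the quadrature only "sees" it along
# the discrete directions, so edge fluxes pick up direction-dependent error.
psi = np.exp(-np.cos(angles) ** 2)                  # hypothetical angular shape
print(scalar_flux(psi, weights))
```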

A Two-Phase Stock Trading System based on Pattern Matching and Automatic Rule Induction (패턴 매칭과 자동 규칙 생성에 기반한 2단계 주식 트레이딩 시스템)

  • Lee, Jong-Woo;Kim, Yu-Seop;Kim, Sung-Dong;Lee, Jae-Won;Chae, Jin-Seok
    • The KIPS Transactions:PartB
    • /
    • v.10B no.3
    • /
    • pp.257-264
    • /
    • 2003
  • In the context of a dynamic trading environment, the ultimate goal of a financial forecasting system is to optimize a specific trading objective. This paper proposes a two-phase (extraction and filtering) stock trading system that aims to maximize the rate of return. Extraction of stocks is performed by searching for specific time-series patterns described by a combination of technical-indicator values. In the filtering phase, several rules are applied to the extracted sets of stocks to select the stocks to be actually traded. The filtering rules are automatically induced from past data. From a large database of daily stock prices, the values of technical indicators are calculated. They are used to build the extraction patterns, and the distributions of the discretization intervals of the values are calculated for both positive and negative data sets. We assume that values in intervals with distinctive distributions may contribute to the prediction of future stock trends, so the rules for filtering stocks are automatically induced from the data in those intervals. We show that the rates of return obtained using our trading system outperform the market average. These results indicate that the rule induction method based on distributional differences is useful.
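As a generic illustration of the interval-based rule induction described above (not the paper's implementation; the data and bin choices are hypothetical), the following Python sketch discretizes an indicator value and ranks intervals by the difference between the positive and negative distributions:

```python
import numpy as np

# Discretize a technical-indicator value into intervals and compare the
# interval distributions of positive (price rose) and negative (price fell)
# examples; intervals where the distributions differ most seed filtering rules.

rng = np.random.default_rng(0)
pos = rng.normal(0.6, 0.2, 1000)   # hypothetical indicator values, positive set
neg = rng.normal(0.4, 0.2, 1000)   # hypothetical indicator values, negative set

bins = np.linspace(0.0, 1.0, 11)   # 10 equal-width discretization intervals
pos_hist, _ = np.histogram(pos, bins=bins)
neg_hist, _ = np.histogram(neg, bins=bins)
pos_frac = pos_hist / pos_hist.sum()
neg_frac = neg_hist / neg_hist.sum()

# Rank intervals by distributional difference; the top ones would back rules
# such as "keep stocks whose indicator falls in interval k".
diff = pos_frac - neg_frac
for k in np.argsort(diff)[::-1][:3]:
    print(f"interval [{bins[k]:.1f}, {bins[k+1]:.1f}): pos-neg = {diff[k]:+.3f}")
```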

A Solute Transport Analysis around Underground Storage Cavern by using Eigenvalue Numerical Technique (고유치 수치기법을 이용한 지하저장공동 주위의 용질이동해석)

  • Chung, Il-Moon;Kim, Ji-Tae;Cho, Won-Cheol;Kim, Nam-Won
    • The Journal of Engineering Geology
    • /
    • v.18 no.4
    • /
    • pp.381-391
    • /
    • 2008
  • The eigenvalue technique is introduced to overcome the problem of truncation errors caused by the temporal discretization of numerical analyses. The eigenvalue technique differs from conventional simulation in that only space is discretized. The spatially discretized equation is diagonalized and the linear dynamic system is thereby decoupled, so the time integration can be performed independently and continuously for any nodal point at any time. The results of the eigenvalue technique are compared with the exact solution and an FEM numerical solution. Under the same conditions, the eigenvalue technique is more efficient than the FEM in both computation time and computer storage. The technique is applied to solute transport analysis in nonuniform flow fields around underground storage caverns, where it can be very useful for otherwise time-consuming simulations. Accordingly, a sensitivity analysis is carried out with this method to analyze the safety of the caverns with respect to nearby contaminant sources. According to the simulations, the travel time from a source to the nearest cavern may be about 50 years, with a longitudinal dispersivity of 50 m and a transverse dispersivity of 5 m.
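A minimal Python sketch of the eigenvalue technique as described (a generic 1D diffusion operator stands in for the paper's transport operator; the grid size and coefficient are assumed): the semi-discrete system du/dt = Au is diagonalized once, after which the solution can be evaluated exactly in time at any t, with no temporal truncation error:

```python
import numpy as np

n = 50
dx = 1.0 / (n + 1)
D = 1e-3                               # hypothetical dispersion coefficient

# 1D diffusion operator with homogeneous Dirichlet boundaries (symmetric).
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * D / dx**2

lam, V = np.linalg.eigh(A)             # A = V diag(lam) V^T, modes decoupled
x = np.linspace(dx, 1.0 - dx, n)
u0 = np.exp(-((x - 0.5) ** 2) / 0.01)  # initial concentration plume

def solve(t):
    """Exact-in-time solution of the semi-discrete system at any time t."""
    return V @ (np.exp(lam * t) * (V.T @ u0))

print(solve(10.0)[:5])                 # evaluate directly at t = 10, no stepping
```

Because each decoupled mode decays as exp(lam_m * t), the state at any time is available in one matrix-vector evaluation, which is what makes the method attractive for long sensitivity studies.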

Fracture and Hygrothermal Effects in Composite Materials (복합재의 파괴와 hygrothermal 효과에 관한 연구)

  • Kook-Chan Ahn;Nam-Kyung Kim
    • Journal of the Korean Society of Safety
    • /
    • v.11 no.4
    • /
    • pp.143-150
    • /
    • 1996
  • This is an explicit-implicit finite element analysis for linear as well as nonlinear hygrothermal stress problems. Additional features, such as a moisture diffusion equation, a crack element, and the virtual crack extension (VCE) method for evaluating the J-integral, are implemented in this program. Linear elastic fracture mechanics (LEFM) theory is employed to estimate the crack driving force under transient conditions for an existing crack. Pores in the material are assumed to be saturated with moisture in liquid form at room temperature, which may vaporize as the temperature increases. The effects of vaporization on the crack driving force are also studied. The ideal gas equation is employed to estimate the thermodynamic pressure due to vaporization at each time step after the basic nodal values have been solved. A set of field equations governing the time-dependent response of porous media is derived from balance laws based on mixture theory. Darcy's law is assumed for fluid flow through the porous media. Perzyna's viscoplastic model incorporating the von Mises yield criterion is implemented. The Green-Naghdi stress rate is used so that the stress measure remains invariant under superposed rigid body motion. Isotropic elements are used for the spatial discretization, and an iterative scheme based on the full Newton-Raphson method is used to solve the nonlinear governing equations.
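The ideal-gas step mentioned above can be illustrated with a short Python sketch (not the program's code; the pore geometry and the numbers are hypothetical):

```python
# After the nodal temperatures are solved at a time step, the thermodynamic
# pressure of vaporized pore moisture can be estimated from p V = n R T.

R = 8.314            # universal gas constant, J/(mol*K)
M_WATER = 0.018      # molar mass of water, kg/mol

def vapor_pressure(mass_vaporized, pore_volume, temperature):
    """Ideal-gas pressure (Pa) of vaporized moisture in a pore volume (m^3)."""
    n_moles = mass_vaporized / M_WATER
    return n_moles * R * temperature / pore_volume

# Hypothetical numbers: 1 mg of moisture vaporized in a 1 mm^3 pore at 400 K.
print(vapor_pressure(1e-6, 1e-9, 400.0))   # ~1.8e8 Pa, a large crack driving force
```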

Design of an Arm Gesture Recognition System Using Feature Transformation and Hidden Markov Models (특징 변환과 은닉 마코프 모델을 이용한 팔 제스처 인식 시스템의 설계)

  • Heo, Se-Kyeong;Shin, Ye-Seul;Kim, Hye-Suk;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.10
    • /
    • pp.723-730
    • /
    • 2013
  • This paper presents the design of an arm gesture recognition system using a Kinect sensor. A variety of methods have been proposed for gesture recognition, ranging from Dynamic Time Warping (DTW) to Hidden Markov Models (HMMs). Our system learns a unique HMM for each arm gesture from a set of sequential skeleton data. Whenever the same gesture is performed, the trajectory of each joint captured by the Kinect sensor may differ greatly from previous ones, depending on the length and/or orientation of the subject's arm. To obtain robust performance independent of these conditions, the proposed system executes a feature transformation in which feature vectors of joint positions are transformed into feature vectors of angles between joints. To improve the computational efficiency of learning and using HMMs, our system also performs k-means clustering to obtain one-dimensional integer sequences, as inputs for discrete HMMs, from the high-dimensional real-valued observation vectors. This dimension reduction and discretization helps our system use HMMs efficiently to recognize gestures in real-time environments. Finally, we demonstrate the recognition performance of our system through experiments using two different datasets.
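A hedged Python sketch of the k-means quantization step described above (not the authors' code; the feature dimension and the scikit-learn API choice are assumptions), turning real-valued per-frame feature vectors into the integer symbol sequences a discrete HMM consumes:

```python
import numpy as np
from sklearn.cluster import KMeans

# Vector-quantize high-dimensional per-frame features (e.g., joint angles)
# with k-means so each frame becomes one integer symbol for a discrete HMM.

rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 8))        # hypothetical 8-D angle features per frame

k = 16                                    # codebook size = discrete alphabet size
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)

def to_symbols(sequence):
    """Map a (T, 8) feature sequence to a length-T integer observation sequence."""
    return kmeans.predict(sequence)

obs = to_symbols(frames[:20])
print(obs)                                # e.g., [ 3 11  7 ... ], fed to the HMM
```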

Consistent Boundary Condition for Horizontally-Polarized Shear (SH) Waves Propagated in Layered Waveguides (층상 waveguide에서의 SH파 전파 해석을 위한 경계조건)

  • Lee, Jin Ho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.34 no.2
    • /
    • pp.113-120
    • /
    • 2021
  • The wave-propagation phenomenon in an infinite medium has been used to describe the physics in many fields of engineering and natural science. Analytical or numerical methods have been developed to obtain solutions to problems related to the wave-propagation phenomenon. Energy radiation into infinite regions must be accurately considered for accurate solutions to these problems; hence, various numerical and mechanical models as well as boundary conditions have been developed. This paper proposes a new boundary condition that can be applied to scalar-wave or horizontally-polarized shear-wave (or SH-wave) propagation problems in layered waveguides. A governing equation is obtained for the SH waves by applying finite-element discretization in the vertical direction of the waveguide and subsequently modified to derive the boundary condition for the infinite region of the waveguide. Using the orthogonality of the eigenmodes for the SH waves in a layered waveguide, the new boundary condition is shown to be equivalent to the existing root-finding absorbing boundary condition; further, the accuracy is shown to increase with the degree of the new boundary condition, and its stability can be proven. The accuracy and stability are then demonstrated by applying the proposed boundary condition to wave-propagation problems in layered waveguides.
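As a sketch of the vertical finite-element discretization on which the boundary condition rests (not the paper's formulation; the layer properties, linear elements, and fixed base are assumptions), the following Python code assembles shear stiffness and mass matrices over the depth of a waveguide and extracts the SH eigenmodes whose orthogonality the method exploits:

```python
import numpy as np
from scipy.linalg import eigh

# A layered waveguide of depth H is divided into linear elements in the
# vertical direction; the generalized eigenproblem K v = w^2 M v then gives
# the SH eigenmodes (orthogonal with respect to M).

n_el, H = 40, 30.0                 # elements and depth (hypothetical)
h = H / n_el
mu, rho = 8.0e7, 2000.0            # shear modulus (Pa) and density (kg/m^3)

n = n_el + 1
K = np.zeros((n, n)); M = np.zeros((n, n))
ke = mu / h * np.array([[1, -1], [-1, 1]])          # element shear stiffness
me = rho * h / 6 * np.array([[2, 1], [1, 2]])       # consistent element mass
for e in range(n_el):
    K[e:e+2, e:e+2] += ke
    M[e:e+2, e:e+2] += me
K = K[:-1, :-1]; M = M[:-1, :-1]   # fix the base node; free surface at the top

w2, modes = eigh(K, M)             # squared eigenfrequencies and mode shapes
print(np.sqrt(w2[:3]) / (2 * np.pi))   # first natural frequencies in Hz
```

With a shear-wave velocity of sqrt(mu/rho) = 200 m/s and H = 30 m, the first frequency lands near the classical 1D estimate Vs/(4H) ≈ 1.67 Hz, a quick sanity check on the assembly.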

A Study on the Development of a Fire Site Risk Prediction Model based on Initial Information using Big Data Analysis (빅데이터 분석을 활용한 초기 정보 기반 화재현장 위험도 예측 모델 개발 연구)

  • Kim, Do Hyoung;Jo, Byung wan
    • Journal of the Society of Disaster Information
    • /
    • v.17 no.2
    • /
    • pp.245-253
    • /
    • 2021
  • Purpose: This study develops a risk prediction model that predicts the risk at a fire site from initial information, such as building information and information obtained from the reporter, and supports the effective mobilization of firefighting resources and the establishment of damage-minimization strategies for an appropriate response in the early stages of a disaster. Method: To identify the variables related to the fire damage scale in the fire statistics data, a correlation analysis between variables was performed using a machine learning algorithm to examine predictability, and a learning data set was constructed through preprocessing such as data standardization and discretization. Using this data set, we tested several machine learning algorithms that are generally evaluated as having high prediction accuracy and developed a risk prediction model applying the algorithm with the highest accuracy. Result: In the machine learning algorithm performance test, the random forest algorithm had the highest accuracy, and the accuracy for the intermediate risk classes was confirmed to be relatively high. Conclusion: The accuracy of the prediction model was limited by the bias of the damage-scale data in the fire statistics, and data refinement, by matching records and supplementing missing values, is necessary to improve predictive performance.
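A minimal Python sketch of the pipeline described above (not the study's code; the features, labels, and scikit-learn components are assumptions), showing the discretization preprocessing followed by a random forest classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer

# Discretize initial-information features, then fit a random forest to
# predict a damage-scale risk class.

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))          # hypothetical building/report features
y = rng.integers(0, 3, size=2000)       # hypothetical risk classes 0..2

X_disc = KBinsDiscretizer(n_bins=5, encode="ordinal",
                          strategy="quantile").fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_disc, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")   # random labels -> chance level
```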

Quality of Radiomics Research on Brain Metastasis: A Roadmap to Promote Clinical Translation

  • Chae Jung Park;Yae Won Park;Sung Soo Ahn;Dain Kim;Eui Hyun Kim;Seok-Gu Kang;Jong Hee Chang;Se Hoon Kim;Seung-Koo Lee
    • Korean Journal of Radiology
    • /
    • v.23 no.1
    • /
    • pp.77-88
    • /
    • 2022
  • Objective: Our study aimed to evaluate the quality of radiomics studies on brain metastases based on the radiomics quality score (RQS), the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist, and the Image Biomarker Standardization Initiative (IBSI) guidelines. Materials and Methods: PubMed MEDLINE and EMBASE were searched for articles on radiomics for evaluating brain metastases, published until February 2021. Of the 572 articles, 29 relevant original research articles were included and evaluated according to the RQS, the TRIPOD checklist, and the IBSI guidelines. Results: External validation was performed in only three studies (10.3%). The median RQS was 3.0 (range, -6 to 12), with a low basic adherence rate of 50.0%. Adherence rates were low for comparison to a "gold standard" (10.3%), statement of potential clinical utility (10.3%), cut-off analysis (3.4%), reporting of calibration statistics (6.9%), and provision of open science and data (3.4%). None of the studies involved test-retest or phantom studies, prospective studies, or cost-effectiveness analyses. The overall rate of adherence to the TRIPOD checklist was 60.3%, with low adherence for the title (3.4%), blind assessment of the outcome (0%), description of the handling of missing data (0%), and presentation of the full prediction model (0%). The majority of studies lacked pre-processing steps, with bias-field correction, isovoxel resampling, skull stripping, and gray-level discretization performed in only six (20.7%), nine (31.0%), four (3.8%), and four (13.8%) studies, respectively. Conclusion: The overall scientific and reporting quality of radiomics studies on brain metastases published during the study period was insufficient. Radiomics studies should adhere to the RQS, TRIPOD, and IBSI guidelines to facilitate the translation of radiomics into the clinical field.
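Gray-level discretization, one of the pre-processing steps tallied above, can be illustrated with a short Python sketch of the common fixed-bin-width scheme (a generic IBSI-style illustration, not code from any reviewed study; the intensities and bin width are hypothetical):

```python
import numpy as np

# Intensities inside a region of interest are binned with a fixed bin width
# before texture features are computed, so features are comparable across scans.

def discretize_fixed_bin_width(roi, bin_width=25.0, min_value=None):
    """Map ROI intensities to integer gray levels 1..N using a fixed bin width."""
    if min_value is None:
        min_value = roi.min()
    return np.floor((roi - min_value) / bin_width).astype(int) + 1

rng = np.random.default_rng(0)
roi = rng.normal(300.0, 60.0, size=(5, 5))     # hypothetical MR intensities
print(discretize_fixed_bin_width(roi, bin_width=25.0))
```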

Development and evaluation of a 2-dimensional land surface flood analysis model using uniform square grid (정형 사각 격자 기반의 2차원 지표면 침수해석 모형 개발 및 평가)

  • Choi, Yun-Seok;Kim, Joo-Hun;Choi, Cheon-Kyu;Kim, Kyung-Tak
    • Journal of Korea Water Resources Association
    • /
    • v.52 no.5
    • /
    • pp.361-372
    • /
    • 2019
  • The purpose of this study is to develop a two-dimensional land surface flood analysis model based on a uniform square grid, using governing equations that omit the convective acceleration term of the momentum equation. The finite volume method and an implicit method were applied for the spatial and temporal discretization, respectively. To reduce the execution time of the model, parallel computation techniques using the CPU were applied. To verify the developed model, it was compared with an analytical solution, and its behavior was evaluated through numerical experiments in a virtual domain. In addition, inundation analyses were performed at different spatial resolutions for the Janghowon area in Korea and the Sebou river area in Morocco, and the results were compared with those obtained with the CAESER-LISFLOOD (CLF) model. In the model verification, the simulation results matched the analytical solution well, and the flow analyses in the virtual domain were also evaluated as reasonable. The inundation simulations for the Janghowon and Sebou river areas by this study and by the CLF model were similar to each other, and for the Janghowon area the simulated result was also similar to the flooded area in the flood hazard map. The parts that differed between the simulation results of this study and those of the CLF model were compared and evaluated for each case. The results suggest that the proposed model can simulate flooding in a floodplain well. However, for flood analysis with the model presented in this study, the characteristics and limitations of the model with respect to domain composition, governing equations, and numerical method should be fully considered.
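A hedged Python sketch of one implicit finite-volume update on a uniform square grid (a heavily simplified stand-in for the paper's model: constant face conductance, linearized operator, no wetting and drying), showing how dropping the convective acceleration term reduces each time step to a single sparse linear solve:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# With convective acceleration dropped, the linearized water-depth update is
# (I - dt*L) h_new = h_old, where L couples each cell to its four neighbors.

n, dx, dt = 50, 10.0, 5.0          # 50x50 cells, 10 m cells, 5 s step (hypothetical)
c = 0.5                            # hypothetical constant face conductance (m^2/s)

# Five-point operator assembled from 1D second differences via Kronecker products.
one_d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
L = (c / dx**2) * (sp.kron(sp.identity(n), one_d) + sp.kron(one_d, sp.identity(n)))

h = np.zeros(n * n)
h[(n // 2) * n + n // 2] = 2.0     # initial 2 m depth in the center cell

A = (sp.identity(n * n) - dt * L).tocsr()
for _ in range(10):                # ten implicit steps; unconditionally stable
    h = spla.spsolve(A, h)
print(h.reshape(n, n)[n // 2, n // 2 - 2: n // 2 + 3])  # depth spreading outward
```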

Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to some other predefined shape. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can result: for each fuzzy set, many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the points common to several fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (to provide good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. The numbers of bits necessary for these specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 elements of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, Length = 3 * (5 + 3) = 24, and the memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the fuzzy-set word dimension would be 8*5 bits, and the dimension of the memory would therefore have been 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; elements 32, 64, and 96 of the universe of discourse, for example, are memorized in this compact form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on the memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
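The word-length arithmetic above can be checked directly; this short Python snippet reproduces the 128*24-bit versus 128*40-bit comparison using the paper's numbers:

```python
# Worked check of the word-length formula: Length = nfm * (dm(m) + dm(fm)).

nfm = 3        # max number of non-null membership values per element
dm_m = 5       # bits per membership value (32 truth levels)
dm_fm = 3      # bits to index one of 8 membership functions
rows = 128     # universe-of-discourse elements (memory depth)

length = nfm * (dm_m + dm_fm)
print(length, rows * length)            # 24-bit words, 128*24 = 3072 bits

# Vectorial memorization stores all 8 membership values per element instead:
vect_length = 8 * dm_m
print(vect_length, rows * vect_length)  # 40-bit words, 128*40 = 5120 bits
```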
