• Title/Summary/Keyword: Iteration


The Flood Water Stage Prediction based on Neural Networks Method in Stream Gauge Station (하천수위표지점에서 신경망기법을 이용한 홍수위의 예측)

  • Kim, Seong-Won;Salas, Jose-D.
    • Journal of Korea Water Resources Association
    • /
    • v.33 no.2
    • /
    • pp.247-262
    • /
    • 2000
  • In this paper, the WSANN (Water Stage Analysis with Neural Network) model was developed to predict the flood water stage at Jindong, the major stream gauging station in the Nakdong river basin. The WSANN model used an improved backpropagation training algorithm complemented by the momentum method, improved initial conditions, and an adaptive learning rate, and the data used in this study were classified into training and testing data sets. An empirical equation was derived to determine the optimal number of hidden layer nodes as a function of the threshold iteration number. Calibration of the WSANN model was performed with the four training data sets, and the WSANN22 and WSANN32 models were selected as the optimal models for verification. Model verification was then carried out to evaluate model fitness against the two untrained testing data sets, and statistical analysis of the results shows that flood water stages were predicted reasonably well. Building on these results, further research is needed on real-time warning of impending floods and on control of flood water stage with neural network methods in river basins.
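
The abstract above names two training refinements, momentum and an adaptive learning rate. The following is a minimal sketch of that combination in plain gradient descent form; the "bold driver" adaptation rule, the toy data, and all parameter values are illustrative assumptions, not the WSANN model's actual settings.

```python
import numpy as np

def train(loss_fn, grad_fn, w, lr=0.01, momentum=0.9,
          grow=1.05, shrink=0.5, max_iter=5000, tol=1e-10):
    """Gradient descent with a momentum term and a 'bold driver'
    adaptive learning rate: grow lr on improvement, shrink otherwise."""
    velocity = np.zeros_like(w)
    prev_loss = loss_fn(w)
    for _ in range(max_iter):
        velocity = momentum * velocity - lr * grad_fn(w)  # momentum step
        w = w + velocity
        loss = loss_fn(w)
        lr = lr * grow if loss < prev_loss else lr * shrink  # adapt rate
        if abs(prev_loss - loss) < tol:
            break
        prev_loss = loss
    return w

# Toy usage: least-squares fit of a tiny synthetic stage series.
X = np.array([[1.0, 0.2], [1.0, 0.5], [1.0, 0.9]])
y = np.array([1.1, 1.6, 2.3])
w = train(lambda w: np.sum((X @ w - y) ** 2),
          lambda w: 2.0 * X.T @ (X @ w - y),
          np.zeros(2))
```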


Adaptive Hard Decision Aided Fast Decoding Method using Parity Request Estimation in Distributed Video Coding (패리티 요구량 예측을 이용한 적응적 경판정 출력 기반 고속 분산 비디오 복호화 기술)

  • Shim, Hiuk-Jae;Oh, Ryang-Geun;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.4
    • /
    • pp.635-646
    • /
    • 2011
  • In distributed video coding, a low-complexity encoder can be realized by shifting complex encoder-side processes to the decoder. However, not only the motion estimation/compensation processes but also the complex LDPC decoding process are imposed on the Wyner-Ziv decoder, so decoder-side complexity has become an important issue. LDPC decoding consists of numerous iterative decoding passes, and its complexity grows with the number of iterations; this iterative process accounts for more than 60% of the whole WZ decoding complexity, making it the main target for complexity reduction. Previously, the HDA (Hard Decision Aided) method was introduced for fast LDPC decoding. For the currently received parity bits, the HDA method certainly reduces the complexity of the decoding process; however, LDPC decoding is still performed even when the amount of requested parity is too small to allow successful decoding. Complexity can therefore be reduced further by avoiding the decoding process when the parity bits are insufficient. In this paper, a parity request estimation method is accordingly proposed that uses bit-plane-wise correlation and temporal correlation. Joint use of the HDA method and the proposed method achieves about 72% complexity reduction in the LDPC decoding process, while rate-distortion performance is degraded by only -0.0275 dB in BDPSNR.
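
The core idea above, skipping the iterative decoder until enough parity has arrived, can be sketched as follows. The entropy-bound estimator, the margin, and all names here are illustrative stand-ins for the paper's correlation-based estimator, not its actual formulation.

```python
import math

def binary_entropy(p):
    """H(p) in bits; 0 at the endpoints."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def estimated_parity_need(n_bits, crossover_prob, margin=1.1):
    # Slepian-Wolf style bound: about n * H(p) parity bits, plus a margin.
    return int(margin * n_bits * binary_entropy(crossover_prob))

def decode_bit_plane(parity_bits, n_bits, crossover_prob, ldpc_decode):
    if len(parity_bits) < estimated_parity_need(n_bits, crossover_prob):
        # Skip the costly iterative decoding and request more parity first.
        return None
    return ldpc_decode(parity_bits)
```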

Thermal-Hydraulic Analysis and Parametric Study on the Spent Fuel Pool Storage (기사용 핵연료 저장조에 대한 열수력 해석 및 관련 인자의 영향 평가)

  • Lee, Kye-Bock;Nam, Ki-Il;Park, Jong-Ryul;Lee, Sang-Keun
    • Nuclear Engineering and Technology
    • /
    • v.26 no.1
    • /
    • pp.19-31
    • /
    • 1994
  • The objective of this study is to conduct a thermal-hydraulic analysis of the spent fuel pool and to evaluate the effect of the related parameters on that analysis. The selected parameters are the Reynolds number and the gap flow through the water gap between the fuel cell and the fuel bundle. A simplified flow network for a path of fuel cells is used to analyze the natural circulation phenomenon. In the flow network analysis, the pressure drop for each assembly, from the entrance of the fuel rack to the exit of the fuel assembly, is balanced by the driving head due to the density difference between the pool fluid and the average fluid in each spent fuel assembly. The governing equations are developed from this relation; but since the parameters (flow rate, pressure loss coefficient, decay heat, density) are coupled with one another, an iteration method is used to obtain the solution. For the analysis of the YGN 3&4 spent fuel rack, 12 channels are considered, and inputs such as decay heat and pressure loss coefficient are determined conservatively. The results show the thermal-hydraulic characteristics (void fraction, density, boiling height) of the YGN 3&4 spent fuel rack: a small amount of boiling occurs in the cells, and the fuel cladding temperature remains below 343.3°C. The evaluation of the parametric effects indicates that flow resistances arising from geometric effects are very sensitive to the Reynolds number in the transition region, and that the gap flow is negligible because the flow resistance in the gap flow path is larger than that in the fuel bundle.
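
The coupled iteration described above (flow, density, and pressure loss depending on one another) can be sketched as a damped fixed-point loop for a single channel. The correlations and every numerical value below are illustrative placeholders, not the paper's inputs.

```python
def channel_flow(decay_heat, loss_k, rho_pool=958.0, cp=4216.0, area=0.01,
                 g=9.81, height=4.0, m=0.5, relax=0.3, tol=1e-8,
                 max_iter=500):
    """Damped fixed-point iteration for one rack channel's mass flow m:
    iterate until the friction pressure drop balances the buoyancy head."""
    beta = 7.5e-4  # thermal expansion coefficient (1/K), assumed value
    for _ in range(max_iter):
        dT = decay_heat / (m * cp)               # channel heat-up (K)
        rho_ch = rho_pool * (1.0 - beta * dT)    # lighter heated fluid
        head = (rho_pool - rho_ch) * g * height  # buoyancy driving head (Pa)
        # Invert dp = K m^2 / (2 rho A^2) for the flow this head can drive.
        m_new = (2.0 * rho_ch * area**2 * head / loss_k) ** 0.5
        m_next = relax * m_new + (1.0 - relax) * m  # under-relaxed update
        if abs(m_next - m) < tol * m:
            return m_next
        m = m_next
    return m

# Toy usage: channel_flow(decay_heat=5.0e4, loss_k=50.0) -> about 1.1 kg/s
```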


Fragment Combination From DNA Sequence Data Using Fuzzy Reasoning Method (퍼지 추론기법을 이용한 DNA 염기 서열의 단편결합)

  • Kim, Kwang-Baek;Park, Hyun-Jung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.12
    • /
    • pp.2329-2334
    • /
    • 2006
  • In this paper, we propose a method that remedies the failure to combine DNA fragments, a defect of conventional contig assembly programs. In the proposed method, very long DNA sequence data are made into prototype fragments of about 700 bases, the length an automatic sequence analyzer can process at one time, and the matching ratio is calculated by comparing a standard prototype with three fragmented clones of about 700 bases generated by the PCR method. In this process, the time needed to calculate the matching ratio is reduced by the Compute Agreement algorithm. Two candidate combined fragments are extracted for every prototype according to the degree of overlap of the calculated fragment pairs, and the degree of combination is then decided using a fuzzy reasoning method that utilizes the matching ratios of each extracted fragment, the A, C, G, T membership degrees of each DNA sequence, and the prior frequencies of A, C, G, T. DNA sequence combination is completed by iterating this process of combining the optimal test fragments until no fragment remains. For the experiments, fragments of about 700 bases were generated from sequences of 10,000 bases and 100,000 bases extracted from PCC6803, a complete protein genome. Experiments applying random notations to these fragments showed that the proposed method was faster than the FAP program and that combination failure, the defect of conventional contig assembly programs, did not occur.
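
The overall loop described above can be sketched as greedy overlap merging: score candidate pairs, merge the best, and iterate until nothing remains unmerged. The plain suffix/prefix match ratio below is an assumed stand-in for the paper's Compute Agreement step and its fuzzy reasoning over A/C/G/T memberships.

```python
def overlap_score(a, b, min_len=20):
    """Best suffix(a)/prefix(b) agreement ratio and its overlap length."""
    best = (0.0, 0)
    for n in range(min_len, min(len(a), len(b)) + 1):
        matches = sum(x == y for x, y in zip(a[-n:], b[:n]))
        ratio = matches / n
        if ratio > best[0]:
            best = (ratio, n)
    return best

def assemble(fragments, threshold=0.9):
    frags = list(fragments)
    while len(frags) > 1:
        best = None
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i == j:
                    continue
                ratio, n = overlap_score(a, b)
                if ratio >= threshold and (best is None or ratio > best[0]):
                    best = (ratio, n, i, j)
        if best is None:
            break  # combination failure: no acceptable overlap remains
        _, n, i, j = best
        merged = frags[i] + frags[j][n:]  # join on the overlapping region
        frags = [f for k, f in enumerate(frags) if k not in (i, j)]
        frags.append(merged)
    return frags
```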

An Approach of Scalable SHIF Ontology Reasoning using Spark Framework (Spark 프레임워크를 적용한 대용량 SHIF 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.42 no.10
    • /
    • pp.1195-1206
    • /
    • 2015
  • For the management of a knowledge system, systems that automatically infer and manage scalable knowledge are required. Most such systems use ontologies in order to exchange knowledge between machines and to infer new knowledge, so approaches are needed that infer new knowledge over scalable ontologies. In this paper, we propose an approach that performs rule-based reasoning for scalable SHIF ontologies in the Spark framework, which works similarly to MapReduce over distributed memory on a cluster. For efficient reasoning in distributed memory, we focus on three areas. First, we define a data structure for splitting scalable ontology triples into small sets according to each reasoning rule and for loading these triple sets into distributed memory. Second, a rule execution order and iteration conditions are defined based on the dependencies and correlations among the SHIF rules. Finally, we explain the operations adapted to execute the rules, which are based on reasoning algorithms. To evaluate the suggested methods, we perform an experiment against WebPie, a representative cluster-based ontology reasoner, using the LUBM benchmark, a standard data set for evaluating ontology inference and search speed. The proposed approach improves throughput by 28,400% (157k/sec) over WebPie (553/sec) on LUBM.
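
The iteration conditions mentioned above follow the classic pattern of running rules to a fixpoint. A minimal single-machine sketch is shown below: one subClassOf-transitivity rule stands in for the full SHIF rule set, and the Spark partitioning and distributed-memory layout are omitted entirely.

```python
def transitivity(triples):
    """Derive (s2, subClassOf, o) from (s2, subClassOf, s), (s, subClassOf, o)."""
    sub = [(s, o) for (s, p, o) in triples if p == "subClassOf"]
    index = {}
    for s, o in sub:
        index.setdefault(o, []).append(s)  # o -> its direct subclasses
    derived = set()
    for s, o in sub:
        for s2 in index.get(s, []):
            derived.add((s2, "subClassOf", o))
    return derived

def saturate(triples, rules):
    """Apply every rule and add its derivations until nothing new appears."""
    closure = set(triples)
    changed = True
    while changed:  # iterate until the fixpoint is reached
        changed = False
        for rule in rules:
            new = rule(closure) - closure
            if new:
                closure |= new
                changed = True
    return closure

# Toy usage:
# saturate({("A", "subClassOf", "B"), ("B", "subClassOf", "C")}, [transitivity])
```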

The Consideration for Optimum 3D Seismic Processing Procedures in Block II, Northern Part of South Yellow Sea Basin (대륙붕 2광구 서해분지 북부지역의 3D전산처리 최적화 방안시 고려점)

  • Ko, Seung-Won;Shin, Kook-Sun;Jung, Hyun-Young
    • The Korean Journal of Petroleum Geology
    • /
    • v.11 no.1 s.12
    • /
    • pp.9-17
    • /
    • 2005
  • In the main target area of Block II, large-scale faults occur below the unconformity developed at around 1 km depth. The seismic velocity contrast around the unconformity is generally so large that strong multiples and the abrupt velocity variation would deteriorate the quality of the migrated section through serious distortion. More than 15 kinds of data processing techniques were applied to improve the image resolution of the structures formed by this active crustal activity. As a first step, bad and noisy traces were edited on the common shot gathers to remove acquisition problems arising from unfavorable conditions such as climatic change during acquisition. Amplitude attenuation caused by spherical divergence and inelastic attenuation was corrected. A mild F/K filter was used to attenuate coherent noise such as guided waves and side scatter. Predictive deconvolution was applied before stacking to remove peg-leg multiples and water reverberations. Velocity analysis was conducted at 2 km intervals to estimate the migration velocity, and was iterated to obtain a high-fidelity image. Strum noise caused by the streamer was completely removed by applying predictive deconvolution in the time-space and τ-p domains. Residual multiples caused by thin layers or the water bottom were eliminated through a parabolic Radon transform demultiple process. Migration using a curved-ray Kirchhoff-style algorithm was applied to the stacked data, with the velocity obtained after several iterations of migration velocity analysis (MVA) used in place of the DMO velocity. Through these various tests, an optimum set of seismic processing parameters can be obtained for structural and stratigraphic interpretation in Block II, Yellow Sea Basin.
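
Predictive deconvolution, applied twice in the flow above, designs a Wiener prediction filter from the trace autocorrelation and subtracts the predictable part (multiples, reverberations). A minimal single-trace sketch follows; the filter length, gap, and prewhitening values are illustrative, not the survey's parameters.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, filt_len=40, gap=8, prewhiten=0.01):
    """Gapped predictive deconvolution of a single trace (needs
    len(trace) > gap + filt_len for the autocorrelation lags)."""
    n = len(trace)
    ac = np.correlate(trace, trace, mode="full")[n - 1:]  # lags 0..n-1
    ac[0] *= 1.0 + prewhiten  # prewhitening stabilises the inversion
    # Normal equations: Toeplitz(ac[:L]) a = ac[gap : gap + L]
    a = solve_toeplitz(ac[:filt_len], ac[gap:gap + filt_len])
    # Predict each sample 'gap' samples ahead and subtract the prediction.
    predicted = np.zeros(n)
    for i in range(gap, n):
        lo = max(0, i - gap - filt_len + 1)
        seg = trace[lo:i - gap + 1][::-1]  # most recent sample first
        predicted[i] = np.dot(a[:len(seg)], seg)
    return trace - predicted
```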


Development of the Meta-heuristic Optimization Algorithm: Exponential Bandwidth Harmony Search with Centralized Global Search (새로운 메타 휴리스틱 최적화 알고리즘의 개발: Exponential Bandwidth Harmony Search with Centralized Global Search)

  • Kim, Young Nam;Lee, Eui Hoon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.2
    • /
    • pp.8-18
    • /
    • 2020
  • An Exponential Bandwidth Harmony Search with Centralized Global Search (EBHS-CGS) was developed to enhance the performance of Harmony Search (HS). EBHS-CGS adds two methods to HS. The first is an improved bandwidth (bw) that strengthens the local search: the existing bw is replaced with an exponential bw whose value decreases as the iterations proceed, allowing an accurate local search and therefore more accurate solutions. The second is a reduction of the search range for an efficient global search: the search space is narrowed by considering the best decision variable in the Harmony Memory (HM). This process is carried out separately from the global search of HS and is governed by a new parameter, the Centralized Global Search Rate (CGSR). The reduced search space enables an effective global search, which improves the performance of the algorithm. The proposed algorithm was applied to representative mathematical and engineering optimization problems, and the results were compared with HS and the Improved Harmony Search (IHS).
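
A minimal one-dimensional sketch of the two ingredients named above is given below. The particular decay law exp(-5·it/iters), the sampling around the best harmony, and all parameter defaults are assumptions for illustration, not the published EBHS-CGS formulas.

```python
import math
import random

def ebhs_cgs(objective, lo, hi, hms=20, hmcr=0.9, par=0.3, cgsr=0.2,
             bw0=None, iters=5000):
    bw0 = bw0 if bw0 is not None else 0.1 * (hi - lo)
    hm = sorted((random.uniform(lo, hi) for _ in range(hms)), key=objective)
    for it in range(1, iters + 1):
        bw = bw0 * math.exp(-5.0 * it / iters)   # exponential bandwidth decay
        if random.random() < cgsr:
            x = hm[0] + random.uniform(-bw, bw)  # centralized global search
        elif random.random() < hmcr:
            x = random.choice(hm)                # harmony memory consideration
            if random.random() < par:
                x += random.uniform(-bw, bw)     # pitch adjustment
        else:
            x = random.uniform(lo, hi)           # plain random search
        x = min(max(x, lo), hi)
        if objective(x) < objective(hm[-1]):     # replace the worst harmony
            hm[-1] = x
            hm.sort(key=objective)
    return hm[0]

# Toy usage: ebhs_cgs(lambda x: (x - 2.0) ** 2, -10.0, 10.0) -> about 2.0
```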

A Study on the Secondary Optimization Analysis based on the Result of Primary Structure Analysis for the Die Thickness (금형두께에 대한 1차 구조해석 결과를 기반으로 한 2차 최적화 해석에 관한 연구)

  • Lee, Jong-Bae;Kim, Sang-Hyun;Woo, Chang-Ki
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.6
    • /
    • pp.3448-3454
    • /
    • 2014
  • Generally, structural analysis in practice has been based on elastic analysis. A more precise analysis must consider material and geometric nonlinearity, so the need for such structural analysis has been raised continually. This study therefore designs a simple model to which nonlinear principles are applied and, after performing optimization, aims to present a theory and method by which even users experienced only with conventional structural analysis can carry out such an analysis easily. The proposed model was applied to die ribs; under the shear load, little strain and stress were generated, and the strength was sufficient. The initial strain and stress were reconfigured to fit the size and shape, and HyperStudy coupled with Abaqus was used for the nonlinear structural analysis, which established an acceptable range of maximum and minimum stress; under the condition of minimum strain, the plate was varied in constant increments. In the experimental models, a load of 40 N was applied to the plate, iterating over the thickness of the press die. By applying the resulting stress and strain criteria to the die thickness, an optimized thickness of 7-8 mm was obtained.
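
The thickness iteration outlined above can be sketched as a feasibility search: evaluate a trial thickness, then bisect until the maximum stress just meets the allowable value. The closed-form plate-bending stress below is an assumed stand-in for the Abaqus/HyperStudy loop; only the 40 N load is taken from the abstract, and the span and width are invented for illustration.

```python
def max_bending_stress(thickness_mm, load_n=40.0, span_mm=100.0,
                       width_mm=50.0):
    """Simply supported strip under a centre point load (assumed model):
    sigma = 1.5 * F * L / (b * t^2), in MPa with N and mm units."""
    return 1.5 * load_n * span_mm / (width_mm * thickness_mm**2)

def optimize_thickness(allowable_mpa, lo=1.0, hi=20.0, tol=1e-4):
    """Bisection on thickness: thinnest plate whose stress is allowable."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if max_bending_stress(mid) > allowable_mpa:
            lo = mid   # too thin: stress exceeds the allowable
        else:
            hi = mid   # feasible: try a thinner plate
    return hi

# Toy usage: optimize_thickness(allowable_mpa=2.0) -> about 7.7 mm
```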

Nonlinear Dynamic Analysis on Low-Tension Towed Cable by Finite Difference Method (유한차분법을 이용한 저장력 예인케이블의 비선형 동적해석)

  • Han-Il Park;Dong-Ho Jung
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.39 no.1
    • /
    • pp.28-37
    • /
    • 2002
  • In this study, the nonlinear dynamic behavior of towed low-tension cables is analyzed numerically. In a taut cable analysis, the bending stiffness term is usually neglected because of its minor effect, but it plays an important role in a low-tension cable analysis. A low-tension cable may experience large displacements due to relatively small restoring forces, so the effects of fluid and geometric nonlinearities become predominant. Both the bending stiffness and these nonlinear effects are considered in this work. To obtain the dynamic behavior of a towed low-tension cable, the three-dimensional nonlinear dynamic equation is formulated and discretized by a finite difference method. An implicit scheme and Newton-Raphson iteration are adopted for the time integration and the nonlinear solution. For the large matrices involved, the block tri-diagonal matrix method is applied, which is much faster than the well-known Gauss-Jordan method for two-point boundary value problems. Several case studies are carried out, and the numerical results are compared with those of the in-house program WHOI Cable, showing good agreement.
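
The block tri-diagonal solve credited above with the speed-up over Gauss-Jordan is the block Thomas algorithm: forward elimination down the diagonal blocks, then back substitution. A minimal sketch follows (explicit inverses are kept for clarity; a production solver would factorize instead).

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Solve a block tri-diagonal system. A, B, C are lists of the sub-,
    main-, and super-diagonal blocks; d holds the right-hand-side block
    vectors. Returns the list of solution block vectors."""
    n = len(B)
    Bp = [None] * n
    dp = [None] * n
    Bp[0], dp[0] = B[0].copy(), d[0].copy()
    for i in range(1, n):            # forward elimination
        m = A[i] @ np.linalg.inv(Bp[i - 1])
        Bp[i] = B[i] - m @ C[i - 1]
        dp[i] = d[i] - m @ dp[i - 1]
    x = [None] * n
    x[-1] = np.linalg.solve(Bp[-1], dp[-1])
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
    return x
```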

Construction stages analyses using time dependent material properties of concrete arch dams

  • Sevim, Baris;Altunisik, Ahmet C.;Bayraktar, Alemdar
    • Computers and Concrete
    • /
    • v.14 no.5
    • /
    • pp.599-612
    • /
    • 2014
  • This paper presents the effects of construction stages, modelled with time-dependent material properties, on the structural behaviour of concrete arch dams. For this purpose, the double-curvature Type-5 arch dam proposed at the "Arch Dams" symposium in England in 1968 is selected as a numerical example. Finite element models of the Type-5 arch dam are built with the SAP2000 program. Geometric nonlinearity is taken into consideration in the construction stage analysis using the P-Delta plus large displacement criterion. In addition, time-dependent variations of material strength and geometry are included in the analysis: elasticity modulus, creep, and shrinkage are computed for different stages of the construction process. A total of 64 construction stages are included, each generally comprising 6000 m³ of concrete, and the total duration is taken as 1280 days. The maximum number of steps and the maximum iterations per step are set to 200 and 50, respectively. The structural behaviour of the arch dam at different construction stages is examined through two finite element analysis cases: in the first, construction stages with time-dependent material properties are considered; in the second, only a linear static analysis (without construction stages) is performed. The variation of displacements and stresses is obtained from both analyses. The results highlight that construction stage analysis with time-dependent material strength and geometric variations has an important effect on the structural behaviour of arch dams. The maximum longitudinal, transverse, and vertical displacements obtained from the construction stage and static analyses are 1.35 mm and 0 mm; -8.44 mm and 6.68 mm; and -4.00 mm and -9.90 mm, respectively. In addition, vertical displacements increase from the base to the crest of the dam in both analyses. The maximum S11, S22, and S33 stresses are 1.60 MPa and 2.84 MPa; 1.39 MPa and 2.43 MPa; and 0.60 MPa and 0.50 MPa, respectively. The differences between the maximum longitudinal, transverse, and vertical stresses obtained from the construction stage and static analyses are 78%, 75%, and 17%, respectively, while the minimum stresses differ by about 12% on average in all three directions.
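
The time-dependent stiffness at the heart of the staged analysis above assigns each concrete lift a modulus that grows with its age. The sketch below uses a CEB-FIP MC90-style aging law as an assumption; the abstract does not state which code formula was supplied to SAP2000, and the E28 value is invented for illustration.

```python
import math

def elastic_modulus(age_days, e28, s=0.25):
    """CEB-FIP MC90 aging (assumed): E(t) = E28 * sqrt(beta_cc(t)),
    with beta_cc(t) = exp(s * (1 - sqrt(28 / t))); s = 0.25 for
    normal-hardening cement."""
    beta_cc = math.exp(s * (1.0 - math.sqrt(28.0 / age_days)))
    return e28 * math.sqrt(beta_cc)

# Example: stiffness of one lift at selected ages across the 1280-day
# schedule mentioned in the abstract (E28 = 30 GPa, assumed).
for t in (3, 7, 28, 90, 1280):
    print(t, "days:", round(elastic_modulus(t, e28=30.0e3), 1), "MPa")
```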