• Title/Summary/Keyword: Boundary Computation


Numerical Computations on the Hydrodynamic Forces by Internal Waves in a Sediment Pocket (퇴적 침전구에서 발생하는 내면파 유동에 의한 유체력 해석)

  • Kyoung Jo-Hyun; Kim Jang-Whan; Bai Kwang-June
    • Journal of the Korean Society for Marine Environment & Energy, v.7 no.4, pp.192-198, 2004
  • A numerical method is developed to solve a two-dimensional diffraction problem for a body located in a sediment pocket where heavier muddy water is trapped. In the present study, the wave exciting forces exerted by an incident wave on a submerged body at the water-sediment interface are investigated. It is assumed that the heavier mud is trapped locally in a sediment pocket. A mathematical formulation is made within the scope of potential theory. The fluid is assumed to be inviscid and incompressible, and its motion irrotational. The boundary conditions on the unknown free surface and interface are linearized. As a method of solution, the localized finite-element method is adopted. In this method, the computational domain is reduced by utilizing the complete set of analytic solutions known in the infinite subdomain, which is truncated by introducing appropriate juncture conditions. The main advantage of this method is that any complex geometry of the boundaries can be easily accommodated. Computations are carried out for monochromatic plane progressive surface waves normally incident on the domain. Numerical results are compared with those obtained by Lassiter based on Schwinger's variational method, and good agreement is obtained in general. Additional computations are made for the cases with and without a body in the sediment pocket. (The linearized conditions are sketched below.)

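As a hedged illustration of the linearized boundary conditions this abstract refers to (standard two-layer linear potential theory; the paper's exact formulation is not reproduced here): for a time-harmonic potential $\phi\,e^{-i\omega t}$ satisfying $\nabla^2\phi = 0$, the linearized free-surface condition is

$$-\omega^2\phi + g\,\frac{\partial\phi}{\partial z} = 0 \quad \text{on } z = 0,$$

and at the interface $z = -h$ between the upper water (density $\rho_1$) and the trapped heavier mud (density $\rho_2 > \rho_1$), kinematic and linearized dynamic continuity give

$$\frac{\partial\phi_1}{\partial z} = \frac{\partial\phi_2}{\partial z}, \qquad \rho_1\!\left(-\omega^2\phi_1 + g\,\frac{\partial\phi_1}{\partial z}\right) = \rho_2\!\left(-\omega^2\phi_2 + g\,\frac{\partial\phi_2}{\partial z}\right) \quad \text{on } z = -h.$$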

The Recognition of Occluded 2-D Objects Using the String Matching and Hash Retrieval Algorithm (스트링 매칭과 해시 검색을 이용한 겹쳐진 이차원 물체의 인식)

  • Kim, Kwan-Dong; Lee, Ji-Yong; Lee, Byeong-Gon; Ahn, Jae-Hyeong
    • The Transactions of the Korea Information Processing Society, v.5 no.7, pp.1923-1932, 1998
  • This paper deals with a 2-D object recognition algorithm. We present an algorithm that reduces the computation time of model retrieval by means of a hashing technique instead of the binary-tree method. We treat an object boundary as a string of structural units and use an attributed string matching algorithm to compute a similarity measure between two strings. From the privileged strings we select the privileged string with minimal eccentricity, and this privileged string is treated as the reference string. We then construct a hash table using the distance between each privileged string and the reference string as the key value. Once the database of all model strings is built, recognition proceeds by segmenting the scene into a polygonal approximation. The distance between the privileged string extracted from the scene and the reference string is used to retrieve model hypotheses from the table. Computer simulation shows that the proposed method can recognize objects by computing the distance only 2-3 times, whereas the previous method must compute the distance 8-10 times for model retrieval. (The retrieval idea is sketched below.)

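A hedged Python sketch of distance-keyed model retrieval; the attributed string distance is stood in for by plain Levenshtein distance, and all names and the bucket width are invented placeholders, not the paper's values:

```python
from collections import defaultdict

def string_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance as a stand-in for attributed string matching."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def build_hash_table(models: dict[str, str], reference: str, bucket: int = 2):
    """Key each model by its quantized distance to the reference string."""
    table = defaultdict(list)
    for name, boundary in models.items():
        table[string_distance(boundary, reference) // bucket].append(name)
    return table

def retrieve(scene_string: str, table, reference: str, bucket: int = 2):
    """One distance computation hypothesizes the candidate models."""
    key = string_distance(scene_string, reference) // bucket
    # Check neighbouring buckets to tolerate boundary noise.
    return [m for k in (key - 1, key, key + 1) for m in table.get(k, [])]
```

The point of the design is that recognition needs only the scene-to-reference distance plus a few candidate verifications, instead of a distance computation against every model.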

MAC-Layer Error Control for Real-Time Broadcasting of MPEG-4 Scalable Video over 3G Networks (3G 네트워크에서 MPEG-4 스케일러블 비디오의 실시간 방송을 위한 실행시간 예측 기반 MAC계층 오류제어)

  • Kang, Kyungtae; Noh, Dong Kun
    • Journal of the Korea Society of Computer and Information, v.19 no.3, pp.63-71, 2014
  • We analyze the execution time of Reed-Solomon coding, the MAC-layer forward error correction (FEC) scheme used in CDMA2000 1xEV-DO broadcast services, under different air-channel conditions. The results show that the time constraints of MPEG-4 cannot be guaranteed by Reed-Solomon decoding when the packet loss rate (PLR) is high, due to its long computation time on current hardware. To alleviate this problem, we propose three error-control schemes. First, our static scheme bypasses Reed-Solomon decoding at the mobile node to satisfy the MPEG-4 time constraint when the PLR exceeds a given boundary. Second, our dynamic scheme corrects errors in a best-effort manner within the time constraint, instead of giving up altogether when the PLR is high; this achieves a further quality improvement. Third, our video-aware dynamic scheme fixes errors in a similar way to the dynamic scheme, but in a priority-driven manner that makes the video appear smoother. Extensive simulation results show the effectiveness of our schemes compared to the original FEC scheme. (The decision logic is sketched below.)
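A hedged Python sketch of the decision logic behind the three schemes; the PLR boundary, the per-block decode-time model, and the priority function are illustrative assumptions, not values from the paper:

```python
def static_scheme(blocks, plr, plr_boundary, decode):
    # Bypass RS decoding entirely when the loss rate is too high to finish in time.
    return blocks if plr > plr_boundary else [decode(b) for b in blocks]

def dynamic_scheme(blocks, deadline, decode_time, decode):
    # Best effort: decode blocks until the MPEG-4 deadline budget runs out.
    out, spent = [], 0.0
    for b in blocks:
        if spent + decode_time(b) <= deadline:
            out.append(decode(b)); spent += decode_time(b)
        else:
            out.append(b)  # pass the block through undecoded
    return out

def video_aware_scheme(blocks, deadline, decode_time, decode, priority):
    # Same time budget, but spend it on the most perceptually important
    # blocks first (e.g., base layer before enhancement layers).
    order = sorted(range(len(blocks)), key=lambda i: priority(blocks[i]), reverse=True)
    out, spent = list(blocks), 0.0
    for i in order:
        t = decode_time(blocks[i])
        if spent + t <= deadline:
            out[i] = decode(blocks[i]); spent += t
    return out
```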

Analysis and solution to the phase concentration and DC-like component of correlation result in Daejeon correlator (대전 상관기의 상관 결과에 나타난 유사 DC 성분과 위상 집중 현상에 대한 원인 분석과 해결 방법)

  • Roh, Duk-Gyoo; Oh, Se-Jin; Yeom, Jae-Hwan; Oh, Chung-Sik; Jung, Jin-Seung; Chung, Dong-Kyu; Yun, Young-Joo; Oyama, Tomoaki; Ozeki, Kensuke; Onuki, Hirofumi
    • Journal of the Institute of Convergence Signal Processing, v.14 no.3, pp.191-204, 2013
  • In this paper, we investigate the correlation outputs of the Daejeon correlator from the viewpoints of the buffer memory setting related to fine delay tracking and the under/overflow issue in the FFT modules, in order to eliminate a DC-like component and a phase concentration at 0 degrees. Because a ring buffer memory is used for fine delay tracking, the DC-like component in the correlation outputs is generated by an improper setting of the data read/write addresses; the address-setting method is therefore modified to exclude a polluted FFT segment from correlation processing when crossing the port/stream boundary. The phase concentration at 0 degrees at the beginning of the bandpass is caused by inadequate scaling factors, which can give rise to under/overflow in the internal computation of the FFT stage. With the revised ring buffer memory setting and FFT scaling factors, we obtain a higher signal-to-noise ratio and flux density than with the previous method in the correlation processing of real observational data. (A toy sketch of the segment exclusion appears below.)
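A toy Python sketch of the segment-exclusion idea only; the actual read/write addressing of the Daejeon correlator is hardware-specific, and all names below are invented:

```python
def valid_segments(n_samples, seg_len, boundary_indices):
    """Yield (start, stop) FFT segments, skipping any segment that straddles
    a port/stream boundary, where delay-tracked reads would mix streams."""
    for start in range(0, n_samples - seg_len + 1, seg_len):
        stop = start + seg_len
        if any(start < b < stop for b in boundary_indices):
            continue  # polluted segment: drop it from correlation
        yield (start, stop)

# Example: ten 1024-sample segments with one stream boundary at sample 3000;
# the segment covering sample 3000 is skipped.
print(list(valid_segments(10240, 1024, [3000])))
```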

Evaluation and Comparison of the Topographic Effect Determination Using Korean Digital Elevation Model (우리나라 수치표고모델을 이용한 지형효과 산출방식의 비교평가)

  • Lee, Suk-Bae; Lee, Dong-Ha; Kwon, Jay-Hyun
    • Journal of Korean Society for Geospatial Information Science, v.16 no.1, pp.83-93, 2008
  • The topographic effect is one of the most important components in the solution of the geodetic boundary value problem (geodetic BVP). It should therefore be treated properly when developing a precise geoid model, especially for regions with many mountains, like Korea. The choice of gravity reduction method in the context of precise geoid determination depends on the magnitude of its indirect effect, the smoothness and magnitude of the reduced gravity anomalies, and their geophysical interpretation. In this study, a Korean digital elevation model with 100 m resolution was constructed, and the topographic effect was calculated by three reduction methods: the Helmert condensation method, the residual terrain model (RTM) method, and the Airy isostatic reduction method. Analysis of the computation results shows that the RTM reduction method is the optimal choice: the gravity anomaly and the indirect effect on geoidal height are $0.660 \pm 13.009$ mGal and $-0.004 \pm 0.131$ m respectively, the gentlest (smoothest) of the three methods. This study therefore finds the RTM method better suited than the other reduction methods for calculating the topographic effect precisely in the context of precise geoid determination in Korea. (A first-order RTM formula is recalled below.)

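For reference, a common first-order planar approximation of the RTM gravity reduction (Forsberg's residual terrain modelling; the paper may use a fuller prism integration) is

$$\delta g_{\mathrm{RTM}} \approx 2\pi G\rho\,(h - h_{\mathrm{ref}}),$$

where $G$ is the gravitational constant, $\rho$ the topographic density (conventionally $2670\ \mathrm{kg/m^3}$), $h$ the DEM height at the computation point, and $h_{\mathrm{ref}}$ the height of a smooth mean-elevation reference surface. Because only the terrain residual relative to that surface is removed, the reduced anomalies remain small and smooth, consistent with the behavior reported above.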

Reproduction of Shallow Tides and Tidal Asymmetry by Using Finely Resolved Grid on the West Coast of Korea (서해연안 상세해상을 통한 천해조석 및 조석비대칭 재현)

  • Suh, Seung-Won
    • Journal of Korean Society of Coastal and Ocean Engineers, v.23 no.4, pp.313-325, 2011
  • A finite element grid system for the Yellow Sea using h-refinement was constructed, extending a previous study (Suh, 1999b) from 14 K to 210 K nodes, with special attention to refining the coastal zone. In grid generation, the depth change between adjacent points and a non-dimensional tidal wavelength ratio were considered; as a result, approximately a quarter of the total nodes are located in shallow areas of about 5 m depth. Accurate bathymetry data at 30-second resolution and from ETOPO1 were applied, with open boundary conditions of the 8 major tidal constituents extracted automatically from FES2004. For the tidal simulation, a 3-dimensional nonlinear harmonic model was set up, and tidal amplification due to changes in vertical turbulence and bottom friction was simulated. In this study not only the 8 major tidal constituents but also the nonlinear shallow-water tides $M_4$ and $MS_4$ and the long-period tides $M_f$ and $M_{sf}$ were reproduced. It is found that spatial variation of the friction coefficient, applied through iterative computation of the nonlinear terms, plays a very important role in reproducing the astronomical and shallow-water tides, and that it should be treated differently with respect to tidal period. To understand the distribution of tidal asymmetry, the amplitude ratio $M_4/M_2$ and the phase difference $2g(M_2)-g(M_4)$ were calculated. The tidal distortion ratio reaches up to 0.2 on the west coast, showing shallow coastal characteristics, and a somewhat wide region of ebb dominance in front of the Mokpo area is reproduced. (The asymmetry diagnostics are sketched below.)
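For illustration, the two asymmetry diagnostics named above can be computed as follows; the flood/ebb classification rule is the commonly used convention, which may differ in detail from the paper's usage, and the example constituents are made up:

```python
def tidal_asymmetry(a_m2, g_m2, a_m4, g_m4):
    """M2/M4 asymmetry diagnostics: amplitudes in metres, phases in degrees."""
    ratio = a_m4 / a_m2                # tidal distortion ratio M4/M2
    phase = (2 * g_m2 - g_m4) % 360    # relative phase 2g(M2) - g(M4)
    # Common convention: relative phase in (0, 180) implies flood dominance.
    regime = "flood-dominant" if 0 < phase < 180 else "ebb-dominant"
    return ratio, phase, regime

# Example with invented constituents:
print(tidal_asymmetry(a_m2=1.5, g_m2=120.0, a_m4=0.2, g_m4=300.0))
```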

Color-Depth Combined Semantic Image Segmentation Method (색상과 깊이정보를 융합한 의미론적 영상 분할 방법)

  • Kim, Man-Joung; Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering, v.18 no.3, pp.687-696, 2014
  • This paper presents a semantic object extraction method that uses a user's stroke input together with color and depth information. It is assumed that a semantically meaningful object is surrounded by a few strokes from the user and has similar depths over the whole object. In the proposed method, the region of interest (ROI) is decided from the stroke input, and the semantically meaningful object is extracted using color and depth information. Specifically, the proposed method consists of two steps. The first step is over-segmentation inside the ROI using color and depth information. The second step is semantically meaningful object extraction, where the over-segmented regions are classified into the object region and the background region according to the depth of each region. For the over-segmentation step, we propose a new marker extraction method with two components: an adaptive thresholding scheme to maximize the number of segmented regions, and an adaptive weighting scheme for the color and depth components in the computation of the morphological gradients required by the marker extraction. For the object extraction step, we classify the over-segmented regions into the object region and the background region in order from the boundary regions to the inner regions, comparing the average depth of each region with the average depth of all regions already classified into the object region. Experimental results demonstrate that the proposed method yields reasonable object extraction results. (The combined gradient is sketched below.)
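A minimal Python sketch of a combined color-depth morphological gradient; the adaptive weight rule here (based on gradient variance) is an invented placeholder, since the abstract does not spell out the authors' weighting scheme:

```python
import numpy as np
from scipy.ndimage import morphological_gradient

def combined_gradient(color_gray: np.ndarray, depth: np.ndarray) -> np.ndarray:
    # Morphological gradient (dilation minus erosion) of each channel.
    gc = morphological_gradient(color_gray.astype(float), size=(3, 3))
    gd = morphological_gradient(depth.astype(float), size=(3, 3))
    # Placeholder adaptive weight: trust the channel whose gradient is less noisy.
    w = gd.var() / (gc.var() + gd.var() + 1e-9)
    # Flat (low-gradient) areas of the combined map would seed the markers.
    return w * gc + (1.0 - w) * gd
```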

Simulations of Temporal and Spatial Distributions of Rainfall-Induced Turbidity Flow in a Reservoir Using CE-QUAL-W2 (CE-QUAL-W2 모형을 이용한 저수지 탁수의 시공간분포 모의)

  • Chung, Se-Woong; Oh, Jung-Kuk; Ko, Ick-Hwan
    • Journal of Korea Water Resources Association, v.38 no.8 s.157, pp.655-664, 2005
  • A real-time monitoring and modeling system (RTMMS) for rainfall-induced turbidity flow, one of the major obstacles to the sustainable use of reservoir water resources, is under development. As a prediction model for the RTMMS, a laterally integrated two-dimensional hydrodynamic and water quality model, CE-QUAL-W2, was tested by simulating the temperature stratification, density flow regimes, and temporal and spatial distributions of turbidity in a reservoir. The inflow water temperature and turbidity, measured every hour during the flood season of 2004, were used as the boundary conditions. The monitoring data showed that the inflow water temperature dropped by 5 to $10^{\circ}C$ during rainfall events in summer, which consequently resulted in the development of density flow regimes such as plunge flow and interflow in the reservoir. The model showed relatively satisfactory performance in replicating the water temperature profiles and turbidity distributions, although considerable discrepancies were detected in places between observed and simulated results. The model was also very efficient in computation: the CPU run time to simulate the whole flood season was only 4 minutes on a Pentium 4 (2.0 GHz) desktop computer, which is essential for real-time modeling of turbidity plumes.

Implementation of GIS-based Application Program for Circuity and Accessibility Analysis in Road Network Graph (도로망 그래프의 우회도와 접근도 분석을 위한 GIS 응용 프로그램 개발)

  • Lee, Kiwon
    • Journal of the Korean Association of Geographic Information Studies, v.7 no.1, pp.84-93, 2004
  • Recently, domain-specific demands for practical applications and analysis schemes using spatial thematic information have been increasing. Accordingly, in this study, a GIS-based application program was implemented to perform spatial analysis for transportation geography with base road-layer data. Using this program, quantitative estimation of circuity and accessibility, extracted from the nodes composing a graph-typed network structure, is possible for an arbitrary analysis zone or an administrative boundary zone. Circuity represents the extent of the difference between the actual node-to-node connections and fully connected nodes in the analysis zone. Accessibility, meanwhile, can be used to find the extent of accessibility or connectivity between all nodes contained in the analysis zone, judging from the interconnection status of the whole node set. The input data of this program, implemented as an AVX executable extension using Avenue™ of ArcView, is not transportation database information based on a transportation data model, but layer data obtained directly from digital map sets. The computation of circuity and accessibility can serve as a spatial analysis function for GIS applications in the transportation field. (Both indices are sketched below.)

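A hedged Python sketch of the two indices on a road graph, using networkx; these are common textbook formulations (circuity as the network-to-Euclidean distance ratio, accessibility as closeness), and the paper's exact definitions may differ:

```python
import itertools
import math
import networkx as nx

def circuity(g: nx.Graph, pos: dict) -> float:
    """Mean ratio of shortest network distance to straight-line distance over
    all node pairs: 1.0 means perfectly direct routes, larger means detours."""
    ratios = []
    for u, v in itertools.combinations(g.nodes, 2):
        euclid = math.dist(pos[u], pos[v])
        try:
            net = nx.shortest_path_length(g, u, v, weight="length")
        except nx.NetworkXNoPath:
            continue  # unreachable pair: leave it out of the average
        if euclid > 0:
            ratios.append(net / euclid)
    return sum(ratios) / len(ratios)

def accessibility(g: nx.Graph, node) -> float:
    """Closeness-style accessibility: inverse mean network distance from
    `node` to every other reachable node in the analysis zone."""
    dist = nx.shortest_path_length(g, node, weight="length")
    return (len(dist) - 1) / sum(d for d in dist.values() if d > 0)
```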

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.29-45, 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to analyze mathematically, and it leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem that occurs when the number of instances in one class greatly outnumbers that in another class. Such data sets often cause a default classifier to be built due to a skewed boundary, reducing the classification accuracy of the resulting classifier. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted examples. Thus Boosting attempts to produce new classifiers that are better able to predict the examples on which the current ensemble performs poorly; in this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider geometric mean-based accuracy and errors over the multiple classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross validation, repeated three times with different random seeds, is performed to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different; the results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating. (A heavily hedged sketch of the two named ingredients appears below.)
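The abstract does not give MGM-Boost's update rule, so the Python sketch below only illustrates the two ingredients it names, a SAMME-style multiclass AdaBoost loop and a geometric-mean multiclass accuracy; how the paper actually combines them is not shown, and this pairing is an assumption for exposition:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def geometric_mean_accuracy(y_true, y_pred, classes):
    """Geometric mean of per-class recalls; it punishes neglected minority
    classes much more harshly than arithmetic-mean accuracy does."""
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

def boost(X, y, n_rounds=10):
    """SAMME-style multiclass AdaBoost with decision stumps."""
    classes = np.unique(y)
    k = len(classes)
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err >= 1 - 1.0 / k:              # no better than chance: stop
            break
        alpha = np.log((1 - err) / max(err, 1e-12)) + np.log(k - 1)
        w *= np.exp(alpha * (pred != y))    # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble, classes
```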