• Title/Summary/Keyword: Sampling-Based Algorithm

Improved Response Surface Method Using Modified Selection Technique of Sampling Points (개선된 평가점 선정기법을 이용한 응답면기법)

  • 김상효;나성원;황학주
    • Proceedings of the Computational Structural Engineering Institute Conference / 1993.10a / pp.248-255 / 1993
  • Recently, due to increasing attention to structural safety under uncertain environments, much research on structural reliability analysis has been performed. Useful methods are available for evaluating the reliability of structures with explicit limit states. For large structures, however, whose behavior is analyzed with finite element models and whose limit states can be expressed only implicitly, the Monte Carlo simulation method has mainly been used, and it spends too much computational time on repetitive structural analyses. Many alternative methods have been suggested to reduce the computational work required by Monte Carlo simulation; among them, the response surface method is widely used to improve the efficiency of structural reliability analysis. The response surface method approximates a limit state that is not easily expressed explicitly in the design random variables by a simple polynomial function of the basic random variables. Its algorithm is simple, but the accuracy of the results depends strongly on how well the approximating function represents the stochastic characteristics of the original limit state. In this study, an improved response surface method is proposed in which the sampling points used to create the response surface are modified to represent the failure surface more adequately, and a linear response surface function is combined with the Rackwitz-Fiessler method. The method is found to be more effective and efficient than previous response surface methods, and it converges more consistently. The accuracy of the proposed method is investigated through examples.
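
The linear response surface plus Rackwitz-Fiessler combination the abstract describes can be illustrated with a minimal sketch. Assuming independent normal random variables, `limit_state` stands in for an implicit FE-based limit state function; the paper's modified selection of sampling points is not reproduced here, only the basic center-plus-axial-point scheme:

```python
import numpy as np

def response_surface_reliability(limit_state, mu, sigma, n_iter=10):
    """Minimal linear response-surface iteration (illustrative only).

    limit_state: black-box g(x), e.g. wrapping a finite element run
    mu, sigma:   means and std. deviations of independent normal variables
    """
    n = len(mu)
    x_center = mu.copy()
    for _ in range(n_iter):
        # Sampling points: center plus axial points at +/- 1 std deviation
        pts = [x_center]
        for i in range(n):
            for s in (+1.0, -1.0):
                p = x_center.copy()
                p[i] += s * sigma[i]
                pts.append(p)
        pts = np.array(pts)
        g = np.array([limit_state(p) for p in pts])
        # Fit g ~ a0 + a.x by least squares: the linear response surface
        A = np.hstack([np.ones((len(pts), 1)), pts])
        coef, *_ = np.linalg.lstsq(A, g, rcond=None)
        a0, a = coef[0], coef[1:]
        # Hasofer-Lind reliability index for a linear surface, normal space
        norm = np.sqrt(np.sum((a * sigma) ** 2))
        beta = (a0 + a @ mu) / norm
        # Re-center the sampling points on the estimated design point
        x_center = mu - beta * sigma * (a * sigma) / norm
    return beta
```

With the returned reliability index, the failure probability is then estimated as Pf ≈ Φ(−β).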

Study on Optimization of Design Parameters for Offshore Mooring System using Sampling Method (샘플링 기법을 통한 계류 시스템 설계 변수 최적화 방안에 관한 연구)

  • Kang, Soo-Won;Lee, Seung-Jae
    • Journal of Ocean Engineering and Technology / v.32 no.4 / pp.215-221 / 2018
  • In this study, the optimal design of a mooring system was carried out. Unlike most design methods, which are deterministic, this study focused on a probabilistic method. A probabilistic method, especially design of experiments (DOE), can cover some of the drawbacks of the deterministic approach. As is widely known, a mooring system has various parameters, including the weight, length, and stiffness of the lines. Scenarios for the mooring system parameters were produced using the Latin Hypercube Sampling method of the probabilistic approach, and a coupled vessel-mooring analysis was then performed in OrcaFlex. A total of 50 scenarios were used in this study to optimize the initial design by means of a genetic algorithm. Finally, after the optimization process, a reliability analysis was performed to check the validity of the resulting system.
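
Latin Hypercube Sampling, used above to build the 50 mooring scenarios, is straightforward to sketch: each parameter range is split into equal-probability strata, one sample is drawn per stratum, and the strata are shuffled independently per parameter. The parameter names and ranges below are hypothetical placeholders, not the paper's values:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin Hypercube Sampling over box-shaped parameter bounds."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # One uniform draw inside each of n_samples strata, per dimension,
    # with stratum order permuted independently for each dimension.
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Hypothetical ranges: line weight (kg/m), length (m), axial stiffness (kN)
scenarios = latin_hypercube(50, [(50, 200), (800, 1500), (1e4, 1e6)], rng=0)
```

Each of the 50 rows would then be fed to one coupled vessel-mooring simulation, and the resulting responses scored by the genetic algorithm.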

Reconstruction Algorithms for Spiral-scan Echo Planar Imaging (Spiral scan 초고속 자기공명영상 재구성 알고리즘)

  • Ahn, C.B.;Kim, C.Y.;Park, D.J.;Kim, H.J.;Ryu, Y.S.;Yi, Y.;Oh, C.H.;Lee, H.K.
    • Proceedings of the KOSOMBE Conference / v.1996 no.11 / pp.157-160 / 1996
  • In this paper, reconstruction algorithms for spiral scan imaging, which has been used for ultrafast magnetic resonance imaging, are reviewed, and simulation results using two different algorithms are reported. Since the k-space trajectory of a spiral scan is a spiral, reconstruction is not as straightforward as in Fourier imaging techniques, where the sampling points usually lie on a rectangular grid. Originally, reconstruction of spiral scan imaging was based on the convolution backprojection algorithm modified with a shift term; other reconstruction techniques have since been tried that remap the sampling points from the spiral trajectory onto Cartesian grids. Some experimental aspects of MR spiral scan imaging are also addressed.
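
The remapping approach mentioned above (spiral samples regridded onto a Cartesian k-space grid, then an inverse FFT) can be sketched crudely. This toy version uses nearest-neighbor binning and omits the density compensation and convolution kernel that a production gridding reconstruction would need:

```python
import numpy as np

def regrid_spiral(kx, ky, data, N=256, kmax=0.5):
    """Nearest-neighbor regridding of spiral k-space samples onto an
    N x N Cartesian grid, followed by an inverse FFT (toy version)."""
    grid = np.zeros((N, N), dtype=complex)
    hits = np.zeros((N, N))
    # Map k-space coordinates in [-kmax, kmax] to grid indices [0, N-1]
    ix = np.clip(np.round((kx / kmax + 1) * (N - 1) / 2).astype(int), 0, N - 1)
    iy = np.clip(np.round((ky / kmax + 1) * (N - 1) / 2).astype(int), 0, N - 1)
    np.add.at(grid, (iy, ix), data)     # accumulate samples per cell
    np.add.at(hits, (iy, ix), 1)
    grid[hits > 0] /= hits[hits > 0]    # average multiple hits per cell
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
```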

Hyper Parameter Tuning Method based on Sampling for Optimal LSTM Model

  • Kim, Hyemee;Jeong, Ryeji;Bae, Hyerim
    • Journal of the Korea Society of Computer and Information / v.24 no.1 / pp.137-143 / 2019
  • As the performance of computers increases, the use of deep learning, which previously faced technical limitations, is becoming more diverse. In many fields, deep learning has contributed to the creation of added value and is applied to ever larger amounts of data as its applications diversify. Obtaining a better-performing model therefore takes longer than before, making it necessary to find the optimal model more quickly. In artificial neural network modeling, a tuning process that changes various elements of the neural network model is used to improve performance. Apart from Grid Search and Manual Search, which are widely used tuning methods, most methodologies have been developed around heuristic algorithms. A heuristic algorithm can produce results in a short time, but those results are likely to be a local optimum; finding the global optimum eliminates that possibility. Although the brute-force method is the usual way to find a global optimum, it is not applicable here because of the practically infinite number of hyper-parameter combinations. In this paper, we use a statistical technique to reduce the number of candidate cases so that the global optimum can be found.
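
The abstract does not spell out the statistical reduction used, so the sketch below only illustrates the general idea of scoring a sampled subset of the hyper-parameter grid instead of the full brute-force enumeration. The parameter names and values are placeholders, and `train_and_score` is a hypothetical callback that trains an LSTM with the given settings and returns a validation score:

```python
import itertools
import random

# Hypothetical LSTM hyper-parameter grid (names and values are placeholders)
space = {
    "units":         [32, 64, 128, 256],
    "layers":        [1, 2, 3],
    "dropout":       [0.0, 0.2, 0.5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sampled_search(train_and_score, n_samples=20, seed=0):
    """Score only a random sample of the full grid instead of all combos."""
    rng = random.Random(seed)
    full_grid = [dict(zip(space, combo))
                 for combo in itertools.product(*space.values())]
    candidates = rng.sample(full_grid, n_samples)
    return max(candidates, key=train_and_score)   # best sampled setting
```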

A Network Intrusion Security Detection Method Using BiLSTM-CNN in Big Data Environment

  • Hong Wang
    • Journal of Information Processing Systems / v.19 no.5 / pp.688-701 / 2023
  • Conventional network intrusion detection systems (NIDS) cannot measure the trend of intrusion-detection targets effectively, which leads to low detection accuracy. In this study, a NIDS method based on a deep neural network in a big-data environment is proposed. First, the overall framework of the NIDS model is constructed in two stages, with feature reduction and anomaly probability output at their core. Next, a convolutional neural network encompassing a down-sampling layer and a feature extractor built from convolution layers is constructed, and the correlation of the inputs is captured by introducing a bidirectional long short-term memory (BiLSTM) network. Finally, a pooling layer is added after the convolution layer to sample the required features according to different sampling rules, which improves the overall performance of the NIDS model. The proposed NIDS method is compared with three other methods on two databases through simulation experiments. The results demonstrate that the proposed model is superior to the other three NIDS methods on both databases in terms of precision, accuracy, F1-score, and recall, which reach 91.64%, 93.35%, 92.25%, and 91.87%, respectively. The proposed algorithm is significant for improving the accuracy of NIDS.
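
A hedged sketch of the CNN-plus-BiLSTM arrangement the abstract outlines (convolution, a down-sampling layer, a bidirectional LSTM, then pooling) might look as follows in Keras; the layer sizes and ordering details are assumptions, not the paper's configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_bilstm_cnn(n_features, n_classes=2):
    """Illustrative CNN + BiLSTM intrusion-detection classifier sketch."""
    model = keras.Sequential([
        keras.Input(shape=(n_features, 1)),            # one flow record
        layers.Conv1D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling1D(2),                        # down-sampling layer
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.GlobalMaxPooling1D(),                   # pooling after recurrence
        layers.Dense(n_classes, activation="softmax"), # anomaly probability
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```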

Location Positioning System Based on K-NN for Sensor Networks (센서네트워크를 위한 K-NN 기반의 위치 추정 시스템)

  • Kim, Byoung-Kug;Hong, Won-Gil
    • Journal of Korea Multimedia Society / v.15 no.9 / pp.1112-1125 / 2012
  • To realize location-based services (LBS), GPS is typically used. However, GPS works only outdoors, and its use in sensor networks is inefficient because sensor nodes must operate at low power. Hence, in this paper we propose methods for location positioning that work indoors. The proposed methods build the positioning system by applying the K-NN (K-Nearest Neighbour) algorithm, together with its intermediate values, on top of IEEE 802.15.4 technology, which is widely used in sensor networks. In K-NN, the accuracy of the position estimate is proportional to the number of sensor nodes whose RSS is sampled, but extensive sampling consumes a great deal of the sensor network's resources. To reduce the number of samples, we instead use the intermediate values of the K-NN signal boundaries, so that the proposed methods can position almost twice as accurately as the usual K-NN result.
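
Plain RSS-fingerprint K-NN, the baseline the paper improves on, reduces to a few lines; the intermediate-value refinement of the signal boundaries described above is not reproduced here:

```python
import numpy as np

def knn_locate(rss, fingerprints, positions, k=3):
    """Estimate a position as the centroid of the k reference points whose
    stored RSS fingerprints are closest to the measured RSS vector.

    rss:          measured RSS vector, shape (n_anchors,)
    fingerprints: stored RSS vectors, shape (n_refs, n_anchors)
    positions:    reference coordinates, shape (n_refs, 2)
    """
    d = np.linalg.norm(fingerprints - rss, axis=1)  # Euclidean RSS distance
    nearest = np.argsort(d)[:k]                     # k closest references
    return positions[nearest].mean(axis=0)
```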

Super Resolution Algorithm using TV-G Decomposition (TV-G 분해를 이용한 초해상도 알고리즘)

  • Eum, Kyoung-Bae;Beom, Dong-Kyu
    • Journal of Digital Contents Society / v.18 no.8 / pp.1517-1522 / 2017
  • Among single-image SR techniques, the TV-based SR approach seems the most successful in terms of edge preservation and freedom from artifacts, but it achieves insufficient resolution for the texture component. In this paper, we propose a new SR method based on TV-G decomposition to solve this problem. We propose SVR-based up-sampling to obtain better edge preservation in the structure component, and we use an NNE-based learning method, which improves NE by relaxing its constraint, to raise the resolution of the texture component. Experimental results confirm, quantitatively and qualitatively, that the proposed SR method improves on the conventional interpolation method, ScSR, TV, and NNE.
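
As a rough illustration of structure-texture SR, the sketch below splits an image with TV denoising (the oscillatory texture part is the residual), upsamples each component separately, and recombines. Bicubic and bilinear zooms stand in for the paper's SVR-based and NNE-based steps, and a float image in [0, 1] is assumed:

```python
import numpy as np
from scipy.ndimage import zoom
from skimage.restoration import denoise_tv_chambolle

def structure_texture_sr(img, scale=2, weight=0.1):
    """Toy structure+texture SR via TV decomposition (illustrative only)."""
    structure = denoise_tv_chambolle(img, weight=weight)  # TV (cartoon) part
    texture = img - structure                             # oscillatory part
    up_structure = zoom(structure, scale, order=3)  # stand-in for SVR step
    up_texture = zoom(texture, scale, order=1)      # stand-in for NNE step
    return np.clip(up_structure + up_texture, 0.0, 1.0)
```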

Determination of Optimal Cluster Size Using Bootstrap and Genetic Algorithm (붓스트랩 기법과 유전자 알고리즘을 이용한 최적 군집 수 결정)

  • Park, Min-Jae;Jun, Sung-Hae;Oh, Kyung-Whan
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.1 / pp.12-17 / 2003
  • The choice of the number of clusters affects the result of clustering. In the K-means algorithm, clustering performance varies widely with the initial K, yet in most clustering work the cluster size is determined by prior knowledge or subjective judgment, which may not be optimal. In this paper, a genetic-algorithm-based approach is proposed to determine the cluster size automatically and improve the clustering result. An initial population based on the attributes is generated to search for the optimal cluster size. The fitness value is defined as the inverse of the summed dissimilarity, so that maximizing it improves overall performance, and a mutation operation is used to escape local minima. Finally, bootstrap resampling is used to reduce the computational cost.
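
A toy version of the bootstrap-plus-GA search might look as follows; note that raw within-cluster dissimilarity falls monotonically as K grows, so a practical fitness would need a normalization or penalty that the abstract does not detail:

```python
import numpy as np
from sklearn.cluster import KMeans

def fitness(X, k, n_boot=5, rng=None):
    """Inverse of mean within-cluster dissimilarity over bootstrap resamples.
    NB: this mirrors the abstract; unpenalized, it favors large k."""
    rng = np.random.default_rng(rng)
    inertias = []
    for _ in range(n_boot):
        sample = X[rng.integers(0, len(X), len(X))]   # bootstrap resample
        inertias.append(KMeans(n_clusters=k, n_init=10).fit(sample).inertia_)
    return 1.0 / np.mean(inertias)

def ga_cluster_size(X, k_min=2, k_max=10, pop=8, gens=5, p_mut=0.2, seed=0):
    """Tiny GA over candidate K: fitness-proportional selection + mutation."""
    rng = np.random.default_rng(seed)
    ks = rng.integers(k_min, k_max + 1, pop)
    for _ in range(gens):
        scores = np.array([fitness(X, k, rng=rng) for k in ks])
        ks = ks[rng.choice(pop, pop, p=scores / scores.sum())]  # selection
        mutate = rng.random(pop) < p_mut
        ks[mutate] = rng.integers(k_min, k_max + 1, mutate.sum())  # mutation
    scores = np.array([fitness(X, k, rng=rng) for k in ks])
    return int(ks[np.argmax(scores)])
```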

Non-Simultaneous Sampling Deactivation during the Parameter Approximation of a Topic Model

  • Jeong, Young-Seob;Jin, Sou-Young;Choi, Ho-Jin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.1 / pp.81-98 / 2013
  • Since Probabilistic Latent Semantic Analysis (PLSA) and Latent Dirichlet Allocation (LDA) were introduced, many revised or extended topic models have appeared. Because the likelihood of these models is intractable, training any topic model requires an approximation algorithm such as variational approximation, Laplace approximation, or Markov chain Monte Carlo (MCMC). Although these approximation algorithms perform well, training a topic model remains computationally expensive given the large amount of data it requires. In this paper, we propose a new method, called non-simultaneous sampling deactivation, for efficient approximation of the parameters of a topic model. Whereas in traditional approximation algorithms each random variable is sampled through a single predefined burn-in period, our method is based on the observation that the random variable nodes of a topic model converge after different numbers of iterations. During the iterative approximation, the proposed method therefore terminates, or deactivates, each random variable node once it has converged. Compared to the traditional approach, in which every node is deactivated at the same time, the proposed method improves inference efficiency in both time and memory. We do not propose a new approximation algorithm but a new process applicable to existing ones. Through experiments, we show the time and memory efficiency of the method and discuss the tradeoff between the efficiency of the approximation process and parameter consistency.
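
The deactivation idea can be sketched generically: each node keeps being sampled until its value stabilizes, then drops out of the sweep. The stability-for-`patience`-sweeps test below is only a stand-in convergence criterion, and `sample_node` is a hypothetical callback that draws one variable from its conditional distribution:

```python
import numpy as np

def gibbs_with_deactivation(sample_node, n_nodes, max_iters=500,
                            patience=20, seed=0):
    """Gibbs-style loop where each variable node is deactivated once its
    sampled value has not changed for `patience` consecutive sweeps,
    instead of running every node for one shared burn-in period."""
    rng = np.random.default_rng(seed)
    state = np.zeros(n_nodes, dtype=int)
    stable = np.zeros(n_nodes, dtype=int)
    active = np.ones(n_nodes, dtype=bool)
    for _ in range(max_iters):
        for i in np.flatnonzero(active):
            new = sample_node(i, state, rng)   # draw from the conditional
            stable[i] = stable[i] + 1 if new == state[i] else 0
            state[i] = new
            if stable[i] >= patience:
                active[i] = False              # node converged: deactivate
        if not active.any():                   # all nodes deactivated
            break
    return state
```

Skipping deactivated nodes is exactly where the time and memory savings come from: later sweeps touch fewer and fewer variables.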

Massive 3D Point Cloud Visualization by Generating Artificial Center Points from Multi-Resolution Cube Grid Structure (다단계 정육면체 격자 기반의 가상점 생성을 통한 대용량 3D point cloud 가시화)

  • Yang, Seung-Chan;Han, Soo Hee;Heo, Joon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.4 / pp.335-342 / 2012
  • 3D point clouds are widely used in architecture, civil engineering, medicine, computer graphics, and many other fields. With the improvement of 3D laser scanners, massive point clouds whose file sizes exceed a computer's memory demand efficient preprocessing and visualization. We suggest a data structure to solve this problem: during preprocessing, the point cloud is progressively subdivided by a cube grid structure of arbitrary cell size, and for each level a point cloud subset is generated from the centers of the occupied grid cells. A massive 3D point cloud file was tested with two algorithms, QSplat and ours. Our grid-based algorithm was slower in preprocessing but rendered faster than QSplat. It is also designed to support editing and segmentation, since it retains the original coordinates of the 3D point cloud.
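
The multi-resolution grid of artificial center points can be sketched as follows: at each level the bounding box is cut into finer cubes, and every occupied cube is represented by its center, giving coarse-to-fine proxy point sets for rendering. The cell counts here are illustrative, not the paper's settings:

```python
import numpy as np

def multires_centers(points, levels=4, base_cells=16):
    """Per-level 'artificial center points' for a point cloud.

    points: array of shape (N, 3). At level L the bounding box is cut into
    (base_cells * 2**L)^3 cubes; each occupied cube contributes its center.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    span = float((hi - lo).max()) or 1.0
    pyramid = []
    for level in range(levels):
        n = base_cells * 2 ** level                 # cells per axis
        cell = span / n
        idx = np.floor((points - lo) / cell).astype(int)
        idx = np.unique(np.clip(idx, 0, n - 1), axis=0)  # occupied cells
        pyramid.append(lo + (idx + 0.5) * cell)     # cell-center proxies
    return pyramid
```

A viewer would then pick the level whose proxy density matches the current screen resolution, falling back to the original points only for close-up views or editing.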