• Title/Summary/Keyword: A* algorithm


Role of unstructured data on water surface elevation prediction with LSTM: case study on Jamsu Bridge, Korea (LSTM 기법을 활용한 수위 예측 알고리즘 개발 시 비정형자료의 역할에 관한 연구: 잠수교 사례)

  • Lee, Seung Yeon;Yoo, Hyung Ju;Lee, Seung Oh
    • Journal of Korea Water Resources Association / v.54 no.spc1 / pp.1195-1204 / 2021
  • Recently, local torrential rains have become more frequent and severe due to abnormal climate conditions, causing a surge in damage to people and property, including infrastructure along rivers. In this study, a water surface elevation prediction algorithm was developed using LSTM (Long Short-Term Memory), a machine learning technique specialized for time series data, to estimate and prevent flooding of such facilities. The study area is Jamsu Bridge, the study period covers June, July, and August of six years (2015~2020), and the water surface elevation of Jamsu Bridge 3 hours ahead was predicted. The input data set consists of the water surface elevation of Jamsu Bridge (EL.m), the discharge from Paldang Dam (m3/s), the tide level at Ganghwa Bridge (cm), and the number of tweets in Seoul. Complementary data were constructed using not only the structured data mainly used in previous studies but also unstructured data built through word clouds, and the role of the unstructured data was examined by comparing and analyzing results with and without it. When predicting the water surface elevation of Jamsu Bridge, prediction accuracy improved, and it was found that the complementary data could provide conservative alerts to reduce casualties. In this study, it was concluded that the use of complementary data was relatively effective in ensuring user safety and the convenience of riverside infrastructure. In the future, more accurate water surface elevation prediction is expected through additional types of unstructured data or more detailed pre-processing of the input data.
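As a rough illustration of the setup this abstract describes, a minimal Keras-style LSTM sketch might look like the following; the file name, column names, window length, and network size are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch: predict water surface elevation 3 hours ahead from
# structured features plus tweet counts (the "unstructured" proxy).
import numpy as np
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_sequences(df, features, target, window=24, lead=3):
    """Build (samples, window, n_features) inputs and targets 'lead' steps ahead."""
    X, y = [], []
    values = df[features].to_numpy()
    target_values = df[target].to_numpy()
    for i in range(len(df) - window - lead):
        X.append(values[i:i + window])
        y.append(target_values[i + window + lead - 1])
    return np.array(X), np.array(y)

df = pd.read_csv("jamsu_bridge_hourly.csv")  # hypothetical hourly data set
features = ["stage_el_m", "paldang_discharge_m3s", "ganghwa_tide_cm", "tweet_count"]
X, y = make_sequences(df, features, target="stage_el_m")

model = Sequential([
    LSTM(64, input_shape=(X.shape[1], X.shape[2])),
    Dense(1),  # water surface elevation 3 h ahead
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2)
```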

Scheduling of Parallel Offset Printing Process for Packaging Printing (패키징 인쇄를 위한 병렬 오프셋 인쇄 공정의 스케줄링)

  • Jaekyeong, Moon;Hyunchul, Tae
    • KOREAN JOURNAL OF PACKAGING SCIENCE & TECHNOLOGY / v.28 no.3 / pp.183-192 / 2022
  • With the growth of the packaging industry, demand for packaging printing comes in various forms. Customers' orders are diversifying and quality standards are rising. Offset printing is mainly used in packaging printing since it is easy to print in large quantities. However, the productivity of offset printing decreases when printing diverse orders, because it takes time to change colors for each printing unit. Therefore, scheduling that minimizes the color replacement time and shortens the overall makespan is required. With the existing manual method based on workers' experience or intuition, scheduling results may vary from worker to worker, and this uncertainty increases production costs. In this study, we propose an automated scheduling method for a parallel offset printing process for packaging printing. We decompose the original problem into assigning and sequencing orders, and arranging inks for printing; a vehicle routing problem and an assignment problem are applied to the respective parts. Mixed integer programming is used to model the problem mathematically, but it requires a large amount of computational time as the problem size grows, so a guided local search algorithm is used to solve the problem. Through experiments on actual data, we reviewed our method's applicability and role in the field.
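As a sketch of how the order-sequencing part can be cast as a vehicle routing problem solved with guided local search, the following uses Google OR-Tools; the ink sets and the changeover cost (number of ink units to replace) are illustrative assumptions, not the paper's actual model.

```python
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

# Hypothetical orders: each needs a set of ink colors; node 0 is a dummy depot.
order_inks = [set(), {"C", "M", "Y"}, {"C", "K"}, {"M", "Y", "K"}, {"C", "M", "K"}]
num_presses = 2  # parallel offset presses play the role of "vehicles"

def changeover(a, b):
    """Changeover time approximated by the number of ink units to replace."""
    return len(order_inks[a] ^ order_inks[b])

manager = pywrapcp.RoutingIndexManager(len(order_inks), num_presses, 0)
routing = pywrapcp.RoutingModel(manager)

def transit_cb(from_index, to_index):
    return changeover(manager.IndexToNode(from_index), manager.IndexToNode(to_index))

transit = routing.RegisterTransitCallback(transit_cb)
routing.SetArcCostEvaluatorOfAllVehicles(transit)

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC
params.local_search_metaheuristic = routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH
params.time_limit.FromSeconds(5)

solution = routing.SolveWithParameters(params)
for press in range(num_presses):
    index = routing.Start(press)
    route = []
    while not routing.IsEnd(index):
        route.append(manager.IndexToNode(index))
        index = solution.Value(routing.NextVar(index))
    print(f"press {press}: order sequence {route[1:]}")
```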

A Study on the Implementation and Development of Image Processing Algorithms for Vein Detection Equipment (정맥 검출 장비 구현 및 영상처리 알고리즘 개발에 대한 연구)

  • Jin-Hyoung, Jeong;Jae-Hyun, Jo;Jee-Hun, Jang;Sang-Sik, Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.6 / pp.463-470 / 2022
  • Intravenous injection is widely used for patient treatment, including injectable drugs, fluids, parenteral nutrition, and blood products, and, together with blood collection, peripheral catheter insertion, and other IV therapy, it is the most frequently performed invasive procedure for inpatients, with more than 1 billion cases per year. Intravenous injection is a difficult procedure performed only by experienced nurses trained in it, and failure can lead to thrombosis, hematoma, or nerve damage to the vein. Even nurses who frequently perform intravenous injections may make mistakes, because it is not easy to detect veins due to factors such as obesity, skin color, and age. Accordingly, studies on auxiliary equipment capable of visualizing the venous structure of the back of the hand or arm have been published to reduce mistakes during intravenous injection. This paper describes the development of vein detection equipment that visualizes the venous structure during intravenous injection; the optimal combination was selected by comparing the brightness of acquired images for combinations of near-infrared (NIR) LEDs and filters with different wavelength bands. In addition, an image processing algorithm was derived that thresholds the image and colors the blood vessel regions green through grayscale conversion, histogram equalization, and sharpening filters, to improve the clarity of the vein images obtained through the implemented vein detection experimental module.
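A minimal OpenCV sketch of the processing chain named in this abstract (grayscale, histogram equalization, sharpening, thresholding, coloring vessel regions green) might look as follows; the input file name, sharpening kernel, and the use of Otsu thresholding are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("nir_hand.png")                      # assumed NIR capture
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # grayscale conversion
equalized = cv2.equalizeHist(gray)                    # histogram equalization

sharpen_kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]])
sharp = cv2.filter2D(equalized, -1, sharpen_kernel)   # sharpening filter

# Threshold the darker vessel regions (Otsu on the inverted image, as an example).
_, mask = cv2.threshold(255 - sharp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

overlay = img.copy()
overlay[mask > 0] = (0, 255, 0)                       # paint vessel pixels green (BGR)
cv2.imwrite("vein_overlay.png", overlay)
```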

Parallel Computation on the Three-dimensional Electromagnetic Field by the Graph Partitioning and Multi-frontal Method (그래프 분할 및 다중 프론탈 기법에 의거한 3차원 전자기장의 병렬 해석)

  • Kang, Seung-Hoon;Song, Dong-Hyeon;Choi, JaeWon;Shin, SangJoon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.12 / pp.889-898 / 2022
  • In this paper, a parallel computing method for the three-dimensional electromagnetic field is proposed. The present electromagnetic scattering analysis is conducted based on the time-harmonic vector wave equation and the finite element method. Edge-based elements and a second-order absorbing boundary condition are used. Parallelization of the elemental numerical integration and the matrix assembly is accomplished by allocating a partitioned finite element subdomain to each processor. The graph partitioning library METIS is employed for subdomain generation. The large sparse matrix computation is conducted by MUMPS, a parallel computing library based on the multi-frontal method. The accuracy of the present program is validated by comparison against the Mie-series analytical solution and results from ANSYS HFSS. In addition, the scalability is verified by measuring the speed-up with respect to the number of processors used. The present electromagnetic scattering analysis is performed for a perfect electric conductor sphere, an isotropic/anisotropic dielectric sphere, and a missile configuration. The algorithm of the present program will be applied to the finite element tearing and interconnecting method, aiming at further extended parallel computing performance.
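A heavily simplified, serial sketch of the workflow referenced here is shown below: partition an element graph and factorize a sparse system with a direct solver. It uses pymetis for the METIS partitioning and scipy's sparse LU merely as a stand-in for MUMPS; the tiny adjacency graph and matrix entries are placeholders, not finite element data.

```python
import numpy as np
import pymetis
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical element adjacency graph (which elements share a face/edge).
adjacency = [[1], [0, 2], [1, 3], [2, 4], [3]]
n_parts = 2
_, membership = pymetis.part_graph(n_parts, adjacency=adjacency)  # METIS partitioning

# In the parallel code each processor assembles only its own subdomain rows;
# here we assemble one small global matrix and just record the partition.
n_dof = len(adjacency)
K = sp.lil_matrix((n_dof, n_dof), dtype=complex)
for e, neighbors in enumerate(adjacency):
    K[e, e] = 4.0 + 0.1j                  # placeholder "element" contribution
    for n in neighbors:
        K[e, n] = -1.0
b = np.ones(n_dof, dtype=complex)

lu = spla.splu(K.tocsc())                 # sparse direct factorization (stand-in for MUMPS)
x = lu.solve(b)
print("partition:", membership)
print("solution:", x)
```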

Performance Analysis of DoS/DDoS Attack Detection Algorithms using Different False Alarm Rates (False Alarm Rate 변화에 따른 DoS/DDoS 탐지 알고리즘의 성능 분석)

  • Jang, Beom-Soo;Lee, Joo-Young;Jung, Jae-Il
    • Journal of the Korea Society for Simulation / v.19 no.4 / pp.139-149 / 2010
  • The Internet was designed for network scalability and best-effort service, which makes all hosts connected to it vulnerable to attack. Many papers have proposed attack detection algorithms against attacks using IP spoofing and DoS/DDoS attacks. The purpose of a DoS/DDoS attack is achieved within a short period after the attack begins; therefore, DoS/DDoS attacks should be detected as soon as possible. The false alarm rates used by attack detection algorithms consist of the false negative rate and the false positive rate, and they are important metrics for evaluating attack detection. In this paper, we analyze by simulation the performance of attack detection algorithms in terms of the impact of false negative rate and false positive rate variation on normal traffic and attack traffic. As a result, we find that the number of passed attack packets is proportional to the false negative rate, and the number of passed normal packets is inversely proportional to the false positive rate. We also analyze the limits of attack detection arising from the relation between the false negative rate and the false positive rate. Finally, we propose a solution that minimizes these limits by defining the network state using the ratio between the number of packets classified as attack packets and the number of packets classified as normal packets, and we find that the performance of the attack detection algorithm is improved by passing the packets classified as attacks.
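A toy simulation of the relationship the abstract states is sketched below: attack packets slip through at the false negative rate and normal packets are dropped at the false positive rate. The traffic volumes and rate values are arbitrary illustrative numbers, not the paper's experimental settings.

```python
import random

def simulate(n_attack, n_normal, fnr, fpr, seed=0):
    """Return (passed attack packets, passed normal packets) for given error rates."""
    rng = random.Random(seed)
    passed_attack = sum(rng.random() < fnr for _ in range(n_attack))   # missed detections
    passed_normal = sum(rng.random() >= fpr for _ in range(n_normal))  # not falsely flagged
    return passed_attack, passed_normal

for fnr, fpr in [(0.01, 0.01), (0.05, 0.01), (0.01, 0.05)]:
    atk, norm = simulate(n_attack=10_000, n_normal=10_000, fnr=fnr, fpr=fpr)
    # Passed attack packets grow with FNR; passed normal packets shrink as FPR grows.
    print(f"FNR={fnr:.2f} FPR={fpr:.2f} -> passed attack={atk}, passed normal={norm}")
```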

Data analysis by Integrating statistics and visualization: Visual verification for the prediction model (통계와 시각화를 결합한 데이터 분석: 예측모형 대한 시각화 검증)

  • Mun, Seong Min;Lee, Kyung Won
    • Design Convergence Study / v.15 no.6 / pp.195-214 / 2016
  • Predictive analysis is based on probabilistic learning algorithms known as pattern recognition or machine learning. Therefore, if users want to extract more information from the data, they need a high level of statistical knowledge. In addition, it is difficult to identify data patterns and characteristics. This study conducted statistical data analyses and visual data analyses to supplement the weaknesses of predictive analysis. Through this study, we could find some implications that had not been found in previous studies. First, we could identify data patterns when adjusting the data selection according to the splitting criteria of the decision tree method. Second, we could see what types of data were included in the final prediction model. In the statistical analysis, we found relations among multiple variables and deduced a prediction model for high box office performance. In the visualization analysis, we proposed a visual analysis method with various interactive functions. Finally, through this study we verified the final prediction model and suggested an analysis method that extracts a variety of information from the data.
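A small sketch of pairing a statistical model with a visual check, in the spirit of this abstract, follows: fit a decision tree and then inspect its splits instead of trusting the score alone. The box-office data file, feature names, and target column are hypothetical.

```python
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree

df = pd.read_csv("box_office.csv")                    # assumed data set
features = ["screens", "budget", "star_power", "opening_audience"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["high_performance"], test_size=0.3, random_state=42)

tree = DecisionTreeClassifier(max_depth=3, criterion="gini", random_state=42)
tree.fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))

# Visual verification: which variables drive the splits, and in what order?
plot_tree(tree, feature_names=features, class_names=["low", "high"], filled=True)
plt.show()
```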

Structural Optimization and Improvement of Initial Weight Dependency of the Neural Network Model for Determination of Preconsolidation Pressure from Piezocone Test Result (피에조콘을 이용한 선행압밀하중 결정 신경망 모델의 구조 최적화 및 초기 연결강도 의존성 개선)

  • Kim, Young-Sang;Joo, No-Ah;Park, Hyun-Il;Park, Sol-Ji
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.3C / pp.115-125 / 2009
  • The preconsolidation pressure has commonly been determined by the oedometer test. However, it can also be determined from in-situ tests, such as the piezocone test, with theoretical and/or empirical correlations. Recently, Neural Network (NN) theory was applied and some models were proposed to estimate the preconsolidation pressure or OCR. It has already been found that NN models can overcome site dependency and that prediction accuracy is greatly improved compared with existing theoretical and empirical models. However, since the optimization of the synaptic weights of an NN model depends on the initial synaptic weights, NN models trained with different initial weights cannot avoid variability in prediction results for a new database, even though they have the same structure and use the same transfer function. In this study, a Committee Neural Network (CNN) model is proposed to improve the initial weight dependency of the multi-layered neural network model for the prediction of the preconsolidation pressure of soft clay from piezocone test results. Prediction results of the CNN model are compared with those of conventional empirical and theoretical models and of a multi-layered neural network model with an optimized structure. It was found that even though an NN model has the optimized structure for a given training data set, it still has the initial weight dependency, while the proposed CNN model can improve this dependency and provide more consistent and precise inference results than existing NN models.
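A minimal sketch of the committee idea described here is given below: train several identically structured networks that differ only in their initial weights and average their outputs. The input features (piezocone readings), network size, and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def committee_predict(X_train, y_train, X_new, n_members=10):
    """Average predictions of identically structured NNs with different initial weights."""
    preds = []
    for seed in range(n_members):
        member = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                              max_iter=5000, random_state=seed)  # seed -> initial weights
        member.fit(X_train, y_train)
        preds.append(member.predict(X_new))
    return np.mean(preds, axis=0)

# Hypothetical training data: [cone resistance, pore pressure, effective stress] -> Pc
rng = np.random.default_rng(1)
X_train = rng.random((100, 3))
y_train = X_train @ np.array([1.5, 0.8, 2.0]) + 0.05 * rng.standard_normal(100)
X_new = rng.random((5, 3))
print("committee estimate of preconsolidation pressure:",
      committee_predict(X_train, y_train, X_new))
```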

Analysis of performance changes based on the characteristics of input image data in the deep learning-based algal detection model (딥러닝 기반 조류 탐지 모형의 입력 이미지 자료 특성에 따른 성능 변화 분석)

  • Juneoh Kim;Jiwon Baek;Jongrack Kim;Jungsu Park
    • Journal of Wetlands Research / v.25 no.4 / pp.267-273 / 2023
  • Algae are an important component of the ecosystem. However, excessive growth of cyanobacteria has various harmful effects on river environments, and diatoms affect the management of water supply processes. Algal monitoring is essential for sustainable and efficient algae management. In this study, an object detection model was developed that detects and classifies images of the four types of harmful cyanobacteria used as criteria for the algae alert system, plus one diatom, Synedra sp. You Only Look Once (YOLO) v8, the latest version of the YOLO model, was used to develop the model. The mean average precision (mAP) of the base model was 64.4. Five models were created to increase the diversity of the input images used for model training by applying rotation, magnification, and reduction to the original images, and changes in model performance were compared according to the composition of the input images. As a result of the analysis, the model that applied rotation, magnification, and reduction showed the best performance, with an mAP of 86.5. The mAP values of the models that used only image rotation, combined rotation and magnification, and combined rotation and reduction were 85.3, 82.3, and 83.8, respectively.
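A brief sketch of training a YOLOv8 detector with rotation and scaling augmentation via the ultralytics package is shown below; the dataset YAML, model size, and hyperparameter values are assumptions for illustration rather than the study's settings.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # pretrained base model
model.train(
    data="algae.yaml",            # assumed dataset config: 4 cyanobacteria classes + Synedra sp.
    epochs=100,
    imgsz=640,
    degrees=15.0,                 # random rotation (augmentation)
    scale=0.5,                    # random magnification/reduction (augmentation)
)
metrics = model.val()
print("mAP50:", metrics.box.map50)  # compare across augmentation settings
```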

Performance Comparison of Automatic Classification Using Word Embeddings of Book Titles (단행본 서명의 단어 임베딩에 따른 자동분류의 성능 비교)

  • Yong-Gu Lee
    • Journal of the Korean Society for Information Management / v.40 no.4 / pp.307-327 / 2023
  • To analyze the impact of word embeddings of book titles, this study utilized word embedding models (Word2vec, GloVe, fastText) to generate embedding vectors from book titles. These vectors were then used as classification features for automatic classification. The classifier used the k-nearest neighbors (kNN) algorithm, with the categories for automatic classification based on the DDC (Dewey Decimal Classification) main class 300 assigned to the books by libraries. In the automatic classification experiment applying word embeddings to book titles, the Skip-gram architectures of Word2vec and fastText yielded better kNN classification performance than TF-IDF features. In the optimization of various hyperparameters across the three models, the Skip-gram architecture of the fastText model demonstrated overall good performance; specifically, better performance was observed when using hierarchical softmax and larger embedding dimensions in this model. From a performance perspective, fastText can generate embeddings for substrings or subwords using the n-gram method, which has been shown to increase recall. The Skip-gram architecture of the Word2vec model generally showed good performance at low dimensions (size 300) and with small negative-sampling sizes (3 or 5).
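A condensed sketch of the pipeline described here follows: train a Skip-gram embedding on book titles, represent each title as the mean of its word vectors, and classify with kNN. The sample titles, DDC labels, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from gensim.models import FastText
from sklearn.neighbors import KNeighborsClassifier

titles = [["introduction", "to", "sociology"],
          ["principles", "of", "economics"],
          ["modern", "political", "theory"],
          ["social", "welfare", "policy"]]
labels = ["301", "330", "320", "361"]          # hypothetical DDC class-300 subdivisions

# Skip-gram (sg=1) with hierarchical softmax (hs=1), the setting reported to work well.
emb = FastText(sentences=titles, vector_size=300, sg=1, hs=1, window=5, min_count=1)

def title_vector(tokens):
    """Average the word (or subword) vectors of a title."""
    return np.mean([emb.wv[t] for t in tokens], axis=0)

X = np.array([title_vector(t) for t in titles])
knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(knn.predict([title_vector(["economics", "handbook"])]))
```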

Towards high-accuracy data modelling, uncertainty quantification and correlation analysis for SHM measurements during typhoon events using an improved most likely heteroscedastic Gaussian process

  • Qi-Ang Wang;Hao-Bo Wang;Zhan-Guo Ma;Yi-Qing Ni;Zhi-Jun Liu;Jian Jiang;Rui Sun;Hao-Wei Zhu
    • Smart Structures and Systems / v.32 no.4 / pp.267-279 / 2023
  • Data modelling and interpretation for structural health monitoring (SHM) field data are critical for evaluating structural performance and quantifying the vulnerability of infrastructure systems. In order to improve data modelling accuracy and to extend the application range from data regression analysis to out-of-sample forecasting analysis, an improved most likely heteroscedastic Gaussian process (iMLHGP) methodology is proposed in this study by incorporating an out-of-sample forecasting algorithm. The proposed iMLHGP method overcomes the constant-variance limitation of the standard Gaussian process (GP) and can be used for estimating non-stationary typhoon-induced response statistics with high volatility. A first attempt at performing data regression and forecasting analysis on structural responses using the proposed iMLHGP method is presented by applying it to real-world field SHM data from an instrumented cable-stayed bridge during typhoon events. Uncertainty quantification and correlation analysis were also carried out to investigate the influence of typhoons on bridge strain data. Results show that the iMLHGP method has high accuracy in both regression and out-of-sample forecasting. The iMLHGP framework takes both data heteroscedasticity and accurate analytical processing of the noise variance (replaced with a point estimate of the most likely value) into account to avoid intensive computational effort. According to the uncertainty quantification and correlation analysis results, the uncertainties of the strain measurements are affected by both traffic and wind speed. The overall change of bridge strain is affected by temperature, and the local fluctuation is greatly affected by wind speed in typhoon conditions.
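A simplified sketch of the "most likely heteroscedastic GP" idea underlying this work is given below: fit a homoscedastic GP, estimate point-wise noise from the residuals, fit a second GP to the log-noise, and refit the mean GP with per-sample noise. This follows the general MLHGP recipe rather than the paper's improved (iMLHGP) variant, and the data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 200).reshape(-1, 1)
noise_sd = 0.05 + 0.3 * (X.ravel() / 10)                  # heteroscedastic noise level
y = np.sin(X).ravel() + rng.normal(0, noise_sd)

# Step 1: homoscedastic GP for the mean response.
gp_mean = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
residual_sq = (y - gp_mean.predict(X)) ** 2

# Step 2: a GP on log squared residuals gives a "most likely" noise level per input.
gp_noise = GaussianProcessRegressor(kernel=RBF()).fit(X, np.log(residual_sq + 1e-8))
noise_var = np.exp(gp_noise.predict(X))

# Step 3: refit the mean GP with point-wise noise (alpha accepts per-sample values).
gp_hetero = GaussianProcessRegressor(kernel=RBF(), alpha=noise_var).fit(X, y)
mean, std = gp_hetero.predict(X, return_std=True)         # predictive std now varies with x
```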