• Title/Summary/Keyword: Computational algorithms

An Algorithm of Welding Bead Detection and Evaluation Using Multiple Filters and Geodesic Active Contour (다중필터와 축지적 활성 윤곽선 알고리즘을 이용한 용접 비드 검출 및 판단 알고리즘)

  • Milyahilu, John; Kim, Young-Bong; Lee, Jae Eun; Kim, Jong-Nam
    • Journal of the Institute of Convergence Signal Processing / v.22 no.3 / pp.141-148 / 2021
  • In this paper, we propose an algorithm for welding bead detection and evaluation that uses a geodesic active contour algorithm and a high-pass filter together with image processing techniques. The algorithm uses histogram equalization and a Gaussian-based high-pass filter to improve contrast, and these preprocessing steps smooth the welding beads and reduce noise in the image. The algorithm then detects the welding bead area by applying the geodesic active contour algorithm and a morphological operation. It also applies a balloon force that either inflates or deflates the evolving contour for better segmentation. After that, we propose a method for determining the quality of a welding bead using the effective length and width of the detected bead. In the experiments, our algorithm achieved the highest recall, precision, F-measure, and IoU, at 0.9894, 0.9668, 0.9780, and 0.8957, respectively. We compared the proposed algorithm with conventional algorithms to evaluate its performance. The proposed algorithm achieved better performance than the conventional ones, with a maximum computation time of 0.6 seconds for segmenting and evaluating one welding bead.
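
A minimal sketch of the described pipeline, using scikit-image's morphological geodesic active contour; the input file name, filter settings, initial level set, and balloon value are illustrative assumptions rather than the authors' exact parameters.

```python
import numpy as np
from skimage import exposure, filters, io
from skimage.morphology import binary_closing, disk
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

image = io.imread("weld_bead.png", as_gray=True)  # hypothetical input image

# Contrast enhancement via histogram equalization.
eq = exposure.equalize_hist(image)

# High-pass emphasis: subtract a Gaussian-smoothed copy of the image.
highpass = eq - filters.gaussian(eq, sigma=3)

# Edge-stopping image that the geodesic active contour evolves on.
gimage = inverse_gaussian_gradient(highpass)

# Initial level set; a positive balloon force inflates the contour outward.
init = np.zeros(image.shape, dtype=np.int8)
init[10:-10, 10:-10] = 1
seg = morphological_geodesic_active_contour(
    gimage, num_iter=200, init_level_set=init, smoothing=2, balloon=1)

# Morphological closing cleans small gaps in the detected bead mask.
mask = binary_closing(seg.astype(bool), disk(3))

# Effective length and width of the bead for the quality decision.
rows, cols = np.nonzero(mask)
length, width = cols.max() - cols.min() + 1, rows.max() - rows.min() + 1
```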

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee; Lee, Jae-Sung; Lee, Mi-No; Lee, Ju-Hahn; Kim, Joong-Hyun; Kim, Chan-Hyeong; Lee, Chun-Sik; Lee, Dong-Soo; Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging / v.41 no.3 / pp.234-240 / 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered subsets principle, can be applied to the problem of image reconstruction for the Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were run for 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to standard EM with 64 iterations, was approximately 14 times faster in computation time than standard EM. In OSEM, all three subset-selection schemes yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for the Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while preserving the quality of the standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position is the most suitable.
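
A minimal sketch of the OSEM update for a generic system matrix, assuming a dense NumPy matrix A and a simple sequential subset split; the actual study groups projections by scatter angle and/or detector position.

```python
import numpy as np

def osem(A, y, n_subsets=16, n_iter=4, eps=1e-12):
    """OSEM reconstruction: A is the (n_proj, n_pixels) system matrix,
    y the measured projections; returns the reconstructed image x."""
    x = np.ones(A.shape[1])                        # uniform initial image
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:                        # one EM step per subset
            As, ys = A[idx], y[idx]
            ratio = ys / np.maximum(As @ x, eps)   # measured / forward projection
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), eps)
    return x

# Four iterations over 16 subsets apply 64 subset updates in only 4 full
# passes over the data, which the paper reports is roughly 14 times faster
# than 64 iterations of standard EM.
```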

Comparison of Algorithms for Generating Parametric Images of Cerebral Blood Flow Using ${H_2}^{15}O$ Positron Emission Tomography (PET) (${H_2}^{15}O$ PET을 이용한 뇌혈류 파라메트릭 영상 구성을 위한 알고리즘 비교)

  • Lee, Jae-Sung; Lee, Dong-Soo; Park, Kwang-Suk; Chung, June-Key; Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.37 no.5 / pp.288-300 / 2003
  • Purpose: To obtain regional cerebral blood flow (rCBF) and the tissue-blood partition coefficient from ${H_2}^{15}O$ PET time-activity curves, fitting of the parameters in the Kety model is conventionally accomplished by nonlinear least squares (NLS) analysis. However, NLS requires considerable computation time and is therefore impractical for the pixel-by-pixel analysis needed to generate parametric images of these parameters. In this study, we investigated several fast parameter estimation methods for parametric image generation and compared their statistical reliability and computational efficiency. Materials and Methods: These methods included linear least squares (LLS), linear weighted least squares (LWLS), linear generalized least squares (GLS), linear generalized weighted least squares (GWLS), weighted integration (WI), and a model-based clustering method (CAKS). ${H_2}^{15}O$ dynamic brain PET with a Poisson noise component was simulated using the numerical Zubal brain phantom. Error and bias in the estimation of rCBF and the partition coefficient, as well as computation time, were estimated and compared in various noise environments. In addition, parametric images from ${H_2}^{15}O$ dynamic brain PET data acquired from 16 healthy volunteers under various physiological conditions were compared to examine the utility of these methods for real human data. Results: These fast algorithms produced parametric images with similar image quality and statistical reliability. When the CAKS and LLS methods were used in combination, computation time was significantly reduced, to less than 30 seconds for $128{\times}128{\times}46$ images on a Pentium III processor. Conclusion: Parametric images of rCBF and the partition coefficient with good statistical properties can be generated within a computation time acceptable in clinical situations.
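
For the one-tissue Kety model, LLS replaces the nonlinear fit with a linear regression on integrated curves, $C_T(t) = K_1\int_0^t C_a\,d\tau - k_2\int_0^t C_T\,d\tau$. A minimal pixel-wise sketch with NumPy/SciPy follows; array names, shapes, and the frame sampling are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def lls_parametric(tac, ca, t, eps=1e-12):
    """Pixel-wise LLS for Ct(t) = K1*int(Ca) - k2*int(Ct).
    tac: (n_frames, n_pixels) tissue curves, ca: (n_frames,) arterial
    input, t: (n_frames,) frame mid-times; returns rCBF (K1) and the
    partition coefficient (K1/k2) as flat parametric arrays."""
    int_ca = cumulative_trapezoid(ca, t, initial=0)            # integral of Ca
    int_ct = cumulative_trapezoid(tac, t, initial=0, axis=0)   # integral of Ct
    n_pix = tac.shape[1]
    k1, k2 = np.empty(n_pix), np.empty(n_pix)
    for j in range(n_pix):               # one small linear fit per pixel
        X = np.column_stack([int_ca, -int_ct[:, j]])
        (k1[j], k2[j]), *_ = np.linalg.lstsq(X, tac[:, j], rcond=None)
    return k1, k1 / np.maximum(k2, eps)
```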

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok; Yang, Seok Woo; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. Data density has a significant influence on the performance of sentence classification: higher-dimensional data requires heavy computation and can cause high computational cost and overfitting in the model, so a dimension reduction step is necessary to improve model performance. Diverse methods have been proposed, ranging from simply reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect classifier performance in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. Beyond these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also used. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we assume that words similar to them also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and build word embeddings. Second, we additionally remove words that are similar to those with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed into deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes with a helpful-vote ratio over 70% were classified as helpful reviews; since Yelp only shows the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that use all words. One of the proposed methods outperformed the embeddings built on all words: removing unimportant words improved performance, although removing too many words lowered it.
For future research, it is necessary to consider diverse preprocessing strategies and in-depth analysis of word co-occurrence for measuring similarity between words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be applied with the proposed elimination methods, making it possible to explore combinations of word embedding methods and elimination methods.
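
A minimal sketch of the first elimination rule (drop low information gain words, then also drop their Word2Vec neighbors), assuming scikit-learn for the information gain scores and gensim for the embeddings; the toy corpus, bottom-20% cutoff, and 0.8 similarity threshold are illustrative, not the paper's settings.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

texts = ["great book very helpful", "total waste of money",
         "helpful and well written", "boring waste of time"]  # toy corpus
labels = [1, 0, 1, 0]                                         # helpful / not

# Information gain of each vocabulary word with respect to the label.
vec = CountVectorizer()
X = vec.fit_transform(texts)
ig = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
vocab = np.array(vec.get_feature_names_out())
to_drop = set(vocab[ig <= np.percentile(ig, 20)])   # bottom 20% by IG

# Extend the removal set with cosine-similar words from Word2Vec.
w2v = Word2Vec([t.split() for t in texts], vector_size=50, min_count=1, seed=0)
for w in list(to_drop):
    if w in w2v.wv:
        to_drop.update(s for s, sim in w2v.wv.most_similar(w, topn=3)
                       if sim > 0.8)

# The filtered text then feeds the CNN / attention-based BiLSTM classifiers.
filtered = [" ".join(w for w in t.split() if w not in to_drop) for t in texts]
```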

Estimation of Chlorophyll Contents in Pear Tree Using Unmanned Aerial Vehicle-Based Hyperspectral Imagery (무인기 기반 초분광영상을 이용한 배나무 엽록소 함량 추정)

  • Ye Seong Kang; Ki Su Park; Eun Li Kim; Jong Chan Jeong; Chan Seok Ryu; Jung Gun Cho
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.669-681 / 2023
  • Chlorophyll content is an important indicator for evaluating the growth of fruit trees, but the existing destructive survey used to estimate it requires relatively large labor input and a long time; studies have therefore tried to apply remote sensing technology as a non-destructive alternative. This study was conducted to non-destructively evaluate the chlorophyll content of pear tree leaves using unmanned aerial vehicle-based hyperspectral imagery over two years (2021 and 2022). The single-band reflectance of the pear tree canopy, extracted through image processing, was converted into band ratios to minimize unstable radiation effects over time. Estimation (calibration and validation) models were developed using the machine learning algorithms elastic-net, k-nearest neighbors (KNN), and support vector machine, with band ratios as input variables. By comparing the performance of estimation models based on the full set of band ratios, key band ratios that reduce computational cost and improve reproducibility were selected. With the full band ratios, all machine learning models achieved calibration performance of coefficient of determination (R2) ≥ 0.67, root mean squared error (RMSE) ≤ 1.22 ㎍/cm2, and relative error (RE) ≤ 17.9%, and validation performance of R2 ≥ 0.56, RMSE ≤ 1.41 ㎍/cm2, and RE ≤ 20.7%; based on these comparisons, four key band ratios were selected. There was no significant difference in validation performance between the machine learning models, so the KNN model with the highest calibration performance was used as the standard; its key band ratios were 710/714, 718/722, 754/758, and 758/762 nm. Its calibration performance was R2 = 0.80, RMSE = 0.94 ㎍/cm2, and RE = 13.9%, and its validation performance was R2 = 0.57, RMSE = 1.40 ㎍/cm2, and RE = 20.5%. Although the validation performance was not sufficient for estimating the chlorophyll content of pear tree leaves, it is meaningful that key band ratios were selected as a standard for future research. Future work should continuously secure additional datasets to improve estimation performance, verify the reliability of the selected key band ratios, and upgrade the estimation model so that it is reproducible in actual orchards.
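
A minimal sketch of the band-ratio KNN regression, assuming scikit-learn and hypothetical NumPy files for canopy reflectance, band wavelengths, and measured chlorophyll; the four key ratios follow the paper, while k, the data split, and the file names are illustrative.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

refl = np.load("pear_canopy_reflectance.npy")  # (n_samples, n_bands), hypothetical
chl = np.load("chlorophyll_ug_cm2.npy")        # measured chlorophyll per sample
wl = np.load("band_wavelengths_nm.npy")        # band centers in nm

def band(nm):
    """Reflectance of the band whose center is closest to nm."""
    return refl[:, np.argmin(np.abs(wl - nm))]

# Key band ratios selected in the paper: 710/714, 718/722, 754/758, 758/762 nm.
X = np.column_stack([band(a) / band(b)
                     for a, b in [(710, 714), (718, 722),
                                  (754, 758), (758, 762)]])

X_tr, X_te, y_tr, y_te = train_test_split(X, chl, test_size=0.3, random_state=0)
model = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}",
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.2f} ug/cm2")
```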