Title/Summary/Keyword: Algorithm Model

lp-norm regularization for impact force identification from highly incomplete measurements

  • Yanan Wang;Baijie Qiao;Jinxin Liu;Junjiang Liu;Xuefeng Chen
    • Smart Structures and Systems
    • /
    • v.34 no.2
    • /
    • pp.97-116
    • /
    • 2024
  • The standard l1-norm regularization has recently been introduced for impact force identification, but it generally underestimates the peak force. Compared to l1-norm regularization, lp-norm (0 ≤ p < 1) regularization, with a nonconvex penalty function, has promising properties such as enforcing sparsity. In the framework of sparse regularization, if the desired solution is sparse in the time domain or another domain, the under-determined problem with fewer measurements than candidate excitations may admit a unique solution, i.e., the sparsest one. Considering the joint sparse structure of impact forces in the temporal and spatial domains, we propose a general lp-norm (0 ≤ p < 1) regularization methodology for simultaneous identification of the impact location and force time-history from highly incomplete measurements. Firstly, a nonconvex optimization model based on the lp-norm penalty is developed to regularize the highly under-determined problem of impact force identification. Secondly, an iteratively reweighted l1-norm algorithm is introduced to solve this under-determined and ill-conditioned regularization model by transforming it into a series of l1-norm regularization problems. Finally, numerical simulations and experimental validation, including single-source and two-source cases of impact force identification, are conducted on plate structures to evaluate the performance of lp-norm (0 ≤ p < 1) regularization. Both numerical and experimental results demonstrate that the proposed lp-norm regularization method, using only a single accelerometer, can locate the actual impacts among nine fixed candidate sources and simultaneously reconstruct the impact force time-history; compared to the state-of-the-art l1-norm regularization, lp-norm (0 ≤ p < 1) regularization yields sparser and more accurate estimates; and although the peak relative error of the identified impact force tends to decrease as p approaches 0, the results for 0 ≤ p ≤ 1/2 show no significant differences.
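To make the reweighting idea concrete, here is a minimal Python sketch of an iteratively reweighted l1 loop for the lp-norm (0 ≤ p < 1) penalty. It is not the authors' implementation: the transfer matrix H, measurement vector y, and the parameters lam, n_iter, and eps are illustrative assumptions, and each weighted l1 subproblem is solved with scikit-learn's Lasso via a column rescaling.

```python
import numpy as np
from sklearn.linear_model import Lasso

def irl1_lp(H, y, p=0.5, lam=1e-2, n_iter=10, eps=1e-6):
    """lp-norm (0 <= p < 1) regularization via iteratively reweighted l1.

    Each pass solves min_z ||(H/w) z - y||^2 + lam * ||z||_1 with z = w * x,
    then updates the weights w_i = (|x_i| + eps)^(p - 1).
    """
    m, n = H.shape
    w = np.ones(n)                            # uniform weights -> plain l1
    x = np.zeros(n)
    for _ in range(n_iter):
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
        lasso.fit(H / w, y)                   # weighted l1 via column rescaling
        x = lasso.coef_ / w                   # map back to original variables
        w = (np.abs(x) + eps) ** (p - 1.0)    # small |x_i| -> heavier penalty
    return x

# Toy under-determined system: 40 measurements, 120 candidate excitations.
rng = np.random.default_rng(0)
H = rng.standard_normal((40, 120))
x_true = np.zeros(120)
x_true[[10, 57]] = [3.0, -2.0]                # two sparse "impacts"
x_hat = irl1_lp(H, H @ x_true, p=0.5)
```

With p = 1 the weights stay at one and the loop reduces to a single standard l1 problem, which is why lp regularization can be read as a reweighted generalization of the l1 baseline.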

A Study on the Artificial Intelligence (AI) Training Data Quality: Fuzzy-set Qualitative Comparative Analysis (fsQCA) Approach (인공지능 학습용 데이터 품질에 대한 연구: 퍼지셋 질적비교분석)

  • Hyunmok Oh;Seoyoun Lee;Younghoon Chang
    • Information Systems Review
    • /
    • v.26 no.1
    • /
    • pp.19-56
    • /
    • 2024
  • This study is an empirical investigation aimed at improving understanding of AI (artificial intelligence) training data projects in South Korea. It focuses on the data quality concerns of policy-executing institutions, data construction companies, and organizations utilizing AI training data to develop algorithms that society can rely on. As an academic contribution, the study proposes a theoretical foundation and a research model for understanding AI training data quality and its antecedents, as well as the unique data and ethical aspects of AI. To this end, the research model specifies important antecedents of AI training data quality, such as data attribute factors, data-building environmental factors, and data type-related factors. The study collected 393 responses from practitioners and personnel at companies that build AI training data and companies that develop AI services. Data analysis was conducted through fuzzy-set qualitative comparative analysis (fsQCA) and artificial neural network (ANN) analysis, yielding academic and practical implications for the quality of AI training data.

Development of Prediction Model for XRD Mineral Composition Using Machine Learning (기계학습을 활용한 XRD 광물 조성 예측 모델 개발)

  • Park Sun Young;Lee Kyungbook;Choi Jiyoung;Park Ju Young
    • Korean Journal of Mineralogy and Petrology
    • /
    • v.37 no.2
    • /
    • pp.23-34
    • /
    • 2024
  • Knowing the mineral composition of core samples is essential for assessing the possibility of gas hydrate (GH) in sediments. During GH exploration, mineral composition values were obtained by X-ray diffraction (XRD) from each core sample collected in the Ulleung Basin. Based on these data, machine learning was performed with 3,100 input values representing XRD peak intensities and 12 output values representing mineral compositions. The 488 data points were divided into 307 training samples, 132 validation samples, and 49 test samples, and the random forest (RF) algorithm was used to obtain results. Compared with expert-determined mineral compositions, the machine learning results showed a mean absolute error (MAE) of 1.35%. To enhance the performance of the developed model, principal component analysis (PCA) was employed to extract the key features of the XRD peaks and reduce the dimensionality of the input data. Subsequent machine learning with the refined data decreased the MAE to 1.23%. The efficiency of the learning process also improved in terms of training time.
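A pipeline of this shape is easy to reproduce with scikit-learn. The sketch below uses random placeholder arrays in place of the real XRD intensities and mineral fractions, and the PCA component count is an assumption; only the sample counts and split sizes follow the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Placeholders: 488 samples, 3100 XRD peak intensities in, 12 mineral
# fractions out; split 307/132/49 as in the abstract.
rng = np.random.default_rng(0)
X, Y = rng.random((488, 3100)), rng.random((488, 12))
X_tr, X_val, X_te = X[:307], X[307:439], X[439:]
Y_tr, Y_val, Y_te = Y[:307], Y[307:439], Y[439:]

# Compress the 3100-dimensional peak vector to its leading components;
# the component count (50) is illustrative and would be tuned on X_val.
pca = PCA(n_components=50).fit(X_tr)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(pca.transform(X_tr), Y_tr)

mae = mean_absolute_error(Y_te, rf.predict(pca.transform(X_te)))
print(f"test MAE: {mae:.4f}")  # the paper reports 1.35% -> 1.23% with PCA
```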

Deep Learning based Brachial Plexus Ultrasound Images Segmentation by Leveraging an Object Detection Algorithm (객체 검출 알고리즘을 활용한 딥러닝 기반 상완 신경총 초음파 영상의 분할에 관한 연구)

  • Kukhyun Cho;Hyunseung Ryu;Myeongjin Lee;Suhyung Park
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.5
    • /
    • pp.557-566
    • /
    • 2024
  • Ultrasound-guided regional anesthesia is one of the most common techniques for peripheral nerve blockade, improving pain control and recovery time. However, accurate detection and identification of the brachial plexus (BP) nerve remains challenging, even for experienced anesthesiologists, because of acquisition artifacts such as speckle and Doppler artifacts. To mitigate this issue, we introduce a BP nerve small-target segmentation network that incorporates BP object detection and U-Net-based semantic segmentation into a single multi-scale deep learning framework. BP detection and identification proceed in two stages: 1) a RetinaNet model roughly locates the BP nerve region using multi-scale feature representations, and 2) a U-Net is then fed the BP nerve features at each scale. The experimental results demonstrate that, compared to competing segmentation-only models, the proposed model produces high-quality BP segmentation, increasing the accuracy of BP nerve identification with the assistance of the roughly located BP nerve area.
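The two-stage idea, detect roughly and then segment finely, can be sketched as follows. This is not the authors' network: `detector` stands in for any box predictor in the style of torchvision's RetinaNet, `unet` for any segmentation model returning per-pixel logits for a crop, and the padding and mask-merging choices are assumptions; the multi-scale feature sharing described in the abstract is omitted.

```python
import torch

def detect_then_segment(image, detector, unet, pad=16):
    """Two-stage inference: rough BP localization, then fine segmentation.

    `image` is a (C, H, W) tensor; `detector` returns boxes as
    (x1, y1, x2, y2) pixel coordinates; `unet` maps a crop to mask logits.
    """
    detector.eval()
    unet.eval()
    full_mask = torch.zeros(image.shape[-2:])
    with torch.no_grad():
        boxes = detector([image])[0]["boxes"]           # candidate BP regions
        for x1, y1, x2, y2 in boxes.round().int().tolist():
            # Pad the box so the segmenter sees context around the nerve.
            y1, y2 = max(y1 - pad, 0), min(y2 + pad, image.shape[-2])
            x1, x2 = max(x1 - pad, 0), min(x2 + pad, image.shape[-1])
            crop = image[:, y1:y2, x1:x2].unsqueeze(0)  # (1, C, h, w)
            probs = unet(crop)[0, 0].sigmoid()          # (h, w) probabilities
            full_mask[y1:y2, x1:x2] = torch.maximum(
                full_mask[y1:y2, x1:x2], probs)
    return full_mask
```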

The Relationship Analysis between the Epicenter and Lineaments in the Odaesan Area using Satellite Images and Shaded Relief Maps (위성영상과 음영기복도를 이용한 오대산 지역 진앙의 위치와 선구조선의 관계 분석)

  • CHA, Sung-Eun;CHI, Kwang-Hoon;JO, Hyun-Woo;KIM, Eun-Ji;LEE, Woo-Kyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.19 no.3
    • /
    • pp.61-74
    • /
    • 2016
  • The purpose of this paper is to analyze the relationship between the epicenter of a medium-sized earthquake (magnitude 4.8) that occurred on January 20, 2007 in the Odaesan area and lineament features, using a shaded relief map (1/25,000 scale) and satellite images from LANDSAT-8 and KOMPSAT-2. Previous studies have analyzed lineament features in tectonic settings primarily by examining two-dimensional satellite images and shaded relief maps. These methods, however, limit the visual interpretation of relief features, long considered the major component of lineament extraction. To overcome these limitations, this study examined three-dimensional images, produced from a digital elevation model and a drainage network map, for lineament extraction; this approach reduces the mapping errors introduced by visual interpretation. In addition, spline interpolation was conducted to produce the density maps of lineament frequency, intersection, and length needed to estimate the lineament density at the earthquake epicenter. An algorithm was developed to compute the Value of Relative Density (VRD), the relative lineament density derived from each map: the lineament density of each map grid cell divided by the maximum density value on the map. The VRD is thus a quantified indicator of how concentrated the lineament density is across the area affected by the earthquake. Using this algorithm, the VRD calculated at the earthquake epicenter from the frequency, intersection, and length density maps ranged from approximately 0.60 to 0.90. However, because the mapped images differed in conditions such as solar altitude and azimuth, the mean VRD was used rather than per-image values. The results show that the mean frequency-based VRD was approximately 0.85, about 21% higher than the intersection- and length-based VRDs, demonstrating the close relationship between lineaments and the epicenter. We therefore conclude that the density map analysis described in this study, based on lineament extraction, is valid and can serve as a primary data analysis tool for future earthquake research.
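The VRD itself is a one-line computation once a density map exists. Below is a small sketch with a hypothetical frequency-density grid; the grid values and the epicenter cell are made up for illustration.

```python
import numpy as np

def value_of_relative_density(density_grid):
    """VRD: each cell's lineament density divided by the map maximum,
    so values fall in [0, 1] and 1 marks the densest cell."""
    return density_grid / density_grid.max()

# Hypothetical frequency-density grid and an assumed epicenter cell.
freq_density = np.array([[2.1, 3.4, 1.0],
                         [4.0, 5.3, 2.2],
                         [1.5, 2.8, 0.9]])
vrd = value_of_relative_density(freq_density)
epicenter_rc = (1, 1)          # assumed grid cell of the epicenter
print(vrd[epicenter_rc])       # 1.0 here; the paper reports 0.60-0.90
```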

Development an Artificial Neural Network to Predict Infectious Bronchitis Virus Infection in Laying Hen Flocks (산란계의 전염성 기관지염을 예측하기 위한 인공신경망 모형의 개발)

  • Pak Son-Il;Kwon Hyuk-Moo
    • Journal of Veterinary Clinics
    • /
    • v.23 no.2
    • /
    • pp.105-110
    • /
    • 2006
  • A three-layer, feed-forward artificial neural network (ANN) with sixteen input neurons, three hidden neurons, and one output neuron was developed to identify the presence of infectious bronchitis (IB) infection as early as possible in laying hen flocks. Retrospective data from flocks enrolled in an IB surveillance program between May 2003 and November 2005 were used to build the ANN. The data set of 86 flocks was divided randomly into two sets: 77 cases for the training set and 9 cases for the testing set. The inputs were 16 epidemiological findings, including characteristics of the layer house, management practice, and flock size, and the output was the presence or absence of IB. The ANN was trained on the training set with a back-propagation algorithm, and the testing set was used to determine the network's ability to predict outcomes it had never seen. The diagnostic performance of the trained network was evaluated by constructing a receiver operating characteristic (ROC) curve with the area under the curve (AUC), which was also used to determine the best positivity criterion for the model. Several ANNs with different structures were created. The best-fitted trained network, IBV_D1, predicted IB correctly in 73 of 77 cases in the training set (diagnostic accuracy 94.8%). The sensitivity and specificity of the trained network were 95.5% (42/44, 95% CI 84.5-99.4) and 93.9% (31/33, 95% CI 79.8-99.3), respectively. For the testing set, the AUC of the ROC curve for the IBV_D1 network in recognizing IB infection status was 0.948 (SE = 0.086, 95% CI 0.592-0.961). At a criterion of 0.7149, the diagnostic accuracy was highest at 88.9%, with the highest sensitivity of 100%. With these sensitivity and specificity values and an assumed IB prevalence of 44%, the IBV_D1 network showed a PPV of 80% and an NPV of 100%. Based on these findings, the authors conclude that neural networks can be successfully applied to developing a screening model for identifying IB infection in laying hen flocks.
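A network of this size is straightforward to reproduce. The sketch below uses scikit-learn's MLPClassifier on random placeholder data, keeping only the 16-3-1 architecture, back-propagation training, the 77/9 split, and the ROC evaluation from the abstract; everything else is illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder data: 86 flocks, 16 epidemiological inputs, binary IB status.
rng = np.random.default_rng(0)
X = rng.random((86, 16))
y = rng.integers(0, 2, 86)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=9, stratify=y, random_state=0)   # 77 train / 9 test

# A 16-3-1 feed-forward network trained by back-propagation (SGD here).
net = MLPClassifier(hidden_layer_sizes=(3,), solver="sgd",
                    max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)

scores = net.predict_proba(X_te)[:, 1]
print("test AUC:", roc_auc_score(y_te, scores))
# Sweep positivity criteria (the paper settles on 0.7149) via the ROC curve.
fpr, tpr, thresholds = roc_curve(y_te, scores)
```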

Quantitative Analysis of Digital Radiography Pixel Values to absorbed Energy of Detector based on the X-Ray Energy Spectrum Model (X선 스펙트럼 모델을 이용한 DR 화소값과 디텍터 흡수에너지의 관계에 대한 정량적 분석)

  • Kim Do-Il;Kim Sung-Hyun;Ho Dong-Su;Choe Bo-young;Suh Tae-Suk;Lee Jae-Mun;Lee Hyoung-Koo
    • Progress in Medical Physics
    • /
    • v.15 no.4
    • /
    • pp.202-209
    • /
    • 2004
  • Flat-panel digital radiography (DR) systems have recently become useful and important in diagnostic radiology. In DRs with amorphous silicon photosensors, CsI(Tl) is normally used as the scintillator, producing visible light in proportion to the absorbed radiation energy. The visible light photons are converted into electric signals in the amorphous silicon photodiodes, which constitute a two-dimensional array. To produce good-quality images, the detailed response of DR detectors to radiation must be studied. The relationship between air exposure and DR output has been investigated in many studies, but only under fixed tube voltage. In this study, we investigated the relationship between DR output and X-rays in terms of the energy absorbed in the detector, rather than air exposure, using SPEC-l8, an X-ray energy spectrum model. Measured exposure was compared with calculated exposure to obtain the inherent filtration, an important input variable of SPEC-l8. The energy absorbed in the detector was calculated using an algorithm for computing the absorbed energy in the material, and the pixel values of real images were obtained under various conditions. The characteristic curve relating the two quantities was obtained, and the results were verified using phantoms made of water and aluminum: the pixel values of the phantom images were estimated and compared with the characteristic curve under various conditions. The relationship between DR output and the energy absorbed in the detector was found to be almost linear. In the phantom experiments, the estimated pixel values agreed with the characteristic curve, although scattered photons introduced some errors; the effect of scattered X-rays must be studied further because it was not included in the calculation algorithm. The results of this study can provide useful information for the pre-processing of digital radiography.
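Because the reported characteristic curve is almost linear, calibrating it reduces to a first-order fit. The sketch below uses made-up calibration pairs; the numbers are illustrative, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration pairs: detector absorbed energy (computed from
# the spectrum model) vs. mean DR pixel value at each exposure setting.
absorbed_energy = np.array([0.5, 1.0, 1.5, 2.0, 2.5])    # arbitrary units
pixel_value = np.array([410., 805., 1220., 1610., 2030.])

# An almost-linear characteristic curve means a first-order polynomial
# suffices to map absorbed energy to pixel value (and back).
gain, offset = np.polyfit(absorbed_energy, pixel_value, 1)
print(f"pixel ~= {gain:.1f} * energy + {offset:.1f}")

# Predict the pixel value behind a phantom from its computed absorbed energy.
print(gain * 1.75 + offset)
```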

Three-Dimensional High-Frequency Electromagnetic Modeling Using Vector Finite Elements (벡터 유한 요소를 이용한 고주파 3차원 전자탐사 모델링)

  • Son Jeong-Sul;Song Yoonho;Chung Seung-Hwan;Suh Jung Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.5 no.4
    • /
    • pp.280-290
    • /
    • 2002
  • A three-dimensional (3-D) electromagnetic (EM) modeling algorithm has been developed using the finite element method (FEM) to enable more efficient interpretation of EM data. When FEM based on nodal elements is applied to EM problems, spurious solutions, the so-called 'vector parasite', occur due to the discontinuity of the normal electric field and can lead to completely erroneous results. Among the methods for curing this spurious problem, this study adopts vector elements, whose basis functions have both amplitude and direction. To reduce the computational cost and required core memory, the complex bi-conjugate gradient (CBCG) method is applied to solve the complex symmetric FEM matrix, and the point Jacobi method is used to accelerate the convergence rate. To verify the developed 3-D EM modeling algorithm, its electric and magnetic fields for a layered-earth model are compared with the layered-earth solution. As expected, the vector-based FEM developed in this study does not suffer from the vector parasite problem, while the conventional nodal-based FEM produces large errors due to the discontinuity of the field variables. To test the applicability to high frequencies, 100 MHz was used as the operating frequency for the layered structure. The fields calculated by the developed code also match the layered-earth ones well for models with a dielectric anomaly as well as a conductive anomaly. For a vertical electric dipole source, the discontinuity of the field variables causes the conventional nodal-based FEM to produce large vector-parasite errors; even in this case, the vector-based FEM gives almost the same results as the layered-earth solution. The magnetic fields induced by a dielectric anomaly at high frequencies show behaviors distinct from those induced by a conductive anomaly. Since our 3-D EM modeling code can reflect the effects of dielectric as well as conductive anomalies, it can serve as groundwork not only for applying the high-frequency EM method to field surveys but also for analyzing the field data it produces.
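SciPy has no CBCG routine, so the sketch below substitutes BiCGSTAB, a related Krylov solver that also handles complex matrices, together with a point Jacobi preconditioner, on a small random complex symmetric system standing in for the FEM matrix. All sizes and values are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab

# Small complex symmetric (not Hermitian) stand-in for the FEM matrix.
n = 200
rng = np.random.default_rng(1)
B = sp.random(n, n, density=0.01, random_state=1) * (1 + 1j)
A = B + B.T + sp.eye(n) * (10 + 1j)   # strong diagonal aids convergence
b = rng.random(n) + 1j * rng.random(n)

# Point Jacobi preconditioner: divide each entry by the matrix diagonal.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / d, dtype=complex)

x, info = bicgstab(A, b, M=M)
print("info:", info, "residual:", np.linalg.norm(A @ x - b))
```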

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. It must account for the density of the data, which has a significant influence on the performance of sentence classification: higher-dimensional data demand far more computation and can cause high computational cost and overfitting, so a dimension reduction step is necessary to improve the model's performance. Diverse methods have been proposed, from merely reducing noise in the data, such as misspellings and informal text, to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect the performance of classifiers for sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also used. To improve performance, recent studies have suggested modifying the word dictionary according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to find similar words. First, we eliminate words with comparatively low information gain values from the raw text and build word embeddings. Second, we additionally remove words similar to those with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed to the deep learning models: a convolutional neural network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, classifying each with the deep learning models. Reviews that received more than five helpful votes with a helpful-vote ratio above 70% were classified as helpful reviews; since Yelp only shows the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings built on all the words, and showed that one of the proposed methods outperforms the embeddings using all words: removing unimportant words improves performance, but removing too many words lowers it.
For future research, diverse preprocessing approaches and an in-depth analysis of word co-occurrence for measuring similarity between words should be considered. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations of word embedding and elimination methods could be explored.
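A minimal version of the elimination step can be written with scikit-learn and gensim. The sketch below uses mutual information as the information gain measure and gensim's most_similar (cosine similarity in the embedding space) to expand the removal set; the quantile cutoff, neighbor count, and whitespace tokenization are illustrative assumptions, not the paper's settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

def words_to_remove(texts, labels, ig_quantile=0.1, topn=3):
    """Drop words with low information gain, plus their nearest neighbors
    in a Word2Vec space (similar words are assumed equally unimportant)."""
    vec = CountVectorizer()
    X = vec.fit_transform(texts)
    vocab = vec.get_feature_names_out()
    ig = mutual_info_classif(X, labels, discrete_features=True)

    k = max(1, int(len(vocab) * ig_quantile))
    low_ig = set(vocab[ig.argsort()[:k]])     # least informative words

    w2v = Word2Vec([t.split() for t in texts], vector_size=50, min_count=1)
    similar = {s for w in low_ig if w in w2v.wv
               for s, _ in w2v.wv.most_similar(w, topn=topn)}
    return low_ig | similar
```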

Evaluating Global Container Ports' Performance Considering the Port Calls' Attractiveness (기항 매력도를 고려한 세계 컨테이너 항만의 성과 평가)

  • Park, Byungin
    • Journal of Korea Port Economic Association
    • /
    • v.38 no.3
    • /
    • pp.105-131
    • /
    • 2022
  • Even after its improvement in 2019, UNCTAD's Liner Shipping Connectivity Index (LSCI), which evaluates the performance of the global container port market, has limited use. In particular, since the LSCI evaluates performance based only on the distance of relationships, a performance index that also incorporates the attractiveness of port calls would be more useful. This study applied a modified Huff model, the hub-authority algorithm and eigenvector centrality from social network analysis, and correlation analysis to 2007, 2017, and 2019 data from Ocean-Commerce, Japan. The findings are as follows. First, the attractiveness of a port for calls and the overall performance of the port did not always match; according to the attractiveness analysis, Busan remained within the top 10, while the attractiveness of other Korean ports improved only slowly from a low level during the study period. Second, global container ports are generally specialized over the long term as inbound or outbound ports by route and grow while maintaining that specialization, whereas Korean ports kept changing roles from period to period. Lastly, cargo volume by period and the extended port connectivity index (EPCI) presented in this study showed correlations of 0.77 to 0.85; even though the Atlantic data are excluded from the analysis and ships' operable capacity is used instead of port throughput, the correlation remains high. These results should help in evaluating and analyzing global ports. The study suggests that Korean ports need a long-term strategy to improve performance while maintaining specialization; to maintain and develop a port's desirable role, it is necessary to use cooperation and partnerships with complementary ports and to attract shipping companies' services calling at those ports. Although this study carried out a complex analysis using extensive data and methodologies over a long period, future work should cover ports worldwide, conduct a long-term panel analysis, and estimate the parameters of the attractiveness analysis more rigorously.
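The network measures named in the abstract are available in NetworkX. The sketch below runs the hub-authority (HITS) algorithm and weighted eigenvector centrality on a tiny hypothetical port-call network; the ports, edges, and weights are made up for illustration, and the modified Huff attractiveness model is omitted.

```python
import networkx as nx

# Hypothetical weighted port-call network: an edge (origin, destination)
# with a weight standing in for deployed capacity on that leg.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Busan", "Shanghai", 120), ("Shanghai", "Singapore", 150),
    ("Singapore", "Busan", 90), ("Busan", "Los Angeles", 60),
    ("Shanghai", "Los Angeles", 110), ("Los Angeles", "Busan", 70),
])

# Hub-authority (HITS) scores: hubs send services out, authorities draw
# them in; eigenvector centrality weights a port by its neighbors' scores.
hubs, authorities = nx.hits(G)
eig = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)

for port in G:
    print(f"{port:12s} hub={hubs[port]:.3f} "
          f"auth={authorities[port]:.3f} eig={eig[port]:.3f}")
```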