• Title/Summary/Keyword: 행렬 (matrix)


Performance Evaluation of Loss Functions and Composition Methods of Log-scale Train Data for Supervised Learning of Neural Network (신경 망의 지도 학습을 위한 로그 간격의 학습 자료 구성 방식과 손실 함수의 성능 평가)

  • Donggyu Song;Seheon Ko;Hyomin Lee
    • Korean Chemical Engineering Research
    • /
    • v.61 no.3
    • /
    • pp.388-393
    • /
    • 2023
  • The analysis of engineering data using neural networks based on supervised learning has been utilized in various engineering fields, such as optimization of chemical engineering processes, prediction of particulate matter pollution concentrations, prediction of thermodynamic phase equilibria, and prediction of physical properties in transport phenomena systems. Supervised learning requires training data, and its performance is affected by the composition and configuration of the given training data. Among frequently observed engineering data, many quantities, such as DNA length and analyte concentration, are given on a log scale. In this study, for widely distributed log-scaled training data of virtual 100×100 images, the available loss functions were quantitatively evaluated in terms of (i) the confusion matrix, (ii) the maximum relative error, and (iii) the mean relative error. As a result, mean-absolute-percentage-error and mean-squared-logarithmic-error were the optimal loss functions for the log-scaled training data. Furthermore, we found that uniformly selected training data lead to the best prediction performance. The optimal loss functions and the method for composing training data studied in this work can be applied to engineering problems such as evaluating DNA length, analyzing biomolecules, and predicting the concentration of colloidal suspensions.
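Why relative-error losses suit log-scale targets can be sketched in a few lines. The functions below are standard definitions of MSE, MAPE, and MSLE, and the two-point example is illustrative only, not the paper's 100×100 image data:

```python
import math

def mse(y_true, y_pred):
    # Mean squared error: absolute deviations, dominated by large targets.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    # Mean absolute percentage error: a relative measure, so a miss on a
    # small log-scale value weighs as much as the same miss on a large one.
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def msle(y_true, y_pred):
    # Mean squared logarithmic error: compares log(1 + y), i.e. errors are
    # measured in orders of magnitude rather than absolute units.
    return sum((math.log1p(t) - math.log1p(p)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)

# Targets spanning three orders of magnitude; each prediction doubles one
# target, once at the small end and once at the large end of the scale.
y = [1e1, 1e4]
pred_small_off = [2e1, 1e4]
pred_large_off = [1e1, 2e4]

# MSE is dominated by the large target; MAPE treats both misses alike.
assert mse(y, pred_small_off) < mse(y, pred_large_off)
assert abs(mape(y, pred_small_off) - mape(y, pred_large_off)) < 1e-12
```

The same asymmetry explains why MSE-trained networks effectively ignore the small-magnitude end of log-scaled data, while MAPE and MSLE keep it in view.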

A Study on Teaching the Method of Lagrange Multipliers in the Era of Digital Transformation (라그랑주 승수법의 교수·학습에 대한 소고: 라그랑주 승수법을 활용한 주성분 분석 사례)

  • Lee, Sang-Gu;Nam, Yun;Lee, Jae Hwa
    • Communications of Mathematical Education
    • /
    • v.37 no.1
    • /
    • pp.65-84
    • /
    • 2023
  • The method of Lagrange multipliers, one of the most fundamental algorithms for solving equality-constrained optimization problems, has been widely used in basic mathematics for artificial intelligence (AI), linear algebra, optimization theory, and control theory. This method is an important tool connecting calculus and linear algebra, and it is actively used in artificial intelligence algorithms, including principal component analysis (PCA). It is therefore desirable that instructors motivate students who first encounter this method in college calculus. In this paper, we provide an integrated perspective for instructors to teach the method of Lagrange multipliers effectively. First, we provide visualization materials and Python-based code that help explain the principle of the method. Second, we give a full explanation of the relation between the Lagrange multiplier and the eigenvalues of a matrix. Third, we give the proof of the first-order optimality condition, which is fundamental to the method of Lagrange multipliers, and briefly introduce its generalized version in optimization. Finally, we give an example of PCA on real data. These materials can be utilized in class for teaching the method of Lagrange multipliers.
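The multiplier-eigenvalue relation underlying PCA can be checked numerically: maximizing the projected variance w'Cw subject to ||w|| = 1 gives, via a Lagrange multiplier, the stationarity condition Cw = λw, so the first principal component is the top eigenvector of the covariance matrix and λ is the captured variance. The synthetic data below are illustrative, not the paper's example:

```python
import numpy as np

# Anisotropic synthetic data: variance ~9 along x, ~0.25 along y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

C = np.cov(X, rowvar=False)           # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
w = eigvecs[:, -1]                    # first principal component (unit norm)
lam = eigvals[-1]                     # the Lagrange multiplier

# λ equals the sample variance of the data projected onto w.
proj_var = np.var(X @ w, ddof=1)
assert np.isclose(lam, proj_var)
```

This makes a compact classroom demonstration that the "multiplier" is not an abstract bookkeeping device but the variance the component explains.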

Fiber Finite Element Mixed Method for Nonlinear Analysis of Steel-Concrete Composite Structures (강-콘크리트 합성구조물의 비선형해석을 위한 화이버 유한요소 혼합법)

  • Park, Jung-Woong;Kim, Seung-Eock
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.6A
    • /
    • pp.789-798
    • /
    • 2008
  • The stiffness method provides a framework for calculating structural deformations directly by solving for the equilibrium state. However, the use of displacement shape functions leads to approximate estimates of the stiffness matrix and resisting forces, and accordingly to low accuracy. The conventional flexibility method uses the relation between sectional forces and nodal forces, in which equilibrium is always satisfied over all sections along the element; however, determining the element resisting forces is not straightforward. In this study, a new fiber finite element mixed method has been developed for the nonlinear analysis of steel-concrete composite structures in the context of a standard finite element analysis program. The proposed method applies the Newton method based on load control and uses the incremental secant stiffness method, which is computationally efficient and stable. The method is employed to analyze steel-concrete composite structures, and the analysis results are compared with those obtained by ABAQUS. The comparison shows that the proposed method consistently predicts the nonlinear behavior of the composite structures well and achieves good efficiency.
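The flavor of a load-controlled secant-stiffness iteration can be shown on a single degree of freedom. The softening spring law and the numbers below are hypothetical, chosen only to illustrate replacing the tangent stiffness with the secant stiffness r(u)/u, not the paper's fiber-element formulation:

```python
def resisting_force(u):
    # Hypothetical softening spring: stiffness decreases as u grows.
    return 100.0 * u / (1.0 + u)

def solve_load_step(P, u0=1e-6, tol=1e-10, max_iter=100):
    """Load control: find u such that resisting_force(u) == P by
    iterating with the secant stiffness k_sec = r(u) / u, which is
    cheaper and more stable than forming the tangent stiffness."""
    u = u0
    for _ in range(max_iter):
        k_sec = resisting_force(u) / u   # secant, not tangent, stiffness
        u_new = P / k_sec                # solve the linearized step
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Equilibrium check: the converged displacement balances the applied load.
u = solve_load_step(50.0)
assert abs(resisting_force(u) - 50.0) < 1e-6
```

In a real fiber element the scalar k_sec becomes the assembled secant stiffness matrix, but the iteration structure is the same.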

Heterogeneous Sensor Coordinate System Calibration Technique for AR Whole Body Interaction (AR 전신 상호작용을 위한 이종 센서 간 좌표계 보정 기법)

  • Hangkee Kim;Daehwan Kim;Dongchun Lee;Kisuk Lee;Nakhoon Baek
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.7
    • /
    • pp.315-324
    • /
    • 2023
  • A simple and accurate whole-body rehabilitation interaction technology using immersive digital content is needed for elderly patients, whose age-related diseases are steadily increasing. In this study, we introduce whole-body interaction technology using HoloLens and Kinect for this purpose. To achieve this, we propose three coordinate transformation methods: mesh feature point-based transformation, AR marker-based transformation, and body recognition-based transformation. The mesh feature point-based transformation aligns the coordinate systems by designating three feature points on the spatial mesh and computing a transform matrix. This method requires manual work and has lower usability, but a relatively high accuracy of 8.5 mm. The AR marker-based method uses AR and QR markers recognized simultaneously by HoloLens and Kinect to achieve an acceptable accuracy of 11.2 mm. The body recognition-based transformation aligns the coordinate systems using the positions of the head or HMD recognized by both devices and the positions of both hands or controllers. This method has lower accuracy, but requires no additional tools or manual work, making it more user-friendly. Additionally, we reduced the error by more than 10% using RANSAC as a post-processing technique. These three methods can be applied selectively depending on the usability and accuracy required by the content. In this study, we validated the technology by applying it to the "Thunder Punch" and rehabilitation therapy content.
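Estimating a rigid transform from corresponding feature points, as in the mesh feature point-based method, is classically done with the Kabsch/Umeyama SVD solution. The sketch below is a generic implementation under that assumption (the paper does not state which solver it uses), with synthetic points standing in for the three designated mesh features:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t,
    from corresponding 3D points (Kabsch/Umeyama). src could be Kinect
    coordinates and dst HoloLens coordinates of the same features."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: three non-collinear points rotated 90° about z and shifted.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.2, 0.3])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 1]], float)
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

With noisy real measurements, wrapping this solver in a RANSAC loop, as the paper's post-processing does, rejects outlier correspondences before the final fit.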

A Meshless Method Using the Local Partition of Unity for Modeling of Cohesive Cracks (점성균열 모델을 위한 국부단위분할이 적용된 무요소법)

  • Zi, Goangseup;Jung, Jin-kyu;Kim, Byeong Min
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.5A
    • /
    • pp.861-872
    • /
    • 2006
  • The element-free Galerkin method is extended by the local partition of unity method to model cohesive cracks in a two-dimensional continuum. The shape function of a particle whose domain of influence is completely cut by a crack is enriched by the step enrichment function. If the domain of influence contains a crack tip, it is enriched by a branch enrichment function which does not have the LEFM stress singularity. The discrete equations are obtained directly from the standard Galerkin method, since the enrichment applies only to the displacement field, which satisfies the local partition of unity. Because only particles whose domains of influence are affected by a crack are enriched, the system matrix remains sparse, so the increase in computational cost is minimized. The condition for crack growth in dynamic problems is obtained from material instability: when the acoustic tensor loses positive definiteness, a cohesive crack is inserted at that point, changing the continuum to a discontinuum. The crack speed follows naturally from this criterion. This method is found to be more accurate and to converge faster than classical meshless methods based on the visibility concept. In this paper, several well-known static and dynamic problems are solved to verify the method.

Establish for Link Travel Time Distribution Estimation Model Using Fuzzy (퍼지추론을 이용한 링크통행시간 분포비율 추정모형 구축)

  • Lee, Young Woo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.2D
    • /
    • pp.233-239
    • /
    • 2006
  • Most research on link travel time to date has calculated or estimated the mean link travel time using the average over individual vehicles. However, the link travel time distribution is split by various impact factors such as traffic conditions, signal operation, and road conditions. Previous studies on the characteristics of the link travel time distribution showed that through traffic divides into up to two patterns of link travel time. Therefore, when assessing the traffic situation, it is more accurate to divide link travel times into those involving delay and those without delay than to use the average link travel time. This study analyzed the distribution characteristics of link travel times and their causes, examined the variables affecting the distribution using a simulation program, and constructed models for estimating the link travel time distribution ratio. To assess the distribution of link travel times, this research developed a regression model and a fuzzy model. The variables with a high level of correlation in both estimation models are the remaining green time and the number of delayed vehicles, and these variables were used to construct the estimation models. A comparison of the two estimation models showed that the fuzzy model outperformed the regression model in terms of reliability and applicability.
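A fuzzy estimate of the delayed share driven by the two correlated variables can be sketched as a minimal zero-order Sugeno-style inference. The two rules, the triangular membership parameters, and the rule outputs below are all hypothetical, chosen only to illustrate the mechanism, not the paper's calibrated model:

```python
def tri_membership(x, a, b, c):
    # Triangular fuzzy membership function rising from a, peaking at b,
    # falling to zero at c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def delayed_ratio(rest_green, delayed_vehicles):
    """Share of vehicles in the delayed branch of the distribution.
    Rule 1: IF remaining green time is LONG THEN delayed share is LOW (0.1).
    Rule 2: IF delayed queue is LARGE THEN delayed share is HIGH (0.9).
    Outputs are combined by the firing-strength-weighted average."""
    w1 = tri_membership(rest_green, 0.0, 40.0, 80.0)       # seconds
    w2 = tri_membership(delayed_vehicles, 0.0, 15.0, 30.0)  # vehicles
    if w1 + w2 == 0.0:
        return 0.5  # no rule fires: fall back to an uninformative split
    return (w1 * 0.1 + w2 * 0.9) / (w1 + w2)

# Plenty of green and no queue -> low delayed share; the reverse -> high.
assert abs(delayed_ratio(40.0, 0.0) - 0.1) < 1e-12
assert abs(delayed_ratio(0.0, 15.0) - 0.9) < 1e-12
```

A regression model would fit one linear surface over both inputs; the fuzzy rules instead blend local behaviors, which is where the reported gain in applicability comes from.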

Establishment of a deep learning-based defect classification system for optimizing textile manufacturing equipment

  • YuLim Kim;Jaeil Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.27-35
    • /
    • 2023
  • In this paper, we propose a process for increasing productivity by applying a deep learning-based defect detection and classification system to the prepreg fiber manufacturing process, which is in high demand in the production of composite materials. To apply it to tow prepreg manufacturing equipment, which requires a solution owing to the large number of defects occurring under various conditions, the optimal environment was first established by selecting the cameras and lighting necessary for building the defect detection and classification model. In addition, the data necessary for producing the multi-class classification models were collected and labeled as normal or defective. The multi-class classification model is based on a CNN and applies pre-trained models such as VGGNet, MobileNet, and ResNet to compare performance and identify directions for improvement through accuracy and loss graphs. Data augmentation and dropout techniques were applied to identify and mitigate overfitting, the main problem observed. To evaluate the model, a performance evaluation was conducted using the confusion matrix as the performance indicator, and an accuracy of more than 99% was confirmed. In addition, the model was applied to the actual process to check the classification results on images acquired in real time and to verify that the discrimination values are derived accurately.
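The confusion-matrix evaluation used here is simple to state concretely. The class labels and the sample predictions below are hypothetical, chosen only to show how the matrix and the accuracy indicator are derived:

```python
def confusion_matrix(y_true, y_pred, n_classes):
    # Rows index the true class, columns the predicted class.
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def accuracy(m):
    # Fraction of samples on the diagonal, i.e. correctly classified.
    return sum(m[i][i] for i in range(len(m))) / sum(map(sum, m))

# Hypothetical labels: 0 = normal, 1 = defect type A, 2 = defect type B.
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 2]
cm = confusion_matrix(y_true, y_pred, 3)

assert cm[0] == [2, 1, 0]            # one normal sample misread as defect A
assert abs(accuracy(cm) - 5 / 6) < 1e-12
```

The off-diagonal cells are the useful part in practice: they show which defect types the model confuses, which overall accuracy alone (even at 99%) hides.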

Detection of Cold Water Mass along the East Coast of Korea Using Satellite Sea Surface Temperature Products (인공위성 해수면온도 자료를 이용한 동해 연안 냉수대 탐지 알고리즘 개발)

  • Won-Jun Choi;Chan-Su Yang
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1235-1243
    • /
    • 2023
  • This study proposes a detection algorithm for cold water masses (CWM) along the eastern coast of the Korean Peninsula using sea surface temperature (SST) data provided by the Korea Institute of Ocean Science and Technology (KIOST). Considering the occurrence and distribution of the CWM, the eastern coast of the Korean Peninsula is classified into three regions ("Goseong-Uljin", "Samcheok-Guryongpo", "Pohang-Gijang"), and K-means clustering is first applied to the SST field of each region. The three resulting K-means clusters are then used to determine a CWM by applying a double-threshold filter, with thresholds predetermined from the standard deviation and the differences between the average SSTs of the three groups. A sea area is judged to be a CWM if its SST standard deviation is 0.6℃ or higher and its average water temperature difference is 2℃ or higher. In the CWM detection for 2022, "Pohang-Gijang" had the most CWM occurrences, at 77 days, and performance indicators from the confusion matrix were calculated for quantitative evaluation. The accuracy in all three regions was 0.83 or higher, and the F1 score reached a maximum of 0.95 in "Pohang-Gijang". The detection algorithm proposed in this study has been applied to the KIOST SST system, which provides a CWM map by email.
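The double-threshold decision can be sketched directly from the stated criteria. Two assumptions are made for illustration: the "average water temperature difference" is taken as the spread between the warmest and coldest K-means cluster means, and the sample data are invented:

```python
import statistics

def is_cold_water_mass(region_sst, cluster_means,
                       std_threshold=0.6, diff_threshold=2.0):
    """Double-threshold filter: flag the region as a CWM only when BOTH
    the SST standard deviation and the spread of the three K-means
    cluster means (assumed here to be max - min) exceed their thresholds,
    in degrees Celsius."""
    sst_std = statistics.pstdev(region_sst)
    mean_diff = max(cluster_means) - min(cluster_means)
    return sst_std >= std_threshold and mean_diff >= diff_threshold

# A cold tongue next to warm offshore water passes both thresholds.
assert is_cold_water_mass([14.0, 14.2, 15.1, 21.0, 21.5, 22.0],
                          [14.4, 18.0, 21.5])
# Nearly uniform warm water fails the standard-deviation threshold.
assert not is_cold_water_mass([21.0, 21.1, 21.2, 20.9],
                              [20.9, 21.0, 21.2])
```

Requiring both criteria suppresses false alarms from either a noisy but uniform field or a smooth but weakly stratified one.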

An Accelerated Approach to Dose Distribution Calculation in Inverse Treatment Planning for Brachytherapy (근접 치료에서 역방향 치료 계획의 선량분포 계산 가속화 방법)

  • Byungdu Jo
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.5
    • /
    • pp.633-640
    • /
    • 2023
  • With the recent development of statically and dynamically modulated brachytherapy methods, which use radiation shielding to shape the delivered dose distribution, the number of parameters and the amount of data required for dose calculation in inverse treatment planning and in treatment plan optimization algorithms suited to the new directional intensity-modulated brachytherapy are increasing. Although intensity-modulated brachytherapy enables accurate dose delivery, the increased amount of parameters and data lengthens the time required for dose calculation. In this study, a GPU-based, CUDA-accelerated dose calculation algorithm was constructed to counter this increase in elapsed time. The calculation was accelerated by parallelizing both the construction of the system matrix of the volume of interest and the dose calculation itself. All of the developed algorithms were run in the same computing environment, with an Intel (3.7 GHz, 6-core) CPU and a single NVIDIA GTX 1080 Ti graphics card, and only the dose calculation time was measured, excluding the additional time required for loading data from disk and for preprocessing. The results showed that the accelerated algorithm reduced the dose calculation time by a factor of about 30 compared with the CPU-only calculation. The accelerated algorithm can be expected to speed up treatment planning when new plans must be created to account for daily variations in applicator position, as in adaptive radiotherapy, or when the dose calculation must account for changing parameters, as in dynamically modulated brachytherapy.
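The structure that makes this calculation parallelizable is a matrix-vector product: dose at each voxel is a weighted sum over dwell positions, d = A t, where A[i, j] is the dose-rate contribution of dwell position j to voxel i. The sketch below uses a toy inverse-square kernel (the real TG-43-style kernel with radial dose and anisotropy functions is omitted) and NumPy vectorization standing in for the CUDA kernels:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_dwells = 1000, 16
voxels = rng.uniform(0.0, 10.0, size=(n_voxels, 3))   # voxel centres (cm)
dwells = rng.uniform(4.0, 6.0, size=(n_dwells, 3))    # dwell positions (cm)

# System matrix: one independent entry per (voxel, dwell) pair, which is
# exactly what maps onto one GPU thread per entry.
r2 = ((voxels[:, None, :] - dwells[None, :, :]) ** 2).sum(axis=2)
A = 1.0 / np.maximum(r2, 1e-6)        # toy inverse-square dose-rate kernel
t = np.full(n_dwells, 2.0)            # dwell times (s)
dose = A @ t                          # dose per voxel, all rows in parallel

# The vectorized product matches an explicit per-voxel loop.
dose_loop = np.array([A[i] @ t for i in range(n_voxels)])
assert np.allclose(dose, dose_loop)
```

Because every entry of A and every row of the product is independent, both stages parallelize trivially, which is the source of the reported ~30× speedup on the GPU.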

Deep Learning Approach for Automatic Discontinuity Mapping on 3D Model of Tunnel Face (터널 막장 3차원 지형모델 상에서의 불연속면 자동 매핑을 위한 딥러닝 기법 적용 방안)

  • Chuyen Pham;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.6
    • /
    • pp.508-518
    • /
    • 2023
  • This paper presents a new approach for the automatic mapping of discontinuities in a tunnel face based on its 3D digital model reconstructed by LiDAR scanning or photogrammetry. The main idea is to identify discontinuity areas in the 3D digital model of a tunnel face by segmenting its 2D projected images with a deep-learning semantic segmentation model called U-Net. The proposed model integrates various features, including the projected RGB image, the depth-map image, and images based on local surface properties, i.e., normal-vector and curvature images, to segment areas of discontinuity effectively. The segmentation results are then projected back onto the 3D model using the depth maps and projection matrices to obtain an accurate representation of the location and extent of discontinuities within the 3D space. The performance of the segmentation model is evaluated by comparing the segmented results with their corresponding ground truths, which demonstrates high accuracy, with an intersection-over-union metric of approximately 0.8. Despite the still-limited training data, this method shows promising potential to address the limitations of conventional approaches, which rely only on normal vectors and unsupervised machine learning algorithms to group points in the 3D model into distinct sets of discontinuities.
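The intersection-over-union metric used for the evaluation is compact enough to define inline; the tiny masks below are illustrative, not the paper's tunnel-face data:

```python
import numpy as np

def iou(pred, truth):
    # Intersection-over-union between boolean segmentation masks:
    # |pred AND truth| / |pred OR truth|, with the empty-union case
    # counted as a perfect match.
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy discontinuity masks: 2 pixels agree, 2 disagree.
pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
assert iou(pred, truth) == 0.5   # |intersection| = 2, |union| = 4
```

An IoU of roughly 0.8, as reported, therefore means the predicted and true discontinuity areas overlap on about four pixels for every five in their union, a strict criterion since both false positives and false negatives enlarge the denominator.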