• Title/Summary/Keyword: Complex algorithm

Deep learning-based Human Action Recognition Technique Considering the Spatio-Temporal Relationship of Joints (관절의 시·공간적 관계를 고려한 딥러닝 기반의 행동인식 기법)

  • Choi, Inkyu; Song, Hyok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.413-415 / 2022
  • Since human joints, as components of the human body, provide useful information for analyzing human behavior, many studies have been conducted on human action recognition using joint information. However, recognizing actions that change from moment to moment using only independent joint information is a very complex problem, so a method for extracting additional information for learning and an algorithm that considers the current state in light of past states are needed. In this paper, we propose a human action recognition technique that considers the positional relationship between connected joints and the change in each joint's position over time. Position information for each joint is obtained from a pre-trained joint extraction model, and bone information is extracted from the difference vectors between connected joints. A simplified neural network is then constructed for the two types of input, and spatio-temporal features are extracted by adding an LSTM. In experiments on a dataset of nine actions, measuring recognition accuracy with the spatio-temporal relationship features of each joint showed superior performance compared to using single-joint information alone.
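
The pipeline the abstract describes — per-joint positions, bone vectors from differences between connected joints, and an LSTM over the two input streams — can be sketched as follows. This is a minimal PyTorch sketch; the skeleton in `PARENTS`, the layer sizes, and the `TwoStreamLSTM` name are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical skeleton: parent index per joint (-1 marks the root).
PARENTS = [-1, 0, 1, 2, 1, 4, 1, 6]  # 8-joint toy skeleton

def bone_vectors(joints):
    """joints: (T, J, 2) positions; returns (T, J, 2) difference
    vectors between each joint and its parent (root stays zero)."""
    bones = torch.zeros_like(joints)
    for j, p in enumerate(PARENTS):
        if p >= 0:
            bones[:, j] = joints[:, j] - joints[:, p]
    return bones

class TwoStreamLSTM(nn.Module):
    """Simplified per-stream encoder plus an LSTM over time, in the
    spirit of the abstract; all sizes are placeholder guesses."""
    def __init__(self, num_joints=8, dim=2, hidden=64, num_classes=9):
        super().__init__()
        in_dim = num_joints * dim
        self.joint_fc = nn.Linear(in_dim, hidden)
        self.bone_fc = nn.Linear(in_dim, hidden)
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, joints):             # joints: (B, T, J, 2)
        B, T, J, D = joints.shape
        bones = torch.stack([bone_vectors(j) for j in joints])
        x = torch.cat([self.joint_fc(joints.reshape(B, T, -1)),
                       self.bone_fc(bones.reshape(B, T, -1))], dim=-1)
        out, _ = self.lstm(torch.relu(x))
        return self.head(out[:, -1])        # classify from last step

logits = TwoStreamLSTM()(torch.randn(4, 30, 8, 2))  # -> (4, 9)
```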

Admittance Model-Based Nanodynamic Control of Diamond Turning Machine (어드미턴스 모델을 이용한 다이아몬드 터닝머시인의 초정밀진동제어)

  • Jeong, Sanghwa; Kim, Sangsuk
    • Journal of the Korean Society for Precision Engineering / v.13 no.10 / pp.154-160 / 1996
  • The control of diamond turning is usually achieved through laser-interferometer feedback of slide position. The limitation of this control scheme is that the feedback signal does not account for additional dynamics of the tool post and the material removal process. If the tool post is rigid and the material removal process is relatively static, such a non-collocated position feedback scheme may suffice. However, as accuracy requirements get tighter and desired surface contours become more complex, the need for direct tool-tip sensing becomes inevitable. The physical constraints of the machining process prohibit any reasonable implementation of a tool-tip motion measurement. It is proposed that the measured force normal to the face of the workpiece can be filtered through an appropriate admittance transfer function to yield the estimated depth of cut. This estimate can be compared to the desired depth of cut to generate an adjustment control action in addition to position feedback control. In this work, the design methodology for admittance model-based control with a conventional controller is presented. A recursive least-squares algorithm with a forgetting factor is proposed to identify the parameters and update the cutting process model in real time. The normal cutting forces are measured with a precision dynamometer to identify the cutting dynamics in a real diamond turning process. Simulation results based on the estimated cutting dynamics and the admittance model-based nanodynamic control scheme are presented.
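
A minimal sketch of the recursive least-squares update with a forgetting factor that the abstract proposes for identifying the cutting dynamics; the linear-in-parameters toy model below is an assumption for illustration, not the paper's cutting model.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor lam.
    theta: (n,) parameter estimate, P: (n, n) covariance,
    phi: (n,) regressor, y: measured output (e.g. normal force)."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (K * (y - phi.T @ theta.reshape(-1, 1))).ravel()
    P = (P - K @ phi.T @ P) / lam                  # covariance update
    return theta, P

# Toy identification of y = 2*u + 0.5*u_prev from noisy data.
rng = np.random.default_rng(0)
theta, P = np.zeros(2), np.eye(2) * 100.0
u_prev = 0.0
for _ in range(200):
    u = rng.normal()
    y = 2.0 * u + 0.5 * u_prev + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, np.array([u, u_prev]), y)
    u_prev = u
print(theta)  # approaches [2.0, 0.5]
```

The forgetting factor `lam < 1` discounts old measurements, which is what lets the estimator track cutting dynamics that drift during the process.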

Low Power Security Architecture for the Internet of Things (사물인터넷을 위한 저전력 보안 아키텍쳐)

  • Yun, Sun-woo; Park, Na-eun; Lee, Il-gu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.199-201 / 2021
  • The Internet of Things (IoT) is a technology that organically connects people and things without time and space constraints, using communication network technology and sensors to transmit and receive data in real time. Because IoT devices used across industrial fields are constrained in device size, memory capacity, and data transmission performance, managing power consumption is important for making effective use of limited battery capacity. Prior research has improved power efficiency by making the encryption module's security algorithm lighter, at the cost of weaker security. In this study, we propose a low-power security architecture that can utilize high-performance security algorithms in the IoT environment. By executing the security module only when threat detection is required, based on inspection results, relatively complex security modules can be used in low-power environments, providing both high security and power efficiency.
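
A minimal sketch of the gating pattern the abstract describes, where an always-on lightweight check decides whether to invoke the expensive security module; the scoring heuristic, threshold, and function names are illustrative placeholders, not the paper's design.

```python
import hashlib

THREAT_SCORE_THRESHOLD = 0.8  # assumed tuning parameter

def lightweight_score(packet: bytes) -> float:
    """Cheap always-on heuristic; here, a toy byte-diversity proxy."""
    return len(set(packet)) / 256.0

def heavy_inspection(packet: bytes) -> bool:
    """Stands in for an expensive, high-performance security module;
    only invoked when the cheap check flags a possible threat."""
    digest = hashlib.sha256(packet).hexdigest()
    return not digest.startswith("00")  # placeholder verdict

def process(packet: bytes) -> str:
    # Gate the expensive module behind the cheap detector to save
    # power, as the abstract proposes.
    if lightweight_score(packet) < THREAT_SCORE_THRESHOLD:
        return "pass (lightweight check only)"
    return "pass" if heavy_inspection(packet) else "drop"

print(process(b"hello sensor reading 42"))
```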

Privacy Preserving Techniques for Deep Learning in Multi-Party System (멀티 파티 시스템에서 딥러닝을 위한 프라이버시 보존 기술)

  • Hye-Kyeong Ko
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.647-654 / 2023
  • Deep learning is a useful method for classifying and recognizing complex data such as images and text, and its accuracy is the basis for making artificial intelligence-based services on the Internet useful. However, the vast amount of user data used for training raises privacy violation concerns: companies that have collected personal and sensitive data such as photographs and voices may hold that data indefinitely, and users can neither delete their data nor limit its purpose of use. For example, data owners such as medical institutions that want to apply deep learning to patients' medical records cannot share patient data because of privacy and confidentiality issues, making it difficult to benefit from deep learning technology. In this paper, we design a privacy-preserving deep learning technique that allows multiple workers in a multi-party system to train a neural network model jointly without sharing their input datasets. We propose a method that selectively shares small subsets of parameters using an optimization algorithm based on modified stochastic gradient descent, and confirm that it facilitates training with increased learning accuracy while protecting private information.
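
The selective-sharing idea, where each worker contributes only a small subset of its gradient entries per step, can be sketched roughly as follows; the top-k selection rule, the linear model, and the sharing fraction are assumptions for illustration, not the paper's exact modified SGD.

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def share_topk(grad, frac=0.1):
    """Keep only the largest-magnitude fraction of gradient entries,
    zeroing the rest: the 'selective sharing' step that limits what
    each worker reveals about its private data."""
    k = max(1, int(frac * grad.size))
    mask = np.zeros_like(grad)
    mask[np.argsort(np.abs(grad))[-k:]] = 1.0
    return grad * mask

rng = np.random.default_rng(1)
w_true = rng.normal(size=20)
workers = []
for _ in range(3):                       # three parties, private data
    X = rng.normal(size=(50, 20))
    workers.append((X, X @ w_true))

w_global = np.zeros(20)
for step in range(200):
    for X, y in workers:
        g = local_gradient(w_global, X, y)
        w_global -= 0.01 * share_topk(g)  # only shared entries move
print(np.linalg.norm(w_global - w_true))  # shrinks over training
```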

Prediction of Settlement of Vertical Drainage-Reinforced Soft Clay Ground using Back-Analysis (역해석 기법에 근거한 수직배수재로 개량된 연약점토지반의 침하예측)

  • Park, Hyun Il; Kim, Yun Tae; Hwang, Daejin; Lee, Seung Rae
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.4C / pp.229-238 / 2006
  • Observed field behavior frequently differs from the behavior predicted at the design stage because of uncertainties in soil properties, numerical modeling, and measurement error, even when a sophisticated numerical analysis technique is used to solve the consolidation behavior of drainage-installed soft deposits. In this study, genetic algorithms are applied to back-analyze soil properties from the observed behavior of a multi-layered soft clay deposit with complex consolidation characteristics. Using the program, one can predict the subsequent consolidation behavior from data measured at an early stage of consolidation of multi-layered soft deposits. Example analyses of drainage-installed multi-layered soft deposits are performed to examine the applicability of the proposed back-analysis method.
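
A rough sketch of genetic-algorithm back-analysis: search for parameters whose predicted settlement curve best fits early-stage measurements, then extrapolate. The hyperbolic settlement function, the two-parameter search space, and the GA settings are toy assumptions standing in for the paper's multi-layer consolidation model.

```python
import numpy as np

rng = np.random.default_rng(0)
t_obs = np.array([10.0, 30, 60, 120, 240])        # days (early stage)
s_obs = np.array([0.08, 0.19, 0.30, 0.41, 0.49])  # settlement, m

def settlement(t, s_ult, c):
    """Toy hyperbolic consolidation curve; stands in for the
    paper's numerical consolidation analysis."""
    return s_ult * t / (t + c)

def fitness(params):
    s_ult, c = params
    return -np.mean((settlement(t_obs, s_ult, c) - s_obs) ** 2)

# Simple real-coded GA: tournament selection, uniform crossover,
# Gaussian mutation, with bounds on (s_ult, c).
lo, hi = np.array([0.1, 1.0]), np.array([2.0, 500.0])
pop = rng.uniform(lo, hi, size=(40, 2))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    i, j = rng.integers(0, 40, size=(2, 40))       # tournaments
    parents = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])
    mates = parents[rng.permutation(40)]
    mask = rng.random((40, 2)) < 0.5               # uniform crossover
    pop = np.where(mask, parents, mates)
    pop += rng.normal(scale=[0.02, 5.0], size=(40, 2))  # mutation
    pop = np.clip(pop, lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
print(best)                       # back-analyzed (s_ult, time constant)
print(settlement(720.0, *best))   # extrapolated long-term settlement
```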

Grid Strut-Tie Model Approach for Structural Concrete Design (콘크리트 구조부재의 설계를 위한 격자 스트럿-타이 모델 방법)

  • Yun, Young Mook; Kim, Byung Hun
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.4A / pp.621-637 / 2006
  • Although approaches implementing strut-tie models are valuable tools for designing the discontinuity regions of structural concrete, the approaches in current design codes need improvement for structural concrete under complex loading and geometric conditions because of uncertainties in selecting a strut-tie model, in using an indeterminate strut-tie model, and in the effective strengths of struts and nodal zones. To address these uncertainties, a grid strut-tie model approach is proposed in this study. The proposed approach, which allows a consistent and effective design of structural concrete, employs an initial grid strut-tie model in which various load combinations can be considered. In addition, the approach automatically selects an optimal strut-tie model by evaluating the capacities of struts and ties with a simple optimization algorithm. The validity and effectiveness of the proposed approach are verified through analyses of four reinforced concrete deep beams tested to failure and the design of shear walls with two openings.
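
The core of strut-tie selection by optimization can be illustrated with a minimum-volume plastic truss formulation, which zeroes out inefficient members automatically; this toy deep-beam example and the LP formulation are a generic sketch, not the paper's grid method or its capacity evaluation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy strut-tie model of a deep beam: pin at A(0,0), roller at
# C(2,0), unit load pulling down at D(1,1); node B(1,0) mid-span.
nodes = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (1, 1)}
members = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("B", "D")]
free_dofs = [("B", 0), ("B", 1), ("C", 0), ("D", 0), ("D", 1)]

# Equilibrium matrix: force on each free DOF per unit member tension.
B = np.zeros((len(free_dofs), len(members)))
lengths = np.zeros(len(members))
for m, (n1, n2) in enumerate(members):
    p1, p2 = np.array(nodes[n1], float), np.array(nodes[n2], float)
    L = np.linalg.norm(p2 - p1)
    lengths[m] = L
    for node, direction in ((n1, p2 - p1), (n2, p1 - p2)):
        for axis in (0, 1):
            if (node, axis) in free_dofs:
                B[free_dofs.index((node, axis)), m] = direction[axis] / L

p_ext = np.zeros(len(free_dofs))
p_ext[free_dofs.index(("D", 1))] = -1.0    # unit load downward at D

# Minimum-volume plastic truss LP: f = t - c, minimize sum L*(t+c),
# subject to nodal equilibrium B f = -p_ext, with t, c >= 0.
cost = np.concatenate([lengths, lengths])
res = linprog(cost, A_eq=np.hstack([B, -B]), b_eq=-p_ext, bounds=(0, None))
t, c = res.x[:len(members)], res.x[len(members):]
for (n1, n2), ti, ci in zip(members, t, c):
    if ti + ci < 1e-9:
        print(f"{n1}-{n2}: inactive")
    elif ti > ci:
        print(f"{n1}-{n2}: tie, force {ti:.2f}")
    else:
        print(f"{n1}-{n2}: strut, force {ci:.2f}")
```

The output reproduces the classic deep-beam picture: compression diagonals to the supports, a tension tie along the bottom chord, and the unused vertical dropped; in an indeterminate grid the same LP selects among competing load paths.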

Framework for improving the prediction rate with respect to outdoor thermal comfort using machine learning

  • Jeong, Jaemin; Jeong, Jaewook; Lee, Minsu; Lee, Jaehyun
    • International conference on construction engineering and project management / 2022.06a / pp.119-127 / 2022
  • Most construction work is conducted outdoors, so construction workers are affected by weather conditions such as temperature, humidity, and wind velocity, which can be evaluated as environmental factors of thermal comfort. Our previous research found that construction accidents usually occur within the discomfort ranges. Safety management should therefore be planned in consideration of thermal comfort, which is normally measured with a specialized simulation tool that is complex, time-consuming, and difficult to model. To address this issue, this study develops a framework for a prediction model that improves prediction accuracy for outdoor thermal comfort, considering environmental factors, using machine learning algorithms with hyperparameter tuning. The study proceeds in four steps: i) establishment of a database; ii) selection of variables for the prediction model; iii) development of the prediction model; iv) hyperparameter tuning. A tree-type algorithm is used to develop the prediction model. The results are as follows. First, with three variables related to environmental factors, the prediction accuracy was 85.74%. Second, with four environmental factors, the prediction accuracy was 86.55%. Third, after hyperparameter tuning, the prediction accuracy increased to 87.28%. This study makes several contributions: the prediction model computes thermal comfort easily and quickly, and it can support safety management that accounts for weather conditions when preventing construction accidents.
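
Steps iii) and iv) map naturally onto a tree-type model with grid-searched hyperparameters. Below is a minimal scikit-learn sketch with synthetic stand-in data; the feature set, the toy comfort label, and the tuning grid are illustrative assumptions, not the paper's database or configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the weather database: temperature (C),
# humidity (%), wind velocity (m/s), solar radiation (W/m^2).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(-5, 35, 2000),    # temperature
    rng.uniform(20, 95, 2000),    # humidity
    rng.uniform(0, 12, 2000),     # wind velocity
    rng.uniform(0, 900, 2000),    # solar radiation
])
# Toy discomfort label: hot-humid or cold-windy conditions.
y = ((X[:, 0] > 28) & (X[:, 1] > 60)) | ((X[:, 0] < 5) & (X[:, 2] > 6))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Tree-type model (step iii) with hyperparameter tuning (step iv).
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [4, 8, None]},
    cv=5,
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, f"test accuracy: {grid.score(X_te, y_te):.4f}")
```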

Dynamic Remeshing for Real-Time Representation of Thin-Shell Tearing Simulations on the GPU

  • Jong-Hyun Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.89-96 / 2023
  • In this paper, we propose a GPU-based method for the real-time dynamic re-meshing required when tearing cloth. Thin-shell materials are used in fields such as physics-based simulation/animation, games, and virtual reality. Tearing fabric requires dynamically updating both geometry and connectivity, which makes the process complex and computationally intensive; it must also be fast, especially for interactive content. Most existing methods either run low-resolution simulations to stay real-time or rely on pre-segmented tear patterns, which is not true dynamic re-meshing and yields low-quality torn patterns. We propose a new GPU-optimized dynamic re-meshing algorithm that enables real-time processing of high-resolution fabric tears. Because it performs dynamic re-meshing rather than relying on pre-split meshes, the proposed method can be used for virtual surgical simulation and for physics-based modeling in games and virtual environments that require real-time performance.
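
The connectivity update at the heart of tearing re-meshing — inserting a vertex on a torn edge and re-triangulating the incident triangles — can be sketched on the CPU as follows; the GPU version would run many such splits in parallel with atomic index allocation. This is a generic edge-split sketch, not the paper's algorithm.

```python
import numpy as np

# A quad as two triangles; the shared diagonal (0, 2) will be split.
vertices = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
triangles = [(0, 1, 2), (0, 2, 3)]

def split_edge(vertices, triangles, a, b):
    """Insert the midpoint of edge (a, b) and re-triangulate every
    triangle containing that edge, preserving winding order (the
    core geometry + connectivity update when a tear propagates)."""
    mid = len(vertices)
    vertices = np.vstack([vertices, (vertices[a] + vertices[b]) / 2])
    new_tris = []
    for tri in triangles:
        if a in tri and b in tri:
            c = next(v for v in tri if v not in (a, b))
            # Does the edge run a->b or b->a in this triangle's cycle?
            if tri[(tri.index(a) + 1) % 3] == b:
                new_tris += [(a, mid, c), (mid, b, c)]
            else:
                new_tris += [(b, mid, c), (mid, a, c)]
        else:
            new_tris.append(tri)
    return vertices, new_tris

vertices, triangles = split_edge(vertices, triangles, 0, 2)
print(len(vertices), triangles)  # 5 vertices, 4 triangles
```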

Comparison of Deep Learning Models Using Protein Sequence Data (단백질 기능 예측 모델의 주요 딥러닝 모델 비교 실험)

  • Lee, Jeung Min; Lee, Hyun
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.245-254 / 2022
  • Proteins are the basic unit of all life activities, and understanding them is essential for studying life phenomena. Since the emergence of machine learning methods based on artificial neural networks, many researchers have tried to predict protein function from protein sequences alone. Many combinations of deep learning models have been reported, but the methods differ, there is no standard methodology, and each is tailored to different data, so there has been no direct comparative analysis of which algorithms are better suited to protein data. In this paper, the single-model performance of each algorithm is compared and evaluated for accuracy and speed by applying the same data to CNN, LSTM, and GRU models, the representative algorithms most frequently used in protein function prediction research, with micro precision, recall, and F1-score as the final evaluation metrics. The combined CNN-LSTM and CNN-GRU models were evaluated in the same way. The study confirms that LSTM performs well as a single model on simple classification problems, stacked CNNs are suitable as single models on complex classification problems, and CNN-LSTM is relatively better among the combined models.
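
The comparison setup can be sketched in Keras by swapping only the sequence encoder while keeping the embedding and output head fixed, so the architectures are compared on equal footing; the vocabulary size, sequence length, and layer widths below are placeholders, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, SEQ_LEN, NUM_CLASSES = 21, 200, 10  # 20 amino acids + padding

def build(kind):
    """Same embedding and head for every variant so the comparison
    isolates the sequence encoder (CNN vs LSTM vs GRU)."""
    inp = layers.Input(shape=(SEQ_LEN,))
    x = layers.Embedding(VOCAB, 32, mask_zero=(kind != "cnn"))(inp)
    if kind == "cnn":
        x = layers.Conv1D(64, 7, activation="relu")(x)
        x = layers.GlobalMaxPooling1D()(x)
    elif kind == "lstm":
        x = layers.LSTM(64)(x)
    elif kind == "gru":
        x = layers.GRU(64)(x)
    out = layers.Dense(NUM_CLASSES, activation="sigmoid")(x)  # multi-label
    model = tf.keras.Model(inp, out)
    model.compile("adam", "binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model

for kind in ("cnn", "lstm", "gru"):
    build(kind).summary()
    # model.fit(X_train, y_train, ...) would follow with real sequences
```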

Development of a Type 2 Diabetes Prediction Algorithm Based on Big Data (빅데이터 기반 2형 당뇨 예측 알고리즘 개발)

  • Hyun Sim; HyunWook Kim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.5 / pp.999-1008 / 2023
  • Early prediction of chronic diseases such as diabetes is an important issue, and improving the accuracy of diabetes prediction is especially important. Various machine learning and deep learning-based methodologies are being introduced for diabetes prediction, but these technologies require large amounts of data for better performance than other methodologies, and learning costs are high due to complex data models. In this study, we aim to verify the claim that a DNN using the Pima dataset and k-fold cross-validation reduces the efficiency of diabetes diagnosis models. Machine learning classification methods such as decision trees, SVM, random forest, logistic regression, KNN, and various ensemble techniques were used to determine which algorithm produces the best prediction results. After training and testing all classification models, the proposed system provided the best results with an XGBoost classifier combined with the ADASYN method: accuracy of 81%, F1 score of 0.81, and AUC of 0.84. Additionally, a domain adaptation method was implemented to demonstrate the versatility of the proposed system, and an explainable AI approach using the LIME and SHAP frameworks was implemented to understand how the model predicts the final outcome.
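
A minimal sketch of the reported pipeline — ADASYN oversampling, an XGBoost classifier, and SHAP explanations — using a synthetic stand-in for the Pima-style table; it assumes the third-party packages imblearn, xgboost, and shap are installed, and the metric values will differ from the paper's.

```python
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import shap

# Synthetic stand-in for the 768-row, 8-feature Pima diabetes table.
X, y = make_classification(n_samples=768, n_features=8,
                           weights=[0.65], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                          random_state=0)

# Balance the minority (diabetic) class with ADASYN, then fit XGBoost.
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X_tr, y_tr)
model = XGBClassifier(n_estimators=200, max_depth=4,
                      eval_metric="logloss")
model.fit(X_bal, y_bal)

proba = model.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)
print(f"accuracy={accuracy_score(y_te, pred):.2f}",
      f"F1={f1_score(y_te, pred):.2f}",
      f"AUC={roc_auc_score(y_te, proba):.2f}")

# Explain individual predictions with SHAP's TreeExplainer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print(shap_values.shape)  # per-sample, per-feature contributions
```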