• Title/Summary/Keyword: Optimized algorithm


Design and Implementation of BNN based Human Identification and Motion Classification System Using CW Radar (연속파 레이다를 활용한 이진 신경망 기반 사람 식별 및 동작 분류 시스템 설계 및 구현)

  • Kim, Kyeong-min;Kim, Seong-jin;NamKoong, Ho-jung;Jung, Yun-ho
    • Journal of Advanced Navigation Technology / v.26 no.4 / pp.211-218 / 2022
  • Continuous wave (CW) radar has the advantage of reliability and accuracy compared to other sensors such as cameras and lidar. In addition, a binarized neural network (BNN) dramatically reduces memory usage and complexity compared to other deep learning networks. Therefore, this paper proposes a BNN-based human identification and motion classification system using CW radar. After a signal is received from the CW radar, a spectrogram is generated through a short-time Fourier transform (STFT). Based on this spectrogram, we propose an algorithm that detects whether a person is approaching the radar. We also designed an optimized BNN model that achieves an accuracy of 90.0% for human identification and 98.3% for motion classification. To accelerate BNN operation, we designed a BNN hardware accelerator on a field-programmable gate array (FPGA). The accelerator was implemented with 1,030 logic elements, 836 registers, and 334.904 Kbit of block memory, and real-time operation was confirmed with a total computation time of 6 ms from inference to result transfer.
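As an illustration of the signal-processing front end described above, the following is a minimal sketch (not the authors' implementation) of generating an STFT spectrogram from a CW radar baseband signal and flagging an approaching target from the sign of the Doppler energy; the sampling rate, window parameters, placeholder signal, and threshold are all assumptions.

```python
# Minimal sketch (assumptions throughout): STFT spectrogram of a CW radar
# baseband signal and a crude approach check from positive-Doppler energy.
import numpy as np
from scipy.signal import stft

fs = 2000                                   # assumed baseband sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
iq = np.exp(1j * 2 * np.pi * 120 * t)       # placeholder I/Q signal (120 Hz Doppler)

f, tt, Z = stft(iq, fs=fs, nperseg=256, noverlap=192, return_onesided=False)
spectrogram = 20 * np.log10(np.abs(Z) + 1e-12)   # dB magnitude, e.g. the BNN input

# Approach detection: compare energy in positive vs. negative Doppler bins.
pos_energy = np.abs(Z[f > 0]).sum()
neg_energy = np.abs(Z[f < 0]).sum()
approaching = pos_energy > 2.0 * neg_energy      # assumed ratio threshold
print("approaching" if approaching else "not approaching")
```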

A Study on Digital Color Reproduction for Recording Color Appearance of Cultural Heritage (문화유산의 현색(顯色) 기록화를 위한 디지털 색재현 연구)

  • Song, Hyeong Rok;Jo, Young Hoon
    • Journal of Conservation Science / v.38 no.2 / pp.154-165 / 2022
  • The color appearance of cultural heritage is an essential factor for interpreting manufacturing techniques, guiding conservation treatment, and monitoring condition. Therefore, this study systematically established color reproduction procedures based on a digital color management system for the portrait of Gwon Eungsu, and proposed various application strategies for recording and conserving cultural heritage. The overall color reproduction process was conducted in the following order: photography condition setting, standard color measurement, digital photography, color correction, and color space creation. Compared with the measured color appearance, the digital image processed with the camera maker's profile showed an average color difference of ΔE 10.1, whereas the digital reproduction based on the color management system showed an average color difference of ΔE 1.1, which is close to the color appearance. This means that even when digital photography conditions are optimized, recording the color appearance is difficult if one relies on the correction algorithm developed by the camera maker. Therefore, digital color reproduction of cultural heritage should be performed through color correction and color space creation based on the raw digital image, which is a crucial process for documenting color appearance. Additionally, recording color appearance through digital color reproduction is important for condition evaluation, conservation treatment, and restoration of cultural heritage. Furthermore, the standard data from imaging analysis are available for discoloration monitoring.
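For reference, here is a minimal sketch of the kind of color-difference evaluation implied above: an average CIE76 ΔE*ab between reference Lab values of a color chart and Lab values measured from a corrected image. The patch values and the use of the CIE76 formula (rather than CIEDE2000) are assumptions, not the study's settings.

```python
# Minimal sketch (assumed workflow, not the authors' pipeline): average CIE76
# color difference between reference and measured Lab values of chart patches.
import numpy as np

def delta_e_cie76(lab_ref: np.ndarray, lab_img: np.ndarray) -> np.ndarray:
    """Euclidean distance in CIELAB space, one ΔE per chart patch."""
    return np.sqrt(((lab_ref - lab_img) ** 2).sum(axis=1))

# Hypothetical Lab values for three patches of a standard color chart.
lab_reference = np.array([[51.0, 49.9, -14.1],
                          [70.7, -33.4, -0.2],
                          [62.7, 36.1, 57.1]])
lab_measured  = np.array([[51.8, 48.5, -13.0],
                          [70.1, -32.0, 1.1],
                          [63.5, 35.0, 58.4]])

de = delta_e_cie76(lab_reference, lab_measured)
print(f"average color difference: ΔE*ab = {de.mean():.2f}")
```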

A Study on the Optimization of Main Dimensions of a Ship by Design Search Techniques based on the AI (AI 기반 설계 탐색 기법을 통한 선박의 주요 치수 최적화)

  • Dong-Woo Park;Inseob Kim
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.7 / pp.1231-1237 / 2022
  • In the present study, the optimization of the main particulars of a ship using AI-based design search techniques was investigated. The SHERPA algorithm in HEEDS was applied as the design search technique, and CFD analysis using STAR-CCM+ was applied to calculate resistance performance. The main particulars were automatically varied at the preprocessing stage using Java script and Python. A small catamaran was chosen for the present study, and the length, breadth, and draft of the demi-hull and the distance between demi-hulls were considered as design variables. Total resistance was taken as the objective function, and the range of displaced volume required for the arrangement of the outfitting system was chosen as the constraint. As a result, the changes in the individual design variables were within ±5%, and the total resistance of the optimized hull form was reduced by 11% compared with that of the existing hull form. Throughout the present study, the resistance performance of the small catamaran could be improved by optimizing the main dimensions without directly modifying the hull shape. The application of optimization using design search techniques is therefore also expected to improve the resistance performance of other ships.
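The following is a schematic sketch of this kind of constrained design search, with a placeholder resistance function standing in for the STAR-CCM+ evaluation and a generic evolutionary optimizer standing in for SHERPA; the baseline dimensions, volume model, and bounds are assumptions chosen only to illustrate the ±5% design space and the displaced-volume constraint.

```python
# Minimal sketch (assumptions throughout): search main dimensions within ±5%
# bounds to minimize a resistance estimate under a displaced-volume constraint.
import numpy as np
from scipy.optimize import NonlinearConstraint, differential_evolution

base = np.array([12.0, 1.2, 0.6, 3.5])   # assumed L, B, T of demi-hull, hull spacing [m]

def run_cfd(x):
    # Placeholder resistance model; the real study calls a CFD solver here.
    L, B, T, S = x
    return 0.5 * L * B * T / S + 0.01 * L**2

def displaced_volume(x):
    L, B, T, _ = x
    return 2 * 0.55 * L * B * T          # two demi-hulls, assumed block coefficient

bounds = [(0.95 * v, 1.05 * v) for v in base]        # ±5% design space
vol0 = displaced_volume(base)
con = NonlinearConstraint(displaced_volume, 0.98 * vol0, 1.02 * vol0)

result = differential_evolution(run_cfd, bounds, constraints=(con,), seed=0)
print("optimized dimensions:", result.x, "resistance:", result.fun)
```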

A Study on the Accuracy Comparison of Object Detection Algorithms for 360° Camera Images for BIM Model Utilization (BIM 모델 활용을 위한 360° 카메라 이미지의 객체 탐지 알고리즘 정확성 비교 연구)

  • Hyun-Chul Joo;Ju-Hyeong Lee;Jong-Won Lim;Jae-Hee Lee;Leen-Seok Kang
    • Land and Housing Review / v.14 no.3 / pp.145-155 / 2023
  • Recently, with the widespread adoption of Building Information Modeling (BIM) technology in the construction industry, various object detection algorithms have been used to verify errors between 3D models and actual construction elements. Since the characteristics of objects vary depending on the type of construction facility, such as buildings, bridges, and tunnels, appropriate object detection methods need to be employed. Additionally, object detection requires initial object images, which can be acquired through various means such as drones and smartphones. This study uses a 360° camera optimized for imaging tunnel interiors to capture initial images of the tunnel structures of railway and road facilities. Various object detection methodologies, including the YOLO, SSD, and R-CNN algorithms, are applied to detect actual objects from the captured images. The Faster R-CNN algorithm showed a higher recognition rate and mAP value than the SSD and YOLO v5 algorithms, and the difference between its minimum and maximum recognition rates was small, indicating consistent detection ability. Considering the increasing adoption of BIM in current railway and road construction projects, this research highlights the potential of 360° cameras and object detection methodologies for tunnel facility sections, aiming to expand their application in maintenance.
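As a reference point for the detection step, here is a minimal sketch of running a pretrained Faster R-CNN from torchvision on a single extracted frame; the image path, score threshold, and use of COCO-pretrained weights are assumptions and differ from the models trained and compared in the study.

```python
# Minimal sketch (not the study's pipeline): pretrained Faster R-CNN inference
# on one hypothetical frame extracted from a 360° tunnel image.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                           FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("tunnel_frame.jpg")          # hypothetical extracted frame
with torch.no_grad():
    pred = model([preprocess(img)])[0]

keep = pred["scores"] > 0.5                    # assumed confidence threshold
categories = weights.meta["categories"]
for box, label, score in zip(pred["boxes"][keep], pred["labels"][keep], pred["scores"][keep]):
    print(categories[int(label)], f"{score:.2f}", box.tolist())
```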

Graph Convolutional - Network Architecture Search : Network architecture search Using Graph Convolution Neural Networks (그래프 합성곱-신경망 구조 탐색 : 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Su-Youn Choi;Jong-Youel Park
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.649-654 / 2023
  • This paper proposes the design of a neural architecture search model using graph convolutional neural networks. Because deep learning behaves as a black box, it is difficult to verify whether a designed model has a structure with optimized performance. A neural architecture search model consists of a recurrent neural network that generates a model and a convolutional neural network that is the generated network. Conventional neural architecture search models use recurrent neural networks; in this paper, we propose GC-NAS, which instead uses graph convolutional neural networks to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth, and a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) in parallel based on the depth information. Because the depth information is reflected, the search space is wider, and because the search is conducted in parallel with the depth information, the purpose of each search region is clear, so GC-NAS is judged to be theoretically superior to existing recurrent-network-based models. Through its graph convolutional neural network block and graph generation algorithm, GC-NAS is expected to solve the problems of the high-dimensional time axis and the limited spatial search range of recurrent neural networks in existing neural architecture search models. We also hope that the GC-NAS proposed in this paper will encourage active research on applying graph convolutional neural networks to neural architecture search.
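To make the graph-convolution building block concrete, the following is a minimal sketch of a single graph convolution layer in PyTorch, of the kind a graph-based architecture search model would use to encode a candidate network as a graph; the layer form, dimensions, and toy graph are assumptions and do not reproduce the paper's Layer Extraction or Hyper Parameter Prediction Blocks.

```python
# Minimal sketch (an assumption, not GC-NAS itself): one graph convolution layer
# of the form H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))              # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return torch.relu(self.linear(norm @ h))

# Toy usage: 4 candidate layers as graph nodes with 8-dimensional features.
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
features = torch.randn(4, 8)
layer = GraphConvLayer(8, 16)
print(layer(adj, features).shape)     # torch.Size([4, 16])
```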

Biomechanical Research Trends for Alpine Ski Analysis (알파인 스키 분석을 위한 운동역학 연구 동향)

  • Lee, Jusung;Moon, Jeheon;Kim, Jinhae;Hwang, Jinny;Kim, Hyeyoung
    • 한국체육학회지인문사회과학편 / v.57 no.6 / pp.293-308 / 2018
  • This study was carried out to investigate current trends in skiing-related research from the existing literature in the fields of kinematics, measurement sensors, and computer simulation. In the field of kinematics, research is being conducted on the mechanism of the ski turn, posture analysis according to the grade and skill level of skiers, the friction between ski and snow, and air resistance. In the fields of measurement sensors and computer simulation, research is being conducted on developing measurement equipment using IMU sensors and GPS. The results of this study are as follows. First, beyond the limits of existing kinematic analysis, it is necessary to develop measurement equipment that can analyze the entire skiing area and can be deployed with ease at the sports scene. Second, research on the accuracy of information obtained from measurement sensors, and on various analysis techniques based on these measurements, should be carried out continuously to provide data that can help at the sports scene. Third, it is necessary to use computer simulation methods to clarify injury mechanisms and discover ways to prevent skiing-related injuries. Fourth, it is necessary to provide an optimized ski trajectory algorithm by developing a 3D ski model using computer simulation and comparing it with actual skiing data.

Dynamic Nonlinear Prediction Model of Univariate Hydrologic Time Series Using the Support Vector Machine and State-Space Model (Support Vector Machine과 상태공간모형을 이용한 단변량 수문 시계열의 동역학적 비선형 예측모형)

  • Kwon, Hyun-Han;Moon, Young-Il
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.3B / pp.279-289 / 2006
  • The reconstruction of low-dimensional nonlinear behavior from hydrologic time series has been an active area of research in the last decade. In this study, we present the application of a powerful state-space reconstruction methodology using support vector machines (SVM) to the Great Salt Lake (GSL) volume. SVMs are machine learning systems that use a hypothesis space of linear functions in a kernel-induced higher-dimensional feature space. SVMs are optimized by minimizing a bound on a generalized error (risk) measure, rather than just the mean square error over a training set. The utility of this SVM regression approach is demonstrated through applications to short-term forecasts of the biweekly GSL volume. The SVM-based reconstruction is used to develop time series forecasts for multiple lead times ranging from two weeks to several months. The reliability of the algorithm in learning and forecasting the dynamics is tested using split-sample sensitivity analyses, with particular interest in forecasting extreme states. Unlike previously reported methodologies, SVMs are able to extract the dynamics using only a few of the past observed data points (support vectors, SV) out of the training examples. In terms of statistical measures, the SVM-based prediction model demonstrated encouraging and promising results for short-term prediction. Thus, the SVM method presented in this study offers a competitive methodology for forecasting hydrologic time series.
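As an illustration of the approach, the following is a minimal sketch of SVM regression on a time-delay embedding of a univariate series, forecasting one step ahead; the embedding dimension, delay, kernel settings, and toy data are assumptions rather than the GSL configuration used in the paper.

```python
# Minimal sketch (assumed setup, not the authors' GSL model): SVR on a
# time-delay embedding of a univariate series, one-step-ahead forecasting.
import numpy as np
from sklearn.svm import SVR

def delay_embed(series: np.ndarray, dim: int, tau: int):
    """Build state vectors [x(t), x(t+tau), ..., x(t+(dim-1)tau)] and targets one step ahead."""
    n = len(series) - (dim - 1) * tau - 1
    X = np.array([series[i:i + dim * tau:tau] for i in range(n)])
    y = series[(dim - 1) * tau + 1:(dim - 1) * tau + 1 + n]
    return X, y

rng = np.random.default_rng(0)
t = np.arange(600)
volume = np.sin(2 * np.pi * t / 52) + 0.1 * rng.standard_normal(600)  # toy biweekly series

X, y = delay_embed(volume, dim=4, tau=2)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:500], y[:500])
print("one-step forecasts:", model.predict(X[500:505]))
```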

A study on the design of an efficient hardware and software mixed-mode image processing system for detecting patient movement (환자움직임 감지를 위한 효율적인 하드웨어 및 소프트웨어 혼성 모드 영상처리시스템설계에 관한 연구)

  • Seungmin Jung;Euisung Jung;Myeonghwan Kim
    • Journal of Internet Computing and Services / v.25 no.1 / pp.29-37 / 2024
  • In this paper, we propose an efficient image processing system to detect and track the movement of specific objects such as patients. The proposed system extracts the outline area of an object from a binarized difference image by applying a thinning algorithm that enables more precise detection than previous algorithms and is advantageous for mixed-mode design. The binarization and thinning steps, which require heavy computation, are designed at the register-transfer level (RTL) and replaced with optimized hardware blocks through logic synthesis. The designed binarization and thinning block was synthesized into a logic circuit using a standard 180 nm CMOS library, and its operation was verified through simulation. For comparison against software-based performance, a performance analysis of the binarization and thinning operations was also carried out on sample images with 640 × 360 resolution in a 32-bit FPGA embedded system environment. The verification confirmed that the mixed-mode design improves the processing speed of the binarization and thinning stages by 93.8% compared to software-only processing. The proposed mixed-mode system for object recognition is expected to efficiently monitor patient movements even in edge computing environments where artificial intelligence networks are not deployed.
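For comparison with the hardware blocks, here is a minimal software sketch of the two stages being accelerated, binarizing a frame difference and thinning the result; the threshold value and the use of scikit-image's thinning routine are assumptions, not the paper's RTL design.

```python
# Minimal sketch (software reference only, not the RTL design): binarize a frame
# difference with a fixed threshold, then thin the binary mask to a skeleton.
import numpy as np
from skimage.morphology import thin

def binarize_difference(prev_frame: np.ndarray, curr_frame: np.ndarray,
                        threshold: int = 30) -> np.ndarray:
    """Absolute frame difference followed by a fixed-threshold binarization."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy 640x360 grayscale frames with a moving bright block.
prev_frame = np.zeros((360, 640), dtype=np.uint8)
curr_frame = prev_frame.copy()
curr_frame[100:140, 200:260] = 200

binary = binarize_difference(prev_frame, curr_frame)
outline = thin(binary.astype(bool))           # thinning reduces blobs to 1-px skeletons
print("changed pixels:", int(binary.sum()), "skeleton pixels:", int(outline.sum()))
```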

The effects of physical factors in SPECT (물리적 요소가 SPECT 영상에 미치는 영향)

  • 손혜경;김희중;나상균;이희경
    • Progress in Medical Physics / v.7 no.1 / pp.65-77 / 1996
  • Using the 2-D and 3-D Hoffman brain phantoms, the 3-D Jaszczak phantom, and single photon emission computed tomography (SPECT), the effects of data acquisition parameters, attenuation, noise, scatter, and the reconstruction algorithm on image quantitation as well as image quality were studied. For the data acquisition parameters, images were acquired while changing the angular increment of rotation and the radius of rotation. A smaller angular increment resulted in superior image quality. A smaller radius from the center of rotation also gave better image quality, since resolution degrades as the distance from the detector to the object increases. Using the flood data from the Jaszczak phantom, the optimal attenuation coefficient was derived as 0.12 cm⁻¹ for all collimators, and all images were subsequently corrected for attenuation using this coefficient. The flood data obtained with the Jaszczak phantom showed a concave line profile without attenuation correction and a flat line profile with attenuation correction, and attenuation correction improved both image quality and image quantitation. To study the effects of noise, images were acquired for 1 min, 2 min, 5 min, 10 min, and 20 min. The 20 min image showed much better noise characteristics than the 1 min image, indicating that increasing the counting time reduces the noise, which follows a Poisson distribution. Images were also acquired using dual-energy windows, one for the main photopeak and another for the scatter peak, and were compared with and without scatter correction. Scatter correction improved image quality so that the cold spheres and bar patterns in the Jaszczak phantom were clearly visualized; applied to the 3-D Hoffman brain phantom, it likewise resulted in better image quality. In conclusion, SPECT images are significantly affected by the data acquisition parameters, attenuation, noise, scatter, and reconstruction algorithm, and these factors must be optimized or corrected to obtain useful SPECT data in clinical applications.
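As an illustration of two of the corrections discussed above, the following is a minimal sketch of dual-energy-window scatter subtraction and a simple exponential attenuation factor using the derived coefficient of 0.12 cm⁻¹; the scatter weighting factor, source depth, and toy projection data are assumptions, not the study's processing chain.

```python
# Minimal sketch (an illustration, not the study's processing chain): subtract a
# weighted scatter-window image from the photopeak image, then apply a simple
# exponential attenuation factor.
import numpy as np

mu = 0.12                       # attenuation coefficient from the flood data [1/cm]
k = 0.5                         # assumed scatter-window weighting factor

photopeak = np.random.poisson(100.0, size=(64, 64)).astype(float)   # toy projections
scatter_win = np.random.poisson(20.0, size=(64, 64)).astype(float)

scatter_corrected = np.clip(photopeak - k * scatter_win, 0, None)

depth_cm = 8.0                  # assumed source depth in the phantom
attenuation_corrected = scatter_corrected * np.exp(mu * depth_cm)
print("mean corrected counts:", attenuation_corrected.mean())
```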


Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in a chart rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult to computerize, and existing tools fall short of users' needs. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting performance has improved, long-term forecasting power remains limited, so such methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but this approach can be vulnerable in practice because whether the discovered patterns are suitable for trading is a separate question. In those studies, once a meaningful pattern is found, a point matching the pattern is located and performance is measured after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be large disparities with reality. Existing research tries to find patterns with price prediction power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance from actual market use has been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in the system, and only the one pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. Because performance is measured assuming that both the buy and the sell are actually executed, the evaluation reflects a realistic trading situation. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag, removes price movements below a certain percentage and then computes the vertices. In the second method, the high-low line zig-zag, a high price that meets the n-day high-price line is taken as a peak, and a low price that meets the n-day low-price line is taken as a valley. In the third method, the swing wave method, a central high price that is higher than the n high prices on each side is taken as a peak, and a central low price that is lower than the n low prices on each side is taken as a valley. The swing wave method was superior to the other methods in the test results, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases was far too large to search for high-success-rate patterns exhaustively in this simulation, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which separates the test period from the application period, so that the system could respond appropriately to market changes. In this study, we optimize at the portfolio level because optimizing the variables for each individual stock risks over-optimization; we therefore set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was second best. This suggests that some price volatility is needed for patterns to form, but that the highest volatility is not necessarily the best.
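As an illustration of the swing wave method, which performed best above, the following is a minimal sketch of turning-point detection on toy high/low series; the window size n and the data are assumptions, and matching the detected turning points against the M & W pattern templates is not shown.

```python
# Minimal sketch (an assumption, not the authors' system): swing wave turning
# points, where a central high above the n highs on each side is a peak and a
# central low below the n lows on each side is a valley.
import numpy as np

def swing_wave_turning_points(high, low, n=3):
    """Return lists of (index, price) peaks and valleys."""
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        if high[i] > max(high[i - n:i]) and high[i] > max(high[i + 1:i + n + 1]):
            peaks.append((i, float(high[i])))
        if low[i] < min(low[i - n:i]) and low[i] < min(low[i + 1:i + n + 1]):
            valleys.append((i, float(low[i])))
    return peaks, valleys

rng = np.random.default_rng(1)
close = 100 + np.cumsum(rng.standard_normal(60))           # toy price path
high, low = close + rng.uniform(0, 1, 60), close - rng.uniform(0, 1, 60)

peaks, valleys = swing_wave_turning_points(high, low, n=3)
print("peaks:", peaks[:3])
print("valleys:", valleys[:3])
# Five alternating turning points would then be matched against the M & W patterns.
```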