• Title/Summary/Keyword: a neural network


A novel approach to the classification of ultrasonic NDE signals using the Expectation Maximization(EM) and Least Mean Square(LMS) algorithms (Expectation Maximization (EM)과 Least Mean Square(LMS) algorithm을 이용하여 초음파 비파괴검사 신호의 분류를 하기 위한 새로운 접근법)

  • Daewon Kim
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.1
    • /
    • pp.15-26
    • /
    • 2003
  • Ultrasonic inspection methods are widely used for detecting flaws in materials, and the signal analysis step plays a crucial part in the data interpretation process. A number of signal processing methods have been proposed to classify ultrasonic flaw signals. One of the more popular methods involves extracting an appropriate set of features and then using a neural network to classify the signals in the feature space. This paper describes an alternative approach that uses the least mean square (LMS) method and the expectation maximization (EM) algorithm with model-based deconvolution to classify nondestructive evaluation (NDE) signals from steam generator tubes in a nuclear power plant. The signals due to cracks and deposits are not significantly different, yet they must be discriminated to prevent disasters such as contamination of the water supply or an explosion. A model-based deconvolution is described to facilitate comparison of classification results. The method uses the space alternating generalized expectation maximization (SAGE) algorithm in conjunction with the Newton-Raphson method, whose use of the Hessian yields fast convergence, to estimate the time of flight and the distance between the tube wall and the ultrasonic sensor. Results using these schemes for the classification of ultrasonic signals from cracks and deposits within steam generator tubes are presented and show reasonable performance.

  • PDF
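
The LMS stage of the approach above can be illustrated with a minimal sketch: a linear classifier trained with the Widrow-Hoff (LMS) update on extracted feature vectors. This is an illustrative reconstruction, not the authors' implementation; the feature values, labels, and hyperparameters are invented.

```python
import numpy as np

def lms_train(X, y, lr=0.05, epochs=50):
    """Train linear weights with the least mean square (Widrow-Hoff) rule.
    X: (n, d) feature matrix; y: (n,) class targets in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - xi @ w      # instantaneous error on one sample
            w += lr * err * xi     # stochastic step on the squared error
    return w

def lms_predict(w, X):
    """Assign class +1 (e.g. crack) or -1 (e.g. deposit) by the output's sign."""
    return np.where(X @ w >= 0.0, 1, -1)
```

On linearly separable toy features, the rule converges to weights whose sign separates the two signal classes.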

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • 박강령
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.2
    • /
    • pp.79-88
    • /
    • 2004
  • Gaze detection is locating, by computer vision, the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application: they are applicable to man-machine interfaces that help the handicapped use computers and to view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. Finally, the gaze position determined by the facial movements is computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position determined by eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor, with an accuracy between the computed positions and the real ones of about 4.8 cm RMS error.
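
The facial-gaze step above (normal vector of the feature plane, intersected with the monitor) can be sketched with elementary geometry. The coordinate convention (monitor plane at z = 0, units in cm) is an assumption for illustration, not the paper's calibration.

```python
import numpy as np

def face_plane_normal(p1, p2, p3):
    """Unit normal of the plane spanned by three 3-D facial feature points."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def gaze_on_screen(origin, normal, screen_z=0.0):
    """Intersect the gaze ray origin + t * normal with the monitor plane z = screen_z."""
    t = (screen_z - origin[2]) / normal[2]
    return origin + t * normal
```

A face plane parallel to the screen at z = 50 cm maps straight back onto the screen point directly in front of it.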

Optimizing the Electricity Price Revenue of Wind Power Generation Captures in the South Korean Electricity Market (남한 전력시장에서 풍력발전점유의 전력가격수익 최적화)

  • Eamon, Byrne;Kim, Hyun-Goo;Kang, Yong-Heack;Yun, Chang-Yeol
    • Journal of the Korean Solar Energy Society
    • /
    • v.36 no.1
    • /
    • pp.63-73
    • /
    • 2016
  • How effectively a wind farm captures high market prices can greatly influence its viability. This research identifies and explains the effects behind the varying capture prices (average revenue earned per unit of generation) seen among different wind farms in the current and future competitive SMP (System Marginal Price) market in South Korea. Using a neural network to simulate changes in SMP caused by increased renewables, based on the Korea Institute of Energy Research's extensive wind resource database for South Korea, the variances in current and future capture prices are modelled and analyzed for both onshore and offshore wind power generation. Simulation results show a spread in capture price of 5.5% for the year 2035 that depends both on a location's wind characteristics and on the generation's correlation with other wind power generation. Wind characteristics include the generation's correlation with the SMP price, the diurnal profile shape, and the capacity factor. The wind revenue cannibalization effect reduces the capture price obtained by wind power generation located close to a substantial amount of other wind power generation. In onshore locations wind characteristics can differ significantly; hence it is recommended that prospective wind development sites have suitable diurnal profiles that effectively capture high SMP prices. Also, as more wind power capacity is installed in South Korea, it is recommended that wind power generation be located in regions far from the expected future wind power generation 'hotspots'. Hence, a suitable site along the east mountain ridges of South Korea is predicted to be extremely effective in attaining high SMP capture prices. Attention to these factors will increase the revenues obtained by wind power generation in a competitive electricity market.
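
The capture price defined above (average revenue per unit of generation) is a generation-weighted average of the SMP; a minimal sketch, with invented hourly values:

```python
import numpy as np

def capture_price(generation, smp):
    """Average revenue earned per unit of generation (generation-weighted SMP)."""
    return float(np.sum(generation * smp) / np.sum(generation))

def capture_ratio(generation, smp):
    """Capture price relative to the time-average SMP; a value above 1 means
    the farm generates disproportionately during high-price hours."""
    return capture_price(generation, smp) / float(np.mean(smp))
```

A farm whose diurnal output profile correlates with high-SMP hours earns a capture ratio above 1; the cannibalization effect described above pushes this ratio down.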

Convergence CCTV camera embedded with Deep Learning SW technology (딥러닝 SW 기술을 이용한 임베디드형 융합 CCTV 카메라)

  • Son, Kyong-Sik;Kim, Jong-Won;Lim, Jae-Hyun
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.1
    • /
    • pp.103-113
    • /
    • 2019
  • A license plate recognition camera is a dedicated device designed to acquire images of a target vehicle in order to recognize the letters and numbers on its license plate. Mostly it is used as part of a system combined with a server and an image analysis module rather than on its own. However, building a system for vehicle license plate recognition is costly, because it requires a facility with a server providing management and analysis of the captured images and an image analysis module providing extraction and recognition of the plate's numbers and characters. In this study, we develop an embedded convergence camera (edge-based) that expands the camera's function so that it performs both license plate recognition and the security CCTV function within the camera itself. This embedded convergence camera is equipped with a high-resolution 4K IP camera for clear image acquisition and fast data transmission. It extracts the license plate area by applying YOLO, deep learning software for multi-object recognition based on an open-source neural network algorithm, and detects the numbers and characters of the plate. We verified the detection accuracy and recognition accuracy, and confirmed that this camera can successfully perform both the CCTV security function and the vehicle license plate recognition function.

A Deep-Learning Based Automatic Detection of Craters on Lunar Surface for Lunar Construction (달기지 건설을 위한 딥러닝 기반 달표면 크레이터 자동 탐지)

  • Shin, Hyu Soung;Hong, Sung Chul
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.38 no.6
    • /
    • pp.859-865
    • /
    • 2018
  • The construction of infrastructure and a base station on the moon could be undertaken in regions where construction materials and energy can be supplied on site. It is therefore necessary to detect craters on the lunar surface and gather their topological information in advance, since craters form permanently shadowed regions (PSR) in which rich ice deposits might be available. In this study, an effective method for automatic detection of craters on the lunar surface is considered, employing a recent deep-learning algorithm. Training is performed on 90,000 still images taken by NASA's LRO orbiter, together with label data giving the position and size of some of the craters shown in each image. Faster R-CNN, a recent deep-learning algorithm, is applied for training. The trained model was then used for automatic detection of craters on which it had not been trained. The results show that a great deal of erroneous information in NASA's labels for crater positions and sizes was automatically revised, and that many unlabelled craters were detected. It should therefore be possible to automatically produce regional maps of crater density and topological information on the moon, which can change over time and should be highly valuable in engineering considerations for lunar construction.
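
Comparing detected craters against (possibly erroneous) labels, as described above, is typically done with intersection-over-union between bounding boxes. A minimal sketch; the 0.5 matching threshold is a common detection-evaluation convention, not a value from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) bounding boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def matches_label(pred, label, thresh=0.5):
    """A detection counts as a labelled crater if the overlap is high enough."""
    return iou(pred, label) >= thresh
```

Detections with no sufficiently overlapping label are candidates for the "unlabelled craters" the study reports; labels with no matching detection flag possible label errors.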

An Integrated Model for Predicting Changes in Cryptocurrency Return Based on News Sentiment Analysis and Deep Learning (감성분석을 이용한 뉴스정보와 딥러닝 기반의 암호화폐 수익률 변동 예측을 위한 통합모형)

  • Kim, Eunmi
    • Knowledge Management Research
    • /
    • v.22 no.2
    • /
    • pp.19-32
    • /
    • 2021
  • Bitcoin, a representative cryptocurrency, is receiving a great deal of attention around the world, and its price shows high volatility. High volatility is a risk factor for investors and leads to the social problems that accompany reckless investment. Since the price of Bitcoin responds quickly to changes in the global environment, we propose to predict its price volatility by utilizing news, which provides a variety of information in real time: positive news stimulates investor sentiment and negative news weakens it. In this study, therefore, news sentiment information and deep learning were applied to predict changes in the Bitcoin return. Single predictive models based on logit, an artificial neural network, SVM, and LSTM were built, and an integrated model was proposed as a way to improve predictive performance. Comparing the performance of a prediction model built on historical price information alone with models reflecting news sentiment information, the integrated model based on news sentiment information performed best. This study can help prevent reckless investment and provide useful information that enables investors to invest wisely through the predictive model.
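
One simple way to integrate the direction forecasts of several base classifiers, as the integrated model above does, is majority voting; this combination rule is an assumed illustration, since the abstract does not specify the paper's exact integration scheme.

```python
def majority_vote(predictions):
    """Combine direction forecasts (+1 = return up, -1 = return down) from
    several base models (e.g. logit, ANN, SVM, LSTM) by majority vote.
    predictions: list of equal-length per-model forecast lists."""
    n = len(predictions[0])
    combined = []
    for i in range(n):
        total = sum(model[i] for model in predictions)
        combined.append(1 if total >= 0 else -1)
    return combined
```

With three base models, at least two must agree for the ensemble to forecast an upward move.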

Construction Method of ECVAM using Land Cover Map and KOMPSAT-3A Image (토지피복지도와 KOMPSAT-3A위성영상을 활용한 환경성평가지도의 구축)

  • Kwon, Hee Sung;Song, Ah Ram;Jung, Se Jung;Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.5
    • /
    • pp.367-380
    • /
    • 2022
  • In this study, a way to periodically and simply update and produce the ECVAM (Environmental Conservation Value Assessment Map) is presented through the classification of environmental values using KOMPSAT-3A satellite imagery and the land cover map. The ECVAM evaluates the environmental value of the country in five stages, based on 62 legal evaluation items and 8 environmental and ecological evaluation items, and is provided at two scales: 1:25000 and 1:5000. However, the 1:5000-scale map is produced and serviced with a slow renewal cycle of one year because of various constraints, such as the absence of reference materials and differing production years. Therefore, this study applied a deep learning technique to KOMPSAT-3A satellite imagery, SI (Spectral Indices), and the land cover map to confirm the possibility of establishing the assessment map. As a result, the accuracies were calculated to be 87.25% and 85.88%, respectively. These results confirm the possibility of constructing an environmental assessment map using satellite imagery, spectral indices, and land cover classification.
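
The SI (spectral index) inputs mentioned above are per-pixel band combinations; NDVI, computed from near-infrared and red reflectance, is a common example. A minimal sketch; the epsilon guarding division by zero is our addition, and the band values are invented:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, a common spectral index (SI):
    (NIR - RED) / (NIR + RED), computed per pixel on reflectance arrays."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Stacking such index layers with the raw bands gives the classifier extra, physically meaningful features.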

A Study on the Efficacy of Edge-Based Adversarial Example Detection Model: Across Various Adversarial Algorithms

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.2
    • /
    • pp.31-41
    • /
    • 2024
  • Deep learning models show excellent performance in computer vision tasks such as image classification and object detection, and are used in various ways at actual industrial sites. Recently, research on improving robustness has been actively conducted, following observations that these deep learning models are vulnerable to adversarial examples. An adversarial example is an image to which small noise has been added to induce misclassification, and it can pose a significant threat when a deep learning model is applied in a real environment. In this paper, we examined the robustness of an edge-learning classification model, and the performance of an adversarial example detection model built on it, against adversarial examples produced by various algorithms. In the robustness experiments, the basic classification model showed about 17% accuracy against the FGSM algorithm while the edge-learning models maintained accuracy in the 60-70% range, and the basic classification model showed accuracy in the 0-1% range against the PGD, DeepFool, and CW algorithms while the edge-learning models maintained accuracy in the 80-90% range. In the adversarial example detection experiment, a high detection rate of 91-95% was confirmed for all of the FGSM, PGD, DeepFool, and CW algorithms. By demonstrating the possibility of defending against various adversarial algorithms, this study is expected to improve the safety and reliability of deep learning models in the many industries that use computer vision.
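
The FGSM attack referenced above perturbs each pixel by a small step along the sign of the loss gradient: x_adv = x + ε·sign(∇ₓL). A minimal NumPy sketch; the gradient here is a placeholder argument, in practice obtained by backpropagation through the target model:

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: add an eps-sized step along the sign of the
    input gradient of the loss, then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

Because every pixel moves by exactly ±ε, the perturbation is bounded in the L∞ norm, which is why small ε values stay visually imperceptible.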

Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.1
    • /
    • pp.61-78
    • /
    • 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of Hangul (Korean) syllabic characters. In his original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numeric characters of size 19×19. Our version accepts 61×61 images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and 3 pairs of Uc layers. The last Uc layer, the recognition layer, consists of 24 planes of 5×5 cells, which indicate the identity of the grapheme receiving attention at a given time and its relative position in the input layer, respectively. The network has been trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; patterns that are not easily trained have been trained more extensively. The trained network, which can classify individual graphemes under deformation, noise, size variance, translation, or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism for the image segmentation task within a syllabic character. In initial sample tests, our model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. The results of this study show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach might be applied to the recognition of Chinese characters, which are much more complex in both their structure and their graphemes, but processing time appears to be the bottleneck before it can be implemented; special hardware such as a neural chip appears to be an essential prerequisite for the practical use of the model.
Further work is required to enable the model to recognize Korean syllabic characters consisting of complex vowels and complex consonants. Correct recognition of the neighboring area between two simple graphemes will become more critical for this task.

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal or superior to people's in many fields, including image and speech recognition. In particular, many efforts have been made to identify current technology trends and analyze the directions of AI development, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, services, and education. Major platforms that can develop complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology is greatly indebted to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. This study therefore aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. We searched for and collected a list of major AI-related projects created on Github from 2000 to July 2018, and confirmed the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The analysis showed that the number of software development projects per year was below 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015, and the number of open source projects related to AI increased especially rapidly in 2016 (2,559 OSS projects).
It was confirmed that 14,213 projects were initiated in 2017, almost four times the total number of projects created from 2009 to 2016 (3,555 projects); 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequently appearing topics; after 2016, the programming languages other than Python disappeared from the top ten, and in their place platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show a high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were also frequently appearing topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging appeared at the top of the list, although they had not been at the top from 2009 to 2012, indicating that OSS was developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top 10 by appearance frequency from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changed slightly.
The trend of technology development was examined using the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. It is noteworthy that, although the deep learning topic showed a low frequency and a low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015; in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
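
The degree centrality used throughout the analysis above is, for each topic, its number of co-occurrence links divided by the number of other topics; a minimal sketch on an invented toy co-occurrence graph:

```python
def degree_centrality(edges):
    """Degree centrality of each node in an undirected co-occurrence graph:
    degree divided by (n - 1), the standard normalization in network analysis.
    edges: list of (node_a, node_b) pairs, assumed free of duplicates."""
    nodes = set()
    for a, b in edges:
        nodes.update((a, b))
    degree = {node: 0 for node in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(nodes)
    return {node: d / (n - 1) for node, d in degree.items()}
```

A topic linked to every other topic (as machine learning is in the study) reaches the maximum centrality of 1.0.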