• Title/Summary/Keyword: neural network model


A Design on Face Recognition System Based on pRBFNNs by Obtaining Real Time Image (실시간 이미지 획득을 통한 pRBFNNs 기반 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Seok, Jin-Wook;Kim, Ki-Sang;Kim, Hyun-Ki
    • Journal of Institute of Control, Robotics and Systems / v.16 no.12 / pp.1150-1158 / 2010
  • In this study, Polynomial-based Radial Basis Function Neural Networks (pRBFNNs) are proposed as the recognition part of an overall face recognition system that consists of two parts, a preprocessing part and a recognition part. The design methodology and procedure of the proposed pRBFNNs are presented to obtain a solution to the high-dimensional pattern recognition problem. First, in the preprocessing part, we use a CCD camera to obtain picture frames in real time. Using histogram equalization, we partially enhance images distorted by natural as well as artificial illumination. The AdaBoost algorithm proposed by Viola and Jones is then used to separate the facial image area from the non-facial image area. PCA is used as the feature extraction algorithm to reduce the dimensionality of the high-dimensional facial image area. Secondly, we use the pRBFNNs to identify the ID by recognizing the unique pattern of each person. The proposed pRBFNN architecture consists of three functional modules, the condition part, the conclusion part, and the inference part, expressed as fuzzy rules in 'If-then' format. In the condition part of the fuzzy rules, the input space is partitioned with Fuzzy C-Means clustering. In the conclusion part of the rules, the connection weights of the pRBFNNs are represented by three kinds of polynomials: constant, linear, and quadratic. The coefficients of the connection weights are identified by back-propagation using the gradient descent method. The output of the pRBFNN model is obtained by the fuzzy inference method in the inference part of the fuzzy rules. The essential design parameters of the networks (including the learning rate, momentum coefficient, and fuzzification coefficient) are optimized by means of Particle Swarm Optimization. The proposed pRBFNNs are applied to a real-time face recognition system and evaluated in terms of output performance and recognition rate.
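
A minimal sketch of the preprocessing chain described above (real-time capture, histogram equalization, Viola-Jones face detection, PCA reduction), assuming standard OpenCV and scikit-learn components. The cascade file, crop size, frame count, and number of PCA components are illustrative assumptions, not values from the paper, and the pRBFNN classifier itself is not shown.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

# Viola-Jones detector shipped with OpenCV (stand-in for the paper's AdaBoost detector)
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_frame(frame, size=(64, 64)):
    """Return histogram-equalized face crops detected in one camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                       # compensate uneven illumination
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], size) for (x, y, w, h) in faces]

# Collect face crops from the camera, then reduce dimensionality with PCA
cap = cv2.VideoCapture(0)
crops = []
for _ in range(100):                                    # grab a short burst of frames (assumed)
    ok, frame = cap.read()
    if not ok:
        break
    crops.extend(preprocess_frame(frame))
cap.release()

if crops:
    X = np.array([c.flatten() for c in crops], dtype=np.float32)
    pca = PCA(n_components=min(50, len(X)))             # low-dimensional features for the recognizer
    features = pca.fit_transform(X)
```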

Vehicle Headlight and Taillight Recognition in Nighttime using Low-Exposure Camera and Wavelet-based Random Forest (저노출 카메라와 웨이블릿 기반 랜덤 포레스트를 이용한 야간 자동차 전조등 및 후미등 인식)

  • Heo, Duyoung;Kim, Sang Jun;Kwak, Choong Sub;Nam, Jae-Yeal;Ko, Byoung Chul
    • Journal of Broadcast Engineering / v.22 no.3 / pp.282-294 / 2017
  • In this paper, we propose a novel intelligent headlight control (IHC) system that is robust to various road lights and to camera movement caused by vehicle driving. For detecting candidate light blobs, the region of interest (ROI) is divided into a front ROI (FROI) and a back ROI (BROI) by considering the camera geometry based on a perspective range estimation model. Then, light blobs such as vehicle headlights and taillights, reflected light, and the surrounding road lighting are segmented using two different adaptive thresholding methods. Among the segmented blobs, taillights are first detected using a redness check and a random forest classifier based on Haar-like features. For headlight and taillight classification, we use a random forest instead of the popular support vector machine or convolutional neural networks in order to support fast training and testing in real-life applications. Pairing is performed using predefined geometric rules, such as vertical coordinate similarity and an association check between blobs. The proposed algorithm was successfully applied to various nighttime driving sequences, and the results show that its performance is better than that of recent related works.
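
A hedged sketch of the blob classification stage only: Haar-like features computed on a candidate light-blob patch feed a random forest, in the spirit of the classifier described above. The patch size, feature types, and forest settings are assumptions, and the paper's redness check and pairing rules are omitted.

```python
import numpy as np
from skimage.transform import integral_image
from skimage.feature import haar_like_feature
from sklearn.ensemble import RandomForestClassifier

FEATURE_TYPES = ['type-2-x', 'type-2-y', 'type-3-x', 'type-3-y']  # assumed subset

def haar_features(patch_gray):
    """Compute Haar-like features over a fixed-size grayscale blob patch."""
    ii = integral_image(patch_gray)
    return haar_like_feature(ii, 0, 0, patch_gray.shape[1], patch_gray.shape[0],
                             feature_type=FEATURE_TYPES)

def train_blob_classifier(patches, labels):
    """patches: list of e.g. 24x24 grayscale patches; labels: 1 = taillight, 0 = other light."""
    X = np.array([haar_features(p) for p in patches])
    clf = RandomForestClassifier(n_estimators=100, max_depth=10, n_jobs=-1)
    clf.fit(X, labels)
    return clf
```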

Stock Price Prediction Using Sentiment Analysis: from "Stock Discussion Room" in Naver (SNS감성 분석을 이용한 주가 방향성 예측: 네이버 주식토론방 데이터를 이용하여)

  • Kim, Myeongjin;Ryu, Jihye;Cha, Dongho;Sim, Min Kyu
    • The Journal of Society for e-Business Studies / v.25 no.4 / pp.61-75 / 2020
  • The scope of data for understanding or predicting stock prices has continuously widened from traditional structured data to unstructured data. This study investigates whether commentary data collected from SNS may affect future stock prices. From the "Stock Discussion Room" in Naver, we collect commentary data on 20 stocks for six months and test whether these data have predictive power with respect to the one-hour-ahead price direction and price range. Deep neural networks such as LSTM and CNN are employed to model the predictive relationship. Among the 20 stocks, we find that the future price direction can be predicted with accuracy higher than 50% for 13 stocks. The future price range can also be predicted with accuracy higher than 50% for 16 stocks. This study validates that investor sentiment reflected in SNS communities such as Naver's "Stock Discussion Room" may affect the demand and supply of stocks, thus driving stock prices.
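
A sketch of the kind of LSTM direction classifier described above: a sequence of hourly sentiment and price features predicts the one-hour-ahead up/down label. The feature size, sequence length, and layer widths are assumptions for illustration, not the study's configuration.

```python
import tensorflow as tf

SEQ_LEN, N_FEATURES = 24, 8   # e.g. 24 hourly steps of sentiment + price features (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(price goes up in the next hour)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20, batch_size=64)
```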

Binary classification of bolts with anti-loosening coating using transfer learning-based CNN (전이학습 기반 CNN을 통한 풀림 방지 코팅 볼트 이진 분류에 관한 연구)

  • Noh, Eunsol;Yi, Sarang;Hong, Seokmoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.651-658 / 2021
  • Because bolts with anti-loosening coatings are used mainly for joining safety-related components in automobiles, accurate automatic screening of these coatings is essential to detect defects efficiently. The performance of the convolutional neural network (CNN) used in a previous study [Identification of bolt coating defects using CNN and Grad-CAM] increased as the amount of data available for analyzing image patterns and characteristics increased. On the other hand, obtaining the necessary amount of data for coated bolts is difficult, making training time-consuming. In this paper, using the same VGG16 model as in the previous study, transfer learning was applied to decrease the training time and achieve the same or better accuracy with fewer data. The classifier was trained considering the amount of training data in this study and its similarity to the ImageNet data. When only the fully connected classifier was trained, the highest accuracy achieved was 95%. To enhance the performance further, the last convolutional layer and the classifier were fine-tuned, which resulted in a 2% increase in accuracy (97%). This shows that the training time can be reduced by transfer learning and fine-tuning while maintaining a high screening accuracy.
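
A minimal sketch of VGG16 transfer learning for binary bolt classification along the two stages described above: first train a new classifier on frozen ImageNet features, then unfreeze the last convolutional block and fine-tune with a small learning rate. Layer choices and hyperparameters are assumptions, not the paper's exact settings.

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                               # stage 1: freeze the convolutional base

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # coated vs. defective bolt
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: fine-tune the last convolutional block ("block5") together with the classifier.
base.trainable = True
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```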

A Deep Learning Method for Cost-Effective Feed Weight Prediction of Automatic Feeder for Companion Animals (반려동물용 자동 사료급식기의 비용효율적 사료 중량 예측을 위한 딥러닝 방법)

  • Kim, Hoejung;Jeon, Yejin;Yi, Seunghyun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.263-278 / 2022
  • With the recent advent of IoT technology, automatic pet feeders are being distributed so that owners can feed their companion animals while they are out. However, because of pet behavior, the scale commonly used to measure weight, which is important for automatic feeding, can easily be damaged and broken. A 3D camera has the disadvantage of cost, and a 2D camera has relatively poor accuracy compared to a 3D camera. Hence, the purpose of this study is to propose a deep learning approach that can accurately estimate weight while simply using a 2D camera. For this, various convolutional neural networks were used, and among them, the ResNet101-based model showed the best performance: an average absolute error of 3.06 grams and an average absolute ratio error of 3.40%, which could be used commercially in terms of technical and financial viability. The results of this study can be useful for practitioners who wish to predict the weight of a standardized object such as feed from a simple 2D image.
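
An illustrative sketch of a ResNet101-based regressor that maps a single 2D image to a feed weight in grams, in the spirit of the model described above. The input size, head layers, and loss are assumptions rather than the authors' settings.

```python
import tensorflow as tf

base = tf.keras.applications.ResNet101(weights="imagenet", include_top=False,
                                       pooling="avg", input_shape=(224, 224, 3))
weight_model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="linear"),        # predicted feed weight in grams
])
weight_model.compile(optimizer="adam",
                     loss="mae",                           # mean absolute error in grams
                     metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
# weight_model.fit(train_images, train_grams, validation_data=(val_images, val_grams))
```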

A Design of the Vehicle Crisis Detection System(VCDS) based on vehicle internal and external data and deep learning (차량 내·외부 데이터 및 딥러닝 기반 차량 위기 감지 시스템 설계)

  • Son, Su-Rak;Jeong, Yi-Na
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.2 / pp.128-133 / 2021
  • Currently, the autonomous vehicle market is commercializing level-3 autonomous vehicles, but there is a possibility that an accident may occur even during fully autonomous driving due to stability issues. In fact, autonomous vehicles have recorded 81 accidents. This is because, unlike level 3, autonomous vehicles at level 4 and above have to judge and respond to emergency situations by themselves. Therefore, this paper proposes a vehicle crisis detection system (VCDS) that collects and stores information outside the vehicle through a CNN and uses the stored information together with vehicle sensor data to output the crisis level of the vehicle as a number between 0 and 1. The VCDS consists of two modules. The vehicle external situation collection module (VESCM) collects surrounding vehicle and pedestrian data using a CNN-based neural network model. The vehicle crisis situation determination module detects a crisis situation by using the output of the VESCM and the internal vehicle sensor data. As a result of the experiment, the average operation time of the VESCM was 55 ms, that of R-CNN was 74 ms, and that of CNN was 101 ms. In particular, R-CNN shows computation time similar to the VESCM when the number of pedestrians is small, but it takes more computation time than the VESCM as the number of pedestrians increases. On average, the computation time of the VESCM was 25.68% faster than R-CNN and 45.54% faster than CNN, and the accuracy of all three models did not fall below 80%.
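
A hedged sketch of the fusion idea only: features produced by the external-situation module are concatenated with internal sensor readings and mapped to a crisis score in [0, 1]. The feature dimensions and layer sizes are assumptions; this is not the authors' network.

```python
import tensorflow as tf

ext_features = tf.keras.Input(shape=(64,), name="external_cnn_features")  # assumed size
sensor_data = tf.keras.Input(shape=(16,), name="internal_sensor_data")    # assumed size

x = tf.keras.layers.Concatenate()([ext_features, sensor_data])
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dense(32, activation="relu")(x)
crisis = tf.keras.layers.Dense(1, activation="sigmoid", name="crisis_level")(x)  # 0..1

vcds = tf.keras.Model(inputs=[ext_features, sensor_data], outputs=crisis)
vcds.compile(optimizer="adam", loss="binary_crossentropy")
```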

Effectiveness of the Detection of Pulmonary Emphysema using VGGNet with Low-dose Chest Computed Tomography Images (저선량 흉부 CT를 이용한 VGGNet 폐기종 검출 유용성 평가)

  • Kim, Doo-Bin;Park, Young-Joon;Hong, Joo-Wan
    • Journal of the Korean Society of Radiology / v.16 no.4 / pp.411-417 / 2022
  • This study aimed to train VGGNet and evaluate its effectiveness in detecting pulmonary emphysema using low-dose chest computed tomography images. In total, 8000 images with normal findings and 3189 images showing pulmonary emphysema were used. Of the normal and emphysema data, 60%, 24%, and 16% were randomly assigned to the training, validation, and test datasets, respectively. VGG16 and VGG19 were used for training, and the accuracy, loss, confusion matrix, precision, recall, specificity, and F1-score were evaluated. On the low-dose chest CT test dataset, the accuracy and loss for pulmonary emphysema detection were 92.35% and 0.21% for VGG16 and 95.88% and 0.09% for VGG19, respectively. The precision, recall, and specificity were 91.60%, 98.36%, and 77.08% for VGG16 and 96.55%, 97.39%, and 92.72% for VGG19, respectively. The F1-scores were 94.86% and 96.97% for VGG16 and VGG19, respectively. Based on these evaluation indices, VGG19 is judged to be more useful for detecting pulmonary emphysema. The findings of this study should be useful as basic data for research on pulmonary emphysema detection models using VGGNet and artificial neural networks.
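
A small worked example of the evaluation indices reported above, computed from a binary confusion matrix with emphysema as the positive class. The counts are placeholders, not the study's data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification indices from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                specificity=specificity, f1=f1)

# Placeholder counts only, to show how precision/recall/specificity/F1 relate
print(classification_metrics(tp=560, fp=20, fn=15, tn=680))
```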

Analysis of Contribution of Climate and Cultivation Management Variables Affecting Orchardgrass Production (오차드그라스의 생산량에 영향을 미치는 기후 및 재배관리의 기여도 분석)

  • Moonju Kim;Ji Yung Kim;Mu-Hwan Jo;Kyungil Sung
    • Journal of The Korean Society of Grassland and Forage Science / v.43 no.1 / pp.1-10 / 2023
  • This study aimed to determine the relative importance of climate and management variables for the production of orchardgrass in Korea (1982-2014). For climate, the mean temperature in January (MTJ, ℃), lowest temperature in January (LTJ, ℃), growing days 0 to 5 (GD 1, day), growing days 5 to 25 (GD 2, day), summer depression days (SSD, day), rainfall days (RD, day), accumulated rainfall (AR, mm), and sunshine duration (SD, hr) were considered. For management, the establishment period (EP, 0-6 years) and number of cuttings (NC, 2nd-5th) were measured. The importance ratio for orchardgrass production was estimated using a neural network model with the perceptron method, performed in SPSS 26.0 (IBM Corp., Chicago). As a result, EP was the most important variable (100%), followed by RD (82.0%), AR (79.1%), NC (69.2%), LTJ (66.2%), GD 2 (63.3%), GD 1 (61.6%), SD (58.1%), SSD (50.8%), and MTJ (41.8%). This implies that EP, RD, AR, and NC were more important than the other variables. Since the annual rainfall in Korea exceeds the amount required for the growth and development of orchardgrass, the damage caused by heavy rainfall exceeding the appropriate level could be reduced through drainage management. This means that factors that can be controlled when cultivating orchardgrass were relatively important. Although the specific effects of climate on production are difficult to interpret because of the neural network modeling, this study is expected to be useful for production prediction and for estimating damage from climate change by identifying the major factors.
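
A sketch of an analogous importance analysis in Python: fit a small perceptron-style network to the production data and rank predictors by permutation importance. Note that the neural-network "importance" reported by SPSS is computed differently; this only illustrates the idea. The column names follow the abbreviations above, and the data file itself is an assumption.

```python
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.inspection import permutation_importance

predictors = ["MTJ", "LTJ", "GD1", "GD2", "SSD", "RD", "AR", "SD", "EP", "NC"]
# df = pd.read_csv("orchardgrass.csv")              # assumed file with the columns above
# X, y = df[predictors], df["production"]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
# model.fit(X, y)
# imp = permutation_importance(model, X, y, n_repeats=30, random_state=0)
# ranking = pd.Series(imp.importances_mean, index=predictors).sort_values(ascending=False)
# print(100 * ranking / ranking.max())              # scaled so the top variable is 100%
```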

A Deep Learning-based Real-time Deblurring Algorithm on HD Resolution (HD 해상도에서 실시간 구동이 가능한 딥러닝 기반 블러 제거 알고리즘)

  • Shim, Kyujin;Ko, Kangwook;Yoon, Sungjoon;Ha, Namkoo;Lee, Minseok;Jang, Hyunsung;Kwon, Kuyong;Kim, Eunjoon;Kim, Changick
    • Journal of Broadcast Engineering / v.27 no.1 / pp.3-12 / 2022
  • Image deblurring aims to remove image blur, which can be introduced while shooting pictures by the movement of objects, camera shake, out-of-focus blur, and so forth. With the rise in popularity of smartphones, it is common to carry a portable digital camera daily, so image deblurring techniques have become more significant recently. Originally, image deblurring was studied using traditional optimization techniques. With the recent attention on deep learning, deblurring methods based on convolutional neural networks have been actively proposed. However, most of them have been developed with a focus on better performance, so the speed of their algorithms makes them difficult to use in real situations. To tackle this problem, we propose a novel deep learning-based deblurring algorithm that can operate in real time at HD resolution. In addition, we improved the training and inference process, increasing the performance of our model without any significant effect on its speed and increasing its speed without any significant effect on its performance. As a result, our algorithm achieves real-time performance by processing 33.74 frames per second at 1280×720 resolution. Furthermore, it shows excellent performance relative to its speed, with a PSNR of 29.78 dB and an SSIM of 0.9287 on the GoPro dataset.
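
A minimal sketch of how such a real-time claim can be checked: time repeated forward passes on 1280×720 frames and report frames per second. The placeholder network stands in for any deblurring model; the warm-up pass and batch size of 1 mirror a streaming setting.

```python
import time
import tensorflow as tf

def measure_fps(model, n_frames=100, height=720, width=1280):
    """Average throughput of single-frame inference at HD resolution."""
    frame = tf.random.uniform((1, height, width, 3))   # dummy HD frame
    _ = model(frame).numpy()                           # warm-up and force sync
    start = time.perf_counter()
    for _ in range(n_frames):
        _ = model(frame).numpy()
    return n_frames / (time.perf_counter() - start)

# Placeholder network only for demonstrating the measurement; a real deblurring model
# (e.g. an encoder-decoder CNN) would be substituted here.
dummy = tf.keras.Sequential([tf.keras.layers.Input(shape=(720, 1280, 3)),
                             tf.keras.layers.Conv2D(3, 3, padding="same")])
print(f"{measure_fps(dummy):.2f} FPS")
```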

Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science / v.3 no.1 / pp.61-78 / 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of Hangul (Korean) syllabic characters. In his original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numeric characters of 19×19 size. This version accepts 61×61 images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and three pairs of Uc layers. The last Uc layer of this version, the recognition layer, consists of 24 planes of 5×5 cells, which indicate the identity of the grapheme receiving attention at a given time and its relative position in the input layer, respectively. The network was trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; some patterns that are not easily trained were trained more extensively. The trained network, which can classify individual graphemes under deformation, noise, size variance, transformation, or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism for the image segmentation task within a syllabic character. In initial sample tests, our model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. The results of this study show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are much more complex in both their structure and their graphemes, but processing time appears to be the bottleneck before it can be implemented; special hardware such as a neural chip appears to be an essential prerequisite for practical use of the model. Further work is required before the model can recognize Korean syllabic characters consisting of complex vowels and complex consonants; correct recognition of the neighboring area between two simple graphemes would become more critical for this task.
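
A rough sketch of one feature-extraction/pooling stage of a Neocognitron-style network, as a modern reading of the architecture described above: S-cells respond to learned local templates and C-cells pool their responses to gain tolerance to shift and deformation. The templates, sizes, and pooling rule are illustrative assumptions, and the paper's backward (selective-attention) path is omitted.

```python
import numpy as np
from scipy.signal import correlate2d

def s_layer(image, templates, threshold=0.1):
    """S-cells: thresholded template matching (one response plane per template)."""
    planes = [np.maximum(correlate2d(image, t, mode="same") - threshold, 0.0)
              for t in templates]
    return np.stack(planes)

def c_layer(planes, pool=2):
    """C-cells: local max pooling for shift tolerance."""
    n, h, w = planes.shape
    h, w = h - h % pool, w - w % pool
    blocks = planes[:, :h, :w].reshape(n, h // pool, pool, w // pool, pool)
    return blocks.max(axis=(2, 4))

# Example: a 61x61 input (as in the paper) and two assumed 3x3 stroke templates
image = np.random.rand(61, 61)
templates = [np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),   # vertical stroke
             np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float)]   # horizontal stroke
features = c_layer(s_layer(image, templates))
```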