• Title/Summary/Keyword: learning through the image


Development of Collaborative Robot Control Training Medium to Improve Worker Safety and Work Convenience Using Image Processing and Machine Learning-Based Hand Signal Recognition (작업자의 안전과 작업 편리성 향상을 위한 영상처리 및 기계학습 기반 수신호 인식 협동로봇 제어 교육 매체 개발)

  • Jin-heork Jung;Hun Jeong;Gyeong-geun Park;Gi-ju Lee;Hee-seok Park;Chae-hun An
    • Journal of Practical Engineering Education
    • /
    • v.14 no.3
    • /
    • pp.543-553
    • /
    • 2022
  • A collaborative robot (cobot) is one of the production systems introduced with the 4th industrial revolution: it maximizes efficiency by combining the fine manual skills of workers with robots' aptitude for simple repetitive tasks. Research on efficient worker-robot interface methods is progressing continuously, alongside solutions to the safety problems that arise from sharing a workspace. In this study, a method for controlling the robot by recognizing the worker's hand signals was presented to enhance the worker's convenience and concentration, and worker safety was secured by introducing the concept of a safety zone. Various technologies, including robot control, PLC, image processing, machine learning, and ROS, were used in the implementation, and the roles and interface methods of these technologies were defined and presented for use as an educational medium. Students can build and adjust the educational media system by linking the introduced technologies, which is an excellent way to make them recognize the necessity of the technologies required in the field and to induce in-depth learning about them. In addition, presenting a problem and then having students seek a solution on their own leads to self-directed learning. Through this, students can learn key technologies of the 4th industrial revolution and improve their ability to solve various problems.
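The abstract gives no implementation details, so the following is only a minimal sketch of the safety-zone idea it describes, assuming an axis-aligned box zone and invented gesture names:

```python
import numpy as np

# Hypothetical safety zone: an axis-aligned box around the robot's workspace (meters).
SAFETY_ZONE_MIN = np.array([0.0, 0.0, 0.0])
SAFETY_ZONE_MAX = np.array([1.0, 1.0, 0.8])

def hand_in_safety_zone(hand_xyz):
    """Return True when the detected hand position lies inside the safety zone."""
    p = np.asarray(hand_xyz, dtype=float)
    return bool(np.all(p >= SAFETY_ZONE_MIN) and np.all(p <= SAFETY_ZONE_MAX))

def robot_command(hand_xyz, gesture):
    """Map a recognized hand signal to a robot command, overriding with STOP
    whenever the hand enters the safety zone."""
    if hand_in_safety_zone(hand_xyz):
        return "STOP"
    return {"open_palm": "PAUSE", "fist": "RESUME"}.get(gesture, "HOLD")
```

In the paper's actual system this logic would be distributed across ROS nodes and a PLC rather than a single function.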

Saliency Attention Method for Salient Object Detection Based on Deep Learning (딥러닝 기반의 돌출 객체 검출을 위한 Saliency Attention 방법)

  • Kim, Hoi-Jun;Lee, Sang-Hun;Han, Hyun Ho;Kim, Jin-Soo
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.12
    • /
    • pp.39-47
    • /
    • 2020
  • In this paper, we propose a deep learning-based detection method that uses Saliency Attention to detect salient objects in images. Salient object detection separates the object on which the human eye focuses from the background and identifies the most relevant part of the image; it is useful in various fields such as object tracking, detection, and recognition. Existing deep learning-based methods are mostly Autoencoder structures, and substantial feature loss occurs in the encoder, which compresses and extracts features, and in the decoder, which decompresses and expands them. These losses cause the salient object area to be lost or the background to be detected as an object. The proposed Saliency Attention reduces this feature loss and suppresses the background region within the Autoencoder structure. The influence of the feature values is determined using the ELU activation function, and attention is applied separately to the feature values in the normalized negative and positive regions. Through this attention method, the background area is suppressed and the salient object area is emphasized. Experimental results showed improved detection compared to existing deep learning methods.
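A toy NumPy sketch of the region-wise attention idea the abstract describes (ELU activation, then separate weighting of the negative and positive regions); the sigmoid gating scheme here is an assumption, not the paper's exact formulation:

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation: identity for x > 0, alpha*(exp(x)-1) otherwise."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def saliency_attention(features):
    """Pass features through ELU, then weight the negative (background-leaning)
    and positive (object-leaning) regions separately, so background responses
    are attenuated and object responses are kept."""
    a = elu(features)
    pos = np.clip(a, 0, None)
    neg = np.clip(a, None, 0)
    gate = 1.0 / (1.0 + np.exp(-a))  # sigmoid gate per element
    return pos * gate + neg * (1.0 - gate)
```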

Applying deep learning based super-resolution technique for high-resolution urban flood analysis (고해상도 도시 침수 해석을 위한 딥러닝 기반 초해상화 기술 적용)

  • Choi, Hyeonjin;Lee, Songhee;Woo, Hyuna;Kim, Minyoung;Noh, Seong Jin
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.10
    • /
    • pp.641-653
    • /
    • 2023
  • As climate change and urbanization cause unprecedented natural disasters in urban areas, urban flood predictions with high fidelity and accuracy are crucial. However, conventional physically based and deep learning-based urban flood modeling methods require substantial computing resources or data for high-resolution flooding analysis. In this study, we propose and implement a method for improving the spatial resolution of urban flood analysis using a deep learning-based super-resolution technique. The proposed approach converts low-resolution flood maps produced by physically based modeling into high-resolution maps using a super-resolution deep learning model trained on high-resolution modeling data. When applied to two retrospective flood analyses in part of the City of Portland, Oregon, U.S., the results of 4-m resolution physical simulation were successfully converted into 1-m resolution flood maps through super-resolution, with high structural similarity between the super-resolved images and the high-resolution originals. The image quality loss stayed within an acceptable limit: 22.80 dB (PSNR) and 0.73 (SSIM). The proposed super-resolution method enables efficient model training with a limited number of flood scenarios, significantly reducing data acquisition efforts and computational costs.
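The 22.80 dB figure is a PSNR score; as a quick sketch of how such a score would be computed between a high-resolution reference flood map and a super-resolved one (SSIM involves local windowed statistics and is omitted here):

```python
import numpy as np

def psnr(reference, reconstructed, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference map and a
    reconstruction (higher is better, infinite for an exact match)."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstructed, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")
    return 20.0 * np.log10(data_range / np.sqrt(mse))
```

For flood maps the arrays would hold water depths in meters rather than pixel intensities.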

Deep Learning Algorithm Training and Performance Analysis for Corridor Monitoring (회랑 감시를 위한 딥러닝 알고리즘 학습 및 성능분석)

  • Woo-Jin Jung;Seok-Min Hong;Won-Hyuck Choi
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.6
    • /
    • pp.776-781
    • /
    • 2023
  • K-UAM will be commercialized as the market matures after 2035. Since the Urban Air Mobility (UAM) corridor will be vertically separated from the existing helicopter corridor, corridor usage is expected to increase, so a system for monitoring corridors is also needed. Object detection algorithms have developed significantly in recent years and are largely divided into one-stage and two-stage models. The two-stage model is too slow for real-time detection. One-stage models once had accuracy problems, but their performance has improved through version upgrades; among them, YOLO-V5 improved small-object detection performance through Mosaic augmentation. YOLO-V5 is therefore the most suitable algorithm for a system that requires real-time monitoring of wide corridors. This paper trains YOLO-V5 and analyzes whether it is ultimately suitable for corridor monitoring.
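A toy illustration of the Mosaic idea mentioned above: tiling four training images into one, so objects appear smaller and the detector sees more small-object examples per batch. Real YOLOv5 also remaps bounding boxes and jitters the mosaic center point, which this sketch omits:

```python
import numpy as np

def mosaic4(imgs, out_size=64):
    """Tile four images into a single out_size x out_size mosaic, resizing each
    tile by naive nearest-neighbor index sampling."""
    assert len(imgs) == 4
    h = w = out_size // 2
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    for k, img in enumerate(imgs):
        ys = np.linspace(0, img.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, w).astype(int)
        tile = img[np.ix_(ys, xs)]
        r, c = divmod(k, 2)  # quadrant: 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right
        canvas[r*h:(r+1)*h, c*w:(c+1)*w] = tile
    return canvas
```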

Development of Deep Learning Structure to Secure Visibility of Outdoor LED Display Board According to Weather Change (날씨 변화에 따른 실외 LED 전광판의 시인성 확보를 위한 딥러닝 구조 개발)

  • Sun-Gu Lee;Tae-Yoon Lee;Seung-Ho Lee
    • Journal of IKEEE
    • /
    • v.27 no.3
    • /
    • pp.340-344
    • /
    • 2023
  • In this paper, we propose a deep learning structure that secures the visibility of an outdoor LED display board as the weather changes. The proposed technique automatically adjusts the LED luminance according to the weather using deep learning with an imaging device. To classify the weather, the flattened background portion of the image data is first preprocessed and then learned with a convolutional network. The applied deep learning network reduces the difference between the input and output values using residual learning, inducing learning while retaining the characteristics of the initial input. A controller then recognizes the weather and adjusts the luminance of the outdoor LED display board accordingly: when the surrounding environment becomes bright, the luminance is increased so the board remains clearly visible, and when the surroundings become dark, the scattering of light reduces visibility, so the board's luminance is lowered so that it can be seen clearly. Applying the proposed method, a certified measurement test of the LED sign board's luminance under changing weather confirmed that the visibility of the outdoor LED sign board was secured.
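Two of the ideas above can be sketched compactly: the residual learning rule (the layer learns only the difference and the input is added back, so initial-input characteristics are carried through), and a weather-to-luminance controller. The luminance table is a hypothetical illustration, not the paper's calibrated values:

```python
import numpy as np

def residual_block(x, transform):
    """Residual learning: y = F(x) + x, so the block learns only the residual F
    and the original input information is preserved."""
    return transform(x) + x

# Hypothetical weather -> panel luminance mapping (cd/m^2): brighter surroundings
# call for a brighter board, dark or scattering conditions for a dimmer one.
WEATHER_TO_LUMINANCE = {"sunny": 6000, "cloudy": 3500, "rain": 2000, "night": 800}

def board_luminance(weather):
    """Return a target luminance for the classified weather, with a mid default."""
    return WEATHER_TO_LUMINANCE.get(weather, 3500)
```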

Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.6
    • /
    • pp.813-819
    • /
    • 2014
  • The purpose of this study was to extract accurate facial movement feature parameters using a 3-D motion capture system for lip-reading-based speech recognition. Instead of features obtained from traditional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each with 36 reflective markers on their faces. The facial movement data were then calculated into 11 parameters and presented as patterns for each monosyllable vocalization. The parameter patterns were learned and recognized for each monosyllable with speech recognition algorithms based on the Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy over the 11 monosyllables was 97.2%, which suggests the possibility of Korean speech recognition through quantitative facial movement analysis.
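The recognition step pairs an HMM with the Viterbi algorithm; a compact, self-contained Viterbi sketch follows (in the paper's setting, the facial-movement parameters would first be quantized into the discrete observation symbols assumed here):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an observation sequence under an HMM
    with initial probabilities pi, transition matrix A, and emission matrix B."""
    n_states = len(pi)
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])      # log-probability of each state at t=0
    back = np.zeros((T, n_states), dtype=int)     # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)        # scores[i, j]: best path into j via i
        back[t] = np.argmax(scores, axis=0)
        logd = scores[back[t], np.arange(n_states)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]                 # best final state, then backtrack
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```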

Development of personalized clothing recommendation service based on artificial intelligence (인공지능 기반 개인 맞춤형 의류 추천 서비스 개발)

  • Kim, Hyoung Suk;Lee, Jong Hyuck;Lee, Hyun Dong
    • Smart Media Journal
    • /
    • v.10 no.1
    • /
    • pp.116-123
    • /
    • 2021
  • Due to the rapid growth of the online fashion market and the resulting expansion of online choices, sellers cannot respond individually to a large number of consumers, even though consumers increasingly demand personalized recommendation services. Images are tagged as a way to meet these personalization needs, but manual tags are highly subjective to each annotator, while artificial intelligence tagging uses a very limited vocabulary that does not meet users' needs. To solve this problem, we designed an algorithm that uses AI to recognize the shape, attribute, and emotional information of the product in an image and encodes this information so that everything the image conveys is represented by a combination of codes. This algorithm makes it possible to acquire, in real time, the varied information an image possesses, such as the sensibility of a fashion image and the TPO (time, place, occasion) information it expresses, which was not possible until now. Based on this information, it is possible to go beyond merely analyzing consumer tastes and make hyper-personalized clothing recommendations that combine those tastes with information about trends and TPOs.
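The code-combination idea can be illustrated with an entirely hypothetical codebook (the codes, categories, and vocabulary below are invented for the sketch; the paper's actual scheme is not given in the abstract):

```python
# Hypothetical attribute-to-code scheme: each recognized aspect of a garment
# image (shape, attribute, emotion, TPO) maps to a short code, and their
# concatenation represents the whole image.
CODEBOOK = {
    "shape":     {"dress": "S01", "jacket": "S02"},
    "attribute": {"floral": "A10", "striped": "A11"},
    "emotion":   {"romantic": "E03", "casual": "E07"},
    "tpo":       {"office": "T02", "vacation": "T05"},
}

def encode_image(tags):
    """Turn recognized tags, e.g. {"shape": "dress", ...}, into one combined code."""
    return "-".join(CODEBOOK[k][tags[k]] for k in ("shape", "attribute", "emotion", "tpo"))
```

A fixed code vocabulary like this sidesteps the subjectivity of free-text tags: two images with the same recognized properties always receive the same code.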

Improving target recognition of active sonar multi-layer processor through deep learning of a small amounts of imbalanced data (소수 불균형 데이터의 심층학습을 통한 능동소나 다층처리기의 표적 인식성 개선)

  • Young-Woo Ryu;Jeong-Goo Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.2
    • /
    • pp.225-233
    • /
    • 2024
  • Active sonar transmits sound waves to detect covertly maneuvering underwater objects and detects the signals reflected back from the target. However, in addition to the target's echo, the received signal is mixed with seafloor and sea-surface reverberation, biological noise, and other noise, making target recognition difficult. Conventional techniques that detect signals above a threshold not only cause false detections or missed targets depending on the chosen threshold, but also require setting an appropriate threshold for each underwater environment. To overcome this, research has been conducted on automatic threshold calculation through techniques such as Constant False Alarm Rate (CFAR) and on the application of advanced tracking filters and association techniques, but these have limitations in environments where a significant number of detections occur. As deep learning has developed, efforts have been made to apply it to underwater target detection, but active sonar data for training a discriminator are very difficult to acquire: the data are scarce, and they contain only a very small number of targets against a relatively large number of non-targets, creating a class imbalance. In this paper, images of the energy distribution of the detection signal are used, and a classifier is trained in a way that accounts for this imbalance to distinguish targets from non-targets; the classifier is then added to the existing processing chain. The proposed technique minimized target misclassification and eliminated non-targets, making target recognition easier for active sonar operators, and its effectiveness was verified with sea experiment data obtained in the East Sea.
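The abstract does not specify how the imbalance is handled; inverse-frequency class weighting is one common option (the same heuristic scikit-learn calls "balanced"), sketched here:

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency class weights: rare classes (e.g. targets) get large
    weights, abundant classes (e.g. non-targets) get small ones, so a weighted
    loss pays roughly equal attention to each class."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))
```

During training, each sample's loss term would be multiplied by the weight of its class.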

Segmentation of Natural Fine Aggregates in Micro-CT Microstructures of Recycled Aggregates Using Unet-VGG16 (Unet-VGG16 모델을 활용한 순환골재 마이크로-CT 미세구조의 천연골재 분할)

  • Sung-Wook Hong;Deokgi Mun;Se-Yun Kim;Tong-Seok Han
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.2
    • /
    • pp.143-149
    • /
    • 2024
  • Segmentation of material phases through image analysis is essential for analyzing the microstructure of materials. Micro-CT images exhibit variations in grayscale values depending on the phases constituting the material, and phase segmentation is generally achieved by comparing these grayscale values. In the case of waste concrete used as recycled aggregate, it is challenging to distinguish hydrated cement paste from natural aggregates, as these components exhibit similar grayscale values in micro-CT images. In this study, we propose a method for automatically separating the aggregates in concrete in micro-CT images. Using the Unet-VGG16 deep-learning network, we introduce a technique for segmenting 2D aggregate images and stacking them to obtain 3D aggregate images, and image filtering is employed to separate individual aggregate particles from the selected 3D images. The performance of aggregate segmentation is validated through accuracy, precision, recall, and F1-score assessments.
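The four validation measures named above can be computed from a binary predicted mask and its ground truth; a small sketch:

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Accuracy, precision, recall, and F1 for a binary segmentation mask."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.sum(pred & truth)    # aggregate voxels correctly labeled
    tn = np.sum(~pred & ~truth)  # background voxels correctly labeled
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / pred.size
    return accuracy, precision, recall, f1
```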

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.408-415
    • /
    • 2022
  • In this paper, we propose a technique for reconstructing a 3D point cloud from 2D images using an efficient feature map extraction network. The originality of the proposed method is as follows. First, we use a new feature map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not reduce the image size midway through the deep learning network, so important information required for 3D point cloud reconstruction is not lost; the memory increase that the non-reduced image size would otherwise cause is avoided by reducing the number of channels and configuring the network to be efficiently shallow. Second, by preserving the high-resolution features of the 2D image, accuracy can be improved over the conventional technique: the feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Requiring not only the 2D image but also the shooting angle for learning means the dataset must contain detailed information, which makes the dataset difficult to construct. In this paper, the accuracy of the reconstructed 3D point cloud is instead increased by increasing the diversity of information through randomness, without additional shooting information. Evaluated objectively on the ShapeNet dataset using the same protocol as the comparison papers, the proposed method achieves a CD value of 5.87, an EMD value of 5.81, and 2.9G FLOPs. Lower CD and EMD values mean the reconstructed 3D point cloud is closer to the original, and fewer FLOPs mean the deep learning network requires less memory. The CD, EMD, and FLOPs evaluation results therefore show about a 27% improvement in memory and a 6.3% improvement in accuracy compared to the methods in other papers, demonstrating objective performance.
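The CD metric quoted above is the symmetric Chamfer Distance between point clouds; a minimal sketch (EMD requires solving an optimal assignment and is omitted):

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer Distance between point clouds P and Q: for each point,
    the squared distance to its nearest neighbor in the other cloud, averaged
    in both directions and summed."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```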