• Title/Summary/Keyword: Map recognition


Classification of Normal/Abnormal Conditions for Small Reciprocating Compressors using Wavelet Transform and Artificial Neural Network (웨이브렛변환과 인공신경망 기법을 이용한 소형 왕복동 압축기의 상태 분류)

  • Lim, Dong-Soo;An, Jin-Long;Yang, Bo-Suk;An, Byung-Ha
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.796-801
    • /
    • 2000
  • The monitoring and diagnosis of rotating machinery have received considerable attention for many years. The objectives are to classify the machinery condition and to find the cause of abnormal conditions. This paper describes a signal classification method for diagnosing rotating machinery using an artificial neural network and the wavelet transform. The wavelet transform is applied to the raw noise signals to extract salient features. Because the wavelet transform decomposes a raw time-waveform signal into components in both the time and frequency domains, more and better features can be obtained than from time-waveform analysis alone. In the training phase for classification, a self-organizing feature map (SOFM) and learning vector quantization (LVQ) are applied, and their accuracies are compared. The paper focuses on developing an advanced signal classifier to automate vibration signal pattern recognition. The method is verified on small reciprocating compressors for refrigerators, where normal and abnormal conditions are classified with high flexibility and reliability.
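
A minimal sketch of this kind of pipeline, assuming PyWavelets for the decomposition and a toy nearest-prototype LVQ rule (this is an illustration, not the authors' implementation; the wavelet family and decomposition level are assumptions):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Decompose a 1-D vibration/noise signal and return normalized per-band energies."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()  # scale-free features

class SimpleLVQ:
    """Toy LVQ classifier: one prototype per class, moved toward/away from samples."""
    def __init__(self, lr=0.05, epochs=50):
        self.lr, self.epochs = lr, epochs

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.protos_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        for _ in range(self.epochs):
            for x, label in zip(X, y):
                j = np.argmin(np.linalg.norm(self.protos_ - x, axis=1))
                sign = 1.0 if self.classes_[j] == label else -1.0
                self.protos_[j] += sign * self.lr * (x - self.protos_[j])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.protos_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```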


Monocular Camera based Real-Time Object Detection and Distance Estimation Using Deep Learning (딥러닝을 활용한 단안 카메라 기반 실시간 물체 검출 및 거리 추정)

  • Kim, Hyunwoo;Park, Sanghyun
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.4
    • /
    • pp.357-362
    • /
    • 2019
  • This paper proposes a model and training method that can detect objects and estimate their distances in real time from a monocular camera using deep learning. It uses the YOLOv2 model, which is often applied to autonomous vehicles and robots because of its fast image processing speed. The loss function was modified and retrained so that the YOLOv2 model can detect objects and estimate distances at the same time: alongside the classification loss, it includes terms for the bounding box values x, y, w, h and the distance value z, and the distance term is multiplied by a parameter to balance the learning. The model was trained with object locations and classes labeled from camera images and with distance data measured by lidar, so that it can estimate objects and their distances from a monocular camera even when the vehicle is going up or down a hill. Object detection and distance estimation performance were evaluated with mAP (mean Average Precision) and adjusted R-squared and compared with previous research. In addition, the FPS (frames per second) of the original YOLOv2 model was compared with that of the proposed model to measure speed.
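
A conceptual sketch of the loss modification described above, in PyTorch (the tensor layout and the balancing weight value are assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def box_distance_loss(pred, target, lambda_z=0.1):
    """
    Conceptual loss: standard box regression on (x, y, w, h) plus a weighted
    distance term on z. `pred` and `target` have shape (N, 5) holding
    [x, y, w, h, z] per matched box; lambda_z balances the two parts.
    """
    box_loss = F.mse_loss(pred[:, :4], target[:, :4])   # x, y, w, h
    dist_loss = F.mse_loss(pred[:, 4], target[:, 4])    # z (distance)
    return box_loss + lambda_z * dist_loss
```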

Blind Helper program development by using Wireless Camera and Window Phone (무선 카메라 모듈과 Window Phone을 이용한 시각장애인 보조 프로그램 개발)

  • Kim, Yoeng-Woon;Park, Jong-Ki;Yu, Jae-Hoon;Hwang, Young-Sup;Heo, Jeong
    • Annual Conference of KIPS
    • /
    • 2012.11a
    • /
    • pp.474-477
    • /
    • 2012
  • Modern society lacks welfare for the visually impaired. For example, even the facilities made for them, such as tactile guide blocks and the braille markings on banknotes, are damaged or ambiguous and hard to use in practice. We therefore started this project to address these inconveniences using a wireless camera and a Windows Phone. Guide Line Detection finds tactile guide blocks in the wireless camera image and announces the distance to the user by voice. Bill Recognition identifies banknotes and announces them by voice. The route guidance function lets guidance information be registered at specific points so that a visually impaired user can receive directions in real time from the registered information. Voice recognition, intended to improve accessibility for users who have difficulty operating devices, allows every function of the Windows Phone application to be used with only shaking gestures and voice. There was considerable trial and error due to the image quality of the wireless camera and the irregular GPS error of the Windows Phone, but we completed the project by substituting a webcam for the wireless camera and virtual GPS coordinates from the Bing Maps API for the erroneous GPS readings.
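
As a purely hypothetical illustration of the guide-block detection step (the original system ran on a Windows Phone, not OpenCV/Python; the color thresholds and minimum area below are assumptions), one could locate yellow tactile paving by color segmentation:

```python
import cv2
import numpy as np

LOWER_YELLOW = np.array([20, 100, 100])   # assumed HSV lower bound for yellow blocks
UPPER_YELLOW = np.array([35, 255, 255])   # assumed HSV upper bound

def find_guide_blocks(frame_bgr, min_area=500):
    """Return bounding boxes of yellow regions large enough to be guide blocks."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_YELLOW, UPPER_YELLOW)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```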

Deep Multi-task Network for Simultaneous Hazy Image Semantic Segmentation and Dehazing (안개영상의 의미론적 분할 및 안개제거를 위한 심층 멀티태스크 네트워크)

  • Song, Taeyong;Jang, Hyunsung;Ha, Namkoo;Yeon, Yoonmo;Kwon, Kuyong;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.9
    • /
    • pp.1000-1010
    • /
    • 2019
  • Image semantic segmentation and dehazing are key tasks in computer vision. In recent years, research on both tasks has achieved substantial performance improvements with the development of convolutional neural networks (CNNs). However, most previous work on semantic segmentation assumes images captured in clear weather and degrades on hazy images with low contrast and faded colors. Meanwhile, dehazing aims to recover a clear image from an observed hazy image, an ill-posed problem that can be alleviated with additional information about the scene. In this work, we propose a deep multi-task network for simultaneous semantic segmentation and dehazing. The proposed network takes a single hazy image as input and predicts both a dense semantic segmentation map and a clear image. The visual information refined during dehazing helps the recognition task of semantic segmentation; conversely, the semantic features obtained during segmentation provide cues about object color priors that help the dehazing process. Experimental results demonstrate the effectiveness of the proposed multi-task approach, showing improved performance compared to separate networks.
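
A minimal PyTorch sketch of the shared-encoder, two-head idea (layer sizes and the tiny encoder are assumptions; this is not the authors' architecture):

```python
import torch
import torch.nn as nn

class TinyMultiTaskNet(nn.Module):
    """Shared encoder with two heads: a segmentation map and a dehazed image."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)   # per-pixel class logits
        self.dehaze_head = nn.Conv2d(64, 3, 1)          # restored RGB image

    def forward(self, hazy):
        feats = self.encoder(hazy)
        return self.seg_head(feats), self.dehaze_head(feats)

# Joint training would combine a segmentation loss (e.g., cross-entropy) and a
# reconstruction loss (e.g., L1) on the two outputs.
```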

Effective machine learning-based haze removal technique using haze-related features (안개관련 특징을 이용한 효과적인 머신러닝 기반 안개제거 기법)

  • Lee, Ju-Hee;Kang, Bong-Soon
    • Journal of IKEEE
    • /
    • v.25 no.1
    • /
    • pp.83-87
    • /
    • 2021
  • In harsh environments such as fog or fine dust, a camera's ability to detect and recognize objects may decrease significantly. Fog removal algorithms are required to obtain important information accurately even in bad weather, and research has been conducted in various directions, such as computer vision based and data-driven fog removal. In those techniques, estimating the amount of fog from the input image's depth information is an important step. In this paper, a linear model is presented under the assumption that the image's dark channel, saturation × value, and sharpness characteristics are linearly related to depth information. The proposed haze removal method based on this linear model shows superior algorithm performance in quantitative evaluation.
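
A sketch of computing the three haze-related features named above and fitting a linear depth model (the patch size, exact feature definitions, and the use of OpenCV/scikit-learn are assumptions):

```python
import numpy as np
import cv2
from sklearn.linear_model import LinearRegression

def haze_features(img_bgr, patch=15):
    """Per-image haze-related features: dark channel mean, mean of saturation*value,
    and a sharpness measure (variance of the Laplacian). Definitions are assumed."""
    # Dark channel: per-pixel minimum over color channels, then a local minimum filter.
    dark = cv2.erode(img_bgr.min(axis=2), np.ones((patch, patch), np.uint8))
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    sat_val = hsv[..., 1] * hsv[..., 2]
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return np.array([dark.mean(), sat_val.mean(), sharpness])

def fit_depth_model(X, y):
    """X: (n_images, 3) feature matrix; y: (n_images,) reference depth/fog values."""
    return LinearRegression().fit(X, y)
```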

A Concept Map Study on Teacher Competency for ESD(Education for Sustainable Development) in Early Childhood (유아기 지속가능발전교육을 위한 교사역량에 대한 개념도 연구)

  • Lee, Hyobin;Kwon, Yeonhee;An, Jungeun
    • Korean Journal of Childcare and Education
    • /
    • v.17 no.6
    • /
    • pp.53-72
    • /
    • 2021
  • Objective: This study aimed to reveal early childhood teachers' perceptions of teacher competency for ESD using concept mapping and to demonstrate its importance. Methods: 16 early childhood teachers in charge of 3-5 year olds wrote statements, and the importance of the selected statements was then rated by 160 early childhood teachers in charge of 3-5 year olds. The selected statements were analyzed through multidimensional scaling and hierarchical cluster analysis. Results: Early childhood teachers perceived teacher competency for ESD in early childhood as a concept map with two dimensions and six clusters: (1) ethics for sustainable development, (2) willingness to participate in ESD, (3) development and operation of a sustainable development curriculum, (4) recognition and practice of environmental issues, (5) realization of value for sustainable development, and (6) practical thinking for ESD. Among the six clusters, 'ethics for sustainable development' was recognized as the most important, and among the statements, 'having an open mind to understand multiculturalism and the disabled' was considered relatively important. Conclusion/Implications: Based on these results, we discuss the importance of teacher competency for ESD in early childhood, the development of a teacher competency scale, and the preparation of teacher education plans for each competency.
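
A minimal sketch of the analysis step named in the methods (multidimensional scaling followed by hierarchical clustering of statement ratings); the input format and parameter choices are assumptions, not the authors' procedure:

```python
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

def concept_map(ratings, n_clusters=6):
    """
    ratings: (n_raters, n_statements) array of importance scores.
    Returns 2-D MDS coordinates and a cluster label for each statement.
    """
    dist = pdist(ratings.T, metric="euclidean")          # statement dissimilarities
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(squareform(dist))
    labels = fcluster(linkage(dist, method="ward"), t=n_clusters,
                      criterion="maxclust")
    return coords, labels
```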

Development of Smart Mobility System for Persons with Disabilities (장애인을 위한 스마트 모빌리티 시스템 개발)

  • Yu, Yeong Jun;Park, Se Eun;An, Tae Jun;Yang, Ji Ho;Lee, Myeong-Gyu;Lee, Chul-Hee
    • Journal of Drive and Control
    • /
    • v.19 no.4
    • /
    • pp.97-103
    • /
    • 2022
  • Low fertility rates and increased life expectancy are accelerating the aging of society, which is reflected in the gradually increasing proportion of vulnerable groups in the population. Demand for improved mobility among vulnerable groups such as the elderly or the disabled has greatly driven the growth of the electric-assisted mobility device market. However, such devices generally require a certain operating ability, which limits the range of people who can use them and increases the cost of learning. Autonomous driving technology therefore needs to be introduced so that a wider range of vulnerable groups can move easily and meet their work and leisure needs in different environments. This study uses a mini PC (Odyssey), a Velodyne VLP-16 lidar, electronic components, and a Linux-based ROS program to realize environment recognition, simultaneous localization and mapping, and navigation for an electric-powered mobility device for vulnerable groups. This autonomous mobility device is expected to be of great help to vulnerable users who cannot respond immediately in dangerous situations.
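
A minimal ROS 1 (rospy) sketch of the kind of node that could sit in such a stack, subscribing to lidar scans and issuing a stop command when an obstacle is too close; the topic names, threshold, and rospy itself are assumptions, not the authors' navigation software:

```python
#!/usr/bin/env python
# Minimal rospy safety-stop sketch (assumed topics/thresholds).
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

STOP_DISTANCE = 0.5   # meters; assumed safety threshold

class SafetyStop:
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
        rospy.Subscriber("/scan", LaserScan, self.on_scan)

    def on_scan(self, scan):
        # If any valid range is closer than the threshold, publish a zero velocity.
        if any(0.0 < r < STOP_DISTANCE for r in scan.ranges):
            self.cmd_pub.publish(Twist())  # all-zero Twist = stop

if __name__ == "__main__":
    rospy.init_node("safety_stop")
    SafetyStop()
    rospy.spin()
```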

Construction of Database for Deep Learning-based Occlusion Area Detection in the Virtual Environment (가상 환경에서의 딥러닝 기반 폐색영역 검출을 위한 데이터베이스 구축)

  • Kim, Kyeong Su;Lee, Jae In;Gwak, Seok Woo;Kang, Won Yul;Shin, Dae Young;Hwang, Sung Ho
    • Journal of Drive and Control
    • /
    • v.19 no.3
    • /
    • pp.9-15
    • /
    • 2022
  • This paper proposes a method for constructing and verifying datasets used in deep learning, to prevent safety accidents involving automated construction machinery or autonomous vehicles. Because open datasets for developing image recognition technology often fail to meet users' requirements, this study proposes an interface to virtual simulators that facilitates the creation of the training datasets users need. The pixel-level training image dataset was verified by creating scenarios that include various road types and objects in a virtual environment. Occlusion areas, where one object is covered by another, can interfere with accurate path determination when detecting objects in an image. We therefore construct a database for developing an occlusion area detection algorithm in a virtual environment, and we show that it can also serve as a deep learning dataset for calculating a grid map that enables path search considering occlusion areas. The custom datasets are built on an RDBMS.
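
Since the abstract says the custom datasets are built on an RDBMS, here is a minimal sqlite3 sketch of how such a catalog might be laid out; the table and field names are assumptions, not the authors' schema:

```python
import sqlite3

# Assumed schema for cataloging simulator frames, label masks, and occlusion masks.
SCHEMA = """
CREATE TABLE IF NOT EXISTS samples (
    id             INTEGER PRIMARY KEY,
    scenario       TEXT NOT NULL,   -- e.g., road type / weather scenario name
    image_path     TEXT NOT NULL,   -- rendered RGB frame
    label_path     TEXT NOT NULL,   -- pixel-level semantic label mask
    occlusion_path TEXT             -- mask of areas occluded by other objects
);
"""

def add_sample(db, scenario, image_path, label_path, occlusion_path=None):
    db.execute(
        "INSERT INTO samples (scenario, image_path, label_path, occlusion_path) "
        "VALUES (?, ?, ?, ?)",
        (scenario, image_path, label_path, occlusion_path),
    )
    db.commit()

db = sqlite3.connect("virtual_dataset.db")
db.executescript(SCHEMA)
```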

Autonomous Vehicles as Safety and Security Agents in Real-Life Environments

  • Al-Absi, Ahmed Abdulhakim
    • International journal of advanced smart convergence
    • /
    • v.11 no.2
    • /
    • pp.7-12
    • /
    • 2022
  • Safety and security are the topmost priorities in every environment. With the aid of Artificial Intelligence (AI), many objects are becoming more intelligent, conscious, and curious about their surroundings. Recent scientific breakthroughs in autonomous vehicle design and development, powered by AI, networks of sensors, and the rapid growth of the Internet of Things (IoT), could be utilized to maintain safety and security in our environments. AI based on deep learning architectures and models, such as Deep Neural Networks (DNNs), is being applied worldwide in automotive fields like computer vision, natural language processing, sensor fusion, object recognition, and autonomous driving, and these techniques are well known for their identification, detection, and tracking abilities. With sensors, cameras, GPS, RADAR, LIDAR, and on-board computers embedded in many of the autonomous vehicles being developed, these vehicles can accurately map their positions and their proximity to everything around them. In this paper, we explore in detail several ways in which the capabilities embedded in these autonomous vehicles, such as sensor fusion, computer vision and image processing, natural language processing, and activity awareness, could be tapped and utilized to safeguard our lives and environment.

Toward Practical Augmentation of Raman Spectra for Deep Learning Classification of Contamination in HDD

  • Seksan Laitrakun;Somrudee Deepaisarn;Sarun Gulyanon;Chayud Srisumarnk;Nattapol Chiewnawintawat;Angkoon Angkoonsawaengsuk;Pakorn Opaprakasit;Jirawan Jindakaew;Narisara Jaikaew
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.3
    • /
    • pp.208-215
    • /
    • 2023
  • Deep learning techniques provide powerful solutions to several pattern-recognition problems, including Raman spectral classification, but such networks require large amounts of labeled data to perform well. The need for labeled data, which is typically obtained in a laboratory, can potentially be alleviated by data augmentation. This study investigated various data augmentation techniques and applied multiple deep learning methods to Raman spectral classification. Raman spectra yield fingerprint-like information about chemical composition but are prone to noise when the particles of the material are small. Five augmentation models were investigated to build robust deep learning classifiers: weighted sums of spectral signals, imitated chemical backgrounds, extended multiplicative signal augmentation, and generated Gaussian- and Poisson-distributed noise. We compared the performance of nine state-of-the-art convolutional neural networks with all of the augmentation techniques. The LeNet5 models with background noise augmentation yielded the highest accuracy, 88.33%, when tested on real-world Raman spectral classification. A class activation map of the model was generated to provide a qualitative view of the results.
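
Minimal numpy sketches of two of the augmentation families named above (weighted sums of spectra and additive Gaussian/Poisson noise); the mixing weight, noise scale, and count scaling are assumptions, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_spectra(spec_a, spec_b, alpha=0.5):
    """Weighted sum of two spectra from the same class (mixing weight assumed)."""
    w = rng.uniform(0.0, alpha)
    return (1.0 - w) * spec_a + w * spec_b

def add_gaussian_noise(spec, sigma=0.01):
    """Additive Gaussian noise with sigma relative to the spectrum's peak intensity."""
    return spec + rng.normal(0.0, sigma * spec.max(), size=spec.shape)

def add_poisson_noise(spec, scale=1000.0):
    """Shot-noise-like augmentation: resample pseudo-counts from a Poisson distribution."""
    counts = np.clip(spec, 0, None) * scale
    return rng.poisson(counts) / scale
```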