• Title/Summary/Keyword: Image recognition technology


Design and Implementation of Vehicle Control Network Using WiFi Network System (WiFi 네트워크 시스템을 활용한 차량 관제용 네트워크의 설계 및 구현)

  • Yu, Hwan-Shin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.3
    • /
    • pp.632-637
    • /
    • 2019
  • Research on autonomous vehicle driving has recently become very active, with a trend toward assisting safe driving and improving driver convenience. Autonomous vehicles must combine artificial intelligence, image recognition capability, and Internet communication between objects. Because mobile telecommunication networks have processing limitations, an easily expandable Wi-Fi network can be implemented and scaled more readily. We propose a wireless design method for constructing such a vehicle control network, including the placement of APs and a software configuration method that minimizes loss in data transmission and reception for mobile terminals. The proposed network design can dramatically increase the communication performance of a moving vehicle. We also verify, through experiments with various moving terminal devices, the packet structures for GPS, video, voice, and data communication usable in the vehicle. This wireless design technology can be extended to various general-purpose wireless networks such as 2.4 GHz, 5 GHz, and 10 GHz Wi-Fi, and can link a wireless intelligent road network with autonomous driving.

Interactive ADAS development and verification framework based on 3D car simulator (3D 자동차 시뮬레이터 기반 상호작용형 ADAS 개발 및 검증 프레임워크)

  • Cho, Deun-Sol;Jung, Sei-Youl;Kim, Hyeong-Su;Lee, Seung-gi;Kim, Won-Tae
    • Journal of IKEEE
    • /
    • v.22 no.4
    • /
    • pp.970-977
    • /
    • 2018
  • Autonomous vehicles are based on advanced driver assistance systems (ADAS), which consist of sensors that collect information about the surrounding environment and control modules that act on the measured data. As interest in autonomous driving technology grows, an accessible development framework for ADAS beginners and learners is needed. However, existing development and verification methods rely on high-performance vehicle simulators and suffer drawbacks such as complex verification procedures and high cost. Moreover, most schemes do not provide the sensing data required by the ADAS directly from the simulator, which limits verification reliability. In this paper, we present an interactive ADAS development and verification framework using a 3D vehicle simulator that overcomes these problems. An ADAS with image-recognition-based artificial intelligence was implemented as a virtual sensor in the 3D car simulator, and autonomous driving was verified in realistic scenarios.

Interactive Mobile Augmented Reality System using Muscle Sensor and Image-based Localization System through Client-Server Communication (서버/클라이언트 통신을 통한 영상 기반 위치 인식 및 근육 센서를 이용한 상호작용 모바일 증강현실 시스템)

  • Lee, Sungjin;Baik, Davin;Choi, Sangyeon;Hwang, Sung Soo
    • Journal of the HCI Society of Korea
    • /
    • v.13 no.4
    • /
    • pp.15-23
    • /
    • 2018
  • Many games are played through controller operations, such as mouse and keyboard, rather than through the user's physical movement, which leaves users lacking exercise. This study addresses that limitation of traditional game systems by developing a motion-based system that provides users with a more realistic experience. The system recognizes the user's position in a given space and provides a mobile augmented reality environment in which the user interacts with virtual game characters. It uses augmented reality technology to make users feel as if the virtual characters exist in real space, and it designs a mobile game system with armband controllers for interacting with those characters.


A Study on the Development of a Tool to Support Classification of Strategic Items Using Deep Learning (딥러닝을 활용한 전략물자 판정 지원도구 개발에 대한 연구)

  • Cho, Jae-Young;Yoon, Ji-Won
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.6
    • /
    • pp.967-973
    • /
    • 2020
  • As the implementation of export controls spreads, the importance of classifying strategic items is increasing; however, Korean export companies new to export controls often do not understand the concept of strategic items, and classification is difficult because of the various criteria for controlling them. In this paper, we propose a method that lowers the barrier to entry for users who are new to export controls or to the classification of strategic items. If users can confirm a classification result simply by providing a manual or catalog for the item in question, the methods and procedures for classifying strategic items become far more convenient and approachable. To achieve this, we apply deep learning techniques studied in image recognition and classification together with OCR (optical character recognition) technology, and through the research and development of this support tool we provide companies with information that helps them classify strategic items.

Presenting Practical Approaches for AI-specialized Fields in Gwangju Metro-city (광주광역시의 AI 특화분야를 위한 실용적인 접근 사례 제시)

  • Cha, ByungRae;Cha, YoonSeok;Park, Sun;Shin, Byeong-Chun;Kim, JongWon
    • Smart Media Journal
    • /
    • v.10 no.1
    • /
    • pp.55-62
    • /
    • 2021
  • We applied semi-supervised learning, transfer learning, and federated learning as examples of machine-learning use cases for the three major industries of Gwangju Metro-city (the automobile, energy, and AI/healthcare industries), and established an ML strategy for AI services in these industries. Based on this strategy, practical approaches are suggested: a semi-supervised learning approach for automobile image recognition, a transfer learning approach for diabetic retinopathy detection in the healthcare field, and a federated learning approach for predicting electricity demand. These approaches were tested on hardware including the Raspberry Pi single-board computer, the Jetson Nano, and an Intel i7 machine, and their validity was verified.
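
The semi-supervised approach mentioned for automobile image recognition is not detailed in the abstract. As one common instance of the idea, self-training propagates labels from a small labeled set to confidently predicted unlabeled examples. The sketch below is a toy illustration only (NumPy, a nearest-centroid classifier, and a distance-ratio confidence threshold are all assumptions, not the paper's method):

```python
import numpy as np

def self_training(X_lab, y_lab, X_unl, rounds=3, margin=0.5):
    """Toy self-training loop: pseudo-label unlabeled points whose
    nearest-centroid prediction is confident, then retrain (recompute
    centroids) on the enlarged labeled set."""
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        if len(X_unl) == 0:
            break
        classes = np.unique(y)
        cents = np.stack([X[y == c].mean(axis=0) for c in classes])
        # distance from every unlabeled point to every class centroid
        d = np.linalg.norm(X_unl[:, None, :] - cents[None, :, :], axis=-1)
        pred = classes[d.argmin(axis=1)]
        sd = np.sort(d, axis=1)
        conf = sd[:, 0] / (sd[:, 1] + 1e-9)   # small ratio = confident
        keep = conf < margin
        if not keep.any():
            break
        X = np.vstack([X, X_unl[keep]])       # absorb confident points
        y = np.concatenate([y, pred[keep]])
        X_unl = X_unl[~keep]
    return X, y
```

In practice the nearest-centroid step would be replaced by the image classifier being trained, but the label-propagation loop has the same shape.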

Efficient Thread Allocation Method of Convolutional Neural Network based on GPGPU (GPGPU 기반 Convolutional Neural Network의 효율적인 스레드 할당 기법)

  • Kim, Mincheol;Lee, Kwangyeob
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.10
    • /
    • pp.935-943
    • /
    • 2017
  • CNNs (convolutional neural networks), which are used for image classification and speech recognition among neural networks trained on labeled data, have been continuously developed into high-performance architectures, but they are difficult to deploy on embedded systems with limited resources. We therefore use GPGPU (general-purpose computing on graphics processing units) with pre-trained weights, yet limitations remain. Since CNN inference consists of simple, repetitive operations, computation speed varies greatly with how threads are allocated and utilized on a SIMT (single instruction, multiple threads) based GPGPU. To address this, threads that would otherwise sit idle during convolution and pooling operations are identified, and the remaining threads are reassigned to subsequent feature-map and kernel computations, which increases the operation speed.

A Method of Extracting Features of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Sanyeon Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.191-199
    • /
    • 2023
  • In this paper, we propose a method to extract the features of five sensor-only facilities, built as infrastructure for autonomous cooperative driving, from point cloud data acquired by LiDAR. Because the image sensors installed in autonomous vehicles produce inconsistent data depending on weather conditions and camera characteristics, a LiDAR sensor was applied to replace them. In addition, high-intensity reflectors were designed and attached to each facility to make it easier to distinguish from other existing facilities with LiDAR. From the five developed sensor-only facilities and the point cloud data acquired by the data acquisition system, feature points were extracted based on the average reflective intensity of the high-intensity reflective sheeting attached to each facility, clustered by the DBSCAN method, and converted to two-dimensional coordinates by projection. The features of a facility at each distance consist of three-dimensional point coordinates, two-dimensional projected coordinates, and reflection intensity, and will be used as training data for a facility recognition model to be developed in the future.
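
The pipeline in this abstract (filter points by reflective intensity, cluster with DBSCAN, project to 2-D) can be sketched with a minimal DBSCAN implementation. This is a generic illustration in pure NumPy; the intensity threshold and the eps/min_pts values are placeholders, not the paper's parameters:

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster id per point, -1 for noise."""
    n = len(points)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    # pairwise distance matrix (fine for small filtered clouds)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = list(np.nonzero(dist[i] <= eps)[0])
        if len(seeds) < min_pts:
            continue                      # noise unless claimed later
        labels[i] = cluster
        while seeds:                      # expand the cluster
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                nb = np.nonzero(dist[j] <= eps)[0]
                if len(nb) >= min_pts:    # j is also a core point
                    seeds.extend(nb)
        cluster += 1
    return labels

# Hypothetical cloud rows: (x, y, z, intensity); keep high-intensity returns.
cloud = np.array([[0.0, 0.0, 1.0, 230], [0.2, 0.1, 1.0, 240],
                  [0.1, 0.2, 1.1, 235], [8.0, 0.0, 1.0, 245],
                  [8.2, 0.1, 1.0, 250], [8.1, 0.2, 1.1, 238],
                  [4.0, 4.0, 0.0, 40]])          # low-intensity clutter
hi = cloud[cloud[:, 3] >= 200]                   # reflectivity filter
labels = dbscan(hi[:, :3], eps=0.5, min_pts=3)   # cluster in 3-D
xy = hi[:, :2]                                   # project onto ground plane
```

The two reflector groups come out as separate clusters and the clutter point is removed by the intensity filter before clustering.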

Gesture Control Gaming for Motoric Post-Stroke Rehabilitation

  • Andi Bese Firdausiah Mansur
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.10
    • /
    • pp.37-43
    • /
    • 2023
  • Hospital conditions, scheduling, and patient restrictions can stand in the way of an optimal therapy session: a crowded hospital may mean tight schedules and shorter therapy periods, leaving post-stroke patients, who need regular treatment to recover their nervous system, in a dilemma. In this work, we propose a simple, in-house serious game system that can be used for physical therapy. A Kinect camera captures a depth image stream of the human skeleton, after which the user controls the game with hand gestures; voice recognition is deployed for ease of play. Users must complete the given challenges to obtain a greater benefit from the therapy system: subjects use their upper limbs and hands to capture 3D objects appearing at different speeds and positions, with speed and location becoming faster and more random as the challenge grows, and each captured object raising the score. The scores are then evaluated for correlation with therapy progress. Users were delighted with the system and eager to use it for daily exercise. The experimental studies compare score and difficulty as characteristics of user and game: users tend to adapt quickly to the easy and medium levels, while the high level requires better focus and proper hand-eye synchronization to capture the 3D objects. Statistical analysis of the usability test at a confidence level of α = 0.05 shows that the proposed game is accessible even without specialized training, and it can serve not only for therapy but also for fitness, since it can be used for body exercise. The experimental results are very satisfying: most users enjoyed the system and familiarized themselves with it quickly, and the evaluation study demonstrates user satisfaction during testing. Future work on the proposed serious game may involve haptic devices to stimulate physical sensation.

Hard Example Generation by Novel View Synthesis for 3-D Pose Estimation (3차원 자세 추정 기법의 성능 향상을 위한 임의 시점 합성 기반의 고난도 예제 생성)

  • Minji Kim;Sungchan Kim
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.19 no.1
    • /
    • pp.9-17
    • /
    • 2024
  • It is widely recognized that, for 3D human pose estimation (HPE), dataset acquisition is expensive and the effectiveness of augmentation techniques from conventional visual recognition tasks is limited. We address these difficulties with a simple but effective method that augments input images in terms of viewpoint when training a 3D HPE model. Our intuition is that meaningful variants of the input images can be obtained by viewing a human instance from an arbitrary viewpoint different from that of the original image. The core idea is to synthesize new images that exhibit self-occlusion, and are thus difficult to predict, at viewpoints different from the original example's, even with the same pose. We incorporate this idea into the training procedure of the 3D HPE model as an augmentation stage for the input samples. We show that the strategy for augmenting the synthesized examples must be carefully designed in terms of how often the augmentation is performed and how viewpoints are selected for synthesis. To this end, we propose a new metric that measures the prediction difficulty of input images for 3D HPE as the distance between corresponding keypoints on the two sides of the human body. Extensive exploration of the space of augmentation probabilities and of example selection according to the proposed distance metric leads to a performance gain of up to 6.2% on Human3.6M, the well-known pose estimation dataset.
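
The abstract describes the difficulty metric only as a distance between corresponding keypoints on the two sides of the body. A minimal interpretation might look like the following sketch; the COCO-style left/right index pairs are an assumption, not the paper's exact definition:

```python
import numpy as np

# Assumed left/right keypoint index pairs (COCO ordering: shoulders,
# elbows, wrists, hips, knees, ankles) -- illustrative, not from the paper.
LR_PAIRS = [(5, 6), (7, 8), (9, 10), (11, 12), (13, 14), (15, 16)]

def occlusion_difficulty(keypoints, pairs=LR_PAIRS):
    """Mean 2-D distance between corresponding left/right keypoints.
    Small values mean the two sides of the body overlap in the image,
    i.e. likely self-occlusion, so the view is a 'hard' example."""
    d = [np.linalg.norm(keypoints[l] - keypoints[r]) for l, r in pairs]
    return float(np.mean(d))
```

Under this reading, a frontal view (left and right joints far apart in the image) scores as easy, while a profile view (the two sides nearly coincide) scores as hard, which is the property the augmentation strategy selects for.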

Principal component analysis in C[11]-PIB imaging (주성분분석을 이용한 C[11]-PIB imaging 영상분석)

  • Kim, Nambeom;Shin, Gwi Soon;Ahn, Sung Min
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.19 no.1
    • /
    • pp.12-16
    • /
    • 2015
  • Purpose: Principal component analysis (PCA) is a multivariate analysis technique often used in neuroimage analysis to describe a high-dimensional correlation structure in a lower-dimensional space. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of correlated variables into a set of values of linearly uncorrelated variables called principal components. In this study, to investigate the usefulness of PCA in brain PET image analysis, we analyzed C[11]-PIB PET images as a representative case. Materials and Methods: Nineteen subjects were included in this study (normal = 9, AD/MCI = 10). C[11]-PIB PET scans were acquired for 20 min starting 40 min after intravenous injection of 9.6 MBq/kg of C[11]-PIB. All emission recordings were acquired with the Biograph 6 Hi-Rez (Siemens-CTI, Knoxville, TN) in three-dimensional acquisition mode. A transmission map for attenuation correction was acquired using CT scans (130 kVp, 240 mA). Standardized uptake values (SUVs) of C[11]-PIB were calculated from the PET/CT data. In the normal subjects, 3T MRI T1-weighted images were obtained to create a C[11]-PIB template. Spatial normalization and smoothing were conducted as pre-processing for PCA using SPM8, and PCA was conducted using Matlab 2012b. Results: Through the PCA, we obtained linearly uncorrelated principal component images. These images can simplify the variation of the whole set of C[11]-PIB images into several principal components, including the variation of the neocortex and white matter and the variation of deep brain structures such as the pons. Conclusion: PCA is useful for analyzing and extracting the main patterns of C[11]-PIB images. As a multivariate analysis method, PCA may also be useful for pattern recognition in neuroimages such as FDG-PET or fMRI as well as C[11]-PIB images.
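
The PCA described here, decomposing a set of spatially normalized images into orthogonal component images with per-subject scores, can be sketched with an SVD on the subject-by-voxel data matrix. This is a generic NumPy illustration; the shapes and names are placeholders, not the study's SPM8/Matlab pipeline:

```python
import numpy as np

def pca_images(X, n_components=2):
    """X: (n_subjects, n_voxels) matrix of flattened images.
    Returns the principal component 'images', the per-subject scores,
    and the fraction of total variance each component explains."""
    Xc = X - X.mean(axis=0)                        # center each voxel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:n_components]                      # component images (rows)
    scores = U[:, :n_components] * S[:n_components]  # subject loadings
    explained = (S ** 2) / np.sum(S ** 2)          # variance fractions
    return comps, scores, explained[:n_components]
```

Reshaping each row of `comps` back to the image grid yields the component images discussed in the Results, and plotting the scores separates subjects along the dominant patterns of variation.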
