• Title/Summary/Keyword: Camera module

A Development of Active Monitoring and Approach Alarm System for Marine Buoy Protection and Ship Accident Prevention based on Trail Cameras and AIS (해상 부이 보호 및 선박 사고 예방을 위한 트레일 카메라-AIS 연계형 능동감시 및 접근경보 시스템 개발)

  • Hwang, Hun-Gyu;Kim, Bae-Sung;Kim, Hyen-Woo;Gang, Yong-Soo;Kim, Dae-Han
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.7
    • /
    • pp.1021-1029
    • /
    • 2018
  • Marine buoys serve many roles: marking navigation routes and hazards, monitoring weather and the environment, acting as strategic military elements, etc. When a marine buoy is damaged, its recovery or replacement consumes considerable cost and time because of the severe marine environment, and the damage raises the risk of secondary accidents. In this paper, we developed an active monitoring and approach alarm system using trail cameras and AIS for the protection of marine buoys. To do this, we analyzed existing research and similar systems, extracted requirements for enhancement, and designed a system architecture that applies the enhanced elements. The main enhancements are: integration of AIS and trail cameras, a phased alarm technique keyed to approaching ships, a selectable communication module, image processing of ships for alarm generation, and the use of thermal cameras. We then implemented the system on the designed architecture and verified its effectiveness through laboratory- and field-level tests.
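
The phased-alarm step described above reduces to mapping ship-to-buoy distance onto discrete alarm levels. A minimal sketch, assuming three illustrative levels and distance thresholds (the paper does not publish its values) and treating an AIS position report as a lat/lon pair:

```python
import math

# Hypothetical alarm phases and distance thresholds in metres (assumptions).
ALARM_LEVELS = [(500.0, "WARNING"), (200.0, "CAUTION"), (50.0, "DANGER")]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 positions."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def phased_alarm(buoy_pos, ais_report):
    """Return the alarm phase for one AIS position report, or None if clear."""
    d = haversine_m(buoy_pos[0], buoy_pos[1], ais_report["lat"], ais_report["lon"])
    level = None
    for threshold, name in ALARM_LEVELS:
        if d <= threshold:
            level = name  # tighter thresholds override looser ones
    return level

# A ship about 180 m from the buoy falls into the CAUTION phase.
print(phased_alarm((35.10, 129.04), {"lat": 35.1016, "lon": 129.0405}))
```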

Two Design Techniques of Embedded Systems Based on Ad-Hoc Network for Wireless Image Observation (애드 혹 네트워크 기반의 무선 영상 관측용 임베디드 시스템의 두 가지 설계 기법들)

  • Lee, Yong Up;Song, Chang-Yeoung;Park, Jeong-Uk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.5
    • /
    • pp.271-279
    • /
    • 2014
  • In this paper, two design techniques for an embedded system that provides wireless image observation over a temporary ad-hoc network are proposed and developed. The first technique targets a near real-time, short-term wireless observation application: a specific remote monitoring node with a built-in image processing function transmits a 160×128 image wirelessly at a maximum rate of 1 fps (frame per second). The second technique targets a general long-term wireless observation application: the system consists of a main node, a remote monitoring node, and a system controller with a built-in image processing function, and supports a wireless image transmission rate of 1/3 fps. The proposed system uses a wireless ad-hoc network, widely accepted for short-range, low-power, bidirectional digital communication; the hardware consists of general-purpose development modules, a small digital camera, and a PC; and the embedded software, built upon the Zigbee stack, and the user interface software were developed and tested on the implemented modules. A wireless environment analysis and performance results are presented.
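
Transmitting even a 160×128 frame over a low-rate ad-hoc link means splitting it across many small radio packets, which is the bottleneck behind the 1 fps and 1/3 fps figures above. A minimal packetizing sketch, with an assumed 80-byte payload and an illustrative header layout:

```python
import struct

PAYLOAD = 80  # bytes of image data per radio packet (an assumption)

def packetize(frame_id: int, image: bytes):
    """Split raw frame bytes into (header + chunk) radio packets."""
    total = (len(image) + PAYLOAD - 1) // PAYLOAD
    for seq in range(total):
        chunk = image[seq * PAYLOAD:(seq + 1) * PAYLOAD]
        # Illustrative header: frame id, sequence no., packet count (uint16).
        yield struct.pack(">HHH", frame_id, seq, total) + chunk

frame = bytes(160 * 128)            # one dummy 8-bit 160x128 frame
packets = list(packetize(1, frame))
print(len(packets), "packets")      # 20480 / 80 = 256 packets per frame
```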

A study on development of RGB color variable optical ID module considering smart factory environment (스마트 팩토리 환경을 고려한 RGB 컬러 가변형 광 ID 모듈개발 연구)

  • Lee, Min-Ho;Timur, Khudaybergenov;Lee, Beom-Hee;Cho, Ju-Phil;Cha, Jae-Sang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.11 no.5
    • /
    • pp.623-629
    • /
    • 2018
  • A smart factory is an automated production system created by the fusion of ICT and manufacturing. As a base technology for realizing smart factories, interest in low-power, environmentally friendly LED lighting systems is increasing, and research on so-called optical ID application technologies, such as LED-based communication and position recognition, is actively underway. In this paper, we propose a system that can reliably identify logistics locations and additional information without being affected by electromagnetic interference from high-voltage and high-current equipment and generators in the plant. Through basic experiments, we confirmed color ID recognition rates ranging from 98.8% down to 93.8% across the eight color variations at short distance.
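
Decoding a color ID ultimately comes down to matching a camera-measured RGB sample against a reference palette. A minimal sketch, assuming an illustrative eight-color palette rather than the paper's calibrated values:

```python
# Illustrative eight-color palette; the paper's calibrated RGB values are
# not published, so these are assumptions.
COLOR_IDS = {
    0: (255, 0, 0),   1: (0, 255, 0),   2: (0, 0, 255),     3: (255, 255, 0),
    4: (255, 0, 255), 5: (0, 255, 255), 6: (255, 255, 255), 7: (255, 128, 0),
}

def decode_color_id(sample):
    """Return the ID whose reference color is nearest in RGB space."""
    def dist2(ref):
        return sum((a - b) ** 2 for a, b in zip(sample, ref))
    return min(COLOR_IDS, key=lambda k: dist2(COLOR_IDS[k]))

print(decode_color_id((240, 20, 30)))  # -> 0 (nearest to red)
```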

Gesture Spotting by Web-Camera in Arbitrary Two Positions and Fuzzy Garbage Model (임의 두 지점의 웹 카메라와 퍼지 가비지 모델을 이용한 사용자의 의미 있는 동작 검출)

  • Yang, Seung-Eun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.2
    • /
    • pp.127-136
    • /
    • 2012
  • Much research on vision-based hand gesture recognition has been conducted to let users operate various electronic devices more easily. Accurate hand gesture recognition requires 3D position calculation and the classification of meaningful gestures from similar ones. This paper describes a simple and cost-effective method for 3D position calculation and gesture spotting (the task of recognizing a meaningful gesture among similar meaningless ones). The 3D position is obtained by calculating the relative position of the two cameras through a pan/tilt module and a marker, regardless of where the cameras are placed. A fuzzy garbage model is proposed to provide a variable reference value for deciding whether a user gesture is a command gesture. The reference is obtained from a fuzzy command gesture model and the fuzzy garbage model, which return scores expressing the degree of membership in the command gesture and garbage gesture classes, respectively. A two-stage user adaptation scheme is proposed to enhance performance: off-line (batch) adaptation for inter-personal differences and on-line (incremental) adaptation for intra-personal differences. Experiments were conducted with 5 different users. The command recognition rate exceeds 95% when only one command-like meaningless gesture exists and 85% when the command is mixed with many other similar gestures.
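
The spotting decision above compares the command-model score against the garbage-model score, which acts as a variable rejection threshold. A minimal sketch, with hypothetical scoring callables standing in for the fuzzy models:

```python
# Minimal sketch of the spotting decision: a gesture is accepted as a command
# only if its best command-model score beats the garbage-model score, which
# serves as a variable (input-dependent) rejection threshold. The scoring
# callables below are hypothetical stand-ins for the fuzzy models.
def spot_gesture(features, command_models, garbage_model):
    """Return the best command label, or None if the gesture is garbage."""
    best_label, best_score = None, float("-inf")
    for label, model in command_models.items():
        score = model(features)  # degree of membership in this command class
        if score > best_score:
            best_label, best_score = label, score
    # Variable reference value: accept only if the command beats garbage.
    return best_label if best_score > garbage_model(features) else None

# Toy usage with constant scorers: "wave" (0.7) beats garbage (0.4).
print(spot_gesture(None, {"wave": lambda f: 0.7, "point": lambda f: 0.2},
                   lambda f: 0.4))
```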

Development of LiDAR-Based MRM Algorithm for LKS System (LKS 시스템을 위한 라이다 기반 MRM 알고리즘 개발)

  • Son, Weon Il;Oh, Tae Young;Park, Kihong
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.20 no.1
    • /
    • pp.174-192
    • /
    • 2021
  • The LiDAR sensor, which provides higher perception performance than cameras and radar, has been difficult to apply to ADAS and autonomous driving because of its high price. On the other hand, as its price drops rapidly, expectations are rising that existing autonomous driving functions can be improved by taking advantage of LiDAR. In level 3 autonomous vehicles, when a dangerous situation arises in the perception module due to a sensor defect or sensor limitation, the driver must take over control of the vehicle for manual driving. If the driver does not respond to the request, the system must automatically intervene and execute a minimum risk maneuver (MRM) to keep the risk within a tolerable level. Against this background, a LiDAR-based LKS MRM algorithm was developed in this study for cases where normal LKS operation is impossible due to trouble in the perception system. From point cloud data collected by LiDAR, the algorithm generates the trajectory of the vehicle in front through object clustering and converts it into target waypoints for the ego vehicle. Hence, if the camera-based LKS is not operating normally, LiDAR-based path tracking control is performed as the MRM. The HAZOP method was used to identify risk sources in the LKS perception system, and based on this, test scenarios were derived and used in a simulation-based validation process. The simulation results indicate that the proposed LiDAR-based LKS MRM algorithm prevents lane departure in dangerous situations caused by various faults in the LKS perception system and could prevent possible traffic accidents.
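
The path-generation step can be summarized as: cluster each scan, pick the in-lane cluster ahead of the ego vehicle, and take its centroid as a waypoint. A minimal sketch, with assumed DBSCAN parameters and lane half-width; the paper's clustering details may differ:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def front_vehicle_waypoint(points_xy: np.ndarray):
    """points_xy: (N, 2) ego-frame points (x forward, y left), in metres."""
    labels = DBSCAN(eps=0.7, min_samples=8).fit_predict(points_xy)
    best, best_x = None, np.inf
    for lab in set(labels) - {-1}:                  # -1 marks noise points
        c = points_xy[labels == lab].mean(axis=0)   # cluster centroid
        # Keep the nearest cluster that is ahead of us and roughly in-lane.
        if 0.0 < c[0] < best_x and abs(c[1]) < 2.0:
            best, best_x = c, c[0]
    return best  # None if no in-lane cluster ahead was found

# One waypoint per scan: waypoints = [front_vehicle_waypoint(s) for s in scans]
```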

Change Attention-based Vehicle Scratch Detection System (변화 주목 기반 차량 흠집 탐지 시스템)

  • Lee, EunSeong;Lee, DongJun;Park, GunHee;Lee, Woo-Ju;Sim, Donggyu;Oh, Seoung-Jun
    • Journal of Broadcast Engineering
    • /
    • v.27 no.2
    • /
    • pp.228-239
    • /
    • 2022
  • In this paper, we propose an unmanned deep learning model for vehicle scratch detection in car sharing services. Conventional scratch detection models consist of two steps: 1) a deep learning module that detects scratches in images taken before and after rental, and 2) a manual matching process that finds newly generated scratches. To build a fully automatic scratch detection model, we propose a one-step unmanned scratch detection deep learning model. The proposed model is implemented by applying transfer learning and fine-tuning to a deep learning model that detects changes in satellite images. In the proposed car sharing service, specular reflection greatly affects scratch detection performance, since the brightness of the gloss-treated automobile surface is anisotropic and non-expert users take pictures with ordinary cameras. To reduce detection errors caused by specular reflection, we propose a preprocessing step that removes the specular reflection component. For data taken by mobile phone cameras, the proposed system provides high matching performance both subjectively and objectively. The scores for change detection metrics, namely precision, recall, F1, and kappa, are 67.90%, 74.56%, 71.08%, and 70.18%, respectively.
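
One common way to implement such a specular-suppression preprocessing step is to mask near-saturated, low-saturation pixels and inpaint them; the paper's actual procedure may differ. A minimal sketch with assumed thresholds:

```python
import cv2
import numpy as np

def remove_specular(bgr: np.ndarray) -> np.ndarray:
    """Mask bright, colorless pixels and inpaint them (thresholds assumed)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    _, s, v = cv2.split(hsv)
    # Specular highlights are near-saturated in value and low in saturation.
    mask = ((v > 230) & (s < 40)).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))
    return cv2.inpaint(bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```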

Attention based Feature-Fusion Network for 3D Object Detection (3차원 객체 탐지를 위한 어텐션 기반 특징 융합 네트워크)

  • Sang-Hyun Ryoo;Dae-Yeol Kang;Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.2
    • /
    • pp.190-196
    • /
    • 2023
  • Recently, following the development of LiDAR technology, which can measure the distance to an object, interest in LiDAR-based 3D object detection networks has grown. Previous networks produce inaccurate localization results due to spatial information loss during voxelization and downsampling. In this study, we propose an attention-based fusion method and a camera-LiDAR fusion system to acquire high-level features and high positional accuracy. First, by introducing an attention mechanism into Voxel-RCNN, a grid-based 3D object detection network, the multi-scale sparse 3D convolution features are effectively fused to improve 3D object detection performance. Additionally, we propose a late-fusion mechanism that fuses the outputs of the 3D object detection network and a 2D object detection network to remove false positives. Comparative experiments with existing algorithms were performed on the KITTI dataset, which is widely used in the field of autonomous driving. The proposed method improved performance in both 2D object detection on BEV and 3D object detection. In particular, precision improved by about 0.54% for the car moderate class compared to Voxel-RCNN.
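
The late-fusion rule amounts to keeping a 3D detection only if its image-plane projection is supported by a 2D detection. A minimal sketch, assuming the 3D boxes have already been projected to 2D rectangles via the camera matrix, with an assumed IoU threshold:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def late_fuse(projected_3d_boxes, boxes_2d, thresh=0.5):
    """Keep only 3D detections whose projection overlaps a 2D detection."""
    return [b3 for b3 in projected_3d_boxes
            if any(iou(b3, b2) >= thresh for b2 in boxes_2d)]
```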

IGRINS Design and Performance Report

  • Park, Chan;Jaffe, Daniel T.;Yuk, In-Soo;Chun, Moo-Young;Pak, Soojong;Kim, Kang-Min;Pavel, Michael;Lee, Hanshin;Oh, Heeyoung;Jeong, Ueejeong;Sim, Chae Kyung;Lee, Hye-In;Le, Huynh Anh Nguyen;Strubhar, Joseph;Gully-Santiago, Michael;Oh, Jae Sok;Cha, Sang-Mok;Moon, Bongkon;Park, Kwijong;Brooks, Cynthia;Ko, Kyeongyeon;Han, Jeong-Yeol;Nah, Jakyuong;Hill, Peter C.;Lee, Sungho;Barnes, Stuart;Yu, Young Sam;Kaplan, Kyle;Mace, Gregory;Kim, Hwihyun;Lee, Jae-Joon;Hwang, Narae;Kang, Wonseok;Park, Byeong-Gon
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.39 no.2
    • /
    • pp.90-90
    • /
    • 2014
  • The Immersion Grating Infrared Spectrometer (IGRINS) is the first astronomical spectrograph that uses a silicon immersion grating as its dispersive element. IGRINS fully covers the H and K band atmospheric transmission windows in a single exposure. It is a compact high-resolution cross-dispersed spectrometer with a resolving power R of 40,000. An individual volume phase holographic grating serves as the secondary dispersing element for each of the H and K spectrograph arms. On the 2.7 m Harlan J. Smith telescope at the McDonald Observatory, the slit size is 1″ × 15″. IGRINS has a plate scale of 0.27″ pixel⁻¹ on a 2048 × 2048 pixel Teledyne Scientific & Imaging HAWAII-2RG detector with a SIDECAR ASIC cryogenic controller. The instrument includes four subsystems: a calibration unit, an input relay optics module, a slit-viewing camera, and nearly identical H and K spectrograph modules. The use of a silicon immersion grating and a compact white-pupil design keeps the spectrograph's collimated beam size at 25 mm, which permits the entire cryogenic system to be contained in a moderately sized (0.96 m × 0.6 m × 0.38 m) rectangular Dewar. The fabrication and assembly of the optical and mechanical components were completed in 2013. From January to July 2014, we completed the system optical alignment and carried out commissioning observations on three runs to improve the efficiency of the instrument software and hardware. We describe the major design characteristics of the instrument, including the system requirements and the technical strategy to meet them. We also present the instrument performance test results derived from the commissioning runs at the McDonald Observatory.
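
As a quick consistency check of the quoted numbers, the slit footprint on the detector follows directly from the plate scale:

```latex
N_\mathrm{width}  = \frac{1''}{0.27''\,\mathrm{px}^{-1}} \approx 3.7\ \mathrm{px},
\qquad
N_\mathrm{length} = \frac{15''}{0.27''\,\mathrm{px}^{-1}} \approx 55.6\ \mathrm{px}
```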

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we present an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount through selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them, but some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images to bill users; other character strings, such as the device type, manufacturer, manufacturing date, and specifications, are not valuable to the application. Thus, the application has to analyze only the regions of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only regions of interest for selective character extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory network that converts the sequential features into character strings through time-series analysis mapping feature vectors to characters. In this research, the character strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4-5 Arabic numerals. All system components are implemented on the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The system architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request from a mobile device onto an input queue with a FIFO (First In First Out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU. A slave process continually polls the input queue for recognition requests; when a request arrives, it converts the queued image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to an output queue, and switches back to polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images for testing. We randomly split the 22,985 images at an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant): normal denotes clean images, noise denotes images with noise signals, reflex denotes images with light reflections in the gasometer region, scale denotes images with small object size due to long-distance capture, and slant denotes images that are not horizontally level. The final character string recognition accuracies for the device ID and the gas usage amount on normal data are 0.960 and 0.864, respectively.
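
The master-slave structure described above maps naturally onto a FIFO work queue with GPU-side workers. A minimal sketch, with `run_ocr_pipeline` as a hypothetical stand-in for the detection + CRNN + BiLSTM pipeline and an assumed worker count:

```python
import queue
import threading

input_q = queue.Queue()    # FIFO input queue fed by the master process
output_q = queue.Queue()   # results queue read back by the master process

def run_ocr_pipeline(image: bytes) -> dict:
    # Hypothetical stand-in for ROI detection + CRNN + BiLSTM decoding.
    return {"device_id": "000000000000", "usage": "0000"}

def slave_worker():
    while True:
        image = input_q.get()   # blocks until a recognition request arrives
        output_q.put(run_ocr_pipeline(image))
        input_q.task_done()     # then go back to polling the input queue

for _ in range(4):              # assumed number of GPU slave workers
    threading.Thread(target=slave_worker, daemon=True).start()

input_q.put(b"...jpeg bytes...")  # master side: enqueue one reading request
print(output_q.get())             # and deliver the result to the device
```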