• Title/Summary/Keyword: Information Processing Module


Perceptual Generative Adversarial Network for Single Image De-Snowing (단일 영상에서 눈송이 제거를 위한 지각적 GAN)

  • Wan, Weiguo;Lee, Hyo Jong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.10
    • /
    • pp.403-410
    • /
    • 2019
  • Image de-snowing aims to eliminate the negative influence of snow particles and improve scene understanding in images. In this paper, a single-image snow removal method based on a perceptual generative adversarial network is proposed. A residual U-Net is designed as the generator to produce the snow-free image. To handle snow particles of various sizes, an inception module with different filter kernels is adopted to extract multi-resolution features from the input snow image. In addition to the adversarial loss, a perceptual loss and a total variation loss are employed to improve the quality of the resulting image. Experimental results indicate that the method achieves excellent performance on both synthetic and real snow images in terms of visual observation and commonly used visual quality indices.
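A minimal sketch of the kind of composite generator loss the abstract describes (adversarial + perceptual + total variation); this is not the authors' code, and the loss weights and the choice of VGG-16 feature layer are assumptions.

```python
# Illustrative sketch only: a generator loss combining adversarial, perceptual,
# and total variation terms. Weights and the VGG feature depth are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

vgg_features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def total_variation(img):
    # Mean absolute difference between neighboring pixels (smoothness prior).
    tv_h = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    tv_w = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return tv_h + tv_w

def generator_loss(fake_img, clean_img, disc_fake_logits,
                   w_adv=1e-3, w_perc=1.0, w_tv=1e-5):
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    perc = F.mse_loss(vgg_features(fake_img), vgg_features(clean_img))
    tv = total_variation(fake_img)
    return w_adv * adv + w_perc * perc + w_tv * tv
```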

Implemented of Integrated Interface Control Unit with Compatible and Improve Brightness of Existing Full Color LED Display System (Full Color LED 디스플레이장치와 휘도 개선과 호환성을 갖는 통합인터페이스 제어장치 구현)

  • Lee, Ju-Yeon
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.12
    • /
    • pp.90-96
    • /
    • 2021
  • In this paper, we designed and manufactured an integrated interface control unit that is compatible with the brightness control unit, the color control unit, and existing control units. For the implementation, the DVI/HDMI standard is applied to the data transmission method, and the Sil 1169 IC is adopted for this purpose. Brightness control is programmed with eight levels using the AT89C2051, and the EPM240T100C5 IC is used for image and dimming data processing. As a result, the unit is compatible with DVI/HDMI-based control units manufactured by different companies and can reproduce clear, high-quality full HD video on a full-color LED display system according to the surrounding brightness.
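A tiny sketch of the eight-level brightness mapping the abstract mentions; the actual control is AT89C2051 firmware, and the ADC range and linear mapping below are assumptions made only for illustration.

```python
# Illustrative sketch only: quantize an ambient-light reading into one of eight
# dimming levels. The 0-1023 ADC range and linear mapping are assumptions.
def dimming_level(adc_value, adc_max=1023, levels=8):
    adc_value = max(0, min(adc_value, adc_max))
    # Map the reading to an integer level in [0, levels - 1].
    return (adc_value * levels) // (adc_max + 1)

if __name__ == "__main__":
    for reading in (0, 300, 700, 1023):
        print(reading, "->", dimming_level(reading))
```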

Ultimate-Game Automatic Trace and Analysis System Using IoT (사물인터넷 기반 얼티미트 경기 자동추적 및 분석 시스템)

  • Lim, Jea Yun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.59-66
    • /
    • 2022
  • In this paper, IoT technology is applied to Ultimate, one of the games played with a flying disc: the course of the game is traced based on the players and the disc, and a comprehensive relationship analysis between players is performed on the game results. A WiFi module with built-in GPS is attached to each player and to the flying disc. The player's ID, the latitude/longitude values received from GPS, and the time are stored in a database in real time during the game. Game progress information is stored in the database at the same time through a mobile Ultimate game app. Based on this information, we developed a system that performs a comprehensive analysis of the game contents after the game is over. Using the information stored in the database, the player-based game process and the disc-based scoring process are visualized on a virtual playground, and various game results for the players are analyzed graphically using Python.
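A minimal sketch of the real-time storage step described above (player ID, latitude/longitude, time into a database); the table name, SQLite backend, and column layout are assumptions, not the authors' schema.

```python
# Illustrative sketch, not the paper's implementation: store per-player GPS
# samples (ID, latitude, longitude, time) in a local SQLite table.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("ultimate_game.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS track (
        player_id TEXT,
        latitude  REAL,
        longitude REAL,
        ts        TEXT
    )
""")

def store_sample(player_id, latitude, longitude):
    # One row per GPS fix received from a player's (or the disc's) WiFi module.
    conn.execute(
        "INSERT INTO track VALUES (?, ?, ?, ?)",
        (player_id, latitude, longitude,
         datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

store_sample("P07", 35.8468, 127.1294)
```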

Single Shot Detector for Detecting Clickable Object in Mobile Device Screen (모바일 디바이스 화면의 클릭 가능한 객체 탐지를 위한 싱글 샷 디텍터)

  • Jo, Min-Seok;Chun, Hye-won;Han, Seong-Soo;Jeong, Chang-Sung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.29-34
    • /
    • 2022
  • We propose a novel network architecture and build a dataset for recognizing clickable objects on mobile device screens. The data were collected from mobile device screens of various resolutions, and a total of 24,937 annotations were subdivided into seven categories: text, edit text, image, button, region, status bar, and navigation bar. We use the Deconvolution Single Shot Detector as a baseline, with a backbone network containing Squeeze-and-Excitation blocks, a Single Shot Detector layer structure to derive inference results, and a feature pyramid network structure. We also extract features efficiently by changing the network's input resolution from the existing 1:1 ratio to a 1:2 ratio similar to a mobile device screen. In experiments on the dataset we built, the mean average precision was improved by up to 101% compared to the baseline.
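A small sketch of a Squeeze-and-Excitation block of the kind the abstract adds to the backbone; this is not the authors' network, and the reduction ratio of 16 is an assumption.

```python
# Illustrative Squeeze-and-Excitation block: global-pool "squeeze", two-layer
# "excitation" gate, then channel reweighting. Reduction ratio is an assumption.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool
        self.fc = nn.Sequential(                     # excitation: channel gating
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature channels

feat = torch.randn(1, 64, 32, 64)   # 1:2 aspect-ratio feature map, as in the abstract
print(SEBlock(64)(feat).shape)
```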

Design and Implementation of Home IoT Cultivation System with iOS Interface (iOS 인터페이스를 지원하는 가정용 IoT 재배 시스템 설계 및 구현)

  • Jeong, Seung Gyun;Kim, Gyu Dong;Kim, Byeong Chang
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.2
    • /
    • pp.61-68
    • /
    • 2023
  • Demand for "pet plants" and "planterior" is increasing due to COVID-19 and fine dust. Smart pots for planterior should be small while still providing the functions needed for cultivation, and they should offer a user interface for remote control for the user's convenience. In this paper, we implemented smart pots by incorporating IoT into the pots. In response to the growing number of iPhone users, we developed an iOS app for the user interface and UX/UI design. Because the smartphone app communicates with a home pot server over the Internet, users can check and control the state of the pot anytime, anywhere. The server and the pot module were separated to reduce the size of the pot itself. By locating a water bottle at the bottom of the pot, the design adopts a circulating structure in which drainage flows down into the bottle, which we expect makes it suitable for planterior use.
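A hypothetical sketch of the "check and control the pot state over the Internet" interaction; Flask, the /state and /water routes, and the state fields are all invented for illustration and are not the paper's server or API.

```python
# Illustrative home pot server sketch (not the paper's implementation): the iOS
# app would read /state and toggle the pump via /water. Routes and fields are
# assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)
pot_state = {"soil_moisture": 42, "temperature": 23.5, "pump_on": False}

@app.get("/state")
def get_state():
    # The app polls this endpoint to display the current pot status.
    return jsonify(pot_state)

@app.post("/water")
def set_pump():
    # The app toggles the water pump remotely.
    pot_state["pump_on"] = bool(request.get_json().get("on", False))
    return jsonify(pot_state)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```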

A Specification-Based Methodology for Data Collection in Artificial Intelligence System (명세 기반 인공지능 학습 데이터 수집 방법)

  • Kim, Donggi;Choi, Byunggi;Lee, Jaeho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.11
    • /
    • pp.479-488
    • /
    • 2022
  • In recent years, with the rapid development of machine learning technology, research utilizing machine learning has been actively conducted in fields such as cognition, reasoning and judgment, and action among the various technologies constituting intelligent systems. To utilize machine learning, it is indispensable to collect data for learning. However, the types of data generated vary according to the environment in which the data are generated, and the types and forms of data required differ depending on the learning model to be used. As a result, an existing data collection method cannot be reused in a new environment, and a specialized data collection module must be developed each time. In this paper, we propose a specification-based methodology for data collection in artificial intelligence systems to solve these problems, ensure the reusability of the data collection method across collection environments, and automate the implementation of the data collection function.
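A hypothetical illustration of the specification-driven idea: a declarative spec states which fields to collect and how to cast them, and a generic collector applies it, so only the spec changes between environments. The spec format and field names below are invented and are not from the paper.

```python
# Hypothetical spec-driven collection sketch. The spec, not hard-coded logic,
# decides which raw fields become training-sample fields.
SPEC = {
    "fields": {
        "timestamp": {"source": "ts",   "type": float},
        "label":     {"source": "tag",  "type": str},
        "value":     {"source": "read", "type": float},
    }
}

def collect(raw_record, spec=SPEC):
    # Build a sample by following the spec, so the collector is reusable when
    # only the specification changes.
    sample = {}
    for name, rule in spec["fields"].items():
        sample[name] = rule["type"](raw_record[rule["source"]])
    return sample

print(collect({"ts": "1690000000.5", "tag": "sensor_a", "read": "3.14"}))
```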

Blurred Image Enhancement Techniques Using Stack-Attention (Stack-Attention을 이용한 흐릿한 영상 강화 기법)

  • Park Chae Rim;Lee Kwang Ill;Cho Seok Je
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.2
    • /
    • pp.83-90
    • /
    • 2023
  • Blur is an important factor in lowering image recognition rates in computer vision. It mainly occurs when the camera is unstable or out of focus, or when an object in the scene moves quickly during the exposure time. Blurred images greatly degrade visual quality and weaken visibility, and this phenomenon occurs frequently despite the continuous development of digital camera technology. In this paper, we modify the building modules of a deep multi-patch neural network, designed with convolutional neural networks to capture details of the input image, and add attention techniques that focus on objects in the blurred image in several ways to strengthen it. The method measures and assigns weights at different scales to distinguish changes in blurring, and restores the image from coarse to fine levels so that global and local regions are adjusted sequentially. This approach shows excellent results in recovering degraded image quality, supporting efficient object detection and feature extraction, and complementing color constancy.
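A minimal sketch of a spatial-attention gate of the kind such a network stacks at several scales; this is not the paper's Stack-Attention module, and the channel counts are assumptions.

```python
# Illustrative spatial-attention gate: a per-pixel weight in [0, 1] is predicted
# from the feature map and used to reweight it. Channel sizes are assumptions.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.attn(x)   # emphasize blurred-object regions

coarse = torch.randn(1, 32, 64, 64)   # one scale of a multi-patch feature map
print(SpatialAttention(32)(coarse).shape)
```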

Restoring Turbulent Images Based on an Adaptive Feature-fusion Multi-input-Multi-output Dense U-shaped Network

  • Haiqiang Qian;Leihong Zhang;Dawei Zhang;Kaimin Wang
    • Current Optics and Photonics
    • /
    • v.8 no.3
    • /
    • pp.215-224
    • /
    • 2024
  • In medium- and long-range optical imaging systems, atmospheric turbulence causes blurring and distortion of images, resulting in loss of image information. An image-restoration method based on an adaptive feature-fusion multi-input-multi-output (MIMO) dense U-shaped network (Unet) is proposed, to restore a single image degraded by atmospheric turbulence. The network's model is based on the MIMO-Unet framework and incorporates patch-embedding shallow-convolution modules. These modules help in extracting shallow features of images and facilitate the processing of the multi-input dense encoding modules that follow. The combination of these modules improves the model's ability to analyze and extract features effectively. An asymmetric feature-fusion module is utilized to combine encoded features at varying scales, facilitating the feature reconstruction of the subsequent multi-output decoding modules for restoration of turbulence-degraded images. Based on experimental results, the adaptive feature-fusion MIMO dense U-shaped network outperforms traditional restoration methods, CMFNet network models, and standard MIMO-Unet network models, in terms of image-quality restoration. It effectively minimizes geometric deformation and blurring of images.
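A minimal sketch of a patch-embedding shallow-convolution front end of the kind the abstract places before the multi-input dense encoders; this is not the authors' code, and the patch size and channel widths are assumptions.

```python
# Illustrative patch-embedding shallow-convolution module: a strided convolution
# embeds non-overlapping patches, followed by one shallow convolution.
import torch
import torch.nn as nn

class PatchEmbedShallowConv(nn.Module):
    def __init__(self, in_ch=3, embed_ch=32, patch=4):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, embed_ch, kernel_size=patch, stride=patch)
        self.shallow = nn.Sequential(
            nn.Conv2d(embed_ch, embed_ch, kernel_size=3, padding=1),
            nn.GELU(),
        )

    def forward(self, x):
        return self.shallow(self.embed(x))

turbulent = torch.randn(1, 3, 256, 256)
print(PatchEmbedShallowConv()(turbulent).shape)   # -> (1, 32, 64, 64)
```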

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them, but some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to send bills to users; other character strings such as device type, manufacturer, manufacturing date, and specifications are not valuable to the application. Thus, the application has to analyze only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings, the second is another convolutional neural network that transforms the spatial information of a region of interest into spatial-sequential feature vectors, and the third is a bi-directional long short-term memory network that converts the spatial-sequential information into character strings through a time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4 to 5 Arabic numerals. All system components are implemented on Amazon Web Services with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices to an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU module. The slave process continually polls the input queue for recognition requests; when requests from the master process are present, it converts the image from the input queue into the device ID character string, the gas usage amount character string, and the position information of the strings, returns this information to an output queue, and switches back to idle mode to poll the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal means clean images, noise means images with noise, reflex means images with light reflections in the gasometer region, scale means images with small objects due to long-distance capture, and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
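A minimal sketch of the master-slave FIFO structure described above, not the production system: the master pushes reading requests into a FIFO queue, and the slave polls it, runs recognition, and returns results through an output queue. The recognize() function is a stand-in for the CNN detection + CRNN pipeline.

```python
# Illustrative master-slave FIFO sketch using the standard-library queue module.
import queue
import threading

input_q, output_q = queue.Queue(), queue.Queue()

def recognize(image):
    # Placeholder for the detection + CRNN networks running on the GPU slave.
    return {"device_id": "000000000000", "usage": "0123"}

def slave():
    while True:
        req_id, image = input_q.get()        # blocks (polls) until a request arrives
        output_q.put((req_id, recognize(image)))
        input_q.task_done()

threading.Thread(target=slave, daemon=True).start()

# Master side: enqueue a captured image, then deliver the result to the device.
input_q.put(("req-001", b"<jpeg bytes>"))
print(output_q.get())
```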

A Development of Integrated Monitoring and Control System for Identification and Management of Fishing Gears (어구 식별 및 관리를 위한 통합 관제 시스템 개발)

  • Hwang, Hun-Gyu;Kim, Bae-Sung;Woo, Sang-Min;Woo, Yun-Tae;Kim, Nam-Su;Nam, Gyeung-Tae;Hwang, Jee-Joong;Lee, Young-Geun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.9
    • /
    • pp.1228-1236
    • /
    • 2018
  • Recently, the maritime environment has been contaminated by abandoned fishing gear. To solve this problem, systematic management techniques for fishing gear based on ICT technologies are required. Existing systems are used optionally by owners, but they need to adopt a monitoring and control architecture for integrated national surveillance. To this end, we designed an architecture for effective monitoring and management that collects position and state information using automatic identification buoy (AIB) devices and sends it to the fishing ship, the administrator ship, and the shore-side control center over IoT networks. In particular, in this paper, we developed an ENC-based integrated control system for efficient management that provides position indication, state information display, and loss alarms for fishing gear. We also conducted performance tests of the system's data processing and visualization functions using a virtual buoy generation module.
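A sketch in the spirit of the virtual buoy generation module used for the performance tests, not the authors' code: it emits synthetic AIB position/state messages. The message fields and the drift model are assumptions.

```python
# Illustrative virtual buoy generator: yields synthetic position/state reports.
import json
import random
import time

def virtual_buoy(buoy_id, lat, lon):
    """Yield synthetic position/state reports for one virtual AIB device."""
    while True:
        lat += random.uniform(-0.0005, 0.0005)   # small random drift
        lon += random.uniform(-0.0005, 0.0005)
        yield json.dumps({
            "buoy_id": buoy_id,
            "latitude": round(lat, 6),
            "longitude": round(lon, 6),
            "battery": random.randint(60, 100),
            "timestamp": time.time(),
        })

gen = virtual_buoy("AIB-001", 35.10, 129.04)
for _ in range(3):
    print(next(gen))
```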