• Title/Summary/Keyword: selective images

148 search results

The Development of Sensibility Evaluation Tools for User-Oriented Housing Interior Space (사용자 중심의 주거 실내공간 감성평가도구 개발)

  • Park, Ji-Min
    • Korean Institute of Interior Design Journal
    • /
    • v.23 no.5
    • /
    • pp.112-121
    • /
    • 2014
  • The purpose of this study is to develop a user-oriented sensibility evaluation tool for housing interior space. The tool was developed through a systematic selection process that linked extracted housing interior space images with sensibility adjectives chosen for user-preferred residential spaces, in order to capture the characteristics of users' recently changing sensibility. The sensibility words for the residential spaces preferred by users were analyzed with 48 pairs of adjectives, and an exploratory factor analysis extracted eight sensibility factors: 'cozy', 'practical', 'cheerful', 'traditional', 'unique', 'congenial', 'sensuous', and 'gorgeous'. An image scale for residential interior space images was constructed in two dimensions: the 'sense of space' dimension, explained by an open-closed axis, and the 'type of space' dimension, explained by a natural-artificial axis. This structural model divided the residential interior design attributes into eight groups; 42 images representing the groups were selected, and the user-oriented residential interior space image tool was completed by adding user-selectable elements.
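
A minimal sketch of the kind of exploratory factor analysis described above, assuming a hypothetical response matrix of semantic-differential ratings on the 48 adjective pairs; the data and variable names here are illustrative, not the study's actual survey or rotation settings:

```python
# Hypothetical sketch: exploratory factor analysis on semantic-differential ratings.
# `ratings` stands in for survey responses (respondents x 48 adjective pairs).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.normal(size=(200, 48))        # placeholder: 200 respondents, 48 adjective pairs

fa = FactorAnalysis(n_components=8, random_state=0)  # 8 sensibility factors, as in the paper
scores = fa.fit_transform(ratings)          # factor scores per respondent
loadings = fa.components_.T                 # (48 pairs x 8 factors) loading matrix

# Adjective pairs that load strongly on a factor characterize that factor
# (e.g., 'cozy', 'practical', ... in the study).
top_pairs = np.argsort(-np.abs(loadings[:, 0]))[:5]
print("Pairs most associated with factor 1:", top_pairs)
```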

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we propose an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and conventional optical character recognition extracts all of them; some applications, however, need to ignore character types that are not of interest and focus only on specific ones. An automatic gasometer reading system, for example, only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; character strings such as the device type, manufacturer, manufacturing date, and specification carry no value for the application. The application therefore has to analyze only the regions of interest and the specific character types to extract the valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the regions of interest. The application system uses three neural networks: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into a sequence of spatial feature vectors; and the third is a bidirectional long short-term memory network that converts the sequential information into character strings through time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID, which consists of 12 Arabic numeral characters, and the gas usage amount, which consists of 4 to 5 Arabic numeral characters. All system components are implemented on Amazon Web Services with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The architecture adopts a master-slave processing structure for efficient, fast parallel processing that copes with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU. The slave process continuously polls the input queue for recognition requests; when a request is found, it converts the image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to an output queue, and goes back to polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. A total of 27,120 gasometer images were used for training, validation, and testing of the three deep neural networks: 22,985 images for training and validation and 4,135 images for testing. The 22,985 images were randomly split 8:2 into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflection, scale, and slant): normal data are clean images, noise means images with noise, reflection means images with light reflection in the gasometer region, scale means images with small objects due to long-distance capturing, and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
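
A minimal sketch of the master-slave FIFO queue pattern described above, using Python's standard `queue` and `threading` modules; `recognize_gasometer` is a hypothetical placeholder for the three-network recognition pipeline, and the in-process queue stands in for the paper's separate AWS master and GPU slave processes:

```python
# Illustrative master-slave queue processing (assumption: single process, for demonstration).
import queue
import threading

input_queue = queue.Queue()    # FIFO: reading requests pushed by the master
output_queue = queue.Queue()   # results returned by the slave workers

def recognize_gasometer(image_bytes):
    """Placeholder for the CNN detector + CRNN + BiLSTM recognizer."""
    return {"device_id": "000000000000", "usage": "01234", "boxes": []}

def slave_worker():
    # The slave polls the input queue, recognizes, and pushes the result.
    while True:
        request_id, image_bytes = input_queue.get()   # blocks until a request arrives
        result = recognize_gasometer(image_bytes)
        output_queue.put((request_id, result))
        input_queue.task_done()

threading.Thread(target=slave_worker, daemon=True).start()

# Master side: enqueue a request and wait for its result.
input_queue.put((1, b"<jpeg bytes>"))
print(output_queue.get())
```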

Replication of Hybrid Micropatterns Using Selective Ultrasonic Imprinting (선택적 초음파 임프린팅을 사용한 복합 미세패턴의 복제기술)

  • Lee, Hyun Joong;Jung, Woosin;Park, Keun
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.39 no.1
    • /
    • pp.71-77
    • /
    • 2015
  • Ultrasonic imprinting is a micropattern replication technology for a thermoplastic polymer surface that uses ultrasonic vibration energy; it has the advantages of a short cycle time and low energy consumption. Recently, ultrasonic imprinting has been further developed to extend its functionality: (i) selective ultrasonic imprinting using mask films and (ii) repetitive ultrasonic imprinting for composite pattern development. In this study, selective ultrasonic imprinting was combined with repetitive imprinting in order to replicate versatile micropatterns. For this purpose, a repetitive imprinting technology was further extended to utilize mask films, which enabled versatile micropatterns to be replicated using a single mold with micro-prism patterns. The replicated hybrid micropatterns were optically evaluated through laser light images, which showed that versatile optical diffusion characteristics can be obtained from the hybrid micropatterns.

Implementation of Preceding Vehicle Break-Lamp Detection System using Selective Attention Model and YOLO (선택적 주의집중 모델과 YOLO를 이용한 선행 차량 정지등 검출 시스템 구현)

  • Lee, Woo-Beom
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.22 no.2
    • /
    • pp.85-90
    • /
    • 2021
  • ADAS (Advanced Driver Assistance Systems) for safe driving are an important area of autonomous vehicles. In particular, ADAS software that uses image sensors mounted on the vehicle is low in build cost and can be used for various purposes. This paper proposes an algorithm for detecting the brake lamp within the tail lamp of a preceding vehicle, which makes it possible to perceive the driving state of that vehicle. The proposed method uses the YOLO technique, which performs well at object detection in real scenes, and extracts the intensity-varying brake-lamp region from the HSV image of the detected vehicle ROI (Region Of Interest). After the candidate brake-lamp regions are detected, each isolated region is labeled, and the brake-lamp region is finally detected by the proposed selective attention model, which evaluates the shape similarity of the labeled candidate regions. To evaluate the performance of the implemented preceding-vehicle brake-lamp detection system, we applied it to various driving images, and the implemented system showed successful results.
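
A rough sketch of the post-YOLO stage described above, assuming the vehicle ROI has already been cropped by the detector; the HSV thresholds and the compactness-based shape test are illustrative assumptions, not the paper's exact selective attention model:

```python
# Illustrative brake-lamp candidate extraction inside a detected vehicle ROI.
# Thresholds and the shape test are assumptions for demonstration only.
import cv2
import numpy as np

def find_brake_lamp_candidates(vehicle_roi_bgr):
    hsv = cv2.cvtColor(vehicle_roi_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # Bright, saturated red-ish pixels as brake-lamp candidates (assumed thresholds).
    red_mask = ((h < 10) | (h > 170)) & (s > 100) & (v > 180)
    mask = red_mask.astype(np.uint8) * 255

    # Label isolated candidate regions.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)

    candidates = []
    for i in range(1, num_labels):                 # label 0 is the background
        x, y, w, hgt, area = stats[i]
        if area < 30:
            continue
        # Simple shape cue: brake lamps appear as roughly compact blobs.
        fill_ratio = area / float(w * hgt)
        if fill_ratio > 0.5:
            candidates.append((x, y, w, hgt))
    return candidates
```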

A Selective Attention Based Target Detection System in Noisy Images (잡영 영상에서의 선택적 주의 기반 목표물 탐지 시스템)

  • 최경주;이일병
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.04b
    • /
    • pp.622-624
    • /
    • 2002
  • This paper describes a selective-attention-based method for detecting targets in noisy images. In particular, the proposed method uses no prior knowledge of the target; it detects targets using only bottom-up cues from the input image, so it can be applied generally to many different fields. In the proposed system, several basic features are extracted directly from the input image, and as these features are integrated, information that is not useful for target detection is naturally filtered out while useful information is added and emphasized. The performance of the proposed system was evaluated on a variety of noisy images, from simple images to complex natural scenes.
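
A compact sketch of bottom-up feature integration in the spirit of the approach above, combining intensity-contrast and edge feature maps into a saliency map with OpenCV and NumPy; the specific features and equal weights are assumptions, not the paper's exact design:

```python
# Illustrative bottom-up saliency: extract simple features, normalize, and combine.
# Feature choice and weighting are assumptions for demonstration.
import cv2
import numpy as np

def bottom_up_saliency(gray):
    gray = gray.astype(np.float32)

    # Intensity contrast: difference between the image and a heavily blurred copy.
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=8)
    intensity_contrast = np.abs(gray - blurred)

    # Edge feature: gradient magnitude from Sobel filters.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)

    def normalize(m):
        return (m - m.min()) / (m.max() - m.min() + 1e-6)

    # Integration step: weak, uniform responses fade while strong cues reinforce each other.
    saliency = 0.5 * normalize(intensity_contrast) + 0.5 * normalize(edges)
    return normalize(saliency)

# Candidate target locations are the most salient pixels, e.g.:
# gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
# sal = bottom_up_saliency(gray); ys, xs = np.where(sal > 0.9)
```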

A Simple Mathematical Analysis of Correlation Target Tracker in Image Sequences (영상신호를 이용한 상관방식 추적기에 대한 간단한 수학적인 해석)

  • Cho, Jae-Soo;Park, Dong-Jo
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.485-488
    • /
    • 2003
  • A conventional correlation target tracker is analyzed with a simple mathematical approach, and a correlation measure with a selective attentional property is proposed to overcome the false-peak problem of conventional methods. Various experimental results show that the proposed correlation measure considerably reduces the probability of false peaks caused by correlation between the background of the reference block and a distorted, noisy sensor input image.
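
A minimal sketch of correlation-based template tracking with a selective (masked) correlation measure, using OpenCV's `matchTemplate`; the binary target mask used to suppress background pixels of the reference block is an illustrative stand-in for the paper's selective attentional measure:

```python
# Illustrative masked normalized cross-correlation for template tracking.
# The target mask is an assumption used to down-weight background pixels
# in the reference block, which are what drive false peaks.
import cv2
import numpy as np

def track_by_correlation(frame_gray, template_gray, target_mask=None):
    if target_mask is None:
        target_mask = np.ones_like(template_gray, dtype=np.uint8)

    # TM_CCORR_NORMED supports a mask: masked-out background pixels contribute nothing.
    response = cv2.matchTemplate(frame_gray, template_gray,
                                 cv2.TM_CCORR_NORMED, mask=target_mask)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)
    return max_loc, max_val   # top-left corner of the best match and its score
```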

The Four Points Diagonal Positioning Algorithm for Iris Position Tracking Improvement

  • Chai Duck-Hyun;Ryu Kwang-Ryol
    • Journal of information and communication convergence engineering
    • /
    • v.2 no.3
    • /
    • pp.202-204
    • /
    • 2004
  • An improvement in tracking capability for locating the position of the iris in images is presented in this paper. The proposed Four Points Diagonal Positioning algorithm places four arbitrary points on the edge of the iris and draws diagonal lines between the selected points so that they cross. The experimental results show that the algorithm tracks the iris efficiently, even around the eyelid.
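
A small sketch of the geometric idea as it reads from the abstract: take four points on the iris edge, connect opposite points with two diagonals, and use their intersection as the iris position. The point pairing and the line-intersection formula are standard geometry assumed here, not taken from the paper:

```python
# Illustrative four-point diagonal positioning: the iris position is estimated
# as the intersection of the two diagonals through four iris-edge points.
def diagonal_intersection(p1, p2, p3, p4):
    """Intersect diagonal p1-p3 with diagonal p2-p4 (points as (x, y) tuples)."""
    (x1, y1), (x2, y2) = p1, p3          # first diagonal
    (x3, y3), (x4, y4) = p2, p4          # second diagonal

    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        raise ValueError("diagonals are parallel; pick different edge points")

    # Standard line-line intersection in determinant form.
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    px = (a * (x3 - x4) - (x1 - x2) * b) / denom
    py = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return px, py

# Example: four points roughly on a circle of radius 10 centred at (50, 40).
print(diagonal_intersection((40, 40), (50, 30), (60, 40), (50, 50)))  # ~(50.0, 40.0)
```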

A Study on the Enhancement of Tracking Capability for Iris Image

  • Chai, Duck-Hyun;Kim, Jung-Tae;Hur, Chang-Wu;Ryu, Kwang-Ryol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05a
    • /
    • pp.24-27
    • /
    • 2004
  • An enhancement in tracking capability for locating the position of the iris in images is presented in this paper. The proposed algorithm, called FFDP (Four Points Diagonal Positioning), places four arbitrary points on the edge of the iris and draws diagonal lines between the selected points so that they cross. The experimental results show that the algorithm tracks the iris efficiently, even around the eyelid.

Robust Watermarking using Selective Embedding Method in Color Images (칼라영상의 화질열화를 고려한 선택적 삽입의 강인한 워터마킹)

  • 원준호;전병우
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.143-152
    • /
    • 2004
  • In this paper, we propose new watermarking algorithms that utilize the characteristics of color images to resolve the trade-off between image quality and robustness. Since human visual sensitivity differs for each RGB channel, the watermark can be embedded more robustly for the same level of image degradation.
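
A toy sketch of channel-selective embedding in the spirit described above, putting a stronger watermark in the blue channel on the common assumption that the human visual system is least sensitive to blue; the per-channel strengths and the plain additive scheme are illustrative, not the paper's algorithm:

```python
# Illustrative channel-selective additive watermark embedding.
# Channel strengths are assumptions (HVS is typically least sensitive to blue).
import numpy as np

def embed_watermark(image_bgr, watermark_bits, strengths=(4.0, 1.0, 2.0)):
    """image_bgr: HxWx3 uint8; watermark_bits: HxW array of 0/1.
    strengths: additive amplitude per (B, G, R) channel."""
    out = image_bgr.astype(np.float32)
    signal = watermark_bits.astype(np.float32) * 2.0 - 1.0   # map {0,1} -> {-1,+1}
    for c, alpha in enumerate(strengths):                    # strongest in blue (c = 0)
        out[:, :, c] += alpha * signal
    return np.clip(out, 0, 255).astype(np.uint8)

def detect_watermark(marked_bgr, original_bgr, strengths=(4.0, 1.0, 2.0)):
    """Non-blind detection: recover bits from the strength-weighted difference."""
    diff = marked_bgr.astype(np.float32) - original_bgr.astype(np.float32)
    weighted = sum(alpha * diff[:, :, c] for c, alpha in enumerate(strengths))
    return (weighted > 0).astype(np.uint8)
```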

Graphic Simulator for processing test of Humanoid Robot (인간형 로봇의 동작 더스트를 위한 그래픽 시뮬레이터)

  • Hwang, Byung-Hun;Kim, Jee-Hong
    • Proceedings of the KIEE Conference
    • /
    • 2003.07d
    • /
    • pp.2480-2482
    • /
    • 2003
  • We built a motion-test simulator for a humanoid robot as a Windows-based GUI (Graphic User Interface) program with MMI (Man Machine Interface) functions. The simulator provides user interface functions such as start and stop, parameter loading, and record and save, together with a 3D display that uses realistic lengths and numerical size values and represents the real shape of the robot's inner and outer parts. It supports selective fast and slow observation by adjusting the step size, and it receives images from the imaging device attached to the robot, so the user can watch both the environment containing the robot and the camera images. To implement this, design data created in CAD for the laser RP (Rapid Prototyping) process were converted into C code for the simulator program. OpenGL, a graphics API, is used for efficient and detailed graphics operations, and animation data can be created and tested with save and resume options.
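
A very small sketch of an OpenGL display loop with adjustable playback speed, in the spirit of the fast/slow selective observation described above; it uses PyOpenGL/GLUT rather than the paper's Windows C program, and a wire cube stands in for the robot model:

```python
# Illustrative OpenGL animation loop with an adjustable playback step
# (PyOpenGL + GLUT; the cube is a placeholder for the humanoid robot geometry).
from OpenGL.GL import *
from OpenGL.GLUT import *

angle = 0.0
step_deg = 2.0          # playback step: larger value = faster observation

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    glRotatef(angle, 0.0, 1.0, 0.0)
    glutWireCube(0.5)   # placeholder for the robot model
    glutSwapBuffers()

def timer(value):
    global angle
    angle = (angle + step_deg) % 360.0
    glutPostRedisplay()
    glutTimerFunc(33, timer, 0)     # roughly 30 frames per second

def keyboard(key, x, y):
    global step_deg
    if key == b'+':                 # speed up playback
        step_deg *= 2.0
    elif key == b'-':               # slow down playback
        step_deg /= 2.0

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutCreateWindow(b"robot motion viewer")
glEnable(GL_DEPTH_TEST)
glutDisplayFunc(display)
glutKeyboardFunc(keyboard)
glutTimerFunc(33, timer, 0)
glutMainLoop()
```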