• Title/Summary/Keyword: Input task

Search Results: 431

Codebook-Based Foreground Extraction Algorithm with Continuous Learning of Background (연속적인 배경 모델 학습을 이용한 코드북 기반의 전경 추출 알고리즘)

  • Jung, Jae-Young
    • Journal of Digital Contents Society
    • /
    • v.15 no.4
    • /
    • pp.449-455
    • /
    • 2014
  • Detection of moving objects is a fundamental task in most computer vision applications, such as video surveillance, activity recognition, and human motion analysis. It is difficult because of the many challenges in realistic scenarios, including irregular background motion, illumination changes, cast shadows, changes in scene geometry, and noise. In this paper, we propose a foreground extraction algorithm based on a codebook, a database of information about background pixels obtained from the input image sequence. First, we take the first frame as the background image and compute the difference between it and the next input image to detect moving objects. The resulting difference image may contain noise as well as genuine moving objects. Second, we look up the codebook with the color and brightness of each foreground pixel in the difference image. If a pixel matches a codeword, it is judged to be falsely detected and is removed from the foreground. Finally, the background image is updated so that the next input frame can be processed iteratively: pixels detected as background are re-estimated from the input image, while the others are copied from the previous background image. We apply our algorithm to the PETS2009 data and compare the results with those of the GMM and standard codebook algorithms.
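
The pipeline described above (frame differencing, then codebook lookup to reject false foreground) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes grayscale pixels stored as a position-to-brightness mapping and a per-pixel codebook of brightness ranges, with hypothetical thresholds `epsilon` and `diff_thresh`.

```python
# Simplified codebook-based foreground extraction.
# Frames are dicts mapping (row, col) -> brightness (0-255).

def build_codebook(frames, epsilon=10):
    """Collect brightness codewords [lo, hi] for each pixel position."""
    codebook = {}
    for frame in frames:
        for pos, value in frame.items():
            words = codebook.setdefault(pos, [])
            for w in words:
                if w[0] - epsilon <= value <= w[1] + epsilon:
                    w[0] = min(w[0], value)  # widen the matched codeword
                    w[1] = max(w[1], value)
                    break
            else:
                words.append([value, value])  # start a new codeword
    return codebook

def extract_foreground(frame, background, codebook,
                       diff_thresh=25, epsilon=10):
    """Difference against the background image, then delete pixels
    that the codebook explains as background (false detections)."""
    foreground = set()
    for pos, value in frame.items():
        if abs(value - background[pos]) > diff_thresh:
            matched = any(w[0] - epsilon <= value <= w[1] + epsilon
                          for w in codebook.get(pos, []))
            if not matched:
                foreground.add(pos)
    return foreground
```

A pixel that jumps far from the background model and matches no codeword survives as foreground; everything the codebook has seen before is filtered out.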

Korean Dependency Parsing using Pointer Networks (포인터 네트워크를 이용한 한국어 의존 구문 분석)

  • Park, Cheoneum;Lee, Changki
    • Journal of KIISE
    • /
    • v.44 no.8
    • /
    • pp.822-831
    • /
    • 2017
  • In this paper, we propose a Korean dependency parsing model using pointer networks with multi-task learning. Multi-task learning improves performance by learning two or more problems at the same time. We perform dependency parsing with pointer networks based on this method, obtaining the dependency relation and the dependency label of each word simultaneously. We define five input criteria for the morpheme-level, multi-task pointer networks used in word-level dependency parsing, and apply fine-tuning to further improve parsing performance. Our experiments show that the proposed model achieves a UAS of 91.79% and an LAS of 89.48%, outperforming conventional Korean dependency parsers.
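
The core of a pointer network is that the attention distribution over the encoder positions is itself the output distribution, so the decoder "points" at an input word (here, the head of a dependency). A minimal sketch of one such pointing step, with hypothetical dot-product scoring and toy vectors in place of learned states:

```python
import math

def pointer_attention(decoder_state, encoder_states):
    """One pointer-network step: score each encoder position against the
    decoder state; the softmax over positions is the output distribution,
    and its argmax is the predicted head index."""
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]            # dot-product attention
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]        # numerically stable softmax
    total = sum(exps)
    probs = [x / total for x in exps]
    head = max(range(len(probs)), key=probs.__getitem__)
    return probs, head
```

In the paper's setting a second output head would predict the dependency label for the same step (the multi-task part); here only the pointing mechanism is shown.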

Comparative Analysis on the Mock-ups' Configuration and Monitoring Protocol System of Advanced Daylighting Systems for Daylighting Experiment - Focused on IEA SHC Task21- (첨단채광시스템 실험용 Mock-Up 모형의 형상 및 모니터링 프로토콜 시스템에 관한 비교분석 - IEA SHC Task21을 중심으로-)

  • Jeong, In-Young;Choi, Sang-Hyun;Kim, Jeong-Tai
    • KIEAE Journal
    • /
    • v.4 no.1
    • /
    • pp.11-20
    • /
    • 2004
  • Innovative daylighting systems for buildings in various climatic zones around the world have been developed under IEA SHC Task 21. Performance assessments were obtained by monitoring most systems in full-scale test rooms or actual buildings under real sky conditions. This study comparatively analyzes the configuration and monitoring systems of the nine mock-up models of IEA SHC Task 21. To that end, the geometry of the test rooms (length, width, height, window area, glazed area, and occupied area), the reflectance of the walls, floor, and ceiling, and the transmittance of the glazing (transmittance for hemispherical irradiation, transmittance for normal irradiation, and U-value) were compared. The measurement equipment (manufacturer, range, calibration, maximum calibration error, cosine response error, fatigue error) and the data acquisition systems (manufacturer, type, number of differential analogue input channels, A/D converter resolution in bits, data acquisition software) were also analyzed comparatively. This experimental methodology of standardized monitoring has proven valuable for future assessments of advanced daylighting systems in Korea.

Development and Evaluation of an English Speaking Task Using Smartphone and Text-to-Speech (스마트폰과 음성합성을 활용한 영어 말하기 과제의 개발과 평가)

  • Moon, Dosik
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.5
    • /
    • pp.13-20
    • /
    • 2016
  • This study explores the effects of a video-recorded English speaking task on learners. The learning model, a form of mobile learning, was developed to facilitate learners' output practice by exploiting the advantages of a smartphone and Text-to-Speech. The survey results show the positive effects of the speaking task on pronunciation, speaking, listening, and writing, in terms of students' confidence as well as general English ability. The study further examines the possibilities and limitations of the speaking task in helping Korean learners improve their speaking ability, given that they lack sufficient exposure to English input and output practice in a setting where English is learned as a foreign language.

Comparison of Two Methods for Size-interpolation on CRT Display : Analog Stimulus-Digital Response Vs. Digital Stimulus-Analog Response (CRT 표시장치에서 두 형태의 크기-내삽 추정 방법의 비교 연구 : 상사자극-계수 반응과 계수 자극-상사반응)

  • Ro, Jae-ho
    • Journal of Industrial Technology
    • /
    • v.14
    • /
    • pp.127-140
    • /
    • 1994
  • This study is concerned with accuracy and response patterns when different methods are used in an interpolation task. Although the three methods employed the same modality for input (visual) and output (manual responding), they differed in central processing: method 1 tends more toward verbal processing, method 2 tends more toward spatial processing, and method 3 requires repeated code switching (verbal/spatial) to perform the task. A split-plot design was adopted, in which the whole plot consisted of the methods (3), orientations (horizontal, vertical), and baseline sizes (300, 500, 700 pixels), and the split plot consisted of the target locations (1-99). The results showed an anchor effect and a range effect. Accuracy was highest for method 2, followed by method 3 and then method 1. ANOVA showed that accuracy was significantly influenced by the method, the location of the target, and their interactions ($method{\times}location$, $size{\times}location$). Analysis of the error data, response times, and frequencies of under-, exact, and over-estimation indicated that a systematic error pattern arose in the task, and that the methods changed not only the performance but also the pattern. The results support the importance of multiple resources theory in accounting for S-C-R compatibility and task performance. They are discussed in terms of multiple resources theory, and guidelines for system design based on S-C-R compatibility are suggested.

A Study on the Application of Task Offloading for Real-Time Object Detection in Resource-Constrained Devices (자원 제약적 기기에서 자율주행의 실시간 객체탐지를 위한 태스크 오프로딩 적용에 관한 연구)

  • Jang Shin Won;Yong-Geun Hong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.12
    • /
    • pp.363-370
    • /
    • 2023
  • Object detection technology that accurately recognizes the road and surrounding conditions is a key technology in the field of autonomous driving, where inference services require real-time performance as well as accuracy. To achieve both on resource-constrained devices rather than high-performance machines, task offloading should be utilized. In this paper, we conducted experiments comparing performance with and without task offloading, across input image resolutions, and across camera object resolutions, and analyzed the results with respect to applying task offloading for real-time object detection in autonomous driving on resource-constrained devices. For low-resolution images, the task offloading structure improved performance and met the real-time requirements of autonomous driving. For high-resolution images, performance also improved, but the real-time requirements were not met because of the increase in communication time. These experiments confirmed that object recognition in autonomous driving is affected by various conditions, such as the input images and the communication environment, along with the object recognition model used.
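
The trade-off the abstract describes (offloading helps at low resolution but transfer time dominates at high resolution) comes down to a simple latency comparison. A hedged sketch of that decision, with all numbers and parameter names purely illustrative:

```python
def should_offload(frame_bytes, bandwidth_bps, server_ms, local_ms, budget_ms):
    """Offload only if transfer plus server-side inference is both faster
    than local inference and within the real-time latency budget."""
    transfer_ms = frame_bytes * 8 / bandwidth_bps * 1000  # one-way transfer time
    offload_ms = transfer_ms + server_ms
    if offload_ms < local_ms and offload_ms <= budget_ms:
        return True, offload_ms
    return False, local_ms

# A small low-resolution frame fits the budget; a high-resolution frame
# does not, because its transfer time alone exceeds it.
low = should_offload(50_000, 10_000_000, server_ms=20, local_ms=120, budget_ms=100)
high = should_offload(2_000_000, 10_000_000, server_ms=20, local_ms=120, budget_ms=100)
```

With these assumed figures the 50 KB frame takes 40 ms to transfer (60 ms total offloaded), while the 2 MB frame takes 1600 ms, reproducing the qualitative result reported in the experiments.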

Survey on Nucleotide Encoding Techniques and SVM Kernel Design for Human Splice Site Prediction

  • Bari, A.T.M. Golam;Reaz, Mst. Rokeya;Choi, Ho-Jin;Jeong, Byeong-Soo
    • Interdisciplinary Bio Central
    • /
    • v.4 no.4
    • /
    • pp.14.1-14.6
    • /
    • 2012
  • Splice site prediction in DNA sequences is a basic search problem for finding exon/intron and intron/exon boundaries. Removing the introns and joining the exons together forms the mRNA sequence, which is the input to the translation process; this is a necessary step in the central dogma of molecular biology. The main task of splice site prediction is to find candidate sequences ending in GT or AG, and then to distinguish the true splice sites from the false ones among those candidates. In this paper, we survey research on splice site prediction based on support vector machines (SVMs). The basic differences among these works lie in the nucleotide encoding technique and the SVM kernel selection. Some methods encode the DNA sequence in a sparse way, whereas others encode it probabilistically. The encoded sequences serve as the input of the SVM, whose task is to classify them using its learned model. The classification accuracy depends largely on selecting a proper kernel for sequence data, as well as on the kernel parameters. We examine each encoding technique and classify the techniques according to their similarity, and then discuss kernel and parameter selection. This survey provides a basic understanding of the encoding approaches and of proper SVM kernel selection for splice site prediction.
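
The "sparse" encoding the survey mentions is usually one-hot encoding: each nucleotide becomes a 4-dimensional indicator vector, so a sequence of length n becomes a 4n-dimensional SVM input. A minimal sketch (the zero-vector fallback for ambiguous bases such as N is an assumption, not something the survey specifies):

```python
def one_hot_encode(seq):
    """Sparse (one-hot) nucleotide encoding: each base maps to a
    4-bit indicator, yielding a 4*len(seq)-dimensional feature vector."""
    table = {'A': [1, 0, 0, 0], 'C': [0, 1, 0, 0],
             'G': [0, 0, 1, 0], 'T': [0, 0, 0, 1]}
    vec = []
    for base in seq.upper():
        vec.extend(table.get(base, [0, 0, 0, 0]))  # unknown base -> all zeros
    return vec
```

A probabilistic encoding would instead replace each indicator with position-specific nucleotide frequencies estimated from training data; the vector shape stays the same, which is why the kernel choice rather than the dimensionality distinguishes the methods.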

Reliability and Validity of the CAP for Computer Access Assessment of Persons with Physical Disabilities

  • Jeong, Dong-Hoon
    • The Journal of Korean Physical Therapy
    • /
    • v.27 no.1
    • /
    • pp.30-37
    • /
    • 2015
  • Purpose: The purpose of this study was to develop a computer access assessment tool for persons with physical disabilities and to evaluate its reliability and validity. Methods: We developed a computerized Computer access Assessment Program (CAP) through an extensive review of the literature and of existing tools for evaluating computer access, a task analysis of fundamental input-device operation, and expert review. CAP data were obtained from 105 university students without disabilities and 16 students with physical disabilities. The CAP test items comprise four timed mouse tasks, four timed keyboard tasks, and a timed scanning task; the software thus measures user performance in the skills needed for computer interaction, such as keyboard and pointer use, navigating through menus, and scanning. To determine the validity of these measurements, we compared CAP reports with Compass reports; the Compass software allows an evaluator to assess an individual's computer input skills. Results: The CAP showed high internal consistency, test-retest reliability, concurrent validity, and convergent validity. Conclusion: The CAP is therefore appropriate for evaluating the computer access skills of persons with physical disabilities. CAP data make it possible to obtain clear quantitative performance measures when providing computer access services; from this quantitative evidence, insight can be gained into the specific nature of any difficulties experienced by persons with physical disabilities, and sound solutions can be found.
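
Internal consistency of a multi-item instrument such as the CAP is conventionally quantified with Cronbach's alpha; the abstract does not name the statistic used, so the following is a general-purpose sketch, not the study's analysis code:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.
    item_scores: list of items, each a list of one score per participant."""
    k = len(item_scores)                      # number of test items
    n = len(item_scores[0])                   # number of participants

    def variance(xs):                         # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_var / variance(totals))
```

When every item ranks participants identically, alpha reaches 1; values around 0.7-0.9 are typically read as acceptable-to-high internal consistency.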

Quantitative Assessment of Input and Integrated Information in GIS-based Multi-source Spatial Data Integration: A Case Study for Mineral Potential Mapping

  • Kwon, Byung-Doo;Chi, Kwang-Hoon;Lee, Ki-Won;Park, No-Wook
    • Journal of the Korean earth science society
    • /
    • v.25 no.1
    • /
    • pp.10-21
    • /
    • 2004
  • Recently, spatial data integration has been regarded as an important task in various geoscientific applications of GIS. Although much research has been reported in the literature, quantitative assessment of the spatial interrelationship between the input data layers and the integrated layer has not been considered fully and is still under development. To address this, we propose methodologies that account for spatial interrelationships and spatial patterns in the spatial integration task: a multi-buffer zone analysis and a statistical analysis based on a contingency table. The main part of our work, the multi-buffer zone analysis, was applied to reveal the spatial pattern around geological source primitives, and the statistical analysis was performed to extract information for assessing the integrated layer. Mineral potential mapping using multi-source geoscience data sets from Ogdong, Korea, was carried out to illustrate the application of this methodology.
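
The multi-buffer zone idea (counting how target features, e.g. mineral occurrences, distribute across successive distance rings around geological source primitives) can be sketched as follows. This is a toy point-based version under assumed Euclidean coordinates; the paper's analysis operates on GIS layers:

```python
def multi_buffer_counts(source_points, target_points, buffer_widths):
    """Count targets falling in successive ring-shaped buffer zones
    around the source primitives. buffer_widths must be increasing."""
    def min_dist(p):
        # Distance from a target point to the nearest source primitive.
        return min(((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2) ** 0.5
                   for s in source_points)

    counts, prev = [], 0.0
    for width in buffer_widths:
        counts.append(sum(1 for p in target_points
                          if prev <= min_dist(p) < width))
        prev = width
    return counts
```

A falling count profile with distance would indicate spatial association between the targets and the source primitives, which is the kind of pattern the analysis is designed to reveal.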

Fast and Robust Face Detection based on CNN in Wild Environment (CNN 기반의 와일드 환경에 강인한 고속 얼굴 검출 방법)

  • Song, Junam;Kim, Hyung-Il;Ro, Yong Man
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.8
    • /
    • pp.1310-1319
    • /
    • 2016
  • Face detection is the first step in a wide range of face applications. However, detecting faces in the wild remains a challenging task because of wide variations in pose, scale, and occlusion. Recently, many deep learning methods have been proposed for face detection, but further improvements are required in the wild. Another important issue in face detection is computational complexity: current state-of-the-art deep learning methods require a large number of patches to deal with varying scales and arbitrary image sizes, which increases the computational cost. To reduce the complexity while achieving better detection accuracy, we propose a fully convolutional network-based face detector that can take arbitrarily sized input and produce feature maps (heat maps) corresponding to the input image size. To deal with various face scales, we propose a multi-scale network architecture that utilizes the facial components when learning the feature maps, and on top of it we design a multi-task learning technique to improve detection performance. Extensive experiments on the FDDB dataset show that the proposed method outperforms state-of-the-art methods with an accuracy of 82.33% at 517 false alarms, while significantly improving computational efficiency.
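
The reason a fully convolutional network avoids per-patch evaluation is that a convolution applies the same filter at every position of an arbitrarily sized input in one pass, producing a heat map whose peaks locate detections. A minimal single-filter sketch of that property (toy lists in place of learned CNN feature maps and filters):

```python
def conv2d_valid(image, kernel):
    """Fully convolutional scoring: slides one filter over an input of any
    size and returns a (H-kh+1) x (W-kw+1) heat map of responses."""
    kh, kw = len(kernel), len(kernel[0])
    heat = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        heat.append(row)
    return heat
```

Because the output size tracks the input size, no fixed-size patch extraction is needed; the paper stacks such layers at multiple scales, whereas this sketch shows only one filter at one scale.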