• Title/Summary/Keyword: 에지 방향 (edge direction)


Using Optical Flow and HoG for Nighttime PDS (야간 PDS를 위한 광학 흐름과 기울기 방향 히스토그램 이용 방법)

  • Cho, Hi-Tek; Yoo, Hyeon-Joong; Kim, Hyoung-Suk; Hwang, Jeng-Neng
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.7 / pp.1556-1567 / 2009
  • The death rate of pedestrians in car accidents in Korea is 2.5 times the average of the OECD countries. If a system that detects pedestrians and alerts drivers can reduce that rate, such a pedestrian detection system (PDS) is worth developing. Since the accident rate involving pedestrians is higher at nighttime than in daytime, nighttime PDS is being adopted as standard equipment by major auto makers. However, these systems usually rely on night-vision devices or multiple sensors, which are expensive. In this paper we suggest a method for nighttime PDS using a single wide dynamic range (WDR) monochrome camera operating in the visible spectrum band. In our experiments, pedestrians were accurately detected whenever most of their edges could be obtained.
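
The paper combines optical flow with HoG features. As a minimal sketch of the HoG side alone, the example below uses OpenCV's stock HoG descriptor with its pretrained pedestrian SVM, not the authors' own detector or their WDR nighttime pipeline; the file path is a placeholder:

```python
import cv2

# Load a (nighttime) frame in grayscale; "frame.png" is a placeholder path.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Stock HoG descriptor with OpenCV's pretrained pedestrian SVM; the paper
# trains on WDR nighttime imagery, which this sketch does not reproduce.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Sliding-window detection over an image pyramid.
rects, weights = hog.detectMultiScale(img, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)
```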

Implementation of Vision System for Measuring Earing Rate of Aluminium CAN (알루미늄 캔재의 이어링률 측정을 위한 비젼 시스템 구현)

  • Lee Yang-Bum; Shin Seen-Beom
    • Journal of the Institute of Convergence Signal Processing / v.6 no.1 / pp.8-14 / 2005
  • The implementation of a vision system using a CCD camera to measure the earing rate of the aluminium CAN is presented in this paper. In order to optimize the input image, the object in the input image is separated and the position of the image is calibrated. In the preprocessing, the clarity of the image is improved by histogram equalization, and the edges of the input image are then detected with the Roberts mask. In industry, the heights of the four ears and the angles of the aluminium CAN are measured manually with digital vernier calipers; measuring the height along one direction of the CAN manually, at least three times, takes 30 seconds. With the system proposed in this paper, it takes only 0.02 seconds. In conclusion, the efficiency of the proposed system is higher than that of the system currently used in industry.
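
The two preprocessing steps named above are standard and easy to sketch. Below is a minimal NumPy version of histogram equalization followed by Roberts-cross edge detection; thresholds and any camera-specific calibration are omitted:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Classic histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())  # stretch to 0..255
    return cdf[img].astype(np.uint8)

def roberts_edges(img: np.ndarray) -> np.ndarray:
    """Edge magnitude using the 2x2 Roberts cross masks."""
    f = img.astype(np.float64)
    gx = f[:-1, :-1] - f[1:, 1:]   # mask [[1, 0], [0, -1]]
    gy = f[:-1, 1:] - f[1:, :-1]   # mask [[0, 1], [-1, 0]]
    return np.hypot(gx, gy)
```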


Extending the Abstraction Capability of BPMN by Introducing Vertical Abstraction (수직적 추상의 도입에 의한 BPMN 추상기능의 확장)

  • Kang, Sung-Won; Lee, Dan-Hyung; Ahn, Yu-Whoan
    • The KIPS Transactions: Part D / v.16D no.2 / pp.223-236 / 2009
  • BPMN is a standard business process description notation developed by the OMG. It allows the user to take an abstract view of a process that hides its details through the Collapsed Sub-Process notation. While this is a useful direction of abstraction, which can be called horizontal abstraction, a different kind of abstraction, vertical abstraction, is necessary when different stakeholders of a business would like to view the business process from their own viewpoints of interest. For example, stakeholders may want to see a process from the viewpoint of a particular group of actors or from the viewpoint of a certain set of goals. This paper first extends the horizontal abstraction capability of BPMN by introducing the notion of a super edge, and then adds a vertical abstraction capability by introducing the notions of 'aspect attribute' and 'interest specification', together with notations for them.
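
As a loose illustration of the idea behind horizontal abstraction (not the paper's BPMN notation or its super-edge semantics), the sketch below collapses a group of nodes in a toy process graph into a single abstract node, the way a Collapsed Sub-Process hides its internals; all names are hypothetical:

```python
def collapse(edges: set[tuple[str, str]], group: set[str], name: str) -> set[tuple[str, str]]:
    """Replace every node in `group` with the single abstract node `name`,
    dropping edges internal to the group and rerouting boundary edges."""
    out = set()
    for u, v in edges:
        u2 = name if u in group else u
        v2 = name if v in group else v
        if u2 != v2:              # drop edges fully inside the group
            out.add((u2, v2))
    return out

# Toy process graph: collapse the review loop into one sub-process node.
flow = {("start", "draft"), ("draft", "review"), ("review", "revise"),
        ("revise", "review"), ("review", "approve"), ("approve", "end")}
print(collapse(flow, {"review", "revise"}, "ReviewSubProcess"))
```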

Detection of Artificial Caption using Temporal and Spatial Information in Video (시·공간 정보를 이용한 동영상의 인공 캡션 검출)

  • Joo, SungIl; Weon, SunHee; Choi, HyungIl
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.115-126 / 2012
  • Artificial captions appearing in videos carry information related to the videos. In order to obtain the information carried by captions, many methods for caption extraction from videos have been studied. Most traditional methods detect caption regions using a single frame. However, video contains not only spatial but also temporal information, so we propose a method of detecting caption regions that uses both. First, we build an improved Text-Appearance-Map and detect temporally continuous candidate regions by matching candidate regions across frames. Second, we detect disappearing captions by applying a disappearance test to the candidate regions. When captions disappear, the caption regions are decided by a merging process that uses temporal and spatial information. Finally, we determine the final caption regions through ANNs that use edge direction histograms for verification. The proposed method was tested on many kinds of captions with a variety of sizes, shapes, and positions, and the results were evaluated in terms of recall and precision.
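
For the verification step, the paper feeds edge direction histograms to ANNs. A plausible NumPy sketch of such a histogram for a candidate region follows; the gradient operator and bin count are assumptions, not the authors' exact choices:

```python
import numpy as np

def edge_direction_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Histogram of gradient directions, weighted by edge magnitude."""
    f = img.astype(np.float64)
    gy, gx = np.gradient(f)                      # simple finite-difference gradients
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                     # direction in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)            # normalize so regions are comparable
```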

Method for Road Vanishing Point Detection Using DNN and Hog Feature (DNN과 HoG Feature를 이용한 도로 소실점 검출 방법)

  • Yoon, Dae-Eun; Choi, Hyung-Il
    • The Journal of the Korea Contents Association / v.19 no.1 / pp.125-131 / 2019
  • A vanishing point is a point on an image at which parallel lines projected from real space converge. A vanishing point in a road scene provides important spatial information: it can be used to refine an extracted lane or to generate a depth map. In this paper, we propose a method of detecting vanishing points in images taken from a vehicle's point of view using a Deep Neural Network (DNN) and the Histogram of Oriented Gradients (HoG). The proposed algorithm is divided into a HoG feature extraction step, in which edge directions are extracted by dividing the image into blocks, a DNN learning step, and a test step. In the learning stage, training is performed on 2,300 road images taken from a vehicle's point of view. In the test phase, the performance of the proposed algorithm is measured using the Normalized Euclidean Distance (NormDist).
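
A hedged sketch of block-wise HoG feature extraction followed by a small neural network regressor is shown below, using scikit-image and scikit-learn with synthetic stand-in data; the cell sizes, layer sizes, and training setup are placeholders rather than the paper's configuration:

```python
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPRegressor

def hog_features(img: np.ndarray) -> np.ndarray:
    # Block-wise edge-direction histograms; cell/block sizes are guesses.
    return hog(img, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

rng = np.random.default_rng(0)
imgs = rng.random((20, 128, 128))                 # stand-in "road images"
X = np.array([hog_features(im) for im in imgs])
y = rng.random((20, 2))                           # stand-in (x, y) vanishing points

# Small fully connected regressor; the paper's DNN architecture is not given here.
model = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=500)
model.fit(X, y)
pred = model.predict(X[:1])                       # predicted vanishing point
```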

VVC Intra Triangular Partitioning Prediction for Screen Contents (스크린 콘텐츠를 위한 VVC 화면내 삼각형 분할 예측 방법)

  • Choe, Jaeryun; Gwon, Daehyeok; Han, Heeji; Lee, Hahyun; Kang, Jungwon; Choi, Haechul
    • Journal of Broadcast Engineering / v.25 no.3 / pp.325-337 / 2020
  • Versatile Video Coding (VVC) is a new video coding standard developed by the Joint Video Experts Team of ISO/IEC and ITU-T, and it has adopted various technologies including screen content coding tools. Screen content has the characteristic that blocks are likely to contain diagonal edges, as in character regions. If triangular partitioning were allowed for screen content with this characteristic, coding efficiency would increase. This paper proposes an intra prediction method using triangular partitioning for screen content coding. Similar to the Triangular Prediction Mode of VVC, which supports triangular partitioning prediction, the proposed method derives two prediction blocks using the Horizontal and Vertical modes and then blends them by applying triangle-shaped masks to generate the final prediction block. In experiments on the VVC screen content test sequences, the proposed method showed average coding gains of 1.86%, 1.49%, and 1.55% for the Y, U, and V components, respectively.
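
The core blending operation is simple to illustrate. The sketch below derives horizontal and vertical intra predictions for a toy 8x8 block and blends them with a hard triangular mask; the actual proposal would define the masks (and likely a tapered blend near the diagonal) per the VVC design, which this does not reproduce:

```python
import numpy as np

def triangular_blend(pred_h: np.ndarray, pred_v: np.ndarray) -> np.ndarray:
    """Blend two square prediction blocks along the main diagonal with a
    hard 0/1 triangular mask (no taper, unlike a real codec blend)."""
    n = pred_h.shape[0]
    mask = np.tril(np.ones((n, n)))            # 1 on and below the diagonal
    return mask * pred_h + (1.0 - mask) * pred_v

# Toy example: horizontal mode repeats the left column, vertical the top row.
left = np.arange(8, dtype=float).reshape(8, 1)
top = np.arange(8, dtype=float).reshape(1, 8)
pred_h = np.repeat(left, 8, axis=1)            # horizontal intra prediction
pred_v = np.repeat(top, 8, axis=0)             # vertical intra prediction
print(triangular_blend(pred_h, pred_v))
```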

Region Analysis of Business Card Images Acquired in PDA Using DCT and Information Pixel Density (DCT와 정보 화소 밀도를 이용한 PDA로 획득한 명함 영상에서의 영역 해석)

  • 김종흔; 장익훈; 김남철
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.8C / pp.1159-1174 / 2004
  • In this paper, we present an efficient algorithm for region analysis of business card images acquired with a PDA, using the DCT and information pixel density. The proposed method consists of three parts: region segmentation, information region classification, and text region classification. In the region segmentation, an input business card image is partitioned into 8 × 8 blocks, and the blocks are classified into information and background blocks using the normalized DCT energy in their low-frequency bands. The input image is then segmented into information and background regions by region labeling on the classified blocks. In the information region classification, each information region is classified as a picture region or a text region by using the ratio of the DCT energy of the horizontal and vertical edge components to that of the low-frequency band, together with the density of information pixels, which are the black pixels in the binarized region. In the text region classification, each text region is classified as a large character region or a small character region by using the density of information pixels and the average horizontal and vertical run-lengths of information pixels. Experimental results show that the proposed method performs well in region segmentation, information region classification, and text region classification for test images of several types of business cards acquired by a PDA under various surrounding conditions. In addition, the error rates of the proposed region segmentation are about 2.2-10.1% lower than those of conventional region segmentation methods, and the error rate of the proposed information region classification is about 1.7% lower than that of the conventional method.
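
A minimal sketch of the block classification step is shown below, assuming a 4x4 low-frequency band and an arbitrary energy threshold (both are guesses, not the paper's parameters):

```python
import numpy as np
from scipy.fft import dctn

def is_information_block(block: np.ndarray, threshold: float = 50.0) -> bool:
    """Classify an 8x8 block as information vs. background by the energy
    of its low-frequency DCT coefficients (band and threshold are guesses)."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    low = coeffs[:4, :4].copy()
    low[0, 0] = 0.0                    # ignore the DC term (mean brightness)
    return float(np.sum(low ** 2)) > threshold
```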

New Prefiltering Methods based on a Histogram Matching to Compensate Luminance and Chrominance Mismatch for Multi-view Video (다시점 비디오의 휘도 및 색차 성분 불일치 보상을 위한 히스토그램 매칭 기반의 전처리 기법)

  • Lee, Dong-Seok; Yoo, Ji-Sang
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.6 / pp.127-136 / 2010
  • In multi-view video, an illumination mismatch between neighboring views can occur on account of the different locations of the cameras, imperfect camera calibration, and so on. Such a discrepancy can degrade the performance of multi-view video coding, because inter-view prediction refers to pictures obtained from neighboring views at the same time instant. In this paper, we propose an efficient histogram-based prefiltering algorithm that compensates for mismatches between the luminance and chrominance components of multi-view video in order to improve its coding efficiency. To compensate for the illumination variation, all camera frames of a multi-view sequence are adjusted to a predefined reference through histogram matching. A cosited filter, which is used for chroma subsampling in many video encoding schemes, is applied to each color component prior to histogram matching to improve its performance. The histogram matching is carried out in the RGB color space after conversion from the YCbCr color space; an effective color conversion technique that takes the edge direction and the pixel value range of the image into account is employed in this process. Experimental results show that the compression ratio of the proposed algorithm is improved compared with other methods.
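
Histogram matching itself is the classical CDF-mapping construction; a per-channel NumPy sketch is below. The cosited filtering and the edge-aware color conversion described above are not reproduced:

```python
import numpy as np

def match_histogram(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Map 8-bit channel `src` so its histogram matches that of `ref`
    via the standard CDF-matching construction."""
    src_hist = np.bincount(src.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(ref.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # For each source level, pick the reference level with the closest CDF.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]
```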

Scene Text Extraction in Natural Images using Hierarchical Feature Combination and Verification (계층적 특징 결합 및 검증을 이용한 자연이미지에서의 장면 텍스트 추출)

  • 최영우; 김길천; 송영자; 배경숙; 조연희; 노명철; 이성환; 변혜란
    • Journal of KIISE: Software and Applications / v.31 no.4 / pp.420-438 / 2004
  • Texts contained artificially or naturally in natural images carry significant and detailed information about the scenes. A method that can extract and recognize those texts in real time could be applied to many important applications. In this paper, we suggest a new method that extracts text areas in natural images using the low-level image features of color continuity, gray-level variation, and color variance, and that verifies the extracted candidate regions using a high-level text feature, the stroke; the two levels of features are combined hierarchically. Color continuity is used since most of the characters in the same text region have the same color, and gray-level variation is used since text strokes are distinctive in their gray values against the background. Color variance is used since text strokes are also distinctive in their color values against the background, and this measure is more sensitive than the gray-level variation. The stroke features are extracted by applying a multi-resolution wavelet transform to local image areas, and the resulting feature vectors are fed to an SVM (Support Vector Machine) classifier for verification. We have tested the proposed method on various kinds of natural images and confirmed that the extraction rates are very high even for images with complex backgrounds.
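
As a rough sketch of the verification stage, the example below computes wavelet detail-subband energies as stand-in stroke features and feeds them to an SVM; the wavelet, decomposition levels, feature design, and training data are all assumptions:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_stroke_features(patch: np.ndarray, levels: int = 2) -> np.ndarray:
    """Mean absolute detail coefficients of a multi-resolution Haar transform,
    a crude stand-in for the paper's stroke features."""
    coeffs = pywt.wavedec2(patch.astype(np.float64), "haar", level=levels)
    feats = []
    for detail in coeffs[1:]:                 # (cH, cV, cD) per level
        feats.extend(np.mean(np.abs(d)) for d in detail)
    return np.array(feats)

# Synthetic stand-in patches and labels (1 = text, 0 = non-text).
rng = np.random.default_rng(1)
X = np.array([wavelet_stroke_features(rng.random((32, 32))) for _ in range(10)])
y = np.array([0, 1] * 5)

clf = SVC(kernel="rbf")
clf.fit(X, y)                                 # verify candidates with the SVM
```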

Development of Signal Processing Circuit for Side-absorber of Dual-mode Compton Camera (이중 모드 컴프턴 카메라의 측면 흡수부 제작을 위한 신호처리회로 개발)

  • Seo, Hee; Park, Jin-Hyung; Park, Jong-Hoon; Kim, Young-Su; Kim, Chan-Hyeong; Lee, Ju-Hahn; Lee, Chun-Sik
    • Journal of Radiation Protection and Research / v.37 no.1 / pp.16-24 / 2012
  • In the present study, a gamma-ray detector and an associated signal processing circuit were developed for the side-absorber of a dual-mode Compton camera. The gamma-ray detector was made by optically coupling a CsI(Tl) scintillation crystal to a silicon photodiode. The developed signal processing circuit consists of two parts: a slow part for energy measurement and a fast part for timing measurement. The fast part has three components: (1) a fast shaper, (2) a leading-edge discriminator, and (3) a TTL-to-NIM logic converter. An AC-coupling configuration between the detector and the front-end electronics (FEE) was used, and because the noise properties of the FEE can significantly affect the overall performance of the detection system, some design criteria are presented. The performance of the developed system was evaluated in terms of energy and timing resolution: the energy resolution was 12.0% and 15.6% FWHM for the 662 and 511 keV peaks, respectively, and the timing resolution was 59.0 ns. In conclusion, methods to improve the performance are discussed, since the developed gamma-ray detection system showed performance that is usable but not yet satisfactory for the Compton camera application.
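
Of the three fast-part components, the leading-edge discriminator is easy to model in software. The sketch below timestamps the first threshold crossing of a sampled pulse (amplitude-dependent "time walk" is the known limitation of leading-edge timing); the pulse shape and threshold are arbitrary:

```python
import numpy as np

def leading_edge_time(signal: np.ndarray, dt: float, threshold: float) -> float:
    """Timestamp of the first threshold crossing, with linear interpolation
    between samples, as a leading-edge discriminator would produce."""
    above = np.flatnonzero(signal >= threshold)
    if above.size == 0:
        return float("nan")                       # no trigger
    i = above[0]
    if i == 0:
        return 0.0
    frac = (threshold - signal[i - 1]) / (signal[i] - signal[i - 1])
    return (i - 1 + frac) * dt

# Toy shaped pulse sampled at 1 ns; the trigger time shifts with amplitude.
t = np.arange(0, 1e-6, 1e-9)
pulse = np.exp(-(t - 3e-7) ** 2 / (2 * (5e-8) ** 2))
print(leading_edge_time(pulse, 1e-9, 0.5))
```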