• Title/Summary/Keyword: Visual Algorithm

Search Results: 1,422

Automated Generation of Multi-Scale Map Database for Web Map Services (웹 지도서비스를 위한 다축척 지도 데이터셋 자동생성 기법 연구)

  • Park, Woo Jin;Bang, Yoon Sik;Yu, Ki Yun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.5
    • /
    • pp.435-444
    • /
    • 2012
  • Although a multi-scale map database should be constructed for web map services and location-based services, much of the generation process is still based on manual editing. In this study, a map generalization methodology for the automatic construction of a multi-scale database from primary data is proposed. The methodology is applied to real map data, and a prototype multi-scale map dataset is generated. Among the generalization operators, selection/elimination, simplification, and amalgamation/aggregation are applied in an organized manner. The algorithms and parameters for generalization are determined experimentally, considering Töpfer's radical law, the minimum drawable object of the map, and visual aspects. Five target scale levels are used (1:1,000, 1:5,000, 1:25,000, 1:100,000, 1:500,000), and new address data and the digital topographic map serve as target data.
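
The selection/elimination step above is governed by Töpfer's radical law, which predicts how many features survive a change of scale: n_f = n_a·√(M_a/M_f), where M_a and M_f are the source and target scale denominators. A minimal sketch of the law itself (the paper's actual parameters are determined experimentally and are not reproduced here):

```python
import math

def topfer_feature_count(n_source, scale_source, scale_target):
    """Topfer's radical law: number of features retained when
    generalizing from a source scale to a target scale.
    Scales are given as denominators, e.g. 1000 for 1:1,000."""
    return n_source * math.sqrt(scale_source / scale_target)

# How many of 10,000 features at 1:1,000 survive at each target level?
for scale in (1000, 5000, 25000, 100000, 500000):
    print(f"1:{scale:,} -> {round(topfer_feature_count(10000, 1000, scale))}")
```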

A GCST-based Digital Image Watermarking Scheme (GCST 기반 디지털 영상 워터마킹 방법)

  • Lee, Juck-Sik
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.13 no.3
    • /
    • pp.142-149
    • /
    • 2012
  • Various image transformations can be used to compress images, to reduce noise in images, and to extract useful features. Watermarking techniques using the DCT and DWT have attracted considerable research interest with the spread of multimedia content. In this paper, the Gabor cosine and sine transform (GCST), regarded as a human visual filter, is applied to the embedding and extraction of watermarks in digital images. The proposed transform is evaluated for watermarking under fifteen attacks. Random, normally distributed noise is used as the embedded watermark. To measure the similarity between the embedded and extracted watermarks, a correlation value is computed and compared with that of the existing DCT method. Correlation values of the extracted watermark are computed against random normally distributed noise sequences, and the sequence with the largest correlation value is declared the embedded watermark. The frequency components are divided into several bands. Experimental results for the low- and mid-frequency bands show that the proposed GCST provides a good watermarking algorithm whose performance is better than that of the DCT.
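
The detection rule described above (declare the candidate sequence with the largest correlation) can be sketched generically. The GCST itself is not reproduced here, so ordinary random coefficients stand in for the transform band, and the embedding strength `alpha` is an illustrative value:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(coeffs, watermark, alpha):
    # Additive spread-spectrum embedding into transform coefficients.
    return coeffs + alpha * watermark

def detect(coeffs_marked, candidates):
    # Correlate against every candidate noise sequence; the sequence
    # with the largest correlation is declared the embedded watermark.
    corrs = [np.corrcoef(coeffs_marked, w)[0, 1] for w in candidates]
    return int(np.argmax(corrs))

coeffs = rng.normal(0.0, 10.0, 1024)   # stand-in for mid-band coefficients
candidates = [rng.normal(0.0, 1.0, 1024) for _ in range(100)]
marked = embed(coeffs, candidates[42], alpha=2.0)
print(detect(marked, candidates))      # recovers the embedded index, 42
```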

Computer Vision Based Measurement, Error Analysis and Calibration (컴퓨터 시각(視覺)에 의거한 측정기술(測定技術) 및 측정오차(測定誤差)의 분석(分析)과 보정(補正))

  • Hwang, H.;Lee, C.H.
    • Journal of Biosystems Engineering
    • /
    • v.17 no.1
    • /
    • pp.65-78
    • /
    • 1992
  • When a computer vision system is used for measurement, the geometrically distorted input image usually restricts the site and size of the measuring window. A geometrically distorted image, caused by the image sensing and processing hardware, degrades the accuracy of the visual measurement and prohibits arbitrary selection of the measuring scope. Therefore, image calibration is inevitable to improve measuring accuracy. A calibration process is usually done in four steps: measurement, modeling, parameter estimation, and compensation. In this paper, an efficient error calibration technique for a geometrically distorted input image was developed using a neural network. After calibrating a unit pixel, the distorted image was compensated by training a CMLAN (Cerebellar Model Linear Associator Network) without modeling the behavior of any system element. The input/output training pairs for the network were obtained by processing the image of a devised sampled pattern. The generalization property of the network successfully compensates the distortion errors of untrained, arbitrary pixel points in the image space. The error convergence of the trained network with respect to the network control parameters is also presented. The compensated image was then post-processed using a simple DDA (Digital Differential Analyzer) to avoid pixel disconnectivity. The compensation effect was verified using geometric primitives of known size. A way to extract a real-scaled geometric quantity of the object directly from 8-directional chain coding was also devised and coded. Since the developed calibration algorithm requires no knowledge of system-element modeling or parameter estimation, it can be applied simply to any image processing system. Furthermore, it efficiently enhances measurement accuracy and allows arbitrary sizing and locating of the measuring window. The applied and developed algorithms were coded as a menu-driven program using the MS-C language Ver. 6.0, PC VISION PLUS library functions, and VGA graphic functions.
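
Extracting a real-scaled geometric quantity from the 8-directional chain code can be illustrated with boundary length: diagonal codes contribute √2 pixel widths, scaled by the calibrated pixel size. The direction numbering below is the usual convention, not necessarily the paper's:

```python
import math

# 8-directional chain code: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
STEP = {0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
        4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1)}

def perimeter(chain, pixel_size_mm=1.0):
    """Real-scaled boundary length from an 8-directional chain code.
    Diagonal moves cover sqrt(2) pixel widths; pixel_size_mm is the
    calibrated size of one (distortion-compensated) pixel."""
    length = 0.0
    for code in chain:
        dx, dy = STEP[code]
        length += math.hypot(dx, dy)
    return length * pixel_size_mm

# Closed square of side 4 pixels traced counter-clockwise
square = [0] * 4 + [2] * 4 + [4] * 4 + [6] * 4
print(perimeter(square, pixel_size_mm=0.5))  # 16 unit steps * 0.5 mm = 8.0 mm
```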


Study on Proportional Reasoning in Elementary School Mathematics (초등학교 수학 교과에서의 비례 추론에 대한 연구)

  • Jeong, Eun Sil
    • Journal of Educational Research in Mathematics
    • /
    • v.23 no.4
    • /
    • pp.505-516
    • /
    • 2013
  • The purpose of this paper is to analyze the essence of proportional reasoning, to analyze the contents of textbooks written according to the mathematics curriculum revised in 2007, and to seek a direction for developing proportional reasoning in elementary school mathematics, focused on task variables. The analysis finds that proportional reasoning is a form of qualitative and quantitative reasoning related to ratio, rate, and proportion, and that it involves a sense of covariation and multiple comparison. Mathematics textbooks written according to the 2007 revised curriculum are examined mainly with respect to the characteristics of proportional reasoning. It is found that some tasks related to proportional reasoning were reduced or deleted, and that tasks were approached numerically and algorithmically. It should be recognized that mechanical methods for solving proportions, such as the cross-product algorithm, do not develop proportional reasoning; tasks in a wide range of contexts, including visual models, should be provided.
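
The contrast drawn above can be made concrete: both routines below solve the proportion a/b = c/x, but only the second makes the scale factor explicit. The pencil-price task is an invented example, not one from the textbooks analyzed:

```python
from fractions import Fraction

def solve_cross_product(a, b, c):
    # Mechanical cross-product algorithm: a/b = c/x  =>  x = b*c/a
    return Fraction(b) * c / a

def solve_scale_factor(a, b, c):
    # Proportional reasoning: c is (c/a) times a, so x is (c/a) times b
    factor = Fraction(c, a)
    return factor * b

# "3 pencils cost 450 won; what do 5 pencils cost?"
print(solve_cross_product(3, 450, 5))  # 750
print(solve_scale_factor(3, 450, 5))   # 750, with the scale factor 5/3 visible
```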


A Study on the Effective Algorithms for Line Generalization (선형성 지형자료의 일반화에 대한 효율적인 알고리즘에 관한 연구)

  • 김감래;이호남
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.12 no.1
    • /
    • pp.43-52
    • /
    • 1994
  • This paper outlines a new approach to line generalization when preparing a small-scale map on the basis of an existing large-scale digital map. Line generalization was conducted based on the Douglas algorithm, using 1:25,000-scale topographic maps of southeastern Jeju Island produced by the National Geographic Institute, to analyze fitness to the original and problems of graphical representation. Compared to the same-scale map generated by the manual method, a variety of small but sometimes significant errors and modifications of topological relationships were detected. The research gives full details of three algorithms that operationalize the smallest-visible-object method, together with some empirical results. A comparison of the results produced by the new algorithms with those produced by manual generalization and by the Douglas data-reduction method is provided. This paper also presents preliminary results on the relationship between the size of the smallest visible object and the data storage required by each algorithm.
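
The Douglas algorithm referred to above is the standard Douglas(-Peucker) reduction: keep the point farthest from the anchor-floater segment if its offset exceeds a tolerance, and recurse on both halves. A minimal implementation (tolerance in map units; the paper's experimental settings are not reproduced):

```python
import math

def douglas_peucker(points, tolerance):
    """Simplify a polyline of (x, y) tuples with the Douglas-Peucker
    algorithm, keeping every point farther than `tolerance` from the
    chord between the current endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    seg = math.hypot(x2 - x1, y2 - y1)
    max_d, index = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        if seg == 0.0:  # degenerate chord: fall back to point distance
            d = math.hypot(x0 - x1, y0 - y1)
        else:           # perpendicular distance from point to chord
            d = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / seg
        if d > max_d:
            max_d, index = d, i
    if max_d > tolerance:
        left = douglas_peucker(points[:index + 1], tolerance)
        right = douglas_peucker(points[index:], tolerance)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6),
        (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
print(douglas_peucker(line, 1.0))
```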


Depth Estimation and Intermediate View Synthesis for Three-dimensional Video Generation (3차원 영상 생성을 위한 깊이맵 추정 및 중간시점 영상합성 방법)

  • Lee, Sang-Beom;Lee, Cheon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.10B
    • /
    • pp.1070-1075
    • /
    • 2009
  • In this paper, we propose new depth estimation and intermediate view synthesis algorithms for three-dimensional video generation. In order to improve the temporal consistency of the depth map sequence, we add a temporal weighting function to the conventional matching function when computing the matching cost for depth estimation. In addition, we propose a boundary noise removal method for the view synthesis operation. After finding boundary noise areas using the depth map, we replace them with the corresponding texture information from the other reference image. Experimental results show that the proposed algorithm improves the temporal consistency of the depth sequence and reduces flickering artifacts in the virtual view. It also improves the visual quality of the synthesized virtual views by removing the boundary noise.
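
The temporal weighting idea can be sketched as an extra cost term that penalizes depth labels far from the previous frame's estimate; the weight `lam` and the random data costs below are illustrative, not the paper's:

```python
import numpy as np

def matching_cost(data_cost, prev_depth, lam):
    """Cost volume with a temporal consistency term:
    cost(x, d) = data_cost(x, d) + lam * |d - prev_depth(x)|.
    data_cost: H x W x D array, prev_depth: H x W integer labels."""
    D = data_cost.shape[2]
    d = np.arange(D).reshape(1, 1, D)
    return data_cost + lam * np.abs(d - prev_depth[..., None])

rng = np.random.default_rng(1)
data = rng.random((4, 4, 16))   # stand-in per-pixel matching costs
prev = np.full((4, 4), 8)       # previous frame's depth labels

depth_plain = data.argmin(axis=2)
depth_temporal = matching_cost(data, prev, lam=0.05).argmin(axis=2)
# The temporal term pulls each winner-take-all label toward prev,
# suppressing frame-to-frame flicker in otherwise ambiguous pixels.
```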

A Macroblock-Layer Rate Control with Adaptive Quantization Parameter Decision and Header Bits Length Estimation (적응적 양자화 파라미터 결정과 헤더 비트량 예측을 통한 매크로블록 단위 비트율 제어)

  • Kim, Se-Ho;Suh, Jae-Won
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.2C
    • /
    • pp.200-208
    • /
    • 2009
  • Macroblock-layer rate control for H.264/AVC has the problem that the target bits allocated to the current frame are occasionally exhausted too quickly due to inadequate quantization parameter assignment. In this case, the maximum permissible quantization parameter is used to encode the remaining macroblocks, which degrades visual quality. In addition, the header-bits estimation algorithm used for quantization parameter assignment takes the average header length over the encoded macroblocks of the previous and current frames, which creates a large mismatch between the actual and estimated header lengths. In this paper, we propose an adaptive quantization parameter decision method that prevents the target bits from being exhausted early while encoding the current frame, by considering the number of macroblocks with negative target bits in the previous frame, together with an improved header-bits estimation scheme for accurate quantization parameter decisions.
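
The abstract does not give the paper's exact decision rule; as a rough sketch of the idea (adjust QP gradually as the per-macroblock budget drifts, instead of letting it jump to the maximum), with entirely illustrative thresholds and step sizes:

```python
def next_qp(qp_prev, remaining_bits, remaining_mbs, avg_bits_per_mb,
            qp_min=0, qp_max=51):
    """Illustrative QP update for macroblock-layer rate control:
    raise QP when the bit budget per remaining macroblock falls below
    the running average, lower it when well above, and clamp the step
    to +/-2 so the frame budget is not exhausted early."""
    if remaining_mbs == 0:
        return qp_prev
    budget = remaining_bits / remaining_mbs
    if budget < 0.8 * avg_bits_per_mb:
        step = 2          # running out of bits: quantize more coarsely
    elif budget > 1.2 * avg_bits_per_mb:
        step = -2         # bits to spare: quantize more finely
    else:
        step = 0
    return min(qp_max, max(qp_min, qp_prev + step))

print(next_qp(30, 1000, 100, 20))  # budget 10 < 16, so QP rises to 32
```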

The Study of the System Development on the Safe Environment of Children's Smartphone Use and Contents Recommendations (유아들의 안전한 스마트폰 사용 환경 및 콘텐츠 추천 시스템 개발)

  • Lee, Kyung-A;Park, Eun-Young
    • Journal of Digital Contents Society
    • /
    • v.19 no.5
    • /
    • pp.845-852
    • /
    • 2018
  • This study developed a launcher that prevents smartphone addiction in the digital generation, together with content recommendation based on machine learning that uses collective intelligence. It provides a convenient digital nurturing experience for parents who fear their children's overuse of digital devices, and suggests individually adaptive digital learning methods that enhance learning efficiency and give children a pleasurable, safe learning environment. The suggested application is a gamified launcher that protects children from harmful content and from smartphone addiction through time-limit settings. For parents who find it difficult to choose among the many kinds of educational content and applications, the system provides a learning-analytics report, based on big data collected from their children's learning and activities, and recommends suitable content using a recommendation algorithm based on collective intelligence.

The Study of Comparison of DCT-based H.263 Quantizer for Computative Quantity Reduction (계산량 감축을 위한 DCT-Based H.263 양자화기의 비교 연구)

  • Shin, Kyung-Cheol
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.9 no.3
    • /
    • pp.195-200
    • /
    • 2008
  • To compress moving picture data effectively, it is necessary to reduce the spatial and temporal redundancy of the input image data. While motion estimation/compensation methods effectively reduce temporal redundancy, they increase computational complexity because of the prediction between frames, so algorithms for computation reduction and real-time processing are needed. This paper presents a quantizer that effectively quantizes DCT coefficients considering human visual sensitivity. The proposed DCT-based H.263 quantizer can transmit more frames than TMN5 at the same transfer speed and decreases the frame-drop effect. In the objective image quality evaluation, the luminance signal showed a difference of -0.3 to +0.65 dB in average PSNR, and the chrominance signal showed an improvement of about 1.73 dB compared with TMN5. The proposed method reduces the computation by 30-31% compared with NTSS and by 20-21% compared with 4SS.
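
A perceptual quantizer of the kind described can be sketched as DCT quantization with a frequency-dependent step size, so low-frequency coefficients (where the eye is most sensitive) get finer steps than high-frequency ones. The weighting matrix below is illustrative, not the paper's:

```python
import numpy as np

def hvs_weighted_quantize(dct_block, qp, hvs_weight):
    """Quantize an 8x8 DCT block with a perceptual weighting matrix:
    the effective step size is qp * hvs_weight[u, v], growing with
    spatial frequency."""
    step = qp * hvs_weight
    return np.round(dct_block / step).astype(int)

# Illustrative weights that grow with frequency (coarser quantization
# where human visual sensitivity is lower)
u = np.arange(8)
hvs = 1.0 + 0.25 * (u[:, None] + u[None, :])

block = np.full((8, 8), 80.0)            # stand-in DCT coefficients
q = hvs_weighted_quantize(block, qp=4, hvs_weight=hvs)
print(q[0, 0], q[7, 7])  # DC is quantized finely, the corner coarsely
```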


Error Concealment Method Based on POCS for Multi-layered Video Coding (다계층 비디오 코딩에 적용 가능한 POCS 기반 에러 은닉 기법)

  • Yun, Byoung-Ju
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.4
    • /
    • pp.67-75
    • /
    • 2009
  • Multi-layered video coding, which provides scalability across the visual content, has emerged for easily adaptive service over today's heterogeneous networks. However, the network is still an error-prone environment, so the video service may suffer packet loss or erroneous decoding. In particular, distortion caused by burst errors may propagate over several pictures until intra refreshing, causing severe degradation of picture quality. To overcome this problem at the terminal independently, we propose a new error concealment algorithm for multi-layered video coding. The proposed method uses the similarity between layers in multi-layered video coding together with POCS (Projections Onto Convex Sets), a powerful error concealment tool that is, however, heavily dependent on its initial values. To find an adequate initial value that reduces the number of iterations and achieves high performance, we take into consideration both the features of layered coding and the correlation among neighboring blocks. Simulation results show that the proposed concealment method works well.
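
POCS concealment alternates projections onto constraint sets until the lost region stabilizes. A minimal sketch with two constraints, the received-pixel set and a smoothness set (enforced here by simple neighbor averaging as a convex surrogate), where `init` plays the role of the inter-layer initial estimate the paper emphasizes:

```python
import numpy as np

def pocs_conceal(image, mask, init, iters=50):
    """POCS-style concealment sketch. `mask` is True where pixels were
    received correctly; `init` is the initial estimate for the lost
    block (e.g. predicted from another layer). Alternates a smoothing
    step with exact restoration of the known pixels."""
    x = image.copy()
    x[~mask] = init[~mask]
    for _ in range(iters):
        # Smoothness step: 4-neighbor averaging (Jacobi-style update)
        pad = np.pad(x, 1, mode='edge')
        x = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
             pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        # Data projection: restore the correctly received pixels exactly
        x[mask] = image[mask]
    return x

# Lost 4x4 block inside a flat 8x8 region of value 100, initialized to 0
img = np.full((8, 8), 100.0)
mask = np.ones((8, 8), dtype=bool)
mask[2:6, 2:6] = False
est = pocs_conceal(img, mask, init=np.zeros((8, 8)))
# The hole diffuses toward the surrounding value 100
```

A better `init` (as the paper argues) starts the iteration closer to the fixed point, cutting the number of projections needed.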