• Title/Summary/Keyword: 시간 가중치 (time weighting)


FastXcorr : FORTRAN Program for Fast Cross-over Error Correction of Marine Geophysical Survey Data (FastXcorr : 해양지구물리탐사 자료의 빠른 교차점오차 보정을 위한 프로그램 개발)

  • Kim, Kyong-O; Kang, Moo-Hee; Gong, Gee-Soo / Economic and Environmental Geology / v.41 no.2 / pp.219-223 / 2008
  • Many cross-over errors, caused by position errors, meter errors, observation errors, sea conditions, and so on, occur when marine geophysical data collected by one's own and other agencies are merged, and these errors can create artificial anomalies that lead to improper interpretation. Many methods have been introduced to reduce cross-over errors; however, most of them compare every point or segment to find cross-over points and therefore require a long processing time. We therefore present a FORTRAN program (FastXcorr) that quickly determines cross-over points using an overlap-sector and adjusts cross-over errors using a weighted linear interpolation algorithm.
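
The abstract does not spell out the adjustment step, so the following Python sketch shows one plausible form of weighted linear interpolation for distributing a cross-over error along a survey line (the window size, the linear decay, and the function name are illustrative assumptions; FastXcorr itself is a FORTRAN program and its exact scheme is not reproduced here):

```python
def adjust_crossover(track, xover_idx, error, half_window=10):
    """Spread a cross-over error over neighboring samples with linearly
    decaying weights (hypothetical scheme, not FastXcorr's own)."""
    adjusted = list(track)
    for i in range(len(track)):
        d = abs(i - xover_idx)
        if d <= half_window:
            # Weight is 1 at the cross-over point, falling to 0 at the window edge.
            w = 1.0 - d / (half_window + 1)
            adjusted[i] -= w * error
    return adjusted

# Example: a flat profile with a +2.0 error found at the cross-over point
profile = [100.0] * 21
corrected = adjust_crossover(profile, xover_idx=10, error=2.0)
```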

Modified Exposure Fusion with Improved Exposure Adjustment Using Histogram and Gamma Correction (히스토그램과 감마보정 기반의 노출 조정을 이용한 다중 노출 영상 합성 기법)

  • Park, Imjae; Park, Deajun; Jeong, Jechang / Journal of Broadcast Engineering / v.22 no.3 / pp.327-338 / 2017
  • Exposure fusion is a representative image fusion technique that generates a high dynamic range image by combining two or more images with different exposures. In this paper, we propose a block-based exposure adjustment that accounts for characteristics of the human visual system, together with an improved saturation measure for building the weight map. The proposed exposure adjustment corrects the intensity values of each input image in a way consistent with the human visual system, efficiently preserving details in the fused result. The improved saturation measure produces a weight map that effectively reflects the saturated regions of the input images. We show the superiority of the proposed algorithm over conventional exposure fusion through subjective image quality, MEF-SSIM, and execution time comparisons.
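
The paper's block-based adjustment and improved saturation measure are not given in the abstract; the sketch below shows only the generic weight-map construction that this family of exposure fusion methods builds on (Mertens-style saturation and well-exposedness measures, with all parameter values assumed):

```python
import numpy as np

def weight_map(img, sigma=0.2):
    """Per-pixel fusion weight for one exposure; img is an HxWx3 float array
    scaled to [0, 1]. These are generic Mertens-style measures, not the
    paper's block-based adjustment or improved saturation measure."""
    # Saturation: standard deviation across the R, G, B channels.
    saturation = img.std(axis=2)
    # Well-exposedness: Gaussian preference for mid-range intensities.
    well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return saturation * well_exposed + 1e-12  # keep weights strictly positive

# With N exposures img_1..img_N, normalize the maps before blending:
# W_k = weight_map(img_k) / sum_j weight_map(img_j)
# fused = sum_k W_k[..., None] * img_k
```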

Vocabulary Retrieve System using Improve Levenshtein Distance algorithm (개선된 Levenshtein Distance 알고리즘을 사용한 어휘 탐색 시스템)

  • Lee, Jong-Sub; Oh, Sang-Yeob / Journal of Digital Convergence / v.11 no.11 / pp.367-372 / 2013
  • In general, the Levenshtein distance algorithm cannot take vocabulary ranking into account, because it defines no order among vocabulary items. In this paper, we propose an improved Levenshtein method that manages vocabulary retrieval effectively by using the frequency with which each vocabulary item is used, assigning weights that impose an order between items. The proposed method thereby mitigates the loss of recognition rate as the vocabulary grows, improves recognition time, and allows the retrieval space to be managed efficiently. The system achieved a vocabulary-dependent recognition rate of 97.81% and a vocabulary-independent recognition rate of 96.91% in an indoor environment, and rates of 91.11% and 90.01%, respectively, outdoors.
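
For reference, the baseline the paper improves on, the standard Levenshtein distance, plus a hypothetical frequency-based weighting of the kind the abstract describes, can be sketched as follows (the exact weighting formula is an assumption):

```python
def levenshtein(a, b):
    """Standard edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def rank_candidates(query, vocab_freq):
    """Order vocabulary by edit distance, breaking ties by usage frequency
    (a hypothetical weighting; the paper's exact formula is not given)."""
    return sorted(vocab_freq, key=lambda w: (levenshtein(query, w), -vocab_freq[w]))

vocab = {"weight": 120, "wait": 300, "white": 80}
print(rank_candidates("waite", vocab))  # "wait" and "white" tie on distance; frequency decides
```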

Determination of Flood Risk Considering Flood Control Ability and Urban Environment Risk (수방능력 및 재해위험을 고려한 침수위험도 결정)

  • Lee, Eui Hoon; Choi, Hyeon Seok; Kim, Joong Hoon / Journal of Korea Water Resources Association / v.48 no.9 / pp.757-768 / 2015
  • Recently, climate change has brought short, concentrated local rainfall and unexpected heavy rain, causing increasing loss of life and property. In this research, arithmetic average analysis, weighted average analysis, and principal component analysis are used to predict flood risk, building on the annals of disaster and the status of urban planning. Results obtained with the three methods, using many flood-related factors, are compared. Arithmetic average analysis is simple, but every factor receives the same weight. Weighted average analysis allows different weights, but the correlations among the many variables are complex and a multicollinearity problem arises. To resolve these problems, principal component analysis (PCA) is used: each factor receives a different weight, and combining variables leaves fewer of them than in the other methods. Finally, flood risk is assessed considering the flood control ability and urban environment risk defined in previous research.
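
A minimal sketch of the PCA-weighting idea the abstract describes, deriving factor weights from the leading principal component of standardized factors (the data, the factor choice, and the use of only the first component are illustrative assumptions):

```python
import numpy as np

def pca_weights(X):
    """Derive factor weights from the leading principal component of the
    standardized factor matrix X (rows: districts, columns: flood factors)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each factor
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    pc1 = np.abs(eigvec[:, -1])               # loadings of the largest eigenvalue
    return pc1 / pc1.sum()                    # normalize so the weights sum to 1

# Hypothetical factors: rainfall, impervious ratio, past flood count
X = np.array([[120., 0.3, 5.], [80., 0.5, 2.], [200., 0.2, 8.], [150., 0.4, 4.]])
w = pca_weights(X)
risk = ((X - X.mean(axis=0)) / X.std(axis=0)) @ w  # weighted composite risk index
```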

The Efficient Cut Detection Algorithm Using the Weight in News Video Data (뉴스 비디오 데이터에서의 가중치를 이용한 효율적 장면변환 검출 알고리즘)

  • Jeong, Yeong-Eun; Lee, Dong-Seop; Sin, Seong-Yun; Jeon, Geun-Hwan; Bae, Seok-Chan; Lee, Yang-Won / The Transactions of the Korea Information Processing Society / v.6 no.2 / pp.282-291 / 1999
  • Cut detection is a key technique for constructing a news video database system. In general, the color histogram, χ² histogram, or bin-to-bin difference (B2B) techniques are mainly used for scene partitioning. In this paper, we propose an efficient cut detection algorithm that applies weights derived from the NTSC standard. The algorithm reduces the time needed to acquire and compare histograms by computing R, G, and B separately in the color histogram technique, and it provides an efficient method for selecting the threshold value. News videos from KBS, MBC, SBS, CNN, and NHK are used as experimental domains. The experimental results show that the proposed algorithm detects cuts more efficiently than previous methods and provides a basis for automatic selection of threshold values.
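
A minimal sketch of a histogram difference weighted by the NTSC luminance coefficients, as the abstract describes (the bin count, normalization, and thresholding are assumptions; the paper's automatic threshold selection is not reproduced):

```python
import numpy as np

def ntsc_histogram_diff(frame_a, frame_b, bins=64):
    """Bin-to-bin histogram difference with the R, G, B channels weighted by
    the NTSC luminance coefficients (0.299, 0.587, 0.114)."""
    diff = 0.0
    for channel, w in enumerate((0.299, 0.587, 0.114)):
        ha, _ = np.histogram(frame_a[..., channel], bins=bins, range=(0, 256))
        hb, _ = np.histogram(frame_b[..., channel], bins=bins, range=(0, 256))
        diff += w * np.abs(ha - hb).sum()
    return diff / frame_a[..., 0].size  # normalize by the pixel count

# A cut is declared between frames when the weighted difference exceeds a
# threshold chosen for the footage: is_cut = ntsc_histogram_diff(f1, f2) > t
```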


Robust Stereo Matching under Radiometric Change based on Weighted Local Descriptor (광량 변화에 강건한 가중치 국부 기술자 기반의 스테레오 정합)

  • Koo, Jamin; Kim, Yong-Ho; Lee, Sangkeun / Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.164-174 / 2015
  • In real scenarios, radiometric change frequently occurs during stereo image acquisition, whether multiple cameras with different parameters are used or a single camera is moved under changing illumination. Conventional stereo matching algorithms have difficulty finding correct corresponding points because they assume that corresponding pixels have similar color values. In this paper, we present a new method based on a local descriptor that reflects intensity, gradient, and texture information. Furthermore, an entropy-based adaptive weight for the local descriptor is applied to estimate correct correspondences under radiometric variation. The proposed method is tested on Middlebury datasets with radiometric changes and compared with state-of-the-art algorithms. Experimental results show that the proposed scheme outperforms the comparison algorithms, with around 5% less matching error on average.
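
A rough sketch of an entropy-based adaptive weight of the kind the abstract describes (the entropy-to-weight mapping and the cost combination are hypothetical stand-ins for the paper's formulas):

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy of a grayscale patch's intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def adaptive_weight(patch, max_entropy=5.0):
    """Map patch entropy to a blending weight in [0, 1]: texture-rich
    (high-entropy) patches lean on gradient/texture cues, flat patches on
    intensity. The mapping is a stand-in for the paper's formula."""
    return min(patch_entropy(patch) / max_entropy, 1.0)

# Matching cost per pixel, with w computed from the local patch:
# cost = w * texture_cost + (1 - w) * intensity_cost
```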

Parameter estimation of unsteady flow model using mulit-objective optimization and minimax regret approach (다목적최적화와 최소최대 후회도 방법에 의한 부정류 계산모형의 매개변수 추정)

  • Li, Li; Chung, Eun-Sung; Jun, Kyung Soo / Proceedings of the Korea Water Resources Association Conference / 2017.05a / pp.310-310 / 2017
  • One of the important elements determining the adequacy of a flood-routing model is its parameters. In particular, the roughness coefficient, a parameter of unsteady flow models for natural rivers, comprehensively reflects not only skin friction, which depends on bed material, but also form losses from changes in cross-section such as bed undulation and losses due to river meandering; it is therefore difficult to determine a single roughness value that applies to all river reaches. Moreover, the roughness coefficient varies with flow conditions, that is, with discharge or stage, all the more so in unsteady flow models in which the flow varies in time and space. This study therefore proposes a method for selecting stable parameters of an unsteady flow model by combining Pareto optimization, obtained from model calibration that considers the variability of the roughness coefficient and observations at multiple sites, with the minimax regret approach (MRA). Calibration against observations at several sites is a multi-objective optimization problem; an integrated approach was applied in which multiple Pareto-optimal solutions are obtained by performing several separate optimizations of a single objective function formed by combining weights for the sites. A relation with two parameters expressing the variability of the roughness coefficient with discharge was used, and the parameters for two reaches were optimized as the model's target parameters. Calibration and validation were then performed for different flood events, the regret of the evaluation metric was quantified for each, and MRA was used to derive an overall ranking and estimate the final stable parameters. MRA is known to be useful for decision making under complete uncertainty; it is a conservative decision technique that selects the alternative whose worst rank is best. The results showed that the estimated variable roughness coefficients, and the resulting RMSE values at the two sites, depend on the parameter values selected according to the combination of weights for the two sites. The proposed method can yield stable solutions for parameter estimation problems in hydrologic and hydraulic models that use observations from multiple sites.
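
The minimax regret step itself is standard and can be sketched as follows (the candidate-by-event error matrix and the use of raw RMSE are illustrative; the paper ranks candidates by evaluation metric, which this sketch simplifies):

```python
import numpy as np

def minimax_regret(errors):
    """Choose the candidate whose worst-case regret is smallest.
    errors[i, j]: evaluation metric (e.g. RMSE, lower is better) of
    candidate parameter set i on calibration/validation case j."""
    best_per_case = errors.min(axis=0)  # best achievable value per case
    regret = errors - best_per_case     # shortfall of each candidate
    worst = regret.max(axis=1)          # each candidate's worst case
    return int(worst.argmin())          # minimax-regret choice

# Three candidate roughness parameterizations evaluated on two flood events:
errors = np.array([[0.30, 0.42],
                   [0.35, 0.33],
                   [0.28, 0.50]])
print(minimax_regret(errors))  # -> 1: never falls far behind the best
```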


Efficient Controlling Trajectory of NPC with Accumulation Map based on Path of User and NavMesh in Unity3D

  • Kim, Jong-Hyun / Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.55-61 / 2020
  • In this paper, we present a novel approach to efficiently controlling the location of NPCs (non-playable characters) in interactive virtual worlds such as games and virtual reality. To control an NPC's movement path, we first calculate a main trajectory based on the user's path, and then move the NPC according to a weight map. Our method automatically constructs a navigation mesh that provides new paths for NPCs by referencing the user trajectories. It enables adaptive changes to the virtual world over time and provides user-preferred path weights for smart-agent path planning. We have tested the usefulness of our algorithm with several example scenarios from interactive worlds such as video games and virtual reality. In practice, our framework can be applied easily to any type of navigation in an interactive world.
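
A minimal sketch of an accumulation map driven by user paths, in the spirit of the abstract (the grid representation, decay rate, and cost mapping are assumptions; the paper works with a Unity3D NavMesh rather than a plain grid):

```python
import numpy as np

class AccumulationMap:
    """2D grid that accumulates user visits and decays over time so that an
    NPC path planner can favor user-preferred routes."""
    def __init__(self, width, height, decay=0.995):
        self.grid = np.zeros((height, width))
        self.decay = decay

    def record(self, x, y):
        self.grid *= self.decay  # older traffic fades out gradually
        self.grid[y, x] += 1.0   # reinforce the user's current cell

    def traversal_cost(self, x, y, base=1.0):
        # Frequently visited cells become cheaper for the planner.
        return base / (1.0 + self.grid[y, x])

amap = AccumulationMap(64, 64)
for step in [(10, 10), (11, 10), (12, 10)]:  # a short user trajectory
    amap.record(*step)
```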

Document Analysis based Main Requisite Extraction System (문서 분석 기반 주요 요소 추출 시스템)

  • Lee, Jongwon; Yeo, Ilyeon; Jung, Hoekyung / Journal of the Korea Institute of Information and Communication Engineering / v.23 no.4 / pp.401-406 / 2019
  • In this paper, we propose a system for analyzing documents such as papers and reports in XML format. The system extracts the keywords of a paper or report, shows them to the user, and then extracts the paragraphs containing the keywords the user wants to search for within the document. It checks the frequency of the keywords entered by the user, calculates weights, and removes paragraphs that contain only the lowest-weighted keywords. We then divide the refined paragraphs into 10 regions, calculate the importance of the paragraphs in each region, compare the regions' importance, and report to the user the main region with the highest importance. With these features, the proposed system can deliver the main paragraphs at a higher compression ratio than existing document analysis systems, reducing the time required to understand a document.
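
A simplified reading of this pipeline, keyword frequency weights, pruning of the lowest-weighted matches, and ranking of 10 regions, might look like the following sketch (tokenization and scoring details are assumptions):

```python
from collections import Counter

def rank_regions(paragraphs, keywords, n_regions=10):
    """Weight keywords by frequency, drop paragraphs matching only the
    lowest-weighted keyword, split the rest into n_regions, and rank the
    regions by total keyword weight."""
    freq = Counter(w for p in paragraphs for w in p.split() if w in keywords)
    if not freq:
        return []
    weakest = min(freq, key=freq.get)
    kept = [p for p in paragraphs
            if any(k in p.split() for k in keywords if k != weakest)]
    size = max(1, len(kept) // n_regions)
    regions = [kept[i:i + size] for i in range(0, len(kept), size)]
    scores = [sum(freq[w] for p in region for w in p.split() if w in freq)
              for region in regions]
    return sorted(range(len(regions)), key=lambda i: -scores[i])

docs = ["weight map design", "history of the lab", "weight and time cost"]
print(rank_regions(docs, {"weight", "time"}, n_regions=2))  # -> [1, 0]
```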

Realtime Media Streaming Technique Based on Adaptive Weight in Hybrid CDN/P2P Architecture

  • Lee, Jun Pyo / Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.1-7 / 2021
  • In this paper, optimized media retrieval and transmission based on a hybrid CDN/P2P architecture, together with selective storage based on predicting which data users are likely to request, enable seamless data transfer and a reduction of unnecessary traffic. We also propose a new media management method that minimizes the possibility of transmission delay and packet loss so that media can be used in real time. To this end, we divide each media item into logical segments, continuously compute a weight for each segment, and decide whether to store the segment data based on the computed weight. We also group scattered computing nodes on the network into local groups by distance and ensure that storage space is shared and used efficiently within those groups. Experiments conducted to verify the efficiency of the proposed technique show that it compares favorably with existing methods, enabling both reduced initial latency and seamless transmission.
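
A rough sketch of the per-segment weighting that could drive such a storage decision (the decay and reinforcement scheme and the threshold are assumptions; the abstract does not give the weight formula):

```python
import time

class SegmentCache:
    """Per-segment weights from request recency and frequency drive the
    decision to keep a segment locally."""
    def __init__(self, threshold=1.0, half_life=60.0):
        self.weights = {}            # segment id -> (weight, last update)
        self.threshold = threshold
        self.half_life = half_life   # seconds for a weight to halve

    def on_request(self, seg_id, now=None):
        if now is None:
            now = time.time()
        w, t = self.weights.get(seg_id, (0.0, now))
        w *= 0.5 ** ((now - t) / self.half_life)  # decay with age
        self.weights[seg_id] = (w + 1.0, now)     # reinforce on each request

    def should_store(self, seg_id):
        return self.weights.get(seg_id, (0.0, 0.0))[0] >= self.threshold

cache = SegmentCache()
cache.on_request("movie42:seg007")
print(cache.should_store("movie42:seg007"))  # True once weight crosses threshold
```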