• Title/Summary/Keyword: Edge Weights


Depth Image Upsampling Algorithm Using Selective Weight (선택적 가중치를 이용한 깊이 영상 업샘플링 알고리즘)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.7 / pp.1371-1378 / 2017
  • In this paper, we present an upsampling technique for depth map images using selective bilateral weights and a color weight based on a Laplacian function. These techniques prevent the color texture copying problem that appears in existing upsamplers using bilateral weights. First, we construct a high-resolution image using bicubic interpolation. Next, we detect color texture regions using pixel value differences of the depth and color images. If an interpolated pixel belongs to a color texture edge region, we calculate spatial and depth weights in its $3{\times}3$ neighborhood and compute a cost value to determine the boundary pixel value; otherwise, we use a color weight instead of the depth weight. Finally, the pixel value with the minimum cost is chosen as the pixel value of the high-resolution depth image. Simulation results show that the proposed algorithm achieves good performance in terms of PSNR comparison and subjective visual quality.
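
As a rough illustration of the selective-weight idea in this abstract, the sketch below picks, for each interpolated pixel, the 3x3 neighbor depth value with the minimum weighted cost, switching between a depth weight and a color weight depending on whether the pixel lies in a color-texture region. The window size, sigma values, and the exact cost form are assumptions, not the authors' implementation.

```python
# Minimal sketch of selective-weight cost minimization for depth upsampling.
# Window size, sigma values, and the cost definition are illustrative
# assumptions, not the authors' exact formulation.
import numpy as np

def refine_pixel(depth, color, y, x, in_texture_region,
                 sigma_s=1.0, sigma_d=10.0, sigma_c=10.0):
    """Pick the 3x3 neighbor depth value with the minimum weighted cost."""
    h, w = depth.shape
    neighbors = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if 0 <= y + dy < h and 0 <= x + dx < w]
    weights = []
    for ny, nx in neighbors:
        w_s = np.exp(-((ny - y) ** 2 + (nx - x) ** 2) / (2 * sigma_s ** 2))
        if in_texture_region:   # depth weight in color-texture edge regions
            w_r = np.exp(-abs(float(depth[ny, nx]) - float(depth[y, x])) / sigma_d)
        else:                   # color weight elsewhere
            w_r = np.exp(-abs(float(color[ny, nx]) - float(color[y, x])) / sigma_c)
        weights.append(w_s * w_r)
    # candidate value with the minimum weighted absolute-difference cost
    best_val, best_cost = float(depth[y, x]), float("inf")
    for cy, cx in neighbors:
        cand = float(depth[cy, cx])
        cost = sum(wt * abs(cand - float(depth[ny, nx]))
                   for wt, (ny, nx) in zip(weights, neighbors))
        if cost < best_cost:
            best_val, best_cost = cand, cost
    return best_val
```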

Multi-View Wyner-Ziv Video Coding Based on Spatio-temporal Adaptive Estimation (시공간 적응적인 예측에 기초한 다시점 위너-지브 비디오 부호화 기법)

  • Lee, Beom-yong;Kim, Jin-soo
    • The Journal of the Korea Contents Association / v.16 no.6 / pp.9-18 / 2016
  • This paper proposes a multi-view Wyner-Ziv video coding scheme based on spatio-temporal adaptive estimation. The proposed algorithm searches for a better estimated block with joint bi-directional motion estimation by introducing weights between the temporal and spatial directions, by effectively classifying region-of-interest blocks based on edge detection and synthesis, and by selecting the reference estimation block from an analysis of effective motion vectors. The proposed algorithm exploits information from a single viewpoint and from adjacent viewpoints simultaneously, and then adaptively generates side information so as to perform better in a variety of occlusion and reflection regions. Simulations with multi-view video sequences show that the proposed algorithm achieves both visual quality improvement and bit-rate reduction compared to conventional methods.
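
Purely as an illustration of weighting between the temporal and spatial (inter-view) directions when generating side information, the sketch below blends two candidate predictions with a block-level weight; the weighting rule and the block classification that would produce it are assumptions, not the scheme proposed in the paper.

```python
# Minimal sketch: blend temporal and inter-view predictions block by block.
# The per-block weight would come from the paper's block classification
# (edge detection, synthesis, motion-vector analysis); here it is an input.
import numpy as np

def side_information(temporal_pred, interview_pred, w_temporal):
    """Weighted combination of two candidate predictions for one block.

    temporal_pred, interview_pred: arrays of identical shape.
    w_temporal: weight in [0, 1] toward the temporal prediction.
    """
    w = float(np.clip(w_temporal, 0.0, 1.0))
    return w * temporal_pred + (1.0 - w) * interview_pred
```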

A Reexamination on the Influence of Fine-particle between Districts in Seoul from the Perspective of Information Theory (정보이론 관점에서 본 서울시 지역구간의 미세먼지 영향력 재조명)

  • Lee, Jaekoo;Lee, Taehoon;Yoon, Sungroh
    • KIISE Transactions on Computing Practices / v.21 no.2 / pp.109-114 / 2015
  • This paper presents a computational model of the transfer of airborne fine particles to analyze the similarities and influences among the 25 districts in Seoul by quantifying time series data collected from each district. The properties of each district are derived from a model of the time series of fine-particle concentrations, and edge weights are calculated as the transfer entropies between all pairs of districts. We applied a modularity-based graph clustering technique to detect communities among the 25 districts. The results indicate that the discovered clusters correspond to groups with high transfer entropy that are geographically adjacent or have high traffic volumes between them. We believe that this approach can be further extended to the discovery of significant flows of other indicators causing environmental pollution.
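
To make the pipeline in this abstract concrete, the sketch below estimates pairwise transfer entropy from binned time series, uses the values as edge weights, and runs modularity-based community detection with networkx. The binning, the lag-1 history, the symmetrization of directed weights, and the placeholder series are all assumptions.

```python
# Minimal sketch: transfer-entropy edge weights + modularity clustering.
# Binning, lag-1 history, and symmetrizing directed weights by taking the
# maximum are illustrative choices; the series below are placeholders.
import itertools
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def transfer_entropy(x, y, bins=8):
    """Histogram estimate of TE(x -> y) with a history length of 1."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    y_next, y_prev, x_prev = yd[1:], yd[:-1], xd[:-1]
    te = 0.0
    for a in np.unique(y_next):
        for b in np.unique(y_prev):
            for c in np.unique(x_prev):
                p_abc = ((y_next == a) & (y_prev == b) & (x_prev == c)).mean()
                if p_abc == 0.0:
                    continue
                p_bc = ((y_prev == b) & (x_prev == c)).mean()
                p_ab = ((y_next == a) & (y_prev == b)).mean()
                p_b = (y_prev == b).mean()
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

series = {d: np.random.rand(500) for d in ["Gangnam", "Jongno", "Mapo", "Nowon"]}
G = nx.Graph()
for u, v in itertools.combinations(series, 2):
    w = max(transfer_entropy(series[u], series[v]),
            transfer_entropy(series[v], series[u]))
    G.add_edge(u, v, weight=w)
communities = greedy_modularity_communities(G, weight="weight")
```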

MRF Particle filter-based Multi-Touch Tracking and Gesture Likelihood Estimation (MRF 입자필터 멀티터치 추적 및 제스처 우도 측정)

  • Oh, Chi-Min;Shin, Bok-Suk;Klette, Reinhard;Lee, Chil-Woo
    • Smart Media Journal / v.4 no.1 / pp.16-24 / 2015
  • In this paper, we propose a method for multi-touch tracking using MRF-based particle filters and gesture likelihood estimation. Each touch (of one finger) is considered to be one object. One frequently occurring issue is the hijacking problem, in which an object tracker is hijacked by a neighboring object. If a predicted particle is close to an adjacent object, the particle's weight should be lowered by analyzing the influence of neighboring objects to avoid the hijacking problem. We define a penalty function to lower the weights of such particles. An MRF is a graph representation in which a node is the location of a target object and an edge describes the adjacency relation between target objects, so it is easy to use the MRF as a data structure over adjacent objects. Moreover, since the MRF graph representation is helpful for analyzing multi-touch gestures, we describe how to define gesture likelihoods based on the MRF. The experimental results show that the proposed method avoids hijacking problems and estimates gesture likelihoods with high accuracy.
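
As a small illustration of the penalty idea described here, the sketch below reduces a particle's weight when its predicted position is close to a neighboring target connected to it in the MRF; the Gaussian form and scale of the penalty are assumptions rather than the paper's definition.

```python
# Minimal sketch of the neighbor-penalty idea: a particle predicted close
# to an adjacent touch target gets its weight reduced. The Gaussian form
# of the penalty and its scale are assumptions, not the paper's definition.
import numpy as np

def penalized_weight(weight, particle_pos, neighbor_positions, sigma=20.0):
    """Lower a particle's weight for each nearby neighboring target."""
    penalty = 1.0
    for n in neighbor_positions:
        d2 = float(np.sum((np.asarray(particle_pos) - np.asarray(n)) ** 2))
        # the closer the particle is to a neighbor, the stronger the penalty
        penalty *= 1.0 - np.exp(-d2 / (2 * sigma ** 2))
    return weight * penalty
```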

Directional Deinterlacing Method Using Local Gradient Features (국부 Gradient 특징을 이용한 방향성 deinterlacing 방법)

  • Woo, Dong-Hun;Eom, Il-Kyu;Kim, Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.41-46 / 2005
  • Deinterlacing is the conversion of an interlaced image to a progressive-scan image, which can be regarded as a two-fold image interpolation. In this paper, a simple and effective deinterlacing method is proposed based on the local gradient information of neighboring pixels. In the proposed method, weights for the directions around the pixel to be interpolated are estimated, and the weighted sum of the neighboring pixels becomes the final intensity value of the interpolated pixel. The proposed method has a structure suitable for practical implementation and avoids the artifacts caused by wrong detection of the edge direction. In simulations, it showed better subjective and objective performance than the ELA method, and comparable performance to a variant of ELA that has a more complex structure and requires a couple of empirically determined parameters.
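
The sketch below illustrates gradient-weighted directional interpolation for one missing pixel: each candidate direction contributes the average of the two pixels it connects, weighted by the inverse of the absolute difference along that direction. The three candidate directions and the inverse-gradient weighting are assumptions standing in for the paper's weighting rule.

```python
# Minimal sketch of gradient-weighted directional interpolation for one
# missing pixel; directions and weighting form are illustrative choices.
import numpy as np

def interpolate_pixel(field, y, x, eps=1.0):
    """Interpolate pixel (y, x) from the scan lines above and below.

    For each direction d in {-1, 0, +1}, the local gradient is the absolute
    difference of the two pixels it connects; smaller gradients get larger
    weights in the final weighted sum.
    """
    up, down = field[y - 1], field[y + 1]
    num, den = 0.0, 0.0
    for d in (-1, 0, 1):
        xl, xr = x + d, x - d
        if 0 <= xl < len(up) and 0 <= xr < len(down):
            grad = abs(float(up[xl]) - float(down[xr]))
            w = 1.0 / (grad + eps)          # weight from local gradient
            num += w * (float(up[xl]) + float(down[xr])) / 2.0
            den += w
    return num / den
```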

A Study on Weight-Based Route Inference Using Traffic Data (항적 데이터를 활용한 가중치 기반 항로 추론에 대한 연구)

  • Seung Sim;Hyun-Jin Kim;Young-Soo Min;Jun-Rae Cho;Jeong-Hun Woo;Ho-June Seok;Deuk-Jae Cho;Jong-Hwa Baek;Jaeyong Jung
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2023.05a / pp.208-209 / 2023
  • The intelligent maritime traffic information service for maritime traffic safety provides safe and efficient optimal safety routes considering information such as water depth, maritime safety laws, weather, and fuel consumption. From a service user's point of view, however, the provided routes can involve unnecessary detours or overly conservative safety distances from maritime objects, and users prefer routes that suit their personal navigation experience and style. In this study, we investigated a method for extracting an optimal safety route that reflects service users' experience, without separately modeling the maritime environment, by adjusting the weights of trunk lines in areas where ships frequently navigate based on ship track data collected through LTE-M.
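
As a loose illustration of the weight-adjustment idea, the sketch below scales down the cost of graph edges (route legs) in proportion to how often they appear in past tracks and then recomputes the shortest path with networkx; the graph, the traversal counts, and the scaling rule are all placeholders, not the study's model.

```python
# Minimal sketch of biasing a route graph toward frequently sailed legs:
# edge weights are scaled down by traversal counts extracted from track
# data, then a shortest path is computed. All values are placeholders.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 10.0), ("B", "C", 10.0),
                           ("A", "D", 12.0), ("D", "C", 12.0)])

# hypothetical traversal counts per leg, aggregated from collected tracks
traffic = {("A", "D"): 50, ("D", "C"): 60, ("A", "B"): 5, ("B", "C"): 5}

for (u, v), count in traffic.items():
    # frequently used legs become relatively cheaper
    G[u][v]["weight"] /= (1.0 + 0.1 * count)

route = nx.shortest_path(G, "A", "C", weight="weight")
```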


Research on Performance of Graph Algorithm using Deep Learning Technology (딥러닝 기술을 적용한 그래프 알고리즘 성능 연구)

  • Giseop Noh
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.471-476 / 2024
  • With the spread of various smart and computing devices, big data is being generated widely. Machine learning algorithms perform inference by learning data patterns, and among them, deep learning based on neural networks has attracted particular attention, achieving rapid performance improvements across a range of applications. Recently, attempts to analyze data using graph structures with deep learning have been increasing. In this study, we present a graph generation method for transfer to a deep learning network. This paper proposes a method of generalizing node properties and edge weights in the graph generation process and converting them into a structure suitable for deep learning input through matricization. We present a linear transformation matrix that can preserve attribute and weight information in the graph generation process. Finally, we present a deep learning input structure for a general graph and an approach for performance analysis.
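
As a minimal sketch of the matricization step outlined in this abstract, the code below packs edge weights into a weighted adjacency matrix, node properties into a feature matrix, and applies one linear transformation as a deep-learning-ready propagation step; the shapes and the propagation rule are assumptions for illustration only.

```python
# Minimal sketch of turning node attributes and edge weights into matrix
# inputs for a neural network: a weighted adjacency matrix A, a node
# feature matrix X, and one linear transformation (A + I) X W.
import numpy as np

n, f_in, f_out = 4, 3, 2
A = np.zeros((n, n))
A[0, 1] = A[1, 0] = 0.5      # edge weights preserved in the adjacency matrix
A[1, 2] = A[2, 1] = 2.0
A[2, 3] = A[3, 2] = 1.0
X = np.random.rand(n, f_in)  # generalized node properties as row vectors

W = np.random.rand(f_in, f_out)   # learnable linear transformation
H = (A + np.eye(n)) @ X @ W       # one propagation step over the graph
```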

Query Expansion Based on Word Graphs Using Pseudo Non-Relevant Documents and Term Proximity (잠정적 부적합 문서와 어휘 근접도를 반영한 어휘 그래프 기반 질의 확장)

  • Jo, Seung-Hyeon;Lee, Kyung-Soon
    • The KIPS Transactions:PartB / v.19B no.3 / pp.189-194 / 2012
  • In this paper, we propose a query expansion method based on word graphs using pseudo-relevant and pseudo non-relevant documents to improve information retrieval performance. An initially retrieved document is classified into a core cluster when it includes core query terms, extracted from query term combinations and the degree of query term proximity; otherwise, it is classified into a non-core cluster. Documents in the core cluster can be seen as pseudo-relevant documents, and documents in the non-core cluster as pseudo non-relevant documents. Each cluster is represented as a graph with nodes and edges, where each node represents a term and each edge represents the proximity between that term and a query term. The term weight is calculated by subtracting the term's weight in the non-core cluster graph from its weight in the core cluster graph, so a term with a high weight in the non-core cluster graph is not selected as an expansion term. Expansion terms are selected according to these term weights. Experimental results on the TREC WT10g test collection show that the proposed method achieves a 9.4% improvement over the language model in mean average precision.
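
A minimal sketch of the core-minus-non-core weighting described here: each cluster graph is reduced to a term-to-weight map by summing edge proximities, the non-core weight is subtracted from the core weight, and the top-scoring terms are kept as expansion terms. The summed-proximity weight definition is an assumption.

```python
# Minimal sketch of core-minus-non-core term weighting for query expansion.
# Reducing each cluster graph to summed edge proximities is an assumption.
from collections import defaultdict

def term_weights(edges):
    """edges: iterable of (term, query_term, proximity) triples."""
    w = defaultdict(float)
    for term, _query_term, proximity in edges:
        w[term] += proximity
    return w

def expansion_terms(core_edges, noncore_edges, k=10):
    core, noncore = term_weights(core_edges), term_weights(noncore_edges)
    # a high weight in the non-core graph pushes a term out of the ranking
    scores = {t: core[t] - noncore.get(t, 0.0) for t in core}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```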

Automated Detecting and Tracing for Plagiarized Programs using Gumbel Distribution Model (굼벨 분포 모델을 이용한 표절 프로그램 자동 탐색 및 추적)

  • Ji, Jeong-Hoon;Woo, Gyun;Cho, Hwan-Gue
    • The KIPS Transactions:PartA / v.16A no.6 / pp.453-462 / 2009
  • Studies on software plagiarism detection, prevention, and judgment have become widespread due to the growing interest in and importance of protecting and authenticating software intellectual property. Many previous studies focused on comparing all pairs of submitted codes using attribute counting, token patterns, program parse trees, and similarity measuring algorithms. It is important to provide a clear-cut model for distinguishing plagiarism from collaboration. This paper proposes a source code clustering algorithm using a probability model of extreme value distributions. First, we propose an asymmetric distance measure pdist($P_a$, $P_b$) to measure the similarity of $P_a$ and $P_b$. Then, we construct the Plagiarism Direction Graph (PDG) for a given program set using pdist($P_a$, $P_b$) as edge weights, and we transform the PDG into a Gumbel Distance Graph (GDG) model, since we found that the distribution of pdist($P_a$, $P_b$) scores is similar to the well-known Gumbel distribution. Second, we newly define pseudo-plagiarism, a sort of virtual plagiarism forced by a very strong functional requirement in the specification. We conducted experiments with 18 groups of programs (more than 700 source codes) collected from the ICPC (International Collegiate Programming Contest) and KOI (Korean Olympiad for Informatics) programming contests. The experiments showed that most plagiarized codes could be detected with high sensitivity and that our algorithm successfully separates real plagiarism from pseudo-plagiarism.
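
To illustrate the graph construction and the Gumbel step, the sketch below computes a pairwise asymmetric similarity, stores it as directed edge weights, and fits a Gumbel distribution to the observed scores with scipy; the stand-in pdist (normalized longest-common-substring length) and the toy programs are assumptions, not the paper's measure or data.

```python
# Minimal sketch: pairwise asymmetric similarities as directed edge weights,
# then a Gumbel fit over the score distribution. The pdist stand-in below
# is an assumption, not the paper's measure.
import itertools
import networkx as nx
from difflib import SequenceMatcher
from scipy.stats import gumbel_r

def pdist(a, b):
    """Asymmetric similarity of program a with respect to program b."""
    match = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return match.size / max(len(a), 1)

programs = {"p1": "int main(){return 0;}", "p2": "int main() { return 0; }",
            "p3": "print('hello')"}
G = nx.DiGraph()
scores = []
for a, b in itertools.permutations(programs, 2):
    s = pdist(programs[a], programs[b])
    G.add_edge(a, b, weight=s)      # Plagiarism-Direction-Graph-style edge
    scores.append(s)

loc, scale = gumbel_r.fit(scores)   # fit a Gumbel model to the score distribution
```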

Intensity Based Stereo Matching Algorithm Including Boundary Information (경계선 영역 정보를 이용한 밝기값 기반 스테레오 정합)

  • Choi, Dong-Jun;Kim, Do-Hyun;Yang, Yeong-Yil
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.12 / pp.84-92 / 1998
  • In this paper, we propose novel cost functions for finding the disparity between the left and right images in the stereo matching problem. The dynamic programming method was used to solve the stereo matching problem by Cox et al. [10]. In [10], only the intensity of the pixels on the epipolar line is used as the cost function to find the corresponding pixels. We propose two new cost functions. First, the slope information of a pixel is introduced as a constraint determining the weights of intensity and direction (the historical information): pixels with a higher slope are matched mainly by pixel intensity, and as the slope becomes lower, the matching relies more on direction. Second, the disparity information of the previous epipolar line is used to find the disparity of the current epipolar line. If a pixel $p_i$ in the left epipolar line and a pixel $p_j$ in the right epipolar line satisfy the following conditions, a higher matching probability is given to $p_i$ and $p_j$: i) $p_i$ and $p_j$ are pixels on edges in the left and right images, respectively; ii) for pixels $p_k$ and $p_l$ on the previous epipolar line, $p_k$ and $p_l$ are matched and lie on the same edges as $p_i$ and $p_j$, respectively. Compared with the original method [10], the proposed method finds better matching results for the test images.
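
As a rough illustration of a matching cost that mixes an intensity term and a directional term with a slope-dependent weight, as the abstract describes, the sketch below scores one candidate correspondence on an epipolar line; the exact functional form and the mixing rule are assumptions.

```python
# Minimal sketch of a slope-weighted matching cost on one epipolar line;
# the mixing rule and the directional term are illustrative assumptions.
def match_cost(left, right, i, j, slope_left, lam=0.5):
    """Cost of matching left[i] with right[j] on one epipolar line."""
    intensity_term = abs(float(left[i]) - float(right[j]))
    # a simple directional/slope disagreement term between the two lines
    gl = float(left[min(i + 1, len(left) - 1)]) - float(left[i])
    gr = float(right[min(j + 1, len(right) - 1)]) - float(right[j])
    direction_term = abs(gl - gr)
    # high slope -> rely on intensity; low slope -> rely on direction
    alpha = min(1.0, abs(slope_left) * lam)
    return alpha * intensity_term + (1.0 - alpha) * direction_term
```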
