• Title/Summary/Keyword: Filtering Scheme


The QoS Filtering and Scalable Transmission Scheme of MPEG Data to Adapt Network Bandwidth Variation (통신망 대역폭 변화에 적응하는 MPEG 데이터의 QoS 필터링 기법과 스케일러블 전송 기법)

  • 유우종;김두현;유관종
    • Journal of Korea Multimedia Society
    • /
    • v.3 no.5
    • /
    • pp.479-494
    • /
    • 2000
  • Although the proliferation of real-time multimedia services over the Internet demonstrates its success in dealing with heterogeneous environments, the Internet now has to cope with a flood of video and audio streams that consume most of its communication channels. Efficient and appropriate utilization of network resources therefore requires a new scalable transmission technique that takes into account each network environment and each client's computing power. Such a technique also eliminates the storage waste and data transmission overhead incurred when the same video stream is duplicated for different QoS levels. The purpose of this paper is to develop a technology that adjusts the amount of data transmitted in an MPEG video stream according to the available communication bandwidth, together with a technique that reflects bandwidth changes while the stream is playing. To this end, we introduce a scalable media decomposer working on the server side and a scalable media composer working on the client side, and then propose a scalable transmission method with a media sender and a media receiver that take dynamic QoS into account. The proposed methods facilitate effective use of network resources and provide real-time MPEG video services matched to each client's computing environment.
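
The bandwidth-adaptive idea can be illustrated with a small sketch. The frame priorities (I over P over B) and the greedy bit-budget selection below are illustrative assumptions in the spirit of MPEG temporal scaling, not the paper's actual decomposer/composer design:

```python
def select_frames(frames, budget_bits):
    """Greedy temporal-scaling sketch: keep I frames first, then P, then B,
    until the bit budget for a group of pictures is exhausted.
    `frames` is a list of (frame_type, size_bits) tuples in presentation order."""
    priority = {"I": 0, "P": 1, "B": 2}
    order = sorted(range(len(frames)), key=lambda i: priority[frames[i][0]])
    kept, used = set(), 0
    for i in order:
        if used + frames[i][1] <= budget_bits:
            kept.add(i)
            used += frames[i][1]
    return [frames[i] for i in sorted(kept)]  # restore presentation order

# e.g. with a tight budget the B frame is dropped first:
print(select_frames([("I", 40000), ("B", 8000), ("P", 16000)], 60000))
```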

Development of Intelligent GNSS Positioning Technique Based on Low Cost Module for an Alley Navigation (골목길 내비게이션을 위한 저가 모듈 기반의 지능형 GNSS 측위 기술 개발)

  • Kim, Hye In;Park, Kwan Dong
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.24 no.3
    • /
    • pp.11-18
    • /
    • 2016
  • Since GNSS signals are blocked by buildings in urban canyons and narrow alleys, it is very difficult to secure enough visible satellites for satellite navigation in those poor signal-reception environments; one may be unable to obtain coordinates at all, or only inaccurate ones. In this study, strategies for improving positioning accuracy in urban canyons were developed and their performance was verified. First, we combined GPS and GLONASS measurements and devised algorithms to quality-control the observed signals and eliminate outliers. We also applied a new multipath-reduction scheme that uses the SNR values of the observed signals to minimize multipath effects. To verify the developed technique, a narrow alley about 10 m wide near the back gate of Inha University was selected as the test bed, and static and kinematic positioning was conducted at four pre-surveyed points. Our new algorithms produced a 45% accuracy improvement in an open-sky environment compared with the positioning result of a low-cost u-blox receiver, and in the alleys 3-D accuracy improved by an average of 37%. In kinematic positioning in particular, biases that appear with regular receivers were largely eliminated by our new filtering algorithms.
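
For readers unfamiliar with SNR-based multipath weighting, the sketch below shows one common way such a scheme can be realized: mapping each observation's SNR to a variance and weighting the positioning solution accordingly. The exponential variance model and its constants are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def snr_weight(snr_dbhz, a=10.0, c=2.25e4):
    """Map SNR (dB-Hz) to an observation weight via an exponential
    variance model: sigma^2 = c * 10**(-snr/a). Low-SNR (likely
    multipath-contaminated) signals receive small weights."""
    sigma2 = c * 10.0 ** (-np.asarray(snr_dbhz, dtype=float) / a)
    return 1.0 / sigma2

# A 45 dB-Hz open-sky signal is weighted far above a 30 dB-Hz reflected one:
print(snr_weight([45.0, 30.0]))
```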

A Study on the Efficient Algorithm for Converting Range Matching Rules into TCAM Entries in the Packet Filtering System (패킷 필터링 시스템에서 범위 규칙의 효율적 TCAM 엔트리 변환 알고리즘 연구)

  • Kim, Yong-Kwon;Cho, Hyun-Mook;Choe, Jin-Kyu;Lee, Kyou-Ho;Ki, Jang-Geun
    • Journal of IKEEE
    • /
    • v.9 no.1 s.16
    • /
    • pp.19-30
    • /
    • 2005
  • Packet classification is the action of matching a packet against a set of predefined rules. One approach is to use a Ternary Content Addressable Memory (TCAM) hardware search engine, which is faster than algorithmic methods. TCAM, however, has some limitations; one of them is that it cannot perform range matching efficiently. A range has to be expanded into prefixes aligned to bit boundaries, and in general the number of expansions can be up to 2w-2, where w is the width of the field. For example, if two 16-bit range fields are used, a single rule can require up to 30 × 30 = 900 expansions. In this paper, we describe a novel algorithm for converting range-matching rules into TCAM entries efficiently; with the algorithm, the maximum number of entries is 2w-4. It also has benefits for negation ranges. Experiments show that the new scheme reduces the number of entries by about 14 percent in practice when the searched fields are the source and destination port numbers.
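
The prefix-expansion problem the abstract refers to is easy to see in code. The sketch below implements the standard range-to-prefix splitting (the baseline whose worst case is 2w-2 entries), not the paper's improved 2w-4 algorithm, which would require details from the full text:

```python
def range_to_prefixes(start, end, width):
    """Split the integer range [start, end] into the standard minimal set
    of ternary prefixes ('*' marks don't-care bits). Worst case for a
    w-bit field is 2w - 2 prefixes, reached by ranges like [1, 2**w - 2]."""
    prefixes = []
    while start <= end:
        # Largest power-of-two block aligned at `start`...
        size = start & -start if start > 0 else 1 << width
        # ...shrunk until it fits inside the remaining range.
        while start + size - 1 > end:
            size >>= 1
        bits = size.bit_length() - 1  # number of wildcard bits
        stem = format(start >> bits, "b").zfill(width - bits) if bits < width else ""
        prefixes.append(stem + "*" * bits)
        start += size
    return prefixes

# The classic worst case: a 16-bit port range [1, 65534] needs 30 prefixes,
# so a rule with two such fields expands to 30 x 30 = 900 TCAM entries.
assert len(range_to_prefixes(1, 65534, 16)) == 30
```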

Adaptive Filter Design for Eliminating Baseline Wandering Noise of Electrocardiogram (심전도 기저선 흔들림 잡음 제거를 위한 적응형 필터 설계)

  • Choi, Chul-Hyung;Rahman, MD Saifur;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • The Journal of Korean Institute of Information Technology
    • /
    • v.15 no.12
    • /
    • pp.157-164
    • /
    • 2017
  • Mobile ECG measurement deals with small signals of only a few mV, and many studies have sought to remove its noise, including baseline wander. Removing the baseline noise caused by shaking or movement of the electrode cables is one of the core research problems in ECG measurement. In this study, we propose a modified step-size for a combined NLMS (normalized least mean squares) and DLMS (delayed least mean squares) adaptive filter to eliminate baseline noise from ECG signals. The method mainly adjusts the initial filter step-size to reduce distortion of the original ECG signal characteristics once the baseline noise has been removed; the modified step-size is scaled by the filter order and a distortion-minimization factor. The approach is suitable for portable ECG devices with small processors and low power budgets, and it also decreases computation time, which is essential for real-time filtering. The proposed filter also increases the signal-to-noise ratio (SNR) compared to a conventional NLMS filter.
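
As a reference point for the kind of filter being modified, here is a minimal NLMS adaptive filter in NumPy. The order, step size, and regularization constant are illustrative; the paper's contribution, the step-size scaling by filter order and a distortion-minimization factor, is not reproduced here:

```python
import numpy as np

def nlms(d, x, order=8, mu=0.05, eps=1e-6):
    """Baseline NLMS adaptive filter.
    d: noisy ECG (desired signal); x: reference input correlated with the
    baseline wander. Returns e = d - y, the baseline-corrected signal."""
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]                       # most recent samples first
        y = np.dot(w, u)                               # adaptive estimate of the wander
        e[n] = d[n] - y                                # error = cleaned ECG sample
        w += (mu / (np.dot(u, u) + eps)) * e[n] * u    # normalized weight update
    return e
```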

Enhanced Block Matching Scheme for Denoising Images Based on Bit-Plane Decomposition of Images (영상의 이진화평면 분해에 기반한 확장된 블록매칭 잡음제거)

  • Pok, Gouchol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.3
    • /
    • pp.321-326
    • /
    • 2019
  • Image denoising methods based on block matching are founded on the experimental observation that neighboring patches or blocks in images share similar features, and they have proved to show superior performance in removing various kinds of noise. These methods, however, take into account only neighboring blocks when searching for similar blocks and ignore the characteristic features of the reference block itself. Consequently, denoising performance suffers when outliers of the Gaussian distribution are included in the reference block to be denoised. In this paper, we propose an expanded block matching method in which noisy images are first decomposed into a number of bit-planes, the range of the true signal is then estimated from the distribution of pixels on those bit-planes, and finally outliers are replaced by neighboring pixels belonging to the estimated range. In this way, the advantages of the conventional Gaussian filter are added to the block matching method. We tested the proposed method in extensive experiments with well-known test-bed images and observed that it achieves a performance gain.
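
The bit-plane step can be sketched briefly. The code below decomposes an 8-bit block into its upper bit-planes and flags pixels whose significant planes disagree with the block's dominant pattern; the choice of four upper planes and a one-level tolerance are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

def clean_block(block, top_bits=4, tol=1):
    """Estimate the plausible signal range of a reference block from its
    upper bit-planes and replace outliers with the median of the inliers.
    `block` is a 2-D uint8 array."""
    top = block >> (8 - top_bits)   # keep only the significant bit-planes
    dominant = np.bincount(top.ravel(), minlength=1 << top_bits).argmax()
    outlier = np.abs(top.astype(int) - int(dominant)) > tol
    cleaned = block.copy()
    if outlier.any() and (~outlier).any():
        cleaned[outlier] = np.median(block[~outlier]).astype(block.dtype)
    return cleaned
```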

Grain-Size Trend Analysis for Identifying Net Sediment Transport Pathways: Potentials and Limitations (퇴적물 이동경로 식별을 위한 입도경향 분석법의 가능성과 한계)

  • Kim, Sung-Hwan;Rhew, Ho-Sahng;Yu, Keun-Bae
    • Journal of the Korean Geographical Society
    • /
    • v.42 no.4
    • /
    • pp.469-487
    • /
    • 2007
  • Grain-size trend analysis is a methodology for identifying net sediment transport pathways, based on the assumption that the movement of sediment from source to deposit leaves an identifiable spatial pattern in the mean, sorting, and skewness of grain size. It can be implemented easily and at low cost, so it has great potential to contribute to geomorphological research, but it can also be applied inadequately if its limitations are not recognized. This research compares three established methods of grain-size trend analysis in search of an adequate way of applying them, and suggests the research tasks needed to improve the methodology. The 1-D pathway method can incorporate field experience into the analysis of pathways, provides useful information on depositional environments through the X-distribution, and identifies long-term trends effectively; however, it depends on subjective interpretation and operates at a relatively coarse temporal scale. Gao and Collins's 2-D transport vector method has an objective procedure, can visualize the transport pattern in 2-D, and identifies patterns at a finer temporal scale, whereas its characteristic distance and semiquantitative filtering are controversial. Le Roux's alternative 2-D transport vector method improves on Gao and Collins's in two respects, expanding the empirical rules and considering the gradient of each parameter as well as its order, and it likewise identifies patterns at a finer temporal scale, although its basic concepts are arbitrary and complicated. Applying grain-size trend analysis requires selecting an adequate method and designing a proper sampling scheme based on the researcher's field knowledge, the temporal scale of the targeted sediment transport pattern, and the information needed. In addition, the relationship between sampling depth and the representative temporal scale should be systematically investigated to improve the methodology.
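
For readers unfamiliar with the vector methods being compared, the sketch below follows the spirit of Gao and Collins's procedure: within a characteristic distance, a unit vector is drawn from one site toward a neighbor whose sediment is either finer, better sorted, and more negatively skewed (case FB-) or coarser, better sorted, and more positively skewed (case CB+), and the vectors at each site are summed. The data layout is an assumption, and the filtering step that normally follows is omitted:

```python
import numpy as np

def trend_vectors(x, y, mean_phi, sorting, skew, dcr):
    """Sum Gao-Collins-style unit trend vectors at each sampling site.
    The arrays hold coordinates and grain-size parameters per site;
    dcr is the characteristic distance."""
    n = len(x)
    vx, vy = np.zeros(n), np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx, dy = x[j] - x[i], y[j] - y[i]
            d = np.hypot(dx, dy)
            if d == 0 or d > dcr:
                continue
            finer = mean_phi[j] > mean_phi[i]   # larger phi = finer grains
            better = sorting[j] < sorting[i]    # smaller value = better sorted
            fb_minus = finer and better and skew[j] < skew[i]
            cb_plus = (not finer) and better and skew[j] > skew[i]
            if fb_minus or cb_plus:
                vx[i] += dx / d                 # unit vector toward site j
                vy[i] += dy / d
    return vx, vy
```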

Development and Analysis of COMS AMV Target Tracking Algorithm using Gaussian Cluster Analysis (가우시안 군집분석을 이용한 천리안 위성의 대기운동벡터 표적추적 알고리듬 개발 및 분석)

  • Oh, Yurim;Kim, Jae Hwan;Park, Hyungmin;Baek, Kanghyun
    • Korean Journal of Remote Sensing
    • /
    • v.31 no.6
    • /
    • pp.531-548
    • /
    • 2015
  • Atmospheric Motion Vectors (AMVs) derived from satellite images have shown a Slow Speed Bias (SSB) in comparison with rawinsonde observations. The causes of SSB originate from tracking, selection, and height assignment errors, the last of which has been regarded as the leading error; recent work, however, has shown that height assignment error cannot fully explain the SSB. This paper attempts a new approach, examining whether the SSB of COMS AMVs can be reduced with a new target tracking algorithm. Tracking error can be caused by averaging various wind patterns within a target and by changes of cloud shape during the search process over time. To overcome this, a Gaussian Mixture Model (GMM) is adopted to extract the coldest cluster as the target, since the shape of such a target is less subject to transformation; an image filtering scheme is then applied that weights the selected coldest pixels more heavily than the others, which makes the target easier to track. When AMVs derived from our algorithm with the sum-of-squared-distance method and from the current COMS system are compared with rawinsonde, our products show a noticeable improvement over the COMS products, with mean wind speed increased by 2.7 m/s and the SSB reduced by 29%. However, the bias statistics show a negative impact at mid/low levels with our algorithm, and the number of vectors is reduced by 40% relative to COMS. Further study is therefore required to improve accuracy for mid/low-level winds and to increase the number of AMVs.
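
The GMM target-extraction step can be sketched with an off-the-shelf mixture model. Fitting a mixture to the brightness temperatures in a target box and keeping the coldest component is in the spirit of the algorithm described; the number of components and the choice of scikit-learn are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def coldest_cluster_mask(bt, n_components=3, seed=0):
    """Fit a 1-D Gaussian mixture to the brightness temperatures (K) of a
    target box and return a boolean mask of the coldest cluster's pixels,
    i.e. the cold cloud tops to be tracked."""
    X = bt.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X)
    labels = gmm.predict(X)
    coldest = int(np.argmin(gmm.means_.ravel()))  # component with lowest mean BT
    return (labels == coldest).reshape(bt.shape)
```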

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS have been creating enormous amounts of data, and the portion of it that is unstructured text has grown geometrically. Since it is difficult to read all of this text, it is important to access it rapidly and grasp its key points, and many text summarization studies have been proposed for handling and using such volumes of text. In particular, many recent methods use machine learning and artificial intelligence algorithms to generate summaries objectively and effectively, an approach called automatic summarization. However, almost all text summarization methods proposed to date build the summary around the most frequent content in the original documents, and such summaries struggle to include low-weight subjects that are mentioned less often. If a summary covers only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject the documents contain. To avoid this bias, one can summarize with attention to balance among the topics a document contains so that all subjects can be ascertained, but an unbalanced distribution across those subjects still remains. To retain subject balance in the summary, it is necessary to consider the proportion of each subject in the original documents and to allocate portions of the summary to subjects evenly, so that even sentences on minor subjects are sufficiently included. In this study, we propose a subject-balanced text summarization method that secures balance among all subjects and minimizes omission of low-frequency subjects. For the subject-balanced summary we use two evaluation criteria: completeness, meaning the summary should fully cover the contents of the original documents, and succinctness, meaning the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights indicating how strongly each term is related to each topic; from these weights, highly related terms for each topic can be identified, and the subjects of the documents emerge from topics composed of terms with similar meanings. A few terms that represent each subject well, called seed terms, are then selected. Because these seed terms are too few to describe each subject adequately, the dictionary needs additional terms similar to them: Word2Vec is used for this word expansion. After Word2Vec modeling, word vectors are obtained, and the cosine similarity between them measures how closely two terms are related, higher similarity implying a stronger relationship. Terms with high similarity to each subject's seed terms are selected, the expanded terms are filtered, and the subject dictionary is finally constructed. The second phase allocates a subject to every sentence in the original documents. A frequency analysis is first conducted on the dictionary terms appearing in each sentence, and TF-IDF weights for each subject are calculated, showing how much each sentence discusses each subject. Because TF-IDF weights can grow without bound, each sentence's subject weights are normalized to values between 0 and 1; each sentence is then assigned the subject with its maximum TF-IDF weight, yielding a group of sentences for each subject. The last phase generates the summary. Sen2Vec is used to measure similarity between the sentences of a subject, forming a similarity matrix, and by repeatedly selecting sentences it is possible to generate a summary that fully covers the contents of the original documents while minimizing duplication within itself. For evaluation, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews to generate summaries; a comparison between the proposed method's summary and a frequency-based summary verified that the proposed summary better retains the balance of the subjects originally present in the documents.
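
The dictionary-expansion step of the first phase can be sketched with gensim's Word2Vec. The corpus, seed terms, similarity threshold, and vector size below are illustrative assumptions, not the values used in the paper:

```python
from gensim.models import Word2Vec

def build_subject_dictionary(tokenized_sentences, seed_terms,
                             topn=20, threshold=0.6):
    """Expand each subject's seed terms with Word2Vec neighbors whose
    cosine similarity exceeds `threshold`, yielding the subject dictionary.
    seed_terms example: {"room": ["bed", "bathroom"], "food": ["breakfast"]}"""
    model = Word2Vec(tokenized_sentences, vector_size=100,
                     window=5, min_count=5)
    dictionary = {}
    for subject, seeds in seed_terms.items():
        expanded = set(seeds)
        for term in seeds:
            if term in model.wv:
                # most_similar returns (word, cosine similarity) pairs
                expanded.update(w for w, sim in model.wv.most_similar(term, topn=topn)
                                if sim >= threshold)
        dictionary[subject] = sorted(expanded)
    return dictionary
```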