• Title/Summary/Keyword: Remove Redundancy

Search Result 47

A comparative study of filter methods based on information entropy

  • Kim, Jung-Tae;Kum, Ho-Yeun;Kim, Jae-Hwan
    • Journal of Advanced Marine Engineering and Technology / v.40 no.5 / pp.437-446 / 2016
  • Feature selection has become an essential technique for reducing the dimensionality of data sets. Many features are frequently irrelevant or redundant for classification tasks. The purpose of feature selection is to select relevant features and remove irrelevant and redundant ones. Applications of feature selection range from text processing, face recognition, bioinformatics, speaker verification, and medical diagnosis to financial domains. In this study, we focus on filter methods based on information entropy: IG (Information Gain), FCBF (Fast Correlation Based Filter), and mRMR (minimum Redundancy Maximum Relevance). FCBF has the advantage of reducing the computational burden by eliminating redundant features that satisfy the condition of an approximate Markov blanket. However, FCBF considers only the relevance between a feature and the class when selecting the best features, and thus fails to take the interaction between features into consideration. In this paper, we propose an improved FCBF to overcome this shortcoming. We also perform a comparative study to evaluate the performance of the proposed method.
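The filter scoring that IG performs can be sketched in a few lines. The following is a minimal, self-contained illustration of information gain for discrete features; it is not the paper's improved FCBF, and all names and data are illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y|X) for one discrete feature."""
    n = len(labels)
    h_cond = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        h_cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - h_cond

# Rank features by relevance to the class (the basic filter step).
X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # two binary features per sample
y = [0, 0, 1, 1]                       # class happens to equal feature 0
scores = [information_gain(col, y) for col in zip(*X)]
```

Here the first feature perfectly determines the class (IG = 1 bit) while the second is irrelevant (IG = 0), which is exactly the ranking a filter method exploits.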

Improved AP Deployment Optimization Scheme Based on Multi-objective Particle Swarm Optimization Algorithm

  • Kong, Zhengyu;Wu, Duanpo;Jin, Xinyu;Cen, Shuwei;Dong, Fang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1568-1589 / 2021
  • Deployment of access points (APs) is a problem that must be considered in network planning. However, it is usually an NP-hard problem for which an optimal solution is difficult to reach directly. Thus, an improved AP deployment optimization scheme based on a swarm intelligence algorithm is proposed for this problem. First, the scheme estimates the number of APs. Second, the multi-objective particle swarm optimization (MOPSO) algorithm is used to optimize the location and transmit power of the APs. Finally, a greedy algorithm is used to remove the redundant APs. Compared with the multi-objective whale swarm optimization algorithm (MOWOA), particle swarm optimization (PSO), and grey wolf optimization (GWO), the proposed deployment scheme reduces AP transmit power and improves energy efficiency under different numbers of users. From the experimental results, the proposed deployment scheme reduces transmit power by about 2%-7% and increases energy efficiency by about 2%-25% compared with MOWOA. In addition, it reduces transmit power by up to 50% and increases energy efficiency by up to 200% compared with PSO and GWO.
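The final greedy step of such a scheme, removing APs that have become redundant, can be illustrated with a simple coverage model. This is a hedged sketch under an assumed boolean coverage predicate, not the paper's actual objective function:

```python
def remove_redundant_aps(aps, users, covers):
    """Greedily drop APs whose removal still leaves every user covered.

    aps: list of AP identifiers; users: list of user identifiers;
    covers(ap, user) -> bool: assumed coverage predicate.
    """
    active = list(aps)
    for ap in list(active):
        trial = [a for a in active if a != ap]
        # drop the AP only if every user remains covered without it
        if all(any(covers(a, u) for a in trial) for u in users):
            active = trial
    return active

# Toy 1-D example: APs and users as positions, coverage radius 1.0.
aps = [0.0, 0.1, 2.0]
users = [0.0, 2.0]
kept = remove_redundant_aps(aps, users, lambda ap, u: abs(ap - u) <= 1.0)
```

The two APs near position 0 cover the same user, so the greedy pass keeps only one of them.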

Multispectral Image Compression Using Classified Interband Bidirectional Prediction and Extended SPIHT (영역별 대역간 양방향 예측과 확장된 SPIHT를 이용한 다분광 화상데이터의 압축)

  • Kim, Seung-Jin;Ban, Seong-Won;Kim, Byung-Ju;Park, Kyung-Nam;Kim, Young-Choon;Lee, Kuhn-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.5 / pp.486-493 / 2002
  • In this paper, we propose an effective multispectral image compression method using CIBP (classified interband bidirectional prediction) and extended SPIHT (set partitioning in hierarchical trees) in the wavelet domain. We separately determine feature bands that have the highest correlation with other bands in the visible and infrared ranges of wavelengths. The feature bands are coded with SPIHT in the wavelet domain to remove spatial redundancy. Prediction bands that have high correlation with the feature bands are wavelet transformed and classified into one of three classes considering the reflection characteristics of the baseband. For the prediction bands, CIBP is performed to reduce spectral redundancy. The difference bands between the prediction bands and the predicted bands are ordered from the largest error magnitude to improve the compression efficiency of the extended SPIHT. The arranged bands are coded with extended SPIHT to compensate for the prediction error. Experiments were carried out on multispectral images. The results show that the proposed method reconstructs higher-quality images than the conventional methods at the same bit rate.
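The interband prediction idea, predicting one spectral band from a feature band and coding only the residual, can be sketched with a least-squares gain/offset model. This simplification omits the wavelet transform and the three-class classification; all names are illustrative:

```python
import numpy as np

def interband_predict(feature_band, target_band):
    """Fit target ~ gain * feature + offset by least squares and
    return the residual that would actually be coded."""
    f = feature_band.ravel()
    t = target_band.ravel()
    A = np.column_stack([f, np.ones_like(f)])
    (gain, offset), *_ = np.linalg.lstsq(A, t, rcond=None)
    residual = target_band - (gain * feature_band + offset)
    return gain, offset, residual

# A perfectly correlated band pair leaves an (almost) zero residual.
feature = np.arange(16.0).reshape(4, 4)
target = 2.0 * feature + 3.0
gain, offset, residual = interband_predict(feature, target)
```

The better the spectral correlation, the smaller the residual energy, which is what makes the subsequent SPIHT coding cheap.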

Parameter Estimation for Multipath Error in GPS Dual Frequency Carrier Phase Measurements Using Unscented Kalman Filters

  • Lee, Eun-Sung;Chun, Se-Bum;Lee, Young-Jae;Kang, Tea-Sam;Jee, Gyu-In;Kim, Jeong-Rae
    • International Journal of Control, Automation, and Systems / v.5 no.4 / pp.388-396 / 2007
  • This paper describes a multipath estimation method for Global Positioning System (GPS) dual frequency carrier phase measurements. Multipath is a major error source in high precision GPS applications, i.e., carrier phase measurements for precise positioning and attitude determination. In order to estimate and remove multipath in carrier phase measurements, an array GPS antenna system has been used. The known geometry between the antennas is used to estimate the multipath parameters. Dual frequency carrier phase measurements increase the redundancy of the measurements, so the number of antennas can be reduced. The unscented Kalman filter (UKF) has recently been applied in many areas to overcome some of the limitations of the extended Kalman filter (EKF), such as its weakness to severe nonlinearity. This paper uses the UKF to estimate the multipath parameters. A series of simulations was performed with GPS antenna arrays located on a straight line with one reflector. The geometry information of the antenna array reduces the number of estimated multipath parameters from four to three. Both the EKF and the UKF are used as estimation algorithms and their results are compared. When the initial parameters are far from the true parameters, the UKF shows better performance than the EKF.
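The core of the UKF is the unscented transform: propagate a small set of sigma points through the nonlinearity instead of linearizing it as the EKF does. A minimal sketch with a simple `kappa`-weighting scheme (not the authors' multipath model):

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear function f
    using 2n+1 sigma points, avoiding EKF-style Jacobians."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in sigma])
    y_mean = w @ ys
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                for wi, yi in zip(w, ys))
    return y_mean, y_cov

# Sanity check: for a linear map the transform is exact.
y_mean, y_cov = unscented_transform(lambda x: 2.0 * x,
                                    np.array([1.0, 2.0]), np.eye(2))
```

Inside a full UKF the same sigma-point propagation is applied to both the process and measurement models at every step.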

The Design of Repeated Motion on Adaptive Block Matching Algorithm in Real-Time Image (실시간 영상에서 반복적인 움직임에 적응한 블록정합 알고리즘 설계)

  • Kim Jang-Hyung;Kang Jin-Suk
    • Journal of Korea Multimedia Society / v.8 no.3 / pp.345-354 / 2005
  • Since motion estimation and motion compensation remove redundant data by exploiting the temporal redundancy in images, they play an important role in digital video compression. Because of their high computational complexity, however, they are difficult to apply to high-resolution applications in real-time environments. If we have a priori knowledge about the motion of an image block before motion estimation, the location of a better starting point for the search for an exact motion vector can be determined to expedite the searching process. This paper presents a motion detection algorithm that runs robustly in the presence of repetitive motion. The motion detection compares and analyzes two frames to judge whether motion has occurred. Through experiments, we show significant improvements in the reduction of computational time in terms of the number of search steps, without much quality degradation in the predicted image.

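The block-matching baseline that such algorithms improve on is exhaustive search for the motion vector minimizing a distortion measure such as SAD. A minimal full-search reference (the paper's contribution is precisely avoiding this exhaustive search):

```python
import numpy as np

def full_search(cur, ref, bx, by, bsize=8, srange=4):
    """Exhaustive block matching: return the (dx, dy) displacement into
    the reference frame minimizing the sum of absolute differences."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y + bsize <= ref.shape[0] \
                                 and x + bsize <= ref.shape[1]:
                sad = np.abs(block - ref[y:y + bsize, x:x + bsize].astype(int)).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

# A frame shifted down by 1 and right by 2 should yield mv = (-2, -1).
ref = np.arange(256).reshape(16, 16)
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))
mv, sad = full_search(cur, ref, bx=4, by=4)
```

The (2·srange+1)² candidate evaluations per block are what fast algorithms reduce to a handful of search steps.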

Efficient Motion Information Representation in Splitting Region of HEVC (HEVC의 분할 영역에서 효율적인 움직임 정보 표현)

  • Lee, Dong-Shik;Kim, Young-Mo
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.485-491 / 2012
  • This paper proposes a quadtree-based 'Coding Unit Tree' to efficiently represent the splitting information of a Coding Unit (CU) together with its motion vectors in HEVC. The new international video coding standard, High Efficiency Video Coding (HEVC), adopts various techniques and new unit concepts: CU, Prediction Unit (PU), and Transform Unit (TU). The basic coding unit, the CU, is larger than the macroblock of H.264/AVC, and it is split in an image-based quadtree with a hierarchical structure. However, when a CU contains complex motion, more signaling bits carrying motion information need to be transmitted. This structure provides flexibility and a basis for optimization, but there is overhead in the splitting information. This paper analyzes those signals and proposes a new algorithm that removes this redundancy. The proposed algorithm uses a type code, a dominant value, and residue values at each node of the quadtree to remove the additional bits. The type code represents the structure of an image tree, and the two kinds of values represent a node value. The results show that the proposed algorithm achieves a 13.6% bit-rate reduction over HM-1.0.
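The dominant-value/residue idea can be illustrated on a single quadtree node. This is a hypothetical simplification, not HM's actual syntax: a type code flags which of the four children differ from the dominant value, and only those children carry residues:

```python
from collections import Counter

def encode_node(children):
    """Encode four child values as (dominant, type_code, residues)."""
    dominant = Counter(children).most_common(1)[0][0]
    type_code = [int(v != dominant) for v in children]  # 1 = differs
    residues = [v - dominant for v in children if v != dominant]
    return dominant, type_code, residues

def decode_node(dominant, type_code, residues):
    it = iter(residues)
    return [dominant + next(it) if flag else dominant for flag in type_code]

# Three of four children share a value, so only one residue is coded.
node = [5, 5, 7, 5]
enc = encode_node(node)
```

When neighboring blocks move together, most children match the dominant value and the redundant per-child signaling disappears.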

Efficient Residual Upsampling Scheme for H.264/AVC SVC (H.264/AVC SVC를 위한 효율적인 잔여신호 업 샘플링 기법)

  • Goh, Gyeong-Eun;Kang, Jin-Mi;Kim, Sung-Min;Chung, Ki-Dong
    • Journal of KIISE:Information Networking / v.35 no.6 / pp.549-556 / 2008
  • To achieve flexible visual content adaptation for multimedia communications, the ISO/IEC MPEG and ITU-T VCEG formed the JVT to develop the SVC amendment to the H.264/AVC standard. The JVT uses inter-layer prediction as well as the inter prediction and intra prediction provided in H.264/AVC to remove the redundancy among layers. The main goal is to design inter-layer prediction tools that use as much base layer information as possible to improve the rate-distortion efficiency of the enhancement layer. However, inter-layer prediction increases the computational complexity. In this paper, we propose an efficient residual prediction that reduces the computational complexity while maintaining high coding efficiency. The proposed residual prediction uses a modified version of the interpolation defined in H.264/AVC SVC.
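Residual upsampling itself can be sketched with plain bilinear interpolation of a base-layer residual block to enhancement-layer resolution. This stand-in ignores the standard's actual filter taps and edge handling:

```python
import numpy as np

def upsample2x_bilinear(res):
    """2x upsample a base-layer residual with bilinear interpolation,
    replicating the last row/column at the border."""
    h, w = res.shape
    out = np.zeros((2 * h, 2 * w))
    out[::2, ::2] = res
    # interpolate the in-between columns on even rows
    out[::2, 1:-1:2] = (res[:, :-1] + res[:, 1:]) / 2.0
    out[::2, -1] = res[:, -1]
    # interpolate the in-between rows from the completed even rows
    out[1:-1:2, :] = (out[:-2:2, :] + out[2::2, :]) / 2.0
    out[-1, :] = out[-2, :]
    return out

up = upsample2x_bilinear(np.array([[0.0, 2.0], [4.0, 6.0]]))
```

The upsampled residual serves as the inter-layer prediction that the enhancement layer then corrects.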

The Intelligent Clinical Laboratory as a Tool to Increase Cancer Care Management Productivity

  • Mohammadzadeh, Niloofar;Safdari, Reza
    • Asian Pacific Journal of Cancer Prevention / v.15 no.6 / pp.2935-2937 / 2014
  • Studies of the causes of cancer, early detection, prevention, and treatment need accurate, comprehensive, and timely cancer data. The clinical laboratory provides important cancer information needed by physicians, which influences clinical decisions regarding treatment, diagnosis, and patient monitoring. Poor communication between health care providers and clinical laboratory personnel can lead to medical errors and wrong decisions in providing cancer care. Because of the key impact of laboratory information on cancer diagnosis and treatment, the quality of the tests, lab reports, and appropriate lab management are very important. A laboratory information management system (LIMS) can play an important role in diagnosis, provide fast and effective access to cancer data, decrease redundancy and costs, and facilitate the integration and collection of data from different types of instruments and systems. In spite of its significant advantages, a LIMS is limited by factors such as difficulty adapting to new instruments that may change existing work processes. Applying intelligent software alongside existing information systems, in addition to removing these restrictions, has important benefits, including adding non-laboratory-generated information to the reports, facilitating decision making, and improving the quality and productivity of cancer care services. Laboratory systems must be flexible to change and capable of developing with, and benefiting from, intelligent devices. Intelligent laboratory information management systems need to benefit from informatics tools and the latest technologies, such as open-source software. The aim of this commentary is to survey the application, opportunities, and necessity of the intelligent clinical laboratory as a tool to increase cancer care management productivity.

Improvement of MLLR Algorithm for Rapid Speaker Adaptation and Reduction of Computation (빠른 화자 적응과 연산량 감소를 위한 MLLR알고리즘 개선)

  • Kim, Ji-Un;Chung, Jae-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.1C / pp.65-71 / 2004
  • We improve the MLLR speaker adaptation algorithm by reducing the order of the HMM parameters using PCA (Principal Component Analysis) or ICA (Independent Component Analysis). To find a smaller set of variables with less redundancy, we adopt PCA and ICA, which give as good a representation as possible while minimizing the correlations between data elements, and remove the axes with less covariance or higher-order statistical dependence. The ordinary MLLR algorithm needs more than 30 seconds of adaptation data for SD (Speaker Dependent) models to achieve a higher word recognition rate than SI (Speaker Independent) models, whereas the proposed algorithm needs only about 10 seconds of adaptation data. Ten components for ICA and PCA show performance similar to 36 components in the ordinary MLLR framework. Compared with the ordinary MLLR algorithm, the total amount of computation required for speaker adaptation is thus reduced to about 1/167 in the proposed MLLR algorithm.
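The PCA half of this dimensionality reduction can be sketched directly from the covariance eigendecomposition. A minimal version (illustrative only; the paper applies the reduction to MLLR transformation parameters, not to arbitrary data):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components,
    i.e. keep the axes with the largest variance."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    _, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]           # top-k eigenvectors
    return Xc @ W

# The third coordinate is constant, so two components lose no variance.
X = np.array([[1., 0., 5.], [2., 1., 5.], [3., 2., 5.], [4., 3., 5.]])
Z = pca_reduce(X, 2)
```

Dropping low-variance axes is what lets 10 components stand in for 36 with little loss.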

An Adaptive Block Matching Algorithm Based on Temporal Correlations (시간적 상관성을 이용한 적응적 블록 정합 알고리즘)

  • Yoon, Hyo-Sun;Lee, Guee-Sang
    • The KIPS Transactions:PartB / v.9B no.2 / pp.199-204 / 2002
  • Since motion estimation and motion compensation remove redundant data by exploiting the temporal redundancy in images, they play an important role in digital video compression. Because of their high computational complexity, however, they are difficult to apply to high-resolution applications in real-time environments. If we have information about the motion of an image block before motion estimation, the location of a better starting point for the search for an exact motion vector can be determined to expedite the searching process. In this paper, we present an adaptive motion estimation approach based on the temporal correlations of consecutive image frames that defines the search pattern and determines the location of the initial search point adaptively. Through experiments, the proposed algorithm is about 0.1-0.5 dB better than the DS (Diamond Search) algorithm in terms of PSNR (Peak Signal to Noise Ratio) and improves the average number of search points per motion vector estimation by as much as 50% compared with DS.
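The initial-search-point idea common to both block-matching entries above can be sketched as: take the motion vector of the co-located block in the previous frame as the search center, then refine locally. All names and data are illustrative:

```python
def predicted_start(prev_mvs, block_idx):
    """Temporal prediction: reuse the co-located block's motion
    vector from the previous frame as the initial search center."""
    return prev_mvs.get(block_idx, (0, 0))

def refine(center, cost, radius=1):
    """Small local search around the predicted start point."""
    cx, cy = center
    candidates = [(cx + dx, cy + dy)
                  for dx in range(-radius, radius + 1)
                  for dy in range(-radius, radius + 1)]
    return min(candidates, key=cost)

prev_mvs = {7: (3, -2)}                 # MV field from frame t-1
start = predicted_start(prev_mvs, 7)    # start at (3, -2), not (0, 0)
true_mv = (4, -2)
mv = refine(start, lambda v: abs(v[0] - true_mv[0]) + abs(v[1] - true_mv[1]))
```

Starting near the true vector lets a tiny local search replace a wide one, which is where the reduction in search points comes from.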