• Title/Summary/Keyword: Information filtering


Privacy-Preserving Parallel Range Query Processing Algorithm Based on Data Filtering in Cloud Computing (클라우드 컴퓨팅에서 프라이버시 보호를 지원하는 데이터 필터링 기반 병렬 영역 질의 처리 알고리즘)

  • Kim, Hyeong Jin;Chang, Jae-Woo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.9
    • /
    • pp.243-250
    • /
    • 2021
  • Recently, with the development of cloud computing, interest in database outsourcing has been increasing. However, when a database is outsourced, the data owner's information is exposed to internal and external attackers. Therefore, in this paper, we propose a parallel range query processing algorithm that supports privacy protection. The proposed algorithm uses the Paillier cryptosystem to provide data protection, query protection, and access pattern protection. To reduce the operation cost of the protocol that checks for overlapping regions (SRO) in the existing algorithm, the efficiency of the SRO protocol is improved through a garbled circuit. The proposed parallel range query processing algorithm consists of two main steps: a parallel kd-tree search step, which searches the kd-tree in parallel and securely extracts the data of the leaf nodes containing the query, and a parallel data search step, which retrieves the data included in the query region through multiple threads. In addition, the proposed algorithm provides high query processing performance through parallelization of the secure protocols and the index search. We show that the performance of the proposed parallel range query processing algorithm increases in proportion to the number of threads and that it achieves about a fivefold performance improvement over the existing algorithm.
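A minimal sketch of the additive homomorphism that the Paillier cryptosystem provides, which is the property the protocols above rely on for data, query, and access pattern protection. The python-paillier (phe) package and the sample values are choices made for this illustration, not the authors' implementation.

```python
# Illustrates Paillier's additive homomorphism: ciphertexts can be added and
# scaled by plaintext constants without decrypting them.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# The data owner encrypts two attribute values before outsourcing them.
enc_a = public_key.encrypt(42)
enc_b = public_key.encrypt(17)

# The cloud can combine ciphertexts without ever seeing the plaintexts.
enc_sum = enc_a + enc_b      # corresponds to E(a) * E(b) mod n^2  ->  E(a + b)
enc_scaled = enc_a * 3       # corresponds to E(a)^3 mod n^2       ->  E(3a)

# Only the key holder (data owner / authorized user) can decrypt the results.
assert private_key.decrypt(enc_sum) == 59
assert private_key.decrypt(enc_scaled) == 126
```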

Enhanced Block Matching Scheme for Denoising Images Based on Bit-Plane Decomposition of Images (영상의 이진화평면 분해에 기반한 확장된 블록매칭 잡음제거)

  • Pok, Gouchol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.3
    • /
    • pp.321-326
    • /
    • 2019
  • Image denoising methods based on block matching are founded on the empirical observation that neighboring patches or blocks in an image share similar features, and they have been shown to perform very well in removing various kinds of noise. These methods, however, consider only the neighboring blocks when searching for similar blocks and ignore the characteristic features of the reference block itself. Consequently, denoising performance suffers when outliers of the Gaussian distribution are included in the reference block to be denoised. In this paper, we propose an extended block matching method in which a noisy image is first decomposed into a number of bit-planes, the range of true signal values is then estimated from the distribution of pixels on those bit-planes, and finally outliers are replaced by neighboring pixels belonging to the estimated range. In this way, the advantages of the conventional Gaussian filter are added to the block matching method. We tested the proposed method through extensive experiments with well-known test images and observed that it achieves a performance gain.
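A small sketch of the bit-plane decomposition step the abstract describes, assuming an 8-bit grayscale input; the outlier estimation and replacement stages of the method are not reproduced here.

```python
import numpy as np

def bit_planes(image: np.ndarray) -> np.ndarray:
    """Split an 8-bit grayscale image into 8 binary planes (plane 0 = LSB)."""
    assert image.dtype == np.uint8
    return np.stack([(image >> k) & 1 for k in range(8)], axis=0)  # (8, H, W)

def reconstruct(planes: np.ndarray) -> np.ndarray:
    """Recombine the binary planes back into the original 8-bit image."""
    weights = (2 ** np.arange(8)).reshape(8, 1, 1)
    return np.sum(planes * weights, axis=0).astype(np.uint8)

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
    planes = bit_planes(img)
    assert np.array_equal(reconstruct(planes), img)  # decomposition is lossless
```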

Design and Implementation of Anomaly Traffic Control framework based on Linux Netfilter System and CBQ Routing Mechanisms (리눅스 Netfilter시스템과 CBQ 라우팅 기능을 이용한 비정상 트래픽 제어 프레임워크 설계 및 구현)

  • 조은경;고광선;이태근;강용혁;엄영익
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.13 no.6
    • /
    • pp.129-140
    • /
    • 2003
  • Recently, viruses and various hacking tools that threaten hosts on a network have become more intelligent and sophisticated, and various security mechanisms against them have been developed over the last decades. To detect these network attacks, many NIPSs (Network-based Intrusion Prevention Systems), which are more functional than traditional NIDSs, have been developed by several companies and organizations. However, many previous NIPSs are known to have weaknesses in protecting important hosts from network attacks because of their incorrectness and post-management aspects. The incorrectness aspect means that many NIPSs incorrectly discriminate between normal and attack network traffic in real time. The post-management aspect means that they generally respond to attacks only after the intrusions have already been carried out to a large extent. Therefore, to detect network attacks in real time and to increase packet analysis capability, faster and more active response capabilities are required of NIPS frameworks. In this paper, we propose a framework for real-time intrusion prevention. This framework consists of a packet filtering component that works on netfilter in the Linux kernel and a traffic control component that can control abnormal network traffic step by step using the CBQ mechanism.
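The framework itself runs as a netfilter hook in the Linux kernel together with CBQ queueing; purely as a userspace illustration of the packet-filtering idea, the sketch below inspects packets handed off through an NFQUEUE rule. The queue number, threshold, and "drop oversized UDP packets" rule are assumptions for the example, not the paper's detection logic.

```python
# Userspace illustration only. Packets reach this script via a rule such as:
#   iptables -A INPUT -j NFQUEUE --queue-num 1
from netfilterqueue import NetfilterQueue
from scapy.all import IP, UDP  # scapy is used only to parse the raw payload

SUSPICIOUS_UDP_LEN = 1400  # assumed threshold for this sketch

def inspect(packet):
    pkt = IP(packet.get_payload())
    if UDP in pkt and len(pkt) > SUSPICIOUS_UDP_LEN:
        packet.drop()      # discard traffic classified as abnormal
    else:
        packet.accept()    # normal traffic passes through unchanged

nfqueue = NetfilterQueue()
nfqueue.bind(1, inspect)   # queue number must match the iptables rule above
try:
    nfqueue.run()
except KeyboardInterrupt:
    nfqueue.unbind()
```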

Design and Implementation of an E-mail Worm-Virus Filtering System on MS Windows (MS 윈도우즈에서 E-메일 웜-바이러스 차단 시스템의 설계 및 구현)

  • Choi Jong-Cheon;Chang Hye-Young;Cho Seong-Je
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.15 no.6
    • /
    • pp.37-47
    • /
    • 2005
  • Recently, malicious e-mail worm-viruses have spread widely over the Internet. If the recipient opens an e-mail attachment, or an e-mail itself, that contains a worm-virus, the worm-virus can be activated and then cause tremendous damage to the system by propagating itself to everyone on the mailing list in the user's e-mail package. In this paper, we have designed and implemented two methods for blocking e-mail worm-viruses. In the first method, each e-mail is transmitted only by an explicit sender action, such as clicking a button in the mail client application. In the second, we insert two modules on the sender side: one transforms the recipient's address according to a predefined rule only at the moment the button is pushed, and the other converts the address back whenever an e-mail is actually sent. The latter method also supports a polymorphism model in order to cope with new types of e-mail worm-virus attacks. Both methods are designed so that e-mail worm-viruses cannot make use of them, and no additional component is required on the receiver's side of the e-mail system. Experimental results show that the proposed methods can screen e-mail worm-viruses efficiently with low overhead.
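A hedged sketch of the idea behind the second method: rewrite the recipient address with a reversible rule at click time and restore it just before transmission, so mail generated without the click-time module cannot be delivered. The paper does not disclose its transformation rule; the Base32 encoding and the "wv-" prefix below are placeholders invented for this illustration.

```python
import base64

PREFIX = "wv-"  # hypothetical marker showing the address was produced by a user click

def transform(address: str) -> str:
    """Click-time module: rewrite the recipient address with a reversible rule."""
    local, domain = address.split("@", 1)
    encoded = base64.b32encode(local.encode()).decode().rstrip("=")
    return f"{PREFIX}{encoded}@{domain}"

def restore(address: str) -> str:
    """Send-time module: restore the address; reject mail not sent via the click path."""
    local, domain = address.split("@", 1)
    if not local.startswith(PREFIX):
        raise ValueError("address was not produced by the click-time module")
    encoded = local[len(PREFIX):]
    padding = "=" * (-len(encoded) % 8)
    return f"{base64.b32decode(encoded + padding).decode()}@{domain}"

assert restore(transform("alice@example.com")) == "alice@example.com"
```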

Design of a 60 Hz Band Rejection Filter Insensitive to Component Tolerances (부품 허용 오차에 둔감한 60Hz 대역 억제 필터 설계)

  • Cheon, Jimin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.2
    • /
    • pp.109-116
    • /
    • 2022
  • In this paper, we propose a band rejection filter (BRF) with a state variable filter (SVF) structure to effectively remove 60 Hz line-frequency noise introduced into a sensor system. A conventional BRF based on the SVF structure uses an additional operational amplifier (OPAMP) to sum the low pass filter (LPF) and high pass filter (HPF) outputs, or to combine the input signal with the band pass filter (BPF) output. Therefore, the notch frequency and the notch depth, which determine the signal attenuation of the BRF, depend strongly on the tolerance of the resistors used to form the sum or difference of these signals. In the proposed BRF, by contrast, the band-rejection output is formed naturally within the SVF structure, so no such combination between ports is needed. The notch frequency of the proposed BRF is 59.99 Hz, and Monte Carlo simulations confirm that it is unaffected by resistor tolerance. The notch depth has a mean of -42.54 dB and a standard deviation of 0.63 dB, confirming normal operation as a BRF. In addition, the proposed BRF was applied to filter an electrocardiogram (ECG) signal contaminated with 60 Hz interference, and the 60 Hz noise was appropriately suppressed.
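The paper's filter is an analog SVF circuit; purely as a software-side counterpart (not the proposed design), the sketch below builds a digital 60 Hz IIR notch with SciPy and applies it to a synthetic signal contaminated by line noise. The sampling rate and quality factor are assumed values.

```python
import numpy as np
from scipy import signal

fs = 1000.0                      # sampling rate in Hz (assumed)
f0, Q = 60.0, 30.0               # notch frequency and quality factor
b, a = signal.iirnotch(f0, Q, fs)

t = np.arange(0, 2.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 1.3 * t)             # stand-in for a slow ECG component
noisy = clean + 0.5 * np.sin(2 * np.pi * 60 * t)  # add 60 Hz line interference
filtered = signal.filtfilt(b, a, noisy)         # zero-phase notch filtering

# Residual 60 Hz content after filtering (FFT bin at 60 Hz).
print("residual 60 Hz amplitude:",
      np.abs(np.fft.rfft(filtered)[int(60 * len(t) / fs)]))
```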

Automated Analysis for PDC-R Technique by Multiple Filtering (다중필터링에 의한 PDC-R 기법의 자동화 해석)

  • Joh, Sung-Ho;Rahman, Norinah Abd;Hassanul, Raja
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.3C
    • /
    • pp.141-148
    • /
    • 2010
  • Electrical noise such as self-potential, burst noise, and 60 Hz interference is one of the factors that reduce the reliability of electrical resistivity surveys. Even the recently developed PDC-R (Pseudo DC Resistivity) technique suffers from low reliability due to electrical noise; that is, both DC-based and AC-based resistivity techniques are subject to reliability problems caused by the electrical noise present at urban geotechnical sites. In this research, a new technique is proposed to enhance the reliability of the PDC-R technique by minimizing the influence of electrical noise. In addition, an automated procedure is proposed to facilitate the data analysis and interpretation of PDC-R measurements. The proposed technique consists of two steps: (1) extracting only the information related to the input current by means of a multiple-filter technique, and (2) sorting out only the signal information that shows stable and reliable characteristics. The automated procedure was verified with a synthetic harmonic wave containing a DC shift, random burst noise, and 60 Hz electrical noise, and it was applied to a site investigation in an urban area to demonstrate its feasibility and accuracy.
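A hedged sketch of the first step described above: isolating the components related to the injected current by narrow band-pass filtering. The excitation frequency, bandwidth, and noise mix are assumptions made for this illustration, not the paper's field parameters.

```python
import numpy as np
from scipy import signal

fs = 2000.0                 # sampling rate (assumed)
f_injected = 8.0            # assumed frequency of the injected current signal

t = np.arange(0, 5.0, 1.0 / fs)
measured = (0.8 * np.sin(2 * np.pi * f_injected * t)   # wanted response
            + 0.3                                      # self-potential / DC shift
            + 0.4 * np.sin(2 * np.pi * 60 * t)         # 60 Hz pickup
            + 0.2 * np.random.randn(t.size))           # random burst-like noise

# Narrow band-pass centred on the injected frequency (4th-order Butterworth).
low, high = f_injected - 1.0, f_injected + 1.0
sos = signal.butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
recovered = signal.sosfiltfilt(sos, measured)  # DC shift and 60 Hz noise are rejected
```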

Correlation Between Sensory Processing Ability and Characteristics of Eating for Children With Pervasive Developmental Disorders (전반적 발달장애아동의 감각처리능력과 섭식 특성의 상관관계)

  • Kang, Hyun-Jin;Chang, Moon-Young;Kim, Kyeong-Mi
    • The Journal of Korean Academy of Sensory Integration
    • /
    • v.9 no.2
    • /
    • pp.41-49
    • /
    • 2011
  • Objective : This study aims to compare children with and without pervasive developmental disorders in terms of sensory processing ability and behavioral characteristics of oral feeding, and to identify correlations between sensory processing and characteristics of eating. Methods : The subjects were normal children and children diagnosed with a pervasive developmental disorder, aged 4 to 6. The research instruments were the Short Sensory Profile (SSP), the Brief Autism Mealtime Behavior Inventory (BAMBI), and the food items of the Sensory Checklist. Data collection was carried out by a professional survey institute operating in 10 cities including Busan, South Korea. Survey questionnaires were distributed through the institute to 455 parents of children with and without pervasive developmental disabilities. A total of 263 of the 455 questionnaires (62%) were returned, and 154 were used in the data analysis: 45 for children with pervasive developmental disabilities and 109 for normal children. The data were analyzed to identify correlations between sensory processing and characteristics of eating such as eating behavior and oral feeding. Results : 1. There was a significant difference between children with and without pervasive developmental disorders in all areas of sensory processing ability (p<.05). 2. There was no difference between children with and without pervasive developmental disorders in eating behavior (p=0.881) or oral feeding (p=0.324). 3. In the group of children with pervasive developmental disorders, a negative correlation was found between sensory processing, eating behavior, and oral feeding (r=-0.384, p<.01). 4. A notably significant correlation was found between sensory processing and eating behavior, especially for taste/smell sensitivity (r=-0.6, p<.01) and auditory filtering (r=-0.326, p<.05). The correlation between sensory processing and oral feeding was most significant for under-responsiveness/sensation seeking (r=-0.372, p<.05) and auditory filtering (r=-0.382, p<.05). Conclusion : This study found significant correlations between sensory processing ability and some characteristics of eating behavior in children with pervasive developmental disorders. This information can be useful for developing programs to address the eating behavior problems of children with pervasive developmental disorders.


A Study on the Selection of Parameter Values of FUSION Software for Improving Airborne LiDAR DEM Accuracy in Forest Area (산림지역에서의 LiDAR DEM 정확도 향상을 위한 FUSION 패러미터 선정에 관한 연구)

  • Cho, Seungwan;Park, Joowon
    • Journal of Korean Society of Forest Science
    • /
    • v.106 no.3
    • /
    • pp.320-329
    • /
    • 2017
  • This study aims to evaluate whether the accuracy of a LiDAR DEM is affected by changes across five input levels ('1', '3', '5', '7' and '9') of the median parameter ($F_{md}$) and mean parameter ($F_{mn}$) of the Filtering Algorithm (FA) in the GroundFilter module, and of the median parameter ($I_{md}$) and mean parameter ($I_{mn}$) of the Interpolation Algorithm (IA) in the GridSurfaceCreate module of FUSION, in order to present the combination of parameter levels that produces the most accurate LiDAR DEM. Accuracy is measured by the residuals calculated as the difference between the field-surveyed elevation values and the corresponding DEM elevation values. A multi-way ANOVA is used to statistically examine whether the parameter level changes affect the means of the residuals, and the Tukey HSD is conducted as a post-hoc test. The results of the multi-way ANOVA show that changes in the levels of $F_{md}$, $F_{mn}$, and $I_{mn}$ have significant effects on DEM accuracy, with a significant interaction effect between $F_{md}$ and $F_{mn}$. Therefore, the levels of $F_{md}$ and $F_{mn}$ and their interaction, as well as the level of $I_{mn}$, are considered factors affecting the accuracy of the LiDAR DEM. According to the Tukey HSD test on the combined levels of $F_{md}{\ast}F_{mn}$, the mean residual of the '$9{\ast}3$' combination provides the highest accuracy while the '$1{\ast}1$' combination provides the lowest. Regarding the $I_{mn}$ levels, both '3' and '1' provide the highest accuracy. This study can contribute to improving the accuracy of forest attributes as well as the topographic information extracted from LiDAR data.
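A hedged sketch of the statistical analysis described above, using statsmodels to run a multi-way ANOVA on DEM residuals with the FUSION parameter levels as factors, including the $F_{md}{\ast}F_{mn}$ interaction. The column names and the random placeholder data are hypothetical, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
levels = [1, 3, 5, 7, 9]
df = pd.DataFrame({
    "F_md": rng.choice(levels, 500),
    "F_mn": rng.choice(levels, 500),
    "I_md": rng.choice(levels, 500),
    "I_mn": rng.choice(levels, 500),
})
df["residual"] = rng.normal(0, 0.3, 500)   # placeholder for field minus DEM elevation

# Multi-way ANOVA with the F_md x F_mn interaction; Tukey HSD would follow post hoc.
model = ols("residual ~ C(F_md) * C(F_mn) + C(I_md) + C(I_mn)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```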

Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging
    • /
    • v.18 no.4
    • /
    • pp.303-313
    • /
    • 2014
  • Purpose : In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provide a parametric model of the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The evaluation was repeated over several parameters such as the Larmor frequency, the object radius, and the SNR of the $B_1{^+}$ map, and a parametric model for the conductivity-to-noise ratio was developed from these results. Results: According to the simulation results, conductivity estimation is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the signal-to-noise ratio of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate the conductivity of a target tissue accurately, the SNR of the $B_1{^+}$ map and an adequate filtering size have to be taken into account in the reconstruction process. In addition, the simulation results were verified on a conventional 3T MRI scanner. Conclusion: Through these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
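A hedged illustration of the noise-propagation experiment described above: complex Gaussian noise is added to a simulated $B_1{^+}$ map and the conductivity is re-estimated with a simplified phase-based EPT relation ($\sigma \approx \nabla^2\varphi / (\mu_0\omega)$). The phantom values, SNR, pixel size, and this simplified formula are assumptions for the sketch, not the paper's exact reconstruction.

```python
import numpy as np

mu0 = 4e-7 * np.pi
omega = 2 * np.pi * 128e6          # Larmor frequency at roughly 3T (assumed)
sigma_true = 0.5                   # S/m, assumed phantom conductivity
dx = 2e-3                          # 2 mm pixel size (assumed)

# Toy B1+ map: uniform magnitude, quadratic phase consistent with sigma_true.
n = 64
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
phase = 0.25 * mu0 * omega * sigma_true * (X**2 + Y**2)
b1 = np.exp(1j * phase)

# Add complex Gaussian noise at a chosen SNR.
rng = np.random.default_rng(0)
snr = 50.0
noise = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / (snr * np.sqrt(2))
b1_noisy = b1 + noise

def estimate_sigma(b1_map):
    """Simplified phase-based estimate: discrete Laplacian of the phase / (mu0 * omega)."""
    phi = np.angle(b1_map)
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / dx**2
    return lap / (mu0 * omega)

print("noise-free estimate:", estimate_sigma(b1)[16:48, 16:48].mean())
print("noisy estimate     :", estimate_sigma(b1_noisy)[16:48, 16:48].mean())
```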

A New Item Recommendation Procedure Using Preference Boundary

  • Kim, Hyea-Kyeong;Jang, Moon-Kyoung;Kim, Jae-Kyeong;Cho, Yoon-Ho
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.81-99
    • /
    • 2010
  • Lately, in consumer markets the number of new items is increasing at an overwhelming rate, while consumers have limited access to information about those new products when trying to make a sensible, well-informed purchase. Therefore, item providers and customers need a system which recommends the right items to the right customers. Also, whenever new items are released, a recommender system specializing in new items can help item providers locate and identify potential customers. Currently, new items are added to an existing system without being specially noted to consumers, making it difficult for consumers to identify and evaluate new products introduced in the markets. Most previous approaches to recommender systems rely on the usage history of customers; for new items, this content-based (CB) approach is simply not available for recommending those items to potential consumers. Although the collaborative filtering (CF) approach is not directly applicable to the new item problem, it is a good idea to use the basic principle of CF, which identifies similar customers, i.e. neighbors, and recommends items that those neighbors have liked in the past. This research suggests a hybrid recommendation procedure based on the preference boundary of the target customer, using the preference boundary in the feature space to recommend new items only. The basic principle is that if a new item falls within the preference boundary of a target customer, it is evaluated as preferred by that customer. Customers' preferences and the characteristics of items, including new items, are represented in a feature space, and the scope or boundary of the target customer's preference is extended to those of the neighbors. The new item recommendation procedure consists of three steps. The first step is analyzing the profiles of the items, which are represented as k-dimensional feature values. The second step is to determine the representative point of the target customer's preference boundary, the centroid, based on a personal information set. To determine the centroid of the preference boundary of a target customer, three algorithms are developed in this research: one uses the centroid of the target customer only (TC), another uses the centroid of a (dummy) big target customer composed of the target customer and his/her neighbors (BC), and the third uses the centroids of the target customer and his/her neighbors (NC). The third step is to determine the range of the preference boundary, the radius; the suggested algorithm uses the average distance (AD) between the centroid and all purchased items. We test whether the CF-based approach to determining the centroid of the preference boundary improves recommendation quality. For this purpose, we develop two hybrid algorithms, BC and NC, which use neighbors when deciding the centroid of the preference boundary, and compare them against the CB algorithm, TC, which uses the target customer only. We measured the effectiveness of the suggested algorithms and compared them through a series of experiments with a set of real mobile image transaction data, using the period from 1st June 2004 to 31st July 2004 as the training set and the period from 1st August to 31st August 2004 as the test set.
The training set is used to construct the preference boundary, and the test set is used to evaluate the performance of the suggested hybrid recommendation procedure. The main aim of this research is to compare the hybrid recommendation algorithms with the CB algorithm. To evaluate the performance of each algorithm, we compare the list of new items purchased in the test period with the list of items recommended by the suggested algorithms, employing the hit ratio as the evaluation metric. The hit ratio is defined as the ratio of the hit set size to the recommended set size, where the hit set size is the number of successful recommendations in our experiment and the test set size is the number of items purchased during the test period. The experimental results show that the hit ratios of BC and NC are higher than that of TC, which means that using neighbors is more effective for recommending new items; that is, the hybrid algorithm using CF is more effective than the algorithm using only CB when recommending new items to consumers. The hit ratio of BC is smaller than that of NC because BC is defined as a dummy or virtual customer who purchased all the items of the target customer and the neighbors; the centroid of BC therefore often shifts away from that of TC and tends to reflect skewed characteristics of the target customer. The recommendation algorithm using NC shows the best hit ratio, because NC has sufficient information about the target customer and the neighbors without damaging the information about the target customer.
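A hedged sketch of the BC variant described above: the target customer's and neighbors' purchases are pooled into a dummy "big customer", the centroid of the pooled items in the k-dimensional feature space gives the representative point, and the radius follows the average distance (AD) rule. The feature values are toy numbers invented for this example.

```python
import numpy as np

def preference_boundary_bc(target_items: np.ndarray, neighbor_items: np.ndarray):
    """BC: pool target and neighbor purchases into one dummy 'big customer'."""
    pooled = np.vstack([target_items, neighbor_items])
    centroid = pooled.mean(axis=0)
    radius = np.linalg.norm(pooled - centroid, axis=1).mean()  # AD rule
    return centroid, radius

def recommend_new_items(new_items: np.ndarray, centroid: np.ndarray, radius: float):
    """A new item is recommended if it falls inside the preference boundary."""
    distances = np.linalg.norm(new_items - centroid, axis=1)
    return np.where(distances <= radius)[0]

# Toy example with k = 3 feature dimensions.
target = np.array([[0.90, 0.10, 0.20], [0.80, 0.20, 0.10]])
neighbors = np.array([[0.70, 0.30, 0.20], [0.85, 0.15, 0.25]])
new_items = np.array([[0.82, 0.20, 0.18], [0.10, 0.90, 0.90]])

centroid, radius = preference_boundary_bc(target, neighbors)
print(recommend_new_items(new_items, centroid, radius))  # -> [0]: only the first new item fits
```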