• Title/Summary/Keyword: weighted sharing


A Comparative Study on the Improvement-required order of Non-territorial office and Territorial office (비 영역 업무공간과 일반 업무공간의 개선 우선순위 비교연구)

  • Cho, Ji-Yeon
    • Journal of The Korean Digital Architecture Interior Association / v.7 no.2 / pp.51-58 / 2007
  • The saying that "in a rapidly changing external management environment, only a light-weighted body can survive" refers to 'lightness' in the workspace: reducing expenditure on real estate and unnecessary facilities, and keeping facilities flexible enough to respond promptly to approaching situations. For this reason, many enterprises no longer provide an individual workstation for each person; by introducing a sharing system and reducing the workspace area, they have curtailed real estate expense per person. Although the reduction in real estate expense obtained merely by shrinking the workspace yields a remarkable saving, there is also concern that a workspace narrower than before, or the disappearance of private space, could dampen working enthusiasm and produce a counterproductive effect. Using this investigation method, the first-stage survey identified the satisfaction of non-territorial and territorial office users and the order of required improvements; the appraisal items were arranged into six factors in total, and all of the factors influenced overall satisfaction at a statistically significant level.


Enhanced Reputation-based Fusion Mechanism for Secure Distributed Spectrum Sensing in Cognitive Radio Networks (인지 라디오 네트워크에서 안전한 분산 스펙트럼 센싱을 위한 향상된 평판기반 퓨전 메커니즘)

  • Kim, Mi-Hui;Choo, Hyun-Seung
    • Journal of Internet Computing and Services / v.11 no.6 / pp.61-72 / 2010
  • The spectrum scarcity problem and the increasing spectrum demand of new wireless applications have highlighted the importance of cognitive radio technology, which enables channels to be shared among secondary (unlicensed) and primary (licensed) users on a non-interference basis after sensing for a vacant channel. To enhance sensing accuracy, distributed spectrum sensing has been proposed; however, it must be made robust against compromised sensing nodes. RDSS, a fusion mechanism based on the reputation of sensing nodes and the WSPRT (weighted sequential probability ratio test), was proposed for this purpose. In RDSS, however, the number of WSPRT executions can grow depending on the order in which sensing values are input, and rapid defense against forged values is difficult. In this paper, we propose an enhanced fusion mechanism that inputs the sensing values in reputation order and excludes sensing values with a high probability of being compromised, using the trend of reputation variation. We evaluate the mechanism through simulation; the results show that it improves robustness against attack, requiring fewer sensing values and achieving a more accurate detection ratio than RDSS.
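
As an illustration of the fusion idea in this abstract, the following is a minimal, hedged Python sketch of a reputation-ordered, WSPRT-style sequential fusion: sensing reports are consumed in descending reputation order, and nodes whose reputation has recently dropped sharply are skipped. The detection/false-alarm probabilities (p_d, p_f), the decision thresholds (eta0, eta1), and the trend_floor cutoff are illustrative assumptions, not values from the paper.

```python
import math

def enhanced_fusion(reports, reputations, rep_trends,
                    p_d=0.9, p_f=0.1, trend_floor=-0.2,
                    eta0=-2.0, eta1=2.0):
    """Reputation-ordered WSPRT-style fusion (illustrative sketch only).

    reports:     dict node -> local binary decision (1 = primary user present).
    reputations: dict node -> current reputation score (>= 0).
    rep_trends:  dict node -> recent reputation change (negative = falling).
    """
    # Key idea from the abstract: feed values in descending reputation order.
    ordered = sorted(reports, key=lambda n: reputations[n], reverse=True)
    llr = 0.0
    for node in ordered:
        # Exclude values that look compromised: reputation falling too fast.
        if rep_trends.get(node, 0.0) < trend_floor:
            continue
        u = reports[node]
        # Per-report log-likelihood ratio, weighted by the node's reputation.
        step = math.log((p_d if u else 1 - p_d) / (p_f if u else 1 - p_f))
        llr += max(reputations[node], 0.0) * step
        if llr >= eta1:
            return 1   # decide: primary user present
        if llr <= eta0:
            return 0   # decide: channel vacant
    return int(llr >= 0.0)  # undecided after all reports: use the sign

# Example call with made-up values:
# enhanced_fusion({"a": 1, "b": 1, "c": 0},
#                 {"a": 1.2, "b": 0.9, "c": 0.4},
#                 {"a": 0.05, "b": 0.01, "c": -0.5})
```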

Speech extraction based on AuxIVA with weighted source variance and noise dependence for robust speech recognition (강인 음성 인식을 위한 가중화된 음원 분산 및 잡음 의존성을 활용한 보조함수 독립 벡터 분석 기반 음성 추출)

  • Shin, Ui-Hyeop;Park, Hyung-Min
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.326-334 / 2022
  • In this paper, we propose a speech enhancement algorithm as pre-processing for robust speech recognition in noisy environments. Auxiliary-function-based Independent Vector Analysis (AuxIVA) is performed with a weighted covariance matrix that uses time-varying variances scaled by target masks representing the time-frequency contributions of the target speech. The mask estimates can be obtained from a Neural Network (NN) pre-trained for speech extraction, or from diffuseness based on the Coherence-to-Diffuse power Ratio (CDR), to find the direct-sound component of the target speech. In addition, the outputs for omni-directional noise are closely chained by sharing their time-varying variances, similarly to independent subspace analysis or IVA. The AuxIVA-based speech extraction method is also formulated in the Independent Low-Rank Matrix Analysis (ILRMA) framework by extending the Non-negative Matrix Factorization (NMF) of the noise outputs to Non-negative Tensor Factorization (NTF), so that the inter-channel dependency of the noise output channels is maintained. Experimental results on the CHiME-4 datasets demonstrate the effectiveness of the presented algorithms.
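
The central quantity in the abstract above is a covariance matrix weighted by mask-scaled, time-varying source variances. The NumPy sketch below shows only that weighting step, under assumed shapes (an STFT tensor of (frequency, time, channels) and a target mask in [0, 1]); the variance model and normalization are simplifications, not the authors' exact AuxIVA update rules.

```python
import numpy as np

def weighted_covariance(X, mask, eps=1e-6):
    """Mask-weighted spatial covariance per frequency bin (illustrative sketch).

    X:    complex STFT of the mixture, shape (freq, time, channels).
    mask: target time-frequency mask in [0, 1], shape (freq, time).
    Returns an array of shape (freq, channels, channels).
    """
    F, T, M = X.shape
    # Time-varying source variance estimated from the masked mixture power
    # (a simplification of the scaled-variance model in the abstract).
    var = np.maximum(mask * np.mean(np.abs(X) ** 2, axis=2), eps)   # (F, T)
    w = mask / var                                                  # (F, T)
    # Weighted outer products of channel vectors, averaged over time.
    return np.einsum('ft,ftm,ftn->fmn', w, X, X.conj()) / T
```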

Measuring Out-of-pocket Payment, Catastrophic Health Expenditure and the Related Socioeconomic Inequality in Peru: A Comparison Between 2008 and 2017

  • Hernandez-Vasquez, Akram;Rojas-Roque, Carlos;Vargas-Fernandez, Rodrigo;Rosselli, Diego
    • Journal of Preventive Medicine and Public Health / v.53 no.4 / pp.266-274 / 2020
  • Objectives: To describe out-of-pocket payment (OOP) and the proportion of Peruvian households with catastrophic health expenditure (CHE), and to evaluate changes in socioeconomic inequalities in CHE between 2008 and 2017. Methods: We used data from the 2008 and 2017 National Household Surveys on Living and Poverty Conditions (ENAHO in Spanish), which are based on probabilistic stratified, multistage and independent sampling of areas. OOP was converted into constant 2017 dollars. A household was considered to have CHE when the ratio of OOP to payment capacity was ≥0.40. OOP was described by the median and interquartile range, while CHE was described by weighted proportions and 95% confidence intervals (CIs). To estimate the socioeconomic inequality in CHE we computed the Erreygers concentration index. Results: The median OOP decreased from 205.8 US dollars to 158.7 US dollars between 2008 and 2017. The proportion of CHE decreased from 4.9% (95% CI, 4.5 to 5.2) in 2008 to 3.7% (95% CI, 3.4 to 4.0) in 2017. Comparison of the socioeconomic inequality of CHE showed no differences between 2008 and 2017, except for rural households, in which CHE became less concentrated in richer households (p<0.05), and households located on the rest of the coast, which showed an increase in the concentration of CHE in richer households (p<0.05). Conclusions: Although OOP and CHE decreased between 2008 and 2017, there is still socioeconomic inequality in the burden of CHE across different subpopulations. To reverse this situation, access to health resources and health services should be promoted and guaranteed for all populations.
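
For readers who want to reproduce the headline indicator, the sketch below computes the survey-weighted proportion of households with CHE exactly as defined in the Methods (OOP over payment capacity ≥ 0.40). The array names and the simple weighted average are assumptions about the data layout, not ENAHO's actual schema, and the Erreygers concentration index is not reproduced here.

```python
import numpy as np

def catastrophic_share(oop, capacity, weights, threshold=0.40):
    """Weighted proportion of households with catastrophic health expenditure.

    oop:      out-of-pocket health payment per household (constant 2017 USD).
    capacity: household payment capacity, same units.
    weights:  survey expansion weights.
    A household is flagged when oop / capacity >= threshold (0.40 above).
    """
    oop, capacity, weights = map(np.asarray, (oop, capacity, weights))
    che = (oop / np.maximum(capacity, 1e-9)) >= threshold
    return np.average(che, weights=weights)

# Example with made-up households: two of three are flagged, and their
# weighted share is returned.
# catastrophic_share([100, 500, 50], [1000, 900, 100], [1.0, 2.0, 1.5])
```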

Comparison of Methods for Linkage Analysis of Affected Sibship Data (이환 형제 자료에 대한 유전적 연관성 분석 방법의 비교)

  • Go, Min-Jin;Lim, Kil-Seob;Lee, Hak-Bae;Song, Ki-Jun
    • The Korean Journal of Applied Statistics / v.22 no.2 / pp.329-340 / 2009
  • For complex diseases such as diabetes and hypertension, model-free methods are believed to work better because they do not require precise knowledge of the mode of inheritance controlling the disease trait. This is done by estimating the probabilities that a pair shares zero, one, or two alleles identical by descent (IBD), and it includes several specific test procedures, i.e., the mean test, the proportion test, and the minmax test. Among them, the minmax test is known to be more robust than the others regardless of the genetic mode of inheritance. In this study, we compared the power of methods that are based on the minmax test and incorporate weighting schemes for sib pairs when analyzing sibship data. In the simulation results, the Suarez-based method was more powerful than the others regardless of marker allele frequency, genetic mode of inheritance, and sibship size. The power of both the Suarez- and Hodge-based methods increased with marker allele frequency and sibship size, and this result was especially pronounced under a dominant mode of inheritance.
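
The three allele-sharing tests named in this abstract differ only in the weight given to pairs sharing one allele IBD. The sketch below expresses them as a single Z statistic, assuming per-pair IBD probabilities can be treated as observed sharing counts; w = 0.5 gives the mean test, w = 0 the proportion test, and w ≈ 0.275 is the weight usually quoted for the minmax test. This is a simplified illustration, not the paper's weighting scheme for sibships.

```python
import numpy as np
from scipy.stats import norm

def allele_sharing_test(p1, p2, w=0.275):
    """One-sided Z test for excess IBD sharing in affected sib pairs (sketch).

    p1, p2: per-pair probabilities of sharing 1 and 2 alleles IBD.
    w:      weight on IBD = 1 sharing (0.5 -> mean test, 0 -> proportion test,
            ~0.275 -> the commonly cited minmax choice).
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    n = len(p1)
    s = w * p1 + p2                        # per-pair weighted sharing score
    mu0 = w * 0.5 + 0.25                   # null: P(IBD=1)=1/2, P(IBD=2)=1/4
    var0 = (w ** 2) * 0.5 + 0.25 - mu0 ** 2
    z = (s.mean() - mu0) / np.sqrt(var0 / n)
    return z, 1 - norm.cdf(z)              # reject H0 (no linkage) for large z
```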

An Effective Method for Blocking Illegal Sports Gambling Ads on Social Media

  • Kim, Ji-A;Lee, Geum-Boon
    • Journal of the Korea Society of Computer and Information / v.24 no.12 / pp.201-207 / 2019
  • In this paper, we propose an effective method for blocking illegal gambling advertisements on social media. With the increase in smartphone and internet usage, users can easily access various information while sharing content such as text and video with large numbers of others. At the same time, illegal sports gambling advertisements continue to be transmitted on SNS. To evade most surveillance networks, the phrases indicating illegal sports gambling are embedded inside images, so users are easily exposed to illegal sports gambling advertisement images. To cope with this problem, we propose a method that actively blocks illegal sports gambling advertisements, in contrast to conventional passive methods. We select words frequently used in illegal sports gambling, classify them into three groups according to their importance, calculate a weighted frequency (WF) for each word using a formula weighted by its degree of relevance and frequency, and then sum the WF values of the words found in the image. Blocking, warning, or passing is determined by cv, the total of the WF values. In experiments with the proposed method, 193 out of 200 test images were judged correctly, for an accuracy of 96.5%, although 7 of the misjudged images were illegal sports gambling advertisements. Further research is needed to block the remaining 3.5% of illegal sports gambling ads that could not be blocked.
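
The scoring rule described in this abstract (per-word WF summed into a total cv, then thresholded into block / warn / pass) can be written in a few lines. In the sketch below, the word list, group weights, and thresholds are all placeholders invented for illustration; the paper's actual word groups, WF formula constants, and cutoffs are not reproduced.

```python
# Illustrative placeholders only; not the paper's word lists or constants.
GROUP_WEIGHT = {1: 3.0, 2: 2.0, 3: 1.0}           # importance group -> weight
WORD_GROUP = {"totosite": 1, "betting": 1,         # hypothetical gambling terms
              "exchange_money": 2, "signup_bonus": 2,
              "safe_playground": 3}
BLOCK_THRESHOLD = 10.0
WARN_THRESHOLD = 5.0

def judge_ad_image(extracted_words):
    """Sum each matched word's WF (group weight per occurrence) into cv,
    then map cv to a block / warn / pass decision."""
    cv = 0.0
    for word in extracted_words:
        group = WORD_GROUP.get(word)
        if group is not None:
            cv += GROUP_WEIGHT[group]
    if cv >= BLOCK_THRESHOLD:
        return "block", cv
    if cv >= WARN_THRESHOLD:
        return "warn", cv
    return "pass", cv

# judge_ad_image(["betting", "totosite", "signup_bonus", "totosite"])
# -> ("block", 11.0)
```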

A Study on Detection Technique of Anomaly Signal for Financial Loan Fraud Based on Social Network Analysis (소셜 네트워크 분석 기반의 금융회사 불법대출 이상징후 탐지기법에 관한 연구)

  • Wi, Choong-Ki;Kim, Hyoung-Joong;Lee, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.22 no.4 / pp.851-868 / 2012
  • After the financial crisis of 2008, the financial market still seems unstable, as the insolvency of financial companies' real estate project financing loans expands in the aftermath of the prolonged real estate recession. In particular, since the illegal actions of people's financial institutions were disclosed, the anxiety of economic agents about financial markets has increased, the confusion in financial markets has deepened, and the potential risk to the overall national economy is growing. As the economic recession drags on, people's financial institutions with weak profit structures and financing ability commit illegal acts in a variety of ways in order to conceal insolvent assets. In particular, it is hard to detect in advance the loans of shareholders and of the same borrower sharing credit risk, because most of them use bank accounts under third parties' names. Therefore, in order to effectively detect fraud committed under others' names, it is necessary to cluster the borrowers highly related to a particular borrower through an analysis of the associations among all borrowers. In this paper, we introduce analysis techniques for detecting financial loan fraud in advance through such an association analysis, extending SNA (social network analysis), which has recently been studied mainly in sociology, to the forensic accounting field of financial fraud. The technique introduced in this paper will also be very useful to regulatory authorities and law enforcement agencies in field inspections and investigations.
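
As a toy illustration of the association analysis described above, the sketch below links two borrowers whenever they share an attribute such as a deposit account, phone number, or address, and then returns the borrowers most strongly tied to a target borrower. The record fields, the networkx representation, and the min_weight cutoff are assumptions for illustration, not the paper's actual indicators of name-lending.

```python
import networkx as nx

def related_borrowers(loan_records, target, min_weight=2):
    """Cluster candidates around `target` by counting shared attributes.

    loan_records: iterable of dicts such as
        {"borrower": "A", "account": "110-xx", "phone": "010-xx", "address": "..."}
    """
    G = nx.Graph()
    seen = {}  # (field, value) -> set of borrowers that used it
    for rec in loan_records:
        b = rec["borrower"]
        G.add_node(b)
        for key in ("account", "phone", "address"):
            val = rec.get(key)
            if not val:
                continue
            for other in seen.setdefault((key, val), set()):
                if other != b:
                    # Edge weight = number of shared attributes so far.
                    w = G.get_edge_data(b, other, default={}).get("weight", 0) + 1
                    G.add_edge(b, other, weight=w)
            seen[(key, val)].add(b)
    # Borrowers sharing at least `min_weight` attributes with the target.
    return sorted((n for n in G.neighbors(target)
                   if G[target][n]["weight"] >= min_weight),
                  key=lambda n: -G[target][n]["weight"])
```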

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia Pacific Journal of Information Systems / v.21 no.2 / pp.89-116 / 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are somewhat heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to the evaluation method of humans, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can also check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subject and object. In the case that many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the assignment order. This idea comes from studies in psychology in which expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should carry more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms: while the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
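
To make the notion of mutual interaction concrete, the following is a deliberately simplified sketch: importance scores propagate over property links in both directions, each link scaled by a property weight and an optional time decay, and the dominant score vector is found by power iteration. The link representation, the exponential half-life decay, and the omission of class-specific normalization and expertise weights are all simplifying assumptions relative to the framework described above.

```python
import numpy as np

def mutual_interaction_rank(links, n_nodes, prop_weight,
                            ages=None, half_life=30.0, iters=50):
    """Direction-free, property- and time-weighted ranking (illustrative sketch).

    links:       list of (i, j, prop) triples between node indices.
    prop_weight: dict prop -> relative weight of that property.
    ages:        optional list of link ages in days (aligned with `links`).
    """
    W = np.zeros((n_nodes, n_nodes))
    for k, (i, j, prop) in enumerate(links):
        w = prop_weight.get(prop, 1.0)
        if ages is not None:
            w *= 0.5 ** (ages[k] / half_life)   # time-valued weight: newer counts more
        # Mutual interaction: the link contributes in both directions.
        W[i, j] += w
        W[j, i] += w
    score = np.ones(n_nodes) / n_nodes
    for _ in range(iters):                      # power iteration
        score = W @ score
        score /= np.linalg.norm(score, 1) or 1.0
    return score
```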

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.137-148 / 2014
  • Recommender systems have become one of the most important technologies in e-commerce these days. The ultimate reason to shop online, for many consumers, is to reduce the effort of information search and purchase, and recommender systems are a key technology for serving these needs. Many past studies about recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful one. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who do not have any evaluations or preference information, CF cannot come up with recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty; this sparse dataset makes computation for recommendation extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users who are connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from other users; therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two, gray sheep and others, based on the degree centrality of the users, and then apply different similarity measures and recommendation methods to these two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate those nodes having degree centrality values lower than a pre-set threshold; the threshold value is determined by simulations such that the accuracy of CF for the remaining dataset is maximized. Step 3: Apply an ordinary CF algorithm to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used to generate recommendations for these users. The F measures of the two datasets are weighted by the numbers of nodes and summed to serve as the final performance metric. In order to test the performance improvement of this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data by the GroupLens research team. We used 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm utilizing the 'best-N-neighbors' and cosine similarity methods. The empirical results show that the F measure improved by about 11% on average when the proposed algorithm was used. Past studies to improve CF performance typically used additional information other than users' evaluations, such as demographic data, and some studies applied SNA techniques as a new similarity metric. This study is novel in that it uses SNA to separate the dataset, and it shows that the performance of CF can be improved, without any additional information, when SNA techniques are used as proposed. This study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, this study provides guidelines for improving the performance of CF recommender systems with a simple modification.
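
The dataset-splitting step (Steps 1 and 2 in the abstract) is easy to express directly. The sketch below projects a binary user-item matrix onto a user-user network, computes normalized degree centrality, and separates gray sheep from the rest; the co-rating link rule and the externally supplied threshold are assumptions, and the simulation used in the paper to tune the threshold is not shown.

```python
import numpy as np

def split_gray_sheep(ratings, centrality_threshold):
    """Separate gray-sheep users by degree centrality (illustrative sketch).

    ratings: binary user-item matrix, shape (n_users, n_items),
             1 where the user rated the item.
    Returns (gray_sheep_indices, other_indices).
    """
    ratings = np.asarray(ratings)
    # Step 1: two-mode (user-item) -> one-mode (user-user) projection;
    # here, two users are linked if they rated at least one common item.
    co = ratings @ ratings.T
    adjacency = (co > 0).astype(int)
    np.fill_diagonal(adjacency, 0)
    # Step 2: normalized degree centrality of each user node.
    degree = adjacency.sum(axis=1) / (ratings.shape[0] - 1)
    gray_sheep = np.where(degree < centrality_threshold)[0]
    others = np.where(degree >= centrality_threshold)[0]
    # Step 3 would run ordinary CF on `others`; Step 4 would recommend
    # popular items to `gray_sheep`.
    return gray_sheep, others
```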

