• Title/Summary/Keyword: Information System Reliability


Nondestructive Quantification of Corrosion in Cu Interconnects Using Smith Charts (스미스 차트를 이용한 구리 인터커텍트의 비파괴적 부식도 평가)

  • Minkyu Kang;Namgyeong Kim;Hyunwoo Nam;Tae Yeob Kang
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.31 no.2
    • /
    • pp.28-35
    • /
    • 2024
  • Corrosion inside electronic packages significantly impacts system performance and reliability, necessitating non-destructive diagnostic techniques for system health management. This study presents a non-destructive method for assessing corrosion in copper interconnects using the Smith chart, a tool that visualizes the magnitude and phase of complex impedance together. For the experiment, specimens simulating copper transmission lines were subjected to temperature and humidity cycles according to the MIL-STD-810G standard to induce corrosion. The corrosion level of each specimen was quantitatively assessed and labeled based on color changes in the R channel. As corrosion progressed, the S-parameters and Smith charts showed distinct patterns corresponding to five corrosion levels, confirming the effectiveness of the Smith chart as a corrosion-assessment tool. Furthermore, through data augmentation, 4,444 Smith charts representing various corrosion levels were obtained, and artificial intelligence models were trained to output the corrosion stage of a copper interconnect from an input Smith chart. Among CNN and Transformer models specialized for image classification, the ConvNeXt model achieved the highest diagnostic performance, with an accuracy of 89.4%. Diagnosing corrosion with the Smith chart enables non-destructive evaluation using electrical signals, and because it integrates and visualizes signal magnitude and phase information, it is expected to support intuitive and noise-robust diagnosis.
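The Smith-chart representation rests on the complex reflection coefficient; a minimal sketch, assuming a 50 Ω reference impedance and illustrative load values (not the paper's measurements):

```python
def reflection_coefficient(z_load, z0=50.0):
    """Gamma = (Z - Z0) / (Z + Z0): its real and imaginary parts are the
    Cartesian coordinates of the load on the Smith chart."""
    return (z_load - z0) / (z_load + z0)

# A matched 50-ohm load sits at the chart's center (Gamma = 0) ...
gamma_matched = reflection_coefficient(50 + 0j)

# ... while added series resistance and reactance (e.g. from corrosion)
# move the point away from the center, changing magnitude and phase.
gamma_corroded = reflection_coefficient(80 + 15j)
```

Tracking how this point drifts across corrosion stages is what gives the chart its diagnostic pattern.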

Land Cover Classification of Coastal Area by SAM from Airborne Hyperspectral Images (항공 초분광 영상으로부터 연안지역의 SAM 토지피복분류)

  • LEE, Jin-Duk;BANG, Kon-Joon;KIM, Hyun-Ho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.1
    • /
    • pp.35-45
    • /
    • 2018
  • Image data collected by an airborne hyperspectral camera system are highly usable for coastline mapping, detection of facilities composed of specific materials, detailed land use analysis, change monitoring, and so forth in a complex coastal area, because the system provides nearly complete spectral and spatial information for each image pixel across tens to hundreds of spectral bands. Several approaches based on SAM (Spectral Angle Mapper) supervised classification were applied to extract optimal land cover information from hyperspectral images acquired by a CASI-1500 airborne hyperspectral camera over a coastal area including both land and sea. We applied and compared three approaches: first, classifying the combined land and sea areas together; second, reclassifying after decomposing the combined classification result into land and sea areas; and third, classifying the land area only, using atmospherically corrected images. Land cover classification was also conducted by selecting from the 48 hyperspectral bands not only the four bands matching the wavelength ranges of IKONOS, QuickBird, KOMPSAT, and GeoEye satellite images but also the eight bands matching WorldView-2, and the results were compared with classification using all 48 bands. As a result, the reclassification approach after decomposing land and sea areas was more effective than classifying the combined areas, and within that approach, the larger the number of bands, the higher the accuracy and reliability. With higher spectral resolution, asphalt and concrete roads could be classified more accurately.
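The SAM rule underlying all three approaches assigns each pixel to the class whose reference spectrum subtends the smallest spectral angle; a minimal sketch with illustrative 4-band spectra (not CASI data):

```python
import numpy as np

def spectral_angle(target, reference):
    """Spectral Angle Mapper: angle (radians) between two spectra.
    Smaller angles mean a closer spectral match, largely independent of
    illumination-driven brightness differences."""
    t, r = np.asarray(target, float), np.asarray(reference, float)
    cos_theta = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# A pixel is assigned to the class whose reference spectrum gives the
# smallest angle (the spectra below are invented for illustration).
pixel = [0.12, 0.18, 0.30, 0.45]
classes = {"water": [0.08, 0.06, 0.04, 0.02],
           "vegetation": [0.10, 0.15, 0.28, 0.44]}
best = min(classes, key=lambda c: spectral_angle(pixel, classes[c]))
```

Because the angle ignores overall magnitude, two spectra that differ only by a brightness scale factor map to the same class.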

Dispute of Part-Whole Representation in Conceptual Modeling (부분-전체 관계에 관한 개념적 모델링의 논의에 관하여)

  • Kim, Taekyung;Park, Jinsoo;Rho, Sangkyu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.97-116
    • /
    • 2012
  • Conceptual modeling is an important step in successful system development. It helps system designers and business practitioners share the same view of domain knowledge; when done well, it can increase productivity and reduce failures. However, the value of conceptual modeling is unlikely to be evaluated uniformly, because there is little agreement on how to elicit concepts and how to represent them with conceptual modeling constructs. In particular, designing relationships between components, known as part-whole relationships, has been regarded as complicated work. A recent study, "Representing Part-Whole Relations in Conceptual Modeling: An Empirical Evaluation" (Shanks et al., 2008), published in MIS Quarterly, can be regarded as one positive effort: it is one of the few attempts to clarify how to select modeling alternatives in part-whole design, and it presents results based on an empirical experiment. Shanks et al. argue that there are two modeling alternatives for representing part-whole relationships: an implicit representation and an explicit one. Based on their experiment, they contend that the explicit representation increases the value of a conceptual model, and they justify their findings by citing the BWW ontology. Recently, the study has faced criticism. Allen and March (2012) argue that the experiment lacks validity and reliability because its setting suffers from an error-prone and self-serving design; they contend that it was constructed to support the authors' idea that using concrete UML constructs yields better model understanding, and they add that it failed to consider boundary conditions, reducing its credibility. Shanks and Weber (2012) flatly contradict Allen and March (2012). In their defense, they posit that the BWW ontology is rightly applied in supporting the research and that the experiment is acceptable; they therefore argue that Allen and March distort the true value of Shanks et al. by pointing out minor limitations. In this study, we investigate the dispute around Shanks et al. in order to answer the following question: "What is the proper value of the study conducted by Shanks et al.?" More fundamentally, we question whether the BWW ontology is the only viable foundation for exploring better conceptual modeling methods and procedures. To understand the key issues, we first reviewed previous studies relating to the BWW ontology and critically reviewed both Shanks and Weber and Allen and March. We then discuss theories of part-whole (or part-of) relationships that are rarely treated in the dispute. As a result, we found three additional issues not sufficiently covered by the dispute, whose main focus is on errors of experimental method: Shanks et al. did not use Bunge's ontology properly; the refutation of a paradigm shift lacks a concrete, logical rationale; and the conceptualization of part-whole relations should be reformed. In conclusion, Allen and March properly identify issues that weaken the value of Shanks et al., and their criticism is generally reasonable; however, they do not sufficiently answer how future studies on part-whole relationships should be anchored. We argue that the use of the BWW ontology should be rigorously evaluated against the original philosophical rationales surrounding part-whole existence, and that conceptual modeling of part-whole phenomena should be investigated through a richer set of alternative theories. The criticism of Shanks et al. should not be regarded as a contradiction of evaluating modeling methods for alternative part-whole representations; on the contrary, it should be viewed as a call for research on usable and useful approaches that increase the value of conceptual modeling.

Analysis of Building Characteristics and Temporal Changes of Fire Alarms (건물 특성과 시간적 변화가 소방시설관리시스템의 화재알람에 미치는 영향 분석 연구)

  • Lim, Gwanmuk;Ko, Seoltae;Kim, Yoosin;Park, Keon Chul
    • Journal of Internet Computing and Services
    • /
    • v.22 no.4
    • /
    • pp.83-98
    • /
    • 2021
  • The purpose of this study is to identify the factors influencing fire alarms using data from the IoT firefighting facility management system of the Seoul Fire & Disaster Headquarters, and to present academic implications for establishing an effective fire-prevention system. As the number of tall, complex buildings increases and existing buildings are modernized, fire detection facilities that can respond quickly to emergencies are also increasing. However, if fire situations are detected incorrectly and accuracy falls, residents' inconvenience grows and trust in the system declines. It is therefore necessary to improve system accuracy through efficient inspection and investigation of buildings' internal environments. This study shows that false detections can arise from building characteristics such as usage and from time of day, thereby emphasizing the need for efficient system inspection and control of the internal environment. The results show that the size (total floor area) of a building has the greatest effect on fire alarms, and that alarms increase for private buildings, R-type receivers, and buildings with many failure or shutoff days. In addition, the factors influencing fire alarms differed depending on a building's main usage. In terms of time, alarms followed people's daily patterns on weekdays (9 a.m. to 6 p.m.), peaking around 10 a.m. and 2 p.m. The study argues that the building environments causing fire alarms should be investigated alongside internal system inspection, and proposes recording additional building environment data in real time for follow-up research and system enhancement.
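The weekday peak pattern reported above can be illustrated with a minimal sketch; the alarm records below are invented for illustration, not the study's data:

```python
from collections import Counter

# Hypothetical alarm log entries: hour of day plus a weekday flag.
alarms = [
    {"hour": 10, "weekday": True},  {"hour": 10, "weekday": True},
    {"hour": 14, "weekday": True},  {"hour": 14, "weekday": True},
    {"hour": 14, "weekday": True},  {"hour": 3,  "weekday": False},
]

# Count weekday alarms per hour and take the busiest hour,
# mirroring the study's weekday time-pattern analysis.
weekday_counts = Counter(a["hour"] for a in alarms if a["weekday"])
peak_hour = weekday_counts.most_common(1)[0][0]
```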

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from the unstructured text data that constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized across industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting a company's business strategy, and there has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific, appropriate information difficult to obtain. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic, bottom-up market size estimation from individual companies' product information. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and applied a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently. Product names similar to KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, taken as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional methods that depend on sampling or multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently according to the purpose of information use by changing the cosine-similarity threshold. Furthermore, the approach has high potential for practical application, since it can meet unmet needs for detailed market size information in the public and private sectors: it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports by private firms. A limitation of our study is that the model still needs improvement in accuracy and reliability. The semantic word-embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. The product-group clustering could also be replaced with other types of unsupervised machine learning. We are currently working on subsequent studies and expect to further improve the performance of the basic model proposed here.
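The grouping-and-summation step can be sketched as follows; toy three-dimensional vectors stand in for the 300-dimensional trained Word2Vec embeddings, and the product names, sales figures, and threshold are hypothetical:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings standing in for a trained Word2Vec model.
embeddings = {
    "apple juice":  [0.9, 0.1, 0.0],
    "orange juice": [0.8, 0.2, 0.1],
    "steel pipe":   [0.0, 0.1, 0.9],
}
sales = {"apple juice": 120, "orange juice": 80, "steel pipe": 500}

# Form a product group around a seed name by cosine-similarity threshold,
# then sum the group's sales to estimate the group's market size.
seed, threshold = "apple juice", 0.9
group = [name for name, vec in embeddings.items()
         if cosine(vec, embeddings[seed]) >= threshold]
market_size = sum(sales[name] for name in group)
```

Raising or lowering the threshold widens or narrows the product group, which is how the method adjusts the granularity of the market category.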

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology, utilized in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services in combination with big data. Data accumulated from Internet-of-Things (IoT) devices are widely used to build convenience-oriented smart systems, since they enable customized intelligent services through analysis of user environments and patterns. Recently, IoT has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as addressing traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger-flow control information system to enhance the convenience of citizens and commuters in congested public transportation such as subways and urban railways, the ease of securing real-time service data and the stability of security must be considered together. However, previous studies using image data face limitations: object-detection performance degrades under privacy constraints and abnormal conditions. The IoT sensor data used in this study are free from privacy issues because they do not identify individuals, and can therefore be used effectively to build intelligent public services for the general public. We use IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily, with temperature data measured by the sensors transmitted in real time. The experimental environment for real-time data collection was established at the equally spaced midpoints of a 4×4 grid in the ceiling of subway entrances with high passenger flow, measuring the temperature change of objects entering and leaving the detection spots. The measured data were preprocessed by setting reference values for the 16 areas and calculating the difference between each area's temperature and its reference value per unit of time, a method that maximizes the visibility of movement within the detection area. In addition, the values were scaled by a factor of ten to reflect temperature differences more sensitively; for example, a sensor reading of 28.5℃ was analyzed as 285. The data collected from the sensors thus have the characteristics of both time series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short-Term Memory), that combines a CNN, which excels at image classification, with an LSTM, which is especially suitable for time series analysis. The CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas. We validated the proposed model by comparing its performance with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short-Term Memory). In the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance. By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response, are expected to be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances, and only data collected over a short period were used for prediction; verification in other environments remains to be carried out. In the future, the proposed model is expected to become more reliable if experimental data are collected in a wider range of environments or if training data are supplemented with measurements from other sensors.
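The preprocessing described above (per-cell deltas against reference values, scaled by ten) can be sketched as follows, with invented readings on the 4×4 sensor grid:

```python
import numpy as np

# Illustrative readings shaped (time, 4, 4) like the ceiling sensor grid;
# all cells idle at 28.5 C except one warmed by a passing person.
readings = np.full((3, 4, 4), 28.5)
readings[1, 0, 0] = 29.1

scaled = readings * 10.0       # 28.5 C becomes 285, as in the paper
reference = scaled[0]          # per-cell baseline frame
features = scaled - reference  # scaled delta per unit of time
```

Each `features` frame is what the CNN sees as a 4×4 "image", while the sequence of frames is what the LSTM consumes as a time series.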

A Study on Differentiation and Improvement in Arbitration Systems in Construction Disputes (건설분쟁 중재제도의 차별화 및 개선방안에 관한 연구)

  • Lee, Sun-Jae
    • Journal of Arbitration Studies
    • /
    • v.29 no.2
    • /
    • pp.239-282
    • /
    • 2019
  • The importance of ADR (Alternative Dispute Resolution), which offers expertise, speed, and neutrality, has grown with the increase in arbitration cases arising from domestic and foreign construction disputes. For the nation's arbitration system and arbitration organizations to join the ranks of advanced international arbitral institutions, it is necessary to study the characteristics and advantages of those institutions through a review of prior domestic and foreign research and of their operation. Three problems are identified: first, education for the efficient development of arbitrators (compulsory education, continuing education, specialized education, seminars, etc.); second, the effectiveness of arbitration in resolving construction disputes (hearing methods, composition of the tribunal, and speed); and third, the flexibility and diversity of arbitration solutions (the practical problems of methodologies such as mediation-arbitration), which need to be examined against arbitration laws, rules, and guidelines. Accordingly, this study identifies the problems presented in the preceding literature, diagnoses the defects and shortcomings of the KCAB by drawing on the features and benefits of arbitration systems operated by international arbitral institutions, and derives empirical results on arbitrators through a perception survey. The proposed improvements are as follows. First, for an optimal combination of hearing and judgment that improves the speed of settling construction disputes: (1) improving the composition of the tribunal according to the complexity, specificity, and scale of the case, including responding to the increased role of non-lawyers (specialists and technical experts) and securing technical arbitrators for each specialty in large and special corporate arbitration cases; and (2) improving how the arbitration guidelines are written for each area. Second, introducing an intensive hearing system for hearing efficiency, with institutional improvements: (1) optimizing the hearing procedure for arbitral awards and the resolution of arbitration; and (2) improving the management of technical arbitrators in tribunals, including expanding the hearing work of technical arbitrators (reviewing the introduction of an assistant system within tribunals), improving tribunals' use of alternative appraisers (cost analysis and use of specialized institutions for calculating construction costs), and directly managing technical arbitrators to improve the reliability of appraisals and shorten the appraisal period. Third, improving the expert committee system: (1) creating a non-standing technical committee for special technical affairs (supporting pre-qualification of special cases and coordination between the parties); and (2) expanding the standing committee (adding expert technicians for important, special, and large cases, with pre-consultation, pre-coordination, and mediation-arbitration). In addition, as institutional differentiation to enhance the flexibility and diversity of arbitration: first, offering the options of Med-Arb, Arb-Med, and Arb-Med-Arb; second, revising the arbitration settlement clause under the Act [Article 28-2 (Agreement on Dispute Resolution)] to expand the methods available for resolving disputes by arbitration; third, strengthening the status, role, and activities of expert technical arbitrators under legislation in force since June 28, 2017, such as the Arbitration Industry Promotion Act and its Enforcement Decree; and fourth, increasing the role of expert technical arbitrators by enacting further laws to promote the arbitration industry. In particular, the Arbitration Industry Promotion Act should serve to establish the arbitral institution as an international arbitration agency. This study therefore proposes detailed measures for improvement and differentiation, together with policy, legal, and institutional improvements and legislation.

A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.93-108
    • /
    • 2014
  • To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions evolve into various patterns by exploiting information technology, and to keep up with this evolution, much work has gone into fraud detection methods and advanced application systems that improve the accuracy and ease of detection. As a case study, this paper aims to provide effective fraud detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception policy exists to complement auction-based trading: most agricultural products are traded by auction, but specific products are designated auction-exception products when their total volume is relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing them. However, the policy creates problems of fairness and transparency in transactions that call for fraud detection. To generate fraud detection rules, we analyzed real transaction data from the market from 2008 to 2010, comprising more than 1 million transactions and over 1 billion US dollars in volume. Agricultural transaction data have unique characteristics, such as frequent changes in supply volume and turbulent, time-dependent price changes. Since this was the first attempt to identify fraudulent transactions in this domain, no training data were available for supervised learning, so fraud detection rules were generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal ones. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items; quarterly average unit prices for specific wholesalers are also used. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and normalized Z-values are applied: a transaction's unit price is transformed into a Z-value to calculate its occurrence probability under an approximating normal distribution. A modified Z-value is used instead of the original, because for auction-exception products the number of wholesalers is small, so Z-values are influenced by the outlier fraud transactions themselves. The modified Z-values are called Self-Eliminated Z-scores, because each is calculated excluding the unit price of the specific transaction being checked. To show the usefulness of the proposed approach, a prototype fraud detection system was developed using Delphi. The system consists of five main menus and related submenus: functions to import transaction databases; functions to set fraud detection parameters, by which users can control the number of potential fraud transactions flagged; and execution functions that produce detection results based on those parameters, which can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction-exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by availability; more data on transactions, wholesalers, and producers are needed to detect fraud more accurately. Next, the scope of detection should be extended to fishery products. Different data mining techniques could also be applied; for example, time series analysis is a promising approach. Finally, although outlier transactions were detected based on unit prices, fraud detection rules could also be derived from transaction volumes.
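The Self-Eliminated Z-score idea can be sketched as follows; the unit prices are illustrative, not market data:

```python
import math

def self_eliminated_z(prices, i):
    """Z-score of prices[i] computed against the mean and standard
    deviation of the OTHER observations, so an extreme transaction
    cannot mask itself by inflating the statistics it is judged by."""
    rest = prices[:i] + prices[i + 1:]
    mean = sum(rest) / len(rest)
    var = sum((p - mean) ** 2 for p in rest) / len(rest)
    return (prices[i] - mean) / math.sqrt(var)

# The last transaction's unit price is far from the others.
unit_prices = [100, 102, 98, 101, 250]
z = self_eliminated_z(unit_prices, 4)   # very large: flagged as outlier
```

Had the 250 been included in its own reference statistics, it would have inflated both mean and standard deviation, shrinking its own score, which is exactly the small-sample masking problem the modification addresses.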

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.259-278
    • /
    • 2022
  • With the spread of artificial intelligence (AI), AI-based services are expanding in the financial sector, including service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring. At the same time, problems of reliability and unexpected social controversy are arising from the nature of data-based machine learning. Against this background, this study aims to improve trust in AI-based financial services by proposing a checklist for securing fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, we selected fairness as the subject of study, so that everyone can enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through a literature review, we divided the fairness-related operational process into three areas: data, algorithm, and user. For each area we constructed four detailed considerations for evaluation, resulting in a 12-item checklist. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP), using three groups that together represent financial stakeholders: financial-sector workers, AI-sector workers, and general users. The analysis by stakeholder group identified, from a practical perspective, specific checks such as verifying the feasibility of using training data and non-financial information and monitoring newly inflowing data. Financial consumers in general were found to weight the accuracy of result analysis and bias checks highly. We expect these results to contribute to the design and operation of fair AI-based financial services.
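The AHP weighting step can be sketched with the geometric-mean method, one common way to derive priorities from a pairwise-comparison matrix; the judgments below are illustrative, not the survey's results:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparisons of the study's three areas
# (data, algorithm, user): entry [i, j] says how much more important
# area i is judged than area j on the standard 1-9 AHP scale.
comparisons = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
])

# Geometric mean of each row, normalized to sum to 1, gives the
# priority weight of each area.
geo_means = comparisons.prod(axis=1) ** (1.0 / comparisons.shape[1])
weights = geo_means / geo_means.sum()
```

In the study the same computation is run per stakeholder group, which is why the resulting priorities differ between financial workers, AI workers, and general users.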

Managing Duplicate Memberships of Websites : An Approach of Social Network Analysis (웹사이트 중복회원 관리 : 소셜 네트워크 분석 접근)

  • Kang, Eun-Young;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.153-169
    • /
    • 2011
  • Today, using the Internet is considered essential for establishing corporate marketing strategy. Companies promote their products and services through various on-line marketing activities, such as providing gifts and points to customers in exchange for participating in events, all of which rely on customer membership data. Since companies can use these membership data to enhance their marketing through various analyses, appropriate website membership management can play an important role in increasing the effectiveness of on-line marketing campaigns. Despite the growing interest in proper membership management, however, it has been difficult to identify inappropriate members who can weaken on-line marketing effectiveness. In the on-line environment, customers tend not to reveal themselves as clearly as in the off-line market. Customers with malicious intent can create duplicate IDs by illegally using others' names or faking login information when signing up. Because duplicate members are likely to intercept gifts and points intended for legitimate customers, they can render marketing efforts ineffective. Considering that the number of website members and the related marketing costs are increasing significantly, companies need efficient ways to screen out the troublemakers who hold duplicate memberships. With this motivation, this study proposes an approach for managing duplicate memberships based on social network analysis and verifies its effectiveness using membership data gathered from real websites. A social network is a social structure made up of actors, called nodes, that are tied by one or more specific types of interdependency. Social networks represent the relationships between nodes and show the direction and strength of those relationships.
Various analytical techniques based on these social relationships have been proposed, such as centrality analysis, structural hole analysis, and structural equivalence analysis. Component analysis, one of the social network analysis techniques, deals with the sub-networks that form meaningful groups within the overall connection structure. We propose a method for managing duplicate memberships using component analysis. The procedure is as follows. The first step is to identify the membership attributes to be used for analyzing relationship patterns among memberships, such as ID, telephone number, address, posting time, and IP address. The second step is to compose social matrices based on the identified attributes and aggregate the values of each social matrix into a combined social matrix. The combined social matrix represents how strongly pairs of nodes are connected; when a pair of nodes is strongly connected, those nodes are likely to be duplicate memberships. The combined social matrix is then transformed into a binary matrix of '0' and '1' cell values using a relationship criterion that determines whether a membership is duplicate. The third step is to conduct a component analysis of the combined social matrix in order to identify component nodes and isolated nodes. The fourth step is to count the number of real memberships and calculate the reliability of the website membership based on the component analysis results. The proposed procedure was applied to three real websites operated by a pharmaceutical company. The empirical results showed that the proposed method was superior to the traditional database approach using simple address comparison. In conclusion, this study is expected to shed some light on how social network analysis can enhance reliable on-line marketing performance by efficiently and effectively identifying duplicate website memberships.
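The four-step procedure above can be sketched in code: build a combined social matrix from shared attribute values, binarize it with a relationship criterion, and run component analysis (connected components) so that each component counts as one real member. The member records, the two attributes used, and the threshold value below are hypothetical illustrations, not the paper's data.

```python
# Hypothetical member records keyed by website ID.
members = {
    "id1": {"phone": "010-1111", "ip": "1.2.3.4"},
    "id2": {"phone": "010-1111", "ip": "1.2.3.4"},   # shares phone and IP with id1
    "id3": {"phone": "010-2222", "ip": "5.6.7.8"},
}
ids = list(members)
n = len(ids)

# Step 2: combined social matrix = number of shared attribute values per pair.
combined = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        shared = sum(members[ids[i]][k] == members[ids[j]][k]
                     for k in ("phone", "ip"))
        combined[i][j] = combined[j][i] = shared

# Relationship criterion: pairs sharing at least 2 attributes are linked.
THRESHOLD = 2
binary = [[1 if combined[i][j] >= THRESHOLD else 0 for j in range(n)]
          for i in range(n)]

# Step 3: component analysis via depth-first search over the binary matrix.
def components(adj):
    seen, comps = set(), []
    for start in range(len(adj)):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(ids[v])
            stack.extend(w for w in range(len(adj))
                         if adj[v][w] and w not in seen)
        comps.append(comp)
    return comps

# Step 4: each component (or isolated node) counts as one real member.
groups = components(binary)
real_members = len(groups)
print(groups, real_members)  # → [['id1', 'id2'], ['id3']] 2
```

Here three registered IDs collapse to two real members, since id1 and id2 exceed the relationship criterion; the ratio of real members to registered IDs could then serve as a membership-reliability measure.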