• Title/Summary/Keyword: 시스템 성능 향상 (system performance improvement)

A study on the developmental plan of Alarm Monitoring Service (기계경비의 발전적 대응방안에 관한 연구)

  • Chung, Tae-Hwang;So, Seung-Young
    • Korean Security Journal / no.22 / pp.145-168 / 2010
  • Since Alarm Monitoring Service was introduced in Korea in 1981, the market has grown and is expected to keep growing. Factors such as rising demand for social security, changing safety consciousness, and the increasing number of people who live alone could positively affect the Alarm Monitoring Service industry. As Alarm Monitoring Service comes into wide use, understanding of electronic security services is spreading and consumers' demands are becoming more exacting, so a new developmental plan must be considered to respond actively to these changes. An electronic security system consists of various elements, and every element should perform its role equally well. Alarm Monitoring Service should satisfy consumers' various needs because it is not a necessity good; electronic security devices should also be easy to operate and well designed. To solve the false alarm problem, improvement of detection sensors should be considered first, and new types of sensors that operate on different principles should be developed to replace existing ones. To settle the problems arising from response time, security companies should honestly explain the limits of the Alarm Monitoring System to consumers and ask for their understanding. If consumers can be drawn into security activities through the security agent's explanation, better security service can be provided on a basis of mutual confidence. To reduce response time, the introduction of GIS (Geographic Information System) should be considered rather than GPS (Global Positioning System). Although training programs for security agents are important, benefits for the agents should be considered alongside them. New business models are required to prepare for market stagnation, as is the development of new products that target housing services rather than commercial facility services. To that end, products related to home-network systems and video surveillance systems could be considered, as could new value-added services based on the network between the security company and the consumer.

Improvement of Energy Density in Supercapacitor by Ion Doping Control for Energy Storage System (에너지 저장장치용 슈퍼커패시터 이온 도핑 제어를 통한 에너지 밀도 향상 연구)

  • Park, Byung-jun;Yoo, SeonMi;Yang, SeongEun;Han, SangChul;No, TaeMoo;Lee, Young Hee;Han, YoungHee
    • KEPCO Journal on Electric Power and Energy / v.5 no.3 / pp.209-213 / 2019
  • Recently, demand for high energy density and long cycling stability in energy storage systems has increased for applications such as frequency regulation (F/R) in the power grid. Supercapacitors have long lifetimes and high charge/discharge rates, which makes them very well suited to frequency regulation. They can also complement batteries, reducing battery size and installation requirements: using them in a system can eliminate the frequent short-term replacement that batteries require, saving the resources invested in the upkeep of the whole system and extending battery life over the long run of the power grid. However, the low energy density of supercapacitors is a critical weakness for large-scale grid energy storage, so they are still far from replacing batteries and struggle to meet the demand for high energy density. Today the LIC (Lithium Ion Capacitor) is considered an attractive structure for improving energy density well beyond that of the EDLC (Electric Double Layer Capacitor), because the LIC supports a voltage range up to 3.8 V. However, many aspects of the electrochemical performance of the LIC still need to be examined closely before commercial use. In this study, to improve the capacitance of the LIC, and with it the energy density, we designed a new method of pre-doping the anode electrode. The cathode electrodes were fabricated in a dry room with relative humidity under 0.1%, and a constant electrode thickness over 100 μm was maintained for stable mechanical strength and anode doping. To minimize contact resistance, the fabricated electrode underwent hot compression from room temperature up to 65°C. We designed various pre-doping methods for the LIC structure and analyzed the doping mechanism. Finally, we suggest a new pre-doping method that improves the capacitance and electrochemical stability of the LIC.
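The voltage advantage cited above maps directly onto energy density through the capacitor energy relation E = ½CV². A minimal sketch; the 2.7 V EDLC limit is a typical figure assumed here for comparison, not a number from the abstract.

```python
def capacitor_energy_Wh(capacitance_F, voltage_V):
    """E = 1/2 * C * V^2, converted from joules (W*s) to watt-hours."""
    return 0.5 * capacitance_F * voltage_V ** 2 / 3600.0

# Same capacitance, different voltage windows:
edlc = capacitor_energy_Wh(3000, 2.7)  # ~2.7 V is a typical EDLC limit (assumed)
lic = capacitor_energy_Wh(3000, 3.8)   # 3.8 V upper limit, from the abstract
print(f"LIC / EDLC energy ratio: {lic / edlc:.2f}")  # (3.8/2.7)^2 ≈ 1.98
```

At equal capacitance, the wider voltage window alone roughly doubles the stored energy, which is why the LIC structure is attractive despite its less mature electrochemistry.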

A review on the design requirement of temperature in high-level nuclear waste disposal system: based on bentonite buffer (고준위폐기물처분시스템 설계 제한온도 설정에 관한 기술현황 분석: 벤토나이트 완충재를 중심으로)

  • Kim, Jin-Seop;Cho, Won-Jin;Park, Seunghun;Kim, Geon-Young;Baik, Min-Hoon
    • Journal of Korean Tunnelling and Underground Space Association / v.21 no.5 / pp.587-609 / 2019
  • This paper reviews the short- and long-term stability of bentonite, the favored buffer material in geological repositories for high-level waste, along with alternative buffer design concepts that mitigate the thermal load from the decay heat of spent fuel (SF) and further increase disposal efficiency. It is generally reported that irreversible changes in structure, hydraulic behavior, and swelling capacity occur due to temperature increase and vapor flow between 150 and 250°C. Provided that the maximum temperature of the bentonite stays below 150°C, however, the effects of temperature on its material, structural, and mineralogical stability appear to be minor. The maximum temperature in the disposal system constrains the amount of waste that can be disposed per unit area, and it is therefore an important design parameter influencing the availability of a disposal site. Thus, it is necessary to identify the effects of high temperature on buffer performance and to allow for a thermal constraint above 100°C. In addition, high-performance EBS (Engineered Barrier System) concepts, such as a composite bentonite buffer mixed with graphite or silica, or a multi-layered buffer (i.e., a highly thermally conductive layer or an insulating layer), should be considered to enhance disposal efficiency, in parallel with the development of a multilayer repository. This will contribute to increasing reliability and securing public acceptance of high-level waste disposal.

Improvement of Radar Rainfall Estimation Using Radar Reflectivity Data from the Hybrid Lowest Elevation Angles (혼합 최저고도각 반사도 자료를 이용한 레이더 강우추정 정확도 향상)

  • Lyu, Geunsu;Jung, Sung-Hwa;Nam, Kyung-Yeub;Kwon, Soohyun;Lee, Cheong-Ryong;Lee, Gyuwon
    • Journal of the Korean Earth Science Society / v.36 no.1 / pp.109-124 / 2015
  • A novel approach, the hybrid surface rainfall (KNU-HSR) technique developed by Kyungpook National University, was utilized to improve radar rainfall estimation. The KNU-HSR technique estimates radar rainfall on a 2D hybrid surface consisting of the lowest radar bins that are immune to ground clutter contamination and significant beam blockage. Two HSR techniques, static and dynamic HSR, were compared and evaluated in this study. The static HSR technique uses a beam blockage map and a ground clutter map to build the hybrid surface, whereas the dynamic HSR technique additionally applies a quality index map, derived from a fuzzy logic algorithm, for real-time quality control. The performances of the two HSRs were evaluated by correlation coefficient (CORR), total ratio (RATIO), mean bias (BIAS), normalized standard deviation (NSD), and mean relative error (MRE) for ten rain cases. Dynamic HSR (CORR=0.88, BIAS=-0.24 mm/hr, NSD=0.41, MRE=37.6%) outperformed static HSR without correction of the reflectivity calibration bias (CORR=0.87, BIAS=-2.94 mm/hr, NSD=0.76, MRE=58.4%) on all skill scores. The dynamic HSR technique overestimates surface rainfall at near range and underestimates it at far ranges due to beam broadening and the increasing height of the radar beam. In terms of NSD and MRE, dynamic HSR shows the best results regardless of the distance from the radar. Static HSR significantly overestimates surface rainfall at weaker rainfall intensities, whereas the RATIO of dynamic HSR remains close to 1.0 across all ranges of rainfall intensity. After correcting the system bias of reflectivity, the NSD and MRE of dynamic HSR improved by about 20% and 15%, respectively.
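For reference, these skill scores can be computed as follows; the definitions below are the common ones (Pearson correlation, ratio of totals, mean error, error standard deviation normalized by the mean observation, mean relative error) and may differ in detail from the paper's exact formulations.

```python
import numpy as np

def skill_scores(radar, gauge):
    """Verification statistics for radar rainfall estimates against
    gauge observations. Assumes paired samples and positive gauge
    values (MRE is undefined for zero observations)."""
    radar = np.asarray(radar, dtype=float)
    gauge = np.asarray(gauge, dtype=float)
    err = radar - gauge
    return {
        "CORR": float(np.corrcoef(radar, gauge)[0, 1]),      # correlation coefficient
        "RATIO": radar.sum() / gauge.sum(),                  # total ratio
        "BIAS": err.mean(),                                  # mean bias (mm/hr)
        "NSD": err.std() / gauge.mean(),                     # normalized std. deviation
        "MRE": float(np.mean(np.abs(err) / gauge)) * 100.0,  # mean relative error (%)
    }
```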

Application Effect of Heating Energy Saving Package on Venlo Type Glasshouse of Paprika Cultivation (파프리카 재배 벤로형 유리온실에서 난방에너지 절감 패키지 기술 적용효과)

  • Kwon, Jin Kyung;Jeon, Jong Gil;Kim, Seung Hee;Kim, Hyung Gweon
    • Journal of Bio-Environment Control / v.25 no.4 / pp.225-231 / 2016
  • Glasshouse heating package technologies were developed to improve energy use efficiency in winter. The heating package was composed of a ground-water-source heat pump with a heating capacity of 105 kW, an aluminum multi-layer thermal curtain with six layers of different materials, and a root-zone local heater with XL pipes of 20 mm diameter. A Venlo-type glasshouse (461 m²) with the heating package was compared with a control glasshouse of the same type and area, heated by a light-oil boiler and fitted with a conventional non-woven fabric thermal curtain, with respect to inside temperature, relative humidity, crop growth, and heating energy consumption. Tests in the paprika cultivation glasshouses showed that the air temperature inside the glasshouse with the aluminum multi-layer thermal curtain stayed 2.2°C higher than in the control glasshouse during unheated night hours, and the temperature in the bed with root-zone local heating was 4.7°C higher than in the bed without it. The average coefficient of performance (COP) of the ground-water-source heat pump during paprika cultivation was 3.7, and the glasshouse inside temperature was maintained at the heating set-point of 21°C. The heating energy consumption per 10 a (1,000 m²) was 14,071 L of light oil plus 364 kWh of electric power for the control glasshouse, and 35,082 kWh for the glasshouse with the heating package. As a result, the heating cost of the glasshouse with the heating package was 87 percent lower than that of the control glasshouse. Paprika growth in the two glasshouses showed no significant difference.
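The 87-percent saving can be checked from the reported consumptions once unit energy prices are fixed. A minimal sketch; the unit prices below are hypothetical placeholders, not the tariffs used in the study.

```python
# Reported seasonal consumption per 10 a (from the abstract).
control_oil_L = 14_071     # light oil, control glasshouse
control_elec_kWh = 364     # electric power, control glasshouse
package_elec_kWh = 35_082  # electric power, heating-package glasshouse

# Hypothetical unit prices in KRW; the study's actual tariffs will differ.
PRICE_OIL_PER_L = 1_000
PRICE_ELEC_PER_KWH = 50    # e.g., a discounted agricultural rate

control_cost = control_oil_L * PRICE_OIL_PER_L + control_elec_kWh * PRICE_ELEC_PER_KWH
package_cost = package_elec_kWh * PRICE_ELEC_PER_KWH

saving = 1 - package_cost / control_cost
print(f"relative heating-cost saving: {saving:.0%}")  # ~88% with these placeholder prices
```

With plausible price ratios between light oil and off-peak agricultural electricity, the computed saving lands near the 87 percent reported in the abstract.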

An Implementation Method of the Character Recognizer for the Sorting Rate Improvement of an Automatic Postal Envelope Sorting Machine (우편물 자동구분기의 구분율 향상을 위한 문자인식기의 구현 방법)

  • Lim, Kil-Taek;Jeong, Seon-Hwa;Jang, Seung-Ick;Kim, Ho-Yon
    • Journal of Korea Society of Industrial Information Systems / v.12 no.4 / pp.15-24 / 2007
  • The recognition of postal address images is indispensable to the automatic sorting of postal envelopes. The address image recognition process is composed of three steps: address image preprocessing, character recognition, and address interpretation. The character images extracted in the preprocessing step are forwarded to the character recognition step, in which multiple candidate characters with reliability scores are obtained for each extracted character image. Using those candidates and their scores, the final valid address for the input envelope image is obtained in the address interpretation step. The envelope sorting rate depends on the performance of all three steps, among which character recognition is especially important: a good character recognizer produces valid candidates with reliable scores that make address interpretation easy. In this paper, we propose a method of generating character candidates with reliable recognition scores. We utilize the existing MLP (multilayer perceptron) neural network of the address recognition system in current automatic postal envelope sorters as the classifier for each image from the preprocessing step. The MLP is well known to be one of the best classifiers in terms of processing speed and recognition rate; however, false alarms can occur in its recognition results, which makes address interpretation hard. To make address interpretation easier and improve the envelope sorting rate, we propose methods to reestimate the recognition score (confidence) of the existing MLP classifier: generating the statistical recognition properties of the classifier, and combining the MLP with a subspace classifier that acts as a reestimator of the confidence. To confirm the superiority of the proposed method, we used character images from real postal envelopes taken from sorters in the post office. The experimental results show that the proposed method produces high reliability in terms of error and rejection rates for individual characters and non-characters.
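The confidence reestimation can be illustrated with a small sketch. The subspace classifier below is the standard CLAFIC form; the rule blending it with the MLP posterior is a hypothetical illustration, since the abstract does not spell out the paper's actual combination.

```python
import numpy as np

def subspace_similarity(x, class_bases):
    """CLAFIC-style subspace classifier: the similarity of feature
    vector x to a class is the squared norm of its projection onto
    that class's principal subspace (U holds orthonormal basis
    vectors as columns)."""
    x = x / np.linalg.norm(x)
    return np.array([np.sum((U.T @ x) ** 2) for U in class_bases])

def reestimated_confidence(mlp_probs, x, class_bases, alpha=0.5):
    """Hypothetical combination: a geometric blend of the MLP posterior
    and the normalized subspace similarity. alpha weights the two
    sources; the paper's actual reestimation rule may differ."""
    sim = subspace_similarity(x, class_bases)
    sim = sim / sim.sum()
    conf = mlp_probs ** alpha * sim ** (1 - alpha)
    return conf / conf.sum()
```

A character whose MLP score is high but which lies far from every class subspace then receives a lowered confidence, which is exactly the false-alarm case that hampers address interpretation.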

EFFECT OF BENZALKONIUM CHLORIDE ON DENTIN BONDING WITH NTG-GMA/BPDM AND DSDM SYSTEM (Benzalkonium Chloride가 NTG-GMA/BPDM계 및 DSDM계 상아질접착제의 접착성능에 미치는 영향)

  • Shin, Il;Park, Jin-Hoon
    • Restorative Dentistry and Endodontics / v.20 no.2 / pp.699-720 / 1995
  • This study evaluated the effect of benzalkonium chloride solution, used as a wetting agent instead of water, on dentin bonding with an NTG-GMA/BPDM system (All-bond 2, Bisco) and a DSDM system (Aelitebond, Bisco). Benzalkonium chloride solution is a chemical disinfectant widely used in medical and dental clinics for preoperative preparation of skin and mucosa due to its strong cationic surface-active detergent effect. Eighty freshly extracted bovine lower incisors were ground labially to expose a flat dentin surface, acid-etched with 10% phosphoric acid for 15 seconds, rinsed with water, and air-dried for 10 seconds. The specimens were randomly divided into 8 groups of 10 teeth. The control-group specimens were remoistened with water, and the experimental-group specimens were remoistened with 0.1%, 0.5%, or 1.0% benzalkonium chloride solution, respectively. Aelitefil composite resin was then bonded to the pretreated surfaces using either the All-bond 2 or the Aelitebond dentin bonding system in equal numbers of specimens. The bonded specimens were stored in 37°C distilled water for 24 hours; then the tensile bond strength was measured, the mode of failure was observed, the fractured dentin surfaces were examined under scanning electron microscopy, and FT-IR spectroscopy was performed to investigate the changes in dentin surfaces pretreated with benzalkonium chloride solution followed by each bonding system's primer. The results were as follows. With the NTG-GMA/BPDM bonding agent (All-bond 2), higher tensile bond strength than the water-remoistened control was seen only in the group remoistened with 0.1% benzalkonium chloride solution (p<0.05). With the DSDM bonding agent (Aelitebond), no significant differences were seen between the control and any of the experimental groups (p<0.05). Higher tensile bond strengths were seen with the NTG-GMA/BPDM agent than with the DSDM agent regardless of remoistening with benzalkonium chloride solution. In the failure-mode examination, cohesive and mixed failures predominated with the NTG-GMA/BPDM agent, while adhesive failure predominated with the DSDM agent. On SEM examination of the fractured surfaces, no differences were found between primed dentin surfaces with and without benzalkonium chloride remoistening. FT-IR spectroscopy of the control and experimental groups revealed somewhat higher absorbance, attributed to the primers binding to the dentin surface, in the group pretreated with 0.1% benzalkonium chloride solution than in the water-remoistened control group.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that a machine could not beat a human at Go because, unlike chess, the number of possible games exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, all widely used in deep learning, with MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than the overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm reads adjacent values and recognizes local features, but in business data the proximity of fields is usually meaningless because the fields are independent. We therefore set the CNN filter size to the number of fields, so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of each field's position. For the dropout technique, we set neurons to drop with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. Several findings emerged. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNNs performed well on a binary classification problem to which they have rarely been applied, as well as in the fields where their effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to its performance improvement. From these results, we confirm that some deep learning algorithms can be applied to solve business binary classification problems.
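A minimal Keras sketch of the CNN-plus-dropout configuration described above (filter width equal to the number of fields, one added hidden layer, dropout probability 0.5); the field count, layer widths, and optimizer settings are illustrative assumptions, not the paper's exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_fields = 16  # number of input variables (illustrative)

model = keras.Sequential([
    keras.Input(shape=(n_fields, 1)),
    # Filter width = number of fields, so each filter sees the whole
    # record at once rather than a local window of adjacent fields.
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),    # added hidden layer
    layers.Dropout(0.5),                    # dropout probability 0.5
    layers.Dense(1, activation="sigmoid"),  # binary target
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.Precision(), keras.metrics.Recall()])
```

Precision and recall are tracked so that the F1 score used in the paper can be computed from them after evaluation.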

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming ever more important. In this flood of information, attempts are being made to reflect the user's intention in search results better, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are also focusing on developing knowledge-based technologies, including search engines that provide users with satisfaction and convenience. Finance in particular is one of the fields where text data analysis is expected to be useful, because new information is constantly generated and the earlier information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas like finance where the information flow is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of searches for stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it offers a practical and simple automatic knowledge extraction method. Second, it presents the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, were used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, were designated as the training set and the other 45% as the testing set. Before constructing the model, all reports in the training set were classified by stock and their entities were extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency were selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock was trained. Thus, when a new entity from the testing set appears, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented models, we measured prediction power, and whether the score functions were well constructed, by calculating the hit ratio over all reports in the testing set. The presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of entities, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits and complements remain: notably, the model's especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
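For reference, the neural tensor network score function has the standard form g(e1, e2) = uᵀ tanh(e1ᵀ W[1:k] e2 + V[e1; e2] + b); the abstract trains one such function per stock over one-hot entity vectors. A minimal NumPy sketch, with dimensions and initialization as illustrative assumptions.

```python
import numpy as np

class NTNScore:
    """One neural tensor network score function (standard form:
    u^T tanh(e1^T W[1:k] e2 + V[e1; e2] + b)). The study trains one
    instance per stock; dim and k here are illustrative."""
    def __init__(self, dim, k=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(k, dim, dim))  # bilinear tensor slices
        self.V = rng.normal(scale=0.1, size=(k, 2 * dim))   # linear term
        self.b = np.zeros(k)
        self.u = rng.normal(scale=0.1, size=k)

    def __call__(self, e1, e2):
        bilinear = np.einsum("i,kij,j->k", e1, self.W, e2)
        linear = self.V @ np.concatenate([e1, e2])
        return float(self.u @ np.tanh(bilinear + linear + self.b))

# Prediction as described in the abstract: score a new entity vector with
# every stock's trained function and take the argmax.
# scores = {stock: f(entity_vec, stock_vec) for stock, f in functions.items()}
```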

Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.19-38 / 2017
  • The collaborative filtering (CF) algorithm has been popular for implementing recommender systems, and many prior studies have sought to improve its accuracy. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances conventional CF with additional information. In this research, we propose a new hybrid recommender system that fuses CF with the results of social network analysis on trust and distrust relationship networks among users to enhance prediction accuracy. Our proposed algorithm is based on memory-based CF, but when calculating the similarity between users it considers not only the correlation of the users' numeric rating patterns but also the users' in-degree centrality values derived from the trust and distrust relationship networks. Specifically, it amplifies the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the trust relationship network, and attenuates it when the neighbor has higher in-degree centrality in the distrust relationship network. The algorithm considers four types of user relationships in total: direct trust, indirect trust, direct distrust, and indirect distrust. It uses four adjusting coefficients, which set the level of amplification or attenuation for the in-degree centrality values derived from the direct and indirect trust and distrust networks. To determine the optimal adjusting coefficients, genetic algorithms (GA) were adopted. Accordingly, we named the proposed algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate its performance, we used a real-world data set, the 'Extended Epinions dataset' provided by trustlet.org, which contains user responses (rating scores and reviews) after purchasing specific items (e.g., cars, movies, music, books) as well as trust/distrust relationship information indicating whom each user trusts or distrusts. The experimental system was developed mainly in Microsoft Visual Basic for Applications (VBA); we also used UCINET 6 to calculate the in-degree centrality of the trust/distrust networks and Palisade Software's Evolver, a commercial implementation of genetic algorithms. To examine the effectiveness of the proposed system more precisely, we adopted two comparison models. The first is conventional CF, which uses only users' explicit numeric ratings when calculating similarities and does not consider trust/distrust relationships at all. The second is SNACF (Social Network Analysis-based CF), which differs from SNACF-GA in that it considers only direct trust/distrust relationships and does not use GA optimization. The performances were evaluated by average MAE (mean absolute error). The experiments showed that the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust were 0, 1.4287, 1.5, and 0.4615, respectively, implying that distrust relationships between users are more important than trust relationships in recommender systems. In terms of recommendation accuracy, SNACF-GA (avg. MAE = 0.111943), which reflects both direct and indirect trust/distrust relationship information, greatly outperformed conventional CF (avg. MAE = 0.112638) and also showed better accuracy than SNACF (avg. MAE = 0.112209). To confirm whether these differences are statistically significant, we applied a paired-samples t-test: the difference between SNACF-GA and conventional CF was significant at the 1% level, and the difference between SNACF-GA and SNACF at the 5% level. Our study found that trust/distrust relationships can be important information for improving recommendation algorithms. In particular, distrust relationship information had a greater impact on the performance improvement of CF, implying that we should pay more attention to distrust (negative) relationships than to trust (positive) ones when tracking and managing social relationships between users.
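A minimal sketch of the amplification/attenuation idea described above; the abstract does not give the exact adjustment formula, so the multiplicative form and the helper names here are illustrative assumptions.

```python
import numpy as np

def pearson(u, v):
    """Similarity over co-rated items, as in conventional memory-based CF
    (rating vectors use NaN for unrated items)."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if mask.sum() < 2:
        return 0.0
    return float(np.corrcoef(u[mask], v[mask])[0, 1])

def adjusted_similarity(u, v, trust_cent, distrust_cent, coeffs):
    """Amplify the rating-based similarity by the neighbor's in-degree
    centrality in the trust networks and attenuate it by centrality in
    the distrust networks. coeffs = (direct_trust, indirect_trust,
    direct_distrust, indirect_distrust); a multiplicative form is
    assumed. Centralities are the neighbor's normalized in-degrees."""
    c_dt, c_it, c_dd, c_id = coeffs
    boost = 1 + c_dt * trust_cent["direct"] + c_it * trust_cent["indirect"]
    damp = 1 + c_dd * distrust_cent["direct"] + c_id * distrust_cent["indirect"]
    return pearson(u, v) * boost / damp

# GA-optimized coefficients reported in the abstract:
coeffs = (0.0, 1.4287, 1.5, 0.4615)
```

With the reported coefficients, direct trust drops out entirely (coefficient 0) while direct distrust carries the largest weight, matching the abstract's finding that distrust information matters more.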