• Title/Summary/Keyword: Data Memory


GIS Optimization for Bigdata Analysis and AI Applying (Bigdata 분석과 인공지능 적용한 GIS 최적화 연구)

  • Kwak, Eun-young;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.171-173
    • /
    • 2022
  • Fourth industrial revolution technologies are making people's lives more efficient. GIS-based Internet services such as traffic and travel-time information help people reach their destinations more quickly. The National Geographic Information Service (NGIS) and each local government are building base data to investigate SOC accessibility for optimal-location analysis. To construct the shortest route, accessibility from the starting point to the arrival point is analyzed. Applying a road network map together with the starting and ending points, the shortest distance and optimal accessibility are calculated using Dijkstra's algorithm. Producing analysis information from multiple starting points to multiple destinations required more than three steps of manual analysis to decide the optimal location, within about 0.1% error. The many-to-many (M×N) calculation took considerable time and required a computer with at least 32 GB of memory. If an optimal proximity analysis service could be provided more flexibly at any desired location, it would be possible to efficiently analyze locations with poor access to start-up businesses and living facilities, as well as facility siting for the public.
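
Since the abstract centers on Dijkstra's algorithm for optimal accessibility, here is a minimal sketch of that shortest-path step on a toy road network; the node names and edge weights are illustrative assumptions, and the paper's NGIS data and many-to-many (M×N) batching are not reproduced.

```python
import heapq

def dijkstra(graph, start):
    """Compute shortest travel costs from `start` to every reachable node.

    `graph` maps each node to a list of (neighbor, edge_cost) pairs,
    e.g. road segments weighted by length or travel time.
    """
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already improved
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical toy road network: edge weights are travel times in minutes.
road_network = {
    "A": [("B", 4), ("C", 2)],
    "C": [("B", 1), ("D", 8)],
    "B": [("D", 5)],
    "D": [],
}
print(dijkstra(road_network, "A"))  # e.g. cost to "D" is 8.0 via A -> C -> B -> D
```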


Adverse Effects on EEGs and Bio-Signals Coupling on Improving Machine Learning-Based Classification Performances

  • SuJin Bak
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.133-153
    • /
    • 2023
  • In this paper, we propose a novel approach to investigating brain-signal measurement technology using Electroencephalography (EEG). Traditionally, researchers have combined EEG signals with bio-signals (BSs) to enhance the classification performance of emotional states. Our objective was to explore the synergistic effects of coupling EEG and BSs, and determine whether the combination of EEG+BS improves the classification accuracy of emotional states compared to using EEG alone or combining EEG with pseudo-random signals (PS) generated arbitrarily by random generators. Employing four feature extraction methods, we examined four combinations: EEG alone, EEG+BS, EEG+BS+PS, and EEG+PS, utilizing data from two widely-used open datasets. Emotional states (task versus rest states) were classified using Support Vector Machine (SVM) and Long Short-Term Memory (LSTM) classifiers. Our results revealed that when using SVM-FFT, which achieved the highest accuracy, the average error rates of EEG+BS were 4.7% and 6.5% higher than those of EEG+PS and EEG alone, respectively. We also conducted a thorough analysis of EEG+BS by combining numerous PSs. The error rate of EEG+BS+PS displayed a V-shaped curve, initially decreasing due to the deep double descent phenomenon, followed by an increase attributed to the curse of dimensionality. Consequently, our findings suggest that the combination of EEG+BS may not always yield promising classification performance.
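
As a rough illustration of the comparison described above (EEG features alone versus EEG concatenated with pseudo-random signals, classified with an SVM on FFT features), the sketch below runs on synthetic data; the array shapes, single-channel setup, and random-data generation are assumptions, not the paper's open datasets or exact pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in signals (hypothetical shapes; not the datasets used in the paper).
n_trials, n_samples = 200, 512
labels = rng.integers(0, 2, size=n_trials)         # task vs. rest
eeg = rng.normal(size=(n_trials, n_samples))       # one EEG channel per trial
pseudo = rng.normal(size=(n_trials, n_samples))    # pseudo-random signal (PS)

def fft_features(x):
    """FFT-based features: magnitude spectrum of each trial."""
    return np.abs(np.fft.rfft(x, axis=1))

feat_eeg = fft_features(eeg)
feat_ps = fft_features(pseudo)

# Compare EEG-only features against EEG features concatenated with PS features.
for name, feats in [("EEG", feat_eeg), ("EEG+PS", np.hstack([feat_eeg, feat_ps]))]:
    acc = cross_val_score(SVC(kernel="rbf"), feats, labels, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")
```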

Added Value of Chemical Exchange-Dependent Saturation Transfer MRI for the Diagnosis of Dementia

  • Jang-Hoon Oh;Bo Guem Choi;Hak Young Rhee;Jin San Lee;Kyung Mi Lee;Soonchan Park;Ah Rang Cho;Chang-Woo Ryu;Key Chung Park;Eui Jong Kim;Geon-Ho Jahng
    • Korean Journal of Radiology
    • /
    • v.22 no.5
    • /
    • pp.770-781
    • /
    • 2021
  • Objective: Chemical exchange-dependent saturation transfer (CEST) MRI is sensitive for detecting solid-like proteins and may detect changes in the levels of mobile proteins and peptides in tissues. The objective of this study was to evaluate the characteristics of chemical exchange proton pools using the CEST MRI technique in patients with dementia. Materials and Methods: Our institutional review board approved this cross-sectional prospective study and informed consent was obtained from all participants. This study included 41 subjects (19 with dementia and 22 without dementia). Complete CEST data of the brain were obtained using a three-dimensional gradient and spin-echo sequence to map CEST indices, such as amide, amine, hydroxyl, and magnetization transfer ratio asymmetry (MTRasym) values, using six-pool Lorentzian fitting. Statistical analyses of CEST indices were performed to evaluate group comparisons, their correlations with gray matter volume (GMV) and Mini-Mental State Examination (MMSE) scores, and receiver operating characteristic (ROC) curves. Results: Amine signals (0.029 for non-dementia, 0.046 for dementia, p = 0.011 at the hippocampus) and MTRasym values at 3 ppm (0.748 for non-dementia, 1.138 for dementia, p = 0.022 at the hippocampus) and 3.5 ppm (0.463 for non-dementia, 0.875 for dementia, p = 0.029 at the hippocampus) were significantly higher in the dementia group than in the non-dementia group. Most CEST indices were not significantly correlated with GMV; however, except for amide, most indices were significantly correlated with the MMSE scores. The classification power of most CEST indices was lower than that of GMV, but adding one of the CEST indices to GMV improved the classification between the subject groups. The largest improvement was seen in the MTRasym values at 2 ppm in the anterior cingulate (area under the ROC curve = 0.981), with a sensitivity of 100% and a specificity of 90.91%. Conclusion: CEST MRI potentially allows noninvasive imaging of alterations in the Alzheimer's disease brain without injecting isotopes for monitoring different disease states and may provide a new imaging biomarker in the future.
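
The MTRasym index reported above is, in essence, the signal difference between symmetric saturation offsets of a normalized Z-spectrum. Below is a minimal sketch of that computation on a synthetic Z-spectrum; the spectrum shape and offsets are assumptions for illustration, and the paper's six-pool Lorentzian fitting is not reproduced.

```python
import numpy as np

def mtr_asym(offsets_ppm, z_spectrum, delta_ppm):
    """MTRasym at a given offset: Z(-delta) - Z(+delta),
    where Z is the saturated signal already normalized by S0.
    Values are linearly interpolated from the sampled Z-spectrum.
    """
    z_neg = np.interp(-delta_ppm, offsets_ppm, z_spectrum)
    z_pos = np.interp(+delta_ppm, offsets_ppm, z_spectrum)
    return z_neg - z_pos

# Hypothetical Z-spectrum sampled from -5 to +5 ppm (normalized signal S/S0).
offsets = np.linspace(-5, 5, 101)
z = 1 - 0.8 * np.exp(-(offsets / 1.2) ** 2)         # direct water saturation dip
z -= 0.05 * np.exp(-((offsets - 3.5) / 0.5) ** 2)   # small amide dip near +3.5 ppm

for ppm in (2.0, 3.0, 3.5):
    print(f"MTRasym at {ppm} ppm: {mtr_asym(offsets, z, ppm):.3f}")
```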

Deletion Timing of Cic Alleles during Hematopoiesis Determines the Degree of Peripheral CD4+ T Cell Activation and Proliferation

  • Guk-Yeol Park;Gil-Woo Lee;Soeun Kim;Hyebeen Hong;Jong Seok Park;Jae-Ho Cho;Yoontae Lee
    • IMMUNE NETWORK
    • /
    • v.20 no.5
    • /
    • pp.43.1-43.11
    • /
    • 2020
  • Capicua (CIC) is a transcriptional repressor that regulates several developmental processes. CIC deficiency results in lymphoproliferative autoimmunity accompanied by expansion of CD44hiCD62Llo effector/memory and follicular Th cell populations. Deletion of Cic alleles in hematopoietic stem cells (Vav1-Cre-mediated knockout of Cic) causes more severe autoimmunity than that caused by the knockout of Cic in CD4+CD8+ double positive thymocytes (Cd4-Cre-mediated knockout of Cic). In this study, we compared splenic CD4+ T cell activation and proliferation between whole immune cell-specific Cic-null (Cicf/f;Vav1-Cre) and T cell-specific Cic-null (Cicf/f;Cd4-Cre) mice. Hyperactivation and hyperproliferation of CD4+ T cells were more apparent in Cicf/f;Vav1-Cre mice than in Cicf/f;Cd4-Cre mice. Cicf/f;Vav1-Cre CD4+ T cells more rapidly proliferated and secreted larger amounts of IL-2 upon TCR stimulation than did Cicf/f;Cd4-Cre CD4+ T cells, while the TCR stimulation-induced activation of the TCR signaling cascade and calcium flux were comparable between them. Mixed wild-type and Cicf/f;Vav1-Cre bone marrow chimeras also exhibited more apparent hyperactivation and hyperproliferation of Cic-deficient CD4+ T cells than did mixed wild-type and Cicf/f;Cd4-Cre bone marrow chimeras. Taken together, our data demonstrate that CIC deficiency at the beginning of T cell development endows peripheral CD4+ T cells with enhanced T cell activation and proliferative capability.

Effects of GV1001 on Language Dysfunction in Patients With Moderate-to-Severe Alzheimer's Disease: Post Hoc Analysis of Severe Impairment Battery Subscales

  • Hyuk Sung Kwon;Seong-Ho Koh;Seong Hye Choi;Jee Hyang Jeong;Hae Ri Na;Chan Nyoung Lee;YoungSoon Yang;Ae Young Lee;Jae-Hong Lee;Kyung Won Park;Hyun Jeong Han;Byeong C. Kim;Jinse Park;Jee-Young Lee;Kyu-Yong Lee;Sangjae Kim
    • Dementia and Neurocognitive Disorders
    • /
    • v.22 no.3
    • /
    • pp.100-108
    • /
    • 2023
  • Background and Purpose: The efficacy and safety of GV1001 have been demonstrated in patients with moderate-to-severe Alzheimer's disease (AD). In this study, we aimed to further demonstrate the effectiveness of GV1001 using subscales of the Severe Impairment Battery (SIB), which is a validated measure to assess cognitive function in patients with moderate-to-severe AD. Methods: We performed a post hoc analysis of data from a 6-month, multicenter, phase 2, randomized, double-blind, placebo-controlled trial of GV1001 (ClinicalTrials.gov, NCT03184467). Patients were randomized to receive either GV1001 or a placebo for 24 weeks. In the current study, nine SIB subscales (social interaction, memory, orientation, language, attention, praxis, visuospatial ability, construction, and orientation to name) were compared between the treatment (GV1001 1.12 mg) and placebo groups at weeks 12 and 24. The safety endpoints for these patients were also determined based on adverse events. Results: In addition to the considerable beneficial effect of GV1001 on the SIB total score, GV1001 1.12 mg showed the most significant effect on language function at 24 weeks compared to placebo in both the full analysis set (FAS) and per-protocol set (PPS) (p=0.017 and p=0.011, respectively). The rate of adverse events did not differ significantly between the two groups. Conclusions: Patients with moderate-to-severe AD receiving GV1001 had greater language benefits than those receiving placebo, as measured using the SIB language subscale.
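
As a generic illustration of the kind of between-group subscale comparison reported above, the sketch below applies a two-sample (Welch's) t-test to synthetic change scores; the group sizes, effect sizes, and choice of test are assumptions for illustration only and do not reflect the trial's actual statistical plan or data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for 24-week change in an SIB subscale score
# (hypothetical arm sizes and effect; the trial's data are not reproduced here).
gv1001 = rng.normal(loc=0.5, scale=3.0, size=45)     # treatment arm change scores
placebo = rng.normal(loc=-1.5, scale=3.0, size=45)   # placebo arm change scores

t, p = stats.ttest_ind(gv1001, placebo, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```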

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices has been to achieve ultra-high aspect ratio contact (UHARC) profiles without anomalous behaviors such as sidewall bowing and profile twisting. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still suffer from formidable challenges such as tight limits on sidewall bowing and controlling randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies, including novel plasma chemistries and plasma sources, to overcome these process limitations. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for a silicon dioxide etching process in inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray photoelectron spectroscopy. To measure plasma parameters, a self-cleaned RF Langmuir probe was used to cope with the polymer-depositing environment on the probe tip and was cross-checked against the cutoff probe, which is known to be a precise diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we proposed a phenomenological and realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results showed good agreement with the experimental data. Building on these plasma-surface reaction studies, we developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. In the case of ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required to treat this ballistic transport realistically. To address this issue, the related computational codes were efficiently parallelized for GPU (graphics processing unit) computing, so that the total computation time was improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model. Realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.
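
To make the level-set idea behind the topography simulator concrete, here is a minimal 2-D sketch of etch-front propagation with a first-order upwind (Godunov) scheme; the geometry, etch-rate field, and grid are hypothetical, and the paper's multi-layer 3-D level set, ballistic transport modules, and GPU kernels are not shown.

```python
import numpy as np

def evolve_level_set(phi, speed, dx, dt, steps):
    """Advance a 2-D level-set function phi under an outward normal speed field
    (first-order Godunov upwind scheme); phi < 0 marks the etched/vacuum region.
    """
    for _ in range(steps):
        dpx = (np.roll(phi, -1, axis=0) - phi) / dx; dpx[-1, :] = 0.0  # forward x
        dmx = (phi - np.roll(phi, 1, axis=0)) / dx;  dmx[0, :] = 0.0   # backward x
        dpy = (np.roll(phi, -1, axis=1) - phi) / dx; dpy[:, -1] = 0.0  # forward y
        dmy = (phi - np.roll(phi, 1, axis=1)) / dx;  dmy[:, 0] = 0.0   # backward y
        # Upwind gradient magnitude for a front moving with non-negative speed.
        grad = np.sqrt(np.maximum(dmx, 0) ** 2 + np.minimum(dpx, 0) ** 2 +
                       np.maximum(dmy, 0) ** 2 + np.minimum(dpy, 0) ** 2)
        phi = phi - dt * speed * grad
    return phi

# Hypothetical 2-D cross-section: y increases downward into the film,
# initial surface at y = 20, fast etching only inside the mask opening.
n, dx = 128, 1.0
x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
phi = y - 20.0
speed = np.where((x > 48) & (x < 80), 1.0, 0.05)
phi = evolve_level_set(phi, speed, dx, dt=0.4, steps=100)
depth = (phi < 0).sum(axis=1).max() * dx - 20.0
print(f"etched depth below the initial surface: about {depth:.1f} grid units")
```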


The Effect of Consumers' Value Motives on the Perception of Blog Reviews Credibility: the Moderation Effect of Tie Strength (소비자의 가치 추구 동인이 블로그 리뷰의 신뢰성 지각에 미치는 영향: 유대강도에 따른 조절효과를 중심으로)

  • Chu, Wujin;Roh, Min Jung
    • Asia Marketing Journal
    • /
    • v.13 no.4
    • /
    • pp.159-189
    • /
    • 2012
  • What attracts consumers to bloggers' reviews? Consumers would be attracted both by the bloggers' expertise (i.e., knowledge and experience) and by their unbiased manner of delivering information. Expertise and trustworthiness are both virtues of information sources, particularly when there is uncertainty in decision-making. Noting this point, we postulate that consumers' motives determine the relative weights they place on expertise and trustworthiness. In addition, our hypotheses assume that tie strength moderates consumers' expectations of bloggers' expertise and trustworthiness: the expectation of expertise is enhanced for the power-blog user group (weak ties), and the expectation of trustworthiness is elevated for the personal-blog user group (strong ties). Finally, we theorize that the effect of credibility on willingness to accept a review is moderated by tie strength; the predictive power of credibility is more prominent for the personal-blog user group than for the power-blog user group. To support these assumptions, we conducted a field survey with blog users, collecting retrospective self-report data. The "gourmet shop" was chosen as the target product category, and the obtained data were analyzed by structural equation modeling. Findings from these data provide empirical support for our theoretical predictions. First, we found that the purposive motive aimed at satisfying instrumental information needs increases reliance on bloggers' expertise, whereas the interpersonal connectivity value of alleviating loneliness elevates reliance on bloggers' trustworthiness. Second, expertise-based credibility is more prominent for power-blog user groups than for personal-blog user groups. While strong ties attract consumers with trustworthiness based on close emotional bonds, weak ties gain consumers' attention with new, non-redundant information (Levin & Cross, 2004). Thus, when the existing knowledge system used in strong ties does not work as smoothly for addressing an impending problem, the weak-tie source can be utilized as a handy reference. Accordingly, we can anticipate that power bloggers secure credibility by virtue of their expertise, while personal bloggers trade on their trustworthiness. Our analysis demonstrates that power bloggers appeal more strongly to consumers than do personal bloggers in the area of expertise-based credibility. Finally, the effect of review credibility on willingness to accept a review is higher for the personal-blog user group than for the power-blog user group. The inference that review credibility is a potent predictor of willingness to accept a review is grounded in the analogy that attitude is an effective indicator of purchase intention. However, if memory about established attitudes is blocked, the predictive power of attitude on purchase intention is considerably diminished. Likewise, the effect of credibility on willingness to accept a review can be affected by certain moderators. Inspired by this analogy, we introduced tie strength as a possible moderator and demonstrated that tie strength moderated the effect of credibility on willingness to accept a review. Previously, Levin and Cross (2004) showed that credibility mediates the relationship between strong ties and receipt of knowledge, but this mediation is not observed for weak ties, where a direct path is activated. Thus, the predictive power of credibility on behavioral intention (that is, willingness to accept a review) is expected to be higher for strong ties.


A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and taking over the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. It avoids investment risk structurally, so it is stable in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method, an optimized gradient boosting model designed to be highly efficient and flexible. It not only handles billions of examples in limited-memory environments but also learns much faster than traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model and the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the process of covariance estimation. Because the optimized asset allocation model estimates the proportion of investments based on historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large sample because of the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment showed improvement of portfolio performance by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. However, this study not only takes advantage of traditional asset allocation models but also supplements the limitations of traditional methods and increases stability by predicting the risks of assets with the latest algorithm. There are various studies on parametric estimation methods to reduce estimation errors in portfolio optimization, and we also suggest a new method to reduce the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
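
A minimal sketch of the core idea, combining per-asset volatility predicted by XGBoost with an equal-risk-contribution (risk parity) weight solver, is shown below on synthetic returns; the feature set (lagged rolling volatility), window lengths, and optimizer settings are illustrative assumptions rather than the paper's actual pipeline.

```python
import numpy as np
from scipy.optimize import minimize
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_days, n_assets = 1500, 5
returns = rng.normal(0, 0.01, size=(n_days, n_assets))   # synthetic daily returns

# 1) Predict next-window volatility per asset from lagged rolling volatility
#    (a simplified stand-in for the paper's feature set).
window = 20
roll_vol = np.array([returns[i - window:i].std(axis=0) for i in range(window, n_days)])
X, y = roll_vol[:-1], roll_vol[1:]                        # today's vol -> next vol
models = [XGBRegressor(n_estimators=100, max_depth=3).fit(X, y[:, j])
          for j in range(n_assets)]
pred_vol = np.array([m.predict(X[-1:])[0] for m in models])

# 2) Forward-looking covariance: recent correlations scaled by predicted volatilities.
corr = np.corrcoef(returns[-250:], rowvar=False)
cov = corr * np.outer(pred_vol, pred_vol)

# 3) Risk parity: find long-only weights with (nearly) equal risk contributions.
def risk_contrib_spread(w, cov):
    port_vol = np.sqrt(w @ cov @ w)
    contrib = w * (cov @ w) / port_vol                    # per-asset risk contributions
    return ((contrib - contrib.mean()) ** 2).sum()

w0 = np.full(n_assets, 1 / n_assets)
res = minimize(risk_contrib_spread, w0, args=(cov,),
               bounds=[(0, 1)] * n_assets,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("risk-parity weights:", np.round(res.x, 3))
```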

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia Pacific Journal of Information Systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign a higher rank to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniform application of the voting notion of PageRank and HITS based on links would be unreasonable for a folksonomy. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is expressed in the active or passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to the way humans evaluate things, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can also check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collections. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should carry more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms: while the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with ours. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
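
As a simplified illustration of the mutual-reinforcement and time-weight ideas (not the paper's class-oriented mutual-interaction model), the sketch below iterates user and resource scores over a toy bookmark graph whose edge weights decay with bookmark age; the edges, half-life, and normalization are assumptions.

```python
import numpy as np

# Hypothetical toy folksonomy: each edge is (user, resource, days_ago).
edges = [("u1", "r1", 5), ("u1", "r2", 40), ("u2", "r1", 3),
         ("u2", "r3", 1), ("u3", "r2", 60), ("u3", "r3", 2)]
users = sorted({u for u, _, _ in edges})
resources = sorted({r for _, r, _ in edges})
ui = {u: i for i, u in enumerate(users)}
ri = {r: i for i, r in enumerate(resources)}

# Time-decayed adjacency: newer bookmarks carry more weight (assumed 30-day half-life).
W = np.zeros((len(users), len(resources)))
for u, r, days in edges:
    W[ui[u], ri[r]] = 0.5 ** (days / 30.0)

# Mutual reinforcement: a resource is good if collected by good users, and vice versa.
user_score = np.ones(len(users))
res_score = np.ones(len(resources))
for _ in range(50):
    res_score = W.T @ user_score
    user_score = W @ res_score
    res_score /= np.linalg.norm(res_score)
    user_score /= np.linalg.norm(user_score)

for r, s in sorted(zip(resources, res_score), key=lambda x: -x[1]):
    print(f"{r}: {s:.3f}")
```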

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.219-240
    • /
    • 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. As good examples, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify their opinions of a target product. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words is likely to differ across domains. For example, the sentiment word 'sad' indicates a negative meaning in most fields, but not necessarily in the movie domain. In order to perform accurate sentiment analysis, we need to build a sentiment dictionary for the given domain. However, building such a sentiment lexicon is time-consuming, and many sentiment vocabularies are missed without the use of a general-purpose sentiment lexicon. To address this problem, several studies have constructed sentiment lexicons suitable for specific domains based on 'OPEN HANGUL' and 'SentiWordNet', which are general-purpose sentiment lexicons. However, OPEN HANGUL is no longer in service, and SentiWordNet does not work well because of language differences in the process of converting Korean words into English words. Thus, there are restrictions on the use of such general-purpose sentiment lexicons as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to quickly construct the sentiment dictionary for a target domain. In particular, it constructs sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having either a positive or a negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model is up to 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603. Furthermore, we add sentiment information about frequently used coined words and emoticons that are used mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the importance of developing sentiment dictionaries has gradually declined. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, resulting in sentiment analysis with higher accuracy (Teng, Z., 2016). This result indicates that the sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features for deep learning models to improve accuracy. The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as features of deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
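
To illustrate the Bi-LSTM gloss classifier described in the first step of the procedure, here is a minimal PyTorch sketch; the vocabulary size, embedding and hidden dimensions, and dummy token ids are assumptions, and the tokenization of SKLD glosses and the training loop are omitted.

```python
import torch
import torch.nn as nn

class GlossSentimentClassifier(nn.Module):
    """Bi-LSTM binary classifier over tokenized dictionary glosses (a sketch)."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(hidden_dim * 2, 2)   # positive vs. negative gloss

    def forward(self, token_ids):
        x = self.embed(token_ids)                # (batch, seq_len, embed_dim)
        _, (h, _) = self.bilstm(x)               # h: (2, batch, hidden_dim)
        h = torch.cat([h[0], h[1]], dim=1)       # concat forward/backward final states
        return self.fc(h)

# Smoke test with hypothetical token ids (real input would be tokenized SKLD glosses).
model = GlossSentimentClassifier(vocab_size=5000)
dummy = torch.randint(1, 5000, (8, 20))          # batch of 8 glosses, 20 tokens each
logits = model(dummy)
print(logits.shape)                              # torch.Size([8, 2])
```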