• Title/Summary/Keyword: Co-word


Building and Analyzing Panic Disorder Social Media Corpus for Automatic Deep Learning Classification Model (딥러닝 자동 분류 모델을 위한 공황장애 소셜미디어 코퍼스 구축 및 분석)

  • Lee, Soobin;Kim, Seongdeok;Lee, Juhee;Ko, Youngsoo;Song, Min
    • Journal of the Korean Society for information Management
    • /
    • v.38 no.2
    • /
    • pp.153-172
    • /
    • 2021
  • This study builds a deep learning based classification model to examine the characteristics of panic disorder and to classify documents showing panic-disorder tendencies, using a panic disorder corpus constructed for the present study. For this purpose, 5,884 documents collected from social media were manually annotated based on the mental disorder diagnostic manual and classified into panic-disorder-prone and non-panic-disorder documents. TF-IDF scores were then calculated and word co-occurrence analysis was performed to analyze the lexical characteristics of the corpus. In addition, symptom frequencies were measured and the co-occurrence between annotated symptoms was calculated to analyze the characteristics of panic disorder symptoms and the relationships between them. We also evaluated the performance of the deep learning based classification model. Three pre-trained models, BERT multilingual, KoBERT, and KcBERT, were adopted for the classification model, and KcBERT showed the best performance among them. This study demonstrates that examining the characteristics of panic disorder can help the early diagnosis and treatment of people suffering from related symptoms, and that it expands the field of mental illness research to social media.
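The TF-IDF scoring and word co-occurrence analysis described in the abstract can be sketched as follows; the toy documents, whitespace tokenization, and the particular IDF smoothing are illustrative assumptions, not the paper's actual corpus or formulas.

```python
import math
from collections import Counter
from itertools import combinations

# Toy corpus standing in for the annotated social-media documents
# (the texts here are invented placeholders).
docs = [
    "panic attack sudden fear heart racing",
    "panic disorder sudden dizziness fear",
    "calm morning walk in the park",
]

tokenized = [d.split() for d in docs]
n_docs = len(tokenized)

# Document frequency of each term.
df = Counter(t for doc in tokenized for t in set(doc))

def tf_idf(term, doc):
    """Plain tf * smoothed idf; real toolkits differ in smoothing."""
    tf = doc.count(term) / len(doc)
    idf = math.log((1 + n_docs) / (1 + df[term])) + 1
    return tf * idf

# Document-level word co-occurrence: count unordered term pairs.
cooc = Counter()
for doc in tokenized:
    for a, b in combinations(sorted(set(doc)), 2):
        cooc[(a, b)] += 1

scores = {t: tf_idf(t, tokenized[0]) for t in set(tokenized[0])}
```

Terms appearing in fewer documents (e.g. "attack") score higher than terms spread across the corpus (e.g. "panic"), which is the usual TF-IDF behavior.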

A Design of Foundation Technology for PLC-based Smart-grave(Tumulus) System

  • Huh, Jun-Ho;Koh, Taehoon;Seo, Kyungryong
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.11
    • /
    • pp.1319-1331
    • /
    • 2015
  • In the Republic of Korea, there has been a culture called 'Hyo' since the Koryo Dynasty; this word means paying the utmost respect to one's own parents and ancestors, whether they are alive or have passed away. Nowadays, however, most people live far away from their family gravesites, so they cannot take care of them except on special holidays. For this reason, people cannot respond promptly to incidents occurring at the sites, as they often receive notifications much later. Thus, in this paper, we propose a low-cost gravesite monitoring system with which users, informed of the current situation through PLC without delay, can respond immediately to disastrous events. For the performance evaluation, lab and test-bed experiments were performed on an actual ship using 200 Mbps and 500 Mbps products, instead of an on-site experiment after actually constructing the system. The mountain-region PLC was installed on the power lines, and the result showed successful communication at 36.14 Mbps. Therefore, we expect that this study will contribute to time and cost reduction when constructing internet infrastructure in mountain regions or building smart graves, tumuli, and charnel houses.

A Dependency Graph-Based Keyphrase Extraction Method Using Anti-patterns

  • Batsuren, Khuyagbaatar;Batbaatar, Erdenebileg;Munkhdalai, Tsendsuren;Li, Meijing;Namsrai, Oyun-Erdene;Ryu, Keun Ho
    • Journal of Information Processing Systems
    • /
    • v.14 no.5
    • /
    • pp.1254-1271
    • /
    • 2018
  • Keyphrase extraction is one of the fundamental natural language processing (NLP) tools for improving many text-mining applications such as document summarization and clustering. In this paper, we propose two novel techniques on top of state-of-the-art keyphrase extraction methods. The first is anti-patterns, which aim to recognize non-keyphrase candidates. State-of-the-art methods often use rich feature sets to identify keyphrases, but such feature sets cover only some of all keyphrases, because keyphrases share very few common patterns and stylistic features while non-keyphrase candidates share many. The second is to use the dependency graph instead of the word co-occurrence graph: the co-occurrence graph cannot connect two words that are syntactically related but placed far from each other in a sentence, while the dependency graph can. In experiments, we compared the performance of different graph settings (co-occurrence and dependency) against existing method results. We found that the combination of the dependency graph and anti-patterns outperforms the state of the art.
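A minimal sketch of graph-based keyphrase candidate ranking with anti-pattern filtering; the hand-written dependency edges, the anti-pattern set, and the TextRank-style scoring are illustrative stand-ins for the parser output and feature rules the paper actually uses.

```python
from collections import defaultdict

# Hand-written dependency edges standing in for a parser's output
# for: "the proposed method extracts keyphrases from scientific documents".
dep_edges = [
    ("method", "proposed"), ("extracts", "method"),
    ("extracts", "keyphrases"), ("keyphrases", "documents"),
    ("documents", "scientific"),
]

# Anti-patterns: rules that mark non-keyphrase candidates (illustrative).
ANTI_PATTERNS = {"the", "from", "proposed"}

def pagerank(edges, damping=0.85, iters=50):
    """Undirected TextRank-style scoring over the word graph."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    nodes = list(graph)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            rank = sum(score[m] / len(graph[m]) for m in graph[n])
            new[n] = (1 - damping) / len(nodes) + damping * rank
        score = new
    return score

scores = pagerank(dep_edges)
candidates = [w for w, s in sorted(scores.items(), key=lambda x: -x[1])
              if w not in ANTI_PATTERNS]
```

Swapping `dep_edges` for adjacent-word pairs would give the co-occurrence-graph baseline the paper compares against.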

Edge-Enhancement Method by Subtracting Low Frequency Components of an Image (이미지 저주파 성분 덜어냄을 이용한 에지 강화 기법)

  • Jang, Won-Woo;Kim, Ju-Hyun;Kwak, Boo-Dong;Park, Keun-Woo;Kang, Bong-Soon
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2006.06a
    • /
    • pp.53-56
    • /
    • 2006
  • In this paper, we present an algorithm to enhance the high-frequency components of an image by subtracting a smoothed version of the image from the original. In short, this is a technique to obtain a more precise and vivid image. Gaussian smoothing is adopted to obtain the low-frequency (flat) components of the image. Moreover, the gain of the proposed filter must be considered in order to preserve the overall brightness of the image. Based on the algorithm verified in MATLAB, we obtain a more vivid image with fuller detail than the original.
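The subtraction scheme described above (commonly called unsharp masking: sharpened = original + gain × (original − smoothed)) can be sketched on a 1-D signal; the step signal, 3-tap binomial kernel, and gain value are illustrative assumptions, not the paper's actual 2-D filter.

```python
# A 1-D signal and a 3-tap kernel stand in for the paper's
# 2-D image and Gaussian smoothing filter.
signal = [10, 10, 10, 80, 80, 80, 10, 10]  # a step edge
kernel = [0.25, 0.5, 0.25]                 # discrete Gaussian approximation

def smooth(x):
    # Replicate edge samples so output length matches input length.
    padded = [x[0]] + x + [x[-1]]
    return [sum(k * padded[i + j] for j, k in enumerate(kernel))
            for i in range(len(x))]

def unsharp(x, gain=1.0):
    low = smooth(x)
    # Add back the high-frequency residual (original minus low-pass).
    return [v + gain * (v - l) for v, l in zip(x, low)]

sharpened = unsharp(signal)
```

Flat regions are left unchanged, while the step edge overshoots on the bright side and undershoots on the dark side, which is what makes the edge look sharper.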


A Bibliometric Approach for Department-Level Disciplinary Analysis and Science Mapping of Research Output Using Multiple Classification Schemes

  • Gautam, Pitambar
    • Journal of Contemporary Eastern Asia
    • /
    • v.18 no.1
    • /
    • pp.7-29
    • /
    • 2019
  • This study describes an approach for comparative bibliometric analysis of scientific publications related to (i) individual or several departments comprising a university, and (ii) broader integrated subject areas using multiple disciplinary schemes. It uses a custom dataset of scientific publications (ca. 15,000 articles and reviews, published during 2009-2013, and recorded in the Web of Science Core Collections) with author affiliations to the research departments, dedicated to science, technology, engineering, mathematics, and medicine (STEMM), of a comprehensive university. The dataset was first subjected to department-level and discipline-level analyses using the newly available KAKEN-L3 classification (based on the MEXT/JSPS Grants-in-Aid system), hierarchical clustering, correspondence analysis to decipher the major departmental and disciplinary clusters, and visualization of the department-discipline relationships using two-dimensional stacked bar diagrams. The next step involved the creation of subsets covering integrated subject areas and a comparative analysis of departmental contributions to a specific area (medical, health and life science) using several disciplinary schemes: Essential Science Indicators (ESI) 22 research fields, SCOPUS 27 subject areas, OECD Frascati 38 subordinate research fields, and KAKEN-L3 66 subject categories. To illustrate the effective use of science mapping techniques, the same subset for the medical, health and life science area was subjected to network analyses for co-occurrences of keywords, bibliographic coupling of the publication sources, and co-citation of sources in the reference lists. The science mapping approach demonstrates ways to extract information on the prolific research themes, the journals most frequently used for publishing research findings, and the knowledge base underlying the research activities covered by the publications concerned.
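One of the science-mapping techniques mentioned above, bibliographic coupling, links two papers when their reference lists overlap and weights the edge by the overlap size. A minimal sketch follows; the paper IDs and reference lists are invented placeholders, not data from the study.

```python
from itertools import combinations

# Invented papers with their cited-reference sets.
refs = {
    "P1": {"R1", "R2", "R3"},
    "P2": {"R2", "R3", "R4"},
    "P3": {"R5"},
}

# Edge weight = number of shared references; unlinked pairs are omitted.
coupling = {
    (a, b): len(refs[a] & refs[b])
    for a, b in combinations(sorted(refs), 2)
    if refs[a] & refs[b]
}
```

Co-citation analysis is the mirror image: instead of shared outgoing references, it counts how often two sources are cited together by later papers.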

Study on mechanical properties of Yellow River silt solidified by MICP technology

  • Yuke, Wang;Rui, Jiang;Gan, Wang;Meiju, Jiao
    • Geomechanics and Engineering
    • /
    • v.32 no.3
    • /
    • pp.347-359
    • /
    • 2023
  • With the development of infrastructure, there is a critical shortage of filling materials all over the world. Meanwhile, a large amount of silt accumulated in the lower reaches of the Yellow River is treated as waste every year, causing environmental pollution and a waste of resources. Microbially induced calcium carbonate precipitation (MICP) technology, which is efficient, economical, and environmentally friendly, is selected in this paper to solidify the abandoned Yellow River silt, which has poor mechanical properties, into a high-quality filling material. Based on unconfined compressive strength (UCS) tests, determination of calcium carbonate (CaCO3) content, and scanning electron microscope (SEM) tests, the effects of cementation solution concentration, number of treatments, and relative density on the solidification effect were studied. The results show that loose silt particles can be effectively solidified into a filling material with excellent mechanical properties through MICP technology. The concentration of the cementation solution has a significant impact on the solidification effect, and a reasonable concentration is 1.5 mol/L. As the number of treatments increases, the pores in the soil are filled with CaCO3, and the UCS of specimens after 10 treatments can reach 2.5 MPa with a relatively high CaCO3 content of 26%. As the degree of treatment improves, the influence of relative density on the UCS gradually increases. Microscopic analysis revealed that after MICP reinforcement, CaCO3 adhered to the surfaces of soil particles and cemented them to each other, forming a dense structure.

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to predefined shapes. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at such points [3,10,14,15]. Such a solution provides satisfying computational speed, very high precision of definitions, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur as well. It is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values over the elements of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each element; the fuzzy-set word dimension would be 8 × 5 bits, and the memory dimension would therefore have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then the non-null weight derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
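The word-length arithmetic from the running example above can be reproduced in a short sketch. The parameter names mirror the abstract's notation (nfm, dm(m), dm(fm)); the figures match the paper's example (Length = 24 bits vs. 40 bits for full vectorial memorization).

```python
import math

# Parameters from the paper's running example (fig. 1).
universe = 128    # elements in the universe of discourse
fuzzy_sets = 8    # membership functions in the term set
levels = 32       # discretization levels for membership values
nfm = 3           # max non-null memberships per element

dm_m = math.ceil(math.log2(levels))       # bits per membership value -> 5
dm_fm = math.ceil(math.log2(fuzzy_sets))  # bits per function index   -> 3

# Proposed scheme: store only the (value, index) pairs of non-null sets.
word_proposed = nfm * (dm_m + dm_fm)      # 3 * (5 + 3) = 24 bits

# Full vectorial scheme: store one value per fuzzy set.
word_vectorial = fuzzy_sets * dm_m        # 8 * 5 = 40 bits

mem_proposed = universe * word_proposed    # total antecedent memory, bits
mem_vectorial = universe * word_vectorial
```

The saving grows with the number of fuzzy sets, since the proposed word length depends on nfm rather than on the term-set size.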


Issues on Articles Covering Outstanding Management of Apartment Complexes - Content Analysis of Newspaper Reports with Lexical Statistics - (우수 아파트단지 취재기사에서의 관리상의 논점 - 탐방기사를 이용한 언어통계학적 내용분석 -)

  • Choi Jung-Min;Kang Soon-Joo
    • Journal of the Korean housing association
    • /
    • v.17 no.4
    • /
    • pp.131-143
    • /
    • 2006
  • Nowadays, various mass media outlets discover and introduce outstanding management cases of apartment complexes to encourage healthy competition among constructors and active resident participation in apartment management. This study statistically analyzed, with lexical criteria, the management issues of outstanding apartment complexes that have been introduced by mass media, in order to examine the characteristics of their exemplary management. The key issues in outstanding apartment management are summarized as: efficient management of convenient facilities for residents, community activities based on residents' participation, and maintenance of pleasant living environments through transparent management. Also, the arrangement of co-occurring word relations from a Social Network Analysis included three key concepts of multi-family housing management - Maintenance Management, Operating Management, and Community Life Management - with emphasis on 'residents' and 'apartment complexes.' However, Operating Management was relatively deemphasized.

Energy-efficient Reconfigurable FEC Processor for Multi-standard Wireless Communication Systems

  • Li, Meng;der Perre, Liesbet Van;van Thillo, Wim;Lee, Youngjoo
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.17 no.3
    • /
    • pp.333-340
    • /
    • 2017
  • In this paper, we describe HW/SW co-optimizations for reconfigurable application specific instruction-set processors (ASIPs). Based on our previous very long instruction word (VLIW) ASIP, the proposed framework realizes various forward error-correction (FEC) algorithms for wireless communication systems. In order to enhance the energy efficiency, we newly introduce several design methodologies including high-radix algorithms, task-level out-of-order executions, and intensive resource allocations with loop-level rescheduling. The case study on the radix-4 turbo decoding shows that the proposed techniques improve the energy efficiency by 3.7 times compared to the previous architecture.

Calculation of similarity by weighting title and summary in word co-occurrence of research reports (연구 보고서의 공기관계 정보에 제목 및 요약의 가중치를 적용한 유사도 계산)

  • Kim, Nam-Hun;Joo, Jong-Min;Park, Hyuk-Ro;Yang, Hyung-Jeong
    • Proceedings of The KACE
    • /
    • 2017.08a
    • /
    • pp.37-40
    • /
    • 2017
  • In this paper, we propose a similarity calculation method for national R&D reports that applies weights to the title and summary together with word co-occurrence information. To this end, text was extracted from national R&D reports and split into sentence units; basic stopwords and stopwords characteristic of the reports were removed; morphological analysis was performed; and co-occurrence relations were extracted. In addition, to improve accuracy when calculating document similarity, weights were assigned to the title and summary sections. The proposed method improved retrieval performance by 2.5% over a method using the document-retrieval library Lucene, and by 1.1% over a KNN-heuristic method. These results show that a document's summary, title, and co-occurrence information affect the similarity calculation for research reports.
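A minimal sketch of section-weighted similarity in the spirit of the method described above; the weight values, toy documents, and cosine measure are illustrative assumptions (the paper's actual weighting scheme and similarity formula may differ).

```python
import math
from collections import Counter

# Illustrative section weights, not the values used in the paper.
TITLE_W, SUMMARY_W, BODY_W = 3.0, 2.0, 1.0

def weighted_vector(title, summary, body):
    """Term vector where title/summary occurrences count more than body."""
    vec = Counter()
    for text, w in ((title, TITLE_W), (summary, SUMMARY_W), (body, BODY_W)):
        for term in text.split():
            vec[term] += w
    return vec

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u)
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# Toy reports with invented titles, summaries, and bodies.
a = weighted_vector("deep learning", "neural models", "training data methods")
b = weighted_vector("deep learning", "survey", "training data review")
sim = cosine(a, b)
```

Because the shared terms appear in the titles, the weighting pushes the similarity up relative to a plain unweighted bag-of-words comparison.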
