• Title/Summary/Keyword: extraction methods

Search results: 3,500

A Study on the Trend and Utilization of Stone Waste (석재폐기물 현황 및 활용 연구)

  • Chea, Kwang-Seok;Lee, Young Geun;Koo, Namin;Yang, Hee Moon
    • Korean Journal of Mineralogy and Petrology, v.35 no.3, pp.333-344, 2022
  • The quarrying and utilization of natural building stones such as granite and marble are rapidly expanding in developing countries. A huge amount of waste is generated during the processing, cutting, and sizing of these stones to make them usable. These wastes are disposed of in the open environment, and their toxic nature negatively affects the environment and human health. The growth trend in the world stone industry was confirmed by the output for 2019, which increased by more than one percent and reached a new peak of some 155 million tons, excluding quarry discards. Per-capita stone use rose to 268 square meters per thousand persons (m²/1,000 inh), from 266 the previous year and 177 in 2001. However, we have to take into consideration that the world's gross quarrying production was about 316 million tons (100%) in 2019, of which about 53% is regarded as quarrying waste. With regard to the stone-processing stage, world production reached 91.15 million tons (29%), which means that 63.35 million tons of stone-processing scrap was produced. Therefore, we can say that, on a global level, if the quantity of material extracted in the quarry is 100%, the total percentage of waste is about 71%. This raises a substantial problem from the environmental, economic, and social points of view. There are essentially three ways of dealing with inorganic waste: reuse, recycling, or disposal in landfills. Reuse and recycling are the preferred waste-management methods, considering environmental sustainability and the opportunity to generate important economic returns. Although there are many possible applications for stone waste, they can be summarized into three main general categories: fillers for binders, ceramic formulations, and environmental applications. The use of residual sludge for substrate production seems highly promising: the substrate can be used for quarry rehabilitation and in the rehabilitation of industrial sites. This new product (artificial soil) could be added to the list of materials used, in addition to topsoil, for civil works such as railway embankments and roundabouts, and stone sludge waste could be used to neutralize acidic soil and increase yields. There are also several examples of studies on the recovery of mineral residues from stone waste, including the extraction of metallic elements and mineral components, the production of construction raw materials, power generation, building materials, and gas and water treatment.
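
As a back-of-envelope check of the 71% figure (our arithmetic from the tonnages quoted above, not a calculation reproduced from the paper):

$$\text{total waste fraction} \;\approx\; 1 - \frac{91.15~\text{Mt finished stone}}{316~\text{Mt gross quarrying output}} \;\approx\; 0.71$$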

An Empirical Study on the Improvement of In Situ Soil Remediation Using Plasma Blasting, Pneumatic Fracturing and Vacuum Suction (플라즈마 블라스팅, 공압파쇄, 진공추출이 활용된 지중 토양정화공법의 정화 개선 효과에 대한 실증연구)

  • Jae-Yong Song;Geun-Chun Lee;Cha-Won Kang;Eun-Sup Kim;Hyun-Shic Jang;Bo-An Jang;Yu-Chul Park
    • The Journal of Engineering Geology, v.33 no.1, pp.85-103, 2023
  • The in-situ remediation of a solidified stratum containing a large amount of fine-textured material such as clay or organic matter in contaminated soil faces limitations such as increased remediation cost resulting from decreased purification efficiency. Even when soil conditions are good, remediation generally requires a long time to complete because of non-uniform soil properties and low permeability. This study assessed the remediation effect and evaluated the field applicability of a methodology that combines pneumatic fracturing, vacuum extraction, and plasma blasting (the PPV method) to overcome the limitations of existing underground remediation methods. For comparison, underground remediation was performed over 80 days using the experimental PPV method and chemical oxidation (the control method). The control group showed no decrease in the degree of contamination due to poor delivery of the soil remediation agent, whereas the PPV method clearly reduced the degree of contamination during the remediation period. The remediation effect, assessed by the reduction of the highest TPH (Total Petroleum Hydrocarbons) concentration by distance from the injection well, was unclear in the control group, whereas the PPV method showed a remediation effect of 62.6% within a 1 m radius of the injection well, 90.1% within 1.1~2.0 m, and 92.1% within 2.1~3.0 m. When remediation efficiency was evaluated using the average rate of TPH concentration reduction by distance from the injection well, the control group again showed no clear effect, whereas the PPV method showed a 53.6% remediation effect within 1 m of the injection well, 82.4% within 1.1~2.0 m, and 68.7% within 2.1~3.0 m. Both ways of assessing purification efficiency (based on changes in maximum and average TPH contamination concentrations) found the PPV method to increase the remediation effect by 149.0~184.8% compared with the control group, with an average increase of about 167%. The time taken to reduce contamination by 80% of the initial concentration was evaluated by deriving a correlation equation from analysis of the TPH concentrations: the PPV method was estimated to be 184.4% faster than chemical oxidation. However, this evaluation of a single site cannot be applied equally to all strata, so additional research is necessary to explore the proposed method's effect more clearly.
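
For context, one common way to derive such a time estimate is to fit a first-order decay model to the TPH time series and solve for the 80%-reduction time. A minimal sketch with hypothetical measurements (the paper's actual correlation equation may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical TPH measurements (mg/kg) over the 80-day remediation period.
days = np.array([0, 10, 20, 40, 60, 80], dtype=float)
tph  = np.array([5000, 3600, 2700, 1500, 900, 520], dtype=float)

def first_order(t, c0, k):
    """First-order decay model C(t) = C0 * exp(-k t)."""
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(first_order, days, tph, p0=(tph[0], 0.01))

# Time for an 80% reduction: C(t)/C0 = 0.2  =>  t80 = ln(5) / k
t80 = np.log(5.0) / k
print(f"fitted k = {k:.4f} /day, estimated t80 = {t80:.1f} days")
```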

Application of Amplitude Demodulation to Acquire High-sampling Data of Total Flux Leakage for Tendon Nondestructive Estimation (덴던 비파괴평가를 위한 Total Flux Leakage에서 높은 측정빈도의 데이터를 획득하기 위한 진폭복조의 응용)

  • Joo-Hyung Lee;Imjong Kwahk;Changbin Joh;Ji-Young Choi;Kwang-Yeun Park
    • Journal of the Korea Institute for Structural Maintenance and Inspection, v.27 no.2, pp.17-24, 2023
  • A post-processing technique for the measurement signal of a solenoid-type sensor is introduced. The solenoid-type sensor nondestructively evaluates external tendons of prestressed concrete using the total flux leakage (TFL) method. The TFL solenoid sensor consists of primary and secondary coils. A sinusoidal AC current is input to the primary coil, and a signal proportional to the derivative of the input is induced in the secondary coil. Because the amplitude of the induced signal is proportional to the cross-sectional area of the tendon, sectional loss of the tendon caused by rupture or corrosion can be identified from the induced signal. It is therefore important to extract amplitude information from the measurement signal of the TFL sensor. Previously, the amplitude was extracted using local maxima, which is the simplest way to obtain amplitude information. However, because amplitude extraction using local maxima dramatically decreases the sampling rate, the previous method places many restrictions on the direction of TFL sensor development, such as applying additional signal processing and/or artificial intelligence. The proposed method instead uses amplitude demodulation to obtain the signal amplitude from the TFL sensor, so the sampling rate of the amplitude information is the same as that of the raw TFL sensor data. The proposed method provides ample freedom for development by eliminating restrictions on the primary-coil input frequency of the TFL sensor and on the speed at which the sensor is applied to external tendons. It also maintains a high measurement sampling rate, which is advantageous for applying additional signal processing or artificial intelligence. The proposed method was validated through experiments, and its advantages were verified through comparison with the previous method: in this study, the amplitudes extracted by amplitude demodulation provided a sampling rate 100 times greater than that of the previous method. Results may differ depending on the situation and specific equipment settings, but in most cases, extracting amplitude information using amplitude demodulation yields more satisfactory results than the previous method.
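
As a concrete illustration of the difference between the two amplitude-extraction strategies, here is a minimal sketch using envelope detection via the Hilbert transform, one standard way to implement amplitude demodulation. The synthetic signal and all parameters are our assumptions, not the paper's setup:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic stand-in for a TFL secondary-coil signal: a carrier whose
# amplitude dips where the tendon cross-section is reduced.
fs = 10_000                    # sampling rate (Hz), hypothetical
t = np.arange(0, 1.0, 1 / fs)
carrier_hz = 60                # primary-coil excitation frequency, hypothetical
amplitude = 1.0 - 0.3 * np.exp(-((t - 0.5) ** 2) / 0.005)  # simulated section loss
signal = amplitude * np.sin(2 * np.pi * carrier_hz * t)

# Amplitude demodulation: the magnitude of the analytic signal (Hilbert
# transform) gives the envelope at every sample, so the amplitude
# information keeps the full sampling rate of the raw data.
envelope = np.abs(hilbert(signal))

# The local-maxima approach, by contrast, yields roughly one amplitude
# value per carrier period (here ~60 values/s instead of 10,000).
```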

MicroRNA Profile in the Helicobacter pylori-infected Gastric Epithelial Cells (Helicobacter pylori 감염 위상피세포에서 MicroRNA 발현 변화)

  • Chang Whan Kim;Sung Soo Kim;Tae Ho Kim;Woo Chul Chung;Jae Kwang Kim
    • Journal of Digestive Cancer Research, v.5 no.2, pp.105-112, 2017
  • Background: The expression of miRNAs in response to Helicobacter pylori infection has not been well explored. The aim of this study was to evaluate H. pylori-associated miRNAs in gastric epithelial cells. Methods: We investigated a gastric epithelial cell line (HS3C) exposed to H. pylori for over 3 months and the AGS cell line exposed to H. pylori for 6 hours. After extraction of miRNA from these cell lines, microarray and real-time PCR were performed to confirm the alteration of expression. Results: The 12 miRNAs chosen for real-time PCR were selected based on the microarray results and their potential functions related to H. pylori infection. miR-21, miR-221, and miR-222 were upregulated both in AGS cells infected with H. pylori for 6 hours and in HS3C cells. miR-99b, miR-200b, miR-203b, and miR-373 were downregulated both in AGS cells infected with H. pylori for 6 hours and in HS3C cells. miR-23a, miR-23b, miR-125b, miR-141, and miR-155 were upregulated in the HS3C cell line but not in AGS cells infected with H. pylori for 6 hours. Conclusion: miR-21, miR-99b, miR-125b, miR-200b, miR-203b, miR-221, miR-222, and miR-373 are presumed to be related to the oncogenesis of H. pylori infection. Further studies are needed to evaluate the function of these confirmed miRNAs.


Region of Interest Extraction and Bilinear Interpolation Application for Preprocessing of Lipreading Systems (입 모양 인식 시스템 전처리를 위한 관심 영역 추출과 이중 선형 보간법 적용)

  • Jae Hyeok Han;Yong Ki Kim;Mi Hye Kim
    • The Transactions of the Korea Information Processing Society, v.13 no.4, pp.189-198, 2024
  • Lipreading is an important part of speech recognition, and several studies have been conducted to improve lipreading performance in speech-recognition systems. Recent studies have used methods that modify the model architecture of the lipreading system to improve recognition performance. Unlike previous research that improves recognition performance by modifying the model architecture, we aim to improve recognition performance without any change to the model architecture. To do so, we refer to the cues used in human lipreading and set other regions, such as the chin and cheeks, as regions of interest alongside the lip region, the existing region of interest of lipreading systems, and compare the recognition rate of each region of interest to identify the best-performing one. In addition, assuming that differences in normalization results caused by the interpolation method used when normalizing the size of the region of interest affect recognition performance, we interpolate the same region of interest using nearest-neighbor interpolation, bilinear interpolation, and bicubic interpolation, and compare the recognition rate of each interpolation method to identify the best-performing one (see the sketch after this abstract). Each region of interest was detected by training an object-detection neural network, and dynamic time warping templates were generated by normalizing each region of interest, extracting and combining features, and mapping the dimensionality-reduced combined features into a low-dimensional space. The recognition rate was evaluated by comparing the distance between the generated dynamic time warping templates and the data mapped to the low-dimensional space. In the comparison of regions of interest, the region of interest containing only the lip region showed an average recognition rate of 97.36%, which is 3.44% higher than the average recognition rate of 93.92% in the previous study; in the comparison of interpolation methods, bilinear interpolation achieved 97.36%, which is 14.65% higher than nearest-neighbor interpolation and 5.55% higher than bicubic interpolation. The code used in this study can be found at https://github.com/haraisi2/Lipreading-Systems.
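
For reference, normalizing an ROI with the three interpolation methods compared in the paper can be done with OpenCV as below. The file name and target size are hypothetical, and the paper's actual pipeline differs:

```python
import cv2

# Normalize a detected region of interest (ROI) to a fixed size, comparing
# the three interpolation methods evaluated in the paper.
TARGET = (96, 96)  # hypothetical normalized ROI size

roi = cv2.imread("mouth_roi.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

resized = {
    "nearest":  cv2.resize(roi, TARGET, interpolation=cv2.INTER_NEAREST),
    "bilinear": cv2.resize(roi, TARGET, interpolation=cv2.INTER_LINEAR),
    "bicubic":  cv2.resize(roi, TARGET, interpolation=cv2.INTER_CUBIC),
}
# Each variant would then go through the same feature-extraction and
# dynamic-time-warping pipeline, so only the interpolation method differs.
```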

Establishment of Analytical Method for Dichlorprop Residues, a Plant Growth Regulator in Agricultural Commodities Using GC/ECD (GC/ECD를 이용한 농산물 중 생장조정제 dichlorprop 잔류 분석법 확립)

  • Lee, Sang-Mok;Kim, Jae-Young;Kim, Tae-Hoon;Lee, Han-Jin;Chang, Moon-Ik;Kim, Hee-Jeong;Cho, Yoon-Jae;Choi, Si-Won;Kim, Myung-Ae;Kim, MeeKyung;Rhee, Gyu-Seek;Lee, Sang-Jae
    • Korean Journal of Environmental Agriculture, v.32 no.3, pp.214-223, 2013
  • BACKGROUND: This study focused on the development of an analytical method for dichlorprop (DCPP; 2-(2,4-dichlorophenoxy)propionic acid), a plant growth regulator and synthetic auxin used on agricultural commodities. DCPP prevents fruit drop during the growth period. However, overdosing with DCPP causes unwanted early maturation and reduces the safe storage period, and consuming fruit whose residues exceed the maximum residue limits could be harmful. Therefore, this study presents an analytical method for DCPP in agricultural commodities for the nation-wide pesticide residue monitoring program of the Ministry of Food and Drug Safety. METHODS AND RESULTS: We adopted an analytical method for DCPP in agricultural commodities using gas chromatography coupled with an electron capture detector (GC/ECD). Sample extraction and purification by the ion-associated partition method were applied; quantitation was then performed by GC/ECD with a DB-17 moderate-polarity column under a temperature-programmed condition, with nitrogen as the carrier gas in splitless mode. The standard calibration curve was linear, with a correlation coefficient (r²) > 0.9998 over the analyzed range of 0.1 to 2.0 mg/L. The limit of quantitation in agricultural commodities was 0.05 mg/kg, and average recoveries ranged from 78.8 to 102.2%. The repeatability of measurements, expressed as the coefficient of variation (CV, %), was less than 9.5% at 0.05, 0.10, and 0.50 mg/kg. CONCLUSION(S): Our newly improved analytical method for DCPP residues in agricultural commodities is applicable to the nation-wide pesticide residue monitoring program, with acceptable sensitivity, repeatability, and reproducibility.
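
As an illustration of the validation statistics reported above, here is a minimal sketch computing calibration linearity (r²) and recovery; the standards and responses are hypothetical, not the laboratory's data:

```python
import numpy as np

# Hypothetical calibration standards for DCPP (mg/L) and detector responses.
conc     = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
response = np.array([1020, 2050, 5080, 10150, 20310])

slope, intercept = np.polyfit(conc, response, 1)
predicted = slope * conc + intercept

# Coefficient of determination r^2 for the calibration line.
ss_res = np.sum((response - predicted) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Recovery (%) for a spiked sample: measured concentration / spiked level.
spiked_level = 0.05   # mg/kg
measured     = 0.047  # mg/kg, hypothetical
recovery_pct = 100 * measured / spiked_level

print(f"r^2 = {r_squared:.5f}, recovery = {recovery_pct:.1f}%")
```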

Construction of Consumer Confidence index based on Sentiment analysis using News articles (뉴스기사를 이용한 소비자의 경기심리지수 생성)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.1-27, 2017
  • It is known that the economic sentiment index and macroeconomic indicators are closely related, because economic agents' judgments and forecasts of business conditions affect economic fluctuations. For this reason, consumer sentiment or confidence provides steady fodder for business and is treated as an important piece of economic information. In Korea, private consumption accounts for a large share of GDP, and the consumer sentiment index is a very important economic indicator for evaluating and forecasting the domestic economic situation. However, despite offering relevant insights into private consumption and GDP, the traditional approach of measuring consumer confidence through surveys has several limitations. One weakness is that it takes considerable time to research, collect, and aggregate the data: if urgent issues arise, timely information is not announced until the end of each month. In addition, the survey only contains information derived from its questionnaire items, which means it can be difficult to capture the direct effects of newly arising issues. The survey also faces potential declines in response rates and erroneous responses. Therefore, it is necessary to find a way to complement it. For this purpose, we construct and assess an index designed to measure consumer economic sentiment using sentiment analysis. Unlike the survey-based measures, our index relies on textual analysis to extract sentiment from economic and financial news articles. Text data such as news articles and SNS posts are timely and cover a wide range of issues; because such sources can quickly capture the economic impact of specific economic issues, they have great potential as economic indicators. Of the two main approaches to the automatic extraction of sentiment from text, we apply the lexicon-based approach, using sentiment lexicons of words annotated with their semantic orientations. In creating the sentiment lexicons, we enter the semantic orientation of individual words manually, though we do not attempt a full linguistic analysis (one involving word senses or argument structure); this is a limitation of our research, and further work in that direction remains possible. In this study, we generate a time-series index of economic sentiment in the news. The construction of the index consists of three broad steps: (1) collecting a large corpus of economic news articles on the web, (2) applying lexicon-based sentiment analysis to score each article in terms of sentiment orientation (positive, negative, or neutral), and (3) constructing a consumer economic sentiment index by aggregating the monthly time series for each sentiment word. In line with existing scholarly assessments of the relationship between the consumer confidence index and macroeconomic indicators, any new index should be assessed for its usefulness; we do so by comparing it with other economic indicators and the CSI. To check the usefulness of the new sentiment-based index, trend and cross-correlation analyses are carried out to analyze the relationships and lag structure. Finally, we analyze the forecasting power using one-step-ahead out-of-sample prediction.
As a result, in almost all experiments, the news sentiment index strongly correlates with related contemporaneous key indicators. Furthermore, in most cases, news sentiment shocks predict future economic activity; in head-to-head comparisons, the news sentiment measures outperform the survey-based sentiment index (CSI). Policy makers want to understand consumer and public opinions about existing or proposed policies, and monitoring various web media, SNS, and news articles enables relevant government decision-makers to respond to such opinions quickly. Textual data such as news articles and social network posts (Twitter, Facebook, and blogs) are generated at high speed and cover a wide range of issues; because such sources can quickly capture the economic impact of specific economic issues, they have great potential as economic indicators. Although research using unstructured data in economic analysis is in its early stages, the utilization of such data is expected to increase greatly once its usefulness is confirmed.
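
The core of steps (2) and (3) can be sketched in a few lines. A minimal illustration of lexicon-based scoring and monthly aggregation, using a toy English lexicon and hypothetical articles (the actual system uses a manually built Korean sentiment lexicon, as described above):

```python
import pandas as pd

# Toy sentiment lexicon: words annotated with semantic orientation.
lexicon = {"growth": 1, "recovery": 1, "recession": -1, "unemployment": -1}

# Hypothetical news articles with publication dates.
articles = pd.DataFrame({
    "date": pd.to_datetime(["2017-01-10", "2017-01-25", "2017-02-03"]),
    "text": [
        "signs of recovery and growth in exports",
        "unemployment rises amid recession fears",
        "growth outlook improves on consumer spending",
    ],
})

def article_score(text: str) -> int:
    """Step (2): sum the orientations of lexicon words found in the article."""
    return sum(lexicon.get(word, 0) for word in text.lower().split())

articles["sentiment"] = articles["text"].map(article_score)

# Step (3): aggregate article scores into a monthly sentiment index.
articles["month"] = articles["date"].dt.to_period("M")
monthly_index = articles.groupby("month")["sentiment"].mean()
print(monthly_index)
```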

Effects of Y Chromosome Microdeletion on the Outcome of in vitro Fertilization (남성 불임 환자에서 Y 염색체 미세 결손이 체외 수정 결과에 미치는 영향)

  • Choi, Noh-Mi;Yang, Kwang-Moon;Kang, Inn-Soo;Seo, Ju-Tae;Song, In-Ok;Park, Chan-Woo;Lee, Hyoung-Song;Lee, Hyun-Joo;Ahn, Ka-Young;Hahn, Ho-Suap;Lee, Hee-Jung;Kim, Na-Young;Yu, Seung-Youn
    • Clinical and Experimental Reproductive Medicine, v.34 no.1, pp.41-48, 2007
  • Objective: To determine whether the presence of a Y-chromosome microdeletion affects the outcome of in vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI) programs. Methods: Fourteen couples with a microdeletion in the azoospermic factor (AZF)c region who attempted IVF/ICSI or cryopreserved-thawed embryo transfer cycles were enrolled. All of the men showed severe oligoasthenoteratozoospermia (OATS) or azoospermia. As a control, 12 couples with OATS or azoospermia and a normal Y chromosome were included. Both groups were divided into two subgroups by the sperm source used in ICSI: those who underwent testicular sperm extraction (TESE) and those who used ejaculated sperm. We retrospectively analyzed our database with respect to IVF outcomes. The outcome measures were the mean number of good-quality embryos, fertilization rates, implantation rates, β-hCG positive rates, early pregnancy loss, and live birth rates. Results: The mean number of good-quality embryos, implantation rates, β-hCG positive rates, early pregnancy loss rates, and live birth rates were not significantly different between the Y-chromosome microdeletion and control groups. However, the fertilization rate in the Y-chromosome microdeletion group (61.1%) was significantly lower than that of the control group (79.8%, p=0.003). Also, the subgroup that underwent TESE with an AZFc microdeletion showed a significantly lower fertilization rate (52.9%) than the subgroup that underwent TESE with a normal Y chromosome (79.5%, p=0.008). In the subgroups using ejaculated sperm, fertilization rates tended to be lower in couples with a Y-chromosome microdeletion than in couples with a normal Y chromosome (65.5% versus 79.9%, p=0.082), but the difference was not statistically significant. Conclusions: In IVF/ICSI cycles using TESE sperm, the presence of a Y-chromosome microdeletion may adversely affect the fertilizing ability of the injected sperm. However, when ejaculated sperm are available for ICSI, IVF outcome was not affected by the presence of a Y-chromosome AZFc microdeletion. Larger prospective studies are needed to support our results.

A study on the classification of research topics based on COVID-19 academic research using Topic modeling (토픽모델링을 활용한 COVID-19 학술 연구 기반 연구 주제 분류에 관한 연구)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.155-174
    • /
    • 2022
  • From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (a fatal respiratory syndrome caused by the SARS-CoV-2 coronavirus) were published. The rapid increase in the number of papers related to COVID-19 places time and technical constraints on healthcare professionals and policy makers trying to find important research quickly. Therefore, in this study, we propose a method of extracting useful information from the text data of an extensive literature using the LDA and Word2vec algorithms. Papers related to the keywords of interest were extracted from the COVID-19 papers, and detailed topics were identified. The data used was the CORD-19 data set on Kaggle, a free academic resource prepared by major research groups and the White House to respond to the COVID-19 pandemic and updated weekly. The research method has two main parts. First, 41,062 articles were collected through data filtering and pre-processing of the abstracts of 47,110 academic papers including full text. For this purpose, the number of publications related to COVID-19 by year was analyzed through exploratory data analysis using a Python program, and the top 10 journals with the most active research were identified. The LDA and Word2vec algorithms were used to derive research topics related to COVID-19, and after analyzing related words, similarity was measured. Second, papers containing 'vaccine' and 'treatment' were extracted from among the topics derived from all papers: a total of 4,555 papers related to 'vaccine' and 5,971 papers related to 'treatment'. For each collection of papers, detailed topics were analyzed using the LDA and Word2vec algorithms, and a clustering method based on PCA dimension reduction was applied to visualize groups of papers with similar themes using the t-SNE algorithm. A noteworthy point in the results is that topics that did not emerge from topic modeling over all COVID-19 papers did emerge in the topic modeling results for the individual research topics. For example, topic modeling of the papers related to 'vaccine' extracted a new topic, Topic 05, 'neutralizing antibodies'. A neutralizing antibody is an antibody that protects cells from infection when a virus enters the body and is said to play an important role in the production of therapeutic agents and in vaccine development. In addition, extracting topics from the papers related to 'treatment' revealed a new topic, Topic 05, 'cytokine'. A cytokine storm occurs when the body's immune cells attack normal cells rather than defending against an attack. Hidden topics that could not be found over the entire corpus were thus uncovered by classifying papers according to keywords and performing topic modeling to find detailed topics. In this study, we proposed a method of extracting topics from a large amount of literature using the LDA algorithm and extracting similar words using the skip-gram variant of Word2vec, which predicts context words from a central word. Combining the LDA model and the Word2vec model aims at better performance by relating documents to LDA topics and relating words to one another in the Word2vec space. In addition, as a clustering method based on PCA dimension reduction, we presented a way to classify documents intuitively by using the t-SNE technique to group documents with similar themes into a structured organization. In a situation where the efforts of many researchers to overcome COVID-19 cannot keep up with the rapid publication of academic papers on the disease, this approach should save healthcare professionals and policy makers precious time and effort and help them gain new insights rapidly. It is also expected to serve as basic data for researchers exploring new research directions.
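
A minimal sketch of the LDA-plus-skip-gram pipeline described above, using gensim on toy tokenized abstracts (the study applies this to tens of thousands of CORD-19 papers; the documents and parameters here are illustrative assumptions):

```python
from gensim import corpora
from gensim.models import LdaModel, Word2Vec

# Toy tokenized abstracts standing in for pre-processed CORD-19 papers.
docs = [
    ["vaccine", "trial", "neutralizing", "antibody", "efficacy"],
    ["treatment", "cytokine", "storm", "immune", "response"],
    ["vaccine", "antibody", "immune", "protection"],
]

# LDA: derive latent topics from the bag-of-words corpus.
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)
for topic_id, terms in lda.print_topics():
    print(topic_id, terms)

# Word2vec (skip-gram, sg=1): find words similar to a topic keyword.
w2v = Word2Vec(sentences=docs, vector_size=50, sg=1, min_count=1, seed=0)
print(w2v.wv.most_similar("vaccine", topn=3))
```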

  • A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

    • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
      • Journal of Intelligence and Information Systems, v.19 no.3, pp.1-23, 2013
    • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that are urgent problems to be solved in a modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies are seldom gathered, and in some cases it is hard to find professionals dealing with specific social issues; thus, the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions, because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount these shortcomings, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 to July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models; its goal is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society: looking only at the social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability value of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each document into paragraphs; meanwhile, using LDA, we extract a set of topics from the documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so that each topic ends up with several best-matched paragraphs. Suppose, for example, a topic (e.g., Unemployment Problem) has the best-matched paragraph "Up to 300 workers lost their jobs in XXX company in Seoul". In this case, we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time.
Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. Through this prototype system, we have detected various social issues appearing in our society and have shown the effectiveness of our proposed methods in experimental results. Note that you can also try our proof-of-concept system at http://dslab.snu.ac.kr/demo.html.
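
The paragraph-to-topic matching idea, scoring a paragraph by the probability mass of the topic terms it contains and assigning it to the best-scoring topic, can be sketched as follows. This is a simplified reading of the description above, not the paper's exact generative-model algorithm, and the topics are toy stand-ins for labeled LDA output:

```python
# Toy labeled topics standing in for LDA output: term -> probability.
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Housing Problem":      {"rent": 0.5, "housing": 0.3, "mortgage": 0.2},
}

def topic_score(words: list[str], term_probs: dict[str, float]) -> float:
    # Probability-weighted count of topic terms appearing in the paragraph.
    return sum(p * words.count(term) for term, p in term_probs.items())

def best_topic(paragraph: str) -> str:
    words = paragraph.lower().split()
    return max(topics, key=lambda t: topic_score(words, topics[t]))

print(best_topic("Up to 300 workers lost their jobs after a mass layoff at a business in Seoul"))
# -> Unemployment Problem
```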

