• Title/Summary/Keyword: text extraction
Research trends over 10 years (2010-2021) in infant and toddler rearing behavior by family caregivers in South Korea: text network and topic modeling

  • In-Hye Song;Kyung-Ah Kang
    • Child Health Nursing Research
    • /
    • v.29 no.3
    • /
    • pp.182-194
    • /
    • 2023
  • Purpose: This study analyzed research trends in infant and toddler rearing behavior among family caregivers over a 10-year period (2010-2021). Methods: Text network analysis and topic modeling were applied to data collected from relevant papers, after extracting and refining semantic morphemes. A semantic-centered network was constructed from words extracted from 2,613 English-language abstracts. Data analysis was performed using NetMiner 4.5.0. Results: Frequency analysis, degree centrality, and eigenvector centrality all placed the terms "scale", "program", and "education" among the top 10 keywords associated with infant and toddler rearing behaviors among family caregivers. The extracted keywords were divided into two clusters through cohesion analysis and classified into two topic groups using topic modeling: "program and evaluation" (64.37%) and "caregivers' role and competency in child development" (35.63%). Conclusion: The roles and competencies of family caregivers are essential for the development of infants and toddlers, and intervention programs and evaluations are necessary to improve rearing behaviors. Future research should determine the role of nurses in supporting family caregivers and should facilitate the development of nursing strategies and intervention programs to promote positive rearing practices.
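The pipeline described above (extract words from abstracts, link words that co-occur, rank them by degree centrality) can be sketched in miniature. The toy abstracts below are illustrative stand-ins, not the study's 2,613 refined abstracts, and this simple co-occurrence network is only a rough analogue of a NetMiner semantic network:

```python
from itertools import combinations
from collections import defaultdict

# Toy "abstracts" standing in for the refined English abstracts.
abstracts = [
    "program education scale caregiver",
    "scale program evaluation caregiver",
    "education caregiver development",
]

# Build a co-occurrence network: words sharing an abstract are linked.
edges = defaultdict(int)
for text in abstracts:
    words = sorted(set(text.split()))
    for a, b in combinations(words, 2):
        edges[(a, b)] += 1

# Degree centrality: number of distinct neighbors, normalized by (n - 1).
neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)
n = len(neighbors)
centrality = {w: len(nb) / (n - 1) for w, nb in neighbors.items()}

# Rank keywords by centrality, highest first.
top = sorted(centrality, key=centrality.get, reverse=True)
```

Eigenvector centrality would weight neighbors by their own centrality instead of counting them equally; the ranking step is otherwise the same.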

Investigating the Impact of Corporate Social Responsibility on Firm's Short- and Long-Term Performance with Online Text Analytics (온라인 텍스트 분석을 통해 추정한 기업의 사회적책임 성과가 기업의 단기적 장기적 성과에 미치는 영향 분석)

  • Lee, Heesung;Jin, Yunseon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.13-31
    • /
    • 2016
  • Despite expectations of short- or long-term positive effects of corporate social responsibility (CSR) on firm performance, the results of existing research on this relationship are inconsistent, partly because of a lack of clarity about subordinate CSR concepts. In this study, keywords related to CSR concepts are extracted from unstructured sources, such as newspapers, using text mining techniques to examine the relationship between CSR and firm performance. The analysis is based on data from the New York Times, a major news publication, and Google Scholar. We used text analytics to process unstructured data collected from open online documents to explore the effects of CSR on short- and long-term firm performance. The results suggest that the CSR index computed with the proposed online-media text analytics predicts long-term performance far better than short-term performance, even in the absence of internal firm reports or CSR institute reports. Our study demonstrates that text analytics is useful for evaluating CSR performance in terms of convenience and cost effectiveness.
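A CSR index mined from news text can be sketched in its simplest form as a keyword-hit rate over articles mentioning a firm. The keyword list and articles below are illustrative assumptions, not the paper's lexicon or data:

```python
# Minimal sketch of scoring news text against a mined CSR keyword list.
# The keyword set and the articles are illustrative, not the paper's data.
CSR_KEYWORDS = {"donation", "environment", "ethics", "volunteer", "sustainability"}

def csr_index(articles):
    """Share of CSR-keyword hits among all tokens in a firm's coverage."""
    hits = total = 0
    for text in articles:
        tokens = text.lower().split()
        total += len(tokens)
        hits += sum(1 for t in tokens if t in CSR_KEYWORDS)
    return hits / total if total else 0.0

firm_a = ["The firm expanded its environment and sustainability programs",
          "Employees joined a volunteer donation drive"]
firm_b = ["The firm reported quarterly earnings",
          "Shares fell on weak guidance"]
```

A real pipeline would add tokenization, stemming, and weighting, but the index's shape (CSR mentions relative to total coverage) stays the same.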

Automatic Extraction of Alternative Words using Parallel Corpus (병렬말뭉치를 이용한 대체어 자동 추출 방법)

  • Baik, Jong-Bum;Lee, Soo-Won
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.12
    • /
    • pp.1254-1258
    • /
    • 2010
  • In information retrieval, different surface forms of the same object can degrade system performance. In this paper, we propose a method for extracting alternative words that uses translation words as features of each word extracted from a parallel corpus of Korean/English patent-title pairs. We also propose an association-word filtering method to remove association words from the alternative word list. Evaluation results show that the proposed method outperforms other alternative word extraction methods.
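The core idea above can be sketched as follows: represent each source word by the set of translation words observed for it in the parallel corpus, then treat words with strongly overlapping translation features as alternative-word candidates. The feature sets and the Jaccard threshold below are illustrative assumptions, not values from the paper:

```python
# Each Korean term is represented by the English translation words observed
# for it in a parallel (Korean/English patent-title) corpus; terms whose
# translation-feature sets overlap strongly are alternative-word candidates.
# These feature sets are illustrative, not extracted from real patents.
features = {
    "휴대폰": {"mobile", "phone", "cellular", "handset"},
    "핸드폰": {"mobile", "phone", "handset", "cell"},
    "자동차": {"car", "vehicle", "automobile"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def alternatives(term, threshold=0.5):
    return [w for w in features
            if w != term and jaccard(features[term], features[w]) >= threshold]
```

The paper's association-word filtering would then prune candidates that merely co-occur with the query word rather than substitute for it.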

Natural language processing techniques for bioinformatics

  • Tsujii, Jun-ichi
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2003.10a
    • /
    • pp.3-3
    • /
    • 2003
  • With the biomedical literature expanding so rapidly, there is an urgent need to discover and organize knowledge extracted from texts. Although factual databases contain crucial information, the overwhelming amount of new knowledge remains in textual form (e.g., MEDLINE). In addition, new terms are constantly coined, as are the relationships linking new genes, drugs, proteins, etc. As the biomedical literature grows, more systems are applying a variety of methods to automate the process of knowledge acquisition and management. In my talk, I focus on the GENIA project of our group at the University of Tokyo, whose objective is to construct an information extraction system for protein-protein interactions from MEDLINE abstracts. The talk covers (1) techniques we use for named entity recognition: (1-a) SOHMM (self-organized HMM), (1-b) maximum entropy models, (1-c) a lexicon-based recognizer; (2) treatment of term variants and acronym finding; (3) event extraction using a full parser; and (4) linguistic resources for text mining (the GENIA corpus): (4-a) semantic tags, (4-b) structural annotations, (4-c) co-reference tags, (4-d) the GENIA ontology. I will also talk about a possible extension of our work that links the findings of molecular biology with clinical findings, and argue that text-based or concept-based biology is a viable alternative to systems biology, which tends to emphasize the role of simulation models in bioinformatics.
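Of the three recognizers listed in (1), the lexicon-based recognizer is the simplest to sketch: match token spans against a term dictionary, preferring the longest match. The mini-lexicon below is an illustrative stand-in, not the GENIA dictionary, and the HMM and maximum-entropy models are not shown:

```python
# Longest-match lexicon-based named entity recognition, a minimal sketch of
# recognizer (1-c) above. The lexicon is illustrative, not GENIA's.
LEXICON = {"interleukin-2", "interleukin-2 receptor", "nf-kappa b"}
MAX_LEN = max(len(entry.split()) for entry in LEXICON)

def recognize(sentence):
    tokens = sentence.lower().split()
    entities, i = [], 0
    while i < len(tokens):
        # Try the longest candidate span first so "interleukin-2 receptor"
        # wins over the shorter "interleukin-2".
        for span in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            cand = " ".join(tokens[i:i + span])
            if cand in LEXICON:
                entities.append(cand)
                i += span
                break
        else:
            i += 1
    return entities
```

Statistical recognizers (1-a, 1-b) exist precisely because lexicon lookup misses unseen terms and variants, which is also why item (2) treats term variants and acronyms separately.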


Music Structure Analysis and Application (악곡구조 분석과 활용)

  • Seo, Jung-Bum;Bae, Jae-Hak
    • The KIPS Transactions:PartB
    • /
    • v.14B no.1 s.111
    • /
    • pp.33-42
    • /
    • 2007
  • This paper presents a new methodology for music structure analysis that facilitates rhetoric-based music summarization. Similarity analysis of musical constituents suggests the structure of a musical piece, from which its musical form can be recognized. Musical forms have rhetorical characteristics of their own, which we have utilized for locating musical motifs. Motif extraction is to music summarization what topic-sentence extraction is to text summarization. We have evaluated the effectiveness of this methodology through a popular-music case study.
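The similarity-analysis idea can be illustrated with a deliberately crude heuristic: take the most frequently repeated fixed-length note segment as the motif candidate. This frequency count is far simpler than the paper's rhetoric-based structure analysis, and the melody is invented for illustration:

```python
from collections import Counter

# A melody as a note sequence; the most frequently repeated fixed-length
# segment is taken as the motif candidate. The melody is illustrative and
# this heuristic is a simplification of the paper's method.
melody = ["C", "E", "G", "C", "E", "G", "A", "F", "C", "E", "G"]

def motif(notes, length=3):
    segments = Counter(tuple(notes[i:i + length])
                       for i in range(len(notes) - length + 1))
    return max(segments, key=segments.get)
```

A fuller analysis would compare whole sections for similarity to recover the musical form (e.g. A-A-B-A) before selecting the motif, mirroring how a text summarizer locates topic sentences within a discourse structure.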

The Block Segmentation and Extraction of Layout Information In Document (문서의 영역분리와 레이아웃 정보의 추출)

  • 조용주;남궁재찬
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.10
    • /
    • pp.1131-1146
    • /
    • 1992
  • In this paper, we suggest a new algorithm for segmenting published documents to obtain their constituent and layout information. First, we perform blocking and labeling on a 300 dpi scanned document. Second, we classify the blocked document into individual sub-regions. Third, we group the sub-regions into graphic areas and text areas. Finally, we extract information for layout recognition from these data. In an experiment on academic society papers, we obtained region classification and layout-information extraction rates above 98%.
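The first step, blocking and labeling the binarized page, can be sketched with a standard flood-fill connected-component labeler. The tiny 0/1 grid stands in for a scanned bitmap and is purely illustrative:

```python
from collections import deque

# Connected-component labeling of a binarized page: a minimal stand-in for
# the blocking/labeling step above. The 0/1 grid is an illustrative bitmap.
grid = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
]

def label_components(bitmap):
    rows, cols = len(bitmap), len(bitmap[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if bitmap[r][c] == 1 and labels[r][c] == 0:
                current += 1                     # start a new block
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                     # flood-fill its pixels
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and bitmap[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

count, labels = label_components(grid)
```

Each labeled block's bounding box and pixel statistics would then feed the later classification of sub-regions into graphic and text areas.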


Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • In line with the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining research focused on applications in the second step. However, with the recognition that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be directly fed into a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form a computer can process. Mapping arbitrary objects into a specific dimensional space while maintaining their algebraic properties is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding grows rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into a single vector, is the most widely used. However, traditional document embedding methods such as doc2Vec generate the vector for each document from the entire text of that document, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional schemes usually map each document to a single vector, making it difficult to represent a complex document with multiple subjects accurately. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after keywords are extracted by other analysis methods, but since keyword extraction is not the core of the proposal, we describe the process for documents whose keywords are predefined. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. Specifically, all text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to avoid the influence of miscellaneous words, the vectors corresponding to each document's keywords are extracted to form a keyword vector set per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because subjects interfere with one another within each vector, whereas the proposed multi-vector method vectorizes complex documents more accurately by eliminating this interference.
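Steps (3)-(5) can be sketched in miniature: cluster a document's keyword vectors, then emit one document vector per cluster. The 2-D vectors below are hypothetical stand-ins for real word embeddings, and a naive 2-means loop replaces whatever clustering algorithm a full implementation would use:

```python
# Miniature sketch of steps (3)-(5): cluster a document's keyword vectors
# and generate one document vector per cluster. The 2-D vectors are
# hypothetical stand-ins for real word embeddings.
keyword_vectors = [
    (0.9, 0.1), (1.0, 0.2), (0.8, 0.0),   # keywords about subject A
    (0.1, 0.9), (0.0, 1.0),               # keywords about subject B
]

def mean(points):
    return tuple(sum(xs) / len(points) for xs in zip(*points))

def two_means(points, iters=10):
    c1, c2 = points[0], points[-1]        # naive initialization
    for _ in range(iters):
        g1 = [p for p in points
              if sum((a - b) ** 2 for a, b in zip(p, c1))
              <= sum((a - b) ** 2 for a, b in zip(p, c2))]
        g2 = [p for p in points if p not in g1]
        c1, c2 = mean(g1), mean(g2)       # centroid of each subject
    return [c1, c2]                       # one vector per detected subject

doc_vectors = two_means(keyword_vectors)
```

Each centroid summarizes one subject, so a query about subject B matches the second vector without interference from subject A's keywords, which is exactly the failure mode of a single doc2Vec-style vector described above.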

Automatic 5 Layer Model construction of Business Process Framework(BPF) with M2T Transformation (모델변환을 이용한 비즈니스 프로세스 프레임워크 5레이어 모델 자동 구축 방안)

  • Seo, Chae-Yun;Kim, R. Youngchul
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.1
    • /
    • pp.63-70
    • /
    • 2013
  • In previous research, we suggested a business process structured query language (BPSQL) for information extraction and retrieval within the business process framework (BPF), using an existing query language with tablization of each layer of the framework; however, the specification of each layer of the BPF still had to be built manually. To solve this problem, we suggest automatically building the schema-based business process model with a model-to-text (M2T) transformation technique. The procedure consists of (1) defining meta-models of the entire structure and of the database schema, and (2) defining model transformation rules between them. With this procedure, the meta-model of a designed integrated information system can be automatically transformed into schema-based table specifications for every layer using M2T transformation, making it possible to develop the integrated information system efficiently.
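Model-to-text transformation, in its simplest form, walks a model instance and emits text from a template. The sketch below generates SQL DDL for a single table from a small model dictionary; the model is an illustrative stand-in for a BPF layer specification, not the paper's metamodel:

```python
# Minimal model-to-text (M2T) transformation: a model instance is walked and
# text (here, SQL DDL for one layer's table) is emitted from a template.
# The model below is an illustrative stand-in for a BPF layer specification.
model = {
    "table": "process_layer1",
    "columns": [("id", "INTEGER"), ("name", "VARCHAR(50)")],
}

def model_to_text(m):
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in m["columns"])
    return f"CREATE TABLE {m['table']} (\n  {cols}\n);"

ddl = model_to_text(model)
```

Applying one such template per layer's meta-model is what lets the schema-based table specifications be generated for all five layers instead of written by hand.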

Robust Watermarking for Digital Images in Geometric Distortions Using FP-ICA of Secant Method (할선법의 FP-ICA를 이용한 기하학적 변형에 강건한 디지털영상 워터마킹)

  • Cho Yong-Hyun
    • The KIPS Transactions:PartB
    • /
    • v.11B no.7 s.96
    • /
    • pp.813-820
    • /
    • 2004
  • This paper proposes a digital image watermarking method that is robust to geometric distortions, using an independent component analysis (ICA) fixed-point (FP) algorithm based on the secant method. The secant-method FP algorithm is applied for better separation time and separation rate, and ICA removes the need for prior knowledge of the original image, key, and watermark, such as their locations and sizes. The proposed method embeds the watermark into the spatial domain of the original image. The technique was applied to the Lena image, a key image, and two watermarks (text and Gaussian noise). Simulation results show that the proposed method extracts the original images faster and at a better rate than the FP algorithm based on Newton's method, and that the watermarking is robust to geometric distortions such as resizing, rotation, and cropping. In particular, the Gaussian-noise watermark shows better extraction performance than the text watermark, since Gaussian noise has a lower correlation coefficient with the original and key images than the text does. The ICA-based watermarking requires no prior knowledge of the original images.
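The FP-ICA extraction itself is too involved to sketch compactly, but additive spatial-domain embedding, the embedding step named above, can be illustrated with a much simpler correlation-based detector. This is a stand-in for demonstration only, not the paper's ICA method, and the data is synthetic:

```python
import random

# Additive spatial-domain embedding with correlation detection: a simpler
# stand-in for the paper's FP-ICA extraction, shown only to illustrate
# embedding a watermark in the spatial domain. All data is synthetic.
random.seed(7)
N = 256
host = [random.random() for _ in range(N)]            # flattened "image"
watermark = [random.choice([-1.0, 1.0]) for _ in range(N)]
alpha = 0.2                                           # embedding strength
marked = [h + alpha * w for h, w in zip(host, watermark)]

def correlation(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)
```

The watermarked signal correlates strongly with the watermark while the unmarked host does not, which is the same statistical property the abstract invokes when comparing the Gaussian-noise and text watermarks.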

Implementation of JBIG2 CODEC with Effective Document Segmentation (문서의 효율적 영역 분할과 JBIG2 CODEC의 구현)

  • 백옥규;김현민;고형화
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.6A
    • /
    • pp.575-583
    • /
    • 2002
  • JBIG2 is an international standard for the compression of bi-level images and documents. JBIG2 supports three encoding modes for high compression according to the region features of a document. One of these is generic region coding for bitmaps, whose basic coder is either MMR or arithmetic coding; pattern matching coding is used for text regions, and halftone pattern coding for halftone regions. In this paper, a document is segmented into line-art, halftone, and text regions for JBIG2 encoding, and a JBIG2 CODEC is implemented. For efficient region segmentation, a segmentation method using wavelet coefficients is combined with an existing boundary extraction technique. For the facsimile test image (IEEE-167a), this yields an improvement of about 2% in compression ratio and better subjective quality. We also propose arbitrary-shape halftone region coding, which improves subjective quality in the text neighboring a halftone region.
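The region classification that routes blocks to the right JBIG2 coding mode can be illustrated with a crude pixel statistic: halftone dither patterns flip between black and white far more often along a row than text strokes do. This transition-rate heuristic is a simple stand-in for the wavelet-coefficient segmentation described above, and the two blocks are synthetic:

```python
# Classify binarized blocks as "text" or "halftone" by the rate of 0/1
# transitions along rows: dither patterns flip far more often than text
# strokes. A simple stand-in for wavelet-based segmentation; blocks are
# synthetic examples.
text_block = [
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
]
halftone_block = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],
]

def transition_rate(block):
    flips = sum(row[i] != row[i + 1]
                for row in block for i in range(len(row) - 1))
    return flips / sum(len(row) - 1 for row in block)

def classify(block, threshold=0.5):
    return "halftone" if transition_rate(block) > threshold else "text"
```

In a real codec each classified region is then handed to its mode: pattern matching for text, halftone pattern coding for halftone, and generic (MMR or arithmetic) coding for the rest.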