• Title/Summary/Keyword: Text processing


Improvement of Endoscopic Image using De-Interlacing Technique (De-Interlace 기법을 이용한 내시경 영상의 화질 개선)

  • 신동익;조민수;허수진
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.5
    • /
    • pp.469-476
    • /
    • 1998
  • When medical images such as ultrasonography and endoscopy are acquired and displayed on the VGA monitor of a PC system, tear-drop image degradation appears through scan conversion. In this study, we compare several methods that can correct this degradation and implement a hardware system that resolves the problem in real time on a PC. With a dedicated de-interlacing device and a PCI bridge, the system supports high-quality image display together with real-time acquisition and processing, and image quality is improved remarkably. Because it is implemented as a PC-based system, acquiring and saving images, attaching text comments to them, and PACS networking can all be implemented easily.

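The entry above resolves tear-drop interlacing artifacts in dedicated hardware (a de-interlacing device behind a PCI bridge). As a point of reference only, the sketch below shows the simplest software analogue, linear (bob-style) de-interlacing that rebuilds the missing scan lines by averaging their neighbours; the random frame data and the NumPy implementation are illustrative assumptions, not the paper's design.

```python
import numpy as np

def deinterlace_bob(frame: np.ndarray) -> np.ndarray:
    """Rebuild a progressive frame from the even field by linear interpolation.

    frame: 2-D grayscale image (H x W) whose odd lines belong to the other field.
    """
    out = frame.astype(np.float32).copy()
    h = frame.shape[0]
    # Replace each odd (missing) line with the average of its even neighbours.
    for y in range(1, h, 2):
        upper = out[y - 1]
        lower = out[y + 1] if y + 1 < h else out[y - 1]
        out[y] = (upper + lower) / 2.0
    return out.astype(frame.dtype)

# Usage on a simulated interlaced frame (placeholder data).
interlaced = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
progressive = deinterlace_bob(interlaced)
```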

An Implementation of Mobile Platform using Location Data Index Techniques (위치 데이터 인덱스 기법을 적용한 모바일 플랫폼구현)

  • Park, Chang-Hee;Kang, Jin-Suk;Sung, Mee-Young;Park, Jong-Song;Kim, Jang-Hyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.11
    • /
    • pp.1960-1972
    • /
    • 2006
  • In this thesis, GPS and electronic mapping were used to realize such a system by recognizing license plate numbers and identifying the location of moving objects, synchronized with simulated movement on the electronic map. Throughout the study, a camera attached to a PDA, one of the mobile devices, automatically recognized and confirmed license plate numbers acquired from the front and back of each car. Using this mobile technique in a wireless network, searches for specific plate numbers are performed and information about the location of the car is transmitted to a remote server. The use of such a GPS-based system allows for the measurement of topography and the effective acquisition of a car's location. The information is then transmitted to a central control center and stored as text to be reproduced later in the form of diagrams. Obtaining positional information through GPS and using image processing on a PDA make it possible to estimate the correct location of a car and to transmit the specific information of the car to the control center simultaneously, so that the center receives information such as the type of the car and possible defects the car might have, and can offer help with those functions. Such information can establish a mobile system that can recognize and accurately trace the location of cars.
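
The abstract above does not spell out which location data index technique is used, so the following is only a generic illustration of the idea: a fixed-size grid index that buckets (plate, latitude, longitude) records by cell and answers nearby-vehicle queries. The class and field names and the sample plate number are hypothetical.

```python
from collections import defaultdict

class GridLocationIndex:
    """Minimal grid-based spatial index: bucket car positions by fixed-size cells."""

    def __init__(self, cell_deg: float = 0.01):
        self.cell_deg = cell_deg          # cell size in degrees (roughly 1 km)
        self.cells = defaultdict(list)    # (cell_x, cell_y) -> [(plate, lat, lon)]

    def _cell(self, lat: float, lon: float):
        return (int(lat / self.cell_deg), int(lon / self.cell_deg))

    def insert(self, plate: str, lat: float, lon: float) -> None:
        self.cells[self._cell(lat, lon)].append((plate, lat, lon))

    def query_nearby(self, lat: float, lon: float):
        """Return all records in the query cell and its 8 neighbours."""
        cx, cy = self._cell(lat, lon)
        hits = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                hits.extend(self.cells.get((cx + dx, cy + dy), []))
        return hits

# Hypothetical plate and coordinates near Seoul City Hall, for illustration only.
index = GridLocationIndex()
index.insert("12GA3456", 37.5665, 126.9780)
print(index.query_nearby(37.5670, 126.9790))
```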

Trend of Research and Industry-Related Analysis in Data Quality Using Time Series Network Analysis (시계열 네트워크분석을 통한 데이터품질 연구경향 및 산업연관 분석)

  • Jang, Kyoung-Ae;Lee, Kwang-Suk;Kim, Woo-Je
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.6
    • /
    • pp.295-306
    • /
    • 2016
  • The purpose of this paper is both to analyze research trends and to predict industrial flows using metadata from previous studies on data quality. There have been many attempts to analyze research trends in various fields, but analyses of previous work on data quality have produced poor results because of the field's vast scope and volume of data. Therefore, in this paper, we applied text mining and social network analysis in a time series network analysis to the data quality literature collected from the Web of Science index database, covering papers published in international data quality journals over 10 years. The analysis shows decreases in Mathematical & Computational Biology, Chemistry, Health Care Sciences & Services, Biochemistry & Molecular Biology, and Medical Information Science, and increases in Environmental Sciences, Water Resources, Geology, and Instruments & Instrumentation. In addition, the social network analysis shows that the subjects with the highest centrality are analysis, algorithm, and network, and that image, model, sensor, and optimization are growing subjects in the data quality field. Furthermore, the industrial connection analysis on data quality shows a high correlation among technique, industry, health, infrastructure, and customer service, and predicts that the Environmental Sciences, Biotechnology, and Health industries will continue to develop. This paper will be useful not only for people in the data quality industry but also for researchers who analyze research patterns and investigate industry connections in data quality.
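
As a rough illustration of the keyword-network side of the method above, the sketch below builds a co-occurrence graph from toy keyword sets and ranks nodes by degree centrality, mirroring the paper's finding that analysis, algorithm, and network are the most central subjects. The corpus and the use of networkx are assumptions for illustration, not the study's pipeline.

```python
import itertools
import networkx as nx

# Toy keyword sets standing in for the Web of Science records used in the paper.
keyword_sets = [
    {"algorithm", "network", "analysis"},
    {"image", "sensor", "optimization"},
    {"algorithm", "model", "analysis"},
]

G = nx.Graph()
for keywords in keyword_sets:
    for a, b in itertools.combinations(sorted(keywords), 2):
        # Increment edge weight for each record in which the two keywords co-occur.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality highlights the most connected subjects in the network.
centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```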

A Comparative Analysis of the Changes in Perception of the Fourth Industrial Revolution: Focusing on Analyzing Social Media Data (4차 산업혁명에 대한 인식 변화 비교 분석: 소셜 미디어 데이터 분석을 중심으로)

  • You, Jae Eun;Choi, Jong Woo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.11
    • /
    • pp.367-376
    • /
    • 2020
  • The fourth industrial revolution will greatly contribute to the entry of objects into an intelligent society through technologies such as big data and artificial intelligence. Through these technologies we have come to understand human behavior and awareness better, and artificial intelligence has established itself as a key tool in various fields such as medicine and science. However, the fourth industrial revolution has a negative side alongside its positive outlook. In this study, an analysis was conducted using text mining techniques on unstructured big data collected from social media. We examined keywords related to the fourth industrial revolution by year (2016, 2017, and 2018) and interpreted the meaning of each keyword. In addition, we traced how the keywords changed from year to year and used R to conduct a keyword analysis, identifying the flow of public perception of the fourth industrial revolution through the associated keyword flows. Finally, people's perceptions were characterized by examining the positive and negative sentiment related to the fourth industrial revolution by year. The analysis showed that negative opinions declined year after year, with the outlook becoming more positive.
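
The study above performed its keyword analysis in R; purely as an illustration of the year-by-year counting idea, the sketch below tallies keyword frequencies and a crude positive/negative balance per year over a toy corpus. The posts and sentiment lexicons are invented placeholders, not the study's data.

```python
from collections import Counter

# Placeholder posts standing in for the collected social-media text, grouped by year.
posts = {
    2016: ["fourth industrial revolution job loss fear", "ai automation risk"],
    2017: ["fourth industrial revolution opportunity growth", "big data innovation"],
    2018: ["ai future hope", "smart factory growth opportunity"],
}
positive = {"opportunity", "growth", "innovation", "hope"}   # assumed toy lexicon
negative = {"fear", "risk", "loss"}                          # assumed toy lexicon

for year, texts in posts.items():
    tokens = Counter(word for text in texts for word in text.split())
    pos = sum(c for w, c in tokens.items() if w in positive)
    neg = sum(c for w, c in tokens.items() if w in negative)
    print(year, "top keywords:", tokens.most_common(3), "| positive:", pos, "negative:", neg)
```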

An Analysis System for Whole Genomic Sequence Using String B-Tree (스트링 B-트리를 이용한 게놈 서열 분석 시스템)

  • Choe, Jeong-Hyeon;Jo, Hwan-Gyu
    • The KIPS Transactions:PartA
    • /
    • v.8A no.4
    • /
    • pp.509-516
    • /
    • 2001
  • As a result of many genome projects, the genomic sequences of many organisms have been revealed. Various methods such as global alignment and local alignment are used to analyze these sequences, and k-mer analysis is one of them. k-mer analysis explores the frequencies of all k-mers, or the symmetry of those frequencies, where a k-mer is a subsequence of length k. However, existing in-memory algorithms are not applicable to k-mer analysis because a whole genomic sequence is usually a very large text, so efficient external-memory data structures and algorithms are needed. The string B-tree is a good data structure that supports external memory and suits pattern matching. In this paper, we improve the string B-tree so that it can be applied efficiently to k-mer analysis, and we show the results of k-mer analysis for C. elegans and 30 other genomic sequences. We present a visualization system that enables users to investigate the distribution and symmetry of the frequencies of all k-mers using CGR (Chaos Game Representation). We also describe a method for finding the signature, the part of a sequence that is most similar to the whole genomic sequence.

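The paper's contribution is performing k-mer analysis out of core with an improved string B-tree; as a baseline for what is being computed, the sketch below counts overlapping k-mer frequencies in memory for a toy fragment. The sequence is illustrative only, and whole genomes would not fit this in-memory approach, which is precisely the paper's motivation.

```python
from collections import Counter

def kmer_frequencies(sequence: str, k: int) -> Counter:
    """Count every overlapping k-mer in a genomic sequence."""
    sequence = sequence.upper()
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

# Usage on a toy fragment; the paper instead streams whole genomes through a
# string B-tree held in external memory.
freqs = kmer_frequencies("ATGCGATATGCGAT", k=4)
print(freqs.most_common(3))
```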

Recognition of Medicinal Efficacy of Pepper as an Introduced Species in Traditional Medicine (전통사회에서 외래종 작물인 고추의 효능 인식 - 한국 전통의서를 중심으로 -)

  • Oh, Jun-Ho;Kwon, Oh-Min;Park, Sang-Young;Ahn, Sang-Woo
    • Journal of the Korean Society of Food Culture
    • /
    • v.27 no.1
    • /
    • pp.12-18
    • /
    • 2012
  • The aim of this study is to examine how pepper was used in traditional medicine, that is, how the medicinal nature and efficacy of pepper, an introduced crop, came to be perceived and organized in traditional society through its use for treating diseases. We investigated cases of pepper being used for medical treatment in Korean books on traditional medicine. The old records about pepper appear mainly in empirical medical books from the late Chosun dynasty; even so, references to pepper in medical texts tend to decrease over time, which can be attributed to the fact that people began to use pepper in daily food life rather than for medicinal purposes. Pepper was used mostly for digestive trouble such as vomiting, diarrhea, and stomachaches, and it was also applied to mental disorders and pain attributed to the sound of body fluids remaining in the stomach. In addition, there were many cases where pepper was applied externally for surgical disorders. These treated symptoms are linked to, or complementary with, modern research results. Pepper was generally taken boiled, in the traditional herbal decoction manner, and in the case of surgical diseases it was applied externally. The cases of using old pepper, pepper with or without seeds, and pepper mixed with sesame oil belong to a kind of herbal-medicine processing, which usually aimed at changing the medicinal nature of pepper. In relation to the eating habits of that time, pepper was also used as seasoning and to make red pepper paste with or without vinegar. Two words are used for pepper in the medical texts, 苦椒 (gocho) and 烈棗 (yeoljo); both are rendered in Korean as gochu, so we can identify this word as a nickname for pepper.

A Study of Location Based Services Using Location Data Index Techniques (위치데이터인덱스 기법을 적용한 위치기반서버스에 관한 연구)

  • Park Chang-Hee;Kim Jang-Hyung;Kang Jin-Suk
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.5
    • /
    • pp.595-605
    • /
    • 2006
  • In this thesis, GPS and electronic mapping were used to realize such a system by recognizing license plate numbers and identifying the location of moving objects, synchronized with simulated movement on the electronic map. Throughout the study, a camera attached to a PDA, one of the mobile devices, automatically recognized and confirmed license plate numbers acquired from the front and back of each car. Using this mobile technique in a wireless network, searches for specific plate numbers are performed and information about the location of the car is transmitted to a remote server. The use of such a GPS-based system allows for the measurement of topography and the effective acquisition of a car's location. The information is then transmitted to a central control center and stored as text to be reproduced later in the form of diagrams. Obtaining positional information through GPS and using image processing on a PDA make it possible to estimate the correct location of a car and to transmit the specific information of the car to the control center simultaneously, so that the center receives information such as the type of the car and possible defects the car might have, and can offer help with those functions. Such information can establish a mobile system that can recognize and accurately trace the location of cars.


Classifying Sub-Categories of Apartment Defect Repair Tasks: A Machine Learning Approach (아파트 하자 보수 시설공사 세부공종 머신러닝 분류 시스템에 관한 연구)

  • Kim, Eunhye;Ji, HongGeun;Kim, Jina;Park, Eunil;Ohm, Jay Y.
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.9
    • /
    • pp.359-366
    • /
    • 2021
  • A number of construction companies in Korea invest considerable human and financial resources in constructing systems for managing apartment defect data and categorizing repair tasks. This study therefore proposes machine learning models that automatically classify defect-complaint text data into one of the sub-categories of 'finishing work' (i.e., one of the defect repair tasks). In the proposed models, we employed two word representation methods (bag-of-words and term frequency-inverse document frequency (TF-IDF)) and two machine learning classifiers (support vector machine and random forest). In particular, we conducted both binary and multi-class classification tasks over nine sub-categories of finishing work: home appliance installation work, paperwork, painting work, plastering work, interior masonry work, plaster finishing work, indoor furniture installation work, kitchen facility installation work, and tiling work. The classifiers using the TF-IDF representation with the random forest classifier achieved more than 90% accuracy, precision, recall, and F1 score. These results suggest the feasibility of constructing automated defect classification systems based on the proposed machine learning models.
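
A minimal sketch of the TF-IDF plus random forest combination reported above, using scikit-learn; the English complaint texts and the two labels are stand-ins for the paper's Korean defect data and its nine sub-categories.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy complaints standing in for the Korean defect texts; the labels are two of the
# paper's nine finishing-work sub-categories.
texts = [
    "wallpaper is peeling off in the living room",
    "paint on the balcony wall is cracked",
    "wallpaper seam is visible near the ceiling",
    "exterior paint colour is uneven",
]
labels = ["paperwork", "painting work", "paperwork", "painting work"]

# TF-IDF representation feeding a random forest classifier, as in the paper's best setup.
model = make_pipeline(TfidfVectorizer(),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(texts, labels)
print(model.predict(["paint is flaking off the bathroom door"]))
```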

Semi-automatic Construction of Learning Set and Integration of Automatic Classification for Academic Literature in Technical Sciences (기술과학 분야 학술문헌에 대한 학습집합 반자동 구축 및 자동 분류 통합 연구)

  • Kim, Seon-Wu;Ko, Gun-Woo;Choi, Won-Jun;Jeong, Hee-Seok;Yoon, Hwa-Mook;Choi, Sung-Pil
    • Journal of the Korean Society for information Management
    • /
    • v.35 no.4
    • /
    • pp.141-164
    • /
    • 2018
  • Recently, as the amount of academic literature has increased rapidly and increasingly complex research has been conducted, researchers have difficulty analyzing trends in previous research. To solve this problem, information needs to be classified at the level of individual academic papers, but no Korean academic database provides such information. In this paper, we propose an automatic classification system that can classify domestic academic literature into multiple classes. To this end, academic documents in the technical science fields written in Korean were collected and mapped to the 600 class of the DDC using K-means clustering, in order to construct a learning set capable of supporting multiple classification. The resulting training set contains 63,915 documents in the Korean technical science fields, excluding records with missing metadata. Using this training set, we implemented and trained a deep learning-based automatic classification engine for academic documents. Experiments on a hand-built test set showed 78.32% accuracy and 72.45% F1 performance for multiple classification.
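
As a rough sketch of the semi-automatic learning-set construction described above, the code below clusters toy documents with K-means over TF-IDF vectors; mapping each cluster to a DDC 600-series class is then a manual inspection step. The documents, vectorization choice, and cluster count are assumptions for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy abstracts standing in for the Korean technical-science documents.
docs = [
    "deep learning model for image classification",
    "convolutional neural network training",
    "bridge construction concrete strength",
    "reinforced concrete structural analysis",
]

vectors = TfidfVectorizer().fit_transform(docs)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Each cluster would then be inspected and mapped to a DDC 600-series class by hand,
# yielding a semi-automatically labelled training set.
for doc, cluster in zip(docs, kmeans.labels_):
    print(cluster, doc)
```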

A Study on the Application of Blockchain Technology to the Record Management Model (블록체인기술을 적용한 기록관리 모델 구축 방법 연구)

  • Hong, Deok-Yong
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.19 no.3
    • /
    • pp.223-245
    • /
    • 2019
  • As the foundation of the Fourth Industrial Revolution, blockchain is becoming an essential core infrastructure and technology that creates new growth engines in various industries, and it is rapidly spreading into the environments of businesses and institutions worldwide. In this study, the characteristics and trends of blockchain technology were surveyed, the need for its application to the records management work of public institutions was argued, and procedures and methods for introducing it into the records management field were studied from the literature. Finally, blockchain technology was applied to records management to propose an Archivechain model, and its expected effects are described. When the transactions that record the records-management process of electronic documents are loaded into the blockchain, all step information can be checked at once for records-management standard tasks that were previously handled in a fragmented, unlinked way. If a blockchain function is built into the electronic records management system, the person who produces a document enters the metadata and information when acquiring and registering it, and all contents are stored and classified. This would simplify the process of reporting production status and provide real-time information through the original text information disclosure service. Archivechain is a model that applies a cloud infrastructure as a backend as a service (BaaS) using a Hyperledger platform, on the assumption that the electronic document production system and the records management system are integrated. Making records management a smart, electronic system in this way brings scattered information together by placing the whole life cycle of public records management on a blockchain.
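
The paper proposes Archivechain on a Hyperledger-based BaaS; the toy below is not that implementation, only a minimal hash-chained ledger illustrating why loading records-management transactions into a chain makes every processing step checkable and tamper-evident. All class, record, and field names are hypothetical.

```python
import hashlib
import json
import time

class ArchiveChain:
    """Minimal hash-chained ledger: each records-management transaction stores the
    hash of the previous block, so any later tampering breaks the chain."""

    def __init__(self):
        self.blocks = [self._block({"event": "genesis"}, previous_hash="0" * 64)]

    def _block(self, transaction: dict, previous_hash: str) -> dict:
        block = {"timestamp": time.time(),
                 "transaction": transaction,
                 "previous_hash": previous_hash}
        # Hash the block contents before the hash field itself is added.
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    def add(self, transaction: dict) -> None:
        self.blocks.append(self._block(transaction, self.blocks[-1]["hash"]))

# Hypothetical records-management steps for one electronic document.
chain = ArchiveChain()
chain.add({"record_id": "DOC-001", "step": "registration", "producer": "A"})
chain.add({"record_id": "DOC-001", "step": "classification"})
print(chain.blocks[-1]["hash"])
```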