• Title/Summary/Keyword: 정보처리지식 (information processing knowledge)

Search Results: 1,709

An Investigation on Digital Humanities Research Trend by Analyzing the Papers of Digital Humanities Conferences (디지털 인문학 연구 동향 분석 - Digital Humanities 학술대회 논문을 중심으로 -)

  • Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science / v.55 no.1 / pp.393-413 / 2021
  • Digital humanities, which creates new and innovative knowledge by combining digital information technology with humanities research problems, can be seen as a representative multidisciplinary field of study. To investigate the intellectual structure of the digital humanities field, a co-author and keyword co-word network analysis was performed on a total of 441 papers from the last two years (2019, 2020) of the Digital Humanities Conference. The author and keyword analyses show active participation by authors from Europe, North America, and East Asia (Japan and China). The co-author network reveals 11 disconnected sub-networks, which can be seen as the result of closed co-authoring activities. Keyword analysis identifies 16 sub-subject areas: machine learning, pedagogy, metadata, topic modeling, stylometry, cultural heritage, network, digital archive, natural language processing, digital library, twitter, drama, big data, neural network, virtual reality, and ethics. These results imply that a diverse range of digital information technologies plays a major role in the digital humanities. In addition, high-frequency keywords can be classified into humanities-based keywords, digital information technology-based keywords, and convergence keywords. The dynamics of the growth and development of digital humanities can be represented by these combinations of keywords.
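
As a rough illustration of the co-word network analysis described in this abstract, the sketch below builds a keyword co-occurrence graph and lists its connected components (the counterpart of the disconnected sub-networks mentioned above). The networkx library, the toy keyword sets, and the simple pairwise weighting are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal co-word network sketch (hypothetical keyword sets, not the paper's data).
from itertools import combinations
import networkx as nx

papers = [  # hypothetical author-assigned keyword sets, one list per paper
    ["machine learning", "stylometry", "drama"],
    ["topic modeling", "digital archive", "metadata"],
    ["machine learning", "neural network", "topic modeling"],
]

G = nx.Graph()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        # increment the co-occurrence weight for each keyword pair
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Disconnected sub-networks correspond to connected components of the graph.
for component in nx.connected_components(G):
    print(sorted(component))
```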

Study on the Openness of International Academic Papers by Researchers in Library and Information Science Using POI (Practical Openness Index) (POI(Practical Openness Index)를 활용한 문헌정보학 연구자 국제학술논문의 개방성 연구)

  • Cho, Jane
    • Journal of Korean Library and Information Science Society / v.52 no.2 / pp.25-44 / 2021
  • In a situation where OA papers are increasing, POI, which indexes how open an individual researcher's research activities are, is drawing attention. This study investigated the existence of OA papers published in international academic journals by domestic LIS researchers and the OA methods used, and derived the researchers' POI on this basis. In addition, by examining the relationship between the POI index and the researcher's publication volume, research sub-field, and foreign co-authors, it was analyzed whether these factors are related to the researcher's POI. First, there were 492 papers by 82 researchers whose OA status and method could be identified through Unpaywall. Second, only 20.7% of papers published in international journals were open access, and most used the gold or green method. Third, there were many text-mining papers in medical journals, and the papers opened via the green method were deposited in the institutional repositories of foreign co-authors or in transnational subject repositories such as PMC. Fourth, the POI index was relatively higher for researchers in informetrics and machine learning than for other fields. In addition, the presence or absence of overseas co-authors was found to be related to OA.
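
As a hedged sketch of how such an openness index might be derived, the code below looks up the OA status of each paper through the public Unpaywall API and computes the share of open papers. The DOIs, the contact email, and the simple ratio used as "POI" here are illustrative assumptions; the paper's exact POI formula is not given in this abstract.

```python
# Sketch: per-researcher openness ratio from Unpaywall lookups (illustrative only).
import requests

EMAIL = "you@example.org"                                 # Unpaywall requires a contact email
dois = ["10.1000/example.doi1", "10.1000/example.doi2"]   # hypothetical DOIs for one researcher

def oa_status(doi: str) -> str:
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                        params={"email": EMAIL}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("oa_status", "closed")          # e.g. gold, green, hybrid, closed

statuses = [oa_status(d) for d in dois]
open_count = sum(s != "closed" for s in statuses)
poi = open_count / len(statuses)                           # share of the researcher's papers that are OA
print(f"POI = {poi:.2f}, methods = {statuses}")
```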

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method that uses only a raw corpus, without additional human effort, to disambiguate target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent complex semantic structures in given contexts such as text passages. We construct linguistic semantic knowledge using the two techniques and apply that knowledge to target word selection in English-Korean machine translation. For target word selection, we utilize grammatical relationships stored in a dictionary. We use the k-nearest neighbor learning algorithm to resolve the data sparseness problem in target word selection and estimate the distance between instances based on these models. In experiments, we use TREC data from AP news to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, we show the relationship between accuracy and two important factors, the dimensionality of the latent space and the k value of k-NN learning, using correlation analysis.
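
A minimal sketch of the LSA half of this approach is shown below: contexts are projected into a latent semantic space with truncated SVD, and a k-nearest-neighbor lookup over that space picks the translation of an ambiguous word. The toy corpus, the sense labels, and the use of scikit-learn are illustrative assumptions, not the authors' setup (which used TREC/AP data and also PLSA).

```python
# Sketch: LSA space + k-NN for target word selection (toy data, not the paper's corpora).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import NearestNeighbors

corpus = [
    "the bank approved the loan application",
    "the river bank was flooded after the storm",
    "interest rates at the bank rose sharply",
]
labels = ["은행", "둑", "은행"]   # Korean target word for "bank" in each context

vec = TfidfVectorizer()
tfidf = vec.fit_transform(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0)   # latent semantic space
vectors = lsa.fit_transform(tfidf)

knn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(vectors)

query = lsa.transform(vec.transform(["the bank charged a fee on the account"]))
_, idx = knn.kneighbors(query)
print("selected target word:", labels[idx[0][0]])
```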

Design and Development of an EHR Platform Based on Medical Informatics Standards (의료정보 표준에 기반한 EHR 플랫폼의 설계 및 개발)

  • Kim, Hwa-Sun;Cho, Hune;Lee, In-Keun
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.456-462 / 2011
  • With the recent enactment of the ARRA in the United States, interest in EHR systems has increased in the medical industry. The passage of the ARRA introduced a program that provides incentives to office-based physicians and hospitals that adopt EHR systems guaranteeing interoperability with various medical standards. Thanks to this incentive program, a great number of EHR systems have been developed, and many office-based physicians and hospitals have adopted EHR systems certified by CCHIT. Keeping pace with the rapid changes in the healthcare market, some enterprises are trying to enter the United States healthcare market based on the experience acquired by developing EHR systems for hospitals in Korea. However, the developed systems must be customized because of the different medical environments of Korea and the United States. In this paper, therefore, we design and develop an integrated EHR platform that guarantees interoperability between different medical information systems based on medical standard technologies. In the developed platform, an integrated system is composed of various basic techniques such as data transmission standards and methods, medical standard terminologies and their usage, and knowledge management for medical decision-making support. Moreover, medical data can be processed electronically by adopting an HL7 interface engine along with terminologies for exchanging medical information and the standardization of medical information. Based on the platform, we develop SeniCare, an EHR system supporting the ambulatory care of office-based physicians, and we verify the usability of the platform by confirming whether SeniCare satisfies the "meaningful use" criteria issued by CMS.
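
As a small illustration of the kind of message handling an HL7 interface engine performs, the sketch below splits an HL7 v2 message into segments and fields. The sample message and field contents are invented; a production engine such as the one in the described platform would also handle acknowledgments, escaping, and terminology mapping.

```python
# Sketch: parsing an HL7 v2 message into segments and fields (invented sample message).
SAMPLE_HL7 = (
    "MSH|^~\\&|SENICARE|CLINIC|EHR|HOSPITAL|202401011200||ADT^A01|MSG0001|P|2.5\r"
    "PID|1||12345^^^CLINIC^MR||Hong^Gildong||19800101|M\r"
)

def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into {segment name: list of fields}."""
    segments = {}
    for line in filter(None, message.split("\r")):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

msg = parse_hl7(SAMPLE_HL7)
print("message type:", msg["MSH"][8])   # ADT^A01
print("patient name:", msg["PID"][5])   # Hong^Gildong
```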

Trends of Semantic Web Services and Technologies: Focusing on the Business Support (비즈니스를 지원하는 시멘틱 웹서비스와 기술의 동향)

  • Kim, Jin-Sung;Kwon, Soon-Jae
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.113-130 / 2010
  • Over the past decades, the human intervention needed to comprehend web information has continually increased. The successful expansion of web services made the web more complex and demanded more effort from users. Many researchers have tried to improve the ability of computers to support intelligent web services. One reasonable approach is enriching the information with machine-understandable semantics. They applied ontology design, intelligent reasoning, and other logical representation schemes to design an infrastructure for the semantic web. With these features, the semantic web is considered an intelligent means of understanding, transforming, storing, retrieving, and processing information gathered from heterogeneous, distributed web resources. The goals of this study are, first, to explore the problems that restrict the application of web services and the basic concepts, languages, and tools of the semantic web. We then highlight some of the research, solutions, and projects that have attempted to combine the semantic web with business support, and examine the pros and cons of these approaches. Through the study, we found that semantic web technology is trying to offer a new and higher level of web service to online users. These services overcome the limitations of traditional web technologies and services. In traditional web services, too much human intervention was needed to seek and interpret information. The semantic web service, however, is based on machine-understandable semantics and knowledge representation; therefore, most information processing activities can be executed by computers. The main elements required to develop semantic web-based business support are business logic, ontologies, ontology languages, intelligent agents, and applications. In using and managing the infrastructure of semantic web services, software developers, service consumers, and service providers are the main stakeholders. Some researchers have integrated these technologies, languages, tools, mechanisms, and applications into a semantic web services framework. Therefore, future directions of semantic web-based business support should start from this infrastructure.
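
A minimal sketch of the machine-understandable semantics discussed above, assuming the rdflib library: a tiny RDF graph of invented business facts is built and queried with SPARQL, which is the kind of processing an agent can perform without a human interpreting web pages.

```python
# Sketch: a tiny RDF graph plus a SPARQL query (invented namespace and facts).
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/biz#")
g = Graph()

g.add((EX.ACME, RDF.type, EX.Supplier))
g.add((EX.ACME, EX.sells, EX.Widget))
g.add((EX.Widget, EX.price, Literal(9.99)))

# An agent can query the graph directly instead of scraping human-readable pages.
query = """
PREFIX ex: <http://example.org/biz#>
SELECT ?supplier ?product ?price WHERE {
    ?supplier a ex:Supplier ;
              ex:sells ?product .
    ?product ex:price ?price .
}
"""
for row in g.query(query):
    print(row.supplier, row.product, row.price)
```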

Analysis of the Importance and Satisfaction of Viewing Quality Factors among Non-Audience in Professional Baseball According to Corona 19 (코로나 19에 따른 프로야구 무관중 시청품질요인의 중요도, 만족도 분석)

  • Baek, Seung-Heon;Kim, Gi-Tak
    • Journal of Korea Entertainment Industry Association / v.15 no.2 / pp.123-135 / 2021
  • The data processing of this study focused on keywords related to 'Corona 19 and professional baseball' and 'Corona 19 and professional baseball without spectators', using text mining and social network analysis with the Textom program to identify problems and to set the viewing-quality variables. For the quantitative analysis, a questionnaire on viewing quality was constructed, and of 270 survey respondents, 250 questionnaires were used for the final study. Exploratory factor analysis and reliability analysis were conducted to secure the validity and reliability of the questionnaire, and IPA (importance-satisfaction) analysis was conducted on the validated questionnaire, with the results and strategies presented. In the IPA analysis, image-related factors (image composition, image coloration, image clarity, image enlargement and composition, high-quality image) fell in the first quadrant, while the second quadrant contained factors related to the game situation (supported team's game level, supported player's game level, star player discovery, competition with rival teams), game information (match schedule information, player information check, team and player performance, game information), and interaction (consensus with the supported team). Commentator factors (baseball-related knowledge, communication ability, pronunciation and voice, use of standard language, introduction of game-related information) and interaction factors (real-time communication with the front desk, sympathy with viewers, information exchange such as chatting) also appeared.
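
A minimal sketch of the importance-satisfaction (IPA) quadrant assignment is shown below, using the common grand-mean split; the item names, scores, and quadrant labels are illustrative assumptions and may not match the numbering convention used in the paper.

```python
# Sketch: IPA quadrant assignment from importance/satisfaction means (made-up scores).
items = {  # item: (mean importance, mean satisfaction)
    "image clarity": (4.5, 4.3),
    "real-time chatting": (4.2, 3.1),
    "commentator pronunciation": (3.2, 4.0),
    "match schedule information": (3.0, 2.8),
}

imp_mean = sum(v[0] for v in items.values()) / len(items)
sat_mean = sum(v[1] for v in items.values()) / len(items)

def quadrant(importance: float, satisfaction: float) -> str:
    if importance >= imp_mean and satisfaction >= sat_mean:
        return "I: keep up the good work"
    if importance >= imp_mean:
        return "II: concentrate here"
    if satisfaction >= sat_mean:
        return "IV: possible overkill"
    return "III: low priority"

for name, (imp, sat) in items.items():
    print(f"{name}: quadrant {quadrant(imp, sat)}")
```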

The Automatic Extraction of Hypernyms and the Development of WordNet Prototype for Korean Nouns using Korean MRD (Machine Readable Dictionary) (국어사전을 이용한 한국어 명사에 대한 상위어 자동 추출 및 WordNet의 프로토타입 개발)

  • Kim, Min-Soo;Kim, Tae-Yeon;Noh, Bong-Nam
    • The Transactions of the Korea Information Processing Society / v.2 no.6 / pp.847-856 / 1995
  • When humans recognize nouns in a sentence, they associate them with the hyper concepts of those nouns. For a computer to simulate human word recognition, it should build a knowledge base (WordNet) of the hyper concepts of words. Until now, work on such a WordNet had not been performed in Korea, because it requires a great deal of human effort and time. But as the power of computers has radically improved and common MRDs have become available, it is now more feasible to construct the WordNet automatically. This paper proposes a method that automatically builds a WordNet of Korean nouns by using the descriptions of nouns in a Korean MRD, and it proposes rules for extracting the hyper concepts (hypernyms) by analyzing the structural characteristics of Korean. The rules reflect characteristics such as the headword lying at the end of a sentence and the special structure of the descriptive sentences of nouns. In addition, a WordNet prototype of Korean nouns is developed by combining the hypernyms produced by the rules mentioned above. It extracts the hypernyms of about 2,500 sample words, and the results show that about 92 percent of the hypernyms are correct.
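
The headword-at-the-end heuristic described above can be sketched as follows; the toy dictionary entries and the naive whitespace tokenization are illustrative stand-ins for the paper's morphological analysis and fuller rule set.

```python
# Sketch: last-noun-of-definition as hypernym candidate (toy MRD entries).
mrd = {  # hypothetical dictionary-style definitions (word: definition)
    "사과": "장미과의 낙엽 교목의 열매",   # apple: ... fruit
    "참새": "참샛과에 속하는 작은 새",     # sparrow: ... small bird
}

def hypernym_candidate(definition: str) -> str:
    # Drop trailing punctuation, then take the last space-separated token
    # as the candidate hypernym (a crude stand-in for morphological analysis).
    tokens = definition.rstrip(". ").split()
    return tokens[-1]

for word, definition in mrd.items():
    print(word, "->", hypernym_candidate(definition))
# 사과 -> 열매 (fruit), 참새 -> 새 (bird)
```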

Construction of MATLAB API for Fuzzy Expert System Determining Automobile Warranty Coverage (자동차 보증수리 기간 결정을 위한 퍼지 전문가 시스템용 MATLAB API의 구축)

  • Lee, Sang-Hyoun;Kim, Chul-Min;Kim, Byung-Ki
    • The KIPS Transactions:PartD / v.12D no.6 s.102 / pp.869-874 / 2005
  • In recent years there has been an increase in service competition in product selling, especially in the extension of warranty coverage and quality. The variables connected with this service competition are not crisp and require the expertise of the production line. It thus becomes all the more necessary to use subtler tools as decision supports. These problems are typical not only of product companies but also of financial organizations, credit institutions, and insurers, which need predictions of the creditworthiness of firms or persons in which they have an interest. A suitable approach for minimizing the risk is to use a knowledge-based system. Most often, expert systems are not standalone programs but are embedded into a larger application. The aim of this paper is to discuss an approach for developing an embedded fuzzy expert system for product selling policy, in particular a decision system for automobile selling built around the extension of warranty coverage and quality. We use MATLAB, which integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. We also present the API functions for embedding the system into an existing application.
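
As a rough sketch of the kind of fuzzy inference such a system performs, written in plain Python rather than the MATLAB fuzzy tools used in the paper: triangular membership functions feed a small rule base whose outputs are combined by a weighted average. The variables, membership parameters, and rule outputs are illustrative assumptions.

```python
# Sketch: fuzzy rule evaluation for a warranty-coverage decision (illustrative parameters).
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def warranty_months(defect_rate: float, usage_severity: float) -> float:
    # Rule strengths (min as fuzzy AND)
    low_risk  = min(tri(defect_rate, 0, 0, 5),   tri(usage_severity, 0, 0, 50))
    mid_risk  = min(tri(defect_rate, 0, 5, 10),  tri(usage_severity, 25, 50, 75))
    high_risk = min(tri(defect_rate, 5, 10, 10), tri(usage_severity, 50, 100, 100))

    # Weighted average of rule outputs (months of coverage), Sugeno-style
    weights = [low_risk, mid_risk, high_risk]
    outputs = [60, 36, 24]
    total = sum(weights) or 1.0
    return sum(w * o for w, o in zip(weights, outputs)) / total

print(warranty_months(defect_rate=3.0, usage_severity=40.0))
```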

Korean Collective Intelligence in Sharing Economy Using R Programming: A Text Mining and Time Series Analysis Approach (R프로그래밍을 활용한 공유경제의 한국인 집단지성: 텍스트 마이닝 및 시계열 분석)

  • Kim, Jae Won;Yun, You Dong;Jung, Yu Jin;Kim, Ki Youn
    • Journal of Internet Computing and Services / v.17 no.5 / pp.151-160 / 2016
  • The purpose of this research is to investigate Korean popular attitudes toward and social perceptions of the term 'sharing economy' at the present moment, from a creative and socio-economic point of view. For Korea, this study discovers and interprets the objective, tangible annual changes and patterns of sociocultural collective intelligence over the last five years by applying text mining within a big data analysis approach. By crawling and Googling, this study collected a significant amount of time-series web metadata related to the theme of the sharing economy on the world wide web from 2010 to 2014. Consequently, huge amounts of raw data concerning the sharing economy were processed into meaningful, value-added 'word cloud' graphs and figures by using the word-clouding functions of R programming. Despite the lack of accumulated data or collective intelligence about the sharing economy to date, it is worth noting that this study carried out preliminary research on time-series big data analysis from the perspective of knowledge management and processing. The results of this study can therefore be utilized as fundamental data to help understand the academic and industrial aspects of future sharing economy-related markets and consumer behavior.
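
As a minimal sketch of the yearly term-frequency counting behind such word clouds, written in Python rather than the R code the study used; the documents and years are invented, and the top terms per year are what a word-cloud visualization would display.

```python
# Sketch: per-year term frequencies as word-cloud input (invented snippets).
from collections import Counter

docs_by_year = {  # hypothetical crawled snippets keyed by year
    2010: ["sharing economy startup", "car sharing service"],
    2014: ["sharing economy regulation", "home sharing platform regulation"],
}

for year, docs in sorted(docs_by_year.items()):
    counts = Counter(word for doc in docs for word in doc.lower().split())
    # The top-n terms per year are what a word cloud would visualize.
    print(year, counts.most_common(3))
```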

Development Testing/Evaluating Methods about Security Functions based on Digital Printer (디지털 프린터의 보안기능 시험/평가방법론 개발)

  • Cho, Young-Jun;Lee, Kwang-Woo;Cho, Sung-Kyu;Park, Hyun-Sang;Lee, Hyoung-Seob;Lee, Hyun-Seung;Kim, Song-Yi;Cha, Wook-Jae;Jeon, Woong-Ryul;Won, Dong-Ho;Kim, Seung-Joo
    • The KIPS Transactions:PartC / v.16C no.4 / pp.461-476 / 2009
  • Digital printers, which are mainly used in enterprises and public institutions, are multifunction devices that combine printing, copying, scanning, fax, and other functions. Digital printers provide security functionality to prevent the leakage of important data related to confidential industrial technology. In line with this trend, CC (Common Criteria) evaluation and assurance of digital printers is in progress in Japan and the USA, and has recently started in Korea. However, domestic know-how on digital printer evaluation is still insufficient, and developers and evaluators have difficulty with the CC evaluation of digital printer products. Therefore, testing methods for digital printer security functionality and evaluation technology are essential to meet the growing demand for such evaluations. In this study, we analyze the security functionality and development trends of digital printer products from major domestic and foreign digital printer companies. Moreover, we examine the characteristics of each security function and propose guidelines for digital printer security functionality evaluation and vulnerability testing methods.