• Title/Summary/Keyword: 저널 (journal)

Search Results: 29,768

Media Scholars and Power: The politicized intellectuals hanging on the dangerous rope (언론학자와 권력: 정치화된 지성의 위험한 줄타기)

  • Choi, Nakjin; Kim, Sunghae
    • Korean Journal of Legislative Studies / v.22 no.2 / pp.113-156 / 2016
  • Media scholars hold a lion's share of influence in circles of power. Not only do they take part in shaping media policy, they also occupy prestigious posts such as board memberships in the Korea Communications Commission (KCC). Unfortunately, little is known about who they are, what qualifications they hold, and whether they serve the public interest. This paper attempts to unveil the mechanism behind these politicized intellectuals who specialize in the media. Two categories, 'representativeness' and 'expertise', are employed for this purpose. Representativeness refers to the degree of commitment to public service, such as participation in conferences or non-profit organizations, while the number of research articles, books, and projects falls under expertise. Evaluation levels of 'excellent', 'good', and 'average' were assigned to scholars who are or were inside the 'power hole' where decisions are made. Several interesting observations emerged from this study. First, the criteria of representativeness and expertise, only vaguely suggested by the law, hardly fit these intellectuals: rarely did they commit to public service, let alone show vigor in academic activity. Second, ideological loyalty and political activities in line with the government had much to do with obtaining such positions. Third, not surprisingly, graduating from Seoul National University and holding a Ph.D. from the United States proved to be among the most decisive factors. Finally, most of them were adept at using the press to boost their own publicity. Participating in policy-making processes, whether as board members or as advisers, is inevitable for media experts. However, as this study shows, qualifications for public service and academic commitment should not be underestimated, and academic integrity, rather than selling intellect solely for private interest, needs to be protected as well. The authors hope this study provides a valuable opportunity to establish ethical standards for participation in politics.

Review of the Weather Hazard Research: Focused on Typhoon, Heavy Rain, Drought, Heat Wave, Cold Surge, Heavy Snow, and Strong Gust (위험기상 분야의 지난 연구를 뒤돌아보며: 태풍, 집중호우, 가뭄, 폭염, 한파, 강설, 강풍을 중심으로)

  • Chang-Hoi Ho; Byung-Gon Kim; Baek-Min Kim; Doo-Sun R. Park; Chang-Kyun Park; Seok-Woo Son; Jee-Hoon Jeong; Dong-Hyun Cha
    • Atmosphere / v.33 no.2 / pp.223-246 / 2023
  • This paper summarizes research on weather extremes in the Republic of Korea published in domestic and foreign journals during 1963~2022. A weather extreme is defined as a weather phenomenon that causes serious casualties and property loss; here, it includes typhoons, heavy rain, drought, heat waves, cold surges, heavy snow, and strong gusts. Based on the 2011~2020 statistics for Korea, more than 80% of the property loss from all natural disasters was caused by typhoons and heavy rainfall. However, the impact of the other weather extremes may be underestimated relative to what has actually been experienced, because the property loss they cause is hard to quantify. In particular, as global warming intensifies, the influence of drought and heat waves has been increasing. The damage caused by cold surges, heavy snow, and strong gusts occurs over relatively local areas and on short time scales compared with the other weather hazards. In particular, strong gusts combined with drought may result in severe forest fires over mountainous regions. We hope that this review reminds readers of the importance of the weather extremes that directly affect our lives.

Empirical Study of Biogas Purification Equipment (바이오가스 정제 설비의 실증 연구)

  • Hwan Cheol Lee; Jae-Heon Lee
    • Plant Journal / v.18 no.4 / pp.58-65 / 2023
  • In this study, to increase the methane content of the biogas supplied from the Nanji Water Regeneration Center and to remove impurities, a three-stage membrane purification process was designed, installed, and operated for demonstration. In the 2 Nm3/h purification process, the target methane concentration of the produced biomethane was set to three cases (95%, 96.5%, and 98%), and the optimum membrane area configuration was derived by varying the membrane area ratio across five cases: 1:1, 1:2, 1:1:1, 1:2:1, and 1:2:2. A 30 Nm3/h three-stage membrane process was then installed reflecting the optimum conditions found at 2 Nm3/h, and production of biomethane with a methane concentration of 98% or higher was demonstrated. In the 2 Nm3/h purification unit, at a target methane concentration of 98%, the methane recovery rate in two-stage operation was 95.6% at a membrane area ratio of 1:1 and increased to 96.8% at 1:2; in three-stage operation, the recovery rate was highest, 96.8%, at a membrane area ratio of 1:2:1. The carbon dioxide removal rate in the two-stage process ranged from 16.4% to 96.4% depending on the membrane area ratio, reaching 95.7% at a ratio of 1:2; in the three-stage process it was 95.4% at a ratio of 1:2:1, so the two-stage process showed somewhat higher removal rates than the three-stage process. In the 30 Nm3/h demonstration operation at a membrane area ratio of 1:2:1, the methane concentration after purification was 98%, the methane recovery rate was 97.1%, the carbon dioxide removal rate was 95.7%, and hydrogen sulfide, a cause of corrosion, was not detected; production of biomethane with a methane concentration of 98% or higher was thus shown to be possible.
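
The methane recovery and carbon dioxide removal figures quoted above are simple mass-balance ratios. The sketch below is a minimal illustration assuming the standard definitions (product CH4 over feed CH4, and removed CO2 over feed CO2) with entirely made-up flow and composition values; it is not taken from the paper.

```python
# Minimal sketch (illustrative values only): methane recovery and CO2 removal
# rates as commonly defined for membrane biogas upgrading.

def methane_recovery(feed_flow, feed_ch4, product_flow, product_ch4):
    """Fraction of the methane in the feed gas that ends up in the product gas."""
    return (product_flow * product_ch4) / (feed_flow * feed_ch4)

def co2_removal(feed_flow, feed_co2, product_flow, product_co2):
    """Fraction of the feed CO2 that does not remain in the product gas."""
    return 1 - (product_flow * product_co2) / (feed_flow * feed_co2)

# Hypothetical example: 2 Nm3/h feed at 60% CH4 / 38% CO2, upgraded to ~98% CH4.
feed_flow, feed_ch4, feed_co2 = 2.0, 0.60, 0.38
product_flow, product_ch4, product_co2 = 1.18, 0.98, 0.015

print(f"CH4 recovery: {methane_recovery(feed_flow, feed_ch4, product_flow, product_ch4):.1%}")
print(f"CO2 removal:  {co2_removal(feed_flow, feed_co2, product_flow, product_co2):.1%}")
```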


CT Examinations for COVID-19: A Systematic Review of Protocols, Radiation Dose, and Numbers Needed to Diagnose and Predict (COVID-19 진단을 위한 CT 검사: 프로토콜, 방사선량에 대한 체계적 문헌고찰 및 진단을 위한 CT 검사량)

  • Jong Hyuk Lee; Hyunsook Hong; Hyungjin Kim; Chang Hyun Lee; Jin Mo Goo; Soon Ho Yoon
    • Journal of the Korean Society of Radiology / v.82 no.6 / pp.1505-1523 / 2021
  • Purpose Although chest CT has been discussed as a first-line test for coronavirus disease 2019 (COVID-19), little research has explored the implications of CT exposure at the population level. This study reviewed chest CT protocols and radiation doses in COVID-19 publications and explored the number needed to diagnose (NND) and the number needed to predict (NNP) if CT were used as a first-line test. Materials and Methods We searched nine highly cited radiology journals to identify studies discussing the CT-based diagnosis of COVID-19 pneumonia. Study-level information on the CT protocol and radiation dose was collected, and the doses were compared with each national diagnostic reference level (DRL). The NND and the NNP, which depends on the test positive rate (TPR), were calculated given a CT sensitivity of 94% (95% confidence interval [CI]: 91%-96%) and specificity of 37% (95% CI: 26%-50%), and applied to the early outbreaks in Wuhan, New York, and Italy. Results From 86 studies, the CT protocol and radiation dose were reported in 81 (94.2%) and 17 studies (19.8%), respectively. Low-dose chest CT was used more than twice as often as standard-dose chest CT (39.5% vs. 18.6%), while the remaining studies (44.2%) did not provide relevant information. The radiation doses were lower than the national DRLs in 15 of the 17 studies (88.2%) that reported doses. The NND was 3.2 scans (95% CI: 2.2-6.0). The NNPs at TPRs of 50%, 25%, 10%, and 5% were 2.2, 3.6, 8.0, and 15.5 scans, respectively. In Wuhan, an estimated 35,418 (TPR, 58%; 95% CI: 27,710-56,755) to 44,840 (TPR, 38%; 95% CI: 35,161-68,164) individuals underwent CT examinations to diagnose 17,365 patients. During the early surges in New York and Italy, the daily NND changed by up to 5.4 and 10.9 times, respectively, within 10 weeks. Conclusion Low-dose CT protocols were described in less than half of COVID-19 publications, and radiation doses were frequently lacking. The number of people undergoing CT as a first-line diagnostic test could vary dynamically with the daily TPR; therefore, caution is required in future planning.
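
The NND and NNP values in this abstract can be reproduced from the reported sensitivity and specificity. The sketch below assumes the standard definitions NND = 1/(sensitivity + specificity - 1) and NNP = 1/(PPV + NPV - 1), with PPV and NPV computed at a given test positive rate; with the reported point estimates it returns approximately 3.2, 2.2, 3.6, 8.0, and 15.5 scans, matching the figures quoted above.

```python
# Minimal sketch: number needed to diagnose (NND) and number needed to predict
# (NNP) from test sensitivity/specificity and the test positive rate (TPR).
# Assumed definitions: NND = 1/(sens + spec - 1), NNP = 1/(PPV + NPV - 1).

def nnd(sens, spec):
    """Number needed to diagnose: inverse of the Youden index."""
    return 1 / (sens + spec - 1)

def nnp(sens, spec, tpr):
    """Number needed to predict at a given test positive rate."""
    ppv = sens * tpr / (sens * tpr + (1 - spec) * (1 - tpr))
    npv = spec * (1 - tpr) / (spec * (1 - tpr) + (1 - sens) * tpr)
    return 1 / (ppv + npv - 1)

sens, spec = 0.94, 0.37  # point estimates reported in the abstract

print(f"NND = {nnd(sens, spec):.1f} scans")                       # ~3.2
for tpr in (0.50, 0.25, 0.10, 0.05):
    print(f"NNP at TPR {tpr:.0%} = {nnp(sens, spec, tpr):.1f}")    # ~2.2, 3.6, 8.0, 15.5
```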

Categorization of Factors Causing the Framing Effect and Analysis of the 2015 Revised Curriculum Science Textbooks: Focusing on Risk Expressions (틀효과 발생 요인 범주화 및 2015 개정 교육과정 과학과 교과서 분석 -위험 표현을 중심으로-)

  • Hyeonju Lee; Minchul Kim
    • Journal of The Korean Association For Science Education / v.44 no.5 / pp.391-404 / 2024
  • The development of science and technology brings abundance and convenience to human life, but it also brings risks. The risks created by science and technology are universal and far-reaching, affecting human lives, and we live in an uncertain VUCA era in which people cannot predict when and where they will encounter risk. To respond to these risks, it is necessary to raise citizens' risk awareness through risk education, and to discuss the role of science education in helping citizens judge and respond to risks scientifically and objectively. At the same time, when judging and assessing risks, citizens are affected by the frames and the ways in which risk information is expressed, a phenomenon known as the 'framing effect'. In this study, we categorized the factors that cause the framing effect and, based on that categorization, compared and analyzed the frames of risk expression presented in the 2015 revised curriculum science textbooks. For this purpose, we categorized the causal factors by reviewing papers published in KCI and SSCI journals under the keyword 'framing effect', and then extracted and analyzed the risk expression texts in the textbooks according to those categories. We derived eight factors that cause the framing effect and organized the relationships among them in a 5×5 matrix. The differences in the frequency of risk expressions across subjects in the 2015 revised science curriculum were related to the nature of each subject and its achievement standards, and these differences could be characterized by the categories of framing and presentation method. This study is significant in that it examines how risk is expressed in science subjects based on the factors that cause the framing effect and highlights the importance of the framing effect in risk education.

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.79-92 / 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method for integrating national R&D data and helping users navigate the integrated data through a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated around the research project: other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology represents simple relationships among the integrated data, such as project-output, document-author, and document-topic relationships, and the knowledge map lets us infer further relationships such as co-author and co-topic relationships. To extract the relationships among the integrated data, a relational-data-to-triples transformer was implemented, and a topic modeling approach was introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is used in knowledge management to store, manage, and process an organization's data as knowledge; the other is used to analyze and represent knowledge extracted from science & technology documents. This research focuses on the latter. In this research, a knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map; it allows knowledge to be represented and processed as a simple network, which fits the navigation and visualization characteristics of the knowledge map. The ontology represents the entities and their relationships in the knowledge maps, and an ontology repository was created to store and process it. In the ontologies, researchers are implicitly connected through the national R&D data by authorship and project-performer relationships. A knowledge map displaying the researchers' network was created, where the network is formed from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. To sum up, a knowledge map service system based on topic modeling and an ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system has three goals: 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based search over the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents, and GTB data is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and integrated into an integrated database.
The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords that represent them; the topic modeling approach extracts these relationships and keywords based on semantics rather than simple keyword matching. Lastly, we present an experiment on constructing the integrated knowledge base using the lightweight ontology and topic modeling, and introduce the knowledge map services created on top of the knowledge base.
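
The abstract describes the pipeline only in prose. The following is a minimal sketch, not the authors' implementation, of the two technical steps it names: transforming relational rows into ontology triples and assigning document-topic relationships with topic modeling. The namespace, record layout, and sample data are hypothetical.

```python
# Minimal sketch (not the authors' code): relational R&D records -> RDF-style
# triples, plus LDA topic modeling for document-topic relationships.
from rdflib import Graph, Namespace, Literal
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

RND = Namespace("http://example.org/rnd#")   # hypothetical ontology namespace

# Hypothetical relational rows: (project_id, paper_id, author, abstract)
rows = [
    ("P001", "D101", "Kim", "ontology based knowledge map for national R&D data"),
    ("P001", "D102", "Lee", "topic modeling of research papers and patents"),
    ("P002", "D103", "Park", "triple store and semantic search for S&T documents"),
]

# 1) Relational-data-to-triples transformation (project-output, document-author).
g = Graph()
for project_id, paper_id, author, _ in rows:
    g.add((RND[project_id], RND.hasOutput, RND[paper_id]))
    g.add((RND[paper_id], RND.hasAuthor, Literal(author)))

# 2) Topic modeling: document-topic relationships derived from abstracts.
abstracts = [abstract for _, _, _, abstract in rows]
dtm = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
doc_topics = lda.transform(dtm)              # per-document topic proportions

for (_, paper_id, _, _), topic_dist in zip(rows, doc_topics):
    dominant = int(topic_dist.argmax())
    g.add((RND[paper_id], RND.hasTopic, RND[f"Topic{dominant}"]))

print(g.serialize(format="turtle"))
```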

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung; Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as on-demand services. Cloud computing is considered a key concept that will lead a shift from an ownership-based paradigm to a pay-per-use paradigm, reducing the fixed cost of IT resources and improving flexibility and scalability. As IT services, cloud services have evolved from earlier, related computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is closely related to and intertwined with various neighboring research areas. To identify promising research issues and topics in cloud computing, it is necessary to understand its research trends more comprehensively. In this study, we collect bibliographic and citation information for cloud computing research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords using social network analysis measures. Through this analysis, we can identify the relationships and connections among research topics in cloud computing and highlight potential new research topics. In addition, we visualize dynamic changes of research topics related to cloud computing using a proposed "research trend map." A research trend map visualizes the positions of research topics in a two-dimensional space, with keyword frequency on the X-axis and the rate of increase in the degree centrality of keywords on the Y-axis. Based on these two dimensions, the space is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; an area where both keyword frequency and the rate of increase in degree centrality are high is a growth technology area; an area where keyword frequency is low but the rate of increase in degree centrality is high is a promising technology area; and an area where both are low is a declining technology area. Based on this method, cloud computing research trend maps make it possible to grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top by the PageRank measure. From the keyword analysis, cloud computing and grid computing showed high centrality in 2009; keywords for core elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011; and in 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. The annual research trend maps show that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area.
The results indicate that distributed systems and grid computing received much attention as similar computing paradigms in the early stage of cloud computing research, a period focused on understanding and investigating cloud computing as an emerging technology by linking it to established computing concepts. After this early stage, security and virtualization became the main issues, reflected in their movement from the promising area toward the growth area in the research trend maps. Moreover, this study reveals that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
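
As a concrete reading of the trend-map quadrants described above, here is a minimal sketch with made-up keyword statistics and arbitrary thresholds; the quadrant rule follows the definitions given in the abstract, but all numbers are illustrative only.

```python
# Minimal sketch (illustrative only): classifying keywords into the four areas of
# the research trend map. Keyword statistics and thresholds below are made up.

def trend_area(frequency, centrality_growth, freq_threshold=50, growth_threshold=0.1):
    """X-axis: keyword frequency; Y-axis: rate of increase in degree centrality."""
    if frequency >= freq_threshold and centrality_growth >= growth_threshold:
        return "growth"      # frequent and still gaining connections
    if frequency >= freq_threshold:
        return "maturation"  # frequent but no longer gaining connections
    if centrality_growth >= growth_threshold:
        return "promising"   # rare so far but rapidly gaining connections
    return "decline"         # rare and stagnant

# Hypothetical (frequency, centrality growth rate) pairs per keyword.
keywords = {
    "security": (30, 0.4),
    "virtualization": (80, 0.3),
    "grid computing": (90, -0.2),
    "resource management": (20, 0.02),
}
for kw, (freq, growth) in keywords.items():
    print(f"{kw}: {trend_area(freq, growth)}")
```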

A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang; Cho, Kuentae; Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.101-123 / 2013
  • Recently, owing to advances in science and information technology, socio-economic and business environments have been shifting from an industrial economy to a knowledge economy. Companies must create new value through continuous innovation, the development of core competencies and technologies, and technological convergence. Identifying major trends in technology research and making interdisciplinary, knowledge-based predictions of integrated and promising technologies are therefore required for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand recent research trends in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business; it also intends to offer a clear way to find new technical value for constant innovation and to capture core technologies and technology convergence. Bibliometrics is a quantitative analysis of the characteristics of a body of literature. Traditional bibliometrics is limited in its ability to reveal the relationship between trends in technology management and the technology itself, since it focuses on quantitative indices such as citation frequency. To overcome this limitation, network-focused bibliometrics, which mainly uses co-citation and co-word analysis, has been used instead. In this study, a keyword network analysis, a form of social network analysis, is performed to analyze recent research trends in MOT. For the analysis, we collected keywords from research papers published in international journals related to MOT between 2002 and 2011, constructed a keyword network, and then analyzed it. Over the past 40 years, social network studies have attempted to understand social interactions through network structures represented by connection patterns; in other words, social network analysis has been used to explain the structures and behaviors of various social formations such as teams, organizations, and industries. In general, social network analysis takes data in the form of a matrix. In our context, the rows of the matrix are papers and the columns are keywords, with binary entries: each cell is 1 if the paper includes the keyword and 0 otherwise. Even though published papers have no direct relations to one another, relations between them can be derived from this paper-keyword matrix; for example, a network can be configured so as to connect papers that share one or more keywords. After constructing the keyword network, we analyzed keyword frequency, the structural characteristics of the network, preferential attachment and the growth of new keywords, components, and centrality. The results of this study are as follows. First, a paper has 4.574 keywords on average; 90% of keywords were used three or fewer times over the past 10 years, and about 75% of keywords appeared only once. Second, the keyword network in MOT is a small-world and scale-free network in which a small number of keywords tend to monopolize connections. Third, the gap between the rich nodes (with more edges) and the poor nodes (with fewer edges) in the network grows over time. Fourth, most newly entering keywords become poor nodes within about 2~3 years.
Finally, the keywords with high degree, betweenness, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME." We hope that these results will help MOT researchers identify major trends in technology research, serve as useful reference information when they seek consilience with other fields of study, and guide the selection of new research topics.
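
To make the keyword-network construction concrete, the following is a minimal sketch with toy paper-keyword data: it builds the keyword co-occurrence network (connecting keywords that appear in the same paper, i.e., the keyword projection of the paper-keyword matrix) and computes the degree, betweenness, and closeness centralities mentioned in the results. It illustrates the general technique and is not the authors' code.

```python
# Minimal sketch (illustrative only): building a keyword co-occurrence network
# from paper-keyword lists and computing centrality measures. Toy data only.
from itertools import combinations
import networkx as nx

papers = {
    "paper1": ["innovation", "R&D", "patent"],
    "paper2": ["innovation", "technology transfer"],
    "paper3": ["patent", "forecast", "R&D"],
}

G = nx.Graph()
for keywords in papers.values():
    # Connect every pair of keywords that co-occur in the same paper.
    for kw1, kw2 in combinations(sorted(set(keywords)), 2):
        if G.has_edge(kw1, kw2):
            G[kw1][kw2]["weight"] += 1
        else:
            G.add_edge(kw1, kw2, weight=1)

for name, centrality in [
    ("degree", nx.degree_centrality(G)),
    ("betweenness", nx.betweenness_centrality(G)),
    ("closeness", nx.closeness_centrality(G)),
]:
    top = max(centrality, key=centrality.get)
    print(f"highest {name} centrality: {top} ({centrality[top]:.2f})")
```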