• Title/Summary/Keyword: 논문 리뷰 (paper review)


Analysis of ICT Education Trends using Keyword Occurrence Frequency Analysis and CONCOR Technique (키워드 출현 빈도 분석과 CONCOR 기법을 이용한 ICT 교육 동향 분석)

  • Youngseok Lee
    • Journal of Industrial Convergence / v.21 no.1 / pp.187-192 / 2023
  • In this study, trends in ICT education were investigated by analyzing the occurrence frequency of related keywords and by applying the convergence of iterated correlations (CONCOR) technique. A total of 304 papers published since 2018 in registered journals were retrieved from Google Scholar using "ICT education" as the search keyword, and 60 papers pertaining to ICT education were selected through a systematic literature review. Keywords were then extracted from the titles and abstracts of these papers. For the word-frequency and indicator data, 49 high-frequency keywords were extracted by analyzing term frequency via the term frequency-inverse document frequency (TF-IDF) technique from natural language processing, together with co-occurrence frequencies. The degree of relationship was verified by analyzing the connection structure and degree centrality between words, and clusters of similar words were derived via CONCOR analysis. First, "education," "research," "result," "utilization," and "analysis" emerged as the main keywords. Second, an N-gram network graph with "education" as the focal keyword showed that "curriculum" and "utilization" exhibited the highest level of correlation. Third, a cluster analysis with "education" as the focal keyword yielded five groups: "curriculum," "programming," "student," "improvement," and "information." These results indicate that analyzing and identifying ICT education trends in this way can inform the practical research needed for ICT education.
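The keyword-scoring step described above can be sketched briefly. The following minimal Python example, with invented sample documents, ranks terms by TF-IDF and keeps the highest-scoring ones, as the paper does before its co-occurrence and CONCOR analyses; the CONCOR clustering itself, which iterates correlations over the co-occurrence matrix until convergence, is omitted.

```python
# Minimal sketch of TF-IDF keyword extraction; the sample documents are
# illustrative, and only the 49-keyword cutoff follows the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

docs = [
    "ICT education curriculum utilization analysis",
    "programming education research for students",
    "information curriculum improvement study",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)

# Rank terms by their summed TF-IDF weight across the corpus.
scores = np.asarray(tfidf.sum(axis=0)).ravel()
terms = np.array(vectorizer.get_feature_names_out())
top = terms[np.argsort(scores)[::-1]][:49]  # the paper keeps 49 keywords
print(top)
```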

Automatic Extraction of Opinion Words from Korean Product Reviews Using the k-Structure (k-Structure를 이용한 한국어 상품평 단어 자동 추출 방법)

  • Kang, Han-Hoon;Yoo, Seong-Joon;Han, Dong-Il
    • Journal of KIISE: Software and Applications / v.37 no.6 / pp.470-479 / 2010
  • Most opinion-word extraction methods proposed in English-language studies are difficult to apply directly to Korean. The manual methods suggested in Korean studies are time-consuming, and extracting Korean opinion words via an English thesaurus suffers from reduced precision due to imperfect one-to-one matching between Korean and English words. Studies based on Korean phrase analyzers, meanwhile, risk selecting opinion words with low frequency. This study therefore proposes the k-Structure (k=5 or 8) method, which complements existing Korean studies and can improve precision, for automatically extracting opinion words from simple sentences in Korean product reviews. A simple sentence is defined as one composed of at least three words, i.e., a sentence containing an opinion word within a distance of ±2 from the attribute name (e.g., the 'battery') of an evaluated product (e.g., a 'camera'). In the performance experiment, opinion words for eight given attribute names were automatically extracted, using the k-Structure, from 1,868 product reviews collected from major domestic shopping malls, and their precision was estimated. The results showed a recall of 79.0% and a precision of 87.0% for k=5, and a recall of 92.35% and a precision of 89.3% for k=8. A comparison test using PMI-IR (pointwise mutual information - information retrieval), one of the methods suggested in English-language studies, yielded a recall of 55% and a precision of 57%.
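The ±2-token window that defines a simple sentence here can be illustrated with a small sketch. The whitespace tokenization, the English sample sentence, and the function name are all assumptions for illustration, since the actual method operates on Korean morphemes.

```python
# Hedged sketch of the ±2 window idea: given an attribute name (e.g.,
# "battery"), collect words within distance 2 in the same simple sentence
# as opinion-word candidates.
def candidates_near_attribute(sentence, attribute, window=2):
    tokens = sentence.split()
    hits = []
    for i, tok in enumerate(tokens):
        if tok == attribute:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            hits.extend(t for j, t in enumerate(tokens[lo:hi], start=lo)
                        if j != i)
    return hits

print(candidates_near_attribute("the battery lasts long", "battery"))
# -> ['the', 'lasts', 'long']
```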

Application of the Modified Real-Time Medical Information Standard for U-Healthcare Systems by Using HL7 and Modified MFER(TS-MFER) (HL7과 수정된 MFER(TS-MFER)을 접목한 U-healthcare 실시간 의료정보 표준화 적용)

  • Uhm, Jin-U;Park, Sang-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.8C / pp.680-689 / 2012
  • U-healthcare maintains users' health without restrictions on where they are or when they need care. Because guaranteeing compatibility between heterogeneous systems is important in u-healthcare, a medical information standard is compulsory. An adequate standard is easy to understand and covers a wide range of information types and medical devices. HL7 (Health Level 7) has these traits, but it is not adequate for non-text messages, especially medical waveforms. JAHIS proposed an appropriate standard for this purpose, MFER. MFER has many advantages for representing medical waveforms, but it is still not well suited to real-time applications. Since u-healthcare involves many real-time requirements, a standard is needed that combines the useful properties of MFER and HL7 while supporting real-time operation. This article covers two main topics: first, it introduces MFER and HL7; second, it develops a new scheme (TS-MFER with HL7) by modifying MFER and HL7 for real-time applications.
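To make the embedding idea concrete, here is a hedged sketch of wrapping a binary waveform block inside an HL7 v2-style pipe-delimited message. The segment contents, field values, and base64 payload are illustrative assumptions, not the paper's actual TS-MFER encoding.

```python
# Illustrative only: HL7 v2 messages are pipe-delimited segments; the paper's
# TS-MFER embeds a (modified) MFER waveform block inside such a message.
import base64
import datetime

def build_hl7_with_waveform(patient_id, waveform_bytes):
    ts = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
    payload = base64.b64encode(waveform_bytes).decode("ascii")
    segments = [
        f"MSH|^~\\&|UHEALTH_DEV|HOME|MONITOR_SVR|HOSP|{ts}||ORU^R01|0001|P|2.6",
        f"PID|||{patient_id}",
        f"OBX|1|ED|ECG_WAVEFORM||{payload}",  # encapsulated waveform data
    ]
    return "\r".join(segments)  # HL7 v2 uses carriage-return segment breaks

print(build_hl7_with_waveform("12345", b"\x00\x01\x02\x03"))
```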

Analysis of Polymer Characteristics Using Matrix-assisted Laser Desorption/Ionization Time-of-flight Mass Spectrometry (말디토프 질량분석을 이용한 고분자의 특성분석)

  • Kang, Min-Jung;Seong, Yunseo;Kim, Moon-Ju;Kim, Myung Soo;Pyun, Jae-Chul
    • Applied Chemistry for Engineering / v.28 no.3 / pp.263-271 / 2017
  • The application of mass spectrometry to polymer science has increased rapidly since the development of MALDI-TOF MS. This review summarizes current polymer analysis methods using MALDI-TOF MS, which has been extensively applied to determine the average molecular weight of biopolymers and synthetic polymers. Polymer sequences have also been analyzed to reveal the structures and composition of monomers. In addition, the analysis of unknown end-groups and the determination of polymer concentrations are very important applications. Hyphenated techniques using MALDI-tandem MS have been used to analyze fragmentation patterns and end-groups, and the combination of SEC and MALDI-TOF MS is recommended for the analysis of complex polymers. Moreover, MALDI-TOF MS has been utilized to observe polymer degradation. Ion mobility MS, TOF-SIMS, and MALDI-TOF imaging are also emerging technologies for polymer characterization because of their ability to automatically fractionate and localize polymer samples. Determining polymer characteristics and relating them to material properties is one of the most important demands of polymer science; the development of software and instruments for a higher molecular mass range (> 100 kDa) will broaden the applications of MALDI-TOF MS for polymer scientists.
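The molecular-weight averages mentioned above follow from standard formulas, Mn = Σ(NᵢMᵢ)/ΣNᵢ and Mw = Σ(NᵢMᵢ²)/Σ(NᵢMᵢ), with polydispersity PDI = Mw/Mn. A short sketch can compute them from a MALDI-TOF peak list; the peak masses and intensities below are made up for illustration.

```python
# Number-average (Mn) and weight-average (Mw) molecular weight from a peak
# list, treating peak intensities as oligomer abundances N_i.
def polymer_averages(masses, intensities):
    n_sum = sum(intensities)
    nm_sum = sum(n * m for n, m in zip(intensities, masses))
    nm2_sum = sum(n * m * m for n, m in zip(intensities, masses))
    mn = nm_sum / n_sum
    mw = nm2_sum / nm_sum
    return mn, mw, mw / mn  # Mn, Mw, PDI

masses = [4800.0, 4900.0, 5000.0, 5100.0, 5200.0]  # m/z of oligomer peaks
intensities = [10, 40, 60, 40, 10]                 # relative abundances
mn, mw, pdi = polymer_averages(masses, intensities)
print(f"Mn={mn:.1f}, Mw={mw:.1f}, PDI={pdi:.4f}")
```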

Big data mining for natural disaster analysis (자연재해 분석을 위한 빅데이터 마이닝 기술)

  • Kim, Young-Min;Hwang, Mi-Nyeong;Kim, Taehong;Jeong, Chang-Hoo;Jeong, Do-Heon
    • Journal of the Korean Data and Information Science Society / v.26 no.5 / pp.1105-1115 / 2015
  • Big data analysis for disasters has recently begun, especially with text data such as social media. Social data typically supports the final two stages of disaster management, which consists of four stages: prevention, preparation, response, and recovery. Big data analysis of meteorological data, in contrast, can contribute to prevention and preparation. This motivated us to review big data technologies dealing with non-text rather than text data in the natural disaster area. To this end, we first explain the main keywords, namely big data, data mining, and machine learning (Section 2). We then introduce state-of-the-art machine learning techniques in meteorology-related fields (Section 3), showing how traditional machine learning techniques have been adapted to climatic data by taking domain specificity into account. The application of these techniques to natural disaster response is then introduced (Section 4), and we conclude with several future research directions.
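As a minimal illustration of the theme of Section 3, the sketch below applies a standard learner to meteorological features to flag conditions preceding an event. The features, labels, and thresholds are synthetic assumptions; real climatic data would need the domain-specific preprocessing the review discusses (spatio-temporal structure, seasonality, and so on).

```python
# Toy example: a standard classifier on synthetic weather features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# columns: rainfall (mm), wind speed (m/s), pressure anomaly (hPa)
X = rng.normal([50, 10, 0], [30, 5, 8], size=(500, 3))
y = (X[:, 0] > 80).astype(int)  # toy label: heavy-rainfall events

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```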

ADMM algorithms in statistics and machine learning (통계적 기계학습에서의 ADMM 알고리즘의 활용)

  • Choi, Hosik;Choi, Hyunjip;Park, Sangun
    • Journal of the Korean Data and Information Science Society / v.28 no.6 / pp.1229-1244 / 2017
  • In recent years, as demand for data-based analytical methodologies has increased in various fields, optimization methods have been developed to handle them. In particular, many constrained problems in statistics and machine learning can be solved by convex optimization. The alternating direction method of multipliers (ADMM) can effectively handle linear constraints and can be used as a parallel optimization algorithm. ADMM solves a complex original problem by splitting it into subproblems that are easier to optimize and combining their solutions. It is useful for optimizing non-smooth or composite objective functions, and it is widely used in statistics and machine learning because algorithms can be constructed systematically based on duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various fields related to statistics, focusing on two major points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in explaining the Lagrangian method and its dual problem. We introduce methodologies that utilize regularization, and simulation results are presented to demonstrate the effectiveness of ADMM for the lasso.
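The lasso example the paper closes with admits a compact ADMM implementation: split the problem as minimize (1/2)‖Ax − b‖² + λ‖z‖₁ subject to x = z, so the x-update is a ridge solve and the z-update is soft thresholding, the proximal operator of the ℓ1 norm. A sketch in NumPy, with synthetic data:

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # ridge solve
        z = soft_threshold(x + u, lam / rho)                # prox of lam*||.||_1
        u = u + x - z                                       # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20)
x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true + 0.1 * rng.normal(size=100)
print(np.round(admm_lasso(A, b, lam=5.0), 2))  # sparse estimate
```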

A Study on the Visual Representation of TREC Text Documents in the Construction of Digital Library (디지털도서관 구축과정에서 TREC 텍스트 문서의 시각적 표현에 관한 연구)

  • Jeong, Ki-Tai;Park, Il-Jong
    • Journal of the Korean Society for Information Management / v.21 no.3 / pp.1-14 / 2004
  • Visualization of documents can help users search for similar documents. All research in information retrieval addresses the problem of a user with an information need facing a data source containing an acceptable solution to that need. In various contexts, adequate solutions to this problem have included alphabetized cubbyholes housing papyrus rolls, microfilm registers, card catalogs, and inverted files coded onto discs. Many information retrieval systems rely on the use of a document surrogate, and, though they might be surprised to discover it, nearly every information seeker uses an array of document surrogates: summaries, tables of contents, abstracts, reviews, and MARC records are all document surrogates. That is, they stand in for a document, allowing a user to make some decision regarding it: whether to retrieve a book from the stacks, whether to read an entire article, and so on. In this paper, another type of document surrogate is investigated using a grouping method over term lists. Using multidimensional scaling (MDS), these surrogates are visualized on a two-dimensional graph, where the distance between points represents the similarity of the documents: the closer the distance, the more similar the documents.
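The pipeline described here (term-based surrogates, pairwise dissimilarities, a 2-D MDS embedding) can be sketched as follows; the toy documents stand in for the TREC collection, and cosine distance on TF-IDF vectors is an assumed choice of dissimilarity.

```python
# Embed document surrogates in 2-D so that distance reflects dissimilarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
from sklearn.manifold import MDS

docs = [
    "information retrieval evaluation with TREC documents",
    "retrieval of TREC text collections and evaluation",
    "digital library construction and user interfaces",
]

tfidf = TfidfVectorizer().fit_transform(docs)
dist = cosine_distances(tfidf)  # pairwise document dissimilarities

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc[:40]}")
```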

Emotion-on-a-chip(EOC) : Evolution of biochip technology to measure human emotion (감성 진단칩(Emotion-on-a-chip, EOC) : 인간 감성측정을 위한 바이오칩기술의 진화)

  • Jung, Hyo-Il;Kihl, Tae-Suk;Hwang, Yoo-Sun
    • Science of Emotion and Sensibility / v.14 no.1 / pp.157-164 / 2011
  • Emotion science is one of the rapidly expanding engineering/scientific disciplines with a major impact on human society. The growing interest in emotion science and engineering owes much to the recent trend of merging various academic fields. In this paper, we propose the potential importance of biochip technology with which human emotion can be precisely measured in real time using body fluids such as blood, saliva, and sweat. We newly name such a biochip an emotion-on-a-chip (EOC). The EOC consists of biological markers to measure the emotion, an electrode to acquire the signal, a transducer to transfer the signal, and a display to show the result. In particular, microfabrication techniques have made it possible to construct nano/micrometer-scale sensing parts and chips that accommodate the biological molecules capturing emotional biomarkers, giving us new opportunities to investigate emotion precisely. Future developments in EOC techniques will help combine the social and natural sciences and consequently expand the scope of such studies.


An Optimized V&V Methodology to Improve Quality for Safety-Critical Software of Nuclear Power Plant (원전 안전-필수 소프트웨어의 품질향상을 위한 최적화된 확인 및 검증 방안)

  • Koo, Seo-Ryong;Yoo, Yeong-Jae
    • Journal of the Korea Society for Simulation / v.24 no.4 / pp.1-9 / 2015
  • As software is used more widely in safety-critical nuclear fields, studies to improve the safety and quality of such software have been actively carried out for more than the past decade. In a nuclear power plant, the man-machine interface system (MMIS) performs the function of the human brain and nervous system and consists of fully digitalized equipment. Errors in MMIS software may therefore cause abnormal operation of the plant and can result in economic loss due to a consequential plant trip. Verification and validation (V&V) is a software-engineering discipline that helps build quality into software, and the nuclear industry is required by laws and regulations to implement and adhere to thorough V&V activities along the software lifecycle. V&V is a collection of analysis and testing activities across the full lifecycle and complements the efforts of other quality-engineering functions. This study proposes a methodology, based on V&V activities and a related tool chain, to improve software quality in nuclear power plants. The optimized methodology consists of document evaluation, requirement traceability, source code review, and software testing. The proposed methodology has been applied to, and approved for, the real MMIS project for Shin-Hanul units 1 and 2.
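Of the four steps, requirement traceability is the most mechanical, and a hedged sketch conveys the idea: link each software requirement to design items and test cases, then flag anything left uncovered. The identifiers below are invented for illustration; a real MMIS project would draw them from its lifecycle documents.

```python
# Minimal traceability check over a requirements-to-design-to-test mapping.
requirements = {"SRS-001", "SRS-002", "SRS-003"}
trace = {
    "SRS-001": {"design": ["SDD-010"], "tests": ["TC-100", "TC-101"]},
    "SRS-002": {"design": ["SDD-011"], "tests": []},
}

for req in sorted(requirements):
    links = trace.get(req)
    if links is None:
        print(f"{req}: NOT TRACED to design or test")
    elif not links["tests"]:
        print(f"{req}: designed ({links['design']}) but has no test case")
    else:
        print(f"{req}: fully traced -> {links['tests']}")
```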

Enhanced Grid-Based Trajectory Cloaking Method for Efficiency Search and User Information Protection in Location-Based Services (위치기반 서비스에서 효율적 검색과 사용자 정보보호를 위한 향상된 그리드 기반 궤적 클로킹 기법)

  • Youn, Ji-Hye;Song, Doo-Hee;Cai, Tian-Yuan;Park, Kwang-Jin
    • KIPS Transactions on Computer and Communication Systems / v.7 no.8 / pp.195-202 / 2018
  • With the development of location-based applications such as smartphones and GPS navigation, active research is being conducted to protect location and trajectory privacy. To receive location-related services, users must disclose their exact location to the server. However, this exposes not only their location but also their trajectory, which can lead to privacy violations. Furthermore, users request from the server not only location information but also multimedia information (photographs, reviews, etc. of the location), which increases both the server's processing cost and the amount of information the user must receive. To solve these problems, this study proposes the enhanced grid-based trajectory cloaking (EGTC) technique. As with the existing grid-based trajectory cloaking (GTC) technique, the EGTC method divides the user trajectory into grid cells according to the user privacy level (UPL) and creates a cloaking region in which a random query sequence is determined. In the next step, the necessary information is received as an index by considering the sub-grid cell c(x,y) corresponding to the path along which the user wishes to move. The proposed method ensures trajectory privacy, as the existing GTC method does, while reducing the amount of information the user must receive. The effectiveness of the proposed method is demonstrated through experimental results.
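The grid-and-cloak idea shared by GTC and EGTC can be illustrated with a short sketch: snap the trajectory to grid cells whose size is set by the privacy level, then send the server a randomly ordered set of cells instead of the exact ordered path. The fixed cell size stands in for the UPL, and EGTC's sub-grid-cell indexing step is omitted; this is an assumption-laden illustration, not the paper's algorithm.

```python
import random

def cloak_trajectory(points, cell_size):
    cells = []
    for x, y in points:  # snap each GPS point to its grid cell
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in cells:
            cells.append(cell)
    random.shuffle(cells)  # hide the visiting order from the server
    return cells

trajectory = [(12.4, 7.9), (12.9, 8.2), (13.6, 8.8), (14.2, 9.5)]
print(cloak_trajectory(trajectory, cell_size=1.0))
```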