• Title/Summary/Keyword: data pre-processing

Research Trends on Doctor's Job Competencies in Korea Using Text Network Analysis (텍스트네트워크 분석을 활용한 국내 의사 직무역량 연구동향 분석)

  • Kim, Young Jon;Lee, Jea Woog;Yune, So Jung
    • Korean Medical Education Review / v.24 no.2 / pp.93-102 / 2022
  • The concept of the "doctor's role" serves as a guideline for developing medical education programs for medical students, residents, and practicing doctors, so it must regularly be re-examined against the needs of the times and of society. The objective of the present study was to understand the knowledge structure of research on doctors' job competencies in Korea. We analyzed research trends in Korea Citation Index journals using text network analysis, taking an integrative approach focused on identifying social issues. We selected 1,354 research papers related to doctors' job competencies published from 2011 to 2020 and, after data pre-processing with NetMiner ver. 4.2 (Cyram Inc., Seongnam, Korea), analyzed 2,627 words. We conducted keyword centrality analysis, topic modeling, frequency analysis, and linear regression analysis using NetMiner ver. 4.2 (Cyram Inc.) and IBM SPSS ver. 23.0 (IBM Corp., Armonk, NY, USA). Words such as "family," "revision," and "rejection" appeared frequently. Topic modeling extracted five latent topics: "topic 1: life and death in medical situations," "topic 2: medical practice under the Medical Act," "topic 3: medical malpractice and litigation," "topic 4: medical professionalism," and "topic 5: competency development education for medical students." Although the research trend for each topic showed no statistically significant change over time, the results suggest that social changes can affect the demand for doctors' job competencies.
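
The study's pipeline (pre-processing, frequency analysis, five-topic extraction) relies on the proprietary NetMiner and SPSS; as a rough, minimal sketch of the same kind of analysis, the snippet below builds a term-document matrix and fits a five-topic LDA model with scikit-learn. The corpus placeholder, the `max_features` cap, and the top-word listing are illustrative assumptions, not the paper's actual settings.

```python
# Minimal sketch: term frequencies + 5-topic LDA, approximating the study's
# NetMiner/SPSS workflow in scikit-learn. Corpus contents are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = ["...tokenized abstract text...", "..."]  # stand-in for 1,354 papers

# Term-document matrix (frequency analysis operates on these counts)
vectorizer = CountVectorizer(max_features=2627)  # the study kept 2,627 words
X = vectorizer.fit_transform(abstracts)

# Extract five latent topics, matching the study's topic count
lda = LatentDirichletAllocation(n_components=5, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-10:][::-1]]
    print(f"topic {k + 1}:", ", ".join(top))
```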

Development of Water Velocity Data Preprocessing Method for PAVOs (PAVOs 활용을 위한 유속데이터 전처리 기법 개발)

  • Soyeon Lim;Youngmoo Yu;Sinjae Lee;Yeongil Lee
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.85-85 / 2023
  • Manual methods such as the wading method and the transverse-line method are applied for discharge measurement, but constraints such as night-time and holiday measurements and staff shortages limit their ability to capture high-stage floods. To address this, the Acoustic Doppler Velocity Meter (ADVM) and the Portable Automatic Velocity Observation System (PAVOs), which are free of temporal and spatial constraints, have been proposed. Installed on bridges, these devices measure velocity in real time and can therefore be useful for flood management. Velocity data measured in real time, however, contain erroneous and missing values; for the ADVM, pre-processing methods such as applying a stage-discharge rating are in use, but research on pre-processing data from PAVOs, which use surface velocity radar, is lacking. This study therefore developed a pre-processing procedure for velocity data measured in real time by PAVOs. PAVOs record ten velocities at once every five minutes, and the data are non-stationary. The pre-processing consisted of treating erroneous and missing values, applying interpolation, determining the actual velocity among the ten readings, and denoising. The procedure was applied to velocity data measured at Hongcheon Bridge on the Hongcheon River in Gangwon-do. As a result, a consistent trend could be identified on the rising and falling limbs of the data. Discharge computed from these data showed a low deviation rate when compared with discharge computed from the measurement-based mean velocity relation. With pre-processed real-time velocity data, the flood discharge can be estimated when the peak stage occurs; moreover, stage-discharge rating curves that shift with rainfall or river works could be updated in real time, which would play a significant role in effective flood management.
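
As a minimal sketch of the pre-processing chain described above (error and missing-value handling, interpolation, selection of the actual velocity among the ten readings, and denoising), the pandas snippet below assumes a hypothetical CSV with one row per 5-minute scan and ten velocity columns; the thresholds, window sizes, and the median-based selection rule are illustrative choices, not the study's.

```python
# Sketch of PAVOs velocity pre-processing under assumed file/column names.
import numpy as np
import pandas as pd

df = pd.read_csv("pavos_velocity.csv", parse_dates=["time"])  # hypothetical file
vel_cols = [f"v{i}" for i in range(1, 11)]  # ten readings per 5-minute scan

# 1) Flag erroneous values (negative or implausibly large) as missing
df[vel_cols] = df[vel_cols].where((df[vel_cols] >= 0) & (df[vel_cols] < 15.0))

# 2) Fill gaps by time-based interpolation within each column
df = df.set_index("time")
df[vel_cols] = df[vel_cols].interpolate(method="time", limit=6)

# 3) Pick a representative "actual" velocity from the ten readings per scan
#    (the study's selection rule is not given; the median is one robust choice)
df["v_rep"] = df[vel_cols].median(axis=1)

# 4) Denoise the representative series with a rolling median
df["v_clean"] = df["v_rep"].rolling("30min").median()
```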

Comparison of image quality according to activation function during Super Resolution using ESCPN (ESCPN을 이용한 초해상화 시 활성화 함수에 따른 이미지 품질의 비교)

  • Song, Moon-Hyuk;Song, Ju-Myung;Hong, Yeon-Jo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.129-132 / 2022
  • Super-resolution is the process of converting a low-resolution image into a high-resolution one. This study was conducted using ESPCN. In a super-resolution deep neural network, the activation function applied at each node can lead to output images of different quality even for the same input data. The purpose of this study was therefore to find the activation function best suited to super-resolution by applying ReLU, ELU, and Swish and comparing the quality of the output images for the same input images. The CelebA dataset was used. During pre-processing, images were cropped to a square and then degraded in quality; the degraded image served as the input and the original image was used for evaluation. As a result, ELU and Swish took longer to train than ReLU, the activation most commonly used in machine learning, but showed better performance.
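
Below is a minimal Keras sketch of an ESPCN-style network with the activation left as a parameter, so ReLU, ELU, and Swish can be swapped as in the study; the layer sizes follow the original ESPCN design, while the scale factor, loss, and optimizer here are assumptions.

```python
# ESPCN-style super-resolution model with a configurable activation.
import tensorflow as tf

def build_espcn(scale=3, channels=1, activation="relu"):
    x_in = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(64, 5, padding="same", activation=activation)(x_in)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation=activation)(x)
    # Sub-pixel layer: produce scale^2 channels, then rearrange them into an
    # image enlarged by `scale` (depth_to_space is the pixel-shuffle op)
    x = tf.keras.layers.Conv2D(channels * scale**2, 3, padding="same")(x)
    x_out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
    return tf.keras.Model(x_in, x_out)

# Build one model per activation compared in the study
for act in ("relu", "elu", "swish"):
    model = build_espcn(activation=act)
    model.compile(optimizer="adam", loss="mse")
```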

High Resolution Photo Matting for Construction of Photo-realistic Model (실감모형 제작을 위한 고해상도 유물 이미지 매팅)

  • Choi, Seok-Keun;Lee, Soung-Ki;Choi, Do-Yeon;Kim, Gwang-Ho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.1 / pp.23-30 / 2022
  • Recently, various studies have been underway on deep learning-based image matting methods. In photogrammetry as well, extracting information about relics from photographed images is essential to produce a high-quality realistic model. Because this process requires a great deal of time and manpower, chroma-keying has been used for extraction so far; however, its segmentation accuracy is low, making it difficult to apply to high-quality realistic models. This study therefore removed background information from high-resolution relic images using prior background information and trained learning data, and evaluated the extracted relic images both qualitatively and quantitatively. As a result, the proposed method with FBA (manual trimap) showed quantitatively better results and, in the qualitative evaluation as well, high segmentation accuracy around the relics. The study thus confirmed the applicability of the proposed method to indoor relic photography, as it achieved high accuracy and fast processing speed by acquiring prior background information when segmenting high-resolution relic images.
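
Trimap-based matting networks such as FBA take a trimap marking definite foreground, definite background, and an unknown band. As an illustrative sketch only (the study's trimaps were drawn manually), the OpenCV snippet below derives a trimap from a rough binary mask by erosion; the file names and the width of the unknown band are assumptions.

```python
# Build a trimap (0 = background, 128 = unknown, 255 = foreground)
# from a rough binary mask of the relic.
import cv2
import numpy as np

mask = cv2.imread("relic_rough_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
mask = (mask > 127).astype(np.uint8)

kernel = np.ones((25, 25), np.uint8)  # controls the "unknown" band width
fg = cv2.erode(mask, kernel)          # certain foreground
bg = cv2.erode(1 - mask, kernel)      # certain background

trimap = np.full(mask.shape, 128, np.uint8)
trimap[fg == 1] = 255
trimap[bg == 1] = 0
cv2.imwrite("relic_trimap.png", trimap)
```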

IoT botnet attack detection using deep autoencoder and artificial neural networks

  • Deris Stiawan;Susanto;Abdi Bimantara;Mohd Yazid Idris;Rahmat Budiarto
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1310-1338 / 2023
  • As Internet of Things (IoT) applications and devices rapidly grow, cyber-attacks on IoT networks/systems are also on an increasing trend, raising the threat to security and privacy. Botnets are among the threats that dominate these attacks, as they can easily compromise devices attached to IoT networks/systems. Compromised devices behave like normal ones and are therefore difficult to recognize. Several intelligent approaches, including deep learning and machine learning techniques, have been introduced to improve detection accuracy for this type of cyber-attack, and dimensionality reduction methods are applied during the pre-processing stage. This work proposes a deep autoencoder dimensionality reduction method combined with an Artificial Neural Network (ANN) classifier as a botnet detection system for IoT networks/systems. Experiments were carried out with 3-layer, 4-layer, and 5-layer autoencoders pre-processing data from the MedBIoT dataset. The results show that the 5-layer autoencoder performs best, with an accuracy of 99.72%, precision of 99.82%, sensitivity of 99.82%, specificity of 99.31%, and an F1-score of 99.82%. The 5-layer autoencoder model also reduced the dataset size from 152 MB to 12.6 MB (a reduction of 91.2%). Experiments on the N-BaIoT dataset likewise achieved very high accuracy, up to 99.99%.
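
Below is a minimal Keras sketch of the two-stage design described above: a 5-layer autoencoder compresses the flow features, and a small ANN classifies the encoded vectors as benign or botnet. The feature count, layer widths, and training settings are assumptions; the paper's exact architecture is not reproduced.

```python
# Autoencoder dimensionality reduction + ANN classifier (assumed sizes).
import tensorflow as tf

n_features = 100  # assumed flow-feature count; not the dataset's actual value

# Autoencoder: 5 encoder layers narrowing to a compact code
inp = tf.keras.Input(shape=(n_features,))
x = inp
for units in (80, 60, 40, 20, 10):           # 5-layer encoder
    x = tf.keras.layers.Dense(units, activation="relu")(x)
code = x
y = code
for units in (20, 40, 60, 80, n_features):   # mirrored decoder
    y = tf.keras.layers.Dense(units, activation="relu")(y)
autoencoder = tf.keras.Model(inp, y)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X_train, X_train, epochs=50, batch_size=256)

# ANN classifier trained on the 10-dimensional codes
encoder = tf.keras.Model(inp, code)
clf = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs. botnet
])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# clf.fit(encoder.predict(X_train), y_train, epochs=20)
```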

Effectiveness Analysis of AI Maker Coding Education (AI 메이커 코딩 교육의 효과성 분석)

  • Lee, Jaeho;Kim, Daehyun;Lee, Seunghun
    • Korean Association of Information Education Conference Proceedings / 2021.08a / pp.77-84 / 2021
  • The purpose of this study is to propose AI maker coding education as a way to improve computational thinking (CT), an essential competence for problem solving in modern society, and to analyze its effectiveness in improving CT among elementary school students. For the study, five 4th-grade and five 6th-grade students were recruited, and the AI maker coding education was organized into eight sessions spanning basic block coding and maker education through real-life problem solving. To analyze its effectiveness, pre- and post-education CT tests were administered. The test results confirmed that AI maker coding education had a significant effect on "abstraction," "algorithm," and "data processing" among the five CT components, while no significant change was found for "problem resolution" and "automation." Overall, the average score of all students increased and the deviation between students decreased, confirming that AI maker coding education was effective in improving CT.

A Study on the Improvement of IoT Network Performance Test Framework using OSS (개방형 SW를 이용한 IoT 네트워크 성능시험기 개선에 관한 연구)

  • Joung Youngjun;Jeong Yido;Lee SungHwa;Kim JinTae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.97-102 / 2023
  • This study concerns improving testers for IoT systems, which have recently become diversified and large-scale, and specifically a method to improve the packet-processing performance of the tester while securing flexibility in traffic protocol creation and operation. The purpose of this study is to design an open-source-software (OSS), DPDK-based high-speed IoT network performance test system that pre-verifies and measures the performance of data traffic transmission in increasingly sophisticated, high-capacity IoT network systems. The basic structure of the high-speed IoT performance tester was designed around a DPDK-based traffic generator, and experiments applying the system demonstrated the expected traffic modeling and packet generation capability.
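
DPDK itself is programmed in C; purely to illustrate the flexible traffic-modeling idea rather than DPDK's packet processing, the Scapy sketch below builds a burst of IoT-style UDP packets with a parameterized payload. The interface name, addresses, port, and payload format are hypothetical.

```python
# Generate a burst of IoT-style test traffic with Scapy (illustration only;
# a DPDK traffic generator would do this in C at far higher packet rates).
from scapy.all import Ether, IP, UDP, Raw, sendp

def build_burst(n, payload=b"\x01IoT-sensor-reading"):
    return [
        Ether() / IP(dst="192.0.2.10") / UDP(dport=5683)  # CoAP-like port
        / Raw(load=payload + seq.to_bytes(4, "big"))      # per-packet sequence
        for seq in range(n)
    ]

# Send a 1,000-packet burst out of a test interface (requires privileges)
sendp(build_burst(1000), iface="eth0", verbose=False)
```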

Development Approach of Fault Detection Algorithm for RNSS Monitoring Station (차세대 RNSS 감시국을 위한 고장 검출 알고리즘 개발 방안)

  • Da-nim Jung;Soo-min Lee;Chan-hee Lee;Eui-ho Kim;Heon-ho Choi
    • Journal of Advanced Navigation Technology / v.28 no.1 / pp.1-14 / 2024
  • Global navigation satellite systems (GNSS) providing position, navigation, and timing (PNT) services consist of satellite, ground, and user segments. Monitoring stations, a key element of the ground segment, play a crucial role in continuously collecting satellite navigation signals for service provision and fault detection. These stations detect anomalies such as threats to the satellites' signal-in-space (SIS), receiver issues, and local threats, and they deliver the received data and detection results to the master station. This paper introduces the main monitoring algorithms and measurement pre-processing procedures used for quality assessment and fault detection of received satellite signals in current satellite navigation system monitoring stations. It then proposes a development strategy for the components, architecture, and algorithms of the new regional navigation satellite system (RNSS) monitoring stations.
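
One widely used measurement pre-processing check in GNSS monitoring is the code-minus-carrier (CMC) combination, whose jumps expose receiver or multipath anomalies; the numpy sketch below shows that idea on synthetic data. The threshold and the de-meaning step (which removes the constant carrier ambiguity) are illustrative simplifications, not the paper's proposed algorithm.

```python
# Code-minus-carrier fault flagging on synthetic measurements.
import numpy as np

def cmc_fault_flags(pseudorange, carrier_phase_m, threshold_m=5.0):
    """Flag epochs where the de-meaned code-minus-carrier exceeds a bound.

    Inputs are per-epoch measurements in metres for one satellite; the
    constant carrier ambiguity is removed by subtracting the mean.
    """
    cmc = pseudorange - carrier_phase_m  # cancels geometry and clock terms
    cmc = cmc - np.mean(cmc)             # remove the constant ambiguity
    return np.abs(cmc) > threshold_m     # True marks a suspect epoch

# Example: a 50 m pseudorange jump injected at epoch 120 is flagged
rng = np.random.default_rng(0)
pr = 2.2e7 + rng.normal(0, 0.5, 300)    # noisy pseudorange (m)
cp = 2.2e7 + rng.normal(0, 0.01, 300)   # low-noise carrier phase (m)
pr[120] += 50.0
print(np.where(cmc_fault_flags(pr, cp))[0])  # -> [120]
```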

The Establishment and Improvement of Full Cycle History Management System for Low- and Intermediate-level Radioactive Waste (중저준위 방사성폐기물 전주기 이력관리체계 구축 및 개선)

  • Jin-Woo Lee;Jun Lee;Hee-Chul Eun;Ji-Young Jeong
    • Journal of Radiation Industry / v.18 no.1 / pp.95-100 / 2024
  • To establish a radioactive waste life-cycle history management system, the full series of processes, including waste generation, classification, packaging, storage, transportation, and disposal, was reflected in the information management system. A preliminary review process was introduced to reduce the amount of radioactive waste generated and to manage it efficiently: the expected amount of radioactive waste must be checked from the beginning of a research project, and the generated waste must be thoroughly managed from the generation stage to final disposal. In particular, for radioactive waste data generated during nuclear facility operation and individual experiments, the radioactive waste information management system must receive information from the waste generator and integrate it with processing information at the management stage. An application process for small-package containers was added so that information such as the generating facility, project information, waste types, and major radionuclides is captured. In the radioactive waste management process, the preceding steps receive the waste history from the generators, covering the application for a specified container with a QR label, pre-inspection, and the management request; the succeeding steps consist of repackaging, treatment, characterization, and evaluation of disposal suitability, so that radioactive wastes are managed transparently.
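
As an illustrative data-model sketch only, the snippet below captures the full-cycle history implied above: one record per QR-labelled container, advancing through the preceding and succeeding stages in order. The field names and stage list are assumptions, not the system's actual schema.

```python
# Hypothetical record type for a QR-labelled waste container's life cycle.
from dataclasses import dataclass, field

STAGES = ["application", "pre-inspection", "management request",
          "repackaging", "treatment", "characterization",
          "disposal suitability", "disposal"]

@dataclass
class WastePackage:
    qr_id: str                      # QR label on the specified container
    generating_facility: str
    project: str
    waste_type: str                 # e.g., combustible, non-combustible
    major_radionuclides: list[str]
    history: list[str] = field(default_factory=list)

    def advance(self, stage: str) -> None:
        """Append a life-cycle stage, enforcing the defined order."""
        expected = STAGES[len(self.history)]
        if stage != expected:
            raise ValueError(f"expected stage '{expected}', got '{stage}'")
        self.history.append(stage)

pkg = WastePackage("QR-2024-0001", "Lab A", "Decommissioning R&D",
                   "combustible", ["Cs-137", "Co-60"])
pkg.advance("application")
pkg.advance("pre-inspection")
```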

Long-term shape sensing of bridge girders using automated ROI extraction of LiDAR point clouds

  • Ganesh Kolappan Geetha;Sahyeon Lee;Junhwa Lee;Sung-Han Sim
    • Smart Structures and Systems / v.33 no.6 / pp.399-414 / 2024
  • This study discusses long-term deformation monitoring and shape sensing of bridge girder surfaces with an automated extraction scheme for point clouds in the Region Of Interest (ROI) that is invariant to the position of the Light Detection And Ranging (LiDAR) system. Advanced smart construction necessitates continuous monitoring of the deformation and shape of bridge girders during the construction phase. An automated scheme is proposed for reconstructing a geometric model of the ROI in the presence of a noisy, non-stationary background. The proposed scheme involves (i) denoising irrelevant background point clouds using dimensions from the design model, (ii) extracting the outer boundaries of the bridge girder by transforming and processing the point cloud data in a two-dimensional image space, (iii) extracting the topology of pre-defined targets using the modified Otsu method, (iv) registering the point clouds to a common reference frame or design coordinate system using the extracted pre-defined targets placed outside the ROI, and (v) defining the bounding box in the point clouds using the corresponding dimensional information of the bridge girder and abutments from the design model. The surface-fitted reconstructed geometric model in the ROI is superposed consistently over a long period to monitor bridge shape and derive deflection during the construction phase, and the derived values are highly correlated. The proposed scheme of combining 2D-3D processing with the design model overcomes the sensitivity of 3D point cloud registration to the initial match, which often leads to a local extremum.
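
As a minimal sketch of steps (ii)-(iii), the snippet below projects a point cloud onto a 2D occupancy grid and thresholds it with Otsu's method to isolate dense structure. The paper uses a modified Otsu variant and full 2D-3D registration against the design model, so the plain scikit-image Otsu and the grid cell size here are stand-in assumptions.

```python
# Project LiDAR points to a 2D occupancy image and Otsu-threshold it.
import numpy as np
from skimage.filters import threshold_otsu

def roi_mask_from_points(points, cell=0.05):
    """points: (N, 3) LiDAR returns; returns a boolean 2D occupancy mask."""
    xy = points[:, :2]
    ij = ((xy - xy.min(axis=0)) / cell).astype(int)  # grid indices
    img = np.zeros(ij.max(axis=0) + 1)
    np.add.at(img, (ij[:, 0], ij[:, 1]), 1.0)        # occupancy counts

    t = threshold_otsu(img)                          # global Otsu threshold
    return img > t                                   # dense cells = structure

# Synthetic check: a dense planar strip survives thresholding, the sparse
# background mostly does not.
pts = np.vstack([np.random.rand(5000, 3) * [10, 1, 1],   # girder-like strip
                 np.random.rand(500, 3) * [10, 10, 1]])  # sparse background
mask = roi_mask_from_points(pts)
print(mask.shape, mask.sum())
```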