• Title/Summary/Keyword: training data

Search Results: 6,392 (processing time: 0.035 seconds)

A Deep Learning-based Real-time Deblurring Algorithm on HD Resolution (HD 해상도에서 실시간 구동이 가능한 딥러닝 기반 블러 제거 알고리즘)

  • Shim, Kyujin;Ko, Kangwook;Yoon, Sungjoon;Ha, Namkoo;Lee, Minseok;Jang, Hyunsung;Kwon, Kuyong;Kim, Eunjoon;Kim, Changick
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.3-12
    • /
    • 2022
  • Image deblurring aims to remove image blur, which can be generated while shooting by the movement of objects, camera shake, out-of-focus capture, and so forth. With the rise in popularity of smartphones, carrying a digital camera daily has become common, so image deblurring techniques have grown more significant recently. Image deblurring was originally studied using traditional optimization techniques; with the recent attention on deep learning, deblurring methods based on convolutional neural networks have been actively proposed. However, most of them have been developed with a focus on better performance, so the speed of their algorithms makes them hard to use in real situations. To tackle this problem, we propose a novel deep learning-based deblurring algorithm that can operate in real time at HD resolution. In addition, we improved the training and inference process, increasing the performance of our model without any significant effect on its speed, and its speed without any significant effect on its performance. As a result, our algorithm achieves real-time performance by processing 33.74 frames per second at 1280×720 resolution. Furthermore, it shows excellent performance relative to its speed, with a PSNR of 29.78 and an SSIM of 0.9287 on the GoPro dataset.
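
The reported image-quality figures can be reproduced with the standard PSNR definition. Below is a minimal sketch of the PSNR computation in Python; the toy images are illustrative, while the paper's numbers come from the GoPro dataset:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")     # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant error of 8 gray levels gives MSE = 64.
ref = np.zeros((4, 4))
out = ref + 8.0
print(round(psnr(ref, out), 2))   # ≈ 30.07 dB
```

SSIM is structurally more involved (local means, variances, and covariances over windows) and is typically taken from a library such as scikit-image rather than reimplemented.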

A Study on Ransomware Detection Methods in Actual Cases of Public Institutions (공공기관 실제 사례로 보는 랜섬웨어 탐지 방안에 대한 연구)

  • Yong Ju Park;Huy Kang Kim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.3
    • /
    • pp.499-510
    • /
    • 2023
  • Recently, intelligent and advanced cyberattacks have attacked the computer networks of public institutions using files containing malicious code, or have leaked information, and the damage is increasing. Even public institutions equipped with various information protection systems can detect known attacks, but existing signature-based or static-analysis-based malware and ransomware file detection methods remain vulnerable to unknown dynamic and encrypted attacks. The detection method proposed in this study extracts the detection result data of the systems that can detect malicious code and ransomware among the information protection systems actually used by public institutions, derives various attributes by combining them, and applies machine learning classification algorithms. Through experiments, we examine how the derived attributes are classified and which attributes have a significant effect on the classification result and on accuracy improvement. Although the experimental results differ by algorithm depending on whether a specific attribute is included, training with certain attributes shows an increase in accuracy. We expect the results to be useful for attribute selection when building algorithms that detect malicious code, ransomware files, and abnormal behavior in information protection systems.
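
The attribute-based classification step described above can be sketched with a generic classifier on synthetic data. Everything here is illustrative (the attribute names, classifier choice, and data are assumptions, not the study's actual feature set):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 600
labels = rng.integers(0, 2, size=n)            # 0 = benign, 1 = ransomware
entropy = rng.normal(4.5 + 3.0 * labels, 0.8)  # encrypted payloads tend to raise entropy
renames = rng.poisson(2 + 30 * labels)         # mass file renames during encryption
noise = rng.normal(size=n)                     # an uninformative attribute

X = np.column_stack([entropy, renames, noise])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
print("feature importances:", clf.feature_importances_.round(3))
```

Comparing feature importances with and without a given attribute mirrors the study's question of which attributes drive accuracy.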

Quality Visualization of Quality Metric Indicators based on Table Normalization of Static Code Building Information (정적 코드 내부 정보의 테이블 정규화를 통한 품질 메트릭 지표들의 가시화를 위한 추출 메커니즘)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.5
    • /
    • pp.199-206
    • /
    • 2023
  • Today's software consists of very large source codes, which increases the importance and necessity of static analysis for high-quality products. Static analysis of the code needs to identify its defects and complexity, and by visualizing these problems we can guide developers and stakeholders in understanding them in the source code. Our previous visualization research focused only on storing the results of static analysis in database tables, querying the calculations for quality indicators (CK metrics, coupling, number of function calls, bad smells), and finally visualizing the extracted information. This approach has limitations in that it takes a lot of time and space to analyze a code using the information extracted from it through static analysis: since the tables are not normalized, extra space and time may be spent when the tables (classes, functions, attributes, etc.) are joined to extract information about the code. To solve these problems, we propose a normalized design of the database tables, an extraction mechanism for quality metric indicators inside the code, and a visualization of the extracted quality indicators on the code. Through this mechanism, we expect that the code visualization process will be optimized and that developers will be able to identify the modules that need refactoring. In the future, we will apply learning to some parts of this process.
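
The normalized-table idea can be illustrated with a tiny sqlite3 sketch, where a metric query becomes a single join over normalized tables. The schema and the WMC-style metric below are illustrative, not the paper's actual design:

```python
import sqlite3

# Normalized tables: each entity (class, function) lives in its own table,
# linked by a foreign key, so metrics are computed by joins instead of scans.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE classes   (class_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE functions (func_id  INTEGER PRIMARY KEY,
                        class_id INTEGER REFERENCES classes(class_id),
                        name TEXT, cyclomatic INTEGER);
""")
con.executemany("INSERT INTO classes VALUES (?, ?)", [(1, "Parser"), (2, "Lexer")])
con.executemany("INSERT INTO functions VALUES (?, ?, ?, ?)", [
    (1, 1, "parse",    7),
    (2, 1, "peek",     1),
    (3, 2, "tokenize", 5),
])

# WMC-style metric: sum of function complexities per class, via one join.
rows = con.execute("""
    SELECT c.name, SUM(f.cyclomatic) AS wmc
    FROM classes c JOIN functions f USING (class_id)
    GROUP BY c.class_id ORDER BY wmc DESC
""").fetchall()
print(rows)   # [('Parser', 8), ('Lexer', 5)]
```

Sorting classes by such a metric is exactly the kind of query that feeds a refactoring-priority visualization.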

Time series and deep learning prediction study Using container Throughput at Busan Port (부산항 컨테이너 물동량을 이용한 시계열 및 딥러닝 예측연구)

  • Seung-Pil Lee;Hwan-Seong Kim
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.391-393
    • /
    • 2022
  • In recent years, demand-forecasting technologies based on deep learning and big data have accelerated the smartification of e-commerce, logistics, and distribution. In particular, ports, which are the center of global transportation networks and modern intelligent logistics, are rapidly responding to changes in the global economy and the port environment caused by the Fourth Industrial Revolution. Port traffic forecasting has an important impact in various areas such as new port construction, port expansion, and terminal operation. Therefore, the purpose of this study is to compare time series analysis and deep learning analysis, which are often used for port traffic prediction, and to derive a prediction model suitable for future container forecasting at Busan Port. In addition, external variables related to changes in trade volume were selected through correlation analysis and applied to a multivariate deep learning prediction model. As a result, the LSTM error was low both in the univariate prediction model using only Busan Port container freight volume and in the multivariate prediction model using the external variables.
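
Data preparation for a multivariate LSTM forecaster typically follows a sliding-window scheme like the NumPy-only sketch below; the window length and toy data are illustrative, not the study's setup:

```python
import numpy as np

def make_windows(series: np.ndarray, lookback: int):
    """Turn a (T, F) multivariate series into LSTM-style samples:
    X has shape (T - lookback, lookback, F); y is the next value of
    feature 0 (here, container throughput)."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:, 0]
    return X, y

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error, a common yardstick for comparing forecast models."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy data: 24 months of throughput plus one external trade-volume variable.
T, F, lookback = 24, 2, 6
series = np.cumsum(np.ones((T, F)), axis=0)   # simple increasing trend
X, y = make_windows(series, lookback)
print(X.shape, y.shape)   # (18, 6, 2) (18,)
```

Each sample feeds the last `lookback` months of all variables to the network to predict next month's throughput.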


Deep Learning-Based Motion Reconstruction Using Tracker Sensors (트래커를 활용한 딥러닝 기반 실시간 전신 동작 복원 )

  • Hyunseok Kim;Kyungwon Kang;Gangrae Park;Taesoo Kwon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.5
    • /
    • pp.11-20
    • /
    • 2023
  • In this paper, we propose a novel deep learning-based motion reconstruction approach that facilitates the generation of full-body motions, including finger motions, while also enabling the online adjustment of motion generation delays. The proposed method combines the Vive Tracker with a deep learning method to achieve more accurate motion reconstruction while effectively mitigating foot-skating issues through the use of an Inverse Kinematics (IK) solver. The proposed method utilizes a trained AutoEncoder to reconstruct character body motions from tracker data in real time while offering the flexibility to adjust motion generation delays as needed. To generate hand motions suitable for the reconstructed body motion, we employ a Fully Connected Network (FCN). By combining the reconstructed body motion from the AutoEncoder with the hand motions generated by the FCN, we can generate full-body character motions that include hand movements. To alleviate foot-skating issues in motions generated by deep learning-based methods, we use an IK solver. By setting the trackers located near the character's feet as end-effectors for the IK solver, our method precisely controls and corrects the character's foot movements, thereby enhancing the overall accuracy of the generated motions. Through experiments, we validate the accuracy of motion generation in the proposed scheme, as well as the ability to adjust latency based on user input. Additionally, we assess the correction performance by comparing motions with and without the IK solver applied, focusing particularly on how it addresses the foot-skating issue in the generated full-body motions.
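
The foot-pinning correction can be illustrated with the classic analytic IK for a two-link planar chain (hip-knee-ankle); the paper's full-body IK solver is more general, so treat this as a toy sketch with illustrative link lengths:

```python
import numpy as np

def two_link_ik(target, l1, l2):
    """Analytic IK for a planar two-link chain: returns (hip, knee) joint
    angles placing the end-effector at `target`."""
    x, y = target
    d2 = x * x + y * y
    # Clamp for numerical safety; unreachable targets yield a fully extended limb.
    cos_knee = np.clip((d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2), -1.0, 1.0)
    knee = np.arccos(cos_knee)
    hip = np.arctan2(y, x) - np.arctan2(l2 * np.sin(knee), l1 + l2 * np.cos(knee))
    return hip, knee

def forward(hip, knee, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    return np.array([l1 * np.cos(hip) + l2 * np.cos(hip + knee),
                     l1 * np.sin(hip) + l2 * np.sin(hip + knee)])

target = np.array([0.5, -0.6])       # e.g. a tracker position near the foot
hip, knee = two_link_ik(target, l1=0.45, l2=0.45)
print(np.allclose(forward(hip, knee, 0.45, 0.45), target))   # True
```

Pinning the foot to the tracker position during stance frames is what removes the sliding ("foot skating") that raw network output exhibits.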

Automatic hand gesture area extraction and recognition technique using FMCW radar based point cloud and LSTM (FMCW 레이다 기반의 포인트 클라우드와 LSTM을 이용한 자동 핸드 제스처 영역 추출 및 인식 기법)

  • Seung-Tak Ra;Seung-Ho Lee
    • Journal of IKEEE
    • /
    • v.27 no.4
    • /
    • pp.486-493
    • /
    • 2023
  • In this paper, we propose an automatic hand gesture area extraction and recognition technique using an FMCW radar-based point cloud and LSTM. The proposed technique has the following originality compared to existing methods. First, unlike methods that use 2D images such as range-Doppler maps as input vectors, time-series point cloud input vectors are intuitive input data that represent movement occurring in front of the radar over time in the form of a coordinate system. Second, because the input vector is small, the deep learning model used for recognition can also be designed to be lightweight. The implementation of the proposed technique is as follows. Using the distance, speed, and angle information measured by the FMCW radar, a point cloud containing x, y, z coordinates and Doppler velocity information is constructed. The hand gesture area is extracted automatically by identifying the start and end points of the gesture from the Doppler points obtained through the speed information. The time-series point cloud corresponding to the extracted gesture area is then used for training and recognition with the LSTM deep learning model used in this paper. To evaluate the objective reliability of the proposed technique, we performed an experiment computing the MAE against other deep learning models and an experiment comparing the recognition rate with existing techniques. In the experiments, the time-series point cloud input vector + LSTM model achieved an MAE of 0.262 and a recognition rate of 97.5%. The lower the MAE and the higher the recognition rate, the better the result, demonstrating the effectiveness of the technique proposed in this paper.
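
The automatic gesture-area extraction step, which finds the start and end of a gesture from Doppler velocity, can be sketched as a simple threshold rule; the threshold value and toy data are illustrative assumptions:

```python
import numpy as np

def extract_gesture_span(doppler: np.ndarray, threshold: float = 0.1):
    """Return (start, end) frame indices of the gesture region: the first and
    last frames whose absolute Doppler velocity exceeds `threshold`.
    Returns None when no frame is active."""
    active = np.flatnonzero(np.abs(doppler) > threshold)
    if active.size == 0:
        return None
    return int(active[0]), int(active[-1])

# Toy frame-wise Doppler velocities: idle, gesture, idle.
doppler = np.array([0.0, 0.02, 0.5, 0.8, -0.6, 0.3, 0.05, 0.0])
print(extract_gesture_span(doppler))   # (2, 5)
```

The point-cloud frames inside the returned span would then form the time-series input sequence for the LSTM.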

Analysis of the Abstract Structure in Scientific Papers by Gifted Students and Exploring the Possibilities of Artificial Intelligence Applied to the Educational Setting (과학 영재의 논문 초록 구조 분석 및 이에 대한 인공지능의 활용 가능성 탐색)

  • Bongwoo Lee;Hunkoog Jho
    • Journal of The Korean Association For Science Education
    • /
    • v.43 no.6
    • /
    • pp.573-582
    • /
    • 2023
  • This study aimed to explore the potential use of artificial intelligence in science education for gifted students by analyzing the structure of abstracts written by students at a gifted science academy and comparing the performance of various elements extracted using AI. The study involved an analysis of 263 graduation theses from S Science High School over five years (2017-2021), focusing on the frequency and types of background, objectives, methods, results, and discussions included in their abstracts. This was followed by an evaluation of their accuracy using AI classification methods with fine-tuning and prompts. The results revealed that the frequency of elements in the abstracts written by gifted students followed the order of objectives, methods, results, background, and discussions. However, only 57.4% of the abstracts contained all the essential elements, such as objectives, methods, and results. Among these elements, fine-tuned AI classification showed the highest accuracy, with background, objectives, and results demonstrating relatively high performance, while methods and discussions were often inaccurately classified. These findings suggest the need for a more effective use of AI, through providing a better distribution of elements or appropriate datasets for training. Educational implications of these findings were also discussed.
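
As a point of contrast with the fine-tuned and prompted models evaluated in the study, a rhetorical-move tagger can be sketched as a naive keyword baseline; the cue words and toy abstract below are illustrative assumptions, not the study's method or data:

```python
# Naive keyword baseline for tagging abstract sentences with rhetorical moves
# (background / objective / method / result / discussion).
CUES = {
    "objective":  ["aim", "purpose", "this study"],
    "method":     ["measured", "analyzed", "experiment", "using"],
    "result":     ["found", "showed", "results"],
    "discussion": ["suggest", "implication", "future"],
}

def tag_sentence(sentence: str) -> str:
    """Return the first move whose cue words appear; default to background."""
    s = sentence.lower()
    for move, cues in CUES.items():
        if any(c in s for c in cues):
            return move
    return "background"

abstract = [
    "Plastic waste is increasing worldwide.",
    "This study aims to quantify microplastics in rivers.",
    "Samples were analyzed with spectroscopy.",
    "We found higher concentrations downstream.",
]
print([tag_sentence(s) for s in abstract])
```

Such a baseline makes the study's finding concrete: moves with strong surface cues (objectives, results) are easy, while methods and discussions need deeper models.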

A Study on the Research Trends in the Fourth Industrial Revolution in Korea Using Topic Modeling (토픽모델링을 활용한 4차 산업혁명 분야의 국내 연구 동향 분석)

  • Gi Young Kim;Dong-Jo Noh
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.34 no.4
    • /
    • pp.207-234
    • /
    • 2023
  • Since the advent of the Fourth Industrial Revolution, related studies have been conducted in various fields, including industry. In this study, to analyze domestic research trends on the Fourth Industrial Revolution, a keyword analysis and a topic modeling analysis based on the LDA algorithm were conducted on 2,115 papers included in the KCI from January 2016 to August 2023. As a result, first, the journals that published more than 30 academic papers related to the Fourth Industrial Revolution were digital convergence research, humanities society 21, e-business research, and learner-centered subject education research. Second, the topic modeling analysis yielded seven topics: "human and artificial intelligence," "data and personal information management," "curriculum change and innovation," "corporate change and innovation," "education change and jobs," "culture and arts and content," and "information and corporate policies and responses." Third, common research topics related to the Fourth Industrial Revolution are "change in the curriculum," "human and artificial intelligence," and "culture arts and content," and common keywords include "company," "information," "protection," "smart," and "system." Fourth, in the first half of the research period (2016-2019), topics in the field of education appeared at the top, but in the second half (2020-2023), topics related to corporate, smart, digital, and service innovation appeared at the top. Fifth, research topics tended to become more specific or subdivided in the second half of the period. This trend is interpreted as a result of the socioeconomic changes that occurred as core technologies of the Fourth Industrial Revolution were applied and utilized in various industrial fields after the COVID-19 pandemic. The results of this study are expected to provide useful information for identifying research trends in the field of the Fourth Industrial Revolution, establishing strategies, and conducting subsequent research.

A study on end-to-end speaker diarization system using single-label classification (단일 레이블 분류를 이용한 종단 간 화자 분할 시스템 성능 향상에 관한 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.536-543
    • /
    • 2023
  • Speaker diarization, which labels "who spoke when?" in speech with multiple speakers, has been studied with deep neural network-based end-to-end methods for labeling overlapped speech and for optimizing diarization models. Most deep neural network-based end-to-end speaker diarization systems solve a multi-label classification problem that predicts the labels of all speakers speaking in each frame. However, the performance of multi-label-based models varies greatly depending on how the threshold is set. In this paper, we study a speaker diarization system using single-label classification so that diarization can be performed without thresholds. The proposed model estimates labels from the model output by converting the speaker labels into a single label. To account for speaker label permutations during training, the proposed model uses a combination of Permutation Invariant Training (PIT) loss and cross-entropy loss. In addition, we study how to add residual connection structures to the model for effective learning of diarization models with deep structures. The experiments used the LibriSpeech database to generate simulated noisy two-speaker data. Compared with the baseline model in terms of Diarization Error Rate (DER), the proposed method can assign labels without a threshold and improves performance by about 20.7 %.
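
The single-label reformulation can be illustrated by mapping each combination of active speakers to its own "powerset" class, which lets the model pick one class per frame by argmax instead of thresholding each speaker independently. The encoding below is a sketch of that idea only; the paper's model and PIT-based training are not reproduced:

```python
import numpy as np

def to_powerset_labels(activity: np.ndarray) -> np.ndarray:
    """Convert frame-wise multi-label speaker activity (frames x speakers,
    values 0/1) into one class index per frame by treating each speaker
    combination as its own class. For two speakers:
    0 = silence, 1 = speaker 1, 2 = speaker 2, 3 = overlap."""
    powers = 1 << np.arange(activity.shape[1])   # [1, 2, 4, ...]
    return activity.astype(int) @ powers

# Frames: silence, speaker 1, overlap, speaker 2.
activity = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])
print(to_powerset_labels(activity))   # [0 1 3 2]
```

Because each frame now has exactly one correct class, a softmax with cross-entropy applies directly and no per-speaker decision threshold is needed at inference.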

A Study on Intelligent Self-Recovery Technologies for Cyber Assets to Actively Respond to Cyberattacks (사이버 공격에 능동대응하기 위한 사이버 자산의 지능형 자가복구기술 연구)

  • Se-ho Choi;Hang-sup Lim;Jung-young Choi;Oh-jin Kwon;Dong-kyoo Shin
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.137-144
    • /
    • 2023
  • Cyberattack technology is evolving to an unpredictable degree, and an attack is something that can happen "at any time" rather than "someday". Infrastructure that is becoming hyper-connected and global through cloud computing and the Internet of Things is an environment where cyberattacks can be more damaging than ever, and cyberattacks are still ongoing. Even if damage occurs due to external influences such as cyberattacks or natural disasters, intelligent self-recovery must evolve from a cyber-resilience perspective to minimize the downtime of cyber assets (OS, WEB, WAS, DB). In this paper, we propose an intelligent self-recovery technology that ensures sustainable cyber resilience when cyber assets fail to function properly due to a cyberattack. The original state and update history of cyber assets are managed in real time using timeslot design and snapshot backup technology. Damage situations are detected automatically in conjunction with a commercial file integrity monitoring program, and the downtime of cyber assets is minimized by intelligently analyzing the correlation between backup data and damaged files so that assets self-recover to an optimal state. In the future, we plan to research a pilot system that applies the unique functions of this self-recovery technology, together with an operating model that can learn and analyze self-recovery strategies appropriate for cyber assets in damaged states.
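
The detect-and-restore cycle built on snapshot backups and integrity checks can be sketched with the Python standard library. The file names and single-snapshot scheme here are illustrative; the paper's timeslot-based design with update-history management is richer:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Content hash used to detect tampering."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Minimal cycle: snapshot a file, detect modification by hash mismatch,
# then roll back from the snapshot.
workdir = Path(tempfile.mkdtemp())
asset = workdir / "config.ini"
asset.write_text("port=8080\n")

snapshot = workdir / "config.ini.snapshot"
shutil.copy2(asset, snapshot)              # snapshot backup
baseline = sha256(asset)                   # integrity baseline

asset.write_text("port=9999\n")            # simulated tampering
if sha256(asset) != baseline:              # integrity check detects damage
    shutil.copy2(snapshot, asset)          # self-recover from snapshot

print(asset.read_text().strip())   # port=8080
```

A production system would run the check continuously (or hook a file-integrity monitor) and choose among multiple timestamped snapshots rather than a single copy.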