• Title/Summary/Keyword: Preprocessing Process


Image Restoration Filter using Combined Weight in Mixed Noise Environment (복합잡음 환경에서 결합가중치를 이용한 영상복원 필터)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.210-212 / 2021
  • In modern society, various types of digital equipment are becoming widespread under the influence of the 4th industrial revolution and are used in a wide range of fields such as automated processes, intelligent CCTV, the medical industry, robots, and drones. Accordingly, the preprocessing stage of image-based systems is growing in importance, and algorithms for effectively restoring images are drawing attention. In this paper, we propose a filter algorithm based on combined weights to restore images in a mixed noise environment. The proposed algorithm calculates a weight according to the spatial distance and a weight according to the difference between the pixel value of the input image and the pixel values inside the filtering mask. The final output is filtered by applying the combined weights, computed from these two weights, to the mask. To verify the performance of the proposed algorithm, we simulated it and compared it with existing filter algorithms.

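The combined weight described in the abstract above (a spatial-distance weight multiplied by a pixel-difference weight) is structurally similar to a bilateral filter. Below is a minimal sketch of that idea in Python/NumPy; the Gaussian kernels, the 5x5 mask size, and the sigma values are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def combined_weight_filter(img, mask_size=5, sigma_s=1.5, sigma_r=25.0):
    """Bilateral-style filter: weight = spatial weight * pixel-difference weight.
    img: 2-D grayscale image as a float array. Parameter values are illustrative."""
    pad = mask_size // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    out = np.zeros_like(img, dtype=np.float64)

    # Spatial weight depends only on the distance from the mask centre.
    yy, xx = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    w_spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))

    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + mask_size, j:j + mask_size]
            # Range weight depends on the difference from the centre pixel value.
            w_range = np.exp(-((window - padded[i + pad, j + pad])**2) / (2.0 * sigma_r**2))
            w = w_spatial * w_range                  # combined weight
            out[i, j] = np.sum(w * window) / np.sum(w)
    return out
```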

Prediction Model of Real Estate Transaction Price with the LSTM Model based on AI and Bigdata

  • Lee, Jeong-hyun;Kim, Hoo-bin;Shim, Gyo-eon
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.274-283 / 2022
  • Korea is facing a number of difficulties arising from rising housing prices. As housing takes the lion's share of personal assets, many difficulties are expected to arise from fluctuating housing prices. The purpose of this study is to create a housing price prediction model to prevent such risks and induce reasonable real estate purchases. This study made many attempts to understand real estate instability and to create an appropriate housing price prediction model. It predicted and validated housing prices using the LSTM technique, a type of deep learning in artificial intelligence. LSTM is a network in which the cell state and hidden state are recursively calculated, in a structure that adds a cell state, which plays a conveyor-belt role, to the hidden state of the existing RNN. The real sale prices of apartments in autonomous districts from January 2006 to December 2019 were collected through the Ministry of Land, Infrastructure and Transport's real sale price open system, and basic apartment and commercial district information was collected through the Public Data Portal and Seoul Metropolitan City data. The collected real sale price data were scaled based on the monthly average sale price, and a total of 168 data points were organized by preprocessing the respective data based on address. To predict prices, the LSTM implementation was conducted by setting the training period to 29 months (April 2015 to August 2017), the validation period to 13 months (September 2017 to September 2018), and the test period to 13 months (December 2018 to December 2019) according to the time series data set. As a result, this study obtained a prediction similarity of 76 percent. We designed a prediction model of real estate transaction prices with the LSTM model based on AI and big data; the final prediction model was created by collecting time series data, confirming that a model with 76 percent similarity can be built. This validates that predicting the rate of return through the LSTM method can be reliable.
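As a rough illustration of the LSTM setup described above, the sketch below fits a small Keras LSTM on windowed monthly price sequences with a chronological train/validation/test split. The window length, layer sizes, split ratios, and placeholder data are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

def make_windows(series, window=12):
    """Turn a 1-D monthly price series into (samples, window, 1) inputs and next-month targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

# 'prices' stands in for the monthly average sale prices (168 points in the paper).
prices = np.random.rand(168).astype("float32")
scaled = MinMaxScaler().fit_transform(prices.reshape(-1, 1)).ravel()
X, y = make_windows(scaled)

# Chronological split, mirroring the training/validation/test idea in the abstract.
n_train, n_val = int(len(X) * 0.7), int(len(X) * 0.15)
X_tr, y_tr = X[:n_train], y[:n_train]
X_va, y_va = X[n_train:n_train + n_val], y[n_train:n_train + n_val]
X_te, y_te = X[n_train + n_val:], y[n_train + n_val:]

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_tr, y_tr, validation_data=(X_va, y_va), epochs=50, verbose=0)
print("test MSE:", model.evaluate(X_te, y_te, verbose=0))
```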

A Study on Performance Improvement of Non-Profiling Based Power Analysis Attack against CRYSTALS-Dilithium (CRYSTALS-Dilithium 대상 비프로파일링 기반 전력 분석 공격 성능 개선 연구)

  • Sechang Jang;Minjong Lee;Hyoju Kang;Jaecheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.1 / pp.33-43 / 2023
  • The National Institute of Standards and Technology (NIST), which is leading the Post-Quantum Cryptography (PQC) standardization project, has announced four algorithms selected for standardization. In this paper, we demonstrate through experiments that private keys can be exposed by Correlation Power Analysis (CPA) and Differential Deep Learning Analysis (DDLA) attacks on the polynomial coefficient-wise multiplication algorithm that operates during signature generation with the CRYSTALS-Dilithium algorithm. In experiments on an ARM Cortex-M4, we succeeded in recovering private key coefficients using CPA or DDLA attacks. In particular, when StandardScaler preprocessing and continuous-wavelet-transformed power traces were used in the DDLA attack, the minimum number of power traces required for the attack was reduced and the Normalized Maximum Margin (NMM) value increased by about 3 times. Consequently, the proposed methods significantly improve attack performance.
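The preprocessing highlighted in this abstract, StandardScaler normalization plus a continuous wavelet transform (CWT) of the power traces, can be sketched roughly as follows. The wavelet choice, scale range, and trace shapes are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import pywt
from sklearn.preprocessing import StandardScaler

def preprocess_traces(traces, scales=np.arange(1, 31), wavelet="morl"):
    """traces: (n_traces, n_samples) power measurements.
    Returns standardized traces and their CWT scalograms of shape (n_traces, len(scales), n_samples)."""
    # Standardize each sample point across traces (zero mean, unit variance).
    std_traces = StandardScaler().fit_transform(traces)

    # Continuous wavelet transform of each standardized trace.
    scalograms = np.stack(
        [np.abs(pywt.cwt(t, scales, wavelet)[0]) for t in std_traces]
    )
    return std_traces, scalograms

# Synthetic traces standing in for the ARM Cortex-M4 measurements.
traces = np.random.randn(100, 2000)
std_traces, scalograms = preprocess_traces(traces)
```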

Livestock Telemedicine System Prediction Model for Human Healthy Life (인간의 건강한 삶을 위한 가축원격 진료 예측 모델)

  • Kang, Yun-Jeong;Lee, Kwang-Jae;Choi, Dong-Oun
    • Journal of Korea Entertainment Industry Association / v.13 no.8 / pp.335-343 / 2019
  • Healthy living is an essential element of human happiness. Quality food provides the basis for life, and the health of livestock, which provide meat and dairy products, has a direct impact on human health. In the case of calves, diarrhea is the cause of most diseases. In this paper, we use sensors to measure a calf's biometric data in order to diagnose calf diarrhea. The collected biometric data undergo a preprocessing process so they can be used as meaningful information. We record the calf's birth history and measure its biometrics. An ontology is constructed by inputting housing environment information together with biochemistry, immunity, and body measurement information for disease management. We build a knowledge base for predicting calf diarrhea through logical reasoning. Diarrhea is predicted with a knowledge base covering the names, causes, timing, and symptoms of livestock diseases. These knowledge bases can be expressed as a parent ontology and domain ontologies for prediction, and, as a result, treatment and prevention methods can be suggested.
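The logical-reasoning step described above can be illustrated, very loosely, with a rule over preprocessed biometric readings. The attribute names, thresholds, and rule below are hypothetical placeholders, not the ontology or rules used in the paper.

```python
from dataclasses import dataclass

@dataclass
class CalfRecord:
    """Preprocessed biometric and housing data for one calf (fields are illustrative)."""
    temperature_c: float
    feed_intake_kg: float
    humidity_pct: float

def predict_diarrhea_risk(r: CalfRecord) -> str:
    """Toy rule-based inference standing in for ontology-based reasoning."""
    score = 0
    if r.temperature_c > 39.5:      # hypothetical fever threshold
        score += 1
    if r.feed_intake_kg < 1.0:      # hypothetical reduced-intake threshold
        score += 1
    if r.humidity_pct > 80:         # hypothetical housing-environment condition
        score += 1
    return "high" if score >= 2 else "low"

print(predict_diarrhea_risk(CalfRecord(40.1, 0.8, 85)))   # -> "high"
```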

A Study on the Prediction of Ship Collision Based on Semi-Supervised Learning (준지도 학습 기반 선박충돌 예측에 대한 연구)

  • Ho-June Seok;Seung Sim;Jeong-Hun Woo;Jun-Rae Cho;Deuk-Jae Cho;Jong-Hwa Baek;Jaeyong Jung
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2023.05a / pp.204-205 / 2023
  • This study examined a prediction model for sending collision alarms for small fishing boats based on semi-supervised learning (SSL). Supervised learning (SL) requires a large amount of labeled data, but the labeling process takes considerable resources and time. This study used service data collected through a data pipeline linked to the 'intelligent maritime traffic information service' and data collected from real-sea experiments. Model accuracy improved when the model was trained not only on real-sea experiment data, whose labels were determined based on actual user satisfaction, but also on unlabeled service data.

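The semi-supervised idea in the abstract above, training on a small labeled set plus unlabeled service data, can be sketched with simple self-training via pseudo-labels. The classifier, confidence threshold, and feature shapes are assumptions for illustration; the paper does not specify this particular SSL scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_training(X_labeled, y_labeled, X_unlabeled, threshold=0.9, rounds=3):
    """Iteratively pseudo-label confident unlabeled samples and retrain the classifier."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    X_l, y_l = X_labeled.copy(), y_labeled.copy()
    X_u = X_unlabeled.copy()

    for _ in range(rounds):
        clf.fit(X_l, y_l)
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Move confidently pseudo-labeled samples into the training set.
        X_l = np.vstack([X_l, X_u[confident]])
        y_l = np.concatenate([y_l, proba[confident].argmax(axis=1)])
        X_u = X_u[~confident]
    return clf

# Placeholder arrays standing in for labeled experiment data and unlabeled service data.
X_lab, y_lab = np.random.randn(200, 8), np.random.randint(0, 2, 200)
X_unlab = np.random.randn(2000, 8)
model = self_training(X_lab, y_lab, X_unlab)
```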

Artificial intelligence-based blood pressure prediction using photoplethysmography signals

  • Yonghee Lee;YongWan Ju;Jundong Lee
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.155-160 / 2023
  • This paper presents a method for predicting blood pressure using photoplethysmography signals. First, after measuring the optical blood-flow signal, artifacts are removed through a preprocessing process to obtain a signal for learning. In addition, weight and height, which affect blood pressure, are measured as additional information. Next, a system is built that estimates systolic and diastolic blood pressure by learning from the photoplethysmography signal, height, and weight as input variables through an artificial intelligence algorithm, and the constructed system predicts systolic and diastolic blood pressure from these inputs. The proposed method can continuously predict blood pressure in real time, in an unconstrained manner, by receiving photoplethysmography signals, which reflect the state of the heart and blood vessels, together with the subject's height and weight. To confirm the usefulness of the proposed artificial-intelligence-based blood pressure prediction system, the measured blood pressure is compared with the predicted blood pressure.
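A rough sketch of the kind of model described above: PPG-derived inputs plus height and weight are fed to a small neural network that outputs systolic and diastolic pressure. The network shape, segment length, and placeholder training data are assumptions; the abstract does not state the actual architecture.

```python
import numpy as np
from tensorflow import keras

# Placeholder data: 1000 samples, each a 250-point PPG segment plus height (cm) and weight (kg).
ppg = np.random.randn(1000, 250).astype("float32")
height_weight = np.random.uniform([150, 50], [190, 100], size=(1000, 2)).astype("float32")
bp = np.random.uniform([90, 60], [160, 100], size=(1000, 2)).astype("float32")  # [systolic, diastolic]

ppg_in = keras.Input(shape=(250,), name="ppg")
hw_in = keras.Input(shape=(2,), name="height_weight")
x = keras.layers.Dense(64, activation="relu")(ppg_in)
x = keras.layers.Concatenate()([x, hw_in])
x = keras.layers.Dense(32, activation="relu")(x)
out = keras.layers.Dense(2, name="systolic_diastolic")(x)

model = keras.Model([ppg_in, hw_in], out)
model.compile(optimizer="adam", loss="mae")
model.fit([ppg, height_weight], bp, epochs=10, verbose=0)
```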

Development of AI-based Smart Agriculture Early Warning System

  • Hyun Sim;Hyunwook Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.67-77 / 2023
  • This study presents innovative research conducted in a smart farm environment, developing a deep-learning-based disease and pest detection model and applying it to an intelligent Internet of Things (IoT) platform to explore new possibilities for implementing digital agricultural environments. The core of the research was the integration of pseudo-labeling, recent ImageNet models such as RegNet and EfficientNet, and preprocessing methods to detect various diseases and pests in complex agricultural environments with high accuracy. To this end, ensemble learning techniques were applied to maximize the accuracy and stability of the model, and the model was evaluated using various performance indicators such as mean Average Precision (mAP), precision, recall, accuracy, and box loss. Additionally, the SHAP framework was used to gain a deeper understanding of the model's prediction criteria, making the decision-making process more transparent. This analysis provided significant insights into how the model considers various variables to detect diseases and pests.
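The ensemble idea in this abstract can be illustrated with simple soft voting over image classifiers. The paper works with detection-style outputs evaluated by mAP and box loss, so this classification-level sketch with torchvision backbones is a simplified stand-in; the specific models, unloaded weights, and input size are assumptions.

```python
import torch
from torchvision import models

# Two backbones standing in for the RegNet / EfficientNet members of the ensemble.
# weights=None keeps the sketch offline; in practice fine-tuned weights would be loaded.
members = [
    models.efficientnet_b0(weights=None),
    models.regnet_y_400mf(weights=None),
]

def ensemble_predict(batch):
    """Soft-voting ensemble: average the softmax outputs of all member models."""
    probs = []
    with torch.no_grad():
        for m in members:
            m.eval()
            probs.append(torch.softmax(m(batch), dim=1))
    return torch.stack(probs).mean(dim=0)

# Dummy batch of four 224x224 RGB images.
batch = torch.randn(4, 3, 224, 224)
avg_probs = ensemble_predict(batch)
pred_classes = avg_probs.argmax(dim=1)
```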

Efficient Semi-automatic Annotation System based on Deep Learning

  • Hyunseok Lee;Hwa Hui Shin;Soohoon Maeng;Dae Gwan Kim;Hyojeong Moon
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.6 / pp.267-275 / 2023
  • This paper presents the development of specialized software for annotating volumes of interest on 18F-FDG PET/CT images, with the goal of facilitating the study and diagnosis of head and neck cancer (HNC). To achieve an efficient annotation process, we employed an SE-Norm-Residual-layer-based U-Net model, which showed outstanding proficiency in segmenting cancerous regions within 18F-FDG PET/CT scans of HNC cases. A manual annotation function was also integrated, allowing researchers and clinicians to validate and refine annotations based on dataset characteristics. The workspace displays fused PET and CT images, enhancing user convenience through simultaneous visualization. The performance of the deep learning model was validated using the HECKTOR 2021 dataset, and semi-automatic annotation functionalities were subsequently developed. We began with image preprocessing, including resampling, normalization, and co-registration, followed by an evaluation of the deep learning model's performance. The model was integrated into the software as an initial automatic segmentation step, and users can manually refine the pre-segmented regions to correct false positives and false negatives. Annotation images are saved along with their corresponding 18F-FDG PET/CT fusion images, enabling their use across various domains. In this study, we developed semi-automatic annotation software designed to efficiently generate annotated lesion images for HNC research and diagnosis. The findings indicate that this software surpasses conventional tools, particularly for HNC-specific annotation with 18F-FDG PET/CT data. Consequently, the developed software offers a robust solution for producing annotated datasets, driving advances in the study and diagnosis of HNC.
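Two of the preprocessing steps named above, resampling and intensity normalization (co-registration is omitted here), can be sketched with NumPy/SciPy as follows. The target voxel spacing, placeholder volume, and z-score scheme are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_volume(volume, spacing, target_spacing=(1.0, 1.0, 1.0)):
    """Resample a 3-D volume from its current voxel spacing (mm) to a target spacing."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, factors, order=1)   # linear interpolation

def zscore_normalize(volume):
    """Zero-mean, unit-variance intensity normalization."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

# Placeholder PET volume with 4 mm slice thickness and 2 mm in-plane spacing.
pet = np.random.rand(64, 128, 128).astype("float32")
pet_iso = zscore_normalize(resample_volume(pet, spacing=(4.0, 2.0, 2.0)))
```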

A method for metadata extraction from a collection of records using Named Entity Recognition in Natural Language Processing (자연어 처리의 개체명 인식을 통한 기록집합체의 메타데이터 추출 방안)

  • Chiho Song
    • Journal of Korean Society of Archives and Records Management / v.24 no.2 / pp.65-88 / 2024
  • This pilot study explores a method of extracting metadata values and descriptions from records using named entity recognition (NER), a technique in natural language processing (NLP), a subfield of artificial intelligence. The study focuses on handwritten records from the Guro Industrial Complex, produced during the 1960s and 1970s, comprising approximately 1,200 pages and 80,000 words. After preprocessing the records, which included digitization, the study employed a publicly available language API based on Google's Bidirectional Encoder Representations from Transformers (BERT) language model to recognize entity names within the text. As a result, 173 person names and 314 organization and institution names were extracted from the Guro Industrial Complex's past records. These extracted entities are expected to serve as direct search terms for accessing the contents of the records. Furthermore, the study identified challenges that arose when applying the theoretical methodology of NLP to real-world records consisting of semi-structured text, and it presents potential solutions and implications to consider when addressing these issues.
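The entity-extraction step can be sketched with the Hugging Face token-classification pipeline. The abstract only mentions a publicly available BERT-based language API, so the checkpoint name below is a hypothetical placeholder, and the entity label names depend on whichever Korean NER model is actually used.

```python
from collections import Counter
from transformers import pipeline

# "some-org/korean-ner-bert" is a hypothetical checkpoint name, not the API used in the paper.
ner = pipeline("token-classification",
               model="some-org/korean-ner-bert",
               aggregation_strategy="simple")

def extract_entities(pages):
    """Run NER over digitized record pages and tally person / organization mentions."""
    counts = Counter()
    for text in pages:
        for ent in ner(text):
            if ent["entity_group"] in ("PER", "ORG"):   # label names depend on the model
                counts[(ent["entity_group"], ent["word"])] += 1
    return counts

pages = ["Digitized text of a Guro Industrial Complex record page ..."]   # placeholder input
print(extract_entities(pages).most_common(10))
```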

A Study on the Construction of Financial-Specific Language Model Applicable to the Financial Institutions (금융권에 적용 가능한 금융특화언어모델 구축방안에 관한 연구)

  • Jae Kwon Bae
    • Journal of Korea Society of Industrial Information Systems / v.29 no.3 / pp.79-87 / 2024
  • Recently, the importance of pre-trained language models (PLMs) has been emphasized for natural language processing (NLP) tasks such as text classification, sentiment analysis, and question answering. Korean PLMs show high performance on NLP in general-purpose domains but are weak in domains such as finance, medicine, and law. The main goal of this study is to propose a language-model training process and method for building a financial-specific language model that performs well not only in the financial domain but also in general-purpose domains. The five steps for building the financial-specific language model are (1) financial data collection and preprocessing, (2) selection of a model architecture such as a PLM or foundation model, (3) domain data learning and instruction tuning, (4) model verification and evaluation, and (5) model deployment and utilization. Through this, we present a method for constructing pre-training data that exploits the characteristics of the financial domain, along with efficient LLM training methods such as adaptive learning and instruction tuning.
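Step (3) above, further training a general-purpose Korean PLM on financial-domain text, can be sketched with the Hugging Face Trainer as domain-adaptive masked-language-model pre-training. The base checkpoint, placeholder corpus, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import Dataset

base = "klue/bert-base"   # an example general-purpose Korean PLM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Placeholder financial-domain sentences standing in for the collected and preprocessed corpus.
corpus = Dataset.from_dict({"text": ["Interest rate hikes affect loan demand ...",
                                     "Quarterly earnings disclosures move stock prices ..."]})
tokenized = corpus.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                       batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finance-plm", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()   # continued (domain-adaptive) pre-training with the masked-LM objective
```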