• Title/Summary/Keyword: Generate Data

Search Results: 3,066

DRM-FL: A Decentralized and Randomized Mechanism for Privacy Protection in Cross-Silo Federated Learning Approach (DRM-FL: Cross-Silo Federated Learning 접근법의 프라이버시 보호를 위한 분산형 랜덤화 메커니즘)

  • Firdaus, Muhammad; Latt, Cho Nwe Zin; Aguilar, Mariz; Rhee, Kyung-Hyune
    • Annual Conference of KIPS / 2022.05a / pp.264-267 / 2022
  • Recently, federated learning (FL) has gained prominence as a viable approach for enhancing user privacy and data security by allowing collaborative multi-party model learning without exchanging sensitive data. Despite this, most current FL systems still depend on a centralized aggregator that builds the global model by gathering all submitted models from users, which can expose user privacy and invite various threats from malicious users. To address these issues, we propose a secure FL framework that employs differential privacy to counter membership inference attacks during collaborative FL model training and leverages blockchain to replace the centralized aggregator server.
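
A minimal sketch of the differential-privacy step described above, assuming a generic Gaussian mechanism applied to each silo's model update before it is shared; the clip norm and noise multiplier are illustrative placeholders, not the paper's DRM-FL parameters:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a local model update and add Gaussian noise (Gaussian mechanism).

    Illustrative only: not the paper's exact DRM-FL mechanism; clip_norm and
    noise_multiplier are placeholder values.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Each silo would perturb its update like this before submitting it to the
# blockchain network in place of a central aggregator.
print(privatize_update(np.random.randn(10)))
```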

Window defects identification method by using photos collected through the pre-handover inspection of multifamily housing (창호 하자 식별을 위한 컴퓨터 비전 기반 결함 탐지 방법)

  • Lee, Subin; Lee, Seulbi
    • Journal of Urban Science / v.11 no.2 / pp.1-8 / 2022
  • This study proposes a method to identify window defects using photos uploaded by occupants during the pre-handover inspection of multifamily housing. A total of 1,168 door images were acquired to generate training and validation data. Through the proposed algorithm, every pixel in the images labeled as a door was binarized using Otsu's threshold, and dark pixels were then identified as defects. Experimental results demonstrate that our computer vision-based defect identification method detects doors with a recall of 57.9% and door defects with a recall of 63.6%. Although automatically identifying building defects remains challenging because of photo distortion and brightness variation, this study has the potential to support better defect management. Ultimately, an improved pre-handover inspection may lead to increased customer satisfaction.
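
The core detection step in the abstract (Otsu binarization of a door region, with dark pixels flagged as defect candidates) can be sketched with OpenCV as follows; the file name and the 1% dark-pixel cutoff are assumptions for illustration, not the study's settings:

```python
import cv2
import numpy as np

# Hypothetical input crop of a door region (the file name is illustrative).
door = cv2.imread("door_crop.jpg", cv2.IMREAD_GRAYSCALE)
if door is None:
    raise FileNotFoundError("door_crop.jpg (illustrative path) not found")

# Binarize with Otsu's automatically chosen threshold.
_, binary = cv2.threshold(door, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Treat dark pixels as defect candidates; the 1% cutoff is an assumed value.
dark_ratio = np.count_nonzero(binary == 0) / binary.size
print(f"dark-pixel ratio: {dark_ratio:.3f}")
if dark_ratio > 0.01:
    print("possible window/door defect detected")
```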

The Principles and Applications of High-Throughput Sequencing Technologies

  • Jun-Yeong Lee
    • Development and Reproduction / v.27 no.1 / pp.9-24 / 2023
  • The advancement of high-throughput sequencing (HTS) technology has revolutionized the field of biology, including genomics, epigenomics, transcriptomics, and metagenomics. This technology has become a crucial tool in many areas of research, allowing scientists to generate vast amounts of genetic data at a much faster pace than traditional methods. With this increased speed and scale of data generation, researchers can now address critical questions and gain new insights into the inner workings of living organisms, as well as the underlying causes of various diseases. Although the first HTS technologies were introduced about two decades ago, they can still be challenging for newcomers to the field to understand and use effectively. This review aims to provide a comprehensive overview of the HTS technologies in common use today and their applications to genome sequencing, transcriptome analysis, DNA methylation, DNA-protein interaction, chromatin accessibility, three-dimensional genome organization, and the microbiome.

A Study on the Method of Non-Standard Cargo Volume Calculation Based on LiDar Sensor for Cargo Loading Optimization (화물 선적 최적화를 위한 LiDar 센서 기반 비규격 화물 체적산출 방법 연구)

  • Jeon, Young Joon; Kim, Ye Seul; Ahn, Sun Kyu; Jeong, Seok Chan
    • Journal of Korea Multimedia Society / v.25 no.4 / pp.559-567 / 2022
  • The optimal loading position is determined by measuring the volume and weight of cargo loaded onto non-standard cargo carriers. Currently, workers measure cargo volume manually; automating this task would remove that inefficiency. In this paper, we propose a real-time volume calculation method using a LiDAR sensor to automate the measurement of non-standard cargo. For this purpose, we used statistical techniques for data preprocessing and volume calculation, and applied a voxel grid filter to lighten the data, which is appropriate for real-time calculation. We implemented normal-vector estimation and triangle meshing to generate surfaces and used the alpha shapes algorithm for 3D modeling.
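
A rough Open3D sketch of the kind of pipeline the abstract outlines: voxel-grid downsampling, normal estimation, an alpha-shape surface, and a volume estimate. The random points stand in for LiDAR data, and the voxel size and alpha values are placeholders, not the paper's settings:

```python
import numpy as np
import open3d as o3d

# Stand-in for a LiDAR scan of a cargo item.
points = np.random.rand(5000, 3)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Voxel-grid filter to lighten the cloud for real-time processing.
pcd = pcd.voxel_down_sample(voxel_size=0.02)
pcd.estimate_normals()  # normals, as mentioned in the abstract

# Alpha-shape surface reconstruction, then a volume if the mesh is watertight.
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha=0.1)
if mesh.is_watertight():
    print("approximate cargo volume:", mesh.get_volume())
else:
    print("mesh not watertight; volume not computed for this toy input")
```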

Analysis of AI Content Detector Tools

  • Yo-Seob Lee; Phil-Joo Moon
    • International journal of advanced smart convergence / v.12 no.4 / pp.154-163 / 2023
  • With the rapid development of AI technology, ChatGPT and other AI content creation tools are becoming common and are being widely adopted by curious users. Unlike search engines, these tools generate results from user prompts, which puts their output at risk of inaccuracy or plagiarism. This lets unethical users create inappropriate content and raises serious concerns for education and corporate data security. AI content detection is therefore needed: AI-generated text must be identified to address misinformation and trust issues. Alongside the positive use of AI tools, monitoring and regulation of their ethical use are essential. When detecting AI-generated content, a detection tool is most effective when chosen to match the usage environment and purpose. In this paper, we collect data on AI content detection tools and compare and analyze their functions and characteristics to help meet these needs.

Towards a small language model powered chain-of-reasoning for open-domain question answering

  • Jihyeon Roh; Minho Kim; Kyoungman Bae
    • ETRI Journal / v.46 no.1 / pp.11-21 / 2024
  • We focus on open-domain question-answering tasks that involve a chain-of-reasoning, which are primarily implemented using large language models. With an emphasis on cost-effectiveness, we designed EffiChainQA, an architecture centered on the use of small language models. We employed a retrieval-based language model to address the limitations of large language models, such as the hallucination issue and the lack of updated knowledge. To enhance reasoning capabilities, we introduced a question decomposer that leverages a generative language model and serves as a key component in the chain-of-reasoning process. To generate training data for our question decomposer, we leveraged ChatGPT, which is known for its data augmentation ability. Comprehensive experiments were conducted using the HotpotQA dataset. Our method outperformed several established approaches, including the Chain-of-Thoughts approach, which is based on large language models. Moreover, our results are on par with those of state-of-the-art Retrieve-then-Read methods that utilize large language models.
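
A toy sketch of the question-decomposition idea for chain-of-reasoning QA; the prompt wording, parsing format, and mocked model output are assumptions, not the EffiChainQA implementation:

```python
# A question decomposer splits a multi-hop question into sub-questions that a
# retriever-reader can answer one at a time.
def build_decomposition_prompt(question: str) -> str:
    return (
        "Decompose the following multi-hop question into simpler sub-questions, "
        "one per line:\n"
        f"Question: {question}\nSub-questions:"
    )

def parse_subquestions(model_output: str) -> list[str]:
    return [line.strip("- ").strip() for line in model_output.splitlines() if line.strip()]

prompt = build_decomposition_prompt(
    "Which country is the director of the film Parasite from?"
)
# A generative language model (e.g., a small model fine-tuned on
# ChatGPT-augmented data) would complete the prompt; here its output is mocked.
mock_output = "- Who directed the film Parasite?\n- Which country is that director from?"
print(parse_subquestions(mock_output))
```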

Current Role of Conduction System Pacing in Patients Requiring Permanent Pacing

  • Dominik Beer; Pugazhendhi Vijayaraman
    • Korean Circulation Journal / v.54 no.8 / pp.427-453 / 2024
  • His bundle pacing (HBP) and left bundle branch pacing (LBBP) are novel methods that pace the cardiac conduction system directly. Although HBP was developed more than two decades ago, it has only recently moved into the clinical mainstream. In contrast to conventional cardiac pacing, conduction system pacing, including HBP and LBBP, utilizes the heart's native electrical system to rapidly disseminate the electrical impulse and generate a more synchronous ventricular contraction. Widespread adoption of conduction system pacing has resulted in a wealth of observational data, registries, and some early randomized controlled clinical trials. While much remains to be learned about conduction system pacing and its role in electrophysiology, the data available thus far are very promising. In this review, the authors trace the emergence of conduction system pacing and its contemporary role in patients requiring permanent cardiac pacing.

Crack growth prediction on a concrete structure using deep ConvLSTM

  • Man-Sung Kang; Yun-Kyu An
    • Smart Structures and Systems / v.33 no.4 / pp.301-311 / 2024
  • This paper proposes a deep convolutional long short-term memory (ConvLSTM)-based crack growth prediction technique for predictive maintenance of structures. Since cracks are one of the critical damage types in a structure, their regular inspection has been mandatory for structural safety and serviceability. To effectively establish a structural maintenance plan from the inspection results, crack propagation or growth prediction is essential. However, conventional crack prediction techniques based on mathematical models are typically not suitable for tracking the complex nonlinear crack propagation mechanisms of civil structures under harsh environmental conditions. To address this technical issue, a field data-driven crack growth prediction technique using ConvLSTM is newly proposed in this study. The proposed technique consists of four steps: (1) time-series crack image acquisition, (2) target image stabilization, (3) deep learning-based crack detection and quantification, and (4) crack growth prediction. The performance of the proposed technique is experimentally validated using a concrete mock-up specimen by applying step-wise bending loads to generate crack growth. The validation test results reveal a prediction accuracy of 94% on average compared with the ground truth obtained by field measurement.
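
A minimal Keras sketch of a ConvLSTM model of the kind the abstract describes, mapping a sequence of binarized crack maps to the next map; the input size, layer widths, and loss are illustrative choices, not the paper's configuration:

```python
import tensorflow as tf

# Input: a variable-length sequence of 64x64 single-channel crack maps.
# Output: the predicted next crack map (per-pixel crack probability).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 64, 64, 1)),  # (time, H, W, channels)
    tf.keras.layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True),
    tf.keras.layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=False),
    tf.keras.layers.Conv2D(1, (3, 3), padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```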

Multimodal Block Transformer for Multimodal Time Series Forecasting

  • Sungho Park
    • Annual Conference of KIPS / 2024.10a / pp.636-639 / 2024
  • Time series forecasting can be enhanced by integrating various data modalities beyond the past observations of the target time series. This paper introduces the Multimodal Block Transformer, a novel architecture that incorporates multivariate time series data alongside multimodal static information, which remains invariant over time, to improve forecasting accuracy. The core feature of this architecture is the Block Attention mechanism, designed to efficiently capture dependencies within multivariate time series by condensing multiple time series variables into a single unified sequence. This unified temporal representation is then fused with other modality embeddings to generate a non-autoregressive multi-horizon forecast. The model was evaluated on a dataset containing daily movie gross revenues and corresponding multimodal information about movies. Experimental results demonstrate that the Multimodal Block Transformer outperforms state-of-the-art models in both multivariate and multimodal time series forecasting.
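
A rough PyTorch sketch of the fusion idea described above: attention condenses the time series variables into a single token per time step, and the pooled temporal representation is concatenated with a static multimodal embedding to produce a non-autoregressive multi-horizon forecast. All layer sizes and the pooling choice are assumptions, not the authors' Block Attention implementation:

```python
import torch
import torch.nn as nn

class BlockFusionForecaster(nn.Module):
    """Illustrative sketch, not the paper's code."""
    def __init__(self, n_vars, d_model=64, static_dim=32, horizon=7):
        super().__init__()
        self.var_proj = nn.Linear(1, d_model)                  # each variable becomes a token
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, d_model))  # learned condensing query
        self.head = nn.Linear(d_model + static_dim, horizon)   # multi-horizon output

    def forward(self, x, static_emb):
        # x: (batch, time, n_vars); static_emb: (batch, static_dim)
        B, T, V = x.shape
        tokens = self.var_proj(x.reshape(B * T, V, 1))          # variables as tokens per step
        q = self.query.expand(B * T, 1, -1)
        condensed, _ = self.attn(q, tokens, tokens)              # one condensed token per step
        condensed = condensed.reshape(B, T, -1).mean(dim=1)      # pool the unified sequence
        return self.head(torch.cat([condensed, static_emb], dim=-1))

model = BlockFusionForecaster(n_vars=5)
out = model(torch.randn(8, 30, 5), torch.randn(8, 32))
print(out.shape)  # torch.Size([8, 7])
```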

The Development of Discharge Analysis Educational Program on NCS-Based for Medical Information Management (NCS 기반 의료정보관리를 위한 퇴원분석 교육프로그램 개발)

  • Choi, Joon-Young
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.5 / pp.957-964 / 2017
  • In this study, a program was developed to support training courses in NCS-based medical information management tasks and to give learners practical working knowledge. The program is an educational tool that generates medical information by analyzing medical record data after that data has been entered and stored. Because medical records vary in content and volume, their contents can be summarized and stored in the discharge analysis program to standardize the educational data. The program can also construct and manage a medical terminology DB, a related-terminology DB, and a medical-care DB organized by NCS ability unit element. Learners can gain the following from operating the program. First, they can understand medical information DB management rules by studying the structure of the database. Second, they can understand the structure and function of the diagnosis codes and medical practice codes entered into the discharge analysis program; these codes can be searched and analyzed by field. Third, they can advance their medical information management skills by entering and extracting data and generating medical information. In short, the developed discharge analysis program lets students gain knowledge of medical information management and improve their management competency by generating and analyzing medical record data.
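
An illustrative sketch only, assuming a tiny SQLite table of discharge records with diagnosis and procedure codes that can be searched and summarized by field, in the spirit of the exercises the abstract describes; the schema and codes are made up for illustration, not the program's actual database design:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE discharge (
    record_id      INTEGER PRIMARY KEY,
    diagnosis_code TEXT,   -- e.g., an ICD-style code (illustrative)
    procedure_code TEXT,   -- medical practice code (illustrative)
    discharge_date TEXT
)""")
con.executemany(
    "INSERT INTO discharge VALUES (?, ?, ?, ?)",
    [(1, "I10", "89.52", "2017-03-02"), (2, "E11.9", "99.29", "2017-03-05")],
)

# Search and summarize by diagnosis code, as in the program's exercises.
for row in con.execute(
    "SELECT diagnosis_code, COUNT(*) FROM discharge GROUP BY diagnosis_code"
):
    print(row)
```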