• Title/Summary/Keyword: 데이터 확장 기법 (data expansion techniques)


An Analysis Study on Patent Trends for Scan-to-BIM Related Technology (Scan-to-BIM 관련기술 특허동향 분석연구)

  • Ryu, Jeong-Won; Byun, Na-Hyang
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.107-114 / 2020
  • Technologies related to Scan-to-BIM, a BIM-based reverse-engineering technique, are beginning to be actively introduced in the A.E.C. industry, and the scalability of the technology is growing considerably. This study uses patent analysis based on objective data to find the right direction for Korean Scan-to-BIM technology by identifying trends in Korea, the United States, Europe, and Japan. Using the WIPSON patent search system, we surveyed previous research on patent analysis related to building technology, theoretically reviewed Scan-to-BIM technology, and collected published patents. We verified the collection process and extracted valid patents. With the valid patent data, we analyzed the annual trend of patent applications, national trends, technological trends through International Patent Classification (IPC) codes, the types of the top 20 major applicants, and family-patent trends.

SITM Attacks on Skinny-128-384 and Romulus-N (Skinny-128-384와 Romulus-N의 SITM 공격)

  • Park, Jonghyun; Kim, Jongsung
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.5 / pp.807-816 / 2022
  • See-In-The-Middle (SITM) is an analysis technique that uses side-channel information for differential cryptanalysis. The attack collects power traces from the unmasked middle rounds of a block cipher implementation to select plaintext pairs that satisfy the attacker's differential pattern, and uses those pairs in differential cryptanalysis to recover the key. Romulus, one of the finalists of the NIST Lightweight Cryptography standardization competition, is based on the tweakable block cipher Skinny-128-384+. In this paper, the SITM attack is applied to Skinny-128-384 implemented with 14-round partial masking. The attack not only increases the depth by one round but also significantly reduces the time/data complexity to 2^14.93/2^14.93. Depth refers to the round position of the block cipher at which the power traces are collected, and it makes it possible to measure the number of masked rounds required to counter this attack. Furthermore, we extend the attack to Romulus-N, the nonce-based AE mode of Romulus, and show that the structural features of its tweakey allow an attack with lower complexity than on Skinny-128-384.
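To make the pair-filtering step concrete, here is a minimal, hypothetical sketch (not the authors' code): it checks whether the XOR difference of two observed middle-round states is active exactly at the byte positions of an assumed truncated-differential pattern. The 16-byte state layout and the ACTIVE_BYTES mask are illustrative assumptions.

```python
# Hypothetical sketch of the pair-filtering step in a SITM-style attack:
# keep a plaintext pair only if the XOR difference of the recovered
# middle-round states is active exactly in the expected byte positions.

ACTIVE_BYTES = {0, 5, 10, 15}  # assumed truncated-differential pattern


def matches_pattern(state_a: bytes, state_b: bytes) -> bool:
    """True if the byte-wise XOR difference is nonzero exactly on
    ACTIVE_BYTES and zero everywhere else (16-byte state assumed)."""
    diff = bytes(a ^ b for a, b in zip(state_a, state_b))
    return all(
        (diff[i] != 0) == (i in ACTIVE_BYTES) for i in range(len(diff))
    )


# Example: filter candidate pairs using states inferred from traces.
pairs = [
    (bytes(16), bytes([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1])),
]
kept = [p for p in pairs if matches_pattern(*p)]
print(len(kept))  # -> 1: the difference is active exactly on ACTIVE_BYTES
```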

Classification of Characteristics in Two-Wheeler Accidents Using Clustering Techniques (클러스터링 기법을 이용한 이륜차 사고의 특징 분류)

  • Heo, Won-Jin; Kang, Jin-ho; Lee, So-hyun
    • Knowledge Management Research / v.25 no.1 / pp.217-233 / 2024
  • The demand for two-wheelers has increased in recent years, driven by the growing delivery culture, leading to a rise in their numbers. Although two-wheelers are economically efficient in congested traffic conditions, reckless driving and ambiguous traffic laws have turned two-wheeler accidents into a significant social issue. Given the high fatality rate associated with two-wheelers, the severity and risk of these accidents are considerable; it is therefore crucial to understand their characteristics by analyzing accident attributes. In this study, the characteristics of two-wheeler accidents were categorized by applying the K-prototypes algorithm to two-wheeler accident data. As a result, the accidents were divided into four clusters, each showing distinct traits in the roads where accidents occurred, the major laws violated, the types of accidents, and the times of occurrence. By tailoring enforcement methods and regulations to the specific characteristics of each type of accident, the incidence of two-wheeler accidents in metropolitan areas can be reduced, enhancing road safety. Furthermore, by applying machine learning techniques to urban transportation and safety, this study adds to the body of related literature.
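As a minimal sketch of the clustering step described above (not the authors' code), the K-prototypes implementation in the open-source kmodes package can cluster mixed numeric/categorical accident records into four groups. The column layout, feature names, and toy values below are assumptions for illustration.

```python
import numpy as np
from kmodes.kprototypes import KPrototypes  # pip install kmodes

# Hypothetical accident records: [speed_limit, road_type, violation, hour].
# Columns 1 and 2 are categorical; columns 0 and 3 are numeric.
X = np.array([
    [60, "intersection", "signal_violation", 18],
    [30, "local_road", "unsafe_distance", 9],
    [80, "highway", "speeding", 23],
    [50, "intersection", "signal_violation", 19],
    [40, "local_road", "illegal_turn", 14],
    [70, "highway", "speeding", 2],
    [60, "intersection", "signal_violation", 17],
    [30, "local_road", "unsafe_distance", 8],
], dtype=object)

# Four clusters, matching the number of accident types found in the paper.
kproto = KPrototypes(n_clusters=4, init="Cao", random_state=0)
labels = kproto.fit_predict(X, categorical=[1, 2])
print(labels)                       # cluster index per accident record
print(kproto.cluster_centroids_)    # mixed numeric/categorical centroids
```

K-prototypes combines k-means distance on the numeric columns with k-modes matching on the categorical ones, which is why it suits accident records that mix both kinds of attributes.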

Performance Analysis of Implementation on Image Processing Algorithm for Multi-Access Memory System Including 16 Processing Elements (16개의 처리기를 가진 다중접근기억장치를 위한 영상처리 알고리즘의 구현에 대한 성능평가)

  • Lee, You-Jin; Kim, Jea-Hee; Park, Jong-Won
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.3 / pp.8-14 / 2012
  • Improving the speed of image processing is in great demand with the spread of high-quality visual media and massive image applications such as 3D TV, movies, and AR (augmented reality). A SIMD computer attached to a host computer can accelerate various image processing and massive data operations. MAMS is a multi-access memory system which, along with multiple processing elements (PEs), is adequate for building a high-performance pipelined SIMD machine. MAMS supports simultaneous access to pq data elements within a horizontal, a vertical, or a block subarray with a constant interval at an arbitrary position in an M×N array of data elements, where the number of memory modules (MMs), m, is a prime number greater than pq. MAMS-PP4, the first realization of the MAMS architecture, consists of four PEs in a single chip and five MMs. This paper presents the implementation of image processing algorithms and a performance analysis for MAMS-PP16, which consists of 16 PEs with 17 MMs, as an extension of the prior work MAMS-PP4. The newly designed MAMS-PP16 has a 64-bit instruction format and an application-specific instruction set. The authors developed a simulator of the MAMS-PP16 system on which the implemented algorithms can be executed, and performed the performance analysis by running the implemented image processing algorithms on this simulator. The results verify the consistent response of MAMS-PP16 on the pyramid operation in image processing algorithms compared with a Pentium-based serial processor: executing the pyramid operation on MAMS-PP16 yields consistent processing times, whereas the serial processor's response times vary.
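The conflict-free access property described above can be illustrated with a small sketch. The mapping below is an assumed skewed-storage scheme in the spirit of the description, not necessarily the exact MAMS address function: with m prime and m > pq, it sends any horizontal, vertical, or p×q block subarray to pq distinct memory modules.

```python
# Illustrative skewed-storage mapping (an assumption, not the exact MAMS
# address function): element (i, j) of the M x N array is stored in
# module (i + p*j) mod m, with m prime and m > p*q.
p, q, m = 4, 4, 17          # MAMS-PP16 case: pq = 16 PEs, 17 memory modules

def module(i: int, j: int) -> int:
    return (i + p * j) % m

def distinct(cells) -> bool:
    mods = [module(i, j) for i, j in cells]
    return len(set(mods)) == len(mods)

i0, j0 = 3, 5               # arbitrary anchor position in the array
horizontal = [(i0, j0 + k) for k in range(p * q)]
vertical   = [(i0 + k, j0) for k in range(p * q)]
block      = [(i0 + r, j0 + c) for r in range(p) for c in range(q)]

# All three access patterns hit 16 distinct modules -> conflict-free,
# so the 16 PEs can read their operands simultaneously.
assert distinct(horizontal) and distinct(vertical) and distinct(block)
```

The check succeeds because, modulo a prime m, the offsets p·k, k, and r + p·c each take pq distinct values, which is exactly why m must be a prime greater than pq.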

The Improvement of Convergence Characteristic using the New RLS Algorithm in Recycling Buffer Structures

  • Kim, Gwang-Jun; Kim, Chun-Suck
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.4 / pp.691-698 / 2003
  • We extend the use of the method of least squares to develop a recursive algorithm for the design of adaptive transversal filters such that, given the least-squares estimate of the filter's tap-weight vector at iteration n-1, we may compute the updated estimate at iteration n upon the arrival of new data. We begin the development of the RLS algorithm by reviewing some basic relations that pertain to the method of least squares; then, by exploiting a relation in matrix algebra known as the matrix inversion lemma, we develop the RLS algorithm. An important feature of the RLS algorithm is that it utilizes information contained in the input data extending back to the instant at which the algorithm was initiated. In this paper, we propose a new tap-weight-update RLS algorithm for an adaptive transversal filter with a data-recycling buffer structure. We show that the learning curve of the RLS algorithm with the data-recycling buffer converges faster than that of the existing RLS algorithm in terms of mean square error versus iteration number; the resulting rate of convergence is also typically an order of magnitude faster than that of the simple LMS algorithm. Three-dimensional simulations of the mean square error with respect to the degree of channel amplitude distortion and the number of data-recycling buffers show the number of samples required to converge to the specified value. This improvement of the convergence characteristics is achieved in that the convergence speed of the mean square error increases by a factor of B with the number of data-recycling buffers in the proposed RLS algorithm.
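Since the abstract walks through the standard RLS recursion derived via the matrix inversion lemma, a minimal NumPy sketch of that textbook recursion may help. This is the conventional RLS without the paper's data-recycling buffer; the forgetting factor, the initialization constant, and the toy channel are assumed values.

```python
import numpy as np

def rls(x, d, order=4, lam=0.99, delta=100.0):
    """Textbook RLS for an adaptive transversal filter.
    x: input samples, d: desired samples, lam: forgetting factor.
    Returns the tap-weight vector and the a priori errors."""
    w = np.zeros(order)
    P = delta * np.eye(order)        # inverse-correlation-matrix estimate
    errors = []
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, newest sample first
        k = P @ u / (lam + u @ P @ u)      # gain via matrix inversion lemma
        e = d[n] - w @ u                   # a priori estimation error
        w = w + k * e                      # tap-weight update
        P = (P - np.outer(k, u @ P)) / lam # recursive inverse update
        errors.append(e)
    return w, np.array(errors)

# Toy usage: identify an FIR channel h from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = rls(x, d)
print(np.round(w, 3))   # converges toward h
```

The data-recycling variant proposed in the paper would re-feed buffered past samples through this same update several times per new sample; that detail is omitted here.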

Image Compression Using Zerotrees of Wavelet Transform Coefficients (웨이블렛 변환 계수의 제로트리를 이용한 영상압축)

  • Seo, Han-Seog; Park, Se-Won; Yim, Hwa-Young
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.11 no.3 / pp.55-62 / 2012
  • EZW (Embedded Zerotree Wavelet) is a technique that wavelet-transforms an original image and then compresses it using the transformed coefficients. The algorithm has a simple structure and remarkable effectiveness. This paper modifies EZW to improve its compression efficiency. Fundamentally, EZW evaluates the significance of the wavelet-transformed coefficients and encodes them into four categories that reflect both their significance and their location information. The four categories are represented by the symbols P, N, Z, and T: P and N carry the coefficient data and significance, whereas Z and T carry location information. Each symbol is emitted during the dominant pass. It is here, however, that Z and T data are stored redundantly, leading to an unnecessary increase in data volume. In this paper, we propose a modified version of the Embedded Zerotree Wavelet algorithm, designed to efficiently reduce the volume of redundantly stored data using four additionally inserted symbols; we name it EEZW (Extended Embedded Zerotree Wavelet). The efficiency of the proposed algorithm is verified on a number of images and confirmed by outstanding PSNR (peak signal-to-noise ratio) values, which measure image quality.
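To make the dominant pass concrete, here is a minimal sketch (with an assumed quadtree layout, not the authors' EEZW code) that classifies each coefficient against a threshold into the four standard EZW symbols: P (significant positive), N (significant negative), T (zerotree root), and Z (isolated zero).

```python
# Minimal EZW dominant-pass classifier over a quadtree of coefficients.
# Assumed structure: each node holds a coefficient value and its children.

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def subtree_insignificant(node, threshold) -> bool:
    """True if the node and all of its descendants are below threshold."""
    if abs(node.value) >= threshold:
        return False
    return all(subtree_insignificant(c, threshold) for c in node.children)

def classify(node, threshold) -> str:
    if node.value >= threshold:
        return "P"      # significant positive
    if node.value <= -threshold:
        return "N"      # significant negative
    if subtree_insignificant(node, threshold):
        return "T"      # zerotree root: whole subtree insignificant
    return "Z"          # isolated zero: some descendant is significant

# Example: the root is insignificant but a grandchild is significant,
# so the root cannot be a zerotree root and is coded as "Z".
tree = Node(3, [Node(-1, [Node(40)]), Node(2)])
print(classify(tree, threshold=32))   # -> "Z"
```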

Design and Implementation of an Analysis System for Diseases and Protein Based on Components (컴포넌트 기반의 질병 및 단백질 분석 시스템의 설계 및 구현)

  • Park, Jun-Ho; Yeo, Myung-Ho; Lee, Ji-Hee; He, Li; Kang, Gwang-Goo; Kwon, Hyun-Ho; Lee, Jin-Ju; Lee, Hyo-Joon; Lim, Jong-Tae; Jang, Yong-Jin; WeiWei, Bao; Kim, Mi-Kyoung; Ryu, Jae-Woon; Kang, Tae-Ho; Kim, Hak-Yong; Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.10 no.12 / pp.59-69 / 2010
  • Research on proteins for disease analysis and new drug development is one of the most important themes in biotechnology. Since the analysis of diseases and proteins must handle large-scale data, purely experimental approaches are no longer used. Recently, research on disease and protein analysis has been accelerated by sharing and connecting various experimental data, combining biotechnology with IT. However, many biotechnology researchers have difficulty handling IT-based protein analysis tools. To solve this problem, data analysis tools have been developed through cooperation between IT researchers and biologists. The existing tools, however, still have the problem that it is very hard for biologists to extend their functions and to use them. In this paper, we design and implement an effective component-based analysis system for diseases and proteins that alleviates the problems of the existing data analysis systems.

SAR Image Impulse Response Analysis in Real Clutter Background (실제 클러터 배경에서 SAR 영상 임펄스 응답 특성 분석)

  • Jung, Chul-Ho; Jung, Jae-Hoon; Oh, Tae-Bong; Kwang, Young-Kil
    • Korean Journal of Remote Sensing / v.24 no.2 / pp.99-106 / 2008
  • A synthetic aperture radar (SAR) system is of great interest in many civil and military applications because of its all-weather, illumination-independent imaging capability. SAR image quality parameters such as spatial resolution, peak-to-sidelobe ratio (PSLR), and integrated sidelobe ratio (ISLR) are normally estimated by modeling the impulse response function (IRF), which is obtained from various system design parameters such as altitude, operational frequency, and PRF. In such IRF modeling, however, the background clutter surrounding the IRF is generally neglected. In this paper, an analysis method for SAR image quality in a real background clutter environment is proposed. First, SAR raw data of a point scatterer are generated based on various system parameters. Second, the generated raw data are focused into an ideal IRF by the range-Doppler algorithm (RDA). Finally, background clutter obtained from an image of a currently operating SAR system is applied to the IRF. In addition, image quality is precisely analyzed with zooming and interpolation for effective extraction of the IRF, and the effect of the proposed methodology is presented with several simulation results under an assumed Doppler-rate estimation error.
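As a minimal sketch of the two sidelobe metrics named above (using common textbook definitions on a 1-D cut of the impulse response, not the authors' analysis code): PSLR compares the highest sidelobe peak with the mainlobe peak, and ISLR compares total sidelobe energy with mainlobe energy. The first-null mainlobe convention is an assumption.

```python
import numpy as np

def pslr_islr_db(irf_cut: np.ndarray):
    """PSLR and ISLR in dB from a 1-D cut of |IRF|.
    The mainlobe is taken to extend to the first nulls around the
    peak (an assumed, common convention)."""
    power = irf_cut.astype(float) ** 2
    peak = int(np.argmax(power))
    lo = peak
    while lo > 0 and power[lo - 1] < power[lo]:
        lo -= 1                       # walk left to the first null
    hi = peak
    while hi < len(power) - 1 and power[hi + 1] < power[hi]:
        hi += 1                       # walk right to the first null
    mainlobe = power[lo:hi + 1]
    sidelobes = np.concatenate([power[:lo], power[hi + 1:]])
    pslr = 10 * np.log10(sidelobes.max() / power[peak])
    islr = 10 * np.log10(sidelobes.sum() / mainlobe.sum())
    return pslr, islr

# Example: a sinc-shaped response, as produced by ideal SAR focusing.
x = np.linspace(-8, 8, 1025)
cut = np.abs(np.sinc(x))
print(pslr_islr_db(cut))   # PSLR near -13.3 dB for an unweighted sinc
```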

Discovering abstract structure of unmet needs and hidden needs in familiar use environment - Analysis of Smartphone users' behavior data (일상적 사용 환경에서의 잠재니즈, 은폐니즈의 추상구조 발견 - 스마트폰 사용자의 행동데이터 수집 및 해석)

  • Shin, Sung Won; Yoo, Seung Hun
    • Design Convergence Study / v.16 no.6 / pp.169-184 / 2017
  • Many needs remain unexpressed, compared with the expressed ones, in familiar products and services used in daily life such as smartphones. Finding these 'inconveniences in familiar use' makes it possible to create opportunities for expanding value in existing product and service areas. Much related work has studied the definition of hidden needs and methods for finding them, but because such work typically focuses on new product or service development, it is difficult to apply to hidden needs in familiar use. In this study, we redefine hidden needs in the context of daily familiarity and approach them in a new way. Because users are unable to express what they want, and because these needs are too complex to be explained clearly, we cannot treat the problem as a purely quantitative one. For this reason, we selected smartphone screenshots as the basic data type: user behavior data that excludes all verbal description. To overcome the limitations of qualitative analysis based on unstructured data, we apply integrated rules and patterns to the individual data using qualitative coding techniques. From this process, we not only extract meaningful clues for understanding hidden needs but also confirm, by reviewing their relevance to actual market trends, the potential of this approach as a way to discover hidden needs. Although the process of finding hidden needs is not easy to systematize, we expect this study to serve as a reference frame for further research on discovering hidden needs.

Corporate Bankruptcy Prediction Model using Explainable AI-based Feature Selection (설명가능 AI 기반의 변수선정을 이용한 기업부실예측모형)

  • Gundoo Moon; Kyoung-jae Kim
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.241-265 / 2023
  • A corporate insolvency prediction model serves as a vital tool for objectively monitoring the financial condition of companies. It enables timely warnings, facilitates responsive actions, and supports the formulation of effective management strategies to mitigate bankruptcy risks and enhance performance. Investors and financial institutions utilize default prediction models to minimize financial losses. As interest in utilizing artificial intelligence (AI) technology for corporate insolvency prediction grows, extensive research has been conducted in this domain. However, there is an increasing demand for explainable AI models in corporate insolvency prediction, emphasizing interpretability and reliability. The SHAP (SHapley Additive exPlanations) technique has gained significant popularity and has demonstrated strong performance in various applications. Nonetheless, it has limitations such as computational cost, processing time, and scalability with the number of variables. This study introduces a novel approach to variable selection that reduces the number of variables by averaging SHAP values computed on bootstrapped data subsets instead of on the entire dataset. This technique aims to improve computational efficiency while maintaining excellent predictive performance. To obtain classification results, we train random forest, XGBoost, and C5.0 models using the carefully selected, highly interpretable variables. The classification accuracy of the ensemble model, generated through soft voting with the goal of a high-performance model design, is compared with that of the individual models. The study leverages data from 1,698 Korean light-industrial companies and employs bootstrapping to create distinct data groups. Logistic regression is employed to calculate SHAP values for each data group, and their averages are computed to derive the final SHAP values. The proposed model enhances interpretability and aims to achieve superior predictive performance.
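A minimal sketch of the variable-selection step described above, using the open-source shap and scikit-learn packages. The synthetic data, the number of bootstrap subsets, the subset size, and the top-k cutoff are all assumptions for illustration, not the paper's settings.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the financial-ratio dataset (an assumption).
X, y = make_classification(n_samples=1698, n_features=30, random_state=0)

rng = np.random.default_rng(0)
n_boot = 10                               # assumed number of bootstrap subsets
mean_abs_shap = np.zeros(X.shape[1])

for _ in range(n_boot):
    # Draw a bootstrap subset instead of explaining the entire dataset.
    idx = rng.choice(len(X), size=len(X) // 2, replace=True)
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    explainer = shap.LinearExplainer(model, X[idx])
    shap_values = explainer.shap_values(X[idx])
    # Accumulate the mean absolute SHAP value per feature, averaged
    # across the bootstrap subsets.
    mean_abs_shap += np.abs(shap_values).mean(axis=0) / n_boot

top_k = 10                                # assumed cutoff
selected = np.argsort(mean_abs_shap)[::-1][:top_k]
print("selected feature indices:", selected)
```

The selected variables would then feed the random forest, XGBoost, and C5.0 classifiers combined by soft voting (for example, scikit-learn's VotingClassifier with voting='soft').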