• Title/Summary/Keyword: Consistency Algorithm


Implementation of CNN-based Masking Algorithm for Post Processing of Aerial Image

  • CHOI, Eunsoo;QUAN, Zhixuan;JUNG, Sangwoo
    • Korean Journal of Artificial Intelligence
    • /
    • v.9 no.2
    • /
    • pp.7-14
    • /
    • 2021
  • Purpose: To address urban problems, empirical research is actively being conducted to implement smart cities based on various ICT technologies, and digital twin technology is essential to this effort. A digital twin is a 3D virtual environment that intuitively visualizes multidimensional real-world data. It is built on the convergence of GIS and BIM, and a large share of the construction effort goes into data pre-processing and labeling. Data quality is prioritized so that the twin stays consistent with reality, but visual inspection of the data has inherent limits. Therefore, to reduce the construction time and improve the quality of digital twin data, this study applied Mask R-CNN, a deep-learning-based masking algorithm, to detect buildings in aerial images. If these results are further advanced and used to build digital twin data, a high-quality smart city is expected to become realizable.
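
The paper does not publish code, but the building-masking step it describes can be sketched with torchvision's off-the-shelf Mask R-CNN. The checkpoint name, score threshold, and single "building" class below are assumptions for illustration, not details from the study.

```python
# Minimal sketch: building masking on an aerial tile with torchvision's Mask R-CNN.
# Assumes a checkpoint fine-tuned for one "building" class; file names are hypothetical.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

NUM_CLASSES = 2  # background + building (assumption, not from the paper)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, num_classes=NUM_CLASSES)
model.load_state_dict(torch.load("building_maskrcnn.pt", map_location="cpu"))  # hypothetical checkpoint
model.eval()

image = to_tensor(Image.open("aerial_tile.png").convert("RGB"))
with torch.no_grad():
    pred = model([image])[0]

# Keep confident detections and binarize their soft masks for downstream labeling.
keep = pred["scores"] > 0.7
masks = pred["masks"][keep, 0] > 0.5        # (N, H, W) boolean building masks
building_layer = masks.any(dim=0)           # union mask for the whole tile
print(f"{keep.sum().item()} buildings, {building_layer.float().mean():.1%} of tile covered")
```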

Investigation of expanding-folding absorbers with functionally graded thickness under axial loading and optimization of crushing parameters

  • Chunwei, Zhang;Limeng, Zhu;Farayi, Musharavati;Afrasyab, Khan;Tamer A., Sebaey
    • Steel and Composite Structures
    • /
    • v.45 no.6
    • /
    • pp.775-796
    • /
    • 2022
  • In this study, a new type of energy absorber with functionally graded thickness is investigated; these absorbers dissipate energy through expanding-folding processes. The expanding-folding absorber consists of two parts: a thin-walled aluminum matrix and a thin-walled steel mandrel. Previous studies have shown that such absorbers are more efficient than conventional ones. Here, the effect of a functionally graded thickness applied to the aluminum matrix (the section in which expansion occurs) was investigated. To this end, initial functions were defined for the matrix thickness, ascending or descending along the axis, and the study was carried out both experimentally and numerically. Comparison of the experimental data with the numerical results showed high consistency between the two. Finally, the best functionally graded thickness was identified by optimization with a third-order genetic algorithm. The optimization showed that a minimum thickness of 1.6 mm combined with an exponential coefficient of 3.25 gives the most favorable condition for descending-thickness absorbers.
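
As a rough illustration of the optimization stage, the sketch below runs a simple genetic search over the two parameters the abstract reports (minimum wall thickness and exponential coefficient). The objective is a placeholder surrogate, not the paper's finite-element crushing model, and the details of the third-order GA are not reproduced.

```python
# Minimal sketch of a genetic search over (t_min, n); the objective is a hypothetical
# surrogate so the loop has something smooth to optimize.
import random

def crush_metric(t_min, n):
    # Placeholder surrogate, NOT the paper's FE crushing response.
    return (t_min - 1.6) ** 2 + 0.1 * (n - 3.25) ** 2

def evolve(pop_size=30, generations=60, bounds=((1.0, 3.0), (0.5, 5.0))):
    pop = [tuple(random.uniform(*b) for b in bounds) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: crush_metric(*p))
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]              # crossover
            child = [min(max(c + random.gauss(0, 0.05), lo), hi)     # mutation + clipping
                     for c, (lo, hi) in zip(child, bounds)]
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=lambda p: crush_metric(*p))

best_t_min, best_n = evolve()
print(f"best t_min = {best_t_min:.2f} mm, exponent = {best_n:.2f}")
```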

Flame Diagnosis using Image Processing Technique

  • Kim, Song-Hwan;Lee, Tae-Young;Kim, Myun-Hee;Bae, Joon-Young;Lee, Sang-Ryong
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.3 no.2
    • /
    • pp.45-51
    • /
    • 2002
  • Interest in the environment has recently been increasing, so the criteria for evaluating burners have changed. In terms of efficient operation, a burner is rated better when its thermal efficiency is higher and the oxygen content of the exhaust gas is lower; in environmental terms, it must satisfy NOx, soot, and CO limits. Experienced operators generally judge the combustion status of a burner by the color of its flame, but no satisfactory quantitative solution exists, and the relationship between combustion status and flame color has not yet been established. This paper studies that relationship and describes the development of a real-time flame diagnosis technique that quantitatively evaluates the combustion state, such as the consistency of exhaust-gas components and the stability of the flame. A flame diagnosis technique based on an image processing algorithm is proposed, in which parameters extracted from the flame image are used as the input variables of the diagnostic system. First, linear regression and multiple regression algorithms were used to obtain a linear multinomial expression. Using the constructed inference algorithm, the NOx and CO content of the combustion gas was successfully inferred, and a combustion control system is expected to be realized in the near future.
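
The regression step described above can be sketched as follows, assuming flame images paired with gas-analyzer readings are available. The color features, file names, and use of scikit-learn are illustrative choices, not the paper's actual feature set.

```python
# Minimal sketch: extract simple color statistics from each flame image and fit a
# multiple linear regression that infers NOx and CO (dataset names are hypothetical).
import numpy as np
from PIL import Image
from sklearn.linear_model import LinearRegression

def color_features(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    means, stds = rgb.mean(axis=(0, 1)), rgb.std(axis=(0, 1))
    return np.concatenate([means, stds])             # 6 features: mean/std per channel

paths = [f"flame_{i:03d}.png" for i in range(200)]    # hypothetical image set
X = np.stack([color_features(p) for p in paths])
y = np.loadtxt("exhaust_nox_co.csv", delimiter=",")   # columns: NOx, CO (ppm), hypothetical

model = LinearRegression().fit(X, y)                  # multi-output linear regression
nox_pred, co_pred = model.predict(X[:1])[0]
print(f"inferred NOx = {nox_pred:.1f} ppm, CO = {co_pred:.1f} ppm")
```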

Common Rail Pressure Control Algorithm for Passenger Car Diesel Engines Using Quantitative Feedback Theory (QFT를 이용한 디젤엔진의 커먼레일 압력 제어알고리즘 설계 연구)

  • Shin, Jaewook;Hong, Seungwoo;Park, Inseok;Sunwoo, Myoungho
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.38 no.2
    • /
    • pp.107-114
    • /
    • 2014
  • This paper proposes a common rail pressure control algorithm for passenger car diesel engines. For handling the parameter-varying characteristics of common rail systems, the quantitative feedback theory (QFT) is applied to the design of a robust rail pressure control algorithm. The driving current of the pressure control valve and the common rail pressure are used as the input/output variables for the common rail system model. The model parameter uncertainty ranges are identified through experiments. Rail pressure controller requirements in terms of tracking performance, robust stability, and disturbance rejection are defined on a Nichols chart, and these requirements are fulfilled by designing a compensator and a prefilter in the QFT framework. The proposed common rail pressure control algorithm is validated through engine experiments. The experimental results show that the proposed rail pressure controller has a good degree of consistency under various operating conditions, and it successfully satisfies the requirements for reference tracking and disturbance rejection.
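
A drastically simplified illustration of the robustness check behind such a QFT design is sketched below: an uncertain rail-pressure plant is swept over assumed parameter ranges and a fixed compensator is tested against a sensitivity bound at each design frequency. The actual design shapes bounds on a Nichols chart and adds a prefilter; the plant structure, parameter ranges, and bound here are illustrative assumptions only.

```python
# Minimal sketch: verify a sensitivity bound over the plant template at each frequency.
import numpy as np

omegas = np.logspace(-1, 2, 40)                  # rad/s design grid
gains = np.linspace(0.5, 2.0, 5)                 # uncertain plant gain K (assumed range)
taus = np.linspace(0.05, 0.2, 5)                 # uncertain time constant tau (assumed range)

def plant(K, tau, w):
    return K / (1j * w * tau + 1)                # P(jw) = K / (tau*s + 1)

def compensator(w, kp=4.0, ki=20.0):
    return kp + ki / (1j * w)                    # PI compensator C(jw), placeholder gains

S_MAX = 2.0                                      # disturbance-rejection bound, |S| <= 6 dB
ok = True
for w in omegas:
    for K in gains:
        for tau in taus:
            L = plant(K, tau, w) * compensator(w)   # open loop over the template
            S = 1 / (1 + L)                         # sensitivity function
            if abs(S) > S_MAX:
                ok = False
print("all template points satisfy the sensitivity bound" if ok else "bound violated")
```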

Based on Multiple Reference Stations Ionospheric Anomaly Monitoring Algorithm on Consistency of Local Ionosphere (협역 전리층의 일관성을 이용한 다중 기준국 기반 전리층 이상 현상 감시 기법)

  • Song, Choongwon;Jang, JinHyeok;Sung, Sangkyung;Lee, Young Jae
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.7
    • /
    • pp.550-557
    • /
    • 2017
  • Ionospheric delay, which affects the accuracy of GNSS positioning, is caused by free electrons in the ionosphere, and its magnitude varies with solar activity, region, and time. A dual-frequency receiver can largely eliminate the delay by exploiting the difference in refractive index between the L1 and L2 frequencies, but a single-frequency receiver must rely on limited corrections such as an ionospheric model in standalone GNSS or pseudorange corrections (PRC) in differential GNSS. These corrections are generally effective under normal conditions, but they can become useless when the total electron content (TEC) increases sharply over a local area. In this paper, a monitoring algorithm for local ionospheric anomalies using multiple reference stations is proposed. For verification, the algorithm was applied to measurement data from an ionospheric storm day (20 November 2003). The proposed algorithm can detect local ionospheric anomalies and improve the reliability of ionospheric corrections for single-frequency receivers.
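
The consistency idea can be sketched in a few lines: each reference station's ionospheric delay estimate is compared against the local consensus, and a large departure is flagged as a possible local anomaly. The station values and threshold below are invented for illustration, not taken from the paper.

```python
# Minimal sketch of multi-reference-station consistency monitoring.
import statistics

# Vertical ionospheric delay estimates (metres) per reference station (hypothetical).
station_delay = {"STN1": 2.1, "STN2": 2.3, "STN3": 2.2, "STN4": 6.8, "STN5": 2.0}

THRESHOLD_M = 1.5   # allowed departure from the local consensus (assumed operating point)

consensus = statistics.median(station_delay.values())
anomalies = {sid: d for sid, d in station_delay.items()
             if abs(d - consensus) > THRESHOLD_M}

if anomalies:
    print(f"local ionospheric anomaly suspected at {sorted(anomalies)} "
          f"(consensus {consensus:.1f} m)")
else:
    print("ionospheric delays are consistent across the reference stations")
```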

Key Recovery Algorithm of Erroneous RSA Private Key Bits Using Generalized Probabilistic Measure (일반화된 확률 측도를 이용하여 에러가 있는 RSA 개인키를 복구하는 알고리즘)

  • Baek, Yoo-Jin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.26 no.5
    • /
    • pp.1089-1097
    • /
    • 2016
  • It is well known that, if additional information beyond a plaintext-ciphertext pair is available, breaking the RSA cryptosystem may be much easier than factorizing the RSA modulus. For example, Coppersmith showed that, given half of the least or most significant bits of one of the two RSA primes, the RSA modulus can be factorized in polynomial time. More recently, Henecka et al. showed that an RSA private key of the form (p, q, d, $d_p$, $d_q$) can be recovered efficiently whenever the bits of the private key are erroneous with an error rate of less than 23.7%. Notably, their algorithm is based on counting the matching bits between the candidate key bit string and the given decayed RSA private key bit string. Extending their algorithm, this paper proposes a new RSA private key recovery algorithm that uses a generalized probabilistic measure of the consistency between the candidate key bits and the given decayed RSA private key bits.
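
The scoring idea can be sketched as follows, comparing a candidate key-bit block against the decayed bits either by the matching-bit count of Henecka et al. or by a simple log-likelihood measure. The expand-and-prune search over (p, q, d, d_p, d_q) that the full algorithm performs is omitted, and the error rate and threshold are illustrative assumptions.

```python
# Minimal sketch of candidate-block scoring against decayed key bits.
import math

DELTA = 0.15          # assumed bit-flip rate of the decayed key material
BLOCK = 16            # bits scored per expansion step (assumption)

def matching_count(candidate, observed):
    return sum(c == o for c, o in zip(candidate, observed))

def log_likelihood(candidate, observed, delta=DELTA):
    # log P(observed | candidate) under independent symmetric bit flips
    return sum(math.log(1 - delta) if c == o else math.log(delta)
               for c, o in zip(candidate, observed))

def keep_candidate(candidate, observed, threshold=-0.35 * BLOCK):
    return log_likelihood(candidate, observed) >= threshold

cand = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
obs  = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]   # one disagreeing bit
print(matching_count(cand, obs), f"{log_likelihood(cand, obs):.2f}", keep_candidate(cand, obs))
```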

Performance Analysis of MixMatch-Based Semi-Supervised Learning for Defect Detection in Manufacturing Processes (제조 공정 결함 탐지를 위한 MixMatch 기반 준지도학습 성능 분석)

  • Ye-Jun Kim;Ye-Eun Jeong;Yong Soo Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.4
    • /
    • pp.312-320
    • /
    • 2023
  • Recently, there have been increasing attempts to replace defect detection inspections in the manufacturing industry with deep learning techniques. However, obtaining substantial high-quality labeled data to enhance the performance of deep learning models entails economic and temporal constraints. As a solution to this problem, semi-supervised learning, which uses only a limited amount of labeled data, has been gaining traction. This study assesses the effectiveness of semi-supervised learning in the defect detection process of manufacturing using the MixMatch algorithm. The MixMatch algorithm incorporates three dominant paradigms in the semi-supervised field: consistency regularization, entropy minimization, and generic regularization. The performance of semi-supervised learning based on the MixMatch algorithm was compared with that of supervised learning using defect image data from the metal casting process. For the experiments, the ratio of labeled data was adjusted to 5%, 10%, 25%, and 50% of the total data. At a labeled data ratio of 5%, semi-supervised learning achieved a classification accuracy of 90.19%, outperforming supervised learning by approximately 22%p. At a 10% ratio, it surpassed supervised learning by around 8%p, achieving 92.89% accuracy. These results demonstrate that semi-supervised learning can achieve significant outcomes even with a very limited amount of labeled data, suggesting that it can be valuable in real-world research and industrial settings where labeled data is scarce.
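
For reference, the core MixMatch steps the abstract names can be sketched as below: label guessing over K augmentations, temperature sharpening, and MixUp. The model, augmentation, and hyperparameters are placeholders standing in for the defect-image setup, not the settings used in the study.

```python
# Minimal sketch of MixMatch's label guessing, sharpening, and MixUp steps.
import torch
import torch.nn.functional as F

def sharpen(p, T=0.5):
    p = p ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)

def mixmatch_unlabeled_targets(model, unlabeled, augment, K=2, T=0.5):
    with torch.no_grad():
        preds = [F.softmax(model(augment(unlabeled)), dim=1) for _ in range(K)]
        return sharpen(torch.stack(preds).mean(dim=0), T)     # guessed soft labels

def mixup(x1, y1, x2, y2, alpha=0.75):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1 - lam)                                    # stay closer to the first input
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Usage sketch (shapes only): model maps (B, C, H, W) -> (B, num_classes),
# augment is any stochastic transform, and labeled targets are one-hot before MixUp.
```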

Dynamic Priority Search Algorithm Of Multi-Agent (멀티에이전트의 동적우선순위 탐색 알고리즘)

  • Jin-Soo Kim
    • The Journal of Engineering Research
    • /
    • v.6 no.2
    • /
    • pp.11-22
    • /
    • 2004
  • A distributed constraint satisfaction problem (distributed CSP) is a constraint satisfaction problem (CSP) in which variables and constraints are distributed among multiple automated agents. A CSP is the problem of finding a consistent assignment of values to variables. Even though the definition of a CSP is very simple, a surprisingly wide variety of AI problems can be formalized as CSPs. Similarly, various application problems in DAI (Distributed AI) that are concerned with finding a consistent combination of agent actions can be formalized as distributed CSPs. In recent years, many new backtracking algorithms for solving distributed CSPs have been proposed, but most of them share the drawback of assuming that the priority of agents is static. In this paper, we establish a basic algorithm for solving distributed CSPs, called the dynamic priority search algorithm, that is more efficient than common backtracking algorithms in which the priority order is static. In this algorithm, agents act asynchronously and concurrently based on their local knowledge without any global control, and they form a flexible organization in which the hierarchical order changes dynamically, while the completeness of the algorithm is guaranteed. We show that the dynamic priority search algorithm can solve various problems, such as the distributed 200-queens problem and the distributed graph-coloring problem, that common backtracking algorithms fail to solve within a reasonable amount of time. The experimental results on example problems show that this algorithm is far more efficient than a backtracking algorithm with a static priority order. The priority order represents a hierarchy of agent authority, i.e., the priority of decision-making. These results therefore imply that a flexible agent organization, in which the hierarchical order changes dynamically, performs better than one in which the hierarchical order is static and rigid. Furthermore, we describe how an agent can hold multiple variables in the search scheme.
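
A toy, centralized simulation of the dynamic-priority idea is sketched below for graph coloring: the lowest-priority conflicted agent revises its value against its higher-priority neighbours and raises its own priority when no consistent value exists. The asynchronous message passing and nogood handling of the actual distributed algorithm are not modelled, so this sketch is illustrative rather than complete.

```python
# Minimal, centralized simulation of dynamic-priority search on graph coloring.
def solve(neighbors, colors, max_steps=10_000):
    value = {v: colors[0] for v in neighbors}
    priority = {v: 0 for v in neighbors}
    for _ in range(max_steps):
        conflicted = [v for v in neighbors
                      if any(value[v] == value[u] for u in neighbors[v])]
        if not conflicted:
            return value
        # lowest-priority conflicted agent moves first (ties broken by name)
        v = min(conflicted, key=lambda a: (priority[a], a))
        higher = [u for u in neighbors[v] if (priority[u], u) > (priority[v], v)]
        ok = [c for c in colors if all(value[u] != c for u in higher)]
        if ok:
            value[v] = ok[0]
        else:
            priority[v] = max(priority.values()) + 1   # dynamic priority change
    return None

graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(solve(graph, colors=["red", "green", "blue"]))
```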


A Study on Reuse Technique of Software for SaaS Using Process Algebra

  • Hwang, Chigon;Shin, Hyoyoung;Lee, Jong-Yong;Jung, Kyedong
    • International journal of advanced smart convergence
    • /
    • v.3 no.2
    • /
    • pp.6-9
    • /
    • 2014
  • SaaS provides software hosted on cloud computing in the form of a service, which enables service functions to be extended by combining or reusing existing software. As an analysis technique, this paper suggests a method of verifying the reusability of a process by analyzing it with process algebra. The suggested method can confirm the reusability of existing software, ensure the consistency of modifications made by tenants or by request, and identify possibilities for combining processes.

Automatic Speech Database Verification Method Based on Confidence Measure

  • Kang Jeomja;Jung Hoyoung;Kim Sanghun
    • MALSORI
    • /
    • no.51
    • /
    • pp.71-84
    • /
    • 2004
  • In this paper, we propose an automatic speech database verification method (automatic verification) based on a confidence measure for large speech databases. The method verifies the consistency between a given transcription and the corresponding speech using the confidence measure. The automatic verification process consists of two stages: word-level likelihood computation and multi-level likelihood ratio computation. In the first stage, the word-level likelihood is calculated with the Viterbi decoding algorithm and segment information is produced. In the second stage, the word-level and phone-level likelihood ratios are calculated as confidence measures against an anti-phone model. With automatic verification, we achieved about a 61% error reduction and cut the verification time from roughly one month of manual work to one or two days.
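
The decision rule can be sketched as follows, assuming per-word acoustic log-likelihoods from the forced alignment and from an anti-phone (filler) model are already available. The words, scores, and threshold are invented for illustration, and the Viterbi alignment that produces them is not shown.

```python
# Minimal sketch of the confidence-measure decision for transcription verification.
def word_confidence(loglik_word, loglik_anti, n_frames):
    return (loglik_word - loglik_anti) / n_frames       # per-frame log-likelihood ratio

THRESHOLD = 0.0   # below this, the word is sent to a human verifier (assumed operating point)

# (word, forced-alignment log-likelihood, anti-model log-likelihood, frames) - hypothetical
aligned = [("안녕하세요", -1520.4, -1612.9, 96),
           ("감사합니다", -1880.2, -1835.7, 104)]

for word, ll_word, ll_anti, frames in aligned:
    score = word_confidence(ll_word, ll_anti, frames)
    status = "OK" if score > THRESHOLD else "CHECK"
    print(f"{word}\t{score:+.3f}\t{status}")
```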
