• Title/Summary/Keyword: time-weighted model

Improvement of learning concrete crack detection model by weighted loss function

  • Sohn, Jung-Mo;Kim, Do-Soo;Hwang, Hye-Bin
    • Journal of the Korea Society of Computer and Information / v.25 no.10 / pp.15-22 / 2020
  • In this study, we propose an improvement method that trains a U-Net model to detect fine concrete cracks by applying a weighted loss function. Because cracks in concrete threaten structural safety, it is important to check their condition periodically and take prompt initial measures. Currently, however, inspection relies mainly on visual checks in which an inspector directly examines and evaluates the structure with the naked eye. This has limitations not only in accuracy but also in cost, time, and safety. Accordingly, deep learning technologies are being researched so that minute cracks in concrete structures can be detected quickly and accurately. When crack detection was attempted with U-Net in this study, it was confirmed that minute cracks could not be detected. When the performance of the model trained with the proposed weighted loss function was verified, a highly reliable accuracy of 99% or higher and an F1 score of 89% to 92% were obtained. The effectiveness of the proposed training improvement was verified through results that detect cracks accurately and clearly.
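
The weighting idea in the abstract above can be illustrated with a short sketch: a pixel-wise binary cross-entropy in which the rare crack pixels receive a larger weight than background pixels. This is a minimal NumPy sketch under assumed weights; the actual weighting scheme and values used in the paper are not specified here.

```python
import numpy as np

def weighted_bce(y_true, y_pred, w_crack=10.0, w_background=1.0, eps=1e-7):
    """Pixel-wise binary cross-entropy with class weights.

    y_true: binary mask (1 = crack pixel, 0 = background)
    y_pred: predicted crack probabilities in (0, 1)
    w_crack / w_background: per-class weights; crack pixels are rare,
    so they receive the larger weight (values here are illustrative).
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    per_pixel = -(w_crack * y_true * np.log(y_pred)
                  + w_background * (1.0 - y_true) * np.log(1.0 - y_pred))
    return per_pixel.mean()

# toy usage on a 4-pixel "image"
mask = np.array([0.0, 0.0, 1.0, 0.0])
pred = np.array([0.1, 0.2, 0.6, 0.05])
print(weighted_bce(mask, pred))
```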

SOC Verification Based on WGL

  • Du, Zhen-Jun;Li, Min
    • Journal of Korea Multimedia Society / v.9 no.12 / pp.1607-1616 / 2006
  • The growing market for multimedia and digital signal processing requires significant data-path portions of SoCs. However, the common models for verification are not suitable for SoCs. A novel model, WGL (Weighted Generalized List), is proposed, which is based on the general-list decomposition of polynomials, with three different weights and manipulation rules introduced to achieve node sharing and canonicity. Timing parameters and operations on them are also considered. Examples show that the word-level WGL is the only model that linearly represents the common word-level functions and that the bit-level WGL is especially suitable for arithmetic-intensive circuits. The model is proved to be a uniform and efficient model for both bit-level and word-level functions. Then, based on the WGL model, a backward-construction logic-verification approach is presented, which reduces the time and space complexity for multipliers to polynomial complexity (time complexity less than $O(n^{3.6})$ and space complexity less than $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is also presented in order to implement complex high-level verification; it combines order computation and coefficient solving and adopts an efficient backward approach. The construction complexity is much lower than that of existing methods, e.g., the construction time for multipliers grows at a power of less than 1.6 in the size of the input word without increasing the maximal space required. The WGL model and the verification methods based on it show their theoretical and practical significance in SoC design.

Hierarchical and Incremental Clustering for Semi Real-time Issue Analysis on News Articles (준 실시간 뉴스 이슈 분석을 위한 계층적·점증적 군집화)

  • Kim, Hoyong;Lee, SeungWoo;Jang, Hong-Jun;Seo, DongMin
    • The Journal of the Korea Contents Association / v.20 no.6 / pp.556-578 / 2020
  • There are many studies on how to analyze issues from real-time news streams, but few analyze issues hierarchically from news articles, and even a previous study on hierarchical issue analysis suffers from clustering that slows down as the number of news articles grows. In this paper, we propose hierarchical and incremental clustering for semi real-time issue analysis on news articles. We trained a Siamese-neural-network-based weighted cosine similarity model, applied this model to the k-means algorithm used to build word clusters, and converted news articles to document vectors by using these word clusters. Finally, we initialized an issue cluster tree from the document vectors, updated this tree whenever new articles arrived, and analyzed issues in semi real-time. Through experiments and evaluation, we showed that performance improved by up to about 0.26 in terms of NMI. In terms of the speed of incremental clustering, the proposed method was also about 10 times faster than the previous approach.
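
As a rough illustration of the weighted cosine similarity and incremental assignment described above, the sketch below applies a per-dimension weight vector to document vectors and assigns an incoming article to the closest existing cluster or opens a new one. The weight vector and threshold are assumptions; the Siamese training and the issue cluster tree itself are omitted.

```python
import numpy as np

def weighted_cosine(a, b, w):
    """Cosine similarity with per-dimension weights w (all vectors 1-D)."""
    aw, bw = a * np.sqrt(w), b * np.sqrt(w)
    return aw @ bw / (np.linalg.norm(aw) * np.linalg.norm(bw) + 1e-12)

def assign_incrementally(doc_vec, centroids, w, threshold=0.5):
    """Return the index of the best matching cluster, or -1 to open a new one."""
    if not centroids:
        return -1
    sims = [weighted_cosine(doc_vec, c, w) for c in centroids]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else -1

# toy usage with made-up weights and two existing cluster centroids
w = np.array([1.0, 0.2, 2.0])
centroids = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
print(assign_incrementally(np.array([0.1, 0.0, 0.9]), centroids, w))  # -> 1
```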

Development of a Model Combining Covariance Matrices Derived from Spatial and Temporal Data to Estimate Missing Rainfall Data (공간 데이터와 시계열 데이터로부터 유도된 공분산행렬을 결합한 강수량 결측값 추정 모형)

  • Sung, Chan Yong
    • Journal of Environmental Science International / v.22 no.3 / pp.303-308 / 2013
  • This paper proposes a new method for estimating missing values in time series rainfall data. The proposed method integrates the two most widely used estimation methods, the general linear model (GLM) and ordinary kriging (OK), by taking a weighted average of the covariance matrices derived from each of the two methods. The proposed method was cross-validated using daily rainfall data at thirteen rain gauges in the Hyeong-san River basin. The goodness-of-fit of the proposed method was higher than those of GLM and OK, which can be attributed to the weighting algorithm designed to minimize errors caused by violations of the assumptions of the two existing methods. This result suggests that the proposed method is more accurate in estimating missing values in time series rainfall data, especially in regions where the assumptions of the existing methods are not met, i.e., where rainfall varies by season and topography is heterogeneous.
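
A simplified sketch of the combination step: take a weighted average of a spatially derived and a temporally derived covariance matrix, then estimate the missing gauge value with a simple-kriging-style predictor. The weight `alpha` and the covariance inputs are placeholders, not the calibration procedure of the paper.

```python
import numpy as np

def combined_estimate(cov_spatial, cov_temporal, obs, alpha=0.5):
    """Estimate a missing value at index 0 from observed values at indices 1..n.

    cov_spatial / cov_temporal: (n+1) x (n+1) covariance matrices derived from
    spatial (OK-style) and temporal (GLM-style) information.
    obs: observed (mean-removed) values at the other gauges, length n.
    alpha: weight on the spatial covariance (illustrative value).
    """
    C = alpha * cov_spatial + (1.0 - alpha) * cov_temporal
    c0 = C[0, 1:]        # covariances between the target gauge and observed gauges
    C_oo = C[1:, 1:]     # covariances among the observed gauges
    weights = np.linalg.solve(C_oo, c0)   # simple-kriging weights
    return weights @ obs
```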

Multi-objective optimization of submerged floating tunnel route considering structural safety and total travel time

  • Eun Hak Lee;Gyu-Jin Kim
    • Structural Engineering and Mechanics / v.88 no.4 / pp.323-334 / 2023
  • The submerged floating tunnel (SFT) has been regarded as an emerging infrastructure technology that efficiently and safely connects land and islands. The SFT route problem is an essential part of the SFT planning and design phase, with significant impacts on the surrounding environment. This study aims to develop an optimization model that considers both transportation and structural factors. The SFT routing problem was optimized with respect to two objective functions, minimizing total travel time and cumulative strain, using NSGA-II. The proposed model was applied to the section from Mokpo to Jeju Island using road network and wave observation data. The proposed model produced a Pareto optimum curve showing a negative correlation between total travel time and cumulative strain. Based on the inflection points on the Pareto optimum curve, four optimal SFT routes were selected and compared to identify their pros and cons. The travel time savings of the four selected alternatives were estimated to range from 9.9% to 10.5% compared to the non-implemented scenario. In terms of demand, there was a substantial shift in the number of passenger and freight trips from airways to railways and roadways. Cumulative strain, calculated from SFT distance, support structure, and wave energy, was found to be low when the route passed through small islands. The proposed model supports decision-making in the planning and design phases of SFT projects, ultimately contributing to safe, efficient, and sustainable SFT infrastructure.
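
The Pareto-front idea behind the bi-objective route search can be illustrated with a small non-dominated filter over candidate routes scored by (total travel time, cumulative strain); the candidate values below are made up and NSGA-II itself is not reproduced.

```python
def dominates(a, b):
    """Route a dominates b if it is no worse in both objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates (travel_time, cumulative_strain) not dominated by any other."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

routes = [(95.0, 2.1), (92.0, 2.6), (101.0, 1.8), (98.0, 2.4)]  # illustrative values
print(pareto_front(routes))  # (98.0, 2.4) is dominated by (95.0, 2.1) and drops out
```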

A Combined Process Control Procedure by Monitoring and Repeated Adjustment

  • Park, Changsoon
    • Communications for Statistical Applications and Methods / v.7 no.3 / pp.773-788 / 2000
  • Statistical process control (SPC) and engineering process control (EPC) are based on different strategies for process quality improvement. SPC reduces process variability by detecting and eliminating special causes of process variation, while EPC reduces process variability by adjusting compensatory variables to keep the quality variable close to target. Recently, there has been a need for a process control procedure which combines the two strategies. This paper considers a combined scheme which simultaneously applies SPC and EPC techniques to reduce the variation of a process. The process model under consideration is an integrated moving average (IMA) process with a step shift. The EPC part of the scheme adjusts the process back to target at fixed monitoring intervals, which is referred to as a repeated adjustment scheme. The SPC part of the scheme uses an exponentially weighted moving average (EWMA) of the observed deviations from target to detect special causes. A Markov chain model is developed to relate the scheme's expected cost per unit time to the design parameters of the combined control scheme. The expected cost per unit time is composed of off-target cost, adjustment cost, monitoring cost, and false alarm cost.
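
A minimal sketch of the two ingredients, an EWMA monitoring statistic (the SPC part) combined with a repeated adjustment back to target at fixed intervals (the EPC part). The smoothing constant, control limit, and adjustment interval are illustrative; the IMA dynamics, cost components, and Markov chain analysis of the paper are not reproduced.

```python
import numpy as np

def ewma_with_repeated_adjustment(deviations, lam=0.2, limit=3.0, adjust_every=5):
    """Monitor deviations from target with an EWMA chart while adjusting
    the process back to target every `adjust_every` observations.

    Returns the indices at which the EWMA signals a possible special cause.
    """
    sigma = np.std(deviations)                   # crude scale estimate
    sigma_z = sigma * np.sqrt(lam / (2 - lam))   # asymptotic EWMA standard deviation
    z, offset, signals = 0.0, 0.0, []
    for t, d in enumerate(deviations):
        e = d - offset                           # deviation seen after past adjustments
        z = lam * e + (1 - lam) * z
        if abs(z) > limit * sigma_z:
            signals.append(t)                    # SPC part: possible special cause
        if (t + 1) % adjust_every == 0:
            offset += e                          # EPC part: adjust back to target
    return signals
```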

An Application of Total Quality Management Efficiency Model in the Korean Distribution Industry

  • Yoo, Han-Joo;Park, Jong-Woo;Song, Gwang-Suk
    • International Journal of Quality Innovation / v.10 no.1 / pp.25-36 / 2009
  • The purpose of this study is to analyze the efficiency of service quality activities themselves by using the DEA Model, in contrast to previous quality evaluation methods, as an attempt to evaluate the service quality activities of the distribution industry. Furthermore, by complementing the shortfalls of the weighted values in the DEA Model, it recommends a DEA/PS Model that is appropriate for the evaluation of service quality activities. Based on this model, the study proposes the SQAE Model, an evaluation tool that complements the traditional measuring method. According to the analysis of 18 sample distribution businesses, there were discrepancies across businesses between the results of the Traditional Scoring System and the Evaluation Measuring System. Therefore, it is most desirable not only to be active in service quality activities but also to increase efficiency at the same time.
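
For context, the basic DEA efficiency score that such models build on can be computed with a small linear program (the input-oriented CCR envelopment form). The sketch below uses made-up data and does not include the DEA/PS weighting refinements or the SQAE evaluation described in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR efficiency of DMU `o`.

    inputs:  (n_dmu, n_inputs) array
    outputs: (n_dmu, n_outputs) array
    Solves: min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o,  lam >= 0.
    """
    n, m = inputs.shape
    _, s = outputs.shape
    # decision variables: [theta, lam_1, ..., lam_n]
    c = np.r_[1.0, np.zeros(n)]
    # input constraints: sum_j lam_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-inputs[o].reshape(m, 1), inputs.T])
    b_in = np.zeros(m)
    # output constraints: -sum_j lam_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -outputs.T])
    b_out = -outputs[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun  # efficiency score theta in (0, 1]

X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 400.0]])  # illustrative inputs
Y = np.array([[100.0], [80.0], [90.0]])                       # illustrative outputs
print([round(ccr_efficiency(X, Y, k), 3) for k in range(3)])
```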

A Study on the Design of an Adaptive pole Placement Controller with Improved Convergence Properties (개선된 수렴 특성을 갖는 적응 극배치 제어기의 설계에 관한 연구)

  • 홍연찬;김종환
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.3 / pp.311-319 / 1992
  • In this paper, a direct adaptive pole placement controller for an unknown linear time-invariant single-input single-output nonminimum phase plant is proposed. To design this direct adaptive pole placement controller, the auxiliary signals are introduced. Consequently, a linear equation error model is formulated for estimating both the controller parameters and the additional auxiliary parameters. To estimate the controller parameters and the additional auxiliary parameters, the exponentially weighted least-squares algorithm is implemented, and a method of selecting the characteristic polynomials of the sensitivity function filters is proposed. In this method, all the past measurement data are weighted exponentially. A series of simulations for a nonminimum phase plant is presented to illustrate some features of both the parameter estimation and the output response of this adaptive pole placement controller.
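
Exponentially weighted least squares is usually run in its recursive form, where a forgetting factor discounts older measurements. The sketch below shows one generic update step; the regressor construction, auxiliary signals, and sensitivity-function filters of the paper are not shown, and the forgetting factor is an assumed value.

```python
import numpy as np

def ewls_update(theta, P, phi, y, forget=0.98):
    """One recursive update of exponentially weighted least squares.

    theta: current parameter estimate (n,)
    P:     current covariance-like matrix (n, n)
    phi:   regressor vector for this sample (n,)
    y:     measured output for this sample
    forget: forgetting factor in (0, 1]; smaller values discount old data faster
    """
    Pphi = P @ phi
    gain = Pphi / (forget + phi @ Pphi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, Pphi)) / forget
    return theta, P

# toy usage: estimate the parameters of y = 2*u1 - 1*u2 from noisy samples
rng = np.random.default_rng(0)
theta, P = np.zeros(2), np.eye(2) * 100.0
for _ in range(200):
    phi = rng.normal(size=2)
    y = phi @ np.array([2.0, -1.0]) + 0.01 * rng.normal()
    theta, P = ewls_update(theta, P, phi, y)
print(theta)  # approaches [2, -1]
```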

Prediction of the long-term deformation of high rockfill geostructures using a hybrid back-analysis method

  • Ming Xu;Dehai Jin
    • Geomechanics and Engineering / v.36 no.1 / pp.83-97 / 2024
  • It is important to make reasonable predictions about the long-term deformation of high rockfill geostructures. However, the deformation is usually underestimated when using rockfill parameters obtained from laboratory tests because of size effects, which makes it necessary to identify parameters from in-situ monitoring data. This paper proposes a novel hybrid back-analysis method with a modified objective function defined for the time-dependent back-analysis problem. The method consists of two stages. In the first stage, an improved weighted average method is proposed to quickly narrow the search region, while in the second stage, an adaptive response surface method is proposed to iteratively search for a satisfactory solution, with a technique that adaptively considers the translation, contraction, or expansion of the exploration region. The accuracy and computational efficiency of the proposed hybrid back-analysis method are demonstrated by back-analyzing the long-term deformation of two high embankments constructed for airport runways, with the rockfills modeled by a rheological model that considers the influence of stress states on the creep behavior.
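
The first-stage idea of weighting trial parameter sets by how well they reproduce the monitoring data can be sketched as below. The inverse-misfit weighting is a generic choice, not necessarily the improved weighted average method of the paper, and the trial values are illustrative.

```python
import numpy as np

def weighted_average_estimate(param_sets, misfits):
    """Combine trial parameter sets using weights inversely proportional to
    their misfit against the monitoring data (smaller misfit -> larger weight)."""
    misfits = np.asarray(misfits, dtype=float)
    weights = 1.0 / (misfits + 1e-12)
    weights /= weights.sum()
    return weights @ np.asarray(param_sets, dtype=float)

# illustrative trial creep parameters and their misfits to measured settlements
trials = [[0.8, 1.2], [1.0, 1.0], [1.3, 0.7]]
misfits = [0.40, 0.15, 0.55]
print(weighted_average_estimate(trials, misfits))  # pulled toward the best trial
```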

Volatility Analysis for Multivariate Time Series via Dimension Reduction (차원축소를 통한 다변량 시계열의 변동성 분석 및 응용)

  • Song, Eu-Gine;Choi, Moon-Sun;Hwang, S.Y.
    • Communications for Statistical Applications and Methods / v.15 no.6 / pp.825-835 / 2008
  • Multivariate GARCH (MGARCH) has been useful in financial studies and econometrics for modeling volatilities and correlations between components of multivariate time series. An obvious drawback is that the number of parameters increases rapidly with the number of variables involved. This paper tries to resolve the problem by using a dimension reduction technique. We briefly review both factor models for dimension reduction and MGARCH models including EWMA (exponentially weighted moving average), DVEC (diagonal VEC), BEKK, and CCC (constant conditional correlation) models. We create meaningful portfolios after reducing dimension through statistical factor models and fundamental factor models, and in turn these portfolios are applied to MGARCH. In addition, we compare the portfolios by assessing MSE, MAD (mean absolute deviation), and VaR (Value at Risk). Various financial time series are analyzed for illustration.
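
For the EWMA model mentioned above, the covariance recursion is simple enough to sketch directly. The smoothing parameter lambda = 0.94 is the common RiskMetrics-style choice, and the returns below are simulated rather than real data.

```python
import numpy as np

def ewma_covariances(returns, lam=0.94):
    """Sequence of EWMA conditional covariance matrices for multivariate returns.

    returns: (T, k) array of (mean-removed) returns
    Recursion: Sigma_t = lam * Sigma_{t-1} + (1 - lam) * r_{t-1} r_{t-1}^T
    """
    T, k = returns.shape
    sigma = np.cov(returns[:20].T)      # warm-up estimate from the first observations
    out = []
    for t in range(1, T):
        r = returns[t - 1][:, None]
        sigma = lam * sigma + (1.0 - lam) * (r @ r.T)
        out.append(sigma.copy())
    return out

# toy usage with simulated 3-asset returns
rng = np.random.default_rng(1)
rets = rng.normal(scale=0.01, size=(500, 3))
print(ewma_covariances(rets)[-1])       # latest conditional covariance estimate
```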