• Title/Summary/Keyword: Reliable communication


IMPROVING RELIABILITY OF BRIDGE DETERIORATION MODEL USING GENERATED MISSING CONDITION RATINGS

  • Jung Baeg Son;Jaeho Lee;Michael Blumenstein;Yew-Chaye Loo;Hong Guan;Kriengsak Panuwatwanich
    • International conference on construction engineering and project management
    • /
    • 2009.05a
    • /
    • pp.700-706
    • /
    • 2009
  • Bridges are vital components of any road network and demand crucial, timely decision-making for Maintenance, Repair and Rehabilitation (MR&R) activities. Bridge Management Systems (BMSs), as decision support systems (DSSs), have been developed since the early 1990s to assist in the management of large bridge networks. Historical condition ratings obtained from biennial bridge inspections are major resources for predicting future bridge deterioration via BMSs. The historical condition ratings available in most bridge agencies, however, are very limited, posing a major barrier to obtaining reliable forecasts of future structural performance. To alleviate this problem, the verified Backward Prediction Model (BPM) technique has been developed to help generate missing historical condition ratings. This is achieved by establishing correlations between known condition ratings and non-bridge factors such as climate and environmental conditions, traffic volumes and population growth. These correlations can then be used to obtain the bridge condition ratings of the missing years. With the help of these generated datasets, the currently available bridge deterioration model can be utilized to forecast future bridge conditions more reliably. In this paper, the prediction accuracy based on 4 and 9 BPM-generated historical condition ratings as input data is compared, using deterministic and stochastic bridge deterioration models. The comparison outcomes indicate that the prediction error decreases as more historical condition ratings are used. This implies that the BPM can be utilized to generate unavailable historical data, which is crucial for bridge deterioration models to achieve more accurate predictions. Nevertheless, there are considerable limitations in the existing bridge deterioration models, and further research is essential to improve their prediction accuracy.

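The BPM described above essentially learns a mapping from non-bridge factors to condition ratings for the years where ratings exist and then applies it to the missing years. The following is only a minimal sketch of that backfilling idea, assuming a tabular dataset and using an ordinary regression model as a placeholder for the actual BPM; the column names (`rating`, `climate`, `traffic`, `population`) are illustrative assumptions, not the paper's variables.

```python
# Illustrative backfill of missing condition ratings from non-bridge factors.
# Placeholder model and column names; not the authors' BPM implementation.
import pandas as pd
from sklearn.linear_model import LinearRegression

def backfill_ratings(df: pd.DataFrame, factors=("climate", "traffic", "population")):
    """Fill missing 'rating' values with a model fit on the observed years."""
    known = df.dropna(subset=["rating"])
    missing = df[df["rating"].isna()]
    if missing.empty:
        return df
    model = LinearRegression().fit(known[list(factors)], known["rating"])
    filled = df.copy()
    filled.loc[missing.index, "rating"] = model.predict(missing[list(factors)])
    return filled
```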

Analysis of the Effects of the Truck Platooning Using a Meta-analysis (메타분석을 이용한 화물차 군집주행의 효과 분석)

  • Kim, Yejin;Jeong, Harim;Ko, Woori;Park, Joong-gyu;Yun, Ilsoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.1
    • /
    • pp.76-90
    • /
    • 2022
  • Platooning refers to a driving formation in which one or more following vehicles travel in a single platoon along the path of a leading vehicle (directly driven by the driver), using V2V and V2I communication and vehicle-mounted sensors. Platooning has emerged in line with the increasing demand for cargo volume and advanced transportation logistics systems, and is expected to bring benefits such as increased capacity, reduced labor costs, and reduced fuel consumption. Compared to general passenger cars, however, research on autonomous truck driving and verification of its effects is insufficient. Therefore, in this study, a meta-analysis was conducted on the effects of truck platooning, integrating the results of existing studies into one reliable, generalized, and objective summary estimate. In conclusion, the analysis indicated that introducing truck platooning would increase capacity by 13.93%, decrease conflicts by 38.76%, and decrease fuel consumption by 8.13%.
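
To make the notion of a summary estimate concrete, the sketch below computes a fixed-effect (inverse-variance weighted) pooled effect, which is one common way a meta-analysis combines per-study results; the numbers in the usage line are placeholders, not values from the studies reviewed in the paper.

```python
# Inverse-variance (fixed-effect) pooled estimate: one common meta-analysis
# summary. Inputs are per-study effect sizes and their variances.
import math

def pooled_effect(effects, variances):
    """Return the weighted mean effect and its standard error."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, se

# e.g. capacity-change effects (%) from three hypothetical studies
print(pooled_effect([12.0, 15.5, 14.0], [4.0, 9.0, 6.0]))
```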

Exploiting Spatial Reuse Opportunity with Power Control in loco parentis Tree Topology of Low-power and Wide-area Networks (대부모 트리 구조의 저 전력 광역 네트워크를 위한 전력 제어 기반의 공간 재사용 기회 향상 기법)

  • Byeon, Seunggyu;Kim, JongDeok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.194-198
    • /
    • 2021
  • LoRa is a physical-layer technology designed to provide reliable long-range communication by introducing CSS, here combined with a loco parentis tree network. Since a leaf can utilize multiple parents at the same time with a single transmission, the PDR increases logarithmically as the number of gateways increases. Because of LoRa's ALOHA-like MAC, however, the PDR degrades even under the loco parentis tree topology, much as in a single-gateway environment. The proposed method aims at an SDMA approach that reuses the same frequency in different areas. For that purpose, it elaborately controls the TxPower of each sender for each concurrently transmitted message so that each message survives the collision at a different gateway. The gain from this so-called capture effect increases the capacity of resource-hungry LPWANs. Compared to a typical collision-free controlled-access scheme, the method performs 10-35% better in terms of the total number of consumed time slots. Also, thanks to its power control mechanism, energy consumption is reduced by 20-40%.

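The core mechanism here is the capture effect: a frame can still be decoded during a collision if it arrives sufficiently stronger than the interference at some gateway. The sketch below, assuming a simple log-distance path-loss model and a fixed capture margin (both illustrative, not the paper's parameters), shows how two concurrent senders can be power-tuned so that each one dominates at a different gateway.

```python
# Capture-effect check under an assumed log-distance path-loss model.
# All numbers are illustrative, not taken from the paper.
import math

def received_power_dbm(tx_power_dbm, distance_m, path_loss_exp=2.7):
    return tx_power_dbm - 10 * path_loss_exp * math.log10(max(distance_m, 1.0))

def survives_capture(own_dbm, interferer_dbms, margin_db=6.0):
    """True if the frame exceeds the summed interference by margin_db."""
    interference_mw = sum(10 ** (p / 10) for p in interferer_dbms)
    return own_dbm - 10 * math.log10(interference_mw) >= margin_db

# Sender A uses high TxPower near gateway 1, sender B low TxPower near gateway 2.
gw1_ok = survives_capture(received_power_dbm(14, 200), [received_power_dbm(2, 800)])
gw2_ok = survives_capture(received_power_dbm(2, 150), [received_power_dbm(14, 900)])
print(gw1_ok, gw2_ok)   # both frames survive, each at its own gateway
```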

Machine learning techniques for reinforced concrete's tensile strength assessment under different wetting and drying cycles

  • Ibrahim Albaijan;Danial Fakhri;Adil Hussein Mohammed;Arsalan Mahmoodzadeh;Hawkar Hashim Ibrahim;Khaled Mohamed Elhadi;Shima Rashidi
    • Steel and Composite Structures
    • /
    • v.49 no.3
    • /
    • pp.337-348
    • /
    • 2023
  • Successive wetting and drying cycles of concrete due to weather changes can endanger the safety of engineering structures over time. Considering wetting and drying cycles in concrete tests can lead to a more accurate and reliable design of engineering structures. This study aims to provide a model that can be used to estimate the resistance properties of concrete under different wetting and drying cycles. Complex sample preparation methods, the need for highly accurate and sensitive instruments, early sample failure, and brittle samples all contribute to the difficulty of measuring the strength of concrete in the laboratory. To address these problems, this study investigated the ability of six machine learning techniques, including ANN, SVM, RF, KNN, XGBoost, and NB, to predict the concrete's tensile strength, using 240 datasets obtained from the Brazilian test (80% for training and 20% for testing). The tests examined the effect of additives such as glass and polypropylene, as well as the effect of wetting and drying cycles, on the tensile strength of concrete. Finally, the statistical analysis revealed that the XGBoost model was the most robust one, with $R^2$ = 0.9155, mean absolute error (MAE) = 0.1080 MPa, and variance accounted for (VAF) = 91.54%, for predicting the concrete tensile strength. This work's significance is that it allows civil engineers to accurately estimate the tensile strength of different types of concrete, eliminating the considerable time and cost required for laboratory tests.
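
As a rough illustration of the 80/20 workflow and metrics the abstract reports, the sketch below trains an XGBoost regressor on synthetic stand-in data and computes $R^2$, MAE, and VAF; the feature names and generated values are assumptions, not the Brazilian-test dataset itself.

```python
# XGBoost regression with an 80/20 split and the metrics used in the study.
# Synthetic placeholder data; real inputs would be additive contents and
# wetting/drying cycle counts with measured tensile strengths.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((240, 3))                                  # e.g. glass %, PP %, cycles
y = 3.0 - 1.5 * X[:, 2] + rng.normal(0.0, 0.1, 240)       # synthetic strength (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
pred = model.predict(X_te)
vaf = 100 * (1 - np.var(y_te - pred) / np.var(y_te))      # variance accounted for
print(r2_score(y_te, pred), mean_absolute_error(y_te, pred), vaf)
```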

Enhancing LoRA Fine-tuning Performance Using Curriculum Learning

  • Daegeon Kim;Namgyu Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.3
    • /
    • pp.43-54
    • /
    • 2024
  • Recently, there has been a great deal of research on utilizing language models, and Large Language Models have achieved innovative results in various tasks. However, their practical application faces limitations due to the constrained resources and costs required to utilize Large Language Models. Consequently, there has been recent attention on methods to use models effectively within given resources. Curriculum Learning, a methodology that categorizes training data according to difficulty and learns sequentially, has been attracting attention, but it has the limitation that the methods for measuring difficulty are complex or not universal. Therefore, in this study, we propose a data-heterogeneity-based Curriculum Learning methodology that measures the difficulty of data using reliable prior information and is easy to apply across various tasks. To evaluate the performance of the proposed methodology, experiments were conducted using 5,000 specialized documents in the field of information and communication technology and 4,917 documents in the field of healthcare. The results confirm that the proposed methodology outperforms traditional fine-tuning in terms of classification accuracy in both LoRA fine-tuning and full fine-tuning.
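
The curriculum idea itself is simple to express: score each document's difficulty from prior information, then fine-tune in easy-to-hard stages. The sketch below shows only that scheduling skeleton; the scoring function and the fine-tuning step are placeholders (assumptions), not the paper's heterogeneity measure or its LoRA training code.

```python
# Curriculum scheduling skeleton: bucket data by a difficulty score and train
# on progressively harder stages. Scoring and training are placeholders.
from typing import Callable, List

def build_curriculum(docs: List[str],
                     difficulty: Callable[[str], float],
                     n_stages: int = 3) -> List[List[str]]:
    """Split documents into stages ordered from easy to hard."""
    ranked = sorted(docs, key=difficulty)
    size = max(1, len(ranked) // n_stages)
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

def train_with_curriculum(stages, fine_tune_step):
    seen = []
    for stage in stages:          # easier stages first
        seen.extend(stage)
        fine_tune_step(seen)      # e.g. one (LoRA) fine-tuning pass per stage

# usage sketch: build_curriculum(corpus, difficulty=lambda d: len(d.split()))
```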

A Software Reliability Cost Model Based on the Shape Parameter of Lomax Distribution (Lomax 분포의 형상모수에 근거한 소프트웨어 신뢰성 비용모형에 관한 연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.2
    • /
    • pp.171-177
    • /
    • 2016
  • Software reliability in the software development process is an important issue, and software process improvement helps deliver a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this study, a software reliability cost model that takes into account the shape parameter of the life distribution, derived from the software product testing process, was studied. The cost comparison problem for the reliability growth model based on the Lomax distribution, which is widely used in the field of reliability, is presented. The infinite-failure non-homogeneous Poisson process model was used as the software failure model, and the parameters were estimated using maximum likelihood estimation in order to analyze the cost model with respect to the shape parameter. In practice, the occurrence of defects can scarcely be avoided while large software systems undergo change and repair. The optimal release time is the point that satisfies the reliability requirements while minimizing the total cost. Comparative release-time analyses using other efficient distributions, such as the Kappa and exponential distributions, are also considered worthwhile. This research is expected to help software developers estimate software development costs to some extent.
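
For orientation only, a typical (not necessarily the paper's exact) formulation of an infinite-failure NHPP built on a life distribution $F$ uses the cumulative hazard as its mean value function; with the Lomax distribution of shape $\alpha$ and scale $\lambda$ this gives

$$m(t) = -\ln\bigl(1 - F(t)\bigr) = \alpha \ln\!\Bigl(1 + \frac{t}{\lambda}\Bigr),$$

and a common release-time cost model has the form

$$C(t) = C_1\, m(t) + C_2\bigl[m(t + t_L) - m(t)\bigr] + C_3\, t,$$

where $C_1$ is the cost of removing a fault during testing, $C_2$ the (larger) cost of a fault surfacing during the operational period $t_L$, and $C_3$ the testing cost per unit time; the optimal release time minimizes $C(t)$ subject to the reliability requirement.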

The Comparative Study of NHPP Software Reliability Model Based on Exponential and Inverse Exponential Distribution (지수 및 역지수 분포를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.2
    • /
    • pp.133-140
    • /
    • 2016
  • Software reliability in the software development process is an important issue, and software process improvement helps deliver a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, we proposed reliability models based on the exponential and inverse exponential distributions, which can be applied efficiently to software reliability. The parameters were estimated using the maximum likelihood estimator together with the bisection method, and model selection was based on the mean square error (MSE) and coefficient of determination ($R^2$) in order to identify the more efficient model. Failure analysis was carried out on a real data set to compare the proposed exponential and inverse exponential distribution models with respect to their failure properties. To ensure the reliability of the data, the Laplace trend test was employed. This study confirms that the inverse exponential distribution model is also efficient in terms of reliability (its coefficient of determination is 80% or more) and can therefore be used as an alternative to conventional models in this field. The results suggest that software developers should consider the life distribution, based on prior knowledge of the software, to help identify failure modes.
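
Since the abstract leans on the Laplace trend test to check the data before model fitting, here is a small sketch of one common form of the Laplace factor; a clearly negative value is usually read as evidence of reliability growth. The failure times in the example are made-up placeholders.

```python
# Laplace trend factor for failure times observed up to time T (one common
# form); negative values suggest reliability growth. Placeholder data.
import math

def laplace_factor(failure_times, horizon=None):
    """failure_times: cumulative times of the n observed failures."""
    n = len(failure_times)
    T = horizon if horizon is not None else failure_times[-1]
    return (sum(failure_times) / n - T / 2) / (T * math.sqrt(1.0 / (12 * n)))

print(laplace_factor([12, 30, 55, 90, 140, 210, 300]))
```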

A Grouping Method of Photographic Advertisement Information Based on the Efficient Combination of Features (특징의 효과적 병합에 의한 광고영상정보의 분류 기법)

  • Jeong, Jae-Kyong;Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.2
    • /
    • pp.66-77
    • /
    • 2011
  • We propose a framework for grouping photographic advertising images that employs a hierarchical indexing scheme based on efficient feature combinations. The study provides one specific application of effective tools for monitoring photographic advertising information through online and offline channels; specifically, it develops a preprocessor for tracking advertising image information. We consider both global features, which contain general information on the overall image, and local features, which are based on local image characteristics. The developed local features are invariant under image rotation and scaling, the addition of noise, and changes in illumination. Thus, they achieve reliable matching between different views of a scene across affine transformations and exhibit high accuracy in the search for matched pairs of identical images. The method first uses global features to organize coarse clusters, each consisting of several image groups, and then performs fine matching with local features within each cluster to construct refined clusters separated into groups of identical images. To decrease the computational time, a conventional clustering method is applied to group images that are similar in their global characteristics, thereby avoiding the excessive time that fine matching with local features would otherwise require between identical images.
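
The coarse-to-fine structure described above can be sketched in a few lines: cluster on cheap global features first, then run the expensive local-feature matching only within each coarse cluster. The `global_feature` and `local_match` callables below are stand-ins (assumptions) for the descriptors and matcher actually used in the paper.

```python
# Coarse-to-fine grouping: global-feature clustering, then local matching
# restricted to each cluster. Feature extractors are placeholders.
from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans

def group_images(images, global_feature, local_match, n_coarse=10):
    feats = np.array([global_feature(img) for img in images])
    labels = KMeans(n_clusters=min(n_coarse, len(images)), n_init=10).fit_predict(feats)

    clusters = defaultdict(list)
    for idx, lab in enumerate(labels):
        clusters[lab].append(idx)

    identical_pairs = []
    for members in clusters.values():        # fine matching only within a cluster
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                if local_match(images[a], images[b]):
                    identical_pairs.append((a, b))
    return identical_pairs
```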

Development of Chip-based Precision Motion Controller

  • Cho, Jung-Uk;Jeon, Jae-Wook
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.1022-1027
    • /
    • 2003
  • Motion controllers provide the sophisticated performance and enhanced capabilities seen in the movements of robotic systems. Several types of motion controllers are available, depending in part on the kind of overall control system in use. PLC (Programmable Logic Controller)-based motion controllers still predominate, and many people use MCU (Micro Controller Unit)-based board-level motion controllers and will continue to do so in the near-term future. These motion controllers control a variety of motor systems, such as robotic systems, and generally consist of large and complex circuits. A PLC-based motion controller consists of a high-performance PLC, a development tool, and application-specific software; this can cause several problems, including large size and space requirements, extensive cabling, and additional high costs. An MCU-based motion controller consists of memories such as ROM and RAM, I/O interface ports, and a decoder to operate the MCU. Additionally, it needs DPRAM to communicate with the host PC, a counter to obtain motor position information from the encoder signal, additional circuits for servo control, and application-specific software to generate various velocity profiles. This can cause several problems: overall system complexity, large size and space, extensive cabling, large power consumption, and additional high costs. It also requires considerable time to calculate a velocity profile, because the profile is generated in software, and it cannot produce arbitrary velocity profiles; it is therefore hard to generate the various velocity profiles expected. Furthermore, embedding a real-time OS (Operating System) is considered for more reliable motion control. In this paper, the structure of a chip-based precision motion controller is proposed to solve the above-mentioned problems of control systems. The proposed motion controller is designed on an FPGA (Field Programmable Gate Array) using VHDL (Very high speed integrated circuit Hardware Description Language) and Handel-C, a programming language for hardware design. The motion controller consists of a Velocity Profile Generator (VPG) to generate the various expected velocity profiles, a PCI Interface to communicate with the host PC, a Feedback Counter to obtain position information from the encoder signal, a Clock Generator to generate various clock signals, a Controller to control the motor position using the generated velocity profile and position information, and a Data Converter to convert and transmit data compatible with the D/A converter.

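As a software-side illustration of what the Velocity Profile Generator block computes, the sketch below samples a trapezoidal velocity profile (accelerate, cruise, decelerate) at a fixed control period. In the paper this logic is implemented in FPGA hardware via VHDL/Handel-C; this Python version only shows the arithmetic, with made-up parameters.

```python
# Trapezoidal velocity profile sampled at a fixed control period.
# Parameters are illustrative; the paper implements this in hardware.
def trapezoidal_profile(distance, v_max, accel, dt=0.001):
    d_acc = v_max ** 2 / (2 * accel)          # distance needed to reach v_max
    if 2 * d_acc > distance:                  # short move: triangular profile
        v_max = (distance * accel) ** 0.5
        d_acc = distance / 2
    t_acc = v_max / accel
    t_cruise = (distance - 2 * d_acc) / v_max

    samples, t = [], 0.0
    while t < 2 * t_acc + t_cruise:
        if t < t_acc:                          # acceleration phase
            v = accel * t
        elif t < t_acc + t_cruise:             # constant-velocity phase
            v = v_max
        else:                                  # deceleration phase
            v = v_max - accel * (t - t_acc - t_cruise)
        samples.append(v)
        t += dt
    return samples

# e.g. 0.5 m move at 0.2 m/s peak velocity and 0.5 m/s^2 acceleration
profile = trapezoidal_profile(0.5, 0.2, 0.5)
```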

Edge Grouping and Contour Detection by Delaunay Triangulation (Delaunay 삼각화에 의한 그룹화 및 외형 탐지)

  • Lee, Sang-Hyun;Jung, Byeong-Soo;Jeong, Je-Pyong;Kim, Jung-Rok;Moon, Kyung-li
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.1
    • /
    • pp.135-142
    • /
    • 2013
  • Contour detection is important for many computer vision applications, such as shape discrimination and object recognition. In many cases, local luminance changes turn out to be stronger in textured areas than on object contours. Therefore, local edge features, which only look at a small neighborhood of each pixel, cannot be reliable indicators of the presence of a contour, and some global analysis is needed. The novelty of this operator is that dilation is limited to Delaunay triangles, and an efficient implementation is presented. The grouping algorithm is then embedded in a multi-threshold contour detector. At each threshold level, small groups of edges are removed, and contours are completed by means of a generalized reconstruction from markers. Both qualitative and quantitative comparisons with existing approaches demonstrate the superiority of the proposed contour detector in terms of the larger amount of suppressed texture and the more effective detection of low-contrast contours.
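
One way to make the Delaunay-based grouping concrete: triangulate the detected edge points, keep only the short triangulation edges, and treat connected components of the resulting graph as edge groups. The sketch below, with an assumed length threshold and without the multi-threshold contour completion, is only meant to illustrate that first grouping step.

```python
# Group edge points via Delaunay triangulation: keep short triangulation
# edges and take connected components. Threshold is an illustrative choice.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def group_edge_points(points: np.ndarray, max_len: float):
    tri = Delaunay(points)
    rows, cols = [], []
    for simplex in tri.simplices:              # the three edges of each triangle
        for a, b in ((0, 1), (1, 2), (2, 0)):
            i, j = simplex[a], simplex[b]
            if np.linalg.norm(points[i] - points[j]) <= max_len:
                rows.append(i); cols.append(j)
    n = len(points)
    graph = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    n_groups, labels = connected_components(graph, directed=False)
    return n_groups, labels
```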