• Title/Summary/Keyword: Early detection algorithm


Creation of Crack BIM in Bridge Deck and Development of BIM-FEM Interoperability Algorithm (교량 바닥판의 균열 BIM 생성 및 BIM-FEM 상호 연계 알고리즘 개발)

  • Yang, Dahyeon;Lee, Min-Jin;An, Hyojoon;Jung, Hyun-Jin;Lee, Jong-Han
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.6 / pp.689-693 / 2023
  • Domestic bridges with a service life of more than 30 years are expected to account for approximately 54% of all bridges within the next 10 years. As bridges rapidly deteriorate, it is necessary to establish an appropriate maintenance plan. Recent domestic and international research has focused on the integration of BIM to digitize bridge maintenance information and thereby enhance the accessibility and usability of the information. Accordingly, this study developed a BIM-FEM interoperability algorithm for bridge decks to convert maintenance information into data and efficiently manage the maintenance history. After creating an initial crack BIM based on an exterior damage map, bridge specifications and damage information were linked to a numerical analysis that performs damage analysis considering damage scenarios and design loads. The spread of cracks obtained from the analysis results was then updated in the BIM. Based on the damage-spread information in the BIM, an automated technology was also developed to assess both the current and future condition ratings of the bridge deck. This approach enables efficient maintenance of the deck using history data from bridge inspection and diagnosis as well as future information on cracks and defects. The expected early detection and prevention would ultimately improve the lifespan and safety of bridges.
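
To make the BIM-FEM round trip described above concrete, here is a minimal Python sketch of the idea only: crack widths recorded in a (mocked) BIM are combined with a (mocked) crack-growth prediction from analysis and written back with current and future condition ratings. The element names, growth rates, and rating thresholds are illustrative assumptions, not the paper's actual data, FEM model, or rating criteria.

```python
def condition_rating(crack_width_mm: float) -> str:
    """Map a crack width to a coarse condition grade (hypothetical thresholds)."""
    if crack_width_mm < 0.1:
        return "A"
    if crack_width_mm < 0.3:
        return "B"
    return "C"

# "BIM" side: deck elements with current crack widths from the exterior damage map
deck_bim = {"deck_panel_01": {"crack_width_mm": 0.08},
            "deck_panel_02": {"crack_width_mm": 0.25}}

# "FEM" side: predicted crack growth per year from a damage-scenario analysis (stub)
fem_growth_mm_per_year = {"deck_panel_01": 0.02, "deck_panel_02": 0.05}

years_ahead = 5
for elem, data in deck_bim.items():
    future_width = data["crack_width_mm"] + years_ahead * fem_growth_mm_per_year[elem]
    data["predicted_width_mm"] = round(future_width, 2)
    data["current_rating"] = condition_rating(data["crack_width_mm"])
    data["future_rating"] = condition_rating(future_width)

print(deck_bim)
```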

A Simulation-Based Investigation of an Advanced Traveler Information System with V2V in Urban Network (시뮬레이션기법을 통한 차량 간 통신을 이용한 첨단교통정보시스템의 효과 분석 (도시 도로망을 중심으로))

  • Kim, Hoe-Kyoung
    • Journal of Korean Society of Transportation / v.29 no.5 / pp.121-138 / 2011
  • More affordable and available cutting-edge technologies (e.g., wireless vehicle communication) are regarded as a possible alternative to fixed infrastructure-based traffic information systems, which require expensive infrastructure investments and are mostly implemented on uninterrupted freeway networks with limited spatial system expansion. This paper develops an advanced, decentralized traveler information system (ATIS) using a vehicle-to-vehicle (V2V) communication system, whose performance (drivers' travel time savings) is enhanced by three complementary functions (an autonomous automatic incident detection algorithm, a reliable sample size function, and a driver behavior model), and evaluates it in a typical 6×6 urban grid network with a non-recurrent traffic state (traffic incident), varying key parameters (traffic flow, communication radio range, and penetration ratio) and employing an off-the-shelf microscopic simulation model (VISSIM) under an ideal vehicle communication environment. Simulation outputs indicate that as the three key parameters increase, more participating vehicles are involved in traffic data propagation in fewer communication groups and at a faster data dissemination speed. Also, participating vehicles saved travel time by dynamically updating traffic states and searching for new routes. Focusing on the travel time difference of (instant) re-routing vehicles, lower traffic flow cases saved more time than higher ones. This is because a relatively small number of vehicles in the 300 vph case re-route during the most system-efficient time period (the early part of the traffic incident), whereas more vehicles in the 514 vph case re-route during a less system-efficient period, even after the incident is resolved. Also, re-routing on network-entering links normally saved more travel time than re-routing at any other place inside the network, except when the direct effect of the traffic incident triggers re-routing during the effective incident period; the location and direction of the incident link determine the spatial distribution of re-routing vehicles.
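
The decentralized re-routing idea in the abstract can be illustrated with a toy sketch: a vehicle maintains link travel times learned from V2V messages and switches routes when updated information makes an alternative faster. The link names and times below are invented, and the paper's three ATIS functions (incident detection, sample-size check, driver behavior model) are not reproduced here.

```python
# Routes as lists of directed links (hypothetical network).
current_route = ["A-B", "B-C", "C-D"]
alt_route     = ["A-E", "E-D"]

# Baseline link travel times (seconds), then an incident report received via V2V.
link_tt = {"A-B": 60, "B-C": 60, "C-D": 60, "A-E": 100, "E-D": 100}
v2v_update = {"B-C": 400}          # incident makes link B-C much slower
link_tt.update(v2v_update)

def route_time(route):
    """Sum the latest known travel times along a route."""
    return sum(link_tt[link] for link in route)

chosen = min([current_route, alt_route], key=route_time)
print("chosen route:", chosen, "expected time:", route_time(chosen), "s")
```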

Body Temperature Monitoring Using Subcutaneously Implanted Thermo-loggers from Holstein Steers

  • Lee, Y.;Bok, J.D.;Lee, H.J.;Lee, H.G.;Kim, D.;Lee, I.;Kang, S.K.;Choi, Y.J.
    • Asian-Australasian Journal of Animal Sciences / v.29 no.2 / pp.299-306 / 2016
  • Body temperature (BT) monitoring in cattle could be used to detect fever from infectious disease or physiological events early. Various ways to measure BT have been applied at different locations on cattle, including the rectum, reticulum, milk, subcutis, and ear canal. In order to evaluate the stability and reliability of subcutaneous temperature (ST) under highly fluctuating field conditions for continuous BT monitoring, long-term ST profiles were collected and analyzed from cattle in the autumn/winter and summer seasons using surgically implanted thermo-logger devices. The purposes of this study were to assess ST under field conditions as a reference BT and to determine any effect of implantation location on the ST profile. In results, the ST profile in cattle showed a clear circadian rhythm, with the daily minimum at 05:00 to 07:00 and the maximum around midnight, and rather stable temperature readings (mean ± standard deviation [SD], 37.1°C to 37.36°C ± 0.91°C to 1.02°C). STs were 1.39°C to 1.65°C lower than rectal temperature and sometimes showed an irregular temperature drop below the normal physiological range: 19.4% or 36.4% of 54,192 readings were below 36.5°C or 37°C, respectively. Thus, for BT monitoring in a fever-alarming system, a correction algorithm is necessary to remove the influence of ambient temperature and animal resting behavior, especially in winter. One way to do this is to simply discard outlier readings below 36.5°C or 37°C, which results in a much improved mean ± SD of 37.6°C ± 0.64°C or 37.8°C ± 0.55°C, respectively. Regarding location, the upper scapula region seems to be the most reliable and convenient site for implantation of a thermo-sensor tag, in terms of relatively low influence from ambient temperature and easy insertion compared to the lower scapula or lateral neck.
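
The outlier-discarding correction mentioned above is simple enough to sketch directly; the cutoff values (36.5°C or 37°C) come from the abstract, while the readings below are synthetic, not the study's data.

```python
import numpy as np

def corrected_stats(readings_c, cutoff_c=36.5):
    """Discard subcutaneous temperature readings below the cutoff
    (ambient/resting artifacts) and return mean and SD of the rest."""
    readings_c = np.asarray(readings_c, dtype=float)
    kept = readings_c[readings_c >= cutoff_c]
    return kept.mean(), kept.std(ddof=1)

# Example with synthetic readings (°C), not the study's data
st = np.array([37.2, 36.1, 37.5, 35.8, 37.9, 37.4])
mean_c, sd_c = corrected_stats(st, cutoff_c=36.5)
print(f"corrected mean = {mean_c:.2f} °C, SD = {sd_c:.2f} °C")
```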

Strategic Behavioral Characteristics of Co-opetition in the Display Industry (디스플레이 산업에서의 협력-경쟁(co-opetition) 전략적 행동 특성)

  • Jung, Hyo-jung;Cho, Yong-rae
    • Journal of Korea Technology Innovation Society / v.20 no.3 / pp.576-606 / 2017
  • In the high-tech industry, it is especially salient that even competitors cooperate in order to respond promptly to changes in product architecture. In this sense, 'co-opetition,' a combination of 'cooperation' and 'competition,' is a term in strategic management expressing that the two concepts co-exist simultaneously. From this view, this study set up the following research purposes: 1) investigating the corporate managerial and technological behavioral characteristics of co-opetition in the global display industry; 2) verifying the factors emerging during co-opetition behavior; and 3) suggesting a strategic direction focusing on the co-opetition behavioral characteristics. To this end, this study used co-word network analysis to understand the contextual structure of co-opetition. To understand the topics in each network, we clustered the keywords with a modularity-based community detection algorithm and labeled each cluster. The results show an increasing pattern of competition rather than cooperation. In particular, litigation aimed at checking Korean firms occurred much more severely and increased over time. Viewing these network structures from a technological evolution perspective, cooperation and competition among firms were already active in the early 2000s around OLED-related technology development. From the mid-2000s, firm behavior focused on accelerating existing technologies and developing futuristic displays. In other words, there has been competition to lead innovation at the level of final products, such as TVs and smartphones, that incorporate display panels. This study provides not only a better understanding of the context of the display industry, but also an analytical framework for anticipating the direction of innovation by analyzing managerial and technological factors. The methods can also support CTOs and technology-planning practitioners who must consider those factors when making decisions on strategic technology management and product development.
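
A minimal sketch of the modularity-based keyword clustering step, using NetworkX's greedy modularity community detection on a toy co-word network; the keywords and co-occurrence counts are invented, and the study's actual corpus and labeling procedure are not reproduced.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy co-word network: nodes are keywords, edge weights are co-occurrence counts
# (illustrative only; the study's keyword data is not reproduced here).
edges = [
    ("OLED", "panel", 5), ("OLED", "patent", 3), ("patent", "litigation", 4),
    ("litigation", "injunction", 2), ("panel", "TV", 3), ("TV", "smartphone", 2),
]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# Modularity-based community detection, then label each cluster by its
# highest-weighted-degree keyword (a simple stand-in for manual labeling).
communities = greedy_modularity_communities(G, weight="weight")
for i, nodes in enumerate(communities):
    label = max(nodes, key=lambda n: G.degree(n, weight="weight"))
    print(f"cluster {i} ({label}): {sorted(nodes)}")
```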

Development of Software Correlator for KJJVC (한일공동VLBI상관기를 위한 소프트웨어 상관기의 개발)

  • Yeom, J.H.;Oh, S.J.;Roh, D.G.;Kang, Y.W.;Park, S.Y.;Lee, C.H.;Chung, H.S.
    • Journal of Astronomy and Space Sciences / v.26 no.4 / pp.567-588 / 2009
  • The Korea-Japan Joint VLBI Correlator (KJJVC) is being developed in collaboration between KASI (Korea Astronomy and Space Science Institute), Korea, and NAOJ (National Astronomical Observatory of Japan), Japan. KJJVC will begin normal operation in early 2010. In this study, we developed a software correlator based on the VCS (VLBI Correlation Subsystem) hardware specification, the core component of KJJVC. The main specifications of the software correlator are 8 Gbps, 8,192 output channels, and a 262,144-point FFT (Fast Fourier Transform), the same as the VCS. The same functional algorithm and arithmetic registers as specified for the VCS were adopted in this software correlator. To verify the performance of the developed software correlator, correlation experiments were carried out using spectral line and continuum sources observed with VERA (VLBI Exploration of Radio Astrometry) of NAOJ. The experimental results were compared with the output of the Mitaka FX correlator in terms of spectrum shape, phase rate, fringe detection, and so on. Through these experiments, we confirmed that the correlation results of the software correlator are the same as those of the Mitaka FX correlator, verifying its effectiveness. In the future, we expect that the developed software correlator can serve as a software correlator for the KVN (Korean VLBI Network) together with KJJVC by introducing correlation post-processing and adding a user interface such as a GUI (Graphical User Interface).
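
The FX-style processing named above (Fourier transform per station, then cross-multiplication) can be sketched in a few lines. This toy uses a tiny FFT length and synthetic noise signals rather than the VCS-scale 262,144-point configuration or VERA observations.

```python
import numpy as np

def fx_correlate(x1, x2, n_fft=1024):
    """Toy FX correlation: Fourier-transform per-station voltage streams segment
    by segment, cross-multiply, and accumulate the cross spectrum. n_fft here is
    tiny for illustration; the VCS specification uses 262,144 points."""
    n_seg = min(len(x1), len(x2)) // n_fft
    cross = np.zeros(n_fft, dtype=complex)
    for k in range(n_seg):
        s1 = np.fft.fft(x1[k * n_fft:(k + 1) * n_fft])   # F step, station 1
        s2 = np.fft.fft(x2[k * n_fft:(k + 1) * n_fft])   # F step, station 2
        cross += s1 * np.conj(s2)                         # X step, accumulate
    return cross / n_seg

# Synthetic test: the same signal at two "stations" with a small integer delay
rng = np.random.default_rng(0)
sig = rng.normal(size=65536)
delay = 3
spectrum = fx_correlate(sig[:-delay], sig[delay:])
lag = int(np.argmax(np.abs(np.fft.ifft(spectrum))))
print("recovered delay (samples):", lag)   # expected: 3
```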

Active Congestion Control Using Active Router's Feedback Mechanism (액티브 라우터의 피드백 메커니즘을 이용한 혼잡제어 기법)

  • Choe, Gi-Hyeon;Jang, Gyeong-Su;Sin, Ho-Jin;Sin, Dong-Ryeol
    • The KIPS Transactions: Part C / v.9C no.4 / pp.513-522 / 2002
  • Current end-to-end congestion control depends only on information at the end points (three duplicate ACK packets) and generally responds slowly to network congestion. This mechanism cannot avoid TCP global synchronization, in which TCP congestion window sizes fluctuate together while congestion occurs, and if the RTT (Round Trip Time) increases, three duplicate ACK packets are not an accurate congestion signal because the congestion may already have disappeared and the host may keep sending packets until it receives the three duplicate ACKs. Recently, there has been increasing interest in addressing end-to-end congestion control with active network frameworks to improve the performance of TCP protocols. ACC (Active Congestion Control) is a variation of TCP-based congestion control with queue management. In addition, traffic modifications may begin at the congested router (active router), so that ACC responds to congestion more quickly than TCP variants. The advantage of this method is that the host uses information provided by the active routers as well as the end points in order to relieve congestion and improve throughput. In this paper, we model an enhanced ACC, present its algorithm, which controls congestion using information from the core network and communication between active routers, and demonstrate the enhanced performance by simulation.
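
As a rough illustration of reacting to explicit router feedback in addition to duplicate ACKs, here is a minimal sender-side sketch; the window-adjustment rules and the queue-ratio thresholds are assumptions for illustration, not the paper's ACC algorithm.

```python
class FeedbackSender:
    """Toy sender that adjusts its congestion window from both classic duplicate
    ACKs and explicit congestion feedback sent by an active router."""

    def __init__(self, cwnd=10.0, ssthresh=64.0):
        self.cwnd = cwnd          # congestion window (segments)
        self.ssthresh = ssthresh

    def on_ack(self):
        """Normal ACK: slow start below ssthresh, otherwise congestion avoidance."""
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0
        else:
            self.cwnd += 1.0 / self.cwnd

    def on_dup_acks(self):
        """Three duplicate ACKs: classic fast-recovery-style halving."""
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd = self.ssthresh

    def on_router_feedback(self, queue_ratio):
        """Explicit feedback from an active router: back off early, before
        duplicate ACKs would arrive, in proportion to queue occupancy."""
        if queue_ratio > 0.8:
            self.ssthresh = max(self.cwnd / 2.0, 2.0)
            self.cwnd = self.ssthresh
        elif queue_ratio > 0.5:
            self.cwnd = max(self.cwnd - 1.0, 2.0)

sender = FeedbackSender()
for _ in range(20):
    sender.on_ack()
sender.on_router_feedback(queue_ratio=0.85)   # router reports a nearly full queue
print(round(sender.cwnd, 2))
```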

Effect of All Sky Image Correction on Observations in Automatic Cloud Observation (자동 운량 관측에서 전천 영상 보정이 관측치에 미치는 효과)

  • Yun, Han-Kyung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.2 / pp.103-108 / 2022
  • Various studies on cloud observation using all-sky images acquired with wide-angle camera systems have been conducted since the early 21st century, but an automatic observation system that can completely replace visual (eye) observation has not yet been achieved. In this study, to verify the quantification of cloud observation, which is the final step of the algorithm proposed to automate the observation, the cloud distributions of the all-sky image and the corrected image were compared and analyzed. The reason is that clouds form at a certain height depending on their type, and, as in a retinal image, the center of the lens image is enlarged while the edges are reduced; however, the effect of human learning ability and spatial awareness on cloud observation is unknown. As a result of this study, the average difference in observed cloud amount between the all-sky image and the corrected image was 1.23%. Therefore, compared with eye observation in deciles, the error due to correction is 1.23% of the observed amount, far less than the allowable error of eye observation, and it does not include human error, so accurately quantified data can be collected. Since the change in cloud amount due to the correction is insignificant, it was confirmed that accurate observations can be obtained even when the correction step is omitted and cloud amount is measured in the uncorrected image.
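
The cloud-amount comparison above reduces to counting cloudy pixels inside the sky mask before and after correction. Below is a minimal sketch using a simple red/blue-ratio threshold, a common heuristic for all-sky imagery; the threshold, mask, and synthetic image are assumptions for illustration, not the paper's classification method.

```python
import numpy as np

def cloud_fraction(rgb, sky_mask, rb_threshold=0.75):
    """Estimate cloud cover as the fraction of sky pixels classified as cloud,
    using a red/blue ratio threshold on an all-sky RGB image."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float) + 1e-6
    cloudy = (r / b > rb_threshold) & sky_mask
    return cloudy.sum() / sky_mask.sum()

# Synthetic example: a 100x100 image with a circular sky mask
h = w = 100
yy, xx = np.mgrid[0:h, 0:w]
mask = (xx - 50) ** 2 + (yy - 50) ** 2 < 45 ** 2
img = np.zeros((h, w, 3), dtype=np.uint8)
img[..., 2] = 200                  # clear blue sky
img[20:50, 20:80, :] = 180         # a grey "cloud" patch
print(f"cloud fraction: {cloud_fraction(img, mask):.2f}")
```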

A Study on the Design and Implementation of a Thermal Imaging Temperature Screening System for Monitoring the Risk of Infectious Diseases in Enclosed Indoor Spaces (밀폐공간 내 감염병 위험도 모니터링을 위한 열화상 온도 스크리닝 시스템 설계 및 구현에 대한 연구)

  • Jae-Young, Jung;You-Jin, Kim
    • KIPS Transactions on Computer and Communication Systems / v.12 no.2 / pp.85-92 / 2023
  • Respiratory infections such as COVID-19 mainly occur within enclosed spaces. The presence or absence of abnormal symptoms of respiratory infectious diseases is judged through initial symptoms such as fever, cough, sneezing, and difficulty breathing, and constant monitoring of these early symptoms is required. In this paper, image-matching correction was performed for the RGB camera module and the thermal imaging camera module, and the temperature of the thermal imaging camera module was calibrated for the measurement environment using a blackbody. To detect the target region recommended by the standard, a deep learning-based object recognition algorithm and an inner-canthus recognition model were developed, and the model accuracy was derived on a dataset of 100 participants. Also, the error due to measurement distance was corrected by measuring the object distance with a LiDAR module and applying a linear-regression correction module. To measure the performance of the proposed model, an experimental environment consisting of a motor stage, an infrared thermography temperature screening system, and a blackbody was established, and temperature measurements at variable distances between 1 m and 3.5 m showed an error within 0.28°C.
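
The distance-dependent correction can be sketched as a simple linear regression of apparent temperature against LiDAR-measured distance, inverted as an additive correction; the calibration data and coefficients below are synthetic, not the paper's values.

```python
import numpy as np

# Synthetic calibration: apparent temperature drops with distance from the camera.
dist_m   = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])          # LiDAR distances (m)
measured = np.array([36.8, 36.6, 36.4, 36.2, 36.0, 35.8])    # apparent temps (°C)

# Fit the measured-temperature trend vs. distance, then invert it as a correction.
slope, intercept = np.polyfit(dist_m, measured, 1)

def correct(temp_c, distance_m, ref_distance_m=1.0):
    """Add back the temperature drop predicted for the extra distance."""
    return temp_c + slope * (ref_distance_m - distance_m)

print(round(correct(35.8, 3.5), 2))   # ~36.8 °C after correction
```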

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of these data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. In particular, IT facility failures are irregular because of interdependence, and it is difficult to identify their causes. Previous studies predicting failures in data centers treated each server as a single, independent state without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, user errors, and the like; since such failures can be prevented in the early stages of data center construction, various solutions are being developed. On the other hand, the causes of failures occurring in servers are difficult to determine, and adequate prevention has not yet been achieved. In particular, this is because server failures do not occur singly: a failure in one server can cause failures in other servers or be triggered by failures originating from other servers. In other words, while existing studies analyzed failures under the assumption of a single server that does not affect other servers, this study analyzes failures under the assumption that servers affect one another. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device are sorted in chronological order, and when a failure occurs in one piece of equipment, any failure occurring in another piece of equipment within 5 minutes of that time is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, 5 devices that frequently fail simultaneously within these sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server setting, the Hierarchical Attention Network deep learning model structure was used to account for the fact that each server contributes differently to multiple failures. This algorithm increases prediction accuracy by giving greater weight to servers whose impact on the failure is larger. The study began by defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data were analyzed and compared under a single-server assumption and a multiple-server assumption. The second experiment improved prediction accuracy for the complex (multi-server) case by optimizing the threshold of each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multiple-server assumption, all five servers were predicted to fail. This result supports the hypothesis that servers affect one another. As a result of this study, it was confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that the effect of each server differs, contributed to improving the analysis. In addition, prediction accuracy could be further improved by applying a different threshold for each server. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring in data center servers. It is expected that failures can be prevented in advance using the results of this study.
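
The 5-minute "simultaneous failure" rule defined above can be sketched with a short grouping pass over a chronologically sorted event log; the events below are synthetic, and chaining consecutive events within 5 minutes is one simple approximation of the rule, not the study's exact preprocessing code.

```python
import pandas as pd

# Synthetic failure-event log: device, failure type, timestamp.
log = pd.DataFrame({
    "device": ["srv01", "net03", "srv02", "db01", "srv01"],
    "failure": ["Server Down", "Network Node Down", "Server Down",
                "DBMS Service Down", "Server Down"],
    "time": pd.to_datetime([
        "2020-01-01 02:00:00", "2020-01-01 02:03:00", "2020-01-01 02:04:30",
        "2020-01-01 02:20:00", "2020-01-01 02:23:00",
    ]),
}).sort_values("time")

# Start a new group whenever the gap to the previous event exceeds 5 minutes,
# so events chained within 5 minutes of each other count as "simultaneous".
gap = log["time"].diff() > pd.Timedelta(minutes=5)
log["group"] = gap.cumsum()

for gid, grp in log.groupby("group"):
    print(f"simultaneous-failure group {gid}: {list(grp['device'])}")
```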