• Title/Summary/Keyword: Critical metric


Flexible smart sensor framework for autonomous structural health monitoring

  • Rice, Jennifer A.;Mechitov, Kirill;Sim, Sung-Han;Nagayama, Tomonori;Jang, Shinae;Kim, Robin;Spencer, Billie F. Jr.;Agha, Gul;Fujino, Yozo
    • Smart Structures and Systems, v.6 no.5_6, pp.423-438, 2010
  • Wireless smart sensors enable new approaches to improving structural health monitoring (SHM) practice through the use of distributed data processing. Such an approach is scalable to the large number of sensor nodes required for high-fidelity modal analysis and damage detection. While much of the technology associated with smart sensors has been available for nearly a decade, there have been only a limited number of full-scale implementations due to the lack of critical hardware and software elements. This research develops a flexible wireless smart sensor framework for full-scale, autonomous SHM that integrates the necessary software and hardware while addressing key implementation requirements. The Imote2 smart sensor platform is employed, providing the computation and communication resources that support demanding sensor network applications such as SHM of civil infrastructure. A multi-metric Imote2 sensor board with onboard signal processing, designed specifically for SHM applications, has been developed and validated. The framework software is based on a service-oriented architecture that is modular, reusable, and extensible, allowing engineers to more readily realize the potential of smart sensor technology. Flexible network management software combines a sleep/wake cycle for enhanced power efficiency with threshold detection for triggering network-wide operations such as synchronized sensing or decentralized modal analysis. The framework developed in this research has been validated on a full-scale cable-stayed bridge in South Korea.
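
For illustration only, a minimal Python sketch of the sleep/wake cycle with threshold detection that this abstract describes for triggering network-wide operations; the actual framework runs on Imote2 sensor nodes, and every function name, interval, and threshold below is a hypothetical placeholder.

```python
import random
import time

WAKE_INTERVAL_S = 1.0        # hypothetical sleep/wake period
VIBRATION_THRESHOLD = 0.05   # hypothetical trigger level

def read_acceleration():
    """Stand-in for a reading from the multi-metric sensor board."""
    return abs(random.gauss(0.0, 0.02))

def trigger_network_wide_sensing():
    """Stand-in for a network-wide command such as synchronized sensing."""
    print("Threshold exceeded: start synchronized sensing / decentralized modal analysis")

def duty_cycle(num_cycles=10):
    """Wake briefly, check the threshold, trigger if exceeded, then sleep again."""
    for _ in range(num_cycles):
        if read_acceleration() > VIBRATION_THRESHOLD:
            trigger_network_wide_sensing()
        time.sleep(WAKE_INTERVAL_S)   # low-power sleep between wake-ups

if __name__ == "__main__":
    duty_cycle()
```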

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing, v.39 no.1, pp.1-21, 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful to scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the corresponding image planes. A depth map, or depth image, is a single-image representation that carries information along three axes, i.e., xyz coordinates, where z is the object's distance from the camera. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving. Much work has been done on computing depth maps. We reviewed the status of depth map estimation across the techniques, study areas, and models applied over the last 20 years, surveying both traditional depth-mapping techniques and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth mapping techniques and recent deep learning methodologies. It covers the critical points of each method from different perspectives, such as datasets, procedures, types of algorithms, loss functions, and well-known evaluation metrics. The paper also discusses the subdomains within each method, such as supervised, unsupervised, and semi-supervised approaches, and elaborates on the challenges of the different methods. In conclusion, we discuss new ideas for future research in depth map estimation.
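
As a small illustration of how a depth map carries xyz information, here is a hedged Python sketch that back-projects a depth image to camera-frame 3D points using a standard pinhole model; the intrinsics and the constant-depth input are made up and do not come from the reviewed paper.

```python
import numpy as np

def depth_to_xyz(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) to xyz points with a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)              # (h, w, 3) camera-frame coordinates

# Example with made-up intrinsics and a flat 2 m depth plane
points = depth_to_xyz(np.full((480, 640), 2.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)  # (480, 640, 3)
```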

The timing of unprecedented hydrological drought under climate change

  • Yusuke Satoh;Hyungjun Kim
    • Proceedings of the Korea Water Resources Association Conference, 2023.05a, pp.48-48, 2023
  • Intensified droughts under climate change are expected to threaten stable water resource availability. Droughts exceeding the magnitude of historical variability could occur increasingly frequently under future climate conditions. It is crucial to understand how drought will evolve over time, because the assumption of hydrological stationarity that held over past decades would be inappropriate for future water resources management. However, the timing of the emergence of unprecedented drought conditions under climate change has rarely been examined. Here, using multimodel hydrological simulations, we investigate changes in the frequency of hydrological drought (defined as abnormally low river discharge) under high and low greenhouse gas concentration scenarios with existing water resources management, and we estimate the timing of the first emergence of unprecedented regional drought conditions that persist for several consecutive years. This new metric enables a new quantification of the urgency of adaptation and mitigation with regard to drought under climate change. The timing is detected for several sub-continental-scale regions, and three regions, namely southwestern South America, Mediterranean Europe, and northern Africa, exhibit particularly robust and earlier critical times under the high-emission scenario. These three regions are expected to confront unprecedented conditions within the next 30 years with high likelihood, regardless of the emission scenario. In addition, the results demonstrate the benefits of the lower-emission pathway in reducing the likelihood of emergence. The Paris Agreement goals are shown to be effective in reducing the likelihood to the "unlikely" level in most regions. Nevertheless, appropriate and prior adaptation measures are considered indispensable when facing unprecedented drought conditions. The results of this study underscore the importance of improving drought preparedness within the considered time horizons.


A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.93-110, 2015
  • A recommender system recommends products to customers who are likely to be interested in them. Based on automated information filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in a number of different domains, such as recommending Web pages, books, movies, music, and products. However, CF has a well-known critical shortcoming. CF finds neighbors whose preferences are similar to those of the target customer and recommends the products those neighbors have liked most. Thus, CF works properly only when there is a sufficient number of customer ratings on common products. When customer ratings are scarce, neighborhood formation becomes inaccurate, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile, created from users' item ratings, purchase transactions, or Web access logs. With the advent of big data, companies have come to collect and use much larger and more varied data. Many companies therefore recognize the importance of utilizing big data, as it allows them to improve their competitiveness and create new value. In particular, the use of personal big data in recommender systems is drawing attention, because personal big data facilitate more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data in order to grasp the preferences and behavior of users from various viewpoints. We derive five user profiles based on personal information such as ratings, site preferences, demographics, Internet usage, and topics in text. Next, the similarity between users is calculated based on these profiles, and neighbors are then identified from the results. One of three ensemble approaches is applied to calculate the similarity: the similarity of the combined profile, the average similarity across profiles, or the weighted average similarity across profiles. Finally, the products that the neighbors most prefer are recommended to the target users. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specialized in analyzing Web site rankings. R was used to implement the proposed recommender system, and SAS E-miner was used to conduct the topic analysis with keyword search. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The widely used F1 metric, which gives equal weight to recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile-based CF algorithm; in particular, the ensemble approach using weighted average similarity shows the highest performance. The improvement in F1 is 16.9 percent for the ensemble approach using weighted average similarity and 8.1 percent for the ensemble approach using the average similarity of each profile. From these results, we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when customer ratings are scarce. This study is significant in suggesting what kinds of information can be used to create profiles in a big data environment and how they can be combined and utilized effectively. However, our methodology requires further study before real-world application; in particular, the differences in recommendation accuracy should be compared by applying the proposed method to different recommendation algorithms, to identify which combination shows the best performance.
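
For illustration, a minimal Python sketch of a weighted-average-similarity ensemble, the best-performing variant reported above; the profile names, weight values, and toy vectors are hypothetical and are not taken from the paper.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two profile vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def weighted_ensemble_similarity(user_a, user_b, weights):
    """Weighted average of per-profile similarities (one of the three ensemble variants)."""
    total = sum(weights.values())
    return sum(w * cosine(user_a[p], user_b[p]) for p, w in weights.items()) / total

# Hypothetical multimodal profiles: rating, site-preference, and topic vectors
alice = {"rating": np.array([5, 0, 3.]), "site": np.array([1, 0, 1.]), "topic": np.array([.2, .8])}
bob   = {"rating": np.array([4, 1, 0.]), "site": np.array([1, 1, 0.]), "topic": np.array([.3, .7])}
weights = {"rating": 0.5, "site": 0.3, "topic": 0.2}   # hypothetical profile weights

print(round(weighted_ensemble_similarity(alice, bob, weights), 3))
```

Under such a similarity, recommendations would then be drawn from the items most preferred by the target user's nearest neighbors.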

A Study on the Framework of Cutover Decision Making on Large-scale IS Development Projects: A Core Banking Development Case of D Bank (대규모 정보시스템 개발 프로젝트의 컷오버 의사결정 프레임워크에 관한 연구: D은행 코어뱅킹 시스템 구축 사례를 중심으로)

  • Jeong, Cheon-Su;Ahn, Hyun-Chul;Jeong, Seung-Ryul
    • Information Systems Review, v.14 no.1, pp.1-19, 2012
  • A large-scale IS development project takes a long time, so its project manager needs to be especially careful about risk management. In particular, appropriate cutover decision making is critical in large-scale IS development projects because the opening of a large-scale IS significantly impacts the organization. Despite its importance, cutover decision making in conventional IS development projects has been done in a quite simple way: conventional cutover decisions are made by considering only whether the new IS operates, from the system, application, and data implementation perspectives. However, this approach may lead to unsatisfactory performance or system failure in complex large-scale IS development. Against this background, we propose a new framework for cutover decision making in large-scale IS projects. To validate its applicability, we applied the framework to a core banking system development case. The case study shows that our framework is effective in making proper cutover decisions.


An Adaptive Relay Node Selection Scheme for Alert Message Propagation in Inter-vehicle Communication (차량간 통신에서 긴급 메시지 전파를 위한 적응적 릴레이 노드 선정기법)

  • Kim, Tae-Hwan;Kim, Hie-Cheol;Hong, Won-Kee
    • The KIPS Transactions:PartC, v.14C no.7, pp.571-582, 2007
  • A vehicular ad-hoc network is established temporarily through inter-vehicle communication without any additional infrastructure. It requires immediate message propagation because it mainly deals with critical traffic information such as traffic accidents. The distance-based broadcast scheme is one of the representative broadcast schemes for vehicular ad-hoc networks; in this scheme, the node that disseminates messages is selected based on its distance from the source node. However, the message propagation delay increases if the relay nodes are not located at the border of the source node's transmission range, and when the node density is low, the delay becomes even longer. In this paper, we propose a time-window reservation based relay node selection scheme. A node receiving the alert message from the source node has its own time window and randomly selects its waiting time within that window, and a proportional portion of the window is reserved in order to reduce the message propagation delay. The experimental results show that the proposed scheme has a shorter message propagation delay than the distance-based broadcast scheme irrespective of node density in the VANET. In particular, when the node density is low, the proposed scheme shows about 26% shorter delay and about 46% better performance in terms of a compound metric that is a function of propagation latency and network traffic.
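
As a hedged illustration of the reservation idea, a short Python sketch in which a receiving node draws its random waiting time from a window whose reserved portion depends on its distance from the source, so nodes near the border of the transmission range tend to rebroadcast first; the window length, reserved fraction, and transmission range below are invented, and the paper's exact reservation rule may differ.

```python
import random

WINDOW_MS = 20.0         # hypothetical contention window length
RESERVED_FRACTION = 0.5  # hypothetical fraction of the window reserved by distance

def waiting_time_ms(distance_m, tx_range_m=250.0):
    """Pick a random wait time; farther receivers draw from an earlier slice of the window.

    The reserved slice shrinks the effective window for nodes near the edge of the
    source's transmission range, so they tend to rebroadcast the alert first.
    """
    ratio = min(distance_m / tx_range_m, 1.0)            # 1.0 = at the border of the range
    upper = WINDOW_MS * (1.0 - RESERVED_FRACTION * ratio)
    return random.uniform(0.0, upper)

# A node near the border (240 m) usually waits less than one close to the source (50 m)
print(waiting_time_ms(240.0), waiting_time_ms(50.0))
```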

Correlation analysis between energy indices and source-to-node shortest pathway of water distribution network (상수도관망 수원-절점 최소거리와 에너지 지표 상관성 분석)

  • Lee, Seungyub;Jung, Donghwi
    • Journal of Korea Water Resources Association, v.51 no.11, pp.989-998, 2018
  • Connectivity between a water source and a demand node can serve as a critical system performance indicator of the severity of water distribution network (WDN) failure under abnormal conditions. Graph theory-based approaches have been widely applied to quantify this connectivity because of a WDN's graph-like topology. However, most previous studies used undirected, unweighted graph theory, which is not well suited to WDNs. In this study, directed, weighted graph theory was applied to WDN connectivity analysis. We also propose novel connectivity indicators, the Source-to-Node Shortest Pathway (SNSP) and the SNSP-Degree (SNSP-D), the inverse of the SNSP value, which do not require complicated hydraulic simulation of the WDN of interest. The proposed SNSP-D index was demonstrated on a total of 42 networks in J City, South Korea, for which the Pearson correlation coefficient (PCC) between the proposed SNSP-D and four other system performance indicators (three resilience indices and an energy efficiency metric) was computed. A system-representative value of the SNSP-D was confirmed to have a strong correlation with all resilience and energy efficiency indices (PCC = 0.87 on average); in particular, the PCC was higher than 0.93 for the modified resilience index (MRI) and the energy efficiency indicator. In addition, a multiple linear regression analysis was performed to identify the hydraulic characteristics of a system that affect the correlation between SNSP-D and the other system performance indicators. The proposed SNSP is expected to serve as a useful surrogate measure of resilience and/or energy efficiency indices in practice.
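
For illustration, a minimal Python sketch of a source-to-node shortest pathway computed on a directed, weighted graph with networkx; the toy network, the use of pipe length as the edge weight, and the node names are assumptions for this sketch, and the paper's actual weighting may differ.

```python
import networkx as nx

# Toy directed, weighted WDN: edges follow an assumed flow direction, weights are pipe lengths (m)
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("source", "J1", 120.0),
    ("J1", "J2", 80.0),
    ("J1", "J3", 200.0),
    ("J2", "J3", 60.0),
])

# SNSP: shortest source-to-node path length; SNSP-D: its inverse (larger = better connected)
snsp = nx.single_source_dijkstra_path_length(G, "source", weight="weight")
snsp_d = {node: 1.0 / dist for node, dist in snsp.items() if dist > 0}
print(snsp_d)   # e.g. {'J1': 0.0083, 'J2': 0.005, 'J3': 0.0038}
```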

The Asymptotic Throughput and Connectivity of Cognitive Radio Networks with Directional Transmission

  • Wei, Zhiqing;Feng, Zhiyong;Zhang, Qixun;Li, Wei;Gulliver, T. Aaron
    • Journal of Communications and Networks, v.16 no.2, pp.227-237, 2014
  • Throughput scaling laws for two coexisting ad hoc networks with m primary users (PUs) and n secondary users (SUs) randomly distributed in a unit area have been widely studied. Early work showed that the secondary network performs as well as a stand-alone network, namely, the per-node throughput of the secondary network is $\Theta(1/\sqrt{n\log n})$. In this paper, we show that by exploiting directional spectrum opportunities in the secondary network, the throughput of the secondary network can be improved. If the beamwidth of the secondary transmitter (TX)'s main lobe is $\delta = o(1/\log n)$, SUs can achieve a per-node throughput of $\Theta(1/\sqrt{n\log n})$ for directional transmission and omni reception (DTOR), which is $\Theta(\log n)$ times higher than the throughput without directional transmission. On the contrary, if $\delta = \omega(1/\log n)$, the throughput gain of SUs is $2\pi/\delta$ for DTOR compared with the throughput without directional antennas. Similarly, we derive the throughput for the other cases of directional transmission. Connectivity is another critical metric for evaluating the performance of random ad hoc networks. The relation between the number of SUs n and the number of PUs m is assumed to be $n = m^{\beta}$. We show that with the HDP-VDP routing scheme, which is widely employed in the analysis of throughput scaling laws of ad hoc networks, the connectivity of a single SU can be guaranteed when $\beta > 1$, and the connectivity of a single secondary path can be guaranteed when $\beta > 2$. While circumventing routing can improve the connectivity of a cognitive radio ad hoc network, we verify that the connectivity of a single SU as well as of a single secondary path can be guaranteed when $\beta > 1$. Thus, to achieve connectivity of the secondary network, the density of SUs should be (asymptotically) greater than that of PUs.

De-identifying Unstructured Medical Text and Attribute-based Utility Measurement (의료 비정형 텍스트 비식별화 및 속성기반 유용도 측정 기법)

  • Ro, Gun;Chun, Jonghoon
    • The Journal of Society for e-Business Studies, v.24 no.1, pp.121-137, 2019
  • De-identification removes personal information from a data set so that the remaining information cannot be linked to a specific individual. As a result, de-identification can lower the exposure risk of personal information that may occur in the process of collecting, processing, storing, and distributing information. Although there have been many studies of de-identification algorithms, protection models, and so on, most are limited to structured data, and relatively little attention has been paid to de-identifying unstructured data. In particular, in the medical field, where unstructured text is frequently used, all personally identifiable information is often simply removed to lower the exposure risk, even though data utility is lowered accordingly. This study proposes a new de-identification method that applies the k-anonymity protection model to unstructured medical text, a field in which de-identification is mandatory because privacy protection issues are more critical than in other fields. This study also proposes a new utility metric so that the utility of a de-identified data set can be understood intuitively. If the results of this research are applied to the various industrial fields in which unstructured text is used, we expect the utility of unstructured text containing personal information to increase.
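
To make the k-anonymity condition concrete, a minimal Python sketch that checks whether a set of records satisfies k-anonymity over chosen quasi-identifier attributes; the attribute names and values stand in for information extracted from clinical notes and are hypothetical, and the paper's extraction procedure and utility metric are not reproduced here.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values appears at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

# Hypothetical attributes extracted from de-identified clinical notes
records = [
    {"age_band": "40-49", "sex": "F", "diagnosis": "asthma"},
    {"age_band": "40-49", "sex": "F", "diagnosis": "asthma"},
    {"age_band": "40-49", "sex": "F", "diagnosis": "copd"},
]

print(is_k_anonymous(records, ["age_band", "sex"], k=3))                 # True
print(is_k_anonymous(records, ["age_band", "sex", "diagnosis"], k=2))    # False: the "copd" combination appears once
```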

Machine-learning-based out-of-hospital cardiac arrest (OHCA) detection in emergency calls using speech recognition (119 응급신고에서 수보요원과 신고자의 통화분석을 활용한 머신 러닝 기반의 심정지 탐지 모델)

  • Jong In Kim;Joo Young Lee;Jio Chung;Dae Jin Shin;Dong Hyun Choi;Ki Hong Kim;Ki Jeong Hong;Sunhee Kim;Minhwa Chung
    • Phonetics and Speech Sciences, v.15 no.4, pp.109-118, 2023
  • Cardiac arrest is a critical medical emergency in which an immediate response is essential for patient survival. This is especially true for out-of-hospital cardiac arrest (OHCA), for which the actions of emergency medical services in the early stages significantly impact outcomes. However, in Korea, a challenge arises from a shortage of dispatchers handling a large volume of emergency calls. In such situations, the implementation of a machine learning-based OHCA detection program can assist responders and improve patient survival rates. In this study, we address this challenge by developing a machine learning-based OHCA detection program that analyzes transcripts of conversations between responders and callers to identify instances of cardiac arrest. The proposed system includes an automatic transcription module for these conversations, a text-based cardiac arrest detection model, and the server and client components needed for deployment. Importantly, the experimental results demonstrate the model's effectiveness, achieving a score of 79.49% on the F1 metric and reducing the time needed for cardiac arrest detection by 15 seconds compared to dispatchers. Despite working with a limited dataset, this research highlights the potential of a cardiac arrest detection program as a valuable tool for responders, ultimately enhancing cardiac arrest survival rates.
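
For reference, a minimal Python sketch of the F1 metric used to report the 79.49% score above; the labels and predictions here are invented toy data, and the paper's actual pipeline (speech recognition followed by a text-based detector) is not shown.

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive (cardiac arrest) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Invented call-level labels: 1 = cardiac arrest, 0 = other emergency
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(round(f1_score(y_true, y_pred), 3))  # 0.75 on this toy data; the paper reports 79.49% on its own dataset
```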