• Title/Summary/Keyword: traditional metrics

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan; Safran, Khan; Suyoung, Seo
    • Korean Journal of Remote Sensing / v.39 no.1 / pp.1-21 / 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. Depth maps, or depth images, are single images that carry metric information along the three-dimensional axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving, and much work has been done on calculating depth maps. We reviewed the status of depth map estimation across different techniques, study areas, and models applied over the last 20 years, surveying depth-mapping techniques based on both traditional approaches and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth-mapping techniques and recent deep-learning methodologies. The study covers the critical points of each method from different perspectives, such as datasets, procedures performed, types of algorithms, loss functions, and well-known evaluation metrics. It also discusses the subdomains of each method, namely supervised, unsupervised, and semi-supervised approaches, and elaborates on the challenges of the different methods. In conclusion, we discuss new ideas for future research in depth map estimation.
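The abstract refers to the well-known evaluation metrics used across the surveyed depth-estimation work. As a generic illustration only (not code from the paper), the sketch below computes three metrics that are standard in the depth-estimation literature: absolute relative error, RMSE, and the δ < 1.25 threshold accuracy. The arrays `pred` and `gt` are hypothetical predicted and ground-truth depth maps.

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-6):
    """Standard depth-estimation metrics over valid (gt > 0) pixels."""
    mask = gt > 0                                      # ignore pixels without ground truth
    pred, gt = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(pred - gt) / (gt + eps))  # absolute relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))          # root mean squared error
    ratio = np.maximum(pred / (gt + eps), gt / (pred + eps))
    delta1 = np.mean(ratio < 1.25)                     # threshold accuracy
    return {"abs_rel": abs_rel, "rmse": rmse, "delta<1.25": delta1}

# Hypothetical usage with random depth maps
pred = np.random.uniform(0.5, 10.0, size=(480, 640))
gt = np.random.uniform(0.5, 10.0, size=(480, 640))
print(depth_metrics(pred, gt))
```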

A copula based bias correction method of climate data

  • Gyamfi Kwame Adutwum; Eun-Sung Chung
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.160-160 / 2023
  • Generally, Global Climate Model (GCM) outputs cannot be used directly due to their inherent error arising from over- or under-estimation of climate variables compared to observed data. Several bias correction methods have been devised to solve this problem. Most of the traditional bias correction methods are one-dimensional, as they bias-correct the climate variables separately. One such method is Quantile Mapping, which builds a transfer function based on the statistical differences between the GCM and observed variables. Laux et al. introduced a copula-based method that bias-corrects simulated climate data by employing not one but two different climate variables simultaneously, essentially extending the traditional one-dimensional method into two dimensions, but it has some limitations. This study uses objective functions to specifically address the limitations of Laux's method with respect to Quantile Mapping. The objective functions used were the observed rank correlation function, the observed moment function, and the observed likelihood function. To illustrate the performance of this method, it is applied to ten GCMs for 20 stations in South Korea. The marginal distributions used were the Weibull, Gamma, Lognormal, Logistic, and Gumbel distributions, and the tested copula families include most Archimedean copula families. Six performance metrics are used to evaluate the efficiency of this method: the Mean Square Error, Root Mean Square Error, Kolmogorov-Smirnov test, Percent Bias, Nash-Sutcliffe Efficiency, and the Kullback-Leibler Divergence. The results showed a significant improvement over Laux's method, especially when maximizing the observed rank correlation function and when maximizing a combination of the observed rank correlation and observed moment functions, for all GCMs in the validation period.
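For context on the one-dimensional baseline that the copula-based method extends, here is a minimal empirical quantile-mapping sketch (not the authors' code): each simulated value is mapped to the observed quantile corresponding to its rank in the simulated distribution. The `obs` and `gcm` arrays are synthetic placeholders.

```python
import numpy as np

def empirical_quantile_mapping(gcm, obs):
    """Bias-correct simulated values by matching their empirical quantiles
    to the observed distribution (one variable at a time)."""
    gcm = np.asarray(gcm, dtype=float)
    obs_sorted = np.sort(obs)
    # Empirical non-exceedance probability of each simulated value
    ranks = np.searchsorted(np.sort(gcm), gcm, side="right") / len(gcm)
    # Map those probabilities onto the observed quantile function
    probs = np.arange(1, len(obs_sorted) + 1) / (len(obs_sorted) + 1)
    return np.interp(ranks, probs, obs_sorted)

# Hypothetical example: simulated precipitation biased low relative to observations
obs = np.random.gamma(2.0, 5.0, size=1000)
gcm = np.random.gamma(2.0, 3.5, size=1000)
corrected = empirical_quantile_mapping(gcm, obs)
print(obs.mean(), gcm.mean(), corrected.mean())
```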

Structural Crack Detection Using Deep Learning: An In-depth Review

  • Safran Khan; Abdullah Jan; Suyoung Seo
    • Korean Journal of Remote Sensing / v.39 no.4 / pp.371-393 / 2023
  • Crack detection in structures plays a vital role in ensuring their safety, durability, and reliability. Traditional crack detection methods sometimes require significant manual inspection, which is laborious, expensive, and prone to human error. Deep learning algorithms, which can learn intricate features from large-scale datasets, have recently emerged as a viable option for automated crack detection. This study presents an in-depth review of crack detection methods used to date, including image processing, traditional machine learning, and deep learning methods. Specifically, it provides a comparative analysis of deep learning-based crack detection methods, aiming to give insight into the advancements, challenges, and future directions in this field. To facilitate the comparative analysis, this study surveys publicly available crack detection datasets and benchmarks commonly used in deep learning research. Evaluation metrics employed to assess the performance of different models are discussed, with emphasis on accuracy, precision, recall, and F1-score. Moreover, this study provides an in-depth analysis of recent studies and highlights key findings, including state-of-the-art techniques, novel architectures, and innovative approaches that address the shortcomings of existing methods. Finally, this study summarizes the key insights gained from the comparative analysis, highlighting the potential of deep learning to revolutionize crack detection methodologies. The findings of this research will serve as a valuable resource for researchers in the field, aiding them in selecting appropriate methods for crack detection and inspiring further advancements in this domain.
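The evaluation metrics the review emphasizes (accuracy, precision, recall, F1-score) can be summarized in a short generic sketch; it is not taken from any surveyed paper and assumes binary crack / no-crack labels per image or per pixel.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary crack / no-crack labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical usage with six labeled samples
print(classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```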

Outlier Detection Based on Discrete Wavelet Transform with Application to Saudi Stock Market Closed Price Series

  • RASHEDI, Khudhayr A.; ISMAIL, Mohd T.; WADI, S. Al; SERROUKH, Abdeslam
    • The Journal of Asian Finance, Economics and Business / v.7 no.12 / pp.1-10 / 2020
  • This study investigates the problem of outlier detection based on the discrete wavelet transform in the context of time series data, where the identification and treatment of outliers constitute an important component. An outlier is defined as a data point that deviates markedly from the rest of the observations within a data sample. In this work we focus on the application of the traditional method suggested by Tukey (1977) for detecting outliers in the closed price series of the Saudi Arabia stock market (Tadawul) between Oct. 2011 and Dec. 2019. The method is applied to the details obtained from the MODWT (Maximal-Overlap Discrete Wavelet Transform) of the original series. The results show that the suggested methodology was successful in detecting all of the outliers in the series. The findings of this study suggest that we can model and forecast the volatility of returns from the reconstructed series, with outliers removed, using GARCH models. The estimated GARCH volatility model was compared to other asymmetric GARCH models using standard forecast error metrics. It is found that the performance of the standard GARCH model was as good as that of the gjrGARCH model over the out-of-sample return forecasts, among other GARCH specifications.
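The detection rule referred to above is Tukey's (1977) fence applied to the MODWT detail coefficients. A minimal sketch of that rule follows; it assumes the detail coefficients have already been computed (here they are simulated), so the paper's MODWT step is not reproduced.

```python
import numpy as np

def tukey_outliers(x, k=1.5):
    """Flag points outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return np.where((x < lower) | (x > upper))[0]

# Hypothetical detail coefficients with two injected spikes
detail = np.random.normal(0, 1, 500)
detail[[100, 350]] = [8.0, -9.0]
print(tukey_outliers(detail))   # indices of the flagged coefficients
```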

Benchmarking of BioPerl, Perl, BioJava, Java, BioPython, and Python for Primitive Bioinformatics Tasks and Choosing a Suitable Language

  • Ryu, Tae-Wan
    • International Journal of Contents / v.5 no.2 / pp.6-15 / 2009
  • Recently, many different programming languages have emerged for the development of bioinformatics applications. In addition to the traditional languages, languages from open source projects such as BioPerl, BioPython, and BioJava have become popular because they provide special tools for biological data processing and are easy to use. However, it has not been well studied which of these programming languages is most suitable for a given bioinformatics task, or which factors should be considered in choosing a language for a project. Like many other application projects, bioinformatics projects require various types of tasks, so it is a challenge to characterize all the aspects of a project in order to choose a language. However, most projects require some common and primitive tasks, such as file I/O, text processing, and basic computation for counting, translation, statistics, etc. This paper presents benchmarking results for six popular languages, Perl, BioPerl, Python, BioPython, Java, and BioJava, on several common and simple bioinformatics tasks. The experimental results for each language are compared through quantitative evaluation metrics such as execution time, memory usage, and size of the source code. Other qualitative factors that affect the success of a project, including writability, readability, portability, scalability, and maintainability, are also discussed. The results of this research can be useful to developers in choosing an appropriate language for the development of bioinformatics applications.
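As a toy analogue of the benchmarked primitive tasks (not the paper's actual benchmark suite), the sketch below times a simple sequence-processing computation in Python and reports peak memory, using only the standard library. The 5 Mb input sequence is generated in place of real file I/O.

```python
import random
import time
import tracemalloc

def gc_content(seq):
    """Fraction of G/C bases, a typical primitive bioinformatics computation."""
    return (seq.count("G") + seq.count("C")) / len(seq)

# Synthetic 5 Mb sequence standing in for a FASTA file read
seq = "".join(random.choices("ACGT", k=5_000_000))

tracemalloc.start()
start = time.perf_counter()
result = gc_content(seq)
elapsed = time.perf_counter() - start
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"GC content: {result:.4f}, time: {elapsed:.3f}s, peak memory: {peak / 1e6:.1f} MB")
```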

Load Shedding for Temporal Queries over Data Streams

  • Al-Kateb, Mohammed; Lee, Byung-Suk
    • Journal of Computing Science and Engineering / v.5 no.4 / pp.294-304 / 2011
  • Enhancing continuous queries over data streams with temporal functions and predicates enriches the expressive power of those queries. While traditional continuous queries retrieve only the values of attributes, temporal continuous queries retrieve the valid time intervals of those values as well. Correctly evaluating such queries requires coalescing the adjacent timestamps of value-equivalent tuples prior to evaluating temporal functions and predicates. For many stream applications, the available computing resources may be too limited to produce exact query results. These limitations are commonly addressed through load shedding, which produces approximate query results. Many load shedding mechanisms have been proposed so far, but for temporal continuous queries the presence of coalescing makes these existing methods unsuitable. In this paper, we propose a new accuracy metric and a load shedding algorithm that are suitable for temporal query processing when memory is insufficient. The accuracy metric uses a combination of the Jaccard coefficient, to measure the accuracy of attribute values, and PQI interval orders, to measure the accuracy of the valid time intervals in the approximate query result. The algorithm employs a greedy strategy combining two objectives reflecting the two accuracy components (i.e., value and interval). In the performance study, the proposed greedy algorithm outperforms a conventional random load shedding algorithm by up to an order of magnitude in its achieved accuracy.
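Two of the ingredients named above, coalescing adjacent validity intervals of value-equivalent tuples and the Jaccard coefficient used in the value-accuracy component, can be illustrated with a short sketch. It is a generic illustration under simplified assumptions (integer timestamps, tuples represented as (value, start, end)), not the authors' implementation.

```python
def coalesce(tuples):
    """Merge adjacent or overlapping validity intervals of value-equivalent tuples.
    Each tuple is (value, start, end) with end exclusive."""
    result = []
    for value, start, end in sorted(tuples):
        if result and result[-1][0] == value and start <= result[-1][2]:
            prev = result[-1]
            result[-1] = (value, prev[1], max(prev[2], end))
        else:
            result.append((value, start, end))
    return result

def jaccard(a, b):
    """Jaccard coefficient between two sets of attribute values."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

stream = [("x", 1, 3), ("x", 3, 5), ("y", 5, 7), ("x", 8, 9)]
print(coalesce(stream))                  # [('x', 1, 5), ('x', 8, 9), ('y', 5, 7)]
print(jaccard({"x", "y"}, {"x", "z"}))   # 0.333...
```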

Energy-Efficiency and Transmission Strategy Selection in Cooperative Wireless Sensor Networks

  • Zhang, Yanbing; Dai, Huaiyu
    • Journal of Communications and Networks / v.9 no.4 / pp.473-481 / 2007
  • Energy efficiency is one of the most critical concerns for wireless sensor networks. By allowing sensor nodes in close proximity to cooperate in transmission to form a virtual multiple-input multiple-output (MIMO) system, recent progress in wireless MIMO communications can be exploited to boost the system throughput or, equivalently, to reduce the energy consumption for the same throughput and BER target. However, these cooperative transmission strategies may incur additional energy cost and system overhead. In this paper, assuming that data collectors are equipped with antenna arrays and superior processing capability, the energy efficiency of the relevant traditional and cooperative transmission strategies, single-input multiple-output (SIMO), space-time block coding (STBC), and spatial multiplexing (SM), is studied. Analysis in the wideband regime reveals that, while receive diversity introduces significant improvement in both energy efficiency and spectral efficiency, the further improvement due to the transmit diversity of STBC is limited, in contrast to the superiority of the SM scheme, especially at non-trivial spectral efficiency. These observations are further confirmed in our analysis of more realistic systems with limited bandwidth, finite constellation sizes, and a target error rate. Based on this analysis, general guidelines are presented for optimal transmission strategy selection at the system level and link level, aiming at minimum energy consumption while meeting different requirements. The proposed selection rules, especially those based on system-level metrics, are easy to implement for sensor applications. The framework provided here may also be readily extended to other scenarios or applications.

Mobile Resource Reliability-based Job Scheduling for Mobile Grid

  • Jang, Sung-Ho; Lee, Jong-Sik
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.1 / pp.83-104 / 2011
  • Mobile grid is a combination of grid computing and mobile computing that builds grid systems in a wireless mobile environment, and the development of network technology is helping to realize it. A mobile grid based on established grid infrastructures needs effective resource management and reliable job scheduling because it utilizes not only static grid resources but also dynamic grid resources with mobility. However, mobile devices are considered unavailable resources in traditional grids, and mobile resources must be integrated into existing grid sites. Therefore, this paper presents a mobile grid middleware that interconnects existing grid infrastructures with mobile resources, together with a mobile service agent installed on the mobile resources. This paper also proposes a mobile resource reliability-based job scheduling model in order to overcome the unreliability of wireless mobile devices and guarantee stable and reliable job processing. In the proposed job scheduling model, the mobile service agent calculates and predicts the reliability of each mobile resource using diverse reliability metrics, and the mobile grid middleware allocates jobs to mobile resources according to the predicted reliability. We implemented a simulation model that simplifies the various functions of the proposed job scheduling model using DEVS (Discrete Event System Specification), a formalism for modeling and analyzing general systems, and conducted diverse experiments for performance evaluation. Experimental results demonstrate that the proposed model can help improve the performance of mobile grid in comparison with existing job scheduling models.
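As a simplified illustration of reliability-driven allocation (the paper's own reliability metrics and prediction step are not reproduced), the sketch below greedily assigns jobs to the mobile resource with the highest predicted reliability that still has spare capacity. The resource and job structures are hypothetical.

```python
def schedule_by_reliability(jobs, resources):
    """Greedy allocation: each job goes to the most reliable resource with spare capacity.
    `resources` maps name -> {"reliability": float in [0, 1], "capacity": int}."""
    assignment = {}
    # Consider the most reliable resources first
    ranked = sorted(resources.items(), key=lambda kv: kv[1]["reliability"], reverse=True)
    for job in jobs:
        for name, info in ranked:
            if info["capacity"] > 0:
                assignment[job] = name
                info["capacity"] -= 1
                break
    return assignment

resources = {
    "phone_A": {"reliability": 0.92, "capacity": 2},
    "phone_B": {"reliability": 0.75, "capacity": 3},
    "tablet_C": {"reliability": 0.60, "capacity": 1},
}
print(schedule_by_reliability(["job1", "job2", "job3", "job4"], resources))
```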

Efficient Resource Slicing Scheme for Optimizing Federated Learning Communications in Software-Defined IoT Networks

  • Tam, Prohim; Math, Sa; Kim, Seokhoon
    • Journal of Internet Computing and Services / v.22 no.5 / pp.27-33 / 2021
  • With the broad adoption of the Internet of Things (IoT) in a variety of scenarios and application services, management and orchestration entities need to upgrade the traditional architecture and develop intelligent models with ultra-reliable methods. In a heterogeneous network environment, mission-critical IoT applications deserve particular consideration: with erroneous prioritization and high failure rates, emergency scenarios can incur catastrophic losses in terms of human lives, business assets, and privacy leakage. In this paper, an efficient resource slicing scheme for optimizing federated learning in software-defined IoT (SDIoT) is proposed. Decentralized support vector regression (SVR) based controllers predict the IoT slices from packet inspection data during peak-hour central congestion to meet time-sensitive conditions. In off-peak intervals, a centralized deep neural network (DNN) model handles the computation-intensive aspects of fine-grained slicing and revises the decentralized controller outputs. With the slices and priorities known, federated learning communications proceed iteratively through resources adjusted by a virtual network function forwarding graph (VNFFG) descriptor set up in the software-defined networking (SDN) and network functions virtualization (NFV) enabled architecture. To demonstrate the theoretical approach, the Mininet emulator was used to compare the reference and proposed schemes by capturing the key Quality of Service (QoS) performance metrics.
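As a loose analogue of the decentralized SVR-based prediction described above (the paper's packet-inspection features are not available here), the following sketch fits scikit-learn's SVR to synthetic per-interval traffic and forecasts the held-out intervals. All data and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic per-interval traffic volume with a daily-like cycle (placeholder for
# the packet-inspection features used by the controllers)
rng = np.random.default_rng(0)
t = np.arange(200)
traffic = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

# Use the previous 3 intervals to predict the next one
lags = 3
X = np.array([traffic[i:i + lags] for i in range(len(traffic) - lags)])
y = traffic[lags:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
model.fit(X[:-20], y[:-20])                        # train on all but the last 20 intervals
pred = model.predict(X[-20:])                      # forecast the held-out intervals
print(np.sqrt(np.mean((pred - y[-20:]) ** 2)))     # RMSE of the traffic forecast
```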

A Watermarking Technique for User Authentication Based on a Combination of Face Image and Device Identity in a Mobile Ecosystem

  • Al-Jarba, Fatimah; Al-Khathami, Mohammed
    • International Journal of Computer Science & Network Security / v.21 no.9 / pp.303-316 / 2021
  • Digital content protection has recently become an important requirement in biometrics-based authentication systems due to the challenges involved in designing a feasible and effective user authentication method. Biometric approaches are more effective than traditional methods, yet they cannot be considered entirely reliable. This study develops a reliable and trustworthy method for verifying that the owner of the biometric traits is the actual user and not an impostor. A watermarking-based approach is developed using a combination of a color face image of the user and a mobile equipment identifier (MEID). Employing watermark techniques that cannot be easily removed or destroyed, a blind image watermarking scheme based on the fast discrete curvelet transform (FDCuT) and the discrete cosine transform (DCT) is proposed. FDCuT is applied to the color face image to obtain the frequency coefficients of its curvelet decomposition, and the DCT is then applied to the high-frequency curvelet coefficients. The mid-band frequency coefficients are modified using two uncorrelated noise sequences carrying the MEID watermark bits to obtain the watermarked image. An analysis is carried out to verify the performance of the proposed scheme using conventional performance metrics. Compared with an existing approach, the proposed approach better protects multimedia data from unauthorized access and effectively prevents anyone other than the actual user from using the identity or images.
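Since the FDCuT step has no widely standard Python implementation, the sketch below illustrates only the DCT part of the idea in a generic form: adding one of two uncorrelated pseudo-random sequences to mid-band DCT coefficients of an 8x8 block, in the spirit of classical spread-spectrum watermarking. It is an assumption-laden analogue, not the authors' FDCuT+DCT scheme, and the mid-band positions and strength are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
MIDBAND = [(2, 3), (3, 2), (3, 3), (2, 4), (4, 2), (3, 4), (4, 3), (4, 4)]  # mid-frequency positions
PN0 = rng.standard_normal(len(MIDBAND))   # pseudo-random sequence for bit 0
PN1 = rng.standard_normal(len(MIDBAND))   # pseudo-random sequence for bit 1

def embed_bit(block, bit, strength=5.0):
    """Add one of two uncorrelated PN sequences to mid-band DCT coefficients."""
    coeffs = dctn(block, norm="ortho")
    pn = PN1 if bit else PN0
    for (u, v), p in zip(MIDBAND, pn):
        coeffs[u, v] += strength * p
    return idctn(coeffs, norm="ortho")

# Hypothetical 8x8 image block and a single watermark bit (e.g., one MEID bit)
block = rng.uniform(0, 255, (8, 8))
watermarked = embed_bit(block, bit=1)
print(np.abs(watermarked - block).max())   # embedding distortion
```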