• Title/Summary/Keyword: metric learning


Analysis of Topological Invariants of Manifold Embedding for Waveform Signals (파형 신호에 대한 다양체 임베딩의 위상학적 불변항의 분석)

  • Hahn, Hee-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.291-299
    • /
    • 2016
  • This paper raises the question of whether a simple periodic phenomenon is associated with topology and provides convincing answers to it. A variety of musical instrument sound signals are used to support this assertion: they are embedded in Euclidean space and their topologies are analyzed by computing homology groups. A commute time embedding is employed to transform waveform segments into corresponding geometries, implemented by organizing patches according to a graph-based metric. It is shown that the commute time embedding produces intrinsic topological complexities even though the geometries vary with the spectra of the signals. The paper employs persistent homology to determine the topological invariants of simplicial complexes constructed by randomly sampling the commute time embedding of the waveforms, and discusses their applications.
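
One plausible end-to-end sketch of the pipeline described above, in Python: waveform patches are connected by a k-nearest-neighbor graph, the commute time embedding is taken from the graph Laplacian eigendecomposition, and persistent homology is computed with the ripser package. The graph construction, parameter values, and toy signal are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: commute time embedding of waveform patches + persistent homology.
# Assumes numpy, scipy, scikit-learn, and ripser are available.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian
from ripser import ripser

def commute_time_embedding(patches, k=8, dim=3):
    """Embed patches using Laplacian eigenvectors scaled by 1/sqrt(eigenvalue)."""
    W = kneighbors_graph(patches, n_neighbors=k, mode='distance')
    W = 0.5 * (W + W.T)                    # symmetrize the kNN graph
    W.data = np.exp(-W.data)               # turn distances into affinities
    L = laplacian(W, normed=False).toarray()
    vals, vecs = np.linalg.eigh(L)
    # Skip the trivial zero eigenvalue; commute-time scaling is 1/sqrt(lambda).
    return vecs[:, 1:dim + 1] / np.sqrt(vals[1:dim + 1])

# Toy periodic signal standing in for an instrument recording.
t = np.linspace(0, 1, 4000)
signal = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
patch_len, hop = 64, 16
patches = np.stack([signal[i:i + patch_len]
                    for i in range(0, len(signal) - patch_len, hop)])

embedding = commute_time_embedding(patches)
diagrams = ripser(embedding, maxdim=1)['dgms']   # H0 and H1 persistence diagrams
print("H1 intervals (loops):", len(diagrams[1]))
```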

Land Use Feature Extraction and Sprawl Development Prediction from Quickbird Satellite Imagery Using Dempster-Shafer and Land Transformation Model

  • Saharkhiz, Maryam Adel;Pradhan, Biswajeet;Rizeei, Hossein Mojaddadi;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.1
    • /
    • pp.15-27
    • /
    • 2020
  • Accurate knowledge of land use/land cover (LULC) features and their relative changes over time is essential for sustainable urban management. Urban sprawl has also been a worldwide concern that needs to be carefully monitored, particularly in developing countries where unplanned building construction has been expanding at a high rate. Recently, remotely sensed imagery with very high spatial/spectral resolution and state-of-the-art machine learning approaches have taken urban classification and growth monitoring to a higher level. In this research, we classified Quickbird satellite imagery by object-based image analysis with Dempster-Shafer theory (OBIA-DS) for the years 2002 and 2015 at Karbala, Iraq. The real LULC changes between these years, including residential sprawl expansion, were identified via a change detection procedure. Based on the extracted LULC features and the detected trend of the urban pattern, future LULC dynamics were simulated using the land transformation model (LTM) in a geospatial information system (GIS) platform. Both the classification and prediction stages were successfully validated using ground control points (GCPs) through the Kappa coefficient accuracy metric, which indicated 0.87 and 0.91 for the 2002 and 2015 classifications, respectively, and 0.79 for the prediction. Detailed results revealed substantial growth in built-up area over the fifteen years, mostly replacing agriculture and orchard fields. The prediction scenario of LULC sprawl development for 2030 revealed a substantial decline in green and agricultural land as well as an extensive increase in built-up area, especially on the outskirts of the city, without following residential pattern standards. The proposed method helps urban decision-makers identify the detailed temporal-spatial growth pattern of highly populated cities like Karbala. Additionally, the results of this study can be considered a probable future map for designing adequate social services and amenities for the local inhabitants.
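
Since the abstract reports validation through the Kappa coefficient at ground control points, a minimal sketch of that assessment step is shown below. The class labels and arrays are hypothetical placeholders, not the study's data; scikit-learn's cohen_kappa_score is used for illustration.

```python
# Hedged sketch: Kappa-based accuracy assessment of a LULC map against GCP labels.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Reference LULC classes at ground control points vs. classes from the classified map.
gcp_reference = ["built-up", "agriculture", "orchard", "built-up", "bare", "agriculture"]
map_predicted = ["built-up", "agriculture", "orchard", "agriculture", "bare", "agriculture"]

kappa = cohen_kappa_score(gcp_reference, map_predicted)
print("Kappa coefficient:", round(kappa, 2))
print(confusion_matrix(gcp_reference, map_predicted))
```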

Performance Improvement of Nearest-neighbor Classification Learning through Prototype Selections (프로토타입 선택을 이용한 최근접 분류 학습의 성능 개선)

  • Hwang, Doo-Sung
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.2
    • /
    • pp.53-60
    • /
    • 2012
  • Nearest-neighbor classification predicts the class of an input sample as the most frequent class among the training data nearest to it. Although nearest-neighbor classification has no training stage, all of the training data are needed at prediction time, and the generalization performance depends on the quality of the training data. Therefore, as the training data size increases, nearest-neighbor classification requires a large amount of memory and long computation time for prediction. In this paper, we propose a prototype selection algorithm that predicts the class of test data with a new set of prototypes consisting of near-boundary training data. Based on Tomek links and a distance metric, the proposed algorithm selects boundary data and decides whether each selected sample is added to the set of prototypes by considering class and distance relationships. In the experiments, the number of prototypes is much smaller than the size of the original training data, giving the advantages of reduced storage and fast prediction in nearest-neighbor classification.
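
A simplified sketch of the idea follows: Tomek links (mutual nearest neighbors of different classes) are taken as boundary prototypes and fed to a 1-NN classifier. The selection rule here is deliberately reduced; the paper's additional class/distance criteria are omitted, and the dataset is synthetic.

```python
# Hedged sketch: boundary-prototype selection via Tomek links + 1-NN prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors, KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# A pair (i, j) is a Tomek link if i and j are mutual nearest neighbors of different classes.
nearest = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X, return_distance=False)[:, 1]
is_prototype = np.zeros(len(X), dtype=bool)
for i, j in enumerate(nearest):
    if nearest[j] == i and y[i] != y[j]:
        is_prototype[i] = is_prototype[j] = True

print(f"{is_prototype.sum()} prototypes kept out of {len(X)} training samples")
clf = KNeighborsClassifier(n_neighbors=1).fit(X[is_prototype], y[is_prototype])
print("accuracy on the full training set:", clf.score(X, y))
```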

Altmetrics: Factor Analysis for Assessing the Popularity of Research Articles on Twitter

  • Pandian, Nandhini Devi Soundara;Na, Jin-Cheon;Veeramachaneni, Bhargavi;Boothaladinni, Rashmi Vishwanath
    • Journal of Information Science Theory and Practice
    • /
    • v.7 no.4
    • /
    • pp.33-44
    • /
    • 2019
  • Altmetrics measure the frequency of references to an article on social media platforms such as Twitter. This paper studies a variety of factors that affect the popularity of articles (i.e., the number of article mentions) in the field of psychology on Twitter. First, we classify Twitter users mentioning research articles as academic versus non-academic and expert versus non-expert, using a machine learning approach. Then we build a negative binomial regression model with the number of Twitter mentions of an article as the dependent variable, and nine Twitter-related factors (the number of followers, friends, statuses, lists, favourites, retweets, and likes, and the ratios of academic and expert users) and seven article-related factors (the number of authors, title length, abstract length, abstract readability, number of institutions, citation count, and availability of research funding) as independent variables. Our findings show that if a research article is mentioned by Twitter users with more friends, statuses, favourites, and lists, by tweets with many retweets and likes, and largely by Twitter users with academic and expert knowledge of psychology, the article gains more Twitter mentions. In addition, articles with more authors, longer titles and abstracts, higher citation counts, and research funding receive more attention from Twitter users.
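
A minimal sketch of the regression stage follows, using statsmodels' negative binomial GLM. The data frame, column names, and the subset of predictors are illustrative placeholders for the sixteen factors listed in the abstract.

```python
# Hedged sketch: negative binomial regression of Twitter mention counts on selected factors.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "mentions":       [3, 10, 1, 25, 7, 0, 14, 2, 5, 18],
    "followers":      [120, 5400, 80, 20000, 950, 40, 7600, 300, 640, 12000],
    "retweets":       [1, 6, 0, 30, 4, 0, 9, 1, 2, 15],
    "ratio_academic": [0.2, 0.6, 0.1, 0.8, 0.5, 0.0, 0.7, 0.3, 0.4, 0.9],
    "citation_count": [5, 40, 2, 90, 22, 1, 55, 8, 12, 70],
})

# Mentions as the count-valued dependent variable; a few placeholder predictors.
X = sm.add_constant(df[["followers", "retweets", "ratio_academic", "citation_count"]])
model = sm.GLM(df["mentions"], X, family=sm.families.NegativeBinomial())
print(model.fit().summary())
```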

Association-based Unsupervised Feature Selection for High-dimensional Categorical Data (고차원 범주형 자료를 위한 비지도 연관성 기반 범주형 변수 선택 방법)

  • Lee, Changki;Jung, Uk
    • Journal of Korean Society for Quality Management
    • /
    • v.47 no.3
    • /
    • pp.537-552
    • /
    • 2019
  • Purpose: The development of information technology makes it easy to utilize high-dimensional categorical data. In this regard, the purpose of this study is to propose a novel method for selecting appropriate categorical variables in high-dimensional categorical data. Methods: The proposed feature selection method consists of three steps: (1) The first step defines the goodness-to-pick measure. In this paper, a categorical variable is considered relevant if it has relationships with the other variables; according to this definition, the goodness-to-pick measure calculates the normalized conditional entropy with respect to the other variables. (2) The second step finds the relevant feature subset from the original variable set by deciding whether each variable is relevant or not. (3) The third step eliminates redundant variables from the relevant feature subset. Results: Our experimental results showed that the proposed feature selection method generally yielded better classification performance than no feature selection in high-dimensional categorical data, especially as the number of irrelevant categorical variables increases. Moreover, as the number of irrelevant categorical variables with imbalanced category values increases, the accuracy gap between the proposed method and the compared existing methods widens. Conclusion: The experimental results confirm that the proposed method consistently produces high classification accuracy in high-dimensional categorical data. Therefore, the proposed method is promising for effective use in high-dimensional settings.
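
A small sketch of step (1) is shown below: a variable's relevance is scored by how much the other variables reduce its entropy (a normalized conditional entropy, here the uncertainty coefficient averaged over the other columns). The threshold of step (2) and the redundancy elimination of step (3) are omitted, and the toy data are illustrative.

```python
# Hedged sketch: relevance scoring of categorical variables via normalized conditional entropy.
import numpy as np
import pandas as pd

def entropy(series):
    p = series.value_counts(normalize=True).to_numpy()
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(x, y):
    """H(X | Y): expected entropy of x within each category of y."""
    return sum(w * entropy(x[y == v]) for v, w in y.value_counts(normalize=True).items())

def relevance(df, col):
    """Average normalized information the other variables carry about `col`."""
    hx = entropy(df[col])
    if hx == 0:
        return 0.0
    others = [c for c in df.columns if c != col]
    return float(np.mean([1 - conditional_entropy(df[col], df[c]) / hx for c in others]))

df = pd.DataFrame({
    "color": ["r", "r", "b", "b", "g", "g"],
    "size":  ["S", "S", "L", "L", "M", "M"],   # perfectly associated with color
    "noise": ["x", "y", "x", "y", "x", "y"],   # unrelated to the other two
})
for c in df.columns:
    print(c, round(relevance(df, c), 3))
```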

Performance Improvement of Mean-Teacher Models in Audio Event Detection Using Derivative Features (차분 특징을 이용한 평균-교사 모델의 음향 이벤트 검출 성능 향상)

  • Kwak, Jin-Yeol;Chung, Yong-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.3
    • /
    • pp.401-406
    • /
    • 2021
  • Recently, mean-teacher models based on convolutional recurrent neural networks (CRNNs) have been widely used in audio event detection. The mean-teacher model is an architecture consisting of two parallel CRNNs, which can be trained effectively on weakly-labelled and unlabelled audio data by using a consistency learning metric at the outputs of the two networks. In this study, we tried to improve the performance of the mean-teacher model by using additional derivative features of the log-mel spectrum. In audio event detection experiments using the training and test data from Task 4 of the DCASE 2018/2019 Challenges, we obtained a maximum relative decrease of 8.1% in the error rate (ER) for the mean-teacher model using the proposed derivative features.
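
A short sketch of the input-feature step is given below: the log-mel spectrogram and its first and second derivatives (deltas) stacked as channels. librosa is assumed for feature extraction, the generated tone is a stand-in for real audio, and the mean-teacher CRNN itself is omitted.

```python
# Hedged sketch: log-mel spectrogram plus derivative (delta) features as input channels.
import numpy as np
import librosa

sr = 16000
audio = librosa.tone(440, sr=sr, duration=2.0)                # stand-in for a real clip
log_mel = librosa.power_to_db(
    librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64))

delta1 = librosa.feature.delta(log_mel, order=1)              # first derivative
delta2 = librosa.feature.delta(log_mel, order=2)              # second derivative

# Stack as channels, e.g. a (3, n_mels, frames) input to the mean-teacher CRNN.
features = np.stack([log_mel, delta1, delta2])
print(features.shape)
```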

Lifesaver: Android-based Application for Human Emergency Falling State Recognition

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.267-275
    • /
    • 2021
  • In this paper, a smart Android-based application (Lifesaver) is developed to automatically determine a human emergency state using the mobile device's sensors. In practice, Lifesaver has many applications and can easily be combined with other applications to detect human emergencies. For example, if an elderly person falls for medical reasons, the application automatically detects the person's state and calls someone from the emergency contact list. Similarly, if a car crashes in an accident, the Lifesaver application helps call a person on the emergency contact list to save the occupant's life. Therefore, the main objective of this project is to develop an application that can save human lives. The proposed Lifesaver application assists a person in getting immediate attention in the absence of help in four different situations. To develop the Lifesaver system, GPS is integrated to obtain the exact location of the person in an emergency, and an emergency list of friends and authorities is maintained. To test and evaluate the system, data from 50 people in different age groups in the range of 40-70 were collected, and the performance of the Lifesaver application was evaluated and compared with other state-of-the-art applications. On average, the Lifesaver system achieved 95.5% detection accuracy and a value of 91.5 on the emergency index metric, outperforming other applications in this domain.
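
The abstract does not detail the detection logic, so the sketch below is a generic fall-detection heuristic only, not the Lifesaver algorithm: a free-fall dip in accelerometer magnitude followed shortly by an impact spike. All thresholds and the simulated trace are assumptions.

```python
# Generic fall-detection heuristic on accelerometer magnitude (illustration only).
import numpy as np

def detect_fall(accel_xyz, rate_hz=50, freefall_g=0.4, impact_g=2.5, window_s=1.0):
    mag = np.linalg.norm(accel_xyz, axis=1) / 9.81       # magnitude in units of g
    window = int(window_s * rate_hz)
    for i in np.where(mag < freefall_g)[0]:              # candidate free-fall samples
        if np.any(mag[i:i + window] > impact_g):         # impact spike within the window
            return True
    return False

# Simulated trace: rest, a brief free-fall dip, an impact spike, rest again.
trace = np.zeros((200, 3))
trace[:, 2] = 9.81
trace[100:110, 2] = 1.0
trace[112, 2] = 30.0
print("fall detected:", detect_fall(trace))
```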

Object detection in financial reporting documents for subsequent recognition

  • Sokerin, Petr;Volkova, Alla;Kushnarev, Kirill
    • International journal of advanced smart convergence
    • /
    • v.10 no.1
    • /
    • pp.1-11
    • /
    • 2021
  • Document page segmentation is an important step in building a high-quality optical character recognition module. The study examined existing work on page segmentation and focused on developing a segmentation model that has greater functional significance for application in an organization, as well as broad capabilities for managing model quality. The main problems of document segmentation were highlighted, including complex backgrounds of intersecting objects. As detection classes, not only the classic text, table, and figure were selected, but also additional types such as signature, logo, and table without borders (or with partially missing borders). This made it possible to pose the non-trivial task of detecting non-standard document elements. The authors compared existing neural network architectures for object detection based on published research data, and RetinaNet proved the most suitable. To ensure the possibility of quality control of the model, a method based on neural network modelling using the RetinaNet architecture is proposed. During the study, several models were built, and their quality was assessed on the test sample using the mean Average Precision (mAP) metric. The best result among the constructed algorithms was shown by a model comprising four neural networks: the first focused on detecting tables and borderless tables; the second on seals and signatures; the third on pictures and logos; and the fourth on text. The analysis revealed that the four-network approach showed the best results on the test sample for most detection classes, in accordance with the objectives of the study. The method proposed in the article can be used to recognize other objects. A promising direction for further analysis is the segmentation of tables, with functionally distinct areas of a table acting as classes: heading, cell with a name, cell with data, and empty cell.
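
A minimal inference sketch with torchvision's off-the-shelf RetinaNet is shown below. It only illustrates the detector family the paper builds on; the study itself trains four separate RetinaNet models on document-specific classes (tables, seals/signatures, pictures/logos, text), which is not reproduced here.

```python
# Hedged sketch: RetinaNet inference on a rendered document page (COCO weights for illustration).
import torch
from torchvision.models.detection import (retinanet_resnet50_fpn,
                                           RetinaNet_ResNet50_FPN_Weights)

model = retinanet_resnet50_fpn(weights=RetinaNet_ResNet50_FPN_Weights.DEFAULT).eval()

page = torch.rand(3, 800, 600)          # stand-in for a (3, H, W) page image in [0, 1]
with torch.no_grad():
    detections = model([page])[0]       # dict with 'boxes', 'labels', 'scores'

keep = detections["scores"] > 0.5
print("boxes kept:", detections["boxes"][keep].shape[0])
```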

Ensemble-based deep learning for autonomous bridge component and damage segmentation leveraging Nested Reg-UNet

  • Abhishek Subedi;Wen Tang;Tarutal Ghosh Mondal;Rih-Teng Wu;Mohammad R. Jahanshahi
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.335-349
    • /
    • 2023
  • Bridges constantly undergo deterioration and damage, the most common ones being concrete damage and exposed rebar. Periodic inspection of bridges to identify damage can aid in its quick remediation. Likewise, identifying components can provide context for damage assessment and help gauge a bridge's state of interaction with its surroundings. Current inspection techniques rely on manual site visits, which can be time-consuming and costly. More recently, robotic inspection assisted by autonomous data analytics based on Computer Vision (CV) and Artificial Intelligence (AI) has been viewed as a suitable alternative to manual inspection because of its efficiency and accuracy. To aid research in this avenue, this study performs a comparative assessment of different architectures, loss functions, and ensembling strategies for the autonomous segmentation of bridge components and damage. The experiments lead to several interesting discoveries. The Nested Reg-UNet architecture is found to outperform five other state-of-the-art architectures in both the damage and component segmentation tasks. The architecture is built by combining a Nested UNet style dense configuration with a pretrained RegNet encoder. In terms of the mean Intersection over Union (mIoU) metric, the Nested Reg-UNet architecture provides an improvement of 2.86% on the damage segmentation task and 1.66% on the component segmentation task compared to the state-of-the-art UNet architecture. Furthermore, it is demonstrated that incorporating the Lovasz-Softmax loss function to counter class imbalance can boost performance by 3.44% on the component segmentation task over the most commonly employed alternative, weighted Cross Entropy (wCE). Finally, weighted softmax ensembling is found to be quite effective when used synchronously with the Nested Reg-UNet architecture, providing an mIoU improvement of 0.74% on the component segmentation task and 1.14% on the damage segmentation task over a single-architecture baseline. Overall, the best mIoU of 92.50% for the component segmentation task and 84.19% for the damage segmentation task validates the feasibility of these techniques for autonomous bridge component and damage segmentation using RGB images.
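
A compact sketch of the weighted softmax ensembling and mIoU evaluation mentioned above is given below. The logits, class count, and weights are synthetic placeholders rather than outputs of the Nested Reg-UNet models.

```python
# Hedged sketch: weighted softmax ensembling of two segmentation models + mIoU evaluation.
import torch

def weighted_softmax_ensemble(logits_list, weights):
    """Weight-average the class probability maps, then take the per-pixel argmax."""
    probs = sum(w * torch.softmax(l, dim=1) for l, w in zip(logits_list, weights))
    return probs.argmax(dim=1)

def mean_iou(pred, target, num_classes):
    ious = []
    for c in range(num_classes):
        union = ((pred == c) | (target == c)).sum().float()
        if union > 0:
            inter = ((pred == c) & (target == c)).sum().float()
            ious.append(inter / union)
    return torch.stack(ious).mean()

num_classes, h, w = 4, 64, 64
logits_a = torch.randn(1, num_classes, h, w)   # e.g. Nested Reg-UNet
logits_b = torch.randn(1, num_classes, h, w)   # e.g. a second architecture
target = torch.randint(0, num_classes, (1, h, w))

pred = weighted_softmax_ensemble([logits_a, logits_b], weights=[0.6, 0.4])
print("mIoU:", float(mean_iou(pred, target, num_classes)))
```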

RoutingConvNet: A Light-weight Speech Emotion Recognition Model Based on Bidirectional MFCC (RoutingConvNet: 양방향 MFCC 기반 경량 음성감정인식 모델)

  • Hyun Taek Lim;Soo Hyung Kim;Guee Sang Lee;Hyung Jeong Yang
    • Smart Media Journal
    • /
    • v.12 no.5
    • /
    • pp.28-35
    • /
    • 2023
  • In this study, we propose RoutingConvNet, a new light-weight model with fewer parameters, to improve the applicability and practicality of speech emotion recognition. To reduce the number of learnable parameters, the proposed model connects bidirectional MFCCs on a channel-by-channel basis to learn long-term emotion dependence and extract contextual features. A light-weight deep CNN is constructed for low-level feature extraction, and self-attention is used to obtain channel and spatial information from the speech signals. In addition, we apply dynamic routing to improve accuracy and build a model that is robust to feature variations. The proposed model reduces parameters and improves accuracy across experiments on speech emotion datasets (EMO-DB, RAVDESS, and IEMOCAP), achieving 87.86%, 83.44%, and 66.06% accuracy, respectively, with about 156,000 parameters. We also propose a metric that quantifies the trade-off between the number of parameters and accuracy for evaluating light-weight models.
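
One plausible reading of the bidirectional MFCC input is sketched below: MFCCs of the signal and of its time-reversed copy, stacked channel-wise after re-aligning the backward frames. This is an illustrative interpretation using librosa, not the authors' feature pipeline, and the generated tone is a stand-in for real speech.

```python
# Hedged sketch: channel-wise "bidirectional" MFCC features for a light-weight CNN.
import numpy as np
import librosa

sr = 16000
signal = librosa.tone(300, sr=sr, duration=2.0)               # stand-in for a speech clip
reversed_signal = np.ascontiguousarray(signal[::-1])

mfcc_fwd = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)
mfcc_bwd = librosa.feature.mfcc(y=reversed_signal, sr=sr, n_mfcc=40)

# Re-align the backward frames in time and stack as a (2, n_mfcc, frames) input.
features = np.stack([mfcc_fwd, mfcc_bwd[:, ::-1]])
print(features.shape)
```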