• Title/Summary/Keyword: Retrieval Model

Opera Clustering: K-means on librettos datasets

  • Jeong, Harim;Yoo, Joo Hun
    • Journal of Internet Computing and Services
    • /
    • v.23 no.2
    • /
    • pp.45-52
    • /
    • 2022
  • With the development of artificial intelligence analysis methods, especially machine learning, many fields are rapidly expanding their range of applications. In the case of classical music, however, applying machine learning techniques remains difficult: genre classification and music recommendation systems built on deep learning algorithms are actively used for popular music, but not for classical music. In this paper, we attempt to classify operas within classical music. To this end, an experiment was conducted to determine which criterion is most suitable among composer, period of composition, and emotional atmosphere, the basic features of the music. To generate emotional labels, we adopted zero-shot classification with four basic emotions: 'happiness', 'sadness', 'anger', and 'fear'. After embedding each opera libretto with a doc2vec model, the optimal number of clusters was computed with the elbow method. The resulting four centroids were then used in k-means clustering to group the unlabeled libretto dataset. Clustering quality was assessed with adjusted Rand index scores, and the clusters were compared with the annotated variables of the music. As a result, the four machine-generated clusters were most similar to the grouping by period, and the emotional similarity across composers and periods was not significant. Knowing that period is the right criterion, we hope this makes it easier for music listeners to find music that suits their tastes. (A sketch of the clustering pipeline is shown below.)
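
A minimal sketch of the clustering pipeline the abstract describes, assuming gensim for doc2vec and scikit-learn for k-means and the adjusted Rand index; all parameter values (vector size, epochs, candidate k range) are illustrative, not the authors' configuration.

```python
# Illustrative sketch of the libretto clustering pipeline described above.
# Library choices and parameters are assumptions, not the authors' setup.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def embed_librettos(librettos):
    """Embed each libretto (a list of token lists) with doc2vec."""
    docs = [TaggedDocument(words=toks, tags=[i]) for i, toks in enumerate(librettos)]
    model = Doc2Vec(docs, vector_size=100, min_count=2, epochs=40)
    return [model.dv[i] for i in range(len(docs))]

def elbow_inertias(vectors, k_range=range(2, 10)):
    """Inertia per candidate k; the 'elbow' suggests the cluster count."""
    return {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors).inertia_
            for k in k_range}

def cluster_and_score(vectors, labels_by_period, k=4):
    """K-means with the chosen k, compared to period labels via ARI."""
    pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)
    return pred, adjusted_rand_score(labels_by_period, pred)
```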

Structural live load surveys by deep learning

  • Li, Yang;Chen, Jun
    • Smart Structures and Systems
    • /
    • v.30 no.2
    • /
    • pp.145-157
    • /
    • 2022
  • The design of safe and economical structures depends on reliable live loads obtained from load surveys. Live load surveys are traditionally conducted by randomly selecting rooms and weighing each item on-site, a method with low efficiency, high cost, and long cycle times. This paper proposes a deep learning-based method combined with Internet big data to perform live load surveys. The proposed survey method utilizes multi-source heterogeneous data, such as images, voice, and product identification, to obtain the live load without weighing each item, through object detection, web crawling, and speech recognition. Indoor object and face detection models are first developed by fine-tuning the YOLOv3 algorithm to detect target objects and to obtain the number of people in a room, respectively. Each detection model is evaluated on an independent testing set. Web crawler frameworks with keyword and image retrieval are then established to extract the weight of detected objects from Internet big data. The live load in a room is derived by combining the weights and numbers of items and people (a simplified sketch of this combination step is shown below). To verify the feasibility of the proposed survey method, a live load survey is carried out for a meeting room. The results show that, compared with the traditional method of sampling and weighing, the proposed method performs efficient and convenient live load surveys and represents a new load research paradigm.
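
A minimal sketch of the load-combination step only, under assumed inputs: per-class detection results (e.g. from a fine-tuned YOLOv3 model), unit weights retrieved from the web, and an assumed per-person weight. The class names and weight values are placeholders, not figures from the paper.

```python
# Combine detected object counts with looked-up unit weights to estimate the
# live load of one room.  All class names and weights are illustrative
# assumptions, not values from the paper.
from collections import Counter

ASSUMED_UNIT_WEIGHT_N = {      # object class -> assumed weight in newtons
    "chair": 60.0,
    "desk": 250.0,
    "monitor": 40.0,
}
ASSUMED_PERSON_WEIGHT_N = 700.0

def room_live_load(detected_classes, person_count, floor_area_m2):
    """Estimate the live load (N/m^2) of one room from detection results."""
    counts = Counter(detected_classes)
    item_load = sum(ASSUMED_UNIT_WEIGHT_N.get(c, 0.0) * n for c, n in counts.items())
    total = item_load + ASSUMED_PERSON_WEIGHT_N * person_count
    return total / floor_area_m2

# Example: detections from one meeting-room image
print(room_live_load(["chair"] * 8 + ["desk", "monitor"], person_count=5, floor_area_m2=30.0))
```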

Vocabulary Recognition Retrieval Optimized System using MLHF Model (MLHF 모델을 적용한 어휘 인식 탐색 최적화 시스템)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.10
    • /
    • pp.217-223
    • /
    • 2009
  • Vocabulary recognition systems on mobile terminals perform recognition with statistical methods and use statistical grammar recognition based on N-grams. When the vocabulary grows beyond the memory and arithmetic processing capacity of the terminal, the recognition algorithm becomes complicated and requires a large search space and long processing times, making processing impossible. This study proposes vocabulary recognition optimization using the MLHF system. MLHF separates acoustic search from lexical search using the FLaVoR framework: the acoustic search extracts feature vectors from the speech signal using HMMs, and the lexical search performs recognition using the Levenshtein distance algorithm (a sketch of this step is shown below). As a result, the system achieved a vocabulary-dependent recognition rate of 98.63%, a vocabulary-independent recognition rate of 97.91%, and a recognition speed of 1.61 seconds.
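
A small sketch of the lexical-search stage, assuming the acoustic (HMM) stage outputs a character/phoneme string and vocabulary entries are ranked by Levenshtein distance; this is an illustrative re-implementation of the distance step, not the authors' MLHF/FLaVoR code.

```python
# Rank vocabulary entries by edit distance to the acoustic hypothesis.
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def lexical_search(acoustic_hypothesis, vocabulary):
    """Return the vocabulary entry closest to the acoustic hypothesis."""
    return min(vocabulary, key=lambda w: levenshtein(acoustic_hypothesis, w))
```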

Comparative Study on Similarity Measurement Methods in CBR Cost Estimation

  • Ahn, Joseph;Park, Moonseo;Lee, Hyun-Soo;Ahn, Sung Jin;Ji, Sae-Hyun;Kim, Sooyoung;Song, Kwonsik;Lee, Jeong Hoon
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.597-598
    • /
    • 2015
  • To improve the reliability of cost estimation results using CBR, similarity measurement has been a continuing issue: distances among attributes and cases must be computed accurately to retrieve the most similar single or multiple cases. However, existing similarity measures have limitations in taking the covariance among attributes into consideration and in reflecting its effect on the computed distances. To deal with this issue, this research examines a weighted Mahalanobis distance based similarity measure applied to CBR cost estimation and carries out a comparative study against the existing distance measures used in CBR. To validate the suggested CBR cost model, leave-one-out cross validation (LOOCV) is carried out using two different sets of simulation data (a sketch of the distance measure and LOOCV loop is shown below). Consequently, this research is expected to provide an analysis of covariance effects in similarity measurement and a basis for further research on the fundamentals of case retrieval.
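
A sketch of a weighted Mahalanobis similarity measure with a LOOCV loop, assuming a numeric attribute matrix, a cost vector, 1-nearest-neighbour retrieval, and percentage error as the validation metric; these choices are assumptions for illustration rather than the authors' exact model.

```python
# Weighted Mahalanobis distance for CBR case retrieval, validated by LOOCV.
import numpy as np

def weighted_mahalanobis(x, y, inv_cov, weights):
    """Distance between attribute vectors x and y with attribute weights."""
    d = weights * (x - y)
    return float(np.sqrt(d @ inv_cov @ d))

def loocv_estimate(cases, costs, weights):
    """Leave one case out, estimate its cost from its nearest neighbour."""
    inv_cov = np.linalg.pinv(np.cov(cases, rowvar=False))
    errors = []
    for i in range(len(cases)):
        others = [j for j in range(len(cases)) if j != i]
        dists = [weighted_mahalanobis(cases[i], cases[j], inv_cov, weights)
                 for j in others]
        nearest = others[int(np.argmin(dists))]
        errors.append(abs(costs[nearest] - costs[i]) / costs[i])
    return float(np.mean(errors))  # mean absolute percentage error
```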

Brain Activation Pattern and Functional Connectivity Network during Experimental Design on the Biological Phenomena

  • Lee, Il-Sun;Lee, Jun-Ki;Kwon, Yong-Ju
    • Journal of The Korean Association For Science Education
    • /
    • v.29 no.3
    • /
    • pp.348-358
    • /
    • 2009
  • The purpose of this study was to investigate the brain activation pattern and functional connectivity network during experimental design on biological phenomena. Twenty-six right-handed healthy science teachers volunteered for the present study. To investigate the participants' brain activity during the tasks, a 3.0T fMRI system with a block experimental design was used to measure BOLD signals, and the SPM2 software package was applied to analyze the acquired image data. According to the analyzed data, the superior, middle and inferior frontal gyrus, the superior and inferior parietal lobule, the fusiform gyrus, the lingual gyrus, and the bilateral cerebellum were significantly activated while participants carried out experimental design. The network model consisted of six nodes (ROIs) and six connections. The activation and connections of these regions suggest that the experimental design process cannot be reduced to a mere memory retrieval process. These results enable the scientific experimental design process to be examined from a cognitive neuroscience perspective and may serve as a basis for developing teaching-learning programs for scientific experimental design, such as a brain-based science education curriculum.

Adaptive Skin Color Segmentation in a Single Image using Image Feedback (영상 피드백을 이용한 단일 영상에서의 적응적 피부색 검출)

  • Do, Jun-Hyeong;Kim, Keun-Ho;Kim, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.3
    • /
    • pp.112-118
    • /
    • 2009
  • Skin color segmentation techniques have been widely utilized for face/hand detection and tracking in many applications, such as diagnosis systems using facial information, human-robot interaction, and image retrieval systems. For video, the skin color model of a target is commonly updated every frame so that tracking remains robust against illumination change. For a single image, however, most studies employ a fixed skin color model, which may result in low detection rates or high false positive errors. In this paper, we propose a novel method for effective skin color segmentation in a single image, which iteratively modifies the segmentation conditions using feedback from the skin color region segmented in the given image (a conceptual sketch is shown below).
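
A conceptual sketch of the image-feedback idea with OpenCV: segment with a generic skin-color range, then re-estimate the range from the selected pixels and segment again. The colour space (YCrCb), initial bounds, ±2σ update rule, and iteration count are assumptions, not the paper's values.

```python
# Iteratively adapt a skin-color range to a single image via feedback.
import cv2
import numpy as np

def adaptive_skin_mask(bgr_image, iterations=3):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # generic initial bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    for _ in range(iterations):
        pixels = ycrcb[mask > 0]                        # pixels currently labelled as skin
        if len(pixels) == 0:
            break
        mean, std = pixels.mean(axis=0), pixels.std(axis=0)
        lower = np.clip(mean - 2.0 * std, 0, 255).astype(np.uint8)  # feedback step
        upper = np.clip(mean + 2.0 * std, 0, 255).astype(np.uint8)
        mask = cv2.inRange(ycrcb, lower, upper)
    return mask
```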

Personal Information Management Based on the Concept Lattice of Formal Concept Analysis (FCA 개념 망 기반 개인정보관리)

  • Kim, Mi-Hye
    • Journal of Internet Computing and Services
    • /
    • v.6 no.6
    • /
    • pp.163-178
    • /
    • 2005
  • The ultimate objective of Personal Information Management (PIM) is to collect, handle and manage wanted information in a systematic way that enables individuals to search the information more easily and effectively. However, existing personal information management systems are usually based on a traditional hierarchical directory model for storing information, which limits effective organization and retrieval of information and provides little support for search by associative interrelationships between objects (documents) and their attributes. To address these problems, this paper proposes a personal information management model based on the concept lattice of Formal Concept Analysis (FCA), with which individuals can easily build and maintain their own information on the Web. The proposed system overcomes the limitations of the traditional hierarchy approach and supports the search of other useful information through the inter-relationships between objects and their attributes in the FCA concept lattice, beyond narrow keyword search (a minimal example of deriving formal concepts is shown below).
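
A minimal, brute-force illustration of the core FCA step, deriving all formal concepts (extent, intent pairs) from a toy object-attribute context; the context contents and the enumeration approach are assumptions for demonstration, not the paper's implementation.

```python
# Derive all formal concepts from a small object-attribute context.
from itertools import chain, combinations

context = {                      # object -> set of attributes (toy example)
    "paper1.pdf": {"research", "retrieval"},
    "trip.doc":   {"personal", "travel"},
    "notes.txt":  {"research", "personal"},
}

def common_attributes(objects):
    """Attributes shared by all given objects (all attributes if none given)."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set(chain.from_iterable(context.values()))

def objects_with(attributes):
    """Objects that have every attribute in the given set."""
    return {o for o, attrs in context.items() if attributes <= attrs}

def formal_concepts():
    """All (extent, intent) pairs closed under the two derivation operators."""
    concepts = set()
    for r in range(len(context) + 1):
        for objs in combinations(context, r):
            intent = common_attributes(objs)
            extent = objects_with(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts
```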

A Study on implementation model for security log analysis system using Big Data platform (빅데이터 플랫폼을 이용한 보안로그 분석 시스템 구현 모델 연구)

  • Han, Ki-Hyoung;Jeong, Hyung-Jong;Lee, Doog-Sik;Chae, Myung-Hui;Yoon, Cheol-Hee;Noh, Kyoo-Sung
    • Journal of Digital Convergence
    • /
    • v.12 no.8
    • /
    • pp.351-359
    • /
    • 2014
  • Log data generated by security equipment have so far been analyzed on ESM (Enterprise Security Management) systems, but because of their limited capacity and processing performance these are not suited to big data processing, and another technology based on a big data platform is necessary. A big data platform can collect, store, process, retrieve, analyze, and visualize large amounts of data using the Hadoop Ecosystem. ESM technology is currently evolving toward SIEM (Security Information & Event Management), and implementing security technology in the SIEM way requires big data platform technology that can handle the large log volumes produced by today's security devices. In this paper, we study an implementation model for a security log analysis system based on Hadoop Ecosystem big data platform technology (a minimal processing example is shown below).
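
A minimal illustration of one analysis step on such a platform: a Hadoop Streaming style mapper/reducer pair, written in Python, that counts security-log events per source IP. The log format and field position are assumptions; the system described in the paper covers collection, storage, and visualization across the wider Hadoop Ecosystem as well.

```python
# Hadoop Streaming style mapper/reducer that counts log events per source IP.
import sys

def mapper(lines=sys.stdin):
    """Emit 'source_ip<TAB>1' for each log line (IP assumed to be the 3rd field)."""
    for line in lines:
        fields = line.split()
        if len(fields) >= 3:
            print(f"{fields[2]}\t1")

def reducer(lines=sys.stdin):
    """Sum the counts per IP; input arrives sorted by key, as Hadoop guarantees."""
    current_ip, count = None, 0
    for line in lines:
        ip, value = line.rstrip("\n").split("\t")
        if ip != current_ip:
            if current_ip is not None:
                print(f"{current_ip}\t{count}")
            current_ip, count = ip, 0
        count += int(value)
    if current_ip is not None:
        print(f"{current_ip}\t{count}")
```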

A Study of Step-by-step Countermeasures Model through Analysis of SQL Injection Attacks Code (공격코드 사례분석을 기반으로 한 SQL Injection에 대한 단계적 대응모델 연구)

  • Kim, Jeom-Goo;Noh, Si-Choon
    • Convergence Security Journal
    • /
    • v.12 no.1
    • /
    • pp.17-25
    • /
    • 2012
  • Although years have passed since SQL Injection techniques for web hacking were first disclosed, they are still classified among the most dangerous attacks. Recent web programming relies on a DBMS for efficient data storage and retrieval, and scripting languages such as PHP, JSP, and ASP are mainly used to interact with the DBMS. In such web environments, an application that does not validate invalid client input may produce abnormal SQL queries. These unusual queries can bypass user authentication or expose data stored in the database. In an environment with an SQL Injection vulnerability, an attacker can pass web-based username/password authentication and reach the data stored in the database. Many countermeasures against SQL Injection have been announced, but relying on any single method can leave numerous security holes. The proposed model applies measures at four levels: the source code, the operational phase, database and server management, and user input validation (a small parameterised-query example is shown below). By applying these measures as a phased, step-by-step response model through a managed process, the possibility of SQL Injection attacks can be reduced.
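
A small example of the user-input-validation layer of such a model, shown with Python's sqlite3 module (the paper targets PHP/JSP/ASP environments, so this is only analogous); the table and column names are assumptions.

```python
# Parameterised queries keep user input as data rather than SQL structure.
import sqlite3

def authenticate(conn, username, password):
    # Vulnerable pattern (never do this): string concatenation lets crafted
    # input such as "' OR '1'='1" alter the query structure.
    #   query = "SELECT id FROM users WHERE name='" + username + "' AND pw='" + password + "'"
    #
    # Safe pattern: placeholders bind the input as values only.
    cur = conn.execute(
        "SELECT id FROM users WHERE name = ? AND pw = ?",
        (username, password),
    )
    return cur.fetchone() is not None
```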

Development of Physical Human Bronchial Tree Models from X-ray CT Images (X선 CT영상으로부터 인체의 기관지 모델의 개발)

  • Won, Chul-Ho;Ro, Chul-Kyun
    • Journal of Sensor Science and Technology
    • /
    • v.11 no.5
    • /
    • pp.263-272
    • /
    • 2002
  • In this paper, we investigate the potential for retrieving morphometric data from three-dimensional images of the conducting bronchus obtained by X-ray Computerized Tomography (CT), and explore the use of a rapid prototyping machine to produce physical hollow bronchus casts for mathematical modeling and experimental verification of particle deposition models. The bronchus of the lung is segmented from the CT images using mathematical morphology. The surface data representing the volumetric bronchus data in three dimensions are converted to an STL (stereolithography) file, and a three-dimensional solid model is created from this STL file with a rapid prototyping machine (a conceptual sketch of these steps is shown below). Two physical hollow cast models are created from CT images of a bronchial tree phantom and of a living human bronchus. We evaluate the usefulness of the rapid prototype model of the bronchial tree by comparing the diameters of cross-sectional bronchus segments in the original CT images and in the rapid prototyping-derived models imaged by X-ray CT.
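
A conceptual sketch of the image-to-STL steps, assuming scikit-image for morphology and marching cubes and numpy-stl for writing the file; the threshold, structuring element, and library choices are assumptions rather than the paper's actual workflow.

```python
# Threshold a CT volume, clean the mask with mathematical morphology,
# extract a surface, and write it to an STL file.
import numpy as np
from skimage import morphology, measure
from stl import mesh  # numpy-stl

def ct_volume_to_stl(volume, threshold, out_path="bronchus.stl"):
    # Rough airway mask, then morphological closing to fill small gaps.
    airway = volume < threshold
    airway = morphology.binary_closing(airway, morphology.ball(2))

    # Surface extraction with marching cubes.
    verts, faces, _, _ = measure.marching_cubes(airway.astype(np.float32), level=0.5)

    # Pack the triangles into an STL mesh and save it.
    data = np.zeros(len(faces), dtype=mesh.Mesh.dtype)
    solid = mesh.Mesh(data)
    for i, face in enumerate(faces):
        solid.vectors[i] = verts[face]
    solid.save(out_path)
```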