• Title/Summary/Keyword: Text Model learning


The Edge Computing System for the Detection of Water Usage Activities with Sound Classification (음향 기반 물 사용 활동 감지용 엣지 컴퓨팅 시스템)

  • Seung-Ho Hyun;Youngjoon Chee
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.2
    • /
    • pp.147-156
    • /
    • 2023
  • Efforts have been made to employ smart home sensors to monitor the indoor activities of elderly single residents and assess the feasibility of a safe and healthy lifestyle. However, the bathroom remains a blind spot. In this study, we developed and evaluated a new edge computing device that automatically detects water usage activities in the bathroom and records the activity log on a cloud server. Three kinds of water usage sound, flushing, showering, and washing at the wash basin, were recorded and cut into 1-second scenes. These sound clips were then converted into two-dimensional images using the Mel spectrogram. Sound data augmentation techniques were adopted to improve learning from the small number of data sets. These techniques, some applied in the time domain and others in the frequency domain, increased the size of the training set 30-fold. A deep learning model called CRNN, which combines a convolutional neural network and a recurrent neural network, was employed. The edge device was implemented on a Raspberry Pi 4 equipped with a condenser microphone and amplifier to run the pre-trained model in real time. The detected activities were recorded as text-based activity logs on a Firebase server. Performance was evaluated in two bathrooms for the three water usage activities, resulting in accuracies of 96.1% and 88.2% and F1 scores of 96.1% and 87.8%, respectively. Most classification errors were observed in the sound of washing. In conclusion, this system demonstrates the potential for long-term recording of the activities of elderly single residents as a lifelog on a cloud server.
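
As a rough illustration of the pipeline this abstract describes, the sketch below converts a 1-second audio clip into a log-Mel spectrogram and feeds it to a small CRNN with three output classes. It is a minimal sketch assuming librosa and PyTorch; the sample rate, number of Mel bands, and layer sizes are illustrative assumptions rather than the authors' actual configuration.

```python
# Minimal sketch: 1-second audio clip -> Mel spectrogram -> CRNN (3 classes).
# Assumes librosa and PyTorch; all hyperparameters are illustrative.
import librosa
import torch
import torch.nn as nn

def clip_to_melspec(path, sr=16000, n_mels=64):
    """Load a ~1-second clip and return a (1, n_mels, frames) log-Mel image."""
    y, sr = librosa.load(path, sr=sr, duration=1.0)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)
    return torch.tensor(logmel, dtype=torch.float32).unsqueeze(0)

class CRNN(nn.Module):
    """CNN front end for local spectral patterns, GRU for temporal context."""
    def __init__(self, n_mels=64, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(input_size=32 * (n_mels // 4), hidden_size=64,
                          batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, 1, n_mels, frames)
        f = self.cnn(x)                          # (batch, 32, n_mels/4, frames/4)
        f = f.permute(0, 3, 1, 2).flatten(2)     # (batch, time, features)
        _, h = self.gru(f)
        return self.fc(h[-1])                    # class logits

# Usage sketch: logits = CRNN()(clip_to_melspec("shower.wav").unsqueeze(0))
```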

Korean speech recognition using deep learning (딥러닝 모형을 사용한 한국어 음성인식)

  • Lee, Suji;Han, Seokjin;Park, Sewon;Lee, Kyeongwon;Lee, Jaeyong
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.2
    • /
    • pp.213-227
    • /
    • 2019
  • In this paper, we propose an end-to-end deep learning model combining a Bayesian neural network with Korean speech recognition. In the past, Korean speech recognition was a complicated task because of the many parameters in its numerous intermediate steps and the need for expert knowledge of Korean. Fortunately, Korean speech recognition has become manageable with recent breakthroughs in end-to-end models. An end-to-end model decodes mel-frequency cepstral coefficients directly into text without any intermediate processes; in particular, models based on Connectionist Temporal Classification (CTC) loss and on attention are representative end-to-end approaches. In addition, we combine a Bayesian neural network with the end-to-end model to obtain Monte Carlo estimates. Finally, we carry out experiments on the "WorimalSam" online dictionary dataset and obtain a 4.58% word error rate, an improvement over the Google and Naver APIs.
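
A minimal sketch of the CTC side of an end-to-end recognizer like the one described above: an acoustic encoder emits per-frame character probabilities over MFCC features and is trained with PyTorch's CTC loss. The vocabulary size, feature dimension, and encoder are illustrative assumptions, not the paper's architecture (which additionally uses a Bayesian neural network for Monte Carlo estimates).

```python
# Minimal CTC sketch: MFCC frames -> BiLSTM encoder -> per-frame character
# probabilities, trained with CTC loss. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

N_MFCC, N_CHARS = 13, 70   # 70 = hypothetical character inventory incl. blank

class CTCRecognizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.LSTM(N_MFCC, 128, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(256, N_CHARS)

    def forward(self, mfcc):                        # mfcc: (batch, frames, N_MFCC)
        h, _ = self.encoder(mfcc)
        return self.proj(h).log_softmax(dim=-1)     # (batch, frames, N_CHARS)

model, ctc = CTCRecognizer(), nn.CTCLoss(blank=0)
mfcc = torch.randn(2, 120, N_MFCC)                  # two dummy 120-frame utterances
targets = torch.randint(1, N_CHARS, (2, 15))        # dummy character labels
log_probs = model(mfcc).transpose(0, 1)             # CTCLoss expects (frames, batch, chars)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 120),
           target_lengths=torch.full((2,), 15))
loss.backward()
```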

Deep Image Annotation and Classification by Fusing Multi-Modal Semantic Topics

  • Chen, YongHeng;Zhang, Fuquan;Zuo, WanLi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.1
    • /
    • pp.392-412
    • /
    • 2018
  • Due to the semantic gap across modalities, automatic retrieval of multimedia information remains a major challenge, and an effective joint model is needed to bridge the gap and organize the relationships between modalities. In this work, we develop a deep image annotation and classification by fusing multi-modal semantic topics (DAC_mmst) model, which finds visual and non-visual topics by jointly modeling the image and its loosely related text for deep image annotation while simultaneously learning and predicting the class label. More specifically, DAC_mmst relies on a non-parametric Bayesian model to estimate the number of visual topics that best explains the image. To evaluate the effectiveness of the proposed algorithm, we collect a real-world dataset and conduct various experiments. The experimental results show that DAC_mmst performs favorably in perplexity, image annotation, and classification accuracy compared to several state-of-the-art methods.
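
The DAC_mmst model itself is a non-parametric Bayesian joint model, but the core idea of fusing visual and textual evidence through shared topics can be sketched roughly as below, assuming the images have already been quantized into "visual words" (e.g., by clustering local descriptors). This sketch uses scikit-learn's parametric LDA with a fixed topic count rather than the paper's non-parametric estimate, so it is only an approximation of the idea on placeholder data.

```python
# Rough sketch of multi-modal topic fusion: concatenate visual-word counts and
# text-word counts per image, learn shared topics, classify from topic mixtures.
# Uses a fixed number of topics (the paper estimates it non-parametrically).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_images, n_visual_words, n_text_words = 200, 300, 500

# Dummy bag-of-words counts standing in for a real multi-modal dataset.
visual_counts = rng.poisson(1.0, size=(n_images, n_visual_words))
text_counts = rng.poisson(0.5, size=(n_images, n_text_words))
labels = rng.integers(0, 4, size=n_images)           # 4 hypothetical classes

fused = np.hstack([visual_counts, text_counts])       # shared vocabulary
topics = LatentDirichletAllocation(n_components=20, random_state=0)
theta = topics.fit_transform(fused)                   # per-image topic mixtures

clf = LogisticRegression(max_iter=1000).fit(theta, labels)
print("train accuracy:", clf.score(theta, labels))
```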

Severity Analysis for Occupational Heat-related Injury Using the Multinomial Logit Model

  • Peiyi Lyu;Siyuan Song
    • Safety and Health at Work
    • /
    • v.15 no.2
    • /
    • pp.200-207
    • /
    • 2024
  • Background: Workers are often exposed to hazardous heat due to their work environment, leading to various injuries. As a result of climate change, heat-related injuries (HRIs) are becoming more problematic. This study aims to identify critical contributing factors to the severity of occupational HRIs. Methods: This study analyzed historical injury reports from the Occupational Safety and Health Administration (OSHA). Contributing factors to the severity of HRIs were identified using text mining and model-free machine learning methods. The Multinomial Logit Model (MNL) was applied to explore the relationship between these factors and the severity of HRIs. Results: The results indicated a higher risk of fatal HRIs among middle-aged, older, and male workers, particularly in the construction, service, manufacturing, and agriculture industries. In addition, a higher heat index, collapses, heart attacks, and fall accidents increased the severity of HRIs, while symptoms such as dehydration, dizziness, cramps, faintness, and vomiting reduced the likelihood of fatal HRIs. Conclusions: The severity of HRIs was significantly influenced by factors such as workers' age, gender, industry type, heat index, symptoms, and secondary injuries. The findings underscore the need for tailored preventive strategies and training across different worker groups to mitigate HRI risks.
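
A minimal sketch of the MNL step, assuming the text-mined factors have already been encoded as a feature matrix and severity is coded into three illustrative classes. It uses statsmodels' MNLogit as one common implementation; the feature names, severity coding, and data below are placeholders, not the paper's variables.

```python
# Minimal multinomial logit sketch: illustrative factors -> injury severity.
# Feature names and severity coding are assumptions, not the paper's variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
X = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "male": rng.integers(0, 2, n),
    "heat_index": rng.normal(95, 10, n),
    "construction": rng.integers(0, 2, n),
})
severity = rng.integers(0, 3, n)   # 0=non-hospitalized, 1=hospitalized, 2=fatal (dummy)

model = sm.MNLogit(severity, sm.add_constant(X)).fit(disp=False)
print(model.summary())             # one coefficient block per non-base severity level
```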

Methodology for Classifying Hierarchical Data Using Autoencoder-based Deeply Supervised Network (오토인코더 기반 심층 지도 네트워크를 활용한 계층형 데이터 분류 방법론)

  • Kim, Younha;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.185-207
    • /
    • 2022
  • Recently, with the development of deep learning technology, research applying deep learning algorithms to unstructured data such as text and images has been actively conducted. Text classification has been studied for a long time in academia and industry, and various attempts have been made to utilize data characteristics to improve classification performance. In particular, the hierarchical relationship of labels has been utilized for hierarchical classification. However, the top-down approach commonly used for hierarchical classification has the limitation that misclassification at a higher level blocks the opportunity for correct classification at a lower level. Therefore, in this study, we propose a methodology for classifying hierarchical data using an autoencoder-based deeply supervised network in which high-level classification does not block low-level classification while the hierarchical relationship of labels is still considered. The proposed methodology attaches a main classifier that predicts the low-level label to the autoencoder's latent variable and an auxiliary classifier that predicts the high-level label to a hidden layer of the autoencoder. In experiments on 22,512 academic papers conducted to evaluate the proposed methodology, the proposed model showed superior classification accuracy and F1-score compared to a traditional supervised autoencoder and a DNN model.
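
A minimal PyTorch sketch of the architecture described above: an autoencoder whose latent variable feeds a main classifier for the low-level label and whose encoder hidden layer feeds an auxiliary classifier for the high-level label, trained with a combined loss. Dimensions and loss weights are illustrative assumptions, not the paper's settings.

```python
# Sketch: autoencoder-based deeply supervised network.
# Auxiliary classifier (high-level label) on an encoder hidden layer,
# main classifier (low-level label) on the latent variable.
import torch
import torch.nn as nn

class DeeplySupervisedAE(nn.Module):
    def __init__(self, n_features, n_high, n_low, hidden=256, latent=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(hidden, latent), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_features))
        self.aux_head = nn.Linear(hidden, n_high)    # predicts high-level label
        self.main_head = nn.Linear(latent, n_low)    # predicts low-level label

    def forward(self, x):
        h = self.enc1(x)
        z = self.enc2(h)
        return self.dec(z), self.aux_head(h), self.main_head(z)

def loss_fn(x, y_high, y_low, model, w_rec=1.0, w_aux=0.5, w_main=1.0):
    """Combined reconstruction + auxiliary + main classification loss."""
    recon, aux_logits, main_logits = model(x)
    ce = nn.functional.cross_entropy
    return (w_rec * nn.functional.mse_loss(recon, x)
            + w_aux * ce(aux_logits, y_high)
            + w_main * ce(main_logits, y_low))
```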

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to products in consumer electronics and communications, such as artificial intelligence refrigerators and speakers. In the financial sector, Kensho's artificial intelligence technology improved the stock trading process at Goldman Sachs: two stock traders could handle the work of 600, and analytical work that took 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning is being actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning are also actively studied, and the linearity limitations of traditional financial time series studies are overcome by machine learning approaches such as artificial intelligence prediction models. Quantitative studies based on past stock market data widely use artificial intelligence to forecast future movements of stock prices or indices, and other studies predict the future direction of the market or of individual stocks by learning from large amounts of text data such as news and comments related to the stock market. Investing in commodity assets, one class of alternative assets, is usually done to enhance the stability and safety of a traditional stock and bond portfolio, yet there is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the field as a whole. In this study, we built an investment model using the Support Vector Machine (SVM). Some research on commodity assets focuses on price prediction for a specific commodity, but research on machine learning investment models for commodity asset allocation is hard to find. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors, energy, agriculture, and metals, that are actively traded on the CME market and have sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We constructed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. We used 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as model inputs, because commodity assets are closely related to macroeconomic activity; they comprise 14 US, two Chinese, and two Korean economic indicators. The data period runs from January 1990 to May 2017, with the first 195 monthly observations used as training data and the last 125 as test data.
In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures model is better than that of the commodity indices model, especially in the agriculture and metals sectors, and the individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar when the data period varies; we therefore also used odd-numbered years as training data and even-numbered years as test data and confirmed that the results are similar. As a result, when allocating commodity assets within a traditional portfolio composed of stocks, bonds, and cash, more effective investment performance is obtained not by investing in commodity indices but by investing in commodity futures, especially through the commodity futures portfolio rebalanced by the SVM model.
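
A rough scikit-learn sketch of the forecasting step: macroeconomic indicators for one month are used to predict the direction of a commodity futures return in the next month, with the earlier months as training data and the later months as the test set. The data, indicator columns, and split lengths below are random placeholders standing in for the paper's 19 indicators and 1990-2017 sample.

```python
# Sketch: SVM direction forecast for a commodity futures series from
# macroeconomic indicators. All data here are random placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_months, n_indicators = 320, 19
X = rng.normal(size=(n_months, n_indicators))     # monthly macro indicators
ret = rng.normal(size=n_months)                   # next-month futures returns
y = (ret > 0).astype(int)                         # 1 = up, 0 = down

train, test = slice(0, 195), slice(195, 320)      # chronological split
scaler = StandardScaler().fit(X[train])
svm = SVC(kernel="rbf").fit(scaler.transform(X[train]), y[train])

signals = svm.predict(scaler.transform(X[test]))  # monthly long/flat signal
accuracy = (signals == y[test]).mean()
print(f"direction accuracy: {accuracy:.2%}")
# Portfolio rebalancing would then hold (or overweight) each future only in
# months with a positive predicted direction.
```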

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in image recognition competitions (ILSVR) and in AlphaGo is the Convolutional Neural Network (CNN). CNN is characterized by dividing the input image into small sections, recognizing partial features, and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but so far their applications have been largely limited to image recognition and natural language processing, and the use of deep learning techniques for business problems is still at an early research stage. If their performance is proved, they can be applied to traditional business problems such as marketing response prediction, fraud transaction detection, and bankruptcy prediction. It is therefore meaningful to examine whether deep learning can solve business problems, based on the case of online shopping companies, which hold big data, make customer behavior relatively easy to identify, and offer high utilization value. The competitive environment of online shopping companies is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is increasingly important for them. In this study, we propose a 'CNN model of heterogeneous information integration' to improve the prediction of customer behavior in online shopping enterprises. The model learns by combining structured and unstructured information with a convolutional neural network on top of a multi-layer perceptron structure; to optimize performance, we examine three architectural elements, heterogeneous information integration, unstructured information vector conversion, and multi-layer perceptron design, evaluate the performance of each architecture, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churn, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and voice-of-customer (VOC) data from a specific online shopping company in Korea. Data extraction criteria were defined for 47,947 customers who registered at least one VOC in January 2011 (one month); the customer profiles of these customers, a total of 19 months of trading data from September 2010 to March 2012, and the VOCs posted during that month are used. The experiment is divided into two stages. In the first stage, we evaluate the three architectural elements that affect the performance of the proposed model and select optimal parameters; in the second, we evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (Support Vector Machine), and ANN (Artificial Neural Network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior and that CNN can be applied to business problems as well as to image recognition and natural language processing.
The experiments also confirm that CNN is effective in understanding and interpreting the meaning of context in text VOC data, and this empirical research on actual e-commerce data shows that very meaningful information for predicting customer behavior can be extracted from VOC data written in text form directly by customers. Finally, the various experiments provide useful information for future research on parameter selection and model performance.
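
A minimal sketch of the heterogeneous-information idea: a 1-D CNN branch encodes the VOC text (already converted to a padded sequence of token ids), an MLP branch encodes the structured customer/transaction features, and the two representations are concatenated for one binary prediction. The vocabulary size, sequence length, and layer sizes are illustrative assumptions rather than the paper's tuned architecture.

```python
# Sketch: fuse unstructured VOC text (CNN branch) with structured customer
# features (MLP branch) for one binary target, e.g. re-purchase.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, vocab=20000, emb=64, n_structured=30):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.text_cnn = nn.Sequential(
            nn.Conv1d(emb, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.structured = nn.Sequential(nn.Linear(n_structured, 32), nn.ReLU())
        self.head = nn.Linear(64 + 32, 1)            # single binary logit

    def forward(self, token_ids, features):
        t = self.emb(token_ids).transpose(1, 2)      # (batch, emb, seq_len)
        t = self.text_cnn(t).squeeze(-1)             # (batch, 64)
        s = self.structured(features)                # (batch, 32)
        return self.head(torch.cat([t, s], dim=1))   # train with BCEWithLogitsLoss

# Usage sketch:
# logits = FusionCNN()(torch.randint(0, 20000, (8, 200)), torch.randn(8, 30))
```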

A Study on Development of Automatic Categorization System for Internet Documents (인터넷 문서 자동 분류 시스템 개발에 관한 연구)

  • Han, Kwang-Rok;Sun, B.K.;Han, Sang-Tae;Rim, Kee-Wook
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.9
    • /
    • pp.2867-2875
    • /
    • 2000
  • In this paper, we discuss the implementation of an automatic internet text categorization system. A categorization algorithm is designed and the system is implemented with a backpropagation learning model. Internet documents are collected according to the established categories, the chi-square ($\chi^2$) test is applied for document learning, and category features are extracted. The learning and separating vector sets are produced from these features. Experimental evaluation shows that this system improves automatic categorization performance over the nearest neighbor method.
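
A minimal modern equivalent of the pipeline described above, assuming scikit-learn: chi-square feature selection over term counts followed by a backpropagation-trained multi-layer perceptron. The toy corpus, number of selected features, and network size are placeholders, not the system's original settings.

```python
# Sketch: chi-square feature selection over term counts, then a
# backpropagation (MLP) classifier. Documents and labels are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = ["stock market rally", "soccer world cup", "election results", "bond yields fall"]
labels = ["economy", "sports", "politics", "economy"]

model = make_pipeline(
    CountVectorizer(),
    SelectKBest(chi2, k=3),          # keep the k most category-informative terms
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
model.fit(docs, labels)
print(model.predict(["interest rates and the stock market"]))
```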

Analysis of Sentential Paraphrase Patterns and Errors through Predicate-Argument Tuple-based Approximate Alignment (술어-논항 튜플 기반 근사 정렬을 이용한 문장 단위 바꿔쓰기표현 유형 및 오류 분석)

  • Choi, Sung-Pil;Song, Sa-Kwang;Myaeng, Sung-Hyon
    • The KIPS Transactions:PartB
    • /
    • v.19B no.2
    • /
    • pp.135-148
    • /
    • 2012
  • This paper proposes a model for recognizing sentential paraphrases through Predicate-Argument Tuple (PAT)-based approximate alignment between two texts. We cast paraphrase recognition as a binary classification problem by defining and applying various alignment features that effectively express the semantic relatedness between two sentences. Experiments confirmed the potential of our approach, and error analysis revealed various paraphrase patterns not handled by our system, which can help us devise methods for further performance improvement.
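
A rough sketch of the classification setup, assuming predicate-argument tuples have already been extracted from each sentence pair: simple overlap-based alignment features feed a binary classifier. The feature set here is illustrative and much cruder than the alignment features defined in the paper.

```python
# Sketch: turn predicate-argument tuple (PAT) overlap between two sentences
# into alignment features for binary paraphrase classification.
from sklearn.linear_model import LogisticRegression

def pat_features(pats_a, pats_b):
    """pats_* are sets of (predicate, arg1, arg2) tuples; features are crude overlaps."""
    preds_a, preds_b = {p for p, _, _ in pats_a}, {p for p, _, _ in pats_b}
    args_a = {a for _, a1, a2 in pats_a for a in (a1, a2)}
    args_b = {a for _, a1, a2 in pats_b for a in (a1, a2)}
    jac = lambda x, y: len(x & y) / len(x | y) if x | y else 0.0
    return [jac(pats_a, pats_b), jac(preds_a, preds_b), jac(args_a, args_b)]

# Tiny toy training set: (PATs of sentence 1, PATs of sentence 2, is_paraphrase)
pairs = [
    ({("buy", "kim", "car")}, {("purchase", "kim", "car")}, 1),
    ({("buy", "kim", "car")}, {("sell", "lee", "house")}, 0),
    ({("win", "team", "game")}, {("win", "team", "match")}, 1),
    ({("win", "team", "game")}, {("lose", "team", "game")}, 0),
]
X = [pat_features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]
clf = LogisticRegression().fit(X, y)
print(clf.predict([pat_features({("buy", "kim", "car")}, {("buy", "kim", "car")})]))
```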

A Data-Driven Causal Analysis on Fatal Accidents in Construction Industry (건설 사고사례 데이터 기반 건설업 사망사고 요인분석)

  • Jiyoon Choi;Sihyeon Kim;Songe Lee;Kyunghun Kim;Sudong Lee
    • Journal of the Korea Safety Management & Science
    • /
    • v.25 no.3
    • /
    • pp.63-71
    • /
    • 2023
  • The construction industry stands out for its higher incidence of accidents in comparison to other sectors. A causal analysis of these accidents is necessary for effective prevention. In this study, we propose a data-driven causal analysis to find significant factors in fatal construction accidents. We collected 14,318 cases of structured and text data on construction accidents from the Construction Safety Management Integrated Information (CSI) system. For the variables in the collected dataset, we first analyze their patterns and correlations with fatal construction accidents through statistical analysis. In addition, machine learning algorithms are employed to develop a classification model for fatal accidents. The integration of SHAP (SHapley Additive exPlanations) allows the identification of the root causes driving fatal incidents. As a result, the analysis reveals the significant factors and keywords that have a notable influence on fatal accidents in construction contexts.
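
A minimal sketch of the final two steps, assuming the accident records have already been encoded as tabular features: a tree-ensemble classifier for fatal vs. non-fatal accidents followed by SHAP to rank the features driving fatal predictions. The feature names and data below are placeholders; the paper's actual variables come from the CSI records and text mining.

```python
# Sketch: fatal-accident classifier plus SHAP feature attribution.
# Features and data below are placeholders, not the CSI variables.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 1000
X = pd.DataFrame({
    "work_height_m": rng.uniform(0, 20, n),
    "worker_age": rng.integers(20, 70, n),
    "small_site": rng.integers(0, 2, n),
    "kw_scaffold": rng.integers(0, 2, n),   # hypothetical keyword flag from text mining
})
y = rng.integers(0, 2, n)                    # 1 = fatal, 0 = non-fatal (dummy labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X)                # per-feature contribution per case
fatal_sv = sv[1] if isinstance(sv, list) else sv[..., 1]   # values for the fatal class
importance = np.abs(fatal_sv).mean(axis=0)   # global ranking of contributing factors
print(pd.Series(importance, index=X.columns).sort_values(ascending=False))
```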