• Title/Summary/Keyword: time-domain analysis


A study on the predictability of acoustic power distribution of English speech for English academic achievement in a Science Academy (과학영재학교 재학생 영어발화 주파수 대역별 음향 에너지 분포의 영어 성취도 예측성 연구)

  • Park, Soon;Ahn, Hyunkee
    • Phonetics and Speech Sciences, v.14 no.3, pp.41-49, 2022
  • The average acoustic distribution of American English speakers was statistically compared with the English-speaking patterns of gifted students at a Science Academy in Korea. By analyzing speech recordings whose duration is much longer than in previous studies, this research identified the degree of acoustic proximity between the two groups and the predictability of the gifted high school students' English academic achievement. Long-term spectral acoustic power distribution vectors were obtained for 2,048 center frequencies in the range of 20 Hz to 20,000 Hz by applying a long-term average speech spectrum (LTASS) MATLAB code. Three more variables were statistically compared to discover additional indices that can predict future English academic achievement: receptive vocabulary size test scores, cumulative vocabulary scores from English formative assessments, and English Speaking Proficiency Test scores. Linear regression and correlational analyses among the four variables showed that the receptive vocabulary size test and the low-frequency-vocabulary formative assessments, which require both lexical and domain-specific science background knowledge, are more significant predictors of the gifted students' academic achievement than basic suprasegmental-level English fluency.
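
The core LTASS idea — averaging per-frame power spectra over a long recording — can be sketched in stdlib-only Python. The 64-sample frame length, naive DFT, and synthetic tone below are illustrative assumptions, not the paper's 2,048-bin MATLAB setup:

```python
import cmath
import math

def dft_power(frame):
    """Power spectrum of one frame via a naive DFT (fine for short frames)."""
    n = len(frame)
    spec = []
    for k in range(n // 2 + 1):
        s = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spec.append(abs(s) ** 2 / n)
    return spec

def ltass(signal, frame_len=64):
    """Long-term average speech spectrum: mean power spectrum over all frames."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    spectra = [dft_power(f) for f in frames]
    n_frames = len(spectra)
    return [sum(s[k] for s in spectra) / n_frames for k in range(len(spectra[0]))]

# toy "speech": a 1 kHz tone at 8 kHz sampling with slow amplitude variation
fs = 8000
sig = [math.sin(2 * math.pi * 1000 * t / fs) * (1 + 0.5 * math.sin(2 * math.pi * t / fs))
       for t in range(fs)]
spec = ltass(sig, frame_len=64)
peak_bin = max(range(len(spec)), key=lambda k: spec[k])
print(peak_bin, peak_bin * fs // 64)  # peak bin 8 -> 8 * 8000/64 = 1000 Hz
```

A real LTASS implementation would use an FFT, windowing, and overlapping frames; this sketch only shows the long-term averaging that makes the spectrum stable.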

Analysis and implications of North Korea's new strategic drones 'Satbyol-4', 'Satbyol-9' (북한의 신형 전략 무인기 '샛별-4형', '샛별-9형' 분석과 시사점)

  • Kang-Il Seo;Jong-Hoon Kim;Man-Hee Won;Dong-Min Lee;Jae-Hyung Bae;Sang-Hyuk Park
    • The Journal of the Convergence on Culture Technology, v.10 no.2, pp.167-172, 2024
  • In the major wars of the 21st century, drones are expanding beyond surveillance and reconnaissance to missions on land and in the air as well as at sea and underwater, for purposes such as precision strikes, suicide attacks, and cognitive warfare. These drones will perform multi-domain operations, and to this end they will continue to develop through improved autonomy and greater scalability based on the High-Low Mix concept. Drones have recently served as a major means in wars around the world, and there is a good chance they will evolve into game changers in the future. North Korea has also long made significant efforts to operate reconnaissance and attack drones. It has recently continued provocations using drones, and its capabilities are gradually becoming more sophisticated. In addition, with the recent emergence of new strategic drones, both wartime and peacetime threats are expected to grow, such as North Korea using them to secure surveillance, reconnaissance, and early-warning capabilities against South Korea and to mount new types of provocations. Through this study, we hope to provide implications by analyzing the capabilities of North Korea's strategic drones, predicting their operation patterns, and encouraging active follow-up research on a comprehensive strategy, including our military's drone deployment and counter-drone system solutions.

Mid Frequency Band Reverberation Model Development Using Ray Theory and Comparison with Experimental Data (음선 기반 중주파수 대역 잔향음 모델 개발 및 실측 데이터 비교)

  • Chu, Young-Min;Seong, Woo-Jae;Yang, In-Sik;Oh, Won-Tchon
    • The Journal of the Acoustical Society of Korea, v.28 no.8, pp.740-754, 2009
  • Sound in the ocean is scattered by inhomogeneities of many kinds, such as the sea surface, the sea bottom, randomly distributed bubble layers, and schools of fish. The total sum of the signals scattered from these scatterers is called reverberation. To simulate the reverberation signal precisely, a propagation model must be combined with proper scattering models corresponding to each scattering mechanism. In this article, we develop a reverberation model based on ray theory that is easily combined with existing scattering models. The developed model uses (1) the Chapman-Harris empirical formula and the APL-UW/SSA models for sea-surface scattering, and (2) Lambert's law and the APL-UW/SSA models for sea-bottom scattering. To verify the developed model, we compare our results with those in Ellis' article and the 2006 reverberation workshop. The verified reverberation model, SNURM, is then used to simulate mid-frequency reverberation signals for the seas neighboring South Korea, and the model results are compared with experimental data in the time domain. Through this comparison, the features of the reverberation signal that depend on the environment of each sea area are investigated, and the analysis leads us to select an appropriate scattering function for each area of interest.
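
Of the scattering models listed, Lambert's law for bottom scattering is simple enough to sketch. The −27 dB Lambert constant below is the commonly quoted Mackenzie value — an assumption for illustration, not a parameter taken from the paper:

```python
import math

def lambert_ss(grazing_in_deg, grazing_out_deg, mu_db=-27.0):
    """Lambert's-law bottom scattering strength in dB.

    SS = mu + 10*log10(sin(theta_i) * sin(theta_s)); mu ~ -27 dB is the
    often-quoted Mackenzie value (an assumption here, not from the paper).
    """
    ti = math.radians(grazing_in_deg)
    ts = math.radians(grazing_out_deg)
    return mu_db + 10.0 * math.log10(math.sin(ti) * math.sin(ts))

# monostatic case (theta_i == theta_s): scattering strength rises with grazing angle
for g in (10, 30, 60, 90):
    print(g, round(lambert_ss(g, g), 1))
```

A full reverberation model would integrate this scattering strength over ensonified bottom patches along each eigenray; the function above is only the per-patch kernel.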

Construction and estimation of soil moisture site with FDR and COSMIC-ray (SM-FC) sensors for calibration/validation of satellite-based and COSMIC-ray soil moisture products in Sungkyunkwan university, South Korea (위성 토양수분 데이터 및 COSMIC-ray 데이터 보정/검증을 위한 성균관대학교 내 FDR 센서 토양수분 측정 연구(SM-FC) 및 데이터 분석)

  • Kim, Hyunglok;Sunwoo, Wooyeon;Kim, Seongkyun;Choi, Minha
    • Journal of Korea Water Resources Association, v.49 no.2, pp.133-144, 2016
  • In this study, Frequency Domain Reflectometry (FDR) and COSMIC-ray soil moisture (SM) stations were installed at Sungkyunkwan University in Suwon, South Korea. To provide reliable SM information, a soil property test, a time-series analysis of the measured SM, and a comparison of the measured SM with satellite-based SM products were conducted. In 2014, six FDR stations were set up to measure SM. Each station had four FDR sensors at soil depths from 5 cm to 40 cm, at intervals of 5 to 10 cm. The results showed that the study region had heterogeneous soil layers, including sand and loamy sand. The measured SM data showed strong coupling with precipitation. Furthermore, they showed a high correlation coefficient and a low root mean square deviation (RMSD) when compared with the satellite-based SM products. After verifying the accuracy of the 2014 data, four FDR stations and one COSMIC-ray station were additionally installed to establish the Soil Moisture site with FDR and COSMIC-ray (SM-FC). COSMIC-ray-based SM had a high correlation coefficient of 0.95 with the mean SM of the FDR stations. These results suggest that SM-FC will give researchers valuable insight for satellite- and model-based SM validation studies in South Korea.
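
The two agreement metrics used above, the correlation coefficient and RMSD, can be computed directly. The soil-moisture series below are hypothetical illustration data, not SM-FC measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmsd(x, y):
    """Root mean square deviation between two series."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

# hypothetical in-situ vs. satellite soil-moisture series (m^3/m^3)
in_situ   = [0.12, 0.18, 0.25, 0.22, 0.15, 0.30]
satellite = [0.10, 0.20, 0.23, 0.24, 0.14, 0.28]
print(round(pearson_r(in_situ, satellite), 3), round(rmsd(in_situ, satellite), 3))
```

A high r with a low RMSD, as reported in the paper, indicates the satellite product tracks both the temporal pattern and the magnitude of the ground measurements.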

Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics (빅데이터 기반의 정성 정보를 활용한 부도 예측 모형 구축)

  • Jo, Nam-ok;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.22 no.2, pp.33-56, 2016
  • Many researchers have focused on developing bankruptcy prediction models to secure enhanced performance, using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM). Most bankruptcy prediction models in academic studies have used financial ratios as the main input variables. The bankruptcy of firms is associated with both a firm's financial state and the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that relying only on financial ratios has some drawbacks. Accounting information, such as financial ratios, is based on past data and is usually determined one year before bankruptcy. Thus, a time lag exists between the point of closing financial statements and the point of credit evaluation. In addition, financial ratios do not capture environmental factors such as external economic conditions. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Thus, qualitative information must be added to the conventional bankruptcy prediction model to supplement accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various sources, previous studies have made only limited use of it. Recently, however, big data analytics, such as text mining techniques, have been drawing much attention in academia and industry, with an increasing amount of unstructured text data available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling.
Nevertheless, the use of qualitative information on the web for business prediction modeling is still at an early stage, restricted to limited applications such as stock prediction and movie revenue prediction. Thus, it is necessary to apply big data analytics techniques, such as text mining, to various business prediction problems, including credit risk evaluation. Analytic methods are required for processing qualitative information represented in unstructured text form, due to the complexity of managing and processing unstructured text data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms that uses both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well the qualitative information is transformed into quantitative information suitable for incorporation into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as the mechanism for processing qualitative information. A sentiment index is produced at the industry level by extracting sentiment from a large amount of text data, to quantify the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles. The generated lexicon is designed to represent sentiment for the construction business by considering the relationship between an occurring term and the actual economic condition of the industry, rather than the inherent semantics of the term. The experimental results showed that incorporating qualitative information based on big data analytics into the traditional, accounting-based bankruptcy prediction model is effective for enhancing predictive performance.
The sentiment variable extracted from economic news articles had an impact on corporate bankruptcy. In particular, a negative sentiment variable improved the accuracy of corporate bankruptcy prediction because the corporate bankruptcy of construction firms is sensitive to poor economic conditions. The bankruptcy prediction model using qualitative information based on big data analytics contributes to the field, in that it reflects not only relatively recent information but also environmental factors, such as external economic conditions.
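
The keyword-based sentiment index described above can be sketched minimally. The lexicon terms and weights below are invented for illustration and are unrelated to the authors' actual construction-industry lexicon:

```python
# hypothetical domain lexicon: terms weighted for the construction industry,
# not the authors' actual lexicon
LEXICON = {
    "boom": 1.0, "growth": 0.5, "recovery": 0.5,
    "slump": -1.0, "default": -1.0, "downturn": -0.5,
}

def sentiment_index(articles):
    """Average per-article keyword sentiment, as a crude industry-level index."""
    scores = []
    for text in articles:
        words = text.lower().split()
        hits = [LEXICON[w] for w in words if w in LEXICON]
        scores.append(sum(hits) / len(hits) if hits else 0.0)
    return sum(scores) / len(scores)

news = [
    "construction orders show growth and recovery this quarter",
    "builder default fears deepen amid housing slump",
]
print(sentiment_index(news))  # -0.25
```

In the paper's setting, an index like this would be computed per period and fed into the bankruptcy model alongside the financial ratios.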

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems, v.19 no.4, pp.123-132, 2013
  • As smartphones are equipped with various sensors, such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, and proximity sensor, there have been many research efforts to make use of these sensors in valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, and analysis of lifestyles and exercise patterns. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities with only limited information. The difficulty becomes especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier distinguishing ten different activities can be built using data from only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the tree, the set of all classes is split into two subsets by a binary classifier. At a child node, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This tree can be viewed as a nested dichotomy that makes multi-class predictions.
Depending on how the set of classes is split at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than others, yet we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each with a different random subset of features, using bootstrap samples. By combining bagging with random feature-subset selection, a random forest enjoys more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that handles a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classification include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers.
Of the 5,900 (= 5 × (60 × 2 − 2) / 0.1) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with other similar activities, END classified all ten activities with a fairly high accuracy of 98.4%. By comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
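
The nested-dichotomy construction described above — randomly splitting the class set until each leaf holds one class, then ensembling several such trees — can be sketched without the base classifiers (which would sit at each internal node):

```python
import random

def build_dichotomy(classes, rng):
    """Recursively split a class set into a random nested dichotomy tree.

    Each internal node would hold a binary classifier (e.g., a random forest)
    trained to separate its left class subset from its right one.
    """
    if len(classes) == 1:
        return classes[0]                      # leaf: a single class
    shuffled = classes[:]
    rng.shuffle(shuffled)
    cut = rng.randint(1, len(shuffled) - 1)    # random non-empty split
    return (build_dichotomy(shuffled[:cut], rng),
            build_dichotomy(shuffled[cut:], rng))

def leaves(tree):
    """Collect the classes at the leaves of a dichotomy tree."""
    if isinstance(tree, tuple):
        return leaves(tree[0]) + leaves(tree[1])
    return [tree]

activities = ["Sitting", "Standing", "Walking", "Running", "Walking Uphill",
              "Walking Downhill", "Running Uphill", "Running Downhill",
              "Falling", "Hobbling"]
rng = random.Random(0)
ensemble = [build_dichotomy(activities, rng) for _ in range(5)]  # END = several random trees
for tree in ensemble:
    assert sorted(leaves(tree)) == sorted(activities)  # every tree covers all 10 classes
```

At prediction time, each tree multiplies the binary-classifier probabilities along the path to each leaf, and the ensemble averages the resulting class distributions.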

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems, v.20 no.1, pp.149-161, 2014
  • The export of domestic public services to overseas markets faces many potential obstacles stemming from different export procedures, target services, and socio-economic environments. To alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions of participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with the vocabulary and its meaning, the relationships between ontologies, and key attributes. For implementing and testing the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology include objective, requirements, activity, and service.
The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation. Sub-attributes of requirements are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase. The activity category also has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business law, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. 
The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. A priority list of target services for a certain country, and/or a priority list of target countries for a certain public service, is generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and government policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. The proposed ontology model and the business incubation platform are expected to serve various participants in the public service export market. They could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service exports. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
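
The matching that produces the priority lists can be illustrated with a toy requirements-overlap score. The service and country attribute sets below are hypothetical, not drawn from the paper's case studies, and real matching over ontologies would weight attributes rather than count them:

```python
# hypothetical attribute sets, illustrating the priority-list matching idea
SERVICES = {
    "e-procurement": {"broadband", "digital-id", "public-cloud"},
    "smart-water":   {"sensor-network", "broadband"},
}
COUNTRIES = {
    "country-A": {"broadband", "digital-id", "public-cloud"},
    "country-B": {"broadband", "sensor-network", "public-cloud"},
}

def priority_list(service):
    """Rank target countries by the fraction of a service's requirements they meet."""
    need = SERVICES[service]
    scored = [(len(need & have) / len(need), c) for c, have in COUNTRIES.items()]
    return [c for score, c in sorted(scored, reverse=True)]

print(priority_list("e-procurement"))
print(priority_list("smart-water"))
```

The resulting ranked lists would then seed the simulator module's gap analysis between Korea and each candidate country.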

Relationship between Entrepreneurial Education and Entrepreneurial Opportunity Recognition: Focused on the Entrepreneurship Major College Students (앙트러프러너십 교육과 창업기회인식 역량과의 관계: 숙명여대 앙트러프러너십 전공 사례를 중심으로)

  • Lee, Woo Jin;Son, Jong Seo;Oh, Hyemi
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship, v.13 no.3, pp.71-83, 2018
  • Recently, there have been many efforts to define the field of entrepreneurship as an independent area of study. According to Shane & Venkataraman, the study of entrepreneurship is moving toward understanding the combination of the entrepreneurial individual and a valuable opportunity in becoming an entrepreneur. In Korea, entrepreneurship education is spreading widely through universities, and in 2010 the first entrepreneurship major in Korea was created at Sookmyung Women's University. Much research has examined the relationship between entrepreneurship education and entrepreneurial intention. Nevertheless, few studies focus on opportunity recognition, which many scholars recognize as an independent field within the entrepreneurship domain. Therefore, the purpose of this study is to examine the effect of satisfaction with entrepreneurship major education on entrepreneurial opportunity recognition, and to examine the mediating effect of educational commitment. Questionnaires were administered over 3 weeks to entrepreneurship major students at Sookmyung Women's University. A total of 84 responses were collected and statistically analyzed with the R program. The analysis found that satisfaction with the education positively influences the recognition of entrepreneurial opportunities, and that commitment fully mediates this relationship. These results confirm that the ability to recognize entrepreneurial opportunities is developed by entrepreneurship education, and that students' commitment plays an important role in the relationship between educational satisfaction and opportunity recognition. The results were verified through empirical analysis.
Satisfaction with entrepreneurship education, and the opportunity-recognition capability it fosters, can thus be expected to translate into future entrepreneurial activity.

The Effect of Acupuncture Treatment on the Heart Rate Variability of Chronic Headache Patients (만성두통환자에 대한 침치료가 심박변이도에 미치는 영향)

  • Jung, In-tae;Lee, Sang-hoon;Kim, Su-young;Cha, Nam-hyun;Kim, Keon-sik;Lee, Doo-ik;Lee, Jae-Dong;Lim, Sabina;Lee, Yun-ho;Choi, Do-young
    • Journal of Acupuncture Research, v.22 no.3, pp.105-112, 2005
  • Objective: The purpose of this study was to assess the effect of acupuncture treatment on chronic headache patients using power spectrum analysis of heart rate variability (HRV). Methods: 15 participants were recruited for the clinical experiment; through a questionnaire, patients who experienced headache for more than 4 hours a day and more than 15 days per month were qualified as chronic headache patients. Treatment was applied twice a week for 8 weeks. The acupoints GV20, HN23, ST8, HN46, TE17, GB20, LI20, LI11, LI14, ST36, and LR3 were stimulated for 20 minutes. The effects of acupuncture treatment were analyzed using power spectrum analysis of HRV, recorded before and after treatment. Results: HRV before and after treatment was compared after 8 weeks of acupuncture treatment. Increases in the mean values of SDNN and RMSSD were observed, but they were not statistically significant. Increases in the mean values of TP, LF, and HF were also observed, but the increase was significant (p<0.05) only for TP. Conclusions: The results suggest that acupuncture treatment of chronic headache patients can increase the activity of the autonomic nervous system. Further use of HRV for quantitative analysis of acupuncture treatment of autonomic nervous system-related symptoms is suggested.
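
The time-domain HRV measures reported above, SDNN and RMSSD, have standard definitions that can be computed directly from an RR-interval series. The series below is hypothetical, not patient data:

```python
import math
import statistics

def sdnn(rr_ms):
    """SDNN: standard deviation of normal-to-normal RR intervals (ms)."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# hypothetical RR-interval series in milliseconds (not patient data)
rr = [812, 790, 845, 830, 801, 820, 835, 809]
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```

The frequency-domain measures (TP, LF, HF) would additionally require a power spectrum of the resampled RR series, which is beyond this sketch.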


Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. An early neural network of this kind, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, reviving interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires much effort. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes the feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying high-dimensional features directly extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers address different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, which carries more information about the image. When the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4,096 + 4,096 + 1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation.
Moreover, our proposed approach achieved 75.6% accuracy on Caltech-256 compared to 73.9% for the FC7 layer alone, 73.1% on VOC07 compared to 69.2% for the FC8 layer, and 52.2% on SUN397 compared to 48.7% for the FC7 layer. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
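
The concatenate-then-PCA pipeline described above can be sketched on toy data. The 4-dimensional "layers", the shared latent factor, and the power-iteration PCA below are illustrative simplifications of the 9,192-dimensional AlexNet setup:

```python
import random

def concat_features(fc7, fc8, fc_extra):
    """Concatenate per-layer activation vectors into one representation."""
    return fc7 + fc8 + fc_extra

def top_principal_component(data, iters=200):
    """First PCA direction via power iteration on the covariance (a sketch)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # apply the covariance implicitly: C v = X^T (X v) / n
        xv = [sum(c * u for c, u in zip(row, v)) for row in centered]
        w = [sum(centered[i][j] * xv[i] for i in range(n)) / n for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

rng = random.Random(1)
# toy "layer activations": 3 layers of 4 dims each -> 12-dim concatenation;
# a shared latent factor makes the layers redundant, as in the paper
samples = []
for _ in range(20):
    base = rng.gauss(0.0, 1.0)
    layer = lambda: [base + rng.gauss(0.0, 0.1) for _ in range(4)]
    samples.append(concat_features(layer(), layer(), layer()))
pc1 = top_principal_component(samples)
# projecting onto pc1 compresses the redundant 12 dims into 1 salient feature
proj = [sum(f * w for f, w in zip(s, pc1)) for s in samples]
print(len(samples[0]), len(pc1))  # 12 12
```

In the paper's setting, keeping the leading components of the 9,192-dimensional concatenation plays the same role: removing cross-layer redundancy before the classifier is trained.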