• Title/Summary/Keyword: standard domain


Tidal and Sub-tidal Current Characteristics in the Central part of Chunsu Bay, Yellow Sea, Korea during the Summer Season (서해 천수만 중앙부의 하계 조류/비조류 특성)

  • Jung, Kwang Young;Ro, Young Jae;Kim, Baek Jin
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.18 no.2
    • /
    • pp.53-64
    • /
    • 2013
  • This study analyzed 33 days of ADCP records obtained in Chunsu Bay, Yellow Sea, Korea, from July 29 to August 30, 2010, along with wind records from the KMA and discharge records at the Seosan A- and B-district tide embankments from the KRC. The analyses include descriptive statistics, harmonic analysis of tidal constituents, spectra and coherence, complex correlation, progressive vector diagrams, and cumulative curves, all aimed at understanding the tidal and sub-tidal current characteristics driven by local wind and discharge. Observed current velocity ranged from -30 to 40 cm/sec, with standard deviations from 1.7 cm/sec at the bottom to 18.7 cm/sec at the surface. According to the harmonic analysis results, the tidal current direction is NNW-SSE. The semi-major axes range from 9.4 to 14.8 cm/sec for the M2 constituent and from 4.4 to 7.0 cm/sec for S2; the semi-minor axes range from 0.1 to 0.5 cm/sec for M2 and from 0.4 to 1.4 cm/sec for S2. In the frequency domain, the spectral analysis revealed 3~6 significant spectral peaks for the band-passed wind and the residual current at all depths, with peak periods ranging from 2 to 8 days. The coherency analysis between the band-passed wind and the residual current at all depths resolved several significant coherencies at 3~5 periodicities within 2.8 days. The highest coherency peak occurred at a period of 4.6 days, with a 1.2-day phase lag between discharge and the band-passed residual current. The progressive vectors of wind and residual current travelled northward at all layers, and the travel distance at the middle layer was greater than at the surface. The northward residual current was driven by the seasonal southerly wind, while the density-driven current formed by freshwater input drove a southward residual current. The sub-tidal current characteristics in Chunsu Bay, Yellow Sea, Korea are thus determined by the seasonal wind forcing and freshwater inflow.
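As an illustration of the coherence step described above, the following is a minimal sketch using scipy's Welch-based coherence estimator. The hourly `wind` and `residual` series here are synthetic stand-ins (a 4.6-day signal with a 1.2-day lag plus noise), not the paper's data; sampling rate and segment length are illustrative.

```python
import numpy as np
from scipy import signal

# Hypothetical hourly records spanning 33 days, standing in for the
# band-passed wind and residual-current series analyzed in the paper.
fs = 24.0                      # samples per day (hourly sampling)
t = np.arange(33 * 24) / fs    # time in days
rng = np.random.default_rng(0)
wind = np.sin(2 * np.pi * t / 4.6) + 0.5 * rng.standard_normal(t.size)
residual = np.sin(2 * np.pi * (t - 1.2) / 4.6) + 0.5 * rng.standard_normal(t.size)

# Welch coherence; frequencies come out in cycles per day because fs is per day.
f, coh = signal.coherence(wind, residual, fs=fs, nperseg=256)
periods = 1.0 / f[1:]          # periods in days (skip the zero frequency)
peak = periods[np.argmax(coh[1:])]
print(f"highest coherence near a period of {peak:.1f} days")
```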

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies tend to deliver high returns for investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. However, this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results based on financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper used a multi-class SVM for the prediction of the DEA-based efficiency rating of venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. This paper is therefore built on the following two ideas for classifying which companies are the more efficient venture companies: i) constructing a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems; here we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity on classification tasks, resulting in numerous applications in many areas of business. SVM is essentially an algorithm that finds the maximum-margin hyperplane, the hyperplane achieving the maximum separation between classes; the support vectors are the training points closest to this hyperplane. When the classes cannot be separated linearly, a kernel function can be used.
In the case of nonlinear class boundaries, the inputs can be transformed into a high-dimensional feature space: the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time-series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the SVM kernel. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we constructed a multi-class rating from DEA efficiency scores and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class margin of error when the exact class is difficult to determine in the actual market. We therefore also report accuracy within one-class errors, for which the Weston and Watkins method reached 85.7% on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class problems.
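As an illustration of the classification step, here is a minimal, hypothetical sketch of a multi-class SVM with a Gaussian RBF kernel in scikit-learn. The financial ratios, DEA rating labels, and hyperparameters are invented stand-ins, and note that scikit-learn's SVC implements the one-against-one scheme internally, not the Weston-Watkins or Crammer-Singer all-together formulations used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((154, 8))                 # hypothetical financial ratios
y = rng.integers(0, 4, size=154)         # hypothetical DEA efficiency classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

# RBF-kernel SVM; SVC trains one binary classifier per class pair
# (one-against-one) and votes among them for the multi-class decision.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_tr), y_tr)
print("hit ratio:", clf.score(scaler.transform(X_te), y_te))
```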

A Study on the Positive Emotional Effects on Heart Rate Variability - Focused on Effects of '2002 FIFA World Cup' Sports Event on Emotion and General Health of Korean People - (긍정적 감성경험에 의한 심박변이도의 변화에 대한 연구 - 2002 한일 월드컵 행사가 한국의 국민 정서와 건강에 미친 영향을 중심으로 -)

  • Jeong Kee-Sam;Lee Byung-Chae;Choi Whan-Seok;Kim Bom-Taeck;Woo Jong-Min;Lee Kwae-Hi;Kim Min
    • Science of Emotion and Sensibility
    • /
    • v.9 no.2
    • /
    • pp.111-118
    • /
    • 2006
  • The purpose of this study is to examine the effects of positive mental stress, or eustress, on the autonomic nervous system (ANS) and human health. To this end, we analyzed heart rate variability (HRV) parameters, among the most promising markers of ANS function, to assess changes in the emotional and physiological states of the human body. We measured HRV signals of the World Cup group (281 male subjects: $29.8{\pm}5.6yr$., 187 female subjects: $29.0{\pm}5.4yr$.) in two stadiums at least an hour before the games during the '2002 FIFA World Cup Korea/Japan' event. We also measured those of a control group (331 male subjects: $30.9{\pm}4.7 yr$., 344 female subjects: $30.2{\pm}5.2 yr$.) in the health promotion centers of two university hospitals at least a month before and after the World Cup event period. Considering physiological differences between males and females, the data were analyzed for the male and female groups separately. As a result, a tendency was observed that differs from the known stress reaction. In general, all parameter values except mean heart rate tend to decrease under stress. Under the eustressed condition, however, both heart rate and the standard deviation of normal-to-normal intervals (SDNN) were higher than under the normal condition (p<0.05). In particular, in the female group, contrary to the distressed condition, all frequency-domain powers showed higher values (p<0.05, p<0.001). Considering that a decrease in HRV indicates declining health, the increases in SDNN and the frequency-domain parameters mean that the homeostatic control mechanism of the ANS is functioning positively. Accordingly, eustress induced by an international sports event may positively affect people's health.
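For reference, SDNN, the time-domain HRV measure reported above, is simply the sample standard deviation of the normal-to-normal (NN) beat intervals. A minimal sketch with hypothetical interval data:

```python
import numpy as np

def sdnn(nn_intervals_ms):
    """SDNN: sample standard deviation of normal-to-normal (NN) intervals, in ms.
    Higher SDNN is generally read as greater autonomic adaptability."""
    nn = np.asarray(nn_intervals_ms, dtype=float)
    return float(np.std(nn, ddof=1))

# Hypothetical short recording: NN intervals in milliseconds.
example = [812, 790, 845, 823, 801, 798, 830]
print(f"SDNN = {sdnn(example):.1f} ms")
```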


Wearable Computers

  • Cho, Gil-Soo;Barfield, Woodrow;Baird, Kevin
    • Fiber Technology and Industry
    • /
    • v.2 no.4
    • /
    • pp.490-508
    • /
    • 1998
  • One of the latest fields of research in the area of output devices is tactual display devices [13,31]. These tactual or haptic devices allow the user to receive haptic feedback from a variety of sources, so the user can actually feel virtual objects and manipulate them by touch. This is an emerging technology that will be instrumental in enhancing the realism of wearable augmented environments for certain applications. Tactual displays have previously been used for scientific visualization in virtual environments by chemists and engineers to improve perception and understanding of force fields and of world models populated with the impenetrable. In addition to tactual displays, wearable audio displays that allow sound to be spatialized are being developed. With wearable computers, designers will soon be able to pair spatialized sound with virtual representations of objects, when appropriate, to make the wearable computing experience even more realistic to the user. Furthermore, as the number and complexity of wearable computing applications continue to grow, there will be an increasing need for systems that are faster, lighter, and have higher-resolution displays. Better networking technology will also need to be developed to give all users of wearable computers high-bandwidth connections for real-time information gathering and collaboration. Beyond the technology advances that make users need to wear computers in everyday life, there is also the desire to have users want to wear their computers. For this, wearable computing needs to be unobtrusive and socially acceptable. By making wearables smaller and lighter, or actually embedding them in clothing, users can conceal them easily and wear them comfortably. The military is currently working on the development of the Personal Information Carrier (PIC), or digital dog tag. The PIC is a small electronic storage device containing medical information about the wearer. While old military dog tags contained only 5 lines of information, the digital tags may contain volumes of multimedia information, including medical history, X-rays, and cardiograms. Using hand-held devices in the field, medics would be able to call this information up in real time for better treatment. A fully functional transmittable device is still years off, but this technology, once developed for the military, could be adapted to civilian users and provide any information, medical or otherwise, in a portable, unobtrusive, and fashionable way. Another future device that could increase the safety and well-being of its users is the nose-on-a-chip developed by the Oak Ridge National Lab in Tennessee. This tiny digital silicon chip, about the size of a dime, is capable of 'smelling' natural gas leaks in stoves, heaters, and other appliances. It can also detect dangerous levels of carbon monoxide, and it can be configured to notify the fire department when a leak is detected. The nose chip should be commercially available within 2 years; it is inexpensive, requires low power, and is very sensitive. Along with gas detection, this device may someday also be configured to detect smoke and other harmful gases. Embedded into workers' uniforms, name tags, etc., it could be a lifesaving computational accessory. In addition to the future safety technologies soon to be available as accessories, there are devices for entertainment and security.
The LCI computer group is developing a Smartpen that electronically verifies a user's signature. With the increase in credit card use and the rise in forgeries comes the need for commercial industries to constantly verify signatures. The Smartpen writes like a normal pen but uses sensors to detect the motion of the pen as users sign their name, authenticating the signature. This computational accessory should be available in 1999 and would bring increased peace of mind to consumers and vendors alike. In the entertainment domain, Panasonic is creating the first portable hand-held DVD player. This device weighs less than 3 pounds and has a screen about 6 inches across. The color LCD has the same 16:9 aspect ratio as a cinema screen and supports a high resolution of 280,000 pixels and stereo sound. The player can play standard DVD movies and has an hour of battery life for mobile use. To summarize, in this paper we presented concepts related to the design and use of wearable computers, with extensions to smart spaces. For some time, researchers in telerobotics have used computer graphics to enhance remote scenes. Recent advances in augmented reality displays make it possible to enhance the user's local environment with 'information'. As shown in this paper, there are many application areas for this technology, such as medicine, manufacturing, training, and recreation. Wearable computers allow a much closer association of information with the user. By embedding sensors in the wearable to allow it to see what the user sees, hear what the user hears, sense the user's physical state, and analyze what the user is typing, an intelligent agent may be able to analyze what the user is doing and try to predict the resources he or she will need next or in the near future. Using this information, the agent may download files, reserve communications bandwidth, post reminders, or automatically send updates to colleagues to help facilitate the user's daily interactions. This intelligent wearable computer would be able to act as a personal assistant who is always around, knows the user's personal preferences and tastes, and tries to streamline interactions with the rest of the world.


The Changes of Dietary Reference Intakes for Koreans and Its Application to the New Text Book (한국인 영양섭취기준에 대한 이해 및 새 교과서에의 적용 방안)

  • Kim, Jung-Hyun;Lee, Min-June
    • Journal of Korean Home Economics Education Association
    • /
    • v.20 no.2
    • /
    • pp.75-94
    • /
    • 2008
  • The purposes of this paper are to describe the newly established reference values of nutrient intakes; to apply the changed dietary reference intakes to the new textbook based on the revised curriculum; and to contrive substantial contents for the dietary life (foods & nutrition) domain of the new textbook. The Dietary Reference Intakes for Koreans (KDRIs) are newly established reference values of nutrient intake considered necessary to maintain the health of Koreans at an optimal state and to prevent chronic diseases and overnutrition. Unlike the previously used Recommended Dietary Allowances for Koreans (KRDA), which presented a single reference value for the intake of each nutrient, multiple reference values are set for each nutrient, at levels that reduce the risk of chronic disease and toxicity as well as prevent nutrient deficiency. The new KDRIs include the Estimated Average Requirement (EAR), Recommended Intake (RI), Adequate Intake (AI), and Tolerable Upper Intake Level (UL). The EAR is the daily nutrient intake estimated to meet the requirement of half of the apparently healthy individuals in a target group, and thus is set at the median of the requirement distribution. The RI is set at two standard deviations above the EAR. The AI is established for nutrients for which the existing body of knowledge is inadequate to establish an EAR and RI. The UL is the highest level of daily nutrient intake that is unlikely to cause adverse health effects. Age and gender subgroups are established in consideration of physiological characteristics and developmental stages: infancy, toddlerhood, childhood, adolescence, adulthood, and old age. Pregnancy and lactation are considered separately, and the sexes are distinguished after early childhood. Reference heights and weights are taken from the Korean Agency for Technology and Standards, Ministry of Commerce, Industry and Energy. The practical application of the DRIs in the new textbooks based on the revised 7th curriculum is to assess dietary and nutrient intake as well as to plan meals. They can be used to set appropriate nutrient goals for the diet as usually eaten and to develop eating plans using a nutrient-based food guidance system.
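The abstract's rule for deriving the RI from the EAR can be written compactly. Assuming the nutrient requirement is approximately normally distributed with standard deviation $SD$, an intake at the RI covers roughly 97~98% of healthy individuals in the group:

```latex
RI = EAR + 2\,SD
```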


Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.99-118
    • /
    • 2015
  • Object-oriented programming languages have been widely selected for developing modern information systems. The use of concepts relating to object-oriented (OO, in short) programming has reduced the effort of reusing pre-existing code, and the OO concepts have proved to be useful in interpreting system requirements. In line with this, we have witnessed that modern conceptual modeling approaches support features of object-oriented programming. The Unified Modeling Language, or UML, has become one of the de-facto standards for information system designers, since the language provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider the relationships between classes. Based on an explicit and clear representation of classes, the conceptual model from UML gathers the necessary attributes and methods for guiding software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. The representation of part-whole relationships is natural in real-world domains, since many physical objects are perceived in part-whole terms. In addition, even abstract concepts such as roles are easily identified through part-whole perception. It seems that the representation of part-whole in UML is reasonable and useful. However, it should be admitted that the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it as an aggregate or a composite association. Research efforts to develop such procedural knowledge are meaningful and timely, in that a misleading perception of a part-whole relationship is hard to filter out in initial conceptual modeling and thus degrades system usability. The current method of identifying and classifying part-whole relationships mainly relies on linguistic expressions. This simple approach is rooted in the idea that a has-a phrase constructs a part-whole perception between objects. If the relationship is strong, the association is classified as a composite association of the part-whole relationship; in other cases, the relationship is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships; the approach is therefore reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not yet accumulated sufficient results to solve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on the theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes were developed. To evaluate the performance of the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared to traditional approaches that aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the modeling process.
Traditional theories on the evaluation of part-whole relationships in the context of conceptual modeling aim to rule out incomplete or wrong representations. Such qualification remains important, but the lack of a practical alternative may reduce the usefulness of post-hoc inspection for modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. In addition, it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of guidelines, including plugins for integrated development environments.
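To make the aggregation/composition distinction concrete, here is a minimal, hypothetical Python sketch (the class names are invented for illustration and are not from the paper): in a composite association the whole creates and controls the lifetime of its parts, while in an aggregate association the parts exist independently of the whole.

```python
class Engine:
    """A part whose lifetime is bound to its whole (composition)."""
    def __init__(self, power_kw: int):
        self.power_kw = power_kw

class Car:
    def __init__(self, power_kw: int):
        # Composition: the Car creates and exclusively owns its Engine;
        # when the Car is discarded, the Engine goes with it.
        self.engine = Engine(power_kw)

class Student:
    def __init__(self, name: str):
        self.name = name

class Course:
    def __init__(self, title: str):
        self.title = title
        self.students: list[Student] = []

    def enroll(self, student: Student) -> None:
        # Aggregation: the Course merely references Students created
        # elsewhere, and they keep existing if the Course is deleted.
        self.students.append(student)

alice = Student("Alice")
modeling = Course("Conceptual Modeling")
modeling.enroll(alice)   # alice outlives the course; the car's engine does not
```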

Growth Responses of Potted Gerbera 'Sunny Lemon' under Non-Nutrient Solution Recycling System by Media and Nutrient Contents (비순환식 분화 양액재배시 배지와 양액함량에 따른 거베라 'Sunny Lemon'의 생육반응)

  • Kil, Mi-Jung;Shim, Myung-Sun;Park, Sang-Kun;Shin, Hak-Gi;Jung, Jae-A;Kwon, Young-Soon
    • Journal of agriculture & life science
    • /
    • v.45 no.6
    • /
    • pp.73-80
    • /
    • 2011
  • To investigate the plant growth and flower quality of gerbera 'Sunny Lemon' according to the amount of nutrient solution, young 'Sunny Lemon' seedlings were transplanted into rockwool and into a medium of peat moss and perlite mixed 1:2, and were acclimatized in a greenhouse for about one month. The nutrient solution supplied to the plants was Sonneveld solution at half concentration, and treatments began on June 24, 2010, when the average plant height was $20{\pm}1cm$. The media nutrient contents used as irrigation set points, measured by time domain reflectometry (TDR), were 60-65%, 70-75%, and 80-85%. During vegetative growth, plant height, leaf width, and leaf number were about 10% greater in rockwool, but the differences were not significant. As for plant growth depending on nutrient content, the 80-85% treatment showed the highest values: leaf number increased by 60%, and leaf width and plant height increased by about 40% over initial growth. Flower quality, yield, and days to flowering were superior at the highest media nutrient content. In particular, average days to flowering in the 80-85% treatment were advanced by 7-10 days compared to the 60-65% treatment. The total amount of nutrient supplied per plant was higher in the mixed medium than in rockwool, but the patterns of change in EC and pH were more favorable in rockwool. Based on these results, we recommend maintaining the nutrient content of the mixed medium at 80-85% for better growth, cut-flower quality, and yield of gerbera 'Sunny Lemon'.

Improvement of Personal Information Protection Laws in the era of the 4th industrial revolution (4차 산업혁명 시대의 개인정보보호법제 개선방안)

  • Choi, Kyoung-jin
    • Journal of Legislation Research
    • /
    • no.53
    • /
    • pp.177-211
    • /
    • 2017
  • In the course of the emergence and development of new ICT technologies and services such as Big Data, the Internet of Things, and Artificial Intelligence, the innovations of the Fourth Industrial Revolution will change the future, and that future will be a data-based society and economy. Since personal information lies at its center, the development of the economy through the utilization of personal information will depend on how the personal information protection laws are shaped. In Korea, which is trying to lead the Fourth Industrial Revolution, the use of personal information is a legal interest that cannot be given up, and the protection of individuals' personal information is likewise an important legal interest that cannot be abandoned. Therefore, the law on personal information protection must be changed in a rational way that harmonizes the two. In this regard, this article discusses the duplication and incompatibility of the personal information protection laws, the scope of their application and the uncertainty of their judgment standards, the lack of flexibility in responding to demands for the reasonable use of personal information, and the problem of reverse discrimination against the domestic sector compared to the regulatory blind spots enjoyed abroad. In order to solve these problems and to improve personal information protection legislation for the era of the Fourth Industrial Revolution, we propose considering both personal information protection and safe use by revising the purpose and regulatory direction of the personal information protection law. Balance and harmony between the systematic maintenance of the personal information protection legislation and related laws and regulations are also set as important directions. It is pointed out that establishing rational judgment criteria, and legislative review to clarify them, are necessary for the constantly controversial definition of personal information and for the treatment of anonymized information as an intermediate domain. In addition to legislative review for the legitimate and non-invasive use of personal information, the collective consent system for collecting personal information needs to be improved to differentiate its subjects, and legislation should be improved to ensure the effectiveness of regulation on the cross-border transfer of personal information. Beyond the issues discussed in this article, there may be a number of further challenges, but overall, the protection and use of personal information should be harmonized along the directions indicated above.

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived people's interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a lot of effort. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first setting, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of an image, and activation features are extracted from specific layers. In the second, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still challenging. We observe that features extracted from multiple ConvNet layers capture different characteristics of an image, which means a better representation could be obtained by finding the optimal combination of multiple layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Overall, our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, since it carries more information about an image; concatenating the three fully connected layer features yields a 9192-dimensional (4096+4096+1000) representation. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-layer representation. Moreover, our approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
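The fixed-feature-extractor pipeline described above (AlexNet FC-layer activations, concatenation, then PCA) could look roughly like the following sketch. torchvision's pre-trained AlexNet stands in for the authors' network; the `images` and `labels` tensors, the PCA dimension, and the downstream classifier are hypothetical, and whether activations are taken before or after the ReLU is a detail the abstract does not pin down (pre-activation Linear outputs are used here).

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# Pre-trained AlexNet used purely as a fixed feature extractor (no fine-tuning).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()

@torch.no_grad()
def multilayer_features(batch: torch.Tensor) -> torch.Tensor:
    """Concatenate FC6 (4096-d), FC7 (4096-d), and FC8 (1000-d) activations
    into one 9192-d representation per image."""
    h = alexnet.avgpool(alexnet.features(batch)).flatten(1)   # 9216-d conv output
    fc6 = alexnet.classifier[1](h)                            # Linear -> 4096
    fc7 = alexnet.classifier[4](torch.relu(fc6))              # Linear -> 4096
    fc8 = alexnet.classifier[6](torch.relu(fc7))              # Linear -> 1000
    return torch.cat([fc6, fc7, fc8], dim=1)                  # 9192-d

# Hypothetical batch of preprocessed 224x224 RGB images with target-task labels.
images = torch.randn(32, 3, 224, 224)
labels = np.random.default_rng(0).integers(0, 10, size=32)

X = multilayer_features(images).numpy()
X_salient = PCA(n_components=16).fit_transform(X)   # keep salient directions
clf = LinearSVC().fit(X_salient, labels)            # simple downstream classifier
```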

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, and proximity sensor, there has been much research on using these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, and analysis of lifestyles and exercise patterns. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity recognizer or classifier, because it is hard to distinguish between subtly different activities relying only on limited information. The difficulty becomes especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using data from only a single sensor, the smartphone accelerometer. The approach we take to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that makes multi-class predictions. Depending on how the classes are split into two subsets at each node, the resulting tree can differ. Since some classes may be correlated, a particular tree may perform better than the others, but we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each with a different random subset of features, on a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys more diverse ensemble members than simple bagging. Overall, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that handles a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers. Of the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with some similar activities, END classified all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
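Scikit-learn has no built-in END implementation, so the following is a hypothetical minimal sketch of the technique described above: one randomly built nested-dichotomy tree with random-forest base classifiers, plus an ensemble that averages the class probabilities of several such trees. Hyperparameters and the synthetic window features are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class NestedDichotomy:
    """One random nested-dichotomy tree: every internal node splits the
    remaining classes into two random subsets and trains a binary
    random forest to tell the subsets apart."""

    def __init__(self, rng: np.random.Generator):
        self.rng = rng

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.root_ = self._build(X, y, list(self.classes_))
        return self

    def _build(self, X, y, classes):
        if len(classes) == 1:
            return {"leaf": classes[0]}
        perm = self.rng.permutation(np.asarray(classes))
        cut = int(self.rng.integers(1, len(classes)))     # random split point
        left = set(perm[:cut].tolist())
        is_left = np.isin(y, list(left)).astype(int)      # 1 = left subset
        clf = RandomForestClassifier(n_estimators=50).fit(X, is_left)
        m = is_left == 1
        return {"clf": clf,
                "left": self._build(X[m], y[m], sorted(left)),
                "right": self._build(X[~m], y[~m], sorted(set(classes) - left))}

    def predict_proba(self, X):
        proba = np.zeros((len(X), len(self.classes_)))
        idx = {c: i for i, c in enumerate(self.classes_)}
        for r, x in enumerate(X):
            self._descend(x.reshape(1, -1), self.root_, 1.0, proba[r], idx)
        return proba

    def _descend(self, x, node, p, out, idx):
        if "leaf" in node:
            out[idx[node["leaf"]]] = p
            return
        # Column 1 of predict_proba is P(left subset) since labels are 0/1.
        p_left = node["clf"].predict_proba(x)[0, 1]
        self._descend(x, node["left"], p * p_left, out, idx)
        self._descend(x, node["right"], p * (1.0 - p_left), out, idx)

# The END ensemble averages the probabilities of several random trees.
def end_predict(trees, X):
    avg = np.mean([t.predict_proba(X) for t in trees], axis=0)
    return trees[0].classes_[np.argmax(avg, axis=1)]

rng = np.random.default_rng(7)
X_train = rng.standard_normal((200, 4))     # hypothetical window features
y_train = rng.integers(0, 10, 200)          # ten activity classes, ids 0..9
trees = [NestedDichotomy(rng).fit(X_train, y_train) for _ in range(10)]
print(end_predict(trees, X_train[:5]))
```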