• Title/Abstract/Keyword: Binary Classification


Study of Adversarial Attack and Defense Deep Learning Model for Autonomous Driving (자율주행을 위한 적대적 공격 및 방어 딥러닝 모델 연구)

  • Kim, Chae-Hyeon;Lee, Jin-Kyu;Jung, Eun;Jung, Jae-Ho;Lee, Hyun-Jung;Lee, Gyu-Young
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.803-805 / 2022
  • As the era of autonomous driving arrives, the risk of adversarial attacks on deep learning models is increasing as well. If a camera-based autonomous vehicle is attacked, misclassification of pedestrians or traffic signs can lead to serious accidents, so research on defense and security techniques against adversarial attacks in autonomous driving systems is essential. In this paper, we develop and propose various attack and defense techniques using the GTSRB traffic sign dataset. By comparing their performance in terms of time and accuracy, we explore the model best suited to autonomous driving and further suggest directions for developing these models toward fully autonomous driving.
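
The abstract does not name the specific attack and defense techniques developed; as a hedged illustration, the minimal PyTorch sketch below implements FGSM (fast gradient sign method), one of the most common adversarial attacks against image classifiers such as GTSRB sign recognizers. The `model` and the 0.03 perturbation budget are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Generate FGSM adversarial examples: x' = x + eps * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()  # keep a valid pixel range
```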

A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim Eun-Mi;Park Seong-Mi;Kim Kwang-Hee;Lee Bae-Ho
    • Journal of KIISE:Software and Applications / v.32 no.11 / pp.1021-1028 / 2005
  • A general solution for classification and regression problems can be found by constructing matrices that encode real-world information and then learning these matrices in neural networks. This paper treats the primary space as the real world, and the dual space as the space to which the primary space is mapped through a kernel. In practice there are two kinds of problems: complete systems, whose answer can be obtained through a matrix inverse, and ill-posed or singular systems, whose answer cannot be obtained directly from the inverse of the given matrix. Problems are often of the latter kind; it is therefore necessary to find a regularization parameter that turns an ill-posed or singular problem into a complete system. This paper compares performance on both classification and regression problems among GCV and the L-curve, which are well-known methods for obtaining a regularization parameter, and kernel methods. Both GCV and the L-curve obtain regularization parameters well, and their performances are similar, although they give slightly different results under different problem conditions. However, these methods are two-step solutions: the regularization parameter must first be computed, and only then can the problem be handed to another solver. Compared with GCV and the L-curve, kernel methods are a one-step solution that learns the regularization parameter simultaneously with the pattern weights. This paper also suggests a dynamic momentum, learned under a proportionality constraint between the learning epoch and the problem's performance, to increase the performance and precision of the regularization. Finally, the paper shows that the suggested solution achieves better or equivalent results compared with GCV and the L-curve in experiments on the Iris data, a standard benchmark for classification; Gaussian data, typical of singular systems; and the Shaw data, a one-dimensional image restoration problem.
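
As a hedged illustration of the two-step approach the abstract contrasts with kernel methods, the sketch below selects a Tikhonov regularization parameter for a linear system Ax = b by minimizing the GCV score; the SVD-based evaluation and the choice of lambda grid are my assumptions, not details from the paper.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Choose the Tikhonov parameter minimizing the GCV score
    GCV(lam) = n * ||A x_lam - b||^2 / trace(I - H(lam))^2,
    evaluated cheaply through the SVD A = U diag(s) V^T
    (penalty lam^2 * ||x||^2, so filter factors are s^2/(s^2+lam^2))."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    resid0 = b @ b - beta @ beta          # part of b outside range(A)
    n = b.size
    best_lam, best_score, best_x = None, np.inf, None
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)        # Tikhonov filter factors
        resid = resid0 + np.sum(((1 - f) * beta) ** 2)
        score = n * resid / (n - np.sum(f)) ** 2
        if score < best_score:
            best_x = Vt.T @ ((s / (s**2 + lam**2)) * beta)
            best_lam, best_score = lam, score
    return best_lam, best_x

# Usage sketch: lam, x = gcv_tikhonov(A, b, np.logspace(-6, 2, 50))
```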

Automatic Extraction of the Land Readjustment Paddy for High-level Land Cover Classification (토지 피복 세분류를 위한 경지 정리 논 자동 추출)

  • Yeom, Jun Ho;Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.5 / pp.443-450 / 2014
  • To meet the recent increase in public and private demand for various spatial data, central and local governments have started to produce such data. The low-level land cover map has been produced since 2000, but production of the high-level land cover map only began in 2010, and so far it has been completed for only a few regions. Although many studies have been carried out to improve the quality of land cover maps, most have focused on low-level and mid-level classification, so research on high-level classification is still insufficient. Therefore, in this study we propose the automatic extraction of readjusted paddy fields to update the mid-level paddy class in land cover mapping. RapidEye satellite images, considered efficient for agricultural applications, were used; high-pass filtering emphasized the outlines of the paddy fields, and binary images of the paddy outlines were generated by Otsu thresholding. The boundary information of the paddy fields was extracted through image-to-map registration and masking of the paddy land cover class. Lastly, the snapped edges were linked, linear features of the paddy outlines were extracted by regional Hough line extraction, and start and end points close to each other were connected to complete the paddy field outlines. The boundaries of the readjusted paddy fields could be extracted efficiently, and we conclude that this study contributes to the automatic production of high-level land cover maps for paddy fields.
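
A minimal sketch of the outline-extraction steps the abstract describes, using OpenCV; the kernel size and Hough thresholds below are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

img = cv2.imread("rapideye_band.tif", cv2.IMREAD_GRAYSCALE)

# High-pass filter: subtract a low-pass (blurred) version to emphasize outlines.
lowpass = cv2.GaussianBlur(img, (9, 9), 0)
highpass = cv2.subtract(img, lowpass)

# Otsu thresholding picks the binarization level automatically.
_, binary = cv2.threshold(highpass, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Probabilistic Hough transform extracts linear paddy-outline segments.
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=40, maxLineGap=10)
```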

Methanogenic Archaeal Census of Ruminal Microbiomes (반추위 마이크로바이옴 내 메탄생성고세균 조사)

  • Lee, Seul;Baek, Youlchang;Lee, Jinwook;Kim, Minseok
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.7 / pp.312-320 / 2020
  • The objective of the study was to undertake a phylogenetic diversity census of ruminal archaea based on a meta-analysis of 16S rRNA gene sequences that were publicly available in the Ribosomal Database Project. A total of 8,416 sequences were retrieved from the Ribosomal Database Project (release 11, update 5) and included in the construction of a taxonomy tree. Species-level operational taxonomic units (OTUs) were analyzed at a 97% sequence similarity by using the QIIME program. Of the 8,416 sequences, 8,412 were classified into one of three phyla; however, the remaining four sequences could not be classified into a known phylum. The Euryarchaeota phylum was predominant and accounted for 99.8% of the archaeal sequences examined. Among the Euryarchaeota, 65.4% were assigned to Methanobrevibacter, followed by Methanosphaera (10.4%), Methanomassillicoccus (10.4%), Methanomicrobium (7.9%), Methanobacterium (1.9%), Methanimicrococcus (0.5%), Methanosarcina (0.1%), and Methanoculleus (0.1%). The 7,544 sequences that had been trimmed to the V2 and V3 regions clustered into 493 OTUs. Only 17 of those 493 OTUs were dominant groups and accounted for more than 1% of the 7,544 sequences. These results can help guide future research into the dominant ruminal methanogens that significantly contribute to methane emissions from ruminants, research that may lead to the development of anti-methanogenic compounds that inhibit these methanogens regardless of diet or animal species.
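
As a hedged sketch of the OTU-picking step, the following greedy clustering assigns each sequence to the first OTU whose representative it matches at 97% or higher similarity, the same greedy idea behind QIIME's default picker; the `difflib` ratio is a crude stand-in for a real alignment-based identity measure.

```python
from difflib import SequenceMatcher

def assign_otus(seqs, threshold=0.97):
    """Greedy OTU picking: each sequence joins the first existing OTU whose
    representative it matches at >= threshold similarity, else it seeds a
    new OTU. Returns one OTU index per input sequence."""
    reps, otus = [], []
    for seq in seqs:
        for i, rep in enumerate(reps):
            if SequenceMatcher(None, seq, rep).ratio() >= threshold:
                otus.append(i)
                break
        else:
            reps.append(seq)          # this sequence becomes a new representative
            otus.append(len(reps) - 1)
    return otus
```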

Analysis of Co-relationship between Rock Mass Grade by RMR and Estimation Method of Rock Deformation Modulus by Suggested Formulas (RMR 분류에 의한 암반등급과 제안식에 의한 암반 변형계수 추정기법의 상관관계 분석)

  • Do, Jongnam;Lee, Jinkyu;Chun, Byungsik
    • Journal of the Korean GEO-environmental Society / v.13 no.4 / pp.13-26 / 2012
  • The deformation modulus of a rock mass is a very important design factor in computing the stability of tunnels and their support systems. Several empirical formulas that estimate the deformation modulus from simple rock classification indices such as RQD or RMR are widely used, because field tests to evaluate the deformation modulus are very expensive and time consuming. However, these formulas depend on experience with the local site characteristics of each country, so there may be limitations in using them to estimate an appropriate deformation modulus in South Korea. Therefore, in this study, the applicability of the empirical formulas was analyzed by comparing estimated values with values measured at eight sites in South Korea. The results show that the values estimated from the empirical formulas tend, in part, to be overestimated. For sedimentary rocks in particular, the empirical formulas were very difficult to apply because there was no relationship between the estimated and measured values. For these reasons, additional test data and accurate analyses are necessary to establish an estimation method for the deformation modulus that considers the local characteristics of rock masses.
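
The abstract does not list the formulas it evaluates; two of the most widely cited RMR-based estimates are Bieniawski (1978) and Serafim & Pereira (1983), sketched below for illustration (moduli in GPa).

```python
def bieniawski_1978(rmr):
    """Em = 2*RMR - 100 (GPa); Bieniawski's formula, valid only for RMR > 50."""
    if rmr <= 50:
        raise ValueError("Bieniawski (1978) applies only for RMR > 50")
    return 2 * rmr - 100

def serafim_pereira_1983(rmr):
    """Em = 10**((RMR - 10) / 40) (GPa); covers the full RMR range."""
    return 10 ** ((rmr - 10) / 40)

for rmr in (35, 55, 75):
    print(f"RMR {rmr}: Serafim-Pereira Em = {serafim_pereira_1983(rmr):.1f} GPa")
```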

A Study on the Improvement of Recommended Route in the Vicinity of Wando Island using Support Vector Machine (서포트 벡터 머신을 이용한 완도 인근해역 추천항로 개선안에 관한 연구)

  • Yoo, Sang-Lok;Jung, Cho-Young
    • Journal of Navigation and Port Research / v.41 no.6 / pp.445-450 / 2017
  • It is necessary to set routes that reflect the traffic flow for the safety of transiting vessels, and ongoing analysis is needed to ensure that vessels comply with those routes. The purpose of this study is to identify problems with the recommended route in the vicinity of Wando Harbor and to suggest an improvement plan. We used a support vector machine on ship trajectories to establish an efficient route centerline. Since vessels must navigate on the starboard side of the centerline of the recommended route, the trajectories were divided into two clusters; the support vector machine, widely used in fields such as pattern recognition, is effective for this binary classification. The study found that about 79.5% of eastbound merchant ships within 2.4 NM of Jangjuk Sudo did not observe the recommended route, so a risk of collision was always present. When the recommended route was reset about 300 meters north of its present position, the contraflow traffic rate of eastbound ships decreased from 79.5% to 30.9%. The support vector machine applied in this study is expected to be effective for setting route centerlines in general, because ship trajectories can be classified into two clusters.
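
A hedged sketch of the idea: fit a linear SVM to the two trajectory clusters and take its maximum-margin decision boundary as the route centerline. The synthetic points below stand in for real AIS trajectory data, which the paper uses.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for AIS positions: eastbound vs. westbound tracks.
east = rng.normal([0.0, -0.5], 0.2, size=(200, 2))
west = rng.normal([0.0, +0.5], 0.2, size=(200, 2))
X = np.vstack([east, west])
y = np.array([0] * 200 + [1] * 200)

svm = SVC(kernel="linear").fit(X, y)
w, b = svm.coef_[0], svm.intercept_[0]
# The maximum-margin separating line w.x + b = 0 serves as the centerline.
print(f"centerline: {w[0]:.3f}*x + {w[1]:.3f}*y + {b:.3f} = 0")
```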

Research on Safety and Quality Regulatory Policy for Assistive Products (보조기기 안전·품질관리 방안 연구)

  • Kim, Hye-Won;Kim, Dong-A;Seo, Won-San;Kim, Jang-Hwan;Ko, Myeong Han;Son, Byung-Chang;Yi, JinBok
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.12 / pp.805-813 / 2018
  • This research was conducted to provide an effective safety and quality control system for the widely used assistive products for persons with disabilities. Assistive products could not be classified independently because of conflicts with the Medical Devices Act and a lack of legal basis, so safety and quality issues have been handled under other legal frameworks on a case-by-case basis, and we could not find any foreign case of an independent safety and quality control policy. As a practical solution, this article suggests a hybrid classification system that combines existing policies. Each classified branch is allocated to an appropriate safety and quality control policy, making the system easy to understand and predictable, and a detailed screening process is also suggested so that no assistive product is left out. Through these improvements, it is expected that the blind spots in the safety and quality control of assistive products can be eliminated and the identity of assistive products established, providing product safety for persons with disabilities and boosting the related industries.

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using data from only a single sensor, the smartphone accelerometer. The approach we take to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how the set of classes is split at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of a dichotomy, we have used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features on a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can handle a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 seconds for 2 minutes per activity from 5 volunteers. Of the 5,900 (= 5 × (60 × 2 − 2) / 0.1) samples collected for each activity (the data for the first 2 seconds are discarded because they lack a full time window), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%, whereas the accuracies achieved by a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
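
A minimal sketch of a nested-dichotomy ensemble, assuming scikit-learn random forests as base classifiers; the random class splits follow the END idea described above, but the names and parameters here are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class NestedDichotomy:
    """One random dichotomy tree: recursively split the class set in two,
    training a binary random forest at each internal node."""
    def __init__(self, classes, rng):
        self.classes = list(classes)
        if len(self.classes) > 1:
            rng.shuffle(self.classes)
            k = rng.integers(1, len(self.classes))  # both sides non-empty
            self.left = NestedDichotomy(self.classes[:k], rng)
            self.right = NestedDichotomy(self.classes[k:], rng)
            self.clf = RandomForestClassifier(n_estimators=50)

    def fit(self, X, y):
        if len(self.classes) == 1:
            return self
        mask = np.isin(y, self.classes)
        side = np.isin(y[mask], self.left.classes).astype(int)  # 1 = left
        self.clf.fit(X[mask], side)
        self.left.fit(X, y)
        self.right.fit(X, y)
        return self

    def predict_proba(self, X, cls):
        """P(cls | x) = product of branch probabilities along cls's path."""
        if len(self.classes) == 1:
            return np.ones(len(X))
        p_left = self.clf.predict_proba(X)[:, 1]
        if cls in self.left.classes:
            return p_left * self.left.predict_proba(X, cls)
        return (1 - p_left) * self.right.predict_proba(X, cls)

def end_predict(trees, X, classes):
    """Average class probabilities over the committee of dichotomy trees."""
    probs = np.array([[t.predict_proba(X, c) for c in classes] for t in trees])
    return np.array(classes)[probs.mean(axis=0).argmax(axis=0)]

# Usage sketch: a committee of 20 random dichotomy trees over 10 classes.
# trees = [NestedDichotomy(range(10), np.random.default_rng(s)).fit(X, y)
#          for s in range(20)]
```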

A Study on the Distribution Map Construction of Asbestos Buildings Owned by Seoul Using QGIS (QGIS를 활용한 서울시 소유 석면건축물 분포지도 제작에 관한 연구)

  • Lee, Jin Hyo;Bae, Il Sang;Ha, Kwang Tae;You, Seung Sung;Han, Kyu Mun;Eo, Soo Mi;Jung, Kweon;Lee, Jin Sook;Koo, Ja Yong
    • Journal of Korean Society of Environmental Engineers / v.38 no.9 / pp.528-533 / 2016
  • One way to maintain asbestos-containing buildings effectively is to select the buildings to be remediated first by producing and analyzing thematic asbestos maps. In this study we therefore produced thematic asbestos maps for the effective management of asbestos buildings owned by Seoul using QGIS (Quantum Geographic Information System). To select asbestos buildings likely to cause scattering and airborne exposure, we comprehensively considered themes such as asbestos building density, asbestos-area ratio, asbestos building distribution weighted by population, first-removal targets, risk assessment, and elapsed years. As described in this study, GIS can be used not only to map the distribution of asbestos buildings but also to select the buildings to be remediated first. In the future, assessment criteria should account for the diversity of attribute values in GIS, such as the characteristics of the living environment around asbestos buildings. These results are expected to be used to manage regions vulnerable to asbestos exposure.
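
A hedged sketch of one such thematic layer, assuming geopandas rather than QGIS itself; the file names and the `asbestos_area`, `floor_area`, and `district_id` columns are hypothetical.

```python
import geopandas as gpd

# Hypothetical inputs: building polygons with asbestos_area / floor_area
# attributes, and district polygons to aggregate over.
buildings = gpd.read_file("asbestos_buildings.shp")
districts = gpd.read_file("seoul_districts.shp")

# Attach each building to its district, then aggregate the area ratio.
joined = gpd.sjoin(buildings, districts, how="inner", predicate="within")
ratio = (joined.groupby("district_id")
               .apply(lambda g: g["asbestos_area"].sum() / g["floor_area"].sum()))
districts["asbestos_ratio"] = districts["district_id"].map(ratio)

# Choropleth: darker districts have a higher asbestos-area ratio.
districts.plot(column="asbestos_ratio", cmap="OrRd", legend=True)
```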

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn;Zang, Byoung-Tak
    • Journal of KIISE:Software and Applications / v.29 no.4 / pp.270-277 / 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems by using machine learning techniques. We apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses, each of which decides the relevance of a document to a topic on the basis of whether the document contains a certain word. We begin with an existing AdaBoost algorithm whose weak hypotheses output 1 or -1, and then extend it to use weak hypotheses with real-valued outputs, as recently proposed to improve error reduction rates and final filtering performance. Next, we attempt to improve AdaBoost's performance further by setting the initial weights randomly according to a continuous Poisson distribution, running AdaBoost, repeating these steps several times, and then combining all the hypotheses learned. This mitigates the overfitting that may occur when learning from a small amount of data. Experiments were performed on the real document collections used in TREC-8, a well-established text retrieval contest; this dataset includes Financial Times articles from 1992 to 1994. The experimental results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. Comparative results for all participants in the TREC-8 filtering task are also provided.
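
A hedged sketch of the baseline setup: discrete AdaBoost whose weak hypotheses are word-presence stumps with outputs 1 or -1; the real-valued and randomly re-weighted variants the paper studies are not reproduced here.

```python
import numpy as np

def adaboost_word_stumps(docs, labels, vocab, rounds=50):
    """Discrete AdaBoost over word-presence stumps.
    docs: iterable of token sets; labels: array of +1/-1.
    Each weak hypothesis is h(d) = pol * (+1 if word in d else -1)."""
    n = len(docs)
    w = np.full(n, 1.0 / n)                      # example weights
    y = np.asarray(labels)
    present = {t: np.array([1 if t in d else -1 for d in docs]) for t in vocab}
    ensemble = []                                # (alpha, word, polarity)
    for _ in range(rounds):
        # Pick the (word, polarity) pair with the lowest weighted error.
        word, pol, err = min(
            ((t, p, np.sum(w[(p * present[t]) != y]))
             for t in vocab for p in (1, -1)),
            key=lambda z: z[2])
        err = max(err, 1e-10)                    # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)    # hypothesis weight
        ensemble.append((alpha, word, pol))
        w *= np.exp(-alpha * y * (pol * present[word]))
        w /= w.sum()                             # re-normalize
    return ensemble

def classify(ensemble, doc):
    """Weighted vote of the stumps: relevant (+1) or not (-1)."""
    score = sum(a * p * (1 if t in doc else -1) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```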