• Title/Summary/Keyword: Recall and Precision


Chatting Pattern Based Game BOT Detection: Do They Talk Like Us?

  • Kang, Ah Reum;Kim, Huy Kang;Woo, Jiyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.11 / pp.2866-2879 / 2012
  • Among the various security threats in online games, the use of game bots is the most serious problem. Previous studies on game bot detection have proposed many methods to find discriminable behaviors of bots from humans, based on the fact that a bot's playing pattern is different from that of a human. In this paper, we look at chatting data that reflects gamers' communication patterns and propose a communication pattern analysis framework for online game bot detection. In massively multiplayer online role-playing games (MMORPGs), game bots use chatting messages in a different way from normal users. We derive four features: a network feature, a descriptive feature, a diversity feature and a text feature. To measure the diversity of communication patterns, we propose lightly summarized indices, which are computationally inexpensive and intuitive. For the text features, we derive lexical, syntactic and semantic features from chatting contents using text mining techniques. To build the learning model for game bot detection, we test and compare three classification models: random forest, logistic regression and lazy learning. We apply the proposed framework to AION, operated by NCsoft, a leading online game company in Korea. Our experiments show that the random forest outperforms logistic regression and lazy learning. The model that employs the entire feature set gives the highest performance, with a precision of 0.893 and a recall of 0.965.
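
The reported numbers are standard binary-classification metrics. Below is a minimal sketch (not the authors' code) that trains a random forest on synthetic, hypothetical chat-derived features and reports precision and recall with scikit-learn; every feature and label here is a placeholder.

```python
# Sketch: random forest + precision/recall on placeholder "chat feature" data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder feature matrix: one row per account, columns standing in for
# network / descriptive / diversity / text features.
X = rng.normal(size=(1000, 4))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 1).astype(int)  # 1 = bot

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred))
print("recall:   ", recall_score(y_te, pred))
```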

Emotion Recognition Using Color and Pattern in Textile Images (컬러와 패턴을 이용한 텍스타일 영상에서의 감정인식 시스템)

  • Shin, Yun-Hee;Kim, Young-Rae;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.6 / pp.154-161 / 2008
  • In this paper, a novel method is proposed that uses color and pattern information to recognize the emotions conveyed by a textile. Here we use 10 Kobayashi emotions to represent emotions: {romantic, clear, natural, casual, elegant, chic, dynamic, classic, dandy, modern}. The proposed system is composed of feature extraction and classification. To transform the subjective emotions into physical visual features, we extract representative colors and patterns from the textile. The representative color prototypes are extracted by a color quantization method, and the patterns are extracted by a wavelet transform followed by statistical analysis. These extracted features are given as input to neural network (NN)-based classifiers, each of which decides whether or not a textile has the corresponding emotion. We assessed the effectiveness of the proposed system with 389 textiles collected from various application domains such as interior, fashion, and artificial ones. The results showed that the proposed method has a precision of 100% and a recall of 99%, so it can be used in various textile industries.
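
As a rough illustration of the color-quantization step described above, the sketch below extracts representative color prototypes from a synthetic image with k-means; the choice of k-means and of eight prototypes is an assumption, and the wavelet/statistical pattern features are not reproduced.

```python
# Sketch: representative colors via k-means quantization on a placeholder image.
import numpy as np
from sklearn.cluster import KMeans

image = np.random.randint(0, 256, size=(64, 64, 3))     # placeholder textile image
pixels = image.reshape(-1, 3).astype(float)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pixels)
representative_colors = kmeans.cluster_centers_          # 8 prototype colors
proportions = np.bincount(kmeans.labels_, minlength=8) / len(pixels)

print(representative_colors.astype(int))                 # color prototypes
print(proportions)                                       # share of pixels per prototype
```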

Project Failure Main Factors Analysis using Text Mining in Audit Evaluation (감리결과에 텍스트마이닝 기법을 적용한 프로젝트 실패 주요요인 분석)

  • Jang, Kyoungae;Jang, Seong Yong;Kim, Woo-Je
    • Journal of KIISE / v.42 no.4 / pp.468-474 / 2015
  • Corporations should make efforts to recognize the importance of projects, identify their failure factors, prevent risks in advance, and raise their success rates, because they need to respond quickly to rapid external changes. There are some previous studies on the success and failure factors of projects; however, most of them have limitations in terms of objectivity and quantitative analysis, since they rely on data gathered through surveys, statistical sampling and analysis. This study analyzes the failure factors of projects based on data mining, to find problems with projects in an audit report, which is an objective project evaluation report. To do this, we identified the texts in the paragraphs of suggestions for improvement. We made use of the superior classification algorithms NaiveBayes, SMO and J48, which were evaluated in terms of recall and precision after performing 10-fold cross-validation. In the identified texts, the failure factors of projects were analyzed so that they could be utilized in project implementation.
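
The evaluation protocol above (10-fold cross-validation reporting recall and precision) can be sketched as follows. The paper used Weka's NaiveBayes, SMO and J48; the scikit-learn classifiers below are only rough analogues, and the toy data is synthetic.

```python
# Sketch: 10-fold cross-validation with precision/recall for three classifiers.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # toy data
for name, clf in [("NaiveBayes", GaussianNB()),
                  ("SMO (SVM analogue)", LinearSVC()),
                  ("J48 (decision tree analogue)", DecisionTreeClassifier())]:
    scores = cross_validate(clf, X, y, cv=10, scoring=("precision", "recall"))
    print(name,
          "precision=%.3f" % scores["test_precision"].mean(),
          "recall=%.3f" % scores["test_recall"].mean())
```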

Development and Evaluation of D-Attention Unet Model Using 3D and Continuous Visual Context for Needle Detection in Continuous Ultrasound Images (연속 초음파영상에서의 바늘 검출을 위한 3D와 연속 영상문맥을 활용한 D-Attention Unet 모델 개발 및 평가)

  • Lee, So Hee;Kim, Jong Un;Lee, Su Yeol;Ryu, Jeong Won;Choi, Dong Hyuk;Tae, Ki Sik
    • Journal of Biomedical Engineering Research / v.41 no.5 / pp.195-202 / 2020
  • Needle detection in ultrasound images is sometimes difficult due to obstruction by fat tissues. Accurate needle detection using continuous ultrasound (CUS) images is a vital stage of treatment planning for tissue biopsy and brachytherapy. The goals of the study are twofold. First, a new detection model, D-Attention Unet, is developed by combining the context information of 3D medical data and CUS images. Second, the D-Attention Unet model is compared with other models to verify its usefulness for needle detection in continuous ultrasound images. The continuous needle images taken with ultrasound were converted into still images to build a dataset for evaluating the performance of the D-Attention Unet. The dataset was used for training and testing. Based on the results, the proposed D-Attention Unet model showed better performance than the other three models (Unet, D-Unet and Attention Unet), with a Dice Similarity Coefficient (DSC), recall and precision of 71.9%, 70.6% and 73.7%, respectively. In conclusion, the D-Attention Unet model provides accurate needle detection for US-guided biopsy or brachytherapy, facilitating the clinical workflow. In particular, research on combining image processing techniques with learning techniques is being actively pursued; applied in this manner, the proposed method can be more effective than previous techniques.
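
For reference, the three reported segmentation metrics (DSC, recall, precision) can be computed from binary masks as in the sketch below; the masks are placeholders, not data from the paper.

```python
# Sketch: Dice similarity coefficient, recall, precision from binary masks.
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # needle pixels correctly detected
    fp = np.logical_and(pred, ~gt).sum()       # false needle pixels
    fn = np.logical_and(~pred, gt).sum()       # missed needle pixels
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    recall = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    return dice, recall, precision

pred = np.zeros((128, 128), dtype=np.uint8); pred[60:70, 20:100] = 1   # predicted needle
gt   = np.zeros((128, 128), dtype=np.uint8); gt[62:70, 25:105] = 1     # ground-truth needle
print("DSC=%.3f recall=%.3f precision=%.3f" % segmentation_metrics(pred, gt))
```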

Image Retrieval based on Color-Spatial Features using Quadtree and Texture Information Extracted from Object MBR (Quadtree를 사용한 색상-공간 특징과 객체 MBR의 질감 정보를 이용한 영상 검색)

  • 최창규;류상률;김승호
    • Journal of KIISE:Computing Practices and Letters / v.8 no.6 / pp.692-704 / 2002
  • In this paper, we present an image retrieval method based on color-spatial features using a quadtree and texture information extracted from object MBRs in an image. The proposed method consists of creating a DC image from the original image, changing the color coordinate system, and decomposing regions using a quadtree. Under the given decomposition conditions, the DC image is decomposed and the system extracts representative colors from each region. In addition, image segmentation is used to find object MBRs, covering the objects themselves, objects included in the background, or certain background regions, and wavelet coefficients are then calculated to provide texture information. Experiments were conducted using the proposed similarity method based on color-spatial and texture features. Our method was able to reduce the amount of feature vector storage by about 53%, while remaining similar to using the original image in terms of precision and recall. Furthermore, to make up for the deficiency of using only color-spatial features, texture information was added, and the results showed images that included objects from the query images.
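
A minimal sketch of the quadtree decomposition idea described above: a region is recursively split into four quadrants until its color variance falls below a threshold, and a representative (mean) color is kept per leaf. The variance threshold, minimum block size and synthetic DC image are assumptions; the paper's exact decomposition conditions may differ.

```python
# Sketch: variance-driven quadtree decomposition with per-region representative colors.
import numpy as np

def quadtree(img, x, y, w, h, threshold=200.0, min_size=4, leaves=None):
    if leaves is None:
        leaves = []
    block = img[y:y + h, x:x + w]
    # Stop splitting when the region is color-homogeneous enough or too small.
    if block.var(axis=(0, 1)).mean() <= threshold or w <= min_size or h <= min_size:
        leaves.append(((x, y, w, h), block.mean(axis=(0, 1))))  # region + mean color
        return leaves
    hw, hh = w // 2, h // 2
    for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):         # four quadrants
        quadtree(img, x + dx, y + dy, hw, hh, threshold, min_size, leaves)
    return leaves

dc_image = np.random.randint(0, 256, size=(64, 64, 3)).astype(float)  # placeholder DC image
regions = quadtree(dc_image, 0, 0, 64, 64)
print(len(regions), "regions; first representative color:", regions[0][1].astype(int))
```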

Detection of Video Cut Using Autocorrelation Function and Edge Histogram (자기상관과 에지 히스토그램을 이용한 동영상 전환점 검출)

  • Noh, Jung-Jin;Moon, Young-Ho;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.9C / pp.1269-1278 / 2004
  • While the management of digital contents is becoming more and more important, many researchers have studied scene change detection algorithms to reduce similar scenes in video contents and to efficiently summarize video data. Algorithms using histogram and pixel information have been found to be sensitive to lighting changes and motion. Therefore, visual rhythm has been used in recent work to solve this problem, since it captures some characteristics of scenes and requires even less computational power. In this paper, a new scene change detection algorithm using visual rhythm by direction is proposed. The proposed algorithm needs less computational power and is able to keep good performance even in scenes with motion. Experimental results show a performance improvement of about 30% compared with conventional histogram-based methods. They also show that the proposed algorithm maintains the same performance even for music video contents with lots of motion.
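
As a loose illustration of histogram-based cut detection, the sketch below thresholds frame-to-frame differences of a simple edge histogram. It does not reproduce the paper's visual-rhythm sampling or autocorrelation step, and the frames and threshold are placeholders.

```python
# Sketch: shot-cut candidates from edge-histogram differences between frames.
import numpy as np

def edge_histogram(frame, bins=16):
    gx = np.abs(np.diff(frame.astype(float), axis=1)).ravel()  # horizontal gradients
    hist, _ = np.histogram(gx, bins=bins, range=(0, 255), density=True)
    return hist

def detect_cuts(frames, threshold=0.05):
    hists = [edge_histogram(f) for f in frames]
    diffs = [np.abs(hists[i] - hists[i - 1]).sum() for i in range(1, len(hists))]
    return [i for i, d in enumerate(diffs, start=1) if d > threshold]

# Five flat frames followed by five noisy frames: a "cut" at frame 5.
frames = [np.full((48, 64), 40, dtype=np.uint8)] * 5 + \
         [np.random.randint(0, 256, (48, 64), dtype=np.uint8) for _ in range(5)]
print("cut candidates at frames:", detect_cuts(frames))
```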

Korean Base-Noun Extraction and its Application (한국어 기준명사 추출 및 그 응용)

  • Kim, Jae-Hoon
    • The KIPS Transactions:PartB / v.15B no.6 / pp.613-620 / 2008
  • Noun extraction plays an important part in fields such as information retrieval and text summarization. In this paper, we present a Korean base-noun extraction system and apply it to text summarization to deal with a huge amount of text effectively. A base-noun is an atomic noun, not a compound noun, and we use two techniques: filtering and segmenting. The filtering technique is used for removing non-nominal words from text before extracting base-nouns, and the segmenting technique is employed for separating a particle from a nominal and for dividing a compound noun into base-nouns. We have shown that both the recall and the precision of the proposed system are about 89% on average under experimental conditions using the ETRI corpus. The proposed system has been applied to a Korean text summarization system and shows satisfactory results.
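
The segmenting step described above can be sketched as follows: strip a trailing particle from a word and greedily split the remainder against a base-noun dictionary. The tiny particle list and dictionary below are illustrative assumptions, not the paper's actual resources, and the filtering of non-nominal words is omitted.

```python
# Sketch: particle stripping and greedy compound-noun splitting with toy resources.
PARTICLES = ("은", "는", "이", "가", "을", "를", "의")   # toy particle list (assumption)
BASE_NOUNS = {"정보", "검색", "시스템"}                  # toy base-noun dictionary (assumption)

def strip_particle(word):
    for p in PARTICLES:
        if word.endswith(p) and len(word) > len(p):
            return word[: -len(p)]
    return word

def split_compound(word):
    # Greedy left-to-right split into dictionary base-nouns.
    out, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in BASE_NOUNS:
                out.append(word[i:j]); i = j; break
        else:
            return [word]          # unknown segment: give up, keep the word whole
    return out

print(split_compound(strip_particle("정보검색시스템은")))   # -> ['정보', '검색', '시스템']
```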

Improving the Performance of Korean Text Chunking by Machine learning Approaches based on Feature Set Selection (자질집합선택 기반의 기계학습을 통한 한국어 기본구 인식의 성능향상)

  • Hwang, Young-Sook;Chung, Hoo-jung;Park, So-Young;Kwak, Young-Jae;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications / v.29 no.9 / pp.654-668 / 2002
  • In this paper, we present an empirical study on improving Korean text chunking with machine learning and feature set selection. We focus on two issues: the problem of selecting a feature set for Korean chunking, and the problem of alleviating data sparseness. To select a proper feature set, we use a heuristic method that searches through the space of feature sets, using the estimated performance of a machine learning algorithm as a measure of the "incremental usefulness" of a particular feature set. In addition, to smooth the data sparseness, we suggest a method of using a general part-of-speech tag set and selective lexical information, taking the characteristics of the Korean language into consideration. Experimental results showed that chunk tags and lexical information within a given context window are important features and that spacing unit information is less important than the others, independent of the machine learning technique. Furthermore, using the selective lexical information gives not only a smoothing effect but also a reduction of the feature space compared with using all lexical information. Korean text chunking based on memory-based learning and decision tree learning with the selected feature space showed precision/recall of 90.99%/92.52% and 93.39%/93.41%, respectively.
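
The heuristic search through feature sets using estimated performance as "incremental usefulness" resembles greedy forward selection. The sketch below illustrates that loop with a generic classifier, cross-validated accuracy and synthetic data, all of which are assumptions rather than the paper's chunking setup.

```python
# Sketch: greedy forward feature-set selection driven by cross-validated performance.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=8, n_informative=3, random_state=0)
remaining, selected, best_score = set(range(X.shape[1])), [], 0.0

while remaining:
    # Estimate the "incremental usefulness" of adding each remaining feature.
    gains = {f: cross_val_score(DecisionTreeClassifier(random_state=0),
                                X[:, selected + [f]], y, cv=5).mean()
             for f in remaining}
    f, score = max(gains.items(), key=lambda kv: kv[1])
    if score <= best_score:        # stop when no feature improves the estimate
        break
    selected.append(f); remaining.remove(f); best_score = score

print("selected features:", selected, "cv accuracy: %.3f" % best_score)
```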

A Mechanism to profile Pavement Blocks and detect Cracks using 2D Line Laser on Vehicles (이동체에서 2D 선레이저를 이용한 보도블럭 프로파일링 및 균열 검출 기법)

  • Choi, Seungho;Kim, Seoyeon;Jung, Young-Hoon;Kim, Taesik;Min, Hong;Jung, Jinman
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.5 / pp.135-140 / 2021
  • In this paper, we propose an on-line mechanism that simultaneously detects cracks and profiles pavement blocks in order to detect the displacement of the ground surface adjacent to excavations in urban areas. The proposed method utilizes a 2D laser to profile information about the pavement blocks, including their depth and the distance between them. In particular, it is designed to enable the detection of cracks and potholes at runtime. For the experiment, real data was collected with a Gocator sensor, and training was carried out using Faster R-CNN. The performance evaluation shows that our detection precision and recall are more than 90% while the pavement blocks are profiled at the same time. Our proposed mechanism can be used for monitoring management to quantitatively detect the level of excavation risk before a large-scale ground collapse occurs.
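
Detection precision and recall for a box detector such as Faster R-CNN are typically computed with an IoU matching rule. The sketch below shows that computation on placeholder boxes; the 0.5 IoU threshold is an assumption, as the paper's exact matching criterion is not given in the abstract.

```python
# Sketch: box-level precision/recall with greedy IoU matching.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    matched, tp = set(), 0
    for p in preds:
        best = max(range(len(gts)), key=lambda i: iou(p, gts[i]), default=None)
        if best is not None and best not in matched and iou(p, gts[best]) >= thr:
            matched.add(best); tp += 1           # prediction matches an unused ground truth
    fp, fn = len(preds) - tp, len(gts) - len(matched)
    return tp / (tp + fp), tp / (tp + fn)

preds = [(10, 10, 50, 50), (60, 60, 90, 90)]     # placeholder predicted crack boxes
gts   = [(12, 12, 52, 52), (200, 200, 230, 230)] # placeholder ground-truth crack boxes
print("precision=%.2f recall=%.2f" % precision_recall(preds, gts))
```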

Comparison of Deep Learning-based CNN Models for Crack Detection (콘크리트 균열 탐지를 위한 딥 러닝 기반 CNN 모델 비교)

  • Seol, Dong-Hyeon;Oh, Ji-Hoon;Kim, Hong-Jin
    • Journal of the Architectural Institute of Korea Structure & Construction / v.36 no.3 / pp.113-120 / 2020
  • The purpose of this study is to compare Deep Learning-based Convolutional Neural Network (CNN) models for concrete crack detection. The compared models are AlexNet, GoogLeNet, VGG16, VGG19, ResNet-18, ResNet-50, ResNet-101, and SqueezeNet, which have won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). To train, validate and test these models, we constructed 3000 training images and 12000 validation images with 256×256 pixel resolution consisting of cracked and non-cracked images, and constructed 5 test images with 4160×3120 pixel resolution consisting of concrete images with cracks. In order to increase the efficiency of training, transfer learning was performed by taking the weights from the pre-trained networks supported by MATLAB. Using the trained networks, the validation data was classified into crack and non-crack images, yielding True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts, from which six performance indicators, namely False Negative Rate (FNR), False Positive Rate (FPR), Error Rate, Recall, Precision, and Accuracy, were calculated. Each test image was scanned twice with a sliding window of 256×256 pixel resolution to classify the cracks, resulting in a crack map. From the comparison of the performance indicators and the crack maps, it was concluded that VGG16 and VGG19 were the most suitable for detecting concrete cracks.
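
The six performance indicators listed above follow directly from the confusion counts; the sketch below computes them from placeholder TP/TN/FP/FN values, not the paper's validation results.

```python
# Sketch: six indicators derived from confusion-matrix counts.
def indicators(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    return {
        "FNR":       fn / (tp + fn),            # missed cracks among actual cracks
        "FPR":       fp / (fp + tn),            # false alarms among non-crack images
        "Error":     (fp + fn) / total,
        "Recall":    tp / (tp + fn),
        "Precision": tp / (tp + fp),
        "Accuracy":  (tp + tn) / total,
    }

print(indicators(tp=5800, tn=5900, fp=100, fn=200))   # placeholder counts
```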