• Title/Summary/Keyword: Co-occurrence feature


Forensic Image Classification using Data Mining Decision Tree (데이터 마이닝 결정나무를 이용한 포렌식 영상의 분류)

  • RHEE, Kang Hyeon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.7
    • /
    • pp.49-55
    • /
    • 2016
  • Digital forensic images are distributed in many different image types, which poses a serious problem for analysis. To address this, the paper proposes an algorithm for classifying forensic image types. The proposed algorithm extracts a 21-dim. feature vector from the contrast and energy of the GLCM (Gray Level Co-occurrence Matrix) and the entropy of each image type. The classification test is performed over an exhaustive combination of the image types, and TP (True Positive) and FN (False Negative) rates are measured. The class evaluation of the proposed algorithm is rated 'Excellent (A)' because the AUROC (Area Under the Receiver Operating Characteristic Curve), computed from the sensitivity and 1-specificity, is 0.9980. The minimum average decision error is 0.1349 overall and 0.0179 when all forensic image types are involved, confirming that the classification effectiveness is high.
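
The GLCM statistics named in the abstract (contrast, energy, entropy) can be sketched in a few lines of NumPy; the tiny image, the single horizontal offset, and the 4 gray levels below are illustrative only, not the paper's 21-dim. feature construction.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalized Gray Level Co-occurrence Matrix for pixel offset (dx, dy)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    return m / m.sum()

def contrast(p):
    # Weighted squared gray-level difference of co-occurring pairs.
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def energy(p):
    # Sum of squared matrix entries (angular second moment).
    return float((p ** 2).sum())

def entropy(p):
    # Shannon entropy over the nonzero co-occurrence probabilities.
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
p = glcm(img, dx=1, dy=0, levels=4)
features = [contrast(p), energy(p), entropy(p)]
```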

Detection of Red Tide Distribution in the Southern Coast of the Korea Waters using Landsat Image and Euclidian Distance (Landsat 영상과 유클리디언 거리측정 방법을 이용한 한반도 남부해역 적조영역 검출)

  • Sur, Hyung-Soo;Kim, Seok-Gyu;Lee, Chil-Woo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.10 no.4
    • /
    • pp.1-13
    • /
    • 2007
  • We generate an image that accumulates the first two principal components after transforming the original image using GLCM (Gray Level Co-Occurrence Matrix) texture feature information. These images are then preprocessed for corner detection and region detection. Experimental results show that the accumulated two-principal-component images retain most of the information of the six texture kinds, with an eigenvalue (explained variance) of 94.6%. Compared with the red tide region obtained from sea color and the red tide region of an image using all principal components, this approach produced the best result. We also construct a Euclidean space using Euclidean distance measurements over the red tide region and the clear sea, and identify red tide regions in arbitrary sea areas through Euclidean distance and spatial distribution.
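
The final decision step, labeling a sea region as red tide or clear sea by its Euclidean distance to known reference regions in feature space, can be sketched as follows; the reference vectors and the query features are made-up illustrations, not values from the paper.

```python
import numpy as np

# Hypothetical mean texture-feature vectors (e.g. accumulated GLCM principal
# components) for known red-tide and clear-sea regions; values are illustrative.
red_tide_ref = np.array([0.82, 0.31, 0.55])
clear_sea_ref = np.array([0.20, 0.75, 0.10])

def classify(region_features):
    """Label a region by its Euclidean distance to each reference vector."""
    d_red = np.linalg.norm(region_features - red_tide_ref)
    d_sea = np.linalg.norm(region_features - clear_sea_ref)
    return "red_tide" if d_red < d_sea else "clear_sea"

label = classify(np.array([0.80, 0.35, 0.50]))  # query near the red-tide reference
```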


Text Categorization Using TextRank Algorithm (TextRank 알고리즘을 이용한 문서 범주화)

  • Bae, Won-Sik;Cha, Jeong-Won
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.1
    • /
    • pp.110-114
    • /
    • 2010
  • We describe a new method for text categorization using the TextRank algorithm. Text categorization is the problem of assigning one or more pre-defined categories to a text document. TextRank is a graph-based ranking algorithm: if each word is treated as a vertex and the co-occurrence of two adjacent words as an edge, a document yields a graph. We find important words in this graph using TextRank and build features from pairs consisting of each important word and a word adjacent to it. We use four classifiers: SVM, Naïve Bayes, Maximum Entropy, and k-NN, on the non-cross-posted version of the 20 Newsgroups data set. As a result, performance improved for all classifiers, which shows the potential of the TextRank algorithm for text categorization.
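
A minimal TextRank sketch over the adjacency co-occurrence graph described above, assuming the standard PageRank update with damping factor 0.85; tokenization details and the paper's feature-pair construction are omitted.

```python
def textrank(words, d=0.85, iters=50):
    """Score words by a PageRank-style iteration over a co-occurrence graph."""
    # Undirected edge between each pair of adjacent words in the document.
    graph = {}
    for a, b in zip(words, words[1:]):
        if a != b:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    score = {w: 1.0 for w in graph}
    for _ in range(iters):
        # Each vertex receives its neighbors' scores, split by their degree.
        score = {w: (1 - d) + d * sum(score[n] / len(graph[n]) for n in graph[w])
                 for w in graph}
    return score

doc = "graph based ranking algorithm ranks words in a graph of words".split()
scores = textrank(doc)
top = max(scores, key=scores.get)
```

Important words (high-degree, well-connected vertices) accumulate higher scores than words that occur in only one context.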

Classification of Fall Crops Using Unmanned Aerial Vehicle Based Image and Support Vector Machine Model - Focusing on Idam-ri, Goesan-gun, Chungcheongbuk-do - (무인기 기반 영상과 SVM 모델을 이용한 가을수확 작물 분류 - 충북 괴산군 이담리 지역을 중심으로 -)

  • Jeong, Chan-Hee;Go, Seung-Hwan;Park, Jong-Hwa
    • Journal of Korean Society of Rural Planning
    • /
    • v.28 no.1
    • /
    • pp.57-69
    • /
    • 2022
  • Crop classification is very important for estimating crop yield and determining accurate cultivation area. The purpose of this study is to classify crops harvested in fall in Idam-ri, Goesan-gun, Chungcheongbuk-do using unmanned aerial vehicle (UAV) images and a support vector machine (SVM) model. The study proceeded in the order of image acquisition, variable extraction, model building, and evaluation. First, RGB and multispectral images were acquired on September 13, 2021. The independent variables, applied to Farm-Map, consisted of gray level co-occurrence matrix (GLCM)-based texture characteristics from the RGB images and multispectral reflectance data. The crop classification model was built from the texture characteristics and reflectance data, and finally accuracy evaluation was performed using the error matrix. Four model types were built to compare classification accuracy across combinations of independent variables. Among them, the recursive feature elimination (RFE) model showed the highest accuracy, with an overall accuracy (OA) of 88.64% and a Kappa coefficient of 0.84. UAV-based RGB and multispectral images effectively classified cabbage, rice, and soybean when the SVM model was applied. The results demonstrate the usefulness of classifying crops from single-period images. These techniques are expected to improve the accuracy and efficiency of crop cultivation area surveys as additional data are learned, and to provide basic data for estimating crop yields.
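
The RFE-around-SVM step can be sketched with scikit-learn, assuming a linear-kernel SVM so that RFE can rank features by coefficient magnitude; the toy features and feature counts below are illustrative stand-ins, not the study's Farm-Map variables.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
# Toy stand-in for per-parcel features: 3 informative "reflectance" columns
# (class means offset per class) plus 5 uninformative "texture" columns;
# labels 0/1/2 stand in for cabbage/rice/soybean.
X_inf = rng.normal(size=(90, 3)) + np.repeat(np.eye(3) * 4, 30, axis=0)
X = np.hstack([X_inf, rng.normal(size=(90, 5))])
y = np.repeat([0, 1, 2], 30)

# Recursive feature elimination wrapped around a linear-kernel SVM; the
# number of features to keep here is arbitrary for the toy data.
selector = RFE(SVC(kernel="linear"), n_features_to_select=3).fit(X, y)
clf = SVC(kernel="linear").fit(X[:, selector.support_], y)
acc = clf.score(X[:, selector.support_], y)
```

In the study, accuracy would instead be assessed on held-out parcels via an error matrix (confusion matrix) rather than on the training data as in this sketch.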

Liver Tumor Detection Using Texture PCA of CT Images (CT영상의 텍스처 주성분 분석을 이용한 간종양 검출)

  • Sur, Hyung-Soo;Chong, Min-Young;Lee, Chil-Woo
    • The KIPS Transactions:PartB
    • /
    • v.13B no.6 s.109
    • /
    • pp.601-606
    • /
    • 2006
  • With the great development of medical technology, the amount of image data used in medical institutions is increasing rapidly. Automated image-processing methods are therefore needed to analyze the many medical images, rather than relying on visual inspection by doctors. In this paper, we propose a method that acquires texture information using the GLCM over the liver region of abdominal CT images and automatically detects liver tumors by applying PCA to these data. Existing liver tumor detection mostly relies on a single feature, intensity; instead, we convert the 8 GLCM texture features into 4 accumulated principal-component images. Experimental results show that the variance percentage of the 4 principal-component accumulation images is 89.9%, a result comparable to the approximately 92% obtained by detection using intensity alone. This means that liver tumors can be detected even when the dimensionality of the image data is reduced by half, from 8 to 4 dimensions.
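
The dimensionality-reduction step, checking how much variance the leading principal components retain, can be sketched with a plain eigendecomposition of the covariance matrix; the toy 8-dim. data below is synthetic and only mimics a low-rank texture-feature matrix.

```python
import numpy as np

def pca_cumulative_variance(X):
    """Cumulative fraction of total variance explained by the top components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
    return np.cumsum(eigvals) / eigvals.sum()

# Toy 8-dim feature matrix whose variance lies mostly in 2 directions,
# mimicking 8 GLCM texture features reduced to a few components.
rng = np.random.default_rng(1)
base = rng.normal(size=(200, 2))
X = np.hstack([base,
               base @ rng.normal(size=(2, 6)) * 0.5
               + 0.01 * rng.normal(size=(200, 6))])
ratios = pca_cumulative_variance(X)
```

A criterion like the paper's 89.9% amounts to keeping the smallest k with `ratios[k-1]` above the chosen threshold.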

Metalaxyl Sensitivity Related with Distribution Feature of Mating Type of Phytophthora capsici Population from Red Pepper in Korea (국내 고추역병균 Phytophthora capsici 집단의 교배형 분포 특성에 따른 Metalaxyl 감수성)

  • Song, Jeong-Young;Yoo, Sung-Joon;Lee, Youn-Su;Kim, Byung-Sup;Kim, Hong-Gi
    • The Korean Journal of Mycology
    • /
    • v.31 no.2
    • /
    • pp.98-102
    • /
    • 2003
  • Metalaxyl sensitivity related to the distribution of mating types was characterized for a Phytophthora capsici population of 433 isolates of the red-pepper pathogen, collected from 75 pepper fields in Korea from 1995 to 1998. At a metalaxyl concentration of $2{\mu}g/ml$, the inhibition rate of mycelial growth of P. capsici isolates was 68.2% on average compared to the control, and 28.6% of isolates on average were estimated to be resistant to the chemical. Isolates from field units with a single mating type revealed similar levels of sensitivity to metalaxyl and were either sensitive or resistant in most field units. However, isolates from field units with both mating types revealed diverse sensitivity levels and varying ratios of metalaxyl-sensitive to resistant isolates in each field unit. The results indicate that the different levels of metalaxyl sensitivity of the P. capsici population in Korea appear to be closely related to the occurrence ratio of A1 : A2 mating types in each field.

Classification of Breast Tumor Cell Tissue Section Images (유방 종양 세포 조직 영상의 분류)

  • 황해길;최현주;윤혜경;남상희;최흥국
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.4
    • /
    • pp.22-30
    • /
    • 2001
  • In this paper, we propose three classification algorithms to classify breast tumors occurring in ducts into Benign, DCIS (ductal carcinoma in situ), and NOS (invasive ductal carcinoma). The general approach for creating a classifier consists of two steps: feature extraction and classification. Feature extraction is especially significant because classification performance depends on the extracted features. In the feature extraction step, we extracted morphology features describing the size of nuclei, and texture features reflecting the internal structures of the tumor from wavelet-transformed images at 10$\times$ and 40$\times$ magnification. In particular, to find the correlation between correct classification rates and wavelet depths, we applied 1-, 2-, 3-, and 4-level wavelet transforms to the images and extracted texture features from the transformed images. The morphology features used are area, perimeter, width along the X axis, width along the Y axis, and circularity; the texture features used are entropy, energy, contrast, and homogeneity. In the classification step, we created three classifiers from the extracted features using discriminant analysis: the first from the morphology features, and the second and third from the texture features of wavelet-transformed images at 10$\times$ and 40$\times$ magnification. Finally, we analyzed and compared the correct classification rates of the three classifiers and found that the best classifier was built from the texture features of 3-level wavelet-transformed images.
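
A one-level 2-D Haar transform, used here as a simple stand-in for the wavelet transform in the paper, with per-subband energy as one representative texture feature; the ramp test image is synthetic.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation and three detail subbands."""
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 4   # approximation
    lh = (a + b - c - d) / 4   # vertical detail
    hl = (a - b + c - d) / 4   # horizontal detail
    hh = (a - b - c + d) / 4   # diagonal detail
    return ll, lh, hl, hh

def subband_energy(band):
    """Mean squared coefficient of a subband, one common texture feature."""
    return float((band ** 2).mean())

img = np.arange(64).reshape(8, 8)  # smooth ramp: strong vertical gradient
ll, lh, hl, hh = haar2d(img)
energies = [subband_energy(b) for b in (ll, lh, hl, hh)]
```

Deeper wavelet levels, as compared in the paper, would simply re-apply `haar2d` to the `ll` subband.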


Student Group Division Algorithm based on Multi-view Attribute Heterogeneous Information Network

  • Jia, Xibin;Lu, Zijia;Mi, Qing;An, Zhefeng;Li, Xiaoyong;Hong, Min
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.12
    • /
    • pp.3836-3854
    • /
    • 2022
  • Student group division benefits universities in managing students based on group profiles. With the widespread use of student smart cards on campus, especially where students live in campus residence halls, students' daily activities are recorded with information such as card-swiping time and location. It is therefore feasible to characterize students from this daily activity data and group them based on objective measures of their campus behavior, together with regular student attributes collected in the management system. However, feature representation is challenging due to the diverse forms of student data. To effectively and comprehensively represent student behavior for group division, we adopt activity data from student smart cards and student attributes as input, taking into account activity and attribute relationship types from different perspectives. Specifically, we propose a novel student group division method based on a multi-view student attribute heterogeneous information network (MSA-HIN). The network nodes in our proposed MSA-HIN represent students with their multi-dimensional attribute information, while the edges characterize different student relationships, such as co-major, co-occurrence, and co-borrowing of books. Based on the MSA-HIN, embedded representations of students are learned and a deep graph clustering algorithm is applied to divide students into groups. Comparative experiments were conducted on a real-life campus dataset collected from a university. The results demonstrate that our method effectively reveals the variability of student attributes and relationships and achieves the best clustering results for group division.
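
A much-simplified sketch of the idea: fold several relation types (co-major, co-occurrence) into one weighted adjacency matrix and split students with a spectral (Fiedler-vector) cut. This stands in for, and is far simpler than, the paper's MSA-HIN embedding and deep graph clustering; all student names, edges, and weights below are hypothetical.

```python
import numpy as np

# Hypothetical students and two relation types with different edge weights.
students = ["s1", "s2", "s3", "s4", "s5", "s6"]
co_major = [("s1", "s2"), ("s2", "s3"), ("s4", "s5"), ("s5", "s6")]
co_occurrence = [("s1", "s3"), ("s4", "s6"), ("s3", "s4")]

n = len(students)
idx = {s: i for i, s in enumerate(students)}
A = np.zeros((n, n))
for edges, w in [(co_major, 1.0), (co_occurrence, 0.5)]:
    for a, b in edges:
        A[idx[a], idx[b]] = A[idx[b], idx[a]] = max(A[idx[a], idx[b]], w)

# Two-way spectral split: the Fiedler vector (eigenvector of the graph
# Laplacian for the second-smallest eigenvalue) changes sign across the
# weak bridge between the two densely connected groups.
L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
groups = {s: int(fiedler[idx[s]] > 0) for s in students}
```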

Visual Semantic Based 3D Video Retrieval System Using HDFS

  • Ranjith Kumar, C.;Suguna, S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.8
    • /
    • pp.3806-3825
    • /
    • 2016
  • This paper presents a new framework for visual-semantic 3D video search and retrieval. Existing 3D retrieval applications focus on shape analysis such as object matching, classification, and retrieval, rather than on video retrieval as a whole. In this context, we explore the 3D-CBVR (Content Based Video Retrieval) concept for the first time, combining BOVW and MapReduce in a 3D framework. We combine shape, color, and texture for feature extraction, using geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extracting the local descriptors, the TB-PCT (Threshold Based Predictive Clustering Tree) algorithm is used to generate a visual codebook. Matching is performed using a soft weighting scheme with the L2 distance function, and retrieved results are ranked according to their index value. To handle the prodigious amount of data and to enable efficient retrieval, we incorporate HDFS into the system. Using a 3D video dataset, we measure the performance of the proposed system and show that it gives accurate results while reducing time complexity.

Statistical Generation of Korean Chatting Sentences Using Multiple Feature Information (복합 자질 정보를 이용한 통계적 한국어 채팅 문장 생성)

  • Kim, Jong-Hwan;Chang, Du-Seong;Kim, Hark-Soo
    • Korean Journal of Cognitive Science
    • /
    • v.20 no.4
    • /
    • pp.421-437
    • /
    • 2009
  • A chatting system is a computer program that simulates conversation between a human and a computer in natural language. In this paper, we propose a statistical model that generates natural chatting sentences when keywords and speech acts are given as input. The proposed model first finds Eojeols (Korean spacing units) containing the input keywords in a corpus, and generates sentence candidates using the appearance and syntactic information of the Eojeols surrounding them. It then selects one of the sentence candidates using a language model based on speech-act information, co-occurrence information between Eojeols, and the syntactic information of each Eojeol. In experiments, the proposed model achieved a correct sentence generation rate of 86.2%, better than a previous model based on a simple language model.
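
The candidate-selection step can be illustrated with an add-alpha smoothed bigram language model; the corpus and candidates below are toy English stand-ins, and the Eojeol segmentation, speech-act, and syntactic features from the paper are omitted.

```python
import math
from collections import Counter

# Tiny illustrative corpus for estimating bigram co-occurrence statistics.
corpus = [
    "i like playing chess with friends",
    "i like watching movies at night",
    "we like playing games at night",
]
bigrams = Counter()
unigrams = Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(toks[:-1])          # context counts for denominators
    bigrams.update(zip(toks, toks[1:]))

def log_prob(sentence, alpha=0.1):
    """Add-alpha smoothed bigram log-probability of a candidate sentence."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    v = len(unigrams) + 1  # rough vocabulary size for smoothing
    return sum(math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * v))
               for a, b in zip(toks, toks[1:]))

# Pick the candidate whose word sequence the corpus statistics prefer.
candidates = ["i like playing games", "games playing like i"]
best = max(candidates, key=log_prob)
```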
