
Detecting Uncertain Boundary Algorithm using Constrained Delaunay Triangulation

  • 조성환
    • 한국측량학회지 / Vol. 32, No. 2 / pp. 87-93 / 2014
  • The set of polygons constituting cadastral parcels is the most fundamental dataset reflecting the nation's land in the real world. Cadastral parcels must therefore guarantee topological integrity: parcels may neither overlap one another nor leave gaps between them. In practice, however, overlaps and gaps between parcels arise for various reasons; in such cases polygon boundaries do not adjoin their neighbors precisely, producing unintended overlap and gap regions. A polygon whose boundary contains one or more such imprecisely adjoining, uncertain edges is called an uncertain region. This paper proposes the TTA technique for detecting such regions. TTA first extracts the points and polylines from the polygon dataset and performs a constrained Delaunay triangulation. Next, each triangle is tagged with the number of polygons in the dataset that overlap it. Triangles whose tag is 0 or greater than 1 are then extracted, and connected triangles are merged, revealing the regions with topological inconsistencies. In our experiments, the proposed algorithm was automated and applied to real-world cadastral data with intersecting boundaries.
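The tagging-and-merging step of TTA described above can be sketched in Python. This is an illustrative sketch, not the authors' implementation: the constrained Delaunay triangulation is assumed to have been computed already, and each triangle is represented only by its tag (the number of overlapping parcels) and an adjacency list; triangles tagged 0 (gaps) or greater than 1 (overlaps) are merged into connected uncertain regions.

```python
from collections import deque

def uncertain_regions(tags, adjacency):
    """Merge connected triangles whose tag is not exactly 1.

    tags: dict mapping triangle id -> number of parcels overlapping it
    adjacency: dict mapping triangle id -> list of neighboring triangle ids
    Returns a list of sets, each a connected uncertain region
    (tag 0 = gap region, tag > 1 = overlap region).
    """
    suspect = {t for t, n in tags.items() if n != 1}
    seen, regions = set(), []
    for start in suspect:
        if start in seen:
            continue
        region, queue = set(), deque([start])
        seen.add(start)
        while queue:             # breadth-first merge of adjacent suspects
            t = queue.popleft()
            region.add(t)
            for nb in adjacency.get(t, []):
                if nb in suspect and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        regions.append(region)
    return regions

# toy strip of triangles: 1 and 2 form a gap, 4 is an overlap
tags = {0: 1, 1: 0, 2: 0, 3: 1, 4: 2}
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
regions = uncertain_regions(tags, adjacency)  # the gap {1, 2} and the overlap {4}
```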

CDRgator: An Integrative Navigator of Cancer Drug Resistance Gene Signatures

  • Jang, Su-Kyeong;Yoon, Byung-Ha;Kang, Seung Min;Yoon, Yeo-Gha;Kim, Seon-Young;Kim, Wankyu
    • Molecules and Cells / Vol. 42, No. 3 / pp. 237-244 / 2019
  • Understanding the mechanisms of cancer drug resistance is a critical challenge in cancer therapy. For many cancer drugs, various resistance mechanisms have been identified such as target alteration, alternative signaling pathways, epithelial-mesenchymal transition, and epigenetic modulation. Resistance may arise via multiple mechanisms even for a single drug, making it necessary to investigate multiple independent models for comprehensive understanding and therapeutic application. In particular, we hypothesize that different resistance processes result in distinct gene expression changes. Here, we present a web-based database, CDRgator (Cancer Drug Resistance navigator) for comparative analysis of gene expression signatures of cancer drug resistance. Resistance signatures were extracted from two different types of datasets. First, resistance signatures were extracted from transcriptomic profiles of cancer cells or patient samples and their resistance-induced counterparts for >30 cancer drugs. Second, drug resistance group signatures were also extracted from two large-scale drug sensitivity datasets representing ~1,000 cancer cell lines. All the datasets are available for download, and are conveniently accessible based on drug class and cancer type, along with analytic features such as clustering analysis, multidimensional scaling, and pathway analysis. CDRgator allows meta-analysis of independent resistance models for more comprehensive understanding of drug-resistance mechanisms that is difficult to accomplish with individual datasets alone (database URL: http://cdrgator.ewha.ac.kr).

Land Cover Classification Using Semantic Image Segmentation with Deep Learning

  • 이성혁;김진수
    • 대한원격탐사학회지 / Vol. 35, No. 2 / pp. 279-288 / 2019
  • In this study, semantic segmentation based on SegNet was performed on aerial orthophotos, and its performance for land cover classification was evaluated. Four classes (built-up area, cropland, forest, water) were selected for segmentation, and a total of 2,000 samples were built from aerial orthophotos and the detailed land cover map, split 8:2 into training (1,600) and validation (400) sets. The model was trained on these sets, and validation accuracy was evaluated while varying the hyperparameters that affect training accuracy. The SegNet model performed best at 100,000 iterations with a batch size of 5. Applying the trained SegNet model to 200 test samples yielded per-class accuracies of 87.89% (cropland), 87.18% (forest), 83.66% (water), and 82.67% (built-up area), with an overall classification accuracy of 85.48%. This improves on previous land cover classification studies using aerial imagery, indicating that deep-learning-based semantic segmentation is readily applicable. Incorporating data from additional channels and spectral indices is expected to further improve classification accuracy.
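The per-class and overall accuracies reported above come from comparing predicted label maps against reference labels pixel by pixel. A minimal sketch (the class names match the study, but the toy label arrays are illustrative):

```python
def classification_accuracy(pred, truth, classes):
    """Per-class and overall pixel accuracy for flattened label maps."""
    assert len(pred) == len(truth)
    per_class = {}
    for c in classes:
        idx = [i for i, t in enumerate(truth) if t == c]  # pixels of class c
        if idx:
            hits = sum(1 for i in idx if pred[i] == c)
            per_class[c] = hits / len(idx)
    overall = sum(1 for p, t in zip(pred, truth) if p == t) / len(truth)
    return per_class, overall

# toy 1-D "label maps" with the study's four classes
classes = ["built-up", "cropland", "forest", "water"]
truth = ["cropland"] * 4 + ["forest"] * 4 + ["water"] * 2
pred  = ["cropland"] * 3 + ["forest"] * 5 + ["water"] * 2
per_class, overall = classification_accuracy(pred, truth, classes)
```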

Development of a Method for Analyzing and Visualizing Concept Hierarchies based on Relational Attributes and its Application on Public Open Datasets

  • Hwang, Suk-Hyung
    • 한국컴퓨터정보학회논문지 / Vol. 26, No. 9 / pp. 13-25 / 2021
  • In the era of digital innovation driven by the Internet, information and communication technology, and artificial intelligence, massive datasets are generated, collected, and accumulated, and many public institutions publish them online as useful open public information. To obtain useful insights and information from such data, Formal Concept Analysis, which analyzes, classifies, clusters, and visualizes data based on the binary relation between objects and attributes inherent in a dataset, has been used successfully. This paper extends Formal Concept Analysis and proposes a technique and a supporting tool for classifying, conceptualizing, and visualizing datasets based not only on the attributes of objects but also on the relational attributes between objects. In experiments applying the proposed technique to several public open datasets, concept hierarchies were generated and visualized from the data and more useful knowledge was extracted, demonstrating the technique's validity and utility. The proposed analysis technique can serve as a useful tool for effective data analysis, classification, clustering, visualization, and information retrieval.
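The binary object-attribute relation at the heart of Formal Concept Analysis can be illustrated with a small sketch of the classical derivation operators. This is a generic FCA example over assumed toy data, not the paper's extended relational-attribute method:

```python
def common_attributes(objs, context):
    """Attributes shared by every object in objs (the derivation operator)."""
    sets = [set(context[o]) for o in objs]
    return set.intersection(*sets) if sets else set()

def objects_having(attrs, context):
    """Objects possessing every attribute in attrs (the dual operator)."""
    return {o for o, a in context.items() if attrs <= set(a)}

# toy formal context: public datasets and their properties (illustrative)
context = {
    "bus_stops": {"open", "geospatial"},
    "budgets":   {"open", "tabular"},
    "road_map":  {"open", "geospatial", "tabular"},
}

# a formal concept is a pair (extent, intent) that close each other
intent = common_attributes({"bus_stops", "road_map"}, context)
extent = objects_having(intent, context)
```

Enumerating all such closed pairs and ordering them by extent inclusion yields the concept hierarchy (lattice) that the paper's tool visualizes.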

Issues and Challenges in the Extraction and Mapping of Linked Open Data Resources with Recommender Systems Datasets

  • Nawi, Rosmamalmi Mat;Noah, Shahrul Azman Mohd;Zakaria, Lailatul Qadri
    • Journal of Information Science Theory and Practice / Vol. 9, No. 2 / pp. 66-82 / 2021
  • Recommender Systems have gained immense popularity due to their capability of dealing with a massive amount of information in various domains. They are considered information filtering systems that make predictions or recommendations to users based on their interests and preferences. The more recent technology, Linked Open Data (LOD), has been introduced, and a vast amount of Resource Description Framework data have been published in freely accessible datasets. These datasets are connected to form the so-called LOD cloud. The need for semantic data representation has been identified as one of the next challenges in Recommender Systems. In a LOD-enabled recommendation framework where domain awareness plays a key role, the semantic information provided in the LOD can be exploited. However, dealing with a big chunk of the data from the LOD cloud and its integration with any domain datasets remains a challenge due to various issues, such as resource constraints and broken links. This paper presents the challenges of interconnecting and extracting the DBpedia data with the MovieLens 1 Million dataset. This study demonstrates how LOD can be a vital yet rich source of content knowledge that helps recommender systems address the issues of data sparsity and insufficient content analysis. Based on the challenges, we proposed a few alternatives and solutions to some of the challenges.

Performance Analysis of Cloud-Net with Cross-sensor Training Dataset for Satellite Image-based Cloud Detection

  • Kim, Mi-Jeong;Ko, Yun-Ho
    • 대한원격탐사학회지 / Vol. 38, No. 1 / pp. 103-110 / 2022
  • Since satellite images generally include clouds, detecting or masking clouds is an essential step before satellite image processing. Earlier research detected clouds using their physical characteristics; more recently, cloud detection methods using deep learning techniques from the image segmentation field, such as CNNs and modified U-Nets, have been studied. Because image segmentation assigns a label to every pixel in an image, a precise pixel-level dataset is required for cloud detection, and obtaining an accurate training dataset matters more than the network configuration. Existing deep learning techniques have used different training datasets, with test data drawn from the same (intra-sensor) dataset acquired by the same sensor and procedure as the training data; these differing datasets make it difficult to determine which network performs better overall. To verify the effectiveness of a cloud detection network such as Cloud-Net, two networks were trained: one on the cloud dataset from KOMPSAT-3 images provided by the AIHUB site, and one on the L8-Cloud dataset from Landsat8 images released publicly by a Cloud-Net author. Test data from the KOMPSAT-3 intra-sensor dataset were used to validate both networks. The simulation results show that the network trained on the KOMPSAT-3 cloud dataset outperforms the one trained on the L8-Cloud dataset: Landsat8 and KOMPSAT-3 satellite images have different GSDs, making it difficult to achieve good results in cross-sensor validation. A network can thus be superior on intra-sensor data yet inferior on cross-sensor data. Techniques that perform well on cross-sensor validation datasets merit future study.
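Cross-sensor comparisons like the one above ultimately score predicted cloud masks against reference masks pixel by pixel. A minimal sketch of intersection-over-union for binary cloud masks (the toy masks are illustrative, not the KOMPSAT-3 or L8-Cloud data):

```python
def mask_iou(pred, truth):
    """Intersection-over-union for flat binary cloud masks (1 = cloud)."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # both empty: perfect match

# toy 8-pixel masks: 3 cloud pixels agree, 5 pixels are cloud in either mask
truth = [1, 1, 1, 0, 0, 0, 1, 0]
pred  = [1, 1, 0, 0, 0, 1, 1, 0]
iou = mask_iou(pred, truth)
```

Computing this score once on intra-sensor test data and once on data from the other sensor makes the intra- versus cross-sensor gap discussed above directly measurable.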

Accuracy of Phishing Websites Detection Algorithms by Using Three Ranking Techniques

  • Mohammed, Badiea Abdulkarem;Al-Mekhlafi, Zeyad Ghaleb
    • International Journal of Computer Science & Network Security / Vol. 22, No. 2 / pp. 272-282 / 2022
  • Between 2014 and 2019, the US lost more than 2.1 billion USD to phishing attacks, according to the FBI's Internet Crime Complaint Center, and COVID-19 scam complaints totaled more than 1,200; phishing attacks account for these damaging effects. Many phishing website (PW) detection approaches appear in the literature. Earlier methods maintained a manually updated centralized blacklist, which cannot detect newly created phishing sites. Several recent studies apply supervised machine learning (SML) algorithms and schemes, often based on features extracted from URLs, to the PW detection problem. These studies show that some classification algorithms are more effective on some datasets than on others; no single classifier has become the standard for phishing site detection. This study aims to identify the SML features and schemes that work best against PWs across publicly available phishing datasets. Eight widely used classification algorithms from the Scikit-Learn library were configured and evaluated on three public phishing datasets, and their accuracies were compared for statistically significant differences using Welch's t-test. Ensembles and neural networks outperformed the classical algorithms in this study. Classification accuracy and classifier rankings for the eight traditional SML algorithms are reported in Tables 4 and 8. On the severely unbalanced datasets, several classifiers achieved higher than 99.0 percent classification accuracy, and the results show that this approach can be adapted to outperform conventional techniques with good precision.
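The statistical comparison described above rests on Welch's t-test for samples with unequal variances. A self-contained sketch of the Welch statistic and its degrees of freedom (the accuracy samples are toy values; the paper's Scikit-Learn classifiers are not reproduced here):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2a, se2b = va / na, vb / nb                   # squared standard errors
    t = (ma - mb) / math.sqrt(se2a + se2b)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (na - 1) + se2b ** 2 / (nb - 1))
    return t, df

# toy cross-validation accuracies for two hypothetical classifiers
acc_a = [0.991, 0.989, 0.993, 0.990, 0.992]
acc_b = [0.975, 0.970, 0.978, 0.973, 0.972]
t, df = welch_t(acc_a, acc_b)
```

The resulting t and df feed a t-distribution to obtain a p-value (e.g. via `scipy.stats`), which decides whether one classifier's accuracy advantage is statistically significant.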

Media-based Analysis of Gasoline Inventory with Korean Text Summarization

  • 윤성연;박민서
    • 문화기술의 융합 / Vol. 9, No. 5 / pp. 509-515 / 2023
  • Despite continuous national development of alternative energy, consumption of petroleum products keeps increasing. In particular, the price of gasoline, a representative petroleum product, fluctuates strongly with international oil prices, and gas stations adjust their gasoline inventories in response to price changes. It is therefore necessary to analyze the main drivers of gasoline inventory changes in order to understand overall gasoline consumption behavior. This study uses news articles to identify the factors affecting gas stations' gasoline inventories. First, articles related to gasoline are collected automatically via web crawling. Second, the collected articles are summarized with the KoBART (Korean Bidirectional and Auto-Regressive Transformers) text summarization model. Third, the extracted summaries are preprocessed, and key factors at the word and phrase level are derived using an N-gram language model and TF-IDF (Term Frequency-Inverse Document Frequency). This study makes it possible to identify and predict patterns of gasoline consumption.
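The third step above, scoring word-level factors with TF-IDF, can be sketched as follows. This is a generic smoothed TF-IDF computation over toy English "summaries"; the study's KoBART summaries and Korean tokenization are not reproduced:

```python
import math

def tf_idf(docs):
    """Smoothed TF-IDF scores per document for whitespace-tokenized docs."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = {}                                  # document frequency per term
    for toks in tokenized:
        for w in set(toks):
            df[w] = df.get(w, 0) + 1
    scores = []
    for toks in tokenized:
        tf = {w: toks.count(w) / len(toks) for w in set(toks)}
        scores.append({w: tf[w] * (math.log(n / df[w]) + 1) for w in tf})
    return scores

docs = [
    "gasoline inventory rises as gasoline price falls",
    "oil price rises on supply cuts",
]
scores = tf_idf(docs)
top = max(scores[0], key=scores[0].get)  # highest-weighted term of doc 0
```

Terms frequent in one summary but rare across the corpus (here "gasoline") score highest, which is what surfaces them as candidate inventory-change factors.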

A Hybrid Multi-Level Feature Selection Framework for prediction of Chronic Disease

  • G.S. Raghavendra;Shanthi Mahesh;M.V.P. Chandrasekhara Rao
    • International Journal of Computer Science & Network Security / Vol. 23, No. 12 / pp. 101-106 / 2023
  • Chronic illnesses are among the most common serious problems affecting human health. Early diagnosis of chronic diseases can help avoid or mitigate their consequences, potentially decreasing mortality rates. Using machine learning algorithms to identify risk factors is a promising strategy. The issue with existing feature selection approaches is that each method yields a distinct set of features that affects model accuracy, and current methods do not perform well on large multidimensional datasets. We introduce a novel model with a feature selection approach that selects optimal features from big multidimensional datasets to provide reliable predictions of chronic illnesses without sacrificing data uniqueness. To ensure the success of the proposed model, we balanced the classes by applying hybrid balanced-class sampling methods to the original dataset, along with data pre-processing and data transformation methods, to provide credible data for the training model. We ran and assessed the model on datasets with binary and multivalued classifications, using multiple datasets (Parkinson's, arrhythmia, breast cancer, kidney, diabetes). Suitable features are selected by a hybrid feature model consisting of LassoCV, decision tree, random forest, gradient boosting, AdaBoost, and stochastic gradient descent, with a vote over the attributes common to the outputs of these methods. The accuracy on the original dataset before applying the framework is recorded and evaluated against the accuracy on the reduced attribute set, and the results are reported separately for comparison. Based on the result analysis, we conclude that the proposed model produced higher accuracy on multivalued-class datasets than on binary-class ones.
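The voting step of the hybrid feature model described above, keeping attributes common to the outputs of several selectors, can be sketched generically. The selector outputs and the vote threshold below are illustrative assumptions, not taken from the paper:

```python
def vote_features(selected_sets, min_votes):
    """Keep features chosen by at least min_votes of the selectors."""
    counts = {}
    for s in selected_sets:
        for f in s:
            counts[f] = counts.get(f, 0) + 1
    return {f for f, c in counts.items() if c >= min_votes}

# toy outputs of three selectors (stand-ins for Lasso, tree, AdaBoost, ...)
lasso = {"age", "bmi", "glucose"}
tree  = {"bmi", "glucose", "bp", "insulin"}
boost = {"glucose", "bp", "age"}
chosen = vote_features([lasso, tree, boost], min_votes=2)
```

Requiring agreement among selectors filters out features that only one method happens to rank highly, which is the stability argument behind such hybrid schemes.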

Wine Quality Prediction by Using Backward Elimination Based on XGBoosting Algorithm

  • Umer Zukaib;Mir Hassan;Tariq Khan;Shoaib Ali
    • International Journal of Computer Science & Network Security / Vol. 24, No. 2 / pp. 31-42 / 2024
  • Many industries rely on quality certification to promote their products or brands, yet obtaining quality certification, particularly from human experts, is difficult. Machine learning plays a vital role here, with many applications in assigning and assessing quality certifications for products at a macro level. Wine, like other products, comes in different brands, and machine learning can help ensure its quality. In this research, we use two datasets publicly available in the UC Irvine Machine Learning Repository to predict wine quality: a red wine dataset with 1,599 records and a white wine dataset with 4,898 records. The study is twofold. First, we use backward elimination to determine the dependence of the target variable on the independent variables and to identify which independent variables have the highest probability of improving wine quality. Second, we use the robust machine learning algorithm XGBoost for efficient prediction of wine quality. We evaluate the model using several error measures: root mean square error, mean absolute error, R2, and mean square error. Comparing the results of XGBoost against other state-of-the-art machine learning techniques, the experimental results show that XGBoost outperforms them.
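The backward-elimination step described above can be sketched generically: starting from all features, repeatedly drop the feature whose removal does not lower a scoring function. The scorer below is a hypothetical stand-in for the paper's XGBoost model, and the feature names are illustrative:

```python
def backward_elimination(features, score, min_features=1):
    """Greedily drop features while doing so does not lower the score."""
    current = list(features)
    best = score(current)
    improved = True
    while improved and len(current) > min_features:
        improved = False
        for f in list(current):
            trial = [x for x in current if x != f]
            s = score(trial)
            if s >= best:          # dropping f does not hurt the score
                current, best = trial, s
                improved = True
                break
    return current, best

# stand-in scorer: pretend only alcohol and acidity carry signal,
# and every retained feature costs a small complexity penalty
signal = {"alcohol": 0.5, "acidity": 0.3}
def score(feats):
    return sum(signal.get(f, 0.0) for f in feats) - 0.01 * len(feats)

kept, best = backward_elimination(
    ["alcohol", "acidity", "sulphates", "density"], score
)
```

With a real model, `score` would be cross-validated accuracy (or negative RMSE), so elimination stops once every remaining feature genuinely contributes.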