• Title/Summary/Keyword: URL정보 (URL information)


A Study on the OpenURL META-TAG of Observation Research Data for Metadata Interoperability (관측분야 과학데이터 관련 메타데이터 상호운용성 확보를 위한 OpenURL 메타태그 연구)

  • Kim, Sun-Tae;Lee, Tae-Young
    • Journal of Information Management
    • /
    • v.42 no.3
    • /
    • pp.147-165
    • /
    • 2011
  • This paper presents a core OpenURL meta-tag set, written in Key/Encoded-Value (KEV) format, for the field of observational research, so that scientific data produced in experiments and observations can be distributed over the OpenURL service architecture. Until now, OpenURL has not provided meta-tags that represent scientific data, because it has focused on the circulation of scholarly and technical information drawn from theses, proceedings, journals, and other literature. To develop the core meta-tag set for observational research, the DataCite consortium metadata were analyzed and compared with the Dublin Core, OECD, and Directory Interchange Format metadata.
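
For illustration, a minimal sketch of how a Key/Encoded-Value (KEV) OpenURL query string can be assembled follows. The journal-format keys are taken from the standard Z39.88-2004 KEV matrices; the resolver address is hypothetical, and a dataset profile such as the one proposed in the paper would define its own referent keys.

```python
from urllib.parse import urlencode

# Minimal sketch of a Key/Encoded-Value (KEV) OpenURL ContextObject.
# The journal-format keys (rft.atitle, rft.jtitle, ...) are standard
# Z39.88-2004 keys; a dataset profile would add its own rft.* keys.
BASE_RESOLVER = "https://resolver.example.org/openurl"  # hypothetical resolver

def build_openurl(referent: dict) -> str:
    params = {
        "url_ver": "Z39.88-2004",
        "url_ctx_fmt": "info:ofi/fmt:kev:mtx:ctx",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    # Prefix every referent key with "rft." as the KEV format requires.
    params.update({f"rft.{k}": v for k, v in referent.items()})
    return f"{BASE_RESOLVER}?{urlencode(params)}"

print(build_openurl({
    "atitle": "A Study on the OpenURL META-TAG of Observation Research Data",
    "jtitle": "Journal of Information Management",
    "volume": "42", "issue": "3", "spage": "147", "date": "2011",
}))
```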

A Study on the Standardization of URL Identifier Pattern for Digital Contents (디지털 콘텐츠의 URL 식별패턴 표준화에 관한 연구)

  • 김문정;이두영
    • Proceedings of the Korean Society for Information Management Conference
    • /
    • 2001.08a
    • /
    • pp.265-270
    • /
    • 2001
  • As in the analog environment, each item of digital content in the digital environment must be assigned a unique identifier. As an identifier for digital content, the IETF (Internet Engineering Task Force) uses the URL (Uniform Resource Locator), which specifies an access mechanism for Internet resources under the URI (Uniform Resource Identifier) scheme. In practice, however, libraries running different OPAC (Online Public Access Catalog) systems use different URL identification patterns, which makes it difficult to retrieve the same resource consistently. Motivated by this problem, this study investigates a standardization scheme for URL identification syntax patterns for digital content.


Effects and Evaluations of URL Normalization (URL정규화의 적용 효과 및 평가)

  • Jeong, Hyo-Sook;Kim, Sung-Jin;Lee, Sang-Ho
    • Journal of KIISE:Databases
    • /
    • v.33 no.5
    • /
    • pp.486-494
    • /
    • 2006
  • A web page can be represented by syntactically different URLs. URL normalization is the process of transforming URL strings into a canonical form; through it, duplicate URL representations of a web page can be reduced significantly. A number of normalization methods have been developed and used heuristically, but there has been no study that analyzes them systematically. In this paper, we give a way to evaluate normalization methods in terms of the efficiency and effectiveness of web applications, and give users guidelines for selecting appropriate methods. To this end, we examine all the effects that can take place when a normalization method is adopted in a web application, and describe seven metrics for evaluating normalization methods. Lastly, the evaluation results for 12 normalization methods over 25 million actual URLs are reported.
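
As background, the sketch below applies a few widely used RFC 3986-style normalization steps (lowercasing the scheme and host, dropping the default port and the fragment, re-encoding the path); it is not a reproduction of the 12 methods evaluated in the paper.

```python
from urllib.parse import urlsplit, urlunsplit, unquote, quote

# Common normalization steps; userinfo handling is omitted for brevity.
DEFAULT_PORTS = {"http": "80", "https": "443"}

def normalize(url: str) -> str:
    scheme, netloc, path, query, fragment = urlsplit(url.strip())
    scheme = scheme.lower()
    host, _, port = netloc.partition(":")
    host = host.lower()
    if port == DEFAULT_PORTS.get(scheme, ""):          # drop default port
        netloc = host
    else:
        netloc = f"{host}:{port}" if port else host
    path = quote(unquote(path)) or "/"                  # re-encode path, add missing "/"
    return urlunsplit((scheme, netloc, path, query, ""))  # drop fragment

assert normalize("HTTP://Example.COM:80/a%7eb#top") == "http://example.com/a~b"
```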

Machine Learning-Based Malicious URL Detection Technique (머신러닝 기반 악성 URL 탐지 기법)

  • Han, Chae-rim;Yun, Su-hyun;Han, Myeong-jin;Lee, Il-Gu
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.3
    • /
    • pp.555-564
    • /
    • 2022
  • Recently, cyberattacks have been using hacking techniques based on intelligent, advanced malicious code against non-face-to-face environments such as telecommuting, telemedicine, and automated industrial facilities, and the damage is increasing. Traditional information protection systems such as anti-virus detect known malicious URLs based on signature patterns, so they cannot detect unknown malicious URLs. In addition, conventional static-analysis-based malicious URL detection is vulnerable to dynamic loading and cryptographic attacks. This study proposes a technique for efficiently detecting malicious URLs by dynamically learning malicious URL data. In the proposed technique, malicious code is classified using machine learning-based feature selection algorithms, and accuracy is improved by removing obfuscation elements after preprocessing with the Weighted Euclidean Distance (WED). According to the experimental results, the proposed machine learning-based malicious URL detection technique achieves an accuracy of 89.17%, an improvement of 2.82% over the conventional method.
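
As an illustration of the general approach, the sketch below extracts simple lexical features from a URL that a machine-learning classifier could consume; the paper's actual feature-selection algorithm and Weighted Euclidean Distance preprocessing are not reproduced, and the features listed are common illustrative choices.

```python
import re
from urllib.parse import urlsplit

# Illustrative lexical features only; not the paper's feature set.
SUSPICIOUS_TOKENS = ("login", "verify", "update", "secure", "account")

def url_features(url: str) -> list:
    host = urlsplit(url if "://" in url else "http://" + url).netloc
    return [
        float(len(url)),                               # overall length
        float(host.count(".")),                        # subdomain depth
        float(sum(c.isdigit() for c in url)),          # digit count
        float(bool(re.fullmatch(r"[\d.]+", host))),    # raw-IP host
        float(sum(t in url.lower() for t in SUSPICIOUS_TOKENS)),  # lure words
        float(url.count("%")),                         # percent-encoded characters
    ]

# Any off-the-shelf classifier could consume these vectors, e.g. with scikit-learn:
#   clf = RandomForestClassifier().fit([url_features(u) for u in urls], labels)
print(url_features("http://198.51.100.7/secure-login/verify.php?id=%41%42"))
```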

A Spam Filter System Based on Maximum Entropy Model Using Co-training with Spamminess Features and URL Features (스팸성 자질과 URL 자질의 공동 학습을 이용한 최대 엔트로피 기반 스팸메일 필터 시스템)

  • Gong, Mi-Gyoung;Lee, Kyung-Soon
    • The KIPS Transactions:PartB
    • /
    • v.15B no.1
    • /
    • pp.61-68
    • /
    • 2008
  • This paper presents a spam filter system based on the maximum entropy model that uses co-training with spamminess features and URL features. Spamminess features are the emphasizing or abnormal patterns in spam messages that spammers use to express their intention and to avoid being caught by the spam filter. Since spammers use URLs to give details and alter the URL format so as not to be caught by black lists, normal and abnormal URLs can be key features for detecting spam messages. Co-training with spamminess features and URL features uses two feature sets that are independent of each other in training, so the filter system can learn from them independently. Experimental results on the TREC spam test collection show that the proposed approach achieves 9.1% and 6.9% improvements in accuracy over the base system and the bogo filter system, respectively. The result analysis shows that the proposed spamminess features and URL features are helpful, and the co-training experiment shows that the two feature sets are useful because the number of training documents is reduced while accuracy stays close to that of batch learning.
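
A generic skeleton of the co-training loop over two independent feature views is sketched below; it assumes caller-supplied `train`, `predict_proba`, and feature-extraction hooks and is not the paper's maximum-entropy implementation.

```python
# Generic co-training skeleton with two feature views (spamminess vs. URL).
# `train`, `predict_proba`, and the extractors are assumed hooks supplied by
# the caller; predict_proba is expected to return (p_ham, p_spam).
def co_train(labeled, unlabeled, extract_spamminess, extract_url,
             train, predict_proba, rounds=10, per_round=20):
    view_a = [(extract_spamminess(m), y) for m, y in labeled]
    view_b = [(extract_url(m), y) for m, y in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        clf_a, clf_b = train(view_a), train(view_b)
        # Each classifier labels the unlabeled mails it is most confident
        # about and hands them to the *other* view as new training examples.
        scored = [(m, predict_proba(clf_a, extract_spamminess(m)),
                   predict_proba(clf_b, extract_url(m))) for m in pool]
        confident_a = sorted(scored, key=lambda t: -max(t[1]))[:per_round]
        confident_b = sorted(scored, key=lambda t: -max(t[2]))[:per_round]
        for m, pa, _ in confident_a:
            view_b.append((extract_url(m), int(pa[1] > pa[0])))
        for m, _, pb in confident_b:
            view_a.append((extract_spamminess(m), int(pb[1] > pb[0])))
        used = {id(m) for m, _, _ in confident_a + confident_b}
        pool = [m for m in pool if id(m) not in used]
    return train(view_a), train(view_b)
```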

Design and Implementation of Combination Module between GIS and URL Information (GIS와 URL정보 연동 모듈 설계 및 구현)

  • Lee Jin-Wook;Jang Se-Hyun;Kim Chang-Soo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2006.05a
    • /
    • pp.395-398
    • /
    • 2006
  • With the development of wireless Internet technology, rendering digital maps quickly in a mobile computing environment and delivering the map of the area a user is moving through to the client promptly and effectively over the wireless Internet requires lightweight versions of the digital maps provided by the National Geographic Information Institute. In addition, so that the vast resources of the Internet can be used effectively within a geographic information system, URLs are stored for specific buildings or regions on the displayed digital map, and a user can reach the corresponding web site with a simple operation on a URL-embedded location. This paper designs and implements a system that links geographic information with web information.
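
A toy sketch of the linking idea follows, with placeholder feature IDs and URLs and a simulated map click event; it is not the module described in the paper.

```python
import webbrowser

# Hypothetical mapping from digital-map feature IDs to linked URL information.
FEATURE_URLS = {
    "building_1023": "https://www.example.ac.kr",
    "district_045": "https://www.example.go.kr/tour",
}

def on_feature_clicked(feature_id: str) -> None:
    # In a real GIS client this would be triggered by a click on the map.
    url = FEATURE_URLS.get(feature_id)
    if url:
        webbrowser.open(url)  # open the linked web site in the default browser

on_feature_clicked("building_1023")
```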


Design and Implementation of Verification System for Malicious URL and Modified APK File on Cloud Platform (클라우드 플랫폼을 이용한 악성 URL 및 수정된 APK 파일 검증 시스템 설계 및 구현)

  • Je, Seolah;Nguyen, Vu Long;Jung, Souhwan
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.26 no.4
    • /
    • pp.921-928
    • /
    • 2016
  • Over the past few years, smishing attacks using malicious URLs and malicious applications have emerged as a major problem in South Korea, causing serious damage such as leakage of personal information and financial loss. Users are susceptible to smishing because a text message may contain curiosity-provoking content, so a user may follow the URL and download and install a malicious APK file without any doubt or verification. However, current anti-smishing apps, which adopt a post-processing method, have difficulty responding quickly. Since smishing can cause financial damage, users need a system that can determine in real time whether an APK file has been modified and whether a URL is malicious. This paper presents a cloud-based system for verifying malicious URLs and malicious APK files on the user device to prevent secondary damage such as smishing attacks and leakage of private information.
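
One plausible building block of such a verification service, sketched under assumptions, is an integrity check that compares the hash of a downloaded APK against hashes of known legitimate releases; the whitelist source and hash choice here are illustrative, not the paper's design.

```python
import hashlib

# SHA-256 hashes of legitimate APK releases; in practice these would be
# fetched from the cloud verification service (illustrative value below).
KNOWN_GOOD_SHA256 = {
    "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
}

def apk_is_modified(apk_path: str) -> bool:
    digest = hashlib.sha256()
    with open(apk_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() not in KNOWN_GOOD_SHA256

# Example (hypothetical path): apk_is_modified("/sdcard/Download/bank_update.apk")
```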

A Method of Efficient Web Crawling Using URL Pattern Scripts (URL 패턴 스크립트를 이용한 효율적인 웹문서 수집 방안)

  • Chang, Moon-Soo;Jung, June-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.6
    • /
    • pp.849-854
    • /
    • 2007
  • It is difficult to collect only target documents from the innumerable documents on the Web. One solution is to select target documents from Web sites that serve many documents in the target domain. In this paper, we propose an intelligent crawling method that collects the needed documents based on URL pattern scripts defined in XML. The proposed crawling method applies efficiently to sites that serve uniformly structured information from a database. Using this crawling method, we collected 50 thousand Web documents.
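
A minimal sketch of the idea follows, using a hypothetical XML pattern script whose element and attribute names are illustrative rather than the schema defined in the paper.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical URL pattern script; element/attribute names are illustrative.
SCRIPT = """
<patterns>
  <allow regex="^https://board\\.example\\.org/view\\?id=\\d+$"/>
  <deny  regex="\\.(jpg|png|css|js)$"/>
</patterns>
"""

def load_rules(xml_text):
    root = ET.fromstring(xml_text)
    allow = [re.compile(e.get("regex")) for e in root.findall("allow")]
    deny = [re.compile(e.get("regex")) for e in root.findall("deny")]
    return allow, deny

def should_fetch(url, allow, deny):
    # Fetch only URLs matching an allow pattern and no deny pattern.
    return any(p.search(url) for p in allow) and not any(p.search(url) for p in deny)

allow, deny = load_rules(SCRIPT)
print(should_fetch("https://board.example.org/view?id=42", allow, deny))  # True
```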

A spam mail blocking method using URL frequency analysis (URL 빈도분석을 이용한 스팸메일 차단 방법)

  • Baek Ki-young;Lee Chul-soo;Ryou Jae-cheol
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.14 no.6
    • /
    • pp.135-148
    • /
    • 2004
  • Spam mail that constantly changes its form has become difficult to block with conventional word-based spam detection methods. To solve this problem, this paper proposes a method of generating spam detection rules using URL frequency analysis. The method consists of collecting spam, extracting characteristic URLs from the collected spam mail, normalizing the URLs, generating spam detection rules from their frequency over time, and blocking mail accordingly. It can effectively block diverse and continually changing forms of spam mail.
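
The frequency-analysis step could look roughly like the sketch below, which reduces URLs found in collected spam to their hosts and turns frequent hosts into blocking rules; the threshold and host-level normalization are illustrative choices, not the paper's exact rule format.

```python
import re
from collections import Counter

# Extract URL hosts from mail bodies; a simple host-level normalization.
URL_RE = re.compile(r"""https?://([^\s"'<>/]+)""", re.IGNORECASE)

def build_block_rules(spam_bodies, min_count=5):
    hosts = Counter()
    for body in spam_bodies:
        hosts.update(h.lower() for h in URL_RE.findall(body))
    # Hosts that appear at least min_count times become blocking rules.
    return {host for host, n in hosts.items() if n >= min_count}

def is_spam(mail_body, rules):
    return any(h.lower() in rules for h in URL_RE.findall(mail_body))
```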

Short URLs Verification Approach for Phishing Site Detection Improvement (피싱 사이트 탐지 성능 향상을 위한 단축 URL 검증 기법)

  • Kim, Yun-Gi;Kim, Hae-Soo;Kim, Mi-Hui
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.80-81
    • /
    • 2022
  • With the recent growth of social media services and their increased accessibility, automatic classification of phishing URLs has become necessary. However, as URL-shortening services have become popular, phishing URLs also use them, so it is no longer possible to tell from the URL itself whether it leads to a phishing site or a legitimate one. Such cases can be checked with content-based detection, but this is slower and more resource-intensive than URL-based methods. This paper therefore proposes a technique that determines whether a URL is a shortened URL and detects phishing sites more efficiently.
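
A rough sketch of the expansion step is shown below: it recognizes a known shortener host, follows redirects with a HEAD request to obtain the final landing URL, and passes that URL to an existing URL-based check. The shortener list, the `requests` dependency, and `url_based_check` are assumptions for illustration.

```python
from urllib.parse import urlsplit
import requests  # third-party library, assumed available

# Illustrative shortener hosts; a real list would be maintained separately.
SHORTENER_HOSTS = {"bit.ly", "t.co", "tinyurl.com", "goo.gl"}

def expand_if_short(url: str, timeout: float = 5.0) -> str:
    host = urlsplit(url).netloc.lower()
    if host not in SHORTENER_HOSTS:
        return url
    # A HEAD request with redirects enabled yields the final landing URL.
    resp = requests.head(url, allow_redirects=True, timeout=timeout)
    return resp.url

def looks_like_phishing(url: str) -> bool:
    final_url = expand_if_short(url)
    return url_based_check(final_url)  # hypothetical existing URL-based classifier
```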