• Title/Summary/Keyword: crawler (크롤러)

87 search results, processing time 0.029 seconds

BUSINESS GUIDE_Company Visit - A Company Changing the World with Light, Now Dreaming of the Global Stage! - (주)알지비테크놀러지

  • Korea Electrical Products Safety Association (한국전기제품안전협회)
    • Product Safety / s.198 / pp.38-41 / 2010
  • The 'evacuation guidance line that works without electricity', namely the 'wireless evacuation guidance line', is a product of (주)알지비테크놀러지 (Chairman 조성복). A specialized manufacturer of light-emitting sheets, the company has long worked on products that use these sheets to overcome the weaknesses of conventional emergency lights and photoluminescent signs: breakage from external impact, and limited visibility and guidance capability in dense smoke. To take part in the subway fire response portion of the '2010 Safe Korea Disaster Response Exercise' held May 12-14, 2010, the company unveiled its wireless evacuation guidance line for the first time with an installation at Mongchontoseong Station. The product drew high praise and attention from the domestic and international disaster safety industry, and with a flood of product inquiries and consultations, the company expects to grow into a globally recognized firm.


Development of Web Crawler and Network Analysis Technology for Occurrence and Prediction of Flooding (수난 발생 및 규모 예측을 위한 웹 크롤러 및 네트워크 분석기술 개발)

  • Seo, Dongmin; Kim, Hoyong; Lee, Jeongha; Hwang, Seokhwan
    • Proceedings of the Korea Contents Association Conference / 2019.05a / pp.5-6 / 2019
  • Data used for big data analysis comes in many forms: news, blogs, SNS, papers, patents, and data collected from sensors. In particular, the use of web data, which provides reliable data in real time, is spreading. As big data applications expand into diverse fields and web data grows exponentially every year, web data has recently come to play a very important role as a disaster-response medium. The source data used in big data analysis takes the form of a network, and network analysis has recently shown its potential to deliver valuable information to society and humanity across many domains, from effective product advertising via social network analysis to key gene discovery and drug repositioning, raising the profile of network analysis technology. This paper proposes a web crawler and network analysis technology that uses news and SNS data provided on the web to support the prediction of flood occurrence and scale.

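The entry above couples a crawler with network analysis over news and SNS data. A minimal sketch of the network-analysis half is shown below; the keyword lists stand in for crawled articles and are invented for illustration, not taken from the paper.

```python
from itertools import combinations
from collections import defaultdict

# Keywords extracted from (hypothetical) crawled news articles about flooding.
articles = [
    ["flood", "rainfall", "river", "evacuation"],
    ["flood", "rainfall", "dam"],
    ["river", "dam", "flood"],
]

# Build a co-occurrence network: nodes are keywords, edge weight counts
# how many articles mention both keywords.
edges = defaultdict(int)
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):
        edges[(a, b)] += 1

# Weighted degree centrality: total edge weight incident to each keyword.
degree = defaultdict(int)
for (a, b), w in edges.items():
    degree[a] += w
    degree[b] += w

print(max(degree, key=degree.get))  # the most connected keyword
```

On this toy input the most central keyword is the one co-mentioned with every other, which is the kind of signal the paper uses to gauge event prominence.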

Dark Web-based Malicious Code Detection and Analysis (다크웹 크롤러를 사용한 악성코드 탐지 및 분석)

  • Kim, Ah-Lynne; Lee, Eun-Ji
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.446-449 / 2020
  • The rate of cybercrime conducted through the dark web is rising sharply both in Korea and abroad. Given the nature of the dark web, however, it is difficult to find malware shared in this hidden region of the internet. In particular, many dark web services apply various techniques to block automated collection by crawler bots. We therefore collected URLs on the dark web following existing research methods, and then built an additional downloader to collect files of specific formats such as exe and zip. These files will next be scanned with an integrated virus-scanning engine to single out suspicious files. The suspicious files will then undergo static and dynamic analysis, and the resulting detailed reports are expected to yield meaningful results for analyzing the distribution and origin of malware on the dark web.
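The collection step described above (crawl URLs, then download only files of certain formats) can be sketched without touching the network. The URL list and the `.onion` hostnames below are invented for illustration; only the exe/zip extension filter comes from the abstract.

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

# File formats of interest, as named in the abstract (exe, zip).
TARGET_EXTS = {".exe", ".zip"}

def filter_downloadable(urls):
    """Keep only URLs whose path ends in a target extension."""
    hits = []
    for url in urls:
        path = PurePosixPath(urlparse(url).path)
        if path.suffix.lower() in TARGET_EXTS:
            hits.append(url)
    return hits

# Hypothetical URLs collected by a dark web crawler.
collected = [
    "http://example.onion/tools/dropper.exe",
    "http://example.onion/forum/index.html",
    "http://example.onion/leaks/archive.ZIP",
]
print(filter_downloadable(collected))
```

A real downloader would then fetch each hit through a Tor proxy and hand the file to a scanning engine; that part is deliberately omitted here.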

Refresh Cycle Optimization for Web Crawlers (웹크롤러의 수집주기 최적화)

  • Cho, Wan-Sup; Lee, Jeong-Eun; Choi, Chi-Hwan
    • The Journal of the Korea Contents Association / v.13 no.6 / pp.30-39 / 2013
  • A web crawler should keep its data fresh with minimal server overhead, even for the large volumes of data on today's web sites. Server overhead grows rapidly as data volumes explode in the big data era. The amount of web information is increasing quickly with advanced wireless networks and the emergence of diverse smart devices, and information is continuously produced and updated anywhere, anytime through easy-to-use web platforms and smart devices. How frequently updated web data should be refreshed during collection and integration has therefore become a hot issue. In this paper, we propose dynamic web-data crawling methods that include sensitive detection of web site changes and dynamic retrieval of web pages from target sites based on historical update patterns. We also implemented a Java-based web crawling application and compared the efficiency of conventional static approaches with our dynamic one. Our experiments showed a 46.2% reduction in overhead, with fresher data, compared to static crawling methods.
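The core idea above, scheduling revisits from historical update patterns, can be sketched as follows. The boolean change-history encoding and the interval bounds are assumptions for illustration, not the paper's actual algorithm.

```python
def next_interval(history, base=3600.0, min_i=600.0, max_i=86400.0):
    """Pick the next revisit interval (seconds) for one page.

    history: list of booleans, True if the page had changed when checked.
    Pages that change often are revisited sooner; stable pages less often.
    """
    if not history:
        return base  # no observations yet: use the default cycle
    change_rate = sum(history) / len(history)  # fraction of checks that saw a change
    # Interval inversely proportional to the observed change rate,
    # clamped to sane bounds so no page is polled too often or forgotten.
    interval = base / max(change_rate, 1e-3)
    return min(max(interval, min_i), max_i)

# A page that changed on every check keeps the base cycle;
# one that changed on 1 of 4 checks is revisited four times less often.
print(next_interval([True, True, True, True]))
print(next_interval([True, False, False, False]))
```

A static crawler uses one fixed cycle for every page; the per-page interval above is what lets a dynamic scheduler spend its request budget on the pages that actually change.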

Effective Web Crawling Orderings from Graph Search Techniques (그래프 탐색 기법을 이용한 효율적인 웹 크롤링 방법들)

  • Kim, Jin-Il; Kwon, Yoo-Jin; Kim, Jin-Wook; Kim, Sung-Ryul; Park, Kun-Soo
    • Journal of KIISE: Computer Systems and Theory / v.37 no.1 / pp.27-34 / 2010
  • Web crawlers are fundamental programs that iteratively download web pages by following links, starting from a small set of initial URLs. Several web crawling orderings have previously been proposed to crawl popular web pages in preference to others, but some graph search techniques whose characteristics and efficient implementations have been studied in the graph theory community have not yet been applied to web crawling orderings. In this paper we consider various graph search techniques, including lexicographic breadth-first search, lexicographic depth-first search, and maximum cardinality search as well as the well-known breadth-first and depth-first search, and choose effective web crawling orderings that have linear time complexity and crawl popular pages early. In particular, for maximum cardinality search and lexicographic breadth-first search, whose implementations are non-trivial, we propose linear-time web crawling orderings by applying the partition refinement method. Experimental results show that maximum cardinality search has desirable properties in both time complexity and the quality of crawled pages.
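Breadth-first search, the simplest of the orderings compared above, can be sketched on an in-memory link graph. The graph below is invented; a real crawler would discover edges only by parsing each fetched page, and the paper's lexicographic and maximum-cardinality variants are not reproduced here.

```python
from collections import deque

def bfs_order(graph, seeds):
    """Return the order in which a BFS crawler visits pages.

    graph: dict mapping a URL to the list of URLs it links to.
    seeds: the small set of initial URLs the crawl starts from.
    """
    seen = set(seeds)
    queue = deque(seeds)
    order = []
    while queue:
        page = queue.popleft()      # FIFO queue gives breadth-first order
        order.append(page)
        for link in graph.get(page, []):
            if link not in seen:    # crawl each page at most once
                seen.add(link)
                queue.append(link)
    return order

# Hypothetical link graph.
web = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"]}
print(bfs_order(web, ["a"]))
```

Swapping the FIFO queue for a stack yields depth-first order; the orderings the paper favors differ precisely in how this frontier data structure selects the next page.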

A System Utilizing Crawlers and Morphological Analyzers to Identify Personal Information Leaks on the Web (크롤러와 형태소 분석기를 활용한 웹상 개인정보 유출 판별 시스템)

  • Lee, Hyeongseon; Park, Jaehee; Na, Cheolhun; Jung, Hoekyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.559-560 / 2017
  • Recently, as the problem of personal information leakage has emerged, studies on data collection and web document classification have been conducted. Existing systems judge only the existence of personal information; because they do not classify documents published under the same name or by the same user, unnecessary data is not filtered out. In this paper, we propose a system that can identify data types and distinguish homonyms by using a crawler and a morphological analyzer to solve this problem. The user collects personal information on the web through the crawler, and the collected data is classified through the morphological analyzer, after which leaked data can be confirmed. Reusing the system yields more accurate results, and users can be provided with customized data.

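The leak-checking step in the entry above can be sketched with simple pattern matching over crawled text. The regexes below (a Korean mobile number shape and a generic email shape) are illustrative assumptions; the paper's morphological analysis and homonym handling are not reproduced here.

```python
import re

# Illustrative personal-information patterns (assumed, not from the paper):
# Korean mobile phone numbers and email addresses.
PATTERNS = {
    "phone": re.compile(r"\b01[016789]-\d{3,4}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_pii(text):
    """Return {kind: [matches]} for every pattern that fires on the text."""
    hits = {}
    for kind, pat in PATTERNS.items():
        found = pat.findall(text)
        if found:
            hits[kind] = found
    return hits

doc = "Contact: hong@example.com or 010-1234-5678"
print(find_pii(doc))
```

In the paper's pipeline a morphological analyzer would additionally classify the surrounding document, so that hits under the same name but belonging to different people can be told apart.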

Design and Implementation of Event-driven Real-time Web Crawler to Maintain Reliability (신뢰성 유지를 위한 이벤트 기반 실시간 웹크롤러의 설계 및 구현)

  • Ahn, Yong-Hak
    • Journal of the Korea Convergence Society / v.13 no.4 / pp.1-6 / 2022
  • Real-time systems that use web crawling data must provide users with data identical to the remote source. To do this, a web crawler repeatedly sends HTTP (HyperText Transfer Protocol) requests to the remote server to check whether the remote data has changed. This process places network load on both the crawling server and the remote server, causing problems such as excessive traffic. To solve this problem, this paper proposes an event-driven real-time web crawling technique that reduces network overload while ensuring that the crawling server's data remains consistent with data from multiple remote sources. The proposed method performs the crawling process based on events that request unit data and list data. The results show that the proposed method reduces the network traffic overhead of existing web crawlers while securing data reliability. In the future, research on the convergence of event-based and time-based crawling is required.
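The event-driven idea above (fetch the remote resource only when a user event asks for it, instead of polling on a timer) can be sketched as follows. The fetcher callable, cache, and request counter are invented for illustration and are not the paper's design.

```python
class EventDrivenCrawler:
    """Fetch remote data only when a user event requests it,
    instead of polling the remote server on a fixed timer."""

    def __init__(self, fetch):
        self.fetch = fetch        # callable that retrieves remote data by key
        self.cache = {}           # local copy, refreshed on each event
        self.requests_sent = 0

    def on_user_event(self, key):
        # Exactly one remote request per user event: the local copy is
        # re-synchronized at the moment the user needs it, so there is
        # no background polling traffic between events.
        self.requests_sent += 1
        self.cache[key] = self.fetch(key)
        return self.cache[key]

# A dict standing in for a remote server.
remote = {"list": ["item1", "item2"], "item1": "detail"}
crawler = EventDrivenCrawler(lambda k: remote[k])

crawler.on_user_event("list")   # list-data event
crawler.on_user_event("item1")  # unit-data event
print(crawler.requests_sent)
```

Two user events cost two requests, whereas a timer-based crawler would issue one request per polling period whether or not anyone looked at the data.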

Compensation of Relation Formula between Luffing Wire Tension and Overturning Moment in a Crawler Crane Considering the Deflection of Boom (크롤러 크레인에서 붐의 처짐을 고려한 러핑와이어 장력과 전도모멘트 사이의 관계식 보정)

  • Jang, Hyo-Pil; Han, Dong-Seop
    • Journal of the Korean Society of Manufacturing Process Engineers / v.10 no.4 / pp.44-49 / 2011
  • The crawler crane, which consists of a lattice boom, a driving system, and a movable vehicle, is widely used on construction sites. It must be fitted with an overload limiter to prevent overturning accidents and structural fracture. This research derives the relation formulas needed to design such an overload limiter, as follows. First, relation formulas between the wire-rope tension and the hoisting load or the overturning ratio are established as functions of the luffing angle and the length of the lattice boom. Second, the derived formulas are corrected using a compensated angle that accounts for boom deflection, obtained through finite element analysis. The stiffness analysis is carried out for 30 models, combining 6 luffing angles with 5 boom lengths. Finally, the shape design of a stick-type load cell, the device that measures the wire-rope tension, is performed; 5 notch radii and 5 center-hole radii are adopted as design parameters for the strength analysis of the load cell.

Modern Concurrent Programming for Multicore Environments (동시성으로 작성하는 파이썬 크롤러)

  • Kim, Nam-gue; Kang, Young-Jin; Lee, HoonJae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.430-433 / 2017
  • Programming that ensures concurrency is essential for developers: without it, program speed is unlikely to improve unless the hardware itself advances. Languages that support concurrent code well include Go, Elixir, and Scala. Python, which offers many useful libraries, also supports concurrent programming through asyncio and coroutines. This paper defines the concepts of concurrency and parallelism and explains what to watch for when writing concurrent code in Python. A crawler that collects web data is written with concurrent code and compared with programs written in sequential and multithreaded styles.

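The asyncio-and-coroutine approach the entry describes can be sketched as below. The fetch function is a stand-in: `asyncio.sleep` simulates network latency, where a real crawler would use an asynchronous HTTP client, and the URLs are invented.

```python
import asyncio

async def fetch(url):
    """Stand-in for an HTTP request: a real crawler would await an
    async HTTP client here; asyncio.sleep simulates network latency."""
    await asyncio.sleep(0.01)
    return f"<html>{url}</html>"

async def crawl(urls):
    # Schedule all fetches concurrently: while one coroutine awaits the
    # network, the event loop runs the others, so total time is roughly
    # one latency period rather than the sum of all of them.
    pages = await asyncio.gather(*(fetch(u) for u in urls))
    return dict(zip(urls, pages))

results = asyncio.run(crawl(["http://a.example", "http://b.example"]))
print(len(results))
```

This is concurrency, not parallelism: a single thread interleaves I/O waits, which is exactly the distinction the paper draws before comparing against sequential and multithreaded versions.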

Dynamic Model Development and Simulation of Crawler Type Excavator (크롤러형 굴삭기의 동역학적 모델 개발 및 시뮬레이션)

  • Kwon, Soon-Ki
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.18 no.6 / pp.642-651 / 2009
  • The history of excavator design is not long, so most design considerations still focus on static analysis or simple functional improvements based on it. However, the real forces experienced by each component of an excavator are highly transient and impulsive. Predicting and evaluating the excavator's response to dynamic loads at an early design stage, through transient dynamic analysis and established design techniques, therefore plays an important role in reducing development cost, shortening delivery time, decreasing vehicle weight, and optimizing the system design. In this paper, the commercial software packages DADS and ANSYS are used to develop a track model of a crawler-type excavator and to evaluate its performance and dynamic characteristics through various simulations. The track is modeled with the DADS Track Vehicle Superelement, and the reaction forces on the track rollers are predicted through driving simulation. The vibration characteristics of the upper frame and cabin at the low-RPM idle state are evaluated with a rigid-body engine model, and body flexibility effects are considered to determine more accurate joint reaction forces and accelerations under the upper-frame swing motion.
