• Title/Abstract/Keyword: darknet

Search results: 28

Sharing Information for Event Analysis over the Wide Internet

  • Nagao, Masahiro;Koide, Kazuhide;Satoh, Akihiro;Keeni, Glenn Mansfield;Shiratori, Norio
    • Journal of Communications and Networks
    • /
    • Vol. 12, No. 4
    • /
    • pp.382-394
    • /
    • 2010
  • Cross-domain event information sharing is a topic of great interest in the area of event-based network management. In this work we use data sets which represent actual attacks on the operational Internet. We analyze the data sets to understand the dynamics of the attacks and then go on to show the effectiveness of sharing incident-related information to contain these attacks. We describe the universal data acquisition system for event-based management (UniDAS), a novel system for secure and automated cross-domain event information sharing. The system uses a generic, structured data format based on the standardized Incident Object Description Exchange Format (IODEF), an XML-based extensible data format for security incident information exchange. We propose a simple and effective security model for IODEF and apply it to UniDAS, the secure and automated generic event information sharing system. Finally, we present the system we have developed and evaluate its effectiveness.
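
To make the data format concrete, here is a minimal sketch of an IODEF-style incident document assembled with Python's standard library. The element names follow the IODEF schema, but the incident ID, timestamp, and naming authority are hypothetical, and a fully schema-valid document would carry additional required elements such as contact information.

```python
import xml.etree.ElementTree as ET

# Minimal IODEF-style incident document (element names per the IODEF schema);
# the incident ID, report time, and naming authority below are hypothetical.
NS = "urn:ietf:params:xml:ns:iodef-1.0"
ET.register_namespace("", NS)

doc = ET.Element(f"{{{NS}}}IODEF-Document", {"version": "1.00", "lang": "en"})
incident = ET.SubElement(doc, f"{{{NS}}}Incident", {"purpose": "reporting"})

incident_id = ET.SubElement(incident, f"{{{NS}}}IncidentID",
                            {"name": "csirt.example.org"})
incident_id.text = "189493"

report_time = ET.SubElement(incident, f"{{{NS}}}ReportTime")
report_time.text = "2010-04-02T22:19:24+00:00"

assessment = ET.SubElement(incident, f"{{{NS}}}Assessment")
ET.SubElement(assessment, f"{{{NS}}}Impact",
              {"type": "recon", "completion": "failed"})

print(ET.tostring(doc, encoding="unicode"))
```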

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu;Chung, TaeChoong;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 8
    • /
    • pp.3981-4004
    • /
    • 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring the actual distances between them through automatic detection and perspective transformation, given that the dimensions of the real space are known in advance. First, the corner points of the region of interest are detected from an image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to the real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to their real-world locations. Removing distortion from the image using camera calibration improves accuracy in most cases. The deep learning framework Darknet is used for detection, with the modifications necessary to integrate perspective transformation, camera calibration, un-distortion, etc. Experiments are performed with two types of cameras, one with barrel and the other with pincushion distortion. The results show that the difference between the calculated distances and those measured in real space with measuring tapes is very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any camera that has a fixed pose or is in motion, and using more points significantly enhances the accuracy of the real-world mapping even without camera calibration. Perspective transformation also improves object detection by normalizing the sizes of all objects.
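
The core geometric step described above can be sketched in a few lines of OpenCV. The corner coordinates, region dimensions, and pixel scale below are hypothetical stand-ins; in the paper the corners are detected automatically by a deep network rather than given by hand.

```python
import cv2
import numpy as np

# Map a region of known real-world size (here assumed 3 m x 2 m) to a
# top-down view, then measure distances between image points in metres.
# The corner pixel coordinates below are hypothetical examples.
corners_px = np.float32([[120, 80], [520, 95], [540, 400], [100, 390]])  # TL, TR, BR, BL
width_m, height_m = 3.0, 2.0
scale = 200  # pixels per metre in the rectified view

dst = np.float32([[0, 0],
                  [width_m * scale, 0],
                  [width_m * scale, height_m * scale],
                  [0, height_m * scale]])
H = cv2.getPerspectiveTransform(corners_px, dst)

def real_distance(p1, p2):
    """Distance in metres between two image points after rectification."""
    pts = np.float32([[p1], [p2]])
    rectified = cv2.perspectiveTransform(pts, H).reshape(2, 2)
    return np.linalg.norm(rectified[0] - rectified[1]) / scale

print(f"{real_distance((200, 150), (400, 300)):.2f} m")
```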

Multi-Layer Bitcoin Clustering through Off-Chain Data of Darkweb (다크웹 오프체인 데이터를 이용한 다계층 비트코인 클러스터링 기법)

  • Lee, Jin-hee;Kim, Min-jae;Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • Vol. 31, No. 4
    • /
    • pp.715-729
    • /
    • 2021
  • Bitcoin is a decentralized and transparent cryptocurrency. However, due to its anonymity, it is currently used to transfer funds for illegal transactions in darknet markets. To address this problem, clustering heuristics based on the characteristics of Bitcoin transactions have been proposed. However, we found that these previous heuristics suffer from high false negative rates. In this study, we propose a novel heuristic for Bitcoin clustering using off-chain data. Specifically, we collected and analyzed user review data from Silk Road 4 as off-chain data. As a result, 31.68% of the review data matched actual Bitcoin transactions, and the proposed method reduced false negatives by 91.7%.
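
As a toy illustration of the general idea, the sketch below matches off-chain market reviews to on-chain transactions by amount and time window. The record layout, tolerance, and window are assumptions for illustration only; the paper's actual matching heuristic is more involved.

```python
from datetime import datetime, timedelta

# Hypothetical off-chain records: (review timestamp, listed price in BTC).
reviews = [
    (datetime(2021, 3, 1, 14, 5), 0.0042),
]
# Hypothetical on-chain records: (tx timestamp, output value in BTC, txid).
transactions = [
    (datetime(2021, 3, 1, 13, 50), 0.0042, "f3a1..."),
    (datetime(2021, 3, 2, 9, 0), 0.0100, "9bc0..."),
]

def match(reviews, transactions, window=timedelta(hours=24), tol=1e-8):
    """Pair each review with transactions of equal value in a time window."""
    pairs = []
    for r_time, r_amount in reviews:
        for t_time, t_amount, txid in transactions:
            if abs(r_time - t_time) <= window and abs(r_amount - t_amount) <= tol:
                pairs.append((r_time, txid))
    return pairs

print(match(reviews, transactions))  # -> [(datetime(...), 'f3a1...')]
```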

Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study

  • Kim, Hak-Sun;Ha, Eun-Gyu;Kim, Young Hyun;Jeon, Kug Jin;Lee, Chena;Han, Sang-Sun
    • Imaging Science in Dentistry
    • /
    • Vol. 52, No. 2
    • /
    • pp.219-224
    • /
    • 2022
  • Purpose: This study aimed to evaluate the performance of transfer learning in a deep convolutional neural network for classifying implant fixtures. Materials and Methods: Periapical radiographs of implant fixtures obtained using the Superline (Dentium Co. Ltd., Seoul, Korea), TS III (Osstem Implant Co. Ltd., Seoul, Korea), and Bone Level Implant (Institut Straumann AG, Basel, Switzerland) systems were selected from patients who underwent dental implant treatment. All 355 implant fixtures comprised the total dataset and were annotated with the name of the system. The total dataset was split into a training dataset and a test dataset at a ratio of 8 to 2. YOLOv3 (You Only Look Once version 3, available at https://pjreddie.com/darknet/yolo/), a deep convolutional neural network pretrained on a large image dataset of objects, was used to train the model to classify fixtures in periapical images, in a process called transfer learning. The network was trained with the training dataset for 100, 200, and 300 epochs. Using the test dataset, the performance of the network was evaluated in terms of sensitivity, specificity, and accuracy. Results: When YOLOv3 was trained for 200 epochs, the sensitivity, specificity, accuracy, and confidence score were the highest for all systems, with overall results of 94.4%, 97.9%, 96.7%, and 0.75, respectively. The network showed the best performance in classifying Bone Level Implant fixtures, with 100.0% sensitivity, specificity, and accuracy. Conclusion: Through transfer learning, high performance could be achieved with YOLOv3, even with a small amount of data.
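
For reference, the per-class metrics reported above can be computed one-vs-rest from predicted and true labels. A minimal sketch follows; the label arrays are hypothetical, not the study's data.

```python
import numpy as np

# One-vs-rest sensitivity, specificity, and accuracy for a 3-class fixture
# classifier. The class names match the abstract; the labels are hypothetical.
classes = ["Superline", "TS III", "Bone Level Implant"]
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 1, 0, 2])

for i, name in enumerate(classes):
    tp = np.sum((y_pred == i) & (y_true == i))
    tn = np.sum((y_pred != i) & (y_true != i))
    fp = np.sum((y_pred == i) & (y_true != i))
    fn = np.sum((y_pred != i) & (y_true == i))
    sens = tp / (tp + fn)          # true positive rate for this class
    spec = tn / (tn + fp)          # true negative rate for this class
    acc = (tp + tn) / len(y_true)  # one-vs-rest accuracy
    print(f"{name}: sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")
```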

Abnormal behaviour in rock bream (Oplegnathus fasciatus) detected using deep learning-based image analysis

  • Jang, Jun-Chul;Kim, Yeo-Reum;Bak, SuHo;Jang, Seon-Woong;Kim, Jong-Myoung
    • Fisheries and Aquatic Sciences
    • /
    • Vol. 25, No. 3
    • /
    • pp.151-157
    • /
    • 2022
  • Various approaches have been applied to transform aquaculture from a manual, labour-intensive industry into one dependent on automation technologies in the era of the fourth industrial revolution. Technologies for monitoring physical conditions have been applied successfully in most aquafarm facilities; however, real-time biological monitoring systems that can observe fish condition and behaviour are still required. In this study, we used a video recorder placed on top of a fish tank to observe the swimming patterns of rock bream (Oplegnathus fasciatus), first one fish alone and then a group of five fish. Rock bream in the video samples were successfully identified using the you-only-look-once v3 algorithm, which is based on the Darknet-53 convolutional neural network. In addition to recordings of swimming behaviour under normal conditions, the swimming patterns of fish under abnormal conditions were recorded after adding an anaesthetic or lowering the salinity. The abnormal conditions led to changes in movement velocity (3.8 ± 0.6 cm/s), involving an initial rapid increase in speed (up to 16.5 ± 3.0 cm/s upon 2-phenoxyethanol treatment) before the fish stopped moving, as well as a shift from upright swimming to lying on their sides as they died. Machine learning was applied to datasets consisting of normal or abnormal behaviour patterns to evaluate the fish behaviour. The proposed algorithm showed high accuracy (98.1%) in discriminating normal and abnormal rock bream behaviour. We conclude that artificial intelligence-based detection of abnormal behaviour can be applied to develop an automatic bio-management system for the aquaculture industry.
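
A minimal sketch of the speed-based part of such a pipeline: compute per-frame swimming speed from detected fish centroids and flag departures from the normal range. The centroids, pixel-to-centimetre scale, frame rate, and threshold rule are all hypothetical; only the 3.8 ± 0.6 cm/s figure comes from the abstract.

```python
import numpy as np

# Estimate swimming speed (cm/s) from per-frame fish centroids and flag
# frames whose speed departs from the normal range. The centroid track,
# pixel scale, frame rate, and 3-sigma rule below are hypothetical.
FPS = 30            # assumed camera frame rate
CM_PER_PX = 0.05    # assumed pixel-to-centimetre scale

centroids = np.array([[100, 200], [103, 201], [150, 230], [220, 260]], dtype=float)

disp_px = np.linalg.norm(np.diff(centroids, axis=0), axis=1)  # per-frame displacement
speed_cm_s = disp_px * CM_PER_PX * FPS

NORMAL_MEAN, NORMAL_SD = 3.8, 0.6  # cm/s, from the abstract
abnormal = np.abs(speed_cm_s - NORMAL_MEAN) > 3 * NORMAL_SD
print(speed_cm_s.round(1), abnormal)
```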

Study of an underpass inundation forecast using an object detection model (객체탐지 모델을 활용한 지하차도 침수 예측 연구)

  • Oh, Byunghwa;Hwang, Seok Hwan
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • Korea Water Resources Association 2021 Annual Conference
    • /
    • pp.302-302
    • /
    • 2021
  • Although underpasses are usually inundated when local or flash floods occur, when rainfall exceeding 80 mm per hour fell overnight in the Busan area on July 23, 2020, water filled an underpass to the ceiling in an instant; preemptive vehicle control could not be carried out in time, and three drivers who failed to evacuate lost their lives. To manage floods and other disasters quickly, it is necessary to move beyond the existing one-way, government-led disaster response and to carry out integrated collection and analysis of big data, encompassing both structured and unstructured data. In this study, we performed object detection research on CCTV data (sensors) from an underground tunnel adjacent to an underpass in the Busan area, with the aim of providing information that minimizes casualties when a disaster occurs. We used CCTV footage from the Busan area where tunnel inundation occurred, and by re-encoding the CCTV data to strip the audio track used in video editing, we reduced the size of the loaded video files. To detect objects entering the underpass we used YOLO (You Only Look Once), one of the fastest object detection algorithms; specifically, we applied YOLOv3, which can run at up to 170 frames per second on a modern GPU, with the Darknet-53 backbone, which offers higher classification performance. YOLOv3 detects objects faster and more accurately than previous object detection models, and has the advantage that speed and accuracy can easily be traded off simply by changing the model size, without retraining. After dividing the CCTV footage into morning (normal) and afternoon (inundation) periods, we applied the YOLO algorithm to classify cars, buses, trucks, and people near the underground tunnel, and confirmed that object detection accuracy on the CCTV data was high.
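
A minimal sketch of running a pretrained YOLOv3 model on a single CCTV frame with OpenCV's dnn module, keeping only the classes the study tracks. The file paths are hypothetical; the yolov3.cfg and yolov3.weights files are distributed from the Darknet site.

```python
import cv2

# Run a pretrained YOLOv3 model on one CCTV frame and report detections
# of the classes the study tracks (car, bus, truck, person).
# File paths are hypothetical placeholders.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().splitlines()
wanted = {"car", "bus", "truck", "person"}

frame = cv2.imread("underpass_frame.jpg")
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

h, w = frame.shape[:2]
for out in outputs:
    for det in out:               # det = [cx, cy, bw, bh, objectness, class scores...]
        scores = det[5:]
        cls = scores.argmax()
        if scores[cls] > 0.5 and classes[cls] in wanted:
            cx, cy = int(det[0] * w), int(det[1] * h)
            print(classes[cls], (cx, cy))
```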

Development of Real-time Video Surveillance System Using the Intelligent Behavior Recognition Technique (지능형 행동인식 기술을 이용한 실시간 동영상 감시 시스템 개발)

  • Chang, Jae-Young;Hong, Sung-Mun;Son, Damy;Yoo, Hojin;Ahn, Hyoung-Woo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • Vol. 19, No. 2
    • /
    • pp.161-168
    • /
    • 2019
  • Recently, video equipment such as CCTV, which is spreading rapidly, has been used to monitor and respond to abnormal situations in most government agencies, companies, and households. In most cases, however, abnormal situations are recognized by human monitoring staff, so an immediate response is difficult and the footage is used only for post-incident analysis. In this paper, we present the results of developing a video surveillance system that automatically recognizes abnormal situations and immediately sends such events to a smartphone using the latest deep learning technology. The proposed system extracts skeletons from the human objects in real time using the OpenPose library and then recognizes human behaviours automatically using deep learning. To this end, we reimplemented the OpenPose library, originally developed in the Caffe framework, on the Darknet framework to improve real-time processing, and verified the performance improvement through experiments. The system introduced in this paper offers accurate and fast behaviour recognition and good scalability, so it is expected to be usable in video surveillance systems for various applications.
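
To illustrate the skeleton-to-behaviour step in the simplest terms, the sketch below trains a generic classifier on flattened 2D keypoint vectors of the kind a pose estimator produces. The random features, the 18-joint layout, and the two labels are hypothetical placeholders; the paper's own recognizer is a deep network on the Darknet framework, not the classifier shown here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy behaviour classifier over pose keypoints. Each sample is a flattened
# set of 18 (x, y) skeleton joints, as OpenPose produces; the random data
# and the two labels (0 = normal, 1 = abnormal) are hypothetical.
rng = np.random.default_rng(0)
X = rng.random((200, 18 * 2))       # 200 pose samples
y = rng.integers(0, 2, size=200)    # normal / abnormal labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```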

Influence of Self-driving Data Set Partition on Detection Performance Using YOLOv4 Network (YOLOv4 네트워크를 이용한 자동운전 데이터 분할이 검출성능에 미치는 영향)

  • Wang, Xufei;Chen, Le;Li, Qiutan;Son, Jinku;Ding, Xilong;Song, Jeongyoung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • Vol. 20, No. 6
    • /
    • pp.157-165
    • /
    • 2020
  • As neural networks and self-driving datasets continue to develop, partitioning the dataset is one way to improve the performance of a network model for detecting moving objects. Within the Darknet framework, the YOLOv4 (You Only Look Once v4) network model was used for training and testing on the Udacity dataset. The Udacity dataset was divided into training, validation, and test subsets according to seven different ratios. The K-means++ algorithm was used to cluster the dimensions of the object boxes for each of the seven groups. By tuning the hyperparameters of the YOLOv4 network during training, optimal model parameters were obtained for each of the seven groups, and these parameters were then used to run detection on the corresponding test sets and compare the results. The experimental results showed that YOLOv4 can effectively detect the large, medium, and small moving objects represented by Truck, Car, and Pedestrian in the Udacity dataset. When the ratio of training, validation, and test sets is 7:1.5:1.5, the optimal YOLOv4 model achieves the highest detection performance, with mAP50 of 80.89%, mAP75 of 47.08%, and a detection speed of 10.56 FPS.
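
The anchor-clustering step can be sketched with scikit-learn's K-means++ initialization on the (width, height) pairs of ground-truth boxes. The box sizes below are randomly generated stand-ins for the Udacity annotations, and plain Euclidean distance is used here, whereas YOLO-style anchor clustering often uses a 1 − IoU distance instead.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster ground-truth box sizes with k-means++ to obtain anchor boxes.
# The (width, height) pairs in pixels are synthetic stand-ins; in practice
# they come from the dataset's annotation files.
rng = np.random.default_rng(1)
boxes = np.vstack([
    rng.normal((30, 50), 5, (300, 2)),     # pedestrian-sized boxes
    rng.normal((90, 60), 10, (300, 2)),    # car-sized boxes
    rng.normal((200, 150), 20, (300, 2)),  # truck-sized boxes
]).clip(min=1)

km = KMeans(n_clusters=9, init="k-means++", n_init=10, random_state=0).fit(boxes)
anchors = km.cluster_centers_[np.argsort(km.cluster_centers_.prod(axis=1))]
print(np.round(anchors).astype(int))  # nine anchors, sorted by area
```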