• Title/Summary/Keyword: Processing

A Study on the Methods of Building Tools and Equipment for Digital Forensics Laboratory (디지털증거분석실의 도구·장비 구축 방안에 관한 연구)

  • Su-Min Shin;Hyeon-Min Park;Gi-Bum Kim
    • Convergence Security Journal / v.22 no.5 / pp.21-35 / 2022
  • With the development of information and communication technology and the Fourth Industrial Revolution, the use of digital information continues to grow and diversify, and crimes exploiting digital information are increasing in proportion. However, there are few documented cases in Korea of establishing an environment for processing and analyzing digital evidence. Budgets differ across organizations, and digital forensics laboratories built without resolving the chronic problem of securing space have no standard that can be referenced from the initial configuration stage. Based on this problem awareness, this thesis conducted an exploratory study on the tools and equipment needed to build a digital forensics laboratory. As the research method, focus group interviews were conducted with 15 experts with extensive practical experience in digital forensics laboratories or the wider digital forensics field, and their opinions were collected on nine areas: network configuration, analyst computers, personal tools and equipment, imaging devices, dedicated software, open-source software, common tools and equipment, accessories, and other considerations. As a result, a reference list of tools and equipment for digital forensics laboratories was derived.

The study of security management for application of blockchain technology in the Internet of Things environment (Focusing on security cases in autonomous vehicles including driving environment sensing data and occupant data) (사물인터넷 환경에서 블록체인 기술을 이용한 보안 관리에 관한 소고(주행 환경 센싱 데이터 및 탑승자 데이터를 포함한 자율주행차량에서의 보안 사례를 중심으로))

  • Jang Mook KANG
    • Convergence Security Journal / v.22 no.4 / pp.161-168 / 2022
  • Since the COVID-19 pandemic, as non-face-to-face services have expanded, domain services that guarantee the integrity of Internet of Things (IoT) sensing data with blockchain technology are also growing. For example, in areas such as safety and security using CCTV, a process is required to update firmware safely in real time and to confirm that no malicious intrusion has occurred. Under existing security procedures, the person in charge often carried a USB device and updated the firmware directly on site. When a private blockchain technology such as Hyperledger is used instead, the convenience and work efficiency of the IoT environment can be expected to improve. This article describes scenarios for preventing vulnerabilities in the operating environments of various customers, such as firmware updates and device changes performed in a non-face-to-face setting. In particular, it introduces blockchain techniques suited to the IoT, which is easily exposed to malicious security risks such as hacking and information leakage. The article presents the necessity and implications of security management that guarantees integrity by applying blockchain technology in the ever-expanding IoT environment, and is expected to offer insight into how blockchain techniques can inform guidelines for strengthening IoT security in the future.
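
As an illustration of the firmware-integrity scenario described in this abstract, the sketch below records the SHA-256 digest of a firmware image on a ledger and has a device verify a download against that record before installing it. The in-memory dictionary stands in for a real ledger such as a Hyperledger Fabric channel; the function names, device model, and version strings are hypothetical.

```python
import hashlib

# Stand-in for a blockchain ledger (e.g., a Hyperledger Fabric channel);
# in practice the digest would be written by a chaincode transaction.
ledger = {}

def publish_firmware(device_model: str, version: str, image: bytes) -> None:
    """Record the SHA-256 digest of a released firmware image on the ledger."""
    ledger[(device_model, version)] = hashlib.sha256(image).hexdigest()

def verify_firmware(device_model: str, version: str, image: bytes) -> bool:
    """A device checks a downloaded image against the ledger entry before installing."""
    expected = ledger.get((device_model, version))
    return expected is not None and hashlib.sha256(image).hexdigest() == expected

# Hypothetical usage: the vendor publishes, the device verifies.
release = b"...firmware binary..."
publish_firmware("cctv-cam-01", "1.2.3", release)
print(verify_firmware("cctv-cam-01", "1.2.3", release))          # True
print(verify_firmware("cctv-cam-01", "1.2.3", release + b"x"))   # False: tampered image
```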

Efficient Poisoning Attack Defense Techniques Based on Data Augmentation (데이터 증강 기반의 효율적인 포이즈닝 공격 방어 기법)

  • So-Eun Jeon;Ji-Won Ock;Min-Jeong Kim;Sa-Ra Hong;Sae-Rom Park;Il-Gu Lee
    • Convergence Security Journal / v.22 no.3 / pp.25-32 / 2022
  • Recently, the image processing industry has grown as deep learning-based technology has been introduced into image recognition and detection. With the development of deep learning, vulnerabilities of learning models to adversarial attacks continue to be reported, yet studies on countermeasures against poisoning attacks, which inject malicious data during training, remain insufficient. Conventional countermeasures are limited in that the training data must be inspected each time to detect and remove poisoned samples. In this paper, we therefore propose a technique that reduces the attack success rate by applying modifications to the training data and the inference data, without a separate detection and removal step for the poisoned data. The one-shot kill poison attack, a clean-label poisoning attack proposed in previous work, was used as the attack model, and attack performance was evaluated for both a general attacker and an intelligent attacker according to the attack strategy. Experimental results show that, with the proposed defense applied, the attack success rate can be reduced by up to 65% compared with the conventional method.
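
The core idea of perturbing both training and inference inputs can be sketched as below, assuming simple augmentations (random flip, random crop with reflection padding, Gaussian noise); these specific transforms are illustrative and are not necessarily the ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, noise_std: float = 0.02, pad: int = 4) -> np.ndarray:
    """Apply a random flip, a random crop, and Gaussian noise to an HxWxC float image in [0, 1]."""
    h, w, _ = image.shape
    if rng.random() < 0.5:                      # random horizontal flip
        image = image[:, ::-1, :]
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    top, left = rng.integers(0, 2 * pad + 1, size=2)
    image = padded[top:top + h, left:left + w]  # random crop back to the original size
    image = image + rng.normal(0.0, noise_std, size=image.shape)  # small Gaussian noise
    return np.clip(image, 0.0, 1.0)

# The same perturbation family is applied to training batches and to inference inputs,
# so a poisoned sample's carefully crafted features are less likely to survive intact.
x = rng.random((32, 32, 3))
x_train_aug = augment(x)
x_infer_aug = augment(x)
```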

Real-time Steel Surface Defects Detection Application based on Yolov4 Model and Transfer Learning (Yolov4와 전이학습을 기반으로한 실시간 철강 표면 결함 검출 연구)

  • Bok-Kyeong Kim;Jun-Hee Bae;NGUYEN VIET HOAN;Yong-Eun Lee;Young Seok Ock
    • The Journal of Bigdata / v.7 no.2 / pp.31-41 / 2022
  • Steel is one of the most fundamental materials in the mechanical industry, but product quality is greatly affected by surface defects in the steel. Researchers have therefore paid attention to the need for surface-defect detectors, and deep learning methods are the current trend in object detection. There are still limitations and room for improvement: for example, related works focus on developing models but do not consider real-time application with practical implications for industrial settings. In this paper, a real-time application for steel surface defect detection based on YOLOv4 is proposed. First, since the aim of this work is to deploy the model in a real-time application, we reviewed related work in this field, focusing in particular on one-stage detectors and the YOLO family, one of the best-known approaches to real-time object detection. Second, using pre-trained YOLOv4 Darknet models and transfer learning, we trained and tested on NEU-DET, an open-source hot-rolled steel defect dataset. Our application covers four typical types of steel surface defects: patches, pitted surface, inclusion, and scratches. Third, we evaluated the real-time performance of the trained YOLOv4 model for deployment, achieving 87.1% mAP@0.5 and over 60 fps with GPU processing.
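
A minimal sketch of real-time inference with a Darknet-format YOLOv4 model, assuming OpenCV's DNN module and a CUDA-enabled build; the config and weight file names are hypothetical placeholders for a model fine-tuned on NEU-DET.

```python
import time
import cv2
import numpy as np

# Hypothetical file names; a fine-tuned Darknet config and weights would be supplied here.
net = cv2.dnn.readNetFromDarknet("yolov4-neu-det.cfg", "yolov4-neu-det.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)   # OpenCV falls back to CPU without a CUDA build
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
out_names = net.getUnconnectedOutLayersNames()

def detect(frame: np.ndarray):
    """Run one YOLOv4 forward pass and return the raw output tensors."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    return net.forward(out_names)

# Rough FPS measurement over synthetic frames (a camera or video stream in practice).
frame = np.zeros((200, 200, 3), dtype=np.uint8)
start = time.time()
for _ in range(100):
    detect(frame)
print(f"{100 / (time.time() - start):.1f} fps")
```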

Detection of Plastic Greenhouses by Using Deep Learning Model for Aerial Orthoimages (딥러닝 모델을 이용한 항공정사영상의 비닐하우스 탐지)

  • Byunghyun Yoon;Seonkyeong Seong;Jaewan Choi
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.183-192 / 2023
  • Remotely sensed data such as satellite imagery and aerial photographs can be used to extract and detect objects through image interpretation and processing techniques. In particular, as the spatial resolution of remotely sensed data has improved and deep learning technology has developed, the potential of automatic object detection for digital map updating and land monitoring has increased. In this paper, we extracted plastic greenhouses from aerial orthophotos using the fully convolutional densely connected convolutional network (FC-DenseNet), one of the representative deep learning models for semantic segmentation, and performed a quantitative analysis of the extraction results. Using the farm map of the Ministry of Agriculture, Food and Rural Affairs in Korea, training data were generated by labeling plastic greenhouses in the Damyang and Miryang areas, and FC-DenseNet was trained on this dataset. To apply the deep learning model to remotely sensed imagery, instance normalization, which can preserve the spectral characteristics of the bands, was used as the normalization layer. In addition, optimal weights for each band were determined by adding attention modules to the model. The experiments showed that the deep learning model can extract plastic greenhouses, and the results can be applied to updating the Farm-map and land-cover maps.
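
The two adaptations mentioned in the abstract can be sketched as below, assuming a PyTorch implementation: a convolution block that uses instance normalization instead of batch normalization, preceded by a simple per-band channel attention gate. The exact FC-DenseNet block structure used in the paper is not reproduced here.

```python
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    """Learn a weight per input band via global average pooling and a small gating layer."""
    def __init__(self, bands: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(bands, bands), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))          # (N, C) band weights in (0, 1)
        return x * w[:, :, None, None]

class ConvBlock(nn.Module):
    """Conv -> InstanceNorm -> ReLU, keeping per-image spectral statistics."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm2d(out_ch, affine=True),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# Example: a 4-band orthophoto patch (e.g., R, G, B, NIR) through attention and one block.
x = torch.randn(2, 4, 128, 128)
y = ConvBlock(4, 32)(BandAttention(4)(x))
print(y.shape)   # torch.Size([2, 32, 128, 128])
```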

A Study on the AI Analysis of Crop Area Data in Aquaponics (아쿠아포닉스 환경에서의 작물 면적 데이터 AI 분석 연구)

  • Eun-Young Choi;Hyoun-Sup Lee;Joo Hyoung Cha;Lim-Gun Lee
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.861-866 / 2023
  • Unlike conventional smart farms, which require chemical fertilizers and large spaces, aquaponics uses the symbiotic relationship between aquatic organisms and crops and can grow crops even under adverse conditions such as environmental pollution and climate change, and it is therefore being actively researched. Because different crops require different environments and nutrients for growth, the ratio of aquatic organisms must be configured to optimize crop growth. This study proposes a method for measuring the degree of growth based on area and volume using image processing techniques in an aquaponics environment. Tilapia, carp, and catfish, aquatic organisms that supply organic matter through their excrement, were raised together with lettuce in an aquaponics setup. Through 2D and 3D image analysis of the lettuce and real-time data analysis, the degree of growth was evaluated from the area and volume information of the lettuce. The experimental results showed that cultivation can be managed using this area and volume information. It is expected that production prediction services can be provided to farmers by combining aquatic-organism and growth information, and that this work can be a starting point for addressing problems in the changing agricultural environment.
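
A minimal sketch of the area-measurement step, assuming OpenCV color thresholding on a top-view image; the HSV range, file name, and pixel-to-cm² scale are illustrative assumptions rather than values from the paper.

```python
import cv2
import numpy as np

def leaf_area_pixels(image_bgr: np.ndarray) -> float:
    """Estimate crop area in pixels by thresholding green regions and summing contour areas."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Illustrative HSV range for green lettuce; real thresholds would be calibrated.
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(cv2.contourArea(c) for c in contours)

# With known camera geometry, pixel area can be converted to cm^2 (the scale here is hypothetical).
frame = cv2.imread("lettuce_topview.jpg")        # hypothetical top-view image
if frame is not None:
    area_cm2 = leaf_area_pixels(frame) * 0.0025  # e.g., 0.05 cm per pixel -> 0.0025 cm^2 per pixel
    print(f"estimated leaf area: {area_cm2:.1f} cm^2")
```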

Automatic 3D data extraction method of fashion image with mannequin using watershed and U-net (워터쉐드와 U-net을 이용한 마네킹 패션 이미지의 자동 3D 데이터 추출 방법)

  • Youngmin Park
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.825-834 / 2023
  • The demand from people purchasing fashion products through Internet shopping is gradually increasing, and attempts are being made to provide user-friendly 3D content and web 3D software instead of product pictures and videos. The background of this issue, which has become one of the most pressing in the fashion web shopping industry, is the growing number of complaints that the received product differs from the image shown at the time of purchase. Various image processing technologies have been introduced to address this, but 2D images have inherent quality limits. In this study, we propose an automatic conversion technology that converts 2D images into 3D and grafts them onto web 3D technology, allowing customers to examine products from various viewpoints while reducing the cost and computation time required for conversion. We developed a system that photographs a mannequin placed on a rotating turntable using only eight cameras. To extract only the clothing from the captured images, markers are removed using U-net, and an algorithm is proposed that extracts the clothing area by identifying the color features of the background and mannequin areas. With this algorithm, extracting the clothing area takes 2.25 seconds per image, or 144 seconds in total (2 minutes 24 seconds) for the 64 images of one piece of clothing. The system can extract 3D objects with very good performance compared with existing systems.
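
The watershed step named in the title can be illustrated with a generic OpenCV marker-based watershed as below; the paper's actual pipeline (U-net marker removal combined with color-feature rules for the background and mannequin areas) is not reproduced here, so treat this as a minimal sketch.

```python
import cv2
import numpy as np

def clothing_mask_watershed(image_bgr: np.ndarray) -> np.ndarray:
    """Generic marker-based watershed: split an image into background and foreground regions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu + inversion assumes a bright, uniform studio background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    sure_bg = cv2.dilate(binary, kernel, iterations=3)                    # definite background
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)            # definite foreground
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0                                           # region to be resolved
    markers = cv2.watershed(image_bgr, markers)
    return (markers > 1).astype(np.uint8) * 255                           # foreground mask
```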

Epidemiologic Characteristics of Death in Breast Cancer Patients and Health Promotion Plans : Using Korean Cancer Registry data (유방암 환자 사망의 역학적 특성과 건강증진 방안 : 국가 암등록 자료를 이용하여)

  • Young-Hee Nam
    • The Journal of Korean Society for School & Community Health Education / v.24 no.1 / pp.1-15 / 2023
  • Objectives: The purpose of this study was to identify major factors influencing breast cancer death and to suggest policy measures to promote the health of breast cancer patients. Methods: Because mortality cases are relatively few compared with survival cases, weights were applied to 2,300 breast cancer registration records collected in Korea in 2018, and statistical analysis of the collected data was performed with SPSS 26.0. Results: Among deaths of breast cancer patients, 31.8% were aged 70 years or older, and the mortality rate was 5.25 times higher in patients aged 70 or older than in those aged 39 or younger. Anatomical site codes C50.4~C50.6 accounted for 36.4%, with a mortality rate 1.82 times higher than for C50.0~C50.1. Tumors larger than 4 cm accounted for 40.4%, with a mortality rate 4.53 times higher than for tumors smaller than 1 cm. The poorly differentiated group accounted for 13.9%, with a mortality rate 4.38 times higher than the highly differentiated group. In the hormone receptor test, non-triple-negative cases were 59.6%, with a mortality rate 0.57 times that of triple-negative cases. Lymph node involvement was present in 78.8% of cases, and the mortality rate with involvement was 1.36 times higher than without. A survival period of 13 to 24 months was the most common at 26.5%, and the mean survival period was 25.68 months (±14.830). Conclusion: A policy to bring forward the timing of national health examinations for early detection of breast cancer is needed. In addition, legislation should be prepared mandating the placement of health educators in medical institutions for patients with special diseases such as breast cancer.
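
The abstract does not state which model produced the reported mortality ratios; as one common way to obtain such ratios from weighted registry data, the sketch below fits a frequency-weighted logistic regression with statsmodels on hypothetical toy records and exponentiates the coefficients into odds ratios.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical toy records: 1 = died, 0 = survived, with integer sampling weights.
df = pd.DataFrame({
    "died":        [1, 0, 1, 0, 1, 0, 0, 1, 0, 0],
    "age_70_plus": [1, 1, 1, 0, 0, 0, 1, 0, 1, 0],
    "tumor_gt4cm": [1, 0, 0, 1, 1, 0, 0, 0, 1, 1],
    "weight":      [1, 2, 1, 1, 2, 1, 1, 1, 2, 1],
})

X = sm.add_constant(df[["age_70_plus", "tumor_gt4cm"]])
model = sm.GLM(df["died"], X, family=sm.families.Binomial(),
               freq_weights=df["weight"].to_numpy())
result = model.fit()
print(np.exp(result.params))   # exponentiated coefficients read as odds ratios per factor
```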

Quality Visualization of Quality Metric Indicators based on Table Normalization of Static Code Building Information (정적 코드 내부 정보의 테이블 정규화를 통한 품질 메트릭 지표들의 가시화를 위한 추출 메커니즘)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.5 / pp.199-206 / 2023
  • Today's software has grown into huge source code bases, which increases the importance and necessity of static analysis for high-quality products. Static analysis is needed to identify defects and complexity in the code, and visualizing these problems helps developers and stakeholders understand them in the source code. Our previous visualization research focused only on storing static-analysis results in database tables, querying the calculations for quality indicators (CK metrics, coupling, number of function calls, bad smells), and finally visualizing the extracted information. That approach takes considerable time and space to analyze a code base from the extracted information: because the tables are not normalized, joining the tables (classes, functions, attributes, etc.) to extract information about the code wastes space and time. To solve these problems, we propose a normalized design of the database tables, an extraction mechanism for quality metric indicators inside the code, and a visualization of the extracted quality indicators on the code. Through this mechanism, we expect the code visualization process to be optimized and developers to be guided to the modules that need refactoring. In the future, we will apply learning to parts of this process.
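
A minimal sketch of the normalized-table idea, using Python's built-in sqlite3: classes, functions, and calls live in separate tables linked by keys, and one join query produces per-class metric rows ready for visualization. All table and column names are hypothetical, not the paper's schema.

```python
import sqlite3

# Hypothetical normalized schema: classes and functions in separate tables,
# linked by a foreign key, plus a call table for coupling-style queries.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE class    (class_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE function (func_id  INTEGER PRIMARY KEY, class_id INTEGER REFERENCES class(class_id),
                       name TEXT, cyclomatic INTEGER);
CREATE TABLE call     (caller_id INTEGER REFERENCES function(func_id),
                       callee_id INTEGER REFERENCES function(func_id));
""")
conn.executemany("INSERT INTO class VALUES (?, ?)", [(1, "Parser"), (2, "Printer")])
conn.executemany("INSERT INTO function VALUES (?, ?, ?, ?)",
                 [(1, 1, "parse", 7), (2, 1, "tokenize", 3), (3, 2, "render", 4)])
conn.executemany("INSERT INTO call VALUES (?, ?)", [(1, 2), (1, 3)])

# Example metric query: per class, number of methods, mean complexity, and outgoing calls.
rows = conn.execute("""
SELECT c.name,
       COUNT(f.func_id)          AS methods,
       AVG(f.cyclomatic)         AS avg_cyclomatic,
       SUM(COALESCE(k.n_out, 0)) AS fan_out
FROM class c
JOIN function f ON f.class_id = c.class_id
LEFT JOIN (SELECT caller_id, COUNT(*) AS n_out FROM call GROUP BY caller_id) k
       ON k.caller_id = f.func_id
GROUP BY c.class_id, c.name
""").fetchall()
print(rows)   # metric rows ready to feed a visualization step
```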

Context-Dependent Video Data Augmentation for Human Instance Segmentation (인물 개체 분할을 위한 맥락-의존적 비디오 데이터 보강)

  • HyunJin Chun;JongHun Lee;InCheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.5 / pp.217-228 / 2023
  • Video instance segmentation is an intelligent visual task with high complexity: it requires not only object instance segmentation in every frame of a video but also accurate tracking of instances throughout the frame sequence. In particular, human instance segmentation in drama videos has the distinctive characteristic of requiring accurate tracking of several main characters interacting across various places and times. It also suffers from a kind of class imbalance, because main characters appear far more frequently than supporting or auxiliary characters. In this paper, we introduce MHIS, a new human instance dataset built from the drama Miseang, and propose a novel video data augmentation method, CDVA, to overcome the data imbalance between character classes. Unlike previous video data augmentation methods, CDVA generates more realistic augmented videos by deciding the optimal location within a background clip at which a target human instance is inserted, taking the rich spatio-temporal context embedded in the video into account. The proposed augmentation can therefore improve the performance of deep neural network models for video instance segmentation. Through quantitative and qualitative experiments on the MHIS dataset, we demonstrate the usefulness and effectiveness of the proposed method.
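
The insertion step of such an augmentation can be sketched as below with NumPy mask compositing; CDVA's context-aware choice of the optimal location is not reproduced, so the placement function is a hypothetical stand-in that any heuristic or model could replace.

```python
import numpy as np

def paste_instance(background: np.ndarray, instance: np.ndarray,
                   mask: np.ndarray, top: int, left: int) -> np.ndarray:
    """Composite an instance crop (HxWx3) with its binary mask (HxW) into a frame."""
    out = background.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]
    region[mask > 0] = instance[mask > 0]   # copy only the instance pixels
    return out

def augment_clip(frames: list[np.ndarray], instance: np.ndarray, mask: np.ndarray,
                 choose_location) -> list[np.ndarray]:
    """Insert the same instance into every frame; choose_location stands in for
    CDVA's context-aware placement (here it could be any heuristic or model)."""
    return [paste_instance(f, instance, mask, *choose_location(f, mask)) for f in frames]

# Hypothetical usage with random frames and a trivial fixed-position placement.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (360, 640, 3), dtype=np.uint8) for _ in range(4)]
inst = rng.integers(0, 256, (120, 60, 3), dtype=np.uint8)
m = np.ones((120, 60), dtype=np.uint8)
augmented = augment_clip(frames, inst, m, lambda f, m_: (200, 300))
```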