• Title/Summary/Keyword: Computer optimization


U-Net Cloud Detection for the SPARCS Cloud Dataset from Landsat 8 Images (Landsat 8 기반 SPARCS 데이터셋을 이용한 U-Net 구름탐지)

  • Kang, Jonggu;Kim, Geunah;Jeong, Yemin;Kim, Seoyeon;Youn, Youjeong;Cho, Soobin;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1149-1161
    • /
    • 2021
  • With the growing use of computer vision for satellite imagery, cloud detection using deep learning has also attracted attention recently. In this study, we built a U-Net cloud detection model using the SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow) Cloud Dataset with image data augmentation and carried out 10-fold cross-validation for an objective assessment of the model. In a blind test on 1,800 datasets of 512 by 512 pixels, the model showed relatively high performance, with an accuracy of 0.821, a precision of 0.847, a recall of 0.821, an F1-score of 0.831, and an IoU (Intersection over Union) of 0.723. Although 14.5% of actual cloud shadows were misclassified as land and 19.7% of actual clouds were misidentified as land, this can be overcome by increasing the quality and quantity of the label datasets. Moreover, a state-of-the-art DeepLab V3+ model and the NAS (Neural Architecture Search) optimization technique can help cloud detection for CAS500 (Compact Advanced Satellite 500) in South Korea.
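
The reported metrics can be reproduced from a pixel-level confusion matrix. Below is a minimal sketch with NumPy, assuming binary cloud/non-cloud masks (the paper's task is multi-class, so this is a simplification):

```python
import numpy as np

def segmentation_metrics(y_true, y_pred):
    """Pixel-wise accuracy, precision, recall, F1-score, and IoU for binary masks."""
    y_true = y_true.astype(bool).ravel()
    y_pred = y_pred.astype(bool).ravel()
    tp = np.sum(y_true & y_pred)          # cloud pixels correctly detected
    fp = np.sum(~y_true & y_pred)         # false alarms
    fn = np.sum(y_true & ~y_pred)         # missed cloud pixels
    tn = np.sum(~y_true & ~y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)             # Intersection over Union
    return accuracy, precision, recall, f1, iou

# Toy example on random 512 x 512 masks (the tile size used in the blind test)
rng = np.random.default_rng(0)
true_mask = rng.integers(0, 2, (512, 512))
pred_mask = rng.integers(0, 2, (512, 512))
print(segmentation_metrics(true_mask, pred_mask))
```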

Design of an Efficient Control System for Harbor Terminal based on the Commercial Network (상용망 기반의 항만터미널 효율적인 관제시스템 설계)

  • Kim, Yong-Ho;Ju, YoungKwan;Mun, Hyung-Jin
    • Journal of Industrial Convergence
    • /
    • v.16 no.1
    • /
    • pp.21-26
    • /
    • 2018
  • Seaborne trade accounts for 97% of Korea's total trade volume, so a port operation management system can improve port efficiency and reduce operating costs, and the manager who oversees all operations at the port needs to check status and respond quickly when work is delayed or equipment support is needed. Unlike the existing port operation system, which monitors location information entered at the start or completion of each task, the proposed system confirms the real-time location of yard automation equipment through the GPS of a tablet device. The network is configured with a commercial LTE service, which has no shaded areas caused by containers in the yard, thereby reducing container processing delays. Through the introduction of smart devices running Android or iOS and container processing scheduling based on artificial intelligence, we build a minimum-delay system by running container processing applications on smart devices and optimizing the container processing schedule. The adoption of smart devices and the minimization of container processing delays using artificial intelligence are expected to improve the quality of port services by letting consumers who demand container information confirm the status of their containers in real time.

Detection and Identification of Moving Objects at Busy Traffic Road based on YOLO v4 (YOLO v4 기반 혼잡도로에서의 움직이는 물체 검출 및 식별)

  • Li, Qiutan;Ding, Xilong;Wang, Xufei;Chen, Le;Son, Jinku;Song, Jeong-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.1
    • /
    • pp.141-148
    • /
    • 2021
  • At some intersections and busy roads, there are more pedestrians during particular periods of the day, and many traffic accidents are caused by road congestion. At intersections near schools in particular, protecting the traffic safety of students during busy hours is especially important. In the past, the safety of pedestrians was seldom taken into account when designing traffic lights; most studies addressed the identification of motor vehicles and traffic optimization. How to keep traffic flowing as smoothly as possible while ensuring the safety of pedestrians, especially students, is the key research direction of this paper. This paper focuses on the recognition of persons, motorcycles, bicycles, cars, and buses. Through investigation and comparison, this paper proposes using the YOLO v4 network to identify the location and quantity of objects. YOLO v4 is characterized by strong small-target recognition, high precision, and fast processing speed; we set the data acquisition targets and trained and tested the network on the image set. Based on statistics of the accuracy, error, and omission rates for targets in the video, the network trained in this paper can accurately and effectively identify persons, motorcycles, bicycles, cars, and buses in moving images.
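
For illustration only (not the authors' code), counting these five object classes with a pretrained YOLOv4 can be sketched with OpenCV's DNN module, assuming the standard Darknet files yolov4.cfg, yolov4.weights, and coco.names are available locally:

```python
import cv2
from collections import Counter

# Hypothetical local paths; the Darknet YOLOv4 config/weights and the COCO
# class list must be downloaded separately.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

with open("coco.names") as f:
    names = [line.strip() for line in f]

# Class names as spelled in the Darknet coco.names file
targets = {"person", "motorbike", "bicycle", "car", "bus"}

frame = cv2.imread("intersection.jpg")  # one frame from the traffic video
class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)

# Count how many objects of each target class appear in the frame
counts = Counter(names[int(c)] for c in class_ids if names[int(c)] in targets)
print(counts)
```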

A Study on the Cerber-Type Ransomware Detection Model Using Opcode and API Frequency and Correlation Coefficient (Opcode와 API의 빈도수와 상관계수를 활용한 Cerber형 랜섬웨어 탐지모델에 관한 연구)

  • Lee, Gye-Hyeok;Hwang, Min-Chae;Hyun, Dong-Yeop;Ku, Young-In;Yoo, Dong-Young
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.10
    • /
    • pp.363-372
    • /
    • 2022
  • Since the COVID-19 pandemic, ransomware damage has intensified along with the expansion of remote work. Anti-virus vendors are trying to respond to ransomware, but traditional file-signature-based static analysis can be neutralized by diversification, obfuscation, variants, or the emergence of new ransomware. Various studies are being conducted on ransomware detection, and detection studies using signature-based static analysis and behavior-based dynamic analysis are currently the main research types. In this paper, the frequencies of the ".text section" opcodes and of the Native APIs actually used were extracted, and the associations among feature information selected using the K-means clustering algorithm, cosine similarity, and the Pearson correlation coefficient were analyzed. In addition, through experiments to classify and detect worms (another malware type) and Cerber-type ransomware, it was verified that the selected feature information is specialized for detecting specific ransomware (Cerber). Combining the finally selected feature information, applying it to machine learning, and performing hyperparameter optimization yielded a detection rate of up to 93.3%.
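
A minimal sketch of this kind of association analysis with scikit-learn and SciPy, assuming opcode/API frequency vectors have already been extracted per sample (the data below is randomly generated, not the paper's dataset):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(42)
# Rows: samples (e.g., Cerber vs. other malware); columns: opcode/API frequencies.
# Randomly generated stand-in for the extracted .text-section opcode counts.
freq = rng.poisson(5.0, size=(40, 12)).astype(float)

# Group samples by their frequency profiles with K-means clustering.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(freq)

# Pairwise cosine similarity between sample frequency vectors.
sim = cosine_similarity(freq)

# Pearson correlation between two candidate features across all samples.
r, p_value = pearsonr(freq[:, 0], freq[:, 1])
print(clusters)
print(sim.shape, round(r, 3), round(p_value, 3))
```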

Low Power ADC Design for Mixed Signal Convolutional Neural Network Accelerator (혼성신호 컨볼루션 뉴럴 네트워크 가속기를 위한 저전력 ADC설계)

  • Lee, Jung Yeon;Asghar, Malik Summair;Arslan, Saad;Kim, HyungWon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.11
    • /
    • pp.1627-1634
    • /
    • 2021
  • This paper introduces a low-power, compact ADC circuit for the analog convolutional filter of a low-power neural network accelerator SoC. While convolutional neural network accelerators can speed up the learning and inference process, they have the drawbacks of consuming excessive power and occupying a large chip area due to the large number of multiply-and-accumulate operators when implemented in complex digital circuits. To overcome these drawbacks, we implemented an analog convolutional filter consisting of an analog multiply-and-accumulate arithmetic circuit along with an ADC. This paper focuses on the design optimization of a low-power 8-bit SAR ADC for the analog convolutional filter accelerator. We demonstrate how to minimize the capacitor-array DAC, an important component of the SAR ADC, making it three times smaller than the conventional circuit. The proposed ADC has been fabricated in a 65 nm CMOS process. It achieves an overall size of 1355.7 ㎛², a power consumption of 2.6 ㎼ at a frequency of 100 MHz, an SNDR of 44.19 dB, and an ENOB of 7.04 bits.
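
The successive-approximation principle behind a SAR ADC can be shown with a short behavioral model; this is a software illustration of the MSB-to-LSB binary search only, not the fabricated capacitor-array circuit:

```python
def sar_adc(v_in, v_ref=1.0, bits=8):
    """Behavioral model of an N-bit SAR ADC: binary search from MSB to LSB."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)                 # tentatively set the current bit
        v_dac = v_ref * trial / (1 << bits)     # ideal capacitor-array DAC output
        if v_in >= v_dac:                       # comparator decision
            code = trial                        # keep the bit, else clear it
    return code

print(sar_adc(0.5))    # mid-scale input -> 128
print(sar_adc(0.731))  # -> 187 (floor of 0.731 * 256)
```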

Deep Learning Braille Block Recognition Method for Embedded Devices (임베디드 기기를 위한 딥러닝 점자블록 인식 방법)

  • Hee-jin Kim;Jae-hyuk Yoon;Soon-kak Kwon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.4
    • /
    • pp.1-9
    • /
    • 2023
  • In this paper, we propose a method to recognize braille blocks in real time on embedded devices through deep learning. First, a deep learning model for braille block recognition is trained on a high-performance computer, and a model lightweighting tool is then applied so the model can run on an embedded device. To recognize the walking information of the braille blocks, an algorithm determines the path using the distance from the braille blocks in the image. After detecting braille blocks, bollards, and crosswalks with the YOLOv8 model in video captured by the embedded device, the walking information is recognized through the braille block path discrimination algorithm. We apply the model lightweighting tool to YOLOv8 to detect braille blocks in real time: the precision of the YOLOv8 model weights is lowered from 32 bits to 8 bits, and the model is optimized with the TensorRT optimization engine. Comparing the lightweight model obtained through the proposed method with the original model, the path recognition accuracy is 99.05%, almost the same as the original, while the recognition time is reduced by 59%, processing about 15 frames per second.
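
The 8-bit-plus-TensorRT step can be sketched with the Ultralytics export API; the weight and dataset file names below are hypothetical, and this outline assumes an NVIDIA device with TensorRT installed rather than reproducing the authors' exact pipeline:

```python
from ultralytics import YOLO

# Hypothetical weights fine-tuned on braille blocks, bollards, and crosswalks.
model = YOLO("best.pt")

# Export a TensorRT engine with 8-bit (INT8) weights; `data` points to a
# dataset YAML used for calibration. Requires an NVIDIA GPU with TensorRT.
model.export(format="engine", int8=True, data="braille.yaml")

# Run inference with the optimized engine on the embedded device.
trt_model = YOLO("best.engine")
results = trt_model("sidewalk.jpg")
print(results[0].boxes)
```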

Performance analysis and prediction through various over-provision on NAND flash memory based storage (낸드 플래시 메모리기반 저장 장치에서 다양한 초과 제공을 통한 성능 분석 및 예측)

  • Lee, Hyun-Seob
    • Journal of Digital Convergence
    • /
    • v.20 no.3
    • /
    • pp.343-348
    • /
    • 2022
  • With the recent rapid development of technology, the amount of data generated by various systems is increasing, and enterprise servers and data centers that must handle large amounts of big data need to adopt high-stability, high-performance storage devices even at increased cost. In such systems, SSDs (solid state drives), which provide high read/write performance, are often used as storage devices. However, because of their characteristics of reading and writing on a page basis, erasing on a block basis, and erase-before-write, performance degrades when duplicate writes occur. To delay this performance degradation, over-provisioning technology is applied inside the SSD. However, since over-provisioning consumes a large amount of storage space in exchange for performance, applying it inefficiently beyond the required performance incurs excessive cost. In this paper, we propose a method of measuring the performance and cost incurred when various over-provisioning levels are applied in an SSD and, based on this, predicting the system-optimized over-provisioning ratio. Through this research, we expect to find the trade-off with cost needed to meet the performance requirements of systems that process big data.
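
The performance side of this trade-off can be illustrated with a toy greedy-garbage-collection simulation that estimates write amplification at different over-provisioning ratios; this is a simplified model for intuition, not the paper's measurement method:

```python
import random

def simulate_wa(op_ratio, user_blocks=32, pages_per_block=64, n_writes=100_000, seed=1):
    """Toy page-mapped SSD under uniform random overwrites with greedy GC.
    Returns write amplification = physical page writes / host page writes."""
    total = int(round(user_blocks * (1 + op_ratio)))   # physical blocks incl. spare
    n_lpn = user_blocks * pages_per_block              # logical pages seen by host
    valid = [set() for _ in range(total)]              # live logical pages per block
    loc = {}                                           # logical page -> physical block
    free = list(range(total))
    open_blk, fill = free.pop(), 0
    host = phys = 0
    rnd = random.Random(seed)

    def append(lpn):                                   # program one physical page
        nonlocal open_blk, fill, phys
        if fill == pages_per_block:                    # open block full: take a free one
            open_blk, fill = free.pop(), 0
        valid[open_blk].add(lpn)
        loc[lpn] = open_blk
        fill += 1
        phys += 1

    for _ in range(n_writes):
        while len(free) < 2:                           # greedy GC: fewest-valid victim
            victim = min((b for b in range(total) if b != open_blk and b not in free),
                         key=lambda b: len(valid[b]))
            for lpn in list(valid[victim]):
                append(lpn)                            # relocation costs a physical write
            valid[victim] = set()
            free.append(victim)
        lpn = rnd.randrange(n_lpn)                     # uniform random host overwrite
        if lpn in loc:
            valid[loc[lpn]].discard(lpn)               # invalidate the old copy
        append(lpn)
        host += 1
    return phys / host

for op in (0.1, 0.2, 0.3, 0.5):
    print(f"over-provision {op:.0%}: write amplification ~ {simulate_wa(op):.2f}")
```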

GIS Optimization for Bigdata Analysis and AI Applying (Bigdata 분석과 인공지능 적용한 GIS 최적화 연구)

  • Kwak, Eun-young;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.171-173
    • /
    • 2022
  • Fourth industrial revolution technology is making people's lives more efficient. GIS provided through Internet services, such as traffic and timing information, helps people reach their destinations more quickly. The National Geographic Information Service (NGIS) and each local government are building basic data to investigate SOC accessibility for optimal-location analysis. To construct the shortest distance, the accessibility from the starting point to the arrival point is analyzed. Applying a road network map, the shortest distance and optimal accessibility between the starting point and the ending point are calculated using Dijkstra's algorithm. Analysis from multiple starting points to multiple destinations required more than three steps of manual analysis to decide the optimal location, with about a 0.1% error, and the many-to-many (M×N) calculation took more time, requiring a computer with at least 32 GB of memory. If an optimal proximity analysis service were provided more flexibly at desired locations, it would be possible to efficiently analyze locations with poor access to start-up businesses and living facilities, and to support facility siting for the public.
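
Below is a compact sketch of the one-to-one shortest-path step with Python's heapq (the many-to-many M×N analysis repeats this per origin); the road network here is a hypothetical toy graph:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted road network stored as an adjacency dict."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        if node == goal:
            break
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    path, node = [goal], goal                  # walk predecessors back to the start
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[goal], path[::-1]

# Hypothetical toy road network: node -> [(neighbor, distance_km), ...]
roads = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
print(dijkstra(roads, "A", "D"))  # (4.0, ['A', 'B', 'C', 'D'])
```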


A study on end-to-end speaker diarization system using single-label classification (단일 레이블 분류를 이용한 종단 간 화자 분할 시스템 성능 향상에 관한 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.536-543
    • /
    • 2023
  • Speaker diarization, which labels "who spoke when?" in speech with multiple speakers, has been studied with deep neural network-based end-to-end methods for labeling overlapped speech and for optimizing diarization models. Most deep neural network-based end-to-end speaker diarization systems solve a multi-label classification problem that predicts the labels of all speakers speaking in each frame. However, the performance of multi-label-based models varies greatly depending on the threshold setting. In this paper, we study a speaker diarization system using single-label classification so that diarization can be performed without a threshold. The proposed model estimates labels from the model output by converting the speaker labels into a single label. To account for speaker label permutations during training, the proposed model uses a combination of Permutation Invariant Training (PIT) loss and cross-entropy loss. In addition, we study how to add residual connection structures to the model for effective learning of diarization models with deep structures. The experiments used simulated noisy two-speaker data generated from the LibriSpeech database. Compared with the baseline model in terms of the Diarization Error Rate (DER), the proposed method performs labeling without a threshold and improves performance by about 20.7%.
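
One possible reading of the single-label formulation for two speakers, sketched in PyTorch (an illustration, not the authors' exact loss): each frame is mapped to one of four classes {silence, speaker 1, speaker 2, overlap}, and the cross-entropy is made permutation-invariant by evaluating both speaker orderings and keeping the minimum:

```python
import torch
import torch.nn.functional as F

def to_single_label(activity):
    """activity: (batch, frames, 2) binary speaker activity -> class index 0..3."""
    return (activity[..., 0] + 2 * activity[..., 1]).long()

def pit_ce_loss(logits, activity):
    """Permutation-invariant cross-entropy: try both speaker orderings, keep the min."""
    # logits: (batch, frames, 4) class scores; activity: (batch, frames, 2)
    y1 = to_single_label(activity)              # original speaker order
    y2 = to_single_label(activity.flip(-1))     # swapped speaker order
    ce = lambda y: F.cross_entropy(logits.transpose(1, 2), y)  # (B, C, T) layout
    return torch.minimum(ce(y1), ce(y2))

# Toy usage with random tensors (illustrative shapes only)
logits = torch.randn(8, 100, 4)
activity = torch.randint(0, 2, (8, 100, 2)).float()
print(pit_ce_loss(logits, activity))
```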

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management systems. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas cannot expand nodes when the stored data must be distributed to various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analyses of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log insertion performance evaluation over various chunk sizes.
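
A minimal sketch of the MongoDB side with pymongo, using a hypothetical connection string and document fields: the free schema accepts arbitrary per-event attributes, and an aggregation pipeline groups log counts per unit time, roughly as the log graph generator module might consume them:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Hypothetical connection string and fields; a document store accepts
# arbitrary per-event attributes without schema migrations.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["logs"]

logs.insert_one({
    "ts": datetime.now(timezone.utc),
    "branch": "seoul-01",
    "event": "transfer",
    "detail": {"amount": 150000, "channel": "mobile"},
})

# Group log counts per hour/minute and event type, i.e., per unit time.
pipeline = [
    {"$group": {
        "_id": {"hour": {"$hour": "$ts"}, "minute": {"$minute": "$ts"},
                "event": "$event"},
        "count": {"$sum": 1},
    }},
    {"$sort": {"count": -1}},
]
for row in logs.aggregate(pipeline):
    print(row)
```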