• Title/Summary/Keyword: Network Computer

Search Results: 12,553

Delayed offloading scheme for IoT tasks considering opportunistic fog computing environment (기회적 포그 컴퓨팅 환경을 고려한 IoT 테스크의 지연된 오프로딩 제공 방안)

  • Kyung, Yeunwoong
    • Journal of Internet of Things and Convergence / v.6 no.4 / pp.89-92 / 2020
  • With the growth of various IoT (Internet of Things) services, there has been extensive research on task offloading for IoT devices. Because conventional cloud-computing-based offloading suffers from service response delay and core network load issues, fog-computing-based offloading, in which computing resources are located close to the IoT devices, has attracted attention. However, even in the fog computing architecture, the load can become concentrated on the fog computing node as the number of requests increases. To solve this problem, the opportunistic fog computing concept, which offloads tasks to available computing resources such as cars and drones, has been introduced. In previous fog and opportunistic fog node studies, offloading is performed immediately whenever a service request occurs. This means that service requests can be offloaded to the opportunistic fog nodes only while those nodes are available. However, if the service response delay requirement is still satisfied, there is no need to offload the request immediately; moreover, the load can be distributed by making the best use of the opportunistic fog nodes. Therefore, this paper proposes a delayed offloading scheme that satisfies the response delay requirements while offloading requests to the opportunistic fog nodes as efficiently as possible.
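
As an illustration of the idea described above (not the authors' algorithm), the following Python sketch defers a task as long as its delay budget allows, preferring an opportunistic fog node when one becomes available; the node model, deadline values, and processing-time estimates are assumptions.

```python
# Illustrative sketch of a delayed-offloading decision (assumed model, not the paper's scheme).
from dataclasses import dataclass

@dataclass
class Task:
    deadline: float        # maximum allowed response delay (s)
    elapsed: float = 0.0   # time already spent waiting (s)

def choose_target(task, opportunistic_available, est_fog_delay, est_opp_delay, wait_step=0.1):
    """Return 'opportunistic', 'fog', or 'wait' for one decision epoch."""
    remaining = task.deadline - task.elapsed
    if opportunistic_available and est_opp_delay <= remaining:
        return "opportunistic"               # use a nearby car/drone while it is reachable
    if remaining - wait_step > est_fog_delay:
        return "wait"                        # deadline still slack: defer and re-check later
    return "fog"                             # otherwise fall back to the fixed fog node in time

# Example: a task with a 2 s budget keeps waiting until an opportunistic node appears.
task = Task(deadline=2.0)
print(choose_target(task, opportunistic_available=False, est_fog_delay=0.5, est_opp_delay=0.3))
```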

Effective Utilization of Domain Knowledge for Relational Reinforcement Learning (관계형 강화 학습을 위한 도메인 지식의 효과적인 활용)

  • Kang, MinKyo;Kim, InCheol
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.141-148 / 2022
  • Recently, reinforcement learning combined with deep neural network technology has achieved remarkable success in various fields such as board games like Go and chess, computer games such as Atari and StarCraft, and robot object manipulation tasks. However, such deep reinforcement learning describes states, actions, and policies in vector representations. Therefore, existing deep reinforcement learning has limitations in the generality and interpretability of the learned policy, and it is difficult to effectively incorporate domain knowledge into policy learning. On the other hand, dNL-RRL, a new relational reinforcement learning framework proposed to solve these problems, uses vector representations for sensor input data and lower-level motion control, as in existing deep reinforcement learning, but represents states, actions, and learned policies relationally with logic predicates and rules. In this paper, we present dNL-RRL-based policy learning for transportation mobile robots in a manufacturing environment. In particular, this study proposes an effective method to utilize the prior domain knowledge of human experts to improve the efficiency of relational reinforcement learning. Through various experiments, we demonstrate the performance improvement of relational reinforcement learning when domain knowledge is used as proposed in this paper.
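
Purely as an illustration of what a relational, predicate-based state and rule-based policy look like (the predicate names and rule are hypothetical, not those of dNL-RRL), a minimal Python sketch:

```python
# Hypothetical relational sketch: a state is a set of (predicate, args) facts; a policy is a rule over them.
state = {
    ("at", "robot", "station_A"),
    ("holding", "robot", "part_1"),
    ("clear", "station_B"),
}

def rule_deliver(state):
    """IF holding(robot, X) AND clear(station_B) THEN move(robot, station_B)."""
    holding = [args for (p, *args) in state if p == "holding" and args[0] == "robot"]
    clear_b = ("clear", "station_B") in state
    if holding and clear_b:
        return ("move", "robot", "station_B")
    return ("wait",)

print(rule_deliver(state))  # -> ('move', 'robot', 'station_B')
```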

A Deep Learning Method for Cost-Effective Feed Weight Prediction of Automatic Feeder for Companion Animals (반려동물용 자동 사료급식기의 비용효율적 사료 중량 예측을 위한 딥러닝 방법)

  • Kim, Hoejung;Jeon, Yejin;Yi, Seunghyun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.263-278 / 2022
  • With the recent advent of IoT technology, automatic pet feeders are being distributed so that owners can feed their companion animals while they are out. However, because of the behavior of pets, weighing scales, the usual means of measuring the feed weight that is important for automatic feeding, are easily damaged and broken. The 3D camera method has cost disadvantages, and the 2D camera method has relatively poor accuracy compared to the 3D camera method. Hence, the purpose of this study is to propose a deep learning approach that can accurately estimate weight while using only a simple 2D camera. For this, various convolutional neural networks were used, and among them, the ResNet101-based model showed the best performance: a mean absolute error of 3.06 grams and a mean absolute percentage error of 3.40%, which could be used commercially in terms of technical and financial viability. The results of this study can help practitioners predict the weight of a standardized object such as feed using only an easily obtained 2D image.
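
A minimal PyTorch sketch of the kind of model described, assuming a ResNet101 backbone with its classification head replaced by a single regression output trained on 2D feed images (hyperparameters and data pipeline are assumptions, not the paper's settings):

```python
# Sketch: ResNet101 adapted for single-value weight regression from a 2D image (assumed setup).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet101(weights=None)             # pretrained weights could also be used
model.fc = nn.Linear(model.fc.in_features, 1)      # replace the 1000-class head with one weight output

criterion = nn.L1Loss()                            # mean absolute error, matching the reported metric
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on dummy data (batch of 8 RGB images, 224x224).
images = torch.randn(8, 3, 224, 224)
weights_g = torch.rand(8, 1) * 100                 # ground-truth feed weights in grams (dummy)
pred = model(images)
loss = criterion(pred, weights_g)
loss.backward()
optimizer.step()
```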

The Accuracy Assessment of Species Classification according to Spatial Resolution of Satellite Image Dataset Based on Deep Learning Model (딥러닝 모델 기반 위성영상 데이터세트 공간 해상도에 따른 수종분류 정확도 평가)

  • Park, Jeongmook;Sim, Woodam;Kim, Kyoungmin;Lim, Joongbin;Lee, Jung-Soo
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1407-1422 / 2022
  • This study was conducted to classify tree species and assess the classification accuracy using SE-Inception, a classification-based deep learning model. The dataset's input images were taken from Worldview-3 and GeoEye-1 imagery, and the input image size was divided into 10 × 10 m, 30 × 30 m, and 50 × 50 m patches to compare and evaluate the species classification accuracy. The label data were divided into five tree species (Pinus densiflora, Pinus koraiensis, Larix kaempferi, Abies holophylla Maxim., and Quercus) by visually interpreting the divided images, and labeling was performed manually. The dataset comprised a total of 2,429 images, of which about 85% were used as training data and about 15% as validation data. As a result of classification using the deep learning model, an overall accuracy of up to 78% was achieved with the Worldview-3 images and up to 84% with the GeoEye-1 images, showing high classification performance. In particular, Quercus showed a high F1 score of more than 85% regardless of the input image size, but species with similar spectral characteristics, such as Pinus densiflora and Pinus koraiensis, produced many errors. Therefore, there may be limitations in extracting features from the spectral information of satellite images alone, and classification accuracy may be improved by using images containing additional pattern information such as vegetation indices and the Gray-Level Co-occurrence Matrix (GLCM).
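
To illustrate the squeeze-and-excitation mechanism that gives SE-Inception its name (a generic SE block, not the exact architecture used in the paper), a short PyTorch sketch:

```python
# Generic squeeze-and-excitation (SE) block: channel-wise reweighting of feature maps.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                 # "squeeze": global spatial average
        self.fc = nn.Sequential(                            # "excitation": per-channel gating
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                         # rescale each channel

# Example: reweight a 256-channel feature map from an Inception-style branch.
feat = torch.randn(2, 256, 14, 14)
print(SEBlock(256)(feat).shape)  # torch.Size([2, 256, 14, 14])
```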

A Design of the Vehicle Crisis Detection System(VCDS) based on vehicle internal and external data and deep learning (차량 내·외부 데이터 및 딥러닝 기반 차량 위기 감지 시스템 설계)

  • Son, Su-Rak;Jeong, Yi-Na
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.2 / pp.128-133 / 2021
  • Currently, the autonomous vehicle market is commercializing Level 3 autonomous vehicles, but there is a possibility that an accident may occur even during fully autonomous driving due to stability issues. In fact, autonomous vehicles have recorded 81 accidents. This is because, unlike Level 3, autonomous vehicles at Level 4 and above have to judge and respond to emergency situations by themselves. Therefore, this paper proposes a vehicle crisis detection system (VCDS) that collects and stores information from outside the vehicle through a CNN and uses the stored information together with vehicle sensor data to output the crisis level of the vehicle as a number between 0 and 1. The VCDS consists of two modules. The vehicle external situation collection module (VESCM) collects surrounding vehicle and pedestrian data using a CNN-based neural network model. The vehicle crisis situation determination module detects a crisis situation by using the output of the VESCM and the vehicle's internal sensor data. As a result of the experiment, the average operation time of the VESCM was 55 ms, that of R-CNN was 74 ms, and that of CNN was 101 ms. In particular, R-CNN shows a computation time similar to the VESCM when the number of pedestrians is small, but takes more computation time than the VESCM as the number of pedestrians increases. On average, the VESCM was 25.68% faster than R-CNN and 45.54% faster than CNN, and the accuracy of all three models did not fall below 80%, showing high accuracy.
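
As a rough illustration of how a crisis score in [0, 1] might be produced from external detections plus internal sensor readings (the feature choices and the small network below are assumptions, not the VCDS design), a Python sketch:

```python
# Assumed sketch: fuse external-detection features with internal sensor data into a 0-1 crisis score.
import torch
import torch.nn as nn

class CrisisScorer(nn.Module):
    def __init__(self, n_external=4, n_sensors=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_external + n_sensors, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid(),                          # bound the output to (0, 1)
        )

    def forward(self, external_feat, sensor_feat):
        return self.net(torch.cat([external_feat, sensor_feat], dim=-1))

# Example inputs: [vehicle count, pedestrian count, nearest distance, closing speed] plus 6 sensor values.
external = torch.tensor([[3.0, 2.0, 4.5, 1.2]])
sensors = torch.tensor([[55.0, 0.1, 0.0, 0.3, 12.0, 0.0]])
print(CrisisScorer()(external, sensors).item())    # untrained score, between 0 and 1
```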

Integrated receptive field diversification method for improving speaker verification performance for variable-length utterances (가변 길이 입력 발성에서의 화자 인증 성능 향상을 위한 통합된 수용 영역 다양화 기법)

  • Shin, Hyun-seo;Kim, Ju-ho;Heo, Jungwoo;Shim, Hye-jin;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.319-325 / 2022
  • Variation in utterance length is a representative factor that can degrade the performance of speaker verification systems. To handle this issue, previous studies attempted to extract speaker features from multiple branches or to use convolution layers with different receptive fields. Combining the advantages of these two approaches for variable-length input, this paper proposes integrated receptive field diversification, which extracts speaker features through more diverse receptive fields. The proposed method processes the input features with convolutional layers of different receptive fields on multiple time-axis branches, and extracts the speaker embedding by dynamically aggregating the processed features according to the length of the input utterance. The deep neural networks in this study were trained on the VoxCeleb2 dataset and tested on the VoxCeleb1 evaluation dataset with utterances divided into 1 s, 2 s, 5 s, and full-length segments. Experimental results demonstrated that the proposed method reduces the equal error rate by 19.7% compared to the baseline.
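
A simplified PyTorch sketch of the general idea of multi-branch convolutions with different receptive fields whose outputs are softly aggregated per utterance (the branch sizes and the gating-based aggregation are assumptions, not the paper's exact architecture):

```python
# Assumed sketch: parallel 1-D convolutions with different kernel sizes, softly aggregated per utterance.
import torch
import torch.nn as nn

class MultiReceptiveField(nn.Module):
    def __init__(self, in_ch=40, out_ch=64, kernels=(3, 5, 9)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernels
        )
        self.gate = nn.Linear(out_ch, 1)            # scores each branch from its pooled output

    def forward(self, x):                           # x: (batch, features, time), time may vary
        pooled = [b(x).mean(dim=2) for b in self.branches]        # (batch, out_ch) per branch
        scores = torch.softmax(
            torch.stack([self.gate(p) for p in pooled], dim=1), dim=1
        )                                                          # (batch, n_branch, 1)
        stacked = torch.stack(pooled, dim=1)                       # (batch, n_branch, out_ch)
        return (scores * stacked).sum(dim=1)                       # length-adaptive embedding

# Short and long utterances produce embeddings of the same size.
print(MultiReceptiveField()(torch.randn(2, 40, 100)).shape)        # torch.Size([2, 64])
print(MultiReceptiveField()(torch.randn(2, 40, 500)).shape)        # torch.Size([2, 64])
```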

A Study on Non-Fungible Token Platform for Usability and Privacy Improvement (사용성 및 프라이버시 개선을 위한 NFT 플랫폼 연구)

  • Kang, Myung Joe;Kim, Mi Hui
    • KIPS Transactions on Computer and Communication Systems / v.11 no.11 / pp.403-410 / 2022
  • Non-Fungible Tokens (NFTs) created on the basis of blockchain have their own unique value, so they cannot be forged or exchanged with other tokens or coins. Using these characteristics, NFTs can be issued for digital assets such as images, videos, artworks, game characters, and items to claim ownership of the assets among many users and objects in cyberspace, as well as to prove the original. However, interest in NFTs has exploded since the beginning of 2020, placing a heavy load on the blockchain network; as a result, users experience problems such as delays in computational processing and very large fees in the mining process. Additionally, all actions of users are stored in the blockchain, and digital assets are stored in a blockchain-based distributed file storage system, which may unnecessarily expose the personal information of users who do not want to be identified on the Internet. In this paper, we propose an NFT platform using cloud computing, an access gate, a conversion table, and a cloud ID to improve the usability and privacy problems that occur in the existing system. For the performance comparison between the local and cloud systems, we measured the gas used for smart contract deployment and NFT-issuance transactions. As a result, even though the cloud system used the same experimental environment and parameters, it saved about 3.75% of the gas for smart contract deployment and about 4.6% for NFT-issuance transactions, confirming that the cloud system can handle computations more efficiently than the local system.
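
For reference, the gas consumed by a deployment or minting transaction can be read from its transaction receipt; a minimal web3.py sketch, assuming a locally running Ethereum test node and that the ABI and bytecode come from a compiled NFT contract (placeholders, not the paper's contract or platform):

```python
# Sketch: measure gas used for contract deployment via the transaction receipt (assumed setup).
from web3 import Web3

def measure_deployment_gas(abi, bytecode, rpc_url="http://127.0.0.1:8545"):
    """Deploy a compiled contract on a test node and report the gas used."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))          # e.g. a local Ganache/Hardhat test node
    account = w3.eth.accounts[0]

    tx = w3.eth.contract(abi=abi, bytecode=bytecode).constructor().transact({"from": account})
    receipt = w3.eth.wait_for_transaction_receipt(tx)
    print("deployment gas:", receipt.gasUsed)

    # An NFT-issuance call would be measured the same way from its own receipt, e.g.:
    # nft = w3.eth.contract(address=receipt.contractAddress, abi=abi)
    # mint_tx = nft.functions.mint(account, 1).transact({"from": account})   # hypothetical mint()
    # print("mint gas:", w3.eth.wait_for_transaction_receipt(mint_tx).gasUsed)
    return receipt.gasUsed
```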

Efficient Privacy-Preserving Duplicate Elimination in Edge Computing Environment Based on Trusted Execution Environment (신뢰실행환경기반 엣지컴퓨팅 환경에서의 암호문에 대한 효율적 프라이버시 보존 데이터 중복제거)

  • Koo, Dongyoung
    • KIPS Transactions on Computer and Communication Systems / v.11 no.9 / pp.305-316 / 2022
  • With the flood of digital data owing to the Internet of Things and big data, cloud service providers that process and store vast amounts of data from multiple users can apply data deduplication techniques for efficient data management. The edge computing paradigm, introduced as an extension of cloud computing, can improve the user experience by mitigating problems such as network congestion toward the central cloud server and reduced computational efficiency. However, adding a new edge device that is not entirely trustworthy may increase the computational complexity, because additional cryptographic operations are needed to preserve data privacy during duplicate identification and elimination. In this paper, we propose an efficiency-improved, privacy-preserving data deduplication protocol with an optimized user-edge-cloud communication framework that utilizes a trusted execution environment. Direct sharing of secret information between the user and the central cloud server minimizes the computational complexity on edge devices and enables the use of efficient encryption algorithms on the cloud service provider's side. Users also gain a better experience by offloading data to edge devices, which enables deduplication and independent activity. Through experiments, the efficiency of the proposed scheme is analyzed, showing up to a 78x improvement in computation during the data outsourcing process compared to a previous study that does not exploit a trusted execution environment in an edge computing architecture.
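
To illustrate why deduplication of encrypted data needs special handling (a textbook convergent-encryption-style sketch, not the proposed protocol), note that deriving the key from the content itself makes identical plaintexts yield identical tags, so duplicates can be detected without revealing the data:

```python
# Sketch of convergent-encryption-style deduplication tags (illustrative, not the paper's protocol).
import hashlib

def dedup_tag(plaintext: bytes) -> tuple[bytes, bytes]:
    """Derive a content-based key and a public duplicate-check tag from the data."""
    key = hashlib.sha256(plaintext).digest()        # key = H(data); same data -> same key
    tag = hashlib.sha256(key).digest()              # tag = H(key); safe to compare at the server
    return key, tag                                 # ciphertext would be E(key, data), omitted here

store: dict[bytes, int] = {}                        # tag -> reference count (stands in for cloud storage)

for chunk in [b"sensor log 2022-01", b"sensor log 2022-01", b"sensor log 2022-02"]:
    _, tag = dedup_tag(chunk)
    store[tag] = store.get(tag, 0) + 1              # duplicate chunks share one stored copy

print(len(store), "unique chunks,", sum(store.values()), "uploads")   # 2 unique chunks, 3 uploads
```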

CycleGAN Based Translation Method between Asphalt and Concrete Crack Images for Data Augmentation (데이터 증강을 위한 순환 생성적 적대 신경망 기반의 아스팔트와 콘크리트 균열 영상 간의 변환 기법)

  • Shim, Seungbo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.5 / pp.171-182 / 2022
  • The safe use of a structure requires it to be maintained in an undamaged state; thus, a typical factor that determines the safety of a structure is the presence of cracks. Cracks are caused by various factors, damage the structure in various ways, and exist in different shapes. Making matters worse, if these cracks are left unattended, the risk of structural failure increases and can proceed to a catastrophe. Hence, methods of checking structural damage using deep learning and computer vision technology have recently been introduced. These methods usually presuppose a large amount of training image data, yet the amount of training image data is always insufficient. This insufficiency particularly degrades the performance of deep learning crack detection algorithms. Hence, in this study, a method of augmenting crack image data based on an image translation technique was developed. Specifically, this method obtains crack image data for training a deep learning neural network model by transforming an asphalt crack image into a concrete crack image, or vice versa. Ultimately, this method is expected to enable the development of a robust crack detection algorithm by increasing the diversity of its training data.
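
The core CycleGAN constraint that makes such unpaired translation work is the cycle-consistency loss, which requires that translating asphalt → concrete → asphalt recovers the original image; a minimal PyTorch sketch of this loss term (the generators below are placeholders, not the paper's networks):

```python
# Sketch of the CycleGAN cycle-consistency loss between two image domains (placeholder generators).
import torch
import torch.nn as nn

G_ac = nn.Conv2d(3, 3, 3, padding=1)     # placeholder generator: asphalt -> concrete
G_ca = nn.Conv2d(3, 3, 3, padding=1)     # placeholder generator: concrete -> asphalt
l1 = nn.L1Loss()

asphalt = torch.rand(4, 3, 256, 256)     # unpaired crack images from each domain
concrete = torch.rand(4, 3, 256, 256)

# Forward and backward cycles should reconstruct the original images.
cycle_loss = l1(G_ca(G_ac(asphalt)), asphalt) + l1(G_ac(G_ca(concrete)), concrete)

# Total generator objective = adversarial losses (omitted) + lambda * cycle_loss.
lambda_cyc = 10.0
print((lambda_cyc * cycle_loss).item())
```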

Analysis of ICT Education Trends using Keyword Occurrence Frequency Analysis and CONCOR Technique (키워드 출현 빈도 분석과 CONCOR 기법을 이용한 ICT 교육 동향 분석)

  • Youngseok Lee
    • Journal of Industrial Convergence / v.21 no.1 / pp.187-192 / 2023
  • In this study, trends in ICT education were investigated by analyzing the frequency of appearance of keywords related to machine learning and by using the convergence of iterated correlations (CONCOR) technique. A total of 304 papers published from 2018 to the present on registered sites were retrieved from Google Scholar using "ICT education" as the keyword, and 60 papers pertaining to ICT education were selected based on a systematic literature review. Subsequently, keywords were extracted from the titles and abstracts of the papers. For the word frequency and indicator data, 49 keywords with high appearance frequency were extracted by analyzing word frequency, via the term frequency-inverse document frequency (TF-IDF) technique in natural language processing, and co-occurrence frequency. The degree of relationship was verified by analyzing the connection structure and degree centrality between words, and clusters composed of similar words were derived via CONCOR analysis. First, "education," "research," "result," "utilization," and "analysis" were identified as the main keywords. Second, an N-gram network graph with "education" as the keyword showed that "curriculum" and "utilization" exhibited the highest level of correlation. Third, a cluster analysis with "education" as the keyword yielded five groups: "curriculum," "programming," "student," "improvement," and "information." These results indicate that practical research necessary for ICT education can be conducted by analyzing and identifying ICT education trends.
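
A brief scikit-learn sketch of the TF-IDF keyword extraction step described above (the toy corpus and parameter choices are assumptions, not the study's data or settings):

```python
# Sketch: extract high-scoring keywords from paper titles/abstracts with TF-IDF (toy corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

docs = [
    "ICT education curriculum improvement using programming",
    "analysis of student utilization of ICT in education research",
    "machine learning based analysis of ICT education results",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)                      # (n_docs, n_terms) sparse matrix

scores = np.asarray(tfidf.sum(axis=0)).ravel()              # aggregate TF-IDF score per term
terms = vectorizer.get_feature_names_out()
top = sorted(zip(terms, scores), key=lambda t: -t[1])[:5]
print(top)                                                   # the most prominent keywords in the corpus
```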