• Title/Summary/Keyword: computer models

Search Results: 3,894

A Stratified Mixed Multiplicative Quantitative Randomized Response Model (층화 혼합 승법 양적속성 확률화응답모형)

  • Lee, Gi-Sung;Hong, Ki-Hak;Son, Chang-Kyoon
    • Journal of the Korean Data Analysis Society
    • /
    • v.20 no.6
    • /
    • pp.2895-2905
    • /
    • 2018
  • We present a mixed multiplicative quantitative randomized response model that adds an unrelated quantitative attribute and a forced-answer option to the multiplicative model suggested by Bar-Lev et al. (2004). We also establish theoretical grounds for estimating the sensitive quantitative attribute depending on whether the information on the unrelated quantitative attribute is known. We then extend the model to a stratified mixed multiplicative quantitative randomized response model for a stratified population, along with two allocation methods, proportional and optimum allocation. Various quantitative randomized response models, such as Eichhorn and Hayre's model (1983), Bar-Lev et al.'s model (2004), Gjestvang and Singh's model (2007), and Lee's model (2016a), are special cases of the suggested model. Finally, we compare the efficiency of our suggested model with that of Bar-Lev et al.'s (2004) model and find that the larger the value of $C_z$, the greater the efficiency of the suggested model.
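
For context, a minimal sketch of the multiplicative scrambling idea that these models share, written in the spirit of Eichhorn and Hayre's (1983) scrambled response model cited above; the symbols $S$, $\theta$, and $\hat{\mu}_X$ are illustrative and are not taken from the paper.

```latex
% Minimal sketch of multiplicative (scrambled) randomized response in the
% spirit of Eichhorn-Hayre (1983); the notation is illustrative.
% Each respondent draws a scrambling variable S, independent of the sensitive
% value X, from a distribution with known mean \theta = E[S] > 0, and reports
% only the scrambled product Z = S X.
\[
Z = S X, \qquad
\hat{\mu}_X = \frac{\bar{Z}}{\theta}, \qquad
E[\hat{\mu}_X] = \frac{E[S]\,E[X]}{\theta} = \mu_X ,
\]
% so \hat{\mu}_X is an unbiased estimator of the sensitive mean. The mixed and
% stratified variants discussed above add an unrelated attribute and a forced
% answer, and combine per-stratum estimators by proportional or optimum
% allocation.
```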

A Noise-Tolerant Hierarchical Image Classification System based on Autoencoder Models (오토인코더 기반의 잡음에 강인한 계층적 이미지 분류 시스템)

  • Lee, Jong-kwan
    • Journal of Internet Computing and Services
    • /
    • v.22 no.1
    • /
    • pp.23-30
    • /
    • 2021
  • This paper proposes a noise-tolerant image classification system using multiple autoencoders. The development of deep learning technology has dramatically improved the performance of image classifiers. However, if the images are contaminated by noise, the performance degrades rapidly. Noise added to an image is inevitably generated in the process of acquiring and transmitting the image, so the noise has to be dealt with before a classifier can be used in a real environment. The autoencoder, on the other hand, is an artificial neural network model trained to produce output values similar to its input values: if the input data is similar to the training data, the error between the input and output of the autoencoder will be small, and if it is not, the error will be large. The proposed system uses this relationship between the input and output of the autoencoder and classifies images in two phases. In the first phase, the classes with the highest likelihood of classification are selected and then subjected to the procedure again in the second phase. For the performance analysis of the proposed system, classification accuracy was tested on a Gaussian-noise-contaminated MNIST dataset. The experiments confirmed that, in a noisy environment, the proposed system achieves higher accuracy than a CNN-based classification technique.
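
A minimal sketch of the reconstruction-error idea described above: one small dense autoencoder is trained per MNIST class, and a test image is assigned to the class whose autoencoder reconstructs it best. The layer sizes and training settings are assumptions, and the paper's actual two-phase procedure and architecture may differ.

```python
# Minimal sketch: per-class autoencoders on MNIST, classification by smallest
# reconstruction error. Layer sizes and training settings are illustrative.
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

def build_autoencoder():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(784, activation="sigmoid"),
    ])

# Train one autoencoder per digit class on images of that class only.
autoencoders = []
for c in range(10):
    ae = build_autoencoder()
    ae.compile(optimizer="adam", loss="mse")
    xc = x_train[y_train == c]
    ae.fit(xc, xc, epochs=1, batch_size=128, verbose=0)
    autoencoders.append(ae)

# Reconstruction error of every test image under every class model; the paper
# first narrows to the most likely classes and repeats the procedure, while
# this sketch collapses both phases into a single argmin over the errors.
errors = np.stack(
    [np.mean((x_test - ae.predict(x_test, verbose=0)) ** 2, axis=1)
     for ae in autoencoders], axis=1)
pred = np.argmin(errors, axis=1)
print("accuracy:", float(np.mean(pred == y_test)))
```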

Evaluation of Runoff Prediction from a Coniferous Forest Watershed and Runoff Estimation under Various Cover Degree Scenarios using the GeoWEPP Watershed Model (GeoWEPP을 이용한 침엽수림 지역 유출특성 예측 및 다양한 식생 피도에 따른 유출량 평가)

  • Choi, Jaewan;Shin, Min Hwan;Cheon, Se Uk;Shin, Dongseok;Lee, Sung Jun;Moon, Sun Jung;Ryu, Ji Cheol;Lim, Kyoung Jae
    • Journal of Korean Society on Water Environment
    • /
    • v.27 no.4
    • /
    • pp.425-432
    • /
    • 2011
  • To control non-point source pollution at the watershed scale, rainfall-runoff characteristics of forest watersheds should be investigated, since forest is the dominant land use in Korea. Long-term monitoring would be the ideal method, but computer models have been utilized because of the cost and labor involved in long-term monitoring of watersheds. In this study, the Geo-spatial interface to the Water Erosion Prediction Project (GeoWEPP) model was evaluated for its runoff prediction from a coniferous-forest-dominated watershed. The $R^2$ and NSE were 0.77 and 0.63 for the calibration results and 0.92 and 0.89 for the validation results, respectively, indicating that the GeoWEPP model can be used to evaluate rainfall-runoff characteristics. To estimate runoff changes from the coniferous forest watershed under various cover degrees, ten cover degree scenarios (10% to 100% in 10% increments) were run using the calibrated GeoWEPP model. Runoff was found to increase as cover degree decreases: runoff volume was highest ($206,218.66m^3$) at 10% cover degree and lowest ($134,074.58m^3$) at 100% cover degree, owing to changes in evapotranspiration under the various cover degrees. As shown in this study, the GeoWEPP model can be used efficiently to investigate runoff characteristics of a coniferous forest watershed and the effects of various cover degree scenarios on runoff generation.
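
Since the abstract reports model fit in terms of $R^2$ and NSE, a minimal sketch of how those two goodness-of-fit statistics are commonly computed from observed and simulated runoff series follows; the example values are made up and not from the study.

```python
# Minimal sketch: Nash-Sutcliffe efficiency (NSE) and R^2 between observed
# and simulated runoff; the example values below are made up.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    return r ** 2

observed  = [12.1, 30.5, 8.7, 55.2, 20.3]   # illustrative runoff values
simulated = [10.9, 28.0, 11.2, 50.1, 22.5]
print(f"NSE = {nse(observed, simulated):.2f}, "
      f"R^2 = {r_squared(observed, simulated):.2f}")
```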

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.12
    • /
    • pp.291-306
    • /
    • 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve the quality of life and the productivity of businesses. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the internet of things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises privacy issues, which are a major concern. Edge computing, where distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve the privacy of users. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe that this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of the edge computing platform. It also presents the privacy-preserving approaches of deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful. Finally, it discusses open issues and challenges in leveraging deep learning within edge computing.
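
As one concrete illustration of the distributed inference architectures such a review covers, the sketch below partitions a Keras model so that its early layers run on an edge node and only a compact feature vector is sent to the remaining layers in the cloud; the layer sizes and the split point are arbitrary assumptions, not an architecture taken from the paper.

```python
# Minimal sketch of model-partitioned inference: the first layers run on an
# "edge" node and only their compact intermediate features are sent to the
# "cloud" for the remaining layers. Layer sizes and split point are arbitrary.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu")(x)
split = tf.keras.layers.GlobalAveragePooling2D()(x)      # edge/cloud boundary
y = tf.keras.layers.Dense(128, activation="relu")(split)
outputs = tf.keras.layers.Dense(10, activation="softmax")(y)
full_model = tf.keras.Model(inputs, outputs)

# Edge sub-model: input image -> compact 32-float feature vector.
edge_model = tf.keras.Model(inputs, split)
# Cloud sub-model: feature vector -> class probabilities (reuses the layers).
cloud_in = tf.keras.Input(shape=(32,))
h = full_model.layers[-2](cloud_in)
cloud_out = full_model.layers[-1](h)
cloud_model = tf.keras.Model(cloud_in, cloud_out)

image = np.random.rand(1, 224, 224, 3).astype("float32")
features = edge_model(image)          # computed on the edge device
probs = cloud_model(features)         # only 32 floats cross the network
print(probs.shape)                    # (1, 10)
```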

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.3
    • /
    • pp.71-80
    • /
    • 2021
  • The selection of an appropriate neural network algorithm is an important step toward accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulties when choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated the performance of each algorithm by comparing their errors and processing times. Each neural network model was trained using a tax dataset, and the trained model was used for data prediction to compare accuracies among the various algorithms. Furthermore, the effects of activation functions and various optimizers on the performance of the models were analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average $R^2$ scores of 0.78 and 0.75, respectively, while the basic DNN model achieves the lowest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance (with DNN, GRU, and LSTM) in terms of error and the worst performance in terms of processing time. The findings of this study are thus expected to be useful for scientists and developers.
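
A minimal sketch of the kind of comparison described above: a small DNN, LSTM, and GRU regressor are trained on a synthetic univariate series with the Adam optimizer, and RMSE and wall-clock training time are reported. The data, window length, layer sizes, and epoch counts are assumptions and bear no relation to the paper's tax dataset or settings.

```python
# Minimal sketch of comparing DNN, LSTM, and GRU regressors on a synthetic
# series with the Adam optimizer; all data and hyperparameters are illustrative.
import time
import numpy as np
import tensorflow as tf

# Synthetic sliding-window dataset: predict the next value from the last 8.
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)
window = 8
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X_seq = X[..., None]                      # (samples, timesteps, 1) for RNNs

def make_models():
    return {
        "DNN": tf.keras.Sequential([
            tf.keras.layers.Dense(32, activation="relu", input_shape=(window,)),
            tf.keras.layers.Dense(1)]),
        "LSTM": tf.keras.Sequential([
            tf.keras.layers.LSTM(32, input_shape=(window, 1)),
            tf.keras.layers.Dense(1)]),
        "GRU": tf.keras.Sequential([
            tf.keras.layers.GRU(32, input_shape=(window, 1)),
            tf.keras.layers.Dense(1)]),
    }

for name, model in make_models().items():
    model.compile(optimizer="adam", loss="mse")
    data = X if name == "DNN" else X_seq
    start = time.time()
    model.fit(data[:1500], y[:1500], epochs=5, batch_size=32, verbose=0)
    pred = model.predict(data[1500:], verbose=0).ravel()
    rmse = np.sqrt(np.mean((pred - y[1500:]) ** 2))
    print(f"{name}: RMSE={rmse:.3f}, time={time.time() - start:.1f}s")
```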

Development of Cloud-Based Medical Image Labeling System and Its Quantitative Analysis of Sarcopenia (클라우드기반 의료영상 라벨링 시스템 개발 및 근감소증 정량 분석)

  • Lee, Chung-Sub;Lim, Dong-Wook;Kim, Ji-Eon;Noh, Si-Hyeong;Yu, Yeong-Ju;Kim, Tae-Hoon;Yoon, Kwon-Ha;Jeong, Chang-Won
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.7
    • /
    • pp.233-240
    • /
    • 2022
  • Most recent AI research has focused on developing AI models. Lately, however, artificial intelligence research has gradually shifted from model-centric to data-centric, and the importance of training data is receiving much attention as a result. Preparing training data takes considerable time and effort, since it occupies a significant part of the entire process, and the generation of labeling data also differs depending on the purpose of development. Therefore, a tool with various labeling functions is needed to address these unmet needs. In this paper, we describe a labeling system for creating precise labeling data of medical images quickly. To implement this, a semi-automatic method using Back Projection and GrabCut techniques and an automatic method based on the predictions of a machine learning model were implemented. We show not only the running-time advantage of the proposed system in generating labeling data, but also its superiority through a comparative evaluation of accuracy. In addition, by analyzing an image dataset of about 1,000 patients, meaningful diagnostic indexes for men and women in the diagnosis of sarcopenia are presented.
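
As an illustration of the semi-automatic labeling step mentioned above, a minimal sketch of refining a rough bounding box into a pixel-level mask with OpenCV's GrabCut; the file name, box coordinates, and iteration count are placeholders, and the paper's actual pipeline (including Back Projection and the model-based automatic mode) is not reproduced here.

```python
# Minimal sketch of the semi-automatic GrabCut step: a rough bounding box
# around the target region is refined into a pixel-level label mask.
# The file name and box coordinates are placeholders.
import cv2
import numpy as np

image = cv2.imread("slice.png")                  # placeholder image slice
mask = np.zeros(image.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Rough rectangle around the target region, supplied by the annotator.
rect = (50, 60, 200, 180)                        # (x, y, width, height)
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked definite/probable foreground become the label; everything
# else is background. The binary mask can then be stored as labeling data.
label = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
cv2.imwrite("label.png", label.astype("uint8"))
```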

Anomaly detection and attack type classification mechanism using Extra Tree and ANN (Extra Tree와 ANN을 활용한 이상 탐지 및 공격 유형 분류 메커니즘)

  • Kim, Min-Gyu;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • v.23 no.5
    • /
    • pp.79-85
    • /
    • 2022
  • Anomaly detection is a method to detect and block abnormal data flows in general users' data sets. The conventional approach is signature-based detection, which detects and defends against attacks using the signatures of already known attacks. It has the advantage of a low false positive rate, but it is very vulnerable to zero-day vulnerability attacks and modified attacks. Anomaly detection, in contrast, has the disadvantage of a high false positive rate, but the advantage of being able to identify, detect, and block zero-day vulnerability attacks and modified attacks, so related studies are being actively conducted. In this study, we address these anomaly detection mechanisms and propose a new mechanism that performs both anomaly detection and classification while compensating for the high false positive rate mentioned above. The experiment was conducted with five configurations reflecting the characteristics of various algorithms, and the configuration showing the best accuracy is proposed as the result of this study. After an attack is detected by applying the Extra Tree and a three-layer ANN at the same time, the attack type of the detected attack data is classified using the Extra Tree. Verification was performed on the NSL-KDD data set, and the accuracy was 99.8%, 99.1%, 98.9%, 98.7%, and 97.9% for Normal, DoS, Probe, U2R, and R2L, respectively. This configuration showed superior performance compared to the other models.
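
A minimal sketch of the two-stage idea described above, using scikit-learn: an Extra Trees model and a small three-layer ANN both flag attack traffic, and a second Extra Trees model then assigns the attack type. The synthetic features, the requirement that both detectors agree, and all hyperparameters are assumptions rather than the paper's configuration.

```python
# Minimal sketch of the two-stage mechanism: Extra Trees and a small ANN vote
# on "attack vs. normal", then Extra Trees assigns the attack type.
# Synthetic features stand in for preprocessed NSL-KDD records.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 20))                  # placeholder feature vectors
attack_type = rng.integers(0, 5, size=4000)      # 0=Normal, 1..4=DoS/Probe/U2R/R2L
is_attack = (attack_type > 0).astype(int)

X_tr, X_te, y_tr, y_te, t_tr, t_te = train_test_split(
    X, is_attack, attack_type, test_size=0.25, random_state=0)

# Stage 1: anomaly detection with Extra Trees and a three-layer ANN in parallel;
# here a record is flagged only when both detectors agree it is an attack.
detector_et = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
detector_ann = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=300,
                             random_state=0).fit(X_tr, y_tr)
flagged = (detector_et.predict(X_te) == 1) & (detector_ann.predict(X_te) == 1)

# Stage 2: classify the attack type of flagged records with Extra Trees
# trained only on attack traffic.
classifier = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(
    X_tr[t_tr > 0], t_tr[t_tr > 0])
attack_labels = classifier.predict(X_te[flagged])
print("flagged:", int(flagged.sum()), "records; example types:", attack_labels[:10])
```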

Impacts of Seasonal and Interannual Variabilities of Sea Surface Temperature on its Short-term Deep-learning Prediction Model Around the Southern Coast of Korea (한국 남부 해역 SST의 계절 및 경년 변동이 단기 딥러닝 모델의 SST 예측에 미치는 영향)

  • JU, HO-JEONG;CHAE, JEONG-YEOB;LEE, EUN-JOO;KIM, YOUNG-TAEG;PARK, JAE-HUN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.2
    • /
    • pp.49-70
    • /
    • 2022
  • Sea surface temperature (SST), one of the key ocean variables, has a significant impact on climate, marine ecosystems, and human activities, so SST prediction has always been an important issue. Recently, deep learning has drawn much attention, since it can predict SST by learning past SST patterns. Compared to numerical simulations, deep learning models are highly efficient, since they can estimate nonlinear relationships in the input data, and with the recent development of Graphics Processing Units (GPUs), large amounts of data can be processed repeatedly and rapidly. In this study, short-term SST is predicted with a Convolutional Neural Network (CNN)-based U-Net that can handle spatiotemporal data concurrently and overcome the drawbacks of previously existing deep-learning-based models. The SST prediction performance depends on the seasonal and interannual SST variabilities around the southern coast of Korea. The predicted SST has a wide range of variance during spring and summer and a small range of variance during fall and winter. The wide range of variance also has a significant correlation with changes in the Pacific Decadal Oscillation (PDO) index. These results are found to be affected by the intensity of the seasonal and PDO-related interannual SST fronts and their intensity variations along the southern Korean seas. This study implies that SST prediction performance using the developed deep learning model can vary significantly with the seasonal and interannual variabilities of SST.
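
For readers unfamiliar with the architecture, a minimal sketch of a small CNN-based U-Net that maps a stack of recent SST fields to the next SST field; the grid size, number of input days, filter counts, and loss are assumptions, not the configuration used in the study.

```python
# Minimal sketch of a small U-Net that maps a stack of past SST fields to the
# next SST field. Grid size, input days, and filter counts are illustrative.
import tensorflow as tf

def build_unet(grid=64, past_days=7):
    inputs = tf.keras.Input(shape=(grid, grid, past_days))

    # Encoder
    c1 = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = tf.keras.layers.MaxPooling2D()(c1)
    c2 = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = tf.keras.layers.MaxPooling2D()(c2)

    # Bottleneck
    b = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder with skip connections
    u2 = tf.keras.layers.UpSampling2D()(b)
    u2 = tf.keras.layers.Concatenate()([u2, c2])
    c3 = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = tf.keras.layers.UpSampling2D()(c3)
    u1 = tf.keras.layers.Concatenate()([u1, c1])
    c4 = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

    # One output channel: the predicted SST field for the next time step.
    outputs = tf.keras.layers.Conv2D(1, 1, activation="linear")(c4)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_unet()
model.summary()
# model.fit(past_sst_stack, next_sst, ...) would use arrays of shape
# (samples, 64, 64, 7) and (samples, 64, 64, 1), which are not shown here.
```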

Deep Learning Based Group Synchronization for Networked Immersive Interactions (네트워크 환경에서의 몰입형 상호작용을 위한 딥러닝 기반 그룹 동기화 기법)

  • Lee, Joong-Jae
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.10
    • /
    • pp.373-380
    • /
    • 2022
  • This paper presents a deep-learning-based group synchronization method that supports networked immersive interactions between remote users. The goal of group synchronization is to enable all participants to interact with each other synchronously in order to increase user presence. Most previous methods focus on NTP-based clock synchronization to enhance time accuracy, and moving average filters are used to control media playout time on the synchronization server. For example, the exponentially weighted moving average (EWMA) can track and estimate accurate playout time if the changes in the input data are not significant, but it needs more time to stabilize after any given change caused by codec and system loads or fluctuations in network status. To tackle this problem, this work proposes Deep Group Synchronization (DeepGroupSync), a deep-learning-based group synchronization method that models important features from the data. The model consists of two gated recurrent unit (GRU) layers and one fully connected layer and predicts an optimal playout time from the sequence of past playout delays. Experiments are conducted with an existing method that uses the EWMA and the proposed method that uses DeepGroupSync. The results show that the proposed method is more robust against unpredictable or rapid changes in network conditions than the existing method.
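
To make the contrast above concrete, a minimal sketch that applies an EWMA and a small two-GRU-layer predictor (matching the layer layout the abstract describes) to a synthetic playout-delay trace with one abrupt jump; the trace, smoothing factor, window length, and hyperparameters are made up and not taken from the paper.

```python
# Minimal sketch contrasting EWMA smoothing of playout delays with a small
# GRU-based predictor (two GRU layers + one fully connected layer).
# The delay trace and all hyperparameters are illustrative.
import numpy as np
import tensorflow as tf

# Synthetic playout-delay trace (ms) with an abrupt network-condition change.
delays = np.concatenate([np.full(200, 40.0), np.full(200, 90.0)])
delays += np.random.randn(400) * 2.0

# Exponentially weighted moving average: slow to follow the abrupt jump.
alpha, ewma, ewma_trace = 0.1, delays[0], []
for d in delays:
    ewma = alpha * d + (1 - alpha) * ewma
    ewma_trace.append(ewma)

# GRU predictor: the last 10 delays -> the next playout time.
window = 10
X = np.stack([delays[i:i + window] for i in range(len(delays) - window)])[..., None]
y = delays[window:]
model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, return_sequences=True, input_shape=(window, 1)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
pred = model.predict(X, verbose=0).ravel()

# Compare both estimates just after the jump in network conditions.
print("EWMA error near the jump:", abs(ewma_trace[205] - delays[205]))
print("GRU  error near the jump:", abs(pred[205 - window] - delays[205]))
```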

A Design of the Vehicle Crisis Detection System (VCDS) based on vehicle internal and external data and deep learning (차량 내·외부 데이터 및 딥러닝 기반 차량 위기 감지 시스템 설계)

  • Son, Su-Rak;Jeong, Yi-Na
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.2
    • /
    • pp.128-133
    • /
    • 2021
  • Currently, the autonomous vehicle market is commercializing level 3 autonomous vehicles, but there is a possibility that an accident may occur even during fully autonomous driving due to stability issues; in fact, autonomous vehicles have recorded 81 accidents. This is because, unlike level 3, autonomous vehicles at level 4 and above have to judge and respond to emergency situations by themselves. Therefore, this paper proposes a vehicle crisis detection system (VCDS) that collects and stores information about the vehicle's surroundings through a CNN and uses the stored information together with vehicle sensor data to output the crisis level of the vehicle as a number between 0 and 1. The VCDS consists of two modules. The vehicle external situation collection module (VESCM) collects surrounding vehicle and pedestrian data using a CNN-based neural network model. The vehicle crisis situation determination module detects a crisis situation using the output of the VESCM and the vehicle's internal sensor data. In the experiments, the average operation time of the VESCM was 55 ms, versus 74 ms for R-CNN and 101 ms for CNN. In particular, R-CNN shows computation time similar to the VESCM when the number of pedestrians is small, but takes more computation time than the VESCM as the number of pedestrians increases. On average, the VESCM was 25.68% faster than R-CNN and 45.54% faster than CNN, and the accuracy of all three models did not fall below 80%, showing high accuracy.
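
A sketch of the fusion step described above: a two-branch Keras model in which a small CNN summarizes the external camera image, a dense branch processes internal sensor readings, and the merged features produce a crisis score between 0 and 1. The input shapes, layer sizes, and sensor count are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of fusing an external-camera CNN branch with an internal
# sensor branch to produce a crisis score in [0, 1]. Shapes are illustrative.
import numpy as np
import tensorflow as tf

image_in = tf.keras.Input(shape=(96, 96, 3), name="external_camera")
x = tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu")(image_in)
x = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

sensor_in = tf.keras.Input(shape=(8,), name="internal_sensors")  # placeholder count
s = tf.keras.layers.Dense(16, activation="relu")(sensor_in)

merged = tf.keras.layers.Concatenate()([x, s])
h = tf.keras.layers.Dense(32, activation="relu")(merged)
crisis = tf.keras.layers.Dense(1, activation="sigmoid", name="crisis_score")(h)

model = tf.keras.Model([image_in, sensor_in], crisis)
model.compile(optimizer="adam", loss="binary_crossentropy")

# One dummy forward pass: the output is a single value between 0 and 1.
img = np.random.rand(1, 96, 96, 3).astype("float32")
sensors = np.random.rand(1, 8).astype("float32")
print(float(model([img, sensors])[0, 0]))
```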