• Title/Summary/Keyword: Artificial cloud


A Study on Distributed System Construction and Numerical Calculation Using Raspberry Pi

  • Ko, Young-ho; Heo, Gyu-Seong; Lee, Sang-Hyun
    • International Journal of Advanced Smart Convergence / v.8 no.4 / pp.194-199 / 2019
  • As system performance increases, data is increasingly processed in parallel rather than serially. Today's CPU architectures have evolved to exploit multiple cores, and data processing methods are being developed to take advantage of parallel execution. In recent years, desktop CPU core counts have risen, data volumes have grown exponentially, and the development of artificial intelligence has further increased the need for large-scale data processing. The neural networks used in artificial intelligence consist of matrix operations, which makes them well suited to parallel processing. Against this backdrop, this paper aims to speed up processing by building a cluster of Raspberry Pi boards and implementing a parallel processing system on it. The Raspberry Pi is a credit-card-sized single-board computer made by the Raspberry Pi Foundation in the UK, developed for education in schools and developing countries; it is inexpensive, and because it has a large user community, the information needed to work with it is easy to find. A distributed processing system must be supported by software that connects multiple computers in parallel and operates on the assembled system. The Raspberry Pi boards are connected to a switch hub, communicate over the internal network, and implement parallel processing using the Message Passing Interface (MPI). The parallel programs can be written in Python, and C or Fortran can also be used. The system was tested by multiplying a two-dimensional array of size 10000 by 0.1 in parallel. The tests showed a reduction in computation time, with parallel speedup scaling up to the total number of cores in the system. The system in this paper was built on Linux-based single-board computers, and testing on systems in different environments is considered necessary.
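The partitioning idea behind the cluster test above can be sketched in plain Python. The paper itself uses MPI on networked Raspberry Pi boards; since an MPI runtime is not assumed here, this stand-in uses Python's `multiprocessing` to show the same row-wise split of a 2-D array, with each worker scaling its chunk by 0.1, as one rank would in the MPI version. All names and the small array size are illustrative.

```python
from multiprocessing import Pool

def scale_rows(rows, factor=0.1):
    # Each worker scales its share of the rows independently,
    # mirroring how each MPI rank would process its own partition.
    return [[x * factor for x in row] for row in rows]

def parallel_scale(matrix, n_workers=4):
    # Split the matrix into one contiguous chunk of rows per worker.
    chunk = (len(matrix) + n_workers - 1) // n_workers
    parts = [matrix[i:i + chunk] for i in range(0, len(matrix), chunk)]
    with Pool(n_workers) as pool:
        results = pool.map(scale_rows, parts)
    # Reassemble the scaled chunks in worker (rank) order.
    return [row for part in results for row in part]

if __name__ == "__main__":
    # Small stand-in for the paper's 10000-size two-dimensional array.
    m = [[1.0] * 100 for _ in range(100)]
    scaled = parallel_scale(m)
```

In the MPI version the split and reassembly would correspond to scatter and gather operations over the cluster's internal network rather than to a process pool on one machine.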

Development of an intelligent edge computing device equipped with on-device AI vision model (온디바이스 AI 비전 모델이 탑재된 지능형 엣지 컴퓨팅 기기 개발)

  • Kang, Namhi
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.17-22 / 2022
  • In this paper, we design a lightweight embedded device that can support intelligent edge computing and show that the device detects objects in images input from a camera in real time. The proposed system can be applied to environments without pre-installed infrastructure, such as intelligent video surveillance for industrial sites or military areas, or video security systems mounted on autonomous vehicles such as drones. On-device AI (artificial intelligence) technology is increasingly required for the widespread adoption of intelligent vision recognition systems. Offloading computation from an image acquisition device to a nearby edge device enables fast service with fewer network and system resources than AI services performed in the cloud. In addition, this approach is expected to be safely applicable to various industries, as it reduces the attack surface exposed to hacking and minimizes the disclosure of sensitive data.

A Study on Integrity Protection of Edge Computing Application Based on Container Technology (컨테이너 기술을 활용한 엣지 컴퓨팅 환경 어플리케이션 무결성 보호에 대한 연구)

  • Lee, Changhoon; Shin, Youngjoo
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.6 / pp.1205-1214 / 2021
  • Edge computing is used as a solution to the cost and transmission-delay problems caused by the network bandwidth consumed when IoT/CPS devices are integrated into the cloud, by performing artificial intelligence (AI) computation in an environment close to the data source. Since edge computing runs on devices that provide high-performance computation and network connectivity in the physical world, application integrity must be considered so that the devices cannot be exploited by cyber attacks capable of causing human and material damage. In this paper, we propose a technique that protects the integrity of edge computing applications written in scripting languages vulnerable to tampering, such as Python, which is widely used for implementing artificial intelligence, by packaging them as container images and digitally signing them. The proposed method builds on the integrity protection technology (Docker Content Trust) provided by the open-source container tooling. The Docker client was modified to consult a whitelist of container signature information so that only containers allowed on the edge computing device can run.
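The whitelist check described above can be illustrated with a minimal sketch. This is not the authors' modified Docker client: it only shows the underlying idea of admitting an image when the digest of the bytes actually obtained matches the digest recorded for its name, so a tampered image under the same tag is rejected. The image names and contents are hypothetical.

```python
import hashlib

def image_digest(image_bytes):
    # Container images are identified by the SHA-256 digest of their content.
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def is_allowed(whitelist, name, image_bytes):
    # Admit the container only when the digest recorded for its name
    # matches the digest of the bytes actually pulled; any tampering
    # changes the digest and causes rejection.
    return whitelist.get(name) == image_digest(image_bytes)

# Example: register an approved (signed) image, then verify before running.
signed = b"approved application image contents"  # hypothetical image bytes
whitelist = {"edge-app:1.0": image_digest(signed)}
```

In the actual system the digests and signature information would come from Docker Content Trust metadata rather than from a locally built dictionary.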

Centralized Machine Learning Versus Federated Averaging: A Comparison using MNIST Dataset

  • Peng, Sony; Yang, Yixuan; Mao, Makara; Park, Doo-Soon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.742-756 / 2022
  • A flood of information has accompanied the rise of the internet and digital devices in the era of the fourth industrial revolution. Every millisecond, massive amounts of structured and unstructured data are generated; smartphones, wearable devices, sensors, and self-driving cars are just a few of the devices that now generate massive amounts of data in our daily lives. Machine learning has been adopted in many areas to recognize patterns in data and to support sectors including healthcare, government, banking, the military, and more. However, the conventional machine learning model requires data owners to upload their information to one central location for model training. This classical model makes data owners worry about the risk of transferring private information, because traditional machine learning requires pushing their data to the cloud. Furthermore, training machine learning and deep learning models requires massive computing resources. Thus, many researchers have turned to a new paradigm known as federated learning, which trains artificial intelligence models over distributed clients while preserving the privacy of the data owners' information. Hence, this paper implements Federated Averaging with a deep neural network to classify handwritten digit images while protecting sensitive data, and compares the centralized machine learning model with federated averaging. The results show that the centralized model outperforms federated learning in terms of accuracy, but the classical model carries additional risks, such as privacy concerns, because the data is stored in a data center. The MNIST dataset was used in this experiment.
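The aggregation step of Federated Averaging can be sketched compactly. In FedAvg the server replaces the global model with the average of the client models, weighted by each client's sample count; this sketch treats each model as a flat list of parameters and omits the local training rounds, so it shows only the server-side averaging rule.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: average the clients' parameter vectors,
    weighting each client by its number of local training samples."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Example: two clients with equal data contribute equally;
# a client with more samples pulls the average toward its weights.
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 1])
```

Because only parameter vectors travel to the server, the raw training data (here, MNIST images) never leaves the clients, which is the privacy property the paper contrasts with centralized training.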

A Predictive System for Equipment Fault Diagnosis based on Machine Learning in Smart Factory (스마트 팩토리에서 머신 러닝 기반 설비 장애진단 예측 시스템)

  • Chow, Jaehyung; Lee, Jaeoh
    • KNOM Review / v.24 no.1 / pp.13-19 / 2021
  • Recently, research in industrial settings has aimed to maximize production by preventing failures and accidents in advance through fault diagnosis/prediction and factory automation. Cloud technology for accumulating large amounts of data, big data technology for processing it, and artificial intelligence (AI) technology for analyzing it easily are promising candidate technologies for accomplishing this. Moreover, with advances in fault diagnosis/prediction, equipment maintenance is evolving from Time-Based Maintenance (TBM), in which equipment is serviced at regular intervals, to TBM combined with Condition-Based Maintenance (CBM), in which maintenance is performed according to the condition of the equipment. CBM-based maintenance requires defining and analyzing the condition of the facility. Therefore, in this paper we propose a machine-learning-based system and data model for diagnosing equipment faults, and based on this we present a case of predicting fault occurrence in advance.
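The CBM idea of defining equipment condition can be illustrated with a minimal rule sketch. The formula, sensor names, baselines, and threshold below are all illustrative assumptions, not the paper's data model: a condition score is computed from sensor readings, and maintenance is flagged when the score crosses a threshold instead of waiting for the next time-based service interval.

```python
def health_indicator(vibration, temperature, baseline_vib=1.0, baseline_temp=60.0):
    # Combine normalized deviations from assumed normal operating values
    # into a single condition score (illustrative formula, not from the paper).
    return max(vibration / baseline_vib, temperature / baseline_temp)

def maintenance_needed(readings, threshold=1.2):
    # Flag the equipment for condition-based maintenance when any recent
    # reading pushes the condition score past the threshold.
    return any(health_indicator(v, t) > threshold for v, t in readings)
```

In the paper's system a trained model, rather than a fixed threshold on a hand-built score, would map the accumulated sensor data to a fault prediction.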

A Study on the Development Issues of Digital Health Care Medical Information (디지털 헬스케어 의료정보의 발전과제에 관한 연구)

  • Moon, Yong
    • Industry Promotion Research / v.7 no.3 / pp.17-26 / 2022
  • As a well-being mindset that values keeping our minds and bodies free and healthy spreads through society, health care has become a key part of the fourth industrial revolution alongside big data, IoT, AI, and blockchain, and the advancement of the advanced medical information service industry is being promoted through convergence technologies. In digital healthcare, intelligent information technologies such as artificial intelligence, big data, and the cloud are driving a digital transformation of the traditional medical and healthcare industry. In addition, with the rapid convergence of science and technology, social change has gradually expanded issues of health, medical care, and welfare. Therefore, this study first examines the general meaning and current status of digital healthcare medical information, and then analyzes and reviews the developmental tasks needed to activate it. The purpose of this article is to improve usability so that our human freedom can be fully pursued.

A Study on Hangul Handwriting Generation and Classification Mode for Intelligent OCR System (지능형 OCR 시스템을 위한 한글 필기체 생성 및 분류 모델에 관한 연구)

  • Jin-Seong Baek; Ji-Yun Seo; Sang-Joong Jung; Do-Un Jeong
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.222-227 / 2022
  • In this paper, we implemented Korean handwriting generation and classification models based on deep learning algorithms that can be applied to various industries. The system consists of a GAN-based Korean handwriting generation model and a CNN-based Korean handwriting classification model. The GAN comprises a generator model that produces fake Korean handwriting data and a discriminator model that distinguishes the fake handwriting from real data. The CNN model was trained on the 'PHD08' dataset and was confirmed to classify Korean handwriting with 92.45% accuracy. When the Korean handwriting data generated by the implemented GAN model was added to the existing CNN training dataset and the classification model was re-evaluated, the classification accuracy was 96.86%, superior to the previous performance.

Study on the Application of Big Data Mining to Activate Physical Distribution Cooperation : Focusing AHP Technique (물류공동화 활성화를 위한 빅데이터 마이닝 적용 연구 : AHP 기법을 중심으로)

  • Young-Hyun Pak; Jae-Ho Lee; Kyeong-Woo Kim
    • Korea Trade Review / v.46 no.5 / pp.65-81 / 2021
  • Technological development in the era of the fourth industrial revolution is changing the paradigm of various industries. Technologies such as big data, cloud, artificial intelligence, virtual reality, and the Internet of Things are creating synergy with existing industries, driving radical development and value creation. Among them, the logistics sector has long been shaped by quantitative data and has continuously accumulated and managed it, so it is highly amenable to big data analysis and can benefit greatly from it. Modern data mining technology has developed alongside big data to discover hidden patterns and new correlations, and meaningful results are being derived from it. Data mining therefore occupies an important place in big data analysis, and this study analyzes the data mining techniques that can contribute to the logistics field and to joint logistics. Using the AHP technique, we derive priorities among the data mining types for promoting joint logistics, with R and RStudio used as the analysis tools. The AHP criteria were association analysis, cluster analysis, decision trees, artificial neural networks, web mining, and opinion mining; the alternatives were joint transport and delivery, a joint logistics center, a joint logistics information system, and joint logistics partnerships.
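The core AHP computation, deriving priority weights from a pairwise comparison matrix, can be sketched briefly. The paper performs its analysis in R/RStudio; this Python sketch uses the standard geometric-mean method, and the 2x2 example matrix is illustrative rather than taken from the study.

```python
import math

def ahp_priorities(pairwise):
    """Derive AHP priority weights from a reciprocal pairwise comparison
    matrix using the geometric-mean method: take the geometric mean of
    each row, then normalize the means to sum to 1."""
    n = len(pairwise)
    geo = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

# Illustrative example: criterion A judged 3 times as important as B,
# so the reciprocal entry for B vs. A is 1/3.
weights = ahp_priorities([[1.0, 3.0], [1.0 / 3.0, 1.0]])
```

In the study this computation would be applied first to the six data-mining criteria and then to the four joint-logistics alternatives under each criterion, and the results combined into overall priorities.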

A Study on Senior Behavioral Analysis and Care System Using Big Data (빅데이터를 활용한 시니어 행동분석 돌봄 시스템 연구)

  • Jang, Jae-Youl; Choi, Jin-Il; Uh, Je-Sun; Choi, Chul-Jae
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.15 no.5 / pp.973-980 / 2020
  • Various applied solutions based on fourth-industrial-revolution technology are being deployed in the health and welfare sector. In this paper, a big-data-based senior care system is designed. The proposed system collects senior behavioral data through the APIs of smart devices and, when a senior's behavior deviates from their established baseline, sends a first notification to that senior. If there is no response, the system prevents dangerous situations by passing the information to peer seniors, family members, and the emergency center.
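The two-stage notification rule described above can be sketched as a simple decision function. The deviation test (a fixed number of standard deviations from a behavioral baseline), the stage names, and the thresholds are illustrative assumptions, not the authors' exact criteria.

```python
def check_senior(activity_counts, baseline_mean, baseline_std, k=2.0):
    # Stage 1: compare the latest behavioral measurement against the
    # senior's own baseline; alert the senior first on a large deviation.
    latest = activity_counts[-1]
    if abs(latest - baseline_mean) <= k * baseline_std:
        return "normal"
    return "notify_senior"

def escalate(responded):
    # Stage 2: if the senior does not respond to the first notification,
    # inform peer seniors, family members, and the emergency center.
    return "no_action" if responded else "notify_family_and_center"
```

In the proposed system the baseline would be built from behavioral analyses collected through the smart-device APIs rather than supplied as fixed parameters.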

Media and AI Technology: Media Intelligence (미디어와 AI 기술: 미디어 지능화)

  • Cho, Y.S.; Lee, N.K.; Choi, D.J.; Seo, J.I.; Lee, T.J.; Park, J.K.; Lee, H.W.; Kim, H.M.
    • Electronics and Telecommunications Trends / v.35 no.5 / pp.92-101 / 2020
  • Artificial intelligence (AI) has become the hottest topic in information and communications technology (ICT) in recent years. Along with the advancement of AI technology, technologies such as big data, cloud, and high-speed wired and wireless communication are being applied to existing media areas in earnest, affecting all parts of the media value chain from content production to consumption. AI technology is now spreading across the media industry faster than any other industry. In the future, the gap between those with and without AI technology will widen, further deepening the polarization of the media ecosystem. Media intelligence, which combines media and AI technologies, is now perceived as essential, not optional. In this paper, we examine the current status of technology development and standardization by major domestic and foreign institutions on how AI is being utilized in the media industry. In addition, we discuss what technology should be developed to lead media intelligence.