• Title/Summary/Keyword: edge intelligence

A Study on the System for AI Service Production (인공지능 서비스 운영을 위한 시스템 측면에서의 연구)

  • Hong, Yong-Geun
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.10
    • /
    • pp.323-332
    • /
    • 2022
  • As various services using AI technology are being developed, much attention is being paid to AI service production. Recently, as AI technology has come to be recognized as one of the ICT services, a great deal of research is being conducted on general-purpose AI service production. In this paper, I describe research results from a systems perspective on AI service production, focusing on the deployment and operation of machine learning models, which are the final steps of a typical machine learning development procedure. Three different Ubuntu systems were built, and experiments were conducted on them using data from the COCO 2017 validation dataset, combining different AI models (RFCN, SSD-Mobilenet) and different communication methods (gRPC, REST) to request and perform AI services through TensorFlow Serving. Through various experiments, it was found that the type of AI model has a greater influence on AI service inference time than the communication method between machines, and that in the case of an object detection AI service, inference time is affected more by the number and complexity of objects in the image than by the file size of the image to be detected. In addition, it was confirmed that if the AI service is performed remotely rather than locally, it takes more time to infer than when performed locally, even on a machine with good performance. Based on these results, it is expected that system design suitable for service goals, AI model development, and efficient AI service production will be possible.
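
For context on how such requests are issued, the following is a minimal, hypothetical sketch (not the paper's code) of timing a single object-detection request against a TensorFlow Serving instance over its REST interface. The server address, model name, and image path are illustrative placeholders; the gRPC variant would use the tensorflow-serving-api PredictionService client instead.

```python
# Minimal sketch: timing an object-detection request to TensorFlow Serving over REST.
# Host, port, model name, and image path are placeholders, not values from the paper.
import time
import json
import requests
import numpy as np
from PIL import Image

SERVER = "http://192.168.0.10:8501"   # remote or local TF Serving instance (assumed)
MODEL = "ssd_mobilenet"               # e.g. ssd_mobilenet or rfcn (assumed name)

def predict(image_path: str) -> float:
    """Send one COCO validation image and return the round-trip inference time."""
    image = np.array(Image.open(image_path).convert("RGB"), dtype=np.uint8)
    payload = json.dumps({"instances": [image.tolist()]})

    start = time.perf_counter()
    resp = requests.post(f"{SERVER}/v1/models/{MODEL}:predict",
                         data=payload,
                         headers={"Content-Type": "application/json"})
    elapsed = time.perf_counter() - start

    resp.raise_for_status()
    prediction = resp.json()["predictions"][0]   # detection boxes, classes, scores
    print(f"{image_path}: {len(prediction.get('detection_scores', []))} detections "
          f"in {elapsed:.3f}s")
    return elapsed

if __name__ == "__main__":
    predict("val2017/000000000139.jpg")          # a COCO 2017 validation image
```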

A Study on Analysis of Problems in Data Collection for Smart Farm Construction (스마트팜 구축을 위한 데이터수집의 문제점 분석 연구)

  • Kim Song Gang;Nam Ki Po
    • Convergence Security Journal
    • /
    • v.22 no.5
    • /
    • pp.69-80
    • /
    • 2022
  • Now that climate change and food resource security are becoming issues around the world, smart farms are emerging as an alternative for solving them. In addition, changes in the production environment of the primary industries are a major concern for people engaged in all of them (agriculture, livestock, fishery), and the resulting food shortage problem is an important problem that we all need to solve. To address this, the primary industries are trying to solve the food shortage problem by improving productivity through the introduction of smart farms that apply Fourth Industrial Revolution technologies such as ICT, BT, IoT, big data, and artificial intelligence, in efforts carried out by both the public and private sectors. This paper considers the minimum requirements for a smart farm data collection system for the development and utilization of smart farms, the establishment of a sustainable agricultural management system, a sequential system construction method, and a purposeful, efficient, and usable data collection system. In particular, based on in-depth field investigations in the livestock (pig farming) sector and analysis of various cases, we analyze and improve the problems of the data collection system for building a Korean smart farm standard model, which is facing limitations. The goal is to propose a method for collecting big data through an efficient and usable big data collection system.
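
As a concrete illustration of the kind of structured, validated record such a collection system handles, the sketch below defines a minimal environmental sensor reading for a pig-farm barn. The field names, units, and plausibility ranges are illustrative assumptions, not the Korean smart farm standard model discussed in the paper.

```python
# Minimal sketch of a validated sensor record for livestock (pig farming) data
# collection. Field names, units, and ranges are illustrative assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class BarnReading:
    farm_id: str
    barn_id: str
    temperature_c: float      # indoor temperature
    humidity_pct: float       # relative humidity
    co2_ppm: float            # CO2 concentration
    timestamp: str = ""

    def validate(self) -> bool:
        """Reject physically implausible values before they enter the collection pipeline."""
        return (-30.0 <= self.temperature_c <= 60.0
                and 0.0 <= self.humidity_pct <= 100.0
                and 0.0 <= self.co2_ppm <= 10000.0)

reading = BarnReading("farm-001", "barn-03", 24.5, 61.2, 1850.0,
                      datetime.now(timezone.utc).isoformat())
if reading.validate():
    print(json.dumps(asdict(reading)))   # payload sent to the central collection system
```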

Study on Disaster Response Strategies Using Multi-Sensors Satellite Imagery (다종 위성영상을 활용한 재난대응 방안 연구)

  • Jongsoo Park;Dalgeun Lee;Junwoo Lee;Eunji Cheon;Hagyu Jeong
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_2
    • /
    • pp.755-770
    • /
    • 2023
  • Due to recent severe climate change, abnormal weather phenomena, and other factors, the frequency and magnitude of natural disasters are increasing. The need for disaster management using artificial satellites is growing, especially during large-scale disasters due to time and economic constraints. In this study, we have summarized the current status of next-generation medium-sized satellites and microsatellites in operation and under development, as well as trends in satellite imagery analysis techniques using a large volume of satellite imagery driven by the advancement of the space industry. Furthermore, by utilizing satellite imagery, particularly focusing on recent major disasters such as floods, landslides, droughts, and wildfires, we have confirmed how satellite imagery can be employed for damage analysis, thereby establishing its potential for disaster management. Through this study, we have presented satellite development and operational statuses, recent trends in satellite imagery analysis technology, and proposed disaster response strategies that utilize various types of satellite imagery. It was observed that during the stages of disaster progression, the utilization of satellite imagery is more prominent in the response and recovery stages than in the prevention and preparedness stages. In the future, with the availability of diverse imagery, we plan to research the fusion of cutting-edge technologies like artificial intelligence and deep learning, and their applicability for effective disaster management.

A Study on Efficient IPv6 Address Allocation for Future Military (미래 군을 위한 효율적인 IPv6 주소 할당에 관한 연구)

  • Hanwoo Lee;Suhwan Kim;Gunwoo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.613-618
    • /
    • 2023
  • The advancement of Information and Communication Technology (ICT) is accelerating innovation across society, and the defense sector is no exception as it adopts technologies aligned with the Fourth Industrial Revolution. In particular, the Army is making efforts to establish an advanced Army TIGER 4.0 system, aiming to create highly intelligent and interconnected mobile units. To achieve this, the Army is integrating cutting-edge scientific and technological advancements from the Fourth Industrial Revolution to enhance mobility, networking, and intelligence. However, the existing addressing system, IPv4, has limitations in meeting the exponentially increasing demand for network IP addresses. Consequently, the military considers IPv6 address allocation an essential process for ensuring efficient network management and sufficient address space. This study proposes an approach to IPv6 address allocation for the future military, considering the Army TIGER system. The proposal outlines how the Army's application networks can be differentiated and how IP addresses can be allocated, from the Ministry of National Defense and the Joint Chiefs of Staff down to the future unit structures of the Army, Navy, and Air Force. Through this approach, the Army's advanced ground combat system, Army TIGER 4.0, is expected to operate more efficiently in network environments, enhancing overall information exchange and mobility for the future military.
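
As a rough illustration of the hierarchical delegation such a plan implies, the sketch below subdivides a single IPv6 prefix with Python's standard ipaddress module. The /32 prefix and the branch and unit breakdown are assumptions for illustration, not the allocation proposed in the paper.

```python
# Illustrative sketch of hierarchical IPv6 prefix delegation, assuming a /32
# assigned at the top level and subdivided downward. Prefix and labels are hypothetical.
import ipaddress

MND_PREFIX = ipaddress.ip_network("2001:db8::/32")   # documentation prefix as a stand-in

# Split the /32 into /36 blocks, one per service branch / major command.
branch_blocks = list(MND_PREFIX.subnets(new_prefix=36))
branches = {"JCS": branch_blocks[0], "Army": branch_blocks[1],
            "Navy": branch_blocks[2], "AirForce": branch_blocks[3]}

# Within a branch, delegate /48 blocks to subordinate units; each /48 still holds
# 65,536 /64 subnets for the unit's internal application networks.
army_units = list(branches["Army"].subnets(new_prefix=48))

for name, block in branches.items():
    print(f"{name:9s} {block}")
print("First Army unit block:", army_units[0])
print("/64 subnets available per unit:", 2 ** (64 - 48))
```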

A study on the design of an efficient hardware and software mixed-mode image processing system for detecting patient movement (환자움직임 감지를 위한 효율적인 하드웨어 및 소프트웨어 혼성 모드 영상처리시스템설계에 관한 연구)

  • Seungmin Jung;Euisung Jung;Myeonghwan Kim
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.29-37
    • /
    • 2024
  • In this paper, we propose an efficient image processing system to detect and track the movement of specific objects such as patients. The proposed system extracts the outline area of an object from a binarized difference image by applying a thinning algorithm that enables more precise detection than previous algorithms and is advantageous for mixed-mode design. The binarization and thinning steps, which require a large amount of computation, are designed at the RTL (Register Transfer Level) and replaced with optimized hardware blocks through logic synthesis. The designed binarization and thinning block was synthesized into a logic circuit using a standard 180 nm CMOS library, and its operation was verified through simulation. To compare against software-based performance, the binarization and thinning operations were also analyzed on sample images with 640 × 360 resolution in a 32-bit FPGA embedded system environment. As a result of this verification, it was confirmed that the mixed-mode design improves the processing speed of the binarization and thinning stages by 93.8% compared to the previous software-only processing. The proposed mixed-mode system for object recognition is expected to be able to efficiently monitor patient movements even in an edge computing environment where artificial intelligence networks are not applied.
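
For readers who want a software reference point for the hardware pipeline, the sketch below strings together frame differencing, binarization, and thinning with OpenCV and scikit-image. The threshold, frame size, and the use of skeletonize as a stand-in for the paper's thinning algorithm are all assumptions.

```python
# Software reference sketch of the binarization + thinning pipeline that the paper
# moves into hardware. OpenCV and scikit-image stand in for the RTL blocks;
# the frame size and threshold are illustrative assumptions.
import cv2
import numpy as np
from skimage.morphology import skeletonize

def movement_outline(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Return a 1-pixel-wide outline of moving regions between two grayscale frames."""
    diff = cv2.absdiff(curr_frame, prev_frame)                    # difference image
    _, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # binarization step
    thin = skeletonize(binary > 0)                                # thinning step (generic stand-in)
    return thin.astype(np.uint8) * 255

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    prev = cv2.cvtColor(cv2.resize(prev, (640, 360)), cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, (640, 360)), cv2.COLOR_BGR2GRAY)
        cv2.imshow("movement outline", movement_outline(prev, gray))
        prev = gray
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
```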

The Factors Influencing Value Awareness of Personalized Service and Intention to Use Smart Home: An Analysis of Differences between "Generation MZ" and "Generation X and Baby Boomers" (스마트홈 개인화 서비스에 대한 가치 인식 및 사용의도에의 영향 요인: "MZ세대"와 "X세대 및 베이비붐 세대" 간 차이 분석)

  • Sang-Keul Lee;Ae Ri Lee
    • Information Systems Review
    • /
    • v.23 no.3
    • /
    • pp.201-223
    • /
    • 2021
  • Smart home is an advanced Internet of Things (IoT) service that enhances the convenience of human daily life and improves the quality of life at home. Recently, with the emergence of smart home products and services to which artificial intelligence (AI) technology is applied, interest in smart home is increasing. To gain a competitive edge in the smart home market, companies are providing "personalized service" to users, which is a key service that can promote smart home use. This study investigates the factors affecting the value awareness of personalized service and intention to use smart home. This research focuses on four-dimensional motivated innovativeness (cognitive, functional, hedonic, and social innovativeness) and privacy risk awareness as key factors that influence the value awareness of personalized service of smart home. In particular, this study conducts a comparative analysis between the generation MZ (young people in late teens to 30s), who are showing socially differentiated characteristics, and the generation X and baby boomers in 40s to 50s or older. Based on the analysis results, this study derives the distinctive characteristics of generation MZ that are different from the older generation, and provides academic and practical implications for expanding the use of smart home services.

A Design of Authentication Mechanism for Secure Communication in Smart Factory Environments (스마트 팩토리 환경에서 안전한 통신을 위한 인증 메커니즘 설계)

  • Joong-oh Park
    • Journal of Industrial Convergence
    • /
    • v.22 no.4
    • /
    • pp.1-9
    • /
    • 2024
  • Smart factories are production facilities where cutting-edge information and communication technologies are fused with manufacturing processes, reflecting rapid advancements and changes in the global manufacturing sector. They capitalize on the integration of robotics and automation, the Internet of Things (IoT), and artificial intelligence technologies to maximize production efficiency in various manufacturing environments. However, the smart factory environment is prone to security threats and vulnerabilities arising from various attack techniques. When security threats occur in smart factories, they can lead to financial losses, damage to corporate reputation, and even human casualties, necessitating an appropriate security response. Therefore, this paper proposes a security authentication mechanism for safe communication in the smart factory environment. The components of the proposed authentication mechanism are smart devices, an internal operation management system, an authentication system, and a cloud storage server. The smart device registration process, the authentication procedure, and the detailed design of the anomaly detection and update procedures were developed in detail. The safety of the proposed authentication mechanism was analyzed, and through a performance comparison with existing authentication mechanisms, an efficiency improvement of approximately 8% was confirmed. Additionally, this paper presents directions for future research on lightweight protocols and security strategies for applying the proposed technology, aiming to further enhance security.
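
The following is a generic, hypothetical sketch of an HMAC-based challenge-response exchange of the kind such a mechanism might build on; it is not the paper's actual protocol. It assumes a pre-shared key issued at device registration and omits the cloud storage server and anomaly detection components.

```python
# Illustrative sketch of an HMAC-based challenge-response exchange between a smart
# device and an authentication server. Generic pattern only, not the paper's protocol;
# a pre-shared key established at device registration is assumed.
import hmac
import hashlib
import secrets

class AuthServer:
    def __init__(self):
        self.registered = {}   # device_id -> pre-shared key
        self.pending = {}      # device_id -> outstanding challenge

    def register(self, device_id: str) -> bytes:
        key = secrets.token_bytes(32)        # issued during device registration
        self.registered[device_id] = key
        return key

    def challenge(self, device_id: str) -> bytes:
        nonce = secrets.token_bytes(16)      # fresh nonce prevents replay
        self.pending[device_id] = nonce
        return nonce

    def verify(self, device_id: str, response: bytes) -> bool:
        key = self.registered[device_id]
        nonce = self.pending.pop(device_id)
        expected = hmac.new(key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

class SmartDevice:
    def __init__(self, device_id: str, key: bytes):
        self.device_id, self.key = device_id, key

    def respond(self, nonce: bytes) -> bytes:
        return hmac.new(self.key, nonce, hashlib.sha256).digest()

if __name__ == "__main__":
    server = AuthServer()
    dev = SmartDevice("plc-01", server.register("plc-01"))
    print("authenticated:", server.verify("plc-01", dev.respond(server.challenge("plc-01"))))
```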

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPUs". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have started to join the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three frameworks that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. The criterion is based simply on the lengths of the code; the learning curve and the ease of coding are not the main concern. According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning models or search methods that we can think of. The assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. Also, for those learning deep learning models, it is important that there are enough examples and references.
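
To make the automatic differentiation idea concrete, here is a small self-contained sketch of reverse-mode differentiation over a computational graph in plain Python. It is illustrative only; it is not how Theano, Tensorflow, or CNTK are implemented.

```python
# Tiny reverse-mode automatic differentiation sketch: each operation adds a node
# (value + local partial derivatives) to a computational graph, and gradients are
# accumulated backward along the edges using the chain rule. The simple traversal
# below assumes each intermediate node is used once (a tree-shaped expression);
# real frameworks topologically sort the graph before the backward pass.
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # [(parent_node, d(self)/d(parent)), ...]
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def backward(output):
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad   # chain rule along an edge
            stack.append(parent)

# Example: y = w * x + b, gradients of y with respect to w, x, b
w, x, b = Node(2.0), Node(3.0), Node(1.0)
y = add(mul(w, x), b)
backward(y)
print(y.value, w.grad, x.grad, b.grad)   # 7.0 3.0 2.0 1.0
```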

Implementation of a Self Controlled Mobile Robot with Intelligence to Recognize Obstacles (장애물 인식 지능을 갖춘 자율 이동로봇의 구현)

  • 류한성;최중경
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.312-321
    • /
    • 2003
  • In this paper, we implement a robot with the ability to recognize obstacles and move automatically to a destination. We present two results: a hardware implementation of an image processing board and a software implementation of a visual feedback algorithm for a self-controlled robot. In the first part, the mobile robot depends on commands from a control board that performs the image processing. We have studied the self-controlled mobile robot system equipped with a CCD camera for a long time. This robot system consists of an image processing board implemented with DSPs, a stepping motor, and a CCD camera. We propose an algorithm in which commands are delivered for the robot to move along the planned path. The distance that the robot is supposed to move is calculated on the basis of the absolute coordinate and the coordinate of the target spot. The image signal acquired by the CCD camera mounted on the robot is captured at every sampling time so that the robot can automatically avoid obstacles and finally reach the destination. The image processing board consists of a DSP (TMS320VC33), ADV611, SAA7111, ADV7176A, CPLD (EPM7256ATC144), and SRAM memories. In the second part, the visual feedback control has two types of vision algorithms: obstacle avoidance and path planning. The first algorithm works on cells, parts of the image divided by blob analysis. We perform image preprocessing to improve the input image, consisting of filtering, edge detection, NOR converting, and thresholding. The main image processing includes labeling, segmentation, and pixel density calculation. In the second algorithm, after an image frame has gone through preprocessing (edge detection, converting, thresholding), the histogram is measured vertically (in the y-axis direction). The binary histogram of the image then shows waveforms with only black and white variations. Here we use the fact that, since obstacles appear as sectional diagrams as if they were walls, there is no variation in the histogram. The intensities of the line histogram are measured vertically at intervals of 20 pixels. In this way, we can find uniform and nonuniform regions of the waveforms and define a period of uniform waveforms as an obstacle region. We can see that the algorithm is very useful for the robot to move while avoiding obstacles.
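
A rough sketch of the column-histogram test from the second algorithm is given below. The sampling step of 20 pixels follows the abstract, while the flatness and minimum-height thresholds are assumptions added for illustration.

```python
# Sketch of the column-histogram obstacle test: after edge detection and thresholding,
# column sums are sampled every 20 pixels, and a run of columns with nearly constant,
# non-empty sums is treated as an obstacle region (a "wall" gives a flat histogram).
# The flatness, min_height, and min_run thresholds are illustrative assumptions.
import numpy as np

def obstacle_regions(binary_image: np.ndarray, step: int = 20,
                     flatness: float = 5.0, min_height: int = 10,
                     min_run: int = 3):
    """Return (start_col, end_col) spans where the sampled column histogram is flat."""
    cols = np.arange(0, binary_image.shape[1], step)
    hist = binary_image[:, cols].sum(axis=0)          # vertical (y-direction) histogram
    regions, run_start = [], None
    for i in range(1, len(hist)):
        flat = (abs(int(hist[i]) - int(hist[i - 1])) <= flatness
                and hist[i] >= min_height)
        if flat and run_start is None:
            run_start = i - 1
        elif not flat and run_start is not None:
            if i - run_start >= min_run:
                regions.append((cols[run_start], cols[i - 1]))
            run_start = None
    if run_start is not None and len(hist) - run_start >= min_run:
        regions.append((cols[run_start], cols[-1]))
    return regions

# Example: a synthetic 240x320 binary frame with a uniform block acting as a wall.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[60:180, 100:220] = 1
print(obstacle_regions(frame))    # expect roughly [(100, 200)]
```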

The Need for Paradigm Shift in Semantic Similarity and Semantic Relatedness : From Cognitive Semantics Perspective (의미간의 유사도 연구의 패러다임 변화의 필요성-인지 의미론적 관점에서의 고찰)

  • Choi, Youngseok;Park, Jinsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.111-123
    • /
    • 2013
  • Semantic similarity/relatedness measures between two concepts play an important role in research on system integration and database integration. Moreover, current research on keyword recommendation or tag clustering strongly depends on this kind of semantic measure. For this reason, many researchers in various fields, including computer science and computational linguistics, have tried to improve methods for calculating semantic similarity/relatedness measures. The study of similarity between concepts is meant to discover how a computational process can model the way a human determines the relationship between two concepts. Most research on calculating semantic similarity uses ready-made reference knowledge such as a semantic network or a dictionary to measure concept similarity. The topological method calculates relatedness or similarity between concepts based on various forms of a semantic network, including a hierarchical taxonomy. This approach assumes that the semantic network reflects human knowledge well. The nodes in a network represent concepts, and ways to measure the conceptual similarity between two nodes are also regarded as ways to determine the conceptual similarity of two words (i.e., two nodes in a network). Topological methods can be categorized as node-based or edge-based, which are also called the information content approach and the conceptual distance approach, respectively. The node-based approach calculates similarity between concepts based on how much information the two concepts share in terms of a semantic network or taxonomy, while the edge-based approach estimates the distance between the nodes that correspond to the concepts being compared. Both approaches have assumed that the semantic network is static; that is, the topological approach has not considered changes in the semantic relations between concepts in the network. However, as information and communication technologies make it easier to share knowledge among people, semantic relations between concepts in a semantic network may change. To explain this change in semantic relations, we adopt cognitive semantics. The basic assumption of cognitive semantics is that humans judge semantic relations based on their cognition and understanding of concepts. This cognition and understanding is called 'world knowledge.' World knowledge can be categorized as personal knowledge and cultural knowledge. Personal knowledge is knowledge from personal experience, and everyone can have different personal knowledge of the same concept. Cultural knowledge is the knowledge shared by people who live in the same culture or use the same language; people in the same culture have a common understanding of specific concepts. Cultural knowledge can be the starting point of a discussion about the change of semantic relations: if the culture shared by people changes for some reason, their cultural knowledge may also change. Today's society and culture are changing at a fast pace, and the change of cultural knowledge is not a negligible issue in research on semantic relationships between concepts. In this paper, we propose future directions for research on semantic similarity. In other words, we discuss how research on semantic similarity can reflect the change of semantic relations caused by the change of cultural knowledge. We suggest three directions for future research on semantic similarity. First, the research should include versioning and update methodologies for semantic networks. Second, a dynamically generated semantic network can be used for the calculation of semantic similarity between concepts; if researchers can develop a methodology to extract the semantic network from a given knowledge base in real time, this approach can solve many problems related to the change of semantic relations. Third, a statistical approach based on corpus analysis can be an alternative to methods using a semantic network. We believe that these proposed research directions can be milestones for research on semantic relations.
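
As a small illustration of the node-based and edge-based measures discussed above, the sketch below computes a path-based (conceptual distance) similarity and a Resnik information-content similarity over WordNet with NLTK. It assumes the wordnet and wordnet_ic NLTK data packages have been downloaded, and simply contrasts the two families of measures on a static semantic network.

```python
# Sketch contrasting the edge-based (conceptual distance) and node-based
# (information content) similarity measures, computed over WordNet with NLTK.
# Requires the 'wordnet' and 'wordnet_ic' NLTK data packages (nltk.download).
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic("ic-brown.dat")     # information content from the Brown corpus

dog, cat, car = wn.synset("dog.n.01"), wn.synset("cat.n.01"), wn.synset("car.n.01")

for a, b in [(dog, cat), (dog, car)]:
    edge_based = a.path_similarity(b)            # shortest path in the taxonomy
    node_based = a.res_similarity(b, brown_ic)   # Resnik: IC of the most specific common ancestor
    print(f"{a.name():10s} vs {b.name():10s}  path={edge_based:.3f}  resnik={node_based:.3f}")
```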