• Title/Summary/Keyword: Cloud quality and performance


An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon; Yun, Chang Ho; Park, Jong Won; Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything/Things (IoE/IoT) and includes many video cameras that are networked together. Together with sensors, these networked cameras serve as one of the main input sources for many U-City services, and they constantly generate an enormous amount of video information, truly big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. In many cases, the accumulated video data must also be analyzed to detect an event or find a figure among them, which requires a great deal of computational power and usually takes a long time. There is ongoing research that tries to reduce the processing time of big video data, and cloud computing can be a good solution to this problem. Among the many cloud computing methodologies that can be applied, MapReduce is an interesting and attractive one; it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, which leads to exponential growth of the data produced by networked cameras, so we face real big data when dealing with video images produced by high-quality cameras. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities thanks to such methodologies. Because video data are unstructured, it is not easy to find good research results on analyzing them with MapReduce. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for video images, the storage client, and the streaming-IN component. The "video monitor" consists of a "video translator" and a "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives video data from the networked cameras and delivers it to the "storage client"; it also manages network bottlenecks to smooth the data stream. The "storage client" receives video data from the "streaming IN" component, stores it in the storage, and helps other components access the storage. The "video monitor" component transfers video data by smooth streaming and manages the protocols. The "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video images, and the "protocol" sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage; Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We propose our own methodology for analyzing video images with MapReduce: the workflow of video analysis is presented and explained in detail in this paper. The performance evaluation experiments showed that the proposed system worked well, and the results are presented and analyzed in this paper. On our cluster system, we used compressed $1920{\times}1080$ (FHD) resolution video data, the H.264 codec, and HDFS as the video storage, and measured the processing time according to the number of frames per mapper. Tracing the optimal input-split size and the processing time as a function of the number of nodes, we found that the system performance scaled linearly.
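
The paper's MapReduce workflow is only summarized above, and no source code is given in the abstract. Purely as an illustration of the pattern, the sketch below expresses per-frame event counting as a Hadoop Streaming style mapper and reducer in Python; `decode_frames` and `detect_event` are hypothetical placeholders, not the authors' components.

```python
#!/usr/bin/env python3
"""Sketch of a MapReduce-style video analysis job (Hadoop Streaming pattern).

Assumption: each input line names a chunk of frames stored in HDFS,
e.g. "camera01/chunk_0042". decode_frames() and detect_event() are
hypothetical placeholders for the real decoding/analysis code.
"""
import sys

def decode_frames(chunk_path):
    # Placeholder: would read the chunk from HDFS and yield decoded frames.
    return []

def detect_event(frame):
    # Placeholder: would run the actual detection on one frame.
    return False

def mapper(lines):
    # Map: emit (camera_id, 1) for every frame in which an event is detected.
    for line in lines:
        chunk_path = line.strip()
        camera_id = chunk_path.split("/")[0]
        for frame in decode_frames(chunk_path):
            if detect_event(frame):
                print(f"{camera_id}\t1")

def reducer(lines):
    # Reduce: sum event counts per camera (input arrives sorted by key).
    counts = {}
    for line in lines:
        key, value = line.rstrip("\n").split("\t")
        counts[key] = counts.get(key, 0) + int(value)
    for key, total in counts.items():
        print(f"{key}\t{total}")

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "reduce":
        reducer(sys.stdin)
    else:
        mapper(sys.stdin)
```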

EDF: An Interactive Tool for Event Log Generation for Enabling Process Mining in Small and Medium-sized Enterprises

  • Frans Prathama; Seokrae Won; Iq Reviessay Pulshashi; Riska Asriana Sutrisnowati
    • Journal of the Korea Society of Computer and Information / v.29 no.6 / pp.101-112 / 2024
  • In this paper, we present EDF (Event Data Factory), an interactive tool designed to assist event log generation for process mining. EDF integrates various data connectors to improve its ability to help users connect to diverse data sources. The tool employs low-code/no-code technology, along with graph-based visualization, to help non-expert users understand process flow and to enhance the user experience. By utilizing metadata information, EDF allows users to efficiently generate an event log containing case, activity, and timestamp attributes, and its log quality metrics enable users to assess the quality of the generated event log. We implement EDF under a cloud-based architecture and run a performance evaluation. Our case study and results demonstrate the usability and applicability of EDF, and an observational study confirms that EDF is easy to use and beneficial, expanding small and medium-sized enterprises' (SMEs) access to process mining applications.
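
EDF itself is a low-code GUI tool, so the following is only a minimal sketch of the event-log structure it targets: mapping source columns onto the mandatory case, activity, and timestamp attributes and running a simple completeness check. The column names and the pandas-based approach are assumptions for illustration, not the EDF implementation.

```python
import pandas as pd

# Illustration only: the minimal event-log structure (case, activity, timestamp)
# that tools like EDF produce from source data. Column names are assumed here.
raw = pd.DataFrame({
    "order_id":   ["A1", "A1", "A1", "B7", "B7"],
    "status":     ["created", "paid", "shipped", "created", "cancelled"],
    "changed_at": ["2024-01-02 09:00", "2024-01-02 10:30",
                   "2024-01-03 08:15", "2024-01-05 11:00",
                   "2024-01-05 11:45"],
})

# Map source columns onto the three mandatory event-log attributes.
event_log = raw.rename(columns={
    "order_id":   "case_id",
    "status":     "activity",
    "changed_at": "timestamp",
})
event_log["timestamp"] = pd.to_datetime(event_log["timestamp"])
event_log = event_log.sort_values(["case_id", "timestamp"])

# A simple quality check in the spirit of the paper's log-quality metrics:
# fraction of events with a missing mandatory attribute.
missing_ratio = event_log[["case_id", "activity", "timestamp"]].isna().mean()
print(event_log)
print(missing_ratio)
```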

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo; Hongki Kang; Wansang Yoon; Pyung-Chae Lim; Sooahm Rhee; Taejung Kim
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1211-1224 / 2023
  • Clouds cause many difficulties in observing land surface phenomena with optical satellites, for purposes such as national land observation, disaster response, and change detection. The presence of clouds affects not only the image processing stage but also the final data quality, so they need to be identified and removed. In this study, we therefore developed a new cloud detection technique that automatically searches for and extracts the pixels closest to the spectral pattern of clouds in satellite images, selects the optimal threshold, and produces a cloud mask based on that threshold. The technique consists of three main steps. In the first step, the Digital Number (DN) image is converted to top-of-atmosphere (TOA) reflectance. In the second step, preprocessing such as Hue-Saturation-Value (HSV) transformation, triangle thresholding, and maximum likelihood classification is applied to the TOA reflectance image, and the threshold for generating the initial cloud mask is determined for each image. In the third, post-processing step, noise in the initial cloud mask is removed and the cloud boundaries and interior are refined. As experimental data, CAS500-1 L2G images acquired over the Korean Peninsula from April to November were used, reflecting the spatial and seasonal diversity of cloud distribution. To verify the performance of the proposed method, its results were compared with those generated by a simple thresholding method. In the experiments, the proposed method detected clouds more accurately than the existing method by accounting for the radiometric characteristics of each image through the preprocessing process, and it minimized the influence of bright objects other than clouds (panel roofs, concrete roads, sand, etc.). The proposed method improved the F1-score by more than 30% compared to the existing method, but showed limitations in certain images containing snow.
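
The abstract names the main operations of the first two steps (DN to top-of-atmosphere reflectance conversion and per-image triangle thresholding). The sketch below illustrates those two operations only, using a Landsat-8-style reflectance rescaling form and scikit-image's triangle threshold; the calibration coefficients and random test band are made up, and this is not the authors' CAS500-1 processing code.

```python
import numpy as np
from skimage.filters import threshold_triangle  # pip install scikit-image

def dn_to_toa_reflectance(dn, gain, offset, sun_elevation_deg):
    """Convert digital numbers (DN) to top-of-atmosphere reflectance.

    Uses a Landsat-8-style rescaling rho' = gain * DN + offset followed by a
    solar-elevation correction. The coefficients below are hypothetical; real
    values come from the image metadata.
    """
    rho = dn.astype(np.float64) * gain + offset
    return rho / np.sin(np.radians(sun_elevation_deg))

def initial_cloud_mask(toa_band):
    """Per-image threshold chosen by triangle thresholding (second step)."""
    t = threshold_triangle(toa_band)
    return toa_band > t

if __name__ == "__main__":
    dn = np.random.randint(5000, 30000, size=(256, 256))   # stand-in for one band
    toa = dn_to_toa_reflectance(dn, gain=2e-5, offset=-0.1, sun_elevation_deg=55.0)
    mask = initial_cloud_mask(toa)
    print("cloud fraction:", mask.mean())
```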

Protecting Privacy of User Data in Intelligent Transportation Systems

  • Yazed Alsaawy; Ahmad Alkhodre; Adnan Abi Sen
    • International Journal of Computer Science & Network Security / v.23 no.5 / pp.163-171 / 2023
  • Intelligent transportation systems have produced a huge leap in the level of services offered to people, with a positive impact on users' quality of life. On the other hand, these services are becoming a new source of risk because the intelligent systems rely on data collected from vehicles to provide automatic contextual adaptation. Most popular privacy protection methods, such as dummies and obfuscation, cannot be used with many services because they depend on changing the number of vehicles or their physical locations, which degrades the accuracy of the service itself. This research presents a new approach based on shuffling the nicknames of vehicles. It fully preserves the quality of the service and prevents permanently tracking users, penetrating their privacy, revealing their whereabouts, or discovering additional details about their behavior and movements. Our approach is based on creating a central nickname pool in the cloud, as well as distributed sub-pools in fog nodes, to avoid delays and overloading of the central architecture. Finally, we demonstrate by simulation and by worked examples the superiority of the proposed approach and its ability to adapt to new services and provide an effective level of protection. In the comparison, we rely on the well-known privacy criteria of entropy, ubiquity, and performance.
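
The protocol details are not given in the abstract. The following is a hypothetical sketch of the core idea of nickname shuffling, with a central pool that hands shuffled batches to fog-node sub-pools and vehicles swapping nicknames over time; it illustrates the concept only and is not the authors' scheme.

```python
import random
from collections import deque

class NicknamePool:
    """Sketch of nickname shuffling for vehicle privacy.

    Hypothetical illustration: a central pool in the cloud hands out batches
    of nicknames to fog-node sub-pools, and vehicles periodically swap their
    nickname so that no single identifier can be tracked long-term.
    """

    def __init__(self, nicknames):
        self._free = deque(nicknames)

    def draw_subpool(self, size):
        """Give a fog node a shuffled batch of nicknames."""
        batch = [self._free.popleft() for _ in range(min(size, len(self._free)))]
        random.shuffle(batch)
        return batch

    def return_subpool(self, batch):
        """Fog node returns used nicknames; reshuffle before reuse."""
        random.shuffle(batch)
        self._free.extend(batch)

if __name__ == "__main__":
    cloud_pool = NicknamePool([f"nick-{i:04d}" for i in range(1000)])
    fog_batch = cloud_pool.draw_subpool(50)   # sub-pool held at one fog node
    vehicle_nick = fog_batch.pop()            # vehicle takes a nickname
    # ...after some time or distance, the vehicle swaps its nickname:
    fog_batch.append(vehicle_nick)
    random.shuffle(fog_batch)
    new_nick = fog_batch.pop()
    print("old:", vehicle_nick, "new:", new_nick)
```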

A Study on the Performance of Cloud-based VDI Adoption: Comparing between IS administrators and business users (클라우드 기반 VDI 도입 성과에 관한 연구 - 시스템 관리자와 일반 사용자의 비교를 중심으로 -)

  • Kim, Il-Han; Kwon, Sun-Dong
    • Management & Information Systems Review / v.37 no.2 / pp.149-167 / 2018
  • The purpose of this study is to analyze the performance of Virtual Desktop Infrastructure (VDI) adoption. VDI performance was measured from the perspective of IS managers (system quality, security, and managerial operation) and of business users (usability, access, and user satisfaction). Survey questionnaires were developed to measure VDI performance, and 84 data samples were collected from companies that had adopted cloud-based VDI. The research model was verified with Smart-PLS and SPSS. The findings were as follows. First, companies using VDI experienced real performance gains, but these did not reach their expectations. Second, comparing IS managers and business users, IS managers perceived considerably higher performance than business users, indicating large differences in performance perception among user groups. Compared with prior research on topics such as technical trends, system construction, and performance improvement, this study has the following implications. First, by comparing the expected and actual performance of companies that have implemented and operate VDI, it suggests how a company that wants to adopt VDI can manage expectations and achieve higher actual performance. Second, because the perception of VDI performance differs between business users and system managers, a fair evaluation of VDI performance requires balanced consideration of both groups.

Integrating UAV Remote Sensing with GIS for Predicting Rice Grain Protein

  • Sarkar, Tapash Kumar; Ryu, Chan-Seok; Kang, Ye-Seong; Kim, Seong-Heon; Jeon, Sae-Rom; Jang, Si-Hyeong; Park, Jun-Woo; Kim, Suk-Gu; Kim, Hyun-Jin
    • Journal of Biosystems Engineering / v.43 no.2 / pp.148-159 / 2018
  • Purpose: Unmanned aerial vehicle (UAV) remote sensing was applied to test various vegetation indices and build prediction models of the protein content of rice for monitoring grain quality and supporting proper management practice. Methods: Images were acquired with NIR (Green, Red, NIR), RGB, and RE (Blue, Green, Red-edge) cameras mounted on a UAV. Sampling was done synchronously at geo-referenced points, and GPS locations were recorded. Paddy samples were air-dried to 15% moisture content, dehulled, and milled to 92% milling yield, and the protein content was measured by near-infrared spectroscopy. Results: Considering all 54 samples, the artificial neural network showed better performance, with an $R^2$ (coefficient of determination) of 0.740, an NSE (Nash-Sutcliffe model efficiency coefficient) of 0.733, and an RMSE (root mean square error) of 0.187%, than the models developed by PR (polynomial regression), SLR (simple linear regression), and PLSR (partial least squares regression). The PLSR calibration models gave results almost similar to PR, with $R^2$ of 0.663 and RMSE of 0.169% for cloud-free samples and $R^2$ of 0.491 and RMSE of 0.217% for cloud-shadowed samples; however, the validation models performed poorly. This study revealed a highly significant correlation between NDVI (normalized difference vegetation index) and protein content in rice. For the cloud-free samples, the SLR models showed $R^2$ = 0.553 and RMSE = 0.210%, and for the cloud-shadowed samples $R^2$ = 0.479 and RMSE = 0.225%. Conclusion: There is a significant correlation between spectral bands and grain protein content. Artificial neural networks have a strong advantage in fitting nonlinear problems when a sigmoid activation function is used in the hidden layer. Quantitatively, the neural network model obtained a more precise result, with a mean absolute relative error (MARE) of 2.18% and an RMSE of 0.187%.
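
For reference, the quantities named above (NDVI, RMSE, NSE) can be written down compactly. The sketch below computes NDVI from NIR and Red reflectance, fits an SLR-style model, and evaluates it with RMSE and NSE; the per-plot numbers are made up and are not the study's data.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and Red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def nse(y_true, y_pred):
    """Nash-Sutcliffe model efficiency coefficient."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(1.0 - np.sum((y_true - y_pred) ** 2)
                 / np.sum((y_true - y_true.mean()) ** 2))

if __name__ == "__main__":
    # Hypothetical per-plot band means and measured protein contents (%).
    nir_vals = [0.52, 0.48, 0.60, 0.55]
    red_vals = [0.08, 0.10, 0.06, 0.07]
    protein  = [6.8, 7.2, 6.3, 6.6]

    x = ndvi(nir_vals, red_vals)
    # Simple linear regression protein ~ NDVI, analogous to the paper's SLR model.
    slope, intercept = np.polyfit(x, protein, 1)
    pred = slope * x + intercept
    print("RMSE:", rmse(protein, pred), "NSE:", nse(protein, pred))
```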

Analysis of Sea Trial's Title for Naval Ships Based on Big Data (빅데이터 기반 함정 시운전 종목명 분석)

  • Lee, Hyeong-Sin; Seo, Hyeong-Pil; Beak, Yong-Kawn; Lee, Sang-Il
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.11 / pp.420-426 / 2020
  • For more efficient sea trials, the sea trial items of the ROK and US Navies were analyzed from various angles using the big data technique of word clouds. First, a comparison of the words extracted through keyword cleansing from both navies' sea trial items showed that the ROK Navy conducts single-equipment tests, whereas the US Navy conducts integrated, system-focused trials. Second, the analysis of the two navies' sea trials showed that approximately 66.6% of the items were similar; 112 items, approximately 44% of the ROK Navy's 252 sea trial items, overlapped with two or more other items; and 89 items (35% of the total) could be eliminated if the trials were integrated in the manner of the US Navy sea trials. A ship is a complex system in which multiple pieces of equipment operate simultaneously. Focusing on checking the functions and performance of individual equipment, as in the ROK Navy's sea trials, lengthens the sea trial period because of the excessive number of trial targets, and the required budget inevitably grows with the longer schedule and higher evaluation costs. In the future, further research will be needed to achieve more efficient and accurate sea trials through integrated system evaluations such as the US Navy's.
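
The abstract describes keyword cleansing followed by word-cloud generation. As an illustration of that workflow (not the study's data or code), the sketch below cleans a few hypothetical sea-trial item names and renders a word cloud with the Python `wordcloud` package.

```python
from collections import Counter
from wordcloud import WordCloud   # pip install wordcloud

# Illustration only: hypothetical sea-trial item names, not the paper's data.
item_names = [
    "main engine endurance test",
    "main engine full power test",
    "radar system integration test",
    "radar detection range test",
    "steering gear test",
]

# Simple keyword cleansing: lowercase, drop generic words, count the rest.
stopwords = {"test", "system"}
tokens = [w for name in item_names for w in name.lower().split() if w not in stopwords]
frequencies = Counter(tokens)
print(frequencies.most_common(5))

# Render the word cloud from the cleaned keyword frequencies.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(frequencies)
cloud.to_file("sea_trial_wordcloud.png")
```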

Graph-based Segmentation for Scene Understanding of an Autonomous Vehicle in Urban Environments (무인 자동차의 주변 환경 인식을 위한 도시 환경에서의 그래프 기반 물체 분할 방법)

  • Seo, Bo Gil; Choe, Yungeun; Roh, Hyun Chul; Chung, Myung Jin
    • The Journal of Korea Robotics Society / v.9 no.1 / pp.1-10 / 2014
  • In recent years, research on 3D mapping techniques for urban environments, using mobile robots equipped with multiple sensors to recognize their surroundings, has been actively pursued. However, a map generated by simply integrating multi-sensor data gives the robot only spatial information. To obtain semantic knowledge of a scene from the map, the robot has to convert low-level map representations into higher-level ones that contain such knowledge. Given a 3D point cloud of an urban scene, this research proposes a method for autonomous mobile robots to recognize objects effectively using a 3D graph model. The proposed method consists of three steps: sequential range data acquisition, normal vector estimation, and incremental graph-based segmentation. It guarantees both real-time performance and accuracy when recognizing objects in real urban environments, and it can provide plentiful data for classifying the objects. To evaluate the proposed method, the computation time and the object recognition rate were analyzed. Experimental results show that the proposed method is efficient in understanding the semantic knowledge of an urban environment.
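
Of the three steps listed, normal vector estimation is the most self-contained, so the sketch below illustrates only that step: estimating a per-point normal from the covariance of its k nearest neighbours. It is a generic PCA-based approach, not the paper's sequential/incremental pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Estimate a surface normal per point from its k nearest neighbours.

    Illustration of the 'normal vector estimation' step only; the incremental
    graph-based segmentation described in the abstract is not reproduced here.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs]
        centered = nbr_pts - nbr_pts.mean(axis=0)
        # The singular vector of the smallest singular value of the local
        # neighbourhood approximates the surface normal direction.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normals[i] = vt[-1]
    return normals

if __name__ == "__main__":
    pts = np.random.rand(500, 3)          # stand-in for one scan of a point cloud
    n = estimate_normals(pts, k=8)
    print(n.shape, np.allclose(np.linalg.norm(n, axis=1), 1.0, atol=1e-6))
```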

QoS-, Energy- and Cost-efficient Resource Allocation for Cloud-based Interactive TV Applications

  • Kulupana, Gosala; Talagala, Dumidu S.; Arachchi, Hemantha Kodikara; Fernando, Anil
    • IEIE Transactions on Smart Processing and Computing / v.6 no.3 / pp.158-167 / 2017
  • Internet-based social and interactive video applications have become major constituents of the envisaged applications for next-generation multimedia networks. However, inherently dynamic network conditions, together with varying user expectations, pose many challenges for resource allocation mechanisms for such applications. Yet, in addition to addressing these challenges, service providers must also consider how to mitigate their operational costs (e.g., energy costs, equipment costs) while satisfying the end-user quality of service (QoS) expectations. This paper proposes a heuristic solution to the problem, where the energy incurred by the applications, and the monetary costs associated with the service infrastructure, are minimized while simultaneously maximizing the average end-user QoS. We evaluate the performance of the proposed solution in terms of serving probability, i.e., the likelihood of being able to allocate resources to groups of users, the computation time of the resource allocation process, and the adaptability and sensitivity to dynamic network conditions. The proposed method demonstrates improvements in serving probability of up to 27%, in comparison with greedy resource allocation schemes, and a several-orders-of-magnitude reduction in computation time, compared to the linear programming approach, which significantly reduces the service-interrupted user percentage when operating under variable network conditions.
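
The authors' heuristic is not reproducible from the abstract, but the greedy allocation schemes it is compared against are straightforward to illustrate. The sketch below shows such a greedy baseline under assumed inputs (toy servers, a per-group cost, and a stubbed QoS check); it is not the proposed QoS-, energy- and cost-aware algorithm.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    capacity: int          # how many user groups it can still serve
    cost_per_group: float  # combined energy + infrastructure cost (assumed units)

def greedy_allocate(groups, servers, qos_ok):
    """Greedy baseline: serve each group on the cheapest server that still has
    capacity and meets its QoS requirement. Illustration only; the paper's
    heuristic jointly optimizes QoS, energy, and cost and is not shown here.
    """
    assignment = {}
    for group in groups:
        candidates = [s for s in servers if s.capacity > 0 and qos_ok(group, s)]
        if not candidates:
            continue                      # group cannot be served
        best = min(candidates, key=lambda s: s.cost_per_group)
        best.capacity -= 1
        assignment[group] = best.name
    return assignment

if __name__ == "__main__":
    servers = [Server("edge-1", 2, 1.0), Server("cloud-1", 10, 3.0)]
    groups = ["g1", "g2", "g3"]
    # Assume every server meets QoS for every group in this toy example.
    print(greedy_allocate(groups, servers, qos_ok=lambda g, s: True))
```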

Development of Deep Learning-based Automatic Classification of Architectural Objects in Point Clouds for BIM Application in Renovating Aging Buildings (딥러닝 기반 노후 건축물 리모델링 시 BIM 적용을 위한 포인트 클라우드의 건축 객체 자동 분류 기술 개발)

  • Kim, Tae-Hoon; Gu, Hyeong-Mo; Hong, Soon-Min; Choo, Seoung-Yeon
    • Journal of KIBIM / v.13 no.4 / pp.96-105 / 2023
  • This study focuses on developing building-object recognition technology for efficient use when remodeling buildings that were constructed without drawings. In the era of the 4th Industrial Revolution, as smart technologies are being developed, this research contributes to the architectural field by introducing a deep-learning-based method for automatic object classification and recognition that utilizes point cloud data. We use a TD3D network with voxels, optimizing its performance by adjusting the voxel size and the number of blocks. This technology classifies building objects such as walls, floors, and roofs from 3D scanning data and labels them in polygonal form to minimize boundary ambiguities, although challenges in classifying object boundaries were still observed. The model automates the classification of non-building objects, reducing the manual effort of data matching, and distinguishes between elements to be demolished and elements to be retained during remodeling. Data-set loss space was minimized by labeling with the extremities of the x, y, and z coordinates. The research aims to enhance the efficiency of building-object classification and to improve the quality of architectural plans by reducing the manpower and time required during remodeling, in line with its goal of developing an efficient classification technology. Future work can extend to creating classified objects with parametric tools from the polygon-labeled datasets, offering meaningful numerical analysis for remodeling processes. Continued research in this direction is expected to significantly advance the efficiency of building remodeling techniques.
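
The abstract mentions tuning voxel size and block count for the network input. As a generic illustration of the voxelization step only (the TD3D network and the labeled dataset are not reproduced), below is a minimal sketch with an assumed voxel size.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Group a point cloud into occupied voxels of a given edge length.

    Illustration of the voxel preprocessing that voxel-based networks consume;
    voxel_size here is an assumed value, not taken from the paper.
    """
    points = np.asarray(points, dtype=float)
    indices = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    # Keep one representative (the centroid) per occupied voxel.
    voxels = {}
    for idx, p in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(p)
    centroids = np.array([np.mean(v, axis=0) for v in voxels.values()])
    return indices, centroids

if __name__ == "__main__":
    scan = np.random.rand(10_000, 3) * 5.0     # stand-in for a building scan (metres)
    idx, centroids = voxelize(scan, voxel_size=0.1)
    print("points:", len(scan), "occupied voxels:", len(centroids))
```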