• Title/Summary/Keyword: Cloud Computing Services

Search results: 644

Combining Support Vector Machine Recursive Feature Elimination and Intensity-dependent Normalization for Gene Selection in RNAseq (RNAseq 빅데이터에서 유전자 선택을 위한 밀집도-의존 정규화 기반의 서포트-벡터 머신 병합법)

  • Kim, Chayoung
    • Journal of Internet Computing and Services / v.18 no.5 / pp.47-53 / 2017
  • In the past few years, high-throughput sequencing, big-data generation, cloud computing, and computational biology have advanced rapidly. RNA sequencing is emerging as an attractive alternative to DNA microarrays, yet methods for constructing Gene Regulatory Networks (GRNs) from RNA-Seq data are scarce and urgently needed. Because GRNs have drawn substantial attention in genomics and bioinformatics, an elementary requirement has been to maximize the number of distinguishable genes. Although RNA sequencing techniques generate large amounts of data, few computational methods exploit this big data. We therefore propose a novel gene selection algorithm that combines Support Vector Machines with intensity-dependent normalization, using the log differential expression ratio in RNA-Seq. It is an extended variant of the support vector machine recursive feature elimination (SVM-RFE) algorithm. The algorithm accomplishes minimum relevancy with subsets of big data such as NCBI-GEO. We compared the proposed algorithm with an existing one that uses gene expression profiling on DNA microarrays. The proposed algorithm proves more convenient and faster than its predecessor because it uses standard R package functions, and it improves classification accuracy (assessed via gene ontology) and running time on big data. The comparison was performed on the number of genes selected from RNA-Seq big data.
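The core loop of SVM-RFE as described above, train a linear model, rank features by weight magnitude, drop the weakest, and repeat, can be sketched as follows. This is a minimal illustration, not the authors' R implementation: a perceptron stands in for a full SVM solver, the data are synthetic, and the intensity-dependent (log-ratio) normalization is assumed to have already been applied to the inputs.

```python
import random

def train_linear(X, y, epochs=200, lr=0.1):
    """Train a simple linear classifier (perceptron updates) and return
    its weight vector. Used here as a lightweight stand-in for an SVM."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score <= 0:  # misclassified: update weights
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w

def svm_rfe(X, y, n_select):
    """Recursive feature elimination: repeatedly retrain and drop the
    feature whose weight has the smallest magnitude, as in SVM-RFE."""
    remaining = list(range(len(X[0])))
    while len(remaining) > n_select:
        Xs = [[row[j] for j in remaining] for row in X]
        w = train_linear(Xs, y)
        drop = min(range(len(remaining)), key=lambda j: abs(w[j]))
        del remaining[drop]
    return remaining

# Toy "expression" data: feature 0 separates the classes, feature 1 is noise.
random.seed(0)
X = [[1.0 + random.random(), random.random()] for _ in range(20)]
X += [[-1.0 - random.random(), random.random()] for _ in range(20)]
y = [1] * 20 + [-1] * 20
print(svm_rfe(X, y, 1))  # the informative feature should survive
```

In the real setting each row would hold normalized log expression ratios for one sample, and the surviving indices would identify the selected genes.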

Design and Implementation of eBPF-based Virtual TAP for Inter-VM Traffic Monitoring (가상 네트워크 트래픽 모니터링을 위한 eBPF 기반 Virtual TAP 설계 및 구현)

  • Hong, Jibum;Jeong, Seyeon;Yoo, Jae-Hyung;Hong, James Won-Ki
    • KNOM Review / v.21 no.2 / pp.26-34 / 2018
  • With the proliferation of cloud computing and services, internet traffic and the demand for better quality of service are increasing. For this reason, server virtualization and network virtualization technologies, which use the resources of data center servers more efficiently, are receiving increased attention. However, existing hardware Test Access Port (TAP) equipment is unfit for deployment in the virtual datapaths configured for server virtualization. A virtual TAP (vTAP), the software counterpart of a hardware TAP, overcomes this problem by duplicating packets in a virtual switch. However, implementing a vTAP in a virtual switch incurs a performance penalty because the vTAP shares the computing resources of the host machine with the virtual switch and other VMs. We propose a vTAP implementation based on the extended Berkeley Packet Filter (eBPF), a high-speed packet processing technology, and compare its performance with that of an existing vTAP.
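The vTAP idea, duplicating every frame to a monitoring sink without disturbing normal forwarding, can be illustrated with a toy model. Real eBPF programs are written in restricted C and attached to kernel hooks; this Python sketch only mimics the mirroring logic, and all names in it are hypothetical.

```python
from collections import deque

class VirtualSwitch:
    """Toy virtual switch: forwards frames to ports and, when a tap is
    attached, duplicates every frame into the monitoring queue -- the
    role the eBPF program plays in the kernel datapath."""
    def __init__(self):
        self.ports = {}   # port name -> queue of delivered frames
        self.tap = None   # monitoring sink (the vTAP)

    def add_port(self, name):
        self.ports[name] = deque()

    def attach_tap(self):
        self.tap = deque()
        return self.tap

    def forward(self, dst, frame):
        if self.tap is not None:
            self.tap.append(frame)      # mirror a copy, do not consume
        self.ports[dst].append(frame)   # normal forwarding continues

sw = VirtualSwitch()
sw.add_port("vm1")
mon = sw.attach_tap()
sw.forward("vm1", b"ping")
print(len(sw.ports["vm1"]), len(mon))  # both the VM and the tap see the frame
```

The performance question the paper studies is precisely the cost of that extra copy when it happens in software on a shared host.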

P2P Based Telemedicine System Using Thermographic Camera (열화상 카메라를 포함한 P2P 방식의 원격진료 시스템)

  • Kim, Kyoung Min;Ryu, Jae Hyun;Hong, Sung Jun;Kim, Hongjun
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.3 / pp.547-554 / 2022
  • Recently, the field of telemedicine has been growing rapidly due to the COVID-19 pandemic. However, the cost of telemedicine services is relatively high, since cloud computing, video conferencing, and cyber security must all be considered. In this paper, we therefore design and implement a cost-effective P2P-based telemedicine system. It is built on Raspberry Pi, a widely used open-source computing platform, and on a P2P network that frees users from security problems such as privacy leakage via a central server and the DDoS attacks invited by the server/client architecture, while enabling trustworthy peer identification through the SSL protocol. A thermal imaging camera attached to the Raspberry Pi also lets users check the other party's status, including body temperature, in real time, supporting medical diagnoses that require visual aids. The proposed system should help popularize telemedicine services and meet the ever-increasing demand for telemedicine.
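A hedged sketch of the SSL/TLS setup such a P2P system might use, in which each peer both presents a certificate and verifies the other's, so neither side depends on a central server for identification. The function and its parameters are illustrative, not the paper's code, and the certificate file paths are placeholders for credentials provisioned out of band.

```python
import ssl

def make_peer_context(server_side, certfile=None, keyfile=None, peer_ca=None):
    """Build a TLS context for a P2P link where BOTH peers authenticate.

    certfile/keyfile: this peer's own certificate and private key.
    peer_ca: CA bundle used to verify the remote peer's certificate.
    All paths are placeholders; without them the context is only configured."""
    purpose = ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH
    ctx = ssl.create_default_context(purpose)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated peers
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)
    if peer_ca:
        ctx.load_verify_locations(peer_ca)
    return ctx

# Configure the accepting side of a peer link (no files loaded in this sketch).
ctx = make_peer_context(server_side=True)
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

To open an actual connection, each peer would wrap its socket with `ctx.wrap_socket(...)` after loading real certificates; the video and thermal-camera streams then travel over the authenticated channel.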

Jumpstarting the Digital Revolution: Exploring Smart City Architecture and Themes

  • Maha Alqahtani;Kholod M. Alqahtani
    • International Journal of Computer Science & Network Security / v.23 no.3 / pp.110-122 / 2023
  • Over the last few decades, various innovative technologies have emerged that have significantly contributed to making life easier for humans. Various information and communication technologies (ICTs) have emerged as a result of the global technological revolution, including big data, IoT, 4G and 5G networks, cloud computing, mobile computing, and artificial intelligence. These technologies have been adopted in urban planning and development, giving rise to the concept of smart cities in the 1990s. A smart city uses ICTs to exchange and share information in order to enhance the quality of services for its citizens. With the global population increasing at unprecedented levels, cities are overwhelmed with a myriad of challenges, such as the energy crisis, environmental pollution, sanitation and sewage challenges, and water quality issues, and have therefore become a convergence point of economic, social, and environmental risks. The smart city concept is a multidisciplinary, unified approach that governments and municipalities worldwide have adopted to overcome these challenges. Though challenging, this transformation is essential for cities with differing technological and social features, all of which can determine the success or failure of the digital transformation of cities into smart cities. In recent years, researchers, businesses, and governments have all turned their attention to the emerging field of smart cities. Accordingly, this paper aims to present a thorough understanding of the movement toward smart cities. The key themes identified are smart city definitions and concepts, smart city dimensions, and the layered smart city architecture. The article also discusses the challenges smart cities face and gives some examples of smart cities.

Real Time Distributed Parallel Processing to Visualize Noise Map with Big Sensor Data and GIS Data for Smart Cities (스마트시티의 빅 센서 데이터와 빅 GIS 데이터를 융합하여 실시간 온라인 소음지도로 시각화하기 위한 분산병렬처리 방법론)

  • Park, Jong-Won;Sim, Ye-Chan;Jung, Hae-Sun;Lee, Yong-Woo
    • Journal of Internet Computing and Services / v.19 no.4 / pp.1-6 / 2018
  • In smart cities, data from many kinds of sensors are collected and processed to provide smart services to citizens. Noise information services that build noise maps from sensor data collected by ubiquitous sensor networks are one such service. This paper presents a system that generates three-dimensional (3D) noise maps for smart cities in real time. Making a noise map requires fusing heterogeneous data, including large geographic information images and massive sensor data. Producing a 3D noise map in real time therefore demands both real-time processing of the stream data from the ubiquitous sensor networks and real-time data fusion, which are very challenging tasks. We developed our own methodology for real-time distributed and parallel processing and present it in this paper, together with a real-time 3D noise map generation system built on it from open-source software; the version introduced here uses Apache Storm. We evaluated performance using the developed system, with cloud computing providing the experimental infrastructure. The evaluation confirmed that the system works properly, performs well, and can produce 3D noise maps in real time; the performance results are also given in this paper.
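The distributed stream-processing pattern described above (an Apache Storm topology fed by sensor streams) can be sketched, under stated assumptions, with plain Python threads standing in for Storm's parallel workers: a "spout" emits sensor readings and "bolts" aggregate them into grid cells of the noise map. The grid cells, readings, and the simple arithmetic averaging are illustrative; real noise maps typically average sound energy rather than decibel values directly.

```python
import threading
import queue
from collections import defaultdict

def noise_bolt(in_q, grid, lock):
    """Worker ('bolt' in Storm terms): folds sound-level readings into
    running (sum, count) accumulators per grid cell."""
    while True:
        item = in_q.get()
        if item is None:          # poison pill: shut this worker down
            break
        cell, level_db = item
        with lock:
            total, count = grid[cell]
            grid[cell] = (total + level_db, count + 1)

# Toy sensor stream: (grid cell, sound level in dB)
readings = [((0, 0), 60.0), ((0, 0), 70.0), ((1, 2), 55.0)]
grid = defaultdict(lambda: (0.0, 0))
lock = threading.Lock()
q = queue.Queue()

workers = [threading.Thread(target=noise_bolt, args=(q, grid, lock))
           for _ in range(3)]
for w in workers:
    w.start()
for r in readings:      # the 'spout' emits the sensor stream
    q.put(r)
for _ in workers:       # one poison pill per worker
    q.put(None)
for w in workers:
    w.join()

avg = {cell: total / count for cell, (total, count) in grid.items()}
print(avg[(0, 0)])  # mean level for cell (0, 0)
```

In a Storm deployment the queue becomes the tuple stream between spouts and bolts, and the per-cell averages feed the 3D visualization layer.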

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most log data produced during banking operations come from handling clients' business, so a separate system is needed to gather, store, categorize, and analyze the log data generated while processing that business. However, existing computing environments make it difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions required to categorize and analyze them. In this study, we therefore use cloud computing technology to build a cloud-based system for processing unstructured log data that the analysis tools and management systems of existing computing infrastructure handle poorly. The proposed system runs in an IaaS (Infrastructure as a Service) cloud environment, which provides flexible expansion of computing resources such as storage space and memory under conditions such as storage extension or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can automatically restore itself and continue operating after recovering from a malfunction. Finally, by building a distributed database on the NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have complex schemas that are ill-suited to unstructured log data, and their strict schemas prevent node expansion when rapidly growing data must be distributed across nodes. NoSQL databases do not provide the complex computations of relational databases, but they can easily expand through node dispersion when data volume grows rapidly; they are non-relational databases whose structure suits unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, the representative document-oriented store with a free schema structure. MongoDB was chosen because its flexible schema makes unstructured log data easy to process, it facilitates node expansion as data volume grows rapidly, and it provides an auto-sharding function that expands storage automatically. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data by log type and distributes it to the MongoDB module and the MySQL module. The log graph generator module produces the analysis results of the MongoDB, Hadoop-based analysis, and MySQL modules per analysis time and per type of aggregated log data, and presents them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions, while the Hadoop-based analysis module processes them in a parallel-distributed manner. A comparative evaluation of log-insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an evaluation of MongoDB's insert performance across various chunk sizes identifies the optimal chunk size.
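The log collector's routing rule, real-time log types to the MySQL module and bulk log types to the MongoDB module, might be sketched as below. The store back-ends are plain dictionaries standing in for the database modules, and the log type names are hypothetical; a real deployment would use a MongoDB driver such as pymongo and a MySQL client in their place.

```python
# Stand-ins for the MongoDB / MySQL modules: plain list-backed stores.
stores = {"mysql": [], "mongodb": []}

# Hypothetical log types that require real-time analysis (illustrative only).
REALTIME_TYPES = {"auth_failure", "transaction_error"}

def collect(log_record):
    """Log collector module: classify a record by its type and route it --
    real-time types to the relational store for immediate graphing,
    everything else to the document store for batch (Hadoop) analysis."""
    target = "mysql" if log_record["type"] in REALTIME_TYPES else "mongodb"
    stores[target].append(log_record)
    return target

collect({"type": "auth_failure", "msg": "bad PIN"})   # -> real-time path
collect({"type": "page_view", "msg": "/balance"})      # -> bulk path
print(len(stores["mysql"]), len(stores["mongodb"]))
```

The schema-free dictionaries here mirror why a document store fits this workload: records of different types can carry different fields without any migration.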

A Longitudinal Study on Customers' Usable Features and Needs of Activity Trackers as IoT based Devices (사물인터넷 기반 활동량측정기의 고객사용특성 및 욕구에 대한 종단연구)

  • Hong, Suk-Ki;Yoon, Sang-Chul
    • Journal of Internet Computing and Services / v.20 no.1 / pp.17-24 / 2019
  • Since the concept of the 4th Industrial Revolution was introduced at the WEF (World Economic Forum) in 2016, IoT, AI, big data, 5G, cloud computing, 3D/4D printing, robotics, nanotechnology, and bioengineering have developed rapidly as business applications as well as technologies in their own right. Among the diverse business applications of IoT, wearable devices are recognized as the leading devices for end customers. This longitudinal study compares its findings with the results of an earlier study that identified customer needs for activity trackers, and links the identified user needs to the well-known marketing-mix framework. For this study, a survey of university students was conducted in June 2018, and ANOVA was applied to the major variables on usable features. Potential customer needs were then identified and visualized with a word cloud technique. According to the results, unlike other high-tech IT devices, activity trackers show diverse and unique potential needs. This longitudinal study contributes primarily to understanding usable features and how they change with product maturity, offering dynamic and valuable implications for activity tracker designers as well as researchers in this arena.

The Big Data Analysis and Medical Quality Management for Wellness (웰니스를 위한 빅데이터 분석과 의료 질 관리)

  • Cho, Young-Bok;Woo, Sung-Hee;Lee, Sang-Ho
    • Journal of the Korea Society of Computer and Information / v.19 no.12 / pp.101-109 / 2014
  • With the development of medical technology and rising income levels, interest in actively promoting and maintaining health under the motto "long and healthy life = wellness" has grown. In addition, demand for personalized health care services is increasing, as is the use of extensive medical big data for disease prevention. This paper focuses on wellness, a major market interest, and aims to support big-data-driven medical quality management through patient-centered medical services: improving disease prevention and treatment based on big-data analysis rather than drug-dependent treatment or dieting alone. Daily tweet information is analyzed, using a dictionary-based approach, for wellness-oriented disease prevention and treatment. We also ran experiments measuring processing time as the number of nodes increases, for efficient big-data analysis. The results show that, going from one node to three nodes, total access time improved by 26%, data storage by 63%, and data aggregation by 18%.

Technology Trends, Research and Design of AIM Framework for Authentication Information Management (인증 정보 관리를 위한 기술 동향과 AIM 프레임워크 연구 및 설계)

  • Kim, Hyun-Joong;Cha, Byung-Rae;Pan, Sung-Bum
    • Journal of Digital Convergence / v.14 no.7 / pp.373-383 / 2016
  • With the mobile era and the emergence of fintech, biometric technologies that use biological information in a secure manner have spread. In particular, to support convenient payment services and transportation cards, combinations of biometrics and mobile services are expanding. We investigate the basic concepts of authentication, such as access control, IA&A, OpenID, OAuth 1.0a, SSO, and biometrics, and describe in detail, from the perspective of AIM applications, the protocol stack for a security API platform: FIDO, SCIM, OAuth 2.0, the JSON Identity Suite, OpenStack Keystone, cloud-based SSO, and the AIM agent. Authentication technology at home and abroad will accelerate development and standardization research centered on the federated FIDO Universal Authentication Framework (UAF) and Universal 2nd Factor (U2F). To accommodate the changing needs of the recent social computing paradigm, this paper surveys the trends of various authentication technologies and defines the design and functions of the AIM framework.

Guideline of Building Information Modeling(BIM) Service Application Level using Service Level Agreement(SLA) in the Procurement Phase (발주단계에서 SLA를 활용한 BIM 서비스 적용 수준에 관한 연구)

  • Kim, Ji-Yun;Yun, Seok-Heon
    • Journal of the Korea Institute of Building Construction / v.17 no.1 / pp.83-90 / 2017
  • Recently, BIM has been actively adopted in construction projects and industries and integrated with information and communications technology (ICT) such as cloud computing, sensor technology, and 3D scanning and printing. However, it is very difficult to use BIM services and technologies efficiently and to collaborate, because usage patterns and technology requirements differ among participants. Every participant in a construction project has their own needs, requirements, and level of model detail in each phase. To enhance utilization of the BIM model, the BIM services and technologies required in a project must be clearly defined at its initial stage. To support owners in defining this BIM level, this study identifies BIM service levels and application technologies and suggests guidelines for defining the level and technologies appropriate to a project's purpose.