Title/Summary/Keyword: cloud-free


Automated Geometric Correction of Geostationary Weather Satellite Images (정지궤도 기상위성의 자동기하보정)

  • Kim, Hyun-Suk;Lee, Tae-Yoon;Hur, Dong-Seok;Rhee, Soo-Ahm;Kim, Tae-Jung
    • Korean Journal of Remote Sensing / v.23 no.4 / pp.297-309 / 2007
  • The first Korean geostationary weather satellite, the Communication, Ocean and Meteorological Satellite (COMS), will be launched in 2008. The ground station for COMS needs to perform geometric correction to improve the accuracy of satellite image data and to broadcast geometrically corrected images to users within 30 minutes after image acquisition. To meet this requirement, we developed automated and fast geometric correction techniques. We generated control points automatically by matching images against coastline data and by applying a robust estimation technique called RANSAC. We used the GSHHS (Global Self-consistent Hierarchical High-resolution Shoreline) database to construct 211 landmark chips. We detected clouds within the images and applied matching only to cloud-free sub-images. When matching visible channels, we selected sub-images acquired in daytime. We tested the algorithm with GOES-9 images. Control points were generated by matching channel 1 and channel 2 images of GOES against the 211 landmark chips. RANSAC correctly prevented outliers from being selected as control points. The accuracy of the sensor models established using the automated control points was in the range of 1~2 pixels. Geometric correction was performed, and its quality was visually inspected by projecting coastlines onto the geometrically corrected images. The total processing time for matching, RANSAC, and geometric correction was around 4 minutes.
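
A minimal sketch of the matching-plus-RANSAC step described above, assuming an affine sensor model and hypothetical matched landmark/image point pairs (the paper's actual sensor model and matcher are not reproduced here):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points to dst (both Nx2)."""
    A = np.hstack([src, np.ones((len(src), 1))])    # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3x2 parameter matrix
    return coef

def ransac_affine(src, dst, n_iter=500, thresh=2.0, seed=0):
    """Keep only matches consistent with the best affine model (inliers)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)      # minimal sample
        model = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ model
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh  # pixel residual
        if inliers.sum() > best.sum():
            best = inliers
    # Refit on all inliers: outlier matches are dropped before the final
    # sensor model is estimated, mirroring RANSAC's role in the abstract.
    return fit_affine(src[best], dst[best]), best
```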

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the clients' business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment and can flexibly expand computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic recovery so that it can continue operating after a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have rigid schemas that are inappropriate for processing unstructured log data; moreover, such strict schemas make it difficult to add nodes when rapidly growing data must be distributed across them. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified into Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented store with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the data volume grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB log insert performance evaluation across various chunk sizes.
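
As an illustration of the schema-free storage the abstract relies on, the sketch below inserts heterogeneous bank-log documents into MongoDB with pymongo and aggregates counts per log type; the database name, fields, and server address are hypothetical, not the paper's configuration:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical server address
logs = client["bank_logs"]["events"]               # schema-free collection

# MongoDB accepts documents with different fields side by side, which is
# what makes unstructured bank logs easy to store without a fixed schema.
logs.insert_many([
    {"type": "transaction", "branch": "A01", "amount": 150000,
     "ts": "2013-06-01T09:12:00"},
    {"type": "login", "user": "u4821", "result": "ok",
     "ts": "2013-06-01T09:12:03"},
])

# Count log entries per type: the kind of per-type aggregate the log graph
# generator module plots.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row["_id"], row["n"])
```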

Practical Conjunctive Searchable Encryption Using Prime Table (소수테이블을 이용한 실용적인 다중 키워드 검색가능 암호시스템)

  • Yang, Yu-Jin;Kim, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.1 / pp.5-14 / 2014
  • Searchable encryption systems provide search over encrypted data while preserving the privacy of the data and of the search keywords used in queries. Recently, interest in data outsourcing has increased due to the proliferation of cloud computing services. Much research is ongoing to minimize the trust placed in external servers, and searchable encryption is one such direction. However, most previous searchable encryption schemes provide only single-keyword Boolean search. Although there have been proposals for conjunctive keyword search, most of these works use fixed fields, which limits their applicability. In this paper, we propose a field-free conjunctive keyword searchable encryption scheme that also provides rank information for search results. Our system uses prime tables and the greatest common divisor operation, making it very efficient. Moreover, our system is practical and can be implemented very easily since it does not require a sophisticated cryptographic module.
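
The abstract does not spell out the construction, but one natural reading of the prime-table-and-GCD idea can be sketched in plaintext arithmetic (no encryption or ranking shown): assign each keyword a distinct prime, encode a document's keyword set as the product of those primes, and test a conjunctive query by divisibility.

```python
from math import gcd
from itertools import count

def primes():
    """Yield primes 2, 3, 5, ... by trial division (fine for a keyword table)."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

# Prime table: one distinct prime per keyword.
keywords = ["cloud", "encryption", "search", "index"]
table = dict(zip(keywords, primes()))

def encode(doc_keywords):
    """Document index = product of the primes of its keywords."""
    v = 1
    for w in doc_keywords:
        v *= table[w]
    return v

def matches(doc_index, query_keywords):
    """Conjunctive match: the query product must divide the document index."""
    q = encode(query_keywords)
    return gcd(doc_index, q) == q

idx = encode(["cloud", "search", "index"])
print(matches(idx, ["cloud", "index"]))  # True: both keywords present
print(matches(idx, ["encryption"]))      # False
```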

Study on the Retreatment Techniques for NOAA Sea Surface Temperature Imagery (NOAA 수온영상 재처리 기법에 관한 연구)

  • Kim, Sang-Woo;Kang, Yong-Q.;Ahn, Ji-Sook
    • Journal of the Korean Society of Marine Environment & Safety / v.17 no.4 / pp.331-337 / 2011
  • We describe the production of cloud-free satellite sea surface temperature (SST) data around Northeast Asia using NOAA AVHRR (Advanced Very High Resolution Radiometer) SST data from 1990 to 2005. From the Markov model, we found that the Markov coefficient in strong-current regions such as the Kuroshio region was smaller than in weak-current regions. The variations of average SST and the regional differences in seasonal day-to-day SST were larger in spring and fall than in summer and winter. In particular, the regional differences were large near the continent in spring and fall. The seasonal day-to-day SST differences were also small in the Kuroshio region and the southern part of the East Sea due to heat advection by warm currents.
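
Reading the Markov coefficient as a lag-one persistence coefficient of the daily SST series, a minimal sketch of estimating it and filling cloud gaps might look as follows (synthetic values, not the 1990-2005 AVHRR data):

```python
import numpy as np

def markov_coefficient(sst):
    """Lag-1 persistence coefficient of a daily SST series (NaN = cloud)."""
    x = sst - np.nanmean(sst)
    x0, x1 = x[:-1], x[1:]
    ok = ~np.isnan(x0) & ~np.isnan(x1)  # use consecutive clear days only
    return np.sum(x0[ok] * x1[ok]) / np.sum(x0[ok] ** 2)

def fill_gaps(sst, alpha):
    """Fill cloud gaps by relaxing the last value toward the mean."""
    out, mean = sst.copy(), np.nanmean(sst)
    for t in range(1, len(out)):
        if np.isnan(out[t]):
            out[t] = mean + alpha * (out[t - 1] - mean)
    return out

# Hypothetical 8-day series with cloud-masked days (NaN).
sst = np.array([15.0, 15.2, np.nan, np.nan, 15.6, 15.8, np.nan, 16.1])
alpha = markov_coefficient(sst)  # smaller in strong-current regions
print(round(alpha, 3), fill_gaps(sst, alpha))
```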

The Implementation of the Fine Dust Measuring System based on Internet of Things(IoT) (사물인터넷기반 미세먼지 측정 시스템 구현)

  • Noh, Jin-Ho;Tack, Han-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.4 / pp.829-835 / 2017
  • Recently, health problems triggered by fine dust have occurred with increasing frequency. Particulate matter adversely affects the human body indoors as well as outdoors. There is thus a need for a system that measures the concentration of particulate matter and controls particulates harmful to human health in the indoor spaces where people live. The present study applied Internet of Things (IoT) technologies to increase the efficiency of conventional fine dust measurement systems. In particular, for a bidirectional communication environment, we built a dedicated server and applied it to the system instead of a free cloud server, and we deployed the system directly in a school laboratory and a home. When the proposed system is used in schools and homes, it can quickly assess the indoor environment, which is expected to contribute gradually to individual health. Users can also check the server data remotely and respond to current indoor conditions.
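
A minimal sketch of the sensor-to-server path, assuming a hypothetical JSON endpoint on the self-hosted server (the paper's actual protocol, address, and field names are not given in the abstract):

```python
import json
import time
import urllib.request

SERVER = "http://192.168.0.10:8080/dust"  # hypothetical dedicated server

def post_reading(pm25, pm10):
    """Send one fine-dust reading to the server as JSON."""
    body = json.dumps({"pm25": pm25, "pm10": pm10, "ts": time.time()}).encode()
    req = urllib.request.Request(
        SERVER, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # 200 if the server stored the reading

# e.g. call post_reading(12.3, 30.1) every few seconds from the sensor loop;
# the server side can then push alerts back over the bidirectional channel.
```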

Transparent Manipulators Accomplished with RGB-D Sensor, AR Marker, and Color Correction Algorithm (RGB-D 센서, AR 마커, 색수정 알고리즘을 활용한 매니퓰레이터 투명화)

  • Kim, Dong Yeop;Kim, Young Jee;Son, Hyunsik;Hwang, Jung-Hoon
    • The Journal of Korea Robotics Society / v.15 no.3 / pp.293-300 / 2020
  • The purpose of our sensor system is to transparentize the large hydraulic manipulators of a six-ton dual-arm excavator from the operator's camera view. Almost 40% of the camera view is blocked by the manipulators; in other words, the operator loses 40% of the visual information, which could be valuable in many manipulator control scenarios such as clearing debris on a disaster site. The proposed method is based on 3D reconstruction technology. By overlaying the camera image from the front top of the cabin with the point cloud data from RGB-D (red, green, blue and depth) cameras placed on the outer side of each manipulator, a manipulator-free camera image can be obtained. Two additional algorithms are proposed to further enhance the productivity of dual-arm excavators. First, a color correction algorithm is proposed to cope with the different color distributions of the RGB and RGB-D sensors used in the system. Second, an edge overlay algorithm is proposed: although the manipulators often block the operator's view, visual feedback on the manipulators' configuration or state may still be useful, so the algorithm draws the edges of the manipulators on the camera image. The experimental results show that the proposed transparentization algorithm helps the operator obtain information about the environment and objects around the excavator.
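
The abstract names a color correction algorithm without detailing it; a common baseline for reconciling two cameras' color distributions is per-channel mean/standard-deviation transfer, sketched below (an assumption, not necessarily the authors' method):

```python
import numpy as np

def match_color(src, ref):
    """Per-channel mean/std transfer: align src's RGB statistics with ref's.

    src: HxWx3 float image from the RGB-D camera; ref: HxWx3 float image
    from the operator camera; both scaled to [0, 1].
    """
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * r_sd + r_mu
    return np.clip(out, 0.0, 1.0)

# The corrected point-cloud colors can then be blended into the operator
# image without a visible seam between the two sensors' color responses.
```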

Comparison of Online Shopping Mall BEST 100 using Exploratory Data Analysis (탐색적 자료 분석(EDA) 기법을 활용한 국내 11개 대표 온라인 쇼핑몰 BEST 100 비교)

  • Kang, Jicheon;Kang, Juyoung
    • The Journal of Bigdata / v.3 no.1 / pp.1-12 / 2018
  • Since the first online shopping malls appeared, BEST 100 lists have been provided as a core feature of shopping mall websites. BEST 100 is highly important because consumers can identify popular products at a glance. However, prior work relies only on sales outcome indicators, and studies using BEST 100 are scarce. Therefore, this study selected 11 online shopping malls and compared their main characteristics. As a research method, exploratory data analysis (EDA) was applied to data crawled from the BEST 100 section of each shopping mall website, including product name, price, and free shipping status. As a result, the overall average price across the 11 shopping malls was 72,891.41 won. Sales texts were classified into 8 categories by text mining. The most common category was fashion; notably, the categories were derived from the marketing text rather than from product attributes. This study has implications for understanding the current online market and suggesting future directions through the use of EDA.
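
In the spirit of the study's EDA step, a minimal sketch assuming the crawled BEST 100 fields were saved to a hypothetical best100.csv with columns mall, name, price, and free_shipping (0/1):

```python
import pandas as pd

df = pd.read_csv("best100.csv")  # hypothetical crawl output

# Overall and per-mall price statistics (the paper reports an overall
# average of 72,891.41 won across the 11 malls).
print("overall mean price:", df["price"].mean())
print(df.groupby("mall")["price"].describe()[["mean", "50%", "max"]])

# Share of BEST 100 items with free shipping, per mall.
print(df.groupby("mall")["free_shipping"].mean().sort_values(ascending=False))
```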

Statistical Study and Prediction of Variability of Erythemal Ultraviolet Irradiance Solar Values in Valencia, Spain

  • Gurrea, Gonzalo;Blanca-Gimenez, Vicente;Perez, Vicente;Serrano, Maria-Antonia;Moreno, Juan-Carlos
    • Asia-Pacific Journal of Atmospheric Sciences / v.54 no.4 / pp.599-610 / 2018
  • The goal of this study was to statistically analyse the variability of global irradiance and ultraviolet erythemal (UVER) irradiance and their interrelationships with the global and UVER clearness indices and ozone. A prediction of short-term UVER solar irradiance values was also obtained. Extreme values of UVER irradiance were included in the data set, as well as a time series of ultraviolet irradiance variability (UIV). The study period was from 2005 to 2014, and approximately 250,000 readings were taken at 5-min intervals. The effect of the clearness indices on global irradiance variability (GIV) and UIV was also recorded, and bidimensional distributions were used to gather information on the two measured variables. With regard to daily GIV and UIV, it is also shown that for global clearness index ($k_t$) values lower than 0.6 both global and UVER irradiance had greater variability, and that UIV on cloud-free days ($k_t$ higher than 0.65) exceeds GIV. To study the dependence between UIV and GIV, the $\chi^2$ statistical method was used. It can be concluded that there is a 95% probability of a clear dependency between the variabilities. A connection between high $k_t$ (corresponding to cloudless days) and low variabilities was found in the analysis of bidimensional distributions. Extreme values of UVER irradiance were also analyzed, and it was possible to calculate probable future values of UVER irradiance by extrapolating the values of the adjustment curve obtained from the Gumbel distribution.
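
The extreme-value extrapolation can be sketched with SciPy's Gumbel distribution; the annual maxima below are hypothetical stand-ins, not the Valencia measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical annual maxima of UVER irradiance (W/m^2), one per year.
annual_max = np.array([0.262, 0.271, 0.268, 0.280, 0.265,
                       0.275, 0.283, 0.270, 0.277, 0.286])

# Fit the Gumbel adjustment curve to the observed maxima.
loc, scale = stats.gumbel_r.fit(annual_max)

# Probable future value: the level exceeded once in T years on average.
for T in (10, 20, 50):
    level = stats.gumbel_r.ppf(1 - 1 / T, loc, scale)
    print(f"{T}-year return level: {level:.3f} W/m^2")
```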

A Fault Tolerant Data Management Scheme for Healthcare Internet of Things in Fog Computing

  • Saeed, Waqar;Ahmad, Zulfiqar;Jehangiri, Ali Imran;Mohamed, Nader;Umar, Arif Iqbal;Ahmad, Jamil
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.35-57 / 2021
  • Fog computing aims to solve the bandwidth, network latency, and energy consumption problems of cloud computing. Management of the data generated by healthcare IoT devices is one of the significant applications of fog computing. A huge amount of data is generated by healthcare IoT devices, and such data must be managed efficiently, with low latency, without failure, and with minimum energy consumption and cost. Failures of tasks or nodes cause higher latency, greater energy consumption, and higher cost. Thus, a failure-free, cost-efficient, and energy-aware management and scheduling scheme for data generated by healthcare IoT devices not only improves the performance of the system but can also save patients' lives thanks to minimum latency and the provision of fault tolerance. Therefore, to address these challenges in data management and fault tolerance, we present a Fault Tolerant Data Management (FTDM) scheme for healthcare IoT in fog computing. In FTDM, the data generated by healthcare IoT devices are organized and managed through well-defined components and steps. A two-way fault-tolerance mechanism, i.e., task-based and node-based fault tolerance, is provided in FTDM, through which failures of tasks and nodes are managed. The paper considers energy consumption, execution cost, network usage, latency, and execution time as performance evaluation parameters. Simulations performed using iFogSim show significant improvements: the proposed FTDM strategy reduces energy consumption by 3.97%, execution cost by 5.09%, network usage by 25.88%, latency by 44.15%, and execution time by 48.89% compared with the existing Greedy Knapsack Scheduling (GKS) strategy. Moreover, patients sometimes need to be treated remotely due to the non-availability of facilities or to infectious diseases such as COVID-19; in such circumstances, the proposed strategy is particularly effective.
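
The task-based side of the two-way fault tolerance can be illustrated with a simple resubmission policy: if a task fails on one fog node, it is reassigned to another. This generic sketch does not reproduce FTDM's components or the iFogSim setup:

```python
import random

def run_on(node, task):
    """Pretend execution: each fog node has a fixed failure probability."""
    return random.random() > node["p_fail"]

def submit(task, nodes, max_tries=3):
    """Task-based fault tolerance: resubmit a failed task to another node."""
    for node in sorted(nodes, key=lambda n: n["p_fail"])[:max_tries]:
        if run_on(node, task):
            return node["name"]  # task completed on this node
    return None                  # give up after max_tries nodes

nodes = [{"name": "fog-1", "p_fail": 0.3},
         {"name": "fog-2", "p_fail": 0.1},
         {"name": "fog-3", "p_fail": 0.2}]
print(submit("ecg-batch-17", nodes))  # e.g. 'fog-2'
```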

A Study on the Development Issues of Digital Health Care Medical Information (디지털 헬스케어 의료정보의 발전과제에 관한 연구)

  • Moon, Yong
    • Industry Promotion Research / v.7 no.3 / pp.17-26 / 2022
  • As a well-being mindset that values keeping our minds and bodies free and healthy spreads through the society we live in, health care has become a key part of the 4th Industrial Revolution alongside technologies such as big data, IoT, AI, and blockchain. The advancement of the medical information service industry is being promoted through convergence technology. In digital healthcare, the development of intelligent information technologies such as artificial intelligence, big data, and cloud computing is being promoted as a digital transformation of the traditional medical and healthcare industry. In addition, with the rapid development of converging science and technology and with social change, issues involving health, medical care, and welfare have gradually expanded. Therefore, this study first examines the general meaning and current status of digital healthcare medical information, and then analyzes and reviews the developmental tasks needed to activate digital healthcare medical information. The purpose of this article is to improve usability so that human freedom can be fully pursued.