• Title/Summary/Keyword: data handling

Search Results: 1,497

Conceptual Design of 6U Micro-Satellite System for Optical Images of 3 m GSD (3 m급 광학영상 촬영을 위한 6U 초소형위성 시스템 개념설계)

  • Kim, Geuk-Nam;Park, Sang-Young;Kim, Gi-hwan;Park, Seung-Han;Song, Youngbum;Song, Sung Chan
    • Journal of Aerospace System Engineering
    • /
    • v.16 no.3
    • /
    • pp.105-114
    • /
    • 2022
  • The purpose of this study was to present a conceptual design of a 6U micro-satellite system for optical images of 3 m GSD. An optical camera system producing 3 m GSD images was designed and optimized as the payload. The optical system has a diameter of 78 mm, a length of 250 mm, and a focal length of 1400 mm. Requirements and constraints were then configured for the 6U micro-satellite bus carrying this payload. To satisfy them, the bus subsystems were designed: attitude and orbit control, propulsion, command and data handling, electrical power, communication, structures and mechanisms, and thermal control. The mass, power, and communication link budgets were also confirmed for the 6U micro-satellite comprising the optical payload and the bus subsystems. A mission operation concept is proposed for taking optical images with the 6U micro-satellite in low Earth orbit. A constellation of many such 6U micro-satellites could provide various data for reconnaissance and disaster tracking.
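The focal length and GSD quoted in this abstract can be sanity-checked with the standard pinhole imaging relation, GSD = pixel pitch × altitude / focal length. The orbit altitude and detector pixel pitch below are illustrative assumptions, not values from the paper:

```python
def gsd_m(pixel_pitch_m, altitude_m, focal_length_m):
    """Pinhole-camera ground sample distance (GSD) in metres."""
    return pixel_pitch_m * altitude_m / focal_length_m

# Assumed values (not from the paper): 500 km orbit, 8.4 um pixel pitch,
# combined with the 1400 mm focal length quoted in the abstract.
print(f"GSD = {gsd_m(8.4e-6, 500e3, 1.4):.2f} m")  # GSD = 3.00 m
```

Under these assumed numbers, the 1400 mm focal length indeed yields a 3 m-class GSD.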

A Narrative Inquiry on Korea Science Academy Physical Education Teachers' Assessment Experiences (한국과학영재학교 체육교사의 체육평가 경험에 대한 내러티브 탐구)

  • Lee, Jong-Min;Lee, Keun-Mo
    • 한국체육학회지인문사회과학편
    • /
    • v.55 no.3
    • /
    • pp.43-57
    • /
    • 2016
  • This narrative study aims to describe the P.E. assessment experiences of P.E. teachers at the Korea Science Academy of KAIST and to interpret the educational significance found in the process. The participants were two P.E. teachers selected by critical case sampling. Data were collected mainly through formal interviews with the participants, supplemented by the researcher's field notes, informal interviews, various meeting minutes, students' evaluations of teaching, and emails between the researcher and the participants. Data were analyzed through inductive categorization, and to ensure the trustworthiness of the study, diverse materials were integrated, advice and suggestions were obtained from fellow researchers, and the study texts were continuously confirmed by the participants. While conducting P.E. assessment at the Korea Science Academy of KAIST, the participants experienced benefits such as qualitative improvement of P.E. classes owing to the simplified assessment, freedom from the chores of handling assessment results, students' improved perceptions of P.E. class, safer classes without excessive competition, and the possibility of alternatives to the pass/fail system; at the same time, they experienced limitations such as concerns over the validity and reliability of P.E. evaluation, students' tendency to take P.E. classes lightly, and the reality that teachers cannot fail students. The two teachers' assessment experiences were interpreted educationally as an encounter with good P.E. classes, an invitation to P.E. class criticism, and the start of a student-led school P.E. culture.

Study on the Efficient Integration of Long-term Care Facilities and Geriatric Hospitals by Using NHIC Survey Data (실태조사를 통한 장기요양시설과 요양병원의 효율적 연계방안)

  • Choi, In-Duck;Lee, Eun-Mi
    • 한국노년학
    • /
    • v.30 no.3
    • /
    • pp.855-869
    • /
    • 2010
  • The purpose of this study is to identify how to efficiently integrate long-term care facilities with geriatric hospitals. We surveyed the 2009 operations and medical services of 192 long-term care facilities and 168 geriatric hospitals in Korea between October and November. Descriptive statistics and chi-square tests were applied to the collected data using SPSS 13.0 for Windows. The two facility types differed in the co-payment levels for food services. Both types cited budget deficits as their major management problem. Ease of access and the surrounding environment were critical factors in selecting the locations of both facility types. Facility users benefited from discounted co-payments at both types; however, users wanted more frequent visits and support from their family members during their stay. Users of long-term care facilities were found to stay longer, often until death, than their counterparts in geriatric hospitals. The two types of facilities provided their services entirely separately, and users of both types were poorly supported and cared for by their families. This study suggests that setting reasonable service fees, paying caretakers, introducing an integrated facility type, strengthening facility assessment standards, introducing a family doctor system, and allowing geriatric hospitals to handle long-term care insurance would make the integration of long-term care facilities and geriatric hospitals beneficial.
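The chi-square comparisons between the two facility types can be sketched without SPSS. The contingency counts below are invented for illustration, not survey results; the statistic itself is the standard Pearson chi-square of independence:

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: facility type (rows) x co-payment discount used (cols).
print(round(chi_square([[30, 20], [10, 40]]), 3))
```

The statistic would then be compared against the chi-square distribution with (rows-1)(cols-1) degrees of freedom to obtain a p-value.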

Elevator Algorithm Design Using Time Table Data (시간표 데이터를 이용한 엘리베이터 알고리즘 설계)

  • Park, Jun-hyuk;Kyoung, Min-jun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.122-124
    • /
    • 2022
  • Handling passenger traffic is the main challenge in designing an elevator group-control algorithm. Advanced control systems such as Hyundai's Destination Selection System (DSS) let passengers select their destination on a selection screen, and such systems have shown great efficiency. However, they cannot be applied to general elevator control systems because of the high cost of the technology. Many elevator systems instead use the Nearest Car (NC) algorithm based on the SCAN algorithm, which leads to time-efficiency problems. In this paper, we design an elevator group-control algorithm for buildings that have approximate timetable data for most of their passengers, which makes it possible to predict the destinations and locations of passenger calls. The algorithm consists of two parts, a waiting function and an assignment function, which evaluate elevators' actions with respect to the calls and the overall situation. Ten timetables were created with reference to a real timetable, following midday traffic and interfloor traffic. The coefficients in the functions were set by a genetic algorithm process that selects the best-performing configuration. As a result, the average waiting time was shortened by a noticeable amount, and the efficiency was close to the known DSS result. Finally, we analyze the algorithm by interpreting each coefficient produced by the genetic algorithm.
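The waiting/assignment split described above can be illustrated with a toy cost model: a waiting function scores how costly it would be for each car to serve a hall call, and the assignment function picks the cheapest car. The cost terms and the coefficients w1/w2 are illustrative assumptions; in the paper such coefficients are the quantities tuned by the genetic algorithm:

```python
def waiting_cost(car_floor, call_floor, load, w1=1.0, w2=0.5):
    """Toy waiting function: travel distance plus a load penalty.

    w1, w2 stand in for the coefficients the paper tunes genetically.
    """
    return w1 * abs(car_floor - call_floor) + w2 * load

def assign(cars, call_floor):
    """Toy assignment function: index of the car with the lowest cost."""
    costs = [waiting_cost(floor, call_floor, load) for floor, load in cars]
    return costs.index(min(costs))

# Three cars as (current_floor, passengers); a hall call arrives at floor 4.
# Car 1 (floor 3, 1 passenger) is nearest and lightly loaded.
print(assign([(0, 4), (3, 1), (9, 0)], call_floor=4))  # -> 1
```

A genetic algorithm would evolve w1 and w2 against simulated timetable traffic, keeping the coefficient sets that minimize average waiting time.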


Autoregressive Cross-lagged Effects Between the Experience of Bullying and Victimization: Multigroup Analysis by Gender (학교폭력 가해경험과 피해경험의 종단관계 검증: 자기회귀교차지연 모형을 통한 성별 간 다집단 분석)

  • Jisu Park;Yoonsun Han
    • Korean Journal of Culture and Social Issues
    • /
    • v.24 no.1
    • /
    • pp.1-27
    • /
    • 2018
  • The purpose of this study was to identify the persistent and dynamic association between bullying and victimization. Gender differences in patterns of school bullying were hypothesized based on the literature. Analyses were based on waves 3-6 of the Korea Children and Youth Panel Survey, a nationally representative dataset of primary and secondary school students in South Korea (N = 1,881). An autoregressive cross-lagged model was employed to identify the reciprocal association between bullying and victimization in the longitudinal data. As hypothesized, regardless of gender, autoregressive effects were statistically significant between adjacent time points, such that current bullying predicted future bullying and current victimization predicted future victimization. However, there were no cross-lagged effects of current victimization on future bullying, nor of current bullying on future victimization, for either male or female youth. Findings from this study may have implications for designing policies against school bullying. Not only is short-term intervention for handling immediate psycho-social maladjustment important, but so are long-term plans that prevent youth from falling into continued perpetration and victimization in the system of school bullying.
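The autoregressive cross-lagged structure can be illustrated with a small simulation: bullying B and victimization V at time t+1 each depend on both at time t. The path values below are illustrative assumptions chosen to mimic the paper's pattern (strong autoregressive paths a and d, near-zero cross-lagged paths b and c), not estimates from the KCYPS data:

```python
import random

def simulate(n=2000, a=0.6, d=0.6, b=0.0, c=0.0, seed=7):
    """Generate (B_t, V_t, B_t+1, V_t+1) tuples from an ARCL process."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        B, V = rng.gauss(0, 1), rng.gauss(0, 1)
        B1 = a * B + b * V + rng.gauss(0, 0.5)  # autoregressive a, cross-lag b
        V1 = c * B + d * V + rng.gauss(0, 0.5)  # cross-lag c, autoregressive d
        rows.append((B, V, B1, V1))
    return rows

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((p - mx) * (q - my) for p, q in zip(x, y)) / (sx * sy)

rows = simulate()
auto = corr([r[0] for r in rows], [r[2] for r in rows])   # B_t with B_t+1
cross = corr([r[1] for r in rows], [r[2] for r in rows])  # V_t with B_t+1
print(round(auto, 2), round(cross, 2))
```

With these paths, the autoregressive correlation is large while the cross-lagged one stays near zero, which is the shape of result the study reports.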

Study on Improving the Navigational Safety Evaluation Methodology based on Autonomous Operation Technology (자율운항기술 기반의 선박 통항 안전성 평가 방법론 개선 연구)

  • Jun-Mo Park
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.30 no.1
    • /
    • pp.74-81
    • /
    • 2024
  • In the near future, autonomous ships, ships controlled from shore-based remote control centers, and ships operated by onboard navigators will coexist and operate at sea together. As this situation emerges, a method is required to evaluate the safety of the maritime traffic environment. Therefore, this study proposes a plan to evaluate navigational safety through ship-handling simulation in a maritime environment where ships directly controlled by navigators coexist with autonomous ships. The own ship was given autonomous operational functions by training an MMG model based on six-DOF motion with the PPO algorithm, a deep reinforcement learning technique. Target ships were built from maritime traffic modeling data derived from the traffic data of the sea area under evaluation, and were given autonomous operational functions implemented in the simulation space. A numerical model was established by collecting data on tide, waves, currents, and wind from a maritime meteorological database; a maritime meteorology model was created from it and designed to reproduce maritime weather in the simulator. Finally, the proposed safety evaluation system enables assessment of collision risk through vessel traffic flow simulation within the ship-handling simulation, while maintaining the existing evaluation method.
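The kind of ship model such a simulator integrates can be sketched in a heavily simplified form. The step below is only the kinematic part of a surge-sway-yaw state update; the full MMG model in the paper adds hull, propeller, and rudder force terms, and the PPO agent would choose control inputs between steps. This sketch is an assumption for illustration, not the paper's model:

```python
import math

def step(state, dt=1.0):
    """Advance a simplified ship state one time step.

    state = (x, y, psi, u, v, r): position (m), heading (rad),
    surge/sway speed (m/s), yaw rate (rad/s).
    """
    x, y, psi, u, v, r = state
    # Rotate body-frame velocities (u, v) into the earth frame and integrate.
    x += (u * math.cos(psi) - v * math.sin(psi)) * dt
    y += (u * math.sin(psi) + v * math.cos(psi)) * dt
    psi += r * dt
    return (x, y, psi, u, v, r)

# A ship heading along +x at 2 m/s advances 2 m in one second.
print(step((0.0, 0.0, 0.0, 2.0, 0.0, 0.0)))  # -> (2.0, 0.0, 0.0, 2.0, 0.0, 0.0)
```

A reinforcement learning loop would call such a step function repeatedly, rewarding the agent for maintaining track while keeping a safe distance from target ships.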

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.3
    • /
    • pp.192-207
    • /
    • 2006
  • Today, the SSL protocol is used as a core element of various computing environments and security systems, but its rigidity causes several problems. First, SSL places a considerable burden on the CPU, lowering the performance of security services in encrypted transactions, because it encrypts all data transferred between a server and a client. Second, SSL can be vulnerable to cryptanalysis because it uses a fixed algorithm and key. Third, it is difficult to add and use new cryptographic algorithms. Finally, it is difficult for developers to learn the cryptography APIs (Application Program Interfaces) required for the SSL protocol. Hence, we need to resolve these problems while providing a secure and convenient way to operate the SSL protocol and handle data efficiently. In this paper, we propose an SSL component designed and implemented using the CBD (Component Based Development) concept to satisfy these requirements. The SSL component provides not only data encryption services, like the SSL protocol, but also convenient APIs for developers unfamiliar with security. Furthermore, because the SSL component can be reused, it improves productivity and reduces development cost, and when new algorithms are added or existing ones are changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service at the application layer. We first derive the requirements, and then design and implement the SSL component along with the confidentiality and integrity components that support it. All of these components are implemented as EJBs, which enables efficient data handling by encrypting/decrypting only the selected data and improves usability by letting users choose the data and mechanisms they intend. In conclusion, our tests and evaluations show that the SSL component is more usable and efficient than the existing SSL protocol, because the increase rate of processing time for the SSL component is lower than that of the SSL protocol.
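The two ideas the component design rests on, pluggable algorithms and protection of only selected data, can be sketched independently of EJB. The registry and `protect` helper below are assumptions for illustration, not the paper's API; the HMAC integrity component uses real standard-library crypto, while a confidentiality component would wrap a proper cipher in the same way:

```python
import hashlib
import hmac

REGISTRY = {}

def register(name):
    """Register a security component class under a selectable algorithm name,
    so new algorithms can be added without changing caller code."""
    def deco(cls):
        REGISTRY[name] = cls
        return cls
    return deco

@register("hmac-sha256")
class HmacIntegrity:
    """Integrity component backed by stdlib HMAC-SHA256."""
    def __init__(self, key):
        self.key = key

    def tag(self, data):
        return hmac.new(self.key, data, hashlib.sha256).hexdigest()

def protect(record, fields, component):
    """Apply the chosen component only to the selected fields,
    mirroring the selective (rather than all-data) processing above."""
    return {k: component.tag(v.encode()) if k in fields else v
            for k, v in record.items()}

comp = REGISTRY["hmac-sha256"](b"secret-key")
out = protect({"card": "1234-5678", "memo": "lunch"}, {"card"}, comp)
print(sorted(out))  # only "card" is tagged; "memo" passes through unchanged
```

Swapping in another registered class changes the mechanism without touching `protect` or its callers, which is the compatibility property the abstract claims for the component.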

Study on the Possibility of Estimating Surface Soil Moisture Using Sentinel-1 SAR Satellite Imagery Based on Google Earth Engine (Google Earth Engine 기반 Sentinel-1 SAR 위성영상을 이용한 지표 토양수분량 산정 가능성에 관한 연구)

  • Younghyun Cho
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.2
    • /
    • pp.229-241
    • /
    • 2024
  • With the advancement of big data processing technology on cloud platforms, the access, processing, and analysis of large-volume data such as satellite imagery have recently improved significantly. In this study, the Change Detection Method, a relatively simple technique for retrieving soil moisture, was applied to the backscattering coefficient values of pre-processed Sentinel-1 synthetic aperture radar (SAR) imagery products on Google Earth Engine (GEE), one such platform, to estimate surface soil moisture for six observatories within the Yongdam Dam watershed in South Korea for the period 2015 to 2023, as well as the watershed average. A correlation analysis was then conducted between the estimated values and actual measurements, along with an examination of the applicability of GEE. The results revealed that the surface soil moisture estimated for the small areas around the watershed's soil moisture observatories exhibited low correlations, ranging from 0.1 to 0.3 for both VH and VV polarizations, likely due to the inherent measurement accuracy of the SAR imagery and variations in data characteristics. However, the watershed-average surface soil moisture, derived by extracting the mean SAR backscattering coefficient over the entire watershed and applying moving averages to mitigate data uncertainty and variability, showed markedly improved correlations, around 0.5. Although the pre-processed SAR data limit which analyses can be conducted directly, these results demonstrate the utility of GEE: its efficient processing of extensive satellite imagery enables soil moisture to be estimated and evaluated over broad scopes, such as long-term watershed averages. Based on this, it is anticipated that GEE can be effectively utilized to assess long-term variations in average soil moisture in major dam watersheds, in conjunction with soil moisture observation data from various locations across the country.
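The Change Detection Method referred to above, in its common form, maps each backscatter observation onto the dry-wet range seen at that location: relative soil moisture = (σ⁰ − σ⁰_dry) / (σ⁰_wet − σ⁰_dry), followed here by the moving average the study uses to damp variability. The dB values below are illustrative, and the exact variant the paper applies may differ:

```python
def relative_soil_moisture(sigma0_db, sigma0_dry_db, sigma0_wet_db):
    """Change-detection estimate: position of sigma0 in the dry-wet range [0, 1]."""
    return (sigma0_db - sigma0_dry_db) / (sigma0_wet_db - sigma0_dry_db)

def moving_average(series, window=3):
    """Simple trailing moving average used to smooth speckle-driven jumps."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Illustrative watershed-mean backscatter time series (dB) and assumed
# dry/wet reference levels of -17 dB and -7 dB.
series = [relative_soil_moisture(s, -17.0, -7.0)
          for s in [-15.0, -12.0, -9.0, -11.0]]
print(moving_average(series))
```

On GEE the same arithmetic would be applied to `ee.ImageCollection` means per date rather than to a Python list, but the estimator itself is this one-line rescaling.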

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas make it difficult to expand to new nodes when the stored data must be distributed across nodes as the data volume rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented model with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes it to the MongoDB module and the MySQL module. The log graph generator module produces the log-analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and presents them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log-insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through MongoDB log-insert performance evaluations over various chunk sizes.
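The log collector's routing rule described above can be sketched with stub queues: records flagged for real-time analysis go to the MySQL path, everything else to the MongoDB path for batch/Hadoop processing. The record shape and the `realtime` flag are assumptions for illustration, not the paper's schema:

```python
from collections import defaultdict

def route(records):
    """Classify collected log records into per-sink queues, mirroring the
    collector module: real-time records -> MySQL, the rest -> MongoDB."""
    queues = defaultdict(list)
    for rec in records:
        sink = "mysql" if rec.get("realtime") else "mongodb"
        queues[sink].append(rec)
    return queues

q = route([{"type": "transaction", "realtime": True},
           {"type": "audit"},
           {"type": "access"}])
print(len(q["mysql"]), len(q["mongodb"]))  # -> 1 2
```

In the actual system each queue would feed a database client (e.g. a MongoDB driver insert) instead of an in-memory list, but the classification step is this branch.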

Applicability Analysis of Constructing UDM of Cloud and Cloud Shadow in High-Resolution Imagery Using Deep Learning (딥러닝 기반 구름 및 구름 그림자 탐지를 통한 고해상도 위성영상 UDM 구축 가능성 분석)

  • Nayoung Kim;Yerin Yun;Jaewan Choi;Youkyung Han
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.351-361
    • /
    • 2024
  • Satellite imagery contains various elements such as clouds, cloud shadows, and terrain shadows. Accurately identifying and eliminating these factors, which complicate satellite image analysis, is essential for maintaining the reliability of remote sensing imagery. For this reason, satellites such as Landsat-8, Sentinel-2, and Compact Advanced Satellite 500-1 (CAS500-1) provide Usable Data Masks (UDMs) with their images as part of their Analysis Ready Data (ARD) products. Precise detection of clouds and their shadows is crucial for the accurate construction of these UDMs. Existing cloud and cloud-shadow detection methods are categorized into threshold-based methods and Artificial Intelligence (AI)-based methods. Recently, AI-based methods, particularly deep learning networks, have been preferred because of their advantage in handling large datasets. This study analyzes the applicability of constructing UDMs for high-resolution satellite images through deep-learning-based cloud and cloud-shadow detection using open-source datasets. To validate the performance of the deep learning network, we compared its detection results with pre-existing UDMs from Landsat-8, Sentinel-2, and CAS500-1 satellite images. The results demonstrated high accuracy in the detection outcomes produced by the network. Additionally, we applied the network to detect clouds and their shadows in KOMPSAT-3/3A images, which do not provide UDMs. The experiment confirmed that the deep learning network effectively detects clouds and their shadows in high-resolution satellite images, demonstrating that UDM data for high-resolution satellite imagery can be constructed using the deep learning network.
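The final step of turning per-class detections into a UDM can be sketched as a bitmask merge: each pixel receives a flag per detected class, as ARD masks typically do. The bit layout below is an assumption for illustration, not the CAS500-1 or Landsat UDM specification:

```python
CLOUD, SHADOW = 0b01, 0b10  # assumed bit assignments, not an official spec

def build_udm(cloud_mask, shadow_mask):
    """Merge binary cloud/shadow masks (nested lists of 0/1) into one
    per-pixel bitmask of the kind a UDM stores."""
    return [[(CLOUD if c else 0) | (SHADOW if s else 0)
             for c, s in zip(c_row, s_row)]
            for c_row, s_row in zip(cloud_mask, shadow_mask)]

# One image row: pixel 0 is cloud, pixel 1 is shadow, pixel 2 is clear.
print(build_udm([[1, 0, 0]], [[0, 1, 0]]))  # -> [[1, 2, 0]]
```

In practice the network's two probability maps would be thresholded into these binary masks first; downstream users can then test individual bits to exclude unusable pixels.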