• Title/Summary/Keyword: research data

A study on Fusion image Expressed in Hair collections - Focusing on Juno Hair's 2013-2022 collection

  • Jin Hyun Park;Hye Rroon Jang
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.4
    • /
    • pp.202-209
    • /
    • 2023
  • In the 21st century, the dualistic worldview of the Cold War era collapsed and we entered an era of new creation and fusion. The fusion of designs between East and West, design work drawing on traditional clothing of the past, the use of continuously developed new materials, and the mixing of unconventional items are under way in various fields. However, little research has combined these fusion characteristics with hair; the research period has been short and the body of work is small. Therefore, this study analyzed the hairstyles of fusion images shown in hair collections, using the Juno Hair collections from 2013 to 2022 as analysis data, and examined the types of fusion images in the works by dividing them into three categories: folk images, mixed images, and future-oriented images. The results of this research can be used not only as data for predicting future fashion trends, but also as basic data for exploring new design directions. In future research, it is expected that convergent work will follow, such as analyzing fusion images from an integrated perspective.

Validation for SOC Estimation from OC and EC concentration in PM2.5 measured at Seoul (서울 대기 중 PM2.5 내 OC와 EC로부터 SOC 추정방법의 비교 평가)

  • Yoo, Ha Young;Kim, Ki Ae;Kim, Yong Pyo;Jung, Chang Hoon;Shin, Hye Jung;Moon, Kwang Ju;Park, Seung Myung;Lee, Ji Yi
    • Particle and aerosol research
    • /
    • v.16 no.1
    • /
    • pp.19-30
    • /
    • 2020
  • The organic carbon in ambient particulate matter (PM) is divided into primary organic carbon (POC) and secondary organic carbon (SOC) according to its formation pathway. To regulate PM effectively, separately estimating the amounts of POC and SOC is an important consideration. Since SOC cannot be measured directly, previous studies have estimated it with the EC tracer method, which calculates POC by determining (OC/EC)pri, the ratio of measured OC to EC from primary combustion sources, and obtains SOC as the remainder. In this study, three different ways of determining (OC/EC)pri were applied to OC and EC concentrations in PM2.5 measured at Seoul: 1) the minimum OC/EC ratio during the measurement period; 2) regression analysis of OC vs. EC over the lower 5-20% of OC/EC ratios; 3) choosing the OC/EC ratio with the lowest correlation coefficient (R2) between EC and SOC, reported as the minimum R squared (MRS) method. The (OC/EC)pri ratios from the three methods were 0.35, 1.22, and 1.77, respectively, for the 1-hourly data. Comparing (OC/EC)pri from 1-hourly data with that from 24-hourly data revealed that the 24-hourly estimate was about twice as large, owing to the lower time resolution of sampling. We concluded that the most appropriate (OC/EC)pri is the value calculated by regression analysis of the 1-hourly data, and used it to estimate SOC amounts in PM2.5 of the Seoul atmosphere.
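
The EC tracer calculation described above can be sketched in a few lines. The concentrations below are fabricated for illustration (the study's actual Seoul measurements are not reproduced here), and the sketch uses only the simplest of the three methods, the minimum observed OC/EC ratio:

```python
# Hypothetical hourly OC and EC concentrations (ug/m3), for illustration only.
oc = [4.2, 5.1, 3.8, 6.0, 4.9, 7.3]
ec = [2.0, 2.1, 1.9, 2.4, 2.2, 2.5]

# Method 1: take the minimum OC/EC ratio observed as (OC/EC)pri.
ratio_pri = min(o / e for o, e in zip(oc, ec))

# POC is estimated from EC via the primary ratio; SOC is the remainder.
poc = [ratio_pri * e for e in ec]
soc = [o - p for o, p in zip(oc, poc)]

print(round(ratio_pri, 3))
print([round(s, 2) for s in soc])
```

By construction the hour with the minimum ratio gets SOC = 0, so this method yields a lower bound on POC; the regression and MRS methods compared in the paper address its sensitivity to a single extreme hour.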

Development of the 'Three-stage' Bayesian procedure and a reliability data processing code (3단계 베이지안 처리절차 및 신뢰도 자료 처리 코드 개발)

  • 임태진
    • Korean Management Science Review
    • /
    • v.11 no.2
    • /
    • pp.1-27
    • /
    • 1994
  • A reliability data processing code, MPRDP (Multi-Purpose Reliability Data Processor), has been under development in FORTRAN since January 1992 at KAERI (Korea Atomic Energy Research Institute). The purpose of the research is to construct a reliability database (plant-specific as well as generic) by processing various kinds of reliability data in an objective and systematic fashion. To account for generic estimates in various compendia as well as generic plants' operating experience, we developed a 'three-stage' Bayesian procedure [1] by logically combining the 'two-stage' procedure [2] with an idea for processing generic estimates [3]. The first stage manipulates generic plant data to determine a set of estimates for generic parameters, e.g. the mean and the error factor, which in turn define a generic failure rate distribution. The second stage combines these estimates with those proposed by various generic compendia (generic book-type data), adopting another Bayesian procedure to determine the final generic failure rate distribution, which serves as the prior distribution in the third stage. The third stage then updates the generic distribution with plant-specific data, yielding a posterior failure rate distribution. Both running-failure and demand-failure data can be handled by the code. In accordance with the growing need for a consistent and well-structured reliability database, we constructed a generic reliability database with the MPRDP code [4]. About 30 generic data sources were reviewed, and available data were collected and screened from them. We processed reliability data for about 100 safety-related components frequently modeled in PSA. The underlying distribution for the failure rate was assumed to be lognormal or gamma, following PSA convention. Dependencies among the generic sources were not considered at this time; this problem will be addressed in further study.
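
For a gamma prior (one of the two PSA-convention distributions mentioned above), the third-stage plant-specific update can be sketched as a conjugate Poisson-gamma update for running failures. The prior parameters and plant evidence below are illustrative numbers, not values from the MPRDP database:

```python
# Generic failure rate prior: gamma(alpha, beta), rate in failures/hour.
# These parameters are illustrative, not taken from the paper.
alpha_prior, beta_prior = 1.5, 1.0e5

# Plant-specific running-failure evidence: n failures in T operating hours.
n_failures, hours = 6, 2.0e5

# Conjugate Poisson-gamma update: shape gains the failure count,
# inverse-scale gains the exposure time.
alpha_post = alpha_prior + n_failures
beta_post = beta_prior + hours

mean_prior = alpha_prior / beta_prior
mean_post = alpha_post / beta_post
print(f"prior mean    : {mean_prior:.2e} /h")
print(f"posterior mean: {mean_post:.2e} /h")
```

Demand-failure data would be handled analogously with a beta-binomial update; the first two stages, which construct the generic prior itself, are the substance of the paper's procedure and are not reproduced here.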

A Study on Policy Components of Data Access and Use Controls in Research Data Repositories (연구데이터 레포지터리의 데이터 접근 및 이용 통제 정책 요소에 관한 연구)

  • Kim, Jihyun
    • Journal of Korean Library and Information Science Society
    • /
    • v.47 no.3
    • /
    • pp.213-239
    • /
    • 2016
  • As Open Data has been emphasized globally, discussions on data policies have arisen to minimize the problems that result from data sharing and reuse. This study investigated policy components for controlling access to and use of data, and examined similarities and differences in those components across disciplines. For this purpose, the study analyzed the data access and use control policies of 37 overseas research data repositories: twenty in biological and health science; ten in chemistry, earth and environmental science, and physics; and seven in social science and general science. The analysis showed that the common policy components are copyright/licenses, data citation, disclaimers, and embargoes. However, the diversity of policy components differed among the disciplines, indicating that the rationales for the access and use controls emphasized also differ by discipline.

Development of Relational Database Management System for Agricultural Non-point Source Pollution Control (관계형 데이터베이스를 이용한 농업비점 자료 관리 시스템 개발)

  • Park, Jihoon;Kang, Moon Seong;Song, Inhong;Hwang, Soon Ho;Song, Jung-Hun;Jun, Sang Min
    • Journal of Korean Society of Rural Planning
    • /
    • v.19 no.4
    • /
    • pp.319-327
    • /
    • 2013
  • The objective of this research was to develop a relational database management system (RDBMS) to collect, manage, and analyze data on agricultural non-point source (NPS) pollution. The system consists of a relational database for agricultural NPS data and data processing modules, the latter composed of four sub-modules for data input, management, analysis, and output. Data collected from the watershed of the upper Cheongmi stream and from Geunsam-Ri were used in this study. The database was constructed using Apache Derby and holds meteorological, hydrological, water quality, and soil characteristics data. The Agricultural NPS Data Management System (ANPS-DMS) was developed using Oracle Java. The system can handle a variety of agricultural NPS data and is expected to provide an appropriate data management tool for agricultural NPS studies.
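
The kind of relational layout described above can be sketched with Python's built-in sqlite3 standing in for Apache Derby/Java; the table and column names below are illustrative, not taken from the actual ANPS-DMS schema:

```python
import sqlite3

# In-memory database: one table of monitoring sites, one of observations,
# linked by a foreign key -- the essence of a relational NPS data store.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE site (
    site_id INTEGER PRIMARY KEY, name TEXT)""")
cur.execute("""CREATE TABLE water_quality (
    site_id INTEGER REFERENCES site(site_id),
    obs_date TEXT, bod_mg_l REAL)""")

cur.execute("INSERT INTO site VALUES (1, 'Upper Cheongmi stream')")
cur.execute("INSERT INTO water_quality VALUES (1, '2013-07-01', 3.2)")

# A join reassembles site metadata with its observations on demand.
rows = cur.execute("""SELECT s.name, w.obs_date, w.bod_mg_l
                      FROM water_quality w
                      JOIN site s USING (site_id)""").fetchall()
print(rows)
```

Keeping sites and observations in separate tables, rather than one flat file per station, is what lets the system's analysis and output modules query across meteorological, hydrological, water quality, and soil data uniformly.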

The Influence of Data Quality Management on Data Utilization and Customer Orientation (데이터 품질관리가 데이터 활용도 및 고객 지향성에 미치는 영향)

  • An, Heejung;Kim, Hyunsoo
    • Journal of Service Research and Studies
    • /
    • v.5 no.2
    • /
    • pp.119-132
    • /
    • 2015
  • Poor data quality hinders efficient operation and rapid decision-making in enterprises. We therefore examined whether management support and the business environment influence data quality management, and verified that such activity promotes the utilization of data in work and decision-making, thereby affecting customer orientation. The study showed that data quality management is a positive factor in utilizing data for better business decision-making, and confirmed that utilizing data has an indirect effect on customer orientation. Finally, we suggest practical implications for corporate executives. Future research is needed on the relationships between data quality management and other factors, including management performance.

A Study on the Supply Criteria for the Tax-exempted Vessel Fuel (어선 면세유류 공급기준량 산정에 관한 연구)

  • Kang Yeon-Sil;Kim Dae-hyon
    • The Journal of Fisheries Business Administration
    • /
    • v.36 no.3 s.69
    • /
    • pp.89-117
    • /
    • 2005
  • Currently, tax-exempted vessel fuel is provided for commercial fishing through the National Federation of Fisheries Cooperatives in order to increase the competitiveness of fishery production. The Federation must predict the amount of fuel consumed in fishing each year in order to request the fuel from the government. Unfortunately, there is no sophisticated model for predicting tax-exempted vessel fuel consumption; in 2003, actual consumption was only 25.1% of the Federation's estimate, causing inefficiency in petroleum management. Moreover, data such as annual average fishing hours, fishing days, and fishing behavior are needed to adopt new fishing policies. Up to now, these data have been obtained by field surveys. In most cases the samples are small, because collecting fishing data takes considerable time and money, so the confidence level of the data, which depends on sample size, is normally low. More efficient methods of collecting fishing data are therefore needed. In this research, we proposed a new method to predict tax-exempted vessel fuel consumption more exactly and compared its predictions with those of the current method; the proposed method was much more accurate. In addition, we proposed a method for estimating annual average fishing hours from tax-exempted vessel fuel consumption and the fuel consumption rate of vessel engines. Fishing data obtained in this way can be much more efficient and accurate, because they need not be estimated from survey samples.
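
The fishing-hours idea above reduces to a simple ratio: hours operated equal fuel drawn divided by the engine's consumption rate. The figures below are invented for illustration and are not from the paper:

```python
# Illustrative figures only: annual tax-exempted fuel drawn by one vessel
# and its engine's consumption rate while operating.
annual_fuel_l = 18000.0      # liters of tax-exempted fuel per year
engine_rate_l_per_h = 12.0   # liters consumed per operating hour

# Estimated annual fishing hours, from fuel records rather than surveys.
fishing_hours = annual_fuel_l / engine_rate_l_per_h
print(fishing_hours)  # 1500.0
```

Because fuel deliveries are recorded administratively for every vessel, an estimate of this form covers the whole fleet rather than a small survey sample, which is the source of the accuracy gain the paper claims.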

Python Package Production for Agricultural Researcher to Use Meteorological Data (농업연구자의 기상자료 활용을 위한 파이썬 패키지 제작)

  • Hyeon Ji Yang;Joo Hyun Park;Mun-Il Ahn;Min Gu Kang;Yong Kyu Han;Eun Woo Park
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.2
    • /
    • pp.99-107
    • /
    • 2023
  • Recently, abnormal weather events and crop damage have occurred frequently, likely due to climate change, and the importance of meteorological data in agricultural research is increasing. Researchers can download weather observation data from the websites of the KMA (Korea Meteorological Administration) and the RDA (Rural Development Administration). However, when a large amount of meteorological data is needed, multiple queries are required, and it is inefficient for each researcher to store and manage the needed data on an independent local computer just to avoid this work. In addition, even after all the data are downloaded, further work is required to find and open the relevant files. In this study, data collected by the KMA and RDA were uploaded to GitHub, a remote storage service, and a Python package was created that allows easy access to the weather data. Through this, we propose a way to increase the accessibility and usability of meteorological data for agricultural researchers, by letting anyone retrieve the data without an additional authentication process.
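
The GitHub-hosted approach described above can be sketched with the standard library alone: build a raw-content URL for a file in a repository, fetch it over HTTP, and parse the CSV. The repository name, file layout, and column names below are assumptions for illustration, not the package's actual interface:

```python
import csv
import io

def raw_url(user, repo, branch, path):
    """Build the raw.githubusercontent.com URL for a file in a GitHub repo."""
    return f"https://raw.githubusercontent.com/{user}/{repo}/{branch}/{path}"

def parse_weather_csv(text):
    """Parse CSV rows of (date, tavg_c, rain_mm) into plain dicts."""
    reader = csv.DictReader(io.StringIO(text))
    return [{"date": r["date"],
             "tavg_c": float(r["tavg_c"]),
             "rain_mm": float(r["rain_mm"])} for r in reader]

# In practice the text would come from
# urllib.request.urlopen(raw_url(...)).read().decode();
# a tiny inline sample is parsed here instead.
sample = "date,tavg_c,rain_mm\n2022-05-01,16.3,0.0\n2022-05-02,15.1,4.5\n"
records = parse_weather_csv(sample)

print(raw_url("example-user", "kma-weather", "main", "daily/seoul.csv"))
print(records[1])
```

Because raw GitHub URLs for public repositories require no authentication, this is exactly the property the abstract highlights: any researcher can pull the data programmatically without registering for an API key.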

A Review of Fin-and-Tube Heat Exchangers in Air-Conditioning Applications

  • Hu, Robert;Wan, Chi-Chuan
    • International Journal of Air-Conditioning and Refrigeration
    • /
    • v.15 no.3
    • /
    • pp.85-100
    • /
    • 2007
  • This study presents a short overview of research on fin-and-tube heat exchangers with and without the influence of dehumidification. The review covers data reduction methods, performance data, updated correlations, and the influence of hydrophilic coatings for various enhanced fin patterns. The emphasis is on experimental research, and performance under both sensible cooling and dehumidifying conditions is reported.

How to Develop a Scale Measuring an Affective Construct in Mathematics Education Research

  • Ryang, Dohyoung
    • Research in Mathematical Education
    • /
    • v.18 no.1
    • /
    • pp.75-87
    • /
    • 2014
  • Using a scale to measure a person's level of a construct is central to mathematics education research. This article explains a practical process through which a researcher can rapidly develop an instrument to measure such a construct. The process includes posing the research question, reviewing the literature, framing a background theory, treating the data, and reviewing the instrument. The statistical treatment of the data includes normality analysis, item-total correlation analysis, reliability analysis, and factor analysis. A hypothetical example is given for better understanding of the process.
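
One of the statistical treatments listed above, the reliability analysis, is conventionally reported as Cronbach's alpha. The sketch below computes it from scratch on a tiny fabricated dataset (5 respondents, 4 Likert items), which is an assumption for illustration rather than data from the article:

```python
# Fabricated responses: 5 respondents x 4 Likert-scale items.
items = [
    [4, 5, 3, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
]

def variance(xs):
    """Sample variance (n-1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(items[0])                       # number of items
item_cols = list(zip(*items))           # responses grouped by item
totals = [sum(row) for row in items]    # each respondent's total score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = k / (k - 1) * (1 - sum(variance(c) for c in item_cols) / variance(totals))
print(round(alpha, 3))
```

Values of alpha near 1 indicate that the items covary strongly and plausibly measure one construct; in scale development, items whose removal raises alpha (or whose item-total correlation is low) are candidates for deletion before the factor analysis step.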