• Title/Summary/Keyword: FeatureSelection


The Science-Related Attitudes from Adults' Experiences during Science Cultural Activities: Focusing on the Case of Science Fiction Discussions (성인들의 과학문화 활동 경험에서 나타난 과학 관련 태도 -과학소설 독서토론 활동 사례를 중심으로-)

  • Eunji Kang;Chaeyeon Shin;Jinwoong Song
    • Journal of The Korean Association For Science Education
    • /
    • v.43 no.2
    • /
    • pp.139-150
    • /
    • 2023
  • This study started from an awareness of the need to explore various aspects of science education and was conducted in response to the need for practical research on science cultural activities targeting adults. Accordingly, adults' book discussions of science fiction were selected as the research case, and science-related attitudes in science cultural activities were explored. The study had four participants, all of whom had experience in a book club and none of whom had majored in or worked in a science discipline. Three science fiction novels were selected after specific selection criteria were established in discussion with the participants. Over four months, a total of three unstructured book discussions of science fiction, a post-interview after each discussion, and in-depth individual interviews after the end of the entire activity were conducted. Various data, including the recorded and transcribed discussion discourse, the post- and in-depth individual interviews, researchers' observation records, and participants' book journals, were collected and analyzed using the constant comparative method. The results show that, just as scientific thinking is illustrated in science fiction, the participants demonstrated scientific attitudes during their discussions. In addition, the textual feature of science fiction (storytelling) was found to lessen cognitive overload and the burden of understanding science by providing scientific knowledge with context. Finally, the participants demonstrated a shift in attitude toward science, coming to value science cultural activities in themselves rather than simply viewing science as a subject to be understood and learned. Based on these results, conclusions and implications are presented for fostering adults' positive attitudes toward science beyond school education.

Comparison of NDVI in Rice Paddy according to the Resolution of Optical Satellite Images (광학위성영상의 해상도에 따른 논지역의 정규식생지수 비교)

  • Jeong Eun;Sun-Hwa Kim;Jee-Eun Min
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1321-1330
    • /
    • 2023
  • The Normalized Difference Vegetation Index (NDVI) is the most widely used remote sensing product in agriculture and is currently provided by most optical satellites. In particular, as high-resolution optical satellite images have become available, selecting the optimal optical satellite imagery for a given agricultural application has become a very important issue. In this study, we aim to identify the optimal optical satellite imagery for monitoring NDVI in Korean rice paddies and to derive the resolution requirements needed for this purpose. To this end, we compared and analyzed the spatial distribution and time series patterns of NDVI over the Dangjin rice paddy in Korea from 2019 to 2022 using NDVI images from the widely used MOD13, Landsat-8, Sentinel-2A/B, and PlanetScope products. These products are provided at spatial resolutions ranging from 3 m to 250 m and at various revisit intervals, and the spectral band ranges used to calculate NDVI also differ slightly. The analysis showed that Landsat-8 had the lowest NDVI values and very low spatial variation. In comparison, the MOD13 NDVI image showed spatial distribution and time series patterns similar to the PlanetScope data but was affected by the area surrounding the rice paddy because of its low spatial resolution. Sentinel-2A/B showed relatively low NDVI values due to its wide near-infrared band, a feature especially noticeable in the early stages of growth. PlanetScope's NDVI provides detailed spatial variation and stable time series patterns, but considering its high purchase price, it is more useful for small field areas than for spatially uniform rice paddies. Accordingly, for rice paddy areas, 250 m MOD13 NDVI or 10 m Sentinel-2A/B imagery is considered the most efficient, while high-resolution satellite images can be used to estimate detailed physical quantities of individual crops.
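As background for the comparison above, NDVI is computed per pixel from red and near-infrared (NIR) reflectance as (NIR - Red) / (NIR + Red). The following is a minimal sketch in Python/NumPy, assuming the two bands have already been read and co-registered to a common grid; the array values are illustrative and are not from the study.

```python
# Minimal NDVI sketch: bands are assumed to be already read (e.g. with rasterio)
# and resampled to the same grid; values below are toy reflectances.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Compute NDVI = (NIR - Red) / (NIR + Red), masking zero denominators."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    out = np.full_like(denom, np.nan)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Vegetated pixels (high NIR, low red) give NDVI near 0.8; bare soil is near 0.
red = np.array([[0.05, 0.10], [0.30, 0.25]])
nir = np.array([[0.45, 0.40], [0.32, 0.28]])
print(ndvi(red, nir))
```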

The completed SDSS-IV extended Baryon Oscillation Spectroscopic Survey: measurement of the BAO and growth rate of structure of the emission line galaxy sample from the anisotropic power spectrum between redshift 0.6 and 1.1

  • Arnaud de Mattia;Vanina Ruhlmann-Kleider;Anand Raichoor;Ashley J Ross;Amelie Tamone;Cheng Zhao;Shadab Alam;Santiago Avila;Etienne Burtin;Julian Bautista;Florian Beutler;Jonathan Brinkmann;Joel R Brownstein;Michael J Chapman;Chia-Hsun Chuang;Johan Comparat;Helion du Mas des Bourboux;Kyle S Dawson;Axel de la Macorra;Hector Gil-Marin;Violeta Gonzalez-Perez;Claudio Gorgoni;Jiamin Hou;Hui Kong;Sicheng Lin;Seshadri Nadathur;Jeffrey A Newman;Eva-Maria Mueller;Will J Percival;Mehdi Rezaie;Graziano Rossi;Donald P Schneider;Prabhakar Tiwari;M Vivek;Yuting Wang;Gong-Bo Zhao
    • Monthly Notices of the Royal Astronomical Society
    • /
    • v.501 no.4
    • /
    • pp.5616-5645
    • /
    • 2021
  • We analyse the large-scale clustering in Fourier space of emission line galaxies (ELG) from the Data Release 16 of the Sloan Digital Sky Survey IV extended Baryon Oscillation Spectroscopic Survey. The ELG sample contains 173,736 galaxies covering 1,170 deg² in the redshift range 0.6 < z < 1.1. At the effective redshift z_eff = 0.845 we measure D_V(z_eff)/r_drag = 18.33 +0.57/-0.62, with D_V the volume-averaged distance and r_drag the comoving sound horizon at the drag epoch. In combination with the RSD measurement, at z_eff = 0.85 we find fσ8(z_eff) = 0.289 +0.085/-0.096, with f the growth rate of structure and σ8 the normalization of the linear power spectrum, D_H(z_eff)/r_drag = 20.0 +2.4/-2.2 and D_M(z_eff)/r_drag = 19.17 ± 0.99, with D_H and D_M the Hubble and comoving angular distances, respectively. These results are in agreement with those obtained in configuration space, thus allowing a consensus measurement of fσ8(z_eff) = 0.315 ± 0.095, D_H(z_eff)/r_drag = 19.6 +2.2/-2.1 and D_M(z_eff)/r_drag = 19.5 ± 1.0. This measurement is consistent with a flat ΛCDM model with Planck parameters.
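For reference, the distance quantities quoted above follow the standard definitions used in BAO analyses. The relations below are textbook cosmology for a flat universe and are not quoted from the paper itself.

```latex
% Standard definitions (not taken verbatim from the paper): D_H is the Hubble
% distance, D_M the comoving angular (transverse comoving) distance, and D_V
% the volume-averaged distance combining the transverse and line-of-sight scales.
\begin{align}
  D_H(z) &= \frac{c}{H(z)}, \\
  D_M(z) &= \int_0^z \frac{c\,\mathrm{d}z'}{H(z')} \quad \text{(flat universe)}, \\
  D_V(z) &= \left[\, z\, D_M^2(z)\, D_H(z) \right]^{1/3}.
\end{align}
```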

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults affect not only stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies, but also have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models. As a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even afterwards, analyses of past corporate defaults concentrated on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it focused only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse, as in the 'Lehman Brothers case' of the global financial crisis. The key variables behind corporate defaults vary over time; comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed. Grice's (2001) study likewise confirmed the changing importance of predictive variables through Zmijewski's (1984) and Ohlson's (1980) models. However, most past studies use static models that do not consider changes over time. Therefore, to construct consistent prediction models, the time-dependent bias must be compensated for by a time series analysis algorithm that reflects dynamic change. Taking the global financial crisis, which had a significant impact on Korea, as the reference event, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that remains consistent over time, we first train the deep learning time series models using data from before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithms is then conducted on validation data covering the financial crisis period (2007-2008). As a result, we obtain models whose patterns are similar to the training results and which show excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, the models trained over these nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that deep learning time series models built on the three resulting variable sets provide robust corporate default prediction. The definition of bankruptcy is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data suffer from three limitations: nonlinear variables, multicollinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model resolves the multicollinearity problem, and the deep learning time series algorithm, combined with a data generation method for the variables, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, ultimately, toward interconnected AI applications. Although research on corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm builds default prediction models much faster than regression analysis and achieves better predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains scarce. This is an initial study applying deep learning time series algorithms to corporate default analysis, and we hope it will serve as comparative reference material for non-specialists beginning research that combines financial data with deep learning time series algorithms.
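To make the pipeline described in this abstract concrete, the following is a minimal sketch, not the authors' code: Lasso regression selects a subset of financial-ratio variables, and an LSTM then classifies yearly sequences of the selected ratios as default or non-default. The data shapes, column semantics, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of Lasso-based variable selection followed by an LSTM
# classifier on yearly firm-level sequences. All data here are synthetic toys;
# real inputs would be annual financial ratios for 2000-2009, split by year
# into training (7 yr), validation (2 yr), and test (1 yr) sets as in the paper.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

rng = np.random.default_rng(0)

# Toy panel: 500 firms x 7 years x 20 financial ratios, binary default label.
n_firms, n_years, n_ratios = 500, 7, 20
X = rng.normal(size=(n_firms, n_years, n_ratios))
# Make the label depend on a couple of ratios so Lasso has signal to find.
signal = X[:, -1, 0] - 0.5 * X[:, -1, 3]
y = (signal + rng.normal(scale=0.5, size=n_firms) > 0).astype(int)

# 1) Variable selection: Lasso on the last observed year keeps the ratios with
#    non-zero coefficients (MDA and logit selection would be used analogously).
X_last = StandardScaler().fit_transform(X[:, -1, :])
lasso = LassoCV(cv=5).fit(X_last, y)
selected = np.flatnonzero(lasso.coef_ != 0)
if selected.size == 0:           # fallback for degenerate toy draws
    selected = np.arange(n_ratios)
print(f"selected {selected.size} of {n_ratios} ratios")

# 2) Time series classifier: an LSTM over the yearly sequences of selected ratios.
X_seq = X[:, :, selected]
model = keras.Sequential([
    keras.layers.Input(shape=(n_years, selected.size)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```

In the paper's setup, the validation years (2007-2008) would drive the hyperparameter tuning before retraining on 2000-2008 and evaluating on 2009; here `validation_split` stands in for that chronological split purely for illustration.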