• Title/Summary/Keyword: Parameter Changes


Differences in Seed Vigor, Early Growth, and Secondary Compounds in Hulled and Dehulled Barley, Malting Barley, and Naked Oat Collected from Various Areas (맥종별 주산지와 재배한계지 수집종자의 활력, 초기생장 및 이차화합물 차이)

  • Park, Hyung Hwa;Kuk, Yong In
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.66 no.2
    • /
    • pp.171-181
    • /
    • 2021
  • The purposes of this study were to determine how changes in temperature affect germination rates and growth of hulled and dehulled barley, malting barley, and naked oat plants, and to measure chlorophyll content, photosynthetic efficiency, and secondary compounds (total phenol, total flavonoid, and 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity) in plants grown at 13℃ or 25℃. Various types of barley seeds were collected from areas with ideal conditions for barley cultivation, hereinafter referred to as IA, and also from areas where barley cultivation is more difficult due to lower temperatures, hereinafter referred to as LTA. Seeds were tested for seed vigor. While there were significant differences in the electrical conductivity values between seeds collected from certain specific areas, no significant differences were evident between IA and LTA seeds, regardless of the type of barley seed. When plants were grown at 25℃, there were no significant differences in germination rates, plant height, root length, and shoot fresh weight between plants originating from IA and LTA. However, there were differences in the measured parameters of some specific seeds. Similarly, under the low temperature condition of 13℃, no differences in the emergence rate, plant height, and shoot fresh weight were evident between plants originating from IA or LTA, regardless of the type of barley. However, there were differences between some specific seeds. One parameter that did vary significantly was the emergence date: hulled barley and malting barley emerged 5 days after sowing, whereas naked oats emerged 7 days after sowing. There were no differences in chlorophyll content and photosynthetic efficiency, regardless of the type of barley. There were no significant differences in total phenol content, total flavonoid content, and DPPH radical scavenging activity between plants originating from IA and LTA, regardless of the type of barley. 
However, there were differences between some specific seeds. In particular, for malting barley the total flavonoid content differed in the order of Gangjin > Changwon > Haenam = Jeonju > Naju. The results indicate that crop growth, yield and content of secondary compounds in various types of barley may be affected by climate change.
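The DPPH assay reported above is conventionally expressed as the percent drop in absorbance relative to a control. A minimal sketch of that standard calculation; the absorbance readings below are hypothetical, not values from the study:

```python
def dpph_scavenging(abs_control, abs_sample):
    """DPPH radical scavenging activity (%): the fractional drop in
    absorbance (typically read at 517 nm) relative to the control."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical absorbance readings, for illustration only.
print(round(dpph_scavenging(0.80, 0.32), 1))  # 60.0
```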

Comparative evaluation of dose according to changes in rectal gas volume during radiation therapy for cervical cancer : Phantom Study (자궁경부암 방사선치료 시 직장가스 용적 변화에 따른 선량 비교 평가 - Phantom Study)

  • Choi, So Young;Kim, Tae Won;Kim, Min Su;Song, Heung Kwon;Yoon, In Ha;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.33
    • /
    • pp.89-97
    • /
    • 2021
  • Purpose: The purpose of this study is to compare and evaluate the dose change according to gas volume variations in the rectum, which were not included in the treatment plan, during radiation therapy for cervical cancer. Materials and methods: Static Intensity Modulated Radiation Therapy (S-IMRT) using 9 fields and Volumetric Modulated Arc Therapy (VMAT) using 2 full arcs were established with a treatment planning system on computed tomography images of a human phantom. Random gas parameters were included in the Planning Target Volume (PTV) with a maximum change of 2.0 cm in increments of 0.5 cm. Then, the Conformity Index (CI), Homogeneity Index (HI), and PTV Dmax for the target volume were calculated, and the minimum dose (Dmin), mean dose (Dmean), and maximum dose (Dmax) were calculated and compared for the organs at risk (OAR). For statistical analysis, a t-test was performed to obtain a p-value, with the significance level set to 0.05. Result: The HI coefficients of determination (R²) of S-IMRT and VMAT were 0.9423 and 0.8223, respectively, indicating a relatively clear correlation, and PTV Dmax was found to increase by up to 2.8% as the volume of a given gas parameter increased. In the OAR evaluation, the dose in the bladder did not change with gas volume, while a significant difference of more than 700 cGy in Dmean was confirmed in the rectum for both treatment plans at gas volumes of 1.0 cm or more. For all values except the Dmean of the bladder, the p-value was less than 0.05, confirming a statistically significant difference. Conclusion: In the case of gas generation not considered in the reference treatment plan, as the amount of gas increased, the dose difference at the PTV and the dose delivered to the rectum increased. Therefore, during radiation therapy, efforts should be made to minimize the dose delivery error caused by large gas volumes in the rectum. 
Further studies will be necessary to evaluate dose transmission by not only varying the gas volume but also where the gas was located in the treatment field.
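The plan metrics above can be sketched numerically. The abstract does not state which HI definition was used, so this assumes the common ICRU-style form HI = (D2% - D98%) / D50%; the dose values below are hypothetical, for illustration only:

```python
def dose_percentile(doses, p):
    """Dp%: minimum dose received by the hottest p% of the volume,
    i.e. the (100 - p)th percentile of the dose samples."""
    s = sorted(doses)
    idx = max(0, min(len(s) - 1, round((100 - p) / 100 * (len(s) - 1))))
    return s[idx]

def homogeneity_index(doses):
    """HI = (D2% - D98%) / D50%; 0 means a perfectly uniform dose."""
    return (dose_percentile(doses, 2) - dose_percentile(doses, 98)) / dose_percentile(doses, 50)

def percent_change(reference, perturbed):
    """Percent change of a dose metric (e.g. PTV Dmax) versus the reference plan."""
    return (perturbed - reference) / reference * 100.0

# A perfectly uniform PTV dose gives HI = 0; a Dmax rise from 100% to
# 102.8% of the reference corresponds to the reported 2.8% increase.
print(homogeneity_index([50.0] * 100), round(percent_change(100.0, 102.8), 1))
```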

Surgical Decision for Elderly Spine Deformity Patient (노인 척추 변형 환자의 수술적 결정)

  • Kim, Yong-Chan;Juh, Hyung-Suk;Lee, Keunho
    • Journal of the Korean Orthopaedic Association
    • /
    • v.54 no.1
    • /
    • pp.1-8
    • /
    • 2019
  • Globally, the elderly population is increasing rapidly, which means that the number of deformity correction operations for elderly spine deformity patients has increased. On the other hand, for aged patients undergoing deformity correction, the preoperative considerations needed to reduce complications and predict a good clinical outcome are not completely understood. First, medical comorbidity needs to be evaluated preoperatively with the Cumulative Illness Rating Scale for Geriatrics or the Charlson Comorbidity Index. Medical comorbidities are associated with the postoperative complication rate, and managing them preoperatively decreases complications after a spine deformity correction operation. Second, bone densitometry needs to be checked for osteoporosis. Many surgical techniques have been introduced to prevent the complications associated with posterior instrumentation in osteoporosis patients. The preoperative use of an osteogenesis-inducing agent, teriparatide, was also reported to reduce the complication rate. Third, total body sagittal alignment needs to be considered. Many elderly spine deformity patients present with degenerative changes and deformities of the lower extremities. In addition, a compensation mechanism induces a deformed posture of the lower extremities. Recently, some authors introduced a parameter incorporating total body sagittal alignment, which can predict the clinical outcome better than previous parameters limited to the spine or pelvis. As a result, total body sagittal alignment needs to be considered for elderly spine deformity patients after a deformity correction operation. In conclusion, for elderly spine deformity patients, medical comorbidities and osteoporosis need to be evaluated and managed preoperatively to reduce the complication rate. 
In addition, total body sagittal alignment needs to be considered, which is associated with better clinical outcomes than the previous parameters limited to the spine or pelvis.

Analysis of Micro-Sedimentary Structure Characteristics Using Ultra-High Resolution UAV Imagery: Hwangdo Tidal Flat, South Korea (초고해상도 무인항공기 영상을 이용한 한국 황도 갯벌의 미세 퇴적 구조 특성 분석)

  • Minju Kim;Won-Kyung Baek;Hoi Soo Jung;Joo-Hyung Ryu
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.3
    • /
    • pp.295-305
    • /
    • 2024
  • This study aims to analyze the micro-sedimentary structures of the Hwangdo tidal flats using ultra-high resolution unmanned aerial vehicle (UAV) data. Tidal flats, located in the transitional area between land and sea, constantly change due to tidal activity and provide a unique environment important for understanding sedimentary processes and environmental conditions. Traditional field observation methods are limited in spatial and temporal coverage, and existing satellite imagery does not provide sufficient resolution to study micro-sedimentary structures. To overcome these limitations, high-resolution images of the Hwangdo tidal flats in Chungcheongnam-do were acquired using UAVs. This area has experienced significant changes in its sedimentary environment due to coastal development projects such as sea wall construction. From May 17 to 18, 2022, sediment samples were collected from 91 points during field surveys, and 25 in-situ points were intensively analyzed. UAV data with a spatial resolution of approximately 0.9 mm allowed the identification and extraction of parameters related to micro-sedimentary structures. For mud cracks, the length of the major axis of the polygons was extracted, and for ripple marks, the wavelength and ripple symmetry index were extracted. The results showed that in areas with mud content above 80%, mud cracks formed with an average major-axis length of 37.3 cm. In regions with sand content above 60%, ripples formed with an average wavelength of 8 cm and a ripple symmetry index of 2.0. This study demonstrated that the micro-sedimentary structures of tidal flats can be effectively analyzed using ultra-high resolution UAV data without field surveys. This highlights the potential of UAV technology as an important tool in environmental monitoring and coastal management and shows its usefulness in the study of sedimentary structures. 
In addition, the results of this study are expected to serve as baseline data for more accurate sedimentary facies classification.
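The ripple parameters extracted above can be computed from positions measured on the UAV imagery. A minimal sketch, assuming the common definition of the ripple symmetry index as the horizontal stoss-side length over the lee-side length; the crest positions below are hypothetical, chosen only to roughly match the reported 8 cm wavelength:

```python
def ripple_wavelength(crest_positions):
    """Mean spacing between successive ripple crests (same units as input)."""
    gaps = [b - a for a, b in zip(crest_positions, crest_positions[1:])]
    return sum(gaps) / len(gaps)

def ripple_symmetry_index(stoss_length, lee_length):
    """RSI = horizontal length of the gentle (stoss) side over the steep
    (lee) side; near 1 for symmetric wave ripples, larger for current ripples."""
    return stoss_length / lee_length

# Hypothetical crest positions along a transect, in cm.
print(round(ripple_wavelength([0.0, 8.1, 15.9, 24.0]), 3))  # 8.0
```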

Microbial Influence on Soil Properties and Pollutant Reduction in a Horizontal Subsurface Flow Constructed Wetland Treating Urban Runoff (도시 강우유출수 처리 인공습지의 토양특성 및 오염물질 저감에 따른 미생물 영향 평가)

  • Chiny. C. Vispo;Miguel Enrico L. Robles;Yugyeong Oh;Haque Md Tashdedul;Lee Hyung Kim
    • Journal of Wetlands Research
    • /
    • v.26 no.2
    • /
    • pp.168-181
    • /
    • 2024
  • Constructed wetlands (CWs) deliver a range of ecosystem services, including the removal of contaminants, sequestration and storage of carbon, and enhancement of biodiversity. These services are facilitated through hydrological and ecological processes such as infiltration, adsorption, water retention, and evapotranspiration by plants and microorganisms. This study investigated the correlations between microbial populations, soil physicochemical properties, and treatment efficiency in a horizontal subsurface flow constructed wetland (HSSF CW) treating runoff from roads and parking lots. The methods employed included storm event monitoring, water quality analysis, soil sampling, soil quality parameter analysis, and microbial analysis. The facility achieved its highest pollutant removal efficiencies during the warm season (>15℃), with rates ranging from 33% to 74% for TSS, COD, TN, TP, and specific heavy metals including Fe, Zn, and Cd. Meanwhile, the highest removal efficiency was 35% for TOC during the cold season (≤15℃). These high removal rates can be attributed to sedimentation, adsorption, precipitation, plant uptake, and microbial transformations within the CW. Soil analysis revealed that the soil from HSSF CW had a soil organic carbon content 3.3 times higher than that of soil collected from a nearby landscape. Stoichiometric ratios of carbon (C), nitrogen (N), and phosphorus (P) in the inflow and outflow were recorded as C:N:P of 120:1.5:1 and 135.2:0.4:1, respectively, indicating an extremely low proportion of N and P compared to C, which may challenge microbial remediation efficiency. Additionally, microbial analyses indicated that the warm season was more conducive to microorganism growth, with higher abundance, richness, diversity, homogeneity, and evenness of the microbial community, as manifested in the biodiversity indices, compared to the cold season. 
Pollutants in stormwater runoff entering the HSSF CW fostered microbial growth, particularly for dominant phyla such as Proteobacteria, Actinobacteria, Acidobacteria, and Bacteroidetes, which have shown moderate to strong correlations with specific soil properties and changes in influent-effluent concentrations of water quality parameters.
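The stoichiometric ratios above are concentrations normalized to phosphorus. A minimal sketch; the inflow concentrations below are hypothetical, chosen only to reproduce the reported C:N:P = 120:1.5:1 form:

```python
def cnp_ratio(c, n, p):
    """Normalize C, N, P concentrations (same units, e.g. mg/L)
    to the conventional C:N:1 form relative to phosphorus."""
    return (round(c / p, 1), round(n / p, 1), 1)

# Hypothetical inflow concentrations, for illustration only.
print(cnp_ratio(240.0, 3.0, 2.0))  # (120.0, 1.5, 1)
```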

Changes in The Sensitive Chemical Parameters of the Seawater in EEZ, Yellow Sea during and after the Sand Mining Operation (서해 EEZ 해역에서 바다모래 채굴에 민감한 해양수질인자들)

  • Yang, Jae-Sam;Jeong, Yong-Hoon;Ji, Kwang-Hee
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.13 no.1
    • /
    • pp.1-14
    • /
    • 2008
  • Eight comprehensive oceanographic cruises over a 30 × 30 km square area were made to investigate the short- and long-term impacts on water quality of the sand mining operations in the Exclusive Economic Zone (EEZ) in the central Yellow Sea from 2004 to 2007. The area was categorized into a 'Sand Mining Zone', a 'Potentially Affected Zone', and a 'Reference Zone'. The investigation covered suspended solids, nutrients (nitrate, nitrite, ammonium, phosphate), and chlorophyll-a in seawater, along with several parameters such as water temperature, salinity, pH, and ORP. Additionally, several intensive water collections were made to trace the suspended solids and other parameters along the turbid water generated by sand mining activities. The comprehensive investigation showed that suspended solids, nitrate, chlorophyll-a, and ORP are the seawater parameters that respond sensitively to sand mining operations. The intensive collection of seawater near the sand mining operation revealed that each parameter shows a different distribution pattern: suspended solids showed an oval-shaped distribution, 8 km wide in the north-south direction and 5 km wide in the east-west direction, at the surface and bottom layers. On the other hand, phosphate showed a distribution too narrow to be traceable. Ammonium also showed a limited distribution, but its boundary was connected to high nitrate and chlorophyll-a concentrations with high N/P ratios. From the last 4 years of comprehensive and intensive investigations, we found that suspended solids, ammonium, nitrate, chlorophyll-a, and ORP are sensitive water quality parameters for tracing sand mining operations in seawater. In particular, suspended solids and ORP would be useful tracers for monitoring the water quality of remote areas like the EEZ in the Yellow Sea.

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required. The tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires human effort. As high-speed graphics workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enabled faster picking of the velocity nodes using a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, the velocity analysis must be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must usually be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. 
Most parameter changes produce the final stack with a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed. The index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted wave. However, it has two improvements: no interpolation error and very fast computation. With this technique, the mute times can be easily designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. The program references Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs in an X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program has been discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
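The NMO correction step handled by xva follows the standard hyperbolic moveout relation t(x) = sqrt(t0² + x²/v²). The sketch below is a generic nearest-sample implementation under that equation, not the program's actual code; the function names and sample interval are assumptions:

```python
import math

def nmo_time(t0, offset, velocity):
    """Hyperbolic moveout: two-way travel time at a given offset for
    zero-offset time t0 and stacking velocity v: t(x) = sqrt(t0^2 + x^2/v^2)."""
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

def nmo_correct(trace, dt, offset, velocity):
    """Flatten one trace by nearest-sample NMO correction.
    trace: amplitudes sampled every dt seconds; returns the corrected trace."""
    n = len(trace)
    out = [0.0] * n
    for i in range(n):
        t0 = i * dt
        # Pull the amplitude from the moveout time back to t0.
        j = round(nmo_time(t0, offset, velocity) / dt)
        if j < n:
            out[i] = trace[j]
    return out
```

A spike recorded at the moveout time of a reflector lands at the reflector's zero-offset time after correction, which is what flattens events across the super gather before stacking.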


Comparison between phosphorus absorption coefficient and Langmuir adsorption maximum (전토양(田土壤) 인산(燐酸)의 흡수계수(吸收係數)와 Langmuir 최대흡착량(最大吸着量)과의 비교연구(比較硏究))

  • Ryu, In Soo
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.8 no.1
    • /
    • pp.1-17
    • /
    • 1975
  • Laboratory experiments on phosphorus adsorption by soil were conducted to evaluate the parameters for determining the phosphorus adsorption capacity of soil, which serves as a basis for establishing the amount of phosphorus required to improve newly reclaimed soil and volcanic ash soil. The calculated Langmuir adsorption maxima varied from 6.2-32.9, 74.7-90.4, and 720-915 mg P/100 g soil for cultivated soils, non-cultivated soils, and volcanic ash soils, respectively. The phosphorus absorption coefficient ranged from 116-179, 161-259, and 1,098-1,205 mg P/100 g soil for cultivated soils, non-cultivated soils, and volcanic ash soils, respectively. The ratio of the phosphorus absorption coefficient to the Langmuir adsorption maximum was low in soils of high phosphorus adsorption capacity (1.3-1.5) and high in soils of low phosphorus adsorption capacity (2.2-18.7). Changes in the amount of phosphorus adsorption induced by liming and pre-addition of phosphorus were hardly detected by the phosphorus absorption coefficient, which is measured using a test solution with a relatively high phosphorus concentration. The Langmuir adsorption maximum was a more sensitive index of the phosphorus adsorption capacity. The Langmuir adsorption maxima of the non-cultivated soils, which were treated with an amount of calcium hydroxide equivalent to the exchangeable Al and incubated (25-30°C) for 40 days at field capacity, were lower than those of the original soils. The change in the adsorption maximum on incubation following the liming of soils was insignificant for other soils. The secondary adsorption maximum of soils, which received phosphorus equivalent to the Langmuir adsorption maximum of the limed soils incubated (25-30°C) for 50 days at field capacity, was 74.5, 5.6, and 23.8% of the primary adsorption maximum for volcanic ash soils, non-cultivated soils, and cultivated soils, respectively. 
The amount of phosphorus adsorbed by soils increased quadratically with the concentration of the phosphorus solution added to the soils. The amount of phosphorus adsorbed by 5 g soil samples from 100 ml of 100 and 1,000 mg P/l solution for the mineral soils and volcanic ash soils, respectively, was found to be close to the Langmuir adsorption maximum. The amount of phosphorus adsorbed at these concentrations is defined as the saturation adsorption maximum and proposed as a new parameter for the phosphorus adsorption capacity of the soil. Evaluation of the phosphorus adsorption capacity by the saturation adsorption maximum is regarded as a more practical method in that it obviates the need for the various concentrations used in the determination of the Langmuir adsorption maximum.
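The Langmuir adsorption maximum discussed above comes from fitting the isotherm q = q_max·K·C / (1 + K·C). A minimal sketch using the common linearization C/q = C/q_max + 1/(K·q_max); the concentrations and constants below are hypothetical, not the study's data:

```python
def langmuir_q(c, q_max, k):
    """Langmuir isotherm: adsorbed amount q at equilibrium concentration c."""
    return q_max * k * c / (1.0 + k * c)

def fit_langmuir(conc, q):
    """Estimate (q_max, K) from the linearized form C/q = C/q_max + 1/(K*q_max)
    by ordinary least squares on the points (C, C/q)."""
    x = conc
    y = [ci / qi for ci, qi in zip(conc, q)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    q_max = 1.0 / slope          # slope of the line is 1/q_max
    k = slope / intercept        # intercept is 1/(K*q_max)
    return q_max, k
```

Fitting synthetic data generated with known constants recovers them, which is a quick sanity check on the linearization.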


Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been getting attention recently. The deep learning technique that was applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). CNN is characterized in that the input image is divided into small sections to recognize partial features, which are then combined to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing. The use of deep learning techniques for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraud detection, bankruptcy prediction, and so on. It is therefore a very meaningful experiment to assess the possibility of solving business problems using deep learning technologies, based on the case of online shopping companies, which have big data, where customer behavior is relatively easy to identify, and which have high utilization value. In particular, the competitive environment of online shopping companies is changing rapidly and becoming more intense, so analysis of customer behavior for maximizing profit is becoming more and more important for them. In this study, we propose a 'CNN model of Heterogeneous Information Integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises. 
The proposed model combines structured and unstructured information and learns through a convolutional neural network built on a multi-layer perceptron structure. To optimize its performance, we design three architectural components, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', evaluate the performance of each, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC data from a specific online shopping company in Korea. The data extraction criteria cover 47,947 customers who registered at least one VOC in January 2011 (1 month). The customer profiles of these customers, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted over one month are used. The experiment is divided into two stages. In the first stage, we evaluate the three architectures that affect the performance of the proposed model and select optimal parameters; in the second, we evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines both structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). Therefore, it is significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to solve business problems as well as image recognition and natural language processing problems. 
The experiments confirm that CNN is effective at understanding and interpreting the meaning of context in text VOC data. It is also significant that this empirical study, based on actual data from an e-commerce company, could extract very meaningful information for predicting customer behavior from VOC data written in text form directly by customers. Finally, through various experiments, the proposed model provides useful information for future research related to parameter selection and model performance.
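The core CNN idea the abstract describes, recognizing partial features in small sections and then combining them, can be illustrated with a one-dimensional convolution over an embedded text sequence followed by max pooling. This is a generic sketch, not the authors' architecture; the sequence and kernel values are hypothetical:

```python
def conv1d(seq, kernel):
    """Valid 1-D convolution (cross-correlation): slide the kernel over the
    sequence and take dot products, producing a local-feature map."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def max_pool(feature_map):
    """Global max pooling: keep the strongest response, discarding position."""
    return max(feature_map)

# Hypothetical embedded text sequence; this kernel fires on a rising edge,
# i.e. one kind of "partial feature" in the input.
seq = [0.0, 0.1, 0.0, 0.9, 1.0, 0.2]
feat = conv1d(seq, [-1.0, 1.0])
print(max_pool(feat))  # 0.9
```

In a real CNN many such kernels are learned, and their pooled responses are combined by fully connected layers into the final binary prediction.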

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Beyond stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it only focused on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to represent diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, where everything collapses in a single moment. The key variables used in corporate default prediction vary over time: comparing Deakin's (1972) study with the analyses of Beaver (1967, 1968) and Altman (1968) confirms that the major factors affecting corporate failure have changed. In Grice's (2001) study, the shifting importance of predictive variables was also found through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider the changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series algorithm reflecting dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. 
In order to construct a consistent bankruptcy model across this period of change, we first train a time series deep learning model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the learning data and excellent prediction power. After that, each bankruptcy prediction model is retrained on the combined learning and validation data (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis, the logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. In the case of corporate data, there are limitations of nonlinear variables, multi-collinearity among variables, and lack of data. 
While the logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and finally toward intertwined future AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and more effective in prediction power. Through the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists starting studies that combine financial data with deep learning time series algorithms.
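The LSTM recurrence that lets such a model carry information across years can be sketched at scalar scale. This is the standard gating arithmetic only, not the authors' model; all weights and the toy inputs below are hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step. w maps each gate name to (w_x, w_h, b):
    f (forget), i (input), o (output), g (candidate cell value)."""
    f = sigmoid(w['f'][0] * x + w['f'][1] * h_prev + w['f'][2])
    i = sigmoid(w['i'][0] * x + w['i'][1] * h_prev + w['i'][2])
    o = sigmoid(w['o'][0] * x + w['o'][1] * h_prev + w['o'][2])
    g = math.tanh(w['g'][0] * x + w['g'][1] * h_prev + w['g'][2])
    c = f * c_prev + i * g   # cell state: gated memory of past years
    h = o * math.tanh(c)     # hidden state: the exposed representation
    return h, c

# Run a toy sequence of yearly financial-ratio inputs through the cell;
# the final h would feed a default/non-default classifier.
w = {k: (0.5, 0.5, 0.0) for k in 'fiog'}
h, c = 0.0, 0.0
for x in [0.2, -0.1, 0.4]:
    h, c = lstm_step(x, h, c, w)
```

The forget gate is what makes the model suited to time-dependent data: a gate driven low lets the cell discard stale pre-crisis information.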