• Title/Summary/Keyword: 이동변수 (moving variable)

MICROLEAKAGE OF COMPOSITE RESIN RESTORATION ACCORDING TO THE NUMBER OF THERMOCYCLING (열순환 횟수에 따른 복합레진의 미세누출)

  • Kim, Chang-Youn;Shin, Dong-Hoon
    • Restorative Dentistry and Endodontics
    • /
    • v.32 no.4
    • /
    • pp.377-384
    • /
    • 2007
  • Current tooth bonding systems can be categorized into total-etching (TE) and self-etching (SE) systems according to the way they treat the smear layer. The purposes of this study were to compare the effectiveness of these two systems and to evaluate the effect of the number of thermocycles on the microleakage of class V composite resin restorations. A total of forty class V cavities were prepared on single-rooted bovine teeth and randomly divided into four experimental groups by bonding system and number of thermocycles. Half of the cavities were filled with Z250 following the use of the TE system Single Bond, and the other twenty cavities were filled with Metafil following the use of AQ Bond, an SE system. All composite restorations were cured with a light curing unit (XL2500, 3M ESPE, St. Paul, MN, USA) for 40 seconds at a light intensity of $600mW/cm^2$. Teeth were stored in distilled water for one day at room temperature and then finished and polished with the Sof-Lex system. Half of the teeth were thermocycled 500 times and the other half 5,000 times between $5^{\circ}C$ and $55^{\circ}C$, with a 30-second dwell at each temperature. Teeth were isolated with two layers of nail varnish except for the restoration surface and a 1 mm surrounding margin. Electrical conductivity (${\mu}A$) was recorded in distilled water by an electrochemical method. Microleakage scores were compared and analyzed using two-way ANOVA at the 95% confidence level. The following results were obtained: there was no interaction between bonding system and number of thermocycles (p = 0.485), and microleakage was not affected by the number of thermocycles (p = 0.814). However, the SE restorations (Metafil with AQ Bond) showed less microleakage than the TE restorations (Z250 with Single Bond) (p = 0.005).
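Below is a minimal sketch of the two-way ANOVA design described in this abstract, written in Python with statsmodels; the group labels and microleakage values are hypothetical placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical microleakage currents (uA): 2 bonding systems x 2 thermocycle counts,
# two specimens per cell (values are placeholders, not the study's measurements)
df = pd.DataFrame({
    "bonding": ["TE", "TE", "TE", "TE", "SE", "SE", "SE", "SE"],
    "cycles":  [500, 500, 5000, 5000, 500, 500, 5000, 5000],
    "leakage": [3.1, 2.8, 3.3, 3.0, 1.9, 2.1, 2.0, 2.2],
})

# Two-way ANOVA with an interaction term, mirroring the design in the abstract
model = ols("leakage ~ C(bonding) * C(cycles)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```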

Statics corrections for shallow seismic refraction data (천부 굴절법 탄성파 탐사 자료의 정보정)

  • Palmer Derecke;Nikrouz Ramin;Spyrou Andreur
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.1
    • /
    • pp.7-17
    • /
    • 2005
  • The determination of seismic velocities in refractors for near-surface seismic refraction investigations is an ill-posed problem. Small variations in the computed time parameters can result in quite large lateral variations in the derived velocities, which are often artefacts of the inversion algorithms. Such artefacts are usually not recognized or corrected with forward modelling. Therefore, if detailed refractor models are sought with model-based inversion, then detailed starting models are required. The usual source of artefacts in seismic velocities is irregular refractors. Under most circumstances, the variable migration of the generalized reciprocal method (GRM) is able to accommodate irregular interfaces and generate detailed starting models of the refractor. However, where the very-near-surface environment of the Earth is also irregular, the efficacy of the GRM is reduced, and weathering corrections can be necessary. Standard methods for correcting for surface irregularities are usually not practical where the very-near-surface irregularities are of limited lateral extent. In such circumstances, the GRM smoothing statics method (SSM) is a simple and robust approach, which can facilitate more accurate estimates of refractor velocities. The GRM SSM generates a smoothing 'statics' correction by subtracting an average of the time-depths computed with a range of XY values from the time-depths computed with a zero XY value (where the XY value is the separation between the receivers used to compute the time-depth). The time-depths to the deeper target refractors do not vary greatly with varying XY values, so an average is much the same as the optimum value. However, the time-depths for the very-near-surface irregularities migrate laterally with increasing XY values and are substantially reduced by the averaging process. As a result, the time-depth profile averaged over a range of XY values is effectively corrected for the near-surface irregularities. In addition, the time-depths computed with a zero XY value are the sum of both the near-surface effects and the time-depths to the target refractor. Therefore, their subtraction generates an approximate 'statics' correction, which in turn is subtracted from the traveltimes. The GRM SSM is essentially a smoothing procedure, rather than a deterministic weathering correction approach, and it is most effective with near-surface irregularities of quite limited lateral extent. Model and case studies demonstrate that the GRM SSM substantially improves the reliability of determining detailed seismic velocities in irregular refractors.
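Below is a minimal numerical sketch of the GRM smoothing-statics idea described in this abstract, assuming time-depth profiles have already been computed for a set of XY values; the arrays are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical time-depth profiles (seconds) at each receiver position,
# one row per XY separation; row 0 corresponds to XY = 0 (illustrative values only)
xy_values = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # metres
time_depths = np.array([
    [0.052, 0.061, 0.078, 0.060, 0.051],   # XY = 0: includes a near-surface irregularity
    [0.050, 0.055, 0.058, 0.056, 0.050],
    [0.049, 0.053, 0.055, 0.054, 0.049],
    [0.050, 0.052, 0.054, 0.053, 0.050],
    [0.049, 0.052, 0.053, 0.052, 0.049],
])

# Average the time-depths over the non-zero XY values: near-surface anomalies
# migrate laterally with increasing XY and are smoothed out by the averaging.
avg_time_depth = time_depths[1:].mean(axis=0)

# Smoothing 'statics' correction: zero-XY time-depths minus the XY-averaged profile.
statics_correction = time_depths[0] - avg_time_depth

# This correction would then be subtracted from the observed traveltimes
# before recomputing refractor velocities.
print(statics_correction)
```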

Estimation of Surface fCO2 in the Southwest East Sea using Machine Learning Techniques (기계학습법을 이용한 동해 남서부해역의 표층 이산화탄소분압(fCO2) 추정)

  • HAHM, DOSHIK;PARK, SOYEONA;CHOI, SANG-HWA;KANG, DONG-JIN;RHO, TAEKEUN;LEE, TONGSUP
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.24 no.3
    • /
    • pp.375-388
    • /
    • 2019
  • Accurate evaluation of the sea-to-air $CO_2$ flux and its variability is crucial to understanding the global carbon cycle and predicting atmospheric $CO_2$ concentration. However, $fCO_2$ observations in the East Sea are sparse in space and time. In this study, we derived high-resolution time series of surface $fCO_2$ in the southwest East Sea by feeding sea surface temperature (SST), salinity (SSS), chlorophyll-a (CHL), and mixed layer depth (MLD) values, from either satellite observations or numerical model outputs, to three machine learning models. The root mean square error of the best performing model, a Random Forest (RF) model, was $7.1{\mu}atm$. The important parameters in predicting $fCO_2$ in the RF model were SST and SSS along with time information; CHL and MLD were much less important. The net $CO_2$ flux in the southwest East Sea, calculated from the $fCO_2$ predicted by the RF model, was $-0.76{\pm}1.15mol\;m^{-2}yr^{-1}$, close to the lower bound of previous estimates in the range of $-0.66{\sim}-2.47mol\;m^{-2}yr^{-1}$. The time series of $fCO_2$ predicted by the RF model showed significant variation even over intervals as short as a week. For accurate evaluation of the $CO_2$ flux in the Ulleung Basin, high-resolution in situ observations are needed in spring, when $fCO_2$ changes rapidly.
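Below is a minimal sketch of the random-forest regression approach described in this abstract, assuming the predictors (SST, SSS, CHL, MLD, and time) have already been collocated with observed fCO2 in a table; the column names, synthetic data, and train/test split are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical collocated predictor/target table (placeholder values only)
df = pd.DataFrame({
    "sst": rng.uniform(5, 25, n),      # sea surface temperature (deg C)
    "sss": rng.uniform(32, 35, n),     # sea surface salinity
    "chl": rng.uniform(0.1, 3.0, n),   # chlorophyll-a (mg m-3)
    "mld": rng.uniform(5, 150, n),     # mixed layer depth (m)
    "doy": rng.uniform(1, 365, n),     # day of year (time information)
})
df["fco2"] = 350 + 1.5 * (20 - df["sst"]) + rng.normal(0, 5, n)  # synthetic target (uatm)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="fco2"), df["fco2"], test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

rmse = np.sqrt(mean_squared_error(y_test, rf.predict(X_test)))
print("RMSE (uatm):", round(rmse, 2))
print("feature importances:", dict(zip(X_train.columns, rf.feature_importances_.round(3))))
```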

The effect of Big-data investment on the Market value of Firm (기업의 빅데이터 투자가 기업가치에 미치는 영향 연구)

  • Kwon, Young jin;Jung, Woo-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.99-122
    • /
    • 2019
  • According to a recent IDC (International Data Corporation) report, by 2025 the total volume of data is estimated to reach 163 zettabytes, about ten times that of 2016, and the main generators of information are shifting from consumers toward corporations. The so-called "wave of Big data" is arriving, and its aftermath affects entire industries and individual firms alike. Effective management of vast amounts of data is therefore more important than ever for firms. However, there have been no previous studies that measure the effect of big data investment, even though a number of previous studies have quantitatively measured the effects of IT investment. We therefore quantitatively analyze the effect of big data investment in order to assist firms' investment decision making. This study applied the event study methodology, with the efficient market hypothesis as its theoretical basis, to measure the effect of firms' big data investments on the response of market investors. In addition, five sub-variables were set to analyze this effect in more depth: firm size, industry classification (finance and ICT), investment completion status, and vendor involvement. To measure the impact of big data investment announcements, data from 91 announcements made between 2010 and 2017 were used, and the investment effect was observed empirically as the change in corporate value immediately after disclosure. Announcement data were collected from the 'News' category of Naver, the largest portal site in Korea, and the target companies were restricted to firms listed on the KOSPI and KOSDAQ markets. The search keywords were 'Big data construction', 'Big data introduction', 'Big data investment', 'Big data order', and 'Big data development'. The results of the empirical analysis are as follows. First, we found that the market value of the 91 listed firms that announced big data investments increased by 0.92%. In particular, the market value of finance firms, non-ICT firms, and small-cap firms increased significantly. This result can be interpreted as market investors perceiving firms' big data investments positively. Second, the increase in the market value of financial firms and non-ICT firms after a big data investment announcement was statistically significant. Third, to maximize the contrast, this study measured the effect of big data investment by firm size, dividing the sample into the top 30% and the bottom 30% by market capitalization and excluding the middle group. The analysis showed that the investment effect was greater for the smaller firms, and the difference between the two groups was clear. Fourth, one of the most significant features of this study is that the big data investment announcements were classified according to vendor involvement. We show that the investment effect is very large for the group with vendor involvement, indicating that market investors view the participation of specialist big data vendors very positively.
Last but not least, it is also interesting that market investors evaluated announcements of investments that were still to be built more positively than announcements of completed investments. In practice, this suggests that disclosing a big data investment at the time the decision is made is effective for increasing market value. Our study has academic implications, as prior research on the impact of big data investment has been nonexistent. It also has practical implications as a reference for business decision makers considering big data investment.
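Below is a minimal sketch of a market-model event study of the kind described in this abstract; the estimation and event window lengths and the synthetic return series are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def market_model_car(stock_ret, market_ret, est_len=120, event_len=3):
    """Market-model event study: fit alpha/beta on the estimation window,
    then return abnormal returns (AR) and their cumulative sum (CAR)
    over the event window that immediately follows."""
    beta, alpha = np.polyfit(market_ret[:est_len], stock_ret[:est_len], 1)
    evt_s = stock_ret[est_len:est_len + event_len]
    evt_m = market_ret[est_len:est_len + event_len]
    ar = evt_s - (alpha + beta * evt_m)
    return ar, ar.sum()

# Illustrative synthetic daily returns around one hypothetical announcement
rng = np.random.default_rng(0)
market = rng.normal(0.0003, 0.01, 123)
stock = 0.0002 + 1.1 * market + rng.normal(0.0, 0.012, 123)

ar, car = market_model_car(stock, market)
print("abnormal returns:", ar.round(4))
print("CAR over event window:", round(car, 4))
```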

Stage Structure and Population Persistence of Cypripedium japonicum Thunb., a Rare and Endangered Plants (희귀 및 멸종위기식물인 광릉요강꽃의 개체군 구조 및 지속성)

  • Lee, Dong-hyoung;Kim, So-dam;Kim, Hwi-min;Moon, Ae-Ra;Kim, Sang-Yong;Park, Byung-Bae;Son, Sung-won
    • Korean Journal of Environment and Ecology
    • /
    • v.35 no.5
    • /
    • pp.548-557
    • /
    • 2021
  • Cypripedium japonicum Thunb. is endemic to East Asia, distributed only in Korea, China, and Japan. At the global level, the IUCN Red List evaluates it as Endangered (EN), and at the national level in Korea it is evaluated as Critically Endangered (CR). In this study, we investigated the characteristics of the age structure and the persistence of the population, based on data from demographic monitoring conducted for seven years in natural habitats. C. japonicum habitats were observed in 7 regions of Korea (Pocheon, Gapyeong, Hwacheon, Chuncheon, Yeongdong, Muju, Gwangyang), and 4,356 individuals in 15 subpopulations were identified. Population size and structure differed from region to region, and artificial management had a very important effect on changes in the size and structure of the population. Population viability analysis (PVA) based on changes in the number of individuals of C. japonicum showed very diverse tendencies by region: the probability of population extinction within the next 100 years was 0.00% for Pocheon, 10.90% for Gwangyang, 24.05% for Chuncheon, and 79.50% for Hwacheon. Since the monitored study sites were located within conservation shelters that restrict access by humans, unauthorized collection of C. japonicum, the biggest threat to the species, was not reflected in the viability estimates, so the risk of extinction in Korea is expected to be significantly higher than estimated in this study. Therefore, to determine the extinction risk of the C. japonicum population objectively, it is necessary to incorporate population information from regions representing the various threats. In the future, demographic monitoring should be expanded to the known C. japonicum populations in Korea.
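Below is a minimal count-based sketch of how an extinction probability over 100 years can be estimated from yearly census counts; the counts, growth-rate model, and quasi-extinction threshold are hypothetical assumptions, not the paper's data or exact PVA method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 7-year census counts for one subpopulation
counts = np.array([120, 115, 118, 110, 105, 108, 100])
log_lambdas = np.diff(np.log(counts))            # yearly log growth rates
mu, sigma = log_lambdas.mean(), log_lambdas.std(ddof=1)

def extinction_probability(n0, years=100, threshold=10, n_sim=10_000):
    """Monte Carlo projection with i.i.d. lognormal yearly growth; returns
    the fraction of trajectories falling below the quasi-extinction threshold."""
    extinct = 0
    for _ in range(n_sim):
        n = float(n0)
        for _ in range(years):
            n *= np.exp(rng.normal(mu, sigma))
            if n < threshold:
                extinct += 1
                break
    return extinct / n_sim

print("P(quasi-extinction within 100 yr):", extinction_probability(counts[-1]))
```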

Development of A Material Flow Model for Predicting Nano-TiO2 Particles Removal Efficiency in a WWTP (하수처리장 내 나노 TiO2 입자 제거효율 예측을 위한 물질흐름모델 개발)

  • Ban, Min Jeong;Lee, Dong Hoon;Shin, Sangwook;Lee, Byung-Tae;Hwang, Yu Sik;Kim, Keugtae;Kang, Joo-Hyon
    • Journal of Wetlands Research
    • /
    • v.24 no.4
    • /
    • pp.345-353
    • /
    • 2022
  • A wastewater treatment plant (WWTP) is a major gateway for engineered nano-particles (ENPs) entering water bodies. However, existing studies have reported that the ENP concentrations in the effluents of many WWTPs exceed the No Observed Effect Concentration (NOEC), and thus WWTPs need to be designed or operated to control ENPs more effectively. Understanding and predicting the behavior of ENPs in the unit processes and in the whole process of a WWTP is the key first step in developing strategies for controlling ENPs with a WWTP. This study aims to provide a modeling tool for predicting the behavior and removal efficiency of ENPs in a WWTP as a function of process characteristics and major operating conditions. In the developed model, four unit processes for water treatment (primary clarifier, bioreactor, secondary clarifier, and tertiary treatment unit) were considered. In addition, the model simulates the sludge treatment system as a single process that integrates multiple unit processes, including thickeners, digesters, and dewatering units. The simulated ENP was nano-sized TiO2 (nano-TiO2), assuming that its behavior in a WWTP is dominated by attachment to suspended solids (SS), while dissolution and transformation are insignificant. The attachment of nano-TiO2 to SS was incorporated into the model equations using the apparent solid-liquid partition coefficient (Kd) under the assumption of equilibrium between the solid and liquid phases, and a steady-state condition for nano-TiO2 was assumed. Furthermore, an MS Excel-based user interface was developed to provide a user-friendly environment for calculating nano-TiO2 removal efficiency. Using the developed model, a preliminary simulation was conducted to examine how the solids retention time (SRT), a major operating variable, affects the removal efficiency of nano-TiO2 particles in a WWTP.
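Below is a minimal sketch of the equilibrium Kd partitioning idea described in this abstract: at steady state the particle-bound fraction of nano-TiO2 follows the suspended solids, so its removal in a unit process can be approximated from the SS removal. The Kd value, SS concentration, and SS removal efficiency are illustrative assumptions, not the paper's parameters.

```python
def nano_tio2_removal(kd_l_per_g, ss_mg_per_l, ss_removal):
    """Approximate nano-TiO2 removal in one unit process assuming
    solid-liquid equilibrium partitioning (apparent Kd) and that the
    particle-bound fraction is removed together with the SS."""
    ss_g_per_l = ss_mg_per_l / 1000.0
    bound_fraction = kd_l_per_g * ss_g_per_l / (1.0 + kd_l_per_g * ss_g_per_l)
    return bound_fraction * ss_removal

# Illustrative values: Kd = 50 L/g, SS = 200 mg/L, 60% SS removal in a primary clarifier
print(nano_tio2_removal(kd_l_per_g=50.0, ss_mg_per_l=200.0, ss_removal=0.60))
```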

Non-astronomical Tides and Monthly Mean Sea Level Variations due to Differing Hydrographic Conditions and Atmospheric Pressure along the Korean Coast from 1999 to 2017 (한국 연안에서 1999년부터 2017년까지 해수물성과 대기압 변화에 따른 계절 비천문조와 월평균 해수면 변화)

  • BYUN, DO-SEONG;CHOI, BYOUNG-JU;KIM, HYOWON
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.26 no.1
    • /
    • pp.11-36
    • /
    • 2021
  • The solar annual (Sa) and semiannual (Ssa) tides account for much of the non-uniform annual and seasonal variability observed in sea levels. These non-equilibrium tides depend on atmospheric variations, forced by changes in the Sun's distance and declination, as well as on hydrographic conditions. Here we employ tidal harmonic analysis to calculate Sa and Ssa harmonic constants for 21 Korean coastal tidal stations (TS) operated by the Korea Hydrographic and Oceanographic Agency. We used 19-year-long (1999 to 2017) records of hourly sea level from each site and two conventional harmonic analysis (HA) programs (Task2K and UTide). The stability of the Sa harmonic constants was estimated with respect to the starting date and record length of the data, and we examined the spatial distribution of the calculated Sa and Ssa harmonic constants. HA was performed on Incheon TS (ITS) records using 369-day subsets; the first start date was January 1, 1999, each subsequent subset started 24 hours later, and so on up to a final start date of December 27, 2017. Variations in the Sa constants produced by the two HA packages had similar magnitudes and start-date sensitivity. Results from the two packages showed a large difference in phase lag (about 78°) but a relatively small difference in amplitude (<1 cm). The phase lag difference arose largely because Task2K excludes the perihelion astronomical variable. The sensitivity of the ITS Sa constants to record length (1, 2, 3, 5, 9, and 19 years) was also tested to determine the data length needed to yield stable Sa results. The HA results revealed that 5- to 9-year sea level records can estimate Sa harmonic constants with relatively small error, while the best results are produced using 19-year-long records. As noted earlier, Sa amplitudes vary with regional hydrographic and atmospheric conditions. Sa amplitudes at the twenty-one TS ranged from 15.0 to 18.6 cm along the west coast, 10.7 to 17.5 cm along the south coast including Jejudo, and 10.5 to 13.0 cm along the east coast including Ulleungdo. Except at Ulleungdo, the Ssa constituent was found to contribute to an asymmetric seasonal sea level variation, delaying the highest and hastening the lowest sea levels. Comparisons between monthly mean, air-pressure-adjusted, and steric sea level variations revealed that year-to-year and asymmetric seasonal variations in sea level were largely produced by steric sea level variation and the inverted barometer effect.
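Below is a minimal sketch of fitting annual (Sa) and semiannual (Ssa) constituents to an hourly sea level series by least squares; it is a generic harmonic fit under simplifying assumptions (no nodal or perihelion corrections), not a reimplementation of Task2K or UTide, and the synthetic series is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly sea level record (metres) over one year
hours = np.arange(365 * 24)
t = hours / (365.25 * 24.0)                        # time in years
eta = (0.17 * np.cos(2 * np.pi * t - 2.1)          # synthetic Sa signal
       + 0.04 * np.cos(4 * np.pi * t - 0.5)        # synthetic Ssa signal
       + rng.normal(0.0, 0.05, hours.size))        # noise

# Least-squares fit of mean + Sa (1 cycle/yr) + Ssa (2 cycles/yr) cosine/sine terms
A = np.column_stack([
    np.ones_like(t),
    np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),   # Sa
    np.cos(4 * np.pi * t), np.sin(4 * np.pi * t),   # Ssa
])
c, *_ = np.linalg.lstsq(A, eta, rcond=None)

sa_amp, sa_phase = np.hypot(c[1], c[2]), np.degrees(np.arctan2(c[2], c[1]))
ssa_amp, ssa_phase = np.hypot(c[3], c[4]), np.degrees(np.arctan2(c[4], c[3]))
print(f"Sa:  amplitude {sa_amp:.3f} m, phase {sa_phase:.1f} deg")
print(f"Ssa: amplitude {ssa_amp:.3f} m, phase {ssa_phase:.1f} deg")
```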

Development of Continuous Monitoring Method of Root-zone Electrical Conductivity using FDR Sensor in Greenhouse Hydroponics Cultivation (시설 수경재배에서 FDR 센서를 활용한 근권 내 농도의 연속적 모니터링 방법)

  • Lee, Jae Seong;Shin, Jong Hwa
    • Journal of Bio-Environment Control
    • /
    • v.31 no.4
    • /
    • pp.409-415
    • /
    • 2022
  • Plant growth and development are also affected by the root-zone environment, so it is important to consider root-zone variables when establishing an irrigation strategy. The purpose of this study was to analyze the relationship between volumetric water content (VWC), bulk EC (ECb), and pore EC (ECp, the EC available to plant roots) using FDR sensors in two types of rockwool media with different water transmission characteristics, and to use this relationship to establish a method for collecting and correcting root-zone environmental data. Two types of rockwool medium (RW1, RW2) with different physical characteristics were used. Moisture content (MC) and ECb were measured with an FDR sensor, and ECp was measured after extracting residual nutrient solution from the center of the medium with a disposable syringe, at VWC values of 10-100%. ECb and ECp were then measured while supplying nutrient solutions of different concentrations (distilled water and 0.5-5.0) to the two media (RW1, RW2) over the full VWC range (0 to 100%). The relationship between ECb and ECp in the RW1 and RW2 media was best fitted by a cubic polynomial. The relationship between ECb and ECp by VWC range showed a large error rate in the low VWC range of 10-60%. The relationship between the sensor-measured ECb and the root-available ECp as a function of VWC was best described by a paraboloid equation in both media (RW1, RW2). The coefficients of determination of the calibration equations for the RW1 and RW2 media were 0.936 and 0.947, respectively.
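Below is a minimal sketch of the kind of calibration described in this abstract: a cubic polynomial between sensor-measured ECb and extracted ECp, and a two-variable quadratic (paraboloid-type) surface in ECb and VWC; the synthetic measurements are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical calibration measurements (placeholder values)
vwc = rng.uniform(10, 100, n)                      # volumetric water content (%)
ecb = rng.uniform(0.1, 2.0, n)                     # bulk EC from the FDR sensor (dS/m)
ecp = (0.5 + 2.2 * ecb - 0.4 * ecb**2 + 0.05 * ecb**3
       - 0.01 * (vwc - 60) + rng.normal(0, 0.05, n))   # synthetic pore EC (dS/m)

# 1) Cubic polynomial ECp = f(ECb)
cubic_coef = np.polyfit(ecb, ecp, 3)
print("cubic coefficients (highest order first):", cubic_coef.round(3))

# 2) Paraboloid-type surface ECp = f(ECb, VWC), fitted by least squares
A = np.column_stack([np.ones(n), ecb, vwc, ecb**2, vwc**2, ecb * vwc])
coef, *_ = np.linalg.lstsq(A, ecp, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((ecp - pred)**2) / np.sum((ecp - ecp.mean())**2)
print("surface R^2:", round(r2, 3))
```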

Development of Diameter Distribution Change and Site Index in a Stand of Robinia pseudoacacia, a Major Honey Plant (꿀샘식물 아까시나무의 지위지수 도출 및 직경분포 변화)

  • Kim, Sora;Song, Jungeun;Park, Chunhee;Min, Suhui;Hong, Sunghee;Yun, Junhyuk;Son, Yeongmo
    • Journal of Korean Society of Forest Science
    • /
    • v.111 no.2
    • /
    • pp.311-318
    • /
    • 2022
  • We conducted this study to derive the site index, which is a criterion for the planting of Robinia pseudoacacia, a honey plant, and to investigate the change in diameter distribution by the derived site index. We applied the Chapman-Richards equation to estimate the site index of Robinia pseudoacacia stands. The site index was distributed within the range of 16-22 for a base age of 30 years. The fitness index of the site index estimation model was low, but we judged that there was no problem in its application because the residuals of the equation were not skewed to one side. We used the Weibull diameter distribution function to determine the diameter distribution of the Robinia pseudoacacia stand by site index. The mean diameter and the dominant tree height were used as independent variables, and the analysis procedure was to estimate and recover the parameters of the Weibull diameter distribution function. Using the mean diameter and dominant tree height of the stand to express the distribution by diameter class, the fitness index for DBH distribution estimation was about 80.5%. When the diameter distributions at age 30 were plotted by site index, we found that the higher the site index, the further the diameter distribution curve shifted to the right. This suggests that if a plantation were established on a high site index stand, considering the trees suited to the site, the growth of Robinia pseudoacacia would become active, and not only the production of wood but also the production of honey would increase. We therefore anticipate that the site index classification table and curves for Robinia pseudoacacia stands will become a standard for decision making in the planting and management of this tree.
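Below is a minimal sketch of fitting the Chapman-Richards growth form, $H = a(1 - e^{-bt})^{c}$, to height-age data and reading the site index at a base age of 30 years; the data points and starting parameters are illustrative assumptions, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(age, a, b, c):
    """Chapman-Richards growth form: H = a * (1 - exp(-b * age)) ** c."""
    return a * (1.0 - np.exp(-b * age)) ** c

# Hypothetical dominant-height observations (age in years, height in metres)
age = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
height = np.array([4.0, 8.5, 12.0, 14.5, 16.5, 18.0, 19.0, 19.7])

params, _ = curve_fit(chapman_richards, age, height, p0=(25.0, 0.05, 1.2))
site_index = chapman_richards(30.0, *params)   # dominant height at the base age of 30
print("fitted parameters (a, b, c):", np.round(params, 3))
print("site index at base age 30:", round(float(site_index), 1))
```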

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.3
    • /
    • pp.127-143
    • /
    • 2022
  • Recently, there have been many active attempts to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These features facilitate ocean modeling experimentation on commercial cloud computing systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analysis of the performance and features of commercial cloud services for numerical modeling is essential for selecting an appropriate system, as this can help minimize execution time and the amount of resources used. Cache memory has a large effect in the processing structure of an ocean numerical model, which handles data input/output in multidimensional array structures, and network speed is important because of the communication patterns in which large amounts of data move between nodes. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for migrating other ocean models to cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses large amounts of memory, and that memory latency is also important. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small grid sizes. Our analysis results will be helpful as a reference for constructing the best computing system in the cloud to minimize the time and cost of numerical ocean modeling.
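Below is a toy strong-scaling sketch illustrating the last point: when per-step communication overhead grows with core count, adding cores pays off more for large grids than for small ones. The cost model and constants are purely illustrative assumptions, not measurements of ROMS or any cloud system.

```python
import numpy as np

def model_runtime(grid_points, cores, t_flop=1e-9, t_comm=5e-6):
    """Toy strong-scaling model: compute time divides across cores, while
    per-step halo-exchange overhead grows with the number of cores.
    Purely illustrative; not a measurement of any real system."""
    compute = grid_points * t_flop / cores
    communicate = cores * t_comm
    return compute + communicate

for n in (1e5, 1e7):                              # small vs. large grid
    times = [model_runtime(n, c) for c in (1, 4, 16, 64)]
    speedups = [times[0] / t for t in times]
    print(f"grid={int(n):>8}: speedup at 1/4/16/64 cores =",
          [round(s, 1) for s in speedups])
```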