• Title/Summary/Keyword: Optimal distribution

Search results: 2,864

Comparison of Temperature-dependent Development Model of Aphis gossypii (Hemiptera: Aphididae) under Constant Temperature and Fluctuating Temperature (실내 항온과 온실 변온조건에서 목화진딧물의 온도 발육비교)

  • Kim, Do-Ik;Ko, Suk-Ju;Choi, Duck-Soo;Kang, Beom-Ryong;Park, Chang-Gyu;Kim, Seon-Gon;Park, Jong-Dae;Kim, Sang-Soo
    • Korean Journal of Applied Entomology, v.51 no.4, pp.421-429, 2012
  • The developmental period of Aphis gossypii was studied in the laboratory (six constant temperatures from 15 to $30^{\circ}C$, 50~60% RH, and a photoperiod of 14L:10D) and in a cucumber plastic house. Mortality of A. gossypii in the laboratory was high in the 2nd (20.0%) and 3rd (13.3%) nymphal stages at low temperatures, and in the 3rd (26.7%) and 4th (33.3%) stages at high temperatures. Mortality in the plastic house was high in the 1st and 2nd stages, but there was no mortality in the 4th stage at low temperature. The total developmental period was longest at $15^{\circ}C$ (12.2 days) in the laboratory and shortest at $28.5^{\circ}C$ (4.09 days) in the plastic house. The lower threshold temperature for the total nymphal stage was $6.8^{\circ}C$ in the laboratory, and the thermal constant required to complete the total nymphal stage was 111.1 DD. The relationship between developmental rate and temperature was best described by the Logan-6 nonlinear model, which had the lowest Akaike information criterion (AIC) and Bayesian information criterion (BIC) values. The distribution of development completion for each stage was well described by a 3-parameter Weibull function ($r^2=0.89{\sim}0.96$). Predicted outcomes agreed closely with observations, so the model can be used to predict the optimal spray time for A. gossypii.
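As a reading aid for the linear degree-day parameters reported above (lower threshold $6.8^{\circ}C$, thermal constant 111.1 DD), the sketch below shows how such estimates are conventionally derived from mean development times at constant temperatures. The temperatures and development times are illustrative placeholders, not the paper's data.

```python
# Sketch: estimating the lower developmental threshold (T0) and thermal
# constant (K) by linear regression of developmental rate on temperature.
# The data arrays are hypothetical placeholders, not the paper's values.
import numpy as np

temps = np.array([15.0, 18.0, 21.0, 24.0, 27.0, 30.0])  # °C, six constant temps
dev_days = np.array([12.2, 9.5, 7.6, 6.3, 5.4, 4.8])    # mean days (illustrative)

rate = 1.0 / dev_days                          # developmental rate (1/day)
slope, intercept = np.polyfit(temps, rate, 1)  # linear degree-day model

T0 = -intercept / slope   # lower threshold temperature (°C)
K = 1.0 / slope           # thermal constant (degree-days)
print(f"T0 = {T0:.1f} °C, K = {K:.1f} DD")  # compare: paper reports 6.8 °C, 111.1 DD
```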

Recent Progress in Air Conditioning and Refrigeration Research - A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2004 and 2005 - (공기조화, 냉동 분야의 최근 연구 동향 -2004년 및 2005년 학회지 논문에 대한 종합적 고찰-)

  • Choi, Yong-Don;Kang, Yong-Tae;Kim, Nae-Hyun;Kim, Man-Hoe;Park, Kyoung-Kuhn;Park, Byung-Yoon;Park, Jin-Chul;Hong, Hi-Ki
    • Korean Journal of Air-Conditioning and Refrigeration Engineering, v.19 no.1, pp.94-131, 2007
  • The papers published in the Korean Journal of Air-Conditioning and Refrigerating Engineering in 2004 and 2005 are reviewed, with focus on the current status of research in heating, cooling, air conditioning, ventilation, sanitation, and building environment. The conclusions are as follows. (1) Most fundamental studies on fluid flow were related to heat transport in facilities. Drop formation and rivulet flow on solid surfaces were topics of interest related to condensation augmentation. Research on the micro-environment, considering flow, heat, and humidity, was also of interest for comfortable living environments and can be extended to biological aspects. The development of high-performance, low-noise fans and blowers was a continuing topic. Well-developed CFD and flow visualization technologies (PIV, PTV, and LDV methods) were widely applied to the development of facilities and their systems. (2) The research trends of the previous two years are surveyed in the groups of natural convection, forced convection, electronic cooling, heat transfer enhancement, frosting and defrosting, thermal properties, etc. New research topics include natural convection heat transfer enhancement using nanofluids, supercritical cooling performance and oil miscibility of $CO_2$, enthalpy heat exchangers for heat recovery, and heat transfer enhancement in plate heat exchangers using fluid resonance. (3) The literature of the last two years ($2004{\sim}2005$) is reviewed in the areas of heat pumps, ice and water storage, cycle analysis, and reused energy (including geothermal, solar, and other unused energy). Cycle analysis and experiments for $CO_2$ were extensively carried out to replace ozone-depleting and global-warming refrigerants such as HFCs and HCFCs. Since 2005, the gas engine heat pump (GHP) has received attention from the viewpoint of gas cooling applications. Heat pipe research focused on performance improvement through parametric analysis and on heat recovery applications. Storage systems were studied with respect to performance enhancement of the storage tank and cost analysis for heating and cooling applications. In the area of unused energy, hybrid systems were extensively introduced, and life cycle cost analysis (LCCA) for unused-energy systems was also intensively carried out. (4) Recent studies of various refrigeration and air-conditioning systems have focused on system performance and efficiency enhancement. Heat transfer characteristics during evaporation and condensation are investigated for several tube shapes and for alternative refrigerants, including carbon dioxide. The efficiency of various compressors and expansion devices is also addressed for better modeling and, in particular, performance improvement. Thermoelectric modules and cooling systems are analyzed theoretically and experimentally. (5) The review of recent studies on ventilation systems shows that appropriate ventilation systems, both mechanical and natural, are required to satisfy indoor air quality (IAQ) requirements. Recent studies on air-conditioning and absorption refrigeration systems have mainly focused on the distribution and dehumidification of indoor air to improve performance.
(6) Based on a review of recent studies on indoor environment and building service systems, research issues have mainly focused on optimal thermal comfort, improvement of indoor air quality, and innovative systems such as the air-barrier-type perimeter-less system with underfloor air conditioning (UFAC) and radiant floor heating and cooling systems. New approaches are highlighted for improving indoor environmental conditions while minimizing energy consumption, together with building control and operation strategies and energy performance analysis for economic evaluation.

Underpricing of Initial Offerings and the Efficiency of Investments (신주(新株)의 저가상장현상(低價上場現象)과 투자(投資)의 효율성(效率成)에 대한 연구(硏究))

  • Nam, Il-chong
    • KDI Journal of Economic Policy, v.12 no.2, pp.95-120, 1990
  • The underpricing of new shares of a firm that are offered to the public for the first time (initial offerings) is well known and has long puzzled financial economists, since it seems at odds with optimal behavior by the owners of issuing firms. Past attempts to explain this phenomenon have not been successful, in the sense that the explanations offered are either inconsistent with equilibrium theory or implausible; approaches by authors such as Welch or Allen and Faulhaber are no exception. In this paper, we develop a signalling model of capital investment to explain the underpricing phenomenon and also analyze the efficiency of investment. The model focuses on the information asymmetry between the owners of issuing firms and general investors. We consider a firm that has been owned and operated by a single owner and that has a profitable project but no capital to develop it. The profit from the project depends on the capital invested as well as on a profitability parameter. The financial market is represented by a single investor who maximizes expected wealth. The owner has superior information about the value of the firm: he knows the true value of the profitability parameter, while investors have only a probability distribution over it. The owner offers the representative investor a fraction of the ownership of the firm in return for a given amount of investment; this offer condition is equivalent to the usual one consisting of the number of shares to sell and the unit price per share. The model is thus a signalling game. Using Kreps' criterion as the solution concept, we obtain an essentially unique separating equilibrium offer condition. Analysis of this equilibrium shows that the owner of a firm with high profitability chooses an offer condition that raises an amount of capital falling short of the amount that maximizes the potential profit from the project. It also shows that the ownership fraction the representative investor receives from the owner of the highly profitable firm in return for its investment has a value exceeding that investment; in other words, the initial offering in the model is underpriced when the firm's profitability is high. The source of both underpricing and underinvestment is the signalling activity of the owner of the highly profitable firm, who convinces investors of the project's high profitability by choosing an offer condition that the owner of a low-profitability firm cannot imitate. We thus obtain two main results. First, underpricing results from signalling by the owner of a highly profitable firm when information is asymmetric between the owner of the issuing firm and investors. Second, this information asymmetry also leads to underinvestment in a highly profitable project. These results clearly show that underpricing entails underinvestment and that information asymmetry imposes a social cost as well as a private cost. The results are quite general, being based on a neoclassical profit function and full rationality of economic agents, and can serve as a basis for further research on the capital investment process.
For instance, the results of this paper can be viewed as a subgame equilibrium of a larger game in which a firm chooses among diverse ways to raise capital. In addition, the method used here can be applied to a wide range of problems arising from the information asymmetry that the Korean financial market faces.
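To make the structure of the separating equilibrium concrete, the following is a deliberately simplified toy formalization with a binary profitability type; the notation is introduced here for illustration only and is not the paper's model (which works with a continuous profitability parameter and Kreps' criterion as the solution concept).

```latex
% Toy signalling setup (illustrative notation, not the paper's model).
% An owner of type $\theta \in \{H, L\}$ offers investors a fraction
% $\alpha_\theta$ of the firm in exchange for capital $I_\theta$;
% project profit is $\pi(\theta, I)$.
\begin{align*}
  &\text{Investor participation:} &&
      \alpha_\theta\, \pi(\theta, I_\theta) \ge I_\theta \\
  &\text{Low type cannot mimic $H$:} &&
      (1-\alpha_L)\, \pi(L, I_L) \ge (1-\alpha_H)\, \pi(L, I_H) \\
  &\text{Underinvestment by $H$:} &&
      I_H < \operatorname*{arg\,max}_I \bigl[ \pi(H, I) - I \bigr]
\end{align*}
% Underpricing appears as $\alpha_H\, \pi(H, I_H) > I_H$: the stake handed
% to the investor is worth more than the capital it supplies.
```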


A Study on the Optimal Image Acquisition Time of 18F-Flutemetamol Using List Mode (LIST mode를 이용한 18F-Flutemetamol의 최적 영상획득 시간에 관한 연구)

  • Ryu, Chan-Ju
    • Journal of the Korean Society of Radiology, v.15 no.6, pp.891-897, 2021
  • With the development of amyloid PET tracers, the accuracy of Alzheimer's diagnosis can be improved through the identification of beta-amyloid neuritic plaques. However, the long image acquisition time of 20 minutes can be difficult for patients, and PET/CT scans are sensitive to patient movement, which can affect test results. In this study, we investigated an appropriate image acquisition time that does not affect quantitative image evaluation, using list-mode acquisition and the temporal distribution of the radiopharmaceutical in the body. Unlike the conventional frame mode, list mode records timing information, which makes data analysis easy because images can be reconstructed over any time window the researcher chooses. Reconstructed images were obtained from list-mode data at 5 min/bed, 10 min/bed, 15 min/bed, and 20 min/bed to compare differences in the signal-to-pons uptake ratio (SNR), the lesion-to-pons uptake ratio (LPR), and the reading results. In the quantitative analysis, SUVmean values in six regions of interest decreased as acquisition time increased, with the largest difference at 5 min/bed, followed by 10 min/bed and 15 min/bed; the differences diminished as acquisition time lengthened. SUVmean values at 15 min/bed therefore did not differ enough to affect image evaluation, and there was no difference in LPR values. In the qualitative analysis, the reading findings did not change with PET acquisition time, and there was no significant difference in the qualitative scores of the time-based reconstructions. In conclusion, since there is no significant difference between 15 min/bed and 20 min/bed images in 18F-flutemetamol PET/CT, selectively reducing the acquisition time to 15 min/bed via list mode, depending on the patient's condition, is clinically useful.
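The two ratios compared in the abstract are simple reference-region normalizations. Below is a minimal sketch of the lesion-to-pons uptake ratio (LPR) computation from ROI SUVmean values; the region names and numbers are hypothetical, not the study's measurements.

```python
# Sketch: pons-normalized uptake ratios from ROI SUVmean values, in the
# spirit of the abstract's LPR. All numbers are illustrative placeholders.
suv_mean = {                 # SUVmean per cortical region (hypothetical)
    "frontal": 1.62, "parietal": 1.55, "temporal": 1.48,
    "precuneus": 1.70, "striatum": 1.35, "occipital": 1.28,
}
suv_pons = 2.10              # reference region (pons), hypothetical

lpr = {roi: v / suv_pons for roi, v in suv_mean.items()}  # lesion-to-pons ratio
for roi, r in lpr.items():
    print(f"{roi:10s} LPR = {r:.2f}")
```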

Conditional Generative Adversarial Network based Collaborative Filtering Recommendation System (Conditional Generative Adversarial Network(CGAN) 기반 협업 필터링 추천 시스템)

  • Kang, Soyi;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.27 no.3, pp.157-173, 2021
  • With the development of information technology, the amount of available information increases daily, but this abundance makes it difficult for users to find the information they actually seek. Users want systems that reduce retrieval and learning time, sparing them from personally reading and judging all available information. Recommendation systems are therefore an increasingly important technology, essential to business. Collaborative filtering is used in various fields with excellent performance because recommendations are based on the interests and preferences of similar users, but it has limitations. Sparsity, which occurs when user-item preference information is insufficient, is its main limitation: rating values in the user-item matrix may be distorted by the popularity of a product, and new users may not yet have rated anything. This lack of historical data for identifying consumer preferences is referred to as data sparsity, and various methods have been studied to address it. However, most attempts to solve the sparsity problem are not generally applicable because they require additional data such as users' personal information, social networks, or item characteristics. Another problem is that real-world rating data are mostly biased toward high scores, resulting in severe class imbalance. One cause of this imbalance is purchasing bias: users who rate products highly tend to purchase them, while users with low ratings are less likely to purchase and thus rarely leave negative reviews. As a result, reviews from purchasing users are more likely to be positive than users' actual preferences would suggest. Rating data are therefore over-learned in the high-frequency classes, distorting the picture of the market, and applying collaborative filtering to such imbalanced data leads to poor recommendation performance due to excessive learning of the dominant classes. Traditional oversampling techniques are likely to cause overfitting because they repeat the same data, which acts as noise in learning and reduces recommendation performance. In addition, most existing preprocessing methods for data imbalance are designed for binary classes; binary-class imbalance techniques are difficult to apply to multi-class problems because they cannot model objects at cross-class boundaries or objects overlapping multiple classes. Research has been conducted on converting multi-class problems into binary-class problems, but such simplification can cause classification errors when the results of classifiers learned on sub-problems are combined, losing important information about relationships beyond the selected items. More effective methods for multi-class imbalance are therefore needed. We propose a collaborative filtering model that uses a CGAN to generate realistic virtual data to populate the empty entries of the user-item matrix. A conditional vector y identifies the distributions of minority classes so that the generated data reflect their characteristics; collaborative filtering then maximizes recommendation performance via hyperparameter tuning.
This process should improve the accuracy of the model by addressing the sparsity problem of collaborative filtering while mitigating the imbalance present in real data. Our model shows superior recommendation performance over existing oversampling techniques on sparse real-world data: with SMOTE, Borderline-SMOTE, SVM-SMOTE, ADASYN, and a plain GAN as comparison models, it achieved the best scores on the RMSE and MAE evaluation measures. This study suggests that deep learning-based oversampling can further refine the performance of recommendation systems on real data and can be used to build business recommendation systems.
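As a sketch of the generative component described above, here is a minimal conditional GAN in PyTorch that generates user rating rows conditioned on a one-hot class vector y (the conditional vector mentioned in the abstract). The architecture, sizes, and [0, 1] rating scaling are illustrative assumptions, not the paper's exact configuration, and the training loop is omitted.

```python
# Minimal CGAN sketch for oversampling rating vectors. Rows of a user-item
# matrix are generated conditioned on a one-hot rating-class label y.
# Sizes and layers are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

N_ITEMS, N_CLASSES, Z_DIM = 1000, 5, 64

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, N_ITEMS), nn.Sigmoid(),  # ratings scaled to [0, 1]
        )
    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))   # condition by concatenation

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ITEMS + N_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),        # real/fake probability
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = Generator(), Discriminator()
z = torch.randn(32, Z_DIM)                                   # noise batch
y = torch.eye(N_CLASSES)[torch.randint(0, N_CLASSES, (32,))]  # one-hot labels
fake_rows = G(z, y)            # synthetic rating rows for the chosen classes
print(D(fake_rows, y).shape)   # torch.Size([32, 1])
```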

Investigating Data Preprocessing Algorithms of a Deep Learning Postprocessing Model for the Improvement of Sub-Seasonal to Seasonal Climate Predictions (계절내-계절 기후예측의 딥러닝 기반 후보정을 위한 입력자료 전처리 기법 평가)

  • Uran Chung;Jinyoung Rhee;Miae Kim;Soo-Jin Sohn
    • Korean Journal of Agricultural and Forest Meteorology, v.25 no.2, pp.80-98, 2023
  • This study explores the effectiveness of various data preprocessing algorithms for improving subseasonal-to-seasonal (S2S) climate predictions from six climate forecast models and their Multi-Model Ensemble (MME) using a deep learning-based postprocessing model. A pipeline of data transformation algorithms was constructed to convert raw S2S prediction data into training data processed with several statistical transformations, and a dimensionality reduction algorithm selected features by ranking correlation coefficients between the observed data and the input data. The training model was designed with a TimeDistributed wrapper applied to all convolutional layers of a U-Net: since every input to a U-Net convolutional layer must be at least 3D, the TimeDistributed wrapper allows the layer to be applied directly to 5-dimensional time-series data while preserving the time axis. We found that the Robust and Standard transformation algorithms are the most suitable for improving S2S predictions. Dimensionality reduction based on feature selection did not significantly improve predictions of daily precipitation for the six climate models and even worsened predictions of daily maximum and minimum temperatures. While deep learning-based postprocessing also improved MME S2S precipitation predictions, it did not have a significant effect on temperature predictions, particularly for lead times of weeks 1 and 2. Further research is needed to develop an optimal deep learning model for improving S2S temperature predictions by testing various models and parameters.
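The sketch below illustrates the two transformation algorithms the study found most suitable (Robust and Standard scaling, here via scikit-learn) and a TimeDistributed convolutional layer of the kind described, applied to 5-dimensional input (batch, time, height, width, channels). The array shapes are illustrative, not the study's grids.

```python
# Sketch: Robust/Standard scaling plus a TimeDistributed Conv2D over 5-D
# time-series input. Shapes and data are placeholders, not the study's.
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import RobustScaler, StandardScaler

x = np.random.rand(100, 4)                       # (samples, features), dummy
x_robust = RobustScaler().fit_transform(x)       # median/IQR scaling
x_standard = StandardScaler().fit_transform(x)   # zero mean, unit variance

model = tf.keras.Sequential([
    tf.keras.Input(shape=(6, 32, 32, 1)),        # (time, H, W, channels)
    # TimeDistributed applies the same Conv2D to every time step,
    # keeping the time axis of the data intact.
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")),
])
print(model.output_shape)                        # (None, 6, 32, 32, 16)
```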

Improvement of the Efficacy Test Methods for Hand Sanitizers (Gel, Liquid, and Wipes): Emerging Trends from in vivo/ex vivo Test Strategies for Application in the Hand Microbiome (손소독제(겔형, 액제형, 와이프형)의 효능 평가법 개선: 평가 전략 연구 사례 및 손 균총 정보 활용 등 최근 동향)

  • Yun O;Ji Seop Son;Han Sol Park;Young Hoon Lee;Jin Song Shin;Da som Park;Eun NamGung;Tae Jin Cho
    • Journal of Food Hygiene and Safety, v.38 no.1, pp.1-11, 2023
  • Skin sanitizers are effective in killing or removing pathogenic microbial contaminants from the skin of food handlers, and the progressive growth of consumer interest in personal hygiene tends to drive product diversification. This review covers advances in the application of efficacy tests for hand sanitizers and suggests future perspectives for establishing an assessment system optimized to each product type (gel, liquid, and wipes). Previous research on in vivo tests simulating actual consumer use has adopted diverse experimental conditions regardless of product type, which highlights the importance of establishing optimal test protocols specialized for the compositional characteristics of sanitizers through comparative analysis of test methods. Although the operational conditions of the mechanical actions associated with wiping can affect the removal and/or inactivation of target microorganisms from the skin surface, there is currently a lack of standardized use patterns for exposing skin to hand sanitizing wipes. Thus, the major determinants affecting the results of each step of the overall assessment procedure [pre-treatment - exposure to sanitizers - microbial recovery] should be identified in order to modify current protocols and develop novel test methods. The ex vivo test, designed to overcome the limited reproducibility of in vivo human trials, is also expected to replicate the environment in which sanitizers contact skin microorganisms. Recent progress in skin microbiome research has revealed distinct microbial characteristics and distribution patterns after the application of sanitizers to hands, supporting test methods that consider antimicrobial effects at the community level. The future perspectives presented in this study on improving efficacy test methods for hand sanitizers can also contribute to public health and food safety through the commercialization of effective sanitizer products.

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo;Hongki Kang;Wansang Yoon;Pyung-Chae Lim;Sooahm Rhee;Taejung Kim
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1211-1224, 2023
  • Clouds cause many difficult problems when optical satellites are used to observe land surface phenomena for purposes such as national land observation, disaster response, and change detection. The presence of clouds affects not only the image processing stage but also the final data quality, so clouds must be identified and removed. In this study, we therefore developed a new cloud detection technique that automatically searches for and extracts the pixels closest to the spectral pattern of clouds in a satellite image, selects an optimal threshold, and produces a cloud mask based on that threshold. The technique consists of three main steps. In the first step, the Digital Number (DN) image is converted to top-of-atmosphere (TOA) reflectance. In the second step, preprocessing such as hue-saturation-value (HSV) transformation, triangle thresholding, and maximum likelihood classification is applied to the TOA reflectance image, and the threshold for generating the initial cloud mask is determined per image. In the third step, post-processing removes the noise in the initial cloud mask and refines the cloud boundaries and interiors. As experimental data, CAS500-1 L2G images acquired over the Korean Peninsula from April to November, which exhibit diverse spatial and seasonal cloud distributions, were used. To verify the performance of the proposed method, its results were compared with those of a simple thresholding method. In the experiments, the proposed method detected clouds more accurately than the existing method by accounting for the radiometric characteristics of each image through preprocessing, and it minimized the influence of bright non-cloud objects (panel roofs, concrete roads, sand, etc.). The proposed method improved on the existing method by more than 30% (F1-score) but showed limitations in certain images containing snow.
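A minimal sketch of the second step (HSV transformation plus triangle thresholding on a TOA reflectance composite) is given below. It is a simplified stand-in for the paper's pipeline: the maximum likelihood classification and the post-processing step are omitted, and the input is a dummy array.

```python
# Sketch: per-image threshold for an initial cloud mask via HSV transform
# and triangle thresholding. A simplified stand-in, not the paper's code.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_triangle

toa_rgb = np.random.rand(256, 256, 3)   # dummy TOA reflectance composite in [0, 1]

hsv = rgb2hsv(toa_rgb)
value = hsv[..., 2]                     # clouds are bright: high V channel
thresh = threshold_triangle(value)      # per-image threshold from the histogram
cloud_mask = value > thresh             # initial cloud mask (before cleanup)
print(f"threshold={thresh:.3f}, cloud fraction={cloud_mask.mean():.2%}")
```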

Development of Marine Ecotoxicological Standard Methods for Ulva Sporulation Test (파래의 포자형성률을 이용한 해양생태독성시험 방법에 관한 연구)

  • Han, Tae-Jun;Han, Young-Seok;Park, Gyung-Soo;Lee, Seung-Min
    • The Sea: Journal of the Korean Society of Oceanography, v.13 no.2, pp.121-128, 2008
  • As an aquatic ecotoxicity test, a bioassay using the inhibition of sporulation of the green macroalga Ulva pertusa has been developed. The optimal test conditions determined for photon irradiance, pH, salinity, and temperature were $100\;{\mu}mol{\cdot}m^{-2}{\cdot}s^{-1}$, $7{\sim}9$, $25{\sim}35\;psu$, and $15{\sim}20^{\circ}C$, respectively. The validity of the test endpoint was evaluated by assessing the toxicity of four metals (Cd, Cu, Pb, Zn) and of elutriates of sewage or waste sludge collected from nine different locations. For the metals, the $EC_{50}$ values indicated the following toxicity ranking: Cu ($0.062\;mg{\cdot}L^{-1}$) > Cd ($0.208\;mg{\cdot}L^{-1}$) > Pb ($0.718\;mg{\cdot}L^{-1}$) > Zn ($0.776\;mg{\cdot}L^{-1}$). Compared with other commonly used bioassays of metal pollution listed in the US ECOTOX database, the sporulation test proved to be the most sensitive. Ulva sporulation was significantly inhibited by all elutriates, with the greatest and least effects observed for sludge from industrial waste ($EC_{50}=6.78%$) and from a filtration bed ($EC_{50}=15.0%$), respectively. A Spearman rank correlation analysis of the $EC_{50}$ data against toxicant concentrations in the sludge showed a significant correlation between toxicity and the four heavy metals (Cd, Cu, Pb, Zn). The method described here is sensitive to toxicants, simple to use, easy to interpret, and economical, and samples are easy to procure and cultures easy to maintain. It would therefore be useful for assessing the aquatic toxicity of a wide range of toxicants. In addition, the genus Ulva is geographically widespread and its species have similar reproductive processes, so the test method has potential application worldwide.
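EC50 values such as those quoted above are typically obtained by fitting a dose-response curve to the inhibition data. Below is a minimal sketch using a two-parameter log-logistic model; the concentrations and responses are hypothetical, not the paper's measurements.

```python
# Sketch: estimating EC50 by fitting a two-parameter log-logistic
# dose-response curve to sporulation data. All data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, ec50, hill):
    """Fraction of control sporulation at toxicant concentration c."""
    return 1.0 / (1.0 + (c / ec50) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0])     # mg/L (illustrative, e.g. Cu)
resp = np.array([0.97, 0.85, 0.42, 0.12, 0.03])  # sporulation relative to control

(ec50, hill), _ = curve_fit(log_logistic, conc, resp, p0=[0.1, 1.0])
print(f"EC50 = {ec50:.3f} mg/L, Hill slope = {hill:.2f}")
```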

A Comparative Study on Topic Modeling of LDA, Top2Vec, and BERTopic Models Using LIS Journals in WoS (LDA, Top2Vec, BERTopic 모형의 토픽모델링 비교 연구 - 국외 문헌정보학 분야를 중심으로 -)

  • Yong-Gu Lee;SeonWook Kim
    • Journal of the Korean Society for Library and Information Science, v.58 no.1, pp.5-30, 2024
  • The purpose of this study is to extract topics from experimental data using three topic modeling methods (LDA, Top2Vec, and BERTopic) and to compare the characteristics of and differences between these models. The experimental data consist of 55,442 papers published in 85 academic journals in the field of library and information science indexed in the Web of Science (WoS). The experiment proceeded in two stages: first, topic modeling results were obtained using each model's default parameters; second, results were obtained with the number of topics set to the same optimal value for every model. In the first stage, the LDA, Top2Vec, and BERTopic models generated markedly different numbers of topics (100, 350, and 550, respectively); the Top2Vec and BERTopic models divided the topics roughly three to five times more finely than the LDA model. There were also substantial differences between the models in the mean and standard deviation of documents per topic: the LDA model assigned many documents to a relatively small number of topics, while the BERTopic model showed the opposite tendency. In the second stage, with all models generating the same 25 topics, the Top2Vec model tended to assign more documents per topic on average and showed small deviations between topics, yielding an even distribution across the 25 topics. Comparing the creation of similar topics between models, LDA and Top2Vec generated 18 similar topics (72%) out of 25, a high percentage suggesting that Top2Vec is the model most similar to LDA. For a more comprehensive comparative analysis, expert evaluation is needed to determine whether the documents assigned to each topic are thematically accurate.
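For readers who want to reproduce the second-stage setup, the sketch below fits LDA (gensim) and BERTopic with the same fixed number of topics (25), using the public 20 Newsgroups corpus as a stand-in for the study's 55,442 WoS abstracts; Top2Vec offers an analogous reduction via hierarchical_topic_reduction(25).

```python
# Sketch: fixing the number of topics to 25 for LDA and BERTopic.
# The corpus is a public stand-in, not the study's WoS dataset.
from sklearn.datasets import fetch_20newsgroups
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from bertopic import BERTopic

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:1000]

# LDA: bag-of-words corpus with an explicit topic count
tokenized = [d.lower().split() for d in docs]
dictionary = Dictionary(tokenized)
corpus = [dictionary.doc2bow(t) for t in tokenized]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=25, random_state=42)

# BERTopic: embedding + clustering, then reduction toward 25 topics
bertopic = BERTopic(nr_topics=25)
topics, probs = bertopic.fit_transform(docs)

print(lda.show_topic(0, topn=5))
print(bertopic.get_topic_info().head())
```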