• Title/Summary/Keyword: cost-efficiency

Development of Tree Carbon Calculator to Support Landscape Design for the Carbon Reduction (탄소저감설계 지원을 위한 수목 탄소계산기 개발 및 적용)

  • Ha, Jee-Ah;Park, Jae-Min
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.1
    • /
    • pp.42-55
    • /
    • 2023
  • A methodology for predicting the carbon performance of newly created urban green spaces is required, as policies based on quantified carbon performance are rapidly being introduced in response to the climate crisis caused by global warming. This study developed a tree carbon calculator that can be used for carbon reduction design in landscaping and verified its effectiveness in landscape design. For practical operability, MS Excel was selected as the format, and carbon absorption and storage values by tree type and size were compiled for 93 representative species to reflect planting design characteristics. A database including tree unit prices was also established to reflect cost constraints. To verify the performance of the tree carbon calculator, a planting experiment was conducted by simulating park designs for the central region of Korea across four landscape design cases, and the causal relationships were analyzed through semi-structured interviews before and after the experiment. As a result, designs produced with the tree carbon calculator showed about 17-82% higher carbon absorption and about 14-85% higher carbon storage than designs produced without it. The increase in carbon performance was attributed to active additional planting within the given budget, together with the substitution of species with excellent carbon performance. The interviews revealed that, before using the tree carbon calculator, designers distrusted the data and felt burdened by a new program, but their attitudes turned positive after use because of its usefulness and ease of use. To establish carbon reduction design in the landscaping field, the tool needs to be developed further into a carbon calculator covering trees and overall landscape performance. This study is expected to present a useful direction for introducing carbon reduction design based on quantitative data into landscape design.
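
The calculator's internal formulas are not given in the abstract, but the core idea it describes (comparing species by carbon performance and re-allocating a fixed planting budget) can be sketched roughly as below. The species names, absorption figures, and prices are hypothetical placeholders, not values from the paper's database.

```python
# Hypothetical sketch of budget-constrained species substitution, the behaviour
# the abstract attributes to designers using the Excel-based calculator.
# All figures below are made-up placeholders.

species = [
    # (name, annual CO2 absorption per tree [kg], unit price [KRW])
    ("Zelkova serrata",    35.0, 250_000),
    ("Pinus densiflora",   12.0, 300_000),
    ("Quercus acutissima", 40.0, 280_000),
    ("Prunus yedoensis",   20.0, 220_000),
]

budget = 10_000_000  # total planting budget [KRW]

# Greedy allocation: favour the species with the best absorption per unit cost
# until the budget is exhausted.
plan, remaining = {}, budget
for name, absorb, price in sorted(species, key=lambda s: s[1] / s[2], reverse=True):
    count = remaining // price
    if count:
        plan[name] = count
        remaining -= count * price

total_absorption = sum(
    count * next(a for n, a, _ in species if n == name) for name, count in plan.items()
)
print(plan, f"annual absorption ≈ {total_absorption:.0f} kg CO2, leftover {remaining:,} KRW")
```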

An Installation and Model Assessment of the UM, U.K. Earth System Model, in a Linux Cluster (U.K. 지구시스템모델 UM의 리눅스 클러스터 설치와 성능 평가)

  • Daeok Youn;Hyunggyu Song;Sungsu Park
    • Journal of the Korean earth science society
    • /
    • v.43 no.6
    • /
    • pp.691-711
    • /
    • 2022
  • A state-of-the-art Earth system model, serving as a virtual Earth, is required for studies of current and future climate change and climate crises. Such a complex numerical model can account for almost all human activities and natural phenomena affecting the atmosphere of Earth. The Unified Model (UM) from the United Kingdom Meteorological Office (UK Met Office) is among the best Earth system models available as a scientific tool for studying the atmosphere. However, owing to the expensive numerical integration and the substantial output size required to run the UM, individual research groups have had to rely on supercomputers. The limitations of such computing resources, especially environments blocked from outside network connections, reduce the efficiency and effectiveness of conducting research with the model and of improving its component codes. Therefore, this study presents detailed guidance for installing a new version of the UM on high-performance parallel computers (Linux clusters) owned by individual researchers, which should help researchers work with the UM more easily. The numerical integration performance of the UM on Linux clusters was also evaluated for two model resolutions, N96L85 (1.875° × 1.25° with 85 vertical levels up to 85 km) and N48L70 (3.75° × 2.5° with 70 vertical levels up to 80 km). The one-month integration times using 256 cores for the AMIP and CMIP simulations at N96L85 resolution were 169 and 205 min, respectively, and the one-month integration time for an N48L70 AMIP run using 252 cores was 33 min. Simulated 2-m surface temperature and precipitation intensity were compared with ERA5 reanalysis data: the spatial distributions of the simulated results agreed qualitatively with those of ERA5, despite quantitative differences caused by the different resolutions and by atmosphere-ocean coupling. In conclusion, this study confirms that the UM can be successfully installed and used on high-performance Linux clusters.
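
The wall-clock figures quoted above translate directly into a rough computational cost per simulated month, the kind of arithmetic a group sizing its own cluster would do; the sketch below only restates the abstract's numbers.

```python
# Back-of-the-envelope cost from the wall-clock times quoted in the abstract
# (each run integrates one simulated month).
runs = {
    # configuration: (cores, wall-clock minutes per simulated month)
    "N96L85 AMIP": (256, 169),
    "N96L85 CMIP": (256, 205),
    "N48L70 AMIP": (252, 33),
}

for name, (cores, minutes) in runs.items():
    core_hours = cores * minutes / 60          # core-hours per simulated month
    throughput = 24 * 60 / minutes             # simulated months per wall-clock day
    print(f"{name}: {core_hours:,.0f} core-hours/month, ~{throughput:.1f} sim-months/day")
```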

Development of sequential sampling plan for Frankliniella occidentalis in greenhouse pepper (고추 온실에서 꽃노랑총채벌레의 축차표본조사법 개발)

  • SoEun Eom;Taechul Park;Kimoon Son;Jung-Joon Park
    • Korean Journal of Environmental Biology
    • /
    • v.40 no.2
    • /
    • pp.164-171
    • /
    • 2022
  • Frankliniella occidentalis is an invasive insect pest that attacks over 500 species of host plants and transmits viruses such as tomato spotted wilt virus (TSWV). Despite their efficiency in controlling insect pests, pesticides are limited by residues, cost, and environmental burden. Therefore, a fixed-precision sampling plan was developed. The sampling method for F. occidentalis adults in pepper greenhouses consists of spatial distribution analysis, a sampling stop line, and control decision-making. For sampling, the plant was divided into an upper part (180 cm above ground), a middle part (120-160 cm above ground), and a lower part (70-110 cm above ground). In the ANCOVA, the P values for the intercept and slope were estimated at 0.94 and 0.87, respectively, indicating no significant differences among the levels of the pepper plant. In the spatial distribution analysis, coefficients were derived from Taylor's power law (TPL) using the data pooled across plant levels, based on a three-flower sampling unit. F. occidentalis adults showed an aggregated distribution in greenhouse peppers. The TPL coefficients were used to develop a fixed-precision sampling stop line. For control decision-making, the reference action thresholds were set at 3 and 18; with these two action thresholds, Nmax values were calculated as 97 and 1,149, respectively. Using the Resampling Validation for Sampling Plans (RVSP) software and the results obtained from the greenhouses, simulated validation of the sampling method showed a reasonable level of precision.
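
The abstract does not reproduce the stop-line equation; fixed-precision stop lines derived from TPL coefficients commonly follow Green's (1970) formulation, and the sketch below assumes that form with hypothetical a, b, and precision D values rather than the coefficients estimated in the paper.

```python
# Green's fixed-precision sequential sampling stop line, built from Taylor's
# power law (s^2 = a * m^b). Coefficients and precision here are placeholders.
a, b = 2.0, 1.4   # hypothetical TPL coefficients (b > 1 implies aggregation)
D = 0.25          # fixed precision level (SE/mean)

def stop_line(n: int) -> float:
    """Cumulative count T_n at which sampling can stop after n sample units."""
    return (D**2 / a) ** (1.0 / (b - 2.0)) * n ** ((b - 1.0) / (b - 2.0))

# Sampling stops once the running total of thrips counted over n three-flower
# units meets or exceeds T_n (the line decreases with n for aggregated pests).
for n in (5, 10, 20, 40):
    print(n, round(stop_line(n), 1))
```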

Design and Implementation of a Web Application Firewall with Multi-layered Web Filter (다중 계층 웹 필터를 사용하는 웹 애플리케이션 방화벽의 설계 및 구현)

  • Jang, Sung-Min;Won, Yoo-Hun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.157-167
    • /
    • 2009
  • Recently, leakage of confidential and personal information has been taking place on the Internet more frequently than ever before. Most such online security incidents are caused by attacks on vulnerabilities in carelessly developed web applications. Attacks on web applications cannot be detected by existing firewalls and intrusion detection systems, and signature-based detection has limited capability to detect new threats. Therefore, much research on detecting attacks on web applications employs anomaly-based detection methods built on web traffic analysis. Research on anomaly-based detection through normal web traffic analysis focuses on three problems: how to analyze given web traffic accurately, the system performance needed to inspect application payloads in packets in order to detect application-layer attacks, and the maintenance and cost of the many newly installed network security devices. The UTM (Unified Threat Management) system, proposed as a solution, aims to resolve all of these security problems at once, but it is not widely used because of its low efficiency and high cost. In addition, the web filter, which performs one of the functions of the UTM system, cannot adequately detect the variety of recent, sophisticated attacks on web applications. To resolve these problems, studies on web application firewalls are being carried out as a new kind of network security system. Because such studies focus on speeding up packet processing through high-priced hardware, the cost of deploying a web application firewall keeps rising. Moreover, current anomaly-based detection technologies that do not take the characteristics of the web application into account cause many false positives and false negatives. To reduce false positives and false negatives, this study proposes a real-time anomaly detection method based on analyzing the lengths of the parameter values contained in web clients' requests. It also designs and proposes a WAF (Web Application Firewall) that can be applied to low-priced or legacy systems to process application data without dedicated hardware, and it proposes a method to resolve the sluggish performance caused by copying packets into the application area for application-data processing. Consequently, this study makes it possible to deploy an effective web application firewall at low cost, at a time when deploying yet another security system is considered burdensome given the many network security systems already in use.
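
The abstract does not spell out the statistical model applied to parameter-value lengths, so the following is only a minimal sketch of the general idea: learn a per-parameter length profile from normal traffic, then flag requests whose parameter lengths deviate strongly. The training queries, threshold k, and helper names are hypothetical.

```python
from statistics import mean, stdev
from urllib.parse import parse_qsl

# Hypothetical "normal" query strings used to learn per-parameter length profiles.
normal_queries = [
    "id=1042&name=kim&page=3",
    "id=877&name=lee&page=12",
    "id=15&name=park&page=1",
]

samples: dict[str, list[int]] = {}
for q in normal_queries:
    for key, value in parse_qsl(q):
        samples.setdefault(key, []).append(len(value))

# parameter -> (mean length, std of length)
profile = {k: (mean(v), stdev(v) if len(v) > 1 else 1.0) for k, v in samples.items()}

def is_anomalous(query: str, k: float = 3.0) -> bool:
    """Flag a request if any parameter value is much longer than its learned profile."""
    for key, value in parse_qsl(query):
        m, s = profile.get(key, (0.0, 1.0))
        if len(value) > m + k * max(s, 1.0):
            return True
    return False

# A suspiciously long 'id' value (e.g., an injection payload) is flagged.
print(is_anomalous("id=1' OR '1'='1' UNION SELECT pw FROM users--&name=kim&page=2"))
```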

Ammonia Decomposition over Ni Catalysts Supported on Zeolites for Clean Hydrogen Production (청정수소 생산을 위한 암모니아 분해 반응에서 Ni/Zeolite 촉매의 반응활성에 관한 연구)

  • Jiyu Kim;Kyoung Deok Kim;Unho Jung;Yongha Park;Ki Bong Lee;Kee Young Koo
    • Journal of the Korean Institute of Gas
    • /
    • v.27 no.3
    • /
    • pp.19-26
    • /
    • 2023
  • Hydrogen, a clean energy source free of COx emissions, is poised to replace fossil fuels, and its usage is on the rise. Despite its high energy content per unit mass, hydrogen faces limitations in storage and transportation due to its low storage density and the challenges of long-term storage. In contrast, ammonia offers a high storage capacity per unit volume and is relatively easy to liquefy, making it an attractive option for storing and transporting large volumes of hydrogen. Because NH3 decomposition is an endothermic reaction, achieving excellent low-temperature catalytic activity is essential for process efficiency and cost-effectiveness. This study examined the effects of different zeolite supports (5A, NaY, ZSM5) on NH3 decomposition activity, considering differences in pore structure, cations, and Si/Al ratio. Notably, the 5A zeolite facilitated high dispersion of Ni across the surface, inside the pores, and within the structure. Its low Si/Al ratio contributed to abundant acid sites, enhancing ammonia adsorption. Additionally, the Na and Ca cations in the support created medium-basic sites that improved N2 desorption rates. As a result, among the prepared catalysts, the 15 wt% Ni/5A catalyst exhibited the highest NH3 conversion and a high H2 formation rate of 23.5 mmol/gcat·min (30,000 mL/gcat·h, 600 °C). This performance was attributed to the strong metal-support interaction and the enhanced N2 desorption rate provided by the medium-basic sites.
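
As a rough cross-check of the quoted H2 formation rate, one can convert the space velocity into a molar NH3 feed and apply the NH3 → ½N2 + 3/2H2 stoichiometry. The sketch below assumes a pure NH3 feed and an ideal-gas molar volume at standard conditions, neither of which is stated in the abstract.

```python
# Rough consistency check of the reported H2 formation rate (assumptions:
# pure NH3 feed, ideal-gas molar volume of 22,414 mL/mol at 0 °C and 1 atm).
ghsv = 30_000          # space velocity [mL NH3 / gcat / h]
vm = 22_414            # molar volume [mL/mol]
h2_rate = 23.5         # reported H2 formation rate [mmol / gcat / min]

nh3_feed = ghsv / vm / 60 * 1000      # NH3 feed [mmol / gcat / min]
max_h2 = 1.5 * nh3_feed               # NH3 -> 1/2 N2 + 3/2 H2
print(f"NH3 feed ≈ {nh3_feed:.1f}, max H2 ≈ {max_h2:.1f} mmol/gcat·min, "
      f"implied conversion ≈ {h2_rate / max_h2:.0%}")
```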

Comparison of Batch Assay and Random Assay Using Automatic Dispenser in Radioimmunoassay (핵의학 체외 검사에서 자동분주기를 이용한 Random Assay 가능성평가)

  • Moon, Seung-Hwan;Lee, Ho-Young;Shin, Sun-Young;Min, Gyeong-Sun;Lee, Hyun-Joo;Jang, Su-Jin;Kang, Ji-Yeon;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.4
    • /
    • pp.323-329
    • /
    • 2009
  • Purpose: Radioimmunoassay (RIA) has usually been performed as a batch assay. To improve the efficiency of RIA without increasing cost and time, random assay could be an alternative. We investigated the feasibility of random assay using an automatic dispenser by assessing the agreement between batch assay and random assay. Materials and Methods: The experiments were performed with four items: triiodothyronine (T3), free thyroxine (fT4), prostate-specific antigen (PSA), and carcinoembryonic antigen (CEA). For each item, sera from twenty patients, the standards, and the control samples were used. Measurements were performed four times at 3-hour intervals by both random assay and batch assay. The coefficients of variation (CV) of the standard samples and the patient data for T3, fT4, PSA, and CEA were assessed. The intraclass correlation coefficient (ICC) and the coefficient of correlation were calculated to assess the agreement between the two methods. Results: The CVs (%) of T3, fT4, PSA, and CEA measured by batch assay were 3.2±1.7%, 3.9±2.1%, 7.1±6.2%, and 11.2±7.2%; the CVs by random assay were 2.1±1.7%, 4.8±3.1%, 3.6±4.8%, and 7.4±6.2%. The ICCs between the batch assay and random assay were 0.9968 (T3), 0.9973 (fT4), 0.9996 (PSA), and 0.9901 (CEA). The coefficients of correlation between the batch assay and random assay were 0.9924 (T3), 0.9974 (fT4), 0.9994 (PSA), and 0.9989 (CEA) (p<0.05). Conclusion: The results of random assay performed within a day showed strong agreement with those of batch assay. These results suggest that random assay using an automatic dispenser could be used in radioimmunoassay.
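
A minimal sketch of the agreement statistics named in the abstract, computed on hypothetical paired measurements: the per-series coefficient of variation and the Pearson coefficient of correlation. A full ICC additionally requires ANOVA-based variance components, which are omitted here, and the numbers below are placeholders rather than the study's data.

```python
import numpy as np

# Hypothetical paired results for the same sera measured by the two approaches.
batch_assay  = np.array([1.21, 0.98, 1.75, 2.10, 1.43, 0.89])
random_assay = np.array([1.19, 1.01, 1.72, 2.15, 1.40, 0.91])

def cv_percent(x: np.ndarray) -> float:
    """Coefficient of variation (%) = 100 * SD / mean."""
    return 100.0 * x.std(ddof=1) / x.mean()

r = np.corrcoef(batch_assay, random_assay)[0, 1]   # Pearson coefficient of correlation
print(f"CV(batch) = {cv_percent(batch_assay):.1f}%, "
      f"CV(random) = {cv_percent(random_assay):.1f}%, r = {r:.4f}")
```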

Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags: active tags carry their own power source and can operate on their own, whereas passive tags are small and low-cost, which makes them more suitable for the distribution industry. A reader processes the information received from tags, and an RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied in a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the adoption of RFID systems, several problems (price, size, power consumption, security) must be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by simultaneous responses of multiple tags. Anti-collision schemes for RFID systems fall into three categories: probabilistic, deterministic, and hybrid. ALOHA-based protocols are a probabilistic method and tree-based protocols a deterministic one. In ALOHA-based protocols, time is divided into multiple slots, and tags randomly select their own slots in which to transmit their IDs; however, because they are probabilistic, ALOHA-based protocols cannot guarantee that all tags are identified. In contrast, tree-based protocols guarantee that a reader identifies all tags within its transmission range. In tree-based protocols, the reader sends a query and tags respond to it with their IDs; when two or more tags respond, a collision occurs and the reader generates and sends a new query. Frequent collisions degrade identification performance, so reducing collisions efficiently is necessary to identify tags quickly. Each RFID tag carries a 96-bit EPC (Electronic Product Code) ID, and tags from the same company or manufacturer have similar IDs sharing a prefix. Unnecessary collisions therefore occur when multiple tags are identified using the Query Tree protocol, which increases the number of query-responses and the idle time, so that the identification time grows significantly. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, the Collision Tree protocol and the Query Tree protocol identify only one bit per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose the Adaptive M-ary Query Tree protocol, which improves identification performance using m-bit recognition, collision information from tag IDs, and a prediction technique. We compare the proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
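
The adaptive m-bit scheme itself is not detailed in the abstract, but the baseline Query Tree procedure it improves on can be sketched as follows: the reader extends the queried prefix by one bit whenever two or more tags answer, and it is exactly these extra query-responses that collision-aware, m-bit variants try to cut. The tag IDs below are short hypothetical bit strings rather than 96-bit EPCs.

```python
from collections import deque

def query_tree_identify(tag_ids):
    """Baseline binary Query Tree anti-collision: split the prefix on every collision."""
    identified, queries = [], 0
    pending = deque([""])                       # start with the empty prefix
    while pending:
        prefix = pending.popleft()
        queries += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:                # a single responder is identified
            identified.append(responders[0])
        elif len(responders) > 1:               # collision: extend prefix by one bit
            pending.append(prefix + "0")
            pending.append(prefix + "1")
        # no responder: an idle query, which also wastes time
    return identified, queries

# Tags sharing a long common prefix (as within one manufacturer's EPC range)
# force many extra queries, the situation the proposed protocol targets.
tags = ["11010010", "11010011", "11010110", "11011001"]
ids, n_queries = query_tree_identify(tags)
print(sorted(ids) == sorted(tags), n_queries)
```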

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.141-156
    • /
    • 2013
  • Research on the welfare services of local governments in Korea has tended to focus on isolated issues such as the disabled, childcare, and the aging population (Kang, 2004; Jung et al., 2009). Lately, however, local officials have come to realize that they need more comprehensive welfare services for all residents, not only for the above-mentioned target groups. Nevertheless, studies based on a target-group approach remain the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System comprehensively manages 292 welfare benefits provided by 17 ministries and about 40 thousand welfare services provided by 230 local authorities in Korea; its purpose is to improve the efficiency of the social welfare delivery process. Studies of local government expenditure have been on the rise in the decades since the restart of local autonomy, but such studies face limitations in data collection. Measurement of a local government's welfare effort (spending) has relied primarily on per-capita expenditures or budgets set aside for welfare. This practice of using a per-capita monetary value as a proxy for welfare effort rests on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). The expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, using the actual amount spent or a percentage figure as a dependent variable has limitations: because budgets and expenditures are greatly influenced by a local government's total budget, relying on such monetary values may inflate or deflate the true welfare effort (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, i.e., salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011). This paper used local government welfare service data from the detailed data sets linked to the Social Security Information System. Its purpose is to analyze the factors that affected the self-funded social welfare spending of 230 local authorities in 2012, applying a multiple-regression-based model to the pooled financial data from the system. In our research model, we use a local government's welfare budget as a share of its total budget (%) as the measurement of its welfare effort, thereby excluding central government subsidies or support used for local welfare services, because central government support does not truly reflect a locality's own welfare effort. The dependent variable is the volume of self-funded welfare spending, and the independent variables fall into three categories: socio-demographic characteristics, the local economy, and the financial capacity of the local government. Local authorities were categorized into three groups (districts, cities, and suburban areas), and a dummy variable was used as a control for the local political factor. The analysis demonstrated that the volume of self-funded welfare spending is commonly influenced by the ratio of the welfare budget to the total local budget, the infant population, the financial self-reliance ratio, and the level of unemployment. Interestingly, the influential factors differ with the size of the local government: in the analysis of the determinants of self-funded welfare spending, we found significant effects of local government financial characteristics (the degree of financial independence, the financial independence rate, and the share of the social welfare budget), of the regional economy (the job opening-to-application ratio), and of population characteristics (the proportion of infants). These results imply that local authorities should adopt differentiated welfare strategies according to their own conditions and circumstances. The contribution of this paper is that it identifies the significant factors influencing the self-funded welfare spending of local governments in Korea.
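
A minimal sketch of the kind of multiple regression described above, using statsmodels on synthetic data: the dependent variable is the welfare budget share, and the predictors mirror the abstract's socio-demographic, economic, and fiscal categories plus a group dummy. The variable names and values are fabricated for illustration, not drawn from the Social Security Information System.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 230  # number of local authorities, as in the paper; the values are synthetic

df = pd.DataFrame({
    "welfare_budget_ratio": rng.uniform(5, 35, n),    # dependent: welfare budget / total budget (%)
    "infant_rate":          rng.uniform(2, 8, n),     # socio-demographic factor
    "unemployment_rate":    rng.uniform(1, 6, n),     # local economy factor
    "self_reliance_ratio":  rng.uniform(10, 70, n),   # financial capacity factor
    "is_district":          rng.integers(0, 2, n),    # dummy control for authority type
})

X = sm.add_constant(df[["infant_rate", "unemployment_rate",
                        "self_reliance_ratio", "is_district"]])
model = sm.OLS(df["welfare_budget_ratio"], X).fit()
print(model.summary().tables[1])   # coefficient estimates and p-values
```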

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • Many studies have long been conducted in academia on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways with the rapid growth of online business, companies are carrying out many types of campaigns on a scale incomparable to the past. However, as fatigue from duplicate exposure grows, customers increasingly perceive campaigns as spam. From a corporate standpoint, the effectiveness of campaigns themselves is also decreasing: investment costs keep rising while the actual success rate remains low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system ultimately aims to increase the success rate of campaigns by collecting and analyzing customer-related data and using them for targeting, and recent attempts have been made to predict campaign responses using machine learning. Because campaign data contain many features, selecting appropriate features is very important. If all input features are used to classify a large amount of data, training time grows as the classification task expands, so a minimal input feature set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes near-noise features should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among the greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when the feature space is large they can show poor classification performance and require long training times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in searching for the feature subsets that underpin machine learning model performance, using statistical characteristics of the data processed in the campaign system. In the proposed approach, features with a strong influence on performance are derived first and features with a negative effect are removed; the sequential method is then applied to improve search efficiency and to enable generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm: compared with the original data set, the greedy algorithm, the genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction accuracy was higher. In addition, when predicting campaign success, the improved feature selection algorithm was found to be helpful in analyzing and interpreting the prediction results by providing the importance of the derived features. These included features such as age, customer rating, and sales, whose importance was already known statistically, but also features such as the combined product name, the average three-month data consumption rate, and wireless data usage over the last three months, which campaign planners had rarely used to select targets yet were unexpectedly selected as important for campaign response. This confirms that basic attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
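
The improved algorithm is described only at a high level, but the baseline sequential forward selection it starts from can be sketched as below: at each step the greedy loop adds the feature that most improves cross-validated accuracy. The data are synthetic and the scoring model is an arbitrary choice for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for campaign-response data (feature semantics are anonymous here).
X, y = make_classification(n_samples=500, n_features=15, n_informative=5, random_state=0)

def sequential_forward_selection(X, y, k, estimator):
    """Baseline SFS: greedily add the feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = {
            j: cross_val_score(estimator, X[:, selected + [j]], y, cv=5).mean()
            for j in remaining
        }
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

chosen = sequential_forward_selection(X, y, k=5, estimator=LogisticRegression(max_iter=1000))
print("selected feature indices:", chosen)
```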

Interpretation and Meaning of Celadon Inlaid with Sanskrit Mantras in the late Goryeo Dynasty (고려 후기 범자 진언명 상감청자의 해석과 의미)

  • Lee Jun-kwang
    • MISULJARYO - National Museum of Korea Art Journal
    • /
    • v.104
    • /
    • pp.70-100
    • /
    • 2023
  • The celadon made during the Goryeo era, a time when Buddhism flourished in Korea, naturally contains many elements of Buddhist culture. Among these wares, inlaid celadon with Sanskrit inscriptions bears a close relationship with esoteric Buddhism. However, research on deciphering the Sanskrit inscriptions has made little progress because of the small number of extant examples. Four recent excavations at the No. 23 kiln site in Sadang-ri, Gangjin have yielded new materials that allow the existing materials to be categorized into several types. The results obtained through the reading and interpretation of the inscriptions are as follows. First, the Sanskrit characters inlaid on the celadon are parts of mantras. Inscriptions in which only one character is apparent cannot be deciphered, but scholars have shown that others, written in the manner of a wheel mantra, represent the "Mantra for Purifying the Dharma-Realm," "Six-Syllable Mantra of the Vidyaraja," "Sweet Dew Mantra," "Jewel Pavilion Mantra," "Mantra of the Savior Bodhisattva," "Dharani of the Mind of the Buddha of Infinite Life," and "Mantra for Extinguishing Evil Rebirth." Each mantra was written in Siddham script. Second, they are believed to have been produced during the thirteenth and fourteenth centuries based on the arrangement of the inscriptions and the way the "Sweet Dew Mantra" is included in the "40 Hands Mantra." In particular, the celadon pieces with a mantra inlaid in a concentric manner are dated to the late thirteenth and early fourteenth centuries based on their production characteristics. Third, the interpretation of the inlaid mantras suggests that they all refer to the "Shattering of Hell" and "Rebirth in the Pure Land." Based on this, it can be concluded that some of these inlaid celadon wares with mantras may have been used in Buddhist rituals for the dead, such as the ritual for feeding hungry ghosts (施餓鬼會). Also, because the Sadang-ri No. 23 kiln site and the "ga" area of the site are believed to have produced royal celadon, it is likely that these rituals were performed at the royal court or at a temple under its influence. Fourth, this Goryeo celadon inlaid with Sanskrit mantras was not a direct influence of Yuan Chinese ceramics; it emerged by adopting Yuan Chinese Buddhist culture, itself influenced by Tibetan Buddhism, into Goryeo Korea's existing esoteric practices. Fifth, the celadon wares inlaid with a Sanskrit mantra reveal a facet of the personal esoteric rituals that prevailed in late Goryeo society; changes in esotericism triggered by the desire for relief from anxieties are likewise exemplified in epitaph tablets and coffins that express a shared desire to escape hell and be reborn in paradise. Sixth, the inlaid celadon with Sanskrit mantras shares some common features with other crafts: the use of Siddham Sanskrit, the focus on the Six-Syllable Mantra of the Vidyaraja, the correspondence with the contents of the mantras found on Buddhist bells, wooden coffins, and memorial tablets, and an arrangement similar to that found on roof tiles. The major difference is that the Mantra for Extinguishing Evil Rebirth and the Sweet Dew Mantra have not yet been found on other craftworks. I believe that inscriptions of Sanskrit mantras are found mainly on inlaid celadon vessels because of their relatively low production cost and efficiency.