• Title/Summary/Keyword: Site-Specific Performance

Ordinary kriging approach to predicting long-term particulate matter concentrations in seven major Korean cities

  • Kim, Sun-Young;Yi, Seon-Ju;Eum, Young Seob;Choi, Hae-Jin;Shin, Hyesop;Ryou, Hyoung Gon;Kim, Ho
    • Environmental Analysis Health and Toxicology
    • /
    • v.29
    • /
    • pp.12.1-12.8
    • /
    • 2014
  • Objectives: Cohort studies of associations between air pollution and health have used exposure prediction approaches to estimate individual-level concentrations. A common prediction method used in Korean cohort studies is ordinary kriging. In this study, the performance of ordinary kriging models for long-term concentrations of particulate matter less than or equal to $10{\mu}m$ in diameter ($PM_{10}$) in seven major Korean cities was investigated, with a focus on spatial prediction ability. Methods: We obtained hourly $PM_{10}$ data for 2010 at 226 urban ambient monitoring sites in South Korea and computed annual average $PM_{10}$ concentrations at each site. Given the annual averages, we developed ordinary kriging prediction models for each of the seven major cities and for the entire country, using an exponential covariance reference model and maximum likelihood estimation. For model evaluation, cross-validation was performed, and mean square error and R-squared ($R^2$) statistics were computed. Results: Mean annual average $PM_{10}$ concentrations in the seven major cities ranged between 45.5 and $66.0{\mu}g/m^3$ (standard deviation=2.40 and $9.51{\mu}g/m^3$, respectively). Cross-validated $R^2$ values in Seoul and Busan were 0.31 and 0.23, respectively, whereas the other five cities had $R^2$ values of zero. The national model produced a higher cross-validated $R^2$ (0.36) than the city-specific models. Conclusions: Overall, the ordinary kriging models performed poorly for the seven major cities and for South Korea as a whole, although the national model performed better than the city-specific models. To improve model performance, future studies should examine prediction approaches that incorporate $PM_{10}$ source characteristics.
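
The ordinary kriging workflow summarized above (an exponential covariance model fitted to annual-average $PM_{10}$ at monitoring sites, scored by cross-validated $R^2$) can be sketched compactly. The snippet below is a minimal illustration, not the authors' code: the covariance parameters, site coordinates, and concentrations are placeholders, leave-one-out cross-validation stands in for whatever cross-validation scheme the paper used, and the maximum-likelihood fitting of covariance parameters is omitted.

```python
import numpy as np

def exp_cov(h, sill=25.0, rng=30.0, nugget=5.0):
    """Exponential covariance C(h) = sill * exp(-h / rng), with a nugget added at h = 0."""
    return sill * np.exp(-h / rng) + nugget * (h == 0)

def ok_predict(obs_xy, obs_z, target_xy):
    """Ordinary kriging prediction at one target location."""
    n = len(obs_z)
    d_obs = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    d_tgt = np.linalg.norm(obs_xy - target_xy, axis=-1)
    # Kriging system with a Lagrange row forcing the weights to sum to 1
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d_obs)
    A[n, n] = 0.0
    b = np.append(exp_cov(d_tgt), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ obs_z)

def loo_cv_r2(xy, z):
    """Leave-one-out cross-validated R^2 over all monitoring sites."""
    preds = np.array([ok_predict(np.delete(xy, i, axis=0), np.delete(z, i), xy[i])
                      for i in range(len(z))])
    ss_res = np.sum((z - preds) ** 2)
    ss_tot = np.sum((z - z.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical example: 20 sites with a smooth spatial trend plus noise
gen = np.random.default_rng(0)
xy = gen.uniform(0, 50, size=(20, 2))                       # site coordinates, km
z = 55 + 5 * np.sin(xy[:, 0] / 10) + gen.normal(0, 2, 20)   # annual PM10, ug/m3
print(f"cross-validated R^2 = {loo_cv_r2(xy, z):.2f}")
```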

Low Temperature Thermal Desorption (LTTD) Treatment of Contaminated Soil

  • Alistair Montgomery;Joo, Wan-Ho;Shin, Won-Sik
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference
    • /
    • 2002.09a
    • /
    • pp.44-52
    • /
    • 2002
  • Low temperature thermal desorption (LTTD) has become one of the cornerstone technologies used for the treatment of contaminated soils and sediments in the United States. LTTD technology was first used in the mid-1980s for soil treatment on sites managed under the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA), or Superfund. Implementation was facilitated by CERCLA regulations that require only that applicable regulations be met, thus avoiding the need for protracted and expensive permit applications for thermal treatment equipment. The initial equipment designs typically came from technology-transfer sources. Asphalt manufacturing plants were converted to direct-fired LTTD systems, and conventional calciners were adapted for use as indirect-fired LTTD systems. Other innovative designs included hot-sand recycle technology (initially developed for synfuels production from tar sand and oil shale), recycle sweep gas, travelling belts, and batch-charged vacuum chambers, among others. These systems were used to treat soil contaminated with total petroleum hydrocarbons (TPH), polycyclic aromatic hydrocarbons (PAHs), pesticides, polychlorinated biphenyls (PCBs), and dioxin with varying degrees of success. Ultimately, performance and cost considerations established the suite of systems used for LTTD soil treatment applications today. This paper briefly reviews the development of LTTD systems and summarizes the design, performance, and cost characteristics of the equipment in use today. Designs reviewed include continuous-feed direct-fired and indirect-fired equipment, batch-feed systems, and in-situ equipment. Performance is compared in terms of before-and-after contaminant levels in the soil and permissible emission levels in the stack gas vented to the atmosphere. The review of air emissions standards covers regulations in the U.S. and the European Union (EU). Key cost centers for the mobilization and operation of LTTD equipment are identified and compared for the different types of LTTD systems in use today. A work chart is provided for the selection of the optimum LTTD system for site-specific applications. LTTD technology continues to be a cornerstone technology for soil treatment in the U.S. and elsewhere. Examples of leading-edge LTTD technologies developed in the U.S. that are now being delivered locally in global projects are described.

Assessment of Discoidal Polymeric Nanoconstructs as a Drug Carrier (약물 운반체로서의 폴리머 디스크 나노 입자에 대한 평가)

  • BAE, J.Y.;OH, E.S.;AHN, H.J.;KEY, Jaehong
    • Journal of Biomedical Engineering Research
    • /
    • v.38 no.1
    • /
    • pp.43-48
    • /
    • 2017
  • Chemotherapy, radiation therapy, and surgery are the major methods used to treat cancer. However, current cancer treatments are associated with severe side effects and high recurrence rates. Recent studies on engineering nanoparticles as drug carriers suggest possibilities for specific targeting and spatiotemporal release of drugs. While many nanoparticles demonstrate lower toxicity and better targeting than free drugs, their performance still needs to improve dramatically in terms of targeting accuracy, immune response, and non-specific accumulation in organs. One possible way to overcome these challenges is to make nanoparticles that are precisely controlled with respect to size, shape, surface properties, and mechanical stiffness. Here, we demonstrate $500{\times}200nm$ discoidal polymeric nanoconstructs (DPNs) as a drug delivery carrier. DPNs were prepared using a top-down fabrication method that we previously reported to control shape as well as size. Moreover, DPNs carry multiple payloads: poly(lactic-co-glycolic acid) (PLGA), polyethylene glycol (PEG), lipid-rhodamine B dye (RhB), and salinomycin. In this study, we demonstrated the potential of DPNs as a drug carrier to treat cancer.

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.23-45
    • /
    • 2020
  • Big data is being generated in a wide variety of fields such as medical care, manufacturing, logistics, sales, and social media, and dataset characteristics are correspondingly diverse. To remain competitive, companies need to improve their decision-making capacity using classification algorithms, yet most do not have sufficient knowledge of which classification algorithm suits a specific problem area. In other words, determining an appropriate classification algorithm based on the characteristics of a dataset has been a task requiring expertise and effort, because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms is not fully understood. Moreover, there has been little research on meta-features that reflect multi-class characteristics. The purpose of this study is therefore to empirically analyze whether meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. Meta-features of multi-class datasets were grouped into two factors (data structure and data complexity), and seven representative meta-features were selected. Among these, the Herfindahl-Hirschman Index (HHI), originally a market-concentration measure, was included to replace the Imbalance Ratio (IR), and a new index called the Reverse ReLU Silhouette Score was developed and added to the meta-feature set. Six representative datasets from the UCI Machine Learning Repository (Balance Scale, Page Blocks, Car Evaluation, User Knowledge Modeling, Wine Quality (red), and Contraceptive Method Choice) were selected and classified with the algorithms chosen for the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, 10-fold cross validation was applied; within each fold, oversampling from 10% to 100% was applied and the meta-features of the resulting dataset were measured. The selected meta-features are HHI, Number of Classes, Number of Features, Entropy, Reverse ReLU Silhouette Score, Nonlinearity of Linear Classifier, and Hub Score; the F1-score was the dependent variable. The results show that six meta-features, including the Reverse ReLU Silhouette Score and the HHI proposed in this study, have a significant effect on classification performance: (1) the proposed HHI meta-feature is significant for classification performance; (2) the number of features has a significant effect on classification performance and, unlike the number of classes, its effect is positive; (3) the number of classes has a negative effect on classification performance; (4) entropy has a significant effect on classification performance; (5) the Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by classification algorithm were consistent, except that in the per-algorithm regressions the number of features is not significant for Naïve Bayes, unlike the other algorithms.
This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. The practical contributions are as follows: (1) the results can be used to develop a system that recommends classification algorithms according to dataset characteristics, and (2) because data characteristics differ across situations, data scientists often search for the optimal algorithm by repeatedly adjusting algorithm parameters, which wastes hardware, cost, time, and manpower; this study can help reduce that waste. The study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and a conclusion and discussion.
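
Two of the meta-features listed above have simple closed forms and can be computed directly from a label vector: the Herfindahl-Hirschman Index (HHI) over class proportions, used here as a class-imbalance measure in place of the imbalance ratio, and class entropy. The sketch below illustrates only these two; the Reverse ReLU Silhouette Score and the remaining meta-features are specific to the paper and are not reproduced, and the label vectors are illustrative rather than taken from the UCI datasets.

```python
import numpy as np
from collections import Counter

def hhi(labels):
    """Herfindahl-Hirschman Index over class shares: the sum of squared class proportions.
    Equals 1/k for k perfectly balanced classes and approaches 1.0 as one class dominates."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    shares = counts / counts.sum()
    return float(np.sum(shares ** 2))

def class_entropy(labels):
    """Shannon entropy (bits) of the class distribution."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Illustrative label vectors (not the UCI datasets used in the study)
y_balanced = [0] * 50 + [1] * 50 + [2] * 50
y_skewed = [0] * 120 + [1] * 20 + [2] * 10
print(round(hhi(y_balanced), 3), round(class_entropy(y_balanced), 3))  # ~0.333, ~1.585
print(round(hhi(y_skewed), 3), round(class_entropy(y_skewed), 3))      # higher HHI, lower entropy
```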

Performance of Northern Exposure Index in Reducing Estimation Error for Daily Maximum Temperature over a Rugged Terrain (북향개방지수가 복잡지형의 일 최고기온 추정오차 저감에 미치는 영향)

  • Chung, U-Ran;Lee, Kwang-Hoe;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.9 no.3
    • /
    • pp.195-202
    • /
    • 2007
  • The normalized difference in incident solar energy between a target surface and a level surface (overheating index, OHI) is useful for eliminating estimation error in site-specific maximum temperature over complex terrain. Because of the complexity of its calculation, however, an empirical proxy variable called the northern exposure index (NEI), which combines slope and aspect, has been used to estimate OHI based on empirical relationships between the two. An experiment with real-world landscape and temperature data was carried out to evaluate the performance of the NEI-derived OHI (N-OHI) in reducing spatial interpolation error for daily maximum temperature, compared with the original OHI. We collected daily maximum temperature data from 7 sites in a mountainous watershed with a $149 km^2$ area and a 795 m elevation range ($651{\sim}1,445m$) in Pyongchang, Kangwon province. The northern exposure index was calculated for all 166,050 grid cells constituting the watershed, based on a 30-m digital elevation model. Daily OHI was calculated for the same watershed and regressed against the variation in NEI, and the regression equations were used to estimate N-OHI for the 15th of each month. Deviations of daily maximum temperature at the 7 sites from values measured at the nearby synoptic station were calculated from June 2006 to February 2007 and regressed against the N-OHI. The same procedure was repeated with the original OHI values. The ratios of the sum of squared errors attributable to the N-OHI were 0.46 (winter), 0.24 (fall), and 0.01 (summer), while those attributable to the original OHI were 0.52, 0.37, and 0.15, respectively.
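
The workflow above (derive a slope/aspect-based exposure index from a DEM, then regress maximum-temperature deviations on it) can be sketched as follows. The abstract does not give the exact NEI formula, so the index below is an assumed proxy, sin(slope)·cos(aspect), which is largest for steep north-facing cells; the DEM, station index values, and temperature deviations are placeholders.

```python
import numpy as np

def slope_aspect(dem, cell=30.0):
    """Slope (rad) and aspect (rad, 0 = north-facing) from a DEM via central differences."""
    dz_dy, dz_dx = np.gradient(dem, cell)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    return slope, aspect

def north_exposure(dem, cell=30.0):
    """Assumed NEI proxy: larger for steep, north-facing cells (not the paper's exact formula)."""
    slope, aspect = slope_aspect(dem, cell)
    return np.sin(slope) * np.cos(aspect)

# Hypothetical 30-m DEM grid
dem = np.random.default_rng(1).normal(1000.0, 150.0, size=(100, 100))
nei_grid = north_exposure(dem)

# Regress daily-maximum-temperature deviations (station minus synoptic reference)
# on station-level index values; all numbers below are placeholders.
nei_at_sites = np.array([0.12, -0.05, 0.30, 0.02, -0.20, 0.18, 0.07])
tmax_dev     = np.array([-0.8,  0.3, -1.6, -0.1,  1.1, -1.0, -0.4])   # deg C
coef, intercept = np.polyfit(nei_at_sites, tmax_dev, 1)
print(f"Tmax deviation ~= {coef:.2f} * NEI + {intercept:.2f}")
```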

Case study on frequency bands contributing the single number quantity for heavy-weight impact sound based on assessment method changes (중량충격음 평가방법 변화에 따른 단일수치평가량 기여 주파수 대역 사례 분석)

  • Hye-kyung Shin;Sang Hee Park;Kyoung-woo Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.565-571
    • /
    • 2023
  • With the introduction of the post-verification system, on-site measurement of floor impact sound performance has become mandatory, and the evaluation method has changed. To track performance changes since the policy was implemented, research is needed on how the characteristics of heavy-weight impact sound change under the revised evaluation method. In this study, we analyzed the contribution of the sound pressure level in each frequency band to the single-number quantity for a multi-family housing complex of 59 households with the same floor plan and floor structure, based on the changed impact sources and evaluation indicators. A direct comparison is difficult because the two single-number evaluation methods calculate frequency-band contributions differently. Nevertheless, under the evaluation method used before the post-verification system was introduced (tire impact source, evaluated as L'i,Fmax,AW), the average contribution of the 63 Hz band was 80.8 % and that of the 125 Hz band was 19.2 %. Under the current evaluation method (rubber ball impact source, evaluated as L'iA,Fmax), the low-frequency contribution decreased, with average contributions of 33.1 % at 50 Hz ~ 80 Hz, 58.7 % at 100 Hz ~ 160 Hz, 6.9 % at 200 Hz ~ 315 Hz, and 1.3 % at 400 Hz ~ 630 Hz. This result is a case analysis of the studied apartment buildings, and measurement data from a wider variety of apartment buildings need to be analyzed.
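
The band-contribution calculation described above can be illustrated with a short sketch: A-weight the per-band maximum sound pressure levels, convert to energy, and report each band's share of the total. The band levels below are hypothetical, not measurements from the study; only the A-weighting corrections are standard values.

```python
import numpy as np

# 1/3-octave band centre frequencies covering 50 Hz ~ 630 Hz
bands_hz = [50, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500, 630]
# Standard A-weighting corrections (dB) at those centre frequencies
a_weight = [-30.2, -26.2, -22.5, -19.1, -16.1, -13.4,
            -10.9, -8.6, -6.6, -4.8, -3.2, -1.9]
# Hypothetical per-band maximum levels for one measurement (dB)
band_spl = [78, 75, 72, 68, 66, 63, 60, 57, 54, 50, 47, 44]

weighted = np.array(band_spl) + np.array(a_weight)   # A-weighted band levels
energy = 10.0 ** (weighted / 10.0)                   # convert to energy
share = 100.0 * energy / energy.sum()                # each band's contribution, %

for f, s in zip(bands_hz, share):
    print(f"{f:>4} Hz : {s:5.1f} %")
print(f"overall A-weighted level: {10.0 * np.log10(energy.sum()):.1f} dB")
```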

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rank to more useful resources or experts. What resources are considered useful in a folksonomic system? Is there a standard superior to frequency or freshness? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages that are linked to by more higher-scored pages. HITS differs from PageRank in that it uses two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm can be applied to a folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, originally proposed for ranking Semantic Web resources, enables the calculation of importance scores for various resources unaffected by link direction. The weights of a property representing the mutual interaction between classes are assigned according to the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that the Semantic Web contains many heterogeneous classes, so applying a different appraisal standard to each class is more reasonable. This is similar to human evaluation, where different items are assigned specific weights that are then summed into a weighted average. Missing properties can also be checked more easily with this approach than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the order of assignment. This idea comes from studies in psychology in which expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his or her collections. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that the popularity of a topic in social media is temporary, recent data should carry more weight than old data. We propose a comprehensive folksonomy ranking framework in which all of these considerations are addressed and which can easily be customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and to show the mechanism for adjusting property, time, and expertise weights, we first use a dataset designed to analyze the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction appears preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a large difference in calculation time and memory use between the two kinds of algorithms: while the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper is applicable to various domains, including social media, where time value is considered important.
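
The mutual-reinforcement idea at the core of the framework (user expertise and document quality reinforcing each other through tagging links, with recent tagging events weighted more heavily) can be sketched in a few lines. This is an illustration of the concept with an assumed exponential time decay, not the paper's full class-oriented algorithm with property weights.

```python
import numpy as np

# ages[u, d] = age in days of user u's tagging of document d; np.inf means no link
ages = np.array([[  2.0,   30.0, np.inf],
                 [  5.0, np.inf,    1.0],
                 [np.inf,  10.0,    3.0]])
half_life = 14.0                                   # assumed time-decay half-life, days
W = np.where(np.isfinite(ages), 0.5 ** (ages / half_life), 0.0)

expertise = np.ones(W.shape[0])                    # one score per user
quality = np.ones(W.shape[1])                      # one score per document
for _ in range(50):                                # iterate to a fixed point, HITS-style
    quality = W.T @ expertise                      # documents gain from expert taggers
    expertise = W @ quality                        # users gain from quality documents
    quality /= np.linalg.norm(quality)
    expertise /= np.linalg.norm(expertise)

print("user expertise  :", np.round(expertise, 3))
print("document quality:", np.round(quality, 3))
```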

A Case Study of the Performance and Success Factors of ISMP(Information Systems Master Plan) (정보시스템 마스터플랜(ISMP) 수행 성과와 성공요인에 관한 사례연구)

  • Park, So-Hyun;Lee, Kuk-Hie;Gu, Bon-Jae;Kim, Min-Seog
    • Information Systems Review
    • /
    • v.14 no.1
    • /
    • pp.85-103
    • /
    • 2012
  • ISMP is a method for clearly specifying user requirements in the RFP (Request for Proposal) of IS development projects. Unlike conventional RFP preparation, which describes the user requirements of target systems rather superficially, ISMP systematically identifies business needs and the status of information technology, analyzes the user requirements in detail, and defines the specific functions of the target systems in detail. By increasing the clarity of the RFP, the scale and complexity of the related work can be calculated accurately, responding companies can prepare clearer proposals, and the fairness of proposal evaluation can be improved. Above all, chronic problems in this field, such as misunderstandings and conflicts between users and developers and the excessive burden on developers, can be resolved. This study is a case study that analyzes the execution process, outcomes, problems, and success factors of two pilot projects that introduced ISMP for the first time. The ISMP execution procedures at the actual sites were verified, and how the user requirements were described in the RFP was examined. Satisfaction with the ISMP-based RFP was found to be high compared with conventional RFPs. Although some problems occurred, such as difficulties in preparing the RFP and an increased workload due to the lack of understanding and execution experience of ISMP, overall there were positive effects: a clearer scope for the target systems, improved information sharing and cooperation between users and developers, seamless communication between issuing customer corporations and IT service companies, and a reduction in changes to user requirements. In-depth, action-research-style interviews with the persons in charge of the actual work identified the following ISMP success factors: prior consensus on the need for ISMP, securing execution resources through the support of the CEO and CIO, and selecting the specification level of the user requirements. The results of this study provide useful field information to corporations considering the adoption of ISMP and to IT service firms, and suggest future research directions to researchers in the field of IT service competitiveness.

Recent Changes in Bloom Dates of Robinia pseudoacacia and Bloom Date Predictions Using a Process-Based Model in South Korea (최근 12년간 아까시나무 만개일의 변화와 과정기반모형을 활용한 지역별 만개일 예측)

  • Kim, Sukyung;Kim, Tae Kyung;Yoon, Sukhee;Jang, Keunchang;Lim, Hyemin;Lee, Wi Young;Won, Myoungsoo;Lim, Jong-Hwan;Kim, Hyun Seok
    • Journal of Korean Society of Forest Science
    • /
    • v.110 no.3
    • /
    • pp.322-340
    • /
    • 2021
  • Due to climate change and the consequent rise in spring temperatures, the flowering time of Robinia pseudoacacia has advanced, and simultaneous blooming has occurred across different regions of South Korea. These changes in flowering time have become a major crisis for the domestic beekeeping industry, and the demand for accurate prediction of the flowering time of R. pseudoacacia is increasing. In this study, we developed and compared the performance of four models predicting the flowering time of R. pseudoacacia for the entire country: a Single Model for the whole country (SM), a Modified Single Model (MSM) using correction factors derived from SM, a Group Model (GM) estimating parameters for each region, and a Local Model (LM) estimating parameters for each site. To this end, bloom dates observed at 26 sites across the country over the past 12 years (2006-2017) and daily temperature data were used. Bloom dates in the north-central region, where the spring temperature increase was more than twice that of the southern regions, have advanced, and the difference relative to the southwest region decreased by 0.7098 days per year (p-value=0.0417). Model comparisons showed that MSM and LM performed better than the other models, with RMSE 24% and 15% lower than SM, respectively. Furthermore, validation with 16 additional sites over 4 years revealed that co-kriging of LM performed better than expanding MSM to the entire nation (RMSE: p-value=0.0118, Bias: p-value=0.0471). This study improved predictions of bloom dates for R. pseudoacacia and proposed methods for reliable expansion to the entire nation.
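
A process-based bloom-date model of the general kind compared above can be sketched as a thermal-time (growing-degree-day) accumulation: forcing units accumulate above a base temperature from a start date, and bloom is predicted when a threshold is reached. The base temperature, start day, threshold, and temperature series below are placeholders to be fitted per site or region, not the parameters estimated in the paper.

```python
import numpy as np

def predict_bloom_doy(daily_tmean, t_base=5.0, start_doy=32, threshold=350.0):
    """Return the day of year on which accumulated degree-days first exceed the threshold."""
    forcing = np.maximum(daily_tmean - t_base, 0.0)   # daily forcing above the base temperature
    forcing[:start_doy - 1] = 0.0                     # ignore days before accumulation starts
    cum = np.cumsum(forcing)
    hits = np.where(cum >= threshold)[0]
    return int(hits[0]) + 1 if hits.size else None    # convert 0-based index to day of year

# Hypothetical daily mean temperatures for one site and year (365 values, deg C)
doy = np.arange(1, 366)
tmean = 4.0 + 12.0 * np.sin((doy - 100) * np.pi / 183)
print("predicted bloom DOY:", predict_bloom_doy(tmean))
```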

Radiation Therapy Alone for Early Stage Non-small Cell Carcinoma of the Lung (초기 비소세포폐암의 방사선 단독치료)

  • Chun, Ha-Chung;Lee, Myung-Za
    • Radiation Oncology Journal
    • /
    • v.20 no.4
    • /
    • pp.323-327
    • /
    • 2002
  • Purpose: To evaluate the outcome of early-stage non-small cell lung cancer patients treated with radiation therapy alone and to define the optimal radiotherapeutic regimen for these patients. Materials and Methods: A retrospective review was performed on patients with stage I or II non-small cell carcinoma of the lung treated at our institution between June 1987 and May 2000. A total of 21 patients treated definitively with radiation therapy alone were included in this study. The age of the patients ranged from 53 to 81 years, with a median of 66 years. All the patients were male. The medical reasons for inoperability were lack of pulmonary reserve, cardiovascular disease, poor performance status, old age, and patient refusal, in decreasing order. Pathological evidence was not adequate to characterize the non-small cell subtype in two patients. Of the remaining 19 patients, 16 had squamous cell carcinoma and 3 had adenocarcinoma. Treatment was given with conventional fractionation, once a day, five times a week. The doses to the primary site ranged from 56 Gy to 59 Gy. No patients were lost to follow-up. Results: The overall survival rates for the entire group at 2, 3, and 5 years were 41%, 30%, and 21%, respectively. The cause-specific survival rates at 2, 3, and 5 years were 55%, 36%, and 25%, respectively. Intercurrent disease was the cause of death in two patients. The cumulative local failure rate at 5 years was 43%. Nine of the 21 patients had treatment failures after curative radiotherapy was attempted. Local recurrence as the first site of failure was documented in 7 patients; local failure alone therefore represented 78% of all failures. Patients whose tumors were smaller than 4 cm had a significantly better 5-year disease-free survival than those with tumors larger than 4 cm (0% vs 36%). Patients with a Karnofsky performance status below 70 did not differ significantly in actuarial survival from those with a status above 70 (25% vs 26%, p>0.05). Conclusion: Radiation therapy alone is an effective and safe treatment for early-stage non-small cell lung cancer patients who are medically inoperable or who refuse surgery. We also believe that a higher radiation dose to the primary site could improve the local control rate and, ultimately, the overall survival rate.