
A Study on Characteristics of Lincomycin Degradation by Optimized TiO2/HAP/Ge Composite using Mixture Analysis (혼합물분석을 통해 최적화된 TiO2/HAP/Ge 촉매를 이용한 Lincomycin 제거특성 연구)

  • Kim, Dongwoo;Chang, Soonwoong
    • Journal of the Korean GEO-environmental Society / v.15 no.1 / pp.63-68 / 2014
  • In this study, the photocatalytic degradation of the antibiotic lincomycin (LM) was determined with various catalyst composites of titanium dioxide ($TiO_2$), hydroxyapatite (HAP), and germanium (Ge) under UV-A irradiation. First, various types of composite catalysts were investigated to compare their photocatalytic potential. The removal efficiencies were observed in the order $TiO_2/HAP/Ge$ > $TiO_2/Ge$ > $TiO_2/HAP$. The composition of $TiO_2/HAP/Ge$ was then investigated using a statistical approach based on mixture analysis design, one of the response surface methods. The independent variables $TiO_2$ ($X_1$), HAP ($X_2$), and Ge ($X_3$), each with 6 conditions, were set up to determine their effects on LM ($Y_1$) and TOC ($Y_2$) degradation. Regression analysis with analysis of variance (ANOVA) showed a significant p-value (p < 0.05) and high coefficients of determination ($R^2$ of $Y_1=99.28%$ and $R^2$ of $Y_2=98.91%$). Contour plots and response curves showed the effects of the $TiO_2/HAP/Ge$ composition on LM degradation under UV-A irradiation. The estimated optimal composition for TOC removal ($Y_2$) was $X_1=0.6913$, $X_2=0.2313$ and $X_3=0.0756$ in coded values. In validation experiments, the results agreed well with the model's predictions, with mean LM and TOC removals of 99.2% and 49.3%, respectively.
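
The mixture-design fit described above can be sketched with a quadratic Scheffé model, a standard form for three-component mixture analysis. The runs and response values below are purely illustrative, not the paper's data; only the optimal coded point (0.6913, 0.2313, 0.0756) is taken from the abstract.

```python
import numpy as np

def scheffe_design_matrix(X):
    """Terms of a quadratic Scheffe mixture model for three components:
    x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept; components sum to 1)."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

def fit_mixture_model(X, y):
    """Ordinary least-squares fit of the Scheffe model; returns coefficients."""
    A = scheffe_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return scheffe_design_matrix(X) @ beta

# Illustrative mixture runs (coded X1 = TiO2, X2 = HAP, X3 = Ge fractions).
X = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],
    [0.6913, 0.2313, 0.0756],
])
y = np.array([80.0, 40.0, 55.0, 85.0, 90.0, 60.0, 99.2])  # hypothetical LM removal (%)
beta = fit_mixture_model(X, y)
```

Contour plots like those in the study would then be drawn by evaluating `predict` over a grid on the simplex.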

OBSTETRICIAN'S VIEW OF TEENAGE PREGNANCY:PRESENT STATUS, PREVENTION AND PSYCHIATRIC CONSULTATION (산과 의사가 인지한 10대 임신의 현황, 예방, 정신과 자문)

  • Kim, Eun-Young;Kim, Boong-Nyun;Hong, Kang-E;Lee, Young-Sik
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.13 no.1 / pp.117-128 / 2002
  • Objectives:To obtain a more vivid picture of the present status and prevention of teenage pregnancy, this survey was conducted with obstetricians, who manage pregnant teenagers in real clinical situations, as the study subjects. Methods:A structured survey form about teenage pregnancy was sent to 2,800 obstetricians. The form covered the frequency, characteristics, decision-making processes, and psychiatric aspects of teenage pregnancy. 349 obstetricians replied, and we analysed these data. Results:(1) The trend of teenage pregnancy was mildly increasing. (2) The most common cases were unwanted pregnancies from a continuing sexual relationship with a boyfriend rather than from forced or accidental sexual relationships with multiple partners. (3) The most common reason for carrying to labor was missing the window for an artificial abortion. (4) The pregnant girls' problems were conduct behaviors and poor information about contraception rather than sexual abuse or mental retardation. (5) Most obstetricians perceived the necessity of psychiatric consultation; however, psychiatric consultation was rare due to parental refusal and the absence of available psychiatric facilities. (6) For the prevention of teenage pregnancy, the most important measure was practical education about contraception. Conclusions:Based on the results of this study, a further study using a structured interview schedule with pregnant girls is needed to detect the risk factors of teenage pregnancy and to build an effective, systematic approach to pregnant girls.


Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.19-38 / 2017
  • Collaborative filtering (CF) algorithms have been popularly used for implementing recommender systems, and there have been many prior studies on improving the accuracy of CF. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances the performance of conventional CF by using additional information. In this research, we propose a new hybrid recommender system which fuses CF with the results of social network analysis on trust and distrust relationship networks among users to enhance prediction accuracy. The proposed algorithm is based on memory-based CF. However, when calculating the similarity between users, it considers not only the correlation of the users' numeric rating patterns, but also the users' in-degree centrality values derived from the trust and distrust relationship networks. Specifically, it is designed to amplify the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the trust relationship network, and to attenuate the similarity when the neighbor has higher in-degree centrality in the distrust relationship network. The proposed algorithm considers four types of user relationships in total - direct trust, indirect trust, direct distrust, and indirect distrust - and uses four adjusting coefficients, which set the level of amplification / attenuation for the in-degree centrality values derived from the direct / indirect trust and distrust relationship networks. To determine the optimal adjusting coefficients, genetic algorithms (GA) were adopted. Against this background, we named our proposed algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate the performance of SNACF-GA, we used a real-world data set called the 'Extended Epinions dataset' provided by 'trustlet.org'. This data set contains user responses (rating scores and reviews) after purchasing specific items (e.g. car, movie, music, book), as well as trust / distrust relationship information indicating whom users trust or distrust. The experimental system was developed mainly in Microsoft Visual Basic for Applications (VBA), but we also used UCINET 6 for calculating the in-degree centrality of the trust / distrust relationship networks, and Palisade Software's Evolver, a commercial package that implements genetic algorithms. To examine the effectiveness of our proposed system more precisely, we adopted two comparison models. The first is conventional CF, which uses only users' explicit numeric ratings when calculating the similarities between users; that is, it does not consider trust / distrust relationships at all. The second is SNACF (Social Network Analysis-based CF), which differs from SNACF-GA in that it considers only direct trust / distrust relationships and does not use GA optimization. The performance of the proposed algorithm and the comparison models was evaluated by average MAE (mean absolute error). The experiments showed that the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust were 0, 1.4287, 1.5, and 0.4615, respectively. This implies that distrust relationships between users are more important than trust ones in recommender systems. In terms of recommendation accuracy, SNACF-GA (Avg. MAE = 0.111943), which reflects both direct and indirect trust / distrust relationship information, was found to outperform conventional CF (Avg. MAE = 0.112638), and it also showed better recommendation accuracy than SNACF (Avg. MAE = 0.112209). To confirm whether these differences are statistically significant, we applied paired samples t-tests. The difference between SNACF-GA and conventional CF was statistically significant at the 1% significance level, and the difference between SNACF-GA and SNACF at the 5% level. Our study found that trust / distrust relationships can be important information for improving the performance of recommendation algorithms. In particular, distrust relationship information was found to have a greater impact on the performance improvement of CF. This implies that we need to pay more attention to distrust (negative) relationships than to trust (positive) ones when tracking and managing social relationships between users.
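
The amplify-on-trust / attenuate-on-distrust idea can be sketched as follows. The abstract does not give the exact combination formula, so the multiplicative form, the function name, and the centrality inputs below are assumptions; only the four-coefficient structure comes from the paper.

```python
def adjusted_similarity(base_sim, trust_cent, distrust_cent, coeffs):
    """Adjust a user-user rating similarity by the neighbor's in-degree
    centrality in the trust and distrust networks.

    coeffs = (a_dt, a_it, a_dd, a_id): adjusting coefficients for direct
    trust, indirect trust, direct distrust, and indirect distrust (tuned
    by a genetic algorithm in the paper). The multiplicative form here is
    an assumed stand-in for the paper's formula."""
    a_dt, a_it, a_dd, a_id = coeffs
    amplify = 1.0 + a_dt * trust_cent["direct"] + a_it * trust_cent["indirect"]
    attenuate = 1.0 + a_dd * distrust_cent["direct"] + a_id * distrust_cent["indirect"]
    return base_sim * amplify / attenuate

# Neighbor trusted by many (high trust in-degree): similarity grows.
s_up = adjusted_similarity(0.5, {"direct": 0.4, "indirect": 0.1},
                           {"direct": 0.0, "indirect": 0.0}, (1.0, 1.0, 1.0, 1.0))
# Neighbor distrusted by many: similarity shrinks.
s_down = adjusted_similarity(0.5, {"direct": 0.0, "indirect": 0.0},
                             {"direct": 0.4, "indirect": 0.1}, (1.0, 1.0, 1.0, 1.0))
```

A GA would then search the four coefficients to minimize the average MAE of the resulting CF predictions.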

Development of Systematic Process for Estimating Commercialization Duration and Cost of R&D Performance (기술가치 평가를 위한 기술사업화 기간 및 비용 추정체계 개발)

  • Jun, Seoung-Pyo;Choi, Daeheon;Park, Hyun-Woo;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.139-160 / 2017
  • Technology commercialization creates effective economic value by linking a company's R&D processes and outputs to the market, and it is important in that it allows a company to gain and maintain a sustained competitive advantage. For a specific technology to be commercialized, it goes through the stages of technology planning, technology research and development, and commercialization, and this process involves a great deal of time and money. The duration and cost of technology commercialization are therefore important decision information for determining a market entry strategy, and even more important information for a technology investor rationally evaluating the technology's value. It is thus very important to estimate the duration and cost of technology commercialization scientifically. However, research on technology commercialization is insufficient and the related methodologies are lacking. In this study, we propose an evaluation model that can estimate the duration and cost of commercializing R&D technology for small and medium-sized enterprises. To accomplish this, we collected the public data of the National Science & Technology Information Service (NTIS) and the survey data provided by the Small and Medium Business Administration, and developed an estimation model of the commercialization duration and cost of R&D performance using these data, based on the market approach, one of the technology valuation methods. Specifically, we defined the commercialization process as consisting of development planning, development progress, and commercialization. From the NTIS database and the Small and Medium Business Administration's survey of SME technical statistics, we derived the key variables, such as stage-wise R&D costs and duration, factors of the technology itself, factors of the technology development, and environmental factors. First, given the data, we estimated the costs and duration at each technology readiness level (basic research, applied research, development research, prototype production, commercialization) for each industry classification. Then we developed and verified a research model for each industry classification. The results of this study can be summarized as follows. First, the model can be reflected in technology valuation models and used to estimate the objective economic value of a technology. The duration and cost from the technology development stage to the commercialization stage are critical factors that greatly influence the discounting of the future sales from the technology, so the results of this study can contribute to more reliable technology valuation by estimating commercialization duration and cost scientifically, based on past data. Second, we verified models of several kinds, including statistical models and data mining models. The statistical models help identify the important factors for estimating the duration and cost of technology commercialization, and the data mining models provide rules or algorithms to be applied in an advanced technology valuation system. Finally, this study reaffirms the importance of commercialization costs and durations, which have not been actively studied previously. The results confirm the significant factors affecting commercialization costs and duration, and show that these factors differ by industry classification. Practically, the results can be reflected in technology valuation systems provided by national research institutes and R&D staff to deliver sophisticated technology valuations. The relevant logic or algorithms can be implemented independently and directly reflected in such a system, so researchers can use them in practice immediately. In conclusion, the results of this study make not only theoretical contributions but also practical ones.
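
The stage-wise, per-industry estimation idea could be sketched roughly as below. The record layout, factor names, and figures are entirely hypothetical, and a plain least-squares model stands in for the paper's statistical and data-mining models.

```python
import numpy as np
from collections import defaultdict

# Hypothetical records: (industry, TRL stage, tech factor, env factor, duration in months)
records = [
    ("electronics", "applied", 0.7, 0.3, 18.0),
    ("electronics", "applied", 0.5, 0.6, 24.0),
    ("electronics", "prototype", 0.8, 0.4, 12.0),
    ("bio", "applied", 0.6, 0.5, 30.0),
    ("bio", "applied", 0.4, 0.7, 36.0),
]

def fit_stage_models(records):
    """Fit one least-squares duration model per (industry, TRL stage) group,
    mirroring the stage-wise, industry-wise estimation described above."""
    groups = defaultdict(list)
    for industry, stage, tech, env, duration in records:
        groups[(industry, stage)].append((tech, env, duration))
    models = {}
    for key, rows in groups.items():
        A = np.array([[1.0, t, e] for t, e, _ in rows])  # intercept + two factors
        y = np.array([d for *_, d in rows])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        models[key] = beta
    return models

models = fit_stage_models(records)
```

A cost model would follow the same grouping, with the response column replaced by stage-wise R&D cost.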

Predicting Regional Soybean Yield using Crop Growth Simulation Model (작물 생육 모델을 이용한 지역단위 콩 수량 예측)

  • Ban, Ho-Young;Choi, Doug-Hwan;Ahn, Joong-Bae;Lee, Byun-Woo
    • Korean Journal of Remote Sensing / v.33 no.5_2 / pp.699-708 / 2017
  • The present study aimed to develop an approach for predicting soybean yield using a crop growth simulation model at the regional level, where detailed, site-specific information on cultivation management practices is not easily accessible for model input. The CROPGRO-Soybean model included in the Decision Support System for Agrotechnology Transfer (DSSAT) was employed, and Illinois, a major soybean production region of the USA, was selected as the study region. As a first step to predicting the soybean yield of Illinois with the CROPGRO-Soybean model, genetic coefficients representative of each soybean maturity group (MG I~VI) were estimated through sowing date experiments using domestic and foreign cultivars of diverse maturity at Seoul National University Farm ($37.27^{\circ}N$, $126.99^{\circ}E$) over two years. The model using the representative genetic coefficients simulated the developmental stages of cultivars within each maturity group fairly well. Soybean yields for $10km{\times}10km$ grids in Illinois were simulated from 2000 to 2011 with weather data under 18 simulation conditions comprising the combinations of three maturity groups, three seeding dates, and two irrigation regimes. Planting dates and maturity groups were assigned differently to three sub-regions divided longitudinally. The yearly state yields estimated by averaging all the grid yields simulated under non-irrigated and fully-irrigated conditions differed greatly from the statistical yields and did not explain the annual trend of yield increase due to improved cultivation technologies. Using the observed grain yield data of 9 agricultural districts in Illinois and the district yields estimated from the simulated grid yields under the 18 simulation conditions, a multiple regression model was constructed to estimate soybean yield at the agricultural district level; a year variable was also added to this model to reflect the yearly yield trend. This model explained the yearly and district yield variation fairly well, with a coefficient of determination of $R^2=0.61$ (n = 108). Yearly state yields, calculated by weighting the model-estimated yearly average agricultural district yields by the cultivation area of each district, corresponded very closely ($R^2=0.80$) to the yearly statistical state yields. Furthermore, the model predicted the state yield fairly well in 2012, a year whose data were not used for model construction and in which severe yield reduction was recorded due to drought.
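
The district-level multiple regression with a year term might look like the following sketch. The exact variable layout is assumed: the 18 simulated-condition yields and a year variable enter as predictors, and the demonstration data are synthetic, not the paper's.

```python
import numpy as np

def fit_district_yield_model(sim_yields, years, obs_yields):
    """Regress observed district yields on the 18 simulated-condition yields
    plus a year term capturing the technology trend (variable layout assumed).
    sim_yields: (n, 18); years: (n,); obs_yields: (n,)."""
    A = np.column_stack([np.ones(len(years)), years, sim_yields])
    beta, *_ = np.linalg.lstsq(A, obs_yields, rcond=None)
    return beta

def predict_yield(beta, sim_yields, years):
    A = np.column_stack([np.ones(len(years)), years, sim_yields])
    return A @ beta

# Synthetic check: recover a known linear relation (illustrative only).
rng = np.random.default_rng(0)
sim = rng.random((40, 18))
yrs = np.arange(2000, 2040, dtype=float)
truth = np.concatenate([[2.0, 0.05], rng.random(18)])
obs = np.column_stack([np.ones(40), yrs, sim]) @ truth
beta = fit_district_yield_model(sim, yrs, obs)
```

State yields would then be obtained by weighting each district's prediction by its cultivation area and summing.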

The Effects of International Entrepreneurial Proclivity of SME's on Corporate Capability and Export Performance: Focused on Consumer Goods and Industrial Goods (중소기업의 국제기업가 성향이 기업역량 및 수출성과에 미치는 영향: 산업재와 소비재를 중심으로)

  • Yang, Hee-Soon;Jung, Min-Ji
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.10 no.2 / pp.121-134 / 2015
  • This study empirically analyzed the effects of the international entrepreneurial proclivity of exporting small and medium enterprises on corporate capability and export performance according to product type: industrial goods and consumer goods. International entrepreneurial proclivity consists of risk-taking, proactiveness, and innovativeness, and corporate capability consists of technological capability and product differentiation capability. Risk-taking, innovativeness, and proactiveness had a significant impact on technological capability in the case of industrial goods; in the case of consumer goods, only risk-taking and innovativeness had a significant impact. Product differentiation capability for consumer goods was significantly influenced by, in order, innovativeness, proactiveness, and risk-taking, while for industrial goods only innovativeness had an impact, and a negative one. When the impact of corporate capability on export performance was examined, only technological capability had a significant impact on both financial and strategic performance in the case of industrial goods, while both technological capability and product differentiation capability had a significant impact in the case of consumer goods. Examining the direct impact of international entrepreneurial proclivity on financial performance, financial performance in the case of industrial goods was significantly influenced by, in order, proactiveness and risk-taking, and in the case of consumer goods by, in order, innovativeness and proactiveness. However, the impact of international entrepreneurial proclivity on strategic performance showed different results: in the case of industrial goods, only risk-taking had a significant impact on strategic performance, while in the case of consumer goods it was significantly influenced by, in order, innovativeness, proactiveness, and risk-taking. The direct impact of international entrepreneurial proclivity on export performance thus differed between financial and strategic performance, and it differed by product type as well. This suggests that a different approach is needed according to product type in order to increase export performance, since the impact of international entrepreneurial proclivity on corporate capability, of corporate capability on export performance, and of international entrepreneurial proclivity on export performance all differed according to product type.


The Study on the Priority of First Person Shooter game Elements using Delphi Methodology (FPS게임 구성요소의 중요도 분석방법에 관한 연구 1 -델파이기법을 이용한 독립요소의 계층설계와 검증을 중심으로-)

  • Bae, Hye-Jin;Kim, Suk-Tae
    • Archives of design research / v.20 no.3 s.71 / pp.61-72 / 2007
  • Having started with "Space War", the first game produced at MIT in the 1960s, the gaming industry expanded rapidly over a short period of time: the brand-new games launched on the market contain so many different elements making up a single piece of content that games are often called the 'most comprehensive ultimate fruit' of design technologies. This also translates into a large increase in the number of things which need to be considered in developing games, complicating plans for the financial budget, the work force, and the time to be committed. Therefore, an approach that analyzes the elements making up a game, computes the importance of each, and assesses games to be developed in the future is the key to successful game development. Many decision-making activities are required in such a planning process, and the decision-making task involves several difficulties: the multi-factor problem; the uncertainty problem, which impedes the elements from being quantified; the complex multi-purpose problem, whose outcomes cause confusion among decision-makers; and the problem of determining the priority order of the multiple stages leading to the decision. In this study we suggest AHP (Analytic Hierarchy Process) so that these problems can be worked out comprehensively, and a logical and rational alternative plan can be proposed through the quantification of "uncertain" data. The analysis took FPS (First Person Shooter) games, which currently dominate the gaming industry, as the subject of this study. The most important considerations in conducting an AHP analysis are to group the elements of the subjects accurately and objectively, to arrange them hierarchically, and to analyze their importance through pair-wise comparison between the elements. The study is composed of two parts: analyzing these elements and computing the importance among them, and choosing an alternative plan. This paper is particularly focused on the Delphi-technique-based objective element analysis and hierarchy design of FPS games.
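
The pair-wise comparison step of AHP can be sketched as follows, using the standard principal-eigenvector weighting and Saaty's consistency ratio. The three-element comparison matrix is hypothetical, not the paper's FPS data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix via the principal
    eigenvector, plus Saaty's consistency ratio (random index for n <= 5)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # principal (Perron) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                         # normalize weights to sum to 1
    lam = eigvals[k].real
    ci = (lam - n) / (n - 1)             # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    cr = ci / ri if ri else 0.0          # consistency ratio (want < 0.1)
    return w, cr

# Hypothetical comparison of three FPS elements on Saaty's 1-9 scale:
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
```

In the study's setting, the matrix rows would correspond to the Delphi-derived FPS elements at one level of the hierarchy.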


A Study on Design Education Re-engineering by Multi-disciplinary Approach (다학제적 접근을 통한 대학디자인 교육혁신 프로그램 연구)

  • Lee, Soon-Jong;Kim, Jong-Won;Chu, Wu-Jin;Chae, Sung-Zin;Yoon, Su-Hyun
    • Archives of design research / v.20 no.3 s.71 / pp.299-314 / 2007
  • For the past 20 years, the growth and development of university design education institutes has contributed to the industrial development of our country. Due to technological fluctuation and changes in the industrial structure in the latter half of the 20th century, enterprises are demanding professionally oriented design manpower. The principle that emerges from the examples of the advanced nations is to accommodate the demands of social change and apply them to design education programs. In particular, in order to respond promptly to industrial demand, the advanced nations adopted "multidisciplinary design education programs" to lead innovation in design globally. The objective of this research, consequently, is to suggest an educational system and program through which designers can be educated to obtain the complex knowledge and techniques demanded by industry and enterprise. To adapt to the new business environment, designers especially should have knowledge and techniques in both engineering and business administration. We suggest the IPDI, a multidisciplinary design education system and program made up of the coordinated operation of major classes, on-the-job training connections, an educational system for creating a research base, and an innovative design development program for applying and synthesizing alternative proposals on the joint use of training facilities, by connecting the education of design, business administration, and engineering.


The Study of Volume Data Aggregation Method According to Lane Usage Ratio (차로이용률을 고려한 지점 교통량 자료의 집락화 방법에 관한 연구)

  • An Kwang-Hun;Baek Seung-Kirl;NamKoong Sung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.4 no.3 s.8 / pp.33-43 / 2005
  • Traffic condition monitoring systems serve as the foundation for all intelligent transportation system operations. Loop detectors and video image processing are the most widely used technologies for condition monitoring on Korean highways. Lane usage is defined as the proportion of total link volume served by each lane. In this research, the one-day lane usage (LU) of a two-lane link was 56% : 44%, that of a three-lane link was 39% : 37% : 24%, and that of a four-lane link was 25% : 29% : 26% : 21%. These analyses reveal that the lane distributions of a link are not uniform. This research investigates the general concept of lane usage using collected loop detector data, and finds that the lane distribution differs by lane while lane usage is consistent by time of day.
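
The lane usage (LU) definition above reduces to a one-line share calculation. The daily volume counts below are hypothetical, chosen only to reproduce the reported two-lane split.

```python
def lane_usage(lane_volumes):
    """Lane usage (LU): each lane's share of total link volume, in percent."""
    total = sum(lane_volumes)
    return [100.0 * v / total for v in lane_volumes]

# Hypothetical one-day volumes for a two-lane link:
lu = lane_usage([5600, 4400])
```

Aggregating detector volumes by time-of-day interval before applying this ratio would reproduce the paper's time-of-day consistency check.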


The Relationship Analysis between the Epicenter and Lineaments in the Odaesan Area using Satellite Images and Shaded Relief Maps (위성영상과 음영기복도를 이용한 오대산 지역 진앙의 위치와 선구조선의 관계 분석)

  • CHA, Sung-Eun;CHI, Kwang-Hoon;JO, Hyun-Woo;KIM, Eun-Ji;LEE, Woo-Kyun
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.3 / pp.61-74 / 2016
  • The purpose of this paper is to analyze the relationship between the location of the epicenter of a medium-sized earthquake (magnitude 4.8) that occurred on January 20, 2007 in the Odaesan area and lineament features, using a shaded relief map (1/25,000 scale) and satellite images from LANDSAT-8 and KOMPSAT-2. Previous studies have analyzed lineament features in tectonic settings primarily by examining two-dimensional satellite images and shaded relief maps. These methods, however, limit the visual interpretation of relief features, long considered the major component of lineament extraction. To overcome these limitations of two-dimensional images, this study examined three-dimensional images, produced from a Digital Elevation Model and a drainage network map, for lineament extraction. This approach reduces the mapping errors introduced by visual interpretation. In addition, spline interpolation was conducted to produce the density maps of lineament frequency, intersection, and length required to estimate the lineament density at the epicenter of the earthquake. An algorithm was developed to compute the Value of the Relative Density (VRD), representing the relative density of lineaments on the map: the VRD is the lineament density of each map grid divided by the maximum density value on the map. As such, it is a quantified value that indicates the concentration level of lineament density across the area impacted by the earthquake. Using this algorithm, the VRD calculated at the earthquake epicenter from the frequency, intersection, and length density maps ranged from approximately 0.60 (min) to 0.90 (max). However, because the mapped images differed in conditions such as solar altitude and azimuth, the mean VRD was used rather than values categorized by image. The results show that the average frequency-based VRD was approximately 0.85, about 21% higher than the intersection- and length-based VRDs, demonstrating the close relationship between lineaments and the epicenter. Therefore, it is concluded that the density map analysis described in this study, based on lineament extraction, is valid and can be used as a primary data analysis tool for future earthquake research.
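
The VRD definition (each grid cell's lineament density divided by the map's maximum density) can be sketched directly. The 3x3 density grid and the epicenter cell below are hypothetical, chosen so the epicenter's VRD matches the reported 0.85 frequency value.

```python
import numpy as np

def value_of_relative_density(density_map):
    """VRD: each grid cell's lineament density divided by the map's maximum,
    giving a 0-1 relative concentration value as defined in the paper."""
    d = np.asarray(density_map, dtype=float)
    return d / d.max()

# Hypothetical 3x3 frequency-density grid; assume the epicenter is cell (1, 1).
grid = [[2.0, 4.0, 3.0],
        [5.0, 8.5, 6.0],
        [1.0, 7.0, 10.0]]
vrd = value_of_relative_density(grid)
epicenter_vrd = vrd[1, 1]
```

The same function would be applied to the intersection- and length-density maps, with the per-image results averaged as described above.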