• Title/Summary/Keyword: Flash-over


The Monitoring Study of Exchange Cycle of Automatic Transmission Fluid (자동변속기유(ATF) 교환주기 모니터링 연구)

  • Lim, Young-Kwan;Jung, Choong-Sub;Lee, Jeong-Min;Han, Kwan-Wook;Na, Byung-Ki
    • Applied Chemistry for Engineering
    • /
    • v.24 no.3
    • /
    • pp.274-278
    • /
    • 2013
  • Automatic transmission fluid (ATF) is a specialized fluid used in vehicle automatic transmissions. Recently, vehicle manufacturers usually guarantee the fluid for 80000~100000 km of mileage, or specify no exchange at all. However, according to a survey by the Korea Institute of Petroleum Management, most drivers in the Republic of Korea change ATF after less than every 50000 km of driving, which can cause both serious environmental contamination by the used ATF and an increase in driving cost. In this study, various physical properties such as flash point, pour point, kinematic viscosity, dynamic viscosity at low temperature, total acid number and four-ball wear were investigated for both fresh ATF and ATF used over actual vehicle driving distances of 50000 km and 100000 km. Most physical properties remained within the ATF specification, but the foam characteristics of the used oil after 100000 km of running were unsuitable against the fresh-ATF specification. Therefore, an ATF exchange cycle of every 80000~100000 km of driving distance is recommended, considering its significant contribution to preventing environmental pollution and reducing driving cost.
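The pass/fail comparison the abstract describes can be illustrated with a small sketch. This is not the authors' procedure: the property names and limit values below are invented for illustration only.

```python
# Hypothetical sketch of checking measured used-oil properties against
# specification limits. All property names and limits here are invented.

SPEC = {  # property: (min, max); None means unbounded on that side
    "flash_point_C":     (160, None),
    "total_acid_number": (None, 3.0),
    "foam_tendency_ml":  (None, 50),
}

def check_spec(sample):
    """Return the list of properties that fall outside the spec limits."""
    failed = []
    for prop, (lo, hi) in SPEC.items():
        v = sample[prop]
        if (lo is not None and v < lo) or (hi is not None and v > hi):
            failed.append(prop)
    return failed

# A used-oil sample where only the foam property is out of spec
used_100k = {"flash_point_C": 178, "total_acid_number": 1.9,
             "foam_tendency_ml": 80}
print(check_spec(used_100k))
```

A sample passing every limit would return an empty list, mirroring the abstract's finding that only the foam characteristics fell outside the fresh-ATF specification.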

Optimization and characterization of biodiesel produced from vegetable oil

  • Mustapha, Amina T.;Abdulkareem, Saka A.;Jimoh, Abdulfatai;Agbajelola, David O.;Okafor, Joseph O.
    • Advances in Energy Research
    • /
    • v.1 no.2
    • /
    • pp.147-163
    • /
    • 2013
  • The world faces an energy crisis and environmental deterioration due to over-dependence on a single energy source, fossil fuel. Although fuel is a necessary ingredient for the industrial development and growth of any country, the fossil fuel that is the major source of energy for this purpose is finite, hence the need for alternative and renewable energy sources. The search for alternative energy sources has led to the acceptance of biofuel as a reliable alternative energy source. This work presents a study of the optimization of the transesterification of vegetable oil to biodiesel using NaOH as catalyst. A $2^4$ factorial design was employed to investigate the influence of oil-to-methanol ratio, temperature, NaOH concentration, and transesterification time on the yield of biodiesel from vegetable oil. The low and high levels of the key factors were 4:1 and 6:1 mole ratio, 30 and $60^{\circ}C$ temperatures, 0.5 and 1.0 wt% catalyst concentration, and 30 and 60 min reaction time. The results revealed that an oil-to-methanol molar ratio of 6:1, a transesterification temperature of $60^{\circ}C$, a catalyst concentration of 1.0 wt% and a reaction time of 30 min are the best operating conditions, giving an optimum yield of 95.8%. Characterization of the produced biodiesel indicated that the specific gravity, cloud point, flash point, sulphur content, viscosity, diesel index, cetane number, acid value, free glycerine, total glycerine and total recovery are 0.8899, 4, 13, 0.0087%, 4.83, 25, 54.6, 0.228 mg KOH/g, 0.018, 0.23% and 96% respectively. The results also indicate that the qualities of the biodiesel tested conform to the set standard. A model equation was developed from the results using a statistical tool.
Analysis of variance (ANOVA) of the data shows that the mole ratio of groundnut oil to methanol and the transesterification time have the most pronounced effects on biodiesel yield, with contributions of 55.06% and 9.22% respectively. It can be inferred from the various results obtained that vegetable oil locally produced from groundnut can be utilized as a feedstock for biodiesel production.
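The percent contributions reported from the ANOVA of a two-level factorial design can be sketched as follows. This is not the paper's analysis: the runs and yields below are invented, and only main effects are considered (interactions omitted for brevity).

```python
# Sketch: percent contribution of each main effect in a 2^k factorial
# design, from effect sums of squares. Data are invented for illustration.
from itertools import product

def contributions(levels, yields):
    """levels: one (+1/-1, ...) tuple per run; yields: the responses."""
    n = len(yields)                      # number of runs
    k = len(levels[0])                   # number of factors
    ss = []
    for f in range(k):
        # main effect = contrast / (n/2); SS = n * effect^2 / 4
        effect = sum(l[f] * y for l, y in zip(levels, yields)) / (n / 2)
        ss.append(n * effect ** 2 / 4)
    total = sum(ss)                      # main-effect SS only
    return [100 * s / total for s in ss]

# 2^2 toy example: two factors, four runs
runs = list(product([-1, 1], repeat=2))
ys = [80.0, 90.0, 82.0, 95.0]
print([round(c, 1) for c in contributions(runs, ys)])
```

The factor with the larger contrast dominates the contribution list, which is how a statement like "mole ratio contributes 55.06%" is obtained.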

A study on the evaluation of metal component in automatic transmission fluid by vehicle driving (차량 운행에 따른 자동변속기유(ATF) 금속분 분석평가 연구)

  • Lee, Joung-Min;Lim, Young-Kwan;Doe, Jin-Woo;Jung, Choong-Sub;Han, Kwan-Wook;Na, Byung-Ki
    • Journal of Energy Engineering
    • /
    • v.23 no.2
    • /
    • pp.28-34
    • /
    • 2014
  • Automatic transmission fluid (ATF) is the specialized fluid used in vehicle automatic transmissions. Recently, vehicle manufacturers usually guarantee the fluid for 80000~100000 km of mileage, or specify no exchange at all, but most drivers in the Republic of Korea change ATF after less than every 50000 km of driving. This can raise environmental contamination by used ATF and increase driving cost through frequent ATF changes. In this study, we investigated various physical properties such as flash point, fire point, pour point, kinematic viscosity, cold cranking simulator viscosity, total acid number, and metal component concentrations for fresh ATF and ATF used over 50000 km and 100000 km of driving. The results showed that the total acid number, pour point, and the Fe, Al and Cu components had increased compared to fresh ATF, but the two used oils (50000 km and 100000 km) had similar physical values and metal component concentrations.

Disaster risk predicted by the Topographic Position and Landforms Analysis of Mountainous Watersheds (산지유역의 지형위치 및 지형분석을 통한 재해 위험도 예측)

  • Oh, Chae-Yeon;Jun, Kye-Won
    • Journal of Korean Society of Disaster and Security
    • /
    • v.11 no.2
    • /
    • pp.1-8
    • /
    • 2018
  • Extreme climate phenomena caused by global climate change are occurring around the world, with heavy rains exceeding previous rainfall records. In particular, as flash floods follow heavy rainfall concentrated on mountains over a relatively short period of time, the likelihood of landslides increases. The Gangwon region suffers especially from landslide damage because most of it is mountainous and steep, with shallow soil. Therefore, this study predicts disaster risk by applying topographic classification and landslide risk prediction techniques to mountainous watersheds. Hazardous areas were classified by calculating the topographic position index (TPI) as the topographic classification technique, and the SINMAP method, a landslide prediction model, was used to predict areas where a landslide could occur. As a result, the topographic classification technique classified more than 63% of the total watershed as open slope and upper slope, and the SINMAP analysis identified about 58% of the total watershed as a hazard area. Because of recent development, measures to reduce mountain disasters are urgently needed, and stability measures should be established for the hazard zones.
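The TPI idea the abstract relies on is simply a cell's elevation minus the mean elevation of its neighborhood: positive values indicate ridges and upper slopes, negative values valleys. The sketch below is not the authors' implementation; the toy DEM, the neighborhood radius, and the class threshold are all invented.

```python
# Minimal sketch of the Topographic Position Index (TPI): cell elevation
# minus the mean elevation of its neighborhood. DEM and threshold invented.

def tpi(grid, r, c, radius=1):
    rows, cols = len(grid), len(grid[0])
    neigh = [grid[i][j]
             for i in range(max(0, r - radius), min(rows, r + radius + 1))
             for j in range(max(0, c - radius), min(cols, c + radius + 1))
             if (i, j) != (r, c)]
    return grid[r][c] - sum(neigh) / len(neigh)

def classify(value, flat=1.0):
    if value > flat:
        return "upper slope / ridge"
    if value < -flat:
        return "valley / lower slope"
    return "flat or open slope"

dem = [[100, 102, 101],
       [ 99, 110, 100],
       [ 98,  97,  96]]
print(classify(tpi(dem, 1, 1)))
```

Running this over every cell of a real DEM, with slope added as a second criterion, yields the slope-position classes (open slope, upper slope, etc.) reported in the abstract.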

Flood Runoff Computation for Mountainous Small Basins using WMS Model (WMS 모형을 활용한 산지 소하천 유역의 유출량 산정)

  • Chang, Hyung Joon;Lee, Jung Young;Lee, Hyo Sang
    • Journal of Korean Society of Disaster and Security
    • /
    • v.14 no.4
    • /
    • pp.9-15
    • /
    • 2021
  • The frequency of flash floods in mountainous areas is increasing due to the abnormal weather that has become more common in recent years, and the human and material damage they cause is increasing. Various plans for disaster mitigation have been established, but structural measures such as raising embankments and dredging are inappropriate for valleys and rivers in national parks, which prioritize nature protection. In this study, a flood risk assessment was conducted for the valleys and rivers of Gyeryongsan National Park in Korea using WMS (Watershed Modeling System), a rainfall-runoff model. The simulation showed flooding in three sub-catchments of Gyeryongsan National Park (Jusukgol, Sutonggol, Dinghaksa) when rainfall exceeding the 50-year return period occurs, and it confirmed a high risk to the trails and facilities that visitors use. The risk of trails in national parks was quantitatively presented through the results of this study, and we intend to present safe management guidelines for national parks in the future.

Critical Analyses of '2nd Science Inquiry Experiment Contest' (과학탐구 실험대회의 문제점 분석)

  • Paik, Seoung-Hey
    • Journal of The Korean Association For Science Education
    • /
    • v.15 no.2
    • /
    • pp.173-184
    • /
    • 1995
  • The purpose of this study was to analyse the problems of the 'Science Inquiry Experiment Contest (SIEC)', one of eight programs of 'The 2nd Student Science Inquiry Olympic Meet (SSIOM)'. The results and conclusions of this study were as follows: 1. The role of practical work within the science experiment needs to be reconsidered, because practical skills form one of the mainstays of current science, yet the assessment of students' laboratory skills in the contest was given little weight. It is necessary to remember what it means to be 'good at science'. There are two aspects: knowing and doing. Both are important and, in certain respects, quite distinct. Doing science is more of a craft activity, relying more on craft skill and tacit knowledge than on the conscious application of explicit knowledge. Doing science is also divided by many science educators into two aspects, 'process' and 'skill'. 2. The report and checklist assessment items overlapped. It was therefore suggested that the checklist items be limited to student actions that cannot be found in the reports. It is important to identify which activities produce a permanent assessable product and which do not. Skills connected with recording and reporting are likely to produce permanent evidence that can be evaluated after the experiment; those connected with manipulative skills involving processes are more ephemeral and need to be assessed as they occur. This division of students' experimental skills will contribute to accurate assessment of students' scientific inquiry experimental ability. 3. There were wide differences among the scores that the three evaluators recorded for the same participant, which means there was no concrete discussion among the evaluators before the contest.
Although the checklist items were set by the preparers of the contest experiments, concrete discussion before the contest was necessary because students' experimental actions were very diverse. There is a variety of scientific skills, so it is necessary to assess the performance of individual students across a range of skills. Most of the difficulties in the assessment of skills, however, arise from the interaction between measurement and use. To overcome these difficulties, not only must the mark awarded for each skill be recorded, which all examination groups obviously need, but a description of the work the student did when the skill was assessed must also be given, which not all groups need. Fuller details must also be available for the purposes of moderation. It is a requirement for all students that there be provision for samples of any end-product or other tangible evidence of candidates' work to be submitted for inspection. This is rather important if one is to be as fair as possible to students: not only can this work be made available to moderators if necessary, it can also be used to help arrive at common standards among several evaluators, and to ensure consistent standards from one evaluator over the assessment period. This need arises because there are problems associated with assessing different students on the same skill in different activities. 4. Most of the students' reports were assessed intuitively by the evaluators, although the assessment items had been established concretely by the preparers of the experiment. This means that the evaluators failed to grasp the essence of the established assessment items for the experiment report, and that the students' assessment scores lacked objectivity. Lastly, some suggestions follow from the results and conclusions.
Students' experimental actions that are difficult to observe because they occur in a flash, and that can be easily imitated, should be excluded from the assessment items: evaluators are likely to miss the moment to observe them, and students who are assessed later have more opportunity to practise the skill being assessed. It is necessary to be aware of these problems and to try to reduce or remove their influence. The skills-and-processes analysis makes a very useful checklist for scientific inquiry experiment assessment, but in itself it is of little value. It must be seen alongside the other vital attributes needed in the making of a good scientist: the affective aspects of commitment and confidence, the personal insights that come through both formal and informal learning, and the tacit knowledge that comes through experience, both structured and acquired in play. These four aspects must continually interact, in a flexible and individualistic way, throughout the scientific education of students. An increasing ability to be good at science, and at doing investigational practical work, will be gained by continually, successively, but often unpredictably developing more experience, more insights, more skills, and more confidence and commitment.

The Understanding and Application of Noise Reduction Software in Static Images (정적 영상에서 Noise Reduction Software의 이해와 적용)

  • Lee, Hyung-Jin;Song, Ho-Jun;Seung, Jong-Min;Choi, Jin-Wook;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.54-60
    • /
    • 2010
  • Purpose: Nuclear medicine manufacturers provide various software packages that shorten imaging time using their own image processing techniques, such as UltraSPECT, ASTONISH, Flash3D, Evolution, and nSPEED. Seoul National University Hospital has introduced the packages from Siemens and Philips, but the algorithmic difference between the two was still hard to understand. Thus, the purpose of this study was to identify the difference between the two packages in planar images and to examine whether they can be applied to images produced with high-energy isotopes. Materials and Methods: First, a phantom study was performed to understand the difference between the packages in static studies. Images with various count levels were acquired and analyzed quantitatively after application of PIXON (Siemens) and ASTONISH (Philips), respectively. We then applied the packages to applicable static studies and looked for their merits and demerits, and also applied them to images produced with high-energy isotopes. Finally, a blind test, excluding the phantom images, was conducted by nuclear medicine physicians. Results: In the FWHM test using a capillary source, there was nearly no difference between the pre- and post-processing images with PIXON, whereas ASTONISH improved. However, both the standard deviation (SD) and the variance decreased for PIXON, while they increased markedly for ASTONISH. In the background variability comparison using the IEC phantom, PIXON decreased overall while ASTONISH somewhat increased. The contrast ratio in each sphere increased for both methods. Regarding image scale, the window width increased about 4~5 times after processing with PIXON, while ASTONISH showed nearly no difference. From the phantom test analysis, ASTONISH appears applicable to studies that need quantitative analysis or high contrast, and PIXON to studies with insufficient counts or long acquisition times.
Conclusion: The quantitative values used in routine analysis generally improved after application of the two packages; however, it seems hard to maintain consistency across all nuclear medicine studies, because the resulting images differ owing to the characteristics of the algorithms rather than to differences between gamma cameras. It is also hard to expect high image quality from time-shortening methods such as the whole body scan. Nevertheless, the packages can be applied to static studies in consideration of each algorithm's characteristics, and a change of image quality can be expected through application to high-energy isotope images.
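The FWHM metric used in the capillary-source resolution comparison above can be sketched as follows. The count profile is invented, and linear interpolation between samples is assumed; this is only an illustration of the metric, not the study's measurement code.

```python
# Sketch: full width at half maximum (FWHM) of a 1-D count profile,
# with linear interpolation between samples. Profile values are invented.

def fwhm(profile):
    peak = max(profile)
    half = peak / 2.0
    left = right = None
    for i in range(1, len(profile)):
        lo, hi = profile[i - 1], profile[i]
        # rising edge crosses the half-maximum level
        if left is None and lo < half <= hi:
            left = (i - 1) + (half - lo) / (hi - lo)
        # falling edge crosses the half-maximum level
        if left is not None and lo >= half > hi:
            right = (i - 1) + (lo - half) / (lo - hi)
    return right - left  # width in sample (pixel) units

counts = [0, 2, 10, 40, 100, 38, 8, 1, 0]
print(round(fwhm(counts), 2))
```

Multiplying the result by the detector's pixel pitch converts it to millimetres; a smaller FWHM after processing indicates improved spatial resolution.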

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information exposure is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have further troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing: existing studies have focused on only a small subset of those technical characteristics, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify information privacy factors in most studies, despite the limitation of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. A survey that assumes the participants have sufficient experience with or understanding of the technologies shown in it may therefore not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-order list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for the Delphi study proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to assist them, some of the main factors found in the literature were presented. The second-round questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor, and the final sub-factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to that end, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful, from the information privacy point of view, for future context-aware personalized service development. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns. A traditional user questionnaire was not chosen because of users' absolute lack of understanding of and experience with this new technology. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensory network as the most important among the technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, and in what sequence, are needed to acquire which types of users' context information. Most studies, alongside the development of context-aware technology, focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the evaluation of the sub-factors, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display presenting services to users in context-aware personalized services, toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
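The concordance analysis the Delphi rounds rely on is commonly computed as Kendall's coefficient of concordance W, which measures how consistently the experts rank the same items. The sketch below is illustrative only: the ranks are invented, and ties are not handled.

```python
# Illustrative sketch (invented ranks): Kendall's coefficient of
# concordance W, a standard consensus measure for Delphi rank data.

def kendalls_w(rankings):
    """rankings: one list of ranks per expert, all over the same m items."""
    m = len(rankings[0])          # number of items ranked
    n = len(rankings)             # number of experts
    totals = [sum(r[i] for r in rankings) for i in range(m)]
    mean = sum(totals) / m
    s = sum((t - mean) ** 2 for t in totals)
    # W in [0, 1]: 0 = no agreement, 1 = perfect agreement (no ties)
    return 12 * s / (n ** 2 * (m ** 3 - m))

# Three experts ranking four privacy-concern factors
experts = [[1, 2, 3, 4],
           [1, 3, 2, 4],
           [2, 1, 3, 4]]
print(round(kendalls_w(experts), 2))
```

A W approaching 1 over successive rounds signals the group consensus that justifies stopping the Delphi process; the mean rank of each item then gives the final ordering.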