• Title/Summary/Keyword: field experiment techniques (현장실험기법)


An Experimental Study on Real Time CO Concentration Measurement of Combustion Gas in LPG/Air Flame Using TDLAS (TDLAS를 이용한 LPG/공기 화염 연소가스의 실시간 CO 농도 측정에 관한 연구)

  • So, Sunghyun;Park, Daegeun;Park, Jiyeon;Song, Aran;Jeong, Nakwon;Yoo, Miyeon;Hwang, Jungho;Lee, Changyeop
    • Clean Technology / v.25 no.4 / pp.316-323 / 2019
  • In order to enhance combustion efficiency and reduce atmospheric pollutants, it is essential to measure carbon monoxide (CO) concentration precisely in combustion exhaust. CO is an important gas species with regard to pollutant emission and incomplete combustion because it trades off with NOx and increases rapidly when incomplete combustion occurs. In the case of a steel annealing system, CO is generated intentionally to maintain a deoxidizing atmosphere. However, it is difficult to measure CO concentration in a combustion environment in real time because of unsteady combustion reactions and the harsh environment. Tunable Diode Laser Absorption Spectroscopy (TDLAS), an optical measurement method, is highly attractive for measuring the concentration of certain gas species, temperature, velocity, and pressure in a combustion environment. TDLAS offers several advantages, such as high sensitivity, non-invasiveness, fast response, and in-situ measurement capability. In this study, a combustion system was designed to control the equivalence ratio, and the combustion exhaust gases were produced in a Liquefied Petroleum Gas (LPG)/air flame. The CO concentration measured by the TDLAS method was confirmed as the equivalence ratio changed and compared with a simulation based on the Voigt function. In order to measure the CO concentration without interference from other combustion products, a near-infrared laser at 4300.6 cm⁻¹ was selected.
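The following is a minimal sketch, not the authors' code, of the core TDLAS idea described in this abstract: simulating a single CO absorption line near 4300.6 cm⁻¹ with a Voigt profile and recovering the number density through the Beer-Lambert law. The line strength, broadening widths, path length, and concentration are illustrative placeholders.

```python
# A minimal sketch (not the authors' code) of simulating one CO absorption line
# with a Voigt profile and retrieving concentration via the Beer-Lambert law.
# Line strength, broadening, path length, and density below are placeholders.
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, gamma_d, gamma_l):
    """Area-normalized Voigt profile centred at nu0 (cm^-1)."""
    z = ((nu - nu0) + 1j * gamma_l) / (gamma_d * np.sqrt(2))
    return np.real(wofz(z)) / (gamma_d * np.sqrt(2 * np.pi))

# Illustrative spectral grid around the CO line near 4300.6 cm^-1
nu = np.linspace(4300.3, 4300.9, 600)
phi = voigt(nu, 4300.6, gamma_d=0.006, gamma_l=0.05)

# Beer-Lambert: absorbance A(nu) = S * phi(nu) * N * L
S = 1.0e-21        # line strength (placeholder)
L = 20.0           # optical path length, cm (placeholder)
N_true = 2.5e17    # CO number density, molecule/cm^3 (placeholder)
absorbance = S * phi * N_true * L

# Concentration retrieval: integrated absorbance / (S * L) gives N
N_est = np.trapz(absorbance, nu) / (S * L)
print(f"retrieved N = {N_est:.3e} molecule/cm^3")
```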

OBSTETRICIAN'S VIEW OF TEENAGE PREGNANCY:PRESENT STATUS, PREVENTION AND PSYCHIATRIC CONSULTATION (산과 의사가 인지한 10대 임신의 현황, 예방, 정신과 자문)

  • Kim, Eun-Young;Kim, Boong-Nyun;Hong, Kang-E;Lee, Young-Sik
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.13 no.1 / pp.117-128 / 2002
  • Objectives: To obtain a more vivid picture of the present status of teenage pregnancy and of prevention programs, this survey was conducted with obstetricians, who manage pregnant teenagers in real clinical situations, as the study subjects. Methods: A structured survey form about teenage pregnancy was sent to 2,800 obstetricians. The form covered the frequency, characteristics, decision-making processes, and psychiatric aspects of teenage pregnancy. A total of 349 obstetricians replied, and we analyzed these data. Results: (1) The trend of teenage pregnancy was mildly increasing. (2) The most common cases were unwanted pregnancies from a continuing sexual relationship with a boyfriend rather than from forced or accidental sexual relationships with multiple partners. (3) The most common reason for proceeding to delivery was missing the window for an induced abortion. (4) The pregnant girls' main problems were conduct behaviors and poor information about contraception rather than sexual abuse or mental retardation. (5) Most obstetricians perceived the necessity of psychiatric consultation; however, consultation was rare due to parental refusal and the absence of available psychiatric facilities. (6) For the prevention of teenage pregnancy, the most important element was practical education about contraception. Conclusions: Based on the results of this study, further research using a structured interview schedule with pregnant girls is needed to detect risk factors of teenage pregnancy and to build an effective, systematic approach to pregnant girls.


Development and Application of an After-school Program for an Astronomy Observation Club in a Highschool: Standardized Coefficient Decision Program in Consideration of the Observation Site's Environment (고등학교 천체 관측 동아리를 위한 방과 후 학교 프로그램 개발 및 적용: 관측지 주변 환경을 고려한 표준화 계수 결정 프로그램)

  • Kim, Seung-Hwan;Lee, Hyo-Nyong;Lee, Hyun-Dong;Jeong, Jae-Hwa
    • Journal of the Korean Earth Science Society / v.29 no.6 / pp.495-505 / 2008
  • The main purposes of this study are: (1) to develop an astronomy observation program based on a standardized coefficient decision program; and (2) to apply the developed program to after-school or club activities. As a first step, we analyzed the astronomy-related activities in the authorized textbooks currently adopted in high schools. Based on the analysis, we developed an astronomy observation program built around the standardized coefficient decision program, and the program was applied to students' astronomical observations as part of their club activities. Specifically, the program used a 102 mm refracting telescope and a digital camera. We took into account the environment around the observation sites in the urban areas where many schools are located and then developed a computer program for the observation activities. The results of this study are as follows. First, current astronomical education in schools is based on the textbooks; specifically, it mostly involves analyzing materials and running simulated experiments. Second, most schools that participated in this study were located in urban areas, where students have more difficulty observing than in rural areas. Third, an exemplary method was investigated for making astronomical observations efficiently in urban areas with existing devices. In addition, the standardized coefficient decision program was developed to standardize the magnitudes of stars according to the observed values. Finally, based on the students' observations, we found no difference between the magnitude of a star measured at urban sites and at rural sites. Current astronomical education in schools lacks practical experimental activity, and many schools do not have good observational sites because they are located in urban areas. However, use of this program makes it possible to collect meaningful data after a series of standardized corrections. In conclusion, this program not only helps schools run active, field-based astronomy observation activities, but also encourages students to become more interested in astronomical observation through a series of field-based activities.
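As an illustration of what a standardization of observed magnitudes can look like, the sketch below fits a photometric zero point and an atmospheric extinction coefficient by least squares from instrumental and catalog magnitudes. This is a generic calibration on invented toy numbers, not the paper's actual program or coefficients.

```python
# A minimal sketch (not the paper's program) of deriving standardization
# coefficients: fit catalog magnitude = instrumental magnitude + ZP - k*airmass.
# All numbers below are illustrative placeholders, not the study's data.
import numpy as np

# Observed standard stars: instrumental magnitude, airmass, catalog magnitude
m_inst  = np.array([-7.21, -6.85, -8.02, -7.55])
airmass = np.array([ 1.10,  1.35,  1.20,  1.60])
m_cat   = np.array([12.80, 13.10, 11.95, 12.40])

# Linear model: m_cat - m_inst = ZP - k * X  -> solve for [ZP, k] by least squares
A = np.column_stack([np.ones_like(airmass), -airmass])
(zp, k), *_ = np.linalg.lstsq(A, m_cat - m_inst, rcond=None)
print(f"zero point ZP = {zp:.3f}, extinction coefficient k = {k:.3f}")

# Standardize a new target star observed at airmass 1.25
m_target_inst = -7.70
m_target_std = m_target_inst + zp - k * 1.25
print(f"standardized magnitude = {m_target_std:.2f}")
```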

Methodological Comparison of the Quantification of Total Carbon and Organic Carbon in Marine Sediment (해양 퇴적물내 총탄소 및 유기탄소의 분석기법 고찰)

  • Kim, Kyeong-Hong;Son, Seung-Kyu;Son, Ju-Won;Ju, Se-Jong
    • Journal of the Korean Society for Marine Environment & Energy / v.9 no.4 / pp.235-242 / 2006
  • Precise estimation of the total and organic carbon contents in sediments is fundamental to understanding the benthic environment. To test the precision and accuracy of a CHN analyzer and of the procedure used to quantify total and organic carbon contents (using in-situ acidification with sulfurous acid ($H_2SO_3$)) in sediment, reference materials such as acetanilide ($C_8H_9NO$), sulfanilamide ($C_6H_8N_2O_2S$), and BCSS-1 (a standard estuarine sediment) were used. The results indicate that the CHN analyzer used to quantify carbon and nitrogen content has high precision (percent error = 3.29%) and accuracy (relative standard deviation = 1.26%). Additionally, we conducted an instrumental comparison of carbon values analyzed with the CHN analyzer and a Coulometric Carbon Analyzer. Total carbon contents measured by the two instruments were highly correlated ($R^2 = 0.9993$, n = 84, p < 0.0001) with a linear relationship and showed no significant differences (paired t-test, p = 0.0003). The organic carbon contents from the two instruments showed similar results, with a significant linear relationship ($R^2 = 0.8867$, n = 84, p < 0.0001) and no significant differences (paired t-test, p < 0.0001). Although it is possible to overestimate organic carbon contents for some sediment types with high inorganic carbon contents (such as calcareous ooze) because of procedural and analytical errors, analyzing organic carbon contents in sediments using a CHN analyzer and the current procedure appears to provide the best estimates. Therefore, we recommend this method for measuring the carbon content of any normal sediment sample and consider it one of the best procedures for routine analysis of total and organic carbon.
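The statistical comparison described above, a linear regression between the two analyzers plus a paired t-test on the same samples, can be reproduced in outline as follows. The data here are randomly generated stand-ins, not the study's measurements.

```python
# A minimal sketch (illustrative data, not the study's measurements) of comparing
# two instruments: linear regression plus a paired t-test on the same samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
chn        = rng.uniform(0.2, 3.0, size=84)           # total carbon (wt%), CHN analyzer
coulometer = chn + rng.normal(0.0, 0.03, size=84)     # same samples, coulometric analyzer

slope, intercept, r, p_reg, _ = stats.linregress(chn, coulometer)
t_stat, p_paired = stats.ttest_rel(chn, coulometer)

print(f"R^2 = {r**2:.4f}, regression p = {p_reg:.2e}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")
```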


Analyzing Self-Introduction Letter of Freshmen at Korea National College of Agricultural and Fisheries by Using Semantic Network Analysis : Based on TF-IDF Analysis (언어네트워크분석을 활용한 한국농수산대학 신입생 자기소개서 분석 - TF-IDF 분석을 기초로 -)

  • Joo, J.S.;Lee, S.Y.;Kim, J.S.;Kim, S.H.;Park, N.B.
    • Journal of Practical Agriculture & Fisheries Research / v.23 no.1 / pp.89-104 / 2021
  • Based on TF-IDF weights, which evaluate the importance of key words, a semantic network analysis (SNA) was conducted on the self-introduction letters of 2020 freshmen at Korea National College of Agriculture and Fisheries (KNCAF). The top three words by TF-IDF weight were agriculture, mathematics, and study (Q. 1); clubs, plants, and friends (Q. 2); friends, clubs, and opinions (Q. 3); and mushrooms, insects, and fathers (Q. 4). In the relationships between words, the words with high betweenness centrality were reason, high school, and attending (Q. 1); garbage, high school, and school (Q. 2); importance, misunderstanding, and completion (Q. 3); and processing, feed, and farmhouse (Q. 4). The words with high degree centrality were high school, inquiry, and grades (Q. 1); garbage, cleanup, and class time (Q. 2); opinion, meetings, and volunteer activities (Q. 3); and processing, space, and practice (Q. 4). Word pairs with a high frequency of co-occurrence, that is, high correlation, included 'certification - acquisition', 'problem - solution', 'science - life', and 'misunderstanding - concession'. In the cluster analysis, the number of clusters determined from the height of the cluster dendrogram was 2 (Q. 1), 4 (Q. 2 and 4), and 5 (Q. 3); cohesion within each cluster was high and the heterogeneity between clusters was clear.
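A compact sketch of the pipeline described in this abstract, TF-IDF keyword weighting followed by a word co-occurrence network with degree and betweenness centrality, is shown below using scikit-learn and networkx on toy documents; it is not the authors' code or data.

```python
# A minimal sketch (toy documents, not the KNCAF essays) of TF-IDF keyword
# weighting plus a word co-occurrence network with centrality measures.
import itertools
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "agriculture study mathematics club",
    "club plants friends volunteer",
    "friends club opinions meetings volunteer",
]

# 1) TF-IDF weights: rank terms by their maximum weight over documents
vec = TfidfVectorizer()
X = vec.fit_transform(docs)
terms = vec.get_feature_names_out()
top = sorted(zip(terms, X.max(axis=0).toarray().ravel()), key=lambda t: -t[1])[:3]
print("top TF-IDF terms:", top)

# 2) Co-occurrence network: connect words appearing in the same document
G = nx.Graph()
for doc in docs:
    for a, b in itertools.combinations(set(doc.split()), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"] + 1
        G.add_edge(a, b, weight=w)

print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
```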

State of Health and State of Charge Estimation of Li-ion Battery for Construction Equipment based on Dual Extended Kalman Filter (이중확장칼만필터(DEKF)를 기반한 건설장비용 리튬이온전지의 State of Charge(SOC) 및 State of Health(SOH) 추정)

  • Hong-Ryun Jung;Jun Ho Kim;Seung Woo Kim;Jong Hoon Kim;Eun Jin Kang;Jeong Woo Yun
    • Journal of the Microelectronics and Packaging Society / v.31 no.1 / pp.16-22 / 2024
  • Along with the high interest in electric vehicles and renewable energy, there is a growing demand to apply lithium-ion batteries in the construction equipment industry. The capacity of batteries in heavy construction equipment, which performs various tasks at construction sites, decreases rapidly, so it is essential to accurately predict battery states such as the SOC (State of Charge) and SOH (State of Health). In this paper, the errors between actual electrochemical measurement data and estimated data were compared using the Dual Extended Kalman Filter (DEKF) algorithm, which can estimate SOC and SOH at the same time. The prediction of the battery charge state was analyzed by measuring OCV at 5% SOC intervals under 0.2C-rate conditions after the battery cell was fully charged, and the degradation state of the battery was predicted after 50 cycles of aging tests under various C-rate conditions (0.2, 0.3, 0.5, 1.0, and 1.5C). The SOC and SOH estimation errors using DEKF tended to increase as the C-rate increased. The SOC estimation error using DEKF was less than 6% at 0.2, 0.5, and 1C. The SOH estimation results showed good performance, with maximum errors of 1.0% and 1.3% at 0.2 and 0.3C, respectively, and the estimation error increased from 1.5% to 2% as the C-rate increased from 0.5 to 1.5C. Nevertheless, all SOH estimation results using DEKF remained within about 2% error.
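A heavily simplified, scalar sketch of a dual extended Kalman filter of the kind named above is given below: one EKF tracks SOC from current and voltage, while a second EKF tracks capacity as a SOH proxy. The OCV curve, noise levels, and current profile are invented placeholders, not the cell model used in the paper; as in practice, the capacity estimate adapts much more slowly than the SOC estimate.

```python
# A minimal scalar sketch (not the authors' implementation) of a dual EKF:
# one EKF for SOC, a second EKF for capacity (a SOH proxy). All model values
# below are illustrative placeholders.
import numpy as np

def ocv(soc):            # simplified open-circuit-voltage curve (placeholder)
    return 3.0 + 1.2 * soc

def docv_dsoc(soc):      # its slope, used as the measurement Jacobian
    return 1.2

dt, C_true = 1.0, 3600.0          # s, true capacity in ampere-seconds (1 Ah)
rng = np.random.default_rng(1)

# Filter states: SOC estimate x, capacity estimate c, and their variances
x, P = 1.0, 1e-3
c, Pc = 0.9 * C_true, 1e4
Qx, Qc, R = 1e-7, 1e-2, 1e-4

soc_true = 1.0
for k in range(2000):
    I = 0.5                                        # discharge current (A)
    soc_true -= I * dt / C_true
    v_meas = ocv(soc_true) + rng.normal(0, 0.01)   # noisy terminal voltage

    # Capacity EKF: time update (random walk)
    Pc += Qc

    # SOC EKF: time update using the current capacity estimate
    x_pred = x - I * dt / c
    P_pred = P + Qx

    # SOC EKF: measurement update
    H = docv_dsoc(x_pred)
    K = P_pred * H / (H * P_pred * H + R)
    innov = v_meas - ocv(x_pred)
    x = x_pred + K * innov
    P = (1 - K * H) * P_pred

    # Capacity EKF: measurement update (sensitivity of voltage to capacity)
    Hc = docv_dsoc(x_pred) * I * dt / c**2
    Kc = Pc * Hc / (Hc * Pc * Hc + R)
    c = c + Kc * innov
    Pc = (1 - Kc * Hc) * Pc

print(f"SOC estimate {x:.3f} (true {soc_true:.3f}), "
      f"capacity estimate {c/3600:.3f} Ah (true {C_true/3600:.3f} Ah)")
```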

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As the demand for nuclear power plant equipment continues to grow worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, preadjudication (or prescreening, to put it simply) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying only on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares those features with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method used TF-IDF, which is a widely used de facto standard method for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term across a document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, which is based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed nuclear document similarity algorithm considers both document-to-document similarity ($\alpha$) and document-to-nuclear-system similarity ($\beta$) in order to derive the final score ($\gamma$) for deciding whether the presented case concerns strategic material or not. The final score ($\gamma$) represents the document similarity between the past cases and the new case. The score is induced not only by exploiting conventional TF-IDF but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered the most similar to the new case and provides them together with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are more worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system was conducted by developing a prototype and testing it with field data. The system workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials and that can be considered a meaningful example of a knowledge service application.
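The retrieval step described in this abstract can be sketched as follows: a TF-IDF cosine similarity (α) is mixed with a domain-level similarity (β) into a final score (γ), and the top-3 past cases are returned. The mixing weight and the domain similarity values are placeholders; the paper's actual scoring formula is not reproduced here.

```python
# A minimal sketch of case retrieval: combine TF-IDF document similarity (alpha)
# with a placeholder domain similarity (beta) into a final score (gamma) and
# return the top-3 most similar past cases. Not the paper's actual formula.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "pressure vessel component for reactor coolant system export",
    "steam generator tube material specification",
    "general purpose valve for water treatment plant",
]
# Placeholder document-to-nuclear-system similarity for each past case (beta)
beta = np.array([0.9, 0.8, 0.2])

new_case = "export of reactor coolant pressure component"

vec = TfidfVectorizer()
X = vec.fit_transform(past_cases + [new_case])
alpha = cosine_similarity(X[-1], X[:-1]).ravel()      # TF-IDF similarity (alpha)

w = 0.7                                               # placeholder mixing weight
gamma = w * alpha + (1 - w) * beta                    # final score (gamma)

top3 = np.argsort(gamma)[::-1][:3]
for i in top3:
    print(f"gamma={gamma[i]:.3f}  case: {past_cases[i]}")
```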

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have released their own AI technologies to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive a strategy for adopting deep learning open source frameworks through case studies. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, along with several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we provide five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, the ability to use the deep learning framework, and the available GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework is proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.