• Title/Summary/Keyword: Systems Engineering Standard


Smart Farm Expert System for Paprika using Decision Tree Technique (의사결정트리 기법을 이용한 파프리카용 스마트팜 전문가 시스템)

  • Jeong, Hye-sun;Lee, In-yong;Lim, Joong-seon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.373-376
    • /
    • 2018
  • Conventional paprika smart-farm systems are often harmful to paprika growth because they simply drive several sensor readings toward fixed reference values, so the system frequently cannot make optimal judgments. Using decision-tree techniques, the proposed expert system for the paprika smart farm creates a controller whose decision-making structure resembles that of farmers, built from data generated by the surrounding environmental factors. With current smart-farm control systems, farmer intervention in the surrounding environment is essential because the controller only follows the reference values the farmer sets. To mitigate this problem, environmental data are collected and a controller applying the decision-tree method is designed. The expert system performs complex control by selecting the most influential environmental factors before operating the paprika smart-farm equipment, incorporating the criteria farmers use to make decisions. The study predicts that, because of the interrelationships in the data and the many surrounding environmental factors affecting growth, each environmental element will serve as a standard when building smart farms for professional growers.

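The abstract above describes a controller whose decision structure mirrors a farmer's judgment. A minimal sketch of such a decision tree in Python, where the sensor names, thresholds, and action labels are wholly illustrative assumptions, not the paper's learned tree:

```python
def control_action(temp_c, humidity_pct, co2_ppm):
    """Hypothetical greenhouse decision tree: each branch tests one
    environmental factor, mimicking a farmer's rule of thumb.
    All thresholds are made up for illustration."""
    if temp_c < 18:
        return "heat"          # too cold for paprika growth
    if temp_c > 28:
        return "ventilate"     # too warm: open vents
    if co2_ppm > 1000:
        return "ventilate"     # flush excess CO2
    if humidity_pct > 85:
        return "dehumidify"    # reduce disease risk
    return "none"              # conditions acceptable

print(control_action(17, 78, 850))   # prints "heat"
```

A learned tree (e.g. trained on logged sensor data and farmer decisions) would replace these hand-set thresholds with splits chosen from the data.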

The first KREDOS-EPR intercomparison exercise using alanine pellet dosimeter in South Korea

  • Park, Byeong Ryong;Kim, Jae Seok;Yoo, Jaeryong;Ha, Wi-Ho;Jang, Seongjae;Kang, Yeong-Rok;Kim, HyoJin;Jang, Han-Ki;Han, Ki-Tek;Min, Jeho;Choi, Hoon;Kim, Jeongin;Lee, Jungil;Kim, Hyoungtaek;Kim, Jang-Lyul
    • Nuclear Engineering and Technology
    • /
    • v.52 no.10
    • /
    • pp.2379-2386
    • /
    • 2020
  • This paper presents the results of the first intercomparison exercise performed by the Korea retrospective dosimetry (KREDOS) working group using electron paramagnetic resonance (EPR) spectroscopy. The intercomparison employed the alanine dosimeter, which is commonly used as the standard dosimeter in EPR methods. Four laboratories participated in the dose assessment of blind samples, and one laboratory carried out irradiation of blind samples. Two types of alanine dosimeters (Bruker and Magnettech) with different geometries were used. Both dosimeters were blindly irradiated at three dose levels (0.60, 2.70, and 8.00 Gy) and four samples per dose were distributed to the participating laboratories. Assessments of blind doses by the laboratories were performed using their own measurement protocols. One laboratory did not participate in the measurements of Magnettech alanine dosimeter samples. Intercomparison results were analyzed by calculating the relative bias, En value, and z-score. The results reported by participating laboratories were overall satisfactory for doses of 2.70 and 8.00 Gy but were considerably overestimated with a relative bias range of 10-95% for 0.60 Gy, which is lower than the minimum detectable dose (MDD) of the alanine dosimeter. After the first intercomparison, participating laboratories are working to improve their alanine-EPR dosimetry systems through continuous meetings and are preparing a second intercomparison exercise for other materials.
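The three statistics named above (relative bias, En value, z-score) follow the usual proficiency-testing definitions (e.g. ISO 13528). A minimal sketch, with made-up example numbers rather than the study's data:

```python
import math

def relative_bias(measured, reference):
    """Relative bias as a fraction of the reference dose."""
    return (measured - reference) / reference

def en_value(measured, reference, u_measured, u_reference):
    """En number; |En| <= 1 is conventionally satisfactory.
    u_* are expanded (k=2) uncertainties."""
    return (measured - reference) / math.sqrt(u_measured**2 + u_reference**2)

def z_score(measured, reference, sigma_p):
    """z-score; |z| <= 2 is conventionally satisfactory."""
    return (measured - reference) / sigma_p

# Hypothetical lab result for a 2.70 Gy blind sample
print(round(relative_bias(2.85, 2.70), 4))        # 0.0556 (i.e. +5.6%)
print(round(en_value(2.85, 2.70, 0.20, 0.10), 2))  # 0.67
print(round(z_score(2.85, 2.70, 0.15), 1))         # 1.0
```

The uncertainty values and the acceptance conventions here are generic; each participating laboratory would substitute its own measurement uncertainties.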

Integration Process of Federation Object Model for Interoperation of Federations (페더레이션 연동을 위한 객체 모델 통합 프로세스)

  • Kwon, Se Jung;Yu, Minwook;Kim, Tag Gon
    • Journal of the Korea Society for Simulation
    • /
    • v.26 no.2
    • /
    • pp.1-8
    • /
    • 2017
  • High Level Architecture (HLA) is a specification for interoperation among heterogeneous simulators executed in a distributed environment. HLA originally allows many federates to join a federation through a single RTI (Run-Time Infrastructure). As target systems become more complex, the need for interoperation of federations, performed in an RTI-to-RTI interoperation environment, has been growing. It can be realized through a confederation interface with agents that subrogate the API calls and callbacks of each federation. Existing studies have assumed that the object models of each federation follow the same HLA standard and that their object descriptions are identical. Because existing federations usually do not satisfy this assumption, this paper proposes an integration process of object models for the federation interoperation environment. To integrate the object models, the process resolves the differences among HLA standards, provides a conversion process between objects with different descriptions, and excludes security objects. We expect this process to enhance the reusability and effectiveness of federation interoperation in various domains.

Farm Animal Mortality Management Practices in Sunchon-si (순천시의 폐사가축 처리실태에 관한 연구)

  • Hong, Ji-Hyung
    • Journal of Animal Environmental Science
    • /
    • v.16 no.3
    • /
    • pp.245-252
    • /
    • 2010
  • Carcass disposal methods in Korean livestock production systems include burying, digesting, rendering, dumping carcasses on the manure pile, dead-animal disposers, and mini-incinerators. Burying has usually been the most practical disposal method on Korean livestock farms. Burying, dumping on the manure pile, dead-animal disposers, and mini-incinerators may carry environmental, regulatory, and economic liabilities when used for carcass disposal. In many cases in this survey, these methods were a poor choice for the producer because of individual site conditions, geology, cost, air emissions, and the availability of rendering plants. A survey questionnaire addressing these issues was prepared for livestock producers. It covered two main topics: 1) types of livestock and the amounts of carcass generated, and 2) the number of animals kept and the disposal methods used for livestock mortality. A total of 36 livestock producers were interviewed. The results are summarized as follows: the numbers of poultry, swine, beef cattle, and dairy cattle kept were 251,000, 2,600, 142, and 92 head per year, and the amounts of carcass generated annually were 0.46, 15.32, 0.36, and 1.36 tons per year for poultry, swine, beef-cattle, and dairy-cattle farms, respectively. The carcass disposal methods were burying (42%), dumping on the manure pile (36%), rendering (8%), incineration (6%), digesting (6%), and carcass disposers (2%). These results can be used as basic information for establishing a standard for carcass composting facilities.

Study on Radioactive Material Management Plan and Environmental Analysis of Water (II) Study of Management System in Water Environment of Japan (물 환경의 방사성 물질 관리 방안과 분석법에 관한 연구 (II) 일본의 물 환경 방사성물질 관리 체계에 대한 고찰)

  • Han, Seong-Gyu;Kim, Jung-Min
    • Journal of radiological science and technology
    • /
    • v.38 no.3
    • /
    • pp.305-313
    • /
    • 2015
  • After the Fukushima Daiichi nuclear disaster in 2011, monitoring systems have been studied and maintained both in Korea and abroad. As concern about radioactive contamination of water has increased in Korea, the framework for managing radioactive materials in water is being updated, mainly by the Ministry of Environment. In this study, we analyzed the current state of monitoring-system modification in Japan, both the directly affected country and a neighboring one. Japan modified its legislation first. The Ministry of Education, Culture, Sports, Science and Technology (MEXT) then provides the theoretical background for radiological monitoring, and the Ministry of the Environment monitors the state of water pollution in public waters and groundwater. Finally, related agencies such as local governments monitor the current state of radioactive contamination in the water environment. Local monitoring stations share the investigation of the whole country by region, and additional monitoring operates around nuclear facilities; after the Fukushima disaster, monitoring of the area near Fukushima was added. Among the reference levels, the management target value for drinking water and tap water is 10 Bq/kg, and that for public waters and groundwater is 1 Bq/L. Measuring intervals varied from every hour to once a year, regularly or irregularly, depending on the investigation. The main measured items are air dose rate, gross α, gross β, γ-emitting radionuclides, Cs-134, Cs-137, Sr-89, Sr-90, I-131, and so on. In comparison, Korean regulations on general public waters need to be revised, while those on areas near nuclear facilities and on drinking water are well organized. The domestic system is therefore expected to be revised in the future with reference to guidelines such as the WHO's. As a good case of applying international guidelines to a domestic environment, the Japanese system could serve as a reference when a general standard for radioactivity in public waters is established in Korea.

A study on the implementation of Medical Telemetry systems using wireless public data network (무선공중망을 이용한 의료 정보 데이터 원격 모니터링 시스템에 관한 연구)

  • 이택규;김영길
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2000.10a
    • /
    • pp.278-283
    • /
    • 2000
  • As information and communication technology has developed, blood pressure, pulse, electrocardiogram (ECG), SpO2, and blood tests can be checked easily at home. Routine health checks become possible by interfacing home medical instruments with a wireless public data network. This service relieves the inconvenience of visiting the hospital every time and saves the individual's time and cost. In each house, biosignal data detected from the human body are transmitted to a distant hospital over the wireless public data network. The medical information transmission system uses a wireless short-range network: it transmits the acquired biosignals wirelessly from the personal device to the main center system in the hospital. The remote telemetry system is implemented using a wireless medium-access protocol based on the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) scheme standardized in IEEE 802.11. Among the home-care telemetry functions that could measure blood pressure, pulse, ECG, and SpO2, this study implements the ECG measurement part. The ECG function is built into a mobile device with a 900 MHz band wireless public data interface, so that the elderly, patients, or anyone at home can acquire, keep, and record ECG data. This would be essential for managing people whose health examinations revealed heart disease, simple or complicated, and for continuously observing latent heart-disease patients. To implement the medical information transmission system over the wireless network, the ECG data among the biosignals are transmitted using a wireless network modem and the NCL (Native Control Language) protocol, and connected to the wired host computer through the SCR (Standard Context Routing) protocol in the network. The computer checks the recorded individual information and the acquired ECG data, then sends the corresponding examination result back to the mobile device. The study proposes a medical transmission system model that utilizes the wireless public data network.

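The medium-access scheme referenced above, IEEE 802.11 CSMA/CA, resolves contention by a random backoff drawn from a contention window that doubles on each retry. A toy sketch of that backoff rule, with 802.11a/g-style window bounds taken as an assumption:

```python
import random

CW_MIN, CW_MAX = 15, 1023  # assumed 802.11-style contention window bounds

def backoff_slots(retry, rng):
    """Random backoff (in slot times) for the given retry count:
    the window grows as (CW_MIN+1)*2^retry - 1, capped at CW_MAX."""
    cw = min(CW_MAX, (CW_MIN + 1) * (2 ** retry) - 1)
    return rng.randint(0, cw)

rng = random.Random(42)            # fixed seed so the sketch is repeatable
slots = [backoff_slots(r, rng) for r in range(4)]
print(slots)  # four backoff draws, bounded by windows 15, 31, 63, 127
```

A real 802.11 station would additionally sense the channel and freeze its backoff counter while the medium is busy; this sketch shows only the window-doubling rule.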

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. That phase was short-lived, however, as bad debt began to increase again after the 2009 global financial crisis due to the real economic recession. NPLs have become a major investment in recent years as domestic capital-market investors began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it has been scarce because the history of capital-market investment in the domestic NPL market is short. In addition, declining profitability and price fluctuations driven by the real estate business call for decision-making based on more scientific and systematic analysis. In this study, we propose a prediction model that determines whether the benchmark yield is achieved, using NPL-market data in accordance with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017; the total number of records was 2,291. As independent variables, from the 11 variables describing the characteristics of the real estate, only those related to the dependent variable were selected, using one-to-one t-tests, stepwise logistic regression, and a decision tree. Seven independent variables were chosen: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached.
This is because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the model's usefulness. Moreover, for a special purpose company the main concern is whether or not to purchase the property, so knowing whether a certain level of return will be achieved is enough to support the decision. To verify that 12%, the standard rate of return used in the industry, is a meaningful reference value, we constructed and compared predictive models while varying the threshold used to compute the dependent variable. The average hit ratio was best, at 64.60%, for the model built with the dependent variable defined by the 12% standard rate of return. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic-algorithm linear model. Ten sets of training and testing data were extracted using the 10-fold validation method; after building the models on these data, the hit ratio of each set was averaged and performance was compared. The average hit ratios of the models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic-algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial-neural-network model is the best. This study shows that the 7 independent variables and an artificial-neural-network prediction model are effective for the NPL market. The proposed model predicts in advance whether a new asset will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
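The evaluation loop described above, 10-fold splits, per-fold training, and an averaged hit ratio, can be sketched in plain Python. The synthetic data and the two toy models (a majority-class baseline and 1-nearest-neighbour) are assumptions standing in for the paper's five methodologies:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices once, then deal them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

class MajorityClass:
    def fit(self, X, y):
        self.label = max(set(y), key=y.count)
    def predict(self, x):
        return self.label

class OneNN:
    def fit(self, X, y):
        self.X, self.y = X, y
    def predict(self, x):
        dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in self.X]
        return self.y[dists.index(min(dists))]

def mean_hit_ratio(model, X, y, k=10):
    """Hit ratio (accuracy) averaged over k held-out folds."""
    ratios = []
    for fold in k_fold_indices(len(X), k):
        train = [i for i in range(len(X)) if i not in fold]
        model.fit([X[i] for i in train], [y[i] for i in train])
        hits = sum(model.predict(X[i]) == y[i] for i in fold)
        ratios.append(hits / len(fold))
    return sum(ratios) / len(ratios)

# Synthetic binary target: 1 if the benchmark return is reached
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(100)]
y = [1 if x[0] + x[1] > 1 else 0 for x in X]

for name, model in [("majority", MajorityClass()), ("1-NN", OneNN())]:
    print(name, round(mean_hit_ratio(model, X, y), 2))
```

The paper's models (discriminant analysis, logistic regression, decision tree, neural network, genetic-algorithm linear model) would slot into the same `fit`/`predict` interface and be compared by the same averaged hit ratio.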

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite their increasing importance, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of project success. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. Since no standard ontology development methodology exists, one of the existing methodologies had to be chosen. The most important considerations for selecting the methodology for GSO were whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it explains each development task sufficiently. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to building GSO for this study. METHONTOLOGY was derived from the experience of developing a Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach to building an ontology at the conceptual level under a centralized development environment. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational support for consistency checking and classification, which is crucial for developing coherent and useful ontological models of very complex domains. Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform-independent characteristics. Based on the researchers' experience developing GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies; however, it is still difficult for such experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to it. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin a building project, but also that the project will be successful.
Third, METHONTOLOGY excludes an explanation on the use and integration of existing ontologies. If an additional stage for considering reuse is introduced, developers might share benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. This methodology needs to explain the allocation of specific tasks to different developer groups, and how to combine these tasks once specific given jobs are completed. Fifth, METHONTOLOGY fails to suggest the methods and techniques applied in the conceptualization stage sufficiently. Introducing methods of concept extraction from multiple informal sources or methods of identifying relations may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE perfectly transforms a conceptual ontology into a formal ontology. It also does not guarantee whether the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology under user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition while working on the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavy methodology. Adopting an agile methodology will result in reinforcing active communication among developers and reducing the burden of documentation completion. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experiences; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed. 
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.

Dual Codec Based Joint Bit Rate Control Scheme for Terrestrial Stereoscopic 3DTV Broadcast (지상파 스테레오스코픽 3DTV 방송을 위한 이종 부호화기 기반 합동 비트율 제어 연구)

  • Chang, Yong-Jun;Kim, Mun-Churl
    • Journal of Broadcast Engineering
    • /
    • v.16 no.2
    • /
    • pp.216-225
    • /
    • 2011
  • Following the proliferation of three-dimensional video content and displays, many terrestrial broadcasters have been preparing stereoscopic 3DTV services. In terrestrial stereoscopic broadcasting, it is difficult to code and transmit two video sequences at quality as high as 2DTV broadcasts because of the limited bandwidth defined by existing digital TV standards such as ATSC. A terrestrial 3DTV broadcast with a heterogeneous video codec system, where the left and right images are coded with MPEG-2 and H.264/AVC respectively, is therefore considered in order to achieve both a high-quality broadcasting service and compatibility for existing 2DTV viewers. Without significant changes to current terrestrial broadcasting systems, we propose a joint rate-control scheme for stereoscopic 3DTV service based on this heterogeneous dual-codec system. The proposed scheme applies to the MPEG-2 encoder a quadratic rate-quantization model of the kind adopted in H.264/AVC. The controller is then designed so that the sum of the left and right bitstreams meets the bandwidth requirement of the broadcasting standards while the sum of image distortions is minimized by adjusting the quantization parameters obtained from the proposed optimization scheme. In addition, the optimization includes a condition that keeps the quality difference between the left and right images around a desired level, to mitigate negative effects on the human visual system. Experimental results demonstrate that the proposed bit-rate control scheme outperforms independent rate control, where each video coding standard uses its own algorithm: PSNR increases by 2.02%, the average absolute quality difference decreases by 77.6%, and the variance of the quality difference is reduced by 74.38%.
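The quadratic rate-quantization (R-Q) model mentioned in the abstract takes the form R(Q) = x1/Q + x2/Q²; given a target bit budget, the quantizer step follows from the quadratic formula. A sketch with hypothetical parameter values (the real controller would estimate x1, x2 from encoded frame statistics):

```python
import math

def rate(q, x1, x2):
    """Bits predicted by the quadratic R-Q model at quantizer step q."""
    return x1 / q + x2 / q ** 2

def quantizer_for_target(r_target, x1, x2):
    """Solve r_target = x1/q + x2/q^2 for q.
    Rearranged: r*q^2 - x1*q - x2 = 0, so take the positive root."""
    return (x1 + math.sqrt(x1 ** 2 + 4 * r_target * x2)) / (2 * r_target)

x1, x2 = 1.0e5, 2.0e5          # hypothetical model parameters
q = quantizer_for_target(5.0e3, x1, x2)
print(round(rate(q, x1, x2)))  # prints 5000: the model recovers the target
```

In the joint scheme, a budget like this would be split between the MPEG-2 and H.264/AVC streams so that the total meets the channel bandwidth while distortion (and the left-right quality gap) stays controlled.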

Computer Assisted EPID Analysis of Breast Intrafractional and Interfractional Positioning Error (유방암 방사선치료에 있어 치료도중 및 분할치료 간 위치오차에 대한 전자포탈영상의 컴퓨터를 이용한 자동 분석)

  • Sohn Jason W.;Mansur David B.;Monroe James I.;Drzymala Robert E.;Jin Ho-Sang;Suh Tae-Suk;Dempsey James F.;Klein Eric E.
    • Progress in Medical Physics
    • /
    • v.17 no.1
    • /
    • pp.24-31
    • /
    • 2006
  • Automated analysis software was developed to measure the magnitude of intrafractional and interfractional errors during breast radiation treatments. Error-analysis results are important for determining suitable planning target volumes (PTV) before implementing breast-conserving 3-D conformal radiation treatment (CRT). The electronic portal imaging device (EPID) used for this study was a Portal Vision LC250 liquid-filled ionization detector (fast frame-averaging mode, 1.4 frames per second, 256×256 pixels). Twelve patients were imaged for a minimum of 7 treatment days. On each treatment day, an average of 8 to 9 images per field were acquired (dose rate of 400 MU/minute). We developed automated image-analysis software to quantitatively analyze 2,931 images (encompassing 720 measurements). Standard deviations (σ) of intrafractional (breathing-motion) and interfractional (setup-uncertainty) errors were calculated. The PTV margin including the clinical target volume (CTV) at the 95% confidence level was calculated as 2(1.96σ). The PTV margin required to compensate for intrafractional error (mainly breathing motion) ranged from 2 mm to 4 mm, whereas the margin compensating for interfractional error ranged from 7 mm to 31 mm. The total average error observed for the 12 patients was 17 mm. The interfractional setup error was 2 to 15 times larger than the intrafractional error associated with breathing motion. Before 3-D conformal or IMRT breast treatment, the magnitude of setup errors must be measured and properly incorporated into the PTV. To reduce large PTVs for breast IMRT or 3-D CRT, an image-guided system would be extremely valuable, if not required. EPID systems should incorporate automated analysis software as described in this report to process and take advantage of the large number of EPID images available for error analysis, which will help individual clinics arrive at an appropriate PTV for their practice. Such systems can also provide valuable patient-monitoring information with minimal effort.

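The margin rule quoted in the abstract, PTV margin = 2(1.96σ), can be applied directly to measured positioning errors; the millimetre samples below are invented for illustration, not the study's measurements:

```python
import statistics

def ptv_margin_mm(errors_mm):
    """PTV margin covering the CTV at the 95% confidence level,
    computed as 2 * (1.96 * sigma) per the rule in the abstract."""
    sigma = statistics.pstdev(errors_mm)  # population standard deviation
    return 2 * 1.96 * sigma

# Hypothetical interfractional setup errors (mm) for one patient
interfractional_mm = [3.0, -2.0, 5.0, -4.0, 1.0, -3.0, 4.0, -4.0]
print(round(ptv_margin_mm(interfractional_mm), 1))  # prints 13.6
```

Separate margins would be computed from the intrafractional and interfractional error distributions, which the study found to differ by up to a factor of 15.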