• Title/Summary/Keyword: multi-output

Effects of streambed geomorphology on nitrous oxide flux are influenced by carbon availability (하상 미지형에 따른 N2O 발생량 변화 효과에 대한 탄소 가용성의 영향)

  • Ko, Jongmin;Kim, Youngsun;Ji, Un;Kang, Hojeong
    • Journal of Korea Water Resources Association, v.52 no.11, pp.917-929, 2019
  • Denitrification in streams is of great importance because it is essential for the amelioration of water quality and for accurate estimation of N₂O budgets. Denitrification, a major biological source or sink of N₂O, an important greenhouse gas, is a multi-step respiratory process that converts nitrate (NO₃⁻) to gaseous forms of nitrogen (N₂ or N₂O). In aquatic ecosystems, complex interactions of flooding conditions, substrate supply, and hydrodynamic and biogeochemical properties modulate the extent of the multi-step reactions required for N₂O flux. Although water flow in the streambed and residence time affect reaction output, the effects of the complex interaction of hydrodynamic, geomorphological, and biogeochemical controls on the magnitude of denitrification in streams remain elusive. In this work, we built a two-dimensional water flow channel and measured N₂O flux from channel sediment with different bed geomorphology using static closed chambers. Two independent experiments were conducted with an identical flume and geomorphology but with sediments differing in dissolved organic carbon (DOC). The experimental flume was a circulation channel through which the effluent flows back, measuring 37 m × 1.2 m × 1 m. Five days before the experiment began, urea fertilizer (46% N) was added to the sediment at a rate of 0.5 kg N/m². A sand dune (1 m long and 0.15 m high) was built at the middle of the channel to simulate variations in microtopography. In the high-DOC experiment, N₂O flux increased in the direction of flow, with the highest flux (14.6 ± 8.40 µg N₂O-N/m²·hr) measured on the downstream slope of the sand dune, followed by a decrease farther downstream. In contrast, the low-DOC sediment showed no such geomorphological variation. We found that even though topographic variation influenced N₂O flux and chemical properties, this effect is highly constrained by carbon availability.
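
As a rough illustration of the arithmetic behind static closed-chamber measurements like those above, the sketch below computes a flux from the slope of headspace concentration over time. The ideal-gas conversion and all chamber dimensions and readings are illustrative assumptions, not values from the study.

```python
# Hedged sketch of the standard static closed-chamber flux calculation.
import numpy as np

def chamber_flux(conc_ppb, minutes, volume_m3, area_m2, temp_k=298.15):
    """N2O flux (ug N2O-N / m^2 / hr) from headspace concentrations over time."""
    # Slope of concentration vs. time (ppb/min) via least squares.
    slope = np.polyfit(minutes, conc_ppb, 1)[0]
    # ppb -> ug N/m^3 using the ideal-gas molar volume and 28 g N per mol N2O.
    molar_volume = 0.0224 * temp_k / 273.15        # m^3/mol at chamber temperature
    ug_n_per_m3_per_ppb = 28.0 / molar_volume / 1000.0
    return slope * 60 * ug_n_per_m3_per_ppb * volume_m3 / area_m2

minutes = np.array([0, 10, 20, 30])                # sampling times (placeholder)
conc = np.array([330.0, 333.1, 336.0, 339.2])      # headspace N2O in ppb (placeholder)
print(f"{chamber_flux(conc, minutes, volume_m3=0.015, area_m2=0.07):.2f} ug N2O-N/m2/hr")
```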

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.95-108, 2017
  • Recently, AlphaGo, a Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people had thought that a machine could not beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core technique used in the AlphaGo algorithm, has drawn attention. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where traditional machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether deep learning techniques, studied so far mainly for the recognition of high-dimensional data, can also be used for binary classification problems in traditional business data analysis such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted under restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate performance, showing how well the models classify the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm reads adjacent values and recognizes local features, but in business data the proximity of fields usually carries no meaning because the fields are independent. We therefore set the CNN filter size to the number of fields so the network learns the characteristics of a whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models show better classification performance than MLP models; this is interesting because CNNs performed well on binary classification problems to which they have rarely been applied, as well as in fields where their effectiveness is already proven. Third, the LSTM algorithm appears unsuitable for these binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
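For concreteness, here is a minimal sketch, under assumed data shapes, of the kind of 1D-CNN-with-dropout binary classifier the study describes: the convolution filter spans all input fields at once (since field adjacency carries no meaning in business data), dropout is applied with probability 0.5, and performance is scored with F1. The dataset, layer widths, and training settings are placeholders, not the paper's configuration.

```python
# Hedged sketch: 1D-CNN binary classifier with a record-wide filter and dropout.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import f1_score

n_fields = 16                                   # number of input variables (assumed)
x_train = np.random.rand(1000, n_fields, 1)     # placeholder business records
y_train = np.random.randint(0, 2, 1000)         # binary target (placeholder)

model = keras.Sequential([
    keras.Input(shape=(n_fields, 1)),
    # kernel_size == n_fields: each filter reads the whole record at once,
    # since adjacency between business-data fields is meaningless.
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),        # extra hidden layer on the CNN features
    layers.Dropout(0.5),                        # dropout probability 0.5, as in the study
    layers.Dense(1, activation="sigmoid"),      # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

# Score with F1 rather than overall accuracy, as the study does.
y_pred = (model.predict(x_train, verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y_train, y_pred))
```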

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology, v.19 no.3, pp.153-163, 2017
  • The objective of this study is to enhance the speed of the models that estimate weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) for the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Even so, the server cannot be dedicated to a single county, so very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. To reduce this overhead, several caching and parallelization techniques were used to measure performance and check applicability. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) is significantly greater than that for computation, suggesting the need for techniques that reduce disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them across several servers, although each server must then hold its own cache of input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models with large computational loads, such as PRISM.
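
A minimal sketch of the two CPU-side ideas in findings (1) and (2), assuming a hypothetical per-day temperature raster: an LRU cache stands in for per-server input caching so each grid is read only once, and a process pool spreads counties across cores. The grid shape, base temperature, and data source are illustrative.

```python
# Hedged sketch: cached grid input plus per-county parallelism for GDD accumulation.
from functools import lru_cache
from multiprocessing import Pool
import numpy as np

@lru_cache(maxsize=366)
def load_daily_grid(date):
    # Stand-in for an expensive disk read of one day's temperature raster;
    # the cache ensures each grid is loaded once per worker, mirroring the
    # per-server input cache recommended in finding (2).
    rng = np.random.default_rng(abs(hash(date)) % 2**32)
    return rng.uniform(5.0, 30.0, size=(1024, 1024))

def gdd_for_dates(dates, base=10.0):
    # Growing Degree Days accumulation: trivially cheap per cell, so runtime
    # is dominated by I/O unless the grids are cached (finding (1)).
    gdd = np.zeros((1024, 1024))
    for d in dates:
        gdd += np.maximum(load_daily_grid(d) - base, 0.0)
    return gdd

if __name__ == "__main__":
    # One (hypothetical) date range per county, processed in parallel.
    county_ranges = [tuple(f"2017-05-{d:02d}" for d in range(1, 31))] * 10
    with Pool() as pool:
        county_gdd = pool.map(gdd_for_dates, county_ranges)
    print(len(county_gdd), "county grids computed")
```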

Estimation of Jaw and MLC Transmission Factor Obtained by the Auto-modeling Process in the Pinnacle3 Treatment Planning System (피나클치료계획시스템에서 자동모델화과정으로 얻은 Jaw와 다엽콜리메이터의 투과 계수 평가)

  • Hwang, Tae-Jin;Kang, Sei-Kwon;Cheong, Kwang-Ho;Park, So-Ah;Lee, Me-Yeon;Kim, Kyoung-Ju;Oh, Do-Hoon;Bae, Hoon-Sik;Suh, Tae-Suk
    • Progress in Medical Physics, v.20 no.4, pp.269-276, 2009
  • Radiation treatment techniques using photon beams, such as three-dimensional conformal radiation therapy (3D-CRT) and intensity-modulated radiation therapy (IMRT), demand accurate dose calculation in order to increase target coverage and spare healthy tissue. Both the jaw collimator and the multi-leaf collimators (MLCs) for photon beams have been used to achieve these goals. The Pinnacle3 treatment planning system (TPS), which we use in our clinics, employs a model-based photon dose algorithm, so a set of model parameters such as the jaw collimator transmission factor (JTF) and the MLC transmission factor (MLCTF) are determined from measured data. However, model parameters obtained by this auto-modeling process can differ from those obtained by direct measurement, which can affect the dose distribution. In this paper we evaluated the JTF and MLCTF obtained by the auto-modeling process in the Pinnacle3 TPS. First, we obtained JTF and MLCTF by direct measurement, defined as the ratio of the output at the reference depth under the closed jaw collimator (closed MLCs for MLCTF) to the output at the same depth for a 10 × 10 cm² field in a water phantom. JTF and MLCTF were then also obtained by the auto-modeling process, and we evaluated the dose difference through phantom and patient studies in 3D-CRT plans. By direct measurement, JTF was 0.001966 for 6 MV and 0.002971 for 10 MV, and MLCTF was 0.01657 for 6 MV and 0.01925 for 10 MV. By the auto-modeling process, JTF was 0.001983 for 6 MV and 0.010431 for 10 MV, and MLCTF was 0.00188 for 6 MV and 0.00453 for 10 MV. The JTF and MLCTF from direct measurement differed greatly from those of the auto-modeling process, and were the more reasonable values considering the beam qualities of 6 MV and 10 MV. These differing parameters affect the dose in the low-dose region. Since wrong estimation of JTF and MLCTF can lead to dosimetric errors, comparing directly measured and auto-modeled JTF and MLCTF would be helpful during beam commissioning.
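
The direct-measurement definition reduces to a simple ratio, sketched below with placeholder ionization readings chosen only so the 6 MV values quoted above fall out; they are not the paper's raw data.

```python
# Hedged sketch: a transmission factor is the closed-collimator reading divided
# by the open 10 x 10 cm^2 reference reading at the same depth.
def transmission_factor(closed_reading, open_reading):
    return closed_reading / open_reading

open_10x10 = 100.0      # reference ionization reading (arbitrary units, placeholder)
closed_jaw = 0.1966     # reading under closed jaws (placeholder)
closed_mlc = 1.657      # reading under closed MLCs (placeholder)

print(f"JTF   = {transmission_factor(closed_jaw, open_10x10):.6f}")   # 0.001966
print(f"MLCTF = {transmission_factor(closed_mlc, open_10x10):.5f}")   # 0.01657
```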

Emergency Coronary Artery Bypass Operation for Cardiogenic Shock (심인성 쇼크에 대한 응급 관상동맥 우회술)

  • 김응중;이원용
    • Journal of Chest Surgery, v.30 no.10, pp.966-972, 1997
  • Between June 1994 and August 1996, 13 patients underwent emergency coronary artery bypass operations. There were 3 males and 10 females, and ages ranged from 56 to 80 years with a mean of 65.5 years. The indications for emergency operation were cardiogenic shock in 12 cases and intractable polymorphic VT (ventricular tachycardia) in 1 case. The causes of cardiogenic shock were acute evolving infarction in 6 cases, PTCA failure in 4 cases, acute myocardial infarction in 1 case, and post-AMI VSR (ventricular septal rupture) in 1 case. Five of the 13 patients could go to the operating room within 2 hours; however, the operations were delayed by 3 to 10 hours in 8 patients due to non-medical causes. In 12 patients, 37 distal anastomoses were constructed with only 3 LITAs (left internal thoracic arteries) and 34 saphenous veins. In the patient with post-AMI VSR, VSR repair was added. In the patient with intractable VT and critical stenosis limited to the left main coronary artery, left main coronary angioplasty was performed. Five patients died after operation, for an operative mortality of 38.5%. Three patients died in the operating room due to LV pump failure, one patient died of intractable ventricular tachycardia on the second postoperative day, and one patient died on the 7th postoperative day of multi-organ failure with complications of mediastinal bleeding, low cardiac output syndrome, ARF, and lower extremity ischemia due to the IABP. Among the 8 survivors, 3 major complications (mediastinitis, PMI, UGI bleeding) developed, but the patients eventually recovered. We think that an aggressive approach to critically ill patients will salvage some of them, and that the most important factor for patient salvage is early surgical intervention before irreversible damage occurs.

Evaluate the implementation of Volumetric Modulated Arc Therapy QA in the radiation therapy treatment according to Various factors by using the Portal Dosimetry (용적변조회전 방사선치료에서 Portal Dosimetry를 이용한 선량평가의 재현성 분석)

  • Kim, Se Hyeon;Bae, Sun Myung;Seo, Dong Rin;Kang, Tae Young;Baek, Geum Mun
    • The Journal of Korean Society for Radiation Therapy, v.27 no.2, pp.167-174, 2015
  • Purpose: To analyze whether pre-treatment QA using Portal Dosimetry for volumetric modulated arc therapy (VMAT) maintains its reproducibility under various factors. Materials and Methods: A TrueBeam STx™ (Ver. 1.5, Varian, USA) was used for the tests. The Varian Eclipse treatment planning system (TPS) was used for planning, and Portal Dosimetry QA plans were established for a total of seven patients with head and neck, lung, prostate, and cervical cancers. To measure these plans, the Portal Dosimetry application (Ver. 10, Varian) and the Portal Vision aS1000 imager were used. QA measurement points were set before and after the morning treatments and after the afternoon treatments ended (4 hours later). EPID calibration (dark field correction, flood field correction, dose normalization) was carried out before every QA measurement point. The MLC was initialized after each QA point and the QA was repeated. Before the QA measurements, beam output at each QA point was also measured using a water phantom and ionization chamber (IBA Dosimetry, Germany). Results: The mean gamma pass rates (GPR, 3%/3 mm) over all patients in the morning, afternoon, and evening were 97.3%, 96.1%, and 95.4%, and the patient showing the maximum difference measured 95.7%, 94.2%, and 93.7%. The mean GPR before and after EPID calibration was 95.94% and 96.01%. The mean beam output was 100.45%, 100.46%, and 100.59% at the respective QA points. The mean GPR before and after MLC initialization was 95.83% and 96.40%. Conclusion: Maintaining the reproducibility of Portal Dosimetry as a VMAT QA tool requires management of the various factors that can affect the dosimetry.
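
For reference, the gamma pass rate quoted throughout is the fraction of points whose gamma index (here 3%/3 mm, global) is at most 1. Below is a brute-force sketch of that metric under assumed grid spacing and synthetic dose planes; clinical software uses far more optimized and validated implementations.

```python
# Hedged sketch of a brute-force global gamma evaluation (3%/3 mm).
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Percentage of reference points with gamma <= 1 (global normalization)."""
    norm = dd * ref.max()                      # global dose-difference criterion
    win = int(np.ceil(dta_mm / spacing_mm))    # search window in pixels
    ny, nx = ref.shape
    passed = 0
    for i in range(ny):
        for j in range(nx):
            best = np.inf
            for di in range(-win, win + 1):
                for dj in range(-win, win + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        dist2 = (di**2 + dj**2) * spacing_mm**2
                        dose2 = ((meas[ii, jj] - ref[i, j]) / norm) ** 2
                        best = min(best, dist2 / dta_mm**2 + dose2)
            passed += best <= 1.0              # gamma^2 <= 1  <=>  gamma <= 1
    return 100.0 * passed / ref.size

ref = np.random.rand(50, 50) * 200             # placeholder dose planes (cGy)
meas = ref * (1 + np.random.normal(0, 0.01, ref.shape))
print(f"GPR(3%/3mm) = {gamma_pass_rate(ref, meas):.1f}%")
```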

The Effects of Global Entrepreneurship and Social Capital Within Supply Chain on the Export Performance (글로벌 기업가정신과 공급사슬 내 사회적 자본이 수출성과에 미치는 영향)

  • Yoon, Heon-Deok;Kwak, Ki-Young;Seo, Ri-Bin
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship, v.7 no.3, pp.1-16, 2012
  • Under international business circumstances, global supply chain management is considered a vital strategic challenge for small and medium-sized enterprises (SMEs), which, compared with large corporations, suffer from deficient resources and capabilities for exploiting overseas markets. They can nevertheless expand their business domains into overseas markets by establishing strategic alliances with global supply chain partners. Although a wide range of previous research has emphasized cooperative networks in the chain, most of it ignores the importance of developing relational characteristics such as trust and reciprocity with the partners. Moreover, studies verifying the relational factors that influence firms' export performance have proposed different and inconsistent factors. The social capital theory, which concerns the social qualities and networks that facilitate close inter-individual and inter-organizational cooperation, provides an integrated view for identifying these relational characteristics in the aspects of network, trust, and reciprocal norms. Meanwhile, a number of researchers have shown that global entrepreneurship is the internal and intangible resource necessary to promote SMEs' internationalization; upon closer examination, however, they cannot clearly explain its influencing mechanism in inter-firm cooperative relationships. This study verifies the effect of social capital accumulated within the global supply chain on SMEs' qualitative and quantitative export performance. In addition, we shed new light on global entrepreneurship, which is expected to be involved in the formation of social capital and the enhancement of export performance. For this purpose, questionnaires developed through a literature review were collected from 192 Korean SMEs affiliated with the Korean Medium Industries Association and the Global Chief Executive Officer's Club, focusing on their members' international business. As a result of multiple regression analysis, social capital (the network, trust, and reciprocal norms shared with global supply chain partners) as well as global entrepreneurship (innovativeness, proactiveness, and risk-taking) have positive effects on SMEs' export performance. Global entrepreneurship also positively affects social capital, which partially mediates the relationship between global entrepreneurship and performance. These results indicate a structural process: global entrepreneurship (input), social capital (output), and export performance (outcome). In other words, a firm should consistently invest in and develop social capital with its global supply chain partners in order to achieve common goals, establish strategic collaborations, and obtain long-term export performance. Furthermore, global entrepreneurship should be fostered within the organization so as to build up social capital. More detailed practical issues and discussion are presented in the conclusion.
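
To make the reported mediation structure concrete, here is a minimal regression sketch of the input-output-outcome chain (global entrepreneurship → social capital → export performance) in the classic three-step form. The synthetic data and coefficients are illustrative; only the sample size of 192 comes from the abstract.

```python
# Hedged sketch of partial mediation checked with ordinary least squares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 192                                        # matches the reported sample size
entrepreneurship = rng.normal(size=n)          # X (synthetic)
social_capital = 0.6 * entrepreneurship + rng.normal(size=n)          # M
export_perf = 0.3 * entrepreneurship + 0.5 * social_capital + rng.normal(size=n)  # Y

def ols(y, *xs):
    X = sm.add_constant(np.column_stack(xs))
    return sm.OLS(y, X).fit()

# Step 1: X -> Y (total effect); Step 2: X -> M; Step 3: X + M -> Y.
total = ols(export_perf, entrepreneurship)
x_to_m = ols(social_capital, entrepreneurship)
direct = ols(export_perf, entrepreneurship, social_capital)

# Partial mediation: X remains significant in step 3 but with a smaller
# coefficient than in step 1, while M is also significant.
print("total effect of X :", total.params[1].round(3))
print("X -> M            :", x_to_m.params[1].round(3))
print("direct effect of X:", direct.params[1].round(3))
print("effect of M       :", direct.params[2].round(3))
```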

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP, v.42 no.4 s.304, pp.1-12, 2005
  • Image registration is the process of establishing the spatial correspondence between images of the same scene acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by multiple sensors having different modalities, here EO (electro-optic) and IR (infrared). Two approaches are generally possible for image registration: feature-based and intensity-based. In the former, selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image; hence, this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure due to its high accuracy and robustness, and NMI-based image registration methods assume that the statistical correlation between the two images is global. Unfortunately, we find that EO and IR images often do not satisfy this assumption, so registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on an analysis of the statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated with the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
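
As a point of reference for the similarity measure named above, the sketch below computes NMI from a joint intensity histogram; the bin count and synthetic image pair are illustrative, and the paper's ESCR/ESCF preprocessing stages are not reproduced.

```python
# Hedged sketch: normalized mutual information from a joint histogram.
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from intensity histograms."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability estimate
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals

    def entropy(p):
        p = p[p > 0]                           # ignore empty bins
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

eo = np.random.rand(256, 256)                      # placeholder EO image
ir = 1.0 - eo + 0.05 * np.random.rand(256, 256)    # inversely correlated "IR" image
print(f"NMI = {normalized_mutual_information(eo, ir):.3f}")
```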

A Design of PLL and Spread Spectrum Clock Generator for 2.7Gbps/1.62Gbps DisplayPort Transmitter (2.7Gbps/1.62Gbps DisplayPort 송신기용 PLL 및 확산대역 클록 발생기의 설계)

  • Kim, Young-Shin;Kim, Seong-Geun;Pu, Young-Gun;Hur, Jeong;Lee, Kang-Yoon
    • Journal of the Institute of Electronics Engineers of Korea SD, v.47 no.2, pp.21-31, 2010
  • This paper presents the design of a PLL and an SSCG that reduce EMI effects in electronic equipment for DisplayPort applications. The system is composed of the essential elements of a PLL together with a second charge pump (Charge-Pump2) and a reference clock divider to implement the SSCG operation. In this paper, a 270 MHz/162 MHz dual-mode PLL that provides 10 phases and a 1.35 GHz/810 MHz PLL that reduces jitter are designed for the 2.7 Gbps/1.62 Gbps DisplayPort application. Jitter can be reduced drastically by combining the 270 MHz/162 MHz PLL with a two-stage 5-to-1 serializer and the 1.35 GHz PLL with a 2-to-1 serializer. This paper proposes a frequency divider topology that shares the divider between modes and guarantees a 50% duty ratio, and the output current mismatch is reduced by the proposed charge-pump topology. The design is implemented in a 0.13 µm CMOS process, and the die areas of the 270 MHz/162 MHz PLL and the 1.35 GHz/810 MHz PLL are 650 µm × 500 µm and 600 µm × 500 µm, respectively. The VCO tuning range of the 270 MHz/162 MHz PLL is 330 MHz, and the phase noise is -114 dBc/Hz at a 1 MHz offset. The measured SSCG down-spread amplitude is 0.5% and the modulation frequency is 31 kHz. The total power consumption is 48 mW.
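
To visualize the measured spread-spectrum parameters (0.5% down-spread at 31 kHz), here is a small numeric sketch of a triangular down-spread profile; the nominal frequency, waveform shape, and sample count are assumptions for illustration rather than details taken from the design.

```python
# Hedged sketch: triangular down-spread clock modulation profile.
import numpy as np

f_nominal = 270e6        # Hz, nominal PLL output (one of the modes above)
spread = 0.005           # 0.5% down-spread amplitude
f_mod = 31e3             # Hz, modulation frequency
t = np.linspace(0, 2 / f_mod, 1000)          # two modulation periods

# Triangular wave in [0, 1]: 0 at nominal frequency, 1 at full down-spread.
phase = (t * f_mod) % 1.0
tri = 2 * np.abs(phase - 0.5)
f_inst = f_nominal * (1 - spread * tri)      # down-spread only, never above nominal

print(f"min {f_inst.min()/1e6:.3f} MHz, max {f_inst.max()/1e6:.3f} MHz")
```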

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.79-92, 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and helping users navigate the integrated data through a knowledge map service built with a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center; that is, the other R&D data, such as research papers, patents, and reports, are connected to the research project as its outputs. The lightweight ontology represents the simple relationships between the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships, and the knowledge map enables us to infer further relationships such as co-author and co-topic relationships. To extract the relationships between the integrated data, a relational-data-to-triples transformer is implemented, and a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: those used in knowledge management to store, manage, and process an organization's data as knowledge, and those for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. Here, a knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. Using a lightweight ontology enables us to represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected through the national R&D data by authorship and project-performer relationships; a knowledge map displaying the researchers' network is created from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system's goals are 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based search over the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, reports, patents, and GTB data is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS; both are integrated into a single database. The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them; this approach extracts relationships based on semantics rather than simple keyword matching. Lastly, we present an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and introduce the knowledge map services created on top of it.
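
A minimal sketch of the document-topic extraction step just described, using off-the-shelf LDA and emitting the resulting links as plain (subject, predicate, object) triples; the tiny corpus and the ontology terms (doc:, km:hasTopic, topic:) are invented for illustration and are not the NDSL/NTIS data or the paper's actual vocabulary.

```python
# Hedged sketch: LDA-based document-topic relationships emitted as triples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "deep learning image recognition neural network",
    "stream nitrogen denitrification water quality",
    "neural network classification business data",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)              # document-topic distributions

# Emit a triple for each document's dominant topic; project-output links from
# the relational data would be transformed into triples in the same spirit.
triples = [(f"doc:{i}", "km:hasTopic", f"topic:{doc_topic[i].argmax()}")
           for i in range(len(docs))]
for t in triples:
    print(t)
```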