This study investigated the influence of sapflow flux on soil water tension and soil moisture content in Abies holophylla plots in Gwangneung, Gyeonggi-do, from September to October 2004. The Abies holophylla stand had been planted in 1976, and thinning and pruning were carried out in 1996 and 2004. Sapflow flux was measured by the heat pulse method, and soil water tension was measured by tensiometers at the hillslope and streamside. Time domain reflectometry (TDR) probes were positioned horizontally at depths of 10, 30, and 50 cm to measure soil moisture content. All data were recorded every 30 minutes with dataloggers. Sapflow flux responded sensitively to rainfall, so little sapflow was detected on rainy days. The average daily sapflow flux of the sample trees was 10.16 l, the maximum was 15.09 l, and the minimum was 0.0 l. Diurnal changes showed that sapflow flux increased from 9 am up to 0.74 l/30 min, remained at its highest until 3 pm, and decreased to almost 0.0 l/30 min after 7 pm. The average soil water tensions were low (
Purpose: To investigate the signal enhancement ratio by NOE effect on in vivo
The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and it is now often combined with statistical artificial intelligence such as machine learning. Recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires considerable expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of knowledge, since it generates knowledge from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model built on the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of the sentences suitable for triple extraction, and value selection and transformation into the RDF triple structure. The structure of a Wikipedia infobox is defined by an infobox template that provides standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into the form of triples. To train the models, we generated a training dataset from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process.
Through the proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
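The final step of the pipeline described above can be sketched in a few lines: collecting the tokens that a sequence labeler tagged as a value span (BIO scheme) into the object of an RDF-style triple. The tag names, the sample sentence, and the `dbo:birthPlace` property below are illustrative assumptions, not the actual output of the trained CRF or Bi-LSTM-CRF models.

```python
def bio_to_triple(tokens, tags, subject, predicate):
    """Collect the tokens tagged B-VAL/I-VAL into the object of a triple."""
    value_tokens = []
    for token, tag in zip(tokens, tags):
        if tag == "B-VAL":            # beginning of the value span
            value_tokens = [token]
        elif tag == "I-VAL":          # continuation of the value span
            value_tokens.append(token)
    if not value_tokens:
        return None                   # no value found in this sentence
    return (subject, predicate, " ".join(value_tokens))

# Hypothetical labeler output for a sentence classified under dbo:Person.
tokens = ["Yi", "Sun-sin", "was", "born", "in", "Hanseong", "."]
tags   = ["O",  "O",       "O",   "O",    "O",  "B-VAL",    "O"]

triple = bio_to_triple(tokens, tags, "dbr:Yi_Sun-sin", "dbo:birthPlace")
print(triple)  # ('dbr:Yi_Sun-sin', 'dbo:birthPlace', 'Hanseong')
```

In the actual model, the subject comes from the document title and the predicate from the attribute whose sentence classifier selected this sentence; serialization to concrete RDF syntax is omitted here.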
The social venture start-up phenomenon can be viewed from the perspectives of both the social enterprise and the for-profit enterprise. This study aims to explore the start-up phenomenon of social ventures from these two perspectives. Given the lack of prior research examining both the social and commercial perspectives at the same time, this paper uses the grounded theory approach of Strauss & Corbin (1998), an inductive research method that analyzes phenomena based on prior research and interview data. To collect data for this study, eight representatives of companies currently operating social ventures were interviewed, and the data and phenomena were analyzed; interviews continued until theoretical saturation, where no additional information was derived. The results of the grounded-theory analysis are as follows. Open coding and axial coding yielded 147 concepts and 70 subcategories, and 18 categories were derived through the final abstraction process. In selective coding, 'expansion of social venture entry into the social domain' and 'expansion of the social function of for-profit companies' were selected as the key categories, and a story line was formed around them. This study shows the need for academic research and analysis of the competitive factors required for companies that pursue two conflicting sets of values, such as social ventures, to survive competitively. In practice, concepts such as collaboration with for-profit companies, value combination, entrepreneurship competency and performance improvement, reinforcement of social value execution competency, communication strategy, for-profit enterprise value investment, and entrepreneur management competency were derived. This study explains the social venture phenomenon for social enterprises, commercial enterprises, and entrepreneurs who want to enter the social venture field.
It is expected to provide the implications necessary for successful social venture startups.
Numerical simulation in exploration geophysics provides important insights into subsurface wave propagation phenomena. Although elastic wave simulations take longer to compute than acoustic simulations, an elastic simulator can construct more realistic wavefields, including shear components; it is therefore suitable for exploring the responses of elastic bodies. To overcome the long duration of the calculations, we use a Graphics Processing Unit (GPU) to accelerate the elastic wave simulation. Because a GPU has many processors and a wide memory bandwidth, we can use it in a parallelised computing architecture. The GPU board used in this study is an NVIDIA Tesla C1060, which has 240 processors and a 102 GB/s memory bandwidth. Even with the parallel computing architecture (CUDA) developed by NVIDIA, we must optimise the usage of the different types of memory on the GPU device, and the sequence of calculations, to obtain a significant speedup of the computation. In this study, we simulate two-dimensional (2D) and three-dimensional (3D) elastic wave propagation using the Finite-Difference Time-Domain (FDTD) method on GPUs. In the wave propagation simulation, we adopt the staggered-grid method, one of the conventional FD schemes, since it achieves sufficient accuracy for numerical modelling in geophysics. Our simulator optimises memory usage on the GPU device to reduce data access times, using the faster memory types as much as possible; this is a key factor in GPU computing. By using one GPU device and optimising its memory usage, we improved the computation time by more than 14 times in the 2D simulation, and over six times in the 3D simulation, compared with one CPU. Furthermore, by using three GPUs, we succeeded in accelerating the 3D simulation 10 times.
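The staggered-grid velocity-stress update at the core of such a simulator can be illustrated with a minimal NumPy sketch of one 2D time step. The grid sizes, material parameters, source, and second-order differences below are illustrative assumptions; the paper's simulator performs the equivalent updates in optimised CUDA kernels.

```python
import numpy as np

nx, nz = 100, 100
dx = dz = 10.0           # grid spacing [m]
dt = 1e-3                # time step [s], satisfies the CFL condition here
rho = 2000.0             # density [kg/m^3]
vp, vs = 3000.0, 1700.0  # P- and S-wave velocities [m/s]
mu = rho * vs**2                   # shear modulus
lam = rho * vp**2 - 2.0 * mu       # Lame's first parameter

vx  = np.zeros((nz, nx)); vz  = np.zeros((nz, nx))
txx = np.zeros((nz, nx)); tzz = np.zeros((nz, nx)); txz = np.zeros((nz, nx))

def step(vx, vz, txx, tzz, txz):
    # Leapfrog ordering: first update particle velocities from stress gradients.
    vx[1:-1, 1:-1] += dt / rho * (
        (txx[1:-1, 2:] - txx[1:-1, 1:-1]) / dx +
        (txz[1:-1, 1:-1] - txz[:-2, 1:-1]) / dz)
    vz[1:-1, 1:-1] += dt / rho * (
        (txz[1:-1, 1:-1] - txz[1:-1, :-2]) / dx +
        (tzz[2:, 1:-1] - tzz[1:-1, 1:-1]) / dz)
    # Then update stresses from the new velocity gradients.
    dvx_dx = (vx[1:-1, 1:-1] - vx[1:-1, :-2]) / dx
    dvz_dz = (vz[1:-1, 1:-1] - vz[:-2, 1:-1]) / dz
    txx[1:-1, 1:-1] += dt * ((lam + 2 * mu) * dvx_dx + lam * dvz_dz)
    tzz[1:-1, 1:-1] += dt * (lam * dvx_dx + (lam + 2 * mu) * dvz_dz)
    txz[1:-1, 1:-1] += dt * mu * (
        (vx[2:, 1:-1] - vx[1:-1, 1:-1]) / dz +
        (vz[1:-1, 2:] - vz[1:-1, 1:-1]) / dx)

# Inject an explosive point source at the grid centre and advance a few steps.
txx[nz // 2, nx // 2] = 1.0
tzz[nz // 2, nx // 2] = 1.0
for _ in range(10):
    step(vx, vz, txx, tzz, txz)
```

On a GPU, each of these slice updates becomes a kernel over the grid, and the speedups reported above come largely from staging the stress and velocity tiles in fast on-chip memory rather than re-reading global memory.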
In many solute transport studies, either flux or resident concentration has been used, the choice of concentration mode depending on the monitoring device in the solute displacement experiments. It has been accepted that neither concentration mode has priority in the study of solute transport. It is questionable, however, whether the solute transport parameters derived from flux and resident concentrations are equivalent in structured soils exhibiting preferential movement of solute. In this study, we investigate how the two modes differ in the monitored breakthrough curves (BTCs) and transport parameters for a given boundary and flow condition by performing solute displacement experiments on a number of undisturbed soil columns. Flux and resident concentrations were obtained simultaneously by monitoring the effluent and the resistance of horizontally positioned TDR probes. Two different solute transport models, namely the convection-dispersion equation (CDE) and the convective lognormal transfer function (CLT) model, were fitted to the observed breakthrough data in order to quantify the difference between the two concentration modes. The study reveals that soil columns with relatively high flux densities exhibited large differences in peak concentration and peak travel time between flux and resident concentrations. The peak concentration in flux mode was several times higher than that in resident mode. Accordingly, the estimated parameters of flux mode differed greatly from those of resident mode, and the difference was more pronounced in the CDE than in the CLT model. Especially in the CDE model, the parameters of flux mode were much higher than those of resident mode. This was mainly due to the bypassing of solute through soil macropores and the failure of the equilibrium CDE model to adequately describe solute transport in the studied soils.
In the domain of the relationship between the ratio of hydrodynamic dispersion to molecular diffusion and the Péclet number, both concentration modes fall in a zone of predominant mechanical dispersion. However, it appears that molecular diffusion contributes more to solute spreading in the matrix region than in the macropore region, due to the nonlinearity of the relationship between pore water velocity and the dispersion coefficient.
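The equilibrium CDE fitted to the breakthrough data above has a classical closed-form solution for a step input; a minimal sketch of the flux-mode breakthrough curve is given below. The parameter values (pore water velocity v, dispersion coefficient D, column length L) are illustrative, not those estimated in the study.

```python
import math

def cde_step_btc(t, v, D, L):
    """Relative flux concentration C/C0 at depth L and time t for a step
    input, from the classical equilibrium CDE solution."""
    if t <= 0.0:
        return 0.0
    a = (L - v * t) / (2.0 * math.sqrt(D * t))
    b = (L + v * t) / (2.0 * math.sqrt(D * t))
    # The second term is usually small but matters at high dispersivity.
    return 0.5 * math.erfc(a) + 0.5 * math.exp(v * L / D) * math.erfc(b)

# Example: v = 2 cm/h, D = 1 cm^2/h, L = 10 cm column.
for t in (2.0, 5.0, 10.0):
    print(f"t = {t:4.1f} h  C/C0 = {cde_step_btc(t, 2.0, 1.0, 10.0):.3f}")
```

Fitting this curve to an observed BTC in each concentration mode yields the mode-specific v and D whose divergence the study quantifies; the preferential flow through macropores is exactly what this single-domain solution cannot capture.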
Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, the CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, this study proposes to apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs are strong at interpreting images. Thus, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future financial price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. Then, in step 2, it creates time series graphs for the divided dataset. The size of the image in which the graph is drawn is
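The data preparation in the first two steps can be sketched as follows: slicing a daily price series into 5-day windows and labeling each window by the direction of the next day's move. The sample price series and the exact labeling rule are illustrative assumptions; rendering each window as a fluctuation-graph image (step 2's output) is omitted.

```python
def make_windows(prices, window=5):
    """Return (window, label) pairs: label 1 if the day after the window
    closes higher than the window's last close (upward), else 0 (downward)."""
    samples = []
    for i in range(len(prices) - window):
        w = prices[i:i + window]                        # 5-day interval (step 1)
        label = 1 if prices[i + window] > w[-1] else 0  # next-day direction
        samples.append((w, label))
    return samples

prices = [100, 102, 101, 105, 107, 106, 110, 108]
for w, y in make_windows(prices):
    print(w, "->", "up" if y else "down")
```

Each window would then be drawn as a small graph image and fed to the CNN as a training example, with the direction label as the binary target.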
Background: Vitamin K antagonist (VKA) therapy to prevent thromboembolism in patients with non-valvular atrial fibrillation (NVAF) has limitations such as drug interactions. This study investigated the clinical characteristics of Korean patients treated with VKA for stroke prevention and assessed the quality of VKA therapy and treatment satisfaction. Methods: We conducted a multicenter, prospective, non-interventional study. Patients with
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70