• Title/Summary/Keyword: Dynamic Experiment

Analysis of the Impact of Reflected Waves on Deep Neural Network-Based Heartbeat Detection for Pulsatile Extracorporeal Membrane Oxygenator Control (반사파가 박동형 체외막산화기 제어에 사용되는 심층신경망의 심장 박동 감지에 미치는 영향 분석)

  • Seo Jun Yoon;Hyun Woo Jang;Seong Wook Choi
    • Journal of Biomedical Engineering Research
    • /
    • v.45 no.3
    • /
    • pp.128-137
    • /
    • 2024
  • It is necessary to develop a pulsatile Extracorporeal Membrane Oxygenator (p-ECMO) with counter-pulsation control (CPC), which ejects blood during the diastolic phase of the heart rather than the systolic phase, because conventional ECMO is known to cause fatal complications such as ventricular dilation and pulmonary edema. A promising way to detect the pulsations of the heart and the p-ECMO simultaneously is to analyze blood pressure waveforms with deep neural network (DNN) technology. However, accurate detection of cardiac rhythms by DNNs is challenging because of various noise sources such as pulsations from the p-ECMO, reflected waves in the vessels, and other dynamic noise. This study evaluates the accuracy of DNNs developed for CPC in p-ECMO, using human-like blood pressure waveforms reproduced in an in-vitro experiment. In particular, an experimental setup that reproduces the reflected waves commonly observed in actual patients was developed, and the impact of these waves on DNN judgments was assessed using a multiple DNN (m-DNN) that provides accurate determinations along with a separate index of heartbeat recognition ability. In the setup that induced reflected waves, the shape of the blood pressure waveform became increasingly complex, which coincided with an increase in harmonic components evident in the Fast Fourier Transform (FFT) of the pressure wave. The recognition score (RS) of the DNNs decreased for blood pressure waveforms with significant harmonic components distinct from the frequency components produced by the heart and the p-ECMO. Each DNN trained on blood pressure waveforms without reflected waves showed a low RS when faced with waveforms containing reflected waves; nevertheless, the accuracy of the final results from the m-DNN remained high even in the presence of reflected waves.
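
The abstract's point that reflected waves add harmonic components to the pressure waveform can be illustrated with a short FFT check. The following is a minimal sketch under assumed sampling rates and pulsation frequencies, not the authors' actual signal pipeline; the reflected wave is crudely approximated as a distorted, delayed copy of the heart pulse.

```python
# Minimal sketch (not the authors' pipeline): inspect the harmonic content of a
# simulated arterial pressure waveform with and without a reflected-wave component.
import numpy as np

fs = 250.0                         # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)       # 10 s of signal
f_heart, f_ecmo = 1.2, 0.8         # heart and p-ECMO pulsation frequencies (assumed)

base = 80 + 20 * np.sin(2 * np.pi * f_heart * t) + 10 * np.sin(2 * np.pi * f_ecmo * t)
# A reflected wave is approximated as a delayed, distorted copy of the heart pulse;
# cubing the sinusoid introduces a third-harmonic component.
reflected = base + 8 * np.sin(2 * np.pi * f_heart * (t - 0.15)) ** 3

def harmonic_fraction(signal, fs, fundamentals, tol=0.05):
    """Fraction of spectral energy lying away from the given fundamental frequencies."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    near_fundamental = np.zeros_like(freqs, dtype=bool)
    for f0 in fundamentals:
        near_fundamental |= np.abs(freqs - f0) < tol
    return spectrum[~near_fundamental].sum() / spectrum.sum()

print(harmonic_fraction(base, fs, [f_heart, f_ecmo]))        # ~0: only the fundamentals
print(harmonic_fraction(reflected, fs, [f_heart, f_ecmo]))   # larger: extra harmonics
```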

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.1-18
    • /
    • 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as images, sound, video, and text are distributed through Web media, and many attempts have been made in recent years to discover new value through the analysis of these unstructured data. Among these types of unstructured data, text is recognized as the most representative means for users to express and share their opinions on the Web. In this sense, the demand for obtaining new insights through text analysis is steadily increasing, and text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is widely studied not only in academia but also in industry because it can extract various issues from text such as news articles and SNS (Social Network Services) posts and analyze the trends of these issues. Conventionally, issue tracking identifies major issues sustained over a long period of time through topic modeling and analyzes the detailed distribution of the documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that are created, merged, divided, and deleted between periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking. Note that detailed keywords are preferable to general keywords because the former can serve as clues for actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period and generated an issue flow diagram based on the similarity of each issue between two consecutive periods. The issue transition pattern among categories was analyzed using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from the articles. The experiment section presents the following useful application scenarios for the issue flow diagram. First, we can identify an issue that appears actively during a certain period and promptly disappears in the next period. Second, the preceding and following issues of a particular issue can be easily discovered from the issue flow diagram, which implies that our methodology can be used to discover associations between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis: a pair of mutually similar categories induces two-way transitions, whereas a one-way transition can be recognized as an indicator that issues in a certain category tend to be influenced by issues in another category. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed. In addition, not only the number of documents but also additional meta-information such as read counts, written time, and comments of documents should be analyzed. A rigorous performance evaluation and validation of the proposed methodology remain as future work.
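
The core step of the proposed issue tracking, linking issues of consecutive periods by their similarity, can be sketched as follows. This is a minimal illustration with hypothetical issue keyword weights and an assumed similarity threshold, not the paper's exact procedure.

```python
# Minimal sketch (assumptions, not the paper's exact method): link issues discovered
# in two consecutive periods by the cosine similarity of their keyword weights.
from collections import defaultdict
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse keyword-weight vectors."""
    dot = sum(w * b.get(k, 0.0) for k, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical issues per period: issue id -> {keyword: weight}
period_t = {"t1": {"north korea": 0.5, "nuclear test": 0.4},
            "t2": {"election": 0.6, "poll": 0.3}}
period_t1 = {"u1": {"nuclear test": 0.5, "sanctions": 0.3},
             "u2": {"separated families": 0.7}}

THRESHOLD = 0.3   # assumed cut-off for drawing an edge in the issue flow diagram
edges = defaultdict(list)
for src, kw_src in period_t.items():
    for dst, kw_dst in period_t1.items():
        sim = cosine(kw_src, kw_dst)
        if sim >= THRESHOLD:
            edges[src].append((dst, round(sim, 3)))

print(dict(edges))   # e.g. {'t1': [('u1', 0.536)]}: t1 continues as u1; t2 disappears
```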

Development of Summer Leaf Vegetable Crop Energy Model for Rooftop Greenhouse (옥상온실에서의 여름철 엽채류 작물에너지 교환 모델 개발)

  • Cho, Jeong-Hwa;Lee, In-Bok;Lee, Sang-Yeon;Kim, Jun-Gyu;Decano, Cristina;Choi, Young-Bae;Lee, Min-Hyung;Jeong, Hyo-Hyeog;Jeong, Deuk-Young
    • Journal of Bio-Environment Control
    • /
    • v.31 no.3
    • /
    • pp.246-254
    • /
    • 2022
  • Domestic facility agriculture is growing rapidly through modernization and scaling up, and its production has increased significantly relative to its cultivated area, accounting for about 60% of total agricultural production. Greenhouses require energy input to maintain an appropriate environment for stable, year-round mass production, but the energy load per unit area is large because of their low insulation. A rooftop greenhouse, one form of urban agriculture, can use energy that would otherwise be discarded or left unused by the building, and optimal greenhouse operation can in turn reduce the building's heating and cooling load. Efficient operation of rooftop greenhouses requires a preceding dynamic energy analysis for various environmental conditions, and because about 40% of the solar energy entering the greenhouse is exchanged through the crops, crop energy exchange must be considered. In particular, the sensible and latent heat loads determined by leaf surface temperature and evapotranspiration, which dominate the energy flow, need to be analyzed. Therefore, an experiment was conducted in a rooftop greenhouse at the Korea Institute of Machinery and Materials to analyze energy exchange according to the growth stage of the crops. Micro-meteorological conditions, the nutrient-solution environment, and crop growth were surveyed around the plants. Finally, a regression model of leaf temperature and evapotranspiration as a function of the growth stage of leafy vegetables was developed; using this model, a dynamic energy model of the rooftop greenhouse that accounts for heat transfer between the crops and the surrounding air can be analyzed.
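
As a rough illustration of the kind of regression model described above, the sketch below fits evapotranspiration to solar radiation and a growth-stage index by ordinary least squares. The variables, units, and numbers are assumptions for illustration only, not the developed crop energy model.

```python
# Minimal sketch (illustrative assumptions only, not the developed crop model):
# fit a linear regression of evapotranspiration on solar radiation and growth stage.
import numpy as np

# Hypothetical observations: [solar radiation (W/m2), growth stage index]; target: ET (g/m2/h)
X = np.array([[200, 1], [400, 1], [600, 2], [800, 2], [300, 3], [700, 3]], dtype=float)
y = np.array([35.0, 60.0, 95.0, 130.0, 70.0, 140.0])

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, b_rad, b_stage = coef
print(f"ET ~ {intercept:.1f} + {b_rad:.3f}*radiation + {b_stage:.1f}*stage")

# Predicted evapotranspiration (latent-heat term) for a new, assumed condition.
new = np.array([1.0, 500.0, 2.0])
print("predicted ET:", new @ coef)
```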

Evaluation on the Usefulness of Alternative Radiopharmaceutical by Particle size in Sentinel Lymphoscintigraphy (감시림프절 검사 시 입자크기에 따른 대체 방사성의약품의 유용성평가)

  • Jo, Gwang Mo;Jeong, Yeong Hwan;Choi, Do Cheol;Shin, Ju Cheol
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.2
    • /
    • pp.36-41
    • /
    • 2016
  • Purpose: Sentinel lymphoscintigraphy (SLS) has been performed using only $^{99m}Tc$-phytate, so if its supply is temporarily interrupted there is no alternative radiopharmaceutical. The aim of this study was to measure the particle size of candidate radiopharmaceuticals and to identify agents that could substitute for $^{99m}Tc$-phytate. Materials and Methods: The particle sizes of the radiopharmaceuticals were analyzed with a nano-particle analyzer, and agents known to have particle sizes useful for SLS were selected. The subjects were divided into a control group using $^{99m}Tc$-phytate and experimental groups using $^{99m}Tc$-DPD, $^{99m}Tc$-MAG3, and $^{99m}Tc$-DMSA. For the in-vivo experiment, the radiopharmaceuticals were injected intradermally into both feet for lymphoscintigraphy. Dynamic and delayed static images were acquired, and the inguinal lymph nodes were inspected with the naked eye. Results: The measured particle sizes were phytate 105~255 nm (81.9%), MAG3 91~255 nm (98.7%), DPD 105~342 nm (77.3%), DMSA 164~342 nm (99.2%), MAA 1281~2305 nm (90.6%), DTPA 342~1106 nm (79.4%), and HDP 295~955 nm (94%). On the delayed static images, the inguinal lymph nodes of the $^{99m}Tc$-phytate group and of two of the other groups were visible to the naked eye; however, those of the $^{99m}Tc$-MAG3 group were not. Conclusion: We analyzed the particle sizes of radiopharmaceuticals used in vivo. Consequently, $^{99m}Tc$-DPD and $^{99m}Tc$-DMSA can serve as alternative radiopharmaceuticals for $^{99m}Tc$-phytate in an emergency.

The role of the middle term in the integration of the two premises in linear syllogistic reasoning (선형 삼단 논법의 두 전제 통합 과정에서 중간 항목의 역할)

  • 정혜선;조명한
    • Korean Journal of Cognitive Science
    • /
    • v.12 no.3
    • /
    • pp.29-46
    • /
    • 2001
  • This study attempted to demonstrate that the integration of the two premises in a linear syllogism is mediated by the middle term, the term that is repeated in the two premises. In Experiment 1, we examined whether representing the middle term is more important than representing the end terms. We asked a question about each premise; depending on the order of the questions, either the two end terms or the middle term became the answer in both premises. Participants solved the problems better when the middle term was the answer, suggesting that representing the middle term is more important than representing the end terms. In Experiment 2, we examined whether additional processing is needed for the integration beyond establishing a co-referential link through the middle term. We pronominalized the middle term in the second premise and provided two kinds of information to disambiguate the pronoun. In the direct information condition, we provided information about who the pronoun referred to, whereas in the indirect information condition, we provided information about the relative location of the pronoun. Participants solved the problems more quickly in the indirect information condition than in the direct information condition, indicating that a mere co-referential link was not enough and that the relative location of the middle term needs to be computed for the integration of the two premises.

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems
    • /
    • v.19 no.3
    • /
    • pp.51-67
    • /
    • 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, correctly estimating the user's future context is crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, mainly because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, because of the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge and that increases accuracy while decreasing the elapsed time for service response. To do so, we developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using the context history. Then a pattern, consisting of the results of reasoning with the individual rules, is formed for pattern learning. If at least one context property matches (call it R), the pattern is regarded as right: if the pattern is new, it is added as a right pattern with the values of the mismatched properties set to 0, its frequency set to 1, and weight w(R, 1); otherwise, the frequency of the matched right pattern is increased by 1 and the weight is set to w(R, freq). After training, right patterns whose frequency exceeds a threshold are saved in the knowledge base. On the other hand, if at least one context property fails to match (call it W), the pattern is regarded as wrong: if the pattern is new, the result is replaced with the correct answer and it is added as a wrong pattern with frequency 1 and weight w(W, 1); otherwise, the frequency of the matched wrong pattern is increased by 1 and the weight is set to w(W, freq). After training, wrong patterns whose frequency exceeds a threshold are likewise saved in the knowledge base. Context prediction is then performed with combinatorial rules as follows: first, identify the current context; second, find matching patterns among the right patterns; if no pattern matches, find a matching pattern among the wrong patterns; if still no matching pattern is found, choose the context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context histories from travelers who visited the largest amusement park in Korea, yielding 400 context records in 2009. We randomly selected 70% of the records as training data and used the rest as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures, and the performance was compared with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of both accuracy and elapsed time, which shows that the methodology is relatively valid and scalable. As a second round of the experiment, we compared a full model with a partial model. A full model uses both right and wrong patterns for reasoning about the future context, whereas a partial model reasons only with right patterns, as is generally done in the legacy alignment-prediction method. It turned out that the full model is better in terms of accuracy, while the partial model is better when considering elapsed time. As a last experiment, we took into consideration potential privacy problems that might arise among users. To mitigate such concerns, we excluded context properties such as the date of the tour and user profile attributes such as gender and age; the outcome shows that preserving privacy in this way is acceptable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods with respect to prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method can also decrease elapsed time. Hence, the method is very practical for establishing privacy-preserving context-aware services. Our future research issues, which take into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
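
To make the pattern-learning step easier to follow, here is a minimal sketch under our own simplifying assumptions (hypothetical rules, a toy context history, and a simple frequency threshold); it is not the paper's exact algorithm or data.

```python
# Minimal sketch under our own assumptions (not the paper's exact algorithm): learn
# "right" and "wrong" patterns of per-attribute rule outputs, then predict the next
# context by matching the current context against the stored patterns.
from collections import Counter

def rule_predictions(context):
    """Stand-in for the per-attribute individual rules (hypothetical rules)."""
    return {"place": context["place"],                 # rule: place tends to persist
            "activity": "ride" if context["place"] == "park" else "rest"}

def train(history, threshold=2):
    right, wrong = Counter(), Counter()
    for current, nxt in zip(history, history[1:]):
        pattern = tuple(sorted(rule_predictions(current).items()))
        actual = tuple(sorted(nxt.items()))
        (right if pattern == actual else wrong)[(pattern, actual)] += 1
    keep = lambda c: {k: f for k, f in c.items() if f >= threshold}
    return keep(right), keep(wrong)                    # keep only frequent patterns

def predict(current, right, wrong):
    pattern = tuple(sorted(rule_predictions(current).items()))
    for store in (right, wrong):                       # try right patterns first
        matches = [actual for (p, actual), _ in store.items() if p == pattern]
        if matches:
            return dict(matches[0])
    return dict(pattern)                               # fall back to the rules alone

history = ([{"place": "park", "activity": "ride"}] * 3
           + [{"place": "cafe", "activity": "rest"}] * 3)   # toy context history
right, wrong = train(history)
print(predict({"place": "park", "activity": "ride"}, right, wrong))
```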

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia Pacific Journal of Information Systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One role of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieval of similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact-matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we modeled the Process Handbook meta-model in OWL and exported the processes in the Handbook as instances of that meta-model. Next, we needed a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and for similarity values between processes. To generate a semantics-preserving test data set, we created 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devised diverse similarity algorithms based on the values of process attributes and the structure of business processes. We used simple text-retrieval similarity measures such as TF-IDF and Levenshtein edit distance, and also a tree edit distance measure, because semantic processes appear to have a graph structure. In addition, we designed similarity algorithms that consider the similarity of the process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized for calculating similarities between processes; Dice's coefficient and the Jaccard similarity measure are used to calculate the overlap between processes in diverse ways. We performed retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance showed the poorest performance on all measures. TF-IDF and the method combining the TF-IDF measure with Levenshtein edit distance showed better performance than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculated the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's coefficient, the Jaccard measure, and their derivatives, showed greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers the process structure and the attribute values together, showed reasonably good performance in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. In summary, we generated semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository, suggested imprecise query algorithms that expand the retrieval results of an exact-matching engine such as SPARQL, and compared the retrieval performance of the similarity algorithms. As limitations and future work, experiments with datasets from other domains are needed, and because many similarity values are produced by the diverse measures, better ways of identifying relevant processes by applying these values simultaneously may be found.
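
For readers unfamiliar with the similarity measures mentioned above, the following toy sketch computes Jaccard and Dice similarity over sets of process subcomponents and a Levenshtein edit distance over process names. The process contents are hypothetical and the measures are generic textbook versions, not the paper's exact Lev-TFIDF-JaccardAll combination.

```python
# Minimal sketch (generic measures, hypothetical processes): Jaccard and Dice over
# sets of process subcomponents, plus Levenshtein edit distance over process names.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

def levenshtein(s: str, t: str) -> int:
    """Classic dynamic-programming edit distance with a rolling row."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = curr
    return prev[-1]

# Hypothetical processes: a name plus a set of part processes.
p = {"name": "sell product", "parts": {"identify buyer", "deliver", "receive payment"}}
q = {"name": "sell a product", "parts": {"identify buyer", "deliver", "bill customer"}}

print("jaccard:", jaccard(p["parts"], q["parts"]))       # structural overlap
print("dice   :", dice(p["parts"], q["parts"]))
print("edit   :", levenshtein(p["name"], q["name"]))     # name similarity
```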

Dynamic Behavior of Model Set Net in the Flow (모형 정치망의 흐름에 대한 거동)

  • Jung, Gi-Cheul;Kwon, Byeong-Guk;Le, Ju-Hee
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.33 no.4
    • /
    • pp.275-284
    • /
    • 1997
  • This model experiment measured the sinking depth of each buoy, the change in the shape of the net, and the tension of the sand bag lines under the R (from the bag net toward the fish court) and L (from the fish court toward the bag net) current directions at various current velocities. The model net was one-fiftieth of the real net, and its size was determined by considering Tauti's similarity law and the dimensions of the experimental tank. 1. The changes of the net shape were as follows. In the R current, the end net of the fish court moved 20 mm downstream and 10 mm upward, so the whole model net lifted at 0.2 m/sec; the net assumed an almost linear shape from the bag net to the fish court at 0.6 m/sec. In the L current, the door net moved 242 mm downstream and 18 mm upward, so the whole model net lifted at 0.2 m/sec; the net assumed an almost linear shape from the fish court to the bag net at 0.5 m/sec. 2. The sinking depths of each buoy were as follows. In the R current, the head buoy started sinking at 0.2 m/sec and sank 20 mm at 0.3 m/sec and 99 mm at 0.6 m/sec, while the end buoy did not sink between 0 m/sec and 0.6 m/sec but showed slight shaking. In the L current, the end buoy started sinking at 0.1 m/sec and sank 5 mm at 0.2 m/sec and 108 mm at 0.6 m/sec, and the whole model net, except the head buoy, sank at 0.5 m/sec. 3. The changes in the sand bag line tension were as follows. In the R current, the tension on the sand bag line of the head buoy increased from 273.51 g at 0.1 m/sec to 1298.40 g at 0.6 m/sec. In the L current, the tension on the sand bag line of the end buoy on one side increased from 137.08 g at 0.1 m/sec to 646.00 g at 0.6 m/sec. With increasing velocity, the increase in sand bag line tension was concentrated on the line on the up-current side for both the R and L current directions, whereas no significant increase in tension was observed in the other sand bag lines.

A Study on the Development of Multimedia CAI in Smoking Prevention for Adolescents (청소년 흡연예방을 위한 멀티미디어 CAI 개발)

  • Lee, Sook-Ja;Park, Tae-Jin;Joung, Young-Il;Cho, Hyun
    • Korean Journal of Health Education and Promotion
    • /
    • v.20 no.2
    • /
    • pp.35-61
    • /
    • 2003
  • Background: The purpose of this study was to develop a structured and individualized smoking prevention program for adolescents using a multimedia computer-assisted instruction (CAI) model and to empirically assess its effect. Method: First, a guide book for a smoking prevention program for middle and high school students was developed. Second, the contents of this book were summarized and developed into a multimedia CAI smoking prevention program according to the Gagné & Briggs instructional design and Keller's ARCS motivation design models. Finally, the short-term effects of this program were examined in an experiment with middle and high school students using a quasi-experimental pretest-intervention-posttest design. The measured data were attitudes, beliefs, and knowledge about smoking, interest in the program, and learning motivation. Results: The results of this study were as follows. First, the guide book for the smoking prevention program was developed, and the existing literature on adolescent smoking was analyzed to develop its content. The curriculum was divided into three main domains (tobacco and smoking history, smoking and health, and adolescent smoking), and each main domain was divided into sub-domains. Second, the contents of the guide book were translated into a multimedia CAI smoking prevention program using PowerPoint software according to instructional design theory. The program is interactive, learner-controllable, and structured. Its contents consisted of the entrance (5.6%), history of tobacco (30%), smoking and health (38.9%), adolescent smoking (22.2%), video (4.7%), and exit (1.6%). The multimedia materials consisted of text (121 items), sound and music, images (84 still, 32 dynamic), and video clips (6). The program took about 40 minutes to complete. Third, the results of the analysis of the program's effects were as follows: 1) Knowledge increased significantly between the pre-test and post-test, with a total mean difference of 3.44, and the largest increase was in the 1st-grade high school students (p<0.001). 2) General beliefs about smoking decreased significantly between the pre-test and post-test, with a total mean difference of 0.28; in subgroup analysis, the difference was significantly larger among 1st-grade high school students (p<0.001), the low-income class (p<0.001), and daily smokers (p<0.01). 3) There was no significant difference in attitudes toward the students' own smoking between the pre-test and post-test. 4) Interest in the program appeared to decline as students got older, the motivation score for this prevention program was highest among 3rd-grade middle school students, and among the sub-domains of motivation, the confidence score was the highest. Conclusion: To be most effective, a smoking prevention program for adolescents should use the most up-to-date and accurate information on smoking, and the instructional material should be developed so that learners can approach the program with enjoyment. Through this study, a guide book with up-to-date information was developed, and a multimedia CAI smoking prevention program was developed based on the guide book. The program had a positive effect on the students' knowledge of and beliefs about smoking.

The Beneficial Effects of Pectin on Obesity In vitro and In vivo (In vitro 및 In vivo에서 펙틴의 비만 억제 효과)

  • Kwon, Jin-Young;Ann, In-Sook;Park, Kun-Young;Cheigh, Hong-Sik;Song, Yeong-Ok
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.34 no.1
    • /
    • pp.13-20
    • /
    • 2005
  • The effects of pectin on obesity were studied using 3T3-L1 pre-adipocytes and rats fed a 20% high-fat diet. The concentration of leptin released from 3T3-L1 adipocytes in the presence of pectin was significantly decreased, by 85%, compared to that of the control (p<0.05); however, the glycerol concentration was not changed. These data indicate that pectin appears to inhibit lipid accumulation in the adipocytes rather than enhance lipolytic activity. Forty Sprague Dawley rats were fed a 20% high-fat diet for 8 weeks to induce obesity and were then divided equally into four groups: a normal diet group (ND), a high-fat diet group (HFD), an HFD with 10% pectin group (HFP10), and an HFD with 20% pectin group (HFP20). The diet for each group was prepared to be iso-caloric following the AIN-76 guideline. After obesity was induced, the rats were placed on a restricted diet for 9 weeks. The body weight of the HFD group increased by 50% (p<0.05) compared to the ND group, whereas it decreased by 12% and 16% in the HFP10 and HFP20 groups, respectively (p<0.05). The relative amounts of visceral fat in the HFP10 and HFP20 groups were decreased by 45% and 59%, respectively, compared to that of the HFD group (130%) (p<0.05). Pectin appears to have a greater effect on reducing visceral fat accumulation than on weight reduction. The significantly increased levels of triglyceride, total cholesterol, and LDL-cholesterol in the plasma of the HFD group returned to normal or even below normal with the pectin diet, while the level of HDL-cholesterol increased. A lipid-lowering effect was also observed in the liver and heart. These effects of pectin were dose-dependent. In conclusion, the beneficial effect of pectin on obesity was observed in both the cell culture experiment and the animal study, in terms of inhibiting the accumulation of lipids in the adipocytes.