• Title/Summary/Keyword: Information Processing Technology (정보처리기술)


Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.127-146
    • /
    • 2022
  • Recently, as word embedding has shown excellent performance in various tasks of deep learning-based natural language processing, research on the advancement and application of word, sentence, and document embedding is being actively conducted. Among these topics, cross-language transfer, which enables semantic exchange between different languages, is growing together with the development of embedding models. Academic interest in vector alignment is increasing with the expectation that it can be applied to various embedding-based analyses. In particular, vector alignment is expected to be applied to mapping between specialized and general domains: it should become possible to map the vocabulary of specialized fields such as R&D, medicine, and law into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or to provide a clue for mapping vocabulary between mutually different specialized fields. However, the linear vector alignment that has mainly been studied in academia assumes statistical linearity and therefore tends to oversimplify the vector space. It essentially assumes that the two vector spaces are geometrically similar, a limitation that causes inevitable distortion in the alignment process. To overcome this limitation, we propose a deep learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology sequentially trains a skip-connected autoencoder and a regression model to align the specialized word embeddings expressed in each space with the general embedding space. Finally, through inference with the two trained models, the specialized vocabulary can be aligned in the general space.
To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the 'health care' field among national R&D tasks performed from 2011 to 2020. The results confirmed that the proposed methodology shows superior performance in terms of cosine similarity compared to existing linear vector alignment.
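The linear baseline that the abstract contrasts against can be made concrete. Below is a minimal NumPy sketch of orthogonal Procrustes alignment, the classic linear vector-alignment method; all data here are synthetic, and the paper's actual skip-connected autoencoder and regression models are not reproduced:

```python
import numpy as np

def orthogonal_procrustes_align(X, Y):
    """Linear alignment baseline: the orthogonal map W minimizing
    ||XW - Y||_F, obtained from the SVD of X^T Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def mean_cosine(A, B):
    """Row-wise mean cosine similarity between aligned and target vectors."""
    num = np.sum(A * B, axis=1)
    den = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    return float(np.mean(num / den))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                   # specialized-domain embeddings (synthetic)
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))   # a hidden orthogonal map
Y = X @ Q                                        # general-space targets (purely linear case)

W = orthogonal_procrustes_align(X, Y)
```

When the true map is linear, as constructed here, the Procrustes solution recovers it and the mean cosine similarity is essentially 1; the paper's point is that real specialized-to-general mappings are not linear, so a learned nonlinear model does better.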

Evaluations of Spectral Analysis of in vitro 2D-COSY and 2D-NOESY on Human Brain Metabolites (인체 뇌 대사물질에서의 In vitro 2D-COSY와 2D-NOESY 스펙트럼 분석 평가)

  • Choe, Bo-Young;Woo, Dong-Cheol;Kim, Sang-Young;Choi, Chi-Bong;Lee, Sung-Im;Kim, Eun-Hee;Hong, Kwan-Soo;Jeon, Young-Ho;Cheong, Chae-Joon;Kim, Sang-Soo;Lim, Hyang-Sook
    • Investigative Magnetic Resonance Imaging
    • /
    • v.12 no.1
    • /
    • pp.8-19
    • /
    • 2008
  • Purpose : To investigate the 3-bond and spatial connectivity of human brain metabolites by scalar coupling and the dipolar nuclear Overhauser effect/enhancement (NOE) interaction, using 2D correlation spectroscopy (COSY) and 2D NOE spectroscopy (NOESY) techniques. Materials and Methods : All 2D experiments were performed on a Bruker Avance 500 (11.8 T) with a z-shield gradient triple resonance cryoprobe at 298 K. Human brain metabolites were prepared with 10% D2O. Two-dimensional spectra with 2048 data points were acquired with 320 free induction decay (FID) averages; the repetition delay was 2 s. The TopSpin 2.0 software was used for post-processing. A total of 7 metabolites, N-acetyl aspartate (NAA), creatine (Cr), choline (Cho), glutamine (Gln), glutamate (Glu), myo-inositol (Ins), and lactate (Lac), were included as major target metabolites. Results : Symmetrical 2D-COSY and 2D-NOESY spectra were successfully acquired: COSY cross peaks were observed only in the 1.0-4.5 ppm range, whereas NOESY cross peaks were observed in the 1.0-4.5 ppm range and at 7.9 ppm. In the 2D-COSY data, cross peaks between the methyl protons (CH3(3)) at 1.33 ppm and the methine proton (CH(2)) at 4.11 ppm were observed in Lac. Cross peaks between the methylene protons (CH2(3,Hα)) at 2.50 ppm and the methylene protons (CH2(3,Hβ)) at 2.70 ppm were observed in NAA. Cross peaks between the methine proton (CH(5)) at 3.27 ppm and the methine protons (CH(4,6)) at 3.59 ppm, between the methine protons (CH(1,3)) at 3.53 ppm and the methine protons (CH(4,6)) at 3.59 ppm, and between the methine protons (CH(1,3)) at 3.53 ppm and the methine proton (CH(2)) at 4.05 ppm were observed in Ins. In the 2D-NOESY data, cross peaks between the NH proton at 8.00 ppm and the methyl protons (CH3) were observed in NAA. Cross peaks between the methyl protons (CH3(3)) at 1.33 ppm and the methine proton (CH(2)) at 4.11 ppm were observed in Lac.
Cross peaks between the methyl protons (CH3) at 3.03 ppm and the methylene protons (CH2) at 3.93 ppm were observed in Cr. Cross peaks between the methylene protons (CH2(3)) at 2.11 ppm and the methylene protons (CH2(4)) at 2.35 ppm, and between the methylene protons (CH2(3)) at 2.11 ppm and the methine proton (CH(2)) at 3.76 ppm, were observed in Glu. Cross peaks between the methylene protons (CH2(3)) at 2.14 ppm and the methine proton (CH(2)) at 3.79 ppm were observed in Gln. Cross peaks between the methine proton (CH(5)) at 3.27 ppm and the methine protons (CH(4,6)) at 3.59 ppm, and between the methine protons (CH(1,3)) at 3.53 ppm and the methine proton (CH(2)) at 4.05 ppm, were observed in Ins. Conclusion : The present study demonstrated that in vitro 2D-COSY and NOESY represent the 3-bond and spatial connectivity of human brain metabolites through scalar coupling and the dipolar NOE interaction. This study could aid in better understanding the interactions between human brain metabolites in in vivo 2D-COSY studies.


Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.109-131
    • /
    • 2014
  • As the demand for nuclear power plant equipment continues to grow worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, pre-adjudication (or pre-screening, for simplicity) of strategic materials has so far been performed by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying only on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares those features with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method was implemented using TF-IDF, a widely used de facto standard method for representative keyword extraction in text mining.
TF (Term Frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term within the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive a final score (γ) for deciding whether the presented case concerns strategic material. The final score (γ) represents the document similarity between past cases and the new case. The score is induced not only by exploiting conventional TF-IDF but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered most similar to the new case and provides them with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system was conducted by developing a prototype and testing it with field data; the system workflows and outcomes were verified by the field experts.
This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can be considered a meaningful example of a knowledge service application.
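As an illustration of the keyword-weighting step, here is a minimal Python sketch of TF-IDF and cosine similarity, plus a weighted combination of document similarity (α) and nuclear-system similarity (β) into a final score (γ). The paper does not give its exact combination formula, so `final_score` and its weight `w` are assumptions:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF per tokenized document (TF = count / doc length,
    IDF = log(N / document frequency))."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # each term counted once per doc
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        length = len(doc)
        vecs.append({t: (c / length) * idf[t] for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity of two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def final_score(alpha, beta, w=0.5):
    """Hypothetical linear blend of alpha and beta into gamma."""
    return w * alpha + (1 - w) * beta

# Toy tokenized documents (invented for illustration only)
docs = [["reactor", "valve", "pump"],
        ["reactor", "valve", "pipe"],
        ["turbine", "blade", "pipe"]]
vecs = tfidf_vectors(docs)
alpha = cosine(vecs[0], vecs[1])
gamma = final_score(alpha, 0.7)   # beta = 0.7 is a made-up system similarity
```

In the real system the case base would be retrieved by ranking past cases on γ; here γ only demonstrates the blending idea.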

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data about products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and it has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Online reviews are easy to collect openly and directly affect business: in marketing, real-world information from customers is gathered from websites rather than surveys, and depending on whether a website's posts are positive or negative, the customer response is reflected in sales. However, many reviews on a website are not always good and are difficult to identify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, accuracy remains limited because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set.
First, as comparative models, the text classification adopts popular machine learning algorithms such as NB (Naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can process a sentence in vector format similarly to a bag of words, but does not consider the sequential attributes of the data. An RNN handles ordered data well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to figure out how well, and how, these models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. A CNN can extract features for classification automatically by applying convolution layers with massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem.
Furthermore, when LSTM is used after the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can compensate for the weaknesses of each model, with the advantage of improving learning layer by layer using the end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
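To make the CNN-then-LSTM pipeline concrete, the following untrained NumPy forward pass runs a 1-D convolution over an embedded review and feeds the resulting feature sequence into a minimal LSTM. All sizes and weights are toy values, not the paper's architecture or hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(seq, kernels):
    """Valid 1-D convolution over a (time, emb) sequence with
    (k, emb, filters) kernels, followed by ReLU."""
    T, _ = seq.shape
    k, _, F = kernels.shape
    out = np.empty((T - k + 1, F))
    for t in range(T - k + 1):
        window = seq[t:t + k]                                   # (k, emb)
        out[t] = np.tensordot(window, kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def lstm_last_state(seq, Wx, Wh, b):
    """Minimal LSTM returning the final hidden state; gate order i, f, o, g."""
    H = Wh.shape[0]
    h = np.zeros(H)
    c = np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in seq:
        z = x @ Wx + h @ Wh + b
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c = f * c + i * g          # forget old memory, write new memory
        h = o * np.tanh(c)         # expose gated memory as the hidden state
    return h

T, E, F, H = 20, 16, 8, 4                       # toy sizes, purely illustrative
embedded = rng.normal(size=(T, E))              # an "embedded review"
feats = conv1d(embedded, rng.normal(size=(3, E, F)) * 0.1)
h = lstm_last_state(feats,
                    rng.normal(size=(F, 4 * H)) * 0.1,
                    rng.normal(size=(H, 4 * H)) * 0.1,
                    np.zeros(4 * H))
prob = 1.0 / (1.0 + np.exp(-(h @ rng.normal(size=H))))   # sentiment probability
```

The convolution produces local n-gram features and the LSTM then consumes them in order, which is the "spatial then temporal" division of labor the abstract describes.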

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.171-193
    • /
    • 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in an LOD to be reflected in search results without omissions. An LOD publishes detailed descriptions of entities in RDF triple form; an RDF triple is composed of a subject, a predicate, and an object, and presents a detailed description of an entity. Links in the LOD cloud, named identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities; link triples are then appended to the LOD. With identity links, knowledge acquired from one LOD can be expanded with different knowledge from different LODs. The goal of the LOD cloud is to provide users with the opportunity for knowledge expansion. Appending link triples to an LOD, however, requires discovering identity links between entities one by one, which is seriously difficult given the enormous scale of LOD. Newly added entities cannot be reflected in search results until identity links heading for them are serialized and published to the LOD cloud. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in the target LODs. During a search, it becomes possible to access newly added entities and reflect them in the search results without omissions by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we have suggested a set of vocabularies that conform to RDFS and OWL.
Identity between entities is evaluated according to the similarity of the source and target entities' objects associated with the predicate pairs in the link policy. We implemented a system, the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of the LODs, CAIDS proceeds to in-depth searching of the LODs at the next depths. To supplement the identity links derived from the link policies, CAIDS uses explicit link triples as well. CAIDS's in-depth searching progresses by following the identity links. The content of an entity obtained from the depth_0 LOD expands with the contents of entities of other LODs discovered to be identical to the depth_0 LOD entity. Expanding the content of a depth_0 LOD entity without the user's awareness of those other LODs is the implementation of knowledge expansion, the goal of the LOD cloud. The more identity links in the LOD cloud, the wider the content expansion. We have suggested a new way to create identity links abundantly and supply them to the LOD cloud. Experiments on CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides an appropriate expansion ratio and inclusion ratio as long as the degree of similarity between source and target objects is 0.8-0.9. The expansion ratio, for each depth, depicts the ratio of the entities discovered at that depth to the entities of the depth_0 LOD. For each depth, the inclusion ratio illustrates the ratio of the entities discovered only with explicit links to the entities discovered only with link policies. With similarity degrees under 0.8, expansion becomes excessive and contents become distorted; a similarity degree of 0.8-0.9 also yields an appropriate number of RDF triples. The experiments also evaluated the confidence degree of the contents expanded through in-depth searching.
The confidence degree of content is directly coupled with the identity ratio of an entity, which means its degree of identity to the entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to the entities in that LOD. While evaluating the identity ratio, the concept of identity agreement, meaning that multiple identity links head to a common entity, was considered. With the identity agreement concept, the experimental results show that the identity ratio decreases as depth deepens, but rebounds as the depth deepens further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The link-policy-based in-depth searching method we propose is expected to contribute to the abundant provision of identity links to the LOD cloud.
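A link-policy check can be sketched as follows. This minimal Python example uses a hypothetical policy of predicate pairs and a stand-in string similarity; it declares two entities identical when every paired object meets a similarity threshold (the abstract reports 0.8-0.9 as the useful range) and computes the identity ratio as the product described above:

```python
from difflib import SequenceMatcher

# Hypothetical link policy: pairs of (source predicate, target predicate)
# whose object values are compared to decide identity.
LINK_POLICY = [("rdfs:label", "rdfs:label"),
               ("dbo:foundingYear", "dbo:foundingYear")]

def object_similarity(a, b):
    """String similarity of two RDF object values (a stand-in measure)."""
    return SequenceMatcher(None, a, b).ratio()

def is_identical(src_entity, tgt_entity, policy, threshold=0.8):
    """Link the entities when every policy predicate pair's objects are at
    least `threshold`-similar."""
    sims = []
    for p_src, p_tgt in policy:
        if p_src not in src_entity or p_tgt not in tgt_entity:
            return False
        sims.append(object_similarity(src_entity[p_src], tgt_entity[p_tgt]))
    return min(sims) >= threshold

def identity_ratio(source_lod_confidence, source_entity_ratio):
    """Identity ratio of a discovered entity: the source LOD's confidence
    multiplied by the source entity's own identity ratio."""
    return source_lod_confidence * source_entity_ratio

# Toy entities (predicate -> object), invented for illustration
e1 = {"rdfs:label": "Seoul", "dbo:foundingYear": "1394"}
e2 = {"rdfs:label": "Seoul", "dbo:foundingYear": "1394"}
e3 = {"rdfs:label": "Suwon", "dbo:foundingYear": "1793"}
```

In-depth searching would apply `is_identical` across the target LOD's entities at each depth and propagate `identity_ratio` along the discovered links.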

A prognosis discovering lethal-related genes in plants for target identification and inhibitor design (식물 치사관련 유전자를 이용하는 신규 제초제 작용점 탐색 및 조절물질 개발동향)

  • Hwang, I.T.;Lee, D.H.;Choi, J.S.;Kim, T.J.;Kim, B.T.;Park, Y.S.;Cho, K.Y.
    • The Korean Journal of Pesticide Science
    • /
    • v.5 no.3
    • /
    • pp.1-11
    • /
    • 2001
  • New technologies will have a large impact on the discovery of new herbicide sites of action. Genomics, combinatorial chemistry, and bioinformatics help take advantage of serendipity through the sequencing of huge numbers of genes or the synthesis of large numbers of chemical compounds. There are approximately 10^30 to 10^50 possible molecules in molecular space, of which only a fraction have been synthesized. Combining this potential with access to 50,000 plant genes in the future elevates the probability of discovering new herbicidal sites of action. If 0.1, 1.0, or 10% of the total genes in a typical plant are valid herbicide targets, a plant with 50,000 genes would provide about 50, 500, or 5,000 targets, respectively. However, only 11 herbicide targets have been identified and commercialized. The successful design of novel herbicides depends on careful consideration of a number of factors, including target enzyme selection and validation, inhibitor design, and metabolic fate. Biochemical information can be used to identify enzymes whose inhibition produces lethal phenotypes. The identification of a lethal target site is an important step in this approach, and an examination of the characteristics of known targets provides crucial insight into the definition of a lethal target. Recently, antisense RNA suppression of enzyme translation has been used to determine the genes required for toxicity, and offers a strategy for identifying lethal target sites. After the identification of a lethal target, detailed knowledge such as the enzyme kinetics and the protein structure may be used to design potent inhibitors, and various types of inhibitors may be designed for a given enzyme. Strategies for selecting new enzyme targets that give the desired physiological response upon partial inhibition include the identification of chemical leads and lethal mutants and the use of antisense technology.
Enzyme inhibitors with agrochemical utility can be categorized into six major groups: ground-state analogues, group-specific reagents, affinity labels, suicide substrates, reaction intermediate analogues, and extraneous site inhibitors. In this review, examples of each category, with their advantages and disadvantages, are discussed. Target identification and the construction of a potent inhibitor may not, in themselves, lead to an effective herbicide. The desired in vivo activity, uptake and translocation, and metabolism of the inhibitor should be studied in detail to assess the full potential of the target. Strategies for delivering the compound to the target enzyme and avoiding premature detoxification may include a proherbicidal approach, especially when inhibitors are highly charged or when selective detoxification or activation can be exploited. Utilizing differences in detoxification or activation between weeds and crops may enhance selectivity. Without a full appreciation of each of these facets of herbicide design, the chances of success with the target- or enzyme-driven approach are reduced.


Predicting Regional Soybean Yield using Crop Growth Simulation Model (작물 생육 모델을 이용한 지역단위 콩 수량 예측)

  • Ban, Ho-Young;Choi, Doug-Hwan;Ahn, Joong-Bae;Lee, Byun-Woo
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_2
    • /
    • pp.699-708
    • /
    • 2017
  • The present study aimed to develop an approach for predicting soybean yield using a crop growth simulation model at the regional level, where detailed, site-specific information on cultivation management practices is not easily accessible for model input. The CROPGRO-Soybean model included in the Decision Support System for Agrotechnology Transfer (DSSAT) was employed, and Illinois, a major soybean production region of the USA, was selected as the study region. As a first step toward predicting the soybean yield of Illinois with the CROPGRO-Soybean model, genetic coefficients representative of each soybean maturity group (MG I-VI) were estimated through sowing date experiments using domestic and foreign cultivars of diverse maturity at Seoul National University Farm (37.27°N, 126.99°E) for two years. The model using the representative genetic coefficients simulated the developmental stages of cultivars within each maturity group fairly well. Soybean yields for 10 km × 10 km grids in Illinois were simulated from 2000 to 2011 with weather data under 18 simulation conditions, comprising the combinations of three maturity groups, three seeding dates, and two irrigation regimes. Planting dates and maturity groups were assigned differently to three sub-regions divided longitudinally. The yearly state yields estimated by averaging all the grid yields simulated under non-irrigated and fully irrigated conditions differed greatly from the statistical yields and did not explain the annual trend of yield increase due to improved cultivation technologies. Using the grain yield data of 9 agricultural districts in Illinois, observed and estimated from the simulated grid yields under the 18 simulation conditions, a multiple regression model was constructed to estimate soybean yield at the agricultural district level; a year variable was also added to this model to reflect the yearly yield trend.
This model explained the yearly and district yield variation fairly well, with a coefficient of determination of R^2 = 0.61 (n = 108). Yearly state yields, calculated by weighting the model-estimated yearly average agricultural district yields by the cultivation area of each district, corresponded very closely (R^2 = 0.80) to the yearly statistical state yields. Furthermore, the model predicted the state yield fairly well in 2012, whose data were not used for the model construction and in which severe yield reduction was recorded due to drought.
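The district-level regression step can be sketched with synthetic data. The NumPy example below regresses "observed" yields onto simulated yields plus a year term via least squares and computes R^2; the coefficients, units, and noise levels are invented for illustration and are not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the paper's data: simulated yields under two
# irrigation regimes plus a year term, regressed onto "observed" yields.
n = 108
year = rng.integers(2000, 2012, size=n).astype(float)
sim_nonirr = rng.normal(2.5, 0.4, size=n)                # non-irrigated sim yield (t/ha)
sim_irr = sim_nonirr + rng.normal(0.8, 0.1, size=n)      # fully irrigated sim yield
obs = (0.5 * sim_nonirr + 0.3 * sim_irr
       + 0.03 * (year - 2000)                             # technology trend term
       + rng.normal(0.0, 0.05, size=n))                   # unexplained noise

# Multiple regression with an intercept and a year variable, by least squares
X = np.column_stack([np.ones(n), sim_nonirr, sim_irr, year - 2000])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - np.mean(obs)) ** 2)
```

Because the synthetic data are generated from the same linear form, R^2 here is near 1; on real district data the paper reports R^2 = 0.61, since the simulation captures only part of the yield variation.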

Development of a Comprehensive Model of Disaster Management in Korea Based on the Result of the Response to the Sampoong Building Collapse (1995), the Disaster Law, and the 1998 Disaster Preparedness Plan of Seoul City (우리나라 사고예방과 재난관리 모형 개발을 위한 연구)

  • Lee, In-Sook
    • Research in Community and Public Health Nursing
    • /
    • v.11 no.1
    • /
    • pp.289-316
    • /
    • 2000
  • In Korea, community disaster management planning and training are based on a civil defense model rather than a health care model, so triage at the accident scene, rational patient distribution and transport, and response in hospital emergency rooms are not carried out systematically, and communities cannot respond to disasters immediately. By analyzing the response to the Sampoong collapse and the Korean emergency medical system since then, this study suggests ways to improve the emergency medical system for large-scale accident prevention and disaster management, and the preparation needed in nursing education. 1. At the time of the Sampoong accident there was no legal basis, no disaster management law covering man-made disasters. Because no medical command system could be secured at the scene, no emergency treatment was provided on site. The major problems were on-site triage, emergency treatment and referral, the breakdown of communication among hospitals, the field command post, and ambulances, and the lack of personnel, equipment, and communication systems for receiving medical direction during patient transport. Hospital emergency rooms, moreover, had no disaster plans, or could not activate the plans they had to convert the hospital's operating system. 2. One month after the Sampoong department store collapse, a disaster management law covering man-made disasters was enacted, legally requiring each level of government to establish annual disaster management plans suited to regional needs. A disaster management law should include on-site response from the health care perspective, resident participation, emergency medical response, dissemination of information, and education and training. Even after this legal foundation was laid, however, the Korean disaster plans specify inter-agency roles for the emergency medical response only insufficiently, and mention the on-site emergency response process only nominally, without operating guidelines for carrying it out, so the plans are difficult to activate and operate in communities. That is, the roles and tasks of personnel and agencies for accident identification and notification, emergency incident command, needs assessment, triage and stabilization of casualties, casualty collection, on-site life-preserving treatment and transport accompanied by medical and surgical emergency care, post-accident mental stress management, and overall evaluation of the accident are not clearly presented, which makes the linked processing of tasks and inter-sector cooperation, most important when an accident occurs, difficult. Mutual cooperation between medical institutions with emergency rooms and intensive care units and the agencies responsible for citizen safety is also lacking. In short, the current preparedness plans contain neither a clear division of duties among agencies, nor scenario-based plans for disaster situations, nor a framework for training them. 3. Local government disaster plans stipulate that the public health center plays the core role in all health care matters when a disaster occurs. The health center should therefore organize and operate a community-centered disaster management plan, while delegating the emergency treatment process at the disaster scene to the fire department, the public agency responsible for rescue and lifesaving, and to the regional emergency hospitals. That is, the community disaster management plan should be drawn up under health center leadership in cooperation with the hospitals and related agencies (fire and police departments) in its jurisdiction, with tasks clearly divided and linkages established; this is a major factor determining the success of disaster management. 4. The Korean Red Cross's community education programs run year-round, but most of the topics are in the area of health promotion; emergency medical management accounts for 8% of total teaching hours, and there is no resident education program for disaster preparedness. Schools, where specific age groups are gathered, have no regular health education hours either, so there is no opportunity to systematically learn and practice lifesaving or first aid, and the base of national disaster preparedness is not being broadened. 5. Hospitals should form disaster management committees, establish comprehensive disaster management plans that take account of the various resources within their catchment areas, and conduct drills that include the community. At present, however, hospitals have only nominal disaster management plans. 6.
When disaster preparedness was evaluated, the personnel and equipment of hospital emergency room treatment teams relatively satisfied the standards, but hospital disaster management plans were not drilled at all. Korea's disaster preparedness can therefore be improved only if on-site emergency medical systems, disaster response plans, and resident education through training come first. That is, long-term effort and resources must be invested on the basis of an emergency medical services model rather than a civil defense training model, and effort must be devoted to community-centered response preparation, the development of activation strategies, drills and exercises, and education. 7. There is no legally specified role for first responders at the scene. Korea has regulated the education of Level 1 and Level 2 emergency medical technicians (EMTs) under the Emergency Medical Services Act since 1995. The curriculum is similar to the American EMT standard, but laboratory and field practicum hours are absolutely insufficient. In addition, in accredited EMT programs the practicum instructors not only meet instructor qualification standards but generally spend half of each week riding ambulances in continuous field activity, and practice proceeds in scenario form. Korea should therefore strengthen the practicum within the curriculum so that EMTs can function as field technicians, and graduates need to build field competence through internships. 8. As nurses can now be certified as emergency nurse practitioners, standard education guidelines should be developed to strengthen their capacity for prehospital care and disaster response. The nursing curriculum should also be partially supplemented, considering the content of the current certification program, so that regularly licensed nurses can serve as first responders at the scene.


Effects of Baicalin on Gene Expression Profiles during Adipogenesis of 3T3-L1 Cells (3T3-L1 세포의 지방세포형성과정에서 Baicalin에 의한 유전자 발현 프로파일 분석)

  • Lee, Hae-Yong;Kang, Ryun-Hwa;Chung, Sang-In;Cho, Soo-Hyun;Yoon, Yoo-Sik
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.39 no.1
    • /
    • pp.54-63
    • /
    • 2010
  • Baicalin, a flavonoid, has been shown to have diverse effects, including anti-inflammatory, anti-cancer, anti-viral, and anti-bacterial activities. Recently, we found that baicalin inhibits adipogenesis through the modulation of anti-adipogenic and pro-adipogenic factors of the adipogenesis pathway. In the present study, we further characterized the molecular mechanism of the anti-adipogenic effect of baicalin using microarray technology. Microarray analyses were conducted to analyze the gene expression profiles during the differentiation time course (days 0, 2, 4, and 7) in 3T3-L1 cells with or without baicalin treatment. We identified a total of 3972 genes whose expression changed more than 2-fold. These 3972 genes were further analyzed using hierarchical clustering, resulting in 20 clusters. Four of the 20 clusters showed clearly up-regulated expression patterns (clusters 8 and 10) or clearly down-regulated expression patterns (clusters 12 and 14) under baicalin treatment over the entire differentiation period. Clusters 8 and 10 included many genes that enhance cell proliferation or inhibit adipogenesis. On the other hand, clusters 12 and 14 included many genes related to proliferation inhibition, cell cycle arrest, cell growth suppression, or adipogenesis induction. In conclusion, these data provide detailed information on the molecular mechanism of the baicalin-induced inhibition of adipogenesis.
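The 2-fold filtering and pattern-grouping steps can be sketched as follows. This NumPy example uses a synthetic log2-ratio expression matrix, and the sign-of-trend grouping is a crude stand-in for the paper's hierarchical clustering, shown only to make the filtering criterion concrete:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy expression matrix: 100 genes x 4 time points (days 0, 2, 4, 7);
# values are log2 ratios of treated vs control (illustrative only).
expr = rng.normal(0.0, 0.3, size=(100, 4))
expr[:10] += np.array([0.0, 1.2, 1.6, 2.0])     # clearly up-regulated genes
expr[10:20] -= np.array([0.0, 1.2, 1.6, 2.0])   # clearly down-regulated genes

# Keep genes whose expression changes more than 2-fold (|log2 ratio| > 1)
# at any time point, mirroring the paper's filtering step.
changed = np.abs(expr).max(axis=1) > 1.0
selected = expr[changed]

# Group the selected genes by the sign of their overall trend: a crude
# substitute for clustering them into up- and down-regulated patterns.
trend = selected[:, -1] - selected[:, 0]
up_cluster = selected[trend > 0]
down_cluster = selected[trend < 0]
```

A real analysis would replace the trend split with agglomerative hierarchical clustering over the full time-course profiles, which is what yields the paper's 20 clusters.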

A Review of the Neurocognitive Mechanisms for Mathematical Thinking Ability (수학적 사고력에 관한 인지신경학적 연구 개관)

  • Kim, Yon Mi
    • Korean Journal of Cognitive Science
    • /
    • v.27 no.2
    • /
    • pp.159-219
    • /
    • 2016
  • Mathematical ability is important for academic achievement and technological innovation in the STEM disciplines. This study concentrated on the neural basis of mathematical cognition and its mechanisms. The relevant cognitive functions include domain-specific abilities, such as numerical skills and visuospatial abilities, as well as domain-general abilities, including language, long-term memory, and working memory capacity. On the basis of these basic cognitive functions, individuals can perform higher cognitive functions such as abstract thinking and reasoning. The next topic covered is individual differences in mathematical ability. Neural efficiency theory was used to view mathematical talent: according to the theory, a person with mathematical talent uses his or her brain more efficiently than the effortful processing of the average person. Mathematically gifted students show different brain activity compared to average students: interhemispheric and intrahemispheric connectivity is enhanced in these students, particularly in the right hemisphere along the fronto-parietal longitudinal fasciculus. The third topic deals with growth and development of mathematical capacity. As individuals mature, practice mathematical skills, and gain knowledge, the changes are reflected in cortical activation, including changes in activation level, redistribution, and reorganization of the supporting cortex. Among these, reorganization can be related to neural plasticity, which has been observed in professional mathematicians and in children with mathematical learning disabilities. The last topic is mathematical creativity viewed from Neural Darwinism. When the brain faces a novel problem, it needs to collect all of the necessary concepts (knowledge) from long-term memory, make multitudes of connections, and test which ones have the highest probability of helping to solve the unusual problem.
After following these brain-modifying steps, when the brain finally finds the correct response to the novel problem, the response arrives in the form of inspiration. For a novice, the first step, acquiring the knowledge structure, is the most important; as expertise increases, the latter two stages, making connections and selection, become more important.