• Title/Summary/Keyword: Administration Process


NEW ANTIDEPRESSANTS IN CHILD AND ADOLESCENT PSYCHIATRY (소아청소년정신과영역의 새로운 항우울제)

  • Lee, Soo-Jung
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.14 no.1
    • /
    • pp.12-25
    • /
    • 2003
  • Objectives: As an increasing number of new antidepressants are introduced into clinical practice, pharmacological understanding has broadened. These changes require new information and theories to be incorporated into the treatment process for children with depressive disorders. In light of this emerging knowledge, this review aimed to recapitulate the characteristics of new antidepressants and to consider the pivotal issues in developing guidelines for the treatment of depression in childhood and adolescence. Methods: A search of the PubMed online database for articles with the keywords 'new', 'antidepressants', and 'children' yielded ninety-seven review article headings. The author selected the articles pertinent to either treatment guidelines or the psychopharmacology of new antidepressants. Where required, articles on the clinical effectiveness of individual antidepressants were searched separately. In addition, safety information on new antidepressants was obtained from the official sites of the United States Food and Drug Administration and the Department of Health and Human Services. Results: 1) For clinical course, treatment phase, and treatment outcome, the reviews and treatment guidelines adopted information from adult treatment guidelines. 2) Systematic and critical reviews unambiguously concluded that selective serotonin reuptake inhibitors (SSRIs) surpassed tricyclic antidepressants (TCAs) in both efficacy and side-effect profiles and were recommended as the first-line choice for the treatment of children with depressive disorders. 3) New antidepressants generally lacked treatment experience and randomized controlled clinical trials. 4) SSRIs and other new antidepressants, when used together, might result in pharmacokinetic and/or pharmacodynamic drug-drug interactions. 5) Differences in the clinical effectiveness of antidepressants between children and adults should be addressed from a developmental perspective, which requires further evidence. Conclusion: Treatment guidelines for the pharmacological treatment of childhood and adolescent depression could be constructed on the basis of clinical trial findings and practical experience. Treatment guidelines best serve as a frame of reference for a clinician making reasonable decisions in a particular therapeutic situation. To fulfill this role, guidelines should be updated as soon as new research data become available.


A Study on the Essence and Tendency of Modern Manager (현대 경영자로서의 본질과 성향 연구)

  • Yeom, Bae-Hoon;Kim, Hyunsoo
    • Journal of Service Research and Studies
    • /
    • v.10 no.3
    • /
    • pp.23-42
    • /
    • 2020
  • This study conceptualized the essence and propensity of modern management in the service age on a philosophical basis and developed items to evaluate the conceptualized content. It was carried out as a new study to deepen research on management philosophy and management theory within a new management framework. In order to establish the philosophical foundation of modern management, the essence of modern management was conceptualized based on the fundamental ideas of East and West, and evaluation items were then developed to put the essence and propensity of modern management into practical use through analytical and empirical methods. After analyzing representative ideas of humankind, it was concluded that the Book of Changes qualifies as a philosophical model from which the essence of modern management can be derived. The Book of Changes explains the workings of the world through the structure of two opposing parties, such as Taiji or Yin and Yang, and its central idea is the process of acknowledging the contradictions within each opposing party and overcoming them through change. Following the conceptual study, the essence and propensity of the modern manager were conceptualized through empirical research, which was conducted in two stages. First, a qualitative study using repetitive comparative analysis (CCM), focus group interviews (FGI), and text mining was conducted to derive items conceptualizing the essence and propensity that modern managers should possess. Second, a quantitative study using factor analysis, with sample and measurement items developed through literature review and FGI, was conducted to derive the essential concept of modern management. Finally, the essence of modern management was derived as learning, preparation, challenge, inclusion, trust, morality, and sacrifice. In the future, it will be necessary to conduct empirical research on the effectiveness of the essence of modern management for global and representative Korean companies.

An Efficient Heuristic for Storage Location Assignment and Reallocation for Products of Different Brands at Internet Shopping Malls for Clothing (의류 인터넷 쇼핑몰에서 브랜드를 고려한 상품 입고 및 재배치 방법 연구)

  • Song, Yong-Uk;Ahn, Byung-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.129-141
    • /
    • 2010
  • An Internet shopping mall for clothing operates a warehouse for packing and shipping products to fulfill its orders. All products in the warehouse are put into boxes of the same brand, and the boxes are stored in a row on shelves equipped in the warehouse. To make picking and management easy, boxes of the same brand are located side by side on the shelves. When new products arrive at the warehouse for storage, the products of a brand are put into boxes, and those boxes are located adjacent to the existing boxes of the same brand. If there is not enough space for the incoming boxes, however, some boxes of other brands must be moved away so that the incoming boxes can be placed adjacently in the resulting vacant spaces. We want to minimize the movement of existing boxes of other brands to other places on the shelves during the warehousing of incoming boxes, while keeping all boxes of the same brand side by side on the shelves. First, we define the adjacency of boxes by viewing the shelves as a one-dimensional series of spaces (cells) for storing boxes, tagging the series of cells with numbers starting from one, and considering any two boxes to be adjacent if their cell numbers are consecutive. We then formulated the problem as an integer programming model to obtain an optimal solution. An integer programming formulation with a branch-and-bound technique may not be tractable for this problem, because solving it would take too long given the number of cells and boxes in the warehouse and the computing power available to the Internet shopping mall. As an alternative, we designed a fast heuristic for this reallocation problem by focusing only on the unused spaces (empty cells) on the shelves, which results in an assignment problem model. In this approach, the incoming boxes are assigned to empty cells and then reorganized so that the boxes of each brand are adjacent to each other. The objective of this approach is to minimize box movement during the reorganization process while keeping the boxes of each brand adjacent. The approach, however, does not guarantee an optimal solution to the original problem, that is, minimizing the movement of existing boxes while keeping boxes of the same brand adjacent. Even though this heuristic may produce a suboptimal solution, we could obtain a satisfactory solution within a satisfactory time, which is acceptable to real-world experts. To assess the quality of the heuristic solutions, we randomly generated 100 problems in which the number of cells ranges from 2,000 to 4,000, solved them with both our heuristic and the original integer programming approach using a commercial optimization software package, and compared the heuristic solutions with the corresponding optimal solutions in terms of solution time and number of box movements. We also implemented our heuristic approach in a storage location assignment system for the Internet shopping mall.
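
The assignment-problem idea described in the abstract above can be illustrated with a short sketch. The Python code below is a simplified illustration under assumed inputs, not the authors' heuristic: it assigns incoming boxes to empty cells so that each box lands near its brand's existing cluster, and it omits the subsequent reorganization step that restores strict adjacency. The function name `place_new_boxes` and the nearest-same-brand distance cost are hypothetical choices made for this example.

```python
# Minimal sketch of an assignment-based placement idea (not the authors' exact heuristic).
# Shelves are modeled as a 1-D list of cells; new boxes of a brand are assigned to empty
# cells with a cost equal to the distance to that brand's nearest existing box.
from scipy.optimize import linear_sum_assignment
import numpy as np

def place_new_boxes(shelf, new_boxes):
    """shelf: list of brand labels, or None for empty cells.
    new_boxes: list of brand labels to be stored.
    Returns a new shelf layout (adjacency repair is omitted)."""
    empty = [i for i, b in enumerate(shelf) if b is None]
    if len(new_boxes) > len(empty):
        raise ValueError("not enough empty cells")

    # Cost of putting a box of brand `brand` into empty cell `cell`:
    # distance to the nearest existing box of the same brand (0 if the brand is new).
    def cost(brand, cell):
        same = [i for i, b in enumerate(shelf) if b == brand]
        return min((abs(cell - i) for i in same), default=0)

    C = np.array([[cost(b, c) for c in empty] for b in new_boxes])
    rows, cols = linear_sum_assignment(C)   # optimal assignment for this cost matrix
    new_shelf = list(shelf)
    for r, c in zip(rows, cols):
        new_shelf[empty[c]] = new_boxes[r]
    return new_shelf

# Example: brand "A" occupies cells 0-1, "B" cells 4-5; two new "A" boxes arrive.
print(place_new_boxes(["A", "A", None, None, "B", "B", None], ["A", "A"]))
```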

Valuation of Mining Investment Projects by the Real Option Approach - A Case Study of Uzbekistan's Copper Mining Industry - (실물옵션평가방법에 의한 광산투자의 가치평가 -우즈베키스탄 구리광산업의 사례연구를 중심으로-)

  • Makhkamov, Mumm Sh.;Kim, Dong-Hwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.8 no.6
    • /
    • pp.1634-1647
    • /
    • 2007
  • "To invest or not to invest?" Most business leaders are frequently faced with this question on new and ongoing projects. The challenge lies in deciding what projects to choose, expand, contract, defer, or abandon. The project valuation tools used in this process are vital to making the right decisions. Traditional tools such as discounted cash flow (DCF)/net present value (NPV) assume a "fixed" path ahead, but real world projects face uncertainties, forcing us to change the path often. Comparing to other traditional valuation methods, the real options approach captures the flexibility inherent to investment decisions. The use of real options has gained wide acceptance among practitioners in a number of several industries during the last few decades. Even though the options are present in all types of business decisions, it is still not considered as a proper method of valuation in some industries. Mining has been comparably slow to adopt new valuation techniques over the years. The reason fur this is not entirely clear. One possible reason is the level and types of risks in mining. Not only are these risks high, but they are also more numerous and involve natural risks compared with other industries. That is why the purpose of this study is to deal with a more practical approach to project valuation, known as real options analysis in mining industry. This paper provides a case study approach to the copper mining industry using a real options analysis. It shows how companies can minimize investment risks, exercise flexibility in decision making and maximize returns.


Features and Trends of IEC Particular Standards for Medical Equipment Related to Diagnostic X-ray Based on IEC 60601-1:2005 Ed. 3.0 (IEC 60601-1: 3판이 적용된 진단용 X선장치 관련 개별규격의 동향 및 특징)

  • Kim, Hyun-Ji;Kim, Jung-Min;Choi, In-Seok;Yoon, Yong-Su;Seo, Deok-Nam;Kim, Jung-Su;Kim, Dae-Young;Park, Sung-Yong
    • Journal of radiological science and technology
    • /
    • v.36 no.1
    • /
    • pp.1-10
    • /
    • 2013
  • IEC publications are applied in many countries around the world, such as in Europe and Japan, and in Korea they have been published as industrial standards (KS) and notifications of the Korea Food and Drug Administration (KFDA). As the general standard of the IEC 60601 series for medical electrical (ME) equipment was revised to its 3rd edition in 2005, the collateral and particular standards have been revised or newly established. Under these circumstances, it is important for manufacturing and assembling companies, as well as authorized testing companies, to understand the trend of revisions of IEC publications. Therefore, this study covers the latest versions of three IEC standards related to medical X-ray equipment: IEC 60601-2-44 for X-ray equipment for computed tomography (CT), IEC 60601-2-45 for mammographic X-ray equipment, and IEC 60601-2-54 for X-ray equipment for radiography and radioscopy, and analyzes the trends and features accompanying their revision based on IEC 60601-1 3rd Ed. Because the KFDA notifications currently in force refer to the particular standards based on the 2nd edition of IEC 60601-1, the revised versions of the three particular standards were compared with the KFDA notifications in force. The features of the latest standards applying IEC 60601-1 3rd Ed. are as follows: 1) Requirements for mechanical hazards, especially (motorized) moving parts, were emphasized. 2) Indication and recording of patient dose were required. 3) A risk management process was introduced, enabling potential risks to be monitored systematically. 4) DR systems (digital radiography systems) as well as analogue systems (film-screen systems) were included in the scope. The KFDA is expected to revise its notifications applying the particular standards based on IEC 60601-1 3rd Ed. within a few years. Therefore, the features of the particular standards applying IEC 60601-1 3rd Ed. are expected to help manufacturers, assemblers, and testing companies of medical electrical equipment understand the IEC publications and the KFDA notifications slated to be published.

Radiation-Induced Proctitis in Rat and Role of Nitric Oxide (백서모델에서 방사선 직장염 유발인자로서의 Nitric oxide의 역할)

  • Chun Mison;Kang Seunghee;Jin Yoon-Mi;Oh Young-Taek;Kil Hoon-Jong;Oh Tae-Young;Ahn Byoung-Ok
    • Radiation Oncology Journal
    • /
    • v.19 no.3
    • /
    • pp.265-274
    • /
    • 2001
  • Purpose: Proctitis is one of the acute complications encountered when radiotherapy is applied to the pelvis. Radiation-induced proctitis shows microscopic findings similar to those observed in inflammatory bowel disease (IBD). Nitric oxide (NO) plays an important role in the inflammatory process, and many data suggest a close relationship between NO production and gastrointestinal inflammation. This study aimed to establish the optimal radiation dose for radiation-induced proctitis in the rat and to find a relationship between radiation proctitis and NO production. Materials and Methods: Female Wistar rats, weighing 150 to 220 g, received various doses (10-30 Gy) of radiation to the rectum. On the 5th and 10th day after irradiation, rectal specimens were evaluated grossly and microscopically. In addition, the degree of NO production by irradiation dose was evaluated by examining NOS expression and nitrite production in the irradiated rectal tissue. To evaluate the relationship between radiation proctitis and NO, we administered aminoguanidine (an iNOS inhibitor) and L-arginine (a substrate of NOS) to the rats from 2 days before to 7 days after irradiation. Results: There were obvious gross and histological changes after radiation doses of 17.5 Gy or higher, but not with doses of 15 Gy or less. Doses of 20 Gy or higher caused Grade 4 damage in most rectal specimens, which is more likely to be related to late complications such as fibrosis, rectal bleeding, and rectal obstruction. A single fraction of 17.5 Gy to the rat rectum is considered an optimal dose for producing the proctitis commonly experienced in the clinic. The results demonstrated that the severity of microscopic damage to the rectal mucosa from irradiation correlated significantly with iNOS over-expression. However, administration of the iNOS inhibitor or the NOS substrate did not influence the degree of rectal damage. Conclusion: A single fraction of 17.5 Gy to the rat rectum is considered an optimal dose for a radiation-induced proctitis model. These results indicate that excess production of NO contributes in part to the pathogenesis of radiation-induced proctitis but is not the direct cause of rectal damage.


Analysis and de lege ferenda of the Acts Related with Spread of MERS in Korea in the Year 2015 - Focused on the Controversial Clauses of Medical Service Act and Infectious Disease Control and Prevention Act - (중동호흡기증후군 2015년 사태와 관련된 의료법령의 분석과 입법론 - 「의료법」 및 「감염병의 예방 및 관리에 관한 법률」의 쟁점 조항을 중심으로 -)

  • Kim, Cheonsoo
    • The Korean Society of Law and Medicine
    • /
    • v.16 no.2
    • /
    • pp.197-225
    • /
    • 2015
  • This paper was prompted by the spread of MERS in Korea in 2015. Analysis of the current acts related to MERS is necessary in order to cope efficiently with any future spread of infectious diseases such as MERS. The acts analyzed in this paper are the Medical Service Act and the Infectious Disease Control and Prevention Act (hereafter, IDCAPA). First, the classification of infectious diseases in the IDCAPA should be addressed. The Act does not classify them properly, because the conceptual scopes of the groups of infectious diseases overlap each other; this overlap should be removed. The present system in the IDCAPA is not suited to the efficient notification and reporting of infectious disease patients. This is true in several respects, including the persons obligated to make the notification and report, the persons to whom such patients should be notified and reported, and the process of notification and reporting. Efficient access to information related to an infectious disease is necessary for the rapid prevention of its spread. Cohort isolation and quarantine of infectious patients and exposed contacts are the strongest and most efficient measures for preventing the spread of infectious diseases. One of the major problems related to such measures is the conflict of powers or attributions, which is almost inevitable under the present system of the IDCAPA. The IDCAPA distributes the power to take these measures among three levels of government: the central government, the metropolitan government, and the primary local government. This power should be concentrated in the central government, which can financially afford to compensate for the huge damages likely to be caused by such measures; without such financial capacity, the power to take the measures is of little practical use to its holder. Remedies for victims harmed through the fault of a spreader should be approached in terms of national wealth: the general principles of tort law cannot provide the victims with a sufficient remedy, because the damages would likely be too large for the spreader's own wealth to cover. In the future, another parliamentary inspection could reveal further problems in the government's administration of the MERS event of 2015. Any problem caused by defects in the legal system for the control and prevention of infectious diseases should be taken into consideration when the legal system is reformed in the future.


Quality Characteristics, Carbon Dioxide, and Ethylene Production of Asparagus (Asparagus officinalis L.) Treated with 1-Methylcyclopropene and 2-Chloroethylphosphonic Acid during Storage (아스파라거스에서 1-MCP와 CEPA 처리에 따른 CO2 및 에틸렌 발생과 품질특성)

  • Lee, Jung-Soo
    • Horticultural Science & Technology
    • /
    • v.33 no.5
    • /
    • pp.675-686
    • /
    • 2015
  • Asparagus (Asparagus officinalis L.) needs proper post-harvest treatment to prolong its storage life. This study investigated the effect of 1-methylcyclopropene (1-MCP) on the quality and storage life of asparagus. Freshly harvested asparagus was treated with 1-MCP (1 mg·L⁻¹), CEPA (10 mg·L⁻¹), or 1-MCP (1 mg·L⁻¹) + CEPA (10 mg·L⁻¹) and compared with an untreated control. The carbon dioxide (CO2) production, ethylene production, and morphological characteristics of the stored asparagus were observed. Both flow-system and static-type measurement methods for ethylene and CO2 production (respiration rate) were used. Weight loss, respiration rate, degree of freshness, and ethylene production were monitored during storage at 7°C. The results showed that CEPA (2-chloroethylphosphonic acid) treatment had a greater effect on CO2 and ethylene production than 1-MCP treatment. Asparagus treated with CEPA or 1-MCP + CEPA showed a significantly increased ethylene production rate during storage compared with the control or 1-MCP alone. There were no evident changes in the respiration rate of asparagus under 1-MCP treatment compared with the control. With the flow system, slight differences in the rates of CO2 and ethylene production were noted compared with the static type, and the flow system gave clearer results. Weight loss in asparagus was significantly lower in the control and 1-MCP-treated samples than in those treated with CEPA. Likewise, the CO2 and ethylene production of the CEPA-treated samples increased significantly. The 1-MCP treatment reduced the effects of CEPA on weight loss, soluble solids content, and osmolality. This effect was not observed with exogenous ethylene, as CEPA treatment had no visible effect compared with the untreated group. Thus, 1-MCP treatment could slightly reduce damage to the quality of asparagus during distribution where ethylene gas is produced. Therefore, this study suggests that 1-MCP treatment can reduce the damage induced by ethylene gas on asparagus in poor distribution environments.

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • Deep learning models are a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields such as computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, which are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not only through layers but also through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
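
The convolutional ideas summarized in the abstract above (local receptive fields, shared weights, pooling) can be made concrete in a few lines of NumPy. The sketch below is a didactic illustration on a toy input, not code from any of the surveyed architectures; the kernel values and image size are arbitrary assumptions.

```python
# Minimal NumPy sketch of a convolutional layer with a single shared-weight kernel
# (local receptive fields) followed by 2x2 max pooling. Illustrative only.
import numpy as np

def conv2d(image, kernel):
    """Valid convolution: the same kernel (shared weights) slides over local patches."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)  # local receptive field
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keeps the strongest response in each region."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    fm = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return fm.max(axis=(1, 3))

image = np.random.rand(8, 8)                  # toy 8x8 "image"
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # one shared 2x2 filter
features = conv2d(image, kernel)              # 7x7 feature map
pooled = max_pool(features)                   # 3x3 summary after pooling
print(features.shape, pooled.shape)
```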

The Need for Paradigm Shift in Semantic Similarity and Semantic Relatedness : From Cognitive Semantics Perspective (의미간의 유사도 연구의 패러다임 변화의 필요성-인지 의미론적 관점에서의 고찰)

  • Choi, Youngseok;Park, Jinsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.111-123
    • /
    • 2013
  • Semantic similarity/relatedness measures between two concepts play an important role in research on system integration and database integration. Moreover, current research on keyword recommendation or tag clustering depends strongly on this kind of semantic measure. For this reason, many researchers in various fields, including computer science and computational linguistics, have tried to improve methods for calculating semantic similarity/relatedness measures. The study of similarity between concepts is meant to discover how a computational process can model the way a human determines the relationship between two concepts. Most research on calculating semantic similarity uses ready-made reference knowledge, such as a semantic network or a dictionary, to measure concept similarity. The topological method calculates relatedness or similarity between concepts based on various forms of a semantic network, including a hierarchical taxonomy. This approach assumes that the semantic network reflects human knowledge well. The nodes in a network represent concepts, and ways to measure the conceptual similarity between two nodes are also regarded as ways to determine the conceptual similarity of two words (i.e., two nodes in a network). Topological methods can be categorized as node-based or edge-based, also called the information content approach and the conceptual distance approach, respectively. The node-based approach calculates similarity between concepts based on how much information the two concepts share in terms of a semantic network or taxonomy, while the edge-based approach estimates the distance between the nodes that correspond to the concepts being compared. Both approaches assume that the semantic network is static; that is, the topological approach has not considered changes in the semantic relations between concepts in the semantic network. However, as information and communication technologies make it easier for people to share knowledge, the semantic relations between concepts in a semantic network may change. To explain this change in semantic relations, we adopt cognitive semantics. The basic assumption of cognitive semantics is that humans judge semantic relations based on their cognition and understanding of concepts. This cognition and understanding is called 'world knowledge.' World knowledge can be categorized as personal knowledge and cultural knowledge. Personal knowledge is knowledge from personal experience; everyone can have different personal knowledge of the same concept. Cultural knowledge is the knowledge shared by people who live in the same culture or use the same language; people in the same culture have a common understanding of specific concepts. Cultural knowledge can be the starting point of a discussion about the change of semantic relations: if the culture shared by people changes for some reason, their cultural knowledge may also change. Today's society and culture are changing at a fast pace, and the change of cultural knowledge is not a negligible issue in research on the semantic relationships between concepts. In this paper, we propose future directions for research on semantic similarity; in other words, we discuss how research on semantic similarity can reflect changes in semantic relations caused by changes in cultural knowledge. We suggest three directions for future research on semantic similarity.
  First, the research should include versioning and update methodologies for semantic networks. Second, dynamically generated semantic networks can be used for the calculation of semantic similarity between concepts; if researchers can develop a methodology to extract a semantic network from a given knowledge base in real time, this approach can solve many problems related to the change of semantic relations. Third, a statistical approach based on corpus analysis can be an alternative to methods that use a semantic network. We believe that these proposed research directions can serve as milestones for research on semantic relations.
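
The edge-based (conceptual distance) approach mentioned in the abstract above can be illustrated with a small sketch. The Python code below computes a path-length-based similarity over a hand-made toy taxonomy; both the taxonomy and the 1/(1 + distance) scaling are assumptions made for illustration, not a method proposed in the paper. It also illustrates the paper's point that such a measure is only as good as the static network behind it: any change in that network changes the similarity values.

```python
# Minimal sketch of an edge-based (conceptual distance) similarity measure:
# similarity is derived from the shortest path length in a toy semantic network.
from collections import deque

taxonomy = {  # undirected is-a links in a toy semantic network (illustrative only)
    "entity": ["animal", "vehicle"],
    "animal": ["entity", "dog", "cat"],
    "vehicle": ["entity", "car"],
    "dog": ["animal"],
    "cat": ["animal"],
    "car": ["vehicle"],
}

def shortest_path_length(graph, source, target):
    """Breadth-first search over the taxonomy edges."""
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return float("inf")

def edge_based_similarity(graph, a, b):
    # Shorter conceptual distance -> higher similarity.
    return 1.0 / (1.0 + shortest_path_length(graph, a, b))

print(edge_based_similarity(taxonomy, "dog", "cat"))  # path length 2 -> ~0.33
print(edge_based_similarity(taxonomy, "dog", "car"))  # path length 4 -> 0.20
```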