• Title/Summary/Keyword: perception of evaluation (평가에 대한 인식)

Search Result 3,832, Processing Time 0.033 seconds

A study on the case of education to train an archivist - Focus on archival training courses and the tradition of archival science in Italy - (기록관리전문가의 양성교육에 관한 사례연구 -이탈리아의 기록관리학 전통과 교육과정을 중심으로-)

  • Kim, Jung-Ha
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.1 no.1
    • /
    • pp.201-230
    • /
    • 2001
  • Conserving the recorded cultural heritage is, in fact, a duty shared by all of us. Above all, the management and conservation of archives and documents falls to archivists, who possess technical knowledge of archival science. Archivists must not only conserve archives and documents but also classify and appraise them in order to establish them as historical records. The foundational education in archival science consists of history and law, because an archive is an institution that manages records produced by legal and administrative actions. Although there is still debate over the technical knowledge and degrees archivists must acquire, most favor studies related to history and emphasize legal studies as the general framework of the archivist's ethos. Training courses in the conservation of archives are conducted in nine National Archives: Torino, Milano, Venezia, Genova, Bologna, Parma, Roma, Napoli, and Palermo. In the 19th century the training courses consisted mostly of lectures on palaeography and diplomatics; there was as yet no instruction in archival science itself. Toward the end of the 19th century and into the 20th, it came to be stressed that the most basic subject in the National Archives' training courses should be not palaeography and diplomatics but archival science. Archival science studies the institutions and organizations that transfer records to the archive; it enables archivists to know their work precisely rather than wander in ignorance of organizational structures and original procedures. In this way, the study of institutions and organizations has established itself as a branch of archival science over the past several decades. While archival science failed to win public sympathy and traveled a tedious and difficult path in Italy and other countries, archives were managed by experts from other fields, and as a result archival practice suffered many shortcomings.
A specialized training course for Italian archivists came into being in 1925 under the auspices of the Social Science Institute of the Roma National University. University archival courses were built on the study of history, law, and economics, and through the devotion of scholars such as Eugenio Casanova and Giorgio Cencetti, archival science was able to settle into the national archives. The training course for experts in 'archival science, palaeography and diplomatics' at the National Archive of Bologna (Archivio di Stato di Bologna) is one of the courses conducted in 17 National Archives in Italy. The course is free of charge and consists of eight subjects (Archivistica, Paleografia, Diplomatica, Storia dell'Archivio, Notariato e documenti privati, istituzione medievale, istituzione moderna, istituzione contemporanea) that students must complete over two years. Students receive the diploma by passing two written examinations and one oral test. After the Ministry of Culture and Education confirms the marks, the director of the National Archive of Bologna confers the diploma in 'archival science, palaeography and diplomatics' on the students who pass. This diploma certifies the trainee's qualification to work in provincial, district, and administrative-capital archives, community archives, and so on. The Italian training course naturally leads archivists to stay in contact with valuable cultural heritage through training inside the archive, and it reflects an intention to strengthen familiarity with the documents themselves in the field of archival management before archivists are certified. It is also regarded as a positive policy for conserving local cultural heritage, in keeping with the original character of the national archives as witnesses to the history of each region. The Italian training course for archivists shows us how we should prepare and proceed.
First, from the production of documents to their permanent conservation, the principle of 'original order' has been introduced: a general rule to respect the order first given to documents at the time of their production. The management of administrative documents is thus consistently connected with that of historical documents. Second, the training courses for archivists are run by about 17 National Archives, because the Italian national archives lay stress not on the teaching of theory but on training archivists who work on the front line of archival practice. Third, diplomatics and palaeography, as disciplines for the study of historical documents, support the archives. Fourth, the study of history proceeds through cooperation between archivists and historians centered on the archive. Our task is not to continue the dispute over who should conserve and manage documents and archives, but to train experts who have the ability, vision, flexible thinking, and sense of responsibility that archives demand.

A Review of Cholesterol in Animal Food Products (축산식품중(畜産食品中)의 Cholesterol에 관(關)한 고찰(考察))

  • Han, Seok-Hyeon
    • Proceedings of the Korean Society for Food Science of Animal Resources Conference
    • /
    • 1995.11a
    • /
    • pp.1-48
    • /
    • 1995
  • Diet is central to human life, and eating is its means. One important task is to elucidate, down to the micro level, how humans in their various environments can maintain health, physiologically develop their innate capacities to the fullest, and extend their lifespan through the way they take food. Over a lifetime a person eats an enormous quantity of food (about two million pounds over a 70-year lifespan, roughly 1,400 times body weight). What matters, however, is which foods one chooses and how one eats them for the sake of one's health and longevity. Recently, as national income has risen, the Korean diet appears to be shifting toward a Western pattern. As problems such as pollution and imported foods have been raised, calls for natural and health foods have grown louder. Among them, the claim that animal food products contain more cholesterol than other foods has been pressed indiscriminately, as if they were the main culprit of cardiovascular disease, even to the point of provoking fear of meat and aversion to eggs. This review therefore examines whether the cholesterol content of animal food products is really the main cause of adult diseases, or whether it should be evaluated properly in relation to other fatty acids, and surveys the problems and countermeasures; the summary is as follows. 1. Since the dawn of history, humans have instinctively eaten the plants and animal flesh around them, grown, reproduced, aged, and died, repeating this cycle through long evolution to reach the present human form. Since human ancestors such as the anthropoid apes would have eaten much meat through hunting, humans may originally have been carnivorous; Paleolithic remains yield many bones, and the cave paintings of Altamira and Lascaux depict hunting vividly. 2. There is a record that a branch of our ancestors, flowing into the Baekdusan region and the Songhua River basin of Manchuria, formed a food culture based on hunting and herding as its main means of obtaining food, moved south, and, as the Maek people (貊族), ancestors of the people of the Korean peninsula, ate a dish called maekjeok (貊炙), today's bulgogi. 3. Going back to the 1900s, New Zealand was the world's longest-lived country (Australia second), with average lifespans of 58 years for men and 69 for women, while in Japan and Korea at the time the figures were 36 for men and 37 for women. Japan became the world's longest-lived country by 1989; it should not be overlooked that around 1990 New Zealand and Australia were livestock- and wheat-producing countries, and that Japan today maintains a rational diet. 4. Among the ten leading causes of death in Korea (1994), cerebrovascular disease ranks first, traffic accidents second, and cancer third; by age, accidents (chiefly traffic) dominate in the teens to thirties, cancer in the forties to sixties, and cerebrovascular disease over seventy. In the seven major Western countries and Japan, heart disease is the leading cause of death. 5. As for dietary change, grain intake in Korea fell to 0.7 times the 1970 level by 1994, while meat increased 5-fold, eggs 2.4-fold, and milk no less than 29.3-fold; the dietary pattern appears to be Westernizing. 6. In 1971 the average Korean lipid intake was 13.1 g per person per day, 5.7% of energy intake; by 1992 it had reached 34.5 g, 16.6% of total energy, with animal fat accounting for 47% of total fat intake. Mean serum cholesterol rose 11% between 1980 and 1988, and the proportion of people with hypercholesterolemia above 210 mg/dl rose sharply from 5% in 1980 to 23% in 1988. 7. The world's leading countries consume from 2 to as much as 6-7 times more protein, that is, animal food products, than Korea, and in 1990 Korean lipid intake was only one third that of Japan. 8. Cholesterol is distributed as an essential component in the human body and all animals; the total amount in the body is 90-150 g, of which serum cholesterol accounts for only 4% (6 g). Yet controversy over this very small amount of cholesterol has dragged on for 60-70 years.
9. The physiological functions of cholesterol include: (1) structural support of cell membranes; (2) insulation of nerve cells; (3) synthesis of bile acids; (4) synthesis of vitamin D; (5) a molecule indispensable in pregnancy; and (6) various other functions; it is an essential substance. 10. Even if we take in about 550 mg of dietary cholesterol, this roughly matches the amount excreted and consumed; since 100-300 mg is lost through the skin and sweat glands alone, the USDA's limit of 300 mg on intake is of little significance. 11. The lipoproteins that transport cholesterol are classified, from lowest density upward, into chylomicrons, very-low-density lipoprotein (VLDL), low-density lipoprotein (LDL), and high-density lipoprotein (HDL); LDL carries about 70% of serum cholesterol and HDL about 20%. 12. Factors affecting the blood cholesterol level may be listed as follows. 1) Only about 10-40% of dietary cholesterol is absorbed, and as endogenous synthesis increases, dietary cholesterol has little real effect on the serum level, so there is no need to fear or avoid cholesterol in food. 2) A balanced ratio of polyunsaturated, monounsaturated, and saturated fatty acids (the P/M/S ratio) is recommended. 3) Thrombosis, a cause of atherosclerosis and other adult diseases, can be prevented by raising EPA intake. 4) The eicosanoids (prostanoids) derived from the omega-6 fatty acid arachidonic acid and the omega-3 fatty acid EPA are important bioactive substances biosynthesized from omega-6 and omega-3 precursors. 5) Blood cholesterol generally rises with age from 20 to 60 and then plateaus; cardioprotective HDL cholesterol declines while atherogenic LDL cholesterol rises. 6) Because a high HDL level reduces the risk of heart disease, HDL is called the 'good' and LDL the 'bad' cholesterol; environmental factors have a greater influence on this than heredity. 7) It is affected by individual lifestyle, including living habits and nutritional status. 8) Many experiments suggest that the rise of blood cholesterol in old age may be necessary for the natural, physiological, biochemical, and metabolic functions of cells adapting to aging; in this light, 200 mg/dl does not seem to be the most appropriate cholesterol level for elderly women. 9) Stress takes two forms, harmful (negative) and beneficial (positive); relaxation lowers blood cholesterol by about 10%. 10) People who exercise regularly have higher blood HDL cholesterol than those who do not; the degree of physical exercise is directly proportional to the HDL level. 11) Smoking promotes fat deposition, contributing to thrombosis, and lowers the HDL level in the blood. 12) Weight gain from excess energy intake generally affects lipoprotein metabolism, with hepatic overproduction of cholesterol and a shift of VLDL toward LDL cholesterolemia, so obesity should be avoided along with taking exercise. 13. Techniques for controlling cholesterol content: 1) In evaluating foods it should not simply be a matter of classifying them as animal or vegetable and judging them wholesale, since the effect differs with the kinds of fatty acids each food contains. 2) For the smooth functioning of the body, not only the P/M/S ratio but also the omega-6/omega-3 ratio of dietary fat must lie in an appropriate range; an excess of one or two fatty acids can create another imbalance.
3) To raise the omega-3 fatty acid content of chicken meat, oily fish, fish meal, or fish oil is added to the feed, and the content in the meat increases with the level of supplementation. 4) The development of eggs with altered yolk fatty acid composition and increased omega-3 content has become active. 14. To dispel consumers' negative perception of egg cholesterol, the task of lowering the cholesterol content of eggs has emerged and various technologies are being attempted, but none has yet reached practical use. 15. As a countermeasure to the egg cholesterol problem, research on reducing the size of the yolk is also needed. 16. Reported egg cholesterol values can confuse consumers depending on how they are expressed; moreover, values analyzed by the older colorimetric method differ considerably from those obtained by today's enzymatic method. 17. Physical and biological methods of reducing cholesterol have been proposed to satisfy consumers and promote butter consumption, but none is yet applicable in practice. 18. DHA milk is already on the market in Korea, and in the case of cholesterol-free butter the trans fatty acid question will remain controversial. Finally, one national goal is to build a welfare society, and the realization of a welfare state requires first the rationalization of the diet, one of the basic desires of the people. As income grows and the country develops, the pursuit of nutritious, healthy, and palatable food is a natural trend. It is certainly encouraging that our diet is improving qualitatively, from the carbohydrate-centered diet of the past toward animal products. It is time to create a rich food culture based on animal products while devoting ourselves to health, longevity, prosperity for our descendants, and national competitiveness.


A Clinical Study on Transpulmonary Leukostasis and Prophylactic Effects of Steroid in Cardiac Surgery (심장수술시 백혈구의 폐내정체와 스테로이드의 예방적 효과에 관한 연구)

  • 최석철
    • Biomedical Science Letters
    • /
    • v.2 no.2
    • /
    • pp.133-151
    • /
    • 1996
  • After cardiac surgery, it is recognized that various complications are associated with the injury to humoral and cellular immunity caused by cardiopulmonary bypass (CPB). In postoperative pulmonary dysfunction in particular, transpulmonary leukostasis following complement activation and the inflammatory response is a major pathogenic mechanism. Some studies have shown that corticosteroids given before CPB protect against postoperative pulmonary dysfunction, presumably by inhibiting complement and leukocyte activation. Based on these previous studies, the present investigator measured changes in leukocyte counts and transpulmonary leukostasis during cardiac surgery and the postoperative period. To evaluate postoperative pulmonary function and edema, PaO2 and chest X-rays were compared between pre-CPB and post-CPB; fever and other parameters were also observed postoperatively. The aim of this study was to define the prophylactic effect of a corticosteroid (Solu-Medrol, 30 mg/kg) on all the parameters studied. The study was prospectively designed in a randomized, blinded fashion for 50 patients undergoing cardiac surgery. The patients were divided into a placebo group of 25 patients who received normal saline (no corticosteroid) and a steroid group of 25 patients who received the corticosteroid (Solu-Medrol, 30 mg/kg) before initiation of CPB. The results are summarized as follows. 1. Total peripheral leukocyte counts decreased significantly at 5 minutes of CPB in all patients (P<0.01) and began to increase progressively with neutrophilia in the later period of CPB; the significant rise persisted to the 7th postoperative day (P<0.05). 2. During partial CPB, transpulmonary leukostasis occurred in the placebo group (P<0.001), whereas it was prevented in the steroid group. 3.
In both groups, peripheral lymphocyte counts were stable during CPB but began to fall on arrival at the intensive care unit (ICU), and the lymphocytopenia persisted until the 3rd postoperative day, recovering by the 7th postoperative day. 4. In both groups, peripheral monocyte counts were relatively stable in the early period of CPB and increased gradually in the later period; this significant monocytosis persisted throughout the postoperative period (P<0.05). 5. The mean postoperative PaO2 was lower than the pre-CPB value in the placebo group (P=0.01) but not significantly so in the steroid group (P=0.90). The incidence of signs of pulmonary edema and of fever was higher in the placebo group than in the steroid group (P=0.001 and P=0.01, respectively). However, the duration of mechanical ventilatory support and of intensive care did not differ significantly between the two groups (P>0.05). From these results the investigator concluded that leukocyte activation and pulmonary sequestration are caused by cardiac surgery with CPB, and that high-dose corticosteroid provides a prophylactic effect against pulmonary leukostasis and excessive neutrophilia. These effects may ameliorate postoperative pulmonary dysfunction and contribute to lower postoperative morbidity. However, further study is needed, because postoperative lymphocytopenia continued for 3 days in both groups, which may suggest damage to or suppression of cell-mediated immunity by the corticosteroid used.


The Definition of Outer Space and the Air/Outer Space Boundary Question (우주의 법적 지위와 경계획정 문제)

  • Lee, Young-Jin
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.30 no.2
    • /
    • pp.427-468
    • /
    • 2015
  • To date, we have considered the theoretical views, the positions of states, and the discourse within the international community, such as the UN Committee on the Peaceful Uses of Outer Space (COPUOS), regarding the air/outer space boundary question, one of the first issues taken up by UN COPUOS, which marks the starting point of the outer space area. As mentioned above, discussions in the United Nations and among scholars within each state regarding the delimitation issue have often seen a division between those favoring a functional approach (the functionalists) and those seeking the delineation of a boundary (the spatialists). The spatialists emphasize that the boundary between airspace and outer space should be delimited because outer space is a kind of public domain from which sovereign jurisdiction is excluded, as stated in Article II of the Outer Space Treaty, whereas Article 1 of the Chicago Convention evidences the acknowledgement of sovereignty over airspace as international customary law, whose binding force exists independently of the Convention. The functionalists, backed initially by the major space powers, which viewed any boundary demarcation as possibly restricting their access to space, whether for peaceful or non-military purposes, considered it insufficient or inadequate to delimit a boundary of outer space without obvious scientific and technological evidence. Over the last more than fifty years there has been great development in the exploration and use of outer space, yet a large number of states, including those taking the functionalist view, have maintained a negative attitude. Since the element of location is a decisive factor in the choice of the legal regime to be applied, a purely functional approach to the regulation of activities in the space above the Earth does not offer a solution.
It therefore seems welcome that clear evidence is arriving of a growing recognition of, and national practice concerning, a spatial approach to the problem, which is gaining support both among a large number of states and among publicists. The search for a solution to the problem of demarcating the two different legal regimes governing the space above the Earth has undoubtedly been facilitated, and a number of countries, including Russia, have already advocated accepting the lowest perigee of outer space, at a height of 100 km, as the boundary. As a matter of fact, the lowest perigee at which space objects are still able to continue orbiting the Earth has already imposed itself as a natural criterion for the delimitation of outer space. This delimitation has also been evidenced by the constant practice of a large number of states and their tacit consent to the space activities accomplished so far at this distance and beyond. Of course, there are still numerous opposing views on delineating an outer space boundary among space powers such as the U.S.A., England, and France. Therefore, to solve the legal issues the international community faces in outer space activities, such as the delimitation problem, a positive and peaceful will for international cooperation is needed first of all. From this viewpoint, President John F. Kennedy once described the rationale behind outer space activities in his famous "Moon speech" given at Rice University in 1962. He called upon Americans and all mankind to strive for peaceful cooperation and coexistence in our future outer space activities, explaining, "There is no strife, … nor any international conflict in outer space as yet. But its hazards are hostile to us all: Its conquest deserves the best of all mankind, and its opportunity for peaceful cooperation may never come again."
This speech seems to offer us, even in the contemporary era, ample suggestions for further peaceful cooperation in outer space activities, including the delimitation of outer space.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently AlphaGo, Google DeepMind's artificial intelligence program for Baduk (Go), won a decisive victory over Lee Sedol. Many people thought that machines would be unable to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems; it performs especially well in image recognition, and also on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to perform well. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for the binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account.
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. However, since all network design alternatives cannot be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used instead of overall accuracy to evaluate how well the models classify the class of interest. The deep learning techniques were applied as follows. The CNN algorithm reads adjacent values around a specific value and recognizes local features; but because the fields of business data are usually independent, the distance between fields carries no meaning, so we set the CNN filter size to the number of fields, letting the model learn the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. In the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For dropout, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
This study yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNN performed well not only in the fields where its effectiveness is proven but also in binary classification problems, to which it has rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to solve business binary classification problems.
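The F1 score used as the evaluation measure above is the harmonic mean of precision and recall for the class of interest. A minimal sketch in Python (the label vectors below are invented for illustration, not the study's data):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy labels: 3 true positives, 1 false positive, 1 false negative.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
print(f1_score(y_true, y_pred))  # 0.75 (precision 3/4, recall 3/4)
```

Unlike overall accuracy, this measure is unaffected by the size of the majority class, which is why it suits imbalanced targets such as telemarketing responses.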

Multivessel Coronary Revascularization with Composite LITA-RA Y Graft (좌내흉동맥-요골동맥 복합이식편을 이용한 다중혈관 관상동맥우회술)

  • Lee Sub;Ko Mgo-Sung;Park Ki-Sung;Ryu Jae-Kean;Jang Jae-Suk;Kwon Oh-Choon
    • Journal of Chest Surgery
    • /
    • v.39 no.5 s.262
    • /
    • pp.359-365
    • /
    • 2006
  • Background: Arterial grafts have been used to achieve better long-term results in coronary revascularization. Bilateral internal thoracic artery (ITA) grafting gives better results, but it may not be usable in situations such as diabetes and chronic obstructive pulmonary disease (COPD). We evaluated the clinical and angiographic results of the composite left internal thoracic artery-radial artery (LITA-RA) Y graft. Material and Method: Between April 2002 and September 2004, 119 patients undergoing coronary bypass surgery with a composite Y graft were enrolled. The mean age was 62.6±8.8 years and 34.5% were female. Preoperative cardiac risk factors were hypertension 43.7%, diabetes 33.6%, smoking 41.2%, and hyperlipidemia 22.7%. There were 14 emergency operations, 6 cases of cardiogenic shock, 17 patients with a left ventricular ejection fraction (LVEF) below 40%, and 17 cases of left main disease. Coronary angiography was performed in 35 patients before hospital discharge. Result: The number of distal anastomoses was 3.1±0.91, and three patients (2.52%) died during the hospital stay. Off-pump coronary artery bypass (OPCAB) was applied in 79 patients (66.4%). The LITA was anastomosed to the left anterior descending system except in three cases where it went to the lateral wall. The radial Y grafts were anastomosed to diagonal branches (4), the ramus intermedius (21), obtuse marginal branches (109), posterolateral branches (12), and the posterior descending coronary artery (8). Postoperative coronary angiography in 35 patients showed excellent patency rates (LITA 100%, RA 88.5%; 3 RA grafts anastomosed to coronary arteries less than 70% stenosed showed a string sign with competitive flow). Conclusion: The LITA-RA Y composite graft provided good early clinical and angiographic results in multivessel coronary revascularization, but it should be used cautiously in selected patients.

The Precision Test Based on States of Bone Mineral Density (골밀도 상태에 따른 검사자의 재현성 평가)

  • Yoo, Jae-Sook;Kim, Eun-Hye;Kim, Ho-Seong;Shin, Sang-Ki;Cho, Si-Man
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.67-72
    • /
    • 2009
  • Purpose: The ISCD (International Society for Clinical Densitometry) requires users to perform a mandatory precision test to assure quality, but gives no recommendation on how to select patients for the test. We therefore investigated the effect on the precision test of measuring reproducibility in three bone density groups (normal, osteopenia, osteoporosis). Materials and Methods: Four users performed precision tests on 420 patients (age 57.8±9.02) undergoing BMD at Asan Medical Center (January-June 2008). In the first group (A), the four users each selected 30 patients regardless of bone density status and measured two sites (L-spine, femur) twice. In the second group (B), the four users each measured the bone density of 10 patients in the same manner as in group A, but divided the patients into three stages (normal, osteopenia, osteoporosis). In the third group (C), two users each measured 30 patients in the same manner as in group A, taking bone density status into account. We used a GE Lunar Prodigy Advance (Encore v11.4) and analyzed the results by comparing the %CV with the LSC, using the precision tool from the ISCD; verification was done with SPSS. Results: In group A, the %CV values calculated by the four users (a, b, c, d) were 1.16, 1.01, 1.19, and 0.65 g/cm² in the L-spine and 0.69, 0.58, 0.97, and 0.47 g/cm² in the femur. In group B, they were 1.01, 1.19, 0.83, and 1.37 g/cm² in the L-spine and 1.03, 0.54, 0.69, and 0.58 g/cm² in the femur. Comparing the results of groups A and B, we found no considerable difference. In group C, user 1's %CV values for normal, osteopenia, and osteoporosis were 1.26, 0.94, and 0.94 g/cm² in the L-spine and 0.94, 0.79, and 1.01 g/cm² in the femur, while user 2's were 0.97, 0.83, and 0.72 g/cm² in the L-spine and 0.65, 0.65, and 1.05 g/cm² in the femur. The differences in reproducibility were mostly negligible, but the differences between the two users' several values did affect the overall reproducibility.
Conclusions: The precision test is an important factor in bone density follow-up. When the machine's and the user's reproducibility improve, the narrow range of deviation is clinically useful. Users must check the machine's reproducibility before testing and apply the same care when performing BMD tests on patients. In precision testing, differences in measured values usually arise from ROI changes caused by patient positioning. In osteoporosis patients it is harder to place the initial ROI accurately than in normal or osteopenic patients because of poor bone recognition, even though the ROI is drawn automatically by the software. The initial ROI is nevertheless very important, and users must draw it consistently, because the ROI copy function is used at follow-up. In this study we performed the precision test taking bone density status into account and found that the LSC stayed within 3%, with no considerable difference between groups. Thus patients for the precision test may be selected regardless of bone density status.
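The %CV and LSC compared above follow the commonly cited ISCD definitions: short-term precision is the root-mean-square of per-patient standard deviations from duplicate scans, and the least significant change at 95% confidence is 2.77 times the precision. A minimal sketch, assuming duplicate scans per patient (the measurement pairs below are illustrative, not the study's data):

```python
import math

def precision_rms_cv(pairs):
    """ISCD-style short-term precision from duplicate scans.

    pairs: list of (scan1, scan2) BMD values in g/cm^2, one pair per patient.
    Returns (RMS-SD in g/cm^2, RMS %CV).
    """
    n = len(pairs)
    # SD of a duplicate pair with one degree of freedom is |a - b| / sqrt(2).
    rms_sd = math.sqrt(sum((a - b) ** 2 / 2 for a, b in pairs) / n)
    cvs = [abs(a - b) / math.sqrt(2) / ((a + b) / 2) * 100 for a, b in pairs]
    rms_cv = math.sqrt(sum(c * c for c in cvs) / n)
    return rms_sd, rms_cv

# Illustrative duplicate L-spine measurements (hypothetical values).
pairs = [(1.02, 1.03), (0.98, 0.99), (1.10, 1.08)]
sd, cv = precision_rms_cv(pairs)
lsc_percent = 2.77 * cv  # least significant change at 95% confidence
```

A follow-up change smaller than the LSC cannot be distinguished from measurement noise, which is why the precision test matters for monitoring.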


Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the use and collection of information have become important. A facial expression, like an artistic painting, contains information that could be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions from facial expressions. For example, the MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as multiple regression analysis (MRA) and artificial neural networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy, which is inevitable since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN produced more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using support vector regression (SVR) to increase prediction accuracy. SVR is an extension of the support vector machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction.
Using SVR, we built a model that measures the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data; preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search to find the optimal values of parameters such as C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer; the learning rate and momentum were set to 10%, and the sigmoid function was used as the transfer function of the hidden and output nodes. We repeated the experiments, varying the number of hidden nodes over n/2, n, 3n/2, and 2n, where n is the number of input variables, with a stopping condition of 50,000 learning events, and used MAE (mean absolute error) as the measure for performance comparison. The experiments showed that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN; regardless of the target variable (the level of arousal or of positive/negative valence), SVR performed best on the hold-out data. ANN also outperformed MRA, but its prediction accuracy was considerably lower than SVR's for both target variables. The findings of this research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
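The two quantities named above, SVR's ε-insensitive loss and the MAE comparison measure, can be written down in a few lines. A minimal sketch in Python (the prediction values are hypothetical, not the study's data):

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """SVR's epsilon-insensitive loss: errors inside the eps tube cost nothing."""
    return sum(max(abs(t - p) - eps, 0.0) for t, p in zip(y_true, y_pred))

def mae(y_true, y_pred):
    """Mean absolute error, the comparison measure used in the study."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0.5, 0.8, 0.3]
y_pred = [0.55, 0.6, 0.3]
# The 0.05 error falls inside the eps=0.1 tube and is ignored;
# only the 0.2 error contributes, costing 0.2 - 0.1 = 0.1.
print(round(eps_insensitive_loss(y_true, y_pred), 3))  # 0.1
print(round(mae(y_true, y_pred), 4))                   # 0.0833
```

Because points inside the ε tube incur zero loss, only the remaining points (the support vectors) shape the fitted model, which is the property the abstract describes.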

Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.95-110
    • /
    • 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in a growing need to collect, store, search, analyze, and visualize this data. Such data cannot be handled appropriately with the traditional methodologies usually used for structured data because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data such as text files and log files through various commercial or noncommercial analytical tools. Among the various contemporary issues dealt with in the literature on unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneering researchers and business practitioners. Opinion mining, or sentiment analysis, refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not otherwise have been solved by existing traditional approaches. One of the most representative attempts using the opinion mining technique may be recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published in various media is an obvious traditional example of unstructured text data. Every day, a large volume of new content is created, digitalized, and subsequently distributed to us via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information. 
In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies including ours have utilized a sentiment dictionary to elicit sentiment polarity or sentiment values from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values. Sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, of sentences in a document, and of the whole document. However, most traditional approaches share a common limitation in that they do not consider the flexibility of sentiment polarity; that is, the sentiment polarity or sentiment value of a word is fixed and cannot be changed in a traditional sentiment dictionary. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis. It can also be contradictory in nature. This flexibility of sentiment polarity motivated us to conduct this study. In this paper, we argue that sentiment polarity should be assigned not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement our idea, we present an intelligent investment decision-support model based on opinion mining that performs the scraping and parsing of massive volumes of economic news on the web, tags sentiment words, classifies the sentiment polarity of the news, and finally predicts the direction of the next day's stock index. In addition, we applied a domain-specific sentiment dictionary instead of a general-purpose one to classify each piece of news as either positive or negative. For the purpose of performance evaluation, we performed intensive experiments and investigated the prediction accuracy of our model. 
For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
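The dictionary-based polarity idea at the core of the abstract can be sketched in a few lines. All words, sentiment values, and the sample headline below are hypothetical (not from the study's corpus or its actual dictionary); the point is only to show how a domain-specific dictionary can flip the polarity a general-purpose one would assign.

```python
# Hedged sketch: classifying a news headline with two sentiment dictionaries.
# Word lists and values are invented for illustration, not the study's data.

# A general-purpose dictionary might treat "surge" and "record" as neutral,
# while a stock-market dictionary assigns them positive values.
general_dict = {"surge": 0.0, "fall": -1.0, "fear": -1.0, "record": 0.0}
stock_dict   = {"surge": 1.0, "fall": -1.0, "fear": -1.0, "record": 1.0}

def classify(headline, dictionary):
    """Sum the sentiment values of known words; the sign gives the polarity."""
    score = sum(dictionary.get(w, 0.0) for w in headline.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

headline = "Stocks surge to record high despite fear of slowdown"
print(classify(headline, general_dict))  # -> negative (only "fear" scores)
print(classify(headline, stock_dict))    # -> positive (domain terms score)
```

The same headline is classified as negative under the general-purpose dictionary but positive under the domain-specific one, which is the flexibility-of-polarity problem the paper's domain-specific dictionary is meant to address; a real system would add tokenization, morphological analysis for Korean text, and negation handling on top of this.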

A Study on the Tree Surgery Problem and Protection Measures in Monumental Old Trees (천연기념물 노거수 외과수술 문제점 및 보존 관리방안에 관한 연구)

  • Jung, Jong Soo
    • Korean Journal of Heritage: History & Science
    • /
    • v.42 no.1
    • /
    • pp.122-142
    • /
    • 2009
  • This study reviewed domestic and international theories on the maintenance and health enhancement of old and big trees, carried out an anatomical survey of operated parts to assess the current status of domestic tree surgery along with a perception survey of an expert group, and drew the following conclusions while suggesting a reform plan. First, an analysis of the correlation of the 67 subject trees with their ages, growth status, and surroundings revealed that the results were closely related to positional characteristics and damage size, but only weakly related to the filler materials. Second, the affected parts were most frequently bough-sheared parts under $0.09m^2$, and the hollow size by position (part) was largest at 'root + stem', starting from behind the main root and stem. The correlation analysis elicited the same result for the group with low correlation. Third, serious problems arose when fillers (especially urethane) were charged into large hollows or exposed roots behind the root and stem part, or used for surface processing; the benefit of filling the hollow part was analyzed as small. Fourth, the surface processing of the fillers currently used (artificial bark) is mainly 'epoxy + woven fabric + cork', but it is not flexible, which has caused frequent cracks and cracked surfaces at the joint with the tree-textured part. Fifth, the correlation between the external status of the operated part and its closeness, surface condition, formation of adhesive tissue, and internal survey results was very high. Sixth, the factor most responsible for flushing caused by wrong management of an old and big tree was banking, and wrong pruning was a source of damage to the above-ground parts. In pruning, a small bough can easily recover from damage through the formation of adhesive tissue when it is cut by the standard method. 
Seventh, the parameter affecting the handling time of work related to old and big trees is 'the need for conscious reform of managers and related work'. Eighth, a reform plan in the institutional aspect can include the arrangement of laws and an organization for the management and preservation of old and big trees. This study, which prepared a reform plan through a status survey of designated old and big trees, is limited in that the reform plan is derived from status surveys in individual research, and its grounds are weakly supported by statistical data. These limitations can be complemented by subsequent studies.