• Title/Summary/Keyword: phenomenon


The Analysis of the Current Status of Medical Accidents and Disputes Researched in the Korean Web Sites (인터넷 사이트를 통해 살펴본 의료사고 및 의료분쟁의 현황에 관한 분석)

  • Cha, Yu-Rim;Kwon, Jeong-Seung;Choi, Jong-Hoon;Kim, Chong-Youl
    • Journal of Oral Medicine and Pain, v.31 no.4, pp.297-316, 2006
  • The rising number of medical disputes is a remarkable social phenomenon, and the rapid growth in the production and circulation of information about medical accidents on the internet must not be overlooked. In March 2006, we searched for web sites providing information on medical accidents using the keyword "medical accidents", classified the 28 sites found according to the type of establisher, analyzed their contents, compared their current status and the problems to be improved, and finally suggested possible ways to prevent medical accidents. The detailed results are as follows. 1. Medical practitioners, the general public, and lawyers were all familiar with, and mainly preferred, the term "medical accidents". 2. Among the sites retrieved by the keyword "medical accidents", lawyers operated the most and medical practitioners the fewest. 3. Many sites run by the general public and by lawyers had their own medical record analysts, but professional analysts for dentistry were scarce. 4. The general public was more interested in preventing medical accidents, whereas lawyers were more interested in the process after an accident; compared with the other sites, those run by medical practitioners dealt least with remedies for medical accidents. 5. The general public wanted third-party intervention in disputes, such as a medical dispute arbitration law and/or the establishment of an independent medical dispute judgment institution. 6. Among the site establishers, medical practitioners presented the fewest examples of medical accidents. 7. Case presentations in counseling articles related to dental accidents were treated as less important than reality warrants. 8. Whereas many articles covered domestic cases of dental treatment involving bleeding, the open counseling articles dealt largely with non-insured dental treatment. 9. Comparing the information offered by type of establisher, the general public offered vocabulary, lawyers offered related laws, and medical practitioners offered medical knowledge. 10. All of them cited press reports to present the current status of domestic medical accidents; among the sites run by the general public, NGOs in particular provided plentiful statistical data on medical accidents. 11. Only two web sites collected reports of medical accidents. In conclusion, we found that, amid this flood of information, medical disputes can arise from incorrect information supplied by third parties, and that medical practitioners take the most passive attitude toward medical accidents. It is therefore crucial for lawyers, patients, and medical practitioners to exchange information so that, on the basis of clear mutual understanding, accidents and disputes can be resolved more positively and actively.

THE EFFECTS OF THE PLATELET-DERIVED GROWTH FACTOR-BB ON THE PERIODONTAL TISSUE REGENERATION OF THE FURCATION INVOLVEMENT OF DOGS (혈소판유래성장인자-BB가 성견 치근이개부병변의 조직재생에 미치는 효과)

  • Cho, Moo-Hyun;Park, Kwang-Beom;Park, Joon-Bong
    • Journal of Periodontal and Implant Science, v.23 no.3, pp.535-563, 1993
  • Techniques for regenerating destroyed periodontal tissue have been studied for many years. Currently accepted methods of promoting periodontal regeneration are based on removal of diseased soft tissue, root treatment, guided tissue regeneration, graft materials, and biological mediators. Platelet-derived growth factor (PDGF) is a polypeptide growth factor reported to act as a biological mediator regulating wound-healing activities, including cell proliferation, migration, and metabolism. The purpose of this study was to evaluate the possibility of using PDGF as a regeneration-promoting agent for furcation involvement defects. Eight adult mongrel dogs were used. The dogs were anesthetized with pentobarbital sodium (25-30 mg/kg body weight, Tokyo Chemical Co., Japan), and conventional periodontal prophylaxis was performed with an ultrasonic scaler. After intrasulcular and crestal incisions, a mucoperiosteal flap was elevated. Following decortication with a 1/2 high-speed round bur, a degree III furcation defect was made on the mandibular second (P2) and fourth (P4) premolars. As a basic root-surface treatment, fully saturated citric acid was applied to the exposed root surfaces for 3 minutes. On the right P4, 20 ug of human recombinant PDGF-BB dissolved in acetic acid was applied with a polypropylene autopipette. On the left P2 and right P2, PDGF-BB was applied after insertion of β-tricalcium phosphate (TCP) and collagen (Collatape), respectively. The left mandibular P4 served as the control. Systemic antibiotics (penicillin-G benzathine and penicillin-G procaine, 1 ml per 10-25 lbs body weight) were administered intramuscularly for 2 weeks after surgery. Irrigation with 0.1% chlorhexidine gluconate around the operated sites was performed throughout the experimental period, except on the day immediately after surgery. Soft diets were fed throughout the experiment. After 2, 4, 8, and 12 weeks, the animals were sacrificed by perfusion. Tissue blocks including the teeth were excised and prepared for light microscopy with H-E staining. At 2 weeks after surgery, rapid osteogenesis was observed in the defect area of the PDGF-only group, and an early trabeculation pattern had formed with new osteoid tissue produced by activated osteoblasts. Bone formation was almost complete to the fornix of the furcation by 8 weeks after surgery. New cementum formation was observed from 2 weeks after surgery, and its thickness increased until 8 weeks, with typical Sharpey's fibers re-embedded into the new bone and cementum. In both the PDGF-BB with TCP group and the PDGF-BB with collagen group, regeneration, including new bone and new cementum formation, lagged behind the PDGF-only group, especially in the early weeks; it may be that the migration of actively proliferating cells was impeded by the graft materials. In conclusion, platelet-derived growth factor can promote rapid osteogenesis during the early stage of periodontal tissue regeneration.


Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems, v.20 no.2, pp.109-122, 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, greatly influencing society. This is an unmatched phenomenon in history; we now live in the Age of Big Data. SNS data qualifies as Big Data in that it satisfies the conditions of volume (the amount of data), velocity (data input and output speed), and variety (the diversity of data types). If the trend of an issue can be discovered in SNS Big Data, that information can serve as an important new source for value creation, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for SNS Big Data analysis. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) a topic keyword set with daily rankings; (2) a daily time-series graph of a topic over a month; (3) the importance of a topic, shown as a treemap based on a score system and frequency; and (4) a daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process the many unrefined forms of unstructured data. It also requires the latest big data technology, such as the Hadoop distributed system or NoSQL (an alternative to relational databases), to process a large amount of real-time data rapidly. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from a single node to thousands of machines. Furthermore, we use MongoDB, a NoSQL, open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data-processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly, so TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data; interaction with the data is easy, and it manages real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a collection of pre-configured style sheets and JavaScript plug-ins, to build the web front end. The TITS Graphical User Interface (GUI) is designed with these libraries and detects issues on Twitter in an easy, intuitive manner. The proposed work demonstrates the quality of our issue-detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The experiments used nearly 150 million tweets collected in Korea during March 2013.

The Characteristics of Rural Population, Korea, 1960~1995: Population Composition and Internal Migration (농촌인구의 특성과 그 변화, 1960~1995: 인구구성 및 인구이동)

  • 김태헌
    • Korea Journal of Population Studies, v.19 no.2, pp.77-105, 1996
  • The rural problems we face stem from an extremely small population and a population structure skewed by age and sex. We therefore analyzed the change in the rural population and examined recent return migration to rural areas by comparing recent in-migrants with out-migrants. By analyzing a rural village survey designed to show the current characteristics of the rural population, we also identified the effects of in-migrants on rural areas and predicted the futures of rural villages by their characteristics. The change in the age composition of the rural population is very clear. As out-migration to the cities continued, the share of young children aged 0~4 became low and that of the aged grew. The proportion of the population aged 0~4 was 45.1% of the total in 1970, dropped to 20.4% in 1995, and is predicted to fall below 20% from now on. Over the same period (1970~1995), the population aged 65 and over rose from 4.2% to 11.9%. In 1960, before industrialization, the proportion of the population aged 0~4 in rural areas was higher than in the cities; as the rural young continuously moved to the cities, it fell below the urban level from 1975, and the gap grew until 1990. In 1995, however, the proportion of the rural population aged 0~4 reached 6.2% and the gap narrowed, which we interpret as a change in the characteristics of rural in-migrants and out-migrants. Considering the age composition of those moving from urban to rural areas in the late 1980s, 51.8% of all migrants were concentrated in the 20~34 age group, and their educational level was higher than that of out-migrants to urban areas. This foreshadows changes in the rural population that will appear as changes in rural society. However, a comparison of the population structure of a purely rural village in Boeun-gun with a suburban village in Paju-gun, formerly centered on agriculture but recently changing rapidly, shows that the recent rejuvenation of in-migrants to rural areas is a phenomenon of suburban rural areas only, not of rural areas in general. The population structures observed in the field survey of these villages show that residents of purely rural villages, untouched by urban influence, are highly aged, while industrialization and urbanization are progressing in suburban villages. Therefore, the recent partial change in the rural population structure and in the characteristics of in-migrants to rural areas is affecting, and being affected by, the population change of areas like suburban rural villages. Although some return migrants move to rural areas to take up farming, their number is too small to show a statistical effect.


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.69-94, 2017
  • Recently, growing demand for big data analysis has been driving the vigorous development of related technologies and tools. The development of IT and the increasing penetration of smart devices are producing large amounts of data, and data analysis technology is rapidly becoming mainstream accordingly. Attempts to acquire insights through data analysis have also been increasing continuously, which means that big data analysis will become more important across industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those requesting the analysis. However, rising interest in big data analysis has stimulated computer programming education and the development of many analysis programs. Accordingly, the entry barriers to big data analysis are gradually falling, data analysis technology is spreading, and big data analysis is increasingly expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data, especially text data, is continually growing. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are utilized in many fields. Text mining is a concept embracing the various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters. It is considered very useful in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire corpus, so the whole corpus must be analyzed at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to many documents, and it creates a scalability problem: processing time grows sharply with the number of objects analyzed. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome it, a divide-and-conquer approach can be applied to topic modeling: divide the large document set into sub-units and derive topics by running topic modeling on each unit. This method enables topic modeling over many documents with limited system resources and improves processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first combining them. Despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire corpus is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology needs to be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics must be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that addresses both problems. First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegated documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirmed that the proposed methodology can provide results similar to topic modeling over the entire corpus, and we also propose a reasonable method for comparing the results of the two approaches.

Analysis of Football Fans' Uniform Consumption: Before and After Son Heung-Min's Transfer to Tottenham Hotspur FC (국내 프로축구 팬들의 유니폼 소비 분석: 손흥민의 토트넘 홋스퍼 FC 이적 전후 비교)

  • Choi, Yeong-Hyeon;Lee, Kyu-Hye
    • Journal of Intelligence and Information Systems, v.26 no.3, pp.91-108, 2020
  • Korea's best-known soccer players have been performing steadily well in international leagues, raising Korean fans' interest in those leagues. Reflecting this growing social phenomenon, this study examines overall consumer perception in the uniform consumption of domestic soccer fans and compares how that perception changed following player transfers. In particular, the paper examines soccer fans' perceptions and purchase factors as expressed on social media, focusing on the periods before and after Heung-Min Son's move to the English Premier League's Tottenham Hotspur FC. To this end, consumer postings were collected from domestic websites and social media with Python 3.7, using "EPL uniform" as the collection keyword, and analyzed with the Ucinet 6, NodeXL 1.0.1, and SPSS 25.0 programs. The results can be summarized as follows. First, the uniform of the club that consistently topped the league attracted attention as a popular uniform, and players' performance and position were identified as key factors in the purchase and search of professional football uniforms. For clubs, the actual ranking and whether the club won the league were important factors, and the club's emblem and the sponsor logo attached to the uniform also interested consumers. In addition, in professional soccer fans' purchase decisions, a uniform's form, marking, authenticity, and sponsor were found to matter more than its price, design, size, and logo. The official online store emerged as the main purchasing channel, followed by gifts from friends or purchase requests to acquaintances traveling to the United Kingdom. Second, classifying the key keyword categories by combining convergence-of-iterated-correlations analysis with the Clauset-Newman-Moore clustering algorithm shows differences in how individual groups are classified, but the groups containing EPL club and player keywords are identified as the key topics related to professional football uniforms. Third, between 2002 and 2006 the central themes around professional football uniforms were the World Cup and the English Premier League, but from 2012 to 2015 the focus shifted to domestic and international players in the English Premier League, and from then on the subject changed to the uniform itself. In this context, the paper confirms that the major issues regarding professional soccer uniforms have changed since Ji-Sung Park's transfer to Manchester United and the strong performances of Sung-Yong Ki, Chung-Yong Lee, and Heung-Min Son in these leagues, and that the uniforms of the clubs the players transferred to draw interest. Fourth, both male and female consumers show increasing interest in the English Premier League, to which Son's club Tottenham belongs; for female consumers in particular, rising interest in Son tends to raise interest in football uniforms. This study adds to research on sports consumption and has value as a consumer study identifying distinctive consumption patterns. It is meaningful in that interpretation accuracy was enhanced by identifying the main topics through a cluster analysis combining convergence-of-iterated-correlations analysis with the Clauset-Newman-Moore clustering algorithm. Based on these results, clubs can maximize profits and maintain good relationships with fans by identifying the key drivers of consumer awareness and purchasing among professional soccer fans and establishing effective marketing strategies.
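The clustering step named above, Clauset-Newman-Moore greedy modularity maximization over a keyword network, is available off the shelf. The sketch below runs it with networkx on a toy keyword co-occurrence graph; the edge list is invented for illustration and is not the study's data:

```python
# Hedged sketch of the study's clustering step: Clauset-Newman-Moore
# greedy modularity maximization on a keyword co-occurrence network.
# Edge weights are illustrative co-occurrence counts, not real data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_weighted_edges_from([
    ("uniform", "Tottenham", 5), ("uniform", "Son", 7),
    ("Son", "Tottenham", 9), ("EPL", "Tottenham", 4),
    ("price", "size", 3), ("price", "marking", 2), ("size", "marking", 2),
])

# each community is a set of keywords forming one topic group
communities = greedy_modularity_communities(G, weight="weight")
for c in communities:
    print(sorted(c))
```

On a real corpus the nodes would be the extracted keywords and the weights their posting-level co-occurrence counts, with each resulting community read as one candidate topic.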

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.141-156, 2013
  • Research on the welfare services of local governments in Korea has tended to focus on isolated issues such as the disabled, childcare, and population aging (Kang, 2004; Jung et al., 2009). Lately, however, local officials have realized that they need more comprehensive welfare services for all residents, not just for the above-mentioned target groups. Still, studies taking the target-group approach remain the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and 40 thousand welfare services provided by 230 local authorities in Korea; its purpose is to improve the efficiency of the social welfare delivery process. Studies of local government expenditure have been on the rise in the decades since local autonomy was reinstated, but they have limitations in data collection. Measurement of a local government's welfare effort (spending) has relied primarily on the expenditures or budget per individual set aside for welfare. This practice of using a monetary value per individual as a proxy for welfare effort rests on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, using the actual amount spent or a percentage figure as the dependent variable has limitations: because welfare spending is greatly influenced by a local government's total budget, such monetary values may inflate or deflate the true welfare effort (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, such as salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011). This paper uses local government welfare service data from the detailed data sets linked to the Social Security Information System. Its purpose is to analyze the factors that affected the self-funded social welfare spending of 230 local authorities in 2012, applying a multiple regression model to the pooled financial data from the system. As a true measure of a local government's welfare effort, our research model uses the local government's welfare budget as a share of its total budget (%), excluding central government subsidies and support used for local welfare services, because central government support does not truly reflect a locality's own welfare effort. The dependent variable is the volume of welfare spending, and the independent variables fall into three categories: socio-demographic characteristics, the local economy, and the fiscal capacity of the local government. Local authorities were categorized into three groups, districts, cities, and suburban areas, and a dummy variable served as the control variable (a local political factor). The analysis shows that the volume of self-funded welfare spending is commonly influenced by the ratio of the welfare budget to the total local budget, the infant population, the financial self-reliance ratio, and the level of unemployment. Interestingly, the influential factors differ by the size of the local government: we found significant effects of fiscal characteristics (the degree of financial independence, the financial self-reliance ratio, and the share of the social welfare budget), of the regional economy (the job openings-to-applicants ratio), and of population characteristics (the share of infants). The results imply that local authorities should adopt differentiated welfare strategies according to their conditions and circumstances. This paper is meaningful in that it identifies significant factors influencing the self-funded welfare spending of local governments in Korea.
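The regression setup described above can be sketched with synthetic data: the welfare budget share regressed on an infant-population rate, a financial self-reliance ratio, an unemployment rate, and a government-type dummy. All numbers and coefficients below are invented to illustrate the model's shape, not the paper's estimates:

```python
# Illustrative sketch of the paper's regression design on synthetic
# data: welfare budget share on socio-demographic and fiscal
# covariates plus a local-government-type dummy. Coefficients are
# planted, so the fit should roughly recover them.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 230  # number of local authorities in the study

infants_rate = rng.uniform(2, 8, n)      # % population that are infants
self_reliance = rng.uniform(10, 90, n)   # financial self-reliance (%)
unemployment = rng.uniform(1, 6, n)      # unemployment rate (%)
is_city = rng.integers(0, 2, n)          # dummy: city vs. district

# synthetic dependent variable with known coefficients plus noise
welfare_share = (
    2.0 + 0.8 * infants_rate - 0.05 * self_reliance
    + 0.6 * unemployment + 1.5 * is_city + rng.normal(0, 0.5, n)
)

X = np.column_stack([infants_rate, self_reliance, unemployment, is_city])
model = LinearRegression().fit(X, welfare_share)
print("coefficients:", model.coef_.round(2))
```

In the actual study the covariates would come from the Social Security Information System's detailed data sets, and significance would be judged from standard errors rather than point estimates alone.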

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.79-104, 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. Deep learning is known to perform particularly well on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning, interest in image captioning technology and its applications is rapidly increasing. Image captioning automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite its high entry barrier, since analysts must be able to process both image and text data, image captioning has established itself as one of the key fields of A.I. research owing to its wide applicability, and much research has been conducted to improve its performance in various respects. Recent work attempts to create advanced captions that not only describe an image accurately but also convey the information it contains more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts rather than that of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer, and the way the image is interpreted and expressed differs with the level of expertise. The public tends to recognize an image from a holistic, general perspective, that is, by identifying the image's constituent objects and their relationships. The domain expert, by contrast, tends to recognize the image by focusing on the specific elements needed to interpret it in light of their expertise. This implies that the meaningful parts of an image differ with the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. In this study, therefore, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, expertise is transplanted through transfer learning with a small amount of expert data. However, simply applying transfer learning to expert data can invoke another problem: simultaneous learning on captions of various characteristics may cause so-called inter-observation interference, which makes it difficult to learn each characteristic point of view purely. When learning on vast amounts of data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'character-independent transfer learning' that performs transfer learning independently for each characteristic. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. In addition, with the advice of an art therapist, about 300 'image / expert caption' pairs were created and used for the expertise transplantation experiments. The experiments confirmed that captions generated by the proposed methodology reflect the perspective of the implanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we thus propose a novel approach to specialized image interpretation: using transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect active research on alleviating the lack of expert data and improving image captioning performance.
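The character-independent idea, one shared pre-trained backbone with a separately fine-tuned head per caption characteristic so that styles do not interfere, can be illustrated in miniature. The sketch below is a toy numpy analogue, not the paper's captioning model: the frozen `encode` function, the `Head` class, and the "general"/"expert" split are all hypothetical stand-ins:

```python
# Toy sketch of character-independent transfer learning: a frozen,
# shared encoder plus one lightweight linear head per caption
# characteristic, each fine-tuned separately on its own small data.
import numpy as np

rng = np.random.default_rng(0)

def encode(image_vec):
    # stand-in for a frozen, pre-trained image encoder
    return np.tanh(image_vec)

class Head:
    """One linear decoder head per caption characteristic."""
    def __init__(self, dim, n_tokens):
        self.W = rng.normal(scale=0.1, size=(dim, n_tokens))

    def fit(self, feats, token_ids, lr=0.1, steps=200):
        # independent fine-tuning: only this head's weights change,
        # so captions of other characteristics cannot interfere
        for _ in range(steps):
            logits = feats @ self.W
            probs = np.exp(logits - logits.max(axis=1, keepdims=True))
            probs /= probs.sum(axis=1, keepdims=True)
            grad = probs
            grad[np.arange(len(token_ids)), token_ids] -= 1  # CE gradient
            self.W -= lr * feats.T @ grad / len(token_ids)

    def predict(self, feats):
        return (feats @ self.W).argmax(axis=1)

# toy "expertise" data: two characteristics trained on separate heads
dim, n_tokens = 8, 4
X = rng.normal(size=(20, dim))
y_general = rng.integers(0, n_tokens, size=20)  # general-style targets
y_expert = rng.integers(0, n_tokens, size=20)   # expert-style targets

feats = encode(X)
heads = {"general": Head(dim, n_tokens), "expert": Head(dim, n_tokens)}
heads["general"].fit(feats, y_general)
heads["expert"].fit(feats, y_expert)
```

In the paper's setting the backbone would be a full captioning network pre-trained on MSCOCO, and each "head" a transfer-learned variant for one caption characteristic; the point of the sketch is only the independence of the per-characteristic updates.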

Physiological studies on the sudden wilting of JAPONICA/INDICA crossed rice varieties in Korea -I. The effects of plant nutritional status on the occurrence of sudden wilting (일(日). 인원연교잡(印遠緣交雜) 수도품종(水稻品種)의 급성위조증상(急性萎凋症狀) 발생(發生)에 관(關)한 영양생리학적(營養生理學的) 연구(硏究) -I. 수도(水稻)의 영양상태(營養狀態)가 급성위조증상(急性萎凋症狀) 발생(發生)에 미치는 영향(影響))

  • Kim, Yoo-Seob
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.21 no.3
    • /
    • pp.316-338
    • /
    • 1988
To identify the physiological phenomena underlying the sudden wilting of japonica/indica crossed varieties, a pot experiment was carried out under heavy N application with various levels of potassium in Japan. The results obtained are as follows. 1. Sudden wilting occurred in both varieties used, Yushin and Milyang 23, with the former showing a higher degree than the latter. 2. Sudden wilting occurred in two types: one at the early ripening stage and the other at the late ripening stage. The former type was found in fields with low potassium supply, while the latter seemed related to varietal wilting tolerance. 3. Investigation of the effective tillering rate and the change in dry weight of each organ at the heading stage suggested that growth status from the young-panicle formation stage to the heading stage was related to sudden-wilting tolerance. 4. Manganese content at the heading stage, the Fe/Mn and Fe·Fe/Mn ratios in the stem at the late ripening stage, and the $K_2O$/N ratio of the stem at the harvesting stage were recognized as specific factors connected with sudden wilting. Mn content in sudden-wilting rice plants was already remarkably increased at the heading stage. Considering root age and the absorption characteristics of Mn, senility of the root before the heading stage was inferred as the cause of the increased Fe/Mn or Fe·Fe/Mn values. 5. The $K_2O$/N ratio of the culm at the harvesting stage was lower in the upper node than in the lower node in relation to sudden wilting, which accords well with the fact that the symptoms of sudden wilting proceeded from the upper leaves to the lower leaves. This phenomenon differs from the usual one, in which the effect of potassium deficiency is more remarkable in the lower node than in the upper node. 6. All varieties under potassium deficiency had a high nitrogen content in the leaves at the heading stage, and the $K_2O$/N ratio of each organ was low; especially, the $K_2O$/N ratio was much lower in the sheath and culm than in the leaves.
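The node-by-node comparison in point 5 boils down to a simple ratio of two tissue concentrations. The sketch below illustrates the arithmetic only; the percentage values are hypothetical and are not measurements from the paper.

```python
# K2O/N ratio from tissue concentrations (both as % of dry weight).
# The numbers below are hypothetical, chosen only to illustrate the
# reversed pattern reported for sudden-wilting plants: the ratio is
# LOWER in the upper culm node than in the lower one.

def k2o_n_ratio(k2o_pct, n_pct):
    return k2o_pct / n_pct

upper_node = k2o_n_ratio(k2o_pct=1.2, n_pct=1.0)  # upper culm node
lower_node = k2o_n_ratio(k2o_pct=2.4, n_pct=1.0)  # lower culm node

# Usual K deficiency hits lower nodes hardest; here the ordering is reversed,
# matching the top-down progression of the wilting symptoms.
assert upper_node < lower_node
```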

  • PDF

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. Amid this flood of information, efforts are being made to better reflect user intent in search results, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field expected to benefit from text-data analysis because it constantly generates new information, and the fresher the information, the more valuable it is. Automatic knowledge extraction can therefore be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, which makes it difficult to extract good-quality triples. Second, manually producing labeled text data becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of searching for stock-related information, this study attempts to extract knowledge entities using a neural tensor network and evaluates their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it offers a practical, simple, and applicable automatic knowledge-extraction method. Second, a simple problem definition makes performance evaluation possible. Finally, the expressiveness of the knowledge is increased by generating input data at the sentence level without complex morphological analysis. The results of the empirical analysis and an objective performance-evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, analysts' reports on the 30 individual stocks most frequently covered from May 30, 2017 to May 21, 2018 are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using a named-entity recognition tool (KKMA). For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the test set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the test set.
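The per-stock scoring scheme described above can be sketched with a standard neural-tensor-network score function. Everything here is an assumption for illustration: the abstract does not give the paper's exact architecture, so the shapes, the per-stock representation vector `s`, the number of tensor slices, and the random initialization are all hypothetical stand-ins (the NTN form itself follows the usual bilinear-plus-linear-plus-tanh pattern).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 stocks, 5 entities (one-hot), K tensor slices.
N_STOCKS, N_ENTITIES, K = 3, 5, 2

class StockNTN:
    """One NTN-style score function per stock:
    score(e) = u^T tanh(e^T W[i] s + V [e; s] + b),
    where e is the one-hot entity vector and s is a per-stock vector."""
    def __init__(self, dim, k):
        self.W = rng.normal(size=(k, dim, dim)) * 0.1  # bilinear tensor slices
        self.V = rng.normal(size=(k, 2 * dim)) * 0.1   # linear term
        self.b = np.zeros(k)                           # bias per slice
        self.u = rng.normal(size=k) * 0.1              # output weights
        self.s = rng.normal(size=dim) * 0.1            # stock representation

    def score(self, e):
        bilinear = np.array([e @ self.W[i] @ self.s for i in range(len(self.W))])
        linear = self.V @ np.concatenate([e, self.s])
        return float(self.u @ np.tanh(bilinear + linear + self.b))

models = [StockNTN(N_ENTITIES, K) for _ in range(N_STOCKS)]

def predict_stock(entity_onehot):
    # Score the entity with every stock's function; highest score wins.
    scores = [m.score(entity_onehot) for m in models]
    return int(np.argmax(scores))
```

In the paper's setting the parameters would of course be trained on the report-derived entity data rather than left at random initialization; the sketch only shows the prediction rule (argmax over per-stock score functions) that the hit-ratio evaluation relies on.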
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the test set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on conducting the research. Looking at per-stock prediction performance, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named-entity recognition tool and applied to the neural tensor network without learning a corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.