• Title/Summary/Keyword: 발전 시스템 (development system)

A Study on Improvement of Laws regarding Welfare for the Aged (노인복지 관련법제의 발전방향)

  • Park, Ji-Soon
    • Journal of Legislation Research / no.41 / pp.87-123 / 2011
  • Korea is expected to become an 'aged society', with more than 14 percent of its population aged 65 or older, by 2018. This rapid aging, together with a falling birthrate, is giving rise to various social problems within a short period of time. In this context, the role and function of laws on welfare for the aged must be particularly emphasized. The Senior Citizens Welfare Act is also of great importance, as it provides social welfare services on the basis of its functional connection with social insurance and public assistance. First, this paper looks into the history of laws related to welfare for the elderly, such as the Senior Welfare Act, the Act on Long-term Care Insurance for Senior Citizens and the Basic Old Age Pension Act, as well as the findings of earlier studies. Second, it breaks these laws down by their main components in order to examine their details and the questions raised about them, and to seek ways to achieve improvement, with an emphasis on health care, old-age income security, housing welfare (assisted living facilities) and job security for the aged. The Senior Welfare Act provides the substance of social welfare services for the elderly. Income security, health and medical care, welfare measures through long-term care and assisted living facilities, and social participation through work are its key elements, and all of them should be closely associated so that citizens receive sufficient public support in their old age. For this purpose, the Senior Welfare Act forms a normative network with laws such as the Act on Long-term Care Insurance for Senior Citizens and the Basic Old Age Pension Act. Current laws on welfare for the aged, including the Senior Welfare Act, are not sufficiently responsive to the aged society of the 21st century. Income security combined with decent social participation, health and medical care closely connected with the long-term care system, efficient expense sharing between central and local governments, and enhancement of the effectiveness of welfare measures can all be considered means of improving the current welfare system so that the elderly can enjoy their old age with dignity and respect.

Current Status of Phenomics and its Application for Crop Improvement: Imaging Systems for High-throughput Screening (작물육종 효율 극대화를 위한 피노믹스(phenomics) 연구동향: 화상기술을 이용한 식물 표현형 분석을 중심으로)

  • Lee, Seong-Kon; Kwon, Tack-Ryoun; Suh, Eun-Jung; Bae, Shin-Chul
    • Korean Journal of Breeding Science / v.43 no.4 / pp.233-240 / 2011
  • Food security has become a major global issue due to climate change and a growing world population expected to reach 9 billion by 2050. While biodiversity is receiving greater attention, breeders face a shortage of the diverse genetic materials needed to develop new varieties that can tackle the food-shortage challenge. Although biotechnology is still under debate over its potential risks to humans and the environment, it is considered an alternative tool for addressing the food supply issue because of its potential to create a large number of variations in genetic resources. A newer discipline, phenomics, is being developed to improve the efficiency of crop improvement. Phenomics is concerned with the measurement of phenomes, the physical, morphological, physiological and/or biochemical traits of organisms as they change in response to genetic mutation and environmental influences, and it can provide a better understanding of phenotypes at the whole-plant level. Over the last few decades, high-throughput screening (HTS) systems have been developed to measure phenomes rapidly and quantitatively. Imaging technology, such as thermal and chlorophyll fluorescence imaging systems, is an area of HTS that has been used in agriculture (see the illustrative sketch below). In this article, we review the current status of high-throughput screening systems in phenomics and their application to crop improvement.
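
To make the kind of image-based trait extraction mentioned above concrete, the following minimal Python sketch (not taken from the paper) segments plant pixels with the standard excess-green (ExG) vegetation index and reports two simple phenotypes. The synthetic image and the threshold value are assumptions for illustration only; a real HTS pipeline would work on calibrated camera images.

```python
import numpy as np

def projected_area_and_greenness(rgb, exg_threshold=0.1):
    """Two simple image-based phenotypes from an RGB image:
    projected plant area (fraction of pixels classified as plant) and
    mean excess-green index (ExG = 2g - r - b) over those pixels."""
    img = rgb.astype(np.float64) / 255.0
    total = img.sum(axis=2) + 1e-9                     # avoid division by zero
    r, g, b = (img[..., c] / total for c in range(3))  # chromatic coordinates
    exg = 2.0 * g - r - b                              # excess-green vegetation index
    plant_mask = exg > exg_threshold                   # crude plant/background segmentation
    area_fraction = float(plant_mask.mean())
    mean_exg = float(exg[plant_mask].mean()) if plant_mask.any() else 0.0
    return area_fraction, mean_exg

# Synthetic example: a green "canopy" patch on a soil-coloured background.
image = np.zeros((100, 100, 3), dtype=np.uint8)
image[:, :] = (120, 90, 60)          # soil-coloured background
image[30:70, 30:70] = (60, 160, 50)  # green plant patch
area, greenness = projected_area_and_greenness(image)
print(f"projected area fraction: {area:.3f}, mean ExG over plant pixels: {greenness:.3f}")
```

In a real screening system the same per-image measurements would simply be repeated over thousands of plants per day and logged against genotype identifiers.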

A Study on the Establishment of Buddhist Temple Records Management System (사찰기록 관리 체계화 방안 연구)

  • Park, Sung-Su
    • The Korean Journal of Archival Studies / no.26 / pp.33-62 / 2010
  • Buddhism was introduced to the Korean Peninsula 1,600 years ago, and there are now over 10 million believers in Korea. The systematic management of temple records has spiritual and cultural value in a rapidly changing modern society. This study proposes a better management system for Buddhist temple records for the Jogye Order of Korean Buddhism; such a system not only supports transparency in religious affairs but also presents a way toward more effective administration. In this study, I examined the national legislation on the preservation of Buddhist temples and the Jogye Order's local rules on religious affairs, and through this analysis identified the problems of current Buddhist records management. To address these problems in the long term, I propose the establishment of temple archives maintained by parish head offices. The study presents a retention schedule for this systematic establishment, together with charts for standard Buddhist records management that cover the whole process, from the production of records to their disposal. It also presents a general plan to prevent the arbitrary destruction of Buddhist temple documents and to impose a duty of preservation; I intend for this plan to be subject to discussion and tailored to the particular needs of temple records. In creating these charts for standard Buddhist temple records management, I analyzed the operating practices of foreign religious institutions and examined their retention periods, and I also examined the retention periods and classification system of the Jogye Order. I then presented ways for this management system to operate through computer programs. There is a need to establish a large-scale management system for arranging Buddhist records, to enforce the duty of preserving records through the proposed management system, and to manage even local parish temple records through the proposed archive system. This study provides basic research for the preservation of traditional records and their transmission to future generations, and identifies the historical, cultural and social value these records contain. A systematically established Buddhist temple records management system will pave the way for these tangible and intangible records, handed down through history, to be passed on to future generations as cultural heritage.

A Study on the Strategy of IoT Industry Development in the 4th Industrial Revolution: Focusing on the direction of business model innovation (4차 산업혁명 시대의 사물인터넷 산업 발전전략에 관한 연구: 기업측면의 비즈니스 모델혁신 방향을 중심으로)

  • Joeng, Min Eui; Yu, Song-Jin
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.57-75 / 2019
  • In this paper, we conducted a study focusing on the direction of business model innovation in the Internet of Things (IoT) industry, which is the most actively industrialized among the core technologies of the 4th Industrial Revolution. Policy, economic, social, and technological issues were derived using PEST analysis for the global trend analysis. The paper also presents the outlooks for the IoT industry published by ICT-related global research institutes such as Gartner and International Data Corporation; these institutes predicted that competition in network technologies will become an issue for the Industrial Internet of Things (IIoT) and the Internet of Things based on infrastructure and platforms. As a result of the PEST analysis, developed countries were found to be pushing policies to respond to the Fourth Industrial Revolution through government-led cooperation with the private sector (businesses and research institutes); South Korea was likewise in the process of expanding related R&D budgets and establishing related policies. On the economic side, the growth rate of the related industries (based on aggregate market value) and the performance of individual companies were reviewed. The growth of industries related to the Fourth Industrial Revolution in advanced countries was found to be faster than that of other industries, while in Korea the growth of the 'technical hardware and equipment' and 'communication service' sectors was relatively low among the related industries. On the social side, enormous ripple effects are expected across society, largely due to changes in technology and industrial structure, changes in employment structure, and changes in job volume. On the technical side, changes were taking place in each industry, most visibly in the health and medical and manufacturing sectors, which were changing rapidly as they merged with the technologies of the Fourth Industrial Revolution. In this paper, various management methodologies for innovating existing business models were reviewed in order to cope with the rapidly changing industrial environment brought about by the Fourth Industrial Revolution. Four criteria were then established to select a management model suited to the new business environment: 'Applicability', 'Agility', 'Diversity' and 'Connectivity'. An expert survey analyzed with AHP showed that the Business Model Canvas is best suited as a business model innovation methodology, with very high importance scores of 42.5 percent for 'Applicability', 48.1 percent for 'Agility', 47.6 percent for 'Diversity' and 42.9 percent for 'Connectivity' (an illustrative AHP calculation is sketched below). It was therefore selected as a model that can be applied flexibly according to the industrial ecology and paradigm shift. The Business Model Canvas is a relatively recent management strategy that identifies the value of a business model through a nine-block approach covering the four key areas of a business: customer, offer, infrastructure, and financial viability. In the paper, directions for expanding and applying the nine blocks are presented from the perspective of an IoT (ICT) company. In conclusion, the discussion turns to which Business Model Canvas configurations should be applied in the ICT convergence industry. Based on the nine blocks, various adaptations are possible to suit the characteristics of the target company, such as integrating or removing blocks (for example, down to five or seven blocks) or subdividing blocks as appropriate. Future research needs to develop customized business innovation methodologies for Internet of Things companies, or for those providing Internet-based services. In addition, since the Business Model Canvas was derived from expert opinion as a useful tool for innovation in this study, further work on its usability, such as presenting detailed implementation strategies, various model application cases and application models for actual companies, is needed to expand and demonstrate the research.
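
As background for the AHP step described above, the following Python sketch shows the standard eigenvector-based computation of criterion weights and the consistency ratio. The pairwise comparison matrix below is purely hypothetical and does not reproduce the paper's expert judgments or its reported percentages.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for the four selection criteria
# (Applicability, Agility, Diversity, Connectivity); values are illustrative only.
criteria = ["Applicability", "Agility", "Diversity", "Connectivity"]
A = np.array([
    [1.0, 1.0, 2.0, 2.0],
    [1.0, 1.0, 2.0, 3.0],
    [0.5, 0.5, 1.0, 1.0],
    [0.5, 1/3, 1.0, 1.0],
])

# Priority weights from the principal eigenvector (standard AHP procedure).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = 0.90                      # Saaty's random index for n = 4
cr = ci / ri

for name, w in zip(criteria, weights):
    print(f"{name:13s} weight = {w:.3f}")
print(f"consistency ratio = {cr:.3f} (acceptable if < 0.10)")
```

In an AHP-based methodology comparison like the one described, the same calculation is repeated for each candidate methodology under each criterion and the weighted scores are aggregated to pick the best-suited model.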

Successful Management and Operating System of a UNESCO World Heritage Site - A Case Study on the Wadi Al-Hitan of Egypt - (유네스코 세계자연유산의 성공적인 관리와 운영체계 - 『이집트 Wadi Al-Hitan』의 사례 -)

  • Lim, Jong Deock
    • Korean Journal of Heritage: History & Science / v.44 no.1 / pp.106-121 / 2011
  • The number of World Natural Heritage Sites is smaller than that of World Cultural Heritage Sites. As of 2010, the total number of natural sites was 180, less than one third of the number of cultural sites. The smaller number of natural sites can be attributed to the evaluation criteria for OUV (outstanding universal value). Only 9 fossil-related sites have been designated among the 180 natural sites. This study compares the OUVs, including the academic value and characteristics, of these 9 World Heritage Sites to provide data and a reference for the KCDC (Korean Cretaceous Dinosaur Coast) application for World Natural Heritage status. The study was carried out to obtain information and data on the Wadi Al-Hitan of Egypt, which was designated as a World Natural Heritage Site; it includes field investigation of whale fossils, interviews with site paleontologists and staff, and inspections of facilities. Three factors can be credited for its successful management and operating system. First, there is a system for comprehensive research and a monitoring plan. Second, experts have been recruited and hired, and professional training for staff members has been carried out properly. Finally, the Wadi Al-Hitan has developed local resources with specialized techniques for conservation and construction design that match well with the whale fossils and the environment at the site. The Wadi Al-Hitan put a master plan into practice and achieved the goals of its action plans. To have a future World Natural Heritage Site designated in Korea, it is important to be recognized by international experts, including IUCN specialists, as the best in one's field in terms of OUV. Full-time, regular-status research employees are necessary from the preparation stage of a UNESCO World Heritage bid. Local governments and related organizations must do their best to carry out monitoring plans and to enhance academic value after a UNESCO World Heritage designation. As we experienced during the designation of Jeju Volcanic Island and Lava Tubes as the first Korean World Natural Heritage Site, participation by various scholars and specialists needs to be in harmony with active endeavors by local governments and NGOs.

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong; Seo, Bong-Goon; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.221-238 / 2019
  • Recently, companies and public institutions have been actively introducing chatbot services in the field of customer counseling and response. The introduction of chatbot services not only brings labor cost savings to companies and organizations but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these chatbot services. Current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning, and the advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and emotions. For this reason, the value of chatbots continues to rise. However, technology-oriented chatbots can be inconsistent with what users inherently want, so chatbot development needs to be addressed in the area of user experience, not just in the area of technology. The Fourth Industrial Revolution has highlighted the importance of user experience as well as the advancement of artificial intelligence, big data, cloud, and IoT technologies. The development of IT technology and the importance of user experience have provided people with a variety of environments and changed lifestyles, which means that experiences in interactions with people, services (products), and the environment have become very important. Therefore, it is time to develop user needs-based services (products) that can provide new experiences and value to people. This study proposes a chatbot development process based on user needs by applying the design thinking approach, a representative methodology in the field of user experience, to chatbot development. The proposed process consists of four steps. The first step, 'Setting up the knowledge domain', establishes the chatbot's area of expertise. Accumulating the information corresponding to the configured domain and deriving insights is the second step, 'Knowledge accumulation and insight identification'. The third step is 'Opportunity development and prototyping', where full-scale development begins. Finally, the 'User feedback' step collects feedback from users on the developed prototype. This creates a 'user needs-based service (product)' that meets the objectives of the process. Beginning with fact gathering through user observation, abstraction is performed to derive insights and explore opportunities; then, through concretization, the desired information is structured and functions that fit the user's mental model are provided, so that a chatbot meeting the user's needs can be developed. In this study, we present an actual construction example for the domestic cosmetics market to confirm the effectiveness of the proposed process. The domestic cosmetics market was chosen as the case because user-experience characteristics are strong there, so responses from users can be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology into chatbot development, and it differs from existing chatbot development research in that it focuses on user experience rather than technology. It also has practical implications in that it proposes realistic methods that companies or institutions can apply immediately. In particular, the process proposed in this study can be accessed and utilized by anyone, since 'user needs-based chatbots' can be developed even by non-experts. This study suggests that further research is needed because the case study covered only one field. In addition to the cosmetics market, additional research should be conducted in various fields in which user experience matters, such as the smartphone and automotive markets. Through this, the proposal can mature into a general process for developing chatbots centered on user experience rather than technology.

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has been accelerating with the Fourth Industrial Revolution, and AI-related research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of knowledge bases has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task and still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is generated by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model based on the DBpedia ontology schema, trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into an RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples (a minimal sketch of this final step follows below). To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
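
The last step of the pipeline described above turns a BIO-tagged value span into an RDF-style triple. The following minimal Python sketch illustrates only that step; the tag names (B-VAL/I-VAL), the example sentence, the property, and the URIs are hypothetical and are not the paper's actual tag set, schema, or data.

```python
# Minimal sketch: collect the tokens a sequence tagger (e.g. Bi-LSTM-CRF) marked
# as the value of one ontology property, and emit an N-Triples-style triple.

def bio_span_to_triple(tokens, tags, subject_uri, predicate_uri):
    """Join the tokens tagged B-VAL/I-VAL into a literal and build one (s, p, o) triple."""
    value_tokens = [tok for tok, tag in zip(tokens, tags) if tag in ("B-VAL", "I-VAL")]
    if not value_tokens:
        return None                      # no value found in this sentence
    literal = " ".join(value_tokens)
    return (subject_uri, predicate_uri, f'"{literal}"')

# Hypothetical sentence from a page classified into a "Person"-like ontology class,
# with the birth-place value marked by the tagger.
tokens = ["She", "was", "born", "in", "Busan", "."]
tags   = ["O",   "O",   "O",    "O",  "B-VAL", "O"]

triple = bio_span_to_triple(
    tokens, tags,
    subject_uri="<http://example.org/resource/Some_Person>",
    predicate_uri="<http://example.org/ontology/birthPlace>",
)
print(" ".join(triple) + " .")   # e.g. <...Some_Person> <...birthPlace> "Busan" .
```

In the full methodology this step is preceded by document classification (choosing the ontology class) and sentence classification (choosing which sentences carry which attributes), so that the predicate attached to each extracted span comes from the mapped DBpedia ontology schema.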

Historical Observation and the Characteristics of the Records and Archives Management in Korea (한국 기록관리의 사적 고찰과 그 특징)

  • Lee, Young-Hak
    • The Korean Journal of Archival Studies / no.34 / pp.221-250 / 2012
  • This paper introduces the characteristics of records and archives management in Korea from the Joseon dynasty to the present, explains the historical background of records and archives management in the Joseon dynasty, and describes the establishment of a modern records management system through the adoption of the records management and public administration practices of the USA after liberation in 1945. The Joseon bureaucrats established systematic methodologies for managing and arranging records; as a bureaucratic regime, the Joseon dynasty managed its records systematically. It is also noticeable that the famous Joseonwangjosilrok (Annals of the Joseon Dynasty) grew out of the power struggles for control of national affairs between the king and the nobility during the establishment of the dynasty. Another noticeable feature of the records tradition in the Joseon dynasty was that the nobility recorded their experience and allowed future generations to use and refer to those experiences and examples when performing similar business. The records of the Joseon period are historical records that documented contemporary incidents, and their compilers expected future historians to evaluate the incidents they recorded. In 1894, the reform policies of the Gaboh government pushed society toward modernity. These policies prescribed the archive management process through 'Regulation (命令頒布式)' and entirely revised the form of official documents: the era name was changed from the Chinese style to a uniquely Korean one, pure Chinese text was replaced by Korean or mixed Korean-Chinese, and printed forms bearing the name of each office were used instead of blank sheets of paper. Korea was liberated from Japanese imperialism in 1945, and the government of the Republic of Korea was established in 1948. In the 1950s, the Republic of Korea used the records management system of the Government-General of Joseon without alteration. In the late 1950s, it constructed a new records management system by adopting the records management and public administration practices of the USA. However, understanding of records management was limited, so proper records and archives management was not achieved; consequently, many important records, such as presidential archives, were abandoned or destroyed. The period that made the biggest difference to the national records management system began in 1999, when the public records management act was enacted, and especially during President Roh's five-year tenure, known as the Participation Government (2003-2008). The first distinctive characteristic of the Participation Government's records management is that it actively implemented governance; another remarkable feature is the appointment of records management specialists at public institutions. The Participation Government also completely revised the records management legislation, which marked the beginning of the development of records management in the Republic of Korea.

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification, with one label from two classes; multi-class classification, with one label from several classes; and multi-label classification, with multiple labels from several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because of its characteristic of having multiple labels. In addition, since the number of labels to be predicted grows as the number of labels and classes increases, performance improvement becomes difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture non-linear relationships between labels and therefore cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding; label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space, which is related to the vanishing gradient problem that occurs during backpropagation. To address this problem, the skip connection was devised: by adding a layer's input to its output, gradients are preserved during backpropagation, and efficient learning is possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well (a minimal architectural sketch is given below). The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using this, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate multi-label classification by restoring the predicted keyword vector back to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance according to domain characteristics and the number of dimensions of the latent label space.
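
The following PyTorch sketch illustrates the kind of architecture described above: an autoencoder over a multi-hot label vector with a residual (skip-connected) block in both the encoder and the decoder. The layer sizes, latent dimensionality, and block layout are assumptions for illustration and are not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully connected block whose input is added to its output (skip connection)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.net(x))   # skip connection: add input to output

class SkipLabelAutoencoder(nn.Module):
    """Compresses a multi-hot label vector into a low-dimensional latent label space
    and reconstructs it, with a residual block in both the encoder and the decoder."""
    def __init__(self, n_labels, hidden=256, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_labels, hidden), nn.ReLU(),
            ResidualBlock(hidden),
            nn.Linear(hidden, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            ResidualBlock(hidden),
            nn.Linear(hidden, n_labels),      # logits over the original label space
        )

    def forward(self, y):
        z = self.encoder(y)
        return self.decoder(z), z

# Toy usage: a 1,000-dimensional keyword label space and a batch of 8 sparse multi-hot vectors.
model = SkipLabelAutoencoder(n_labels=1000)
y = (torch.rand(8, 1000) < 0.01).float()
logits, latent = model(y)
loss = nn.BCEWithLogitsLoss()(logits, y)          # reconstruction objective
loss.backward()
print(latent.shape, float(loss))
```

In the overall scheme, a separate text model would then be trained to predict the latent vector `z` from a paper abstract, and the frozen decoder would map that prediction back to the original keyword label space.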

Recast of the EU patent law system and its Lessons (유럽연합 특허시스템의 대대적 변혁과 그 교훈)

  • Kim, Yong-Jin
    • Journal of Legislation Research / no.54 / pp.303-343 / 2018
  • In 2013, a new era for the EU patent law system was launched. The creation of the EU patent with unitary effect and the establishment of the Unified Patent Court set up a new legal framework for substantive patent protection and patent litigation in Europe. This year the EU Patent Package is set to become a reality; it comprises a regulation on the unitary patent, a regulation on the translation regime, and an international Agreement on the Unified Patent Court. In contrast to the classical European patent, the post-grant life of the unitary patent will be governed by the newly created Unified Patent Court, and it will have unitary effect. In this article, I highlight the effect of the unitary patent and the jurisdiction of the Unified Patent Court over unitary patents (and 'traditional' patents granted under the EPC that are not opted out) for actions relating to patent infringement, revocation of a European patent, and licences of right. The article explores, on the one hand, the relation between national patents, the classical European patent, and the EU patent with unitary effect, and on the other hand, the relation of the Unified Patent Court to the Brussels I bis Regulation. Particular attention is paid to the institutional changes created by the unitary patent package and the new supplementary forum that enables the UPC to hear disputes involving defendants from third states that relate to an infringement of a European patent and give rise to damage inside as well as outside the Union. Finally, from the perspective of Northeast Asia, this essay examines the lessons to be drawn from the experience of the EU patent package.