• Title/Summary/Keyword: 마크

Search Results: 3,082, Processing Time: 0.03 seconds

Is Religion Possible in the Age of Artificial Intelligence? - From the View of Kantian and Blochian Philosophy of Religion - (인공지능시대에도 종교는 가능한가? - 칸트와 블로흐의 종교철학적 관점에서 -)

  • Kim, Jin
    • Journal of Korean Philosophical Society
    • /
    • v.147
    • /
    • pp.117-146
    • /
    • 2018
  • This paper discusses whether religion is possible even in the age of artificial intelligence, and whether humans alone can be subjects of religious faith or whether ultra-intelligent machines with human-like minds can also be subjects of faith. For ultra-intelligent machines to be subjects of faith on the same terms as humans, they must be able to possess distinctively human characteristics such as emotion, will, and self-consciousness. With the advent of ultra-intelligent machines with the same level of cognitive and emotional ability as human beings, religious action by artificial intelligence will be inevitable. Ultra-intelligent machines after the 'singularity' may go beyond being subjects of religious belief and reign as gods who rule humans, nature, and the world. This is the common view of Moravec, Kurzweil, and Harari. Leonhard likewise reminds us that technological advances should accustom us to the fact that we are now 'gods'. Yet we fear we may face dystopia despite the general affluence of a 'Star Trek' economy. For this reason, even if a machine claims to have learned religious truth, one cannot help but wonder whether this is so. Kant and Bloch are thinkers who critically reflected on our religious ideals and highest concepts from different world-view premises. Kant's concept of God as an 'idea of pure reason' and a 'postulate of practical reason' can seem like a 'god of the gaps', as Jesse Bering observed earlier. Kant recognized the need for religious faith only on the strict basis of moral necessity: subjects of religious faith should always strive to do the moral good, but because such efforts by themselves are not enough to reach perfection, he postulated the immortality of the soul. But if an ultra-intelligent machine that emerges beyond the singularity is given a new status in an intelligence explosion, it may reach morality by blocking evil tendencies through the infinite evolution of superintelligence. It would then no longer need Kant's postulates of continuous progress toward greater goodness, of divine grace, and of the infinite expansion of the kingdom of God on earth. Artificial intelligence robots would not need religious practice in Kant's sense, and religion would therefore have to be abolished. Ernst Bloch transforms Kant's postulates into a Persian dualism. In Bloch's terms, therefore, even if the ultra-intelligent machine is a divine being, one must critically ask whether it is a wicked god or a good one. Artificial intelligence experts warn that the ultra-intelligent machine, like Pandora's gift, may bring disaster to mankind. An ultra-intelligent machine that in Kant's matrix is the completion of morality, God itself, may in Bloch's matrix turn out to be a bad god. Therefore, despite the myth of the singularity, we still cannot tell whether the ultra-intelligent machine, as God, will lead us to the completion of our religious ideals, or, as a bad god, to the collapse of mankind through the complete denial of our existence.

Greenhouse Gas Mitigation Effect Analysis by Cool Biz and Warm Biz (쿨맵시 및 온맵시 복장 착용에 의한 온실가스 감축 효과 분석)

  • Yeo, So-Young;Ryu, Ji-Yeon;Lee, Sue-Been;Kim, Dai-Gon;Hong, Yoo-Deog;Seong, Mi-Ae;Lee, Kyoung-Mi
    • Journal of Climate Change Research
    • /
    • v.2 no.2
    • /
    • pp.93-106
    • /
    • 2011
  • The Republic of Korea officially announced its mid-term target of reducing GHG emissions by about 30% below business-as-usual (BAU) levels by 2020 at the 15th Conference of the Parties to the UNFCCC (COP 15), held in Copenhagen, Denmark in 2009. To achieve this goal, not only industry but the whole nation must understand the seriousness of climate change and take part in GHG reduction. However, such participation in green living can inconvenience daily life, so it must be supported by reliable information. This study presents the scientific potential for GHG reduction offered by guidelines on low-carbon, green living that help form lifestyles suited to coping with climate change. It also quantifies the GHG reduction attainable by lowering demand for air conditioning and heating through the Cool Biz and Warm Biz dress campaigns. In Korea, the campaign has become known as 'CoolMaebsi', promoted by the Ministry of Environment of Korea. 'CoolMaebsi' is a compound of 'cool', meaning refreshed, and 'maebsi', a Korean word meaning attire. Although the campaign is an effective and significant way to reduce GHG emissions, no quantitative analysis of it had previously been conducted. This study therefore calculated the reduction in energy consumption and potential GHG emissions by measuring variations in skin temperature. The results show that wearing Warm Biz and Cool Biz clothing reduces both energy consumption and GHG emissions. To achieve a low-carbon society, it is necessary to improve energy-saving systems and introduce policies that guide lifestyle change.
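The quantification described above reduces to a simple conversion from saved energy to avoided emissions. A minimal sketch, with an assumed grid emission factor and illustrative campaign figures (none of these numbers are from the paper):

```python
# Hedged sketch: converting Cool Biz / Warm Biz energy savings into avoided
# GHG emissions. All numbers below are illustrative assumptions.

ELECTRICITY_EMISSION_FACTOR = 0.459  # kgCO2e per kWh (assumed grid factor)

def ghg_reduction(saved_kwh_per_person: float, participants: int) -> float:
    """Avoided emissions in tonnes CO2e for a campaign season."""
    kg = saved_kwh_per_person * participants * ELECTRICITY_EMISSION_FACTOR
    return kg / 1000.0  # kg -> tonnes

# e.g. 20 kWh saved per person over one summer, one million participants
print(f"{ghg_reduction(20.0, 1_000_000):,.0f} tCO2e avoided")
```

The actual study works backward from skin-temperature measurements to cooling demand before this step; only the final unit conversion is shown here.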

KB-BERT: Training and Application of Korean Pre-trained Language Model in Financial Domain (KB-BERT: 금융 특화 한국어 사전학습 언어모델과 그 응용)

  • Kim, Donggyu;Lee, Dongwook;Park, Jangwon;Oh, Sungwoo;Kwon, Sungjun;Lee, Inyong;Choi, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.191-206
    • /
    • 2022
  • Recently, utilizing a pre-trained language model (PLM) has become the de facto approach for achieving state-of-the-art performance on various natural language tasks (called downstream tasks) such as sentiment analysis and question answering. However, like any other machine learning method, a PLM tends to depend on the data distribution seen during the training phase and shows worse performance on unseen (out-of-distribution) domains. For this reason, there have been many efforts to develop domain-specific PLMs for fields such as the medical and legal industries. In this paper, we discuss the training of a finance-domain PLM for the Korean language and its applications. Our finance-domain PLM, KB-BERT, is trained on a carefully curated financial corpus that includes domain-specific documents such as financial reports. We provide extensive performance evaluation results on three natural language tasks: topic classification, sentiment analysis, and question answering. Compared to state-of-the-art Korean PLMs such as KoELECTRA and KLUE-RoBERTa, KB-BERT shows comparable performance on general datasets based on common corpora like Wikipedia and news articles. Moreover, KB-BERT outperforms the compared models on finance-domain datasets that require finance-specific knowledge to solve the given problems.
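Comparing KB-BERT against KoELECTRA and KLUE-RoBERTa on tasks like topic classification presumes a common evaluation metric. A minimal stdlib sketch of macro-averaged F1, a usual choice for such classification tasks (whether the paper reports this exact metric is an assumption):

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro-averaged F1 over all classes seen in the data."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but was t
            fn[t] += 1  # missed the true class t
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Macro-averaging weights each topic equally, which matters when finance-specific classes are rare in a general corpus.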

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.3
    • /
    • pp.127-143
    • /
    • 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of running numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies for High Performance Computing (HPC) such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access. These features facilitate ocean modeling experiments on commercial cloud systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting an appropriate system, since this helps minimize execution time and the amount of resources used. Cache memory has a large effect on the processing structure of an ocean numerical model, which reads and writes data in multidimensional array structures, and network speed is important because of the communication patterns that move large amounts of data. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for migrating other ocean models to the cloud. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses large amounts of memory, and that memory latency is also important. Increasing the number of cores to reduce running time is more effective with large grid sizes than with small ones. Our results will serve as a reference for constructing the best cloud computing system to minimize the time and cost of numerical ocean modeling.
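The memory-bandwidth sensitivity measured with STREAM can be illustrated with a toy version of its 'triad' kernel. This pure-Python sketch only shows the accounting (bytes moved divided by elapsed time); the figures it prints are not comparable to real STREAM results, which come from compiled C code:

```python
import time

def stream_triad(n: int = 2_000_000, scalar: float = 3.0) -> float:
    """Toy STREAM 'triad' kernel a[i] = b[i] + scalar * c[i].
    Returns effective bandwidth in MB/s (illustrative only)."""
    b = [1.0] * n
    c = [2.0] * n
    start = time.perf_counter()
    a = [bi + scalar * ci for bi, ci in zip(b, c)]
    elapsed = time.perf_counter() - start
    # Three streams of 8-byte floats: read b, read c, write a.
    bytes_moved = 3 * n * 8
    return bytes_moved / elapsed / 1e6

print(f"triad bandwidth ~ {stream_triad():.0f} MB/s (Python overhead dominates)")
```

On cloud instances, comparing such a bandwidth figure across instance types (alongside HPL's compute-bound result) indicates whether a memory-bound model like ROMS will scale.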

Comparative Study on the Carbon Stock Changes Measurement Methodologies of Perennial Woody Crops-focusing on Overseas Cases (다년생 목본작물의 탄소축적 변화량 산정방법론 비교 연구-해외사례를 중심으로)

  • Hae-In Lee;Yong-Ju Lee;Kyeong-Hak Lee;Chang-Bae Lee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.258-266
    • /
    • 2023
  • This study analyzed methodologies for estimating the carbon stocks of perennial woody crops, together with research cases from overseas. We found that Australia, Bulgaria, Canada, and Japan use the stock-difference method, while Austria, Denmark, and Germany estimate carbon stock changes with the gain-loss method. Some countries have moved beyond developing country-specific factors (Tier 2) to estimating carbon stock changes from image data (Tier 3). In South Korea, such third-stage convergence studies have been conducted in the forestry field, but equivalent research in agriculture is only beginning. Based on these results, we suggest four directions for future research: 1) securing country-specific factors for agricultural emissions and removals by developing allometric equations and carbon conversion factors for perennial woody crops, to improve the completeness of emissions and removals statistics; 2) conducting policy studies to refine the calculation of cultivation area using fruit-tree biomass-based maturity; 3) developing more advanced estimation techniques for perennial woody crops using allometric equations and remote sensing based on the agricultural and forestry satellite scheduled for launch in 2025, and establishing a matrix and monitoring system for perennial woody crop cultivation areas; and 4) estimating soil carbon stock changes by sub-land classification, rather than treating all agricultural areas as one, in order to implement a dynamic carbon cycle model. This study thus provides detailed guidelines and advanced methods for calculating carbon stock changes in perennial woody crops, supporting the 2050 Carbon Neutral Strategy of the Ministry of Agriculture, Food and Rural Affairs and encouraging related research in the agricultural sector.
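The two national-inventory approaches named above have simple formal cores in the IPCC guidelines: the stock-difference method annualizes the change between two inventories, while the gain-loss method nets estimated gains against losses. A minimal sketch:

```python
def stock_difference(c_t1: float, c_t2: float, t1: int, t2: int) -> float:
    """Annual carbon stock change (tC/yr) from carbon stocks measured
    at two inventory times t1 and t2 (stock-difference method)."""
    return (c_t2 - c_t1) / (t2 - t1)

def gain_loss(annual_gains: float, annual_losses: float) -> float:
    """Annual carbon stock change (tC/yr) as estimated biomass gains
    minus losses, e.g. growth vs. harvest and mortality (gain-loss method)."""
    return annual_gains - annual_losses

# Illustrative figures (not from the paper): an orchard pool measured at
# 1000 tC in 2015 and 1100 tC in 2020 implies +20 tC/yr.
print(stock_difference(1000.0, 1100.0, 2015, 2020))
```

The stock-difference method needs repeated inventories; the gain-loss method needs growth and removal factors, which is why developing country-specific factors (Tier 2) is a prerequisite for the latter.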

Biochemical Assessment of Deer Velvet Antler Extract and its Cytotoxic Effect including Acute Oral Toxicity using an ICR Mice Model (ICR 마우스 모델을 이용한 녹용 추출물의 생화학적 평가 및 급성 경구 독성을 포함한 세포 독성 효과)

  • Ramakrishna Chilakala;Hyeon Jeong Moon;Hwan Lee;Dong-Sung Lee;Sun Hee Cheong
    • Journal of Food Hygiene and Safety
    • /
    • v.38 no.6
    • /
    • pp.430-441
    • /
    • 2023
  • Velvet antler is widely used as a traditional medicine, and numerous studies have demonstrated its considerable nutritional and medicinal value, including immunity-enhancing effects. This study investigated different deer velvet extracts (Sample 1: raw extract, Sample 2: dried extract, and Sample 3: freeze-dried extract) for proximate composition, uronic acid, sulfated glycosaminoglycan, sialic acid, collagen levels, and chemical components using ultra-performance liquid chromatography-quadrupole-time-of-flight mass spectrometry. In addition, we evaluated the cytotoxic effect of the extracts on BV2 microglia, HT22 hippocampal cells, HaCaT keratinocytes, and RAW264.7 macrophages using the MTT cell viability assay. Furthermore, we evaluated the acute toxicity of the extracts at different doses (0, 500, 1000, and 2000 mg/kg) administered orally to both male and female ICR mice for 14 days (five mice per group). After treatment, we evaluated general toxicity, survival rate, body weight changes, mortality, clinical signs, and necropsy findings in the experimental mice based on OECD guidelines. The results suggested that in vitro treatment with the extracts had no cytotoxic effect on HaCaT keratinocytes, whereas Sample 2 was cytotoxic at 500 and 1000 μg/mL to HT22 hippocampal cells and RAW264.7 macrophages, and Sample 3 was cytotoxic at 500 and 1000 μg/mL to RAW264.7 macrophages and BV2 microglia. However, mice treated in vivo with the extracts at doses of 500-2000 mg/kg BW showed no clinical signs, mortality, or necropsy findings, indicating that the LD50 is higher than this dosage. These findings indicate that there were no toxicological abnormalities connected with the deer velvet extract treatment in mice. However, further human and animal studies are needed before sufficient safety information is available to justify its use in humans.
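The MTT read-out behind such cytotoxicity results is conventionally reduced to percent viability relative to untreated controls. A minimal sketch of that standard calculation (the exact protocol and blank handling used in the paper are assumptions here):

```python
def percent_viability(od_treated: float, od_control: float,
                      od_blank: float = 0.0) -> float:
    """Cell viability (%) from MTT absorbance readings, relative to the
    untreated control, with optional blank subtraction."""
    return (od_treated - od_blank) / (od_control - od_blank) * 100.0

# Illustrative absorbances (not from the paper): a treated well reading
# half the control's blank-corrected absorbance gives 50% viability.
print(percent_viability(0.6, 1.1, od_blank=0.1))
```

A dose is typically flagged as cytotoxic when viability falls below a preset cutoff (often 70-80% of control, depending on the guideline followed).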

Context Sharing Framework Based on Time Dependent Metadata for Social News Service (소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크)

  • Ga, Myung-Hyun;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.39-53
    • /
    • 2013
  • The emergence of Internet technology and SNS has increased information flow and changed the way people communicate, from one-way to two-way. Users not only consume and share information; they also create it and share it with friends across social network services. This has made social media one of the most important communication tools, a shift that includes Social TV, in which people watch a TV program and simultaneously share information about its content with friends through social media. Social news is growing popular as a form of participatory social media: it channels user interest toward societal issues over the Internet and builds news credibility on users' reputations. However, conventional news service platforms focus only on news recommendation. Recent developments in SNS let users share and disseminate news, but conventional platforms provide no dedicated mechanism for doing so. Current social news services give users access only to an entire news item, not to the part of its content related to their interests; when a user is interested in only part of a news item and wants to share that part, it is still hard to do so, and in the worst case other users may understand the news in a different context. To solve this, a social news service must provide additional information. Yovisto, an academic video search service, provides time-dependent metadata for videos: users can search and watch parts of a video according to that metadata and share the content with friends on social media. Yovisto divides and synchronizes a video whenever the slide presentation changes to another page. However, this method cannot be applied directly to news video, which is not accompanied by slide presentations; a segmentation method is required to divide the news video and create time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news content and provide time-dependent metadata so that users can use context information to communicate with their friends. The news transcript is divided using the proposed story segmentation method. We provide a tag representing the entire content of the news, and sub-tags indicating each segment along with its starting time. The time-dependent metadata helps users track news information and lets them comment on each segment; users may also share the news, based on the time metadata, either as segments or as a whole, which helps recipients understand what is shared. To demonstrate performance, we evaluated story segmentation accuracy and tag generation, measuring segmentation accuracy through semantic similarity and comparing it with a benchmark algorithm. Experimental results show that the proposed method outperforms the benchmark algorithms in segmentation accuracy. Sub-tag accuracy is the most important part of the framework for sharing a specific news context with others, so to extract more accurate sub-tags we created a stop-word list of terms unrelated to the news content, such as the names of anchors and reporters, and applied it to the framework. Our analysis of the accuracy of tags and sub-tags representing the news context suggests that the proposed framework helps users share their opinions, with context information, on social media and social news.
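The story segmentation step can be illustrated with a much simpler lexical-cohesion baseline: start a new segment whenever word overlap between adjacent transcript sentences drops below a threshold. This TextTiling-style sketch is an illustration, not the paper's semantic-similarity method:

```python
def jaccard(a: set, b: set) -> float:
    """Word-set overlap between two sentences."""
    return len(a & b) / len(a | b) if a | b else 0.0

def segment_transcript(sentences, threshold=0.1, stop_words=frozenset()):
    """Split a news transcript into stories wherever lexical overlap
    between adjacent sentences falls below `threshold`."""
    segments, current = [], [sentences[0]]
    prev = set(sentences[0].lower().split()) - stop_words
    for sent in sentences[1:]:
        words = set(sent.lower().split()) - stop_words
        if jaccard(prev, words) < threshold:
            segments.append(current)  # cohesion dropped: close the story
            current = []
        current.append(sent)
        prev = words
    segments.append(current)
    return segments
```

The stop-word filtering of anchor and reporter names described in the paper corresponds to the `stop_words` argument here; without it, recurring names create spurious cohesion across story boundaries.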

The Characteristics of Rural Population, Korea, 1960~1995: Population Composition and Internal Migration (농촌인구의 특성과 그 변화, 1960~1995: 인구구성 및 인구이동)

  • 김태헌
    • Korea journal of population studies
    • /
    • v.19 no.2
    • /
    • pp.77-105
    • /
    • 1996
  • The rural problems we face begin with an extremely small population and a population structure skewed by age and sex. We therefore analyzed the changes in the rural population; examined recent return migration to rural areas by comparing recent in-migrants with out-migrants; and, by analyzing rural village survey data showing the current characteristics of the rural population, identified the effects of in-migration and projected the futures of rural villages by type. The change in the age composition of the rural population is very clear: as out-migration to the cities continued, the share of young children fell while that of the aged grew. The proportion of the population aged 0~14 was 45.1% in 1970 and dropped to 20.4% in 1995, and is expected to fall below 20% from now on. Over the same period (1970~1995), the population aged 65 and over rose from 4.2% to 11.9%. In 1960, before industrialization, the proportion of young children in rural areas was higher than in the cities; as the rural young continuously moved to the cities it fell below the urban level from 1975, and the gap widened until 1990. But by 1995 the proportion of the rural population aged 0~4 had become 6.2% and the gap had narrowed, which we attribute to a change in the characteristics of in-migrants and out-migrants in rural areas. Considering the age composition of those moving from urban to rural areas in the late 1980s, 51.8% of all migrants were concentrated in the 20~34 age group, and their educational level was higher than that of out-migrants to urban areas. This fact foreshadowed changes in the rural population, and the results will appear as changes in rural society. However, after comparing the population structure of a purely rural village in Boeun-gun with a suburban village in Paju-gun, formerly agriculture-centered but recently changing rapidly, we conclude that the recent rejuvenation of in-migrants to rural areas is a phenomenon of suburban rural areas only, not of rural areas in general. The population structures observed in the field survey of these villages show that purely rural villages, free of urban influence, have highly aged residents, while suburban villages are industrializing and urbanizing. Therefore, the recent partial change in the rural population structure, and the change in the characteristics of in-migrants, affects and is affected by population change in areas such as suburban rural villages. Although some return migrants move to rural areas to take up agriculture, their number is too small to appear as a statistical effect.


Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.57-71
    • /
    • 2013
  • Over a billion people around the world generate news minute by minute. Some news can be anticipated, but most arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge amount of news delivered by many media outlets because they want to understand what is happening now, predict what might happen in the near future, and share and discuss the news; watching the news and extracting useful information from it helps them make better daily decisions. However, it is difficult for people to choose news suited to them and to obtain useful information from it, because there are so many news media, such as portal sites and broadcasters, and most articles consist of gossip and breaking news. User interest also changes over time, and many people have no interest in outdated news, so a personalized news service must apply users' recent interests, which means it should manage user profiles dynamically. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. Personalization requires users' personal information, which we extract from a social network service. The proposed system constructs a dynamic user profile based on recent user information from Facebook, one of the major social network services; this information includes personal details, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share content and connect with people, and Facebook users can add a Page to indicate their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics. However, some Pages do not map directly to a news topic, because a Page concerns an individual object and provides no topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is used to match Pages to news topics via the hierarchy information of its objects. By using recent Page information and users' articles, the proposed system maintains a dynamic user profile, which is then used to measure user preferences over news. To generate news profiles, the news categories predefined by the news media are used, and keywords are extracted after analyzing news contents including the title, category, and script; the TF-IDF technique, which reflects how important a word is to a document in a corpus, identifies the keywords of each article. User profiles and news profiles share the same format so that the similarity between user preferences and news can be measured efficiently, and the system calculates the similarity between every user profile and every news profile. Existing similarity calculations in the vector space model do not cover synonyms, hypernyms, or hyponyms because they handle only the given words, so the proposed system applies WordNet to the similarity calculation to overcome this limitation. The top-N news articles with the highest similarity for a target user are then recommended. To evaluate the proposed system, user profiles were generated from Facebook accounts with the participants' consent, and we implemented a Web crawler to extract news from PBS, the non-profit public broadcasting network in the United States, to construct news profiles. We compared the performance of the proposed method with two benchmark algorithms: a traditional TF-IDF-based method and the 6Sub-Vectors method, which divides the points used to obtain keywords into six parts. Experimental results demonstrate that, in terms of the prediction error of recommended news, the proposed system provides useful news to users by applying their social network information and WordNet functions.
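The profile-matching core, TF-IDF vectors compared by cosine similarity, can be sketched with the stdlib; the WordNet expansion of synonyms and hypernyms used in the paper is omitted here:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF vector (dict of word -> weight) for each document."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency: count each word once per doc
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({w: (tf[w] / len(toks)) * math.log(n / df[w]) for w in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

With this exact-match formulation, a profile mentioning "film" and an article about "movies" score zero similarity, which is precisely the limitation the paper's WordNet step addresses.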

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts, and Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments toward each aspect of an object. Because it has more practical value for business, ABSA is drawing attention from both academic and industrial organizations. Given a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment toward the restaurant as positive, while ABSA identifies the restaurant's 'price' aspect as negative and its 'food' aspect as positive; ABSA thus enables more specific and effective marketing strategies. Performing ABSA requires identifying the aspect terms or aspect categories in the text and judging the sentiments toward them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for those terms, or by extracting aspect categories and then performing ACSC. An aspect category is expressed by one or more aspect terms, or inferred indirectly from other words. In the example sentence, 'price' and 'food' are both aspect categories, and the category 'food' is expressed by the aspect term 'food' in the review; 'pasta', 'steak', or 'grilled chicken special' could likewise all be aspect terms for the category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which has no specific aspect term but can be guessed indirectly from an evaluative word such as 'expensive', is called an implicit aspect. So far, 'aspect category' has been used to avoid confusion with 'aspect term'; from here on we treat 'aspect category' and 'aspect' as the same concept and use 'aspect' for convenience. Note that ATSC analyzes sentiment toward given aspect terms and so deals only with explicit aspects, whereas ACSC treats both explicit and implicit aspects. This study seeks answers to the following issues, overlooked in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to use the output vectors of the aspect-category tokens than only the final output vector of the [CLS] token as the classification vector? Second, is there a performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, does performance differ according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence pair? To answer these questions, we implemented 12 ACSC models and conducted experiments on four English benchmark datasets. As a result, we derived ACSC models that outperform existing studies without expanding the training dataset. We also found that reflecting the output vector of the aspect-category token is more effective than using only the [CLS] output vector, that QA-type input generally performs better than NLI, and that in the QA type the order of the sentence containing the aspect category is irrelevant to performance. There may be differences depending on the characteristics of the dataset, but with NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to yield better performance. The methodology for designing ACSC models used in this study could be applied similarly to related tasks such as ATSC.
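The QA- and NLI-type sentence-pair inputs compared above can be illustrated as follows; the auxiliary-sentence templates are assumptions for illustration, not the paper's exact wording:

```python
def build_pair(review: str, aspect: str, style: str = "QA"):
    """Construct the (sentence_a, sentence_b) pair fed to BERT as
    [CLS] sentence_a [SEP] sentence_b [SEP] for aspect-category
    sentiment classification. Templates are illustrative."""
    if style == "QA":
        # QA type: a natural-language question about the aspect
        aux = f"what do you think of the {aspect} of it ?"
    else:
        # NLI type: a terse pseudo-hypothesis naming only the aspect
        aux = aspect
    return review, aux

print(build_pair(
    "The restaurant is expensive but the food is really fantastic",
    "price", "QA"))
```

Swapping the tuple order corresponds to the paper's third research question, whether the sentence containing the aspect comes first or second in the pair.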