• Title/Summary/Keyword: system availability


A Study on Human Rights in North Korea in terms of Haewon-sangsaeng (해원상생 관점에서의 북한인권문제 고찰)

  • Kim Young-jin
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.43
    • /
    • pp.67-102
    • /
    • 2022
  • The purpose of this study is to analyze the human rights found in the North Korean Constitution, and their core problems, by focusing on the elements of human rights suggested by Daesoon Jinrihoe's doctrine of Haewon-sangsaeng (解冤相生, the Resolution of Grievances for Mutual Beneficence). Haewon-sangsaeng is seemingly the only natural law that could resolve the human resentment lingering from the Mutual Contention of the Former World while leading humans to work for the betterment of one another. Haewon-sangsaeng, as a natural law, includes the right to life, the right to autonomous decision-making, the duty to act according to human dignity (physical freedom, freedom of conscience, freedom of religion, freedom of speech, freedom of the press, etc.), the right to equal treatment in one's social environment, and the right to the highest attainable level of health through treatment. The North Korean Constitution does not function as an institutional device to guarantee natural human rights, the fundamental principle of a constitution; instead, it stipulates the right of revolutionary warriors to defend dictators and dictatorship. The right to life is specified such that an individual's life belongs to the life of the group, according to the regime's socio-political theory of life. Rights to freedom are stipulated to prioritize group interests over individual interests in accordance with the principle of collectivism. The rights to equality and health are undermined by class-based discrimination. The right to life of North Koreans is not guaranteed, due to the death penalty system found within the North Korean Criminal Code and its supplementary provisions, and the regime deprives North Koreans of their right to die with dignity through public executions.
The North Korean regime places due process under the direction of the Korean Workers' Party, treats religion as superstition or opium, and does not acknowledge the freedoms of bodily autonomy, religion, media, or the press. North Koreans are classified according to their status, and their rights to equality are not guaranteed because they are forced to live a pre-modern lifestyle under a patriarchal order. In addition, health rights are not guaranteed, due to biased selection of availability and accessibility in the medical field as well as frequent shortages of free treatment.

Overcoming taxonomic challenges in DNA barcoding for improvement of identification and preservation of clariid catfish species

  • Piangjai Chalermwong;Thitipong Panthum;Pish Wattanadilokcahtkun;Nattakan Ariyaraphong;Thanyapat Thong;Phanitada Srikampa;Worapong Singchat;Syed Farhan Ahmad;Kantika Noito;Ryan Rasoarahona;Artem Lisachov;Hina Ali;Ekaphan Kraichak;Narongrit Muangmai;Satid Chatchaiphan;Kednapat Sriphairoj;Sittichai Hatachote;Aingorn Chaiyes;Chatchawan Jantasuriyarat;Visarut Chailertlit;Warong Suksavate;Jumaporn Sonongbua;Witsanu Srimai;Sunchai Payungporn;Kyudong Han;Agostinho Antunes;Prapansak Srisapoome;Akihiko Koga;Prateep Duengkae;Yoichi Matsuda;Uthairat Na-Nakorn;Kornsorn Srikulnath
    • Genomics & Informatics
    • /
    • v.21 no.3
    • /
    • pp.39.1-39.15
    • /
    • 2023
  • DNA barcoding performed without assessing reliability and validity causes taxonomic errors in species identification, which in turn disrupts conservation efforts and the aquaculture industry. Although DNA barcoding facilitates molecular identification and phylogenetic analysis of species, its applicability in the clariid catfish lineage remains uncertain. In this study, DNA barcoding was developed and validated for clariid catfish. In total, 2,970 barcode sequences from the mitochondrial cytochrome c oxidase I (COI) and cytochrome b (Cytb) genes and the D-loop region were analyzed for 37 clariid catfish species. The highest intraspecific nearest-neighbor distances were 85.47%, 98.03%, and 89.10% for COI, Cytb, and D-loop sequences, respectively. This suggests that the Cytb gene is the most appropriate for identifying clariid catfish and can serve as a standard region for DNA barcoding. A positive barcoding gap between interspecific and intraspecific sequence divergence was observed in the Cytb dataset but not in the COI and D-loop datasets. Intraspecific variation was typically less than 4.4%, whereas interspecific variation was generally more than 66.9%. However, a species complex was detected in walking catfish, and significant intraspecific sequence divergence was observed in North African catfish. These findings suggest the need to develop a proper DNA barcoding system for classifying clariid catfish and to validate its efficacy across a wider range of clariid catfish. With an enriched database of multiple sequences from a target species and its genus, species identification can be made more accurate and biodiversity assessment of the species can be facilitated.
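The key diagnostic in the abstract above, a positive "barcoding gap" (the smallest between-species distance exceeding the largest within-species distance), can be sketched with uncorrected p-distances. The sequences and species below are toy placeholders, not real clariid data:

```python
# Sketch: detecting a "barcoding gap" from pairwise p-distances.
# Sequences and species labels are toy placeholders, not real barcode data.
from itertools import combinations

def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def barcoding_gap(records):
    """records: list of (species, sequence). Returns (max intra, min inter)."""
    intra, inter = [], []
    for (sp1, s1), (sp2, s2) in combinations(records, 2):
        (intra if sp1 == sp2 else inter).append(p_distance(s1, s2))
    return max(intra), min(inter)

records = [
    ("C. batrachus", "ACGTACGTAC"),
    ("C. batrachus", "ACGTACGTAT"),   # one difference within the species
    ("C. gariepinus", "TGCATGCATG"),  # many differences between species
    ("C. gariepinus", "TGCATGCATC"),
]
max_intra, min_inter = barcoding_gap(records)
print(max_intra, min_inter)  # a positive gap means min_inter > max_intra
```

With a real dataset, the same comparison run per marker (COI, Cytb, D-loop) shows which region separates intra- and interspecific divergence cleanly.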

A Study on the Availability of the On-Board Imager (OBI) and Cone-Beam CT (CBCT) in the Verification of Patient Set-up (온보드 영상장치(On-Board Imager) 및 콘빔CT(CBCT)를 이용한 환자 자세 검증의 유용성에 대한 연구)

  • Bak, Jino;Park, Sung-Ho;Park, Suk-Won
    • Radiation Oncology Journal
    • /
    • v.26 no.2
    • /
    • pp.118-125
    • /
    • 2008
  • Purpose: For on-line image-guided radiation therapy (on-line IGRT), kV X-ray images and cone-beam CT images were obtained with an on-board imager (OBI) and cone-beam CT (CBCT), respectively. The images were then compared with simulated images to evaluate the patient's setup and correct for deviations. The setup deviations between the simulated images and the acquired images (kV or CBCT images) were computed by 2D/2D-match or 3D/3D-match programs, respectively. We then investigated the correctness of the calculated deviations. Materials and Methods: After simulation and treatment planning for the RANDO phantom, the phantom was positioned on the treatment table. The phantom setup was performed with side-wall lasers, which standardized the treatment setup of the phantom against the simulated images, after tolerance limits for laser-line thickness had been established. After a known translation or rotation angle was applied to the phantom, kV X-ray images and CBCT images were obtained. Next, 2D/2D and 3D/3D matches against the simulation CT images were performed. Lastly, the results were analyzed for the accuracy of positional correction. Results: For the 2D/2D match using kV X-ray and simulation images, setup correction was possible to within 0.06° for rotation only, 1.8 mm for translation only, and 2.1 mm and 0.3° for combined rotation and translation, respectively. For the 3D/3D match using CBCT images, correction was possible to within 0.03° for rotation only, 0.16 mm for translation only, and 1.5 mm for translation with 0.0° for rotation in the combined case. Conclusion: The use of OBI or CBCT for on-line IGRT makes it possible to reproduce the simulated setup exactly for a patient in the treatment room. Fast detection and correction of a patient's positional error is possible in two dimensions via kV X-ray images from the OBI, and in three dimensions, with higher accuracy, via CBCT.
Consequently, on-line IGRT represents a promising and reliable treatment procedure.
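A 3D/3D match of the kind described above amounts to estimating a rigid transform (rotation plus translation) between the planned and the actual patient geometry. As a minimal illustration (not the paper's image-based method), a point-based match via the standard Kabsch/SVD solution recovers a known deviation; the fiducial coordinates and the 0.3°/2 mm offset are hypothetical:

```python
# Sketch: recovering a rigid setup deviation (rotation + translation) with a
# point-based 3D/3D match (Kabsch/SVD). Fiducials are hypothetical, not the
# paper's image-registration data.
import numpy as np

def rigid_match(ref, moved):
    """Find R, t such that moved ~= ref @ R.T + t (least squares)."""
    c_ref, c_mov = ref.mean(axis=0), moved.mean(axis=0)
    H = (ref - c_ref).T @ (moved - c_mov)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_mov - R @ c_ref
    return R, t

# Phantom fiducials (mm), with a known 0.3 degree rotation about z and a 2 mm shift
ref = np.array([[0.0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]])
theta = np.deg2rad(0.3)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = ref @ Rz.T + np.array([2.0, 0, 0])

R, t = rigid_match(ref, moved)
angle = np.rad2deg(np.arccos((np.trace(R) - 1) / 2))
print(round(angle, 3), np.round(t, 3))  # recovers ~0.3 degrees and [2, 0, 0] mm
```

Real 2D/2D and 3D/3D match software performs the analogous estimation on image intensities rather than on marked points, but the recovered quantity, a small rotation and a couch shift, is the same.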

A Study on Transition of Rice Culture Practices During Chosun Dynasty Through Old References IX. Integrated Discussion on Rice (주요(主要) 고농서(古農書)를 통(通)한 조선시대(朝鮮時代)의 도작기술(稻作技術) 전개(展開) 과정(過程) 연구(硏究) - IX. 도작기술(稻作技術)에 대(對)한 종합고찰(綜合考察))

  • Guh, J.O.;Lee, S.K.;Lee, E.W.;Lee, H.S.
    • Korean Journal of Weed Science
    • /
    • v.12 no.1
    • /
    • pp.70-79
    • /
    • 1992
  • From the beginning of the Chosun dynasty, an agriculture-first policy was implemented through the writing of farming books, such as Nongsajiksul, matched to the real conditions of local agriculture, which provided the grounds for new, intensive farming technologies. This farming book was a collection of good farming technologies practiced in rural farm areas at that time. According to Nongsajiksul, rice culture systems were divided into "Musarmi" (water-seeded rice), "Kunsarmi" (dry-seeded rice), transplanted rice, and mountainous (upland) rice culture. These rice cultures were characterized by high technologies based on scientific weeding methods, improved fertilization, and the systematic use of cattle power and hand tools in cultivation. Reclamation of coastal swampy and barren land was possible by virtue of fire-cultivation farming (火耕) and a weeding tool called "Yoonmok" (輪木). There was also an improved hoe for weeding as well as for thinning and heaping up soil at the seedling stages of rice. Direct-seeded rice culture in flat paddy fields was expanded by constructing irrigation reservoirs and ponds, and valley paddy fields were reclaimed by constructing "Boh" (洑). These advances were possible owing to weed control by irrigation water, the maintenance of soil fertility by inorganic fertilization during irrigation, and the increased productivity of rice fields through good physiological conditions for rice. Labor-saving rice culture was also feasible through transplanting, but nationwide, rice generally could not be transplanted because of restrictions on water use. Thus, direct-seeded rice culture in dry soils was established, in which rice was direct-seeded and grown in dry soil until the seedling stages and then grown in flooded fields once it rained, as described in Nongsajiksul.
During the middle of the dynasty (AD 1495-1725), excellent labor-saving farming practices included check-row transplanting, on account of its weeding efficiency and suitability for rice ("Hanjongrok"), and nursery-bed techniques (early transplanting of rice) were emphasized on the basis of rice transplanting ("Nongajibsung"). Techniques for deep plowing with cattle power and for applying more fertilizer were intended to improve the productivity of labor and land. The matters advanced in "Sanlimkyungje" beyond "Nongajibsung" were the development of the "dry bed of rice nursery stock", like the "upland rice nursery" of today, transplanting, the establishment of winter barley on drained paddy fields, and the improvement of labor and land productivity in rice. This resulted in communities of large-scale farming by changing the pattern of small farming into a production system of rice management. Woo-hayoung (1741-1812), in his book "Chonilrok", tried to reform large-scale farming into intensive farming, of which an eminent view was to divide land use between transplanting (paddy) and groove-seeding (dry field) methods. In particular, as argued by Seo-yugo ("Sanlimkyungjeji"), the advantages of transplanting were the curtailment of weeding labor, the good growth of rice owing to the soil fertility of both the nursery bed and the paddy field, and renewed active growth after rice plants were pulled out and replanted. Of course, there were also a re-estimation of transplanting, a limitation to two croppings a year, a restriction on "paddy-upland alternation", and a ban on large-scale farming. In that period, Lee-jiyum wrote on rice-farming technologies in dry upland with consideration of the land, the water physiology of rice, and the convenience of weeding, and his was a creative cropping system for securing farm income most safely.
In integrated consideration, the following must be introduced to put the improved farming methods into practice: improvement of farming tools, application of more fertilizer, introduction of more rational and efficient cultural technologies, management of labor power, improvement of the cropping system to enhance the use of irrigation water and land, and introduction of new crops and new varieties.


A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.231-252
    • /
    • 2021
  • Artificial intelligence (AI) is a key technology that will change the future the most. It affects industry as a whole and daily life in various ways. As data availability increases, artificial intelligence finds optimal solutions and infers/predicts through self-learning. Research and investment related to automation that discovers and solves problems on its own are continuously ongoing. Automation through artificial intelligence offers benefits such as cost reduction, minimization of human intervention, and compensation for differences in human capability. However, there are side effects, such as the limits of artificial intelligence's autonomy and erroneous results due to algorithmic bias. In the labor market, it raises fears of job replacement. Prior studies on the utilization of artificial intelligence have shown that individuals do not necessarily use the information (or advice) it provides. People are more sensitive to algorithm errors than to human errors, so they avoid algorithms after seeing them err, a phenomenon called "algorithm aversion." Recently, artificial intelligence has begun to be understood from the perspective of the augmentation of human intelligence, and interest has shifted to human-AI collaboration rather than AI alone without humans. A study of 1,500 companies in various industries found that human-AI collaboration outperformed AI alone. In medicine, pathologist-deep learning collaboration reduced pathologists' cancer-diagnosis error rate by 85%. Leading AI companies, such as IBM and Microsoft, are starting to frame AI as augmented intelligence. Human-AI collaboration is emphasized in the decision-making process because artificial intelligence is superior in analytical ability based on information, while intuition is a uniquely human capability; together, human-AI collaboration can make optimal decisions.
In an environment where change is accelerating and uncertainty is increasing, the need for artificial intelligence in decision-making will grow, and active discussion is expected on approaches that utilize artificial intelligence for rational decision-making. This study investigates the impact of artificial intelligence on decision-making, focusing on human-AI collaboration and the interaction between the decision-maker's personality traits and the advisor type. Advisors were classified into three types: human, artificial intelligence, and human-AI collaboration. We investigated the perceived usefulness of advice and the utilization of advice in decision-making, and whether the decision-maker's personality traits are influencing factors. Three hundred and eleven adult male and female participants performed a task predicting the age of faces in photos. The results showed that advisor type does not directly affect the utilization of advice; decision-makers utilize advice only when they believe it can improve prediction performance. In the case of human-AI collaboration, decision-makers rated the perceived usefulness of advice higher regardless of their personality traits, and the advice was utilized more actively. When the advisor was artificial intelligence alone, decision-makers who scored high in conscientiousness, high in extroversion, or low in neuroticism rated the perceived usefulness of the advice higher and therefore utilized it actively. This study has academic significance in that it focuses on human-AI collaboration, an area of recently growing interest in artificial intelligence's roles. It expands the relevant research area by considering the role of artificial intelligence as an advisor in decision-making and judgment research, and in practical terms it suggests issues that companies should consider in order to enhance AI capability.
To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also need employees who properly understand the digital information presented by AI and can add non-digital information to decisions. Moreover, to increase the utilization of AI-based systems, task-oriented competencies, such as analytical skills and information-technology capabilities, are important. In addition, greater performance can be expected if employees' personality traits are considered.

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (to place and time, for example). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflow is increasing, because the data retrieved by sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous studies of information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing; existing studies have focused on only a small subset of them. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify information privacy factors in most studies, despite the limitations of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Hence a survey that assumes its participants have sufficient experience with, or understanding of, the technologies it presents may not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. The purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider overall technological characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data-collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. Throughout, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented. The second-round questionnaire took the main factors provided in the first round and fleshed them out with relevant sub-factors drawn from the literature survey; respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates. Final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence; to that end, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those in existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped clarify these sometimes vague issues by determining which privacy concerns are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns. The traditional questionnaire method was not chosen because, for context-aware personalized services, users were assumed to lack understanding of and experience with the new technology. For understanding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and sensory networks as the most important technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and of deciding which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, in step with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. Following the evaluation of sub-factors, additional studies would be necessary on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next highest in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services under the anywhere-anytime-any-device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
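The concordance analysis the abstract describes is commonly computed as Kendall's coefficient of concordance (W), which measures agreement among m experts ranking n items (W = 1 means identical rankings); the specific statistic is an assumption here, and the rankings below are toy data:

```python
# Sketch: Kendall's coefficient of concordance (W) over expert rankings.
# W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared deviations
# of the per-item rank totals from their mean.
def kendalls_w(rankings):
    """rankings: list of m rank lists over the same n items (ranks 1..n, no ties)."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking five privacy-concern factors
experts = [
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 4, 5],
]
print(round(kendalls_w(experts), 3))  # high W -> strong consensus across rounds
```

In a Delphi study, W is tracked across rounds; rising W signals that the panel is converging toward consensus, a common stopping criterion.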

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains, it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer-learning approaches: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying high-dimensional features extracted directly from multiple ConvNet layers remains a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet-layer representations for transfer learning instead of a single-layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, which carries more information about an image; when the three fully connected layers' features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-layer representation.
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
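The concatenate-then-reduce step of the pipeline above can be sketched in a few lines; random arrays stand in for real FC6/FC7/FC8 activations, and the SVD-based PCA is a generic implementation, not the paper's exact configuration:

```python
# Sketch: concatenating multi-layer activation features and reducing them
# with PCA before classifier training. Random arrays stand in for real
# AlexNet FC6/FC7/FC8 activations.
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data matrix; principal directions are the rows of Vt
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
n_images = 50
fc6 = rng.normal(size=(n_images, 4096))  # placeholder FC6 activations
fc7 = rng.normal(size=(n_images, 4096))  # placeholder FC7 activations
fc8 = rng.normal(size=(n_images, 1000))  # placeholder FC8 activations

features = np.concatenate([fc6, fc7, fc8], axis=1)  # 9192-dim per image
reduced = pca_reduce(features, k=32)
print(features.shape, reduced.shape)  # (50, 9192) (50, 32)
```

The reduced matrix would then feed a conventional classifier (e.g., a linear SVM), with k chosen by validation accuracy.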