• Title/Summary/Keyword: Linguistic Information (언어적 정보)

The research for the yachting development of Korean Marina operation plans (요트 발전을 위한 한국형 마리나 운영방안에 관한 연구)

  • Jeong Jong-Seok;Hugh Ihl
    • Journal of Navigation and Port Research
    • /
    • v.28 no.10 s.96
    • /
    • pp.899-908
    • /
    • 2004
  • The rise of incomes and the introduction of the five-day working week have given Korean people more opportunities to enjoy leisure time, and many Koreans have taken a strong interest in oceanic sports such as yachting and in oceanic leisure equipment. With the popularization and development of such equipment, the scope of oceanic activities has been expanding in Korea just as in the advanced oceanic countries. However, the current conditions for these sports in Korea are not advanced and are even worse than in underdeveloped countries. In order to develop the underdeveloped resources of Korean marinas, we need to customize the marina models of advanced nations to serve the specific needs and circumstances of Korea. To that end, we carried out a comparative analysis of how Australia, New Zealand, Singapore, Japan, and Malaysia operate their marinas, reaching the following conclusions. Firstly, in marina operations, in order to protect personal property rights and to preserve the environment, membership and non-membership schemes and profit and non-profit schemes must be operated separately, without regulating the dress code for entering or leaving the clubhouse. Secondly, in order to generate greater added value, new sporting events should be hosted each year; volunteers should be used actively, greater interest in yacht tourism should be promoted, CIQ procedures for foreign yachts should be simplified, and language services should be provided. Thirdly, a permanent yacht school should be established, with classes taught by qualified instructors; beginner, intermediate, and advanced classes should be managed separately, with special emphasis on the dinghy yacht program for children. Fourthly, arrival and departure at the moorings must be regulated autonomously, and there must be systematic measures enabling the marina, once usage fees have been paid, to compensate in part for loss of and damage to equipment and to provide security and surveillance. Fifthly, marine safety personnel must be organized from civilian organizations in accordance with Korea's current circumstances, so that they can be used actively in benchmarking, rescue operations, and oceanic searches in times of disaster at sea.

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) goes further: it is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value in terms of business, ABSA is drawing attention from both academic and industrial organizations. When there is a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify which aspect terms or aspect categories are included in the text and to judge the sentiments towards them. Accordingly, there exist four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which has no specific aspect term but can be indirectly inferred from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and mostly use the word 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, while ACSC treats not only explicit aspects but also implicit aspects. This study seeks answers to the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the tokens for aspect categories than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair configuration of the input data? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that provide performance beyond the existing studies without expanding the training dataset were derived.
In addition, it was found that it is more effective to reflect the output vector of the aspect category token than to use only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI-type input, and that in the QA type the order of the sentence containing the aspect category is irrelevant to performance. There may be some differences depending on the characteristics of the dataset, but when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The methodology used in this study for designing ACSC models could be similarly applied to other tasks such as ATSC.
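
As an illustration of the QA-type sentence-pair input discussed above, the following minimal sketch classifies the sentiment of an aspect category with a BERT sequence classifier using the Hugging Face Transformers library. The checkpoint name, the wording of the auxiliary question, and the three-way label set are illustrative assumptions rather than the authors' code, and the classification head would need to be fine-tuned on ACSC data before its predictions are meaningful.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical checkpoint and label set; a real ACSC model would be
# fine-tuned on an aspect-sentiment dataset before use.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)            # negative / neutral / positive

review = "The restaurant is expensive but the food is really fantastic."
aspect_question = "What do you think of the price?"   # QA-type auxiliary sentence

# Sentence-pair encoding: review and auxiliary sentence joined with [SEP].
inputs = tokenizer(review, aspect_question, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits               # classification from the [CLS] head
pred = logits.argmax(dim=-1).item()
print(["negative", "neutral", "positive"][pred])
```

The better-performing variant reported above additionally reflects the output vectors of the aspect-category tokens, which would require a custom classification head in place of the stock [CLS]-based head used in this sketch.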

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. In particular, many efforts have been actively made to identify current technology trends and to analyze their development directions, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, service, and education. Major platforms on which complex AI algorithms for learning, reasoning, and recognition can be developed have been opened to the public as open source projects, and as a result, technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology is greatly indebted to open source software, developed by major global companies, that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI that have been developed through the online collaboration of many parties. This study searched for and collected a list of major projects related to AI generated from 2000 to July 2018 on Github, and confirmed the development trends of major technologies in detail by applying text mining techniques to topic information, which indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects was less than 100 per year until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and the number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing technology has continued to be at the top in all years, implying that its OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequently appearing topics. However, after 2016, programming languages other than Python disappeared from the top ten topics; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were frequently appearing topics. The results of topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they were not at the top from 2009 to 2012; this indicates that OSS was developed in the medical field in order to utilize AI technology.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality; otherwise the topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with only slight changes in the ranks of convolutional neural networks and reinforcement learning. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. It is also noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to reach the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis of converging future technology trends.
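
The two measures used throughout the analysis above, topic appearance frequency and degree centrality in a topic co-occurrence network, can be computed along the following lines. This is a minimal sketch with made-up project-topic lists standing in for the GitHub data; the study's actual collection and preprocessing steps are not reproduced here.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Each project is represented by its list of topics (made-up examples).
projects = [
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "computer-vision", "python"],
    ["reinforcement-learning", "deep-learning", "python"],
]

# Appearance frequency: how many projects mention each topic.
freq = Counter(topic for topics in projects for topic in set(topics))

# Co-occurrence network: topics are nodes, and an edge links two topics
# that appear together in at least one project.
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(set(topics), 2))

centrality = nx.degree_centrality(G)
print(freq.most_common(5))
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])
```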

Citing Behavior of Korean Scientists on Foreign Journals in KSCD (KSCD를 활용한 국내 과학기술자의 해외 학술지 인용행태 연구)

  • Kim, Byung-Kyu;Kang, Mu-Yeong;Choi, Seon-Heui;Kim, Soon-Young;You, Beom-Jong;Shin, Jae-Do
    • Journal of the Korean Society for Information Management
    • /
    • v.28 no.2
    • /
    • pp.117-133
    • /
    • 2011
  • There has been little comprehensive research studying the impact of foreign journals on Korean scientists, mainly because no extensive citation index database of domestic journals has been available for analysis. The Korea Institute of Science and Technology Information (KISTI) built the Korea Science Citation Database (KSCD) and has provided the Korea Science Citation Index (KSCI) and Korea Journal Citation Reports (KJCR) services. In this article, the citing behavior of Korean scientists on foreign journals is examined using KSCD, which covers Korean core journals. This research covers (1) analysis of the types of foreign documents cited, (2) analysis of citation counts of foreign journals by subject and the ratio of citing different disciplines, (3) analysis of the language and country of the foreign documents cited, (4) analysis of journal publishers and whether or not the journals are listed in global citation index services, and (5) analysis of the current situation of subscribing to foreign electronic journals in Korea. The results of this research should be useful for establishing strategies for licensing foreign electronic journals and for information services. From this research, the immediacy citation rate (average 1.46%), peak time (average 3.9 years), and half-life (average 8 years) of cited foreign journals were identified. It was also found that Korean scientists tend to cite journals covered in SCI(E) or SCOPUS, and that 90% of the cited foreign journals have been licensed by institutions in Korea.
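
For readers unfamiliar with the indicators reported above, the sketch below shows, on synthetic numbers rather than KSCD data, how an immediacy citation rate and a cited half-life can be derived from a distribution of citations by the age of the cited item.

```python
# citations_by_age[k] = number of citations to items published k years
# before the citing year (synthetic numbers).
citations_by_age = {0: 15, 1: 80, 2: 120, 3: 140, 4: 130, 5: 110,
                    6: 90, 7: 70, 8: 60, 9: 50, 10: 40}
total = sum(citations_by_age.values())

# Immediacy citation rate: share of citations to items published in the
# citing year itself (age 0).
immediacy_rate = citations_by_age[0] / total * 100

# Cited half-life: the age by which half of all citations have accumulated.
cumulative = 0
for age in sorted(citations_by_age):
    cumulative += citations_by_age[age]
    if cumulative >= total / 2:
        half_life = age
        break

print(f"immediacy citation rate: {immediacy_rate:.2f}%")
print(f"cited half-life: {half_life} years")
```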

A Study on Transcranial Magnetic Electrode Simulation Using Maxwell 3D (Maxwell 3D를 이용한 경두개 자기 전극 시뮬레이션에 관한 연구)

  • Lee, Geun-Yong;Yoon, Se-Jin;Jeong, Jin-hyoung;Kim, Jun-Tae;Lee, Sang-sik
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.6
    • /
    • pp.657-665
    • /
    • 2019
  • In this study, we investigated the transcranial magnetic electrode, a method for studying dementia, a neurodegenerative disease that is becoming a worldwide problem in aging societies, as well as muscle pain. In particular, transcranial magnetic electrodes have been studied to improve abilities deteriorated by dementia symptoms, such as speech, cognitive ability, and memory, by delivering magnetic output deep into the brain using coils placed on the head epidermis. In this study, simulation was performed using the Maxwell 3D program for the design of the coil, the core of the transcranial magnetic electrode. A simulation comparison between the coil designed in previous research and the coil developed in this research showed that the output of the new coil was superior to that of the conventionally designed coil. The B-field and H-field output graphs of the coils are symmetrical, but the symmetry between the two coils is only approximate, not exact. Based on these results, an experiment was conducted to confirm whether output to the head epidermis through both coils is possible. In the magnitude-field analysis of the reverse-wound two-coil configuration, the maximum output was 3.3920e+004 A/m, and the vector field showed the strongest magnetic field at around 35 to 165 degrees; it was confirmed that the magnetic outputs of the two coils partially canceled each other. In the case of the forward-wound two-coil configuration, a maximum of 3.2348e+004 A/m, similar to that of the reverse coil, was observed, and in the vector field the forward magnetic output toward the head epidermis was confirmed. However, as the height of the output coil increased, the magnetic output was reduced.
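
As a rough point of reference for the field magnitudes quoted above, the following sketch estimates the on-axis flux density of a single circular coil with the standard Biot-Savart result. It is not a Maxwell 3D model, and the coil parameters (turns, radius, current, distance) are illustrative assumptions only.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability [T*m/A]

def on_axis_b(current_a, turns, radius_m, z_m):
    """On-axis flux density of a circular coil (Biot-Savart result)."""
    return MU0 * turns * current_a * radius_m**2 / (
        2 * (radius_m**2 + z_m**2) ** 1.5)

# Example: a 100-turn coil of 50 mm radius carrying 5 A, evaluated 30 mm
# below the coil plane (all values assumed for illustration).
b = on_axis_b(current_a=5, turns=100, radius_m=0.05, z_m=0.03)
h_field = b / MU0          # corresponding H-field [A/m], the unit used above
print(f"B = {b * 1e3:.2f} mT, H = {h_field:,.0f} A/m")
```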

The Construction of QoS Integration Platform for Real-time Negotiation and Adaptation Stream Service in Distributed Object Computing Environments (분산 객체 컴퓨팅 환경에서 실시간 협약 및 적응 스트림 서비스를 위한 QoS 통합 플랫폼의 구축)

  • Jun, Byung-Taek;Kim, Myung-Hee;Joo, Su-Chong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.11S
    • /
    • pp.3651-3667
    • /
    • 2000
  • Recently, in Internet-based distributed multimedia environments, most researchers have focused on two rapidly growing technologies: streaming technology and distributed object technology. In particular, studies that try to integrate streaming services on top of distributed object technology have been progressing, and these technologies are applied to various stream service managements and protocols. However, the stream service management models proposed by existing research are insufficient for supporting the QoS of stream services. Moreover, the existing models cannot support extensibility and reusability when QoS-related functions are developed as sub-modules suited to specific-purpose application services. To solve these problems, in this paper we suggest a QoS integration platform that can be extended and reused using distributed object technologies and that guarantees the QoS of stream services. The suggested platform consists of three components: the User Control Module (UCM), the QoS Management Module (QoSM), and the Stream Object. The Stream Object has Send/Receive operations for transmitting RTP packets over TCP/IP. The User Control Module (UCM) controls Stream Objects via CORBA service objects. The QoS Management Module (QoSM) has functions that maintain the QoS of a stream service between the UCMs on the client and the server. As QoS control methodologies, procedures of resource monitoring, negotiation, and resource adaptation are executed via interactions among the components mentioned above. To construct this QoS integration platform, we first implemented the modules mentioned above independently and then used IDL to define the interfaces among them, so that the platform can support platform independence, interoperability, and portability based on CORBA. The platform was constructed using OrbixWeb 3.1c (following the CORBA specification) on Solaris 2.5/2.7, the Java language, Java Media Framework API 2.0, Mini-SQL 1.0.16, and multimedia equipment. To verify the platform functionally, we show the execution results of each module mentioned above and numerical data obtained from the QoS control procedures on the client's and server's GUIs while a stream service is running on the platform.
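
The monitor-negotiate-adapt cycle described above can be pictured with the following plain-Python sketch. It is not the CORBA/OrbixWeb implementation of the paper; the StreamObject interface, the loss-rate threshold, and the frame-rate adaptation step are illustrative assumptions used only to show how the QoS Management Module's control cycle fits together.

```python
import random

class StreamObject:
    """Sends RTP packets; exposes the knob the QoSM adapts (frame rate)."""
    def __init__(self, frame_rate=30):
        self.frame_rate = frame_rate

    def measured_loss_rate(self):
        # Stand-in for real resource monitoring of the network.
        return random.uniform(0.0, 0.1)

class QoSManagementModule:
    """Monitors the stream and adapts it when the negotiated QoS is violated."""
    def __init__(self, stream, max_loss=0.05, min_rate=5):
        self.stream, self.max_loss, self.min_rate = stream, max_loss, min_rate

    def control_cycle(self):
        loss = self.stream.measured_loss_rate()        # resource monitoring
        if loss > self.max_loss:                       # negotiated QoS violated
            new_rate = max(self.min_rate, self.stream.frame_rate - 5)
            self.stream.frame_rate = new_rate          # resource adaptation
        return loss, self.stream.frame_rate

qosm = QoSManagementModule(StreamObject())
for _ in range(3):
    print(qosm.control_cycle())
```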

COMPLIANCE STUDY OF METHYLPHENIDATE IR IN THE TREATMENT OF ADHD (주의력결핍과잉행동장애 치료 약물 Methylphenidate IR의 순응도 연구)

  • Hwang, Jun-Wan;Cho, Soo-Churl;Kim, Boong-Nyun
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.15 no.2
    • /
    • pp.160-167
    • /
    • 2004
  • Objectives: There have been very few studies on compliance with the immediate-release form of methylphenidate (MPH-IR), the most frequently used drug for Attention Deficit Hyperactivity Disorder (ADHD) in Korea. This study was conducted to investigate the compliance rate and its related factors during one year of outpatient (OPD) pharmacotherapy for children with ADHD. Methods: A total of 100 ADHD patients were selected randomly among patients who had been treated with MPH-IR from September 2002 to December 2002. All selected patients were diagnosed by DSM-IV ADHD criteria and fulfilled the inclusion criteria. In March 2003 (at six months of treatment), all patients and parents received a questionnaire on compliance with and satisfaction with MPH-IR treatment. In October 2003 (at one year of treatment), the investigators evaluated socio-demographic variables, developmental data, medical data, family data, comorbid disorders, treatment variables, and the compliance rate. Through these comprehensive data, the compliance rate at a mean of one year of treatment and its related factors were investigated. Results: 1) In the questionnaire on compliance with and satisfaction with MPH-IR treatment, 60% of the respondents (parents) reported a moderate or higher degree of satisfaction with the effectiveness of MPH-IR. The compliance rate for the morning prescription was 81%, but the rate for the afternoon prescription was 43%. 2) In the evaluation at one year of treatment (October 2003), 38% of the patients had dropped out of OPD treatment, and the mean compliance rate for the one-year treatment was 62%. 3) Compared with the noncompliant (drop-out) group, the compliant group showed higher total, verbal, and performance IQ scores. Among the treatment variables, a higher responder rate (clinician rating), higher medication dosage, and a higher compliance rate for the afternoon prescription were found in the compliant group compared with the noncompliant group. There were no statistical differences in the demographic variables (age, sex, SES, parental education level), medical data, developmental profiles, or academic function. Conclusion: To our knowledge, this is the first report on the compliance rate of MPH-IR treatment for children with ADHD. The compliance rate at a mean of one year of treatment was 62%, which is comparable with studies performed in other countries, especially the United States. In this study, the compliance-related factors were IQ score, clinical treatment response, dosage of MPH-IR, and early compliance with the afternoon prescription. These results suggest that clinicians should plan strategies for promoting early compliance with the afternoon prescription and for enhancing the overall treatment response.

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • A Frequency Scanning Interferometry (FSI) system, one of the most promising optical surface measurement techniques, generally gives superior optical performance compared with other 3-dimensional measuring methods, as its hardware structure is fixed in operation and only the light frequency is scanned over a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images while changing the frequency of the light source. It then transforms the intensity data of the acquired images into frequency information and calculates the height profile of target objects with frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and from relatively long processing time due to the number of images acquired in the frequency scanning phase. First, a Polarization-based Frequency Scanning Interferometry (PFSI) system is proposed for robustness against optical noise. It consists of a tunable laser as the light source, a λ/4 plate in front of the reference mirror, a λ/4 plate in front of the target object, a polarizing beam splitter (PBS), a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a λ/2 plate between the PBS and the polarizer of the light source. Using the proposed system, the problem of low-contrast fringe images can be solved with the polarization technique, and the light distribution between the object beam and the reference beam can be controlled. Second, a signal processing acceleration method is proposed for PFSI based on a parallel processing architecture consisting of parallel processing hardware and software, namely a Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the takt-time level of real-time processing. Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the obtained results show the effectiveness of the proposed system and method.
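
The FFT-based height recovery step described above can be sketched as follows with synthetic data: each pixel's intensity oscillates as the optical frequency is scanned, and the peak FFT bin along the scan axis maps back to surface height. The wavenumber step, height range, and frame count are illustrative assumptions, and this NumPy version runs on the CPU rather than with the GPU/CUDA acceleration proposed in the paper.

```python
import numpy as np

n_frames, h, w = 256, 4, 4                       # images acquired during the scan
rng = np.random.default_rng(0)
true_height = rng.uniform(0, 5e-3, (h, w))       # surface heights [m] (assumed)
dk = 2 * np.pi * 5e9 / 3e8                       # wavenumber step per frame (5 GHz step)

frames = np.arange(n_frames)[:, None, None]
# Interference intensity per pixel over the frequency scan: the fringe
# oscillates at a rate proportional to the height (optical path difference).
stack = 1 + np.cos(2 * dk * true_height[None] * frames)

# Per-pixel FFT along the scan axis; the peak bin (excluding DC) gives the
# fringe frequency, which converts back to height.
spectrum = np.abs(np.fft.rfft(stack, axis=0))
peak_bin = spectrum[1:].argmax(axis=0) + 1
height = peak_bin * np.pi / (n_frames * dk)

print(np.round(true_height * 1e3, 2))            # true heights [mm]
print(np.round(height * 1e3, 2))                 # recovered heights [mm]
```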

TREATMENT OF ECHOLALIA IN CHILDREN WITH AUTISM (자폐아동의 반향어 치료)

  • Chung, Bo-In
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.9 no.1
    • /
    • pp.47-53
    • /
    • 1998
  • The purpose of this study was to investigate the possibility of providing familiar tasks as a treatment option to decrease echolalia. Two comparisons were made: one compared a 'conversation' condition with a 'task performance' condition, and the other compared a 'task performance alone' condition with a 'task performance along with contingency of reinforcement' condition. Two echolalic children aged 12 and 13 years participated in the experiment, and an A-B-A-B-BC-B-BC design was used, in which A was conversation only, B was task performance, and C was task performance along with contingency of reinforcement. In the A condition, the therapist asked the child easy and short questions; in the B condition, the child was given familiar tasks with short instructions; and in the BC condition, each child was reinforced for his performance on the given tasks, with immediate echolalia controlled by holding his hands down for 5 seconds. Delayed echolalia was recorded without any intervention. Each child went through each of the 7 treatment conditions, with 15-minute sessions, 5 to 6 sessions per day, for 2 weeks. The mean immediate echolalia rates across the 7 treatment conditions were, for child 1, A(99%)-B(65%)-A(95%)-B(10%)-BC(7%)-B(6%)-BC(7%) and, for child 2, A(67%)-B(62%)-A(63%)-B(35%)-BC(8%)-B(4%)-BC(0%). As to the generalization of the treatment effect on immediate echolalia to the untreated delayed echolalia, child 2 showed a drastic reduction of delayed echolalia: A(35%)-B(57%)-A(56%)-B(40%)-BC(8%)-B(5%)-BC(9%). Child 1's delayed echolalia was negligible (mean = 3%) pre- and post-treatment. In conclusion, the results of this study clearly show that providing a task performance setting with familiar tasks can be helpful for minimizing echolalic responses, and that, combined with the contingency-of-reinforcement technique, it can not only reduce echolalic behavior to a negligible degree but also help the echolalic child generalize the treatment effect to overall language improvement.

Effects of Dietary Taurine on the Lipid Metabolism in Laying Hens (사료내 타우린 첨가가 산란계의 지방대사에 미치는 영향)

  • 박강희
    • Korean Journal of Poultry Science
    • /
    • v.29 no.2
    • /
    • pp.95-100
    • /
    • 2002
  • Two experiments were conducted to investigate the effect of taurine supplementation on lipid metabolism in laying hens. In experiment 1, 19-wk-old laying hens were given one of four taurine-supplemented diets (0 (control), 0.4, 0.8, and 1.2% taurine) for 10 weeks. Abdominal fat weight was 29.2% lower in the 1.2% diet than in the control. Serum concentrations of triacylglycerol and HDL-cholesterol were not different among the treatments. However, the serum concentration of total cholesterol was 22.4% higher in the 1.2% diet than in the control. Liver concentrations of triacylglycerol and total cholesterol were decreased by 26.1% and 26.4% in the 0.8% diet and by 28.2% and 26.4% in the 1.2% diet, respectively, compared to the control. The concentration of HDL-cholesterol in the liver was also 33.9% lower in the 1.2% diet than in the control. In experiment 2, 81-wk-old laying hens were allocated to one of three taurine-supplemented diets (0 (control), 1, and 2% taurine) for 6 weeks. Abdominal fat weight was 25% lower with 1% taurine supplementation than in the control. Serum concentrations of triacylglycerol, total cholesterol, and HDL-cholesterol of hens fed the 1% diet were not different from those of the control group. However, serum concentrations of triacylglycerol and total cholesterol were lower by 44.0% and 19.8%, respectively, in the 2% diet compared to the control. Furthermore, the serum concentration of HDL-cholesterol in the 2% diet was 75% higher than in the control. Liver concentrations of triacylglycerol and total cholesterol were decreased by 36.8% and 23% in the 1% diet, but increased by 78.4% and 70%, respectively, in the 2% diet, compared to the control. The concentration of HDL-cholesterol in the liver was not different between the 1% diet and the control, but was 62.8% higher in the 2% diet than in the control. These results indicated that taurine supplementation decreased fat storage in the abdominal cavity, accompanied by changes in triacylglycerol and cholesterol metabolism in laying hens.