• Title/Summary/Keyword: 관계학습 (relational learning)


A Study on Parents' Transnational Educational Passion in the Tendency of Globalization: The Potential and Limitations of Educational Nomadism (세계화의 흐름에서 학부모의 초국가적 교육열 - 교육노마디즘의 가능성과 한계를 중심으로 -)

  • Kim, So-Hee
    • Korean Journal of Culture and Arts Education Studies
    • /
    • v.5 no.1
    • /
    • pp.97-147
    • /
    • 2010
  • Under the recent trend of globalization, any new proposal on education cannot avoid the demand for multiculturalism. Education is now exposed to circumstances far different from before, in which global cooperation and intercultural understanding are increasingly emphasized. 'Educational nomadism' is a metaphor for creating new value and significance in education. Transnational education, which can be both a crisis and an opportunity, has recently become mainstream throughout the world. In education, Korea faces a hollowing-out of its base, in which excessive dependence on US education coexists with aspirations for autonomous education. Governments and parents have spent enormous time and money, often redundantly, to secure better educational credentials. In this critical situation, it is urgent to transform Korea's modern education into a creative educational system that connects with advanced foreign systems while further developing the strengths of Korean education. A parent's investment in a child supports the creation of new culture as well as hope for a better future for Korean education. A new direction for parents' educational zeal, one that opens a door to global communitas, can unlock the potential to turn that zeal into a resource for a new civilization. Global cooperation and the effort toward communitas mean communication with the world; through such communication, a culture that forces people into zero-sum competition can leap toward an education for civilizational change that creates the pleasure of self-sufficiency and giving.

Derivation of Inherent Optical Properties Based on Deep Neural Network (심층신경망 기반의 해수 고유광특성 도출)

  • Hyeong-Tak Lee;Hey-Min Choi;Min-Kyu Kim;Suk Yoon;Kwang-Seok Kim;Jeong-Eon Moon;Hee-Jeong Han;Young-Je Park
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.695-713
    • /
    • 2023
  • In coastal waters, phytoplankton, suspended particulate matter, and dissolved organic matter intricately and nonlinearly alter the reflectance of seawater. Neural network technology, which has been advancing rapidly, is well suited to representing such complex nonlinear relationships. Whereas previous studies constructed a three-stage neural network to extract the inherent optical properties of each component, this study proposes an algorithm that directly employs a deep neural network. The dataset consists of synthetic data provided by the International Ocean Colour Coordinating Group, with the input comprising above-surface remote-sensing reflectance at nine wavelengths. We derived inherent optical properties from this dataset using a deep neural network. To evaluate performance, we compared it with a quasi-analytical algorithm and analyzed how log transformation, in relation to the data distribution, affects the performance of the deep neural network algorithm. The deep neural network accurately estimated the inherent optical properties (R² ≥ 0.9), except for the absorption coefficient of suspended particulate matter, and successfully separated the combined absorption coefficient of suspended particulate matter and dissolved organic matter into its two components. We also observed little difference in performance when the algorithm was applied directly without log transformation of the data. To apply these findings effectively to ocean color data processing, further research is needed to train on field data and additional datasets from various marine regions, to compare and analyze empirical and semi-analytical methods, and to assess the strengths and weaknesses of each algorithm appropriately.
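
A minimal sketch of the kind of network the abstract describes, mapping nine-band remote-sensing reflectance to inherent optical properties. The layer sizes, activation, optimizer, and the three output IOPs are illustrative assumptions, not the architecture or data used in the paper; the placeholder arrays stand in for the IOCCG synthetic dataset.

```python
# Sketch only: hypothetical layer sizes and outputs, random placeholder data.
import numpy as np
import tensorflow as tf

n_bands = 9        # input: above-surface Rrs at nine wavelengths
n_iops = 3         # output: e.g., a_ph, a_dg, b_bp (illustrative choice)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_bands,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(n_iops),            # linear output for regression
])
model.compile(optimizer="adam", loss="mse")

# Optional log10 transform, as discussed in the abstract; the paper reports
# little performance difference with or without it.
X = np.random.rand(1000, n_bands)             # placeholder for synthetic Rrs
y = np.random.rand(1000, n_iops)              # placeholder for synthetic IOPs
model.fit(np.log10(X + 1e-6), np.log10(y + 1e-6),
          epochs=10, batch_size=32, verbose=0)
```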

Study on data preprocessing methods for considering snow accumulation and snow melt in dam inflow prediction using machine learning & deep learning models (머신러닝&딥러닝 모델을 활용한 댐 일유입량 예측시 융적설을 고려하기 위한 데이터 전처리에 대한 방법 연구)

  • Jo, Youngsik;Jung, Kwansue
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.1
    • /
    • pp.35-44
    • /
    • 2024
  • Research on dam inflow prediction has actively explored data-driven machine learning and deep learning (ML&DL) tools across diverse domains. For precise dam inflow prediction, it is crucial not only to improve the inherent model performance but also to account for model characteristics and to preprocess the data appropriately. In particular, in dam basins influenced by snow accumulation, such as the Soyang Dam basin, rainfall records derived from snowfall melted by heated gauges distort the relationship between snow accumulation and rainfall. This study focuses on the preprocessing of rainfall data needed to apply ML&DL models to dam inflow prediction in basins affected by snow accumulation, which is essential to capture physical phenomena such as reduced outflow during winter snowfall and increased outflow during spring despite little or no rain. Three machine learning models (SVM, RF, LGBM) and two deep learning models (LSTM, TCN) were built by combining rainfall and inflow series. With optimal hyperparameter tuning, appropriate models were selected, yielding a high level of predictive performance with NSE ranging from 0.842 to 0.894. In addition, a simulated snow accumulation algorithm was developed to generate rainfall correction data that accounts for snow accumulation. Applying this correction to the machine learning and deep learning models yielded NSE values ranging from 0.841 to 0.896, a similarly high level of predictive performance. Notably, during the snow accumulation period, adjusting rainfall in the training phase led to a more accurate simulation of the observed inflow at prediction time. This underscores the importance of careful data preprocessing that takes physical factors such as snowfall and snowmelt into account when constructing data-driven models.
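
A minimal sketch of the general idea behind a snow-aware rainfall correction and of the NSE metric used to score the models. The degree-day split, the threshold temperature, and the melt factor are illustrative assumptions, not the simulated snow accumulation algorithm developed in the paper.

```python
# Sketch only: a generic degree-day snow store, not the paper's algorithm.
import numpy as np

def snow_corrected_rainfall(precip_mm, temp_c, t_snow=0.0, ddf=3.0):
    """Hold sub-freezing precipitation as snow and release it later as melt.
    precip_mm, temp_c: daily series; t_snow: rain/snow threshold (deg C);
    ddf: degree-day melt factor (mm per deg C per day) - assumed values."""
    swe = 0.0                                   # snow water equivalent in storage
    effective = np.zeros(len(precip_mm))
    for i, (p, t) in enumerate(zip(precip_mm, temp_c)):
        if t <= t_snow:
            swe += p                            # stored as snow, no inflow signal yet
        else:
            melt = min(swe, ddf * (t - t_snow))
            swe -= melt
            effective[i] = p + melt             # rain plus snowmelt forces the model
    return effective

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```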

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by traders' expectations, many studies have tried to predict stock price movements by analyzing various sources of text data. Such research has examined not only the relationship between text data and stock price fluctuations but also trading strategies based on news articles and social media responses. Studies that predict stock price movements typically apply classification algorithms to a term-document matrix, constructed in the same way as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building the term-document matrix: words with too little frequency or importance are removed, and words are also selected by measuring how much they contribute to classifying a document correctly. The conventional approach is to collect all the documents to be analyzed and select the words that influence classification. In this study, we instead analyze the documents for each individual stock and select words that are irrelevant to all categories as neutral words. We then extract the words surrounding each selected neutral word and use them to build the term-document matrix. The underlying idea is that stock movements have little to do with the presence of the neutral words themselves, whereas the words around a neutral word are more likely to affect stock price movements. The resulting term-document matrix is then fed to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, and excluded words that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. Three months of news were used as training data, and the remaining month was applied to the model to predict the next day's stock price movements. We used SVM, boosting, and random forest to build models and predict the movements of stock prices. The stock market was open for a total of 80 days over the four months (2016/02/01 ~ 2016/05/31); the first 60 days were used as the training set and the remaining 20 days as the test set. The proposed neutral-word-based algorithm showed better classification performance than word selection based on sparsity. In summary, this study predicted stock price movements by collecting and analyzing news articles on the top 10 stocks by market capitalization, estimated the fluctuations with a term-document-matrix-based classification model, and compared the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from conventional word extraction in that it uses not only the news articles for the corresponding stock but also news about other stocks to determine which words to extract: it removes not only the words that appear in both rises and falls but also the words that appear commonly in news for other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy.
The limitations of this study are that stock price prediction was framed as classifying rises and falls, and that the experiment covered only the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to demonstrate investment performance because price fluctuations and rates of return may differ. Further research is therefore needed using more stocks, and on return prediction through trading simulation.
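
A minimal sketch of the neutral-word idea described above: keep only the words that appear near pre-selected neutral words, build a term-document matrix from those context words, and classify up/down movement. The neutral words, window size, toy documents, and labels are all illustrative assumptions, not the paper's data or settings.

```python
# Sketch only: hypothetical neutral words, window size, and toy labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

neutral_words = {"announced", "market"}   # hypothetical neutral words
window = 2                                # words kept on each side of a neutral word

def context_terms(doc, neutral=neutral_words, w=window):
    """Return only the words surrounding each neutral word in the document."""
    tokens = doc.lower().split()
    kept = []
    for i, tok in enumerate(tokens):
        if tok in neutral:
            kept.extend(tokens[max(0, i - w):i] + tokens[i + 1:i + 1 + w])
    return " ".join(kept)

docs = ["Company announced record quarterly profit today",
        "Weak demand hit the market as shipments fell sharply"]
labels = [1, 0]                            # toy labels: 1 = up next day, 0 = down

vec = CountVectorizer()
X = vec.fit_transform(context_terms(d) for d in docs)   # term-document matrix
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```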

Study on Acknowledge and State of Clinical Experience for 3-years Dental Technology Department (3년제 치기공과 임상실습에 대한 인식 및 실태조사 - 일부 치과기공소 소장을 중심으로 -)

  • Park, Myung-Ja
    • Journal of Technologic Dentistry
    • /
    • v.17 no.1
    • /
    • pp.41-57
    • /
    • 1995
  • This study was conducted to collect and analyze baseline information in order to manage clinical practice efficiently, improve its educational effect, and promote the employment rate. Questionnaire interviews were conducted with 27 chiefs of dental laboratories that accept students for clinical experience from the dental technology departments of 14 junior colleges. The results were summarized as follows. Among the chiefs of dental laboratories, the 35-39 age group was the largest at 40.7%; 96.3% were male, 97.5% were junior college graduates, 92.6% had more than 10 years of experience, and 85.2% were ceramic technicians. 63.0% of the dental laboratories used for clinical experience had a space of more than 30 pyong. In terms of laboratory management, 29.6% manufactured all types of prosthetic restorations, while orthodontic appliances and ceramic restorations accounted for 7.4% and 3.8%, respectively. 40.7% had contracts with 30-3a dental clinics, 40.7% handled 10-19 referred cases per day, and for 77.8% the manufacturing time for a referred prosthetic restoration was 3-4 days. 29.6% had a seminar room for education; 11.1% had a space of more than 40 pyong, and 30-34 pyong and 35-39 pyong accounted for 7.4% each. Regarding students from the two-year education course, 18.5% cited a lack of thorough occupational preparation. For students from the three-year education course, 44.4% expected higher salaries, 74.1% expected more dental technicians to remain engaged in their field, 51.9% hoped for improvement in both theory and practice, 29.6% expected better skills, and 14.8% expected better theory. Attitudes at the clinical experience sites were distributed as follows: 59.3% saw it as merely offering a chance for experience, 25.9% as a waste of time, and 29.0% as a nuisance. The greatest emphasis of clinical experience was thorough occupational preparation (44.4%). The clinical experience sites of our college were selected after direct visits, so their management conditions were not bad, but most dental laboratories were poor in management and working environment. It is therefore difficult to choose appropriate sites, and dental laboratories are also limited in manpower and time as training providers. It is thus recommended to manage the experience period flexibly, with intervals and rotation of experience sites among colleges, and to apply an intern system for employment and for industry-college cooperation.


Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.71-90
    • /
    • 2020
  • Recently, the global insurance industry has been rapidly undergoing digital transformation through the use of artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, more and more foreign insurers have succeeded with AI-based InsurTech and platform businesses. Ping An Insurance Group Ltd., China's largest private company, is leading China's response to the fourth industrial revolution with remarkable achievements in InsurTech and digital platforms, the result of constant innovation under its corporate keywords 'finance and technology' and 'finance and ecosystem.' Accordingly, this study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI-based businesses of domestic insurers. The ser-M model is a framework for interpreting, in an integrated manner, the vision and leadership of the CEO, the historical environment of the enterprise, the utilization of various resources, and the company's unique mechanisms, in terms of subject, environment, resource, and mechanism. The case analysis shows that Ping An Insurance Group Ltd. achieved cost reduction and improved customer service by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core AI technologies such as face, voice, and facial expression recognition. In addition, online data in China and the vast offline data and insights accumulated by the company were combined with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. pursued constant innovation, and as of 2019 its sales reached $155 billion, ranking seventh among all companies in the Forbes Global 2000. Analyzing the background of this success from the ser-M perspective, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and demographic change in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital-technology-focused leadership. Based on the founder's strong leadership in response to environmental change, the company successfully led its InsurTech and platform businesses by innovating internal resources, such as investing in AI technology, securing excellent professionals, and strengthening big data capabilities, and by combining external absorptive capacity with strategic alliances across various industries. This success story offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift driven by digital technology and quickly arm themselves with digital-technology-oriented leadership to spearhead the digital transformation of their enterprises.
Second, the Korean government should urgently overhaul related laws and systems to further promote the use of data between different industries, and provide bold support such as deregulation, tax benefits, and platform provision to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to make bolder investments in the development of artificial intelligence technology so that the systematic acquisition of internal and external data, the training of technical personnel, and patent applications can be expanded, and they should quickly establish digital platforms so that diverse customer experiences can be integrated through learned AI technology. Finally, since there may be limits to generalizing from a single overseas insurance company, I hope that future research will examine multiple industries or companies, or conduct empirical studies, to explore various management strategies related to artificial intelligence technology more extensively.

The theory of lesson planning and instructional structuration: A case study of urban units in Japanese high schools (수업설계론과 수업구조화 - 일본 고등학교 도시단원을 사례로 -)

  • ;Sim, Kwang Taek
    • Journal of the Korean Geographical Society
    • /
    • v.29 no.2
    • /
    • pp.166-182
    • /
    • 1994
  • Kyonggi Province in the late Chosun Dynasty was the seat of superior government offices, a hub of 'Han' River water transportation, and located in the middle of an 'X'-shaped arterial road network. For these reasons, Kyonggi Province received inflows of commodities, information, and techniques faster than the other provinces. At this time, each local 'Eup' (an administrative district) was not dominated by the districts above it and had its own autonomy, so any 'Eup' could develop into a town, however small, whenever it had sufficient internal conditions for growth. Moreover, the markets ('Si-Jon') in big towns and the periodical markets spread over Kyonggi Province performed the commercial functions of towns, and because the military bases for the defense of the royal capital in Kyonggi Province also played non-agricultural urban roles, Kyonggi Province had far more possibilities of growing towns than the other provinces. The towns of the late Chosun Dynasty, apart from the capital and the superior administrative districts governed by a 'You-Su', were small towns of only about 3,000-5,000 people. Most town dwellers were local officials, nobles, merchants, craftsmen, and slaves, and the farmers living near town became quasi-townspeople through suburban agriculture. Among these people, the merchants were the leaders of townization. The downtowns were shaped by landform and traffic roads. The most fundamental function of towns was administrative. The rank of the official dispatched to a local administrative district ('Kun' or 'Hyun') was decided by the size of the population and the agricultural land of the county. A large county governed by a high-ranking official had more possibilities of developing into a large town, because such officials supervised officials of lower rank and secured more land and population for the town. The phenomenon of farm abandonment after the Japanese Invasion of Korea in 1592-1598 stimulated the development of towns with commercial functions. The commercial functions of towns were evident in the Si-Jon or Nan-Jon (types of markets) in big cities such as Hansung and Kaesung, while in local areas they emerged in the shape of periodical market networks allied with nearby markets (called Jang-Si) or permanent markets that grew out of periodical markets. This commercial development led to the birth of commercial towns. Kyonggi Province revealed the weakness of its defense system during both wars (the Japanese Invasion of 1592 and the Manchu Invasion of 1636), and the government reinforced the system by adding four 'You-Su-Bus' and several military bases. Each local district ('Eup') where a Geo-Jin was established was stimulated to become a town as the Jin-Kwan system was adjusted and enforced. Among the Dok-Jins (solitary military bases), Youngjongjin grew into a large garrison town that played only a defensive role. The number of towns in Kyonggi Province with non-agricultural functions was 52. Among them, 29 developed into big towns with more than 3,000 people, and most were located on the northwest-southeast axis of the 'X'-shaped arterial traffic network of the Chosun Dynasty, which indicates that traffic roads were one of the important causes of town development.
When the towns of Kyonggi Province are ranked by population and by how many functions they performed, six grades emerge. The first-grade town, Hansung, was the biggest central town of administration, commerce, and defense. The second grade includes Kaesung, which retained the historical inertia of having been the capital of the Koryo Dynasty. The third grade includes some 'You-Su-Bus' such as Soowon, Kanghwa, and Kwangju, as well as Mapo and Yongsan, from which we can infer that the commercial development of the late Chosun Dynasty strongly influenced townization. The fourth- to sixth-grade towns had similar populations but can be distinguished by how many town functions they had: fourth-grade towns were cores of administration, commerce, and defense; fifth-grade towns had administrative functions plus one of the commercial or defense functions; and sixth-grade towns had only one of these functions. When the town conditions of each grade are examined by the ratio of non-agricultural population, the towns from the first to the fourth grade differ by degree of townization, but the fourth- to sixth-grade towns show no great difference in general.


Roles and Preparation for the Future Nurse-Educators (미래 간호교육자의 역할과 이를 위한 준비)

  • Kim Susie
    • The Korean Nurse
    • /
    • v.20 no.4 s.112
    • /
    • pp.39-49
    • /
    • 1981
  • Nursing within its existing domain is expanding rapidly, both qualitatively and quantitatively. In many countries the health care system is inadequately distributed, so many people do not receive appropriate health care, and it is urgent to extend high-quality care to the whole population; even those who do receive care demand more humane, higher-quality nursing. Nursing is also expanding rapidly within the field itself. For example, in advanced countries such as the United States the nurse practitioner has emerged as a new branch of the nursing profession and as an independent practitioner within the health care system. In developing countries struggling with a severe shortage of physicians, nurses are expected to perform not only traditional nursing functions but also broader roles in the health care system, and many nurses are assigned to front-line local health centers. The legal measure recently introduced by the Korean government, allowing graduate nurses to provide health care in doctorless rural areas, is a concrete example. These rapid changes inside and outside the existing nursing field have brought about what Alvin Toffler called 'future shock.' Such dynamic change poses several questions to the nursing profession: first, what will characterize the nursing field in the future society; second, what roles must nurse-educators perform to prepare the nurses this new field requires; and third, what practical and realistic strategies can prepare the nurse-educators who will train tomorrow's nurses? 1. What will characterize the nursing field in the future society? Tomorrow's nurses will work in an environment quite different from today's, for the following reasons: 1) new technologies, including computerized and automated machines and instruments, will be widely used in delivering health care; 2) most primary health care will be provided by nurses; 3) tomorrow's health care will be consumer-driven; 4) many new specialties will emerge within nursing; 5) the future health care system will respond more sensitively to social change and its demands; 6) the emphasis of the health care system will shift from medical treatment to health care; 7) the nurse's role will move well beyond medical diagnosis and treatment planning toward more distinctive forms of practice inside and outside the hospital. Along with these changes, future nurses will need broader and deeper education and training than today's nurses in order to practice effectively. To take a holistic approach in a more advanced technological environment, they need training not only in the physical sciences and medicine but also in the behavioral and management sciences, and in their professional behavior they must be trained to be more proactive, expressive, autonomous, and applied-science oriented. Nurses must therefore become effective decision makers, problem solvers, and skilled practitioners, as well as researchers who observe consumers' health needs keenly and develop effective responses to them. 2. What roles must future nurse-educators perform? Nursing education is the cornerstone of professional practice, which implies that nurse-educators must devote themselves to supplying competent nurses able to meet the health needs of the people in the future society. To accomplish this, nurse-educators must work on two fronts: the nursing-education institution and the individual educator. Within the institution, nurse-educators should 1) provide programs that educate the nurses the future society will require; 2) continuously develop, revise, and supplement effective curricula; 3) provide thorough training according to well-designed curricula; 4) serve as models with the confidence and creativity to incorporate predicted future developments into today's curriculum; and 5) involve students in research and in the major decisions that affect their learning. As individuals, educators should keep striving to be competent and credible, to hold authority, autonomy, and originality across nursing theory, practice, and research, and to possess a genuine understanding of human beings. 3. What practical and realistic strategies will prepare competent nurse-educators to train future nurses? In discussing such strategies, the Korean situation is taken as the reference. Professional nurse-educators can be prepared in three ways: first, raising the licensing level of nurse training to a level at which professional practice can be performed; second, developing and expanding baccalaureate and graduate nursing programs to raise the level of training; and third, improving the quality of existing nursing-education programs. Because the first two require direct government involvement, they are omitted here, and only the development and improvement of existing curricula is discussed. Curriculum development that prepares educators to meet future challenges can be pursued in two ways. First, the quality of curricula can be raised by sharing ideas and experience through international exchange. When nurse-educators from different countries meet regularly to exchange and study ideas and experience, a more systematic and effective chain of development forms; arranging such meetings through international organizations such as the ICN would be a valuable opportunity. Exchanging curricula for nurse-educator training between countries, or internationally, can effectively spread those ideas within a country. An institution with sufficient nursing-education specialists can develop a new curriculum and diffuse it through annual conferences with institutions that lack such specialists; this is economical and also effective for developing curricula suited to each country's cultural situation. A second strategy is to compensate for the difficulties current nurse-educators face in integrating and developing nursing theory, practice, and research, such as not having completed courses appropriate to holistic nursing and lacking clinical practice experience. To address these practical problems provisionally, 1) some universities can develop continuing-education programs during vacations so that current nurse-educators complete the necessary and relevant courses, with clinical practice education conducted at the same time; and 2) two to three years of practical experience can be included among the admission requirements for graduate nursing-education programs. In conclusion, I believe that a true partnership between faculty and students can be achieved through the practical modeling of qualified and competent faculty.


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • As content continues to proliferate, selecting high-quality information that meets users' interests and needs is becoming more important. In this flood of information, efforts are being made to better reflect the user's intention in search results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field in which text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as finance where the flow of information is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing human-labeled text data becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of searching for stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates the results. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to address the problems of previous research and to enhance the model's effectiveness. From these processes, this study makes three contributions: first, it presents a practical and simple automatic knowledge extraction method that can be applied in practice; second, it shows how performance can be evaluated through a simple problem definition; and finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted with the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained.
Thus, when a new entity from the test set appears, its score can be calculated with every score function, and the stock whose function gives the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the test set. In the empirical study, the presented model shows 69.3% hit accuracy on a test set of 2,526 reports, a meaningfully high ratio despite the constraints under which the research was conducted. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network without learning a domain corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain to be addressed; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
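
A minimal sketch of a neural tensor network score function of the kind the abstract describes, with one-hot entity vectors and one score function per stock. The dimensions, initialization, tanh nonlinearity, anchor entity, and toy prediction loop are illustrative assumptions following the standard NTN formulation, not the paper's exact training procedure.

```python
# Sketch only: standard NTN scoring, hypothetical dimensions and usage.
import numpy as np

class NTNScore:
    def __init__(self, d, k, rng=np.random.default_rng(0)):
        self.W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor slices
        self.V = rng.normal(scale=0.1, size=(k, 2 * d))  # standard-layer weights
        self.b = np.zeros(k)
        self.u = rng.normal(scale=0.1, size=k)           # output projection

    def score(self, e1, e2):
        """u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b)"""
        bilinear = np.einsum("i,kij,j->k", e1, self.W, e2)
        standard = self.V @ np.concatenate([e1, e2]) + self.b
        return float(self.u @ np.tanh(bilinear + standard))

# Toy usage: one-hot entities (d = size of the top-frequency entity vocabulary)
d, k = 100, 4
stock_models = {"StockA": NTNScore(d, k), "StockB": NTNScore(d, k)}
new_entity = np.eye(d)[7]      # one-hot vector for an unseen report entity
anchor = np.eye(d)[0]          # e.g., the stock's own name entity (illustrative)
predicted = max(stock_models,
                key=lambda s: stock_models[s].score(anchor, new_entity))
print(predicted)
```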

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than people in many fields, including image and speech recognition. Many efforts have been made to identify current technology trends and analyze the directions of AI development, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, services, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and technologies and services that utilize them have increased rapidly; this is recognized as one of the major reasons for the fast development of AI. The spread of the technology also owes a great deal to open source software, developed by major global companies, that supports natural language recognition, speech recognition, and image recognition. This study therefore aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through online collaboration among many parties. We searched and collected a list of major AI-related projects created on Github from 2000 to July 2018, and confirmed the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The analysis showed that fewer than 100 software development projects were created per year until 2013; the number increased to 229 projects in 2014 and 597 in 2015, and the number of AI-related open source projects then grew rapidly, to 2,559 in 2016. The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that its OSS continued to be developed. Until 2015, the programming languages Python, C++, and Java were among the top ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten, and platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency instead. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, also appeared frequently. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency, the main difference being that visualization and medical imaging topics appeared at the top of the list even though they had not been at the top from 2009 to 2012, indicating that OSS was being developed in the medical field in order to utilize AI technology.
Moreover, although computer vision was in the top 10 by appearance frequency from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends and convergence.
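
A minimal sketch of the two measures used above, appearance frequency and degree centrality, computed on a topic co-occurrence network. The toy project-topic lists and the use of networkx are illustrative assumptions; the paper's actual topics come from Github project metadata.

```python
# Sketch only: hypothetical Github topic lists, not the study's dataset.
from collections import Counter
from itertools import combinations
import networkx as nx

projects = [
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "natural-language-processing", "python"],
    ["deep-learning", "computer-vision", "convolutional-neural-networks"],
]

# Appearance frequency of each topic across projects.
frequency = Counter(t for topics in projects for t in topics)

# Topics that co-occur on the same project are linked; degree centrality
# then ranks the most connected topics.
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(sorted(set(topics)), 2))
centrality = nx.degree_centrality(G)

print(frequency.most_common(5))
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])
```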