• Title/Summary/Keyword: education using computer


The Identification of the High-Risk Pregnancy, Using a Simplified Antepartum Risk-Scoring System (단순화된 산전위험득점체계를 이용한 고위험 임부의 확인)

  • Jo, Jeong-Ho
    • The Korean Nurse
    • /
    • v.30 no.3
    • /
    • pp.49-65
    • /
    • 1991
  • This study was carried out to assess the problems of pregnant women and to identify the risk factors in high-risk pregnancies, using a simplified antepartum risk-scoring system revised from Edwards' scoring system to suit the Korean situation. The instrument included four categories of factors: demographic, obstetric, medical, and miscellaneous. The survey was based on 1,300 pregnant women who were admitted; the data were analyzed with the $\chi^2$-test, F-test, and Pearson's correlation, using the SAS statistical package on the NAS computer system at KIST. The results of the study were as follows. 1. Of the 1,313 infants delivered, 560 (42.7%) were born to mothers with risk scores ≥ 7, and 753 (57.3%) were born to mothers with risk scores < 7. 2. Among the demographic factors, maternal age, parity, and education level were statistically significant in identifying high-risk pregnancies ($\chi^2$ = 20.88, 42.87, 15.60; P < 0.01). 3. Among the obstetric factors, C-section, post-term delivery, incompetent cervix, uterine anomaly, polyhydramnios, congenital anomaly, sensitized Rh-negative status, abortion, preeclampsia, excessive-size infant, prematurity, low-birth-weight infant, abnormal presentation, perinatal loss, and multiple pregnancy were statistically significant in identifying high-risk pregnancies ($\chi^2$ = 175.96, 87.5, 16.28, 21.78, 9.46, 8.10, 6.75, 22.9, 64.84, 6.93, 361.43, 185.55, 78.65, 45.52; P < 0.01). 4. Among the medical and miscellaneous factors, abnormal nutrition, anemia, UTI, other medical conditions (pulmonary disease, severe influenza), heart disease, and venereal disease were statistically significant in identifying high-risk pregnancies. 5. Among the risk factors, prematurity, low-birth-weight infant, contracted pelvis, and abnormal presentation were significantly related to the Apgar scores at 1 & 5 minutes after birth and to neonatal body weight. 6. The Apgar scores at 1 & 5 minutes after birth and neonatal body weight were significantly negatively correlated with the risk score.
7. There were statistically significant differences in the Apgar scores at 1 & 5 minutes after birth among the three risk-score groups (0-3, 4-6, 7 and above), and in neonatal body weight between the two groups (below 2.5 kg vs. the other group) (F = 104.65, 96.61, 284.92; P < 0.01). 8. Apgar scores at 1 & 5 minutes after birth (below 7) and neonatal body weight (below 2.5 kg) were significantly related to the risk score ($\chi^2$ = 65.99, 60.88, 177.07; P < 0.01). 9. Correct classification of morbid infants (1 & 5 minute Apgar score < 7) was 77.8% and 83.8%, and that of non-morbid infants (1 & 5 minute Apgar score > 7) was 60.8% and 60%. 10. There were statistically significant differences in the distribution of maternal risk scores between the morbid infants (1 & 5 minute Apgar score < 7) and non-morbid infants (1 & 5 minute Apgar score > 7) ($\chi^2$ = 64.8, 58.8; P < 0.001). 11. There was a statistically significant difference between the distribution of morbid infants (1 & 5 minute Apgar score < 7) and fetal death. 12. The predictivity for classifying high-risk cases was 12% and for classifying low-risk cases 98.3% on the 5-minute Apgar score. Suggestions for further studies are as follows: 1. Continuous prospective studies using this newly revised scoring system are strongly recommended in the obstetric service. 2. Besides the risk factors used in this study, assessment of risks by factors in another scoring system and parallel studies related to perinatal outcome are strongly recommended.
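The scoring-and-testing workflow described above can be sketched in a few lines. The factor weights below are illustrative assumptions, not the actual values of the revised Edwards system; only the cutoff of 7 and the use of a $\chi^2$ test of association come from the abstract, and the contingency table is invented for demonstration.

```python
# Hypothetical sketch of a simplified antepartum risk-scoring workflow.
# RISK_WEIGHTS values are illustrative, not the study's actual weights.
from scipy.stats import chi2_contingency

RISK_WEIGHTS = {
    "maternal_age_over_35": 2,   # demographic factor
    "prior_c_section": 3,        # obstetric factor
    "anemia": 1,                 # medical factor
    "poor_nutrition": 1,         # miscellaneous factor
}

def risk_score(factors):
    """Sum the weights of the risk factors present for one pregnancy."""
    return sum(RISK_WEIGHTS[f] for f in factors)

def is_high_risk(factors, cutoff=7):
    """Classify as high risk when the total score meets the cutoff."""
    return risk_score(factors) >= cutoff

# Chi-square test of association between risk group and infant outcome,
# on a made-up 2x2 table (high/low risk x morbid/non-morbid counts)
table = [[120, 440],   # high-risk mothers: morbid, non-morbid infants
         [ 40, 713]]   # low-risk mothers:  morbid, non-morbid infants
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```

A score at or above the cutoff flags the pregnancy for closer antepartum follow-up; the test then checks whether that flag is associated with neonatal morbidity.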


Scientific Awareness appearing in Korean Tokusatsu Series - With a focus on Vectorman: Warriors of the Earth (한국 특촬물 시리즈에 나타난 과학적 인식 - <지구용사 벡터맨>을 중심으로)

  • Bak, So-young
    • (The) Research of the performance art and culture
    • /
    • no.43
    • /
    • pp.293-322
    • /
    • 2021
  • The present study examined the scientific awareness appearing in Korean tokusatsu series, focusing on Vectorman: Warriors of the Earth. As a work representing Korean tokusatsu, Vectorman: Warriors of the Earth achieved the greatest success among such series. The work was released thanks to the continued popularity of Japanese tokusatsu since the mid-1980s and the trend of robot animations. Due to the chronic problems of Korean children's programs (the oversupply of imported programs and repeated reruns), the need for domestically produced children's programs had continued to come to the fore. However, as the popularity of Korean animation waned beginning in the mid-1990s, the burden of producing animation inevitably increased. As a result, Vectorman: Warriors of the Earth was produced as a tokusatsu rather than an animation, and because this was a time when an environment for using special-effects technology was being fostered in broadcasting stations, computer visual effects were actively used for the series. The response to the new domestically produced tokusatsu series Vectorman: Warriors of the Earth was explosive. The Vectorman series explained the abilities of cosmic beings using specific scientific terms, such as DNA synthesis, brain cell transformation, and a special psychological control device, instead of ambiguous phrases like "the scientific technology of space." Although the series is unable to describe the process and cause in detail, the way it defines technology using concrete terms rather than science fiction shows how scientific imagination was manifesting in specific forms in Korean society. Furthermore, the equal relationship between Vectorman and the aliens shows how the science of space, explained with the scientific terms of earth, is an expression of confidence in the advancement of Korean scientific technology, which represents earth.
However, the female characters fail to gain entry into the domain of science and are portrayed as unscientific beings, revealing limitations in terms of scientific awareness.

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal or superior to those of people in many fields, including image and speech recognition. In particular, many efforts have been actively made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields, including the medical, financial, manufacturing, service, and education fields. Major platforms on which complex AI algorithms for learning, reasoning, and recognition can be developed have been opened to the public as open source projects. As a result, technologies and services that utilize them have increased rapidly, which has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology is greatly indebted to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI that were created from 2000 to July 2018 on Github. The study examined the development trends of major technologies in detail by applying a text mining technique to topic information, which indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects).
The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects). The number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing technology continued to rank at the top in all years, implying that its OSS had been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequently appearing topics. After 2016, however, programming languages other than Python disappeared from the top ten topics; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were frequently appearing topics. The results of the topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that the visualization and medical imaging topics were found at the top of the list, although they had not been at the top from 2009 to 2012, indicating that OSS was developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of the convolutional neural network and reinforcement learning topics changed slightly.
The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015; in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
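The two measures driving this analysis, appearance frequency and degree centrality of topics, can be sketched on a toy example. The project topic lists below are invented for illustration; the study's actual corpus is Github topic metadata.

```python
# Toy sketch of the appearance-frequency and degree-centrality analysis:
# topics are nodes, and an edge links two topics that co-occur in a project.
# The project topic lists are illustrative, not real Github data.
from collections import Counter
from itertools import combinations
import networkx as nx

projects = [
    ["machine-learning", "python", "tensorflow"],
    ["machine-learning", "deep-learning", "tensorflow"],
    ["nlp", "python", "machine-learning"],
    ["computer-vision", "deep-learning"],
]

# Appearance frequency: how often each topic is attached to a project
freq = Counter(t for topics in projects for t in topics)

# Topic co-occurrence network for the centrality analysis
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(topics, 2))

centrality = nx.degree_centrality(G)
print(freq.most_common(3))
print(max(centrality, key=centrality.get))
```

Ranking topics by `freq` gives the appearance-frequency lists, and ranking them by `centrality` gives the degree-centrality lists that the study compares phase by phase.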

Development of Menu Labeling System (MLS) Using Nutri-API (Nutrition Analysis Application Programming Interface) (영양분석 API를 이용한 메뉴 라벨링 시스템 (MLS) 개발)

  • Hong, Soon-Myung;Cho, Jee-Ye;Park, Yu-Jeong;Kim, Min-Chan;Park, Hye-Kyung;Lee, Eun-Ju;Kim, Jong-Wook;Kwon, Kwang-Il;Kim, Jee-Young
    • Journal of Nutrition and Health
    • /
    • v.43 no.2
    • /
    • pp.197-206
    • /
    • 2010
  • Nowadays, people eat outside the home more and more frequently. Menu labeling can help people make more informed decisions about the foods they eat and help them maintain a healthy diet. This study was conducted to develop a menu labeling system using Nutri-API (Nutrition Analysis Application Programming Interface). The system offers a convenient user interface and menu labeling information in a printable format. It provides useful functions such as nutrient information for new foods and menus, a food semantic retrieval service, menu planning with subgroups, nutrient analysis information, and print formats. The system provides nutritive values with nutrient information and the energy ratio of the three major nutrients. MLS can analyze nutrients for a menu and for each subgroup, and can display nutrient comparisons with the DRIs and % Daily Nutrient Values. It also provides six different menu labeling formats with nutrient information. Therefore it can be used not only by ordinary people but also by dietitians and restaurant managers who take charge of making menus, as well as by experts in the field of food and nutrition. It is expected that the Menu Labeling System (MLS) can be useful for menu planning, nutrition education, nutrition counseling, and expert meal management.
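Two of the label calculations mentioned above, the energy ratio of the three major nutrients and % Daily Values, can be sketched as follows. The reference intakes and the sample menu are illustrative assumptions, not Nutri-API output; only the Atwater energy factors (4/4/9 kcal per gram) are standard.

```python
# Illustrative sketch of two menu-labeling calculations:
# energy ratio of the three major nutrients and % Daily Values.
# DAILY_VALUES and the sample menu are assumptions for demonstration.

KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}  # Atwater factors

DAILY_VALUES = {"carbohydrate": 324, "protein": 55, "fat": 54}  # grams, illustrative

def energy_ratio(grams):
    """Percent of total energy contributed by each major nutrient."""
    kcal = {n: g * KCAL_PER_GRAM[n] for n, g in grams.items()}
    total = sum(kcal.values())
    return {n: round(100 * k / total, 1) for n, k in kcal.items()}

def percent_daily_value(grams):
    """Percent of the daily reference intake for each nutrient."""
    return {n: round(100 * g / DAILY_VALUES[n], 1) for n, g in grams.items()}

menu = {"carbohydrate": 80, "protein": 25, "fat": 20}  # grams in a sample menu
print(energy_ratio(menu))
print(percent_daily_value(menu))
```

These per-menu figures are what a labeling format would then render next to each menu item or subgroup.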

A Web-based Internet Program for Nutritional Counseling and Diet Management of Patients with Diabetes Mellitus (당뇨병 환자의 웹기반 식사관리 및 영양상담 프로그램)

  • 한지숙;정지혜
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.33 no.1
    • /
    • pp.114-122
    • /
    • 2004
  • The purpose of this study was to develop a web-based internet program for the nutritional counseling and diet management of patients with diabetes mellitus. The program consists of four parts according to their functions and contents. The first part explains the metabolism of glucose and the mechanism of insulin and the insulin receptor, illustrated in Flash 6.0, and defines diabetes mellitus. The second part assesses general health status, such as body weight, obesity index, basal metabolic rate, and total energy requirement, from the input of age, sex, height, weight, and degree of activity. This part also provides the patient with menu lists and a one-day menu suitable to his weight and activity, and offers information on food selection, snacks, convenience foods, dining out, behavioral modification, cooking methods, food exchange lists, dietary education using a buffet, the energy and nutrients of foods and drinks, and the top 20 foods classified by nutrient. The third part is designed to investigate the dietary history of the patient, that is, to find out his inappropriate dietary habits and give him some suggestions for appropriate dietary behavior. This part also offers on-line counseling, follow-up management, and frequently asked questions. The fourth part evaluates energy and nutrient intake by comparison with the recommended dietary allowances for Koreans or standardized data for patients with diabetes mellitus. This part also analyzes the energy and nutrients of food consumed by food group and by meal, and evaluates the status of nutrient intake. These results are displayed in tabular and graphical forms on the computer screen. Therefore it is expected that the web-based internet program developed in this study will play a role in health promotion as it becomes widely used by diabetic patients.
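The assessment step of the second part can be sketched as below. The Harris-Benedict equations and the activity multipliers used here are standard formulas, but their use in this particular program is an assumption; the sample inputs are invented.

```python
# A minimal sketch of the health-status assessment described above:
# body mass index, basal metabolic rate, and total energy requirement
# from age, sex, height, weight, and activity level. The choice of the
# Harris-Benedict formula here is an assumption about the program.

def bmi(weight_kg, height_cm):
    """Body mass index: weight (kg) divided by height (m) squared."""
    h = height_cm / 100
    return weight_kg / (h * h)

def basal_metabolic_rate(sex, weight_kg, height_cm, age):
    """Harris-Benedict estimate of BMR in kcal/day."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age

ACTIVITY_FACTOR = {"light": 1.375, "moderate": 1.55, "active": 1.725}

def total_energy_requirement(sex, weight_kg, height_cm, age, activity):
    """BMR scaled by a conventional activity multiplier."""
    return basal_metabolic_rate(sex, weight_kg, height_cm, age) * ACTIVITY_FACTOR[activity]

print(round(bmi(70, 175), 1))
print(round(total_energy_requirement("male", 70, 175, 45, "moderate")))
```

The resulting energy requirement is what the program would use to select suitable menu lists for the patient.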

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading; as a result, big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, and a lot of attention is focused on using text data. The emergence of new platforms and techniques using the web has brought about the mass production of text data and active attempts to analyze it. Furthermore, the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document set. Thus, it is essential to analyze the entire set at once to identify the topic of each document. This makes the analysis process take a long time when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: the processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide and conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method can be used for topic modeling on a large number of documents with limited system resources, and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost through the ability to analyze documents in each location or place without combining the analysis object documents. Despite these many advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear: within each unit, local topics can be identified, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established; that is, assuming that the global topics are the ideal answer, the difference of a local topic from a global topic needs to be measured. Because of these difficulties, this method has not been studied sufficiently compared with other studies dealing with topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems.
First of all, we divide the entire document cluster (global set) into sub-clusters (local sets), and generate a reduced entire document cluster (RGS, reduced global set) that consists of delegate documents extracted from each local set. We try to solve the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether each document is assigned the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. In addition, through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire set. We also proposed a reasonable method for comparing the results of both methods.
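The core of the divide-and-conquer idea, fitting topic models on local sets and mapping local topics to global ones, can be sketched on a toy corpus. The documents, topic counts, and the use of LDA with cosine similarity as the mapping criterion are illustrative assumptions, not the paper's exact RGS procedure.

```python
# Toy sketch of divide-and-conquer topic modeling: fit a topic model on a
# local document set, then map each local topic to its nearest global
# topic by cosine similarity of the topic-word distributions.
# The corpus and topic counts are illustrative.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "stock market finance trading price",
    "market finance investment bank price",
    "soccer game team player score",
    "team player match game goal",
]

# One shared vocabulary so local and global topic vectors are comparable
vec = CountVectorizer()
X = vec.fit_transform(docs)

def fit_topics(X, n_topics, seed=0):
    """Fit LDA and return its topic-word weight matrix."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(X)
    return lda.components_

global_topics = fit_topics(X, 2)     # topics over the full set
local_topics = fit_topics(X[:2], 1)  # topics over one local sub-cluster

# Map each local topic to the most similar global topic
sim = cosine_similarity(local_topics, global_topics)
mapping = sim.argmax(axis=1)
print(mapping)
```

Checking whether each document lands in the same mapped topic locally and globally is then one way to score the accuracy of the divided analysis.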

Field Survey on Smart Greenhouse (스마트 온실의 현장조사 분석)

  • Lee, Jong Goo;Jeong, Young Kyun;Yun, Sung Wook;Choi, Man Kwon;Kim, Hyeon Tae;Yoon, Yong Cheol
    • Journal of Bio-Environment Control
    • /
    • v.27 no.2
    • /
    • pp.166-172
    • /
    • 2018
  • This study set out to conduct a field survey of farms with seven types of smart greenhouse, in order to figure out the actual state of the smart greenhouses distributed across the nation, before selecting a system to implement an optimal greenhouse environment and conducting research on higher productivity based on data related to crop growth, development, and environment. The findings show that the farms were close to intelligent or advanced smart farms, given the main purposes of the leading cases across the smart farm types found in the field. As for the age of the farmers, those in their forties and sixties accounted for the biggest percentages, but those in their fifties or younger ran 21 farms, accounting for approximately 70.0%. The largest number of farmers had a cultivation career of ten years or less. As for greenhouse type, the 1-2W type accounted for 50.0%, and the multi-span type accounted for 80.0% (24 farms). As for the crops cultivated, only three farms cultivated flowers, with the remaining farms growing only fruit vegetables, of which tomato and paprika accounted for approximately 63.6%. As for control systems, approximately 77.4% (24 farms) used a domestic control system. As for the control method, three farms regulated temperature and humidity only with a control panel, with the remaining farms adopting a digital control method combining a panel with a computer. There were a total of nine environmental factors to measure and control, including temperature. While all the surveyed farms measured temperature, the number of farms installing a ventilation or air-flow fan or measuring the concentration of carbon dioxide was relatively small. As for heating systems, 46.7% of the farms used an electric boiler; in addition, hot water boilers, heat pumps, and lamp-oil boilers were used. As for investment in a control system, the investment scale differed among the farms, from 10 million won to 100 million won.
As for difficulties with greenhouse management, the farmers complained about difficulties in using a smart phone and the digital control system due to their old age, and about the utter absence of education and materials on smart greenhouse management. These difficulties were followed by high consultant fees and system malfunctions, in that order.

Development Plan of Guard Service According to the LBS Introduction (경호경비 발전전략에 따른 위치기반서비스(LBS) 도입)

  • Kim, Chang-Ho;Chang, Ye-Chin
    • Korean Security Journal
    • /
    • no.13
    • /
    • pp.145-168
    • /
    • 2007
  • As society changes to an information-oriented one, the guard service needs to change as well. Communication and hardware technology are developing rapidly, and as the internet environment shifts from cable to wireless, people can access every kind of information service using mobile wireless devices such as laptop computers, PDAs, and mobile phones. The LBS field, which presents the needed information and services anytime and anywhere on various devices, is expanding its territory all the more with the appearance of the ubiquitous-computing concept. LBS uses the chip in a mobile phone to confirm the position of a subscriber at any time, to within several tens of centimeters to hundreds of meters. LBS can be divided by service method into services that use mobile communication base stations and services that apply satellites. Each service type can also be divided into location-tracking services, public safety services, location-based information services, and so on, and these are the areas to be planned together with the development of the guard service. The scale of the market is projected at 8,460 hundred million won in 2005 and 16,561 hundred million won in 2007. In this situation, it can be expected that the guard service will have to change rapidly in line with LBS applications. The study basically adopts a documentary-review method, mainly secondary documentary examination relying on learned journals and independent volumes published at home and abroad, internet searches, other study reports of all kinds, statute books, theses published by the public order research institutes of the Regional Police Headquarters, police operation data, data related to statutes, and documents and statistical data from private guard companies.
The purpose of the study is thus to explore the application of LBS and to present problems and improvement methods by indirectly analyzing the side of the managers who operate guard services adapted to LBS; the side of the government, which has to activate LBS; and the systematic, operation-management, manpower-management, and education-training aspects of the guard training courses that have to study and teach the application of the new guard service, with the intention of achieving an excellent quality of guard service.
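One elementary building block of such a location-based guard service, checking the distance between two position fixes against an alert radius, can be sketched as follows. The coordinates, radius, and function names are invented for illustration; only the haversine great-circle formula is standard.

```python
# Hypothetical sketch of one LBS building block: computing the distance
# between a guard and a protected target from two position fixes, and
# checking it against an alert radius. All positions are illustrative.
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def out_of_range(guard_pos, target_pos, radius_m=100):
    """True when the guard has drifted beyond the alert radius."""
    return haversine_m(*guard_pos, *target_pos) > radius_m

guard = (37.5665, 126.9780)    # e.g. central Seoul
target = (37.5651, 126.9895)   # roughly 1 km to the east
print(round(haversine_m(*guard, *target)))
print(out_of_range(guard, target))
```

A service built on base-station or satellite fixes would feed those positions into a check like this to trigger dispatch or alarm events.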


Extracting Beginning Boundaries for Efficient Management of Movie Storytelling Contents (스토리텔링 콘텐츠의 효과적인 관리를 위한 영화 스토리 발단부의 자동 경계 추출)

  • Park, Seung-Bo;You, Eun-Soon;Jung, Jason J.
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.279-292
    • /
    • 2011
  • Movie is a representative medium that can transmit stories to audiences. Basically, a story is described through the characters in the movie. Unlike other, simpler videos, movies deploy narrative structures to explain the various conflicts or collaborations between characters. These narrative structures consist of three main acts: beginning, middle, and end. The beginning act includes 1) the introduction of the main characters and backgrounds, and 2) the implication of conflicts and clues for incidents. The middle act describes the events developed by both inside and outside factors, and the dramatic tension of the story heightens. Finally, in the end act, the developed events are resolved, and the topic of the story and the message of the writer are transmitted. When story information is extracted from a movie, it must be considered that the information carries different weights by narrative structure; that is, extracted information has a different influence on story deployment depending on whether it is located in the beginning, middle, or end act. The beginning act is the part that exposes various information to the audience to set up the story, such as the setting of the characters and the depiction of the backgrounds. Thus, it is necessary to extract many kinds of information from the beginning act in order to summarize a movie or retrieve character information. Therefore, this paper proposes a novel method for extracting the beginning boundary: a method that detects the boundary scene between the beginning act and the middle act using the accumulation graph of characters. The beginning act consists of the scenes that introduce the important characters, imply the conflict relationships between them, and suggest clues for resolving troubles. First, to extract the scene that completes the introduction of the important characters, a scene after which no new important characters appear should be detected.
The important characters are the major and minor characters, who can be treated as important since they lead the story progression. Extras, meaning characters who appear in only a few scenes, should be excluded from the accumulation graph in order to extract the scene that completes the introduction of the important characters. Second, the inflection point is detected in the accumulation graph of characters without extras. It is the point where the increasing line changes to a horizontal line; that is, when the slope of the line stays at zero over many scenes, the starting point of this zero-slope line becomes the inflection point. Third, several scenes are considered as additional story progression, such as conflict implication and clue suggestion. A movie story actually arrives at the scene located between the beginning act and the middle act only after several additional scenes have elapsed following the introduction of the important characters. We decided the ratio of additional scenes to total scenes by experiment in order to detect this scene; the ratio was obtained as 7.67%. The story inflection point, where the story changes from the beginning act to the middle act, is found by adding this ratio to the inflection point of the graph. Our proposed method consists of these three steps. We selected 10 movies of various genres for experiment and evaluation. By measuring the accuracy of the boundary detection experiment, we have shown that the proposed method is efficient.
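The three steps above can be sketched on a toy accumulation graph. The per-scene counts are invented; only the 7.67% additional-scene ratio comes from the abstract, and the exact flat-run criterion used here is a simplifying assumption about how the inflection point is detected.

```python
# Simplified sketch of the boundary detection: given the cumulative count
# of distinct important characters per scene (extras already excluded),
# find the inflection point where the curve goes flat, then push the
# boundary forward by the additional-scene ratio. Counts are illustrative.

ADDITIONAL_SCENE_RATIO = 0.0767  # ratio of additional scenes, from the study

def inflection_point(accum):
    """Index of the first scene starting the final flat run of the curve."""
    for i in range(len(accum) - 1):
        # the curve never rises again after scene i -> flat from here on
        if all(accum[j] == accum[i] for j in range(i, len(accum))):
            return i
    return len(accum) - 1

def beginning_boundary(accum):
    """Inflection point plus the additional-scene margin, as a scene index."""
    extra = round(ADDITIONAL_SCENE_RATIO * len(accum))
    return min(inflection_point(accum) + extra, len(accum) - 1)

# Cumulative important-character counts for 20 scenes of a toy movie
accum = [1, 2, 2, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
print(inflection_point(accum))   # scene after which no new character appears
print(beginning_boundary(accum)) # estimated beginning/middle boundary scene
```

The boundary scene returned here is the estimated transition from the beginning act to the middle act.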

The Effect of Herding Behavior and Perceived Usefulness on Intention to Purchase e-Learning Content: Comparison Analysis by Purchase Experience (무리행동과 지각된 유용성이 이러닝 컨텐츠 구매의도에 미치는 영향: 구매경험에 의한 비교분석)

  • Yoo, Chul-Woo;Kim, Yang-Jin;Moon, Jung-Hoon;Choe, Young-Chan
    • Asia pacific journal of information systems
    • /
    • v.18 no.4
    • /
    • pp.105-130
    • /
    • 2008
  • Consumers in the e-learning market differ from those in other markets in that they are replaced on a specific time scale. For example, e-learning content aimed at high-school seniors cannot be consumed by a given consumer beyond the designated period of time; hence e-learning service providers need to attract new groups of students every year. Due to the lack of information on products designed for these continuously emerging consumers, the consumers face difficulties in making rational decisions in a short time period. The increased uncertainty of product purchase leads customers to herding behaviors: obtaining information about the product from others and imitating them. Taking these features of the e-learning market into consideration, this study focuses on online herding behavior in purchasing e-learning content. There is no definite concept of e-learning; it is discussed from a wide range of perspectives, from educational engineering to management to e-business. Based upon the existing studies, we identify two main viewpoints on e-learning. The first defines e-learning as a concept that includes existing terminologies such as CBT (Computer Based Training), WBT (Web Based Training), and IBT (Internet Based Training); in this view, e-learning utilizes IT in order to support professors and part of, or the entire, education system. In the second perspective, e-learning is defined as the usage of Internet technology to deliver diverse intelligence- and achievement-enhancing solutions; in other words, only education delivered through the Internet and networks can be classified as e-learning. We take the second definition of e-learning as our working definition. The main goal of this study is to investigate what factors affect consumer intention to purchase e-learning content, and to identify the differential impact of those factors between consumers with purchase experience and those without.
To accomplish this goal, the study focuses on herding behavior and perceived usefulness as antecedents to behavioral intention. The proposed research model extends the Technology Acceptance Model by adding herding behavior and usability to take into account, respectively, the unique characteristics of the e-learning content market and of e-learning systems use. The current study also includes consumer experience with e-learning content purchase, because previous experience is believed to affect purchasing intention when consumers buy experience goods or services. Previous studies on e-learning did not consider the characteristics of the e-learning content market or the differential impact of consumer experience on the relationship between the antecedents and behavioral intention, which is the target of this study. This study employs a survey method to empirically test the proposed research model. A survey questionnaire was developed and distributed to 629 informants, and 528 responses were collected, consisting of a potential customer group (n = 133) and an experienced customer group (n = 395). The data were analyzed using the PLS method, a structural equation modeling method. Overall, both herding behavior and perceived usefulness influence consumer intention to purchase e-learning content. In detail, in the case of the potential customer group, herding behavior has a stronger effect on purchase intention than perceived usefulness does; in the case of the purchase-experienced customer group, however, perceived usefulness has a stronger effect than herding behavior does. In sum, the results of the analysis show that, with regard to purchasing experience, perceived usefulness and herding behavior had differential effects upon the purchase of e-learning content. As a follow-up analysis, the interaction effects of the number of purchase transactions and herding behavior/perceived usefulness on purchase intention were investigated.
The results show that there are no interaction effects. This study contributes to the literature in a couple of ways. From a theoretical perspective, it examined and showed evidence that the characteristics of the e-learning market, such as the continuous renewal of consumers (and thus high uncertainty) and individual experiences, are important factors to be considered when the purchase intention of e-learning content is studied, and it can be used as a basis for future studies on e-learning success. From a practical perspective, this study provides several important implications for the types of marketing strategies e-learning companies need to build, including target group attraction, word-of-mouth management, and enhancement of web site usability and quality. The limitations of this study are also discussed for future studies.