• Title/Summary/Keyword: Web-based Technology


Current Status and Future Prospect of Plant Disease Forecasting System in Korea (우리 나라 식물병 발생예찰의 현황과 전망)

  • Kim, Choong-Hoe
    • Research in Plant Disease
    • /
    • v.8 no.2
    • /
    • pp.84-91
    • /
    • 2002
  • Disease forecasting in Korea was first studied in 1947 in the Department of Fundamental Research at the Central Agricultural Technology Institute in Suwon, where the dispersal of air-borne conidia of the rice blast and brown spot pathogens was examined. The disease forecasting system in Korea operates on information obtained from 200 main forecasting plots scattered around the country (rice 150, economic crops 50) and 1,403 supplementary observational plots (rice 1,050, others 353) maintained by the Korean government. The target crops and diseases across both kinds of forecasting plots amount to 30 crops and 104 diseases. Disease development in the forecasting plots is examined by two extension agents specialized in disease forecasting who work in the Agricultural Technology Service Center (ATSC) established in each city and county. The data obtained by the extension agents are transferred through a web-based system to a central organization, the Rural Development Administration (RDA), for analysis in a nationwide forecasting program, and then forwarded to the Central Forecasting Council, which consists of 12 members from administration, universities, research institutions, meteorological stations, and the mass media, to discuss the present state of disease development and its likely progress. As a result of the analysis, the council issues a forecasting message that is announced publicly via the mass media to 245 agencies, including the ATSC, which in turn inform local administrations, related agencies, and farmers so that disease control activities can be implemented. The future success of the plant disease forecasting system, however, is thought to depend on securing excellent extension agents specialized in disease forecasting, elevating their forecasting ability through continuous training, and furnishing them with adequate forecasting equipment. Research on plant disease forecasting in Korea has concentrated on rice blast, for which much information is available, but is substantially limited for other diseases. Most forecasting research has failed to maintain continuity on specialized topics, neglecting the steady improvement needed for practical use. Since disease forecasting loses its value without practicality, more effort is needed to improve the practicality of forecasting methods in both spatial and temporal aspects. And since the significance of disease forecasting is directly tied to economic profit, further forecasting research should be planned and pursued in relation to fungicide spray scheduling and decision-making for control activities.

A Study of Sound Expression in Webtoon (웹툰의 사운드 표현에 관한 연구)

  • Mok, Hae Jung
    • Cartoon and Animation Studies
    • /
    • s.36
    • /
    • pp.469-491
    • /
    • 2014
  • Webtoons have developed methods for expressing sound visually, and through advances in web technology we can now also literally hear sound in webtoons. It is natural to analyze the sound we can hear, but we can also analyze the sound we cannot hear. This study is grounded in the 'dual code' theory of cognitive psychology: cartoonists can create visual expressions on the basis of auditory impressions and memories, and readers can recall the sound through the processes of memory and memory retrieval. This study analyzes both audible and inaudible sound, borrowing its analytical method from film sound theory. The three main factors of sound, volume, pitch, and tone, are identified by frequency in acoustics; in comics, by contrast, they are expressed through the thickness and size of lines and the image of the sound source. The visual expression of in-screen and off-screen sound is related to the comics frame. Generally, the outside of the frame signifies off-screen sound, but some off-screen sound appears inside the frame. In addition, horror comics use sound heavily for genre effect, much as horror films do. Analyzing comics sound with this kind of film sound analysis shows that webtoons have developed creative expressive methods compared with the simple ones of early comics. In particular, arranging frames and expressing sound along the vertical scrolling movement are new to webtoons, and the types and arrangement of frames have become more varied. BGM was the first audible sound to be used, and recently BGM mixed with sound effects has appeared. Programs have also been developed that let readers hear sound as they scroll, and the horror genre in particular heightens its genre effects with this technology. Various methods of visualizing sound are being created, and this change shows that the webtoon could be a model of convergence in content.

Case Study on the Enterprise Microblog Usage: Focusing on Knowledge Management Strategy (기업용 마이크로블로그의 사용행태에 대한 사례연구: 지식경영전략을 중심으로)

  • Kang, Min Su;Park, Arum;Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.47-63
    • /
    • 2015
  • As knowledge draws attention as a new production factor that generates added value, studies continue to apply knowledge management to the business environment. In addition, as ICT (Information and Communication Technology) has been grafted onto the business environment, it has increased the task efficiency and productivity of individual workers. Accordingly, the way a business achieves its goals has changed to one in which its individual members willingly take part in the organization and share information to create new value (Han, 2003), and studies on systems and services to support this transition are being carried out. Recently, a new concept called 'Enterprise 2.0' has appeared: the extension of Web 2.0 and its technologies, which focus on participation, sharing, and openness, to the work environment of a business (Jung, 2013). Enterprise 2.0 is used as a collaborative tool to support individual creativity and collective intelligence by combining Web 2.0 technologies such as blogs, wikis, RSS, and tags with business software (McAfee, 2006). As Twitter gained popularity, the Enterprise Microblog (EMB), an Enterprise 2.0 application equivalent to Twitter for the business world, was developed, and SaaS (Software as a Service) offerings such as Yammer were introduced. Studies of EMBs mainly focus on demonstrating their usability for intra-firm communication and knowledge management. However, existing studies lean too heavily toward large companies and particular departments rather than a company as a whole, and few have examined small and medium-sized enterprises (SMEs), which have difficulty allocating separate resources and dedicated staff to introduce knowledge management. Knowledge management strategy can be classified into codification strategy and personalization strategy (Hansen et al., 1999), and how to manage the two has long been studied, but mostly with major companies as the target, leaving a gap on how the strategies apply to SMEs. In this respect, the present study focuses on a small company actually equipped with an EMB to see how it is used, and examines its knowledge management strategy for the EMB from the perspectives of codification and personalization. On the basis of prior research on knowledge management strategy and EMBs, the hypothesis was set that "as a company grows, the main application of its EMB shifts from a codification strategy to a personalization strategy."
To test the hypothesis, an SME that has used the EMB 'Yammer' since its foundation was analyzed. The case study employed a longitudinal analysis, dividing the period of EMB use into stages and analyzing the contents of the posts in each stage, as sketched below. Based on the key findings, this study proposes practical implications for operating knowledge management in small companies and for the suitable application of a knowledge management system.
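
A minimal sketch of the kind of longitudinal content analysis described above, assuming a hypothetical export of EMB posts in which each post has already been hand-coded as 'codification' (e.g., document or manual sharing) or 'personalization' (e.g., Q&A and person-to-person discussion); the column names and stage boundaries are illustrative, not taken from the study.

```python
import pandas as pd

# Hypothetical hand-coded export of EMB posts: one row per post.
posts = pd.DataFrame({
    "posted_at": pd.to_datetime([
        "2011-03-02", "2011-09-15", "2012-06-01",
        "2013-01-20", "2013-11-05", "2014-04-18",
    ]),
    # 'codification' = sharing documents/manuals; 'personalization' = Q&A, discussion.
    "strategy": ["codification", "codification", "codification",
                 "personalization", "codification", "personalization"],
})

# Divide the usage period into stages (boundaries are illustrative).
stages = pd.cut(
    posts["posted_at"],
    bins=pd.to_datetime(["2011-01-01", "2012-07-01", "2014-12-31"]),
    labels=["early", "later"],
)

# Share of each strategy per stage: a shift toward personalization in the
# later stage would be consistent with the study's hypothesis.
profile = posts.groupby(stages)["strategy"].value_counts(normalize=True)
print(profile)
```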

An Analysis of IT Trends Using Tweet Data (트윗 데이터를 활용한 IT 트렌드 분석)

  • Yi, Jin Baek;Lee, Choong Kwon;Cha, Kyung Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.143-159
    • /
    • 2015
  • Predicting IT trends has long been an important subject for information systems research. IT trend prediction makes it possible to recognize emerging waves of innovation and to allocate budgets in preparation for rapidly changing technological trends. Toward the end of each year, various domestic and global organizations predict and announce IT trends for the following year. For example, Gartner predicts the top 10 IT trends for the next year, and these predictions shape the basic assumptions of IT and industry leaders and organizations about technology and the future of IT; yet the accuracy of such reports is difficult to verify. Social media data can be a useful tool for verifying that accuracy. As social media services have gained popularity, they are used in a variety of ways, from posting about personal daily life to keeping up to date with news and trends. In recent years, social media activity in Korea has reached unprecedented levels: hundreds of millions of users now participate in online social networks and share their opinions and thoughts with colleagues and friends. In particular, Twitter is currently the major microblog service, and its central function, the 'tweet', lets users report their current thoughts and actions, comment on news, and engage in discussions. For an analysis of IT trends, we chose tweet data because Twitter not only produces massive unstructured textual data in real time but also serves as an influential channel of opinion leadership on technology. Previous studies found that tweet data provide useful information and detect societal trends effectively, and that Twitter can track issues faster than other media such as newspapers. This study therefore investigates how frequently the predicted IT trends for the following year, announced by public organizations, are mentioned on social network services such as Twitter. IT trend predictions for 2013, announced near the end of 2012 by two domestic organizations, the National IT Industry Promotion Agency (NIPA) and the National Information Society Agency (NIA), were used as the basis for this research. The present study analyzes Twitter data generated in Seoul, Korea, against the predictions of the two organizations. Twitter data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, to handle the many unrefined forms of unstructured data. To overcome these challenges, we used SAS IRS (Information Retrieval Studio), developed by SAS, to capture trends while processing big streaming Twitter datasets in real time; the system offers a framework for crawling, normalizing, analyzing, indexing, and searching tweet data. As a result, we crawled the Twitter sphere in the Seoul area and obtained 21,589 tweets from 2013 to review how frequently the IT trend topics announced by the two organizations were mentioned by people in Seoul. The results show that most IT trends predicted by NIPA and NIA were frequently mentioned on Twitter, except for a few topics such as 'new types of security threat', 'green IT', and 'next-generation semiconductor'; these are non-generalized compound terms, so they may be mentioned on Twitter in other words. To answer whether the IT trend tweets from Korea are related to the following year's IT trends in the real world, we compared Twitter's trending topics with those in Nara Market, Korea's online e-procurement system, a nationwide web-based system handling the whole procurement process of all public organizations in Korea. The correlation analysis shows that tweet frequencies of the IT trend topics predicted by NIPA and NIA are significantly correlated with the frequencies of IT topics mentioned in project announcements on Nara Market in 2012 and 2013. The main contributions of our research are: i) the IT topic predictions announced by NIPA and NIA can provide an effective guideline to IT professionals and researchers in Korea who are looking for verified IT topic trends for the following year, and ii) researchers can use Twitter to obtain useful ideas for detecting and predicting dynamic trends in technological and social issues.
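
A minimal sketch of the two analysis steps described above: counting how often predicted topic keywords appear in a tweet corpus, then correlating those counts with topic frequencies from procurement announcements. The keyword list, the toy corpora, and the use of SciPy's Pearson correlation are illustrative assumptions, not the paper's actual pipeline (which used SAS IRS).

```python
from scipy.stats import pearsonr

# Illustrative subset of predicted IT trend topics (hypothetical keywords).
topics = ["big data", "cloud", "mobile security"]

# Hypothetical corpora: one lowercase string per tweet / per announcement.
tweets = [
    "big data meets cloud in seoul startups",
    "is mobile security ready for byod?",
    "another day on the cloud",
]
announcements = [
    "procurement: cloud infrastructure for city services",
    "rfp: big data platform for traffic analysis",
]

def topic_counts(docs, topics):
    """Count documents mentioning each topic by simple substring match."""
    return [sum(topic in doc for doc in docs) for topic in topics]

tweet_freq = topic_counts(tweets, topics)
announce_freq = topic_counts(announcements, topics)

# Pearson correlation between the two frequency profiles across topics.
r, p = pearsonr(tweet_freq, announce_freq)
print(f"r={r:.2f}, p={p:.3f}")
```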

Prediction of Key Variables Affecting NBA Playoffs Advancement: Focusing on 3 Points and Turnover Features (미국 프로농구(NBA)의 플레이오프 진출에 영향을 미치는 주요 변수 예측: 3점과 턴오버 속성을 중심으로)

  • An, Sehwan;Kim, Youngmin
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.263-286
    • /
    • 2022
  • This study acquired NBA statistics covering 32 seasons, from 1990 to 2022, using web crawling, observed the variables of interest through exploratory data analysis, and generated related derived variables. Unused variables were removed through a purification process on the input data, and correlation analysis, t-tests, and ANOVA were performed on the remaining variables. For each variable of interest, the difference in means between teams that advanced to the playoffs and those that did not was tested; to complement this, the mean differences among three groups (upper/middle/lower) based on ranking were also confirmed. Of the input data, only the current season's data was used as the test set, and 5-fold cross-validation was performed by dividing the remainder into training and validation sets for model training. Overfitting was ruled out by comparing the cross-validation results with the final results on the test set and confirming that there was no difference in the performance metrics. Because the quality of the raw data is high and the statistical assumptions are satisfied, most models showed good results despite the small dataset. This study not only predicts NBA game outcomes and classifies playoff advancement using machine learning, but also examines, through feature importance, whether the variables of interest are among the major variables. Visualizing SHAP values made it possible to overcome the limitation that results cannot be interpreted from feature importance alone, and to compensate for the inconsistency of importance calculations when variables are added or removed. A number of variables related to three-pointers and turnovers, the subjects of interest in this study, were found to be among the major variables affecting playoff advancement in the NBA. While this study resembles existing sports data analysis in covering topics such as game results, playoffs, and championship predictions, and in comparing several machine learning models, it differs in that the features of interest were set in advance and statistically verified before being compared with the machine learning results. It is also differentiated from existing studies by presenting explanatory visualizations using SHAP, one of the XAI techniques.
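
A minimal sketch of the modeling step described above, assuming a tabular dataset of per-team-season statistics: 5-fold cross-validation of a tree-based classifier for playoff advancement, followed by a SHAP summary plot. The feature names and synthetic data are illustrative stand-ins, not the study's actual variables or models.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300

# Hypothetical per-team-season features (stand-ins for the study's variables).
X = pd.DataFrame({
    "three_pt_pct": rng.normal(0.36, 0.02, n),
    "turnovers_per_game": rng.normal(14.0, 2.0, n),
    "win_pct": rng.uniform(0.2, 0.8, n),
})
# Hypothetical label: did the team advance to the playoffs?
y = (X["win_pct"] + 0.05 * rng.standard_normal(n) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)

# 5-fold cross-validation, as described in the abstract.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Fit on all data and inspect per-feature contributions with SHAP.
model.fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# Older shap returns a list per class; newer returns one (n, features, classes) array.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
shap.summary_plot(sv_pos, X)
```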

The Brassica rapa Tissue-specific EST Database (배추의 조직 특이적 발현유전자 데이터베이스)

  • Yu, Hee-Ju;Park, Sin-Gi;Oh, Mi-Jin;Hwang, Hyun-Ju;Kim, Nam-Shin;Chung, Hee;Sohn, Seong-Han;Park, Beom-Seok;Mun, Jeong-Hwan
    • Horticultural Science & Technology
    • /
    • v.29 no.6
    • /
    • pp.633-640
    • /
    • 2011
  • Brassica rapa is the A-genome model species for Brassica crop genetics, genomics, and breeding. With the sequencing of the B. rapa genome complete, functional analysis of the genome is the forthcoming issue. Expressed sequence tags (ESTs) are fundamental resources supporting annotation and functional analysis of the genome, including the identification of tissue-specific genes and promoters. As of July 2011, 147,217 ESTs from 39 cDNA libraries of B. rapa had been reported in the public databases; however, little information could be retrieved from these sequences for lack of an organized database. To leverage the sequence information and maximize the use of the publicly available EST collections, the Brassica rapa tissue-specific EST database (BrTED) was developed. BrTED includes sequence information for 23,962 unigenes assembled with the StackPack program. The unigene set is used as the query unit for various analyses, such as BLAST against the TAIR gene models, functional annotation using MIPS and UniProt, gene ontology analysis, and prediction of tissue-specific unigene sets based on statistical tests. The database is composed of two main units, an EST sequence processing and information retrieval unit and a tissue-specific expression profile analysis unit, whose information and data are tightly interconnected through a web-based browsing system. RT-PCR evaluation of 29 selected unigene sets successfully amplified products from the target tissues of B. rapa. BrTED allows users to identify and analyze the expression of genes of interest and aids efforts to interpret the B. rapa genome through functional genomics. In addition, it can serve as a public resource providing reference information for studying the genus Brassica and other closely related crucifer crops.
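
A minimal sketch of the kind of statistical test that can flag a tissue-specific unigene from EST counts. The abstract says only that tissue-specific sets were predicted from statistical tests; the choice of Fisher's exact test and all counts below are illustrative assumptions.

```python
from scipy.stats import fisher_exact

# Hypothetical EST counts for one unigene: hits in the target tissue's
# libraries versus all other tissues, against the library totals.
unigene_in_tissue, unigene_elsewhere = 18, 2
total_in_tissue, total_elsewhere = 5_000, 20_000

# 2x2 table: [unigene hits, remaining ESTs] per tissue class.
table = [
    [unigene_in_tissue, total_in_tissue - unigene_in_tissue],
    [unigene_elsewhere, total_elsewhere - unigene_elsewhere],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio={odds_ratio:.1f}, p={p_value:.2e}")
# A small p-value suggests the unigene's ESTs are enriched in the target tissue.
```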

The Analyses of Geographers' Roles and Demands in Korean GIS Industries (GIS 산업에 있어서 지리학의 역할 및 수요에 대한 분석)

  • Chang Eun-mi
    • Journal of the Korean Geographical Society
    • /
    • v.39 no.4
    • /
    • pp.643-664
    • /
    • 2004
  • This study reviews what geographers have contributed to the GIS industry and to national needs. Geographers and geographers-to-be are expected to bridge the gap between what is learned in school and what must be done after graduation. The characteristics of the GIS industry in the 1990s are summarized, with an approximate evaluation of geographers' contribution at each stage. The author introduces the requirements for the licenses of geomatics and geospatial engineering experts, along with the other licenses that were important for getting a job in the GIS industry from 2003 to 2004. A questionnaire on user requirements was given to GIS practitioners in private companies and public GIS research centers and analyzed. The respondents put an emphasis on hands-on experience and programming skills; the advantages of geography, such as its capacity for integration and interdisciplinary collaboration, were not appreciated. Prospects for GIS tend to be positive, but this positive outlook was not matched by the same degree of preference for geography. Most government strategies for the next ten years of GIS focus on new-growth industries. A SWOT (strength, weakness, opportunity, threat) analysis of geography for the GIS industry suggests directions such as telematics, regional marketing strategies with web-based GIS technology, and location-based services. This implies that intra-disciplinary study within geography, compared with interdisciplinary studies, will evoke the potential of GIS.

Flipped Learning in Socioscientific Issues Instruction: Its Impact on Middle School Students' Key Competencies and Character Development as Citizens (플립러닝 기반 SSI 수업이 중학생의 과학기술 사회 시민으로서의 역량 및 인성 함양에 미치는 효과)

  • Park, Donghwa;Ko, Yeonjoo;Lee, Hyunju
    • Journal of The Korean Association For Science Education
    • /
    • v.38 no.4
    • /
    • pp.467-480
    • /
    • 2018
  • This study aims to investigate how flipped learning-based socioscientific issue instruction (FL-SSI instruction) affected middle school students' key competencies and character development. Traditional classrooms are constrained in terms of time and resources for exploring issues and making decisions on SSIs. To address these concerns, we designed and implemented an SSI instruction adopting flipped learning. Seventy-three 8th graders participated in an SSI program on four topics over 12 class periods. Two questionnaires were used as the main data sources to measure students' key competencies and character development before and after the SSI instruction. In addition, student responses and experiences shared in focus group interviews after the instruction were collected and analyzed. The results indicate that the students significantly improved their key competencies and experienced character development after the SSI instruction. The students showed statistically significant improvement in the key competencies (i.e., collaboration, information and technology, critical thinking and problem-solving, and communication skills) and in two of the three factors of character and values as global citizens (social and moral compassion, and socioscientific accountability). The interview data support the quantitative results, indicating that SSI instruction with a flipped learning strategy provided students with in-depth and rich learning opportunities. The students responded that watching web-based videos before class enabled them to understand the issue deeply and to engage actively in discussion and debate once class began. Furthermore, the class time gained through the flipped learning approach allowed the students to examine the issue from diverse perspectives.

Development and Application of ICT Teaching·Learning Process Plan for Environmentally Friendly Housing - For an Academic Girls' High School in Gwangju Metropolitan City - (ICT를 활용한 "환경친화적 주거" 교수·학습과정안 개발 및 적용 - 광주광역시 인문계 여자고등학교 학생을 대상으로 -)

  • Lee Seon-Hee;Cho Jae-Soon
    • Journal of Korean Home Economics Education Association
    • /
    • v.17 no.4 s.38
    • /
    • pp.101-115
    • /
    • 2005
  • The purpose of this research was to develop and apply an ICT-based teaching·learning process plan for environmentally friendly housing. Three main stages were used: analysis; planning and development; and application and evaluation. Three teaching subjects were selected for teaching environmentally friendly housing in the analysis stage. The webhard learning environment consisted of contents and various materials such as digital video, PPT, group activities, discussion, individual and group reporting forms, and questions. A total of 101 high school students participated in the application stage during September 22-27, 2003. Almost all students evaluated the various aspects of the contents very positively, as well as the LT cooperative learning methods and the web-based learning environment. They strongly expressed their intent to put into practice, in the future, the ways of environmentally friendly housing learned in class. The results imply that the contents and the ICT teaching·learning plan developed in this study are adequate for inclusion in the regular curriculum.


Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created while operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business; therefore, in order to gather, store, categorize, and analyze these log data, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize the flexible storage expansion needed for massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let it continue operating after recovering from a malfunction. Finally, by establishing a distributed database on the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas cannot expand across nodes when rapidly growing data must be distributed to many nodes. NoSQL does not provide the complex computations that relational databases do, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, or document-oriented. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
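
A minimal sketch of the two MongoDB properties this abstract leans on, schema-free document inserts and sharding of a log collection. The database, collection, and shard-key names are illustrative, and the sharding commands assume a connection to the mongos router of an already-configured sharded cluster.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # mongos router (assumed)
logs = client["logdb"]["logs"]

# Flexible schema: heterogeneous log documents coexist in one collection.
logs.insert_many([
    {"type": "transaction", "ts": "2013-05-01T09:12:03", "amount": 1200},
    {"type": "web", "ts": "2013-05-01T09:12:04", "url": "/login", "status": 200},
])

# Shard the collection so storage expands across nodes as log volume grows
# (requires a sharded cluster; the shard key chosen here is illustrative).
client.admin.command("enableSharding", "logdb")
client.admin.command("shardCollection", "logdb.logs", key={"ts": 1})
```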
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module compiles the results of the log analyses from the MongoDB module, the Hadoop-based analysis module, and the MySQL module by analysis time and type of aggregated log data, and presents them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions; these aggregated data are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log insertion and query performance against a log data processing system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB log insertion performance evaluations over various chunk sizes.
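
A minimal sketch of the collector's routing rule as described above: logs needing real-time analysis go to the MySQL store, while per-unit-time aggregates go to MongoDB. The log-type labels and the shape of the records are illustrative assumptions.

```python
# Hypothetical routing rule for the log collector module: the abstract says
# real-time logs go to MySQL and per-unit-time aggregates go to MongoDB.
REALTIME_TYPES = {"transaction", "alert"}  # illustrative labels

def route(log: dict) -> str:
    """Return the destination store for one classified log record."""
    return "mysql" if log.get("type") in REALTIME_TYPES else "mongodb"

batch = [
    {"type": "transaction", "ts": "2013-05-01T09:12:03"},
    {"type": "web", "ts": "2013-05-01T09:12:04"},
]
for record in batch:
    print(record["type"], "->", route(record))
```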