• Title/Summary/Keyword: Simple structure


Proposal for the Hourglass-based Public Adoption-Linked National R&D Project Performance Evaluation Framework (Hourglass 기반 공공도입연계형 국가연구개발사업 성과평가 프레임워크 제안: 빅데이터 기반 인공지능 도시계획 기술개발 사업 사례를 바탕으로)

  • SeungHa Lee;Daehwan Kim;Kwang Sik Jeong;Keon Chul Park
    • Journal of Internet Computing and Services, v.24 no.6, pp.31-39, 2023
  • The purpose of this study is to propose a scientific performance evaluation framework for measuring and managing the overall outcomes of complex project types that are linked to public demand-based commercialization, such as information system projects and public procurement, within integrated national R&D projects. For integrated national R&D projects in which multiple research institutes jointly produce a single final product, and whose results are demonstrated and commercialized in response to public demand, the existing evaluation system, which rates performance on the short-term outputs of the detailed tasks comprising the project, has limitations in evaluating the mid- to long-term effects and practicality of the integrated research products. Moreover, as the paradigm of national R&D shifts toward a mission-oriented one that emphasizes efficiency, performance evaluation needs to refocus on the effectiveness and practicality of the results. In this study, we propose a performance evaluation framework that uses the Hourglass model to evaluate, from a structural perspective, how complete each national R&D project is in practical terms such as effectiveness, beyond simple short-term outputs. In particular, it presents an integrated performance evaluation framework that links top-down and bottom-up approaches along the Tool-System-Service-Effect structure of R&D projects. By applying the proposed detailed evaluation indicators and performance evaluation frame to an actual national R&D project, the validity of the indicators and the effectiveness of the proposed frame were verified, and these results are expected to provide academic, policy, and industrial implications for the performance evaluation systems of national R&D projects that emphasize efficiency.
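As a rough illustration of the Tool-System-Service-Effect structure described above, the sketch below models a bottom-up roll-up of layer scores in Python. The layer names come from the abstract; the indicator fields, weights, and scoring rule are illustrative assumptions, not the paper's actual evaluation indicators.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Tool-System-Service-Effect hierarchy. The layer
# names are from the abstract; everything else is an illustrative assumption.
@dataclass
class Indicator:
    name: str
    score: float   # normalized to [0, 1] by the evaluator
    weight: float  # relative importance within its layer

@dataclass
class Layer:
    name: str  # one of "Tool", "System", "Service", "Effect"
    indicators: list[Indicator] = field(default_factory=list)

    def score(self) -> float:
        total = sum(i.weight for i in self.indicators)
        return sum(i.score * i.weight for i in self.indicators) / total

def project_scores(layers: list[Layer]) -> dict[str, float]:
    """Bottom-up roll-up: each layer is scored from its own indicators."""
    return {layer.name: layer.score() for layer in layers}
```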

Efficient Deep Learning Approaches for Active Fire Detection Using Himawari-8 Geostationary Satellite Images (Himawari-8 정지궤도 위성 영상을 활용한 딥러닝 기반 산불 탐지의 효율적 방안 제시)

  • Sihyun Lee;Yoojin Kang;Taejun Sung;Jungho Im
    • Korean Journal of Remote Sensing, v.39 no.5_3, pp.979-995, 2023
  • As wildfires are difficult to predict, real-time monitoring is crucial for a timely response. Geostationary satellite images are very useful for active fire detection because they can monitor a vast area with high temporal resolution (e.g., 2 min). Existing satellite-based active fire detection algorithms detect thermal outliers using threshold values based on statistical analysis of brightness temperature. However, the difficulty of establishing suitable thresholds hinders such methods from detecting low-intensity fires and from achieving generalized performance. In light of these challenges, machine learning has emerged as a potential solution. Until now, relatively simple techniques such as random forest, vanilla convolutional neural networks (CNN), and U-Net have been applied to active fire detection. This study therefore proposed an active fire detection algorithm based on state-of-the-art (SOTA) deep learning techniques, trained on data from the Advanced Himawari Imager and evaluated over East Asia and Australia. The SOTA model was developed by applying EfficientNet and the Lion optimizer, and the results were compared with a model using a vanilla CNN structure. EfficientNet outperformed the CNN with F1-scores of 0.88 and 0.83 in East Asia and Australia, respectively. Performance improved further once weighted loss, equal sampling, and image augmentation were applied to address the data imbalance, yielding F1-scores of 0.92 in East Asia and 0.84 in Australia. It is anticipated that the timely responses facilitated by this SOTA deep learning-based approach to active fire detection will effectively mitigate the damage caused by wildfires.
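The abstract attributes part of the gain to a weighted loss that counters class imbalance. The minimal sketch below shows one common way to express such a loss in PyTorch; torchvision's EfficientNet-B0 and the AdamW optimizer are stand-ins for the paper's EfficientNet variant and the Lion optimizer, and the 1:50 fire-to-background ratio is an assumed figure.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

# Sketch of imbalance-aware training for binary fire/no-fire patches.
# Assumptions: 3-channel input patches for brevity (AHI offers more bands),
# and an approximate 1 fire patch per 50 background patches.
model = efficientnet_b0(weights=None, num_classes=1)  # single fire logit

# Weighted loss: up-weight the rare positive (fire) class so its gradient
# contribution matches the abundant background class.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([50.0]))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """x: (N, 3, H, W) patches; y: (N, 1) binary fire labels."""
    optimizer.zero_grad()
    loss = criterion(model(x), y.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```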

Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy, v.13 no.1, pp.151-166, 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies of predation and of limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity for abstracting from complex realities. However, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interaction between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests on the argument that predation is costly: it inflicts more losses upon the predator than upon the rival producer, and is therefore unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently, several attempts have been made to overcome this modelling difficulty by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, share one serious weakness: they resort to ad hoc, external constraints in order to generate profit-maximizing predation. The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints other than the assumption of asymmetrically distributed information. The underlying intuition of the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm. If the monopolist has high costs, however, the rival will definitely enter the market, because it can make positive profits. In this situation, if the incumbent unwittingly produces its monopoly output, the entrant can infer the monopolist's cost by observing the monopolist's price. Knowing this, the high-cost monopolist raises its output to the level that a low-cost firm would have produced, in an effort to conceal its cost condition. This constitutes limit pricing. The same logic applies when there is a rival competitor in the market: producing the high-cost duopoly output is self-revealing and thus to be avoided. The firm therefore chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer, thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game, and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy.
This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. The problem, however, is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. The difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring. Moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction against limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
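To make the pooling intuition concrete, here is a small numeric sketch under an assumed linear Cournot setup with inverse demand P = a - (q1 + q2); the demand and cost numbers are illustrative and are not taken from the paper.

```python
# Worked numeric sketch of the mimicking (limit-pricing) logic above.
# Assumed parameters: a = 10, c_low = 2, c_high = 4.
a, c_low, c_high = 10.0, 2.0, 4.0

def monopoly_output(c):
    # argmax_q (a - q - c) * q  =>  q = (a - c) / 2
    return (a - c) / 2

# Full-information monopoly outputs would reveal cost through quantity/price:
print(monopoly_output(c_low), monopoly_output(c_high))  # 4.0 vs 3.0

# Limit pricing: the high-cost incumbent mimics the low-cost output (4.0),
# accepting lower current profit to keep the entrant uncertain about cost.
q_mimic = monopoly_output(c_low)
profit_mimic = (a - q_mimic - c_high) * q_mimic                       # 8.0
q_honest = monopoly_output(c_high)
profit_honest = (a - q_honest - c_high) * q_honest                    # 9.0
print(profit_honest - profit_mimic)  # 1.0 = current-period cost of concealment
```

The one-unit profit sacrifice is rational when it deters entry worth more than that sacrifice in future periods, which is exactly the trade-off the model formalizes.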


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems, v.20 no.4, pp.89-105, 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and that content is released on the Internet in real time. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found that their methods are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempt to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as an open API, search tools, a DB-to-DB interface, or purchased content. The second phase is pre-processing, which generates useful material for meaningful analysis: if garbage data is not removed, the results of social media analysis will not provide meaningful or useful business insights, so natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, where the cleansed social media content is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply status, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is used for reputation analysis; there are also various applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is the visualization and presentation of the analysis results. The major purpose of this phase is to explain the results and help users comprehend their meaning; therefore, to the extent possible, its deliverables should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the market leader, NS Food, with a 66.5% market share; the firm has held the No. 1 position in the Korean "Ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, and reputation. In these phases, we used freeware such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As the result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples built with open-library packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. The heat map shows the movement of sentiment or volume in a category-by-time matrix, with color density indicating intensity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers who need to grasp the "big picture" quickly, since a tree map can present buzz volume and sentiment in a hierarchical structure for a given period. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how such results can support timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
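As a flavor of the analyzing phase, the sketch below shows the core of lexicon-based polarity scoring. The paper's actual implementation uses R packages (TM, KoNLP); this Python rendering of the same counting idea and the mini-lexicon in it are illustrative assumptions, not the paper's domain-specific resources.

```python
# Minimal lexicon-based polarity sketch: count positive minus negative hits.
# The lexicons below are made up for illustration only.
POSITIVE = {"delicious", "tasty", "love", "best"}
NEGATIVE = {"salty", "bland", "worst", "expensive"}

def polarity(text: str) -> int:
    """Positive minus negative lexicon hits; the sign gives the polarity."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

posts = ["The noodles are delicious but a bit salty",
         "Worst ramen ever, bland and expensive"]
for p in posts:
    print(polarity(p), p)   # 0 (mixed), -3 (negative)
```

Aggregating such per-post scores by category and time period is what feeds the volume/sentiment graphs, heat maps, and valence tree maps described above.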

A STUDY ON THE IONOSPHERE AND THERMOSPHERE INTERACTION BASED ON NCAR-TIEGCM: DEPENDENCE OF THE INTERPLANETARY MAGNETIC FIELD (IMF) ON THE MOMENTUM FORCING IN THE HIGH-LATITUDE LOWER THERMOSPHERE (NCAR-TIEGCM을 이용한 이온권과 열권의 상호작용 연구: 행성간 자기장(IMF)에 따른 고위도 하부 열권의 운동량 강제에 대한 연구)

  • Kwak, Young-Sil;Richmond, Arthur D.;Ahn, Byung-Ho;Won, Young-In
    • Journal of Astronomy and Space Sciences, v.22 no.2, pp.147-174, 2005
  • To understand the physical processes that control high-latitude lower thermospheric dynamics, we quantify the forces that are mainly responsible for maintaining the high-latitude lower thermospheric wind system with the aid of the National Center for Atmospheric Research Thermosphere-Ionosphere Electrodynamics General Circulation Model (NCAR-TIEGCM). Momentum forcing is statistically analyzed in magnetic coordinates, and its behavior with respect to the magnitude and orientation of the interplanetary magnetic field (IMF) is further examined. By subtracting the values with zero IMF from those with non-zero IMF, we obtained the difference winds and forces in the high-latitude lower thermosphere (<180 km). They show a simple structure over the polar cap and auroral regions for positive ($\overline{B}_y$ > 0.8|$\overline{B}_z$|) or negative ($\overline{B}_y$ < -0.8|$\overline{B}_z$|) IMF-$\overline{B}_y$ conditions, with maximum values appearing around -80$^{\circ}$ magnetic latitude. The difference winds and difference forces for negative and positive $\overline{B}_y$ have opposite signs and similar strength. For positive ($\overline{B}_z$ > 0.3125|$\overline{B}_y$|) or negative ($\overline{B}_z$ < -0.3125|$\overline{B}_y$|) IMF-$\overline{B}_z$ conditions, the difference winds and difference forces extend to subauroral latitudes. The difference winds and difference forces for negative $\overline{B}_z$ have a sign opposite to those for positive $\overline{B}_z$ and are stronger, indicating that negative $\overline{B}_z$ has a stronger effect on the winds and momentum forces than positive $\overline{B}_z$. At higher altitudes (>125 km) the primary forces that determine the variations of the neutral winds are the pressure gradient, Coriolis, and rotational Pedersen ion drag forces; however, at various locations and times significant contributions can be made by the horizontal advection force. At lower altitudes (108-125 km) the pressure gradient, Coriolis, and non-rotational Hall ion drag forces determine the variations of the neutral winds. Below about 108 km, the balance between the pressure gradient and Coriolis forces tends to generate geostrophic motion. The IMF-$\overline{B}_y$-dependent average momentum forces act more significantly on the northward component of the neutral motion, except for the ion drag. At lower altitudes (108-125 km), under negative IMF-$\overline{B}_y$ conditions the ion drag force tends to generate a warm clockwise circulation with downward vertical motion, associated with adiabatic compression heating in the polar cap region; under positive IMF-$\overline{B}_y$ conditions it tends to generate a cold anticlockwise circulation with upward vertical motion, associated with adiabatic expansion cooling in the polar cap region. For negative IMF-$\overline{B}_z$ the ion drag force tends to generate a cold anticlockwise circulation with upward vertical motion in the dawn sector; for positive IMF-$\overline{B}_z$ it tends to generate a warm clockwise circulation with downward vertical motion in the dawn sector.
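For reference, the geostrophic balance invoked for the lowest altitudes is the standard one from fluid dynamics, not a formula specific to this paper: the Coriolis force on the horizontal wind balances the horizontal pressure gradient,

\[
f\,\hat{\mathbf{z}} \times \mathbf{u}_h = -\frac{1}{\rho}\,\nabla_h p,
\qquad
u = -\frac{1}{f\rho}\frac{\partial p}{\partial y},
\quad
v = \frac{1}{f\rho}\frac{\partial p}{\partial x},
\]

where $f$ is the Coriolis parameter, $\rho$ the neutral density, and $\mathbf{u}_h$ the horizontal wind.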

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.18 no.4, pp.59-77, 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. We approach the predictive models from the perspective of two different analyses. The first concerns the analysis period: we divide the period into before and after the IMF financial crisis and examine whether there is a difference between the two. The second concerns the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two, or three years ahead. A total of six prediction models are therefore developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method, building trees that label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. Well-known decision tree induction algorithms include CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed, which yields better performance than the others. We obtained the rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, including 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records covering a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables from the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The experimental results show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The results also show that stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction, whereas long-term prediction of a rights issue is affected by indices of profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different types of industries show different patterns for rights issues.
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a broader set of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained with other data mining techniques such as neural networks, logistic regression, and SVM. Second, we need to develop and evaluate new prediction models that include variables which the theory of capital structure suggests are relevant to rights issues.
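For readers who want to reproduce the modeling step, the sketch below shows an equivalent train/test workflow in Python; scikit-learn's CART implementation stands in for C5.0 (which PASW Modeler provides but scikit-learn does not), and X, y, and the depth cap are placeholders and assumptions rather than the paper's settings.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def build_rights_issue_model(X, y):
    """X: firm-year rows of the 84 financial indices; y: rights-issue flag."""
    # 60/40 split mirrors the paper's model-building/test partition.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.6, random_state=0)
    tree = DecisionTreeClassifier(max_depth=5)  # depth cap is an assumption
    tree.fit(X_tr, y_tr)
    return tree, tree.score(X_te, y_te)  # accuracy on the held-out 40%
```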

Dry etching of polycarbonate using O2/SF6, O2/N2 and O2/CH4 plasmas (O2/SF6, O2/N2와 O2/CH4 플라즈마를 이용한 폴리카보네이트 건식 식각)

  • Joo, Y.W.;Park, Y.H.;Noh, H.S.;Kim, J.K.;Lee, S.H.;Cho, G.S.;Song, H.J.;Jeon, M.H.;Lee, J.W.
    • Journal of the Korean Vacuum Society, v.17 no.1, pp.16-22, 2008
  • We studied plasma etching of polycarbonate in $O_2/SF_6$, $O_2/N_2$ and $O_2/CH_4$. A capacitively coupled plasma system was employed for the research. For patterning, we used photolithography with UV exposure after coating a photoresist on the polycarbonate. The main variables in the experiment were the mixing ratio of $O_2$ and the other gases, and the RF chuck power. Notably, we used only a mechanical pump to operate the system. The chamber pressure was fixed at 100 mTorr. Surface profilometry, atomic force microscopy, and scanning electron microscopy were all used to characterize the etched polycarbonate samples. According to the results, $O_2/SF_6$ plasmas gave a higher polycarbonate etch rate than pure $O_2$ or $SF_6$ plasmas. For example, at 100 W RF chuck power and 100 mTorr chamber pressure, a 20 sccm $O_2$ plasma provided a polycarbonate etch rate of about $0.4{\mu}m$/min and 20 sccm $SF_6$ produced only $0.2{\mu}m$/min, whereas the mixed plasma of 60% $O_2$ and 40% $SF_6$ flow rate yielded about $0.56{\mu}m$/min, even with a lower induced -DC bias than that of $O_2$. Adding more $SF_6$ to the mixture reduced the polycarbonate etch rate. The surface roughness of the etched polycarbonate, measured by atomic force microscopy, was about 3 times worse, but examination with scanning electron microscopy indicated that the surface was comparable to that of the photoresist. Increasing the RF chuck power raised the -DC bias on the chuck and the polycarbonate etch rate almost linearly. The etch selectivity of polycarbonate to photoresist was about 1:1. These results mean that a simple capacitively coupled plasma system with $O_2/SF_6$ plasmas can be used to fabricate microstructures on polymers, and the approach can be applied to plasma processing of other polymers.
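The quoted figures relate through simple arithmetic: an etch rate is step height divided by etch time, and selectivity is the ratio of two such rates. A minimal sketch with the abstract's numbers plugged in (the 5 um target depth is an assumed example, not from the paper):

```python
# Etch rate = step height / etch time; selectivity = ratio of two rates.
def etch_rate_um_per_min(step_height_um: float, time_min: float) -> float:
    return step_height_um / time_min

rate_O2  = 0.40  # um/min, 20 sccm O2, 100 W, 100 mTorr (abstract's value)
rate_SF6 = 0.20  # um/min, 20 sccm SF6 (abstract's value)
rate_mix = 0.56  # um/min, 60% O2 / 40% SF6 (abstract's value)

print(rate_mix / rate_O2)   # 1.4x enhancement over pure O2
print(rate_mix / rate_mix)  # ~1:1 polycarbonate-to-photoresist selectivity

depth_um = 5.0                      # assumed structure depth for illustration
print(depth_um / rate_mix, "min")   # ~8.9 min to etch a 5 um structure
```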

Characteristics of Everyday Movement Represented in Steve Paxton's Works: Focused on Satisfyin' Lover, Bound, and Contact at 10th & 2nd (스티브 팩스톤(Steve Paxton)의 작품에서 나타난 일상적 움직임의 특성에 관한 연구: <Satisfyin' Lover>, <Bound>, <Contact at 10th & 2nd>를 중심으로)

  • KIM, Hyunhee
    • Trans-, v.3, pp.109-135, 2017
  • The purpose of this thesis is to analyze the characteristics of everyday movement shown in the performances of Steve Paxton. For a long time, a work of art was treated as a special object enjoyed by the upper classes as high culture, so a wide gap existed between everyday life and art; the emergence of everyday elements in works of art therefore signals a change in public awareness accompanying social change. Postmodernism, the period in which the boundary between art and everyday life became uncertain, arose in the postwar society after the Second World War, amid a social situation rapidly changing into a capitalist one. The changes of this time led scholars to approach concepts of everyday life academically and affected artists through the pluralistic postmodern spirit of the age, which refused totality. In the same period, modern dance also faced a turning point in post-modern dance: after the Second World War, modern dance began to be judged as having reached its limit, and at this juncture a new current emerged, headed by dancers including those of the Judson Dance Theatre. Steve Paxton, one of the founders of the Judson Dance Theatre, who had danced in Merce Cunningham's company, was critical of the conditions of dance companies, their social structure, and the process by which movement is made. This thinking appears in his early performances as an attempt to bring everyday motion itself onto the stage. His early work, represented by the act of walking, attracted attention as a simple motion that excludes all the artful elements of existing dance performance and can be carried out by someone who is not a dancer. Although the use of everyday movement is regarded as an open characteristic of post-modern dance, prior research on it is rare, which motivated this study. In addition, studies related to Steve Paxton are skewed toward Contact Improvisation, of which he was an active practitioner. Focusing on his use of ordinary movement before he concentrated on Contact Improvisation, this study also examines his other attempts, including Contact Improvisation, after the beginning of his performance career. The study therefore analyzes Paxton's performances Satisfyin' Lover, Contact at 10th & 2nd, and Bound, and on this basis draws out their everyday characteristics. In addition, related books, academic essays, dance articles, and reviews are consulted to clarify the concept of everyday life and to understand the dance-historical movement of post-modern dance. Paxton attracted attention for an activity that began with a critical approach to the movement of existing modern dance. Performed by walkers who were not dancers, the walking in Satisfyin' Lover gave aesthetic meaning to everyday movement. He was later influenced by Eastern thought and developed Contact Improvisation, generating movement through the energy of natural laws. He also brought everyday objects into his performances and used mundane movement and impromptu gestures originating from a relaxed body to deliver various images. The everyday movement in his performances represents a change in the awareness of dance performance as traditionally maintained, including a change in the dance genre of the field. His unprecedented, experimental work should be highly evaluated as an effort to overcome the limits of modern dance.


Open Digital Textbook for Smart Education (스마트교육을 위한 오픈 디지털교과서)

  • Koo, Young-Il;Park, Choong-Shik
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.177-189, 2013
  • In smart education, digital textbooks play a very important role as the face-to-face medium for learners. Standardization of digital textbooks will promote the industrialization of digital textbooks for content providers and distributors as well as for learners and instructors. This study explores ways to standardize digital textbooks around the following three objectives. (1) Digital textbooks should serve as media for blended learning that supports both on- and off-line classes, should operate on a common EPUB viewer without any special dedicated viewer, and should utilize the existing frameworks of e-learning content and learning management. The reason to adopt EPUB as the standard for digital textbooks is that it avoids specifying yet another book format and can take advantage of the industrial base of EPUB's standards-rich content and distribution structure. (2) Digital textbooks should support a low-cost open-market service built on currently available standard open software. (3) To provide appropriate learning feedback to students, digital textbooks should provide a foundation that accumulates and manages all learning activity information on a standard infrastructure for educational big data processing. In this study, the digital textbook for a smart education environment is referred to as the open digital textbook. The components of the open digital textbook service framework are (1) digital textbook terminals such as smart pads, smart TVs, smart phones, and PCs; (2) a digital textbook platform to display and run digital content on those terminals; (3) a learning content repository in the cloud that maintains accredited learning content; (4) an app store through which learning content developers provide and distribute secondary learning content and learning tools; and (5) an LMS as a learning support/management tool that classroom teachers use to create instructional materials. In addition, locating all of the hardware and software that implement the smart education service in the cloud takes advantage of cloud computing for efficient management and reduced expense. The open digital textbook for smart education can be seen as providing an e-book-style interface to the LMS for learners. In open digital textbooks, the representation of text, images, audio, video, equations, etc. is a basic function, but painting, writing, and problem solving are beyond the capabilities of a simple e-book. Teacher-to-student, learner-to-learner, and team-to-team communication is also required through the open digital textbook. To represent student demographics, portfolio information, and class information, the standards used in e-learning are desirable. To pass learner tracking information about learner activities to the LMS (Learning Management System), the open digital textbook must have a recording function and a communication function with the LMS. DRM is a function for protecting copyrights; currently, e-book DRM is controlled by each book viewer, so if the open digital textbook accommodates the DRM schemes used by the various e-book viewers, the implementation of redundant features can be avoided. Security/privacy functions are required to protect information about study or instruction from third parties, and UDL (Universal Design for Learning) is a learning support function for those whose disabilities make coursework difficult.
The open digital textbook, based on the e-book standard EPUB 3.0, must (1) record learning activity log information and (2) communicate with the server to support learning activities. While these recording and communication functions, which are not defined in current standards, can be implemented in JavaScript and run in current EPUB 3.0 viewers, a strategy is needed for proposing them as part of the next-generation e-book standard, or as a special standard (EPUB 3.0 for education). Future research will implement an open-source program following the proposed open digital textbook standard and present new educational services, including big data analysis.
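A minimal sketch of what the proposed recording and communication functions might exchange, written in Python for brevity (the abstract's prototype lives as JavaScript inside the EPUB viewer). The endpoint URL and field names are hypothetical, loosely modeled on xAPI-style activity statements; they are not defined by the paper or by EPUB 3.0.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical learning-activity statement: actor/verb/object fields are
# assumptions for illustration, not a standard proposed by the paper.
def record_activity(learner_id: str, textbook_id: str,
                    verb: str, obj: str) -> dict:
    return {"actor": learner_id,        # who acted
            "verb": verb,               # e.g. "answered", "highlighted"
            "object": obj,              # e.g. a section or quiz item id
            "textbook": textbook_id}

def send_to_lms(statement: dict,
                lms_url: str = "https://lms.example.org/activities"):
    """Communication function: POST the log record to the LMS endpoint."""
    req = Request(lms_url, data=json.dumps(statement).encode(),
                  headers={"Content-Type": "application/json"})
    return urlopen(req)  # the server response confirms the log was stored
```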

SANET-CC : Zone IP Allocation Protocol for Offshore Networks (SANET-CC : 해상 네트워크를 위한 구역 IP 할당 프로토콜)

  • Bae, Kyoung Yul;Cho, Moon Ki
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.87-109, 2020
  • Currently, thanks to major strides in wired and wireless communication technology, a variety of IT services are available on land, and this trend is leading to increasing demand for IT services on vessels as well. Requests for services such as two-way digital data transmission, the Web, and apps are rising to the level available on land. However, while a high-speed information communication network is easily accessible on land because it rests on fixed infrastructure such as APs and base stations, this is not the case at sea, where a radio network-based voice communication service is usually used. To solve this problem, an additional frequency for digital data exchange was allocated, and a ship ad-hoc network (SANET) was proposed that utilizes this frequency. Instead of satellite communication, which is costly to install and use, SANET was developed to provide various IP-based IT services to ships at sea. Connectivity between land base stations and ships is important in a SANET: to have this connection, a ship must join the network with an IP address assigned. This paper proposes SANET-CC, a protocol that allows ships to be assigned their own IP addresses. SANET-CC propagates several non-overlapping IP address blocks through the entire network, from land base stations to ships, in the form of a tree. Ships obtain their own IP addresses through the exchange of simple request and response messages with land base stations or with M-ships that can allocate IP addresses. SANET-CC can therefore eliminate both the IP collision prevention (Duplicate Address Detection) process and the network separation or integration processes caused by ship movement. Various simulations were performed to verify the applicability of this protocol to SANETs, with the following outcomes. First, using SANET-CC, about 91% of the ships in the network received IP addresses under all circumstances, 6 percentage points higher than in existing studies, and adjusting the variables to each port's environment may yield further improvement. Second, all vessels received IP addresses in an average of 10 seconds regardless of conditions, a 50% decrease from the 20-second average of the previous study. Moreover, whereas existing studies covered 50 to 200 vessels, this study covers 100 to 400, so the efficiency gain can be even larger. Third, existing studies could not derive optimal values for the protocol variables because the results showed no consistent pattern across variables, meaning optimal values could not be set for each port under diverse environments. This paper, however, shows that the result values across variables exhibit a consistent pattern, which is significant in that the protocol can be tuned for each port by adjusting the variable values. It was also confirmed that, regardless of the number of ships, the IP allocation ratio was most efficient, at about 96%, when the waiting time after an IP request was 75 ms, and that the tree structure maintained a stable network configuration when the number of IPs exceeded 30,000.
Fourth, this study can be used to design networks supporting intelligent maritime control systems and services offshore, instead of satellite communication, and if LTE-M is established, it can be used for various intelligent services.
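A minimal sketch of the tree-style, collision-free allocation idea: a land base station holds a pool of non-overlapping subnets and hands disjoint sub-blocks down to M-ships, which in turn assign addresses to joining ships. The /16 pool and /24 delegation size are illustrative assumptions; the paper's actual block sizes and message formats are not reproduced here.

```python
import ipaddress

class Allocator:
    """One node in the allocation tree (base station or M-ship)."""
    def __init__(self, block: str):
        # Carve the pool into disjoint /24 blocks up front.
        self.free = list(ipaddress.ip_network(block).subnets(new_prefix=24))

    def delegate_block(self):
        """Hand a whole /24 down to a child node (an M-ship) in the tree."""
        return self.free.pop(0)

base = Allocator("10.20.0.0/16")             # land base station's pool
mship_block = base.delegate_block()          # 10.20.0.0/24 goes to an M-ship
ship_ips = list(mship_block.hosts())[:3]     # M-ship assigns to joining ships
print(mship_block, ship_ips[0])
# Because delegated blocks never overlap, no Duplicate Address Detection
# round is needed when a ship receives its address, which is the source of
# the protocol's time savings.
```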