• Title/Summary/Keyword: Problem structure

Spatial Distribution of Aging District in Taejeon Metropolitan City (대전광역시 노령화 지구의 공간적 분포 패턴)

  • Jeong, Hwan-Yeong;Ko, Sang-Im
    • Journal of the Korean association of regional geographers
    • /
    • v.6 no.2
    • /
    • pp.1-19
    • /
    • 2000
  • This study investigates and analyzes the regional patterns of population aging in Taejeon Metropolitan City, the overpopulated area of Choong-Cheong Province, by the cohort analysis method. Owing to the population structure transition caused by rapid social and economic change, Korea has experienced rapid population aging since 1970. The trend is so rapid that we should prepare for and cope with an aging society, yet our society has been slow to respond, and there are few geographical studies of population aging in Korea. The data for this study are the reports of the Population and Housing Censuses of 1975 and 1985 and the 1995 General Population and Housing Census 10% sample survey taken by the National Statistical Office. The research method is to designate as the aging district those areas with a high aged population rate, that is, a high share of residents aged 60 and over in the total population, for the years 1975, 1985, and 1995, and to designate special districts of decreasing population, where the population decreased greatly, and special districts of increasing population, where it increased greatly, on the presumption that the aged population rate rises because the highly mobile non-elderly population moves out. This presumption is then verified with the cohort analysis method by age. Finally, regional patterns in the city are found through classification and modeling by type based on the aging district and the special districts of decreasing and increasing population. The characteristics of the regional patterns show social population transition and the out-migration of the non-elderly population. The aging district with a high aged population rate is divided into a high-level keeping-up type, a relative falling type below the Taejeon city average in aging progress, and a relative rising type above the city average. This district is found both in the central area of the city and in the suburbs, because Taejeon has the character of an over-bounded city, but it is not found in the newly built-up areas receiving large in-migration. The special districts of decreasing population, where the population continues to decrease, can be described as a population doughnut found at the CBD and the neighboring inner area, while the special districts of increasing population, where the population continues to increase, are located in the newly built-up area of the northern part of Taejeon. The special districts of decreasing population overlap the aging district and have a higher aged population rate owing to the out-migration of the non-elderly population; the special districts of increasing population do not overlap the aging district and have a lower aged population rate owing to the in-migration of the non-elderly population. To clarify the distribution map, the aging district and the special districts of decreasing and increasing population are divided into four groups: the special districts of decreasing population that coincide with the aging district, the other special districts of decreasing population, the special districts of increasing population, and the remaining districts.
With the cohort analysis method by age used to trace the definite increase and decrease of the aged population through the population transition of each group, it is found that the progress of population aging is closely related to social population fluctuation, and in particular that the aged population rate is higher where the non-elderly population has moved out. The aging district and the special districts of decreasing and increasing population in Taejeon are then modeled to explain the CBD, the inner area, and the suburbs. On the assumption that the city area is a concentric circle, it is divided into three areas, the CBD (A), the inner area (B), and the suburbs (C); the special districts are divided into the special districts of decreasing population (a), the special districts of increasing population (b), and the others (c); and the aging status is divided into the aging district ($\alpha$) and the others ($\beta$). Modeling these districts makes it possible to find the regional patterns of the city. $Aa{\alpha}$ and $Ac{\beta}$ patterns are found in the CBD, where the $Aa{\alpha}$ pattern is a special district of decreasing population with a higher aged population rate because the aged population, low in mobility, stays behind while the non-elderly population moves out. $Ba{\alpha}$, $Ba{\beta}$, $Bb{\beta}$, and $Bc{\beta}$ patterns are found in the inner area, with the $Ba{\alpha}$ pattern located in the area neighboring the CBD and the $Bb{\beta}$ pattern located in newly developing areas of newly built apartment complexes. $Cb{\beta}$, $Cc{\alpha}$, and $Cc{\beta}$ patterns are found in the suburbs, among which the $Cc{\alpha}$ pattern is the most aged; the $Cc{\beta}$ areas under large-scale housing land readjustment are likely to become $Cb{\beta}$ areas. As analyzed above, marriage and the out-migration of new, non-elderly households purchasing houses are the main factors accelerating population aging in the central area of the city, while in the suburbs population aging reflects the great increase of the aged population through longer life expectancy and a lower death rate, the out-migration of the non-elderly population, and the entry of new age groups into old age. It is necessary to investigate and analyze the regional patterns of population aging at a time when population problems caused by aging and longer life expectancy are increasing, and this study is intended to help future geographical research on population aging in Korea. Since population aging will be a major problem in our society, local governments should plan for it according to how far aging has progressed in each regional group (a minimal classification sketch follows).
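A minimal sketch, not the authors' method, of how the A/B/C x a/b/c x α/β pattern coding described above could be expressed in code. The zone labels, thresholds, and field names below are hypothetical; the paper derives the groups from census data and cohort analysis.

```python
# Hypothetical illustration of the pattern codes (e.g. "Aaα") used in the abstract.
from dataclasses import dataclass

ZONE_CODE = {"CBD": "A", "inner": "B", "suburb": "C"}

@dataclass
class District:
    zone: str                 # "CBD", "inner", or "suburb"
    pop_change_pct: float     # population change over the study period (%)
    aged_rate: float          # share of population aged 60 and over (%)

def classify(d: District, city_aged_rate: float,
             decrease_cut: float = -10.0, increase_cut: float = 20.0) -> str:
    """Return a pattern code such as 'Aaα' for one district (thresholds are illustrative)."""
    area = ZONE_CODE[d.zone]
    if d.pop_change_pct <= decrease_cut:
        trend = "a"           # special district of decreasing population
    elif d.pop_change_pct >= increase_cut:
        trend = "b"           # special district of increasing population
    else:
        trend = "c"           # others
    aging = "α" if d.aged_rate >= city_aged_rate else "β"
    return f"{area}{trend}{aging}"

# Example: a shrinking CBD district with an above-average aged rate -> "Aaα"
print(classify(District("CBD", -15.2, 9.8), city_aged_rate=5.5))
```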

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are, however, some exceptions. Purohit (1992) and Sullivan (1990) looked into both the new and used car markets at the same time to examine the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices, and some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study; his work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of the used car in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, so that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January 2009 to June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application showed that the proposed nested logit model outperformed the IIA (Independence of Irrelevant Alternatives) model in both the calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing a modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo, and the new car settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and consequently suggests a less aggressive used car price discount in response to the new car's rebate than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response of the Elantra's new and used cars would be; interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). Future research might explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership, where both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, the NUB model might fit the data as well as the BNU model; which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set (an illustrative sketch of the nested logit structure follows).
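A minimal sketch of the two-level nested logit structure described above (model choice at the first stage, new-vs-used at the second), not the paper's estimation code. The utilities and the dissimilarity (inclusive value) parameter `lam` below are illustrative numbers, not estimates from the PIN data.

```python
# Standard nested logit choice probabilities for the BNU structure in the abstract.
import math

def nested_logit_probs(utils, lam):
    """utils: {model: {"new": v, "used": v}}; lam: dissimilarity parameter in (0, 1]."""
    # Inclusive value of each model nest.
    iv = {m: math.log(sum(math.exp(v / lam) for v in alts.values()))
          for m, alts in utils.items()}
    denom = sum(math.exp(lam * ivm) for ivm in iv.values())
    probs = {}
    for m, alts in utils.items():
        p_model = math.exp(lam * iv[m]) / denom              # first stage: model choice
        within = sum(math.exp(v / lam) for v in alts.values())
        for cond, v in alts.items():
            probs[(m, cond)] = p_model * math.exp(v / lam) / within  # second stage: new/used
    return probs

# Illustrative utilities: a rebate on the new Jetta raises its utility and, because
# lam < 1, draws share mostly from the used Jetta (high within-nest cross elasticity).
# With lam = 1 the model collapses to the IIA multinomial logit.
utils = {"Jetta": {"new": 1.2, "used": 1.0}, "Corolla": {"new": 1.1, "used": 0.9}}
print(nested_logit_probs(utils, lam=0.5))
```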

Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy
    • /
    • v.13 no.1
    • /
    • pp.151-166
    • /
    • 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies dealing with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity for abstracting from complex realities; however, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interaction between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests on the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer and is therefore unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently, several attempts have been made to overcome this modelling difficulty by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, still share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation. The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints other than the assumption of asymmetrically distributed information. The underlying intuition behind the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent firm's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm. If the monopolist has high costs, however, the rival will definitely enter the market because it can make positive profits. In this situation, if the incumbent firm unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's costs by observing the monopolist's price. Knowing this, the high-cost monopolist increases its output up to what would have been produced by a low-cost firm in an effort to conceal its cost condition; this constitutes limit pricing. The same logic applies when there is a rival competitor in the market: producing the high-cost duopoly output is self-revealing and thus to be avoided, so the firm chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer and thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy.
This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. However, the problem is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. This difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring. Moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction against limit and predatory pricing should be applied with great care, evaluating each case on its own basis; hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare (an illustrative numerical sketch follows).
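An illustrative numerical sketch of the intuition summarized above, not the paper's model: under textbook linear-demand Cournot competition with a privately known incumbent cost, entry pays only against a high-cost incumbent, so a high-cost incumbent has an incentive to mimic the low-cost output. All parameter values are hypothetical.

```python
# Linear demand P = a - b*Q; incumbent cost is privately low or high; entrant pays F to enter.
a, b = 100.0, 1.0
c_low, c_high = 10.0, 30.0   # incumbent's possible marginal costs
c_e, F = 20.0, 700.0         # entrant's marginal cost and entry cost

def monopoly_q(c):
    """Profit-maximizing monopoly output under linear demand."""
    return (a - c) / (2 * b)

def cournot_profit_entrant(c_incumbent):
    """Entrant's Cournot-Nash profit if the incumbent's cost were known."""
    q_e = (a - 2 * c_e + c_incumbent) / (3 * b)
    return b * q_e ** 2

# Entry pays only against a high-cost incumbent:
print(cournot_profit_entrant(c_low) - F)    # ~ -155.6 -> stay out
print(cournot_profit_entrant(c_high) - F)   # ~ +200.0 -> enter

# A naive high-cost incumbent would produce monopoly_q(c_high) = 35 and reveal its type
# through the price; by producing monopoly_q(c_low) = 45 instead (limit pricing), it
# conceals its cost and can deter entry. The same mimicry against an active rival
# depresses the rival's profit, i.e. it acts in a predatory manner.
print(monopoly_q(c_high), monopoly_q(c_low))
```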

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies used to exploit item importance, itemset mining approaches that discover itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and then discover weighted frequent itemsets on the basis of the item frequency and the weight of each transaction. Consequently, the weight of a transaction reflects its importance in the database analysis, since the weight has a higher value if the transaction contains many items with high weights (a minimal sketch of this idea follows below). We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms in the frequent itemset mining field based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduces the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with the weight information. To efficiently mine weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scan after the construction of the WIT-tree is finished, since each node of the WIT-tree holds item information such as the item and its transaction IDs. In particular, traditional algorithms conduct a number of database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms solve this overhead problem by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs the itemset combination process using the information of the transactions that contain both itemsets; WIT-FWIs-MODIFY reduces the operations needed to calculate the frequency of a new itemset; and WIT-FWIs-DIFF utilizes a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm when the size of the database changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, which is based on the Apriori technique, has the worst efficiency because, on average, it requires far more computations than the others.
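A minimal sketch of the transaction-weight idea described above, not of the WIT-tree algorithms themselves: each transaction's weight is derived from the weights of the items it contains (here, their average, one common definition), and an itemset's weighted support sums the weights of the transactions that contain it. The item weights, transactions, and threshold below are illustrative.

```python
# Transactional weights and weighted support on a toy database.
item_weight = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
transactions = [{"a", "b"}, {"a", "b", "c"}, {"b", "d"}, {"a", "c", "d"}]

def transaction_weight(t):
    """Average of the item weights in the transaction."""
    return sum(item_weight[i] for i in t) / len(t)

def weighted_support(itemset, db):
    """Fraction of total transaction weight carried by transactions containing the itemset."""
    tw = [transaction_weight(t) for t in db]
    covered = sum(w for t, w in zip(db, tw) if itemset <= t)
    return covered / sum(tw)

# An itemset is weighted-frequent if its weighted support clears a user threshold.
print(weighted_support({"a", "b"}, transactions))  # compare against e.g. 0.4
```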

Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.47-55
    • /
    • 2015
  • Recently, hospital information system environments have tended to combine various smart technologies. Accordingly, various smart devices such as smart phones and tablet PCs are utilized in the medical information system, and these environments consist of various applications running on heterogeneous sensors, devices, systems, and networks. In such hospital information system environments, applying security services with traditional access control methods causes problems. Most existing security systems use an access control list structure, which permits only the access defined in an access control matrix of client names and service object method names. The major problem with this static approach is that it cannot quickly adapt to changing situations. Hence, new security mechanisms are needed that are more flexible and can easily be adapted to various environments with very different security requirements; in addition, research is needed to address changes in the patient's medical treatment. In this paper, we suggest a dynamic access approach to medical information systems in smart mobile environments. We focus on how medical information systems are accessed according to dynamic access control methods based on the hospital's existing information system environments. The physical environment consists of mobile X-ray imaging devices, dedicated mobile and general smart devices, PACS, an EMR server, and an authorization server. The software environment for the synchronization and monitoring services on the mobile X-ray imaging equipment was developed on the .NET Framework under the Windows 7 OS, and the dedicated smart-device application implementing the dynamic access services was developed with JSP and the Java SDK on the Android OS. Medical information exchanged among the PACS, the mobile X-ray imaging devices, and the dedicated smart devices in the hospital is based on the DICOM medical image standard, and the EMR information is based on HL7. To provide the dynamic access control service, we classify the patient's context according to bio-information such as oxygen saturation, heart rate, blood pressure, and body temperature, and we present event trace diagrams divided into two parts, a general situation and an emergency situation. We then design dynamic access to the medical care information through an authentication method. The authentication information contains the ID/password, roles, position, working hours, and emergency certification codes for emergency patients. In a general situation, the dynamic access control method grants access to medical information according to the values of the authentication information; in an emergency, access to medical information is granted by an emergency code, without the authentication information (a minimal sketch of this decision follows). We also constructed an integrated medical information database schema consisting of medical information, patient, medical staff, and medical image information according to medical information standards. Finally, we show the usefulness of the dynamic access application service based on smart devices through execution results of the proposed system for patient contexts such as general and emergency situations. In particular, the proposed system provides effective medical information services with smart devices in emergency situations through dynamic access control methods. As a result, we expect the proposed system to be useful for u-hospital information systems and services.
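A minimal sketch, not the authors' system, of the dynamic access idea described above: the patient's context is classified as general or emergency from vital signs, and access is granted either through normal authentication (role, position, working hours) or through an emergency certification code. All thresholds, field names, and codes below are hypothetical.

```python
# Context classification from bio-information and a two-path access decision.
from dataclasses import dataclass

@dataclass
class Vitals:
    spo2: float        # oxygen saturation (%)
    heart_rate: int    # beats per minute
    systolic_bp: int   # mmHg
    temp_c: float      # body temperature (Celsius)

def classify_context(v: Vitals) -> str:
    """Hypothetical cut-offs; the paper classifies context from the same kinds of vitals."""
    emergency = (v.spo2 < 90 or v.heart_rate > 130 or
                 v.systolic_bp < 80 or v.temp_c > 39.5)
    return "emergency" if emergency else "general"

def grant_access(context, credentials=None, emergency_code=None,
                 valid_codes=frozenset({"EMG-2015"})):
    if context == "emergency":
        # Emergency path: an emergency certification code alone is sufficient.
        return emergency_code in valid_codes
    # General path: role- and schedule-based check on the authentication information.
    return (credentials is not None
            and credentials.get("role") in {"physician", "nurse"}
            and credentials.get("on_duty", False))

ctx = classify_context(Vitals(spo2=86, heart_rate=142, systolic_bp=75, temp_c=38.2))
print(ctx, grant_access(ctx, emergency_code="EMG-2015"))
```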

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e., their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to some other pre-defined shape. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.); however, this results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e., by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and it gives users the opportunity to choose membership functions of any shape; however, significant memory waste can occur, since for each of the given fuzzy sets many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e., points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that does not restrict the shapes of membership functions, yet strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. The term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. The number of bits necessary for these specifications is therefore 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 elements of the universe. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value m, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24, so the memory dimension is 128 × 24 bits. If we had chosen to memorize all values of the membership functions, each memory row would have had to store the membership value of every fuzzy set for that element, giving a word dimension of 8 × 5 bits.
Therefore, the dimension of the memory would have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; elements 32, 64, and 96 of the universe of discourse, for example, are memorized in this compact form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net); if the index equals the bus value, then one of the non-null weights derived from the rule is produced as output, otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions; in any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. A table in the paper shows the required word dimension as a function of these parameters and compares our proposed method with the vectorial memorization method [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values at any element of the universe of discourse is limited, a constraint that is usually not very restrictive since many controllers obtain good precision with only three non-null weights (a small arithmetic sketch of the memory sizing follows). The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
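A small arithmetic sketch of the memory sizing discussed above, using only the numbers given for the example term set (128-element universe, 8 fuzzy sets, 32 discretization levels, at most nfm = 3 non-null memberships per element). It reproduces the 128 × 24-bit versus 128 × 40-bit comparison; it is not the hardware design itself.

```python
# Word length Length = nfm * (dm(m) + dm(fm)) versus full vectorial memorization.
from math import ceil, log2

universe = 128        # elements of the universe of discourse
n_sets = 8            # fuzzy sets in the term set
levels = 32           # discretization levels of the membership value
nfm = 3               # max non-null membership values per element

dm_m = ceil(log2(levels))    # bits per membership value   -> 5
dm_fm = ceil(log2(n_sets))   # bits to index a fuzzy set   -> 3

word_proposed = nfm * (dm_m + dm_fm)   # 3 * (5 + 3) = 24 bits per row
word_vectorial = n_sets * dm_m         # 8 * 5       = 40 bits per row

print(universe * word_proposed, "bits vs", universe * word_vectorial, "bits")
```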

Open Digital Textbook for Smart Education (스마트교육을 위한 오픈 디지털교과서)

  • Koo, Young-Il;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.177-189
    • /
    • 2013
  • In smart education, the role of the digital textbook is very important as a face-to-face medium for learners. The standardization of digital textbooks will promote the industrialization of digital textbooks for content providers and distributors as well as for learners and instructors. This study looks for ways to standardize digital textbooks around the following three objectives. (1) Digital textbooks should play the role of media for blended learning that supports on- and off-line classes, should operate on a common EPUB viewer without a special dedicated viewer, and should utilize the existing e-learning frameworks for learning contents and learning management; the reason to consider EPUB as the standard for digital textbooks is that digital textbooks then do not need another standard for the form of books and can take advantage of the industrial base of EPUB standard content and its distribution structure. (2) Digital textbooks should provide a low-cost open-market service built from currently available standard open software. (3) To give appropriate learning feedback to students, digital textbooks should provide a foundation that accumulates and manages all learning activity information on a standard infrastructure for educational big data processing. In this study, the digital textbook in a smart education environment is referred to as an open digital textbook. The components of the open digital textbook service framework are (1) digital textbook terminals such as smart pads, smart TVs, smart phones, and PCs, (2) a digital textbook platform to display and run digital contents on those terminals, (3) a learning contents repository in the cloud that maintains accredited learning contents, (4) an app store through which learning content companies provide and distribute secondary learning contents and learning tools, and (5) an LMS as a learning support/management tool that classroom teachers use to create instructional materials. In addition, locating all of the hardware and software that implement the smart education service in the cloud takes advantage of cloud computing for efficient management and reduced expense. The open digital textbook for smart education can be considered an e-book-style interface of the LMS for learners. In an open digital textbook, the representation of text, images, audio, video, equations, and so on is a basic function, but painting, writing, and problem solving are beyond the capabilities of a simple e-book. Teacher-to-student, learner-to-learner, and team-to-team communication is also required through the open digital textbook. To represent student demographics, portfolio information, and class information, the standards used in e-learning are desirable. To process learner tracking information about the activities of the learner for the LMS (Learning Management System), the open digital textbook must have a recording function and a function for communicating with the LMS. DRM is a function for protecting various copyrights; currently, e-book DRM is controlled by the corresponding book viewer, so if the open digital textbook accommodates the DRM schemes used by the various e-book viewers, the implementation of redundant features can be avoided. Security and privacy functions are required to protect information about study or instruction from third parties, and UDL (Universal Design for Learning) is a learning support function for those whose disabilities make it difficult to take learning courses.
The open digital textbook, which is based on the e-book standard EPUB 3.0, must (1) record learning activity log information and (2) communicate with the server to support the learning activity. While the recording and communication functions, which are not determined by current standards, can be implemented in JavaScript and used in current EPUB 3.0 viewers, a strategy of proposing such recording and communication functions as the next generation of the e-book standard, or as a special standard (EPUB 3.0 for education), is needed (a hedged sketch of such an activity record follows). Future research in this study will implement an open source program with the proposed open digital textbook standard and present new educational services including big data analysis.
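A hedged sketch of the kind of learning-activity record the recording and communication functions described above might accumulate and send to the LMS. The paper implements these functions in JavaScript inside the EPUB 3.0 viewer; Python is used here only for illustration, and every field name, URL, and endpoint below is hypothetical.

```python
# Hypothetical learning-activity record and a REST-style send to an LMS endpoint.
import json
import time
import urllib.request

def make_activity_record(learner_id, textbook_id, page, verb, detail=None):
    return {
        "learner": learner_id,
        "textbook": textbook_id,
        "page": page,
        "verb": verb,                 # e.g. "opened", "answered", "highlighted"
        "detail": detail or {},
        "timestamp": time.time(),
    }

def send_to_lms(record, endpoint="https://lms.example.org/activity"):
    """POST one activity record to a (hypothetical) LMS collection endpoint."""
    data = json.dumps(record).encode("utf-8")
    req = urllib.request.Request(endpoint, data=data,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)  # response handling omitted

record = make_activity_record("s001", "math-7-1", page=42, verb="answered",
                              detail={"item": "q3", "correct": True})
# send_to_lms(record)  # not executed: the endpoint above is a placeholder
```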

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers the number of records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions that are correctly identified as such, and specificity measures the proportion of churns that are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed an 'oversampling' technique in which members of the minority class are sampled more heavily than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model'. The construction and prediction process of our hybrid SVM model is as follows (a sketch of the structure follows below). By oversampling from the original imbalanced data set, a balanced data set is prepared. The SVM_I and ANN_I models are constructed using the imbalanced data set, and the SVM_B model is constructed using the balanced data set; SVM_I is superior in sensitivity and SVM_B is superior in specificity. For a record on which SVM_I and SVM_B make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and a decision tree: for records on which SVM_I and SVM_B disagree, a decision tree model is constructed using the ANN_I output value as input and actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value <0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ${\geq}0.285$, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research; the result we present is the structure or framework of the hybrid SVM model, not a specific threshold value such as 0.285, so the threshold in the discrimination rules can be changed to any value depending on the data. To evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, better than that of the SVM_I or SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%; the sensitivity of SVM_I is 94.65% and the specificity of SVM_B is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of SVM_B while maintaining the sensitivity of SVM_I.
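A minimal sketch, not the authors' code, of the hybrid structure described above: SVM_I trained on the imbalanced set, SVM_B on an oversampled balanced set, and a single-threshold decision-tree rule over the ANN_I output used only where the two SVMs disagree. It assumes binary 0/1 labels with churn coded as 1; data set, oversampling details, and parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def oversample_minority(X, y, minority_label=1, random_state=0):
    """Naive random oversampling of the minority class up to the majority size."""
    X_min, X_maj = X[y == minority_label], X[y != minority_label]
    X_min_up = resample(X_min, replace=True, n_samples=len(X_maj),
                        random_state=random_state)
    X_bal = np.vstack([X_maj, X_min_up])
    y_bal = np.concatenate([np.full(len(X_maj), 1 - minority_label),
                            np.full(len(X_min_up), minority_label)])
    return X_bal, y_bal

class HybridSVM:
    """SVM_I + SVM_B, with a tree rule over the ANN_I output for disagreements."""
    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        X_bal, y_bal = oversample_minority(X, y)
        self.svm_i = SVC().fit(X, y)            # strong sensitivity (imbalanced data)
        self.svm_b = SVC().fit(X_bal, y_bal)    # strong specificity (balanced data)
        self.ann_i = MLPClassifier(max_iter=1000).fit(X, y)
        disagree = self.svm_i.predict(X) != self.svm_b.predict(X)
        self.rule = None
        if disagree.any():
            p = self.ann_i.predict_proba(X[disagree])[:, 1].reshape(-1, 1)
            # A depth-1 tree learns one threshold, analogous to the 0.285 rule.
            self.rule = DecisionTreeClassifier(max_depth=1).fit(p, y[disagree])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        a, b = self.svm_i.predict(X), self.svm_b.predict(X)
        out = a.copy()
        disagree = a != b
        if disagree.any() and self.rule is not None:
            p = self.ann_i.predict_proba(X[disagree])[:, 1].reshape(-1, 1)
            out[disagree] = self.rule.predict(p)
        return out
```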

SANET-CC : Zone IP Allocation Protocol for Offshore Networks (SANET-CC : 해상 네트워크를 위한 구역 IP 할당 프로토콜)

  • Bae, Kyoung Yul;Cho, Moon Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.87-109
    • /
    • 2020
  • Currently, thanks to the major strides made in developing wired and wireless communication technology, a variety of IT services are available on land. This trend is leading to increasing demand for IT services for vessels on the water as well, and requests for various IT services such as two-way digital data transmission, the Web, and apps are expected to rise to the level available on land. However, while a high-speed information communication network is easily accessible on land because it is based on fixed infrastructure such as APs and base stations, this is not the case on the water. As a result, a radio-communication-based voice service is usually used at sea. To solve this problem, an additional frequency for digital data exchange was allocated, and a ship ad-hoc network (SANET) that utilizes this frequency was proposed. Instead of satellite communication, which costs a lot to install and use, SANET was developed to provide various IP-based IT services to ships at sea. Connectivity between land base stations and ships is important in a SANET; to have this connection, a ship must be a member of the network with an IP address assigned. This paper proposes SANET-CC, a protocol that allows ships to be assigned their own IP addresses. SANET-CC propagates several non-overlapping IP address blocks through the entire network, from land base stations to ships, in the form of a tree. Ships obtain their own IP addresses through the exchange of simple request and response messages with land base stations or M-ships that can allocate IP addresses. Therefore, SANET-CC can eliminate the IP collision prevention (duplicate address detection) process and the process of network separation or integration caused by the movement of ships. Various simulations were performed to verify the applicability of this protocol to SANET, with the following outcomes. First, using SANET-CC, about 91% of the ships in the network were able to receive IP addresses under any circumstances, which is 6% higher than in existing studies, and the results suggest that adjusting the variables to each port's environment may improve them further. Second, all vessels received IP addresses in an average of 10 seconds regardless of conditions, a 50% decrease from the average of 20 seconds in the previous study; moreover, considering that existing studies covered 50 to 200 vessels while this study covers 100 to 400 vessels, the efficiency gain can be even greater. Third, existing studies have not been able to derive optimal values for the variables, because their results showed no consistent pattern as the variables changed, which means optimal variable values could not be set for each port under diverse environments. This paper, however, shows that the results exhibit a consistent pattern across the variables, which is significant in that the protocol can be applied to each port by adjusting the variable values. It was also confirmed that, regardless of the number of ships, the IP allocation ratio was most efficient, at about 96 percent, when the waiting time after the IP request was 75 ms, and that the tree structure could maintain a stable network configuration when the number of IPs was over 30,000.
Fourth, this study can be used to design a network for supporting intelligent maritime control systems and services offshore instead of satellite communication, and if LTE-M is set up, it can also be used for various intelligent services (a simplified sketch of the address-block propagation follows).
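A simplified sketch of the core idea described above, not the SANET-CC protocol itself: a land base station holds a non-overlapping IP block and hands disjoint sub-blocks down a tree of allocator nodes (base stations and M-ships), so ships obtain unique addresses through a simple request/response exchange and no duplicate address detection is needed. The network prefix and block sizes are illustrative.

```python
# Tree-style delegation of disjoint IP blocks, so assigned addresses never collide.
import ipaddress

class Allocator:
    """A node (base station or M-ship) that can assign addresses from its block."""
    def __init__(self, block: ipaddress.IPv4Network):
        self.block = block
        self.hosts = iter(block.hosts())

    def assign_address(self):
        """Answer one ship's IP request with the next unused address in the block."""
        return next(self.hosts)

    def delegate_subblock(self, new_prefix: int):
        """Split off a disjoint sub-block for a child allocator further down the tree."""
        sub = next(self.block.subnets(new_prefix=new_prefix))
        # Keep only the remaining range for ourselves so blocks never overlap.
        self.block = list(self.block.address_exclude(sub))[0]
        self.hosts = iter(self.block.hosts())
        return Allocator(sub)

base_station = Allocator(ipaddress.ip_network("10.10.0.0/16"))
m_ship = base_station.delegate_subblock(new_prefix=17)   # child allocator for distant ships
print(base_station.assign_address(), m_ship.assign_address())
```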

A Study on the Effect of Water Soluble Extractive upon Physical Properties of Wood (수용성(水溶性) 추출물(抽出物)이 목재(木材)의 물리적(物理的) 성질(性質)에 미치는 영향(影響))

  • Shim, Chong-Supp
    • Journal of the Korean Wood Science and Technology
    • /
    • v.10 no.3
    • /
    • pp.13-44
    • /
    • 1982
  • 1. Since long ago, it has been said that soaking wood in water for a long time is profitable for decreasing defects such as checking, cupping, and bow due to undue shrinking and swelling. There are, however, no actual data proving this definitely, although there are some guesses that water-soluble extractives might affect this problem. On the other hand, little work has been done on the effect of water-soluble extractives upon the physical properties of wood and on how it might relate to the above-mentioned problem. If it can be determined whether soaking wood in water for a long time is profitable for decreasing defects due to undue shrinking and swelling, in comparison with unsoaked wood, it may contribute greatly to the reasonable use of wood. To account for the effect of water-soluble extractives upon the physical properties of wood, this study was made at the wood technology laboratory, School of Forestry, Yale University, under the competent guidance of Dr. F. F. Wangaard, with the following three species provided by the same laboratory: 1. Pinus strobus, 2. Quercus borealis, 3. Hymenaea courbaril. 2. The physical properties investigated in this study are as follows: a. equilibrium moisture content at different relative humidity conditions; b. shrinkage value from the green condition to different relative humidity conditions and the oven-dry condition; c. swelling value from the oven-dry condition to different relative humidity conditions; d. specific gravity. 3. In order to investigate the effect of water-soluble extractives upon the physical properties of wood, the experiment was carried out with two differently treated sets of specimens, one soaked in water and the other in sugar solution, together with control specimens. 4. The quantity of water-soluble extractives of each species and the groups of chemical compounds in the liquid extracted from each species are shown in Table 36; between species there is some difference in the quantity of extractives and in the groups of chemical compounds. 5. In the case of equilibrium moisture content at different relative humidity conditions: (a) except for the desorption case at 80% R.H.C. (relative humidity condition), there is a definite separation between untreated and treated specimens, that is, untreated specimens hold more water than treated specimens at the same R.H.C.; (b) the specimens treated in sugar solution show almost the same tendency as the untreated specimens; (c) between species there is no definite relation in equilibrium moisture content, although the E.M.C. of pine heartwood is less than that of the sapwood, which might arise from differences in the anatomical structure of the wood. 6. In the case of shrinkage: (a) the shrinkage value of the specimens treated in water is greater than that of the untreated specimens, except in one case of pine heartwood at 80% R.H.C.; (b) the shrinkage value of the specimens treated in sugar solution is less than that of the others and shows almost the same tendency as the untreated specimens, which would mean that the penetration of some sugar into the wood can decrease its shrinkage value; (c) between species, the shrinkage value of pine heartwood is less than that of pine sapwood, the shrinkage value of oak is the largest, and that of Hymenaea is less than oak and greater than pine.
(d) The directional difference of shrinkage value is also seen in all species, as in all other species previously tested. (e) There is a definite relation between the difference in shrinkage value of treated and untreated specimens and the amount of extractives, that is, increasing extractives increases the difference in shrinkage value between treated and untreated specimens. 7. In the case of swelling: (a) the swelling value of treated specimens is greater than that of untreated specimens in all cases; (b) comparing the tangential and radial directions, the swelling value in the tangential direction is larger than that in the radial direction for the same species; (c) between species, the largest swelling value belongs to oak and the smallest to pine heartwood, and there is also a tendency that species which shrink more also swell more and, on the contrary, species which shrink less also swell less than the others. 8. In the case of specific gravity: (a) the specific gravity of the treated specimens is larger than that of the untreated specimens; this reversed value between treated and untreated specimens results from the volume of the specimens in the oven-dry condition; (b) between species there are differences, that is, the specific gravity of Hymenaea is the largest and that of pine sapwood is the smallest. 9. Through this investigation, it has been concluded that soaking wood in plain water before use, without any special consideration, may bring more harmful results than not soaking. However, soaking wood in specially prepared solutions, such as salt water or solutions in which inorganic matter is dissolved, can be profitable for decreasing shrinkage and swelling, checking, shake, bow, etc. If soaking wood in plain water does decrease defects, it might come from even shrinking and swelling throughout all dimensions.
