

Performance of Collaboration Activities upon SME's Idiosyncrasy (중소기업 특성에 따른 외부 협업 활동이 혁신성과에 미치는 영향)

  • Lee, Hye Sun;Oh, Junseok;Lee, Jaeki;Lee, Bong Gyou
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.95-105
    • /
    • 2013
  • Collaboration activities have recently become a vital factor for SMEs in sustaining their competitive edge, both because of the rapidly changing, competitive market environment and because collaboration lets firms leverage performance by overcoming the limits of their internal resources. The effects of, and relationships between, firms' collaboration activities and their outputs are not a new topic. However, as ICT and various other technologies diffuse into traditional industries, boundaries and practice capabilities within those industries are becoming ambiguous, and the contents of products/services and their development methods increasingly cross industry lines. Although many researchers have examined the relationship between SMEs' collaboration activities and innovation performance, most previous studies focus on broad environmental factors rather than jointly considering SME idiosyncrasy factors such as major product and customer types. The purpose of this paper is therefore to analyze how the external collaboration activities of SMEs (Small and Medium Enterprises), conditioned on their idiosyncrasies, act as inputs to different types of innovation performance. To analyze collaboration effects in detail, we define factors that represent the SME's business environment: perceived importance of using external resources, perceived importance of external partnership, collaboration and collaboration levels by major product type, customer type, and firm size. Following existing research, we divide innovation performance into product innovation and process innovation. The empirical analysis uses a probit regression model to observe the correlations between each SME's business environment and its activities.
The empirical data consist of 497 samples extracted from the Korean Open Innovation Survey conducted by ETRI (Electronics and Telecommunications Research Institute) in 2010. The test results indicate that the impact of collaboration varies with the innovation type (product vs. process innovation). The impact of collaboration level on product innovation tends to be stronger when SMEs develop final products targeting individual customers (B2C). Conversely, the effect on process innovation tends to be higher than on product innovation when SMEs develop raw materials for partners or other firms in manufacturing industries (B2B). The perceived importance of using external resources affects both product and process innovation performance, whereas the perceived importance of external partnership is statistically insignificant. An interesting finding is that service products have a negative effect on process innovation performance. Regarding firm size, larger firms (over 100 employees) tend to achieve better results for both product and process innovation from their external collaboration activities. The implication is that innovation performance varies with a firm's unique business idiosyncrasies as well as with the level of its external collaboration activities; the results can inform firms selecting an appropriate strategy as well as policy makers.
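The probit specification the study relies on can be sketched numerically; the variable names (collab_level, product innovation outcome) and the synthetic coefficients below are illustrative assumptions, not the survey's actual fields or estimates. A minimal maximum-likelihood probit using only NumPy and SciPy:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def probit_nll(beta, X, y):
    """Negative log-likelihood of the probit model P(y=1|x) = Phi(x'beta)."""
    p = norm.cdf(X @ beta)
    eps = 1e-9  # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def fit_probit(X, y):
    """Fit probit coefficients by numerical MLE (BFGS)."""
    beta0 = np.zeros(X.shape[1])
    return minimize(probit_nll, beta0, args=(X, y), method="BFGS").x

# Synthetic illustration: does collaboration level raise P(product innovation)?
rng = np.random.default_rng(0)
n = 2000
collab_level = rng.normal(size=n)              # standardized collaboration intensity
X = np.column_stack([np.ones(n), collab_level])
true_beta = np.array([-0.2, 0.8])
y = (X @ true_beta + rng.normal(size=n) > 0).astype(float)  # latent-variable form

beta_hat = fit_probit(X, y)
print(beta_hat)  # should recover roughly (-0.2, 0.8)
```

A positive fitted slope on `collab_level` corresponds to the paper's reading that higher collaboration intensity raises the probability of the innovation outcome.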

IPC Multi-label Classification based on Functional Characteristics of Fields in Patent Documents (특허문서 필드의 기능적 특성을 활용한 IPC 다중 레이블 분류)

  • Lim, Sora;Kwon, YongJin
    • Journal of Internet Computing and Services
    • /
    • v.18 no.1
    • /
    • pp.77-88
    • /
    • 2017
  • Recently, with the advent of the knowledge-based society in which information and knowledge create value, patents, the representative form of intellectual property, have become important, and their number keeps growing. Patents therefore need to be classified appropriately by the technological topic of the invention so that the vast amount of patent information can be used effectively; IPC (International Patent Classification) is widely used for this purpose. Automatic IPC classification has been studied using data mining and machine learning algorithms to improve the current practice of categorizing patent documents by hand. However, most previous research has focused on applying various existing machine learning methods to patent documents rather than considering the characteristics of the data or the structure of the documents. In this paper, we therefore propose to use two structural fields, the technical field and the background, considered to influence patent classification; the two fields are selected based on the characteristics of patent documents and the roles of the structural fields. We also construct a multi-label classification model to reflect the fact that a patent document can have multiple IPCs. Furthermore, we propose a method to classify patent documents at the IPC subclass level, which comprises 630 categories, to investigate the possibility of applying the IPC multi-label classification model in the field. The effect of the structural fields is examined using 564,793 patents registered in Korea; a precision of 87.2% is obtained when the title, abstract, claims, technical field, and background are used together. From this, we verify that the technical field and background play an important role in improving the precision of IPC multi-label classification at the subclass level.
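Multi-label IPC classification is commonly handled by binary relevance, i.e. one binary classifier per label. The paper's actual model is not specified in the abstract, so the following is only a toy sketch of the idea with a Bernoulli Naive Bayes per label; the tokens, labels, and documents are invented, and a real system would use the full text fields and far larger data:

```python
import math
from collections import Counter

def train_binary_relevance(docs, labelsets):
    """One Bernoulli Naive Bayes per IPC label (binary relevance)."""
    vocab = set(w for d in docs for w in d)
    labels = set(l for ls in labelsets for l in ls)
    models = {}
    for lab in labels:
        pos = [d for d, ls in zip(docs, labelsets) if lab in ls]
        neg = [d for d, ls in zip(docs, labelsets) if lab not in ls]
        def word_probs(subset):
            c = Counter(w for d in subset for w in set(d))
            n = len(subset)
            return {w: (c[w] + 1) / (n + 2) for w in vocab}  # Laplace smoothing
        models[lab] = (len(pos) / len(docs), word_probs(pos), word_probs(neg))
    return vocab, models

def predict(doc, vocab, models, threshold=0.5):
    """Return every label whose posterior exceeds the threshold (multi-label)."""
    words = set(doc) & vocab
    out = []
    for lab, (prior, ppos, pneg) in models.items():
        lpos, lneg = math.log(prior), math.log(1 - prior)
        for w in vocab:
            present = w in words
            lpos += math.log(ppos[w] if present else 1 - ppos[w])
            lneg += math.log(pneg[w] if present else 1 - pneg[w])
        if 1 / (1 + math.exp(lneg - lpos)) >= threshold:
            out.append(lab)
    return sorted(out)

# Toy patents: tokens as if drawn from 'technical field' + 'background' fields
docs = [["battery", "anode", "lithium"], ["battery", "charging", "circuit"],
        ["antenna", "signal", "circuit"], ["anode", "lithium", "coating"]]
labelsets = [{"H01M"}, {"H01M", "H02J"}, {"H01Q"}, {"H01M"}]
vocab, models = train_binary_relevance(docs, labelsets)
print(predict(["lithium", "anode", "battery"], vocab, models))  # ['H01M']
```

The multi-label part is the thresholded per-label decision: a document may receive zero, one, or several IPC subclasses, which a single softmax classifier cannot express.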

Crosshole EM 2.5D Modeling by the Extended Born Approximation (확장된 Born 근사에 의한 시추공간 전자탐사 2.5차원 모델링)

  • Cho, In-Ky;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.1 no.2
    • /
    • pp.127-135
    • /
    • 1998
  • The Born approximation is widely used for solving complex scattering problems in electromagnetics. Approximating the total internal electric field by the background field is reasonable for small material contrasts, as long as the scatterer is not too large and the frequency is not too high. In many geophysical applications, however, moderate to high conductivity contrasts cause both the real and imaginary parts of the internal electric field to differ greatly from the background. In the extended Born approximation, which dramatically improves the accuracy of the Born approximation, the total electric field in the integral over the scattering volume is approximated by the background electric field projected onto a depolarization tensor. Finite-difference and finite-element methods are usually used for EM scattering problems with a 2D model and a 3D source because of their ability to simulate complex subsurface conductivity distributions. The price paid for a 3D source is that many wavenumber-domain solutions and their inverse Fourier transforms must be computed. In these differential-equation methods, the entire region, including homogeneous parts, must be discretized, which increases the number of nodes and the matrix size; they therefore require long computing times and large memory. In this study, an EM modeling program for a 2D model and a 3D source is developed based on the extended Born approximation. The solution is fast and stable. Using the program, crosshole EM responses with a vertical magnetic dipole source are obtained and compared with 3D integral-equation solutions. The agreement between the integral-equation solution and the extended Born approximation is remarkable over the entire frequency range but degrades as the conductivity contrast between the anomalous body and the background medium increases; the extended Born approximation is accurate when the conductivity contrast is lower than 1:10.
Therefore, the location and conductivity of an anomalous body can be estimated effectively by the extended Born approximation, although a quantitative estimate of conductivity is difficult when the contrast is too high.
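The approximation the abstract describes can be written compactly. The following is the generic form found in the extended Born (localized nonlinear) literature, with background field $\mathbf{E}_b$, Green's tensor $\overline{\mathbf{G}}$, and conductivity contrast $\Delta\sigma$; the paper's exact notation may differ.

```latex
% Scattered-field integral equation over the anomalous volume V
\mathbf{E}(\mathbf{r}) = \mathbf{E}_b(\mathbf{r})
  + \int_V \overline{\mathbf{G}}(\mathbf{r},\mathbf{r}')\,
           \Delta\sigma(\mathbf{r}')\,\mathbf{E}(\mathbf{r}')\,dv'

% Born approximation: replace the internal field by the background field
\mathbf{E}(\mathbf{r}') \approx \mathbf{E}_b(\mathbf{r}')

% Extended Born: project the background field onto a depolarization tensor
\mathbf{E}(\mathbf{r}) \approx \overline{\mathbf{\Gamma}}(\mathbf{r})\,
                               \mathbf{E}_b(\mathbf{r}),
\qquad
\overline{\mathbf{\Gamma}}(\mathbf{r}) =
  \left[\,\overline{\mathbf{I}}
   - \int_V \overline{\mathbf{G}}(\mathbf{r},\mathbf{r}')\,
            \Delta\sigma(\mathbf{r}')\,dv' \right]^{-1}
```

Because the depolarization tensor $\overline{\mathbf{\Gamma}}$ depends on the contrast, the internal field is no longer assumed equal to the background field, which is why the approximation holds at contrasts where the plain Born approximation fails.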


An Efficient Heuristic for Storage Location Assignment and Reallocation for Products of Different Brands at Internet Shopping Malls for Clothing (의류 인터넷 쇼핑몰에서 브랜드를 고려한 상품 입고 및 재배치 방법 연구)

  • Song, Yong-Uk;Ahn, Byung-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.129-141
    • /
    • 2010
  • An Internet shopping mall for clothing operates a warehouse for packing and shipping products to fulfill its orders. All products in the warehouse are put into boxes by brand, and the boxes are stored in rows on the shelves equipped in the warehouse. To make picking and management easy, boxes of the same brand are located side by side on the shelves. When new products arrive at the warehouse, the products of a brand are put into boxes, and those boxes are placed adjacent to the existing boxes of that brand. If there is not enough space for the incoming boxes, however, some boxes of other brands must be moved away so that the incoming boxes can be placed adjacently in the resulting vacant spaces. We want to minimize the movement of the existing boxes of other brands to other places on the shelves during the warehousing of incoming boxes, while keeping all boxes of the same brand side by side. First, we define box adjacency by viewing the shelves as a one-dimensional series of storage spaces (cells), numbering the cells consecutively starting from one, and considering two boxes adjacent if their cell numbers are consecutive. We then formulate the problem as an integer programming model to obtain an optimal solution. An integer programming formulation solved by Branch-and-Bound, however, may not be tractable: it would take too long given the number of cells and boxes in the warehouse and the computing power available to the Internet shopping mall. As an alternative, we designed a fast heuristic for this reallocation problem that focuses only on the unused spaces (empty cells) on the shelves, which results in an assignment problem model.
In this approach, the incoming boxes are assigned to empty cells, and the boxes are then reorganized so that the boxes of each brand are adjacent to one another. The objective of this new approach is to minimize box movement during the reorganization while keeping the boxes of each brand adjacent. The approach does not, however, guarantee optimality for the original problem, that is, minimizing the movement of existing boxes while keeping boxes of the same brand adjacent. Even though the heuristic may produce a suboptimal solution, it yields a satisfactory solution within a satisfactory time, acceptable to real-world experts. To justify the solution quality of the heuristic, we randomly generated 100 problems with 2,000 to 4,000 cells, solved them with both our heuristic and the original integer programming approach using a commercial optimization package, and compared the heuristic solutions with the corresponding optimal solutions in terms of solution time and number of box movements. We also implemented our heuristic in a storage location assignment system for the Internet shopping mall.
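The assignment-problem core of the heuristic can be sketched with an off-the-shelf solver. The shelf layout, brand anchors, and distance-based cost below are illustrative assumptions standing in for the paper's actual movement cost:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical layout: the shelf is a numbered series of cells; brand runs and
# empty cells are invented for illustration, not taken from the paper's data.
empty_cells = [3, 7, 8, 15]            # indices of unused cells on the shelf
new_boxes = ["A", "A", "B"]            # brands of incoming boxes
brand_anchor = {"A": 5, "B": 14}       # cell where each brand's run currently ends

# Cost = distance from a brand's existing run to the candidate empty cell,
# a proxy for how much reshuffling the placement would trigger.
cost = np.array([[abs(brand_anchor[b] - c) for c in empty_cells]
                 for b in new_boxes])

rows, cols = linear_sum_assignment(cost)   # Hungarian-style optimal assignment
placement = {i: empty_cells[j] for i, j in zip(rows, cols)}
total_cost = int(cost[rows, cols].sum())
print(placement, total_cost)
```

Restricting attention to empty cells is exactly what turns the intractable integer program into a polynomial-time assignment problem; the subsequent brand-reorganization step is not shown here.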

Sea Fog Level Estimation based on Maritime Digital Image for Protection of Aids to Navigation (항로표지 보호를 위한 디지털 영상기반 해무 강도 측정 알고리즘)

  • Ryu, Eun-Ji;Lee, Hyo-Chan;Cho, Sung-Yoon;Kwon, Ki-Won;Im, Tae-Ho
    • Journal of Internet Computing and Services
    • /
    • v.22 no.6
    • /
    • pp.25-32
    • /
    • 2021
  • In line with future changes in the marine environment, Aids to Navigation have come to be used in various fields, and their use is increasing. The term "Aids to Navigation" means an aid to navigation prescribed by Ordinance of the Ministry of Oceans and Fisheries that shows navigating ships the position and direction of ships, the position of obstacles, etc. through lights, shapes, colors, sound, radio waves, and so on. Aids to Navigation are now also being transformed into a means of identifying and recording the marine weather environment by mounting various sensors and cameras. However, Aids to Navigation are frequently lost through collisions with ships; in particular, safety accidents occur because of poor visibility due to sea fog. The inflow of sea fog poses risks to ports and sea transportation, and sea fog is not easy to predict because the possibility of occurrence varies greatly with time and region. In addition, Aids to Navigation are distributed throughout the sea, which makes managing them individually difficult. To solve this problem, this paper aims to identify the marine weather environment by approximately estimating the sea fog level from images taken by cameras mounted on Aids to Navigation, and thereby to reduce weather-related safety accidents. Instead of optical and temperature sensors, which are difficult to install and expensive, the sea fog level is measured from ordinary images of the mounted cameras. Furthermore, as a preliminary study toward real-time sea fog level estimation in various seas, sea fog level criteria are presented using a haze model and the Dark Channel Prior (DCP). A threshold is set on the image through DCP, and the number of fog-free pixels in the entire image is counted to estimate the sea fog level.
Experimental results demonstrate the feasibility of estimating the sea fog level on both a synthetic haze image dataset and a real haze image dataset.
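The dark-channel counting step described above can be sketched in a few lines. The patch size, threshold, and level bins are illustrative assumptions, not the paper's calibrated criteria:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel: per-pixel min over RGB, then a min-filter over a patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def fog_level(img, threshold=0.6, bins=(0.2, 0.5, 0.8)):
    """Crude fog score: fraction of pixels whose dark channel exceeds a
    threshold (haze lifts the dark channel), mapped to discrete levels 0..3."""
    ratio = float((dark_channel(img) > threshold).mean())
    return sum(ratio >= b for b in bins), ratio

# Synthetic sanity check: a clear scene has a near-zero dark channel, while a
# hazy scene is pulled toward the airlight, per the haze model I = t*J + (1-t)*A.
rng = np.random.default_rng(1)
clear = rng.uniform(0.0, 1.0, size=(64, 64, 3))   # some channel is usually dark
hazy = 0.2 * clear + 0.8 * 0.9                    # transmission t=0.2, airlight A=0.9
print(fog_level(clear), fog_level(hazy))
```

The key observation from the Dark Channel Prior is that fog-free outdoor patches almost always contain a very dark channel value, so the fraction of bright dark-channel pixels is a usable proxy for fog density.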

A Study on Startups' Dependence on Business Incubation Centers (창업보육서비스에 따른 입주기업의 창업보육센터 의존도에 관한 연구)

  • Park, JaeSung;Lee, Chul;Kim, JaeJon
    • Korean small business review
    • /
    • v.31 no.2
    • /
    • pp.103-120
    • /
    • 2009
  • As business incubation centers (BICs) have been operating for more than 10 years in Korea, many early-stage startups use the services they provide. Korean BICs have accumulated knowledge and experience over the past ten years, and their services have improved considerably. Business incubating service has three facets: (1) business infrastructure service, (2) direct service, and (3) indirect service. The mission of BICs is to provide early-stage entrepreneurs with incubating services for a limited period, helping them grow strong enough to survive the fierce competition after graduating from incubation. However, incubating services sometimes fail to foster the independence of new startups and instead raise many companies' dependence on BICs. Dependence on BICs is therefore a very important factor in understanding the survival of incubated startups after graduation. The purpose of this study is to identify the main factors that influence a firm's dependence on BICs and to characterize the relationships among them. Business incubating service is the core construct of this study. It includes various activities and resources, such as offering physical facilities, legal services, and connections to outside organizations. These services are extensive and take various forms; they are provided by BICs directly or indirectly. Past studies have identified various incubating services and classified them in different ways. Based on these studies, we classify business incubating service into the three categories mentioned above: (1) business infrastructure support, (2) direct support, and (3) networking support. Business infrastructure support provides the essential resources for starting a business, such as physical facilities.
Direct support offers the business resources available within the BIC, such as human, technical, and administrative resources. Networking (indirect) support connects firms to resources outside the BIC. Dependence is generally defined as the degree to which a client firm needs the resources provided by the service provider in order to achieve its goals. Dependence arises when a firm recognizes the benefits of interacting with its counterpart; the more positive the outcomes a firm derives from the relationship, the more dependent on the partner it inevitably becomes. In business incubation, the longer a resident firm is incubated, the stronger its dependence on the BIC can be expected to be. To foster the independence of incubated firms, BICs must be able to adjust the provision of their services so as to control the firms' dependence. Based on the above discussion, a research model for the relationships between dependence and its antecedents was developed. We surveyed companies residing in BICs to test the model. The instrument was adapted, in part, from previous relevant studies. For reliability and validity testing, a pre-test was conducted with firms residing in, and incubated by, BICs in the Gwangju and Jeonnam region, and the questionnaire was revised in accordance with the feedback. With the help of the business incubating managers of each BIC, we mailed the questionnaire to all incubated firms. The survey was conducted over a three-week period, and gifts (of approximately ₩10,000 in value) were offered to all actively participating respondents. The incubating period was reported by the managers and transformed using natural logarithms. A total of 180 firms participated in the survey.
However, 4 cases were excluded for inconsistent answers on reversed items, leaving 176 cases for analysis. We acknowledge that 176 samples may not be sufficient for regression analyses with the 5 research variables in our study. Each variable was measured through multiple items, and an exploratory factor analysis was conducted to assess unidimensionality. To test construct validity, a principal component factor analysis with Varimax rotation was conducted. The items correspond well to their respective factors, demonstrating a high degree of convergent validity, and since the loadings on each factor exceed the loadings on the other factors, discriminant validity is also clear. The factors were extracted as expected, explaining 70.97, 66.32, and 52.97 percent of the total variance, respectively, each with an eigenvalue greater than 1.000. Internal consistency was evaluated with Cronbach's alpha; the values ranged from 0.717 to 0.950, all securely above 0.700, which is satisfactory. The reliability and validity of the research variables are therefore considered acceptable. The effects on dependence were assessed using regression analysis. Pearson correlations were calculated for variables measured on interval or ratio scales. Potential multicollinearity among the antecedents was evaluated before the multiple regression analysis, as some variables were significantly correlated with others (e.g., direct service and indirect service); their tolerance values range between 0.334 and 0.613, demonstrating that multicollinearity is not a likely threat to the parameter estimates.
After checking the basic assumptions, we conducted multiple regression analyses and moderated regression analyses to test the hypotheses. The regression model is significant at p < 0.001 (F = 44.260), and the predictors explain 42.6 percent of the total variance. Hypotheses 1, 2, and 3 address the relationships between the incubated firms' dependence and the business incubating services: business infrastructure service, direct service, and indirect service are all significantly related to dependence (β = 0.300, p < 0.001; β = 0.230, p < 0.001; β = 0.226, p < 0.001), supporting Hypotheses 1, 2, and 3. With the incubating period as the moderator and dependence as the dependent variable, adding the interaction terms to the regression equation yielded a significant increase in R² (F change = 2.789, p < 0.05); in particular, direct service and indirect service exert different effects on dependence, supporting Hypotheses 5 and 6. Based on these empirical findings, this study suggests several strategies and specific calls to action for BICs. Business infrastructure service affects a firm's dependence more than the other two services; introducing an additional, higher charge rate for firms that have graduated but are allowed to stay is a basic and legitimate way for a BIC to control dependence. We also detected differential effects of direct and indirect services: firms with longer incubating periods are more positively sensitive to indirect service and more negatively sensitive to direct service when assessing their dependence, implying that BICs should develop strategies based on a firm's incubating period.
Last but not least, it would be valuable for future studies to discover other important variables that influence firms' dependence, and to explain the independence of startup companies in BICs.
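The moderated-regression procedure the study reports (adding interaction terms and testing the R² change with an F statistic) can be sketched as follows. The data are synthetic and the coefficients invented; only the sample size n = 176 is taken from the abstract:

```python
import numpy as np

def r_squared(X, y):
    """OLS R^2 with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def f_change(X_base, X_full, y):
    """F statistic for the R^2 increase when interaction terms are added."""
    r2b, r2f = r_squared(X_base, y), r_squared(X_full, y)
    df1 = X_full.shape[1] - X_base.shape[1]
    df2 = len(y) - X_full.shape[1] - 1
    return (r2f - r2b) / df1 / ((1 - r2f) / df2), r2b, r2f

# Illustrative data: dependence ~ direct service + incubating period (moderator).
rng = np.random.default_rng(2)
n = 176                                  # sample size reported in the study
direct = rng.normal(size=n)
period = rng.normal(size=n)              # stands in for log incubating period
y = 0.4 * direct + 0.3 * period - 0.25 * direct * period + rng.normal(size=n)

X_base = np.column_stack([direct, period])
X_full = np.column_stack([direct, period, direct * period])
F, r2b, r2f = f_change(X_base, X_full, y)
print(F, r2b, r2f)
```

A significant F-change, as reported in the study (F change = 2.789, p < 0.05), is evidence that the incubating period moderates the service-dependence relationship rather than merely adding a main effect.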

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public express toward an object in written texts. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments toward each aspect of an object. Because it has more practical value for business, ABSA is drawing attention from both academia and industry. Given a review such as "The restaurant is expensive but the food is really fantastic", general SA evaluates the overall sentiment toward the restaurant as positive, while ABSA identifies the restaurant's 'price' aspect as negative and its 'food' aspect as positive; ABSA thus enables more specific and effective marketing strategies. Performing ABSA requires identifying the aspect terms or aspect categories in the text and judging the sentiments toward them. Accordingly, ABSA has four main subtasks: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). It is usually conducted by extracting aspect terms and performing ATSC to analyze the sentiments toward them, or by extracting aspect categories and performing ACSC to analyze the sentiments toward them. An aspect category is expressed by one or more aspect terms, or is indirectly inferred from other words. In the example above, 'price' and 'food' are both aspect categories, and the category 'food' is expressed by the aspect term 'food' in the review; if the review mentioned 'pasta', 'steak', or 'grilled chicken special', these would all be aspect terms for the category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect.
A category like 'price', on the other hand, which has no specific aspect term but can be inferred from an opinion word such as 'expensive', is called an implicit aspect. So far we have used 'aspect category' to avoid confusion with 'aspect term'; from here on, we treat 'aspect category' and 'aspect' as the same concept and mostly use 'aspect' for convenience. Note that ATSC analyzes the sentiment toward given aspect terms and so deals only with explicit aspects, while ACSC handles both explicit and implicit aspects. This study addresses the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to incorporate the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there a performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, does the order of the sentence containing the aspect category in the QA or NLI sentence pair affect performance? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, we derived ACSC models that outperform existing studies without expanding the training dataset. We also found that incorporating the output vectors of the aspect category tokens is more effective than using only the [CLS] output vector, that QA-type input generally performs better than NLI, and that in the QA type the order of the sentence containing the aspect category is irrelevant to performance.
Although there may be differences depending on dataset characteristics, with NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to perform better. The methodology used here to design ACSC models could be applied similarly to other tasks such as ATSC.
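The QA and NLI sentence-pair configurations compared in the study can be sketched as plain input construction. The question template and helper names below are illustrative assumptions; the paper's exact templates are not given in the abstract:

```python
# Hedged sketch of the two sentence-pair input styles compared in the study.

def qa_pair(review, aspect, aspect_first=True):
    """QA type: the aspect category is wrapped in a natural-language question."""
    question = f"what do you think of the {aspect} of it ?"
    return (question, review) if aspect_first else (review, question)

def nli_pair(review, aspect, aspect_first=True):
    """NLI type: the aspect category stands alone as a pseudo-hypothesis."""
    return (aspect, review) if aspect_first else (review, aspect)

review = "The restaurant is expensive but the food is really fantastic"
print(qa_pair(review, "price"))
print(nli_pair(review, "food", aspect_first=False))
# A BERT tokenizer would then encode each pair as [CLS] s1 [SEP] s2 [SEP];
# the study additionally pools the aspect-token outputs rather than
# classifying from the [CLS] vector alone.
```

The `aspect_first` flag corresponds to the study's third research question, the effect of sentence order within the pair.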

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, database structures have been changing from static databases to dynamic data streams. Traditional data mining techniques have served as tools for decision making, such as establishing marketing strategies and DNA analysis. However, the ability to analyze real-time data quickly is essential in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining over parts of the database, or transaction by transaction, instead of over all the data at once. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy Counting and hMiner. When Lossy Counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, performs mining whenever a new transaction arrives; since it extracts frequent patterns as soon as a transaction is entered, the latest mining results always reflect real-time information. For this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy Counting, and the more recent hMiner. As performance criteria, we first consider total runtime and average processing time per transaction; to compare the efficiency of their storage structures, maximum memory usage is also evaluated; lastly, we examine how stably the two algorithms mine databases whose number of items gradually increases.
In terms of mining time and transaction processing, hMiner is faster than Lossy Counting: hMiner stores candidate frequent patterns in a hash structure and can access them directly, whereas Lossy Counting stores them in a lattice and must search multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy Counting in maximum memory usage: hMiner must keep all the information for each candidate pattern in its hash buckets, while Lossy Counting reduces the stored information through the lattice structure, whose nodes can share items that appear in multiple patterns, making its memory usage more efficient. However, hMiner shows better scalability, for the following reasons: as the number of items increases, fewer items are shared, which weakens Lossy Counting's memory efficiency, and as the number of transactions grows, its pruning effect deteriorates. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory; their data structures therefore need to be made more efficient before they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
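The Lossy Counting algorithm discussed above is simple enough to sketch in full for single items (the paper mines itemset patterns, which is more involved, so treat this as the single-item core of the technique):

```python
import math

def lossy_counting(stream, epsilon=0.1):
    """Lossy Counting over a landmark window: keep (count, max_error) per item
    and prune at every bucket boundary of width ceil(1/epsilon)."""
    width = math.ceil(1 / epsilon)
    entries = {}             # item -> (frequency, maximum possible undercount)
    for n, item in enumerate(stream, start=1):
        bucket = math.ceil(n / width)
        if item in entries:
            f, delta = entries[item]
            entries[item] = (f + 1, delta)
        else:
            entries[item] = (1, bucket - 1)
        if n % width == 0:   # prune infrequent entries at the bucket boundary
            entries = {k: (f, d) for k, (f, d) in entries.items()
                       if f + d > bucket}
    return entries

def frequent(entries, n, support, epsilon=0.1):
    """Items whose true frequency is guaranteed >= (support - epsilon) * n."""
    return {k for k, (f, d) in entries.items() if f >= (support - epsilon) * n}

stream = ["a"] * 60 + ["b"] * 30 + list("cdefghij")   # 98 transactions
entries = lossy_counting(stream, epsilon=0.1)
print(frequent(entries, len(stream), support=0.3))    # {'a', 'b'}
```

The periodic pruning is what bounds memory, and the `delta` field is what makes the trade-off explicit: reported counts may undercount by at most epsilon times the stream length, which matches the paper's observation that memory efficiency is bought with approximation.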

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.17-23
    • /
    • 2020
  • Biometric information, which measures human characteristics, has attracted great attention as a highly reliable security technology because there is no risk of theft or loss. Among biometric traits, fingerprints are mainly used in fields such as identity verification and identification. When a fingerprint image presents problems that hinder authentication, such as a wound, wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to it. By implementing artificial intelligence software that distinguishes fingerprint images containing cuts and wrinkles, it becomes easy to check for such defects and, by selecting an appropriate algorithm, to improve the fingerprint image. In this study, we built a database of 17,080 fingerprints in total by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open dataset, and the prints of 98 Korean students. Criteria were established to determine whether each image in the database contains injuries or wrinkles, and the data were validated by experts. The training and test datasets consist of the Cambodian and Sokoto data, split at a ratio of 8:2, and the data of the 98 Korean students were set aside as a validation set. Using this dataset, five CNN-based architectures were implemented: a classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3, and a study was conducted to find the model that performs best at this reading task. Among the five architectures, ResNet50 showed the best performance, at 81.51%.
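The data split described (Cambodian and Sokoto prints divided 8:2 into train/test, Korean prints held out for validation) can be sketched as follows. The file names are invented, and the pool size of 16,100 is an assumption derived from the reported totals (17,080 images minus 98 Korean students at 10 fingers each):

```python
import random

def split_dataset(cambodia_sokoto, korean, train_ratio=0.8, seed=42):
    """Shuffle the combined Cambodian+Sokoto pool into train/test at 8:2;
    the Korean prints form a held-out validation set, as in the study."""
    pool = list(cambodia_sokoto)
    random.Random(seed).shuffle(pool)     # deterministic shuffle for the sketch
    cut = int(len(pool) * train_ratio)
    return pool[:cut], pool[cut:], list(korean)   # train, test, validation

cambodia_sokoto = [f"cs_{i:05d}.png" for i in range(16100)]  # illustrative names
korean = [f"kr_{i:04d}.png" for i in range(980)]             # 98 students x 10
train, test, val = split_dataset(cambodia_sokoto, korean)
print(len(train), len(test), len(val))
```

Keeping the Korean prints entirely out of the train/test pool gives a validation set from a different acquisition source, which is a stricter check of generalization than a random holdout.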

An Implementation of Lighting Control System using Interpretation of Context Conflict based on Priority (우선순위 기반의 상황충돌 해석 조명제어시스템 구현)

  • Seo, Won-Il;Kwon, Sook-Youn;Lim, Jae-Hyun
    • Journal of Internet Computing and Services
    • /
    • v.17 no.1
    • /
    • pp.23-33
    • /
    • 2016
  • Current smart lighting identifies a user's action and location through sensors and then offers a lighting environment suited to the current context. Sensor-based context awareness technology, however, considers only a single user; studies that interpret the occurrence of, and conflicts among, multiple users' contexts are lacking. Existing studies have used fuzzy theory and algorithms such as ReBa to resolve context conflicts, but these merely avoid the opportunity for conflict by dividing the space where users are located into several areas and providing services per area; they therefore cannot be regarded as customized services that resolve context conflicts based on personal preference. This paper proposes a priority-based LED lighting control system that interprets multiple context conflicts: when service conflicts arise because various contexts occur simultaneously for many users, the system decides which service to provide based on priorities granted according to context type. The residential environment is classified into five areas (living room, bedroom, study room, kitchen, and bathroom), and the contexts that may occur within each area, for several users, are defined as 20 contexts such as exercising, doing makeup, reading, dining, and entering. The proposed system defines users' contexts with an ontology-based model and provides a user-oriented lighting environment through standard-based rules and a context reasoning engine. To resolve conflicts among various users' contexts in the same space at the same time, contexts requiring user concentration are given the highest priority, and visual comfort is offered as the best alternative when priorities are equal; these serve as the criteria for service selection when conflicts occur.
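The conflict-resolution rule described (highest priority wins, visual comfort breaks ties) can be sketched directly. The specific priority and comfort values below are illustrative assumptions, not the paper's actual assignments:

```python
# Minimal sketch of priority-based context conflict resolution: when several
# users' contexts collide in one area, the highest-priority context wins, and
# visual comfort breaks ties. All numeric values are invented for illustration.
CONTEXT_PRIORITY = {"reading": 3, "doing_makeup": 3, "exercising": 2,
                    "dining": 2, "entering": 1}       # higher = needs focus
VISUAL_COMFORT = {"reading": 0.9, "doing_makeup": 0.8, "exercising": 0.5,
                  "dining": 0.7, "entering": 0.4}     # tie-break score

def resolve(contexts):
    """Pick the lighting service for simultaneous contexts in the same area."""
    return max(contexts, key=lambda c: (CONTEXT_PRIORITY[c], VISUAL_COMFORT[c]))

print(resolve(["dining", "entering"]))       # dining outranks entering
print(resolve(["reading", "doing_makeup"]))  # tie on priority: comfort decides
```

In the full system this selection step would sit downstream of the ontology model and reasoning engine, which are responsible for recognizing which contexts are active in the first place.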