• Title/Summary/Keyword: Model-Based Architecture (모델 기반 아키텍처)


A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationship between regions by extracting each region's features from the overall information of the image. However, the CNN model may not be suitable for emotional image data that lacks distinctive regional features. To address the difficulty of classifying emotion images, researchers propose new CNN-based architectures suited to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different emotions are induced by different colors. Among studies using deep learning, some have applied color information to image sentiment classification: using the image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result values produced after the model classifies an image's emotion. Both methods improve accuracy by modifying the result values based on statistics over the colors of the pictures. In the first method, the two-color combinations most frequent across all training data are found before training; during the test, the most frequent two-color combination of each test image is found, and the result values are corrected according to the color combination distribution. The second method weights the result value obtained after the model classifies an image's emotion using an expression based on the log and exponential functions. Emotion6, classified into six emotions, and ArtPhoto, classified into eight categories, were used as the image data. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to each CNN model. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when creating a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's K-means clustering, the seven colors most widely distributed in each image are extracted; each extracted color is then converted to the closest of the 16 reference colors by comparing RGB coordinate values. If combinations of three or more colors are selected, too many distinct combinations occur, the distribution becomes scattered, and each combination has too little influence on the result value; therefore, two-color combinations were used and weighted into the model. Before training, the most frequent color combinations were found for all training data images, and the distribution of color combinations for each class was stored in a Python dictionary to be used during testing. During the test, the two-color combination most frequent in each test image is found; we then check how that combination was distributed in the training data and correct the result accordingly. Several equations were devised to weight the result value from the model based on the colors extracted as described above.
The data set was randomly split 80:20, and the model was verified using the 20% of the data held out as a test set. The remaining 80% was split into five folds to perform 5-fold cross-validation, so the model was trained five times with different validation sets. Finally, performance was checked on the previously separated test dataset. Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, training was stopped; early stopping was set to restore the model with the best validation loss. Classification accuracy was better when the information extracted from color properties was used together with the CNN than when only the CNN architecture was used.
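
The following is a minimal sketch of the color-extraction step described above, assuming scikit-learn's KMeans for the clustering. The reference RGB coordinates, function names, and the pixel-count weighting are illustrative assumptions; the abstract does not specify exact values.

```python
# Sketch of the abstract's color-extraction step: K-means finds the 7
# dominant RGB clusters of an image, each cluster centre is snapped to the
# nearest of 16 named reference colors, and the two most frequent reference
# colors form the image's "two-color combination".
# The reference RGB values below are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

REFERENCE_COLORS = {  # hypothetical RGB coordinates for the 16 colors
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208),
    "pink": (255, 192, 203), "magenta": (255, 0, 255),
    "brown": (139, 69, 19), "gray": (128, 128, 128),
    "silver": (192, 192, 192), "gold": (255, 215, 0),
    "white": (255, 255, 255), "black": (0, 0, 0),
}

def two_color_combination(image: np.ndarray, n_clusters: int = 7) -> tuple:
    """Return the two dominant reference colors of an HxWx3 RGB image."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)
    # Weight each cluster centre by the number of pixels it covers.
    counts = np.bincount(km.labels_, minlength=n_clusters)
    names, rgbs = zip(*REFERENCE_COLORS.items())
    rgbs = np.array(rgbs, dtype=float)
    votes = {}
    for centre, count in zip(km.cluster_centers_, counts):
        nearest = names[np.argmin(np.linalg.norm(rgbs - centre, axis=1))]
        votes[nearest] = votes.get(nearest, 0) + int(count)
    top_two = sorted(votes, key=votes.get, reverse=True)[:2]
    return tuple(sorted(top_two))  # canonical order for dictionary keys
```

In the paper's scheme, the per-class frequencies of these two-color combinations, collected over the training set into a Python dictionary, would then be used to correct the classifier's result values; the exact log/exponential weighting expression is not given in the abstract.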

A Study on the Service and Performance factors of Public EA (공공부문 EA 서비스요인과 성과에 관한 연구)

  • Shin, Daul;Park, Joo-Seok;Park, JaeHong
    • Journal of Information Technology and Architecture / v.11 no.4 / pp.409-426 / 2014
  • As of 2014, Korea had won the UN e-Government evaluation three times in a row, and last year the government-EA received the UN Public Service Award. Korea received these two UN awards because of the development and execution of personalized, integrated services based on the government-EA. EA was selected and carried out as one of the 31 early e-Government projects, and in 2005 a law was enacted requiring the public sector to adopt EA for efficient informatization. Many public agencies have actively utilized EA to derive internal and external performance. On the other hand, over the last 10 years, some public agencies have still treated EA only at the level of managed compliance. The main purpose of this study is to find out the major factors behind the successful use and results of EA, to build an EA success model, and to examine it. To do so, this study reviews related prior research on EA services, information systems success factors, and performance measures, develops a success model for EA, and then tests the model. This study offers significant practical and theoretical implications for the EA success model, as it is the nation's first research in which the SERVQUAL model and the IS Success Model (DeLone & McLean, 2003) are combined and examined.

A Robustness Test Method and Test Framework for the Services Composition in the Service Oriented Architecture (SOA에서 서비스 조합의 강건성 테스트 방법 및 테스트 프레임워크)

  • Kuk, Seung-Hak;Kim, Hyeon-Soo
    • Journal of KIISE: Software and Applications / v.36 no.10 / pp.800-815 / 2009
  • Recently, Web-services-based service-oriented architecture has been widely used to integrate effectively the various applications distributed on networks. In service-oriented architecture, BPEL, as a standard modeling language for business processes, provides the way to integrate the various services provided by applications. Over the past few years, studies have been made on testing the compatibility of services and on discriminating and tracing business processes in service composition, and many studies on service composition with BPEL are under way. However, there have been few efforts to solve the problems caused by service composition; in particular, there has been no effort to evaluate whether a composite service is reliable and robust against exceptional situations. In this paper, we suggest a test framework and a testing method for the robustness of composite services written in WS-BPEL. For this, we first extract information from the BPEL process and the participant services. Next, with the extracted information, we construct a virtual testing environment that generates the various faults and exceptional cases that may arise within the real services. Finally, the robustness testing of a composite service is performed on the test framework.
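
The abstract does not show the framework itself; the following is a minimal Python sketch of its core idea under stated assumptions: a stub stands in for one participant service of the composite process and injects faults, delays, and malformed replies, so the WS-BPEL process under test can be exercised against exceptional situations. All class and function names and the fault catalogue are hypothetical.

```python
# Fault-injection sketch: a stub replacing one participant service randomly
# returns a normal reply, raises a service fault, delays past the caller's
# timeout, or returns a schema-violating reply.
import random
import time

FAULT_MODES = ("normal", "service_fault", "timeout", "malformed_reply")

class StubService:
    """Replaces one participant service of the composite service under test."""

    def __init__(self, name: str, timeout_s: float = 5.0):
        self.name = name
        self.timeout_s = timeout_s

    def invoke(self, request: dict) -> dict:
        mode = random.choice(FAULT_MODES)
        if mode == "service_fault":
            raise RuntimeError(f"{self.name}: injected service fault")
        if mode == "timeout":
            time.sleep(self.timeout_s + 1)  # delay past the caller's timeout
        if mode == "malformed_reply":
            return {"unexpected": None}      # schema-violating reply
        return {"status": "ok", "echo": request}

def run_robustness_test(process, stub: StubService, trials: int = 100) -> int:
    """Count injected faults that escape the composite process unhandled."""
    unhandled = 0
    for i in range(trials):
        try:
            process(stub, {"case": i})   # process: callable driving the stub
        except Exception:
            unhandled += 1               # fault escaped the process
    return unhandled
```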

Scenario-Based Implementation Synthesis for Real-Time Object-Oriented Models (실시간 객체 지향 모델을 위한 시나리오 기반 구현 합성)

  • Kim, Sae-Hwa;Park, Ji-Yong;Hong, Seong-Soo
    • The KIPS Transactions: Part D / v.12D no.7 s.103 / pp.1049-1064 / 2005
  • The demands of increasingly complicated software have led to the proliferation of object-oriented design methodologies in embedded systems. To execute a system designed with objects on target hardware, a task set should be derived from the objects, representing how many tasks reside in the system and which task processes which event arriving at an object. The derived task set greatly influences the responsiveness of the system. Nevertheless, it is very difficult to derive an optimal task set due to the discrepancy between objects and tasks, so the common method currently used by developers is to repetitively try various task sets. This paper proposes the Scenario-based Implementation Synthesis Architecture (SISA) to solve this problem. SISA encompasses a method for deriving a task set from a system designed with objects, as well as its supporting development tools and run-time system architecture. A system designed with SISA not only consists of the smallest possible number of tasks but also guarantees that the response time for each event in the system is minimized. We fully implemented SISA by extending the ResoRT development tool and applied it to an existing industrial PBX system. The experimental results show that maximum response times were reduced by 30.3% on average compared with task sets derived by the best known existing methods.
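
The abstract does not detail SISA's derivation algorithm. As a rough, hypothetical illustration of the underlying idea (shrinking the task count without losing per-event responsiveness), the sketch below merges event-handling scenarios that share a priority level into a single task; all names are illustrative.

```python
# Rough illustration (not SISA's actual algorithm): scenarios are merged
# into the same task when they share a priority level, so the task count
# shrinks while each event keeps a handler at its required priority.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    event: str      # external event that triggers the scenario
    priority: int   # smaller number = more urgent

def derive_task_set(scenarios: list) -> dict:
    """Map each priority level to one task handling all its scenarios."""
    tasks = defaultdict(list)
    for s in scenarios:
        tasks[s.priority].append(s.event)
    return {f"task_p{p}": events for p, events in sorted(tasks.items())}

if __name__ == "__main__":
    demo = [Scenario("call_setup", 1), Scenario("digit_collect", 1),
            Scenario("billing_log", 3), Scenario("status_poll", 3)]
    print(derive_task_set(demo))
    # {'task_p1': ['call_setup', 'digit_collect'],
    #  'task_p3': ['billing_log', 'status_poll']}
```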

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with various services such as IoT, V2X, robots, artificial intelligence, augmented/virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To deliver those services, reduced latency and high reliability, on top of high-speed data, are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and 10⁶ connected devices per km². In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, the reduction of delay and the reliability of real-time services matter as much as high data speed. 5G communication uses the high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, must control the delay-related tree structure available in the event of an emergency during autonomous driving. In such scenarios, the network architecture that handles in-vehicle information is a major variable of delay. Since SDNs with the usual centralized structure find it difficult to meet the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. SDNs therefore need to be separated on a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In such SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with delay. Of these, the RTD is not a significant factor because the link is fast enough and contributes less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the overall delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request the relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50 to 250 m and maximum vehicle speeds of 30 to 200 km/h in order to examine the network architecture that minimizes delay.
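
A minimal sketch of the dwell-time arithmetic implied by this setup: given the stated cell radii and vehicle speeds, the time a vehicle spends crossing a small cell bounds how long the information change cycle and SDN processing time may take before the state goes stale. The assumption that the vehicle crosses the full cell diameter is an illustrative simplification.

```python
# Dwell time per 5G small cell for the paper's parameter ranges.

def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Seconds a vehicle spends inside a cell when crossing its diameter."""
    speed_ms = speed_kmh / 3.6           # km/h -> m/s
    return 2.0 * cell_radius_m / speed_ms

for radius in (50, 250):                 # cell radii from the setup (m)
    for speed in (30, 200):              # vehicle speeds from the setup (km/h)
        t = dwell_time_s(radius, speed)
        print(f"r={radius:3d} m, v={speed:3d} km/h -> dwell {t:6.1f} s")

# Worst case (r=50 m, v=200 km/h) gives roughly 1.8 s per cell, so the
# information change cycle plus SDN processing must fit well inside that
# window for a timely handover.
```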

Timing Driven Analytic Placement for FPGAs (타이밍 구동 FPGA 분석적 배치)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.7 / pp.21-28 / 2017
  • Practical models of FPGA architectures, which include performance- and/or density-enhancing components such as carry chains, wide-function multiplexers, and memory/multiplier blocks, are being applied to academic FPGA placement tools that used to rely on simple imaginary models. Previously, techniques such as pre-packing and multi-layer density analysis were proposed to remedy issues related to such practical models, and wire length is effectively minimized during initial analytic placement. Since timing should be optimized rather than wire length, most previous work takes timing constraints into account; however, the timing-driven techniques are mostly applied not to the initial analytic placement but to subsequent steps such as placement legalization and iterative improvement. This paper incorporates timing-driven techniques, which check whether the placement meets the timing constraints given in the standard SDC format and minimize the detected violations, into an existing analytic placer that implements pre-packing and multi-layer density analysis. First, a static timing analyzer is used to check the timing of the wire-length-minimized placement results. To minimize the detected violations, a function that minimizes the largest arrival time at the endpoints is added to the objective function of the analytic placer. Since each clock has a different period, this function is evaluated for each clock and added to the objective function. Because this function can unnecessarily tighten paths that have no violations, a new function that calculates and minimizes the largest negative slack at the endpoints is also proposed and compared. Since the existing non-timing-driven legalization is used before the timing analysis, any improvement in timing is entirely due to the functions added to the objective function. Experiments on twelve industrial examples show that the minimum-arrival-time function improves the worst negative slack by 15% on average, whereas the minimum-worst-negative-slack function improves the negative slacks by an additional 6% on average.
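
The paper's exact objective-function terms are not given in the abstract. The sketch below shows one standard way such per-clock terms are formed in analytic placement, using a smooth log-sum-exp approximation of the maximum so the term stays differentiable; treating this as the paper's formulation is an assumption, and the names and parameters are illustrative.

```python
# Per-clock timing terms for a differentiable placement objective:
# (1) smooth max of endpoint arrival times, evaluated per clock;
# (2) smooth max of negative slack only, which leaves unviolated
#     endpoints untouched, unlike the plain max-arrival term.
import numpy as np

def smooth_max(values: np.ndarray, alpha: float = 50.0) -> float:
    """Numerically stable log-sum-exp upper approximation of max(values)."""
    m = float(np.max(values))
    return m + float(np.log(np.sum(np.exp(alpha * (values - m))))) / alpha

def timing_terms(arrival: dict, period: dict, alpha: float = 50.0):
    """Compute the two candidate penalty terms, summed over clocks.

    arrival: clock name -> array of arrival times at its endpoints (ns),
             as reported by the static timing analyzer
    period:  clock name -> clock period (ns)
    """
    # Term 1: largest arrival time per clock (can over-tighten clean paths).
    max_arrival_term = sum(smooth_max(a, alpha) for a in arrival.values())
    # Term 2: worst negative slack per clock; clamping at zero ignores
    # endpoints that already meet their constraint.
    wns_term = sum(
        smooth_max(np.maximum(arrival[c] - period[c], 0.0), alpha)
        for c in arrival
    )
    return max_arrival_term, wns_term

# Either term would be added, with a weight, to the placer's wire-length
# objective; the abstract compares exactly these two choices.
```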