• Title/Summary/Keyword: Software as a Service Model

Search Result 506, Processing Time 0.028 seconds

Context Awareness Model using the Improved Google Activity Recognition (개선된 Google Activity Recognition을 이용한 상황인지 모델)

  • Baek, Seungeun;Park, Sangwon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.1
    • /
    • pp.57-64
    • /
    • 2015
  • Activity recognition technology is gaining attention because it can provide useful information tailored to the user's situation. Before the spread of smartphones, research on activity recognition had to infer the user's activity from standalone sensors; now, with the development of the IT industry, the user's activity can be inferred from a smartphone's built-in sensors, so research in this area has become much more active. By applying an activity recognition system, we can develop services such as recommending applications according to the user's preference or providing route information. Some previous activity recognition systems have the defect of consuming too much energy because they use the GPS sensor. On the other hand, the activity recognition system that Google released recently (Google Activity Recognition) needs little power because it uses the 'Network Provider' instead of GPS, which makes it well suited to smartphone applications. However, when we tested the performance of Google Activity Recognition, we found that it is difficult to obtain the user's exact activity because of unnecessary activity elements and occasional misrecognition. In this paper, we therefore describe the problems of Google Activity Recognition and propose AGAR (Advanced Google Activity Recognition), which applies a method to improve the accuracy level, since new services based on activity recognition need more exact results. To assess the value of AGAR, we compare its performance with that of other activity recognition systems and demonstrate its applicability by developing an example program.
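The abstract does not spell out AGAR's correction method. As a hedged illustration of the kind of post-processing such systems commonly apply, the sketch below drops low-value labels (e.g. TILTING) and smooths the remaining recognition stream with a sliding-window majority vote; the label names and window size are assumptions, not the paper's.

```python
from collections import Counter, deque

# Illustrative only: smoothing noisy activity-recognition output with a
# sliding-window majority vote, one common way to reduce misrecognitions.
IGNORED = {"TILTING", "UNKNOWN"}  # assumed low-value activity elements

def smooth_activities(stream, window=5):
    """Yield a majority-vote label for each reading once the window fills."""
    recent = deque(maxlen=window)
    for label in stream:
        if label in IGNORED:      # skip unnecessary activity elements
            continue
        recent.append(label)
        if len(recent) == window:
            yield Counter(recent).most_common(1)[0][0]

raw = ["STILL", "WALKING", "STILL", "STILL", "TILTING",
       "STILL", "IN_VEHICLE", "STILL", "STILL"]
print(list(smooth_activities(raw)))
```

A single spurious WALKING or IN_VEHICLE reading is voted out by the surrounding STILL readings, at the cost of a short reporting delay.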

Multiple Regression-Based Music Emotion Classification Technique (다중 회귀 기반의 음악 감성 분류 기법)

  • Lee, Dong-Hyun;Park, Jung-Wook;Seo, Yeong-Seok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.6
    • /
    • pp.239-248
    • /
    • 2018
  • Many new technologies are being studied with the arrival of the 4th industrial revolution, and emotional intelligence in particular is a popular issue. Researchers have focused on emotion analysis for music services based on artificial intelligence and pattern recognition, but they do not consider how to recommend suitable music according to the specific emotion of the user, which is a practical issue for music-related IoT applications. Thus, in this paper, we propose a probability-based music emotion classification technique that makes it possible to classify music with high precision over a range of emotions when developing music-related services. For user emotion recognition, Russell's model, one of the popular emotion models, is used as a reference. As features of the music, the average amplitude, peak average, number of wavelengths, average wavelength, and beats per minute were extracted. Multiple regressions were derived from the collected data using regression analysis, and probability-based emotion classification was carried out. In our two experiments, the emotion matching rate was 70.94% and 86.21% with the proposed technique, versus 66.83% and 76.85% for the survey participants. The experiments show that the proposed technique yields improved results for music classification.
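As an illustrative sketch of the regression step described above (not the paper's data or coefficients), the code below fits one least-squares regression per Russell axis (valence, arousal) on the five listed audio features and classifies by quadrant. All feature values, annotations, and quadrant labels are invented for illustration.

```python
import numpy as np

# Columns: avg amplitude, peak average, wavelength count, avg wavelength, BPM.
# Rows and the (valence, arousal) annotations below are made-up examples.
X = np.array([[0.42, 0.81, 1200, 0.010, 128],
              [0.18, 0.40,  700, 0.017,  70],
              [0.55, 0.90, 1500, 0.008, 140],
              [0.20, 0.35,  650, 0.018,  65],
              [0.45, 0.70, 1100, 0.011, 120],
              [0.15, 0.30,  600, 0.020,  60]], dtype=float)
Y = np.array([[ 0.7,  0.8],   # (valence, arousal) per track
              [-0.3, -0.4],
              [ 0.8,  0.9],
              [-0.4, -0.5],
              [ 0.6,  0.6],
              [-0.5, -0.6]])

A = np.hstack([X, np.ones((len(X), 1))])    # add intercept column
W, *_ = np.linalg.lstsq(A, Y, rcond=None)   # one regression per target axis

# Russell-model quadrants, labeled with assumed emotion names.
QUADRANTS = {(True, True): "happy", (False, True): "angry",
             (False, False): "sad", (True, False): "calm"}

def classify(features):
    v, a = np.append(features, 1.0) @ W     # predicted valence, arousal
    return QUADRANTS[(v > 0, a > 0)]

print(classify([0.50, 0.85, 1300, 0.009, 135]))
```

The paper's probability-based step would replace the hard quadrant lookup with a distribution over emotions around the predicted (valence, arousal) point.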

Trend of Research and Industry-Related Analysis in Data Quality Using Time Series Network Analysis (시계열 네트워크분석을 통한 데이터품질 연구경향 및 산업연관 분석)

  • Jang, Kyoung-Ae;Lee, Kwang-Suk;Kim, Woo-Je
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.6
    • /
    • pp.295-306
    • /
    • 2016
  • The purpose of this paper is to analyze research trends and to predict industrial flows using metadata from previous studies on data quality. There have been many attempts to analyze research trends in various fields, but analysis of previous studies on data quality has produced poor results because of its vast scope and volume of data. Therefore, in this paper, we applied text mining and social network analysis for time-series network analysis to data collected from the Web of Science index database, covering papers published in international data-quality journals over 10 years. The analysis results are as follows: decreases in Mathematical & Computational Biology, Chemistry, Health Care Sciences & Services, Biochemistry & Molecular Biology, and Medical Informatics; increases, on the contrary, in Environmental Sciences, Water Resources, Geology, and Instruments & Instrumentation. In addition, the social network analysis shows that the subjects with high centrality are analysis, algorithm, and network, and that image, model, sensor, and optimization are growing subjects in the data quality field. Furthermore, the industrial connection analysis shows high correlation between technique, industry, health, infrastructure, and customer service, and predicts that the Environmental Sciences, Biotechnology, and Health industries will continue to develop. This paper will be useful not only for people in the data quality industry but also for researchers who analyze research patterns and investigate industry connections in data quality.
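The centrality figures above come from a keyword co-occurrence network. As a minimal sketch of the measure involved, using an invented toy edge list rather than the paper's data, normalized degree centrality can be computed as:

```python
from collections import defaultdict

# Toy keyword co-occurrence edges, invented for illustration.
edges = [("analysis", "algorithm"), ("analysis", "network"),
         ("analysis", "model"), ("algorithm", "network"),
         ("model", "sensor"), ("image", "model"),
         ("optimization", "algorithm")]

def degree_centrality(edge_list):
    """Normalized degree centrality: degree / (n - 1) for n nodes."""
    adj = defaultdict(set)
    for a, b in edge_list:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

cent = degree_centrality(edges)
top = max(cent, key=cent.get)
print(top, round(cent[top], 3))
```

Running the analysis on edge lists from successive time windows, as the paper does, turns these static scores into a trend over time.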

A Study on the Spill-over Economic Effect Analysis of Cultural and Creative Industries in Henan Province, China (중국 허난(河南)성 문화창의산업의 경제적 파급효과 분석)

  • Zhang, Binyuan;Jia, Tingting;Bae, Ki-Hyung
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.7
    • /
    • pp.363-373
    • /
    • 2021
  • The purpose of this research is to analyze the spill-over economic effect of the cultural and creative industries (CCI) in Henan Province, China. The research object is the CCI of Henan Province, defined from five sectors among the 42 industries in the 2017 input-output table of the Statistical Bureau of Henan Province, China (culture, sports and recreation; research, experimental development, and integrated technical services; information transmission, computer services, and software; education; etc.), and analyzed through secondary integration and redefinition of the CCI of Henan Province. Through analysis of the Henan Province input-output table, this paper offers guidance for the future direction of the cultural and creative industries. The main analysis results are as follows. The total production inducement of the CCI in Henan Province is 48,848 billion yuan; the production inducement coefficients of the industry are 2.72809 and 2.23909 (column and row totals), the index of the power of dispersion is 0.26325, and the index of the sensitivity of dispersion is 0.87535. The income inducement coefficient is 0.55211, and the production tax inducement coefficient is 0.09291. Because the CCI of Henan Province has ample development potential, the government needs to provide active support and policy backing, along with legal provisions and supervision of market management. To advance the innovative development of the CCI, a new "CCI+X" model needs to be developed.
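The inducement coefficients cited above come from standard input-output analysis. As a hedged sketch with an invented 3-sector technical-coefficient matrix (not Henan data), the production inducement coefficients are the column sums of the Leontief inverse, and the dispersion indices are those sums normalized by their mean:

```python
import numpy as np

# A[i, j]: input from sector i needed per unit of sector j's output.
# These coefficients are invented for illustration.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.25],
              [0.05, 0.30, 0.10]])

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1

col_sums = L.sum(axis=0)           # production inducement coefficients
row_sums = L.sum(axis=1)
power_of_dispersion = col_sums / col_sums.mean()        # backward linkages
sensitivity_of_dispersion = row_sums / row_sums.mean()  # forward linkages

print(np.round(col_sums, 3))
print(np.round(power_of_dispersion, 3))
```

A sector whose power-of-dispersion index exceeds 1 pulls more than average production out of the rest of the economy per unit of its own final demand.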

A study on machine learning-based defense system proposal through web shell collection and analysis (웹쉘 수집 및 분석을 통한 머신러닝기반 방어시스템 제안 연구)

  • Kim, Ki-hwan;Shin, Yong-tae
    • Journal of Internet Computing and Services
    • /
    • v.23 no.4
    • /
    • pp.87-94
    • /
    • 2022
  • Recently, with the development of information and communication infrastructure, the number of Internet-connected devices is increasing rapidly. Smartphones, laptops, computers, and even IoT devices receive information and communication services through Internet access. Since most of these devices operate in web-based environments, they are vulnerable to web cyberattacks using web shells. Once a web shell is uploaded to a web server, the attacker can easily take control of the server, which is why this type of attack occurs so frequently. As web shells cause considerable damage, companies respond with various security devices such as intrusion prevention systems, firewalls, and web application firewalls. However, web shells are difficult to detect, and because of these characteristics it is hard to prevent and respond to web shell attacks with existing systems and security software alone. Therefore, we propose an automated defense system based on artificial intelligence that collects and analyzes web shells and can cope with new cyberattacks, such as detecting unknown web shells in advance, by applying machine learning and deep learning techniques to existing security software. The machine learning-based web shell defense system model proposed in this paper quickly collects, analyzes, and detects malicious web shells, one of the main cyberattacks on the web environment, and should be very helpful in designing and building security systems.
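The abstract does not specify the model's features or training corpus. As one plausible sketch of the machine-learning idea, the code below scores a script with a naive-Bayes-style log-likelihood ratio over its tokens, trained on tiny invented web-shell and benign corpora; a positive score flags the script as web-shell-like.

```python
import math
import re
from collections import Counter

# Tiny invented corpora; a real system would train on collected web shells.
SHELLS = ["eval base64_decode POST cmd", "system POST cmd exec passthru"]
BENIGN = ["echo header include template render", "query fetch render echo"]

def token_counts(docs):
    c = Counter()
    for d in docs:
        c.update(re.findall(r"\w+", d.lower()))
    return c

shell_c, benign_c = token_counts(SHELLS), token_counts(BENIGN)
vocab = set(shell_c) | set(benign_c)

def shell_score(text):
    """Log-likelihood ratio with add-one smoothing; > 0 means shell-like."""
    score = 0.0
    for tok in re.findall(r"\w+", text.lower()):
        ps = (shell_c[tok] + 1) / (sum(shell_c.values()) + len(vocab))
        pb = (benign_c[tok] + 1) / (sum(benign_c.values()) + len(vocab))
        score += math.log(ps / pb)
    return score

print(shell_score("eval(base64_decode($_POST['cmd']));"))
```

Because unseen tokens contribute near-zero evidence under smoothing, the model degrades gracefully on obfuscated shells, which is one motivation for the learning-based approach over signature matching.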

Concept and Application of Groundwater's Platform Concurrency and Digital Twin (지하수의 플랫폼 동시성과 Digital Twin의 개념과 적용)

  • Doo Houng Choi;Byung-woo Kim;E Jae Kwon;Hwa-young Kim;Cheol Seo Ki
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.13-13
    • /
    • 2023
  • Digital technology today, through the adoption of platforms and digital twins, is a leap toward a virtual reality in which the real world is integrated with networks and virtual worlds. Digital transformation between reality and virtual reality involves integrating digital technologies and solutions into every area of business. At its core, digital transformation is about data: it provides ways to use data to create value and to maximize the customer experience and the scope of the business. The basic concept of interconnecting a platform for providing optimal data with a digital twin for realizing a virtual world is a data business of data collection, data analysis, data visualization, and data reporting. Field data are collected, recorded, and stored through digital forms. Field IoT-based data (photos, video media, etc.) are collected continuously and often stored in separate databases, but are not linked to geospatial locations. To harmonize all these digital advances and derive faster insight from groundwater data, a digital twin must be launched. The groundwater platform and the digital twin should be linked so that, on a single groundwater platform, field conditions can be visualized, real-time data streamed, and historical 3D data interacted with for the selective use of geological or geochemical data. Connecting data to a digital information model can bring a digital twin to life, but to maximize the value of the digital twin, data platform services and delivery methods must still be chosen. A digital twin with groundwater platform concurrency can make use of an API (application programming interface) that retrieves data from a database or cloud service storing static and dynamic data, hosting space for the digital twin, software that builds the digital objects, scripts for reading and writing between components, and ChatGPT and its API. With this groundwater platform technology of real-time two-way communication of collected data, a digital twin can be applied and completed, and this can be applied directly to the groundwater field. The foundation of digital twin technology in the groundwater field is monitoring instruments for groundwater observation, the construction of a groundwater platform using them, and analysis and prediction through two-way data transmission. In particular, for regions like the Nakdong River basin, where the basin area is large, the many local governments within the basin have diverse interests, extreme weather events driven by the climate crisis such as droughts, floods, and typhoons occur frequently, and many civil complaints arise from the implementation of government policies such as the opening of weirs and estuary barrages, groundwater data must be taken into account when applying the concurrency technology of the groundwater platform and the digital twin to rivers and basins.


A Web Application for Open Data Visualization Using R (R 이용 오픈데이터 시각화 웹 응용)

  • Kim, Kwang-Seob;Lee, Ki-Won
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.2
    • /
    • pp.72-81
    • /
    • 2014
  • As big data is one of the main issues of recent days, interest in its technologies is also increasing. Among several technological bases, this study focuses on data visualization and the open-source environment R. In general, the term data visualization covers the web technologies for constructing, manipulating, and displaying various types of graphic objects in interactive mode. R is an environment and language for statistical data analysis from the basic to the advanced level. In this study, a web application with these technological aspects and components is implemented and demonstrated with visualizations of geo-based open data provided by public organizations and government agencies. This application model requires neither users' own data building nor installation of proprietary software. Furthermore, it is designed for users in the geo-spatial application field with little experience or knowledge of R. The visualization results of this application can support the decision-making processes of web users with access to the service. It is expected that more practical and varied applications, with R-based geo-statistical analysis functions and complex operations linked to big data, will expand the scope and range of geo-spatial applications.

A Junk Mail Checking Model using Fuzzy Relational Products (퍼지관계곱을 이용한 내용기반 정크메일 분류 모델)

  • Park, Jeong-Seon;Kim, Chang-Min;Kim, Yong-Gi
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.10
    • /
    • pp.726-735
    • /
    • 2002
  • E-mail has become a common method of communication as the Internet is widely used instead of postal mail. Many companies have invested in e-mail advertisement as e-mail service has spread, since e-mail advertisement has the advantage of being able to target personal characteristics. Many e-mail users receive e-mails they do not want because their e-mail addresses have been disclosed to companies on the Internet. Therefore, junk mail checking systems are needed, and several e-mail service providers support junk mail filters. However, these filters can check junk mail only within limits, because they do not assess the junk degree of a mail from its contents. This paper suggests a content-based junk mail checking model using fuzzy relational products. The process of the model is as follows: (1) analyze the semantic relation between a junk word base and the e-mails; (2) compute the junk degree of each e-mail using that semantic relation; (3) classify the mails as junk or non-junk against an SVJ (Standard Value of Junk) threshold. The efficiency of the proposed technique is demonstrated by comparing the junk degrees of e-mails and the number of junk mails identified by e-mail users with those identified by the proposed junk mail checking model.
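The paper's exact fuzzy relational product is not reproduced here. The sketch below illustrates the three-step process above with a mean-based relational product under the Lukasiewicz implication, relating mails to a junk word base and thresholding at an assumed SVJ; all membership degrees are invented.

```python
# R[mail][word]: degree to which the word characterizes the mail.
R = {"mail1": {"free": 0.9, "winner": 0.8, "meeting": 0.1},
     "mail2": {"free": 0.1, "winner": 0.0, "meeting": 0.9}}
# S[word]: degree to which the word belongs to the junk word base.
S = {"free": 0.8, "winner": 0.9, "meeting": 0.1}

def implication(a, b):
    """Lukasiewicz implication: min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def junk_degree(mail):
    """Mean-based fuzzy relational subproduct of the mail with the junk set:
    the average degree to which junk-word membership implies presence in
    the mail."""
    vals = [implication(S[w], R[mail][w]) for w in S]
    return sum(vals) / len(vals)

SVJ = 0.8  # assumed Standard Value of Junk threshold
for m in R:
    print(m, round(junk_degree(m), 3), junk_degree(m) >= SVJ)
```

Words with low junk membership (such as "meeting") contribute implication values near 1 regardless of the mail, so they neither raise nor lower the junk degree much, which is the intended behavior of a subproduct-style measure.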

Bigdata Analysis Project Development Methodology (빅데이터 분석 프로젝트 수행 방법론)

  • Kim, Hyoungrae;Jeon, Do-hong;Jee, Sunghyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.3
    • /
    • pp.73-85
    • /
    • 2014
  • As the importance of big data analysis increases for improving corporate competitiveness, a unified big data project development methodology is required to study a corporate problem systematically and to evaluate its solution with respect to business value. This paper proposes the Scientific Data Analysis and Development methodology (SDAD), which integrates software development and project management methodologies for easier application to field projects. SDAD consists of 6 stages (problem definition, data preparation, model design, model development, result extraction, and service development), with 47 detailed processes and 93 products. Furthermore, SDAD unifies the previous ISP, DW, and SW development methodologies from the viewpoint of data analysis and can easily exchange products with them. Lastly, this paper introduces a way to assign a responsible person to each process and to define communication procedures in a RACI chart, improving the efficiency of interaction among professionals from different fields. SDAD was applied to a big data project at the Korea Employment Information Service, and the result was judged acceptable by the project supervision.

A Use-case based Component Mining Approach for the Modernization of Legacy Systems (레거시 시스템을 현대화하기 위한 유스케이스 기반의 컴포넌트 추출 방법)

  • Kim, Hyeon-Soo;Chae, Heung-Seok;Kim, Chul-Hong
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.7
    • /
    • pp.601-611
    • /
    • 2005
  • Owing not only to proven stability and reliability but also to significant investment and years of accumulated experience and knowledge, legacy systems have supported the core business applications of many organizations over many years. The emergence of web-based e-business environments, however, requires externalizing core business processes to the web, which is a competitive advantage in the new economy. Consequently, organizations now need to mine the business value buried in legacy systems for reuse in new e-business applications. In this paper we suggest a systematic approach to mining components that perform specific business services and that consist of legacy system assets to be leveraged on a modern platform. The proposed activities are divided into several tasks. First, use cases that realize the business processes are captured. Secondly, a design model is constructed for each identified use case in order to integrate use cases with similar functionalities. Thirdly, component candidates are identified from the design model and then adjusted by considering the elements the candidates have in common. The business components are further divided into three finer-grained components to deploy them onto J2EE/EJB environments. Finally, we define the interfaces of the components, which expose their functionalities as operations.