• Title/Summary/Keyword: Previous Algorithm


A Study on the Work Process of Creating AI SORA Videos (AI SORA 동영상 생성 제작의 작업 과정에 관한 고찰)

  • Cho, Hyun Kyung
    • The Journal of the Convergence on Culture Technology / v.10 no.5 / pp.827-832 / 2024
  • The AI program Sora is an innovative video-generation model and marks the starting point of a major paradigm shift in video planning and production. This paper examines the characteristics, applications, and workflow of AI video-production programs in order to understand the AI-driven design video production method and its generation algorithm. It reviews in detail the creative work process of text-to-video generation programs, a field expected to intensify every year, and then considers how customized videos are generated from text prompts and how this process yields results fundamentally different from previous production methods. The design direction enabled by AI image generation is also studied through a review of the strengths and weaknesses of image details in recently released AI music videos. By considering the security of Sora-generated video and examining the internal stages of the actual AI workflow, the paper derives indicators for the future direction of AI video-model production and education, along with implications for the role of the designer and the education system. The text and conclusion analyze the strengths, weaknesses, and current status of OpenAI's Sora output; discuss how to apply the model's capabilities and limitations while preserving quality and human creativity; and present problems and alternatives, illustrated with examples of the model's capabilities and limitations, aimed at enhancing human creativity.

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • As interest in Human-Computer Interaction (HCI) grows, research on HCI has been actively conducted, including work on Natural User Interface/Natural User eXperience (NUI/NUX), which uses a user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture or voice recognition, but these algorithms are complex to implement and require substantial training time, since they involve preprocessing, normalization, and feature-extraction steps. Recently, Microsoft's Kinect has attracted attention as an NUI/NUX development tool, and several studies have used it. In a previous study, the authors implemented a highly intuitive hand-mouse interface based on the user's physical features; however, it suffered from unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept, the 'virtual monitor', extracting the user's physical features through Kinect in real time. The virtual monitor is a virtual space controllable by the hand mouse, whose coordinates can be accurately mapped onto the coordinates of the real monitor. The hand-mouse interface based on the virtual-monitor concept retains the strong intuitiveness of the previous study while improving the accuracy of the mouse functions. We further increased accuracy by recognizing the user's unintended actions using a concentration indicator derived from the user's electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface on 50 people ranging in age from their teens to their fifties. In the intuitiveness experiment, 84% of subjects learned how to use it within one minute. In the accuracy experiment, the mouse functions achieved accuracies of 80.4% (drag), 80% (click), and 76.7% (double-click). Having verified both intuitiveness and accuracy experimentally, this interface is expected to serve as a good example of controlling a system by hand in the future.
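The virtual-monitor idea described above, mapping a hand position in a virtual space accurately onto real-monitor coordinates, can be sketched as a normalize-and-scale transform. This is an illustrative sketch, not the authors' implementation; the function name, argument layout, and clamping rule are all assumptions:

```python
def map_to_screen(hand, vm_origin, vm_size, screen_size):
    """Map a hand position inside the virtual monitor onto real-monitor pixels.

    hand, vm_origin, vm_size are (x, y) pairs in the same physical units
    (e.g. meters from the Kinect); screen_size is (width, height) in pixels.
    """
    # Normalize the hand position within the virtual monitor rectangle.
    nx = (hand[0] - vm_origin[0]) / vm_size[0]
    ny = (hand[1] - vm_origin[1]) / vm_size[1]
    # Clamp so a hand outside the virtual monitor pins to the screen edge.
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    # Scale to pixel coordinates.
    return (round(nx * (screen_size[0] - 1)), round(ny * (screen_size[1] - 1)))
```

In a real pipeline the virtual-monitor rectangle would be derived per user from the physical features Kinect reports (e.g. shoulder position and arm length) rather than passed in as constants.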

A Destructive Method in the Connection of the Algorithm and Design in the Digital media - Centered on the Rapid Prototyping Systems of Product Design - (디지털미디어 환경(環境)에서 디자인 특성(特性)에 관한 연구(硏究) - 실내제품(室內製品) 디자인을 중심으로 -)

  • Kim Seok-Hwa
    • Journal of Science of Art and Design / v.5 / pp.87-129 / 2003
  • The purpose of this thesis is to propose a new concept of design for the 21st century, based on a study of the general signification of the structures and signs of industrial product design and an examination of the differences between modern and post-modern design, which lead users to different design practices and interpretations. The starting point of this study is the distinct styles and patterns of 'Gestalt' in late-20th-century post-modern design compared with modern design, the determining factor in industrial product design: unlike the functional, rational styles of modern product design, the late 20th century is grounded in a pluralism characterized by complexity, synthesis, and decorativeness. Most previous studies of design have excluded visual aspects and usability, focusing only on the effective communication of design phenomena; such partial studies, limited to phenomenal aspects, have failed to discover a fundamental systemic principle. Design, however, varies with the times, and its transformation is reflected in Design Pragnanz, constituting a new text of design. It can therefore be argued that Design Pragnanz serves as an essential factor under the influence of the significance of the text. In this thesis, accordingly, I analyze 20th-century product design in light of Gestalt theory and Design Pragnanz, which have functioned as principles of past design. I attempted to identify the fundamental elements of modern and post-modern design and to examine the formal structure of product design, users' aesthetic preferences, and their semantics from an integrative viewpoint. With reference to the history and theory of design, the emphasis falls on fundamental visual phenomena rather than on structural analysis or the process of visualization, in order to examine the formal properties of modern and post-modern design. Chapter 1, 'Issues and Background of the Study', investigates Gestalt theory and Design Pragnanz, premised on the formal distinction between modern and post-modern design; these theories are founded on the discussion of visual perception of Gestalt that arose in Germany in the 1910s in pursuit of principles of perception centered on human vision. Chapter 2 treats the functionalism of modern design as preparation for the study of late-20th-century product design. Chapter 2-1 examines the functionalist tendency of modern design exemplified by the famous statement 'Form follows function': excluding all inessential elements such as decoration, this tendency attained the position of the International Style, grounded in the Bauhaus spirit of universality and regularity, in its pursuit of geometric order, standardization, and rationalization. Chapter 2-2 investigates the anthropological view that modern design came to represent culture symbolically, encompassing politics, economics, and ethics, together with the criticism of functionalist design that aesthetic value was sacrificed for excessive simplicity of style; it also examines pluralist phenomena in post-modern design such as kitsch, eclecticism, reactionism, high-tech, and digital design, which break away from the functionalist purism of modern design. Chapter 3 analyzes Gestalt Pragnanz in design practically, against the background of these design trends. Mass-produced 20th-century product designs were selected as targets of analysis, highlighting representative styles in each product category. The analysis adopts the theory of J. M. Lehnhardt, who graded the aesthetic and semantic levels of Pragnanz in design expression as percentages, and that of J. K. Grutter, who expressed it in the formula M = O : C; eight dichotomous units, following G. D. Birkhoff's aesthetic criteria, were employed for a systematic classification of the degrees of order and complexity in design, and the phenomenal aspects of design form represented in each unit were analyzed. Chapter 4 reports a questionnaire on the semiological phenomena of Design Pragnanz using 28 pairs of antonymous adjectives, based on the preceding research, followed by an analysis of the process of signification of Design Pragnanz; the interpretation of this analysis explains preference through a systematic analysis of Gestalt and Design Pragnanz in late-20th-century product design. Chapter 5 determines the position of Design Pragnanz by integrating the analyses of Gestalt and Pragnanz in modern and post-modern design. In this process I revealed the formal differences of each Design Pragnanz, in order to suggest a vision of the future that can provide systematic and structural stimulation to current design.
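The formula M = O : C cited in the abstract is Birkhoff's aesthetic measure, the ratio of order to complexity. A minimal sketch of the computation, with purely illustrative inputs:

```python
def birkhoff_measure(order, complexity):
    """Birkhoff's aesthetic measure M = O / C: the degree of order in a form
    divided by its complexity. Higher M suggests a more 'economical' form."""
    if complexity <= 0:
        raise ValueError("complexity must be positive")
    return order / complexity
```

In Birkhoff's original scheme O and C are counts of formal elements (symmetries, vertices, and so on) scored per object; the scoring rules, not this division, carry the analytical weight.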


Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Such research divides into work using structured data and work using unstructured data. With structured data such as historical prices and financial statements, past studies typically applied technical and fundamental analysis. In the big-data era, the amount of information has increased rapidly, and artificial-intelligence methods capable of extracting meaning from text, the unstructured data that accounts for much of this information, have developed quickly. With these developments, many attempts have been made to predict stock prices from online news by applying text mining. The methodology adopted in many papers forecasts a stock's price using news about the target company itself. According to previous research, however, not only a target company's own news but also news about related companies can affect its stock price. Finding highly relevant companies is not easy because of market-wide effects and random signals, so existing studies have mostly identified related companies using predetermined international industry classification standards. Yet recent research shows that the Global Industry Classification Standard has uneven homogeneity within sectors, so forecasting with all sector members rather than only genuinely relevant companies can harm predictive performance. To overcome this limitation, we are the first to combine random matrix theory with text mining for stock prediction. When the dimension of the data is large, classical limit theorems are no longer suitable because statistical efficiency is reduced, so a simple correlation analysis of the financial market does not capture the true correlation. We therefore adopt random matrix theory, used mainly in econophysics, to remove market-wide effects and random signals and recover the true correlation between companies. With the true correlation we perform cluster analysis to find relevant companies, and based on the clustering we use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously; each kernel predicts stock prices from features of the financial news of the target firm or one of its relevant firms. The results of this study are as follows: (1) consistent with the existing research stream, forecasting stock prices using news from relevant companies is effective; (2) choosing relevant companies in the wrong way can lower prediction performance; (3) the proposed approach outperforms previous studies when cluster analysis is performed on the true correlation after removing market-wide effects and random signals. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce a sound methodology, suggesting that adopting physics theory matters as much as developing AI algorithms; this extends prior research that integrated artificial intelligence with complex-systems theory through transfer entropy. Second, it stresses that finding the right companies in the stock market is an important issue: it is important not only to study AI algorithms but also to adjust the input values theoretically. Third, we confirmed that firms classified together under the Global Industry Classification Standard (GICS) may have low relevance, and suggest that relevance should be defined theoretically rather than taken directly from the GICS.
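The filtering step described above, removing market-wide and random components from the correlation matrix before clustering, is commonly implemented in econophysics with the Marchenko-Pastur bound on the eigenvalue spectrum. The sketch below follows that standard recipe; it is not the authors' code, and the treatment of the largest (market) eigenmode is one conventional choice among several:

```python
import numpy as np

def rmt_filter_correlation(returns):
    """Filter a correlation matrix with random matrix theory.

    `returns` is a (T, N) array: T observations of N assets. Eigenvalues
    below the Marchenko-Pastur upper edge are treated as noise; the single
    largest eigenmode is dropped as the market-wide effect.
    """
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    lam_max = (1 + np.sqrt(N / T)) ** 2          # upper edge of the random band
    vals, vecs = np.linalg.eigh(corr)            # ascending eigenvalues
    keep = vals > lam_max                        # informative modes only
    keep[np.argmax(vals)] = False                # drop the market mode
    filtered = (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T
    np.fill_diagonal(filtered, 1.0)              # restore unit self-correlation
    return filtered
```

Cluster analysis (and, in the paper, the multiple-kernel SVM) would then run on this filtered matrix instead of the raw correlations.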

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for housing computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. With the growth of cloud computing in particular, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure: if a failure occurs in one element of the facility, it may affect not only that equipment but also other connected equipment and cause enormous damage. IT facilities are especially irregular because of their interdependence, and the cause of a failure is often hard to identify. Previous studies on predicting failures in data centers treated each server as a single, isolated state, without assuming interaction between devices. In this study, therefore, data center failures are classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), with the analysis focused on complex failures occurring within servers. Failures external to servers include power, cooling, and user errors; since these can be prevented in the early stages of facility construction, various solutions are already being developed. The cause of failures occurring within a server, by contrast, is difficult to determine, and adequate prevention has not yet been achieved, in particular because server failures do not occur singly: one server's failure can cause failures in other servers, or be triggered by them. In other words, while existing studies assumed that a single server does not affect other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure-history data for each piece of equipment in the data center was used. Four major failure types are considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. Failures for each device are sorted in chronological order, and when a failure in one piece of equipment is followed by a failure in another within 5 minutes, the two are defined as occurring simultaneously. After constructing sequences for the devices that failed simultaneously, five devices that frequently co-occurred within the sequences were selected, and cases in which the selected devices failed at the same time were confirmed through visualization. Because the server-resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep-learning algorithm that predicts the next state from previous states. In addition, unlike the single-server setting, we used the Hierarchical Attention Network architecture to account for the fact that the severity of multiple failures differs across servers; this method improves prediction accuracy by weighting each server according to its impact on the failure. The study began by defining failure types and selecting analysis targets. The first experiment treated the same collected data as, alternately, a single-server state and a multi-server state and compared the results; the second improved prediction accuracy in the complex-server case by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to have failed, supporting the hypothesis that servers affect one another. The study thus confirmed that prediction performance is superior under the multi-server assumption. In particular, applying the Hierarchical Attention Network, on the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose cause is hard to determine can be predicted from historical data, and presents a model that predicts failures of servers in data centers. The results are expected to help prevent failures in advance.
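The 5-minute rule used above to define simultaneous failures amounts to grouping a chronologically sorted event stream whenever the gap to the previous event exceeds the window. A minimal sketch; the function name and the `(timestamp, device_id)` layout are assumptions, not the paper's data format:

```python
from datetime import datetime, timedelta

def group_cofailures(events, window_minutes=5):
    """Group failure events into 'simultaneous' clusters.

    `events` is a list of (timestamp, device_id) tuples. A new group starts
    whenever an event is more than `window_minutes` after the previous one.
    """
    events = sorted(events)          # chronological order
    groups, current = [], []
    for ts, dev in events:
        if current and ts - current[-1][0] > timedelta(minutes=window_minutes):
            groups.append(current)   # gap too large: close the group
            current = []
        current.append((ts, dev))
    if current:
        groups.append(current)
    return groups
```

Counting which device IDs co-occur most often across the resulting groups would yield the frequently co-failing devices the study selects.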

The Relationship Analysis between the Epicenter and Lineaments in the Odaesan Area using Satellite Images and Shaded Relief Maps (위성영상과 음영기복도를 이용한 오대산 지역 진앙의 위치와 선구조선의 관계 분석)

  • CHA, Sung-Eun;CHI, Kwang-Hoon;JO, Hyun-Woo;KIM, Eun-Ji;LEE, Woo-Kyun
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.3 / pp.61-74 / 2016
  • The purpose of this paper is to analyze the relationship between the epicenter of a medium-sized earthquake (magnitude 4.8) that occurred on January 20, 2007 in the Odaesan area and lineament features, using a shaded relief map (1/25,000 scale) and satellite images from LANDSAT-8 and KOMPSAT-2. Previous studies analyzed lineaments in tectonic settings primarily by examining two-dimensional satellite images and shaded relief maps. These methods, however, limit the visual interpretation of relief features, long considered the major component of lineament extraction. To overcome the limitations of two-dimensional images, this study extracted lineaments from three-dimensional images produced from a digital elevation model and a drainage network map, reducing the mapping errors introduced by visual interpretation. In addition, spline interpolation was used to produce density maps of lineament frequency, intersection, and length, from which the lineament density at the earthquake epicenter was estimated. An algorithm was developed to compute the Value of the Relative Density (VRD), the lineament density of each map grid divided by the maximum density value of the map: a quantified measure of how concentrated the lineament density is across the area affected by the earthquake. Using this algorithm, the VRD at the epicenter computed from the frequency, intersection, and length density maps ranged from approximately 0.60 to 0.90. Because the mapped images differed in factors such as solar altitude and azimuth, however, the mean VRD was used rather than values categorized by image. The results show that the mean frequency-based VRD was approximately 0.85, about 21% higher than the intersection- and length-based VRDs, demonstrating the close relationship between lineaments and the epicenter. It is therefore concluded that the density-map analysis described in this study, based on lineament extraction, is valid and can serve as a primary analysis tool for future earthquake research.
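The VRD defined above is a direct normalization: each grid cell's lineament density divided by the map maximum, giving values in [0, 1]. A minimal sketch with an illustrative density map:

```python
import numpy as np

def value_of_relative_density(density_map):
    """VRD: each grid cell's lineament density divided by the map maximum.

    Returns an array of the same shape with values in [0, 1]; the cell(s)
    holding the map's peak density map to exactly 1.0.
    """
    density_map = np.asarray(density_map, dtype=float)
    return density_map / density_map.max()
```

Reading the VRD at the epicenter is then just an index into the returned grid at the epicenter's cell.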

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.95-108 / 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to a customer's purchasing index. In the e-business era, many companies gather customers' demographic and transactional information such as age, gender, purchasing date, and product category, and use it to predict customer preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services per customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. This controls the number of predictive models and supplies additional data, borrowed from similar customers, for customers who do not have enough data of their own to build a good model. However, it often fails to provide highly personalized services to individual customers, which matters especially for VIP customers, and it clusters customers who already have considerable data together with those who have little, increasing computational cost without significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized service than Customer-Segmentation because each predictive model is built from the data of a single customer. It delivers highly personalized service with a relatively simple and inexpensive model, but it fails to produce a good predictive model when a customer has only a small amount of data: with insufficient transactional data, its performance deteriorates. To overcome the limitations of these two methods, we propose the Intelligent Customer Segmentation method, which provides adaptively personalized services according to the customer's purchasing index. The method clusters customers according to their purchasing index, so that predictions for customers who purchase less are based on data from more intensively clustered groups, while VIP customers, who already have considerable data, are clustered to a much lesser extent or not at all. The main idea is to apply clustering only when the number of transactions for the target customer is below a predefined criterion data size. To find this criterion, we propose a sliding-window correlation analysis, which identifies the transactional data size below which the performance of the 1-to-1 method drops sharply due to data sparsity. After finding this criterion, we apply the conventional 1-to-1 method to customers with more data than the criterion, and apply clustering to those with less, until each can use at least the criterion amount of data for model building. We applied the two conventional methods and the proposed method to Nielsen's beverage purchasing data to predict customers' purchase amounts and purchase categories, using two data-mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE). The results show that the proposed Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and matches the performance of the Customer-Segmentation method at much lower computational cost.
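The adaptive routing described above, 1-to-1 modeling above the criterion data size and clustering below it, can be sketched as follows. The function names, the greedy pooling rule, and the string labels are all illustrative; the paper's actual clustering is similarity-based, not a simple smallest-first pool:

```python
def personalization_strategy(n_records, criterion):
    """Route a customer: individual model if they meet the criterion data
    size, otherwise cluster them with similar customers."""
    return "1-to-1" if n_records >= criterion else "cluster"

def pool_until_criterion(record_counts, criterion):
    """Greedy sketch: pool the smallest customers together until the pooled
    group holds at least `criterion` records for model building.

    `record_counts` maps customer id -> number of transactions.
    """
    total, group = 0, []
    for cid, n in sorted(record_counts.items(), key=lambda kv: kv[1]):
        group.append(cid)
        total += n
        if total >= criterion:
            break
    return group, total
```

The criterion itself would come from the sliding-window correlation analysis, i.e. the data size at which 1-to-1 performance degrades sharply.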

Cross-Calibration of GOCI-II in Near-Infrared Band with GOCI (GOCI를 이용한 GOCI-II 근적외 밴드 교차보정)

  • Eunkyung Lee;Sujung Bae;Jae-Hyun Ahn;Kyeong-Sang Lee
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1553-1563 / 2023
  • The Geostationary Ocean Color Imager-II (GOCI-II) is a satellite sensor designed for ocean color observation, covering the Northeast Asian region and the full disk of the Earth. It commenced operations in 2020, succeeding its predecessor GOCI, which had been active for the previous decade. In this study, we aimed to enhance the atmospheric correction algorithm, a critical step in producing satellite-based ocean color data, by cross-calibrating the GOCI-II near-infrared (NIR) bands against the GOCI NIR bands. To this end, we conducted a cross-calibration study on the top-of-atmosphere (TOA) radiance of the NIR bands and derived vicarious calibration gains for the two NIR bands (745 and 865 nm). Applying these gains decreased the offset between the two sensors and brought their ratio close to 1, showing that the consistency of the two sensors was improved. The Rayleigh-corrected reflectance at 745 nm and 865 nm increased by 5.62% and 9.52%, respectively. This changes the ratio of Rayleigh-corrected reflectance at these wavelengths, potentially affecting the atmospheric correction results across all spectral bands, particularly during the aerosol reflectance correction step of the atmospheric correction algorithm. Because the overlapping operational period of the GOCI and GOCI-II satellites was limited, only data from March 2021 were used; nevertheless, we anticipate further improvements through ongoing cross-calibration research with other satellites. Additionally, the vicarious calibration gains derived for the NIR bands in this study should be applied to vicarious calibration of the visible channels, and their impact on the accuracy of ocean color products assessed.
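Applying a vicarious calibration gain is, at its core, a per-band scaling of the measured TOA radiance, and the 745/865 nm band ratio feeds the aerosol reflectance correction. A minimal sketch; the gain values in the test are placeholders, not the gains derived in the study:

```python
import numpy as np

def calibrate_nir(radiance_745, radiance_865, gain_745, gain_865):
    """Apply band-specific vicarious gains to TOA NIR radiance.

    Returns the two calibrated bands and their 745/865 ratio, the quantity
    the aerosol reflectance correction is sensitive to.
    """
    cal_745 = np.asarray(radiance_745, dtype=float) * gain_745
    cal_865 = np.asarray(radiance_865, dtype=float) * gain_865
    return cal_745, cal_865, cal_745 / cal_865
```

A change in either gain shifts this band ratio, which is how a NIR recalibration propagates into the aerosol model selection and hence into every visible band.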

System Development for Measuring Group Engagement in the Art Center (공연장에서 다중 몰입도 측정을 위한 시스템 개발)

  • Ryu, Joon Mo;Choi, Il Young;Choi, Lee Kwon;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.45-58 / 2014
  • The Korean Culture Contents spread out to Worldwide, because the Korean wave is sweeping in the world. The contents stand in the middle of the Korean wave that we are used it. Each country is ongoing to keep their Culture industry improve the national brand and High added value. Performing contents is important factor of arousal in the enterprise industry. To improve high arousal confidence of product and positive attitude by populace is one of important factor by advertiser. Culture contents is the same situation. If culture contents have trusted by everyone, they will give information their around to spread word-of-mouth. So, many researcher study to measure for person's arousal analysis by statistical survey, physiological response, body movement and facial expression. First, Statistical survey has a problem that it is not possible to measure each person's arousal real time and we cannot get good survey result after they watched contents. Second, physiological response should be checked with surround because experimenter sets sensors up their chair or space by each of them. Additionally it is difficult to handle provided amount of information with real time from their sensor. Third, body movement is easy to get their movement from camera but it difficult to set up experimental condition, to measure their body language and to get the meaning. Lastly, many researcher study facial expression. They measures facial expression, eye tracking and face posed. Most of previous studies about arousal and interest are mostly limited to reaction of just one person and they have problems with application multi audiences. They have a particular method, for example they need room light surround, but set limits only one person and special environment condition in the laboratory. Also, we need to measure arousal in the contents, but is difficult to define also it is not easy to collect reaction by audiences immediately. Many audience in the theater watch performance. 
We suggest the system to measure multi-audience's reaction with real-time during performance. We use difference image analysis method for multi-audience but it weaks a dark field. To overcome dark environment during recoding IR camera can get the photo from dark area. In addition we present Multi-Audience Engagement Index (MAEI) to calculate algorithm which sources from sound, audience' movement and eye tracking value. Algorithm calculates audience arousal from the mobile survey, sound value, audience' reaction and audience eye's tracking. It improves accuracy of Multi-Audience Engagement Index, we compare Multi-Audience Engagement Index with mobile survey. And then it send the result to reporting system and proposal an interested persons. Mobile surveys are easy, fast, and visitors' discomfort can be minimized. Also additional information can be provided mobile advantage. Mobile application to communicate with the database, real-time information on visitors' attitudes focused on the content stored. Database can provide different survey every time based on provided information. The example shown in the survey are as follows: Impressive scene, Satisfied, Touched, Interested, Didn't pay attention and so on. The suggested system is combine as 3 parts. The system consist of three parts, External Device, Server and Internal Device. External Device can record multi-Audience in the dark field with IR camera and sound signal. Also we use survey with mobile application and send the data to ERD Server DB. The Server part's contain contents' data, such as each scene's weights value, group audience weights index, camera control program, algorithm and calculate Multi-Audience Engagement Index. Internal Device presents Multi-Audience Engagement Index with Web UI, print and display field monitor. Our system is test-operated by the Mogencelab in the DMC display exhibition hall which is located in the Sangam Dong, Mapo Gu, Seoul. We have still gotten from visitor daily. 
The audience arousal factors identified with this system will be very useful for creating contents.

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology, utilized in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services through its combination with big data. Data accumulated from Internet-of-Things (IoT) devices is being used in many ways to build convenience-based smart systems, as it enables customized intelligent services through analysis of user environments and patterns. Recently, it has been applied to innovation in the public domain, including smart cities and smart transportation, for example solving traffic and crime problems using CCTV. In particular, when planning underground services or building a movement-amount control information system to enhance the convenience of citizens and commuters on congested public transportation such as subways and urban railways, the ease of securing real-time service data and the stability of security must be considered together. However, previous studies that utilize image data suffer degraded object-detection performance under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study is free from privacy issues because it does not identify individuals, and can therefore be effectively utilized to build intelligent public services for many and unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian-tracking system in a metro service that many people use daily, with the temperature data measured by the sensors transmitted in real time.
The experimental environment for collecting data detected in real time from the sensors was established at the equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances where actual passenger movement is high, and the temperature change was measured for objects entering and leaving the detection spots. The measured data went through preprocessing in which a reference value is set for each of the 16 areas and the difference between each area's temperature and its reference value is calculated per unit of time; this is the methodology that maximizes the visibility of movement within the detection area. In addition, the data were scaled up by a factor of 10 to reflect per-area temperature differences more sensitively: for example, if the temperature collected from a sensor at a given time was 28.5℃, the analysis used the value 285. The data collected from the sensors thus have the characteristics of both time-series data and image data with 4×4 resolution. Reflecting the characteristics of the measured, preprocessed data, we propose a hybrid algorithm, referred to as CNN-LSTM (Convolutional Neural Network-Long Short-Term Memory), that combines a CNN, which excels at image classification, with an LSTM, which is especially suitable for analyzing time-series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas. We verified the validity of the proposed model through performance comparisons with other artificial-intelligence algorithms, such as the Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short-Term Memory). As a result of the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance compared to MLP, LSTM, and RNN-LSTM.
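The preprocessing described above (per-cell reference values, per-unit-time differences, ×10 scaling, 4×4 frames) can be sketched as follows. The abstract leaves the exact order of differencing and scaling ambiguous; this sketch scales the differences, which reproduces the stated 28.5℃ → 285 magnitude convention, and all names are illustrative:

```python
import numpy as np

def preprocess(readings, reference, scale=10):
    """Turn raw sensor temperatures into scaled difference 'frames'.

    readings:  (T, 16) raw temperatures for the 16 detection cells
    reference: (16,)   per-cell baseline temperature
    Returns:   (T, 4, 4) frames of scaled differences, so that a
               0.1 deg C change maps to a difference of 1 unit.
    """
    diff = (readings - reference) * scale
    return diff.reshape(-1, 4, 4)

# Toy data: a passenger warms one cell by 1.5 deg C at time step 1.
ref = np.full(16, 27.0)
raw = np.full((3, 16), 27.0)
raw[1, 5] = 28.5
frames = preprocess(raw, ref)
```

Cell index 5 lands at row 1, column 1 of the 4×4 frame, with a scaled difference of 15.0; all other frames stay at zero.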
By utilizing the proposed devices and models, various metro services are expected to be provided without legal issues concerning personal information, such as real-time monitoring of public transport facilities and congestion-based emergency-response services. However, the data were collected from only one side of the entrances, and data collected over a short period were applied to the prediction, so verification of its applicability in other environments remains a limitation. In the future, the proposed model is expected to become more reliable if experimental data are collected in a sufficient variety of environments or if the training data are augmented with measurements from other sensors.
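The hybrid CNN-LSTM pipeline described in this abstract (a convolution over each 4×4 frame, then an LSTM over the frame sequence, then a count prediction) can be sketched as a single forward pass. This is a minimal pure-NumPy illustration with a toy single-filter convolution and one LSTM cell; the dimensions, random weights, and function names are assumptions, not the paper's trained architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(frame, kernel):
    """2-D 'valid' convolution of one single-channel frame."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def lstm_step(x, h, c, params):
    """One LSTM cell step; params stacks the 4 gates' weights."""
    W, U, b = params
    z = W @ x + U @ h + b
    H = h.size
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def cnn_lstm_forward(frames, kernel, lstm_params, w_out):
    """frames: (T, 4, 4) preprocessed sensor 'images' -> scalar count."""
    H = lstm_params[1].shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for frame in frames:
        # CNN stage: convolve, ReLU, flatten into a feature vector.
        feat = np.maximum(conv2d_valid(frame, kernel), 0.0).ravel()
        # LSTM stage: accumulate temporal context over the sequence.
        h, c = lstm_step(feat, h, c, lstm_params)
    return float(w_out @ h)    # linear readout: predicted person count

rng = np.random.default_rng(0)
T, H = 8, 16
kernel = rng.normal(size=(3, 3))        # 4x4 frame -> 2x2 -> 4 features
lstm_params = (rng.normal(size=(4 * H, 4)),
               rng.normal(size=(4 * H, H)),
               np.zeros(4 * H))
w_out = rng.normal(size=H)
frames = rng.normal(size=(T, 4, 4))
count = cnn_lstm_forward(frames, kernel, lstm_params, w_out)
```

In practice the paper's model would be trained end to end with many filters and hidden units; the point of the sketch is only the data flow that motivates the hybrid: spatial features per frame from the CNN, temporal aggregation across frames by the LSTM.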