• Title/Summary/Keyword: feature models

Search results: 1,084 (processing time: 0.035 seconds)

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing a single user's simple body movements to recognizing low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetometer, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) was proposed. Accompanying status was defined as a subset of user interaction behavior: whether the user is accompanying an acquaintance at close distance, and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method was introduced, consisting of time synchronization of multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors.
Normalization was performed for each x, y, z axis of the sensor data, and sequence data were generated with a sliding window. The sequence data then became the input to the CNN, which extracted feature maps representing local dependencies of the original sequence. The CNN consisted of three convolutional layers and had no pooling layer, in order to preserve the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM stack consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function was cross entropy, and the weights were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (Adam) optimizer with a mini-batch size of 128. Dropout was applied to the inputs of the LSTM layers to prevent overfitting. The initial learning rate was 0.001 and decayed exponentially by a factor of 0.99 at the end of each epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. On these data, the model classified accompanying and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, and on transfer learning methods that allow models fitted to the training data to transfer to evaluation data following a different distribution.
A model exhibiting robust recognition performance against changes in the data that were not considered at training time is expected as a result.
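The per-axis normalization and sliding-window sequence generation described above can be sketched in a few lines of numpy; the window length of 128 samples and stride of 64 are illustrative assumptions, since the abstract does not state the values used.

```python
import numpy as np

def normalize_per_axis(data):
    """Z-score normalize each axis (column) of sensor data independently."""
    mean = data.mean(axis=0)
    std = data.std(axis=0)
    return (data - mean) / (std + 1e-8)

def sliding_windows(data, window, stride):
    """Cut a (T, channels) array into overlapping (window, channels) sequences."""
    n = (len(data) - window) // stride + 1
    return np.stack([data[i * stride : i * stride + window] for i in range(n)])

# 9 channels: x/y/z for accelerometer, magnetometer, and gyroscope
raw = np.random.randn(1000, 9) * 5 + 2
norm = normalize_per_axis(raw)
seqs = sliding_windows(norm, window=128, stride=64)  # shape: (n_windows, 128, 9)
```

Each window in `seqs` would then be fed to the CNN front end as one training sequence.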

A Case Study on Venture and Small-Business Executives' Use of Strategic Intuition in the Decision Making Process (벤처.중소기업가의 전략적 직관에 의한 의사결정 모형에 대한 사례연구)

  • Park, Jong An;Kim, Young Su;Do, Man Seung
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.9 no.1
    • /
    • pp.15-23
    • /
    • 2014
  • This paper is a case study of how venture and small-business executives can take advantage of their intuition in situations where the business environment is increasingly uncertain, where a novel situation arises without any data to draw on, and where rational decision-making is therefore not possible. The case study is based on a literature review, in-depth interviews with 16 business managers, and an analysis of Klein, G.'s (1998) generic mental simulation model. The "intuition" discussed in this analysis is classified into two types: expert intuition, which is based on one's own experience, and strategic intuition, which is based on the experience of others. The case study results revealed that managers used expert intuition and strategic intuition differently. More specifically, expert intuition was activated quickly and effortlessly, while strategic intuition required more time. Also, expert intuition was used mainly for judgments about events that had already happened, while strategic intuition was used more often for judgments about future events. The process of strategic intuition involved (1) strategic concerns, (2) the discovery of a medium, (3) primary mental simulation, (4) the offsetting of key parameters, (5) secondary mental simulation, and (6) the decision-making process.
These steps were used to develop the "Strategic Intuition Decision-making Model" for venture and small-business executives. The case study results further showed, first, that the success of decision-making was determined in the secondary mental simulation stage; second, that more management difficulty was encountered when expert intuition was used more than strategic intuition; and last, that strategic intuition can be taught.


A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm (기계학습(machine learning) 기반 터널 영상유고 자동 감지 시스템 개발을 위한 사전검토 연구)

  • Shin, Hyu-Soung;Kim, Dong-Gyou;Yim, Min-Jin;Lee, Kyu-Beom;Oh, Young-Sup
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.1
    • /
    • pp.95-107
    • /
    • 2017
  • In this study, a preliminary investigation was undertaken for the development of a tunnel incident automatic detection system based on a machine learning algorithm, intended to detect in real time a number of incidents taking place in a tunnel and to identify the type of each incident. Two road sites with operating CCTVs were selected, and part of the CCTV footage was processed to produce training data sets. The data sets are composed of the position and time information of moving objects on the CCTV screen, extracted by detecting and tracking objects entering the screen with a conventional image processing technique available in this study. The data sets were labeled with six categories of events, such as lane change and stopping. The training data were learned by a neural network with two hidden layers trained by resilient backpropagation; nine architectural models were set up for parametric studies, from which the model with 300 units in the first hidden layer and 150 in the second was found to be optimal, giving the highest accuracy on both the training data and test data not used for training. This study showed that highly variable and complex traffic and incident features can be identified without any manual definition of feature rules by using machine learning. In addition, the detection capability and accuracy of the machine-learning-based system will be enhanced automatically as the body of tunnel CCTV footage grows.
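The 300-150 architecture found optimal above can be sketched as a forward pass over six event classes; the input feature dimension, activations, and initialization below are illustrative assumptions, and the resilient-backpropagation training loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Assumed input: a flattened position/time trajectory feature vector per tracked object
n_features, n_classes = 40, 6            # 6 event categories (lane change, stopping, ...)
W1 = rng.normal(0, 0.1, (n_features, 300))  # first hidden layer: 300 units
W2 = rng.normal(0, 0.1, (300, 150))         # second hidden layer: 150 units
W3 = rng.normal(0, 0.1, (150, n_classes))   # output layer: 6 event classes

def predict(X):
    """Forward pass: two hidden layers, softmax over the six event categories."""
    return softmax(relu(relu(X @ W1) @ W2) @ W3)

probs = predict(rng.normal(size=(5, n_features)))  # (5, 6) class probabilities
```

In practice the weights would be fitted by resilient backpropagation on the labeled trajectory data rather than left at their random initial values.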

Estimation of the Superelevation Safety Factor Considering Operating Speed at 3-Dimensional Alignment (입체선형의 주행속도를 고려한 편경사 안전율 산정에 관한 연구)

  • Park, Tae-Hoon;Kim, Joong-Hyo;Park, Je-Jin;Park, Ju-Won;Ha, Tae-Jun
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.7 s.85
    • /
    • pp.159-163
    • /
    • 2005
  • Balancing the interests of suppliers and users in geometric design is very important. Although the ultimate purpose of constructing roads should include the driver's comfort, this has unfortunately not been well considered so far. Regularity and quickness have been considered in assessing driver comfort, but safety against accidents should be considered as well. If drivers travel faster than the designer intended, supplements are needed to increase the safety margin of the road. Even when an upward and a downward section coexist at the same point in the three-dimensional (3-D) alignment of a road, recent design practice treats the 3-D alignment as flat 2-D: the minimum plane curve radius and the maximum superelevation (cant) are decided by calculation alone, without considering the operating speeds of the upward and downward sections. In this investigation, we therefore calculate the safety of the superelevation by considering the speed characteristics of the 3-D alignment for the first lane of four-lane rural roads. First, we investigated the speeds of cars not influenced by a preceding vehicle, using the Nc-97, to analyze the influence of the geometric structure. Second, we statistically analyzed the driving features of the 3-D alignment by comparing six sections, measuring speed characteristics at 12 points and combining the influence of the vertical and horizontal alignment on the previously studied operating speed of the plane curve. Finally, since the basic statistical analysis showed differences between the design speed and the operating speed, we estimated a superelevation value that reflects the operating speed, introducing a new safety-factor concept, α, rather than applying a uniform superelevation that ignores it.
As a result of the research, we could characterize the driving behavior of cars and suggest a safety factor that reflects it. If the safety factor obtained in this experiment, which considers the 3-D alignment, is applied when determining the maximum superelevation, driver safety in road design will improve. In addition, by expanding this experiment to collect and analyze data for road sections with various geometric structures, models that consider the 3-D alignment should be developed.
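The trade-off described above, between a uniform superelevation that ignores operating speed and one that reflects it, rests on the standard point-mass curve relation e + f = V²/(127R). A minimal sketch with illustrative values, not the paper's measured speeds or its safety factor α:

```python
def required_superelevation(speed_kmh, radius_m, side_friction):
    """Point-mass curve formula: e + f = V^2 / (127 R).

    Returns the superelevation rate e (as a decimal) needed so that the
    centripetal demand at the given speed is met by e plus side friction f.
    V is in km/h, R in metres.
    """
    return speed_kmh**2 / (127.0 * radius_m) - side_friction

# Design speed vs. a faster measured operating speed on the same curve
e_design = required_superelevation(80.0, 400.0, 0.10)      # at the design speed
e_operating = required_superelevation(100.0, 400.0, 0.10)  # at the operating speed
# Drivers exceeding the design speed need more superelevation on the same curve.
```

A safety factor along the lines of the paper's α would then widen this margin further when setting the maximum superelevation.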

Roles of Perceived Use Control consisting of Perceived Ease of Use and Perceived Controllability in IT acceptance (정보기술 수용에서 사용용이성과 통제가능성을 하위 차원으로 하는 지각된 사용통제의 역할)

  • Lee, Woong-Kyu
    • Asia pacific journal of information systems
    • /
    • v.18 no.2
    • /
    • pp.1-14
    • /
    • 2008
  • According to the technology acceptance model (TAM), one of the most important research models for explaining IT users' behavior, the intention to use IT is determined by its usefulness and its ease of use. However, while TAM has been considered a very good model for predicting intention, it does not explain the performance of using IT. Many people are not confident in their performance in using IT until they can control it at will, even if they think it useful and easy to use. In other words, in addition to usefulness and ease of use as in TAM, controllability should also be a factor determining acceptance of IT. There is an especially close relationship between controllability and ease of use: both describe sides of control over the performance of using IT, the so-called perceived behavioral control (PBC) of social psychology. The objective of this study is to identify the relationship between ease of use and controllability, and to analyze the effects of both beliefs on performance and intention in using IT. For this purpose, we review the issues related to PBC in information systems studies as well as in social psychology. Based on this review, we suggest a research model that includes the relationship between control and performance in using IT, and validate it empirically. Since PBC was introduced as a variable explaining volitional control over actions in the theory of planned behavior (TPB), there has been confusion about its concept despite its important role in predicting many kinds of actions. Some studies define PBC as self-efficacy, the actor's perception of the difficulty or ease of an action, while others define it as controllability. However, this confusion does not imply a conceptual contradiction but rather reflects the double-faced nature of PBC, since the performance of actions is related to both self-efficacy and controllability.
In other words, these two concepts are distinct yet correlated with each other; therefore, PBC should be considered a composite concept consisting of self-efficacy and controllability. Use of IT has also been an important prediction target for PBC. Most such studies either compare the predictive power of TAM and TPB, or modify TAM by including PBC as another belief alongside usefulness and ease of use. Interestingly, unlike other applications in social psychology, such confusion about the concept of PBC is hard to find in studies of IT use: in most studies, controllability is adopted as PBC, since the concept of self-efficacy is explicitly included in ease of use. Based on these discussions, we suggest perceived use control (PUC), defined as the perception of control over the performance of using IT and composed of controllability and ease of use as sub-concepts. We propose a research model explaining acceptance of IT that includes the relationships of PUC with attitude toward and performance of using IT. For the empirical test of the research model, two user groups were surveyed with questionnaires: the first group consisted of freshmen taking a basic course on Microsoft Excel, and the second of senior students taking a course on the analysis of management information with Excel. Most measurements were adopted from instruments validated in other studies, while performance was the real mid-term score in each class. As a result, four hypotheses related to PUC were statistically supported at a very low significance level. The main contribution of this study is the suggestion of PUC through a theoretical review of PBC. Specifically, a hierarchical model of PUC is derived from rigorous studies of the relationship between self-efficacy and controllability from the PBC perspective in social psychology. The relationship between PUC and performance is another main contribution.

Study on the Geoelectrical Structure of the Upper Crust Using the Magnetotelluric Data Along a Transect Across the Korean Peninsula (한반도 횡단 자기지전류 탐사에 의한 상부 지각의 지전기적 구조 연구)

  • Lee, Choon-Ki;Kwon, Byung-Doo;Lee, Heui-Soon;Cho, In-Ky;Oh, Seok-Hoon;Song, Yoon-Ho;Lee, Tae-Jong
    • Journal of the Korean earth science society
    • /
    • v.28 no.2
    • /
    • pp.187-201
    • /
    • 2007
  • The first magnetotelluric (MT) transect across the Korean Peninsula was obtained, traversing from the East Sea shoreline to the Yellow Sea shoreline. The MT survey profile was designed perpendicular to the strike of the principal geologic structure of the Korean Peninsula (N30°E), the so-called 'China direction'. MT data were acquired at 50 sites with spacings of 3-8 km along the 240 km survey line. The impedance responses are divided into four subsets reflecting typical geological units: the Kyonggi Massif, the Okchon Belt, the western part of the Kyongsang Basin, and the eastern part of the Kyongsang Basin. In the western part of the Kyongsang Basin, the thickness of the sedimentary layer is estimated at about 3 km to 8 km and its resistivity at a few hundred ohm-m. A highly conductive layer with a resistivity of 1 to 30 ohm-m was detected beneath the sedimentary layer. The MT data at the Okchon Belt show peculiar responses with phases exceeding 90°. This feature may be explained by an electrically anisotropic structure composed of a narrow anisotropic block and an anisotropic layer. The Kyonggi Massif and the eastern part of the Kyongsang Basin act as windows to the deep geoelectrical structure because of the very high resistivity of the upper crust. The second layers, with the highest resistivities in the 1-D conductivity models, occupy the upper crust with thicknesses of 13 km in the Kyonggi Massif and 18 km in the eastern Kyongsang Basin, respectively.
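The 1-D conductivity models mentioned above come from interpreting the surface impedance of a layered earth. A minimal sketch of the standard 1-D MT impedance recursion and the resulting apparent resistivity, with illustrative layer values rather than the paper's inverted model:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def mt_impedance(freq_hz, resistivities, thicknesses):
    """Surface impedance of a 1-D layered earth (standard upward recursion).

    resistivities: ohm-m per layer; the last entry is the basement half-space.
    thicknesses: metres for each layer except the basement.
    """
    omega = 2.0 * np.pi * freq_hz
    # Intrinsic impedance of the basement half-space
    Z = np.sqrt(1j * omega * MU0 * resistivities[-1])
    # Recurse upward through the finite layers
    for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
        k = np.sqrt(1j * omega * MU0 / rho)    # propagation constant of the layer
        Z0 = np.sqrt(1j * omega * MU0 * rho)   # intrinsic impedance of the layer
        Z = Z0 * (Z + Z0 * np.tanh(k * h)) / (Z0 + Z * np.tanh(k * h))
    return Z

def apparent_resistivity(freq_hz, Z):
    return abs(Z) ** 2 / (2.0 * np.pi * freq_hz * MU0)

# Illustrative: conductive sedimentary cover over a conductor and resistive basement
Z = mt_impedance(1.0, [200.0, 10.0, 1000.0], [3000.0, 2000.0])
rho_a = apparent_resistivity(1.0, Z)
```

A sanity check on the recursion: for a uniform half-space the apparent resistivity must equal the true resistivity and the impedance phase must be 45°.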

A Study on Clinical Variables Contributing to Differentiation of Delirium and Non-Delirium Patients in the ICU (중환자실 섬망 환자와 비섬망 환자 구분에 기여하는 임상 지표에 관한 연구)

  • Ko, Chanyoung;Kim, Jae-Jin;Cho, Dongrae;Oh, Jooyoung;Park, Jin Young
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.27 no.2
    • /
    • pp.101-110
    • /
    • 2019
  • Objectives : It is not clear which clinical variables are most closely associated with delirium in the intensive care unit (ICU). By comparing the clinical data of ICU delirium and non-delirium patients, we sought to identify the variables that most effectively differentiate delirium from non-delirium. Methods : The medical records of 6,386 ICU patients were reviewed. Random subset feature selection and principal component analysis were utilized to select the set of clinical variables with the highest discriminatory capacity. Statistical analyses were employed to determine the separation capacity of two models: one using just the few selected clinical variables and the other using all clinical variables associated with delirium. Results : There was a significant difference between delirium and non-delirium individuals across 32 clinical variables. The Richmond Agitation-Sedation Scale (RASS), urinary catheterization, vascular catheterization, the Hamilton Anxiety Rating Scale (HAM-A), blood urea nitrogen, and the Acute Physiology and Chronic Health Evaluation (APACHE) II most effectively differentiated delirium from non-delirium. Multivariable logistic regression analysis showed that, with the exception of vascular catheterization, these clinical variables were independent risk factors associated with delirium. The separation capacity of the logistic regression model using just the 6 clinical variables was measured with the receiver operating characteristic curve, giving an area under the curve (AUC) of 0.818. The same analyses were performed using all 32 clinical variables; the AUC was 0.881, denoting a very high separation capacity. Conclusions : The six aforementioned variables most effectively separate delirium from non-delirium. This highlights the importance of close monitoring of patients who have received invasive medical procedures and were rated with very low RASS and HAM-A scores.
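The final modeling step above, a logistic regression over a handful of selected variables scored by ROC AUC, can be sketched on synthetic data; the feature distributions and coefficients below are illustrative stand-ins, not the clinical data.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (bias folded into X)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney rank statistic (no ties assumed)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic stand-ins for six selected predictors (e.g. RASS, HAM-A, BUN, ...)
X = rng.normal(size=(400, 6))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(float)
w = fit_logistic(X, y)
auc = roc_auc(y, predict_proba(X, w))
```

With real data one would of course evaluate the AUC on held-out patients rather than in-sample as here.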

A STUDY ON TEMPERATURE VARIATION OF THE UPPER THERMOSPHERE IN THE HIGH LATITUDE THROUGH THE ANALYSIS OF 6300 Å AIRGLOW DATA (6300 Å 대기광 자료 분석을 통한 고위도 열권 상부에서의 온도 변화)

  • 정종균;김용하;원영인;이방용
    • Journal of Astronomy and Space Sciences
    • /
    • v.14 no.1
    • /
    • pp.94-108
    • /
    • 1997
  • The temperature of the upper thermosphere generally varies with solar activity, and at high latitudes it varies largely with geomagnetic activity. The data analyzed in this study were acquired at two ground stations in Greenland: Thule Air Base (76.5°N, 68.4°W, Λ = 86°) and Søndre Strømfjord (67.0°N, 50.9°W, Λ = 74°). Both stations are located at high latitude not only geographically but also geomagnetically. The terrestrial nightglow at 6300 Å from atomic oxygen was observed with two ground-based Fabry-Perot interferometers during 1986-1991 at Thule Air Base and 1986-1994 at Søndre Strømfjord. Some features noted in this study are as follows: (1) The correlation between solar activity and the measured thermospheric temperature is highest for 3 ≤ Kp ≤ 4 at Thule, and increases with geomagnetic activity at Søndre Strømfjord. (2) The measured temperatures at Thule are generally higher than those at Søndre Strømfjord, but the latter shows a steeper slope with solar activity. (3) Harmonic analysis shows that the diurnal variation (24 hrs) is the main feature of the daily temperature variation, with a temperature peak at about 13-14 LT (LT = UT - 4); however, the semidiurnal variation is evident during periods of weak solar activity. (4) The temperatures predicted by both the MSIS86 and VSH models are generally lower than the measured temperatures, and this discrepancy grows as solar activity increases. We therefore urge modelers to develop a new thermospheric model utilizing broader sets of measurements, especially for high solar activity.
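The harmonic analysis in point (3) amounts to a least-squares fit of diurnal (24 h) and semidiurnal (12 h) terms to the temperature series. A sketch on synthetic temperatures; the amplitudes and peak time below are illustrative, not the measured values:

```python
import numpy as np

def fit_harmonics(t_hours, temp, periods=(24.0, 12.0)):
    """Least-squares fit of a mean plus cosine/sine terms for each period.

    Returns the fitted coefficients and the amplitude of each harmonic.
    """
    cols = [np.ones_like(t_hours)]
    for P in periods:
        w = 2.0 * np.pi / P
        cols.append(np.cos(w * t_hours))
        cols.append(np.sin(w * t_hours))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
    amps = [np.hypot(coef[1 + 2 * i], coef[2 + 2 * i]) for i in range(len(periods))]
    return coef, amps

# Synthetic temperatures: a diurnal wave peaking near 13.5 LT plus a weak semidiurnal term
t = np.arange(0.0, 72.0, 0.5)
T = 900.0 + 50.0 * np.cos(2 * np.pi / 24.0 * (t - 13.5)) + 10.0 * np.cos(2 * np.pi / 12.0 * t)
coef, amps = fit_harmonics(t, T)  # amps[0]: diurnal amplitude, amps[1]: semidiurnal
```

The relative size of `amps[1]` to `amps[0]` is the kind of diagnostic used to judge when the semidiurnal variation becomes evident.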


Simulation and Post-representation: a study of Algorithmic Art (시뮬라시옹과 포스트-재현 - 알고리즘 아트를 중심으로)

  • Lee, Soojin
    • 기호학연구
    • /
    • no.56
    • /
    • pp.45-70
    • /
    • 2018
  • The postmodern critique of the system of representation, which had persisted since the Renaissance, is grounded in a critique of the dichotomy that separates subject from object and the environment from the human being. Interactivity, highlighted in a series of works emerging from the postmodern trends of the 1960s, carried over into the interactive aspect of digital art in the late 1990s. The key feature of digital art is the possibility of infinite variation reflecting unpredictable changes based on on-site public participation. In this process, the importance of computer programs is highlighted. Instead of using existing programs as they are, more and more artists are writing their own algorithms or creating unique algorithms through collaboration with programmers. We live in an era of paradigm shift in which programming itself must be considered a creative act. Simulation and VR technologies draw attention as techniques for representing the meaning of reality. Simulation technology helps artists create experimental works. In fact, Baudrillard's concept of simulation designates an other reality that has nothing to do with our reality, rather than an extreme representation of it. His book Simulacra and Simulation refers to the existence of a reality entirely different from the traditional concept of reality. His argument does not concern questions of right and wrong; it carries no metaphysical meaning. Applying the concept of simulation to algorithmic art, the artist models the complex attributes of reality in a digital system and aims to build and integrate the internal laws that structure and activate a world (specific or individual), that is to say, to simulate that world. If the images of the traditional order correspond to the reproduction of the real world, the synthesized images and simulated space-time of algorithmic art are forms of art that facilitate experience.
The moment of seeing and listening to the work of Ian Cheng presented in this article is a moment of personal experience, and perception takes place in that moment: not a complete and closed process, but a continuous and changing one. It is this active and situational awareness that is required of the audience for the comprehension of post-representational forms.

Research on hybrid music recommendation system using metadata of music tracks and playlists (음악과 플레이리스트의 메타데이터를 활용한 하이브리드 음악 추천 시스템에 관한 연구)

  • Hyun Tae Lee;Gyoo Gun Lim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.145-165
    • /
    • 2023
  • Recommendation systems play a significant role in relieving the difficulty of selecting information from the rapidly increasing amount of information brought by the development of the Internet, and in efficiently displaying information that fits individual interests. In particular, without recommendation systems, e-commerce and OTT companies cannot overcome the long-tail phenomenon, in which only popular products are consumed, as the number of products and contents rapidly increases. Research on recommendation systems is therefore being actively conducted to overcome this phenomenon and to provide information or content aligned with users' individual interests, in order to induce customers to consume a wider variety of products or content. Usually, collaborative filtering, which utilizes users' historical behavioral data, shows better performance than content-based filtering, which utilizes users' preferred content. However, collaborative filtering suffers from the cold-start problem, which occurs when users' historical behavioral data are lacking. In this paper, a hybrid music recommendation system that can solve the cold-start problem is proposed, based on the playlist data of the Melon music streaming service released by Kakao Arena for its music playlist continuation competition. The goal of this research is to use the music tracks included in the playlists, together with the metadata of tracks and playlists, to predict the remaining tracks when half or all of the tracks are masked. Accordingly, two different recommendation procedures were used for the two situations. When music tracks are present in the playlist, LightFM is used to exploit the playlist's track list and the metadata of each track.
The result of an Item2Vec model, which uses vector embeddings of music tracks, tags, and titles, is then combined with the result of the LightFM model to create the final recommendation list. When no music tracks are available in a playlist and only its tags and title are available, recommendations are made by finding similar playlists using playlist vectors built by aggregating pre-trained FastText embedding vectors of the tags and titles of each playlist. As a result, not only is the cold-start problem resolved, but better performance than ALS, BPR, and Item2Vec is also achieved by using the metadata of both tracks and playlists. In addition, the LightFM model that uses only artist information as an item feature showed the best performance among the LightFM variants using other item features of the tracks.
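The cold-start branch above, averaging tag embeddings into a playlist vector and retrieving the most similar playlists, can be sketched as follows; the toy orthogonal "embeddings" stand in for pre-trained FastText vectors, cosine similarity is assumed as the metric, and all tag and playlist names are illustrative.

```python
import numpy as np

# Toy orthogonal stand-ins for pre-trained FastText tag vectors (one-hot here)
vocab = ["ballad", "rain", "workout", "hiphop", "chill", "drive"]
tag_vec = {tag: np.eye(len(vocab))[i] for i, tag in enumerate(vocab)}

def playlist_vector(tags):
    """Average the embedding vectors of a playlist's tags (cold-start fallback)."""
    vecs = [tag_vec[t] for t in tags if t in tag_vec]
    return np.mean(vecs, axis=0) if vecs else np.zeros(len(vocab))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Catalog of existing playlists, each described only by its tags
catalog = {
    "rainy_ballads": ["ballad", "rain", "chill"],
    "gym_mix": ["workout", "hiphop"],
}

# A new playlist with tags and title but no tracks yet
query = playlist_vector(["rain", "ballad"])
best = max(catalog, key=lambda p: cosine(query, playlist_vector(catalog[p])))
```

Tracks from the retrieved playlist(s) would then seed the recommendation list in place of the unavailable collaborative signal.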