• Title/Summary/Keyword: Audio information

Search results: 1,425

Open Digital Textbook for Smart Education (스마트교육을 위한 오픈 디지털교과서)

  • Koo, Young-Il;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.177-189
    • /
    • 2013
  • In Smart Education, the digital textbook plays a very important role as the face-to-face medium for learners. Standardizing the digital textbook will promote its industrialization for content providers and distributors as well as for learners and instructors. This study explores ways to standardize digital textbooks around the following three objectives: (1) digital textbooks should take on the role of a medium for blended learning that supports both online and offline classes, should run on common EPUB viewers without a special dedicated viewer, and should utilize the existing e-learning frameworks for learning contents and learning management. The reason to adopt EPUB as the standard for digital textbooks is that it avoids specifying yet another standard for the book format and lets digital textbooks take advantage of the industrial base of EPUB-compliant content and its distribution structure; (2) digital textbooks should provide a low-cost open-market service built on currently available standard open software; (3) to provide appropriate learning feedback to students, digital textbooks should provide a foundation that accumulates and manages all learning-activity information on a standard infrastructure for educational Big Data processing. In this study, the digital textbook in a smart education environment is referred to as the open digital textbook. The components of the open digital textbook service framework are (1) digital textbook terminals such as smart pads, smart TVs, smartphones, and PCs; (2) a digital textbook platform that displays and runs digital contents on those terminals; (3) a learning contents repository, residing in the cloud, that maintains accredited learning contents; (4) an App Store through which learning content companies provide and distribute secondary learning contents and learning tools; and (5) an LMS as a learning support/management tool that classroom teachers use to create instructional materials. In addition, locating all of the hardware and software that implement the smart education service in the cloud takes advantage of cloud computing for efficient management and reduced expense. The open digital textbook for smart education can be regarded as providing an e-book-style interface to the LMS for learners. In the open digital textbook, the representation of text, images, audio, video, equations, and so on is a basic function, but painting, writing, and problem solving go beyond the capabilities of a simple e-book. Teacher-to-student, learner-to-learner, and team-to-team communication is required through the open digital textbook. To represent student demographics, portfolio information, and class information, the standards already used in e-learning are desirable. To process learner tracking information about the learner's activities for the LMS (Learning Management System), the open digital textbook must have a recording function and a function for communicating with the LMS. DRM is a function for protecting various copyrights. Currently, the DRM of an e-book is controlled by the corresponding book viewer; if the open digital textbook accommodates the DRM standards used by the various e-book viewers, the implementation of redundant features can be avoided. Security/privacy functions are required to protect information about study or instruction from third parties. UDL (Universal Design for Learning) is a learning support function for those with disabilities who have difficulty with learning courses.
The open digital textbook, which is based on the e-book standard EPUB 3.0, must (1) record learning-activity log information and (2) communicate with the server to support the learning activity. While the recording and communication functions, which are not defined in current standards, can be implemented in JavaScript and used in current EPUB 3.0 viewers, a strategy is needed for proposing such recording and communication functions as part of the next-generation e-book standard or as a special standard (EPUB 3.0 for education). Future research following this study will implement an open-source program based on the proposed open digital textbook standard and present new educational services, including Big Data analysis.
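As an illustration of the recording and communication functions described above, the following is a minimal Python sketch of the kind of learning-activity record a textbook viewer might accumulate and push to an LMS collection endpoint. The record fields, the `LearningActivity` class, and the endpoint URL are hypothetical illustrations, not part of the proposed standard.

```python
import json
import time
from dataclasses import dataclass, asdict
from urllib import request

@dataclass
class LearningActivity:
    """One learning-activity log record accumulated by the textbook viewer."""
    learner_id: str
    textbook_id: str
    activity: str        # e.g. "page_view", "quiz_answer", "annotation"
    detail: dict
    timestamp: float

def send_to_lms(record: LearningActivity, lms_url: str) -> int:
    """POST one record as JSON to the LMS collection endpoint; returns the HTTP status."""
    payload = json.dumps(asdict(record)).encode("utf-8")
    req = request.Request(lms_url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    rec = LearningActivity(
        learner_id="s-001", textbook_id="math-7-2", activity="page_view",
        detail={"page": 42, "dwell_sec": 35.2}, timestamp=time.time(),
    )
    print(json.dumps(asdict(rec), indent=2))
    # send_to_lms(rec, "https://lms.example.org/api/activity")  # placeholder endpoint
```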

Records Management and Archives in Korea : Its Development and Prospects (한국 기록관리행정의 변천과 전망)

  • Nam, Hyo-Chai
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.1 no.1
    • /
    • pp.19-35
    • /
    • 2001
  • After almost one century of discontinuity in the archival tradition of the Chosun dynasty, Korea entered a new age of records and archival management by legislating and executing a basic law (the Records and Archives Management of Public Agencies Act of 1999). The Annals of the Chosun dynasty recorded the major historical facts of five hundred years of national affairs. The Annals are a major accomplishment in human history and rare in the world. This was possible because the Annals were composed of collected, selected, and compiled records of primary sources written and compiled by generations of historians. As important public records need to be preserved in their original forms in modern archives, Korea had to develop and establish a modern archival system to appraise and select important national records for archival preservation. However, the colonization of Korea deprived us of the opportunity to do so, and our fine archival tradition was not carried on. A centralized archival system began to develop with the establishment of GARS under the Ministry of Government Administration in 1969. GARS built a modern repository in Pusan in 1984, succeeding to the tradition of the History Archives of the Chosun dynasty. In 1998, GARS moved its headquarters to the Taejon Government Complex and acquired state-of-the-art audio-visual archive preservation facilities. From 1996, GARS introduced an automated archival management system to replace the manual registration and management system, complementing preservation microfilming. Digitization of the holdings was the key project for providing digital images of archives to users. To do this, GARS purchased new computer/server systems and developed application software. In parallel, GARS drastically renovated its manpower composition toward a high level of professionalization by recruiting more archivists with historical and library science backgrounds. Conservators and computer system operators were also recruited. The new archival law has been in effect since January 1, 2000, and has brought the following changes to records and archival administration in Korea. First, the law regulates the records and archives of all public agencies, including the Legislature, the Judiciary, the Administration, the constitutional institutions, the Army, Navy, and Air Force, and the National Intelligence Service. A nationwide unified records and archives management system became available. Second, public archives and records centers are to be established according to the level of the agency: a central archives at the national level, special archives for the National Assembly and the Judiciary, local government archives for metropolitan cities and provinces, and records centers or special records centers for administrative agencies. A records manager will be responsible for the records management of each administrative division. Third, records in public agencies are registered in a computer system as they are produced; therefore, the records are traceable and can be searched or retrieved easily through the Internet or computer networks. Fourth, qualified records managers and archivists who are professionally trained in records management and archival science will be assigned mandatorily to guarantee the professional management of records and archives. Fifth, the illegal treatment of public records and archives constitutes a punishable crime.
In the future, public records and archival management will develop along with the Korean government's 'Electronic Government Project.' The following changes are in prospect. First, public agencies will digitize paper records, audio-visual records, and publications as well as electronic documents, thus promoting administrative efficiency and productivity. Second, the National Assembly has already established its Special Archives; the Judiciary and the National Intelligence Service will follow, and more archives will be established at the city and provincial levels. Third, the more our society develops into a knowledge-based information society, the more records management will become one of the important functions of the national government. As more universities, academic associations, and civil societies participate in promoting archival awareness and in establishing archival science, and as more people realize the importance of records and archives management to the point of a national public campaign, records and archival management in Korea will develop in ways significantly distinguishable from present practice.

A Study of Performance Analysis on Effective Multiple Buffering and Packetizing Method of Multimedia Data for User-Demand Oriented RTSP Based Transmissions Between the PoC Box and a Terminal (PoC Box 단말의 RTSP 운용을 위한 사용자 요구 중심의 효율적인 다중 수신 버퍼링 기법 및 패킷화 방법에 대한 성능 분석에 관한 연구)

  • Bang, Ji-Woong;Kim, Dae-Won
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.1
    • /
    • pp.54-75
    • /
    • 2011
  • PoC (Push-to-talk over Cellular) is an integrated technology of group voice calls, video calls, and Internet-based multimedia services. If a PoC user cannot participate in a PoC session for various reasons, such as an emergency situation or lack of battery capacity, the user can use the PoC Box, which has functionality similar to the MM Box in the MMS (Multimedia Messaging Service). The RTSP (Real-Time Streaming Protocol) method is recommended for the transmission session between the PoC Box and a terminal. Because the existing VOD service runs over a wired network, RTSP-based VOD uses large packets; the PoC service, however, operates in a wireless communication environment, to which the RTSP method must be adapted. Packet loss in a wired environment is relatively lower than in a wireless environment, so a buffering latency occurs in the PoC service due to play-out delay, that is, asynchronous playback of the audio and video content. These problems make it difficult for a user to find the information they want while the media content is being played. In this paper, the following techniques and methods were proposed and their performance and superiority verified through testing: a cross-over dual reception buffering technique, an advance-partition multi-reception buffering technique, and an on-demand multi-reception buffering technique, which are designed for effectively picking up information from media content transmitted over RTSP in a short time when a user searches the media, as well as for reducing playback delay; and a same-priority packetization transmission method and a priority-based packetization transmission method for packetizing the media data for transmission. From the simulated functional evaluation, the proposed multi-reception buffering and packetization methods proved superior, with respect to media retrieval behavior, to the existing single-reception buffering method by 6-9 points in effectiveness. Among them, the on-demand multi-reception buffering technique combined with the same-priority packetization transmission method handled users' media search requests most promptly, scoring 3-24 points higher than the other combinations. In addition, users could find the information they wanted much more quickly, since a large amount of information is received within a short, focused media retrieval period.
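To make the packetization idea concrete, the following Python sketch illustrates a generic priority-based packetization and transmission order, not the paper's exact scheme; the MTU size, the `key_segments` notion, and the two-level priority values are illustrative assumptions.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class MediaPacket:
    priority: int                      # lower value = transmitted earlier
    seq: int                           # original sequence number, used for reassembly
    payload: bytes = field(compare=False)

def packetize(media: bytes, mtu: int, key_segments: set) -> list:
    """Split the media into MTU-sized packets; segments a user is likely to seek to
    (key_segments, given as packet indices) receive higher transmission priority."""
    packets = []
    for seq, start in enumerate(range(0, len(media), mtu)):
        prio = 0 if seq in key_segments else 1
        packets.append(MediaPacket(prio, seq, media[start:start + mtu]))
    return packets

def transmit_order(packets: list) -> list:
    """Return sequence numbers in transmission order (priority first, then sequence)."""
    heap = list(packets)
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap).seq)
    return order

if __name__ == "__main__":
    media = bytes(10_000)                                # dummy media payload
    pkts = packetize(media, mtu=1400, key_segments={0, 3})
    print(transmit_order(pkts))                          # key segments 0 and 3 come first
```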

A Study on the Improvement for Medical Service Using Video Promotion Materials for PET/CT Scans (PET/CT 검사에서 동영상 홍보물을 통한 의료서비스 향상에 관한 연구)

  • Kim, Woo Hyun;Kim, Jung Seon;Ko, Hyun Soo;Sung, Ji Hye;Lee, Jeoung Eun
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.17 no.1
    • /
    • pp.30-35
    • /
    • 2013
  • Purpose: Among current services, providing information to patients and their guardians through promotional materials induces positive responses and contributes to improving hospital reliability. The objective of this study is therefore to evaluate the effectiveness of audio-visual material, one means of promotion, as a way to give accurate medical information that resolves patients' questions about the purpose and procedure of their examination and reduces complaints about waiting, which negatively affect service quality assessment. Materials and Methods: 60 patients (mean age 53.97 ± 12.24; male : female = 26 : 34) who had an ¹⁸F-FDG PET/CT scan from July 2012 to August 2012 at Seoul Asan Medical Center were enrolled in the study. All of the patients having a PET/CT scan were asked to watch an informative video before the injection of the radiopharmaceutical (¹⁸F-FDG) and to fill in a questionnaire. Results: Analysis of the questionnaires showed that 52% of the 60 patients were having a PET/CT scan for the first time, and 72.4% of the patients had read the PET/CT guidebook offered by their outpatient department or inpatient ward before the scan. Regarding prior knowledge of the purpose and method of the PET/CT scan, 25.1% of patients answered "know well", 34% "not sure", and 40.9% "don't know". 84.7% of the patients answered that watching the PET/CT guide video before the injection helped them understand the examination they were having, while 15.3% did not. For the question asking whether the patients had ever used our homepage or a smartphone QR code to watch the guide video before visiting our PET center, only 3.3% answered "yes". Lastly, for the question asking whether watching the video made the waiting time feel shorter, the patients answered 60.1% "yes", 31.4% "so so", and 8.5% "no". Conclusion: Understanding of the purpose and method of the PET/CT scan and the level of satisfaction improved after the patients watched the guide video, regardless of whether they had had a PET/CT scan before or had read the PET/CT guidebook. Watching the video was also effective in reducing the perceived waiting time. However, while displaying the PET/CT guide video is useful as a medical service for providing information about the scan and shortening the waiting time, actual utilization of the service was very poor because of passive promotion and the patients' indifference to their examination. Therefore, it is necessary to build a healthcare system that can reach more patients through active promotion.


Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, those supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised models have produced prominent applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields; as a result, all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, which are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, namely vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers.
This makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. It has been possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
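As a concrete illustration of local receptive fields, shared weights, and pooling, here is a minimal PyTorch sketch of a small convolutional classifier trained with backpropagation; the layer sizes and the 28x28 grayscale input are illustrative assumptions, not an architecture taken from the article.

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Tiny CNN showing local receptive fields, shared weights, and pooling."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5),   # 5x5 local receptive fields with shared weights
            nn.ReLU(),
            nn.MaxPool2d(2),                   # pooling simplifies the feature map
            nn.Conv2d(16, 32, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)  # sized for 28x28 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallConvNet()
x = torch.randn(8, 1, 28, 28)                          # a batch of 8 grayscale images
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 10, (8,)))
loss.backward()                                        # backpropagation computes the gradients
```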

Research on Generative AI for Korean Multi-Modal Montage App (한국형 멀티모달 몽타주 앱을 위한 생성형 AI 연구)

  • Lim, Jeounghyun;Cha, Kyung-Ae;Koh, Jaepil;Hong, Won-Kee
    • Journal of Service Research and Studies
    • /
    • v.14 no.1
    • /
    • pp.13-26
    • /
    • 2024
  • Multi-modal generation is the process of generating results based on a variety of information, such as text, images, and audio. With the rapid development of AI technology, a growing number of multi-modal systems synthesize different types of data to produce results. In this paper, we present an AI system that uses speech and text recognition to describe a person and generate a montage image. While existing montage generation technology is based on the appearance of Westerners, the montage generation system developed in this paper learns a model based on Korean facial features. Therefore, it can create more accurate and effective Korean montage images from multi-modal voice and text specific to Koreans. Since the developed montage generation app can be used to produce draft montages, it can dramatically reduce the manual labor of existing montage production personnel. For this purpose, we utilized persona-based virtual-person montage data provided by the AI-Hub of the National Information Society Agency. AI-Hub is an AI integration platform aimed at providing a one-stop service by building the artificial intelligence training data necessary for the development of AI technologies and services. The image generation system was implemented using VQGAN, a deep learning model used to generate high-resolution images, and the KoDALLE model, a Korean-language image generation model. We confirm that the trained AI model creates a montage image of a face that is very similar to what was described using voice and text. To verify the practicality of the developed montage generation app, 10 testers used it, and more than 70% responded that they were satisfied. The montage generator can be used in various fields, such as criminal investigation, to describe and visualize facial features.
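The following Python skeleton sketches the multi-modal flow described above, in which spoken and typed descriptions feed an image generator; the callables and dummy components are hypothetical stand-ins, since the actual speech recognizer and the KoDALLE/VQGAN generator APIs are not specified in the abstract.

```python
from typing import Callable, Optional

def build_montage(
    transcribe: Callable[[bytes], str],      # speech audio -> Korean text description
    generate: Callable[[str], bytes],        # text description -> montage image bytes
    audio: Optional[bytes] = None,
    typed_text: Optional[str] = None,
) -> bytes:
    """Combine spoken and typed descriptions and hand the result to the generator."""
    parts = []
    if audio is not None:
        parts.append(transcribe(audio))
    if typed_text:
        parts.append(typed_text)
    if not parts:
        raise ValueError("need at least one of audio or typed_text")
    return generate(" ".join(parts))

# Dummy components so the skeleton runs end to end without the real models.
dummy_transcribe = lambda audio: "둥근 얼굴, 짧은 머리"                     # "round face, short hair"
dummy_generate = lambda desc: ("<montage for: " + desc + ">").encode("utf-8")

print(build_montage(dummy_transcribe, dummy_generate,
                    audio=b"raw-audio-bytes", typed_text="안경 착용"))      # "wearing glasses"
```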

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • A large amount of data is now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data, such as audio, text, and image data, and can be analyzed by deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons which learn parameters such as weights as inputs pass through toward the outputs. A CNN has a layered structure well suited for image classification, as it is composed of convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained using online product images, which are taken under controlled conditions, such as images of the apparel itself or of professional models wearing the apparel. Such images may not be an effective way to train a classification model when one wants to classify street fashion or walking images, which are taken in uncontrolled conditions and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures this mobility. This allows the classification model to be trained on far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in a CNN consists of pre-training and fine-tuning stages, we divide the training into two steps. First, we pre-train our architecture on a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. For the runway image dataset, we could not find any previously published public dataset, so we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands, including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model with images capturing all possible postures, which we denote as mobility, by using our own runway apparel image dataset.
Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow-Slim, we could save the time spent training the classification model, taking only 6 minutes per experiment to train the classifier. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. To be specific, runway query images can be used in a mobile application service during fashion week to facilitate brand search; street-style query images can be classified during fashion editorial tasks to label the brand or style; and website query images can be processed by an e-commerce multi-complex service that provides item information or recommends similar items.
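The two-stage transfer learning procedure (ImageNet pre-training, then fine-tuning a classifier head on the brand labels) can be sketched as follows. This uses torchvision's GoogLeNet ImageNet weights rather than the paper's TensorFlow-Slim checkpoints, and the batch size, learning rate, and frozen-backbone choice are illustrative assumptions (a recent torchvision with downloadable weights is assumed).

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage 1: start from a GoogLeNet backbone pre-trained on ImageNet.
net = models.googlenet(weights="DEFAULT")
for p in net.parameters():
    p.requires_grad = False                     # freeze the pre-trained feature extractor

# Stage 2: replace and fine-tune only the classifier head for the 32 runway brands.
net.fc = nn.Linear(net.fc.in_features, 32)

optimizer = torch.optim.SGD(net.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a dummy batch standing in for 224x224 runway images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 32, (4,))
optimizer.zero_grad()
loss = criterion(net(images), labels)
loss.backward()
optimizer.step()
```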

How Enduring Product Involvement and Perceived Risk Affect Consumers' Online Merchant Selection Process: The 'Required Trust Level' Perspective (지속적 관여도 및 인지된 위험이 소비자의 온라인 상인선택 프로세스에 미치는 영향에 관한 연구: 요구신뢰 수준 개념을 중심으로)

  • Hong, Il-Yoo B.;Lee, Jung-Min;Cho, Hwi-Hyung
    • Asia pacific journal of information systems
    • /
    • v.22 no.1
    • /
    • pp.29-52
    • /
    • 2012
  • Consumers differ in the way they make a purchase. An audio enthusiast would willingly make a bold yet serious decision to buy a top-of-the-line home theater system, while having no interest in replacing his two-decade-old shabby car. On the contrary, an automobile enthusiast wouldn't mind spending forty thousand dollars on a new Jaguar convertible, yet cares little about his junky component system. It is product involvement that helps us explain such differences among individuals in purchase style. Product involvement refers to the extent to which a product is perceived to be important to a consumer (Zaichkowsky, 2001). Product involvement is an important factor that strongly influences a consumer's purchase decision-making process and thus has been of prime interest to consumer behavior researchers. Furthermore, researchers found that involvement is closely related to perceived risk (Dholakia, 2001). While abundant research addresses how product involvement relates to overall perceived risk, little attention has been paid to the relationship between involvement and different types of perceived risk in an electronic commerce setting. Given that perceived risk can be a substantial barrier to online purchase (Jarvenpaa, 2000), research addressing this issue offers useful implications on which specific types of perceived risk an online firm should focus on mitigating if it is to increase sales to the fullest potential. Meanwhile, past research has focused on consumer responses such as information search and dissemination as consequences of involvement, neglecting other behavioral responses like online merchant selection. For example, will a consumer seriously considering the purchase of a pricey Guzzi bag perceive a great degree of risk associated with online buying and therefore choose to buy it from a digital storefront rather than from an online marketplace to mitigate that risk? Will a consumer require greater trust on the part of the online merchant when the perceived risk of online buying is high? We intend to find answers to these research questions through an empirical study. This paper explores the impact of enduring product involvement and perceived risks on the required trust level, and further on online merchant choice. For the purpose of the research, five types or components of perceived risk are taken into consideration: financial, performance, delivery, psychological, and social risks. A research model was built around the constructs under consideration, and 12 hypotheses were developed based on the research model to examine the relationships between enduring involvement and the five components of perceived risk, between the five components of perceived risk and the required trust level, between enduring involvement and the required trust level, and finally between the required trust level and preference toward an e-tailer. To attain our research objectives, we conducted an empirical analysis consisting of two phases of data collection: a pilot test and a main survey. The pilot test was conducted using 25 college students to ensure that the questionnaire items were clear and straightforward. The main survey was then conducted using 295 college students at a major university over nine days between December 13, 2010 and December 21, 2010.
The measures employed to test the model included eight constructs: (1) enduring involvement, (2) financial risk, (3) performance risk, (4) delivery risk, (5) psychological risk, (6) social risk, (7) required trust level, and (8) preference toward an e-tailer. The statistical package SPSS 17.0 was used to test the internal consistency among the items within the individual measures. Based on the Cronbach's α coefficients of the individual measures, the reliability of all the variables is supported. Meanwhile, the Amos 18.0 package was employed to perform a confirmatory factor analysis designed to assess the unidimensionality of the measures. The goodness of fit of the measurement model was satisfactory. Unidimensionality was tested using convergent, discriminant, and nomological validity, and the statistical evidence showed that all three types of validity were satisfied. The structural equation modeling technique was then used to analyze the individual paths among the research constructs. The results indicated that enduring involvement has significant positive relationships with all five components of perceived risk, while only performance risk is significantly related to the trust level required by consumers for purchase. It can be inferred from the findings that product performance problems are most likely to occur when a merchant behaves in an opportunistic manner. Positive relationships were also found between involvement and the required trust level and between the required trust level and online merchant choice. Enduring involvement is concerned with the pleasure a consumer derives from a product class and/or with the desire for knowledge about the product class, and thus is likely to motivate the consumer to look for ways of mitigating perceived risk by requiring a higher level of trust on the part of the online merchant. Likewise, a consumer requiring a high level of trust in the merchant will choose a digital storefront rather than an e-marketplace, since a digital storefront is believed to be more trustworthy than an e-marketplace, as it fulfills orders by itself rather than acting as an intermediary. The findings of the present research provide both academic and practical implications. The first academic implication is that enduring product involvement is a strong motivator of consumer responses, especially the selection of a merchant, in the context of electronic shopping. Secondly, academics are advised to pay attention to the finding that an individual component or type of perceived risk can be used as an important research construct, since it allows one to pinpoint the specific types of risk that are influenced by antecedents or that influence consequents. Meanwhile, our research provides implications useful for online merchants (both digital storefronts and e-marketplaces). Merchants may develop strategies to attract consumers by managing the perceived performance risk involved in purchase decisions, since it was found to have a significant positive relationship with the level of trust required by a consumer on the part of the merchant. One way to manage performance risk would be to thoroughly examine the product before shipping to ensure that it has no deficiencies or flaws.
Secondly, digital storefronts are advised to focus on symbolic goods (e.g., cars, cell phones, fashion outfits, and handbags), in which consumers are relatively more involved, whereas e-marketplaces should put their emphasis on non-symbolic goods (e.g., drinks, books, MP3 players, and bike accessories).
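Since the reliability check above relies on Cronbach's α, here is a small Python sketch of how that coefficient is computed from an item-score matrix; the simulated 295-respondent, 4-item Likert data is purely illustrative and unrelated to the study's actual survey.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the summed scale
    return k / (k - 1) * (1 - item_var / total_var)

# Example: 295 simulated respondents answering a 4-item Likert-type scale (1-5).
rng = np.random.default_rng(0)
latent = rng.normal(size=(295, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(295, 4))), 1, 5)
print(round(cronbach_alpha(items), 3))
```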


A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including the accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method is proposed for detecting accompanying status with a deep learning model using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. The accompanying status is defined as a redefinition of part of the user's interaction behavior, namely whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of the multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of the data collected from different sensors. Normalization was performed for each x, y, and z axis value of the sensor data, and the sequence data were generated according to the sliding window method. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm, and the mini-batch size was set to 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of the majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to be transferred to evaluation data that follows a different distribution. A model capable of exhibiting robust recognition performance against changes in data not considered during training is expected to be obtained.
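A minimal PyTorch sketch of the described pipeline (3 convolutional layers with no pooling over 9-channel sensor windows, dropout on the LSTM inputs, a 2-layer LSTM with 128 cells, and a softmax classifier trained with cross entropy and Adam with 0.99 exponential decay) is shown below; the filter counts, kernel size, window length, and dropout rate are assumptions not stated in the abstract.

```python
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    """CNN + LSTM over sliding windows of 9-channel sensor data
    (accelerometer, magnetic field, gyroscope; x, y, z each)."""
    def __init__(self, in_channels: int = 9, num_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(                                  # 3 conv layers, no pooling,
            nn.Conv1d(in_channels, 64, 5, padding=2), nn.ReLU(),    # to keep temporal information
            nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(),
        )
        self.drop = nn.Dropout(0.5)                                 # dropout on the LSTM inputs
        self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
        self.head = nn.Linear(128, num_classes)                     # softmax applied via the loss

    def forward(self, x):                                           # x: (batch, channels, time)
        feats = self.drop(self.conv(x).transpose(1, 2))             # -> (batch, time, 64)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                                # logits at the last time step

model = AccompanyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

windows = torch.randn(128, 9, 100)                                  # mini-batch of 128 windows
labels = torch.randint(0, 2, (128,))
loss = nn.CrossEntropyLoss()(model(windows), labels)
loss.backward()
optimizer.step()
scheduler.step()                                                    # 0.99 decay per epoch
```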

An Application-Specific and Adaptive Power Management Technique for Portable Systems (휴대장치를 위한 응용프로그램 특성에 따른 적응형 전력관리 기법)

  • Egger, Bernhard;Lee, Jae-Jin;Shin, Heon-Shik
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.367-376
    • /
    • 2007
  • In this paper, we introduce an application-specific, adaptive power management technique for portable systems that support dynamic voltage scaling (DVS). We exploit both the idle time of multitasking systems running soft real-time tasks and memory- or CPU-bound code regions. Detailed power and execution time profiles guide an adaptive power manager (APM) that is linked to the operating system. A post-pass optimizer marks candidate regions for DVS by inserting calls to the APM. At runtime, the APM monitors the CPU's performance counters to dynamically determine the affinity of each marked region. For each region, the APM computes the optimal voltage and frequency setting in terms of energy consumption and switches the CPU to that setting during the execution of the region. Idle time is exploited by monitoring system idle time and switching to the most energy-economical setting without prolonging execution. We show that our method is most effective for periodic workloads such as video or audio decoding. We have implemented our method in a multitasking operating system (Microsoft Windows CE) running on an Intel XScale processor, achieving up to 9% total system power savings over the standard power management policy, which puts the CPU into a low-power mode during idle periods.
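To illustrate the kind of decision the APM makes, here is a toy Python model that picks the lowest-energy voltage/frequency setting for a region whose CPU work scales with frequency while its memory stalls do not; the settings table, the simplified V²·f energy model, the deadline constraint, and all numbers are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Setting:
    freq_mhz: float
    volt: float

# Hypothetical voltage/frequency operating points of a DVS-capable CPU.
SETTINGS = [Setting(200, 1.0), Setting(400, 1.1), Setting(600, 1.3), Setting(800, 1.5)]

def pick_setting(cpu_cycles: float, mem_stall_s: float, deadline_s: float) -> Setting:
    """Toy DVS policy: run time = CPU work (scales with frequency) + memory stalls
    (roughly frequency-independent); dynamic CPU energy ~ V^2 * f * cpu_time.
    Pick the lowest-energy setting that still meets the deadline."""
    best, best_energy = None, float("inf")
    for s in SETTINGS:
        cpu_time = cpu_cycles / (s.freq_mhz * 1e6)
        if cpu_time + mem_stall_s > deadline_s:
            continue                                  # would prolong execution past the deadline
        energy = s.volt ** 2 * s.freq_mhz * 1e6 * cpu_time
        if energy < best_energy:
            best, best_energy = s, energy
    return best or SETTINGS[-1]                       # fall back to the fastest setting

# A memory-bound region tolerates a lower frequency without missing its deadline.
print(pick_setting(cpu_cycles=2e8, mem_stall_s=0.4, deadline_s=1.0))
```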