• Title/Summary/Keyword: Android application


Implementation of a Photo-Input Game Interface Using Image Search (이미지 검색을 이용한 사진입력 게임 인터페이스 구현)

  • Lee, Taeho; Han, Jaesun; Park, Heemin
    • KIISE Transactions on Computing Practices / v.21 no.10 / pp.658-669 / 2015
  • The paradigm of game development changes with technological trends. If the system can analyze and determine undefined inputs, users' input choices are not restricted. Therefore, game scenarios can have multifarious flows depending upon the user's input data. In this paper, we propose a method of including an output plan in the game system that is based on the user's input but is not restricted to predefined choices. We have implemented an experimental game on the Android platform by combining network communication and APIs. The game interface works as follows: first, the user's input data is transmitted to the server using the HTTP protocol; then, the server carries out an analysis of the input data; and finally, the server returns the decision result to the game device. The game can provide users with a scenario that corresponds to the decision results. In this paper, we used an image file as the user's input data format. The server calculates similarities between the user's image file and reference images obtained from the Naver Image Search API and then returns the determination results. We have confirmed the value of integrating the game development framework with other computing technologies, demonstrating the potential of the proposed methods for application to various future game interfaces.
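The abstract describes the server-side analysis only at a high level. As a minimal sketch of the image-comparison step, a histogram-intersection similarity could stand in for the comparison against reference images; this is an illustrative metric, not the paper's actual algorithm, and the Naver Image Search API call is not reproduced here:

```python
from collections import Counter

def histogram_similarity(pixels_a, pixels_b, bins=32):
    """Histogram-intersection similarity between two grayscale pixel lists (0..255).

    Returns a score in [0, 1]; 1.0 means identical intensity distributions.
    A hypothetical stand-in for the server-side comparison of the user's
    image against reference images fetched from an image-search API.
    """
    bin_w = 256 // bins
    hist_a = Counter(p // bin_w for p in pixels_a)
    hist_b = Counter(p // bin_w for p in pixels_b)
    n_a, n_b = len(pixels_a), len(pixels_b)
    # Intersection of the two normalized histograms, summed over all bins.
    return sum(min(hist_a[k] / n_a, hist_b[k] / n_b) for k in range(bins))
```

In the flow described above, the game client would POST the image file to the server over HTTP, the server would score it against each reference image with a metric like this, and the decision result would be returned to drive the scenario branch.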

Implementation of virtual reality for interactive disaster evacuation training using close-range image information (근거리 영상정보를 활용한 실감형 재난재해 대피 훈련 가상 현실 구현)

  • KIM, Du-Young; HUH, Jung-Rim; LEE, Jin-Duk; BHANG, Kon-Joon
    • Journal of the Korean Association of Geographic Information Studies / v.22 no.1 / pp.140-153 / 2019
  • Close-range image information from drones and ground-based cameras has been frequently used in the field of disaster mitigation, together with 3D modeling and mapping. In addition, the use of virtual reality (VR) is increasing, as the technology makes it possible to simulate large-scale disaster situations with realistic 3D models. In this paper, we created a VR training program by extracting realistic 3D models from close-range images taken by an unmanned aircraft and a handheld digital camera, and we examined several issues that arose during implementation as well as the effectiveness of applying VR to disaster-mitigation training. First, we built a disaster scenario and created 3D models after processing the close-range imagery. The 3D models were imported into Unity, a tool for creating augmented/virtual reality content, as a background for Android-based mobile phones, and the VR environment was created with C#-based scripts. The generated virtual reality includes a scenario in which the trainee moves to a safe place along the evacuation route in the event of a disaster, and we judged that successful training can be achieved with virtual reality. In addition, training in virtual reality has advantages over actual evacuation training in terms of cost, space, and time efficiency.

A Study on Applicability of Smartphone Camera and Lens for Concrete Crack Measurement Using Image Processing Techniques (이미지 처리기법을 이용한 균열 측정시 스마트폰 카메라 및 렌즈 적용성에 대한 연구)

  • Seo, Seunghwan; Kim, Dong-Hyun; Chung, Moonkyung
    • Journal of the Korean Geosynthetics Society / v.20 no.4 / pp.63-71 / 2021
  • Recently, the high-resolution cameras in smartphones have made it possible to measure minute objects, such as cracks in concrete, using image processing techniques. Applications that investigate crack width at close range have already been implemented, but their use is limited, so the usability of high-resolution smartphone cameras for measuring cracks at longer distances needs to be verified. This study focuses on distinguishing crack widths of 1.0 mm or less at a distance of 2 m. Using recent Android-based smartphones, an experiment was conducted focusing on the relationship between the unit pixel size, a key measurement component, and the shooting distance, depending on the camera resolution. As a result, we confirmed that an additional smartphone lens is necessary for the classification and quantification of microcrack widths of 0.3 mm to 1.0 mm. The universal telecentric lens for smartphones had to be installed in an accurate position to minimize the effect of distortion. In addition, by applying a 64 MP high-resolution smartphone camera and a double-magnification lens, the crack width could be calculated in pixel units within 2 m, and crack widths of 0.3, 0.5, and 1.0 mm could be distinguished.
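The relationship between unit pixel size, shooting distance, and measurable crack width can be sketched with the standard pinhole-camera relation; the function below is a minimal illustration, and the example parameter values (0.8 µm pixel pitch, 5 mm focal length) are assumptions typical of mobile sensors, not values taken from the paper:

```python
def crack_width_mm(width_px, pixel_pitch_um, distance_mm, focal_length_mm):
    """Estimate physical crack width from its width in image pixels.

    Pinhole-camera relation: one sensor pixel covers
    (pixel_pitch * distance / focal_length) on the object plane,
    so physical size = pixel count * that ground coverage.
    """
    mm_per_px = (pixel_pitch_um / 1000.0) * distance_mm / focal_length_mm
    return width_px * mm_per_px
```

Under these assumed parameters, a single pixel at 2 m spans 0.32 mm on the target, which illustrates why an extra magnification lens would be needed to classify 0.3 mm cracks reliably at that distance.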

Development of Mental Health Self-Care App for University Student (대학생을 위한 정신건강 자가관리 어플리케이션 개발)

  • Kang, Gwang-Soon; Roh, Sun-Sik
    • Journal of Korea Entertainment Industry Association / v.13 no.1 / pp.25-34 / 2019
  • The purpose of this study was to develop a mobile app for the mental health self-care of university students. User-centered design was applied: needs assessment, analysis, design, development, evaluation, and revision were carried out to suit the subjects. To manage the mental health of university students, the app covers four main areas of mental health problems: drinking, sleep, depression, and stress. It is designed to provide self-test content, analysis and notification of test results, and a management plan for the current status of each area. Based on this, we developed an Android-based mental health self-care application. Subjects can enter their mental health status data and receive an explanation of the normal or risk level for each result, and can then select an appropriate intervention method that they can perform. In addition, we developed a mental health self-care calendar that displays the status of each of the four areas on a day-by-day basis, with the current status expressed in an integrated manner through animations and status bars. The result of this study is a mental health self-care app that can be refined through continuous program improvement.
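The per-area logic described above (self-test score in, normal/risk level out) could be sketched as follows; the four area names follow the abstract, but the cutoff scores and function names are hypothetical, since the app's actual scoring rules are not given:

```python
# Hypothetical per-area risk cutoffs; the app's real scoring rules are not published here.
RISK_CUTOFFS = {"drinking": 8, "sleeping": 10, "depression": 16, "stress": 14}

def evaluate_self_test(scores):
    """Map each area's self-test score to 'normal' or 'risk' for result notification.

    Areas missing from `scores` default to 0 (treated as not at risk).
    """
    return {
        area: ("risk" if scores.get(area, 0) >= cutoff else "normal")
        for area, cutoff in RISK_CUTOFFS.items()
    }
```

A result dictionary like this could then drive the notification screen and the per-area intervention suggestions that the subject selects from.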

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho; Choi, Sangwoo; Chae, Moon-jung; Park, Heewoong; Lee, Jaehong; Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users with multimodal data have been actively studied recently. The research area is expanding from the recognition of an individual user's simple body movements to the recognition of low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data, such as accelerometer, magnetic field, and gyroscope readings, was proposed. The accompanying status was defined as a redefinition of a part of the user's interaction behavior, covering whether the user is accompanying an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation was introduced. We applied nearest interpolation to synchronize the timestamps of data collected from different sensors.
Normalization was performed for each x, y, and z axis value of the sensor data, and the sequence data was generated with the sliding window method. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained with the adaptive moment estimation (ADAM) optimization algorithm, and the mini-batch size was set to 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001, and it decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data. We collected smartphone data from a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of the majority-vote classifier, support vector machine, and deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models tailored to the training data to be transferred to evaluation data that follows a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in data not considered during the model learning stage.
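The preprocessing steps described in this abstract (nearest-timestamp synchronization across sensors, then sliding-window sequence generation for the CNN input) can be sketched as follows; the function names and any window/step values are illustrative, not taken from the paper:

```python
import bisect

def nearest_sync(ref_times, times, values):
    """Align one sensor stream to a reference clock by nearest interpolation.

    For each reference timestamp, pick the sample from (times, values)
    whose timestamp is closest. `times` must be sorted ascending.
    """
    aligned = []
    for t in ref_times:
        i = bisect.bisect_left(times, t)
        if i == 0:
            aligned.append(values[0])           # before first sample: clamp
        elif i == len(times):
            aligned.append(values[-1])          # after last sample: clamp
        else:
            # Choose whichever neighbor is closer in time.
            closer_right = times[i] - t < t - times[i - 1]
            aligned.append(values[i] if closer_right else values[i - 1])
    return aligned

def sliding_windows(samples, window, step):
    """Cut a time-aligned signal into fixed-length sequences (sliding window)."""
    return [samples[i:i + window] for i in range(0, len(samples) - window + 1, step)]
```

After this stage, each window of synchronized, normalized x/y/z readings would be fed to the CNN feature extractor and then to the LSTM layers described above.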