• Title/Summary/Keyword: Interaction Device

Search results: 647

A method of effective tactile information display using smart devices (스마트 단말을 이용한 효과적인 촉각정보 표시 방법)

  • Yun, Sung-Jo;Seo, Kap-Ho;Kim, Dae-Hee;Park, Yong-Sik;Park, Sung-Ho;Jeon, Kwang-Woo;Jeon, Jung-Su
    • Proceedings of the Korean Society of Computer Information Conference / 2014.07a / pp.53-54 / 2014
  • This paper proposes an efficient information display method for a tactile display device intended to provide useful information to the visually impaired. When there is an obstacle ahead, the method recognizes it and converts it into tactile information, so that accidents can be prevented in advance. The method can be built at an economical cost using a low-cost stereo camera.
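As a rough illustration of the idea in this abstract, the sketch below downsamples a stereo depth map into a coarse tactile pin grid, activating a pin when the nearest point in its cell falls within a threshold distance. The function name, grid size, and 1.5 m threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def depth_to_tactile(depth_map, grid_shape=(4, 4), obstacle_m=1.5):
    """Downsample a stereo depth map (in metres) into a coarse tactile grid.

    A cell is activated (1) when the nearest point inside it is closer than
    `obstacle_m`, signalling an obstacle ahead. All names are illustrative.
    """
    h, w = depth_map.shape
    gh, gw = grid_shape
    grid = np.zeros(grid_shape, dtype=int)
    for i in range(gh):
        for j in range(gw):
            cell = depth_map[i * h // gh:(i + 1) * h // gh,
                             j * w // gw:(j + 1) * w // gw]
            if cell.min() < obstacle_m:
                grid[i, j] = 1  # drive the corresponding tactile pin
    return grid

# Example: an 8x8 depth map with a close object in the upper-left corner
depth = np.full((8, 8), 5.0)
depth[0:2, 0:2] = 0.8
print(depth_to_tactile(depth))
```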

How to apply foldable display interaction to smart device (폴더블 디스플레이 인터랙션의 스마트 디바이스 적용방안에 관한 연구)

  • Noh, Ji Hye;Chung, Seung Eun;Ryoo, Han Young
    • Design Convergence Study / v.15 no.3 / pp.151-169 / 2016
  • This study aims to identify the optimal interactions for the functions that become available when a foldable display is applied to smart devices. To this end, we reviewed the literature on the development, morphological features, and application areas of the foldable display, and established five principles of interaction applicable to foldable displays based on the concept and characteristics of foldable display interaction and previous research. We then conducted user surveys to find the optimal interactions for smart-device functions. Prior to the surveys, we classified foldable display interactions into 36 categories based on the five interaction principles, and selected 17 major functions of typical smart devices from relevant documents. Finally, using these concrete interaction methods and functions, we surveyed users on the relationships among the relevant factors and chose the interaction method with the highest frequency and score as the optimal one, for which a detailed description is provided.
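The survey-based selection described above — choosing, for each device function, the interaction with the highest response frequency — can be sketched as follows; the interaction names and vote counts are invented for illustration.

```python
from collections import Counter

# Hypothetical survey responses: for one smart-device function
# (say, "answer call"), each participant chose one folding interaction.
responses = ["half-fold", "full-fold", "half-fold", "bend-corner",
             "half-fold", "full-fold"]

def optimal_interaction(responses):
    """Return the interaction chosen most often, mirroring the paper's
    highest-frequency selection (names here are illustrative)."""
    counts = Counter(responses)
    return counts.most_common(1)[0][0]

print(optimal_interaction(responses))  # "half-fold" wins with 3 votes
```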

A research on man-robot cooperative interaction system

  • Ishii, Masaru
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1992.10b / pp.555-557 / 1992
  • Recently, the realization of an intelligent cooperative interaction system between a human and robot systems has become necessary. In this paper, HyperCard with voice control is used for such a system because of its easy handling and excellent human interface. Clicking buttons in the HyperCard stack with a mouse or a voice command controls each joint of a robot system. A robot teaching operation of grasping a bin and pouring the liquid in it into a cup was carried out. This robot teaching method using HyperCard provides a foundation for realizing a user-friendly cooperative interaction system.

The Design and Fabrication of μCCA-μGI Device for Toxicity Evaluation of Acetaminophen (아세트아미노펜 독성평가를 위한 μCCA-μGI 디바이스의 개발)

  • Chang Jung-Yun;Shuler Michael L.
    • Journal of Pharmaceutical Investigation / v.36 no.4 / pp.263-269 / 2006
  • Deficiencies in early ADMET (absorption, distribution, metabolism, elimination, and toxicity) information on drug candidates exact a significant economic penalty on pharmaceutical firms. A microscale cell culture analogue-microscale gastrointestinal (μCCA-μGI) device using Caco-2, L2, and HEp G2/C3A cells, which mimics the metabolic process that follows absorption in humans, was used to investigate the toxicity of the model chemical acetaminophen (AAP). The toxicity of acetaminophen determined after induction of CYP 1A1/2 in Caco-2 cells was not significant. In a coculture system, although no significant reduction in the viability of HEp G2/C3A and L2 cells was found, an approximately 5-fold increase in CYP 1A1/2 activity was observed. These results appear to be related to organ-organ interaction. The oral administration of a drug requires adding the absorption process through the small intestine to the current μCCA device. Therefore, a perfusion coculture system was employed to evaluate absorption across the small intestine and the resulting toxicity in the liver and lung. This system gives comprehensive and physiological information on oral uptake and the resulting toxicity, as in the body. The current μCCA device can be used to demonstrate the toxic effect due to organ-to-organ interaction after oral administration.

The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho;Lim, Ji-Hyoun;Moon, Sung-Hyun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.551-556 / 2012
  • Objective: This study presents the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it uses advanced sensor technology and allows users natural interaction with their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed. Twelve participants performed each of the 20 gestures while given the three types of visual feedback in turn. Results: People made longer hand traces and took longer to make a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was most preferred. Conclusion: The type of visual feedback showed a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study can be applied to any device that needs visual feedback for device control. Larger feedback produced shorter motion traces, less time, and faster motion than smaller feedback when people performed gestures to control a device, so large visual feedback is recommended for situations requiring fast actions, while smaller visual feedback is recommended for situations requiring elaborate actions.
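The dependent measures compared across feedback types (trace length, elapsed time, and speed) can be computed from sampled hand positions roughly as follows; the `(t, x, y)` sampling format is an assumption for illustration, not the study's instrumentation.

```python
import math

def gesture_metrics(samples):
    """Compute trace length, elapsed time, and mean speed from (t, x, y)
    samples of a hand trajectory. Illustrative of the measures compared
    across feedback types; not the paper's actual code."""
    # Sum the Euclidean distance between consecutive samples
    length = sum(math.hypot(x2 - x1, y2 - y1)
                 for (_, x1, y1), (_, x2, y2) in zip(samples, samples[1:]))
    elapsed = samples[-1][0] - samples[0][0]
    speed = length / elapsed if elapsed else 0.0
    return length, elapsed, speed

# A tiny trajectory: move along a 3-4-5 triangle, then hold still
samples = [(0.0, 0, 0), (0.5, 3, 4), (1.0, 3, 4)]
length, elapsed, speed = gesture_metrics(samples)
print(length, elapsed, speed)  # 5.0 1.0 5.0
```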

User Interaction Library for Natural Science Education Digital App-Book on Android Platform (안드로이드 기반 자연과학 교육용 디지털 앱북 개발을 위한 사용자 상호작용 라이브러리)

  • Lee, Kang-Woon;Beak, A-Ram;Choi, Haechul
    • Journal of Broadcast Engineering / v.20 no.1 / pp.110-121 / 2015
  • The digital app-book is an advanced form of the electronic book (e-book) that attracts much interest thanks to video, sound, sensors, and a variety of interactions. As mobile devices have evolved, demand for digital app-books has also risen substantially. However, the distribution of digital app-book content struggles to meet this demand, because the interactions in a digital app-book carry a high programming cost. To resolve this problem, the interactions between device and user were implemented and verified as library functions. The proposed library consists of three parts (user action recognition, device action, and content action) and provides various user-device interaction functions by combining methods from each part, supporting source-code reusability, easy understanding and availability, and wide expandability. The library was used in the development of app-book content for natural science education. As a result, it reduced the number of lines of code considerably and enabled more rapid app-book development.
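A minimal sketch of the three-part structure the abstract describes — mapping a recognized user action to a device action and a content action — might look like this (the library itself targets Android; the action names and mappings here are invented, not the library's API).

```python
# Illustrative lookup tables: one entry per recognized user action.
DEVICE_ACTIONS = {"shake": "vibrate", "tilt": "play_sound"}
CONTENT_ACTIONS = {"shake": "scatter_leaves", "tilt": "roll_ball"}

def handle_user_action(action):
    """Combine one method from each part (device action + content action)
    into a single user-device interaction, as the abstract describes."""
    device = DEVICE_ACTIONS.get(action, "none")
    content = CONTENT_ACTIONS.get(action, "none")
    return device, content

print(handle_user_action("shake"))  # ('vibrate', 'scatter_leaves')
```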

Ink Jet Printing of Functional Materials

  • Canisius, Johannes;Brookes, Paul;Heckmeier, Michael;James, Mark;Mueller, David;Patterson, Katie
    • Korean Information Display Society: Conference Proceedings / 2007.08b / pp.1121-1124 / 2007
  • Ink jet printing has been targeted as a key technology for OLED, TFT backplane and other organic semiconductor device fabrication. This presentation will concentrate on aspects of the IJ process, formulation design, jetting performance, interaction with the substrate and resultant printed device performance.

Laser Pointer Interaction System Based on Image Processing (영상처리 기반의 레이저 포인터 인터랙션 시스템)

  • Kim, Nam-Woo;Lee, Seung-Jae;Lee, Joon-Jae;Lee, Byung-Gook
    • Journal of Korea Multimedia Society / v.11 no.3 / pp.373-385 / 2008
  • The evolution of computer input devices slowed considerably after the introduction of the mouse as a pointing input device. Even though the stylus and touch screen, invented later, provide some alternatives, all of these methods were designed for close-range interaction with the computer. There are few options available for a user to interact with a computer from afar, which is especially needed during presentations. Therefore, in this paper, we try to fill the gap by proposing a laser pointer interaction system that allows the user to give pointing commands to the computer from a distance using only a laser pointer, which is cheap and readily available. Combined with image-processing-based software, it provides mouse-like pointing interaction with the computer. The proposed system works well not only on a conventional flat screen but also on a flexible screen, by incorporating a non-linear coordinate mapping algorithm so that the system can support non-linear environments such as curved and flexible walls.
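A much-simplified version of such a pipeline, assuming a grayscale camera frame, detects the laser dot as the centroid of bright pixels and maps it to screen coordinates with a plain linear mapping (the paper's non-linear mapping for curved screens is beyond this sketch; all names here are illustrative).

```python
import numpy as np

def detect_laser(frame, threshold=200):
    """Locate a bright laser dot as the centroid of pixels above a
    brightness threshold in a grayscale frame. A simplified stand-in
    for the paper's image-processing pipeline."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return None  # no dot visible in this frame
    return float(xs.mean()), float(ys.mean())

def to_screen(point, cam_size, screen_size):
    """Linear camera-to-screen coordinate mapping; the paper extends
    this with a non-linear mapping for curved or flexible screens."""
    (x, y), (cw, ch), (sw, sh) = point, cam_size, screen_size
    return x * sw / cw, y * sh / ch

# Synthetic 160x120 frame with a 2x2 laser dot near the centre
frame = np.zeros((120, 160), dtype=np.uint8)
frame[60:62, 80:82] = 255
p = detect_laser(frame)
print(p, to_screen(p, (160, 120), (1920, 1080)))
```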

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.19-28 / 2012
  • After the internet era, we are moving toward a ubiquitous society. Nowadays people are interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve and deploy interaction between exhibits and audience by analyzing people's usage patterns. In order to provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their location and route. For outdoor location tracking, GPS is widely used. GPS can obtain the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking services. However, as GPS tracks location using satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking are ongoing, using very-short-range communication services such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. However, these technologies have shortcomings: the audience needs to carry an additional sensor device, and the system becomes difficult and expensive as the density of the target area increases. In addition, the typical exhibition environment has many obstacles to the network, which degrades system performance. Above all, the biggest problem is that interaction methods using devices based on these older technologies cannot provide natural service to users. Moreover, since such systems use sensor-recognition methods, every user must carry a device, which limits the number of users that can use the system simultaneously.
In order to make up for these shortcomings, this study suggests a technology that obtains the exact location information of users through location mapping using the Wi-Fi of users' smartphones and 3D cameras. We applied the signal strength of wireless LAN access points to develop a lower-priced indoor location tracking system. An AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the phone can be used directly as a tracking device. We used the Microsoft Kinect sensor as the 3D camera. Kinect is equipped with functions for discriminating depth and human information inside the shooting area, so it is appropriate for extracting the user's body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users. They are connected to the Camera Client, which calculates the mapping information aligned to each cell and obtains the exact location, status, and pattern information of the audience. The location mapping technique of the Camera Client decreases the error rate of indoor location services, increases the accuracy of individual discrimination through discrimination based on body information, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive the appropriate interaction service through the main server.
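The two-stage localization described above — coarse cell selection from Wi-Fi signal strength, then refinement with in-cell camera coordinates — can be sketched as below; the data shapes and values are hypothetical, not the system's actual interfaces.

```python
def strongest_cell(rssi_by_ap, ap_to_cell):
    """Coarse localization: return the cell of the AP with the strongest
    received signal. Data shapes here are hypothetical."""
    best_ap = max(rssi_by_ap, key=rssi_by_ap.get)
    return ap_to_cell[best_ap]

def refine(cell_origin, camera_xy):
    """Fine localization: add the 3D camera's in-cell coordinates to the
    cell origin to get a position in exhibition-space coordinates."""
    return (cell_origin[0] + camera_xy[0], cell_origin[1] + camera_xy[1])

# Two APs, each anchoring a cell whose origin is given in metres
ap_to_cell = {"ap1": (0, 0), "ap2": (10, 0)}
rssi = {"ap1": -71, "ap2": -48}  # dBm; values nearer zero are stronger
cell = strongest_cell(rssi, ap_to_cell)
pos = refine(cell, (2.5, 1.0))   # Kinect sees the visitor 2.5 m into the cell
print(cell, pos)  # (10, 0) (12.5, 1.0)
```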