• Title/Summary/Keyword: Simple user interface


Development of a Package for the Multi-Location Problem by Genetic Algorithm (유전 알고리즘을 이용한 복수 물류센터 입지분석용 패키지의 개발)

  • Yang, Byung-Hak
    • IE interfaces
    • /
    • v.13 no.3
    • /
    • pp.479-485
    • /
    • 2000
  • We consider the Location-Allocation Problem with the Cost of Land (LAPCL). LAPCL is extremely large and combines the complex characteristics of both the location and the allocation problem. Heuristics and decomposition approaches for the simple Location-Allocation Problem were well developed over the last three decades. Recently, genetic algorithms (GA) have been widely used in combinatorics and NLP, and much research shows that a GA is efficient at finding good solutions. The main motive of this research is the development of a package for LAPCL. We found that LAPCL reduces to a trivial problem once the locations are given; in this case, the fitness function can be calculated by a simple technique. We built a database of zip codes, latitudes, longitudes, administrative addresses, and posted land prices. This database enables any real field problem to be coded as a mathematical location problem. We developed a PC package for a class of multi-location problems. The package provides an interactive interface between the user and the computer so that the user can easily generate various solutions.
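The reduction the abstract mentions can be made concrete: once facility locations are fixed, each customer is simply assigned to its nearest facility, so the GA fitness is transport cost plus land cost. The following is a minimal sketch under assumed details (Euclidean distances, demand-weighted cost, a `land_price` table keyed by site); the paper's actual objective may differ.

```python
import math

def fitness(facilities, customers, land_price):
    """Hypothetical LAPCL fitness: with locations fixed, allocation is
    trivial (nearest facility), so the objective is demand-weighted
    transport cost plus the posted land price of each chosen site."""
    transport = 0.0
    for cx, cy, demand in customers:
        transport += demand * min(math.hypot(cx - fx, cy - fy)
                                  for fx, fy in facilities)
    land = sum(land_price[f] for f in facilities)
    return transport + land

# toy data: two candidate sites, three customers (x, y, demand)
sites = [(0.0, 0.0), (10.0, 0.0)]
custs = [(1.0, 0.0, 2.0), (9.0, 0.0, 1.0), (5.0, 0.0, 1.0)]
price = {(0.0, 0.0): 5.0, (10.0, 0.0): 5.0}
print(fitness(sites, custs, price))  # 18.0
```

A GA would then evolve only the set of site locations and call this function to score each chromosome.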

  • PDF

Development of Simple-function PC-NC System Based on One-CPU (단인 CPU 기반의 단순 기능형 PC-NC 시스템 개발)

  • 전현배;황진동;이돈진;김화영;안중환
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.229-232
    • /
    • 2000
  • This research aims at developing a low-cost PC-NC system based on a single CPU and investigating the feasibility of its application to a simple-function lathe. The hardware consists of a two-axis motion control board including a 24-bit counter, an 8253 timer, and a 12-bit DA converter, a DIO board for PLC operation, and a PC with an Intel Pentium 466 MHz. The fundamental real-time NC functions such as G-code interpretation, interpolation, and position and velocity control of the axes are performed. A user programming interface with icon manipulation, tool-path simulation, and NC-code generation was implemented. In order to achieve real-time control and safety, axis control, NC interpretation, interpolation, and user communication are completely executed within every interrupt interval of 1 msec.
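The interpolation step described above can be illustrated offline: a straight (G01) move is split into the sequence of per-interrupt position commands that a 1 msec interrupt routine would output. This is a sketch under assumed details (two axes, constant feed rate, no acceleration ramp), not the paper's implementation.

```python
import math

def interpolate_linear(start, end, feed_mm_s, tick_s=0.001):
    """Split a G01 move into per-tick (1 ms) position commands, as a
    PC-NC interrupt routine would; constant feed, no accel ramp."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    steps = max(1, int(round(length / (feed_mm_s * tick_s))))
    return [(start[0] + dx * i / steps, start[1] + dy * i / steps)
            for i in range(1, steps + 1)]

# 10 mm move at 100 mm/s -> 100 ticks of 1 ms each
pts = interpolate_linear((0.0, 0.0), (10.0, 0.0), feed_mm_s=100.0)
print(len(pts), pts[-1])  # 100 (10.0, 0.0)
```

In the real system each tick's command would be written to the DA converter inside the interrupt handler rather than collected into a list.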

  • PDF

Code Development of Automatic Mesh Generation for Finite Element Method Using Delaunay Triangulation Method (Delaunay 삼각화에 의한 유한요소 자동 생성 코드 개발에 관한 연구)

  • Park Pyong-Ho;Sah Jong-Youb
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference
    • /
    • 1996.05a
    • /
    • pp.111-117
    • /
    • 1996
  • The Delaunay triangulation technique was tested on complicated shapes of computational domain. While a geometry that is simple both in topology and in shape was discretized well into triangular elements, a complex geometry often failed to triangulate. A complex geometry should be divided into smaller sub-domains whose shapes are simple both topologically and geometrically. The present study developed data structures not only for the relationships among neighboring elements but also for shape information, and coupled these into the Delaunay triangulation technique. This approach greatly enhanced the reliability of triangulation, especially for complicated shapes of computational domains. GUI (Graphic User Interface) and OOP (Object-Oriented Programming) techniques were used in order to develop a user-friendly and efficient computer code.
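The core triangulation step is available today as a library call; the snippet below shows it on the simplest domain (a unit square, which splits into two triangles). It illustrates only the Delaunay step itself, not the paper's sub-domain decomposition or neighbor data structures.

```python
import numpy as np
from scipy.spatial import Delaunay

# Four corners of a unit square; Delaunay triangulation (via Qhull)
# splits the convex domain into two triangles.
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tri = Delaunay(points)
print(len(tri.simplices))  # 2 triangles
```

For non-convex domains this convex-hull-based routine would also generate triangles outside the boundary, which is exactly why the paper adds shape information and sub-domain splitting on top of the basic algorithm.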

  • PDF

OWC based Smart TV Remote Controller Design Using Flashlight

  • Mariappan, Vinayagam;Lee, Minwoo;Choi, Byunghoon;Kim, Jooseok;Lee, Jisung;Choi, Seongjhin
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.10 no.1
    • /
    • pp.71-76
    • /
    • 2018
  • The technological convergence of television, communication, and computing devices enables a rich social and entertainment experience through Smart TV in the personal living space. The powerful Smart TV computing platform can provide various user interaction interfaces such as IR remote control, web-based control, and body-gesture-based control. The user control methods presently used for Smart TV interaction are not efficient or user-friendly for accessing different types of media content and services, so an advanced way to control and access the Smart TV through an easy user interface is strongly required. This paper proposes an optical wireless communication (OWC) based remote controller design for Smart TV using the smart device flashlight. In this approach, the user's smart device acts as a remote controller with a touch-based interactive application and transfers the user control data to the Smart TV through the flashlight using a visible light communication method. The Smart TV's built-in camera follows the optical camera communication (OCC) principle to decode the data and drive the Smart TV control functions accordingly. This proposed method is not harmful to human health, as radio frequency (RF) radiation can be, and it is very simple to use, as the user does not need any gesture movements to control the Smart TV.
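A flashlight-to-camera link like the one described is typically an on-off-keying (OOK) scheme: each bit maps to one light-on/light-off camera frame. The sketch below assumes a simple frame layout (a fixed preamble followed by 8-bit payload bytes); the paper's actual framing and modulation details are not specified here.

```python
def encode_ook(payload: bytes, preamble="1010"):
    """Map a payload to flashlight frames: '1' = light on, '0' = off.
    The preamble lets the camera receiver find the frame start."""
    bits = preamble + "".join(f"{b:08b}" for b in payload)
    return [c == "1" for c in bits]

def decode_ook(frames, preamble_len=4):
    """Inverse step as the TV's camera would perform after thresholding
    each captured frame to a light-on/light-off decision."""
    bits = "".join("1" if f else "0" for f in frames)[preamble_len:]
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

frames = encode_ook(b"OK")
print(decode_ook(frames))  # b'OK'
```

In practice the achievable bit rate is bounded by the camera's frame rate, which is why such links suit short control commands rather than bulk data.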

Visual Touchless User Interface for Window Manipulation (윈도우 제어를 위한 시각적 비접촉 사용자 인터페이스)

  • Kim, Jin-Woo;Jung, Kyung-Boo;Jeong, Seung-Do;Choi, Byung-Uk
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.6
    • /
    • pp.471-478
    • /
    • 2009
  • Recently, research on user interfaces has advanced remarkably due to the explosive growth of 3-dimensional contents and applications and the widening range of computer users. This paper proposes a novel method to manipulate windows efficiently using only intuitive hand motions. Previous methods have drawbacks such as the burden of expensive devices, the high complexity of gesture recognition, and the need for additional information such as markers. To overcome these defects, we propose a novel visual touchless interface. First, we detect the hand region using the hue channel in HSV color space. The distance transform method is applied to detect the centroid of the hand, and the curvature of the hand contour is used to determine the positions of the fingertips. Finally, using the hand motion information, we recognize the hand gesture as one of seven predefined motions. The recognized hand gesture becomes a command to control a window. In the proposed method, the user can manipulate windows with a sense of depth in the real environment because the method adopts a stereo camera. Intuitive manipulation is also available because the proposed method supports visual touch of the virtual object the user wants to manipulate, using only simple hand motions. Finally, the efficiency of the proposed method is verified via an application based on the proposed interface.
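The hue-threshold and distance-transform steps can be sketched as follows. The hue band is an assumed skin-tone range, and a synthetic hue image stands in for a camera frame; the paper's contour-curvature fingertip step is omitted.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hand_center(hue, lo=0, hi=20):
    """Threshold the hue channel to a skin mask (assumed band lo..hi),
    then take the peak of the distance transform as the palm center:
    the pixel farthest from any background pixel."""
    mask = (hue >= lo) & (hue <= hi)
    dist = distance_transform_edt(mask)
    return np.unravel_index(np.argmax(dist), dist.shape)

# synthetic hue image: a 5x5 "hand" blob of hue 10 on background hue 90
hue = np.full((11, 11), 90)
hue[3:8, 3:8] = 10
print(hand_center(hue))  # (5, 5), the blob center
```

Using the distance-transform peak rather than the plain mask centroid makes the palm estimate robust to extended fingers, which pull a centroid off-center but barely affect the interior distance peak.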

A STUDY ON THE DEVELOPMENT OF ONE-DIMENSIONAL GUI PROGRAM FOR MICROFLUIDIC-NETWORK DESIGN (마이크로 유동 네트워크 설계를 위한 1차원 GUI 프로그램 개발에 관한 연구)

  • Park, I.H.;Kang, S.;Suh, Y.K.
    • Journal of computational fluids engineering
    • /
    • v.14 no.4
    • /
    • pp.86-92
    • /
    • 2009
  • Nowadays, the development of microfluidic chips [i.e., biochips, micro total analysis systems ($\mu$-TAS), and lab-on-a-chip (LOC) devices] is becoming more active, and the microchannels that deliver fluid by pressure or electroosmotic forces tend to be as complex as electronic circuits or networks. For a simple network of channels, we may calculate the pressure and the flow rate easily by using a suitable formula. However, for a complex network it is not handy to obtain such information in that simple way. For this reason, a Graphic User Interface (GUI) program which can rapidly give the required information is necessary for microchip designers. In this paper, we present a GUI program developed in our laboratory and the simple theoretical formula used in the program. We applied our program to a simple case and obtained results that compare well with other numerical results. Further, we applied our program to several complex cases and obtained reasonable results.
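The 1-D network formulation behind such a tool is the hydraulic analogue of circuit analysis: each channel has a resistance R (for pressure-driven laminar flow in a circular channel, the Hagen-Poiseuille value 128 μL/(π d⁴)), and flow conservation at every free node yields a linear system for the nodal pressures. This is a generic sketch of that formulation, not the paper's own code.

```python
import numpy as np

def solve_network(n_nodes, channels, fixed_p):
    """Nodal analysis of a channel network: channels is a list of
    (node_i, node_j, R); flow conservation at each free node gives
    G @ p = q, with known pressures imposed directly."""
    G = np.zeros((n_nodes, n_nodes))
    q = np.zeros(n_nodes)
    for i, j, R in channels:
        g = 1.0 / R                      # hydraulic conductance
        G[i, i] += g; G[j, j] += g
        G[i, j] -= g; G[j, i] -= g
    for node, p in fixed_p.items():      # impose boundary pressures
        G[node, :] = 0.0
        G[node, node] = 1.0
        q[node] = p
    return np.linalg.solve(G, q)

# two equal channels in series: inlet node 0 at p=2, outlet node 2 at p=0
p = solve_network(3, [(0, 1, 1.0), (1, 2, 1.0)], {0: 2.0, 2: 0.0})
print(p)  # the middle node sits at p = 1.0
```

Once the pressures are known, each channel's flow rate follows as (p_i − p_j)/R, which is presumably the kind of quantity such a GUI reports per channel.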

Development of an Interface for Data Visualization and Controlling of Classified Objects based on User Conditions (사용 상황에 맞게 분류된 사물의 데이터 시각화와 제어를 위한 인터페이스 개발)

  • Park, Heesung;Han, Minseok;Choi, Yuri
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.7
    • /
    • pp.320-325
    • /
    • 2016
  • With the development of IoT (Internet of Things) technology, devices for the smart home environment have rapidly increased. On mobile devices, applications are used to control and manage the various smart devices effectively. However, the existing mechanisms only provide simple information, so it remains difficult to search for or control smart devices because there is no meaningful relationship between them. In this research, we suggest an interface which visualizes the devices' data and controls them effectively based on the user's device-usage pattern. We classify the usage pattern on a timeline for the associated circumstance, and visualize the devices' data so that they can be grouped or controlled individually in an easier way. Also, all meaningful information can be confirmed by summarizing the data of all smart devices.

Interface for in-situ Authoring of Augmented Reality Contents on a Mobile Device Environment (모바일 환경 증강현실 콘텐츠 현장 저작 인터페이스)

  • Lee, Jeong-Gyu;Lee, Jong-Weon
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.7
    • /
    • pp.1-9
    • /
    • 2010
  • This paper introduces the difference between mobile Augmented Reality (AR) authoring and desktop AR authoring, and suggests an interface system that can be applied to in-situ authoring of AR contents on a mobile device. Mobile devices have enabled users to use AR systems anytime and anywhere, and it is now necessary to create a user's individualized context information on the spot using mobile devices. To author AR contents easily with mobile devices, we need to maximize the convenience of mobile systems, which still have limitations. To do so, this paper suggests new interaction approaches that manage augmented contents using visual cues and other simple attribute settings. In addition, to solve the problem that users have to hold their mobile devices to track markers while authoring contents, the system enables users to author contents in an environment based on captured images. The interface system can also provide a cooperative environment in which more than one user authors contents. This paper verifies the usefulness of the proposed interface through user tests; an analysis of users' comments shows that the proposed interface is suitable for an in-situ mobile authoring system.

Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho;Kim, Yoon-Jae;Kim, Tae-Young
    • Journal of Korea Game Society
    • /
    • v.17 no.6
    • /
    • pp.113-120
    • /
    • 2017
  • With the development of virtual reality technology, user-friendly hand gesture interfaces have recently been studied more for natural interaction with virtual 3D objects. Most earlier studies on hand-gesture interfaces use relatively simple hand gestures. In this paper, we suggest an intuitive hand gesture interface for interaction with 3D objects in virtual reality applications. For hand gesture recognition, we first preprocess various hand data and classify the data through a binary decision tree. The classified data are re-sampled and converted to a chain code, and then the hand feature data are constructed from histograms of the chain code. Finally, the input gesture is recognized from the feature data by MCSVM-based machine learning. To test our proposed hand gesture interface we implemented a 'Virtual Block' game. Our experiments showed a recognition rate of about 99.2% for 16 kinds of command gestures, and the interface proved more intuitive and user-friendly than a conventional mouse interface.
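The chain-code feature step can be illustrated in isolation: a contour is encoded as 8-direction Freeman codes between successive points, and the histogram of those codes serves as a rotation-sensitive shape feature. This is a minimal stand-in for that stage only; the re-sampling, decision tree, and MCSVM stages are not reproduced.

```python
# 8-direction Freeman chain code: each entry maps a unit step (dx, dy)
# between consecutive contour points to a direction 0..7.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Chain code of a contour given as unit-step (x, y) points."""
    return [DIRS[(q[0] - p[0], q[1] - p[1])]
            for p, q in zip(points, points[1:])]

def histogram(code):
    """Count occurrences of each of the 8 directions."""
    h = [0] * 8
    for c in code:
        h[c] += 1
    return h

# a unit square traced counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
code = chain_code(square)
print(code, histogram(code))  # [0, 2, 4, 6] [1, 0, 1, 0, 1, 0, 1, 0]
```

The histogram discards the order of the steps, which keeps the feature compact; the paper's re-sampling step would normalize contour length before this encoding.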

Touch TT: Scene Text Extractor Using Touchscreen Interface

  • Jung, Je-Hyun;Lee, Seong-Hun;Cho, Min-Su;Kim, Jin-Hyung
    • ETRI Journal
    • /
    • v.33 no.1
    • /
    • pp.78-88
    • /
    • 2011
  • In this paper, we present the Touch Text exTractor (Touch TT), an interactive text segmentation tool for the extraction of scene text from camera-based images. Touch TT provides a natural interface in which a user simply indicates the location of text regions with a touch line. Touch TT then automatically estimates the text color and roughly locates the text regions. By inferring text characteristics from the estimated text color and text region, Touch TT can extract text components. Touch TT can also handle partially drawn lines which cover only a small section of a text area. The proposed system achieves reasonable accuracy for text extraction on moderately difficult examples from the ICDAR 2003 database and our own database.
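The color-estimation step lends itself to a simple sketch: sample the pixels under the user's touch line and take a robust statistic (here, the per-channel median) as the text-color estimate. The median choice and the pixel layout are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def estimate_text_color(image, touchline):
    """Estimate text color from the pixels under a touch line:
    touchline is a list of (x, y) points; the per-channel median
    makes the estimate robust to a few stray background pixels."""
    samples = np.array([image[y, x] for x, y in touchline])
    return np.median(samples, axis=0)

# 4x4 RGB image with red text pixels along row 1
img = np.zeros((4, 4, 3), dtype=float)
img[1, :, 0] = 200.0
line = [(0, 1), (1, 1), (2, 1), (3, 1)]
print(estimate_text_color(img, line))  # [200. 0. 0.]
```

With the color estimate in hand, the region-location step would then threshold the whole image for pixels close to this color and grow the text region from the touch line.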