• Title/Summary/Keyword: tetris

Vision-based Interface for Tetris Game (테트리스 게임을 위한 비젼 기반의 인터페이스)

  • 김상호;장재식;김항준
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.775-777
    • /
    • 2004
  • This paper proposes a vision-based interface for the Tetris game. The proposed interface recognizes hand gestures in an image sequence captured in real time from a camera and uses the recognized gestures as game commands. The six commands needed for Tetris are defined as three static gestures, defined by the hand's posture, and three dynamic gestures, defined by the hand's posture and motion. A hand posture is represented by the invariant moments of the hand region, and an input posture is classified by comparing the distance between its invariant moments and previously trained moment values. Experimental results show that the proposed system is usable as a real-time interface for the Tetris game.

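The matching step in the abstract above (classify an input hand posture by its distance to previously trained invariant-moment vectors) can be sketched as a nearest-template search. The gesture names and moment values here are illustrative assumptions, not figures from the paper:

```python
import math

# Hypothetical trained templates: one invariant-moment vector per hand posture.
# The values are placeholders; a real system would learn them from examples.
TEMPLATES = {
    "rotate": [2.1, 5.3, 8.7, 9.0, 17.2, 11.9, 18.4],
    "left":   [2.4, 4.9, 9.5, 9.8, 19.0, 12.3, 19.1],
    "right":  [2.0, 6.1, 8.1, 8.8, 16.5, 11.1, 17.7],
}

def classify_posture(moments):
    """Classify a hand posture as the template with the smallest
    Euclidean distance to the input invariant-moment vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda name: dist(TEMPLATES[name], moments))

print(classify_posture([2.2, 5.2, 8.6, 9.1, 17.0, 12.0, 18.3]))  # nearest: "rotate"
```

In practice the moment vectors would come from the segmented hand region of each camera frame; only the distance-based classification is shown here.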
Simulation of Entropy Decrease in Puzzle Game Play (퍼즐 게임 플레이에 나타난 엔트로피 감소의 시뮬레이션)

  • Yun, Hye-Young
    • Journal of Korea Game Society
    • /
    • v.13 no.5
    • /
    • pp.19-30
    • /
    • 2013
  • This study analyzes the dynamics of puzzle game play by applying the law of entropy. Entropy is a quantitative measure of the amount of thermal energy not available to do work in a closed system, and the amount of entropy can be measured only if we view the closed system as a whole, as a field. A puzzle game is also a closed system. When the player moves an object in the game, it changes the relationships among the objects in the play field. In , through acts of position change, the player keeps the play field active; in terms of entropy, this kind of play can be regarded as a pursuit of the usability of energy. In , the player piles up objects without empty space; in terms of entropy, this kind of play can be regarded as a pursuit of order. Likewise, puzzle game play can be considered a simulation of the human pursuit of order in an entropy-increasing physical world, and this pursuit is a driving force of puzzle game play.

Real-Time Scheduling Scheme based on Reinforcement Learning Considering Minimizing Setup Cost (작업 준비비용 최소화를 고려한 강화학습 기반의 실시간 일정계획 수립기법)

  • Yoo, Woosik;Kim, Sungjae;Kim, Kwanho
    • The Journal of Society for e-Business Studies
    • /
    • v.25 no.2
    • /
    • pp.15-27
    • /
    • 2020
  • This study starts from the idea that the process of creating a Gantt chart for schedule planning is similar to a Tetris game with only straight pieces. In this Tetris-like game, the X axis is the M machines and the Y axis is time. It is assumed that every type of order can be processed without splitting on every machine, but that a setup cost is incurred whenever consecutive orders are of different types. The game described above was named Gantris and its game environment was implemented. The schedule produced in real time by an AI trained with deep reinforcement learning is compared with a schedule produced by a human playing the game. The comparative study covers two learning environments, a single order-list environment and a random order-list environment, and two systems: a four-machine, two-type system (4M2T) and a ten-machine, six-type system (10M6T). As the performance indicator of a generated schedule, a weighted sum of setup cost, makespan, and idle time over 100 processed orders was used. In the 4M2T system, regardless of the learning environment, the trained system generated schedules with a better performance index than the human experimenter. In the 10M6T system, the AI generated schedules with a better performance index than the experimenter in the single-list environment, but a worse index in the random-list environment. In the number of job changes, however, the trained system outperformed the experimenter in both 4M2T and 10M6T, demonstrating excellent scheduling performance.
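The Gantris rules and performance indicator described above can be sketched as follows. The abstract specifies a weighted sum of setup cost, makespan, and idle time, but not the weights themselves, so the weight values here are illustrative assumptions:

```python
# Hypothetical weights for the three indicators (lower score is better).
WEIGHTS = {"setup_cost": 1.0, "makespan": 0.5, "idle_time": 0.2}

def setup_count(machine_sequence):
    """Count setups on one machine: per the Gantris rules, a setup occurs
    whenever the next order's type differs from the previous one's."""
    return sum(1 for a, b in zip(machine_sequence, machine_sequence[1:]) if a != b)

def schedule_score(setup_cost, makespan, idle_time):
    """Performance indicator: weighted sum of the three measurements."""
    return (WEIGHTS["setup_cost"] * setup_cost
            + WEIGHTS["makespan"] * makespan
            + WEIGHTS["idle_time"] * idle_time)

# Comparing two schedules: the one with the lower weighted sum wins.
ai_score = schedule_score(setup_cost=12, makespan=240, idle_time=30)
human_score = schedule_score(setup_cost=20, makespan=250, idle_time=45)
print(ai_score < human_score)
```

The measurement values above are made up for the comparison; in the paper they would come from processing 100 orders in the game environment.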

Research and development of 2P Tetris game (2인 테트리스 코드)

  • Kim, Dong Sub;Kim, Seong Min;Kim, TaeHong;Lee, SangJin;Yu, JinSu;Han, Seok Whan;Gang, Yun-Jeong
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2017.07a
    • /
    • pp.307-308
    • /
    • 2017
  • This paper takes its idea from the Facebook Tetris game and aims to make it run on mobile as well. As in Facebook Tetris, all of the detailed features, such as Hold, the landing preview, and line attacks, were implemented. Hold stores the next block so that it can be taken out and used when needed. The preview shows the expected position a block will occupy when it reaches the floor. When a line is cleared, one garbage line rises from the opponent's floor. Ported with Android programming, it can also be used in a mobile app environment.

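The three mechanics named in the abstract above (Hold, landing preview, garbage lines) can be sketched minimally. The class layout, board size, and the one-gap garbage row are illustrative assumptions, not details from the paper:

```python
class TetrisPlayer:
    """Minimal sketch of one player's board with the features listed above."""

    def __init__(self, width=10, height=20):
        self.width, self.height = width, height
        self.board = [[0] * width for _ in range(height)]  # row 0 is the top
        self.held = None

    def hold(self, current_piece):
        """Hold: swap the current piece with the stored one.
        Returns the previously held piece (None on first use)."""
        self.held, current_piece = current_piece, self.held
        return current_piece

    def ghost_row(self, col):
        """Landing preview for a 1x1 block: the lowest empty row in a column."""
        row = -1
        while row + 1 < self.height and self.board[row + 1][col] == 0:
            row += 1
        return row

    def receive_garbage(self):
        """Line attack: when the opponent clears a line, a garbage row
        (here with a single gap) rises from this player's floor."""
        self.board.pop(0)
        self.board.append([1] * (self.width - 1) + [0])
```

A real implementation would track full tetromino shapes and rotation; this only shows how the three features interact with the board state.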
Design and Implementation of AI methodologies for Tetris Game using Genetic Algorithm (유전자 알고리즘을 이용한 테트리스 AI 기법의 설계 및 구현)

  • Park, Jong-Kir;Lee, Seong-Sil;Choi, Kyoung-Am;Choi, Jun-Hyeok;Kim, Jin-Il
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.805-807
    • /
    • 2017
  • We propose an AI technique that plays Tetris on its own using a genetic algorithm. Considering the elements relevant to Tetris, the position to which a block is moved is decided by the sum of each element multiplied by its weight. The algorithm uses eight such factors, and a genetic algorithm was applied to find the optimal weight for each factor. To analyze performance, we tested whether the game is played correctly on a Tetris implementation we designed and built ourselves. The experimental results confirmed that Tetris play proceeds according to the proposed technique.

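The placement rule in the abstract above (score each candidate position by a weighted sum of board features, with the weights evolved by a genetic algorithm) can be sketched as follows. The paper uses eight factors but does not list them in the abstract, so the four features and the weight values below are illustrative stand-ins:

```python
import random

def features(heights, holes, bumpiness, lines_cleared):
    """Board features for one candidate placement (illustrative subset)."""
    return [sum(heights), holes, bumpiness, lines_cleared]

def evaluate(weights, feats):
    """Score a candidate placement as the weighted sum of its features;
    the block is moved to the highest-scoring position."""
    return sum(w * f for w, f in zip(weights, feats))

def mutate(weights, rate=0.1):
    """One GA operator: jitter each weight with probability `rate`.
    Crossover and selection would complete the evolutionary loop."""
    return [w + random.uniform(-0.2, 0.2) if random.random() < rate else w
            for w in weights]

# Penalize aggregate height and holes, reward line clears (assumed signs).
weights = [-0.5, -0.7, -0.2, 0.8]
candidates = [features([3, 4, 3], 1, 2, 0), features([4, 4, 4], 0, 0, 1)]
best = max(candidates, key=lambda f: evaluate(weights, f))
```

The GA's fitness for a weight vector would typically be the number of lines cleared (or moves survived) when playing with those weights.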
Game Storytelling Analysed through Montage Technique Borrowed from Film - Case Study of Game 'World of Warcraft' - (영화의 몽타주 기법을 통해 분석해 본 게임 스토리텔링 - 게임 World of Warcraft를 중심으로 -)

  • Lee, Jun-Hee
    • Archives of design research
    • /
    • v.19 no.1 s.63
    • /
    • pp.119-128
    • /
    • 2006
  • 'Play' itself is enough of a motivation for anyone to engage in it. Still, it is very difficult to keep enjoying meaningless, aimless repetitions of fights, mimicry, chases, discoveries, or changing sceneries through the mechanism alone. Of course, on rare occasions there are games like Tetris that can be enjoyed for hours on the strength of gameplay alone. But in most cases, players need goals, initiative, and dynamism through storytelling for their experience to be a rich one. The validity and feasibility of storytelling in games has always been met with plenty of skepticism. However, as games evolved from small pastime mechanisms into a major entertainment medium of recognizable volume and content, the need to keep players interested and participating has made storytelling an essential ingredient. Storytelling within games necessarily has different meanings and forms from existing narratives; hence new definitions and methodologies must emerge, and studies have been active. If a case can be made that a tested and proven methodology successful in other media can be carried over to games, it could bring a new direction to these ongoing studies. This study borrows methodology from cinema, which can be seen sometimes as the opposite of games and sometimes as something games aspire to be.

Cloud Task Scheduling Based on Proximal Policy Optimization Algorithm for Lowering Energy Consumption of Data Center

  • Yang, Yongquan;He, Cuihua;Yin, Bo;Wei, Zhiqiang;Hong, Bowei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.6
    • /
    • pp.1877-1891
    • /
    • 2022
  • As a part of cloud computing technology, algorithms for cloud task scheduling have an important influence on cloud computing in data centers. In our earlier work, we proposed DeepEnergyJS, which was designed based on the original policy-gradient reinforcement learning algorithm, and verified its effectiveness through simulation experiments. In this study, we use the Proximal Policy Optimization (PPO) algorithm to update DeepEnergyJS to DeepEnergyJSV2.0. First, we verify the convergence of the PPO algorithm on the Alibaba Cluster Data V2018 dataset. Then we compare it with the policy-gradient reinforcement learning algorithm in terms of convergence rate, converged value, and stability. The results indicate that PPO performed better on the training and test data sets than the policy-gradient algorithm, as well as other general heuristic algorithms such as First Fit, Random, and Tetris. DeepEnergyJSV2.0 achieves better energy efficiency than DeepEnergyJS by about 7.814%.
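Of the heuristic baselines named in the abstract above, First Fit is the simplest to sketch: place each task on the first machine with enough remaining capacity. The single-dimension capacity model and values below are illustrative assumptions, not the paper's setup:

```python
def first_fit(tasks, machine_capacity, n_machines):
    """First Fit task placement: scan machines in order and assign each
    task to the first one whose remaining capacity covers its demand."""
    remaining = [machine_capacity] * n_machines
    placement = []
    for demand in tasks:
        for m, cap in enumerate(remaining):
            if cap >= demand:
                remaining[m] -= demand
                placement.append(m)
                break
        else:
            placement.append(None)  # no machine can host this task
    return placement

print(first_fit([4, 3, 5, 2], machine_capacity=8, n_machines=2))
```

The Tetris heuristic mentioned alongside it additionally matches multi-resource demands (e.g. CPU and memory) to a machine's remaining resource vector, which a one-dimensional sketch like this does not capture.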