• Title/Summary/Keyword: Mixed-Model Line Balancing


A Study on Throughput Increase in Semiconductor Package Process of K Manufacturing Company Using a Simulation Model (시뮬레이션 모델을 이용한 K회사 반도체 패키지 공정의 생산량 증가를 위한 연구)

  • Chai, Jong-In;Park, Yang-Byung
    • Journal of the Korea Society for Simulation / v.19 no.1 / pp.1-11 / 2010
  • K company produces semiconductor package products under a make-to-order policy for domestic and foreign semiconductor manufacturing companies. Its production process is a machine-paced assembly line consisting of die sawing, assembly, and test. Based on a process analysis of K company, this paper suggests three plans to increase process throughput and evaluates them via a simulation model built with real data collected from the process. The three plans are line balancing by adding machines to the bottleneck process, product group scheduling, and reallocation of operators in non-bottleneck processes. The evaluation shows the highest daily throughput increase, 17.3%, together with a 2.8% reduction in due-date violations, when the three plans are applied together. The payback period for the combined application of the three plans is 1.37 years.
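
A minimal discrete-event sketch of this kind of evaluation, assuming Python with SimPy. The three-stage line (die sawing, assembly, test), the processing times, interarrival rate, and machine counts below are illustrative placeholders, not the paper's actual data or model; the sketch only shows how the "add a machine to the bottleneck" plan could be compared against a base configuration by daily throughput.

import random
import simpy

PROC_TIME = {"saw": 4.0, "assembly": 9.0, "test": 5.0}   # minutes per lot (assumed)
MACHINES = {"saw": 2, "assembly": 3, "test": 2}           # base machine counts (assumed)

def lot(env, stations, done):
    # Route one lot through the three stations in sequence.
    for stage in ("saw", "assembly", "test"):
        with stations[stage].request() as req:
            yield req
            # Exponential times stand in for the real time-study data.
            yield env.timeout(random.expovariate(1.0 / PROC_TIME[stage]))
    done.append(env.now)

def arrivals(env, stations, done, mean_interarrival=3.0):
    # Lots are released to the line at an assumed mean rate.
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_interarrival))
        env.process(lot(env, stations, done))

def daily_throughput(machines, minutes_per_day=1440, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    stations = {s: simpy.Resource(env, capacity=c) for s, c in machines.items()}
    done = []
    env.process(arrivals(env, stations, done))
    env.run(until=minutes_per_day)
    return len(done)

base = daily_throughput(MACHINES)
# "Line balancing by adding machines to the bottleneck": one extra assembly
# machine, then compare the resulting daily throughput against the base case.
plus_one = daily_throughput({**MACHINES, "assembly": MACHINES["assembly"] + 1})
print(f"base: {base} lots/day, +1 assembly machine: {plus_one} lots/day")

The same harness could be rerun with the other two plans (product group scheduling, operator reallocation) swapped in, which is the kind of scenario comparison the abstract describes.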

Implementation of the Agent using Universal On-line Q-learning by Balancing Exploration and Exploitation in Reinforcement Learning (강화 학습에서의 탐색과 이용의 균형을 통한 범용적 온라인 Q-학습이 적용된 에이전트의 구현)

  • 박찬건;양성봉
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.672-680 / 2003
  • A shopbot is a software agent whose goal is to maximize a buyer's satisfaction by automatically gathering price and quality information on goods and services from on-line sellers. In response to shopbots' activities, sellers on the Internet need agents, called pricebots, that help them maximize their own profits. In this paper we adopt Q-learning, one of the model-free reinforcement learning methods, as the price-setting algorithm for pricebots. A Q-learning agent increases profitability and eliminates cyclic price wars compared with agents using the myoptimal (myopically optimal) pricing strategy. Q-learning needs to select a sequence of state-action pairs for the Q-values to converge. When state-action pairs are selected uniformly at random, the number of accesses to the Q-table needed to obtain the optimal Q-values is quite large, so this method is not appropriate for universal on-line learning in a real-world environment. This phenomenon occurs because uniform random selection does not balance exploration against exploitation of the estimated optimal policy. In this paper, we propose a Mixed Nonstationary Policy (MNP), which combines an auxiliary Markov process with the original Markov process. MNP tries to keep a balance between exploration and exploitation in reinforcement learning. Our experimental results show that the Q-learning agent using MNP converges to the optimal Q-values about 2.6 times faster, on average, than uniform random selection.
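
A minimal tabular Q-learning sketch for a price-setting agent, assuming Python. This does not reproduce the paper's Mixed Nonstationary Policy (MNP); a decaying epsilon-greedy rule stands in purely to make the exploration/exploitation trade-off concrete, and the price grid, toy profit function, and rival behaviour are invented placeholders.

import random
from collections import defaultdict

PRICES = [0.6, 0.7, 0.8, 0.9, 1.0]            # candidate prices (assumed action set)
COST = 0.5                                     # unit cost (assumed)

def profit(my_price, rival_price):
    # Toy market: the cheaper seller wins the sale; a tie splits the demand.
    if my_price < rival_price:
        return my_price - COST
    if my_price == rival_price:
        return (my_price - COST) / 2
    return 0.0

def q_learning(steps=5000, alpha=0.1, gamma=0.9,
               eps=1.0, eps_end=0.05, eps_decay=0.999):
    Q = defaultdict(float)                     # Q[(state, action)]; state = rival's last price
    rival = random.choice(PRICES)
    state = rival
    for _ in range(steps):
        # Exploration vs. exploitation: a random price with probability eps,
        # otherwise the greedy price for the current state.
        if random.random() < eps:
            action = random.choice(PRICES)
        else:
            action = max(PRICES, key=lambda a: Q[(state, a)])
        reward = profit(action, rival)
        # Rival undercuts by one grid step; when it cannot, it resets to the
        # top of the grid (a crude stand-in for the opposing pricebot).
        lower = [p for p in PRICES if p < action]
        rival = max(lower) if lower else max(PRICES)
        next_state = rival
        best_next = max(Q[(next_state, a)] for a in PRICES)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
        eps = max(eps_end, eps * eps_decay)    # shift gradually from exploring to exploiting
    return Q

Q = q_learning()
for s in PRICES:                               # learned greedy price for each rival price
    print(s, max(PRICES, key=lambda a: Q[(s, a)]))

This sketch cannot show whether such a rule converges as quickly as the paper's MNP; it only illustrates the knob (random versus greedy action selection) that MNP is designed to tune.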