Closed-loop Rescheduling using Deep Reinforcement Learning
Saved in:

| Main authors: | , |
|---|---|
| Format: | Conference paper abstract |
| Language: | English |
| Published: | 2019 |
| Subjects: | |
| Online access: | http://sedici.unlp.edu.ar/handle/10915/89513 |
| Contributed by: | |
| Summary: | In this work, a novel approach is presented for generating rescheduling knowledge that can be used in real time to handle unforeseen events without extra deliberation. To generate such control knowledge, the rescheduling task is modelled and solved as a closed-loop control problem by integrating a schedule-state simulator with a rescheduling agent that learns successful schedule-repair policies directly from a variety of simulated transitions between schedule states, using readily available color-rich Gantt chart images as input and negligible prior knowledge. The generated knowledge is stored in a deep Q-network, which can then be used as a computational tool for closed-loop rescheduling control: it selects repair actions that make progress towards a goal schedule state, without having to solve the rescheduling problem from scratch every time a disruptive event occurs, and safely generalizes control knowledge to unseen schedule states. |
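The closed-loop repair idea described in the summary — an agent that repeatedly selects a repair action until a goal schedule state is reached, instead of re-solving the scheduling problem after each disruption — can be sketched with a toy example. The paper trains a deep Q-network on Gantt chart images; the sketch below substitutes a tabular Q-learning agent over a hypothetical one-dimensional "tardiness level" state and made-up repair operators, so every name, action, and effect here is an illustrative assumption, not the authors' implementation.

```python
import random

# Toy stand-in for the paper's setup. The state abstracts a disrupted
# schedule's condition as an integer "tardiness level" (0 = goal state);
# the actions are made-up repair operators with fixed, assumed effects.
# The paper itself learns a deep Q-network from Gantt chart images;
# this tabular sketch only illustrates the closed-loop repair idea.

ACTIONS = ["swap_jobs", "move_left", "move_right"]          # hypothetical repair operators
EFFECT = {"swap_jobs": 1, "move_left": 3, "move_right": 0}  # assumed tardiness reduction
GOAL = 0  # goal schedule state: no tardiness left

def step(state, action):
    """Simulated transition between schedule states (toy simulator)."""
    next_state = max(GOAL, state - EFFECT[action])
    reward = 10.0 if next_state == GOAL else -1.0  # penalize long repair paths
    return next_state, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn Q-values from simulated transitions (epsilon-greedy Q-learning)."""
    rng = random.Random(seed)
    q = {}  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        state = rng.randint(1, 10)  # a randomly disrupted schedule
        while state != GOAL:
            if rng.random() < eps:                     # explore
                action = rng.choice(ACTIONS)
            else:                                      # exploit
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def repair_policy(q, state):
    """Closed-loop use: pick a repair action by lookup, without re-solving."""
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
```

Once trained, `repair_policy` plays the role the summary assigns to the stored network: each time a disruption yields a new schedule state, a single lookup (a forward pass, in the deep Q-network case) selects the next repair action, rather than computing a rescheduling solution from scratch.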