IMPALA

class IMPALA(model, sample_batch_steps=None, gamma=None, vf_loss_coeff=None, clip_rho_threshold=None, clip_pg_rho_threshold=None)[source]

Bases: parl.core.fluid.algorithm.Algorithm

__init__(model, sample_batch_steps=None, gamma=None, vf_loss_coeff=None, clip_rho_threshold=None, clip_pg_rho_threshold=None)[source]

IMPALA (Importance Weighted Actor-Learner Architecture) algorithm.

Parameters
  • model (parl.Model) – forward network that outputs the policy logits and the value estimate.

  • sample_batch_steps (int) – number of steps sampled from each environment per batch.

  • gamma (float) – discount factor for reward computation.

  • vf_loss_coeff (float) – coefficient of the value function loss.

  • clip_rho_threshold (float) – clipping threshold for importance weights (rho).

  • clip_pg_rho_threshold (float) – clipping threshold on rho_s in the policy-gradient term rho_s grad log pi(a_s|x_s) (r_s + gamma v_{s+1} - V(x_s)).
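The two clipping thresholds above come from the V-trace off-policy correction that IMPALA is built on. The sketch below shows how they might enter the target computation for a single trajectory; it is a simplified, unbatched NumPy illustration of V-trace (with c-bar fixed to 1 and episode terminations ignored), not PARL's actual implementation:

```python
import numpy as np

def vtrace_targets(behaviour_log_probs, target_log_probs, rewards, values,
                   bootstrap_value, gamma=0.99,
                   clip_rho_threshold=1.0, clip_pg_rho_threshold=1.0):
    """V-trace value targets and policy-gradient advantages for one [T] trajectory."""
    # Importance weights between target and behaviour policy.
    rhos = np.exp(target_log_probs - behaviour_log_probs)
    clipped_rhos = np.minimum(clip_rho_threshold, rhos)   # rho-bar clipping
    cs = np.minimum(1.0, rhos)                            # c-bar = 1 here

    values_tp1 = np.concatenate([values[1:], [bootstrap_value]])
    deltas = clipped_rhos * (rewards + gamma * values_tp1 - values)

    # Backward recursion: v_s - V(x_s) = delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1}))
    vs_minus_v = np.zeros_like(values)
    acc = 0.0
    for t in reversed(range(len(values))):
        acc = deltas[t] + gamma * cs[t] * acc
        vs_minus_v[t] = acc
    vs = vs_minus_v + values

    # The policy gradient uses the separately clipped rho (clip_pg_rho_threshold):
    # pg_rho_s * (r_s + gamma * v_{s+1} - V(x_s))
    vs_tp1 = np.concatenate([vs[1:], [bootstrap_value]])
    pg_rhos = np.minimum(clip_pg_rho_threshold, rhos)
    pg_advantages = pg_rhos * (rewards + gamma * vs_tp1 - values)
    return vs, pg_advantages
```

In the on-policy case (rhos all equal to 1) the targets reduce to ordinary n-step bootstrapped returns, so the clipping only takes effect when the actors' behaviour policy lags behind the learner.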

learn(obs, actions, behaviour_logits, rewards, dones, learning_rate, entropy_coeff)[source]
Parameters
  • obs – A float32 tensor of shape ([B] + observation_space), e.g. [B, C, H, W] for Atari.

  • actions – An int64 tensor of shape [B].

  • behaviour_logits – A float32 tensor of shape [B, NUM_ACTIONS].

  • rewards – A float32 tensor of shape [B].

  • dones – A float32 tensor of shape [B].

  • learning_rate – a float scalar, the learning rate.

  • entropy_coeff – a float scalar, the entropy coefficient.
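As a sketch of the expected input layout, the arrays below match the documented shapes and dtypes of learn's tensor arguments; the batch size, image dimensions, and action count are hypothetical Atari values, not requirements of the API:

```python
import numpy as np

B, C, H, W = 32, 4, 84, 84   # hypothetical batch of stacked Atari frames
NUM_ACTIONS = 6              # hypothetical action-space size

obs = np.random.rand(B, C, H, W).astype(np.float32)        # [B] + observation_space
actions = np.random.randint(0, NUM_ACTIONS, size=(B,)).astype(np.int64)  # [B]
behaviour_logits = np.random.randn(B, NUM_ACTIONS).astype(np.float32)    # [B, NUM_ACTIONS]
rewards = np.random.randn(B).astype(np.float32)            # [B]
dones = np.zeros(B, dtype=np.float32)                      # [B], 1.0 marks episode ends
```

These arrays (fed through the framework's tensor layer) would then be passed to learn together with the scalar learning_rate and entropy_coeff.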

predict(obs)[source]
Parameters

obs – A float32 tensor of shape ([B] + observation_space), e.g. [B, C, H, W] for Atari.

sample(obs)[source]
Parameters

obs – A float32 tensor of shape ([B] + observation_space), e.g. [B, C, H, W] for Atari.
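The usual distinction between these two methods (assumed here, following the common actor-critic convention) is that predict picks the greedy action from the policy logits while sample draws stochastically from the softmax distribution. A minimal NumPy sketch of the two behaviours:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
logits = np.array([[2.0, 0.0, -1.0]])   # [B, NUM_ACTIONS], B = 1
probs = softmax(logits)

greedy = probs.argmax(axis=-1)          # predict-style: deterministic argmax
sampled = np.array([rng.choice(len(p), p=p) for p in probs])  # sample-style draw
```

During training, actors would use the sample-style behaviour to keep exploring, while evaluation typically uses the predict-style greedy action.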