The Paths Perspective on Value Learning
Temporal difference (TD) learning can be viewed as expanding the agent's experience by merging trajectories: wherever two trajectories pass through the same state, TD effectively stitches their segments together into new, simulated trajectories.
As a result, TD averages over more trajectories than Monte Carlo methods: there are never fewer merged (simulated) trajectories than real ones. This is why TD learning tends to outperform Monte Carlo in tabular environments.
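A rough sketch of the two update rules on a single tabular episode (the states, rewards, and step size below are made up for illustration, not taken from the article): Monte Carlo moves each visited state toward the full return of that one trajectory, while TD(0) moves it toward r + gamma * V(s'), where V(s') already summarizes every trajectory that has ever passed through s'.

```python
import numpy as np

gamma, alpha = 1.0, 0.1
V_mc = np.zeros(4)   # Monte Carlo value table
V_td = np.zeros(4)   # TD(0) value table

# One episode: (state, reward received on leaving that state); terminates after state 2.
episode = [(0, 0.0), (1, 0.0), (2, 1.0)]

# Monte Carlo: each state is updated toward the full return that
# followed it in this specific trajectory.
G = 0.0
for s, r in reversed(episode):
    G = r + gamma * G
    V_mc[s] += alpha * (G - V_mc[s])

# TD(0): each state is updated toward r + gamma * V(s'), so the value of s'
# already averages over every trajectory that passed through s' --
# this is the "merging trajectories" view of TD.
for i, (s, r) in enumerate(episode):
    s_next = episode[i + 1][0] if i + 1 < len(episode) else None
    v_next = 0.0 if s_next is None else V_td[s_next]
    V_td[s] += alpha * (r + gamma * v_next - V_td[s])

print("MC values:", V_mc)
print("TD values:", V_td)
```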
With function approximators, however, both Monte Carlo and TD can make bad value updates, because an update to one state bleeds into the estimates of nearby states. TD is more vulnerable to this: since it bootstraps off those estimates, the errors can compound as they propagate back to earlier states.
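A toy sketch of how this can happen with a linear approximator (the overlapping features, rewards, and step size are assumptions made for illustration): updating one state also shifts the estimate of a state that shares features with it, and a later TD update then bootstraps on that shifted estimate, whereas a Monte Carlo target would not bootstrap and would keep the error local to the states actually visited.

```python
import numpy as np

gamma, alpha = 1.0, 0.1
w = np.zeros(2)  # weights of a linear value function v(s) = features[s] . w

# Hand-picked overlapping features: states 0 and 1 share components,
# so updating state 1 also changes the value estimate of state 0.
features = {0: np.array([1.0, 0.5]),
            1: np.array([0.5, 1.0])}

def v(s):
    return float(features[s] @ w)

def td_update(s, r, s_next):
    """TD(0) update for one transition; s_next=None means terminal."""
    global w
    target = r + (gamma * v(s_next) if s_next is not None else 0.0)
    w += alpha * (target - v(s)) * features[s]

td_update(1, 1.0, None)                            # state 1 receives reward 1
print("after updating state 1:", v(0), v(1))       # v(0) moved too (generalization)
td_update(0, 0.0, 1)                               # bootstrapping on v(1) pulls v(0) further
print("after bootstrapping:   ", v(0), v(1))
```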
A great article explaining why TD learning can be beneficial and when it can be harmful.