Reward is the driving force for reinforcement learning (RL) agents. Given its central role in RL, reward is often assumed to be suitably general in its expressivity, as summarized by Sutton and Littman's reward hypothesis:

"...all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward)."
In our work, we take first steps toward a systematic study of this hypothesis. To do so, we consider the following thought experiment involving Alice, a designer, and Bob, a learning agent:
We suppose that Alice thinks of a task she might like Bob to learn to solve – this task could be in the form of a natural language description ("balance this pole"), an imagined state of affairs ("reach any of the winning configurations of a chess board"), or something more traditional like a reward or value function. Then, we imagine that Alice translates her choice of task into some generator that will provide a learning signal (such as reward) to Bob (a learning agent), who will learn from this signal throughout his lifetime. We then ground our study of the reward hypothesis by addressing the following question: given Alice's choice of task, is there always a reward function that can convey this task to Bob?
What’s a activity?
To make our study of this question concrete, we first restrict our focus to three kinds of task. Specifically, we introduce three task types that we believe capture sensible kinds of tasks: 1) A set of acceptable policies (SOAP), 2) A policy order (PO), and 3) A trajectory order (TO). These three forms of task represent concrete instances of the kinds of task we might want an agent to learn to solve.
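In symbols (our shorthand, writing $\Pi$ for the set of deterministic policies and $\mathcal{T}$ for the set of trajectories):

$$\text{SOAP: } \Pi_G \subseteq \Pi, \qquad \text{PO: a partial order } \succeq_{\Pi} \text{ on } \Pi, \qquad \text{TO: a partial order } \succeq_{\mathcal{T}} \text{ on } \mathcal{T}.$$

A SOAP deems some set of policies acceptable and all others not; a PO ranks policies against one another; a TO ranks the trajectories an agent might produce.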
We then study whether reward is capable of capturing each of these task types in finite environments. Crucially, we restrict attention to Markov reward functions; for instance, given a state space that is sufficient to form a task, such as (x, y) pairs in a grid world, is there a reward function that depends only on this same state space that can capture the task?
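To fix notation: by a Markov reward function we mean a mapping

$$r : S \times A \to \mathbb{R},$$

so the reward for a transition may depend only on the current state and action, never on the history that led there. (State-based $r(s)$ and transition-based $r(s, a, s')$ variants can be treated analogously.)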
First Main Result
Our first main result shows that for each of the three task types, there are environment–task pairs for which no Markov reward function can capture the task. One example of such a pair is the "go all the way around the grid clockwise or counterclockwise" task in a typical grid world:
This task is naturally captured by a SOAP that consists of two acceptable policies: the "clockwise" policy (in blue) and the "counterclockwise" policy (in purple). For a Markov reward function to express this task, it would need to make these two policies strictly greater in value than all other deterministic policies. However, there is no such Markov reward function: the optimality of a single "move clockwise" action depends on whether the agent was already moving in that direction in the past. Since the reward function must be Markov, it cannot convey this kind of information. Similar examples show that Markov reward cannot capture every policy order or trajectory order either.
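To make this failure concrete, here is a minimal, self-contained sketch (our own illustration, not code from the paper): a four-state cycle world with clockwise and counterclockwise actions, where the SOAP contains exactly the two loop policies. Since a policy's value is linear in the reward vector, the requirement "each acceptable policy has strictly higher start-state value than every other deterministic policy" becomes a linear program over r(s, a); the discount factor, start state, strictness gap, and reward bounds below are all our own choices. The program comes back infeasible, matching the result above.

# Brute-force feasibility check: is there a Markov reward function that
# makes the two loop policies of a 4-state cycle strictly better (at the
# start state) than every other deterministic policy?
import itertools
import numpy as np
from scipy.optimize import linprog

n, gamma, eps = 4, 0.9, 1.0   # states, discount, strictness gap (our choices)
CW, CCW = 0, 1                # action CW: s -> s+1 (mod n); CCW: s -> s-1

def step(s, a):
    return (s + 1) % n if a == CW else (s - 1) % n

def value_coeffs(policy):
    """Vector c with V^pi(s0=0) = c . r for the flattened reward r[s, a]
    (values are linear in reward: V^pi = (I - gamma * P_pi)^{-1} r_pi)."""
    P = np.zeros((n, n))
    for s, a in enumerate(policy):
        P[s, step(s, a)] = 1.0
    M = np.linalg.inv(np.eye(n) - gamma * P)   # discounted visitation weights
    c = np.zeros(2 * n)
    for s, a in enumerate(policy):
        c[2 * s + a] = M[0, s]                 # weight on reward entry (s, a)
    return c

good = [(CW,) * n, (CCW,) * n]                              # the SOAP
bad = [p for p in itertools.product([CW, CCW], repeat=n) if p not in good]

# One constraint per (good, bad) pair: c_bad . r - c_good . r <= -eps,
# i.e. V^good(s0) >= V^bad(s0) + eps.
A_ub = [value_coeffs(b) - value_coeffs(g) for g in good for b in bad]
b_ub = [-eps] * len(A_ub)

res = linprog(c=np.zeros(2 * n), A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(-100, 100)] * (2 * n), method="highs")
print("feasible:", res.success)   # prints "feasible: False" -- no such reward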
Second Main Result
Given that some tasks can be captured and some cannot, we next explore whether there is an efficient procedure for determining whether a given task can be captured by reward in a given environment. Further, if there is a reward function that captures the given task, we would ideally like to be able to output such a reward function. Our second result is a positive one: for any finite environment–task pair, there is a procedure that can 1) decide whether the task can be captured by Markov reward in the given environment, and 2) output the desired reward function that exactly conveys the task, when such a function exists.
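For intuition about what such a procedure can look like, consider the SOAP case (a sketch in our notation; the paper gives the general construction for all three task types). Because policy values are linear in the reward vector, deciding expressibility reduces to a linear-program feasibility question, exactly the one the code example above instantiates:

$$\text{find } r \;\text{ s.t. }\; V_r^{\pi_g}(s_0) \,\ge\, V_r^{\pi_b}(s_0) + \epsilon \quad \forall\, \pi_g \in \Pi_G,\; \pi_b \in \Pi \setminus \Pi_G, \qquad \text{where } V_r^{\pi} = (I - \gamma P_\pi)^{-1} r_\pi.$$

If the program is feasible, any solution r is a Markov reward function that conveys the task; if it is infeasible, no such reward function exists. (The naive version enumerates all deterministic policies; the paper shows the decision can be made efficiently.)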
This work establishes initial pathways toward understanding the scope of the reward hypothesis, but there is much still to be done to generalize these results beyond finite environments, Markov rewards, and simple notions of "task" and "expressivity". We hope this work provides new conceptual perspectives on reward and its place in reinforcement learning.