[1910.10944] Preference-Based Batch and Sequential Teaching: Towards a Unified View of Models
Abstract: Algorithmic machine teaching studies the interaction between a teacher and a
learner where the teacher selects labeled examples aiming at teaching a target
hypothesis. In a quest to lower teaching complexity and to achieve more natural
teacher-learner interactions, several teaching models and complexity measures
have been proposed for both the batch setting (e.g., worst-case, recursive,
preference-based, and non-clashing models) and the sequential setting
(e.g., the local preference-based model). To better understand the connections
between these different batch and sequential models, we develop a novel
framework which captures the teaching process via preference functions
$\Sigma$. In our framework, each function $\sigma \in \Sigma$ induces a
teacher-learner pair with teaching complexity $\TD(\sigma)$. We show that
the above-mentioned teaching models are equivalent to specific types/families
of preference functions in our framework. This equivalence, in turn, allows us
to study the differences between two important teaching models, namely $\sigma$
functions inducing the strongest batch (i.e., non-clashing) model and $\sigma$
functions inducing a weak sequential (i.e., local preference-based) model.
Finally, we identify preference functions inducing a novel family of sequential
models with teaching complexity linear in the VC dimension of the hypothesis
class: this is in contrast to the best known complexity result for the batch
models, which is quadratic in the VC dimension.

Figure 2: Batch models. (Introduction)

Figure 3: Recursive procedure for constructing $\sigma^{lvs}$ achieving $\TD_{X,H,h_0}(\sigma^{lvs}) \le \mathrm{VCD}(H, X)$.

Figure 4: Illustration of Lemma ?? on the Warmuth class. The grouped hypotheses in the leaf clusters correspond to the sets $H^y_x$ created in Line ?? of Algorithm ??. (Families of Preference Functions)

Figure 6: Teaching sequences for the first four direct children of $h_0$ (the four most preferred hypotheses of $h_0$ after $h_0$ itself) and all of their children.

Figure 7: Teaching sequences for the next three direct children of $h_0$ (the next three most preferred hypotheses of $h_0$) and all of their children.

Figure 8: Details of the teaching sequences for a preference function $\sigma \in \Sigma_{local}$ with $\TD_{X,H,h_0}(\sigma) = 3$ on the powerset class with $k = 7$. For each hypothesis, the blue cell marks the last teaching example in its teaching sequence, and the red cells mark the rest of the sequence. See also Table ??, which lists the details for all hypotheses in the left branch of the tree. (Complexity Results)