Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback.
In one of my data science research projects, I coordinated a new effort to select the best compound from a list of compounds using the interleaving approach, instead of the more common A/B testing.
In particular, I used team draft interleaving, which mimics how team selection happens for a friendly sports match: two captains take turns picking their preferred remaining player.
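As a rough sketch of the idea (not the exact code from my project), team draft interleaving can be implemented like this: each "captain" is one of the two rankings being compared, and they alternate picks, with a coin flip breaking ties.

```python
import random

def team_draft_interleave(ranking_a, ranking_b):
    """Build an interleaved list by letting rankers A and B take turns
    picking their best remaining item, like captains drafting teams."""
    interleaved, team_a, team_b, taken = [], set(), set(), set()
    total = len(set(ranking_a) | set(ranking_b))
    while len(interleaved) < total:
        # The captain with fewer picks goes next; a coin flip breaks ties.
        a_picks = len(team_a) < len(team_b) or (
            len(team_a) == len(team_b) and random.random() < 0.5
        )
        first, second = (ranking_a, team_a), (ranking_b, team_b)
        if not a_picks:
            first, second = second, first
        # The chosen captain picks its highest-ranked item not yet taken;
        # if its list is exhausted, the other captain picks instead.
        for ranking, team in (first, second):
            pick = next((item for item in ranking if item not in taken), None)
            if pick is not None:
                interleaved.append(pick)
                team.add(pick)
                taken.add(pick)
                break
    return interleaved, team_a, team_b
```

The returned `team_a` and `team_b` sets record which side contributed each item, so later feedback on an item can be credited to the ranking that picked it.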
You can find more about this in these links:
This approach can also be applied to artificial intelligence models in which perceptrons change their activation functions based on the input data, once a pattern in the data is recognized either by the perceptron itself or by the control part of the model. In the past, we used standard backpropagation of the error to improve the weights, but we found that perceptrons can sometimes change even before the learning process is complete. This learning-while-responding has been possible thanks to the interleaving approach, which we used to choose the best configuration for the model.
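To illustrate how interleaved feedback could drive such a choice between two configurations, here is a minimal sketch (the function name and the click-style feedback signal are my own illustration, not part of the original system): each positive signal is credited to the side that contributed the item, and the side with more credit wins.

```python
def interleaving_winner(team_a, team_b, feedback):
    """Credit each positive feedback item to the side that contributed it,
    then declare which configuration is preferred."""
    credit_a = sum(1 for item in feedback if item in team_a)
    credit_b = sum(1 for item in feedback if item in team_b)
    if credit_a > credit_b:
        return "A"
    if credit_b > credit_a:
        return "B"
    return "tie"
```

Repeating this over many interleaved sessions gives a running preference between the two configurations, which is what makes it possible to switch configurations before training has fully converged.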