publications
α-β indicates alphabetical author order.
2024
- Belief Samples Are All You Need For Social Learning. Mahyar JafariNodeh, Amir Ajorlou, and Ali Jadbabaie. 2024.
In this paper, we consider the problem of social learning, where a group of agents embedded in a social network are interested in learning an underlying state of the world. Agents have incomplete, noisy, and heterogeneous sources of information, providing them with recurring private observations of the underlying state. Agents can share their learning experience with their peers by taking actions observable to them, with values from a finite feasible set of states. Actions can be interpreted as samples from the beliefs that agents form and update about the true state of the world. Sharing samples, in place of full beliefs, is motivated by the limited communication, cognitive, and information-processing resources available to agents, especially in large populations. Previous work (Salhab et al.) poses the question of whether learning with probability one is still achievable if agents are only allowed to communicate samples from their beliefs. We provide a definite positive answer to this question, assuming a strongly connected network and a “collective distinguishability” assumption, both of which are required for learning even in full-belief-sharing settings. In our proposed belief update mechanism, each agent’s belief is a normalized weighted geometric interpolation between a fully Bayesian private belief, aggregating information from the private source, and an ensemble of empirical distributions of the samples shared by her neighbors over time. By carefully constructing asymptotic almost-sure lower and upper bounds on the frequencies of shared samples that match, or fail to match, the true state, we rigorously prove the convergence of all beliefs to the true state with probability one.
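The belief update described in the abstract can be illustrated with a short sketch. The code below is a hypothetical rendering under stated assumptions, not the paper's algorithm: the function name `update_belief`, the smoothing constant, and the way the private Bayesian belief is supplied are illustrative choices. It shows only the general form of a normalized weighted geometric interpolation between a private belief and the empirical distributions of samples shared by neighbors.

```python
import numpy as np

def update_belief(private_belief, neighbor_sample_counts, weights, eps=1e-9):
    """Illustrative sketch (not the paper's exact rule): combine a fully
    Bayesian private belief with the empirical distributions of samples
    shared by neighbors via a normalized weighted geometric interpolation.

    private_belief: length-K probability vector over the finite state set,
        maintained separately by Bayesian updating on private observations
    neighbor_sample_counts: list of length-K count vectors, one per neighbor,
        counting how often each state has been reported so far
    weights: nonnegative weights (one for the private belief, one per
        neighbor) that sum to one
    """
    components = [np.asarray(private_belief, dtype=float)]
    for counts in neighbor_sample_counts:
        counts = np.asarray(counts, dtype=float)
        # Smoothed empirical distribution of the neighbor's shared samples.
        emp = (counts + eps) / (counts + eps).sum()
        components.append(emp)

    # Weighted geometric interpolation in log space, then renormalize.
    log_belief = sum(w * np.log(np.clip(c, eps, 1.0))
                     for w, c in zip(weights, components))
    belief = np.exp(log_belief - log_belief.max())
    return belief / belief.sum()
```

For instance, with three states, a private belief of [0.5, 0.3, 0.2], and one neighbor who has so far reported state 0 seven times and state 1 three times, `update_belief([0.5, 0.3, 0.2], [np.array([7, 3, 0])], [0.6, 0.4])` returns a belief tilted further toward state 0.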
- Robust Semi-supervised Learning via f-Divergence and α-Rényi Divergence (α-β). Gholamali Aminian, Amirhossein Bagheri, Mahyar JafariNodeh, and 2 more authors. 2024.
This paper investigates a range of empirical risk functions and regularization methods suitable for self-training in semi-supervised learning. These approaches draw inspiration from divergence measures, namely f-divergences and α-Rényi divergences, and we use the theoretical foundations of these divergences to provide insights that deepen the understanding of our empirical risk functions and regularization techniques. Pseudo-labeling and entropy minimization, the self-training methods we consider, inherently introduce a mismatch between true labels and pseudo-labels (noisy pseudo-labels); several of our empirical risk functions are robust to such noisy pseudo-labels. Under some conditions, our empirical risk functions outperform traditional self-training methods.
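As one illustration of how a divergence-based risk on pseudo-labeled data might look, the sketch below is a hypothetical example, not the paper's formulation: the names `renyi_divergence` and `pseudo_label_risk`, and the choice of simply averaging over unlabeled samples, are assumptions. It evaluates an α-Rényi divergence between pseudo-label distributions and model predictions; as α approaches 1 it recovers the KL divergence, while other values of α change how strongly noisy pseudo-labels are penalized.

```python
import numpy as np

def renyi_divergence(p, q, alpha, eps=1e-12):
    """Rényi divergence of order alpha between discrete distributions p and q."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    if np.isclose(alpha, 1.0):
        # KL divergence is the alpha -> 1 limit of the Rényi divergence.
        return float(np.sum(p * np.log(p / q)))
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

def pseudo_label_risk(model_probs, pseudo_labels, alpha=0.5):
    """Hypothetical risk: average divergence between pseudo-label distributions
    and model predictions on unlabeled data.

    model_probs: (N, K) array of predicted class probabilities
    pseudo_labels: (N, K) array of one-hot or soft pseudo-label distributions
    """
    return float(np.mean([renyi_divergence(t, p, alpha)
                          for t, p in zip(pseudo_labels, model_probs)]))
```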