Today we're announcing MinDiff! This is the first technique in our TensorFlow Model Remediation library, aimed at tackling fairness concerns in machine learning models. See how it's used [goo.gle][twitter]
Is there any public database of financial transactions, or at least a synthetically generated dataset?[reddit]/r/datasets
Looking for financial transactions such as credit card payments, deposits, and withdrawals from banks or payment services. The fields needed most would be customer profile (age, gender, occupation, etc.) and transaction information (date, amount, location, detail ...).
[R] NeurIPS 2020 (Spotlight) Self-Supervised Relational Reasoning for Representation Learning[reddit]/r/MachineLearning
Hello everyone. I would like to share the paper and code of our latest work, entitled "Self-Supervised Relational Reasoning for Representation Learning", which has been accepted at NeurIPS 2020.
There are three key technical differences from contrastive methods like SimCLR: (i) the replacement of the projection head with a relation module, (ii) the use of a binary cross-entropy (BCE) loss instead of a contrastive loss, and (iii) the use of multiple augmentations instead of just two.
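To make differences (ii) and (iii) concrete, here is a minimal NumPy sketch of the pairing scheme and the BCE objective. This is an illustration, not the authors' code: the function names (`build_pairs`, `bce_loss`), the shift-by-one negative sampling, and the array layout are all my own assumptions; the real implementation (a learnable relation module on top of a backbone) lives in the linked GitHub repository.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_pairs(features):
    """Build labeled pairs for relational reasoning (illustrative sketch).

    features: array of shape [K, N, D] — K augmentations of N images,
    each embedded into D dimensions by a backbone (not shown here).

    Pairs of augmentations of the SAME image are positives
    (intra-reasoning); pairs across DIFFERENT images are negatives
    (inter-reasoning). Returns concatenated pair vectors and 0/1 targets.
    """
    K, N, D = features.shape
    pairs, targets = [], []
    for i in range(K):
        for j in range(i + 1, K):
            # Positive pairs: same image index, two different augmentations.
            pairs.append(np.concatenate([features[i], features[j]], axis=1))
            targets.append(np.ones(N))
            # Negative pairs: roll one view by one position so that the
            # image indices mismatch (a simple in-batch negative-sampling
            # choice made for this sketch).
            rolled = np.roll(features[j], 1, axis=0)
            pairs.append(np.concatenate([features[i], rolled], axis=1))
            targets.append(np.zeros(N))
    return np.concatenate(pairs), np.concatenate(targets)

def bce_loss(logits, targets):
    """Binary cross-entropy over relation scores (logits)."""
    p = sigmoid(logits)
    return -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))
```

In the full method the `logits` would come from a small relation module applied to each concatenated pair, and the BCE loss trains both the relation module and the backbone; using K > 2 augmentations simply makes the pair set above larger.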
In the GitHub repository we have also released some pretrained models, minimalistic code of the method, a step-by-step notebook, and code to reproduce the experiments.
Abstract: In self-supervised learning, a system is tasked with achieving a surrogate objective by defining alternative targets on a set of unlabeled data. The aim is to build useful representations that can be used in downstream tasks, without costly manual annotation. In this work, we propose a novel self-supervised formulation of relational reasoning that allows a learner to bootstrap a signal from information implicit in unlabeled data. Training a relation head to discriminate how entities relate to themselves (intra-reasoning) and to other entities (inter-reasoning) results in rich and descriptive representations in the underlying neural network backbone, which can be used in downstream tasks such as classification and image retrieval. We evaluate the proposed method following a rigorous experimental procedure, using standard datasets, protocols, and backbones. Self-supervised relational reasoning outperforms the best competitor in all conditions by an average of 14% in accuracy, and the most recent state-of-the-art model by 3%. We link the effectiveness of the method to the maximization of a Bernoulli log-likelihood, which can be considered a proxy for maximizing the mutual information, resulting in a more efficient objective than the commonly used contrastive losses.
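The Bernoulli log-likelihood mentioned at the end of the abstract is just the (negated) BCE objective written out; in my own notation (the symbols below are not taken from the paper), with relation score $r_{ij}$ for a pair $(i,j)$ and target $t_{ij} = 1$ when both elements are augmentations of the same image:

```latex
\mathcal{L} \;=\; \sum_{(i,j)} \Big[\, t_{ij} \log \sigma(r_{ij})
  \;+\; (1 - t_{ij}) \log \big(1 - \sigma(r_{ij})\big) \Big]
```

Maximizing $\mathcal{L}$ is exactly minimizing the BCE loss over the relation scores, which is what connects the training objective in (ii) above to the mutual-information argument.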