
In general, a large amount of data is required for a deep learning model to achieve high accuracy. "Transfer learning" is a very effective method for solving this problem: by reusing the feature-extraction part of a model that has already been trained on a large amount of data, good accuracy can be achieved even with a limited amount of data. Typically, the parameters of a model trained on ImageNet, a dataset of more than a million images across 1,000 classes (hereinafter the "pre-trained model"), are reused, and only the part that actually performs the prediction is replaced and retrained. Here, we assume that the data used to train the original pre-trained model is fixed (e.g., ImageNet above). This raises a natural question: which pre-trained model is best for my dataset? There are many candidates, for example ResNet, which mitigates the vanishing-gradient problem of deep models, and MobileNet, which is light enough to run on a CPU.
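To make the "replace only the prediction part" recipe concrete, here is a minimal PyTorch/torchvision sketch. The 10-class head and the learning rate are placeholder choices for illustration, not values from this article.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 pre-trained on ImageNet as the feature extractor.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the feature-extraction layers so their parameters are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer (the part that performs prediction)
# with a new head sized for the target task (10 classes, as an example).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are trained.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Training then proceeds as usual on the (small) target dataset, updating only the new head.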

One of the reasons deep learning models achieve such high accuracy is that they acquire a mechanism for extracting features that are very important for prediction. On the other hand, it is known that this feature-extraction mechanism is acquired by training on a large amount of data.

As we all know, deep learning models are used in many aspects of our daily lives because their prediction accuracy is far better than that of conventional methods. In particular, in the fields of image recognition, natural language processing, and speech recognition, deep learning models have achieved overwhelming accuracy over conventional approaches. The following paper proposes a way to answer the question of which pre-trained model to choose.

LEEP: A New Measure to Evaluate Transferability of Learned Representations
Cuong V. Nguyen, Tal Hassner, Matthias Seeger, Cedric Archambeau
Comments: Accepted to the International Conference on Machine Learning (ICML) 2020
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)

✔️ Proposes LEEP, a metric that predicts with high accuracy which pre-trained model should be used for transfer learning to produce an accurate model
✔️ Fast to compute, because the pre-trained model only needs to make predictions once over the target-domain data
✔️ The first metric that shows a high correlation with the accuracy of recently proposed meta-transfer learning
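The score behind these claims is computed from a single pass of the pre-trained (source) model over the target data: form the empirical joint distribution of target labels and source-label predictions, derive the conditional distribution of target labels given source labels, and average the log of the resulting "expected empirical prediction" for each example. Below is a minimal NumPy sketch of that computation as described in the paper; the function name and array layout are my own choices, not the authors' code.

```python
import numpy as np

def leep(source_probs: np.ndarray, target_labels: np.ndarray) -> float:
    """LEEP score from source-model predictions on the target data.

    source_probs:  (n, num_source_classes) softmax outputs of the
                   pre-trained source model on the n target examples.
    target_labels: (n,) integer target labels in [0, num_target_classes).
    """
    n = source_probs.shape[0]
    num_target_classes = int(target_labels.max()) + 1

    # Empirical joint distribution P(y, z) over target labels y and source labels z.
    joint = np.zeros((num_target_classes, source_probs.shape[1]))
    for y in range(num_target_classes):
        joint[y] = source_probs[target_labels == y].sum(axis=0) / n

    # Empirical conditional P(y | z) = P(y, z) / P(z).
    # (Softmax outputs are strictly positive, so P(z) > 0.)
    conditional = joint / joint.sum(axis=0, keepdims=True)

    # Expected empirical prediction p(y_i | x_i) = sum_z P(y_i | z) * theta(x_i)_z.
    eep = (source_probs @ conditional.T)[np.arange(n), target_labels]

    # LEEP is the average log of these predictions.
    return float(np.log(eep).mean())
```

A higher LEEP score indicates a source model whose transferred (fine-tuned) accuracy on the target task is expected to be higher, so candidate pre-trained models can simply be ranked by this score.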

TASK2VEC: Task Embedding for Meta-Learning
Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Stefano Soatto, Pietro Perona

Given a dataset D = {(x_i, y_i)}_{i=1}^N of labeled samples, the authors feed the data through a pre-trained reference convolutional neural network, which they call a "probe network", and compute the diagonal Fisher Information Matrix (FIM) of the network filter parameters to capture the structure of the task. Since the architecture and weights of the probe network are fixed, the FIM provides a fixed-dimensional representation of the task, independent of details such as the number of classes. It simultaneously encodes the "difficulty" of the task, statistics of the input domain, and which features extracted by the probe network are discriminative for the task. This task embedding can be used to reason about the space of tasks; in particular, the authors study the problem of selecting the best pre-trained feature extractor for a new task, which is particularly valuable when there is insufficient data to train or fine-tune a generic model and transfer of knowledge is essential. To select an appropriate pre-trained model, they design a joint embedding of models and tasks in the same vector space, formulated as a meta-learning problem whose objective is to find an embedding such that models whose embeddings are close to a task exhibit good performance on that task. They present large-scale experiments on a library of 1,460 fine-grained classification tasks constructed from existing computer vision datasets; the tasks vary in difficulty and span orders of magnitude in training set size, mimicking the heavy-tailed distribution of real-world problems. The experiments show that using TASK2VEC to select an expert from a collection of 156 feature extractors outperforms the standard practice of fine-tuning a generic model trained on ImageNet.
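To make the embedding step concrete, here is a heavily simplified PyTorch sketch of a diagonal-FIM task embedding. It uses the empirical Fisher, i.e., squared per-example gradients of the log-likelihood at the ground-truth labels; the paper's actual estimator (and its robust variants) differs in detail, and the function name and loop structure here are my own simplifications.

```python
import torch
import torch.nn.functional as F

def diagonal_fim_embedding(probe, loader, device="cpu"):
    """Simplified TASK2VEC-style task embedding: the diagonal of the
    empirical Fisher Information Matrix of the probe network's parameters,
    estimated from per-example gradients on the task's labeled data."""
    probe.to(device).eval()  # eval mode; gradients still flow
    fisher = [torch.zeros_like(p) for p in probe.parameters()]
    count = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        for xi, yi in zip(x, y):
            probe.zero_grad()
            # Negative log-likelihood of a single labeled example.
            loss = F.cross_entropy(probe(xi.unsqueeze(0)), yi.unsqueeze(0))
            loss.backward()
            for f, p in zip(fisher, probe.parameters()):
                if p.grad is not None:
                    f += p.grad.detach() ** 2  # squared gradient = diagonal FIM entry
            count += 1
    # Concatenate the averaged diagonal entries into a single task vector.
    return torch.cat([(f / count).flatten() for f in fisher])
```

Distances between such task vectors (suitably normalized) are what the joint model/task embedding builds on when choosing among the 156 candidate feature extractors mentioned above.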
