
SetFit with sentence-transformers/all-mpnet-base-v2

This is a SetFit model that can be used for text classification. It uses sentence-transformers/all-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
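
As an illustration of these two phases, here is a minimal training sketch with the setfit library (the tiny dataset and its texts are made-up placeholders, not this model's actual training data):

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot dataset with "text" and "label" columns
train_dataset = Dataset.from_dict({
    "text": [
        "How do I feed a CSV file into a low-level TensorFlow training loop?",
        "How do I centre a div with CSS grid?",
    ],
    "label": [1, 0],
})

# The Sentence Transformer body; a LogisticRegression head is attached by default
model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# Phase 1: contrastive fine-tuning of the embedding body on generated sentence pairs
# Phase 2: fitting the classification head on embeddings from the fine-tuned body
trainer.train()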

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/all-mpnet-base-v2
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

The model distinguishes two labels, 0 and 1. Example questions from the training data are shown below for each label.

Label 1
  • I'm looking to use Tensorflow to train a neural network model for classification, and I want to read data from a CSV file, such as the Iris data set.

    The Tensorflow documentation shows an example of loading the Iris data and building a prediction model, but the example uses the high-level tf.contrib.learn API. I want to use the low-level Tensorflow API and run gradient descent myself. How would I do that?

  • In the following code, I want dense matrix B to left multiply a sparse matrix A, but I got errors.

        import tensorflow as tf
        import numpy as np

        A = tf.sparse_placeholder(tf.float32)
        B = tf.placeholder(tf.float32, shape=(5,5))
        C = tf.matmul(B,A,a_is_sparse=False,b_is_sparse=True)
        sess = tf.InteractiveSession()
        indices = np.array([[3, 2], [1, 2]], dtype=np.int64)
        values = np.array([1.0, 2.0], dtype=np.float32)
        shape = np.array([5,5], dtype=np.int64)
        Sparse_A = tf.SparseTensorValue(indices, values, shape)
        RandB = np.ones((5, 5))
        print sess.run(C, feed_dict={A: Sparse_A, B: RandB})

    The error message is as follows:

        TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'>
        to Tensor. Contents: SparseTensor(indices=Tensor("Placeholder_4:0", shape=(?, ?), dtype=int64),
        values=Tensor("Placeholder_3:0", shape=(?,), dtype=float32),
        dense_shape=Tensor("Placeholder_2:0", shape=(?,), dtype=int64)).
        Consider casting elements to a supported type.

    What's wrong with my code?

    I'm doing this following the documentation and it says we should use a_is_sparse to denote whether the first matrix is sparse, and similarly with b_is_sparse. Why is my code wrong?

    As is suggested by vijay, I should use C = tf.matmul(B,tf.sparse_tensor_to_dense(A),a_is_sparse=False,b_is_sparse=True)

    I tried this but I met with another error saying:

        Caused by op u'SparseToDense', defined at:
          File "a.py", line 19, in <module>
            C = tf.matmul(B,tf.sparse_tensor_to_dense(A),a_is_sparse=False,b_is_sparse=True)
          File "/home/fengchao.pfc/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/sparse_ops.py", line 845, in sparse_tensor_to_dense
            name=name)
          File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/sparse_ops.py", line 710, in sparse_to_dense
            name=name)
          File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_sparse_ops.py", line 1094, in _sparse_to_dense
            validate_indices=validate_indices, name=name)
          File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
            op_def=op_def)
          File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
            original_op=self._default_original_op, op_def=op_def)
          File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
            self._traceback = _extract_stack()

        InvalidArgumentError (see above for traceback): indices[1] = [1,2] is out of order
        [[Node: SparseToDense = SparseToDense[T=DT_FLOAT, Tindices=DT_INT64, validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_Placeholder_4_0_2, _arg_Placeholder_2_0_0, _arg_Placeholder_3_0_1, SparseToDense/default_value)]]

    Thank you all for helping me!

  • "

    I am using tf.estimator.train_and_evaluate and tf.data.Dataset to feed data to the estimator:

    \n\n

    Input Data function:

    \n\n
        def data_fn(data_dict, batch_size, mode, num_epochs=10):\n        dataset = {}\n        if mode == tf.estimator.ModeKeys.TRAIN:\n            dataset = tf.data.Dataset.from_tensor_slices(data_dict['train_data'].astype(np.float32))\n            dataset = dataset.cache()\n            dataset = dataset.shuffle(buffer_size= batch_size * 10).repeat(num_epochs).batch(batch_size)\n        else:\n            dataset = tf.data.Dataset.from_tensor_slices(data_dict['valid_data'].astype(np.float32))\n            dataset = dataset.cache()\n            dataset = dataset.batch(batch_size)\n\n        iterator = dataset.make_one_shot_iterator()\n        next_element = iterator.get_next()\n\n    return next_element\n
    \n\n

    Train Function:

    \n\n
    def train_model(data):\n    tf.logging.set_verbosity(tf.logging.INFO)\n    config = tf.ConfigProto(allow_soft_placement=True,\n                            log_device_placement=False)\n    config.gpu_options.allow_growth = True\n    run_config = tf.contrib.learn.RunConfig(\n        save_checkpoints_steps=10,\n        keep_checkpoint_max=10,\n        session_config=config\n    )\n\n    train_input = lambda: data_fn(data, 100, tf.estimator.ModeKeys.TRAIN, num_epochs=1)\n    eval_input = lambda: data_fn(data, 1000, tf.estimator.ModeKeys.EVAL)\n    estimator = tf.estimator.Estimator(model_fn=model_fn, params=hps, config=run_config)\n    train_spec = tf.estimator.TrainSpec(train_input, max_steps=100)\n    eval_spec = tf.estimator.EvalSpec(eval_input,\n                                      steps=None,\n                                      throttle_secs = 30)\n\n    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n
    \n\n

    The training goes fine, but when it comes to evaluation I get this error:

    \n\n
    OutOfRangeError (see above for traceback): End of sequence \n
    \n\n

    If I don't use Dataset.batch on evaluation dataset (by omitting the line dataset[name] = dataset[name].batch(batch_size) in data_fn) I get the same error but after a much longer time.

    \n\n

    I can only avoid this error if I don't batch the data and use steps=1 for evaluation, but does that perform the evaluation on the whole dataset?

    \n\n

    I don't understand what causes this error as the documentation suggests I should be able to evaluate on batches too.

    \n\n

    Note: I get the same error when using tf.estimator.evaluate on data batches.

    \n"
Label 0

  • I'm working on a project where I have trained a series of binary classifiers with Keras, with Tensorflow as the backend engine. The input data I have is a series of images, where each binary classifier must make the prediction on the images, later I save the predictions on a CSV file.

    The problem I have is when I get the predictions from the first series of binary classifiers there isn't any warning, but when the 5th or 6th binary classifier calls the method predict on the input data I get the following warning:

        WARNING:tensorflow:5 out of the last 5 calls to <function
        Model.make_predict_function..predict_function at
        0x2b280ff5c158> triggered tf.function retracing. Tracing is expensive
        and the excessive number of tracings could be due to (1) creating
        @tf.function repeatedly in a loop, (2) passing tensors with different
        shapes, (3) passing Python objects instead of tensors. For (1), please
        define your @tf.function outside of the loop. For (2), @tf.function
        has experimental_relax_shapes=True option that relaxes argument shapes
        that can avoid unnecessary retracing. For (3), please refer to
        https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args
        and https://www.tensorflow.org/api_docs/python/tf/function for more
        details.

    To answer each point in the parenthesis, here are my answers:

    1. The predict method is called inside a for loop.
    2. I don't pass tensors but a list of NumPy arrays of gray scale images, all of them with the same size in width and height. The only thing that can change is the batch size because the list can have only 1 image or more than one.
    3. As I wrote in point 2, I pass a list of NumPy arrays.

    I have debugged my program and found that this warning always happens when the method predict is called. To summarize the code I have written is the following:

        import cv2 as cv
        import tensorflow as tf
        from tensorflow.keras.models import load_model
        # Load the models
        binary_classifiers = [load_model(path) for path in path2models]
        # Get the images
        images = [#Load the images with OpenCV]
        # Apply the resizing and reshapes on the images.
        my_list = list()
        for image in images:
            image_reworked = # Apply the resizing and reshaping on images
            my_list.append(image_reworked)

        # Get the prediction from each model
        # This is where I get the warning
        predictions = [model.predict(x=my_list,verbose=0) for model in binary_classifiers]

    What I have tried

    I have defined a function as tf.function and putted the code of the predictions inside the tf.function like this

        @tf.function
        def testing(models, faces):
            return [model.predict(x=faces,verbose=0) for model in models]

    But I ended up getting the following error:

        RuntimeError: Detected a call to Model.predict inside a
        tf.function. Model.predict is a high-level endpoint that manages
        its own tf.function. Please move the call to Model.predict outside
        of all enclosing tf.functions. Note that you can call a Model
        directly on Tensors inside a tf.function like: model(x).

    So calling the method predict is basically already a tf.function. So it's useless to define a tf.function when the warning I get it's from that method.

    I have also checked those other two questions:

    1. Tensorflow 2: Getting "WARNING:tensorflow:9 out of the last 9 calls to triggered tf.function retracing. Tracing is expensive"
    2. Loading multiple saved tensorflow/keras models for prediction

    But neither of the two questions answers my question about how to avoid this warning. Plus I have also checked the links in the warning message but I couldn't solve my problem.

    What I want

    I simply want to avoid this warning. While I'm still getting the predictions from the models I noticed that the python program takes way too much time on doing predictions for a list of images.

    What I'm using

    • Python 3.6.13
    • Tensorflow 2.3.0

    Solution

    After some tries to suppress the warning from the predict method, I have checked the documentation of Tensorflow and in one of the first tutorials on how to use Tensorflow it is explained that, by default, Tensorflow is executed in eager mode, which is useful for testing and debugging the network models. Since I have already tested my models many times, it was only required to disable the eager mode by writing this single python line of code:

        tf.compat.v1.disable_eager_execution()

    Now the warning doesn't show up anymore.

  • I try to export a Tensorflow model but I can not find the best way to add the exogenous feature to the tf.contrib.timeseries.StructuralEnsembleRegressor.build_raw_serving_input_receiver_fn.

    I use the sample from the Tensorflow contrib: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/timeseries/examples/known_anomaly.py and I just try to save the model.

        # this is the exogenous column
        string_feature = tf.contrib.layers.sparse_column_with_keys(
              column_name="is_changepoint", keys=["no", "yes"])

        one_hot_feature = tf.contrib.layers.one_hot_column(
              sparse_id_column=string_feature)

        estimator = tf.contrib.timeseries.StructuralEnsembleRegressor(
              periodicities=12,
              cycle_num_latent_values=3,
              num_features=1,
              exogenous_feature_columns=[one_hot_feature],
              exogenous_update_condition=
              lambda times, features: tf.equal(features["is_changepoint"], "yes"))

        reader = tf.contrib.timeseries.CSVReader(
              csv_file_name,
              column_names=(tf.contrib.timeseries.TrainEvalFeatures.TIMES,
                            tf.contrib.timeseries.TrainEvalFeatures.VALUES,
                            "is_changepoint"),
              column_dtypes=(tf.int64, tf.float32, tf.string),
              skip_header_lines=1)

        train_input_fn = tf.contrib.timeseries.RandomWindowInputFn(reader, batch_size=4, window_size=64)
        estimator.train(input_fn=train_input_fn, steps=train_steps)
        evaluation_input_fn = tf.contrib.timeseries.WholeDatasetInputFn(reader)
        evaluation = estimator.evaluate(input_fn=evaluation_input_fn, steps=1)

        export_directory = tempfile.mkdtemp()

        ######################################################
        # the exogenous column must be provided to the build_raw_serving_input_receiver_fn.
        # But How ?
        ######################################################

        input_receiver_fn = estimator.build_raw_serving_input_receiver_fn()
        # -> error missing 'is_changepoint' key

        #input_receiver_fn = estimator.build_raw_serving_input_receiver_fn({'is_changepoint' : string_feature})
        # -> cast exception

        export_location = estimator.export_savedmodel(export_directory, input_receiver_fn)

    According to the documentation, the build_raw_serving_input_receiver_fn exogenous_features parameter is: A dictionary mapping feature keys to exogenous features (either Numpy arrays or Tensors). Used to determine the shapes of placeholders for these features.

    So what is the best way to transform the one_hot_column or sparse_column_with_keys to a Tensor object?

  • "

    I am currently working on an optical flow project and I come across a strange error.

    \n\n

    I have uint16 images stored in bytes in my TFrecords. When I read the TFrecords from my local machine it is giving me uint16 values, but when I deploy the same code and read it from the docker I am getting uint8 values eventhough my dtype is uint16. I mean the uint16 values are getting reduced to uint8 like 32768 --> 128.

    \n\n

    What is causing this error?

    \n\n

    My local machine has: Tensorflow 1.10.1 and python 3.6\nMy Docker Image has: Tensorflow 1.12.0 and python 3.5

    \n\n

    I am working on tensorflow object detection API\nWhile creating the TF records I use:

    \n\n
    with tf.gfile.GFile(flows, 'rb') as fid:\n    flow_images = fid.read()\n
    \n\n

    While reading it back I am using: tf.image.decoderaw

    \n\n

    Dataset: KITTI FLOW 2015

    \n"

Evaluation

Metrics

Label    Accuracy    Precision    Recall    F1
all      0.85        0.8535       0.85      0.8496

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("sharukat/sbert-questionclassifier")
# Run inference
preds = model("<p>In the documentation it seems they focus on how to save and restore tf.keras.models, but i was wondering how do you save and restore models trained customly through some basic iteration loop?</p>

<p>Now that there isnt a graph or a session, how do we save structure defined in a tf function that is customly built without using layer abstractions?</p>
")

Training Details

Training Set Metrics

Training set    Min    Median      Max
Word count      15     330.0667    3755

Label    Training Sample Count
0        450
1        450

Training Hyperparameters

  • batch_size: (16, 2)
  • num_epochs: (1, 16)
  • max_steps: -1
  • sampling_strategy: unique
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • max_length: 256
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True
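
For reference, a hedged sketch of how the hyperparameters above map onto setfit's TrainingArguments (assuming the setfit 1.0 API; tuple values pass separate settings for the embedding phase and the classifier phase):

from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss, BatchHardTripletLossDistanceFunction

args = TrainingArguments(
    batch_size=(16, 2),                  # (embedding phase, classifier phase)
    num_epochs=(1, 16),
    max_steps=-1,
    sampling_strategy="unique",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,           # contrastive loss class for the embedding phase
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,  # only used by triplet-style losses
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    max_length=256,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=True,
)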

Training Results

Epoch     Step      Training Loss    Validation Loss
0.0000    1         0.2951           -
1.0*      25341     0.0              0.2473

  • The row marked with * denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.13
  • SetFit: 1.0.3
  • Sentence Transformers: 2.5.0
  • Transformers: 4.38.1
  • PyTorch: 2.1.2
  • Datasets: 2.17.1
  • Tokenizers: 0.15.2

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}