neural-chat-finetuned-bilic-v1

This model is a fine-tuned version of Intel/neural-chat-7b-v3-1, trained on our custom fraud-detection dataset.

Model description

This is a fine-tuned version of Intel's NeuralChat model, trained on a carefully curated fraud-detection dataset. We used a context-based training approach so the model learns to understand context within a conversation, as opposed to the traditional rule-based approach.
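A minimal loading sketch, assuming the standard transformers causal-LM interface; the repo id is taken from this card, and half precision with device_map="auto" is an assumption for fitting a 7B model on a single GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bilic/NeuralChat-finetuned-for-fraud-detection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumed: half precision to fit on one GPU
    device_map="auto",          # requires the accelerate package
)
```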

Intended uses & limitations

  • Detecting fraudulent conversations in real time
  • Summarizing conversations and offering suggestions
  • Understanding conversational context with high accuracy to make better predictions (a usage sketch follows this list)
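As an illustration of these use cases, a hedged sketch of prompting the model for a fraud verdict and summary. It continues from the loading sketch above; the system prompt and transcript are placeholders, and the "### System / ### User / ### Assistant" template is assumed to carry over from the Intel/neural-chat-7b-v3-1 base model:

```python
# Hypothetical prompt; the template is assumed from the base model's card.
prompt = (
    "### System:\n"
    "You analyze conversations for signs of fraud. "
    "Return a verdict, a short summary, and suggestions.\n"
    "### User:\n"
    "Caller: This is your bank. We need your one-time passcode to verify...\n"
    "### Assistant:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(reply)
```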

Training

The model was fine-tuned on 50,000 synthetically generated conversations.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • training_steps: 250
  • mixed_precision_training: Native AMP
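For reference, a sketch of these settings expressed as transformers TrainingArguments; the output directory is hypothetical, and the Trainer, dataset, and model wiring are omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./neural-chat-finetuned-bilic-v1",  # hypothetical path
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,              # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,                   # Native AMP mixed-precision training
)
```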

Framework versions

  • Transformers 4.36.0.dev0
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0