Your commit message
- .github/workflows/update_readme.yml +34 -0
- README.md +153 -9
- eval/generate_bookmarks.ipynb +175 -0
- eval/question_and_answer_list.json +66 -61
- main.py +6 -5
- requirements.txt +4 -3
- src/agentics/agents.py +4 -2
- src/config/OAI_CONFIG_LIST.json +4 -4
- src/datatonic/dataloader.py +4 -1
- src/documentation/PROJECT.md +35 -9
.github/workflows/update_readme.yml
ADDED
@@ -0,0 +1,34 @@
+name: Update README
+
+on:
+  push:
+    paths:
+      - 'src/documentation/PROJECT.md'
+      - 'src/documentation/CODE_OF_CONDUCT.md'
+      - 'src/documentation/CONTRIBUTING.md'
+
+jobs:
+  update-readme:
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v2
+
+      - name: Combine markdown files
+        run: |
+          cat src/documentation/PROJECT.md > README.md
+          echo -e "\n\n" >> README.md
+          cat src/documentation/INSTALL.md >> README.md
+          echo -e "\n\n" >> README.md
+          cat src/documentation/CODE_OF_CONDUCT.md >> README.md
+          echo -e "\n\n" >> README.md
+          cat src/documentation/CONTRIBUTING.md >> README.md
+
+      - name: Commit and push if changed
+        run: |
+          git config --global user.email "[email protected]"
+          git config --global user.name "GitHub Action"
+          git add README.md
+          git commit -m "Update README" || exit 0
+          git push
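The combine step is plain concatenation with blank-line separators. As a local sanity check of what the workflow will generate (assuming the documentation files exist in a checkout; this snippet is not part of the workflow itself), the same README can be assembled with a few lines of Python:

```python
from pathlib import Path

# Same sources, in the same order as the workflow's combine step.
parts = [
    "src/documentation/PROJECT.md",
    "src/documentation/INSTALL.md",
    "src/documentation/CODE_OF_CONDUCT.md",
    "src/documentation/CONTRIBUTING.md",
]

# Join with blank lines, mirroring the echo -e "\n\n" separators, then write README.md.
Path("README.md").write_text("\n\n".join(Path(p).read_text() for p in parts))
```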
README.md
CHANGED
@@ -1,10 +1,154 @@
 ---
-
-
-sdk: gradio
-emoji: π
-colorFrom: green
-colorTo: indigo
-app_file: main.py
-pinned: true
----

+# Introducing π§ͺπ©π»βπ¬Sci-Tonic - Your Ultimate Technical Research Assistant π
+
+### Welcome to the Future of Technical Research: Sci-Tonic π
+
+In an era where data is king π, the ability to efficiently gather, analyze, and present information is crucial for success across various fields. Today, we are thrilled to introduce Sci-Tonic π€, a state-of-the-art technical research assistant that revolutionizes how professionals, researchers, and enthusiasts interact with data. Whether it's financial figures πΉ, scientific articles π§¬, or complex texts π, Sci-Tonic is your go-to solution for turning data into insights.
+
+## Features of Sci-Tonic π
+
+### 1. Data Retrieval: A Gateway to Information πͺπ
+- **Broad Spectrum Access**: From financial reports to scientific papers, Sci-Tonic accesses a wide array of data sources.
+- **Efficiency and Precision**: Quickly fetches relevant data, saving you time and effort β°πΌ.
+
+### 2. Advanced Analysis: Deep Insights from Cutting-Edge AI π§ π‘
+- **Intelligent Interpretation**: Utilizes advanced AI algorithms to analyze and interpret complex data sets.
+- **Customizable Analysis**: Tailored to meet specific research needs, providing targeted insights π.
+
+### 3. Multimedia Output: Diverse and Dynamic Presentation ππ₯π
+- **Versatile Formats**: Outputs range from text and infographics to video summaries.
+- **Engaging and Informative**: Enhances understanding and retention of information π.
+
+### 4. User-Friendly Interface: Accessible to All π©βπ»π¨βπ»
+- **Intuitive Design**: Easy to navigate for both tech experts and novices.
+- **Seamless Experience**: Makes research not just productive but also enjoyable π.
+
+### 5. Adaptive Technical Operator π€
+- **High Performance**: Capable of handling complex analyses with ease.
+- **On-the-Fly Adaptability**: Quickly adjusts to new data and user requests πͺοΈ.
+
+## Applications of Sci-Tonic π οΈ
+- **Academic Research**: Streamlines the process of gathering and analyzing scientific data ππ¬.
+- **Financial Analysis**: Provides comprehensive insights into market trends and financial reports πΉ.
+- **Business Intelligence**: Assists in making data-driven decisions for business strategies π.
+- **Personal Use**: Aids enthusiasts in exploring data in their fields of interest π.
+
+## Why Choose Sci-Tonic? π€
+- **Efficiency**: Saves time and effort in data collection and analysis β³.
+- **Accuracy**: Provides reliable and precise insights π.
+- **Customization**: Adapts to specific user needs and preferences π οΈ.
+- **Innovation**: Employs the latest AI technology for data analysis π.
+
+
+### Installation π₯
+```bash
+# Clone the repository
+git clone https://github.com/Tonic-AI/scitonic.git
+
+# Navigate to the repository
+cd scitonic
+
+# Install dependencies
+pip install -r requirements.txt
+
+# Run the application
+python main.py
+```
+
+## Usage π¦
+
+1. **Installation**: Before you begin, ensure you have Sci-Tonic installed. If not, refer to our installation guide. π₯
+
+2. **Open the Application**: Launch Sci-Tonic to start your journey into data exploration. π
+
+## Setting Up Your Environment π οΈ
+
+1. **Enter OpenAI API Key**:
+   - Locate the `OpenAI API Key` textbox.
+   - Enter your API key securely. This key powers the AI models in Sci-Tonic. π
+
+2. **Enter Clarifai PAT**:
+   - Find the `Clarifai PAT` textbox.
+   - Input your Clarifai Personal Access Token. This is crucial for image and audio processing functionalities. πΌοΈποΈ
+
+## Describing Your Problem π
+
+1. **Text Input**:
+   - Use the `Describe your problem in detail:` textbox to type in your query or problem statement.
+   - Be as detailed as possible for the best results. π
+
+2. **Audio Input** (Optional):
+   - Click on `Or speak your problem here:` to record or upload an audio clip.
+   - Sci-Tonic will transcribe and process your spoken words. π€
+
+3. **Image Input** (Optional):
+   - Use `Or upload an image related to your problem:` to add an image.
+   - This can provide visual context to your query. πΌοΈ
+
+## Submitting Your Query π
+
+- Click the `Submit` button after entering your information and query.
+- Sci-Tonic will process your inputs and start generating insights. β¨
+
+## Receiving Output π
+
+- The `Output` textbox will display the results, insights, or answers generated by Sci-Tonic.
+- **Sci-Tonic also produces files**, so check the `scitonic` folder for anything it saves.
+- Review the output to gain valuable information related to your query. π§
+
+## Tips for Optimal Use π
+
+- **Clear Descriptions**: The more specific your query, the better the output. π―
+- **Utilize Multimedia Inputs**: Leverage audio and image inputs for a more comprehensive analysis. πΈπ
+- **Regular Updates**: Keep your API keys and tokens updated for uninterrupted service. π
+
+# CONTRIBUTING GUIDE
+
+## Introduction
+Welcome to the `scitonic` repository! This guide is designed to provide a streamlined process for contributing to our project. We value your input and are excited to collaborate with you.
+
+## Prerequisites
+Before contributing, make sure you have a GitHub account. You should also join our Tonic-AI Discord to communicate with other contributors and the core team.
+
+## How to Contribute
+
+### Reporting Issues
+- **Create an Issue**: If you find a bug or have a feature request, please create an issue to report it. Use clear and descriptive titles and provide as much information as possible.
+- **Use the Issue Template**: Follow the issue template provided to ensure all relevant information is included.
+- **Discuss in Discord**: For immediate feedback or discussion, bring up your issue in the `#scitonic-discussion` channel on Discord.
+
+### Making Changes
+- **Fork the Repository**: Start by forking the repository to your own GitHub account.
+- **Create a Branch**: Create a branch in your forked repository for your proposed changes. Name the branch something relevant to the changes you're making (e.g., `feature-add-login` or `bugfix-header-alignment`).
+```bash
+git checkout -b your-branch-name
+```
+- **Make Your Changes**: Perform the necessary changes to the codebase or documentation.
+- **Commit Your Changes**: Use meaningful commit messages that describe what you've done.
+
+```bash
+git commit -m "Your detailed commit message"
+```
+
+- **Push to Your Fork**: Push your changes to your forked repository on GitHub.
+
+```bash
+git push origin your-branch-name
+```
+
+### Submitting a Pull Request
+- **Pull Request (PR)**: Go to the original `scitonic` repository and click on "Pull Request" to start the process.
+- **PR Template**: Fill in the PR template with all the necessary details, linking the issue you're addressing.
+- **Code Review**: Wait for the core team or community to review your PR. Be responsive to feedback.
+- **Merge**: Once your PR has been approved and passes all checks, it will be merged into the main codebase.
+
+## Code of Conduct
+Please adhere to the Code of Conduct laid out in the `CODE_OF_CONDUCT.md` [file](src/documentation/CODE_OF_CONDUCT.md). Respectful collaboration is key to a healthy open-source environment.
+
+## Questions or Additional Help
+If you need further assistance or have any questions, please don't hesitate to ask in our Discord community or directly in GitHub issues.
+
+Thank you for contributing to `scitonic`!
+
 ---
+
+π Thank you for considering Sci-Tonic as your ultimate technical research assistant. Together, let's turn data into discoveries! ππππ§¬ππππ€π©βπ¬π¨βπΌ
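The setup and input steps in the README above map onto a handful of Gradio widgets wired to a single submit handler in `main.py`. The labels in the sketch below come from this commit's `main.py` changes; the `Blocks` layout and event wiring are illustrative only, not the app's exact code:

```python
import gradio as gr

def build_ui(process_and_submit):
    # Field labels mirror the ones referenced in the usage steps and in main.py.
    with gr.Blocks() as demo:
        oai_key = gr.Textbox(label="OpenAI API Key", type="password")
        pat = gr.Textbox(label="Clarifai PAT", type="password")
        query = gr.Textbox(label="Describe your problem in detail:")
        max_reply = gr.Number(label="Max Auto Replies", value=50)
        audio = gr.Audio(label="Or speak your problem here:", type="numpy")
        image = gr.Image(label="Or upload an image related to your problem:", type="numpy")
        submit = gr.Button("Submit")
        output = gr.Textbox(label="Output")
        # One handler receives every input, as process_and_submit does in main.py.
        submit.click(process_and_submit,
                     inputs=[oai_key, pat, query, max_reply, audio, image],
                     outputs=output)
    return demo
```

Calling `build_ui(process_and_submit).launch()` would serve the same set of controls the usage steps describe.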
eval/generate_bookmarks.ipynb
ADDED
@@ -0,0 +1,175 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": []
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    }
+  },
+  "cells": [
+    {
+      "cell_type": "code",
+      "execution_count": 19,
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "9CrIbR0AK3d1",
+        "outputId": "8624a380-d370-43b0-969c-1b21e275b322"
+      },
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "Requirement already satisfied: typing_extensions in /usr/local/lib/python3.10/dist-packages (4.9.0)\n"
+          ]
+        }
+      ],
+      "source": [
+        "# !pip install openai sentence-transformers\n",
+        "# !pip install langchain\n",
+        "!pip install typing_extensions\n"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "import os\n",
+        "import openai\n",
+        "from langchain_community.document_loaders import TextLoader, PyPDFLoader, CSVLoader, DirectoryLoader\n",
+        "from transformers import AutoModel\n",
+        "from langchain_community.embeddings.sentence_transformer import (\n",
+        "    SentenceTransformerEmbeddings,\n",
+        ")\n",
+        "from langchain_community.vectorstores import Chroma\n",
+        "import torch\n",
+        "import json"
+      ],
+      "metadata": {
+        "id": "xOFM83MoLQ-B"
+      },
+      "execution_count": 20,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "from google.colab import drive\n",
+        "drive.mount('new_articles')"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "WMvNDl83M7Xb",
+        "outputId": "d59ab804-42ce-4b10-fee6-f01f19d60b38"
+      },
+      "execution_count": 53,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "Drive already mounted at new_articles; to attempt to forcibly remount, call drive.mount(\"new_articles\", force_remount=True).\n"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "def document_loader(directory):\n",
+        "    documents = {}\n",
+        "    for filename in os.listdir(directory):\n",
+        "        file_path = os.path.join(directory, filename)\n",
+        "        if filename.endswith(\".csv\"):\n",
+        "            loader = CSVLoader(file_path)\n",
+        "        elif filename.endswith(\".pdf\"):\n",
+        "            loader = PyPDFLoader(file_path)\n",
+        "        elif filename.endswith(\".txt\"):\n",
+        "            loader = TextLoader(file_path)\n",
+        "        else:\n",
+        "            break\n",
+        "\n",
+        "        document = loader.load()\n",
+        "        documents[filename] = document\n",
+        "    return (documents)\n"
+      ],
+      "metadata": {
+        "id": "QxVY8IyNL3Zp"
+      },
+      "execution_count": 54,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "openai.api_key = \"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\"\n",
+        "JSON_DATA = []\n",
+        "directory = \"/content/new_articles/MyDrive/new_articles\"\n",
+        "documents = document_loader(directory)\n",
+        "for filename, document in documents.items():\n",
+        "    doc = document[0].page_content\n",
+        "    # print(filename)\n",
+        "    # print(document)\n",
+        "    response = openai.chat.completions.create(\n",
+        "        model=\"gpt-3.5-turbo\",\n",
+        "        messages = [\n",
+        "            {\"role\": \"system\", \"content\": f\"Generate one Question, Answer,Reference_Article:(use {filename}), Reference_Text from(use block of text which you've used to generate answer {doc})\"},\n",
+        "        ], temperature = 0.3\n",
+        "    )\n",
+        "    #print(response)\n",
+        "    result = response.choices[0].message.content.split(\"\\n\")\n",
+        "    # print(result)\n",
+        "    json_data = {\n",
+        "        \"Question\": result[0].split(\"Question: \")[1].strip() if len(result) > 0 and \"Question:\" in result[0] else \"Not provided\",\n",
+        "        \"Answer\": result[2].split(\"Answer: \")[1].strip() if len(result) > 2 and \"Answer:\" in result[2] else \"Not provided\",\n",
+        "        \"Reference_article\": result[4].split(\"Reference_article: \")[1].strip() if len(result) > 4 and \"Reference_article:\" in result[4] else \"Not provided\",\n",
+        "        \"Reference_text\": result[6].split(\"Reference_text: \")[1].strip() if len(result) > 6 and \"Reference_text:\" in result[6] else \"Not provided\",\n",
+        "    }\n",
+        "\n",
+        "    # print(json_data)\n",
+        "\n",
+        "    JSON_DATA.append(json_data)\n",
+        "\n",
+        "with open('question_and_answer_list.json', 'w') as json_file:\n",
+        "    json.dump(JSON_DATA, json_file, indent=2)\n",
+        "\n",
+        "print(\"JSON data saved to question_and_answer_list.json\")\n",
+        "\n",
+        "print(JSON_DATA)\n"
+      ],
+      "metadata": {
+        "id": "LO9imR5SMA1u"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "eOAr3cy6iA9J"
+      },
+      "execution_count": 46,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "E86P5xBqizsG"
+      },
+      "execution_count": null,
+      "outputs": []
+    }
+  ]
+}
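Stripped of the notebook JSON, the evaluation script loads each article in a directory, asks gpt-3.5-turbo for one question/answer pair about it, and writes the results to `question_and_answer_list.json`. Below is a condensed sketch of that loop; the API key and directory are placeholders, the raw model reply is kept instead of the notebook's field-by-field parsing, and `continue` replaces the notebook's `break` so an unsupported file type skips that file rather than ending the scan:

```python
import json
import os

from langchain_community.document_loaders import CSVLoader, PyPDFLoader, TextLoader
from openai import OpenAI

# Map file extensions to the loader class that can read them.
LOADERS = {".csv": CSVLoader, ".pdf": PyPDFLoader, ".txt": TextLoader}

def load_documents(directory: str) -> dict:
    documents = {}
    for filename in os.listdir(directory):
        loader_cls = LOADERS.get(os.path.splitext(filename)[1])
        if loader_cls is None:
            continue  # skip unsupported file types instead of stopping the scan
        documents[filename] = loader_cls(os.path.join(directory, filename)).load()
    return documents

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # placeholder: supply your own key
qa_pairs = []
for filename, docs in load_documents("new_articles").items():  # placeholder directory
    text = docs[0].page_content
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.3,
        messages=[{"role": "system",
                   "content": f"Generate one Question, Answer, Reference_Article ({filename}), "
                              f"and Reference_Text from the following text:\n{text}"}],
    )
    # Keep the raw reply; the notebook instead splits it into Question/Answer/Reference fields.
    qa_pairs.append({"source": filename, "raw": response.choices[0].message.content})

with open("question_and_answer_list.json", "w") as f:
    json.dump(qa_pairs, f, indent=2)
```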
eval/question_and_answer_list.json
CHANGED
@@ -1,63 +1,68 @@
 [
   {
+    "Question": "What are some of the new integrations and features that Slack announced to incorporate AI into its platform?",
+    "Answer": "Slack announced several new integrations and features to incorporate AI into its platform. These include SlackGPT, a generative AI built on top of the Slack platform that users and developers can tap into to build AI-driven experiences. Slack is also bringing AI natively into the user experience with features like AI-powered conversation summaries and writing assistance. Additionally, Slack will incorporate EinsteinGPT, Salesforce's generative AI, to provide insights from real-time customer data in Salesforce directly into Slack. These integrations are still in development, but developers can currently build custom integrations with a variety of large language models (LLMs).",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What is Checks and how does it help mobile developers with privacy compliance?",
+    "Answer": "Checks is an AI-powered tool developed by Google to help mobile developers ensure compliance with privacy rules and regulations. It uses artificial intelligence and machine learning to scan apps and their code, identifying potential violations of privacy and data protection rules. Checks provides remediation suggestions on how to fix these issues, making it easier for developers to address privacy concerns. The tool is integrated with Google's language models and app understanding technologies to power its identification and suggestion capabilities. It offers a dashboard for monitoring and triaging compliance issues in areas such as compliance monitoring, data monitoring, and store disclosure support. While it currently focuses on Google Play data safety, it may expand to include Apple App Store data safety in the future.",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What are the two new products that Nova has announced to help brands police AI-generated content?",
+    "Answer": "The two new products that Nova has announced to help brands police AI-generated content are BrandGuard and BrandGPT.",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What is Pando's approach to solving supply chain challenges and how does it differentiate itself from other vendors?",
+    "Answer": "Pando aims to solve supply chain challenges by consolidating supply chain data from various sources and providing tools and apps for different tasks across freight procurement, trade and transport management, and more. It differentiates itself from other vendors like SAP and Oracle by offering no-code capabilities, allowing business users to customize the apps without the need for IT resources. Pando also utilizes algorithms and machine learning to make predictions and detect anomalies in the supply chain.",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What are some controversies surrounding ChatGPT?",
+    "Answer": "ChatGPT has been involved in several controversies. Discord integrated OpenAI's technology into its bot named Clyde, which was tricked into providing instructions for making illegal drugs and incendiary mixtures. There have been cases of ChatGPT accusing individuals of false crimes, and it has been banned by some school systems and colleges for promoting plagiarism and misinformation. Additionally, there have been concerns about defamation and the use of AI-generated content for SEO farming.",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What are the concerns of writers regarding the use of AI in the entertainment industry?",
+    "Answer": "Writers are concerned that the use of AI in the entertainment industry could undermine their working conditions and devalue their labor. They argue that AI-generated content should not be considered as writers' work and that their job involves more than just scriptwriting. They also worry that studios may use AI as a way to demand more from writers in a shorter period of time without adequately compensating them. Additionally, the legal status of AI-generated content remains unclear, which further complicates the issue.",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What new features is Microsoft adding to Bing?",
+    "Answer": "Not provided",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What is the U.K.'s competition watchdog reviewing in relation to AI?",
+    "Answer": "Not provided",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What is Spawning AI's solution to give artists more control over how their art is used in generative AI models?",
+    "Answer": "Not provided",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What is StarCoder and how does it compare to other code-generating AI systems?",
+    "Answer": "Not provided",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  },
+  {
+    "Question": "What is the focus of Okera's data governance platform and how does it use AI technology?",
+    "Answer": "Okera's data governance platform focuses on AI and uses AI-powered systems to automatically discover and classify personally identifiable information, tag it, and apply rules to it. It utilizes a no-code interface and emphasizes metadata.",
+    "Reference_article": "Not provided",
+    "Reference_text": "Not provided"
+  }
+]
main.py
CHANGED
@@ -1,6 +1,7 @@
 import os
 import gradio as gr
 import autogen
+import json
 from src.mapper.e5map import E5Mapper
 from src.mapper.scimap import scimap
 from src.mapper.parser import MapperParser
@@ -12,7 +13,7 @@ this is a highly adaptive technical operator that will listen to your query and
 """
 
 def update_config_file(api_key):
-    config_path = "./config/OAI_CONFIG_LIST.json"
+    config_path = "./src/config/OAI_CONFIG_LIST.json"
     with open(config_path, "r") as file:
         config = json.load(file)
 
@@ -67,7 +68,7 @@ def process_query(oai_key, query, max_auto_reply):
     update_config_file(oai_key)
     os.environ['OAI_KEY'] = oai_key
     llm_config = autogen.config_list_from_json(
-        env_or_file="./config/OAI_CONFIG_LIST.json",
+        env_or_file="./src/config/OAI_CONFIG_LIST.json",
         filter_dict={"model": {"gpt-4", "gpt-3.5-turbo-16k", "gpt-4-1106-preview"}}
     )
 
@@ -118,10 +119,10 @@ def main():
     txt_pat = gr.Textbox(label="Clarifai PAT", type="password", placeholder="Enter Clarifai PAT here")
     txt_query = gr.Textbox(label="Describe your problem in detail:")
     txt_max_auto_reply = gr.Number(label="Max Auto Replies", value=50)
-    audio_input = gr.Audio(label="Or speak your problem here:", type="numpy")
-    image_input = gr.Image(label="Or upload an image related to your problem:", type="numpy")
+    audio_input = gr.Audio(label="Or speak your problem here:", type="numpy",)
+    image_input = gr.Image(label="Or upload an image related to your problem:", type="numpy", )
     btn_submit = gr.Button("Submit")
-    output = gr.Textbox(label="Output")
+    output = gr.Textbox(label="Output",)
 
     def process_and_submit(oai_key, pat, query, max_auto_reply, audio, image):
         os.environ['CLARIFAI_PAT'] = pat
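The corrected `./src/config/OAI_CONFIG_LIST.json` path is both rewritten by `update_config_file` and later loaded by autogen. Only the first two lines of the function appear in this hunk, so the sketch below (stamping the user's key into each entry and writing the file back) is an assumption about its shape rather than the actual body:

```python
import json
import autogen

CONFIG_PATH = "./src/config/OAI_CONFIG_LIST.json"

def update_config_file(api_key):
    # Shown in the hunk: open the config list and parse it.
    with open(CONFIG_PATH, "r") as file:
        config = json.load(file)
    # Assumed (not shown in the diff): write the user's key into every entry and save.
    for entry in config:
        entry["api_key"] = api_key
    with open(CONFIG_PATH, "w") as file:
        json.dump(config, file, indent=2)

# As in process_query: load the list and keep only the chat models the agents use.
llm_config = autogen.config_list_from_json(
    env_or_file=CONFIG_PATH,
    filter_dict={"model": {"gpt-4", "gpt-3.5-turbo-16k", "gpt-4-1106-preview"}},
)
```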
requirements.txt
CHANGED
@@ -1,7 +1,7 @@
 streamlit
 gradio
 datasets
-
+autogen
 chromadb
 semantic-kernel
 llama-index
@@ -9,5 +9,6 @@ llama-hub
 langchain
 huggingface_hub
 openai
-
-
+Ipython
+pyautogen
+pypdf
src/agentics/agents.py
CHANGED
@@ -20,13 +20,15 @@ llm_config = {
     "temperature": 0,
 }
 
+def termination_msg(self, x):
+    return isinstance(x, dict) and "TERMINATE" == str(x.get("content", ""))[-9:].upper()
+
 class AgentsFactory:
     def __init__(self, llm_config, db_path):
         self.llm_config = llm_config
         self.db_path = db_path
 
-
-        return isinstance(x, dict) and "TERMINATE" == str(x.get("content", ""))[-9:].upper()
+
 
     def tonic(self) :
         return autogen.UserProxyAgent(
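The relocated `termination_msg` helper simply checks whether a message dict ends with the TERMINATE sentinel (note it still takes a `self` argument even though it now sits at module level). A few hypothetical messages show what it does and does not match:

```python
def termination_msg(self, x):
    return isinstance(x, dict) and "TERMINATE" == str(x.get("content", ""))[-9:].upper()

# The last nine characters of the content must spell TERMINATE (case-insensitive).
assert termination_msg(None, {"content": "All done. TERMINATE"})
assert termination_msg(None, {"content": "terminate"})
assert not termination_msg(None, {"content": "TERMINATE early, then more text"})
assert not termination_msg(None, "TERMINATE")  # non-dict messages never match
```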
src/config/OAI_CONFIG_LIST.json
CHANGED
@@ -1,25 +1,25 @@
 [
   {
     "model": "gpt-3.5-turbo-preview",
-    "api_key": "
+    "api_key": "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",
     "base_url": "https://api.openai.com/v1",
     "api_version": "2023-06-01-preview"
   },
   {
     "model": "gpt-4-preview",
-    "api_key": "
+    "api_key": "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",
     "base_url": "https://api.openai.com/v1",
     "api_version": "2023-06-01-preview"
   },
   {
     "model": "gpt-4-vision-preview",
-    "api_key": "
+    "api_key": "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",
     "base_url": "https://api.openai.com/v1",
     "api_version": "2023-06-01-preview"
   },
   {
     "model": "dall-e-3",
-    "api_key": "
+    "api_key": "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",
     "base_url": "https://api.openai.com/v1",
     "api_version": "2023-06-01-preview"
   }
src/datatonic/dataloader.py
CHANGED
@@ -89,7 +89,10 @@ class DataLoader:
         if dataset_name in self.datasets:
             return self.datasets[dataset_name]()
         else:
-
+            # Log or return an error message and default to "gpl-arguana"
+            error_message = f"Dataset '{dataset_name}' not supported. Defaulting to 'gpl-arguana'."
+            print(error_message)  # or handle this message as needed
+            return self.load_gpl_arguana()  # Default to the 'gpl-arguana' dataset
 
     def save_to_json(self, data, file_name):
         with open(file_name, 'w') as f:
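The new `else` branch turns an unknown dataset name into a warning plus a fall-back to `gpl-arguana` instead of returning nothing. A self-contained sketch of the same pattern (class and method names here are illustrative, not the real `DataLoader` API):

```python
class DataLoaderSketch:
    def __init__(self):
        # Registry of supported dataset names -> loader callables, as in DataLoader.datasets.
        self.datasets = {"gpl-arguana": self.load_gpl_arguana}

    def load_gpl_arguana(self):
        return ["... gpl-arguana records ..."]

    def load(self, dataset_name):
        if dataset_name in self.datasets:
            return self.datasets[dataset_name]()
        # Unknown name: warn and fall back to the default split instead of failing.
        print(f"Dataset '{dataset_name}' not supported. Defaulting to 'gpl-arguana'.")
        return self.load_gpl_arguana()

# Prints the warning, then returns the gpl-arguana data.
print(DataLoaderSketch().load("made-up-benchmark"))
```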
src/documentation/PROJECT.md
CHANGED
@@ -1,13 +1,39 @@
-#
 
-Welcome to
 
-
 
-Sci-Tonic
 
-
-- **
-- **
-
-

+# Introducing π§ͺπ©π»βπ¬Sci-Tonic - Your Ultimate Technical Research Assistant π
 
+### Welcome to the Future of Technical Research: Sci-Tonic π
 
+In an era where data is king π, the ability to efficiently gather, analyze, and present information is crucial for success across various fields. Today, we are thrilled to introduce Sci-Tonic π€, a state-of-the-art technical research assistant that revolutionizes how professionals, researchers, and enthusiasts interact with data. Whether it's financial figures πΉ, scientific articles π§¬, or complex texts π, Sci-Tonic is your go-to solution for turning data into insights.
 
+## Features of Sci-Tonic π
 
+### 1. Data Retrieval: A Gateway to Information πͺπ
+- **Broad Spectrum Access**: From financial reports to scientific papers, Sci-Tonic accesses a wide array of data sources.
+- **Efficiency and Precision**: Quickly fetches relevant data, saving you time and effort β°πΌ.
+
+### 2. Advanced Analysis: Deep Insights from Cutting-Edge AI π§ π‘
+- **Intelligent Interpretation**: Utilizes advanced AI algorithms to analyze and interpret complex data sets.
+- **Customizable Analysis**: Tailored to meet specific research needs, providing targeted insights π.
+
+### 3. Multimedia Output: Diverse and Dynamic Presentation ππ₯π
+- **Versatile Formats**: Outputs range from text and infographics to video summaries.
+- **Engaging and Informative**: Enhances understanding and retention of information π.
+
+### 4. User-Friendly Interface: Accessible to All π©βπ»π¨βπ»
+- **Intuitive Design**: Easy to navigate for both tech experts and novices.
+- **Seamless Experience**: Makes research not just productive but also enjoyable π.
+
+### 5. Adaptive Technical Operator π€
+- **High Performance**: Capable of handling complex analyses with ease.
+- **On-the-Fly Adaptability**: Quickly adjusts to new data and user requests πͺοΈ.
+
+## Applications of Sci-Tonic π οΈ
+- **Academic Research**: Streamlines the process of gathering and analyzing scientific data ππ¬.
+- **Financial Analysis**: Provides comprehensive insights into market trends and financial reports πΉ.
+- **Business Intelligence**: Assists in making data-driven decisions for business strategies π.
+- **Personal Use**: Aids enthusiasts in exploring data in their fields of interest π.
+
+## Why Choose Sci-Tonic? π€
+- **Efficiency**: Saves time and effort in data collection and analysis β³.
+- **Accuracy**: Provides reliable and precise insights π.
+- **Customization**: Adapts to specific user needs and preferences π οΈ.
+- **Innovation**: Employs the latest AI technology for data analysis π.