kenken999 committed on
Commit 886d8e9
1 parent: 1afbeb8

create duck db

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.
Files changed (50)
  1. open-interpreter/.devcontainer/DockerFile +1 -0
  2. open-interpreter/.devcontainer/devcontainer.json +10 -0
  3. open-interpreter/.github/ISSUE_TEMPLATE/bug_report.yml +71 -0
  4. open-interpreter/.github/ISSUE_TEMPLATE/config.yml +1 -0
  5. open-interpreter/.github/ISSUE_TEMPLATE/feature_request.yml +27 -0
  6. open-interpreter/.github/pull_request_template.md +15 -0
  7. open-interpreter/.github/workflows/potential-duplicates.yml +31 -0
  8. open-interpreter/.github/workflows/python-package.yml +37 -0
  9. open-interpreter/.gitignore +237 -0
  10. open-interpreter/.pre-commit-config.yaml +15 -0
  11. open-interpreter/LICENSE +660 -0
  12. open-interpreter/README.md +413 -0
  13. open-interpreter/docs/CONTRIBUTING.md +91 -0
  14. open-interpreter/docs/NCU_MIGRATION_GUIDE.md +254 -0
  15. open-interpreter/docs/README_DE.md +131 -0
  16. open-interpreter/docs/README_ES.md +413 -0
  17. open-interpreter/docs/README_IN.md +258 -0
  18. open-interpreter/docs/README_JA.md +398 -0
  19. open-interpreter/docs/README_VN.md +395 -0
  20. open-interpreter/docs/README_ZH.md +220 -0
  21. open-interpreter/docs/ROADMAP.md +168 -0
  22. open-interpreter/docs/SAFE_MODE.md +60 -0
  23. open-interpreter/docs/SECURITY.md +38 -0
  24. open-interpreter/docs/assets/.DS-Store +0 -0
  25. open-interpreter/docs/assets/favicon.png +0 -0
  26. open-interpreter/docs/assets/logo/circle-inverted.png +0 -0
  27. open-interpreter/docs/assets/logo/circle.png +0 -0
  28. open-interpreter/docs/code-execution/computer-api.mdx +240 -0
  29. open-interpreter/docs/code-execution/custom-languages.mdx +76 -0
  30. open-interpreter/docs/code-execution/settings.mdx +7 -0
  31. open-interpreter/docs/code-execution/usage.mdx +36 -0
  32. open-interpreter/docs/computer/custom-languages.mdx +0 -0
  33. open-interpreter/docs/computer/introduction.mdx +13 -0
  34. open-interpreter/docs/computer/language-model-usage.mdx +3 -0
  35. open-interpreter/docs/computer/user-usage.mdx +5 -0
  36. open-interpreter/docs/getting-started/introduction.mdx +44 -0
  37. open-interpreter/docs/getting-started/setup.mdx +70 -0
  38. open-interpreter/docs/guides/advanced-terminal-usage.mdx +16 -0
  39. open-interpreter/docs/guides/basic-usage.mdx +153 -0
  40. open-interpreter/docs/guides/demos.mdx +59 -0
  41. open-interpreter/docs/guides/multiple-instances.mdx +37 -0
  42. open-interpreter/docs/guides/os-mode.mdx +17 -0
  43. open-interpreter/docs/guides/running-locally.mdx +41 -0
  44. open-interpreter/docs/guides/streaming-response.mdx +159 -0
  45. open-interpreter/docs/integrations/docker.mdx +64 -0
  46. open-interpreter/docs/integrations/e2b.mdx +72 -0
  47. open-interpreter/docs/language-models/custom-models.mdx +42 -0
  48. open-interpreter/docs/language-models/hosted-models/ai21.mdx +48 -0
  49. open-interpreter/docs/language-models/hosted-models/anthropic.mdx +48 -0
  50. open-interpreter/docs/language-models/hosted-models/anyscale.mdx +60 -0
open-interpreter/.devcontainer/DockerFile ADDED
@@ -0,0 +1 @@
+FROM python:3.11
open-interpreter/.devcontainer/devcontainer.json ADDED
@@ -0,0 +1,10 @@
+{
+  "name": "Open Interpreter",
+  "dockerFile": "DockerFile",
+  // Features to add to the dev container. More info: https://containers.dev/features.
+  // "features": {},
+  "onCreateCommand": "pip install .",
+  "postAttachCommand": "interpreter -y"
+  // Configure tool-specific properties.
+  // "customizations": {},
+}
open-interpreter/.github/ISSUE_TEMPLATE/bug_report.yml ADDED
@@ -0,0 +1,71 @@
+name: Bug report
+description: Create a report to help us improve
+labels:
+  - bug
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Your issue may have already been reported. Please check the following link for common issues and solutions.
+
+        [Commonly faced issues and their solutions](https://github.com/KillianLucas/open-interpreter/issues/164)
+  - type: textarea
+    id: description
+    attributes:
+      label: Describe the bug
+      description: A clear and concise description of what the bug is.
+    validations:
+      required: true
+  - type: textarea
+    id: repro
+    attributes:
+      label: Reproduce
+      description: Steps to reproduce the behavior
+      placeholder: |
+        1. Go to '...'
+        2. Click on '....'
+        3. Scroll down to '....'
+        4. See error
+    validations:
+      required: true
+  - type: textarea
+    id: expected
+    attributes:
+      label: Expected behavior
+      description: A clear and concise description of what you expected to happen.
+    validations:
+      required: true
+  - type: textarea
+    id: screenshots
+    attributes:
+      label: Screenshots
+      description: If applicable, add screenshots to help explain your problem.
+  - type: input
+    id: oiversion
+    attributes:
+      label: Open Interpreter version
+      description: Run `pip show open-interpreter`
+      placeholder: e.g. 0.1.1
+    validations:
+      required: true
+  - type: input
+    id: pythonversion
+    attributes:
+      label: Python version
+      description: Run `python -V`
+      placeholder: e.g. 3.11.5
+    validations:
+      required: true
+  - type: input
+    id: osversion
+    attributes:
+      label: Operating System name and version
+      description: The name and version of your OS.
+      placeholder: e.g. Windows 11 / macOS 13 / Ubuntu 22.10
+    validations:
+      required: true
+  - type: textarea
+    id: additional
+    attributes:
+      label: Additional context
+      description: Add any other context about the problem here.
open-interpreter/.github/ISSUE_TEMPLATE/config.yml ADDED
@@ -0,0 +1 @@
+blank_issues_enabled: false
open-interpreter/.github/ISSUE_TEMPLATE/feature_request.yml ADDED
@@ -0,0 +1,27 @@
+name: Feature request
+description: Suggest an idea for this project
+labels:
+  - enhancement
+body:
+  - type: textarea
+    id: problem
+    attributes:
+      label: Is your feature request related to a problem? Please describe.
+      description: A clear and concise description of what the problem is.
+  - type: textarea
+    id: description
+    attributes:
+      label: Describe the solution you'd like
+      description: A clear and concise description of what you want to happen.
+    validations:
+      required: true
+  - type: textarea
+    id: alternatives
+    attributes:
+      label: Describe alternatives you've considered
+      description: A clear and concise description of any alternative solutions or features you've considered.
+  - type: textarea
+    id: additional
+    attributes:
+      label: Additional context
+      description: Add any other context about the problem here.
open-interpreter/.github/pull_request_template.md ADDED
@@ -0,0 +1,15 @@
+### Describe the changes you have made:
+
+### Reference any relevant issues (e.g. "Fixes #000"):
+
+### Pre-Submission Checklist (optional but appreciated):
+
+- [ ] I have included relevant documentation updates (stored in /docs)
+- [ ] I have read `docs/CONTRIBUTING.md`
+- [ ] I have read `docs/ROADMAP.md`
+
+### OS Tests (optional but appreciated):
+
+- [ ] Tested on Windows
+- [ ] Tested on MacOS
+- [ ] Tested on Linux
open-interpreter/.github/workflows/potential-duplicates.yml ADDED
@@ -0,0 +1,31 @@
+name: Potential Duplicates
+on:
+  issues:
+    types: [opened, edited]
+jobs:
+  run:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: wow-actions/potential-duplicates@v1
+        with:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          # Issue title filter work with anymatch https://www.npmjs.com/package/anymatch.
+          # Any matched issue will stop detection immediately.
+          # You can specify multi filters in each line.
+          filter: ''
+          # Exclude keywords in title before detecting.
+          exclude: ''
+          # Label to set, when potential duplicates are detected.
+          label: potential-duplicate
+          # Get issues with state to compare. Supported state: 'all', 'closed', 'open'.
+          state: all
+          # If similarity is higher than this threshold([0,1]), issue will be marked as duplicate.
+          threshold: 0.6
+          # Reactions to be add to comment when potential duplicates are detected.
+          # Available reactions: "-1", "+1", "confused", "laugh", "heart", "hooray", "rocket", "eyes"
+          reactions: 'eyes, confused'
+          # Comment to post when potential duplicates are detected.
+          comment: >
+            Potential duplicates: {{#issues}}
+              - [#{{ number }}] {{ title }} ({{ accuracy }}%)
+            {{/issues}}
open-interpreter/.github/workflows/python-package.yml ADDED
@@ -0,0 +1,37 @@
+name: Build and Test
+
+on:
+  push:
+    branches: ["main"]
+  pull_request:
+    branches: ["main"]
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: true
+      matrix:
+        python-version: ["3.10", "3.12"]
+
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v3
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Install poetry
+        run: |
+          curl -sSL https://install.python-poetry.org | python3 -
+      - name: Install dependencies
+        run: |
+          # Update poetry to the latest version.
+          poetry self update
+          # Ensure dependencies are installed without relying on a lock file.
+          poetry update
+          poetry install
+      - name: Test with pytest
+        run: |
+          poetry run pytest -s -x
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
open-interpreter/.gitignore ADDED
@@ -0,0 +1,237 @@
+llama.log
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# poetry
+# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+# in version control.
+# https://pdm.fming.dev/#use-with-ide
+.pdm.toml
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/
+
+# General
+.DS_Store
+.AppleDouble
+.LSOverride
+
+# Icon must end with two \r
+Icon
+
+
+# Thumbnails
+._*
+
+# Files that might appear in the root of a volume
+.DocumentRevisions-V100
+.fseventsd
+.Spotlight-V100
+.TemporaryItems
+.Trashes
+.VolumeIcon.icns
+.com.apple.timemachine.donotpresent
+
+# Directories potentially created on remote AFP share
+.AppleDB
+.AppleDesktop
+Network Trash Folder
+Temporary Items
+.apdisk
+
+# Windows thumbnail cache files
+Thumbs.db
+Thumbs.db:encryptable
+ehthumbs.db
+ehthumbs_vista.db
+
+# Dump file
+*.stackdump
+
+# Folder config file
+[Dd]esktop.ini
+
+# Recycle Bin used on file shares
+$RECYCLE.BIN/
+
+# Windows Installer files
+*.cab
+*.msi
+*.msix
+*.msm
+*.msp
+
+# Windows shortcuts
+*.lnk
+
+.vscode/*
+!.vscode/settings.json
+!.vscode/tasks.json
+!.vscode/launch.json
+!.vscode/extensions.json
+!.vscode/*.code-snippets
+
+# Local History for Visual Studio Code
+.history/
+
+# Built Visual Studio Code Extensions
+*.vsix
+
+# Ignore the .replit configuration file
+.replit
+
+# Ignore Nix directories
+nix/
+
+# Ignore the replit.nix configuration file
+replit.nix
+
+# Ignore misc directory
+misc/
+
+# Ignore litellm_uuid.txt
+litellm_uuid.txt
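Git's ignore rules (negation with `!`, trailing `/` for directories, `**` globs) are richer than plain shell globbing, but for the simple patterns above the matching behaves like `fnmatch`. A minimal sketch, checking a few patterns from this file against bare filenames:

```python
from fnmatch import fnmatch

# A few simple patterns taken from the .gitignore above.
patterns = ["*.py[cod]", "*.so", "Thumbs.db", "*.lnk"]


def ignored(filename: str) -> bool:
    """True if the bare filename matches any of the glob patterns."""
    return any(fnmatch(filename, p) for p in patterns)


print(ignored("module.pyc"))  # True: matches *.py[cod]
print(ignored("module.py"))   # False: 'py' must be followed by c, o, or d
print(ignored("Thumbs.db"))   # True: literal match
```

Note this deliberately skips path-aware rules such as `.vscode/*` with its `!.vscode/settings.json` exceptions; git evaluates those against the full path, with later negations overriding earlier matches.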
open-interpreter/.pre-commit-config.yaml ADDED
@@ -0,0 +1,15 @@
+repos:
+  # Using this mirror lets us use mypyc-compiled black, which is 2x faster
+  - repo: https://github.com/psf/black-pre-commit-mirror
+    rev: 23.10.1
+    hooks:
+      - id: black
+        # It is recommended to specify the latest version of Python
+        # supported by your project here, or alternatively use
+        # pre-commit's default_language_version, see
+        # https://pre-commit.com/#top_level-default_language_version
+        language_version: python3.11
+  - repo: https://github.com/PyCQA/isort
+    rev: 5.12.0
+    hooks:
+      - id: isort
open-interpreter/LICENSE ADDED
@@ -0,0 +1,660 @@
+                    GNU AFFERO GENERAL PUBLIC LICENSE
+                       Version 3, 19 November 2007
+
+Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+Everyone is permitted to copy and distribute verbatim copies
+of this license document, but changing it is not allowed.
+
+                            Preamble
+
+The GNU Affero General Public License is a free, copyleft license for
+software and other kinds of works, specifically designed to ensure
+cooperation with the community in the case of network server software.
+
+The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+our General Public Licenses are intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.
+
+When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+Developers that use our General Public Licenses protect your rights
+with two steps: (1) assert copyright on the software, and (2) offer
+you this License which gives you legal permission to copy, distribute
+and/or modify the software.
+
+A secondary benefit of defending all users' freedom is that
+improvements made in alternate versions of the program, if they
+receive widespread use, become available for other developers to
+incorporate. Many developers of free software are heartened and
+encouraged by the resulting cooperation. However, in the case of
+software used on network servers, this result may fail to come about.
+The GNU General Public License permits making a modified version and
+letting the public access it on a server without ever releasing its
+source code to the public.
+
+The GNU Affero General Public License is designed specifically to
+ensure that, in such cases, the modified source code becomes available
+to the community. It requires the operator of a network server to
+provide the source code of the modified version running there to the
+users of that server. Therefore, public use of a modified version, on
+a publicly accessible server, gives the public access to the source
+code of the modified version.
+
+An older license, called the Affero General Public License and
+published by Affero, was designed to accomplish similar goals. This is
+a different license, not a version of the Affero GPL, but Affero has
+released a new version of the Affero GPL which permits relicensing under
+this license.
+
+The precise terms and conditions for copying, distribution and
+modification follow.
+
+                       TERMS AND CONDITIONS
+
+0. Definitions.
+
+"This License" refers to version 3 of the GNU Affero General Public License.
+
+"Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+"The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+1. Source Code.
+
+The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+The Corresponding Source for a work in source code form is that
+same work.
+
+2. Basic Permissions.
+
+All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+4. Conveying Verbatim Copies.
+
+You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+5. Conveying Modified Source Versions.
+
+You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+    a) The work must carry prominent notices stating that you modified
+    it, and giving a relevant date.
+
+    b) The work must carry prominent notices stating that it is
+    released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to
+    "keep intact all notices".
+
+    c) You must license the entire work, as a whole, under this
+    License to anyone who comes into possession of a copy. This
+    License will therefore apply, along with any applicable section 7
+    additional terms, to the whole of the work, and all its parts,
+    regardless of how they are packaged. This License gives no
+    permission to license the work in any other way, but it does not
+    invalidate such permission if you have separately received it.
+
+    d) If the work has interactive user interfaces, each must display
+    Appropriate Legal Notices; however, if the Program has interactive
+    interfaces that do not display Appropriate Legal Notices, your
+    work need not make them do so.
+
+A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+6. Conveying Non-Source Forms.
+
+You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+    a) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by the
+    Corresponding Source fixed on a durable physical medium
+    customarily used for software interchange.
+
+    b) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by a
+    written offer, valid for at least three years and valid for as
+    long as you offer spare parts or customer support for that product
+    model, to give anyone who possesses the object code either (1) a
+    copy of the Corresponding Source for all the software in the
+    product that is covered by this License, on a durable physical
+    medium customarily used for software interchange, for a price no
+    more than your reasonable cost of physically performing this
+    conveying of source, or (2) access to copy the
+    Corresponding Source from a network server at no charge.
+
+    c) Convey individual copies of the object code with a copy of the
+    written offer to provide the Corresponding Source. This
+    alternative is allowed only occasionally and noncommercially, and
+    only if you received the object code with such an offer, in accord
+    with subsection 6b.
+
+    d) Convey the object code by offering access from a designated
+    place (gratis or for a charge), and offer equivalent access to the
+    Corresponding Source in the same way through the same place at no
+    further charge. You need not require recipients to copy the
+    Corresponding Source along with the object code. If the place to
+    copy the object code is a network server, the Corresponding Source
+    may be on a different server (operated by you or a third party)
+    that supports equivalent copying facilities, provided you maintain
+    clear directions next to the object code saying where to find the
+    Corresponding Source. Regardless of what server hosts the
+    Corresponding Source, you remain obligated to ensure that it is
+    available for as long as needed to satisfy these requirements.
+
+    e) Convey the object code using peer-to-peer transmission, provided
+    you inform other peers where the object code and Corresponding
+    Source of the work are being offered to the general public at no
+    charge under subsection 6d.
+
+A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
291
+ of the particular user or of the way in which the particular user
292
+ actually uses, or expects or is expected to use, the product. A product
293
+ is a consumer product regardless of whether the product has substantial
294
+ commercial, industrial or non-consumer uses, unless such uses represent
295
+ the only significant mode of use of the product.
296
+
297
+ "Installation Information" for a User Product means any methods,
298
+ procedures, authorization keys, or other information required to install
299
+ and execute modified versions of a covered work in that User Product from
300
+ a modified version of its Corresponding Source. The information must
301
+ suffice to ensure that the continued functioning of the modified object
302
+ code is in no case prevented or interfered with solely because
303
+ modification has been made.
304
+
305
+ If you convey an object code work under this section in, or with, or
306
+ specifically for use in, a User Product, and the conveying occurs as
307
+ part of a transaction in which the right of possession and use of the
308
+ User Product is transferred to the recipient in perpetuity or for a
309
+ fixed term (regardless of how the transaction is characterized), the
310
+ Corresponding Source conveyed under this section must be accompanied
311
+ by the Installation Information. But this requirement does not apply
312
+ if neither you nor any third party retains the ability to install
313
+ modified object code on the User Product (for example, the work has
314
+ been installed in ROM).
315
+
316
+ The requirement to provide Installation Information does not include a
317
+ requirement to continue to provide support service, warranty, or updates
318
+ for a work that has been modified or installed by the recipient, or for
319
+ the User Product in which it has been modified or installed. Access to a
320
+ network may be denied when the modification itself materially and
321
+ adversely affects the operation of the network or violates the rules and
322
+ protocols for communication across the network.
323
+
324
+ Corresponding Source conveyed, and Installation Information provided,
325
+ in accord with this section must be in a format that is publicly
326
+ documented (and with an implementation available to the public in
327
+ source code form), and must require no special password or key for
328
+ unpacking, reading or copying.
329
+
330
+ 7. Additional Terms.
331
+
332
+ "Additional permissions" are terms that supplement the terms of this
333
+ License by making exceptions from one or more of its conditions.
334
+ Additional permissions that are applicable to the entire Program shall
335
+ be treated as though they were included in this License, to the extent
336
+ that they are valid under applicable law. If additional permissions
337
+ apply only to part of the Program, that part may be used separately
338
+ under those permissions, but the entire Program remains governed by
339
+ this License without regard to the additional permissions.
340
+
341
+ When you convey a copy of a covered work, you may at your option
342
+ remove any additional permissions from that copy, or from any part of
343
+ it. (Additional permissions may be written to require their own
344
+ removal in certain cases when you modify the work.) You may place
345
+ additional permissions on material, added by you to a covered work,
346
+ for which you have or can give appropriate copyright permission.
347
+
348
+ Notwithstanding any other provision of this License, for material you
349
+ add to a covered work, you may (if authorized by the copyright holders of
350
+ that material) supplement the terms of this License with terms:
351
+
352
+ a) Disclaiming warranty or limiting liability differently from the
353
+ terms of sections 15 and 16 of this License; or
354
+
355
+ b) Requiring preservation of specified reasonable legal notices or
356
+ author attributions in that material or in the Appropriate Legal
357
+ Notices displayed by works containing it; or
358
+
359
+ c) Prohibiting misrepresentation of the origin of that material, or
360
+ requiring that modified versions of such material be marked in
361
+ reasonable ways as different from the original version; or
362
+
363
+ d) Limiting the use for publicity purposes of names of licensors or
364
+ authors of the material; or
365
+
366
+ e) Declining to grant rights under trademark law for use of some
367
+ trade names, trademarks, or service marks; or
368
+
369
+ f) Requiring indemnification of licensors and authors of that
370
+ material by anyone who conveys the material (or modified versions of
371
+ it) with contractual assumptions of liability to the recipient, for
372
+ any liability that these contractual assumptions directly impose on
373
+ those licensors and authors.
374
+
375
+ All other non-permissive additional terms are considered "further
376
+ restrictions" within the meaning of section 10. If the Program as you
377
+ received it, or any part of it, contains a notice stating that it is
378
+ governed by this License along with a term that is a further
379
+ restriction, you may remove that term. If a license document contains
380
+ a further restriction but permits relicensing or conveying under this
381
+ License, you may add to a covered work material governed by the terms
382
+ of that license document, provided that the further restriction does
383
+ not survive such relicensing or conveying.
384
+
385
+ If you add terms to a covered work in accord with this section, you
386
+ must place, in the relevant source files, a statement of the
387
+ additional terms that apply to those files, or a notice indicating
388
+ where to find the applicable terms.
389
+
390
+ Additional terms, permissive or non-permissive, may be stated in the
391
+ form of a separately written license, or stated as exceptions;
392
+ the above requirements apply either way.
393
+
394
+ 8. Termination.
395
+
396
+ You may not propagate or modify a covered work except as expressly
397
+ provided under this License. Any attempt otherwise to propagate or
398
+ modify it is void, and will automatically terminate your rights under
399
+ this License (including any patent licenses granted under the third
400
+ paragraph of section 11).
401
+
402
+ However, if you cease all violation of this License, then your
403
+ license from a particular copyright holder is reinstated (a)
404
+ provisionally, unless and until the copyright holder explicitly and
405
+ finally terminates your license, and (b) permanently, if the copyright
406
+ holder fails to notify you of the violation by some reasonable means
407
+ prior to 60 days after the cessation.
408
+
409
+ Moreover, your license from a particular copyright holder is
410
+ reinstated permanently if the copyright holder notifies you of the
411
+ violation by some reasonable means, this is the first time you have
412
+ received notice of violation of this License (for any work) from that
413
+ copyright holder, and you cure the violation prior to 30 days after
414
+ your receipt of the notice.
415
+
416
+ Termination of your rights under this section does not terminate the
417
+ licenses of parties who have received copies or rights from you under
418
+ this License. If your rights have been terminated and not permanently
419
+ reinstated, you do not qualify to receive new licenses for the same
420
+ material under section 10.
421
+
422
+ 9. Acceptance Not Required for Having Copies.
423
+
424
+ You are not required to accept this License in order to receive or
425
+ run a copy of the Program. Ancillary propagation of a covered work
426
+ occurring solely as a consequence of using peer-to-peer transmission
427
+ to receive a copy likewise does not require acceptance. However,
428
+ nothing other than this License grants you permission to propagate or
429
+ modify any covered work. These actions infringe copyright if you do
430
+ not accept this License. Therefore, by modifying or propagating a
431
+ covered work, you indicate your acceptance of this License to do so.
432
+
433
+ 10. Automatic Licensing of Downstream Recipients.
434
+
435
+ Each time you convey a covered work, the recipient automatically
436
+ receives a license from the original licensors, to run, modify and
437
+ propagate that work, subject to this License. You are not responsible
438
+ for enforcing compliance by third parties with this License.
439
+
440
+ An "entity transaction" is a transaction transferring control of an
441
+ organization, or substantially all assets of one, or subdividing an
442
+ organization, or merging organizations. If propagation of a covered
443
+ work results from an entity transaction, each party to that
444
+ transaction who receives a copy of the work also receives whatever
445
+ licenses to the work the party's predecessor in interest had or could
446
+ give under the previous paragraph, plus a right to possession of the
447
+ Corresponding Source of the work from the predecessor in interest, if
448
+ the predecessor has it or can get it with reasonable efforts.
449
+
450
+ You may not impose any further restrictions on the exercise of the
451
+ rights granted or affirmed under this License. For example, you may
452
+ not impose a license fee, royalty, or other charge for exercise of
453
+ rights granted under this License, and you may not initiate litigation
454
+ (including a cross-claim or counterclaim in a lawsuit) alleging that
455
+ any patent claim is infringed by making, using, selling, offering for
456
+ sale, or importing the Program or any portion of it.
457
+
458
+ 11. Patents.
459
+
460
+ A "contributor" is a copyright holder who authorizes use under this
461
+ License of the Program or a work on which the Program is based. The
462
+ work thus licensed is called the contributor's "contributor version".
463
+
464
+ A contributor's "essential patent claims" are all patent claims
465
+ owned or controlled by the contributor, whether already acquired or
466
+ hereafter acquired, that would be infringed by some manner, permitted
467
+ by this License, of making, using, or selling its contributor version,
468
+ but do not include claims that would be infringed only as a
469
+ consequence of further modification of the contributor version. For
470
+ purposes of this definition, "control" includes the right to grant
471
+ patent sublicenses in a manner consistent with the requirements of
472
+ this License.
473
+
474
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
475
+ patent license under the contributor's essential patent claims, to
476
+ make, use, sell, offer for sale, import and otherwise run, modify and
477
+ propagate the contents of its contributor version.
478
+
479
+ In the following three paragraphs, a "patent license" is any express
480
+ agreement or commitment, however denominated, not to enforce a patent
481
+ (such as an express permission to practice a patent or covenant not to
482
+ sue for patent infringement). To "grant" such a patent license to a
483
+ party means to make such an agreement or commitment not to enforce a
484
+ patent against the party.
485
+
486
+ If you convey a covered work, knowingly relying on a patent license,
487
+ and the Corresponding Source of the work is not available for anyone
488
+ to copy, free of charge and under the terms of this License, through a
489
+ publicly available network server or other readily accessible means,
490
+ then you must either (1) cause the Corresponding Source to be so
491
+ available, or (2) arrange to deprive yourself of the benefit of the
492
+ patent license for this particular work, or (3) arrange, in a manner
493
+ consistent with the requirements of this License, to extend the patent
494
+ license to downstream recipients. "Knowingly relying" means you have
495
+ actual knowledge that, but for the patent license, your conveying the
496
+ covered work in a country, or your recipient's use of the covered work
497
+ in a country, would infringe one or more identifiable patents in that
498
+ country that you have reason to believe are valid.
499
+
500
+ If, pursuant to or in connection with a single transaction or
501
+ arrangement, you convey, or propagate by procuring conveyance of, a
502
+ covered work, and grant a patent license to some of the parties
503
+ receiving the covered work authorizing them to use, propagate, modify
504
+ or convey a specific copy of the covered work, then the patent license
505
+ you grant is automatically extended to all recipients of the covered
506
+ work and works based on it.
507
+
508
+ A patent license is "discriminatory" if it does not include within
509
+ the scope of its coverage, prohibits the exercise of, or is
510
+ conditioned on the non-exercise of one or more of the rights that are
511
+ specifically granted under this License. You may not convey a covered
512
+ work if you are a party to an arrangement with a third party that is
513
+ in the business of distributing software, under which you make payment
514
+ to the third party based on the extent of your activity of conveying
515
+ the work, and under which the third party grants, to any of the
516
+ parties who would receive the covered work from you, a discriminatory
517
+ patent license (a) in connection with copies of the covered work
518
+ conveyed by you (or copies made from those copies), or (b) primarily
519
+ for and in connection with specific products or compilations that
520
+ contain the covered work, unless you entered into that arrangement,
521
+ or that patent license was granted, prior to 28 March 2007.
522
+
523
+ Nothing in this License shall be construed as excluding or limiting
524
+ any implied license or other defenses to infringement that may
525
+ otherwise be available to you under applicable patent law.
526
+
527
+ 12. No Surrender of Others' Freedom.
528
+
529
+ If conditions are imposed on you (whether by court order, agreement or
530
+ otherwise) that contradict the conditions of this License, they do not
531
+ excuse you from the conditions of this License. If you cannot convey a
532
+ covered work so as to satisfy simultaneously your obligations under this
533
+ License and any other pertinent obligations, then as a consequence you may
534
+ not convey it at all. For example, if you agree to terms that obligate you
535
+ to collect a royalty for further conveying from those to whom you convey
536
+ the Program, the only way you could satisfy both those terms and this
537
+ License would be to refrain entirely from conveying the Program.
538
+
539
+ 13. Remote Network Interaction; Use with the GNU General Public License.
540
+
541
+ Notwithstanding any other provision of this License, if you modify the
542
+ Program, your modified version must prominently offer all users
543
+ interacting with it remotely through a computer network (if your version
544
+ supports such interaction) an opportunity to receive the Corresponding
545
+ Source of your version by providing access to the Corresponding Source
546
+ from a network server at no charge, through some standard or customary
547
+ means of facilitating copying of software. This Corresponding Source
548
+ shall include the Corresponding Source for any work covered by version 3
549
+ of the GNU General Public License that is incorporated pursuant to the
550
+ following paragraph.
551
+
552
+ Notwithstanding any other provision of this License, you have
553
+ permission to link or combine any covered work with a work licensed
554
+ under version 3 of the GNU General Public License into a single
555
+ combined work, and to convey the resulting work. The terms of this
556
+ License will continue to apply to the part which is the covered work,
557
+ but the work with which it is combined will remain governed by version
558
+ 3 of the GNU General Public License.
559
+
560
+ 14. Revised Versions of this License.
561
+
562
+ The Free Software Foundation may publish revised and/or new versions of
563
+ the GNU Affero General Public License from time to time. Such new versions
564
+ will be similar in spirit to the present version, but may differ in detail to
565
+ address new problems or concerns.
566
+
567
+ Each version is given a distinguishing version number. If the
568
+ Program specifies that a certain numbered version of the GNU Affero General
569
+ Public License "or any later version" applies to it, you have the
570
+ option of following the terms and conditions either of that numbered
571
+ version or of any later version published by the Free Software
572
+ Foundation. If the Program does not specify a version number of the
573
+ GNU Affero General Public License, you may choose any version ever published
574
+ by the Free Software Foundation.
575
+
576
+ If the Program specifies that a proxy can decide which future
577
+ versions of the GNU Affero General Public License can be used, that proxy's
578
+ public statement of acceptance of a version permanently authorizes you
579
+ to choose that version for the Program.
580
+
581
+ Later license versions may give you additional or different
582
+ permissions. However, no additional obligations are imposed on any
583
+ author or copyright holder as a result of your choosing to follow a
584
+ later version.
585
+
586
+ 15. Disclaimer of Warranty.
587
+
588
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
589
+ APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
590
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
591
+ OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
592
+ THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
593
+ PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
594
+ IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
595
+ ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
596
+
597
+ 16. Limitation of Liability.
598
+
599
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
600
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
601
+ THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
602
+ GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
603
+ USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
604
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
605
+ PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
606
+ EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
607
+ SUCH DAMAGES.
608
+
609
+ 17. Interpretation of Sections 15 and 16.
610
+
611
+ If the disclaimer of warranty and limitation of liability provided
612
+ above cannot be given local legal effect according to their terms,
613
+ reviewing courts shall apply local law that most closely approximates
614
+ an absolute waiver of all civil liability in connection with the
615
+ Program, unless a warranty or assumption of liability accompanies a
616
+ copy of the Program in return for a fee.
617
+
618
+ END OF TERMS AND CONDITIONS
619
+
620
+ How to Apply These Terms to Your New Programs
621
+
622
+ If you develop a new program, and you want it to be of the greatest
623
+ possible use to the public, the best way to achieve this is to make it
624
+ free software which everyone can redistribute and change under these terms.
625
+
626
+ To do so, attach the following notices to the program. It is safest
627
+ to attach them to the start of each source file to most effectively
628
+ state the exclusion of warranty; and each file should have at least
629
+ the "copyright" line and a pointer to where the full notice is found.
630
+
631
+ <one line to give the program's name and a brief idea of what it does.>
632
+ Copyright (C) <year> <name of author>
633
+
634
+ This program is free software: you can redistribute it and/or modify
635
+ it under the terms of the GNU Affero General Public License as published
636
+ by the Free Software Foundation, either version 3 of the License, or
637
+ (at your option) any later version.
638
+
639
+ This program is distributed in the hope that it will be useful,
640
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
641
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
642
+ GNU Affero General Public License for more details.
643
+
644
+ You should have received a copy of the GNU Affero General Public License
645
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
646
+
647
+ Also add information on how to contact you by electronic and paper mail.
648
+
649
+ If your software can interact with users remotely through a computer
650
+ network, you should also make sure that it provides a way for users to
651
+ get its source. For example, if your program is a web application, its
652
+ interface could display a "Source" link that leads users to an archive
653
+ of the code. There are many ways you could offer source, and different
654
+ solutions will be better for different programs; see section 13 for the
655
+ specific requirements.
656
+
657
+ You should also get your employer (if you work as a programmer) or school,
658
+ if any, to sign a "copyright disclaimer" for the program, if necessary.
659
+ For more information on this, and how to apply and follow the GNU AGPL, see
660
+ <http://www.gnu.org/licenses/>.
open-interpreter/README.md ADDED
@@ -0,0 +1,413 @@
<h1 align="center">● Open Interpreter</h1>

<p align="center">
<a href="https://discord.gg/Hvz9Axh84z">
<img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white"/></a>
<a href="https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"/></a>
<a href="https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"/></a>
<a href="https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/README_ES.md"> <img src="https://img.shields.io/badge/Español-white.svg" alt="ES doc"/></a>
<a href="https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/README_IN.md"><img src="https://img.shields.io/badge/Hindi-white.svg" alt="IN doc"/></a>
<a href="https://github.com/OpenInterpreter/open-interpreter/blob/main/LICENSE"><img src="https://img.shields.io/static/v1?label=license&message=AGPL&color=white&style=flat" alt="License"/></a>
<br>
<br>
<br><a href="https://0ggfznkwh4j.typeform.com/to/G21i9lJ2">Get early access to the desktop app</a>‎ ‎ |‎ ‎ <a href="https://docs.openinterpreter.com/">Documentation</a><br>
</p>

<br>

![poster](https://github.com/KillianLucas/open-interpreter/assets/63927363/08f0d493-956b-4d49-982e-67d4b20c4b56)

<br>
<p align="center">
<strong>The New Computer Update</strong> introduced <strong><code>--os</code></strong> and a new <strong>Computer API</strong>. <a href="https://changes.openinterpreter.com/log/the-new-computer-update">Read On →</a>
</p>
<br>

```shell
pip install open-interpreter
```

> Not working? Read our [setup guide](https://docs.openinterpreter.com/getting-started/setup).

```shell
interpreter
```

<br>

**Open Interpreter** lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `$ interpreter` after installing.

This provides a natural-language interface to your computer's general-purpose capabilities:

- Create and edit photos, videos, PDFs, etc.
- Control a Chrome browser to perform research
- Plot, clean, and analyze large datasets
- ...etc.

**⚠️ Note: You'll be asked to approve code before it's run.**

<br>

## Demo

https://github.com/KillianLucas/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60

#### An interactive demo is also available on Google Colab:

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)

#### Along with an example voice interface, inspired by _Her_:

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)

## Quick Start

```shell
pip install open-interpreter
```

### Terminal

After installation, simply run `interpreter`:

```shell
interpreter
```

### Python

```python
from interpreter import interpreter

interpreter.chat("Plot AAPL and META's normalized stock prices") # Executes a single command
interpreter.chat() # Starts an interactive chat
```

### GitHub Codespaces

Press the `,` key on this repository's GitHub page to create a codespace. After a moment, you'll get a cloud virtual machine with open-interpreter pre-installed. You can then interact with it directly and freely approve the system commands it runs, without risking damage to your own machine.

## Comparison to ChatGPT's Code Interpreter

OpenAI's release of [Code Interpreter](https://openai.com/blog/chatgpt-plugins#code-interpreter) with GPT-4 presents a fantastic opportunity to accomplish real-world tasks with ChatGPT.

However, OpenAI's service is hosted, closed-source, and heavily restricted:

- No internet access.
- [Limited set of pre-installed packages](https://wfhbrian.com/mastering-chatgpts-code-interpreter-list-of-python-packages/).
- 100 MB maximum upload, 120-second runtime limit.
- State is cleared (along with any generated files or links) when the environment dies.

---

Open Interpreter overcomes these limitations by running in your local environment. It has full access to the internet, isn't restricted by time or file size, and can utilize any package or library.

This combines the power of GPT-4's Code Interpreter with the flexibility of your local development environment.

## Commands

**Update:** The Generator Update (0.1.5) introduced streaming:

```python
message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
    print(chunk)
```

### Interactive Chat

To start an interactive chat in your terminal, either run `interpreter` from the command line:

```shell
interpreter
```

Or `interpreter.chat()` from a .py file:

```python
interpreter.chat()
```

**You can also stream each chunk:**

```python
message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
    print(chunk)
```
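Each streamed chunk is a small dict, so a common pattern is to filter and accumulate them into a full reply. Below is a minimal, self-contained sketch; the exact chunk schema (`role`/`type`/`content` keys) is an assumption for illustration and may vary between versions, and `fake_stream` stands in for the real `interpreter.chat(..., stream=True)` generator:

```python
# Sketch: accumulate streamed chunks into one text reply.
# The chunk shape below is an assumed schema for illustration;
# check the output of your installed version.
def fake_stream():
    # Stand-in for interpreter.chat(message, display=False, stream=True)
    yield {"role": "assistant", "type": "message", "content": "We are "}
    yield {"role": "assistant", "type": "code", "content": "import platform"}
    yield {"role": "assistant", "type": "message", "content": "on macOS."}

def collect_text(chunks):
    # Keep only plain-text "message" chunks and join their content
    return "".join(c.get("content", "") for c in chunks if c.get("type") == "message")

print(collect_text(fake_stream()))  # → We are on macOS.
```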

### Programmatic Chat

For more precise control, you can pass messages directly to `.chat(message)`:

```python
interpreter.chat("Add subtitles to all videos in /videos.")

# ... Streams output to your terminal, completes task ...

interpreter.chat("These look great but can you make the subtitles bigger?")

# ...
```

### Start a New Chat

In Python, Open Interpreter remembers conversation history. If you want to start fresh, you can reset it:

```python
interpreter.messages = []
```

### Save and Restore Chats

`interpreter.chat()` returns a list of messages, which can be used to resume a conversation with `interpreter.messages = messages`:

```python
messages = interpreter.chat("My name is Killian.") # Save messages to 'messages'
interpreter.messages = [] # Reset interpreter ("Killian" will be forgotten)

interpreter.messages = messages # Resume chat from 'messages' ("Killian" will be remembered)
```
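Because the message history is just a list of dicts, it can also be serialized to resume a chat in a later session. Here is a standard-library sketch; the file path and message shape are illustrative assumptions:

```python
import json
import os
import tempfile

# Messages shaped like Open Interpreter's message dicts (assumed shape).
messages = [{"role": "user", "type": "message", "content": "My name is Killian."}]

# Save at the end of one session...
path = os.path.join(tempfile.gettempdir(), "oi_messages.json")
with open(path, "w") as f:
    json.dump(messages, f)

# ...and load at the start of the next, then: interpreter.messages = restored
with open(path) as f:
    restored = json.load(f)

assert restored == messages
```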

### Customize System Message

You can inspect and configure Open Interpreter's system message to extend its functionality, modify permissions, or give it more context.

```python
interpreter.system_message += """
Run shell commands with -y so the user doesn't have to confirm them.
"""
print(interpreter.system_message)
```

### Change your Language Model

Open Interpreter uses [LiteLLM](https://docs.litellm.ai/docs/providers/) to connect to hosted language models.

You can change the model by setting the model parameter:

```shell
interpreter --model gpt-3.5-turbo
interpreter --model claude-2
interpreter --model command-nightly
```

In Python, set the model on the object:

```python
interpreter.llm.model = "gpt-3.5-turbo"
```

[Find the appropriate "model" string for your language model here.](https://docs.litellm.ai/docs/providers/)

### Running Open Interpreter locally

#### Terminal

Open Interpreter can use an OpenAI-compatible server to run models locally (LM Studio, Jan.ai, Ollama, etc.).

Simply run `interpreter` with the api_base URL of your inference server (for LM Studio it is `http://localhost:1234/v1` by default):

```shell
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
```

Alternatively, you can use Llamafile without installing any third-party software, just by running

```shell
interpreter --local
```

For a more detailed guide, check out [this video by Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H).

**How to run LM Studio in the background.**

1. Download [https://lmstudio.ai/](https://lmstudio.ai/) then start it.
2. Select a model then click **↓ Download**.
3. Click the **↔️** button on the left (below 💬).
4. Select your model at the top, then click **Start Server**.

Once the server is running, you can begin your conversation with Open Interpreter.

> **Note:** Local mode sets your `context_window` to 3000, and your `max_tokens` to 1000. If your model has different requirements, set these parameters manually (see below).

#### Python

Our Python package gives you more control over each setting. To replicate and connect to LM Studio, use these settings:

```python
from interpreter import interpreter

interpreter.offline = True # Disables online features like Open Procedures
interpreter.llm.model = "openai/x" # Tells OI to send messages in OpenAI's format
245
+ interpreter.llm.api_key = "fake_key" # LiteLLM, which we use to talk to LM Studio, requires this
246
+ interpreter.llm.api_base = "http://localhost:1234/v1" # Point this at any OpenAI compatible server
247
+
248
+ interpreter.chat()
249
+ ```
250
+
251
+ #### Context Window, Max Tokens
252
+
253
+ You can modify the `max_tokens` and `context_window` (in tokens) of locally running models.
254
+
255
+ For local mode, smaller context windows will use less RAM, so we recommend trying a much shorter window (~1000) if it's failing / if it's slow. Make sure `max_tokens` is less than `context_window`.
256
+
257
+ ```shell
258
+ interpreter --local --max_tokens 1000 --context_window 3000
259
+ ```
260
+
261
+ ### Verbose mode
262
+
263
+ To help you inspect Open Interpreter we have a `--verbose` mode for debugging.
264
+
265
+ You can activate verbose mode by using its flag (`interpreter --verbose`), or mid-chat:
266
+
267
+ ```shell
268
+ $ interpreter
269
+ ...
270
+ > %verbose true <- Turns on verbose mode
271
+
272
+ > %verbose false <- Turns off verbose mode
273
+ ```
274
+
275
+ ### Interactive Mode Commands
276
+
277
+ In the interactive mode, you can use the below commands to enhance your experience. Here's a list of available commands:
278
+
279
+ **Available Commands:**
280
+
281
+ - `%verbose [true/false]`: Toggle verbose mode. Without arguments or with `true` it
282
+ enters verbose mode. With `false` it exits verbose mode.
283
+ - `%reset`: Resets the current session's conversation.
284
+ - `%undo`: Removes the previous user message and the AI's response from the message history.
285
+ - `%tokens [prompt]`: (_Experimental_) Calculate the tokens that will be sent with the next prompt as context and estimate their cost. Optionally calculate the tokens and estimated cost of a `prompt` if one is provided. Relies on [LiteLLM's `cost_per_token()` method](https://docs.litellm.ai/docs/completion/token_usage#2-cost_per_token) for estimated costs.
286
+ - `%help`: Show the help message.
287
+
288
+ ### Configuration / Profiles
289
+
290
+ Open Interpreter allows you to set default behaviors using `yaml` files.
291
+
292
+ This provides a flexible way to configure the interpreter without changing command-line arguments every time.
293
+
294
+ Run the following command to open the profiles directory:
295
+
296
+ ```
297
+ interpreter --profiles
298
+ ```
299
+
300
+ You can add `yaml` files there. The default profile is named `default.yaml`.
301
+
302
+ #### Multiple Profiles
303
+
304
+ Open Interpreter supports multiple `yaml` files, allowing you to easily switch between configurations:
305
+
306
+ ```
307
+ interpreter --profile my_profile.yaml
308
+ ```
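As a sketch, a profile sets the same attributes you can set in Python. The exact keys below are assumptions based on the parameters shown elsewhere in this README, so check the generated `default.yaml` for the canonical names:

```yaml
llm:
  model: "gpt-3.5-turbo"
  context_window: 3000
  max_tokens: 1000
auto_run: false
```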
## Sample FastAPI Server

The generator update enables Open Interpreter to be controlled via HTTP REST endpoints:

```python
# server.py

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from interpreter import interpreter

app = FastAPI()

@app.get("/chat")
def chat_endpoint(message: str):
    def event_stream():
        for result in interpreter.chat(message, stream=True):
            yield f"data: {result}\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")

@app.get("/history")
def history_endpoint():
    return interpreter.messages
```

```shell
pip install fastapi uvicorn
uvicorn server:app --reload
```

You can also start a server identical to the one above by simply running `interpreter.server()`.

## Android

The step-by-step guide for installing Open Interpreter on your Android device can be found in the [open-interpreter-termux repo](https://github.com/MikeBirdTech/open-interpreter-termux).

## Safety Notice

Since generated code is executed in your local environment, it can interact with your files and system settings, potentially leading to unexpected outcomes like data loss or security risks.

**⚠️ Open Interpreter will ask for user confirmation before executing code.**

You can run `interpreter -y` or set `interpreter.auto_run = True` to bypass this confirmation, in which case:

- Be cautious when requesting commands that modify files or system settings.
- Watch Open Interpreter like a self-driving car, and be prepared to end the process by closing your terminal.
- Consider running Open Interpreter in a restricted environment like Google Colab or Replit. These environments are more isolated, reducing the risks of executing arbitrary code.

There is **experimental** support for a [safe mode](https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/SAFE_MODE.md) to help mitigate some risks.

## How Does it Work?

Open Interpreter equips a [function-calling language model](https://platform.openai.com/docs/guides/gpt/function-calling) with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.

We then stream the model's messages, code, and your system's outputs to the terminal as Markdown.
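Conceptually, the tool handed to the model looks something like the following toy sketch (illustrative only; the real implementation keeps persistent sessions, streams output, and supports many more languages):

```python
import subprocess
import sys

def exec_code(language: str, code: str) -> str:
    """Toy stand-in for the exec() tool: run `code` in `language`
    and return whatever it printed."""
    if language == "python":
        proc = subprocess.run(
            [sys.executable, "-c", code], capture_output=True, text=True
        )
    elif language == "shell":
        proc = subprocess.run(code, shell=True, capture_output=True, text=True)
    else:
        raise ValueError(f"unsupported language: {language}")
    return proc.stdout + proc.stderr
```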
# Access Documentation Offline

The full [documentation](https://docs.openinterpreter.com/) is accessible on-the-go without the need for an internet connection.

[Node](https://nodejs.org/en) is a prerequisite:

- Version 18.17.0 or any later 18.x.x version.
- Version 20.3.0 or any later 20.x.x version.
- Any version from 21.0.0 onwards, with no upper limit specified.

Install [Mintlify](https://mintlify.com/):

```bash
npm i -g mintlify@latest
```

Change into the docs directory and run the appropriate command:

```bash
# Assuming you're at the project's root directory
cd ./docs

# Run the documentation server
mintlify dev
```

A new browser window should open. The documentation will be available at [http://localhost:3000](http://localhost:3000) as long as the documentation server is running.

# Contributing

Thank you for your interest in contributing! We welcome involvement from the community.

Please see our [contributing guidelines](https://github.com/OpenInterpreter/open-interpreter/blob/main/docs/CONTRIBUTING.md) for more details on how to get involved.

# Roadmap

Visit [our roadmap](https://github.com/KillianLucas/open-interpreter/blob/main/docs/ROADMAP.md) to preview the future of Open Interpreter.

**Note**: This software is not affiliated with OpenAI.

![thumbnail-ncu](https://github.com/KillianLucas/open-interpreter/assets/63927363/1b19a5db-b486-41fd-a7a1-fe2028031686)

> Having access to a junior programmer working at the speed of your fingertips ... can make new workflows effortless and efficient, as well as open the benefits of programming to new audiences.
>
> — _OpenAI's Code Interpreter Release_

<br>
open-interpreter/docs/CONTRIBUTING.md ADDED
@@ -0,0 +1,91 @@
# ●

**Open Interpreter is a large, open-source initiative to build a standard interface between language models and computers.**

There are many ways to contribute, from helping others on [Github](https://github.com/KillianLucas/open-interpreter/issues) or [Discord](https://discord.gg/6p3fD6rBVm) to writing documentation or improving code.

We depend on contributors like you. Let's build this.

## What should I work on?

First, please familiarize yourself with our [project scope](https://github.com/KillianLucas/open-interpreter/blob/main/docs/ROADMAP.md#whats-in-our-scope). Then, pick up a task from our [roadmap](https://github.com/KillianLucas/open-interpreter/blob/main/docs/ROADMAP.md) or work on solving an [issue](https://github.com/KillianLucas/open-interpreter/issues).

If you encounter a bug or have a feature in mind, don't hesitate to [open a new issue](https://github.com/KillianLucas/open-interpreter/issues/new/choose).

## Philosophy

This is a minimalist, **tightly scoped** project that places a premium on simplicity. We're skeptical of new extensions, integrations, and extra features. We would rather not extend the system if it adds nonessential complexity.

# Contribution Guidelines

1. Before taking on significant code changes, please discuss your ideas on [Discord](https://discord.gg/6p3fD6rBVm) to ensure they align with our vision. We want to keep the codebase simple and unintimidating for new users.
2. Fork the repository and create a new branch for your work.
3. Follow the [Running Your Local Fork](https://github.com/KillianLucas/open-interpreter/blob/main/docs/CONTRIBUTING.md#running-your-local-fork) guide below.
4. Make changes with clear code comments explaining your approach. Try to follow existing conventions in the code.
5. Follow the [Code Formatting and Linting](https://github.com/KillianLucas/open-interpreter/blob/main/docs/CONTRIBUTING.md#code-formatting-and-linting) guide below.
6. Open a PR into `main` linking any related issues. Provide detailed context on your changes.

We will review PRs when possible and work with you to integrate your contribution. Please be patient, as reviews take time. Once approved, your code will be merged.

## Running Your Local Fork

**Note: for anyone testing the new `--local`, `--os`, and `--local --os` modes: a plain `poetry install` does not install the optional dependencies, so these modes will throw errors. To test `--local` mode, run `poetry install -E local`. To test `--os` mode, run `poetry install -E os`. To test `--local --os` mode, run `poetry install -E local -E os`. You can edit the system messages for these modes in `interpreter/terminal_interface/profiles/defaults`.**

Once you've forked the code and created a new branch for your work, you can run the fork in CLI mode by following these steps:

1. `cd` into the project folder by running `cd open-interpreter`.
2. Install `poetry` [according to their documentation](https://python-poetry.org/docs/#installing-with-pipx), which will create a virtual environment for development and handle dependencies.
3. Install dependencies by running `poetry install`.
4. Run the program with `poetry run interpreter`. Run tests with `poetry run pytest -s -x`.

**Note**: This project uses [`black`](https://black.readthedocs.io/en/stable/index.html) and [`isort`](https://pypi.org/project/isort/) via a [`pre-commit`](https://pre-commit.com/) hook to ensure consistent code style. If you need to bypass it for some reason, you can `git commit` with the `--no-verify` flag.

### Installing New Dependencies

If you wish to install new dependencies into the project, please use `poetry add package-name`.

### Installing Developer Dependencies

If you need to install dependencies specific to development, like testing or formatting tools, please use `poetry add package-name --group dev`.

### Known Issues

For some, `poetry install` might hang on some dependencies. As a first step, try running the following command in your terminal:

`export PYTHON_KEYRING_BACKEND=keyring.backends.fail.Keyring`

Then run `poetry install` again. If this doesn't work, please join our [Discord community](https://discord.gg/6p3fD6rBVm) for help.

## Code Formatting and Linting

Our project uses `black` for code formatting and `isort` for import sorting. To ensure consistency across contributions, please adhere to the following guidelines:

1. **Install Pre-commit Hooks**:

   If you want to automatically format your code every time you make a commit, install the pre-commit hooks.

   ```bash
   pip install pre-commit
   pre-commit install
   ```

   After installing, the hooks will automatically check and format your code every time you commit.

2. **Manual Formatting**:

   If you choose not to use the pre-commit hooks, you can manually format your code using:

   ```bash
   black .
   isort .
   ```

# Licensing

Contributions made to Open Interpreter before version 0.2.0 are licensed under the MIT license; subsequent contributions are licensed under the AGPL.

# Questions?

Join our [Discord community](https://discord.gg/6p3fD6rBVm) and post in the #General channel to connect with contributors. We're happy to guide you through your first open source contribution to this project!

**Thank you for your dedication and understanding as we continue refining our processes. As we explore this extraordinary new technology, we sincerely appreciate your involvement.**
open-interpreter/docs/NCU_MIGRATION_GUIDE.md ADDED
@@ -0,0 +1,254 @@
# `0.2.0` Migration Guide

Open Interpreter is [changing](https://changes.openinterpreter.com/log/the-new-computer-update). This guide will help you migrate your application to `0.2.0`, also called the _New Computer Update_ (NCU), the latest major version of Open Interpreter.

## A New Start

To start using Open Interpreter in Python, we now use a standard **class instantiation** format:

```python
# From the module `interpreter`, import the class `OpenInterpreter`
from interpreter import OpenInterpreter

# Create an instance of `OpenInterpreter` to use it
agent = OpenInterpreter()
agent.chat()
```

For convenience, we also provide an instance of `interpreter`, which you can import from the module (also called `interpreter`):

```python
# From the module `interpreter`, import the included instance of `OpenInterpreter`
from interpreter import interpreter

interpreter.chat()
```

## New Parameters

All stateless LLM attributes have been moved to `interpreter.llm`:

- `interpreter.model` → `interpreter.llm.model`
- `interpreter.api_key` → `interpreter.llm.api_key`
- `interpreter.llm_supports_vision` → `interpreter.llm.supports_vision`
- `interpreter.supports_function_calling` → `interpreter.llm.supports_functions`
- `interpreter.max_tokens` → `interpreter.llm.max_tokens`
- `interpreter.context_window` → `interpreter.llm.context_window`
- `interpreter.temperature` → `interpreter.llm.temperature`
- `interpreter.api_version` → `interpreter.llm.api_version`
- `interpreter.api_base` → `interpreter.llm.api_base`

This is reflected **1)** in Python applications using Open Interpreter and **2)** in your profile for OI's terminal interface, which can be edited via `interpreter --profiles`.

## New Static Messages Structure

- The array of messages is now flat, making the architecture more modular and easier to adapt to new kinds of media in the future.
- Each message holds only one kind of data. This yields more messages, but prevents large nested messages that can be difficult to parse.
- This allows you to pass the full `messages` list into Open Interpreter as `interpreter.messages = message_list`.
- Every message has a "role", which can be "assistant", "computer", or "user".
- Every message has a "type", specifying the type of data it contains.
- Every message has "content", which contains the data for the message.
- Some messages have a "format" key, to specify the format of the content, like "path" or "base64.png".
- The recipient of the message is specified by the "recipient" key, which can be "user" or "assistant". This is used to inform the LLM of who the message is intended for.

```python
[
    {"role": "user", "type": "message", "content": "Please create a plot from this data and display it as an image and then as HTML."},  # implied format: text (only one format for type message)
    {"role": "user", "type": "image", "format": "path", "content": "path/to/image.png"},
    {"role": "user", "type": "file", "content": "/path/to/file.pdf"},  # implied format: path (only one format for type file)
    {"role": "assistant", "type": "message", "content": "Processing your request to generate a plot."},  # implied format: text
    {"role": "assistant", "type": "code", "format": "python", "content": "plot = create_plot_from_data('data')\ndisplay_as_image(plot)\ndisplay_as_html(plot)"},
    {"role": "computer", "type": "image", "format": "base64.png", "content": "base64"},
    {"role": "computer", "type": "code", "format": "html", "content": "<html>Plot in HTML format</html>"},
    {"role": "computer", "type": "console", "format": "output", "content": "{HTML errors}"},
    {"role": "assistant", "type": "message", "content": "Plot generated successfully."},  # implied format: text
]
```
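As a sanity check when constructing these lists by hand, a minimal validator might look like this (illustrative, not part of the library):

```python
VALID_ROLES = {"user", "assistant", "computer"}
VALID_TYPES = {"message", "code", "image", "console", "file", "confirmation"}

def is_valid_message(msg: dict) -> bool:
    """Check the minimal structural contract of the flat message format:
    a known role, a known type, and a content key."""
    return (
        msg.get("role") in VALID_ROLES
        and msg.get("type") in VALID_TYPES
        and "content" in msg
    )
```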
## New Streaming Structure

- The streaming data structure closely matches the static messages structure, with only a few differences.
- Every streaming chunk has a "start" and "end" key, booleans that specify whether the chunk is the first or last chunk in the stream. Use these to build messages from the streaming chunks.
- There is a "confirmation" chunk type, which is used to confirm with the user that the code should be run. The "content" key of this chunk is a dictionary with a `code` and a `language` key.
- The extra information in each chunk makes streaming responses easier to process. See the JavaScript example code below for processing streaming responses.

```python
{"role": "assistant", "type": "message", "start": True}
{"role": "assistant", "type": "message", "content": "Pro"}
{"role": "assistant", "type": "message", "content": "cessing"}
{"role": "assistant", "type": "message", "content": "your request"}
{"role": "assistant", "type": "message", "content": "to generate a plot."}
{"role": "assistant", "type": "message", "end": True}

{"role": "assistant", "type": "code", "format": "python", "start": True}
{"role": "assistant", "type": "code", "format": "python", "content": "plot = create_plot_from_data"}
{"role": "assistant", "type": "code", "format": "python", "content": "('data')\ndisplay_as_image(plot)"}
{"role": "assistant", "type": "code", "format": "python", "content": "\ndisplay_as_html(plot)"}
{"role": "assistant", "type": "code", "format": "python", "end": True}

# The computer will emit a confirmation chunk *before* running the code. You can break here to cancel the execution.

{"role": "computer", "type": "confirmation", "format": "execution", "content": {
    "type": "code",
    "format": "python",
    "content": "plot = create_plot_from_data('data')\ndisplay_as_image(plot)\ndisplay_as_html(plot)",
}}

{"role": "computer", "type": "console", "start": True}
{"role": "computer", "type": "console", "format": "output", "content": "a printed statement"}
{"role": "computer", "type": "console", "format": "active_line", "content": "1"}
{"role": "computer", "type": "console", "format": "active_line", "content": "2"}
{"role": "computer", "type": "console", "format": "active_line", "content": "3"}
{"role": "computer", "type": "console", "format": "output", "content": "another printed statement"}
{"role": "computer", "type": "console", "end": True}
```
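To make the start/end flags concrete, here is a small Python sketch (illustrative, not part of the library) that folds a stream of chunks into static messages:

```python
def accumulate_chunks(chunks):
    """Fold streaming chunks into static messages.

    A "start" chunk opens a new message; "content" chunks append to it
    (skipping "active_line" bookkeeping chunks); an "end" chunk carries
    no content and is ignored here.
    """
    messages = []
    for chunk in chunks:
        if chunk.get("start"):
            # Copy the chunk's metadata, dropping the streaming flags
            message = {k: v for k, v in chunk.items() if k not in ("start", "end")}
            message["content"] = ""
            messages.append(message)
        elif chunk.get("content") is not None and chunk.get("format") != "active_line":
            if messages:
                messages[-1]["content"] += chunk["content"]
    return messages
```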
## Tips and Best Practices

- Adding an `id` and a `created_at` field to messages can be helpful for manipulating the messages later on.
- If you want your application to run the code instead of OI, your app will act as the `computer`. This means breaking from the stream once OI emits a confirmation chunk (`{'role': 'computer', 'type': 'confirmation' ...}`) to prevent OI from running the code. When you run code, grab the message history via `messages = interpreter.messages`, then mimic the `computer` format above by appending new `{'role': 'computer' ...}` messages, and run `interpreter.chat(messages)`.
- Open Interpreter is designed to stop code execution when the stream is disconnected. Use this to your advantage to add a "Stop" button to the UI.
- Setting up your Python server to send errors and exceptions to the client can be helpful for debugging and generating error messages.

## Example Code

### Types

Python:

```python
from typing import Literal, Union

class Message:
    role: Literal["user", "assistant", "computer"]
    type: Literal["message", "code", "image", "console", "file", "confirmation"]
    format: Literal["output", "path", "base64.png", "base64.jpeg", "python", "javascript", "shell", "html", "active_line", "execution"]
    recipient: Literal["user", "assistant"]
    content: Union[str, dict]  # dict should have 'code' and 'language' keys; this is only for confirmation messages

class StreamingChunk(Message):
    start: bool
    end: bool
```

TypeScript:

```typescript
interface Message {
  role: "user" | "assistant" | "computer";
  type: "message" | "code" | "image" | "console" | "file" | "confirmation";
  format: "output" | "path" | "base64.png" | "base64.jpeg" | "python" | "javascript" | "shell" | "html" | "active_line" | "execution";
  recipient: "user" | "assistant";
  content: string | { code: string; language: string };
}
```

```typescript
interface StreamingChunk extends Message {
  start: boolean;
  end: boolean;
}
```
### Handling streaming chunks

Here is a minimal example of how to handle streaming chunks in JavaScript. This example assumes that you are using a Python server to handle the streaming requests, and a JavaScript client to send the requests and handle the responses. See the main repository README for an example FastAPI server.

```javascript
// JavaScript

let messages = []; // Holds all messages
let currentMessageIndex = 0; // Tracks the index of the message being built
let isGenerating = false; // Set to false to stop the stream

// Send a POST request to OI
async function sendRequest() {
  try {
    // Parameters for the POST request: at least the full messages array,
    // plus any other OI parameters you may want (auto_run, local, etc.)
    const params = {
      messages,
    };

    // Controller to allow aborting the request
    const controller = new AbortController();
    const { signal } = controller;

    // Send the POST request to your Python server endpoint
    const interpreterCall = await fetch("https://YOUR_ENDPOINT/", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify(params),
      signal,
    });

    // Bail out if the request was not successful
    if (!interpreterCall.ok) {
      console.error("Interpreter didn't respond with 200 OK");
      return;
    }

    // Initialize a reader for the response body
    const reader = interpreterCall.body.getReader();

    isGenerating = true;
    while (true) {
      const { value, done } = await reader.read();

      // Break the loop if the stream is done
      if (done) {
        break;
      }
      // If isGenerating was set to false, cancel the reader and break.
      // This also halts the execution of the code being run by OI.
      if (!isGenerating) {
        await reader.cancel();
        controller.abort();
        break;
      }
      // Decode the stream and split it into lines
      const text = new TextDecoder().decode(value);
      const lines = text.split("\n");
      lines.pop(); // Remove last empty line

      // Process each line of the response
      for (const line of lines) {
        const chunk = JSON.parse(line);
        await processChunk(chunk);
      }
    }
    // The stream has completed; run any cleanup code here
    if (isGenerating) isGenerating = false;
  } catch (error) {
    console.error("An error occurred:", error);
  }
}

// Process each chunk of the stream and build messages from it
function processChunk(chunk) {
  if (chunk.start) {
    const tempMessage = {};
    // Copy the new message's data into tempMessage
    tempMessage.role = chunk.role;
    tempMessage.type = chunk.type;
    tempMessage.content = "";
    if (chunk.format) tempMessage.format = chunk.format;
    if (chunk.recipient) tempMessage.recipient = chunk.recipient;

    // Append the new message and point currentMessageIndex at it
    messages.push(tempMessage);
    currentMessageIndex = messages.length - 1;
  }

  // Handle active lines for code blocks
  if (chunk.format === "active_line") {
    messages[currentMessageIndex].activeLine = chunk.content;
  } else if (chunk.end && chunk.type === "console") {
    messages[currentMessageIndex].activeLine = null;
  }

  // Append chunk content to the current message, skipping active_line bookkeeping
  if (chunk.content && chunk.format !== "active_line") {
    messages[currentMessageIndex].content += chunk.content;
  }
}
```
open-interpreter/docs/README_DE.md ADDED
@@ -0,0 +1,131 @@
1
+ <h1 align="center">● Open Interpreter</h1>
2
+
3
+ <p align="center">
4
+ <a href="https://discord.gg/6p3fD6rBVm">
5
+ <img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white">
6
+ </a>
7
+ <a href="README_ES.md"> <img src="https://img.shields.io/badge/Español-white.svg" alt="ES doc"/></a>
8
+ <a href="README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"></a>
9
+ <a href="README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"></a>
10
+ <a href="README.md"><img src="https://img.shields.io/badge/english-document-white.svg" alt="EN doc"></a>
11
+ <img src="https://img.shields.io/static/v1?label=license&message=MIT&color=white&style=flat" alt="License">
12
+ <br><br>
13
+ <b>Lassen Sie Sprachmodelle Code auf Ihrem Computer ausführen.</b><br>
14
+ Eine Open-Source, lokal laufende Implementierung von OpenAIs Code-Interpreter.<br>
15
+ <br><a href="https://openinterpreter.com">Erhalten Sie frühen Zugang zur Desktop-Anwendung.</a><br>
16
+ </p>
17
+
18
+ <br>
19
+
20
+ ![poster](https://github.com/KillianLucas/open-interpreter/assets/63927363/08f0d493-956b-4d49-982e-67d4b20c4b56)
21
+
22
+ <br>
23
+
24
+ ```shell
25
+ pip install open-interpreter
26
+ ```
27
+
28
+ ```shell
29
+ interpreter
30
+ ```
31
+
32
+ <br>
33
+
34
+ **Open Interpreter** ermöglicht es LLMs (Language Models), Code (Python, Javascript, Shell und mehr) lokal auszuführen. Sie können mit Open Interpreter über eine ChatGPT-ähnliche Schnittstelle in Ihrem Terminal chatten, indem Sie $ interpreter nach der Installation ausführen.
35
+
36
+ Dies bietet eine natürliche Sprachschnittstelle zu den allgemeinen Fähigkeiten Ihres Computers:
37
+
38
+ - Erstellen und bearbeiten Sie Fotos, Videos, PDFs usw.
39
+ - Steuern Sie einen Chrome-Browser, um Forschungen durchzuführen
40
+ - Darstellen, bereinigen und analysieren Sie große Datensätze
41
+ - ...usw.
42
+
43
+ **⚠️ Hinweis: Sie werden aufgefordert, Code zu genehmigen, bevor er ausgeführt wird.**
44
+
45
+ <br>
46
+
47
+ ## Demo
48
+
49
+ https://github.com/KillianLucas/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60
50
+
51
+ #### Eine interaktive Demo ist auch auf Google Colab verfügbar:
52
+
53
+ [![In Colab öffnen](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)
54
+
55
+ ## Schnellstart
56
+
57
+ ```shell
58
+ pip install open-interpreter
59
+ ```
60
+
61
+ ### Terminal
62
+
63
+ Nach der Installation führen Sie einfach `interpreter` aus:
64
+
65
+ ```shell
66
+ interpreter
67
+ ```
68
+
69
+ ### Python
70
+
71
+ ```python
72
+ from interpreter import interpreter
73
+
74
+ interpreter.chat("Stellen Sie AAPL und METAs normalisierte Aktienpreise dar") # Führt einen einzelnen Befehl aus
75
+ interpreter.chat() # Startet einen interaktiven Chat
76
+ ```
77
+
78
+ ## Vergleich zu ChatGPTs Code Interpreter
79
+
80
+ OpenAIs Veröffentlichung des [Code Interpreters](https://openai.com/blog/chatgpt-plugins#code-interpreter) mit GPT-4 bietet eine fantastische Möglichkeit, reale Aufgaben mit ChatGPT zu erledigen.
81
+
82
+ Allerdings ist OpenAIs Dienst gehostet, Closed-Source und stark eingeschränkt:
83
+
84
+ - Kein Internetzugang.
85
+ - [Begrenzte Anzahl vorinstallierter Pakete](https://wfhbrian.com/mastering-chatgpts-code-interpreter-list-of-python-packages/).
86
+ - 100 MB maximale Uploadgröße, 120.0 Sekunden Laufzeitlimit.
87
+ - Der Zustand wird gelöscht (zusammen mit allen generierten Dateien oder Links), wenn die Umgebung abstirbt.
88
+
89
+ ---
90
+
91
+ Open Interpreter überwindet diese Einschränkungen, indem es in Ihrer lokalen Umgebung läuft. Es hat vollen Zugang zum Internet, ist nicht durch Zeit oder Dateigröße eingeschränkt und kann jedes Paket oder jede Bibliothek nutzen.
92
+
93
+ Dies kombiniert die Kraft von GPT-4s Code Interpreter mit der Flexibilität Ihrer lokalen Maschine.
94
+
95
+ ## Sicherheitshinweis
96
+
97
+ Da generierter Code in deiner lokalen Umgebung ausgeführt wird, kann er mit deinen Dateien und Systemeinstellungen interagieren, was potenziell zu unerwarteten Ergebnissen wie Datenverlust oder Sicherheitsrisiken führen kann.
98
+
99
+ **⚠️ Open Interpreter wird um Nutzerbestätigung bitten, bevor Code ausgeführt wird.**
100
+
101
+ Du kannst `interpreter -y` ausführen oder `interpreter.auto_run = True` setzen, um diese Bestätigung zu umgehen, in diesem Fall:
102
+
103
+ - Sei vorsichtig bei Befehlsanfragen, die Dateien oder Systemeinstellungen ändern.
104
+ - Beobachte Open Interpreter wie ein selbstfahrendes Auto und sei bereit, den Prozess zu beenden, indem du dein Terminal schließt.
105
+ - Erwäge, Open Interpreter in einer eingeschränkten Umgebung wie Google Colab oder Replit auszuführen. Diese Umgebungen sind stärker isoliert und reduzieren das Risiko der Ausführung beliebigen Codes.
106
+
107
+ Es gibt **experimentelle** Unterstützung für einen [Sicherheitsmodus](docs/SAFE_MODE.md), um einige Risiken zu mindern.
108
+
109
+ ## Wie funktioniert es?
110
+
111
+ Open Interpreter rüstet ein [funktionsaufrufendes Sprachmodell](https://platform.openai.com/docs/guides/gpt/function-calling) mit einer `exec()`-Funktion aus, die eine `language` (wie "Python" oder "JavaScript") und auszuführenden `code` akzeptiert.
112
+
113
+ Wir streamen dann die Nachrichten des Modells, Code und die Ausgaben deines Systems zum Terminal als Markdown.
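Zur Veranschaulichung eine minimale, hypothetische Skizze einer solchen `exec()`-Funktion (alle Namen und Details sind Annahmen, nicht die tatsächliche Implementierung von Open Interpreter):

```python
import subprocess
import sys

def exec_code(language: str, code: str) -> str:
    """Hypothetische Skizze: führt Code aus und gibt die Ausgabe zurück."""
    if language == "python":
        # Code in einem frischen Python-Subprozess ausführen
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True)
    elif language == "shell":
        # Shell-Befehle direkt ausführen
        result = subprocess.run(code, shell=True,
                                capture_output=True, text=True)
    else:
        raise ValueError(f"Nicht unterstützte Sprache: {language}")
    return result.stdout + result.stderr

print(exec_code("python", "print(2 + 2)").strip())  # 4
```

Das Sprachmodell entscheidet dann per Function Calling, mit welchen Argumenten eine solche Funktion aufgerufen wird.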
114
+
115
+ # Mitwirken
116
+
117
+ Danke für dein Interesse an der Mitarbeit! Wir begrüßen die Beteiligung der Gemeinschaft.
118
+
119
+ Bitte sieh dir unsere [Richtlinien für Mitwirkende](docs/CONTRIBUTING.md) für weitere Details an, wie du dich einbringen kannst.
120
+
121
+ ## Lizenz
122
+
123
+ Open Interpreter ist unter der MIT-Lizenz lizenziert. Du darfst die Software verwenden, kopieren, modifizieren, verteilen, unterlizenzieren und Kopien der Software verkaufen.
124
+
125
+ **Hinweis**: Diese Software steht in keiner Verbindung zu OpenAI.
126
+
127
+ > Zugriff auf einen Junior-Programmierer zu haben, der mit der Geschwindigkeit deiner Fingerspitzen arbeitet ... kann neue Arbeitsabläufe mühelos und effizient machen sowie das Programmieren einem neuen Publikum öffnen.
128
+ >
129
+ > — _OpenAIs Code Interpreter Release_
130
+
131
+ <br>
open-interpreter/docs/README_ES.md ADDED
@@ -0,0 +1,413 @@
1
+ <h1 align="center">● Intérprete Abierto</h1>
2
+
3
+ <p align="center">
4
+ <a href="https://discord.gg/Hvz9Axh84z">
5
+ <img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white"/></a>
6
+ <a href="../README.md"><img src="https://img.shields.io/badge/english-document-white.svg" alt="EN doc"></a>
7
+ <a href="docs/README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"/></a>
8
+ <a href="docs/README_ZH.md"> <img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"/></a>
9
+ <a href="docs/README_IN.md"> <img src="https://img.shields.io/badge/Hindi-white.svg" alt="IN doc"/></a>
10
+ <img src="https://img.shields.io/static/v1?label=licencia&message=AGPL&color=white&style=flat" alt="License"/>
11
+ <br>
12
+ <br>
13
+ <br><a href="https://0ggfznkwh4j.typeform.com/to/G21i9lJ2">Obtenga acceso temprano a la aplicación de escritorio</a>‎ ‎ |‎ ‎ <a href="https://docs.openinterpreter.com/">Documentación</a><br>
14
+ </p>
15
+
16
+ <br>
17
+
18
+ ![poster](https://github.com/KillianLucas/open-interpreter/assets/63927363/08f0d493-956b-4d49-982e-67d4b20c4b56)
19
+
20
+ <br>
21
+ <p align="center">
22
+ <strong>La Nueva Actualización del Computador</strong> presenta <strong><code>--os</code></strong> y una nueva <strong>API de Computadora</strong>. <a href="https://changes.openinterpreter.com/log/the-new-computer-update">Lea más →</a>
23
+ </p>
24
+ <br>
25
+
26
+ ```shell
27
+ pip install open-interpreter
28
+ ```
29
+
30
+ > ¿No funciona? Lea nuestra [guía de configuración](https://docs.openinterpreter.com/getting-started/setup).
31
+
32
+ ```shell
33
+ interpreter
34
+ ```
35
+
36
+ <br>
37
+
38
+ **Intérprete Abierto** permite a los LLMs ejecutar código (Python, JavaScript, Shell, etc.) localmente. Puede chatear con Intérprete Abierto a través de una interfaz de chat como ChatGPT en su terminal después de instalar.
39
+
40
+ Esto proporciona una interfaz de lenguaje natural para las capacidades generales de su computadora:
41
+
42
+ - Crear y editar fotos, videos, PDF, etc.
43
+ - Controlar un navegador de Chrome para realizar investigaciones
44
+ - Graficar, limpiar y analizar conjuntos de datos grandes
45
+ - ... etc.
46
+
47
+ **⚠️ Nota: Se le pedirá que apruebe el código antes de ejecutarlo.**
48
+
49
+ <br>
50
+
51
+ ## Demo
52
+
53
+ https://github.com/KillianLucas/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60
54
+
55
+ #### También hay disponible una demo interactiva en Google Colab:
56
+
57
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)
58
+
59
+ #### Además, hay un ejemplo de interfaz de voz inspirada en _Her_:
60
+
61
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)
62
+
63
+ ## Inicio Rápido
64
+
65
+ ```shell
66
+ pip install open-interpreter
67
+ ```
68
+
69
+ ### Terminal
70
+
71
+ Después de la instalación, simplemente ejecute `interpreter`:
72
+
73
+ ```shell
74
+ interpreter
75
+ ```
76
+
77
+ ### Python
78
+
79
+ ```python
80
+ from interpreter import interpreter
81
+
82
+ interpreter.chat("Plot AAPL and META's normalized stock prices") # Ejecuta un solo comando
83
+ interpreter.chat() # Inicia una sesión de chat interactiva
84
+ ```
85
+
86
+ ### GitHub Codespaces
87
+
88
+ Presione la tecla `,` en la página de GitHub de este repositorio para crear un codespace. Después de un momento, recibirá un entorno de máquina virtual en la nube con Intérprete Abierto preinstalado. Podrá entonces empezar a interactuar con él directamente y confirmar la ejecución de comandos del sistema sin preocuparse por dañar el sistema.
89
+
90
+ ## Comparación con el Intérprete de Código de ChatGPT
91
+
92
+ El lanzamiento de [Intérprete de Código](https://openai.com/blog/chatgpt-plugins#code-interpreter) de OpenAI con GPT-4 presenta una oportunidad fantástica para realizar tareas del mundo real con ChatGPT.
93
+
94
+ Sin embargo, el servicio de OpenAI está alojado, es de código cerrado y está fuertemente restringido:
95
+
96
+ - No hay acceso a Internet.
97
+ - [Conjunto limitado de paquetes preinstalados](https://wfhbrian.com/mastering-chatgpts-code-interpreter-list-of-python-packages/).
98
+ - Límite de 100 MB de carga, límite de tiempo de 120.0 segundos.
99
+ - El estado se elimina (junto con cualquier archivo generado o enlace) cuando el entorno se cierra.
100
+
101
+ ---
102
+
103
+ Intérprete Abierto supera estas limitaciones al ejecutarse en su entorno local. Tiene acceso completo a Internet, no está restringido por tiempo o tamaño de archivo y puede utilizar cualquier paquete o librería.
104
+
105
+ Esto combina el poder del Intérprete de Código de GPT-4 con la flexibilidad de su entorno de desarrollo local.
106
+
107
+ ## Comandos
108
+
109
+ **Actualización:** La Actualización del Generador (0.1.5) introdujo streaming:
110
+
111
+ ```python
112
+ message = "¿Qué sistema operativo estamos utilizando?"
113
+
114
+ for chunk in interpreter.chat(message, display=False, stream=True):
115
+ print(chunk)
116
+ ```
117
+
118
+ ### Chat Interactivo
119
+
120
+ Para iniciar una sesión de chat interactiva en su terminal, puede ejecutar `interpreter` desde la línea de comandos:
121
+
122
+ ```shell
123
+ interpreter
124
+ ```
125
+
126
+ O `interpreter.chat()` desde un archivo `.py`:
127
+
128
+ ```python
129
+ interpreter.chat()
130
+ ```
131
+
132
+ **Puede también transmitir cada trozo:**
133
+
134
+ ```python
135
+ message = "¿Qué sistema operativo estamos utilizando?"
136
+
137
+ for chunk in interpreter.chat(message, display=False, stream=True):
138
+ print(chunk)
139
+ ```
140
+
141
+ ### Chat Programático
142
+
143
+ Para un control más preciso, puede pasar mensajes directamente a `.chat(message)`:
144
+
145
+ ```python
146
+ interpreter.chat("Añade subtítulos a todos los videos en /videos.")
147
+
148
+ # ... Transmite salida a su terminal, completa tarea ...
149
+
150
+ interpreter.chat("Estos se ven bien, pero ¿pueden hacer los subtítulos más grandes?")
151
+
152
+ # ...
153
+ ```
154
+
155
+ ### Iniciar un nuevo chat
156
+
157
+ En Python, Intérprete Abierto recuerda el historial de conversación. Si desea empezar de nuevo, puede resetearlo:
158
+
159
+ ```python
160
+ interpreter.messages = []
161
+ ```
162
+
163
+ ### Guardar y Restaurar Chats
164
+
165
+ `interpreter.chat()` devuelve una lista de mensajes, que puede utilizar para reanudar una conversación con `interpreter.messages = messages`:
166
+
167
+ ```python
168
+ messages = interpreter.chat("Mi nombre es Killian.") # Guarda mensajes en 'messages'
169
+ interpreter.messages = [] # Resetear Intérprete ("Killian" será olvidado)
170
+
171
+ interpreter.messages = messages # Reanuda chat desde 'messages' ("Killian" será recordado)
172
+ ```
173
+
174
+ ### Personalizar el Mensaje del Sistema
175
+
176
+ Puede inspeccionar y configurar el mensaje del sistema de Intérprete Abierto para extender su funcionalidad, modificar permisos o darle más contexto.
177
+
178
+ ```python
179
+ interpreter.system_message += """
180
+ Ejecute comandos de shell con -y para que el usuario no tenga que confirmarlos.
181
+ """
182
+ print(interpreter.system_message)
183
+ ```
184
+
185
+ ### Cambiar el Modelo de Lenguaje
186
+
187
+ Intérprete Abierto utiliza [LiteLLM](https://docs.litellm.ai/docs/providers/) para conectarse a modelos de lenguaje hospedados.
188
+
189
+ Puede cambiar el modelo estableciendo el parámetro de modelo:
190
+
191
+ ```shell
192
+ interpreter --model gpt-3.5-turbo
193
+ interpreter --model claude-2
194
+ interpreter --model command-nightly
195
+ ```
196
+
197
+ En Python, establezca el modelo en el objeto:
198
+
199
+ ```python
200
+ interpreter.llm.model = "gpt-3.5-turbo"
201
+ ```
202
+
203
+ [Encuentre la cadena adecuada para su modelo de lenguaje aquí.](https://docs.litellm.ai/docs/providers/)
204
+
205
+ ### Ejecutar Intérprete Abierto localmente
206
+
207
+ #### Terminal
208
+
209
+ Intérprete Abierto puede utilizar un servidor compatible con OpenAI para ejecutar modelos localmente (LM Studio, jan.ai, ollama, etc.).
210
+
211
+ Simplemente ejecute `interpreter` con la URL de base de API de su servidor de inferencia (por defecto, `http://localhost:1234/v1` para LM Studio):
212
+
213
+ ```shell
214
+ interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
215
+ ```
216
+
217
+ O puede utilizar Llamafile sin instalar software adicional simplemente ejecutando:
218
+
219
+ ```shell
220
+ interpreter --local
221
+ ```
222
+
223
+ Para una guía más detallada, consulte [este video de Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H).
224
+
225
+ **Cómo ejecutar LM Studio en segundo plano.**
226
+
227
+ 1. Descargue [https://lmstudio.ai/](https://lmstudio.ai/) y luego ejecútelo.
228
+ 2. Seleccione un modelo, luego haga clic en **↓ Descargar**.
229
+ 3. Haga clic en el botón **↔️** a la izquierda (debajo de 💬).
230
+ 4. Seleccione su modelo en la parte superior, luego haga clic en **Iniciar Servidor**.
231
+
232
+ Una vez que el servidor esté funcionando, puede empezar su conversación con Intérprete Abierto.
233
+
234
+ > **Nota:** El modo local establece su `context_window` en 3000 y su `max_tokens` en 1000. Si su modelo tiene requisitos diferentes, ajuste estos parámetros manualmente (ver a continuación).
235
+
236
+ #### Python
237
+
238
+ Nuestro paquete de Python le da más control sobre cada ajuste. Para replicar y conectarse a LM Studio, utilice estos ajustes:
239
+
240
+ ```python
241
+ from interpreter import interpreter
242
+
243
+ interpreter.offline = True # Desactiva las características en línea como Procedimientos Abiertos
244
+ interpreter.llm.model = "openai/x" # Indica a OI que envíe mensajes en el formato de OpenAI
245
+ interpreter.llm.api_key = "fake_key" # LiteLLM, que utilizamos para hablar con LM Studio, requiere esto
246
+ interpreter.llm.api_base = "http://localhost:1234/v1" # Apunta esto a cualquier servidor compatible con OpenAI
247
+
248
+ interpreter.chat()
249
+ ```
250
+
251
+ #### Ventana de Contexto, Tokens Máximos
252
+
253
+ Puede modificar los `max_tokens` y `context_window` (en tokens) de los modelos locales.
254
+
255
+ Para el modo local, ventanas de contexto más cortas utilizarán menos RAM, así que recomendamos intentar una ventana mucho más corta (~1000) si falla o si es lenta. Asegúrese de que `max_tokens` sea menor que `context_window`.
256
+
257
+ ```shell
258
+ interpreter --local --max_tokens 1000 --context_window 3000
259
+ ```
260
+
261
+ ### Modo Detallado
262
+
263
+ Para ayudarle a inspeccionar Intérprete Abierto, tenemos un modo `--verbose` para depuración.
264
+
265
+ Puede activar el modo detallado utilizando el parámetro (`interpreter --verbose`), o en plena sesión:
266
+
267
+ ```shell
268
+ $ interpreter
269
+ ...
270
+ > %verbose true <- Activa el modo detallado
271
+
272
+ > %verbose false <- Desactiva el modo detallado
273
+ ```
274
+
275
+ ### Comandos de Modo Interactivo
276
+
277
+ En el modo interactivo, puede utilizar los siguientes comandos para mejorar su experiencia. Aquí hay una lista de comandos disponibles:
278
+
279
+ **Comandos Disponibles:**
280
+
281
+ - `%verbose [true/false]`: Activa o desactiva el modo detallado. Sin parámetros o con `true`, entra en modo detallado; con `false`, sale de él.
283
+ - `%reset`: Reinicia la sesión actual de conversación.
284
+ - `%undo`: Elimina el mensaje de usuario previo y la respuesta del AI del historial de mensajes.
285
+ - `%tokens [prompt]`: (_Experimental_) Calcula los tokens que se enviarán con el próximo prompt como contexto y estima su costo. Opcionalmente, calcule los tokens y el costo estimado de un `prompt` si se proporciona. Depende de [LiteLLM's `cost_per_token()` method](https://docs.litellm.ai/docs/completion/token_usage#2-cost_per_token) para costos estimados.
286
+ - `%help`: Muestra el mensaje de ayuda.
287
+
288
+ ### Configuración / Perfiles
289
+
290
+ Intérprete Abierto permite establecer comportamientos predeterminados utilizando archivos `yaml`.
291
+
292
+ Esto proporciona una forma flexible de configurar el intérprete sin cambiar los argumentos de línea de comandos cada vez.
293
+
294
+ Ejecute el siguiente comando para abrir el directorio de perfiles:
295
+
296
+ ```
297
+ interpreter --profiles
298
+ ```
299
+
300
+ Puede agregar archivos `yaml` allí. El perfil predeterminado se llama `default.yaml`.
301
+
302
+ #### Perfiles Múltiples
303
+
304
+ Intérprete Abierto admite múltiples archivos `yaml`, lo que permite cambiar fácilmente entre configuraciones:
305
+
306
+ ```
307
+ interpreter --profile my_profile.yaml
308
+ ```
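A modo de ejemplo, un perfil `yaml` hipotético podría verse así (las claves exactas son una suposición y pueden variar según la versión de Intérprete Abierto; consulte la documentación oficial):

```yaml
# my_profile.yaml (ejemplo hipotético)
llm:
  model: gpt-4o
  temperature: 0.2
auto_run: false
```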
309
+
310
+ ## Servidor de FastAPI de ejemplo
311
+
312
+ La Actualización del Generador permite controlar Intérprete Abierto a través de puntos de conexión HTTP REST:
313
+
314
+ ```python
315
+ # server.py
316
+
317
+ from fastapi import FastAPI
318
+ from fastapi.responses import StreamingResponse
319
+ from interpreter import interpreter
320
+
321
+ app = FastAPI()
322
+
323
+ @app.get("/chat")
324
+ def chat_endpoint(message: str):
325
+ def event_stream():
326
+ for result in interpreter.chat(message, stream=True):
327
+ yield f"data: {result}\n\n"
328
+
329
+ return StreamingResponse(event_stream(), media_type="text/event-stream")
330
+
331
+ @app.get("/history")
332
+ def history_endpoint():
333
+ return interpreter.messages
334
+ ```
335
+
336
+ ```shell
337
+ pip install fastapi uvicorn
338
+ uvicorn server:app --reload
339
+ ```
340
+
341
+ Puede iniciar un servidor idéntico al anterior simplemente ejecutando `interpreter.server()`.
342
+
343
+ ## Android
344
+
345
+ La guía paso a paso para instalar Intérprete Abierto en su dispositivo Android se encuentra en el [repo de open-interpreter-termux](https://github.com/MikeBirdTech/open-interpreter-termux).
346
+
347
+ ## Aviso de Seguridad
348
+
349
+ Ya que el código generado se ejecuta en su entorno local, puede interactuar con sus archivos y configuraciones del sistema, lo que puede llevar a resultados inesperados como pérdida de datos o riesgos de seguridad.
350
+
351
+ **⚠️ Intérprete Abierto le pedirá que apruebe el código antes de ejecutarlo.**
352
+
353
+ Puede ejecutar `interpreter -y` o establecer `interpreter.auto_run = True` para evitar esta confirmación, en cuyo caso:
354
+
355
+ - Sea cuidadoso al solicitar comandos que modifican archivos o configuraciones del sistema.
356
+ - Vigile Intérprete Abierto como si fuera un coche autónomo y esté preparado para terminar el proceso cerrando su terminal.
357
+ - Considere ejecutar Intérprete Abierto en un entorno restringido como Google Colab o Replit. Estos entornos son más aislados, reduciendo los riesgos de ejecutar código arbitrario.
358
+
359
+ Hay soporte **experimental** para un [modo seguro](docs/SAFE_MODE.md) para ayudar a mitigar algunos riesgos.
360
+
361
+ ## ¿Cómo Funciona?
362
+
363
+ Intérprete Abierto equipa un [modelo de lenguaje de llamada a funciones](https://platform.openai.com/docs/guides/gpt/function-calling) con una función `exec()`, que acepta un `language` (como "Python" o "JavaScript") y el `code` a ejecutar.
364
+
365
+ Luego, transmite los mensajes del modelo, el código y las salidas del sistema a la terminal como Markdown.
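A modo de ilustración, un boceto mínimo e hipotético de una función `exec()` de este tipo (los nombres y detalles son suposiciones, no la implementación real de Intérprete Abierto):

```python
import subprocess
import sys

def exec_code(language: str, code: str) -> str:
    """Boceto hipotético: ejecuta código y devuelve su salida."""
    if language == "python":
        # Ejecuta el código en un subproceso de Python nuevo
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True)
    elif language == "shell":
        # Ejecuta comandos de shell directamente
        result = subprocess.run(code, shell=True,
                                capture_output=True, text=True)
    else:
        raise ValueError(f"Lenguaje no soportado: {language}")
    return result.stdout + result.stderr

print(exec_code("python", "print(2 + 2)").strip())  # 4
```

El modelo de lenguaje decide, mediante llamadas a funciones, con qué argumentos invocar una función de este tipo.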
366
+
367
+ # Acceso a la Documentación Offline
368
+
369
+ La documentación completa está accesible sin necesidad de conexión a Internet.
370
+
371
+ [Node](https://nodejs.org/en) es un requisito previo:
372
+
373
+ - Versión 18.17.0 o cualquier versión posterior 18.x.x.
374
+ - Versión 20.3.0 o cualquier versión posterior 20.x.x.
375
+ - Cualquier versión a partir de 21.0.0 sin límite superior especificado.
376
+
377
+ Instale [Mintlify](https://mintlify.com/):
378
+
379
+ ```bash
380
+ npm i -g mintlify@latest
381
+ ```
382
+
383
+ Cambie a la carpeta de documentación y ejecute el comando apropiado:
384
+
385
+ ```bash
386
+ # Suponiendo que está en la carpeta raíz del proyecto
387
+ cd ./docs
388
+
389
+ # Ejecute el servidor de documentación
390
+ mintlify dev
391
+ ```
392
+
393
+ Una nueva ventana del navegador debería abrirse. La documentación estará disponible en [http://localhost:3000](http://localhost:3000) mientras el servidor de documentación esté funcionando.
394
+
395
+ # Contribuyendo
396
+
397
+ ¡Gracias por su interés en contribuir! Damos la bienvenida a la implicación de la comunidad.
398
+
399
+ Por favor, consulte nuestras [directrices de contribución](docs/CONTRIBUTING.md) para obtener más detalles sobre cómo involucrarse.
400
+
401
+ # Roadmap
402
+
403
+ Visite [nuestro roadmap](https://github.com/KillianLucas/open-interpreter/blob/main/docs/ROADMAP.md) para ver el futuro de Intérprete Abierto.
404
+
405
+ **Nota:** Este software no está afiliado a OpenAI.
406
+
407
+ ![thumbnail-ncu](https://github.com/KillianLucas/open-interpreter/assets/63927363/1b19a5db-b486-41fd-a7a1-fe2028031686)
408
+
409
+ > Tener acceso a un programador junior trabajando a la velocidad de sus dedos... puede hacer que los nuevos flujos de trabajo sean sencillos y eficientes, además de abrir los beneficios de la programación a nuevas audiencias.
410
+ >
411
+ > — _Lanzamiento del intérprete de código de OpenAI_
412
+
413
+ <br>
open-interpreter/docs/README_IN.md ADDED
@@ -0,0 +1,258 @@
1
+ <h1 align="center">● Open Interpreter</h1>
2
+
3
+ <p align="center">
4
+ <a href="https://discord.gg/6p3fD6rBVm">
5
+ <img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white"/>
6
+ </a>
7
+ <a href="README_ES.md"> <img src="https://img.shields.io/badge/Español-white.svg" alt="ES doc"/></a>
8
+ <a href="README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"/></a>
9
+ <a href="README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"/></a>
10
+ <a href="README_IN.md"><img src="https://img.shields.io/badge/Document-Hindi-white.svg" alt="IN doc"/></a>
11
+ <img src="https://img.shields.io/static/v1?label=license&message=MIT&color=white&style=flat" alt="License"/>
12
+ <br><br>
13
+ <b>अपने कंप्यूटर पर कोड चलाने के लिए भाषा मॉडल को चलाएं।</b><br>
14
+ ओपनएआई के कोड इंटरप्रेटर का एक ओपन-सोर्स, स्थानीय रूप से चलने वाला कार्यान्वयन।<br>
15
+ <br><a href="https://openinterpreter.com">डेस्कटॉप एप्लिकेशन का अर्ली एक्सेस प्राप्त करें।</a><br>
16
+ </p>
17
+
18
+ <br>
19
+
20
+ ![poster](https://github.com/KillianLucas/open-interpreter/assets/63927363/08f0d493-956b-4d49-982e-67d4b20c4b56)
21
+
22
+ <br>
23
+
24
+ ```shell
25
+ pip install open-interpreter
26
+ ```
27
+
28
+ ```shell
29
+ interpreter
30
+ ```
31
+
32
+ <br>
33
+
34
+ **ओपन इंटरप्रेटर** एलएलएम कोड (पायथन, जावास्क्रिप्ट, शेल, और अधिक) को स्थानीय रूप से चलाने की अनुमति देता है। आप इंस्टॉल करने के बाद अपने टर्मिनल में `$ interpreter` चलाकर ओपन इंटरप्रेटर के साथ एक चैटजीपीटी-जैसे इंटरफ़ेस के माध्यम से चैट कर सकते हैं।
35
+
36
+ यह आपके कंप्यूटर की सामान्य-उद्देश्य क्षमताओं के लिए एक प्राकृतिक भाषा इंटरफ़ेस प्रदान करता है:
37
+
38
+ - फ़ोटो, वीडियो, पीडीएफ़ आदि बनाएँ और संपादित करें।
39
+ - अनुसंधान करने के लिए एक क्रोम ब्राउज़र को नियंत्रित करें।
40
+ - बड़े डेटासेट को प्लॉट करें, साफ करें और विश्लेषण करें।
41
+ - ...आदि।
42
+
43
+ **⚠️ ध्यान दें: कोड को चलाने से पहले आपसे मंज़ूरी मांगी जाएगी।**
44
+
45
+ <br>
46
+
47
+ ## डेमो
48
+
49
+ [![कोलैब में खोलें](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)
50
+
51
+ ## त्वरित प्रारंभ
52
+
53
+ ```shell
54
+ pip install open-interpreter
55
+ ```
56
+
57
+ ### टर्मिनल
58
+
59
+ इंस्टॉलेशन के बाद, सीधे `interpreter` चलाएं:
60
+
61
+ ```shell
62
+ interpreter
63
+ ```
64
+
65
+ ### पायथन
66
+
67
+ ```python
68
+ from interpreter import interpreter
69
+
70
+ interpreter.chat("AAPL और META के मानकीकृत स्टॉक मूल्यों का चित्रित करें") # एकल कमांड को निष्पादित करता है
71
+ interpreter.chat() # एक इंटरैक्टिव चैट शुरू करता है
72
+ ```
73
+
74
+ ## ChatGPT के कोड इंटरप्रेटर के साथ तुलना
75
+
76
+ GPT-4 के साथ ओपनएआई द्वारा [कोड इंटरप्रेटर](https://openai.com/blog/chatgpt-plugins#code-interpreter) का विमोचन ChatGPT के साथ वास्तविक दुनिया के कार्यों को पूरा करने का एक शानदार अवसर प्रस्तुत करता है।
77
+
78
+ हालांकि, ओपनएआई की सेवा होस्ट की जाती है, क्लोज़्ड-सोर्स है और अत्यधिक प्रतिबंधित है:
81
+
82
+ - कोई इंटरनेट पहुंच नहीं होती।
83
+ - [प्रतिष्ठित सेट की सीमित संख्या के पहले स्थापित पैकेज](https://wfhbrian.com/mastering-chatgpts-code-interpreter-list-of-python-packages/) होते हैं।
84
+ - 100 एमबी तक की अधिकतम अपलोड सीमा होती है।
85
+ - 120.0 सेकंड की रनटाइम सीमा होती है।
86
+ - जब एनवायरनमेंट समाप्त होता है, तो स्थिति साफ हो जाती है (साथ ही उत्पन्न किए गए फ़ाइल या लिंक भी)।
87
+
88
+ ---
89
+
90
+ ओपन इंटरप्रेटर इन सीमाओं को पार करता है जो आपके स्थानीय वातावरण पर चलता है। इसके पास इंटरनेट का पूरा उपयोग होता है, समय या फ़ाइल का आकार पर प्रतिबंध नहीं होता है, और किसी भी पैकेज या लाइब्रेरी का उपयोग कर सकता है।
91
+
92
+ यह GPT-4 के कोड इंटरप्रेटर की शक्ति को आपके स्थानीय विकास वातावरण की लचीलापन के साथ मिलाता है।
93
+
94
110
+ ## कमांड
111
+
112
+ ### इंटरैक्टिव चैट
113
+
114
+ अपने टर्मिनल में इंटरैक्टिव चैट शुरू करने के लिए, या तो कमांड लाइन से `interpreter` चलाएँ:
115
+
116
+ ```shell
117
+ interpreter
118
+ ```
119
+
120
+ या एक .py फ़ाइल से `interpreter.chat()` चलाएँ:
121
+
122
+ ```python
+ interpreter.chat()
+ ```
+
125
+ ### प्रोग्रामेटिक चैट
126
+
127
+ और सटीक नियंत्रण के लिए, आप सीधे `.chat(message)` को संदेश पास कर सकते हैं:
128
+
129
+ ```python
130
+ interpreter.chat("सभी वीडियो में उपशीर्षक जोड़ें /videos में।")
131
+
132
+ # ... आपके टर्मिनल में आउटपुट स्ट्रीम करता है, कार्य पूरा करता है ...
133
+
134
+ interpreter.chat("ये बड़े दिख रहे हैं लेकिन क्या आप उपशीर्षक को और बड़ा कर सकते हैं?")
135
+
136
+ # ...
137
+ ````
138
+
139
+ ### नया चैट शुरू करें
140
+
141
+ Python में, ओपन इंटरप्रेटर संवाद इतिहास को याद रखता है। यदि आप एक नया आरंभ करना चाहते हैं, तो आप इसे रीसेट कर सकते हैं:
142
+
143
+ ```python
144
+ interpreter.messages = []
145
+ ```
146
+
147
+ ### चैट सहेजें और पुनर्स्थापित करें
148
+
149
+ ```python
150
+ messages = interpreter.chat("मेरा नाम किलियन है।") # संदेशों को 'messages' में सहेजें
151
+
152
+ interpreter.messages = messages # 'messages' से चैट को फिर से शुरू करें ("किलियन" याद रखा जाएगा)
153
+ ```
154
+
155
+ ### सिस्टम संदेश कस्टमाइज़ करें
156
+
157
+ आप ओपन इंटरप्रेटर के सिस्टम संदेश की जांच और कॉन्फ़िगर कर सकते हैं ताकि इसकी क्षमता को विस्तारित किया जा सके, अनुमतियों को संशोधित किया जा सके, या इसे अधिक संदर्भ दिया जा सके।
158
+
159
+ ```python
160
+ interpreter.system_message += """
161
+ यूज़र को पुष्टि करने की आवश्यकता न हो, -y के साथ शेल कमांड चलाएँ।
162
+ """
163
+ print(interpreter.system_message)
164
+ ```
165
+
166
+ ### मॉडल बदलें
167
+
168
+ `gpt-3.5-turbo` के लिए तेज़ मोड का उपयोग करें:
169
+
170
+ ```shell
171
+ interpreter --fast
172
+ ```
173
+
174
+ Python में, आपको मॉडल को मैन्युअली सेट करने की आवश्यकता होगी:
175
+
176
+ ```python
177
+ interpreter.llm.model = "gpt-3.5-turbo"
178
+ ```
179
+
180
+ ### ओपन इंटरप्रेटर को स्थानीय रूप से चलाना
181
+
182
+ ```shell
183
+ interpreter --local
184
+ ```
185
+
186
+ #### स्थानीय मॉडल पैरामीटर
187
+
188
+ आप स्थानीय रूप से चल रहे मॉडल की `max_tokens` और `context_window` (टोकन में) आसानी से संशोधित कर सकते हैं।
189
+
190
+ छोटे संदर्भ विंडो का उपयोग करने से कम RAM का उपयोग होगा, इसलिए यदि GPU असफल हो रहा है तो हम एक छोटी विंडो की कोशिश करने की सलाह देते हैं।
191
+
192
+ ```shell
193
+ interpreter --max_tokens 2000 --context_window 16000
194
+ ```
195
+
196
+ ### डीबग मोड
197
+
198
+ सहयोगियों को ओपन इंटरप्रेटर की जांच करने में मदद करने के लिए, `--verbose` मोड अत्यधिक वर्बोस होता है।
199
+
200
+ आप डीबग मोड को उसके फ़्लैग (`interpreter --verbose`) का उपयोग करके या चैट के बीच में सक्षम कर सकते हैं:
201
+
202
+ ```shell
203
+ $ interpreter
204
+ ...
205
+ > %verbose true <- डीबग मोड चालू करता है
206
+
207
+ > %verbose false <- डीबग मोड बंद करता है
208
+ ```
209
+
210
+ ### इंटरैक्टिव मोड कमांड्स
211
+
212
+ इंटरैक्टिव मोड में, आप निम्नलिखित कमांडों का उपयोग करके अपने अनुभव को बेहतर बना सकते हैं। यहां उपलब्ध कमांडों की सूची है:
213
+
214
+ **उपलब्ध कमांड:**
215
+ - `%verbose [true/false]`: डीबग मोड को टॉगल करें। कोई तर्क नहीं या `true` के साथ, यह डीबग मोड में प्रवेश करता है। `false` के साथ, यह डीबग मोड से बाहर निकलता है।
216
+ - `%reset`: वर्तमान सत्र को रीसेट करता है।
217
+ - `%undo`: पिछले संदेश और उसके जवाब को संदेश इतिहास से हटा देता है।
218
+ - `%save_message [पथ]`: संदेशों को एक निर्दिष्ट JSON पथ पर सहेजता है। यदि कोई पथ निर्दिष्ट नहीं किया गया है, तो यह डिफ़ॉल्ट रूप से 'messages.json' पर जाता है।
219
+ - `%load_message [पथ]`: एक निर्दिष्ट JSON पथ से संदेश लोड करता है। यदि कोई पथ निर्दिष्ट नहीं किया गया है, तो यह डिफ़ॉल्ट रूप से 'messages.json' पर जाता है।
220
+ - `%help`: मदद संदेश दिखाएं।
221
+
222
+ इन कमांडों को आज़माएँ और हमें अपनी प्रतिक्रिया दें!
223
+
224
+ ## सुरक्षा सूचना
225
+
226
+ क्योंकि उत्पन्न कोड आपके स्थानीय वातावरण में निष्पादित किया जाता है, इसलिए यह आपके फ़ाइलों और सिस्टम सेटिंग्स के साथ संवाद कर सकता है, जिससे अप्रत्याशित परिणाम जैसे डेटा हानि या सुरक्षा जोखिम हो सकता है।
227
+
228
+ **⚠️ Open Interpreter कोड को निष्पादित करने से पहले उपयोगकर्ता की पुष्टि के लिए पूछेगा।**
229
+
230
+ इस पुष्टि को छोड़ने के लिए आप `interpreter -y` चला सकते हैं या `interpreter.auto_run = True` सेट कर सकते हैं, जिसके बाद:
231
+
232
+ - फ़ाइलों या सिस्टम सेटिंग्स को संशोधित करने वाले कमांडों के लिए सतर्क रहें।
233
+ - ओपन इंटरप्रेटर को एक स्व-चालित कार की तरह देखें और अपने टर्मिनल को बंद करके प्रक्रिया को समाप्त करने के लिए तत्पर रहें।
234
+ - Google Colab या Replit जैसे प्रतिबंधित वातावरण में ओपन इंटरप्रेटर को चलाने का विचार करें। ये वातावरण अधिक संगठित होते हैं और अनियंत्रित कोड के साथ जुड़े जोखिमों को कम करते हैं।
235
+
236
+ ## यह कार्य कैसे करता है?
237
+
238
+ Open Interpreter एक [फ़ंक्शन-कॉलिंग भाषा मॉडल](https://platform.openai.com/docs/guides/gpt/function-calling) को एक `exec()` फ़ंक्शन के साथ लैस करता है, जो एक `language` (जैसे "Python" या "JavaScript") और `code` को चलाने के लिए स्वीकार करता है।
239
+
240
+ फिर हम मॉडल के संदेश, कोड और आपके सिस्टम के आउटपुट को टर्मिनल में मार्कडाउन के रूप में स्ट्रीम करते हैं।
241
+
242
+ # योगदान
243
+
244
+ योगदान करने के लिए आपकी रुचि के लिए धन्यवाद! हम समुदाय से सहभागिता का स्वागत करते हैं।
245
+
246
+ अधिक जानकारी के लिए कृपया हमारे [योगदान दिशानिर्देश](CONTRIBUTING.md) देखें।
247
+
248
+ ## लाइसेंस
249
+
250
+ Open Interpreter MIT लाइसेंस के तहत लाइसेंस है। आपको सॉफ़्टवेयर की प्रतिलिपि का उपयोग, प्रतिलिपि, संशोधन, वितरण, सबलाइसेंस और बेचने की अनुमति है।
251
+
252
+ **ध्यान दें**: यह सॉफ़्टवेयर OpenAI से संबद्ध नहीं है।
253
+
254
+ > अपनी उंगलियों की गति से काम करने वाले एक जूनियर प्रोग्रामर तक पहुंच ... नए वर्कफ़्लो को सरल और कुशल बना सकती है, साथ ही ... प्रोग्रामिंग के लाभों को नए दर्शकों तक पहुंचा सकती है।
255
+ >
256
+ > — _OpenAI's Code Interpreter Release_
257
+
258
+ <br>
open-interpreter/docs/README_JA.md ADDED
@@ -0,0 +1,398 @@
1
+ <h1 align="center">● Open Interpreter</h1>
2
+
3
+ <p align="center">
4
+ <a href="https://discord.gg/6p3fD6rBVm">
5
+ <img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white"/></a>
6
+ <a href="README_ES.md"> <img src="https://img.shields.io/badge/Español-white.svg" alt="ES doc"/></a>
7
+ <a href="../README.md"><img src="https://img.shields.io/badge/english-document-white.svg" alt="EN doc"></a>
8
+ <a href="README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"/></a>
9
+ <a href="README_IN.md"><img src="https://img.shields.io/badge/Hindi-white.svg" alt="IN doc"/></a>
10
+ <img src="https://img.shields.io/static/v1?label=license&message=AGPL&color=white&style=flat" alt="License"/>
11
+ <br>
12
+ <br>
13
+ <b>自然言語で指示するだけでコードを書いて実行までしてくれる。</b><br>
14
+ ローカルに実装したOpenAI Code Interpreterのオープンソース版。<br>
15
+ <br><a href="https://openinterpreter.com">デスクトップアプリへの早期アクセス</a>‎ ‎ |‎ ‎ <a href="https://docs.openinterpreter.com/">ドキュメント</a><br>
16
+ </p>
17
+
18
+ <br>
19
+
20
+ ![poster](https://github.com/KillianLucas/open-interpreter/assets/63927363/08f0d493-956b-4d49-982e-67d4b20c4b56)
21
+
22
+ <br>
23
+
24
+ **Update:** ● 0.1.12 アップデートで `interpreter --vision` 機能が導入されました。([ドキュメント](https://docs.openinterpreter.com/usage/terminal/vision))
25
+
26
+ <br>
27
+
28
+ ```shell
29
+ pip install open-interpreter
30
+ ```
31
+
32
+ ```shell
33
+ interpreter
34
+ ```
35
+
36
+ <br>
37
+
38
+ **Open Interpreter**は、言語モデルに指示し、コード(Python、Javascript、Shell など)をローカル環境で実行できるようにします。インストール後、`$ interpreter` を実行するとターミナル経由で ChatGPT のようなインターフェースを介し、Open Interpreter とチャットができます。
39
+
40
+ これにより、自然言語のインターフェースを通して、パソコンの一般的な機能が操作できます。
41
+
42
+ - 写真、動画、PDF などの作成や編集
43
+ - Chrome ブラウザの制御とリサーチ作業
44
+ - 大規模なデータセットのプロット、クリーニング、分析
45
+ - 等々
46
+
47
+ **⚠️ 注意: 実行する前にコードを承認するよう求められます。**
48
+
49
+ <br>
50
+
51
+ ## デモ
52
+
53
+ https://github.com/KillianLucas/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60
54
+
55
+ #### Google Colab でも対話形式のデモを利用できます:
56
+
57
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)
58
+
59
+ #### 音声インターフェースの実装例 (_Her_ からインスピレーションを得たもの):
60
+
61
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)
62
+
63
+ ## クイックスタート
64
+
65
+ ```shell
66
+ pip install open-interpreter
67
+ ```
68
+
69
+ ### ターミナル
70
+
71
+ インストール後、`interpreter` を実行するだけです:
72
+
73
+ ```shell
74
+ interpreter
75
+ ```
76
+
77
+ ### Python
78
+
79
+ ```python
80
+ from interpreter import interpreter
81
+
82
+ interpreter.chat("AAPLとMETAの株価グラフを描いてください") # コマンドを実行
83
+ interpreter.chat() # 対話形式のチャットを開始
84
+ ```
85
+
86
+ ## ChatGPT の Code Interpreter との違い
87
+
88
+ GPT-4 で実装された OpenAI の [Code Interpreter](https://openai.com/blog/chatgpt-plugins#code-interpreter) は、実世界のタスクを ChatGPT で操作できる素晴らしい機会を提供しています。
89
+
90
+ しかし、OpenAI のサービスはホスティングされているクローズドな環境で、かなり制限されています:
91
+
92
+ - インターネットに接続できない。
93
+ - [プリインストールされているパッケージが限られている](https://wfhbrian.com/mastering-chatgpts-code-interpreter-list-of-python-packages/)。
94
+ - 最大アップロードは 100MB で、120 秒という実行時間の制限も。
95
+ - 生成されたファイルやリンクとともに状態がリセットされる。
96
+
97
+ ---
98
+
99
+ Open Interpreter は、ローカル環境で操作することで、これらの制限を克服しています。インターネットにフルアクセスでき、時間やファイルサイズの制限を受けず、どんなパッケージやライブラリも利用できます。
100
+
101
+ Open Interpreter は、GPT-4 Code Interpreter のパワーとローカル開発環境の柔軟性を組み合わせたものです。
102
+
103
+ ## コマンド
104
+
105
+ **更新:** アップデート(0.1.5)でストリーミング機能が導入されました:
106
+
107
+ ```python
108
+ message = "どのオペレーティングシステムを使用していますか?"
109
+
110
+ for chunk in interpreter.chat(message, display=False, stream=True):
111
+ print(chunk)
112
+ ```
113
+
114
+ ### 対話型チャット
115
+
116
+ ターミナルで対話形式のチャットを開始するには、コマンドラインから `interpreter` を実行します。
117
+
118
+ ```shell
119
+ interpreter
120
+ ```
121
+
122
+ または、.py ファイルから `interpreter.chat()` も利用できます。
123
+
124
+ ```python
125
+ interpreter.chat()
126
+ ```
127
+
128
+ **ストリーミングすることで chunk 毎に処理することも可能です:**
129
+
130
+ ```python
131
+ message = "What operating system are we on?"
132
+
133
+ for chunk in interpreter.chat(message, display=False, stream=True):
134
+ print(chunk)
135
+ ```
136
+
137
+ ### プログラム的なチャット
138
+
139
+ より精確な制御のために、メッセージを直接`.chat(message)`に渡すことができます。
140
+
141
+ ```python
142
+ interpreter.chat("/videos フォルダにあるすべての動画に字幕を追加する。")
143
+
144
+ # ... ターミナルに出力をストリームし、タスクを完了 ...
145
+
146
+ interpreter.chat("ついでに、字幕を大きくできますか?")
147
+
148
+ # ...
149
+ ```
150
+
151
+ ### 新しいチャットを開始
152
+
153
+ プログラム的チャットで Open Interpreter は、会話の履歴を記憶しています。新しくやり直したい場合は、リセットすることができます:
154
+
155
+ ```python
156
+ interpreter.messages = []
157
+ ```
158
+
159
+ ### チャットの保存と復元
160
+
161
+ `interpreter.chat()` はメッセージのリストを返し, `interpreter.messages = messages` のように使用することで会話を再開することが可能です:
162
+
163
+ ```python
164
+ messages = interpreter.chat("私の名前は田中です。") # 'messages'にメッセージを保存
165
+ interpreter.messages = [] # インタープリタをリセット("田中"は忘れられる)
166
+
167
+ interpreter.messages = messages # 'messages'からチャットを再開("田中"は記憶される)
168
+ ```
169
+
170
+ ### システムメッセージのカスタマイズ
171
+
172
+ Open Interpreter のシステムメッセージを確認し、設定することで、機能を拡張したり、権限を変更したり、またはより多くのコンテキストを与えたりすることができます。
173
+
174
+ ```python
175
+ interpreter.system_message += """
176
+ シェルコマンドを '-y' フラグ付きで実行し、ユーザーが確認する必要がないようにする。
177
+ """
178
+ print(interpreter.system_message)
179
+ ```
180
+
181
+ ### モデルの変更
182
+
183
+ Open Interpreter は、ホストされた言語モデルへの接続に [LiteLLM](https://docs.litellm.ai/docs/providers/) を使用しています。
184
+
185
+ model パラメータを設定することで、モデルを変更することが可能です:
186
+
187
+ ```shell
188
+ interpreter --model gpt-3.5-turbo
189
+ interpreter --model claude-2
190
+ interpreter --model command-nightly
191
+ ```
192
+
193
+ Python では、オブジェクト上でモデルを設定します:
194
+
195
+ ```python
196
+ interpreter.llm.model = "gpt-3.5-turbo"
197
+ ```
198
+
199
+ [適切な "model" の値はこちらから検索してください。](https://docs.litellm.ai/docs/providers/)
200
+
201
+ ### ローカルのモデルを実行する
202
+
203
+ Open Interpreter は、OpenAI 互換サーバーを使用してモデルをローカルで実行できます。(LM Studio、jan.ai、ollama など)
204
+
205
+ 推論サーバーの api_base URL を指定して「interpreter」を実行するだけです (LM Studio の場合、デフォルトでは「http://localhost:1234/v1」です)。
206
+
207
+ ```shell
208
+ interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
209
+ ```
210
+
211
+ あるいは、サードパーティのソフトウェアをインストールせずに、単に実行するだけで Llamafile を使用することもできます。
212
+
213
+ ```shell
214
+ interpreter --local
215
+ ```
216
+
217
+ より詳細なガイドについては、[Mike Bird によるこのビデオ](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H) をご覧ください。
218
+
219
+ **LM Studioをバックグラウンドで使用する方法**
220
+
221
+ 1. [https://lmstudio.ai/](https://lmstudio.ai/)からダウンロードして起動します。
222
+ 2. モデルを選択し、**↓ ダウンロード** をクリックします。
223
+ 3. 左側の **↔️** ボタン(💬 の下)をクリックします。
224
+ 4. 上部でモデルを選択し、**サーバーを起動** をクリックします。
225
+
226
+ サーバーが稼働を開始したら、Open Interpreter との会話を開始できます。
227
+
228
+ > **注意:** ローカルモードでは、`context_window` を 3000 に、`max_tokens` を 1000 に設定します。モデルによって異なる要件がある場合、これらのパラメータを手動で設定してください(下記参照)。
229
+
230
+ #### コンテキストウィンドウ、最大トークン数
231
+
232
+ ローカルで実行しているモデルの `max_tokens` と `context_window`(トークン単位)を変更することができます。
233
+
234
+ ローカルモードでは、小さいコンテキストウィンドウは RAM を少なく使用するので、失敗する場合や遅い場合は、より短いウィンドウ(〜1000)を試すことをお勧めします。`max_tokens` が `context_window` より小さいことを確認してください。
235
+
236
+ ```shell
237
+ interpreter --local --max_tokens 1000 --context_window 3000
238
+ ```
239
+
240
+ ### デバッグモード
241
+
242
+ コントリビューターが Open Interpreter を調査するのを助けるために、`--verbose` モードは非常に便利です。
243
+
244
+ デバッグモードは、フラグ(`interpreter --verbose`)を使用するか、またはチャットの中から有効にできます:
245
+
246
+ ```shell
247
+ $ interpreter
248
+ ...
249
+ > %verbose true # <- デバッグモードを有効にする
250
+
251
+ > %verbose false # <- デバッグモードを無効にする
252
+ ```
253
+
254
+ ### 対話モードのコマンド
255
+
256
+ 対話モードでは、以下のコマンドを使用して操作を便利にすることができます。利用可能なコマンドのリストは以下の通りです:
257
+
258
+ **利用可能なコマンド:**
259
+
260
+ - `%verbose [true/false]`: デバッグモードを切り替えます。引数なしまたは `true` でデバッグモードに入ります。`false` でデバッグモードを終了します。
261
+ - `%reset`: 現在のセッションの会話をリセットします。
262
+ - `%undo`: メッセージ履歴から前のユーザーメッセージと AI の応答を削除します。
263
+ - `%save_message [path]`: メッセージを指定した JSON パスに保存します。パスが指定されていない場合、デフォルトは `messages.json` になります。
264
+ - `%load_message [path]`: 指定した JSON パスからメッセージを読み込みます。パスが指定されていない場合、デフォルトは `messages.json` になります。
265
+ - `%tokens [prompt]`: (_実験的_) 次のプロンプトのコンテキストとして送信されるトークンを計算し、そのコストを見積もります。オプションで、`prompt` が提供された場合のトークンと見積もりコストを計算します。見積もりコストは [LiteLLM の `cost_per_token()` メソッド](https://docs.litellm.ai/docs/completion/token_usage#2-cost_per_token)に依存します。
266
+ - `%help`: ヘルプメッセージを表示します。
267
+
268
+ ### 設定
269
+
270
+ Open Interpreter では、`config.yaml` ファイルを使用してデフォルトの動作を設定することができます。
271
+
272
+ これにより、毎回コマンドライン引数を変更することなく柔軟に設定することができます。
273
+
274
+ 以下のコマンドを実行して設定ファイルを開きます:
275
+
276
+ ```
277
+ interpreter --config
278
+ ```
279
+
280
+ #### 設定ファイルの複数利用
281
+
282
+ Open Interpreter は複数の `config.yaml` ファイルをサポートしており、`--config_file` 引数を通じて簡単に設定を切り替えることができます。
283
+
284
+ **注意**: `--config_file` はファイル名またはファイルパスを受け入れます。ファイル名はデフォルトの設定ディレクトリを使用し、ファイルパスは指定されたパスを使用します。
285
+
286
+ 新しい設定を作成または編集するには、次のコマンドを実行します:
287
+
288
+ ```
289
+ interpreter --config --config_file $config_path
290
+ ```
291
+
292
+ 特定の設定ファイルをロードして Open Interpreter を実行するには、次のコマンドを実行します:
293
+
294
+ ```
295
+ interpreter --config_file $config_path
296
+ ```
297
+
298
+ **注意**: `$config_path` をあなたの設定ファイルの名前またはパスに置き換えてください。
299
+
300
+ ##### 対話モードでの使用例
301
+
302
+ 1. 新しい `config.turbo.yaml` ファイルを作成します
303
+ ```
304
+ interpreter --config --config_file config.turbo.yaml
305
+ ```
306
+ 2. `config.turbo.yaml` ファイルを編集して、`model` を `gpt-3.5-turbo` に設定します
307
+ 3. `config.turbo.yaml` 設定で、Open Interpreter を実行します
308
+ ```
309
+ interpreter --config_file config.turbo.yaml
310
+ ```
311
+
312
+ ##### Python での使用例
313
+
314
+ Python のスクリプトから Open Interpreter を呼び出すときにも設定ファイルをロードできます:
315
+
316
+ ```python
317
+ import os
318
+ from interpreter import interpreter
319
+
320
+ currentPath = os.path.dirname(os.path.abspath(__file__))
321
+ config_path=os.path.join(currentPath, './config.test.yaml')
322
+
323
+ interpreter.extend_config(config_path=config_path)
324
+
325
+ message = "What operating system are we on?"
326
+
327
+ for chunk in interpreter.chat(message, display=False, stream=True):
328
+ print(chunk)
329
+ ```
330
+
331
+ ## FastAPI サーバーのサンプル
332
+
333
+ アップデートにより Open Interpreter は、HTTP REST エンドポイントを介して制御できるようになりました:
334
+
335
+ ```python
336
+ # server.py
337
+
338
+ from fastapi import FastAPI
339
+ from fastapi.responses import StreamingResponse
340
+ from interpreter import interpreter
341
+
342
+ app = FastAPI()
343
+
344
+ @app.get("/chat")
345
+ def chat_endpoint(message: str):
346
+ def event_stream():
347
+ for result in interpreter.chat(message, stream=True):
348
+ yield f"data: {result}\n\n"
349
+
350
+ return StreamingResponse(event_stream(), media_type="text/event-stream")
351
+
352
+ @app.get("/history")
353
+ def history_endpoint():
354
+ return interpreter.messages
355
+ ```
356
+
357
+ ```shell
358
+ pip install fastapi uvicorn
359
+ uvicorn server:app --reload
360
+ ```
361
+
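サーバー起動後にこのエンドポイントを呼び出す一例です(標準ライブラリのみを使った説明用のスケッチで、`stream_chat` と `parse_sse_line` は仮の名前です):

```python
import urllib.parse
import urllib.request

def parse_sse_line(raw: bytes):
    """SSE の 1 行から data ペイロードを取り出す(該当しなければ None)。"""
    line = raw.decode("utf-8").strip()
    if line.startswith("data: "):
        return line[len("data: "):]
    return None

def stream_chat(message: str, base_url: str = "http://localhost:8000"):
    """/chat エンドポイントをストリーミングで読み、チャンクを順に返す。"""
    url = f"{base_url}/chat?" + urllib.parse.urlencode({"message": message})
    with urllib.request.urlopen(url) as response:
        for raw in response:
            data = parse_sse_line(raw)
            if data is not None:
                yield data
```

`for chunk in stream_chat("こんにちは")` のように使うと、サーバーからの応答を順次受け取れます。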
362
+ ## 安全に関する注意
363
+
364
+ 生成されたコードはローカル環境で実行されるため、ファイルやシステム設定と相互作用する可能性があり、データ損失やセキュリティリスクなど予期せぬ結果につながる可能性があります。
365
+
366
+ **⚠️ Open Interpreter はコードを実行する前にユーザーの確認を求めます。**
367
+
368
+ この確認を回避するには、`interpreter -y` を実行するか、`interpreter.auto_run = True` を設定します。その場合:
369
+
370
+ - ファイルやシステム設定を変更するコマンドを要求するときは注意してください。
371
+ - Open Interpreter を自動運転車のように監視し、ターミナルを閉じてプロセスを終了できるように準備しておいてください。
372
+ - Google Colab や Replit のような制限された環境で Open Interpreter を実行することを検討してください。これらの環境はより隔離されており、任意のコードの実行に関連するリスクを軽減します。
373
+
374
+ 一部のリスクを軽減するための[セーフモード](docs/SAFE_MODE.md)と呼ばれる **実験的な** サポートがあります。
375
+
376
+ ## Open Interpreter はどのように機能するのか?
377
+
378
+ Open Interpreter は、[関数が呼び出せる言語モデル](https://platform.openai.com/docs/guides/gpt/function-calling)に `exec()` 関数を装備し、実行する言語("python"や"javascript"など)とコードが渡せるようになっています。
379
+
380
+ そして、モデルからのメッセージ、コード、システムの出力を Markdown としてターミナルにストリーミングします。
381
+
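上記の仕組みを OpenAI の function calling 形式の関数定義として表すと、おおよそ次のようなスケッチになります(実際の実装そのものではなく、説明用の仮の定義です):

```python
# モデルに渡す exec 関数の定義(説明用のスケッチ)
exec_function = {
    "name": "exec",
    "description": "指定した言語でコードを実行する",
    "parameters": {
        "type": "object",
        "properties": {
            "language": {
                "type": "string",
                "enum": ["python", "javascript", "shell"],
            },
            "code": {"type": "string", "description": "実行するコード"},
        },
        "required": ["language", "code"],
    },
}
```

モデルがこの関数を呼び出すと、受け取った `language` と `code` をローカルで実行し、その出力を次のメッセージとしてモデルに返す、という流れです。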
382
+ # 貢献
383
+
384
+ 貢献に興味を持っていただき、ありがとうございます!コミュニティからの参加を歓迎しています。
385
+
386
+ 詳しくは、[貢献ガイドライン](CONTRIBUTING.md)を参照してください。
387
+
388
+ # ロードマップ
389
+
390
+ Open Interpreter の未来を一足先に見るために、[私たちのロードマップ](https://github.com/KillianLucas/open-interpreter/blob/main/docs/ROADMAP.md)をご覧ください。
391
+
392
+ **注意**: このソフトウェアは OpenAI とは関連していません。
393
+
394
+ > あなたの指先のスピードで作業するジュニアプログラマーにアクセスすることで、… 新しいワークフローを楽で効率的なものにし、プログラミングの利点を新しいオーディエンスに開放することができます。
395
+ >
396
+ > — _OpenAI Code Interpreter リリース_
397
+
398
+ <br>
open-interpreter/docs/README_VN.md ADDED
@@ -0,0 +1,395 @@
1
+ <h1 align="center">● Open Interpreter</h1>
2
+
3
+ <p align="center">
4
+ <a href="https://discord.gg/6p3fD6rBVm">
5
+ <img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white"/></a>
6
+ <a href="README_ES.md"> <img src="https://img.shields.io/badge/Español-white.svg" alt="ES doc"/></a>
7
+ <a href="docs/README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"/></a>
8
+ <a href="docs/README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"/></a>
9
+ <a href="docs/README_IN.md"><img src="https://img.shields.io/badge/Hindi-white.svg" alt="IN doc"/></a>
10
+ <img src="https://img.shields.io/static/v1?label=license&message=MIT&color=white&style=flat" alt="License"/>
11
+ <br>
12
+ <br>
13
+ <b>Chạy mô hình ngôn ngữ trí tuệ nhân tạo trên máy tính của bạn.</b><br>
14
+ Bản triển khai mã nguồn mở, chạy cục bộ của Code Interpreter của OpenAI.<br>
15
+ <br><a href="https://openinterpreter.com">Quyền truy cập sớm dành cho máy tính cá nhân</a>‎ ‎ |‎ ‎ <b><a href="https://docs.openinterpreter.com/">Tài liệu đọc tham khảo</a></b><br>
16
+ </p>
17
+
18
+ <br>
19
+
20
+ ![poster](https://github.com/KillianLucas/open-interpreter/assets/63927363/08f0d493-956b-4d49-982e-67d4b20c4b56)
21
+
22
+ <br>
23
+
24
+ ```shell
25
+ pip install open-interpreter
26
+ ```
27
+
28
+ ```shell
29
+ interpreter
30
+ ```
31
+
32
+ <br>
33
+
34
+ **Open Interpreter** Chạy LLMs trên máy tính cục bộ (Có thể sử dụng ngôn ngữ Python, Javascript, Shell, và nhiều hơn thế). Bạn có thể nói chuyện với Open Interpreter thông qua giao diện giống với ChatGPT ngay trên terminal của bạn bằng cách chạy lệnh `$ interpreter` sau khi tải thành công.
35
+
36
+ Các tính năng chung mà giao diện ngôn ngữ này mang lại:
37
+
38
+ - Tạo và chỉnh sửa ảnh, videos, PDF, vân vân...
39
+ - Điều khiển trình duyệt Chrome để tiến hành nghiên cứu
40
+ - Vẽ, làm sạch và phân tích các tập dữ liệu lớn (large datasets)
41
+ - ...vân vân.
42
+
43
+ **⚠️ Lưu ý: Bạn sẽ được yêu cầu phê duyệt mã trước khi chạy.**
44
+
45
+ <br>
46
+
47
+ ## Thử nghiệm
48
+
49
+ https://github.com/KillianLucas/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60
50
+
51
+ #### Bản thử nghiệm có sẵn trên Google Colab:
52
+
53
+ [![Mở trong Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)
54
+
55
+ #### Đi kèm với ứng dụng mẫu tương tác bằng giọng nói (lấy cảm hứng từ bộ phim _Her_):
56
+
57
+ [![Mở trong Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)
58
+
59
+ ## Hướng dẫn khởi động nhanh
60
+
61
+ ```shell
62
+ pip install open-interpreter
63
+ ```
64
+
65
+ ### Terminal
66
+
67
+ Sau khi cài đặt, chạy dòng lệnh `interpreter`:
68
+
69
+ ```shell
70
+ interpreter
71
+ ```
72
+
73
+ ### Python
74
+
75
+ ```python
76
+ from interpreter import interpreter
77
+
78
+ interpreter.chat("Vẽ giá cổ phiếu đã chuẩn hoá của AAPL và META") # Chạy trên 1 dòng lệnh
79
+ interpreter.chat() # Khởi động chat có khả năng tương tác
80
+ ```
81
+
82
+ ## So sánh Code Interpreter của ChatGPT
83
+
84
+ Bản phát hành [Code Interpreter](https://openai.com/blog/chatgpt-plugins#code-interpreter) của OpenAI với GPT-4 mang đến cơ hội tuyệt vời để hoàn thành các tác vụ thực tế bằng ChatGPT.
85
+
86
+ Tuy nhiên, dịch vụ của OpenAI được lưu trữ, mã nguồn đóng, và rất hạn chế:
87
+
88
+ - Không có truy cập Internet.
89
+ - [Số lượng gói cài đặt hỗ trỡ có sẵn giới hạn](https://wfhbrian.com/mastering-chatgpts-code-interpreter-list-of-python-packages/).
90
+ - Tốc độ tải lên tối đa 100 MB, thời gian chạy giới hạn 120.0 giây.
91
+ - Trạng thái tin nhắn bị xoá kèm với các tệp và liên kết được tạo trước đó khi đóng môi trường lại.
92
+
93
+ ---
94
+
95
+ Open Interpreter khắc phục những hạn chế này bằng cách chạy cục bộ trong môi trường máy tính của bạn. Nó có toàn quyền truy cập vào Internet, không bị hạn chế về thời gian hoặc kích thước tệp và có thể sử dụng bất kỳ gói hoặc thư viện nào.
96
+
97
+ Đây là sự kết hợp sức mạnh của mã nguồn của GPT-4 với tính linh hoạt của môi trường phát triển cục bộ của bạn.
98
+
99
+ ## Dòng lệnh
100
+
101
+ **Update:** Bản cập nhật (0.1.5) giới thiệu tính năng phát trực tuyến (streaming):
102
+
103
+ ```python
104
+ message = "Chúng ta đang ở trên hệ điều hành nào?"
105
+
106
+ for chunk in interpreter.chat(message, display=False, stream=True):
107
+ print(chunk)
108
+ ```
109
+
110
+ ### Trò chuyện tương tác
111
+
112
+ Để tạo một cuộc trò chuyện tương tác từ terminal của bạn, chạy `interpreter` bằng dòng lệnh:
113
+
114
+ ```shell
115
+ interpreter
116
+ ```
117
+
118
+ hoặc `interpreter.chat()` từ file có đuôi .py :
119
+
120
+ ```python
121
+ interpreter.chat()
122
+ ```
123
+
124
+ **Bạn cũng có thể phát trực tuyến từng đoạn:**
125
+
126
+ ```python
127
+ message = "Chúng ta đang chạy trên hệ điều hành nào?"
128
+
129
+ for chunk in interpreter.chat(message, display=False, stream=True):
130
+ print(chunk)
131
+ ```
132
+
133
+ ### Trò chuyện lập trình được
134
+
135
+ Để kiểm soát tốt hơn, bạn chuyển tin nhắn qua `.chat(message)`:
136
+
137
+ ```python
138
+ interpreter.chat("Truyền phụ đề tới tất cả videos vào /videos.")
139
+
140
+ # ... Truyền đầu ra đến thiết bị đầu cuối của bạn (terminal) hoàn thành tác vụ ...
141
+
142
+ interpreter.chat("Nhìn đẹp đấy nhưng bạn có thể làm cho phụ đề lớn hơn được không?")
143
+
144
+ # ...
145
+ ```
146
+
147
+ ### Tạo một cuộc trò chuyện mới:
148
+
149
+ Trong Python, Open Interpreter ghi nhớ lịch sử hội thoại; nếu muốn bắt đầu lại từ đầu, bạn có thể đặt lại:
150
+
151
+ ```python
152
+ interpreter.messages = []
153
+ ```
154
+
155
+ ### Lưu và khôi phục cuộc trò chuyện
156
+
157
+ `interpreter.chat()` trả về danh sách tin nhắn, có thể được sử dụng để tiếp tục cuộc trò chuyện với `interpreter.messages = messages`:
158
+
159
+ ```python
160
+ messages = interpreter.chat("Tên của tôi là Killian.") # Lưu tin nhắn tới 'messages'
161
+ interpreter.messages = [] # Khởi động lại trình phiên dịch ("Killian" sẽ bị lãng quên)
162
+
163
+ interpreter.messages = messages # Tiếp tục cuộc trò chuyện từ 'messages' ("Killian" sẽ được ghi nhớ)
164
+ ```
165
+
166
+ ### Cá nhân hoá tin nhắn từ hệ thống
167
+
168
+ Bạn có thể kiểm tra và điều chỉnh tin nhắn hệ thống của Open Interpreter để mở rộng chức năng của nó, thay đổi quyền, hoặc đưa cho nó nhiều ngữ cảnh hơn.
169
+
170
+ ```python
171
+ interpreter.system_message += """
172
+ Chạy shell commands với -y để người dùng không phải xác nhận chúng.
173
+ """
174
+ print(interpreter.system_message)
175
+ ```
176
+
177
+ ### Thay đổi mô hình ngôn ngữ
178
+
179
+ Open Interpreter sử dụng mô hình [LiteLLM](https://docs.litellm.ai/docs/providers/) để kết nối tới các mô hình ngôn ngữ được lưu trữ trước đó.
180
+
181
+ Bạn có thể thay đổi mô hình ngôn ngữ bằng cách thay đổi tham số mô hình:
182
+
183
+ ```shell
184
+ interpreter --model gpt-3.5-turbo
185
+ interpreter --model claude-2
186
+ interpreter --model command-nightly
187
+ ```
188
+
189
+ Ở trong Python, đổi model bằng cách thay đổi đối tượng:
190
+
191
+ ```python
192
+ interpreter.llm.model = "gpt-3.5-turbo"
193
+ ```
194
+
195
+ [Tìm tên chuỗi "mô hình" phù hợp cho mô hình ngôn ngữ của bạn ở đây.](https://docs.litellm.ai/docs/providers/)
196
+
197
+ ### Chạy Open Interpreter trên máy cục bộ
198
+
199
+ Open Interpreter có thể sử dụng máy chủ tương thích với OpenAI để chạy các mô hình cục bộ. (LM Studio, jan.ai, ollama, v.v.)
200
+
201
+ Chỉ cần chạy `interpreter` với URL api_base của máy chủ suy luận của bạn (đối với LM studio, nó là `http://localhost:1234/v1` theo mặc định):
202
+
203
+ ```shell
204
+ interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
205
+ ```
206
+
207
+ Ngoài ra, bạn có thể sử dụng Llamafile mà không cần cài đặt bất kỳ phần mềm bên thứ ba nào chỉ bằng cách chạy
208
+
209
+ ```shell
210
+ interpreter --local
211
+ ```
212
+
213
+ Để biết hướng dẫn chi tiết hơn, hãy xem [video này của Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H)
214
+
215
+ **Để chạy LM Studio ở chế độ nền.**
216
+
217
+ 1. Tải [https://lmstudio.ai/](https://lmstudio.ai/) và khởi động.
218
+ 2. Chọn một mô hình rồi nhấn **↓ Download**.
219
+ 3. Nhấn vào nút **↔️** ở bên trái (dưới 💬).
220
+ 4. Chọn mô hình của bạn ở phía trên, rồi nhấn chạy **Start Server**.
221
+
222
+ Một khi server chạy, bạn có thể bắt đầu trò chuyện với Open Interpreter.
223
+
224
+
225
+ > **Lưu ý:** Chế độ cục bộ chỉnh `context_window` của bạn tới 3000, và `max_tokens` của bạn tới 600. Nếu mô hình của bạn có các yêu cầu khác, thì hãy chỉnh các tham số thủ công (xem bên dưới).
226
+
227
+ #### Cửa sổ ngữ cảnh (Context Window), (Max Tokens)
228
+
229
+ Bạn có thể thay đổi `max_tokens` và `context_window` (tính theo token) của các mô hình chạy cục bộ.
230
+
231
+ Ở chế độ cục bộ, các cửa sổ ngữ cảnh sẽ tiêu ít RAM hơn, vậy nên chúng tôi khuyến khích dùng cửa sổ nhỏ hơn (~1000) nếu như nó chạy không ổn định / hoặc nếu nó chậm. Đảm bảo rằng `max_tokens` ít hơn `context_window`.
232
+
233
+ ```shell
234
+ interpreter --local --max_tokens 1000 --context_window 3000
235
+ ```
236
+
237
+ ### Chế độ sửa lỗi
238
+
239
+ Để giúp người đóng góp kiểm tra Open Interpreter, chế độ `--verbose` rất hữu ích.
240
+
241
+ Bạn có thể bật chế độ gỡ lỗi bằng cờ (`interpreter --verbose`), hoặc ngay trong khi trò chuyện:
242
+
243
+ ```shell
244
+ $ interpreter
245
+ ...
246
+ > %verbose true <- Khởi động chế độ gỡ lỗi
247
+
248
+ > %verbose false <- Tắt chế độ gỡ lỗi
249
+ ```
250
+
251
+ ### Lệnh chế độ tương tác
252
+
253
+ Trong chế độ tương tác, bạn có thể sử dụng những dòng lệnh sau để cải thiện trải nghiệm của mình. Đây là danh sách các dòng lệnh có sẵn:
254
+
255
+ **Các lệnh có sẵn:**
256
+
257
+ - `%verbose [true/false]`: Bật chế độ gỡ lỗi. Có hay không có `true` đều khởi động chế độ gỡ lỗi. Với `false` thì nó tắt chế độ gỡ lỗi.
258
+ - `%reset`: Khởi động lại toàn bộ phiên trò chuyện hiện tại.
259
+ - `%undo`: Xóa tin nhắn của người dùng trước đó và phản hồi của AI khỏi lịch sử tin nhắn.
260
+ - `%save_message [path]`: Lưu tin nhắn vào một đường dẫn JSON được xác định từ trước. Nếu không có đường dẫn nào được cung cấp, nó sẽ mặc định là `messages.json`.
261
+ - `%load_message [path]`: Tải tin nhắn từ một đường dẫn JSON được chỉ định. Nếu không có đường dẫn nào được cung cấp, nó sẽ mặc định là `messages.json`.
262
+ - `%tokens [prompt]`: (_Experimental_) Tính toán các token sẽ được gửi cùng với lời nhắc tiếp theo dưới dạng ngữ cảnh và hao tổn. Tùy chọn tính toán token và hao tổn ước tính của một `prompt` nếu được cung cấp. Dựa vào [hàm `cost_per_token()` của mô hình LiteLLM](https://docs.litellm.ai/docs/completion/token_usage#2-cost_per_token) để tính toán hao tổn.
263
+ - `%help`: Hiện lên trợ giúp cho cuộc trò chuyện.
264
+
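Một phác thảo tối giản bằng Python về những gì `%save_message` và `%load_message` thực hiện (chỉ mang tính minh hoạ; tên hàm `save_messages`/`load_messages` là giả định, không phải API chính thức):

```python
import json

def save_messages(messages, path="messages.json"):
    # Lưu danh sách tin nhắn vào một tệp JSON
    with open(path, "w", encoding="utf-8") as f:
        json.dump(messages, f, ensure_ascii=False, indent=2)

def load_messages(path="messages.json"):
    # Tải lại danh sách tin nhắn từ tệp JSON
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

Danh sách tin nhắn tải lại có thể được gán cho `interpreter.messages` để tiếp tục cuộc trò chuyện.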
265
+ ### Cấu hình
266
+
267
+ Open Interpreter cho phép bạn thiết lập các tác vụ mặc định bằng cách sử dụng file `config.yaml`.
268
+
269
+ Điều này cung cấp một cách linh hoạt để định cấu hình trình thông dịch mà không cần thay đổi đối số dòng lệnh mỗi lần
270
+
271
+ Chạy lệnh sau để mở tệp cấu hình:
272
+
273
+ ```
274
+ interpreter --config
275
+ ```
276
+
277
+ #### Cấu hình cho nhiều tệp
278
+
279
+ Open Interpreter hỗ trợ nhiều file `config.yaml`, cho phép bạn dễ dàng chuyển đổi giữa các cấu hình thông qua lệnh `--config_file`.
280
+
281
+ **Chú ý**: `--config_file` chấp nhận tên tệp hoặc đường dẫn tệp. Tên tệp sẽ sử dụng thư mục cấu hình mặc định, trong khi đường dẫn tệp sẽ sử dụng đường dẫn đã chỉ định.
282
+
283
+ Để tạo hoặc chỉnh sửa cấu hình mới, hãy chạy:
284
+
285
+ ```
286
+ interpreter --config --config_file $config_path
287
+ ```
288
+
289
+ Để yêu cầu Open Interpreter chạy một tệp cấu hình cụ thể, hãy chạy:
290
+
291
+ ```
292
+ interpreter --config_file $config_path
293
+ ```
294
+
295
+ **Chú ý**: Thay đổi `$config_path` với tên hoặc đường dẫn đến tệp cấu hình của bạn.
296
+
297
+ ##### Ví dụ CLI
298
+
299
+ 1. Tạo mới một file `config.turbo.yaml`
300
+ ```
301
+ interpreter --config --config_file config.turbo.yaml
302
+ ```
303
+ 2. Chỉnh sửa file `config.turbo.yaml` để đặt `model` thành `gpt-3.5-turbo`
304
+ 3. Chạy Open Interpreter với cấu hình `config.turbo.yaml`
305
+ ```
306
+ interpreter --config_file config.turbo.yaml
307
+ ```
308
+
309
+ ##### Ví dụ Python
310
+
311
+ Bạn cũng có thể tải các tệp cấu hình khi gọi Open Interpreter từ tập lệnh Python:
312
+
313
+ ```python
314
+ import os
315
+ from interpreter import interpreter
316
+
317
+ currentPath = os.path.dirname(os.path.abspath(__file__))
318
+ config_path=os.path.join(currentPath, './config.test.yaml')
319
+
320
+ interpreter.extend_config(config_path=config_path)
321
+
322
+ message = "What operating system are we on?"
323
+
324
+ for chunk in interpreter.chat(message, display=False, stream=True):
325
+ print(chunk)
326
+ ```
327
+
328
+ ## Máy chủ FastAPI mẫu
329
+
330
+ Bản cập nhật cho phép điều khiển Open Interpreter thông qua các điểm cuối HTTP REST:
331
+
332
+ ```python
333
+ # server.py
334
+
335
+ from fastapi import FastAPI
336
+ from fastapi.responses import StreamingResponse
337
+ from interpreter import interpreter
338
+
339
+ app = FastAPI()
340
+
341
+ @app.get("/chat")
342
+ def chat_endpoint(message: str):
343
+ def event_stream():
344
+ for result in interpreter.chat(message, stream=True):
345
+ yield f"data: {result}\n\n"
346
+
347
+ return StreamingResponse(event_stream(), media_type="text/event-stream")
348
+
349
+ @app.get("/history")
350
+ def history_endpoint():
351
+ return interpreter.messages
352
+ ```
353
+
354
+ ```shell
355
+ pip install fastapi uvicorn
356
+ uvicorn server:app --reload
357
+ ```
358
+
359
+ ## Hướng dẫn an toàn
360
+
361
+ Vì mã được tạo được thực thi trong môi trường cục bộ của bạn nên nó có thể tương tác với các tệp và cài đặt hệ thống của bạn, có khả năng dẫn đến các kết quả không mong muốn như mất dữ liệu hoặc rủi ro bảo mật.
362
+
363
+ **⚠️ Open Interpreter sẽ yêu cầu xác nhận của người dùng trước khi chạy code.**
364
+
365
+ Bạn có thể chạy `interpreter -y` hoặc đặt `interpreter.auto_run = True` để bỏ qua xác nhận này, trong trường hợp đó:
366
+
367
+ - Hãy thận trọng khi yêu cầu các lệnh sửa đổi tệp hoặc cài đặt hệ thống.
368
+ - Theo dõi Open Interpreter giống như một chiếc ô tô tự lái và sẵn sàng kết thúc quá trình bằng cách đóng terminal của bạn.
369
+ - Cân nhắc việc chạy Open Interpreter trong môi trường bị hạn chế như Google Colab hoặc Replit. Những môi trường này biệt lập hơn, giảm thiểu rủi ro khi chạy code tùy ý.
370
+
371
+ Đây là hỗ trợ **thử nghiệm** cho [chế độ an toàn](docs/SAFE_MODE.md) giúp giảm thiểu rủi ro.
372
+
373
+ ## Nó hoạt động thế nào?
374
+
375
+ Open Interpreter trang bị [mô hình ngôn ngữ gọi hàm](https://platform.openai.com/docs/guides/gpt/function-calling) với một hàm `exec()`, chấp nhận một `language` (như "Python" hoặc "JavaScript") và `code` để chạy.
376
+
377
+ Sau đó, chúng tôi truyền trực tuyến thông báo, mã của mô hình và kết quả đầu ra của hệ thống của bạn đến terminal dưới dạng Markdown.
378
+
379
+ # Đóng góp
380
+
381
+ Cảm ơn bạn đã quan tâm đóng góp! Chúng tôi hoan nghênh sự tham gia của cộng đồng.
382
+
383
+ Vui lòng xem [Hướng dẫn đóng góp](CONTRIBUTING.md) để biết thêm chi tiết cách tham gia.
384
+
385
+ ## Giấy phép
386
+
387
+ Open Interpreter được cấp phép theo Giấy phép MIT. Bạn được phép sử dụng, sao chép, sửa đổi, phân phối, cấp phép lại và bán các bản sao của phần mềm.
388
+
389
+ **Lưu ý**: Phần mềm này không liên kết với OpenAI.
390
+
391
+ > Có quyền truy cập vào một lập trình viên cấp dưới làm việc nhanh chóng trong tầm tay bạn ... có thể khiến quy trình làm việc mới trở nên dễ dàng và hiệu quả, cũng như mở ra những lợi ích của việc lập trình cho người mới.
392
+ >
393
+ > — _Bản phát hành Code Interpreter của OpenAI_
394
+
395
+ <br>
open-interpreter/docs/README_ZH.md ADDED
@@ -0,0 +1,220 @@
+ <h1 align="center">● Open Interpreter(开放解释器)</h1>
+
+ <p align="center">
+ <a href="https://discord.gg/6p3fD6rBVm"><img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white"></a>
+ <a href="README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"></a>
+ <a href="README_IN.md"><img src="https://img.shields.io/badge/Hindi-white.svg" alt="IN doc"/></a>
+ <a href="README_ES.md"> <img src="https://img.shields.io/badge/Español-white.svg" alt="ES doc"/></a>
+ <a href="../README.md"><img src="https://img.shields.io/badge/english-document-white.svg" alt="EN doc"></a>
+ <a href="https://github.com/OpenInterpreter/open-interpreter/blob/main/LICENSE"><img src="https://img.shields.io/static/v1?label=license&message=AGPL&color=white&style=flat" alt="License"></a>
+ <br>
+ <br>
+ <b>让语言模型在您的计算机上运行代码。</b><br>
+ 在本地实现的开源 OpenAI 代码解释器。<br>
+ <br><a href="https://openinterpreter.com">登记以提前获取 Open Interpreter(开放解释器)桌面应用程序</a>‎ ‎ |‎ ‎ <b><a href="https://docs.openinterpreter.com/">阅读新文档</a></b><br>
+ </p>
+
+ <br>
+
+ ![poster](https://github.com/KillianLucas/open-interpreter/assets/63927363/08f0d493-956b-4d49-982e-67d4b20c4b56)
+
+ <br>
+
+ ```shell
+ pip install open-interpreter
+ ```
+
+ ```shell
+ interpreter
+ ```
+
+ <br>
+
+ **Open Interpreter(开放解释器)** 可以让大语言模型(LLMs)在本地运行代码(比如 Python、JavaScript、Shell 等)。安装后,在终端上运行 `$ interpreter` 即可通过类似 ChatGPT 的界面与 Open Interpreter 聊天。
+
+ 本软件为计算机的通用功能提供了一个自然语言界面,比如:
+
+ - 创建和编辑照片、视频、PDF 等
+ - 控制 Chrome 浏览器进行搜索
+ - 绘制、清理和分析大型数据集
+ - ...
+
+ **⚠️ 注意:在代码运行前都会要求您批准执行代码。**
+
+ <br>
+
+ ## 演示
+
+ https://github.com/KillianLucas/open-interpreter/assets/63927363/37152071-680d-4423-9af3-64836a6f7b60
+
+ #### Google Colab 上也提供了交互式演示:
+
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)
+
+ ## 快速开始
+
+ ```shell
+ pip install open-interpreter
+ ```
+
+ ### 终端
+
+ 安装后,运行 `interpreter`:
+
+ ```shell
+ interpreter
+ ```
+
+ ### Python
+
+ ```python
+ from interpreter import interpreter
+
+ interpreter.chat("Plot AAPL and META's normalized stock prices") # 执行单一命令
+ interpreter.chat() # 开始交互式聊天
+ ```
+
+ ## 与 ChatGPT 的代码解释器比较
+
+ OpenAI 发布的 [Code Interpreter](https://openai.com/blog/chatgpt-plugins#code-interpreter) 与 GPT-4 相结合,为使用 ChatGPT 完成实际任务提供了绝佳机会。
+
+ 但是,OpenAI 的服务是托管的,闭源的,并且受到严格限制:
+
+ - 无法访问互联网。
+ - [预装软件包数量有限](https://wfhbrian.com/mastering-chatgpts-code-interpreter-list-of-python-packages/)。
+ - 允许的最大上传为 100 MB,且最大运行时间限制为 120.0 秒。
+ - 当运行环境中途结束时,之前的状态会被清除(包括任何生成的文件或链接)。
+
+ ---
+
+ Open Interpreter(开放解释器)通过在本地环境中运行克服了这些限制。它可以完全访问互联网,不受运行时间或是文件大小的限制,也可以使用任何软件包或库。
+
+ 它将 GPT-4 代码解释器的强大功能与本地开发环境的灵活性相结合。
+
+ ## 命令
+
+ ### 交互式聊天
+
+ 要在终端中开始交互式聊天,从命令行运行 `interpreter`:
+
+ ```shell
+ interpreter
+ ```
+
+ 或者从 .py 文件中运行 `interpreter.chat()`:
+
+ ```python
+ interpreter.chat()
+ ```
+
+ ### 程序化聊天
+
+ 为了更精确的控制,您可以通过 `.chat(message)` 直接传递消息:
+
+ ```python
+ interpreter.chat("Add subtitles to all videos in /videos.")
+
+ # ... Streams output to your terminal, completes task ...
+
+ interpreter.chat("These look great but can you make the subtitles bigger?")
+
+ # ...
+ ```
+
+ ### 开始新的聊天
+
+ 在 Python 中,Open Interpreter 会记录历史对话。如果你想从头开始,可以进行重置:
+
+ ```python
+ interpreter.messages = []
+ ```
+
+ ### 保存和恢复聊天
+
+ ```python
+ messages = interpreter.chat("My name is Killian.") # 保存消息到 'messages'
+ interpreter.messages = [] # 重置解释器 ("Killian" 将被遗忘)
+
+ interpreter.messages = messages # 从 'messages' 恢复聊天 ("Killian" 将被记住)
+ ```
+
+ ### 自定义系统消息
+
+ 你可以检查和配置 Open Interpreter 的系统消息,以扩展其功能、修改权限或赋予其更多上下文。
+
+ ```python
+ interpreter.system_message += """
+ 使用 -y 运行 shell 命令,这样用户就不必确认它们。
+ """
+ print(interpreter.system_message)
+ ```
+
+ ### 更改模型
+
+ Open Interpreter 使用 [LiteLLM](https://docs.litellm.ai/docs/providers/) 连接到语言模型。
+
+ 您可以通过设置模型参数来更改模型:
+
+ ```shell
+ interpreter --model gpt-3.5-turbo
+ interpreter --model claude-2
+ interpreter --model command-nightly
+ ```
+
+ 在 Python 环境下,您需要手动设置模型:
+
+ ```python
+ interpreter.llm.model = "gpt-3.5-turbo"
+ ```
+
+ ### 在本地运行 Open Interpreter(开放解释器)
+
+ ```shell
+ interpreter --local
+ ```
+
+ ### 调试模式
+
+ 为了帮助贡献者检查和调试 Open Interpreter,`--verbose` 模式提供了详细的日志。
+
+ 您可以使用 `interpreter --verbose` 来激活调试模式,或者直接在终端输入:
+
+ ```shell
+ $ interpreter
+ ...
+ > %verbose true <- 开启调试模式
+
+ > %verbose false <- 关闭调试模式
+ ```
+
+ ## 安全提示
+
+ 由于生成的代码是在本地环境中运行的,因此会与文件和系统设置发生交互,从而可能导致本地数据丢失或安全风险等意想不到的结果。
+
+ **⚠️ 所以在执行任何代码之前,Open Interpreter 都会询问用户是否运行。**
+
+ 您可以运行 `interpreter -y` 或设置 `interpreter.auto_run = True` 来绕过此确认,此时:
+
+ - 在运行请求修改本地文件或系统设置的命令时要谨慎。
+ - 请像驾驶自动驾驶汽车一直握着方向盘一样留意 Open Interpreter,并随时做好通过关闭终端来结束进程的准备。
+ - 考虑在 Google Colab 或 Replit 等受限环境中运行 Open Interpreter 的主要原因是这些环境更加独立,从而降低执行任意代码导致出现问题的风险。
+
+ ## 它是如何工作的?
+
+ Open Interpreter 为[函数调用语言模型](https://platform.openai.com/docs/guides/gpt/function-calling)配备了 `exec()` 函数,该函数接受 `编程语言`(如 "Python" 或 "JavaScript")和要运行的 `代码`。
+
+ 然后,它会将模型的信息、代码和系统的输出以 Markdown 的形式流式传输到终端。
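
下面是一个极简的示意(假设性示例,并非 Open Interpreter 的真实实现),用来说明这种"模型通过函数调用给出 `编程语言` 和 `代码`,本地函数负责执行并返回输出"的模式:

```python
import subprocess
import sys

def exec_code(language, code):
    """执行模型生成的代码并返回其输出(仅示意 python 与 shell 两种语言)。"""
    if language == "python":
        # 用当前解释器运行一段 Python 代码
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True)
    elif language == "shell":
        result = subprocess.run(code, shell=True,
                                capture_output=True, text=True)
    else:
        raise ValueError(f"不支持的语言: {language}")
    return result.stdout + result.stderr

# 假设模型通过函数调用返回了这两个参数:
print(exec_code("python", "print(1 + 1)"))  # 输出 "2"
```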
+
+ # 作出贡献
+
+ 感谢您有兴趣为本项目做出贡献!我们欢迎所有人参与。
+
+ 请参阅我们的 [贡献准则](CONTRIBUTING.md),了解如何参与贡献的更多详情。
+
+ ## 规划图
+
+ 若要预览 Open Interpreter 的未来,请查看[我们的路线图](https://github.com/KillianLucas/open-interpreter/blob/main/docs/ROADMAP.md)。
+
+ **请注意**:此软件与 OpenAI 无关。
+
+ ![thumbnail-ncu](https://github.com/KillianLucas/open-interpreter/assets/63927363/1b19a5db-b486-41fd-a7a1-fe2028031686)
open-interpreter/docs/ROADMAP.md ADDED
@@ -0,0 +1,168 @@
+ # Roadmap
+
+ ## Documentation
+ - [ ] Work with Mintlify to translate docs. How does Mintlify let us translate our documentation automatically? I know there's a way.
+ - [ ] Better comments throughout the package (they're like docs for contributors)
+ - [ ] Show how to replace interpreter.llm so you can use a custom llm
+
+ ## New features
+ - [ ] Figure out how to get OI to answer to user input requests like python's `input()`. Do we somehow detect a delay in the output..? Is there some universal flag that TUIs emit when they expect user input? Should we do this semantically with embeddings, then ask OI to review it and respond..?
+ - [ ] Placeholder text that gives a compelling example OI request. Probably use `textual`
+ - [ ] Everything else `textual` offers, like could we make it easier to select text? Copy paste in and out? Code editing interface?
+ - [ ] Let people edit the code OI writes. Could just open it in the user's preferred editor. Simple. [Full description of how to implement this here.](https://github.com/KillianLucas/open-interpreter/pull/830#issuecomment-1854989795)
+ - [ ] Display images in the terminal interface
+ - [ ] There should be a function that just renders messages to the terminal, so we can revive conversation navigator, and let people look at their conversations
+ - [ ] ^ This function should also render the last like 5 messages once input() is about to be run, so we don't get those weird stuttering `rich` artifacts
+ - [ ] Let OI use OI, add `interpreter.chat(async=True)` bool. OI can use this to open OI on a new thread
+   - [ ] Also add `interpreter.await()` which waits for `interpreter.running` (?) to = False, and `interpreter.result()` which returns the last assistant message's content.
+ - [ ] Allow for limited functions (`interpreter.functions`) using regex
+   - [ ] If `interpreter.functions != []`:
+     - [ ] set `interpreter.computer.languages` to only use Python
+     - [ ] Use regex to ensure the output of code blocks conforms to just using those functions + other python basics
+ - [ ] (Maybe) Allow for a custom embedding function (`interpreter.embed` or `computer.ai.embed`) which will let us do semantic search
+ - [ ] (Maybe) if a git is detected, switch to a mode that's good for developers, like showing nested file structure in dynamic system message, searching for relevant functions (use computer.files.search)
+ - [x] Allow for integrations somehow (you can replace interpreter.llm.completions with a wrapped completions endpoint for any kind of logging. need to document this tho)
+   - [ ] Document this^
+ - [ ] Expand "safe mode" to have proper, simple Docker support, or maybe Cosmopolitan LibC
+ - [ ] Make it so core can be run elsewhere from terminal package — perhaps split over HTTP (this would make docker easier too)
+ - [ ] For OS mode, experiment with screenshot just returning active window, experiment with it just showing the changes, or showing changes in addition to the whole thing, etc. GAIA should be your guide
+
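As a rough sketch of the regex-based function gating idea above (hypothetical helper and function names, not part of the package today), the check could look something like:

```python
import re

# Hypothetical stand-ins for whatever ends up in interpreter.functions:
ALLOWED_FUNCTIONS = {"search", "summarize"}
# A small whitelist of "other python basics":
SAFE_BUILTINS = {"print", "len", "range", "str", "int", "float"}

def uses_only_allowed_calls(code: str) -> bool:
    """Very rough heuristic: find every `name(` call site in the code block
    and require each name to be an allowed function or a basic builtin."""
    called = re.findall(r"\b([A-Za-z_][A-Za-z0-9_]*)\s*\(", code)
    return all(name in ALLOWED_FUNCTIONS or name in SAFE_BUILTINS for name in called)

print(uses_only_allowed_calls("search('cats'); print(len('x'))"))      # True
print(uses_only_allowed_calls("__import__('os').system('echo hi')"))   # False
```

(A real implementation would want an AST walk rather than a regex, but this is the shape of the idea.)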
+ ## Future-proofing
+
+ - [ ] Really good tests / optimization framework, to be run less frequently than Github actions tests
+ - [x] Figure out how to run us on [GAIA](https://huggingface.co/gaia-benchmark)
+   - [x] How do we just get the questions out of this thing?
+   - [x] How do we assess whether or not OI has solved the task?
+ - [ ] Loop over GAIA, use a different language model every time (use Replicate, then ask LiteLLM how they made their "mega key" to many different LLM providers)
+ - [ ] Loop over that ↑ using a different prompt each time. Which prompt is best across all LLMs?
+ - [ ] (For the NCU) might be good to use a Google VM with a display
+ - [ ] (Future future) Use GPT-4 to assess each result, explaining each failure. Summarize. Send it all to GPT-4 + our prompt. Let it redesign the prompt, given the failures, rinse and repeat
+ - [ ] Stateless (as in, doesn't use the application directory) core python package. All `appdir` or `platformdirs` stuff should be only for the TUI
+   - [ ] `interpreter.__dict__` = a dict derived from config is how the python package should be set, and this should be from the TUI. `interpreter` should not know about the config
+ - [ ] Move conversation storage out of the core and into the TUI. When we exit or error, save messages same as core currently does
+ - [ ] Further split TUI from core (some utils still reach across)
+ - [ ] Better storage of different model keys in TUI / config file. All keys, to multiple providers, should be stored in there. Easy switching
+ - [ ] Automatically migrate users from old config to new config, display a message of this
+ - [ ] On update, check for new system message and ask user to overwrite theirs, or only let users pass in "custom instructions" which adds to our system message
+   - [ ] I think we could have a config that's like... system_message_version. If system_message_version is below the current version, ask the user if we can overwrite it with the default config system message of that version. (This somewhat exists now but needs to be robust)
+
+ # What's in our scope?
+
+ Open Interpreter contains two projects which support each other, whose scopes are as follows:
+
+ 1. `core`, which is dedicated to figuring out how to get LLMs to safely control a computer. Right now, this means creating a real-time code execution environment that language models can operate.
+ 2. `terminal_interface`, a text-only way for users to direct the code-running LLM running inside `core`. This includes functions for connecting the `core` to various local and hosted LLMs (which the `core` itself should not know about).
+
+ # What's not in our scope?
+
+ Our guiding philosophy is minimalism, so we have also decided to explicitly consider the following as **out of scope**:
+
+ 1. Additional functions in `core` beyond running code.
+ 2. More complex interactions with the LLM in `terminal_interface` beyond text (but file paths to more complex inputs, like images or video, can be included in that text).
+
+ ---
+
+ This roadmap gets pretty rough from here. More like working notes.
+
+ # Working Notes
+
+ ## \* Roughly, how to build `computer.browser`:
+
+ First I think we should have a part, like `computer.browser.ask(query)` which just hits up [perplexity](https://www.perplexity.ai/) for fast answers to questions.
+
+ Then we want these sorts of things:
+
+ - `browser.open(url)`
+ - `browser.screenshot()`
+ - `browser.click()`
+
+ It should actually be based closely on Selenium. Copy their API so the LLM knows it.
+
+ Other than that, basically should be = to the computer module itself, at least the IO / keyboard and mouse parts.
+
+ However, for non vision models, `browser.screenshot()` can return the accessibility tree, not an image. And for `browser.click(some text)` we can use the HTML to find that text.
+
+ **Here's how GPT suggests we implement the first steps of this:**
+
+ Creating a Python script that automates the opening of Chrome with the necessary flags and then interacts with it to navigate to a URL and retrieve the accessibility tree involves a few steps. Here's a comprehensive approach:
+
+ 1. **Script to Launch Chrome with Remote Debugging**:
+
+    - This script will start Chrome with the `--remote-debugging-port=9222` flag.
+    - It will handle different platforms (Windows, macOS, Linux).
+
+ 2. **Python Script for Automation**:
+    - This script uses `pychrome` to connect to the Chrome instance, navigate to a URL, and retrieve the accessibility tree.
+
+ ### Step 1: Launching Chrome with Remote Debugging
+
+ You'll need a script to launch Chrome. This script varies based on the operating system. Below is an example for Windows. You can adapt it for macOS or Linux by changing the path and command to start Chrome.
+
+ ```python
+ import subprocess
+
+ def launch_chrome():
+     chrome_path = "C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"  # Update this path for your system
+     # Pass the command as a list (no shell=True) so the flag reaches Chrome directly.
+     subprocess.Popen([chrome_path, "--remote-debugging-port=9222"])
+     print("Chrome launched with remote debugging on port 9222.")
+
+ if __name__ == "__main__":
+     launch_chrome()
+ ```
+
+ ### Step 2: Python Script to Navigate and Retrieve Accessibility Tree
+
+ Next, you'll use `pychrome` to connect to this Chrome instance. Ensure you've installed `pychrome`:
+
+ ```bash
+ pip install pychrome
+ ```
+
+ Here's the Python script:
+
+ ```python
+ import pychrome
+ import time
+
+ def get_accessibility_tree(tab):
+     # Enable the Accessibility domain
+     tab.call_method("Accessibility.enable")
+
+     # Get the accessibility tree
+     tree = tab.call_method("Accessibility.getFullAXTree")
+     return tree
+
+ def main():
+     # Create a browser instance
+     browser = pychrome.Browser(url="http://127.0.0.1:9222")
+
+     # Create a new tab
+     tab = browser.new_tab()
+
+     # Start the tab
+     tab.start()
+
+     # Navigate to a URL
+     tab.set_url("https://www.example.com")
+     time.sleep(3)  # Wait for page to load
+
+     # Retrieve the accessibility tree
+     accessibility_tree = get_accessibility_tree(tab)
+     print(accessibility_tree)
+
+     # Stop the tab (closes it)
+     tab.stop()
+
+     # Close the browser
+     browser.close()
+
+ if __name__ == "__main__":
+     main()
+ ```
+
+ This script will launch Chrome, connect to it, navigate to "https://www.example.com", and then print the accessibility tree to the console.
+
+ **Note**: The script to launch Chrome assumes a typical installation path on Windows. You will need to modify this path according to your Chrome installation location and operating system. Additionally, handling different operating systems requires conditional checks and respective commands for each OS.
open-interpreter/docs/SAFE_MODE.md ADDED
@@ -0,0 +1,60 @@
+ # Safe Mode
+
+ **⚠️ Safe mode is experimental and does not provide any guarantees of safety or security.**
+
+ Open Interpreter is working on providing an experimental safety toolkit to help you feel more confident running the code generated by Open Interpreter.
+
+ Install Open Interpreter with the safety toolkit dependencies as part of the bundle:
+
+ ```shell
+ pip install open-interpreter[safe]
+ ```
+
+ Alternatively, you can install the safety toolkit dependencies separately in your virtual environment:
+
+ ```shell
+ pip install semgrep
+ ```
+
+ ## Features
+
+ - **No Auto Run**: Safe mode disables the ability to automatically execute code
+ - **Code Scanning**: Scan generated code for vulnerabilities with [`semgrep`](https://semgrep.dev/)
+
+ ## Enabling Safe Mode
+
+ You can enable safe mode by passing the `--safe` flag when invoking `interpreter` or by configuring `safe_mode` in your [config file](https://github.com/KillianLucas/open-interpreter#configuration).
+
+ The safe mode setting has three options:
+
+ - `off`: disables the safety toolkit (_default_)
+ - `ask`: prompts you to confirm that you want to scan code
+ - `auto`: automatically scans code
+
+ ### Example Config:
+
+ ```yaml
+ model: gpt-4
+ temperature: 0
+ verbose: false
+ safe_mode: ask
+ ```
+
+ ## Roadmap
+
+ Some upcoming features that enable even more safety:
+
+ - [Execute code in containers](https://github.com/KillianLucas/open-interpreter/pull/459)
+
+ ## Tips & Tricks
+
+ You can adjust the `system_message` in your [config file](https://github.com/KillianLucas/open-interpreter#configuration) to include instructions for the model to scan packages with [`guarddog`](https://github.com/DataDog/guarddog) before installing them.
+
+ ```yaml
+ model: gpt-4
+ verbose: false
+ safe_mode: ask
+ system_message: |
+   # normal system message here
+   BEFORE INSTALLING ANY PACKAGES WITH pip OR npm YOU MUST SCAN THEM WITH `guarddog` FIRST. Run `guarddog pypi scan $package` for pip packages and `guarddog npm scan $package` for npm packages. `guarddog` only accepts one package name at a time.
+ ```
open-interpreter/docs/SECURITY.md ADDED
@@ -0,0 +1,38 @@
+ # Open Interpreter Security Policy
+
+ We take security seriously. Responsible reporting and disclosure of security
+ vulnerabilities is important for the protection and privacy of our users. If you
+ discover any security vulnerabilities, please follow these guidelines.
+
+ Published security advisories are available on our [GitHub Security Advisories]
+ page.
+
+ To report a vulnerability, please draft a [new security advisory on GitHub]. Any
+ fields that you are unsure of or don't understand can be left at their default
+ values. The important part is that the vulnerability is reported. Once the
+ security advisory draft has been created, we will validate the vulnerability and
+ coordinate with you to fix it, release a patch, and responsibly disclose the
+ vulnerability to the public. Read GitHub's documentation on [privately reporting
+ a security vulnerability] for details.
+
+ Please do not report undisclosed vulnerabilities on public sites or forums,
+ including GitHub issues and pull requests. Reporting vulnerabilities to the
+ public could allow attackers to exploit vulnerable applications before we have
+ been able to release a patch and before applications have had time to install
+ the patch. Once we have released a patch and sufficient time has passed for
+ applications to install the patch, we will disclose the vulnerability to the
+ public, at which time you will be free to publish details of the vulnerability
+ on public sites and forums.
+
+ If you have a fix for a security vulnerability, please do not submit a GitHub
+ pull request. Instead, report the vulnerability as described in this policy.
+ Once we have verified the vulnerability, we can create a [temporary private
+ fork] to collaborate on a patch.
+
+ We appreciate your cooperation in helping keep our users safe by following this
+ policy.
+
+ [github security advisories]: https://github.com/KillianLucas/open-interpreter/security/advisories
+ [new security advisory on github]: https://github.com/KillianLucas/open-interpreter/security/advisories/new
+ [privately reporting a security vulnerability]: https://docs.github.com/en/code-security/security-advisories/guidance-on-reporting-and-writing/privately-reporting-a-security-vulnerability
+ [temporary private fork]: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/collaborating-in-a-temporary-private-fork-to-resolve-a-repository-security-vulnerability
open-interpreter/docs/assets/.DS-Store ADDED
Binary file (6.15 kB). View file
 
open-interpreter/docs/assets/favicon.png ADDED
open-interpreter/docs/assets/logo/circle-inverted.png ADDED
open-interpreter/docs/assets/logo/circle.png ADDED
open-interpreter/docs/code-execution/computer-api.mdx ADDED
@@ -0,0 +1,240 @@
+ ---
+ title: Computer API
+ ---
+
+ The following functions are designed for language models to use in Open Interpreter, currently only supported in [OS Mode](/guides/os-mode/).
+
+ ### Display - View
+
+ Takes a screenshot of the primary display.
+
+ ```python
+ interpreter.computer.display.view()
+ ```
+
+ ### Display - Center
+
+ Gets the x, y value of the center of the screen.
+
+ ```python
+ x, y = interpreter.computer.display.center()
+ ```
+
+ ### Keyboard - Hotkey
+
+ Performs a hotkey on the computer.
+
+ ```python
+ interpreter.computer.keyboard.hotkey(" ", "command")
+ ```
+
+ ### Keyboard - Write
+
+ Writes the text into the currently focused window.
+
+ ```python
+ interpreter.computer.keyboard.write("hello")
+ ```
+
+ ### Mouse - Click
+
+ Clicks on the specified coordinates, or an icon, or text. If text is specified, OCR will be run on the screenshot to find the text coordinates and click on it.
+
+ ```python
+ # Click on coordinates
+ interpreter.computer.mouse.click(x=100, y=100)
+
+ # Click on text on the screen
+ interpreter.computer.mouse.click("Onscreen Text")
+
+ # Click on a gear icon
+ interpreter.computer.mouse.click(icon="gear icon")
+ ```
+
+ ### Mouse - Move
+
+ Moves to the specified coordinates, or an icon, or text. If text is specified, OCR will be run on the screenshot to find the text coordinates and move to it.
+
+ ```python
+ # Move to coordinates
+ interpreter.computer.mouse.move(x=100, y=100)
+
+ # Move to text on the screen
+ interpreter.computer.mouse.move("Onscreen Text")
+
+ # Move to a gear icon
+ interpreter.computer.mouse.move(icon="gear icon")
+ ```
+
+ ### Mouse - Scroll
+
+ Scrolls the mouse a specified number of pixels.
+
+ ```python
+ # Scroll Down
+ interpreter.computer.mouse.scroll(-10)
+
+ # Scroll Up
+ interpreter.computer.mouse.scroll(10)
+ ```
+
+ ### Clipboard - View
+
+ Returns the contents of the clipboard.
+
+ ```python
+ interpreter.computer.clipboard.view()
+ ```
+
+ ### OS - Get Selected Text
+
+ Get the selected text on the screen.
+
+ ```python
+ interpreter.computer.os.get_selected_text()
+ ```
+
+ ### Mail - Get
+
+ Retrieves the last `number` emails from the inbox, optionally filtering for only unread emails. (Mac only)
+
+ ```python
+ interpreter.computer.mail.get(number=10, unread=True)
+ ```
+
+ ### Mail - Send
+
+ Sends an email with the given parameters using the default mail app. (Mac only)
+
+ ```python
+ interpreter.computer.mail.send("[email protected]", "Subject", "Body", ["path/to/attachment.pdf", "path/to/attachment2.pdf"])
+ ```
+
+ ### Mail - Unread Count
+
+ Retrieves the count of unread emails in the inbox. (Mac only)
+
+ ```python
+ interpreter.computer.mail.unread_count()
+ ```
+
+ ### SMS - Send
+
+ Send a text message using the default SMS app. (Mac only)
+
+ ```python
+ interpreter.computer.sms.send("2068675309", "Hello from Open Interpreter!")
+ ```
+
+ ### Contacts - Get Phone Number
+
+ Returns the phone number of a contact name. (Mac only)
+
+ ```python
+ interpreter.computer.contacts.get_phone_number("John Doe")
+ ```
+
+ ### Contacts - Get Email Address
+
+ Returns the email of a contact name. (Mac only)
+
+ ```python
+ interpreter.computer.contacts.get_email_address("John Doe")
+ ```
+
+ ### Calendar - Get Events
+
+ Fetches calendar events for the given date or date range from all calendars. (Mac only)
+
+ ```python
+ interpreter.computer.calendar.get_events(start_date=datetime, end_date=datetime)
+ ```
+
+ ### Calendar - Create Event
+
+ Creates a new calendar event. Uses the first calendar if none is specified. (Mac only)
+
+ ```python
+ interpreter.computer.calendar.create_event(title="Title", start_date=datetime, end_date=datetime, location="Location", notes="Notes", calendar="Work")
+ ```
+
+ ### Calendar - Delete Event
+
+ Delete a specific calendar event. (Mac only)
+
+ ```python
+ interpreter.computer.calendar.delete_event(event_title="Title", start_date=datetime, calendar="Work")
+ ```
open-interpreter/docs/code-execution/custom-languages.mdx ADDED
@@ -0,0 +1,76 @@
+ ---
+ title: Custom Languages
+ ---
+
+ You can add or edit the programming languages that Open Interpreter's computer runs.
+
+ In this example, we'll swap out the `python` language for a version of `python` that runs in the cloud. We'll use `E2B` to do this.
+
+ ([`E2B`](https://e2b.dev/) is a secure, sandboxed environment where you can run arbitrary code.)
+
+ First, [get an API key here](https://e2b.dev/), and set it:
+
+ ```python
+ import os
+ os.environ["E2B_API_KEY"] = "<your_api_key_here>"
+ ```
+
+ Then, define a custom language for Open Interpreter. The class name doesn't matter, but we'll call it `PythonE2B`:
+
+ ```python
+ import e2b
+
+ from interpreter import interpreter
+
+ class PythonE2B:
+     """
+     This class contains all requirements for being a custom language in Open Interpreter:
+
+     - name (an attribute)
+     - run (a method)
+     - stop (a method)
+     - terminate (a method)
+
+     You can use this class to run any language you know how to run, or edit any of the official languages (which also conform to this class).
+
+     Here, we'll use E2B to power the `run` method.
+     """
+
+     # This is the name that will appear to the LLM.
+     name = "python"
+
+     # Optionally, you can append some information about this language to the system message:
+     system_message = "# Follow this rule: Every Python code block MUST contain at least one print statement."
+
+     # (E2B isn't a Jupyter Notebook, so we added ^ this so it would print things,
+     # instead of putting variables at the end of code blocks, which is a Jupyter thing.)
+
+     def run(self, code):
+         """Generator that yields a dictionary in LMC Format."""
+
+         # Run the code on E2B
+         stdout, stderr = e2b.run_code('Python3', code)
+
+         # Yield the output
+         yield {
+             "type": "console", "format": "output",
+             "content": stdout + stderr  # We combined these arbitrarily. Yield anything you'd like!
+         }
+
+     def stop(self):
+         """Stops the code."""
+         # Not needed here, because e2b.run_code isn't stateful.
+         pass
+
+     def terminate(self):
+         """Terminates the entire process."""
+         # Not needed here, because e2b.run_code isn't stateful.
+         pass
+
+ # (Tip: Do this before adding/removing languages, otherwise OI might retain the state of previous languages:)
+ interpreter.computer.terminate()
+
+ # Give Open Interpreter its languages. This will only let it run PythonE2B:
+ interpreter.computer.languages = [PythonE2B]
+
+ # Try it out!
+ interpreter.chat("What's 349808*38490739?")
+ ```
open-interpreter/docs/code-execution/settings.mdx ADDED
@@ -0,0 +1,7 @@
+ ---
+ title: Settings
+ ---
+
+ The `interpreter.computer` is responsible for executing code.
+
+ [Click here to view `interpreter.computer` settings.](https://docs.openinterpreter.com/settings/all-settings#computer)
open-interpreter/docs/code-execution/usage.mdx ADDED
@@ -0,0 +1,36 @@
+ ---
+ title: Usage
+ ---
+
+ # Running Code
+
+ The `computer` itself is separate from Open Interpreter's core, so you can run it independently:
+
+ ```python
+ from interpreter import interpreter
+
+ interpreter.computer.run("python", "print('Hello World!')")
+ ```
+
+ This runs in the same Python instance that interpreter uses, so you can define functions, variables, or log in to services before the AI starts running code:
+
+ ```python
+ interpreter.computer.run("python", "import replicate\nreplicate.api_key='...'")
+
+ interpreter.custom_instructions = "Replicate has already been imported."
+
+ interpreter.chat("Please generate an image on replicate...") # Interpreter will be logged into Replicate
+ ```
+
+ # Custom Languages
+
+ You also have control over the `computer`'s languages (like Python, JavaScript, and Shell), and can easily append custom languages:
+
+ <Card
+   title="Custom Languages"
+   icon="code"
+   iconType="solid"
+   href="/code-execution/custom-languages/"
+ >
+   Add or customize the programming languages that Open Interpreter can use.
+ </Card>
open-interpreter/docs/computer/custom-languages.mdx ADDED
File without changes
open-interpreter/docs/computer/introduction.mdx ADDED
@@ -0,0 +1,13 @@
+ The Computer module is responsible for executing code.
+
+ You can manually execute code in the same instance that Open Interpreter uses:
+
+ ```python
+ interpreter.computer.run("python", "print('Hello World!')")
+ ```
+
+ User Usage
+
+ It also comes with a suite of modules that we think are particularly useful to code-interpreting LLMs.
+
+ LLM Usage
open-interpreter/docs/computer/language-model-usage.mdx ADDED
@@ -0,0 +1,3 @@
+ Open Interpreter can use the Computer module itself.
+
+ Here's what it can do:
open-interpreter/docs/computer/user-usage.mdx ADDED
@@ -0,0 +1,5 @@
1
+ The Computer module is responsible for running code.
2
+
3
+ You can add custom languages to it.
4
+
5
+ The user can add custom languages to the Computer, and run code on it with `.run`.
open-interpreter/docs/getting-started/introduction.mdx ADDED
@@ -0,0 +1,44 @@
1
+ ---
2
+ title: Introduction
3
+ description: A new way to use computers
4
+ ---
5
+
6
+ # <div class="hidden">Introduction</div>
7
+
8
+ <img src="https://openinterpreter.com/assets/banner.jpg" alt="thumbnail" style={{transform: "translateY(-1.25rem)"}} />
9
+
10
+ **Open Interpreter** lets language models run code.
11
+
12
+ You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `interpreter` after installing.
13
+
14
+ This provides a natural-language interface to your computer's general-purpose capabilities:
15
+
16
+ - Create and edit photos, videos, PDFs, etc.
17
+ - Control a Chrome browser to perform research
18
+ - Plot, clean, and analyze large datasets
19
+ - ...etc.
20
+
21
+ <br/>
22
+
23
+ <Info>You can also build Open Interpreter into your applications with [our new Python package.](/usage/python/arguments)</Info>
24
+
25
+ ---
26
+
27
+ <h1><span class="font-semibold">Quick start</span></h1>
28
+
29
+ If you already use Python, you can install Open Interpreter via `pip`:
30
+
31
+ <Steps>
32
+ <Step title="Install" icon={"arrow-down"} iconType={"solid"}>
33
+ ```bash
34
+ pip install open-interpreter
35
+ ```
36
+ </Step>
37
+ <Step title="Use" icon={"circle"} iconType={"solid"}>
38
+ ```bash
39
+ interpreter
40
+ ```
41
+ </Step>
42
+ </Steps>
43
+
44
+ We've also developed [one-line installers](setup) that install Python and set up Open Interpreter.
open-interpreter/docs/getting-started/setup.mdx ADDED
@@ -0,0 +1,70 @@
1
+ ---
2
+ title: Setup
3
+ ---
4
+
5
+ ## Experimental one-line installers
6
+
7
+ To try our experimental installers, open your Terminal with admin privileges [(click here to learn how)](https://chat.openai.com/share/66672c0f-0935-4c16-ac96-75c1afe14fe3), then paste the following commands:
8
+
9
+ <CodeGroup>
10
+
11
+ ```bash Mac
12
+ curl -sL https://raw.githubusercontent.com/KillianLucas/open-interpreter/main/installers/oi-mac-installer.sh | bash
13
+ ```
14
+
15
+ ```powershell Windows
16
+ iex "& {$(irm https://raw.githubusercontent.com/KillianLucas/open-interpreter/main/installers/oi-windows-installer.ps1)}"
17
+ ```
18
+
19
+ ```bash Linux
20
+ curl -sL https://raw.githubusercontent.com/KillianLucas/open-interpreter/main/installers/oi-linux-installer.sh | bash
21
+ ```
22
+
23
+ </CodeGroup>
24
+
25
+ These installers will attempt to download Python, set up an environment, and install Open Interpreter for you.
26
+
27
+ ## Terminal usage
28
+
29
+ After installation, you can start an interactive chat in your terminal by running:
30
+
31
+ ```bash
32
+ interpreter
33
+ ```
34
+
35
+ ## Installation from `pip`
36
+
37
+ If you already use Python, we recommend installing Open Interpreter via `pip`:
38
+
39
+ ```bash
40
+ pip install open-interpreter
41
+ ```
42
+
43
+ <Info>
44
+ **Note:** You'll need Python
45
+ [3.10](https://www.python.org/downloads/release/python-3100/) or
46
+ [3.11](https://www.python.org/downloads/release/python-3110/). Run `python
47
+ --version` to check yours.
48
+ </Info>
49
+
50
+ ## Python usage
51
+
52
+ To start an interactive chat in Python, run the following:
53
+
54
+ ```python
55
+ from interpreter import interpreter
56
+
57
+ interpreter.chat()
58
+ ```
59
+
60
+ You can also pass messages to `interpreter` programmatically:
61
+
62
+ ```python
63
+ interpreter.chat("Get the last 5 BBC news headlines.")
64
+ ```
65
+
66
+ [Click here](/usage/python/streaming-response) to learn how to stream its response into your application.
67
+
68
+ ## No Installation
69
+
70
+ If configuring your computer environment is challenging, you can press the `,` key on this repository's GitHub page to create a codespace. After a moment, you'll receive a cloud virtual machine environment with open-interpreter pre-installed. You can then start interacting with it directly and freely confirm its execution of system commands without worrying about damaging the system.
open-interpreter/docs/guides/advanced-terminal-usage.mdx ADDED
@@ -0,0 +1,16 @@
1
+ ---
2
+ title: Advanced Terminal Usage
3
+ ---
4
+
5
+ Magic commands can be used to control the interpreter's behavior in interactive mode:
6
+
7
+ - `%% [shell commands, like ls or cd]`: Run commands in Open Interpreter's shell instance
8
+ - `%verbose [true/false]`: Toggle verbose mode. Without arguments or with 'true', it enters verbose mode. With 'false', it exits verbose mode.
9
+ - `%reset`: Reset the current session.
10
+ - `%undo`: Remove the previous message and its response from the message history.
11
+ - `%save_message [path]`: Saves messages to a specified JSON path. If no path is provided, it defaults to 'messages.json'.
12
+ - `%load_message [path]`: Loads messages from a specified JSON path. If no path is provided, it defaults to 'messages.json'.
13
+ - `%tokens [prompt]`: EXPERIMENTAL: Calculate the tokens used by the next request based on the current conversation's messages and estimate the cost of that request; optionally provide a prompt to also calculate the tokens used by that prompt and the total amount of tokens that will be sent with the next request.
14
+ - `%info`: Show system and interpreter information.
15
+ - `%help`: Show this help message.
16
+ - `%jupyter`: Export the current session to a Jupyter notebook file (.ipynb) to the Downloads folder.
open-interpreter/docs/guides/basic-usage.mdx ADDED
@@ -0,0 +1,153 @@
1
+ ---
2
+ title: Basic Usage
3
+ ---
4
+
5
+ <CardGroup>
6
+
7
+ <Card
8
+ title="Interactive demo"
9
+ icon="gamepad-modern"
10
+ iconType="solid"
11
+ href="https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing"
12
+ >
13
+ Try Open Interpreter without installing anything on your computer
14
+ </Card>
15
+
16
+ <Card
17
+ title="Example voice interface"
18
+ icon="circle"
19
+ iconType="solid"
20
+ href="https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK"
21
+ >
22
+ An example implementation of Open Interpreter's streaming capabilities
23
+ </Card>
24
+
25
+ </CardGroup>
26
+
27
+ ---
28
+
29
+ ### Interactive Chat
30
+
31
+ To start an interactive chat in your terminal, either run `interpreter` from the command line:
32
+
33
+ ```shell
34
+ interpreter
35
+ ```
36
+
37
+ Or `interpreter.chat()` from a .py file:
38
+
39
+ ```python
40
+ interpreter.chat()
41
+ ```
42
+
43
+ ---
44
+
45
+ ### Programmatic Chat
46
+
47
+ For more precise control, you can pass messages directly to `.chat(message)` in Python:
48
+
49
+ ```python
50
+ interpreter.chat("Add subtitles to all videos in /videos.")
51
+
52
+ # ... Displays output in your terminal, completes task ...
53
+
54
+ interpreter.chat("These look great but can you make the subtitles bigger?")
55
+
56
+ # ...
57
+ ```
58
+
59
+ ---
60
+
61
+ ### Start a New Chat
62
+
63
+ In your terminal, Open Interpreter behaves like ChatGPT and will not remember previous conversations. Simply run `interpreter` to start a new chat:
64
+
65
+ ```shell
66
+ interpreter
67
+ ```
68
+
69
+ In Python, Open Interpreter remembers conversation history. If you want to start fresh, you can reset it:
70
+
71
+ ```python
72
+ interpreter.messages = []
73
+ ```
74
+
75
+ ---
76
+
77
+ ### Save and Restore Chats
78
+
79
+ In your terminal, Open Interpreter will save previous conversations to `<your application directory>/Open Interpreter/conversations/`.
80
+
81
+ You can resume any of them by running `--conversations`. Use your arrow keys to select one, then press `ENTER` to resume it.
82
+
83
+ ```shell
84
+ interpreter --conversations
85
+ ```
86
+
87
+ In Python, `interpreter.chat()` returns a List of messages, which can be used to resume a conversation with `interpreter.messages = messages`:
88
+
89
+ ```python
90
+ # Save messages to 'messages'
91
+ messages = interpreter.chat("My name is Killian.")
92
+
93
+ # Reset interpreter ("Killian" will be forgotten)
94
+ interpreter.messages = []
95
+
96
+ # Resume chat from 'messages' ("Killian" will be remembered)
97
+ interpreter.messages = messages
98
+ ```
99
+
100
+ ---
101
+
102
+ ### Configure Default Settings
103
+
104
+ We save default settings to the `default.yaml` profile, which can be opened and edited by running the following command:
105
+
106
+ ```shell
107
+ interpreter --profiles
108
+ ```
109
+
110
+ You can use this to set your default language model, system message (custom instructions), max budget, etc.
111
+
112
+ <Info>
113
+ **Note:** The Python library will also inherit settings from the default
114
+ profile file. You can change it by running `interpreter --profiles` and
115
+ editing `default.yaml`.
116
+ </Info>
117
+
118
+ ---
119
+
120
+ ### Customize System Message
121
+
122
+ In your terminal, modify the system message by [editing your configuration file as described here](#configure-default-settings).
123
+
124
+ In Python, you can inspect and configure Open Interpreter's system message to extend its functionality, modify permissions, or give it more context.
125
+
126
+ ```python
127
+ interpreter.system_message += """
128
+ Run shell commands with -y so the user doesn't have to confirm them.
129
+ """
130
+ print(interpreter.system_message)
131
+ ```
132
+
133
+ ---
134
+
135
+ ### Change your Language Model
136
+
137
+ Open Interpreter uses [LiteLLM](https://docs.litellm.ai/docs/providers/) to connect to language models.
138
+
139
+ You can change the model by setting the model parameter:
140
+
141
+ ```shell
142
+ interpreter --model gpt-3.5-turbo
143
+ interpreter --model claude-2
144
+ interpreter --model command-nightly
145
+ ```
146
+
147
+ In Python, set the model on the object:
148
+
149
+ ```python
150
+ interpreter.llm.model = "gpt-3.5-turbo"
151
+ ```
152
+
153
+ [Find the appropriate "model" string for your language model here.](https://docs.litellm.ai/docs/providers/)
open-interpreter/docs/guides/demos.mdx ADDED
@@ -0,0 +1,59 @@
1
+ ---
2
+ title: Demos
3
+ ---
4
+
5
+ ### Vision Mode
6
+
7
+ #### Recreating a Tailwind Component
8
+
9
+ Creating a dropdown menu in Tailwind from a single screenshot:
10
+
11
+ <iframe src="data:text/html;charset=utf-8,%0A%3Cblockquote%20class%3D%22twitter-tweet%22%20data-media-max-width%3D%22560%22%3E%0A%20%20%20%20%3Cp%20lang%3D%22en%22%20dir%3D%22ltr%22%3Ewe%26%2339%3Bve%20literally%20been%20flying%20blind%20until%20now%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%24%20interpreter%20--vision%3Cbr%3E%0A%20%20%20%20%26gt%3B%20Recreate%20this%20component%20in%20Tailwind%20CSS%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%28this%20is%20realtime%29%20%3Ca%20href%3D%22https%3A//t.co/PyVm11mclF%22%3Epic.twitter.com/PyVm11mclF%3C/a%3E%0A%20%20%20%20%3C/p%3E%26mdash%3B%20killian%20%28%40hellokillian%29%20%0A%20%20%20%20%3Ca%20href%3D%22https%3A//twitter.com/hellokillian/status/1723106008061587651%3Fref_src%3Dtwsrc%255Etfw%22%3ENovember%2010%2C%202023%3C/a%3E%0A%3C/blockquote%3E%20%0A%3Cscript%20async%20src%3D%22https%3A//platform.twitter.com/widgets.js%22%20charset%3D%22utf-8%22%3E%3C/script%3E%0A" width="100%" height="500"></iframe>
12
+
13
+ #### Recreating the ChatGPT interface using GPT-4V:
14
+
15
+ <iframe src="data:text/html;charset=utf-8,%0A%3Cblockquote%20class%3D%22twitter-tweet%22%20data-media-max-width%3D%22560%22%3E%0A%20%20%20%20%3Cp%20lang%3D%22en%22%20dir%3D%22ltr%22%3EOpen%20Interpreter%20%2B%20Vision%20-%20with%20the%20self-improving%20feedback%20loop%20is%20%F0%9F%91%8C%20%3Cbr%3E%3Cbr%3E%0A%20%20%20%20Here%20is%20how%20it%20iterates%20to%20recreate%20the%20ChatGPT%20UI%20%F0%9F%A4%AF%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%284x%20speedup%29%20%3Ca%20href%3D%22https%3A//t.co/HphKMOWBiB%22%3Epic.twitter.com/HphKMOWBiB%3C/a%3E%0A%20%20%20%20%3C/p%3E%26mdash%3B%20chilang%20%28%40chilang%29%20%0A%20%20%20%20%3Ca%20href%3D%22https%3A//twitter.com/chilang/status/1724577200135897255%3Fref_src%3Dtwsrc%255Etfw%22%3ENovember%2014%2C%202023%3C/a%3E%0A%3C/blockquote%3E%20%0A%3Cscript%20async%20src%3D%22https%3A//platform.twitter.com/widgets.js%22%20charset%3D%22utf-8%22%3E%3C/script%3E%0A" width="100%" height="500"></iframe>
16
+
17
+ ### OS Mode
18
+
19
+ #### Playing Music
20
+
21
+ Open Interpreter playing some Lofi using OS mode:
22
+
23
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/-n8qYi5HhO8?si=huEpYFBEwotBIMMs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
24
+
25
+ #### Open Interpreter Chatting with Open Interpreter
26
+
27
+ OS mode creating and chatting with a local instance of Open Interpreter:
28
+
29
+ <iframe src="data:text/html;charset=utf-8,%0A%3Cblockquote%20class%3D%22twitter-tweet%22%20data-media-max-width%3D%22560%22%3E%0A%20%20%20%20%3Cp%20lang%3D%22en%22%20dir%3D%22ltr%22%3EComputer-operating%20AI%20can%20replicate%20itself%20onto%20other%20systems.%20%F0%9F%A4%AF%3Cbr%3E%3Cbr%3E%0A%20%20%20%20Open%20Interpreter%20uses%20my%20mouse%20and%20keyboard%20to%20start%20a%20local%20instance%20of%20itself%3A%20%0A%20%20%20%20%3Ca%20href%3D%22https%3A//t.co/1BZWRA4FMn%22%3Epic.twitter.com/1BZWRA4FMn%3C/a%3E%3C/p%3E%26mdash%3B%20Ty%20%28%40FieroTy%29%20%0A%20%20%20%20%3Ca%20href%3D%22https%3A//twitter.com/FieroTy/status/1746639975234560101%3Fref_src%3Dtwsrc%255Etfw%22%3EJanuary%2014%2C%202024%3C/a%3E%0A%3C/blockquote%3E%20%0A%3Cscript%20async%20src%3D%22https%3A//platform.twitter.com/widgets.js%22%20charset%3D%22utf-8%22%3E%3C/script%3E%0A" width="100%" height="500"></iframe>
30
+
31
+ #### Controlling an Arduino
32
+
33
+ Reading temperature and humidity from an Arduino:
34
+
35
+ <iframe src="data:text/html;charset=utf-8,%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cblockquote%20class%3D%22twitter-tweet%22%20data-media-max-width%3D%22560%22%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Cp%20lang%3D%22en%22%20dir%3D%22ltr%22%3EThis%20time%20I%20showed%20it%20an%20image%20of%20a%20temp%20sensor%2C%20LCD%20%26amp%3B%20Arduino.%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20And%20it%20wrote%20a%20program%20to%20read%20the%20temperature%20%26amp%3B%20humidity%20from%20the%20sensor%20%26amp%3B%20show%20it%20on%20the%20LCD%20%F0%9F%A4%AF%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20Still%20blown%20away%20by%20how%20good%20%40hellokillian%27s%20Open%20Interpreter%20is%21%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20p.s.%20-%20ignore%20the%20cat%20fight%20in%20the%20background%20%3Ca%20href%3D%22https%3A//t.co/tG9sSdkfD5%22%3Ehttps%3A//t.co/tG9sSdkfD5%3C/a%3E%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//t.co/B6sH4absff%22%3Epic.twitter.com/B6sH4absff%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C/p%3E%26mdash%3B%20Vindiw%20Wijesooriya%20%28%40vindiww%29%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//twitter.com/vindiww/status/1744252926321942552%3Fref_src%3Dtwsrc%255Etfw%22%3EJanuary%208%2C%202024%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3C/blockquote%3E%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cscript%20async%20src%3D%22https%3A//platform.twitter.com/widgets.js%22%20charset%3D%22utf-8%22%3E%3C/script%3E%0A%20%20%20%20%20%20%20%20" width="100%" height="500"></iframe>
36
+
37
+ #### Music Creation
38
+
39
+ OS mode using Logic Pro X to record a piano song and play it back:
40
+
41
+ <iframe src="data:text/html;charset=utf-8,%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cblockquote%20class%3D%22twitter-tweet%22%20data-media-max-width%3D%22560%22%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Cp%20lang%3D%22en%22%20dir%3D%22ltr%22%3Eit%27s%20not%20quite%20Mozart%2C%20but...%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20this%20is%20Open%20Interpreter%20firing%20up%20Logic%20Pro%20to%20write/record%20a%20song%21%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//t.co/vPHpPvjk4b%22%3Epic.twitter.com/vPHpPvjk4b%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C/p%3E%26mdash%3B%20Ty%20%28%40FieroTy%29%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//twitter.com/FieroTy/status/1744203268451111035%3Fref_src%3Dtwsrc%255Etfw%22%3EJanuary%208%2C%202024%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3C/blockquote%3E%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cscript%20async%20src%3D%22https%3A//platform.twitter.com/widgets.js%22%20charset%3D%22utf-8%22%3E%3C/script%3E%0A%20%20%20%20%20%20%20%20" width="100%" height="500"></iframe>
42
+
43
+ #### Generating images in Everart.ai
44
+
45
+ Open Interpreter describing pictures it wants to make, then creating them using OS mode:
46
+
47
+ <iframe src="data:text/html;charset=utf-8,%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cblockquote%20class%3D%22twitter-tweet%22%20data-media-max-width%3D%22560%22%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Cp%20lang%3D%22en%22%20dir%3D%22ltr%22%3EThis%20is%20wild.%20I%20gave%20OS%20control%20to%20GPT-4%20via%20the%20latest%20update%20of%20Open%20Interpreter%20and%20now%20it%27s%20generating%20pictures%20it%20wants%20to%20see%20in%20%40everartai%20%F0%9F%A4%AF%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20GPT%20is%20controlling%20the%20mouse%20and%20adding%20text%20in%20the%20fields%2C%20I%20am%20not%20doing%20anything.%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//t.co/hGgML9epEc%22%3Epic.twitter.com/hGgML9epEc%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C/p%3E%26mdash%3B%20Pietro%20Schirano%20%28%40skirano%29%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//twitter.com/skirano/status/1747670816437735836%3Fref_src%3Dtwsrc%255Etfw%22%3EJanuary%2017%2C%202024%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3C/blockquote%3E%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cscript%20async%20src%3D%22https%3A//platform.twitter.com/widgets.js%22%20charset%3D%22utf-8%22%3E%3C/script%3E%0A%20%20%20%20%20%20%20%20" width="100%" height="500"></iframe>
48
+
49
+ #### Open Interpreter Conversing With ChatGPT
50
+
51
+ OS mode has a conversation with ChatGPT and even asks it "What do you think about human/AI interaction?"
52
+
53
+ <iframe src="data:text/html;charset=utf-8,%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cblockquote%20class%3D%22twitter-tweet%22%20data-media-max-width%3D%22560%22%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Cp%20lang%3D%22en%22%20dir%3D%22ltr%22%3EWatch%20GPT%20Vision%20with%20control%20over%20my%20OS%20talking%20to%20ChatGPT.%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20The%20most%20fascinating%20part%20is%20that%20it%27s%20intrigued%20by%20having%20a%20conversation%20with%20another%20%22similar.%22%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%22What%20do%20you%20think%20about%20human/AI%20interaction%3F%22%20it%20asked.%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20Also%2C%20the%20superhuman%20speed%20at%20which%20it%20types%2C%20lol%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//t.co/ViffvDK5H9%22%3Epic.twitter.com/ViffvDK5H9%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C/p%3E%26mdash%3B%20Pietro%20Schirano%20%28%40skirano%29%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//twitter.com/skirano/status/1747772471770583190%3Fref_src%3Dtwsrc%255Etfw%22%3EJanuary%2018%2C%202024%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3C/blockquote%3E%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cscript%20async%20src%3D%22https%3A//platform.twitter.com/widgets.js%22%20charset%3D%22utf-8%22%3E%3C/script%3E%0A%20%20%20%20%20%20%20%20" width="100%" height="500"></iframe>
54
+
55
+ #### Sending an Email with Gmail
56
+
57
+ OS mode launches Safari, composes an email, and sends it:
58
+
59
+ <iframe src="data:text/html;charset=utf-8,%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cblockquote%20class%3D%22twitter-tweet%22%20data-media-max-width%3D%22560%22%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Cp%20lang%3D%22en%22%20dir%3D%22ltr%22%3ELook%20ma%2C%20no%20hands%21%20This%20is%20%40OpenInterpreter%20using%20my%20mouse%20and%20keyboard%20to%20send%20an%20email.%20%3Cbr%3E%3Cbr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20Imagine%20what%20else%20is%20possible.%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//t.co/GcBqbTwD23%22%3Epic.twitter.com/GcBqbTwD23%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C/p%3E%26mdash%3B%20Ty%20%28%40FieroTy%29%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ca%20href%3D%22https%3A//twitter.com/FieroTy/status/1743437525207928920%3Fref_src%3Dtwsrc%255Etfw%22%3EJanuary%206%2C%202024%3C/a%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3C/blockquote%3E%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Cscript%20async%20src%3D%22https%3A//platform.twitter.com/widgets.js%22%20charset%3D%22utf-8%22%3E%3C/script%3E%0A%20%20%20%20%20%20%20%20" width="100%" height="500"></iframe>
open-interpreter/docs/guides/multiple-instances.mdx ADDED
@@ -0,0 +1,37 @@
1
+ ---
2
+ title: Multiple Instances
3
+ ---
4
+
5
+ To create multiple instances, use the base class, `OpenInterpreter`:
6
+
7
+ ```python
8
+ from interpreter import OpenInterpreter
9
+
10
+ agent_1 = OpenInterpreter()
11
+ agent_1.system_message = "This is a separate instance."
12
+
13
+ agent_2 = OpenInterpreter()
14
+ agent_2.system_message = "This is yet another instance."
15
+ ```
16
+
17
+ For fun, you could make these instances talk to each other:
18
+
19
+ ```python
20
+ def swap_roles(messages):
21
+     for message in messages:
22
+         if message['role'] == 'user':
23
+             message['role'] = 'assistant'
24
+         elif message['role'] == 'assistant':
25
+             message['role'] = 'user'
26
+     return messages
27
+
28
+ agents = [agent_1, agent_2]
29
+
30
+ # Kick off the conversation
31
+ messages = [{"role": "user", "type": "message", "content": "Hello!"}]
32
+
33
+ while True:
34
+     for agent in agents:
35
+         messages = agent.chat(messages)
36
+         messages = swap_roles(messages)
37
+ ```
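The role-swapping pattern above can be exercised on plain message dicts, without any language model involved. A minimal, self-contained sketch (the sample messages use the role/type/content shape documented elsewhere in these docs):

```python
def swap_roles(messages):
    # Flip user/assistant roles so one agent's output becomes the other's input
    for message in messages:
        if message["role"] == "user":
            message["role"] = "assistant"
        elif message["role"] == "assistant":
            message["role"] = "user"
    return messages


messages = [
    {"role": "user", "type": "message", "content": "Hello!"},
    {"role": "assistant", "type": "message", "content": "Hi there."},
]

swapped = swap_roles(messages)
print([m["role"] for m in swapped])  # prints ['assistant', 'user']
```

Swapping in place like this is what lets each agent treat the other agent's replies as incoming user messages.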
open-interpreter/docs/guides/os-mode.mdx ADDED
@@ -0,0 +1,17 @@
1
+ ---
2
+ title: OS Mode
3
+ ---
4
+
5
+ OS mode is a highly experimental mode that allows Open Interpreter to control the operating system visually through the mouse and keyboard. It provides a multimodal LLM like GPT-4V with the necessary tools to capture screenshots of the display and interact with on-screen elements such as text and icons. It will try to use the most direct method to achieve the goal, like using Spotlight on Mac to open applications, and using query parameters in the URL to open websites with additional information.
6
+
7
+ OS mode is a work in progress. If you have any suggestions or experience issues, please reach out on our [Discord](https://discord.com/invite/6p3fD6rBVm).
8
+
9
+ To enable OS Mode, run the interpreter with the `--os` flag:
10
+
11
+ ```bash
12
+ interpreter --os
13
+ ```
14
+
15
+ Please note that screen recording permissions must be enabled for your terminal application for OS mode to work properly.
16
+
17
+ OS mode does not currently support multiple displays.
open-interpreter/docs/guides/running-locally.mdx ADDED
@@ -0,0 +1,41 @@
1
+ ---
2
+ title: Running Locally
3
+ ---
4
+
5
+ In this video, Mike Bird goes over three different methods for running Open Interpreter with a local language model:
6
+
7
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/CEs51hGWuGU?si=cN7f6QhfT4edfG5H" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
8
+
9
+ ## How to Use Open Interpreter Locally
10
+
11
+ ### Ollama
12
+
13
+ 1. Download Ollama from https://ollama.ai/download
14
+ 2. Run the command:
15
+ `ollama run dolphin-mixtral:8x7b-v2.6`
16
+ 3. Execute the Open Interpreter:
17
+ `interpreter --model ollama/dolphin-mixtral:8x7b-v2.6`
18
+
19
+ ### Jan.ai
20
+
21
+ 1. Download Jan from http://jan.ai
22
+ 2. Download the model from the Hub
23
+ 3. Enable API server:
24
+ 1. Go to Settings
25
+ 2. Navigate to Advanced
26
+ 3. Enable API server
27
+ 4. Select the model to use
28
+ 5. Run Open Interpreter with the specified API base:
29
+ `interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct`
30
+
31
+ ### Llamafile
32
+
33
+ ⚠ On Apple Silicon, ensure that Xcode is installed
34
+
35
+ 1. Download or create a llamafile from https://github.com/Mozilla-Ocho/llamafile
36
+ 2. Make the llamafile executable:
37
+ `chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
38
+ 3. Execute the llamafile:
39
+ `./mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile`
40
+ 4. Run the interpreter with the specified API base:
41
+ `interpreter --api_base http://localhost:8080/v1`
open-interpreter/docs/guides/streaming-response.mdx ADDED
@@ -0,0 +1,159 @@
1
+ ---
2
+ title: Streaming Response
3
+ ---
4
+
5
+ You can stream messages, code, and code outputs out of Open Interpreter by setting `stream=True` in an `interpreter.chat(message)` call.
6
+
7
+ ```python
8
+ for chunk in interpreter.chat("What's 34/24?", stream=True, display=False):
9
+     print(chunk)
10
+ ```
11
+
12
+ ```
13
+ {"role": "assistant", "type": "code", "format": "python", "start": True}
14
+ {"role": "assistant", "type": "code", "format": "python", "content": "34"}
15
+ {"role": "assistant", "type": "code", "format": "python", "content": " /"}
16
+ {"role": "assistant", "type": "code", "format": "python", "content": " "}
17
+ {"role": "assistant", "type": "code", "format": "python", "content": "24"}
18
+ {"role": "assistant", "type": "code", "format": "python", "end": True}
19
+
20
+ {"role": "computer", "type": "confirmation", "format": "execution", "content": {"type": "code", "format": "python", "content": "34 / 24"}},
21
+
22
+ {"role": "computer", "type": "console", "start": True}
23
+ {"role": "computer", "type": "console", "format": "active_line", "content": "1"}
24
+ {"role": "computer", "type": "console", "format": "output", "content": "1.4166666666666667\n"}
25
+ {"role": "computer", "type": "console", "format": "active_line", "content": None},
26
+ {"role": "computer", "type": "console", "end": True}
27
+
28
+ {"role": "assistant", "type": "message", "start": True}
29
+ {"role": "assistant", "type": "message", "content": "The"}
30
+ {"role": "assistant", "type": "message", "content": " result"}
31
+ {"role": "assistant", "type": "message", "content": " of"}
32
+ {"role": "assistant", "type": "message", "content": " the"}
33
+ {"role": "assistant", "type": "message", "content": " division"}
34
+ {"role": "assistant", "type": "message", "content": " "}
35
+ {"role": "assistant", "type": "message", "content": "34"}
36
+ {"role": "assistant", "type": "message", "content": "/"}
37
+ {"role": "assistant", "type": "message", "content": "24"}
38
+ {"role": "assistant", "type": "message", "content": " is"}
39
+ {"role": "assistant", "type": "message", "content": " approximately"}
40
+ {"role": "assistant", "type": "message", "content": " "}
41
+ {"role": "assistant", "type": "message", "content": "1"}
42
+ {"role": "assistant", "type": "message", "content": "."}
43
+ {"role": "assistant", "type": "message", "content": "42"}
44
+ {"role": "assistant", "type": "message", "content": "."}
45
+ {"role": "assistant", "type": "message", "end": True}
46
+ ```
47
+
48
+ **Note:** Setting `display=True` won't change the behavior of the streaming response; it will just render a display in your terminal.
49
+
50
+ # Anatomy
51
+
52
+ Each chunk of the streamed response is a dictionary that has a "role" key, which can be either "assistant" or "computer". The "type" key describes what the chunk is. The "content" key contains the actual content of the chunk.
53
+
54
+ Every message is made up of chunks: it begins with a "start" chunk and ends with an "end" chunk. This helps you parse the streamed response into messages.
55
+
56
+ Let's break down each part of the streamed response.
57
+
58
+ ## Code
59
+
60
+ In this example, the LLM decided to start writing code first. It could have decided to write a message first, or to only write code, or to only write a message.
61
+
62
+ Every streamed chunk of type "code" has a format key that specifies the language. In this case it decided to write `python`.
63
+
64
+ This can be any language defined in [our languages directory.](https://github.com/KillianLucas/open-interpreter/tree/main/interpreter/core/computer/terminal/languages)
65
+
66
+ ```
67
+
68
+ {"role": "assistant", "type": "code", "format": "python", "start": True}
69
+
70
+ ```
71
+
72
+ Then, the LLM decided to write some code. The code is sent token-by-token:
73
+
74
+ ```
75
+
76
+ {"role": "assistant", "type": "code", "format": "python", "content": "34"}
77
+ {"role": "assistant", "type": "code", "format": "python", "content": " /"}
78
+ {"role": "assistant", "type": "code", "format": "python", "content": " "}
79
+ {"role": "assistant", "type": "code", "format": "python", "content": "24"}
80
+
81
+ ```
82
+
83
+ When the LLM finishes writing code, it will send an "end" chunk:
84
+
85
+ ```
86
+
87
+ {"role": "assistant", "type": "code", "format": "python", "end": True}
88
+
89
+ ```
90
+
91
+ ## Code Output
92
+
93
+ After the LLM finishes writing a code block, Open Interpreter will attempt to run it.
94
+
95
+ **Before** it runs it, the following chunk is sent:
96
+
97
+ ```
98
+
99
+ {"role": "computer", "type": "confirmation", "format": "execution", "content": {"type": "code", "format": "python", "content": "34 / 24"}}
100
+
101
+ ```
102
+
103
+ If you check for this object, you can break (or get confirmation) **before** executing the code.
104
+
105
+ ```python
106
+ # This example asks the user before running code
107
+
108
+ for chunk in interpreter.chat("What's 34/24?", stream=True):
109
+     if chunk.get("type") == "confirmation":
110
+         if input("Press ENTER to run this code.") != "":
111
+             break
112
+ ```
113
+
114
+ **While** the code is being executed, you'll receive the line of code that's being run:
115
+
116
+ ```
117
+ {"role": "computer", "type": "console", "format": "active_line", "content": "1"}
118
+ ```
119
+
120
+ We use this to highlight the active line of code on our UI, which keeps the user aware of what Open Interpreter is doing.
121
+
122
+ You'll then receive its output, if it produces any:
123
+
124
+ ```
125
+ {"role": "computer", "type": "console", "format": "output", "content": "1.4166666666666667\n"}
126
+ ```
127
+
128
+ When the code is **finished** executing, this flag will be sent:
129
+
130
+ ```
131
+ {"role": "computer", "type": "console", "end": True}
132
+ ```
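To capture the result programmatically, you can collect only the console chunks whose `format` is `"output"`, skipping the `active_line` markers and the end flag. A minimal sketch using the chunks shown above:

```python
# Console chunks, copied from the stream above
chunks = [
    {"role": "computer", "type": "console", "format": "active_line", "content": "1"},
    {"role": "computer", "type": "console", "format": "output", "content": "1.4166666666666667\n"},
    {"role": "computer", "type": "console", "end": True},
]

# Keep only the printed output; ignore active_line markers and the end flag
output = "".join(
    chunk.get("content", "")
    for chunk in chunks
    if chunk.get("format") == "output"
)
print(output.strip())  # 1.4166666666666667
```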
## Message

Finally, the LLM decided to write a message. This is streamed token-by-token as well:

```
{"role": "assistant", "type": "message", "start": True}
{"role": "assistant", "type": "message", "content": "The"}
{"role": "assistant", "type": "message", "content": " result"}
{"role": "assistant", "type": "message", "content": " of"}
{"role": "assistant", "type": "message", "content": " the"}
{"role": "assistant", "type": "message", "content": " division"}
{"role": "assistant", "type": "message", "content": " "}
{"role": "assistant", "type": "message", "content": "34"}
{"role": "assistant", "type": "message", "content": "/"}
{"role": "assistant", "type": "message", "content": "24"}
{"role": "assistant", "type": "message", "content": " is"}
{"role": "assistant", "type": "message", "content": " approximately"}
{"role": "assistant", "type": "message", "content": " "}
{"role": "assistant", "type": "message", "content": "1"}
{"role": "assistant", "type": "message", "content": "."}
{"role": "assistant", "type": "message", "content": "42"}
{"role": "assistant", "type": "message", "content": "."}
{"role": "assistant", "type": "message", "end": True}
```

For an example of how you might process these streamed chunks in JavaScript, see the [migration guide](https://github.com/KillianLucas/open-interpreter/blob/main/docs/NCU_MIGRATION_GUIDE.md).
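The same idea works in Python: merge consecutive chunks into complete messages by treating `start` as "open a buffer", `content` as "append", and `end` as "close". A minimal sketch, assuming every chunk arrives as a plain dict like those above (real streams also include chunks without start flags, which this sketch glosses over):

```python
def accumulate(chunks):
    """Merge streamed chunks into complete messages."""
    messages = []
    for chunk in chunks:
        if chunk.get("start"):
            # Open a new message carrying the chunk's metadata
            messages.append({k: v for k, v in chunk.items() if k != "start"})
            messages[-1]["content"] = ""
        elif chunk.get("end"):
            continue  # Message is complete
        elif messages:
            messages[-1]["content"] += chunk.get("content", "")
    return messages

stream = [
    {"role": "assistant", "type": "message", "start": True},
    {"role": "assistant", "type": "message", "content": "The"},
    {"role": "assistant", "type": "message", "content": " result"},
    {"role": "assistant", "type": "message", "end": True},
]
print(accumulate(stream))
# [{'role': 'assistant', 'type': 'message', 'content': 'The result'}]
```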
open-interpreter/docs/integrations/docker.mdx ADDED
@@ -0,0 +1,64 @@
---
title: Docker
---

Docker support is currently experimental. Running Open Interpreter inside a Docker container may not function as you expect. Let us know on [Discord](https://discord.com/invite/6p3fD6rBVm) if you encounter errors or have suggestions for improving Docker support.

We are working on an official Docker integration in the coming weeks. For now, you can use Open Interpreter in a sandboxed Docker container environment using the following steps:

1. If you do not have Docker Desktop installed, [install it](https://www.docker.com/products/docker-desktop) before proceeding.

2. Create a new directory and add a file named `Dockerfile` in it with the following contents:

```dockerfile
# Start with Python 3.11
FROM python:3.11

# Replace <your_openai_api_key> with your own key
ENV OPENAI_API_KEY=<your_openai_api_key>

# Install Open Interpreter
RUN pip install open-interpreter
```

3. Run the following commands in the same directory to start Open Interpreter:

```bash
docker build -t openinterpreter .
docker run -d -it --name interpreter-instance openinterpreter interpreter
docker attach interpreter-instance
```

## Mounting Volumes

Mounting a volume lets the container access _some_ of your files by exposing a host folder that it can see and manipulate.

To mount a volume, use the `-v` flag followed by the path to the directory on your host machine, a colon, and then the path where you want to mount the directory in the container:

```bash
docker run -d -it -v /path/on/your/host:/path/in/the/container --name interpreter-instance openinterpreter interpreter
```

Replace `/path/on/your/host` with the path to the directory on your host machine that you want to mount, and replace `/path/in/the/container` with the path inside the container where you want it to appear.

Here's a simple example:

```bash
docker run -d -it -v $(pwd):/files --name interpreter-instance openinterpreter interpreter
```

In this example, `$(pwd)` is your current directory, and it is mounted to a `/files` directory in the Docker container (creating that folder if it doesn't exist).

## Flags

To add flags to the command, just append them after `interpreter`. For example, to run the interpreter with custom instructions, run the following command:

```bash
docker run --rm -it openinterpreter interpreter --custom_instructions "Be as concise as possible"
```

Please note that some flags will not work. For example, `--config` will not work, because it cannot open the config file in the container. If you want to use a config file other than the default, create a `config.yml` file in the same directory, add your custom config, mount it into the container, and pass it with `--config_file`:

```bash
docker run --rm -it -v $(pwd)/config.yml:/config.yml openinterpreter interpreter --config_file /config.yml
```
open-interpreter/docs/integrations/e2b.mdx ADDED
@@ -0,0 +1,72 @@
---
title: E2B
---

[E2B](https://e2b.dev/) is a secure, sandboxed environment where you can run arbitrary code.

To build this integration, you just need to replace Open Interpreter's `python` (which runs locally) with a `python` that runs on E2B.

First, [get an API key here](https://e2b.dev/), and set it:

```python
import os
os.environ["E2B_API_KEY"] = "<your_api_key_here>"
```

Then, define a custom language for Open Interpreter. The class name doesn't matter, but we'll call it `PythonE2B`:

```python
import e2b

from interpreter import interpreter

class PythonE2B:
    """
    This class contains all requirements for being a custom language in Open Interpreter:

    - name (an attribute)
    - run (a method)
    - stop (a method)
    - terminate (a method)

    Here, we'll use E2B to power the `run` method.
    """

    # This is the name that will appear to the LLM.
    name = "python"

    # Optionally, you can append some information about this language to the system message:
    system_message = "# Follow this rule: Every Python code block MUST contain at least one print statement."

    # (E2B isn't a Jupyter Notebook, so we added ^ this so it would print things,
    # instead of putting variables at the end of code blocks, which is a Jupyter thing.)

    def run(self, code):
        """Generator that yields a dictionary in LMC Format."""

        # Run the code on E2B
        stdout, stderr = e2b.run_code("Python3", code)

        # Yield the output
        yield {
            "type": "console", "format": "output",
            "content": stdout + stderr  # We combined these arbitrarily. Yield anything you'd like!
        }

    def stop(self):
        """Stops the code."""
        # Not needed here, because e2b.run_code isn't stateful.
        pass

    def terminate(self):
        """Terminates the entire process."""
        # Not needed here, because e2b.run_code isn't stateful.
        pass

# (Tip: Do this before adding/removing languages, otherwise OI might retain the state of previous languages:)
interpreter.computer.terminate()

# Give Open Interpreter its languages. This will only let it run PythonE2B:
interpreter.computer.languages = [PythonE2B]

# Try it out!
interpreter.chat("What's 349808*38490739?")
```
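Any class exposing those four members works, so you can test the custom-language plumbing offline before wiring up E2B. A minimal stand-in sketch (the `EchoPython` name is hypothetical, and its `run` just echoes the code back instead of executing it):

```python
class EchoPython:
    """Offline stand-in satisfying the custom-language interface."""

    name = "python"

    def run(self, code):
        # Pretend to "run" the code by echoing it back as console output
        yield {"type": "console", "format": "output", "content": code}

    def stop(self):
        pass  # Nothing to stop in this stateless stand-in

    def terminate(self):
        pass  # Nothing to terminate either

# Drive the generator directly to inspect the LMC-format chunks it yields
chunks = list(EchoPython().run("print('hi')"))
print(chunks)  # [{'type': 'console', 'format': 'output', 'content': "print('hi')"}]
```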
open-interpreter/docs/language-models/custom-models.mdx ADDED
@@ -0,0 +1,42 @@
---
title: Custom Models
---

In addition to hosted and local language models, Open Interpreter also supports custom models.

As long as your system can accept an input and stream an output (and can be interacted with via a Python generator), it can be used as a language model in Open Interpreter.

Simply replace the OpenAI-compatible `completions` function in your language model with one of your own:

```python
from interpreter import interpreter

def custom_language_model(openai_message):
    """
    OpenAI-compatible completions function (this one just echoes what the user said back).
    """
    users_content = openai_message[-1].get("content")  # Get last message's content

    # To make it OpenAI-compatible, we yield this first:
    yield {"delta": {"role": "assistant"}}

    for character in users_content:
        yield {"delta": {"content": character}}

# Tell Open Interpreter to power the language model with this function
interpreter.llm.completion = custom_language_model
```
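Before wiring it in, you can drive the generator directly to see the delta stream it produces. A quick sanity-check sketch (the echo function is repeated here so the snippet runs standalone):

```python
# The echo completions function from above, repeated so this snippet is self-contained
def custom_language_model(openai_message):
    users_content = openai_message[-1].get("content")
    yield {"delta": {"role": "assistant"}}
    for character in users_content:
        yield {"delta": {"content": character}}

# Drive the generator with a sample message and reassemble the streamed deltas
deltas = list(custom_language_model([{"role": "user", "content": "Hi!"}]))
text = "".join(d["delta"].get("content", "") for d in deltas)
print(text)  # Hi!
```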
Then, set the following settings:

```python
interpreter.llm.context_window = 2000  # In tokens
interpreter.llm.max_tokens = 1000  # In tokens
interpreter.llm.supports_vision = False  # Does this completions endpoint accept images?
interpreter.llm.supports_functions = False  # Does this completions endpoint accept/return function calls?
```

And start using it:

```python
interpreter.chat("Hi!")  # Returns/displays "Hi!" character by character
```
open-interpreter/docs/language-models/hosted-models/ai21.mdx ADDED
@@ -0,0 +1,48 @@
---
title: AI21
---

To use Open Interpreter with a model from AI21, set the `model` flag:

<CodeGroup>

```bash Terminal
interpreter --model j2-light
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "j2-light"
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support any model from [AI21](https://www.ai21.com/):

<CodeGroup>

```bash Terminal
interpreter --model j2-light
interpreter --model j2-mid
interpreter --model j2-ultra
```

```python Python
interpreter.llm.model = "j2-light"
interpreter.llm.model = "j2-mid"
interpreter.llm.model = "j2-ultra"
```

</CodeGroup>

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| -------------------- | ----------- | ------------- |
| `AI21_API_KEY` | The API key for authenticating to AI21's services. | [AI21 Account Page](https://www.ai21.com/account/api-keys) |
open-interpreter/docs/language-models/hosted-models/anthropic.mdx ADDED
@@ -0,0 +1,48 @@
---
title: Anthropic
---

To use Open Interpreter with a model from Anthropic, set the `model` flag:

<CodeGroup>

```bash Terminal
interpreter --model claude-instant-1
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "claude-instant-1"
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support any model from [Anthropic](https://www.anthropic.com/):

<CodeGroup>

```bash Terminal
interpreter --model claude-instant-1
interpreter --model claude-instant-1.2
interpreter --model claude-2
```

```python Python
interpreter.llm.model = "claude-instant-1"
interpreter.llm.model = "claude-instant-1.2"
interpreter.llm.model = "claude-2"
```

</CodeGroup>

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| -------------------- | ----------- | ------------- |
| `ANTHROPIC_API_KEY` | The API key for authenticating to Anthropic's services. | [Anthropic](https://www.anthropic.com/) |
open-interpreter/docs/language-models/hosted-models/anyscale.mdx ADDED
@@ -0,0 +1,60 @@
---
title: Anyscale
---

To use Open Interpreter with a model from Anyscale, set the `model` flag:

<CodeGroup>

```bash Terminal
interpreter --model anyscale/<model-name>
```

```python Python
from interpreter import interpreter

# Set the model to use from Anyscale:
interpreter.llm.model = "anyscale/<model-name>"
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support the following completion models from Anyscale:

- Llama 2 7B Chat
- Llama 2 13B Chat
- Llama 2 70B Chat
- Mistral 7B Instruct
- CodeLlama 34b Instruct

<CodeGroup>

```bash Terminal
interpreter --model anyscale/meta-llama/Llama-2-7b-chat-hf
interpreter --model anyscale/meta-llama/Llama-2-13b-chat-hf
interpreter --model anyscale/meta-llama/Llama-2-70b-chat-hf
interpreter --model anyscale/mistralai/Mistral-7B-Instruct-v0.1
interpreter --model anyscale/codellama/CodeLlama-34b-Instruct-hf
```

```python Python
interpreter.llm.model = "anyscale/meta-llama/Llama-2-7b-chat-hf"
interpreter.llm.model = "anyscale/meta-llama/Llama-2-13b-chat-hf"
interpreter.llm.model = "anyscale/meta-llama/Llama-2-70b-chat-hf"
interpreter.llm.model = "anyscale/mistralai/Mistral-7B-Instruct-v0.1"
interpreter.llm.model = "anyscale/codellama/CodeLlama-34b-Instruct-hf"
```

</CodeGroup>

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| -------------------- | -------------------------------------- | --------------------------------------------------------------------------- |
| `ANYSCALE_API_KEY` | The API key for your Anyscale account. | [Anyscale Account Settings](https://app.endpoints.anyscale.com/credentials) |