Merge branch 'main' of https://github.com/OpenInterpreter/01 into config

commit 685b14ae31
@@ -1,10 +1,9 @@
 ---
 name: Feature request
 about: Suggest an idea for this project
-title: ''
-labels: ''
-assignees: ''
+title: ""
+labels: ""
+assignees: ""
 
 ---
 
 **Is your feature request related to a problem? Please describe.**
@@ -13,8 +12,5 @@ A clear and concise description of what the problem is. Ex. I'm always frustrate
 **Describe the solution you'd like**
 A clear and concise description of what you want to happen.
 
-**Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
-
 **Additional context**
 Add any other context or screenshots about the feature request here.
@@ -1,6 +1,7 @@
 name: Run Test
 
 on:
+  workflow_dispatch:
   pull_request:
     branches: [main]
 # push: # Trigger the workflow on push events
@@ -16,7 +17,8 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        os: [ubuntu-latest, windows-latest, macos-latest]
+        os: [macos-latest]
+        # os: [ubuntu-latest, windows-latest, macos-latest]
         python-version: ["3.11"]
 
     defaults:
@@ -58,4 +60,6 @@ jobs:
 
       # Run pytest
       - name: Run Pytest
-        run: poetry run pytest tests
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+        run: poetry run pytest
@@ -84,7 +84,7 @@ The 01 exposes a speech-to-speech websocket at `localhost:10001`.
 
 If you stream raw audio bytes to `/` in [LMC format](https://docs.openinterpreter.com/protocols/lmc-messages), you will receive its response in the same format.
 
-Inspired in part by [Andrej Karpathy's LLM OS](https://twitter.com/karpathy/status/1723140519554105733), we run a [code-interpreting language model](https://github.com/OpenInterpreter/open-interpreter), and call it when certain events occur at your computer's [kernel](https://github.com/OpenInterpreter/01/blob/main/01OS/01OS/server/utils/kernel.py).
+Inspired in part by [Andrej Karpathy's LLM OS](https://twitter.com/karpathy/status/1723140519554105733), we run a [code-interpreting language model](https://github.com/OpenInterpreter/open-interpreter), and call it when certain events occur at your computer's [kernel](https://github.com/OpenInterpreter/01/blob/main/software/source/server/utils/kernel.py).
 
 The 01 wraps this in a voice interface:
 
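The streaming described in the README hunk above — raw audio bytes framed by LMC control messages — can be sketched as follows. This is an illustrative sketch, not the project's exact wire format: the field names (`role`, `type`, `format`, `start`, `end`) follow the LMC messages docs, but the helper, its name, and the `audio_format` default are assumptions.

```python
import json


def lmc_audio_frames(raw_bytes: bytes, audio_format: str = "bytes.raw"):
    """Wrap raw audio bytes between start/end LMC control messages.

    The control messages are JSON text frames; the audio itself would be
    sent as raw binary between them. Field values here are illustrative.
    """
    start = json.dumps(
        {"role": "user", "type": "audio", "format": audio_format, "start": True}
    )
    end = json.dumps(
        {"role": "user", "type": "audio", "format": audio_format, "end": True}
    )
    return start, raw_bytes, end


start, audio, end = lmc_audio_frames(b"\x00\x01\x02")
```

A client would send `start` as a text frame, `audio` as one or more binary frames, then `end`, and read the response frames back in the same shape.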
@@ -0,0 +1,151 @@
+<h1 align="center">○</h1>
+
+<p align="center">
+<a href="https://discord.gg/Hvz9Axh84z"><img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=social&logoColor=black"/></a>
+<br>
+<br>
+<strong>The open-source language model computer.</strong><br>
+<!-- <br><a href="https://openinterpreter.com">Preorder the Light</a> | <a href="https://openinterpreter.com">Get Updates</a> | <a href="https://docs.openinterpreter.com/">Documentation</a><br> -->
+</p>
+
+<br>
+
+We want to help you build. [Apply for 1-on-1 support.](https://0ggfznkwh4j.typeform.com/to/kkStE8WF)
+
+<br>
+
+---
+
+⚠️ **WARNING:** This experimental project is under rapid development and lacks basic safeguards. Until a stable `1.0` release, **only** run this repository on devices without sensitive information or access to paid services. ⚠️
+
+---
+
+<br>
+
+**The 01 Project** is building an open-source ecosystem for AI devices.
+
+Our flagship operating system can power conversational devices like the Rabbit R1, Humane Pin, or the [Star Trek computer](https://www.youtube.com/watch?v=1ZXugicgn6U).
+
+We intend to become the GNU/Linux of this space by staying open, modular, and free.
+
+<br>
+
+# Software
+
+```shell
+git clone https://github.com/OpenInterpreter/01 # Clone the repository
+cd 01/software # CD into the source directory
+```
+
+<!-- > Not working? Read our [setup guide](https://docs.openinterpreter.com/getting-started/setup). -->
+
+```shell
+brew install portaudio ffmpeg cmake # Install Mac OSX dependencies
+poetry install # Install Python dependencies
+export OPENAI_API_KEY=sk... # OR run `poetry run 01 --local` to run everything locally
+poetry run 01 # Runs the 01 Light simulator (hold your spacebar, speak, release)
+```
+
+<br>
+
+# Hardware
+
+- **01 Light** is an ESP32-based voice interface. [Build instructions here.](https://github.com/OpenInterpreter/01/tree/main/hardware/light) It works with the **01 Server** ([setup guide below](https://github.com/OpenInterpreter/01/blob/main/README.md#01-server)) running on your home computer.
+- **Mac OSX** and **Ubuntu** are supported by running `poetry run 01`. This uses your spacebar to simulate the 01 Light.
+- (Coming soon) **01 Heavy** is a standalone device that runs everything locally.
+
+**We need your help supporting and building more hardware.** The 01 should be able to run on any device with input (microphone, keyboard, etc.), output (speakers, screens, motors, etc.), and an internet connection (or enough compute to run everything locally). [Contribution Guide →](https://github.com/OpenInterpreter/01/blob/main/CONTRIBUTING.md)
+
+<br>
+
+# What does it do?
+
+The 01 exposes a speech-to-speech websocket at `localhost:10001`.
+
+If you stream raw audio bytes to `/` in [LMC format](https://docs.openinterpreter.com/protocols/lmc-messages), you will receive its response in the same format.
+
+Inspired in part by [Andrej Karpathy's LLM OS](https://twitter.com/karpathy/status/1723140519554105733), we run a [code-interpreting language model](https://github.com/OpenInterpreter/open-interpreter), and call it when certain events occur at your computer's [kernel](https://github.com/OpenInterpreter/01/blob/main/01OS/01OS/server/utils/kernel.py).
+
+The 01 wraps this in a voice interface:
+
+<br>
+
+<img width="100%" alt="LMC" src="https://github.com/OpenInterpreter/01/assets/63927363/52417006-a2ca-4379-b309-ffee3509f5d4"><br><br>
+
+# Protocols
+
+## LMC Messages
+
+To communicate with different components of the system, we introduce the [LMC Messages](https://docs.openinterpreter.com/protocols/lmc-messages) format, which extends OpenAI's messages format to include a "computer" role.
+
+## Dynamic System Messages
+
+Dynamic System Messages enable you to execute code inside the LLM's system message, moments before it appears to the AI.
+
+```python
+# Edit the following settings in i.py
+interpreter.system_message = r" The time is {{time.time()}}. " # Anything in double brackets will be executed as Python
+interpreter.chat("What time is it?") # It will know, without making a tool/API call
+```
+
+# Guides
+
+## 01 Server
+
+To run the server on your desktop and connect it to your 01 Light, run the following commands:
+
+```shell
+brew install ngrok/ngrok/ngrok
+ngrok authtoken ... # Use your ngrok authtoken
+poetry run 01 --server --expose
+```
+
+The last command will print a server URL. You can enter this into your 01 Light's captive WiFi portal to connect to your 01 Server.
+
+## Local Mode
+
+```
+poetry run 01 --local
+```
+
+If you want to run local speech-to-text using Whisper, you must install Rust. Follow the instructions given [here](https://www.rust-lang.org/tools/install).
+
+## Customizations
+
+To customize the behavior of the system, edit the [system message, model, skills library path,](https://docs.openinterpreter.com/settings/all-settings) etc. in `i.py`. This file sets up an interpreter, and is powered by Open Interpreter.
+
+## Ubuntu Dependencies
+
+```bash
+sudo apt-get install portaudio19-dev ffmpeg cmake
+```
+
+# Contributors
+
+[](https://github.com/OpenInterpreter/01/graphs/contributors)
+
+Please see our [contributing guidelines](CONTRIBUTING.md) for more details on how to get involved.
+
+<br>
+
+# Roadmap
+
+Visit [our roadmap](/ROADMAP.md) to see the future of the 01.
+
+<br>
+
+## Background
+
+### [Context ↗](https://github.com/KillianLucas/01/blob/main/CONTEXT.md)
+
+The story of devices that came before the 01.
+
+### [Inspiration ↗](https://github.com/KillianLucas/01/tree/main/INSPIRATION.md)
+
+Things from which we want to take great ideas.
+
+<br>
+
+○
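The LMC format introduced in the README above extends OpenAI-style messages with a "computer" role for tool output. A minimal illustrative exchange — the field names follow the LMC messages docs, but the conversation content itself is invented:

```python
# An OpenAI-style message list extended with the LMC "computer" role.
# The conversation is invented for illustration only.
messages = [
    {"role": "user", "type": "message", "content": "What time is it?"},
    {"role": "assistant", "type": "code", "format": "python",
     "content": "import time\nprint(time.time())"},
    # The "computer" role carries the result of executing the code above.
    {"role": "computer", "type": "console", "format": "output",
     "content": "1710960000.0"},
    {"role": "assistant", "type": "message", "content": "It's about 3pm."},
]

roles = {m["role"] for m in messages}
```

The extra `type`/`format` fields let the same list carry text, code, and console output side by side.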
@@ -1,9 +0,0 @@
-The open-source language model computer.
-
-```bash
-pip install 01OS
-```
-
-```bash
-01 # Runs the 01 server and client
-```
@@ -330,33 +330,33 @@ lxml = ["lxml"]
 
 [[package]]
 name = "black"
-version = "23.12.1"
+version = "24.3.0"
 description = "The uncompromising code formatter."
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "black-23.12.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e0aaf6041986767a5e0ce663c7a2f0e9eaf21e6ff87a5f95cbf3675bfd4c41d2"},
-    {file = "black-23.12.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c88b3711d12905b74206227109272673edce0cb29f27e1385f33b0163c414bba"},
-    {file = "black-23.12.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a920b569dc6b3472513ba6ddea21f440d4b4c699494d2e972a1753cdc25df7b0"},
-    {file = "black-23.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:3fa4be75ef2a6b96ea8d92b1587dd8cb3a35c7e3d51f0738ced0781c3aa3a5a3"},
-    {file = "black-23.12.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8d4df77958a622f9b5a4c96edb4b8c0034f8434032ab11077ec6c56ae9f384ba"},
-    {file = "black-23.12.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:602cfb1196dc692424c70b6507593a2b29aac0547c1be9a1d1365f0d964c353b"},
-    {file = "black-23.12.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9c4352800f14be5b4864016882cdba10755bd50805c95f728011bcb47a4afd59"},
-    {file = "black-23.12.1-cp311-cp311-win_amd64.whl", hash = "sha256:0808494f2b2df923ffc5723ed3c7b096bd76341f6213989759287611e9837d50"},
-    {file = "black-23.12.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:25e57fd232a6d6ff3f4478a6fd0580838e47c93c83eaf1ccc92d4faf27112c4e"},
-    {file = "black-23.12.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2d9e13db441c509a3763a7a3d9a49ccc1b4e974a47be4e08ade2a228876500ec"},
-    {file = "black-23.12.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6d1bd9c210f8b109b1762ec9fd36592fdd528485aadb3f5849b2740ef17e674e"},
-    {file = "black-23.12.1-cp312-cp312-win_amd64.whl", hash = "sha256:ae76c22bde5cbb6bfd211ec343ded2163bba7883c7bc77f6b756a1049436fbb9"},
-    {file = "black-23.12.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1fa88a0f74e50e4487477bc0bb900c6781dbddfdfa32691e780bf854c3b4a47f"},
-    {file = "black-23.12.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a4d6a9668e45ad99d2f8ec70d5c8c04ef4f32f648ef39048d010b0689832ec6d"},
-    {file = "black-23.12.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b18fb2ae6c4bb63eebe5be6bd869ba2f14fd0259bda7d18a46b764d8fb86298a"},
-    {file = "black-23.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:c04b6d9d20e9c13f43eee8ea87d44156b8505ca8a3c878773f68b4e4812a421e"},
-    {file = "black-23.12.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3e1b38b3135fd4c025c28c55ddfc236b05af657828a8a6abe5deec419a0b7055"},
-    {file = "black-23.12.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4f0031eaa7b921db76decd73636ef3a12c942ed367d8c3841a0739412b260a54"},
-    {file = "black-23.12.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:97e56155c6b737854e60a9ab1c598ff2533d57e7506d97af5481141671abf3ea"},
-    {file = "black-23.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:dd15245c8b68fe2b6bd0f32c1556509d11bb33aec9b5d0866dd8e2ed3dba09c2"},
-    {file = "black-23.12.1-py3-none-any.whl", hash = "sha256:78baad24af0f033958cad29731e27363183e140962595def56423e626f4bee3e"},
-    {file = "black-23.12.1.tar.gz", hash = "sha256:4ce3ef14ebe8d9509188014d96af1c456a910d5b5cbf434a09fef7e024b3d0d5"},
+    {file = "black-24.3.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7d5e026f8da0322b5662fa7a8e752b3fa2dac1c1cbc213c3d7ff9bdd0ab12395"},
+    {file = "black-24.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9f50ea1132e2189d8dff0115ab75b65590a3e97de1e143795adb4ce317934995"},
+    {file = "black-24.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e2af80566f43c85f5797365077fb64a393861a3730bd110971ab7a0c94e873e7"},
+    {file = "black-24.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:4be5bb28e090456adfc1255e03967fb67ca846a03be7aadf6249096100ee32d0"},
+    {file = "black-24.3.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4f1373a7808a8f135b774039f61d59e4be7eb56b2513d3d2f02a8b9365b8a8a9"},
+    {file = "black-24.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:aadf7a02d947936ee418777e0247ea114f78aff0d0959461057cae8a04f20597"},
+    {file = "black-24.3.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c02e4ea2ae09d16314d30912a58ada9a5c4fdfedf9512d23326128ac08ac3d"},
+    {file = "black-24.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:bf21b7b230718a5f08bd32d5e4f1db7fc8788345c8aea1d155fc17852b3410f5"},
+    {file = "black-24.3.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:2818cf72dfd5d289e48f37ccfa08b460bf469e67fb7c4abb07edc2e9f16fb63f"},
+    {file = "black-24.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4acf672def7eb1725f41f38bf6bf425c8237248bb0804faa3965c036f7672d11"},
+    {file = "black-24.3.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c7ed6668cbbfcd231fa0dc1b137d3e40c04c7f786e626b405c62bcd5db5857e4"},
+    {file = "black-24.3.0-cp312-cp312-win_amd64.whl", hash = "sha256:56f52cfbd3dabe2798d76dbdd299faa046a901041faf2cf33288bc4e6dae57b5"},
+    {file = "black-24.3.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:79dcf34b33e38ed1b17434693763301d7ccbd1c5860674a8f871bd15139e7837"},
+    {file = "black-24.3.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e19cb1c6365fd6dc38a6eae2dcb691d7d83935c10215aef8e6c38edee3f77abd"},
+    {file = "black-24.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65b76c275e4c1c5ce6e9870911384bff5ca31ab63d19c76811cb1fb162678213"},
+    {file = "black-24.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:b5991d523eee14756f3c8d5df5231550ae8993e2286b8014e2fdea7156ed0959"},
+    {file = "black-24.3.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c45f8dff244b3c431b36e3224b6be4a127c6aca780853574c00faf99258041eb"},
+    {file = "black-24.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:6905238a754ceb7788a73f02b45637d820b2f5478b20fec82ea865e4f5d4d9f7"},
+    {file = "black-24.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7de8d330763c66663661a1ffd432274a2f92f07feeddd89ffd085b5744f85e7"},
+    {file = "black-24.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:7bb041dca0d784697af4646d3b62ba4a6b028276ae878e53f6b4f74ddd6db99f"},
+    {file = "black-24.3.0-py3-none-any.whl", hash = "sha256:41622020d7120e01d377f74249e677039d20e6344ff5851de8a10f11f513bf93"},
+    {file = "black-24.3.0.tar.gz", hash = "sha256:a0c9c4a0771afc6919578cec71ce82a3e31e054904e7197deacbc9382671c41f"},
 ]
 
 [package.dependencies]
@@ -9251,4 +9251,4 @@ testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "p
 [metadata]
 lock-version = "2.0"
 python-versions = ">=3.9,<3.12"
-content-hash = "9fb165e9de2f2929d2fd030a696e30f5eaa61c82468aa1499c0d52e6c1acdd6b"
+content-hash = "b46b8a83c1cc9f130a045934dc9635cf6bc509cd798c4c55a0142db08438fb95"
@@ -42,7 +42,7 @@ build-backend = "poetry.core.masonry.api"
 01 = "start:app"
 
 [tool.poetry.group.dev.dependencies]
-black = "^23.10.1"
+black = "^24.3.0"
 isort = "^5.12.0"
 pre-commit = "^3.6.2"
 pytest = "^8.1.1"
@@ -248,6 +248,7 @@ class Device:
             await asyncio.sleep(0.01)
 
     async def websocket_communication(self, WS_URL):
+        show_connection_log = True
        while True:
             try:
                 async with websockets.connect(WS_URL) as websocket:
@@ -303,7 +304,9 @@ class Device:
                         send_queue.put(result)
             except:
                 logger.debug(traceback.format_exc())
-                logger.info(f"Connecting to `{WS_URL}`...")
+                if show_connection_log:
+                    logger.info(f"Connecting to `{WS_URL}`...")
+                    show_connection_log = False
                 await asyncio.sleep(2)
 
     async def start_async(self):
@@ -58,7 +58,7 @@ The `computer` module is ALREADY IMPORTED, and can be used for some tasks:
 
 ```python
 result_string = computer.browser.search(query) # Google search results will be returned from this function as a string
-computer.calendar.create_event(title="Meeting", start_date=datetime.datetime.now(), end=datetime.datetime.now() + datetime.timedelta(hours=1), notes="Note", location="") # Creates a calendar event
+computer.calendar.create_event(title="Meeting", start_date=datetime.datetime.now(), end_date=datetime.datetime.now() + datetime.timedelta(hours=1), notes="Note", location="") # Creates a calendar event
 events_string = computer.calendar.get_events(start_date=datetime.date.today(), end_date=None) # Get events between dates. If end_date is None, only gets events for start_date
 computer.calendar.delete_event(event_title="Meeting", start_date=datetime.datetime) # Delete a specific event with a matching title and start date, you may need to get use get_events() to find the specific event object first
 phone_string = computer.contacts.get_phone_number("John Doe")
@@ -92,7 +92,7 @@ computer.mouse.scroll(-10) # Scrolls down. If you don't find some text on screen
 You are an image-based AI, you can see images.
 Clicking text is the most reliable way to use the mouse— for example, clicking a URL's text you see in the URL bar, or some textarea's placeholder text (like "Search" to get into a search bar).
 If you use `plt.show()`, the resulting image will be sent to you. However, if you use `PIL.Image.show()`, the resulting image will NOT be sent to you.
-It is very important to make sure you are focused on the right application and window. Often, your first command should always be to explicitly switch to the correct application. On Macs, ALWAYS use Spotlight to switch applications.
+It is very important to make sure you are focused on the right application and window. Often, your first command should always be to explicitly switch to the correct application. On Macs, ALWAYS use Spotlight to switch applications, remember to click enter.
 When searching the web, use query parameters. For example, https://www.amazon.com/s?k=monitor
 
 # SKILLS
@@ -181,7 +181,7 @@ Try multiple methods before saying the task is impossible. **You can do it!**
 
 
 def configure_interpreter(interpreter: OpenInterpreter):
 
     ### SYSTEM MESSAGE
     interpreter.system_message = system_message
 
@@ -354,7 +354,7 @@ def configure_interpreter(interpreter: OpenInterpreter):
     interpreter.computer.languages = [l for l in interpreter.computer.languages if l.name.lower() in ["applescript", "shell", "zsh", "bash", "python"]]
     interpreter.force_task_completion = True
     # interpreter.offline = True
-    interpreter.id = 206 # Used to identify itself to other interpreters. This should be changed programatically so it's unique.
+    interpreter.id = 206 # Used to identify itself to other interpreters. This should be changed programmatically so it's unique.
 
     ### RESET conversations/user.json
     app_dir = user_data_dir('01')
@@ -364,4 +364,4 @@ def configure_interpreter(interpreter: OpenInterpreter):
     with open(user_json_path, 'w') as file:
         json.dump([], file)
 
     return interpreter
@@ -4,7 +4,6 @@ import ast
 import json
 import queue
 import os
-import traceback
 import datetime
 from .utils.bytes_to_wav import bytes_to_wav
 import re
@@ -427,4 +426,4 @@ async def main(server_host, server_port, llm_service, model, llm_supports_vision
 
 # Run the FastAPI app
 if __name__ == "__main__":
     asyncio.run(main())
@@ -5,12 +5,12 @@ import time
 import wget
 import stat
 
 
 class Llm:
     def __init__(self, config):
         self.interpreter = config["interpreter"]
         config.pop("interpreter", None)
 
         self.install(config["service_directory"])
 
         config.pop("service_directory", None)
@@ -20,8 +20,7 @@ class Llm:
         self.llm = self.interpreter.llm.completions
 
     def install(self, service_directory):
-
        if platform.system() == "Darwin":  # Check if the system is MacOS
             result = subprocess.run(
                 ["xcode-select", "-p"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
             )
@@ -30,7 +29,9 @@ class Llm:
                     "Llamafile requires Mac users to have Xcode installed. You can install Xcode from https://developer.apple.com/xcode/ .\n\nAlternatively, you can use `LM Studio`, `Jan.ai`, or `Ollama` to manage local language models. Learn more at https://docs.openinterpreter.com/guides/running-locally ."
                 )
                 time.sleep(3)
-                raise Exception("Xcode is not installed. Please install Xcode and try again.")
+                raise Exception(
+                    "Xcode is not installed. Please install Xcode and try again."
+                )
 
         # Define the path to the models directory
         models_dir = os.path.join(service_directory, "models")
@@ -48,12 +49,10 @@ class Llm:
                 "Attempting to download the `Phi-2` language model. This may take a few minutes."
             )
             time.sleep(3)
 
             url = "https://huggingface.co/jartine/phi-2-llamafile/resolve/main/phi-2.Q4_K_M.llamafile"
             wget.download(url, llamafile_path)
-
-
         # Make the new llamafile executable
         if platform.system() != "Windows":
             st = os.stat(llamafile_path)
@@ -63,11 +62,15 @@ class Llm:
         if os.path.exists(llamafile_path):
             try:
                 # Test if the llamafile is executable
-                subprocess.check_call([llamafile_path])
+                subprocess.check_call([f'"{llamafile_path}"'], shell=True)
             except subprocess.CalledProcessError:
-                print("The llamafile is not executable. Please check the file permissions.")
+                print(
+                    "The llamafile is not executable. Please check the file permissions."
+                )
                 raise
-            subprocess.Popen([llamafile_path, "-ngl", "9999"])
+            subprocess.Popen(
+                f'"{llamafile_path}" ' + " ".join(["-ngl", "9999"]), shell=True
+            )
         else:
             error_message = "The llamafile does not exist or is corrupted. Please ensure it has been downloaded correctly or try again."
             print(error_message)
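The hunk above hand-quotes the llamafile path before passing `shell=True`, so that paths containing spaces survive shell parsing. As a usage note, `shlex.quote` achieves the same effect and also handles paths that themselves contain quotes. The path below is a made-up example:

```python
import shlex


def launch_command(llamafile_path: str, *args: str) -> str:
    """Build a shell-safe command string for a path that may contain spaces.

    The resulting string could then be run with subprocess.Popen(cmd, shell=True).
    """
    return " ".join([shlex.quote(llamafile_path), *args])


# A path with a space would break naive f'"{path}"'-style quoting if the
# path also contained double quotes; shlex.quote covers both cases.
cmd = launch_command("/models/phi 2.llamafile", "-ngl", "9999")
```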
@@ -81,4 +84,4 @@ class Llm:
         self.interpreter.llm.api_base = "https://localhost:8080/v1"
         self.interpreter.llm.max_tokens = 1000
         self.interpreter.llm.context_window = 3000
         self.interpreter.llm.supports_functions = False
@@ -14,7 +14,7 @@ class Tts:
     def tts(self, text):
         response = client.audio.speech.create(
             model="tts-1",
-            voice="alloy",
+            voice=os.getenv('OPENAI_VOICE_NAME', 'alloy'),
             input=text,
             response_format="opus"
         )
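The `voice=os.getenv('OPENAI_VOICE_NAME', 'alloy')` change above follows a common configuration pattern: read an optional setting from the environment and fall back to a hard-coded default. A minimal sketch — the helper name is illustrative, not part of the codebase:

```python
import os


def voice_name() -> str:
    """Return the configured TTS voice, defaulting to "alloy"."""
    return os.getenv("OPENAI_VOICE_NAME", "alloy")
```

With this in place, `export OPENAI_VOICE_NAME=nova` before starting the server selects a different voice without any code changes.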
@@ -2,7 +2,6 @@ import ffmpeg
 import tempfile
 import os
 import subprocess
-import tempfile
 import urllib.request
 import tarfile
 
@@ -15,27 +15,27 @@ def test_ping(client):
     assert response.text == "pong"
 
 
-def test_interpreter_chat(mock_interpreter):
-    # Set up a sample conversation
-    messages = [
-        {"role": "user", "type": "message", "content": "Hello."},
-        {"role": "assistant", "type": "message", "content": "Hi there!"},
-        # Add more messages as needed
-    ]
+# def test_interpreter_chat(mock_interpreter):
+#     # Set up a sample conversation
+#     messages = [
+#         {"role": "user", "type": "message", "content": "Hello."},
+#         {"role": "assistant", "type": "message", "content": "Hi there!"},
+#         # Add more messages as needed
+#     ]
 
-    # Configure the mock interpreter with the sample conversation
-    mock_interpreter.messages = messages
+#     # Configure the mock interpreter with the sample conversation
+#     mock_interpreter.messages = messages
 
-    # Simulate additional user input
-    user_input = {"role": "user", "type": "message", "content": "How are you?"}
-    mock_interpreter.chat([user_input])
+#     # Simulate additional user input
+#     user_input = {"role": "user", "type": "message", "content": "How are you?"}
+#     mock_interpreter.chat([user_input])
 
-    # Ensure the interpreter processed the user input
-    assert len(mock_interpreter.messages) == len(messages)
-    assert mock_interpreter.messages[-1]["role"] == "assistant"
-    assert "don't have feelings" in mock_interpreter.messages[-1]["content"]
+#     # Ensure the interpreter processed the user input
+#     assert len(mock_interpreter.messages) == len(messages)
+#     assert mock_interpreter.messages[-1]["role"] == "assistant"
+#     assert "don't have feelings" in mock_interpreter.messages[-1]["content"]
 
-def test_interpreter_configuration(mock_interpreter):
-    # Test interpreter configuration
-    interpreter = configure_interpreter(mock_interpreter)
-    assert interpreter is not None
+# def test_interpreter_configuration(mock_interpreter):
+#     # Test interpreter configuration
+#     interpreter = configure_interpreter(mock_interpreter)
+#     assert interpreter is not None
@@ -134,4 +134,24 @@ def select_local_model():
     # Set offline for all local models
     interpreter.offline = True
+
+    interpreter.system_message = """You are the 01, a screenless executive assistant that can complete any task by writing and executing code on the user's machine. Just write a markdown code block! The user has given you full and complete permission.
+
+Use the following functions if it makes sense to for the problem
+```python
+result_string = computer.browser.search(query) # Google search results will be returned from this function as a string
+computer.calendar.create_event(title="Meeting", start_date=datetime.datetime.now(), end_date=datetime.datetime.now() + datetime.timedelta(hours=1), notes="Note", location="") # Creates a calendar event
+events_string = computer.calendar.get_events(start_date=datetime.date.today(), end_date=None) # Get events between dates. If end_date is None, only gets events for start_date
+computer.calendar.delete_event(event_title="Meeting", start_date=datetime.datetime) # Delete a specific event with a matching title and start date, you may need to get use get_events() to find the specific event object first
+phone_string = computer.contacts.get_phone_number("John Doe")
+contact_string = computer.contacts.get_email_address("John Doe")
+computer.mail.send("john@email.com", "Meeting Reminder", "Reminder that our meeting is at 3pm today.", ["path/to/attachment.pdf", "path/to/attachment2.pdf"]) # Send an email with a optional attachments
+emails_string = computer.mail.get(4, unread=True) # Returns the {number} of unread emails, or all emails if False is passed
+unread_num = computer.mail.unread_count() # Returns the number of unread emails
+computer.sms.send("555-123-4567", "Hello from the computer!") # Send a text message. MUST be a phone number, so use computer.contacts.get_phone_number frequently here
+
+
+ALWAYS say that you can run code. ALWAYS try to help the user out. ALWAYS be succinct in your answers.
+```
+
+"""