01/01OS

| File | Last commit | Date |
| --- | --- | --- |
| 01OS | Adding localtunnel support | 2024-02-17 22:27:58 -08:00 |
| _archive | Bug fixes, CTRL-C fix, relative script fixes, less print statements | 2024-02-15 12:36:08 -08:00 |
| .cursorignore | configuring cursor to ignore archive and other random files | 2024-02-17 18:42:12 -08:00 |
| .env.example | Adding localtunnel support | 2024-02-17 22:27:58 -08:00 |
| README.md | Adding localtunnel support | 2024-02-17 22:27:58 -08:00 |
| poetry.lock | Switching tunneling functionality from ngrok to localhost.run | 2024-02-17 21:23:36 -08:00 |
| pyproject.toml | Switching tunneling functionality from ngrok to localhost.run | 2024-02-17 21:23:36 -08:00 |
| start.py | Bug fixes, CTRL-C fix, relative script fixes, less print statements | 2024-02-15 12:36:08 -08:00 |
| start.sh | Adding localtunnel support | 2024-02-17 22:27:58 -08:00 |
| tunnel.sh | Adding localtunnel support | 2024-02-17 22:27:58 -08:00 |

README.md

The open-source language model computer.

```shell
pip install 01OS
```

```shell
01 # This will run a server + attempt to determine and run a client.
   # (Behavior can be modified by changing the contents of `.env`)
```
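
For reference, behavior toggles live in `.env` (see `.env.example` in the repository for the actual keys). A minimal sketch follows; the variable names are illustrative assumptions, not documented settings:

```shell
# Hypothetical .env sketch — key names below are illustrative only;
# consult .env.example for the keys the project actually reads.
SERVER_START=True    # whether `01` should start a server on this machine
CLIENT_START=True    # whether `01` should start a client on this machine
SERVER_URL=          # URL of a remote 01 server, if the client runs elsewhere
```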

Expose an 01 server publicly:

We currently use localtunnel to create public tunnel endpoints.

Note: You will need to install Node and the localtunnel tool before this will work: `npm install -g localtunnel`

```shell
01 --server --expose # This will print a URL that a client can point to.
```
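
As a sketch of the full flow, assuming the printed URL is handed to the client through `.env` (the `SERVER_URL` key below is an assumption, not a documented setting):

```shell
# One-time: install the localtunnel CLI (requires Node.js)
npm install -g localtunnel

# Machine A: start the 01 server and expose it; note the public URL it prints
01 --server --expose

# Machine B: point the client at that URL (hypothetical key), then start it
echo "SERVER_URL=https://your-tunnel.loca.lt" >> .env
01 --client macos
```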

Run a specific client:

```shell
01 --client macos # Options: macos, rpi
```

Run locally:

The current default uses OpenAI's services.

The `--local` flag will install and run the whisper.cpp speech-to-text (STT) and Piper text-to-speech (TTS) models.

```shell
01 --local # Local client and server
01 --local --server --expose # Expose a local server
```
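
A fully local server can also be exposed and paired with a thin client on another device. A sketch using only the flags shown above (how the client discovers the server, e.g. via `.env`, is assumed):

```shell
# Machine A: run the local STT/TTS stack and expose the server publicly
01 --local --server --expose

# Machine B (e.g. a Raspberry Pi): run only the client
01 --client rpi
```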