merge guides into configure
parent a7c96bed62
commit db3e2c3638
@@ -1,16 +0,0 @@
----
-title: "Language Model"
-description: "The LLM that powers your 01"
----
-
-## Hosted Models
-
-The default LLM for 01 is GPT-4-Turbo. You can find this in the default profile in `software/source/server/profiles/default.py`.
-
-The fast profile uses Llama3-8b served by Groq. You can find this in the fast profile in `software/source/server/profiles/fast.py`.
-
-## Local Models
-
-You can use local models to power 01.
-
-Using the local profile launches the Local Explorer where you can select your inference provider and model. The default options include Llamafile, Jan, Ollama, and LM Studio.
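The deleted guide above maps profile names to the models they select. A minimal sketch of that mapping (the profile names and providers come from the guide; the model identifier strings and the helper itself are assumptions for illustration, not 01's API):

```python
# Illustrative mapping of the profiles described in the guide to the
# models they select. Profile names come from the guide; the model
# identifier strings are assumptions for illustration only.
PROFILE_MODELS = {
    "default": "gpt-4-turbo",   # hosted default (GPT-4-Turbo)
    "fast": "groq/llama3-8b",   # Llama3-8b served by Groq
}

def model_for(profile: str) -> str:
    """Return the model a given profile would configure.

    Unknown profiles fall back to "local", standing in for the
    Local Explorer choice described in the guide.
    """
    return PROFILE_MODELS.get(profile, "local")

print(model_for("fast"))
```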
@@ -1,26 +0,0 @@
----
-title: "Text To Speech"
-description: "The voice of 01"
----
-
-## Local TTS
-
-For local TTS, Coqui is used.
-
-```python
-# Set your profile with a local TTS service
-interpreter.tts = "coqui"
-```
-
-## Hosted TTS
-
-01 supports OpenAI and ElevenLabs for hosted TTS.
-
-```python
-# Set your profile with a hosted TTS service
-interpreter.tts = "elevenlabs"
-```
-
-## Other Models
-
-More instructions coming soon!
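The deleted TTS guide names three services. A hypothetical validation helper (not part of 01; the service names are the ones the guide lists) shows how a profile's `tts` value could be checked before use:

```python
# Hypothetical helper, not part of the 01 codebase: the service names
# below are the ones the TTS guide lists (Coqui locally, OpenAI and
# ElevenLabs hosted); the validation logic is an illustration.
SUPPORTED_TTS = {"coqui", "openai", "elevenlabs"}

def validate_tts(service: str) -> str:
    """Return the service name if supported, else raise ValueError."""
    if service not in SUPPORTED_TTS:
        raise ValueError(f"unsupported TTS service: {service}")
    return service

print(validate_tts("coqui"))
```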
@@ -52,10 +52,6 @@
       "group": "Hardware Setup",
       "pages": ["hardware/01-light", "hardware/m5atom"]
     },
-    {
-      "group": "Using 01",
-      "pages": ["guides/language-model", "guides/text-to-speech"]
-    },
     {
       "group": "Troubleshooting",
       "pages": ["troubleshooting/faq"]
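The hunk above drops the "Using 01" group from the navigation. A quick sanity check one might run against the updated navigation (the structure below is a simplified sketch keeping only the groups shown in the hunk, not the full mint.json):

```python
import json

# Simplified sketch of the updated navigation from the hunk above;
# the real mint.json has more groups, this keeps only the ones shown.
nav = json.loads("""
[
  {"group": "Hardware Setup", "pages": ["hardware/01-light", "hardware/m5atom"]},
  {"group": "Troubleshooting", "pages": ["troubleshooting/faq"]}
]
""")

# After the merge, no navigation entry should still point at the
# deleted guides/ pages.
dangling = [p for g in nav for p in g["pages"] if p.startswith("guides/")]
print(dangling)  # []
```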
@@ -38,3 +38,33 @@ The easiest way is to duplicate an existing profile and then update values as needed.
 # Use custom profile
 poetry run 01 --profile <profile_name>
 ```
+
+### Hosted LLMs
+
+The default LLM for 01 is GPT-4-Turbo. You can find this in the default profile in `software/source/server/profiles/default.py`.
+
+The fast profile uses Llama3-8b served by Groq. You can find this in the fast profile in `software/source/server/profiles/fast.py`.
+
+### Local LLMs
+
+You can use local models to power 01.
+
+Using the local profile launches the Local Explorer where you can select your inference provider and model. The default options include Llamafile, Jan, Ollama, and LM Studio.
+
+### Hosted TTS
+
+01 supports OpenAI and ElevenLabs for hosted TTS.
+
+```python
+# Set your profile with a hosted TTS service
+interpreter.tts = "elevenlabs"
+```
+
+### Local TTS
+
+For local TTS, Coqui is used.
+
+```python
+# Set your profile with a local TTS service
+interpreter.tts = "coqui"
+```
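The merged section runs custom profiles with `poetry run 01 --profile <profile_name>`. A hypothetical sketch of how such a name could resolve to a file (the `software/source/server/profiles/` directory comes from the docs; the resolver itself is an illustration, not 01's code):

```python
from pathlib import Path

# Hypothetical sketch, not 01's implementation: map a --profile name
# to a module path under the profiles directory named in the docs.
PROFILE_DIR = Path("software/source/server/profiles")

def resolve_profile(name: str) -> Path:
    """Map a profile name like 'fast' to its profile file path."""
    return PROFILE_DIR / f"{name}.py"

print(resolve_profile("fast"))
```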