add llm examples to configure
parent db3e2c3638
commit ef1e711986

@ -45,12 +45,22 @@ The default LLM for 01 is GPT-4-Turbo. You can find this in the default profile
The fast profile uses Llama3-8b served by Groq. You can find this in the fast profile in `software/source/server/profiles/fast.py`.
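As a sketch of how a profile is chosen at launch, the 01 CLI accepts a `--profile` flag (assuming a Poetry-based checkout as in the 01 README; verify the flag against your version):

```shell
# Launch 01 with the fast profile (Llama3-8b served by Groq)
poetry run 01 --profile fast
```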
```python
# Set your profile with a hosted LLM
interpreter.llm.model = "gpt-4o"
```
### Local LLMs
You can use local models to power 01.
Using the local profile launches the Local Explorer where you can select your inference provider and model. The default options include Llamafile, Jan, Ollama, and LM Studio.
```python
# Set your profile with a local LLM
interpreter.local_setup()
```
### Hosted TTS
01 supports OpenAI and ElevenLabs for hosted TTS.
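To mirror the hosted-LLM example above, a TTS service can be selected in a profile. A minimal sketch, assuming the profile's `interpreter` object exposes a `tts` attribute as the bundled profiles do (check the attribute name in your profile file before relying on it):

```python
# Set your profile with a hosted TTS service (attribute name assumed)
interpreter.tts = "elevenlabs"  # or "openai"
```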