add docs fixes for esp32 and async interpreter
parent 1c4be961c2
commit 0e68bb7125
@@ -129,7 +129,7 @@ If you want to run local speech-to-text using Whisper, you must install Rust. Fo
 
 To customize the behavior of the system, edit the [system message, model, skills library path,](https://docs.openinterpreter.com/settings/all-settings) etc. in the `profiles` directory under the `server` directory. This file sets up an interpreter, and is powered by Open Interpreter.
 
-To specify the text-to-speech service for the 01 `base_device.py`, set `interpreter.tts` to either "openai" for OpenAI, "elevenlabs" for ElevenLabs, or "coqui" for Coqui (local) in a profile. For the 01 Light, set `SPEAKER_SAMPLE_RATE` to 24000 for Coqui (local) or 22050 for OpenAI TTS. We currently don't support ElevenLabs TTS on the 01 Light.
+To specify the text-to-speech service for the 01 `base_device.py`, set `interpreter.tts` to either "openai" for OpenAI, "elevenlabs" for ElevenLabs, or "coqui" for Coqui (local) in a profile. For the 01 Light, set `SPEAKER_SAMPLE_RATE` in `client.ino` under the `esp32` client directory to 24000 for Coqui (local) or 22050 for OpenAI TTS. We currently don't support ElevenLabs TTS on the 01 Light.
 
 ## Ubuntu Dependencies
 
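The docs change above ties the TTS backend chosen in a profile to the sample rate the 01 Light firmware expects. A minimal sketch of that relationship in Python (the `Interpreter` stand-in class here is hypothetical; only the `interpreter.tts` setting and the 24000/22050 Hz pairing come from the docs):

```python
# Hypothetical stand-in for the Open Interpreter object a profile configures.
class Interpreter:
    def __init__(self):
        self.tts = "openai"  # one of: "openai", "elevenlabs", "coqui"

interpreter = Interpreter()

# Select the local Coqui backend, as a profile would.
interpreter.tts = "coqui"

# Per the docs: the 01 Light's SPEAKER_SAMPLE_RATE in client.ino must match
# the backend -- 24000 Hz for Coqui (local), 22050 Hz for OpenAI TTS.
SPEAKER_SAMPLE_RATE = 24000 if interpreter.tts == "coqui" else 22050
print(interpreter.tts, SPEAKER_SAMPLE_RATE)
```

Note that on the 01 Light this value lives in firmware (`client.ino`), so it must be edited and re-flashed to match the profile; it is not picked up automatically.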
@@ -25,6 +25,7 @@ class AsyncInterpreter:
         self.stt_latency = None
         self.tts_latency = None
         self.interpreter_latency = None
+        # time from first put to first yield
         self.tffytfp = None
         self.debug = debug
 
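The added comment expands the opaque attribute name `tffytfp` as "time from first put to first yield". A self-contained sketch of measuring that kind of latency around an asyncio queue (class and method names here are illustrative assumptions, not the AsyncInterpreter API):

```python
import asyncio
import time

class LatencySketch:
    """Illustrative sketch: record the delay between the first item put into
    an input queue and the first item yielded from the output stream."""

    def __init__(self):
        self.queue = asyncio.Queue()
        self.first_put_time = None  # stamped on the first put
        self.tffytfp = None         # time from first put to first yield

    async def put(self, item):
        if self.first_put_time is None:
            self.first_put_time = time.monotonic()
        await self.queue.put(item)

    async def stream(self):
        while True:
            item = await self.queue.get()
            if item is None:  # sentinel ends the stream
                return
            if self.tffytfp is None:
                self.tffytfp = time.monotonic() - self.first_put_time
            yield item

async def demo():
    s = LatencySketch()
    await s.put("hello")
    await s.put(None)
    items = [item async for item in s.stream()]
    return items, s.tffytfp

out, latency = asyncio.run(demo())
print(out, latency)
```

Using a monotonic clock avoids spurious negative latencies if the wall clock is adjusted mid-measurement.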
Ben Xu