
TySVA - TypeScript Voice Assistant🪄

Learn TypeScript chatting effortlessly with AI

If you find TySVA useful, please consider donating to support the project:

GitHub Sponsors Badge

TySVA Logo

TySVA aims to create a learning space where you can deepen your knowledge of TypeScript, leveraging:

  • A local Qdrant database, loaded with the full TypeScript documentation
  • LinkUp, for deep web search
  • MCP servers, for automating vector search and web search
  • ElevenLabs, for voice input transcription and voice output generation
  • LlamaIndex, for agent workflows

It supports both voice and textual input/output.
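
To give a feel for how these pieces compose, below is a minimal sketch of a LlamaIndex agent wired to MCP-exposed tools and a Groq-hosted model. The model name, MCP endpoint, and system prompt are illustrative assumptions, not the project's actual configuration.

# Hedged sketch: a LlamaIndex agent using MCP-exposed tools and a Groq LLM.
# Model name, MCP endpoint and system prompt are assumptions for illustration.
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.groq import Groq
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

async def main() -> None:
    # assumes the MCP server is reachable over SSE at this hypothetical URL
    mcp_client = BasicMCPClient("http://localhost:8000/sse")
    tools = await McpToolSpec(client=mcp_client).to_tool_list_async()
    agent = FunctionAgent(
        tools=tools,
        llm=Groq(model="llama-3.3-70b-versatile"),  # reads GROQ_API_KEY
        system_prompt="Answer TypeScript questions, using tools when needed.",
    )
    response = await agent.run("What does the 'satisfies' operator do?")
    print(str(response))

asyncio.run(main())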

Install and launch🚀

The first step, common to both the Docker and the source code setups, is to clone the repository and enter it:

git clone https://github.com/AstraBert/TySVA.git
cd TySVA

Once there, you can choose one of the two following approaches:

Docker (recommended)🐋

Required: Docker and docker compose

  • Move the example environment file into place and fill in the required API keys (Groq, ElevenLabs, LinkUp):
mv .env.example .env
  • Launch the services with the provided script:
# If you are on Linux/macOS
bash start_services.sh
# If you are on Windows
.\start_services.ps1
  • Or do it manually:
docker compose up vector_db -d
docker compose up mcp -d
docker compose up app -d

You will see the application running on http://localhost:7999/app and you will be able to use it. Depending on your connection and your hardware, the setup might take some time (up to 15 minutes) - but that's only the first time you run it!

Source code🗎

Required: Docker, docker compose and conda

  • Move the example environment file into scripts/ and fill in the required API keys (Groq, ElevenLabs, LinkUp):
mv .env.example scripts/.env
  • Set up the conda environment and the vector database using the dedicated script:
# For macOS/Linux users
bash setup.sh
# For Windows users
.\setup.ps1
  • Or you can do it manually, if you prefer:
docker compose up vector_db -d

conda env create -f environment.yml
  • Now you can launch the script that loads the TypeScript documentation into the vector database (a rough sketch of what this step does follows the commands):
conda activate typescript-assistant-voice
python3 scripts/data.py
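
For reference, the loading step conceptually boils down to something like the sketch below; the docs path, collection name, and embedding model are illustrative assumptions, not the script's actual values.

# Hedged sketch of what a loading step like scripts/data.py boils down to:
# read the TypeScript docs, embed them, and store the vectors in Qdrant.
# The docs path, collection name and embedding model are assumptions.
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

client = QdrantClient(host="localhost", port=6333)  # the vector_db service
store = QdrantVectorStore(client=client, collection_name="typescript_docs")
docs = SimpleDirectoryReader("data/typescript-docs").load_data()  # hypothetical path
VectorStoreIndex.from_documents(
    docs,
    storage_context=StorageContext.from_defaults(vector_store=store),
    embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
)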
  • And, when you're done, launch the MCP server (sketched below):
conda activate typescript-assistant-voice
cd scripts
python3 server.py
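
As a rough idea of what an MCP server like scripts/server.py exposes, here is a minimal sketch using the official Python SDK's FastMCP; the server name, tool names, and stub bodies are assumptions, not the repo's actual code.

# Hedged sketch of an MCP server exposing the two tools TySVA relies on.
# Server name, tool names and stub bodies are illustrative, not the repo's code.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("tysva-tools")

@mcp.tool()
def search_typescript_docs(query: str) -> str:
    """Retrieve relevant chunks from the local Qdrant docs collection."""
    return "top matching documentation chunks for: " + query  # stub

@mcp.tool()
def deep_web_search(query: str) -> str:
    """Run a LinkUp deep search for questions the docs do not cover."""
    return "web search results for: " + query  # stub

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default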
  • Now open another terminal and run the application:
uvicorn app:app --host 0.0.0.0 --port 7999

You will see the application running on http://localhost:7999/app and you will be able to use it.

Workflow


The workflow is very simple:

  • When you submit a request, if it is audio it first gets transcribed and then passed to the agent workflow as the starting prompt; if it is textual, it is passed to the agent workflow directly
  • The agent workflow can answer your TypeScript question by retrieving documents from the vector database or by searching the web; it can also reply directly (no tool use) when the question is simple. All the tools are available through MCP.
  • Once the agent is done, the agentic process and the output get summarized, and the summary is turned into voice output, which is returned along with the agent's textual answer (see the sketch after this list).
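
Here is a hedged sketch of that end-to-end flow. The run_agent and summarize functions are hypothetical stand-ins for TySVA's internals, and the ElevenLabs voice and model IDs are assumptions as well.

# Hedged sketch of the request flow: transcribe (if audio), answer, summarize, speak.
# run_agent and summarize are hypothetical stand-ins for TySVA's internals,
# and the ElevenLabs voice/model IDs are assumptions.
from elevenlabs.client import ElevenLabs

client = ElevenLabs()  # reads ELEVENLABS_API_KEY from the environment

def run_agent(prompt: str) -> tuple[str, str]:
    return "answer to: " + prompt, "searched the vector database"  # stub agent + trace

def summarize(trace: str, answer: str) -> str:
    return f"I {trace} and concluded: {answer}"  # stub summarizer

def handle_request(text: str | None = None, audio_path: str | None = None):
    if audio_path is not None:  # voice input gets transcribed first
        with open(audio_path, "rb") as f:
            text = client.speech_to_text.convert(file=f, model_id="scribe_v1").text
    answer, trace = run_agent(text)
    speech = client.text_to_speech.convert(
        voice_id="JBFqnCBsd6RMkjVDRZzb",  # assumed voice ID
        text=summarize(trace, answer),
        model_id="eleven_multilingual_v2",
    )
    return answer, speech  # textual answer plus an audio byte stream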

Contributing

Contributions are always welcome! Follow the contribution guidelines reported here.

License and rights of usage

The software is provided under the MIT license.
