Your First Llama Stack Application

Understand how pre-configured Llama Stack distributions work, how to interact with the system via Python code, and how to extend this setup for more complex tasks down the line.

After setting up our development environment locally, we're now ready to run our first real application. We'll build a simple script that connects to a remote model hosted by Together AI (a cloud platform for building, training, and deploying open-source AI models such as Llama and DeepSeek, with flexible options for running them in the cloud, in private environments, or on-premises), sends a prompt, and receives a response from Llama Stack. This is the "Hello World" moment of our Llama Stack journey, but it involves more than a print statement: it is a complete interaction with the full stack, with environment, server, SDK, model, and API all coming together.
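As a preview, such a script can be sketched as below. This is a minimal sketch, not the lesson's final code: it assumes the `llama-stack-client` Python SDK is installed and a Llama Stack server is already running (for example, one configured to route inference to Together AI). The server URL and model id are placeholders, and the SDK's method names may differ across versions.

```python
# Hypothetical sketch of the "Hello World" script described above.
# Assumes `pip install llama-stack-client` and a running Llama Stack server;
# the URL and model id below are placeholders, not values from this lesson.

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble the chat message list the inference API expects."""
    return [{"role": "user", "content": user_prompt}]


def main() -> None:
    # Imported here so the helper above can be used without the SDK installed.
    from llama_stack_client import LlamaStackClient

    client = LlamaStackClient(base_url="http://localhost:8321")  # placeholder URL
    response = client.inference.chat_completion(
        model_id="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model id
        messages=build_messages("Hello, Llama Stack!"),
    )
    # Print the assistant's reply text from the completion message.
    print(response.completion_message.content)


if __name__ == "__main__":
    main()
```

Running this end to end requires a live server and a configured provider; we will build up to exactly that in the rest of this lesson.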