AI Studio vs. Vertex AI: Choosing the Right Platform
Explore AI Studio’s no-code environment to craft and customize Gemini prompts, tune models, and export code. Understand when to choose Vertex AI for advanced deployment and management of Gemini-powered applications.
We discussed Vertex AI in an earlier lesson, and briefly interacted with AI Studio to create our API keys. Let’s explore AI Studio in detail and check out its features.
AI Studio
Google AI Studio is a development environment specifically designed for working with generative AI models, particularly Gemini. It provides a user-friendly interface to streamline the process of building generative AI applications. AI Studio acts as a bridge between us and Gemini. It allows us to send and receive prompts without using code while still providing customization features.
AI Studio is primarily targeted toward developers and non-experts who want to leverage generative AI capabilities without needing in-depth knowledge of LLM technologies.
AI Studio features
AI Studio offers many features. We have already used it to create an API key for Gemini, so let’s explore a few other key features:
With the current pace of development, Google regularly updates both Gemini and AI Studio, and new features are added constantly. The key features below were present at the time of writing this course.
Crafting prompts
With Gemini’s huge context window, we can get really creative with our prompts. AI Studio allows us to work on our prompts in an easy-to-use, no-code environment. There are currently two types of prompts being offered: chat and structured.
Chat prompts simulate a conversation with a model, allowing us to exchange messages. The chat is carried out as it would be in a regular chatbot, with the model building and retaining context across the conversation.
Structured prompts allow us to guide the model by providing sample inputs and outputs, similar to few-shot prompting. AI Studio currently supports up to 500 examples in these prompts.
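To make the idea concrete, here is a minimal sketch of how a structured (few-shot) prompt can be assembled from example pairs. The `input:`/`output:` labels and the sample pairs are illustrative assumptions, not AI Studio's exact internal format:

```python
# A minimal sketch of assembling a structured (few-shot) prompt.
# The "input:"/"output:" labels are illustrative, not AI Studio's exact format.

def build_structured_prompt(examples, new_input):
    """Combine sample input/output pairs with a new input, few-shot style."""
    parts = []
    for sample_in, sample_out in examples:
        parts.append(f"input: {sample_in}")
        parts.append(f"output: {sample_out}")
    parts.append(f"input: {new_input}")
    parts.append("output:")  # the model completes from here
    return "\n".join(parts)

examples = [
    ("The movie was fantastic!", "positive"),
    ("I want my money back.", "negative"),
]
print(build_structured_prompt(examples, "Best purchase I've made all year."))
```

The examples teach the model the expected output pattern, so it completes the final `output:` line in the same style.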
Several features are available for both prompt types:
Model: Both prompt types can be used with a variety of models, selectable from a drop-down menu.
Token count: AI Studio displays a running count of the tokens used by the current prompt, which helps estimate the cost and size of AI-powered experiences.
Temperature: This controls the randomness of the model’s output: lower values yield more focused, deterministic responses, while higher values yield more varied, creative ones.
Stop sequence: A stop sequence can be added to halt generation at its first appearance in the output.
Safety settings: Gemini offers several categories of safety filters that can be adjusted individually: harassment, hate speech, sexually explicit content, and dangerous content. We do not recommend changing the default settings.
Advanced options: Some advanced options appear depending on the chosen model. For Gemini 1.0 models, we can modify the output length in tokens, top-k, and top-p. For Gemini 1.5 models, we can set the output format to JSON, with 1.5 Pro also accepting a JSON schema; the output length can be set for 1.5 models as well.
Code execution: We can also enable code execution, which allows the model to run Python code in a code execution environment, with some limitations. For example, if we ask Gemini to generate and run some code, rather than predicting what the output might be, Gemini actually executes the code and returns the result. The model can then iterate on its code, learning from each run to produce a better answer.
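To tie the settings above together, here is a hedged sketch that collects them into a generation-config dictionary and shows how a stop sequence truncates output. The key names mirror the Gemini API's generation config, but treat them as an assumption rather than a spec; the code does not call the API:

```python
# Illustrative sketch of the request settings described above.
# Key names are assumed to mirror the Gemini API's generation config.

def make_generation_config(temperature=0.7, top_k=40, top_p=0.95,
                           max_output_tokens=1024, stop_sequences=None):
    return {
        "temperature": temperature,            # randomness of sampling
        "top_k": top_k,                        # sample from the k most likely tokens
        "top_p": top_p,                        # nucleus-sampling probability mass
        "max_output_tokens": max_output_tokens,
        "stop_sequences": stop_sequences or [],
    }

def apply_stop_sequence(text, stop_sequences):
    """Truncate output at the first appearance of any stop sequence."""
    for stop in stop_sequences:
        index = text.find(stop)
        if index != -1:
            text = text[:index]
    return text

config = make_generation_config(temperature=0.2, stop_sequences=["END"])
print(apply_stop_sequence("Step 1... Step 2... END Step 3...", config["stop_sequences"]))
# → Step 1... Step 2...
```

In AI Studio these values are set through the UI; when the prompt is later exported as code, they travel with it as part of the request.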
Perhaps the most impressive feature is the ability to get the code for the prompt we have built. This allows us to call the prompt through the Gemini API in JavaScript, Python, Kotlin, or Swift. Creating AI experiences could not be easier! We can test and modify our prompts in AI Studio, and once we are ready, we can simply export the entire experience as code.
Tuning a model
While structured prompts can be a quick and easy way to customize the model’s output, sometimes we need more than examples alone can provide. AI Studio offers model tuning for a few models, with more to be added later. A structured prompt or a CSV file can serve as the training data.
Advanced settings, such as the number of training epochs, the batch size, and the learning rate, can also be adjusted before tuning begins.
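Here is a hedged sketch of preparing tuning data as a CSV file with input/output columns. The column names and example rows are illustrative assumptions; check AI Studio's import dialog for the exact format it expects:

```python
# A sketch of writing tuning examples to CSV.
# Column names ("text_input", "output") are an assumption, not a spec.
import csv
import io

training_examples = [
    {"text_input": "2 + 2", "output": "4"},
    {"text_input": "3 + 5", "output": "8"},
    {"text_input": "10 - 4", "output": "6"},
]

buffer = io.StringIO()  # swap in open("tuning_data.csv", "w", newline="") to write a file
writer = csv.DictWriter(buffer, fieldnames=["text_input", "output"])
writer.writeheader()
writer.writerows(training_examples)
print(buffer.getvalue())
```

A file produced this way can then be uploaded as the tuning dataset in place of a structured prompt.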
Why use Vertex AI?
AI Studio’s current offerings are nothing short of impressive. You might wonder why Vertex AI would be needed when AI Studio can do so much. Let’s compare the two:
| Feature | Google AI Studio | Vertex AI |
| --- | --- | --- |
| Focus | Experimentation with generative AI models (specifically Gemini) | Unified platform for the end-to-end machine learning (ML) workflow |
| Target users | Data scientists and developers familiar with generative AI | Data scientists, developers, and business users |
| Ease of use | Simpler interface for quick exploration of generative AI | More complex and requires an understanding of ML concepts |
| Control | Less control over model training and deployment | Offers more control and customization |
| Integration | Limited integration with other Google Cloud services | Integrates seamlessly with BigQuery and other GCP services |
| MLOps features | Limited | Includes features for model monitoring, versioning, and management |
| Cost | Free tier available | Pay-as-you-go pricing |
While AI Studio is user-friendly and offers a free tier, Vertex AI steps in for more complex tasks that require enhanced capabilities. AI Studio bridges the chasm between AI users and AI developers, with the latter traditionally needing an in-depth understanding of the underlying technology. AI Studio provides a neat insight into what AI can achieve without requiring much setup.