Gemini API and Flutter: Practical, AI-driven apps with Google AI tools

Flutter
16 May 2024 · 14:58

Summary

TL;DR: In this video, Eric and Ander from the Dart and Flutter teams explore Generative AI, demonstrating how it can transform app development. They share their journey of creating a cooking app using Google AI Studio and the Gemini API, which generates recipes from photos of ingredients. The talk covers prompt design, integrating the API with Flutter, and enhancing the app's user experience with AI. They showcase how developers can leverage AI to build functional apps without extensive server-side coding.

Takeaways

  • 🧑‍💻 Eric Windmill is an engineer on the Dart and Flutter teams, and Ander Dobo is a product manager on the Flutter team.
  • 🤖 Large Language Models (LLMs) are sophisticated AI systems that power generative AI, capable of creating content like text, images, code, and music.
  • 🛠️ Generative AI is rapidly becoming a practical tool for developers, with new products being released and improved frequently.
  • 🧐 It can be challenging for developers to identify the right AI tools and understand their practical applications in app development.
  • 🚀 The speakers built a cooking app using the Gemini API, showcasing the ease of integrating AI into a Flutter app with the help of the Google AI SDK for Dart.
  • 🔍 Google AI Studio is a browser-based IDE for experimenting with Google's generative models and is instrumental in the development process.
  • 📝 Prompt design is a critical process in AI, involving creating and refining prompts to guide the AI model to produce the desired output.
  • 🍲 The cooking app allows users to take a photo of ingredients, and the app generates a recipe, bypassing the need for a pre-existing database.
  • 🔑 To utilize the Gemini API, developers need to obtain an API key from Google AI Studio and integrate it into their app projects.
  • 🔄 The app's user interface allows for dynamic input, which is interpolated into the prompt sent to the Gemini API to generate personalized recipes (see the sketch after this list).
  • 📚 The speakers emphasize the importance of safety considerations and adherence to Google's safety guidance when working with AI models.
  • 📈 They also highlight the potential for continuous improvement of the app, including making the AI character more interactive through chat features.
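
As referenced above, here is a minimal sketch of that form-to-prompt interpolation in Dart. The field names and prompt wording are illustrative, not the app's actual ones:

```dart
// Sketch: building the prompt from form values (hypothetical fields).
String buildPrompt({
  required List<String> pantryStaples,
  required String dietaryRestriction,
  required String cuisine,
}) {
  return '''
Recommend a recipe based on the ingredients in the attached photo.
Assume I also have these basic ingredients: ${pantryStaples.join(', ')}.
The recipe must respect this dietary restriction: $dietaryRestriction.
I am in the mood for $cuisine cuisine.
''';
}

void main() {
  print(buildPrompt(
    pantryStaples: ['olive oil', 'salt', 'flour'],
    dietaryRestriction: 'vegetarian',
    cuisine: 'Italian',
  ));
}
```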

Q & A

  • What are Large Language Models (LLMs)?

    -Large Language Models (LLMs) are sophisticated artificial intelligence systems trained on large sets of data, capable of generating new content such as text, images, code, or music.

  • How can generative AI transform application creation and interaction?

    -Generative AI, powered by LLMs, has the potential to transform how we create and interact with applications by enabling the creation of new content and providing more dynamic, personalized user experiences.

  • What challenges do developers face when starting with generative AI tools?

    -Developers may find it challenging to identify the right tools, understand how to get started with them, and determine the practical applications of AI in app development.

  • How did Eric and Ander overcome their lack of experience with AI in app development?

    -Eric and Ander overcame their lack of experience by using tools like the Google AI SDK for Dart, which allowed them to quickly get started and build an app using the Gemini API.

  • What is the purpose of Google AI Studio?

    -Google AI Studio is a browser-based IDE for prototyping with Google's generative models, useful for experimenting with different prompts when building features that use the Gemini API.

  • What is the main functionality of the cooking app built by Eric and Ander?

    -The cooking app allows users to take a photo of ingredients they have on hand, and the app uses generative AI to generate a recipe based on those ingredients, eliminating the need for manual entry and a pre-existing recipe database.

  • What is prompt design in the context of using the Gemini API?

    -Prompt design is the process of creating and tweaking prompts given to a large language model like Gemini to achieve the desired type and quality of output.

  • Why is it important to consider safety when working with large language models like Gemini?

    -It is crucial to consider safety to ensure the app provides appropriate and safe content, following guidelines such as avoiding harmful or dangerous information and adhering to food safety practices.

  • How did Ander address unexpected results from the Gemini model in the cooking app?

    -Ander addressed unexpected results by refining the prompt, adding instructions for the model to avoid returning recipes when the image doesn't contain edible items, and incorporating safety measures as per Google's guidelines.

  • What steps are involved in setting up the Gemini API for a Flutter app?

    -The steps include obtaining an API key from Google AI Studio, adding the Google generative AI package to the Flutter app, setting up the API with the necessary code, and making requests to the Gemini API using a properly formatted prompt (a code sketch follows this Q & A section).

  • How did Eric enhance the user experience of the cooking app?

    -Eric enhanced the user experience by adding a more interesting personality to Chef Noodle, the app's character, and by structuring the data returned by the Gemini API to make it more reliable and easier to parse.

  • What are the future plans for the cooking app according to Ander?

    -Ander plans to make Chef Noodle more interactive by incorporating the Gemini API's chat feature, allowing for a more conversational user experience in future versions of the app.
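
As a rough end-to-end sketch of the setup steps referenced above, assuming the google_generative_ai package; the model name, environment-variable name, and prompt text are illustrative:

```dart
// Sketch: setting up and calling the Gemini API with the
// google_generative_ai package. Add the package first:
//   flutter pub add google_generative_ai
import 'dart:io';

import 'package:google_generative_ai/google_generative_ai.dart';

// Passed at launch, e.g.: flutter run --dart-define=API_KEY=your-key
const apiKey = String.fromEnvironment('API_KEY');

Future<void> main() async {
  // Create the model object that knows how to talk to the Gemini API.
  final model = GenerativeModel(model: 'gemini-1.5-pro', apiKey: apiKey);

  // Send a simple text-only prompt and print the generated response.
  final response = await model.generateContent([
    Content.text('What recipe can I make with rice, eggs, and spinach?'),
  ]);
  stdout.writeln(response.text);
}
```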

Outlines

00:00

🤖 Introduction to Generative AI and the Cooking App Project

In the first paragraph, Eric Windmill and Ander Dobo introduce themselves as an engineer on the Dart and Flutter teams and a product manager on the Flutter team, respectively. They explain the concept of Large Language Models (LLMs) and generative AI, highlighting its potential to revolutionize application creation and interaction. Eric discusses the rapid development of AI tools for developers and the challenges of selecting the right ones. Ander emphasizes the difficulty in identifying practical AI applications for app developers. The speakers share their experience with the Google AI SDK for Dart and the process of building a cooking app using the Gemini API, which leverages generative AI to create recipes from photos of ingredients, addressing the 'cold start problem' and eliminating the need for manual ingredient entry or a pre-existing recipe database.

05:05

🔧 Experimentation with Google AI Studio and Prompt Design

The second paragraph delves into the initial steps taken by the team to explore the capabilities of generative AI using Google AI Studio, a browser-based IDE for prototyping with Google's generative models. The team experimented with different prompts to understand the potential applications of AI, such as creating smart chatbots or inspiring users with images. Eric expresses his interest in cooking apps that suggest recipes based on available ingredients but points out their limitations. The team then discusses the proof of concept for their cooking app, which involved using the Gemini model to generate recipes from images of ingredients. Ander explains the process of prompt design, which involves creating and refining prompts to achieve the desired output from the LLM. He also addresses safety considerations and the incorporation of Google's safety guidance into their prompts.
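
The talk doesn't show the final prompt verbatim; the sketch below illustrates a free-form prompt carrying the kinds of instructions Ander describes (serving size, nutrition, non-edible images, safety measures). The wording is ours, not the app's:

```dart
// Illustrative free-form prompt (not the app's actual prompt text).
const recipePrompt = '''
Recommend a recipe I can make using the items in the attached photo.
Include how many people the recipe serves and nutritional information
per serving.

Only suggest recipes that contain real, edible ingredients, and follow
food safety practices, such as ensuring poultry is fully cooked.
List any ingredients that are potential allergens.
If the photo does not contain any edible items, do not return a recipe.
''';
```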

10:06

📝 Setting Up the Gemini API and Enhancing the Cooking App

In the third paragraph, the focus shifts to the technical setup and integration of the Gemini API into the cooking app. Ander outlines the process of obtaining an API key and setting up the Google generative AI package in the Flutter app. Eric demonstrates the app's functionality, which includes capturing a photo of ingredients and personalizing the recipe request through additional inputs. The app then sends a formatted prompt to the Gemini API, which generates a recipe in response. The paragraph also covers the steps for setting up the API key, adding the necessary package to the Flutter app, and writing the code to communicate with the Gemini API. Additionally, the team discusses enhancing the app's user experience by giving the app's character, Chef Noodle, a more engaging personality through updates to the prompt. Eric also addresses the challenges of structuring data returned by the Gemini API and how they were overcome by specifying the expected format in the prompt.
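
A sketch of that request path, assuming the google_generative_ai package's Content, TextPart, and DataPart types; the file path and prompt text are placeholders:

```dart
// Sketch: a multimodal request combining the text prompt and the
// user's ingredient photo.
import 'dart:io';

import 'package:google_generative_ai/google_generative_ai.dart';

Future<String?> requestRecipe(GenerativeModel model, String photoPath) async {
  // Read the photo the user took as raw bytes.
  final photoBytes = await File(photoPath).readAsBytes();

  // Attach the text prompt and the image to a single request.
  final response = await model.generateContent([
    Content.multi([
      TextPart('What recipe can I make using the items in this photo?'),
      DataPart('image/jpeg', photoBytes),
    ]),
  ]);
  return response.text; // The generated recipe, or null if blocked.
}
```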

Keywords

💡Large Language Models (LLMs)

Large Language Models, or LLMs, are advanced artificial intelligence systems that are trained on vast amounts of data. They are capable of understanding and generating human-like text based on the input they receive. In the context of the video, LLMs are the foundational technology behind generative AI, which is used to create new content such as recipes in the cooking app discussed. The script mentions that these models are sophisticated and can transform application creation and interaction.

💡Generative AI

Generative AI refers to artificial intelligence systems that can create new content, such as text, images, code, or music. It is powered by Large Language Models and represents a significant shift in AI capabilities. The video's theme revolves around the use of generative AI in application development, particularly in building a cooking app that generates recipes from images of ingredients.

💡Google AI SDK for Dart

The Google AI SDK for Dart is a software development kit that allows developers to integrate Google's AI capabilities into their Dart applications. In the video, Eric and Ander mention using this SDK to quickly get started with building an app that uses the Gemini API, showcasing how developers can leverage AI without needing extensive AI expertise.
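
A minimal sketch of pulling the package in and creating a model object; the model name 'gemini-1.5-pro' is an assumption based on the model used in the talk:

```dart
// One-time, from the project directory:
//   flutter pub add google_generative_ai
import 'package:google_generative_ai/google_generative_ai.dart';

// The GenerativeModel object is the SDK's entry point to the Gemini API.
final model = GenerativeModel(
  model: 'gemini-1.5-pro',
  apiKey: const String.fromEnvironment('API_KEY'),
);
```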

💡Gemini API

The Gemini API is a specific service within the Google AI suite that is used in the video's example to generate recipes from images of ingredients. It gives apps access to generative AI models that take inputs like images and text prompts and produce outputs like recipe suggestions. The API is central to the app's functionality and is a key component discussed in the video.

💡Prompt Design

Prompt design is the process of creating and refining the prompts given to a large language model to elicit the desired type and quality of output. In the video, Ander explains the importance of prompt design in ensuring that the Gemini model returns reasonable and delicious recipes based on the image of ingredients provided by the user.

💡Flutter

Flutter is an open-source UI software development kit created by Google for building natively compiled applications for mobile, web, and desktop from a single codebase. The video showcases the development of a cooking app using Flutter, which integrates the Gemini API to generate recipes, demonstrating Flutter's capabilities in building interactive and functional apps.

💡Multimodal

In the context of AI, multimodal refers to systems that can process and understand multiple types of input data, such as text, images, and audio. The Gemini 1.5 Pro model mentioned in the video is described as multimodal because it takes both text and images as inputs to generate text outputs, like recipes.

💡Safety Parameters

Safety parameters are settings within AI systems that are designed to prevent the generation of harmful or inappropriate content. The video discusses the importance of adjusting these parameters in the Gemini API to ensure that the generated recipes only contain real, edible ingredients and follow food safety practices.
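
The speakers kept the defaults, but the parameters can be adjusted when creating the model. A sketch assuming the Dart package's SafetySetting, HarmCategory, and HarmBlockThreshold names:

```dart
// Sketch: passing adjustable safety settings when creating the model.
import 'package:google_generative_ai/google_generative_ai.dart';

final model = GenerativeModel(
  model: 'gemini-1.5-pro',
  apiKey: const String.fromEnvironment('API_KEY'),
  safetySettings: [
    // Block dangerous content at a stricter threshold than the default.
    SafetySetting(HarmCategory.dangerousContent, HarmBlockThreshold.low),
    SafetySetting(HarmCategory.harassment, HarmBlockThreshold.medium),
  ],
);
```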

💡API Key

An API key is a unique code that identifies a caller to an application programming interface (API) service. In the video, the process of obtaining an API key for the Gemini API is described, which is necessary for developers to access and use the API's functionality within their applications.

💡Environment Variable

An environment variable is a dynamic-named value that can be set and read in the environment of a process. In the video, the use of an environment variable to pass the API key to the Flutter app is discussed. This is a common practice for securely managing sensitive information like API keys.
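
For example, the key can be read at compile time with String.fromEnvironment and supplied with the --dart-define flag; the variable name API_KEY follows the setup described in the video:

```dart
// Resolved at compile time; empty if the --dart-define was omitted.
//   flutter run --dart-define=API_KEY=your-key-here
const apiKey = String.fromEnvironment('API_KEY');

void main() {
  if (apiKey.isEmpty) {
    throw StateError('No API key: pass --dart-define=API_KEY=... at launch.');
  }
  // ...create the GenerativeModel with apiKey and run the app.
}
```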

💡JSON

JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy for humans to read and write and for machines to parse and generate. In the video, the speaker mentions deserializing the response from the Gemini API as JSON to structure the recipe data in a reliable way, which is crucial for the app's functionality.
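
A sketch of that deserialization step; the field names are hypothetical, since the talk doesn't show the exact schema the prompt requests:

```dart
// Sketch: parsing the Gemini response once the prompt pins down an
// explicit JSON format. Field names here are hypothetical.
import 'dart:convert';

class Recipe {
  final String title;
  final List<String> ingredients;
  final List<String> instructions;

  Recipe(
      {required this.title,
      required this.ingredients,
      required this.instructions});

  factory Recipe.fromJson(Map<String, dynamic> json) => Recipe(
        title: json['title'] as String,
        ingredients: List<String>.from(json['ingredients'] as List),
        instructions: List<String>.from(json['instructions'] as List),
      );
}

Recipe parseRecipe(String responseText) =>
    Recipe.fromJson(jsonDecode(responseText) as Map<String, dynamic>);
```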

Highlights

Introduction to Large Language Models (LLMs) and generative AI, emphasizing their potential to transform application creation and interaction.

Generative AI's capability to create new content such as text, images, code, and music.

The rapid development of AI tools for developers and the challenges in choosing the right ones.

The use of Google AI SDK for Dart to quickly build an app using the Gemini API.

A walkthrough of building a cooking app with generative AI as the backend.

Google AI Studio as a browser-based IDE for experimenting with Google's generative models.

The potential of generative AI in solving problems like building smart chatbots and inspiring users with images.

The innovative cooking app that generates recipes from photos of ingredients, eliminating the need for manual input and databases.

The proof of concept process using the Gemini models to ensure they can generate reasonable and delicious recipes from images.

The importance of prompt design in getting the desired output from a large language model.

The use of free-form prompts in Google AI Studio for open-ended instructions to the Gemini model.

The process of tweaking prompts to achieve consistently reasonable recommendations and useful information.

Incorporating safety considerations and following Google's safety guidance in the app development.

The setup process for the Gemini API in a Flutter app, including obtaining an API key and adding the Google generative AI package.

The implementation of dynamic input from the form into the prompt for the Gemini API in the Flutter app.

The idea of giving Chef Noodle a more interesting personality by updating the prompt.

The challenge of structuring data when working with the Gemini API and the solution of adding explicit formatting to the prompt.

The personalization of the cooking app experience with the ability to switch devices for hands-free recipe following.

The potential of the Gemini API for building AI-driven apps and the invitation to explore its use in Flutter or Dart apps.

Transcripts

play00:00

[MUSIC PLAYING]

play00:04

ERIC WINDMILL: Hi.

play00:05

I'm Eric.

play00:05

I'm an engineer on the Dart and Flutter teams.

play00:08

ANDER DOBO: And I'm Ander, and I'm a product

play00:10

manager on the Flutter team.

play00:12

Large Language Models, or LLMs, are

play00:15

sophisticated artificial intelligence systems

play00:18

trained on large sets of data.

play00:20

And generative AI is powered by LLMs.

play00:24

These are artificial intelligence models

play00:26

that can create new content such as text, images, code,

play00:32

or even music.

play00:34

Generative AI has the potential to transform how we create

play00:38

and interact with applications.

play00:40

ERIC WINDMILL: As a developer, you've probably seen and heard

play00:43

news about how quickly generative AI is

play00:46

becoming a tool you could use to build software.

play00:49

New AI products for developers are being released all the time,

play00:53

and those products are changing and improving fast.

play00:56

ANDER DOBO: It can be hard to know what the right tools are

play00:59

and how to get started with them.

play01:02

It's even hard to know what the practical applications of AI

play01:06

might be as an app developer.

play01:08

ERIC WINDMILL: We didn't have much experience building

play01:10

apps that use AI before preparing for this talk.

play01:13

But using tools like the Google AI SDK for Dart,

play01:16

we were able to get up and running

play01:18

and build an app that uses Gemini API in no time.

play01:23

In this talk, we're going to walk you

play01:25

through our journey of building a cooking app that uses

play01:28

generative AI as the back end.

play01:31

First, we'll talk about how we got started with generative

play01:34

AI using Google AI Studio.

play01:37

ANDER DOBO: Then I'll walk you through how

play01:39

you can get the most out of the Gemini API

play01:42

through a process called prompt design.

play01:45

ERIC WINDMILL: Finally, I'll show you

play01:46

how I use the Gemini API to enhance

play01:48

a real-world application.

play01:53

ANDER DOBO: When we started this project,

play01:55

we didn't really know what was possible

play01:57

when it came to generative AI.

play01:59

So our first step was to learn and experiment

play02:02

in Google AI Studio.

play02:05

Google AI Studio is a browser-based IDE

play02:08

for prototyping with Google's generative models.

play02:11

It's useful for experimenting with different prompts

play02:14

as you build a feature that uses the Gemini API.

play02:18

While experimenting in Google AI Studio,

play02:21

we started to realize how many problems

play02:23

we could solve by building Flutter

play02:25

apps that use generative AI, such as building

play02:28

a smart chatbot for users to have

play02:31

a natural conversation about a topic

play02:33

or using an image to inspire a user to make something.

play02:38

ERIC WINDMILL: And I'm a big fan of the cooking apps that tell me

play02:41

what recipes I can make based on the ingredients I already

play02:44

have on hand.

play02:45

But these types of apps can be cumbersome to use.

play02:48

It's time-consuming to use apps that

play02:51

require the user to manually type all the food

play02:54

items in their pantry every time they want to find a new recipe.

play02:58

And these apps can be difficult to build because they

play03:01

have the cold start problem.

play03:03

They rely on having a large database of recipes

play03:05

to be useful.

play03:07

But with generative AI, both of those problems go away.

play03:10

Using our new app, the user can just

play03:12

take a photo of some ingredients they want to use,

play03:15

and the app generates a recipe using that photo, which

play03:19

means there's no need to type each of the ingredients

play03:21

and there's no need for a pre-existing database.

play03:25

ANDER DOBO: We needed to do a proof of concept

play03:27

to make sure that the Gemini models are

play03:29

capable of taking an image of ingredients

play03:32

and returning a recipe that is both reasonable to make

play03:36

and delicious.

play03:37

This required a process of trial and error called prompt design.

play03:42

Prompt design is the process of creating

play03:45

and tweaking prompts given to a large language model

play03:48

to get the desired type and quality of output.

play03:52

The first decision I had to make is what type of prompt

play03:56

would fit our use case--

play03:58

free form, which is open-ended text;

play04:01

structured, which has a predefined format

play04:04

and often where you provide examples of requests

play04:07

and responses; or chat, which enables a user

play04:11

to have a natural ongoing conversation with a language

play04:15

model.

play04:16

I started with the most basic type of prompt

play04:19

in Google AI Studio, a free-form prompt.

play04:22

And I used the Gemini 1.5 Pro model.

play04:27

A free-form prompt is an open-ended instruction

play04:30

or a question you provide to a large language

play04:33

model like Gemini.

play04:35

It has no predefined structure and doesn't

play04:38

require you to give specific examples of requests

play04:41

and responses.

play04:43

The Gemini 1.5 Pro model is multimodal

play04:46

and takes text and images as inputs and outputs text.

play04:52

To start, I entered, what recipe can I

play04:56

make using the items in this photo,

play04:59

along with a photo of some food items that I took.

play05:04

And here's a result I got back in Google AI Studio.

play05:11

After a bit more experimentation,

play05:13

I found that for version one of our app,

play05:15

a free-form prompt was perfect.

play05:17

It let us quickly experiment, and it gave us

play05:20

good results for our use case.

play05:23

Next, I focused on tweaking the prompt

play05:26

to get consistently reasonable recommendations.

play05:30

I also instructed the Gemini model to provide useful information

play05:34

with the recipe, such as the number of people

play05:37

it will serve and nutritional information per serving.

play05:41

An example of something unexpected that I addressed

play05:44

was that the Gemini model returned a recipe even if the image didn't

play05:48

contain any edible items.

play05:52

So I added a line to the prompt instructing the Gemini model

play05:55

not to return a recipe in this scenario.

play05:59

It's crucial to be mindful of safety considerations

play06:02

when building your app and working with large language

play06:05

models like Gemini.

play06:06

Following Google's safety guidance,

play06:09

we incorporated several safety measures in our prompt.

play06:12

For example, in our case, we instruct

play06:15

the model to only provide recipes

play06:17

that contain real edible ingredients

play06:19

and to follow food safety practices like ensuring poultry

play06:23

is fully cooked.

play06:26

I updated the prompt by adding that the Gemini model should

play06:29

list ingredients that are potential allergens.

play06:32

Additionally, there are adjustable safety parameters

play06:35

for the Gemini API, such as for harassment or dangerous content.

play06:40

After reading up on each, I found

play06:42

that the default settings for these safety parameters

play06:44

were suitable for our app.

play06:47

We will continue to test and monitor for safety problems

play06:50

throughout the lifecycle of the app.

play06:53

This is the initial prompt I came up with for the app.

play06:57

Now I can save it and share it with Eric using the share

play07:02

feature in Google AI Studio so he can add

play07:05

the prompt to the Flutter app.

play07:10

ERIC WINDMILL: To start, let me show you the app that we built.

play07:14

When you open the app, the first thing the user sees

play07:16

is the chef, Chef Noodle, asking them which ingredients

play07:19

they want to use in a recipe.

play07:21

They provide this list of ingredients by taking a picture.

play07:24

Then the app has additional inputs that

play07:26

allow them to personalize their recipe request,

play07:29

such as buttons for common ingredients they may have,

play07:32

dietary restrictions, and cuisines they're in the mood

play07:34

for.

play07:35

When the form is filled out, the user

play07:37

presses Submit to request a new recipe from Chef Noodle.

play07:41

Behind the scenes, this form data

play07:43

is being interpolated into the prompt.

play07:45

So in the Flutter app, conceptually, the prompt

play07:47

looks more like this.

play07:49

The inputs from the form are inserted into the text prompt,

play07:52

and the images are attached.

play07:54

The prompt is then sent to the Gemini

play07:56

API, which generates a new recipe

play07:58

and returns it to the app.

play07:59

And that's the main functionality of the app.

play08:02

Now let's go through the steps taken

play08:04

to set up and start making requests to the Gemini API

play08:07

with our prompt.

play08:08

ANDER DOBO: First, you need to get an API key for the Gemini

play08:12

API.

play08:13

In Google AI Studio, click Get API Key

play08:17

in the left-hand navigation bar.

play08:19

Let's create the API key in a new project.

play08:22

This automatically creates the API key for you in Google Cloud

play08:26

and restricts the key to only be able to call the Gemini API.

play08:31

Alternatively, you can select an existing Google Cloud project

play08:36

if you already have one that you would like

play08:38

to associate your API key with.

play08:40

Now you can copy the key to use as you develop your app.

play08:45

If you don't set up billing, you can use the API free of charge

play08:49

up to specified rate limits.

play08:51

ERIC WINDMILL: Once you're set up with the API key,

play08:53

the next step is to add the Google generative AI

play08:56

package to your Flutter app.

play08:57

To do so, open your terminal and navigate

play09:00

to the directory of your Flutter project,

play09:02

then add the package with the pub add command.

play09:05

Next, add the code required to set up the Gemini

play09:08

API in a Flutter project.

play09:10

You can find the code needed to do this in the Getting Started

play09:13

docs at ai.google.dev.

play09:17

The setup code looks like this.

play09:19

I added this code in the init state method

play09:22

of the app's top level widget.

play09:24

This code is creating a new instance of a generative model

play09:27

object, which knows how to communicate with the Gemini API.

play09:33

The constructor for the generative model class

play09:35

expects the name of the Google LLM you're passing in,

play09:38

such as Gemini 1.5 Pro, as well as your API key.

play09:42

Finally, this code attempts to get your Gemini API key using

play09:46

the string from environment method, which is

play09:48

part of the Dart core library.

play09:50

This method expects that an environment variable

play09:52

called API key will be passed in when the app starts running.

play09:57

The simplest way to pass in the API key as an argument

play10:00

is to use the Dart define flag when you run the Flutter run

play10:03

command.

play10:03

And this works great for development.

play10:05

Now that the Gemini API is set up and the app is running,

play10:08

we can focus on adding the logic to the app that

play10:11

will make the request to the Gemini API with the prompt.

play10:16

To start, I looked at the documentation at ai.google.dev

play10:20

and found an example of the code I needed to add to my app.

play10:23

That example code looks like this.

play10:27

The most important part of this code

play10:28

is the generative model generate content method from the Google

play10:32

generative AI package.

play10:34

The generate content method is where you build

play10:37

the prompt for the Gemini API.

play10:39

It expects a list of content objects, which

play10:42

will be a list of content subtypes, text part, and data

play10:45

part.

play10:46

Text parts are used to pass in strings, and data

play10:49

parts are objects you can use to pass in files, such as images.

play10:54

Let's get back to our prompt, which currently looks like this.

play10:58

But, of course, our app has dynamic input from the form,

play11:01

so we need to update the prompt in the app

play11:03

to look more like this.

play11:07

To add this to the Flutter app, I

play11:08

copied the prompt text from AI Studio into a text part object

play11:12

and then replaced the specifics, like dietary restrictions,

play11:15

with values from the form the user fills out.

play11:18

Now, back on the main page of the app, when a user presses

play11:21

Submit Prompts, the app will generate

play11:24

a recipe that can be saved.

play11:25

Let's see it in action.

play11:27

[CAMERA CLICK]

play11:32

Great, it all works as expected.

play11:34

But I think we can do more with AI.

play11:36

Namely, I want to give Chef Noodle

play11:38

a more interesting personality.

play11:42

ANDER DOBO: Let's see what happens

play11:43

if we update the prompt by adding,

play11:45

you are a cat chef who travels around the world,

play11:48

and your travels inspire recipes.

play11:53

With this update to the prompt, let's reload

play11:56

and see what happens.

play11:58

And now Chef Noodle tells us something

play12:01

interesting with each recipe.

play12:07

ERIC WINDMILL: Lastly, I want to talk about structuring data when

play12:10

working with the Gemini API.

play12:12

By default, when we started, the Gemini API

play12:14

returned the recipe and all the accompanying data as Markdown.

play12:18

In the beginning, this was great,

play12:19

and parsing out a title, a list of ingredients,

play12:22

and a list of instructions was simple.

play12:24

But as our prompt became more complex

play12:26

and we were requesting more information like nutritional

play12:29

information and allergens, it became

play12:31

impossible to parse out the information reliably.

play12:35

But then I realized I was thinking with my pre-LLM brain.

play12:38

This isn't a problem that I need to solve with code.

play12:41

It's a problem I can solve in the prompt itself.

play12:44

So I added explicit formatting to the prompt.

play12:49

This mostly worked right away, but occasionally, the Gemini API

play12:52

would return different properties as different types.

play12:55

For example, sometimes ingredients

play12:57

would be a list of strings, and sometimes it

play12:59

would just be a long string.

play13:02

To solve this, I added the expected types to the prompt.

play13:05

And since that update, the responses

play13:07

have been in the expected format,

play13:08

and I've been able to reliably deserialize the response as JSON

play13:12

without throwing exceptions.

play13:16

And that's the app.

play13:17

Now I have a personal chef in my pocket,

play13:18

and I didn't have to build a database of recipes to get it.

play13:22

And at the end of a long day, I don't

play13:23

have to type a list of ingredients

play13:25

into my phone to find a recipe to make.

play13:27

I just snap a picture, and Chef Noodle figures it out for me.

play13:31

And, of course, this is a Flutter app.

play13:34

So, when it's time to start cooking,

play13:35

I can switch to my Pixel Tablet so it's easier

play13:37

to follow the recipe hands-free.

play13:41

As a Flutter developer, I like building interesting

play13:43

UIs. I find it fun to focus on the user experience

play13:47

and animations, and I don't find it

play13:48

fun to tinker with server-side logic.

play13:50

It's pretty incredible that using the Gemini API in Flutter,

play13:54

I was able to build an app that has real-world functionality,

play13:57

and I spent almost none of my coding time

play13:59

on building out a server and writing database queries.

play14:02

ANDER DOBO: As a product manager,

play14:04

I like continuously improving the product and experience

play14:07

for users.

play14:09

In the future, I'd like to make Chef Noodle more

play14:12

interactive with chat.

play14:13

I'm looking forward to using the Gemini API's chat feature

play14:16

to build that version of the app.

play14:19

Right now, you can check out this video's description

play14:22

for links to the GitHub repo for this app

play14:24

and some other useful resources.

play14:28

I hope you have seen the potential

play14:29

of building AI-driven apps with the Gemini API.

play14:33

And if you'd like to use the Gemini API in your Flutter

play14:37

or Dart apps, head to the Quickstart to get

play14:40

started with the Google AI SDK.

play14:43

We can't wait to see what you will build.

play14:45

[MUSIC PLAYING]


Related Tags
Generative AI, Flutter Apps, Gemini API, AI SDK, App Development, Prompt Design, AI Cooking, Recipe Creation, AI Personality, Data Parsing, User Experience