Object Detection App Using the Task Library

Learn to deploy a TF Lite object detection model to an Android app using the ObjectDetector API of the Task Library.


In this lesson, we walk through the process of building an Android app that allows users to select an image from their device storage and detect objects in it. To achieve this, we utilize a pretrained TF Lite model for object detection.

The app's MainActivity class provides the UI, handles user interactions such as picking an image from device storage, and integrates a pretrained object detection model with metadata (for example, a label file) to detect objects in the image.
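At the heart of this integration is the Task Library's ObjectDetector API. As a quick orientation, here is a minimal sketch of how a detector can be created from a model bundled in the app's assets. The file name and the option values are illustrative assumptions, and the `org.tensorflow:tensorflow-lite-task-vision` Gradle dependency is assumed to be in place.

```kotlin
import android.content.Context
import org.tensorflow.lite.task.vision.detector.ObjectDetector

// Minimal sketch: build a Task Library ObjectDetector from a model in assets.
// "model.tflite", the result limit, and the score threshold are assumptions.
fun createDetector(context: Context): ObjectDetector {
    val options = ObjectDetector.ObjectDetectorOptions.builder()
        .setMaxResults(5)         // keep at most five detections per image
        .setScoreThreshold(0.5f)  // drop detections below 50% confidence
        .build()
    return ObjectDetector.createFromFileAndOptions(context, "model.tflite", options)
}
```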

Let’s get started.

Android app

The MainActivity class of the app has various fields, such as the following (declared in the sketch after this list):

  • loadImageButton

  • imageView

  • selectedImage

  • resultTextView

  • pickImage
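A rough sketch of how these fields might be declared is shown below. The view types and the use of the Activity Result API for pickImage are assumptions based on the field names, and ImageDecoder requires API level 28 or higher.

```kotlin
import android.graphics.Bitmap
import android.graphics.ImageDecoder
import android.net.Uri
import android.widget.Button
import android.widget.ImageView
import android.widget.TextView
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {

    private lateinit var loadImageButton: Button
    private lateinit var imageView: ImageView
    private lateinit var resultTextView: TextView
    private var selectedImage: Bitmap? = null

    // Launcher that opens the system picker and returns the chosen image's Uri.
    private val pickImage =
        registerForActivityResult(ActivityResultContracts.GetContent()) { uri: Uri? ->
            uri ?: return@registerForActivityResult
            // Decode into a software bitmap so TensorImage can read its pixels later.
            selectedImage = ImageDecoder.decodeBitmap(
                ImageDecoder.createSource(contentResolver, uri)
            ) { decoder, _, _ -> decoder.allocator = ImageDecoder.ALLOCATOR_SOFTWARE }
            imageView.setImageBitmap(selectedImage)
            // Detection would be triggered here (see runTensorFlowLiteObjectDetection below).
        }

    // onCreate() and the detection/drawing methods are sketched further below.
}
```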

It also has methods such as the following, each sketched later in this lesson:

  • onCreate(): Sets up the UI and handles user interactions.

  • runTensorFlowLiteObjectDetection(): Runs the TF Lite object detection model on the selected image.

  • drawBoxTextDetections(): Draws bounding boxes and label text on the image.
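Continuing the MainActivity sketch from above, onCreate() might bind the views and wire the button to the image picker. The layout and view IDs (activity_main, loadImageButton, and so on) are assumed names.

```kotlin
// Continuing the MainActivity sketch: bind views and launch the picker on tap.
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)

    loadImageButton = findViewById(R.id.loadImageButton)
    imageView = findViewById(R.id.imageView)
    resultTextView = findViewById(R.id.resultTextView)

    loadImageButton.setOnClickListener {
        // "image/*" asks the system picker for any image in device storage.
        pickImage.launch("image/*")
    }
}
```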

The Detection class represents a detected object with its bounding box and categories, while the Category class represents a label and score for a detected object.
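Based on that, runTensorFlowLiteObjectDetection() might look roughly as follows: it runs the detector on the chosen bitmap, reads each Detection's bounding box and first Category, and hands the results to the UI and to the drawing method. The createDetector() helper and the BoxText holder come from the other sketches in this lesson, and the lesson's actual method body may differ.

```kotlin
import org.tensorflow.lite.support.image.TensorImage

// Sketch of runTensorFlowLiteObjectDetection(), inside the same MainActivity.
private fun runTensorFlowLiteObjectDetection(bitmap: Bitmap) {
    val detector = createDetector(this)                  // helper from the earlier sketch
    val detections = detector.detect(TensorImage.fromBitmap(bitmap))

    // Unpack each Detection: its bounding box plus the first (highest-scoring) category.
    val boxTexts = detections.map { detection ->
        val best = detection.categories.first()
        BoxText(detection.boundingBox, "${best.label} %.2f".format(best.score))
    }

    // Show the labels as text and the boxes drawn onto the image.
    resultTextView.text = boxTexts.joinToString("\n") { it.text }
    imageView.setImageBitmap(drawBoxTextDetections(bitmap, boxTexts))
}
```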

The BoxText data class stores a detection result, pairing a bounding box (RectF) with its associated text.
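Putting it together, the data class and the drawing method might look roughly like this; the paint styles and text placement are illustrative choices.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.RectF

// Assumed shape of the box-plus-text holder described above.
data class BoxText(val box: RectF, val text: String)

// Sketch of drawBoxTextDetections(): draw each bounding box and its label text
// onto a mutable copy of the bitmap and return the annotated copy.
fun drawBoxTextDetections(bitmap: Bitmap, detections: List<BoxText>): Bitmap {
    val output = bitmap.copy(Bitmap.Config.ARGB_8888, true)
    val canvas = Canvas(output)
    val boxPaint = Paint().apply {
        style = Paint.Style.STROKE
        strokeWidth = 4f
        color = Color.RED
    }
    val textPaint = Paint().apply {
        textSize = 36f
        color = Color.RED
    }
    for (d in detections) {
        canvas.drawRect(d.box, boxPaint)
        canvas.drawText(d.text, d.box.left, d.box.top - 8f, textPaint)
    }
    return output
}
```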
