[ML Story] Machine Learning on the browser: TF Lite meets TF.js

Nitin Tiwari
Google Developer Experts
6 min read · Oct 30, 2023


In the fast-paced world of technology, few fields have evolved as swiftly as ML over the past few years. Frameworks such as TensorFlow and PyTorch have played a significant role in enabling the deployment of ML models on web browsers and various edge devices, including mobile phones, smartwatches, Arduino, Raspberry Pi, and more.

While Web ML is not new, it has surely picked up pace to solve some exciting problems. This article introduces the combined capabilities of TF Lite and TF.js within the TensorFlow ecosystem to simplify the process of serving models.

TF.js-TFLite API

I’m pretty sure you’re excited to unfold what’s in store for you in this article. So, fasten your seat belts, and let’s get started with the help of an example.

Real-time Waste Detection on the browser

During our final year of engineering, my team and I worked on a critical environmental problem. Recognizing it as an opportunity to #SolveForEnvironment, we built this project (since extended and improved) to detect different categories of waste in the surroundings using object detection techniques.

Through this initiative, the goal is to make a meaningful impact in addressing environmental issues with a bit of AI involved. I have open-sourced the project implementation.

About the data

The dataset used to train the object detection model was manually collected from different sources: scraping the web, Google Images, and pictures taken with cameras and mobile phones. It covers five categories of waste – open litter, plastic waste, biodegradable waste, medical waste, and overflowing dustbin – with approximately 100 images per category. Of course, for better results, you should build a larger and more balanced dataset.

Here’s an overview of the dataset.

Waste Dataset

On a lighter note, today marks a rare occasion where I’d be happy if the famous saying “Garbage in, garbage out” proves correct (hope you caught the joke).

The model

The object detection model was trained by applying transfer learning, with EfficientDet-Lite2 as the base model, using the TF Lite Model Maker.

If you wish to train a TF Lite object detection model for your custom dataset, please feel free to refer to the Custom Object Detection on Android using TF Lite GitHub repository.

Note: TF Lite Model Maker is no longer supported on Google Colab and has been replaced by MediaPipe Model Maker. Stay tuned for hands-on tutorials and examples on MediaPipe in my upcoming blogs.

The pre-trained model to detect waste is readily available on my GitHub repository.

Execute the following command to clone the repository.

git clone https://github.com/NSTiwari/TFJS-TFLite-Object-Detection.git

Inspecting the model architecture

Netron is a free, web-based tool for quickly visualizing neural network architectures and various model properties.

Upload the waste.tflite model file on Netron to view its architecture.

Input and subsequent hidden layers of the model
Output layers of the model

Our model has a fairly deep neural network architecture featuring numerous hidden layers that perform operations such as Conv2D, MaxPool2D, quantization, and more.

Understanding the entire architecture might be overwhelming, but feel free to explore some of the hidden layers. For now, let’s just concentrate on the input and output layers.

Input layer:
serving_default_images:0 is the input tensor node of size 1 x 448 x 448 x 3, i.e., a single RGB image of dimensions 448 x 448.

Output layer:
TFLite_Detection_PostProcess is the output node with four tensors, which denote the following (a short sketch of how to unpack them follows this list):

  • StatefulPartitionedCall:31782: Bounding box coordinates
  • StatefulPartitionedCall:32783: Labels of the detected object(s)
  • StatefulPartitionedCall:33784: Confidence of the detected object(s)
  • StatefulPartitionedCall:34785: No. of objects detected
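
As a quick reference, here's a minimal sketch of how the i-th detection could be unpacked from these four tensors once they've been read into flat arrays. The [ymin, xmin, ymax, xmax] box layout is assumed from the standard TFLite_Detection_PostProcess op, so verify it against your own model in Netron.

// Minimal sketch: unpack the i-th detection from the flattened output arrays.
// Assumes normalized [ymin, xmin, ymax, xmax] boxes (standard TFLite_Detection_PostProcess layout).
function getDetection(boxes, classes, scores, i) {
  const [ymin, xmin, ymax, xmax] = boxes.slice(i * 4, i * 4 + 4); // normalized to [0, 1]
  return {
    box: { ymin, xmin, ymax, xmax },
    classId: classes[i],   // index into your label list
    score: scores[i],      // detection confidence
  };
}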

Deploying the TF Lite model on the browser

With the introduction of the TensorFlow.js-TFLite API, it has now become extremely easy to deploy and use TF Lite models directly on the browser.

In contrast to the conventional approach of converting a TF Lite model into the TF.js format, which involves sharding the model weights into several chunks and transforming the model graph into a JSON file, the TF.js-TFLite API allows seamless loading of the TF Lite model directly in the browser.

Getting TensorFlow.js and TF.js-TFLite
Include the following script tags in your HTML file to load TF.js and the TF.js-TFLite API from the CDN.

<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-cpu"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-tflite@0.0.1-alpha.8/dist/tf-tflite.min.js"></script>

Loading the TF Lite model
To load the model, use the following method in your JavaScript file:

const MODEL_PATH = "./model/waste.tflite";

// loadTFLiteModel returns a Promise, so await it (inside an async function).
objectDetector = await tflite.loadTFLiteModel(MODEL_PATH);
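
Since loadTFLiteModel returns a Promise, it's convenient to wrap the loading in an async init function that runs once when the page loads. Here's a minimal sketch, assuming the script tags above are already included; the warm-up pass is optional but keeps the first real frame from being slow.

// Minimal sketch: load the model once and keep a reference for later inference.
let objectDetector;

async function initModel() {
  objectDetector = await tflite.loadTFLiteModel(MODEL_PATH);

  // Optional warm-up pass with a dummy input matching the model's input shape.
  const dummy = tf.zeros([1, 448, 448, 3], 'int32');
  tf.dispose(objectDetector.predict(dummy));
  dummy.dispose();
}

initModel();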

Interpreting the output tensors
The model accepts input either from static images or from a real-time webcam/video feed, runs detection on it, and produces the output tensors accordingly.

// Preprocess the frame: resize to 448 x 448, add a batch dimension, and cast to int32.
let input = tf.image.resizeBilinear(tf.browser.fromPixels(frame), [448, 448]);
input = tf.cast(tf.expandDims(input), 'int32');

// Run the inference and get the output tensors.
let result = await objectDetector.predict(input);

// Interpret the output tensors to get box coordinates, classes, scores, and no. of detections respectively.
let boxes = Array.from(await result[Object.keys(result)[0]].data());
let classes = Array.from(await result[Object.keys(result)[1]].data());
let scores = Array.from(await result[Object.keys(result)[2]].data());
let n = Array.from(await result[Object.keys(result)[3]].data());

The output tensors are then post-processed to render the detections by drawing bounding boxes and labels on the frame.
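
Here's a rough sketch of that post-processing step. The canvas context ctx and the labels array are assumptions used for illustration (the repository's code is the actual reference), and the boxes are assumed to be normalized [ymin, xmin, ymax, xmax] values from TFLite_Detection_PostProcess.

// Minimal sketch: draw detections above a confidence threshold on a <canvas> overlay.
const SCORE_THRESHOLD = 0.5;

function drawDetections(ctx, boxes, classes, scores, count, labels) {
  for (let i = 0; i < count; i++) {
    if (scores[i] < SCORE_THRESHOLD) continue;

    // Scale normalized coordinates to the canvas size.
    const [ymin, xmin, ymax, xmax] = boxes.slice(i * 4, i * 4 + 4);
    const x = xmin * ctx.canvas.width;
    const y = ymin * ctx.canvas.height;
    const w = (xmax - xmin) * ctx.canvas.width;
    const h = (ymax - ymin) * ctx.canvas.height;

    // Bounding box and label.
    ctx.strokeStyle = 'red';
    ctx.lineWidth = 2;
    ctx.strokeRect(x, y, w, h);
    ctx.fillStyle = 'red';
    ctx.fillText(`${labels[classes[i]]}: ${scores[i].toFixed(2)}`, x, y > 10 ? y - 5 : 10);
  }
}

With the arrays from the previous snippet, a call would look like drawDetections(ctx, boxes, classes, scores, n[0], labels), since the detection count tensor holds a single value.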

Putting it together
You can find the complete code for both static and real-time detection in the GitHub repository you previously cloned.

Running the application

To run the application on your computer, you’ll need to set up a local HTTP server. While there are various web extensions designed for this task, Python offers a built-in feature to launch a local server on your machine.

Open your terminal/command prompt and navigate to the directory where the index.html file is located.

cd TFJS-TFLite-Object-Detection

# For static detection.
cd "Static Detection"

# For real-time detection from the webcam.
cd "Real-time Detection"

# Launch the HTTP server (use python3 -m http.server on macOS/Linux).
py -m http.server

The HTTP server starts locally on the default port 8000. Open your web browser and enter localhost:8000 in the address bar.

Static Detection
Here’s an illustration demonstrating waste detection through the upload of static images.

Waste Detection in static images

Real-time detection
In the video below, witness the TF Lite object detection model in action, successfully detecting various categories of waste in real-time on the browser.

Real-time Waste Detection from webcam on the browser

Try out a LIVE demo

Looks exciting? Why not give it a try yourself? The project is available as a LIVE demo on CodeSandbox.

That wraps up this blog post. While there’s still much to be done with this project, I hope you found it engaging. If you enjoyed my efforts, I’d greatly appreciate your support by starring the GitHub repository and helping me spread the word.

I welcome your feedback. Should you have any questions, don’t hesitate to connect with me on LinkedIn. Thanks for reading.
