Create Artistic Effect by Stylizing Image Background — Part 3: Android Implementation

Written by George Soloupis

This is part 3 of an end-to-end TensorFlow Lite tutorial, written as a team effort, on how to combine multiple ML models to create artistic effects by segmenting an image and then stylizing the image background with neural style transfer. (Part 1 | Part 2 | Part 3)

Now that we have the models from the previous post, we are ready to implement them in an Android app. We will walk through how to combine multiple ML models to create artistic effects by segmenting an image and then stylizing its background with neural style transfer. The implementation uses the following features:

  • Combination of different TensorFlow ML models inside an Android application.
  • TensorFlow Lite Support Library for bitmap loading.
  • TensorFlow Lite Task Library for the segmentation procedure.
  • ML Model Binding for the style transfer operation.

The code can be found in this GitHub repository, where you will find details on the TensorFlow Lite models and the mobile implementations.

First, let's visualize the finished application to understand the use case:

We can see that the camera is used to take a picture; then, on the second screen, the segmentation procedure takes place in the top ImageView and the style is applied to the background in the bottom ImageView.

This project was a great opportunity to use CameraX. This is a new Jetpack library that lets developers control a device's camera while focusing on compatibility across devices, going back to API level 21 (Lollipop). Some of the features of CameraX are:

  • It is backward compatible to Android 5.0 / Lollipop (API 21), which covers at least 90% of the devices on the market.
  • Under the hood, it uses and leverages the Camera2 API. It provides the same consistency as the Camera1 API via the Camera2 legacy layer, and it fixes a lot of issues across devices.
  • It also supports advanced features such as Portrait, HDR, and Night mode (provided your device supports them).
  • CameraX also introduces use cases, which let you focus on the task you need to get done instead of spending time handling device-specific quirks. A few of them are Preview, Image Analysis, and Image Capture.
  • CameraX doesn't need explicit start/stop calls in onResume() and onPause(); instead, it binds to the lifecycle of the view with the help of CameraX.bindToLifecycle().
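The lifecycle binding described above can be sketched roughly as follows, using the ProcessCameraProvider API from later CameraX releases. This is a minimal sketch, not the app's actual code: the fragment class, the PreviewView field, and the rebinding flow are our assumptions, and the exact CameraX API surface shifted between the alpha releases this post was written against and the stable ones.

```kotlin
// Sketch: bind the Preview and ImageCapture use cases to the view's lifecycle.
class CameraFragment : Fragment() {

    private lateinit var imageCapture: ImageCapture

    private fun startCamera(previewView: PreviewView) {
        val cameraProviderFuture = ProcessCameraProvider.getInstance(requireContext())
        cameraProviderFuture.addListener({
            val cameraProvider = cameraProviderFuture.get()

            // Preview use case: streams camera frames into the on-screen PreviewView.
            val preview = Preview.Builder().build().also {
                it.setSurfaceProvider(previewView.surfaceProvider)
            }

            // ImageCapture use case: takes the still picture we will later segment.
            imageCapture = ImageCapture.Builder().build()

            // Unbind any previous use cases, then bind to this view's lifecycle;
            // CameraX starts and stops the camera for us, no onResume()/onPause() calls.
            cameraProvider.unbindAll()
            cameraProvider.bindToLifecycle(
                viewLifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA,
                preview, imageCapture
            )
        }, ContextCompat.getMainExecutor(requireContext()))
    }
}
```

The point of the sketch is the last call: because the use cases are bound to the lifecycle owner, there is no camera start/stop bookkeeping anywhere else in the fragment.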

You can also:

  • Create a video recorder app using CameraX.
  • Use Image Analysis to perform computer vision and ML tasks; you implement the Analyzer interface to run on each and every frame.

For more info you can refer to the documentation and the official example.

Task Library

After a picture is taken, the second screen appears and the segmentation procedure runs immediately. The bitmap generated by the camera is loaded with the help of the TensorFlow Lite Android Support Library. This library is designed to help process the input and output of TensorFlow Lite models, and it makes the TensorFlow Lite Interpreter (or, in our example, the TensorFlow Lite Task Library) easier to use. The Task Library is a new set of tools that helps developers integrate ML models into their apps, and it is really simple to use.

Here are the steps to set up the Task Library for segmentation:

  • First, make sure the TensorFlow Lite model contains metadata.
  • Create an assets folder under the app module and place the .tflite model file there; in our case, “lite-model_deeplabv3_1_metadata_2.tflite”.
  • Then add the Gradle dependency:
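The dependency goes in the module-level build.gradle. The artifact name is the Task Library's vision package; the version shown is from the era of this post, so check the TensorFlow Lite release notes for the current one:

```groovy
dependencies {
    // TensorFlow Lite Task Library (vision tasks: segmentation, detection, etc.)
    implementation 'org.tensorflow:tensorflow-lite-task-vision:0.1.0'
}
```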

Follow along to see how image segmentation, a procedure that used to be difficult and tedious to code, is now executed in a few lines:

With just two lines of code we get the segmentation result:
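Those two lines look roughly like this (a sketch, assuming the model file from the assets folder and a bitmap coming from the camera; variable names are ours):

```kotlin
// Load the segmenter once; the model's metadata tells the Task Library
// how to pre-process the input and post-process the output.
val imageSegmenter = ImageSegmenter.createFromFile(
    context, "lite-model_deeplabv3_1_metadata_2.tflite"
)

// Wrap the camera bitmap with the Support Library and run segmentation.
val results = imageSegmenter.segment(TensorImage.fromBitmap(bitmap))
```

The result is a list of Segmentation objects holding the category mask, from which the person/background mask can be read directly.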

Mind-blowing, right? Just recall the tedious procedure of loading, resizing, and converting a bitmap to a ByteBuffer before passing it to the Interpreter, and you will want to start using the TensorFlow Lite Android Support Library and Task Library immediately!

Style Transfer with ML Model Binding

When we get the output of the segmentation procedure, a RecyclerView presents different styles to choose from, so we can apply one to the background of the original image. For the style transfer procedure we use the ML Model Binding option, where there is no need to manually create the assets folder and place the .tflite model there. You can simply import the model via Android Studio: File -> New -> Other -> TensorFlow Lite Model. It automatically takes care of adding the Gradle dependency, generating a class for loading the model and running inference, and then provides you with code snippets for interacting with the generated class. The models can be found here and here.

Code for style transfer with ML Model Binding:
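In outline, the generated classes are used as follows. This is a hedged sketch: the class names (StylePredict, StyleTransfer) and the output getter names are hypothetical, because ML Model Binding derives them from your .tflite filenames and tensor metadata; Android Studio shows the exact snippet when you import each model.

```kotlin
// Hypothetical generated classes; real names come from your model filenames.
val predictModel = StylePredict.newInstance(context)
val transferModel = StyleTransfer.newInstance(context)

// 1. Run the style prediction model on the chosen style image
//    to compute the style bottleneck vector.
val styleImage = TensorImage.fromBitmap(styleBitmap)
val bottleneck = predictModel.process(styleImage).styleBottleneckAsTensorBuffer

// 2. Run the style transfer model on the content (background) image
//    with that bottleneck to produce the stylized bitmap.
val contentImage = TensorImage.fromBitmap(contentBitmap)
val stylizedBitmap = transferModel
    .process(contentImage, bottleneck)
    .styledImageAsTensorImage
    .bitmap

// Release interpreter resources when done.
predictModel.close()
transferModel.close()
```

Note that both models consume TensorImage objects straight from the Support Library; no manual resizing or ByteBuffer handling appears anywhere.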

Note again that with ML Model Binding and models with metadata there is no need to convert bitmaps to ByteBuffers, a step that is sometimes hard to debug. We simply load them with the TensorFlow Support Library and feed them directly to the models.
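The final step is compositing: keep the person's pixels from the original picture and take the background pixels from the stylized picture, guided by the segmentation mask. The helper below is a minimal pure-Kotlin sketch of that idea (the function name and array-based signature are ours, not from the app); the Int values stand for ARGB pixels as returned by Bitmap.getPixels(), so the helper itself has no Android dependency.

```kotlin
// Composite two images of equal size: where the mask marks a "person" pixel,
// take it from the original; everywhere else, take the stylized background.
fun composite(original: IntArray, stylized: IntArray, personMask: BooleanArray): IntArray {
    require(original.size == stylized.size && original.size == personMask.size) {
        "original, stylized and mask must have the same number of pixels"
    }
    return IntArray(original.size) { i ->
        if (personMask[i]) original[i] else stylized[i]
    }
}

fun main() {
    // Three-pixel toy example: only the middle pixel belongs to the person.
    val original = intArrayOf(0xFF111111.toInt(), 0xFF222222.toInt(), 0xFF333333.toInt())
    val stylized = intArrayOf(0xFFAAAAAA.toInt(), 0xFFBBBBBB.toInt(), 0xFFCCCCCC.toInt())
    val mask = booleanArrayOf(false, true, false)
    val out = composite(original, stylized, mask)
    // Middle pixel survives from the original, the rest comes from the style.
    println(out.contentEquals(
        intArrayOf(0xFFAAAAAA.toInt(), 0xFF222222.toInt(), 0xFFCCCCCC.toInt())
    ))
    // prints: true
}
```

In the app the mask would come from the segmenter's category mask (person class vs. everything else) and the pixel arrays from the camera bitmap and the style transfer output.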

The output is outstanding!


Conclusion

The latest libraries from TensorFlow simplify ML procedures enormously. Operations that used to be hard to code are now simple enough to be executed by users with minimal knowledge of machine learning. In addition, this is a great example of combining different ML models inside a single application. Could the output image, with the person over a stylized background, be generated without machine learning? I doubt it!

Thanks to Margaret Maynard-Reid for reviewing the post. Thanks to Khanh LeViet and Sayak Paul from the TFLite team for their technical support.

Written by

I am a pharmacist turned Android developer. Right now I am a member of Google's TensorFlow Lite Machine Learning on Mobile OS Working Group.
