Using MediaPipe inside Android for live background blurring

George Soloupis
4 min read · Mar 12, 2024


Written by George Soloupis, ML and Android GDE.

This is a blog post demonstrating how to blur the background of a live camera feed using MediaPipe's segmentation library. Demo applications usually just showcase a mask (generated by an ML model and converted to a simple bitmap) on top of the live camera feed. Here we take MediaPipe's basic segmentation sample application and extend it to blur the background.

Why Blurring Background Matters

Adding a blur effect to the background can significantly enhance the visual appeal of applications such as video conferencing, live streaming, or virtual events. By removing distractions and maintaining focus on the subject, background blurring helps elevate the overall user experience.

MediaPipe’s solution

The MediaPipe Image Segmenter task lets you partition images into distinct regions based on predefined categories. This proves invaluable for pinpointing specific objects or textures within images and then applying visual effects such as background blurring. Notably, the task offers a range of models trained for segmenting people and their attributes within image data.

Whether you're working with single images or continuous video streams, the task runs a machine learning (ML) model over the image data and yields a list of segmented regions that delineate objects or areas within the image, according to the model you select.

Code

You can find plenty of details about this high-level API in the official documentation. Let's go over the key points to run the task inside Android:

1. First, add the dependency for MediaPipe's vision tasks:
implementation("com.google.mediapipe:tasks-vision:0.10.10")

2. Create the task

val baseOptions = baseOptionsBuilder.build()
val optionsBuilder = ImageSegmenter.ImageSegmenterOptions.builder()
    .setRunningMode(runningMode)
    .setBaseOptions(baseOptions)
    .setOutputCategoryMask(true)
    .setOutputConfidenceMasks(false)

// In live-stream mode, results are delivered asynchronously through listeners
if (runningMode == RunningMode.LIVE_STREAM) {
    optionsBuilder.setResultListener(this::returnSegmentationResult)
        .setErrorListener(this::returnSegmentationHelperError)
}

val options = optionsBuilder.build()
imagesegmenter = ImageSegmenter.createFromOptions(context, options)
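The snippet assumes a baseOptionsBuilder that was configured earlier with the segmentation model. A minimal sketch, assuming the selfie segmentation model asset name used in MediaPipe's samples:

import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.core.Delegate

// Sketch of the base options assumed above; "selfie_segmenter.tflite" is the
// asset name from MediaPipe's samples, so swap in the model your app bundles.
val baseOptionsBuilder = BaseOptions.builder()
    .setModelAssetPath("selfie_segmenter.tflite")
    .setDelegate(Delegate.GPU) // or Delegate.CPU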

3. Use the ImageProxy from CameraX for live streaming

fun segmentLiveStreamFrame(imageProxy: ImageProxy, isFrontCamera: Boolean) {
    if (runningMode != RunningMode.LIVE_STREAM) {
        throw IllegalArgumentException(
            "Attempting to call segmentLiveStreamFrame while not using RunningMode.LIVE_STREAM"
        )
    }

    val frameTime = SystemClock.uptimeMillis()
    val bitmapBuffer = Bitmap.createBitmap(
        imageProxy.width, imageProxy.height, Bitmap.Config.ARGB_8888
    )

    // Copy the RGBA frame into the bitmap; use() closes the ImageProxy when done
    imageProxy.use {
        bitmapBuffer.copyPixelsFromBuffer(imageProxy.planes[0].buffer)
    }

    // Rotate the frame so it matches the model's expected orientation,
    // and mirror it horizontally for the front camera
    val matrix = Matrix().apply {
        postRotate(imageProxy.imageInfo.rotationDegrees.toFloat())
        if (isFrontCamera) {
            postScale(
                -1f,
                1f,
                imageProxy.width.toFloat(),
                imageProxy.height.toFloat()
            )
        }
    }

    rotatedBitmap = Bitmap.createBitmap(
        bitmapBuffer,
        0,
        0,
        bitmapBuffer.width,
        bitmapBuffer.height,
        matrix,
        true
    )

    val mpImage = BitmapImageBuilder(rotatedBitmap).build()

    // Run inference asynchronously; results arrive in the result listener
    imagesegmenter?.segmentAsync(mpImage, frameTime)
}
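For context, this is roughly how a CameraX ImageAnalysis use case can feed frames into the function above. The RGBA_8888 output format matters, since copyPixelsFromBuffer expects pixels matching the ARGB_8888 bitmap; backgroundExecutor and segmenterHelper are assumed names:

import androidx.camera.core.ImageAnalysis

// Sketch: forward each camera frame to the segmenter.
// backgroundExecutor and segmenterHelper are assumed to be set up elsewhere.
val imageAnalyzer = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
    .build()
    .also { analysis ->
        analysis.setAnalyzer(backgroundExecutor) { imageProxy ->
            segmenterHelper.segmentLiveStreamFrame(imageProxy, isFrontCamera = true)
        }
    }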

4. Use the mask on top of the camera feed with an OverlayView

fun setResults(
    originalBitmap: Bitmap? = null,
    byteBuffer: ByteBuffer,
    outputWidth: Int,
    outputHeight: Int
) {
    if (originalBitmap == null) {
        return
    }

    // Turn the category mask into pixels: opaque black where the
    // (modulo-20) category value is 0, transparent everywhere else
    val pixels = IntArray(byteBuffer.capacity())
    for (i in pixels.indices) {
        val index = byteBuffer.get(i).toUInt() % 20U
        val color = if (index.toInt() == 0) Color.BLACK else Color.TRANSPARENT
        pixels[i] = color
    }

    val mask = Bitmap.createBitmap(
        pixels,
        outputWidth,
        outputHeight,
        Bitmap.Config.ARGB_8888
    )
    // ... (the rest of the method is elided)
}
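The bridge between segmentAsync and setResults is the result listener registered on the options builder. A hedged sketch of what returnSegmentationResult can look like; ByteBufferExtractor is MediaPipe's helper for reading an MPImage's bytes, while overlayView and the retained rotatedBitmap reflect one possible wiring:

import com.google.mediapipe.framework.image.ByteBufferExtractor
import com.google.mediapipe.framework.image.MPImage
import com.google.mediapipe.tasks.vision.imagesegmenter.ImageSegmenterResult

// Sketch of the live-stream result listener registered when building the task
private fun returnSegmentationResult(result: ImageSegmenterResult, image: MPImage) {
    val categoryMask = result.categoryMask().get()
    // Extract the raw category bytes and hand them to the OverlayView
    val byteBuffer = ByteBufferExtractor.extract(categoryMask)
    overlayView.setResults(rotatedBitmap, byteBuffer, categoryMask.width, categoryMask.height)
}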

Blur effect

The OverlayView is an Android custom View that overlaps the CameraX live Preview. The idea is to take the original bitmap that was fed to the model and the mask that was produced, and combine them into the final blurred bitmap drawn by the OverlayView. The combination is done with a PorterDuff transfer mode.

private fun cropBitmapWithMask(original: Bitmap?, mask: Bitmap?, style: String): Bitmap? {
    if (original == null || mask == null) {
        return null
    }

    val w = original.width
    val h = original.height
    if (w <= 0 || h <= 0) {
        return null
    }

    // Create an output bitmap the same size as the original
    val cropped = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(cropped)

    // Draw the original bitmap
    canvas.drawBitmap(original, 0f, 0f, null)

    // Draw the mask: DST_OUT keeps the already-drawn destination
    // only where the source (the mask) is transparent
    val paint = Paint(Paint.ANTI_ALIAS_FLAG)
    paint.xfermode = PorterDuffXfermode(PorterDuff.Mode.DST_OUT)
    canvas.drawBitmap(mask, 0f, 0f, paint)
    paint.xfermode = null

    // Apply the selected background effect
    return when (style) {
        "gray.jpg" -> androidGrayScale(cropped)
        "blur1.jpg" -> blurImage(cropped, 5)
        "blur2.jpg" -> blurImage(cropped, 10)
        "blur3.jpg" -> blurImage(cropped, 15)
        "sepia.jpg" -> setSepiaColorFilter(cropped)
        else -> cropped // If no style is applied, return the cropped bitmap
    }
}
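The androidGrayScale and setSepiaColorFilter helpers referenced above implement the non-blur styles. As an illustration, grayscale can be achieved with a zero-saturation ColorMatrix; this is a sketch, not necessarily the repository's exact implementation:

import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.ColorMatrix
import android.graphics.ColorMatrixColorFilter
import android.graphics.Paint

// Grayscale sketch: redraw the bitmap through a zero-saturation color filter
private fun androidGrayScale(src: Bitmap): Bitmap {
    val result = Bitmap.createBitmap(src.width, src.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(result)
    val paint = Paint().apply {
        colorFilter = ColorMatrixColorFilter(ColorMatrix().apply { setSaturation(0f) })
    }
    canvas.drawBitmap(src, 0f, 0f, paint)
    return result
}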

Above you can see that the app supports different effects for the background, such as grayscale and sepia. Here we are going to talk about blur. One of the fastest ways to blur the bitmap is with RenderScript intrinsics. Unfortunately RenderScript is deprecated, but it is still available on older devices. Other options include RenderEffect, OpenGL and the Vulkan API. You can find a detailed explanation in this blog post and a sample to build and experiment with here.

Here we are going to use the RenderScript intrinsics replacement toolkit, which is faster than the RenderScript CPU implementation. It is based on a C++ implementation, and you can find the code here.

The RenderScript implementation, wrapped here as the blurImage helper used above:

private fun blurImage(input: Bitmap, number: Int): Bitmap {
    return try {
        val rsScript = RenderScript.create(context)
        val alloc = Allocation.createFromBitmap(rsScript, input)
        val blur = ScriptIntrinsicBlur.create(rsScript, Element.U8_4(rsScript))

        // Set different values for different blur effects (valid range 1..25)
        blur.setRadius(number.toFloat())
        blur.setInput(alloc)

        // Copy the input bitmap and blur into the copy
        val result = Bitmap.createBitmap(input)
        val outAlloc = Allocation.createFromBitmap(rsScript, result)
        blur.forEach(outAlloc)
        outAlloc.copyTo(result)

        rsScript.destroy()
        result
    } catch (e: Exception) {
        // Fall back to the unblurred frame if RenderScript fails
        input
    }
}

The toolkit replacement is a single call:

Toolkit.blur(bitmap, radius)
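With the toolkit in place, the blurImage helper above collapses to one line. A sketch, assuming the library's com.google.android.renderscript package and keeping the radius inside the 1..25 range that ScriptIntrinsicBlur also enforces:

import com.google.android.renderscript.Toolkit

// blurImage rewritten with the replacement toolkit; no RenderScript context needed
private fun blurImage(input: Bitmap, number: Int): Bitmap =
    Toolkit.blur(input, number.coerceIn(1, 25))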

Complete code can be found at this GitHub repository.

Blurring the background.

Conclusion

This blog post illustrated how to use MediaPipe's segmentation library to implement live background blurring in applications like video conferencing or live streaming. By leveraging MediaPipe's Image Segmenter task, developers can partition images into distinct regions and apply background effects. The code snippets demonstrated how to integrate this functionality into Android applications, using CameraX for live streaming and an OverlayView for rendering. Additionally, the post showed how to implement the blur effect with the RenderScript intrinsics replacement toolkit, which is faster than the deprecated RenderScript CPU implementation.


George Soloupis

I am a pharmacist turned Android developer and machine learning engineer. Right now I'm a senior Android developer at Invisalign, and an ML & Android GDE.