Bake a cake with TensorFlow — Android part

George Soloupis
4 min read · Jul 25, 2023

Written by George Soloupis ML and Android GDE.

This blog post is a continuation of part 1, where we previously showcased the usage of an e-Nose for data collection and monitoring of the cake baking procedure using TensorFlow. In this part, we will delve into the process of machine learning inference inside an Android phone. Specifically, we will demonstrate how the string transmitted through Bluetooth is converted to floats, which are then fed into the TensorFlow Lite interpreter. This seamless integration allows for real-time predictions and analysis on the Android device, taking the e-Nose’s capabilities to the next level.

One of the most user-friendly and capable Android applications for connecting your ESP32 to your Android phone is this particular one. Written entirely in Java, it lists the available devices on its first screen, and a single tap on the desired device connects your Android phone to the ESP32. From there you can send and receive messages from your Android device, providing efficient and convenient remote control over the ESP32 and making it a great tool for various applications.

Inside the TerminalFragment.java file, you can find functions for sending data to and receiving data from the ESP32 module. Here we will focus on the process of receiving the messages transmitted by the board. The ESP32 is programmed to send all the floats collected from the various sensors as a single comma-separated string. Here’s an example of the transmitted data:

String receivedData = "0.001, 0.0071,  0.0462,  0.066729,  0.002547, 0.002528, 0.00292, 0.009678, 0.007478, 0.030769, 0.013094, 0.021364, 0.008834, 0.015785, 0.02529, 0.022349,  0.024583, 0.042495, 0.013057, 0.011628, 0.003466, 0.003048, 0.002265, 0.0051, 0.009261, 0.003978, 0.003117";

Upon receiving the data, the Android application then parses this string and converts it into individual float values, enabling seamless integration with the TensorFlow Lite interpreter for real-time inference and analysis.
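Before parsing, the raw bytes delivered by the app’s Bluetooth serial callback first have to be decoded into a String. A minimal sketch of that step (the class and method names below are illustrative, not the terminal app’s actual API):

```java
import java.nio.charset.StandardCharsets;

public class SerialPayload {
    // Decode the raw bytes delivered by the Bluetooth serial callback into
    // the comma-separated sensor string, dropping any trailing newline.
    static String toSensorString(byte[] data) {
        return new String(data, StandardCharsets.UTF_8).trim();
    }
}
```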

String string = receivedData;
// Split the comma-separated payload into individual tokens.
Pattern pattern = Pattern.compile(",");
String[] floats = pattern.split(string);
float[] floatList = new float[floats.length];
for (int i = 0; i < floats.length; i++) {
    // trim() removes the whitespace left around each token by the split.
    floatList[i] = Float.parseFloat(floats[i].trim());
}

The floatList array contains float values that are not yet ready to be fed into the Interpreter. We have to normalize the data exactly as we did in part 1 when we trained the model. The per-sensor mean and standard deviation have to be used for that purpose, and the formula is:

normalized_float = (value - mean) / std

To check the mean and standard deviation values for each of the sensors in the Python notebook, you can refer to cell 28, which contains the relevant information.
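In the app these statistics can live in two parallel arrays alongside a small helper that applies the formula above. The numbers below are placeholders; the real per-sensor values come from cell 28 of the notebook and must match your own training run:

```java
public class SensorNormalizer {
    // Placeholder statistics: one entry per sensor. Replace these with the
    // actual mean/std values printed in cell 28 of the training notebook.
    static final float[] MEAN_ARRAY = {0.01f, 0.02f /* ... 26 entries in total */};
    static final float[] STD_ARRAY  = {0.05f, 0.04f /* ... 26 entries in total */};

    // Apply the same standardization that was used at training time.
    static float normalize(float value, float mean, float std) {
        return (value - mean) / std;
    }
}
```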

Additionally, to explore the input and output arrays expected by the Interpreter, you can use the netron.app tool. This user-friendly tool lets you inspect the model’s architecture and visualize the flow of data through the layers, giving valuable insight into the input format the Interpreter expects for accurate inference and the output format it produces. Uploading the cake_model.tflite file to netron.app gives this visualization:

Model uploaded to netron

From this we obtain valuable information such as the input and output arrays’ dimensions:

Input array = [1][26][1]
Output array = [1][10]

The same arrays have to be created inside our application:

float[][] outputArray = new float[1][10];
float[][][] inputArray = new float[1][26][1];
// Normalize each sensor reading with its training-time statistics.
for (int i = 0; i < inputArray.length; i++) {
    for (int j = 0; j < inputArray[i].length; j++) {
        inputArray[i][j][0] = (floatList[j] - mean_array[j]) / std_array[j];
    }
}

Now that we have the arrays, we are ready to initialize the Interpreter. Check the documentation for the options developers have when using the TensorFlow Lite Interpreter: it can be bundled with the app or provided through Google Play services. For this implementation we went with the bundled option:

Interpreter interpreterPredict = getInterpreter(requireActivity(), OCR_MODEL, false);

private Interpreter getInterpreter(Context context, String modelName, boolean useGpu) throws IOException {
    Interpreter.Options tfliteOptions = new Interpreter.Options();
    tfliteOptions.setNumThreads(NUMBER_OF_THREADS);
    if (useGpu) {
        // Note: despite the parameter name, this enables the NNAPI delegate,
        // which may or may not run on the GPU depending on the device.
        tfliteOptions.setUseNNAPI(true);
    }
    return new Interpreter(loadModelFile(context, modelName), tfliteOptions);
}

private MappedByteBuffer loadModelFile(Context context, String modelFile) throws IOException {
    // Memory-map the .tflite file from the assets folder.
    AssetFileDescriptor fileDescriptor = context.getAssets().openFd(modelFile);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    MappedByteBuffer retFile = fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
    fileDescriptor.close();
    return retFile;
}

and feed the input array:

interpreterPredict.run(inputArray, outputArray);

After inference, the output array is populated with float values, and we find the index with the maximum value (argmax):

// Find the index of the largest score (argmax).
int biggestFloatIndex = 0;
for (int i = 1; i < outputArray[0].length; i++) {
    if (outputArray[0][i] > outputArray[0][biggestFloatIndex]) {
        biggestFloatIndex = i;
    }
}

That index is the class predicted by the model.
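To turn that index into something readable, a parallel array of class names can be kept in the app, one entry per output neuron. The labels below are hypothetical placeholders; the real names are whatever classes were used when training the model in part 1:

```java
public class CakeStageLabels {
    // Hypothetical class names, one per output neuron ([1][10] output).
    // Replace with the actual labels used during training in part 1.
    static final String[] LABELS = {
        "stage_0", "stage_1", "stage_2", "stage_3", "stage_4",
        "stage_5", "stage_6", "stage_7", "stage_8", "stage_9"
    };

    // Map the argmax index to its human-readable label.
    static String labelFor(int index) {
        return LABELS[index];
    }
}
```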

Conclusion
In summary, the blog post discussed the connection of an electronic nose (e-Nose) to an Android phone for real-time machine learning inference. The Android application seamlessly connected with the ESP32 module via Bluetooth, enabling remote control and communication. The integration of TensorFlow Lite interpreter facilitated real-time predictions. With the e-Nose and Android phone working in harmony, the cake baking process was efficiently monitored, offering insights for improvements and enhancing overall convenience.


George Soloupis

I am a pharmacist turned Android developer and machine learning engineer. I am currently a senior Android developer at Invisalign and an ML & Android GDE.