
ConstraintLayout 2.0: ImageFilterView

Whilst browsing through various online examples of the new ConstraintLayout 2.0, I stumbled upon ImageFilterView. It got my attention immediately, and I decided to investigate further.

An ImageFilterView allows you to perform some common image filtering techniques on an ImageView, including saturation, contrast, warmth and crossfade. 

If you have tried to implement these image filters before, you may have run into ColorMatrix. If you look at the source of ImageFilterView, you will see that these methods have simply been wrapped up behind a simpler API.

For example, if you would like to adjust the warmth, contrast or saturation, all you need to do is set a property on the ImageFilterView:
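As a sketch, setting these directly in the layout might look like the following (the drawable name and values are illustrative; in the 2.0 alpha the class lived under the `android.support.constraint` package, later moved to `androidx.constraintlayout.utils.widget`):

```xml
<!-- warmth, contrast and saturation all default to 1.0 (no change) -->
<android.support.constraint.utils.ImageFilterView
    android:id="@+id/imageFilterView"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/sunset"
    app:warmth="1.3"
    app:contrast="1.2"
    app:saturation="2.0" />
```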

You can also access this programmatically, so you could add a SeekBar to control these values.

There is also the ability to crossfade between two different images using the crossfade method defined on ImageFilterView. This allows you to merge two images together.
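In XML, the crossfade is driven by the altSrc and crossfade attributes (drawable names below are illustrative): a crossfade of 0 shows only src, 1 shows only altSrc, and values in between blend the two.

```xml
<android.support.constraint.utils.ImageFilterView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/day_photo"
    app:altSrc="@drawable/night_photo"
    app:crossfade="0.5" />
```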

If you are looking for a quick way to add some basic image effects, ImageFilterView is definitely something to consider. It is quick to use and cheap to execute, since it is backed by ColorMatrix, which is applied on the GPU (and not the CPU) when hardware acceleration is enabled.

Here is an example of ImageFilterView in action:

Realtime Image Processing with ImageFilterView

The downside to using this approach is that you are not in full control of the exact pixel values that are going to be used, which could be problematic if you are developing an image editing application.

Overall, I’m really excited about the ImageFilterView class! I hope it is the start of some awesome image effects offered by the Android team.

Check out the ConstraintLayout demo repository for the code used in the above example.

Follow me on Twitter for more.

Building a Custom Machine Learning Model on Android with TensorFlow Lite

Building a custom TensorFlow Lite model sounds really scary. As it turns out, you don’t need to be a Machine Learning or TensorFlow expert to add Machine Learning capabilities to your Android or iOS app.

One of the simplest ways to add Machine Learning capabilities is to use the new ML Kit from Firebase recently announced at Google I/O 2018. 

ML Kit is a set of APIs provided by Firebase that offer Face Detection, Barcode Scanning, Text Recognition, Landmark Detection and Image Labelling. Some of these APIs have an on-device mode, which enables you to use the features without worrying about whether the user has an internet connection.

ML Kit is great for the common use cases described above, but what if you have a more specific use case? For example, you want to be able to classify different kinds of candy boxes, or differentiate between different potato chip packets. This is where TensorFlow Lite comes in.

Nik Naks are a popular South African brand of Cheese Puffs

What is TensorFlow Lite?

TensorFlow Lite is TensorFlow’s solution for lightweight models on mobile and embedded devices. It allows you to run a trained model on device. On Android, it can also make use of hardware acceleration via the Android Neural Networks API.

How do I train my own custom model?

There are a few steps we need to take in order to build our own custom TensorFlow Lite model.

6 Steps to Retrain a Mobile Image Classifier

Training a TensorFlow model can take a long time and require a large corpus of data. Luckily, there is a way to make this process shorter that does not require gigabytes of images or tons of GPU processing power.

Transfer Learning is the process of using an already trained model and retraining it to produce a new model.

In this example, we will take the MobileNet_V1 model and retrain it on our own set of images.

This example is an adaptation of these two codelabs (1 and 2) and this talk from Yufeng Guo.

Prerequisites:

We need to install TensorFlow in order to run this example. You will also need to make sure Pillow is installed.
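A minimal install, assuming a Python environment with pip available (the codelabs of the time targeted TensorFlow 1.x, so you may want to pin whichever version the repository's docs recommend):

```shell
# Install TensorFlow and the Pillow imaging library
pip install tensorflow pillow
```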

If the installation of TensorFlow doesn’t work, follow the instructions here.

Clone the following repository and cd into the directory:

Step 1: Gather Training Data

For this part of the process, because we don’t have a large set of data to work with, taking a video recording of each chip packet works well enough for our use case. In each video, we need to capture the chip packet from different angles and, if possible, under different lighting conditions.

Here is an example of a video taken of a chip packet:

We would need a video of each packet of chips that we want to identify. 

Step 2: Convert Training Data into useful images

Once we have our videos from the previous step, we need to convert these into images. Using FFMPEG (a command-line tool for video processing), we can batch convert a video into images by running this command for each video, substituting the name of the mp4 file and the folder and image name:
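A sketch of the FFMPEG invocation (the file and folder names are illustrative; fps=2 grabs two frames per second, and %04d numbers the output images):

```shell
# Turn niknaks.mp4 into niknaks/niknaks_0001.jpg, niknaks_0002.jpg, ...
mkdir -p niknaks
ffmpeg -i niknaks.mp4 -vf fps=2 niknaks/niknaks_%04d.jpg
```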

Step 3: Folders of Images 

Once you have cut all your videos up into images, make sure you have a single folder containing all the training data. Inside it, group the related images into labelled subfolders, one folder per label (the previous step’s output already gives you this layout). It should look something like this:
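With one labelled subfolder per chip packet (the names below are illustrative), the layout would be:

```
training_data/
├── niknaks/
│   ├── niknaks_0001.jpg
│   ├── niknaks_0002.jpg
│   └── ...
├── fritos/
│   └── ...
└── doritos/
    └── ...
```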

Folder Structure for retraining TensorFlow Lite Model 

Step 4: Retrain the Model with the new images 

Once we’ve got our training data, we need to retrain the MobileNet_V1 model with our new images. We run the scripts.retrain Python script from the folder we checked out in the prerequisites, with our new training data passed as the image_dir. This step produces a retrained_graph.pb file.
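Assuming the checkout from the prerequisites is the codelabs' tensorflow-for-poets-2 repository, the invocation looks roughly like this (folder names and step count are illustrative):

```shell
# Retrain MobileNet on our labelled image folders; run from the repo root
python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --architecture=mobilenet_0.50_224 \
  --output_graph=retrained_graph.pb \
  --output_labels=retrained_labels.txt \
  --image_dir=training_data
```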

Step 5: Optimise the Model for Mobile Devices

Once we are done retraining our model, we need to optimise the file to run on mobile devices. TOCO, the “TensorFlow Lite Optimizing Converter”, is a tool provided by the TensorFlow library that optimises the graph to run on mobile devices.

We pass our new retrained_graph.pb file that we created from the previous step, into this function. 
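With TensorFlow 1.x of the time, the TOCO invocation looked roughly like this (flag names changed between releases, so check your version’s --help; the input/output array names match the MobileNet retrain output):

```shell
# Convert the retrained graph to a mobile-optimised .tflite file
toco \
  --input_file=retrained_graph.pb \
  --output_file=chips_optimized_graph.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT
```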

After running this step, we have a chips_optimized_graph.tflite file and the labels stored in a .txt file.

Side note: this step honestly took me a while to get working. I ran into a lot of issues and ended up having to dive deep into the TensorFlow libraries, building the whole of TensorFlow from source just to be able to run TOCO. 🤷🏻‍ Apparently a tool is coming soon to the Firebase Console that will help developers optimise their models for Android without having to build TensorFlow from source. If you are also struggling, I suggest reading more in the codelab here.

Step 6: Embed .tflite file into App or distribute via ML Kit on Firebase

Now open the android folder from the checked-out repository in Android Studio, then build and run the project. Once it is open, navigate to the class called ImageClassifier. Inside it, there are two fields you need to update to point at the new TensorFlow Lite model we created.

The MODEL_PATH and LABEL_PATH fields need to be updated with the names of the new files you created. Place these files inside the assets folder of the app.

Step 7: Profit 🤑

Once we have retrained our model to our needs and embedded the new model in our app, we can run the app locally and see if it detects the correct chip packets. Here is an example of it working below:

Things to consider

  • If you need to update your model, you would need to ship a new app update and hope that people download it. Another way to do this, without requiring an app update, is to host the model on Firebase. Have a read here for more information on how to do this. 
  • TensorFlow Mobile is the older version of TensorFlow for Android and other mobile devices. Make sure any tutorial you are following uses the new TensorFlow Lite and not TensorFlow Mobile.

Hopefully, this inspires you to train your own Image Classifier and ship some cool features in your apps! Find me on Twitter @riggaroo.

References:

On-Device Machine Learning: TensorFlow for Android https://youtu.be/EnFyneRScQ8

Teaching High School Girls about the Different Careers in Software Engineering

Yesterday I was invited to speak at St. Mary’s Diocesan School for Girls in Pretoria about Software Engineering and the different aspects of my everyday job. I was really excited to share my story with them, because when I was in high school we didn’t have this kind of opportunity. We had a Career Expo, but nothing like this: it involved a bunch of stands in the school hall with people handing out brochures. I remember walking around and being way too scared to talk to anyone; I collected a few brochures and still had no clue what I wanted to do with my life.

Google Developer Launchpad Build SSA – Nairobi and Cape Town Events

I was lucky enough to be invited to speak in Nairobi and Cape Town this past week for the Google Developer Launchpad Build Series events.

The theme this year was Firebase. The event was a huge success and I had the best time! I gave a talk about Firebase Remote Config and Test Lab. Here are the slides from my talk:

Android – Reduce the size of your APK files

If there is one thing that I hate, it is apps that are HUGE downloads for really simplistic functionality.

40MB for an app that just accesses some messages and has hardly any images, what is it doing??

I have recently had quite an obsession with trying to reduce my app size, and have managed to shave off 6MB with a few optimisations (Yay right?!!!! 😃). I thought I would share some tips on how to reduce your Android APK file size:

  1. Use ProGuard: this will obfuscate your code and reduce the app size. [1]
  2. Enable the following with gradle:
    • minifyEnabled – this will get rid of unused code.
    • shrinkResources – this will get rid of unused resources such as layouts or strings that are not referenced in your app.
  3. Make use of split APKs. This is especially useful if you are using native libraries. The native libraries can be quite large and making a user download an x86 library when their device is ARMv7 is pointless. Strip that out. [2]
  4. Check your images. If there are any large bitmaps, chances are you can reduce the size without losing much detail in the image. If possible, use JPEGs instead of PNGs as they are generally smaller.[3]
  5. Consider using Vector Drawables instead of PNGs for every density bucket. This will reduce the number of files needed and the images won’t degrade in quality.[4]
  6. Don’t use images if you don’t need to. Gradient backgrounds, shapes or colours can be achieved with XML instead.[5]
  7. Consider the libraries you use. If a library is massive and you are only using one or two functions you should find an alternative smaller option. Look through your code to see if there are any unused JAR files or unused code and remove that too.
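Tips 1–3 can be sketched in the module’s build.gradle (the ABI list and ProGuard file names below are illustrative; tailor them to your app):

```groovy
android {
    buildTypes {
        release {
            minifyEnabled true        // strip unused code (tip 2)
            shrinkResources true      // strip unused resources (tip 2)
            proguardFiles getDefaultProguardFile('proguard-android.txt'),
                          'proguard-rules.pro'   // obfuscate/shrink (tip 1)
        }
    }
    splits {
        abi {
            enable true               // one APK per ABI instead of one fat APK (tip 3)
            reset()
            include 'armeabi-v7a', 'x86'
            universalApk false
        }
    }
}
```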

Be careful when applying the changes listed above: test thoroughly afterwards, as code and resource shrinking can strip resources or code that is actually used (for example, anything only referenced via reflection) and break parts of your app.

What do you do to reduce the size of your APKs?

Links:
  1. ProGuard Documentation
  2. Split APKs
  3. PNG vs JPG vs SVG
  4. Vector Asset Studio
  5. Drawable Resources