Category: android

Android Canvas Drawing: Useful Graphics Classes & Operations 🧪

Drawing on an Android Canvas can be quite overwhelming: there are many different classes and concepts to understand when drawing something. If you haven’t already read part one of this series, make sure to read it here.

In this post, we will cover some classes available in the Android Framework that can make your life a bit easier when working with a canvas.

Rect / RectF ◼️

A rectangle class that stores four values: left, top, right and bottom. These can be used for drawing directly on a canvas or simply for storing the sizes of objects that you want to draw.

The difference between the Rect and RectF classes is that RectF stores float values, whereas the Rect class stores integers.

val rect = RectF(100.0f, 200.0f, 300.0f, 400.0f)

With KTX, there are many graphics extension functions that you should take advantage of, including some for Rect and RectF. One such extension is the destructuring declaration, which gets components out of a Rect object:

val rect = RectF(100.0f, 200.0f, 300.0f, 400.0f)
val (left, top, right, bottom) = rect
// left = 100.0f, top = 200.0f, right = 300.0f, bottom = 400.0f

You can perform other operations with a Rect class; for instance, you can union two Rects together. This takes the points from both Rects and returns a bigger Rect that contains both Rects inside it. There are extension functions for this operation, but it is also possible without them:

val rect = RectF(100.0f, 200.0f, 300.0f, 400.0f)
val otherRect = RectF(50.0f, 400.0f, 150.0f, 500.0f)
rect.union(otherRect)
// rect = RectF(50.0, 200.0, 300.0, 500.0)
// alternatively, with the KTX operators:
val combinedRect = rect + otherRect
// or:
val combinedRect2 = rect or otherRect
// combinedRect = RectF(50.0, 200.0, 300.0, 500.0)

There are other operations you can perform on a Rect, such as and, xor and or.
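
To make these operations concrete, here is a minimal, framework-free sketch (a plain Kotlin data class rather than the real Rect/RectF classes, so the edge arithmetic is visible) of what union and intersection compute:

```kotlin
data class Box(val left: Float, val top: Float, val right: Float, val bottom: Float)

// Union: the smallest box containing both inputs
// (what Rect.union and the KTX `+` / `or` operators compute).
fun union(a: Box, b: Box) = Box(
    minOf(a.left, b.left), minOf(a.top, b.top),
    maxOf(a.right, b.right), maxOf(a.bottom, b.bottom)
)

// Intersection: the overlapping region, assuming the boxes overlap
// (what the KTX `and` operator computes).
fun intersect(a: Box, b: Box) = Box(
    maxOf(a.left, b.left), maxOf(a.top, b.top),
    minOf(a.right, b.right), minOf(a.bottom, b.bottom)
)
```

Running union(Box(100f, 200f, 300f, 400f), Box(50f, 400f, 150f, 500f)) yields Box(50f, 200f, 300f, 500f), matching the Rect example above.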

Point / PointF 👉🏽

Stores an x and y coordinate which represents a “point” on a canvas. Point stores integer values, whereas PointF stores floating point values.

val point = PointF(200.0f, 300.0f)

If you are using KTX, there are extension functions on the Point and PointF classes that make working with points much easier, for instance operator overloads that let you add and subtract two points.

val start = PointF(100.0f, 30.0f)
val end = PointF(20.0f, 10.0f)
val difference = start - end
val together = start + end
// together = PointF(120.0f, 40.0f)

There are also destructuring declarations for these classes, so we can easily get the x and y coordinates out of a Point:

val start = PointF(100.0f, 30.0f)
val end = PointF(20.0f, 10.0f)
val (x, y) = start - end
// x = 80.0f y = 20.0f

Matrix 🔢

A 3 by 3 matrix that stores information which can be used to transform a canvas. A Matrix can store the following kinds of transformation information: scale, skew, rotation and translation.

Below is an example of using a Matrix to transform a Bitmap that is drawn on a Canvas.

Examples of Matrix transformations

To use a Matrix when drawing, you can do the following:

val customMatrix = Matrix()
customMatrix.postRotate(20f)

// in onDraw()
canvas.withMatrix(customMatrix) {
    drawBitmap(bitmap, null, rect, paint)
}

The above code will draw a bitmap on a canvas rotated by 20 degrees. There are a few other functions on a Matrix that we can use, such as scaling, rotating and skewing. The great part about using a Matrix over performing Canvas transformations manually is that the Matrix holds cumulative information about the transformations applied to it.

If you translate the Matrix, rotate, scale and translate again, the final translation values will differ from the original values. If you were using the normal Canvas translate and scale functions, you would need to calculate that cumulative result manually.

preRotate vs postRotate vs setRotate

You might be wondering what postRotate means, considering that there are other methods such as setRotate and preRotate on a Matrix. These three methods all do slightly different things:

setRotate — Completely resets the current Matrix and applies the rotation, thus losing any information you may already have stored in your Matrix.

preRotate — The rotation will be applied before whatever the current Matrix contains.

postRotate — The rotation will be applied after whatever the current Matrix contains.
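
Since this ordering trips people up, here is a framework-free sketch (plain Kotlin functions standing in for android.graphics.Matrix) of the difference between pre- and post-concatenation when mapping a point:

```kotlin
typealias Transform = (Pair<Float, Float>) -> Pair<Float, Float>

// Suppose the current matrix already contains a translation.
val current: Transform = { (x, y) -> Pair(x + 100f, y) }

// A 90 degree rotation about the origin: (x, y) -> (-y, x).
val rotate90: Transform = { (x, y) -> Pair(-y, x) }

// preRotate: the rotation is applied to the point first,
// then whatever the current matrix contains.
val preRotated: Transform = { p -> current(rotate90(p)) }

// postRotate: the current matrix is applied first, then the rotation.
val postRotated: Transform = { p -> rotate90(current(p)) }
```

For the point (1, 2), preRotated gives (98, 1) while postRotated gives (-2, 101): the same two operations produce very different results depending on order.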

Perspective Drawing with Matrix

The Matrix object can also give us the ability to perform perspective drawing, which is not possible with the standard Canvas transformation APIs alone. The function that allows perspective drawing, or skewing, of a canvas is Matrix#setPolyToPoly(). The method sounds a bit confusing at first, but once you wrap your head around how it works, it is not so bad!

Here is an example bitmap that has been “skewed” using the setPolyToPoly method.

Bitmap drawn with setPolyToPoly

The setPolyToPoly method takes input (src) “points” and maps them to the specified output (dst) “points”. I say “points” because they aren’t real Point objects as we explored earlier in this post; they are just values in a float array, which can be quite confusing.

You can see in the src array below that the first two values represent the top left point of the image, the second two values represent the top right point, and so on. These points can be in any order, but each must match the corresponding point you want it mapped to in the dst array.

val src = floatArrayOf(
    0f, 0f,        // top left point
    width, 0f,     // top right point
    width, height, // bottom right point
    0f, height     // bottom left point
)
val dst = floatArrayOf(
    50f, -200f,           // top left point
    width, -200f,         // top right point
    width, height + 200f, // bottom right point
    0f, height            // bottom left point
)
val pointCount = 4 // number of points

// the second and fourth arguments are the indices in the src and dst arrays
// from which the points should start being mapped
newMatrix.setPolyToPoly(src, 0, dst, 0, pointCount)

canvas.withMatrix(newMatrix) {
    drawBitmap(bitmap, null, rect, paint)
}

In this example, the bottom right point will be mapped from [width, height] to [width, height + 200f].

So you can see from the above example that a Matrix can do some pretty powerful and interesting stuff.

Tip: Use the Matrix class to work between different coordinate systems

If you have two different coordinate systems that you are dealing with on a single view, then leveraging a Matrix class can help you map between the two.

For instance, suppose you get a touch event from Android measured in the coordinate system of the screen, but you would like to know where that point falls inside the image you are drawing on screen (i.e. in the coordinate system of the image). You can use a Matrix to map between these two systems.

Example of two different coordinate systems

In order to get the point mapped inside the image drawn on screen, we can use the Matrix#mapPoints() method:

fun mapPoint(point: PointF): PointF {
    computeMatrix.reset()
    // apply the same transformations to the matrix that were applied to the image
    computeMatrix.postTranslate(20f, 20f)
    computeMatrix.postRotate(20f, x, y)
    // create a float array with the point we want to map
    val arrayPoint = floatArrayOf(point.x, point.y)
    // mapPoints applies the matrix transformations to the coordinates, in place
    computeMatrix.mapPoints(arrayPoint)
    // the array now holds the transformed point
    return PointF(arrayPoint[0], arrayPoint[1])
}

In the above example, the input point would be the touch event from Android, and the translation and rotation we apply to the computeMatrix are the same translation and rotation we applied to the image when it was drawn. We then create a float array containing the original x and y values and call mapPoints with it. The method transforms the values in place, so when we read the first and second values back from the array, they are the mapped coordinate: the point inside the image.

Summary 👨🏾‍🎨

You can see that the Android Graphics APIs contain loads of useful classes that you can leverage to do a lot of the calculations and mathematics for you. From Points to Rects to more complex classes like Matrix, there are many things we can use to help us draw graphics on the screen! Make sure to include KTX for an even smoother experience when working with the Android graphics classes.

Have any questions or comments? Feel free to reach out and say hi to me on Twitter.

Getting Started with Android Canvas Drawing 🖼

Learn the basics of drawing with the Android Canvas Class

Diving into using the Android Canvas class can unlock magical super powers you never knew you had 🤯. Imagine being able to draw anything* your heart desires with just some basic shapes, paths and bitmaps. Well, the Android Canvas gives you just that ability.

What is a Canvas?

Canvas is a class in Android that performs 2D drawing of different objects onto the screen. The saying “a blank canvas” is very close to what a Canvas object is on Android: it is basically an empty space to draw onto.

The Canvas class is not a new concept; under the hood it wraps an SkCanvas. The SkCanvas comes from Skia, a 2D graphics library used on many different platforms, such as Google Chrome, Firefox OS, Flutter and Fuchsia. Once you understand how the Canvas works on Android, the same drawing concepts apply on many other platforms.

Tip: Check out the Skia source code for a deeper understanding of the Canvas implementation.

It is useful to know that Skia underlies Android's drawing code, so when you get stuck trying to understand how a certain API works, you can look at the Skia source to gain a deeper understanding.

Canvas Coordinate System

The Android Canvas coordinate system

The coordinate system of the Android canvas starts in the top left corner, where [0,0] represents that point. The y axis is positive downwards, and the x axis positive towards the right.

All elements drawn on a canvas are placed relative to the [0,0] point. 

When working with the Canvas, you are working with px, not dp, so any methods such as translating or resizing will be done in pixel sizes. This means you need to convert any dp values into px before calling any canvas operations. This ensures that your drawing looks consistent across devices with different pixel densities.
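
The conversion itself is simple: px = dp × density, where density is the screen's dpi divided by the mdpi baseline of 160. Here is a minimal sketch (the function name and the hard-coded density are illustrative; on a real device you would read the density from resources.displayMetrics, or use TypedValue.applyDimension):

```kotlin
// density = screen dpi / 160f, e.g. 2.625f on a 420 dpi device.
fun dpToPx(dp: Float, density: Float): Float = dp * density
```

For example, dpToPx(16f, 2.625f) returns 42f, so a 16dp stroke needs 42 pixels on that device.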

Canvas draw commands will draw over previously drawn items. The last draw command will be the topmost item drawn onto your canvas. It is up to you to ensure that your items are laid out correctly (Alternatively, you might want to use some of the built-in layout mechanisms for this — such as LinearLayout). 

How do I use a Canvas?

To draw onto a canvas in Android, you will need four things:

  1. A bitmap or a view — to hold the pixels where the canvas will be drawn.
  2. Canvas — to run the drawing commands on.
  3. Drawing commands — to indicate to the canvas what to draw.
  4. Paint — to describe how to draw the commands.

Get access to a Canvas instance

In order to get access to a Canvas instance, you will need to create a class that extends from View. This will then allow you to override the onDraw method, which has a Canvas as a parameter. 

class CustomView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
    defStyleAttr: Int = 0
) : View(context, attrs, defStyleAttr) {

    // Called when the view should render its content.
    override fun onDraw(canvas: Canvas?) {
        // drawing commands go here
    }
}

You can then include this view in your layout XML, which will automatically invoke the onDraw method of the custom view.


You can also get access to a Canvas object by creating one programmatically:

val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888) 
val canvas = Canvas(bitmap)

It’s worth noting that any Canvas created programmatically without using a View will be software rendered, not hardware rendered. This can affect the appearance of some drawing commands; for instance, some commands are simply not supported with hardware rendering, or only supported from a certain API level. For more information about the differences between hardware and software rendering, read this post.

What can I draw on a Canvas? ✏️

There are many different things you can draw onto a Canvas. One of the most common drawing operations is to draw a bitmap (image) onto the canvas. The method for doing this is just called drawBitmap and it takes in the bitmap object that is loaded up either with Android’s built-in mechanisms, or with Glide.

canvas.drawBitmap(bitmap, null, rect, paint)

The second parameter allows us to pass in the portion of the bitmap we want to render; when null is passed, the whole bitmap is rendered. The third parameter is a RectF object representing the scale and translation of the bitmap we want to draw on screen.

Tip: Make sure the RectF object you pass into the drawBitmap function is scaled with the correct aspect ratio, otherwise your output may be stretched.

You need to be careful with this, since it can quite easily stretch the bitmap: the Canvas calls don't take the aspect ratio of the provided image into account. You need to ensure the rect passed in is properly scaled. The fourth parameter is the Paint object; we will cover its purpose soon.
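
One common way to build such a rect is “centre-inside” scaling. The sketch below is plain Kotlin with a hypothetical fitCenter helper (in real code you would fill a RectF with these values and pass it to drawBitmap):

```kotlin
data class Fit(val left: Float, val top: Float, val right: Float, val bottom: Float)

// Scales a bmpW x bmpH bitmap to fit inside a maxW x maxH area,
// preserving the aspect ratio and centring the result.
fun fitCenter(bmpW: Float, bmpH: Float, maxW: Float, maxH: Float): Fit {
    val scale = minOf(maxW / bmpW, maxH / bmpH) // shrink to the tighter dimension
    val w = bmpW * scale
    val h = bmpH * scale
    val left = (maxW - w) / 2f
    val top = (maxH - h) / 2f
    return Fit(left, top, left + w, top + h)
}
```

A 200×100 bitmap fitted into a 100×100 area comes out as Fit(0f, 25f, 100f, 75f): full width, letterboxed vertically, no stretching.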

There are many other Canvas drawing methods that can give you some great looking views. We won’t be covering them all here, but here are two other examples of drawing methods on the Canvas class:

To draw a circle onto the view, give it a center point x, y, its radius and a Paint object:

canvas.drawCircle(x, y, radius, paint)

Another one is the drawRect() method. This draws a rectangle on screen:

canvas.drawRect(rect, paint)

This is not the full list of drawing methods, just a small selection to get you more comfortable with the concepts. Feel free to browse the Canvas documentation for a comprehensive list.

Paint 🎨

In my opinion, the Paint class is possibly the most interesting graphics class and it is also my favourite, for that very reason. There is so much that a Paint object can do that can really make your drawing operations shine. ✨

The Paint class typically holds colour and style information. The Paint object is then used for drawing objects (i.e. bitmaps, text, paths etc.) onto a Canvas.

To create a Paint object: 

private val textPaint = Paint().apply {
    isAntiAlias = true
    color = Color.RED
    style = Paint.Style.STROKE
}

This object should be created before it is used in View#onDraw(). It is not recommended to create it inside onDraw(), since you shouldn't do object allocations in that method.

isAntiAlias Flag

Tip: Use the isAntiAlias flag to ensure your drawing has smooth edges.

The isAntiAlias flag is quite an important one. If you are drawing objects to your canvas and you notice that the edges of your objects have jagged edges, it is likely that you haven’t set this flag to true. This flag indicates to the paint to smooth out the edges of the object you are drawing to the canvas. 

The Paint class has more than just those three properties; there are many more things you can do with it. For instance, you can also set properties related to text rendering, such as the typeface, letterSpacing (kerning) and textSize.

private val textPaint = Paint().apply {
    isAntiAlias = true
    textSize = fontSize
    letterSpacing = letterSpace
    typeface = newTypeface
    setShadowLayer(blurValue, x, y, Color.BLACK)
}

Unsupported Operations

Tip: Check that the Canvas/Paint APIs you are using work across different API versions. See this site for more information.

It is worth noting that the Paint#setShadowLayer() method doesn't work consistently across API levels and drawing commands. It works when drawing text on a Canvas, but applying the shadow to other commands such as drawBitmap doesn't yield the same results across API levels.

The reason for the inconsistency between API levels is because the Canvas APIs are bundled with the Android Platform and therefore are not updated until the OS is updated. See the list on this page for more information about which APIs work on which Android versions. 

Once you’ve created the Paint object, pass that object into your Canvas#draw*() calls and the drawing will then take on the properties you’ve specified in the paint. 

Next up…

In the next few articles, we will be diving into other parts of working with Canvas and drawing on Android. Be sure to subscribe to updates and follow me on Twitter for more tips. 

If you prefer watching this content — be sure to check out my talk from Mobile Matters London for a summary of this content.

Android: Using Physics-based Animations in Custom Views (SpringAnimation)

Learn how to use physics-based animations in a Custom View implementation for natural looking animations in your app.

You’ve used all the standard Android animation techniques, but you find that they sometimes just don’t give you that extra sparkle you are looking for. You’ve wondered how to get more natural looking animations and had no luck thinking about how to do it yourself. So here you are, reading this article in the hope that you will learn how to create beautiful, natural, physics-based animations in your app. 🌈

The Problem 🕵🏽‍♀️

The physics-based animation library is not new, but it was largely unexplored territory for me. Having always used the “standard” animation options (i.e. view.animate()), I had never found a need to use the physics-based animations, until I started with this particular custom view animation. This animation required that we animate a view between two points, decided by the user. Using the standard ValueAnimator, the result was not good enough for the polish that our app requires.

Here is how I previously animated the custom ColorDropperView using the ValueAnimator and PropertyValuesHolder classes:

private fun animateToPoint(point: Point) {
    val propertyX = PropertyValuesHolder.ofFloat(ColorDropperView.PROPERTY_X, dropperPoint.x, point.x)
    val propertyY = PropertyValuesHolder.ofFloat(ColorDropperView.PROPERTY_Y, dropperPoint.y, point.y)

    val animator = ValueAnimator()
    animator.setValues(propertyX, propertyY)
    animator.interpolator = OvershootInterpolator()
    animator.duration = 100
    animator.addUpdateListener { animation ->
        val animatedX = animation.getAnimatedValue(ColorDropperView.PROPERTY_X) as Float
        val animatedY = animation.getAnimatedValue(ColorDropperView.PROPERTY_Y) as Float
        setPoint(Point(animatedX, animatedY))
    }
    animator.start()
}

The PropertyValuesHolder is useful for creating custom animations of our own properties. When using it, the animated property values can be fetched from the animation in the AnimationUpdateListener callback. At that point, the values are interpolated between the start and end values we initially provided. We can then perform the draw operation (by calling invalidate() on our custom view) using these new animated values, and the view will animate 🤩. In our case, the setPoint() method calls invalidate() and the draw() function uses the new point values to draw itself.

ValueAnimator Example

Don’t get me wrong — the above animation is okay in most contexts but we would like it to look more fluid. We need to animate it elegantly between these two positions.

One of the problems with the above animation is that we needed to specify a duration for it. We specified 100ms, which moved the view at high speed. You may also notice that the ColorDropperView moves a lot faster when the distance between the start and end points is larger. We could play around with the duration until it looked more acceptable, but ideally we want the velocity to remain the same and the animation to look consistent, no matter the distance between the two points.

The Solution: SpringAnimation ✨

In order to make the animation more fluid, we need to switch to using the SpringAnimation class (documentation can be found here). The SpringAnimation class allows us to set the property which we will be animating, the velocity and the end value that the property should use.

To use the SpringAnimation class, we need to include the dependency in our build.gradle file:

implementation "androidx.dynamicanimation:dynamicanimation:1.0.0"

There are a bunch of built-in properties that we can use to achieve standard effects, such as the SCALE_X, ROTATION and ALPHA properties (check the documentation for the full list here). In our case, we needed to animate a custom property: the colour dropper's X and Y point (the underlying data structure that the view depends on for drawing). So we need to do things a bit differently.

We need to look at the SpringAnimation constructor that takes a FloatPropertyCompat object as an argument. This property links our custom view implementation to the animation. The SpringAnimation class uses this object to call into our custom view class on every change of the float value. Here is the implementation of the two custom FloatPropertyCompat objects for the X and Y positions on screen:

private val floatPropertyAnimX = object : FloatPropertyCompat<ColorDropperView>(PROPERTY_X) {
    override fun setValue(dropper: ColorDropperView?, value: Float) {
        dropper?.setDropperX(value)
    }

    override fun getValue(dropper: ColorDropperView?): Float {
        return dropper?.getDropperX() ?: 0f
    }
}

private val floatPropertyAnimY = object : FloatPropertyCompat<ColorDropperView>(PROPERTY_Y) {
    override fun setValue(dropper: ColorDropperView?, value: Float) {
        dropper?.setDropperY(value)
    }

    override fun getValue(dropper: ColorDropperView?): Float {
        return dropper?.getDropperY() ?: 0f
    }
}

These two objects access two custom methods on the ColorDropperView class and the setValue methods will be called whilst the animation is running, which will set the new interpolated values. In this case, setDropperX() and setDropperY() are custom methods in our ColorDropperView class. When these methods are invoked, they change the underlying value and call invalidate() which will trigger another redraw of the view.

Once we have our properties defined, we can then go on to implement the SpringAnimation effect with these properties.

Our animateToPoint() function now uses the SpringAnimation class, passing in a reference to the ColorDropperView (this), and we can set a few properties on the animation (such as stiffness and dampingRatio). We then call start() and the animation will run.

private fun animateToPoint(point: Point) {

    SpringAnimation(this, floatPropertyAnimX, point.x).apply {
        spring.stiffness = SpringForce.STIFFNESS_MEDIUM
        spring.dampingRatio = SpringForce.DAMPING_RATIO_MEDIUM_BOUNCY
        start()
    }

    SpringAnimation(this, floatPropertyAnimY, point.y).apply {
        spring.stiffness = SpringForce.STIFFNESS_MEDIUM
        spring.dampingRatio = SpringForce.DAMPING_RATIO_MEDIUM_BOUNCY
        start()
    }
}

It is worth noting that we don't need to (and can't) specify the duration of this animation, which makes total sense! In the real world, when something is falling or moving, how long it takes is determined by its mass, stiffness, velocity and other factors. We cannot tell the object how long it should take.

Here is a recording of how the animation works now using the SpringAnimation class. Much smoother and more natural looking, don’t you think?

SpringAnimation in action

Damping Ratio (bouncy-ness) 🎾

The dampingRatio that we can set on a SpringAnimation determines how much bounce the animation will have. There are some built-in options for the dampingRatio that'll produce different results:

  • DAMPING_RATIO_HIGH_BOUNCY
  • DAMPING_RATIO_MEDIUM_BOUNCY (default)
  • DAMPING_RATIO_LOW_BOUNCY
  • DAMPING_RATIO_NO_BOUNCY

We can also set a custom value for this ratio if preferred: values between 0 and 1 produce progressively less bounce, with 1 being critically damped (no bounce at all). Here are three examples of the effect the dampingRatio has on our custom view:

dampingRatio example


Stiffness 🧱

Another property that we can set on a SpringAnimation is the stiffness of the spring force. Like the dampingRatio, we can choose from a few predefined options:

  • STIFFNESS_HIGH
  • STIFFNESS_MEDIUM (default)
  • STIFFNESS_LOW
  • STIFFNESS_VERY_LOW

The stiffness affects how long the animation will take: if the spring is very stiff (STIFFNESS_HIGH) the animation will perform quicker than if the stiffness is low.

Below is an example of some of the different stiffness values in action:

Stiffness examples

Velocity 🚗

SpringAnimations can also have their startVelocity set using setStartVelocity(). This value is specified in pixels per second. If you would like to specify it, you should convert from a dp value into pixels to ensure the animation looks consistent across different devices. The default startVelocity is 0.

Here is an example of how to set the startVelocity to 5000dp per second:

SpringAnimation(this, floatPropertyAnimY, point.y).apply {
    setStartVelocity(TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 5000f, resources.displayMetrics))
    start()
}

This is what different start velocities look like on the custom view:

startVelocity options

Cancel SpringAnimation ✋🏾

Another great part about physics-based animations is that they can be cancelled midway through if required. Calling SpringAnimation#cancel() will terminate the animation. There is also the option to call SpringAnimation#skipToEnd(), which will immediately show the end state of the animation (this can cause a visual jump, as if the animation never ran).

Dynamic Animation — KTX Functions

There are currently some extension functions provided by the following KTX dependency (check for the latest version here):

implementation "androidx.dynamicanimation:dynamicanimation-ktx:1.0.0-alpha01"

The alpha version was released on the 9th of February 2019, but there are some new changes that haven't been released yet which will clean up this code quite a bit. Take a look here at the new extension functions that will be provided soon. The simplification of creating the FloatPropertyCompat objects is particularly interesting.

Finally 🧚🏼‍♀️

The physics-based animations in Android are great alternatives when you have animations that need to look more natural. They are also great for when you are moving things around on the screen and you aren’t sure how long the animation should take. Rather don’t try to guess those duration values, use physics animations instead. 👩🏻‍🔬

For our animation, we ended up going with DAMPING_RATIO_MEDIUM_BOUNCY, STIFFNESS_MEDIUM and the startVelocity left at 0. This was the final animation we stuck with:

Final animation

Where else have you found physics-based animations to be useful? Let me know your thoughts on Twitter! 🌈

Thanks to Josh Leibstein, Garima Jain, Nick Rout and Dan Galasko for reviewing this post. 💚

Android Canvas APIs with Kotlin and KTX

Learn how to use the Android KTX extension functions to clean up Canvas drawing code

Have you ever wanted to write a Custom View on Android but you were too afraid to deal with X, Y translations on a Canvas object? 

Well, working with them got a whole lot easier when using Kotlin and the Android KTX Extension functions provided by the Android Team at Google. 

Drawing on Canvas without KTX 🙀🙅🏽‍♀️

If you want to translate (move) an object you are drawing on a Canvas, how would you go about doing that? You would likely need to do something like the following:

val checkpoint = canvas.save()
canvas.translate(200f, 300f)
canvas.drawCircle(...) // drawn on the translated canvas
canvas.restoreToCount(checkpoint)

Canvas#save() and Canvas#restore() can be used to save() the current matrix (transformations like translate, rotate etc) and restore() the state of the Canvas back to its original transformations. 

The drawCircle() method that we are calling will be drawn at a translated point on the Canvas. Once restore is called, the Canvas is no longer translated and any further operations on that Canvas after that point will not be translated.

Example of a circle being drawn to a Canvas, the first doesn’t contain a translation, whereas the second has a translation applied

If we then wanted to do something more complex, for instance draw a rect that is rotated after we have translated, the code would look as follows:

val translateCheckpoint = canvas.save()
canvas.translate(200f, 300f)
canvas.drawCircle(...) // drawn on the translated canvas
val rotateCheckpoint = canvas.save()
canvas.rotate(45f)
canvas.drawRect(...) // drawn on the translated and rotated canvas
canvas.restoreToCount(rotateCheckpoint)
canvas.restoreToCount(translateCheckpoint)

Example of drawing a Rect — Image 1: No transformations — without doing save/restore. Image 2: Translated with same checkpoint as circle. Image 3: Translated and rotated.

To do multiple translations on a Canvas, we would then want to use restoreToCount with the specific checkpoint. This will notify the canvas of the certain checkpoint of the transformations that should be restored. 

We can see that this can easily get out of control, and it becomes really difficult to follow which transformations will be applied to a particular draw call.

Improving Canvas API calls with Android KTX 😻

To use these extension functions, you need to make sure you are importing the core-ktx dependency in your app level build.gradle file (look here for the latest version):

implementation 'androidx.core:core-ktx:1.0.1'

If we are using KTX, we can simplify the previous draw examples to be the following:

canvas.withTranslation(200f, 300f) {
    drawCircle(...) // drawn on the translated canvas
}

This wraps the logic up in a block, making it easier to understand, and our code is cleanly separated. We also don't need to specify the canvas on which to draw at this point, since the block now has the Canvas in scope. If we wished to apply different transformations to different parts of the drawing, say translate and then rotate, we can nest the withRotation function inside the first withTranslation block.

canvas.withTranslation(200f, 300f) {
    drawCircle(...) // drawn on the translated canvas
    withRotation(45f) {
        drawRect(...) // drawn on the translated and rotated canvas
    }
}

Now our function calls are clearly separated with parentheses and we can easily see which canvas transformations will be applied to the drawRect function by using these extension functions. 

Diving into the Android KTX implementation 📝

This is one of the extension functions for Canvas defined in the KTX library (original source code can be found here):

/**
 * Wrap the specified [block] in calls to [Canvas.save]/[Canvas.translate]
 * and [Canvas.restoreToCount].
 */
inline fun Canvas.withTranslation(
    x: Float = 0.0f,
    y: Float = 0.0f,
    block: Canvas.() -> Unit
) {
    val checkpoint = save()
    translate(x, y)
    try {
        block()
    } finally {
        restoreToCount(checkpoint)
    }
}

Taking a deeper look into how this extension function works, we can see that it wraps up the logic of saving and restoring the canvas. The last parameter, block, is a function literal with receiver. This sounds complicated, but it just means that the Canvas instance becomes the this scope inside that function.

This allows us in the block() function, to call the Canvas methods without having to specify the canvas object directly. For example, we don’t have to call canvas.drawCircle() anymore, we can now just call drawCircle() and the correct Canvas object will be used for that method call. 
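
As a standalone illustration of the same mechanism (a hypothetical helper, not part of KTX), here is a tiny function that takes a function literal with a StringBuilder receiver:

```kotlin
// `block` has StringBuilder as its receiver, so inside it
// `this` is the StringBuilder and append() can be called directly.
fun render(block: StringBuilder.() -> Unit): String {
    val sb = StringBuilder()
    sb.block() // equivalent to block(sb)
    return sb.toString()
}

val greeting = render {
    append("Hello, ") // no receiver prefix needed
    append("Canvas!")
}
```

Here greeting is "Hello, Canvas!"; withTranslation applies exactly this trick with Canvas as the receiver type.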

The block param is the last parameter of the function (and it is a function itself) so we are able to extract the block function outside of the parentheses. For example, both of the following usages are acceptable:

canvas.withTranslation(200f, 300f, {
    drawCircle(...)
})

Outside of parentheses:

canvas.withTranslation(200f, 300f) {
    drawCircle(...)
}

It is worth noting that Android Studio will show a lint warning for the first example, suggesting you rather use the second option.

What a nifty mechanism for cleaning up our Canvas API interactions!

Finally… ✨

We can see how the AndroidX Canvas extension functions can help improve the readability of our canvas transformation code. There are a few other extension functions for Canvas, including:

  • Canvas#withScale() 
  • Canvas#withSkew() 
  • Canvas#withMatrix()

Have you found any other useful extension functions in the KTX library? Let me know on Twitter @riggaroo 

Thanks to Nick Rout and Josh Leibstein for reviewing this post. 

Building Responsive / Resizable Android UIs for ChromeOS 📐📏

Building Responsive / Resizable Android UIs for ChromeOS 📐📏

Learn how using ViewModels can help create great user experiences on ChromeOS

This post originally appeared here.

Supporting ChromeOS devices sounds like a large undertaking with many unknowns. If you didn’t know already, ChromeOS allows users to install Android Apps on their devices. This is great news for ChromeOS users since it unlocks a huge amount of apps that previously weren’t available to them.

Why should you build support for ChromeOS?

ChromeOS support seems like a big task that you may not think is important to implement. But if you do a bit of research into Chromebook usage, you will see that a large portion of the US market uses Chromebooks. ChromeOS made up 59.6% of mobile computing sales in the US in Q4 of 2017 (according to this article), and whilst ChromeOS isn’t as popular in the rest of the world, it is steadily gaining in popularity. Considering that you can pick up a Chromebook for about $150, you can understand why it might be appealing to purchase.

How to build support in your Android app for ChromeOS?

Spoiler alert: You don’t need to do anything fancy to enable your Android apps to run on ChromeOS. If your app supports tablets, your app will run on ChromeOS. There are extra options you can consider for a ChromeOS device, such as support for a stylus, and possibly support for users who don’t have a touch screen (if you want them to be able to use your app without one).

Supporting the resizing of layouts can be a bit tricky. If you know what tools to use from the start, building support for this kind of interaction in your apps is something you can do from the beginning of building any new layout or app.

Since releasing the Over app on Android, we’ve been working to optimise our app for ChromeOS devices. But it turns out that we didn’t need to do that much more in order to support it, because we were already following best practices (as mostly described in this post). There is obviously room for improvement for us (keyboard shortcut support and UI optimisations). In this article, we will cover how to ensure that your UI remains consistent during app window resizes.

Use ViewModels for storing UI State

If you’ve been out of action in Android for a while, you may have missed all the great libraries that have been released recently to help solve some complex Android-specific issues. ViewModel is a class available from Android Jetpack.

This is the definition of a ViewModel from the Android Developer docs:

ViewModels help store and manage UI-related data in a lifecycle-aware way. The ViewModel class allows data to survive configuration changes such as screen rotations.

With this definition in mind, this is how ChromeOS handles the lifecycle when resizing an app: it notifies your app of a screen size change, which typically triggers a recreation of your Activity. If you aren’t using something like a ViewModel, you would lose the data backing that view when the new activity is created. Since ViewModels have a different lifecycle than an Activity, they outlive a recreation of it.
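The lifecycle difference can be sketched in plain Kotlin. This is a simplified analogy of how a retained store keyed by an owner survives screen recreation; it is not the actual Jetpack implementation, and all names here (RetainedStore, EditorState) are hypothetical:

```kotlin
// Hypothetical retained store: it lives outside any "screen" object,
// so entries survive when a screen is destroyed and recreated.
object RetainedStore {
    private val store = mutableMapOf<String, Any>()

    @Suppress("UNCHECKED_CAST")
    fun <T : Any> getOrCreate(ownerId: String, create: () -> T): T =
        store.getOrPut(ownerId, create) as T
}

// Hypothetical UI state holder, playing the role of a ViewModel.
class EditorState(var selectedTool: String = "none")

fun main() {
    // The first "activity instance" stores some UI state.
    val first = RetainedStore.getOrCreate("editor") { EditorState() }
    first.selectedTool = "brush"

    // After a window resize, the new "activity instance" looks up its
    // state and gets the same object back — nothing is lost.
    val second = RetainedStore.getOrCreate("editor") { EditorState() }
    println(second.selectedTool) // prints "brush"
}
```

A field stored directly on the first "activity instance" would have disappeared with it; the store keyed by owner is what gives the state a longer lifetime, which is essentially what the ViewModelStore does for real ViewModels.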

ViewModel Lifecycle from Android Developer Documentation

In our case, we have our layout state stored in the ViewModel, so when a user resizes the app window, the state outlives the Activity and the view automatically keeps the same information after it is resized.

Our ViewModel looks something like this:

class ProjectEditorViewModel : ViewModel() {

    private val _state = MutableLiveData<EditorState>()
    val state: LiveData<EditorState>
        get() = _state
}
The usage of this ViewModel in our ProjectEditorFragment, looks like the code below:

class ProjectEditorFragment : Fragment() {

    lateinit var viewModelFactory: ViewModelProvider.Factory

    private lateinit var viewModel: ProjectEditorViewModel

    override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle?): View? {
        val view = inflater.inflate(R.layout.fragment_editor_initial, container, false)
        return view
    }

    override fun onActivityCreated(savedInstanceState: Bundle?) {
        super.onActivityCreated(savedInstanceState)
        viewModel = ViewModelProviders.of(requireActivity(), viewModelFactory)
            .get(ProjectEditorViewModel::class.java)
        setupViewModel()
    }

    private fun setupViewModel() {
        viewModel.state.observe(this, Observer { editorState ->
            editorState?.let { state ->
                // Set the state of all controls based on the saved state in the ViewModel
            }
        })
    }
}
In the sample above, you can see the ProjectEditorViewModel state being observed for changes. When a new state comes in from the ViewModel, we perform all the required changes to update the view’s state.

Resizing the Editor Experience keeps a user’s selected tool state and their project state due to using ViewModels.

If we stored our state in a class that didn’t extend ViewModel or AndroidViewModel, the UI information would be lost when the activity is resized (i.e. a user’s project changes, the state of the currently selected tool, etc.).

With us using ViewModels by default for all our UI state, resizing an activity didn’t present any weird state loss issues. 🎉

Support multiple screen sizes using best practices

Now that we’ve covered how you can go about storing state across window resizing, we might want to transition our layouts in a way that communicates the change in UI. Of course, this is not the only thing you need to do in order to support ChromeOS devices.

There are great guidelines that already exist on how to build for different screen/density size buckets. Follow those in order to support different layout sizes (i.e. Create different size buckets, layout params etc.).
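As a quick sketch of what those size buckets can look like in practice (the sw values and file names here are illustrative assumptions, not from the post — the system picks the layout whose smallest-width qualifier best matches the current window):

```
res/
  layout/fragment_editor.xml            default layout (phones / small windows)
  layout-sw600dp/fragment_editor.xml    ~7" tablets and medium windows
  layout-sw720dp/fragment_editor.xml    ~10" tablets and large windows
```

Because ChromeOS window resizing re-evaluates these qualifiers on each configuration change, the same buckets that serve tablets also serve resizable windows.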


If you start doing these few things you won’t have to retrospectively go back and change your app to support ChromeOS. The great part about using the suggestions above is that it doesn’t just apply to ChromeOS but also to tablets or phones when you use the split screen feature on any Android device.

There are a few more things you can do to support ChromeOS really well, including supporting a stylus and making sure your app has some delightful keyboard shortcuts. We are getting there!

If you have any questions, feel free to reach out to me on Twitter @riggaroo.

Thanks to Joshua Leibstein and Nick Rout for reviewing this post.