Sunday, May 29, 2011

PSA: Netflix for Android spontaneous deactivation fix

Today Netflix on my Android phone (a Nexus One) started giving me this error:

It looks like Netflix has been deactivated on this device. It could be an issue with your account or perhaps your device was deactivated on netflix.com. (2004)

Netflix only lets you have 6 devices activated per account, so at first I thought I might be bumping into the limit, but it turned out that wasn't my problem.

The thing that eventually worked was to clear all data for the Netflix app. To do this:

  1. Go to the home screen.
  2. Press the menu button.
  3. Select "Manage apps" (or "Settings", then "Applications", then "Manage applications" on older versions of Android).
  4. Select the "Downloaded" tab.
  5. Select the Netflix app.
  6. Click on "Clear data".

The next time you open the Netflix app you'll need to sign in again, but then it should be working correctly.

I talked to Netflix customer support about this issue and apparently they had a ton of devices spontaneously deactivate in the last day or so. It sounded like they either don't really understand the cause, or just didn't want to share the details. Based on the fix it seems like some sort of authentication token either got corrupted or had the server-side rug pulled out from under it. Clearing the app data seems to force it to get a fresh token.

posted Sunday, May 29, 2011 (3 comments)

Monday, May 09, 2011

Android's 2D Canvas Rendering Pipeline

This is a conceptual overview of how Android's 2D Canvas rendering pipeline works. Since Android's Canvas API is mostly a pretty thin veneer on top of Skia it should also serve as a reasonable overview of Skia's operation, though I've only looked at Skia code that's reachable from Android's SDK, and when the Skia and Android terminology differ (which is rare, modulo “Sk” prefixes and capitalization) I've used the Android terminology.

How and Why I Wrote This

I wrote this overview because I've been doing some Android development recently, and I was getting frustrated by the fact that the documentation for android.graphics, particularly when it comes to all of the things that can be set in a Paint object, is extremely sparse. I Googled, and I asked a question on Stack Overflow, but I couldn't find anything that explained this stuff to my satisfaction.

This overview is based on reading what little documentation exists (often “between the lines”), doing lots of experiments to see how fringe cases work, poring over the code, and doing even more experiments to verify that I was reading the code correctly. I started writing it as notes for myself, but I figured others might benefit as well so I decided to post it here.

Caveats

I say this is a “conceptual” overview because it does not always explain the actual implementation. The implementation is riddled with special cases that attempt to avoid doing work that isn't necessary. (I remember hearing some quote along the lines of “the fastest way to do something is to not do it at all”.) Understanding the implementation details of all of these special cases is unnecessary to understanding the actual end-result, so I've focused on the most general path through the pipeline. I actually avoided looking at the details of a lot of the special-case code, so if this code contains behavioral inconsistencies I won't have seen them.

Also, there are cases, particularly in the Shading and Transfer sections, where the algorithm described here is far less efficient but easier to visualize (and, I hope, understand) than the actual implementation. For example, I describe Shading as a separate phase that produces an image containing the source colors and Transfer as a phase producing an image with intermediate colors. In reality these two “phases” are interleaved such that only a small set (often just one) of the pixels from each of these virtual images actually “exists” at any instant in time. There is also short-circuiting in this code such that the source and intermediate colors aren't computed at all for pixels where the mask is fully transparent (0x00).

This does mean that this overview can't give one an entirely accurate understanding of the performance (speed and/or memory) of various operations in the pipeline. For that it would be better to perform experiments and profile.

Also keep in mind that because this is documenting what is arguably “undocumented behavior” it's hard to say how much of what is described here is guaranteed versus implementation detail, or even outright bugs. I've used some judgement in determining where to put the boundaries between phases (all of that optimization blurs the lines) based on what I think is a “reasonable API”, and I've also tried to point out when I think a particular behavior I've discovered looks more like a bug than a feature to rely on.

There are still a number of cases where I'd like to do some more experimentation to verify that my reading of the code is correct and I've tried to indicate those below.


Entering the Pipeline

The pipeline is invoked each time a Canvas.drawSomething method that takes a Paint object is called.

Most of these drawing operations start at the first phase, Path Generation. There are two exceptions, however:

  1. drawPaint skips Path Generation entirely and Rasterization consists of producing a solid opaque mask.

  2. drawBitmap has different behavior depending on the supplied Bitmap's configuration.

    In the case of an ALPHA_8 Bitmap, Path Generation and Rasterization are both skipped and the supplied Bitmap is used as the mask.

    For other Bitmap configurations the Shader is temporarily replaced with a BitmapShader in CLAMP mode. This means that setting a Shader to be used with a drawBitmap call with a non-ALPHA_8 Bitmap is pointless. The pipeline is then executed as though drawRect had been called with a rectangle equal to the bounding box of the Bitmap; there's a sketch of this equivalence after this list.

    According to Romain Guy, this behavior is intentional.
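
Here's that equivalence sketched in code (the coordinates are arbitrary, and I'm glossing over the Canvas matrix and how the original Shader gets restored):

    import android.graphics.Bitmap;
    import android.graphics.BitmapShader;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.Shader;

    // These two methods should paint the same pixels for a non-ALPHA_8 Bitmap.
    void direct(Canvas canvas, Bitmap bitmap, Paint paint) {
        canvas.drawBitmap(bitmap, 0, 0, paint);
    }

    void viaShader(Canvas canvas, Bitmap bitmap, Paint paint) {
        // drawBitmap effectively installs a CLAMP-mode BitmapShader...
        paint.setShader(new BitmapShader(bitmap,
                Shader.TileMode.CLAMP, Shader.TileMode.CLAMP));
        // ...and proceeds as though drawRect had been called with the
        // Bitmap's bounding box.
        canvas.drawRect(0, 0, bitmap.getWidth(), bitmap.getHeight(), paint);
        paint.setShader(null); // undo (assumes no Shader was set before)
    }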

Overall Structure

The overall structure of the pipeline. This diagram is available in Gzipped SVG or PDF formats for use as a quick reference card.

At the top of the diagram are the two main inputs to the pipeline: the parameters to the draw method that was called (really multiple inputs) and the “destination” image — the Bitmap connected to the Canvas.

There are four main phases in the pipeline. The details of these will be covered below. While there are exceptions, the phases mostly follow this pattern: there are two or more sub-phases, the first of which computes an intermediate result, while the later ones “massage” this intermediate result. These later sub-phases often default to the identity function. ie: they usually leave the intermediate result alone unless explicitly told to do otherwise by setting properties on the Paint.

Path Generation

The output of the first phase is a Path.

This phase has three sub-phases:

  1. An initial Path is constructed based on the draw* method that was called. In the case of drawPath, this is simply the Path supplied by the client. In the case of drawOval or drawRect, the output is a Path containing the corresponding primitive.

  2. If the Paint has a PathEffect, it is used to produce a new path based on the initial Path. The PathEffect is essentially a function that takes a Path as its input and returns a Path.

    If no PathEffect is set then the initial Path is passed on to the next phase unmodified. That is, the default PathEffect is the identity function.

    PathEffect implementations include CornerPathEffect, which rounds the corners of the Path, and DashPathEffect which converts the Path into a series of “dashes”.

    One interesting quirk: if the Paint object's style is FILL_AND_STROKE the PathEffect is “lied to” and told that it's FILL. This matters because PathEffect implementations may alter their behavior depending on settings in the Paint. For example, DashPathEffect won't do anything if it is told the style is FILL.

  3. The final sub-phase is “stroking”. If the Paint.Style is FILL this does nothing to the Path. If the style is STROKE then a new “stroked” Path is generated. This stroked Path is a Path that encloses the boundary of the input Path, respecting the various stroke properties of the Paint (strokeCap, strokeJoin, strokeMiter, strokeWidth). The idea is that later phases of the pipeline will always fill the Path they are given, and so the stroking process converts Paths into their filled equivalents. If the style is FILL_AND_STROKE the result Path is the stroked Path concatenated to the original Path.

The method Paint.getFillPath() can be used to run the later sub-phases of this phase on a Path object. As far as I can tell this is the only significant part of the pipeline that can be run in isolation.
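
For example, here's a sketch (shape and sizes arbitrary) that sets up all three sub-phases and then uses getFillPath to grab the result of the last two without drawing anything:

    import android.graphics.CornerPathEffect;
    import android.graphics.Paint;
    import android.graphics.Path;

    Path strokedRoundedRect() {
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setStyle(Paint.Style.STROKE);            // sub-phase 3: stroke...
        paint.setStrokeWidth(8);                       // ...with these properties
        paint.setStrokeJoin(Paint.Join.ROUND);
        paint.setPathEffect(new CornerPathEffect(16)); // sub-phase 2: round corners

        Path initial = new Path();                     // sub-phase 1: the initial Path
        initial.addRect(10, 10, 100, 100, Path.Direction.CW);

        // Run sub-phases 2 and 3 in isolation. The result is a Path that
        // encloses the boundary of the rounded rectangle, ready to be filled.
        Path fillPath = new Path();
        paint.getFillPath(initial, fillPath);
        return fillPath;
    }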

Rasterization

Rasterization is the process of determining the set of pixels that will be drawn to. This is accomplished by generating a “mask”, which is an alpha-channel image. Opaque (0xFF) pixels on this mask indicate areas we want to draw to at “full strength”, transparent (0x00) areas are areas we don't want to draw to at all, and partially transparent areas will be drawn to at “partial strength”. This is explained more at the end of the final phase. (When visualizing this process I find that it helps to think of opaque as white and transparent as black.)

Rasterization has two completely different behaviors depending on whether a Rasterizer has been set on the Paint.

If no Rasterizer has been set then the default rasterization process is used:

  1. The Path is scan-converted based on parameters from the Paint (eg: the style property) and the Path (eg: the fillType property) to produce an initial mask.

    Pixels “inside” the Path will become opaque, those “outside” will be left transparent, and those on the boundary may become partially transparent (for anti-aliasing). The mask will end up containing an opaque silhouette of the object.

    The Path object's fillType determines the rule for deciding which pixels are inside versus outside. See Wikipedia's article on the non-zero winding rule for a good explanation of these different rules.

  2. If there is a MaskFilter set, then the initial mask is transformed by the MaskFilter. The MaskFilter is essentially a function that takes a mask (an ALPHA_8 Bitmap) as input and returns a mask as output. For example, a BlurMaskFilter will blur the mask image.

    If no MaskFilter is set then the initial mask is passed on to the next phase unmodified. That is, the default MaskFilter is the identity function.
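
For example, this sketch (shape and blur radius arbitrary) exercises both sub-phases: the Path's fillType changes which pixels scan conversion considers “inside”, and a BlurMaskFilter then transforms the resulting mask:

    import android.graphics.BlurMaskFilter;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.Path;

    void drawBlurredRing(Canvas canvas) {
        // Two concentric circles; the EVEN_ODD fillType turns the inner
        // circle into a hole during scan conversion.
        Path path = new Path();
        path.addCircle(100, 100, 80, Path.Direction.CW);
        path.addCircle(100, 100, 40, Path.Direction.CW);
        path.setFillType(Path.FillType.EVEN_ODD);

        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(0xFF336699);
        // Sub-phase 2: transform the initial mask by blurring it.
        paint.setMaskFilter(new BlurMaskFilter(10, BlurMaskFilter.Blur.NORMAL));
        canvas.drawPath(path, paint);
    }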

If a Rasterizer is set on the Paint then, instead of the above two steps, the Rasterizer creates the mask from the Path. The MaskFilter is not invoked after the Rasterizer. (This seems like a bug, but I've verified this behavior experimentally. Romain Guy agreed that this is probably a bug.)

The only Rasterizer implementation in Android is LayerRasterizer. LayerRasterizer makes it possible to create multiple “layers”, each with its own Paint and offset (translation). This means that when n LayerRasterizer layers are present there are n + 1 Paint objects in use: the “top-level” Paint (passed to the draw* method) and an additional n Paint objects, one for each Layer.

LayerRasterizer takes the Path and for each layer runs the Path through the pipeline of that layer's Paint, starting at the PathEffect step and rendering to the mask. This has some interesting consequences:

  • Each layer can have its own PathEffect. These are applied to the Path that was generated by the top-level PathEffect (if one was set). So if the top-level Paint's PathEffect is set to a CornerPathEffect and a layer's PathEffect to a DashPathEffect, that layer will render a dashed shape with rounded corners.

  • Each layer can have its own Rasterizer, so rasterization can be recursive.

  • Each layer can have its own MaskFilter. This MaskFilter applies to a separate mask in the sub-pipeline. Remember, the entire pipeline is being run again. For example, if there are two layers and one has a BlurMaskFilter the output of the other layer will not be blurred regardless of the order of the layers.

  • The destination Bitmap of this sub-pipeline is an alpha bitmap, so only the alpha-channel component of the Shading and Transfer phases has any relevance.

Also note that LayerRasterizer does not make use of the MaskFilter in the top-level Paint. Since the top-level MaskFilter is not invoked after invoking the Rasterizer, there is no point in setting a MaskFilter on a Paint if the Rasterizer has been set to a LayerRasterizer. (Perhaps other Rasterizer implementations could make use of the top-level MaskFilter, but LayerRasterizer is the only implementation included with Android.)
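
To make the layer mechanics concrete, here's a sketch of a simple two-layer drop shadow (the offsets and alpha are arbitrary). Note the three Paint objects in play: n = 2 layer Paints plus the top-level Paint:

    import android.graphics.Canvas;
    import android.graphics.LayerRasterizer;
    import android.graphics.Paint;
    import android.graphics.Path;

    void drawWithDropShadow(Canvas canvas, Path path) {
        // The layer Paints render into the mask (an alpha bitmap), so
        // only their alpha-channel output matters.
        Paint shadow = new Paint(Paint.ANTI_ALIAS_FLAG);
        shadow.setAlpha(0x60);              // a translucent shadow

        Paint body = new Paint(Paint.ANTI_ALIAS_FLAG);

        LayerRasterizer rasterizer = new LayerRasterizer();
        rasterizer.addLayer(shadow, 6, 6);  // layer offset down and right
        rasterizer.addLayer(body);          // un-offset layer on top

        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(0xFF336699);
        paint.setRasterizer(rasterizer);    // replaces the two steps above
        canvas.drawPath(path, paint);
    }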

Shading

Shading is the process of determining the “source colors” for each pixel. A color consists of alpha, red, green, and blue components (ARGB for short), each of which ranges from 0 to 1. (In reality these are typically represented as bytes from 0x00 to 0xFF.)

At a high level, the output of the Shader can be thought of as a virtual image containing the source colors: the “source” image. The actual implementation doesn't use a Bitmap, but rather a function that maps from (x,y) to an ARGB color (the “source color”) for the given pixel, and this function is only called for coordinates where the corresponding pixel may be altered by the source color. This is really just an optimization, however.

Like the previous phases, Shading also has two sub-phases:

  1. An initial “source” image is generated by the Shader. If no Shader has been set it's as if a Shader that produced a single solid color (the Paint's Color) was used.

    The Shader does not get the mask, the Path, or the destination image as inputs.

  2. If a ColorFilter has been set then the colors in the source color image are transformed by this ColorFilter.

    The only inputs to the ColorFilter during the pipeline are ARGB colors. The ColorFilter does not get the mask, the Path, the destination image, or the coordinates of the pixel whose color it is transforming, as inputs.
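
For example, this sketch uses a gradient Shader to generate the source image and a ColorFilter to transform its colors (the gradient endpoints and filter constants are arbitrary):

    import android.graphics.Canvas;
    import android.graphics.LightingColorFilter;
    import android.graphics.LinearGradient;
    import android.graphics.Paint;
    import android.graphics.Shader;

    void drawShadedRect(Canvas canvas) {
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        // Sub-phase 1: source colors come from a red-to-blue vertical
        // gradient instead of the Paint's single color.
        paint.setShader(new LinearGradient(0, 0, 0, 200,
                0xFFFF0000, 0xFF0000FF, Shader.TileMode.CLAMP));
        // Sub-phase 2: transform each source color. This filter zeroes the
        // red channel (multiplies RGB by 0x00FFFF and adds nothing).
        paint.setColorFilter(new LightingColorFilter(0xFF00FFFF, 0x00000000));
        canvas.drawRect(0, 0, 200, 200, paint);
    }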

Transfer

Transfer is the process of actually transferring color to the destination Bitmap. The transfer phase has the following inputs:

  • The mask generated by Rasterization.

  • The “source color” for each pixel as determined by Shading.

  • The destination bitmap, which tells us the “destination color” for each pixel.

  • The transfer mode (Xfermode).

Once again, there are two sub-phases:

  1. An intermediate image is generated from the source image and destination image. For each (x,y) coordinate the corresponding source and destination colors are passed to a function determined by the Xfermode. This function takes the source color and destination color and returns the color for the intermediate image's pixel at (x,y).

    Note that the mask is not used in this sub-phase. In particular, the source alpha comes from the Shader, and the destination alpha comes from the destination image.

    If an Xfermode hasn't been set on the Paint then the behavior is as though it was set to PorterDuffXfermode(SRC_OVER).

  2. The second sub-phase takes the intermediate image, the destination image, and the mask as inputs and modifies the destination image. It does not use the Xfermode.

    The intermediate image is blended with the destination image through the mask. Blending means that each pixel in the destination image will become a weighted average (or equivalently, linear interpolation) of that pixel's original color and the corresponding pixel in the intermediate image. The opacity of the corresponding mask pixel is the weight of the intermediate color, and its transparency is the weight of the original destination color.

    In other words, a pixel that is transparent (0x00) in the mask will be left unaltered in the destination, a pixel that is opaque (0xFF) in the mask will be completely overwritten by the corresponding pixel in the intermediate image, and pixels that are partially transparent will result in a destination pixel color that is proportionately between its original color and the color of the corresponding intermediate image pixel.

This is the final phase. The pipeline is now complete.
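
Putting the whole phase together in deliberately naive Java (xfer and lerpColor are invented helpers for illustration; the real code is heavily optimized and works on premultiplied pixels):

    // A conceptual model of the Transfer phase over flat ARGB pixel arrays.
    void transfer(int[] src, int[] dst, byte[] mask) {
        for (int i = 0; i < dst.length; i++) {
            // Sub-phase 1: intermediate color via the Xfermode's function.
            int intermediate = xfer(src[i], dst[i]);
            // Sub-phase 2: blend through the mask; the mask pixel's
            // opacity is the weight given to the intermediate color.
            float weight = (mask[i] & 0xFF) / 255f;
            dst[i] = lerpColor(dst[i], intermediate, weight);
        }
    }

    // For illustration, PorterDuff SRC: the result is just the source color.
    int xfer(int srcColor, int dstColor) {
        return srcColor;
    }

    // Per-byte linear interpolation of two ARGB colors: (1 - w)*a + w*b.
    int lerpColor(int a, int b, float w) {
        int result = 0;
        for (int shift = 0; shift <= 24; shift += 8) {
            int ca = (a >>> shift) & 0xFF;
            int cb = (b >>> shift) & 0xFF;
            result |= Math.round(ca + (cb - ca) * w) << shift;
        }
        return result;
    }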

More on Porter Duff Transfer Modes

The most commonly used transfer modes are instances of PorterDuffXfermode. The behavior of a PorterDuffXfermode is determined by its PorterDuff.Mode. The documentation for each PorterDuff.Mode (except OVERLAY) shows the function that is applied to the source and destination colors to obtain the intermediate color. For example, SRC_OVER is documented as:

[Sa + (1 - Sa)*Da, Rc = Sc + (1 - Sa)*Dc]

This means:

Ra = Sa + (1 - Sa) * Da
Rr = Sr + (1 - Sa) * Dr
Rg = Sg + (1 - Sa) * Dg
Rb = Sb + (1 - Sa) * Db

Where Rx, Sx, and Dx are the intermediate (result), source, and destination values of the x color component.
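
And as a bit of illustrative Java operating on premultiplied components in [0,1] (Skia stores pixels premultiplied, which is what lets these equations treat the color channels so uniformly):

    // SRC_OVER, applied per channel: R = S + (1 - Sa) * D.
    // Arrays are [alpha, red, green, blue], premultiplied, in [0, 1].
    float[] srcOver(float[] s, float[] d) {
        float invSa = 1 - s[0];         // s[0] is the source alpha
        return new float[] {
            s[0] + invSa * d[0],        // Ra = Sa + (1 - Sa) * Da
            s[1] + invSa * d[1],        // Rr = Sr + (1 - Sa) * Dr
            s[2] + invSa * d[2],        // Rg = Sg + (1 - Sa) * Dg
            s[3] + invSa * d[3],        // Rb = Sb + (1 - Sa) * Db
        };
    }

In the API you'd pick this mode with paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_OVER)), though as noted above it's already the default.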

Some additional notes on the PorterDuff.Mode documentation:

  • The documentation uses “Sc” and “Dc” rather than describing each red, green, and blue component separately. This is because Porter Duff transfer modes always treat the non-alpha channels the same way and each of these channels is unaffected by all other channels except for alpha.

  • SRC_OVER and DST_OVER are the only two modes that have the left-hand side of this equation, “Rc”, in their documentation. I'm guessing this inconsistency is a copy-and-paste error.

  • The alpha channel is always unaffected by non-alpha channels. That is, Ra is always a function of only Sa and Da.

  • The documentation for ADD refers to a “Saturate” function. This is just clamping to the range [0,1]. (I don't know why they use such an odd name for clamping, especially since “saturation” usually refers to an entirely unrelated concept when talking about colors.)

  • The definition of many of these modes, including OVERLAY, can be found in the SVG Compositing Specification. The Skia code actually links to (an older version of) this document. It has some good diagrams, too.

posted Monday, May 09, 2011 (3 comments)