Friday, February 10, 2012
Multi-diff with Vim and Git
I just pushed some stuff to github that you may find useful if you're a git user, a vim user, or (best of all) both.
git-multidiff
For git users, there's git-multidiff, which works kind of like git difftool, except that it invokes your tool of choice once on the entire set of files, instead of once for each pair. This is handy if you have a diff tool that'll let you view multiple diffs simultaneously.
Full installation instructions are in a comment at the top of the file, but it basically consists of putting git-multidiff and _git-multidiff-helper in your path and adding an entry to your .gitconfig. Note that it requires Python (I've tested it with 2.7.2).
tab-multi-diff.vim
Speaking of “diff tools that'll let you view multiple diffs simultaneously”, that's what tab-multi-diff.vim is for. It lets you do a “vimdiff” on multiple pairs of files, with each pair in a separate tab.
To install it, just save tab-multi-diff.vim in your vim plugins directory (typically ~/.vim/plugin/).
To use it, you can invoke vim (or gvim) with a command like:
gvim -c 'silent call TabMultiDiff()' old-foo foo old-bar bar
That's obviously kind of long, so you probably want to wrap it in a shell script. My script for doing this is vd (which also depends on v). Note that it imposes some of my personal preferences, so you may only want to use it as a starting point.
Using Them Together
To use git-multidiff and tab-multi-diff.vim together I have the following in my .gitconfig:
[multidiff]
    tool = vd -f
Note that the tool option for multidiff is a command line prefix, not a “tool name” as it is for git difftool. That's why it's possible to include a flag. The -f flag shown here is to prevent backgrounding. (It's always seemed weird to me that git difftool has this extra layer of indirection.)
Sunday, December 04, 2011
A Sufficiently Advanced Violin
The reactions to the recent story about CT scans being used to recreate a Stradivarius violin are interesting. For example, in the comments on Engadget there's a lot of denial that it could sound as good as the original, as well as people saying it won't sound as good in 300 years. I have to wonder if the latter even matters. If we can cheaply create a clone of a 307-year-old Stradivarius, we can just make a new one when the old one stops sounding good. And who knows if a 600-year-old Stradivarius will actually sound good?
Musicians Centre has an interesting take: “Why do we have to keep going back and trying to replicate the past when it comes to instruments?”
I agree, but I don't think the scanned Stradivarius has to be just about replicating the past. If we are able to scan instruments that sound good and produce replicas, that means we can experiment with modifications to the design and iterate to produce better instruments. Without any way to measure what makes a Stradivarius “good”, iteration is hard, and you end up with people talking about trees that don't exist anymore or a mysterious fungus that can't be replicated.
That said, improvements will most likely have to overcome a subjectivity problem. On a large scale there are objective ways of determining that one violin is better than another, but at a finer scale things might not be so clear cut. Assuming you could make a violin that sounds slightly better than even the best Stradivarius by some objective measure, would it just end up sounding weird to people who are used to the real thing?
Sunday, May 29, 2011
PSA: Netflix for Android spontaneous deactivation fix
Today Netflix on my Android phone (a Nexus One) started giving me this error:
It looks like Netflix has been deactivated on this device. It could be an issue with your account or perhaps your device was deactivated on netflix.com. (2004)
Netflix only lets you have 6 devices activated per account, so at first I thought I might be bumping into the limit, but it turned out that wasn't my problem.
The thing that eventually worked was to clear all data for the Netflix app. To do this:
- Go to the home screen.
- Press the menu button.
- Select "Manage apps" (or "Settings", then "Applications", then "Manage applications" on older versions of Android).
- Select the "Downloaded" tab.
- Select the Netflix app.
- Click on "Clear data".
The next time you open the Netflix app you'll need to sign in again, but then it should be working correctly.
I talked to Netflix customer support about this issue and apparently they had a ton of devices spontaneously deactivate in the last day or so. It sounded like they either don't really understand the cause, or just didn't want to share the details. Based on the fix it seems like some sort of authentication token either got corrupted or had the server-side rug pulled out from under it. Clearing the app data seems to force it to get a fresh token.
Monday, May 09, 2011
Android's 2D Canvas Rendering Pipeline
This is a conceptual overview of how Android's 2D Canvas rendering pipeline works. Since Android's Canvas API is mostly a pretty thin veneer on top of Skia, it should also serve as a reasonable overview of Skia's operation, though I've only looked at Skia code that's reachable from Android's SDK, and when the Skia and Android terminology differ (which is rare, modulo “Sk” prefixes and capitalization) I've used the Android terminology.
How and Why I Wrote This
I wrote this overview because I've been doing some Android development recently, and I was getting frustrated by the fact that the documentation for android.graphics, particularly when it comes to all of the things that can be set in a Paint object, is extremely sparse. I Googled, and I asked a question on Stack Overflow, but I couldn't find anything that explained this stuff to my satisfaction.
This overview is based on reading what little documentation exists (often “between the lines”), doing lots of experiments to see how fringe cases work, poring over the code, and doing even more experiments to verify that I was reading the code correctly. I started writing it as notes for myself, but I figured others might benefit as well so I decided to post it here.
Caveats
I say this is a “conceptual” overview because it does not always explain the actual implementation. The implementation is riddled with special cases that attempt to avoid doing work that isn't necessary. (I remember hearing some quote along the lines of “the fastest way to do something is to not do it at all”.) Understanding the implementation details of all of these special cases is unnecessary to understanding the actual end-result, so I've focused on the most general path through the pipeline. I actually avoided looking at the details of a lot of the special-case code, so if this code contains behavioral inconsistencies I won't have seen them.
Also, there are cases, particularly in the Shading and Transfer sections, where the algorithm described here is far less efficient but easier to visualize (and, I hope, understand) than the actual implementation. For example, I describe Shading as a separate phase that produces an image containing the source colors and Transfer as a phase producing an image with intermediate colors. In reality these two “phases” are interleaved such that only a small set (often just one) of the pixels from each of these virtual images actually “exists” at any instant in time. There is also short-circuiting in this code such that the source and intermediate colors aren't computed at all for pixels where the mask is fully transparent (0x00).
This does mean that this overview can't give one an entirely accurate understanding of the performance (speed and/or memory) of various operations in the pipeline. For that it would be better to perform experiments and profile.
Also keep in mind that because this is documenting what is arguably “undocumented behavior” it's hard to say how much of what is described here is stuff that's guaranteed versus implementation detail, or even outright bugs. I've used some judgement in determining where to put the boundaries between phases (all of that optimization blurs the lines) based on what I think is a “reasonable API” and I've also tried to point out when I think a particular behavior I've discovered looks more like a bug than a feature to rely on.
There are still a number of cases where I'd like to do some more experimentation to verify that my reading of the code is correct and I've tried to indicate those below.
Entering the Pipeline
The pipeline is invoked each time a Canvas.drawSomething method that takes a Paint object is called.
Most of these drawing operations start at the first phase, Path Generation. There are two exceptions, however:
- drawPaint skips Path Generation entirely, and Rasterization consists of producing a solid opaque mask.
- drawBitmap has different behavior depending on the supplied Bitmap's configuration. In the case of an ALPHA_8 Bitmap, Path Generation and Rasterization are both skipped and the supplied Bitmap is used as the mask (see the sketch after this list). For other Bitmap configurations the Shader is temporarily replaced with a BitmapShader in CLAMP mode. This means that setting a Shader to be used with a drawBitmap call with a non-ALPHA_8 Bitmap is pointless. The pipeline is then executed as though drawRect had been called with a rectangle equal to the bounding box of the Bitmap. According to Romain Guy, this behavior is intentional.
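To make the ALPHA_8 case concrete, here's a minimal sketch (the wrapper method, shapes, sizes, and colors are my own invention) of painting a gradient through an alpha mask. It assumes the android.graphics classes are imported and that canvas is the destination Canvas (e.g. the one passed to onDraw):

void drawMaskedGradient(Canvas canvas) {
    // Build an ALPHA_8 mask: an opaque circle on a transparent background.
    Bitmap mask = Bitmap.createBitmap(256, 256, Bitmap.Config.ALPHA_8);
    new Canvas(mask).drawCircle(128, 128, 100, new Paint(Paint.ANTI_ALIAS_FLAG));

    Paint paint = new Paint();
    paint.setShader(new LinearGradient(0, 0, 256, 256,
            Color.RED, Color.BLUE, Shader.TileMode.CLAMP));

    // Because mask is ALPHA_8 it is used directly as the pipeline's mask,
    // so the gradient only shows where the circle is opaque.
    canvas.drawBitmap(mask, 0, 0, paint);
}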
Overall Structure
[Diagram: the overall structure of the pipeline. This diagram is available in Gzipped SVG or PDF formats for use as a quick reference card.]
At the top of the diagram are the two main inputs to the pipeline: the parameters to the draw method that was called (really multiple inputs) and the “destination” image — the Bitmap connected to the Canvas.
There are four main phases in the pipeline, the details of which are covered below. While there are exceptions, most of the phases follow this pattern: there are two or more sub-phases, the first of which computes an intermediate result, while the later ones “massage” this intermediate result. These later sub-phases often default to the identity function; that is, they usually leave the intermediate result alone unless explicitly told to do otherwise by setting properties on the Paint.
Path Generation
The output of the first phase is a Path.
This phase has three sub-phases:
- An initial Path is constructed based on the draw* method that was called. In the case of drawPath, this is simply the Path supplied by the client. In the case of drawOval or drawRect, the output is a Path containing the corresponding primitive.
- If the Paint has a PathEffect, it is used to produce a new path based on the initial Path. The PathEffect is essentially a function that takes a Path as its input and returns a Path. If no PathEffect is set then the initial Path is passed on to the next phase unmodified. That is, the default PathEffect is the identity function. PathEffect implementations include CornerPathEffect, which rounds the corners of the Path, and DashPathEffect, which converts the Path into a series of “dashes” (see the sketch after this list). One interesting quirk: if the Paint object's style is FILL_AND_STROKE the PathEffect is “lied to” and told that it's FILL. This matters because PathEffect implementations may alter their behavior depending on settings in the Paint. For example, DashPathEffect won't do anything if it is told the style is FILL.
- The final sub-phase is “stroking”. If the Paint.Style is FILL this does nothing to the Path. If the style is STROKE then a new “stroked” Path is generated. This stroked Path is a Path that encloses the boundary of the input Path, respecting the various stroke properties of the Paint (strokeCap, strokeJoin, strokeMiter, strokeWidth). The idea is that later phases of the pipeline will always fill the Path they are given, so the stroking process converts Paths into their filled equivalents. If the style is FILL_AND_STROKE the resulting Path is the stroked Path concatenated to the original Path.
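As a concrete illustration of the PathEffect sub-phase, here's a minimal sketch (the wrapper method, sizes, and radii are arbitrary) that composes the two effects mentioned above; note that ComposePathEffect applies its second argument first:

void drawDashedRoundedRect(Canvas canvas) {
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(4);

    // Round the corners first, then chop the outline into 20px-on/10px-off dashes.
    paint.setPathEffect(new ComposePathEffect(
            new DashPathEffect(new float[] {20, 10}, 0),
            new CornerPathEffect(16)));

    Path path = new Path();
    path.addRect(50, 50, 250, 250, Path.Direction.CW);
    canvas.drawPath(path, paint);
}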
The method Paint.getFillPath() can be used to run the later sub-phases of this phase on a Path object. As far as I can tell this is the only significant part of the pipeline that can be run in isolation.
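For example, here's a minimal sketch (the coordinates and stroke settings are arbitrary) that uses getFillPath() to extract the filled equivalent of stroking a line segment:

void strokedOutlineExample() {
    Paint paint = new Paint();
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(10);
    paint.setStrokeCap(Paint.Cap.ROUND);

    Path src = new Path();
    src.moveTo(0, 0);
    src.lineTo(100, 0);

    // dst receives the filled equivalent of stroking src: a 100x10
    // rectangle with rounded end caps (after any PathEffect, if one is set).
    Path dst = new Path();
    paint.getFillPath(src, dst);
}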
Rasterization
Rasterization is the process of determining the set of pixels that will be drawn to. This is accomplished by generating a “mask”, which is an alpha-channel image. Opaque (0xFF) pixels on this mask indicate areas we want to draw to at “full strength”, transparent (0x00) areas are areas we don't want to draw to at all, and partially transparent areas will be drawn to at “partial strength”. This is explained more at the end of the final phase. (When visualizing this process I find that it helps to think of opaque as white and transparent as black.)
Rasterization has two completely different behaviors depending on whether a Rasterizer has been set on the Paint.
If no Rasterizer has been set then the default rasterization process is used:
- The Path is scan-converted based on parameters from the Paint (eg: the style property) and the Path (eg: the fillType property) to produce an initial mask. Pixels “inside” the Path will become opaque, those “outside” will be left transparent, and those on the boundary may become partially transparent (for anti-aliasing). The mask will end up containing an opaque silhouette of the object. The Path object's fillType determines the rule used to decide which pixels are inside versus outside. See Wikipedia's article on the non-zero winding rule for a good explanation of these different rules.
- If there is a MaskFilter set, then the initial mask is transformed by the MaskFilter. The MaskFilter is essentially a function that takes a mask (an ALPHA_8 Bitmap) as input and returns a mask as output. For example, a BlurMaskFilter will blur the mask image (see the sketch after this list). If no MaskFilter is set then the initial mask is passed on to the next phase unmodified. That is, the default MaskFilter is the identity function.
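A minimal sketch of the MaskFilter step (the blur radius and shape are arbitrary):

void drawBlurredCircle(Canvas canvas) {
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.BLACK);

    // Blur the rasterized mask, so the circle's silhouette gets soft edges.
    paint.setMaskFilter(new BlurMaskFilter(8, BlurMaskFilter.Blur.NORMAL));

    canvas.drawCircle(100, 100, 50, paint);
}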
If a Rasterizer is set on the Paint then, instead of the above two steps, the Rasterizer creates the mask from the Path. The MaskFilter is not invoked after the Rasterizer. (This seems like a bug, but I've verified this behavior experimentally. Romain Guy agreed that this is probably a bug.)
The only Rasterizer implementation in Android is LayerRasterizer. LayerRasterizer makes it possible to create multiple “layers”, each with its own Paint and offset (translation). This means that when n LayerRasterizer layers are present there are n + 1 Paint objects in use: the “top-level” Paint (passed to the draw* method) and an additional n Paint objects, one for each layer.
LayerRasterizer takes the Path and, for each layer, runs the Path through the pipeline of that layer's Paint, starting at the PathEffect step and rendering to the mask. This has some interesting consequences:
- Each layer can have its own PathEffect. These are applied to the Path that was generated by the top-level PathEffect (if one was set). So if the PathEffect of the top-level Paint is set to a CornerPathEffect and a layer's PathEffect is set to a DashPathEffect, that layer will render a dashed shape with rounded corners.
- Each layer can have its own Rasterizer. Rasterization, in other words, can be recursive.
- Each layer can have its own MaskFilter. This MaskFilter applies to a separate mask in the sub-pipeline. Remember, the entire pipeline is being run again. For example, if there are two layers and one has a BlurMaskFilter, the output of the other layer will not be blurred, regardless of the order of the layers.
- The destination Bitmap of this sub-pipeline is an alpha bitmap, so only the alpha-channel component of the Shading and Transfer phases has any relevance.
Also note that LayerRasterizer does not make use of the MaskFilter in the top-level Paint. Since the top-level MaskFilter is not invoked after invoking the Rasterizer, there is no point in setting a MaskFilter on a Paint whose Rasterizer has been set to a LayerRasterizer. (Perhaps other Rasterizer implementations could make use of the top-level MaskFilter, but LayerRasterizer is the only implementation included with Android.)
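Here's a minimal sketch of a two-layer LayerRasterizer (the offsets, radii, and colors are arbitrary, and I'm assuming layers render in the order they're added): the first layer is a blurred copy offset by (4, 4), and the second is the shape itself:

void drawWithLayers(Canvas canvas) {
    LayerRasterizer rasterizer = new LayerRasterizer();

    // Layer 1: a blurred copy of the shape, offset down and to the right.
    Paint blurred = new Paint();
    blurred.setMaskFilter(new BlurMaskFilter(6, BlurMaskFilter.Blur.NORMAL));
    rasterizer.addLayer(blurred, 4, 4);

    // Layer 2: the shape itself, with no offset.
    rasterizer.addLayer(new Paint());

    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.BLUE);
    paint.setRasterizer(rasterizer);
    canvas.drawRect(50, 50, 200, 200, paint);
}

Note that both layers only contribute to the mask; the blue color comes from the top-level Paint during Shading, so the blurred copy ends up blue as well.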
Shading
Shading is the process of determining the “source colors” for each pixel. A color consists of alpha, red, green, and blue components (ARGB for short), each of which ranges from 0 to 1. (In reality these are typically represented as bytes from 0x00 to 0xFF.)
At a high level, the output of the Shader can be thought of as a virtual image containing the source colors: the “source” image. The actual implementation doesn't use a Bitmap, but rather uses a function that maps from (x,y) to an ARGB color (the “source color”) for the given pixel, and this function is only called for coordinates where the corresponding pixel may be altered by the source color. This is really just an optimization, however.
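Conceptually, then, a Shader behaves like the following interface (this is just an illustration of the model; the real Shader class exposes no such method):

interface ConceptualShader {
    // Returns the source color for pixel (x, y), packed as an ARGB int.
    int shade(int x, int y);
}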
Like the previous phases, Shading also has two sub-phases:
- An initial “source” image is generated by the Shader. If no Shader has been set, it's as if a Shader that produces a single solid color (the Paint's color) was used. The Shader does not get the mask, the Path, or the destination image as inputs.
- If a ColorFilter has been set then the colors in the source image are transformed by this ColorFilter (see the sketch after this list). The only inputs to the ColorFilter during the pipeline are ARGB colors. The ColorFilter does not get the mask, the Path, the destination image, or the coordinates of the pixel whose color it is transforming, as inputs.
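A minimal sketch combining both sub-phases (the gradient endpoints and filter constants are arbitrary):

void drawFilteredGradient(Canvas canvas) {
    Paint paint = new Paint();

    // Sub-phase 1: the Shader produces the source colors.
    paint.setShader(new LinearGradient(0, 0, 0, 200,
            Color.WHITE, Color.BLACK, Shader.TileMode.CLAMP));

    // Sub-phase 2: the ColorFilter transforms each source color. Here
    // every color is multiplied by pure green, zeroing red and blue.
    paint.setColorFilter(new LightingColorFilter(0xFF00FF00, 0x00000000));

    canvas.drawRect(0, 0, 200, 200, paint);
}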
Transfer
Transfer is the process of actually transferring color to the destination Bitmap. The transfer phase has the following inputs:
- The mask generated by Rasterization.
- The “source color” for each pixel as determined by Shading.
- The destination bitmap, which tells us the “destination color” for each pixel.
- The transfer mode (Xfermode).
Once again, there are two sub-phases:
- An intermediate image is generated from the source image and destination image. For each (x,y) coordinate the corresponding source and destination colors are passed to a function determined by the Xfermode. This function takes the source color and destination color and returns the color for the intermediate image's pixel at (x,y). Note that the mask is not used in this sub-phase; in particular, the source alpha comes from the Shader, and the destination alpha comes from the destination image.
- The second sub-phase takes the intermediate image, the destination image, and the mask as inputs and modifies the destination image. It does not use the Xfermode. The intermediate image is blended with the destination image through the mask. Blending means that each pixel in the destination image will become a weighted average (or equivalently, a linear interpolation) of that pixel's original color and the corresponding pixel in the intermediate image. The opacity of the corresponding mask pixel is the weight of the intermediate color, and its transparency is the weight of the original destination color. In other words, a pixel that is transparent (0x00) in the mask will be left unaltered in the destination, a pixel that is opaque (0xFF) in the mask will be completely overwritten by the corresponding pixel in the intermediate image, and pixels that are partially transparent will result in a destination pixel color that is proportionately between its original color and the color of the corresponding intermediate image pixel.
If an Xfermode hasn't been set on the Paint then the behavior is as though it was set to a PorterDuffXfermode with mode SRC_OVER.
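A minimal sketch of setting a transfer mode (the particular mode is arbitrary):

void drawWithSrcIn(Canvas canvas) {
    Paint paint = new Paint();
    paint.setColor(Color.RED);

    // SRC_IN keeps the source color only where the destination is already
    // opaque; where the destination is transparent, it stays transparent.
    paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));

    canvas.drawRect(0, 0, 100, 100, paint);
}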
This is the final phase. The pipeline is now complete.
More on Porter Duff Transfer Modes
The most commonly used transfer modes are instances of PorterDuffXfermode. The behavior of a PorterDuffXfermode is determined by its PorterDuff.Mode. The documentation for each PorterDuff.Mode (except OVERLAY) shows the function that is applied to the source and destination colors to obtain the intermediate color. For example, SRC_OVER is documented as:
[Sa + (1 - Sa)*Da, Rc = Sc + (1 - Sa)*Dc]
This means:
Ra = Sa + (1 - Sa) * Da
Rr = Sr + (1 - Sa) * Dr
Rg = Sg + (1 - Sa) * Dg
Rb = Sb + (1 - Sa) * Db
Where Rx, Sx, and Dx are the intermediate (result), source, and destination values of the x color component.
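To make the arithmetic concrete, here's a hypothetical helper (my own, not an Android API) that directly transcribes the SRC_OVER equations for two packed ARGB ints. One caveat: the Porter-Duff equations operate on premultiplied components, while ordinary Color ints are unpremultiplied, so this is purely illustrative:

// Hypothetical helper: composite src over dst, both packed ARGB ints,
// assuming premultiplied components (so results stay within 0..255).
static int srcOver(int src, int dst) {
    float sa = Color.alpha(src) / 255f;
    int a = Math.round(Color.alpha(src) + (1 - sa) * Color.alpha(dst));
    int r = Math.round(Color.red(src)   + (1 - sa) * Color.red(dst));
    int g = Math.round(Color.green(src) + (1 - sa) * Color.green(dst));
    int b = Math.round(Color.blue(src)  + (1 - sa) * Color.blue(dst));
    return Color.argb(a, r, g, b);
}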
Some additional notes on the PorterDuff.Mode documentation:
- The documentation uses “Sc” and “Dc” rather than describing each red, green, and blue component separately. This is because Porter Duff transfer modes always treat the non-alpha channels the same way, and each of these channels is unaffected by all other channels except for alpha.
- SRC_OVER and DST_OVER are the only two modes that have the left-hand side of this equation, “Rc”, in their documentation. I'm guessing this inconsistency is a copy-and-paste error.
- The alpha channel is always unaffected by non-alpha channels. That is, Ra is always a function of only Sa and Da.
- The documentation for ADD refers to a “Saturate” function. This is just clamping to the range [0,1]. (I don't know why they use such an odd name for clamping, especially since “saturation” usually refers to an entirely unrelated concept when talking about colors.)
- The definition of many of these modes, including OVERLAY, can be found in the SVG Compositing Specification. The Skia code actually links to (an older version of) this document. It has some good diagrams, too.
References
- The android.graphics documentation.
- This answer to “Android Edit Bitmap Channels” on Stack Overflow. Seeing this answer motivated me to learn more about how the pipeline actually works.
- The Android codebase. Since the documentation was so sparse and there didn't seem to be much information elsewhere, I looked to the source. My initial look stopped short when I realized everything was just a wrapper around “native” code.
- Skia documentation, particularly SkPaint. Skia makes up the vast bulk of the “native” (C++) code involved.
- “Stack Overflow: How do the pieces of Android's (2D) Canvas drawing pipeline fit together?”, a question I asked on Stack Overflow. One member of the Android team actually responded, but didn't really provide the details I was looking for.
- The Skia codebase. The code for SkCanvas::drawPath is a good place to start.
- SVG Compositing Specification: W3C Working Draft 30 April 2009. This document is mentioned in the Skia code.
- SVG Compositing Specification: W3C Working Draft 15 March 2011. This document supersedes the one mentioned in the Skia code. I believe the relevant bits still apply, but there's more detailed explanation and some good diagrams.
Saturday, March 19, 2011
What's good for the Twitter is good for the Apple
A lot of people have been talking about Twitter's recent stance on third-party apps. I think Mike Loukides of O'Reilly really hits the nail on the head:
...you can't tell people where (or how) to innovate, and where not to. Innovation just doesn't work that way. The best way to prevent "think big" innovation from happening is to cut off the small ideas.
Even John Gruber, unabashed Apple fanboy, agrees:
It’s not that I think Twitter is wrong in any moral sense to do whatever they want with their own API — it’s that I think they’d be foolish to do anything that dampens the diverse ecosystem of client software that has evolved around Twitter. They’re acting against their own self-interest, but apparently don’t realize it.
Whether it's "moral" or not is open to debate. There does, however, seem to be general consensus that the changes in Twitter's policies are bad for developers, bad for users, and, in the long term, bad for Twitter.
The general form of the argument, which I wholeheartedly agree with, goes like this:
- Artificially restricting developers hurts innovation. (See Loukides's quote, above.)
- Hurting innovation hurts users.
- Hurting users hurts the platform creator.
These can be long-term effects, which makes them hard to measure. You can't just change your policy and see the effects overnight. For example, it might have taken years before a particular sort of ground-breaking third-party product would appear on a restriction-free platform, so in the short term having restrictions that forbid its existence might not appear to have significant detrimental effects. Likewise, most users won't miss the utility of a product they don't know exists, or even can exist. It generally takes a competing, less restricted platform to come along before people really start to realize what they're missing. This is further slowed down by network effects.
What's interesting is that this exact same chain of reasoning also applies to Apple and their App Store policies. Just as Twitter API clients should not "compete" with the official Twitter clients, apps for iOS are not allowed to compete with Apple products (or even other established iOS apps, to a degree). The iOS policies are actually far more restrictive on innovation than Twitter's policies, as the iOS policies largely forbid using Apple's APIs in any way that Steve Jobs didn't already imagine. "Think Different", indeed. (As an aside, I think Gruber is at least partially aware of the similarity, or he wouldn't have so carefully prefaced his statement with "It’s not that I think Twitter is wrong in any moral sense".)
The parallels run even deeper. Even people who have come out in Twitter's defense on this issue often point out that Twitter's platform was in many ways built by the Twitter community (hash-tags and at-replies were being used by users before Twitter even had special support for them) and the large variety of Twitter clients also contributed to Twitter's success. For Twitter to suddenly institute draconian policies seems like a betrayal to some.
If Twitter betrayed their users by being open at first and then closing up once they achieved popularity, then Apple is just as guilty. Apple's trick was to stretch things out over a much longer time frame. Historically, Apple hardware was touted as being quite open. The Apple IIe was easily hackable in both a software and a hardware sense. Apple's products weren't marketed as the "computer for the rest of us" just because they were easier or prettier than the competition, but also because they purportedly made it easier to create all sorts of things, including visual art, music and even computer programs. (I say "purportedly" because the Amiga and Atari ST were arguably just as good if not better when it came to certain sorts of creative work.) Remember HyperCard? A third-party equivalent to HyperCard wouldn't even be allowed under the current iOS App Store policies.
One last thing to note is Twitter's stated reason for the policy change:
If there are too many ways to use Twitter that are inconsistent with one another, we risk diffusing the user experience.
Hmmm, sounds like they're worried about "fragmentation".
Saturday, December 18, 2010
PayPal stupidity
It seems that every year, while doing my Christmas shopping for relatives in Canada, I discover another major e-commerce site that doesn't understand that billing addresses and shipping addresses aren't necessarily in the same country.
This year I was surprised to discover that PayPal, who you would think would have a clue, doesn't let you set a shipping address outside of your account's country. I was attempting to order an item from a Canadian website to be shipped to a Canadian address but because my PayPal account is a US account it will only let me create US shipping addresses.
This issue isn't unknown to PayPal, either, as evidenced by the "adding a shipping address in canada" and "How do I use a foreign address?" threads on PayPal's Community Help forums. This appears to be the official response:
It is not possible to add an foreign address to your PayPal account within PayPal. You can open a new account with your Canadian address and Canadian financial information.
Given that this appeared to be my only available option I decided to try to set up a Canadian PayPal account. This required that I come up with a new e-mail address for the account, since PayPal uses a single namespace for all accounts (arguably the right thing to do, but it doesn't interact well with the boneheaded policy of requiring a separate account for each country). Luckily I have an unlimited supply of e-mail addresses. The sign-up process then wants you to enter banking or credit card information. Of course, these are restricted to the country that you have selected, in my case Canada. I do not have a Canadian bank account or credit card (anymore). I was about to give up, but then I realized that I could just click on “my account” and bypass this step entirely. To complete my purchase I then:
- Attempted to purchase with the merchant. This was just to find out the exact amount I was going to be charged.
- Transferred funds from my US PayPal account to my Canadian PayPal account by “sending money” to myself. Having a second browser open was useful for this step. Thankfully, I was able to choose which currency to use in my US PayPal account so I didn't need to do any currency conversions by hand.
- Waited several minutes for the funds to show up in my Canadian PayPal account.
- Actually purchased the item from the merchant.
A few minutes (!) after purchasing the item PayPal actually called me on the phone. They wanted to make sure that I "still had control of my account", referring to the new account I had just created. I told them I did, and then I mentioned the annoyance of having to create a second account just so I could ship to another country. They confirmed that what I did was basically the only option, and said the reason for this is to make sure that each account complies with the laws of the country that it is associated with. This seems bogus. I could maybe understand only allowing banking information from a single country per account, but there's no good reason to put the same restriction on shipping information. PayPal does nothing with the shipping information except pass it along to the seller.
Thursday, September 09, 2010
iOS Developer Agreement: Too Little Too Late
It looks like Apple might be regaining some of their sanity given the recent update to the iOS developer agreement.
Compilers
Section 3.3.1 has been updated to only restrict the use of private APIs. This is a perfectly reasonable restriction. The clause which stated that “applications must be originally written in Objective-C” (in my mind, the most offensive part of the iOS developer agreement) has been removed. I'm very glad to see it's gone.
Interpreters
They also updated section 3.3.2, the “no interpreters” section. The language has changed but the meaning apparently hasn't:
An Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exception to the foregoing is scripts and code downloaded and run by Apple’s built-in WebKit framework.
The old version of this rule was confusing and unclear, and the new version, despite being less verbose, still leaves a lot open to interpretation. For starters, what does “install” mean in this context? If the user of the app manually constructs the executable code, is that allowed or not?
The definition of “executable code” isn't entirely clear either. My inclination is to assume that this means a Turing complete language, but one could argue that there are even non-Turing complete languages that count as “executable code”. For example, I wonder if an iOS port of the classic 8-bit educational game Rocky's Boots would run afoul of this rule. In the game you would construct machines out of various bits including Boolean logic gates and then use these machines to solve various puzzles. “Running” the machines in the game requires the interpreting of executable code.
Either way, the restrictions imposed by this section probably don't affect as many developers as the old 3.3.1 restrictions did. In some ways, though, this rule is actually worse. The old 3.3.1 only restricted how one could build apps; it didn't really limit the types of apps one could build. The no-interpreters rule, however, makes it impossible to implement several classes of useful software on the iOS platform, including:
- Web browsers that interpret JavaScript on the client.
- Emulators of legacy platforms, like 8-bit computers or old game consoles, that allow the user to run their existing software (e.g.: game ROMs, etc.).
- Educational development tools like Scratch.
- Mathematics software like Mathematica or Maple.
- Electronic circuit simulators.
- PostScript or TeX viewers (both are Turing complete languages).
Apple's announcement of the changes offers this justification:
In particular, we are relaxing all restrictions on the development tools used to create iOS apps, as long as the resulting apps do not download any code. This should give developers the flexibility they want, while preserving the security we need. [emphasis mine]
It's pretty sad to see Apple falling back on “security” as an excuse for limiting what customers can do with the products that they purchased. This is the same thing Sony did a few months ago when they removed “install other OS” (an advertised feature) from the PlayStation 3. In Sony's case the security issue had to do with their DRM. In other words, it wasn't their customers' security they were concerned for, but their profits. One has to wonder if Apple has similar motives. An interpreter acts as a sandbox, so un-trusted code execution there is generally not as big a deal as arbitrary native code execution, as might result from a buffer overflow or similar bug in native code. Last I checked, Apple wasn't prohibiting string manipulation in native apps.
Analytics
Like 3.3.1, section 3.3.9, the privacy and analytics section, has also changed for the better. The language that specifically forbade Google's AdMob is gone, meaning developers can decide which advertising platform to use.
Why?
Apple says in their announcement:
We have listened to our developers and taken much of their feedback to heart. Based on their input, today we are making some important changes to our iOS Developer Program license in sections 3.3.1, 3.3.2 and 3.3.9 to relax some restrictions we put in place earlier this year.
Apple clearly didn't anticipate the backlash that would be caused by 3.3.1 when the “originally in Objective-C” clause was added. Not only were developers angered by that rule, but since its addition, people have been looking much more closely at what's in the developer agreement. Apple doesn't want this scrutiny as it brings to light already existing ridiculous rules, like 3.3.2, and makes people more likely to question Apple's motives when new rules are introduced, like 3.3.9. It also made many developers (and tech savvy users) who liked Apple (myself included) re-evaluate whether this was really a company they wanted to purchase products from or develop for.
I think there's also a possibility that the recent changes to 3.3.9 were made in order to avoid legal issues.
Neither of these are really great reasons for Apple to change their behavior. I think Steve Jobs preferred the older set of rules, but it became clear that developers, and potentially even the law, wouldn't stand for them.
To iOS or not to iOS
The current developer agreement is a lot closer in meaning to the pre-iPad developer agreement. Back when the iPad came out I had considered getting one so that I could experiment with developing for iOS. I gave up on that plan when the “originally in Objective-C” rule was added. So now that the rules are pretty much back where they were, am I going to get an iOS device?
Probably not. Apple has lost my trust, and in order to win it back they'll have to do more than just change things back to the way they were. For starters, I'd like to see them make a rule for themselves that the developer agreement will apply not just to third-party developers, but also to Apple's own iOS apps. For existing Apple apps that violate the rules they can then choose to revise the agreement for everyone, fix the app, or remove the app. Apple already has an advantage over third-party developers, so for them to impose rules whose only apparent purpose is to strengthen that advantage is reprehensible. I'm looking at you, Safari.
Better yet would be to make it possible for people to distribute native iOS apps without going through the App Store. I'd care a lot less about what the App Store policies are if there were other ways to get apps on iDevices. I'm fine with this being a setting users need to enable (as it is on Android devices), but requiring that the user "jailbreak" their device to get such basic functionality is not acceptable.