On the API design of CGBitmapContextCreate

 

2012-06-01 03:24:38 -08:00

Let’s review the prototype of the CGBitmapContextCreate function:

CGContextRef CGBitmapContextCreate (
 void *data,
 size_t width,
 size_t height,
 size_t bitsPerComponent,
 size_t bytesPerRow,
 CGColorSpaceRef colorspace,
 CGBitmapInfo bitmapInfo
);

The arguments:

  • data may be a pointer to pixels. If you pass NULL, the context will create its own buffer and free that buffer itself later. If you pass your own buffer, the context will not free it; it remains your buffer that you must free after you release the context, hopefully for the last time.
  • width and height are what their names say they are, in pixels.
  • bitsPerComponent is the size of each color component and the alpha component (if there is an alpha component), in bits. For 32-bit RGBA or ARGB, this would be 8 (32÷4).
  • bytesPerRow is as its name says. This is sometimes called the “stride”.
  • colorspace is a CGColorSpace object that specifies what color space the pixels are in. Most importantly, it dictates how many color components there are per pixel: An RGB color space has three, CMYK has four, white or black has one. This doesn’t include alpha, which is specified separately, in the next argument.
  • bitmapInfo is a bit mask that specifies, among other things, whether components should be floating-point (default is unsigned integer), whether there is alpha, and whether color components should be premultiplied by alpha.

The most immediate problem with this function is that there are so damn many arguments. This is especially bad in a C function, because it’s easy to lose track of what each value specifies, especially when so many of them are numbers. Suppose you want to make an 8-by-8-pixel grayscale context:

CGContextRef myContext = CGBitmapContextCreate(NULL, 8, 8, 8, 8, myGrayColorSpace, kCGImageAlphaNone);

Now, without looking at the prototype or the list, which argument is bitsPerComponent, which is bytesPerRow, and which are width and height?

Objective-C’s names-and-values message syntax can help with this, as we can see in the similar API (for a different purpose) in NSBitmapImageRep:

NSBitmapImageRep *bir = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:8
                  pixelsHigh:8
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:8*4
                bitsPerPixel:8*4];

But this has other problems, notably the redundant specification of bitsPerPixel and samplesPerPixel. With that and the isPlanar argument, this method takes even more arguments than CGBitmapContextCreate. More importantly, it doesn’t solve the greater problems that I’m writing this post to talk about.

EDIT: Uli Kusterer points out that bitsPerPixel is not redundant if you want to have more bits not in a component than just enough to pad out to a byte. That’s a valid (if probably unusual) use case for NSBitmapImageRep, so I withdraw calling that argument redundant.

I’m going to use the example of both of these APIs, but mainly CGBitmapContextCreate, to talk about a few principles of API design.

The first is that it should not be possible for an object to exist in an unusable state. From the moment a freshly-created object is returned to you, you should be able to use it without it blowing up in your face.

From this principle follows a corollary: Everything an object needs in order to function, it should require when you instantiate it. Otherwise, the object would exist without the needed information—and thereby be unable to function—until you provide it.

It might seem that these APIs are as long as they are in order to uphold that principle. After all, a bitmap context needs to have someplace to put its pixels, right? (In fact, CGBitmapContextCreate’s buffer argument was required until Snow Leopard and iOS 4.) It needs to know what format the pixels should be in, right?

Now for the second principle: Any information that an object does not need in order to function should be omitted from initialization and provided afterward. In Objective-C, the most common means of this post hoc specification are readwrite properties and delegate messages. Generally, for anything that could be specified in the initializer, the post hoc way to specify it would be via a property.

We’d like to invoke the second principle and move things out of the initializer, but that would seem to conflict with the first principle: What can we move that the context does not require?

The resolution is in a third principle—one that is not specific to APIs, but applies to all interfaces, including user interfaces: An interface should have reasonable defaults for as many parameters as it can—it should only require the user to provide values for parameters for which no default can be reasonably chosen in advance.

With that in mind, let’s look at some of CGBitmapContextCreate’s arguments and see how we might apply the reasonable-defaults principle to simplify it:

  • bitsPerComponent, bitmapInfo, and colorspace: Most commonly, the caller will want 8-bit RGBA or ARGB, often with the goal of making sure it can be used on the graphics card (either by way of a CG- or CALayer or by passing the pixels directly to OpenGL). That’s a reasonable default, so these three can be eliminated.

    We could make them properties, but there’s an alternative: We could dynamite bitmapInfo and merge some of its values with bitsPerComponent in the form of several pixel-format constants. You’ve seen this approach before in QuickTime and a few other APIs. CGBitmapContext only supports a specified few pixel formats anyway, so this simply makes it impossible to construct impossible requests—another good interface principle.

  • bytesPerRow: Redundant. The number of bytes per row follows from the pixel format and the width in pixels; indeed, CGBitmapContextCreate computes this internally anyway and throws a fit if you guessed a number it wasn’t thinking of. Better to cut it and let CGBitmapContextCreate infer it.

    Making you compute a value for bytesPerRow does provide an important safety check, which I’ll address shortly.

    EDIT: Alastair Houghton points out another case for keeping bytesPerRow. This doesn’t apply to CGBitmapContextCreate, which rejects any value that doesn’t follow from the pixel format and width in pixels, but could be valid for NSBitmapImageRep and CGImage.

  • data (the buffer): Since Snow Leopard and iOS 4, the context will create its own buffer if you don’t provide one. That makes it explicitly optional, which means it is not required.

The only arguments that are truly required are the width and height, which tell the context how many pixels it should allocate its initial buffer for in the given (or default) pixel format.

In fact, if we take the above idea of replacing three of the arguments with a single set of pixel-format constants, then we don’t actually need to make any of the properties readwrite—there isn’t any reason why the owner of the context should be changing the pixel format on the fly. You might want to change the width or height, but CGBitmapContext doesn’t support that and we’re trying to simplify, not add features.

So, what problems do the current APIs solve, what problems do they raise, and how can we address both sets of problems?

  • Specifying the pixel format (bitsPerComponent, colorspace, bitmapInfo) up front saves the context having to reallocate the buffer to accommodate any pixel-size changes.

    If we simply removed the pixel format arguments from the initializer and made them readwrite properties (or a property), then the context would have to reallocate the buffer when we change the pixel format from the default (ARGB or something similar) to something else (e.g., grayscale).

    The immediate solution to that would be for the context to allocate its buffer lazily the first time you draw into it, but that would mean every attempt to draw into the context would hit that “have we created our buffer yet” check.

    A better solution would be to follow the above idea of condensing the specification of the pixel format down to a single constant; then, we could have a designated initializer that would take a pixel-format value, and a shorter initializer for the default case that calls the DI with the default pixel-format value.

  • Specifying the buffer as a plain pointer (or pointer to one or more other pointers) requires the dimensions of the buffer to be specified separately.

    It’s a mystery to me why CGBitmapContextCreate doesn’t take a CFMutableData and NSBitmapImageRep’s initializers don’t take an NSMutableData. With these, the length in bytes would be associated with the buffer, enabling the context/rep to check that the length makes sense with the desired (or default) pixel format. This would be better than the current check in two ways: First, the current check only checks bytesPerRow, ignoring the desired height; second and more importantly, the current check only checks the value you gave for bytesPerRow—it can’t check the actual length of the buffer you provided.

    (From that, you can derive a bit of guidance for using the current API: If you pass your own buffer, you should use the value you computed for bytesPerRow in computing the length of your buffer. Otherwise, you risk using one stride value in allocating the buffer and telling a different one to CGBitmapContextCreate. There’s a sketch of this after this list.)

  • Requiring (or even enabling) the buffer to be provided by the caller is redundant when the API has all the information it needs to allocate it itself.

    This was especially bad when the buffer was required. Now that CGBitmapContext can create the buffer itself, even having that optional input is unnecessary. We can cut this out entirely and have the context always create (and eventually destroy) its own buffer.

  • The caller must currently choose values for parameters that are not important to the caller.

    The current API makes you precisely describe everything about the context’s pixels.

    WHY? One of the central design aspects of Quartz is that you never work with pixels! It handles file input for you! It handles rendering to the screen for you! It handles file output for you! Core Image handles filtering for you! You never touch pixels directly if you can help it!

    99% of the time, there is no reason why you should care what format the pixels are in. The exact pixel format should be left to the implementation—which knows exactly what format would be best for, say, transfer to the graphics card—except in the tiny percentage of cases where you might actually want to handle pixels yourself.
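
To illustrate the stride guidance from the buffer bullet above with the current API, here’s a minimal sketch (assuming width, height, and myGrayColorSpace are already defined) that computes one stride value and uses it both to size the buffer and as the bytesPerRow argument:

size_t bitsPerComponent = 8;
size_t bytesPerPixel = 1; //One 8-bit gray component, no alpha.
size_t bytesPerRow = width * bytesPerPixel;
void *buffer = calloc(height, bytesPerRow); //Sized using the same stride we pass below.
CGContextRef context = CGBitmapContextCreate(buffer, width, height, bitsPerComponent, bytesPerRow, myGrayColorSpace, kCGImageAlphaNone);
//…draw into the context, then create an image or read the pixels…
CGContextRelease(context);
free(buffer); //We own the buffer, so we free it, and only after releasing the context.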

With all of this in mind, here’s my ideal API for creating a bitmap context:

typedef enum
#if __has_feature(objc_fixed_enum)
: NSUInteger
#endif
{
    //Formats that specify only a color space, leaving pixel format to the implementation.
    PRHBitmapContextPixelFormatDefaultRGBWithAlpha,
    PRHBitmapContextPixelFormatDefaultRGBNoAlpha,
    PRHBitmapContextPixelFormatDefaultWhiteWithAlpha,
    PRHBitmapContextPixelFormatDefaultWhiteNoAlpha,
    PRHBitmapContextPixelFormatDefaultCMYK,
    PRHBitmapContextPixelFormatDefaultMask,

    PRHBitmapContextPixelFormatARGB8888 = 0x100,
    PRHBitmapContextPixelFormatRGBA8888,
    PRHBitmapContextPixelFormatARGBFFFF, //128 bits per pixel, floating-point
    PRHBitmapContextPixelFormatRGBAFFFF,
    PRHBitmapContextPixelFormatWhite8, //8 bpc, gray color space, alpha-none
    PRHBitmapContextPixelFormatWhiteF, //Floating-point, gray color space, alpha-none
    PRHBitmapContextPixelFormatMask8, //8 bpc, null color space, alpha-only
    PRHBitmapContextPixelFormatCMYK8888, //8 bpc, CMYK color space, alpha-none
    PRHBitmapContextPixelFormatCMYKFFFF, //Floating-point, CMYK color space, alpha-none

    //Imagine here any other CGBitmapContext-supported pixel formats that you might need.
} PRHBitmapContextPixelFormat;

@interface PRHBitmapContext: NSObject

- (id) initWithWidth:(NSUInteger)width
    height:(NSUInteger)height;
- (id) initWithWidth:(NSUInteger)width
    height:(NSUInteger)height
    pixelFormat:(PRHBitmapContextPixelFormat)format;

//There may be an initializer more like CGBitmapContextCreate/NSBitmapImageRep's (taking individual pixel-format values such as color space and bits-per-component), but only privately, to be used by the public DI.

//Mutable so that an asynchronous loader can append to it. Probably more useful in an NSBitmapImageRep analogue than a CGBitmapContext analogue.
@property(readonly) NSMutableData *pixelData;

@property(readonly) NSColorSpace *colorSpace;
@property(readonly) bool hasAlpha;
@property(readonly, getter=isFloatingPoint) bool floatingPoint;
@property(readonly) NSUInteger bitsPerComponent;

- (CGImageRef) quartzImage;
//scaleFactor by default matches that of the main-menu (Mac)/built-in (iOS) screen; if it's not 1, the size (in points) of the image will be the pixel size of the quartzImage divided by the scaleFactor.
//TARGET_OS_MAC is also 1 on iOS, so test for the more specific platform first.
#if TARGET_OS_IPHONE
- (UIImage *) image;
- (UIImage *) imageWithScaleFactor:(CGFloat)scale;
#elif TARGET_OS_MAC
- (NSImage *) image;
- (NSImage *) imageWithScaleFactor:(CGFloat)scale;
#endif

@end
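
As noted in the header comments, the shorter initializer would simply hand a default pixel format to the designated initializer. A minimal sketch of that forwarding, assuming the default is the generic RGB-with-alpha case:

- (id) initWithWidth:(NSUInteger)width
    height:(NSUInteger)height
{
    return [self initWithWidth:width
        height:height
        pixelFormat:PRHBitmapContextPixelFormatDefaultRGBWithAlpha];
}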

With the current interface, creating a context generally looks like this:

//width, height, myColorSpace, and myBitmapInfo are assumed to be defined elsewhere.
size_t bitsPerComponent = 8;
size_t bytesPerComponent = bitsPerComponent / 8;
bool hasAlpha = true;
size_t bytesPerRow = (CGColorSpaceGetNumberOfComponents(myColorSpace) + hasAlpha) * bytesPerComponent * width;
CGContextRef context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, bytesPerRow, myColorSpace, myBitmapInfo);

With an interface such as I’ve described, creating a context would look like this:

PRHBitmapContext *context = [[PRHBitmapContext alloc] initWithWidth:width height:height];

Or this:

PRHBitmapContext *grayscaleContext = [[PRHBitmapContext alloc] initWithWidth:width height:height pixelFormat:PRHBitmapContextPixelFormatWhite8];

Free stuff on the O’Reilly store

 

2012-05-07 07:04:05 -08:00

After downloading a bunch of free O’Reilly books on a run through the Kindle Store, I decided to see which of them were available in O’Reilly’s own store. Here’s what I found.

(Note that I have no intention of updating this list periodically, so some of these may no longer be free or available if you’re reading this some months after I compiled it.)

Publishing

“What Is”

Adobe stuff

Microsoft stuff

“Data products” and “data science”

I have absolutely no idea what order these should be read in, or even really what they’re about.

Some of these are currently in a list of “Free Reports” along the side of O’Reilly’s “Data” category.

Assorted

“Bibliographies”

These are basically catalogs of other (not so free) books.

Analysis of Matias Tactile One Keyboard

 

2012-03-22 12:44:58 -08:00

I can’t really call this a “review”, because I don’t have one (it costs two hundred freaking dollars), but I did notice some things about the Matias Tactile One Keyboard (which I saw an ad for in MacTech magazine) that I wanted to write down.

  • Weird fn key placement: They put it where the AEK2 and Tactile Pro have the right ctrl key. (Indeed, it looks like the lower-right caps are the same sizes as their mirror counterparts in the lower-left.)
  • Weird eject (⏏) key placement: It’s fn-return. Huh?
  • Weird ⌦ key placement: It’s where the AEK2 and Tactile Pro have the right option key. That’s because…
  • No six-block. The AEK2 and Tactile Pro have a block of six keys—four navigation-related, plus “help” and ⌦—above the arrow keys, between the letter-board and number pad. The One, for some reason, omits this. The full photo shows an iPhone resting there, but I see a missed opportunity in its place: For two hundred freaking dollars, they could have put in an iPhone/iPod dock in the gap between the function keys, and kept the six-block.
    • As on Apple’s laptops and Wireless Keyboard, they moved the navigation keys onto the arrow keys as their fn variants.
  • PC-style number pad (with double-height plus key and no equals key), even on the so-called Mac version. This made sense for the Unicomp SpaceSaver M and Das Keyboard Model S for Mac, since they’re PC keyboard manufacturers and probably reused the same PCB (also, both of those keyboards’ fn key placement follows from the Windows layout’s extra “menu” key in the same area), but I can’t figure why Matias did this.
  • An extra tab key where the clear key used to be. I like this idea.

The headlining feature of the “One Keyboard” series (and the reason why they call it that) is the fact that you can switch it between talking to your Mac over USB and talking to some other device (nominally an iPhone, iPod touch, or iPad) over Bluetooth. If they brought that and the number-pad tab key to the Tactile Pro, that might be worth $150 to me. (As for $200? No. Never have I wanted to use the same keyboard on both my Mac and iPad.)

Xcode and Friends

 

2012-02-17 04:35:03 -08:00

Xcode’s distribution has changed greatly in 4.3.

First, it’s now simply an app, without an Installer package. If you install it through the Mac App Store, it’ll install directly–no more “Install Xcode.app” (which I think I read earlier that you have to delete, although I can’t speak to this myself). If you install it from the disk image, it’s a drag-and-drop install.

Second, the set of applications that come with it (now bundled inside) is now much smaller. The other developer applications have been split out into separate disk images that are only available on connect.apple.com.

So, I thought I’d make a catalog listing where everything is now. Every one of the below sections corresponds to a disk image on connect.apple.com, and with the exception of Xcode, every one of those disk images is only available from connect.apple.com—only Xcode is available from the Mac App Store.

Xcode (and the other core tools)

The core tools are inside Xcode regardless of where you get it from. That will be either:

  • the copy of Xcode you installed from the Mac App Store, or
  • the copy of Xcode on the disk image from connect.apple.com.

The applications bundled inside Xcode are:

  • Application Loader: One way of submitting your application to the Mac App Store.
  • FileMerge: Differencing tool. As its name implies, primarily for visually merging three versions rather than comparing two.
  • Icon Composer: Create IconFamily (.icns) files from multiple individual images.
  • Instruments: Use this to make your app more efficient, or to hunt zombies.
  • OpenGL ES Performance Detective

Accessibility Tools for Xcode

  • Accessibility Inspector (a.k.a. UIElementInspector): Examine the Accessibility properties of NS/HIViews in any application.
  • Accessibility Verifier: Automatically runs through an application’s accessible object hierarchy, including windows and views, and produces an outline of things it does wrong or fails to do that could cause accessibility problems for users.

Audio Tools for Xcode

  • AU Lab: Set up chains of Audio Units to filter audio or route it from a source to a destination.
  • CoreAudio: Sample code for some of Core Audio’s older APIs.
  • HALLab: Tool for inspecting the audio object hierarchy.

Auxiliary Tools for Xcode

  • 64BitConversion: Tools for porting 32-bit Cocoa code to 64-bit.
  • Clipboard Viewer: See every type on the clipboard (general pasteboard).
  • CrashReporterPrefs: Change how your system reacts to an application crashing.
  • Dictionary Development Kit: Build your own dictionaries for Dictionary.app.
  • Help Indexer: Makes your Help Books searchable. (Only useful for Mac developers.)
  • LegacyAPISurvey: Tells you what APIs you’re using that are in danger of being deprecated.
  • Modem Scripts: Examples of CCL scripts (essentially, modem drivers).
  • PackageMaker: Build Installer packages.
  • Repeat After Me: Test the Speech Synthesis Manager, exporting to either phoneme text or an AIFF file.
  • SleepX
  • SRLanguageModeler: Something to do with Speakable Items, but I couldn’t figure out how to work it.

Dashcode for Xcode

  • Dashcode: IDE for making Dashboard widgets. It “includes a design canvas that produces the graphics assets for you, as well as a powerful code editor, and even a full JavaScript debugger”.

Graphics Tools for Xcode

  • CI Filter Browser (Dashboard widget)
  • OpenGL Profiler
  • OpenGL Shader Builder
  • Pixie: Like DigitalColor Meter, but more developer-oriented. YMMV on which is better.
  • Quartz Composer: Create compositions that can be used within applications or as screensavers. Also useful for developing Core Image filters.
  • Quartz Composer Visualizer
  • Quartz Debug: Monitor your computer’s graphics performance (global frame rate), toggle various settings in the Quartz Compositor, and enable or disable the HiDPI resolutions.

Hardware IO Tools for Xcode

  • Bluetooth Diagnostics Utility
  • Bluetooth Explorer
  • btdump
  • IORegistryExplorer: See what’s connected to your Mac.
  • Network Link Conditioner (prefpane): Makes your internet connection pretend to suck so you can see how your app performs under such conditions.
  • PacketLogger: Another Bluetooth tool.
  • USB Prober: Inspect your USB buses and the devices connected to them.
  1. You’ll understand the bug better. This means you can write a better bug report, which will help Apple fix it more quickly (meaning you may get the fix more quickly).

  2. They’ll understand the bug better. This, too, helps Apple fix it more quickly.

  3. You may find that it is not a bug in the API at all, but that you were misusing it. Perhaps you were using something on a thread that you shouldn’t have been, or expecting some argument to be used a certain way when it’s actually used differently.

    In this case, you may be able to use the API after all, saving you the time you would have spent hacking around a non-bug. This also saves them the time they would have spent triaging and eventually responding to a non-bug.

    If your misunderstanding was borne out of poor documentation (misleading, inaccurate, vague, incomplete), you can file a bug report about that instead. Then the documentation gets better and future users of the same API avoid making the same error you did.

A test app isn’t appropriate for every kind of Radar, but when it is, including it helps everyone.

iOS device user guides on the iBookstore

 

2011-10-16 14:00:50 -08:00

Apple has user guides for their three iOS devices (not counting the TV, in which iOS is an implementation detail), for both iOS 4.3 and 5, free on the iBookstore:

iOS 4.3

iOS 5

Much faster

 

2011-10-15 15:13:33 -08:00

I’ve just pushed a couple of improvements to my ISO 8601 date formatter. Previously, it was pathetically slow compared to C standard library parsing and unparsing; now, it is faster.

Timing ISO8601DateFormatter
Time taken: 0.130194 seconds
Number of dates and strings computed: 10000 each
Time taken per date: 0.000013 seconds
Timing C standard library parsing and unparsing
Time taken: 0.192645 seconds
Number of dates and strings computed: 10000 each
Time taken per date: 0.000019 seconds

You’ll want revision [61d2959c6921] or later.

My thanks to Sam Soffes and Rudy Richter for alerting me to the speed problem.

Edited at 16:35. Previously, it was almost as fast as C stdlib. Now it is faster.

Desktop picture: Yawning Void

 

2011-09-15 21:10:04 -08:00

This is a desktop picture for Macs running at 1366 by 768, such as the 11-inch MacBook Air:

Yawning void

No, it’s not pure black. Yes, that bar at the top is part of it.

It’s meant to be used at that resolution and no other, with menu-bar translucency turned on. Scaling it, especially vertically, will make it look wrong.

If you want to adapt it to a different size, here’s the original document, editable in Lineform. You should scale the document and the background to the desired screen size, and the menu-bar background to the desired width.

Conferences 2011

 

2011-09-12 06:12:23 -08:00

It’s that time again! Just like last year, there are a bunch of different conferences going on; unlike last year, I’m not going to even attempt to list all of them.

The two that I have a reason to mention are:

  • Voices That Matter. This time it’s in Boston, November 12 and 13. Their early bird pricing is still on through September; I’ll let their site tell you more. As I do whenever I mention it here, I have a coupon code for it, which is BSTBLOG. As usual, due to time and expense constraints, I won’t be attending this one.
  • MacTech Conference. I’m presenting again—same topic as last year, how to recognize, find, and fix bugs in Cocoa applications, but this time it’s the Xcode 4 edition. The early bird period has ended, but you can get the same $500 off by signing up through this referral link. The time and place is November 2, 3, and 4 in Universal City, California.

Last year at the MacTech Conference, I brought with me some of my useful Cocoa links business cards. I’ll be doing that again this year, so if you attend the MacTech Conference, feel free to ask me for one.

I hope to see you at the MacTech Conference!

SD card sleeve

 

2011-07-10 00:54:21 -08:00

If you have an SD card without a case, you may find this handy.

Front of sleeve, labeled.
The back of the SD card sleeve, closed.
The back of the SD card sleeve, open, showing the card inside.

File: SD-card-sleeve.pdf

A PDF file from which you can print out nine sleeves per US Letter sheet. Make sure you print at 100% scale.

Assembly instructions

  1. Cut out a sleeve along the thick lines.
  2. Place the card in the center of the sleeve so that it is outlined by the thin lines.
  3. Fold the bottom (rectangular flap) over first.
  4. Fold the side flaps over.
  5. Secure the bottom and side flaps with one square piece of three-quarter-inch Scotch tape.
  6. Fold the top flap over and inside.
  7. Flip the sleeve over and label it on the seamless side.

How to make the Help key do something useful

 

2011-06-14 01:51:52 -08:00

If you want to see the techniques explored in this blog post in a working application, download ContextHelpTest and/or its source code.

If you’ve used Mac OS X with an Apple extended keyboard of some sort, chances are you’ve seen this:

The Help cursor, a question mark.

That’s the cursor that comes up when you press the Help key. And every time, if you click while that cursor is up, you get a beep and the cursor changes back. (If you press a key instead, the cursor changes back and the keystroke goes through, which often will still get you a beep.)

So most of us probably forget the Help key exists, and curse it when we are reminded of it by pressing it by accident.

But what does it do, really? What is it meant to do?

Every responder can respond to a message called helpRequested:. The default implementation is to ask the help manager for the attributed string set as help for itself. If you’ve never set any help for it, then the help manager will return nil, and the default helpRequested: implementation will pass the message on to the next responder. If you have set help for the responder, then it will tell the help manager to show that help.

So here’s what you need to do:

  • To associate a help text with a view or other object, send the help manager a setContextHelp:forObject: message, passing the help attributed string and the view/other object. When the object is a view, this is all you need to do for Help-clicking on the view to do the right thing.
  • To programmatically show context help for an object, use showContextHelpForObject:locationHint:. Note that you pass the object to look up, not the help text, here. The location hint is where the user might have Help-clicked to bring up the context help.
  • To make your custom view able to show context help for things it draws within itself, override helpRequested:, find what the user clicked on, lazily set the context help for the clicked object (if appropriate), and look it up. If the user didn’t click on anything or you don’t have any help worth providing for it, pass the message on to super.
  • To programmatically enter Help-key mode, send the activateContextHelpMode: action message to the application object. If you want to make a control or menu item in a nib do this, connect it to that action on the First Responder.

Note that the help manager does not retain your definable objects. If an object that has help set for it is deallocated, that will cause a crash later on. Therefore, when setting context help for a view, the view itself should do so within its initWithFrame: (or other designated initializer) and initWithCoder: methods, and remove itself from context help in dealloc. (I don’t know how this goes under GC or ARC.)
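
Here’s a minimal sketch of that pattern in pre-ARC, manual-retain-release style (MyHelpfulView and its help text are hypothetical): the view registers its own context help in initWithFrame: and initWithCoder:, and unregisters itself in dealloc.

@interface MyHelpfulView : NSView
@end

@implementation MyHelpfulView

- (void) registerContextHelp {
    NSAttributedString *help = [[[NSAttributedString alloc]
        initWithString:@"Drop files here to add them to the queue."] autorelease];
    [[NSHelpManager sharedHelpManager] setContextHelp:help forObject:self];
}

- (id) initWithFrame:(NSRect)frame {
    if ((self = [super initWithFrame:frame]))
        [self registerContextHelp];
    return self;
}

- (id) initWithCoder:(NSCoder *)decoder {
    if ((self = [super initWithCoder:decoder]))
        [self registerContextHelp];
    return self;
}

- (void) dealloc {
    //The help manager does not retain this view, so unregister it before it goes away.
    [[NSHelpManager sharedHelpManager] removeContextHelpForObject:self];
    [super dealloc];
}

@end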

You might also have noticed that I’m not simply saying “view” or “responder”. The help manager does not restrict you to setting help for responders or views; any object can have context help set for it. This includes model objects. This is very, very useful for implementing a view that does selective context help on things within it: You set context help for each model object, and the view tells the help manager to show help for the clicked model object.

When setting context help for a non-responder, the controller that owns it should do that, and should remove the object from context help before releasing the ownership.
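
For the model-object case, the custom view’s helpRequested: override might look something like this rough sketch, where MyChartItem, itemAtPoint:, and summary are hypothetical stand-ins for however your view maps a click to a model object:

- (void) helpRequested:(NSEvent *)event {
    NSPoint point = [self convertPoint:[event locationInWindow] fromView:nil];
    MyChartItem *item = [self itemAtPoint:point]; //Hypothetical hit-testing method.
    if (item) {
        NSHelpManager *helpManager = [NSHelpManager sharedHelpManager];
        //Set (or refresh) the context help for this model object, then show it.
        NSAttributedString *help = [[[NSAttributedString alloc]
            initWithString:[item summary]] autorelease];
        [helpManager setContextHelp:help forObject:item];
        [helpManager showContextHelpForObject:item locationHint:[NSEvent mouseLocation]];
    } else {
        //Nothing helpful here; let the next responder handle it.
        [super helpRequested:event];
    }
}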

So, here’s a test app.

Context help works the usual way on the word-count field:

Help-clicking on the word-count field shows a tooltip-looking popover that shows the context help associated with the field.

The text view is a subclass of NSTextView that implements helpRequested: by determining what word the user clicked on (by sending itself characterIndexForInsertionAtPoint: and then using a CFStringTokenizer to walk through its words) and then looking that word up in the dictionary.

Help-clicking on a word in the text view shows a tooltip-looking popover that shows the dictionary definition associated with the clicked-on word.

Naturally, since not everybody has a Help key anymore, I provided an alternative.

The Edit menu contains a Define menu item, with the keyboard shortcut of ctrl-slash.

I look forward to seeing what uses you come up with in your apps for this.

The Temporal Orphanage

 

2011-05-31 18:11:09 -08:00

The Temporal Orphanage was a novel solution to the problem of parentless children.

Every child who entered the Orphanage had to sign an agreement stating that they would return in 20 years. As the only licensed user of time-travel technology, the Orphanage used it to send the 20-year-old orphan back in time 20 years to be their own adoptive parent.

Deferments were granted, on a case-by-case basis, to orphans who would eventually marry. The Orphanage preferred the children to be raised by couples, so they granted nearly every request, but undeferred single parents were not uncommon. Every orphan found out within a month of their acceptance whether they would be single or married when they arrived to pick themselves up, and no review was necessary at the time of the deferment application because they already knew whether they would accept it.

A few new laws had to be passed shortly after the Orphanage commenced operations. Some were amendments to or repeals of legislation that had assumed time travel did not exist or was completely banned, but one new law that could be tied directly to the Orphanage required life insurance companies to consider the number of years that each insured person lived, not merely the difference between their dates of birth and death.

The Orphanage was financed in two parts. The first, and minor, part came from a toy store occupying the other half of the building. Some orphans simply kept all of their toys as they grew up to bring back with them all at once, but this was disfavored by the Orphanage, both because it did not bring in income and because the greater mass made the travel more expensive.

The other part was an apartment complex. Grown-up orphans would begin renting or buying their own residence in their late teens, but then have to go back 20-plus years to a time when they had no home besides the orphanage. Each parent who arrived rented an apartment within the Orphanage’s own apartment building—guaranteed to have an opening—until they could find another, cheaper temporary apartment elsewhere. The trickiest part was remembering to pay the rent upon returning to the apartment they’d left a calendar month/20 biological years ago.

The Orphanage was almost closed down by OSHA, but managed to remain open with the promise of no further hiring, after a power surge uncaused the deaths of all of the staff.

Choose any—or, preferably, none:

  • Make custom buttons, but don’t hire a professional designer to design them. Draw your UI in your pirated copy of Photoshop and make the best buttons you, fellow non-graphically-superpowered programmer, can manage, which look like you downloaded them from GeoCities in 1995 or got them out of a “1001 Buttons!” book from the same year.
  • Make custom UI controls (especially buttons) simply because you can.
  • Fill your App Store page’s “screenshots” section with images that are not purely screenshots. Showing an iPhone 4 with your app on it is a minus. Showing an older iPhone is another minus. Putting in your own inane blather marketing copy with your paint program’s text tool is a minus. Putting the iPhone and/or text on any kind of background is a minus. Any image that does not show the app at all is 500 minuses.
  • Show screenshots, but only of some of the app. Leave me wondering whether your app has the feature or UI pattern I’m looking for. (If your app is free, I’ll try it and find out. If it’s not, I won’t.)
  • Custom backgrounds without (good) custom UI. Extra debit if your background is plain white.
  • Abbr btn nms.

How to keep me actually interested and maybe even get me to buy your app:

  • If you make custom UI, make it awesome. Make a truly original UI that would belong on the cover of Macworld. Make it a custom UI with a purpose that guides and justifies the customizations. (Beware the difference between “purpose” and “theme”.) Otherwise, stick to plain Cocoa Touch controls wherever possible. Functional beats ugly.
  • If you go functional, follow the HIG. Either way, keep things clean and well-organized. Don’t force too much into a single screen. If you “have to” pack multiple things on a line, that’s too much. If you “have to” abbreviate words, that’s too much. Consider cutting features; simplicity is a virtue. If you need to break things out into other views, do it.
  • The screenshots section is for screenshots only. If you need to indicate a gesture, composite in a finger and an arrow, and don’t do that in more than one screenshot. No added text. Ever.

Don’t miss the comments. I’m sure some of you have some other don’ts to suggest that I forgot.

Portal 2

 

2011-04-24 16:45:06 -08:00

The best works of fiction all have in common a certain feeling.

It comes at the end. You’ve finished it. There is no more; you know this, and it hurts you, because you want more, you want the enjoyment you’ve just had to continue forever, and yet you know that if there were always more, if it ran forever, eventually it would get boring, so it is good that it is over, and yet it hurts.

Portal 2—which is great, all the way through—leaves you with that feeling. The ending is great, the best ending I’ve ever seen in a video game, and it hurts.

You should play it.

Play the first one first, and then play the second. And then you should probably play them both again—I know there’s some stuff I’ll view differently when I play Portal 2 the second time.

The only thing it left me wanting was a soundtrack album. I only bought the one song (you know the one) from the Orange Box soundtrack, but I would happily buy the whole soundtrack to this game. The music is as wonderful as the game it accompanies. I hope, someday, preferably someday soon, I can go to the iTunes Store or Amazon MP3 and buy it.

Valve and everybody else involved in making this game (and the original): You rock.

EDIT 2011-06-12: Just found this phenomenon on TVTropes. (To be clear, they had it first.)

Writing, n.

 

2011-03-24 15:01:56 -08:00

Over on Tumblr (where I have also started writing), Andy Matuschak and Christopher Bowns have challenged each other:

Long story short: Andy and I threw down our respective gloves after throwing back the third glass of whiskey, and came up with this: every Tuesday, we’ll spend 20 minutes of our respective shuttle rides to work writing something. It doesn’t matter what, but when those 20 minutes are up, you publish it.

After Christopher published that, Colin Barrett entered with this piece. Discussion ensued on Twitter, as it so often does, and it included this tweet from Christopher:

@cbarrett Perfect. I think “overthinking it” can cripple your writing, but it’s hard to let go of, because it’s so useful in engineering.

I responded there, but I’d now like to expand in a somewhat different direction here.

Writing programs is writing.

Consider this: If you write a program that no human can read, you have written it badly, and may have even written an outright bad program. Switch out “program” for “manuscript” and I hope you see my point.

Don’t confuse this with literate programming. That has you write a more-or-less human-only text, and then run a program such as WEB or CWEB to translate it to source code.

The problem with literate programming is the problem with all translation from one language to another, especially automatic translation: Error.

Just as translating a human-only text from one human language to another human language can produce a garbled mash, so can translating a program from one language to another produce a nonsense, or at least broken, program.

Hence the justified suspicion of programs that “upgrade” a program even just from one version of the same language to another: It isn’t the same language; not really. Similar, but not the same. Going from Python 2 to Python 3, for example, is not much different from going from 18th-century English to 21st-century English. Do these words mean the same thing? Maybe not. This is why such translators invariably come with warnings to check the hell out of the result.

No, I’m not talking about literate programming. I’m talking about regular programming, where you write the true text of the program, the text that the interpreter interprets or the compiler converts into machine code.*

Programming is a fusion of a couple of skills. It’s often called “software engineering”, but that’s only half of it. Writing is the other half. I define “writing” here as writing text into an editor, whatever the language, and “engineering” as designing a thing to be built. Things like OOP, unit testing, and design patterns are aspects of software engineering. Things like DRY, commenting practices, naming practices, and style rules are aspects of writing.

If you write a program well, you write it not only for the interpreter or compiler, but for any humans who will read the program after you, including yourself. It is a text like any other.

Writing a program is writing. Every rule, observation, and prescription you apply to writing for humans only, apply also to programming, and vice versa. If the rule does not work, chances are it was not a good rule to begin with.


Suggested reading


* This is, itself, a form of translation; the C standard even calls it such. That’s part of why compiler bugs are possible, and why compiler authors are extremely careful about their work, which is why compiler bugs are so rare.

The application delegate and the new singletons

 

2011-03-18 16:26:57 -08:00

Here is a global variable:

Wizard *gWizard;

I’ll call this a zeroth-order global, on the premise that I need to talk to exactly zero objects (including classes) to gain access to this object.

Next, let’s look at a singleton:

[Wizard sharedWizard]; //hope he's not busy

I’ll call this a first-order global, as we need to ask the class for it (1 step) to gain access to it.

Now, here’s a second-order global:

MyAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];

(I use UIApplication here because I see this most frequently in Cocoa Touch code, but the pattern applies equally to Cocoa.)

And here’s a third-order global:

Wizard *wizard = [appDelegate wizard];

I need to (1) ask the UIApplication class for the application object, (2) ask that for its delegate, and (3) ask that for the wizard. (Assume here that wizard is a property, not a factory method that creates Wizards on the fly.)

None of these is any less global. If I can get to it from anywhere in the program without knowing about it directly, it is global.

Therefore, all the problems of globals apply:

  • What if two threads want to use the same Wizard?
  • What if the Wizard has a delegate of its own, and I have two objects that want to be its delegate?
  • What if the Wizard keeps internal state that may be corrupted by multiple objects trying to use it? (Nothing should have to worry about this outside of the Wizard itself.)

Your application’s objects form a graph. It should not be a complex one like this:

At the top, the application object. From it, its delegate. From it, your controller objects and a wizard. From each controller object, a path (colored in red) back to the delegate and then to the wizard.

Whenever you have paths bouncing around off of other objects like that, that’s a problem. The red arrows in the problem graph show where you violate the Law of Demeter.

Your object graph should, instead, be straightforward:

At the top, the application object. From it, its delegate. From it, your controller objects. From each one, a wizard.

Note that each of your controllers should own—or, if you prefer, hire—a Wizard all to itself. This eliminates contention between objects and reduces the likelihood of contention between threads (assuming each of the owning objects is supposed to only work on a single thread and not juggle multiple threads).
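
To make that concrete, here’s a minimal pre-ARC sketch of a controller that owns its Wizard outright instead of fetching one through the application delegate (MyPotionsController is a hypothetical name; Wizard is the example class from above):

@interface MyPotionsController : NSObject {
    Wizard *wizard;
}
@property(nonatomic, retain) Wizard *wizard;
@end

@implementation MyPotionsController
@synthesize wizard;

- (id) init {
    if ((self = [super init])) {
        //This controller creates and owns its Wizard; nothing else reaches in for it.
        wizard = [[Wizard alloc] init];
    }
    return self;
}

- (void) dealloc {
    [wizard release];
    [super dealloc];
}

@end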

If contention is not a problem and you have a good reason why there should be only one Wizard, such as memory pressure or union regulations, then use a singleton. But use a real singleton, and only when necessary, and beware of singletons in disguise.

The new and improved Cocoa links card

 

2011-03-15 07:16:12 -08:00

I’ve previously mentioned that I made a business card full of useful Cocoa and Cocoa Touch links to give to new Cocoa and Cocoa Touch programmers at events such as CocoaHeads.

Today, I have updated it and given it a web page. 1-up and 10-up (US Letter) PDFs are available there, as well as the full list of unshortened links.

I encourage you to print out the 10-up onto perforated business card paper, or have it professionally printed (keeping in mind that you probably won’t need 1000 of them), and make the cards available to novice Cocoa and Cocoa Touch programmers at the CocoaHeads or NSCoderNight events you attend. Just please be sure to print both sides, since my credit link is on the back.

Mentoring

 

2011-03-08 20:08:12 -08:00

I originally wrote this for the Adium development list, in a discussion of the upcoming Google Summer of Code, for the benefit of those of our developers who have never mentored students in GSoC before and may be considering doing so. Read this in that context; for example, “our” means “the Adium project’s”.


  • Work with students from the very beginning on the quality of their [application] submissions. 90% of the submissions will be crap. Look past this. See what lies within. If it can be improved, try to get them to improve it. Those who do will be good students.

  • Spur your student to work on code. Make sure they’re committing at a regular and satisfactory rate.

  • Ensure your student actually writes code and doesn’t just crib together tutorials and/or Stack Overflow answers and/or ask you to provide code* for various tasks they need to accomplish. Wildly varying coding styles, inconsistent variable and method naming, and poor indentation are warning signs.

  • Communicate constantly. This includes the aforementioned spurring your student to work as well as being available to answer any questions they have about where things are in our code base, what they need to do to achieve certain goals or specific sub-tasks, etc. These questions are virtuous; you should encourage them, just short of demanding them. The student is not a student in name only; they are here both to write code and to learn.

  • Find a medium that works for both of you. For me, email (or Twitter, nowadays) would be best. Maybe you prefer IM, or even voice chat/phone (if it isn’t too expensive). Don’t try to force your habits on the student; if you love the phone but they hate it (or vice versa), a happy medium will work better.

  • Be sure they understand and use Mercurial and good VCS practices generally. Frequent, discrete commits; neither waiting “until it’s done” to commit (it should compile) nor committing half-done work periodically (e.g., daily) nor committing amalgams of unrelated work (they should commit specific sections that comprise a single change).

  • Nowadays, I recommend that you have them fork our Bitbucket repo and push to their fork.

  • Review their code constantly. Subscribe to the commits list (or their fork’s commits feed) and review everything they write.

  • Read their commit messages, not just their code. Lists are a warning sign (unrelated changes lumped together). Inadequately describing the change is also a problem. Work the student out of these habits as soon as possible.

  • Don’t wait until mid-term exams to sit your student down for a serious talk about their work or lack thereof. If they’re committing garbage, set them straight as soon as possible—do not wait. If they don’t commit, get them writing and committing as soon as possible.

  • Always be ready to fail your student. Be compassionate, understanding of life’s realities; they’re not a slave. But they are here to work, and if they don’t do the job or if they do a bad job, be ready to fail them.

  • Make sure they know where they stand. If there’s 1–2 weeks before exams and they’re still in danger of failing, make sure they know what’ll happen if they don’t shape up.

I can’t claim to have been perfect in all of these points in my own mentoring (many of these I learned by not doing them), but it’s what I’ve found works.

There might also be something on the GSoC sites about this. Some viewpoints vary, particularly along the leniency-to-hardassness spectrum.

If you are not willing to do all of this, or don’t think you’ll have the time, you should not mentor a student.

* Six words that should worry every mentor or other help-offerer: “Can you show me a sample?”

Apple documentation search that works

 

2011-03-06 15:58:19 -08:00

You’ve probably tried searching Apple’s developer documentation like this:

The filter field on the ADC documentation library navigation page.

Edit: That’s the filter field, which is not what this post is about. The filter sucks. This isn’t just an easy way to use the filter field; it’s an entirely different solution. Read on.

You’ve probably been searching it like this:

Google.

(And yes, I know about site:developer.apple.com. That often isn’t much better than without it. Again, read on.)

There is a better way.

Better than that: A best way.

Setup

First, you must use Google Chrome or OmniWeb.

Go to your list of custom searches. In Chrome, open the Preferences and click on Manage:

Screenshot with arrow pointing to the Manage button.

In OmniWeb, open the Preferences and click on Shortcuts:

Screenshot of OmniWeb's Shortcuts pane.

Then add one or both of these searches:

For the Mac

  • Chrome: name “ADC Mac OS X Library”, keyword adcmac, URL http://developer.apple.com/library/mac/search/?q=%s
  • OmniWeb: name “ADC Mac OS X Library”, keyword adcmac@, URL http://developer.apple.com/library/mac/search/?q=%@

For iOS

  • Chrome: name “ADC iOS Library”, keyword adcios, URL http://developer.apple.com/library/ios/search/?q=%s
  • OmniWeb: name “ADC iOS Library”, keyword adcios@, URL http://developer.apple.com/library/ios/search/?q=%@

Result

Notice how the results page gives you both guides and references at once, even giving specific-chapter links when relevant. You even get relevant technotes and Q&As. No wild goose chases, no PDF mines, no third-party old backup copies, no having to scroll past six hits of mailing-list threads and Stack Overflow questions. You get the docs, the right docs, and nothing but the docs.

For this specific purpose, you now have something better than Google.

More on the absurdly small size of storage

 

2011-02-03 19:05:04 -08:00

According to Wikipedia, a drop of water can be up to 6 mm in diameter. That works out to 0.1131 ml.

The volume of a microSD card is 0.165 ml.

The largest size of microSDHC card is 32 GB.

This means that just over 21.9 GB of data will fit in the space of a drop of water.
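
For reference, that figure comes from scaling the card’s capacity by the ratio of the two volumes:

32 GB × (0.1131 ml ÷ 0.165 ml) ≈ 32 GB × 0.685 ≈ 21.9 GB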