On the API design of CGBitmapContextCreate

2012-06-01 03:24:38 -08:00

Let’s review the prototype of the CGBitmapContextCreate function:

CGContextRef CGBitmapContextCreate (
 void *data,
 size_t width,
 size_t height,
 size_t bitsPerComponent,
 size_t bytesPerRow,
 CGColorSpaceRef colorspace,
 CGBitmapInfo bitmapInfo
);

The arguments:

  • data may be a pointer to pixels. If you pass NULL, the context will create its own buffer and free that buffer itself later. If you pass your own buffer, the context will not free it; it remains your buffer, which you must free yourself after you’ve released the context for the last time.
  • width and height are what their names say they are, in pixels.
  • bitsPerComponent is the size of each color component and the alpha component (if there is an alpha component), in bits. For 32-bit RGBA or ARGB, this would be 8 (32÷4).
  • bytesPerRow is as its name says. This is sometimes called the “stride”.
  • colorspace is a CGColorSpace object that specifies what color space the pixels are in. Most importantly, it dictates how many color components there are per pixel: An RGB color space has three, CMYK has four, white or black has one. This doesn’t include alpha, which is specified separately, in the next argument.
  • bitmapInfo is a bit mask that specifies, among other things, whether components should be floating-point (default is unsigned integer), whether there is alpha, and whether color components should be premultiplied by alpha.

The most immediate problem with this function is that there are so damn many arguments. This is especially bad in a C function, because it’s easy to lose track of what each value specifies, especially when so many of them are numbers. Suppose you want to make an 8-by-8-pixel grayscale context:

CGContextRef myContext = CGBitmapContextCreate(NULL, 8, 8, 8, 8, myGrayColorSpace, kCGImageAlphaNone);

Now, without looking at the prototype or the list, which argument is bitsPerComponent, which is bytesPerRow, and which are width and height?
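
A common workaround, and one that only underscores the problem, is to label each argument with a comment. This is just a habit of mine, not anything the API asks for:

CGContextRef myContext = CGBitmapContextCreate(NULL,
    /*width*/ 8, /*height*/ 8,
    /*bitsPerComponent*/ 8, /*bytesPerRow*/ 8,
    myGrayColorSpace, kCGImageAlphaNone);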

Objective-C’s names-and-values message syntax can help with this, as we can see in the similar API (for a different purpose) in NSBitmapImageRep:

NSBitmapImageRep *bir = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:8
                  pixelsHigh:8
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:8
                bitsPerPixel:8*4];

But this has other problems, notably the redundant specification of bitsPerPixel and samplesPerPixel. With that and the isPlanar argument, this method takes even more arguments than CGBitmapContextCreate. More importantly, it doesn’t solve the greater problems that I’m writing this post to talk about.

EDIT: Uli Kusterer points out that bitsPerPixel is not redundant if you want to have more bits not in a component than just enough to pad out to a byte. That’s a valid (if probably unusual) use case for NSBitmapImageRep, so I withdraw calling that argument redundant.
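
To illustrate that use case, here’s roughly what a padded format looks like with NSBitmapImageRep; the specific numbers (8-bit RGB samples, no alpha, padded out to 32 bits per pixel) are an assumed example, not something from Uli’s comment:

NSBitmapImageRep *paddedRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:8
                  pixelsHigh:8
               bitsPerSample:8
             samplesPerPixel:3  //RGB, no alpha
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:8 * 4
                bitsPerPixel:32]; //One byte of padding per pixel, so bitsPerPixel ≠ bitsPerSample × samplesPerPixel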

I’m going to use the example of both of these APIs, but mainly CGBitmapContextCreate, to talk about a few principles of API design.

The first is that it should not be possible for an object to exist in an unusable state. From the moment a freshly-created object is returned to you, you should be able to use it without it blowing up in your face.

From this principle follows a corollary: Everything an object needs in order to function, it should require when you instantiate it. Otherwise, the object would exist without the needed information—and thereby be unable to function—until you provide it.

It might seem that these APIs are as long as they are in order to uphold that principle. After all, a bitmap context needs to have someplace to put its pixels, right? (In fact, CGBitmapContextCreate‘s buffer argument was required until Snow Leopard and iOS 4.) It needs to know what format the pixels should be in, right?

Now for the second principle: Any information that an object does not need in order to function should be omitted from initialization and provided afterward. In Objective-C, the most common means of this post hoc specification are readwrite properties and delegate messages. Generally, for anything that could be specified in the initializer, the post hoc way to specify it would be via a property.

We’d like to invoke the second principle and move things out of the initializer, but that would seem to conflict with the first principle: What can we move that the context does not require?

The resolution is in a third principle—one that is not specific to APIs, but applies to all interfaces, including user interfaces: An interface should have reasonable defaults for as many parameters as it can—it should only require the user to provide values for parameters for which no default can be reasonably chosen in advance.

With that in mind, let’s look at some of CGBitmapContextCreate‘s arguments and see how we might apply the reasonable-defaults principle to simplify it:

  • bitsPerComponent, bitmapInfo, and colorspace: Most commonly, the caller will want 8-bit RGBA or ARGB, often with the goal of making sure it can be used on the graphics card (either by way of a CGLayer or CALayer, or by passing the pixels directly to OpenGL). That’s a reasonable default, so these three can be eliminated.

    We could make them properties, but there’s an alternative: We could dynamite bitmapInfo and merge some of its values with bitsPerComponent in the form of several pixel-format constants. You’ve seen this approach before in QuickTime and a few other APIs. CGBitmapContext only supports a specified few pixel formats anyway, so this simply makes it impossible to construct impossible requests—another good interface principle.

  • bytesPerRow: Redundant. The number of bytes per row follows from the pixel format and the width in pixels; indeed, CGBitmapContextCreate computes this internally anyway and throws a fit if you guessed a number it wasn’t thinking of. Better to cut it and let CGBitmapContextCreate infer it.

    Making you compute a value for bytesPerRow does provide an important safety check, which I’ll address shortly.

    EDIT: Alastair Houghton points out another case for keeping bytesPerRow. This doesn’t apply to CGBitmapContextCreate, which rejects any value that doesn’t follow from the pixel format and width in pixels, but could be valid for NSBitmapImageRep and CGImage.

  • data (the buffer): Since Snow Leopard and iOS 4, the context will create its own buffer if you don’t provide one. That makes the argument explicitly optional, so it doesn’t need to be part of initialization at all.

The only arguments that are truly required are the width and height, which tell the context how many pixels it should allocate its initial buffer for in the given (or default) pixel format.

In fact, if we take the above idea of replacing three of the arguments with a single set of pixel-format constants, then we don’t actually need to make any of the properties readwrite—there isn’t any reason why the owner of the context should be changing the pixel format on the fly. You might want to change the width or height, but CGBitmapContext doesn’t support that and we’re trying to simplify, not add features.
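
Even without a new class, the reasonable-defaults principle could be applied as a thin wrapper over the existing function. Here’s a minimal sketch; the function name and the choice of 8-bit, host-byte-order, premultiplied ARGB in device RGB as the default format are my assumptions, not anything Quartz provides:

//Hypothetical convenience wrapper: only width and height are required.
static CGContextRef PRHCreateDefaultBitmapContext(size_t width, size_t height) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    //Passing NULL for data lets the context allocate its own buffer (Snow Leopard/iOS 4 and later);
    //with NULL data, passing 0 for bytesPerRow lets it compute the stride itself.
    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
        /*bitsPerComponent*/ 8, /*bytesPerRow*/ 0, colorSpace,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
    CGColorSpaceRelease(colorSpace);
    return context;
}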

So, what problems do the current APIs solve, what problems do they raise, and how might we address both?

  • Specifying the pixel format (bitsPerComponent, colorspace, bitmapInfo) up front saves the context having to reallocate the buffer to accommodate any pixel-size changes.

    If we simply removed the pixel format arguments from the initializer and made them readwrite properties (or a property), then the context would have to reallocate the buffer when we change the pixel format from the default (ARGB or something similar) to something else (e.g., grayscale).

    The immediate solution to that would be for the context to allocate its buffer lazily the first time you draw into it, but that would mean every attempt to draw into the context would hit that “have we created our buffer yet” check.

    A better solution would be to follow the above idea of condensing the specification of the pixel format down to a single constant; then, we could have a designated initializer that would take a pixel-format value, and a shorter initializer for the default case that calls the DI with the default pixel-format value.

  • Specifying the buffer as a plain pointer (or pointer to one or more other pointers) requires the dimensions of the buffer to be specified separately.

    It’s a mystery to me why CGBitmapContextCreate doesn’t take a CFMutableData and NSBitmapImageRep’s initializers don’t take an NSMutableData. With these, the length in bytes would be associated with the buffer, enabling the context/rep to check that the length makes sense with the desired (or default) pixel format. This would be better than the current check in two ways: First, the current check only checks bytesPerRow, ignoring the desired height; second and more importantly, the current check only checks the value you gave for bytesPerRow—it can’t check the actual length of the buffer you provided.

    (From that, you can derive a bit of guidance for using the current API: If you pass your own buffer, you should use the value you computed for bytesPerRow in computing the length of your buffer. Otherwise, you risk using one stride value in allocating the buffer and telling a different one to CGBitmapContextCreate.)

  • Requiring (or even enabling) the buffer to be provided by the caller is redundant when the API has all the information it needs to allocate it itself.

    This was especially bad when the buffer was required. Now that CGBitmapContext can create the buffer itself, even having that optional input is unnecessary. We can cut this out entirely and have the context always create (and eventually destroy) its own buffer.

  • The caller must currently choose values for parameters that are not important to the caller.

    The current API makes you precisely describe everything about the context’s pixels.

    WHY? One of the central design aspects of Quartz is that you never work with pixels! It handles file input for you! It handles rendering to the screen for you! It handles file output for you! Core Image handles filtering for you! You never touch pixels directly if you can help it!

    99% of the time, there is no reason why you should care what format the pixels are in. The exact pixel format should be left to the implementation—which knows exactly what format would be best for, say, transfer to the graphics card—except in the tiny percentage of cases where you might actually want to handle pixels yourself.

With all of this in mind, here’s my ideal API for creating a bitmap context:

typedef enum
#if __has_feature(objc_fixed_enum)
: NSUInteger
#endif
{
    //Formats that specify only a color space, leaving pixel format to the implementation.
    PRHBitmapContextPixelFormatDefaultRGBWithAlpha,
    PRHBitmapContextPixelFormatDefaultRGBNoAlpha,
    PRHBitmapContextPixelFormatDefaultWhiteWithAlpha,
    PRHBitmapContextPixelFormatDefaultWhiteNoAlpha,
    PRHBitmapContextPixelFormatDefaultCMYK,
    PRHBitmapContextPixelFormatDefaultMask,

    PRHBitmapContextPixelFormatARGB8888 = 0x100,
    PRHBitmapContextPixelFormatRGBA8888,
    PRHBitmapContextPixelFormatARGBFFFF, //128 bits per pixel, floating-point
    PRHBitmapContextPixelFormatRGBAFFFF,
    PRHBitmapContextPixelFormatWhite8, //8 bpc, gray color space, alpha-none
    PRHBitmapContextPixelFormatWhiteF, //Floating-point, gray color space, alpha-none
    PRHBitmapContextPixelFormatMask8, //8 bpc, null color space, alpha-only
    PRHBitmapContextPixelFormatCMYK8888, //8 bpc, CMYK color space, alpha-none
    PRHBitmapContextPixelFormatCMYKFFFF, //Floating-point, CMYK color space, alpha-none

    //Imagine here any other CGBitmapContext-supported pixel formats that you might need.
} PRHBitmapContextPixelFormat;

@interface PRHBitmapContext: NSObject

- (id) initWithWidth:(NSUInteger)width
    height:(NSUInteger)height;
- (id) initWithWidth:(NSUInteger)width
    height:(NSUInteger)height
    pixelFormat:(PRHBitmapContextPixelFormat)format;

//There may be an initializer more like CGBitmapContextCreate/NSBitmapImageRep's (taking individual pixel-format values such as color space and bits-per-component), but only privately, to be used by the public DI.

//Mutable so that an asynchronous loader can append to it. Probably more useful in an NSBitmapImageRep analogue than a CGBitmapContext analogue.
@property(readonly) NSMutableData *pixelData;

@property(readonly) NSColorSpace *colorSpace;
@property(readonly) bool hasAlpha;
@property(readonly, getter=isFloatingPoint) bool floatingPoint;
@property(readonly) NSUInteger bitsPerComponent;

- (CGImageRef) quartzImage;
//scaleFactor by default matches that of the main-menu (Mac)/built-in (iOS) screen; if it's not 1, the size (in points) of the image will be the pixel size of the quartzImage divided by the scaleFactor.
#if TARGET_OS_MAC
- (NSImage *) image;
- (NSImage *) imageWithScaleFactor:(CGFloat)scale;
#elif TARGET_OS_IPHONE
- (UIImage *) image;
- (UIImage *) imageWithScaleFactor:(CGFloat)scale;
#endif

@end

With the current interface, creating a context generally looks like this:

size_t bitsPerComponent = 8;
size_t bytesPerComponent = bitsPerComponent / 8;
bool hasAlpha = true;
size_t bytesPerRow = (CGColorSpaceGetNumberOfComponents(myColorSpace) + hasAlpha) * bytesPerComponent * width;
CGContextRef context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, bytesPerRow, myColorSpace, myBitmapInfo);
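
If you do pass your own buffer, the guidance from earlier applies: compute the buffer’s length from the same bytesPerRow value you pass to the function. A sketch, reusing the variables from the example above:

//Allocate the buffer from the same stride value we pass to CGBitmapContextCreate,
//so its length is guaranteed to match what the context expects.
void *buffer = calloc(bytesPerRow * height, 1);
CGContextRef bufferContext = CGBitmapContextCreate(buffer, width, height, bitsPerComponent, bytesPerRow, myColorSpace, myBitmapInfo);
//…draw…
CGContextRelease(bufferContext);
free(buffer); //The buffer remains ours to free, after the context is gone.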

With an interface such as I’ve described, creating a context would look like this:

PRHBitmapContext *context = [[PRHBitmapContext alloc] initWithWidth:width height:height];

Or this:

PRHBitmapContext *grayscaleContext = [[PRHBitmapContext alloc] initWithWidth:width height:height pixelFormat:PRHBitmapContextPixelFormatWhite8];
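
The payoff afterward might look something like this, using the hypothetical accessors from the interface sketched above (myLayer stands in for an existing CALayer):

PRHBitmapContext *context = [[PRHBitmapContext alloc] initWithWidth:width height:height];
//…draw into the context…
CGImageRef image = [context quartzImage];
myLayer.contents = (__bridge id)image; //Under ARC; plain (id)image under manual retain-release.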

8 Responses to “On the API design of CGBitmapContextCreate”

  1. Uli Kusterer Says:

    CGBitmapContextCreate is exactly the *only* way to take some Quartz calls and turn them into raw pixels. That’s what we use it for everywhere. You just seem to have a single, simple use case for which it is too complex. If you want simple, just use an NSImage, which takes care of the appropriate pixel format and works exactly as you describe: Specify a size, and lock focus.

    Due to padding, bitsPerPixel and rowBytes *may* be different from bitsPerSample * width etc. Also, often we want to e.g. draw into existing OpenGL textures or other storage that already exists, so we rarely let this call allocate the buffer, which is why it would be bad if it solely used NSData. Apple had to decide what to implement, and gave us flexibility over simplicity at the CoreXX level, since the simple, elegant AppKit level already existed.

    That said, if you’re hell-bent on bending the bitmap context to your will, I’d recommend going with a parameter block for all the complex parameters, allowing you to specify 0 (or whatever the invalid value is) for any value that can be computed or filled in with a sensible default. Then you’d maintain the flexibility of the existing API, but could provide default param block “constants” that people can pass in to get the common case (e.g. GetNativeGPUPixelFormatForDisplayID(foo) or kRGBAPixelFormat).

  2. Uli Kusterer Says:

    Sorry, that message above sounds a little terse and could be read as angry. It wasn’t intended that way. *hugs* You are definitely right, it is a complex API that could be simplified, and makes for a good example case. Thanks for sharing!

  3. Peter Hosey Says:

    “CGBitmapContextCreate is exactly the only way to take some Quartz calls and turn them into raw pixels. That’s what we use it for everywhere.”

    Yes, but do you really need them to be in a specific format that you specify every little detail of? Especially since CGBitmapContext only supports a select few pixel formats anyway.

    My opinion is that as long as that’s the case, it makes sense to define the list of pixel formats as a list of constants and pick the one you want your pixels in. I went with an enumeration; your structure idea is another valid option, as are pixel-format objects (and I remember seeing at least one API that does have pixel-format objects).

    Once you’ve drawn what you want into the context, you can then retrieve the pixels in an NSData using a method like the one I included in my illustrative interface.

    “If you want simple, just use an NSImage, which takes care of the appropriate pixel format and works exactly as you describe: Specify a size, and lock focus.”

    Or, in Cocoa Touch, UIGraphicsBeginImageContextWithOptions. I don’t think you can use that from another thread, though. (The NSImage solution is OK.)

  4. Uli Kusterer Says:

    I don’t need that specific format, but OpenGL and other destinations are very picky about what format they are fast with, so THEY definitely need it.

    NSOpenGLView has an NSOpenGLPixelFormat object. If you include the size of the object at the start of the struct (or a version field), it doesn’t matter whether it’s a struct or an object. Yes, an object would be more elegant, but if you use this in animation or video processing, you may not want to spend the cycles on additional objects every time.

    The issue with CGBitmapContextCreate() is that it may need to support more formats in the future, so what is invalid now may not stay that way.

  5. Peter Hosey Says:

    I don’t need that specific format, but OpenGL and other destinations are very picky about what format they are fast with, so THEY definitely need it.

    As long as that destination format is one of those supported by CGBitmapContext today, that problem is already solved.

    One of the problems with CGBitmapContextCreate is that it lets you specify a format that it will reject. Nothing in any other API has any bearing on this; CGBitmapContext either supports a format or it does not. In my ideal API, it would have a constant for every format it supports, and when you need a specific format, you would request that format by name.

    The issue with CGBitmapContextCreate() is that it may need to support more formats in the future, so what is invalid now may not stay that way.

    If the context API gains support for additional formats, it should also gain constants with which to specify them.

    NSOpenGLView has an NSOpenGLPixelFormat object. … Yes, an object would be more elegant, but if you use this in animation or video processing, you may not want to spend the cycles on additional objects every time.

    You don’t need to create an object every time; you can hang onto the object and reuse it. Or reuse the context, if appropriate. Plus, you could have the predefined pixel format objects be stored in global variables and created from the context class’s +load method.

  6. Todd Lehman Says:

    Looks great, Peter!

    BTW, one of the things I love most about OO interfaces is the ease with which one can create multiple initializers. For example, in my case, I work exclusively with 32-bit RGBA data, and I always let Quartz allocate/manage its own bitmap buffer for these bitmaps, so I would personally find it most useful to have a version of the initializer that specifies only width and height in pixels and figures everything else out automatically.

  7. Christopher Lloyd Says:

    CGBitmapContextCreate will accept a bytesPerRow value that is not exactly width * bytes per pixel, as long as it’s at least width * bytes per pixel.

  8. Marcel Weiher Says:

    Have you taken a look at MPWDrawingContext?

    It has Objective-C convenience methods not only for creating bitmap contexts ( +rgbBitmapContext:(NSSize)size ), but also for drawing ( [[[[[context moveto:0 :0] lineto:100 :0] lineto:50 :50] closepath] stroke]; ).

    github: https://github.com/mpw/MPWDrawingContext

    Blog post: http://blog.metaobject.com/2012/06/pleasant-objective-c-drawing-context.html
