Archive for the 'Quartz' Category

Simple starter Cocoa app ideas

Wednesday, December 11th, 2013

Inspired partly by tonight’s Hour of Code, here are some index-card-sized outlines of simple app projects you can build as someone new to Cocoa.

Text editor/word processor

  • Document-based Mac app
  • In document window: NSTextView
  • Use NSAttributedString to read/write document data (see the sketch after this list)
  • Document types:
    • public.plain-text
    • public.rtf
    • com.apple.rtfd
    • com.microsoft.word.doc
  • Extra credit:
    • Add a ruler (NSRulerView)
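
Here’s a rough sketch of the NSAttributedString step, as a pair of NSDocument data methods. (The text property is a hypothetical holding place for the string until you hand it to the NSTextView’s text storage, and I’m hard-coding RTF on the way out for brevity; a real app would map typeName to the matching document type.)

- (BOOL) readFromData:(NSData *)data ofType:(NSString *)typeName error:(NSError **)outError {
    //Let NSAttributedString sniff out the format (plain text, RTF, RTFD, .doc) from the data.
    self.text = [[NSAttributedString alloc] initWithData:data
        options:@{}
        documentAttributes:NULL
        error:outError];
    return (self.text != nil);
}

- (NSData *) dataOfType:(NSString *)typeName error:(NSError **)outError {
    return [self.text dataFromRange:NSMakeRange(0, [self.text length])
        documentAttributes:@{ NSDocumentTypeDocumentAttribute: NSRTFTextDocumentType }
        error:outError];
}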

Picture viewer

  • Document-based Mac app
  • In document window: IKImageView
  • Use CGImageSource to read the image & its properties (see the sketch after this list)
  • Document types:
    • public.png
    • public.jpeg
  • Extra credit:
    • Floating inspector panel showing the properties in an NSTableView
    • Color-correction panel (IKImageEditPanel)
    • Support folders (public.folder): display images from folder in IKImageBrowserView
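
Here’s a sketch of the CGImageSource step, assuming ARC, a url handed to you by your NSDocument subclass’s read method, and a hypothetical imageView outlet:

CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, /*options*/ NULL);
if (source) {
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, /*options*/ NULL);
    NSDictionary *properties = CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(source, 0, /*options*/ NULL));
    //IKImageView takes the image and its properties together.
    [self.imageView setImage:image imageProperties:properties];
    CGImageRelease(image);
    CFRelease(source);
}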

PDF viewer

  • Document-based Mac app
  • In document window: PDFView
  • Use PDFDocument to read from the .pdf file (see the sketch after this list)
  • Document types:
    • com.adobe.pdf
  • Extra credit:
    • Toolbar with zoom in/out buttons, zoom % field, page number field
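
And a sketch of the PDFDocument step, again with hypothetical pdfDocument and pdfView properties (a real app would also fill in *outError when the read fails):

- (BOOL) readFromURL:(NSURL *)url ofType:(NSString *)typeName error:(NSError **)outError {
    self.pdfDocument = [[PDFDocument alloc] initWithURL:url];
    return (self.pdfDocument != nil);
}

- (void) windowControllerDidLoadNib:(NSWindowController *)windowController {
    [super windowControllerDidLoadNib:windowController];
    //Hand the document to the PDFView once the window's views exist.
    [self.pdfView setDocument:self.pdfDocument];
}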

On the API design of CGBitmapContextCreate

Friday, June 1st, 2012

Let’s review the prototype of the CGBitmapContextCreate function:

CGContextRef CGBitmapContextCreate (
 void *data,
 size_t width,
 size_t height,
 size_t bitsPerComponent,
 size_t bytesPerRow,
 CGColorSpaceRef colorspace,
 CGBitmapInfo bitmapInfo
);

The arguments:

  • data may be a pointer to pixels. If you pass NULL, the context will create its own buffer and free that buffer itself later. If you pass your own buffer, the context will not free it; it remains your buffer that you must free after you release the context, hopefully for the last time.
  • width and height are what their names say they are, in pixels.
  • bitsPerComponent is the size of each color component and the alpha component (if there is an alpha component), in bits. For 32-bit RGBA or ARGB, this would be 8 (32÷4).
  • bytesPerRow is as its name says. This is sometimes called the “stride”.
  • colorspace is a CGColorSpace object that specifies what color space the pixels are in. Most importantly, it dictates how many color components there are per pixel: An RGB color space has three, CMYK has four, white or black has one. This doesn’t include alpha, which is specified separately, in the next argument.
  • bitmapInfo is a bit mask that specifies, among other things, whether components should be floating-point (default is unsigned integer), whether there is alpha, and whether color components should be premultiplied by alpha.

The most immediate problem with this function is that there are so damn many arguments. This is especially bad in a C function, because it’s easy to lose track of what each value specifies, especially when so many of them are numbers. Suppose you want to make an 8-by-8-pixel grayscale context:

CGContextRef myContext = CGBitmapContextCreate(NULL, 8, 8, 8, 8, myGrayColorSpace, kCGImageAlphaNone);

Now, without looking at the prototype or the list, which argument is bitsPerComponent, which is bytesPerRow, and which are width and height?

Objective-C’s names-and-values message syntax can help with this, as we can see in the similar API (for a different purpose) in NSBitmapImageRep:

NSBitmapImageRep *bir = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:8
                  pixelsHigh:8
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:8*4
                bitsPerPixel:8*4];

But this has other problems, notably the redundant specification of bitsPerPixel and samplesPerPixel. With that and the isPlanar argument, this method takes even more arguments than CGBitmapContextCreate. More importantly, it doesn’t solve the greater problems that I’m writing this post to talk about.

EDIT: Uli Kusterer points out that bitsPerPixel is not redundant if you want to have more bits not in a component than just enough to pad out to a byte. That’s a valid (if probably unusual) use case for NSBitmapImageRep, so I withdraw calling that argument redundant.

I’m going to use the example of both of these APIs, but mainly CGBitmapContextCreate, to talk about a few principles of API design.

The first is that it should not be possible for an object to exist in an unusable state. From the moment a freshly-created object is returned to you, you should be able to use it without it blowing up in your face.

From this principle follows a corollary: Everything an object needs in order to function, it should require when you instantiate it. Otherwise, the object would exist without the needed information—and thereby be unable to function—until you provide it.

It might seem that these APIs are as long as they are in order to uphold that principle. After all, a bitmap context needs to have someplace to put its pixels, right? (In fact, CGBitmapContextCreate’s buffer argument was required until Snow Leopard and iOS 4.) It needs to know what format the pixels should be in, right?

Now for the second principle: Any information that an object does not need in order to function should be omitted from initialization and provided afterward. In Objective-C, the most common means of this post hoc specification are readwrite properties and delegate messages. Generally, for anything that could be specified in the initializer, the post hoc way to specify it would be via a property.

We’d like to invoke the second principle and move things out of the initializer, but that would seem to conflict with the first principle: What can we move that the context does not require?

The resolution is in a third principle—one that is not specific to APIs, but applies to all interfaces, including user interfaces: An interface should have reasonable defaults for as many parameters as it can—it should only require the user to provide values for parameters for which no default can be reasonably chosen in advance.

With that in mind, let’s look at some of CGBitmapContextCreate’s arguments and see how we might apply the reasonable-defaults principle to simplify it:

  • bitsPerComponent, bitmapInfo, and colorspace: Most commonly, the caller will want 8-bit RGBA or ARGB, often with the goal of making sure it can be used on the graphics card (either by way of a CG- or CALayer or by passing the pixels directly to OpenGL). That’s a reasonable default, so these three can be eliminated.

    We could make them properties, but there’s an alternative: We could dynamite bitmapInfo and merge some of its values with bitsPerComponent in the form of several pixel-format constants. You’ve seen this approach before in QuickTime and a few other APIs. CGBitmapContext only supports a specified few pixel formats anyway, so this simply makes it impossible to construct impossible requests—another good interface principle.

  • bytesPerRow: Redundant. The number of bytes per row follows from the pixel format and the width in pixels; indeed, CGBitmapContextCreate computes this internally anyway and throws a fit if you guessed a number it wasn’t thinking of. Better to cut it and let CGBitmapContextCreate infer it.

    Making you compute a value for bytesPerRow does provide an important safety check, which I’ll address shortly.

    EDIT: Alastair Houghton points out another case for keeping bytesPerRow. This doesn’t apply to CGBitmapContextCreate, which rejects any value that doesn’t follow from the pixel format and width in pixels, but could be valid for NSBitmapImageRep and CGImage.

  • data (the buffer): Since Snow Leopard and iOS 4, the context will create its own buffer if you don’t provide one. That makes it explicitly optional: the context doesn’t need you to provide one in order to function, so by the second principle it doesn’t belong in the initializer.

The only arguments that are truly required are the width and height, which tell the context how many pixels it should allocate its initial buffer for in the given (or default) pixel format.

In fact, if we take the above idea of replacing three of the arguments with a single set of pixel-format constants, then we don’t actually need to make any of the properties readwrite—there isn’t any reason why the owner of the context should be changing the pixel format on the fly. You might want to change the width or height, but CGBitmapContext doesn’t support that and we’re trying to simplify, not add features.

So, what problems do the current APIs solve, what problems do they raise, and how would we address all of both problems?

  • Specifying the pixel format (bitsPerComponent, colorspace, bitmapInfo) up front saves the context having to reallocate the buffer to accommodate any pixel-size changes.

    If we simply removed the pixel format arguments from the initializer and made them readwrite properties (or a property), then the context would have to reallocate the buffer when we change the pixel format from the default (ARGB or something similar) to something else (e.g., grayscale).

    The immediate solution to that would be for the context to allocate its buffer lazily the first time you draw into it, but that would mean every attempt to draw into the context would hit that “have we created our buffer yet” check.

    A better solution would be to follow the above idea of condensing the specification of the pixel format down to a single constant; then, we could have a designated initializer that would take a pixel-format value, and a shorter initializer for the default case that calls the DI with the default pixel-format value.

  • Specifying the buffer as a plain pointer (or pointer to one or more other pointers) requires the dimensions of the buffer to be specified separately.

    It’s a mystery to me why CGBitmapContextCreate doesn’t take a CFMutableData and NSBitmapImageRep’s initializers don’t take an NSMutableData. With these, the length in bytes would be associated with the buffer, enabling the context/rep to check that the length makes sense with the desired (or default) pixel format. This would be better than the current check in two ways: First, the current check only checks bytesPerRow, ignoring the desired height; second and more importantly, the current check only checks the value you gave for bytesPerRow—it can’t check the actual length of the buffer you provided.

    (From that, you can derive a bit of guidance for using the current API: If you pass your own buffer, you should use the value you computed for bytesPerRow in computing the length of your buffer. Otherwise, you risk using one stride value in allocating the buffer and telling a different one to CGBitmapContextCreate. See the sketch after this list.)

  • Requiring (or even enabling) the buffer to be provided by the caller is redundant when the API has all the information it needs to allocate it itself.

    This was especially bad when the buffer was required. Now that CGBitmapContext can create the buffer itself, even having that optional input is unnecessary. We can cut this out entirely and have the context always create (and eventually destroy) its own buffer.

  • The caller must currently choose values for parameters that are not important to the caller.

    The current API makes you precisely describe everything about the context’s pixels.

    WHY? One of the central design aspects of Quartz is that you never work with pixels! It handles file input for you! It handles rendering to the screen for you! It handles file output for you! Core Image handles filtering for you! You never touch pixels directly if you can help it!

    99% of the time, there is no reason why you should care what format the pixels are in. The exact pixel format should be left to the implementation—which knows exactly what format would be best for, say, transfer to the graphics card—except in the tiny percentage of cases where you might actually want to handle pixels yourself.
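
To illustrate that bytesPerRow guidance from above, here’s a sketch of the safe pattern with the current API: compute the stride once, then use that one value both to size your buffer and to describe it to the context.

size_t width = 640, height = 480;
size_t bytesPerRow = width * 4; //8 bits per component × 4 components (ARGB) = 4 bytes per pixel
//The same stride sizes the buffer…
void *buffer = malloc(bytesPerRow * height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
//…and describes it to the context, so the two can never disagree.
CGContextRef context = CGBitmapContextCreate(buffer, width, height, /*bitsPerComponent*/ 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
//The context won't free your buffer: free(buffer) yourself, after the last CGContextRelease(context).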

With all of this in mind, here’s my ideal API for creating a bitmap context:

typedef enum
#if __has_feature(objc_fixed_enum)
: NSUInteger
#endif
{
    //Formats that specify only a color space, leaving pixel format to the implementation.
    PRHBitmapContextPixelFormatDefaultRGBWithAlpha,
    PRHBitmapContextPixelFormatDefaultRGBNoAlpha,
    PRHBitmapContextPixelFormatDefaultWhiteWithAlpha,
    PRHBitmapContextPixelFormatDefaultWhiteNoAlpha,
    PRHBitmapContextPixelFormatDefaultCMYK,
    PRHBitmapContextPixelFormatDefaultMask,

    PRHBitmapContextPixelFormatARGB8888 = 0x100,
    PRHBitmapContextPixelFormatRGBA8888,
    PRHBitmapContextPixelFormatARGBFFFF, //128 bits per pixel, floating-point
    PRHBitmapContextPixelFormatRGBAFFFF,
    PRHBitmapContextPixelFormatWhite8, //8 bpc, gray color space, alpha-none
    PRHBitmapContextPixelFormatWhiteF, //Floating-point, gray color space, alpha-none
    PRHBitmapContextPixelFormatMask8, //8 bpc, null color space, alpha-only
    PRHBitmapContextPixelFormatCMYK8888, //8 bpc, CMYK color space, alpha-none
    PRHBitmapContextPixelFormatCMYKFFFF, //Floating-point, CMYK color space, alpha-none

    //Imagine here any other CGBitmapContext-supported pixel formats that you might need.
} PRHBitmapContextPixelFormat;

@interface PRHBitmapContext: NSObject

- (id) initWithWidth:(NSUInteger)width
    height:(NSUInteger)height;
- (id) initWithWidth:(NSUInteger)width
    height:(NSUInteger)height
    pixelFormat:(PRHBitmapContextPixelFormat)format;

//There may be an initializer more like CGBitmapContextCreate/NSBitmapImageRep's (taking individual pixel-format values such as color space and bits-per-component), but only privately, to be used by the public DI.

//Mutable so that an asynchronous loader can append to it. Probably more useful in an NSBitmapImageRep analogue than a CGBitmapContext analogue.
@property(readonly) NSMutableData *pixelData;

@property(readonly) NSColorSpace *colorSpace;
@property(readonly) bool hasAlpha;
@property(readonly, getter=isFloatingPoint) bool floatingPoint;
@property(readonly) NSUInteger bitsPerComponent;

- (CGImageRef) quartzImage;
//scaleFactor by default matches that of the main-menu (Mac)/built-in (iOS) screen; if it's not 1, the size (in points) of the image will be the pixel size of the quartzImage divided by the scaleFactor.
#if TARGET_OS_MAC
- (NSImage *) image;
- (NSImage *) imageWithScaleFactor:(CGFloat)scale;
#elif TARGET_OS_IPHONE
- (UIImage *) image;
- (UIImage *) imageWithScaleFactor:(CGFloat)scale;
#endif

@end
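
The implementation of the shorter initializer would then be nothing but a forward to the designated initializer, as in this sketch; which default format is “reasonable” is the implementation’s call, and I’m assuming RGB-with-alpha here:

- (id) initWithWidth:(NSUInteger)width height:(NSUInteger)height {
    //Forward to the designated initializer with the default pixel format.
    return [self initWithWidth:width
        height:height
        pixelFormat:PRHBitmapContextPixelFormatDefaultRGBWithAlpha];
}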

With the current interface, creating a context generally looks like this:

size_t bitsPerComponent = 8;
size_t bytesPerComponent = bitsPerComponent / 8;
bool hasAlpha = true;
size_t bytesPerRow = (CGColorSpaceGetNumberOfComponents(myColorSpace) + hasAlpha) * bytesPerComponent * width;
CGContextRef context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, bytesPerRow, myColorSpace, myBitmapInfo);

With an interface such as I’ve described, creating a context would look like this:

PRHBitmapContext *context = [[PRHBitmapContext alloc] initWithWidth:width height:height];

Or this:

PRHBitmapContext *grayscaleContext = [[PRHBitmapContext alloc] initWithWidth:width height:height pixelFormat:PRHBitmapContextPixelFormatWhite8];

Apple documentation search that works

Sunday, March 6th, 2011

You’ve probably tried searching Apple’s developer documentation like this:

The filter field on the ADC documentation library navigation page.

Edit: That’s the filter field, which is not what this post is about. The filter sucks. This isn’t just an easy way to use the filter field; it’s an entirely different solution. Read on.

You’ve probably been searching it like this:

Google.

(And yes, I know about site:developer.apple.com. That often isn’t much better than without it. Again, read on.)

There is a better way.

Better than that: A best way.

Setup

First, you must use Google Chrome or OmniWeb.

Go to your list of custom searches. In Chrome, open the Preferences and click on Manage:

Screenshot with arrow pointing to the Manage button.

In OmniWeb, open the Preferences and click on Shortcuts:

Screenshot of OmniWeb's Shortcuts pane.

Then add one or both of these searches:

For the Mac

  • Name: ADC Mac OS X Library
  • Keyword: adcmac (Chrome) or adcmac@ (OmniWeb)
  • URL: http://developer.apple.com/library/mac/search/?q=%s (Chrome) or http://developer.apple.com/library/mac/search/?q=%@ (OmniWeb)

For iOS

  • Name: ADC iOS Library
  • Keyword: adcios (Chrome) or adcios@ (OmniWeb)
  • URL: http://developer.apple.com/library/ios/search/?q=%s (Chrome) or http://developer.apple.com/library/ios/search/?q=%@ (OmniWeb)

Result

Notice how the results page gives you both guides and references at once, even giving specific-chapter links when relevant. You even get relevant technotes and Q&As. No wild goose chases, no PDF mines, no third-party old backup copies, no having to scroll past six hits of mailing-list threads and Stack Overflow questions. You get the docs, the right docs, and nothing but the docs.

For this specific purpose, you now have something better than Google.

Undocumented Goodness: CGSSetSystemDefinedCursor

Saturday, April 3rd, 2010

As with all undocumented private APIs, this one may change or go away any time. Use with extreme caution or, better yet, not at all.

As you’d expect from a CGS function, you’ll need a CGSConnection to use this one:

typedef void *CGSConnectionRef;
extern CGSConnectionRef _CGSDefaultConnection(void);
extern unsigned CGSSetSystemDefinedCursor(CGSConnectionRef connection, unsigned cursorID);

(I am, of course, not sure of the correct type for cursorID, although it is 32-bit.)

The valid cursor IDs range from 0 to 8; for anything else, CGSSetSystemDefinedCursor returns kCGErrorIllegalArgument. You’ll also get that error if you pass a bogus connection ref.

Here are the cursor IDs, as I’ve named them:

enum {
    kCGCursorIDArrow,
    kCGCursorIDIBeamComposite,
    kCGCursorIDIBeamInvert,
    kCGCursorIDLink,
    kCGCursorIDCopy,
    kCGCursorIDArrowWhite,
    kCGCursorIDContextualMenu,
    kCGCursorIDPinwheel,
    kCGCursorIDHidden,
};
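
Putting the declarations and the enum together, deliberately showing the pinwheel (which, again, you should NEVER EVER DO) would look like this:

CGSConnectionRef connection = _CGSDefaultConnection();
unsigned error = CGSSetSystemDefinedCursor(connection, kCGCursorIDPinwheel);
if (error != kCGErrorSuccess)
    NSLog(@"Couldn't set the cursor; got error %u", error);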

IBeamComposite is the normal I-beam cursor. IBeamInvert is a variation that inverts what’s underneath it, instead of compositing over it. The benefit is that this shows up well on dark backgrounds (it appears white), whereas IBeamComposite blends in.

Link and Copy are the Mac OS 8.5 versions of those cursors, not the larger full-color ones introduced in Mac OS X 10.2.

ArrowWhite is exactly what it says on the box: A white-with-black-outline version of the standard Mac arrow cursor. The colors of the Windows arrow cursor with the stem length of the Mac arrow cursor.

ContextualMenu is my rough interpretation, as the stack of white rectangles next to the (black) arrow cursor looks only vaguely like a series of menu items. To my eyes, it looks more like a stack of giant hard disk platters (like the standard icon for a database that has become common recently) than a contextual menu, but I doubt that this was what they meant. This is not the contextual menu cursor from Mac OS 8 and 9.

Pinwheel is exactly what it says on the box. It’s even animated. This is the prize of the collection: If you ever want to deliberately put on the pinwheel cursor (which you should NEVER EVER DO), this is the way to do it.

Hidden is also exactly what it says on the box: A fully-transparent cursor image.

So, there you go, for your information, entertainment, and hopefully not use.

WWDC 2007 session videos are out

Monday, July 30th, 2007

If you attended WWDC, you can head over to ADC on iTunes and see what you missed.

Why Mac programmers should learn PostScript

Saturday, April 7th, 2007

I’ll follow this up with a tutorial called “PostScript for Cocoa programmers”, but today brings my list of reasons why you should care in the first place.

What’s the resolution of your screen?

Sunday, February 4th, 2007

A few weeks ago, I installed Adobe Reader to view a particular PDF, and noticed something interesting in its Preferences:

Its Resolution setting is set by default to “System setting: 98 dpi”.

“Wow”, I thought, “I wonder how it knows that.” So I went looking through the Quartz Display Services documentation, and found it.

The function is CGDisplayScreenSize. It returns a struct CGSize containing the number of millimeters in each dimension of the physical size of the screen. Convert to inches and divide the number of pixels by it, and you’ve got DPI.

Not all displays support EDID (which is what the docs for CGDisplayScreenSize say it uses); if yours doesn’t, CGDisplayScreenSize will return CGSizeZero. Watch for this; failure to account for this possibility will lead to division-by-zero errors.
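
Here’s a sketch of the arithmetic, with the zero-size check (25.4 millimeters per inch):

CGDirectDisplayID display = CGMainDisplayID();
CGSize sizeInMillimeters = CGDisplayScreenSize(display);
if (sizeInMillimeters.width > 0.0 && sizeInMillimeters.height > 0.0) {
    double dpiX = CGDisplayPixelsWide(display) / (sizeInMillimeters.width / 25.4);
    double dpiY = CGDisplayPixelsHigh(display) / (sizeInMillimeters.height / 25.4);
    NSLog(@"Resolution: %.2f×%.2f dpi", dpiX, dpiY);
} else {
    //The display didn't report its physical size (no EDID), so we can't compute its DPI.
}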

Here’s an app to demonstrate this technique:

ShowAllResolutions' main window: “Resolution from Quartz Display Services: 98.52×96.33 dpi. Resolution from NSScreen: 72 dpi.”

ShowAllResolutions will show one of these windows on each display on your computer, and it should update if your display configuration changes (e.g., you change resolution or plug/unplug a display). If CGDisplayScreenSize comes back with CGSizeZero, ShowAllResolutions will state its resolution as 0 dpi both ways.

The practical usage of this is for things like Adobe Reader and Preview (note: Preview doesn’t do this), and their photographic equivalents. If you’re writing an image editor of any kind, you should consider using the screen resolution to correct the magnification factor so that an 8.5×11″ image takes up exactly 8.5″ across (and 11″ down, if possible).

“Ah,” you say, “but what about Resolution Independence?”

The theory of Resolution Independence is that in some future version of Mac OS X (possibly Leopard), the OS will automatically set the UI scale factor so that the interface objects will be some fixed number of (meters|inches) in size, rather than some absolute number of pixels. So in my case, it would set the UI scale factor to roughly 98/72, or about 1+⅓.

This is a great idea, but it screws up the Adobe Reader theory of automatic magnification. With its setting that asks you what resolution your display is, it inherently assumes that your virtual display is 72 dpi—that is, that your UI is not scaled. Multiplying by 98/72 is not appropriate when the entire UI has already been multiplied by this same factor; you would essentially be doing the multiplication twice (the OS does it once, and then you do it again).

The solution to that is in the bottom half of that window. While I was working on ShowAllResolutions, I noticed that NSScreen also has a means to ascertain the screen’s resolution: [[[myScreen deviceDescription] objectForKey:NSDeviceResolution] sizeValue]. It’s not the same as the Quartz Display Services function, as you can see; it seemingly returns { 72, 72 } constantly.

Except it doesn’t.

In fact, the size that it returns is premultiplied by the UI scale factor; if you set your scale factor to 2 in Quartz Debug and launch ShowAllResolutions, you’ll see that NSScreen now returns { 144, 144 }.

The Resolution-Independent version of Mac OS X will probably use CGDisplayScreenSize to set the scale factor automatically, so that on that version of Mac OS X, NSScreen will probably return { 98.52, 98.52 }, { 96.33, 96.33 }, or { 98.52, 96.33 } for me. At that point, dividing the resolution you derived from CGDisplayScreenSize by the resolution you got from NSScreen will be a no-op, and the PDF view will not be doubly-magnified after all. It will be magnified by 133+⅓% by the UI scale factor, and then magnified again by 100% (CGDisplayScreenSize divided by NSDeviceResolution) by the app.

Obviously, that’s assuming that the app actually uses NSScreen to get the virtual resolution, or corrects for HIGetScaleFactor() itself. Adobe Reader doesn’t do that, unfortunately, so it suffers the double-multiplication problem.

So, the summary:

  • To scale your drawing so that its size matches up to real-world measurements, scale by NSDeviceResolution divided by { 72.0f, 72.0f }. For example, in my case, you would scale by { 98.52, 96.33 } / { 72.0, 72.0 } (that is, the x-axis by 98.52/72 and the y-axis by 96.33/72). The correct screen to ask for its resolution is generally [[self window] screen] (where self is a kind of NSView). See the sketch after this list.
  • You do not need to worry about HIGetScaleFactor most of the time. It is only useful for things like -[NSStatusBar thickness], which return a number of pixels rather than points (which is inconvenient in, say, your status item’s content view).
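
Here’s a sketch of that first point, inside an NSView’s -drawRect:, using NSAffineTransform:

NSScreen *screen = [[self window] screen];
NSSize resolution = [[[screen deviceDescription] objectForKey:NSDeviceResolution] sizeValue];
//Scale from virtual 72-dpi units to real-world inches on this screen.
NSAffineTransform *transform = [NSAffineTransform transform];
[transform scaleXBy:resolution.width / 72.0f yBy:resolution.height / 72.0f];
[transform concat];
//From here on, 72.0 units measure one real inch, so an 8.5×11″ page is 612×792 units.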

A Core-Image-less Image Unit

Wednesday, January 17th, 2007

Can you imagine an Image Unit that didn’t actually use Core Image?

I just wrote one.

Well, OK, so I did use CIFilter and CIImage — you can’t get away without those. But I did not use a CIKernel. That’s right: This simple filter does its work without a kernel.

For the uninitiated, a kernel is what QuartzCore compiles to either a pixel shader or a series of vector (AltiVec or SSE) instructions. All Image Units (as far as I know) use one — not only because it’s faster than any other way, but because that’s all you see in the documentation.

But I was curious. Could an Image Unit be written that didn’t use a kernel? I saw nothing to prevent it, and indeed, it does work just fine.

The image unit that I wrote simply scales the image by a multiplier, using AppKit. I call it the AppKit-scaling Image Unit. Feel free to try it out or peek at the source code; my usual BSD license applies.

Obviously, this Image Unit shouldn’t require a Core Image-capable GPU.

New Apple documentation

Wednesday, March 8th, 2006

I don’t ordinarily write about changes to Apple’s documentation, since you can get that info straight from ADC RSS, but today’s round of changes is unusually significant.

Apple replaced several of the documents relating to Cocoa with new ones, and added some things to the new ones as well. The full run-down:

Was → Now:

  • Basic Drawing; Drawing and Images; The Drawing Environment; OpenGL → Cocoa Drawing Guide
  • What Is Cocoa?; Cocoa Objects; Adding Behavior to a Cocoa Program; Cocoa Design Patterns; Communicating With Objects → Cocoa Fundamentals Guide

    Two new chapters have been added: “The Core Application Architecture” and “Other Cocoa Architectures”. In addition, the discussion of the Model-View-Controller design pattern in “Cocoa Design Patterns” has been greatly expanded. (revision history)

  • Sdef Scriptability Guide for Cocoa; Scriptable Application Programming Guide for Cocoa → Cocoa Scripting Guide
  • Document-Based Applications → Document-Based Applications Overview

    New articles added are “Message Flow in the Document Architecture”, “Creating Multiple-Document-Type Applications”, “Autosaving in the Document Architecture”, and “Error Handling in the Document Architecture”. (revision history)

  • Drawing and Views → Scroll View Programming Guide for Cocoa and View Programming Guide for Cocoa

I think the new set of documents may be much more suitable for passing to people who are learning for the first time. I’ll show The_Tick, who’s learning Cocoa, and see what he thinks.

While I’m at it, I’ll point out that they clarified the docs on CGImageSourceUpdateData:

Each time you call the function CGImageSourceUpdateData, the data parameter must contain all of the image file data accumulated so far.

So you should probably use a CFMutableData (or its NS equivalent) when using this function.
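
A sketch of that accumulation pattern, using the incremental-source functions (newData stands in for whatever chunk your download callback just received):

//Once, up front:
CGImageSourceRef imageSource = CGImageSourceCreateIncremental(/*options*/ NULL);
NSMutableData *accumulatedData = [[NSMutableData alloc] init];

//Then, every time another chunk arrives:
[accumulatedData appendData:newData];
bool downloadIsComplete = false; //Pass true along with the final chunk.
CGImageSourceUpdateData(imageSource, (CFDataRef)accumulatedData, downloadIsComplete);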

On writing Image Units

Wednesday, January 11th, 2006

Apple’s Core Image documentation doesn’t clearly state how to make a CPU-executable Image Unit, as opposed to a non-executable (GPU-only) one.

The answer is simple: Don’t make a .cikernel file. You should start with one when you’re running the validation tool over your Image Unit, but if you want to make a CPU-executable Image Unit (and please do, so I can use it on my Cube), move the kernel code into your Obj-C code once it compiles, and then delete the .cikernel file.
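
The result is that the kernel source ends up as a string in the filter class rather than in a separate file. A sketch, with a trivial, hypothetical pass-through kernel (and no ARC, this being the Tiger era):

static CIKernel *kernel = nil;
if (!kernel) {
    NSString *kernelSource = @"kernel vec4 passThrough(sampler src) { return sample(src, samplerCoord(src)); }";
    kernel = [[[CIKernel kernelsWithString:kernelSource] objectAtIndex:0] retain];
}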

(BTW, yes, I am testing MarsEdit. Hence the two posts today. ;)

UPDATE 2008-02-18: I’ve since found out that, in fact, the definition of “non-executable” is more strict than that. The Core Image Programming Guide’s chapter on Executable vs. Non-executable Filters provides a more exact definition:

  • This type of filter is a pure kernel, meaning that it is fully contained in a .cikernel file. As such, it doesn’t have a filter class and is restricted in the types of processing it can provide.

  • Sampling instructions of the following form are the only types of sampling instructions that are valid for a nonexecutable filter:

    color = sample (someSrc, samplerCoord(someSrc));
  • CPU nonexecutable filters must be packaged as part of an image unit.

  • Core Image assumes that the ROI coincides with the domain of definition. This means that nonexecutable filters are not suited for such effects as blur or distortion.

Testing Image Units

Sunday, January 8th, 2006

Some of you may know that I’ve been studying Apple’s Core Image API — specifically, how to write an Image Unit. I just found this, buried in Apple’s website: Software Licensing & Trademark Agreements: Image Units.

What’s special about that? Read the page closely:

3. Download the Image Units Validation Tool (DMG). Use of this application is subject to the terms of the Validation Tool License (RTF) presented upon launch.

It’s actually a command-line tool, and the agreement is displayed when the image is mounted rather than when the tool is run, but nonetheless — it’s a tool that examines your Image Unit and attempts to compile its .cikernel file, and tells you if it finds anything wrong. Highly useful.