Archive for the 'Core Image' Category

Apple documentation search that works

Sunday, March 6th, 2011

You’ve probably tried searching Apple’s developer documentation like this:

The filter field on the ADC documentation library navigation page.

Edit: That’s the filter field, which is not what this post is about. The filter sucks. This isn’t just an easy way to use the filter field; it’s an entirely different solution. Read on.

You’ve probably been searching it like this:

Google.

(And yes, I know about site:developer.apple.com. That often isn’t much better than without it. Again, read on.)

There is a better way.

Better than that: A best way.

Setup

First, you must use Google Chrome or OmniWeb.

Go to your list of custom searches. In Chrome, open the Preferences and click on Manage:

Screenshot with arrow pointing to the Manage button.

In OmniWeb, open the Preferences and click on Shortcuts:

Screenshot of OmniWeb's Shortcuts pane.

Then add one or both of these searches:

For the Mac

  • Name: ADC Mac OS X Library
  • Keyword: adcmac (Chrome); adcmac@ (OmniWeb)
  • URL: http://developer.apple.com/library/mac/search/?q=%s (Chrome); http://developer.apple.com/library/mac/search/?q=%@ (OmniWeb)

For iOS

  • Name: ADC iOS Library
  • Keyword: adcios (Chrome); adcios@ (OmniWeb)
  • URL: http://developer.apple.com/library/ios/search/?q=%s (Chrome); http://developer.apple.com/library/ios/search/?q=%@ (OmniWeb)

Result

Notice how the results page gives you both guides and references at once, even giving specific-chapter links when relevant. You even get relevant technotes and Q&As. No wild goose chases, no PDF mines, no stale third-party copies, no having to scroll past six hits of mailing-list threads and Stack Overflow questions. You get the docs, the right docs, and nothing but the docs.

For this specific purpose, you now have something better than Google.

Nearest Neighbor Image Unit

Saturday, February 6th, 2010

I originally wrote this as an application using NSImage (with NSImageInterpolationNone), but decided to rewrite it as an Image Unit. So, here it is.
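For reference, the NSImage approach the old application used fits in a few lines. Here is a minimal sketch of that idea, not the original app's source; `source` is an assumed existing NSImage, and 4× is an arbitrary scale factor:

NSSize bigSize = NSMakeSize([source size].width * 4.0, [source size].height * 4.0);
NSImage *scaled = [[[NSImage alloc] initWithSize:bigSize] autorelease];
[scaled lockFocus];
// With interpolation off, AppKit copies the nearest source pixel instead of blending.
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];
[source drawInRect:NSMakeRect(0.0, 0.0, bigSize.width, bigSize.height)
          fromRect:NSZeroRect
         operation:NSCompositeCopy
          fraction:1.0];
[scaled unlockFocus];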

A public thank-you

Monday, March 17th, 2008

This goes out to whichever engineers at Apple fixed the Image Unit template to not suck.

Before Xcode 2.5, that template was useless. Now, it contains everything I need already set up, to the maximum extent possible.

Thank you, Apple engineers.

What to do if Core Image is ignoring your slider attributes

Wednesday, February 27th, 2008

So you’re writing an Image Unit, and it has a couple of numeric parameters. You expect that Core Image Fun House will show a slider for each of them, and it does—but no matter what you do, the slider’s minimum and maximum are both 0. Furthermore, Core Image Fun House doesn’t show your parameters’ real display names; it simply makes them up from the parameters’ KVC keys.

The problem is that you are specifying those attributes in the wrong place in your Description.plist. And yes, I know you’re specifying them where the project template had them—so was I. The template has it wrong.

One of the filter attributes that Core Image recognizes is CIInputs. The value for this key is an array of dictionaries; each dictionary represents one parameter to the filter. The template has all the parameter attributes in these dictionaries. That makes sense, but it’s not where Core Image looks for them.

In reality, Core Image only looks for three keys in those dictionaries:

  • CIAttributeName
  • CIAttributeClass
  • CIAttributeDefault

Anything else, it simply ignores.

The correct place to put all those other keys (including CIAttributeSliderMin, CIAttributeSliderMax, and CIAttributeDisplayName) is in another dictionary—one for each parameter. These dictionaries go inside the CIFilterAttributes dictionary. In other words, the CIFilterAttributes dictionary should contain:

  • CIInputs => Aforementioned array of (now very small) dictionaries
  • inputFoo => Dictionary fully describing the inputFoo parameter, including slider attributes and display name
  • inputBar => Dictionary fully describing the inputBar parameter, including slider attributes and display name
  • inputBaz => Dictionary fully describing the inputBaz parameter, including slider attributes and display name

Finally, an example:

<key>CIFilterAttributes</key>
<dict>
    ⋮
    <key>CIInputs</key>
    <array>
        <dict>
            <key>CIAttributeClass</key>
            <string>CIImage</string>
            <key>CIAttributeName</key>
            <string>inputImage</string>
        </dict>
        <dict>
            <key>CIAttributeClass</key>
            <string>NSNumber</string>
            <key>CIAttributeDefault</key>
            <real>1.0</real>
            <key>CIAttributeName</key>
            <string>inputWhitePoint</string>
        </dict>
        <dict>
            <key>CIAttributeClass</key>
            <string>NSNumber</string>
            <key>CIAttributeDefault</key>
            <real>0.0</real>
            <key>CIAttributeName</key>
            <string>inputBlackPoint</string>
        </dict>
    </array>
    <key>inputWhitePoint</key>
    <dict>
        <key>CIAttributeClass</key>
        <string>NSNumber</string>
        <key>CIAttributeDefault</key>
        <real>1.0</real>
        <key>CIAttributeDisplayName</key>
        <string>White point</string>
        <key>CIAttributeIdentity</key>
        <real>1.0</real>
        <key>CIAttributeMin</key>
        <real>0.0</real>
        <key>CIAttributeMax</key>
        <real>1.0</real>
        <key>CIAttributeName</key>
        <string>inputWhitePoint</string>
        <key>CIAttributeSliderMin</key>
        <real>0.0</real>
        <key>CIAttributeSliderMax</key>
        <real>1.0</real>
        <key>CIAttributeType</key>
        <string>CIAttributeTypeScalar</string>
    </dict>
    <key>inputBlackPoint</key>
    <dict>
        <key>CIAttributeClass</key>
        <string>NSNumber</string>
        <key>CIAttributeDefault</key>
        <real>0.0</real>
        <key>CIAttributeDisplayName</key>
        <string>Black point</string>
        <key>CIAttributeIdentity</key>
        <real>0.0</real>
        <key>CIAttributeMin</key>
        <real>0.0</real>
        <key>CIAttributeMax</key>
        <real>1.0</real>
        <key>CIAttributeName</key>
        <string>inputBlackPoint</string>
        <key>CIAttributeSliderMin</key>
        <real>0.0</real>
        <key>CIAttributeSliderMax</key>
        <real>1.0</real>
        <key>CIAttributeType</key>
        <string>CIAttributeTypeScalar</string>
    </dict>
</dict>

You can see how the descriptions under CIInputs are as terse as possible; everything besides the absolute necessities is specified in the outer dictionaries.
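By the way, an easy way to check whether Core Image is actually honoring your attributes is to ask the filter itself. A quick sketch, assuming your image unit is installed and defines a filter named “MyLevelsFilter” (a hypothetical name; inputWhitePoint is the parameter from the example above):

[CIPlugIn loadAllPlugIns];
CIFilter *filter = [CIFilter filterWithName:@"MyLevelsFilter"]; // hypothetical filter name
// If the plist is right, this logs CIAttributeSliderMin, CIAttributeSliderMax,
// CIAttributeDisplayName, and the rest for that parameter.
NSLog(@"%@", [[filter attributes] objectForKey:@"inputWhitePoint"]);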

How to convert an alpha channel to a mask

Monday, February 18th, 2008

Updated 2008-04-17 to clarify the marked-up screenshot of the Color Matrix view. If you’ve seen this post before, check out the before and after.

So, let’s say you want to convert an image to a mask.

Triangle, circle, rectangle

Mask from triangle, circle, rectangle

This is easy to do with the Color Matrix filter in Core Image.

If you’ve ever looked at the Color Matrix filter out of curiosity, you were probably frightened by its imposing array of 20 text fields:

The fields are in five rows of four columns. The first four rows are “Red Vector”, “Green Vector”, “Blue Vector”, and “Alpha Vector”. The fifth row is “Bias Vector”; unlike the others, it is an addend rather than a multiplier, and it does not correspond to a color component.

Don’t worry. The fields are actually very simple, though not explained in the UI:

  • The Red, Green, Blue, and Alpha rows each represent a component of an output pixel.
  • Each column represents a component of an input pixel.
  • Each cell in the component rows is a multiplier.
  • Each cell in the “Bias vector” row is an addend.

This image expresses graphically the same explanation that the previous list provided.

(The documentation for the Color Matrix filter actually does explain it, but I like my explanation better for not using fancy math terms like “dot product”.)

So with this tool, our task is redefined like so:

How to replace the color channels of an image with the alpha channel, and set the alpha channel to all-100%

  1. Set the three color-component vectors (Red Vector, Green Vector, and Blue Vector) to 0, 0, 0, 1. (In other words, multiply every input color component by 0, and the input alpha by 1, and set all three color components to that.)
  2. Set the Alpha Vector to 0, 0, 0, 0. (In other words, multiply all input components by 0, and set the output alpha component to that. In other other words, set every output alpha component to 0.)
  3. Set the Bias Vector also to 0, 0, 0, 1. (In other words, add 0 to all three color components, and add 1 to the alpha component.)
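If you’d rather do this in code than in a filter UI, the same three steps translate directly to the CIColorMatrix filter, which is the programmatic face of those 20 fields. A minimal sketch, assuming `input` is an existing CIImage:

CIFilter *matrix = [CIFilter filterWithName:@"CIColorMatrix"];
[matrix setDefaults];
[matrix setValue:input forKey:@"inputImage"];
// Step 1: copy the input alpha into each output color channel…
CIVector *alphaOnly = [CIVector vectorWithX:0.0 Y:0.0 Z:0.0 W:1.0];
[matrix setValue:alphaOnly forKey:@"inputRVector"];
[matrix setValue:alphaOnly forKey:@"inputGVector"];
[matrix setValue:alphaOnly forKey:@"inputBVector"];
// Step 2: zero out the output alpha…
[matrix setValue:[CIVector vectorWithX:0.0 Y:0.0 Z:0.0 W:0.0] forKey:@"inputAVector"];
// Step 3: …then add 1 back to alpha via the bias vector.
[matrix setValue:alphaOnly forKey:@"inputBiasVector"];
CIImage *mask = [matrix valueForKey:@"outputImage"];

The recipes that follow work exactly the same way; only the four-component vectors change.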

You can generalize this to the extraction of other channels. Let’s say you want to make a mask of the blue channel:

  1. Set the three color-component vectors to 0, 0, 1, 0. (For every output color component, multiply every color component by 0, except for blue. Multiply blue by 1—i.e., don’t change it.)
  2. Set the Alpha Vector to 0, 0, 0, 0. (Multiply every alpha component by 0—i.e., set every output alpha component to 0.)
  3. Set the Bias Vector to 0, 0, 0, 1. (Add 1 to the alpha component. This step is invariant; you always add to the alpha component.)

To demonstrate this, here’s a red-blue gradient (shown in Acorn to visualize the gradient image’s own transparency):

The gradient image is an oval, filled with an upper-left-to-lower-right red-to-blue gradient, on a transparent background.

If we extract the blue channel, as shown above, we get this:

A mask where the blue parts of the source image are white, and all else is black.

Note how the red parts of the gradient are black, because we extracted the blue channel, and there was little to no blue there.

Likewise, if we extract the red channel, we get this:

A mask where the red parts of the source image are white, and all else is black.

In this case, the converse of the blue-channel mask.

(By the way, in case you’re wondering: No, I don’t know what caused the white pixels along the edge. It could be a Lineform bug, or a Core Image bug, or a graphics-card bug. Stupidly, I didn’t keep the original Lineform file for the source image, but in case you’d like to test it on your own machine, I re-created it. Here’s a PDF of the replica; you can convert it to PNG yourself. I can confirm that the replica gave me results similar to those from the image I used in this post.)

You can even mix up the colors of an image. Suppose we want to reverse that gradient:

  1. Set the Red Vector to 0, 0, 1, 0. (In other words, replace red with blue.)
  2. Set the Blue Vector to 1, 0, 0, 0. (In other words, replace blue with red.)
  3. Leave the Alpha and Bias Vectors at the default values. (In other words, we’re leaving the alpha channel unchanged this time.)

The same oval-shaped gradient image from above, but with red and blue swapped.
The red-to-blue gradient is now a blue-to-red gradient.

So what is this good for?

Well, mainly, so you can create mask images. Several filters require these, such as the Blend with Mask filter in the Stylize category. The Color Matrix filter makes this easy, although you still have to save the mask image somewhere.

It’s even easier in Opacity, where you can create a Color Matrix filter layer, configure it using the Layer Inspector, then hide it by clicking its eye icon. This way, the filter layer won’t show up in the rendered document (or in any of its build products), but you can still use its result as the mask to another filter layer.

Opacity

Wednesday, February 13th, 2008

As you may have read on wootest’s weblog, Like Thought Software released its new image editor, Opacity, today.

Before I go any further, here’s full disclosure: The developer invited me to beta-test the app, and I did. He also gave me a free license for this purpose (the app normally costs $89 USD). Also, I have some code in the app, because it uses IconFamily, which I contributed a patch to a long time ago.

OK, that’s everything. Now, to borrow from wootest’s disclaimer on the same topic:

Don’t confuse this as simple tit-for-tat back-scratching, though. Had I … had no involvement whatsoever, the application would still have been every bit as brilliant, and I would have come out just as strongly in favor of it.

I love this app.

Opacity is an image editor designed to enable app developers to create multiple-resolution and any-resolution graphics easily. It’s built for that specific purpose, and the Opacity website even says so. This app really is not intended for anything other than user-interface graphics.

Key points:

  • It’s mostly vector-based, but it also has primitive raster tools.
  • It has non-destructive Core Image filter layers, similar to Photoshop’s adjustment layers. (Contrast with Acorn, which makes you apply each filter permanently. You can’t go back and edit the filter parameters.)
  • It has built-in templates for most common icon types.

Opacity has several important features over past editors:

  • It has built-in support for multiple resolutions. Every Opacity document has one or more resolutions, and you can add and delete them at will.
  • It has a target-based workflow. Each Opacity document is, essentially, a “project” for one image; every target in the document results in one image file in an external format, such as TIFF or IconFamily (.icns). (The application now calls these “factories”, but early betas did, in fact, call them targets, and I prefer that terminology.) You can build each factory or all factories at will, and there’s an option to build all whenever you Save.
  • You are not limited to the stock suite of transformations (e.g., Rotate 90°, Scale, Flip Vertical); you can make your own.
  • You can create folder layers to group layers (especially filter layers) together, and these folder layers can be nested as deeply as you want.
  • When configuring a Core Image filter that accepts an image as a parameter (e.g., Shaded Material, Blend with Mask, or one of the Transition or Composite filters), you can use any layer in the document—even folder layers.

Opacity is not perfect. Some things don’t quite work as you would expect: for example, vector objects do automatically appear in every resolution, but pixels that you draw or paste don’t automatically get mirrored to the other resolutions; instead, Opacity waits for your explicit say-so (the Clone Current Layer’s Pixels to Other Resolutions command). Opacity also still has a couple of major bugs: Flip Horizontal, for example, takes way too long in one document that I created. Personally, I didn’t expect it to go final this early, and I recommend that you wait until at least 1.0.1.

But those are dark linings in a silver cloud. Once all the major bugs are fixed, I believe that this app is how you will create your application’s custom toolbar and button images for the modern resolution-independent world.

How to make a 512-px version of the Network icon

Saturday, February 2nd, 2008

You will go from the pure-blue .Mac icon…

…to the purplish-gray Network icon.

UPDATE 2008-02-02: Ahruman commented that you can just use NSNetwork in IconGrabber. No need to go through all these steps and fake one.

If you’ve ever needed a high-resolution version of the Network icon for anything, you may have noticed that Mac OS X does not ship with one. When you select the Network icon and Copy it, then create a new document from the clipboard in Preview or Acorn, the largest size available is 128-px.

Fortunately, the .Mac icon is available in 512-px, and you can easily change it into the Network icon.

You will, of course, need Leopard (for no other version of Mac OS X has 512-px icons).

  1. Obtain the built-in image NSImageNameDotMac in either Core Image Fun House or Acorn.
  2. Apply a Hue Adjust filter: +5°.
  3. Apply a Color Controls filter: Saturation × 0.25.

The easiest way to get the .Mac image is IconGrabber. Enter the name “NSDotMac”, then click Draw, then set the size to 512×512, then save. (Note: On an Intel Mac, you’ll need to build from source, because the pre-built version for PowerPCs doesn’t run on Intel for some reason.)
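If you’d rather script steps 2 and 3 than do them in a filter UI, they correspond to the stock CIHueAdjust and CIColorControls filters. A sketch, where `dotMacImage` stands for the 512-px CIImage you obtained in step 1; note that CIHueAdjust takes radians, so the +5° needs converting:

CIFilter *hue = [CIFilter filterWithName:@"CIHueAdjust"];
[hue setDefaults];
[hue setValue:dotMacImage forKey:@"inputImage"];
[hue setValue:[NSNumber numberWithDouble:5.0 * M_PI / 180.0] forKey:@"inputAngle"]; // +5°

CIFilter *desaturate = [CIFilter filterWithName:@"CIColorControls"];
[desaturate setDefaults];
[desaturate setValue:[hue valueForKey:@"outputImage"] forKey:@"inputImage"];
[desaturate setValue:[NSNumber numberWithDouble:0.25] forKey:@"inputSaturation"]; // × 0.25

CIImage *networkIcon = [desaturate valueForKey:@"outputImage"];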

A Core-Image-less Image Unit

Wednesday, January 17th, 2007

Can you imagine an Image Unit that didn’t actually use Core Image?

I just wrote one.

Well, OK, so I did use CIFilter and CIImage — you can’t get away without those. But I did not use a CIKernel. That’s right: This simple filter does its work without a kernel.

For the uninitiated, a kernel is what QuartzCore compiles to either a pixel shader or a series of vector (AltiVec or SSE) instructions. All Image Units (as far as I know) use one — not only because it’s faster than any other way, but because that’s all you see in the documentation.

But I was curious. Could an Image Unit be written that didn’t use a kernel? I saw nothing to prevent it, and indeed, it does work just fine.

The image unit that I wrote simply scales the image by a multiplier, using AppKit. I call it the AppKit-scaling Image Unit. Feel free to try it out or peek at the source code; my usual BSD license applies.
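For the curious, the core of such a filter looks something like the following. This is a sketch of the idea under my own names (AppKitScalingFilter, inputScale), not the shipping source linked above. The trick is that -outputImage can compute its result any way it likes, so it can round-trip the pixels through AppKit instead of calling a kernel:

@interface AppKitScalingFilter : CIFilter {
    CIImage *inputImage; // filled in via KVC by the Core Image runtime
    NSNumber *inputScale;
}
@end

@implementation AppKitScalingFilter

- (CIImage *) outputImage {
    CGRect extent = [inputImage extent];
    double scale = [inputScale doubleValue];
    NSSize newSize = NSMakeSize(extent.size.width * scale,
                                extent.size.height * scale);

    // Wrap the input CIImage in an NSImage so AppKit can draw it.
    NSImage *wrapper = [[[NSImage alloc] initWithSize:extent.size] autorelease];
    [wrapper addRepresentation:[NSCIImageRep imageRepWithCIImage:inputImage]];

    // Draw it, scaled, into a fresh bitmap.
    NSBitmapImageRep *scaledRep = [[[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:(NSInteger)newSize.width
                      pixelsHigh:(NSInteger)newSize.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0] autorelease];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithBitmapImageRep:scaledRep]];
    [wrapper drawInRect:NSMakeRect(0.0, 0.0, newSize.width, newSize.height)
               fromRect:NSZeroRect
              operation:NSCompositeCopy
               fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    // Hand the scaled pixels back to Core Image — no CIKernel anywhere.
    return [[[CIImage alloc] initWithBitmapImageRep:scaledRep] autorelease];
}

@end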

Obviously, this Image Unit shouldn’t require a Core Image-capable GPU.

On writing Image Units

Wednesday, January 11th, 2006

Apple’s Core Image documentation doesn’t clearly state how to make a CPU-executable Image Unit, as opposed to a non-executable (GPU-only) one.

The answer is simple: Don’t make a .cikernel file. You should start with one when you’re running the validation tool over your Image Unit, but if you want to make a CPU-executable Image Unit (and please do, so I can use it on my Cube), move the kernel code into your Obj-C code once it compiles, and then delete the .cikernel file.
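Concretely, “move the kernel code into your Obj-C code” just means compiling the kernel from a string at runtime instead of loading a .cikernel resource. A minimal sketch, using a trivial pass-through kernel of my own rather than any real filter’s:

static CIKernel *kernel = nil;
if (!kernel) {
    // The kernel source lives in the binary now, so no .cikernel file ships.
    NSString *code = @"kernel vec4 passthrough(sampler src) {\n"
                     @"    return sample(src, samplerCoord(src));\n"
                     @"}";
    kernel = [[[CIKernel kernelsWithString:code] objectAtIndex:0] retain];
}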

(BTW, yes, I am testing MarsEdit. Hence the two posts today. ;)

UPDATE 2008-02-18: I’ve since found out that, in fact, the definition of “non-executable” is more strict than that. The Core Image Programming Guide’s chapter on Executable vs. Non-executable Filters provides a more exact definition:

  • This type of filter is a pure kernel, meaning that it is fully contained in a .cikernel file. As such, it doesn’t have a filter class and is restricted in the types of processing it can provide.

  • Sampling instructions of the following form are the only types of sampling instructions that are valid for a nonexecutable filter:

    color = sample (someSrc, samplerCoord(someSrc));
  • CPU nonexecutable filters must be packaged as part of an image unit.

  • Core Image assumes that the ROI coincides with the domain of definition. This means that nonexecutable filters are not suited for such effects as blur or distortion.

Testing Image Units

Sunday, January 8th, 2006

Some of you may know that I’ve been studying Apple’s Core Image API — specifically, how to write an Image Unit. I just found this, buried in Apple’s website: Software Licensing & Trademark Agreements: Image Units.

What’s special about that? Read the page closely:

3. Download the Image Units Validation Tool (DMG). Use of this application is subject to the terms of the Validation Tool License (RTF) presented upon launch.

It’s actually a command-line tool, and the agreement is displayed when the image is mounted rather than when the tool is run, but nonetheless — it’s a tool that examines your Image Unit and attempts to compile its .cikernel file, and tells you if it finds anything wrong. Highly useful.