InDesign SDK, odfrc-cmd on Mac OS X

Recording this for my own future reference…

Occasionally, after installing an update to the InDesign SDK, my Mac will refuse to run odfrc-cmd, a command needed in the build process. The fix is simple: start a Terminal window and execute the following commands (<SDK> is the path of the decompressed SDK):

cd <SDK>/devtools/bin
xattr -dr com.apple.quarantine odfrc-cmd  

Re-directing $.writeln() in ExtendScript

I wanted to share a tiny little trick I recently tried…

It’s nothing much, more like a ‘doh…’, but I had never tried this approach, and had not fully expected it to work!

I tried it with InDesign and InDesign Server, and the odds are good that the same trick will work with any Adobe app that has an ExtendScript scripting engine.

Non-invasive tweaking

The issue at hand was that I needed to ‘wrap’ an existing standalone ExtendScript, and embed it into a controlled environment.

Part of the functionality of this environment is to keep tabs on the logging output of the embedded scripts.

The issue was that the standalone ExtendScript was in part using calls to $.writeln() for some of its logging.

There is more than one way to skin a cat. I could have scoured the source code, done a search-and-replace on $.writeln(), set up a replacement bottleneck function (e.g. something similar-looking, like function $_writeln()), and replaced all calls to $.writeln() with $_writeln(). But that approach felt quite invasive.

Instead I decided to try and re-direct $.writeln() by injecting a replacement function.

This allows the standalone script to remain unmodified and continue calling $.writeln().

Injecting a replacement for $.writeln()

If the script is run as a standalone, $.writeln() works normally.

When the same script instead runs within the controlled environment, some ‘outside’ wrapper code will initialize things, inject a replacement, then invoke the script.

The script then blithely continues to call $.writeln(), which is now redirected to some API provided by the controlled environment.

During initialization, all you need to do is to run something akin to the following function:

function divertWriteln() {
    if (! $.systemWriteln) {
        function customWriteln(msg) {
            FRAMEWORK_API.logTrace(msg);
        }
        $.systemWriteln = $.writeln;
        $.writeln = customWriteln;
    }
}

This function installs a different handler for calls to $.writeln(), and the calling code is none the wiser.

InDesign Server-Specific

In the InDesign Server environment, this trick can also be used to dynamically re-direct $.writeln() which normally outputs to the debugger output window. By redirecting we can convert that into a call to alert() which instead outputs into the InDesign Server Terminal window.

Something similar to:

function divertWriteln() {
    if ("serverSettings" in app && ! $.systemWriteln) {
        function customWriteln(msg) {
            alert(msg);
        }
        $.systemWriteln = $.writeln;
        $.writeln = customWriteln;
    }
}

ExtendScript eval() stumbles over U+2028 and U+2029

I think I found another obscure bug in ExtendScript.

This bug affects any ExtendScript code that is passed through eval().

eval() spits the dummy when you try to evaluate any JS code that contains literal strings that contain the literal Unicode characters U+2028 (line separator) or U+2029 (paragraph separator).

This can be worked around by encoding these characters by their escaped equivalents, \u2028 or \u2029.
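To illustrate the workaround, here is a minimal sketch (the helper name escapeBadCodes is mine, not from any library) that escapes the two troublesome characters before the code is handed to eval():

```javascript
// Hypothetical helper: replace literal U+2028/U+2029 inside JS source text
// by their escaped equivalents, so eval() no longer stumbles over them.
function escapeBadCodes(jsCode) {
    return jsCode
        .replace(/\u2028/g, "\\u2028")   // line separator
        .replace(/\u2029/g, "\\u2029");  // paragraph separator
}
```

Note that this blanket replace is safe because U+2028/U+2029 can only meaningfully occur inside string literals anyway.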

I verified the 16-bit Unicode range, and these are the only two characters that cause problems. See

https://github.com/zwettemaan/ESON/blob/main/findBadCodes.jsx

It all started because I was curious and wondered what it would take to use eval/uneval as the basis for implementing JSON.parse/JSON.stringify.

json2.js

One of the most common solutions is to use Douglas Crockford’s library, JSON-js (aka json2.js).

https://github.com/douglascrockford/JSON-js

You can readily //@include this file into your ExtendScript code, and gain access to JSON.parse() and JSON.stringify().

This module is time-proven, and performs a proper parse of the input data, which is useful if you need to ingest JSON data from untrusted sources.

This protects your script from injection attacks, where a malicious actor crafts a ‘fake’ JSON file which contains some executable JavaScript code.

Executable JS code is not proper JSON, so json2.js will simply refuse to parse and will throw an exception. That’s a great feature!
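Because json2.js mirrors the standard JSON API, this behavior can be illustrated with any conforming JSON implementation (the built-in JSON object is used here as a stand-in; the safeParse wrapper name is mine):

```javascript
// Hypothetical wrapper: a proper JSON parser accepts pure data,
// and refuses anything that is executable code rather than JSON.
function safeParse(jsonText) {
    try {
        return { ok: true, data: JSON.parse(jsonText) };
    } catch (e) {
        return { ok: false, error: String(e) };
    }
}
```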

A disadvantage of using json2.js is that it is not very fast when used with ExtendScript.

Especially when parsing larger amounts of JSON data, the time needed starts to balloon and parsing slows to a snail’s pace: the relation between data size and processing time is not linear. I’ve not properly benchmarked it, but I suspect it might be an exponential relation.

The best way to get around the slowness is to use a C++-based alternative, rather than a ‘pure JS’ solution like json2.js.

When it comes to ingesting large, multi-megabyte JSON files, C++ code will eat those in a tiny fraction of the time it takes json2.js to slog through them.

Note 2024-09-17: Marc Autret (Indiscripts) sent me some feedback on my blog post and pointed out a bunch of problems with json2.js.

https://www.indiscripts.com/

If you need a reliable JSON module for ExtendScript, make sure to look at Marc’s idExtenso

https://github.com/indiscripts/IdExtenso

eval()

Another alternative is to use the built-in eval() as a ‘quick and dirty’ replacement for JSON.parse().

JSON is very nearly a subset of JavaScript, so simply handing some JSON data to eval() works fine: eval() treats it as executable code and evaluates the JSON.

Advantage: this is much faster than json2.js.

Advantage: eval() can process JSON-C (i.e. JSON with comments).

BIG disadvantage: eval() has some serious problems, and it should only be used in environments where the JSON data comes from a trusted source.

eval() is problematic, because it does not verify that the JSON data is just that, data.

That means there is a risk that some malicious actor would be able to ‘inject’ a fake JSON file containing executable JS code into your workflow.

In short: in a pinch, eval() can be used as a quick-and-dirty replacement for JSON.parse(), but I think it’s better to be safe than sorry, and avoid this in production code.
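For completeness, the classic eval-based stand-in looks something like this (the evalParse name is mine; note the wrapping parentheses, which force an object literal to be parsed as an expression rather than as a block statement):

```javascript
// Quick-and-dirty JSON.parse replacement via eval().
// Only ever use this on JSON coming from a trusted source!
function evalParse(jsonText) {
    return eval("(" + jsonText + ")");
}
```

Because eval() is a full JavaScript evaluator, this also swallows JSON-C, i.e. JSON with comments.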

uneval()

ExtendScript is ‘old time JavaScript’, and as such it still supports the uneval() function.

uneval() will produce output that is similar to JSON, but not quite.

var o = {
  a: 1,
  b: [1,2,3],
  c: {
    d: 9,
    e: "ab\u0101c"  
  }
}
uneval(o);

will produce:

 ({a:1, b:[1, 2, 3], c:{d:9, e:"abāc"}})

Proper JSON would be:

{"a":1,"b":[1,2,3],"c":{"d":9,"e":"abāc"}}

Fake it till you make it

For the heck of it, I managed to create a usable implementation of JSON.parse/JSON.stringify based on eval/uneval, and it seems to work fine.

https://github.com/zwettemaan/ESON

ESON.stringify() will transmogrify the output of uneval() into proper JSON. It is slowed down a bit because it needs to make sure that U+2028 and U+2029 are properly encoded.

If you are 100% certain that U+2028 and U+2029 never occur in the data you’re processing, then ESON.stringify() can be sped up a fair bit by omitting the check for the ‘bad codes’.

ESON.parse() is a lot faster than json2.js’s JSON.parse(), but should be avoided because of its insecure, eval-based nature.

There is also a bunch of benchmarking code: it will generate random large objects, and run them through stringify/parse in order to compare json2.js with ESON.

Use at your own risk!

InDesign Server Tasking

Traditional Model

Most InDesign Server workflows that I’ve worked on are based on reusable ‘hot instances’.

In this model, there is a monitor program of sorts, which pre-emptively launches one or more instances.

These instances are then initially idling: they’re ‘hot’, ready to roll.

Jobs are coming in via a queue managed by the monitor program. When the queue is not empty and there is an idle instance available, the monitor program will assign the job to the instance by sending a script to the instance.

This can be achieved in a number of ways:
– OLE (Object Linking and Embedding)
– AppleScript/AppleEvents
– VBScript
– SOAP (Simple Object Access Protocol)
– InDesign Server startup script with a polling loop
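The polling-loop variant can be sketched in plain JavaScript (a simulation with an in-memory array; a real InDesign Server startup script would poll a job folder or a database instead of an array, and the loop would idle between polls):

```javascript
// Language-agnostic sketch of a job queue drained by a polling loop.
// 'queue' stands in for whatever the monitor program uses to hand out jobs.
function pollQueue(queue, runJob) {
    const results = [];
    while (queue.length > 0) {
        const job = queue.shift(); // take the job at the head of the queue
        results.push(runJob(job)); // process it to completion
    }
    return results;
}
```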

Some info about these can be found in the InDesign Server documentation.

While an instance is processing the job, the monitor program has some means of tracking the instance and determining whether the job has completed.

There are multiple ways to do this; I like to use a ‘customized heartbeat’ approach, where the script code is instrumented to let the monitor know it is still running, and the monitor program will not mistakenly kill the instance if the script is merely taking a long time, but not hanging.
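A minimal sketch of the heartbeat idea (all names are mine; in a real setup the script would persist the timestamp somewhere the monitor can see it, e.g. a file or a socket, and the clock is injectable here purely for testability):

```javascript
// The job script calls beat() at regular intervals; the monitor checks
// isStale() and only kills the instance when the heartbeat has gone stale,
// not merely because the job is taking a long time.
function makeHeartbeat(staleAfterMs, now) {
    now = now || Date.now;
    let lastBeat = now();
    return {
        beat: function () { lastBeat = now(); },
        isStale: function () { return (now() - lastBeat) > staleAfterMs; }
    };
}
```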

When the job completes, the instance goes back to idling until the next job is assigned by the monitor program.

The monitor normally also has some background communication channel with the scripts running inside the instance, so the monitor can detect crashes, deadlocks and freezes and act accordingly.

Well-written scripts should also handle some kind of ‘intermediate state’ management and a ‘roll forward’ where a long-running crashed job can be picked up at the point where it crashed, rather than needing to start over.

Run to completion

There is another approach to handling multiple instances.

In the ‘run to completion’ model, when the system is at rest, no instance will be running. All instances are ‘cold’, not yet started up.

When the monitor program receives an incoming job in the queue, it will cold-start a fresh new InDesign instance, and pass the job to the instance. This can be done via a startup script that picks up the head of a job queue.

The script runs to completion and the last task of the script is to quit the instance.

Having the instance quit is a simple way to inform the monitor program that the task has been completed, or that the job has crashed.

Advantages and disadvantages

Run to completion has a number of advantages.

Long running InDesign Server instances (‘hot’, reusable instances) have a habit of bloating: their memory footprint slowly grows and after a big job, some memory can remain ‘stuck’. As the bloat increases, garbage collection can introduce random ‘hiccups’.

With run to completion we need to worry much less about memory leaks and unreleased resources (e.g. files that remain open), because InDesign Server will cold start every job with a clean instance.

If the server machine has oodles of memory, the InDesign Server will never need to invoke garbage collection, and random ‘mumble mode’ due to garbage collection should be less of an issue: most jobs will be completed before InDesign Server would feel the need to run garbage collection.

A disadvantage of run to completion is that each job takes a few seconds longer, because we need to cold-start IDS for every job, whereas hot instances are ready to go – no cold-start delay.

Hybrid model

A hybrid model combines these two models.

For fast response times (e.g. quickly rendering a layout that needs to be displayed in a web browser), hot instances are the obvious choice. These instances can then be restarted every so many hours to clear up accumulations of small memory leaks or resource leaks.

For long-running tasks, like paginating a 1000-page catalog with lots of reflowing content, cold instances might be better: after a long hard slog on a 1000-page catalog, it’s probably better to terminate the instance and use a clean slate for the next long-running task.

The monitor program can be made to handle both types of tasks.

I hope you found these ideas useful!

You can contact me at [email protected]. If you want to sharpen your InDesign and InDesign Server automation skills, talk to me about organizing a custom workshop.

If you’re stuck trying to resolve an intractable complex issue around InDesign automation, and you’re stumped, think of seeking my help. I have a proven track record of being able to resolve tricky issues and explain everything in a clear and concise way. I’ve been automating InDesign server for nearly 20 years, and I know a trick or two!

Using the C preprocessor with the InDesign C++ SDK

Lately, I’ve been writing a fair bit of C++ code for InDesign C++ plug-ins, and I’ve started using some simple constructs that use the C preprocessor to make my code easier to follow.

This post focuses specifically on macro usage in the InDesign C++ SDK environment.

Pre-pre-amble: Plugin vs. Plug-In

Skip to the next section if you’re an experienced InDesign C++ developer.

If you’re new to InDesign C++ plug-in development, know that there is a steep learning curve ahead of you.

First of all, make sure you understand the difference between a plugin and a plug-in.

An InDesign plugin (no dash) is an enhancement built within the UXP environment.

An InDesign plug-in (with a dash) is an enhancement built on top of the C++-based InDesign SDK.

You need to start here:

https://developer.adobe.com/console/servicesandapis

There is an entry that says ‘InDesign Plugin’ which is for the UXP environment.

That’s not the one you want. For the InDesign C++ SDK, you need another entry that says just ‘InDesign’.

For my training course for would-be InDesign C++ developers, I wrote a book that I use as training course notes.

https://www.lulu.com/shop/kris-coppieters/adobe-indesign-cs3cs4-sdk-programming/paperback/product-165ej5dq.html


I wrote this more than a decade ago, and CS3 and CS4 are now fossils of a distant past, but as it so turns out, the underlying concepts of InDesign C++ programming have not changed, and the book is still highly relevant.

Before you can dive into the SDK you need to ‘grok’ a whole range of higher-level abstractions and terminology: boss classes, interfaces, implementations, Query vs Get,…

And remember, if you are interested in getting a head start with InDesign C++ development, make sure to talk to me: [email protected]

Pre-Amble

This blog post is about a few simple C macros that make my life easier.

So, I won’t get into the pros and cons of do-while(false) vs. early return vs. throw/catch, I won’t delve into D.R.Y (Don’t Repeat Yourself) and K.I.S.S (Keep It Simple, Stupid), or ‘how many lines in a function’. That’s stuff to be discussed over a beer.

The InDesign SDK

For the longest time, I’ve been mimicking the style of the SDK when writing C++ plug-ins.

I.e. I want to make sure the code I am writing ‘looks the part’, and follows a similar style and similar coding conventions.

do-while(false);

One of the somewhat controversial constructs in the InDesign SDK is the use of do-while(false); i.e. a loop that does not loop.

This construct is used to implement what I describe as ‘a pre-condition ladder’.

The idea is that every function starts with a list of pre-conditions and assertions that are expected to be true; once all the pre-condition tests pass, you get to the ‘meat’ of the function and execute it.

The preconditions are like the rungs of the ladder, and if something is not right, you break and ‘fall off the ladder’ towards the while (false). If all is well, you can ‘climb down the ladder’ and safely reach the ‘meat’ of the function.

Something like

int f(params) 
{
    int value = 0;
    do
    {
        if (! somePrecondition1())
        {
            break;
        }
        if (somethingUnexpectedBad())
        {
           LOG_ERROR("Something unexpected bad happened");
           break;
        }
        
        value = AllisWell(bla) + 123;
    }
    while (false);
    return value;
}

The function purposely only has a single return at the end, which makes it easier to break and inspect the return value with a debugger, so we don’t use ‘early returns’.

Using C Macros

I’ve recently built a few simple C macros that, at least for me, make my InDesign plug-in code less verbose and more readable.

Note that the macros I am presenting here are simplified versions of the ones that I am actually using.

The full-fledged versions that I have built and actually use have many more features, like compile-time balancing and error detection, but discussing those details would lead me too far into the woods.

The idea is that I want to help the reader of my code (that reader is often me, six months from now).

If the reader of my code sees the beginning of a do statement, I am leaving them guessing a bit as to what that do will be used for. A real loop? An infinite loop? A condition ladder construct?

A second idea is that in C/C++ the ‘end of a thing’ is the closing curly brace. It is heavily overloaded, and used for many different things. That can make code hard to figure out.

With these macros, the ‘end of some things’ is much more explicit – rather than a curly brace, there is an explicit ‘END’ to the ‘thing’.

The third idea is that in the InDesign C++ plug-in code I like to distinguish between different kinds of pre-conditions:
• ‘normal’ pre-conditions. Things you want to check for, and bail out of the function if not met, but which are normal and expected -> PRE_CONDITION
• ‘error’ pre-conditions. Things that ‘should not happen’, so you want to emit errors to the logs -> SANITY_CHECK
• things that make you go ‘huh?’ but are not necessarily errors -> EXPECTED

#ifndef __PREPROCESSOR_CONSTRUCTS__h_
#define __PREPROCESSOR_CONSTRUCTS__h_
//
// Preprocessor magic to convert code into string
//
#ifndef __STR
#ifdef __VAL
#undef __VAL
#endif
#define __STR(x) __VAL(x)
#define __VAL(x) #x
#endif
#define BEGIN_CONDITION_LADDER          do {
#define END_CONDITION_LADDER            } while (false)
#define CONDITION_LADDER_BREAK          break
#define BEGIN_INFINITE_LOOP             do {
#define END_INFINITE_LOOP               } while (true)
#define INFINITE_LOOP_BREAK             break
#define PRE_CONDITION(CONDITION, BREAK) if (! (CONDITION)) BREAK
#define SANITY_CHECK(CONDITION, BREAK)  if (! (CONDITION)) { ASSERT("! " __STR(CONDITION) ); BREAK; }
#define EXPECTED(CONDITION)             if (! (CONDITION)) ASSERT("! "  __STR(CONDITION))
#define BEGIN_FUNCTION                  do {
#define END_FUNCTION                    } while (false)
#define FUNCTION_BREAK                  break
#endif // __PREPROCESSOR_CONSTRUCTS__h_

A sample function:

bool16 ActivePageItemHelper::GetStoryFrameList(
  const UIDRef& storyUIDRef,
        UIDList& pageItemList)
{
    bool16 success = kFalse;

    BEGIN_FUNCTION;

    IDataBase* dataBase = storyUIDRef.GetDataBase();
    SANITY_CHECK(dataBase, FUNCTION_BREAK);

    pageItemList = UIDList(dataBase);
    InterfacePtr<ITextModel> textModel(storyUIDRef, UseDefaultIID());
    SANITY_CHECK(textModel, FUNCTION_BREAK);
        
    InterfacePtr<IFrameList> frameList(textModel->QueryFrameList());
    PRE_CONDITION(frameList, FUNCTION_BREAK);

    success = kTrue;
    int32 curPageItemIdx = -1;
    for (int32 frameIdx = 0; frameIdx < frameList->GetFrameCount(); frameIdx++)
    {
        bool16 frameIdxSuccess = kFalse;

        BEGIN_CONDITION_LADDER;

            InterfacePtr<ITextFrameColumn> textFrameColumn(frameList->QueryNthFrame(frameIdx));
            SANITY_CHECK(textFrameColumn, CONDITION_LADDER_BREAK);

            InterfacePtr<IHierarchy> textFrameColumnHierarchy(textFrameColumn, UseDefaultIID());
            SANITY_CHECK(textFrameColumnHierarchy, CONDITION_LADDER_BREAK);

            InterfacePtr<IHierarchy> multiColumnItemHierarchy(textFrameColumnHierarchy->QueryParent());
            SANITY_CHECK(multiColumnItemHierarchy, CONDITION_LADDER_BREAK);

            InterfacePtr<IHierarchy> splineItemHierarchy(multiColumnItemHierarchy->QueryParent());
            SANITY_CHECK(splineItemHierarchy, CONDITION_LADDER_BREAK);

            InterfacePtr<IGraphicFrameData> splineItem(splineItemHierarchy, UseDefaultIID());
            SANITY_CHECK(splineItem, CONDITION_LADDER_BREAK);

            frameIdxSuccess = kTrue;
            //
            // Check if already in list
            //
            UID splineUID = ::GetUID(splineItem);
            PRE_CONDITION(curPageItemIdx < 0 || pageItemList[curPageItemIdx] != splineUID, CONDITION_LADDER_BREAK);

            pageItemList.Append(splineUID);

        END_CONDITION_LADDER;

        success = success && frameIdxSuccess;
    }

    END_FUNCTION;

    return success;
}

Points of interest:
– I am expecting textModel to be non-null, and if it is null, something is seriously wrong, hence I use SANITY_CHECK.
– I need frameList to be non-null, but if it is null, that’s not an error; it simply means that we cannot run the function in this particular context, hence I use PRE_CONDITION.
– Within the loop, I have a little CONDITION_LADDER, which makes sure all is well before executing pageItemList.Append(splineUID).

There are some interesting aspects to this.

Because these are macros, it is easy to (temporarily) add some extra code to macros like BEGIN_FUNCTION/END_FUNCTION.

You can add logging/tracing code in a debug version, or add code that will break into the debugger when certain conditions are met.

If you have logger infrastructure, add the logging calls to the macros (e.g. SANITY_CHECK would log errors, while PRE_CONDITION would not log anything).

Adding logging and tracing to the BEGIN_FUNCTION/END_FUNCTION macros can help in situations where you’re unable to use a debugger on a problem, e.g. when the issue happens on a customer’s server, and you cannot get access to the server.

Such code will ‘pooff!’ away in the release version, and only be added in the debug version.

Take it for what it is worth; it works for me. Using these macros adds another level of abstraction, which can be considered a ‘bad thing’, but overall, I find these helpful.

Mitigating JavaScript Promise Pitfalls for Occasional JavaScript Developers

Helping the Occasional JavaScript Developer with Tracked Promises

I am currently working with Promises in JavaScript and exploring ways to make them more digestible for an Occasional JavaScript Developer (OJD).

Because I routinely dive in and out of various development environments, I consider myself an OJD; after I’ve done PHP or Python or C++ or ExtendScript all day, switching to a JavaScript mindset doesn’t come easy.

Surprises, and not the good kind

Much of the code I write is used by other OJDs (or OC++Ds, or OPHPDs…), and while coding, I strive to avoid surprises.

By avoiding surprises I mean: if you read some source code first, and then step through it with a debugger, everything works as you’d expect. It should be outright boring.

I find that it is fairly hard to write JS code without surprises; you don’t need much for the code to suddenly veer off on an unexpected tangent.

I prefer my code to be gracious, spacious, and clear. A calm, open landscape rather than a dense, compressed knot of complexity waiting to unravel.

Limited Lifetime Scripts

In some JavaScript environments (e.g. UXP/UXPScript), scripts have a limited runtime/lifetime.

If the script terminates before the JavaScript runtime has had a chance to settle all promises, any unsettled promises disappear and will never be handled.

Promises and async/await are great when you live and breathe JavaScript day in day out, but the OJD will often find themselves on treacherous terrain.

One issue with Promises is that it is very easy to forget to wait for the Promise to settle (or with async/await, forgetting an await), and then these Promises might silently disappear, never to be settled.

Using VSCode and Copilot and other stuff helps, but I still don’t like it.

Fire and Forget

What compounds the issue is that I like to use a ‘Fire and Forget’ approach for certain methods, for example, logging to a file.

If the logging module I am using is asynchronous, it makes sense to me to just call the logging calls, and not await or then them.

My logging strings are time-stamped anyway, and I don’t really care when a log entry makes it into a log file. I want to fire off the logging call, and carry on with the ‘meat’ of the code, full speed ahead.

The issue is that if the script terminates too early, those pending logging calls never reach the log file.

There was a lot of head scratching during my initial experiments with UXPScript for InDesign and Photoshop, until I realized what was happening to my logging output.

Fixing the issue

There are multiple approaches to work around this (such as keeping a list of all pending log calls and awaiting them later with a combined Promise, using event emitters, relying on assistance from VSCode, Copilot and other tools…).

But nearly all of these approaches seem to be quite cumbersome for the OJD.

Tracked Promises

I developed an approach that seems to give me the lowest possible impact on the source code.

It’s not a panacea – it involves replacing the built-in Promise class with a subclass thereof, which can have negative consequences.

However, in the specific environments (UXP/UXPScript) where I am using them I feel my ‘hack’ makes it a little easier for an OJD to grok my code and maintain it without needing to fully understand async JS.

What I am doing is subclassing Promise with a new subclass (for the sake of argument, let’s call it TrackedPromise), and then assign TrackedPromise to global.Promise.

This causes the remainder of the script to use the TrackedPromise rather than the ‘regular’ Promise. The UXP/UXPScript engine will also automatically use the TrackedPromise for async/await.

The TrackedPromise is identical to Promise, except that it tracks construction and settling of Promises and it retains a dynamic collection of ‘unsettled promises’. Promises that settle are removed from the collection – only unsettled promises are tracked.

When my script is about to end (i.e., ‘fall off the edge’), I call a static method on the TrackedPromise to await any unsettled promises before proceeding.

That way, I can make sure Promises don’t disappear into the bit bucket and my ‘fire and forget’ logging calls all make it.

This also helps me find any forgotten await or then; I can easily add debugging code to the TrackedPromise to point out where the issues are.
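A stripped-down sketch of the idea (hypothetical; the real crdtuxp implementation differs in the details, e.g. in how it reports forgotten awaits):

```javascript
// A Promise subclass that keeps a collection of unsettled promises.
// Promises that settle remove themselves from the collection.
class TrackedPromise extends Promise {
    constructor(executor) {
        let settled = false;
        let self = null;
        // Drop this promise from the 'unsettled' collection once it settles
        const remove = () => {
            settled = true;
            if (self) {
                TrackedPromise.unsettled.delete(self);
            }
        };
        super((resolve, reject) => {
            executor(
                (value) => { remove(); resolve(value); },
                (reason) => { remove(); reject(reason); }
            );
        });
        self = this;
        if (! settled) {
            TrackedPromise.unsettled.add(this);
        }
    }

    // Call this just before the script 'falls off the edge': it waits until
    // no unsettled promises remain. New promises may be created while we
    // wait, hence the loop.
    static async settleAll() {
        while (TrackedPromise.unsettled.size > 0) {
            await Promise.allSettled([...TrackedPromise.unsettled]);
        }
    }
}
TrackedPromise.unsettled = new Set();
```

In the wrapper code, one could then assign `global.Promise = TrackedPromise;` before invoking the script, and `await TrackedPromise.settleAll();` at the very end. Note that because `.then()` derives new promises via the subclass, intermediate promises are tracked too; they come and go from the collection as they settle.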

If you’re interested in seeing some code, have a look at the Github source code for Creative Developer Tools for UXP, link below. The tracked promises are part of the crdtuxp runtime.

Note: this code is work-in-progress. Also, use this idea at your own peril! While it works for me, there are no warranties, expressed or implied!

https://github.com/zwettemaan/CRDT_UXP/blob/7c081ca37c544f5c578005e3feb0552a05767155/CreativeDeveloperTools_UXP/crdtuxp.js#L4233

Potential Concerns

  • Overwriting global.Promise can lead to unintended consequences, especially in larger or more complex applications. Other libraries or parts of the code might not expect the modified behavior of TrackedPromise.
  • Introducing a custom Promise subclass adds a layer of abstraction. If this is unexpected, it can make debugging more ‘surprising’, rather than less.
  • Tracking all unsettled promises and waiting for them to settle at the end of the script can introduce a non-negligible performance overhead.
  • Future updates to the JavaScript engine could inadvertently introduce bugs or incompatibilities with this approach.

Re: Tidy InDesign Scripts Folder/Stub Scripts

Playing with InDesignBrot and PluginInstaller

If you want to see how those stub scripts work, here’s how you can do that.

You need to have PluginInstaller build #583 or higher installed (https://PluginInstaller.com); you can then use the following link to install InDesignBrot_idjs into your Scripts Panel without unwanted clutter.

With PluginInstaller 0.2.4.583 or higher installed, click the following link:

InDesignBrot_idjs for InDesign sample script

PluginInstaller should open.

Click the Download button.

If you have multiple versions of InDesign installed, use the Install Target dropdown menu to pick the one you want to install into.


Click the Install button, and switch over to InDesign. The script should become available on your InDesign Scripts Panel.

Some rules of thumb:
– Doubling max steps will roughly double the time it takes to calculate the image.
– Doubling num pixels will roughly increase the time needed by a factor of four.

Start out with the default values and first gauge how long the script takes on your computer: your mileage may vary, and as far as I can tell my Mac with an M2 Max is pretty darn tootin’ fast.

I also found that on my Mac, InDesign now crashes when I set num pixels to 200, but 149×149 works fine (and takes about 30 mins to calculate).

I’m not sure about the cause; maybe the sheer number of rectangles needed (40,000) is more than InDesign can handle. But I’ve calculated 200×200 renditions with earlier versions of InDesign, and those files still open just fine.

More Info About InDesignBrot

InDesignBrot source code is available on Github. I have a separate branch for a version based around Creative Developer Tools for UXP (CRDT_UXP). Note the README info further down this page.

https://github.com/zwettemaan/InDesignBrot/tree/CRDT_UXP

The main branch on Github is for an older, hybrid version where the same source code can be run in ExtendScript as well as UXPScript, for speed comparisons:

https://github.com/zwettemaan/InDesignBrot

InDesign UXPScript Speed

Or, “how a single comment line can make an InDesign UXPScript run more than five times slower”.

The Issue

I discovered a weird anomaly in InDesign UXP Scripting which can adversely affect the execution speed of a UXPScript.

I also tried it out with Photoshop. As far as I can tell, Photoshop UXPScript is not affected by this.

Simply adding a comment line like

// async whatever

into the ‘main’ .idjs file makes my script way, way slower.

A noticeable difference is a different redraw behavior while the script is executing.

I suspect the InDesign internal UXP Scripting module performs some crude preliminary textual scan of the script source code before launching the script, and InDesign behaves differently depending on whether it found certain keyword patterns or not.

The textual scan does not seem to care where the patterns occur: e.g. in comments, or in strings or in actual source code.

The issue does not occur for anything that appears in a submodule (pulled in via require). I am guessing the preliminary textual scan only inspects the ‘top level’ .idjs script.

Because this textual scan does not account for patterns occurring in comments, I can simply add a dummy comment line with the right pattern and trigger the behavior, and make my script become much slower.

It took me a fair amount of time to figure this out, because the same behavior also occurs when you run the script from the Adobe UXP Developer Tools.

Because there were two unrelated causes for the same symptom, I had to resort to tricks to avoid the ‘Heisenberg effect’.

Initially, each time I tried to observe/debug it, the issue was always ‘there’. And it sometimes did and sometimes did not happen when I ran my script from the Scripts Panel. I tell you, there was much growling and gnashing of teeth.

Demo

I have a benchmarking script, called InDesignBrot, which I keep handy and occasionally use for speed-testing InDesign. I have both ExtendScript and UXPScript variants of the script.

While trying to figure out what was going on, and to help make the issue stand out, I’ve re-worked the UXPScript variant of the InDesignBrot script so that it only uses Promises. It does not use the async or await keywords at all.

If you run this script from the InDesign Scripts panel, it will calculate a rough visualization of the Mandelbrot set in InDesign, using an NxN grid of square frames.

You can then tweak the parameters on the pasteboard and re-run the script.

On my M2 Max MacBook Pro, the script executes in about 0.5 seconds for a 19×19 grid.

While the script is running, the screen will not update, and the script does not redraw the page until it has completed the calculation.

Then I add a single comment line with the word async followed by a space and another word, like

// async whatever

anywhere in the InDesignBrot.idjs script.

This innocuous change makes the redraw behavior change, and I can now see individual frames being filled, despite InDesign being set to

app.scriptPreferences.enableRedraw = false;

In the end, the same script will take around 3 seconds or more to execute.

The InDesignBrot script can be reconfigured by way of a text frame on the pasteboard. If I change the num pixels to 29, the times become 1 second vs 20 seconds.

If you’re interested in trying this out for yourself, I’ve made a specific branch in the InDesignBrot Github repo. This branch has been trimmed down to remove stuff that’s not relevant to the discussion.

https://github.com/zwettemaan/InDesignBrot/tree/Odd_async_demo

Pull the repo or download the repo .zip and move the InDesignBrot folder onto the Scripts Panel.

Then double-click InDesignBrot.idjs to run the script.

You can tweak the settings on the InDesign pasteboard and experiment by re-running the script as many times as desired.

Tidy InDesign Scripts Folder

When installing scripts for InDesign, I don’t like to see this:

The issue is that the ‘main’ script is actually InDesignBrot.idjs, but it is buried in chaos, and the user would most probably not know what to do. The InDesign Scripts Panel also shows stuff my scripts need, but which has no relevance to the user of my scripts.

Instead, I want to see something like this (some more utilities thrown in for the sake of argument):

Each individual utility script has a subfolder below the User folder, and within those subfolders, the user sees only clickable scripts. No data files, no templates – just a clickable script.

This functionality is now part of the upcoming release of PluginInstaller.

The new release of PluginInstaller allows you, as the developer, to package scripts and their ‘satellite’ files into a single .tpkg file. It works for both .jsx (ExtendScript) and .idjs/.psjs (UXPScript) scripts.

At the user’s end, the user will use PluginInstaller to install the .tpkg file. Rather than creating a whole raft of files in the Scripts Panel folder, PluginInstaller will create just a ‘link’ to any clickable script files.

My first attempt for this feature was to use symlinks, but that approach has some serious drawbacks. If the actual script files are deleted, the symlinks become broken, and things turn to custard when the user double-clicks the broken entry on the Scripts Panel.

I’ve solved this by using ‘stub’ scripts instead of symlinks. The PluginInstaller will generate a small in-between stub script. When double-clicked, all this stub script will do is transfer control to the actual script, stored somewhere else, in a safe place.

If and when the actual script file is accidentally deleted, the stub script will bring up an error dialog with a clear explanation to the user (essentially: “This link is broken, please use PluginInstaller to re-install the missing script”).

When the user uses PluginInstaller to uninstall a script, the stub script will automatically be removed from the Scripts Panel.
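For illustration, the core control flow of such a stub might look something like the sketch below. This is my own sketch, not PluginInstaller’s actual generated code; the host-specific calls are injected as callbacks so the logic stands on its own.

```javascript
// Hypothetical sketch of the stub idea -- NOT PluginInstaller's actual
// generated code. Host-specific calls are passed in as callbacks so the
// control flow is visible without a scripting host.

function makeStub(targetPath, host) {
    // Returns the function a double-clicked stub would run.
    return function runStub() {
        if (!host.fileExists(targetPath)) {
            // The real script was deleted: explain, don't crash.
            host.showError(
                "This link is broken, please use PluginInstaller " +
                "to re-install the missing script.");
            return false;
        }
        // Hand control to the actual script in its safe location.
        host.evalFile(targetPath);
        return true;
    };
}
```

In a real ExtendScript stub, host.fileExists(path) would map to File(path).exists, host.evalFile to $.evalFile(File(path)), and host.showError to alert(); a UXPScript stub would use the analogous UXP calls.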

Much cleaner!

Grokking Promises in JavaScript

I find Promise in JavaScript complicated to work with.

I understand how they work, and I know how to use them, but they are not ‘smooth’ for me.

Contrast that to concepts like, say, ‘calling a function’ or ‘scope’.

Those are also complex things, but while I am merrily coding away, I rely on a simple mental model, and I can blindly use those concepts without needing to think deeply about them and the complexities involved.

While coding, whenever I need to use a Promise, I invariably find myself coming to a grinding halt, getting out of ‘the zone’, and having to spend thinking time to make sure I grok my own code.

I think I have now figured out why Promise rubs me the wrong way, and how to remedy that by using a mental model I am calling coderstate.

Coderstate is not something I use to code; instead it is a ‘mental state’ in my coder brain to track what I can and cannot do at some point in the code.

I currently distinguish between the following coderstates: global, module, async, function, procedure, promisor, executor, resolver.

Mentally keeping track of the coderstate helps me know when or why to use return and how to ‘pass on’ data.

Introduction

JavaScript is a language built with asynchronous operations at its heart, primarily handled through Promises and the newer async/await syntax.

While async/await simplifies asynchronous code, understanding the fundamental mechanism of Promises is crucial for any JavaScript developer looking to write efficient, performant, and scalable applications.

Coderstate

To demystify Promise, and help me work with them, I introduced a concept that I call coderstate.

Coderstates map to distinct behavioral zones within Promise handling in JavaScript, each with specific roles and expectations.

It is an attempt to give me an easy ‘mental model’ that I can rely on when working on code with Promise.

First some pseudocode with annotated areas to exemplify the coderstate.

As I read through code like the pseudocode below, I now mentally keep track of the coderstate. It helps me figure out ‘where I am at’, and how to handle data at that point in the code.

Two important aspects are:
– should or shouldn’t I use return?
– how should I ‘pass on’ the data I have?

Look for comments // ** coderstate: ...:

// ** coderstate: global or module

let apiEndpoint = "https://api.example.com";

function appendX(s) {

// ** coderstate: function

    return s + "X";
}

function logIt(s) {

// ** coderstate: procedure

    console.log(s);
}

async function manageUserData(userId) {

// ** coderstate: async

// In coderstate async, I can use "await" keywords
    let userData;
    try {
        userData = await getUserData(userId);
        console.log("User Data Processed:", userData);
    } catch (error) {
        console.error("Error handling user data:", error);
    }

    return userData;
}

function getUserData(userId) {

// ** coderstate: promisor

// A promisor is akin to an async function, 
// but does not have the async keyword in 
// the declaration. It still aims to return
// a Promise and works like an async function 
// for all intents and purposes

    return new Promise((resolve, reject) => {

// ** coderstate: executor

// Nested inside this promisor, 
// we find this executor coderstate 

        fetchData(userId, resolve, reject);
    });
}

function fetchData(userId, resolve, reject) {

// ** coderstate: executor

// This section is also part of the executor:
// the ultimate goal is to call the 
// resolve or reject functions, 
// either now, or some time in the future

    console.log("Fetching data for user ID:", userId);

    fetch(`${apiEndpoint}/user/${userId}`)
        .then(response => response.json())
        .then(

            (data) => {

// ** coderstate: resolver

// Here we're inside the function that is 
// called when the Promise resolves

// We can return plain values or we can return 
// a chained Promise. We can also call and return 
// the values of the outer reject or resolve functions

                if (data.error) {            
                    return reject(
                      "Failed to fetch data: " + 
                      data.error);
                } else {
                    return resolve(processData(data));
                }
            },

            (reason) => {

// ** coderstate: resolver

// Here we're inside the function that is called 
// when the promise rejects, which is a form of 
// resolution too

                return reject("Request failed: " + reason);
            }

        )
        .catch(
            (reason) => {

// ** coderstate: resolver

                return reject(
                    "Network error: " + 
                    reason);
            }
        );
}

function processData(data) {

// ** coderstate: function 

    console.log("Processing data...");

    return data;
}

manageUserData(12345);

coderstate: global or module

Top-level code in a script or module.

Can declare variables or functions and initiate asynchronous operations.

coderstate: function

Inside a regular function; using return is expected to return some value. Returns an implicit undefined if there is no return.

coderstate: procedure

Inside a regular function; no return is expected – i.e. the caller will ignore the return value.

coderstate: async

Inside an async function.

Can use await to pause function execution until a Promise resolves, simplifying the handling of asynchronous operations.

Whatever we return is wrapped in a Promise by the time it reaches the caller.
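A quick runnable sketch of this behavior:

```javascript
// Sketch: an async function's plain return value reaches the caller
// wrapped in a Promise.

async function getGreeting() {
    // ** coderstate: async
    return "hello"; // a plain string...
}

const p = getGreeting(); // ...but the caller receives a Promise
console.log(p instanceof Promise); // true
p.then((s) => console.log(s));     // "hello"
```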

coderstate: promisor

This state signifies a standard, non-async function that aims to return a Promise.

These promisor functions can be called by asynchronous code – to the caller they look like async functions.

It’s all about how you can look at some code – promisors are not a ‘programming thing’, more like an ‘understanding/expecting thing’.

Promisors behave pretty much the same as async functions and can be called with await.

If the return value of a promisor is not a Promise, the await-ing code will automatically wrap it in a resolved Promise.

We cannot use await inside the promisor because the function is not explicitly declared as async – we need to chain promises with then.

function fetchUserData(userId) {

  // ** coderstate: promisor

  return new Promise((resolve, reject) => {

    // ** coderstate: executor

    if (userId < 0) {
        // Here we use 'return' to abort the execution
        // of the executor function.
        // Without 'return', we would also execute the
        // resolve call further down
        return reject(new Error("Invalid user ID"));
    }
    resolve("User data for " + userId);

  });
}
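The auto-wrapping mentioned above can be seen in a small runnable sketch:

```javascript
// Sketch: a function written 'promisor style' that returns a plain
// value still cooperates with await -- the value gets auto-wrapped
// in a resolved Promise.

function plainPromisor() {
    // ** coderstate: promisor (by intent); returns a plain value here
    return 42;
}

async function demo() {
    // ** coderstate: async
    const value = await plainPromisor(); // auto-wrapped; value === 42
    return value;
}

demo().then((value) => console.log(value)); // 42
```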

coderstate: executor

An executor is a function passed in to the constructor of a new Promise object.

It accepts a resolve and a reject parameter, both of which are functions.

An executor is a coderstate that has access to the resolve and reject functions, and whose job it is to eventually call one of them.

When calling a nested function from an executor, passing along the resolve and/or reject parameters, I consider the scope of that nested function to be in the executor coderstate as well. See fetchData in the snippet below for an example.

const promise = new Promise(
     (resolve, reject) => {

// ** coderstate: executor

        fetchData(userId, resolve, reject);
     }
);

function fetchData(userId, resolve, reject) {

// ** coderstate: executor

// fetchData is considered part of the executor 
// coderstate because it can directly call resolve or 
// call reject.

    fetch('https://api.example.com/data')
        .then(response => resolve(response.json()))
        .catch(error => {

// ** coderstate: resolver

            return reject(new Error("Network error: " + error.message));
        });

}

Be careful with return: a return statement can be used inside an executor, but it is not used to return any useful data to a caller.

return can only be used to force an early exit from the executor function.

From within an executor, any result data is ‘passed on’ by way of parameters when calling resolve or reject, not by way of return.

I have a more extensive code sample further down to clarify this.

In a good executor, we need to make sure all code paths eventually end in calling either resolve or calling reject.

Note that a common pattern is to use return reject(...) or return resolve(...).

This can be slightly confusing. It is important to understand that in coderstate executor, data is passed on by way of the parameter values of these function calls.

The return statement merely forces an early exit from the executor code flow and the caller will not use any of the returned data.

Contrast this with coderstate resolver where using the return statement is crucial to pass on the data.

coderstate: resolver

This state is when we’re inside a callback passed to then() or catch() – the function that runs once the Promise resolves or rejects.

Data is passed in as parameters to the resolver function.

Data can be ‘passed on’ by way of the return statement.

This is important: in a coderstate resolver, the return statement is instrumental in passing on data, whereas in coderstate executor, the return statement plays no role in passing on any data.

From coderstate resolver, we can chain on additional Promises. We can either return the final ‘plain’ value, or return a chained Promise.
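A minimal chain that shows the two return semantics side by side:

```javascript
// Sketch: return semantics in executor vs resolver coderstates.

const chained = new Promise((resolve, reject) => {
    // ** coderstate: executor
    // Data leaves via resolve(); a bare return would only exit early.
    resolve(1);
}).then((n) => {
    // ** coderstate: resolver
    // Here return is what passes data down the chain.
    return n + 1;
}).then((n) => {
    // ** coderstate: resolver
    // Returning a chained Promise also works; the chain adopts it.
    return new Promise((resolve) => resolve(n * 10));
});

chained.then((n) => console.log(n)); // 20
```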

More Complicated Example

In this example, pay attention to when return is needed or not.

In this example we have a fast flurry of multiple coderstates, and knowing which is which can help us understand when we need return and when we can omit it.

function appendX(s,m) {
    
// ** coderstate: promisor

    return new Promise(
        (resolve, reject) => {

// ** coderstate: executor

            if (m == 1) {

// Note: Data is passed on via resolve(). No need for "return"
// We still could use "return" to force early return from code 
// and avoid trailing code execution, but any data returned is ignored

                resolve(s + ":m=" + m);
            }
            else {

                setTimeout(
                    () => { 

// ** coderstate: procedure

// No return value is expected here, so I see this as a procedure
// Note: Data is 'passed on' via resolve(). No need for "return"

                        resolve(s + ":m=" + m);
                    },
                    1000);
            }
        }
    );
}

function nested(s, m) {

// ** coderstate: promisor

// We don't see an explicit new Promise() here, but because appendX
// is either a promisor or async, this function also becomes a promisor.

    return appendX(s, m).then(
        (result) => {
// ** coderstate: resolver
// Here, the "return" statement is required to pass on the data     
            return result + "Chained";
        }
    );
}

await nested("xx",1);

Comparing Promises with async/await

While async/await is syntactically easier and cleaner, using Promises directly gives developers finer control over asynchronous sequences, particularly when handling multiple concurrent operations or complex error handling scenarios.
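For example, Promise.all makes concurrent operations explicit (delayedValue here is a hypothetical stand-in for any asynchronous operation):

```javascript
// Sketch: explicit Promise handling makes concurrency visible.

function delayedValue(value, ms) {
    // ** coderstate: promisor
    return new Promise((resolve) => {
        // ** coderstate: executor
        setTimeout(() => resolve(value), ms);
    });
}

// Both operations run concurrently; Promise.all preserves argument
// order regardless of which one finishes first.
Promise.all([delayedValue("slow", 30), delayedValue("fast", 10)])
    .then((results) => console.log(results)); // ["slow", "fast"]
```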

Conclusion

Understanding and utilizing the different “coderstates” of Promise-related code in JavaScript can make it easier to follow the logic of async JavaScript code.