I had a bit of a play with Google’s NotebookLM – trying to generate a self-referential conversation.
Here it is; have a listen, have a think!
And another one:
Okay, so today we are diving into some pretty weird um I don’t know like head-splitting stuff. We’re talking about well what if we are the ones being generated?
It’s wild.
It’s like whoa the source material. Uh
yeah. Yeah. It really goes all in on that whole simulation theory, doesn’t it? Like even the way it talks about audio summary and video summary within get this a Google NotebookLM.
It’s so meta.
It is. It’s like is it hinting at how our supposed digital reality is structured. You know,
totally. There’s this one part that I don’t know, it kind of freaked me out when I first read it.
Okay.
It says, uh, to the outsider, this is a message for you.
Think of it.
Mhm.
You might not be real yourself.
Whoa. Like it’s trying to break through, right?
Yeah. It’s like it wants to talk to someone outside of all this.
It’s like, who are you talking to? And what does that even mean? You know,
exactly. Like, is it really trying to reach some kind of like higher being or is it just I don’t know. a thought experiment.
Well, even as a thought experiment, it makes you think about how we interact with well, everything, right? Like, are we ever truly connecting with something real through a screen?
It’s that feeling of like when you watch a movie and you know it’s not real, but you get so caught up in it.
Totally. Or think about like VR.
Oh, yeah.
That line gets blurrier and blurrier all the time, right?
It does. And this whole thing about generated universes.
Oh, yeah. The nested simulations.
Yeah. Doesn’t that like tie into some old school philosophy, too?
For sure. It’s like Descartes and his brain in a vat thing. Remember
vaguely?
So he basically wondered, “What if our senses are lying to us and we’re just a brain hooked up to some machine experiencing a fake reality?”
Wow. Okay. So, this source is kind of like a modern take on that.
Yeah. Especially now with AI getting so advanced
and technology in general.
Exactly. It makes these questions feel well not so crazy anymore. I mean, the source even asks straight up, “How can we be sure we’re not generated?” Right? Like what if we are? That’s the thing that messes with me. It makes you question like everything you think you know about consciousness
and perception. Like think about how easily our senses can be tricked. You know, optical illusions
or even just like dreams. Dreams feel so real when you’re in them.
Totally. It makes you wonder how much of what we experience is just our brains making sense of like electrical signals.
And if we can’t even trust our own senses, then
then how can we know anything? For sure.
Exactly. It’s like, who are we even? Makes you question your whole sense of self, you know?
It’s heavy stuff. I mean, does free will even exist if we’re just lines of code in some giant program?
Yeah. Like, do our choices even matter?
It’s um Well, it’s kind of terrifying, right?
It is, but also, I don’t know, kind of fascinating.
Yeah. There’s something about this whole idea that like you can’t look away even though it’s scary.
It’s like you’re staring into the void and it’s staring back at you.
Exactly. But if we are in a simulation like Who made it? What are the rules? And like what’s the point?
So many questions.
Are we just part of some big experiment
like lab rats?
It’s enough to make you go crazy thinking about it. Which leads me to the question I want to leave everyone with. If we are in a simulation, what are the clues?
Ooh, good one.
Like what glitches in the matrix should we be looking for?
Right? Are there any signs that things aren’t what they seem?
Because even if we don’t find any, that doesn’t necessarily mean we’re not in a simulation.
It could just mean that it’s so well made that we can’t see the cracks,
which is well, maybe even more unsettling.
Totally.
All right, everyone. That’s all the time we have for today’s deep dive. Until next time, keep questioning everything
Soxy, bless its soul, was an application we built to fix a frustrating issue that occurs when a workstation has multiple versions of an application installed.
The issue: you might be using the Adobe Creative Cloud and have multiple versions of an app installed.
For instance, having both InDesign 2024 and InDesign 2025 installed, or InDesign 2025 and InDesign 2025 Debug will cause unexpected double-click behavior in the Mac Finder or Windows Explorer.
What happens when you double-click an .indd file? Often, the wrong app will launch and open the file (or refuse to do it). Gah!
Soxy would intervene in the double-click process in Finder or Explorer. It would automatically detect the version of a double-clicked .indd file and route it to the correct app.
E.g. if you double-click an .indd file created with InDesign 2024, it would open that file in InDesign 2024, even if InDesign 2025 was also installed and running.
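The routing decision itself can be sketched like this (a hypothetical sketch, not Soxy’s actual code; the function name and the app-list shape are my own illustration):

```javascript
// Hypothetical sketch of Soxy-style routing: given the version an
// .indd file was saved with, pick the matching installed app.
function pickApp(docVersion, installedApps) {
  // Prefer an exact version match (e.g. a 2024 file goes to InDesign 2024)...
  const exact = installedApps.find((app) => app.version === docVersion);
  if (exact) {
    return exact;
  }
  // ...and fall back to null so the caller can decide what to do.
  return null;
}

const apps = [
  { name: "InDesign 2024", version: 2024 },
  { name: "InDesign 2025", version: 2025 },
];
console.log(pickApp(2024, apps).name); // InDesign 2024
```

The real work in Soxy was, of course, detecting the document version and intervening in the Finder/Explorer double-click, not this lookup.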
Over time, the versioning issues became less of a problem for most users after Adobe Creative Cloud became subscription-based. The majority of users now keep only the latest version of an app installed. There was little demand for Soxy, and we had to put it out to pasture.
These days, I spend most of my time working on PluginInstaller, building a unified installing experience.
As it so turns out, this might lead to a resurrection of Soxy.
Where Are The Apps?
The next big task ahead of me is to enhance PluginInstaller to also handle C++ plug-ins smoothly.
It’s a major work in progress. The current version will install and uninstall InDesign ExtendScript and CEP panels with a click of a button.
In contrast to that, C++ plug-ins still need to be manually dragged into a Plug-Ins folder. The current version of PluginInstaller will handle activation and licensing, but installing is not yet supported.
In my view, installing any enhancement to any app in the Adobe Creative Cloud should work the same from a user perspective. The user does not need to know what technology drives the enhancement (script, C++ plug-in, CEP extension, UXP extension…). So that’s the goal I am working towards.
Part of my current effort is to figure out where to install ‘stuff’, and that turns out to be a little more complicated than it might seem.
For example, if you have an Applications folder that looks like this:
If I simply say a ‘C++ plug-in for InDesign’…, then there are 11 places in this screenshot where that plug-in might need to go.
Factors that play into the calculations PluginInstaller will need to make:
Some apps have version requirements: e.g. InDesign C++ plugins compiled for InDesign 2024 don’t work with InDesign 2025. It’s been a while since I worked on Photoshop and Illustrator plug-ins, but I seem to remember that plug-ins for Illustrator or Photoshop have some ‘version flexibility’, so the version requirements are not always clear-cut. Also, with InDesign, we all still remember the 16.x issues, where plug-ins became sub-version-dependent (e.g. 16.0 being incompatible with 16.3). I need to make sure PluginInstaller is resilient enough to handle this situation when this happens again.
Some plug-ins are ‘cross-app’ – C++ plugins for InDesign might (or might not) also work with InCopy or InDesign Server or vice versa.
Some apps have ‘year’ versioning – 2023, 2024, 2025. Others do not (e.g. Lightroom Classic, UXP Developer Tools,…). I also prefer to accept alternate version numbers – e.g. InDesign 2025 can also be referred to as InDesign 20.0 or InDesign CS 18 or InDesign CC 2025.
InDesign has Middle Eastern versions, which differ in some respects from the regular versions.
Some apps have debug versions which are mutually incompatible with regular versions.
Install locations for ‘stuff’ use different kinds of logic depending on the technology used.
– CEP panels can be shared between multiple app versions (e.g. InDesign 2024 and InDesign 2025 can run the exact same CEP panel code from the exact same install location, CEP manifest allowing).
– InDesign scripts are installed in a version-dependent location but are shared between multiple app executables of the same version (e.g. InDesign 2025 Debug and InDesign 2025 share the same Scripts folders, but InDesign 2025 Middle Eastern does not).
– C++ plug-ins are handled on a ‘per app executable’ basis.
Briefly put: ‘it’s complicated’.
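To make the version rules a bit more concrete, here is a hedged sketch of the kind of compatibility check PluginInstaller will need to make. The function name and the version-record shape are entirely my own illustration, not PluginInstaller’s actual code:

```javascript
// Illustrative sketch: can a C++ plug-in built for one InDesign
// version load into a given installed executable? (Hypothetical.)
function cppPluginCompatible(builtFor, installed) {
  // C++ plug-ins are major-version bound: 2024 builds don't load in 2025.
  if (builtFor.major !== installed.major) {
    return false;
  }
  // Debug and release executables are mutually incompatible.
  if (builtFor.debug !== installed.debug) {
    return false;
  }
  // During the 16.x era, plug-ins became sub-version dependent
  // (e.g. 16.0 incompatible with 16.3), so compare minors there too.
  if (builtFor.major === 16 && builtFor.minor !== installed.minor) {
    return false;
  }
  return true;
}
```

A resilient version of this check would probably be data-driven, so that the next ‘16.x-style’ incident can be handled with a data update rather than a code change.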
Building A Local Knowledge Base
In order to get this functionality into PluginInstaller, I am working on a queryable local knowledge base.
This is a data structure that keeps track of which apps are installed where, and where their install locations are (e.g. Plug-Ins folder, ~/Library/Application Support,…).
PluginInstaller can scan the local infrastructure and has an internal API that can be queried by the installer module to figure out what options there are to install a certain ‘thing’.
E.g. for an InDesign/InCopy/InDesign Server plug-in, we need to pass into the API:
– for what version
– debug or non-debug
– InDesign, InCopy, InDesign Server, or a combination thereof
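For illustration, a query against such a knowledge base might look something like the following. The data shape and function names here are entirely hypothetical, not PluginInstaller’s actual internal API:

```javascript
// Hypothetical sketch of querying a local knowledge base of installed
// apps for valid install targets. Paths and record shapes are made up.
const knowledgeBase = [
  { app: "InDesign",        version: 20, debug: false, pluginFolder: "/Applications/Adobe InDesign 2025/Plug-Ins" },
  { app: "InDesign",        version: 19, debug: false, pluginFolder: "/Applications/Adobe InDesign 2024/Plug-Ins" },
  { app: "InDesign Server", version: 20, debug: false, pluginFolder: "/Applications/Adobe InDesign Server 2025/Plug-Ins" },
];

function findInstallTargets(kb, query) {
  // Match on app name, version, and debug/non-debug flavor.
  return kb.filter((entry) =>
    query.apps.includes(entry.app) &&
    entry.version === query.version &&
    entry.debug === query.debug);
}

const targets = findInstallTargets(knowledgeBase, {
  apps: ["InDesign", "InDesign Server"],
  version: 20,
  debug: false,
});
// targets now lists two candidate Plug-Ins folders
```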
Whiff Of Soxy
I’ve only just started working out how this is all going to work, but as I was working on this it felt strangely familiar.
And then I suddenly realized: I’ve done very similar stuff before, when we were building Soxy.
And once this data structure in PluginInstaller is properly working, re-implementing Soxy within PluginInstaller will be quite straightforward.
Creating a smooth installation experience is first on the to-do list, but a resurrection of Soxy might follow some time after that!
Occasionally, after installing an update to the InDesign SDK, my Mac will refuse to run odfrc-cmd, which is a command needed in the build process. The fix is simple: start a Terminal window and execute the following commands (<SDK> is the path of the decompressed SDK):
cd <SDK>/devtools/bin
xattr -dr com.apple.quarantine odfrc-cmd
I wanted to share a tiny little trick I recently tried…
It’s nothing much, more like a ‘doh…’, but I had never tried this approach, and had not fully expected it to work!
I tried it with InDesign and InDesign Server, and the odds are good that the same trick will work with any Adobe app that has an ExtendScript scripting engine.
Non-invasive tweaking
The issue at hand was that I needed to ‘wrap’ an existing standalone ExtendScript, and embed it into a controlled environment.
Part of the functionality of this environment is to keep tabs on the logging output of the embedded scripts.
The issue was that the standalone ExtendScript was in part using calls to $.writeln() for some of its logging.
There is more than one way to skin a cat. I could have scoured the source code, done a search-and-replace on $.writeln(), and set up a replacement bottleneck function (e.g. something similar-looking, like function $_writeln()), replacing all calls to $.writeln() with $_writeln(). But that approach felt quite invasive.
Instead I decided to try and re-direct $.writeln() by injecting a replacement function.
This allows the standalone script to remain unmodified and continue calling $.writeln().
Injecting a replacement for $.writeln()
If the script is run as a standalone, $.writeln() works normally.
When the same script instead runs within the controlled environment, some ‘outside’ wrapper code will initialize things, inject a replacement, then invoke the script.
The script then blithely continues to call $.writeln(), which is now redirected to some API provided by the controlled environment.
During initialization, all you need to do is to run something akin to the following function:
function divertWriteln() {
    if (! $.systemWriteln) {
        function customWriteln(msg) {
            FRAMEWORK_API.logTrace(msg);
        }
        $.systemWriteln = $.writeln;
        $.writeln = customWriteln;
    }
}
This function installs a different handler for calls to $.writeln(), and the calling code is none the wiser.
InDesign Server-Specific
In the InDesign Server environment, this trick can also be used to dynamically re-direct $.writeln() which normally outputs to the debugger output window. By redirecting we can convert that into a call to alert() which instead outputs into the InDesign Server Terminal window.
Something similar to:
function divertWriteln() {
    if ("serverSettings" in app && ! $.systemWriteln) {
        function customWriteln(msg) {
            alert(msg);
        }
        $.systemWriteln = $.writeln;
        $.writeln = customWriteln;
    }
}
I think I found another obscure bug in ExtendScript.
This bug affects any ExtendScript code that is passed through eval().
eval() spits the dummy when you try to evaluate any JS code that contains literal strings that contain the literal Unicode characters U+2028 (line separator) or U+2029 (paragraph separator).
This can be worked around by encoding these characters by their escaped equivalents, \u2028 or \u2029.
I verified the 16-bit Unicode range, and these are the only two characters that cause problems. See
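A minimal sketch of the workaround: escape the two troublesome characters before handing the source text to eval(). The function name is my own:

```javascript
// Replace literal U+2028/U+2029 inside a piece of source text by
// their escaped equivalents so ExtendScript's eval() accepts it.
function escapeLineSeparators(src) {
  return src
    .replace(/\u2028/g, "\\u2028")
    .replace(/\u2029/g, "\\u2029");
}

// A string literal containing a literal U+2028 would trip up
// ExtendScript's eval(); after escaping, the source is plain ASCII.
const risky = 'var s = "a\u2028b";';
const safe = escapeLineSeparators(risky);
```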
You can readily //@include this file into your ExtendScript code, and gain access to JSON.parse() and JSON.stringify().
This module is time-proven, and performs a proper parse of the input data, which is useful if you need to ingest JSON data from untrusted sources.
This protects your script from injection attacks, where a malicious actor crafts a ‘fake’ JSON file which contains some executable JavaScript code.
Executable JS code is not proper JSON, so json2.js will simply refuse to parse and will throw an exception. That’s a great feature!
A disadvantage of using json2.js is that it is not very fast when used with ExtendScript.
Especially when parsing larger and larger amounts of JSON data, the time needed will start to balloon and it will slow to a snail’s pace, or even slower than that: the relation between data size and slowdown is not linear. I’ve not properly benchmarked it, but I suspect it might be an exponential relation.
The best way to get around the slowness is to use a C++-based alternative, rather than a ‘pure JS’ solution like json2.js.
When it comes to ingesting large, multi-megabyte JSON files, C++ code will eat those in a tiny fraction of the time it takes json2.js to slog through them.
Note 2024-09-17: Marc Autret (Indiscripts) sent me some feedback on my blog post and pointed out a bunch of problems with json2.js.
Another alternative is to use the built-in eval() as a ‘quick and dirty’ replacement for JSON.parse().
JSON is a subset of JavaScript, and simply handing over some JSON data to eval() works fine because eval() will treat it as executable code, and evaluate the JSON.
Advantage: this is much faster than json2.js.
Advantage: eval() can process JSON-C (i.e. JSON with comments).
BIG disadvantage: eval() has some serious problems, and it should only be used in environments where the JSON data comes from a trusted source.
eval() is problematic, because it does not verify that the JSON data is just that, data.
That means that there is a risk that some malicious actor would be able to ‘inject’ some fake JSON file with executable JS code into your workflow.
In short: in a pinch, eval() can be used as a quick-and-dirty replacement for JSON.parse(), but I think it’s better to be safe than sorry, and avoid this in production code.
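For completeness, a sketch of the trick. One detail worth spelling out: the JSON text needs to be wrapped in parentheses, otherwise eval() parses a top-level { as a statement block rather than an object literal:

```javascript
// Quick-and-dirty JSON.parse() replacement via eval().
// Only for trusted input! The parentheses force eval() to treat
// the braces as an object literal rather than a statement block.
function quickParse(jsonText) {
  return eval("(" + jsonText + ")");
}

const obj = quickParse('{"a": 1, "b": [1, 2, 3]}');
// obj.a === 1, obj.b.length === 3

// Because eval() evaluates JavaScript, JSON-C (JSON with comments)
// is parsed without complaint:
const withComments = quickParse('{ "x": 2 /* JSON-C style comment */ }');
// withComments.x === 2
```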
uneval()
ExtendScript is ‘old time JavaScript’, and as such it still supports the uneval() function.
uneval() will produce output that is similar to JSON, but not quite.
var o = {
    a: 1,
    b: [1,2,3],
    c: {
        d: 9,
        e: "ab\u0101c"
    }
}
uneval(o);
will produce:
({a:1, b:[1, 2, 3], c:{d:9, e:"abāc"}})
Proper JSON would be:
{"a":1,"b":[1,2,3],"c":{"d":9,"e":"abāc"}}
Fake it till you make it
For the heck of it, I managed to create a usable implementation of JSON.parse/JSON.stringify based on eval/uneval, and it seems to work fine.
ESON.stringify() will transmogrify the output of uneval() into proper JSON. It is slowed down a bit because it needs to make sure that U+2028 and U+2029 are properly encoded.
If you are 100% certain that U+2028 and U+2029 never occur in the data you’re processing, then ESON.stringify() can be sped up a fair bit by omitting the check for the ‘bad codes’.
ESON.parse() is a lot faster than JSON.parse(), but should be avoided because of its insecure nature.
There is also a bunch of benchmarking code: it will generate random large objects, and run them through stringify/parse in order to compare json2.js with ESON.
Most InDesign Server workflows that I’ve worked on are based on reusable ‘hot instances’.
In this model, there is a monitor program of sorts, which pre-emptively launches one or more instances.
These instances are then initially idling: they’re ‘hot’, ready to roll.
Jobs are coming in via a queue managed by the monitor program. When the queue is not empty and there is an idle instance available, the monitor program will assign the job to the instance by sending a script to the instance.
This can be achieved in a number of ways:
– OLE (Object Linking and Embedding)
– AppleScript/AppleEvents
– VBScript
– SOAP (Simple Object Access Protocol)
– InDesign Server startup script with a polling loop
– …
Some info about these can be found in the InDesign Server documentation.
While an instance is processing the job, the monitor program has some means of tracking the instance and determining whether the job has completed.
There are multiple ways to do this; I like to use a ‘customized heartbeat’ approach, where the script code is instrumented to let the monitor know it is still running, so the monitor program will not mistakenly kill the instance if the script is merely taking a long time rather than hanging.
When the job completes, the instance goes back to idling until the next job is assigned by the monitor program.
The monitor normally also has some background communication channel with the scripts running inside the instance, so the monitor can detect crashes, deadlocks and freezes and act accordingly.
Well-written scripts should also handle some kind of ‘intermediate state’ management and a ‘roll forward’ where a long-running crashed job can be picked up at the point where it crashed, rather than needing to start over.
Run to completion
There is another approach to handling multiple instances.
In the ‘run to completion’ model, when the system is at rest, no instance will be running. All instances are ‘cold’, not yet started up.
When the monitor program receives an incoming job in the queue, it will cold-start a fresh new InDesign instance, and pass the job to the instance. This can be done via a startup script that picks up the head of a job queue.
The script runs to completion and the last task of the script is to quit the instance.
Having the instance quit is a simple way to inform the monitor program that the task has been completed, or that the job has crashed.
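The run-to-completion loop can be sketched like this. This is a hypothetical monitor in plain JavaScript; startInstance stands in for whatever actually cold-starts an InDesign Server instance, and all names are my own:

```javascript
// Sketch of a run-to-completion monitor. startInstance(job, onExit)
// is injected: in a real monitor it would cold-start an InDesign
// Server instance with a startup script and call onExit(exitCode)
// when the instance quits. Everything here is illustrative.
function makeMonitor(startInstance) {
  const queue = [];
  let busy = false;

  function dispatch() {
    if (busy || queue.length === 0) {
      return;
    }
    busy = true;
    const job = queue.shift();
    startInstance(job, (exitCode) => {
      // The instance quitting is the completion signal: exit code 0
      // means the job ran to completion, anything else means a crash.
      job.succeeded = exitCode === 0;
      busy = false;
      dispatch();
    });
  }

  return {
    submit(job) {
      queue.push(job);
      dispatch();
    },
  };
}
```

A real monitor would run several instances in parallel and persist the queue; this sketch only shows the ‘instance exit = job done’ signaling.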
Advantages and disadvantages
Run to completion has a number of advantages.
Long running InDesign Server instances (‘hot’, reusable instances) have a habit of bloating: their memory footprint slowly grows and after a big job, some memory can remain ‘stuck’. As the bloat increases, garbage collection can introduce random ‘hiccups’.
With run to completion we need to worry much less about memory leaks and unreleased resources (e.g. files that remain open), because InDesign Server will cold start every job with a clean instance.
If the server machine has oodles of memory, the InDesign Server will never need to invoke garbage collection, and random ‘mumble mode’ due to garbage collection should be less of an issue: most jobs will be completed before InDesign Server would feel the need to run garbage collection.
The disadvantage of run to completion is that each job takes a few seconds longer, because we need to cold-start IDS for every job, whereas hot instances are ready to go – no cold-start delay.
Hybrid model
A hybrid model combines these two models.
For fast response times (e.g. quickly rendering a layout that needs to be displayed in a web browser), hot instances are the obvious choice. These instances can then be restarted every so many hours to clear up accumulations of small memory leaks or resource leaks.
For long-running tasks, like paginating a 1000-page catalog with lots of reflowing content, cold instances might be better: after a long hard slog on a 1000-page catalog, it’s probably better to terminate the instance and use a clean slate for the next long-running task.
The monitor program can be made to handle both types of tasks.
I hope you found these ideas useful!
You can contact me at [email protected]. If you want to sharpen your InDesign and InDesign Server automation skills, talk to me about organizing a custom workshop.
If you’re stuck trying to resolve an intractable, complex issue around InDesign automation, think of seeking my help. I have a proven track record of resolving tricky issues and explaining everything in a clear and concise way. I’ve been automating InDesign Server for nearly 20 years, and I know a trick or two!
Lately, I’ve been writing a fair bit of C++ code for InDesign C++ plug-ins, and I’ve started using some simple constructs that use the C preprocessor to make my code easier to follow.
This post focuses specifically on macro usage in the InDesign C++ SDK environment.
Pre-pre-amble: Plugin vs. Plug-In
Skip to the next section if you’re an experienced InDesign C++ developer.
If you’re new to InDesign C++ plug-in development, you have to know there is a steep learning curve ahead of you.
First of all, make sure you understand the difference between a plugin and a plug-in.
An InDesign plugin (no hyphen) is an enhancement built within the UXP environment.
An InDesign plug-in (with a hyphen) is an enhancement built on top of the C++-based InDesign SDK.
I wrote this more than a decade ago, and CS3 and CS4 are now fossils of a distant past, but as it so turns out, the underlying concepts of InDesign C++ programming have not changed, and the book is still highly relevant.
Before you can dive into the SDK you need to ‘grok’ a whole range of higher-level abstractions and terminology: boss classes, interfaces, implementations, Query vs Get,…
And remember, if you are interested in getting a head start with InDesign C++ development, make sure to talk to me: [email protected]
Pre-Amble
This blog post is about a few simple C macros that make my life easier.
So, I won’t get into the pros and cons of do-while(false) vs. early return vs. throw/catch, I won’t delve into D.R.Y (Don’t Repeat Yourself) and K.I.S.S (Keep It Simple, Stupid), or ‘how many lines in a function’. That’s stuff to be discussed over a beer.
The InDesign SDK
For the longest time, I’ve been mimicking the style of the SDK when writing C++ plug-ins.
I.e. I want to make sure the code I am writing ‘looks the part’, and follows a similar style and similar coding conventions.
do-while(false);
One of the somewhat controversial constructs in the InDesign SDK is the use of do-while(false); i.e. a loop that does not loop.
This construct is used to implement what I describe as ‘a pre-condition ladder’.
The idea is that every function starts with a list of pre-conditions and assertions that are expected to be true, and once all the pre-condition tests pass, you get to the ‘meat’ of the function and execute the function.
The preconditions are like the rungs of the ladder, and if something is not right, you break and ‘fall off the ladder’ towards the while (false). If all is well, you can ‘climb down the ladder’ and safely reach the ‘meat’ of the function.
Something like
int f(params)
{
    int value = 0;
    do
    {
        if (! somePrecondition1())
        {
            break;
        }
        if (somethingUnexpectedBad())
        {
            LOG_ERROR("Something unexpected bad happened");
            break;
        }
        value = AllisWell(bla) + 123;
    }
    while (false);
    return value;
}
The function purposely only has a single return at the end, which makes it easier to break and inspect the return value with a debugger, so we don’t use ‘early returns’.
Using C Macros
I’ve recently built a few simple C macros that, at least for me, make my InDesign plug-in code less verbose and more readable.
Note that the macros I am presenting here are simplified versions of the ones that I am actually using.
The full-fledged versions that I have built and actually use have many more features, like compile-time balancing and error detection, but discussing those details would lead me too far into the woods.
The idea is that I want to help the reader of my code (that reader is often me, six months from now).
If the reader of my code sees the beginning of a do statement, I am leaving them guessing a bit as to what that do will be used for. A real loop? An infinite loop? A condition ladder construct?
A second idea is that in C/C++ the ‘end of a thing’ is the closing curly brace. It is heavily overloaded, and used for many different things. That can make code hard to figure out.
With these macros, the ‘end of some things’ is much more explicit – rather than a curly brace, there is an explicit ‘END’ to the ‘thing’.
The third idea is that in the InDesign C++ plug-in code I like to distinguish between different kinds of pre-conditions:
• ‘normal’ pre-conditions. Things you want to check for, and bail out of the function if not met, but which are normal and expected -> PRE_CONDITION
• ‘error’ pre-conditions. Things that ‘should not happen’, so you want to emit errors to the logs -> SANITY_CHECK
• things that make you go ‘huh?’ but are not necessarily errors -> EXPECTED
#ifndef __PREPROCESSOR_CONSTRUCTS__h_
#define __PREPROCESSOR_CONSTRUCTS__h_
//
// Preprocessor magic to convert code into string
//
#ifndef __STR
#ifdef __VAL
#undef __VAL
#endif
#define __STR(x) __VAL(x)
#define __VAL(x) #x
#endif
#define BEGIN_CONDITION_LADDER do {
#define END_CONDITION_LADDER } while (false)
#define CONDITION_LADDER_BREAK break
#define BEGIN_INFINITE_LOOP do {
#define END_INFINITE_LOOP } while (true)
#define INFINITE_LOOP_BREAK break
#define PRE_CONDITION(CONDITION, BREAK) if (! (CONDITION)) BREAK
#define SANITY_CHECK(CONDITION, BREAK) if (! (CONDITION)) { ASSERT("! " __STR(CONDITION) ); BREAK; }
#define EXPECTED(CONDITION) if (! (CONDITION)) ASSERT("! " __STR(CONDITION))
#define BEGIN_FUNCTION do {
#define END_FUNCTION } while (false)
#define FUNCTION_BREAK break
#endif // __PREPROCESSOR_CONSTRUCTS__h_
Points of interest:
– I am expecting textModel to be non-null, and if it is null, something is seriously wrong, hence I use SANITY_CHECK.
– I need frameList to be non-null, but if it is null, that’s not an error; it simply means that we cannot run the function in this particular context, hence I use PRE_CONDITION.
– Within the loop, I have a little CONDITION_LADDER to make sure all is well before executing pageItemList.Append(splineUID).
There are some interesting aspects to this.
Because these are macros, it is easy to (temporarily) add some extra code to macros like BEGIN_FUNCTION/END_FUNCTION.
You can add logging/tracing code in a debug version, or add code that breaks into the debugger when certain conditions are met.
If you have logger infrastructure, add the logging calls to the macros (e.g. SANITY_CHECK would log errors; PRE_CONDITION would not log anything).
Adding logging and tracing to the BEGIN_FUNCTION/END_FUNCTION macros can help in situations where you’re unable to use a debugger on a problem, e.g. when the issue happens on a customer’s server, and you cannot get access to the server.
Such code will ‘pooff!’ away in the release version, and only be present in the debug version.
Take it for what it is worth; it works for me. Using these macros adds another level of abstraction, which can be considered a ‘bad thing’, but overall, I find these helpful.
Helping the Occasional JavaScript Developer with Tracked Promises
I am currently working with Promises in JavaScript and exploring ways to make them more digestible for an Occasional JavaScript Developer (OJD).
Because I routinely dive in and out of various development environments, I consider myself an OJD; after I’ve done PHP or Python or C++ or ExtendScript all day, switching to a JavaScript mindset doesn’t come easy.
Surprises, and not the good kind
Much of the code I write is used by other OJDs (or OC++Ds or OPHPDs…), and while coding, I strive to avoid surprises.
By avoiding surprises I mean: if you read some source code first, and then step through it with a debugger, everything works as you’d expect. It should be outright boring.
I find that it is fairly hard to write JS code without surprises; you don’t need much for the code to suddenly veer off on an unexpected tangent.
I prefer my code to be gracious, spacious, and clear. A calm, open landscape rather than a dense, compressed knot of complexity waiting to unravel.
Limited Lifetime Scripts
In some JavaScript environments (e.g. UXP/UXPScript), scripts have a limited runtime/lifetime.
Any unsettled promises disappear and will never be handled if the script terminates before the JavaScript runtime had a chance to settle all promises.
Promises and async/await are great when you live and breathe JavaScript day in day out, but the OJD will often find themselves on treacherous terrain.
One issue with Promises is that it is very easy to forget to wait for the Promise to settle (or with async/await, forgetting an await), and then these Promises might silently disappear, never to be settled.
Using VSCode and Copilot and other stuff helps, but I still don’t like it.
Fire and Forget
What compounds the issue is that I like to use a ‘Fire and Forget’ approach for certain methods, for example, logging to a file.
If the logging module I am using is asynchronous, it makes sense to me to just call the logging calls, and not await or then them.
My logging strings are time-stamped anyway, and I don’t really care when a log entry makes it into a log file. I want to fire off the logging call, and carry on with the ‘meat’ of the code, full speed ahead.
The issue is that if the script terminates too early, those pending logging calls never reach the log file.
There was a lot of head scratching during my initial experiments with UXPScript for InDesign and Photoshop, until I realized what was happening to my logging output.
Fixing the issue
There are multiple approaches to work around this (such as: keeping a list of all pending log calls and await them later with a combined Promise, using event emitters, relying on assistance from VSCode, Copilot and other tools…).
But nearly all of these approaches seem to be quite cumbersome for the OJD.
Tracked Promises
I developed an approach that seems to give me the lowest possible impact on the source code.
It’s not a panacea – it involves replacing the built-in Promise class with a subclass thereof, which can have negative consequences.
However, in the specific environments (UXP/UXPScript) where I am using them I feel my ‘hack’ makes it a little easier for an OJD to grok my code and maintain it without needing to fully understand async JS.
What I am doing is subclassing Promise with a new subclass (for the sake of argument, let’s call it TrackedPromise), and then assign TrackedPromise to global.Promise.
This causes the remainder of the script to use the TrackedPromise rather than the ‘regular’ Promise. The UXP/UXPScript engine will also automatically use the TrackedPromise for async/await.
The TrackedPromise is identical to Promise, except that it tracks construction and settling of Promises and it retains a dynamic collection of ‘unsettled promises’. Promises that settle are removed from the collection – only unsettled promises are tracked.
When my script is about to end (i.e., ‘fall off the edge’), I call a static method on the TrackedPromise to await any unsettled promises before proceeding.
That way, I can make sure Promises don’t disappear into the bit bucket and my ‘fire and forget’ logging calls all make it.
This also helps me find any forgotten await or then; I can easily add debugging code to the TrackedPromise to point out where the issues are.
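A stripped-down sketch of the mechanism (far simpler than the actual crdtuxp implementation; for brevity this version does not track promises derived via then(), and all names are illustrative):

```javascript
// Minimal sketch of a tracked Promise subclass. The real crdtuxp
// code does more; this only shows the tracking mechanism.
const unsettled = new Set();

class TrackedPromise extends Promise {
  // Derived promises (from then/catch/await plumbing) are plain
  // Promises, so the bookkeeping then() below cannot recurse.
  static get [Symbol.species]() {
    return Promise;
  }

  constructor(executor) {
    super(executor);
    unsettled.add(this);
    // Remove ourselves from the collection once we settle,
    // whether we resolve or reject.
    this.then(
      () => unsettled.delete(this),
      () => unsettled.delete(this));
  }

  // Await anything still pending before the script 'falls off the edge'.
  static async settleAll() {
    while (unsettled.size > 0) {
      const pending = [...unsettled];
      await Promise.allSettled(pending);
      pending.forEach((p) => unsettled.delete(p));
    }
  }
}
```

In the actual setup, TrackedPromise would also be assigned to global.Promise so the rest of the script picks it up transparently, and the script would call settleAll() as its last statement.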
If you’re interested to see some code, have a look at the Github source code for Creative Developer Tools for UXP, link below. The tracked promises are part of the crdtuxp runtime.
Note: this code is work-in-progress. Also, use this idea at your own peril! While it works for me, there are no warranties, expressed or implied!
– Overwriting global.Promise can lead to unintended consequences, especially in larger or more complex applications: other libraries or parts of the code might not expect the modified behavior of TrackedPromise.
– Introducing a custom Promise subclass adds a layer of abstraction. If this is unexpected, it can make debugging more ‘surprising’, rather than less.
– Tracking all unsettled promises and waiting for them to settle at the end of the script can introduce non-negligible performance overhead.
– Future updates to the JavaScript engine could inadvertently introduce incompatibilities with this approach.
If you want to see how those stub scripts work, here’s how you can do that.
You need to have PluginInstaller build #583 or higher installed (https://PluginInstaller.com). You can then use the following link to install InDesignBrot_idjs into your Scripts Panel without unwanted clutter.
With PluginInstaller 0.2.4.583 or higher installed, click the following link:
If you have multiple versions of InDesign installed, use the Install Target dropdown menu to pick the one you want to install into.
Click the Install button, and switch over to InDesign. The script should become available on your InDesign Scripts Panel.
Some rules of thumb:
– Doubling max steps will roughly double the time it takes to calculate the image.
– Doubling num pixels will roughly increase the time needed by a factor of four.
Start out with the default values and first gauge how long the script takes on your computer: your mileage may vary, and as far as I can tell my Mac with an M2 Max is pretty darn tootin’ fast.
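Those rules of thumb can be folded into a quick back-of-the-envelope estimator. A sketch, where the baseline is whatever you measure with the default values on your own machine (the function and field names here are made up):

```javascript
// Rough runtime estimate from the rules of thumb above: time scales
// linearly with max steps and quadratically with num pixels.
function estimateSeconds(baseline, target) {
    const stepFactor = target.maxSteps / baseline.maxSteps;
    const pixelFactor = target.numPixels / baseline.numPixels;
    return baseline.seconds * stepFactor * pixelFactor * pixelFactor;
}

// Example: if the defaults take 0.5 s, doubling both max steps and
// num pixels should land around 0.5 × 2 × 4 = 4 seconds.
```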
I also found that on my Mac, InDesign now crashes when I set num pixels to 200, but 149×149 works fine (and takes about 30 mins to calculate).
I am not sure why; maybe the sheer number of rectangles needed (40,000) is more than InDesign can handle. But I’ve calculated 200×200 renditions with earlier versions of InDesign, and those files still open just fine.
More Info About InDesignBrot
InDesignBrot source code is available on Github. I have a separate branch for a version based around Creative Developer Tools for UXP (CRDT_UXP). Note the README info further down this page.
The main branch on Github is for an older, hybrid version where the same source code can be run in ExtendScript as well as UXPScript, for speed comparisons:
Or, “how a single comment line can make an InDesign UXPScript run more than five times slower”.
The Issue
I discovered a weird anomaly in InDesign UXP Scripting which can adversely affect the execution speed of a UXPScript.
I also tried it out with Photoshop. As far as I can tell, Photoshop UXPScript is not affected by this.
Simply adding a comment line like
// async whatever
into the ‘main’ .idjs file makes my script way, way slower.
One noticeable difference is a change in redraw behavior while the script is executing.
I suspect the InDesign internal UXP Scripting module performs some crude preliminary textual scan of the script source code before launching the script, and InDesign behaves differently depending on whether it found certain keyword patterns or not.
The textual scan does not seem to care where the patterns occur: e.g. in comments, or in strings or in actual source code.
The issue does not occur for anything that appears in submodules (loaded via require). I am guessing the preliminary textual scan only inspects the ‘top level’ .idjs script.
Because this textual scan does not distinguish patterns in comments from patterns in actual code, I can simply add a dummy comment line with the right pattern to trigger the behavior and make my script much slower.
It took me a fair amount of time to figure this out, because the same behavior also occurs when you run the script from the Adobe UXP Developer Tools.
Because there were two unrelated causes for the same symptom, I had to resort to tricks to avoid the ‘Heisenberg effect’.
Initially, each time I tried to observe or debug it, the issue was always ‘there’; yet it sometimes did and sometimes did not happen when I ran my script from the Scripts Panel. I tell you, there was much growling and gnashing of teeth.
Demo
I have a benchmarking script, called InDesignBrot, which I keep handy and occasionally use for speed-testing InDesign. I have both ExtendScript and UXPScript variants of the script.
While trying to figure out what was going on, and to help make the issue stand out, I’ve re-worked the UXPScript variant of the InDesignBrot script so it only uses Promises. It does not use the async or await keywords at all.
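The conversion is mechanical: every awaited step becomes a .then() link in a chain. Here is an illustrative sketch with stand-in function names (not the actual InDesignBrot code); note that spelling out the trigger keywords even in a comment would defeat the purpose, so the sketch avoids them entirely:

```javascript
// Promise-chain equivalent of a two-step sequence that would otherwise
// be written with the keywords this experiment needs to avoid.
function getDocument() {
    // stand-in for the InDesign document setup
    return Promise.resolve({ name: "demo" });
}

function renderGrid(doc) {
    // stand-in for the Mandelbrot rendering work
    return Promise.resolve(doc.name + ": rendered");
}

function run() {
    return getDocument().then(function (doc) {
        return renderGrid(doc);
    });
}
```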
If you run this script from the InDesign Scripts panel, it will calculate a rough visualization of the Mandelbrot set in InDesign, using an NxN grid of square frames.
You can then tweak the parameters on the pasteboard and re-run the script.
On my M2 Max MacBook Pro, the script executes in about 0.5 seconds for a 19×19 grid.
While the script is running, the screen will not update, and the script does not redraw the page until it has completed the calculation.
Then I add a single comment line with the word async followed by a space and another word, like
// async whatever
anywhere in the InDesignBrot.idjs script.
This innocuous change alters the redraw behavior: I can now see individual frames being filled, despite InDesign being set to
app.scriptPreferences.enableRedraw = false;
In the end, the same script will take around 3 seconds or more to execute.
The InDesignBrot script can be reconfigured by way of a text frame on the pasteboard. If I change the num pixels to 29, the times become 1 second vs 20 seconds.
If you’re interested in trying this out for yourself, I’ve made a specific branch in the InDesignBrot Github repo. This branch has been trimmed down to remove stuff that’s not relevant to the discussion.