Elias Mårtenson

@loke@functional.cafe

It has been said that the most popular programming language in the world is the spreadsheet (and in particular these days, Excel).

There is a valid argument that the spreadsheet embodies an array programming model, making Excel the most popular array programming language in the world.

As the author of an array programming language based on the ideas of APL, I see the popularity of the spreadsheet as both a problem (because it's so error-prone) and an opportunity (because while the fans of imperative languages like Python are probably too set in their ways to consider an alternative model of computation, users of spreadsheets are already thinking in terms of matrices of values, and some of those users may be more open to the idea of performing the same computation in a more reliable way).

Before discussing how to leverage the power of spreadsheets with a proper array based programming language, there is a need to discuss how spreadsheets are used today, and some of the problems with them.

The different uses of spreadsheets

When people use a spreadsheet, it's to do one of several things. These activities can roughly be grouped into 3 main categories:

  • Data entry
  • Computation
  • Presentation

Data entry

Ever since VisiCalc was released in 1979, the spreadsheet has greeted the user with a hugely inviting blank sheet of empty cells on startup.

Screenshot of the LibreOffice initial page
What do you want to calculate today?

The page almost calls out to the user to start inputting numbers. Our innate desire to organise things into categories is what has made this design so successful. This is what the VisiCalc initial screen looked like, and the latest version of LibreOffice looks much the same some 45 years later.

All you have to do is move the cursor to the cell you want to edit, type the values, and things will just work. While data entry specialist is still a job title in use today, there was a time when any data input was a complicated process that required specialist knowledge. The spreadsheet changed this, allowing anyone to record anything they wanted in digital form.


Computation

Once the data is in the spreadsheet, you inevitably want to work with the numbers to produce some output.

Spreadsheets make it very easy to take the step from simple data entry to computation of results. All you have to do is add =sum(a1:a5) to the sheet and you'll have the sum in the cell you want.


Presentation

Spreadsheets aren't just about entering data and computing some results based on that data. They are also about presenting it in an appealing manner.

This is really where you can see the genius of the invention. It scales from something as simple as a set of numbers with their sum neatly displayed as “Total:” below the values, all the way to a complex form with coloured highlighting and different text styles to emphasise the results.

These sheets also become interactive without any extra work, since any change in the source data will automatically propagate to the sheet that contains the results.

The problems with spreadsheets as a programming environment

The spreadsheet paradigm hides what's important (the formulas) in favour of the superficial (the data)

Once you've taken that first step to compute a sum in a spreadsheet, it's not a very long road until you find yourself in one of those multi-sheet monsters that most people who work in a large company have seen. I've certainly spent more time than I want to admit untangling one of those bowls of spaghetti, trying to find out why a certain number is computed the way it is.

Screenshot of LibreOffice showing some products, their prices, amounts and the totals

Here's a silly quiz: Can you spot the error in the above spreadsheet?

I don't think it is possible to tell just by looking at the screenshot. The problem is that cell D5 has the actual value entered instead of a formula (this could have happened due to a simple copy&paste error).

And this is the true danger of spreadsheets. These kinds of mistakes are completely invisible when they happen. Instead, the results start to deviate (probably slowly) once the input data start to change, and you could be looking at invalid numbers for quite some time before anyone notices the problem. And if the dataset is large, it may be a lot of work to actually find the underlying mistake.

And this brings us to my proposition: spreadsheets do a great job at data entry and presentation, but they are severely lacking as a computation tool.

However, very few tools allow a seamless workflow that uses a proper language for computation while still using spreadsheets for the other aspects of the workflow. Once a spreadsheet has grown too large, someone decides to write a Python application to replicate it, which inevitably means building a webapp around it. The webapp is probably going to be much less user-friendly, since spreadsheets have had 40+ years of UI refinement while the webapp was put together in an afternoon.

Spreadsheets intentionally hide errors in an attempt to do what the user thinks they want

This section could become very large if I listed all the examples where spreadsheets (and Excel in particular) fail to highlight problems, so here's a sample:

  • Numbers tend to be floating point, which means that the moment a value is entered, there are precision issues. This is actually something that could be improved simply by supporting rational arithmetic, although I don't think there are any spreadsheets that do.
  • If a number is entered as a string, it will look identical to an actual number (except that the alignment will be different by default). If you're lucky, you'll get an error later which will manifest as an #ERR field, and then you'll have a fun time finding the source of that issue.
  • The system invites the user to write long single-line formulas in a language that is not suited for long, single-line formulas. It all ends up being crammed into a space that is nowhere near large enough to hold it. This makes editing the code an absolutely painful experience.
  • Various errors, such as exceeding limits, tend to be ignored instead of invalidating the entire document. This is something the UK government was made painfully aware of.
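The floating-point problem from the first point is easy to demonstrate. Here is a small Python illustration of how rational arithmetic avoids it, with Python's fractions module standing in for what a spreadsheet could offer:

```python
from fractions import Fraction

# Classic floating-point surprise: the result is not exactly 0.3
float_sum = 0.1 + 0.2
assert float_sum != 0.3

# Rational arithmetic keeps the values exact
exact_sum = Fraction(1, 10) + Fraction(2, 10)
assert exact_sum == Fraction(3, 10)
```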

Using Kap as the default tool to solve various computation problems

The Kap programming language provides an alternative. Or rather, it's a work in progress so it's probably better to say it aims to provide an alternative. That said, I already use it for many tasks where a spreadsheet would be the go-to tool for many people.

Like most APL derivatives, the way you use it is often not by opening a text editor and starting to write a program. Instead, you start it, you are presented with a prompt, and you use it more as a calculator than a traditional programming language. This makes sense, since a large number of programming tasks that would take a lot of code in another language can be solved in just a few characters in Kap.

Screenshot of the Kap welcome screen

When given a problem to solve, you typically look at the problem and then reach for the tool that solves it most efficiently, given tool availability and personal preference. For example:

  • You need to compute the square of the number 54321. Most people would likely reach for their desktop environment's calculator here.
  • You want to compute the sum of some set of numbers. A natural choice here would be to use a spreadsheet.
  • You're given a dictionary and asked to find the longest word which contains no more than 2 vowels. I suspect a lot of people would open an editor and start writing Python code to solve this.

For me, in all of these cases I open the Kap interactive tool, type a few characters and the problem is solved.

This is how I solved the third problem, by the way:

{ ⊃ ⍵ ⊇⍨ ↑⍒≢¨ ⍵ /⍨ 2≥ (⊂"aeiou") (+/∊⍨)¨ ⍵ } io:read "dict.txt"

At this point, comments suggesting the code is “unreadable” are bound to be thrown around, but once you learn a handful of symbols, this is in fact significantly more readable than the kinds of formulas I see on a regular basis in Excel. This suggests that one is not inherently more difficult to learn than the other, and that the problem is one of marketing rather than technology.
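For comparison, here is a rough Python equivalent of the same task. This is a sketch under the assumption that the word list has already been read into a Python list; the function name is mine, not part of any library:

```python
def longest_with_few_vowels(words, max_vowels=2):
    """Return the longest word containing no more than max_vowels vowels."""
    vowels = set("aeiou")
    eligible = [w for w in words if sum(c in vowels for c in w) <= max_vowels]
    return max(eligible, key=len)
```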

In case anyone is curious about the above code, the next blog post will be a breakdown of the code above to explain how it works. Check this blog in the coming days.

The point I am trying to make by showing this example at all is to evoke the same kind of response that beginners have when faced with code of any kind.

The Kap approach to spreadsheet integration

The array editor

The array editor is a graphical tool that allows the user to edit arrays in a way that is similar to a spreadsheet. The goal of this tool is to be as familiar as possible to someone who has used spreadsheets in the past.

The tool also allows you to type Kap expressions to manipulate values in the sheet. These are not formulas, but rather direct modifications of the data.

Funnily enough, this is something that is not obvious how to do in Excel. If you have a set of numbers and you want to add some constant to them, you'd have to create a formula that adds the constant to a cell, put it in a different location (usually on the same sheet), paste that formula into all the cells of an area with the same size as the original data, and hope that you got the dimensions right.

The array editor allows you to easily edit the content of an array. Of course, you can always do it programmatically, but I think the prevalence of spreadsheets has shown that it's convenient to do it in a visual manner.

Data import

I have been participating in discussion forums and chat channels about APL and other array languages for a while now. When a beginner starts learning one of these languages, after they have gone through all the tutorials, there comes a time when they want to work with real data, and the first question is: how do I get my data into the system?

After all, that's what I asked myself, and I was frustrated when I realised that preparing the data was harder than actually working with it.

So, when I started working on my own implementation, making it easy to load pre-existing data into the interpreter was one of the most important goals.

What's the most common format of array-based data today? It's probably not very far fetched to assume it's Excel. For this reason, being able to read spreadsheets into Kap has been a priority, and at the time of writing the following methods are supported:

  • Using functions such as msoffice:read in the language itself. There are similar functions to load data in other formats, such as CSV and JSON.
  • Open a spreadsheet file (Excel or LibreOffice) in the GUI-based array editor.
  • Copy&paste directly from the spreadsheet.

By the way, you can also paste any table data into the array editor. Here's an example of using Wikipedia content:


This is the part of the Kap UI tool that needs the most work. The vision is to provide a user interface that makes it just as easy to create a presentation of results as it is in Excel. However, it would only be used for presentation purposes, and not for data entry nor computation.

While there is some actual code there that can be tested, it's still not usable except as a demonstration of the current ideas in this direction.

The goal of Kap is to be a complement to spreadsheets

When someone needs to compute something, they typically look at the problem and then open a tool that helps them solve it. While Kap is a great tool for solving a variety of problems, I tend to believe that it particularly excels at the kinds of problems where the solution would otherwise involve a spreadsheet.

Kap is a fully open source project available under the MIT license. If you want to learn more about Kap, visit the Kap website. Feedback is always welcome.


Kap has a datatype called a list, which is a container that can hold 0 or more values. The list itself is a scalar (i.e. ⍴ on a list returns an empty array).

The origin of the idea of the n-tuple type comes from the APL bracket index syntax:

    a ← 3 3 ⍴ ⍳9
    a[1;2]

It all came out of the idea that I didn't want special syntax for the argument to an axis index operation, so I made 1;2;3;4 an object of its own. This is the n-tuple.

a ← 3 3 ⍴ ⍳9
b ← (2;1)

Once the n-tuple existed, using it as a general way to pass multiple arguments to functions was a logical step:

∇ foo (a;b) {
    a+b
}

This function can then be called as such: foo (1;2).

You may now be asking yourselves, “why is he talking about this stuff now?”

Well, there was one weirdness remaining: support for empty elements in axis specifications, like a[1;], could still be improved. Until now, an empty value in an n-tuple such as (1;;2;3) yielded (1 ; ⍬ ; 2 ; 3), but the ⍬ had some special hidden property that was used by the index operation so that the standard APL syntax worked. This is obviously really ugly.

This behaviour has now been formalised as null. The null value is a scalar which is very similar to ⎕NULL in Dyalog. An empty element in an n-tuple will now result in null. This also makes it possible to programmatically create a value that can be passed to the index operation, which makes things consistent (although perhaps not very useful, since squad already exists).

Properties of null

The null value is a scalar. In other words, it's a rank-0 object. The name null is not a regular symbol, but rather a syntactic element, which means it does not have a namespace.

Its type is null, which means that typeof null returns kap:null.

Its boolean value is true, in keeping with the specification that only the scalar value 0 is considered false, and all other values are true.

Next steps

Since there is now a proper null value in Kap, there will be some further changes to the language. Currently, several functions return ⍬ when there is no value to return. One such example is the return value from mapGet when the requested key does not exist. This will be changed to return null instead.

I'll start by commenting on what was said at the end of the episode of Array Cast with the title Tacit#5: they implied that the episode was long, and people probably wouldn't make it to the end.

In case any of the hosts are reading this: I know it was mentioned that you didn't want to hear it if anyone felt the show was too short. I'm going to say it anyway: the show is too short. Please make it at least 4 hours.

Deriving a function vs. returning a function

In the episode, there was a discussion around the difference between deriving a function and returning a function. I think this distinction is harder to make in the context of APL for two main reasons: APL does not have a separate parse step, and it does not support first-class functions. This makes it harder to explain precisely what the difference is.

To state the difference clearly: Deriving a function is a parse-time or compile-time operation, while returning one is a runtime operation.

Since Kap has first-class functions in addition to APL-style operators, I will use it to illustrate the difference.

Consider the following code:

+/ 1 2 3 4

While the above expression is parsed, +/ is processed and the resulting code is a call to the reduction function. The reduction function is an actual function which is created by the parser when it sees the reduction operator. This function contains a reference to another function (+ in this particular case) which then represents the “reduction over sum”.

Note that all of this happens before any code has been evaluated. It's just part of the parser.

By contrast, let's take a look at how to return a function. Here is the definition of the function foo, which accepts a value and returns a function that multiplies its argument by that value:

∇ foo (a) {
    λ{⍵×a}
}

The λ symbol is Kap-specific syntax, and is used to represent a function as a value. The lambda symbol should be followed by any function, and the entire expression is then a simple scalar value that can be processed like any other value: it can be returned from functions, it can be passed to other functions, it can be put inside arrays, etc.

Here is how we can create an array of two functions:

b ← (foo 2) (foo 3)

Here we have two calls to foo, each of which returns a different function (the first multiplying its argument by 2 and the second by 3). This is where functions are returned.
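The same runtime behaviour is easy to sketch in Python, where closures play the role of Kap's λ values:

```python
def foo(a):
    # Returns a new function at runtime; each call creates a fresh closure
    return lambda x: x * a

b = [foo(2), foo(3)]   # an array of two functions
# b[0] multiplies its argument by 2, b[1] by 3
```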

In Kap, to be able to call a function that is represented as a value, you use the “apply” symbol: ⍞. This derives a function from an expression:

⍞(b[0]) 5

Note that since deriving a function is a parse-time operation, the derivation doesn't know what b[0] contains. Thus, the derived function references the expression itself, and it's only when the derived function is actually called that the inner expression is evaluated. This yields the function reference which can then be called.

Or, to put it in a different way, the code above is a derived function (which is derived at parse time from the apply symbol). At runtime, this function evaluates an expression (dereferencing the first element in the array b) which returns a function which is then called.

Mention of Kap

Kap was mentioned very briefly in the context of syntax for the fork. In J and Dyalog, a fork has the syntax (A B C) where all elements are functions. Without repeating 5 episodes' worth of discussion around this topic (please listen to them, they're interesting), the following expression: x (A B C) y is evaluated as (x A y) B (x C y).

As mentioned on the podcast, Kap has a special syntax for this which uses special symbols: x A«B»C y. The benefit of this is that the regular 2-element train extends naturally to 3 or more elements. This eliminates the “even vs. odd” issue that causes a lot of confusion for anyone learning tacit programming.
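The evaluation rule for forks is simple enough to sketch in Python, treating A, B and C as ordinary dyadic functions:

```python
import operator

def fork(A, B, C):
    # x (A B C) y  evaluates as  (x A y) B (x C y)
    return lambda x, y: B(A(x, y), C(x, y))

# Example: (x + y) / (x * y)
f = fork(operator.add, operator.truediv, operator.mul)
# f(2, 4) is (2 + 4) / (2 * 4), i.e. 0.75
```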

A number of years ago I noted that Unicode did not contain all the characters in PETSCII, the character set used by the Commodore 64 and other classic Commodore computers. The Wikipedia page at the time even explained that some symbols were missing in their character table due to the fact that Unicode didn't support them.

I decided to make a post about it to the Unicode mailing list, and some people there agreed. The discussion expanded to other 80s systems whose character sets were also incompletely covered.

So a working group was formed which I was a part of, and a proposal was made to add a number of characters which were used by several old systems. After a few failed attempts, the proposal was finally accepted, and it was included in Unicode 13.

I'm quite happy that my small contribution to this project helped fill a gap that I believe needed to be filled.

Let's use these symbols

A few years have now passed, and font support for these symbols is more widespread. This is good, because the character set contains symbols that are not just useful for interoperability with old systems, but are useful on their own. In particular, the BLOCK SEXTANT characters.

These are the 60 characters starting from U+1FB00, along with 4 more characters that already existed elsewhere. Here they are:


Consider a character cell that is broken up into 3 rows by 2 columns. These 64 characters cover all possible combinations of filled blocks. In other words, we can use them to draw graphics at a resolution which is 2 times the width and 3 times the height of the terminal screen.

Kap is really good at working with arrays of data, so let's write a program to convert a 2-dimensional arrays of bits to a character array of BLOCK SEXTANT characters.

First we need to create the list of characters above. Here's the code to do this:

blockMapping ← (@\u2590,)⍢(0x2A↓) (@\u258C,)⍢(0x15↓) @\uA0 , (@\u1FB00+⍳60),@\u2588

Let's break it down. The first part creates a list of the 60 characters in the legacy computing code block, then prepends U+00A0 NO-BREAK SPACE and appends U+2588 FULL BLOCK. These two characters already existed in Unicode, so they have different codepoints:

@\uA0 , (@\u1FB00+⍳60),@\u2588

We then want to insert the two remaining symbols that also existed previously: U+2590 RIGHT HALF BLOCK and U+258C LEFT HALF BLOCK. We insert these using structural under.

Structural under deserves a blog post of its own (and I've already written extensively about it), so for now we'll just note that the following inserts a at position b in c:

(a,)⍢(b↓) c
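The effect of this idiom can be sketched in Python: conceptually, structural under drops the first b elements, applies the transformation (here, prepending a), and then undoes the drop by reattaching what was removed:

```python
def insert_at(a, b, c):
    # Mimics (a,)⍢(b↓) c for sequences: drop b, prepend a, undo the drop
    dropped, rest = c[:b], c[b:]
    return dropped + a + rest
```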

Now that we have the characters in a string, all we have to do is to merge each 3-by-2 block in the input array, decode them as a binary number and pick the appropriate character based on the resulting number. Here's the full function:

draw ⇐ {
	(2=≢⍴⍵) or throw "Argument must be a 2-dimensional array"
	v ← ((⍴⍵) + 3 2 | -⍴⍵) ↑ ⍵
	blockMapping ⊇⍨ (2⊥⌽)¨ ((¯2+≢v) ⍴ 1 0 0) ⌿ 3,⌿  ((¯1+1⊇⍴v) ⍴ 1 0) / 2,/v
}

The first line in the function simply raises an error if the input is not a 2-dimensional array. The second line extends the array so the width is divisible by 2 and the height is divisible by 3. Finally, the third line does all the work, computing the resulting image.
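To make the mapping concrete, here is the per-block lookup sketched in Python. The cell ordering (upper-left as the least significant bit, then upper-right, middle-left, middle-right, lower-left, lower-right) and the positions of the four pre-existing characters follow the description of blockMapping above:

```python
SEXTANT_BASE = 0x1FB00

def sextant_char(bits):
    """Map six 0/1 cell values (UL, UR, ML, MR, LL, LR) to a block character."""
    idx = sum(b << i for i, b in enumerate(bits))
    if idx == 0:
        return "\u00A0"   # NO-BREAK SPACE
    if idx == 21:
        return "\u258C"   # LEFT HALF BLOCK (pre-existing character)
    if idx == 42:
        return "\u2590"   # RIGHT HALF BLOCK (pre-existing character)
    if idx == 63:
        return "\u2588"   # FULL BLOCK (pre-existing character)
    # The 60 new characters skip the four pre-existing ones
    return chr(SEXTANT_BASE + idx - 1 - (idx > 21) - (idx > 42))
```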

Let's try it:

io:print draw ⊃ {⍵⍴1}¨ ⌽⍳10

This should print:


The quality of the output depends on the font used, and I note that the default font used on write.as (where this blog is hosted) is not very good. With a proper font it looks a lot better.

Try the full example on the Kap web client.

Until now, the only way to return a value from a Kap function was to have it be returned as the return value from the last expression in a function.

Well, there are also exceptions, but those are a bit special and should, well, be exceptional.

But in some cases it's useful to return a value early. Essentially what return is in languages like C, Java, etc.

So, I did the obvious and implemented return just like in those languages, except that it's called → to align with APL.

This example should be self-describing:

∇ foo (x) {
    if (x≡0) {
        → 10        ⍝ early return from foo
    } else {
        x+1
    }
}

However, since → is a regular function, it can also be called dyadically: if the left argument is true, the right argument will be returned from the calling function. If it is false, then the function will return normally (returning the right argument). Confused yet? The word “return” was used in the description of both behaviours: returning from the current function, and returning a value from the → function itself.

Perhaps this example will make it more clear:

∇ foo (x) {
    (x≡0) → 10      ⍝ returns 10 from foo when x is 0
    x+1
}

Future work: Returning from different scopes

Common Lisp has a special operator RETURN-FROM that allows returning from an outer function, for example when using nested functions. It also provides a BLOCK form that creates a named scope that can be explicitly returned from.

This is a very useful feature, and implementing it in Kap would not be particularly complicated.

The below is a slightly edited version of a series of posts made on the Fediverse.

Note: The below explanation was written before Kap supported addition and subtraction for characters. With this change, the calls to toCodepoints and fromCodepoints can be replaced by character-@\0 and codepoint+@\0.

For anyone not familiar with the syntax of APL and its variations, the most important thing to keep in mind is that function calls bind to the right. Arguments are also evaluated from right to left.

What this means is that the following code: io:println 1 + math:sin 2 is evaluated like this:

a0 ← math:sin 2
a1 ← 1 + a0
io:println a1

Another important fact is that Kap functions are either monadic (they accept a single argument to the right of the function name) or dyadic (they accept two arguments, one on each side of the function). Some functions can be called both monadically and dyadically. One such example is -, which subtracts when called dyadically but negates when called monadically.

With that in mind, let's take a look at the solution:

+/ 4 8 3 1 5 9 7 2 6 ⊇⍨ 3 (⊥⍤1) 65 88 -[1]⍨ unicode:toCodepoints 1 0 1 ⫽ ⊃ io:read "testdata.txt"

The input data is in the file testdata.txt, and the first thing that happens is that the entire file is read into an array of strings, where each element is one line from the file.

Each line is three characters, so we use the ⊃ function, which converts an array of arrays to a two-dimensional array of characters.

The next part is 1 0 1 ⫽. This function takes an array on the right and uses the left argument (1 0 1) as a selector. It means to take one copy of the first column, zero copies of the second, and one copy of the third column. In other words, this drops the centre column.

Starting with the example array we then get this:

    1 0 1 ⫽ ⊃ "A Y" "B X" "C Z"
┃@A @Y┃
┃@B @X┃
┃@C @Z┃


We then call the function unicode:toCodepoints which converts the characters to the corresponding Unicode codepoints.

We then want to convert the codepoints to actual numbers from 0 to 2. This is done by subtracting 65 from the first column and 88 from the second. Naturally we use the - function to do this, but in order to avoid having to wrap the entire expression in parentheses, we use the ⍨ operator, which reverses the arguments, putting the value to be subtracted from on the right instead of the left.

I must also explain what the [1] means. Since the left argument is a 1-dimensional array, and the right argument is 2-dimensional, the operation will be applied along an axis. The default is that it will be applied along axis 0. If you put an axis within square brackets after the name of a mathematical function, the operation is instead performed along that axis.

The behaviour of the function when passed an axis parameter depends on the function itself. All mathematical functions use the axis parameter this way.

Let's run the code up to this point to see what happens:

    65 88 -[1]⍨ unicode:toCodepoints 1 0 1 ⫽ ⊃ "A Y" "B X" "C Z"
┃0 1┃
┃1 0┃
┃2 2┃


What we see here is that each row in this array is a pair of base-3 digits that uniquely identifies the game position. All we have to do is decode this number and assign each number a score, and we'll have solved the problem.

This is where the next two symbols come in. ⊥ is used to decode a sequence of digits in an arbitrary base.

The other symbol, ⍤, is an operator. Just like /, operators are used to change the behaviour of a function; a higher-level function, if you wish. ⍤ takes a function on the left and a number on the right, and derives a new function which calls the function on the left repeatedly, each time with a subarray of a lower dimension. In this particular example, this means that ⊥ will be called on each row of the input. The left argument is a scalar value, so it will always be the constant 3.

In other words, this decodes each row as a base-3 number and returns an array of these numbers:

    3 (⊥⍤1) 65 88 -[1]⍨ unicode:toCodepoints 1 0 1 ⫽ ⊃ "A Y" "B X" "C Z"
┃1 3 8┃


This gives us a list of the game positions for each round, where each number is a value from 0 to 8.
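For readers following along in a more conventional language, the same pipeline up to this point can be sketched in Python:

```python
# Drop the middle column, convert letters to numbers 0-2,
# then decode each row as a base-3 number.
lines = ["A Y", "B X", "C Z"]
pairs = [(ord(line[0]) - 65, ord(line[2]) - 88) for line in lines]
codes = [3 * a + b for a, b in pairs]
# codes is now [1, 3, 8], matching the Kap output above
```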

Finding the scores for each position is outside the scope of this solution, but for this simple case of only 9 different combinations, doing it by hand was trivial. For a larger number of combinations, one would have to write some code to do it.

There are a few different ways in which you can pick values from an array, but in this case we'll use ⊇. This function takes an array on the right and a list of indices on the left. For each index, the corresponding value is looked up in the array and returned.

In our case, the indices are on the right, so we use ⍨ to swap the arguments. Again, this is to avoid having to wrap the entire expression in parentheses.

The final function to be called is +/. This is used to sum all values in an array. For anyone familiar with functional programming, this is just a simple reduction.

The reduction operator is /, which takes a function on the left, and performs a left reduction using that function. In other words, the following: +/ 1 2 3 4 is equivalent to (((1+2)+3)+4).

For anyone familiar with APL, it's worth noting that this is different from how APL handles this operator. In APL, the above would be interpreted as: (1+(2+(3+4))). In the case of sums, this doesn't matter but for other functions it does, in particular if the function has side effects.
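The difference is easy to demonstrate in Python with a non-associative function such as subtraction:

```python
from functools import reduce

# Kap folds from the left: -/ 1 2 3 4 is ((1-2)-3)-4
left = reduce(lambda a, b: a - b, [1, 2, 3, 4])              # -8

# APL folds from the right: 1-(2-(3-4))
right = reduce(lambda a, b: b - a, reversed([1, 2, 3, 4]))   # -2
```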

I can be reached on the Fediverse at @loke@functional.cafe

This post was inspired by the latest episode of Array Cast. In the episode, control structures in different array languages were discussed, with particular focus on the (what should probably be referred to as) big three: Dyalog, J and BQN. They also mentioned K, Q and GNU APL.

The participants were very thorough in going through the various options available to programmers of the above-mentioned languages, and it was a very informative episode. One of the more interesting ones they've had, since the series on tacit programming, in my opinion.

What is needed is some commentary on control structures in Kap, and this post serves to provide it.

About the need for control structures

As was touched upon in the episode, array languages generally don't need control structures as often as other languages do. As much as Kap attempts to be better at imperative control flow than other array languages, it's somewhat ironic that the main example given for a routine that needs a loop actually does not need one in Kap.

The example was along the lines of:

Multiply a number by 2 until the value is greater than 1000, and return that number.

Of course, this can be solved without any iteration at all just using maths, but for the purpose of this example, the assumption is that the multiplication has to be done some number of times.

The most straightforward solution in APL is:

{1 ⍳⍨ 1000 ≤ {2×⍺}\⍵+⍳1000}

In fact, thanks to Kap's lazy evaluation, this function will only perform the computation until a valid result is found. Thus, a loop is not needed in this case. This feature was discussed in a couple of previous blog posts.

That said, a looping version would look like this:

  i ← 0
  c ← ⍵
  while (c < 1000) {
    i ← i+1
    c ← c×2
  }
  i

Basic control structures

Kap provides control structures that are very similar in syntax and behaviour to C, Java and other similar languages.

If statement

Here's an example of the if statement:

if (a < b) {
  io:println "a is less than b"
} else {
  io:println "a is greater than or equal to b"
}

In Dyalog, control structures are not values; an :If statement cannot return a value. In Kap, everything returns a value, and in the case of the if statement, the value returned is that of the last expression in the clause that was evaluated. Hence, the following will print 3:

a ← 10
io:println if(a<100) { a-7 } else { 100 }

Link to code online

While statement

Kap also provides a while loop that has the same behaviour as that of C:

i ← 0
while (i < 5) {
  io:println i
  i ← i+1
}

Link to code online

The control structures that were mentioned above are not native to the language and are instead implemented as part of the standard library. How this is done is discussed further below.

When statement

The when statement is an alternative syntax to chained if/else statements. As Kap doesn't have an else if, such chains result in too much indentation, so when is useful as an alternative.

It consists of a series of conditions and associated clauses. It attempts each condition in turn, and once one returns true, the corresponding clause is evaluated and its return value is used as the return value of the entire statement. If none of the conditions evaluates to true, ⍬ is returned. Here is a simple example:

a ← 1
res ← when {
  (a=0) { "a is zero" }
  (a=1) { "a is one" }
  (1)   { "a is not in the set [0,1]" }
}
io:println res
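A Python if/elif chain plays the same role, though as statements rather than as a single value-producing expression (sketch):

```python
a = 1
# The first condition that holds selects the result, mirroring Kap's `when`.
if a == 0:
    res = "a is zero"
elif a == 1:
    res = "a is one"
else:
    res = "a is not in the set [0,1]"
print(res)  # prints "a is one"
```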

The power operator

Kap implements the power operator (⍣), which is a subset of the capabilities available in Dyalog. For the purposes of using the power operator to provide the behaviour of an if statement, the functionality is identical. The following expression prints “foo” if a is true:

({ io:println "foo" }⍣a) 0

This code is not very good at communicating what it actually does, and while it's a lot better than the example mentioned in the podcast episode where people were generating code in a string and evaluated it, it's still not particularly understandable.
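The idea behind the power operator, applying a function a given number of times, can be sketched in Python (the helper name `power` is made up):

```python
def power(f, n):
    """Return a function that applies f to its argument n times."""
    def apply(x):
        for _ in range(n):
            x = f(x)
        return x
    return apply

def print_foo(x):
    print("foo")
    return x

# With a boolean count, this acts as a crude `if`:
power(print_foo, 1)(0)  # prints "foo"
power(print_foo, 0)(0)  # prints nothing
```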

Marshall pointed out that the power operator is very specific and is not good as a general primitive operator. I agree, and I would even go a bit further and say that it's not very good at all. The problem in Dyalog is that it's overloaded to do three very different things:

  • Apply a function some number of times
  • An implementation of C's do/while statement
  • Calling the inverse of a function

The first point is OK. If this was the only thing the power operator did, I wouldn't have much of a problem with it.

The second point makes for a really bad looping construct, and as Adám mentioned on the show, if you want to create a normal while loop you have to use two separate invocations of ⍣.

The third point reuses an existing symbol to provide completely different functionality. If the right argument to ⍣ is ¯1, then the derived function is the inverse of the function specified on the left. For example, the following returns 6 in Dyalog, since that is the value that has to be added to 4 in order to get 10:

4 (+⍣¯1) 10

I have to admit that using this syntax is really clever, as it relates to common maths notation. But, as a programming construct it's really bad. Imagine if the right argument is used as a variable. If that variable happens to have the value ¯1, then the behaviour of the function will change completely. I really can't see any situation in which this would be desired behaviour. For this reason, Kap uses ˝ to indicate function inverse.

That said, the Kap implementation of while is basically nothing more than a wrapper around ⍣ implemented using custom syntax, as explained below.


Error handling

The goal is to implement a condition system similar to what Common Lisp provides, although this is not available at this time. Thus, the current functionality is very similar to that of languages like Java.

Since Kap does not support goto, the symbol used to provide goto in APL, the right arrow (→), has been repurposed to throw exceptions. All exceptions have a type, which is simply any APL object (although it's usually a symbol), and a value, which is another APL object. The general syntax is as follows:

type → value

When called monadically, the type becomes :error.

When an exception is thrown, the stack is unwound and execution continues in the innermost catch handler which is declared to catch exceptions of the given type.

Declaring catch handlers is currently somewhat cumbersome. There is currently no custom syntax defined to make this nicer, so the catch operator has to be used. At the risk of scaring away any interested readers, here's an example of how it's used:

{
  :foo → 1
  io:println "this will never be printed"
} catch 1 2 ⍴ :foo λ{ io:println "caught exception with data: " , ⍕⍺ }
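For comparison, the same shape of typed exception can be sketched in Python (the class name is invented for illustration):

```python
class KapStyleError(Exception):
    """A tag (the exception type) plus an arbitrary payload value."""
    def __init__(self, tag, value):
        super().__init__(f"{tag}: {value}")
        self.tag = tag
        self.value = value

try:
    raise KapStyleError(":foo", 1)
    print("this will never be printed")
except KapStyleError as e:
    print("caught exception with data:", e.value)
```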

Custom syntax

One of the reasons APL is an attractive language is that at its core, the language itself is very simple. It consists of a limited number of primitives on top of which the rest of the language is built. This is similar to another favourite language of mine: Lisp. A lot of the extensions that Kap provides on top of APL are inspired by Lisp.

Lisp allows the developer to extend the syntax using macros. For the reader without experience in Lisp, the one-line summary is that it allows you to treat the code as data, and rewrite the code while it's being compiled. This allows you to completely change the behaviour of the language to suit the problem at hand.

Kap does not implement a full macro system, although this has been considered and may yet happen at some point in the future. Instead, the language provides a keyword: defsyntax that gives instructions to the parser to interpret the code in a different way. The parsed components are then passed to a function that is called at runtime.

As was mentioned earlier, the power operator can serve as a foundation for various control structures. As it already implements a repeat operation, let's use defsyntax to create such a facility:

defsyntax repeat (:value count :nfunction clause) {
  (⍞clause ⍣ count) 0
}

Once the above has been run, we can print “test” 5 times:

repeat (5) {
  io:println "test"
}

I was recently listening to the latest episode of Array Cast. I enjoy listening to this show as it gives me some insight into the array programming community and is also reassuring me that I'm not the only one who finds these things interesting.

While I often have comments about what is being said on this show, this particular episode had me commenting on a lot that was said and I found myself disagreeing with many of the major points made during the episode. For this reason, I decided to write them down in the form of a blog post in an attempt to more clearly formulate what I believe are incorrect assumptions made in the episode.

With the exception of the first point, which is just a correction, the statements made here are my personal opinions and I don't want to make it seem as though I'm claiming everybody else is outright wrong.

That point about election prediction

In the episode, a claim was made that the first computer-based election predictions were made using APL. As CBS was mentioned, it seems obvious that this indeed refers to the 1952 presidential election in the US. During this election, most people were predicting a close call while the software predicted a landslide. This was so off from expectations that CBS didn't want to air the prediction. It wasn't until the actual results started to come in that they admitted that the computer had actually predicted this result.

The computer used was a Univac I. This was long before the first programming languages started to appear. APL wasn't created until 1966.

It's possible that APL has been used for election predictions, and it would be interesting to see something about that, but I have not been able to find any information about this.

Debugging code

There was a discussion around debugging complex code. There is an implied argument that the “Whitney style” C code is hard to debug. The following statement was made:

And then another time the guy said, “well, how're we going to debug it?” he said. And he said, “oh, if I have to debug this…” — and I never understood that until much later — “oh, you really want me to? You want this thing to be bug free? Well, then I'll have to spend a couple of months reading the code.” Could you imagine what? Yeah, in other words. Think about it. Usually, you know, you'll run something, and you test it and, you know. And if it breaks, you know where to go or whatever. But for Arthur, the way his mind works is if you really want to debug the thing, he had to go and read the code very, very carefully. Not execute the code. Read the code.

Later, the following is said:

Yeah, well, I'm actually going back to what you were talking about, Joel, with the understanding of the programming and have having to read it to say I could debug it. That actually reminds me of something that Aaron Hsu had said. Or that I've heard Aaron Hsu talk about in his Co-dfns compiler, [23] is that he finds now that he understands it so tightly, he can look at it and make very quick changes to it because he's almost internalized the process so tightly that he can see exactly what he needs to do, and he doesn't have all these side effects right rippling through. It's a level of understanding of a program that I don't think most people who write, or program actually get to because they do get into that loop of “I'll try this. Oh, it didn't work. I'll try this. It didn't work.”

In my opinion, none of the above justifies the use of “Whitney style” C code. In fact, it's a vacuous statement. If you have no tools to debug or otherwise analyse a piece of code, of course you are going to have to spend a lot of time reading and understanding the code. You really have no choice.

That's not to say that you shouldn't understand the code you're working on. That goes without saying. The method suggested at the end of the second quote is bad practice no matter what code you are working on. Whenever one makes a change to a codebase, one has to understand what that change does and how it affects the rest of the system.

When discussing “Whitney style” code, this is what I'm referring to. The below is a snippet from the J interpreter:

static A jtsprarg(J jt,I f,A x){A q;B*b,c;I r;P*xp;
 r=AR(x); xp=PAV(x);
 if(SPARSE&AT(x)){c=1; RZ(b=bfi(r,SPA(xp,a),1)); DO(f, if(!b[i]){c=0; break;});}
 else{c=0; GATV(q,B01,r,1,0); b=BAV(q); memset(b,C0,r);}
 R c||!r?x:reaxis(ifb(r,b),x);
}

It should be said that this is an older version of the interpreter. The later versions have added a lot of comments that explain what's going on. This is great, and my understanding is that we have Henry Rich to thank for that.

The purpose of source code is to be read and understood. A lot more time is spent looking at and analysing source code than is spent writing it. For this reason, it's important that code is clear and easy to understand.

There are certainly a lot of people who are capable of understanding this. It can even be argued that it's not that hard to figure out. You have to learn what the macros do, and you have to be aware of what the single-character names do.

But even setting that aside, running this code in a debugger is terrible. The macros make things particularly problematic, and setting breakpoints when you have multiple statements on a single line is just a hassle. But the biggest issue with all of this is that getting people up to speed on the codebase is a problem. There is value in following established principles.

Even more importantly, there is no value in compressing the code by removing every single whitespace and using single-character variable names. Hard drives are large enough these days to be able to store source code with long names.

I fully agree that Whitney is a very smart person. He's done some very impressive things. However, not everything he came up with is great.

My argument is that J was successful in spite of its code style, not thanks to it.

I also believe that APL suffers from this to a limited extent. However, it's offset by the fact that the language itself is designed to handle it. This is similar to maths notation, which is concise and used by people all over the world. However, writing maths in ASCII is unbearable, as anyone who has seen a mathematician write C code can attest.

This relates to what was said in the episode: notation matters. APL notation is designed to be written in a very terse form. C was not. Just because you can write highly packed code in C doesn't necessarily mean that you should do that.

Performance of basic functions

From the episode:

Arthur's design goal was that the slowest primitive to the fastest primitive have to live within one order of magnitude in speed.

If taken literally, very few if any languages can obey this principle. The difference between addition and exponentiation is already more than one order of magnitude.

After listening to the discussion around this topic, I have spent a lot of time thinking about what it really means. And I have come to the conclusion that the statement itself is meaningless. It's clearly heavily dependent on what the term “primitive function” means. I'm guessing that in the case of K, it's the functions with a single character name. As such, it's a completely arbitrary choice since there is no rule that says that single character names have to behave any differently than any other function.

Even when assuming there is a natural definition of a primitive function, it's not clear why it's important. In the podcast, some examples were brought up where, in some specific language, certain functions were parallelised and incredibly fast while others were significantly slower because they did not have the same optimisations. As best as I can tell, we have two different issues here:

  • The case where + is fast but - is slower, even if only by a factor of 1.5. This would be bad.
  • The case where + is fast, and the built-in implementation of a Monte-Carlo simulation is slow. This would be expected.

With this in mind, I'd use the following statement which is less impactful but perhaps a bit more correct:

“No primitive operations should be unexpectedly slow”

With “primitive” being used very loosely to refer to common functions used in day-to-day programming.

This is one of those occasions where I can point to a specific feature in Dyalog APL, and use that to discuss some feature of Kap.

This time I'd like to mention the At operator in Dyalog. It can be used to update a subset of an array by a function, where the values to be updated are selected using a binary mask (the same function can also be used to select cells by coordinate, but that's outside the scope of this post).

Let's say you have an array foo with two columns and some number of rows:

┃"abc" 1┃
┃"def" 2┃
┃"ghi" 3┃
┃"jkl" 4┃

You now wish to add 1 to the values in the second column. In order to do this we create a boolean mask that is true in all cells that you want to update. Since this is a simple case of a two-column array, the simplest solution is probably:

    (⍴foo) ⍴ 0 1
┃0 1┃
┃0 1┃
┃0 1┃
┃0 1┃

In Dyalog APL, the way to use the At operator to achieve this is as follows:

    {⍵+1}@{(⍴⍵) ⍴ 0 1} foo
↓ ┌→──┐   │
│ │abc│ 2 │
│ └───┘   │
│ ┌→──┐   │
│ │def│ 3 │
│ └───┘   │
│ ┌→──┐   │
│ │ghi│ 4 │
│ └───┘   │
│ ┌→──┐   │
│ │jkl│ 5 │
│ └───┘   │

As mentioned above, the At operator has several different variations, and the one used here is the type where both the left and right arguments are functions. The right function accepts the array as an argument and returns the boolean mask that selects which values to operate on. The left argument is a function that is evaluated for each selected value.

Kap currently does not implement the At operator. That doesn't mean that it's impossible to do selection by mask. We just have to take advantage of two different features: The Case function and lazy evaluation.

Lazy evaluation was described in the previous post, so let's have a look at the Case function first.

Case function

The Case function (%) was inspired by Dzaima/APL. It accepts an array of indices on the left and a list of arrays on the right, each with the same shape as the index array, and for each cell in the index it picks the corresponding value from one of the subarrays.

An example will hopefully make this more clear:

    0 1 1 2 % "abcd" "1234" "defg"
"a23g"

The behaviour of this function should be reasonably clear. The first cell in the left argument is 0, so it picks the first element of the first array in the right argument. The second cell is 1, so the second value is picked from the second array, and so on.

If the left argument is a boolean array, then that is merely a specialisation of the general case described above where there are only two arrays in the right argument.
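The semantics can be sketched in Python (the helper name `case` is made up):

```python
def case(indices, arrays):
    """Cell i of the result is taken from arrays[indices[i]] at position i."""
    return [arrays[k][i] for i, k in enumerate(indices)]

print("".join(case([0, 1, 1, 2], ["abcd", "1234", "defg"])))  # prints a23g
```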

Selection by mask in Kap

Based on the description above, it should be obvious how the selection part can be achieved using Case. However, we also want conditional execution. This actually happens automatically thanks to the fact that lazy evaluation is available. Here is the solution in Kap:

    ((⍴foo) ⍴ 0 1) % foo ({⍵+1}¨ foo)
┃"abc" 2┃
┃"def" 3┃
┃"ghi" 4┃
┃"jkl" 5┃

The reason this works is because the function {⍵+1} is only evaluated for the cells that are actually used in the result. If this wasn't the case, the above would raise an error when an attempt is made to add 1 to a string.

The above was an example of cases where lazy evaluation isn't just an optimisation, but is a central feature required to perform certain operations.
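For contrast, an eager language has to restrict the function to the selected cells up front. A plain Python sketch (the helper name is invented):

```python
def update_by_mask(array, mask, f):
    """Apply f only where the mask is 1; other cells pass through untouched,
    so f never sees the string column."""
    return [[f(v) if m else v for v, m in zip(row, mrow)]
            for row, mrow in zip(array, mask)]

foo = [["abc", 1], ["def", 2], ["ghi", 3], ["jkl", 4]]
mask = [[0, 1]] * 4
print(update_by_mask(foo, mask, lambda x: x + 1))
```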



Kap is built around the observation that a lot of straightforward APL code ends up expressing the solution to a problem in a very clear and concise way, but has the drawback of performing excessive computation.

Let's say you have a number of products, let's say mobile applications, made by a wide range of authors. Each application has a rating between 0 and 5. For the purpose of this example, we only care about the ratings of the applications and nothing else, so we can represent this information using a simple 2-dimensional matrix. Each row represents an application, with the first column being the name of the developer and the second the score.

    applist ← ("John" "Lars" "Peter" "John" "John" "Lars" "Lars") ,[0.5] 3 4 1 5 0 3 2
┃ "John" 3┃
┃ "Lars" 4┃
┃"Peter" 1┃
┃ "John" 5┃
┃ "John" 0┃
┃ "Lars" 3┃
┃ "Lars" 2┃

You are now tasked to find a developer who has had no applications rated 5 (perhaps the purpose is to sell them developer training). For the example above, that would be Lars.

A straightforward way to think about this problem is to break it up into three separate steps:

  • Collect the scores for each developer
  • Filter the results to remove all developers with applications with a score of 5
  • Return the first result

In APL (using Dyalog for this example), this can be achieved as follows:

names ← applist[;0]
scores ← applist[;1]
⊃ {⍵[⍵[;1] ⍳ 1 ; 0]} names {⍺ , ~5∊⍵}⌸ scores

The above is a pretty much one-to-one mapping of the simple solution. It is definitely not the best solution, and the astute reader may note that this code checks whether every developer had a score-5 application, even though we're only interested in the first one. In this particular case we're only calling Member, so it doesn't matter much from a performance perspective, but if the operation was much more complicated, this could result in a rather substantial amount of data being calculated which is immediately thrown away.
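The three steps translate fairly directly into eager Python (the function name is invented); like the APL version, it groups every developer before looking for the first match:

```python
def first_without_top_score(names, scores, top=5):
    """Group scores by developer (in order of first appearance), then return
    the first developer with no application rated `top`."""
    groups = {}
    for n, s in zip(names, scores):
        groups.setdefault(n, []).append(s)
    for n, ss in groups.items():
        if top not in ss:
            return n
    return None

names = ["John", "Lars", "Peter", "John", "John", "Lars", "Lars"]
scores = [3, 4, 1, 5, 0, 3, 2]
print(first_without_top_score(names, scores))  # prints Lars
```

Note that Peter also has no score-5 application, but Lars appears first among the unique developer names, which is why both this sketch and the APL expression return Lars.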

A detour: Implementing Key in Kap

At the time of this writing, Kap does not implement the Dyalog Key operator. The reason for this is that the precise syntax to use hasn't been decided, but we can easily implement it ourselves:

∇ (keys) key (values) {
	a ← ∪keys
	a ,[0.5] {values ⫽⍨ keys≡¨⊂⍵}¨ a
}
This function implements a variant of the dyadic form of Key which has the following properties: Given a vector of keys on the left, group the corresponding values on the right and return a matrix with two columns where the first column is the key and the second is a vector of all values mapped to that key.

Thus, the following:

1 2 1 1 key "foo" "bar" "abc" "def"

returns:

┃1 ┏━━━━━━━━━━━━━━━━━┓┃
┃  ┃"foo" "abc" "def"┃┃
┃  ┗━━━━━━━━━━━━━━━━━┛┃
┃2             ┏━━━━━┓┃
┃              ┃"bar"┃┃
┃              ┗━━━━━┛┃

The way this implementation works should be reasonably simple to understand. The variable a is assigned a vector of the unique values, then for each unique key, select all values mapped to that key and collect it into a vector. Finally, concatenate the list of keys with the results.
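The grouping behaviour can be mirrored in Python, relying on dicts preserving insertion order (the helper name mirrors the Kap definition above):

```python
def key(keys, values):
    """Group values by key, keeping keys in order of first appearance."""
    groups = {}
    for k, v in zip(keys, values):
        groups.setdefault(k, []).append(v)
    return list(groups.items())

print(key([1, 2, 1, 1], ["foo", "bar", "abc", "def"]))
# prints [(1, ['foo', 'abc', 'def']), (2, ['bar'])]
```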

Porting the original problem to Kap

With this new function in our toolbox, we can now create a version of the previous program that works in Kap:

⊃ {⍵[1 ⍳⍨ ~5 ∊¨ ⍵[;1] ; 0]} names key scores

At this point, we have two very similar programs in APL and Kap. The method used may not be the best for either language, but it's a very simple implementation of the original idea.

The effect of lazy evaluation

In Kap, the function io:println is used to print its argument, followed by a newline. The function then returns its argument. Thus the following will print all the values in scores while computing its average:

    (+/÷≢) io:println¨ scores


The above is not surprising, but let's do the same thing to the previous example:

    ⊃ {⍵[1 ⍳⍨ ~5 ∊¨ ⍵[;1] ; 0]} names key io:println¨ scores


Note how it only printed 5 values from the scores vector. What happened to the other two values?

Whenever possible, Kap will avoid performing any computations until the result of that computation is needed. In the case above, io:println¨ scores does not evaluate the function, but instead returns an object that will at some point in the future return the value you need. If that value was never requested, the function is never called.
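Python generators give a rough feel for this rule: the mapped function runs only when a cell is actually consumed (a sketch, not how Kap is implemented):

```python
def lazy_map(f, values):
    """Yield f(v) on demand; f never runs for cells nobody consumes."""
    for v in values:
        yield f(v)

printed = []
def record(v):
    printed.append(v)   # stand-in for io:println's side effect
    return v

scores = [3, 4, 1, 5, 0, 3, 2]
for v in lazy_map(record, scores):
    if v == 5:          # stop once the interesting value is found
        break

print(printed)  # prints [3, 4, 1, 5]; the last three cells were never touched
```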

There are some exceptions to the rule above with the main one being that variable assignment forces what is referred to as “collapse”. The collapse operation ensures that all results are computed. Thus, the following prints all 7 scores:

    ⊃ {⍵[1 ⍳⍨ ~5 ∊¨ ⍵[;1] ; 0]} names key a←io:println¨ scores


It is also possible to use the function comp (for “compress”) to force evaluation.

Benefits and drawbacks of lazy evaluation

The benefits in cases like the above should be obvious. The simpler approach to solving a problem will often result in fewer computations than the APL solution.

The drawback is that evaluation is more complicated which precludes Kap from implementing low level optimisations that Dyalog and BQN are able to do. These implementations have highly optimised code paths for performing vector calculations on very large arrays of numbers. Doing this in Kap would be difficult, as the implementation would need to determine, at runtime, whether or not it is more beneficial to perform an optimised full-array operation which can use vector instructions in the CPU, or whether it's better to delay the computation in case not all results will be needed.

I can be reached on the Fediverse at @loke@functional.cafe