Elias Mårtenson

@loke@functional cafe

A number of years ago I noted that Unicode did not contain all the characters in PETSCII, the character set used by the Commodore 64 and other classic Commodore computers. The Wikipedia page at the time even explained that some symbols were missing from its character table because Unicode didn't support them.

I decided to make a post about it on the Unicode mailing list, and some people there agreed. The discussion expanded to cover other '80s systems whose character sets were also missing.

So a working group was formed which I was a part of, and a proposal was made to add a number of characters which were used by several old systems. After a few failed attempts, the proposal was finally accepted, and it was included in Unicode 13.

I'm quite happy that my small contribution to this project helped fill a gap that I believe needed to be filled.

Let's use these symbols

A few years have now passed, and font support for these symbols is more widespread. This is good, because the character set contains symbols that are not just useful for interoperability with old systems, but are useful on their own: in particular, the BLOCK SEXTANT characters.

These are the 60 characters starting from U+1FB00, along with 4 more characters that already existed elsewhere. Here they are:


Consider a character cell that is divided into 3 rows by 2 columns. These characters cover all 64 possible combinations of filled and empty blocks. In other words, we can use them to draw graphics at a resolution which is 2 times the width and 3 times the height of the terminal screen.

Kap is really good at working with arrays of data, so let's write a program to convert a 2-dimensional array of bits to a character array of BLOCK SEXTANT characters.

First we need to create the list of characters above. Here's the code to do this:

blockMapping ← (@\u2590,)⍢(0x2A↓) (@\u258C,)⍢(0x15↓) @\uA0 , (@\u1FB00+⍳60),@\u2588

Let's break it down. The first part creates a list of the 60 characters in the legacy computing code block, and then prepends U+00A0 NO-BREAK SPACE and appends U+2588 FULL BLOCK. These two characters already existed in Unicode, so they have codepoints outside that block:

@\uA0 , (@\u1FB00+⍳60),@\u2588

We then want to insert the two remaining symbols that also existed previously: U+2590 RIGHT HALF BLOCK and U+258C LEFT HALF BLOCK. We insert these using structural under.

Structural under deserves a blog post of its own (and I've already written extensively about it), so for now we'll just note that the following inserts a at position b in c:

(a,)⍢(b↓) c
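For comparison, the same table can be built in Python. This is an illustrative sketch only (the variable names are my own, not part of KAP), but it mirrors the KAP expression step by step:

```python
# Build the 64-entry sextant lookup table, mirroring the KAP blockMapping:
# NBSP, the 60 BLOCK SEXTANT characters, and FULL BLOCK first...
table = list('\u00a0' + ''.join(chr(0x1FB00 + i) for i in range(60)) + '\u2588')
# ...then insert the two pre-existing half blocks at their bit positions,
# exactly as the two structural-under expressions do.
table.insert(0x15, '\u258C')  # LEFT HALF BLOCK: cells 1+4+16 = 21 (binary 010101)
table.insert(0x2A, '\u2590')  # RIGHT HALF BLOCK: cells 2+8+32 = 42 (binary 101010)
block_mapping = ''.join(table)
```

The insertion positions are the bit patterns of a filled left column (21) and right column (42), which is why the two half-block characters must land exactly there.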

Now that we have the characters in a string, all we have to do is to merge each 3-by-2 block in the input array, decode them as a binary number and pick the appropriate character based on the resulting number. Here's the full function:

draw ⇐ {
	(2=≢⍴⍵) or throw "Argument must be a 2-dimensional array"
	v ← ((⍴⍵) + 3 2 | -⍴⍵) ↑ ⍵
	blockMapping ⊇⍨ (2⊥⌽)¨ ((¯2+≢v) ⍴ 1 0 0) ⌿ 3,⌿ ((¯1+1⊇⍴v) ⍴ 1 0) / 2,/v
}

The first line in the function simply raises an error if the input is not a 2-dimensional array. The second line extends the array so the width is divisible by 2 and the height is divisible by 3. Finally, the third line does all the work, computing the resulting image.
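The same three steps can be sketched in Python. This is an illustrative version, not the KAP implementation: it rebuilds the lookup table so the sketch is self-contained, and takes the input as a list of 0/1 rows:

```python
# Sextant lookup table, built the same way as blockMapping.
table = list('\u00a0' + ''.join(chr(0x1FB00 + i) for i in range(60)) + '\u2588')
table.insert(0x15, '\u258C')
table.insert(0x2A, '\u2590')

def draw(bits):
    """Render a matrix of 0/1 values as lines of BLOCK SEXTANT characters."""
    h, w = len(bits), len(bits[0])
    H = -(-h // 3) * 3          # round height up to a multiple of 3
    W = -(-w // 2) * 2          # round width up to a multiple of 2
    padded = [list(row) + [0] * (W - w) for row in bits]
    padded += [[0] * W for _ in range(H - h)]
    lines = []
    for r in range(0, H, 3):
        line = ''
        for c in range(0, W, 2):
            # cell (dr, dc) contributes bit dr*2 + dc, matching (2⊥⌽)
            idx = sum(padded[r + dr][c + dc] << (dr * 2 + dc)
                      for dr in range(3) for dc in range(2))
            line += table[idx]
        lines.append(line)
    return '\n'.join(lines)
```

A fully set 3-by-2 block maps to U+2588 FULL BLOCK, an empty one to NO-BREAK SPACE, and a filled left column to U+258C LEFT HALF BLOCK.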

Let's try it:

io:print draw ⊃ {⍵⍴1}¨ ⌽⍳10

This should print:


The quality of the output depends on the font used, and I note that the default font used on write.as (where this blog is hosted) is not very good. With a proper font it looks a lot better.

Try the full example on the Kap web client.

Until now, the only way to return a value from a KAP function was to have it be returned as the return value from the last expression in a function.

Well, there are also exceptions, but those are a bit special and should, well, remain exceptions.

But in some cases it's useful to return a value early. Essentially what return is in languages like C, Java, etc.

So, I did the obvious and implemented return just like in those languages, except that it's called → to align with APL.

This example should be self-describing:

∇ foo (x) {
    if (x≡0) {
        → 10
    } else {
        20
    }
}
However, since → is a regular function, it can also be called dyadically. If the left argument is true, the right argument will be returned from the calling function. If it is false, the → function returns normally (itself returning the right argument). Confused yet? The word “return” describes both behaviours: returning from the current function, and returning a value from the → function itself.

Perhaps this example will make it more clear:

∇ foo (x) {
    (x≡0) → 10
    20
}

Future work: Returning from different scopes

Common Lisp has a function RETURN-FROM that allows for returning from an outer function, for example when using nested functions. It also provides a BLOCK form that creates a named scope that can be explicitly returned from.

This is a very useful feature, and implementing it in KAP would not be particularly complicated.

The below is a slightly edited version of a series of posts made on the Fediverse.

For anyone not familiar with the syntax of APL and its variations, the most important thing to keep in mind is that for function calls, the parser binds to the right. It also evaluates its arguments from right to left.

What this means is that the following code: io:println 1 + math:sin 2 is evaluated like this:

a0 ← math:sin 2
a1 ← 1 + a0
io:println a1

Another important fact is that KAP functions are either monadic (accepting a single argument to the right of the function name) or dyadic (accepting two arguments, one on each side of the function). Some functions can be called both monadically and dyadically. One such example is - which subtracts when called dyadically but negates when called monadically.

With that in mind, let's take a look at the solution:

+/ 4 8 3 1 5 9 7 2 6 ⊇⍨ 3 (⊥⍤1) 65 88 -[1]⍨ unicode:toCodepoints 1 0 1 ⫽ ⊃ io:read "testdata.txt"

The input data is in the file testdata.txt, and the first thing that happens is that the entire file is read into an array of strings, where each element is one line from the file.

Each line is three characters, so we use the ⊃ function, which converts an array of strings to a two-dimensional array of characters.

The next part is 1 0 1 ⫽. This function takes an array on the right, and uses the left argument (1 0 1) as a selector. It means to take one copy of the first column, zero copies of the second and one copy of the third column. In other words, this drops the centre column.

Starting with the example array we then get this:

    1 0 1 ⫽ ⊃ "A Y" "B X" "C Z"
┃@A @Y┃
┃@B @X┃
┃@C @Z┃


We then call the function unicode:toCodepoints which converts the characters to the corresponding Unicode codepoints.

We then want to convert the codepoints to actual numbers from 0 to 2. This is done by subtracting 65 from the first column and 88 from the second. Naturally we use the - function to do this, but in order to avoid having to wrap the entire expression in parentheses, we use the ⍨ operator, which reverses the arguments, putting the value to be subtracted from on the right instead of the left.

I must also explain what the [1] means. Since the left argument is a 1-dimensional array, and the right argument is 2-dimensional, the operation will be applied along an axis. The default is that it will be applied along axis 0. If you put an axis within square brackets after the name of a mathematical function, the operation is instead performed along that axis.

The behaviour of the function when passed an axis parameter depends on the function itself. All mathematical functions use the axis parameter this way.

Let's run the code up to this point to see what happens:

    65 88 -[1]⍨ unicode:toCodepoints 1 0 1 ⫽ ⊃ "A Y" "B X" "C Z"
┃0 1┃
┃1 0┃
┃2 2┃


What we see here is that each row in this array represents a two-digit number that uniquely identifies the game position. All we have to do is decode this number and assign each number a score, and we'll have solved the problem.

This is where the next two symbols come in. ⊥ is used to decode a sequence of digits in an arbitrary base.

The other symbol, ⍤, is an operator. Just like ⍨, an operator is used to change the behaviour of a function; it's a higher-order function, if you wish. ⍤ takes a function on the left and a number on the right, and derives a new function which calls the function on the left repeatedly, each time with a subarray of a lower dimension. In this particular example, this means that the function will be called on each row of the input. The left argument is a scalar value, so it will always be the constant 3.

In other words, this decodes each row as a base-3 number and returns an array of these numbers:

    3 (⊥⍤1) 65 88 -[1]⍨ unicode:toCodepoints 1 0 1 ⫽ ⊃ "A Y" "B X" "C Z"
┃1 3 8┃


This gives us a list of the game positions for each round, where each number is a value from 0 to 8.
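For readers who don't speak APL, here is an illustrative Python rendering of the pipeline so far, using the same three-line test input (the variable names are my own):

```python
lines = ["A Y", "B X", "C Z"]
# Drop the centre column and convert codepoints to numbers 0..2;
# the offsets 65 and 88 are ord('A') and ord('X').
pairs = [(ord(l[0]) - 65, ord(l[2]) - 88) for l in lines]
# Decode each pair as a base-3 number, as 3 (⊥⍤1) does per row.
positions = [a * 3 + b for a, b in pairs]
# positions == [1, 3, 8]
```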

Finding the scores for each position is outside the scope of the solution, but for this simple case of only 9 different combinations, doing it by hand was trivial. For a larger number of combinations, one would have to write some code to do it.

There are a few different ways in which you can pick values from an array, but in this case we'll use ⊇. This function takes an array on the right, and a list of indices on the left. For each index, the corresponding value is looked up in the array and returned.

In our case, the indices come from the expression on the right, so we use ⍨ to swap the arguments. Again, this is to avoid wrapping the entire expression in parentheses.

The final function to be called is +/. This is used to sum all values in an array. For anyone familiar with functional programming, this is just a simple reduction.

The reduction operator is /, which takes a function on the left, and performs a left reduction using that function. In other words, the following: +/ 1 2 3 4 is equivalent to (((1+2)+3)+4).

For anyone familiar with APL, it's worth noting that this is different from how APL handles this operator. In APL, the above would be interpreted as: (1+(2+(3+4))). In the case of sums, this doesn't matter but for other functions it does, in particular if the function has side effects.
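The difference between the two reduction orders is easy to demonstrate with subtraction, where the grouping matters. A small Python illustration:

```python
from functools import reduce

xs = [1, 2, 3, 4]
# KAP-style left reduction: (((1-2)-3)-4
left = reduce(lambda a, b: a - b, xs)   # -8
# APL-style right-to-left reduction, written out: 1-(2-(3-4))
right = 1 - (2 - (3 - 4))               # -2
```

For an associative function like + both orders give the same result, which is why the distinction only shows up with functions like - or ones with side effects.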

I can be reached on the Fediverse at @loke@functional.cafe

This post was inspired by the latest episode of Array Cast. In the episode, control structures in different array languages were discussed, with particular focus on what should probably be referred to as the big three: Dyalog, J and BQN. They also mentioned K, Q and GNU APL.

The participants were very thorough in going through the various options available to programmers of the above-mentioned languages, and it was a very informative episode. One of the more interesting episodes they've had since the series on tacit programming, in my opinion.

What was missing was some commentary on control structures in KAP, and this post serves to provide it.

About the need for control structures

As was touched upon in the episode, array languages generally don't need to use them as often as other languages. As much as KAP attempts to be better at imperative control structures than other array languages, it's somewhat ironic that the main example given for a routine that needs a loop actually does not need one in KAP.

The example was along the lines of:

Multiply a number by 2 until the value is greater than 1000, and return that number.

Of course, this can be solved without any iteration at all just using maths, but for the purpose of this example, the assumption is that the multiplication has to be done some number of times.

The most straightforward solution in APL is:

{1 ⍳⍨ 1000 ≤ {2×⍺}\⍵+⍳1000}

In fact, thanks to KAP's lazy evaluation, this function will only perform the computation until a valid result is found. Thus, a loop is not needed in this case. This feature was discussed in a previous blog post, and also this post.

That said, a looping version would look like this:

  i ← 0
  c ← ⍵
  while (c < 1000) {
    i ← i+1
    c ← c×2
  }
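For comparison, the same loop sketched in Python (an illustrative equivalent; the function name is my own):

```python
def double_until(n, limit=1000):
    # Repeatedly double n, counting iterations, until it reaches the limit.
    i, c = 0, n
    while c < limit:
        i += 1
        c *= 2
    return c

double_until(3)   # 3 doubles to 1536 after 9 iterations
```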

Basic control structures

KAP provides control structures that are very similar in syntax and behaviour to C, Java and other similar languages.

If statement

Here's an example of the if statement:

if (a < b) {
  io:println "a is less than b"
} else {
  io:println "a is greater than or equal to b"
}
In Dyalog, control structures are not values. An :If statement cannot return a value. In KAP, everything returns a value, and in the case of the if statement, the value returned is that of the last expression in the clause that was evaluated. Hence, the following will print 3:

a ← 10
io:println if(a<100) { a-7 } else { 100 }

Link to code online

While statement

KAP also provides a while loop that has the same behaviour as that of C:

i ← 0
while (i < 5) {
  io:println i
  i ← i+1
}
Link to code online

The control structures that were mentioned above are not native to the language and are instead implemented as part of the standard library. How this is done is discussed further below.

When statement

The when statement is an alternative syntax for chained if/else statements. As KAP doesn't have an else if, such chains result in excessive indentation, so when is a useful alternative.

It consists of a series of conditions and associated clauses. It attempts each condition in turn, and once one returns true, the corresponding clause is evaluated and its return value is used as the return value of the entire statement. If none of the conditions evaluates to true, a null value is returned. Here is a simple example:

a ← 1
res ← when {
  (a=0) { "a is zero" }
  (a=1) { "a is one" }
  (1)   { "a is not in the set [0,1]" }
io:println res
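A rough Python model of this behaviour, assuming each clause is supplied as a pair of zero-argument functions so the conditions are tried lazily and in order (the helper is my own, not part of KAP):

```python
def when(*clauses):
    # Each clause is a (condition, body) pair of zero-argument functions.
    for cond, body in clauses:
        if cond():
            return body()
    return None  # no clause matched

a = 1
res = when(
    (lambda: a == 0, lambda: "a is zero"),
    (lambda: a == 1, lambda: "a is one"),
    (lambda: True,   lambda: "a is not in the set [0,1]"),
)
```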

The power operator

KAP implements the power operator (⍣), which provides a subset of the capabilities available in Dyalog. For the purposes of discussing how the power operator can provide the behaviour of an if statement, the functionality is identical. The following expression prints “foo” if a is true:

({ io:println "foo" }⍣a) 0

This code is not very good at communicating what it actually does, and while it's a lot better than the example mentioned in the podcast episode, where people were generating code in a string and evaluating it, it's still not particularly understandable.
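The trick relies on nothing more than “apply a function n times”, where n happens to be a boolean. A minimal Python sketch of such a power-like combinator (my own construction, for illustration):

```python
def power(f, n):
    # Derive a function that applies f n times; with a boolean n this
    # degenerates into "call f only if n is true", as in the KAP expression.
    def derived(x):
        for _ in range(int(n)):
            x = f(x)
        return x
    return derived

a = True
power(lambda _: print("foo"), a)(0)   # prints "foo" once when a is true
```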

I agree with Marshall, who pointed out that the power operator is very specific and not good as a general primitive operator. I would even go a bit further and say that it's not very good at all. The problem in Dyalog is that it's overloaded to do three very different things:

  • Apply a function some number of times
  • An implementation of C's do/while statement
  • Calling the inverse of a function

The first point is OK. If this were the only thing the power operator did, then I wouldn't have many problems with it.

The second point makes for a really bad looping construct, and as Adám mentioned on the show, if you want to create a normal while loop you have to use two separate invocations of ⍣.

The third point reuses an existing symbol to provide completely different functionality. If the right argument to ⍣ is ¯1, then the derived function is the inverse of the function specified on the left. For example, the following returns 6 in Dyalog, since that is the value that has to be added to 4 in order to get 10.

4 (+⍣¯1) 10

I have to admit that using this syntax is really clever, as it relates to common maths notation. But as a programming construct it's really bad. Imagine if the right argument comes from a variable. If that variable happens to have the value ¯1, then the behaviour of the function will change completely. I really can't see any situation in which this would be desired behaviour. For this reason, KAP uses ˝ to indicate function inverse.

That said, the KAP implementation of while is basically nothing more than a wrapper around ⍣ implemented using custom syntax, as explained below.


Exceptions

The goal is to implement a condition system similar to what Common Lisp provides, although this is not available at this time. Thus, the current functionality is very similar to that of languages like Java.

Since KAP does not support goto, the symbol used to provide goto in APL, the right arrow (→), has been repurposed to throw exceptions. All exceptions have a type, which is simply any APL object (although it's usually a symbol) and a value, which is another APL object. The general syntax is as follows:

type → value

When called monadically, the type becomes :error.

When an exception is thrown, the stack is unwound and execution continues in the innermost catch handler which is declared to catch exceptions of the given type.

Declaring catch handlers is currently somewhat cumbersome. There is currently no custom syntax defined to make this nicer, so the catch operator has to be used. At the risk of scaring away any interested readers, here's an example of how it's used:

{
  :foo → 1
  io:println "this will never be printed"
} catch 1 2 ⍴ :foo λ{ io:println "caught exception with data: " , ⍕⍺ }

Custom syntax

One of the reasons APL is an attractive language is that at its core, the language itself is very simple. It consists of a limited number of primitives on top of which the rest of the language is built. This is similar to another favourite language of mine: Lisp. A lot of the extensions that KAP provides on top of APL are inspired by Lisp.

Lisp allows the developer to extend the syntax using macros. For the reader without experience in Lisp, the one-line summary is that it allows you to treat the code as data, and rewrite the code while it's being compiled. This allows you to completely change the behaviour of the language to suit the problem at hand.

KAP does not implement a full macro system, although this has been considered and may yet happen at some point in the future. Instead, the language provides a keyword, defsyntax, that gives instructions to the parser to interpret the code in a different way. The parsed components are then passed to a function that is called at runtime.

As was mentioned earlier, the power operator can serve as a foundation for various control structures. As it already implements a repeat operation, let's use defsyntax to create such a facility:

defsyntax repeat (:value count :nfunction clause) {
  (⍞clause ⍣ count) 0
}

Once the above has been run, we can print “test” 5 times:

repeat (5) {
  io:println "test"
}

I was recently listening to the latest episode of Array Cast. I enjoy listening to this show as it gives me some insight into the array programming community and also reassures me that I'm not the only one who finds these things interesting.

While I often have comments about what is being said on this show, this particular episode had me commenting on a lot that was said and I found myself disagreeing with many of the major points made during the episode. For this reason, I decided to write them down in the form of a blog post in an attempt to more clearly formulate what I believe are incorrect assumptions made in the episode.

With the exception of the first point, which is just a correction, the statements made here are my personal opinions and I don't want to make it seem as though I'm claiming everybody else is outright wrong.

That point about election prediction

In the episode, a claim was made that the first computer-based election predictions were made using APL. As CBS was mentioned, it seems obvious that this indeed refers to the 1952 presidential election in the US. During this election, most people were predicting a close call while the software predicted a landslide. This was so off from expectations that CBS didn't want to air the prediction. It wasn't until the actual results started to come in that they admitted that the computer had actually predicted this result.

The computer used was a Univac I. This was before the first high-level programming languages had appeared. APL wasn't created until 1966.

It's possible that APL has been used for election predictions, and it would be interesting to see something about that, but I have not been able to find any information about this.

Debugging code

There was a discussion around debugging complex code, with an implied argument that “Whitney style” C code is hard to debug. The following statement was made:

And then another time the guy said, “well, how're we going to debug it?” he said. And he said, “oh, if I have to debug this…” — and I never understood that until much later — “oh, you really want me to? You want this thing to be bug free? Well, then I'll have to spend a couple of months reading the code.” Could you imagine what? Yeah, in other words. Think about it. Usually, you know, you'll run something, and you test it and, you know. And if it breaks, you know where to go or whatever. But for Arthur, the way his mind works is if you really want to debug the thing, he had to go and read the code very, very carefully. Not execute the code. Read the code.

Later, the following is said:

Yeah, well, I'm actually going back to what you were talking about, Joel, with the understanding of the programming and have having to read it to say I could debug it. That actually reminds me of something that Aaron Hsu had said. Or that I've heard Aaron Hsu talk about in his Co-dfns compiler, [23] is that he finds now that he understands it so tightly, he can look at it and make very quick changes to it because he's almost internalized the process so tightly that he can see exactly what he needs to do, and he doesn't have all these side effects right rippling through. It's a level of understanding of a program that I don't think most people who write, or program actually get to because they do get into that loop of “I'll try this. Oh, it didn't work. I'll try this. It didn't work.”

In my opinion, none of the above justifies the use of “Whitney style” C code. In fact, it's a vacuous statement. If you have no tools to debug or otherwise analyse a piece of code, of course you are going to have to spend a lot of time reading and understanding the code. You really have no choice.

That's not to say that you shouldn't understand the code you're working on. That goes without saying. The method suggested at the end of the second quote is bad practice no matter what code you are working on. Whenever one makes a change to a codebase, one has to understand what that change does and how it affects the rest of the system.

When discussing “Whitney style” code, this is what I'm referring to. The below is a snippet from the J interpreter:

static A jtsprarg(J jt,I f,A x){A q;B*b,c;I r;P*xp;
 r=AR(x); xp=PAV(x);
 if(SPARSE&AT(x)){c=1; RZ(b=bfi(r,SPA(xp,a),1)); DO(f, if(!b[i]){c=0; break;});}
 else{c=0; GATV(q,B01,r,1,0); b=BAV(q); memset(b,C0,r);}
 R c||!r?x:reaxis(ifb(r,b),x);
}

It should be said that this is an older version of the interpreter. The later versions have added a lot of comments that explain what's going on. This is great, and my understanding is that we have Henry Rich to thank for that.

The purpose of source code is to be read and understood. A lot more time is spent reading and analysing source code than writing it. For this reason, it's important that code is clear and easy to understand.

There are certainly a lot of people who are capable of understanding this. It can even be argued that it's not that hard to figure out. You have to learn what the macros do, and you have to be aware of what the single-character names mean.

But even setting that aside, running this code in a debugger is terrible. The macros make things particularly problematic, and setting breakpoints when you have multiple statements on a single line is just a hassle. But the biggest issue with all of this is getting people up to speed on the codebase. There is value in following established principles.

Even more importantly, there is no value in compressing the code by removing every single whitespace and using single-character variable names. Hard drives are large enough these days to be able to store source code with long names.

I fully agree that Whitney is a very smart person. He's done some very impressive things. However, not everything he came up with is great.

My argument is that J was successful in spite of its code style, not thanks to it.

I also believe that APL suffers from this to a limited extent. However, it's offset by the fact that the language itself is designed to handle it. This is similar to maths notation which is concise and used by people all over the world. However, writing maths in ASCII is unbearable, which is proven by looking at any mathematician writing C code.

This relates to what was said in the episode: notation matters. APL notation is designed to be written in a very terse form. C was not. Just because you can write highly packed code in C doesn't necessarily mean that you should do that.

Performance of basic functions

From the episode:

Arthur's design goal was that the slowest primitive to the fastest primitive have to live within one order of magnitude in speed.

If taken literally, very few, if any, languages can obey this principle. The difference between addition and exponentiation is already more than one order of magnitude.

After listening to the discussion around this topic, I have spent a lot of time thinking about what it really means. And I have come to the conclusion that the statement itself is meaningless. It's clearly heavily dependent on what the term “primitive function” means. I'm guessing that in the case of K, it's the functions with a single character name. As such, it's a completely arbitrary choice since there is no rule that says that single character names have to behave any differently than any other function.

Even when assuming there is a natural definition of a primitive function, it's not clear why it's important. In the podcast some examples were brought up where in some specific language certain functions were parallelised and incredibly fast while others were significantly slower because they did not have the same optimisations. As best as I can tell, we have two different issues here:

  • The case where + is fast but - is slower by a factor of just 1.5. This would be bad.
  • The case where + is fast, and the built-in implementation of a Monte-Carlo simulation is slow. This would be expected.

With this in mind, I'd use the following statement which is less impactful but perhaps a bit more correct:

“No primitive operations should be unexpectedly slow”

With “primitive” being used very loosely to refer to common functions used in day-to-day programming.

This is one of those occasions where I can point to a specific feature in Dyalog APL, and use that to discuss some feature of KAP.

This time I'd like to mention the At operator in Dyalog. It can be used to update a subset of an array by a function, where the values to be updated are selected using a binary mask (the same operator can also be used to select cells by coordinate, but that's outside the scope of this post).

Let's say you have an array with two columns and some number of rows:

┃"abc" 1┃
┃"def" 2┃
┃"ghi" 3┃
┃"jkl" 4┃

You now wish to add 1 to the values in the second column. In order to do this, we create a boolean mask that is true in all cells that we want to update. Since this is a simple case of a two-column array, the simplest solution is probably:

    (⍴foo) ⍴ 0 1
┃0 1┃
┃0 1┃
┃0 1┃
┃0 1┃

In Dyalog APL, the way to use the At operator to achieve this is as follows:

    {⍵+1}@{(⍴⍵) ⍴ 0 1} foo
↓ ┌→──┐   │
│ │abc│ 2 │
│ └───┘   │
│ ┌→──┐   │
│ │def│ 3 │
│ └───┘   │
│ ┌→──┐   │
│ │ghi│ 4 │
│ └───┘   │
│ ┌→──┐   │
│ │jkl│ 5 │
│ └───┘   │

As mentioned above, the At operator has several different variations, and the one used here is the type where both the left and right arguments are functions. The right function accepts the array as an argument and returns the boolean mask that selects which values to operate on. The left argument is a function that is evaluated for each selected value.

KAP currently does not implement the At operator. That doesn't mean that it's impossible to do selection by mask. We just have to take advantage of two different features: the Case function and lazy evaluation.

Lazy evaluation was described in the previous post, so let's have a look at the Case function first.

Case function

The Case function (%) was inspired by Dzaima/APL. It accepts an array of indices on the left and a list of arrays on the right, each with the same shape as the index array, and for each cell in the index array it picks the corresponding value from one of the subarrays.

An example will hopefully make this more clear:

    0 1 1 2 % "abcd" "1234" "defg"

The behaviour of this function should be reasonably clear. The first cell in the left argument is 0, so it picks the first element of the first array in the right argument. The second cell is 1, so the second value is picked from the second array, and so on.

If the left argument is a boolean array, then that is merely a specialisation of the general case described above where there are only two arrays in the right argument.
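The picking logic can be modelled in a few lines of Python (an illustrative sketch; the function name is my own):

```python
def case(indices, *arrays):
    # For each position, pick the element from the array chosen by the index.
    return [arrays[idx][i] for i, idx in enumerate(indices)]

case([0, 1, 1, 2], "abcd", "1234", "defg")   # → ['a', '2', '3', 'g']
```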

Selection by mask in KAP

Based on the description above, it should be obvious how the selection part can be achieved using Case. However, we also want conditional execution. This actually happens automatically thanks to the fact that lazy evaluation is available. Here is the solution in KAP:

    ((⍴foo) ⍴ 0 1) % foo ({⍵+1}¨ foo)
┃"abc" 2┃
┃"def" 3┃
┃"ghi" 4┃
┃"jkl" 5┃

The reason this works is because the function {⍵+1} is only evaluated for the cells that are actually used in the result. If this wasn't the case, the above would raise an error when an attempt is made to add 1 to a string.
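A rough Python analogue of the same idea: the conditional expression only evaluates its chosen arm, so the increment never runs on the strings, just as the unused cells in the KAP version are never computed (the variable names are my own):

```python
foo = [["abc", 1], ["def", 2], ["ghi", 3], ["jkl", 4]]
mask = [0, 1]  # one entry per column: update only the second column
# The "cell + 1" arm is only evaluated where the mask selects it,
# so the strings in the first column are never incremented.
result = [[cell + 1 if m else cell for m, cell in zip(mask, row)]
          for row in foo]
# result == [['abc', 2], ['def', 3], ['ghi', 4], ['jkl', 5]]
```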

The above was an example of cases where lazy evaluation isn't just an optimisation, but is a central feature required to perform certain operations.



KAP is built around the observation that a lot of straightforward APL code ends up expressing the solution to a problem in a very clear and concise way, but has the drawback of performing excessive computation.

Let's say you have a number of products, for example mobile applications, made by a wide range of authors. Each application has a rating between 0 and 5. For the purpose of this example, we only care about the ratings of the applications and nothing else, so we can represent this information using a simple 2-dimensional matrix. Each row represents an application, with the first column being the name of the developer and the second the score.

    applist ← ("John" "Lars" "Peter" "John" "John" "Lars" "Lars") ,[0.5] 3 4 1 5 0 3 2
┃ "John" 3┃
┃ "Lars" 4┃
┃"Peter" 1┃
┃ "John" 5┃
┃ "John" 0┃
┃ "Lars" 3┃
┃ "Lars" 2┃

You are now tasked to find a developer who has had no applications rated 5 (perhaps the purpose is to sell them developer training). For the example above, that would be Lars.

A straightforward way to think about this problem is to break it up into three separate steps:

  • Collect the scores for each developer
  • Filter the results to remove all developers with applications with a score of 5
  • Return the first result

In APL (using Dyalog for this example), this can be achieved as follows:

names ← applist[;0]
scores ← applist[;1]
⊃ {⍵[⍵[;1] ⍳ 1 ; 0]} names {⍺ , ~5∊⍵}⌸ scores

The above is a pretty much one-to-one mapping of the simple solution. It is definitely not the best solution, and the astute reader may note that this code checks whether all the developers had a score-5 application, even though we're only interested in the first one. In this particular case we're only calling Member, so it doesn't matter much from a performance perspective, but if the operation were much more complicated, this could result in a rather substantial amount of data being calculated which is immediately thrown away.

A detour: Implementing Key in KAP

At the time of this writing, KAP does not implement the Dyalog Key operator. The reason for this is that the precise syntax to use hasn't been decided, but we can easily implement it ourselves:

∇ (keys) key (values) {
	a ← ∪keys
	a ,[0.5] {values ⫽⍨ keys≡¨⊂⍵}¨ a
}

This function implements a variant of the dyadic form of Key which has the following properties: Given a vector of keys on the left, group the corresponding values on the right and return a matrix with two columns where the first column is the key and the second is a vector of all values mapped to that key.

Thus, the following:

1 2 1 1 key "foo" "bar" "abc" "def"


┃1 ┏━━━━━━━━━━━━━━━━━┓┃
┃  ┃"foo" "abc" "def"┃┃
┃  ┗━━━━━━━━━━━━━━━━━┛┃
┃2             ┏━━━━━┓┃
┃              ┃"bar"┃┃
┃              ┗━━━━━┛┃

The way this implementation works should be reasonably simple to understand. The variable a is assigned a vector of the unique keys; then, for each unique key, all values mapped to that key are selected and collected into a vector. Finally, the list of keys is concatenated with the results.
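
For comparison, a rough Python analogue of this key function (the name and the list-of-pairs representation are just for illustration) could look like this:

```python
def key(keys, values):
    # Unique keys in order of first appearance (mirrors ∪keys)
    uniq = list(dict.fromkeys(keys))
    # For each unique key, gather all values mapped to that key
    return [(k, [v for kk, v in zip(keys, values) if kk == k]) for k in uniq]

key([1, 2, 1, 1], ["foo", "bar", "abc", "def"])
# → [(1, ['foo', 'abc', 'def']), (2, ['bar'])]
```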

Porting the original problem to KAP

With this new function in our toolbox, we can now create a version of the previous program that works in KAP:

⊃ {⍵[1 ⍳⍨ ~5 ∊¨ ⍵[;1] ; 0]} names key scores

At this point, we have two very similar programs in APL and KAP. The method used may not be the best for either language, but it's a very simple implementation of the original idea.

The effect of lazy evaluation

In KAP, the function io:println is used to print its argument, followed by a newline. The function then returns its argument. Thus the following will print all the values in scores while computing its average:

    (+/÷≢) io:println¨ scores


The above is not surprising, but let's do the same thing to the previous example:

    ⊃ {⍵[1 ⍳⍨ ~5 ∊¨ ⍵[;1] ; 0]} names key io:println¨ scores


Note how it only printed 5 values from the scores vector. What happened to the other two values?

Whenever possible, KAP will avoid performing any computations until the result of that computation is needed. In the case above, io:println¨ scores does not evaluate the function, but instead returns an object that will at some point in the future return the value you need. If that value was never requested, the function is never called.
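
A similar effect can be imitated in Python with a generator, which also defers work until a value is actually consumed (println here is a hypothetical stand-in for io:println, and the consumption pattern is simplified compared to the KAP example):

```python
printed = []

def println(x):
    printed.append(x)  # record what would have been printed
    print(x)
    return x

scores = [3, 4, 1, 5, 0, 3, 2]

# A generator expression is lazy: nothing is printed yet
lazy = (println(s) for s in scores)

# Consuming elements only until the first 5 is found evaluates
# four of them; the remaining three are never printed
next(s for s in lazy if s == 5)

printed  # → [3, 4, 1, 5]
```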

There are some exceptions to the rule above, the main one being that variable assignment forces what is referred to as “collapse”. The collapse operation ensures that all results are computed. Thus, the following prints all 7 scores:

    ⊃ {⍵[1 ⍳⍨ ~5 ∊¨ ⍵[;1] ; 0]} names key a←io:println¨ scores


It is also possible to use the function comp (for “compress”) to force evaluation.

Benefits and drawbacks of lazy evaluation

The benefits in cases like the above should be obvious. The simpler approach to solving a problem will often result in fewer computations than the eager APL solution.

The drawback is that evaluation is more complicated, which precludes KAP from implementing the low-level optimisations that Dyalog and BQN are able to perform. Those implementations have highly optimised code paths for vector calculations on very large arrays of numbers. Doing the same in KAP would be difficult, as the implementation would need to determine, at runtime, whether it is more beneficial to perform an optimised full-array operation that can use the CPU's vector instructions, or to delay the computation in case not all results will be needed.

I can be reached on the Fediverse at @loke@functional.cafe

It will chew you up and spit you out, and you'll not even complain.

I often start new projects at the beginning of the year, around December/January. This is not because I particularly plan it that way, but more likely because the holidays give me time to actually experiment with new ideas that I've spent the previous year thinking about.

Around the time of the start of the pandemic (January 2020) I started working on designing my own programming language. It seemed like a great idea at the time, since I wanted to explore some language ideas I felt had never really been tried before, and it should be easy enough to build a proof-of-concept before I went back to more rewarding projects.

Someone should have told me that once you design a programming language you fall down a rabbit hole so deep you can't even see the light when you look up anymore.

Programming language design is at once the most interesting technical challenge you can take on and, for the most part, an utter waste of time.

My own project has exactly 1 user (myself). There are a handful of other people who have an academic interest in what I do, but they are also language designers, working on their own related languages.

Even though you are aware of the fact that very few people, if any, will benefit from your work, it is impossible to not keep thinking about ways to improve your project. You keep adding new features that are really cool, and eventually everything programming-related you think about gets focused on how that particular topic can be applied to your language.

It's also difficult to go back to working on other languages. You end up thinking about how certain features or ideas could be applied to your language, and as soon as you have some spare time, it's back to the drawing board to implement these ideas in your project.

You are fully aware that your work is almost guaranteed to be of little to no value, but once you've fallen down the rabbit hole, you're stuck. Even when working on something else, it's impossible to stop thinking about all the small things you want to add to your project. I think this is the most devious part of language design. It covers so many different areas that you are never “done”:

  • Compiler techniques
  • Code optimisation
  • Maths
  • GUI programming (your language needs an IDE, right?)
  • Graphics programming
  • Operating system integration (standard library implementation)
  • Mobile development (programming on mobile devices is a thing these days)
  • Networking (again, needed for the standard library)
  • Writing (your language needs documentation)

In fact, I can't think of another programming project that literally touches on every aspect of software engineering. This is the main reason designing a programming language is a project that you can never finish.

It also affects your communication with the rest of the programming community. I'm sure you've all seen that person: the one who can't help answering programming questions with an explanation of how it's done in their own language, who discusses software optimisation techniques in terms of how they implemented them in their own compiler.

I have to bite my tongue every time, and I still can't stop myself completely.

This is the destiny of most people who design programming languages. A few get lucky and stumble across something other people start to use.

That said, I'm not entirely sure they are lucky. Getting actual users is probably a recipe for never being able to leave the project. Perhaps I should be happy that I am the only user of my language.


When mentioning APL to someone who has never used it, the reaction is often along the lines of “isn't that the unreadable language with weird symbols?”.

The classic example of APL code is Conway's Game of Life which can be written as such:

{⊃1 ⍵ ∨.∧ 3 4 = +/ +⌿ ¯1 0 1 ∘.⊖ ¯1 0 1 ⌽¨ ⊂⍵}

This example suggests that the answer to the question above is yes, but there is also an implication in that statement that it is the symbols that make the language so different.

Another language in the same family as APL is J. This language uses many of the same principles as APL, and Game of Life in J looks like this:

n1dim=:3 +/\ (0,,&0)
neighbors=: -~ [: n1dim"1 n1dim
life=: [: (0.5 >: 3 |@- ]) -:+neighbors

For someone without knowledge of either APL or J, both of these examples are probably inscrutable. The point of the example above is to emphasise that it's not the symbols that make APL seem alien to a newcomer.

The question then becomes: Why does APL look the way it does?

To answer this question, we need to look into the history of the language and the requirements that led to its design.

The designer of both APL and J was Ken Iverson. He was a mathematician, and looked at problem solving from a mathematical perspective. APL was originally a notation for array manipulation, and it was only later that it was actually implemented as a programming language for computers.

Thus, to understand the APL notation, it's important to think about it as a concise way to describe mathematical operations, especially operations working on arrays of values.

The first step is to look at the syntax of maths, not programming languages. Let's take a simple example of summing all the elements of an array \(b\):

\[ \sum_i b_i \]

The above syntax is a bit imprecise and we are assuming that \(i\) will take on all the valid indexes in \(b\).

If we want to take the product of all the elements in \(b\), then we can replace the summation sign with the product symbol:

\[ \prod_i b_i \]

What if we want to create a chain of exponentiations? There is no dedicated symbol for this, so we need to write:

\[ b_0^{b_1^{b_2^{\cdot^{\cdot^{\cdot^{b_{n-1}}}}}}} \]

In the example above, we're assuming that the reader will understand that \(n\) represents the number of elements in the array.

The observation here is that these operations are all variations of the same underlying operation. In functional programming this is usually referred to as a reduction.
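
In Python terms, all three are instances of a single higher-order function. Note that APL's / folds from the right, which is why the exponentiation chain comes out right-associative:

```python
from functools import reduce

b = [2, 3, 2]

reduce(lambda x, y: x + y, b)   # sum → 7
reduce(lambda x, y: x * y, b)   # product → 12

# A right fold, matching the right-associative exponentiation tower
# b[0] ** (b[1] ** b[2]) == 2 ** (3 ** 2)
reduce(lambda acc, x: x ** acc, reversed(b))  # → 512
```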

To perform the sum in APL, one would write the following:

+/b

The + is the operation to perform, and / is the reduction operator. The natural extension to the product is:

×/b

And for exponentiation:

*/b

The argument in favour of APL notation is that it's simpler and more concise than traditional mathematical notation. That's no small feat, as maths is already supposed to be both simple and concise. Not only that, the APL notation is free of the imprecision found in the mathematical examples discussed above.

The APL syntax also allows you to use any other function as part of a reduction. For example, the function ⌊ returns the smaller of its two arguments, so the expression ⌊/b can be used to find the smallest value in \(b\). The way to write this in mathematical notation is as follows:

\[ \min_i(b_i) \]

It is assumed that \(\min\) is well known, since in APL we make the same assumption concerning ⌊.

The reason why APL syntax looks the way it does, is because it's a mathematical notation that happens to be suitable for evaluation by a computer, rather than a set of instructions for a computer to perform.

Common Lisp is one of the few languages which is both dynamic and also gives you a full native compiler and the ability to declare types so that the optimiser can provide more efficient code.

The following examples are with full optimisation enabled and safety flags off, just to get rid of the explicit type checks in the generated code.

Let's take a small function that simply adds 100 to its argument:

(defun foo (x)
  (+ x 100))

After defining the function (no need to run a separate compiler or anything like that, this is all in the REPL) I can see the generated machine code by calling (disassemble #'foo)

CL-USER> (disassemble #'foo)
; disassembly for FOO
; Size: 18 bytes. Origin: #x52BF4B06                          ; FOO
; 06:       BFC8000000       MOV EDI, 200
; 0B:       FF142500010052   CALL QWORD PTR [#x52000100]      ; GENERIC-+
; 12:       488BE5           MOV RSP, RBP
; 15:       F8               CLC
; 16:       5D               POP RBP
; 17:       C3               RET

OK, that's a bit much, and notice how it calls a library function to perform the actual addition. This is because Common Lisp supports things like rational numbers and bignums, and since there are no declarations the compiler has to assume that the argument can be any type of number.
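
Python's + behaves much like that generic entry point: it dispatches on the runtime types of its operands, which is roughly what GENERIC-+ has to handle (the sketch below is illustrative, not SBCL's actual implementation):

```python
from fractions import Fraction

def generic_add(a, b):
    # Python's + dispatches dynamically on the operand types,
    # much as GENERIC-+ must in the absence of declarations
    return a + b

generic_add(1, 2)                            # machine-word integers
generic_add(2**100, 1)                       # arbitrary-precision "bignums"
generic_add(Fraction(1, 3), Fraction(1, 6))  # exact rationals
```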

Let's help the compiler a bit by telling it that the argument is in fact a FIXNUM (Lisp terminology for an integer that fits in a register):

(defun foo (x)
  (declare (type fixnum x))
  (+ x 100))

This is what the disassembly looks like now:

CL-USER> (disassemble #'foo)
; disassembly for FOO
; Size: 25 bytes. Origin: #x52BBBDC9                          ; FOO
; C9:       4883C264         ADD RDX, 100
; CD:       48D1E2           SHL RDX, 1
; D0:       710A             JNO L0
; D2:       48D1DA           RCR RDX, 1
; D5:       FF142560010052   CALL QWORD PTR [#x52000160]      ; ALLOC-SIGNED-BIGNUM-IN-RDX
; DC: L0:   488BE5           MOV RSP, RBP
; DF:       F8               CLC
; E0:       5D               POP RBP
; E1:       C3               RET

Interesting. It now does a regular machine addition, but there is more going on here. What really happens is that even though we know the input will fit in a FIXNUM, we don't know if the result will. After all, the input could have been right at the maximum value for a FIXNUM. The code checks for an overflow, and if that happens, it allocates a bignum and returns it.
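
That logic corresponds roughly to the following sketch (assuming, purely for illustration, 63-bit fixnums, i.e. one tag bit in a 64-bit word; the exact width is implementation-specific):

```python
FIXNUM_MAX = 2**62 - 1  # assumed upper bound, for illustration only

def add_100(x):
    result = x + 100
    if result > FIXNUM_MAX:
        # Overflow: corresponds to the ALLOC-SIGNED-BIGNUM-IN-RDX call
        return ("bignum", result)
    return ("fixnum", result)

add_100(7)           # → ("fixnum", 107)
add_100(FIXNUM_MAX)  # → ("bignum", 2**62 + 99)
```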

Let's change the declaration to say that the input is simply an integer between 0 and 10000000. This way the compiler can be sure the result will fit in a FIXNUM:

(defun foo (x)
  (declare (type (integer 0 10000000) x))
  (+ x 100))

And now let's disassemble the code:

CL-USER> (disassemble #'foo)
; disassembly for FOO
; Size: 13 bytes. Origin: #x52BFC186                          ; FOO
; 86:       4881C2C8000000   ADD RDX, 200
; 8D:       488BE5           MOV RSP, RBP
; 90:       F8               CLC
; 91:       5D               POP RBP
; 92:       C3               RET

Nice, now we can see that it simply performs a simple ADD instruction and returns the result.

For completeness' sake, I want to explain two things that may be confusing:

First of all, why is it adding 200 and not 100? This is because SBCL (which I am using for these examples) uses the lowest bits of a register for type tags. If the least significant bit is 0, then the remaining bits represent a FIXNUM. Since the value to add is a constant, the compiler simply adds 200 instead of emitting instructions to do bit shifting.
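
The tagging scheme can be sketched as follows (a simplified illustration of the idea, not SBCL's exact representation):

```python
def to_fixnum(n):
    # A fixnum n is stored shifted left by one bit; the low bit
    # being 0 serves as the type tag
    return n << 1

def from_fixnum(tagged):
    return tagged >> 1

# Adding the constant 100 to a tagged value is the same as adding
# 100 << 1 == 200 to the raw machine word, as seen in the disassembly
to_fixnum(7) + 200 == to_fixnum(7 + 100)  # → True
```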

The second question may be why there is a CLC instruction there. This is because Common Lisp supports multiple return values (a function can return any number of values, although returning a single one is the typical case). If the carry flag is cleared after a function call, that means the function returned a single value, which is stored in RDX. If the carry flag is set, there are multiple return values. When returning multiple values, the primary return value is still in RDX, which means that a caller who doesn't care about the other values can simply ignore the carry flag. It's pretty clever if you ask me.

What's really cool about all of this is that I can do these things without ever restarting my Lisp runtime. I get the benefit of a high level language where I can work with the code without having to restart anything, but I can still take advantage of a native compiler and really good performance.

There are a lot of declarations one can apply. Not just type declarations, but several others that can be used to control the compiler. All the details can be found in the reference: http://clhs.lisp.se/Body/s_declar.htm#declare

I can be reached on the Fediverse @loke@functional.cafe