Corpse

This blog is written in Retro and has served as my primary means of posting things concerning Retro since 2010. The core code for Corpse is included in the Retro releases and can be freely studied and deployed.

The most recent posts are shown below. You can also view a list of all posts.

Post 223 of 223

2014-03-20

Discontinuing Corpse

This blog has run its course. Since October 2010 I have posted over 200 items to it, but have recently decided to move on. I'll still be posting articles related to Retro and Forth, but these will be done on my personal site, rather than here.

The code for Corpse will continue to be available in the releases and repositories for Retro, but I will no longer be providing support for new deployments. If you want to run this, you'll need to look at the existing documents and sources for help.


Post 222 of 223

2013-12-29

On meta compilation in Retro

On a few occasions over the last several years I’ve been asked to explain the meta compiler in Retro. I did a commentary on the source once, but despite this and various discussions via email and IRC, I’ve never taken the time to cover it in a single place. So this post will hopefully explain the general concepts, and how it works overall.

Chickens and Eggs

Retro, as a meta compiled, image based system, has an interesting problem: how do you create a new image? Currently the latest image can be used to rebuild itself, possibly with some changes made. So we build, test, debug, commit the image and sources to the repository and repeat. It’s not quite possible to build the next image from the prior stable image since changes arise over time. But if you follow the repository history back, you can track the changes to the image over time.

Before we began meta compilation, we built the image using a cross compiler. There’s a subtle difference here: a meta compiler is written in the host language, while a cross compiler is written in another language. So let’s go back.

Between the end of Retro 9 and the creation of Retro 10, I had two other experiments: a stack based language called Toka and a little virtual machine called Maunga. These formed the base for what became Retro 10. The instruction set and memory model of the virtual machine became the original Ngaro. And Toka, while proving to be a dead end, provided a language suitable for implementing a cross compiler.

So I created a little assembler for the Ngaro byte codes, and gradually built a library of routines suitable for hosting a small Forth dialect. Eventually I wrote a simple compiler and interpreter over these routines and the first Retro 10 image was born. This continued for a few releases, until 10.3.

I wanted to drop the need for Toka. It worked, but it meant maintaining an extra language and source tree just to build new images. And it wasn’t as portable as I had hoped: I frequently had to make tweaks for Windows, BeOS, and OS X, and it was annoying. The obvious choice was to start meta compiling. And it wasn’t too difficult. The original routines from Toka provided an assembler and some functions for creating an initial dictionary. I recoded both of these in Retro, and ever since we’ve meta compiled new images.

How it works

The essential bits are:

  • Set up a memory area for the new image
  • Define functions that map to Ngaro instructions
  • Define functions to build an initial dictionary
  • Write some relocation routines

With these parts (meta.rx) and the initial kernel (core.rx), we can build a new image in place of the original one. Over time meta.rx grew richer, and it now provides a machine forth dialect.
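
To make the roles of these pieces concrete, here is a rough Python sketch of the first three parts. The real meta.rx is written in Retro, so every name and opcode number below is illustrative rather than actual code; the relocation step is sketched a little further down.

# Illustrative sketch only; the real code is meta.rx, written in Retro.
OP_LIT, OP_CALL, OP_RETURN = 1, 7, 12   # placeholder opcode values

TARGET_SIZE = 128 * 1024
target = [0] * TARGET_SIZE              # 1. memory area for the new image
here = 0                                # next free cell in the target

def comma(value):
    # lay down one cell in the target image and advance
    global here
    target[here] = value
    here += 1

# 2. one small function per Ngaro instruction, so the kernel can be
#    written as calls that emit code into the target image
def lit(n):
    comma(OP_LIT); comma(n)

def call(address):
    comma(OP_CALL); comma(address)

def ret():
    comma(OP_RETURN)

# 3. helpers that lay down dictionary headers, giving the new image a
#    linked list of named words to start from
last = 0
def header(name, xt):
    global last
    entry = here
    comma(last)                         # link to the previous header
    comma(xt)                           # address of the word's code
    for ch in name:                     # inline the name, one cell per character
        comma(ord(ch))
    comma(0)                            # name terminator
    last = entry

# 4. relocation routines: see the sketch after the next paragraph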

Everything in core.rx gets compiled into the dedicated memory area. The initial dictionary is created, and then the magic happens: the new memory area is copied over the original one, and one of two things follows. Either the new image (the core functionality only) is saved in place of the original retroImage and the meta compiler exits, or a jump to the starting address is performed. If a jump is performed, execution continues with the new image in control.
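
The final step, again as an illustrative Python sketch continuing the one above (not the actual Retro code; the cell packing and parameter names are assumptions), looks roughly like this:

# Illustrative only; the real logic lives in meta.rx and core.rx.
import struct

def finish(memory, interpreter, save=True, start_address=0):
    # copy the freshly built target image over the running one
    memory[:here] = target[:here]
    if save:
        # save the core-only image in place of the original retroImage;
        # the meta compiler then exits
        with open('retroImage', 'wb') as f:
            f.write(struct.pack('<{}i'.format(here), *target[:here]))
    else:
        # jump to the starting address: execution continues with the
        # new image in control
        interpreter(memory, start_address)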

Generally, in either case, stage2.rx will then be loaded, compiled, and saved into the new image. Once this is done, a new, fully functional image is complete.

Concluding Thoughts

This isn’t really detailed coverage of the meta compiler. As mentioned earlier, there is a commented source file that covers the entire process.

In the next couple of months I’m hoping to simplify the meta compiler code and core.rx, making things somewhat smaller and more modular. As I do this I’ll provide updates and more detailed source comments to help keep things fully documented from the outset.

Please contact me with any additional questions and I’ll try to better explain anything that isn’t clear.


Post 221 of 223

2013-11-30

Parable with I/O

I have uploaded a variant of the pre runtime for Parable with support for basic console output and limited file I/O. This can be found at http://sprunge.us/iTjS. A small test script which reads the first line of /etc/timezone is also provided.

With this, Parable becomes much closer to being useful for actual development. I've also implemented an interface with support for drawing basic shapes and reading finger touch locations under iOS (via Pythonista). So work on samples with I/O functionality is proceeding nicely.

I'm hoping to extend this to offer Retro-compatible file I/O functionality in the next week or two. After this is done, I'll consider implementing support for other things.

This is made possible through use of the byte code extensions introduced earlier this month, which means that I can keep my core language separate from the i/o model, and therefore be more flexible in how I proceed with things. So far I'm finding this to be beneficial.


Post 220 of 223

2013-11-06

Extending Parable's VM

In the most recent updates to Parable's Python implementation I have modified the bytecode interpreter to allow for custom extensions without requiring parable.py to be altered. This opens up the possibility of supporting I/O on a per-application basis.

To make use of this, you will need to define a function that processes your extended opcodes. E.g., something like:

def opcodes(slice, offset, opcode):
    # called by the interpreter for any opcode it does not recognize
    if opcode == 1000:
        # display the value on the top of the stack, then drop it
        display_value()
        stack_pop()
    elif opcode == 1001:
        # shut down the host application
        exit()

    # return the (possibly updated) offset so interpretation continues
    return offset

And then modify calls to interpret to pass this new function as an argument:

interpret(compile(src, request_slice()), opcodes)

If a bytecode is not recognized as one of the defaults, it will be passed to the provided function for further processing. You can then map the new bytecodes to functions with the ` prefix as normal.
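
As a quick end-to-end sketch (the import names are an assumption about parable.py's layout, and the # prefix for numbers is assumed to be available in this build), invoking one of the new opcodes from Parable source might look like:

# Sketch only: the import names and the # number prefix are assumptions.
from parable import compile, interpret, request_slice

src = "#100 `1000"    # push 100, then invoke custom opcode 1000 to display it
interpret(compile(src, request_slice()), opcodes)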

With this I think it'll be possible to do interesting things in a much cleaner fashion. Things like the fork with turtle graphics can be done with no changes to the core codebase, so keeping everything in sync should be much easier now.

This change will be ported to the PHP implementation in the near future.