Laurent Sansonetti on RubyMotion Internals
Yesterday I posted the first half of my interview with Laurent Sansonetti about RubyMotion, an implementation of Ruby that targets Apple’s iOS mobile platform. If you’re not very familiar with RubyMotion be sure to read that first. We had a chance to discuss RubyMotion basics: what it is, how the project started, and how writing a RubyMotion app differs from writing a standard Ruby app using MRI.
In this post, the second half of our interview, I had a chance to ask Laurent about the inner workings of RubyMotion: How does RubyMotion compile Ruby code? What does this mean, exactly? How does your code get transformed from Ruby into native machine language that your iPhone or iPad can understand? How does RubyMotion differ from MacRuby and Rubinius?
When I wrote Ruby Under a Microscope last year, I wasn’t able to include any information about RubyMotion since it’s not an open source project. Having the chance to talk with Laurent directly was a great way for me – and will be for you – to learn about RubyMotion internals. It’s a truly unique implementation of Ruby and is something all Ruby developers should know about even if they are not currently doing iOS development.
RubyMotion and LLVM
Q: I read on your web site somewhere that you’re using LLVM as the technology behind your compiler. I’m wondering how that works, at a high level. Do you compile the Ruby code into the LLVM IR instruction set?
Yes, we do that. The RubyMotion compiler is going to parse the Ruby source code into an AST, and here we actually use the Ruby 1.9 parser.
Q: You’ve taken the parse.y file from MRI?
Yes. We use the Ruby 1.9 parser and then get a tree of AST nodes for the grammar of the Ruby file. And then, we iterate over each of these and create the equivalent in the LLVM language.
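RubyMotion uses MRI's parse.y grammar directly, but you can get a feel for the tree of nodes a compiler like this walks using Ripper, the parser interface in Ruby's standard library, which is generated from the same grammar (an illustration, not RubyMotion's actual pipeline):

```ruby
require 'ripper'
require 'pp'

# Ripper.sexp parses Ruby source with the parse.y grammar and returns
# the AST as nested arrays (s-expressions); the root node is :program.
pp Ripper.sexp('puts "hello"')
```

A compiler walks a tree of this shape and emits the equivalent LLVM instructions for each node.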
LLVM can be explained as two things:
The first thing is a very abstract assembly language, which is processor-agnostic. It is supposed to be cross-platform, which is why it has a limited set of instructions. It's a language you can write yourself, or you can generate it using an API. LLVM provides a C++ API you can use to generate the instructions.
The second part of the LLVM project is a compiler for that language. It's a set of modules that you can use to translate these LLVM instructions, also called the IR or "intermediate representation," into assembly.
Q: So there’s no virtual machine? There’s no “VM” in “LLVM?” That’s one of the confusing things about LLVM, isn’t it?
Absolutely; there’s actually no VM – virtual machine – in LLVM. Sometimes people think it’s a replacement for the JVM. They say: “We need to port this language from JVM to LLVM.” But LLVM is just a compiler. There is no runtime there.
You can use LLVM in two ways – the first way is as a static, ahead-of-time compiler. You pass it the LLVM IR and it dumps a bitcode file. Bitcode is actually a binary representation of the language. And then from the bitcode you have low level tools you can use to compile to assembly instructions for a specific processor and architecture. You can target Intel 32-bit, 64-bit, ARM, PowerPC, etc. There are lots of processors supported by LLVM.
Q: …including the ones in iOS devices?
I think this is the most commonly used way of using LLVM. Then there is the way the Clang project uses LLVM. Clang is the new C-level compiler from Apple. It is supposed to replace gcc and it is using LLVM that way, like RubyMotion is.
The second way to use LLVM is as a JIT, or “Just In Time” compiler. This is what Rubinius is doing, as far as I know, and this is what MacRuby is doing.
Q: So how is that different?
The only difference is that LLVM has a C++ API to do the compilation process at runtime. You create the IR instructions and then you call this specific API and you get a pointer to machine code, that you can just call inside your program.
Q: So it’s the same compiler but you’re running it at a different time?
Exactly. You run it at runtime. In MacRuby this is the default mode. When you run "macruby foo.rb" it will parse the file, do a just-in-time compilation of the entire file, and then execute it at runtime.
I believe that Rubinius is doing the same thing; I may be mistaken because I don't really follow the Rubinius project much these days, but I think they use LLVM that way. Rubinius started using LLVM very early; I think they had a prototype at one point which was very slow. In MacRuby we tried LLVM a bit later and it was working for us. Then Evan Phoenix told me that they had actually managed to get it working too. At that time, LLVM was very immature; there were many bugs and it was very unstable.
Q: This is back in 2007 or 2008?
This was 2009. At that time, LLVM developers were saying that LLVM was a great just-in-time compiler (Apple was using it in the OpenGL stack, I think) and that programming languages should actually use it as a JIT. And that's what we did for MacRuby and Rubinius. There was also a project from Google that used it that way, an implementation of Python called Unladen Swallow.
It was clear after a few years that LLVM was not a good just-in-time compiler. It is very slow and also a bit unstable. Also, it’s very tedious to do proper exception handling. At the same time, LLVM was and is still a great platform to do static compilers, ahead-of-time compilers, and this is how we use it in RubyMotion.
Q: I think what you’ve done is amazing: to take a dynamic language like Ruby and compile it into machine language instructions is impressive. Would it be possible for you to show us how RubyMotion converts a simple Ruby method into LLVM IR instructions, and later into assembly language?
Sure – absolutely. If we start with a basic hello.rb file:
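A minimal hello.rb along the lines being described might look like this (a reconstructed stand-in, not the original listing): a file whose only work is building an interpolated string and sending the #puts message.

```ruby
# hello.rb: one interpolated string, one #puts message send
name = "RubyMotion"
puts "Hello, #{name}!"
```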
…the compiler will generate the following LLVM bitcode (instructions for the LLVM IR language):
Here, the interesting stuff is the construction of the interpolated string (the rb_str_new* calls) and the sending of the #puts message (the vm_dispatch call).
From that LLVM IR, the compiler will then generate assembly. Here is the i386 version (for the simulator):
Optimizing local variables and basic arithmetic
I think that everyone who implements Ruby really hates this method, Proc#binding. The best argument against it was probably made by Charles Nutter. This method is evil because it makes it impossible to optimize local variables. Here's an example:
Here it is impossible to know what the method returns. Normally it should return 3, but if the bar method is written like this:
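The original listings are not reproduced here, so this is a reconstruction of the kind of example being described (the method bodies are my guesses). foo looks like it should return 3, but because the proc's binding exposes foo's locals, bar can rewrite x behind the compiler's back:

```ruby
def bar(blk)
  # Evaluate "x = 42" in the binding captured by the proc:
  # this silently reassigns foo's local variable x.
  eval("x = 42", blk.binding)
end

def foo
  x = 1
  y = 2
  bar(proc {})  # even an empty proc captures foo's local variables
  x + y         # 44 under MRI, not the 3 you would expect
end
```

Because any callee that receives a proc could do this, a compiler can never safely keep x and y in registers; this is why RubyMotion raises on Proc#binding instead.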
Q: So it’s evaluating the “x=42” code in the context of that binding?
Yes, because the bar method evaluates code in the binding of the proc, it can change the values of the local variables. So in RubyMotion, if you try to use Proc#binding it will raise an exception saying we don't support that.
Thanks to that we can optimize local variables.
Q: I see. It’s interesting because what makes RubyMotion different from JRuby, Rubinius or other rubies is that you’re changing the language in these ways. JRuby and Rubinius try to keep the language identical to MRI, using the RubySpec specification.
This was also the same in MacRuby; we also supported Proc#binding. You're right, in RubyMotion I tried to take a different approach. Well, these methods are not really used. They are not used in Rails, as far as I know, and Rails is the biggest use case of Ruby features. It uses pretty much everything, but it doesn't use Proc#binding. Also, Proc#binding has security implications that have been raised recently.
I decided to remove it from RubyMotion, because performance matters and this allows us to optimize local variables. At the same time in RubyMotion you cannot redefine operators on the Fixnum class. You cannot open the Fixnum class and redefine plus to do something else.
Q: Because you wanted to allow the native code to do the adding?
Exactly, to compile arithmetic operations as fast as possible. We cannot afford to check if the Fixnum method has been redefined. In MacRuby we support all of this, but in RubyMotion I removed some features of Ruby that would actually slow down the process.
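For contrast, plain MRI allows exactly what RubyMotion forbids, which is why MRI must check on every addition whether the operator has been redefined. A sketch runnable under MRI (using Integer, since Fixnum was folded into Integer in Ruby 2.4):

```ruby
# Reopen the core integer class and redefine +.
class Integer
  alias_method :original_plus, :+

  def +(other)
    # "Off by one" addition: every + now adds an extra 1.
    original_plus(other).original_plus(1)
  end
end

sum = 2 + 2  # 5 under the redefinition, not 4

# Restore the original operator so later arithmetic behaves normally.
class Integer
  alias_method :+, :original_plus
end
```

RubyMotion rejects this kind of reopening for Fixnum operators, so it can compile + directly to a machine add with no redefinition check.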
Q: It seems like a really good tradeoff: you’ve simplified your technology, you’ve made the target application much faster, and you’ve only removed a few minor things that most people don’t use.
The only complaints we get are about “require.” They say we should have something like that, so we might actually do something in this regard.
How RubyMotion debugging works
Q: What is DWARF?
DWARF is a debugging format for C level languages. A DWARF file contains annotations for every address in the binary. For example, this is a function… in this area we save local variables of a certain type… this area contains a C++ class.
This is a debugging format that comes along with the binary. Debuggers such as GDB, as well as profilers, are able to load this format so they can actually know more about the binary.
Q: So you include that information in every binary? Or only when you have certain flags on?
It is only used for development, and also the DWARF file is not actually part of the binary; it is actually in a separate file.
So when you connect GDB to a RubyMotion application, you can actually see the file and line information for backtrace frames and you can use “next” to jump to the next Ruby line, which is really awesome! This is great for people who know how to use GDB, but most Ruby developers really have a hard time with GDB. One of our plans is to actually write a high level Ruby debugger on top of this.
Q: What’s the future of debugging in RubyMotion?
Right now it is very hard for RubyMotion developers to profile their applications, since you need to stick with GDB, MallocDebug, GuardMalloc, sample, and other very low-level tools. This is not easy. So we want to create some sort of abstraction on top of them.