At WWDC 2015, Apple announced some substantial updates to the Swift programming language for version 2.0. Since its original announcement, Swift has been evolving rapidly: version 1.2 of the language was released in April 2015, and by June we had the first preview of 2.0.
I decided to blog about some of the changes to the language that really caught my attention. The one that I like the most is the “defer” keyword. In .NET (and other languages), we have the concept of try/finally. Sometimes you include a catch in there, but it isn’t required. The idea is that after the code in the try portion executes, the code in the finally block is guaranteed to execute. That’s the perfect place to make sure that you’ve disposed of objects or otherwise cleaned up after yourself. Pretty much every tutorial I’ve ever seen in .NET has something like this:
```csharp
SqlConnection connection = null;
try
{
    connection = new SqlConnection(connectionString);
    connection.Open();
    // Do some stuff
}
finally
{
    connection.Dispose();
}
```
Sharp .NET developers might point out that a “using” statement accomplishes essentially the same thing for anything that implements IDisposable, but I’m just trying to demonstrate a point 😉
There are a few problems with this try/finally style of coding, however. What tends to happen is that developers wrap large blocks of code in try blocks, and by the time you get to the finally block you don’t know what actually exists. Maybe you returned early from the method, or maybe an exception was thrown before an object was ever created. Now you’ve got to litter your entire finally block with “if this exists, set it to null” or “if this is open, close it” kind of checks. That’s where “defer” comes in. Let’s take a look at some code that I ran in an Xcode 7 Beta 2 Playground:
```swift
class Foo {
    var Bar: String = ""
    var Baz: Int = 0
}

func showOffDefer() {
    var originalFoo: Foo?

    defer {
        print("Bar was \(originalFoo!.Bar)")
        originalFoo = nil
        print("Now it is \(originalFoo?.Bar)")
    }

    originalFoo = Foo()
    originalFoo!.Bar = "Lorem Ipsum"
    originalFoo!.Baz = 7

    print("We are doing other work")
}

showOffDefer()
```
Remember, the defer block isn’t called until the function is exited. So, what is written to the console is:
```
We are doing other work
Bar was Lorem Ipsum
Now it is nil
```
Do you see the power in that? Now, after I declare an object, I can write a deferred block that is guaranteed to execute when the function exits. That exit can come from any number of early return statements, from a thrown error, or from the function simply running out of code and returning automatically (like mine here). Defer blocks are also managed like a stack, so the most recently registered defer blocks run first. Let’s see that in action:
```swift
class Foo {
    var Bar: String = ""
    var Baz: Int = 0
}

func showOffDefer() {
    var originalFoo: Foo?

    defer {
        print("Original Foo's Bar was \(originalFoo!.Bar)")
        originalFoo = nil
        print("Now it is \(originalFoo?.Bar)")
    }

    originalFoo = Foo()
    originalFoo!.Bar = "Lorem Ipsum"
    originalFoo!.Baz = 7

    print("We are doing other work")

    var newFoo: Foo?

    defer {
        print("New Foo's Bar was \(newFoo!.Bar)")
        newFoo = nil
        print("Now it is \(newFoo?.Bar)")
    }

    newFoo = Foo()
    newFoo!.Bar = "Monkeys"
    newFoo!.Baz = 42

    print("We are doing even more work")
}

showOffDefer()
```
This gives us:
```
We are doing other work
We are doing even more work
New Foo's Bar was Monkeys
Now it is nil
Original Foo's Bar was Lorem Ipsum
Now it is nil
```
Hopefully, that example helps make some sense of it. So what did we get? First of all, we got our two print statements, “other work” and “even more work”, showing that the entirety of the function executes before any defer blocks are called. Then newFoo’s defer block executes first, and originalFoo’s block finishes last.
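The early-return case I mentioned earlier is worth seeing on its own. Here’s a minimal sketch (not from my Playground session, and written in current Swift syntax, which differs slightly from the 2.0 beta) showing that the same defer block fires no matter which return statement exits the function:

```swift
// A sketch showing defer firing on every exit path.
func describe(_ number: Int) -> String {
    defer { print("Cleaning up for \(number)") }

    if number < 0 {
        return "negative"   // the defer block still runs here...
    }
    return "non-negative"   // ...and here
}

// prints "Cleaning up for -3" before "negative" reaches the caller
print(describe(-3))
```

With try/finally you’d have to wrap the whole function body to get the same guarantee; here the one-line defer covers both exits.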
That seems pretty awesome to me. I realize that Swift isn’t breaking new ground here; the Go programming language already has essentially this same concept. That doesn’t mean that it isn’t a good idea.
Swift was created from the very beginning to be a “safe” language, and I think defer blocks further that goal in a few ways. First of all, they ensure that cleanup code gets executed. Secondly, they make sure that only appropriate code is executed: a defer block is only registered once its statement is actually reached, so it won’t try to clean up objects that were never created. Thirdly, they keep the cleanup right near the declaration, so readability and discoverability are improved. Why is that safe? If code is easy to read and understand, and the places for modification are well-known and well-understood, then the code is going to be higher quality.
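To make that “cleanup right near the declaration” point concrete, here’s a hedged sketch of my own (the function name and file handling are just illustration, not from any Apple example) pairing a resource acquisition with its cleanup on the very next line, using the C stdio functions Swift can call directly:

```swift
import Foundation

// Returns the length of the first line of a file (including its
// newline), or 0 if the file can't be opened or read.
func firstLineLength(path: String) -> Int {
    guard let file = fopen(path, "r") else {
        return 0 // nothing was opened, so there is nothing to clean up
    }
    // The cleanup sits right next to the fopen it matches.
    defer { fclose(file) }

    var buffer = [CChar](repeating: 0, count: 1024)
    guard fgets(&buffer, Int32(buffer.count), file) != nil else {
        return 0 // fclose still runs on this early return
    }
    return strlen(buffer)
}
```

Because the defer comes after the guard, the fclose only happens when there really is an open file, and a reader never has to scroll to the bottom of the function to check that the handle gets closed.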
I’m excited for what Swift 2.0 is bringing to the table.