While the vast majority of my blog posts are about electronics-related stuff, this post will be more on the software development side of technology. Besides my recently rediscovered love for electronics, I have a huge passion for software development and everything Apple. These two passions were the reason I started developing iPhone apps by studying the Objective-C programming language last year. And although my two finished apps, QuickHue and Bermuda, eventually made me some extra money to spend on my hobbies, I still see app development as a nice way to waste my spare time.
Recently Apple announced the new programming language Swift. And while my first reaction was: “Did I really do all that Objective-C studying for nothing?!”, I soon discovered the beauty of this new language: it saves you A LOT of typing and thus allows you to turn your ideas into working products in a fraction of the time.
Despite the fact that it’s still in beta, and I still need to master the new syntax, I already had a lot of fun trying out this new way of app development. One of my biggest experiments was to create my own gesture recognizer. In this post I’ll share the process and the concept behind it. Since there are already A LOT of blog posts comparing Objective-C and Swift, I won’t dive deep into Swift’s syntax. Be prepared for mostly mathematical nonsense.
The gesture recognizer
Before I dive into the process, let’s take a short moment to explain what a gesture recognizer is for, by taking a peek into Apple’s documentation:
Gesture recognizers convert low-level event handling code into higher-level actions. They are objects that you attach to a view, which allows the view to respond to actions the way a control does. Gesture recognizers interpret touches to determine whether they correspond to a specific gesture, such as a swipe, pinch, or rotation.
In other words, gesture recognizers allow you to convert the user’s input into usable information to interact with your application.
While in most cases the default gestures (tap, pinch, pan, swipe and rotate) will be sufficient, it can be useful to build your own gesture recognizer. This is done by subclassing UIGestureRecognizer.
Subclassing: A Subclass, "derived class", heir class, or child class is a modular, derivative class that inherits one or more language entities from one or more other classes (called superclasses, base classes, or parent classes).
The gesture I’ll build is a one-finger rotation recognizer. It behaves like the volume knob on an amplifier, and converts circular motion around a midpoint into a value usable by the application.
Subclassing in Swift
I once built my own gesture recognizer in Objective-C, and that taught me that I needed to import the UIKit/UIGestureRecognizerSubclass.h header in order to override the existing methods.
In Swift, this would be done by adding the following code:
import UIKit.UIGestureRecognizerSubclass
But although this didn’t give me any errors in the editor, and even code-completed all the overridable methods, the application would not compile. After filing a bug report, I needed to find an alternative solution. Fortunately I found one when I discovered the possibility of using a Bridging Header.
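For the record, the bridging-header workaround boils down to a single Objective-C import (the file name is whatever Xcode generated for your project, e.g. ProjectName-Bridging-Header.h):
// In the project's Objective-C bridging header:
// exposes the touch-handling methods and the writable `state` property
// that UIGestureRecognizer subclasses are allowed to use.
#import <UIKit/UIGestureRecognizerSubclass.h>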
The Mathematical Magic
The rotation of my gesture recognizer will be measured by calculating the change of angle relative to the center point of the gesture recognizer. A brief search in my memory of the math lessons I once had during high school (many, many moons ago) reminded me of the arctangent function to calculate the angle. Because the atan2 function returns a value between -π and π, I had to add 2π whenever the angle was lower than 0. This added up to the following function:
func angleForPoint(point:CGPoint) -> CGFloat {
    // Angle of `point` around midPoint, measured clockwise from 3 o'clock
    // and normalized to the range 0 ..< 2π.
    var angle = -atan2f(point.x - midPoint.x, point.y - midPoint.y) + π/2
    if (angle < 0) {
        angle += π*2
    }
    return angle
}
Note that π is not (yet) defined by default. But since Swift allows you to use any Unicode character as a variable name, it’s easy to assign the value of pi to π:
let π = Float(M_PI)
Now that I’m able to calculate the angle for a certain point, it’s pretty easy to calculate the angle difference between two points:
func angleBetween(pointA:CGPoint, andPointB pointB:CGPoint) -> CGFloat {
    return angleForPoint(pointA) - angleForPoint(pointB)
}
These two functions are the base of my Gesture Recognizer and will eventually do most of the magic.
Additionally, we want to know the distance of a touch relative to the center point. This distance will be used to check whether the touch is within the desired bounds of the gesture recognizer, and is calculated using the Pythagorean theorem, resulting in the following function:
func distanceBetween(pointA:CGPoint, andPointB pointB:CGPoint) -> CGFloat {
    let dx = pointA.x - pointB.x
    let dy = pointA.y - pointB.y
    return sqrtf(dx*dx + dy*dy)
}
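To make these numbers a little more concrete, here’s a small worked example of what the three helpers return (the midpoint and touch points are purely illustrative, not part of the actual code):
// Assume the recognizer's midPoint is (100, 100).
let threeOClock = CGPoint(x: 200, y: 100)   // directly to the right of the midpoint
let sixOClock = CGPoint(x: 100, y: 200)     // directly below it (y grows downwards in UIKit)

// angleForPoint(threeOClock)                         -> 0       (0° is at 3 o'clock)
// angleForPoint(sixOClock)                           -> π/2     (a quarter turn clockwise, i.e. 6 o'clock)
// angleBetween(sixOClock, andPointB: threeOClock)    -> π/2
// distanceBetween(threeOClock, andPointB: sixOClock) -> ~141.4  (100·√2)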
Overriding the superclass methods
One function we definitely want to create is the designated initializer. Since we want to calculate the angle based on a midpoint, this initializer requires the midpoint to be given. Additionally, it allows you to optionally set the minimum inner radius and maximum outer radius of the gesture:
init(midPoint:CGPoint, innerRadius:CGFloat?, outerRadius:CGFloat?, target:AnyObject?, action:Selector) {
    super.init(target: target, action: action)
    self.midPoint = midPoint
    self.innerRadius = innerRadius
    self.outerRadius = outerRadius
}
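For this initializer to compile, the subclass needs matching stored properties. Their exact declarations aren’t shown here, but they could look something like this (the default value for midPoint is my own assumption, so that every property has a value before super.init is called):
// Stored properties backing the initializer above (names taken from the
// initializer, default values are assumptions for this sketch).
var midPoint = CGPointZero      // center the rotation is measured around
var innerRadius: CGFloat?       // optional minimum distance from midPoint
var outerRadius: CGFloat?       // optional maximum distance from midPoint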
Of course, you want your subclass to do all the magic when a touch is recognized. This is done by overriding the following functions:
touchesBegan(touches: NSSet!, withEvent event: UIEvent!)
In this function we check whether the touches are within the desired range by checking their distance from the center using the previously declared function, and if so, set the gesture state to .Began.
touchesMoved(touches: NSSet!, withEvent event: UIEvent!)
In this function we set the previous and current touch point, and set the gesture state to .Changed.
touchesEnded(touches: NSSet!, withEvent event: UIEvent!)
In this function we reset the previous and current touch point, and set the gesture state to .Ended.
Based on the information set by the functions above, the angle values can be calculated using the previously declared functions.
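The post doesn’t show the bodies of these overrides, so the following is only a rough sketch of how they could look, using the beta-era signatures above and two hypothetical stored properties, previousPoint and currentPoint (both optional CGPoints):
override func touchesBegan(touches: NSSet!, withEvent event: UIEvent!) {
    let touch = touches.anyObject() as UITouch
    let point = touch.locationInView(view)
    let distance = distanceBetween(midPoint, andPointB: point)
    // Only start the gesture when the touch falls between the (optional) radii.
    if (innerRadius == nil || distance >= innerRadius!) && (outerRadius == nil || distance <= outerRadius!) {
        currentPoint = point
        state = .Began
    } else {
        state = .Failed
    }
}

override func touchesMoved(touches: NSSet!, withEvent event: UIEvent!) {
    // Remember the previous point so the angle difference can be calculated.
    previousPoint = currentPoint
    currentPoint = (touches.anyObject() as UITouch).locationInView(view)
    state = .Changed
}

override func touchesEnded(touches: NSSet!, withEvent event: UIEvent!) {
    previousPoint = nil
    currentPoint = nil
    state = .Ended
}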
Yeah, yeah … show me the result!
Okay, okay, I understand the above might all be a bit abstract and boring, so let me demonstrate the result with a small animation:
As you can see, it’s possible to:
- Read out the relative rotation and (in this example) use that relative value to adjust a value within your application. (For example: the volume of your sound.)
- Read out the absolute angle relative to the center point. Note that 0° & 360° will be at 3 o’clock.
- Read out the distance from the center point.
There are many use cases for this gesture. And since you’re able to use this gesture recognizer with only a few lines of code, it is an awesome piece of coding (oh, don’t you just love my humility?). But what’s more important is the ease with which Swift allowed me to code this after only a few days of experimentation.
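To give an idea of what “a few lines of code” means in practice, attaching the recognizer could look roughly like this (the class name OneFingerRotationGestureRecognizer, the knobView and the handler are placeholders of mine; check the GitHub project for the actual names):
// Somewhere in a view controller; names are illustrative, not the actual API.
let recognizer = OneFingerRotationGestureRecognizer(
    midPoint: CGPoint(x: 150, y: 150),   // center of the on-screen knob
    innerRadius: 20, outerRadius: 150,
    target: self, action: "handleRotation:")
knobView.addGestureRecognizer(recognizer)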
I can’t wait to see what Swift’s future will bring!
As always, if you’re interested in the complete source code, or just want to use this Gesture Recognizer in your own project, head on to GitHub and start forking!
Forks, patches and other feedback are welcome. Comments below this post even more!
UPDATE 22/7/2014: Xcode 6 Beta 4 made some significant changes to the CGFloat / Float types. I pushed a fixed version to GitHub.