Monday, November 4, 2013

Objective C Checklist

As I move closer to production with an app that may get a lot of uptake, I developed the following good practices checklist:

 

From Effective Objective-C 2.0

  • Use @class forward declarations when possible -- this minimizes leakage (remember there is no real compiler enforcement of private in Objective-C). Several of the items below are pulled together in the first sketch at the end of this list.
  • Use literal constructors, e.g., rather than NSNumber *someNumber = [NSNumber numberWithInteger:1], use NSNumber *someNumber = @1; -- searching for "with" might be an effective way to find candidates
  • Use static constants rather than #define, e.g., rather than #define ANIMATION_DURATION 0.3 use static const NSTimeInterval kAnimationDuration = 0.3;
  • @property (nonatomic, readwrite, copy)
    • Mostly obvious, except for copy -- use copy for strings (which may or may NOT be mutable -- an NSMutableString might get passed in and mutate later, which could lead to weird unexpected behavior); assign works for scalars; there's also strong and weak for object pointers (weak helps avoid retain cycles -- circular references that ARC cannot break on its own)
    • One other caveat: in the memory-management slot, retain is also an option for object pointers (in which case copy, assign, etc. don't apply), e.g., @property (nonatomic, strong) NSObject *aThing; under ARC, retain has been deprecated (replaced with strong) per https://developer.apple.com/library/mac/releasenotes/ObjectiveC/RN-TransitioningToARC/Introduction/Introduction.html
  • Implement the description method, e.g., the code below (although if you don't have any instance variables/properties that are informative, it's probably not worthwhile -- and you don't, don't, don't want to overwrite the autogenerated Core Data model code)

- (NSString *)description
{
    return [NSString stringWithFormat:@"<%@: %p, \"%@ %@\">",
            [self class], self, _firstName, _lastName];
}

  • Define private instance variables in the implementation file. This is done by declaring a class extension (a second @interface) in the .m file, which prevents leaking implementation details into the header -- Although I understand this at the level of "a reasonable workaround for deficiencies in the language", I do find it a bit off-putting: I like ALL definitions in the .h files (although, being a Java/Lisp guy, I hate .h files). BTW, the syntax for this is as follows (the parens matter!!!)

@interface HCSituationEvaluator ()

// properties and instance variables declared here are visible only within the .m file,
// e.g., @property (nonatomic, strong) NSMutableArray *pendingEvaluations;

@end

  • Use NSCache rather than NSDictionary for caching (didn't realize this class even existed) -- a short sketch appears below
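A minimal sketch pulling several of the items above together (HCPerson, HCAddress, and the literalExamples method are hypothetical, purely for illustration):

// HCPerson.h
#import <Foundation/Foundation.h>

@class HCAddress;   // forward declaration -- no need to #import HCAddress.h in this header

@interface HCPerson : NSObject

@property (nonatomic, copy) NSString *firstName;      // copy: an NSMutableString passed in can't mutate behind our back
@property (nonatomic, copy) NSString *lastName;
@property (nonatomic, strong) HCAddress *address;     // strong (ARC's spelling of retain) for ordinary object pointers
@property (nonatomic, assign) NSInteger visitCount;   // assign for scalars

@end

// HCPerson.m
#import "HCPerson.h"
#import "HCAddress.h"   // the full import belongs in the .m, not the .h

static const NSTimeInterval kAnimationDuration = 0.3;   // typed, scoped constant instead of #define

@implementation HCPerson

- (NSDictionary *)literalExamples
{
    // literal syntax instead of numberWithInteger:, arrayWithObjects:, dictionaryWithObjectsAndKeys:
    NSNumber *count = @1;
    NSArray *names = @[self.firstName ?: @"", self.lastName ?: @""];
    return @{@"count" : count, @"names" : names, @"duration" : @(kAnimationDuration)};
}

@end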
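And a rough sketch of the NSCache point, assuming a hypothetical thumbnail-loading helper:

// NSCache evicts entries under memory pressure and is thread-safe,
// which a plain NSMutableDictionary used as a cache is not.
- (UIImage *)thumbnailForKey:(NSString *)key
{
    static NSCache *thumbnailCache;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        thumbnailCache = [[NSCache alloc] init];
        thumbnailCache.countLimit = 100;   // optional cap on the number of entries
    });

    UIImage *thumbnail = [thumbnailCache objectForKey:key];
    if (!thumbnail) {
        thumbnail = [self loadThumbnailFromDiskForKey:key];   // hypothetical (slow) loader
        if (thumbnail) {
            [thumbnailCache setObject:thumbnail forKey:key];
        }
    }
    return thumbnail;
}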

From iOS Programming, The Big Nerd Ranch Guide

  • Set a breakpoint that breaks on all exceptions/errors (an Exception Breakpoint added in Xcode's breakpoint navigator)

[Screenshots: adding the exception breakpoint in Xcode]

  • Consider using removeObjectIdenticalTo: rather than removeObject: (the right answer depends upon the circumstance: removeObject: matches via isEqual:, so it doesn't require exact instance identity) -- see the first sketch after this list
  • Check setDelegate in the XML parser -- the recommendation is to have delegates for each sub-node in the XML parse tree. This makes sense, and makes for more maintainable code than building your own stack-based state machine. However, I'm more skeptical of rolling all the parsing into the class associated with that node, primarily because the parsed data may be stored in Core Data or in a custom datastore. In these cases, I tend to have side classes that can handle "out of band" operations, and put the parsing there, using the pattern CoreDataClassNameHelper
    • HOWEVER (& critically important): the delegate is a weak reference, so you need to hold onto the value somewhere else or the memory will be reused! (See the second sketch after this list.)
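A quick sketch of the removeObject: vs. removeObjectIdenticalTo: distinction mentioned above:

NSString *a = @"stove";
NSString *b = [NSString stringWithFormat:@"%@", a];   // isEqual: to a, but a different instance

NSMutableArray *sensors = [NSMutableArray arrayWithObjects:a, b, nil];
[sensors removeObject:@"stove"];                // removes BOTH entries -- matches via isEqual:

NSMutableArray *sensors2 = [NSMutableArray arrayWithObjects:a, b, nil];
[sensors2 removeObjectIdenticalTo:b];           // removes only b -- requires pointer identity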
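And a minimal sketch of the weak-delegate caution -- HCItemNodeParser and HCFeedController are made-up names:

#import <Foundation/Foundation.h>

// A tiny delegate for one node type; in practice there would be one per sub-node.
@interface HCItemNodeParser : NSObject <NSXMLParserDelegate>
@end

@implementation HCItemNodeParser
- (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string
{
    NSLog(@"item text: %@", string);
}
@end

@interface HCFeedController : NSObject
// Strong reference: NSXMLParser does NOT retain its delegate, so something else must.
@property (nonatomic, strong) HCItemNodeParser *itemDelegate;
@end

@implementation HCFeedController
- (void)parseData:(NSData *)xmlData
{
    NSXMLParser *parser = [[NSXMLParser alloc] initWithData:xmlData];
    self.itemDelegate = [[HCItemNodeParser alloc] init];
    parser.delegate = self.itemDelegate;   // setting the delegate alone would not keep it alive
    [parser parse];
}
@end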

 

Some "older" writeups advise you to avoid storyboards. This seems to be out of date with Xcode 5, which doesn't present you with a non-storyboard option.

 

  • Penultimate step: remember that "Apple reserves the right to use all two-letter class prefixes", so use a prefix of three or more letters for your own classes
  • Final step: be sure all situation evaluation is done via notifications and timed events, e.g., performSelector:@selector(aSelector) withObject:nil afterDelay:0.5 (see the sketch below)
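A rough sketch of what that might look like -- the notification name and the evaluateSituation method are made up for illustration:

// Re-evaluate the situation when sensor data arrives (notification) and after a short delay,
// rather than from scattered direct calls.
- (void)startObserving
{
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(evaluateSituation)
                                                 name:@"HCSensorReadingDidArrive"
                                               object:nil];

    [self performSelector:@selector(evaluateSituation) withObject:nil afterDelay:0.5];
}

- (void)evaluateSituation
{
    // inspect current sensor state and decide whether to prompt the user
}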

Monday, July 29, 2013

Integrating Sensors and Prompts/Effectors

I’m putting together an architecture that looks like this:

[Diagram: sensors, signals, and actuators]

The application is a smart-home/safe-home system supporting a combination of

  • Place-based objects: primarily sensors, e.g., stovetop monitors, water-overflow sensors, etc.
  • Person-based objects: either prompting/facilitating devices (flashing lights, feedback tones, vibration), or sensors facilitating a quantified-self paradigm (steps, heart rate, blood pressure).

At first glance this architecture looks overly complicated: why have the ZigBee network at all? Wouldn't it be possible to achieve the same result without it, and if so, wouldn't that be preferable?

 

ZigBee Sensor Network Advantages

I was driven to this architecture from two directions. First, on the ZigBee side: the ZigBee communication fabric is designed for high-capacity sensor networks -- in this case, high capacity means many sensors rather than high bandwidth. Considerations included:

 

  • ZigBee is used in the Philips Hue lighting system -- which can control up to 50 (!) bulbs from a single gateway. Their website contains a paper which discusses potential interference issues, demonstrating how unlikely it is that interference will be an issue in practice
  • ZigBee is low power -- the radio can run off of batteries for a "long time" (battery life varies with radio settings). A single radio can report on up to 4 different sensors without requiring an Arduino or other microcontroller. This not only increases battery life but also substantially reduces costs (any Arduino is more costly than an XBee radio, and uses substantially more power).

And finally: 

Data Cloud

 

Once you have the ZigBee sensing network in place, getting the data up to a cloud is straightforward using one of Digi’s gateways (I’m using a ConnectPort X2 at the moment). Digi runs their own cloud service etherios.com geared towards sensor networks. Etherios provides the service at no cost for a small number of gateway nodes -- a nice way to get going and see if your ideas have any traction.

 

With the sensor data securely stored in a cloud service, caregiver-facing applications can access it via a REST interface. A rough sketch of such a request follows.
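For example, something along these lines -- the URL, the Authorization header, and the JSON shape are placeholders, not the actual Etherios API:

NSURL *url = [NSURL URLWithString:@"https://example.com/api/sensors/stovetop/latest"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
[request setValue:@"Basic <credentials>" forHTTPHeaderField:@"Authorization"];

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    if (error) {
        NSLog(@"sensor fetch failed: %@", error);
        return;
    }
    // Assume the service returns a JSON dictionary describing the latest reading.
    NSDictionary *reading = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
    NSLog(@"latest stovetop reading: %@", reading);
}];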

 

 

Smartphone Prompting/Effector Network

 

Output device connectivity is an area where "smartphones" shine. I currently only work with iPhones, since I'm familiar with iOS development, and the libraries provided for building user interfaces are complete, well tested, and constantly improving. Smartphones also provide a rich set of information about the user's situation, with GPS, accelerometer data, etc.

 

Smartphones have become the default target for consumer-facing add-on devices as they support Bluetooth, Wi-Fi, and cellular data connections. The net effect is that an iPhone app can use Bluetooth to integrate with a Sphero robot, and Wi-Fi/cellular data to access the data cloud and control Philips Hue lightbulbs.

 

In addition, the phone's GPS and geofencing capabilities allow checks to be run before leaving the house (stove off, back doors locked, etc.), making the system potentially attractive to many people, some of whose only "cognitive disability" is having a hectic life. A sketch of the geofencing piece follows.
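As a sketch of the geofencing piece -- coordinates, radius, and the "checklist" are placeholders, and a real app would also handle location authorization:

#import <CoreLocation/CoreLocation.h>

@interface HCHomeRegionMonitor : NSObject <CLLocationManagerDelegate>
@property (nonatomic, strong) CLLocationManager *locationManager;
@end

@implementation HCHomeRegionMonitor

- (void)startMonitoringHome
{
    self.locationManager = [[CLLocationManager alloc] init];
    self.locationManager.delegate = self;

    CLLocationCoordinate2D home = CLLocationCoordinate2DMake(42.3601, -71.0589);   // placeholder coordinates
    CLCircularRegion *homeRegion = [[CLCircularRegion alloc] initWithCenter:home
                                                                     radius:100.0   // meters
                                                                 identifier:@"home"];
    [self.locationManager startMonitoringForRegion:homeRegion];
}

- (void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region
{
    // The user appears to be leaving the house: run the pre-departure checklist.
    NSLog(@"Leaving %@ -- stove off? doors locked?", region.identifier);
}

@end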

 

On a more speculative level, multiple new sensor types are becoming available in the quantified-self space. There are a number of startups developing tools to track and measure your activities, along with associated apps and APIs. I expect this area to evolve rapidly as companies with a track record of developing high-quality consumer products acquire these technologies. Just to pick one example: Jawbone recently acquired BodyMedia.

 

 

Network Partitioning

 

Although the functionality-to-network partition is flexible, it isn't arbitrary. Low-power sensors requiring reliable, robust transmission will gravitate towards the ZigBee network. The network's "self-healing mesh" topology gives a higher level of assurance and has the advantageous knock-on effect that adding more nodes increases rather than decreases reliability.

 

Aside: As I was writing this up, I read an article in the August issue of Computer describing Washington State's CASAS project, which has similar characteristics.