Blog

  • LG Ultrafine 5k Review


    I’m still waiting to see what Apple does with desktop Macs this year, but I knew I also wanted a 5k display, so I ordered the new Ultrafine 5k without a Mac to go with it. I ordered early to get the promotional pricing, only to notice afterwards that Apple had extended the promotion until March.

    The reason I kept the order instead of cancelling it is this support document, which states that the 5k Ultrafine Display will work on older, Thunderbolt 2 machines with an adapter, but at a resolution of 4k. I have a 2013 Retina MacBook Pro, but Apple states that a 2014 MacBook Pro or higher is required for the display to work at all. My hunch was that Apple was being conservative. The 2013 MacBook Pro I use is identical to a 2014, which was just a minor CPU speed bump. So for this review, I’ll be doing something a little unusual, and using the display with something outside the support range. (more…)

  • Apple’s Vertical Disintegration

    I’ve recently been looking to replace my 2008 Mac Pro, but it’s hard to find an Apple product that’s right for me. The new MacBook Pro has far less horsepower than my existing tower, and the 2013 Mac Pro isn’t a clear replacement either. The 2013 Mac Pro’s GPUs aren’t really an improvement over the GeForce 680 in my current machine, and it doesn’t support the new LG 5k wide gamut display, which is a must for me. I’ve been orphaned by Apple with nowhere to go, except to Newegg to build a PC.

    Apple’s trimming and neglect of their product line is dangerous. Vertical integration relies on tiers within a company that might not be profit leaders, or even profitable, but exist to support the profitable product lines. Apple as a whole can’t be unprofitable, but it’s just as important to maintain a foundation that supports the core business.

    The original example I always go back to with Apple is when the Xserve line was cancelled. The Xserve was clearly not a huge moneymaker, and I can only imagine that the OS X Server software business was similarly a money pit on paper. But that division helped to move Macs. Take a look at the original demo of the first version of OS X Server. Computerease Chicago points out that the hardware and the software were probably never profitable, but OS X Server launched the Mac back into the education world in a big way by making system management easy. Similarly, the Xserve pushed a lot of Mac hardware into education by simplifying system management and allowing Apple to be an organization’s single vendor. The Xserve would have been an attractive defense against Google in education. Why keep your data and files offsite in Google’s cloud when it would be easy to host everything in house using Macs and Xserves? Without the Xserve, Apple wasn’t able to complete its education story, and has seen its education share slashed by Chromebooks. Google has a complete vertical integration story for education. Apple doesn’t.

    For me personally, the lack of a Mac Pro update is a hole in a critical piece of my Apple ecosystem story. If I have to buy a PC desktop, I’m probably going to look at alternatives to iOS and Mac development, or at least cross platform development. That means I’m not inclined to support Apple technologies like Metal. When it comes time to replace my MacBook Pro, I’ll probably start looking around at alternatives. I almost certainly wouldn’t buy another iPad. I’d have very little reason to keep using iCloud and keep my data locked into the Apple ecosystem. And at that point, especially if I lose my vertical integration with iMessage, my iPhone is the next thing to go. Again, I’m sure the Mac Pro isn’t a huge profit center for Apple. But it’s a central plank that supports all my other Apple purchases and my role as an Apple platform developer.

    Even for users of Apple products who don’t rely on the Mac Pro, Apple’s position is still precarious. iPad sales aren’t particularly great. The iMac is also being neglected. Apple’s laptop line isn’t ideal for anyone. And the iPhone, while not in any immediate danger, continues to see declining share. I don’t personally know anyone who is happy with their Apple experience right now, on iOS or Mac. And I don’t know anyone right now who feels like Apple’s product line meets their actual needs, even outside of the pro or development community. My mom, a Mac user for 20 years, hasn’t exactly been happy with Apple. At best, I’ve seen a few people on Twitter who seem happy right now.

    Apple has been on a cutting spree recently, sending display production outside of Apple, cutting the AirPort base station line, and neglecting Mac desktops. It’s tempting to cut everything that isn’t a massive line of profit, but if Apple isn’t careful with their removal of supports, they’ll bring the whole house down on top of them.

  • Slow Decline Of The Mac Pro

    I wanted to write a bit more about the future of “pros” on the Mac, this time focusing on the Mac Pro.

    Pros are the most easily spooked, jittery segment of the computer market, and they have reason to be. When they buy equipment from a vendor, whether that is Apple or HP or Dell or whoever, they are spending a substantial amount of money, and are risking their business on a platform. Buying the wrong equipment or buying into the wrong strategy has serious consequences for the bottom line. If a business chooses wrong, it takes a serious amount of time and money to migrate users, equipment, and existing projects. If its computers are slower, jobs take more billable hours and its rates become less competitive. Often I see posts on Twitter complaining that people critical of Apple spend too much time focusing on specs, timely updates, or having the fastest available computers, but these are all crucial factors when evaluating pro hardware, for good reason.

    Apple, for decades, has had a basic pact with pro users (although I’m starting to suspect Apple never knew it.) Windows has always been the less risky platform, just due to vendor choice. If you’re a business that buys all HP, but HP stops creating solutions that are right for your business, it’s very little trouble to migrate to Dell. If you run your businesses on the Mac, and especially if you run your business on Mac only software like Final Cut Pro, it’s harder to transition off the platform, and Apple is a larger risk to your business. But pro users have been content with this risk as long as Apple continues to deliver as fast or faster hardware than their competitors, and they upgrade every year. This basic pact has even helped resolve a lot of Apple’s secrecy issues. You don’t need to know Apple’s roadmap as long as you know, whatever it is, it will show up next year, be faster, and be better. Apple still works this way on iOS. You could run trains on Apple’s typical iPhone and iPad update schedule, even with all the secrecy.

    I’ve heard the tower Mac Pro’s sales were quite good. I don’t know anything about the 2013 Mac Pro sales, but I could guess that they probably aren’t that good.

    Before the 2013 Mac Pro, Apple hadn’t upgraded the Mac Pro in three years (and Apple’s neglect of Final Cut Pro 7 didn’t help.) I worked with video pros at the time, and the panic was already setting in. A two year gap, like the one from 2006 to 2008, was digestible. But at three years you start to wonder if the Mac Pro is going to be updated at all. And if you don’t think the Mac Pro is going to be updated, for the good of your business, you’re going to start looking at the Adobe Suite and Windows workstations, and start that transition as early as possible. In that span of time, the uncertainty took Apple’s Final Cut Pro dominance and handed it to Adobe.

    When Apple released the 2013 Mac Pro, it never calmed the pro community. The 2013 Mac Pro was a risky proposition for businesses because it was slower than Windows hardware, which translates to dollars on the bottom line. A job that takes twice as long to render costs twice as much. That just continued to feed the narrative that investing in the Apple platform was a risky proposition. And then three years later Apple still hasn’t shipped an upgrade, continuing the tailspin in pros’ confidence in Apple. Mac Pro sales are likely down a bit due to the specs, but I think they’re down as low as they are because Apple can’t demonstrate a commitment to their platform for professionals.

    I think the Mac Pro could sell a whole lot. People need workstations. But to revive sales of the Mac Pro, Apple needs to do a few basic things:

    • Release a 2018 Mac Pro. No, that’s not a typo. I don’t think it’s the next Mac Pro that will be important so much as the one that comes after, and I hope that’s not discouraging, because I really think Apple could succeed with pros. I’ve already had people tell me they won’t buy the next Mac Pro because they are worried it will be the last one; they don’t want to be on a dying platform, and would rather move over now.
    • Say Apple is committed to the Mac Pro. Apple has been able to keep their roadmaps secret because their release schedule has been dependable. If the Mac Pro releases aren’t dependable, stop jerking people around. All Apple has to do to calm pro users right now is say that there is a new Mac Pro coming but they haven’t been able to show it yet. And Phil Schiller has come so close to saying this. If you can’t rebuild the trust with actual releases, rebuild the trust through the press.
    • Specs? It’s honestly less important than rebuilding trust, but still important. Intel may have been standing still, but GPU vendors were not. The 2013 Mac Pro uses 2012 GPUs that were already dated when it shipped. AMD has floundered a bit, but Nvidia has at least released three solid updates since. For a pro business, that lost productivity is pretty hard to ignore.

  • On Managing Expectations (MacBook Pro follow up)

    One of the counterpoints to criticism of the MacBook Pro event is that expectations are too high. Users expect a laptop to be just as powerful as a desktop, and that’s unreasonable. Generally, I agree. The MacBook Pro hasn’t really been a good desktop replacement since roughly the PowerBook G3.

    But the problem is that Apple itself is marketing the MacBook Pro as a desktop replacement.

    I mentioned in the previous post that a lot of the angst from pro users probably would have been avoided if desktop Macs had been mentioned or updated. I still think that’s true. If you don’t think the Mac Pro is going to be updated, and the MacBook Pro is what Apple is pitching as a replacement, you’re going to compare it to desktop workstations. Even if you think the Mac Pro is going to be updated, Apple’s lack of a mention of it (or the iMac) implies that Apple is still misjudging the expectations of the pro community. When you’re a pro, you don’t like uncertainty around the tools you need to earn a living. Would you risk your business on a vendor that doesn’t have a clear plan for continuing to support your workflow?

    I think it’s fair to criticize Apple for not clearing up all this uncertainty around the different Mac lines during the event. After not getting any serious updates for three years, the 2013 Mac Pro was announced six months before it shipped. When I worked in IT, we were apprehensive about ordering PowerPC machines after the Intel transition was announced. Apple responded by letting us pre-order the original MacBooks before they were announced to the public. It’s easy to say that Apple operates in complete secrecy and we all just need to deal with it, but Apple selectively keeps secrets only when it benefits them. Even a “we’re working on it” for the Mac Pro would have gone a long way towards reassuring a community that depends on Apple’s roadmap for a living.

  • Mac Apple Event Thoughts

    I’m very supportive of going all in on Thunderbolt 3. Thunderbolt 3 is a huge advance, and I think it’s worth ditching all the legacy connectors. It will be a bumpy transition at first, but once it’s done having one universal connection will be worth it (although I’m not holding my breath for corporate projectors to start adopting USB-C or Thunderbolt 3.)

    AMD and Nvidia have been working hard on shrinking their chips. AMD’s 400 series (known as Polaris 10 for mid range desktops, Polaris 11 for laptops) and Nvidia’s 1000 series (known as Pascal) offer approximately double the performance per watt, splitting that gain between higher performance and lower power draw.

    Apple appears to be offering the highest end Polaris 11 part available: the Radeon 460. This is a huge improvement over previous generations, where Apple tended to use only the middle of AMD’s mobile offerings. But while AMD has improved on their previous generation, they’ve failed to take the performance crown from Nvidia. Nvidia’s low end professional notebook GPU, the GTX 1060m, is still almost twice as fast as the Radeon 460.

    The issue with the new MacBook Pro is that it ignores everything professionals have been asking for, while adding things they didn’t ask for. Unnecessarily making the laptop thinner prevents Apple from using a mobile GPU like Nvidia’s 1080m, which offers nearly four times the performance of the Radeon 460. And as GPU advancements slow again and GPUs become more and more power hungry, the increased thinness of Apple’s design may also force them back to lower end mobile GPUs.

    Apple also ignored almost the full list of what pros were looking for in a new MacBook Pro: features like upgradable storage, higher resolution displays, more RAM, external graphics expansion… Apple is pushing this laptop as a professional 4k editing notebook, but hasn’t even equipped it with a 4k display. Whatever you think about Microsoft’s new Surface Studio, it’s at least trying to address that list of pro needs. It’s showing an awareness of what the market is asking for that Apple isn’t.

    A lot of pros still work in environments where they need the best possible workstations to work efficiently. Movies still don’t render instantly. VR and 3D graphics work is still very hardware bound. I even have Xcode projects that take a considerable amount of time to build on my MacBook Pro. The Mac used to be the best choice for these sorts of use cases. Apple provided the fastest hardware with the most reliable operating system, which made it an easy choice for environments where your computer’s efficiency directly made you more money. While macOS does maintain a slim reliability lead over Windows, Apple’s slower hardware is hurting the bottom line of these kinds of businesses. If a MacBook Pro takes twice as long to render your film as a competing Windows notebook, is it really worth it to stay on the platform? At a certain point, even if you love Apple, macOS, or the fancy new Touch Bar, you are losing money by staying with Apple.

    There is a giant unknown in all of this, and that is the Mac Pro. Competitively slow MacBook Pro performance was tolerated as long as Apple offered a fast desktop for performance oriented tasks. The classic Mac Pro was beloved because it fit in perfectly with compute hungry workplaces. Apple took the best Intel had to offer, and the almost-best the GPU makers had to offer, threw them into a nice, flexible box, and sold it to pro users. It wasn’t complicated, but it didn’t need to be. The job of the Mac Pro was not to make a statement, but to burn through any creative task as fast as or faster than any other machine on the market.

    I don’t think the Mac Pro is dead (Macworld is claiming there will be a new Mac Pro in November). If the Mac Pro is updated, it will quiet some of the complaints creative pros have with Apple right now. But Apple has been ignoring the needs of Mac Pro users as well. Besides the lack of updates, the design of the 2013 Mac Pro also missed the mark. It got dual GPUs standard, but it sacrificed dual CPUs. The design is too small to fit any higher end GPUs, and can only fit one SSD. Apple made a large number of important sacrifices to achieve a design nobody asked for or needed.

    If Apple really wants the pro market to return, they just need to keep it simple. Stuff the fastest possible components into well priced, reliable macOS boxes that help people get work done. They don’t need to be art pieces, and they don’t even need to be razor thin. Apple needs to build workhorses again. It may not be exciting, but pros don’t want excitement in their computer purchasing, they want reliability. And throwing the fastest components into a few computers every year is a cheap way to keep a reliable income stream from happy users going.

    Bonus: Death of Apple Displays

    The new LG displays are nice. I’d buy one if I had a machine I could plug it into. But I’m a little mystified as to why Apple didn’t just take the extra step of slapping an Apple logo on the display and selling it as an Apple branded product. I’m sure that US based Mac Pro factory has some spare capacity to put together some Apple monitor cases.

    It’s more than superficial. The monitor not being Apple branded means it is no longer Apple supported. When you buy an Apple branded monitor with a Mac, it’s covered under the same warranty as your Mac. If your Mac had three year AppleCare, your monitor was covered for three years too. And your monitor was serviced at the same local stores your Mac was serviced at. With an LG monitor, that peace of mind is now gone. I’ve had Apple monitors die and get repaired under a three year AppleCare plan. If I have an issue with an LG display, I don’t have a local store to get it serviced at. And what about out of warranty repairs? My cat chewed on the cables of my 27″ Cinema Display, and for a small fee the Apple Store replaced the built in cables. If I have any other accidental or out of warranty issues, will LG fix them for a fee?

    I don’t know how many monitors Apple sold. My hunch is it wasn’t as many as Dell or HP, but I also saw enough of them around that I can’t imagine they didn’t sell at all. But having one vendor to deal with all your problems was always a great thing about buying Apple gear. Now Apple wants me to buy third party displays. If I’m looking at Dell or HP displays, I might take a look at their computers too. They both offer on site service, their computers are faster than Apple’s, and I only have to work through one vendor. Sounds pretty compelling to me.

    It would be great if Apple could service the LG displays, cover them under AppleCare, or at least act as a front line for passing hardware issues along to LG. That would make their relationship feel a lot more partner-y and make me more comfortable with buying Apple.

  • iPad Pro Initial Thoughts

    I’ve been working on an app intended for use with the Apple Pencil, so I went to the store and picked up an iPad Pro this morning. (Sadly no Apple Pencil or Keyboard; both are deeply backlogged, it seems.) At my desk I have a Mac Pro, I carry a MacBook Pro for working on the go, and I have an iPhone 6 Plus and iPad Air 2, so I’ve been thinking a lot about how the iPad Pro fits in with my workflow as it is now.

    I might publish more thoughts on it as I spend more time with the device, but I’ve already had a few reactions and thoughts on it, both good and bad.

    Good: The Hardware

    On the outside it looks a lot like a bigger iPad Air 2, which isn’t a bad thing. Apple has added speakers on the “top” near the lock button, and the “bottom” near the Lightning port (or the left and the right of the iPad if you hold it in landscape.) There are two sets of holes on each edge for stereo sound in both orientations.

    The speakers sound very good for an iPad. The bass is audible, and the volume is much, much higher than my iPad Air 2’s. I’m not sure if it sounds as good as the built in audio on my MacBook Pro, but it’s at least pretty close. The built in output is not a replacement for a decent pair of speakers, but it sounds great for a portable device. My only complaint is that Apple is still opting for side mounted speakers instead of front facing speakers. I hold the device by the sides, and it’s very easy for my hands to cover the speakers and muddle the sound.

    I’m writing this without an Apple Keyboard or Pencil, but I’ll say the typing experience is miserable without them. Worse than the iPad Air. The software keyboard is simply too big on the larger screen. Typing two handed is bad enough, but with one hand it’s unbearable. I’m not sure this is going to be a good replacement for a laptop even with a physical keyboard, but if you’re going to be doing anything as basic as typing medium sized emails, do yourself a favor and get the hardware keyboard. Long term, I’d love to see Apple add handwriting recognition, even if it’s not super. They at least have a starting point with the Mac’s handwriting recognition that’s available for graphics tablets.

    The performance is good. I’ve seen the synthetic benchmarks that make the performance look very favorable compared to the iPad Air 2. But some of the numbers I’ve seen in running my own apps indicate that performance of applications may actually be imperceptibly slower. The extra CPU gains Apple made with the A9 may be getting used up driving the larger display. The iPad Air 2 always felt like a snappy device, so if Apple is just able to deliver the same user experience on the iPad Pro, it’s not a significant issue. But if you’re coming from an iPad Air 2 I’m not convinced things are going to feel significantly faster.

    Speaking of the display size… It’s big. I told someone earlier it feels like I’ve been given a novelty giant iPad as a joke. Not in a bad way, I like the extra real estate, but it’s not an easy to carry device like the smaller iPads. Most of the time I use it I let it lay completely flat on a desk instead of holding it (which makes me regret not buying the Smart Case to prop it up with.)

    I’ve been tinkering with creative apps on it, and the extra screen size is great. As I mentioned, I don’t have my Apple Pencil yet, and I’m sure the hardware will feel even better once it arrives.

    The one thing I’d like to see on a future iPad Pro is support for Thunderbolt, and beyond that, support for pointing devices. One impressive thing about the Surface Pro is the transition it can make to a desktop PC when you plug it into a docking station. It would be nice to be able to plug an iPad Pro into a Thunderbolt display, and make use of a wired network, keyboard, mouse, and other accessories.

    Bad: The Software

    When I talked about the hardware, I mentioned a lot about how it just felt like a bigger iPad Air 2. This is a good thing. With the software, it’s pretty much the same thing: it feels like a bigger iPad Air 2. This is a bad thing.

    Originally I was on the fence about whether I should buy an iPad Pro or a Surface Pro. The attractive thing about the Surface was the lack of boundaries put in place by the software. Want to run real Photoshop with a bunch of windows? Go ahead! Mount network shares or plug in a USB printer? No problem! Run a DOS emulator to play a 20 year old game that happens to be touch friendly? Go for it!

    A lot of apps have been updated, but there are still some strange gaps. GarageBand doesn’t seem to be updated for the iPad Pro screen. Neither has the WWDC app. (One of my favorite third party games, Blizzard’s Hearthstone, doesn’t seem to be either.) I was expecting a premier Apple application like GarageBand to be updated before launch. Apps like Keynote have been updated, and they look great on the display. Apps that haven’t been updated simply appear stretched, and they look clearly pixelated next to modernized applications that look brilliant on the iPad Pro display.

    Some apps that have been updated have an annoying habit of leaving the additional space empty. Apps like Messages, Twitter and News all deal with the extra space by just leaving ridiculous margins around content. I’m hoping in time this issue gets fixed.

    The big problem with iOS on the iPad Pro is that it still struggles with the productivity basics. Multitasking was nice on the iPad Air 2, and it’s certainly better on the iPad Pro. But it can still only run two apps at the same time. Navigating between applications is slow and cumbersome. And worse yet, you can still only have one window per application open at a time. Want to compare two Excel spreadsheets side by side? Nope, out of luck.

    Initial setup was also not great, as I realized how fragmented applications have become. Panic and Adobe both have excellent apps on the iPad Pro, but both have their own sync services with their own separate logins, because Apple has placed restrictions on iCloud usage and doesn’t provide any sort of single sign on service to fill the gap. (And to be clear: I’m not blaming Adobe or Panic for a situation that is rooted in how Apple treats Mac applications.) I dug into the Adobe apps only to realize I didn’t have my stock artwork available. I couldn’t log in to my network share to copy the artwork down, and I couldn’t download a zip of it from the internet because there is no app to decompress the zip, and no filesystem to decompress it to. Adobe seems to have a way to load the artwork into their proprietary cloud, but I haven’t done so yet, and I shouldn’t have to set up a new proprietary cloud system for every application just to load some files in.

    The iPad Pro still shares the same basic problem as its older iPad Air 2 sibling: productivity on the device is killed by a thousand tiny paper cuts. I’m not trying to say you shouldn’t buy one, but you should expect to have the same productivity on it as you would on an iPad Air 2. The screen size can’t solve the productivity issues without the software.

    I’ll revisit this when the Apple Pencil comes out. I’ve heard really great things about it, and I’m sure for artists this will be a great supplemental device. But I don’t think anything about the iPad Pro has changed to make it a dramatically better PC replacement than the iPad Air 2. If the iPad Air 2 has been a good PC replacement for you, the iPad Pro will continue to be one, but with a larger screen. Otherwise, Apple’s continued resistance to making iOS more serious for professional workflows will just slow you down compared to a MacBook.

    I don’t mean to be too down on the iPad Pro. I’ve mostly been talking about the iPad Pro as a PC replacement because Apple has been talking about the iPad Pro as a PC replacement. The hardware is great, and I can definitely see some sort of future here. I’m not totally convinced that a touch based tablet can take the place of a laptop with dedicated keyboard and trackpad (something Apple themselves have repeatedly said in response to other faux-laptop tablet combos like the Surface Pro), but for me it’s easy to see this as a good ultra portable device. And as a developer, I see all sorts of cool things I could do on a touch based device this large and this powerful. But as a user, the software still holds me back from getting things done as efficiently as I could on a laptop. I know my needs are greater than most PC users, but I’m just not convinced that the iPad Pro has changed the decision making process someone goes through for buying a tablet vs. buying a PC.

  • Swift Needs KVO/KVC

    I’m just finishing up my first App Store bound project written in Swift. It’s nothing hugely exciting, just a giant calculator sort of application. I chose Swift because its static typing really made me think about the data layer, and how data flows through the application. What I missed terribly was KVO/KVC, and I’m not alone. Brent Simmons has also mentioned this, and as someone who’s used a lot of KVO and KVC over the years, I find they’ve helped me ship code a lot more quickly, and have been among the most valuable features of the Mac frameworks. A lot of developers who are new to the platform aren’t aware of these constructs.

    The idea is something like this: we’ve done a really good job of optimizing the model layer of Model/View/Controller applications, and Swift has done an amazing job there. Static typing provides huge advantages in reliability and coherency. But the Obj-C philosophy is really about reusable components. In that philosophy, components written by one vendor need a way to seamlessly talk to another, and this is really where Swift and static typing fall flat. A view from one vendor or component needs a way to render data from a model from another component. We find this even in the system frameworks, where a component like Core Data needs to be passed into a controller to be searched, or…

    Hold on. I can hear the Swift developers already. “We have protocols and extensions for that! I can make a component from one source talk to a component from another source. All I need to do is define a protocol in an extension and I can have my static typing and everything!”

    Ok. Let’s go down the rabbit hole.

    The Swift Protocol Route

    Let’s take a classic case that is actually a scenario Apple shipped on the Mac in Mac OS X 10.4. I want a controller that, given an array of objects, will filter the array based on a search term and output the result. The key here is my search controller doesn’t know the input type beforehand (maybe it came from a different vendor) and my input types don’t know about the search controller. I want a re-usable search controller that I can use across all my projects, with minimal integration effort, to save implementation time.

    Using protocols, you might define a new protocol called “Searchable”. You extend or modify your existing model objects to conform to the protocol. Under the “Searchable” protocol, objects would have to implement a function that receives a search term string and returns true or false based on whether the object thinks it matches the search term. Easy.
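
    Here’s a minimal sketch of that approach (the names Searchable, matchesSearchTerm, and SearchController are my own illustration, in Swift 2-era syntax):

    protocol Searchable {
        // Each conforming type decides for itself whether it matches.
        func matchesSearchTerm(term: String) -> Bool
    }

    class SearchController {
        var objects: [Searchable] = []

        // The “reusable” controller is reduced to a filter loop;
        // the real matching logic lives in every conforming object.
        func resultsForSearchTerm(term: String) -> [Searchable] {
            return objects.filter { $0.matchesSearchTerm(term) }
        }
    }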

    But there are a few problems with this approach. The controller has become blind to how the search is actually performed, which isn’t what I wanted at all. The idea was that the controller would perform the search logic for me so I didn’t have to continuously rewrite it, and now I’m rewriting it for every searchable object in my project. If I need search to be customizable, where the user selects which fields to search, or options like case sensitivity or starts with/contains matching, those options now need to be passed down into each Searchable object, and then logic written in each object to deal with them. Reusable components were supposed to make my code easier to write, and this sounds worse, not better.

    Maybe I could try and flip this around. Instead of having extensions for my objects, I can have a search controller object that I subclass, and fill in with details about my objects. But I’d still have the same problem. I’m writing a lot of search logic all over again, when the point is I want to reuse my search logic between applications.

    (If you’ve used NSPredicate, you probably know where this is going.)

    Function Pointers

    Alright, so clearly we were trying to implement this all in a naive way. We can do multiple searchable fields. When the search controller calls in to our Searchable object, we’ll provide it back a map of function pointers to searchable values, associated with a name for each field. This way all the logic stays in the controller. It just has to call the function pointers to get values, decide if the object meets the search criteria, and then either save or discard it. Easy. And we are getting closer to a workable solution, but now we have a few new problems.
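
    Concretely, that version might look something like this rough sketch, with closures standing in for the function pointers (all names here are illustrative):

    protocol FieldSearchable {
        // Map of field names to getters the controller can call.
        func searchableFields() -> [String: () -> String?]
    }

    class Employee: FieldSearchable {
        var firstName: String?
        var lastName: String?

        func searchableFields() -> [String: () -> String?] {
            // The matching logic stays in the controller; it just
            // calls these getters to fetch current values by name.
            return [
                "firstName": { self.firstName },
                "lastName":  { self.lastName },
            ]
        }
    }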

    Live search throws a wrench into this whole scheme. Not only do we need a way to know if an object meets the search criteria, but now we also need a way of knowing if an object’s properties have changed in a way that could change its inclusion in our search. This is especially important if I have multiple views. Maybe I have a form and a graph open for my model objects in different windows. If I change an entry in the form, I’d want the graph to live update. And the form view and the graph view might have no knowledge of each other. So we need a way to call back to an interested observer of the object when a value changes. We could use a timer to check every second or so for changed values, but in some scenarios that could be a very expensive and needless operation. So while that would work, performance and battery life would significantly suffer. And it’s more code we don’t want to write.

    There’s also the issue of nested values. Maybe what I’m searching are objects that represent employees, but now I also want to search on the name of the department each employee belongs to. In my object graph, it’s very likely that departments will be another model object type with a relationship to employee objects. So now I’m looking at returning function pointers not just to my employee objects, but to the department objects they belong to. And now I need to communicate changes not only in my object’s own values, but in its relationships to other objects.

    Also there is the small issue of this approach not working with properties. As far as I know, you can’t create a function pointer to a property. So now I need to wrap all my properties with functions.

    This is getting complicated again. Once again I’m writing a lot of code, and not saving any time at all. There has got to be a better way.

    Key Value Coding

    Well fortunately after years of going through this same mess in other languages, Apple came up with Key Value Coding as a solution.

    Key Value Coding is extremely simple: it’s a protocol that allows any Obj-C object to be accessed like a dictionary. Its properties (or getter and setter functions) can be referred to by using their names as keys. All NSObject subclasses have the following functions:

    func valueForKey(_ key: String) -> AnyObject?
    func setValue(_ value: AnyObject?, forKey key: String)


    (Reference to the entire protocol, which contains some other interesting functions, is here.)
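
    As a quick sketch of what this looks like in use (Swift 2-era syntax, using the Employee class defined just below):

    // Any NSObject subclass can be read and written like a dictionary.
    let employee = Employee()
    employee.setValue("Smith", forKey: "lastName")
    let name = employee.valueForKey("lastName") as? String  // "Smith"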

    Now my search controller is easy. I can simply tell the search controller all the possible searchable properties like so:

    class Employee: NSObject {
        // dynamic exposes these properties through the Obj-C runtime,
        // which is what makes KVC lookups (and KVO later) possible.
        dynamic var lastName: String?
        dynamic var firstName: String?
        dynamic var title: String?
        dynamic var department: Department?
    }

    searchController.objects = SomeArrayOfEmployees
    searchController.searchKeys = ["firstName", "lastName", "title"]


    Now I can have a generalized search controller, that I can share between projects or provide as a framework to other developers, that doesn’t have to know anything about the Employee object ahead of time. I can describe the shape of an object using its string key names. Underneath the hood, my search controller can call valueForKey, passing the keys as arguments, and the object will dynamically return the values of its properties.

    Another great example of the advantages of keys is NSPredicate. NSPredicate lets you write a SQL-like query against your objects, which is harder to do without being able to refer to your object’s fields by name.
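
    For instance, a sketch of a key-based query (Swift 2-era syntax; employees is assumed to be an array of the Employee objects above):

    import Foundation

    // The key names in the format string are resolved through KVC at
    // evaluation time, so the predicate needs no compile-time knowledge
    // of the Employee type.
    let predicate = NSPredicate(format: "title CONTAINS[c] %@ AND lastName BEGINSWITH %@",
                                "engineer", "S")
    let matching = (employees as NSArray).filteredArrayUsingPredicate(predicate)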

    There is a catch. If you’re a strong static typing proponent, you’ll notice that none of this is statically typed. I’m able to lie about what keys an object has, as there is no way to enforce that the name of a key I’m giving as a string actually exists on the object beforehand. I don’t even know what the return type will be. valueForKey returns AnyObject.

    Quite simply, I don’t think static typing helps this use case. I think it hurts it. I don’t see a way to make this concept workable without dropping static typing, and I think that’s ok. Dynamic typing came about because of scenarios like this. It’s ok to use dynamic typing where it works better. And all isn’t lost. When our search controller ingests this data, if it’s written in Swift, it will have to bring these values back into a type checked environment. So even though static typing can’t cover this whole use case, it improves the reliability of using Key Value Coding by validating that the values for keys are actually the types we assumed they would be.

    Key Value Paths

    There are a few problems KVC hasn’t solved yet. One is the object graph problem that was talked about above. What if we want to search the name of an employee’s department? Fortunately KVC solves this for us! Keys don’t just have to be one level deep, they can be entire paths!

    The KVC protocol defines the following function:

    func valueForKeyPath(_ keyPath: String) -> AnyObject?

    The keyPath argument is a key path of the form relationship.property (with one or more relationships); for example “department.name” or “department.manager.lastName”.


    Hey look, that’s uhhhh, exactly our demo scenario.

    So now I can traverse several levels deep in my object. I can tell my search controller, after some modification, to use a key path of “department.name” on my employee object.

    searchController.objects = SomeArrayOfEmployees
    searchController.searchKeyPaths = ["firstName", "lastName", "title", "department.name"]

     

    Now internally, instead of calling valueForKey, my search controller just needs to call valueForKeyPath. I can use single level deep paths with valueForKeyPath with no issue, so my existing keys will work.

    Notice that valueForKey and valueForKeyPath are functions that are called on your object. I’m not going to do a deep dive right now, but you could use these to implement fully dynamic values for your keys and key paths. Apple’s implementation of this function inspects your object and looks for a property or function whose name matches the key, but there is no reason you can’t override the same function and do your own lookup on the fly. It’s useful if your object is abstracting JSON or perhaps a SQL row.
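
    A sketch of that idea, with a hypothetical JSON-backed class (Swift 2-era syntax):

    import Foundation

    // A hypothetical object that keeps its data in a JSON dictionary
    // instead of declared properties.
    class JSONBackedObject: NSObject {
        var json: [String: AnyObject] = [:]

        override func valueForKey(key: String) -> AnyObject? {
            // Resolve keys against the JSON payload on the fly,
            // falling back to the normal property lookup.
            if let value = json[key] {
                return value
            }
            return super.valueForKey(key)
        }
    }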

    It’s also important that this works on any NSObject. I can insert placeholder NSDictionary objects for temporary data right alongside my actual employee objects, and the same search logic will work across them. As long as the object has lastName, firstName, title, and department values, the object type no longer matters.

    Key Value Observing

    Well all that’s great, but we still have one more issue. We need to know when values change. Enter Key Value Observing. Key Value Observing is simple: any time a property is set, or a setter function is called, a notification is automatically dispatched to all interested objects. An object can signal interest in changes to a key’s value with the following function:

    func addObserver(_ anObserver: NSObject,
          forKeyPath keyPath: String,
             options options: NSKeyValueObservingOptions,
             context context: UnsafeMutablePointer<Void>)


    (It’s worth checking out the other functions. They can give you finer control over sending change notifications. Also lookup the documentation for specifics on the change callback.)

    Notice that the function takes a key path. An employee’s department name will not only change if their department’s name changes, but also if their department relationship changes. This covers both cases by observing any change to any object within the “department.name” path.

    It’s also worth checking out the options. We can have the change callback provide both the new and old value, or even the inserted rows and removed rows of an array. Not only is this a great tool for observing changes in objects that our class doesn’t have deep knowledge of, but it’s just great in general. This sort of observing is really handy for controlling add/remove animations in collection views or table views.

    In our search controller, we just need to observe all the keys we are searching on all the objects we are given, and then we can recalculate the search on an object by object basis. There are no timers running in the background; the change notification fires directly from an object’s value being set.
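
    Putting the pieces together, the observation side of the search controller might look something like this sketch (the KVO calls are the Swift 2-era API; everything else is illustrative):

    import Foundation

    private var searchContext = 0

    class SearchController: NSObject {
        var objects: [NSObject] = []
        var searchKeyPaths: [String] = []

        func startObserving() {
            // Watch every search key path on every object we were given.
            for object in objects {
                for keyPath in searchKeyPaths {
                    object.addObserver(self, forKeyPath: keyPath,
                                       options: [.New, .Old], context: &searchContext)
                }
            }
        }

        override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?,
                                             change: [String: AnyObject]?,
                                             context: UnsafeMutablePointer<Void>) {
            if context == &searchContext {
                // Re-evaluate just this one object against the current search term.
            } else {
                super.observeValueForKeyPath(keyPath, ofObject: object,
                                             change: change, context: context)
            }
        }
    }

    (A real implementation would also need to remove itself as an observer before objects go away.)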

    So what’s the problem in Swift?

    I’ve mentioned one problem already: Only classes that subclass NSObject can provide KVO/KVC support. Before Swift, that wasn’t a major problem. Now with Swift, we have non-NSObject subclasses, and non class types. Structs can’t support KVO/KVC in any fashion.

    The properties/functions being observed also have to be dynamic. Again, not a problem in Obj-C where all functions and properties are dynamic. But not only are Swift functions not dynamic by default, some Swift types are not supported by dynamic functions. Want to observe a Swift enum type property? Can’t do that.

    Even more worrisome, the open source Swift distribution could possibly not include any dynamic support, and KVO/KVC are defined as part of the Cocoa frameworks, which aren’t likely to be included with open source Swift. Any code that wants to target cross platform Swift might be forced to avoid KVO/KVC support. Ironically, just as we could be entering a golden age of framework availability with Swift, we might be discarding the technology which makes all those frameworks play cleanly with each other.

    So what would I like to see from Swift?

    • Include KVO/KVC functionality as part of core Swift: The current KVO/KVC are defined as part of Foundation. They don’t need to be moved, but Swift needs an equivalent that can bridge, and is cross platform.
    • Have more dynamic functionality on by default: Another issue is that dynamic functionality is currently opt in. This is for a good reason: things like method swizzling won’t work with Swift’s static functions. But Apple could split the difference: Allow statically linked functions (and properties) to at least be looked up dynamically. This would allow functionality like KVO and KVC to work without giving up direct calling of functions or opening back up to method swizzling.
    • Have the KVC/KVO replacement work with structs and Swift types: Simple. Enums in Swift are great. Now I just want to access them with KVC and observe them.

  • Swift and Obj-C: 2015 Plans And Obstacles

    My current plans for Swift adoption in 2015:

    • I just finished a contract project in Swift 1.2. It was a really great experience for working through Swift and finding its strengths and weaknesses.
    • At work, we may introduce some Swift 2.0 into our application for new source, but there is currently no pressing need to transition anything existing.
    • The library we ship to other developers will continue shipping in Obj-C. There are currently no plans to port to Swift, as Swift 2.0 still doesn’t meet our requirements, and a full port would be technically impossible. But we’ll be cleaning up the interface to bridge well to Swift, and hopefully begin distributing as a framework in our next major version.

    There are reasons for these different approaches, and some things I’d still love to see from Swift to be more comfortable with it:

    C++ Support

    Swift still cannot directly link to C++ code. The current workaround is to wrap C++ code in Obj-C code. The two schools of thought on this are that Swift will never get C++ support, or that C++ support may come to Swift in a future enhancement. This is a big reason a full port of our library for third party developers will not get a Swift version in its current form: it contains shared cross platform C++ code. If Swift does not get C++ support, then Obj-C will be used in the project indefinitely. And we’re not alone: most major companies are in this position as well. Microsoft, Adobe and Apple are just a few companies that have large C++ code bases for cross platform projects, and it’s unlikely they’d be able to drop Obj-C either. It’s possible to add another layer of abstraction in Swift around our Obj-C code, but that doesn’t seem like an efficient use of resources, especially when Obj-C bridging is making such large strides.

    Dynamic Swift

    Swift is still a mixed bag when it comes to using some of the more dynamic concepts from Obj-C. KVO can still be painful, especially when trying to mix in Swift concepts. For example, it’s not possible to declare a dynamic Swift function that uses a Swift enum type. Having to declare functions dynamic at all is also a painful design decision. It seems that it would be possible for Apple to split the difference. They could allow functions to be accessed both statically and dynamically automatically. Directly calling a function could follow the fast, static path, while dynamic observation and calls could be done through a lookup table that simply forwards to the static function. My understanding is that dynamic functions are all or nothing currently, but having a middle ground where functionality like method swizzling is just not supported, or not supported well, would be a good compromise.

    Like C++ support, there are two different thoughts on this. One is that Swift should be totally a static language and that Apple platforms should make a huge shift away from dynamic types of programming. The other is that Swift can find some sort of middle ground, which I really hope is the course Apple takes. Swift really shines in a lot of situations that pair well with type safety, but it can really be painful to use in dynamic programming where Obj-C excels. Obj-C was born out of painful experiences with languages that were too static, and it would be a shame to backtrack on that.

    Language Stability

    Swift is still an unstable language, which can cause deeper problems beyond just issues with maintaining source. The Swift 2.0 ABI is still considered unstable by Apple (which I re-confirmed post WWDC), so shipping a precompiled library/framework is still not an advisable option (this is the other significant issue with shipping our library with any Swift.) An unstable ABI means that a Swift project can only link to other pre-compiled Swift code that was compiled with the same version of Xcode. This makes it difficult to service multiple customers with the latest fixes at the same time.

    The unstable nature of the language itself is a smaller problem, but can still be an obstacle. If we have multiple active branches, changes in Swift can cause schisms in our repository. It also makes recompiling and re-signing an older application much more difficult. We’d feel a lot more secure in our Swift investments if the language were more stable. iOS has small API changes from release to release, but Obj-C has been extremely stable.

    Cocoa Support

    Swift is still unwieldy under Cocoa, but Swift 2.0 and the latest release of Obj-C have made huge strides, and I’m really looking forward to what comes next. I think this will continue to improve, but parts of Cocoa that use pointers are still painful, and types like NSNumber don’t bridge to native Swift types. The one really bright spot is Swift 2.0’s formalized error handling, which is a great advance and works extremely well with Cocoa. I’d like to see the dynamic support in Swift brought up to speed to better bridge with Cocoa APIs and KVO. I’d even be ok with a KVO replacement in Swift as long as it could bridge. Observing class properties is a big part of Cocoa, and I’d like it to be a big part of Swift as well. I was happy that Swift had eliminated exceptions, but I quickly ran into problems porting existing unit testing code that tested a Cocoa style API, so the exception enhancements are welcome as well.

  • WWDC Quick Thoughts

    My initial takes on WWDC announcements… (more…)

  • My Wish For Swift At WWDC: C++ Support

    At work, we support a lot of platforms. We support iOS and Android, Windows, Linux, supermarket checkout scanners, Raspberry Pis, old Windows CE devices, and more. And all the devices run our same (large) core code, and all that code is written in C++. I’m not the biggest fan of C++. But there’s no doubt when we need to write something that works across a range of platforms, it’s a rich, commonly understood tool. It’s also been a massive blocker for Swift adoption for us.

    For our mobile customers, we do provide both Java and Obj-C APIs. They’re both just wrappers around our C++ core, and they do the conversion from all the Obj-C or iOS native formats into the raw buffers we need to handle in our C++ core. Whenever I look at doing a native Swift SDK in the future, I’m still stuck on not having native C++ support from Swift code. In order to provide a pure native Swift API, I’d have to wrap our ever growing C++ source base once in Obj-C, and then wrap it again in Swift. It just doesn’t make sense to wrap the same code twice over. (more…)