Replacing Objective-C and Cocoa

Ash Furrow wrote an article arguing that Apple needs to replace Objective-C with something else. The crux of the argument is that programming languages have moved to higher levels of abstraction over time, edging further away from direct hardware access. By the time such a transition is complete (say, within 10 years), using C-based languages will seem as archaic as using assembly does today. Ash then lays out features he would like to see in such a language.

Replacing something as fundamental to a platform as its language is no small feat. Apple did this once before, with Cocoa and the compatibility bridge of Carbon when moving from OS 9 to OS X, a migration that took 12 years to fully finish in public API. Developers fought this change for many years before Cocoa became the de-facto standard. So a migration to something newer cannot be a cavalier move done to embrace trends; it must be done with a clear purpose that fixes common issues in the thing it replaces, and it must set a foundation upon which to build at least a decade or two of software. And it must coexist with that which came before it. With the OS X transition, Apple didn’t just have a new language; they had a whole new operating system. It came with entirely different ways of handling memory, threading, files, and graphics. It delivered frameworks that were way more usable than their predecessors. It wasn’t just a new programming language; it was a revolution in how we built software.

That’s what it should take to inspire a radical change in developer tools - order-of-magnitude improvements in building software, making it easier to solve hard problems, and fixing issues in common coding standards that have arisen through heavy use. This goes beyond just a programming language; it will require new frameworks and design patterns to really bring about the benefit. Apple owns their developer technologies stack: from compilation with LLVM, to language features in Objective-C, to framework features in Cocoa, to web technologies in WebKit. When you have control of all of these pieces, the problems at the top of the stack can be addressed at the bottom, and vice-versa.

Here are some things I’d love to see in a next-generation developer platform.

  • Concurrency. With libdispatch, Apple delivered one of the biggest breakthroughs in this area: a far superior method of handling concurrency and flow control than any system based on threads (see the sketch after this list). A new language should add the kinds of concurrency control in libdispatch as primitives of the language itself, so that everything from the bottom up can use it. Basically, if they use the word “thread” anywhere in the API, they blew it. If we can kill the concept of a god main thread as well, all the better.
  • Functional processing. Functional languages like Haskell keep most data immutable and change it with transformation functions. This has significant benefits: fewer bugs, less error-prone code, and most importantly fewer of the kinds of bottlenecks that kill concurrency, like locks. We’ve got the CPU cycles for it.
  • …but keep it imperative. Pretty much anybody who can code can write imperative code. Purely functional languages are not heavily used, and until they are, it’s just not realistic to expect everyone to figure this out.
  • Strong types and duck invocations. One of the biggest causes of crashes I’ve seen in Cocoa is someone passing something they shouldn’t have to a method. id in Objective-C can be useful, but it has been abused by developers who take shortcuts that end up biting them in the ass. Strong typing means that passing the wrong thing is an error. It’s worth the headache. Standardize on method names, so that design patterns emerge naturally. If an object responds to a toString() function, let me call it without having to set up an interface or protocol for it.
  • One style to rule them all. Remember getting angry about dot-notation? Let’s avoid that. Whatever systems exist for a common problem should offer exactly one way to solve it, and that solution should be simple. While we’re at it, let’s take a cue from languages like Python and make syntactic style choices hard rules, so that we all (humans and IDEs) just learn them and get on with it without arguing about tabs or spaces. Then getting rid of braces and semicolons becomes much easier to do.
  • Compiled. Code compiled up front is always going to be faster.
  • Secure. 10 years ago, our lives lived on one PC. Now we have multiple devices tracking everything about our lives. Every machine is an attack vector for all of our highly personal data. Security should be the default. Subprocesses with strong sandboxes and privilege separation should be the default, encryption should be easy, and establishing trust will be necessary. Privacy controls for people should guide most of these decisions.
  • Modular. Open source has changed how we develop apps. Instead of developers writing everything themselves on top of a monolithic framework, apps now include source code written by dozens of developers building tiny, reusable components. CocoaPods is now a thing. Most modern languages have package management as a feature, because it makes it faster to prototype and build software.
  • Move beyond C. No pointers. No main(). No manual memory management. ARC was a massive improvement in how we deal with memory, but there are still issues with hard-to-debug circular references and zeroing weak references. This should be easier, more obvious, and easier to debug.
  • …but let me get back to it. Decades of code have been built on C, so we should be able to get back to it. Most higher-level languages include some form of foreign function interface to get to lower-level bits of code. Just keep it in an isolated sandbox so it doesn’t crash the app.
  • Interoperable. Apps should be able to talk to each other, and pass data between each other, through standardized interfaces. If the language (or at the very least the libraries) supports this, we can build more powerful apps, and do a better job applying security practices.
  • Introspective. Getting into the guts of a program at runtime can have some powerful benefits. We should be able to fiddle with the internals of our software. Most of the rules laid out here should be bendable. But to keep the integrity of the system in place, this API should be difficult to use.
  • More sophisticated UI. In 2014, most displays showing Apple software sit between 4 inches and 13 inches. In the next ten years, that may span from 1 inch watch displays to 60 inch TVs. Concurrent graphics rendering and better support for remote screens a la AirPlay Mirroring should be APIs we can use.
  • Powerful networking. If I asked you to name an app you use regularly that didn’t talk to the Internet, you’d probably have a hard time answering. Networking should be easy, powerful, secure, extensible, and designed to withstand failures.
  • Easy data modeling/storage. Core Data remains a wonderful tool for development, but it was designed for an era long past, not for the present. Getting a stack running correctly has been a pain since the framework’s inception, and that’s before you try shuffling data into it from, say, a JSON-based web service. A new system built around concurrency and interaction with remote services is essential.
  • Social. The Internet communications devices we all carry know all kinds of stuff about our friends and families. Apps should be able to move content between people in ways that are easier than sticking a photo in an email.
  • Real-time communication. Moving data quickly between components, apps, devices, and services is super important. To create the illusion that everything works together seamlessly, this needs to be real-time with as little latency as possible. The OS can do this better than any developer can.
  • Design to fail. Most apps crash because the developer did something dumb. As someone who has caused thousands of crashes on devices, I’m as guilty as anyone. In today’s programming environment, accounting for failure is hard. Any new language and its resulting libraries should be designed at every level to make it easy to notice and catch problems quickly. Crashes are not always avoidable, but if they’re simple to handle, they become much more manageable.
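
To ground the concurrency bullet above, this is the libdispatch idiom as it exists in today’s Objective-C. It works well, but it’s a library call rather than a language primitive, and every UI update still funnels through the god main thread. (A minimal sketch; the feed URL and the callback are placeholder assumptions.)

    #import <Foundation/Foundation.h>

    // Today's libdispatch idiom: enqueue slow work on a background queue,
    // then hop back to the main queue before touching the UI.
    static void reloadData(void (^updateUI)(NSData *)) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            // Placeholder fetch; the point is that slow work stays off the main thread.
            NSData *data = [NSData dataWithContentsOfURL:
                            [NSURL URLWithString:@"https://example.com/feed"]];
            dispatch_async(dispatch_get_main_queue(), ^{
                updateUI(data); // all UI work must funnel back through the main thread
            });
        });
    }

A next-generation language would bake this pattern into the language itself, so that “which queue am I on?” stops being a question every developer has to ask.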

To summarize: a new developer platform includes a language designed for concurrency and safety, tools better tuned for writing/reading/debugging code, and powerful frameworks for building apps that can easily talk to our friends, apps, services, and devices. Such a platform would be very well suited for the current hardware we own (laptops, phones, tablets, TVs) and can expand to other device categories like smart watches, cars, appliances, and office tools. It would help us make better, safer, more secure, and more stable software.

When Apple came out with Cocoa in 2000 and improved it over the early 2000s, their pitch was that it enabled indie developers and small development shops to build software of a class previously attainable only by large companies like Adobe and Microsoft. This was a big part of what inspired me to begin teaching myself how to write Mac apps. In the years that have followed, the demands of modern Internet apps have exploded, and we are back at a stage of competitive complexity. A new developer platform could once again level the playing field and inspire a new generation of developers to aim high.

Discounted App Upgrades Are a Terrible, Complex, Outdated Idea

Much ado has been made about the idea of App Store apps getting discounted upgrades, where you buy an app once, and then pay a discounted amount for an upgrade. This idea is not new; for many years, developers sold apps under a model known as shareware. In an era when software was harder to use and people feared viruses, this model thrived among technically-savvy people who tended to spend more money on technology. The generally-accepted model in the Apple world has a fair bit of complexity, involving trials (some time-based, some feature-restricted), serial numbers or license files, and periodic requests for more money. Then the App Store came and replaced it outright with a new, simpler model that favors a traditional retail-style system of cheap software that you pay for once. It’s far more straightforward and easier to understand; if you want the app, you buy it, and then you own it.

Developers who have thrived under the old model have complained for a long time that they want discounted upgrades to make a return to iOS. Along with it comes the added complexity of managing different tiers of ownership, both technically and mentally. Apple has not offered this, instead opting to make major updates available as a separate, standalone app that existing users pay for in full, as was done recently with Logic Pro X. To me, this is far and away a better and simpler approach to handling upgrades in an era when non-technical people buy software.


Consider Apple’s Perspective

To properly understand why Apple hasn’t enabled discounted upgrades, it’s worth viewing Apple’s decisions through a very simple lens - they want current and future customers to buy iPhones, iPads, and Macs. In a decision between what will benefit a developer and what will benefit a customer, Apple will pick the customer, every time. Apple advertises that their devices are not mere cell phones or devices with fast web browsers; they have the App Store, a powerful destination for people to find apps to solve problems and games to entertain. With hundreds of thousands of apps and games on Apple’s stores, no matter how niche the tool you need is or what genre of video games you enjoy playing, there’s an app for that.

When a new customer brings that iPhone home, they want the device to live up to that expectation by filling it up with apps, and to do so without spending a fortune. Apple loves that apps are free or cheap, because it makes it easy for customers to try out new things with low risk and high reward. In a sense, the race to the bottom on app prices is great for Apple, even if it’s not great for developers. But terribly low prices haven’t stopped growth; there are more developers building on iOS than there ever were. Apple’s not hurting for apps or developers, as both keep flowing under their own momentum. Aside from massive brands like Facebook and Angry Birds, no one specific app is really needed. Nature abhors a vacuum, and app opportunities will arise and be fulfilled by someone in the network.

Discounted upgrades throw a wrench into the works. A discounted upgrade is essentially a developer telling an existing customer that they have to pay more money to continue receiving new features and bug fixes. Were discounted upgrades implemented in the existing App Store update infrastructure, you’d be told there’s an update, and if you didn’t want it, you’d need some way of hiding that update (and then getting it back later if you wanted to upgrade after all). But you, the customer, would be responsible for shutting off something you didn’t even necessarily want in the first place. Discounted upgrades add a whole lot of complexity to a simple, habitual process, and they give the customer absolutely no value. Contrast this with a separate product in the App Store, where the upgrade is opt-in and looks and feels like buying a separate thing, and the process is dramatically simpler. You buy a thing, and you buy another thing. You don’t buy a thing in various states of upgrade-ness.

So Apple likes cheap, straightforward purchases and free stuff because it’s a selling point on why you should buy an iPhone. A discounted upgrade complicates many parts of an app’s life span for a customer. But even if it didn’t, there’s an even stronger reason for Apple to maintain the status quo of separate products for major updates, one with a decade-long history of prior art on the Mac.


Developers Suck at Discounted Upgrades

One example of a long-standing Mac shareware developer is the Omni Group, a company that has thrived for almost two decades on the shareware model. Just yesterday, they released an update to their popular Mac app, OmniGraffle. OmniGraffle has existed since long before the Mac App Store, and it is a wonderful app well deserving of its praise. However, like many shareware apps from the era before the Mac App Store, it offers upgrade pricing, and it is unfortunately a perfect case study in all the ways you can get the user experience of an upgrade wrong.

When you open the app, OmniGraffle prompts you with a dialog box asking if you want to upgrade to the latest update. It shows a long list of new features that sound great, but with not a single mention that this is a discounted upgrade. A customer who decides they want these features clicks through, and (in this case) gets taken to the OmniGraffle website listing more of those features, where there is again no mention of upgrade pricing. You click the download link, install the update, and it suddenly says you’re in trial mode. Edit: The update for OmniGraffle 6 is installed alongside 5; it doesn’t replace the existing app. But both apps show the same display name, with no difference in icon, making it difficult to tell which app is the update. If you uninstalled the existing app and want to undo this process, you have to find the old version, buried as the 5th question on their support page and listed in a large FTP index with files that end in extensions like tbz2.

If you know that this particular update involves upgrade pricing, you’re at least in a position to have already decided whether to pay for it. But if you just opened the app to work, consider the emotional roller-coaster this process puts you through. You entered an app to solve a problem, to get your work done. Suddenly you’re offered the delight of more features and capabilities to solve your problem in a better way. You put your problem on hold to install the update, and when you’re finished, you’re confused about why the product you already paid for is demanding more money, and worried that you can’t get the old app back. Getting back to the old app involves many weird and non-obvious hurdles.

Omni also makes this app available via the Mac App Store, which makes for a useful comparison. For the App Store version, release notes and update information are shown through the Mac App Store. For this major version update, the new version is offered as a separate, standalone product that must be bought in its entirety; customers are free to buy it or ignore it, at their own pace. The new app sits next to the old one, not replacing it. No customer is ever asked to enter a serial number or download a license file (let alone store it somewhere for future reference if they buy a new computer). The only downside is that the customer is asked to pay full price for the new thing, instead of getting a discount. But as Justin Williams points out, software is basically the only industry where discounted upgrades happen at all. Omni is so fundamentally glued to the old shareware model that they came up with OmniKeyMaster, a piece of software that converts licenses from one store into licenses for another store, so you could buy a cheaper upgrade from the other store (and of course, Apple made them kill it almost immediately). Their customers were asked to figure out what their license was and why it mattered, all to save a few bucks on an upgrade.

I’m not picking on Omni because I harbor any animosity or negative thoughts toward what they do (and I do sincerely apologize to Omni for singling them out, as they were a convenient example while writing this). They build amazing, award-winning software, and their apps are worthy of all the praise they get. They’re one of the best developers building for the Mac outside of the App Store. But they got their discounted upgrade experience completely and totally wrong. Their process assumes you will pay them to upgrade; they have no incentive to make keeping the product you have easy or enjoyable. And they’re not alone. Lots of apps fail at making a discounted upgrade a positive experience. If the best of the best can get this process so wrong, why should Apple trust the shifty ones already gaming the App Store to get it right?


Market Conditions

It’s not surprising that old hands have had trouble adapting. Many companies have desperately tried to cling to the old ways of doing business, holding on with a few key products or failing outright and going out of business. Many other companies have emerged, finding tremendous success in taking advantage of the rules and human behaviors to maximize profit or growth. While it’s hardly a healthy market for small developers, it’s the only market we have to work with if we want to target iOS devices, and one of two on the Mac. But it’s not our market; all markets are owned, controlled, and directed by buyers, not sellers. Developers are sellers; we must adapt to the market and its rules.

Developers can move their product beyond the App Store and sell subscription-based services, though they’re likely to meet more resistance from customers who have to pay regularly. They can also move towards a model driven by in-app purchases, where the app provides some functionality for free (or cheap), and customers pay to add more features. Apple seems to want developers to move further in the latter direction. A wonderful case study in this is the iPad app Paper, which gives you basic tools for free and charges in-app purchases for more advanced ones. It’s worth noting that Apple lauded that app with both App of the Year and an Apple Design Award.

Humans are predictable, as are their reactions to your moves in a marketplace. Want to keep people happy? Don’t violate your customers’ expectations. Don’t surprise your customers by asking for money after they’ve already paid you. Don’t make your customers feel like they’re missing out on a deal by offering a time-based discount. If you must break these rules, do it while treating your customers with respect and honesty. Take care to make sure they have a pleasant way to not pay you, to keep things as they have always had them. Software shouldn’t feel like ransom. You can play fast and loose with these rules if your target market is people with disposable income who look past slights because they want to support developers (and if you have these customers, cherish them, because they are exceptions). But average people don’t care about your livelihood. They care about solving their problem. Despite this, they will be more likely to treat you with respect if you show them the same.

But my company depends on discounted upgrade revenue! On the Mac, you can still rely on this model, but there’s a reason that developers want their apps through the App Store. It’s a far smoother process for them. By relying on a different model, you’re complicating the buying process and probably losing customers. Just please, spend some time making sure the upgrade process is good for your users, not just for you.

But I want to reward my loyal customers! By giving your customers a discount on something they have to pay for again, you’re not rewarding them. You’re punishing them less. If you want to reward your existing customers, give them something extra instead.

But I get one-star reviews when I release separate paid apps! You made the decision to ask people who have already paid you to pay you again. You also picked the prices. People do have the right to be mad when a company asks them for more money, and sometimes they’ll take that energy to reviews, because they know you’re paying attention there. And you control how existing customers learn how the updates work: confuse them and you make them mad; make them mad and you get spiteful reviews. Unfair? Perhaps, but again, it’s not your market. As a side note, you can reduce this issue by making sure your updates are actually worth paying for (and your iOS 7 reskin probably isn’t).

But my customers won’t find out about the separate app! Your app is an updateable piece of intelligence. Before your big update goes out, plan for it and add some announcement UI in your app that you can trigger over the Internet. With new features in StoreKit, you can link to the new app and close the sale directly inside the old one.
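
That last StoreKit bit refers to SKStoreProductViewController, added in iOS 6, which shows an App Store product sheet without leaving your app. A minimal sketch, assuming this runs inside a view controller that adopts SKStoreProductViewControllerDelegate, and using a placeholder iTunes item ID:

    #import <StoreKit/StoreKit.h>

    // Present the new app's App Store page in-app and close the sale there.
    SKStoreProductViewController *storeVC = [[SKStoreProductViewController alloc] init];
    storeVC.delegate = self;
    [storeVC loadProductWithParameters:@{ SKStoreProductParameterITunesItemIdentifier: @123456789 }
                       completionBlock:^(BOOL result, NSError *error) {
        if (result) {
            [self presentViewController:storeVC animated:YES completion:nil];
        }
    }];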

But people have no right to complain about an update that costs 99 cents/$2/$3/etc. when their iPhone cost hundreds of dollars! This privileged point of view comes from people who love supporting developers and who have spare money. And while it’s noble, it doesn’t reflect how most customers think about apps. The toothpaste isn’t going back in the tube here; people are now used to free and cheap apps with updates. Your app isn’t why they bought an iPhone; it’s an accessory, one that can be discarded as easily as an iPhone bumper.

Discounted upgrades offer nothing but complexity to Apple and to end users. Short of some UI breakthrough we haven’t thought of yet, they aren’t coming. The standard of separate, standalone apps has been established, by third-party developers and by Apple themselves. Separate apps offer UI benefits, less stress, and no perceived bait-and-switch for end users. The only people who benefit from discounted upgrades are developers with customers who eagerly pay for upgrades. We had that on the Mac; it’s far from the reality of what we have today. The market has spoken; shareware is dead. Let’s go build something new.

Once again, my sincere apologies for singling out Omni in this post. They were a convenient and lazy target for me because of their recent app update. Their apps are great and you should buy them.

DevCenter.me

In 2013, developers rely on web-based services to get much of their work done. Whether it’s looking up server API documentation, provisioning iOS devices, or registering API keys, part of a developer’s day involves going to a website and looking something up. And that means keeping track of how to find all of these disparate websites. The problem is that there’s no real standard way or schema to get to them. Each service has a different URL for their developer site. Some examples:

  • iOS: http://developer.apple.com/devcenter/ios/
  • Android: http://developer.android.com/
  • Chrome: https://developers.google.com/chrome/
  • Dropbox: https://www.dropbox.com/developers
  • Twitter: https://dev.twitter.com/
  • Instagram: http://instagram.com/developer/
  • Windows: http://msdn.microsoft.com/en-us/windows
  • Heroku: https://devcenter.heroku.com/
  • Twilio: http://www.twilio.com/docs/api
  • Amazon Web Services: http://aws.amazon.com/documentation/

Those are 10 different examples, each with a completely different URL structure. If you have to interact with more than a few of these, it becomes impossible to keep track of them all in your head. Now, you could rely on bookmarks or web searches, but these aren’t as fast as just typing a URL into your browser or using a shortcut in an app like Alfred. Speaking from my own experience, for the sites I can keep in my head, I’ll type part of the URL and hope the browser’s autocomplete will finish the job, simply because it’s usually more efficient. But as a human, sometimes I’m wrong, which wastes time. Wouldn’t it be great if there were a simple, consistent URL structure for all these different sites?

So I built DevCenter.me to do this. DevCenter.me is a directory of developer sites that can be accessed via consistent URLs that either you or your browser can remember. Each developer site can be accessed via one or more shortcuts (for example, ios for the iOS developer center, or app.net or adn for App.net) through either of two URL structures: shortcut.devcenter.me or devcenter.me/shortcut. These shortcuts are designed to be memorized, and frequently used ones will end up persisted in your web browser’s URL autocomplete (the service sends a 302 temporary redirect, not a permanent one, so your browser keeps the devcenter.me URL around instead of replacing it with the destination). After a few uses, you can just type fb into your browser’s URL field, it will recognize you’re typing fb.devcenter.me, and you’ll be at the right site in just a few keystrokes.

DevCenter.me was designed to be a productivity tool for developers. There’s a full list of sites and their shortcuts available in JSON, as well as on the website (with shortcuts shown by pressing the option key). You can even download an Alfred extension to open a site via one of these shortcuts (shout out to Will Smidlein for writing this, as well as providing the motivational tweets to write this service). And if a site is missing, you can fork the project on GitHub, add it to the list, and submit a pull request. When I initially published the project, there were 10 sites. As of this writing, less than 12 hours later, there are 56, and undoubtedly more pull requests for me to merge in. That’s really cool to see, and I hope people keep submitting requests for more sites.

I built DevCenter.me because keeping track of all these websites was a thorn in my side; wasted memory space in my head. Accessing developer sites may not be a world-changing problem, but saving a few seconds a few times a day adds up over time. A better solution keeps you focused on what you’re actually trying to do rather than requiring you to keep esoteric URLs in your head. I hope lots of developers find it valuable.

Thoughts on the Dropbox Datastore API

As I’ve stated before, your app needs to sync. This has not gone unnoticed by the startup world, who are offering more and more options for developers to build sync into their apps. Today, Dropbox announced a new datastore API, a system for syncing application data (that isn’t file-based) into and out of Dropbox.

At first glance, this looks like a wonderful solution. You get a drop-in component on iOS, Android, or the web to put your app’s data into the cloud, with very little thought by you. Data gets stored offline automatically. It even handles merging and conflicts quietly in the background. Pretty great, right?
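
The pitch boils down to a few lines of code. This sketch is adapted from Dropbox’s launch documentation for the iOS SDK - treat the exact class and method names as approximate, and the table and field names here are my own placeholders:

    // Open the default datastore for the linked account, insert a record,
    // and let the SDK sync it (and resolve conflicts) behind the scenes.
    DBAccount *account = [[DBAccountManager sharedManager] linkedAccount];
    DBDatastore *store = [DBDatastore openDefaultStoreForAccount:account error:nil];

    DBTable *tasks = [store getTable:@"tasks"]; // tables are created lazily
    [tasks insert:@{ @"taskname": @"Buy milk", @"completed": @NO }];

    [store sync:nil]; // merges local and remote changes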

I hope that the Dropbox Datastore API can deliver on these promises. I don’t think they’re necessarily impossible problems to solve. But these are the exact same problems that Core Data + iCloud claims to solve, and between iOS 5 and iOS 6, iCloud hasn’t been able to deliver on that promise. This certainly doesn’t mean Dropbox can’t, or that the Datastore API has problems. Dropbox is certainly well versed in the concept of syncing blobs of data between multiple systems, silently and effectively, as that’s what they’ve been doing for the last five years. But this solution should be approached skeptically and carefully.

Before you ship your app built on the Dropbox Datastore, these claims should be tested thoroughly. Test data sync across 1, 2, 3, 8 different devices. Test it offline. Introduce conflicts. Save data offline. Try multiple conflicts. Create conflicts on one machine while offline. There are a lot of ways a magic sync solution can fail.

Of course, the benefits to a drop-in solution are immense. You don’t have to write sync logic. You don’t have to wake up at 4 AM because your MongoDB process randomly died, taking your server API out with it. You don’t need to handle the differences in online and offline state. You know there’s a company whose goal it is to solve this problem for you; that’s their job.

Just be careful. When it comes to any tool that claims to be a cure-all, make sure it does the job.

Beginner’s Guide to Bitcoin

In the last few years, an interesting alternative to paper currency has arisen in the form of Bitcoin. I started paying attention to Bitcoin two years ago, and have owned some ever since. I’ve purchased physical and digital goods with it, as well as traded it for cash. It’s particularly intriguing to me because it maintains the core properties of economic models while creating a system of inflation guided by encryption algorithms rather than humans. And it has grown hugely in popularity over that time, with a net worldwide worth of over $750 million USD at the time of this writing.

Many people have asked me about Bitcoin, so here’s a basic introduction to what Bitcoin is and how it works.

Note: I am neither a lawyer nor an accountant. This should not be construed as legal or financial advice. If you use the information in this post, you agree that it is at your own risk.

What is Bitcoin?

A currency can be thought of as a limited supply of something that can be traded. In the case of the US dollar, that something is paper bills and metal coins, each of which has a value relative to the total limited supply of all money. The limited nature of a currency is where it gets its value; the idea is that we all collectively trust some entity to manage the supply of the currency. In the case of the US dollar, the Federal Reserve manages the supply against the full faith and credit of the United States.

Bitcoin is a type of currency that is managed not by people, but by cryptography. Transferring currency between people involves computing some work and signing it with a private key. Volunteers verify the transactions by solving really hard math problems. Every few minutes, a small amount of Bitcoins is given randomly to a miner, slowly and steadily inflating the amount of currency in the system and giving people an incentive to verify the transactions. The history of all transactions is publicly available, but the addresses are basically random data, making transactions hard to tie to identities. The result is a system where the supply grows at a well-defined rate that everyone can see. It also means transactions are fairly anonymous, verified by others, and irreversible.

How do you get Bitcoins?

There are two primary ways to get Bitcoins. But before either, you need a program called a Bitcoin wallet, which tracks the Bitcoins you have and manages your addresses. You can get a Bitcoin wallet for pretty much any desktop OS or for Android, or you can use a hosted site like Mt. Gox or Coinbase (though remember, these sites are hosting your money, so take precautions here). Your wallet manages some extremely sensitive data, so take care to keep it safe and backed up, or you can lose your Bitcoins forever.

The easiest way is the same as with any other currency - by receiving them from someone else. To do this, you create an address with your Bitcoin wallet that you can give to others; for example, my address is 19LayHDujXwVpuLv2cqgFFfBFwmegx3Z8s. You give this address to someone, and they send a certain amount of Bitcoins (or fractions of Bitcoins) to that address. This transaction is submitted to the network, where other people verify it to prove that the sender actually owns those Bitcoins. Once enough people have verified the transaction, the coins are considered “transferred”, and they’re yours. There are several exchanges for trading Bitcoins with others for other currencies (I use Mt. Gox), though there are often difficulties in transferring US dollars to one of these (as PayPal and the credit card vendors don’t let you use them for buying Bitcoins). You can also find people who will trade informally in places like #bitcoin-otc on Freenode IRC.

The other way is a process called mining, which involves running a program (a miner) that helps verify transactions on the network by processing really hard math problems. The reward for verifying these transactions is the chance at free money. Every few minutes, a new block of Bitcoins is “found” by a miner, who gets to keep them. Additionally, when other people transfer Bitcoins, they can optionally include a “transaction fee”, which goes to a miner. The catch is that finding Bitcoins is difficult, because so many other people are trying to do the same thing. To mitigate this, many people join a mining pool, where any Bitcoins found are distributed among the whole pool. Some people go so far as to build custom computers to do Bitcoin mining at scale, and there are hardware lists that talk about how to get the most bang for your buck.

What do you do with Bitcoins?

You spend them, of course! An incomplete list of vendors to spend Bitcoin at can be found on the Bitcoin wiki. You can buy all kinds of things with Bitcoins, like food, clothes, services, toys, books, and even beer. Many sites will trade by the market value, meaning that they’ll sell something for either $20 via PayPal or $20 at market rate in Bitcoin (so if a Bitcoin cost $1, that would cost 20 BTC).

You can also treat Bitcoins as a commodity and do forex-style (currency) trading. If you buy 10 Bitcoins at $1 each and sell them at $2, you’ve made a profit of $10. The markets for this tend to fluctuate quite a bit; in the month prior to writing this, the value increased from $30 to a peak of $75. But what goes up must come down; in 2011 the BTC-to-USD exchange rate plummeted from a peak of $33 to $2.51 and took a long time to recover. The volatility means that you could see huge gains, but also catastrophic losses.

Is it safe?

I’ll divide safe into four areas here: value stability, information security, identity, and legality.

You want a currency to be fairly stable in value, meaning that huge swings don’t happen (or at least are rare). Otherwise it’s difficult to put faith in the currency. As of right now, there are still large swings in value that make Bitcoin difficult to rely on. It’s certainly more volatile than most paper currencies. So if you’re looking for a safe place to put your savings, this isn’t it. The volatility could be desirable if you’re looking for short-term gains, but not for your life savings. Treat it like a gamble.

The Bitcoin protocol has some smartly designed security features. It uses encryption similar to what your computer uses when talking to your bank’s website, and if hackers figure out how to beat that, we’ll all have bigger problems than our Bitcoins being stolen. It relies on a massive network of volunteers verifying transactions and reaching consensus; the only way to work around this is to build a network more powerful than all the world’s supercomputers combined. And it relies on problems so computationally complex as to require significant time from would-be attackers. The combined result of these layers of security is a system that has had only one major security flaw, over two years ago (and hackers have a huge financial incentive to break this system, so you can bet they’re trying). Where hackers have found success is in attacking online wallet services and exchanges. You should be very careful if you store your Bitcoins in one of these.

Your identity is never attached to a transaction or address. Each transaction, however, is publicly available and has a history. If you share an address and receive Bitcoins at it, and that address is Googleable, your name might be found attached to some Bitcoin. But purely at the network layer, transactions are anonymous. They’re just heavily recorded.

As of this writing, there are no laws that I know of against owning or trading Bitcoins. There are very few regulations; the only one I know of was actually implemented just today. That also means there are few safeguards in terms of risk or fraud. Tax implications are also unclear, but you should probably just file it as income. Talk to your accountant and your lawyer, because I am neither.

Should I use it?

Bitcoin is definitely not the easiest thing to use. But there’s plenty you can buy with it, so it’s a viable currency in many cases. And it’s an interesting market for speculation. If you want to buy things online and prefer to avoid PayPal or giving out a credit card number, it’s a nice alternative that is basically digital cash.

If you get into Bitcoin because of this article, feel free to send some Bitcoin as a thanks to my address: 19LayHDujXwVpuLv2cqgFFfBFwmegx3Z8s

Project Amy

At the last App.net hackathon, I unveiled Apparchy, a proxy server that converts your App.net timelines into data that looks like the Twitter API, letting you use Twitter’s official iOS apps to post to and read from App.net. This was a really cool hack, but it suffered from many problems. It relied on a proxy server, which had issues relating to security and privacy, as well as being a single point of failure; if the proxy server went down, everyone’s app broke. Apparchy itself was built to work with Twitter’s official apps, which use a LOT of private methods on Twitter’s server API, and those private APIs changed often from release to release, meaning the app would break if you updated it. It was a big pain to set up, taking many steps that were easy to get wrong. But perhaps the most important and most philosophical problem that plagued Apparchy was that the early adopter audience of App.net were not the kind of people who embraced Twitter’s official apps. They used Tweetbot or Twitterrific or some other app because it was better suited for their needs or looked better or some other reason.

Even with all these problems, it was fun and fascinating to make. Going into this weekend’s App.net hackathon, I wanted to top it. How? By building something that was just as mindblowingly cool, that also fixed all of those problems.

Since the last hackathon four or five months ago, App.net has been hustling on getting new APIs out, having added the Messaging API and the Files API, among other things. I’ve been dreaming for years of a better chat app with first-class, bulletproof file transfer support, and App.net has all the ingredients for a killer implementation. Similarly, for over a year, I’ve been sitting on the knowledge of the existence of the IMServicePlugIn framework, waiting for an opportunity to use it for something.

And thus, #ProjectAmy was born. App.net messaging integrated natively into Messages for the Mac.

Here’s how to set it up. If you’re running Mountain Lion, download the Amy installer and run it. Once it has finished installing, open Messages’ Preferences and add an account. A new option for App.net will appear: pick it and enter your username and password for App.net. Add the account and you’re done.

The people you follow will be shown in your buddy list. Double-clicking one will open a new chat window and start a new private messaging session like you would see on Omega. Sending and receiving messages happens very fast, usually within a second. You can drag files into the chat window, and they will upload to your account’s App.net file storage and be sent as part of a message. Similarly, if a file is attached elsewhere, Messages will let you download it directly - and if that file is an image, including animated GIFs, it will appear inline like it would with AIM or iMessage. If you subscribe to group chats or Patter chat rooms, those will appear as well. It’s very nicely integrated.

Project Amy tops what I did with Apparchy. It deeply integrates App.net into a major app. It doesn’t rely on a proxy server (in fact, it communicates with no API except App.net’s), so there are no concerns about privacy and far fewer issues with stability. It uses 100% public, documented APIs, so app updates will not break the plugin (in theory). It’s super simple to set up; just run the installer and add your account information. And it is an integration in an app used by millions of people, an audience that overlaps heavily with the people who use App.net. It fixes all of the problems, makes Messages more useful, and makes App.net more useful.

That said, this initial release is a beta. There are bugs and I’ll be working on fixing them over time. But in the last few weeks as I’ve developed it, it’s vastly changed how I use PMs on App.net. I use it a lot more, and respond to people a lot more frequently and much more quickly. I hope every Mac-using App.net user downloads it and installs it.

You can download the Project Amy beta for free from my App.net file storage. I hope you like it!

Files as UI vs API

Rene Ritchie on iCloud vs. Dropbox:

For all Dropbox’s automagical-ness, it’s a relic of the past. It’s a file system. It’s a hierarchy. It’s a folder sync. It’s a bunch of encrypted data stored on Amazon’s S3 network.

As much as iCloud is the right idea still not realized, Dropbox is the wrong thing done brilliantly well. And at the end of the day, that still amounts to the wrong thing.

There’s an important distinction here, and that’s separating files as UI from files as API. iOS (and, to a lesser but growing extent, Mac OS) has proven that users should not have to manage their own file system - that files as UI make for a poorer user experience. You shouldn’t have to worry about where photos are stored in your photo library; iPhoto will manage collections of photos for you, and they get stored on your disk somewhere. Apps can present organizational context that files cannot, letting one photo be in your library, your photo stream, an event, multiple albums, and with multiple people, all without having to exist in folders representing each of these collections. This is a good thing; it’s a significant advancement in human/computer interaction design, and it’s the model that computing on all platforms will follow going forward.

Files as API, however, are as important as ever. Besides being organizational chaos for a user to manage, a file system can be thought of as a structured way of mapping lots of pieces of separate data to a physical disk. Apps can store whatever data they want into a file, and the OS figures out how to actually store it. It’s a system that works very well. To use the iPhoto example, photos may be interacted with in one or many collections through a smarter and more appropriate UI, but each photo is still stored as a file somewhere on disk.

So while the UI has transcended the need for users to use the Finder for organization, the underlying data still relies on files, which remain the best way to save large amounts of disparate data. Just about every abstraction over files (e.g. databases or object stores) ultimately ends up writing files to the file system. Developers are still heavily reliant on files as API, even if we’ve moved beyond needing or wanting them in UI.

Besides the key/value store (which I believe uses a different syncing mechanism), iCloud advertises three mechanisms for syncing data - the file store, the document store, and the Core Data store. All of these are actually based on the same underlying file sync mechanism (a “document” refers to something like a Pages document, which is stored on disk as a folder with multiple files for separate images, text, and metadata; the Core Data store refers to database-style apps that have lots of little pieces of data and maybe some files that go along with them). With iCloud, developers get a folder that the user never ever sees, called a “container”, to move files to and from the cloud. And it’s this basic file/folder syncing mechanism that is apparently flawed, as there have been many reports of problems with iCloud-based apps, whether they’re based on Core Data, on documents, or on other storage with files.
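
For the curious, that container is tangible in code. Here’s a minimal sketch of the file-level API (the file name is a placeholder, and Apple’s documentation says to call these methods off the main thread, which is omitted here for brevity):

    // Ask for the app's ubiquity container, then move a local file into it.
    // Once the file is in the container, syncing is entirely out of your hands.
    NSFileManager *fm = [NSFileManager defaultManager];
    NSURL *container = [fm URLForUbiquityContainerIdentifier:nil]; // nil = first container in entitlements
    NSURL *localURL = [NSURL fileURLWithPath:@"/tmp/notes.txt"];   // placeholder file
    NSURL *cloudURL = [[container URLByAppendingPathComponent:@"Documents"]
                       URLByAppendingPathComponent:@"notes.txt"];

    NSError *error = nil;
    if (![fm setUbiquitous:YES itemAtURL:localURL destinationURL:cloudURL error:&error]) {
        NSLog(@"Failed to move file into iCloud: %@", error);
    }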

Dropbox, on the other hand, was designed around files, both from a UI point of view and an API point of view. This means their file syncing is very, very good. If a file gets put into a Dropbox somewhere, it ends up everywhere quickly, basically with absolute reliability (short of network errors). If you’re building an app that needs to sync, that kind of reliability is exactly what you’re looking for. And you’re already using files to store stuff. The Dropbox file UI side of things is optional; users have to seek it out, either on the website or in one of the Dropbox apps - there’s nothing stopping you from having a Dropbox account purely for syncing data, without ever installing the Mac app or viewing a directory on the website. But their syncing of files works. Apps can build better UI on those files whether they’re stored locally, in Dropbox, or in iCloud. But Dropbox has proven its reliability, and iCloud hasn’t.

So while there is an argument to be made that Dropbox’s UI is a relic, its value as a syncing engine is still huge, precisely because it’s built around the paradigm of files, a paradigm we have decades of experience working with.

The Ragequit

Every few weeks, some tech company is under fire over changes to its rules. This week it’s Instagram, but who knows who it’ll be next week. They put out some change to their terms of service that claims new or changed rights over what they can do, someone notices, bloggers and headline-hungry tech reporters find it, and suddenly we have us a news cycle. In 2012, the truth is not the actual truth, but that which is tweetable. People circulate headlines speculating on what the new terms mean, a few rounds of telephone go by at the speed of light, and pretty soon the company in question is the most evil entity on earth for the next two or three days.

This nuclear chain reaction cascades, and eventually people get mad; so mad, they decide to pull off a move that could never have existed in the pre-Internet era: the ragequit. A ragequit consists of three parts - backing up your account data (usually), deleting your account, and then talking very loudly about it on social media. Usually this decision is made within hours of the change going viral. Its intent is to send a message: these changes are not OK, and if you’re going to make them, I’ll just take my ball and go home, so you should fix them.

In a way, the ragequit is a fascinating bit of human nature to observe. In just a few hours, someone can go from ignorance to apathy to fear to anger, and let this rush of emotions dictate a permanent decision. We’ve now reached a point where software is so disposable that we will spend months and years putting our lives into it, then throw it away at the first sign of perceived injustice against ourselves. It’s equally curious that people think a few scattered deleted accounts will persuade the company to see the error of its ways, rather than the monstrous bad press being simultaneously thrown at it.

One of the most infamous incidents of the ragequit happened in 2010 when Facebook announced a number of changes to their privacy options and policies. As with all things Facebook and privacy (hence Instagram and privacy), people got mad and deleted their accounts en masse. Did it work? Well, no. Facebook didn’t even bother to dignify the effort with a response. They likely picked up more new users that day than they lost from ragequitters. That was two and a half years ago, and it’s not like Facebook’s privacy controls have gotten any better. The whole thing was a futile effort that made some people feel good, and effected no change.

Nobody has ever been called noble or admirable for knee-jerk removal of part of their online presence. Those who do it are never celebrated beyond the moment, and many end up crawling back, tail between their legs, and resuming their use of the service. So remember, if you’re thinking of pulling off the ragequit, it probably won’t do anything but make you feel better in the moment. The company might end up backpedaling, the story will end, and suddenly you’re the one looking for a new photo-sharing app.

And yes, I am entirely guilty of the ragequit in the past.

Your App Needs to Sync

In the last few years, we’ve seen a pretty significant shift in how we use computers. We’ve gone from primarily using one Internet-enabled device (the PC) to using two (PC + phone) to using three (PC + phone + tablet), and who knows what else we’ll add in the next couple of years. Not only are we looking up our data and documents on all these devices, we’re creating data and documents on them, and the share of that time we spend on the PC keeps shrinking. Effortless and ubiquitous access to data is increasingly important to people.

If your app deals with users’ data, cloud sync should not be a feature you bolt on to the app - it is the feature. It’s why you will beat competitors or lose hard to them. It’s what will make your app feel effortless, thoughtless, and magical. It’s what will gain a user’s trust, and once you have that, they will sing your app’s praises and never give it up. But to earn that trust, you have to account for sync at every step of the design and engineering of your app.

Developers have a number of choices in how to build an app around sync. You can use iCloud, you can use a hosted service like Parse, or you can build a custom sync service for your app. Each solution has tradeoffs. So what should you optimize for?

The reality is that every app is different, and each sync system must cater to the data it syncs. While it is certainly the most work, it’s my belief that you should optimize for control. You should have total and complete control over when and how your app syncs its data. You should be handling errors and conflicts, not abstracting them behind black-box middleware. Without this level of integration, you’re bound to a system that can fail unpredictably, leaving users to figure out what went wrong. And when they try, it’ll be your app that feels broken.

When you build it all yourself, you have absolute ownership over everything. When you decide your app should sync, it will. When you get the data, you can update the UI immediately. When there’s a problem or a network error, it’s not lost in some abstraction; it’s your code that can hear about it and deal with it or tell the user. You know when everything is working and when it isn’t. And if you know, you can empower the user to make decisions and set their expectations.
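
To illustrate what optimizing for control might look like, here’s a hypothetical shape for an app-owned sync engine - nothing here is a real framework, and the names are mine. The point is that every outcome (new data, a conflict, a failure) surfaces as an explicit callback to your code:

    // Hypothetical app-owned sync engine. Every outcome is an explicit
    // delegate callback instead of disappearing into middleware.
    @protocol SyncEngineDelegate <NSObject>
    - (void)syncEngineDidFetchChanges:(NSArray *)changes;   // update the UI immediately
    - (void)syncEngineDidFindConflict:(id)localObject
                               remote:(id)remoteObject;     // you decide how to merge
    - (void)syncEngineDidFailWithError:(NSError *)error;    // tell the user, retry, or queue
    @end

    @interface SyncEngine : NSObject
    @property (nonatomic, weak) id<SyncEngineDelegate> delegate;
    - (void)syncNow; // your app decides when to sync, not a background daemon
    @end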

iCloud’s biggest problem is that it goes out of its way to obscure a lot of this detail from you. The pitch is that if you create apps with the document system and put them in iCloud, they will all sync magically and you don’t have to worry and we’ll handle it for you thank you very much. But the reality of syncing data is that it’s tough, and network availability is not always reliable or fast (especially on mobile). You have to write a lot of nonobvious code to handle updates and problems. Building on iCloud means you limit yourself to Apple devices; you can never get that data synced to an Android device or make it accessible via the web (short of later building your own system, updating your apps, and making them push iCloud-stored data to your own server). And iCloud has not exactly earned a reputation for stability or friendliness to developers. The only real debugging tools you have are a web app that lets you see what’s in an iCloud folder and some rather verbose logging flags that tell you some stuff about the syncing process. In other words, it’s not easy. I’ve tried to integrate iCloud no less than 6 times in various app prototypes, and every single time I’ve run from it.

Note: I don’t mean this to denigrate the work of the iCloud team. What they’re doing is immensely difficult, and I have great respect for that team for trying to solve a problem of this magnitude in a way amenable to every app. But sympathy is not something you can build a business on.

Of course, the downside to the DIY solution is that it’s really, really hard. On top of building the app itself, you now have to build a database for the data and the other half of a sync engine. You have to worry about the security of users’ data. You have to back all this stuff up somewhere. You need a fallback strategy for when your main server has a problem. You have to know how to scale to lots of users and know what the hell terms like partitioning and sharding mean. Oh, and you have to know how to keep a server running and accessible. But the upsides are really valuable. You have limitless flexibility and control in what you use and how you use it. If something is running slow, you can fix it. You can compress everything important into a single network request in the most optimized form imaginable (which is a huge win for how fast your app syncs). You can manage your costs in a much more straightforward way. If you want to add more advanced network features, like real-time sync, just do it. You own the whole stack; you can build what you want. You just have to build it.

Both extremes have upsides and downsides, and any system is a balance of tradeoffs. There is a middle ground in the form of services like Parse, which handle a backend for you. You can push whatever data you want into their system and get it back later. You can build your own sync engine hooked up to their APIs. As you develop your app, you can update your data models without any effort. Their errors are well defined in documentation, and you can handle them appropriately. But you are limited by what they can do (which is admittedly a lot, but not everything) - no real-time updates, for example. You are limited in how you can package your data, making it tough to optimize network traffic. And as you grow, you’re bound by their pricing model, which can get far more expensive than hosting it yourself. And while the possibility is remote, they could disappear or get acquired and have the product shut down, leaving you to scramble to build the server stuff yourself in a tiny window of time.

In the end, it’s all about tradeoffs. If you have the technical chops and the time to build, run, and maintain servers, a self-assembled system will give you the most flexibility, the most power, and the lowest cost. If your talents lie only in building apps, learning how to run a server is absolutely not a bad skill to have, but it requires a different engineering mindset that takes time and experience to develop. In the meantime, a system like Parse is your best bet. iCloud, while a beautiful idea, is not production-ready for most apps, will cause endless headaches, and will lock you into forever developing apps only for Apple products.

But whatever you decide, you need sync in your app. And you need to think about it at the beginning, not at the end.

Hopes for the iPad Mini

When I got my first-gen iPad, I stopped using it regularly within a few weeks. It was just too big, too thick, and too heavy - too heavy to hold for a sustained period of time, to really consider as a replacement for a laptop, or to bring with me places. In many ways, the iPad mini is what I really wanted the iPad itself to be, and how I want to use it. Smaller, thinner, and lighter than a laptop. Easy to carry everywhere. More immersive than an iPhone. It’s much better suited for the couch, bed, hammock, bus, or car. It’s the size of a book but the weight of a pad of paper.

Today, most iPhone apps are meant to be used in portrait (if not exclusively, then at least primarily). The OS goes out of its way to enforce this; the home screen is in portrait, and locking the orientation restricts you to portrait (even in cases like video and the camera where it makes no sense). On iPad, you can orient the device any way you like, including for the home screen and orientation lock, but I’d wager that most people use it primarily in landscape. The narrower edge design of the iPad mini seems to encourage more portrait use, which means there may be an awkward early-adopter period of apps that aren’t as useful on the mini because they are optimized for landscape over portrait. One possible benefit of the smaller size and the portrait emphasis is that maybe, just maybe, scaled-up iPhone apps won’t look as comically bad on the mini (and don’t scoff; there are hundreds of thousands of apps that aren’t optimized for iPad). Who knows.

Last week I said I wasn’t going to buy one until I tried it out and felt the size. Oops. I guess we’ll see how it feels when I get mine on Friday.

Managing Dependencies With CocoaPods

Dealing with dependencies in Objective-C has always been a tedious process. You typically do some git submodule stuff, import the other Xcode project into yours, add a dependency, add a linker target, set some compiler flags, etc., or you include the project’s .h and .m files manually. Then you end up running into problems because the header paths are wrong, or you forgot to add the linker flags that make categories work, or some other problem. If that project requires ARC or iOS 6, you have to figure that out and set it up to be consistent with your project. Then, when you need to upgrade the library, you need to make sure all these steps still work, and hope nothing new got added that might break. It’s a very error-prone process. Now, being a stubborn developer who’s always done it this way, I’ve been wary of any tools to automate this process, as I usually think I can handle it myself, and I’m usually wrong. Other languages have had package managers to solve this problem, so why not Objective-C?

CocoaPods tries to solve this problem by automating the process of fetching dependencies (and recursively fetching their subdependencies), adding them to an Xcode project, managing paths for everything, adding any extra compiler or linker flags, copying in any resources (images, nibs, sounds, or whatever else), and building it all into your project. The end result is a very simple process: define your dependencies in a file (called a Podfile), run a command-line tool, and then just build your app and reference those dependencies. If you need to update dependencies or add new ones, just add them to the Podfile and run the tool again. It’s a far cry from managing all this stuff yourself. And, as of this writing, there are over 600 projects you can include in your app.

Under the hood, CocoaPods creates an Xcode project which builds a static library, libPods.a, consisting of all your dependencies. It adds this project to an Xcode workspace and makes your project dependent on libPods.a using an Xcode config file. It then rewrites your Xcode project to link libPods.a, copy resources, and set some paths to variables included from the config file. It even detects if your project uses ARC, and sets flags appropriately. The result is that the changes to your own project are minimal; the bulk of the machinery lives in a project under the control of CocoaPods, so it can change without disturbing yours. It’s a well-thought-out system.

To get started, you need to install the CocoaPods gem with a gem install cocoapods at the command line. Then, in the root of your Xcode project, add a Podfile that lists your dependencies and your deployment target. For this example, we’ll target an iOS 6 app that depends on the AFNetworking and FormatterKit projects. You can search for more projects on CocoaPods.org.

platform :ios, '6.0'
pod 'AFNetworking', '~> 1.0'
pod 'FormatterKit', '1.0.1'

Note: CocoaPods uses semantic versioning to determine how to handle version numbers. The version string can either be a specific version, or can include an operator that tells CocoaPods to pick a version for you. The ~> operator matches on the last component given: ~> 1.0.2 means “any 1.0.x that is at least 1.0.2”, while ~> 1.0 means “any 1.x that is at least 1.0”. You can also use >, >=, <, or <=, which do what you expect.

Once you have this in place, run pod install. This command will:

  • download the podspec (a manifest describing the project’s requirements and build instructions) for each dependency you list, and for any subdependencies
  • check the requirements in each podspec to ensure that your project meets them (so a Mac-only pod won’t be added to an iOS app, and a pod that requires iOS 6 won’t be added to an iOS 5 project)
  • set up a new xcodeproj with a static library target for all the source files in the dependency tree
  • set up an xcworkspace if you don’t already have one
  • add the Pods xcodeproj to this new xcworkspace
  • create an xcconfig file that includes header paths for all dependencies
  • change your xcodeproj to use the xcconfig file for header and linker paths
  • add the libPods.a library to the Link Binary With Libraries phase of your xcodeproj
  • add a new Copy Pods Resources script phase to copy any resources to your bundle

Once this is in place, you can build and run. Unless there are any problems with the dependencies, Xcode will compile all the dependencies and link them into your app. It’s very important that you use the xcworkspace, so Xcode knows how to build the Pods project correctly. You can then #import <AFNetworking/AFNetworking.h> to begin using the code. That’s it!
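For example, with AFNetworking linked in, fetching some JSON takes just a few lines of standard AFNetworking 1.x usage (a quick sketch; the URL is a placeholder of mine):

#import <AFNetworking/AFNetworking.h>

// Fetch and parse a JSON resource; the endpoint is hypothetical.
NSURL *url = [NSURL URLWithString:@"https://example.com/api/items.json"];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
AFJSONRequestOperation *operation = [AFJSONRequestOperation
    JSONRequestOperationWithRequest:request
    success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) {
        NSLog(@"Got JSON: %@", JSON);
    }
    failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) {
        NSLog(@"Request failed: %@", error);
    }];
[operation start];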

I’ve started using CocoaPods on a project and have been enjoying it over managing dependencies myself. I haven’t seen any reason to believe it would be more problematic than doing it all myself, and there are plenty of benefits. Dependencies can be kept up to date much more easily, and their inclusion process is much more strictly defined (and automated). For many projects, CocoaPods is far more likely to get the setup process right than I am, and it’s faster to get set up. I recommend checking it out for your projects.

Letterpress Review

About a week ago, Loren Brichter teased a new game that was about to be released. Wednesday, it went live as Letterpress, a unique and easy-to-learn multiplayer word game. It uses Game Center’s turn-based system (similar to what Plus+ and OpenFeint had before it) to save and push game state between players, whether the other person has the game open or not. It has a very clean, simple visual style not unlike Microsoft’s Metro with a twist of the old Mac System 7 aesthetic - simple colors, squares, circles, lines, and clean typography. It looks and works great.

The game itself is very well designed. You make words from any of the 25 tiles on the board. Playing a word turns all the tiles to your color. Surrounding a tile you own with one on each of the four sides “shields” it such that it can’t be taken until one of the surrounding tiles is taken. Any letter can be played in any turn. Whoever has the most tiles when they’re all captured is the winner.

My favorite thing about this game is its free-to-play nature, which entirely removes the barrier to inviting someone new to play. If you are only interested in playing with a significant other, you don’t have to wrestle with the idea that this game might not be for you; just try it. If you like it, and want to expand beyond two games or choose a different theme, it’s a 99-cent upgrade. This is freemium done absolutely right, and it shows how you can use the tool without seeming like a scumbag trying to nickel-and-dime people out of their cash.

For a 1.0, it’s very polished, but there are a few quirks. I’ve got a LOT of games running at the same time, some with very close friends, some with random Internet people. I’d love to be able to “pin” certain people to the top of the list, so that when they’ve made their move, their games would immediately move to the top and I could play them faster. There also needs to be a better way to set up a rematch; Game Center’s UI is not forgiving in this matter. The dependence on Game Center has been Letterpress’ most crippling restriction so far, with a number of server outages in the two days it’s been out. And the board could use a better algorithm for laying out tiles in the grid, because a game with six Z’s and one vowel is not very fun. But these are minor nits that get solved with time or a rematch.

I’ve been hooked on Letterpress since it came out, and you should be too. Get it free from the App Store.

iPad Mini

The iPad mini is basically a small iPad 2. It has an upgraded camera, improved wireless, and a higher-density screen (163 ppi to the iPad 2’s 132). But the screen is only as good as the original iPhone’s, and it’s running the same 19-month-old A5 processor (which is no slouch, but is hardly state-of-the-art). This is the same chip used in the latest iPod touch, but here it has more pixels to drive. I wouldn’t be surprised if, even with the non-Retina display, this device feels a little sluggish compared to an iPhone 5, or even a 4S.

The mini certainly fills a need; the current iPad is too large to be truly portable, yet smaller than any notebook you can buy. The iPad has definitely been the dominant player in the 10-inch tablet market, but the 7-inch tablet market has been growing. The leading competitor in the 7-inch space is the Nexus 7 (a very capable tablet), which will probably end 2012 in a respectable #2 place, with sales in the area of several million units. It makes sense that Apple would want to try to hold on to the top seat.

The $329 base price point, however, is a strange and awkward place to start the lineup. Not only is it $130 more expensive than the Nexus 7, it also misses the psychological barrier of getting under $300. This propagates through the upgraded models as well, causing a weird staggering effect. In fact, adding in the iPad 2’s and the iPad 4’s price points, we get this chart of 13 distinct prices spread across 14 models:

Price  Model      Storage  Cell Data
$329   iPad mini  16 GB    None
$399   iPad 2     16 GB    None
$429   iPad mini  32 GB    None
$459   iPad mini  16 GB    4G
$499   iPad 4     16 GB    None
$529   iPad mini  64 GB    None
$529   iPad 2     16 GB    3G
$559   iPad mini  32 GB    4G
$599   iPad 4     32 GB    None
$629   iPad 4     16 GB    4G
$659   iPad mini  64 GB    4G
$699   iPad 4     64 GB    None
$729   iPad 4     32 GB    4G
$829   iPad 4     64 GB    4G

While there are some overarching rules (e.g. if you want more space, or you want cellular data, you’re paying more), there’s no consistency as you move up or down by one price point. If you were thinking of spending an extra $30, you suddenly have a lot more variables to consider. Perhaps Apple did this to squeeze a few extra dollars out of the customer, but my hunch is that it’ll have the opposite effect. Say you walk into the Apple Store to buy a base-model iPad 4 at $499. If you wanted to spend a little more, you could get a slower iPad with 3G, or a smaller iPad with a lot of space you don’t know if you need. On the other hand, you could get an iPad mini with the exact same storage, a smaller screen, and 4G data, all while walking out of the store with $40 in your pocket. It’s not a hard conclusion to draw.

In the end, Apple will sell a zillion of them, and they’ll work fine. In a year, Apple will announce the next iPad mini, which will probably include a Retina display and a more modern chipset, and perhaps a price drop to $299 as well. It just feels like they’re holding some of that stuff back from this version, and it doesn’t seem like price is the motivating factor.

Personally, I’m waiting to get one until I actually hold it and try to fit it into my large-but-not-iPad-large jacket pocket. The true test of a device like the iPad mini is its portability. The Nexus 7 fits in my jacket, but barely. Hopefully the iPad mini fits as well.

Apparchy

When Twitter’s mobile apps were still Tweetie, they had a screen which let you change the API root. So if an API method is named 1/statuses/update.json, the app appends that to the API root, giving a URL like https://api.twitter.com/1/statuses/update.json. Change the root to http://foo.com/bar/, and the API’s URL becomes http://foo.com/bar/1/statuses/update.json. If you were on a network where Twitter’s API was blocked but a proxy server wasn’t, you could use this to keep connecting. Soon after, WordPress and Tumblr built versions of their APIs which supported the Twitter API, so you could use those services from within Tweetie. Then Twitter bought Tweetie and moved everyone to OAuth.
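To make the root-swapping mechanics concrete, here’s a hypothetical illustration in Objective-C (my own sketch, not Tweetie’s actual code):

// The API root is just a user-configurable prefix; the method path is fixed.
NSString *apiRoot = @"https://api.twitter.com/"; // or @"http://foo.com/bar/"
NSString *method = @"1/statuses/update.json";
NSURL *url = [NSURL URLWithString:[apiRoot stringByAppendingString:method]];
// Every request now goes to whichever server the root points at.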

A couple weeks ago, I noticed that this screen was still present in Twitter’s official apps. I’ve been a big fan of App.net since it came out; its API is different from Twitter’s, but not terribly so. I thought it might be interesting to try to build an “API translator” which pulled App.net streams and posts into the Twitter app. The team behind App.net held a hackathon this weekend, and I had my project.

Today I shipped the first alpha of Apparchy, which turns Twitter’s official iOS apps into App.net clients. You sign up for a free account on apparchy.net, add your App.net account, and then log into the Twitter app with your Apparchy username and password. The Twitter app will then start loading data from App.net through the Apparchy API. You can view your stream, your mentions, your profile, your followers, and your friends, as well as post, reply, star, and repost. It’s not entirely complete, and some parts of the app will have no data or return nothing, but the core experience is pretty good.

Apparchy implements Twitter’s OAuth security, and sends all data over HTTPS, so the process is as secure as any other call through Twitter. Apparchy doesn’t touch the Twitter API at all, and so it’s not bound by any of Twitter’s terms of service, should they be applicable. The only way this will get shut down is if Twitter removes the ability to change the API root in an update to their app (which doesn’t sound likely, from what I’ve heard).

Apparchy is what’s possible when you have open APIs like App.net’s and standards for how to handle server communication. It took a lot of research and trial and error to get it working, but it works very well. I had a blast building this, and I hope it can be used for a long time. If you have an App.net account, give it a try for free at Apparchy.net.

Wii U Chat: How Not to Design Features

The Wii U includes an unusual controller, the GamePad, that looks and acts like a small tablet with physical controls (or a large PlayStation Vita). Besides the conventional array of game controls, like two analog sticks and a bunch of hardware buttons, the GamePad includes a microphone, speakers, a headphone port, a screen, and a front-facing camera. There is also a more conventional, Xbox-like controller, the Pro Controller, which has none of those inputs.

Kyle Orland of Ars Technica wrote this piece on Nintendo’s “solution” for in-game voice chat on their upcoming Wii U console. Nintendo decided to add in-game chat to the Wii U, something you’ve been able to do for almost a decade on other gaming platforms. But here’s the catch: those ports on the GamePad won’t work with it. You have to buy a standalone headset and plug it into your GamePad to get it to work. Even stranger, the Pro Controller doesn’t even have the port you’d need to use one. Furthermore, unlike on Xbox and PlayStation, this support is not baked into the system as a whole, but will be opt-in for whatever games choose to spend the time, money, and energy to support it.

When you design a feature into anything, some percentage of people will use it, and some won’t. The more barriers you place between people and what they’re trying to do, the more of them will give up. Design is about removing the barriers between the person and the solution to their problem. I reach for my iPhone over my Vita because my iPhone is usually closer. I reach for my Vita over my Xbox because the Vita is self-contained and doesn’t make me change my TV’s inputs. I reach for my Xbox over my Mac laptop with Windows on it because my Xbox doesn’t make me log out of everything I’m doing and restart. These barriers may be small and subtle, but people choose the path of least resistance to solve their problems, and barriers act as resistance.

Frankly, this kind of half-assed solution for a voice chat feature - voice chat, mind you, being an integral part of multiplayer gaming for many - just increases our concern that Nintendo is still struggling to get online functionality right this time around.

Inexplicably, Nintendo chose to add all kinds of barriers to this one solution - how to talk to your friends while playing games. I can’t tell if this was intentional or just a design gaffe. Either way, what does it say about the rest of the Wii U? And what other features are going to suffer because Nintendo built them without caring much about them?

Apple Buying Color Tragically Makes Sense

This afternoon, Matthew Panzarino and Ken Yeung of The Next Web posted about a potential acquisition of Color, the poster child of startup excess. A gut reaction of stunned disbelief is not unreasonable here, after a string of flopped products and tales of the CEO splitting for Maui. But after the shock comes intrigue. The Next Web rarely posts acquisition rumors unless they’ve triple-checked everything. So if we assume it’s true, the question remains: why?

$41 million means you can make extremely compelling offers to the best engineers. Daniel Jalkut found that some of that cash went to paying for a number of tech patents as well. These patents relate to Color’s technologies for grouping people together by location and sharing content between them. Their engineers spent the last year and a half tuning these algorithms, even if few people ever used them. Part of the reason Color was such a flop was that everyone had to use it for you to want to use it. That’s not an easy sell, especially in an environment like iOS where a person has to actively be using your app for it to provide value to others.

But let’s imagine a world where this stuff is built into the heart of iOS. There may not have been a lot of people who used Color, but there are a lot of people who use iOS devices, and suddenly Apple has solved the chicken-and-egg problem of availability. The solution Color offered becomes much more useful when offered by Apple, who can break any of the rules they impose on third-party developers. If you go to a barbecue or a concert, and everyone’s taking pictures and video, your iPhone will know that all these photos relate to the same event, and can group things together. It can tie in data from your address book to determine who the people around you are, and whether you know them. You can create Photo Streams of events with everyone’s (or just your friends’) photos. Maybe this would integrate with the calendar, or even Facebook, to automatically associate photos and videos with events. (It’s worth noting that Google recently introduced a very similar feature in Google+ and Android.) And who knows, maybe in some weird way this could become an aid to Apple’s troubled Maps, providing some kind of functionality like Street View or Microsoft’s Photosynth.

So maybe it makes sense that Apple might acquire this company for its expertise. Sure, they could do it all themselves, but Apple tends to buy companies with expertise in areas it wants to do better in. And buying the company outright gets you the engineers and saves you from the patent lawsuits. But if Color managed to “succeed”, it’s for many of the wrong reasons. Raising a ton of money, shipping a few barely-used apps, amassing a pile of patents built on stale prior art, and pooling developers around a niche set of knowledge is the role of a research department inside a company like Apple, not a startup. If this were a model for the industry, we’d be looking at apps with no real utility to people, built by companies that focus on compartmentalizing knowledge and locking it away from others, all in the hopes that a cell phone maker has a bunch of cash to throw at you for the next new feature. That’s not a bright future. So while this may make sense for Apple, for Color, and for iOS users, it sets an uncomfortable precedent. Hopefully it won’t turn the overfunded startup into a model to be emulated.

Declaring Blocks in Objective-C

Blocks in Objective-C are super useful for making your object-oriented code a bit more functional. But as blocks are an extension to the C language, they have to play by the rules of C, so the syntax is a little obscure and the documentation can be a little hard to find. So here’s a guide on how to declare blocks in the various places you might need them.

The standard way of declaring a block:

returnType (^variableName)(parameters);

void (^handler)(NSString *thing1, int thing2); // call with handler(@"foo", 42);
int  (^block)(NSString *); // call with: int value = block(@"foo");
BOOL (^doSomething)(void); // call with: BOOL response = doSomething();

The order here is return type, then the block name, then the parameters (or void if you have none). Counterintuitively, the variable name is NOT the last thing in the declaration; it comes after the ^ which accompanies blocks. Parentheses are significant, so don’t forget them. Whitespace between the pieces is not. When declaring a block like this, you can supply names for the parameters, or omit them. They’re a good thing to add, though: if you use Xcode to autocomplete one of these, it will include the parameter names in the autocompleted call; if you omit them, the completion omits them too, leading to compile errors you have to fix by hand.

If you find yourself using a block type in more than one place, you can make it a typedef. Xcode even has a handy autocomplete for this: type typedefBlock and autocomplete, and you’ll get a template that is way easier to use than remembering all this stuff. Then you can reference it like any other typedef. Again, parentheses are significant, whitespace is not.

typedef returnType (^typedefName)(parameters);

typedef void (^MyBlockType)      (NSString *thing1, int thing2);
typedef int  (^MyOtherBlockType) (NSString *);
typedef BOOL (^MyThirdBlockType) (void);

Properties:

@property (nonatomic, copy) void (^handler)(NSString *param1, int param2);
@property (nonatomic, copy) void (^block)(NSString *);
@property (nonatomic, copy) BOOL (^doSomething)(void);

@property (nonatomic, copy) MyBlockType aBlock;
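(The copy attribute matters here: blocks are created on the stack, and copying moves them to the heap, so a copied block can safely outlive the scope that created it.)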

Instance Variables:

@interface MyBlockThing : NSObject {
  void (^handler)(NSString *thing1, int thing2);
  int  (^block)(NSString *);
  BOOL (^doSomething)(void);

  MyBlockType aBlock;
}
@end

Stack Variables:

-(void)run{
  void (^handler)(NSString *thing1, int thing2) = ^(NSString *thing1, int thing2){
      // do something
  };

  int (^block)(NSString *) = ^int(NSString *thing1){
      // do something
      return 42;
  };

  BOOL (^doSomething)(void) = ^BOOL(void){ // note: the (void) is optional here
      // do something
      return YES;
  };

  MyBlockType aBlock = ^(NSString *param1, int param2){
      // do something
  };

  handler(@"foo", 42);
  int value = block(@"foo");
  BOOL response = doSomething();
  aBlock(@"foo", 42);
}

C Function Parameters:

void doSomethingWithBlock1(void (^block)(char *thing1, int thing2));
void doSomethingWithBlock2(int (^block)(char *));
void doSomethingWithBlock3(bool (^block)(void));
void doSomethingWithBlock4(MyBlockType block);

Objective-C Method Parameters:

Note: When writing method signatures that include blocks, you do NOT include the name after the ^. Instead, the parameter name goes where any other Objective-C method parameter name would, just outside the type’s parentheses.

-(void)doSomethingWithBlock1:(void (^)(NSString *thing1, int thing2))block;
-(void)doSomethingWithBlock2:(int (^)(NSString *))block;
-(void)doSomethingWithBlock3:(BOOL (^)(void))block;
-(void)doSomethingWithBlock4:(MyBlockType)block;
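To make the calling side concrete, here’s how one of these might be implemented and invoked - a minimal sketch (the nil check is my own addition, since invoking a nil block crashes):

-(void)doSomethingWithBlock3:(BOOL (^)(void))block {
    // Invoke the block like a function. A nil block would crash here,
    // so check it first if the parameter might be nil.
    if (block) {
        BOOL response = block();
        NSLog(@"block returned %d", response);
    }
}

// The caller passes a block literal:
[self doSomethingWithBlock3:^BOOL(void){
    return YES;
}];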

So there you have it. Those are the most common cases for declaring Objective-C blocks. Go forth and do evil with them. For more information on blocks, see Apple’s Blocks Programming Guide and Apple’s Short Practical Guide to Blocks.

async.js

One of JavaScript’s greatest strengths is its extremely powerful function syntax. You can use it to encapsulate portions of code, large or small, and pass these functions around like any other object. node.js takes this and builds an entire API on top of it. This power is great, but left unchecked it can lead to convoluted and complex code - the “callback spaghetti” problem. Here’s a sample JavaScript function which chains together some asynchronous APIs.

function run() {
  doSomething(function(error, foo) {
      if(error){
          // handle the error
      }else{
          doAnotherThing(function(error, bar) {
              if(error){
                  // handle the error
              }else{
                  doAThirdThing(function(error, baz){
                      if(error){
                          // handle the error
                      }else{
                          // finish
                      }
                  });
              }
          });
      }
  });
}

async.js tries to fix this by providing APIs for structuring these steps in a more logical fashion while keeping the asynchronous design pattern that works so well. It offers a number of control flow tools for running asynchronous operations serially, in parallel, or in a “waterfall” (serially, passing each step’s results to the next; if any step fails, the chain stops and jumps straight to the final callback). It also works with arrays, and can iterate over items, run map/reduce/filter operations, and sort arrays. It does all of this asynchronously, and returns the data in a structured manner. So the above code looks like this:

function run() {
  async.waterfall([
      function(callback){
          doSomething(function(error, foo){
              callback(error, foo);
          });
      },
      function(foo, callback){
          doAnotherThing(function(error, bar){
              callback(error, foo, bar);
          });
      },
      function(foo, bar, callback){
          doAThirdThing(function(error, baz){
              callback(error, foo, bar, baz);
          });
      }
  ], function(error, foo, bar, baz){
      if(error){
          // one of the steps had an error and failed early
      }else{
          // finish
      }
  });
}

While the resulting code may be a little longer in terms of lines, it has a much more clearly defined structure, and the error cases are handled in one place, not three. This makes the code much more readable and maintainable. I love this pattern so much that I’ve begun porting it to Objective-C as IIIAsync. If you’re a JavaScript developer and you use functions for more than the basics, you want to use async.js.
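To give a flavor of the port, here’s a minimal sketch of the waterfall idea in Objective-C, with hypothetical names of my own (assuming ARC; this is not IIIAsync’s actual API):

#import <Foundation/Foundation.h>

typedef void (^StepCompletion)(NSError *error, id result);
typedef void (^Step)(id input, StepCompletion completion);

// Run each step in order, feeding each step's result into the next.
// The first error short-circuits straight to the final callback.
static void RunWaterfall(NSArray *steps, NSUInteger index, id input,
                         StepCompletion finally) {
    if (index >= [steps count]) {
        finally(nil, input);
        return;
    }
    Step step = [steps objectAtIndex:index];
    step(input, ^(NSError *error, id result) {
        if (error) {
            finally(error, nil); // fail early; remaining steps are skipped
        } else {
            RunWaterfall(steps, index + 1, result, finally);
        }
    });
}

// Kick it off with: RunWaterfall(steps, 0, nil, finalCallback);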

Starting Over

Starting over is often a difficult, but necessary, way to revitalize yourself. When I was in middle school, I wrote a column about video games in my school’s student newspaper. I’ve been writing since before I was coding, but lately coding has superseded writing in importance in my life. I haven’t been content with this for a long time, and have been trying a variety of strategies to make myself write more. My personal blog, SteveStreza.com, has acted somewhat as the outlet for this; it has succeeded in getting me to write long-form articles, but it has largely failed at producing shorter and more frequent content. The flip side of that success is a lack of focus and the burden of ever more long-form content. I love writing, but the hole in what I have been writing has been bothering me for a while.

So today, I’m beginning a new experiment: Informal Protocol. This new blog is focused on development, design, tech, and culture. The goal is to keep most articles at five paragraphs or fewer, and to post at least once a day. But as the name implies, this is an informal protocol, and it won’t always be followed. Quantity and quality will have peaks and valleys, and the focus may skew one way or another. It’s wholly possible the direction may drift and this becomes something else entirely. But hey, sometimes you just have to give it a shot.

Informal Protocol is an experiment. Like all experiments, it may fail. But sometimes you have to just jump face first into a new adventure and start over. If you would like to join me on this adventure, you can follow new posts via RSS, App.net, or Twitter.