Conceptually, add or change only one thing at a time; otherwise, you risk confusing your customers.
Imagine if the iPhone had been released ten years earlier.
Would it have failed? Arguably, yes! At minimum, adoption would have taken much, much longer.
The market wasn’t ready. Not enough of the general population in the US, or likely elsewhere, would have accepted it much earlier, except perhaps in Japan.
Features of the iPhone would have been too foreign to most markets, even if all of the technology had been readily available. Much of it was available earlier, yet the Apple Newton and Palm PDAs of the 1990s never attained the heights of success of the iPhone. Why?
This isn’t about technology. It’s about adoption.
Adoption requires understanding.
Understanding comes from awareness of, familiarity with, or comfort with the matter at hand.
You might build a better mousetrap, but unless other people understand why they might need a mousetrap in the first place, your invention goes unused by them.
They need a compelling reason to switch, especially if the previous thing was good enough at satisfying their own perceived needs.
If you disagree and believe that the iPhone or iPad could have achieved similar success a decade earlier, consider twenty years earlier.
Some believe the iPad should have been released many decades ago.
After all, Alan Kay designed an iPad-like concept in 1972, called the Dynabook. Epson released the HX-20 in 1981 as a flat slab portable small enough to fit full-scale in a two-page spread within Time magazine that year. Yet the tablet form factor didn’t attain mainstream traction in North America until 2010 with the iPad. Even so, Alan Kay stated in 2017 that the world still lacked a full expression of his Dynabook concept 45 years on.
Consider the sequence in which various underlying functionality reached mainstream adoption and awareness, all taken for granted by 2007, facilitating the convergence that let the iPhone and iPad each become an “overnight” success:
There is another dimension to this story. Consider the mid-1980s tagline for American Express charge cards: “Don’t leave home without it.” Those ads were prevalent back then. It is therefore unlikely that, in the intervening two decades, nobody before Steve Jobs gave the particular design direction, “Make something that people will never leave home without.” Of course, having several billion dollars in cash reserves at the time makes such direction much more likely to succeed.
Apple was also building upon core components from Mac OS X (now macOS) when implementing the initial iPhone OS (now iOS). Tiger and Leopard, from that era, were quite stable Mac OS X versions and gained much respect among longtime BSD Unix people. Apple had a very stable foundation from which to expand capabilities for a new smartphone.
While there were qualitative differences, the set of functionality on Apple’s device was largely on par with what was available elsewhere: push email, SMS text, web browser, etc.
Key new features were visual voicemail and a streamlined on-boarding process, each of which was more about changing established patterns within mobile carrier corporations.
In hindsight, other differences were incremental: a virtual keyboard instead of recognizing hand-drawn letters or a physical keyboard, gestures like pinch-to-zoom, sensors for switching portrait/landscape orientation, rendering web pages as on a desktop or laptop because the resolution of tiny screens could now accommodate a reasonable depiction, and so on. But of course, all of these were done with the design finesse for which Apple was known at the time.
Other gestures like swipe had already existed on Ultra-Mobile PC tablets.
It’s interesting to note, however, what was absent upon release compared to later:
Particularly notable is that only many months after launch was support for “Web 2.0” apps announced, and another full year passed before third-party native apps became available.
While dockable HTML5 apps represented a big change, the approach addressed two important factors:
By deferring the availability of third-party apps, this also addressed Gartner’s trough of disillusionment. One needed to wait only a few months after release to learn of forthcoming apps, then a few months more for details of those apps, until finally being able to use them. Each of those milestones hit at or before the point when most people would have perceived any of the ill effects depicted in Gartner’s chart.
This is perhaps the only principle where marketers and engineers might be willing to agree without involving a mediator or nanny:
Too many changes at once risk confusing the very people you are hoping will use the product or service.
Such a degree of change increases cognitive load, so more effort is required on the part of your audience or population.
Stated another way, consider the parable: if you throw a frog into water that’s already boiling, he’ll simply jump out. But put him in cool water and bring it to a boil slowly, and you’ll get frog soup.
Instead of literally turning up the temperature, think of it as iterating.
Of course, you can iterate quickly and frequently.
Just be certain there are clear demarcations between such iterations.
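One common convention for marking such demarcations is semantic versioning, where each release number signals the scope of what changed. Here is a minimal sketch in Python; the scope names and version strings are illustrative assumptions, not from any particular project:

```python
def bump_version(version: str, change: str) -> str:
    """Return the next semantic version for a given scope of change.

    'major' signals a breaking change, 'minor' a new feature,
    'patch' a bug fix -- one clearly labeled step per iteration.
    """
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change scope: {change!r}")
```

Each iteration, however rapid, then carries its own clearly labeled release number, so your audience can see exactly one step at a time.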
This brings us to the software engineering perspective.
Many developers (though unfortunately not enough, as seen from experience) have the discipline to restrict their changesets to a single item, a group of highly interdependent items, or at least a thematically related set.
For instance, a Google Coding Style Guide requires getting permission before making mass changes.
This becomes sound practice when working with others, as each person’s modifications to the same feature are much more likely to merge without conflict.
Beyond backroom coders, deploying new functionality that is clean and isolated makes it easier to test and confirm, assuming well-crafted tests, sufficient code coverage, and boundaries of I/O combinations addressed.
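As a rough illustration of covering “boundaries of I/O combinations,” consider a small, isolated function (hypothetical, chosen only for this sketch) exercised exactly at and just beyond the edges of its valid range, not merely in the middle:

```python
def clamp(value: int, low: int, high: int) -> int:
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Boundary cases: exactly at, just inside, and just outside each edge.
assert clamp(5, 0, 10) == 5     # interior value passes through
assert clamp(0, 0, 10) == 0     # lower boundary
assert clamp(10, 0, 10) == 10   # upper boundary
assert clamp(-1, 0, 10) == 0    # just below range, clamped up
assert clamp(11, 0, 10) == 10   # just above range, clamped down
```

Because the function is small and isolated, this handful of cases covers its entire behavior; a change tangled across several features would not be testable this cleanly.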
Adding version numbers to communications protocols, data structures, files, and API calls further isolates changes; that topic will be addressed in another essay.
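As a brief preview of that idea, a message format carrying an explicit version number lets old and new readers coexist, isolating each format change to one clearly marked branch. This is only a sketch; the field names below are hypothetical, not from any real protocol:

```python
import json

def parse_message(raw: str) -> dict:
    """Dispatch on an explicit version field, so each format change
    is isolated to one branch and old payloads keep working."""
    msg = json.loads(raw)
    version = msg.get("version", 1)  # payloads predating versioning count as v1
    if version == 1:
        # v1 used a single "name" field.
        return {"name": msg["name"]}
    if version == 2:
        # v2 split it into given/family names -- one change, clearly marked.
        return {"name": f"{msg['given']} {msg['family']}"}
    raise ValueError(f"unsupported message version: {version}")
```

Note the default version for legacy payloads: even data written before anyone thought to add a version number remains readable after the change.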
By introducing or changing only one fundamental piece of functionality at a time, it’s easier for people to grasp the sequence of modifications.
Something that experienced software developers will recognize: when a new programming language comes along, there’s an advantage to learning it early and following its updates along the way.
It was less necessary for Python due to the intended simplicity of its language design, but it was clearly an advantage for those using C++ since the early days, when “template” was introduced as an unused reserved word.
The same has been said of Common Lisp by those familiar with its pre-ANSI Standard history. Someone(?) noted that you could determine the decade in which a particular function was introduced into the language by the peculiarities of its spelling. This leads to a predictive ability: understanding such nuances of naming allows one to easily guess what a function’s name would be, and if it’s not in the standard library, it has probably been written by someone else and is readily available via Quicklisp.
Violations of this rule of thumb include modern art.
Perhaps that should be written with a trademark symbol, as some consider many of the new directions since the Impressionists to be rubbish and cite reports of the CIA’s clandestine involvement in promoting it as evidence.
Debates over art versus expression notwithstanding, the chasm between opposing sides may be simplified by understanding the following examples:
When the Impressionists were criticized for not blending and not finishing their paintings, one position quickly emerged as valid: in the age of photography, a painting may omit attempts at realism.
Inspired by the hypercube and quantum mechanics, Cubism took this lack of realism one step further. Those today who have seen an animation of a four-dimensional cube being rotated might find new insights into this period of art.
Respect for other branches, however, diminishes quickly for some when Duchamp’s “Fountain” gets mentioned or exhibited. Debate continues over whether something such as his urinal may be deemed art simply because someone declares it to be so and signs it using a pseudonym.
Of those who initially rejected “Fountain” as art, some later accepted it (or at least agreed to remain neutral) after studying the broader context of art history, symbolism, expression, and so on.
For our purposes, that last bit of context illustrates our point: too broad a change, or too many features changing all at once, can be harmful. But when people catch up and digest it all, some at least will be accepting again.
Others continue standing by certain principles and may never accept the change.
But as a designer, marketer or engineer, would you want to be caught in that struggle?
As a final example, and coming full circle, some feel Apple lost its way after the resignation and then passing of its co-founder, Steve Jobs. Core principles of Apple’s own guidelines dating back to the 1980s have since been neglected in favour of “simplicity” or minimalism. Converts from BSD Unix to Mac OS X were once staunch proponents, and now many of these same people advocate abandoning Apple products. (I for one eagerly await the tasty treats of Jordan Hubbard’s NextBSD, but I digress…)
Changes such as eliminating the headphone jack on the iPhone 7, removing Ethernet ports, and dropping the magnetic power connector on laptops occur slowly, my little frog. The planned merger of apps across all their product lines implies the soup is almost ready. (For posterity, that’s currently: desktops, laptops, tablets and phones.) Such changes segue to a topic for later review. Time will tell how far an already successful product line can carry itself into its own future.
Whether talking about the Newton versus the iPhone, technology or adoption, the scales of acceptance and rejection may be swayed. If you are developing a product, it’s straightforward enough to tip those scales in your favour by adding or changing only one thing at a time.