I’ve had this in mind for a small project for some time now. When I learnt that someone had already built one in my neighbourhood and that it stood less than a week before being vandalized, I was slightly heartbroken. But undeterred.
Ultimately, what I learned from my inadvertent Wikipedia hoax was not that Wikipedia itself isn’t reliable, but that so many people believe it is. My lie—because that’s what it was, really—was repeated by dozens of sources, from bloggers to academics to journalists. They knew better than to attribute Wikipedia directly, because even a seventh-grader knows citing Wikipedia as a source is tantamount to admitting that you haven’t done any research at all. Instead, they referred to the source of the Amelia Bedelia Cameroon lie in vague terms, such as “the literature I’ve read” on the subject, or even to Parish himself.
This was reinforced when I moved again this year, and was told by the movers, multiple times, that my boxes of books, rather than furniture like a bed and a couch, were what was weighing down their truck.
I can attest that wanton book collecting is indeed an issue. I don’t want to be saved.
The meticulous and laboured research phase seems overkill at first, but the newfound familiarity with the subject that results from it makes putting the final product together feel effortless. As in sports, there’s a muscle memory to the creative process.
The moral and emotional failure of the jerk is obvious. The intellectual failure is obvious, too: no one is as right about everything as the jerk thinks he is. He would learn by listening. And one of the things he might learn is the true scope of his jerkitude – a fact about which, as I will explain shortly, the all-out jerk is inevitably ignorant. Which brings me to the other great benefit of a theory of jerks: it might help you figure out if you yourself are one.
When I asked him what he wanted his life to be like when he retired, not at 35 after basketball, but at 70 or 75, after his business or movie or political careers, he said, “Hopefully … I’ll have myself a big boat which I can sail off to the Mediterranean with my friends and my family and we can hit every coast that we want to hit.” Of course, the Mediterranean was below us, spread out flat and blue towards the horizon, with nothing on it but a few large yachts; and I remember thinking, But this is what you’re already doing.
The only ill-willed, PR nightmarish thought about LeBron James you could take away from this article—and really the example supra is the only one I could find—is that he kinda seems disconnected from his reality. Except it’s the same kind of incongruent and rather normal surface-level statements (“I want a job which doesn’t feel like work.”) we are all prone to give in answer to benign questions (“What do you see yourself doing in the next five years?”) because we (young people/young men) have such a hard time articulating the emotional truth behind them (“I want a job that’s satisfying and fulfilling and which doesn’t feel like an entirely separate but obligatory component of my life whose only byproduct is regret and frustration”).
I don’t doubt LeBron is aware he already gets to live a Mediterranean life. What he means to say, I think, is that he hopes for one which isn’t unremittingly premeditated, orchestrated, intervened in, meddled with, glossed over, and otherwise marionette-ed by operatives concerned only with extracting maximal financial value and myth-building from his person, and who to no one’s surprise would be rather diametrical in their opposition to LeBron audibly articulating said awareness with anything less than total obfuscation or demonstrating anything resembling relatable human emotion and vulnerability. The irony being that what could be the most inspiring and exciting about LeBron James—and what I think Benjamin Markovits is grappling with in his article—is that he is in fact nostalgic, vulnerable, and as human as the rest of us.
Unfortunately, he never comes out and says exactly which dictionary he’s getting all this juice out of. But I was desperate to find it. What was this secret book, this dictionary so rich and alive that one of my favorite writers was using it to make heroic improvements to his writing?
I did a little sleuthing. It wasn’t so hard with the examples McPhee gives, and Google. He says, for instance, that in three years of research for a book about Alaska he’d forgotten to look up the word Arctic. He said that his dictionary gave him this: “Pertaining to, or situated under, the northern constellation called the Bear.”
And that turned out to be enough to find it.
I’m still cleaning bits of my brain off the wall. A speed record was set today for books reaching my Amazon wish list.
Certainly lots of interesting intelligence-related predilections to find, but the majority of this writer’s guide is made up of stuff anyone can benefit from. For instance, ever notice how redundancies remain pervasive throughout all types of writing? Also news to me: NA stands for not available and not, as I’ve always used it, not applicable.
Researchers from the Netherlands unveiled a burger made entirely from lab-grown stem cells Monday, cooking and tasting the test tube meat at a media event in London. Dr. Mark Post, a professor at Maastricht University, heralded today’s public tasting as an important step toward wide-scale adoption of synthetic meat — a transition that some see as a solution to looming environmental and agricultural crises.
I’m excited about the possibilities for these kinds of projects. There’s so much upside if we can get past, as Toor points out, the many hurdles standing in the way.
The article instantly reminds me of Margaret Atwood’s 2003 novel Oryx and Crake, in which the characters skeptically witness the rise of lab-grown meat in their own not-too-distant future:
Next they went to NeoAgriculturals. AgriCouture was its nickname among the students.
"This is the latest" said Crake.
What they were looking at was a large bulblike object that seemed to be covered with stippled whitish-yellow skin. Out of it came twenty thick fleshy tubes, and at the end of each tube another bulb was growing.
"What the hell is it?" said Jimmy.
"Those are chickens," said Crake. "Chicken parts. Just the breasts, on this one. They’ve got ones that specialize in drumsticks too, weave to a growth unit."
"But there aren’t any heads," said Jimmy. He grasped the concept — he’d grown up with sus multiorganifer, after all — but this thing was going too far.
"Those kids are going to clean up," said Crake after they’d left.
For us, now — years into this war-footing and this legal dynamic — to loudly proclaim our indignation at the maintenance of an essential and comprehensive investigative database while at the same time insisting on a proactive response to the inevitable attempts at terrorism is as childish as it is obtuse. We want cake, we want to eat it, and we want to stay skinny and never puke up a thing. Of course we do.
I like Simon’s tone, his rousing “Hello!”, more than his “relax, it’s just an old dog showing us new tricks” attitude, especially seen against the PRISM hyperbole. (Simon’s response is directed specifically at the Verizon data surveillance. PRISM isn’t something to dismiss or ignore. A log of my phone calls is not a log of my internet metadata.) But seditiousness without reason or reflection on our part is equally bad. Despite having discovered something over the last week that’s cause for suspicion and alarm, we still don’t know if anything illegal has happened or if any wrongdoing or abuse has taken place, nor have we heard any plain-spoken explanation of what exactly is going on. Faced with this complexity it’s easier to assume it must be bad; that PRISM was created by nefarious people and intended only for abuse and malicious behaviour. The worst-case scenario can be the only scenario.
And if we decide that’s the truth of it, as Simon points out, then we ought to eat our own bullshit. Privacy should matter now as much as it did when we deemed spending our evenings on Reddit poring over—sans oversight—thousands of pictures in search of terrorists acceptable use of our metadata. Privacy can’t be a switch we turn on and off as it suits us, and it can’t be a conversation we only want to have after we’ve suffered the consequences.
Chris Moneymaker — the name, the story, the way he won, it’s the exact fantasy. Suddenly everybody was able to justify playing poker. You’re telling your wife you’re going to the World Series of Poker, and it wasn’t a crazy thing, it was a good investment.
What a nostalgic piece. I don’t remember watching the 2003 WSOP, or who in my group of friends was responsible for starting our obsession, but nothing was more important in our lives that summer than poker. I turned 16 that year and Hold ‘Em was a natural fit because of its seedier, decidedly kid-unfriendly cachet that’s so alluring to teens trying to escape being teens. It wasn’t just my friends and me; my whole suburb was under the same fever. Knockoff metallic chip cases started appearing in everyone’s living rooms, always at hand. Esoteric pastimes of CCG players, e.g., caring about shuffling techniques and the paper stock of their decks, became totally normal pursuits all of a sudden. Kids who happened to luck into their own poker tables seemed more popular than those who lucked into their first cars. Math was not uncool anymore. Strange as it was, it’s still so vivid in my mind because it turned out to be the most social summer I had growing up. All of a sudden it became acceptable to flag a stranger down at the corner store carrying a long rectangular briefcase to see if they knew about some action that night. Everyone had something to say about Rounders. Minor social barriers broke down (French/English segregation amongst schools is big where I’m from) in the nobler pursuit of petty cash. I made friends I saw every day for months on end and have never seen since. I met a girl who said her dad was a spy. I never met him and I’ve never won the World Series of Poker. Obviously. But when you’re living in a collective fantasy, everything seems possible.
In a world inhabited by story-telling and story-consuming creatures, in which narrative is no longer expected to function as a vehicle for any deep meaning, our fate may be to get stories that never reach a conclusion.
An insightful read, in case 5 odyssean books and a TV series aren’t enough to feed your Game of Thrones fix. Show-only fans beware, however: there’s a major spoiler two-thirds in that, sadly, I can’t un-read.
Latter-day attempts at “relevance”—which have seen Superman tackling issues like world hunger and racism—backfired because Superman functions on a higher symbolic level. It is a hard-won lesson of comics: Showing a guy in blue tights and red cape weeping over the body of an abused child doesn’t bridge the distance between his world and ours, it brings the yawning gulf between them into sharp relief.
Interesting overview by Glen Weldon of Superman’s lockstep with American culture over the decades, especially for a guy like me who’s never been able to muster any interest in the character. I don’t get the appeal. If he’s supposed to represent some ideal for us to strive for—“truth, justice, and the American way!”—then we’ve been set up for failure. Superman gets to be who he is precisely because he lacks the humanity that makes the rest of us so fallible. We’re unable, as a species, to be so stoically selfless. That’s why I prefer his portrayal in Smallville and Superman Returns as someone who struggles, despite his incredible powers, with the same issues and doubts we all face and whose morals and convictions are actually challenged from time to time. Ratner’s Kal-El might be out-of-character by Weldon’s standards, but I think it’s the relatable Superman that’s inspirational beyond mere symbolism.
Visual effects caught on camera will always be more captivating than digital ones processed after the fact, so it’s always reassuring to see young filmmakers who appreciate that difference. And compared to others, it seems like working on the set of Oblivion lives up to the promises of Hollywood movie magic. It also just looks damn fun.
The editing for this clip, however, is not as impressive. More than once, a talking head/voiceover sequence praising XYZ is followed by another talking head/voiceover extolling the same subject almost verbatim, revealing how rehearsed and insincere these behind-the-scenes looks can be.
Odd blog post by the team behind Forecast, the excellent new weather app that’s attracting attention both for its quality and the medium used to create it.
So why does it feel as if the average native app is so much better than the average web app?
The line of questioning is what nags me. The interesting part of the web vs native debate isn’t about which technology feels better (a question which they’re convincing enough in demonstrating is immaterial anyways) but about why app developers overwhelmingly choose to work on native platforms. What I would’ve liked to see is an argument for why developers should prefer web solutions over native ones. Some reasons are obvious: cross-platform compatibility, over-the-air updates, and a dynamic and adaptable programming foundation in HTML. What’s tricky is convincing developers those things are worth leaving the advantages native apps provide, especially where it involves justifying a web app’s absence from the one place the majority of people shop for and discover new software. Technologies aside, native apps get a head-start from the visibility, added security, ease of use, and built-in marketing app stores provide. The reality the web advocates have to overcome is one where we’ve built an economy and marketplace around native solutions. The funny historical twist is that if web apps were as capable 6 years ago as they are today1, I can’t begin to imagine what kind of conversation we’d be having instead.
In comparison, the iPhone really was a tough space for web developers in 2007. App-wise, where did one start? There was little precedent to take inspiration from. Neither could they benefit from all the legwork Apple would save native app developers from with the iOS SDK in 2008. Other factors were altogether out of their control. Up until the 3GS, iPhones didn’t have the computing or networking power to deliver web performance comparable to native apps. ↩
Great cheat sheet by Jessica Hische to all those tiny typographic details that make a big difference in quality. Includes all the Mac keystrokes as well, saving you an additional Google search.
Concerning the em dash, make sure that what’s included between them—what you’re going to put here—is a strong enough break from the current thought to warrant its flair. In Hische’s example, a simple period would be as effective in conveying the emphasis of the narrator’s reactions: I once had to use the bus station bathroom. Horrifying. My own yardstick is to use an em dash for asides too brief to warrant a footnote1 but not suitable for parentheses (use parentheses to add details directly related to the ongoing thought).
Use footnotes for tangents so long they would be distracting if placed in the main text. They’re DVD extras. ↩
Two points I’m glad Dieter Bohn emphasizes in his review: Besides the Facebook integrations, the First runs stock Android (4.1 unfortunately). It also bucks the bigger-is-better mantra we’re used to seeing when it comes to screen size.
Looks to be the most user-focused Android design to date, even compared to Google’s own Nexus line of products.
A friend of mine believes that all big tech companies treat the web as a service worth competing over. Running with his perspective, I feel less crazy suggesting that Facebook’s ultimate goal is to become its own version of the internet. (He probably put that thought in my head too.) This idea lands somewhere between theory and practice. You can — right now — open a browser tab, log into Facebook, and find in that one place a majority of the information you could otherwise go through a variety of sites to find: restaurant menus, concert listings, what your friends are up to, photos, news, blog posts, games, ad infinitum. Still, the other sites endure. Some because they provide better info. Some are more popular. Many survive because Facebook hasn’t found a way to convince us all to log into it every time we launch our browsers. Which is why whatever Facebook Home turns out to be, it’s at least a somewhat pretty sizeable deal.
Despite the whole thing not even being official, the Facebook-centric launcher/homescreen/app-OS (Is it all these at once?) is already a strange entity. I’m shocked that Google is endorsing this at all, even if only by doing nothing. Letting Samsung or Amazon use Android as a base layer for their own operating systems is one thing; the information plumbing still runs through Google. Facebook Home is trying to circumvent Google altogether. There may be a Google search bar at the top of the screen, but I have a hard time believing that’s all it takes to convince your biggest competitor that your intentions are noble and just and “look pal, I think this is the beginning of a fortuitous partnership”. The Google-branded search bar — heck, the fact that Android is even prominently mentioned on the event invitation — must be a good bit of Jedi Mind Tricks. At first I was going to criticize Facebook Home for being too tentative. Why only a launcher/homescreen/app-OS instead of an entire OS or customized Android UI? Why not try and block Google’s access in every way possible? But it’s obvious, thinking it over twice, why Facebook Home makes more sense as is, rather than as an OS blitzkrieg that would almost certainly fail. A proprietary OS or Android fork would involve selling millions upon millions of phones, something I don’t think OEMs (Facebook is probably the one doing HTC a favour by giving them this much primetime US media attention), carriers, or customers are eager for, nor something I think Facebook has the skills to execute at the scale they need to pull this off. And you can be sure they would be drawing the line in the sand for the whole Android-as-commodity-OS-for-other-web-companies thing. Hence why a launcher/homescreen/app is so brilliant. Facebook gets to simulate an OS without the overhead of engineering their own, which they can potentially propagate like a virus across millions of its competitors’ existing and forthcoming devices. And they get away with it unchallenged because of a search bar? No really, it’s astonishing that Larry Page doesn’t realize he’s handing over his lunch to Mark Zuckerberg.
The splitting of the mobile ad pie is going to matter, but on a bigger scale than most of us are realizing. Supposing Facebook’s new project is in any way successful, it’s going to solve their recurrent login issue and let them be the internet’s portal on the fastest growing, most used class of computers. It means getting our first glimpse of the internet’s segregation.
Thanks to mobile computing, we access the internet through an array of apps, widgets, and OS level services instead of a single browser window. Controlling those portals is big business. Once there’s no longer a single way to reach the World Wide Web (your browser and a search engine being the first), the ad value of our attention skyrockets, meaning the richness and exclusivity of a platform’s information becomes an asset to be stockpiled rather than shared. Facebook wants to be the only internet its users need, and to do so it needs a combination of portals (Facebook Home/Facebook Messenger) and content (anything that’ll fit on a profile page).1 Adding reasons for us to stay, e.g., Instagram, Zynga, Bing search and Maps, becomes crucial to the business. When Facebook begins taking sizeable chunks of Google’s revenue away, it’ll force Google to limit access to its own content in an effort to lure and hold onto people using its services. If you were looking for perspective on why Apple would create its own mapping service, simply play this scenario out a little further.
The somewhat pretty sizeable deal in all this: Facebook Home stands to mark the beginning of the absolute ecosystems: silos not just of hardware and software, but also of knowledge. The islands of internet. And proof of my friend’s supernatural prescience.
Google has been doing the same thing for a long time now, only in a less overt manner. Its strategy was to provide the information backend for everyone else’s software and hardware and monetize the data flowing back and forth. Facebook is the first service with enough scale and reach to actually challenge them, not because it has more services or better information, but because its own data is more valuable to advertisers.
The textile industry is squandering an opportunity. Despite accounting for 8% of manufactured goods sales around the world, it has managed to stay on the sidelines of our mind share ever since ire over sweatshops boiled over in the 1990s. Nowadays it’s software designers undertaking the bulk of the PR work for textiles, as skeuomorphism finally impresses upon an otherwise fabric-oblivious generation the nuances of linen, felt, faux leather, and whatever other basic textiles make up your shirt’s blend. Blame my cynicism but I’m shocked Cotton or DuPont hasn’t seized the moment and begun demanding their logos mar every wallpaper or user interface element on which digitized versions of their products appear. Unfortunately for them, it looks as though the public’s honeymoon with skeuomorphism is already coming to a brandless end.
“The Trend Against Skeuomorphic Textures and Effects in User Interface Design”, the latest in a long list of attempts at explaining this particular eventide, stands out thanks to John Gruber’s uncanny ability to summon a history of events wholly disconnected from reality. His essay, like most magic, begins with a benign observation: there’s a trend forming among top-tier1 iOS developers steering away from the skeuomorphic design language of the platform. Trying to figure out why, Gruber cites Letterpress, Instapaper, and Twitterrific 5 as case studies (other good examples: Realmac Software’s Clear, Flipboard, and Simplebot’s Rise), endorsing Dave Wiskus’s false rationale that the examples supra cement iOS’s legacy as the birthplace of leading-edge, non-skeuomorphic design. Things immediately start to fall apart.
*Proper usage of the word skeuomorphism is contentious enough to warrant its own article, so I’ll address it here to avoid issues later on. Most of the ire is concentrated around its misapplication to designs which aren’t by definition skeuomorphic at all. I prefer deferring to the experts: Christopher Downer provides a good introductory overview that delineates the apples and oranges. In contrast, Chris Baraniuk’s position is polemic, calling into question the entire use of the word in relation to UI design and—not content to stop there—wondering whether or not the Wikipedia definition is more or less entirely rubbish. Louie Mantia also provides some needed myth-busting on the issue. While I tend to agree with each of their arguments, I still can’t get on board with their prescriptivist position. Doing so would mean ignoring how the word has transcended the boundaries of its old meaning and become a catchall term for a larger body of people using and adapting a definition that’s more popular in everyday use. Much the same way minimalism is flung around with little regard for definitions, we can use skeuomorphism as a genre word that, though perhaps frequently misapplied, is apt enough in practice for everyone to distinguish between a skeuomorphic-ish design and one that isn’t. And it’ll be used as such here.
From the start, both men’s design myopia refuses to acknowledge that non-skeuomorphic design existed elsewhere prior to 2012, whether as the preeminent aesthetic of Windows Phone 7, Microsoft’s mobile operating system2, or through the clean lines and sci-fi sterility of Android’s not-completely-flat-yet-not-stuffed-with-chrome UI. The sidestepping of any outside influence is meant as misdirection, a reshaping of events that encourages the idea that iOS designers live in a vacuum controlled by the whims of Apple. My guess is that Gruber thinks he can get away with this fallacy since Windows Phone sales have been tepid at best and the stock Android UI is almost always redecorated by whoever’s supplying the hardware. Except popularity isn’t a necessary condition of influence. Any competent accounting of flat UI design shouldn’t, and wouldn’t, ignore the contributions of Microsoft, Google, or even Palm, no matter how disappointing their sales records.3 Having declared iOS as the epicenter of this new trend, an iota of sleight is all that’s needed for Gruber to switch Apple’s position from beneficiary to benefactor.
Gruber’s chosen Apple’s Retina display to be the hero of his story, declaring it a singular breakthrough absolving designers from employing “the textures, the shadows, the subtle (and sometimes unsubtle) 3D effects” of skeuomorphs that were “accommodat[ing] for [the] crude pixels” of non-Retina-quality displays. His thought process involves comparing the influence of high resolution displays on UI design to the influence—in this case real and documented—they’ve had on digital type design. Quick recap: Retina-caliber displays are behind the viability of print-hinted fonts rendered digitally, which had hitherto looked insulting on the sub-par resolution of non-Retina displays. They’ve also had the reverse effect on screen-optimized fonts by suddenly making them appear vulgar, ridding them of their purpose. Gruber equates the trimmings of skeuomorphic design to stopgap fonts like Georgia and Verdana4: poor solutions used for a lack of better options, given that the “hallmarks of modern UI graphic design style are (almost) never used in good print graphic design”. Therefore, we ought to be thanking Apple for granting designers the opportunity to produce “graphic design that truly looks good” on our devices.
There’s no evidence I can find—and suspect will ever find—to defend the claim that skeuomorphic textures and effects are scapegoats for the inefficacies of lower-quality displays. Gruber so heavily leans on his comparison to screen fonts he starts to redefine the term, implicitly suggesting that skeuomorphism is equivalent to poor design taste. If you’ve made it this far then you know how spurious the whole idea is. Even Dave Wiskus’s 100-level explanation is enough for anyone to articulate the relationship between a skeuomorph’s purpose and a heavily textured material surface. Neither is there any reason to believe that skeuomorphic design is now defunct thanks to Retina displays, given that (a) we know a skeuomorph’s primary function isn’t to cover for crude pixels; (b) contrary to Gruber’s subjective analysis that all drop shadows and glassy surfaces look worse on them, Retina-caliber displays allow for even more detailed and striking effects, making already beautiful apps using skeuomorphic elements all the more stunning; and (c) even if we cede the last two points, questions abound on why, since the release of the Retina-bearing iPhone 4 in June 2010, Apple has all but ignored the apparent Retina-resolution design era and pushed towards heavier and heavier use of so-called parlor tricks on both iOS and Mac OS, or why so few third party developers have moved away from the skeuomorphic model. His entire essay is being driven in a car without a rear-view mirror, aces rushing out of its driver’s sleeves.
Most of the sensible explanations put forth in “The Trend Against Skeuomorphic Textures and Effects in User Interface Design”—that skeuomorphic elements are overused, how Retina-caliber displays can influence UI design—are perverted by the misconception that print design and UI design are one and the same.5 They’re not. Where print design is concerned with aesthetic cues and organization of information that’s conveyed subconsciously to the reader (e.g., the way the eye moves between two paragraphs and understands new ideas are being introduced, or how text size imparts hierarchy), UI design’s cues are dynamic and explicit. They must convey function, respond to input, morph, adapt, and tangibly interact with the user. The set of skills required for one doesn’t come close to the set needed for the other. When Gruber tells us that “[the] hallmarks of modern UI graphic design style are (almost) never used in good print graphic design”, he’s right for all the wrong reasons. The differences don’t even matter. What he’s trying to demonstrate is how UI design is undergoing the same crippling transitional phase print design—specifically as it concerned fonts—had to endure with the introduction of digital displays. His account of digital type’s hobbled history, right down to its rescue by high-resolution displays, is spot on. Yet the paths between the two arts don’t run parallel; software’s only ever been digital. Where’s the analog6 (or digital) counterpart we compare it to and say “We could do so much more if only we weren’t stuck designing this software on a screen”? As displays march on towards human-grade optics, of course designers’ options have improved, but there isn’t some past UI standard they’re trying to return to. Progress here is strictly forward. Nothing forced skeuomorphism on us.
The upshot to this mess is that Gruber’s initial question is actually worth considering. It never once occurs to him, however, that the answer needn’t be as convoluted as he makes it.
In his own words: “There is a shift going on, fashion-wise”.
Designers. Users. No one is immune to the fatigue brought on by overexposure. The numbers themselves are staggering. 700,000 apps downloaded 35,000,000,000 times. Even accounting for the large number of games making up that total, the prominence of skeuomorphic design is inescapable. We’ve refined, developed, added to, twisted, and debased the style down to a chintzy polish.7 Why doesn’t Gruber wonder whether we’ve simply tired of seeing yet another faux-textile background mimic a pair of pants no one would dare buy in the real world?
The analogies to fashion are easy to latch onto because they help make the distinction between aesthetics and function, something Gruber understands and has leaned on previously when describing user interfaces as “clothing for the mind”8. The premise is simple: No matter the amount of “stylistic tweaking”, UIs—or clothes—remain true to their form. So long as it remains able to divide the bill at the end of lunch (form), your calculator app can resemble whatever model Braun calculator it wants (stylistic tweak). The couture comparisons might be heavy-handed, but they’re a good starting point from which to find better reasons why we’re moving towards flat user interfaces. For example, it could be that designers are realizing there’s a whole new generation of people for whom the cues of skeuomorphic design aren’t referential, but merely aesthetic.9 What’s the point of mimicking a Braun TG 60 reel-to-reel deck for millions of kids and young adults who will never lay eyes on—never mind use—an actual physical tape recorder in their lives?10 Why stick by a design that’s losing its raison-d’être? (Ed. note: an update to the Podcasts app on 21-03-2013 got rid of the tape deck simulacrum.) We might also consider whether skeuomorphic design is even fit for the UIs of modern computing anymore. As we increasingly interface by way of gestures, voice commands, and inputs disconnected from physical analogs, are digital knobs and textures the most efficient or practical solution? Asking these sorts of questions—not wondering what’s changed since Apple released a new iPhone—is how we begin noticing the influence of an entire mobile industry on itself: We can trace the career of Matias Duarte from Palm to Google and see WebOS’s legacy of physicality continuing on Android. It’s why designers at Microsoft can find solace in the fact that outside designers are apparently taking inspiration from Windows Phone 8’s text-centric, chrome-less aesthetic and adapting it to their software. Point being, it’s pure fantasy to imagine third party iOS developers leading the charge against embossed text on the basis of a single and insularly engineered cataclysm.11
Skeuomorphism isn’t bad design. Nor is it a fad. A pragmatist might complain it’s no longer ideal in 2013. A pessimist would say we’ve made it kitsch. I suspect John Gruber knows and believes these things. Otherwise his essay is a change of opinion that throws away years of Daring Fireball posts. Then why go to such lengths to find a solution so stretched and un-obvious? My suspicion is that any scenario wherein we acknowledge that fashion-wise something has fallen out of favour inevitably leads to questions about exactly what’s causing the falling out. Fingers want to be pointed and the inconvenient truth here is that skeuomorphism has no bigger an evangelist than Apple.
What goes unmentioned in Gruber’s essay is that most of the gaudy elements he’s reproaching were introduced, if not heavily endorsed and popularized, by Apple.12 iOS’s contribution was to dial the exposure knob to 11 by attracting thousands of eager developers to its ready-made developer tools favouring conformity and uniformity across the entire platform. The formula’s proved so successful that the entire UI language of specific classes of apps has been codified, standardized, and left customizable only at the level of “Which texture or drop shadow angle should we use here?”. Hence the excess.
There’s little satisfaction in getting this far only to have me pin this on one writer blindly toeing his party line. While there’s no doubt Gruber’s overthought the situation so Apple can walk away unscathed, what I want to try and coax into sight are the actual consequences at play in this debate. Blaming Apple for abrading our tolerance of skeuomorphism isn’t as worrisome as the idea that it might have no intention of stopping. Hardware aside, there’s enough evidence to suggest that Apple’s institutionalized its taste for the playful, safe, non-threatening, and innocent genre of software espoused by iOS. You’ll notice small doses of it in places like the App Store, where categories and catalogs are given their own tacky icons filled with garish fonts and unimaginative emblems: a golden plaque background for its hall of fame category, an assortment of balls to decorate its sports section. Where it’s most apparent is in their now celebrity-laden, heartstring-tugging commercials, the charms of which have less to do with Apple’s clever wit and genuine passion than applying its fastidious work ethic to clichés we’ve seen elsewhere in advertising. There’s a shift occurring at Apple about who it considers its core audience to be, a shift that consequently reverberates across its product design, i.e., why it continues to be attached to skeuomorphism.
* Marketing is often the simplest way to see who a company cares about, how it perceives its audience, and how it cares to be portrayed. The best way to illustrate this particular shift—without rewinding too far—is by drawing a line somewhere around the launch of the iPhone 4 and comparing Apple’s advertising efforts before and after. The biggest visible change is the introduction of the decidedly cinematic and ostentatious suburban lifestyle vignettes exemplified by the Sam Mendes-directed FaceTime videos, as well as almost the entire run of Siri spots, and the short-lived Apple Genius series. They’re evidence of a company shedding its aura of pretentious coolness in favour of innocuous inclusiveness. Even going as far back as the Jeff Goldblum-narrated iMac G3 commercials, Apple’s marketing pre-iPhone 4 was often about differentiating its values: Apple’s, and everyone else’s. The Manchurian-like effect on consumers meant—besides exemplifying TBWA\Chiat\Day’s own genius—that owning something California-designed was a token of membership. If nothing prevented anyone from enjoying those iPod Silhouette dance videos, nor the charms of the Get a Mac series, those ads nonetheless introduced dividing lines. If you didn’t own an iPod, didn’t recognize the catchy music (remember when Apple abandoned the opaque dancers and up-and-coming hipster bands in favour of unmistakable U2 and Coldplay mini-music videos?), owned a PC because you honestly couldn’t tell the difference, or weren’t savvy enough to make out all the references in the classic “Hello” iPhone Oscars spot, you couldn’t help but notice how different you were from those people who did own Apple products, a realization laced with all the consumerist impulses we like to pretend we’re immune to. Today, with so many iPhones and iPads in the hands of people who decidedly don’t care to fit that particular brand image, the old approach becomes alienating. Thus the current marketing—because Apple’s demographics run such a broad spectrum—goes out of its way to avoid any delineation, aiming to associate the brand with a wholesome, family values, American Dream lifestyle that almost anyone can relate to or aspire to in some way.
Apple’s cutting edge innovations are both blessing and curse. As responsible as they are for the massive success and ubiquity of Apple within the pockets of a large portion of the developed world, they’re also responsible for populating its base with customers for whom cutting edge technologies have little appeal, traction, or even desirability. Today’s average Apple enthusiast is less likely to care about trends in UI design than they are about whether their current iPhone’s case will fit the next one. The kicker is that it’s proof of Apple’s shrewd business acumen: the skeuomorphic designs introduced in iOS back in 2007 were central to overthrowing the crude and unapproachable UIs powering devices preceding the iPhone and transforming the smartphone into something desirable to people outside office buildings. In hindsight it’s easy to explain why Apple had a hit on its hands. Today however, the huge heterogeneous market Apple managed to attract to iOS is also the huge, heterogeneous, and sensitive-to-change market which expects its median to be catered to. Dealing with expectations of this magnitude is a new world for the company, one which they may not be comfortable operating in.13 Even assuming it remains a best of breed consumer electronics company well into the future, the attrition caused by the demands of a ubiquitous user base means it’ll be increasingly harder for Apple to remain at the leading edge of the industry, at least UI-wise, without running the risk of estranging that base. While it won’t prevent them from innovating on hardware and technologies, it could force them into tempering their software breakthroughs in aspects they otherwise wouldn’t have if the target market still resembled what it was in 2007. Multi-touch gestures are a good example. Despite Apple possessing the most advanced touch display technology in the industry, gestures remain woefully underplayed in the core iOS interface. Four- and five-fingered iOS navigation only became available to the public with iOS 5, and its use—turned off by default—was limited to the iPad. There’s also no reason why some of those same gestures couldn’t work on smaller, iPhone-sized devices with one- or two-fingered substitutes. Yet their absence is conspicuous. Six years in, the gist of working one’s way through iOS remains tapping buttons over and over again. Even prominent 3rd party innovations like “pull to refresh”, which thanks to their popularity on third party apps could routinely be mistaken for a core element of iOS’s interface, have only been timidly adopted by Apple, if at all. This underlines why the charge away from skeuomorphism is being led by third party developers, and not Apple as Gruber suggests. Third party developers aren’t beholden to the median of iOS users. They can find success in narrow audiences. They can take more risks UI-wise, acting as outliers with aspirations of becoming the trendsetters for next year’s UI fashion trends. It’s a can’t-lose scenario for Apple: at a minimum there are enough apps to please anyone’s tastes, and if any of these Flat UI projects happen to take off at scale, e.g., Google Maps, certain elements of the native Facebook app, or pull to refresh, Apple benefits by osmosis.
There’s a hitch of course. Nothing explained, debated, or corrected supra applies to any industrial-design-related activity Apple’s been involved with over the last 13 years. No one would contest that every desktop, notebook, or mobile device bearing its logo has at one time represented the absolute bleeding edge of its field, achievements superseded only by their successors. There’s no denying how relentless Jony Ive14 and his team have been at pushing the boundaries of what a computer device ought to be, what it ought to look like, and what it ought to be made of. Theirs is a unique focus that, mixed with a healthy disregard for whatever customers might want or expect (floppy disks, DVD drives, removable batteries, whatever I/O ports the iPad doesn’t have, and bigger or smaller iPhones depending on the rumours circulating the day you’re reading this), is almost enough to vindicate Apple’s overabundant affection for superlatives when describing its products. But hardware designers enjoy some privileges the software guys don’t. The big one concerns how being at the leading edge of electronic industrial design—as it seems only Apple has realized—actually aligns itself with the goals of the art. However striking its design, hardware’s ultimate goal is to disappear into the user’s unconscious: Lighter so as to not fatigue the hand, smaller so it can fit into any bag. Faster, longer lasting, higher resolution-ed. Whatever means necessary to prevent it from impeding the user’s experience.15 So long as the result doesn’t wildly diverge from the norm (say, twenty-seven inch convertible desktop tablets or buttonless iPods), there’s otherwise little consumer attrition constraining the imaginations of industrial designers. Once in use, most of the physical aspects of our computers fade into the unconscious, outshone by the attention their software commands. The burden for the software guys lives in that differing proportion of attention. Our relationship with software is so immediate that any atomic change to our literacy of a given UI elicits a larger and longer sustained reaction than any material changes made to our favourite products.16 We’re prone to blame, justly or not, the successes and failures of our computers on software. The feel of brushed aluminum matters more on our screens than in our hands.
Whether tangible or pixelated, fashion remains capitalism’s favourite child. Being able to tap into—or manufacture—the desires of an enormous aggregation of people is SOP for any company hoping to reach the rarefied company of the Apples, Coca-Colas, and McDonald’s of the world, even if the usefulness of their brand images doesn’t make significant contributions past enlarging the guts of the many and the wallets of the few. Yet for UI design, fashion is more than an agent for consumerism: it can solve crucial problems that define how meaningful technologies can be. It’s especially important in mobile computing, where rejection of a long history of desktop UI paradigms has renewed exploration of the ways in which we use computers and what we can accomplish with them.17 What worries me is the possibility that stagnation is penetrating a field that’s still trying to define itself. Even scarier is the possibility that this stagnation germinates from iOS, for the simple reason (personal allegiances aside) that Apple has up to now been the only major tech company with any proven track record of saving us from stagnant trends, e.g., command line UIs, floppy drives, physical music, and desktop computing. The dilemma with skeuomorphism is that as a major driving force for iOS’s success, it’s a design strategy that’s hard to argue against, let alone abandon. Therefore whatever new possibilities leading edge UI design is pointing towards, Apple’s role risks becoming reactive instead of proactive. My question then is whether or not—no matter how best of breed their products remain—having Apple so consummately dominate the mobile computing space is what’s best for the industry. I know the question seems rhetorical given the idiom that competition breeds innovation, but try and name any leading edge mobile platforms that have enjoyed success in any way similar to Apple’s: WebOS not just ruined but killed Palm. Windows Phone 8 is eroding what’s left of Nokia. Windows 8 in general has Microsoft and its OEM partners in a frenzy that proves not all ideas are created equal (again, like twenty-seven inch convertible tablet desktops marketed to moms and kids). Android as a commodity OS for hardware manufacturers has been a bestseller, but it has left the platform disjointed and lacking cohesiveness from one device to another. Android the stock, presented-by-Google operating system is almost a misnomer given its relative obscurity to the public. The only thing standing between us and the troves of innovations the aforementioned have created is the painful truth that only Apple has a proven track record of being able to popularize them.
If John Gruber can be fooled into thinking Apple remains at the leading edges of UI design, it’s thanks to its 3rd party developers who’ve inadvertently earned the majority stake in maintaining iOS’s innovative and dazzling pedigree, making them iOS’s greatest asset in the process. While Apple is happy to oblige with statistics about the ever-enlarging successes of the App Store, little is mentioned about how the ever-enlarging clout of the store is shifting the power dynamics of the developer/platform provider relationship. You might describe the equilibrium like this: Apple provides a product and platform customers want to buy into, e.g., the iPhone, thereby attracting developers with the promise of an untapped audience. In return developers provide the platform with (sometimes) exclusive software that distinguishes Apple’s platform from others, keeping current customers in the fold and also attracting outsiders that want a seat at the table, e.g., anyone who wanted to use Instagram prior to April 2012.18 This feedback loop is self-renewing as long as each player maintains their stride: a new desirable iPhone every year, followed by new apps that take advantage of its new features. Things challenging this balance: On one front, the other platforms are rapidly catching up to, and in some cases surpassing, iOS both software- and hardware-wise, strengthening their own feedback loops. On another, there’s the aforementioned trend away from skeuomorphism that, at least UI-wise, is dulling the appeal of a sticking-to-its-guns iOS and denying developers19 the guidance needed to meet the needs of this new vogue. The latter puts in play a few consequences. If Apple isn’t at least mildly proactive about updating its UI and campaigning for it through its Human Interface Guidelines, then developers are left to act upon their own whims. This lack of uniformity and convention means that a Retina-Resolution era of UI becomes defined as one thing by The Iconfactory and as another by Path, by Simple Bots, Marco Arment, Realmac Software, Flipboard, and every other designer attempting to navigate iOS’s future without Apple’s guidance. I’m already frustrated by the number of Twitter clients disagreeing on what actions left-to-right and right-to-left swipes are supposed to invoke. But here’s the bigger worry: Apple’s hardware edge notwithstanding, what if the only incentive to develop for iOS—or to own an iOS device—is the promise of an ecosystem controlled, determined, and made enticing primarily by developers outside Cupertino? How does Apple prevent a mass migration if (when) another platform comes around proving they can foster developers the same way iOS did back in 2008?20 It’s no small feat for the challengers, but we’re fast approaching this reality.21 Developers aren’t just Apple’s biggest asset then, they’re also its biggest liability. For almost 6 years to pass with Apple demonstrating little interest in updating its UI beyond restrained refinement, beyond what’s necessary to show up with at a yearly keynote event, is either brazen confidence bordering on negligence or a lack of tactical manoeuvrability.
This for me is the real intrigue—the delicate balance between reassuring users and guiding developers—that’s simmering beneath the Skeuo v. Flat debate. Because in 2013 it’s winning the software battles that matters. The challenge for Apple then is whether they can settle on a UI design that’s simple and familiar enough to assuage the large swath of its users who seek nothing else, yet also avant-garde enough to secure its role as the pace-setter of an industry fuelled by innovation. Such a balancing act requires a flummoxing understanding of the power of design and UI’s undisputed role as the nexus of computing today. A particular design decision can not only solve a particular user experience problem, it can also make or break entire corporations while spontaneously introducing new user experience problems we’re not even sure exist yet, begetting new decision solutions, which themselves may or may not solve other unknown user experience problems, introducing who knows what kinds of make or break challenges that will be the death of some companies and the birth of others. On most minds—to say nothing of mine—the entanglement of implications is like boiling water to oatmeal. Imagine if we were talking about anything more than a trend.
1: I’m tempted to swap “top-tier” for a one-time non-pejorative use of highbrow. The distinction is important because we aren’t dealing with a “this is what all the cool kids are doing” type of trend but a “we’re the kids that were doing this before all the cool kids were” kind of trend, one that isn’t responsible for making something mainstream but rather for influencing other designers whose apps will eventually take it into the mainstream. See: The Devil Wears Prada
2: That Gruber relegates any mention of the Metro aesthetic to 10pt footnotes is pre-emptive of reader riposte at best and negligent at worst.
3: And I’d argue that outside influences of flat design on iOS are too obvious to ignore. Not only thanks to the prevalence of Google’s own apps on iOS, but through the growing popularity of horizontally swiped views that owe a lot to Android and WebOS.
4: No word yet on when Daring Fireball plans to join the retina-resolution era.
5: A mistake on the scale of “print magazines are just as easy on tablets!”
6: Although in a primitive sense we can work our way backwards from our digital user interfaces to the very analog control panels, knobs, levers, keypads, and switches we use to interface with a variety of tools and appliances, which we’ve ⌘c ⌘v into our software. Ergo.
7: Explaining why Gruber’s complaints are often directed at the misapplication—whether by design or laziness—of skeuomorphic elements to UI designs which aren’t skeuomorphic at all, e.g., Find My Friends.
8: Quoted from this Webstock’11 talk. Given Gruber’s apparent knowledge of the subject, it’s all the more suspect that as basic an argument as “style changes” doesn’t warrant the briefest mention in his essay.
9: See: The brooch, overalls, fanny packs. The monocle.
10: Nostalgia perhaps, the kind that lets me defend my love of You’ve Got Mail on its historical merits and memories of my own childhood ePenpals. But let’s be honest about the Apple Podcast app, and about You’ve Got Mail.
11: And the emergence of flat UI design on iOS proper is still so negligible that it’s hard to go along with a premise that casts Retina displays as the catalyst for designer agency in all this. When Gruber—unblinkingly I imagine—informs us that the Windows 8 interface is “meant to look best on retina-caliber displays”, you have to ask yourself whether you believe in the sort of conspiracies that say either Microsoft is so forward-thinking it’s willing to push out a suboptimal product for 2 years waiting for Apple to rescue them, or is just carving another notch in the bedpost of their own folly by being cravenly inept.
12: The representation of physical elements through digital form has been around since the release of the original Macintosh, but it’s really in the last 12 years, since the release of OS X, that Apple has pushed this design philosophy into every corner, background, and window pane of its operating systems. The greater the technology, the greater the amount of physical mimicry Apple has added to its software.
13: Apple’s motto until now has been It Isn’t The Consumer’s Job to Know What They Want. Even when the iPod was at its peak, Apple showed a surprising disregard for maintaining continuity in the line, often radically redesigning a product within a single generation, and sometimes backtracking the following year when those new designs failed to catch on. Underscored here is the relative insignificance of the iPod software in relation to the physicality of the device. This proportion is reversed with iOS.
14: Hence the excitement over Ive’s recent promotion to director of human interface at Apple, given the decidedly leading edge and un-skeuomorphic style of the industrial design team Ive leads (manifested in their distaste for the philistine, superficial, and heavy-handed traps the accoutrements of skeuomorphic design often fall prey to). I liken the situation to MJ’s decision to try baseball. Here’s a guy who possessed unique natural talents that would make him gifted in any sport he decided to get up for in the morning, yet which weren’t sufficient to find immediate success against the experience of his competitors. At the highest levels, all else being equal, experience trumps skill.
15: A topic Ive has broached in the iPhone 5’s introductory video, demonstrating the power of familiarity in user experience.
16: The Microsoft Surface is a perfect case study: incredible, innovative industrial design buried and ignored in the face of the radical changes introduced by Windows 8.
17: A small list of things we either don’t have at all, would have on a smaller scale, or probably would have waited longer to see introduced were it not for smartphones: Siri & Google Now, social networking on a global scale, the explosive ubiquity of digital photography, a gaming industry divorced from its tenured oligopoly, wearable computing, ubiquitous connectivity, geo-location based services, and Angry Birds.
18: An exact description of the video game market from the mid-80s up until 2005/2006, when the economics of making a first-rate video game on the current generation of consoles made it virtually impossible to succeed unless it was sold on every available platform, putting the kibosh on decades of schoolyard turf wars over which console systems were best. But it’s only made exclusivity that much more valuable. Nintendo’s IP is the only reason the company has any relevance today, if you need just one example.
19: You need only make your own list of restricted, convoluted, clamoured for but denied, or impoverished APIs that could otherwise enable developers to create apps even greater than they already are.
20: Continuing with the video game theme from 18, we’re now describing what Valve could do with the Steam Box, its bid for our living rooms. Valve not only has a Nintendo-like following around its game titles, it’s also got the best disc-less distribution system out there in Steam. There’s likely no better candidate to endorse in the “most likely to replicate for gaming what the iPhone did for mobile computing” race.
21: Observable (a Google search will emphasize my point better than a link) from the variety of essays and switcher articles on Android finally reaching parity with iOS. From a developer/platform feedback loop perspective, we’re not quite there yet. While most of the major players (Facebook, Flipboard, Twitter, Instagram, and Angry Birds) have Android versions of their apps, what’s still lacking are desirable exclusives that attract large swaths of users and make those on competing platforms jealous. Yet this kind of slow leakage threatens to turn into a flood; the greater the number of major developers on the platform, the greater the level of confidence developers have in it, the greater the odds of Android getting those exclusives. Combined with its superior web services and ever-improving hardware, Android is slowly changing the conversation from “Why wouldn’t I get an iPhone?” to “Why should I get an Android device over an iPhone?” to “Why should I get an iPhone over an Android device?”.
The rituals surrounding updates to Path are verging on tradition: An initial surge of excitement at whatever new beautifully crafted features the social network reveals, propelling everyone to hang around a couple days until its prompt abandonment the following week. At least, that’s what it feels like on my feed.
Path’s current incarnation is established, isn’t showing signs of struggle, and by most accounts is a shining example of taking existing products (in this case Facebook, Twitter, Foursquare, and Instagram) and remixing them into something fresh. Yet two years later I still can’t figure out what Path is aspiring to be beyond a showroom for great ideas and novel iOS design. Judging from this release, I’m not quite sure they’ve figured it out either.*
The new features introduced in 3.0 steer Path away from the social journaling ethos espoused in its beginnings towards an unsubtle version of a less permeable Facebook. Private messaging seems like the last item left to pick from the hat of social network standards not yet addressed. It gets weird with the art stickers: while they would otherwise go unnoticed by my Pavlovian apathy for In-App Purchases, they’re so reminiscent of the Susan Kare-designed gifts Facebook launched in 2007 that it’s difficult to resist the twinge of nostalgia for a younger version of me who was excited by a now extinct version of Facebook, one concerned with staying connected to close friends and family. Maybe Path is onto something.
On a recent episode of NPR’s Fresh Air discussing online bullying, a brief mention of how kids choose which social media to engage with stuck out, perhaps because the topic was also momentarily mentioned in the first part of This American Life’s engrossing series on Harper High School. While you wouldn’t even have to listen to either show to figure out that most of the decision is weighted by cool hunting (Myspace: old, Instagram: new), part of it also rests on the way teens conceive of the internet as a particular symbiosis of private and public space. What they seek out is whatever service has the best balance of exposure and privacy, one that to us seems like a delusional have-your-cake-and-eat-it fantasy: they want their content to be public enough to reach an id-flattering audience, and enough isolation to sequester themselves from the authority figures who ruin the party.
Here’s a theory. Path could be that utopia. All the cool essentials you could want in one place are accounted for. There are photo filters, stickers, something that lets you signal you like something else, robust messaging, media integration, a youthful design. But here’s the important part: Path is a goddamned master when it comes to balancing private and public. Profiles are private as a feature instead of a setting, giving users granular control over the size of their networks. Have as many friends as you want participate in your timeline while turning everyone else away. Posts aren’t available to the public unless willingly shared outside the app. For those who get into your network, there’s plenty to do. The genius is that Path’s curated set of actions is big enough to be an effective social network hybrid, yet not big enough for timelines to be bogged down by extraneous activities (games, for example) that turn other social networks (Facebook, for example) into impersonal balagans of information.*
*I also have another theory that Path might be better suited to discourage bullying precisely because public exposure is limited on the service. There’s also no ability to, say, create a Shadoe-Sucks fan page that’s available for everyone, not just Path users, to see. Granted, it doesn’t prevent bullies from acting out on their own profiles with their friends, nor does it have any means for responsible adults to monitor their kids’ activity (which seems counter-intuitive to complain about now, but is actually important because we do want mom and dad and teachers and friends intervening when things get out of hand). My point is that it grants victims of bullying the ability to participate in a social network without also contributing to an interminable and inescapable cycle of online harassment, the kind that doesn’t end once last period is dismissed.
So then why hasn’t Path taken off in high schools across America yet? You might say it’s difficult to run against an enormous incumbent like Facebook. You’d be right, except Instagram ran such an effective and popular campaign it was bought out by an enormous incumbent. Maybe it’s a wave of luck and timing that Path can’t seem to find. My guess is it’s just hard to sell to anyone, not just teens. And stickers and private messages won’t change that. Being a hybrid of existing services, it’s challenging for Path to convince us to switch over without some obvious hook (Twitter’s limited set of characters, Instagram’s filters, old Facebook’s network exclusivity) that gets people insatiably curious and signing up. Path’s problem is that the hook it does have, i.e., being able to find that footing between what’s personal and what’s public, thereby letting us feel like our lives are shared rather than exposed, isn’t easy to describe in a way that’s concise and easily differentiated from the competition. Figuring that out should be higher on the list than stickers.
There’s no beating around the bush: cancer sucks. Chances are you know about that first, second, or even third hand. So why not do something about it? Why not grow a mustache?
“Movember” is the name of the global campaign for raising awareness about prostate cancer and mental illness in men. From their official about page:
During November each year, Movember is responsible for the sprouting of moustaches on thousands of men’s faces, in Canada and around the world. With their “Mo’s”, these men raise vital funds and awareness for men’s health, specifically prostate cancer and male mental health initiatives.
Just last year, Movember raised 127 million dollars worldwide1. Grooming for a good cause? Count me in. And I need your support. This month is going to be Smarterbits’s Momembership month. Here’s how you can participate:
Make a donation through my Movember profile and I’ll hook you up with a lifetime membership. You can donate any amount. Just click the ‘donate to me’ button under my mugshot.
If you sign up for a renewing membership through my membership page, your first payment will go towards my Movember donations, AND I’ll match your initial donation myself, up to a maximum total of $300.
If you are already a member, this month’s payment is going to the cause.
And in the meantime I will try to grow a decent moustache for you all to enjoy. In fact, I encourage you to grow your own too.
Let’s get the snark out of the way. From the looks of it, we’ve discovered who the Galaxy Note is designed for: the enormous hands of NBA superstars. In LeBron James’s palm, the Galaxy Note II seems like… a Galaxy S3. (Honestly, I couldn’t even tell until they showed the name at the end.) That aside, I wanted to talk about this new commercial from Samsung because it’s probably the best smartphone commercial I’ve seen from anyone in quite some time. The spot is fun, captivating, informative, but most of all it comes off as effortless. You might even say cool. And yes, I’m talking about Samsung.
The first and most obvious reason why this ad works is LeBron James himself. His performance never looks contrived or forced. It’s a small miracle that someone hasn’t talked LeBron James into a role in some major motion picture.1 James has amassed a fantastic resume of commercials over the years, which I’d attribute to both his otherworldly charisma and his comfort in front of the camera. In particular, I think the fact that James considers himself just another regular dude with regular friends like the rest of us is what makes him so endearing and relatable.2
In this instance we watch as he goes around town on the day of the NBA season opener: eating breakfast with his family, getting chased around by fans, grabbing a taco, visiting the barbershop, and getting dressed in a locker room with the faint but looming roar of thousands. There’s nothing special about it, but consider how much more natural it comes off than Apple’s series of Siri commercials, which employed a similar concept and cast actual bona fide movie stars. The premises of those Zooey Deschanel/Samuel Jackson/Martin Scorsese spots are ostensibly the same, but they feel like characters in an overly polished Williams-Sonoma showroom, not famous people letting us catch a glimpse into their lives. Those commercials are an instance of Apple’s attention to polish and detail working against them. In abstract spots such as all those utilitarian ‘hand on a white background’ iPhone spots, the production polish helps make the otherwise uneventful visuals shine. It even works to create an aura of magic in those early iPad spots, where we yearn to be the one under the sheets being bewildered by dinosaurs again for the first time. In the case of the Siri campaign however, the too-even lighting, the way the phones are held all too perfectly to frame the device, and the obviously scripted narrative create a kind of uncanny valley where we know right away what’s in front of us is fake. The magic is lost. We know that it’s Zooey Deschanel acting in a fake version of Zooey Deschanel’s life that’s supposed to feel like a real version of Zooey Deschanel’s life.
It’s precisely that lack of polish, the absence of perfection, in the world of Samsung/James’s spot that makes it believable and allows us to lose ourselves in it. And because the ad doesn’t try to build a narrative around any one specific feature of the phone, events seem to unfold organically. Who doesn’t let their kids play with their phones over breakfast? I too would be watching a video if I were lying down on the floor silently getting stretched by a physiotherapist. And I actually do tweet pictures of my shoes (only sometimes, ok?). Again, the ad generates fun and interest by feeling effortless. That’s what allows the scripted glimpse into James’s life to feel plausible. And feigning effortlessness has to this point been Samsung’s Achilles’ heel.
The reason Samsung is able to feign effortlessness this time around is that they’ve finally produced a campaign that doesn’t acknowledge the existence of the iPhone or try to comment on the meta-commentary surrounding Samsung and Apple. Simply put, it doesn’t try to pander to the tech crowd. Even if there’s some element of humour in some of those Galaxy S3 spots, building a script around turning the success of your main competitor into some laughable flaw comes off as the ploy of someone who’s decidedly in second place and is sour about it. Many have made the comparison to Pepsi’s own marketing, which often hinges on talking down or mocking Coca-Cola. While that might be true in spirit, I think we can conclude that Samsung’s ad agency isn’t as skilled at pulling off mockery that doesn’t come off as insecurity.3 The result is even worse when Samsung tries its hand at replicating the magical world building of Apple’s commercials and crawls through its own series of contrived scenarios designed to stimulate emotions that immediately come off as fake. I’m getting a little reflux just thinking about slow motion shots of a couple gazing into each other’s eyes as their phones touch to share god knows what over NFC.
The secret to effective movie making, which turns out to also be the secret to effective advertising, is that emotions have to be dangled as threads for the audience to unravel. We have to be left alone to come to our own conclusions. That this new Galaxy Note II ad contains none of its previous propaganda is what allows it to succeed. Without some agenda or message it’s determined to beat us over the head with, Samsung finally gives the Galaxy line its own identity as a phone, one that doesn’t have to live within the shadows of another.
Try to recall all your favourite movies with athletes in the leading role to understand why it’s a miracle. I’m sure many of you will point out Space Jam, but I’d argue that one is the exception that proves the rule. And having seen it in adulthood, I can assure you, it’s not as good as you remember. ↩
I don’t think it’s just a public persona either. From everything I’ve read or seen about the man, I really think the predominant trait that defines LeBron James is his desire to surround himself with friends and family. Parse through all the significant events and details of his life, and the public’s reaction to them, and I think what rises to the surface is an enduring attempt on his part not to become isolated from the everyday world the more famous and prominent he becomes. (Otherwise known as the standard paradox of fame and success.) Contrast this with Michael Jordan, who always seemed to embrace his position as the centre of attention. He chased and enjoyed the notoriety of being the king of the mountain. Even in commercials, MJ is always utterly alone and isolated in his world. That’s not to say he wasn’t as charismatic or magnetic as James, but my feeling is that Michael Jordan was attractive because we wanted to be him, while LeBron James is attractive because we want to know him. ↩
Consider how much of a gamble it is for Samsung to include so many inside jokes based around specific anti-Apple sentiment stemming from a very specific segment of the tech-community. I understand where the digs about the iPhone 5 being perceived as a jewel come from, but I’m not convinced that joke has a very broad reach. And who should be the target of a primetime national ad campaign: the blasé nerd keeping a spot in an iPhone line for his parents, or the parents themselves? ↩
Marco Arment lifted the veil off his much anticipated new app, surprising many with an app-published magazine with a form-eponymous name that raises — and itself asks — lots of questions. The content of this inaugural issue is for the most part1 anodyne; subscribers to Read and Trust will feel right at home, as will anyone who’s browsed a decent blog in the last decade. My impression is that the magazine was inspired by the blog, written by blog writers, and meant for blog readers. And so the whole thing’s raison d’être is a big question mark for me. Marco believes The Magazine sits in a “category between individuals and major publishers” but I can’t, at least so far, distinguish how its content does anything to occupy a space that isn’t already littered with minimalist Wordpress themes.
Reading through Marco’s foreword fails to provide any examples of the new and/or experimental. The content is positioned as being for geeks but not about technology, in other words the stock in trade of blogs. In this first issue you’ll find the same meta-commentary, GTD personality analyses, love letters to sports, and personal introspection you’d find filling your RSS or Twitter stream any other day. The intentionally bare-bones layout is great for speed and readability but does nothing to create an interesting foundation for the text to build on, something magazines were designed to do. Even the pitch, with its call to arms for content ownership and disdain for traditional media, is familiar. The only perceivable differences (or similarities if you’re on the magazine side of things) are a preference for longer form writing, a publishing platform that’s universally loathed, and a payment scheme that’s been collecting dust in the internet’s closet for a long time. I am curious however about how the last of these has the potential to fill a gaping hole in web publishing. The Magazine actually addresses the topic itself with an article from Guy English:
A business model where the author only occasionally writes longer pieces can’t be sustained — there’s too much time between pieces for sponsorships to work, and daily site traffic will be so low that ads won’t work well either. A Linkblog format offers the author a way to keep consistent traffic, be a constant voice in the greater conversation, and buy time between more in-depth pieces without losing audience interest.
The optimist in me sees The Magazine as an attempt to solve this problem while the pessimist sees ancillary income for the semi-independent link-blogger whose long-form thoughts aren’t as profitable on his homepage. There’s opportunity here, but for now Marco’s vision of, and for, the magazine (the common noun) seems better espoused by sites like Thought Catalog, the New Inquiry, or what the Atlantic and the Verge could be if they weren’t beholden to precisely the issues The Magazine is attempting to defeat. And if it doesn’t pan out, then the worst that could happen is that this project turns into a blog-centric, shortlist version of The Feature. Albeit one that can actually pay its writers.
Going back to forms for a second, perhaps my biggest question is why Arment felt he had to create another platform to foster long form, non-traditional, financially viable writing. Wasn’t that Instapaper’s destiny? Isn’t it already poised to accomplish what he’s setting out to do with The Magazine? You might even argue Instapaper is in a better position given its popularity among a new generation of readers that want to be the gravitational center of the content they consume. From the reader’s perspective The Magazine is a throwback to the old traditions. Yet this is true only if you believe that the magazine serves only at the reader’s leisure, or that it’s only meant as a complement to Instapaper. For one, it’s probably better suited for the economic stuff. One Wayback Machine trip to Readability.com shows how difficult it is to turn the Instapaper model into a viable living for the independent writer. Could the app-publication’s master end up being the writer? As long as interests and intentions align, there’s no reason why The Magazine couldn’t succeed where Readability failed.2 Maybe that’s enough to justify The Magazine’s existence (blog-centric Readlist or otherwise). Playing along with that possibility would mean Arment is henceforth offering two solutions to the nagging plights of reading in the 21st century: One that empowers the reader, and another for the writer to wield. Whether both succeed or fail, at least there’s someone out there bored by the idea of reinventing publishing only once.
The expression “plant” a basketball is so revealing of one’s lack of knowledge of the sport that I half wonder whether Jason Snell is actually using it to drive his point home. That not being the case, misfit sport-jocks should be aware that one can nail a three-pointer, sink 100% of his free-throws, be nothing but net from downtown, or possess a smooth stroke that never hits anything but twine. But one never plants a jump shot. ↩
Note for the confidence weary: the 4-week deadline to profitability before the plug is pulled doesn’t seem like an idea hatched by someone in it for the long haul. ↩
This week on the show voted as Clipperton Island’s number 1 lovemaking podcast, the Techblock’s Abdel Ibrahim joins us to talk about whatever it is people are talking about this week when it comes to gadgets. Did a new MP3 player come out? I don’t remember. Can a phone save the economy? Not if it’s a Lumia.
As we brace for this week’s deluge, I thought I’d provide two samples—from Gizmodo no less—in contrast to my complaints from last week. Despite sharing a similar format and tone, both are among my favourite gadget reviews. What I like in particular is the way (which most reviews tend to do in reverse) both Chen and Lam use the experience of living with these devices as the method by which we might extract value and meaning from them: Is there space in our lives for the iPad? How do you—why should you—redefine the already ubiquitous experience of owning an iPhone?1
Neither article is perfect (at times too coy and proud of it) but they do point towards an alternative discussion of consumer technology I can’t recall seeing elsewhere since. The only writer exploring in a similar manner whose name comes to mind is Shawn Blanc. The difference is that where Blanc’s voice is technical and definitive, Lam and Chen’s are ambiguous but honest. It’s too bad these reviews remain a distant anomaly in Gizmodo’s mired rear view mirror.2
In hindsight, the reviews complete each other. I’d wager the way Chen talks about the iPhone 4 is the way we could approach this year’s reviews of the fourth generation of the iPad. The numerology isn’t the point here, but rather the transition from the merely ubiquitous to the utterly ubiquitous. ↩
Knowing full well the futility, I remain curious about what shape a Gizmodo sans Gawker would have taken. The double-edged sword of Gizmodo’s identity has always been its willingness to risk being contrarian—if not even rebellious—in a space that discourages it. While that risk sometimes materialized into the aforementioned reviews, it’s also been responsible for theft, glamorizing theft, contrarianism for contrarianism’s sake, and hiring Jesus Diaz. My pet theory states that by removing Gawker’s appetite for exploiting sensationalism, Gizmodo’s risk taking would amount to more than repeated embarrassment. ↩
You’ll have to excuse the forthcoming confusion but I think Siegler is using the wrong analogy to make his point. In any magic trick the purpose of the turn is to fool the audience into believing what’s happening on stage, to convince them that what’s unfolding before their eyes isn’t a magician’s simulacra but in fact reality. The prestige, where magic is concerned, is the byproduct of an effective deception. Siegler’s turn, Apple’s meticulous penchant for innovating through repeated iteration, isn’t deception: all those hardware refinements actually come together to create a phone that’s lighter, faster, larger, and more beautiful than anything before it. The difference this year is that the resulting prestige isn’t as effective. If anything, it’s hard to see in the iPhone 5 any difference between turn and prestige.
If I can empathize1 with any part of all these articles lamenting their writers’ disappointment in the iPhone 5, it would be the absence of any exclusive feature or experience that can’t be had on an iPhone that already exists. You can explain some of this simply as smartphones arriving at computing maturity. The issues that weighed down the original iPhone have long been addressed. The turns of each subsequent generation provided useful and desirable prestiges: competent networking in the iPhone 3G, the gaming and processing advances of the 3GS, the photographic prowess of the 4, and a next generation2 user interface in the 4S. The advancements introduced by each successive version of the iPhone were not only leaps in technology, but often unprecedented. All thanks, as Siegler argues, to Apple’s relentless attention to the turn.
The difference this year is that rather than selling us on the prestige created by its advancements in the turn, Apple took the stage and sold us the turn as prestige. Although every Apple keynote is filled with long and detailed accounts of its design and engineering efforts, this year’s keynote offered little evidence of the exclusive leaps in experience3 these advancements were supposed to provide. The iPhone 5 may be superior technically, but little about using it will feel unprecedented. Perhaps this explains why the presentation emphasized the design and engineering processes above all else. It’s uncharacteristic of Apple to tout how hard it works as an argument for why we should buy its products. Worse, it’s highly uncharacteristic of Apple to have so few and such uninspiring4 reasons why all that hard work, most of which goes unnoticed, matters once the device is in our hands.
Where I differ from the aforementioned lament-ees is in the belief that this year’s lack of surprises, this year’s dissatisfying prestige, is somehow foreboding. The iPhone remains the best smartphone experience one can purchase, and this latest version keeps taking steps forward. Nor is this the end of Apple as the beacon of innovation. At best the iPhone 5 presents a difficult upgrade decision for 4S owners. At worst it’s a signal that priorities in hardware5 are, if not reaching a plateau, arriving at a golden age where performance gaps between successive generations of smartphones are narrow. If this is the case then Siegler’s argument is on the mark, despite the misplaced metaphor: the endgame is indeed in the turn.
As an iPhone 4S owner. If you’re coming from any device prior, the upgrade will feel significant. ↩
Compare the unveiling of the taller iPhone 5 screen to the Retina Display reveal, or the introduction of the iPad. With those, it was easy for Apple to be unequivocal about how game-changing these turns were going to be for your enjoyment of iOS devices or how they advanced the industry. Besides widescreen video (which I’d argue is more relevant to the iPod touch) and a fifth row of apps, Apple was surprisingly short on reasons why we should care for a taller iPhone. By contrast, it’s obvious why a taller iPhone matters for Apple to achieve its engineering and strategic (read: responding to market trends) goals. ↩
Alternatively, the “worst” part might be that Apple presented such a disappointing homecoming for iOS 6. If they have cornered the hardware turns market, the market for software turns is a lot more competitive. And it’s over the latter that the battle will be fought. ↩
Thanks to some scheduling magic, 70 Decibels noted celebrity Myke Hurley was kind enough to drop by The Impromptu to record a special Monday edition of the show with us. Topics include: An introduction to standard British introductions, the HP iMac, exclusive confirmation of Valve’s involvement in a next generation Apple TV, how Myke came around to liking the little black gaps on the iPhone 4’s antenna, catering to the pen audience, year 2 of 70 Decibels, and a confirmed exclusive about which future show may or may not be broadcast fortnightly.
I’m ethically, or maybe just lazily, averse to maintaining a linked list here, yet I do want to share stuff that’s caught my reading eye and that I think would catch yours too. To remedy this I’ve started creating Readlists to, like, roundup those eye catching articles and share them with you. I already know it’s good stuff but I, like, encourage you to make up your own mind.
This week on Estonia’s number 1 rated hour of television, we talk about the just announced Nokia 9xx and faking things that already work fine. We also dive into the also-just-announced new Kindle Fires XX HDs (Who can remember all those names anyways?) and what Amazon’s ascension to major tech force might mean in the grand scheme of things.
I’m starting to get behind Amazon’s efforts. If you’re looking for an alternative to the iPad, I don’t think you can, or should, look anywhere else.
I’m most excited by the new Paperwhite Kindle. I was close to getting last year’s Kindle Touch but the wait seems to have paid off: brighter display, a significant bump in resolution1, better contrast, and an improved (in my eyes anyways) design. Amazon is setting a new standard in e-readers with this update, finally breaking away from the previous array of readers which were all more or less the same technology dressed in slightly different clothes. This could be crushing for Barnes and Noble and Kobo this holiday season. Judging from Kobo’s lackluster lineup refresh announced this morning before Amazon’s keynote, it already doesn’t bode well.
Although the new Kindle Fires (HD) sure look impressive on paper (the software innovations in particular), time will tell whether Amazon’s second generation tablet can keep up with Apple’s leading pace in design, use, and feel. But from an ideological perspective, an understanding of why people want these devices in the first place, I think Amazon proved today they’re a company that gets it.
One of the more interesting similarities between Apple and Amazon (and perhaps key to why they’re the only ones making any inroads with tablet sales) is the way both can leverage the voluminous parts of their businesses (flash storage and myriad component orders in Apple’s case, selling most anything consumable in Amazon’s) to gain significant pricing advantages over their competition.2 A couple more years at this pace and no one will be surprised when Amazon starts giving away Kindles.
Many companies stood on stage this week to proclaim how awesome their latest devices are. Only Amazon explained why we should care. Who can’t get behind that? Now if only I could get my affiliate links working…
This is going to sound strange, self-indulgent, and maybe redundant by the end of it, but I want to explain the elation I felt reading David Barnard’s breakdown of Sparrow’s sales1 in light of its acquisition. That is, I was elated at the existence of the article itself, not Sparrow’s acquisition or Barnard’s math. You see, it’s rare that the (I’m never sure what to call it) Fancy Web community is presented with near undeniable evidence2 that their perceptions can be proven fallible. And we (as I am very much a part of this community) need to be proven fallible more often.
The Impromptu has a long running text file filled with potential show topics waiting to be optioned for a forthcoming episode. In this list is a link to an article by David McRaney on the illusion of asymmetrical insight, a fascinating historical account of a persistent misconception. I want to suggest you click through before continuing on here, but the important point I want to impart can be found in the first three lines of the article:
> The Misconception: You celebrate diversity and respect others’ points of view.
> The Truth: You are driven to create and form groups and then believe others are wrong just because they are others.
Though I don’t want to imply this phenomenon is exclusive to bloggers (duh), I do want to single us out because it has us speeding ahead, head down, towards an immovable object. For the most part we’re oblivious to its existence. Only when something like Barnard’s article appears are we ever given the chance to look up and realize the potential harm we’re about to inflict on ourselves.
On the day the next iPhone or iPad comes out, try this exercise: Comb through your RSS feed and delete every linked list link to reviews and press releases disguised as news from the day. Then comb through the available reviews left and pick your favourite. Delete every other one unless it diverges significantly from your favourite in either tone, opinion or content. Delete the obvious flame-bait. Then delete every article that’s a witty retort to said flame-bait or opines on how terrible the competition’s suffering will be in the face of the overwhelming awesomeness that is the newest Apple product. The articles that remain represent the breadth in discourse of your feed. Don’t let the emptiness of your feed discourage you.3 It’s not you, it’s us. That emptiness is an indictment of the discourse, or lack thereof, going on in whatever fields or hobbies you take interest in. We’re to blame for not offering any meaningful alternatives, or for crowding them out.
The point, which I’m not the first to make4, is this: the echo chamber has gotten so loud we’ve stopped recognizing our own bullshit. Go back and search for some tweets about Sparrow’s acquisition and you’ll get a feel for the betrayal people felt over it. What those people may not realize is that it wasn’t Sparrow who betrayed them. They were betrayed by the idea that well designed and independently funded apps were an ethical, superior, commendable, and virtuous way to do business. And we (the writers) betrayed them with the incessant peddling of this idea that virtue alone should beget success. Because of it, we held Sparrow to an impossible standard.5 Worse, we adhere to this idea only when it fits our mold of the universe. Matt Gemmell wrote a great recap of the Sparrow kerfuffle and the community’s reaction to it. Except it’s a post that directly contradicts the opinions about HBO and piracy he’d written only a few months prior. One person mentioned the discrepancy but there’s otherwise been little discussion about how the current issues plaguing the app store, indie developers, HBO, and the media conglomerates may have some similarities6. We ignore how issues aren’t as simple as drawing a line in the sand and dividing good guys and bad guys. And we really ignore our closed-mindedness to that possibility.
Friendships and acquaintances, the community at the heart of blogs, haven’t really helped things either. I’ve noticed that the writers I’m exposed to on a daily basis are shying away from even mild and respectful disagreement whenever an opportunity presents itself. If I seem bothered by it more than ought to seem reasonable (in a hobbyist’s community, after all, it should be reasonable to expect its members to share similar opinions), it’s because our contextual biases have themselves created biases towards the quality and integrity of our writing. When’s the last time you saw someone critique someone else’s work in the open? Someone tries to offer an explanation, an observation of a disturbing behaviour, and we (the writers) skewer him for it.7 This is the stuff that disturbs me to my self-indulgent core.
Having shied away from debate, criticism, and substantive discourse, we’ve instead defined being a successful blogger8 as a status obtained through the ownership of a specific set of tools, values, and relationships. We define everything, from which phone platforms are best to which notebooks are most pocketable, as topics that are either black or white. Definitely this, and definitely not that. Add to this formula the bonds of online friendships and community which see dispute as dissent, and every subject, every idea, becomes a binary topic. Hence my elation over Sparrow’s acquisition and Barnard’s subsequent article: It forced us to confront an issue that’s grey.
I’m bothered by this, the strict adherence to blacks and whites, because it’s a signal that we’re too scared of the hard stuff. The good stuff. Editing, debate, and criticism are the tools by which we grow and develop as a community, as a group, and as individuals. Doesn’t the idealist in every first-year social science student see rigorous debate as the cornerstone of higher learning? As a writer, I’m worried that proclamations of allegiance, not the value of writing and opinions, have become the measure of status in our communities. On a recent episode of Hypercritical, Merlin Mann discussed how easy it’s become to emulate the people who inspire us. As aspiring writers, photographers, designers, and developers, we have cheaper and more abundant access to tools and learning materials than any previous generation of creators before us. On paper Merlin is certainly right. In practice, we’ve copped out and settled for the tools. We’ve decided that ownership is enough. John Gruber has a semi-running joke on The Talk Show about his soon-to-be-published, tell-all guide to internet blogging fame outlining the various esoteric coffee brewing, water over-carbonating, and mechanical keyboard evaluating techniques that in only a few short minutes will have you careening off into the sunsets of internet fame. The sad grain of truth is that most of us do in fact subscribe to this belief. I did. I still try not to.
To be fair, this kind of behaviour isn’t exclusive to geeks. I bought Air Jordans when I was a kid because I thought I’d jump higher. I wear tight jeans because I don’t want to stand out at the Girl Talk concert. People want to belong and socialize in groups, and materialism and conspicuous consumption are often the means to that end. So it’s hard for me to chastise geeks for sociological behaviour that’s near universal and which has been going on for centuries. I will lament, however, the pernicious perpetuation of the idea that thanks to these tools, and not the work we accomplish with them, we’ve become a community of intellectual aesthetes. We don’t just write. We pen. Web pages are, somehow, printed rather than published or compiled. Our individual egos can’t be contained in an about page. We must co-opt the breathing space of the colophon, typically reserved for endless back pages of technical and esoteric data. Everything and anything to achieve some superficial sense of nostalgic nobility. We’re the new Hemingways and Cartier-Bressons, standing (upright) at our computers at the break of dawn and issuing forth into the world the best 500 words we can muster on re-assertions (“My latest GTD tips!”) and re-re-assertions (“How I’m using my iPhone with my latest GTD tips!”) of tired subject matter. Developed thoughts or creativity, the actual Hemingway stuff, we quote from others to fill out our linked lists on a slow day.
Maybe I’m self-indulging my fears out loud. My desire to write well, to express myself well, and to consider myself peer to those people who I look up to seems so far into the distance that it pains me to consider there are good chances I’ll never get there. And it probably scares a lot of people. I can understand then the appeal, the ease, of settling for the tools. After all, there’s lots of fame and popularity to be gained talking about esoteric coffee brewing methods over and over again. And if caffeine is the passion that tugs at your heart, that’s fine. There’s no need to consult me about it. But if I can’t mention your writing with anything but effusive praise in order for you to acknowledge me with any civility, let alone suggest another esoteric coffee brewing method may yield a cup of joe as delicious as yours, then I don’t know what the point of this reader/writer relationship is anymore.
Ever since its inception, people have been trying to draw lines between blogging and what we consider “real writing”. The argument is, for the most part, zero sum; there are as many bad blogs as there are abhorrent novels published every year. Still, allow me to give it a go. The difference between blogging and other forms of writing is that the former, as a publishing format, is a craft performed within the safety net of its specific community. It is a place where ideas are rarely challenged so long as their originators subscribe to the pack’s determined style guide. Rare are the times when blogging provides an external challenge to the writer, or rewards his risks. Instead, it encourages a virtuous/vicious circle of content and feedback9 which, over time, becomes harder to break and prone to bland repetition. The irony is that many of the unique qualities of blogging, the ease of publishing, the immediate and intimate access to an audience, the freedom from technical and political constraints, are at the roots of all these issues. Perhaps bloggers have abused those qualities to mask how difficult a thing it is to both express oneself well and be recognized for it. How scary it can be to admit our insecurities. I can’t imagine how much we’ve missed out on as a result, how much we’re withholding from ourselves.
We’ve reached the part of the post where I’d end on some hopeful platitude, but we’ve abused enough of them already. Suffice to say I wish - self-indulgently of course, things were more grey.
Despite his predictions being proven wrong in the end. ↩
Or at least, evidence that is later refuted by equally irrefutable evidence. ↩
I’m referring here to Anthony Kay calling into question the credibility of bloggers and involuntarily demonstrating its importance in this age of communal writing. His language may have been inflammatory (and probably the main reason why the response towards him was so negative) and I may not agree with his every point, but the larger issue remains unacknowledged: the responsibility bestowed upon writers by readers who trust them to be arbiters of taste and discourse. Except that in a world where writers describe everything as either super great (especially if produced by a friend) or super bad, readers have no way to discern what is actually good or actually bad, or why one super great thing is better or worse than another super great thing. I’m not suggesting that liking the things your friends make is wrong or unbecoming if you also happen to have a blog. However, writers have to be aware of their audience’s ability to discern between things that are good on their own merits, and things which we deem to be good because we want to encourage our friends. Of course it’s possible for something to be both good on its own merits and created by a friend. But given our fear of critique, we tend to shy away from saying when something our friend creates is bad, or at least not super great. The eventual result of all this, when left unchecked, is the erosion of the trust between readers and writers. Dismissing Kay is easy because that erosion is hard to perceive thanks to the bonds between reader and writer being more personal than they’ve ever been. Being able to reach out directly to your favourite blogger on Twitter and entertain a relationship with them creates incredible bonds of trust. And since the blogging community has an inability to view disagreement and critique as anything other than an attack on its person, we’re afraid to speak up on even the slightest of issues. Doing so would risk facing reprisals, and worse, losing the trust, access, and friendships of the people we look up to. And being denied this access is not unlike being kicked out of all the cool parts of the internet. So when we create bonds, every fibre of our being wants to maintain them at any cost. The sad part is that no one is going to admit this because of the two McRaney points I quoted above. ↩
And more specifically tech bloggers, so as not to insult any bloggers who don’t cover tech exclusively, whom I may not interact with as much (and thus what could I possibly know about them) and who surely aren’t susceptible to the same issues that I’m trying to explain plague tech bloggers. ↩
The tweet may have Mike’s typical facetious cadence, but beneath that veneer it speaks volumes2 about the dichotomy between the tech community’s dislike of patents’ ability to stifle innovation, and its dependence on patents to protect and defend those same innovations. This is the reason why I’m having mixed emotions about the outcome of this trial. I’m happy that Apple was successful in defending itself against Samsung. They had clear motives to do so and they obtained the verdict most would agree they, and Samsung, deserved. Yet I can’t help but worry that this case could set a precedent which, used in the wrong hands, has the potential to cause real harm to the industry.3 In the long run, will this trial end up causing more damage than it put an end to yesterday?
Look at me sell out to Twitter’s rules of the road. ↩
About the speed at which we hop from one bandwagon to the other. ↩
On the other hand, the ruling should hopefully stop anyone’s desire to take cues from the Samsung design guidebook. ↩
This week on Greece’s most economically stable radio show — while the rest of us are away figuring out the future of platforms — Adam and Michael discuss OnLive, streaming in the gaming industry, and the Browett era of Apple Retail. Sponsored by no one.
Translation: “Once you get big enough for us to notice, we’re going to require you to adhere to more strict, unpublished rules to make sure you don’t compete with us or take too much value from our network.”
…I wanted to let everyone know that the world isn’t ending, Tweetbot for Mac is coming out soon, Tweetbot for iOS isn’t going anywhere.
…In general assuming the numbers listed on Twitter’s side remain consistent this should make for an overall better user experience.
…We’ll be working with Twitter over the next 6 months to make sure we comply with these new requirements as much as possible. I don’t expect the changes to be huge, but we’ll keep everyone up to date as we know more.
I can blame Twitter for the incensed backlash because they seem unable to speak about anything important with any measure of clarity, or without corporate talk about quadrants. Still, quid pro quo, bloggers. Chew ten times before you swallow.
You have to appreciate the use of a paywall that only restricts the content that isn’t guaranteed to generate pageviews. ↩
This week on Nepal’s best rated talk show, the gang sandwich a dive into the soul of app.net between follow-up on that smartphone trial (with our resident legal-expert-to-be Chris Martucci back in action) and the larger implications of Mat Honan’s brush with digital disintegration. Nate Boateng guest stars while I sit this one out, unable to defend my impromptu coronation as the show’s resident jackass.1 Sponsored by no one.
If you’ve been following the show for the last couple of weeks, you may have noticed our recurring discourse on the fancy web and the status seeking it’s been encouraging of late. If you happen to find any of this stuff interesting, I’d like to point you towards two books by Andrew Potter: The Rebel Sell and The Authenticity Hoax. There’s no mention of the fancy web in either, but both do a great job of defining—and identifying examples of—status-seeking as a mass-cultural obsession with often counterproductive ambitions.
A title, which really, I embrace with open arms. ↩
"Now I want you to start over.” I thought I’d heard wrong. My first semester in university as a photography major was coming to a close and I was wrapping up a portfolio review I thought had gone wonderfully. My teacher - herself finishing her masters degree, was effusive in her praise of the landscape series I’d spent weeks toiling over on train tracks and in the darkroom. Judging from her reaction, I was under the impression that I was going to knock it out of the park going into my final peer review. My ego was riding high; I’m competitive by nature and that my teacher thought I was surpassing my classmates gave me immense satisfaction. Better still, it was confirmation that I was on the right track. Here I was impressing someone who’d been in my shoes only a few years prior. She knew what was needed in order to succeed. So I was caught off guard when she turned to me as we were wrapping up the session and said: “Shadoe I think this is really great work. You’ll probably get an A with this. But now I want you to start over.”
It’d be hard to describe the confused contortion of my face at that exact moment. Hadn’t we gone over how wonderful and technically precise my work was not a minute ago? Frustration came next because I didn’t see how I could reasonably be expected to redo two months of work in the two weeks remaining in the semester.1 I stood silent waiting for an explanation until, after what seemed like an eternity, she finally described “starting over”: I should edit the 15 images I’d selected for my series down to ten. She suggested reshooting some images or picking a different frame from the rolls of film I already had (this was in 2007 and our photography department insisted that first-year students shoot film, which I’ve learned to appreciate for reasons meriting its own article.2) She also believed I was skilled enough in printing to start experimenting with colour and exposure to unify my work visually and create specific moods or themes. She really was asking me to start over. And I did, somewhat begrudgingly. The following two weeks ended up being the most torturous and stressful of my semester. I was even late to my own review, scrambling in the darkroom dusting my prints well after class had started. But those two weeks turned out to be extremely gratifying. Not because the extra work bumped my A up into an A+ (it didn’t) but because I participated in my first group show the following spring with that same project, which I’d eventually done over a second time and pared down to 5 images. And that show opened up a lot of opportunities for me I wouldn’t have had otherwise.3
"Edit, edit, edit" goes the photographer’s mantra, one often repeated by many of the successful guest lecturers that would visit my classes or the photographers who I wanted most to emulate. Photographers often retread similar subjects or themes in their images because what they’re attempting to convey through their photographs is never completely resolved. Were he alive today, you’d probably still find Ansel Adams amongst the crowds at Yosemite Park. Constant editing and iteration is extremely difficult because it requires us to be persistent, to reassess, re-do, and re-start even when a project feels completed or when it means throwing out something you’ve invested heart and soul into. Editing is a personal and sometimes intimate exercise. The better you become at it, the better the end result.
Though I ended up dropping out of photography school, “edit, edit, edit” has stayed with me and it’s a trait I’m always on the lookout for in others’ work. Apple is a prime example of a company that’s great at iterating and editing. Though it’s sometimes held against them (“The iPhone 4S is only marginally different than the iPhone 4!”), it’s actually one of the qualities most essential to Apple’s “magic”. If Apple’s efforts stopped at “good”, we’d still have the MacBook Pro from 2006 or an iPhone in the same form factor as the original.4 Fortunately, Apple is always searching for the best solution, and the way they keep improving on seemingly perfect products is through a passion for editing, iteration, and sweating the details. How else to describe a company with a singular interest in asymmetrical fan blades, stainless steel antennas, SIM card miniaturization, and camera loading speeds? Whatever your craft, think back to those projects which you’re most proud of, those in which you learnt a new skill or those which always seemed on the verge of impossibility until the very end when you somehow made it come to life. More often than not you’ll find the editing process at work in those achievements. It manifests itself in our drive to resolve a particular problem that won’t go away and in the passion that keeps us up late at night perfecting a project anyone else would have considered handing in days ago. Editing is the bridge between good and great, whether in photography or in the amalgamation of fast food and bacon.
I want to suggest that in order to remove friction in your life, it behooves you to become an editor, someone driven by iteration. Editing is the discovery process by which, instead of settling for the merely less frictional or the first solution that presents itself, you end up with something that’s frictionless, that’s the right solution. How you define friction is up to you. In the literal sense it might mean finding the difference between a web layout that’s confusing to users and one that entices them to interact with it. Or it could be as simple and trivial as finding your ideal cold coffee brewing method. But editing can also remove friction in an abstract sense. My teacher helped me find ways to improve my photography project by identifying those tiny bits of friction which, although they didn’t prevent my work from being good, prevented it from reflecting what I was truly capable of. By forcing me to edit, she was expediting a process that most leave to hindsight and time: experience. The abstract friction I’m talking about could be best described as that feeling you get as you look over that project from years ago and wonder why in the world you ever made the decisions you made creating it and how you could ever have felt satisfied by it at the time. Ruthless editing is a way to confront and process that feeling in the present.5
We often talk about tools; which are right, which are wrong. And we often talk about processes; which are best here, which are wrong there. But tools and processes are only ancillary to the act of creation. They enable us to take action but don’t direct, instruct, or suggest how to maximize, how to perfect, the output of our efforts. That part6 may be more ephemeral, but I suggest you try and seek it out.
As it turns out, most photography classes I ended up taking took a similar turn: 12-13 weeks of congenial higher learning followed by two weeks of do-overs, finding out your camera had light leaks all semester when you finally look over your film, realizing the only free time in the week you have to use the darkroom is the same as 90% of the other students in your major, and typically just being too flat broke to actually afford the ambitious project that blew everyone out of the water when you talked about it 12-13 weeks ago over free wine at some gallery opening. ↩
The answer to “What’s the best way to improve my skills as a photographer?” should always be “Spend a year with nothing but a 35mm film camera.” ↩
I also learnt I was stretching out two weeks’ worth of work over two months the whole semester, a disturbing trend I’ve noticed in other areas of my life as well. ↩
This is generally the last thing that knockoff Apple imitators, well, knockoff. ↩
Really, that feeling never goes away. You will always have more experience in the future than you do now. But editing can help you squeeze out the maximum potential from the experience you have now. If anything, editing helps you become aware of how little you actually know. Editing is what turns “if only I knew then what I know now” on its head. ↩
This week on Poland’s 7th best output of creativity: reader mail on our Readability episode, a Star Wars segue into app distribution, YouTube on iOS 6 beta 4 (or lack thereof), Dalton Caldwell’s candidacy for president of the Fancy Web, and our initial impressions of the Samsung vs Apple trial.
With the impending release of the final chapter in Christopher Nolan’s Batman trilogy, I asked The Impromptu co-host Adam Hyland to indulge me and start a back and forth about the previous two installments. What follows is the long and spoiler-filled (although shame on you if you’ve seen neither at this point) exchange we’ve shared, which covers just about anything, and everything, that’s come to shape our opinions of Batman Begins and The Dark Knight. Missed the beginning? You can find the first part here.
Adam: I’ll concede the Nolan topic to you. I know a lot about the studios in general and little about Batman Begins’s politics in particular, so I can’t contest the claim that it suffered due to interference. Before I let it go, I’m reminded of the saying “socialism cannot fail, it can only be failed.” It’s too easy to lay Batman Begins’s many flaws at the feet of the studio and credit Nolan for its latent virtues.
At the core of my issues with this movie lies one perverse flaw: there is a great film in Batman Begins struggling to get out. When the film was released it appeared audacious and powerful because it was a superhero movie that was about something beyond costumed freaks. In contrast to the parade of Batman movies before it, which seemed to delight in marrying some specific stylistic element of the franchise with X million dollars of movie magic, Batman Begins devoted itself to an ideal. The whole movie was bent around what constitutes fear, how we can become warped by it, and how we can overcome it. On repeated viewings however, this audacious core recedes further and further behind a nest of confusing decisions, platitudinous speeches, and uninteresting characters. Taken at face value, this storytelling plaque is the typical DNA of a modern action movie; Transformers is nothing but confusion, disinterest and faff sent hurtling through the screen with a thousand loud explosions. Even fairly good action movies (e.g. The Avengers) stumble through this trifecta. But watching it happen to Batman Begins felt more frustrating given the potential strength of its theme.
Let’s begin at the beginning, shall we? A young Bruce Wayne is playing with Rachel Dawes and falls into what will eventually become the Bat Cave. If I’m being charitable to Nolan I’d commend him on introducing several things very efficiently. We meet Dawes, the arrowhead (I’m not sure it’s of any significance but it returns later), and the aforementioned Bat Cave. We’re also shown Bruce’s somewhat carefree existence and his, despite his best efforts later on, platonic relationship with Dawes. She (and the audience) sees him as something of a proto-playboy, taking what he wants and having fun even when it may lead to him getting hurt. I don’t want to over analyze this but I think it’s worth mentioning how much is being introduced in that first scene. Just as important is the quick flash forward to a Bhutanese prison. That’s the hook for the audience. “We don’t remember Batman being a prisoner! How did this happen?!”
So far so good. Then comes the chow line fight. Wayne beats up a cadre of baddies and is dragged off for “their” protection. What were we meant to take away from this? That Wayne is a badass? Perhaps it helps set our expectations for the next batch of scenes where Wayne must suddenly fend off ninjas at high altitudes. Knowing how the movie plays out, we can imagine Nolan wanting us to see Wayne as someone filled with anger and preparing, somehow, to avenge his parents’ death. Wayne’s assailants are “practice,” but don’t the prison fights occur after Joe Chill (killer of the Wayne family) is murdered? Regardless, Bruce’s stretch of solitary is incredibly brief. Wayne meets Ra’s al Ghul, posing as Ducard, who offers him a cryptic bargain: take a rare blue flower to a mountaintop monastery and learn the secret of true justice.
Next (you didn’t think I was going to recap the whole movie, did you?) we’re shown a training montage mixed with flashbacks. We learn of Wayne’s consuming fear and thirst for revenge. We see brief hints at Ducard’s true motivations. Before long we’re introduced to the League of Shadows in full (nearly) and Wayne’s initiation is to be consummated with the execution of an accused murderer. Except Wayne balks. Ducard presses, insisting that justice demands retribution, fear, and absolute intolerance of crime. Then something odd happens.
Ducard reveals that the true purpose of the League of Shadows is to serve as a world-historical brush fire. Old and corrupt cities are swept away so that new ones may be built in their place. Further, the League threatens Gotham directly. I say odd because we’ve seen no hint of this up to this point, and the prospect of such dramatic villainy seems to diminish the moral quandary of summary execution introduced mere seconds ago.
What follows is the first of Batman Begins’s two trolley problems. Wayne refuses to murder the prisoner (who is bound and gagged), opting instead to destroy the entire monastery in a conflagration which surely kills someone, very likely the prisoner he’d spared. Wayne isn’t directly responsible for the prisoner’s death or the death of any other innocent (yet surely evil) denizens of the monastery, but he’s not far removed. I know we’re establishing Batman’s trademark refusal to murder indiscriminately, but this feels like an odd way to do it.
Soon afterwards we’ve returned to Gotham and meet a panoply of villains. By my count (and in order of villainy) we’ve got the CEO of Wayne Enterprises, Judge Fagan, Gordon’s partner, Dr. Crane, and Carmine Falcone. This surplus of adversaries is meant to illustrate the breadth and depth of Gotham’s corrupt nature, but does it really do anything beyond pad the movie’s running time? The plot line with Fagan has him intervening in Falcone’s prosecution, but that stops short when the DA follows up on an incorrect cargo manifest and gets murdered. I guess we’re meant to understand this as a sign of seriousness on the part of whoever stole the microwave machine, but I can’t see how the DA is necessary to that end. We also establish early on a chain of events needed to prosecute Falcone and devote considerable attention to his arraignment, only to see it vanish without any real purpose. To top it off, Falcone requests a psychiatric evaluation while in pre-trial confinement (Why?) so Nolan can reveal Dr. Crane as the Scarecrow. If all of this is meant to cement the real peril involved, I’m having trouble seeing it.
We mentioned earlier that the movie proceeds roughly in three acts: Falcone and Gotham’s criminal element dominate the first act, Scarecrow the second, and Ducard/Ra’s al Ghul the third. But in terms of screen time and thematic consideration, both Falcone’s and the Scarecrow’s roles are abbreviated and pointless. Falcone is tossed aside once the real threat gains speed, and Scarecrow is dispensed with shortly after he is revealed as our avatar of fear. Wayne gets the antidote after some technobabble from Lucius Fox and turns the Scarecrow’s gas against him. It felt like I was being hurried along to the grand finale even though the movie is two hours and change.
Two things frustrate me about these pacing problems.
First, the core theme of the film is fear. How do we justify creating what could be a franchise-defining villain and giving him only two interactions with Batman? In the first he lights Batman on fire and offers some limp quip during the act. The second occurs when he kidnaps Rachel Dawes and ends up receiving a dose of his own medicine. That’s it. I don’t actually have a problem with Cillian Murphy’s performance; the idea of Crane as a preening and unthreatening doctor transformed into a monster by the power of fear is perfect for this film. In fact he should have served as a great foil for Batman, who is attempting to do the exact same thing!
My other frustration stems from knowing that Nolan can employ an economy of presentation yet seeing that skill wasted. The film’s first scene embeds a number of ideas and conflicts despite being about 90 seconds long and bereft of any main cast members. With Falcone, by contrast, we spend a good 20 minutes elaborating on his influence and venality. We’re subjected to a Katie Holmes speech about the difference between justice and retribution (a potential low point for the Nolan franchise) and a stock villain taunt aimed at Wayne’s parents. Where’s our core theme all this time? What do we discover about Falcone that remains important after his incapacitation? If Falcone is a disposable mook, that’s fine. Action movies have leaned on mooks for a long time. Except he’s not. Nolan attempts to fully characterize him and wastes a considerable amount of time doing so.
In a sense, Batman Begins suffers from a common sequel disease without being itself a sequel. There are far too many villains running around causing havoc for the audience to concentrate on any one in particular.
Not content with two villains cluttering the stage, Nolan decides to insert a superfluous corporate dispute amidst them. I suppose William Earle is meant as another example of Gotham’s trademark corruption. Is corporate drama Nolan’s guilty pleasure? After all, the mole in The Dark Knight comes from inside the company. Maybe the boardroom provides a break from dark alleys and what-not, but it’s one more adversary in a film teeming with them. What did the fight over Wayne Enterprises signify? A fight, keep in mind, won soundly by Wayne without any real on-screen effort. It’s a minor quibble but it supports my main issue with the film: clutter obscuring the message.
There are a host of other minor complaints before we reach the third act. Nolan is a sub-par action director. The fights are hard to follow and not particularly interesting (with the exception of the frozen lake duel). The entire sub-plot about Batman being hated by the Gotham police feels tacked on. Batman captures Falcone and provides ample evidence of his crimes, and yet somehow the Gotham police force is run by a black J. Jonah Jameson? We get a decent scene in Arkham later where Batman has to evade a SWAT team, but I don’t know why the police would direct any more attention to him than to a random criminal. I realize that as savvy audience members we recognize the tradition of Batman being hated by police even as he works in cahoots with Gordon, but Batman Begins feels like an attempt to check that box rather than find its own reasons for that dynamic. Who cast Tom Wilkinson as an Italian gangster? And who coached him on his accent? I also have zero interest in Wayne’s parents. I kept expecting Thomas Wayne to pull off his tuxedo and reveal Captain America’s uniform underneath. He’s kind, a grand philanthropist, and apparently free of any flaws. The city’s public transportation exists courtesy of him, and all of his wealth was generated with scruples strong enough to reach forward 20 years from the grave and move at least some members of the current Wayne Enterprises board.
Where were we? So Batman’s escape from Arkham with Dawes in tow provides the perfect sort of chase which would make the police hate Batman. Of course it comes 40 minutes after this hatred is established and demonstrated in the firefight beforehand. Oh well. We return to Wayne Manor and Ducard is revealed to be Ra’s al Ghul. He reiterates his plan to destroy Gotham and burns the place down while displaying Bond-villain levels of judgement by leaving Wayne alive inside the burning building. Alfred saves him and we’re back to Goth - I mean “The Narrows”, a strange and stylized slum which looks like Toon Town mixed with Calcutta.
The rest of the movie proceeds in a relatively predictable fashion. Both Dawes and Gordon are given something to do while Batman fights the real bad guys. Ducard and Batman fight it out on the train while water and power operators repeat expository dialogue in increasingly frantic tones (seriously, watch this part again, it’s terrible (Shadoe: I know)). Eventually Batman wins, but not before we revisit the first trolley problem: He won’t kill Ducard/Ra’s. He’ll merely incapacitate him and send him hurtling to his likely horrible death. Hopefully without injuring any bystanders when a 500 ton train is dashed on a city street. Yeah…
I’ve skipped over the thematic conflict between Ducard and Batman primarily because it seems to compete with the main themes in the film. I’d go so far as to say it belongs in another film. Asking the viewer if Gotham is worth saving (or, alternately, if saving Gotham means not destroying its institutions root and branch) and then ignoring the question for 90 minutes hardly feels like a theme at all. Further, we’re never given a reason why Gotham does deserve to be saved. I am loath to praise the God-awful ferry scene from The Dark Knight, but at least that answered a question the film was asking. Yet here we have one guy advocating burning a city to the ground (Figuratively! In reality it would be destroyed by a lunatic population!) and another guy who disagrees. WHAT A CONFLICT. Besides, aerosolizing a nerve agent into a city’s water supply is so cartoonish and hilariously evil that even the Bruce Wayne who almost killed Joe Chill would’ve fought against it. And what did Ducard “need” Wayne for in the first place? If we recall his initiation, Wayne was told his position would be ideal to help destroy Gotham from within. Except the plans to steal the device and distribute the drugs were already in place at that point.
Batman Begins is a good but frustrating movie. And it is made all the more frustrating by the scores of post hoc re-evaluations after The Dark Knight’s honeymoon elapsed. Batman Begins became something of a hipster honeypot. Anyone who wanted to put some distance between themselves and popular consensus on the now blockbuster Batman franchise could insist that “the first one was better.” But I see no reason to support that claim. As we’ll likely cover later, The Dark Knight is not without some serious flaws. But Batman Begins fails to articulate and reinforce its core themes and fills up its running time with slack exposition and needless characters. It is history’s greatest monster.
Back to you.
Shadoe: That last paragraph really resonates with me (except the history’s greatest monster part). Yes, Batman Begins is both good and frustrating at the same time. But I wonder whether that’s due to its flaws or to those post hoc re-evaluations? Imagine trying to re-evaluate grunge bands after Nirvana. On its own merits alone, I think a fair critique would conclude that Begins ranks as a good example of your average super hero movie done well. A darker Iron Man with more interesting themes than Spider-Man 2. And unlike The Dark Knight, I don’t think it’s overreaching and trying to reinvent the wheel. I doubt Nolan would have gotten lucky twice.
I’ll piggyback onto your brief recap to air my grievances. My first - and most grating, is Bruce’s refusal to kill anyone unless he’s looking elsewhere as he does. I understand Bruce not wanting to label himself an executioner, but I don’t get why he flat out refuses to kill anyone voluntarily or why it’s a persistent topic throughout the trilogy. I’m sure there are lots of MPAA reasons, but I’ll ignore those. Is it because Bruce believes in some pure ideal of justice? Then why does he beat up goons, casually destroy city property, let innocents die (on screen and off), vigorously interrogate mobsters, and break all sorts of laws doing “detective work” while ignoring due process? The police are right to label him a masked vigilante.
My interpretation is that the “intentional kill” line is drawn so Bruce can separate himself from the villains, something that’s made explicit in that execution scene. Debating what differentiates Batman from his foes is interesting, but not as much as Nolan thinks it is. To me, having a Batman willing to kill would actually heighten the consequences of the larger themes at work in the series: At what cost is Gotham worth saving? Is killing in the name of a larger cause justifiable? Can Bruce ever physically, or psychologically, hang up the cape once he has? Can he hide his fears behind violence? But for those questions to have meaning, we have to explore the shades of gray, and Begins treats this subject only in blacks and whites. Worse, the whole dilemma is irritating since the difference between “won’t kill” and “won’t save” is so fine as to be insulting. It’s obvious Bruce is responsible for a litany of deaths and that at best he’s a hypocrite. The Dark Knight takes the matter to a whole other level by going to great lengths to show Batman “not killing” and not “not saving” anyone: captured drug dealers and knockoff Batmans are shown bound and none the worse for wear, Chinese bodyguards are “safely” disarmed, and SWAT officers are roped and safely tossed off the incomplete floors of a building. Batman even knows precisely how high you can toss a mobster off a building without killing him. No word, however, on the pit bull tossed off the higher levels of that parking lot.
There’s a ton of heavy-handed exposition in Batman Begins that becomes aggravating on repeated viewings. Nolan will make you feel intelligent for picking up on patterns and symbols in one scene and beat you over the head with their meaning in the next. We’re repeatedly shown shots of Gotham’s decaying streets and citizenry in order to reveal its underbelly. I think the point is sufficiently driven home, but alas we need Rachel to give Bruce a tour of the city and illustrate to everyone that “Hey! Gotham and its people are in the pits!”. I second that the scene is a low point in Nolan’s career, with the expository water engineers close behind. You can list tons of other examples where Nolan isn’t content letting the action on screen do the informing. I want to say that the added exposition is for the kids in the theatre, but I can’t help feeling as though Nolan doesn’t think too highly of anyone in the audience.
The last major element that bothers me about Batman Begins - even before seeing The Dark Knight, is the overstylized Gotham. Not because it looks as ludicrous as the neon counter-culture Gotham of Batman & Robin, but because it’s in conflict with the rest of the elements making up Nolan’s Batman universe. The Gotham in Burton’s movies may look and feel eccentric, but so are the people living in it. Batman Begins never achieves that kind of aesthetic unity. Gotham is a visual manifestation of what you’d expect to find in an issue of Detective Comics while everything else tries to live by some semblance of the conventions we expect in the real world. The gothic palette of orange and brown doesn’t even follow the super hero movie trends of its time. Even if Marvel properties have the luxury of heroes inhabiting the literal United States, I think Hollywood has figured out that you don’t have to shoot everything on a set to sell the idea of a fantasy universe to an audience. The Gotham in Batman Begins is so overtly fake that it robs us of Nolan’s eye for the grandiose. We never pull back and experience the city (except perhaps for car chases and establishing shots of a Gotham skyline with dated CG layered on top) or explore it in any meaningful sense. Compare this to the way Gotham is presented in The Dark Knight. One is a location where scenes in a movie unfold; the other plays an active role in the film. As for “the Narrows”, it’s another case of being beaten over the head with themes. YES I GET IT. GOTHAM IS A REALLY SHITTY PLACE TO LIVE IN RIGHT NOW. AND IT’S ALWAYS WET AND RAINING. SAVE US. Ditto for the skyrail that - of course, passes right through Wayne Tower. Despite all its efforts, Gotham feels generic. Environments set tone and emotion, but it’s the people living in them that provide weight and meaning. I think Batman Begins forgets this. In The Dark Knight, Gotham looks industrial, ordered, and relatively calm. But its inhabitants never let us doubt the chaotic siege that threatens it. That’s why it’s memorable.
Overall I can still find a place for Batman Begins in my heart. There are enough hints at bigger ambitions to keep things interesting despite the generic plot once we leave Bhutan and our underdeveloped villains. By focusing on the character of Bruce Wayne, Nolan effectively lessens the importance of the usual super hero movie beats. Sure Scarecrow is wasted and Katie Holmes is grating, but do we really care? Even if they were properly executed, would they be the elements we remember as the defining parts of the film? Batman Begins’s goal was to put the chess pieces in place for a superhero tragedy that disguises itself as summer popcorn entertainment, and to prove that such a concept could be compelling. It’s hard to hate the movie that let The Dark Knight see the light of day.
On that note, I’ll let you have the final word on Begins. What’s the legacy of the movie in your mind? And what’s our jumping off point into The Dark Knight?
Adam: I think I lavished so much time on recapping the movie because it felt like the only way to illustrate the manifestations of Nolan’s major flaws as a director. Nolan is a great director who has put out some tremendous films and yet they all seem to be plagued by a subset of his signature peccadilloes, blind spots, and fixations. His better films don’t even have fewer of these flaws! They just seem to be structured in such a way as to either hide or overshadow them with elements that do work. Like The Dark Knight (and The Dark Knight Rises), Batman Begins contains many of these flaws. We’ve catalogued a ton but their existence doesn’t sink the movie. What does is the dilution of what I see as the central theme among a multiplicity of proposed central themes! Batman Begins wants to tell the story of how Bruce Wayne became Batman and how Gotham fell and rose again. Those two themes compete for screen time when one should be competing with the film’s rough edges instead.
Where does that leave Batman Begins in my heart? I’m unsure how to score Batman Begins’s role in bringing The Dark Knight into the world. Part of this has to do with The Dark Knight being lightning in a bottle. Batman Begins may have allowed a movie like The Dark Knight to be created, but it was entirely possible the sequel could have been just as frustrating. Another problem is a near total lack of precedent. We have a shortage of sequels which are better than the original. Only Godfather II, Empire Strikes Back, and Star Trek II come to mind. And each of those films used some characteristic of their prequels to rise to greatness. To pick on Star Trek II, nearly the entire character of the film was built as a reaction to Star Trek: The Motion Picture. Paramount was so frustrated with ST:TMP that they brought in new faces to crowd out Roddenberry. The action, tension, music and tone all stem from an attempt to escape ST:TMP’s bizarre faux “2001” feel. Do I ascribe some of the brilliance in Wrath of Khan to the sublime failure of ST:TMP? How good does Wrath have to be in order to retroactively raise that assessment?
Maybe that’s unfair. I don’t think The Dark Knight was deemed great as a reaction to the flaws in Batman Begins so the comparison to ST:TMP is a bit pathological. You’ve made a great case for Batman Begins being neither fish nor fowl; torn between the studio and Nolan. In that respect we can imagine Batman Begins’s success as a signal to the studios to back off. That lack of pressure certainly helped The Dark Knight along but was it enough for me to praise Batman Begins as a great stepping stone? While we’re on the subject of retroactive reevaluation, is it correct to say Begins’s goal was to set up the pieces for The Dark Knight? Even if we afford Nolan a great deal of deference on his foresight, I can’t support that claim.
We’ll get to this in the future but The Dark Knight Rises may be a better comparison to Batman Begins than The Dark Knight is. Remember when I mentioned that the “should Gotham be saved” question felt like it belonged in another movie? When I wrote those words I didn’t know that question would animate most of the third film in ways it never did in the second. The only difference is that neither Batman Begins nor The Dark Knight Rises had Heath Ledger’s Joker as a central, all-consuming performance. A performance which may make The Dark Knight somewhat incommensurable. But here goes…
I’m a simple man, so I’ll start at the beginning. Great action movies - and great reboots, often lean on the first five minutes of contact with the audience. J.J. Abrams’s 2009 Star Trek reboot contains a superlative example. The very first scene establishes a visual break with the past, a break in tempo and tone, and a narrative departure (vis-à-vis the internal logic of the universe) from Star Trek as we knew it. As five minutes of film go, it’s a near masterwork (and I want to point out that it is very nearly 5 minutes on the nose). The Dark Knight opens with a scene almost as strong. A daring daylight heist scored with a muted soundtrack puts (almost) all of Nolan’s talent on display. Nothing feels fake or shiny. We aren’t robbing the president or the Justice League. It’s a simple bank robbery which serves to establish the main villain and one of the minor conflicts in the film. Little is said but a lot is shown. Immediately we understand this film will not be characterized by cartoonish violence or nighttime capers. There are other scenes in the movie which emanate power, but I remember the opening scene best.
Did you get a similar feel from that scene? Were there any others which stuck with you?
Shadoe: The main difference between the second and first films in my mind is how astonishing the former was on the first viewing, a feeling that never quite arrives in the latter. The heist opening is brilliant on a general level because it gets you up to speed without you realizing it. What stands out most to me, however, is the visual, auditory, and tonal contrast to Batman Begins. I’m a photography nerd at heart, so I must mention how compelling the use of IMAX is in this movie. The 70mm film format gives us incredible depth of field, a sense of texture on everything from concrete buildings to face paint, and rich colours and tonal range. Using IMAX in that heist scene captivates you immediately. Another, perhaps intentional, side effect of using IMAX is that Nolan opts to pull the camera back a lot more in this movie. I guess if you’re going to spend inordinate amounts of money and technical expertise to operate a gargantuan and fail-prone camera that no one else in Hollywood wants to touch, you’re not going to waste it on crops and close-ups of cramped spaces, right? You’ll use it on establishing shots of crooks rappelling between buildings, Batman base jumping from a skyscraper in the Chinese financial district, and the destruction of Gotham General. Sure those are all “set pieces”, but I think Nolan’s approach to shooting them influences his approach elsewhere in the movie. The result - beyond the improved technical aspects, is a more immersive film, one that lets us consciously inhabit its world.
Ironically, pulling back the camera and having a generally slower editing pace reveals flaws in our hero that heavy CG and quick cuts masked in Begins. I’m talking here about the original batsuit, which in its few appearances looks borderline comical. I get giggles every time I see the close-up of Batman’s anger face as he’s trying to hold onto the side of the van with his icepick-augmented gauntlets. I’m reminded of the live action The Tick show. I really want to believe that the new costume wasn’t just some plot element. (Which, considering the recurring “we wanted Christian to be able to move his neck while in costume” news bits during the film’s production, may be true.) The Bourne-style photography during fight scenes also sticks out in this one. That worked for the stylized “sweeping out of the shadows” way Batman moved around in Begins, but here it left me longing for more. Not only is there no satisfying fisticuffs between Batman and his nemesis, but every minor confrontation with cops and assorted thugs fails to satiate if you came to see this movie for the action. Which of course isn’t the point of this movie at all. I just thought I should mention it.
You asked me if any other scenes stand out in my mind. None do. Not because there aren’t any good ones - there are plenty of great ones, but because the images burned in my mind are from specific moments or sequences in the movie: the Joker sticking his head out of the cop car after his escape from the GCPD to take in the mayhem he’s inflicted on the city, Batman’s base jump where all you can hear is the sound of the wind across his cape-kite; a calm before his storming into Lau’s office. I recall the Joker trying to take a sip from the champagne flute at Dent’s fundraiser but intentionally missing over his shoulder. There’s a litany of moments like these in The Dark Knight. For all the plot details Nolan can gloss over in his movies, he can also show a surprisingly poetic attention to other small details, either by recognizing the eccentricities of his actors or mastering the marriage of sound and image.
If I actually have to pick a scene, it would have to be the interrogation scene between the Joker and Batman. For one, the performances are spectacular. Here are two heavily costumed characters from a comic book having a meaningful exchange of dialogue that’s more emotionally stirring than most drama releases in any given year. It’s uncharted territory for an action movie. The scene also brings many of the film’s minor themes to a head: the futility of force, Bruce’s relationship with Rachel, and Bruce’s inability to escalate to the means the situation demands (which touches both the “won’t kill” trolley problem and the “will the Joker break Gotham’s spirit?” theme that’s made explicit in the ferry prisoners’ dilemma later on). It’s also a perfect encapsulation of both characters’ psyches. On the one hand we have the Joker’s complete non-reaction to anything Batman attempts to provoke from him. He’s simply soaking up and enjoying our protagonist’s rising frustration and desperation. On the other we have Bruce Wayne beginning to reach his wit’s end, torn between his desire to save Rachel for his own selfish reasons and his duty to rescue Harvey Dent for the sake of Gotham. Brilliant stuff.
Aside on Bruce’s motivations in deciding who to rescue:
Bruce rescuing Dent is, in a way, self-motivated too. Dent is Bruce’s exit strategy for Batman; how he can finally walk off into the sunset with Rachel on his arm. Of course, if he doesn’t save Rachel the whole operation is for naught. That Bruce chooses to go after Rachel almost instantly says a lot about what’s most important to him. Remember that he’s going to spend the rest of the movie - and the next one, beating himself up over it. I don’t think it stands up to any rigorous debate, but I like that it leaves an opening for us to interpret Bruce as an entirely self-absorbed character who goes to these preposterous lengths to get over his own psychological issues.
I think that to discuss The Dark Knight in any meaningful way, you have to start by addressing the Joker. You can make the case - Heath Ledger’s performance aside, that he’s the central focus of this movie. At the very least, he’s its central force. I think you already mentioned how The Dark Knight casts aside many of the themes introduced in Batman Begins. Would it be fair to say that there’s a shift in focus from Bruce Wayne to Gotham City in this movie? And since Gotham City doesn’t actually have any agency itself, would it be wrong to think that it’s up to the Joker’s actions to hint at the larger narrative Nolan is trying to tell?
Adam: I’m glad you brought up the first scene with Batman and the comically stiff Bat-suit. Re-watching it I was reminded of the Michael Keaton Batman movies, where every new threat was met by Batman completely turning his body to meet it. Some of that could be covered up with editing, but there was no escaping how the cowl met both shoulder pads in one solid piece of rubber. They played it a bit for yucks here but never really returned to the new suit until the very end of the film. I shouldn’t really complain as it is a bit of levity in a film whose first half has very little to offer. I also suspect it served another purpose.
The first few times I watched The Dark Knight I wondered why the initial scene with Scarecrow and disposable Russian gangster A was in the movie. Obviously some things were mentioned and set up as callbacks: the dogs, the copycat bat-men, and the wanton property damage to parked cars. But other than that, what purpose does it serve apart from paying Cillian Murphy’s mortgage? Now I’m starting to see it as part of a three-scene arc. The parking garage, Gordon and Ramirez on top of the MCU, and the aftermath of the bank heist all fit together. If we imagine the Joker as an organic antithesis to Batman - a necessary component of the dialectic if you will, we need to establish his provenance. The movie starts out with a brutal and daring heist on a mob bank and then cuts away to three illustrations of Gotham’s underworld. Scarecrow is demoted from avatar of fear to skittish drug dealer; criminals refuse to carjack (mouthy) citizens when the Bat-signal is in the sky; Batman ignores the Joker in order to concentrate on crushing the mob for good. The criminal element is desperate, cornered. As Alfred opines, the mob reaches out to the Joker because they have nowhere else to turn. But the Joker doesn’t exist because the mob needs him. He exists as an answer to the order imposed over Gotham. As much as we pick on Nolan for heaping exposition on the viewer, he conveys this feeling rather neatly without resorting to obvious exposition or parking garage road trips.
In order to completely buy this, we need to be convinced of one other part: that the Joker is not a man. As Jamelle Bouie mentioned on our The Dark Knight Rises podcast, he’s a “force of nature.” Does this make sense? Both Batman Begins and The Dark Knight Rises have villains who are real people, insofar as Bane and Ra’s al Ghul are tangible entities. But the Joker transcends The Dark Knight’s reality. Yes, within the narrative of the film the Joker is a person, able to be captured or killed and (just barely) forced to be in one place at a time. But that doesn’t preclude us from perceiving him as a manifestation of something else entirely. If we think about it, there is no clear backstory for the Joker. He tells at least three lies about his beginnings. It is chilling to have what appeared to be a personal revelation uncovered as a total falsehood; perhaps created in the moment, perhaps recycled in his head over and over again. We could chalk this up to sheer mendacity, but the same character murders or has murdered all his cohorts in a bank robbery ostensibly to capture a larger share for himself, only to burn his - and the gangsters’, shares later on. He lies about the location of Rachel and Harvey. He lies, or at least changes his mind, about his desire to reveal Batman’s identity. Obviously all these anecdotes share one characteristic: they’re lies! But they also speak to the Joker’s ends and desires and how each is as contradictory and inchoate as the last. It isn’t sufficient only to establish the Joker as a cipher. He openly declares himself to be an agent of chaos, an enemy of “schemers”, divorced from concern for both his life and his fortunes. The film is replete with these references so I won’t belabour the point.
Let’s dig deeper. It isn’t sufficient to say he’s merely a manifestation of chaos. I think he’s a force brought into the world to oppose Batman at his core. The best example of this is the superlative interrogation scene. The Joker pushes Batman to his limits, both explicitly and with precision. Batman’s refusal to kill constitutes one of his core delineations, and the Joker desperately wants to force that transgression, even at the cost of his own life and of his grander plan! His existence and his actions seek to undo who Batman is at a fundamental level. And if we imagine Dent as Wayne’s exit strategy and the fundamental outgrowth of Batman’s faith in Gotham, the Joker succeeds in undoing that element. In my mind even the insane contrivances necessary for the Joker’s “plan” to come to fruition are indications of the whirling chaos at the center of his character. We can go back and forth about this, but fundamentally I’m unconcerned about the in-universe problems and expectations. Complaining (and I’m not saying you are) about how the Joker could know he’d be allowed a phone call is a bit like insisting how unlikely it was that Bilbo would find the ring in the Misty Mountains.
All of this wouldn’t matter if Heath Ledger’s performance wasn’t absolutely spellbinding. You mentioned that The Dark Knight was astonishing the first run through and I agree wholeheartedly. Even further, watching it after letting the DVD collect some dust is just as astonishing. Ledger is lightning in a bottle, but the story as a whole is electrifying too. Apart from some standard Nolan-isms (the obviously corrupt cops, the somewhat inconsistent transitions, and some needless characters) the first hour and thirty minutes are unbelievably good. Up until the hospital is blown up, the movie builds a terrible crescendo of violence and danger as the Joker advances further and further on Batman.
Having said that, the weakest part of the film follows the hospital explosion. I’d say the last act of The Dark Knight robs this film of a top ten place among American cinema in general. If it’s ok with you, I’d like to zero in on just the third act. What went wrong? Are we (or am I) misreading Nolan’s intent? Does Christopher Nolan have an obsession with isolating Gotham in increasingly contrived ways? What was the point of the cell-phone sonar net aside from paying off Dent’s Cincinnatus comment?
I’m going to loathe myself afterwards, but in this case I can’t help myself. Here’s Shawn Blanc evaluating the Nexus 7 and everyone’s favorite tablet-related C-word:
Well, if the iPad is not meant for content creation, then the Nexus 7 certainly is not. For two main reasons: its screen size (and, thus its keyboard size) and its app store.
There’s a dark humor to watching Blanc level at the Nexus 7 the same arguments we’ve already dispelled when they were aimed at the iPad. You’d think that by now everyone would have come to their senses: creation is a process entirely dependent on the will of the individual. It is not an innate quality exclusive to any specific piece of electronics. I’m sure early reviews of the phonograph were all about its failings as a content creation device too. Except it’s people who are expected to create. At least that’s how it worked out for the phonograph. Considering Blanc’s previous stances on the matter, it’s laughable to see him take the other side of the debate now. What he could have said instead is that while the Nexus 7 doesn’t allow him to be creative, we should treat his experience as one test among many in coming to our own conclusions. Of course the case is instead decided ipso facto in favor of consumption once a cursory perusal of the Google Play store reveals no adequate simulacrum of a blogger’s favorite iOS apps.
I’d ask him to consider all the ways a smaller tablet might allow for different creative opportunities which aren’t offered by the iPad, but I realize that’s an ambitious hope. I’m curious to see how “creative” he’ll judge an eventual iPad mini to be.