The Future of Personal Computing


(This is Part 2 of 2; Part 1 covers the shift in personal computing from the age of the standalone PC to the age of cloud computing.)

We left off with the idea that personal computing was inexorably, though slowly, shifting toward a Web-based model in which our computers’ main purpose is to run browsers and we spend most of our time on the Internet. A decade ago, when this idea became popular, it was not particularly practical, because you simply couldn’t do very interesting things in a browser; it was originally designed, after all, for reading static web pages. But in the past decade, web sites have become much richer and more interactive — think of something like Gmail, with its automatic refreshing and keyboard shortcuts, or Google Documents, which allows multiple people to edit a document at the same time — to the point where most of what people do most of the time can be done in a browser.

But then there was Apple.

Apple has a computer business, but there’s nothing revolutionary about it. They make very nice, somewhat expensive computers that are structurally much the same as Windows machines: they have an OS, people write programs that you install on top of that OS (though still not as many people as write Windows programs), and people with those computers spend more and more of their time in the browser. The increased importance of the Internet relative to local (on-computer) applications has probably helped Apple’s market share a little, but not that much.

Then there was the iPod, but while it was revolutionary in many ways, it didn’t mean much for the course of computing. It’s dependent on the existence of a computer running iTunes, which is an ordinary application; the only thing “Internet” about it is that it can access the Internet to buy music.

And then there was the iPhone. The iPhone was a big hit for multiple reasons, like the fact that it was cool, but for our purposes the most important is that it was the first powerful, usable computer in your pocket. Besides email (which it never did as well as a BlackBerry), you can run applications on it that will do virtually anything, since its operating system provides an API that lets developers do pretty much whatever they want.

For most iPhone users, I suspect, what they like about it is that they can check their email, take photos, and do other things that any smartphone can do. But technology commentators have focused on the iPhone’s “apps,” and Apple has used its app library as a selling point against its competitors.

So what is an app? It’s just a plain old application — like the kind we’ve had on our PCs for decades — except someone figured out that if you drop the last three syllables it sounds new and cool. An app is a piece of software that runs directly on the iPhone OS (a variant of OS X, the operating system on Mac computers), and that you download and buy from Apple’s App Store.

If you’ve followed me to this point of the story, you should realize why I find the app craze so perplexing: it seems like a giant step backward, back to a pre-Internet world where we had to install a bunch of separate applications on our computers, and developers had to write different programs for each operating system. It seems worse than that, even, because with Apple the only place you can get software is from the App Store, which means that Apple gets to decide what can run on your iPhone.
