Ignis Fatuus

Heads in the Cloud Part II: What Will Ubiquitous Internet Do to Our Minds?

The Youth of Tomorrow?

In Part I, I argued that the era of the information cloud has already started: we live in a world saturated with data that permeates the air around us, letting us access the information we need whenever and wherever we need it.  We’re still in the earliest stages — the infrastructure is inadequate and usage is still uncommon — but over the next decade, mobile web browsers will become as common as cellphones (exactly as common, in fact, since they inhabit the same device).  Instant access to information creates all sorts of interesting opportunities to network remotely, but what effects will this have on our mental habits?

The most obvious effect is a change in the way we process information.  In Part I, I cited an earlier post noting that the Army had already identified the “millennials” as living “in two places at once.”  Last week, Forrester Research published a study, cited by IP Carrier, which found that “technology is so deeply embedded into everything Gen Yers do that they are truly the first native online population.”  Really, they’re saying the same thing: the generation born after 1980 has grown up with computers since infancy, and its approach to them is totally different from that of preceding generations, including Gen X.  For this generation, the Internet is not a tool for accessing indexed information, like a dictionary or a phonebook; it’s a medium with the potential to permeate every aspect of life, one in which totally new forms of culture and communication can grow, parallel to those that exist offline.

This is crucial to understanding how life is different “living in the cloud.”  It’s not merely about having information; it’s about processing information in a whole new way.  It’s not just about increased connectivity; it’s about communicating in whole new ways.

For example: anyone in Gen X or older remembers having to memorise world capitals.  You might even remember thinking, “When am I ever going to use this?” but, sadly, back then, if you ever needed to know a world capital in the absence of an atlas, the only way to get that information was to remember it.  If it wasn’t in your brain, you were out of luck.  But put an iPhone in your hand, and suddenly the task of memorising world capitals really is useless — why bother learning something you can access with a few keystrokes?

I have no idea what impact this has had on education so far (I’m guessing minimal), but it seems obvious that it will have an effect over time.  There’s a certain romantic charm to the image of children at desks lined up in rows, reciting the seven times table or repeating The Wreck of the Hesperus from memory.  But is that really an effective use of time and brainpower?  Why focus on teaching kids to memorise information that they’ll always be able to find in an instant anyway?  It would be much more effective to teach them what’s out there, and how to find it.  You can’t search for something you don’t know exists, so teaching kids to be as polymathic as possible, familiar with many different bodies of knowledge even if they can’t summon the details from memory, matters more than having them learn facts by rote.  Then comes being able to find what you need: teaching kids proper researching skills, from boolean logic to search algorithms, as well as which resources to use, how to evaluate the trustworthiness of a given source and, not least of all, how to think critically about the information once it has been found.  With an infinite body of knowledge at one’s fingertips, the emphasis should be on knowing what you need first, and then on how to find it.
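To make the point about boolean logic concrete, here’s a minimal sketch in Python of the kind of AND/OR/NOT keyword matching that underlies basic search queries.  The three-document corpus, the function name `matches`, and the query terms are all invented for illustration; none of them come from the post itself.

```python
# Toy boolean keyword search over a tiny, made-up corpus (illustration only).
documents = {
    "capitals": "ottawa is the capital of canada and canberra is the capital of australia",
    "poem": "it was the schooner hesperus that sailed the wintry sea",
    "times_table": "seven times eight is fifty-six and seven times nine is sixty-three",
}

def matches(text, all_of=(), any_of=(), none_of=()):
    """True if the text contains every term in all_of (AND),
    at least one term in any_of (OR, skipped if empty),
    and no term in none_of (NOT)."""
    words = set(text.split())
    return (all(term in words for term in all_of)
            and (not any_of or any(term in words for term in any_of))
            and not any(term in words for term in none_of))

# Roughly equivalent to the query: capital AND (canada OR australia) NOT hesperus
hits = [name for name, text in documents.items()
        if matches(text, all_of=("capital",),
                   any_of=("canada", "australia"),
                   none_of=("hesperus",))]
print(hits)  # -> ['capitals']
```

Real search engines, of course, use inverted indexes and relevance ranking rather than scanning every document, but the boolean core is the same, and it’s exactly the sort of mental model worth teaching alongside the mechanics of any particular search box.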

Already this demands a complete overhaul of the way we think about education.  And if education is the training of the mind, then it follows that if we reprioritise the way we learn, we change the way we think.

The act of mentally inhabiting two places at once has already altered the way people of Generation Y think.  Gen Y’ers spend more time online than they spend watching TV — but it’s hard to know exactly how to interpret a statistic like that, because it’s rare for them to do only one or the other.  On the contrary: they update their blogs while chatting online with friends, with music or the TV on in the background.  They’re great mental multitaskers, and (I’m speculating here) they’d probably be bored if forced to choose only one activity at a time.  But “continuous partial attention” comes at a price: when attention is twice as broad, it’s half as long.  It’s no coincidence that YouTube videos average about two and a half minutes in length, or that, when Google commissioned Seth MacFarlane to create original content, he forsook the 22-minute format of his TV shows and made self-contained clips under two minutes long.  Online, an 8-minute clip is considered long; forget sitting through a single story for four hours.  The Internet, as a medium for delivering content, is decidedly geared towards the short format, and there’s evidence that spending a lot of time online has a deleterious effect on one’s attention span.

Besides the diminutive length of the texts consumed, it’s likely that the fractured attention of a multitasker also leads to a shorter attention span (although that’s just speculation until studies have been done), and there are also the issues of comprehension and retention.  Is it possible to understand a complicated set of instructions or a complex theory if you learn it while chatting on MSN and glancing up at whatever’s on TV?  We might extrapolate the multitasking tendencies of millennials to some future generation that is totally at home swimming in a sea of content, but unable to linger on any one subject exclusively or for very long.  Is that fair?  Probably not.  Human nature seeks meaning; grazing is certainly a great way to collect memes that bolster a given schema, but what forms that schema in the first place?

To plot the course human learning and cognition will take as our consciousness becomes fused with the cloud, I propose a thought experiment: to limn the best- and worst-case scenarios; the truth will probably lie somewhere in between.

This thing is about to take off in a big way!

The cyber-utopists would extrapolate the current trend into a future of unlimited access to information and unlimited connectivity, perhaps even by cybernetically combining and expanding our minds with computers.  To access a fact just by thinking about it borders on omniscience!  To carry on a conversation via a cybernetic cellphone is like telepathy!  To access aural and visual media through cybernetic implants would let us hear music or watch video that exists only in our own heads, and would give a whole new meaning to “theatre of the mind”!  Our memories themselves could be recorded and stored in the cloud, giving us total recall of our entire lives!  We would live as gods, being everywhere at once, connected to everyone at once, accessing all information at once!  It would almost certainly usher in an era of global peace and understanding.

The cyber-dystopists, on the other hand, might not deny the possibility of these technological advances, but would take the opposing view of their likely effects.  Living with access to all this information would devalue understanding; as the breadth of information grows, the ability to process data and synthesize it into meaningful knowledge decreases, and eventually, we would become so overstimulated we’d be unable to contemplate, substituting data for understanding.  There’s also the matter of over-dependence on technology — what happens if the system buckles?  What if your memories fall victim to hackers, or worse — what if your memories are held hostage by evil corporations?  What if the services we come to rely on every day to navigate the world, in the absence of information learned and committed to memory, are data locked?  What if we’re forced to watch advertising projected directly into our brains just to perform a simple function like finding our location on a map, or accessing a friend’s phone number?  And what if the information we can access is censored — will governments be able to control the thoughts we’ll be able to think?  And what about the spectre of government surveillance — skimming through our very memories?  We might, inadvertently, turn all our cognition and memory over to governments or privately owned companies who would then control our very minds.

So where, between the shiny optimism and the paranoid caveats, is our real trajectory?  Barring any physiological alterations (which are not outside the realm of possibility, but at least lie outside the realm of predictability), we will not be outsourcing our memories anytime soon.  Increased connectivity and access to information are presumably a good thing — who wouldn’t like easier access to what they want? — but it is important to remember that these are all achieved using services furnished for us by private corporations, which have their own interests at heart, not ours; everything comes at a price.  We’ve already seen that we can’t trust our fates to corporations — always have a backup handy, in other words.

At least until we begin altering our bodies, our access to the cloud will be via devices like laptops, mobile handsets, and cellphones.  Patterns of behaviour definitely affect patterns of thinking (some would argue that they’re really the same thing), but these effects are learned habits — an adaptation to the stimuli we try to make sense of every day.  We use external devices to mediate our access to this other world; they present the world in new ways, and our understanding of the world changes accordingly.  But this interface — and the way we process this information — is significant only in how it changes the way we interact with the worlds we see, both virtual and physical.  Is the world a different place for a kid who spends all his time online, compared to one who doesn’t?  Is it the same world, but viewed in a different light?  Ultimately, the act of processing information isn’t as important as the final result: are we better equipped to live in the world, and understand our place in it?  I would argue that given freedom in cyberspace, yes, we are — but if our access to cyberspace is controlled, censored or mediated by non-neutral parties, then it has the potential to put us in a very dire situation indeed.

It’s difficult to conceive of life in the cloud; it’s not just a cognitive prosthesis, it’s not just a portable encyclopedia, it’s not just a window into living rooms and bedrooms around the world, and it’s not just a magic mirror that shows us every film, TV show or book ever made.  It’s all these things, but it’s also the matrix in which all these things are organised, indexed, filtered, and made sense of — in short, it’s a way of making sense of the virtual world, which is a world as real as any other.  When it’s in our hands, it means the expansion of the capabilities of our minds — but when it’s in the hands of others, it turns information and even control of our cognition over to potentially sinister parties.  As with everything to do with cyberspace, neutrality and transparency are crucial.
