The Unreal Nature of Real-time
The term “real-time” burst onto the techno-business stage in the early 1990s, full of promise: one of those irresistible ways to improve performance made possible by the networking of computers.
The basic idea was hard to argue with: systems of all kinds work better with current, accurate information, i.e. feedback. As organizations improve on the twin ideals of speed and error elimination, they become leaner, more adaptive and ultimately fitter.
Any process or service could be magically improved simply by placing this hyphenated term in front of the thing you were selling or defending: real-time computing, decision-making, supply-chain operations, energy management, you name it. We checked Amazon for books searchable using the term “real-time,” and it lists more than 174,000. Discount half of these as misfits and you’re still facing an overwhelming number of applications of the concept to everything (literally) from strategy to parenting to rendering animation.
After a decade and a half of tuning, tightening and retrofitting our world to be more real-time, we’re learning that, as with all highly leveraged interventions, there are often nasty unintended consequences to deal with. The very methods and tools we use to achieve our ends turn around and start shaping us in surprising and often limiting ways (think of Orwell, mood-altering drugs, Shelley’s Frankenstein and greedy King Midas). The three looming impacts of the real-time revolution we’re most concerned about are the loss of redundancies, the erosion of personal time and space, and diminished learning capacity.
1. Loss Of Redundancies
Redundancy gets a bad rap. It’s considered extra, unnecessary and duplicative. It’s what you get rid of. But in systems terms, redundancies are a necessary source of flexibility, ensuring there is backup capacity should vital parts become compromised or fail. Nature is full of redundancies, starting with the outpouring of sperm that never make it to their procreative destination. Imagine if we could streamline that operation and release only the one sperm cell that is needed: efficient, but at what cost to diversification and evolution? How many friends is enough: two, six, one? On the traditional, low-tech farm, workers have always developed overlapping competencies, building in backup support along the way. Should someone become injured or ill, others could carry on. The related concept of multi-skilling became popular in the mid-1900s as part of high-performance and socio-technical systems work designs for the same reason. Gains were considerable; not radical, but sustainable. We saw this first-hand in our work at Shell Canada in the early 1990s, realizing performance improvements of up to 50 percent in plants that deployed self-managing teams.
The drift to real-time performance systems challenges the need for redundancy by removing uncertainty and replacing it with information. If we knew exactly what we needed at each moment, we wouldn’t have to maintain any slack. Redundancy costs money. Eliminate extra inventory, space, advertising and people, and you’re a winner. Enter the lean, hyper-efficient 21st-century outsourcing business model: a great short-term competitor, but a lousy long-term bet. A lot of the hollowing out of US industry over the past quarter century can be attributed to this kind of thinking. Reduce inventory and you risk being caught short by unanticipated surges, as has happened literally in the case of power outages. Businesses are similarly bottlenecked when they under-stock needed items, and their customers are inconvenienced when dependence on a single supplier leaves them with no alternatives. A recent Google search outage is a case in point: with the majority of web searches running on one company’s network, a breakdown leaves users with no viable alternative. Cut space through hoteling, telework and other virtualization methods and you risk killing corporate culture and identity (who is left to care about the organization?). Narrowcast your advertising and you risk capping important new growth, not to mention eliminating serendipitous learning and new partnering opportunities. Let all non-essential staff go and you’re now dependent on less committed temps and often faceless suppliers to respond to emergencies and opportunities. A fully tapped-out workforce has no time for anything beyond its core assignment, eroding essential processes like socialization, error detection and prevention outside the scope of primary responsibilities, opportunity-seeking and learning.
The net effect of all of this over-pruning and specialization is the loss of agility and resourcefulness. As market needs shift and the competitive field gets bumpy, narrowly focused supply chains can’t easily switch over to new tasks.
2. Erosion Of Personal Time And Space
We start our classes and lectures these days with the request that people turn off their phones and PDAs (personal digital assistants). We know this is a futile effort, but at least it reduces the number and length of the interruptions to come.
PDAs are real-time devices, delivering information to the user immediately. If it stopped there, we’d be grateful and celebrate, but of course it doesn’t. The device trains us faster than we teach ourselves to use it, and we quickly become addicted to every beck, call and vibration of the damnable machine. We become endlessly curious: who or what is trying to contact me? We have to know. Maybe it’s important; maybe it’s urgent! It’s no accident that people often call the RIM BlackBerry the “crackberry,” referring to its addictive properties.
Not so long ago, when you left the office or your place of work, you were on your own time. PDAs are changing this. The new expectation is that email and text messages are checked 24/7 and will be responded to within hours or even minutes. The reasoning goes something like, “if he knows about something important (or just current) now, and could address it immediately, then he should do it.” There is no place left to hide. You’re never off the grid, never out of reach.
The crazy thing is that we participate in this hijacking willingly, even enthusiastically. Real-time communications, gaming and a slew of information-based services accessed through PDA-like devices (entertainment, search, location-based services and so forth) are compelling and unfathomably distracting, turning us into ADD-like multi-taskers increasingly incapable of simply being in the moment.
3. Diminished Learning Capacity
Before paper, we had a limited capacity to store and access knowledge. The printing press, invented around 1440, went a long way toward preserving and sharing information beyond what was in the heads of local wise folks. Powerful as these innovations undoubtedly were, they were static, linear and asynchronous. To benefit from them, the user still had to do a lot of basic thinking and problem solving.
Enter real-time information access and smart systems, and watch out. Can’t remember a fact, find a street, or recall who wrote To Kill a Mockingbird? Don’t fret. In fact, don’t even try to remember; just google it and presto, problem solved. How this may affect learning and memory is as yet an unanswered question. Our brains work efficiently by using the most direct routes, where the least resistance and effort are required. The new skill we are learning is how to use search engines and smart devices, quite possibly in place of critical thinking and the formation of the neural connections that lock in memories.
A second learning implication of real-time is the elimination or automatic correction of errors by smart devices, which are popping up everywhere: driver assistance, spelling and grammar checking, vocal pitch correction and more. The trouble is, so much of how we learn comes from making mistakes, feeling the pain, frustration or even disapproval that results, and making the necessary corrections; in other words, learning. Lose that direct contact with the results of our effort, and nothing is learned until the artificial support is absent and we fail big time.
And one more learning casualty of real-time may be moral development in young children. Research at the University of Southern California’s Brain and Creativity Institute suggests that constant exposure to fast-paced, rapidly shifting images involving pain and violence conditions children to accept others’ pain without feeling concern or compassion. There is simply not enough time for the brain to process complex emotional responses. This is particularly true of social or psychological pain and suffering, which takes between six and eight seconds for our brain to register. When these kids later witness similar acts in the real world, they may be desensitized and slow to respond with compassion. According to Manuel Castells, perhaps the most prominent sociologist writing about the networked society, “Lasting compassion in relationship to psychological suffering requires a level of persistent, emotional attention.”
Progress is always a double-edged sword, and the closer we fly to this particular sun, the more overheated and vulnerable our wings become (excuse the crass mixing of metaphors). Real-time face-recognition security systems and integrated health-records management are two examples of constructive change and improvement. Perpetual personal availability through PDAs, by contrast, is just plain destructive and spiraling out of control. We humans are no better suited to real-time-all-the-time than we are to flying around on makeshift wings. Make a list of the three things you value most, cherished life moments or memories, and we’ll wager they have little to do with real-time. Real-time is a convenient means to an end, not a way of being. Real life takes time, often lots of it, to sense, to share experiences, to learn through trial and error, to cry and to laugh. This is true for organizations and societies as well as for individuals, especially children. Now that we’ve figured out how to deploy real-time, it’s time to learn how to set healthy limits, and when not to use it at all.