1.21 Nascent stages
Looking back at the first years of the 21st century with the benefit of some hindsight, we can see that the digital world is perhaps only now mature enough for us to have a semblance of understanding of it. By taking a step back and appreciating the circumstances that have shaped it so far, we can view both its nascent stages and our own, as early digital humans.
For the first decade of its existence, from 1990 to 2000, the web was a curiosity: interesting but marginal.
During this time, it was regularly predicted that people would never sit still and stare at a screen all day, and that online shopping would not catch on. The idea that the internet would change things seemed fanciful.
The first decade of the 21st century, and the web's second, from 2000 to 2010, was a cauldron of confusion. It began with one financial crash in 2000 and then hosted another – the big one – just seven years later.
This was the time when humanity and digital technology first began to fuse. The people-powered web was born out of the dot-com crash. It arrived between 1999 and 2004, as entrepreneurs realised that raw technology created 'friction-free' value but removed margins. The lightbulb moment was both an irony and a paradox: the realisation that people sharing content was the way to scale. The value of digital networks was as much social as technical, with humans as the conduits and the crucial ingredient for commercial value.
In a weird way, there are parallels here with the origins of the computing revolution. Just as code itself arose out of the necessities of the Second World War, the social web emerged in a post-9/11 context: an atmosphere of increasing surveillance, uncertainty and fear. Emergence has always been a story of clouds and silver linings.
The period between 2000 and the global financial crisis of 2007-8 was also characterised by a reactionary, almost stubborn, imperative to maintain the status quo. Stung by a series of socio-political and commercial shocks, it was a time of wilful denial and of unswerving insistence on maintaining year-on-year commercial ROI. Not many capitalists were ready to accept that, in such a climate, a digital marketplace might change things much. There certainly wasn't an appetite to 'spook the markets'.
And so, debt was leveraged as the temporary solution. In this world, even a sub-prime marketplace was fair game to maintain margins. Inconvenient economic implications of digital business and concepts like 'freemium' were swept under the carpet until, one day, the debt strategy came undone. One could say that time caught up with digital denial in September 2008 when, with the bankruptcy of Lehman Brothers, that mortgaged house of cards collapsed.
In the 15 years since 2000, industry after industry has been upended by the web. And if we look for signals amongst the frenetic pace of change, there have been signs that a new kind of economic activity is gathering.
The sharing economy, crowdfunding, co-creation, maker technologies, advances in nanotechnology, artificial intelligence and alternative currencies are all offering different models. The waves of innovation happening today may mean we have the wherewithal to review some of the fundamental assumptions we've relied upon throughout the age of industrialisation.
While the industrialised factory age has always required continuous consumption to keep the machines going, over the last few decades there has been a shift in the relationship between ourselves and our planet's natural resources. We now have the data to know that they are no longer something we can consider infinite. The idea of endless production to satiate the needs of the machines has, in some ways, hit the buffers.
As we map and better understand a networked Earth, and experience the enormity of our digital capability in doing so, we are simultaneously beginning to appreciate that there are limits in the real world we have to respect.
The next chapter of evolution will be coloured by how we address this. A week ago, Pope Francis published an encyclical, Laudato si', a stark warning and a confirmation of this same thing. Ostensibly it is about climate change; it is also about what it means to be human.
By remaining locked in an industrial mindset of endless conveyor-belt manufacture, one that demands constant growth for the investments made in it, we risk transmuting the digital world, and all the opportunity it brings for new growth, into a mirror of a broken model: a pipeline for continual, even mindlessly incessant output, producing things much as a dumb industrialised machine would, instead of behaving as a smart system.
All the capitalisation and noise-to-signal challenges of the digital world potentially devalue the characteristic that originally made the web irresistible: the currency of the connections we make with one another. And as it stretches our own bandwidth, even human genetics may have to change for us to keep managing it.
Digital krill
Without a mindful approach to how we enter the Digital Age, we may end up as attention-starved and memory-deficient digital krill, living in what Nicholas Carr has called 'the shallows'.
Our complex and unique individuality as humans may be processed and rendered down into data sets. We run the risk, in this state, of becoming digital pond life, thermostatically controlled by Google, Facebook and those looking after our interactivity. The human gene pool may be swiped aside by the 'Tinderisation of everything'.
If data is 'the new oil', as it is sometimes described, then how can we make the best use of all its riches? Can surveillance and freedom of expression possibly co-exist online?
These are questions Sir Tim Berners-Lee, the web's inventor, has suggested we ask and answer. And maybe we answer them by thinking differently, as part of a new chapter. As he has put it, 'people need the web and the web also needs people'.
While the machines are doing huge amounts of computational thinking, can we try to match that by activating our own collective reasoning?