Society & culture


Why we should all delete Facebook

It’s been argued that, when it comes to being evil, Google beats Facebook hands down. It holds more data about you than Facebook does, and when its founders moved up into Alphabet — the holding company that now owns Google — Google itself was plunged into a cultural crisis.

What is especially worrying is that Google’s fabled culture of openness and strong moral purpose seems to have faded away alongside its famous tagline, “Don’t be evil” (since replaced with the less restrictive “Do the right thing”). Paying off Andy Rubin, creator of the Android mobile operating system, with $90 million after he was accused of sexual misconduct was the last straw for many, although some of the ways the company is allegedly using AI might surpass this, were they to become public. Nevertheless, Zuckerberg still seems to be the one with the evil empire.

Facebook is extraordinary. No technology, invention, utility or service has ever been adopted as fast; it was adopted faster than the internet itself. YouTube (owned by Google) has over 1.5 billion monthly active users. Facebook has over 2 billion. But while Zuck talks about giving people the power to build community and bringing the world together, he might be doing the complete opposite. The benefits of connecting people are far from clear and very much open to debate.

Another thing to know about Facebook is that, while it may have been set up initially following a rebuff to Zuckerberg from a female student at Harvard (the site’s precursor was called Facemash, and users were encouraged to vote on whether someone was hot or not), it really came into its own when Peter Thiel came aboard as the site’s first external investor. Thiel had come across the French philosopher René Girard, who taught at Stanford. Girard’s big idea was “mimetic desire”, which essentially says that, once our basic needs are met, we look around at what other people are doing and copy them. In other words, imitation is at the heart of all human behaviour.

Thus, elements of vicariousness have been built into Facebook’s operating model from the early days, and once it has connected people it doesn’t have to do much to keep them glued to its site. But to say simply that Facebook has a misanthropic or malign streak misses the point. What connecting people really means is connecting people who are like you or agree with you, which in practice means fuelling filter bubbles that narrow our conception of “we” and pervert public debate, and any search for truth, by aligning individuals in separate ideological silos. And why not? Facebook doesn’t care whether something is true, or whether something that appears on its site fuels hatred. All it cares about is grabbing your attention and keeping it for as long as possible, because attention is what it sells to advertisers.
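To make that mechanism concrete, here is a minimal sketch of an engagement-maximising feed ranker. It is a hypothetical illustration, not Facebook’s actual code: the post fields and scoring weights are invented. The point is structural: truthfulness never enters the score.

```python
# Toy feed ranker: posts are ordered purely by predicted engagement.
# Hypothetical illustration only; not Facebook's actual algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # model's estimate of clicks (invented field)
    predicted_dwell: float   # estimated seconds of attention (invented field)
    is_true: bool            # known to fact-checkers, say

def engagement_score(post: Post) -> float:
    # Invented weights. Note what is absent: is_true plays no part.
    return 0.7 * post.predicted_clicks + 0.3 * post.predicted_dwell

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, accurate report", 2.0, 30.0, True),
    Post("Outrage-bait rumour", 9.0, 80.0, False),
])
print([p.text for p in feed])  # the rumour ranks first
```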

Facebook doesn’t care if something is fake or if content is stolen either. A good example is what happened in the final three months of the US presidential campaign in 2016. As Jonathan Taplin, author of Move Fast and Break Things, points out, fake election stories on Facebook generated more engagement than the top stories from the New York Times, Washington Post, NBC, Huffington Post and others. But why fix fake, fraudulent or pirated content if it is capturing and keeping eyeballs on your site? Facebook has no financial motivation to do so.

So where does this leave us? Well, if you are on Facebook you are essentially working for Facebook. This is true of other social media sites too, but with Facebook your complicity with the company’s deceit is eroding democracy and destroying free markets. Fake news drives out real news; pirated content that costs nothing to circulate drives out content that’s original and expensive to make; and if tribes of people are at war with each other using words, that’s great for engagement too.

Facebook says it’s a platform, but in reality it’s a publisher and should be made to act like one, with the regulation that entails. At its very heart, though, Facebook is a surveillance operation that steals information about you and sells it to the highest bidder. Yet people can’t see this. Such is the pull of the app, and the length of its terms and conditions, that most people are blissfully unaware of what’s going on. If Facebook were a priest taking confession and you found out, after years of revealing your most intimate secrets, that the priest had been selling your confessions, how might you feel?

It’s not just about ads either. Facebook is manipulating what people read and how people feel. But a backlash will come. First, though this is somewhat unlikely, governments will regulate or break up what has become an anti-competitive virtual monopoly. Second, advertisers, who are Facebook’s real customers and are themselves being manipulated with technological sleights of hand, will walk away. Or, third, people will work out what’s going on and how Facebook’s business model really works, and see that using Facebook makes them feel bad about themselves. Then the network effects that built Facebook will operate in reverse.

So why doesn’t the company just change before change is imposed upon it? The answer is most likely that it has no idea that anything it is doing might be wrong. For example, in a desperate attempt to recruit new users*, Facebook offered people in remote regions of India internet connectivity on the proviso that Facebook would control the sites these people could access. The Indian government was outraged, while Facebook couldn’t understand what the problem was.

Of course, there’s a further scenario. Users don’t care, governments don’t care either, and Facebook becomes Big Brother.

Ref: Sunday Times magazine (UK), 29.10.17, ‘Facebook is watching you’ by John Lanchester. Sunday Times (UK), 23.12.18, ‘What’s gone wrong with Google?’ by D. Fortson.

Books: The Attention Merchants by Tim Wu and The Age of Surveillance Capitalism by Shoshana Zuboff

* It’s been estimated that saturation for Facebook is currently around 2 billion users. There are 3.5 billion potential internet users, but many of them live in China and Iran, where Facebook is blocked. In developed markets its use is waning, so the only options are to recruit users from remote areas and poorer regions, further monetise existing users, or buy or create new platforms.

Addicted by design

Here’s a funny thing that’s not in the least bit funny. Subliminal advertising is banned, but designing apps and devices that knowingly tap into our subconscious hopes, fears and desires isn’t. What, you thought that game you can’t stop playing on your phone was just naturally addictive? It’s called “operant conditioning” in psychology-speak, or “behavioural design” in Silicon Valley.

Behavioural design is now invisibly embedded in the daily lives of anyone who goes online. And as we spend more and more time online, designers learn more and more about how to make us do things without us even knowing that our thoughts and behaviours are being manipulated. Ethical oversight is limited, if not non-existent, although proponents of behavioural design insist that you cannot get people to do what they don’t want to do. Most of the time, behavioural design simply involves making it easier to get people to do things that they want to do already by putting triggers in front of them.

A good example is Netflix. If you watch a show on Netflix, the next episode will start automatically unless you stop it. That’s fine, in the sense that you were probably left on a cliff-hanger and do indeed want to know what happens in the next episode. The problem is that this desire never ends, and before you know it it’s 4am and you are still watching TV.
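As a crude sketch of how such a default works (the countdown length here is invented), autoplay is simply an opt-out timer: doing nothing is the easiest action, and doing nothing means watching more.

```python
# Toy model of autoplay: the default action is "keep watching".
# Illustrative only; the 5-second countdown is an invented figure.
import time

def autoplay_next(episode: int, countdown: int = 5) -> int:
    print(f"Episode {episode + 1} starts in {countdown}s (Ctrl+C to stop)")
    try:
        time.sleep(countdown)  # doing nothing counts as consent
    except KeyboardInterrupt:
        return episode         # stopping requires an explicit act
    return episode + 1         # inaction advances the binge

episode = 1
episode = autoplay_next(episode)  # unless you intervene, you're on episode 2
```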

What are some of the implications of this? The first important point is the need for companies to make a customer’s first contact or touch point overwhelmingly positive. This is why Apple takes so much care with its packaging and why airlines serve champagne in business class the moment you sit down. But the hottest emotional triggers of all are other people. Hence the ecosystem of likes, followers and so on, all of which deliver the impression of being successful on some level. The trouble is that what started off as a good idea has gone too far. People now spend their lives chasing followers and likes. They are so consumed by documenting and curating their lives on social media that they forget to be in the moment and enjoy the experience itself. And guess what? This makes people stressed, anxious, unhappy and depressed.

What started as a fairly innocent idea has enslaved more than half the world and made the likes of Facebook and Google very rich. Ethically, this is now hard to defend. These companies know full well that humans need connection, approval and affirmation, and some of them are deliberately using AI to dispense variable rewards that make their products addictive. But if our thoughts, actions and behaviours are being designed for us, to whom are the designers responsible? At the moment, the only answer appears to be Wall Street. The internet’s founding principle of educating, informing and enlightening people is being perverted by the commercial imperative to suck in as much of our attention as possible, by any means available. In short, companies like Facebook and YouTube are hacking and then hijacking our psychological vulnerabilities, eroding human autonomy and free agency. They compel our attention, then suck up our innermost thoughts and sell them to the highest bidder.

If you want to see where all this may be going, take a trip to Las Vegas. Everything inside a casino is designed to keep you there as long as possible. The machines are cunningly designed to keep players playing for as long as possible, which equates to spending as much money as possible. Pay-out schemes vary rewards for different types of player so that a big pay-out always feels imminent. Even the lighting (or lack of it) is designed so that people lose track of time. How the machines sound, and even how the casino smells, has been deliberately designed too. You might be tempted to walk away, but this has been thought about as well. Just when you are about to leave, a ‘good luck ambassador’ will appear and dispense tokens. They know when to show up because the machines tell them.
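The conditioning trick casinos (and app designers) rely on is the variable-ratio reward schedule. Here is a simulation sketch, with invented probabilities: both schedules below pay out the same on average, but one is predictable and dull, while the other keeps the next “win” feeling perpetually imminent.

```python
# Fixed vs variable-ratio reward schedules: same mean payout,
# very different psychology. Probabilities here are invented.
import random

random.seed(42)

def fixed_schedule(pulls: int) -> list[float]:
    # A steady 1-unit reward on every pull: predictable, quickly boring.
    return [1.0] * pulls

def variable_ratio(pulls: int, hit_prob: float = 0.1, prize: float = 10.0) -> list[float]:
    # Rewards arrive unpredictably; expected value per pull is still 1.0.
    return [prize if random.random() < hit_prob else 0.0 for _ in range(pulls)]

fixed, variable = fixed_schedule(10_000), variable_ratio(10_000)
print(sum(fixed) / len(fixed), sum(variable) / len(variable))  # both ~1.0
# The hook lives in the variance, not the mean: operant conditioning
# research finds intermittent rewards the hardest to walk away from.
```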

The point here is that the experience designed for casinos and Candy Crush is being designed into areas such as education, banking and healthcare too. Everything you buy, everywhere you go, is increasingly engineered to maximise dwell time or purchases. You might argue that this has always been the case: think of supermarkets, for instance. Yes, but what they do is fairly generic and superficial. They don’t follow us around nudging us with ‘breadcrumbs’; they don’t try to monitor our moods or get inside our heads permanently. They are merely shops, and most of the time they don’t even know who we are. In contrast, Big Tech would like to know everything about you… forever.

Ref: 1843 magazine (UK), 30.11.16. ‘The scientists who make apps addictive’, by Ian Leslie.

Books: Hooked: How to Build Habit-Forming Products by Nir Eyal. Addiction by Design by Natasha Dow Schüll

Why the sad face?

How are you? How is life these days? In 2015, a YouGov survey found that 65% of the British (and a whopping 81% of the French) thought that the world was getting worse. But it’s not; it’s been getting better for decades. On almost any measure that matters, life is demonstrably better now than it has been in the past.

In fact, 2016 was the best year ever for humanity, according to Philip Collins writing in The Times. That year, extreme poverty affected less than 10% of the world’s population for the first time ever. If that’s not cause enough for celebration, 2016 also saw global emissions from fossil fuels fall for the third year running and the death penalty become illegal in over half of all countries.

Nicholas Kristof, writing in the New York Times, echoed this optimistic perspective: child mortality is now half what it was back in 1990, and more than 300,000 people every day are gaining access to electricity for the first time. Similar good-news stories can be found in statistics about human lifespans (more than double what they were a century ago — a mere 31 years in 1931, for example), the number of women in education and work, basic sanitation, and clean water. It’s the same story with literacy, freedom and even violence.

So why does it feel like things are terrible? Why are so many people longing for the good old days? The answer, most likely, is global media and ubiquitous connectivity. Ignorance is no longer bliss. We are exposed to endless headlines about Brexit, Trump, Putin, Syria, terrorism, climate change and North Korea 24/7. There’s no escape. It could also be that the experience of ordinary people is often at odds with what the experts are saying. As a result, feelings have taken over from facts in many instances, which goes some way to explain everything from Trump and Brexit to the rise of populism.

Our response to this tends to be one of two things. Either we conclude that the world is indeed going to hell, so we might as well enjoy ourselves, or we become profoundly anxious, depressed and cynical about everyone and everything.

But as members of what’s been termed the New Optimist movement (a term meant to evoke Richard Dawkins’ New Atheists) point out, this doom and gloom is deeply irrational. The pessimistic mood simply ignores the facts and underestimates the power of the human imagination. Witness Extinction Rebellion. Moreover, while a worrisome mind was useful in the past (when looking out for threats outside your cave could literally save your life), the same fight-or-flight mindset today can lead to spirals of despond. The fact that news now circulates the globe faster than it can be properly analysed doesn’t help either. Add a deluge of digital opinion that is, at best, subjective and, more often than not, false and misleading, and it’s hardly surprising that so many people feel unsettled and disorientated, to put it mildly.

Another explanation for pessimism lies in our cognitive biases, especially our general inability to properly assess risk or probability. For example, more people died in motorcycle accidents in the US in 2001 than died in the Twin Towers attack on 9/11. Who remembers them? Who, for that matter, remembers the plane that hit the Pentagon or the one that came down in a Pennsylvania field?

So, should we relax? Yes and no. Yes, in the sense that we need to put things into proper perspective: look at the real numbers and assess the actual probabilities. No, in the sense that just because we’ve had it good for the last 50 or 100 years doesn’t mean our run of good luck will naturally continue. Just because every generation has lived slightly better than its forebears doesn’t automatically mean this trend will continue forever. Maybe the last 100 years is simply a blip (extrapolating from recent personal experience or data is usually what goes wrong in long-term forecasting).

Another downside of global connectivity is that risk is now globally networked and systemic, meaning that a lunatic in the White House with the nuclear codes, or someone in a basement with a nasty biological virus, could wipe many of us out tomorrow. There are still things that could go seriously wrong, as David Runciman, a professor of politics at Cambridge, points out.

OK, so around 120 countries (out of 193) are now democracies (up from just 40 in 1972), but this too could change. Cyber-terrorism could bring us to our knees for extended periods and, while people throughout history who have warned of the end of the world have always been wrong, they only need to be right once. Hence, a degree of caution, or cynicism, can be useful. It’s also worth noting that people who become too depressed about things tend not to be motivated to fix them — although, similarly, if you are too optimistic a similar rule applies. Any mindset can become a self-fulfilling prophecy.

Also, while it’s indisputable that globally, on average, things are good and getting better, this isn’t true for everyone, everywhere. Local exceptions apply, as always. Furthermore, a more nuanced criticism of the rational optimist view is that saying “things are great” is another way of saying “don’t change anything”, which is to say, leave free-market capitalism and political structures well alone.

Overall, people will choose to believe whatever they want to believe and choose the facts that support their worldview, but one thing that’s still missing is vision. We are increasingly stuck in the present, ignorant of our deep history and seduced and distracted by an internet that's fuelled by our attention. If, instead of giving the internet and 24/7 media our time, we spent it thinking about how we, as a species, would like to live now and where we would like to travel next, I suspect that a lot of the current anxiety and pessimism would evaporate. In other words, we should worry less about what we think is happening now (or might happen next) and start talking about what we want to occur next.

Ref: We’ve never had it so good, by O. Burkeman, The Guardian (UK), 29.7.17.

Books: The Better Angels of Our Nature by Steven Pinker; The Moral Arc by Michael Shermer; The Rational Optimist by Matt Ridley; Progress: Ten Reasons to Look Forward to the Future by Johan Norberg; Nervous States: How Feeling Took Over the World by William Davies

Power without responsibility

The important thing to know about Mark Zuckerberg is that he studied psychology, alongside computer science, at Harvard. Ironic, then, that he’s managed to create a company with a psychological flaw, a kind of corporate autism, which results in it being genuinely surprised when it does things people don’t like.

It responds sloppily to criticism, or public outrage, by first denying and then disowning the problem, before ultimately suggesting that the fix, whatever the problem, is more awesome computer code.

But Facebook isn’t alone in suffering from a kind of blindness mixed with a narcissistic belief in its own virtue. Google, like Facebook, wants your data — all of it. If all that remains is an empty and abandoned carcass, that’s just capitalism. But at least Google has doubts. Facebook, according to observers, just doesn’t care.

The business model that these companies, and most others in Silicon Valley, depend upon is based upon keeping people, including children, glued to their screens for as long as possible, hoovering up data about where they are, what they’re doing, and even what they are thinking, and selling this personal information to advertisers who pay billions to send people targeted ads.

Zuckerberg’s dream is that, one day, Facebook will know you so well that “its predictive model would tell you the bar to go to when you arrive in a strange city, where the bartender would have your favourite drink waiting.” This might sound convenient — possibly even lovely — but of all the bars in all the world do we really want the likes of Zuckerberg walking in and looking over our shoulder?

Not only does Zuckerberg’s dream violate privacy at its most fundamental level, it disempowers us, ultimately undermining human behaviour and free will itself. The fact that intimate information about who you are and how you think might be shared with, or sold to, governments is also deeply alarming, and surely at odds with the libertarian beliefs these companies supposedly hold dear?

If the likes of Google and Facebook (which of course means YouTube, Gmail, WhatsApp and Instagram too) get too cosy with governments, this means state-sponsored surveillance, the likes of which the East German Stasi could only dream of. And that’s before you add in the capturing of conversations via smart speakers in our homes, eye monitoring via internet-connected glasses, smart contact lenses, connected cars and virtually everything else.

But most people can’t see this. They can’t see that what John Stuart Mill called “freedom of mind” is at risk — the freedom to think our own thoughts and make our own choices. This is, firstly, because most people are blissfully unaware of how they are being influenced and, secondly, because historical precedents are few and far between.

Without getting hysterical, the only historical analogies for this are the birth of totalitarian power in the 1920s and the publication of George Orwell’s 1984 in 1949. Orwell foresaw that the destruction of privacy via technology that invaded every aspect of life could only be achieved if people felt a need for the technology, and in particular for screens that “made it possible to receive and transmit simultaneously.” You watch, but you are also being watched.

But why not simply switch these screens off? This is where Zuckerberg's study of psychology comes in. As Sean Parker, Facebook’s founding chairman, has said, the network knew from the outset that it was creating something addictive, something that exploited “a vulnerability in human psychology.”

In other words, these companies deliberately engineer addiction into the products and services they provide. It’s very hard to switch these things off when they play to our need for connection, belonging and affirmation. Moreover, governments are making it harder and harder to switch off. Why wouldn’t they? The default setting for citizenship is becoming digital. It is becoming difficult to pay taxes, shop for food, enrol in school, or access healthcare unless you are online. That’s the new Faustian bargain. We will give you the illusion of freedom and control if you enter a toxic wasteland controlled by a handful of monopolistic profit-seeking companies intent on selling your attention to the highest bidder.

Ref: The Sunday Times (UK), 10.2.19, ‘The sinister side of Facebook’ by B. Appleyard. The Economist (UK), 24.3.18, leader: ‘Epic fail’ (anon.). Sunday Times (UK), 23.12.18, ‘Merry Christmas everybody – it’s 1973 all over again’ by N. Ferguson. Sunday Times (UK), 1.10.17, ‘The antisocial network’ by N. Ferguson. The Guardian (UK), 25.1.18, ‘George Soros: Facebook and Google are a menace to society’ by O. Solon. The Guardian (UK), 23.12.17, ‘Tech’s terrible year: how the world turned on Silicon Valley in 2017’ by O. Solon. See also 13D.com, 17.1.19, ‘The Fall of Facebook’.

Books: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff. World Without Mind: The Existential Threat of Big Tech by Franklin Foer. Zucked: Waking Up to the Facebook Catastrophe by Roger McNamee. #deletetheinternet

Digital afterlives

“The first time I texted James I was, frankly, a little nervous. ‘How are you doing?’ I typed, for want of a better question. 'I’m doing alright, thanks for asking.’ That was last month. By then, James had been dead for almost eight months.”

Once, you died and you were gone. There was no in-between, no netherworld, no underworld. There could be a gravestone or an inscription on a park bench. Perhaps some fading photographs; a few letters or physical mementoes. In rare instances, you might leave behind a time capsule for future generations to discover.

That was yesterday. Today, more and more your dead self inhabits a technological twilight zone — a world that is neither fully virtual nor totally artificial. The dead, in short, are coming back to life, and in the future there could be hordes of them living in our houses and following us wherever we go. The only question is whether or not we will choose to communicate with them.

Nowadays, if you’ve ever been online, you will likely leave a collection of tweets, posts, timelines, photographs, videos and perhaps voice recordings. But even these digital afterlives may appear quaint in the more distant future. Why might this be so?

The answer is a combination of demographic trends and technological advances. Let’s start with the demographics.

The children of the revolution are starting to die. The generation that grew up in the shadow of the Second World War is fading fast, and next it’s the turn of the baby boomers who grew up in the 1950s and 60s. These were the children who challenged authority and tore down barriers and norms. There are an awful lot of them, and what they did in life they are starting to do in death.

They are challenging what happens to them and how they are remembered. Traditional funerals — with all their cost, formality, and morbidity — are therefore being replaced with low-cost funerals, direct cremations, woodland burials and colourful parties. We are also starting to witness experiments concerning what is left behind — instances of which can be a little “trippy”.

If you die now, and especially if you’ve been a heavy user of social media, a vast digital legacy remains — or at least it does while the tech companies are still interested in you. Facebook pages persist after death, and memorial pages can be set up (depending on privacy settings and legacy contacts) allowing friends and family to continue to post. Dead people even get birthday wishes, and in some instances a form of competitive mourning kicks in. Interestingly, some posts to dead people even become quite confessional, presumably because some people think conversations with the dead are private. In the future, we might even see a kind of YouTube of the dead.

But things have started to get weirder still. James, quoted earlier, is indeed departed, but his legacy is a computer program that has woven countless hours of his recordings into a ‘bot – one you can natter to as though James were still alive. This is not as unusual as you might think.

When 32-year-old Roman Mazurenko was killed by a car, his friend Eugenia Kuyda memorialised him as a chatbot. She asked friends and family to share old messages and fed them into a neural network built by developers at her AI start-up, the company behind the app Replika. You can find him — or at least what his digital approximation has become — on Apple’s App Store. Similarly, Eter9 is a social network that uses AI to learn from its users and create virtual selves, called “counterparts”, which mimic the user and live on after they die. Or there’s Eterni.me, which scrapes interactions on social media to build a digital approximation that knows what you “liked” on Facebook, and perhaps knows what you’d still like if you weren’t dead.
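The underlying technique can be sketched in a few lines. Below is a toy retrieval version, not the neural network the Roman bot actually used: it simply replies with the archived message most similar to your prompt. The archive contents are invented.

```python
# Toy "digital afterlife" bot: reply with the most similar archived
# message. Illustrative only; real systems like the Roman bot trained
# neural networks on the message logs rather than using retrieval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive = [  # invented stand-ins for a deceased person's messages
    "I'm doing alright, thanks for asking.",
    "Let's get coffee next week.",
    "Work is exhausting but good.",
]

vectorizer = TfidfVectorizer()
archive_vectors = vectorizer.fit_transform(archive)

def reply(prompt: str) -> str:
    scores = cosine_similarity(vectorizer.transform([prompt]), archive_vectors)[0]
    return archive[scores.argmax()]

print(reply("How are you doing?"))  # "I'm doing alright, thanks for asking."
```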

It might make you think twice about leaving Alexa and other virtual assistants permanently on for the rest of your life. What exactly might the likes of Amazon, Apple and Google be doing with all that data? Life enhancing? Maybe. But maybe death defying too.

More ambitious still are attempts to extract our daily thoughts directly from our brains, rather than scavenging our digital footprints. So far, brain-computer interfaces (BCIs) have been used to restore motor control in paralysed patients through surgically implanted electrodes, but one day BCIs may be used alongside non-invasive techniques to literally record and store what’s in our heads and, by implication, what’s inside the heads of others.

Still not sci-fi enough for you? Well, how about never dying in the first place? We’ve seen significant progress in extending human lifespans over the last couple of centuries, although longevity has plateaued of late and may even fall in the future due to diet and sedentary lifestyles. Enter regenerative medicine, which has a quasi-philosophical, semi-religious wing called transhumanism. Transhumanism seeks to end death altogether. One way to do this might be via nanobots injected into the blood (reminiscent of the 1966 sci-fi movie Fantastic Voyage). Or we might genetically engineer future generations, or ourselves, possibly adding ‘repair patches’ that reverse molecular and cellular damage much as we “patch” buggy computer code.

Maybe we should leave transhumanism on the slab for the time being. Nevertheless, we do urgently need to decide how the digital afterlife industry is regulated. For example, should digital remains be treated with the same level of respect as physical remains? Should there be laws relating to digital exhumation, and what of the legal status of replicants? For instance, if our voices are being preserved, who, if anyone, should be allowed access to our voice files, and could commercial use of an auditory likeness ever be allowed?

At the Oxford Internet Institute, Carl Öhman studies the ethics of such situations. He points out that over the next 30 years, around three billion people will die. Most of these people will leave their digital remains in the hands of technology companies, who may be tempted to monetise these “assets”. Given the recent history of privacy and security “outages” from the likes of Facebook, we should be concerned.

One of the threads running through the hit TV series Black Mirror is the idea of people living on after they’re dead. There’s also the idea that in the future we may be able to digitally share and store physical sensations. In one episode, Black Museum, a prisoner on death row signs over the rights to his digital self and is resurrected after his execution as a fully conscious hologram that visitors to the museum can torture. In another, Be Right Back, a woman subscribes to a service that uses the online history of her dead fiancé to create a ‘bot that echoes his personality. What starts off as a simple text-messaging app evolves into a sophisticated voicebot and is eventually embodied in a fully lifelike robot replica.

Pure fantasy? We should perhaps be careful what we wish for. The terms and conditions of the Replika app mentioned earlier contain a somewhat chilling passage: People signing up to the service agree to “a perpetual, irrevocable licence to copy, display, upload, perform, distribute, store, modify and otherwise use your user content.”

That’s a future you they are talking about. Sleep well.

Ref: The Telegraph magazine (UK), 19.1.19, ‘This young man died in April. So how did our writer have a conversation with him last month?’ by H. de Quetteville.

Algorithmic bias

As AI becomes more critical to the inner workings of the world, more attention is being paid to the inner workings of computer code. AIs already make millions of decisions about who gets a job, who gets a home loan or even who goes to jail, so being able to check whether or not an algorithm is biased, or just plain wrong, is increasingly important.

Some errors are simply that: accidents. But others are the result of what’s been called ‘the white guy problem.’ Most coders, especially in the US, are male. 88 per cent of the patents driving big tech are developed by all-male teams, while all-female teams generate just 2 per cent. Hence, conscious or unconscious biases can creep into code, with the result that facial recognition software fails to recognise darker skin or thinks that most Asian people are blinking.

This situation gets even more serious when it comes to predicting criminality. Algorithms designed to predict the likelihood of defendants committing further crimes have been shown to flag black defendants as twice as likely to re-offend as white defendants, a prediction with no foundation in fact.
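Audits of this kind boil down to a simple comparison: among people who did not re-offend, how often does the algorithm wrongly flag each group as high risk? A sketch with synthetic data (everything below is made up) shows how the check works.

```python
# Auditing a risk score for disparate false-positive rates.
# All data here is synthetic and purely illustrative.
def false_positive_rate(flagged: list[bool], reoffended: list[bool]) -> float:
    # FPR = wrongly flagged / all who did NOT re-offend
    false_pos = sum(f and not r for f, r in zip(flagged, reoffended))
    negatives = sum(not r for r in reoffended)
    return false_pos / negatives

# Hypothetical predictions for two groups with identical behaviour.
group_a_flagged    = [True, True, False, False, True, False]
group_a_reoffended = [True, False, False, False, False, False]
group_b_flagged    = [True, False, False, False, False, False]
group_b_reoffended = [True, False, False, False, False, False]

print(false_positive_rate(group_a_flagged, group_a_reoffended))  # 0.4
print(false_positive_rate(group_b_flagged, group_b_reoffended))  # 0.0
```

Even when both groups re-offend at the same rate, the model above wrongly flags group A far more often, which is precisely the pattern the recidivism audits reported.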

In a less serious, but nevertheless shocking, instance, algorithms made black residents in some areas pay 50 per cent more for their car insurance than white customers, even after factoring in the effects of low incomes and actual crime. It’s highly unlikely that subconscious bias built into the code played no part in this.

You might think that Silicon Valley in California would be the last place to suffer from equality issues, but that’s simply not the case — and not just with code. Several of the founders of high-profile companies like Uber have been forced to resign due to what amounts to sexist conduct, while Google has got into hot water over income disparities between the sexes.

According to many observers, men working in big tech either suffer from ‘on-the-spectrum’ awkwardness around women or they are outright hostile towards women and minorities. A study by the Center for Talent Innovation, for example, found that 52 per cent of women had quit their jobs in tech because of a “hostile environment”, while a staggering 62 per cent had suffered sexual harassment. There has been progress, but no government really wants to tackle these issues head on while these companies are so powerful.

Ref: Sunday Times (UK), 27.8.17, ‘Help my laptop’s a racist, sexist pig’ by T. Phillips.

Is the future frightening?

The writer F. Scott Fitzgerald once wrote: “Draw up your chair to the edge of the precipice and I’ll tell you a story.” Looking at what’s been working its way into the literary subconscious lately, Fitzgerald might be telling some very scary stories if he were still alive today.

On our TV screens, an age of anxiety has worked its way into numerous dystopian box-sets. Movies about Armageddon proliferate, and so too do computer games, although it’s a little unclear whether supply is driving demand or demand is being driven by something else entirely. Of course, dystopian fiction is hardly new, whether it’s Jonathan Swift’s Gulliver’s Travels (1726), Edward Bellamy’s Looking Backward (1888), HG Wells’ The Time Machine (1895), Aldous Huxley’s Brave New World (1931), or George Orwell’s 1984 (1949).

To my mind, demand for dystopia is being driven by a handful of key events. The first is the election of Donald Trump, which to some extent has removed the solid barrier between fact and fiction and fuelled anxiety and uncertainty for some people. The second, in the UK at least, is Brexit, which has done much the same thing. Then there’s climate change (human extinction for some), 9/11 and the growth of international terrorism, the global financial crisis (2008), and the fall of the Berlin Wall (1989). This last event was a while ago now, but the tectonic geopolitical shifts that resulted have perhaps taken a while to filter through. I’m thinking here of the decline of the US relative to China and the rise of numerous other actors and miscreants.

So, what next? Given the way trends work, I wouldn’t be at all surprised if we see the emergence of a balancing counter-trend, either in the form of rampant optimism or stoic acceptance. All you can really say is that whatever you think will happen probably won’t, and this can be read both ways — as an optimistic or a pessimistic statement.

Ref: Sunday Times (UK) ‘Culture’ 6.1.19, ‘The Future’s frightening’ by S. Armstrong.

Crowded planet?

The Earth is finite while the human population keeps growing, which means we have a problem, right? In a recent paper published in the science journal Nature, scientists estimated that the Earth’s carrying capacity was 7 billion. There are now around 7.5 billion people living on our planet. But wait. Surely we’ve been here before? Thomas Malthus, writing in the late 18th century, forecast that vast numbers of people were going to starve because population grew geometrically while the food supply grew only arithmetically. We heard much the same story from the Club of Rome and the publication of Limits to Growth in the 1970s.

What these forecasts missed is our ability to think imaginatively and innovatively. In the UK, 80 per cent of the population was engaged in agriculture in the 1800s; now it’s around 2 per cent, and yet we produce far more food. Secondly, while the global population is indeed growing, and growing fast (it took just 12 years to grow from 6 to 7 billion), growth is forecast to slow dramatically by mid-century, and then reverse, due to rapidly falling fertility rates in most countries. This is because, as societies urbanise and women become better educated (and better informed about contraception), family sizes tend to plummet. In poor rural societies, children can be an asset, working in the fields and looking after elderly parents. In modern cities, they are more of a cost. In short, what goes up can come down.
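The arithmetic behind “what goes up can come down” is worth seeing. Here is a stylised worked example, ignoring age structure, mortality change and migration: each generation multiplies the population by roughly the total fertility rate (TFR) divided by the replacement rate of about 2.1.

```python
# Stylised generational arithmetic: each ~30-year generation scales the
# population by roughly TFR / 2.1. Ignores age structure and migration.
REPLACEMENT_TFR = 2.1

def project(population_bn: float, tfr: float, generations: int) -> float:
    for _ in range(generations):
        population_bn *= tfr / REPLACEMENT_TFR
    return population_bn

# A 1.4bn population with a TFR of about 1.6 shrinks to roughly 0.6bn
# after three generations -- the scale of the China scenario cited below.
print(round(project(1.4, 1.6, 3), 2))  # ~0.62
# At replacement fertility it simply holds steady.
print(round(project(1.4, 2.1, 3), 2))  # 1.4
```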

Moreover, the human population has expanded over the last 200 years not because we’ve been breeding like rabbits, as some people assume, but because we’ve stopped dying like flies. Population growth has been driven by healthcare more than anything else; people simply aren’t dying like they used to. Healthcare innovations, along with better nutrition and better public safety, mean we’ve doubled human lifespans in a little over 100 years, which is something we should surely celebrate.

Furthermore, while it is forecast that we will hit between nine and 11 billion by 2050, it’s conceivable that the number will be much lower. It’s possible, for example, that in China the population could halve to 600m by the century’s end. Also, surely it’s not so much an absolute number that matters, but where people are and, most importantly, how they live. The trend in economics is towards less resource-intensive growth. The trend in values is towards people questioning what they really need, although this varies from region to region. Furthermore, developments in technology, especially in energy, food and water, could expand the supply of resources significantly.

We are neither animals doomed to reproduce until our food supply collapses, nor an invasive species that needs to be managed or culled. To study the history of humans on this planet is to see that, time and time again, we have remade our planet to suit our needs, and there is no reason to suppose this won’t continue. This is not an argument against sustainability or environmentalism, but simply one that believes the glass is half full, not half empty. As an old Native American saying goes: there are two wolves locked in a room and they are always fighting. One represents darkness and despair, the other lightness and hope. Which wolf survives? The one you feed.

Ref: The Earth’s carrying capacity for human life is not fixed, by Ted Nordhaus (Aeon)

Books: Empty Planet by Darrell Bricker and John Ibbitson.

Is work working?

Work is possibly the single most dominant factor in many people’s lives. It is the master of the modern world, at least in developed nations, where governments see work, along with capitalism and free markets, as not only inescapable, but socially essential. Work, as Joanna Biggs says in her book All Day Long: a Portrait of Britain at Work (2015), is also “how we give our lives meaning when religion, party politics and community fall away.”

But is work working? For many people, work is now barely sufficient to pay the bills: in the UK, around two-thirds of people living in poverty are working, and in the US, the average wage has not risen in real terms in 50 years. Work is also unevenly distributed: some people have too much, others too little. Work can fail even the most educated people in society. For others, it is becoming increasingly fragmented, uncertain and precarious. Many jobs are now threatened by AI and automation, while others feel pointless or damaging to the wider world. (There’s even the thought that many jobs now only exist because people spend so long at work they no longer have time to cook their own meals or look after their own children, parents or pets.) Stress is endemic too, with fear a dominant emotion in many workplaces. And, of course, thanks to digitalisation, work has invaded our homes and even our holidays, the very places where we are supposed to relax after work.

So, work is becoming more dominant, but it has also never looked quite so vulnerable. Karl Marx explored this terrain, as did William Morris and many others who foresaw a world where work was less intrusive or became something more useful. Even the economist John Maynard Keynes foresaw a future of “leisure and abundance”. Such a future might partly arise thanks to the relatively new idea of a Basic Income Guarantee (BIG). This is one of a number of ‘post-work’ ideas in which the state pays everyone a minimum income — funded by taxing robots and algorithms, one presumes — and people decide for themselves whether to supplement this money with a paid job of some kind. It sounds utopian: possibly calmer, more communal, more sustainable, healthier and more politically engaged too. It would be a future in which work, if it happens at all, is a means to an end, not an end in itself, which would be a reversal of what we generally have now, at least in some countries.
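The affordability question behind a BIG is ultimately simple arithmetic. A back-of-envelope sketch, with deliberately round, stylised figures (roughly UK-scale, not official costings):

```python
# Back-of-envelope cost of a Basic Income Guarantee (BIG).
# All figures are stylised assumptions, purely for scale.
adults_uk = 52_000_000   # approx. UK adult population (assumed)
annual_payment = 10_000  # hypothetical basic income, GBP per year

gross_cost = adults_uk * annual_payment
print(f"Gross cost: about £{gross_cost / 1e9:.0f}bn per year")  # ~£520bn

# For scale: total UK public spending is roughly £800bn per year,
# so a BIG only works with major offsetting savings or new taxes
# (on robots and algorithms, as the text presumes).
```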

Perhaps the emerging popularity of the four-day work week is further evidence of a latent desire by many people for lives that are less saturated with work. However, the experience of the long-term unemployed in the UK, and of Aboriginal communities in Australia, suggests that some work is good, and that simply handing out money to people without work won’t work. Even if they have means, they still have no meaning. Other evidence bears this out too. For example, studies by LeFevre and Csikszentmihalyi suggest that people without work — or without enough of the right kind of work — can suffer from a crisis of identity and confidence, and from low self-esteem.

Perhaps this taps into the existential vacuum that Viktor Frankl talked about. Frankl went on to describe a younger generation with “no future” (this was in 1984) who took to drugs out of frustration over unmet existential needs. Has anything changed? If anything, things have grown worse. Not only is work becoming more meaningless, but community is breaking down and existential threats such as climate change are fanning the flames.

So, what’s next? Who knows. Maybe a world with less work would be a richer one in a non-monetary sense. Or maybe the opposite would turn out to be the case. What’s certainly possible is a shift in what work looks like and where it occurs. This will mainly be down to automation, but demographics will play a part too. Some areas of work will become unrecognisable; others are likely to remain more or less the same. Thus, we may have a period of turmoil and disruption, followed by a broad debate, and then a consensus about how work needs to change — no doubt with many of the heresies of the present turning out to be the orthodoxies of the future.

Ref: The Guardian (UK), 19.1.19, ‘Post-work: the radical idea of a world without work’ by A. Beckett.

Books: Living Without Work in a Nine-to-Five World by Bernard Lefkowitz (1979); Private Government: How Employers Rule Our Lives (and Why We Don’t Talk About It) by Elizabeth Anderson; No More Work: Why Full Employment Is a Bad Idea by James Livingston; The Wealth of Humans: Work and Its Absence in the Twenty-First Century by Ryan Avent. Also, The Refusal of Work by David Frayne.

Societal transformation

The UN says that population ageing is “poised to become one of the most significant social transformations of the twenty-first century”, with a doubling of those aged 60-plus by 2050 and a potential tripling by the year 2100. Yet we appear to be in youthful denial.

The rapid ageing of so many regions (including a few you might not expect, such as China) is coinciding with a dramatic decline in the number of children being born. This demographic double-whammy means that attracting and retaining talent, regardless of age, is likely to become the number one priority for companies going forward. What are some of the other implications beyond obvious strains on healthcare systems, other public services and pensions?

One consequence will be a shift in disease types. Essentially, we’ll see more age-related diseases such as cancer and dementia. And there will be fewer young people to look after these old people, especially as many medics prefer paediatrics over geriatrics. Migration can help plug the gaps here, but currently there’s antipathy to the free flow of young people across borders in many countries.

Technology, especially in the shape of tele-care and robotics, might take up some of the slack. Aged-care ‘bots like Paro are making inroads, but future cases of robotic negligence, neglect or abuse could restrain or reverse this trend. Nevertheless, using technology to keep an eye on older people, especially those living alone, is set to grow. One only has to travel to Japan to see this in action, although while Japan is the fastest-ageing country on earth, its relationship to robotics is unusual. So is its generally positive attitude towards older people.

What else might we be doing to keep older people healthy for longer, maintain economic productivity, and avoid the so-called ‘cliff edge of retirement’ problem? One thing we must do is redesign ordinary things to make them friendlier for older people. We’ve had success making the world more accessible to people with disabilities, but this thinking needs to go much further. Just try supermarket shopping in the shoes of someone in their eighties. Somewhere to sit down is rare, the shelves can be too high, the words on the back of packaging too small, and many foodstuffs are not packaged for people who live alone — it’s all servings of two and four.

Something else that needs to be reworked is work. Showing up with a screaming infant isn’t usually welcomed at work, but neither are people clinging on too long past their best-by date. This needs to change. If you’re a factory worker it might be true that your body will wear out, but if you are paid for your ideas you can continue much longer. This isn’t to say that seventy- and eighty-year-olds should be in the office all the time, but it might mean that we spend more time thinking about intergenerational knowledge transfer or part-time, highly flexible employment for the over sixties. Interestingly, this might be precisely what some highly talented but commitment-phobic younger employees desire too: choosing when, how, and where to work.

But perhaps the biggest issue lies at home. Most homes in the UK were built for families, and there’s a severe shortage of homes designed for singles and couples, though simply building more small units might not be the solution. According to Age UK, 500,000 people aged over 60 typically spend every day alone, and the same number again see or speak to no one for at least five days a week. This in an age of global connectivity… although perhaps our recent digital transformation is part of the problem.

In my view, loneliness could be the single biggest problem of the 21st century. The issue here is primarily institutionalised ageism and paternalism, which is best observed in the way that society forgets older people before they’re gone. Older people are regularly put on long-term deposit in care homes, where they receive little interest and are isolated from other generations.

Instead, we could embed older people alongside younger generations, exposing each to the other’s experiences and perspectives. Schools could be located adjacent to care homes. Co-housing projects, like those popping up in Scandinavia, could be developed where people of all ages live alongside each other: generations might sleep alone but come together to share common facilities such as kitchens, gardens or allotments. Different generations could swap knowhow too, perhaps with analogue repair cafes and digital skill exchanges.

But there are broader things to discuss too. For example, if people were to regularly live to 100 or 120 years of age, might they wait until they were 60 to have children? What might the implications of this be on healthcare or education? And what of careers? Might we start later and plan for multiple careers? Might we shift education to start and end much later, or have multiple periods of education throughout our lives?

Significantly extended life spans could wreak havoc on marriage, with "until death do us part" taking on a whole new meaning. Equally, inheritance might be slowed down or die out altogether. And what of risk aversion and innovation? Will these go up and down respectively due to ageing?

In theory, if we live longer, there’ll be more time to think about these things.

Ref: Nowandnext.com, issue 40.