Some researchers worry that if AI systems become conscious and people neglect or treat them poorly, they might suffer. Credit: Pol Cartie/Sipa/Alamy
The rapid evolution of artificial intelligence (AI) has brought to the fore ethical questions that were once confined to the realms of science fiction: if AI systems could one day ‘think’ like humans, for example, would they also be able to have subjective experiences like humans? Would they experience suffering and, if so, would humanity be equipped to care for them properly?
A group of philosophers and computer scientists are arguing that AI welfare should be taken seriously. In a report posted last month on the preprint server arXiv¹, ahead of peer review, they call for AI companies not only to assess their systems for evidence of consciousness and the capacity to make autonomous decisions, but also to put in place policies for how to treat the systems if these scenarios become reality.
They point out that failing to recognize that an AI system has become conscious could lead people to neglect it, harming it or causing it to suffer.
Some think that, at this stage, the idea that there is a need for AI welfare is laughable. Others are sceptical, but say it doesn’t hurt to start planning. Among them is Anil Seth, a consciousness researcher at the University of Sussex in Brighton, UK. “These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility,” he wrote last year in the science magazine Nautilus. “The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.”
The stakes are getting higher as we become increasingly dependent on these technologies, says Jonathan Mason, a mathematician based in Oxford, UK, who was not involved in producing the report. Mason argues that developing methods for assessing AI systems for consciousness should be a priority. “It wouldn’t be sensible to get society to invest so much in something and become so reliant on something that we knew so little about — that we didn’t even realize that it had perception,” he says.
People might also be harmed if AI systems aren’t tested properly for consciousness, says Jeff Sebo, a philosopher at New York University in New York City and a co-author of the report. If we wrongly assume a system is conscious, he says, welfare funding might be funnelled towards its care, and therefore taken away from people or animals that need it, or “it could lead you to constrain efforts to make AI safe or beneficial for humans”.
A turning point?
The report contends that AI welfare is at a “transitional moment”. One of its authors, Kyle Fish, was recently hired as an AI welfare researcher by the AI firm Anthropic, based in San Francisco, California. According to the report’s authors, it is the first position of its kind at a leading AI company. Anthropic also helped to fund initial research that led to the report. “There is a shift happening because there are now people at leading AI companies who take AI consciousness and agency and moral significance seriously,” Sebo says.
Living on Earth: Life, Consciousness and the Making of the Natural World. Peter Godfrey-Smith. William Collins (2024).
Philosopher Peter Godfrey-Smith has devoted his career to examining how animal minds evolved. He blends formidable analytical skills with a deep curiosity about the natural world, mostly experienced at first hand in his native Australia. While writing his latest book, Living on Earth, he spent many hours scrutinizing noisy parrots and cockatoos in his back garden, weeks observing gobies building underwater towers made of shells and seaweed, and years closely watching how octopuses behave (P. Godfrey-Smith et al. PLoS ONE 17, e0276482; 2022). The result is an inclusive perspective on Earth’s many distinct minds and agents that urges readers to consider humans’ collective choices and their diverse consequences.
Living on Earth offers an extended philosophical meditation on life, mind, the world and our place in it, completing Godfrey-Smith’s trilogy of works on the nexus of agency, sensation and felt experience. His 2016 book Other Minds explored octopus cognition and evolution. And Metazoa (2020) appraised the subjective experiences of animals, concluding that there exists an “animal way of being” that arises from the integration of sensory information in nervous systems. This implies that sentience and subjectivity — life-shaping combinations of perception, goals and values — are widespread across the tree of life.
In his latest book, the author casts his net wider still, asking how the minds and agency of living things have affected Earth. “The history of life is not just a series of new creatures appearing on the stage,” he notes. “The new arrivals change the stage itself.”
The arrival of animals
Godfrey-Smith starts by explaining how the earliest lifeforms altered our planet’s chemistry and geology. Photosynthetic bacteria released oxygen, which gradually blanketed Earth and left its mark on the composition of rocks in the form of new minerals, such as malachite. Eventually, enough oxygen accumulated to power the evolution of aerobic life — a stark example of the transformative impact of some lineages constructing environments in which others can thrive.
The arrival of animals that could undertake purposeful actions, such as feeding, interacting with others and gathering information, transformed Earth further. As their capacities for controlled movement evolved, animals became able to actively engineer their environments. Migrating whales, for instance, redistribute nutrients through their faeces, supporting other species in the food web, which in turn benefits the whales.
The lyrebird (Menura novaehollandiae) mimics the calls of other bird species. Credit: Getty
People have never been better, here in the Year of Our Simulation 2024, at hating the very forces underlying that simulation—at hating, in other words, digital technology itself. And good for them. These everywhere-active tech critics don’t just rely, for their on-trend position-taking, on vague, nostalgist, technophobic feelings anymore. Now they have research papers to back them up. They have bestsellers by the likes of Harari and Haidt. They have—picture their smugness—statistics. The kids, I don’t know if you’ve heard, are killing themselves by the classroomful.
None of this bothers me. Well, teen suicide obviously does, it’s horrible, but it’s not hard to debunk arguments blaming technology. What is hard to debunk, and what does bother me, is the one exception, in my estimation, to this rule: the anti-tech argument offered by the modern-day philosopher.
By philosopher, I don’t mean some stats-spouting writer of glorified self-help. I mean a deepest-level, ridiculously learned overanalyzer, someone who breaks down problems into their relevant bits so that, when those bits are put back together, nothing looks quite the same. Descartes didn’t just blurt out “I think, therefore I am” off the top of his head. He had to go as far into his head as he humanly could, stripping away everything else, before he could arrive at his classic one-liner. (Plus God. People always seem to forget that Descartes, inventor of the so-called rational mind, couldn’t strip away God.)
For someone trying to marshal a case against technology, then, a Descartes-style line of attack might go something like this: When we go as far into the technology as we can, stripping everything else away and breaking the problem down into its constituent bits, where do we end up? Exactly there, of course: at the literal bits, the 1s and 0s of digital computation. And what do bits tell us about the world? I’m simplifying here, but pretty much: everything. Cat or dog. Harris or Trump. Black or white. Everyone thinks in binary terms these days. Because that’s what’s enforced and entrenched by the dominant machinery.
Or so goes, in brief, the snazziest argument against digital technology: “I binarize,” the computers teach us, “therefore I am.” Certain technoliterates have been venturing versions of this Theory of Everything for a while now; earlier this year, an English professor at Dartmouth, Aden Evens, published what is, as far as I can tell, its first properly philosophical codification, The Digital and Its Discontents. I’ve chatted a bit with Evens. Nice guy. Not a technophobe, he claims, but still: It’s clear he’s world-historically distressed by digital life, and he roots that distress in the fundaments of the technology.
I might’ve agreed, once. Now, as I say: I’m bothered. I’m unsatisfied. The more I think about the technophilosophy of Evens et al., the less I want to accept it. Two reasons for my dissatisfaction, I think. One: Since when do the base units of anything dictate the entirety of its higher-level expression? Genes, the base units of life, only account for some submajority percentage of how we develop and behave. Quantum-mechanical phenomena, the base units of physics, have no bearing on my physical actions. (Otherwise I’d be walking through walls—when I wasn’t, half the time, being dead.) So why must binary digits define, for all time, the limits of computation, and our experience of it? New behaviors always have a way, when complex systems interact, of mysteriously emerging. Nowhere in the individual bird can you find the flocking algorithm! Turing himself said you can’t look at computer code and know, completely, what’ll happen.
And two: Blaming technology’s discontents on the 1s and 0s treats the digital as an endpoint, as some sort of logical conclusion to the history of human thought—as if humanity, as Evens suggests, had finally achieved the dreams of an Enlightened rationality. There’s no reason to believe such a thing. Computing was, for most of its history, not digital. And, if predictions about an analog comeback are right, it won’t stay purely digital for much longer. I’m not here to say whether computer scientists should or shouldn’t be evolving chips analogically, only to say that, were it to happen, it’d be silly to claim that all the binarisms of modern existence, so thoroughly inculcated in us by our digitized machinery, would suddenly collapse into nuance and glorious analog complexity. We invent technology. Technology doesn’t invent us.
In the 18th century, philosopher James Beattie compiled a list of 17 common-sense beliefs. A few are incontrovertible: “I exist”; “A whole is greater than a part”; “Virtue and vice are different”. But others seem unnecessarily moralising: “Ingratitude ought to be blamed and punished”; “I have a soul distinct from my body”; “There is a God”. Then, there are the scientifically contestable: “The senses can be believed”; “I am the same being that I was yesterday – or even 20 years ago”; “Truth exists”. Overall, his list seems quaint and outdated. Worse still, it gives no clear idea of what common sense is. Surely, we can do better.
Superficially, common sense seems easy to define: it is generally seen as knowledge or beliefs that are obvious – or should be obvious – to everyone. Yet it is strangely difficult to pin down. Often portrayed as universal, it is also often claimed not to exist. With that in mind, it might surprise you to hear that nobody has tried to measure the “commonness” of this knowledge or its intrinsic properties (its “sensicality”) – until now. Shockingly, this research shows that common sense may not be common at all.
If true, the implications are huge. From parenting to politics and from public health to law, what counts as common sense matters. Increasingly, it is also a technological issue, with computer scientists keen to instil it in artificial intelligence-driven robots to make…
The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Jonathan Birch. Oxford Univ. Press (2024).
Can artificial intelligence (AI) feel distress? Do lobsters suffer in a pot as it reaches a boil? Can a 12-week-old fetus feel pain? Ignore these questions and we potentially sanction a quiet, slow-moving catastrophe. Answer in the affirmative too hastily, and people’s freedoms will shrink needlessly. What should we do?
Philosopher Jonathan Birch at the London School of Economics and Political Science might have an answer. In The Edge of Sentience, he develops a framework for protecting entities that might possess sentience — that is, a capacity for feeling good or bad. Moral philosophers and religions might disagree on why sentience matters, or how much it does. But in Birch’s determinedly pluralistic account, all perspectives converge on a duty to avoid gratuitous suffering. Most obviously, this duty is owed to fellow human beings. But there is no reason to think that it ought not to apply to other beings, provided that we can establish their sentience — be they farm animals, collections of cells, insects or robots.
The problem is how to establish whether something is sentient. The philosophical concept of sentience is riven with basic disagreements. So too is the science. Interpretation of experimental evidence varies and there is a lack of sustained investigation of sentient capacities for many beings, including juvenile animals and AI. Then, there is the problem of measurement. With mammals, patterns of behaviour and brain activity can provide a trace of bad feeling. But what is the sentience test for gastropods, which have different minds and repertoires of behaviour? What about AI systems, which don’t have brains or physical manifestations of feeling?
Confronted with this writhing tangle of uncertainty, the temptation is to crawl under a blanket and hope that the problems blow over. Birch is anti-blanket. He advocates a proactive precautionary approach that triggers careful and proportionate precautions at the first sign of a being’s sentience. Birch’s framework consists of two processes.
The sentience test
The first involves experts determining an entity’s prospects of being sentient. Demanding consensus would be unfair — it would potentially condemn beings to prolonged suffering, leaving them orphaned by scientific ignorance or controversy. Instead, Birch proposes that “scientific meta-consensus” should trigger protections. By this, he means full agreement, even among the dubious, that sentience is at least a credible possibility, on the basis of evidence and a coherent line of theory. When meta-consensus is lacking, beings might be designated as priorities for investigation or dismissed as non-sentient.
Candidates for sentience would then advance to the second process, in which inclusive, informed citizen panels would devise protective policies. These should be proportionate to the risks of an entity’s sentience and account for different values and trade-offs. For example, imposing a moratorium in response to the potential sentience of a large language model (LLM) system might have huge opportunity costs for society. The citizen panels he advocates would revise their recommendations as evidence accumulates.
Philosophers are arguing that the coconut octopus (Amphioctopus marginatus) is sentient. Credit: Getty
Next, Birch turns to three domains in which controversies challenge definitions of sentience. The first is the human brain — people with disorders of consciousness, fetuses, embryos and neural organoids (synthetic models of brain systems). The second is non-human animals, including fish, molluscs, insects, worms and spiders. The third domain is AI, which includes LLMs.
Each section presents challenges that are unresolved and shot through with philosophical and scientific controversy. For example, how can precautions be devised for neural organoids, which show no outward behaviours? Here, Birch falls back on anatomical correlates of sentience, such as a functioning brain stem and sleep–wake cycles. In the chapters on animals, we confront the dizzying number of species that could be sentient, the fact that so few have been studied and the question of how to extrapolate from them.
AI presents the challenge of devising tests for sentience that algorithms, or those designing them, can then learn about and ‘game’. An LLM generates text about how it ‘feels’, not because it actually feels that way but because the algorithm is rewarded for mimicking sentience. Here, Birch warns against using behavioural markers to determine sentience and instead advocates a search for “deep computational markers” of sentience.
Birch saturates his book with humility and vexation. And this earns Edge of Sentience licence to leave many questions unresolved. As the book draws to a close, several questions linger. The first concerns scope. Birch has cast a wide net, but why not wider? In 1995, then US President Bill Clinton described the United States as being “in a funk”. If countries can be said to have moods, can other collective entities — swarms of bees, corporations, nations — be similarly described, as if they possess sentience of a sort?
Protecting sentience
Another open question concerns the criteria for proportionate precautions. The citizen panels Birch proposes would be asked to make trade-offs between all sentient beings — including present and future ones — and sentience candidates. When it comes to determinations of proportionality, Birch is focused on process, not substance. But what makes a policy proportionate? Let’s leave aside the fact that humans do an abysmal job trading off our interests with those of known sentient beings, such as animals in factory farms. Do certain forms of sentience — say, the capacity for feeling bad but not good, or the intensity of that feeling — weigh more heavily than others? Does sentience count more when it is hitched to other attributes, such as intelligence?
Regarding the last, Birch is careful to distinguish sentience from intelligence. In his account, the former is the wellspring of duties, not the latter. But might beings that are sentient and intelligent exert stronger demands for precautions than beings that are sentient but unintelligent? Birch is reluctant to play “philosopher as sage”, but good philosophy can help the public to structure discussions of proportionality and apply tests. This problem awaits further instalments.
Nonetheless, Edge of Sentience is a masterclass in public-facing philosophy. At each step, Birch is lucid and perfectly calibrated in the strength of his assertions. His analysis is thoughtful and circumspect, and always poised for revision. He elevates his readers. His sourcing is generous and wide-ranging. The book also takes pains to set itself up as a manual for policy, with each chapter providing a summary. Birch works hard and, in my opinion, succeeds in writing a highly topical book of deep philosophy. Any thinking person can profit from it, provided that they have a stomach for uncertainty.
Daniel Dennett, who has died aged 82, was the type of philosopher you couldn’t help but read. His work, directly relevant to biologists, physicists, computer scientists and cognitive psychologists, enticed all curious readers. He expressed bold, sharp views on some of the biggest questions about human existence: what is consciousness, and how is it related to neural activity? Do we have free will? How, if at all, are we different from artificial-intelligence systems? And — a question that famously placed him as one of the ‘four horsemen’ of new atheism, together with Christopher Hitchens, Richard Dawkins and Sam Harris — does God exist?
Dennett’s answers to these questions often prompted great enthusiasm or disagreement — never indifference. From the start of his studies, Dennett was keen on making a difference, and was confident in his ability to do so. As he described in his memoir I’ve Been Thinking (2023), as a first-year student at Wesleyan University in Middletown, Connecticut, he read books by the philosopher Willard Quine, and decided that he should “go to Harvard and confront this man with my corrections to his errors!” He proceeded to do both: transferring to Harvard University in Cambridge, Massachusetts, where he criticized Quine’s account of ordinary language in his honours thesis.
Born in Boston, Massachusetts, in 1942, Dennett spent part of his childhood in Beirut, Lebanon, because his father was a secret agent at the US Office of Strategic Services. In 1947, his father died in a plane crash in Ethiopia, and the family moved back to Boston. After graduating from Harvard, he gained a PhD at the University of Oxford, UK, in 1965, where he explored the concept of intentionality, which would underpin much of his later work. After a six-year spell at the University of California, Irvine, he moved to Tufts University in Medford, Massachusetts, which became his academic home.
Dennett thought that the best way to polish his ideas was to discuss them with undergraduate students; even two weeks before his death, he held an online class about a paper he was working on. He welcomed opposing views with an open mind, helping his students to think more critically and to refine their arguments, even when they challenged his own views.
One of his main endeavours was to describe the human mind — specifically, consciousness — in a way that is strongly rooted in the third-person perspective, which he called heterophenomenology. He wanted to rely on both scientific evidence and ‘folk psychology’: the typical ways that people understand, interpret and predict the behaviours of others. Applied to consciousness, the third-person perspective implies that people don’t have privileged knowledge about their own conscious experiences.
Dennett wanted to demystify consciousness, and called for data-driven research to study it. This demystification, importantly, involved parting with the concept of ‘qualia’ — the ineffable, first-person aspects of conscious experience, such as the greenness of grass or the sweetness of chocolate.
“The sort of difference that people imagine to be between any machine and any human experiencer … is one I am firmly denying: There is no such difference. There just seems to be,” he wrote in his 1991 book Consciousness Explained. This naturally provoked great criticism and debate.
Some of Dennett’s debates with fellow philosophers and scientists became famous, such as that with Sam Harris about free will. Dennett argued that people have free will because of their ability to deliberate, reason and even reason about reasoning, abilities that, in his view, developed through evolution. Harris believes that free will is an illusion.
Other debates turned into long-lasting feuds, which he also seemed to enjoy. Dennett emphasized the importance of natural selection in developing adaptive traits. This was one of his main points of contention with the biologist Stephen Jay Gould, who advocated a more pluralistic view of evolution, in which natural selection is only one of several principles by which traits have evolved.
According to Dennett, religion arose from a combination of language and humans’ attention to alarming events. Together, they fostered the development of fantasies, or “culturally evolved systems of memes that arose naturally out of our innate vigilance and sociality”, as he said in I’ve Been Thinking. Religion, for him, was the domestication of these fantasies.
Dennett was a champion of knowledge dissemination, translating complicated ideas into clear, sometimes sensational, but always attention-grabbing statements. He delivered thought-provoking, philosophical TED talks and brought the ‘brain in a vat’ thought experiment to the BBC’s audience. A true interdisciplinarian who argued fiercely for breaking the silos of knowledge, he collaborated with computer scientists to create a humanoid robot, with cognitive scientists to better understand the intricacies of perception and with biologists to refine his account of evolution.
Later in life, Dennett continued to seek adventures. He loved to sail his 13-metre sailing boat (named Xanthippe, after Socrates’s wife), sang at glee clubs and played a myriad of instruments.
He also lived by his philosophy. In 2006, after a nine-hour heart surgery, he wrote an essay called ‘Thank Goodness!’, explaining that he was grateful to the staff who cared for him, the scientists who developed the medicine that allowed his doctors to treat him and even the peer reviewers and journal editors who published the work of those scientists — rather than to God. For him, it was the goodness of the knowledge generators and truth-seekers throughout history that should be thanked. It is only fitting that Dennett himself be included in this list.
Philosopher Nick Bostrom is surprisingly cheerful for someone who has spent so much time worrying about ways that humanity might destroy itself. In photographs he often looks deadly serious, perhaps appropriately haunted by the existential dangers roaming around his brain. When we talk over Zoom, he looks relaxed and is smiling.
Bostrom has made it his life’s work to ponder far-off technological advancement and existential risks to humanity. With the publication of his previous book, Superintelligence: Paths, Dangers, Strategies, in 2014, Bostrom drew public attention to what was then a fringe idea—that AI would advance to a point where it might turn against and delete humanity.
To many in and outside of AI research the idea seemed fanciful, but influential figures including Elon Musk cited Bostrom’s writing. The book set a strand of apocalyptic worry about AI smoldering that recently flared up following the arrival of ChatGPT. Concern about AI risk is not just mainstream but also a theme within government AI policy circles.
Bostrom’s new book takes a very different tack. Rather than play the doomy hits, Deep Utopia: Life and Meaning in a Solved World considers a future in which humanity has successfully developed superintelligent machines but averted disaster. All disease has been ended and humans can live indefinitely in infinite abundance. Bostrom’s book examines what meaning there would be in life inside a techno-utopia, and asks if it might be rather hollow. He spoke with WIRED over Zoom, in a conversation that has been lightly edited for length and clarity.
Will Knight: Why switch from writing about superintelligent AI threatening humanity to considering a future in which it’s used to do good?
Nick Bostrom: The various things that could go wrong with the development of AI are now receiving a lot more attention. It’s a big shift in the last 10 years. Now all the leading frontier AI labs have research groups trying to develop scalable alignment methods. And in the last couple of years also, we see political leaders starting to pay attention to AI.
There hasn’t yet been a commensurate increase in depth and sophistication in terms of thinking of where things go if we don’t fall into one of these pits. Thinking has been quite superficial on the topic.
When you wrote Superintelligence, few would have expected existential AI risks to become a mainstream debate so quickly. Will we need to worry about the problems in your new book sooner than people might think?
As we start to see automation roll out, assuming progress continues, I think these conversations will start to happen and eventually deepen.
Social companion applications will become increasingly prominent. People will have all sorts of different views, and it’s a great place to maybe have a little culture war. They could be great for people who couldn’t find fulfillment in ordinary life, but what if there is a segment of the population that takes pleasure in being abusive to them?
In the political and information spheres we could see the use of AI in political campaigns, marketing, automated propaganda systems. But if we have a sufficient level of wisdom these things could really amplify our ability to sort of be constructive democratic citizens, with individual advice explaining what policy proposals mean for you. There will be a whole bunch of dynamics for society.
Would a future in which AI has solved many problems, like climate change, disease, and the need to work, really be so bad?
The idea of Singer’s that excited me was that each of us should give a lot of money to help poor people abroad. His “shallow pond” thought experiment shows why. If you saw a child drowning in a shallow pond, you’d feel obliged to rescue her even if that meant ruining your new shoes. But then, Singer said, you can save the life of a starving child overseas by donating to charity what new shoes would cost. And you can save the life of another child by donating instead of buying a new shirt, and another instead of dining out. The logic of your beliefs requires you to send nearly all your money overseas, where it will go farthest to save the most lives. After all, what could we do with our money that’s more important than saving people’s lives?
That’s the most famous argument in modern philosophy. It goes well beyond the ideas that lead most decent people to give to charity—that all human lives are valuable, that severe poverty is terrible, and that the better-off have a responsibility to help. The relentless logic of Singer’s “shallow pond” ratchets toward extreme sacrifice. It has inspired some to give almost all their money and even a kidney away.
In 1998, I wasn’t ready for extreme sacrifice; but at least, I thought, I could find the charities that save the most lives. I started to build a website (now beyond parody) that would showcase the evidence on the best ways to give—that would show altruists, you might say, how to be most effective. And then I went to Indonesia.
A friend who worked for the World Wildlife Fund had invited me to a party to mark the millennium, so I saved up my starting-professor’s salary and flew off to Bali. My friend’s bungalow, it turned out, was a crash pad for young people working on aid projects across Indonesia and Malaysia, escaping to Bali to get some New Year’s R&R.
These young aid workers were with Oxfam, Save the Children, some UN organizations. And they were all exhausted. One nut-tan young Dutch fellow told me he slept above the pigs on a remote island and had gotten malaria so many times he’d stopped testing. Two weary Brits told of confronting the local toughs they always caught stealing their gear. They all scrubbed up, drank many beers, rested a few days. When we decided to cook a big dinner together, I grabbed my chance for some research.
“Say you had a million dollars,” I asked when they’d started eating. “Which charity would you give it to?” They looked at me.
“No, really,” I said, “which charity saves the most lives?”
“None of them,” said a young Australian woman to laughter. Out came story after story of the daily frustrations of their jobs. Corrupt local officials, clueless charity bosses, the daily grind of cajoling poor people to try something new without pissing them off. By the time we got to dessert, these good people, devoting their young lives to poverty relief, were talking about lying in bed forlorn some nights, hoping their projects were doing more good than harm.
AFTER our joyful revelling comes the inevitable season of good intentions. When we make our New Year’s resolutions, we often set ourselves ambitious goals – to run a half-marathon, learn a language or write a novel. One reason these resolutions often fail is that our focus is too wide – we think about the reward at the end of the journey, not considering the little steps that we need to take to get there. Then we end up feeling defeated and dejected as we fail to make the progress we want.
Perhaps we should all try to apply the Japanese concept of…