Tag: ideas

  • Creating a Global Pact to Solve the Problem of Plastics

    According to the United Nations, plastic production skyrocketed from 2 million metric tons in 1950 to about 400 million in 2024, and this number is expected to triple by 2060. Only 10 percent of this plastic is currently being recycled and reused. The rest will remain in our environment for centuries, polluting the planet from oceans to mountains, contaminating food chains and human bodies, where it risks damaging our organs and brains.

    In 2025, we will start putting an end to plastic pollution. Since 2022, policymakers at the United Nations, representing over 170 countries, have been negotiating a legally binding Global Plastics Treaty addressing the full lifecycle of plastics, from design to production to disposal. This treaty shares many of the mechanisms of the 1987 Montreal Protocol, which eventually led to the phasing out of CFCs, the chemicals responsible for ozone depletion. As such, it can be just as successful, despite the opposition it faces.

    The treaty was due to be finalized by the fifth and final session, in Busan, South Korea, at the end of November 2024. So far, perhaps unsurprisingly, negotiations have been polarized. At the time of writing, the draft of the treaty includes two options as to its overall goal: the first, more ambitious, aims to “end plastic pollution”; the second, on the other hand, aims to “protect human health and the environment from plastic pollution.”

    The first option is defended by a group of countries which are part of the High Ambition Coalition to End Plastic Pollution, led by the Nordics but also including countries like Rwanda and Peru. Option two is preferred by major oil producers like Saudi Arabia, which want to steer the focus of the discussions toward plastic recycling and waste management rather than production. In August 2024, the United States, also a major plastic and oil producer, announced a surprising policy shift by committing to support limits on plastic production as well. Given how influential the Americans are, this new position will shape the treaty.

    Agreeing on option one would put us on a path very similar to the one followed by the Montreal Protocol. While it is unlikely at this point that the treaty would set concrete binding targets for the phase-down of plastic production, it would undeniably set the ambitious goal of ending plastic pollution. On the other hand, option two (“protect human health and the environment”) is a terribly vague aim, in part because we don’t actually know for certain what the threshold is for human health impacts, and may not know for a very long time.

    Regardless, the two options are a step forward: both provide the necessary steer for the plastic industry to develop better technologies. Option one, for instance, would inspire companies to develop alternatives such as fully biodegradable and compostable materials designed to ultimately replace plastic (especially single-use plastics like shopping bags and plastic packaging, which constitutes 35 percent of plastic usage today). Option two would likely drive the industry to develop more efficient ways to reduce the waste stream, such as improved recycling processes.

    This technology steer is perhaps the most important aspect of the treaty. The original 1987 Montreal Protocol, for instance, set very conservative gradual phase-down targets for the reduction of CFC production: 20 percent by 1994 and then 50 percent by 1998. At the time, these were seen as way too slow for what was required to address the problem. But, crucially, the protocol also explicitly stated that such targets would be revisited as new scientific evidence and alternative technologies became available. This put pressure on the industry to develop technological solutions as companies competed to develop better products. In the end, those alternatives—like hydrochlorofluorocarbons (HCFCs), which could be used in refrigeration while having much less impact on the ozone layer—developed so much faster than expected that, only three years later, countries met again and agreed to phase out the use of CFCs completely by 2000.

    In 2025, the Global Plastics Treaty will send a clear message to the plastics industry that it has to change the way it does business. That will be the beginning of the end of plastic.

  • Human Misuse Will Make Artificial Intelligence More Dangerous

    OpenAI CEO Sam Altman expects AGI, or artificial general intelligence—AI that outperforms humans at most tasks—around 2027 or 2028. Elon Musk’s prediction is either 2025 or 2026, and he has claimed that he was “losing sleep over the threat of AI danger.” Such predictions are wrong. As the limitations of current AI become increasingly clear, most AI researchers have come to the view that simply building bigger and more powerful chatbots won’t lead to AGI.

    However, in 2025, AI will still pose a massive risk: not from artificial superintelligence, but from human misuse.

    These might be unintentional misuses, such as lawyers over-relying on AI. Since the release of ChatGPT, for instance, a number of lawyers have been sanctioned for using AI to generate erroneous court briefings, apparently unaware of chatbots’ tendency to make stuff up. In British Columbia, lawyer Chong Ke was ordered to pay the opposing counsel’s costs after she included fictitious AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for providing false citations. In Colorado, Zachariah Crabill was suspended for a year for using fictitious court cases generated with ChatGPT and blaming a “legal intern” for the mistakes. The list is growing quickly.

    Other misuses are intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. These images were created using Microsoft’s “Designer” AI tool. While the company had guardrails to avoid generating images of real people, misspelling Swift’s name was enough to bypass them. Microsoft has since fixed this error. But Taylor Swift is the tip of the iceberg, and non-consensual deepfakes are proliferating widely—in part because open-source tools to create deepfakes are available publicly. Ongoing legislation across the world seeks to combat deepfakes in hope of curbing the damage. Whether it is effective remains to be seen.

    In 2025, it will get even harder to distinguish what’s real from what’s made up. The fidelity of AI-generated audio, text, and images is remarkable, and video will be next. This could lead to the “liar’s dividend”: those in positions of power repudiating evidence of their misbehavior by claiming that it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk could have been a deepfake, in response to allegations that the CEO had exaggerated the safety of Tesla’s Autopilot, leading to an accident. An Indian politician claimed that audio clips of him acknowledging corruption in his political party were doctored (the audio in at least one of the clips was verified as real by a press outlet). And two defendants in the January 6 riots claimed that videos they appeared in were deepfakes. Both were found guilty.

    Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products by labeling them “AI.” This can go badly wrong when such tools are used to classify people and make consequential decisions about them. Hiring company Retorio, for instance, claims that its AI predicts candidates’ job suitability based on video interviews, but a study found that the system can be tricked simply by the presence of glasses or by replacing a plain background with a bookshelf, showing that it relies on superficial correlations.

    There are also dozens of applications in health care, education, finance, criminal justice, and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to flag suspected child welfare fraud. It wrongly accused thousands of parents, often demanding that they pay back tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.

    In 2025, we expect AI risks to arise not from AI acting on its own, but because of what people do with it. That includes cases where it seems to work well and is over-relied upon (lawyers using ChatGPT); when it works well and is misused (non-consensual deepfakes and the liar’s dividend); and when it is simply not fit for purpose (denying people their rights). Mitigating these risks is a mammoth task for companies, governments, and society. It will be hard enough without getting distracted by sci-fi worries.

  • Humans Will Continue to Live in an Age of Incredible Food Waste

    Let me start with the following principle: “Energy is the only universal currency: One of its many forms must be transformed to get anything done.” Economies are just intricate systems set up to do those transformations, and all economically significant energy conversions have (often highly undesirable) environmental impacts. Consequently, as far as the biosphere is concerned, the best anthropogenic energy conversions are those that never take place: No emissions of gases (be they greenhouse or acidifying), no generation of solid or liquid wastes, no destruction of ecosystems. The best way to achieve this has been to convert energies with higher efficiencies: Without their widespread adoption (be it in large diesel and jet engines, combined-cycle gas turbines, light-emitting diodes, the smelting of steel, or the synthesis of ammonia) we would need to convert significantly more primary energy, with all the attendant environmental impacts.

    Conversely, what then could be more wasteful, more undesirable, and more irrational than negating a large share of these conversion gains by wasting them? Yet precisely this keeps on happening—and to indefensibly high degrees—with all final energy uses. Buildings consume about a fifth of all global energy, but because of inadequate wall and ceiling insulation, single-pane windows, and poor ventilation, they waste between a fifth and a third of it, as compared with well-designed indoor spaces. A typical SUV is now twice as massive as a common pre-SUV vehicle, and it needs at least a third more energy to perform the same task.

    The most offensive of these wasteful practices is our food production. The modern food system (from energies embedded in breeding new varieties, synthesizing fertilizers and other agrochemicals, and making field machinery to energy used in harvesting, transporting, processing, storing, retailing, and cooking) claims close to 20 percent of the world’s fuels and primary electricity—and we waste as much as 40 percent of all produced food. Some food waste is inevitable. The prevailing food waste, however, is more than indefensible. It is, in many ways, criminal.

    Combating it is difficult for many reasons. First, there are many ways to waste food: from field losses to spoilage in storage, from perishable seasonal surpluses to keeping “perfect” displays in stores, from oversize portions when eating outside of the home to the decline of home cooking.

    Second, food now travels very far before reaching consumers: A typical food item travels 1,500 to 2,500 miles before being bought.

    Third, it remains too cheap in relation to other expenses. Despite recent food-price increases, families now spend only about 11 percent of their disposable income on food (in 1960 it was about 20 percent). Food-away-from-home spending (typically more wasteful than eating at home) is now more than half of that total. And finally, as consumers, we have an excessive food choice available to us: Just consider that the average American supermarket now carries more than 30,000 food products.

    Our society is apparently quite content with wasting 40 percent of the nearly 20 percent of all energy it spends on food: roughly 8 percent of the world’s fuels and primary electricity, squandered outright. In 2025, unfortunately, this shocking level of waste will not receive more attention. In fact, the situation will only get worse. While we keep pouring billions into the quest for energy “solutions”—ranging from new nuclear reactors (even fusion!) to green hydrogen, all of them carrying their own environmental burdens—in 2025, we will continue to fail to address the huge waste of food that took so much fuel and electricity to produce.

  • Blockchain Innovation Will Put an AI-Powered Internet Back Into Users’ Hands

    The doomers have it wrong. AI is not going to end the world—but it is going to end the web as we’ve known it.

    AI is already upending the economic covenant of the internet that’s existed since the advent of search: A few companies (mostly Google) bring demand, and creators bring supply (and get some ad revenue or recognition from it). AI tools are already generating and summarizing content, obviating the need for users to click through to the sites of content providers, and thereby upsetting the balance.

    Meanwhile, an ocean of AI-powered deepfakes and bots will make us question what’s real and will degrade people’s trust in the online world. And as big tech companies—who can afford the most data and compute—continue to invest in AI, they will become even more powerful, further closing off what remains of the open internet.

    The march of technology is inevitable. I’m not calling attention to this to cry that the sky is falling or to hold back progress. Rather, we need to help individual users gain some control of their digital lives. Thoughtful government regulation could help, but it often slows innovation, and attempting a one-size-fits-all solution can create as many problems as it solves. And, let’s face it, users are not going to retreat from living their lives online.

    Major technology movements often come together—think of the rise of social, cloud, and mobile computing in the 2000s. This time is no different: AI needs blockchain-enabled computing. Why? First, blockchains enforce ownership. Blockchains can make credible commitments involving property, payouts, and power. A decentralized network of computers—not a big company, nor any other centralized intermediary—validates transactions, ensuring that the rules and records cannot be altered without consensus. Smart contracts automate and enforce these ownership rights, creating a system that ensures transparency, security, and trust, giving users full control and ownership of their digital lives. For creators, this means the ability to decide how others—including AI systems—can use their work.

    Another basic ownership right that blockchains can enforce is identity. If you are who you say you are, you can sign a statement, cryptographically, attesting as much. We could carry our identities around the web without relying on third parties. Onchain identities could also help separate real users from bots and imposters. In the 1990s, no one on the internet knew if you were a dog. Now, people can know for sure if you’re a dog—or a bot. In 2025, I expect to see more “proof of humanity” on the internet, thanks to recent advances in these technologies.

    In 2025, blockchains will be used to create tamper-resistant records of original digital content, a bulwark against deepfakes. When a video, photo, or audio recording is created, blockchains can provide and store a unique digital fingerprint. Any changes to the content alter that signature, making it easy to detect tampering. Blockchains can also store metadata and verification attestations from trusted sources, further ensuring content authenticity.
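    The fingerprinting idea described above is straightforward to sketch. Below is a minimal illustration in Python; SHA-256 stands in for whatever hash function a particular blockchain actually uses, and real systems would add signatures and onchain storage on top (both are assumptions here, not details from the article):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest that serves as the content's unique fingerprint."""
    return hashlib.sha256(content).hexdigest()

original = b"raw bytes of a photo, video, or audio recording"
tampered = b"raw bytes of a photo, video, or audio recording!"  # one byte added

# Hashing is deterministic: the same content always yields the same digest,
# so a fingerprint recorded at creation time can be re-derived and checked later.
assert fingerprint(original) == fingerprint(original)

# Any alteration, however small, yields a completely different digest, which is
# what makes a fingerprint stored on a tamper-resistant ledger useful evidence.
assert fingerprint(original) != fingerprint(tampered)
```

    To verify a clip later, a platform would recompute the digest from the file it received and compare it with the one recorded onchain; a mismatch signals tampering.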

    Finally, in 2025, blockchains will help achieve the original ideals of the internet, fostering a more creative, open, diverse web. Right now, users depend on a few internet giants—the same ones that are investing so heavily in AI (and asking for regulation to keep smaller competitors out). Websites and apps that were once open have added paywalls, restricted or closed their APIs, removed their archives, edited past content without permission, and added intrusive banners and ads. In 2025, blockchain alternatives will offer more choice, open source innovation, and community-controlled options. They will carry the torch of the open internet. Crypto will start taking power away from big tech companies, putting it back in the hands of users.

  • More Humanitarian Organizations Will Harness AI’s Potential

    For many of the people served by the humanitarian sector, 2024 has been the worst of times. The most recent UN estimate of those forced to flee violence and disaster is a record 120 million, a figure that has doubled in the past decade. The broader figure of those in humanitarian need, 300 million people, has been swelled by increasingly violent conflict and the growing impacts of the climate crisis. Progress in meeting the UN’s Sustainable Development Goals has also been either stagnating or declining in more than half of fragile countries. A child born in those countries has a tenfold greater chance of living in poverty than one born in a stable state.

    The unprecedented numbers show the need for a new humanitarian surge: a technological one, harnessing the power of digital tools and AI. For years we’ve (rightly) debated the risks and benefits of AI and waited for the promise of “AI for Good” to arrive. In 2025, across the aid, development, and humanitarian sector, that moment may finally be at hand.

    When properly leveraged, AI can open up new frontiers in humanitarian action—in scale, speed, reach, personalization, and cost savings. My organization, International Rescue Committee (IRC), and our in-house research and innovation lab, Airbel, are exploring applications of AI in our humanitarian programming. We’re seeing solutions emerging in three critical areas—information, education, and climate—each bolstered by promising public-private partnerships and collaboration.

    For instance, for refugees forced to flee from conflict, the first priority is timely, accurate, and context-specific information about who to trust, and where to find services and safety. The global information project, Signpost, supported by Google.org—Google’s charitable arm—in partnership with IRC, Cisco Foundation, Zendesk, and Tech for Refugees, delivers critical information to millions of displaced people through digital channels and social media, disempowering smugglers who thrive on mis- or disinformation, and saving lives along migration routes. As this work evolves, Signpost is creating an “AI prototyping lab” to de-risk and evaluate the effectiveness of Generative AI for the entire humanitarian sector.

    Humanitarians are also exploring the potential of Generative AI to enhance and personalize education for children affected by crises—of whom there are 224 million worldwide. A huge challenge involves testing and strengthening the potential of ChatGPT in local languages. Most AI models, for instance, can’t understand African languages. Lelapa AI, an African “AI research and product lab,” is working to change that by developing language tools to bring AI to Africa, while OpenAI has begun to offer reduced-cost access to ChatGPT for nonprofits.

    OpenAI is also supporting the development of AprendAI, a global, AI-driven educational chatbot platform that delivers personalized digital learning experiences at scale via messaging platforms for crisis-affected children, teachers, and parents, all while testing and strengthening the potential of ChatGPT in local languages.

    Finally, we are seeing the power of artificial intelligence scaled to protect communities facing the harsh impacts of extreme weather. In partnership with NGOs, governments and the UN, Google has launched an AI-powered “Flood Hub,” which is currently able to forecast flooding in 80 countries. Google.org, together with IRC and the NGO GiveDirectly, is leveraging machine learning in Northeast Nigeria to establish forecasting systems that trigger early warnings and cash transfers ahead of devastating climate hazards.

    Israeli scholar and historian Yuval Noah Harari described artificial intelligence as the most dangerous technology we have ever created—and potentially the most beneficial. In 2025, those benefits must accrue to the poorest in the world.

  • Taking on the Tyranny of the Tech Bros

    The glow of the tech bros’ halo is dimming and, in 2025, the computing industry’s sheen of glamor will continue to fade, too. While other STEM fields are making strides in broadening participation in their workforces, year after year, computing, a supposedly innovative field, fails to recruit, retain, and respect women and nonbinary workers. For example, precision questioning, abstraction, aggression, sexism and a disdain for altruism—serving the social good—are a few of the core values driving culture in computing worksites. These values and the ways they are policed via bias, discrimination, and harassment in high-tech companies form the “Bro Code.”

    The Bro Code perpetuates a high tolerance of sexual harassment. It also contributes to the field’s failure to rectify its stark segregation. Only 21 percent of computer programming positions are held by women. Of that 21 percent, only 2 percent are African American, and only 1 percent are Latina. While sorely underrepresented in the field overall, women are disproportionately affected during the industry’s downsizing. For example, nearly 70 percent of those laid off in the 2022 tech layoffs were women. This tracks with my experience in Big Tech. As soon as the company went public, stockholders demanded annual layoffs. For the first two years, the only people terminated in my department were women.

    Further, due to their massive wealth and masterful branding, Bro Code bosses believe themselves to be wizards or priests. They lean into authoritarianism and are quick to repress complaints and resistance. Some programmers imitate this behavior. For example, in 2023, tech bros mobbed the Grace Hopper Celebration, the world’s largest conference for women and nonbinary tech workers. Women attendees I spoke with described men at the career expo simply barging in front of them in lines, and some said they were verbally harassed and assaulted.

    In 2025, the march towards a future dictated by algorithmic lords will falter. Coalitions between feminist movements and labor activism will increase public scrutiny of tech culture. These efforts will start to crack the Bro Code. Bro Code bosses talk a big game about tech’s socially revolutionary impact, but participants in my research felt thwarted when trying to use their technical skills to serve others. For instance, Lynn reported that the eye-tracking device she developed to help people with disabilities was repurposed for marketing analysis; Shauna’s lab mates nicknamed her “accessibility bitch” when she worked on projects to help those disenfranchised in computing.

    As Big Tech continues to deliver empty promises instead of solutions to social ills—while dodging taxes, quashing regulations and fueling a yawning pay inequality gap—the public will continue to grow disenchanted with the industry. In 2025, thwarted altruistic efforts like Shauna and Lynn’s will accelerate growing skepticism about computing’s service to humanity.

    Disenfranchised tech workers will continue to help us hold Bro Code bosses accountable, not only for failing to live up to the industry’s widely publicized altruism but also for their efforts to conceal the social harms of their products. As recent organizing activities by tech workers show, strong coalitions across workers are what scare these reigning elites the most. For example, in 2018, more than 20,000 Google employees across the globe staged a walkout against sexual harassment and systemic racism at the company. In 2025, activism against the militarization, racism, sexism, and economic exploitation in the tech industry will skyrocket higher than Bro Code bosses’ space jets.

  • The Digital Natives Will Revolt—and That’s Good for Everybody

    In the late 19th century, before the invention of cinema and radio, every piece of music, performance, oration—even a natural view like a rainbow—was a unique event. Unrepeatable. Cinema and radio changed that, enforcing a massive shift in how we consumed popular culture. Several of the world’s dominant media companies were founded in that moment by men with a relentless sense of awe for the new media. It resulted in a phenomenal lack of restraint—they didn’t think they needed it. This was the future, and it was making them rich. More was obviously better.

    Film and radio would eventually be combined into television—creating an even greater detachment from the performance at its core while supplanting human connection with strategic dopamine sparks. Of course people got hooked: More excitement and no effort equaled a better future. When streaming to personal devices became ubiquitous, that future merged even greater profitability with the law of diminishing returns—crushed empathy, spiked anxiety, and social inadequacy all became core to the human experience.

    This has ultimately resulted in a general societal malaise, and I think 2025 will be that moment where some facets of society will begin to methodically detach from their screen-based addictions. I predict the leaders of this change will be the Gen Z digital natives for whom the simplicity of techless exchange will hold a similar novelty to its original technological advances.

    Gen Z—currently between 13 and 27 years of age—are the people most deeply affected by digital addiction. After all, they were born in the wake of the invention of the internet. Their primary methods of understanding the world have been digital from the start. Actual agency—connection with other humans—has been largely unavailable for school work, coaching, and guidance. Even the informative mundanity of navigating normal life has been relegated to apps: the screen’s dominance institutionalized with all the restrictions and none of the learned experience for surviving them.

    Except their instincts. It’s Gen Z’s instincts that are starting to evolve into a dominant force for change in modern society. What things cost—a massive issue for everyone—is driving much of how Gen Z views their priorities. They’re selecting user-generated content over pricey new media. They’re looking for longer meaning from experiences above the short-term gratification of materialism. In a recent US Gallup poll, more than 50 percent of the respondents indicated they don’t trust tech companies, the government, or the justice system.

    Gen Z is also embracing the underconsumption-core and de-influencing trends, questioning the values that awe-reverent media brought them, and heightening demands for a life-work balance that would have terrorized the generations before them. These are all positive, even crucial, developments for society.

    So, in 2025, I believe the next step will be for Gen Z to embrace the simplicity of techless human exchange—events without the mediation of the ever-corrupting screen. It’s the shock of the new, a novelty as elemental as film in its infancy. It’s scary, sure—unpredictable—a real change in the digital life they/we are so dominated by. But it’s human and dimensional and full of stuff we can’t get online. It’s what we humans are at our messy core, and for all those reasons I believe we’ll see the virtues of screen retraction start to be celebrated, with Gen Z leading the way.

  • The Rich Can Afford Personal Care. The Rest Will Have to Make Do With AI

    The burgeoning field of social-emotional AI is tackling the very jobs that people used to think were reserved for human beings—jobs that rely on emotional connections, such as therapists, teachers, and coaches. AI is now widely used in education and other human services. Vedantu, an Indian web-based tutoring platform valued at $1 billion, uses AI to analyze student engagement, while a Finnish company has created “Annie Advisor,” a chatbot working with more than 60,000 students, asking how they are doing, offering help, and directing them to services. Berlin-based startup clare&me offers an AI audio bot therapist it calls “your 24/7 mental health ally,” while in the UK, Limbic has a chatbot “Limbic Care” that it calls “the friendly therapy companion.”

    The question is, who will be on the receiving end of such automation? While the affluent are sometimes first adopters of technology, they also know the value of human attention. One spring day before the pandemic, I visited an experimental school in Silicon Valley, where—like a wave of other schools popping up that sought to “disrupt” conventional education—kids used computer programs for customized lessons in many subjects, from reading to math. There, students learn mainly from apps, but they are not entirely on their own. As the limitations of automated education became clear, this fee-based school has added more and more time with adults since its founding a few years back. Now, the kids spend all morning learning from computer applications like Quill and Tynker, then go into brief, small group lessons for particular concepts taught by a human teacher. They also have 45-minute one-on-one meetings weekly with “advisers” who track their progress, but also make sure to connect emotionally.

    We know that good relationships lead to better outcomes in medicine, counseling, and education. Human care and attention help people to feel “seen,” and that sense of recognition underlies health and well-being as well as valuable social goods like trust and belonging. For instance, one study in the United Kingdom—titled “Is Efficiency Overrated?”—found that people who talked to their barista derived greater well-being benefits than those who breezed right by them. Researchers have found that people feel more socially connected when they have had deeper conversations and divulge more during their interactions.

    Yet fiscal austerity and the drive to cut labor costs have overloaded many of the workers charged with forging interpersonal connections, shrinking the time they have to be fully present with students and patients. This has contributed to what I call a depersonalization crisis, a sense of widespread alienation and loneliness. US government researchers found that “more than half of primary care physicians report feeling stressed because of time pressures and other work conditions.” As one pediatrician told me: “I don’t invite people to open up because I don’t have time. You know, everyone deserves as much time as they need, and that’s what would really help people to have that time, but it’s not profitable.”

    The rise of personal trainers, personal chefs, personal investment counselors, and other personal service workers—in what one economist has dubbed “wealth work”—shows how the affluent are fixing this problem, making in-person service for the rich one of the fastest-growing sets of occupations. But what are the options for the less advantaged?

    For some, the answer is AI. Engineers who designed virtual nurses or AI therapists often told me their technology was “better than nothing,” particularly useful for low-income people who can’t catch the attention of busy nurses in community clinics, for example, or who can’t afford therapy. And it’s hard to disagree, when we live in what economist John Kenneth Galbraith called “private affluence and public squalor.”


  • We’ve Never Been Closer to Finding Life Outside Our Solar System

    In 2025, we might detect the first signs of life outside our solar system.

    Crucial to this potential breakthrough is the James Webb Space Telescope (JWST), with its 6.5-meter primary mirror. Launched aboard an Ariane-5 rocket in 2021 from Kourou, a coastal town in French Guiana, the JWST is our biggest space telescope to date. Since it began collecting data, this telescope has allowed astronomers to observe some of the dimmest objects in the cosmos, like ancient galaxies and black holes.

    Perhaps more importantly, in 2022 the telescope also provided us with the first glimpses of rocky exoplanets inside what astronomers call the habitable zone. This is the region around a star where temperatures are just right for liquid water—one of the key ingredients of life as we know it—to exist on a planet’s rocky surface. These Earth-sized planets were found orbiting TRAPPIST-1, a small red star 40 light-years away with one-tenth the mass of the sun. Red stars are cooler and smaller than our yellow sun, making it easier to detect Earth-sized planets orbiting them. Even so, the signal from an exoplanet is typically far weaker than the light emitted by its much brighter host star, so discovering these planets was an extremely difficult technical achievement.

    The next stage—detecting molecules in the planets’ atmospheres—will be an even more challenging astronomical feat. Every time a planet passes between us and its star—when it transits—the starlight gets filtered by the planet’s atmosphere, and the molecules in its path absorb light at characteristic wavelengths, creating spectral absorption features we can search for. These features are very difficult to identify. To accomplish that, the JWST will need to collect enough data from several planetary transits to suppress the signal from the host star and amplify the molecular features in the incredibly thin atmospheres of the rocky exoplanets (if you shrank one of these planets to the size of an apple, its atmosphere would be thinner than the fruit’s peel). However, with a space telescope as powerful as the JWST, 2025 might just be the year when we can finally detect these molecular signatures.
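    To get a sense of the scales involved, here is a back-of-envelope sketch in Python. The radii are rounded, illustrative values (TRAPPIST-1 is roughly 12 percent of the Sun’s radius) and the noise model is deliberately simplified; none of this reflects the actual JWST analysis pipeline.

    ```python
    # Back-of-envelope transit arithmetic with rounded, illustrative values.

    R_SUN_KM = 696_000  # solar radius in kilometers

    def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
        """Fraction of starlight blocked when the planet crosses its star's disk."""
        return (planet_radius_km / star_radius_km) ** 2

    def noise_after_stacking(single_transit_noise: float, n_transits: int) -> float:
        """Random noise averages down as 1/sqrt(N) over N stacked transits."""
        return single_transit_noise / n_transits ** 0.5

    earth_radius_km = 6_371
    trappist1_radius_km = 0.12 * R_SUN_KM  # ~12% of the Sun's radius

    depth = transit_depth(earth_radius_km, trappist1_radius_km)
    print(f"Transit depth: {depth:.2%}")  # roughly 0.6% of the star's light

    # An atmospheric absorption feature is thousands of times smaller still
    # (tens of parts per million), which is why many transits must be stacked.
    print(f"Relative noise after 25 transits: {noise_after_stacking(1.0, 25):.2f}")
    ```

    The square in the depth formula is simply the ratio of the planet’s silhouette to the star’s disk, which is why small red stars like TRAPPIST-1 make Earth-sized planets so much easier to spot than they would be around a Sun-like star.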

    Detecting water in TRAPPIST-1’s exoplanets, however, is not our only chance to find life in faraway worlds. In 2023, for instance, the JWST also revealed potential signs of carbon dioxide and methane in the atmosphere of K2-18b, a planet located 124 light-years from Earth. K2-18b, however, is not a rocky, Earth-like planet orbiting its star in the habitable zone. Instead, it’s more likely a smaller version of Neptune: a giant ball of gas, possibly with a water ocean beneath its atmosphere. This means that if there’s life on K2-18b, it might be in a form completely different from life as we know it on Earth.

    In 2025, the JWST will likely shed more light on these tantalizing detections, and hopefully confirm, for the first time ever, whether there is life on alien worlds light-years away from our own.


  • The Pressure Is on for Big Tech to Regulate the Broken Digital Advertising Industry

    Digital advertising is a whopping $700 billion (£530 billion) industry that remains largely unregulated, with few laws in place to protect brands and consumers. Companies and brands advertising products often don’t know which websites display their ads. I run Check My Ads, an ad tech watchdog, and we constantly deal with situations where advertisers and citizens have been the victims of lies, scams, and manipulations. We have removed ads from websites with serious disinformation about Covid-19, false election content, and even AI-generated obituaries.

    Currently, if a brand wants to advertise a product, Google facilitates the ad placement based on desired ad reach and metrics. It may technically follow through on the agreement by delivering views and clicks, but does not provide transparent data about how and where the ad views came from. It is possible that the ad was shown on unsavory websites diametrically opposed to the brand’s values. For example, in 2024, Google was found to be profiting by placing product ads on websites that promoted hardcore pornography, disinformation, and even hate speech, against the brands’ wishes.

    In 2025, however, this scandal will begin to end, as we start to enact the first regulations targeting the digital advertising industry. Around the world, lawmakers in Brussels, Ottawa, Washington, and London are already in the early stages of developing regulation that will ensure brands have the legal means to ask questions, audit ad data, and receive automatic refunds when their digital campaigns are found to have been subject to fraud or safety violations.

    In Canada, for example, Parliament is deliberating the enactment of the Online Harms Act, a law to incentivize the removal of sexual content involving minors. The idea behind this law is that if the content is illegal, then making money off it should be illegal, too.

    In California and New York, advocates are also proposing legislation that would implement a know-your-customer law to track the global financial trade of advertising. This is significant because these two states power the global ad tech industry. New York City is home to more ad tech companies than any other city in the world, while transparency laws enacted in California would affect Google, by far the biggest ad tech company in the world, and its international advertising business.

    Beyond brand and consumer issues, the unregulated nature of the digital advertising landscape is a direct threat to democracy. In the US, for instance, digital spending by presidential campaigns remains effectively unregulated. It is estimated that the presidential campaigns will spend up to $2 billion (£1.5 billion) on digital advertising in 2024. Under current laws, we will likely have no external data about their refunds or the rates they were charged.

    In 2025, the legislative pressure is on for big tech companies to regulate ad technology.

    [ad_2]

    Source link