Tag: video

  • Kyu’s Tiny Camera Only Captures 9-Second Videos

This is hardly the first time a company has pitched simplicity as a way to capture and relive memories. Google debuted an AI-powered camera called Clips in 2018 that could record short videos, and you didn’t even have to press a button. Just turn it on, and the AI would figure out the right moments to capture; these 7-second clips were then accessible in the companion phone app. Clips was discontinued nearly two years after its launch.

    The time may be ripe for Kyu to step in as a private, personal social network. BeReal, the social media app that championed authenticity, has been in sharp decline since its explosive growth in 2022. The mass migration from X to Bluesky has left some people wondering where to post. And TikTok may get banned in the US in 2025.

Perhaps the intent of carrying Kyu around in your hand, the instant camera-like “limit” on how much you can capture, and the easily stitched-together edits in the app will help create memory bursts that are bite-sized but can still transport you beyond the same old scroll. Since you’ll have to be choosier about what you capture before space runs out, you won’t end up with unnecessary files hogging storage. And the resulting videos are mercifully short—no one wants to sit through your 10-minute travelog. You can also control whether these videos are saved to your digital library, instead of the automatic nonstop backup most of us are used to with our smartphones.

A hand holding the Kyu camera, an oval-shaped grey device with a small button and screen on one side and a camera lens on...

    Courtesy of Kyu

    Right now, the Google Photos app has “Memories” you can cycle through that show old images, but these are often random photos chosen by Google’s AI instead of a collection of memories tied to a specific event. Google recently launched a feature that employs generative artificial intelligence to create a yearly recap of your memories with AI-written captions. My 2024 recap didn’t particularly tug at my heartstrings, but maybe Kyu clips would have made more of an impact.

    This is the first product from a new company, so we’ll need to see the camera in action before passing judgment. I’m hoping it can complete its promised functions successfully—a low bar for 2024. The Kyu is available for preorder globally and costs $299. There’s an optional $30 subscription in the works to, ironically, store your memories in the cloud, though Ando assures me it will also include other perks like insurance to protect the device, a repair program, and even a discount on future products.

    The hardware launches in April, but if you have an iPhone, you can download the Kyu app now and start capturing 9-second videos. Just don’t call them Vines.

  • The Guy Behind the Most Nostalgic Sites on the Internet

    I was going to ask if anyone has found it yet. Does anyone know where it is?

    I haven’t shared the location online or even a picture of what the box looks like, just because I don’t want anyone to get too—for the city to have a very easy time finding it if they want to take it down. I feel like by now though, it’s pretty obvious where it is. Just think about the most chaotic place in the Mission, where it could be, it’s there, which a lot of locals probably can guess. I think almost every SF publication covered it. So at this point, if the mayor’s office wanted to take it down, they definitely could, but it’s still up, so that’s a good sign.

    As a former resident, I’d say it’s near the 24th Street BART station, but that’s just me.

    Close, but not there. But somewhere that’s sort of like that.

    Another random question based on something I saw on your site. Were you the one who created that fake profile for someone named Andrew Walz who was allegedly running for Congress in 2020 but didn’t exist—then had his account verified by Twitter?

    When I was in high school, yeah. Do you remember that? That’s funny.

    I remember because I remember everyone kind of laughing about it. What happened after that?

    That’s pretty much it. The account got suspended. It was a weird story because it was a week before Covid happened. Somehow I got hold of a reporter at CNN. I had no idea how to talk to the press or anything. They actually came with a camera crew to my high school in upstate New York and interviewed me in a classroom, and it was such a surreal experience. And then once they heard the story, Twitter just suspended the account. But yeah, that was before you could pay for a check mark, so it was much harder to get a blue badge at that point.

    I just have to ask, you’re not related to Tim Walz, right?

    My dad is Tim Walz, but not that Tim Walz.

    Ha! So, do you know what your next site is going to be?

I want to actually set up a script that scrapes the Lyft app for the Citi Bikes, and every minute it will track where each bike is in the system. So I want to make a website that’s like Citi Bike Shocker, where you can enter a bike’s ID number and see where it’s been. So, like, it started the morning in Brooklyn, then someone took it across the bridge into Tribeca and then it went uptown to Columbia or something. It’d be cool to have stats on which specific bike in the system has traveled the farthest, which one hasn’t been used in the longest time. But yeah, that’s the idea.

    Please send that to me when it’s up. I need the serotonin.

    Will do.

    Loose Threads:

    “Sure, who cares anymore,” or some variation on that theme, is creeping up online. Basically, it’s people posting stuff that seems a little odd, with the resignation of someone who understands that everything is a little odd right now. For example, have you heard about Cinnamon Toast Crunch bacon?

    The Nosferatu movie is getting its own special popcorn bucket. It’s not quite as intense as Dune: Part Two’s sandworm or the Deadpool & Wolverine one. It’s just a coffin. A coffin full of popcorn.

    Speaking of Dune, here’s “Bene Gesserit” set to the tune of “Bennie and the Jets.” Good luck getting this earworm out.



  • DJI Mic Mini Review: Tiny Wireless Microphones

    The Mic 2 supports internal recording, meaning it can save your audio as a backup directly to the transmitter’s internal storage, but this is not supported on the Mic Mini. Also, the Mic 2 is capable of 32-bit float internal recording, which gives you more headroom when you’re editing. Basically, you have more information to work with in case something goes wrong with the audio. This is also not supported on the Mic Mini. The Mic Mini doesn’t support a Lavalier microphone (no wires here!), and there’s no touchscreen display to interface with. (There is a dial just like on the Mic 2 to adjust the gain.)
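To see why 32-bit float recording gives you that extra headroom, here’s a minimal sketch (in Python with NumPy, purely for illustration; it models the formats, not DJI’s actual recording pipeline). A tone captured too hot is flattened forever in a fixed-point capture, while a float capture preserves the overshoot and can simply be turned down in post:

```python
import numpy as np

# A 1 kHz test tone "recorded" 6 dB too hot: it peaks at 2.0,
# i.e. 6 dB above digital full scale (1.0).
t = np.linspace(0, 0.01, 480, endpoint=False)
hot_signal = 2.0 * np.sin(2 * np.pi * 1000 * t)

# Fixed-point-style (e.g. 16-bit) capture: anything past full scale
# is clipped for good, and no edit can bring the waveform back.
clipped_capture = np.clip(hot_signal, -1.0, 1.0)

# 32-bit float capture: the overshoot survives in the file...
float_capture = hot_signal.astype(np.float32)

# ...so pulling the gain down 6 dB in post yields an undistorted tone.
recovered = float_capture * 0.5

print(clipped_capture.max())  # peaks flattened at full scale (distortion)
print(recovered.max())        # same peak level, but the waveform is intact
```

The point is that the float file retains information above full scale, which is exactly the “more to work with in case something goes wrong” the Mic 2 offers and the Mic Mini doesn’t.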

    But the Mic Mini does have some tricks up its sleeve. It supports automatic limiting to prevent audio clipping, meaning it will reduce the signal’s volume if you’re approaching those limits. To test this, I intentionally maxed out the gain on the receiver and spoke loudly into both the Mic 2 and Mic Mini. The latter sounded fine, but the Mic 2’s audio was distorted and clipped in a few places. Yay!
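The automatic limiting described above can be pictured with a toy example. Real limiters run in real time with attack and release envelopes (and DJI doesn’t document its implementation), but the core idea is just gain reduction when the peak approaches full scale:

```python
import numpy as np

def limit(buffer, threshold=0.9):
    """Toy peak limiter: if the buffer's peak exceeds the threshold,
    scale the whole buffer down so the peak lands exactly on it."""
    peak = np.max(np.abs(buffer))
    if peak > threshold:
        return buffer * (threshold / peak)
    return buffer

quiet = np.array([0.1, -0.3, 0.2])  # under the threshold: passed through untouched
loud = np.array([0.2, 1.8, -1.5])   # would clip at full scale (1.0)

print(np.max(np.abs(limit(quiet))))  # 0.3
print(np.max(np.abs(limit(loud))))   # 0.9
```

Scaling the signal down before it hits full scale is what kept the Mic Mini’s audio clean in the maxed-gain test, where the Mic 2’s clipped.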

Also, in general, you will get far better battery life with the Mic Mini. Despite its small size, it hits 48 hours of operating time, helped by the lack of internal recording, whereas the Mic 2 is limited to 18 hours.

    As for the microphone quality, I found it largely similar to the Mic 2, barring the slight differences in noise canceling. Watch my video above to see how it fares compared to the built-in mics on the iPhone 16 Pro—the amount of ambient sound these mics eliminate never ceases to impress me.

    All in all, the Mic Mini is a simple, affordable, and effective wireless microphone system, and I think it’s suitable for most people getting started and upgrading from their phone’s built-in audio (or the mic on the EarPods). Keep in mind that you also can buy the DJI Mic 2 in parts—the transmitter alone is $99, and there are reasons why you may want it over the Mini. But while it can be harder to finagle with because of its small size, the Mic Mini is so much more discreet and nicer to have on a shirt, and that makes me want to use it even more.

  • Lo-Fi Weather Channel Videos Are Soothing Climate Fears on YouTube

The vaporwave album Conditions at Hickory begins with static, as if you’re tuning in to a 1940s radio broadcast. The first two tracks, “Foothills” and “Daily Commute,” start out humdrum and benign enough. Then the mood shifts. Sounds arrive like warnings, cautions of something sinister to come. Beeps and tornado sirens start to interrupt the music. By the time you get to “Thunderheads” and “Squall,” you’re in the thick of it.

    Kana, aka Dreamweather, released the seven-track album on YouTube, where it soundtracks a frozen image: a bright-red severe weather warning for Hickory, North Carolina. It could be that Conditions is trying to warn you about an impending storm. It could be that the album, with its smooth, jazzy AM-radio tones, is trying to rock you to sleep in the midst of it.

    Kana is one of a number of artists who have taken transmissions from the weather reports of yesteryear and merged them with the lo-fi electronic music genre known as vaporwave. Emerging in the early 2010s, vaporwave has exploded on YouTube recently, soundtracking nostalgic video footage like family trips to Florida in the ’90s or Transformers cartoon clips. The effect is as unsettling as it is comforting—a visual reminder of a different, maybe better era that can’t be lived again.

    As the trend has evolved, many of the more popular vaporwave clips have been those that place ambient sounds over Weather Channel broadcasts from the ’80s and ’90s. Like Twisters, these sometimes eight-hour-long broadcasts evoke a time when TV and radio offered guidance in a storm—and a time before climate change made extreme weather events more frequent.

    Popular vaporwave artists play their music over weather forecasts from fearless stormchaser Jim Cantore. Others—sometimes practitioners of the subgenre known as weatherwave—soothe you with sound as longtime severe weather expert Steve Lyons waves his hands madly about an impending Indiana tornado.

    “As a child, I would often just sit and watch the Weather Channel for hours on end,” Kana says. “I vibed with the local forecasts, its music, and its programs a lot, so discovering that other people were interested in this extreme niche blew my mind.”

    Some of the most popular weatherwave clips use a VHS recording of a Weather Channel broadcast on a random cold ’90s night in the winter. One, a 41-minute video from YouTuber onceinalifetime, has nearly 900,000 views; another is an eight-hour megamood from chyllvester with nearly 650,000 views. Many comments below them speak in nostalgic terms: “I basically lived in hotels growing up (long story). The Weather Channel was the only real constant from place to place. It helped me greatly then. It’s still helping me today.”

    The Weather Channel was founded in Atlanta, Georgia, in May 1982. From the beginning it coupled its stalwart weather broadcasts with a steady stream of smooth jazz, a combo that came to define the 24/7/365 weather network. Whether you were tuning in for the tropical update segment or international weather, the sounds stayed constant and steady, even if the weather did not.

  • What to know about this new Chinese text-to-video AI model

    The short-video platform, which has over 600 million active users, announced the new tool on June 6. It’s called Kling. Like OpenAI’s Sora model, Kling is able to generate videos “up to two minutes long with a frame rate of 30fps and video resolution up to 1080p,” the company says on its website.

But unlike Sora, which remains inaccessible to the public four months after OpenAI first trialed it, Kling soon started letting people try the model themselves. 

    I was one of them. I got access to it after downloading Kuaishou’s video-editing tool, signing up with a Chinese number, getting on a waitlist, and filling out an additional form through Kuaishou’s user feedback groups. The model can’t process prompts written entirely in English, but you can get around that by either translating the phrase you want to use into Chinese or including one or two Chinese words.

    So, first things first. Here are a few results I generated with Kling to show you what it’s like. Remember Sora’s impressive demo video of Tokyo’s street scenes or the cat darting through a garden? Here are Kling’s takes:

    Remember the image of Dall-E’s horse-riding astronaut? I asked Kling to generate a video version too. 

    There are a few things worth applauding here. None of these videos deviates from the prompt much, and the physics seem right—the panning of the camera, the ruffling leaves, and the way the horse and astronaut turn, showing Earth behind them. The generation process took around three minutes for each of them. Not the fastest, but totally acceptable. 

    But there are obvious shortcomings, too. The videos, while 720p in format, seem blurry and grainy; sometimes Kling ignores a major request in the prompt; and most important, all videos generated now are capped at five seconds long, which makes them far less dynamic or complex.

    However, it’s not really fair to compare these results with things like Sora’s demos, which are hand-picked by OpenAI to release to the public and probably represent better-than-average results. These Kling videos are from the first attempts I had with each prompt, and I rarely included prompt-engineering keywords like “8k, photorealism” to fine-tune the results. 

  • Lux vs. Lumens and Explaining Other Lighting Gear Terms as You Shop (2024)

    Planning your lighting for a photo or video shoot can be complicated, and the terminology used to measure light in lighting equipment can make things even more confusing. When shopping online, most lights list “lumens” or “lux” among their technical specs, though sometimes “lumens” is written as “luminous flux.” You might even get tripped up on luminance versus illuminance. It’s a lot. So let’s break it all down.

    One thing to note: You’ve probably seen light bulbs with output measured in watts; LED bulbs often say something like “60W equivalent.” However, watts are a measure of how much power a light bulb uses, not how much light it puts out. This metric is a holdover from when incandescent lights were commonplace and used significantly more energy than today’s LEDs. However, as more energy-efficient lights have grown in popularity, it’s no longer useful to use watts as a shorthand for how much light a bulb puts out (it wasn’t super useful to begin with). This is why you’ll see terms like lumens or lux on professional lighting gear.

What Are Lumens?

The first term you should get to know is lumens. The amount of visible light that a source puts out is referred to as luminous flux, and the lumen is the unit of measurement for that raw output. Think of how “distance” refers to how far apart two places are, while “kilometers” is the unit used to measure that distance. Lumens are the kilometers in that analogy.

    This can be a little confusing because companies will list “luminous flux” without naming the unit of measurement. For example, one of our favorite lights for shooting professional videos, the Godox SL-60W, lists its luminous flux as 4,500 in its description. Using our metaphor above, this is like saying “Distance: 4,500” without listing what unit that number refers to.

    That said, while they’re not interchangeable, if you see “luminous flux” listed on a product spec sheet, it’s probably referring to lumens. However, this does make it important to double-check that you’re comparing comparable numbers when shopping for lights from different manufacturers.

You might also see this metric called “luminance,” which is another, less common way to refer to a light source’s output. And, like “luminous flux,” it’s a general term for the concept, not a unit of measurement. It is distinct from illuminance, which describes the light falling on a surface (the quantity lux measures), not the light coming directly from a source. And yes, it’s confusing.

    What Is Lux?

How much light a source puts out is only part of the story. After all, the sun puts out enough energy to melt … basically everything. Fortunately, the sun has a good sense of personal space and stays far enough from Earth to not destroy us all. For similar, less catastrophic reasons, how far away your light source sits affects how much light you’ll need.

    Lux is defined as one lumen per square meter, though the math can get tricky because we’re dealing with surface areas in three dimensions. Don’t worry, you don’t need to sweat it too much. Most professional lighting will simply list their output in lux (as well as lumens), and specify a distance. For example, the Aputure Amaran P60X is rated for 5,070 lux at 1 meter.

This means a subject 1 meter from the light source will receive a little more than 5,000 lux of illumination. If they’re farther away, less of the light from the source will hit them, and the subject will appear dimmer. This is helpful when shopping for a light because you might not need the brightest light in the world; you only need the subject to be the right distance from the light source.

    The Inverse Square Law (and Other Math)

    Figuring out the right distance for a light source isn’t exactly intuitive, because of two major factors: The first is the inverse square law, one of those weird quirks of the universe that we have to deal with. Put very simply (mathematicians, please don’t yell at me), every time you double the distance between the subject and the light source, you quadruple how much light is needed to light them the same way.

    In other words, it means that every time you move twice as far away from a light source, only one-fourth of the light from that source will hit your subject. So, using the Amaran P60X above as an example, if it’s rated for 5,070 lux at 1 meter, then the subject will experience around 1,267 lux at 2 meters. Some lights will specify their lux at shorter distances, like 0.5 meters, which makes it crucial to make sure you’re comparing equivalent specs when looking at lights from different manufacturers.
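The numbers above fall straight out of the inverse square relationship. Here’s a small sketch that assumes an idealized point source (real fixtures with reflectors or fresnels will deviate), using the P60X’s published rating as the reference point:

```python
def lux_at(distance_m, rated_lux=5070.0, rated_distance_m=1.0):
    """Illuminance at a given distance for an idealized point source.
    Defaults use the Aputure Amaran P60X rating (5,070 lux at 1 m)."""
    return rated_lux * (rated_distance_m / distance_m) ** 2

print(lux_at(1.0))  # 5070.0 -- the rated figure
print(lux_at(2.0))  # 1267.5 -- one-fourth the light at double the distance
print(lux_at(0.5))  # 20280.0 -- four times the light at half the distance
```

This is also why comparing a light rated at 0.5 meters against one rated at 1 meter is misleading: convert both to the same reference distance first.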

  • We Stood on Both Sides of the New York–Dublin Portal and It Was Glorious

    Amanda: I got to the Portal in Manhattan’s Flatiron District a little before 11 am New York time, and found that there’s now a fence keeping people several feet away from it (but the same isn’t happening in Dublin). This is part of the new security the organizers have implemented: If someone steps on the Portal or blocks the camera, the livestream will blur for both sides, organizers say. For the next hour, a steady stream of people stopped by the Portal, with usually about 30 there at any time. They waved, they smiled, they danced YMCA and the Macarena on both sides. People brought dogs, and a group of preschoolers in a line walked by and waved.

    David: Dublin’s Portal, located facing Dublin’s main thoroughfare, O’Connell Street and the historic General Post Office building, has one permanent observer—James Joyce. A statue of Ireland’s most celebrated writer and author of the archetypal Dublin novel, Ulysses, stands just meters from the video screen. But rather than reciting Joyce, it was a 20th-century American rapper that particularly inspired one Portal visitor. A woman dressed head-to-toe in white danced silently before the screen for a few minutes, before turning around and singing: “You better lose yourself in the music, the moment, you own it, you better never let it go. You only get one shot, do not miss your chance to blow. This opportunity comes once in a lifetime.” Joyce and Eminem may not seem like natural bedfellows, but in Dublin and in front of the Portal, it seemed oddly fitting to lose oneself in the moment.

    Amanda: While we couldn’t hear the Eminem lyrics on the New York side of the Portal, the crowd enjoyed watching the woman’s energy and dance moves. Even without sound, people were able to convey emotion, and all eyes were on the silent performance broadcast from Dublin.

    David: The police in Ireland did finally move on the Eminem tribute act, but one of the “Dublin Portal Ambassadors” —who told me clearly that they were not security—felt that the woman was doing no harm. Though the ambassador, who refused to give his name, added that the night before, things did get a bit more rowdy after 6 pm, with some groups on pub crawls around the city briefly disrupting other people’s interactions before things quickly returned to normal. As part of the measures introduced for the Portal’s reopening, opening hours have been limited to 6 am until 4 pm ET (11 am to 9 pm Dublin time).

    The Portals stand 3.4 meters tall and weigh “multiple tons,” the organizers say, but they won’t give details about the camera and screen technology being used, adding: “It’s like the paint used to paint a painting—we want the audience to focus on the result.”

    Amanda: Those working on the New York side handed out signs that read “I ‘heart’ Dublin” and “I ‘shamrock’ Dublin” for people to hold up, artificially ramping up the perceived goodwill between the two cities. One of the people working told me he hasn’t seen issues since it reopened—it’s been nothing but love and good vibes.

  • Blackmagic Cinema Camera 6K Review: Finally Full Frame

    Few camera manufacturers have managed to stand out the way Blackmagic has when it comes to capturing high-quality video on a mirrorless camera. The Pocket Cinema Camera 6K Pro (dubbed PCC6K Pro) impressed me when I reviewed it a few years ago, but somehow the company’s new Cinema Camera 6K has managed to top it. With a full-frame sensor, the new L mount, and a similar $2,600 price, it’s turning my head again.

The Cinema Camera 6K is largely similar to its predecessor, with nearly identical battery life (about an hour on one 3,500-mAh battery), and it retains controls that are intuitive compared to those on most professional cameras. It lacks the built-in neutral density filters I liked in the PCC6K Pro, but the new features are worth the trade-off.

    The Full-Frame Sensor Experience

The biggest upgrade to the Cinema Camera 6K is the one so important they put it right on the front of the casing: a full-frame, 36 x 24-millimeter sensor. Compared to the Super 35 sensor on the previous model (which, despite its name, measures 23 x 13 mm), the new model’s sensor is a significant upgrade.

Full-frame sensors are comparable in size to 35-mm film. The most prominent benefit is that there’s no crop factor with most lenses. Cropped sensors produce a smaller field of view, meaning you can fit less of a scene into the frame than you can with a full-frame sensor. Put simply, you need to stand farther away, use shorter lenses, or both to get the same image, and that often comes at the expense of things like shallow depth of field and low-light performance.
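To put a number on the crop factor, the usual convention is the ratio of sensor diagonals; a quick sketch using the dimensions quoted above:

```python
import math

def crop_factor(width_mm, height_mm):
    """Crop factor = full-frame diagonal (36 x 24 mm) / sensor diagonal."""
    return math.hypot(36.0, 24.0) / math.hypot(width_mm, height_mm)

s35 = crop_factor(23.0, 13.0)  # the Super 35 sensor from the previous model
print(round(s35, 2))           # 1.64
print(round(50 * s35))         # 82 -- a 50-mm lens on Super 35 frames
                               # roughly like an 82-mm lens on full frame
```

That roughly 1.6x factor is why the same 50-mm lens captures so much less of a cramped room on the older body.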

    Putting a full-frame sensor inside one of Blackmagic’s cameras is probably the best upgrade I could’ve asked for. I often shoot videos in my apartment, and it can be difficult to get images that look good because there simply isn’t enough space in the frame to get the scene that I want. For example, below are two photos taken with a 50-mm lens, first with the PCC6K Pro and the second with the new Cinema Camera 6K; I stood in the same spot in my tiny living room. The full-frame sensor can capture significantly more of my living space. For some people like me who often have to shoot in cramped spaces, this is nothing short of a godsend.

The new model feels just as comfortable to use as Blackmagic’s other cinema cameras. It might be a little bulky, but its chassis feels excellent whether you’re holding it with one or two hands. The autofocus remains basic: there’s still no autofocus tracking nor in-body image stabilization (IBIS), but with the handy focus button next to the left thumb, I find it easy to land focus directly on my subject. The whole thing can be heavy, especially with Blackmagic’s optional battery grip, but this is still my favorite design for everything from the studio to run-and-gun shoots.

    Low-Light Performance

With a bigger sensor come larger pixels that can capture more light. Compared to the sensor on the previous 6K Pro, the full-frame sensor has nearly three times as much surface area but the same 6K resolution. That means each pixel captures almost three times as much light.
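The arithmetic behind that “nearly three times” claim checks out, using the dimensions quoted earlier:

```python
# Sensor areas from the dimensions quoted above.
full_frame_area = 36 * 24  # 864 mm^2
super_35_area = 23 * 13    # 299 mm^2

ratio = full_frame_area / super_35_area
print(round(ratio, 2))  # 2.89 -- "nearly three times as much surface area"
```

Since both sensors share the same 6K pixel count, per-pixel light gathering scales by that same factor of roughly 2.9.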

    The result is that the new Cinema Camera 6K performs even better in low-light conditions than the already impressive model that came before it. Here are two photos, one with the previous 6K Pro, and one with the new Cinema Camera 6K. Both cameras were set to an ISO of 400, at an ƒ/3 aperture, and 1/30 shutter speed. They were also captured from the same position, although I cropped the full-frame photo to a comparable area of the 6K Pro.

  • Your Kid May Already Be Watching AI-Generated Videos on YouTube

    Neither Yes! Neo nor Super Crazy Kids responded to WIRED’s request for comment.

    Few Limits

    Yes! Neo, Super Crazy Kids, and other similar channels share a common look—they feature 3D animation in a style similar to Cocomelon, YouTube’s most popular children’s channel in the US. (Dana Steiner, a spokesperson for Cocomelon’s parent company Moonbug, says that none of its shows currently use AI, “but our talented creative team is always exploring new tools and technologies.”)

This familiar aesthetic means that a busy parent glancing quickly at a screen might mistake the AI content for a program they’ve vetted. And while the content these channels put out is not particularly well crafted, it tends to be shoddy in the same way that so much of today’s human-made children’s entertainment is shoddy: frenetic, loud, unoriginal.

    YouTube is in the process of introducing new policies for AI-generated content, although the company doesn’t seek to significantly restrict it. “YouTube will soon be introducing content labels and disclosure requirements for creators who upload content that contains realistic altered or synthetic material, including content geared toward kids and families,” YouTube spokesperson Elena Hernandez says.

    When WIRED inquired whether YouTube will be proactively seeking out AI-generated content and labeling it as such, Hernandez said more details will come later but that it plans to rely primarily on voluntary disclosure. “Our main approach will be to require creators themselves to disclose when they’ve created altered or synthetic content that’s realistic.” The company says it uses a combination of automated filters, human review, and user feedback to determine what content is accessible in the more restricted YouTube Kids service.

Some fear YouTube and parents around the world aren’t adequately prepared for the coming wave of AI-generated kids content. Neuroscientist Erik Hoel recently watched some of the tutorials on making kids content with AI, as well as some videos he suspected were made using the technology. Hoel was so unsettled by what he saw that he inveighed against the concept on his Substack, singling out Super Crazy Kids. “All around the nation there are toddlers plunked down in front of iPads being subjected to synthetic runoff, deprived of human contact even in the media they consume,” he wrote. “There’s no other word but dystopian.”

    Hoel’s warning recalls the last great scandal about children’s YouTube, dubbed “Elsagate.” It kicked off in 2017 when people started noticing surreal and disturbing videos aimed at kids on the platform, often featuring popular characters like Elsa from Disney’s Frozen, Spiderman, and the titular porcine hero from Peppa Pig. While AI-generated content hasn’t reached a similar nadir, its creators appear to be chasing a similar goal of drawing the attention of YouTube’s automated recommendations.

    Creative Baby Padre

Some more obscure AI video channels are already veering into weird territory. The channel Brain Nursery Egg TV, for example, gives its unsettling videos names like “Cars for Kids. Trailer the Slide With Lyrics.” The video’s description is a gigantic string of keywords, including “disney junior elimi birakma 24 chima sorozat BeamNG-Destruction ali babanın çiftliği şarkısı la brujita creative baby padre finger.”

    The plotless video is an amalgamation of glitchy visuals like floating eyeballs and melting blocks of color. The soundtrack features children applauding, a robotic voice counting, individual babies laughing, and different robotic voices intoning the word “YouTube” at seemingly random intervals. “This has generated voices throughout and is either powered by an AI-generated script or may be one of the greatest and most underrated works of surrealist video art in recent memory,” says Colman of Reality Defender. Either way, this kind of content hasn’t picked up much traction yet—some of the channel’s videos only have a handful of views. Brain Nursery Egg TV does not provide an email address or other way to contact those running the channel.

  • An AI-Altered Hitler Speech Is Going Viral On X

AI-altered video clips of Adolf Hitler’s 1939 Reichstag speech at the beginning of World War II have recently gone viral. In the speech, Hitler proclaimed that the upcoming war would bring about the “annihilation of the Jewish race in Europe.” Hitler did say this, but the clips use AI to translate the speech from German into English. The videos, which feature text making clear the speech is an AI audio translation, have been viewed more than 15 million times, according to X.

The two videos were first shared on X on Thursday by Dom Lucre, a hugely influential far-right conspiracy influencer who has previously shared child exploitation imagery.

In comments accompanying the videos on X, Lucre claimed he was simply “sharing what is news as I always do,” and warned that the videos are “extremely antisemitic.” However, comments on the videos indicate viewers have drawn their own conclusions.

    “I’m beginning to think we may have lost WWII,” wrote one commenter who has a verified X account. “It sounds like these people cared about their country above all else,” another follower wrote. Many others shared links to the 2017 neo-Nazi film Europa: The Last Battle.

    Another conspiracy theorist, Owen Benjamin, also commented on the AI Hitler videos and erroneously claimed that they showed the dictator “didn’t want to go to war and was chastising other countries for not helping the [Jews].” Benjamin’s tweet has more than 3.5 million views.

    X did not respond to WIRED’s request for comment.

    The video clips appear to have been taken from a video first posted to YouTube two months ago by an account called Time Unveiled, which has also posted AI-translated videos featuring Osama bin Laden, Joseph Stalin, and Hideki Tojo.

    In the Hitler video’s description section on YouTube, the creators said they used technology from voice-cloning startup ElevenLabs to generate the audio. ElevenLabs’ technology was also under fire earlier this year when it was used to help create an AI-generated robocall impersonating President Joe Biden. ElevenLabs and YouTube did not respond to requests for comment.

    Lucre also posted one of the videos to his Instagram account, though it didn’t get nearly as much attention as his posts on X. Instagram did not respond to WIRED’s request for comment, and while Lucre’s account is still active, the video was removed from the platform over the weekend.

    Lucre, whose real name is Dominick McGee, has become a hugely influential figure in conspiracy circles, where he shares QAnon content and GOP commentary, much of it accompanied by images or videos that have been altered. His content is often shared by prominent lawmakers, including former president Donald Trump.

    Lucre first came to national attention last July when Elon Musk personally intervened to reinstate his account despite the fact that Lucre posted child exploitation images just days earlier, going against company policy.
