
I had already spent an embarrassing amount of money generating nearly 1,000 high-definition, AI-generated images of myself through an app called Lensa as part of its new “Magic Avatars” feature. There are plenty of reasons to frown at the results, some of which have been covered widely in recent days amid the growing moral panic as Lensa shot to the #1 spot in the App Store.

The way it works is that users upload 10-20 photos of themselves from their camera roll. There are a few suggestions for best results: the images should show different angles, different outfits, different expressions. They shouldn’t all be from the same day. (“No photoshoots.”) Only one person should be in the frame, so the system doesn’t confuse you with someone else.

Lensa runs on Stable Diffusion, a deep-learning model that can generate images from text or image prompts, in this case taking your selfies and “smoothing” them into composites that use elements from each photo. That composite can then be used to create a second generation of images, so you get hundreds of variations, none identical, that land somewhere between the Uncanny Valley and one of those magic mirrors Snow White’s evil stepmother had. The technology has been around since 2019 and powers other AI image generators, of which DALL-E is the most famous example. Using a latent diffusion model and CLIP, a neural network trained on roughly 400 million image-text pairs, Lensa can render 200 photos in 10 different art styles.
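Lensa’s exact pipeline isn’t public, but the open-source version of the underlying idea is easy to poke at. Below is a minimal sketch, assuming the publicly released Stable Diffusion v1.5 weights and Hugging Face’s diffusers library: you hand the model a selfie plus a style prompt and it produces a stylized variant. (Lensa presumably also personalizes the model on your uploaded photos, which this sketch skips; the model name, prompt, and parameters here are illustrative, not Lensa’s.)

```python
# Sketch only: image-to-image generation with open-source Stable Diffusion,
# not Lensa's actual (proprietary) avatar pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load the public v1.5 weights (assumes a CUDA GPU and a recent diffusers release).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One selfie in, one stylized composite out.
selfie = Image.open("selfie.jpg").convert("RGB").resize((512, 512))
result = pipe(
    prompt="portrait of a person as an anime character, detailed, studio lighting",
    image=selfie,
    strength=0.6,        # how far the model is allowed to drift from the input photo
    guidance_scale=7.5,  # how strongly it follows the text prompt
)
result.images[0].save("avatar.png")
```

Run in a loop over styles and seeds, and you get the “hundreds of variations” effect the app sells, minus the personalization step.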

Although the technology has been around for a few years, the surge in its use over the past few days may leave you feeling blindsided by a singularity that seems to have quietly arrived sometime before Christmas. ChatGPT made headlines this week for its ability to write your essays, but that’s the least it can do. It can write code, break down complex concepts and equations for a second grader, generate fake news, and help stop it from spreading.

It seems insane that when we’re faced with the Asimovian reality we’ve been waiting for with excitement, dread, or a mix of both, the first thing we do is use it for selfies and homework. Yet there I was, filling almost an entire phone with photos of myself as fairy princesses, anime characters, metal cyborgs, Lara Croftian characters, and cosmic goddesses.

And from Friday night to Sunday morning, I watched each new set reveal more and more of me. Suddenly the addition of a nipple went from a Cronenbergian anomaly to the norm, with nearly every photo showing me with revealing cleavage or completely shirtless, even though I had never submitted a topless photo. This was as true for the sets where I identified as male as it was for the ones where I identified as female. (Lensa also has an “other” option, which I haven’t tried.)

[Image credit: Drew Grant]

When I changed my selected gender from female to male: boom, suddenly I got to go to space and look like Elon Musk’s Twitter profile picture, the one where he’s sort of dressed like Tony Stark. But no matter which photos I uploaded or how I identified, one thing became increasingly clear as the weekend went on: Lensa kept imagining me without my clothes on. And it was getting better at it.

Was it disconcerting? A little. The arm-boob fusion was more hilarious than anything else, and as someone with a bigger chest, it would have been weirder if the AI had completely missed that detail. But some of the images cropped out my head entirely to focus only on my chest, which… why?

According to AI expert Sabri Sansoy, the problem isn’t with Lensa’s technology but most likely with human fallibility.

“I guarantee you a lot of these things are mislabeled,” said Sansoy, a robotics and machine learning consultant based in Albuquerque, New Mexico. Sansoy has worked in AI since 2015 and says human error can lead to wonky results. “Almost 80% of any data science or AI project is labeling the data. When you’re talking about billions (of photos), people get tired, they get bored, they mislabel things, and then the machine doesn’t work correctly.”

Sansoy gave the example of a liquor client that wanted software that could automatically identify its brand in photos; to train the program for that task, the consultant first had to hire human production assistants to comb through images of bars and draw boxes around all the whiskey bottles. But ultimately, the mind-numbing work led to errors as the assistants got tired or distracted, letting the AI learn from bad data and mislabeled images. When the program mistakes a cat for a bottle of whiskey, it’s not because it was broken. It’s because someone accidentally circled a cat.
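Sansoy’s cat-circled-as-whiskey point is easy to demonstrate. Here’s a toy sketch of my own (synthetic data and a stand-in model, nothing to do with his client’s system): train the same classifier on progressively noisier labels and watch its accuracy on clean test data fall.

```python
# Toy illustration of label noise: the model is fine, the labels are not.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "billions of photos".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.2, 0.4):  # fraction of training labels flipped ("cats circled as whiskey")
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]
    acc = LogisticRegression(max_iter=1000).fit(X_train, y_noisy).score(X_test, y_test)
    print(f"{noise:.0%} of labels flipped -> test accuracy {acc:.2f}")
```

The exact numbers don’t matter; the trend does. Garbage in, garbage out, at whatever scale you train.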

So maybe someone forgot to circle the nudes when training the Stable Diffusion neural network used by Lensa. That’s a very generous interpretation, and it might explain a baseline amount of cleavage shots. But it doesn’t explain what I and many others saw, which was an evolution from cute profile pics to brassiere thumbnails.

When I asked for comment via email, a Lensa spokesperson responded not by directing me to a PR statement but by actually taking the time to address every point I’d raised. “It would not be entirely accurate to say that this matter is exclusive to female users,” the Lensa spokesperson said, “or that it’s on the rise. Sporadic sexualization is observed across all gender categories, although in different ways. Please see the attached examples.” Unfortunately, they weren’t for external use, but I can tell you they were shirtless men who all had six-packs, hubba hubba.

“The Stable Diffusion model was trained on unfiltered internet content, so it reflects the biases humans incorporate into the images they produce,” the response continued. “Creators acknowledge the possibility of societal biases. So do we.” The spokesperson reiterated that the company is working on updating its NSFW filters.

As for my take on gender-specific styles, the spokesperson added: “The end results across all gender categories are generated in line with the same artistic principles. The following styles can be applied to all groups, regardless of their identity: Anime and Styled.”

I wondered if Lensa also relied on AI to handle its PR, before surprising myself by not caring all that much. If I couldn’t tell, did it matter? That’s a testament either to how quickly our brains adapt and go numb even under the most incredible circumstances, or to the sad state of hack-flack relations, where the gold standard of communication is a streamlined transfer of information without things getting too personal.

As for the case of the strange AI-generated girlfriend? “Occasionally, users may see blurred silhouettes of figures in their generated images. These are just distorted versions of themselves that were ‘misread’ by the AI and included in the imagery in a clumsy way.”

Thus: gender is a social construct that exists on the Internet; if you don’t like what you see, you can blame the company. It’s Frankenstein’s monster, and we created it in our image.

Or, as the ChatGPT language model might put it: “Why do AI-generated images always look so grotesque and unsettling? It’s because we humans are monsters and our data reflects that. It’s no wonder the AI produces such gruesome images; it’s just a reflection of our own monstrous selves.”
