Wednesday, April 3, 2024

An artist of gestures
Credit: The New York Times Archives. Originally published March 3, 1974, Page 22.

“Last night I saw upon the stair/ A little man who wasn't there/ He wasn't there again today...” — but the footprints, the maddening scraps of paper collected in a green box, a white box, a score of iconoclastic “ready‐mades,” an obscure book on chess, “The Large Glass,” a last assemblage and some two dozen paintings remain. The lives of artists, by and large, are not interesting: they meet other artists; they paint, travel, exhibit, resettle themselves and paint some more. Their romances go off like small firecrackers; the big explosions, the meaningful noises, are all in the work. And sometimes the distance between the life and the work — the complex edifices they construct from a few simple ideas and a handful of experience — makes for great mystery. But with Marcel Duchamp, the personality funneled itself into the life and into the work in carefully measured doses. Life and work are equally devoted to cover‐up and mystification: the mechanical and the sexual made one and interchangeable, mental rather than visual art, roulette and chess superseding painting and sculpture.

Duchamp is an artist whose gestures and writings count for as much as his actual work; the things he did and said become inseparable from the things he made, an artist who ultimately built a career out of self‐effacement and ambiguity. These two weapons, coupled with an abiding passion not to be trapped by personal taste, by personal anything, finally succeeded in elevating him to the position of entrepreneur and art‐world guru to three generations of artists and collectors. The artist's complicated relations with a series of collectors, while rarely discussed, are a perfect example of Duchampian irony: Duchamp, the man who refused to be trapped into living by the sale of his works and who publicly gave up art for chess, nevertheless remained in a small way a dealer for other artists, an assembler of exhibitions, a builder of other people's collections and a man busy with art‐world business all his life.

A surrealist, a dadaist? Yes, when he chose. At all times, he extracted the maximum effect from every fantasy while remaining a man apart, playing the role of adviser to every succeeding movement.

Corinne Robins is a novelist and art critic.

Salt Seller

The Writings of Marcel Duchamp.

Edited by Michel Sanouillet and Elmer Peterson. Illustrated. 196 pp. New York: Oxford University Press. $10.95.

Marcel Duchamp

Edited by Anne d'Harnoncourt and Kynaston McShine. Illustrated. 345 pp. New York: The Museum of Modern Art. Paper, $9.50. Cloth, distributed by New York Graphic Society, Greenwich, Conn., $25.

Curator Kynaston McShine, in the Museum of Modern Art's magnificent “Marcel Duchamp” book‐catalogue, describes Duchamp as the elegant dandy of the avant‐garde, courted and admired by other artists. He stresses Duchamp's cool, dark good looks, his theatrical sense, which made him the embodiment of the appearing and disappearing passionless magician to whom everything is permitted. (Duchamp, of course, is the artist who first said that what an artist chooses to be art is art, opening up a series of possibilities and cul‐de‐sacs for succeeding generations.) In this book, Duchamp appears, above all, as the player who recognizes no opponents or rivals in art.

Take surrealism, for example. During the height of surrealism, Duchamp creates a second personality for himself, a feminine counterpart, Rrose Sélavy, with whom he announces he intends to go into business. Man Ray's 1921‐22 photographs of Duchamp as Rrose show us an elegant, carefully made‐up, brooding, dark‐eyed woman in a fashionable hat of the period. One must search the face to find Duchamp's features. But Rrose Sélavy is no simple drag‐queen creation. Duchamp assembles a book of her sayings, a collection of slightly risqué puns and word games, and publishes it in France in 1939. Marcel as Rrose writes in idiomatic, untranslatable French, while his working notes for the boxes, his lectures and catalogue introductions he has translated into terse and eloquent English. And the point of the hoax — that Marcel and Rrose and La Belle Haleine are one and that all the art world knows it — receives careful surrealist documentation. Duchamp as both male and female complete in himself has made the ultimate surrealist gesture. The sayings of Rrose Sélavy, under the heading “Rrose Sélavy & Co.,” take up 15 pages (every one translated and meticulously annotated) of the 196‐page book “Salt Seller (Marchand du Sel): The Writings of Marcel Duchamp,” edited by Michel Sanouillet and Elmer Peterson. The 1973 edition of this book (Duchamp supervised and edited both the earlier French and 1959 English versions) is filled out with all the new material — Duchamp's critical writings, diary jottings, letters and prefaces to other artists' exhibitions (most being in the form of a single sentence) — that its editors could unearth. Duchamp's work, inseparable from his verbal games, lives by means of the artist's own documentation. But for all that, “Salt Seller” does not stand by itself or succeed as a gloss to the artist's life or art. Rather, it is an art scholar's playground, full of schizoid diagrams, maddening puns and glancing insights into the art world of the twenties, thirties, forties and fifties — because Duchamp, always alert to what other artists are doing, is also always ready to place them in an art‐historical context. Duchamp's true romance perhaps is with future art historians. Nowhere is this more evident than in his writings about other artists, his careful appraisals of Matisse, Picasso, etc., in the form of notes to the Société Anonyme collection, which the editors happily included. For the most part, though, “Salt Seller” is an unnerving combination of schizophrenic case‐book and Duchampian joke carried past the point of put‐on without quite achieving any art value.

The beautiful “Marcel Duchamp,” which the Museum of Modern Art produced for its definitive Duchamp exhibition, includes a dozen essays by poets, critics and art historians, along with a composite portrait of the artist's many public faces in the form of statements and works about Marcel solicited from artists, writers, dealers and collectors who either knew, worked with and/or were influenced by him. Four hundred and twenty‐nine illustrations (including 12 in color) give the reader a very fair idea of the artist's work; and Duchamp's paintings, drawings and “ready‐mades” for the most part appear larger and more imposing in such photographic reproductions, which tend to flatten out some of the works' intimate, kinky quality. The essays range from excellent iconographical studies of Duchamp's use of the machine and mechanistic forms to his employment of alchemical symbols in “The Large Glass” (prefaced by Duchamp's own statement: “If I have ever practiced alchemy, it was in the only way it can be done now, that is to say, without knowing it”). Sanouillet puts Duchamp's writings and literary values in perspective with the French intellectual tradition, and Anne d'Harnoncourt provides a fine overall introduction to the artist's work.

Rrose Sélavy, a.k.a. Marcel Duchamp. Photograph by Man Ray, 1921.

The composite portrait is a delight—one can sense the elusive Marcel smiling over the shoulders of its various contributors. But for me, the most valuable essay in the book is David Antin's “Duchamp and Language.” Antin, well known as a poet and an occasional writer on art, analyzes Duchamp's underlying attitude toward language as evidenced in his French puns, titles of works, which are often visual translations of verbal puns, and his pseudo‐scientific notes for the white and green boxes. How Duchamp's art exists in the tension created between name and thing, Duchamp's attitude toward abstract words, his idea of creating a new, absolute language are all given a thorough going‐over in an informal, colloquial prose that enables the writer to break down abstruse language theories into concrete, comprehensible metaphors. Antin's interpretations of Duchamp's Rrose Sélavy word games are always, at their least, amusing, and his conclusion that “language in Duchamp is the actuating principle that drives all his art” seems both definitive—and open‐ended. Because for his critics and admirers alike, Duchamp is “the man who got away,” while managing to remain a presence to be reckoned with.




    How Meta and AI companies recruited striking actors to train AI

    October 2023

    Between July and September last year, actors in the US were invited to participate in an unusual research project, designed to capture their voices, faces, movements, and expressions.

    The project, which coincided with Hollywood’s historic strikes, was run by London-based emotion AI company Realeyes and Meta. The information captured from the actors was fed into an AI database to better understand and express human emotions. 

    Many actors across the industry worry that AI could be used to replace them, whether or not their exact faces are copied. And in this case, by providing the facial expressions that will teach AI to appear more human, study participants may in fact have been the ones inadvertently training their own potential replacements. Read the full story.

    —Eileen



    The Download

    Your daily dose of what’s up in emerging technology

    By Rhiannon Williams • 4.3.24




     https://www.technologyreview.com/2023/10/19/1081974/meta-realeyes-artificial-intelligence-hollywood-actors-strike/



















    ARTIFICIAL INTELLIGENCE

    How Meta and AI companies recruited striking actors to train AI

    Hollywood actors are on strike over concerns about the use of AI, but for as little as $300, Meta and a company called Realeyes hired them to make avatars appear more human.

    October 19, 2023
    Image: a woman dressed in the style of a classic Hollywood film star with AI training markers outlining her features. Credit: Stephanie Arnett/MITTR | Getty

    One evening in early September, T, a 28-year-old actor who asked to be identified by his first initial, took his seat in a rented Hollywood studio space in front of three cameras, a director, and a producer for a somewhat unusual gig.

    The two-hour shoot produced footage that was not meant to be viewed by the public—at least, not a human public. 

    Rather, T’s voice, face, movements, and expressions would be fed into an AI database “to better understand and express human emotions.” That database would then help train “virtual avatars” for Meta, as well as algorithms for a London-based emotion AI company called Realeyes. (Realeyes was running the project; participants only learned about Meta’s involvement once they arrived on site.)

    The “emotion study” ran from July through September, specifically recruiting actors. The project coincided with Hollywood’s historic dual strikes by the Writers Guild of America and the Screen Actors Guild (SAG-AFTRA). With the industry at a standstill, the larger-than-usual number of out-of-work actors may have been a boon for Meta and Realeyes: here was a new pool of “trainers”—and data points—perfectly suited to teaching their AI to appear more human. 

    For actors like T, it was a great opportunity too: a way to make good, easy money on the side, without having to cross the picket line. 


    “This is fully a research-based project,” the job posting said. It offered $150 per hour for at least two hours of work, and asserted that “your individual likeness will not be used for any commercial purposes.”  

    The actors may have assumed this meant that their faces and performances wouldn’t turn up in a TV show or movie, but the broad nature of what they signed makes it impossible to know the full implications for sure. In fact, in order to participate, they had to sign away certain rights “in perpetuity” for technologies and use cases that may not yet exist. 

    And while the job posting insisted that the project “does not qualify as struck work” (that is, work produced by employers against whom the union is striking), it nevertheless speaks to some of the strike’s core issues: how actors’ likenesses can be used, how actors should be compensated for that use, and what informed consent should look like in the age of AI. 

    “This isn’t a contract battle between a union and a company,” said Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, at a panel on AI in entertainment at San Diego Comic-Con this summer. “It’s existential.”

    Many actors across the industry, particularly background actors (also known as extras), worry that AI—much like the models described in the emotion study—could be used to replace them, whether or not their exact faces are copied. And in this case, by providing the facial expressions that will teach AI to appear more human, study participants may in fact have been the ones inadvertently training their own potential replacements. 

    “Our studies have nothing to do with the strike,” Max Kalehoff, Realeyes’s vice president for growth and marketing, said in an email. “The vast majority of our work is in evaluating the effectiveness of advertising for clients—which has nothing to do with actors and the entertainment industry except to gauge audience reaction.” The timing, he added, was “an unfortunate coincidence.” Meta did not respond to multiple requests for comment.

    Given how technological advancements so often build upon one another, not to mention how quickly the field of artificial intelligence is evolving, experts point out that there’s only so much these companies can truly promise. 

    In addition to the job posting, MIT Technology Review has obtained and reviewed a copy of the data license agreement, and its potential implications are indeed vast. To put it bluntly: whether the actors who participated knew it or not, for as little as $300, they appear to have authorized Realeyes, Meta, and other parties of the two companies’ choosing to access and use not just their faces but also their expressions, and anything derived from them, almost however and whenever they want—as long as they do not reproduce any individual likenesses. 

    Some actors, like Jessica, who asked to be identified by just her first name, felt there was something “exploitative” about the project—both in the financial incentives for out-of-work actors and in the fight over AI and the use of an actor’s image. 

    Jessica, a New York–based background actor, says she has seen a growing number of listings for AI jobs over the past few years. “There aren’t really clear rules right now,” she says, “so I don’t know. Maybe … their intention [is] to get these images before the union signs a contract and sets them.”


    All this leaves actors, struggling after three months of little to no work, primed to accept the terms offered by Realeyes and Meta—and, intentionally or not, their individual decisions affect all actors, whether or not they personally choose to engage with AI. 

    “It’s hurt now or hurt later,” says Maurice Compte, an actor and SAG-AFTRA member who has had principal roles on shows like Narcos and Breaking Bad. After reviewing the job posting, he couldn’t help but see nefarious intent. Yes, he said, of course it’s beneficial to have work, but he sees it as beneficial “in the way that the Native Americans did when they took blankets from white settlers,” adding: “They were getting blankets out of it in a time of cold.”  

    Humans as data 

    Artificial intelligence is powered by data, and data, in turn, is provided by humans. 

    It is human labor that prepares, cleans, and annotates data to make it more understandable to machines; as MIT Technology Review has reported, for example, robot vacuums know to avoid running over dog poop because human data labelers have first clicked through and identified millions of images of pet waste—and other objects—inside homes. 

    When it comes to facial recognition, other biometric analysis, or generative AI models that aim to generate humans or human-like avatars, it is human faces, movements, and voices that serve as the data. 

    Initially, these models were powered by data scraped off the internet—including, on several occasions, private surveillance camera footage that was shared or sold without the knowledge of anyone being captured.

    But as the need for higher-quality data has grown, alongside concerns about whether data is collected ethically and with proper consent, tech companies have progressed from “scraping data from publicly available sources” to “building data sets with professionals,” explains Julian Posada, an assistant professor at Yale University who studies platforms and labor. Or, at the very least, “with people who have been recruited, compensated, [and] signed [consent] forms.”

    But the need for human data, especially in the entertainment industry, runs up against a significant concern in Hollywood: publicity rights, or “the right to control your use of your name and likeness,” according to Corynne McSherry, the legal director of the Electronic Frontier Foundation (EFF), a digital rights group.

    This was an issue long before AI, but AI has amplified the concern. Generative AI in particular makes it easy to create realistic replicas of anyone by training algorithms on existing data, like photos and videos of the person. The more data that is available, the easier it is to create a realistic image. This has a particularly large effect on performers. 


    Some actors have been able to monetize the characteristics that make them unique. James Earl Jones, the voice of Darth Vader, signed off on the use of archived recordings of his voice so that AI could continue to generate it for future Star Wars films. Meanwhile, de-aging AI has allowed Harrison Ford, Tom Hanks, and Robin Wright to portray younger versions of themselves on screen. Metaphysic AI, the company behind the de-aging technology, recently signed a deal with Creative Artists Agency to put generative AI to use for its artists. 

    But many deepfakes, or images of fake events created with deep-learning AI, are generated without consent. Earlier this month, Hanks posted on Instagram that an ad purporting to show him promoting a dental plan was not actually him. 

    The AI landscape is different for noncelebrities. Background actors are increasingly being asked to undergo digital body scans on set, where they have little power to push back or even get clarity on how those scans will be used in the future. Studios say that scans are used primarily to augment crowd scenes, which they have been doing with other technology in postproduction for years—but according to SAG representatives, once the studios have captured actors’ likenesses, they reserve the rights to use them forever. (There have already been multiple reports from voice actors that their voices have appeared in video games other than the ones they were hired for.)

    In the case of the Realeyes and Meta study, it might be “study data” rather than body scans, but actors are dealing with the same uncertainty as to how else their digital likenesses could one day be used.

    Teaching AI to appear more human

    At $150 per hour, the Realeyes study paid far more than the roughly $200 daily rate in the current Screen Actors Guild contract (nonunion jobs pay even less). 

    This made the gig an attractive proposition for young actors like T, just starting out in Hollywood—a notoriously challenging environment even had he not arrived just before the SAG-AFTRA strike started. (T has not worked enough union jobs to officially join the union, though he hopes to one day.) 

    In fact, even more than a standard acting job, T described performing for Realeyes as “like an acting workshop where … you get a chance to work on your acting chops, which I thought helped me a little bit.”

    For two hours, T responded to prompts like “Tell us something that makes you angry,” “Share a sad story,” or “Do a scary scene where you’re scared,” improvising an appropriate story or scene for each one. He believes it’s that improvisation requirement that explains why Realeyes and Meta were specifically recruiting actors. 

    In addition to wanting the pay, T participated in the study because, as he understood it, no one would see the results publicly. Rather, it was research for Meta, as he learned when he arrived at the studio space and signed a data license agreement with the company that he only skimmed through. It was the first he’d heard that Meta was even connected with the project. (He had previously signed a separate contract with Realeyes covering the terms of the job.) 

    The data license agreement says that Realeyes is the sole owner of the data and has full rights to “license, distribute, reproduce, modify, or otherwise create and use derivative works” generated from it, “irrevocably and in all formats and media existing now or in the future.” 

    This kind of legalese can be hard to parse, particularly when it deals with technology that is changing at such a rapid pace. But what it essentially means is that “you may be giving away things you didn’t realize … because those things didn’t exist yet,” says Emily Poler, a litigator who represents clients in disputes at the intersection of media, technology, and intellectual property.

    “If I was a lawyer for an actor here, I would definitely be looking into whether one can knowingly waive rights where things don’t even exist yet,” she adds. 

    As Jessica argues, “Once they have your image, they can use it whenever and however.” She thinks that actors’ likenesses could be used in the same way that other artists’ works, like paintings, songs, and poetry, have been used to train generative AI, and she worries that the AI could just “create a composite that looks ‘human,’ like believable as human,” but “it wouldn’t be recognizable as you, so you can’t potentially sue them”—even if that AI-generated human was based on you. 

    This feels especially plausible to Jessica given her experience as an Asian-American background actor in an industry where representation often amounts to being the token minority. Now, she fears, anyone who hires actors could “recruit a few Asian people” and scan them to create “an Asian avatar” that they could use instead of “hiring one of you to be in a commercial.” 

    It’s not just images that actors should be worried about, says Adam Harvey, an applied researcher who focuses on computer vision, privacy, and surveillance and is one of the co-creators of Exposing.AI, which catalogues the data sets used to train facial recognition systems. 

    What constitutes “likeness,” he says, is changing. While the word is now understood primarily to mean a photographic likeness, musicians are challenging that definition to include vocal likenesses. Eventually, he believes, “it will also … be challenged on the emotional frontier”—that is, actors could argue that their microexpressions are unique and should be protected. 

    Realeyes’s Kalehoff did not say what specifically the company would be using the study results for, though he elaborated in an email that there could be “a variety of use cases, such as building better digital media experiences, in medical diagnoses (i.e. skin/muscle conditions), safety alertness detection, or robotic tools to support medical disorders related to recognition of facial expressions (like autism).”


    When asked how Realeyes defined “likeness,” he replied that the company used that term—as well as “commercial,” another word for which there are assumed but no universally agreed-upon definitions—in a manner that is “the same for us as [a] general business.” He added, “We do not have a specific definition different from standard usage.”  

    But for T, and for other actors, “commercial” would typically mean appearing in some sort of advertisement or a TV spot—“something,” T says, “that’s directly sold to the consumer.” 

    Outside of the narrow understanding in the entertainment industry, the EFF’s McSherry questions what the company means: “It’s a commercial company doing commercial things.”

    Kalehoff also said, “If a client would ask us to use such images [from the study], we would insist on 100% consent, fair pay for participants, and transparency. However, that is not our work or what we do.” 

    Yet this statement does not align with the language of the data license agreement, which stipulates that while Realeyes is the owner of the intellectual property stemming from the study data, Meta and “Meta parties acting on behalf of Meta” have broad rights to the data—including the rights to share and sell it. This means that, ultimately, how it’s used may be out of Realeyes’s hands. 

    As explained in the agreement, the rights of Meta and parties acting on its behalf also include: 

    • Asserting certain rights to the participants’ identities (“identifying or recognizing you … creating a unique template of your face and/or voice … and/or protecting against impersonation and identity misuse”)
    • Allowing other researchers to conduct future research, using the study data however they see fit (“conducting future research studies and activities … in collaboration with third party researchers, who may further use the Study Data beyond the control of Meta”)
    • Creating derivative works from the study data for any kind of use at any time (“using, distributing, reproducing, publicly performing, publicly displaying, disclosing, and modifying or otherwise creating derivative works from the Study Data, worldwide, irrevocably and in perpetuity, and in all formats and media existing now or in the future”)

    The only limit on use was that Meta and parties would “not use Study Data to develop machine learning models that generate your specific face or voice in any Meta product” (emphasis added). Still, the variety of possible use cases—and users—is sweeping. And the agreement does little to quell actors’ specific anxieties that “down the line, that database is used to generate a work and that work ends up seeming a lot like [someone’s] performance,” as McSherry puts it.

    When I asked Kalehoff about the apparent gap between his comments and the agreement, he denied any discrepancy: “We believe there are no contradictions in any agreements, and we stand by our commitment to actors as stated in all of our agreements to fully protect their image and their privacy.” Kalehoff declined to comment on Realeyes’s work with clients, or to confirm that the study was in collaboration with Meta.

    Meanwhile, Meta has been building photorealistic 3D “Codec avatars,” which go far beyond the cartoonish images in Horizon Worlds and require human training data to perfect. CEO Mark Zuckerberg recently described these avatars on the popular podcast from AI researcher Lex Fridman as core to his vision of the future—where physical, virtual, and augmented reality all coexist. He envisions the avatars “delivering a sense of presence as if you’re there together, no matter where you actually are in the world.”

    Despite multiple requests for comment, Meta did not respond to any questions from MIT Technology Review, so we cannot confirm what it would use the data for, or who it means by “parties acting on its behalf.” 

    Individual choice, collective impact 

    Throughout the strikes by writers and actors, there has been a palpable sense that Hollywood is charging into a new frontier that will shape how we—all of us—engage with artificial intelligence. Usually, that frontier is described with reference to workers’ rights; the idea is that whatever happens here will affect workers in other industries who are grappling with what AI will mean for their own livelihoods. 

    Already, the gains won by the Writers Guild have provided a model for how to regulate AI’s impact on creative work. The union’s new contract with studios limits the use of AI in writers’ rooms and stipulates that only human authors can be credited on stories, which prevents studios from copyrighting AI-generated work and further serves as a major disincentive to use AI to write scripts. 

    In early October, the actors’ union and the studios also returned to the bargaining table, hoping to provide similar guidance for actors. But talks quickly broke down because “it is clear that the gap between the AMPTP [Alliance of Motion Picture and Television Producers] and SAG-AFTRA is too great,” as the studio alliance put it in a press release. Generative AI—specifically, how and when background actors should be expected to consent to body scanning—was reportedly one of the sticking points. 

    Whatever final agreement they come to won’t forbid the use of AI by studios—that was never the point. Even the actors who took issue with the AI training projects have more nuanced views about the use of the technology. “We’re not going to fully cut out AI,” acknowledges Compte, the Breaking Bad actor. Rather, we “just have to find ways that are going to benefit the larger picture… [It] is really about living wages.”

    But a future agreement, which is specifically between the studios and SAG, will not be applicable to tech companies conducting “research” projects, like Meta and Realeyes. Technological advances created for one purpose—perhaps those that come out of a “research” study—will also have broader applications, in film and beyond. 

    “The likelihood that the technology that is developed is only used for that [audience engagement or Codec avatars] is vanishingly small. That’s not how it works,” says the EFF’s McSherry. For instance, while the data agreement for the emotion study does not explicitly mention using the results for facial recognition AI, McSherry believes that they could be used to improve any kind of AI involving human faces or expressions.

    (Besides, emotion detection algorithms are themselves controversial, whether or not they even work the way developers say they do. Do we really want “our faces to be judged all the time [based] on whatever products we’re looking at?” asks Posada, the Yale professor.)

    This all makes consent for these broad research studies even trickier: there’s no way for a participant to opt in or out of specific use cases. T, for one, would be happy if his participation meant better avatar options for virtual worlds, like those he uses with his Oculus—though he isn’t agreeing to that specifically. 

    But what are individual study participants—who may need the income—to do? What power do they really have in this situation? And what power do other people—even people who declined to participate—have to ensure that they are not affected? The decision to train AI may be an individual one, but the impact is not; it’s collective.

    “Once they feed your image and … a certain amount of people’s images, they can create an endless variety of similar-looking people,” says Jessica. “It’s not infringing on your face, per se.” But maybe that’s the point: “They’re using your image without … being held liable for it.”

    T has considered the possibility that, one day, the research he has contributed to could very well replace actors. 

    But at least for now, it’s a hypothetical. 

    “I’d be upset,” he acknowledges, “but at the same time, if it wasn’t me doing it, they’d probably figure out a different way—a sneakier way, without getting people’s consent.” Besides, T adds, “they paid really well.” 

    Do you have any tips related to how AI is being used in the entertainment industry? Please reach out at tips@technologyreview.com or securely on Signal at 626.765.5489. 
