Oct 5, 2024 by Marc Peruzzi
Photo: To capture this image of Orion as seen from Yellowstone, Hence member Kelly Gorham waited out a three-day blizzard. When the skies cleared, temps dropped to -45 at 11 p.m.

Only Solutions: Is This The Last A.I. Article You’ll Read?

Or is it only the last article about A.I. I will write? Either way, one can only hope that the bubble bursts.

Despite the fact that OpenAI has raised more than $20 billion and is now valued at $157 billion—a $70 billion bump in nine months doesn’t seem logical—or maybe because of such bubble-like growth across the nascent A.I. world, there’s been much talk recently about A.I. as a bubble economy. And when Goldman Sachs’ head of equity research, Jim Covello, starts saying exactly that, you would think that investors old enough to remember crypto bubble 1 (the sequel is coming soon), the housing collapse, and the tech bubble before that might pay attention. “Overbuilding things the world doesn’t have use for, or is not ready for, typically ends badly,” he was quoted as saying in an internal report.

Way back in 2023, more than 350 A.I. executives, not Luddites, warned that “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Besides sharing that now forgotten existential concern, I’m actually bullish on some applications of A.I. Scanning MRIs for tumors with the help of A.I.-boosted machine learning could save lives by taking human error out of the equation—without replacing actual technicians. For similar reasons, A.I. is also helping back up overworked and underpaid air traffic controllers. Sorry if it’s putting risk takers in Michelin Man suits out of work, but defusing bombs seems like a better job for an A.I.-controlled machine.

The slogan would go like this: “A.I., a cure for the type of monotony that gets people killed.” Sounds appropriately narrow, because after alleviating and augmenting boring but deadly work, one struggles to find examples of A.I.’s upsides.

The conceit that A.I. could help make unbiased decisions because it lacks emotion (maybe) fails to recognize that the people who build A.I. systems have plenty of biases and emotions. Google’s Gemini A.I., which depicted Black men and Asian women as Nazis, was perhaps the most hilarious example of algorithmic bias. In trying to prove they weren’t biased, the Gemini developers revealed their bias that all of human history should be whitewashed with the DEI brush. As someone of both German and Italian descent, I’m happy to spread the blame, Gemini!

Then there’s the cost and environmental waste of it all. Generating one A.I. image takes as much energy as charging your cell phone. That adds up. Microsoft has cut a deal to restart the nuclear power plant at Three Mile Island, not to secure clean energy for a country keen to combat climate change, but to power its A.I. data centers.

In part because of that energy load—and OpenAI’s Sam Altman’s claim that he wants to use A.I. to “capture much of the world’s wealth”—A.I. is also expensive. Engineering, data mining, and electricity are not free. Covello of Goldman Sachs points out in his research that to date, A.I. designed to produce better, faster, cheaper business reports only succeeds in delivering poorer quality, more expensive results somewhat quickly. The phone companies make a big deal about A.I. photo software that makes it look like you can jump higher than you can, or that lets you delete the rest of the people on the beach in the photo you snapped on vacation. But that seems a benefit only for a few people with puny minds, or for Orwellian bureaucrats rewriting history in the Ministry of Truth. After you manipulate a photo once, the shine is off the coin.

So is A.I. a bubble? I’m a creative, not a stock analyst. But from my reading, the correct answer at this point in A.I. development is that it has bubble tendencies, especially when it comes to that point about building features and services that nobody asked for.

Nowhere is that “build it and someone might care” approach clearer than in how generative A.I. is being applied to the creative industry. Can it, for example, make an illustrator’s life easier by generating filler material at an illustration’s periphery? Certainly. But does it have much of a role to play in film, photography, and (egads) writing? The A.I. curious seem to think so, but for the sake of our culture and the creatives who shape it, I hope it’s a trend that’s already fading. In fact, I’m making that call right now: Here are a few reasons why the generative A.I. bubble will burst in the creative industry.

Realism Still Matters

I once was invited on a press trip to Utah’s Wasatch Range. The premise was furthering the science of hypothermia in avalanche burials.

My job as a journalist was to strip down to a non-insulated polypropylene base layer under a waterproof shell. Then the scientists buried me sitting in a snow cave. I had a thermometer down my throat and another up my ass. Clamped in my mouth was a Black Diamond AvaLung, an avalanche safety device that pulls clean oxygen from the snowpack around you and expels your deadly CO2 into the snowpack behind you, prolonging the time you can survive a burial without an air pocket.

Naturally, with me, the experiment went dramatically wrong. The snow that the researchers packed around me was faceted like decorative sugar crystals. It wouldn’t have made a snowball. So in the first moments after my burial, the spent CO2 I was exhausting at my back surreptitiously flowed around those facets and back to my face. This was not a flaw of the AvaLung; the snow was just a poor facsimile of actual avalanche debris, which is much denser than facets.

My oxygen levels plummeted on the researchers’ screens. I said something to the effect of “This can’t be right” into the microphone. And suddenly there was a lot of excitement as my world was painting itself black. The lead researcher, a burly guy, got his shovel blade to my chest in two strokes, then hauled me out of that crypt into the clean air just before I passed out.

The study was over for me. Because I was built like a bike racer at the time, I was deeply hypothermic after a few minutes. I shivered for an hour. And if I’m being honest, I was a little traumatized. But although their experiment on me was a failure—they figured out how to bury the subsequent folks correctly—my personal experiment was a success. I got to experience what it’s like to die in an avalanche without much risk of dying or breaking bones. And that lesson has been meaningful to me as a backcountry skier and a journalist who writes about avalanches. I felt the death mask of CO2. I got the panicked rush of suffocation. I gasped that last breath before blacking out, but didn’t black out. And then I was breathing again.

Hence Journal contributing writer Dan Oko calls this type of lived experience out in the world “Meat Space.” He is working on a Longer Read on the subject. But briefly, from a magazine editor and writer’s perspective: Adventures that go perfectly are not adventures, they are vacations, and nobody wants to see the slideshow from your vacation. We call those “and the plane landed safely” stories. They aren’t worth telling. The avalanche burial wasn’t a “plane landed safely” story for me. I wrote about it.

For contrast, here’s a safe-landing story: A few summers ago I had encounters with 19 different bears on my mountain bike, and I’ve had plenty more since then. One fairly big bruin, the color of nutmeg with a white star on his chest, stood up on me and a buddy, then dropped to all fours and galloped toward us. In another incident, I thought I’d gotten clear of a family of big bears only to ride back into them on the next switchback. It was a rugby scrum of bears. But unlike my mock avalanche burial, those bear encounters aren’t much of a story because the bears didn’t maul me, which they seldom do. I hope I never get to write a bear adventure story.

The point I’m getting to is that fieldwork matters more than ever, because A.I. is already making it so filmmakers don’t go on location to get the shots they need to bring realism to their work. A great example of this was the opener to the latest “True Detective” HBO series, which tried to depict a caribou hunt by Native Alaskans. The CGI scene was garish, oversaturated, and false. The filmmakers forgot about realism because they forgot that making a film—or reporting a story, or taking a photo that you care about—is hard work that takes place in the elements.

The head of Patagonia Films was quoted recently in Hence Journal as saying that the essence of a Patagonia film shoot happens on a climbing wall or on an ultrarun, but really that’s true of all storytelling. Think about how much better the “True Detective” opener would have been had they actually captured Native Alaskans hunting on their ancestral lands. They would have made the story believable. And along the way, someone might have told them that you can’t chop through pack ice with an aluminum ski-mountaineering ice axe, and spared them so many of the other comical winter-themed mistakes they made in the series, including the hypothermia deaths.

In an increasingly virtual world where it’s difficult to make human and natural connections, creatives need to spend more time on assignments.

Comet NEOWISE as seen over the Springhill community north of Bozeman, Montana, on Tuesday, July 14, 2020. “Generative A.I. goes against my entire reason for doing photography, which is to bear witness.” Photo: Kelly Gorham

CGI Can Ruin Movies: Example Two

I dog-eared J.R.R. Tolkien’s “The Lord of the Rings” trilogy in fifth grade and read the covers off the books before high school started. All I needed were the words and Tolkien’s hand-sketched maps to see Middle-earth as he saw it.

I am not unique in this regard. At a very specific point in our evolution, humans developed the ability to create and enjoy fiction. Religion and civilization appear to have followed that shift. It wasn’t the other way around. Reading Tolkien—he wrote for his son, who was away fighting in World War II, after losing most of his own classmates to battle in WWI—is a lesson in both literature and civics. The trilogy was not intended as a parable, but with his fiction Tolkien did ask the question “What is worth dying for?” The answer, fittingly, is “civilization.”

With the trilogy, the filmmaker Peter Jackson, also a Tolkien fan, took the books and brought them to life. Yes, the editor in me cringed at times because he butchered the language. (Humor me: In the book, Gandalf said, “I threw down my enemy, and he fell from the high place and broke the mountainside where he smote it in his ruin.” Not the nonsensical “I threw down my enemy and smote his ruin upon the mountainside.” Language matters!) But again, back to the point: Most LOTR fans will tell you that visually Jackson recreated Middle-earth and its inhabitants exceedingly well. He did this with landscape cinematography from New Zealand, the best costume designers Hollywood had to offer, and a light but expert touch with the CGI—mostly with the character Gollum.

And then Jackson made “The Hobbit” and screwed everything up. Maybe the studios were at fault too, because the love interest between an elf and a dwarf was something out of Disney, not Tolkien, but it was the overwrought CGI that wrecked the Hobbit films for me. Instead of scary humans in orc costumes, we got giant, bloated, expressionless computer-generated beings out of a Marvel film. A CGI dragon—Smaug—and CGI wolflike wargs make sense; they don’t exist. But a CGI Lonely Mountain? Why would you do such a thing? Earth has mountains already. Just like Earth has red deserts, which is why “The Martian” was shot in the Wadi Rum desert in Jordan.

In our house we don’t rewatch “The Hobbit” because Jackson’s CGI smote its ruin on the suspension of disbelief.

A.I. is making this style of CGI cheaper and more accessible. And it’s making films worse. Blow some shit up like “Oppenheimer.” Fly fighter jets like “Top Gun.” Hire a stunt team like “Furiosa.” People go to the movies because enough realism suspends disbelief. Just a smidge too much CGI breaks the spell.

I’ll Take Vinyl Any Day

Neanderthals made flutes from cave bear bones in what is now Slovenia 60,000 years ago, which means that, unlike fiction, music made by hominids predates civilization by at least 54,000 years, probably more.

But now we have A.I.-generated music. To apply the Goldman Sachs razor, the first question should be: Who asked for that? Certainly not audiences of any type. Why would anyone ask for synthetic music? Now recall that making a product that nobody asked for is a bad business model. But despite that, A.I. “music” (without the muse) is being incorporated by low-budget film and commercial producers who apparently don’t want to pay a human being for original work, no matter how much better it would make the end result.

This represents yet another fundamental misunderstanding of the power of music by people in tech. First, the streamers thought they could pay musicians pennies to distribute their work to new audiences, and they did, but music is not something merely to stream. Concerts are on the rise because, ever since our cousins the Neanderthals carved those flutes, music has been a shared medium. Like book clubs and film festivals, people are reclaiming a connection to live, original music as a shared human experience. Hominids with instruments are packing stadiums for what is still almost entirely an audible experience.

An algorithm also can never understand what you want to listen to, or when. The algorithm doesn’t know that you have a bottle of red wine on the table, or that you are camping under desert stars, or that you’re about to run on a treadmill.

Given the time, we are our own best DJs. And when we don’t have time, we still want our music curated, not automated. That’s especially true in film where music is paired with cinematography and storyline to create an emotive response.

To me, the best use of established music in recent memory was in Quentin Tarantino’s “Once Upon a Time in Hollywood.” Tarantino and his music supervisor Mary Ramos built the 30-song soundtrack. For the audience that lived through the ’70s, the music pulls out memories. For younger audiences, it places the film in the same historical context as the costumes and cars.

And here’s the thing. I don’t want to listen to that soundtrack independent of the film. I mean, there are two Deep Purple tunes on that list and I haven’t liked Deep Purple since I was 11. But in film, even classic rock feels somehow fresh. The music and the story feed off each other.

As I’ve written about before, originally scored music elevates film by rising and falling with the action and storyline. I am not a musician, so this is magic to me. And the “Dune” films are perhaps the best recent example of this magic. More than 50 artists and artistic technicians created the soundtrack under sound designers Mark Mangini (“Dune: Part One”) and Richard King (“Dune: Part Two”). Both artists are Oscar winners.

Imagine how horrible those films would have been if A.I. had generated the score. It would sound like a pharmaceutical commercial, or something that Red Lobster actually did as it was coming out of bankruptcy.

Just because technology allows you to do something doesn’t mean you should. Which reminds me of another example: Back when tablets were supposed to save magazine-style publishing, a marketer on my team kept insisting that whenever we had winter stories, we should animate them with falling snow, as The New York Times had done exactly once, to much acclaim.

We, of course, restrained ourselves.


The Orgasmatron

Never mind that the A.I. hawkers are so narcissistic as to think their technology will capture the world’s wealth and make us all richer, all while they hide the energy toll A.I. will take on the climate and the lack of demand for their supply. The A.I. exercise is yet another chance to ask the same question we’ve been pondering since industrialization: How much of human existence and creativity do we want to outsource to machines?

It’s an old theme. In Fritz Lang’s “Metropolis” (1927), the lead, Freder, hallucinates that a malfunctioning machine is demanding human sacrifices like a sun god. In Charlie Chaplin’s “Modern Times” (1936), humans became literal cogs. James Cameron’s “The Terminator” (1984, fittingly enough) featured killer robots with the ability to learn. In “The Matrix” (1999), we are pulled into the data stream. These were all films depicting dystopias. But they weren’t far-fetched. The Ukraine war is now being fought with A.I. war drones.

I am not saying that A.I. will drag us into a dystopia (for that we have partisan politics), but we should take the warnings from A.I.’s own enablers and regulate the technology before some angsty teens get the launch codes. I’m more of a mind that A.I. will continue to produce a bunch of technology for technology’s sake that nobody asked for, wants, or needs. In other words, junk content that will break the internet.

It was another filmmaker who was prescient in this regard. In Woody Allen’s “Sleeper,” Miles Monroe (played by Allen) is a jazz musician and the owner of the Happy Carrot health food store in 1973. A medical procedure gone wrong leaves him cryogenically preserved for 200 years, and he wakes to the America of 2173. He is revived by rogue scientists to help overthrow the police state. Miles disguises himself as a robot, “Milo,” until finally confessing to his owner, Luna Schlosser (played by Diane Keaton), that he is from the past. A love affair ensues, and Luna invites Miles to “perform sex” with her in the Orgasmatron, a machine that simulates sex to climax for both parties.

Miles, like creatives who would rather avoid a simulation today, is hesitant. “I’m strictly a hand operator,” he says. “I don’t like anything with moving parts that are not my own.”