Last Updated : Mar 26, 2019 03:27 PM IST | Source: Moneycontrol.com

Podcast | Digging Deeper - Who’s Afraid of AI Art?

Do androids dream of electric sheep? And then proceed to paint them?


Rima M. | Rakesh Sharma

Moneycontrol Contributors

Be it the late Korean artist Nam June Paik, who stretched the meaning of multimedia, is known as the originator of video art, and was one of the earliest champions of the term "electronic super highway", or the feisty Yayoi Kusama, a legendary Japanese contemporary artist who plays with human perception through illusions of infinity in installation art and also creates art via painting, performance, film, fashion, poetry, fiction, and more, artistic pursuits are no longer limited to one mode of expression.


Art today could be anything. It could be the mysterious protest graffiti of the unseen artist Banksy, or almost subversive performance art, as when, last year, a Banksy work spectacularly shredded itself and self-destructed right after the auction house Sotheby’s had sold it for $1.4 million.

And of course, there was Andy Warhol who almost single-handedly birthed the visual art movement known as pop art.

It was just a matter of time before digital art transcended its limits and began to scramble minds and established notions of artistic creation.

You are all probably familiar with this line - “It was a bright cold day in April, and the clocks were striking thirteen.” It is the opening line of the George Orwell classic 1984 (alternatively, a Nostradamusian prediction of our current dystopian political climate). When OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, fed that line into its new AI model, GPT-2, the model followed it up with this: “I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”

The tone is futuristic, the style is novelistic, and the intelligence artificial.
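For the technically curious, the publicly released GPT-2 weights can be prompted in much the same way through the open-source Hugging Face transformers library. The sketch below is only an illustration of the general idea, not OpenAI's original setup, and because it samples, every run produces a different continuation.

```python
# A minimal sketch of prompting the publicly released GPT-2 weights with the
# same opening line, using the open-source Hugging Face "transformers" library.
# Illustrative only; not OpenAI's original setup.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "It was a bright cold day in April, and the clocks were striking thirteen."
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_length=100,                      # prompt plus roughly a paragraph of continuation
    do_sample=True,                      # sample rather than pick the single likeliest word
    top_k=50,
    pad_token_id=tokenizer.eos_token_id, # silences a padding warning for GPT-2
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```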

On this edition of Digging Deeper, we will try to understand the intervention of Artificial Intelligence in the realm of art – this time, visual art – which once upon a time drew its meaning from the organic relationship between an artist’s imagination and his or her tools of creation. What these modern brushstrokes mean for the world of art is what we discuss today, with me, Rakesh Sharma, on Moneycontrol.

The rise of the AI artist

Fingertips, brushes, sculpting instruments, solid and liquid pigments, tangible surfaces and conventional media that could be touched and experienced in an almost sensual way during creation… these have always been the ingredients of art. The human element was at the core of this process but things are changing as a recent exhibition at the HG Contemporary gallery in Chelsea proved. All the images on display there were created by a computer.

Ian Bogost, a contributing editor at The Atlantic, reported that this exhibition of prints, called “Faceless Portraits Transcending Time,” celebrated the inevitable collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal.

This was a move, says the writer and we quote, “meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it’s the first solo gallery exhibit devoted to an AI artist.”

Obviously, such an enterprise needed some unusual minds to back it, and as the piece points out, the players involved were all interesting enough to inspire an ensemble film. The artist is a computer scientist who commands five-figure print sales from software that generates inkjet-printed images. The technogallerist is a former hotel-chain financial analyst. There was a venture capitalist with two doctoral degrees in biomedical informatics, and also an art consultant who put the whole thing together. The Atlantic piece said, “Together, they hope to reinvent visual art, or at least to cash in on machine-learning hype along the way. The gallery show might just be a coming-out party for Elgammal’s venture-backed, fine-art econometrics start-up. The computer scientist has created some legitimately striking pieces. But he and his partners also want to sell AICAN as a “solution” to art, one that could predict forthcoming trends and perhaps even produce works in those styles. The idea is so contemporary and extravagant, it might qualify as art better than the strange portraits on exhibit at the gallery.”

That, of course, is a subjective take on what “art” is, and certainly on what “better” art is. But as the piece informs, the AI-art gold rush began in earnest last October, when the New York auction house Christie’s sold a piece called Portrait of Edmond de Belamy, an algorithm-generated print in the style of 19th-century European portraiture, for $432,500.

An emerging market

As the piece recalls, the art world was shocked because the print that sold for $432,500 had never been shown in galleries or exhibitions before coming to market at auction, which is usually a rarefied channel reserved for established work.

What bothered the purists was not just the fact that the winning bid was made anonymously by telephone but also that the so-called art was created by a computer program. We quote, “A computer program that generates new images based on patterns in a body of existing work, whose features the AI learns. What’s more, the artists who generated the work, the French collective Obvious, hadn’t even written the algorithm or the training set. They just downloaded them, made some tweaks, and sent the results to market.”

So the connection between imagination and interpretation that we spoke about earlier was neither organic nor inspired. It was merely and quite obviously convenient.

Pierre Fautrel, a member of the collective Obvious, however, said unapologetically, “We are the people who decided to do this, and decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame.”

Can this level of derivation be called art?

In the immortal words of Patsy Stone in Absolutely Fabulous, we have to ask ourselves, “But is it art, Eddie?”

As The Atlantic writer opines, “A century after Marcel Duchamp made a urinal into art by putting it in a gallery, not much has changed, with or without computers. As Andy Warhol famously said, “Art is what you can get away with.””

As the writer puts it, the best way to get away with something is to make it feel new and surprising. So, at a time when using a computer is de rigueur and machines offer all kinds of ways to generate images that can be output, framed, displayed, and sold, from digital photography to artificial intelligence, the fashionable choice has become generative adversarial networks, or GANs, the technology that created Portrait of Edmond de Belamy.

We quote, “Like other machine-learning methods, GANs use a sample set—in this case, art, or at least images of it—to deduce patterns, and then they use that knowledge to create new pieces. A typical Renaissance portrait, for example, might be composed as a bust or three-quarter view of a subject. The computer may have no idea what a bust is, but if it sees enough of them, it might learn the pattern and try to replicate it in an image. GANs use two neural nets (a way of processing information modeled after the human brain) to produce images: a “generator” and a “discerner.” The generator produces new outputs—images, in the case of visual art—and the discerner tests them against the training set to make sure they comply with whatever patterns the computer has gleaned from that data. The quality or usefulness of the results depends largely on having a well-trained system, which is difficult.”
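To make the quoted description concrete, here is a minimal, illustrative sketch of that generator-versus-discerner loop in PyTorch (the “discerner” is more commonly called the discriminator). The network sizes, the flattened 64x64 RGB image shape, and the training details are assumptions for illustration only; this is not the code behind Portrait of Edmond de Belamy or AICAN.

```python
# A minimal, illustrative GAN training step in PyTorch.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64 * 3            # noise size, flattened image size

generator = nn.Sequential(                        # random noise -> candidate image
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, img_dim), nn.Tanh(),           # outputs scaled to [-1, 1]
)
discriminator = nn.Sequential(                    # image -> probability it is "real" art
    nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update; real_images is a (batch, img_dim) tensor scaled to [-1, 1]."""
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: learn to tell training-set art from generated images.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(discriminator(real_images), ones) + bce(discriminator(fakes), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: learn to produce images the discriminator accepts as real.
    fakes = generator(torch.randn(batch, latent_dim))
    loss_g = bce(discriminator(fakes), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```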

That’s why, says the writer, the folks in the know were upset by the Edmond de Belamy auction. The whole thing reeked of dishonesty because, as the piece points out, the image was created by an algorithm the artists didn’t write, and trained on an “Old Masters” image set they also didn’t create.

The damning verdict being and we quote, “the brave new world of AI painting appeared to be just more found art, the machine-learning equivalent of a urinal on a plinth.”

Can AI art be original and new?

The piece quotes Ahmed Elgammal, who thinks AI art can be much more than that. He is a Rutgers University professor of computer science and runs an art-and-artificial-intelligence lab. At this lab, he and his colleagues develop technologies that try to understand and generate new “art” with AI—not just credible copies of existing work, like GANs do. Elgammal says matter-of-factly about GAN-made images, “That’s not art, that’s just repainting. It’s what a bad artist would do.” In direct contrast to GANs, Elgammal calls his approach a “creative adversarial network,” or CAN.

As the piece explains, “CAN swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set. The results are striking and strange, although calling them a new artistic style might be a stretch. They’re more like credible takes on visual abstraction.”
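A rough way to picture the CAN twist, based on the published description of creative adversarial networks rather than AICAN's actual code, is that the discriminator also predicts an art-historical style (Baroque, Cubism, and so on), and the generator earns an extra reward when that style prediction comes out maximally ambiguous. The sketch below is a hedged illustration; the 512-dimensional feature and the 25 style classes are assumptions.

```python
# Illustrative generator objective for a CAN-style setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_styles = 25
style_head = nn.Linear(512, n_styles)   # assumes a 512-dim discriminator feature vector

def generator_can_loss(real_fake_score, style_logits):
    # Term 1: the usual GAN objective, "look like art from the training set".
    adv = F.binary_cross_entropy(real_fake_score, torch.ones_like(real_fake_score))
    # Term 2: style ambiguity; push the predicted style distribution toward
    # uniform, i.e. "art, but not in any one known style".
    uniform = torch.full_like(style_logits, 1.0 / n_styles)
    log_probs = F.log_softmax(style_logits, dim=1)
    ambiguity = -(uniform * log_probs).sum(dim=1).mean()   # cross-entropy to uniform
    return adv + ambiguity
```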

According to Elgammal, ordinary observers can’t tell the difference between an AI-generated image and a “normal” one in the context of a gallery or an art fair. That’s an accomplishment, says the writer, and it takes some application to achieve artistic coherence. For instance, Elgammal sent the writer a Dropbox link to 3,000 portraits by varied artists across at least two centuries: Titian, Gerard ter Borch, and Giovanni Antonio Boltraffio, among others. To play with such a wide range of contexts, subjects, artists, and styles requires both skill and knowledge that extend beyond technology.

But as the writer says, “That might be an inevitability of AI art: Wide swaths of art-historical context are abstracted into general, visual patterns. AICAN’s system can pick up on general rules of composition, but in the process, it can overlook other features common to works of a particular era and style.”

Does AI miss the fine print?

With this preoccupation with imagery and its manipulation, does AI art miss the subtle symbolism that informs certain artistic eras?

For instance, John Sharp, an art historian trained in 15th-century Italian painting and the director of the M.F.A. program in design and technology at Parsons School of Design, told The Atlantic how charged with cultural meaning portraiture is. Portraiture across centuries said important things about the cultural, historical and personal contexts of subjects but as the piece says, a neural net couldn’t infer anything about the particular symbolic trappings of the Renaissance or antiquity—unless it was taught to, and that wouldn’t happen just by showing it lots of portraits.

We quote, “For Sharp and other critics of computer-generated art, the result betrays an unforgivable ignorance about the supposed influence of the source material.”

The use of preexisting art also raises the question: can AI create something that is not derivative? To capture the essence of a period, could it not also be fed information about that period and be left to conjure an imaginary count or duke, and then make a portrait of him? We saw how AI took 1984 forward; is it really inconceivable that a machine could create the story, the history, and a face for an imaginary royal sitting down for a portrait? Only time will tell.

Is AI a threat to visual artists?

Some critics of the Chelsea show seemed to think so. One Instagram post said, and we quote, “What a shame for an art gallery…instead of supporting human beings giving their vibrant vision of our world.”

Elgammal thinks the worry is misplaced because what he does is collaborative art but as the writer points out with a sense of irony, “it’s odd to list AICAN as a collaborator—painters credit pigment as a medium, not as a partner. Even the most committed digital artists don’t present the tools of their own inventions that way; when they do, it’s only after years, or even decades, of ongoing use and refinement.”

Artists like Elgammal possibly think of AICAN as a device or a tool, just as a camera is: not creative on its own, but capable of producing unexpected results in the right hands. Though he can’t resist adding, “It’s the first time in history that a tool has had some kind of creativity, that it can surprise you.”

The writer also cites Casey Reas, a digital artist who co-designed the popular visual-arts-oriented coding platform Processing, who says that the artist should claim responsibility over the work rather than cede that agency to the tool or the system they create.

The idea of the primacy of the human mind over technology is reassuring to many because it shows that art is still a byproduct of agency rather than coding. But how long this human hegemony over agency remains is anyone’s guess. Ultimately, art is assessed by the audience – should the audience increasingly prefer art created by an entity made of 0s and 1s and not one made of C, H, N, O, should we or should we not cede authorship to that entity? Should we or should we not attribute originality to that entity? One might argue that the entity only “learnt” how to create art based on what it was shown by a sentient human. But is that not also how we, as sentient humans, learn and create ourselves? What aspect of what we say, create, paint, sing, isn’t in some way informed by what we learnt or were exposed to?

The real murky area is the ethics of it all, but that is for another day, at another time.

Big money in AI art?

Are we, however, coming to a point where technology will make more money than the artist who uses it?

The writer explains and we quote, “Elgammal’s financial interest in AICAN might explain his insistence on foregrounding its role. Unlike a specialized printmaking technique or even the Processing coding environment, AICAN isn’t just a device that Elgammal created. It’s also a commercial enterprise. Elgammal has already spun off a company, Artrendex, that provides “artificial-intelligence innovations for the art market.” One of them offers provenance authentication for artworks; another can suggest works a viewer or collector might appreciate based on an existing collection; another, a system for cataloging images by visual properties and not just by metadata, has been licensed by the Barnes Foundation to drive its collection-browsing website.”

As the piece clearly points out, the company’s plans are more ambitious than recommendations and fancy online catalogs. And it is getting attention in the right places.

We quote, “When presenting on a panel about the uses of blockchain for managing art sales and provenance, Elgammal caught the attention of Jessica Davidson, an art consultant who advises artists and galleries in building collections and exhibits. Davidson had been looking for business-development partnerships, and she became intrigued by AICAN as a marketable product. Davidson also sold the “Faceless Portraits Transcending Time” show to Hoerle-Guggenheim, who had been looking for an AI-oriented artist to feature in his gallery.”

Historically accurate or contextual or not, the prints are ambitiously priced from $6,000 to $18,000. But the idea is to transcend the show and as the piece says, “the exhibit is also a rhetorical maneuver to lay the groundwork for a larger effort: to use AI to understand, and maybe even define, future visual aesthetics.”

The show could upend the belief that artistic progress depends on human reason alone and according to Elgammal, AICAN can be a potentially valuable business-intelligence platform.

Artrendex could also bring about a shift in an art market worth more than $64 billion, serving as a viable data-analytics tool that can gauge the potential value of art.

We quote, “Last year, Khosla Ventures funded the company with a $2.4 million investment to build and market tools for art econometrics. That’s more than the average visual artist will make in a lifetime. AICAN’s commercial potential turns the tool from a quirky AI-art partner into a potentially valuable general-purpose technology.”

Elgammal is also establishing an artist-in-residence program to bring in artists to collaborate with AICAN internally and Davidson hopes AICAN will help in “building out pipelines” to corporate collections—such as those of hotels or office buildings, which need art to hang in commercial spaces.

We quote, “Given enough data about user preferences for visual images, AICAN and its cousins could, in theory, deduce the hippest looks for the next season, and Artrendex could create and manufacture low-cost editions suitable for hanging in guest rooms or office lobbies. Perhaps the company could even sell a subscription to refresh those images, a kind of Thomas Kinkade of machine-learning art that would produce a regular income to satisfy the expectations of Artrendex’s venture-capital investors.”

But what this enterprise obviously lacks is the bite of social comment that a Banksy brings to his work, even though he does not profit from it. And nobody who uses AI as a collaborator will ever be able to avoid being asked if what they are creating is really art or, as the piece says, more a tech demo than a deliberate oeuvre.

The article does speculate whether machines will absorb artistic practices entirely, but as of now, that seems improbable. We quote, “For now, the AI look is interesting and novel, but it will always be an aesthetic bound to a particular time period. The trappings of machine learning look fresh and interesting today, but soon they too will become tiresome, like NTSC-video scan lines and JPEG compression artifacts did after they ceased to be novelties brought into the gallery. Eventually, the most important ones carry on as art history. AICAN is neither a savior nor an annihilator of art. It’s just another style, bound by trends and accidents to a moment that will pass like any other.”

Yes, the writer concedes, the democratisation of art is clear and present with the advent of YouTube, Instagram, DeviantArt and the like, but whether art’s fate will depend on which story fetches the higher bid is open to debate.

Automation vs humans

The Verge has published a few interesting pieces on AI art. In one such piece, writer Dami Lee imagines what would happen to artists if AI-enabled features started painting, editing, and doing other parts of their jobs for them.

She also asks: now that AI tools are already starting to automate what used to be time-consuming manual processes, could the results be good for artists’ creativity rather than potential job killers?

She recalls how companies that make industry-standard creative tools, like Adobe and Celsys, have been adding AI features to their digital art software in recent years, in the hope that they will speed up workflows by eliminating drudge work and give artists more time to experiment.

She mentions everything from machine-learning tools that help find specific video frames faster to features that color in entire works of line art at the press of a button, and concedes that AI is being incorporated in subtle but surprisingly impactful ways.

She cites Tatiana Mejia, who manages Adobe’s AI platform, Sensei. Tatiana thinks that the best AI features can assist artists and cut out repetitive tasks.

We quote, “Tatiana’s assessment comes from a Pfeiffer Consulting study commissioned by Adobe, in which most creatives said they weren’t worried about being replaced by AI, and that they could see the most potential for AI and machine learning applied to tedious, uncreative tasks. That could mean a smart cropping feature that automatically recognizes the subject of a photo, or automatic image tagging to help people find stock photos faster. They’ll still require an artist’s control, too.”

Tatiana says decisively in the piece, “Creativity is profoundly human. AI cannot replace the creative spark.”

Still, there is no getting away from the fact that AI is going where no human has possibly gone before.

AI art, an inevitability?

Are we depending on machines to colour between the lines in more ways than one? And what kinds of features are being used the most today? The writer points out that beyond Adobe’s headline-grabbing concept AI tools, it’s usually the more subtle AI features that actually ship.

We quote, “Recent additions include an automated audio mixing feature in Premiere and the ability to create searchable PDFs through optical character recognition in Adobe Scan. It’s a lot less flashy than AI automatically removing unwanted objects out of videos, but it’s enough to take some of the drudgery out of creative work.”

And there is more to come, she says. More AI features that could enhance human creativity rather than replace it.

As she puts it, “One feature can take a video of a dog jumping in a pool and generate descriptive tags; another can take a simple doodle of a mushroom and pull up similar-looking photographs, much like Google’s experimental Quick Draw and AutoDraw tools that use neural networks to recognize sketches. Other AI tools could have more dramatic implications for how artists work, like an auto-coloring tool designed for comics and animation. A beta version of Celsys’ manga and illustration software Clip Studio now includes an AI feature that, with just a little guidance from the artist, can automatically color in black-and-white line drawings. The results can be unpredictable and require a little cleanup, but there’s huge potential in the way the technology can be used by artists and studios.”

Applied AI arts?

How can AI be applied to varied artistic disciplines? The piece cites animator João Do Lago, who has worked on various anime, including Netflix’s Castlevania, and thinks AI coloring tools could play a big role in the future of 2D animation, giving artists room to experiment by cutting down the time it takes to color each frame.

The piece further explains the workflow: users fill in the line art with “hints” of where certain colors should go, the input is sent to Clip Studio’s servers for processing (a network connection is required for this step), and the results come back to the software as a fully colored image.
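Clip Studio’s actual service interface is not public, but purely as an illustration of the round trip the piece describes, a client-side call might look something like the following sketch. The endpoint URL and field names are hypothetical placeholders, not a real API.

```python
# Purely illustrative: a client-side version of the hint-based colorization round trip.
import requests

def colorize(line_art_path: str, hints_path: str) -> bytes:
    """Send line art plus a sparse color-hint layer to a (hypothetical)
    colorization endpoint and return the fully colored image bytes."""
    with open(line_art_path, "rb") as line_art, open(hints_path, "rb") as hints:
        response = requests.post(
            "https://example.invalid/colorize",   # placeholder, not a real endpoint
            files={"line_art": line_art, "hints": hints},
            timeout=60,                           # the heavy lifting happens server-side
        )
    response.raise_for_status()
    return response.content                       # bytes of the colored image

# Example usage (hypothetical files):
# colored = colorize("page_01_lines.png", "page_01_hints.png")
```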

As Lago says, “One of the things that makes animation really hard to do is the amount of time it takes to create, which makes it so that most studios just stick to a visual style and formula that has been proven to work before. But when you can automate a big part of that process, you allow more room for different ideas and visual languages to be explored, since you can iterate on them a lot faster.”

No wonder, then, that studios have already begun investing in auto-coloring research, including OLM, the production studio behind the Pokémon anime. The consensus is that AI will play a huge role in the future ‘look’ of 2D animation.

There is no telling where the evolution of AI tools will go. As the piece points out, Celsys’ AI is based on a deep learning method that combines computer vision tools, like those used in self-driving cars, with visual creation tools that can generate extremely realistic faces.

And Celsys truly believes that the tools it’s providing are in artists’ best interests, rather than technologies meant to replace them.

Can AI exist without human intervention?

In an interesting March piece in The Verge, the writer James Vincent says, “Training algorithms to generate art is, in some ways, the easy part. You feed them data, they look for patterns, and they do their best to replicate what they’ve seen. But like all automatons, AI systems are tireless and produce a never-ending stream of images.” He then cites German AI artist Mario Klingemann, who says the tricky part is knowing what to do with it all.

Klingemann, the piece informs us, has created Memories of Passersby I, a video installation that consists of two screens, each using AI to generate a portrait every few seconds. Every image is unique and morphs seamlessly into its successor. It’s like watching a lava lamp made of human faces. And yes, it has found its way to Sotheby’s too.

We quote, “Auctions like this show that after years of fermentation, AI art is moving into the world of high art. But as it does, it invites questions about the nature of art and creativity. What is the relationship between artist and machine? Can AI programs ever really be called creative?”

That question is relevant even though AI art, as the piece rightly notes, often tends toward grotesque and unsettling imagery.

Chris Peters, a former software engineer and AI artist, says this is a “horrible” way to approach the medium. “Where’s the humanity?” he asks. Though he too curates images from GANs, he paints them himself, believing this is the best way to honor the original artists whose work was used to train the AI.

He says and we quote, “I learned in art school it can take hours and hours of careful observation before your mind quiets down to the point you can really see and understand something. I wanted to get inside the AI’s head, to achieve some understanding of what it was trying to do. I was able to, but only after days and days of looking at them while painting them.

If I just printed out the image, I would not understand 1/100th of what is there compared to standing for hours and hours and days and days painting.”

Creativity vs appropriation

What do you give when you take from art already in existence? The piece gives the example of the artist Robbie Barrat, who recently collaborated with a painter named Ronan Barrot, training a GAN on the latter’s many paintings of skulls. The GAN outputs were displayed alongside the original paintings, but Barrat also found a way to take advantage of AI’s infinite output. He created a peepshow box, which only one person can look into at a time. They press a button and it generates a new image of a skull.

There is also a certain kind of magic in the way technology done right interacts with the onlooker.

The piece describes how looking at Klingemann’s Memories of Passersby I, you would likely see an image that would never exist again.

We quote, “It’s a casual brush with infinity, like the fact that every time you shuffle a deck of cards, you’re creating a configuration that’s probably never existed in history before.

When it comes to AI, the feeling of infinite productivity resonates strongly with the technology’s cultural history. World myths are full of machines that reveal the hubris of humans by simply working without end.”
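The card-shuffling comparison holds up to quick arithmetic: a 52-card deck has 52! possible orderings, roughly 8 × 10^67, which is why any fair shuffle is effectively unique. A two-line check, for the curious:

```python
# Quick arithmetic behind the card-shuffling comparison: a 52-card deck has
# 52! possible orderings, far more than the number of shuffles ever performed.
import math

orderings = math.factorial(52)
print(f"{orderings:.3e}")   # about 8.066e+67 distinct orderings
```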

Using this potential without letting it overwhelm the human element is a fine balance. Many creators, like Klingemann, are happy to keep exploring the potential these machines have. As he says, there is very little that compares to the joy of unceasing production from AI, this almost overwhelming output that will never stop. Ask that of a human artist, and you’ll know how useful it could be in the art market.

First Published on Mar 26, 2019 03:25 pm