Robots in art: poetry in the sand, portrait of David.


Developers of artificial intelligence and robotics are moving into the creative professions. Today we can not only buy a robot vacuum cleaner but also attend a concert by a robot band or visit an exhibition by an android artist.

In 2019, Ai-Da, billed as “the world’s first ultra-realistic humanoid AI robot artist”, held a solo exhibition in the UK featuring eight drawings, twenty paintings, four sculptures and two video works. The Unsecured Futures exhibition included abstract paintings of trees, a sculpture of a bee and the video works, one of which, Privacy, paid homage to Yoko Ono’s 1965 performance Cut Piece.

The artist was named after the British mathematician and computing pioneer Ada Lovelace. Ai-Da can draw from life thanks to cameras in her eyeballs and AI algorithms created by scientists at Oxford University, which compute the coordinates that guide her robotic arm. Ai-Da sketches with a pencil or pen, but can also sculpt and paint ceramics.

In 2020, Hong Kong artist Victor Wong used an artificial intelligence called A.I. Gemini to create works in the style of traditional Chinese ink painting. The AI learned the techniques of calligraphy, line drawing and ink shading in order to produce paintings on xuan rice paper.

Gemini uses formulas that model terrain, accounting for gravity and tectonic movement, to compose virtual 3D landscape sketches before committing them to paper.

Russian engineer and inventor Alexei Lyanguzov develops robot artists that paint in the styles of “neuroclassicism” and Suprematism.

They can paint a portrait of David in acrylics on canvas. A special tool in the robot arm precisely dispenses enamels of varying viscosity, layering textured strokes on top of one another. Creating a single painting takes about a week.

Berlin artist Frank Barnes, together with designers Markus Kolb and Stock Plum, created Compressorhead, a collective of anthropomorphic robot musicians, in 2013. The band performs heavy metal. The robot musicians are driven by compressed air, and their distinguishing feature is not only their playing ability but also their realistic stage presence.

The band consists of six robots: Stickboy, a four-armed drummer; Fingers, a guitarist manipulator with 78 fingers; Hi-Hat Humper, a drum-pedal manipulator with a metal mohawk; Bones, a bassist with two four-fingered hands; Hellgå Tarr, a female robot on second guitar and backing vocals; and the frontman, Mega-Wattson, who mimics singing, opens his mouth, flexes his torso and can move around the stage on tracks. The band debuted at Brainpool Studios in Cologne in 2012 with a cover of AC/DC’s “T.N.T.”.

Another group, Z-Machines, was also created in 2013, by Japanese engineers led by the renowned designer Kenjiro Matsuo. A robot guitarist named Marsh plays a double-neck guitar with 78 fingers and 12 picks. Ashura is a 22-armed drummer robot, while Kosmo, a robot keyboardist, is notable for the speed of its playing. The instruments are controlled by electric and pneumatic drives.

The group became famous through its collaboration with Tom Jenkinson, better known as Squarepusher. Using the robots, the musician recorded the five-track album Music for Robots.

In 2020, the GPT-3 neural network wrote the script for the short film Solicitors. It was filmed by two film students from Chapman University in California.

In 2016, a novel written by artificial intelligence debuted in a Japanese literary competition for the first time. Titled “The Day a Computer Writes a Novel,” the work advanced past the first round, beating out roughly one and a half thousand texts by human authors. A development team from Future University Hakodate selected words and sentences for the system and set the parameters for constructing the text.

A robot poet created in 2017 by the Chinese designer Yuxi Liu can write verse in the sand: the “Poet on the Beach” wanders the shoreline, composing and inscribing poems as it goes.

In 2021, artificial intelligence researchers at OpenAI unveiled a neural network called DALL·E that can generate images from textual descriptions in natural language. The model has 12 billion parameters and was trained on a dataset of text–image pairs, from which it learned to produce pictures matching text descriptions.

Researchers have found that DALL·E has great generative potential. For example, the neural network can create anthropomorphic animals and other unusual objects, such as an avocado-shaped chair.

DALL·E has also absorbed historical and geographical context and can generalize trends in design and technology.
