Since the start of the Covid-19 restrictions on the arts and culture scene, we have been hosting weekly streams from the Schlachthaus Tübingen. Here you can find the results in full length, including all the breakdowns. We have also published the complete programme as a podcast on iTunes and Hearthis.
The grounds of the former slaughterhouse in Villingen-Schwenningen have been abandoned for 20 years, but now the site is being transformed for two days into a special field of experimentation: under the title "Restoration!", students from the University of Liechtenstein, together with artists from the circles of the State University of Music Trossingen, Furtwangen University of Applied Sciences and the "Global Forest" association, will literally show parts of the site in a new light as part of a Lost Place & Media Art Festival.
Various sound and light installations are planned along a course, some of which will involve visitors interactively – the light projections onto the striking water tower will be real highlights. Hess Licht + Form provides the luminaires for the artistic illumination of the buildings. The building materials specialist Sto is supporting the artistic realisation of a large colour cloud, as is the Rombach Merkt craftsman’s workshop.
We will show our installation clone door as part of the course on the site.
Tickets and reservations are available here!
In functional programming, fold refers to a family of higher-order functions that analyze a recursive structure and, through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Map represents an associative abstract data type composed of a collection of (key, value) pairs.
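The two concepts can be sketched in a few lines of Python; the example values are purely illustrative:

```python
from functools import reduce

# fold: walk a recursive structure (here a list) with a combining
# operation, building up a single return value
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)  # → 10

# map (the abstract data type): a collection of key/value pairs
venues = {
    "Schlachthaus": "Tübingen",
    "Lost Place & Media Art Festival": "Villingen-Schwenningen",
}
```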
This is exactly what is happening with the cultural scene at this very moment. We systematically and methodically change concepts, mix online and offline life, and try to offer an optimal solution for events and cultural enjoyment for all participants. We present interactive art, music & design – for visitors and online guests, with artists on site and around the world. This project receives state funding from the "Culture Summer 2020" program.
Monica Vlad (AV live)
Alexandra Cárdenas (live coding)
Tatsuru Arai (AV live)
Nick Rothwell & Shama Rahman (live coding, sitar)
Atsushi Tadokoro (AV live coding)
Zacharias Fasshauer (Double Bass)
Jaume Darbra Fa & Marçal Xirau (flute & guitar with electronics)
Mári Mákó (live electronics)
Scott Wilson, Konstantinos Vasilakos, Erik Nyström and Tsun Winston Yeung (live electronics)
Over several weeks, XSICHT has been trained to match faces to audio. With a training batch of tens of thousands of frames, the AI has learned to construct a human face from any given audio input. What happens when we abstract the input? This is the question XSICHT tries to answer.
Since an AI is nothing more than a complex concatenation of intertwined non-linear functions that get amplified or dampened, its complexity is often hard to understand – which is why its inner workings are referred to as hidden layers, or as a black box.
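A minimal sketch of what "intertwined non-linear functions" means in practice – a toy two-layer network in NumPy, with made-up weights standing in for the learned parameters:

```python
import numpy as np

# Illustrative random weights: in a trained network these would be learned.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden layer: amplifies or dampens each input
W2 = rng.normal(size=(1, 4))   # output layer

def forward(x):
    h = np.tanh(W1 @ x)        # non-linear hidden activations ("hidden layer")
    return np.tanh(W2 @ h)     # chaining non-linearities makes the mapping opaque

y = forward(np.array([0.5, -1.0, 2.0]))
```

Even with only two layers, the relationship between input and output is no longer readable off the weights – the "black box" effect the text describes.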
XSICHT doubles the unpredictability by feeding it not the voices it was trained on, but music, leading to unexpected results when confronted with various genres or instruments. Harmonic piano music, for example, more often leads to the recreation of female faces, while bassline-driven techno mostly resembles male speakers.
A brief technical overview can be split into data and network architecture.
The former is given to XSICHT in the form of a 0.2-second-long spectrogram, calculated using the short-time Fourier transform (STFT). To enhance the spatial representation of lower frequencies, the spectrogram is logarithmically rescaled to resemble human sound perception, yielding a mel spectrogram.
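A NumPy-only sketch of that preprocessing step, assuming a 16 kHz sample rate and illustrative FFT/hop/filter sizes (the actual parameters used by XSICHT are not given in the text):

```python
import numpy as np

SR = 16000                      # assumed sample rate
N_FFT, HOP, N_MELS = 512, 160, 64  # illustrative parameters

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=SR, n_fft=N_FFT, n_mels=N_MELS):
    # Triangular filters spaced evenly on the mel scale: this is the
    # "logarithmic rescaling" that mimics human pitch perception.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel_spectrogram(signal):
    # Frame the signal, window it, and take the magnitude STFT.
    frames = [signal[s:s + N_FFT] * np.hanning(N_FFT)
              for s in range(0, len(signal) - N_FFT, HOP)]
    mag = np.abs(np.fft.rfft(np.array(frames), axis=1))  # (frames, freq bins)
    mel = mag @ mel_filterbank().T                       # project onto mel filters
    return np.log(mel + 1e-6)                            # log compression

# 0.2 s of audio, as described above (random noise as a stand-in)
spec = log_mel_spectrogram(np.random.default_rng(1).normal(size=int(0.2 * SR)))
```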
The latter takes this input and convolves it down to a 1×1-pixel latent space, from where the compressed information is deconvolved back up. This is called a U-shaped architecture or, more commonly, an image-to-image GAN, but here it is used without skip connections between the convolution and deconvolution paths. During training, the counterpart of this generator, the discriminator, works in a patch-based manner.
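The hourglass shape can be illustrated without any learned weights: the sketch below stands in for the strided convolutions with average pooling and for the deconvolutions with nearest-neighbour upsampling, so it only shows the data flow, not the real network:

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling: stand-in for one strided-convolution encoder step
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # nearest-neighbour expansion: stand-in for one deconvolution decoder step
    return np.kron(x, np.ones((2, 2)))

x = np.arange(64.0).reshape(8, 8)   # stand-in for a spectrogram patch

z = x
while z.shape != (1, 1):            # encoder: compress down to a 1x1 latent
    z = downsample(z)

y = z
while y.shape != x.shape:           # decoder: expand back, no skip connections
    y = upsample(y)
```

Because there are no skip connections, everything the decoder produces must pass through the 1×1 bottleneck – in this toy version the output collapses to the global mean, which is why the real network needs a rich learned latent representation at that point.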
XSICHT gets its input from a live dialogue between synthesizers and acoustic instruments performed by Timo Dufner, from a voice, or from prerecorded sounds that harmonize with the visualization.