das Shining

Author page of timodufner

Blog content of this author

Malen mit Maschinen 2023

Painting with Machines – Exhibition & Vernissage – „Malen mit Maschinen“ – Four artists from Tübingen exhibit in the Kulturhalle.

From January 20 to February 18, 2023, Joachim Wedekind & das Shining (Timo Dufner, Mathias Schlenker, Jens Schindel) will present paintings and installations that address the role of digital media in the artistic process. All interested parties are cordially invited to the exhibition opening on

Thursday, January 19, 2023, 7 pm, in the Kulturhalle, Nonnengasse 19.

Dagmar Waizenegger, Head of the Department of Art and Culture, will welcome you. Prof. Stephan Schwan, head of the Realistic Representations Working Group at the Tübingen Institute for Knowledge Media, will introduce the exhibition.

The exhibits in this exhibition are based on algorithms, and their implementation and presentation are tied to technical devices. On this basis, almost all of them invite the recipients to participate, actively or passively, in the creation and output of the media elements. The exhibition „Painting with Machines“ thus aims to stimulate reflection on the role of technical tools, as well as of recipients, in the process of creating artistic artifacts.

The collective „das Shining“ is a group of interdisciplinary media and computer experts who studied media informatics together in Tübingen before expanding their expertise in other academic fields. Joachim Wedekind was an instructional technologist and media didactics specialist. As a latecomer and career changer, he now explores – with close reference to early computer art – the role of painting machines.

Event information:

Joachim Wedekind & das Shining (Timo Dufner, Mathias Schlenker, Jens Schindel): Malen mit Maschinen
January 20 to February 18, 2023
Wednesday to Friday, 4 pm to 7 pm; Saturday, 11 am to 3 pm
Additional event: Thursday, February 2, 2023, 6 pm: live demo „Malen mit dem Roboter“ (Painting with the Robot)
Kulturhalle Tübingen, Nonnengasse 19
Admission free

Audiovisual Stream Collection

Since the start of the Covid-19-related restrictions on the arts and culture scene, we have been hosting weekly streams from the Schlachthaus Tübingen. Here you can find the results in full length, including all the breakdowns. We have also published the complete programme as a podcast on iTunes and Hearthis.

„Instandsetzung!“ Lost Place & Medienkunst Festival

The area of the former slaughterhouse in Villingen-Schwenningen has been abandoned for 20 years, but now the site is being transformed for two days into a special field of experimentation: under the title „Restoration!“, students from the University of Liechtenstein, as well as artists associated with the State University of Music Trossingen, Furtwangen University of Applied Sciences and the „Global Forest“ association, will literally show parts of the site in a new light as part of a Lost Place & Media Art Festival.
Various sound and light installations are planned along a course, some of which will involve visitors interactively – the light projections onto the striking water tower will be real highlights. Hess Licht + Form provides the luminaires for the artistic illumination of the buildings. The building materials specialist Sto is supporting the artistic realisation of a large colour cloud, as is the Rombach Merkt craftsman’s workshop.

We will show our installation „clone door“ as part of the course on the site.
Tickets and reservations are available here!

Map & Fold Festival 2020

In functional programming, fold refers to a family of higher-order functions that analyze a recursive data structure and, through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Map represents an associative abstract data type composed of a collection of (key, value) pairs; a short code sketch of both notions follows the line-up below.
This is exactly what is happening in the cultural scene at this very moment. We systematically and methodically change concepts, mix online and offline life and try to offer an optimal solution for events and cultural enjoyment for all participants. We present interactive art, music & design – for visitors and online guests, with artists on site and around the world. This project receives state funding from the „Culture Summer 2020“ program.
Monica Vlad (AV live)
Alexandra Cárdenas (live coding)
Tatsuru Arai (AV live)
Nick Rothwell & Shama Rahman (live coding, sitar)
Atsushi Tadokoro (AV live coding)
Zacharias Fasshauer (Double Bass)
Jaume Darbra Fa & Marçal Xirau (flute & guitar with electronics)
Mári Mákó (live electronics)
Scott Wilson, Konstantinos Vasilakos, Erik Nyström and Tsun Winston Yeung (live electronics)
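
As a playful aside, the two terms behind the festival name can be written down in a few lines of Python; the values below are only illustrations, not festival data:

```python
from functools import reduce

# fold: walk a recursive structure (here a list) with a combining
# operation, building up a single return value.
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)  # -> 10

# map: an associative data type made of (key, value) pairs.
programme = {"Monica Vlad": "AV live", "Alexandra Cárdenas": "live coding"}

print(total, programme["Alexandra Cárdenas"])  # 10 live coding
```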

more info

XSICHT at ISEA 2019

Over several weeks, XSICHT has been trained to match faces and audio. With a training set of tens of thousands of frames, the AI has learned to construct a human face from any given audio input. What happens when we abstract the input? This is the question XSICHT tries to answer.
Since it is nothing more than a complex concatenation of intertwined non-linear functions that get amplified or dampened, its complexity is often hard to understand, which is why the inner workings of an AI are called hidden layers or a black box.
XSICHT doubles the unpredictability by feeding it not the voices it was trained on, but music, leading to unexpected results when confronted with various genres or instruments. Harmonic piano music, for example, more often leads to the recreation of female faces, while bassline-driven techno mostly resembles male speakers.
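
To make that picture a little more concrete, here is a toy sketch (not XSICHT's actual code) of such a concatenation of non-linear functions: made-up weights amplify or dampen the input, and the intermediate values of the hidden layer are hard to interpret on their own.

```python
import numpy as np

# Toy illustration only: a network is a nest of non-linear functions whose
# inputs are amplified or dampened by weights (the numbers here are made up).
relu = lambda x: np.maximum(0.0, x)       # a common non-linearity

W1 = np.array([[0.8, -1.2], [0.3, 0.5]])  # weights into the hidden layer
W2 = np.array([[1.5, -0.7]])              # weights into the output

x = np.array([0.2, 0.9])                  # some input signal
hidden = relu(W1 @ x)                     # the "hidden layer"
output = relu(W2 @ hidden)                # the final output
print(hidden, output)  # the intermediate values stay opaque: the "black box"
```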

Insight

A brief technical overview can be split into data and network architecture.
The former is given to XSICHT in the form of a 0.2-second-long spectrogram, calculated using the Short-Time Fourier Transform. To enhance the spatial representation of lower frequencies, the spectrogram is rescaled logarithmically to resemble human sound perception, yielding a mel spectrogram.
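
As a rough illustration of that preprocessing step, a 0.2-second snippet can be turned into a mel spectrogram as sketched below; the sample rate, FFT size and number of mel bands are assumptions for the example, not the settings actually used for XSICHT.

```python
import numpy as np
import librosa

sr = 22050                                 # assumed sample rate (samples/second)
snippet = np.random.randn(int(0.2 * sr))   # stand-in for 0.2 s of live audio

# Short-Time Fourier Transform magnitudes mapped onto the mel scale, which
# spaces frequency bands roughly like human pitch perception.
mel = librosa.feature.melspectrogram(y=snippet, sr=sr,
                                     n_fft=512, hop_length=128, n_mels=64)
mel_db = librosa.power_to_db(mel, ref=np.max)  # log-compressed magnitudes
print(mel_db.shape)                        # (mel bands, time frames) for the network
```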


The latter takes this input and convolves it down to a 1×1-pixel latent space, from which the compressed information is deconvolved back into an image. This is known as a U-shaped architecture or, more commonly, an image-to-image GAN, but here it is used without skip connections between the convolution and deconvolution pipes. During training, the counterpart of this generator, the discriminator, works in a patch-based manner.
XSICHT gets its input from a live dialogue between synthesizers and acoustic instruments performed by Timo Dufner, from a voice, or from prerecorded sounds that harmonize with the visualization.
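
For readers who want to see what such a setup can look like in code, here is a minimal PyTorch sketch of an encoder that convolves a spectrogram down to a 1×1 latent space, a decoder without skip connections, and a patch-based discriminator. The layer counts, channel widths and the 64×64 resolution are illustrative assumptions, not the network XSICHT actually uses.

```python
import torch
import torch.nn as nn

def down(cin, cout):   # convolution block that halves the spatial size
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

def up(cin, cout):     # transposed convolution block that doubles it
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.ReLU())

class Generator(nn.Module):
    """64x64 spectrogram -> 1x1 latent space -> 64x64 RGB image, no skip connections."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(down(1, 64), down(64, 128), down(128, 256),
                                    down(256, 512), down(512, 512), down(512, 512))
        self.decode = nn.Sequential(up(512, 512), up(512, 512), up(512, 256),
                                    up(256, 128), up(128, 64),
                                    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())

    def forward(self, spec):
        return self.decode(self.encode(spec))

class PatchDiscriminator(nn.Module):
    """Scores overlapping image patches instead of the whole image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(down(3, 64), down(64, 128), down(128, 256),
                                 nn.Conv2d(256, 1, 4, 1, 1))  # one logit per patch

    def forward(self, img):
        return self.net(img)

specs = torch.randn(4, 1, 64, 64)      # stand-in batch of mel spectrograms
faces = Generator()(specs)             # -> (4, 3, 64, 64)
patches = PatchDiscriminator()(faces)  # -> (4, 1, 7, 7) grid of real/fake logits
print(faces.shape, patches.shape)
```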

more


Projects of this author

Map & Fold Events

In functional programming, fold refers to a family of higher-order functions that analyze a recursive data structure and, through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Map represents an associative abstract data type composed of a collection of (key, value) pairs. This is exactly what is happening in the cultural scene at this very moment. See full project…