International Conference | 15-17 November 2019 | Bucharest

Elephants have been replaced in circus shows by holographic images, and we are all very happy about it, hoping that the same will soon happen in zoos. Telepresence, plays written by AI, dancing with robots, and digital sets have become a daily part of the performing arts.

What will the performing arts of the future look like? Will it be a globalized, AI-controlled world where avatars of faraway people interact? Will performance be carried out by cyborgs in virtual sets controlled by the biofeedback of the spectator? Will it be the live development of genetically modified organisms specially designed to blossom and die in front of cheering crowds? How far can we go to express ourselves, and how far can we go to draw more attention to our work? By technologically augmenting performance and performers, do we get closer to our full potential or further from ourselves?


We invite you to submit papers for the international interdisciplinary conference Augmenting Performance, held at the International Center for Research and Education in Innovative Creative Technologies – CINETic in Bucharest, 15-17 November 2019.

The conference focuses on research, development and practice in augmenting artistic performance through interdisciplinary art-technology-science approaches.

We look forward to papers on Human Computer Interaction, Wearable Tech and use of Biosensors, Data Visualization and Sonification, Robotics, Mobile Augmentations, Scripting Interactive Environments, Sound and Music Interaction, Motion Tracking and Gaming Practice used to augment Performing Arts Practices. Papers on interdisciplinary fields like film, animation, VR, theatre, video-gaming or specialized fields of technology will be accepted if they are highly consistent and relevant to the general subject. Projects developed through research and presentations of research results will be highly valued. 

We look forward to receiving papers presenting original, provocative work and research.

One section of the conference will be dedicated to presentations of innovative art practice and one section to research papers.


Send an abstract of 1000 words and a biography of 100 words to the following address: 

For case presentations of artistic work, please include 2 images or a video.

Deadline for abstracts: 16 September 2019.

Deadline for final papers: 10 October 2019.

All accepted papers will be presented at the conference. Final papers will undergo peer review, with the possibility of subsequent publication in a volume.

Foreign students and artists will be offered 2 grants covering all expenses (travel, housing, food) for 5 days, including participation in the conference and visits to the CINETic center and UNATC.

Upon request, up to 30 Ph.D. and master students, whose work is accepted, will be hosted in the UNATC student dormitory for the duration of the Conference. 



Institute for Computational Perception, Johannes Kepler University Linz (JKU) and Austrian Research Institute for Artificial Intelligence (OFAI), Vienna (Austria)


Much of current research in Artificial Intelligence and Music, and particularly in the field of Music Information Retrieval (MIR), focuses on algorithms that interpret musical signals and recognise musically relevant objects and patterns at various levels (from notes to beats and rhythm, to melodic and harmonic patterns and higher-level structure), with the goal of supporting novel applications in the digital music world. This presentation will give the audience a glimpse of what computational music perception systems can currently do with music, and what this is good for. However, we will also find that while some of these capabilities are quite impressive, they are still far from showing (or requiring) a deeper "understanding" of music. An ongoing project will be presented that aims to take AI & music research a step further, going beyond surface features and focusing on the *expressive* aspects of music and how these are communicated. We will look at recent work on computational models of expressive music performance and some examples of the state of the art, and will discuss possible applications of this research. In the process, the audience will be subjected to a little experiment which may, or may not, yield a surprising result.

GERHARD WIDMER should have become a pianist, but at age 15 decided that Beethoven was boring.

He studied computer science (and some music) in Austria and the U.S., and is currently a professor at the Johannes Kepler University Linz, where he heads the Institute of Computational Perception.

He also founded and leads the Intelligent Music Processing and Machine Learning Group at the Austrian Research Institute for Artificial Intelligence (OFAI), Vienna.

His research interests are in AI, Machine Learning, computational audio and music perception, Music Information Retrieval (MIR), and computational models of musical skills (notably: expressive music performance).

He is considered a pioneer in interdisciplinary research at the intersection of computer science, AI, and music, and has been awarded several research prizes, including Austria's highest scientific award, the "Wittgenstein Prize" (2009). He is a Fellow of the European Association for Artificial Intelligence (EurAI) and the recipient of an ERC Advanced Grant (2015) of the European Research Council. In the meantime, his attitude towards Beethoven has also changed substantially.




Media and Performance Laboratory at HKU University of the Arts Utrecht (Netherlands)


This keynote explores the artistic potential of developments in VR and AR technologies and how they can be applied in performative mixed reality experiences. 

Through examples of internationally recognized productions and by reflecting on his own recent experimental work Joris Weijdom offers artistic questions and design challenges that are intricately intertwined with mixed reality design. 

By looking at these productions with a dramaturgical eye rather than a technological one, the augmentation of performance becomes a discussion of meaning rather than of experiential effect. 

JORIS WEIJDOM is a researcher and designer of mixed-reality experiences focusing on interdisciplinary creative processes and performativity. 

He is a lecturer at the HKU University of the Arts Utrecht, where he founded the Media and Performance Laboratory (MAPLAB), which from 2012 until 2015 enabled practice-led artistic research at the intersection of performance, media and technology. 

As part of his PhD project, Joris researches creative processes in collaborative mixed reality environments (CMRE) in collaboration with Utrecht University and the University of Twente.



Tutor in the Animation programme at the Royal College of Art London (UK)


Expanded forms of animation, its use within live stage production and the role of research knowledge exchange within live projects

Using a recent project between the Royal College of Art, Peter Gabriel, Sting and Real World Productions as a case study, this presentation explores an exchange of knowledge between academic research into aesthetic development and industry to create an innovative live music production with audiovisual technology.

This presentation is an expanded version of a recent paper exploring ways of working, particularly within an educational framework. It looks at the differences in producing animation for the stage as opposed to more traditional filmic forms.

In May 2016 the Royal College of Art entered into a project to develop the concept and visual content for the Peter Gabriel and Sting US tour. The presentation will discuss the knowledge exchange value of the project and will go through the methods of its development, the creative innovation used and its aesthetic outcome. Traditional methods were used, but reinterpreted for this expanded animation project. The difference between more traditional linear, short film or music video production and the new technical challenges of creating visuals for a live show that formed part of the research for this project will be explored.

JOE KING is an award-winning artist filmmaker living and working in the UK. He is also a tutor on the Animation programme at the Royal College of Art. Joe has worked in arts education for the past fifteen years, supervising at masters and research levels. He has been a guest lecturer at a number of international colleges, and has been awarded a visiting professorship at Jilin University, China.

Though originally working in animation, he now creates multi-media works that operate in tandem with or as an adjunct to the moving image, playing freely with and between the spaces of site, screen and gallery. Joe uses a variety of techniques and animation to combine and manipulate photography, film and sound. His work moves between single-screen and multimedia gallery installations. 

Joe is also a founding member of folk-projects, often working in collaboration with co-founder and fellow artist Rosie Pedlow. Before concentrating on his own practice, Joe was a director for Slinky Pictures, directing commercial work including advertising and music videos, as well as producing visuals for live performance. Joe's work is exhibited internationally and his personal films have won several awards.




Lecturer of Education, Erasmus Coordinator, Sonic Arts Research Centre, School of Arts English and Languages, Queen’s University Belfast (UK)


This workshop will offer an introduction to electronics, sensor technology, computer programming, and audio processing for musical applications. It covers the basics of interaction design, instrument studies, and performance practices with novel interfaces.

MIGUEL ORTIZ is a Mexican composer and sound artist based in Belfast. His practice explores a vast array of performing mediums ranging from traditional acoustic instruments such as cello and trumpet, to laptop improvisation, performance with bio-instruments and hyperinstruments.

He currently works as a Lecturer at the Sonic Arts Research Centre, Queen’s University Belfast.



School of Interactive Arts and Technology, Simon Fraser University, Vancouver (CANADA)

Institute for Computer Music and Sound Technologies, Zürich University for the Arts (SWITZERLAND)


Creative Artificial Intelligence (Creative AI) applies autonomous software architectures to creative tasks. Creative tasks differ from rational problem solving in that their quality measures are ill-defined: there is, for example, no notion of an optimal musical improvisation, and no universal measure of improvisational quality. Because of these challenges, implementations of Creative AI differ from rational problem solving and the traditional search strategies of Machine Learning.

This workshop on Creative AI consists of three parts: philosophy, system design, and practice. The philosophy part covers the background of Artificial Intelligence, Generative Art, and Computational Creativity, while the system design part gives an overall view of generative systems. The last part of the workshop invites participants to put this knowledge into hands-on practice. The applications in focus cover Machine Learning, Artificial Intelligence, and Multi-agent Systems technologies for artistic applications.
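As a minimal illustration of the kind of generative system such a workshop might explore (a hypothetical sketch, not part of the workshop materials), a first-order Markov chain can "improvise" a new melody from example material; each run produces a different, equally valid output, which is precisely why quality measures for creative tasks are ill-defined:

```python
import random

# Hypothetical sketch: a first-order Markov chain trained on a short
# example melody, then used to generate a new note sequence. There is
# no optimum output here -- every walk through the transition table is
# a different, equally valid improvisation.

def train(melody):
    """Count note-to-note transitions in an example melody."""
    transitions = {}
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)
    return transitions

def improvise(transitions, start, length, seed=None):
    """Generate a new melody by randomly walking the transition table."""
    rng = random.Random(seed)
    note = start
    out = [note]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:          # dead end: restart from the opening note
            note = start
            choices = transitions[note]
        note = rng.choice(choices)
        out.append(note)
    return out

example = ["C4", "D4", "E4", "C4", "E4", "G4", "E4", "D4", "C4"]
table = train(example)
print(improvise(table, "C4", 8, seed=42))
```

The generated sequence only ever uses transitions heard in the example, yet it is not a copy of it; judging whether one run is "better" than another is exactly the kind of question the workshop's philosophy part addresses.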

KIVANÇ TATAR is “a worker in rhythms, frequencies, and intensities;” playing trumpet and electronics, composing experimental music, performing audio-visuals, and researching Creative Artificial Intelligence for Music and Interactive Media. His career aims to integrate Science, Technology, Engineering, Interactive Arts, Contemporary Arts, and Design, researching interdisciplinary topics to create transdisciplinary knowledge.

His work has been exhibited in Germany, Italy, Romania, Switzerland, Austria, Russia, Brazil, Australia, the USA (New York and Atlanta), Canada (Vancouver and Montreal), South Korea, and Turkey, including events such as the cultural program at the Rio Olympics 2016, the Ars Electronica Festival 2017 (with the theme Artificial Intelligence), CHI 2018, Mutek Montreal 2018, and Contemporary İstanbul PlugIn 2019.

His research spans Creative Artificial Intelligence, Machine Learning, Audio Synthesis, Generative Art, and Musical Composition & Performance.