Workshop

Poster Session & Coffee & Tea

E1 05 (Leibniz-Saal)

Abstract

Paul Breiding
Max Planck Institute for Mathematics in the Sciences, Germany

3264 Conics in a Second

by Paul Breiding, Bernd Sturmfels and Sascha Timme.

In 1848 Jakob Steiner asked ``How many conics are tangent to five conics?''
In 2019 we ask ``Which conics are tangent to your five conics?''

The answer is at juliahomotopycontinuation.org/do-it-yourself/

Petru Hlihor
Romanian Institute of Science and Technology, Romania

Reconstructions by Variational AutoEncoders as a Defense Strategy against Adversarial Examples

Adversarial examples for classification tasks are inputs to a machine learning model that are specifically designed to produce a wrong classification. They are usually obtained by maliciously perturbing a sample from a dataset in a way that is difficult even for a human to recognize. In this poster we study the use of Variational AutoEncoders to preprocess images before classification, as a strategy to defend against adversarial examples. In our preliminary experiments we show that reconstructing images with a Variational AutoEncoder significantly improves the accuracy of the classifier, even against some of the most powerful attacks in the literature. As opposed to regular autoencoders, previously proposed in the literature as a defense mechanism, the presence of a stochastic layer plays a key role in the defense, and it is not trivial for an attacker to circumvent.
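The defense pipeline described above can be sketched in a few lines: encode the (possibly adversarial) input, sample the latent code through the stochastic layer, decode, and only then classify. The following is a minimal NumPy illustration with toy dimensions and randomly initialized weights standing in for a trained model; all names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; a real defense would use trained convolutional networks.
D, H, Z = 64, 32, 8  # input, hidden, latent sizes (hypothetical)

# Stand-ins for pre-trained weights, randomly initialized for illustration.
W_enc = rng.normal(scale=0.1, size=(D, H))
W_mu = rng.normal(scale=0.1, size=(H, Z))
W_logvar = rng.normal(scale=0.1, size=(H, Z))
W_dec = rng.normal(scale=0.1, size=(Z, D))

def vae_reconstruct(x, rng):
    """Encode x, sample the latent via the reparameterization trick, decode."""
    h = np.tanh(x @ W_enc)
    mu, logvar = h @ W_mu, h @ W_logvar
    eps = rng.normal(size=mu.shape)          # the stochastic layer
    z = mu + np.exp(0.5 * logvar) * eps
    return np.tanh(z @ W_dec)                # reconstruction fed to the classifier

x_adv = rng.normal(size=D)                   # stands in for an adversarial image
r1 = vae_reconstruct(x_adv, rng)
r2 = vae_reconstruct(x_adv, rng)
# Successive reconstructions of the same input differ because of the
# stochastic layer, which is what complicates a gradient-based attack.
```

The key point of the sketch is the `eps` sample: a plain autoencoder would map `x_adv` deterministically, whereas here the attacker faces a randomized mapping.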

Pankaj Kumar
Copenhagen Business School, Denmark

Multiagent Deep Reinforcement Learning for Market Making

Market making is a high-frequency trading strategy in which an agent provides liquidity by simultaneously quoting a bid (buy) price and an ask (sell) price on an asset. Market makers reap profits in the form of the spread between the quoted buy and sell prices. Due to the complexity of inventory risk, counterparties to trades, and information asymmetry, market making algorithms remain relatively unexplored by academics. A growing body of literature, in particular on single-agent deep reinforcement learning (DRL), has studied the problems of optimal execution and market prediction. The success of such single-agent DRL methods can be credited to the use of experience replay memories, which allow Deep Q-Networks (DQNs) to be trained efficiently by sampling stored state transitions. However, the utmost care is required in multi-agent deep reinforcement learning (MA-DRL), as stored transitions can become obsolete when agents update their policies in parallel. Motivated by the above, in this talk I will introduce a novel reformulation of the MA-DRL simulation framework for market making, which allows many agents to interact reliably. Using a simple image-like reformulation of the multi-agent state, innovative multi-agent training, and agent ambiguity, a convolutional neural network is used as the Q-value function approximator to learn distributed multi-agent policies. This approach alleviates the convergence, non-stationarity, and scalability issues encountered in the literature on multi-agent systems. Moreover, the market maker agents successfully reproduce stylized facts of historical trade data in each simulation.
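The experience replay memory mentioned above is the component whose stored transitions can go stale in the multi-agent setting. A minimal sketch of such a memory (not the talk's framework, just the standard single-agent ingredient) could look like this; the eviction behavior of the bounded buffer is what makes old transitions disappear, but it does not prevent the remaining ones from becoming obsolete once other agents change their policies.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay memory, as used to train DQNs.

    Stores (state, action, reward, next_state) transitions and samples
    them uniformly, decorrelating consecutive experiences.
    """
    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)  # oldest transitions are evicted

    def push(self, state, action, reward, next_state):
        self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)

buffer = ReplayBuffer(capacity=100)
for t in range(150):
    buffer.push(t, t % 3, 0.1 * t, t + 1)    # dummy transitions
batch = buffer.sample(4)                     # uniform minibatch for a DQN update
```

After 150 pushes into a buffer of capacity 100, only the 100 most recent transitions remain; every sampled batch is drawn from those.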

Liam Solus
KTH Royal Institute of Technology, Sweden

Interventional Markov Equivalence for Mixed Graph Models

We will discuss the problem of characterizing Markov equivalence of graphical models under general interventions.
Recently, Yang et al. (2018) gave a graphical characterization of interventional Markov equivalence for DAG models that relates to the global Markov properties of DAGs. Based on this, we extend the notion of interventional Markov equivalence using the global Markov properties of loopless mixed graphs and generalize their graphical characterization to ancestral graphs. In parallel, we extend the notion of interventional Markov equivalence via modifications of factors of distributions that are Markov with respect to acyclic directed mixed graphs. We prove that these two generalizations coincide at their intersection, i.e., for directed ancestral graphs. This yields a graphical characterization of interventional Markov equivalence for causal models that incorporate latent confounders and selection variables, under assumptions on the intervention targets that are reasonable for biological applications.
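As background to these interventional generalizations, the classical observational notion for DAGs has a simple graphical test (Verma and Pearl): two DAGs are Markov equivalent iff they share the same skeleton and the same v-structures. A toy Python illustration of that baseline criterion, with DAGs encoded as hypothetical parent-set dictionaries, follows; the poster's characterizations for interventions and mixed graphs are strictly more general.

```python
from itertools import combinations

def skeleton(dag):
    """Undirected adjacencies of a DAG given as {node: set of parents}."""
    return {frozenset((p, child)) for child, parents in dag.items() for p in parents}

def v_structures(dag):
    """Unshielded colliders a -> c <- b with a, b non-adjacent."""
    skel = skeleton(dag)
    return {(a, child, b)
            for child, parents in dag.items()
            for a, b in combinations(sorted(parents), 2)
            if frozenset((a, b)) not in skel}

def markov_equivalent(d1, d2):
    """Verma-Pearl criterion: equal skeletons and equal v-structure sets."""
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)

# X -> Y -> Z and X <- Y -> Z: same skeleton, no v-structures => equivalent.
chain = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
fork  = {"Y": set(), "X": {"Y"}, "Z": {"Y"}}
# X -> Y <- Z has the v-structure (X, Y, Z), so it is not equivalent.
collider = {"X": set(), "Z": set(), "Y": {"X", "Z"}}
```

Under interventions, equivalence classes refine further: intervening on Y, for instance, distinguishes the chain from the fork even though they are observationally equivalent.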

Csongor-Huba Varady
Romanian Institute of Science and Technology, Romania

Learning Latent Representations for Audio Signals through Variational Autoencoders

In this short paper we explore generative models for audio signals, with a particular focus on signal reconstruction and on learning explainable latent representations. Similarly to the work of Engel et al., we consider generative models characterized by a WaveNet decoder, whose output is an autoregressive model conditioned on the past signal as well as on the latent representation. The main contribution of our work is an architecture based on Variational AutoEncoders, which allows us to define an approximate posterior able to explicitly capture the dependence of the latent encoding over time. Moreover, the possibility of introducing variational bounds for the training of the model could lead to disentangled representations for audio signals, and thus to latent encodings that are easier to interpret.
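The conditioning structure described above, where each sample depends both on a window of past signal and on a latent code, can be sketched in miniature. The following NumPy toy replaces the WaveNet decoder with a single tanh unit and uses hypothetical, untrained weights; it only illustrates the autoregressive-plus-latent conditioning, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes; a real model would use a trained WaveNet-style decoder.
P, Z = 4, 3                      # autoregressive context length, latent size
W_past = rng.normal(scale=0.3, size=P)
W_z = rng.normal(scale=0.3, size=Z)

def decode(z, n_samples):
    """Generate audio autoregressively: each new sample is a function of
    the last P samples and of the latent code z."""
    signal = [0.0] * P           # zero-padded initial context
    for _ in range(n_samples):
        context = np.array(signal[-P:])
        signal.append(float(np.tanh(context @ W_past + z @ W_z)))
    return np.array(signal[P:])

z = rng.normal(size=Z)           # latent encoding, e.g. drawn from the posterior
audio = decode(z, n_samples=16)
```

Given a fixed latent code the decoder here is deterministic; in the VAE setting, sampling `z` from the approximate posterior is what induces variability across reconstructions.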

Valeria Hünniger

Max-Planck-Institut für Mathematik in den Naturwissenschaften

Guido Montúfar

Max Planck Institute for Mathematics in the Sciences