We offer a pleasant work environment within a friendly, dynamic, international and young team in Vienna, one of the cities with the highest quality of life worldwide. The working language is English, and we are committed to diversity and inclusion.

Three of the open positions are associated with the projects "Talking Charts" (https://vda.cs.univie.ac.at/research/projects/project/360/) and "Interpretability and Explainability as Drivers to Democracy" (https://informatik.univie.ac.at/en/research/projects/project/354/), both funded by the Vienna Science and Technology Fund. The other three open positions are on related topics. Please see the details below. We look forward to your application!

Project "Talking Charts"

The project combines perspectives and methods from Computer Science and Science & Technology Studies to explore how charts related to climate change and COVID-19 are created and understood by both researchers and public audiences. We welcome candidates with a background in Computing (with a specialization in Human-Computer Interaction, Data Visualisation, or Data Science), Information Science, Science and Technology Studies, or related areas. Read more about the project and the job descriptions here:

* Project-specific PhD (3 years): https://tinyurl.com/35fpetkd
* Open-topic PhD (4 years): https://tinyurl.com/yhbh66m9

Please get in touch via email with Laura Koesten (laura.koesten@univie.ac.at) or Kathleen Gregory (kathleen.gregory@dans.knaw.nl) if you have any questions about the role descriptions or application process for the Talking Charts project.

Project "Interpretability and Explainability as Drivers to Democracy"

The project concerns machine learning models that are used to make decisions with significant societal impact in democratic societies.
In particular, the project aims to enable the electorate to make informed evaluations of machine learning models and their usage, through the interpretability and explainability of the models used and through communicating these models and their roles in the decision-making process in suitable forms. The underlying decision-making process is understood to involve stakeholders with different levels of expertise, and achieving the project's aims requires the development of novel machine learning models, visualization approaches, and guidelines.

* Project-specific PhD: https://tschiatschek.net/opening.html

Please get in touch via email with Sebastian Tschiatschek (sebastian.tschiatschek@univie.ac.at) if you have any questions about the role descriptions or application process for the Interpretability and Explainability project.

Our research groups are also filling the following related PhD positions:

* Toward explainable models for environmental geoscience: https://bit.ly/2UhPry2
* Toward explainability of deep-neural-network models: https://tinyurl.com/5ck4fned
* On reinforcement learning or probabilistic models: https://tinyurl.com/4ud9jewv

Please get in touch via email with Torsten Möller (torsten.moeller@univie.ac.at) with questions about these roles.

--
Anne Marie Faisst, BA
Universität Wien
Fakultät für Informatik
Forschungsgruppe Visualization and Data Analysis
Coordinator, Research Network Data Science @ Uni Vienna
Währinger Strasse 29/S6/2.04, A-1090 Wien
contact me via MS Teams