
Speakers

Mark Jelasity

BIO: Mark Jelasity is currently a full professor at the University of Szeged, where he heads the Department of Algorithms and AI. Previously, he worked at several universities across Europe, in Leiden, Amsterdam, and Bologna, and he visited Cornell as a Fulbright scholar. His interests lie mainly at the intersection of decentralized self-organizing systems and machine learning, and more recently also in adversarial examples for deep neural networks.

Description: Today, nearly everything produces valuable personal data: not only computers and mobile phones, but also TV sets, smart meters, cars, sensors, and so on. At the same time, industry is increasingly sensitive to data protection due to growing public awareness and changing regulations. In our research group, we have developed the concepts and algorithms needed to build a completely decentralized and open data mining platform. These techniques enable not only the commercial exploitation of private data but also applications for the public good. Unlike federated learning, our approach works without any central component. In this talk, I will summarize some basic ideas of self-organization, the basic idea of a decentralized gossip-based machine learning algorithm, and several techniques for improving efficiency and privacy that we have developed over the past years.
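To make the gossip-based idea concrete, here is a minimal toy simulation in Python. It is an illustrative sketch, not the group's actual protocol: each node holds private data and a local model replica, repeatedly averages its model with a randomly chosen peer, and then takes a gradient step on its own data, so learning spreads through the network without a central server. All names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each node privately holds data for the same linear regression task.
w_true = np.array([2.0, -1.0, 0.5])  # weights the nodes collectively try to learn

class Node:
    def __init__(self):
        # Private local data; it never leaves the node.
        self.X = rng.normal(size=(20, 3))
        self.y = self.X @ w_true + rng.normal(scale=0.1, size=20)
        self.w = np.zeros(3)  # local model replica

    def local_step(self, lr=0.05):
        # One gradient step on the node's own data only.
        grad = self.X.T @ (self.X @ self.w - self.y) / len(self.y)
        self.w -= lr * grad

nodes = [Node() for _ in range(10)]

for _ in range(200):
    for node in nodes:
        # Gossip: exchange models with a random peer and average them,
        # so information spreads without any central server.
        others = [n for n in nodes if n is not node]
        peer = others[rng.integers(len(others))]
        avg = (node.w + peer.w) / 2.0
        node.w = avg.copy()
        peer.w = avg.copy()
        node.local_step()

print("mean model:", np.mean([n.w for n in nodes], axis=0))  # approaches w_true
```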

Magnus Sahlgren

BIO: Magnus Sahlgren, PhD in computational linguistics, is a senior expert and head of Natural Language Processing (NLP) at RISE (Research Institutes of Sweden). His research is situated at the intersection of NLP, AI, and machine learning, and focuses specifically on how computers can learn and understand language. Sahlgren is best known for his work on distributional semantics and word embeddings, and he was part of the research team that developed the Random Indexing framework for hyperdimensional computing. He has previously held positions at FOI (the Swedish Defence Research Agency), and he founded the language technology company Gavagai AB.
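As background for the Random Indexing work mentioned above, the sketch below shows the core idea in its common textbook form, not any specific RISE or Gavagai implementation: each word gets a fixed sparse ternary random index vector, and a word's context vector is the running sum of the index vectors of the words it co-occurs with in a sliding window. Dimensionality, sparsity, and window size here are illustrative.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)
DIM, NONZERO, WINDOW = 2000, 10, 2  # illustrative small-scale parameters

def index_vector():
    # Sparse ternary random vector: a few +1/-1 entries, the rest zero.
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=NONZERO, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

index = defaultdict(index_vector)              # fixed random vector per word
context = defaultdict(lambda: np.zeros(DIM))   # accumulated context vectors

def train(tokens):
    # Add each neighbour's index vector to the focus word's context vector.
    for i, word in enumerate(tokens):
        for j in range(max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)):
            if i != j:
                context[word] += index[tokens[j]]

def similarity(a, b):
    va, vb = context[a], context[b]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9)

train("the cat sat on the mat the dog sat on the rug".split())
print(similarity("cat", "dog"))  # positive: the words share context words
```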

Description: Swedish NLP: where are we and where are we going? We are at a transformative moment in NLP, with the development of large-scale language models that have unprecedented linguistic capabilities. In some specific test settings, we are approaching, or even exceeding, human performance. This development is likely to continue at an uninterrupted, or even accelerating, pace, but much of it depends on access to extensive computational resources as well as large amounts of data. Sweden has a long tradition of being at the forefront of NLP research, but the rapidly changing NLP landscape, with its increased focus on data-driven methods, presents significant challenges for smaller countries with smaller languages and limited economic resources. This talk discusses the current state of (Swedish) NLP and identifies both opportunities and challenges in the recent developments. We cover some of the current research efforts in Swedish NLP and discuss potential directions for future development.
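As a concrete taste of the data-driven Swedish language models the talk refers to, the snippet below runs a fill-mask query against a publicly released Swedish BERT model from the Hugging Face Hub (KBLab's KB/bert-base-swedish-cased). This is an illustrative example assuming the transformers library and network access; it is not tied to the speaker's own work.

```python
# Query a publicly available Swedish BERT model to fill in a masked word.
from transformers import pipeline

fill = pipeline("fill-mask", model="KB/bert-base-swedish-cased")

# "Stockholm är Sveriges [MASK]." = "Stockholm is Sweden's [MASK]."
for candidate in fill("Stockholm är Sveriges [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```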

Abhishek Vijayvargia

BIO: Abhishek Vijayvargia is a Data Scientist at Microsoft. He has worked in various domains, including computer vision, natural language processing, AutoML, and explainable AI. In the past, he worked on deploying deep learning models to small IoT devices and in-car sensor processing units; with in-car machine learning, many actions can be taken without transferring data to the cloud. He has also worked on improving face recognition with thermal cameras. He currently works on improving cyber security with machine learning models, and he has deployed many models that directly improved revenue.

Description: Explainable AI (XAI) is a set of techniques for understanding and interpreting the predictions made by ML models. Using XAI, we can produce more explainable models while maintaining a high level of learning performance, which enables users to trust and manage AI in their work. To integrate AI further into our lives, we need to understand the models: black-box models are not always welcome, and we want to know why a model generated its predictions. Such explanations can speed up the adoption of these systems. In areas such as medicine and law enforcement, XAI can be especially valuable because explanations are available for the predictions.
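To make this concrete, here is a minimal sketch of one widely used model-agnostic XAI technique, permutation feature importance, using scikit-learn. It is one example of the family of methods the talk covers, not a specific method of the speaker's; the dataset and model choices are illustrative.

```python
# Permutation feature importance: measure how much a trained "black-box"
# model's score drops when a single feature column is randomly shuffled.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The largest drops point to the features the model relies on most.
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")
```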