Main Speakers

 

Tim Rocktäschel (University College London)


Introduction to Deep Learning for NLP

Bio: Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, all at the University of Oxford. Tim obtained his Ph.D. in the Machine Reading group at University College London under the supervision of Sebastian Riedel. He received a Google Ph.D. Fellowship in Natural Language Processing in 2017 and a Microsoft Research Ph.D. Scholarship in 2013. In Summer 2015, he worked as a Research Intern at Google DeepMind. In 2012, he obtained his Diploma (equivalent to an M.Sc.) in Computer Science from the Humboldt-Universität zu Berlin. Between 2010 and 2012, he worked as a Student Assistant and, in 2013, as a Research Assistant in the Knowledge Management in Bioinformatics group at Humboldt-Universität zu Berlin. Tim's research focuses on sample-efficient and interpretable machine learning models that learn from world, domain, and commonsense knowledge in symbolic and textual form. His work is at the intersection of deep learning, reinforcement learning, natural language processing, program synthesis, and formal logic.


Hinrich Schütze (Ludwig Maximilian University, Munich)


Neural-based Representation Learning

Bio: Hinrich Schütze is professor of computational linguistics and director of the Center for Information and Language Processing at LMU Munich in Germany. Before moving to Munich in 2013, he taught at the University of Stuttgart. He received his PhD in Computational Linguistics from Stanford University in 1995 and worked on natural language processing and information retrieval technology at Xerox PARC, at several Silicon Valley startups, and at Google from 1995 to 2004 and in 2008/9. He is a coauthor of Foundations of Statistical Natural Language Processing (with Chris Manning) and Introduction to Information Retrieval (with Chris Manning and Prabhakar Raghavan).


Kyunghyun Cho (New York University)


Latest Topics in Representation Learning for Language

Bio: Kyunghyun Cho is an assistant professor of computer science and data science at New York University and a research scientist at Facebook AI Research. He was a postdoctoral fellow at the University of Montreal until Summer 2015 under the supervision of Prof. Yoshua Bengio, and received his PhD and MSc degrees from Aalto University in early 2014 under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko and Dr. Alexander Ilin. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.


Marek Rei (University of Cambridge)


Application of Deep Learning in NLP

Bio: Dr Marek Rei is a Senior Research Associate and Affiliated Lecturer at the University of Cambridge, working on machine learning and natural language understanding. He is also a Junior Research Fellow at King's College and a member of the ALTA Institute, working on automated technologies for language teaching and assessment. His primary research interests are in the areas of distributional and compositional semantics, representation learning and multi-task learning with neural architectures. In addition, he works on machine learning applications in the medical domain, creating predictive algorithms based on ECG signals and health records. Marek received his PhD from Cambridge, with a thesis on semi-supervised learning methods for NLP. He has also worked as a researcher at SwiftKey, a technology start-up since acquired by Microsoft, designing neural network models for machine learning applications on mobile devices.


Practical Session Speakers

 

Heike Adel


Bio: Heike Adel is a research scientist at the Bosch Center for Artificial Intelligence. She works in the Natural Language Processing and Semantic Reasoning group, focusing on information extraction and knowledge base population from text. More broadly, she is interested in developing deep learning methods for natural language processing and in increasing their robustness and explainability. Before joining Bosch, she was a postdoctoral researcher in Sebastian Padó's group at the University of Stuttgart. She did her PhD under the supervision of Hinrich Schütze at LMU Munich. During her time as a PhD student, she was a recipient of a Google European Doctoral Fellowship in Natural Language Processing.


Alexander Popov


Bio: Alexander Popov is a postdoctoral researcher at the Bulgarian Academy of Sciences. He has defended a dissertation on the topic of lexical modeling for natural language processing. His work focuses on handling lexical semantics via different methods, such as knowledge graphs, vector space models and neural networks for word sense disambiguation.


Omid Rohanian


Bio: Omid Rohanian is a 3rd-year PhD student in the Research Group in Computational Linguistics (RGCL) at the University of Wolverhampton. His research interest is in the computational modelling of figurative language, with a particular focus on irony, sarcasm, and non-compositional multiword expressions. In collaboration with colleagues, he has published extensively in NLP conferences, developing neural-based sequence labelling and classification models. In his recent work, he proposed a novel deep learning model that specifically tackles the challenging issue of discontinuity in multiword expressions.


Shiva Taslimipoor


Bio: Shiva Taslimipoor is a postdoctoral research associate in the Research Group in Computational Linguistics at the University of Wolverhampton. Her research lies at the intersection of NLP and deep learning. At the moment, her focus is on incorporating transfer learning and multitask learning to improve deep learning systems in low-resource settings. She did her PhD at the same university on the automatic identification of multiword expressions, where she extensively investigated different methodologies for sequence tagging and structured labelling. With her co-authors, she devised a neural tagging system which ranked best in a shared task on the automatic identification of verbal multiword expressions.