Archive: Hackathon Workshop on Generative Modeling

Overview

This event was part of the Isaac Newton Institute (INI) satellite programme on Diffusions in machine learning: Foundations, generative models and non-convex optimisation, and provided a platform for early-career researchers and PhD students to work on challenges with a strong emphasis on generative modeling. Two industry-inspired challenges were provided by Amazon. On the first day of the event, participants received introductory talks on the challenges and a training session on using the Cirrus High Performance Computing (HPC) service. Participants were allocated to groups to work on specific challenges. These groups gave progress updates to the participants of the INI programme during the week and presented their final results in a session livestreamed at the Alan Turing Institute.

For further information or inquiries, please contact us at generativehackathon@ed.ac.uk.

Organisers: Stefano Bruno (Lead Organiser), Dong-Young Lim, Sotirios Sabanis, Sara Wade and Ying Zhang.

Support for the workshop was provided by the Centre for Investing Innovation, CRC Press, the London Mathematical Society, and Springer.

Challenge 1: Hallucinations in Large Vision-Language Models

The aim of this industry-inspired challenge was to improve the reasoning capabilities of image + text multimodal models (e.g. GPT-4V) and to reduce both visual hallucinations (generated images that fail to adhere to the input prompt) and text hallucinations (incorrect answers to Visual Question Answering prompts). Two benchmarks suited to this task are T2I-CompBench and HallusionBench.

Challenge 2: Unlearning for Large Language Models

Because Large Language Models have an increased capacity for verbatim memorisation and reproduction of their training data, machine unlearning is a crucial research area for them, given the potential legal and privacy risks.
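As a toy illustration of what unlearning means mechanically (entirely hypothetical, and not taken from the challenge benchmarks), a common baseline is gradient ascent on the "forget" set: the loss on the targeted examples is maximised until the model stops reproducing the memorised labels, while performance on retained data is checked afterwards. The sketch below uses a logistic-regression stand-in for the model; real LLM unlearning operates on token likelihoods instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Retain data: class decided by the first feature (two separated blobs).
X0 = np.column_stack([rng.normal(-1.0, 0.3, 50), rng.normal(0.0, 0.2, 50)])
X1 = np.column_stack([rng.normal(+1.0, 0.3, 50), rng.normal(0.0, 0.2, 50)])
X_retain = np.vstack([X0, X1])
y_retain = np.concatenate([np.zeros(50), np.ones(50)])

# Forget data: an off-axis cluster the model can only fit via the second feature.
X_forget = np.column_stack([rng.normal(0.0, 0.1, 20), rng.normal(2.0, 0.1, 20)])
y_forget = np.ones(20)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the mean logistic loss with respect to the weights.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# 1) Ordinary training on retain + forget: the model "memorises" the forget set.
X_all = np.vstack([X_retain, X_forget])
y_all = np.concatenate([y_retain, y_forget])
w = np.zeros(2)
for _ in range(500):
    w -= 0.5 * grad(w, X_all, y_all)

conf_forget_before = sigmoid(X_forget @ w).mean()   # high: labels memorised

# 2) Unlearning by gradient ascent on the forget set only, stopping once the
#    model no longer reproduces the memorised labels.
for _ in range(500):
    if sigmoid(X_forget @ w).mean() < 0.05:
        break
    w += 0.2 * grad(w, X_forget, y_forget)          # ascend, i.e. raise the loss

conf_forget_after = sigmoid(X_forget @ w).mean()    # low: forget set "unlearned"
acc_retain_after = np.mean((sigmoid(X_retain @ w) > 0.5) == (y_retain == 1))
```

Because the forget cluster sits along a feature direction the retain task barely uses, confidence on the forget set collapses while retain accuracy survives; "targeted and efficient" unlearning in the challenge sense means engineering exactly this separation at LLM scale.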
In this industry-inspired challenge we explored methods for targeted and efficient unlearning, focusing on two benchmarks: TOFU and Unlearning_LLM.

Challenge 3: Towards frugal zero-shot diffusion-based image restoration

This challenge concerned the use of Diffusion Models in Bayesian inverse problems. In image processing tasks, it has been observed that a pre-trained diffusion model can be leveraged to recover a relevant solution, making Diffusion Models viable zero-shot learners that require no additional training. In this challenge, we presented the software toolkit Diffuser and showcased how imaging problems can be solved in practice through hands-on examples of running a complete inverse-problem pipeline. The goal of the challenge was to guide participants in developing robust and frugal algorithms that leverage advances in Diffusion Models to solve inverse problems with no additional training of the models. The focus was on common tasks encountered in image inpainting, deblurring, and colorization.

Challenge 4: Implementing non-convex optimization algorithms

This challenge provided an ideal platform for participants to engage in the hands-on implementation of non-convex optimization algorithms and to assess their performance across a variety of datasets and deep learning models. It was specifically tailored for participants who were beginners with deep learning frameworks such as PyTorch but possessed a fundamental knowledge of Python. Our comprehensive five-day schedule included activities such as 1) building customized optimization algorithms from scratch within the PyTorch framework, 2) gaining a deeper, more intuitive understanding of optimizer behaviors through toy examples, 3) applying modern optimizers to train deep learning models on high-dimensional datasets, and 4) constructing flexible pipelines for optimizer tuning.

Registration

We welcomed applications from different disciplines relevant to generative modeling (e.g. mathematics, computer science, statistics, engineering, and physics). We are dedicated to promoting diversity and inclusion within the research community. Our goal was to ensure a balanced representation of participants, and we actively encouraged applications from underrepresented groups within the scientific community. Registration closed at 6 pm on 31 May 2024.

Jul 08 2024 00.00 - Jul 12 2024 23.59
Edinburgh Futures Institute Building, 1 Lauriston Pl, Edinburgh EH3 9EF

Learn about the Hackathon Workshop on Generative Modeling held in July 2024 and the four Hackathon challenges presented.
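The zero-shot recipe in Challenge 3 — reuse a pre-trained score model as a prior, add a data-consistency term, and retrain nothing — can be illustrated in miniature. The sketch below is entirely hypothetical and does not use the Diffuser toolkit: it swaps the learned diffusion prior for a hand-written Gaussian smoothness prior (whose score we can write down exactly) and runs unadjusted Langevin sampling to inpaint a 1-D signal observed at every other index, comparing the result with the analytically known posterior mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hand-written Gaussian smoothness prior, a stand-in for a learned score model.
n = 20
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # first-difference operator, (n-1, n)
P = np.eye(n) + 4.0 * D.T @ D              # prior precision: x ~ N(0, P^{-1})

# Ground truth drawn from the prior; observe every other index ("inpainting").
x_true = np.linalg.cholesky(np.linalg.inv(P)) @ rng.standard_normal(n)
obs = np.arange(0, n, 2)
sigma = 0.1
y = x_true[obs] + sigma * rng.standard_normal(obs.size)
M = np.zeros((obs.size, n))
M[np.arange(obs.size), obs] = 1.0          # masking (forward) operator

def score(x):
    # Gradient of the log posterior: prior score + data-consistency term.
    return -P @ x + M.T @ (y - M @ x) / sigma**2

# Unadjusted Langevin sampling guided by the score -- no training anywhere.
eps, burn, keep = 0.005, 2000, 50000
x = np.zeros(n)
acc = np.zeros(n)
for t in range(burn + keep):
    x = x + eps * score(x) + np.sqrt(2.0 * eps) * rng.standard_normal(n)
    if t >= burn:
        acc += x
x_hat = acc / keep                          # Monte Carlo posterior mean

# Exact posterior mean for this Gaussian model, for comparison.
Q = P + M.T @ M / sigma**2
post_mean = np.linalg.solve(Q, M.T @ y / sigma**2)
max_err = np.max(np.abs(x_hat - post_mean))
```

In the challenge setting the analytic prior score is replaced by a pre-trained diffusion model evaluated across noise levels, but the structure — score plus data-consistency gradient, no retraining — is the same, which is what makes the approach frugal.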
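The first activity in Challenge 4 — building a customized optimizer from scratch — can be sketched without PyTorch at all. Below is a hypothetical, minimal heavy-ball (Polyak momentum) SGD in plain numpy, minimising a simple non-convex double-well surface; in the challenge itself the same update rule would live inside a torch.optim.Optimizer subclass.

```python
import numpy as np

def loss(p):
    # Double-well in x plus a quadratic in y: a simple non-convex test surface
    # with global minima at (+1, 0) and (-1, 0).
    x, y = p
    return (x**2 - 1.0)**2 + y**2

def grad(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

class HeavyBallSGD:
    """Polyak momentum update:  v <- mu*v - lr*g;  p <- p + v."""
    def __init__(self, lr=0.005, momentum=0.9):
        self.lr, self.mu, self.v = lr, momentum, None

    def step(self, params, g):
        if self.v is None:
            self.v = np.zeros_like(params)
        self.v = self.mu * self.v - self.lr * g   # accumulate damped velocity
        return params + self.v

p = np.array([2.0, 2.0])
opt = HeavyBallSGD(lr=0.005, momentum=0.9)
for _ in range(5000):
    p = opt.step(p, grad(p))

final_loss = loss(p)   # settles into one of the two wells
```

Keeping the update rule in a dozen visible lines is the point of the exercise: once the logic is understood here, porting it to a torch.optim.Optimizer subclass (whose step() loops over param_groups) and comparing it against built-ins such as SGD and Adam on high-dimensional datasets is mostly bookkeeping.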